Meta on Tuesday introduced LlamaFirewall, an open-source framework designed to safe synthetic intelligence (AI) methods in opposition to rising cyber dangers equivalent to immediate injection, jailbreaks, and insecure code, amongst others.
The framework, the corporate mentioned, incorporates three guardrails, together with PromptGuard 2, Agent Alignment Checks, and CodeShield.
PromptGuard 2 is designed to detect direct jailbreak and immediate injection makes an attempt in real-time, whereas Agent Alignment Checks is able to inspecting agent reasoning for potential aim hijacking and oblique immediate injection situations.
CodeShield refers to a web based static evaluation engine that seeks to stop the technology of insecure or harmful code by AI brokers.
“LlamaFirewall is constructed to function a versatile, real-time guardrail framework for securing LLM-powered functions,” the corporate mentioned in a GitHub description of the mission.
“Its structure is modular, enabling safety groups and builders to compose layered defenses that span from uncooked enter ingestion to ultimate output actions – throughout easy chat fashions and complicated autonomous brokers.”
Alongside LlamaFirewall, Meta has made accessible up to date variations of LlamaGuard and CyberSecEval to higher detect varied frequent forms of violating content material and measure the defensive cybersecurity capabilities of AI methods, respectively.
CyberSecEval 4 additionally features a new benchmark known as AutoPatchBench, which is engineered to guage the flexibility of a big language mannequin (LLM) agent to routinely restore a variety of C/C++ vulnerabilities recognized by way of fuzzing, an strategy often known as AI-powered patching.
“AutoPatchBench gives a standardized analysis framework for assessing the effectiveness of AI-assisted vulnerability restore instruments,” the corporate mentioned. “This benchmark goals to facilitate a complete understanding of the capabilities and limitations of assorted AI-driven approaches to repairing fuzzing-found bugs.”
Lastly, Meta has launched a brand new program dubbed Llama for Defenders to assist accomplice organizations and AI builders entry open, early-access, and closed AI options to deal with particular safety challenges, equivalent to detecting AI-generated content material utilized in scams, fraud, and phishing assaults.
The bulletins come as WhatsApp previewed a brand new know-how known as Personal Processing to permit customers to harness AI options with out compromising their privateness by offloading the requests to a safe, confidential atmosphere.
“We’re working with the safety neighborhood to audit and enhance our structure and can proceed to construct and strengthen Personal Processing within the open, in collaboration with researchers, earlier than we launch it in product,” Meta mentioned.