OpenAI has announced the launch of an "agentic security researcher" that is powered by its GPT-5 large language model (LLM) and is programmed to emulate a human expert capable of scanning, understanding, and patching code.
The artificial intelligence (AI) company said the autonomous agent, called Aardvark, is designed to help developers and security teams flag and fix security vulnerabilities at scale. It is currently available in private beta.
"Aardvark continuously analyzes source code repositories to identify vulnerabilities, assess exploitability, prioritize severity, and propose targeted patches," OpenAI noted.
It works by embedding itself into the software development pipeline, monitoring commits and changes to codebases, detecting security issues and how they might be exploited, and proposing fixes to address them using LLM-based reasoning and tool use.
Powering the agent is GPT-5, which OpenAI launched in August 2025. The company describes it as a "smart, efficient model" that features deeper reasoning capabilities, courtesy of GPT-5 thinking, and a "real-time router" to decide the right model to use based on conversation type, complexity, and user intent.
Aardvark, OpenAI added, analyzes a project's codebase to produce a threat model that it believes best represents the project's security objectives and design. With this contextual foundation, the agent then scans the repository's history to identify existing issues, as well as detect new ones by scrutinizing incoming changes.
Once a potential security defect is found, Aardvark attempts to trigger it in an isolated, sandboxed environment to confirm its exploitability, and leverages OpenAI Codex, the company's coding agent, to generate a patch that can then be reviewed by a human analyst.
OpenAI said it has been running the agent across its internal codebases and those of some external alpha partners, and that it has helped identify at least 10 CVEs in open-source projects.
The AI startup is far from the only company to trial AI agents for automated vulnerability discovery and patching. Earlier this month, Google announced CodeMender, which it said detects, patches, and rewrites vulnerable code to prevent future exploits. The tech giant also noted that it intends to work with maintainers of critical open-source projects to integrate CodeMender-generated patches and help keep those projects secure.
Seen in that light, Aardvark, CodeMender, and XBOW are being positioned as tools for continuous code analysis, exploit validation, and patch generation. Aardvark's launch also comes close on the heels of OpenAI's release of the gpt-oss-safeguard models, which are fine-tuned for safety classification tasks.
"Aardvark represents a new defender-first model: an agentic security researcher that partners with teams by delivering continuous protection as code evolves," OpenAI said. "By catching vulnerabilities early, validating real-world exploitability, and offering clear fixes, Aardvark can strengthen security without slowing innovation. We believe in expanding access to security expertise."