OpenAI has launched GPT-5.4-Cyber, a brand-new version of its flagship AI. The launch comes just one week after Anthropic rolled out its new AI model, Claude Mythos. The new model is a variant of the main GPT-5.4 system, specifically optimized for defensive cybersecurity use cases.
According to OpenAI's announcement, the goal is to provide better tools for network and system defenders "responsible for keeping systems, data, and users safe," so that they can "find and fix problems faster."
Scaling the Defense Program
Alongside the new model, OpenAI is expanding its Trusted Access for Cyber (TAC) program, which first launched in February 2026. The program is now available to thousands of authenticated individual defenders and hundreds of teams responsible for securing critical infrastructure. By giving these professionals more advanced capabilities, the company wants to help them stay ahead of threat actors who are also experimenting with AI.
An integral part of this strategy is the Codex Security tool, which moved into research preview earlier in 2026. OpenAI claims the system has helped identify and patch over 3,000 critical and high-severity vulnerabilities. It does so by monitoring codebases and suggesting fixes before they can be exploited in a cyberattack.
New Technical Capabilities and Access
The GPT-5.4-Cyber model introduces a much-discussed new feature called binary reverse engineering. It is designed specifically to help security experts analyze compiled software for malware and vulnerabilities, even when they do not have access to the source code. The feature had been under development from GPT-5.2 through GPT-5.3-Codex before its official release now.
"Customers in the highest tiers will get access to GPT-5.4-Cyber, a model purposely fine-tuned for more cyber capabilities and with fewer capability restrictions. It is a version of GPT-5.4 which lowers the refusal boundary for legitimate cybersecurity work and enables new capabilities for advanced defensive workflows, including binary reverse engineering capabilities that let security professionals analyze compiled software for malware potential, vulnerabilities, and security robustness without needing access to its source code," the company explained in a detailed announcement post.
Because GPT-5.4-Cyber is more permissive for security tasks, OpenAI is not allowing unrestricted access: to use the most advanced features, users must first verify their identity. The company uses an authentication process to ensure its software is used by legitimate security professionals and not by hackers or espionage actors.
Individual defenders need to sign up and verify themselves at chatgpt.com/cyber, while enterprises can request access through their official OpenAI representatives. OpenAI notes that while vulnerabilities in digital systems have existed for years, these new tools can help legitimate actors defend public services and critical infrastructure more effectively.
OpenAI plans to keep updating these defensive models throughout 2026. The company believes that as AI capabilities grow, the tools used for system defense must improve as well, strengthening system resilience and keeping digital environments secure.
Industry Expert Reactions
Several industry experts shared their views on the announcement with Hackread.com, noting both the benefits and the remaining hurdles for the sector.
Marcus Fowler, CEO of Darktrace Federal, called the move a positive step but warned about the realities of fixing bugs. He stated, "Most organisations are still constrained by the realities of remediation once an issue is discovered: patch development, testing, deployment, uptime requirements, and resource limitations. Faster or deeper analysis doesn't automatically translate to faster or more effective risk reduction."
Ronald Lewis from Black Duck highlighted the differing approaches of the two tech giants. He noted, "OpenAI's TAC framework reflects a more conservative, tool-centric risk posture. It treats advanced cyber capabilities as regulated instruments, suitable for controlled deployment within professional workflows." This stands in contrast to Anthropic's approach, which focuses more on how the model behaves rather than who is allowed to use it.
Image by BoliviaInteligente on Unsplash