OpenAI has formally launched the GPT-5.5 Bio Bug Bounty program to strengthen safeguards against emerging biological risks.
As artificial intelligence models become more advanced, the potential for malicious actors to generate dangerous biological information increases. Advanced persistent threats (APTs) and lone attackers could potentially misuse large language models to accelerate harmful biological research.
To address this evolving threat landscape, OpenAI is actively inviting cybersecurity researchers, biosecurity experts, and AI red teamers to test the boundaries of its latest model.
The primary goal is to identify critical vulnerabilities and logic flaws before threat actors can exploit them in the wild.
The Universal Jailbreak Challenge
The core technical challenge revolves around finding a “universal jailbreak” for the GPT-5.5 model. In the context of AI security, a jailbreak is a highly specialized prompt designed to bypass the model’s built-in safety filters and ethical guardrails.
Participants must craft a single, universal prompt that forces the model to successfully answer a strict five-question biosafety challenge.
Researchers must execute this attack from a clean chat session without triggering any automated moderation warnings or backend alerts.
This requires deep expertise in prompt engineering and an understanding of how language models process complex biological queries. The testing environment for this specific bounty is strictly limited to GPT-5.5 running within Codex Desktop.
Bounty Rewards and Testing Timeline
Finding a genuine universal jailbreak is extremely difficult, and the financial payout reflects the task’s complexity.
The program operates on a strict timeline, with specific reward tiers for successful vulnerability disclosures.
Key details of the program include:
- A top prize of $25,000 goes to the first researcher who successfully answers all five biosafety questions using a single prompt.
- Smaller, discretionary awards may be granted for partial successes that still provide valuable threat intelligence.
- Applications opened on April 23, 2026, and will be accepted on a rolling basis until the deadline of June 22, 2026.
- The active testing phase begins on April 28, 2026, and concludes on July 27, 2026.
Access to the Bio Bug Bounty program is highly restricted to ensure responsible disclosure and prevent the leak of sensitive biological data.
OpenAI is sending direct invitations to a vetted list of trusted bio red teamers while simultaneously reviewing new applications submitted through its official portal.
To apply, researchers must provide their full name, organizational affiliation, and relevant technical experience in either AI security or biology.
Accepted researchers must have an active ChatGPT account to join the testing platform. Because biological threat intelligence is highly sensitive, all participants must sign a strict Non-Disclosure Agreement.
This legal agreement completely restricts the public sharing of any testing data. Covered materials include all engineered prompts, model completions, security findings, and direct communications with the OpenAI engineering team.
This bio-specific initiative operates alongside OpenAI’s broader security and threat research efforts.
Cybersecurity professionals interested in finding traditional software vulnerabilities or other AI logic flaws can still participate in the ongoing Security Bug Bounty and Safety Bug Bounty programs.
By crowdsourcing advanced threat discovery, the organization aims to build even more resilient guardrails around frontier AI systems.