AI promises to radically reshape companies and governments, and its tantalizing potential is driving enormous investment activity. Alphabet, Amazon, Meta and Microsoft committed to spending more than $300 billion combined in 2025 on AI infrastructure and development, a 46% increase over the previous year. Many more organizations across industries are also investing heavily in AI.
Enterprises aren't the only ones looking to AI for their next revenue opportunity, however. Even as businesses race to develop proprietary AI systems, threat actors are already finding ways to steal them and the sensitive data they process. Research suggests a lack of preparedness on the defensive side. A 2024 survey of 150 IT professionals published by AI security vendor HiddenLayer found that while 97% said their organizations are prioritizing AI security, just 20% are planning and testing for model theft.
What AI model theft is and why it matters
An AI model is computing software trained on a data set to recognize relationships and patterns among new inputs and assess that information to draw conclusions or take action. As foundational components of AI systems, AI models use algorithms to make decisions and set tasks in motion without human instruction.
Because proprietary AI models are expensive and time-consuming to create and train, one of the most serious threats organizations face is theft of the models themselves. AI model theft is the unsanctioned access, duplication or reverse-engineering of these programs. If threat actors can capture a model's parameters and architecture, they can both deploy a copy of the original model for their own use and extract valuable data that was used to train the model.
The potential fallout from AI model theft is significant. Consider the following scenarios:
- Intellectual property loss. Proprietary AI models and the information they process are highly valuable intellectual property. Losing an AI model to theft could compromise an enterprise's competitive standing and jeopardize its long-term revenue outlook.
- Sensitive data loss. Cybercriminals could gain access to any sensitive or confidential data used to train a stolen model and, in turn, use that information to breach other assets in the enterprise. Data theft can result in financial losses, damaged customer trust and regulatory fines.
- Malicious content creation. Bad actors could use a stolen AI model to create malicious content, such as deepfakes, malware and phishing schemes.
- Reputational damage. An organization that fails to protect its AI systems and sensitive data faces the potential for serious and long-lasting reputational damage.
AI model theft attack types
The terms AI model theft and model extraction are often used interchangeably. In model extraction, malicious hackers use query-based attacks to systematically interrogate an AI system with prompts designed to tease out information about the model's architecture and parameters. If successful, model extraction attacks can create a shadow model by reverse-engineering the original. A model inversion attack is a related type of query-based attack that specifically aims to obtain the data an organization used to train its proprietary AI model.
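To make the mechanics concrete, the following is a minimal sketch of a query-based extraction attack. The "victim" here is a locally simulated scikit-learn classifier standing in for a remote prediction API; the model choices, query counts and the query_victim helper are illustrative assumptions, not a real attack tool.

```python
# Minimal sketch of query-based model extraction, for illustration only.
# A local classifier stands in for a remote prediction API; in a real
# attack, the attacker sees only the API's outputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the proprietary model behind a prediction endpoint.
X_private, y_private = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X_private, y_private)

def query_victim(inputs: np.ndarray) -> np.ndarray:
    """All the attacker can do: send inputs, receive predicted labels."""
    return victim.predict(inputs)

# The attacker generates synthetic queries and records the victim's answers...
rng = np.random.default_rng(seed=1)
X_queries = rng.normal(size=(5000, 10))
y_stolen = query_victim(X_queries)

# ...then trains a surrogate ("shadow") model on the stolen input/output pairs.
shadow = DecisionTreeClassifier().fit(X_queries, y_stolen)

# Agreement between shadow and victim on fresh inputs approximates extraction success.
X_test = rng.normal(size=(1000, 10))
agreement = (shadow.predict(X_test) == query_victim(X_test)).mean()
print(f"Shadow model agrees with victim on {agreement:.1%} of queries")
```

The more queries the attacker can issue unchallenged, the closer the shadow model converges on the original, which is why query monitoring and rate limiting (discussed below) are core defenses.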
A secondary type of AI model theft attack, known as model republishing, involves malicious hackers making a direct copy of a publicly released or stolen AI model without permission. They might retrain it, in some cases to behave maliciously, to better suit their needs.
In their quest to steal an AI model, cybercriminals might also use techniques such as side-channel attacks, which monitor system activity, including execution time, power consumption and sound waves, to better understand an AI system's operations.
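A toy illustration of the timing signal a side-channel attacker might measure appears below. The model_inference function is a hypothetical stand-in whose runtime varies with its input; the point is only that latency differences across crafted inputs can leak information about a system's internals.

```python
# Toy illustration of a timing side channel: inference latency that varies
# with input characteristics can leak hints about a model's internals.
# The "model" below is a hypothetical stand-in, not a real target.
import statistics
import time

def model_inference(input_length: int) -> None:
    # Stand-in for a model whose runtime depends on the input it receives.
    total = 0
    for i in range(input_length * 1000):
        total += i * i

def measure_latency(input_length: int, trials: int = 50) -> float:
    """Return the median latency over repeated trials to reduce noise."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        model_inference(input_length)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Differences in median latency across crafted inputs are the side channel.
for length in (10, 20, 40):
    print(f"input length {length}: {measure_latency(length) * 1e3:.2f} ms")
```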
Finally, classic cyberthreats, such as malicious insiders and exploitation of misconfigurations or unpatched software, can indirectly expose AI models to threat actors.
AI model theft prevention and mitigation
To prevent and mitigate AI model theft, OWASP recommends implementing the following security mechanisms:
- Access control. Put stringent access control measures in place, such as MFA.
- Backups. Back up the model, including its code and training data, in case it is stolen.
- Encryption. Encrypt the AI model's code, training data and confidential information.
- Legal protection. Consider seeking patents or other formal intellectual property protections for AI models, which provide clear legal recourse in the case of theft.
- Model obfuscation. Obfuscate the model's code to make it difficult for malicious hackers to reverse-engineer it using query-based attacks.
- Monitoring. Monitor and audit the model's activity to identify potential breach attempts before a full-fledged theft occurs, as shown in the sketch after this list.
- Watermarks. Watermark AI model code and training data to maximize the odds of tracking down thieves.
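As a starting point for the monitoring item above, the following is a minimal sketch of per-client query monitoring for an inference API. The QueryMonitor class, the sliding-window policy and the thresholds are illustrative assumptions, not a production detection system; real deployments would also inspect query content and feed alerts into existing SIEM tooling.

```python
# Minimal sketch of query monitoring for an inference API, assuming a simple
# per-client sliding-window threshold. Names and thresholds are illustrative.
import time
from collections import defaultdict, deque

QUERY_WINDOW_SECONDS = 60
MAX_QUERIES_PER_WINDOW = 500  # Hypothetical threshold; tune per workload.

class QueryMonitor:
    """Tracks per-client query rates and flags extraction-style bursts."""

    def __init__(self) -> None:
        self._history: dict[str, deque] = defaultdict(deque)

    def record_query(self, client_id: str) -> bool:
        """Record one query; return True if the client should be flagged."""
        now = time.monotonic()
        window = self._history[client_id]
        window.append(now)
        # Drop timestamps that have aged out of the sliding window.
        while window and now - window[0] > QUERY_WINDOW_SECONDS:
            window.popleft()
        return len(window) > MAX_QUERIES_PER_WINDOW

# Example usage inside an API request handler.
monitor = QueryMonitor()
if monitor.record_query("client-42"):
    print("High query volume: possible model extraction attempt")
```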
Amy Larsen DeCarlo has covered the IT industry for more than 30 years, as a journalist, editor and analyst. As a principal analyst at GlobalData, she covers managed security and cloud services.