2023 was the 12 months of AI hype. 2024 was the 12 months of AI experimentation. 2025 was the 12 months of AI hype correction. So, what’s going to 2026 convey? Will the bubble burst — or possibly deflate somewhat? Will AI ROI be realized?
Within the cybersecurity realm, one of many huge questions is how adversaries will use AI of their assaults. It is well-known that AI permits risk actors to craft extra lifelike phishing assaults at a larger scale than ever, create deepfakes that impersonate official staff and generate polymorphic malware that evades detection. Moreover, AI methods have vulnerabilities that dangerous actors exploit, for instance, utilizing immediate injection assaults.
Here is what some specialists predict for offensive AI in 2026:
- “An agentic AI deployment will trigger a public breach and result in worker dismissals.” Paddy Harrington, analyst at Forrester.
- “Offensive autonomous and agentic AI will emerge as a mainstream risk, with attackers unleashing absolutely automated phishing, lateral motion and exploit-chain engines that require little or no human operator engagement.” Marcus Sachs, senior vp and chief engineer on the Middle for Web Safety (CIS).
- “As attackers proceed to make use of AI and shift to agent-based assaults, the prevalence of living-off-the-land assaults will solely develop.” John Grady, analyst at Omdia, a division of Informa TechTarget.
- “AI continues to dominate the headlines and safety panorama.” Sean Atkinson, CISO at CIS.
Atkinson’s prediction is already proving true simply 9 days into the 12 months, as evidenced on this week’s featured information.
Moody’s 2026 outlook: AI threats and regulatory challenges
Moody’s 2026 cyber outlook report warned of escalating AI-driven cyberattacks, together with adaptive malware and autonomous threats, as firms more and more undertake AI with out ample safeguards.
AI has already enabled extra customized phishing and deepfake assaults, and future dangers embrace mannequin poisoning and sooner, AI-assisted hacking. Whereas AI-powered defenses are important, Moody’s cautioned that they introduce new dangers, resembling unpredictable habits, requiring robust governance.
The report additionally highlighted the contrasting regulatory approaches of the EU, the U.S. and Asia-Pacific international locations. Because the EU pursues coordinated frameworks, such because the Community and Data Safety Directive, the Trump administration has scaled again or delayed regulatory efforts. Regional harmonization would possibly progress in 2026, nonetheless, Moody’s predicted world alignment will stay difficult resulting from conflicting home priorities.
AI-driven cyberattacks push CIOs to strengthen safety measures
As AI accelerates innovation, it additionally introduces vital cyber-risks. Almost 90% of CISOs recognized AI-driven assaults as a serious risk, in response to a research from cybersecurity vendor Trellix.
Healthcare methods are notably weak, with 275 million affected person information uncovered in 2024 alone. CIOs, like these at UC San Diego Well being, are rising investments in AI-powered cybersecurity instruments whereas balancing budgets for innovation.
AI can also be fueling subtle phishing assaults, with 40% of enterprise electronic mail compromise emails now AI-generated. Consultants emphasised the significance of fundamental safety practices, resembling zero belief, safety consciousness coaching and MFA, as vital defenses towards evolving AI threats.
Learn the total story by Jen A. Miller on Cybersecurity Dive.
NIST seeks public enter on managing AI safety dangers
NIST is inviting public suggestions on approaches to managing safety dangers related to AI brokers. Via its Middle for AI Requirements and Innovation (CAISI), NIST goals to assemble insights on greatest practices, methodologies and case research to enhance the safe improvement and deployment of AI methods.
The company highlighted rising issues over poorly secured AI brokers, which might expose vital infrastructure to cyberattacks and jeopardize public security. Public enter will assist CAISI develop technical pointers and voluntary safety requirements to handle vulnerabilities, assess dangers and improve AI safety measures. Submissions are open for 60 days.
AI-powered impersonation scams to surge in 2026
A report from id vendor Nametag predicted a pointy rise in AI-driven impersonation scams focusing on enterprises, fueled by the rising accessibility of deepfake expertise. Fraudsters are more and more utilizing AI to imitate voices, pictures and movies, enabling assaults resembling hiring fraud and social engineering schemes.
Excessive-profile circumstances, resembling a $25 million rip-off involving British agency Arup, spotlight the dangers. IT, HR and finance departments are prime targets, with deepfake impersonation turning into a typical tactic. Nametag warned that agentic AI might amplify threats, and urged organizations to rethink workforce id verification to make sure the appropriate human is behind each motion.
Learn the total story by Alexei Alexis on Cybersecurity Dive.
Editor’s observe: An editor used AI instruments to assist within the era of this information temporary. Our skilled editors all the time evaluate and edit content material earlier than publishing.
Sharon Shea is govt editor of Informa TechTarget’s SearchSecurity website.









