As we enter the ultimate quarter of 2025, two letters of the alphabet proceed to dominate enterprise tech conversations and information: AI. Corporations are matching all that discuss with motion, with 78% of organizations now utilizing AI in at the very least one enterprise operate, in response to a worldwide survey by McKinsey & Firm.
In cybersecurity, some specialists hope defensive AI will finally give enterprises the sting over attackers. Others, nonetheless, are shedding sleep over methods AI might expose their organizations to new threats — from each inside and outside.
This week’s featured articles discover AI cybersecurity anxiousness, a troubling ChatGPT vulnerability and the draw back of AI-powered vulnerability detection. Plus, study why specialists say zero belief should evolve whether it is to efficiently meet the AI second.
AI cyber threats fear IT defenders
A September 2025 Lenovo report revealed widespread concern amongst IT defenders relating to AI-powered cyberattacks. Solely 31% of IT leaders stated they really feel considerably assured of their defensive capabilities, with a mere 10% expressing sturdy confidence.
The report highlights how AI allows assaults to evolve in opposition to protection mechanisms, probably bypassing safety platforms. Past offensive AI, which 61% cited as an rising threat, IT leaders fear about workers utilizing public AI instruments and their organizations’ fast adoption of AI brokers — described as “a brand new form of insider risk.”
ChatGPT vulnerability allows invisible e-mail theft
Researchers at Radware found a vulnerability referred to as “ShadowLeak” that allows hackers to steal emails from customers who combine ChatGPT with their e-mail accounts. The assault works by sending victims emails containing hidden HTML code — utilizing tiny or white-on-white textual content — that instructs the AI to exfiltrate information when requested to summarize emails.
For the reason that processing occurs on OpenAI’s infrastructure, the assault leaves no hint on the sufferer’s community, making it undetectable. OpenAI addressed the vulnerability in August after Radware reported it in June, although particulars of the repair stay unclear. Specialists urged that efficient safety requires layered defenses, together with AI instruments to detect malicious intent.
AI vulnerability detection might harm enterprise cybersecurity
Former U.S. cyber official Rob Joyce warned that AI-powered vulnerability detection might worsen cybersecurity somewhat than enhance it. Whereas AI programs reminiscent of XBOW can discover software program flaws sooner than people, Joyce stated that patching capabilities can’t maintain tempo, particularly for unsupported or legacy programs.
The hole between vulnerability discovery and remediation creates vital threat, probably resulting in catastrophic safety failures. Moreover, Joyce cautioned about new threats involving the exploitation of AI brokers built-in into company programs to determine priceless information for ransomware or extortion assaults.
To maintain tempo with AI-powered assaults, zero belief should evolve
Zero-trust structure, with its “by no means belief, all the time confirm” method, is essential as attackers more and more undertake AI. Whereas zero-trust rules reminiscent of community segmentation assist restrict entry and confirm identities, they need to evolve to counter AI-enhanced threats.
Attackers now use AI to extend assault velocity and create convincing deepfakes, significantly focusing on identity-based vulnerabilities by way of stolen credentials and tokens. The current Salesloft Drift breach demonstrates these evolving threats. Safety specialists have urged that zero belief should adapt by implementing stronger id verification and sustaining correct segmentation, particularly as organizations combine AI brokers with entry to delicate information.
Learn the total story by Arielle Waldman on Darkish Studying.
Editor’s notice: An editor used AI instruments to assist within the era of this information transient. Our knowledgeable editors all the time evaluate and edit content material earlier than publishing.
Alissa Irei is senior website editor of Informa TechTarget Safety.