Ninety-nine percent is not a statistic you expect to see in a security report. But that's the finding from a new survey of 500 U.S. CISOs: 99.4% of organizations experienced at least one security incident tied to their SaaS or AI ecosystem in 2025. Only three respondents reported zero incidents. Three.
The survey, conducted by Consensuswide, covered companies ranging from 500 to 10,000 employees across all major industry verticals. It asked 17 questions about security posture, tooling, incidents, and preparedness. These organizations were running an average of 13 dedicated security tools each when these incidents occurred. Financial services firms, the most security-invested sector in the survey, averaged 15.6 tools, and still experienced SaaS supply chain attacks at a rate 26% above the cross-industry average.
The Threat Has Moved
I had a chance to speak with Amir Khayat, co-founder and CEO of Vorlon, about what the data reveals. His explanation begins with how enterprise workflows have fundamentally changed, and why security monitoring hasn't kept up.
Traditional SaaS automation is deterministic: if this, then that. It breaks the moment a variable changes. AI agents work differently. They use large language models to interpret intent, handle edge cases on the fly, and select tools and APIs based on real-time goals rather than hard-coded paths. That creates a monitoring problem that security tools weren't designed for.
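The contrast can be sketched in a few lines of Python. This is purely illustrative; the function and tool names are invented, not drawn from any real product or the survey itself.

```python
# Hypothetical contrast between deterministic automation and an
# agent-style planner. All names here are illustrative assumptions.

def deterministic_workflow(ticket):
    """If-this-then-that: a fixed path that breaks when a variable changes."""
    if ticket["type"] == "password_reset":
        return ["verify_identity", "reset_password", "notify_user"]
    raise ValueError("unhandled ticket type")  # anything novel fails hard

def agent_workflow(ticket, available_tools, plan_with_llm):
    """Agent-style: a model interprets intent and picks tools at runtime.
    The call sequence varies per ticket, so there is no fixed path for a
    monitoring rule to match against."""
    steps = plan_with_llm(ticket["description"], available_tools)
    # Guardrail: only execute steps that map to known tools.
    return [s for s in steps if s in available_tools]
```

The monitoring problem falls out of the second function: because the plan is generated per request, "normal" is not a single sequence you can whitelist.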
“When behavior is deterministic, you can define normal and alert on deviation,” Khayat said. “When an agent is reasoning its way through a workflow, establishing a behavioral baseline becomes a fundamentally different problem.”
Most enterprise security architecture was built around what Khayat calls the front door: user logins, credential validation, permission audits, and network perimeter controls. That covered two distinct entrances: human users coming through browsers, and service-to-service APIs at the infrastructure level. Tools like CASBs, WAFs, and cloud security posture management were built for those patterns. The behavior was predictable enough to define normal and detect deviation.
The engine room is a different story entirely. An AI agent resolving a routine IT ticket might autonomously touch identity systems, permissions, and configurations across Okta, Slack, GitHub, DocuSign, and payroll platforms, all in minutes, with no human involved. Each system logs its own slice. No one sees the full picture. The agent isn't following a known pattern because it's deciding the pattern as it goes. That doesn't look like a suspicious login. It doesn't trigger a configuration alert.
Asking the Wrong Questions
The tools most enterprises are running were built to answer specific questions: what are the configurations, who has what permissions, is anything misconfigured? Those are useful questions. They're just not the right questions when an AI agent is moving data through a legitimate OAuth-authorized integration.
The questions that matter in that scenario are: what is this agent actually doing, what data is it touching, and is that behavior consistent with what it was authorized to do? As Khayat put it: “You can have 15 of them running and still be blind to that activity.”
When CISOs were asked to rate their tools across 11 specific capability limitations, between 83% and 87% of organizations reported some level of limitation on every single one. The range spans only four percentage points across all 11. That's not evidence that some vendors are outperforming others; it's evidence that the entire category was built around the same assumptions, and those assumptions don't hold for the agentic layer.
Confidence Versus What Actually Happened
Nearly 90% of CISOs surveyed claimed strong or comprehensive OAuth token governance. But 27.4% were breached through compromised OAuth tokens or API keys that same year. About 79% claimed comprehensive, real-time data flow mapping across SaaS and AI. But 86.8% said they can't actually see what data AI tools are exchanging with SaaS applications. Those numbers cannot simultaneously be true.
Khayat traces that back to the difference between configuration-layer governance and runtime governance. Most organizations know which tokens exist, can audit permissions, and can revoke tokens manually. What they don't have is visibility into whether active tokens are being used consistently with their intended scope, or whether a token's behavior has drifted. Knowing a token exists isn't the same as knowing what it's doing right now.
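A minimal sketch of that distinction, with made-up token and event shapes: configuration-layer governance answers "does this token exist, and what is it scoped to?", while runtime governance asks "do its observed calls actually stay inside that scope?"

```python
# Illustrative only: token and call records are assumed shapes,
# not a real vendor's data model.

def audit_inventory(tokens):
    """Configuration layer: list tokens and their granted scopes.
    This is what most surveyed organizations already have."""
    return {t["id"]: t["scopes"] for t in tokens}

def detect_scope_drift(token, observed_calls):
    """Runtime layer: flag API calls that fall outside the token's
    granted scope -- the drift the survey says most tools can't see."""
    allowed = set(token["scopes"])
    return [call for call in observed_calls if call["scope"] not in allowed]
```

The first function can be run from an admin console; the second requires a stream of live API activity, which is exactly the visibility gap the 86.8% figure points at.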
ITDR platforms that track non-human identity activity run into the same wall: they typically stop at the authentication layer. They can tell you an agent is logged in. What they can't tell you is what that agent did with data once it was inside: what it queried, what it moved, where it sent it, and whether any of that was within scope. 83.4% of CISOs said distinguishing between human and non-human behavior is a current limitation of their tools. That number should be part of every conversation about enterprise AI security right now.
More Budget, Same Architecture
More than 86% of organizations plan to increase SaaS security spending in 2026, and 84% plan to increase AI security spending. But budget directed at the same tool categories will produce the same results. The 99.4% breach rate happened at 13 tools on average. Adding a 14th tool that monitors the front door won't change anything in the engine room.
Khayat's argument is that the layer itself needs to change: from configuration auditing to runtime monitoring. Behavioral baselines built around data interaction rather than login patterns. Real-time token governance tied to actual usage, not just inventory. And the ability to reconstruct a forensic timeline of agent activity across every connected system after something goes wrong. When a supply chain attack executes through a SaaS integration, the blast radius extends to every system the token was authorized to access. Without that reconstruction capability, scoping remediation and meeting regulatory disclosure timelines become harder than they should be.
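The reconstruction step can be illustrated with a toy example. Since each system logs only its own slice, the core operation is merging fragmented logs into one per-agent timeline; the field names below are invented for illustration.

```python
# Hedged sketch: merge each system's partial log into a single
# chronological record for one agent. Log schema is an assumption.
from itertools import chain

def reconstruct_timeline(agent_id, per_system_logs):
    """Combine per-system log slices into one ordered timeline
    of everything a given agent touched."""
    events = chain.from_iterable(per_system_logs.values())
    mine = [e for e in events if e["agent"] == agent_id]
    return sorted(mine, key=lambda e: e["ts"])

def blast_radius(timeline):
    """Every system the agent's credentials actually reached --
    the scope an incident responder has to remediate."""
    return sorted({e["system"] for e in timeline})
```

In practice the hard part is not the merge but collecting runtime activity at all; without it, there is nothing to reconstruct when the disclosure clock starts.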