In fact, nobody who’s been paying any attention to the rapid proliferation of AI throughout the enterprise needs me to tell them that this is far from a typical new-product announcement.
We’re definitely in the awkward in-between phase where AI can seem like a bit of an uncontrollable animal: very powerful, somewhat unpredictable, and occasionally a little reckless.
It’s against that backdrop that we find today’s introduction of the new Agent Commander platform – a platform that promises not only to identify AI problems, but also to “roll back” AI-driven mistakes.
While that’s something we’ve taken for granted in other areas of IT for years, when it comes to AI security, most of us haven’t even seen it in the playbook yet.
Given that, it’s not hard to see why Veeam is drawing so much interest with its new platform. When AI agents are making their own decisions at light speed, you clearly can’t afford to wait hours to detect a problem and days to fix it.
That’s a real problem many in security have been seeing firsthand. What’s generating the excitement around Agent Commander – and why many CTOs and other enterprise tech leaders are taking notice – is that Veeam is promising to do all three: detect AI risk, protect AI workloads, and undo AI actions.
In an environment where most of our tools were designed only to detect and protect, that’s an intriguing idea. Veeam’s approach is to give security teams a single view across data, identity, and AI actions in real time.
Many existing tools do some of these tasks, but not all of them, or not in real time. I can almost imagine the debate playing out in CTO-level staff meetings today: “Sure, detecting AI threats is a great thing.
But what about when an AI agent has already done something it shouldn’t? What about when it has already misused sensitive data?” I’m sure many security teams have been wrestling with that question for months.
According to surveys, a high percentage of enterprises have already experienced AI-related security incidents. Current tools aren’t doing the job here. That’s why the IT teams I’ve spoken to aren’t just talking about security anymore; they’re talking about trust.
They want to trust AI agents enough to keep adopting AI for all the powerful benefits it can bring to their organizations, without having to fear that their next data breach will come from their AI systems.
While Veeam’s Agent Commander isn’t a panacea, its promise of attributing AI actions and selectively reversing them could be the start of what those teams have been looking for.
But while I think that’s a big deal, I also think it’s important to take a step back and recognize that this is yet another sign of a larger reality about AI: we’re now officially in the age of “agentic” AI, where AI agents aren’t merely responding to queries and requests but are actively taking independent actions, often linked in chains of services and data.
They’re making decisions at a speed and scale that’s hard for humans to keep up with. That’s creating brand-new risks, and it’s why tools and frameworks designed for a previous generation of static applications look so outdated today.
Veeam’s announcement today is less a new-product launch and more a declaration that AI is different. So where does that leave us? I think Veeam’s new Agent Commander is an important reminder that AI isn’t just something we’re going to have to prepare for – it’s something we’re going to have to control.
And while we’ll undoubtedly have many more questions as these autonomous AI agents continue to evolve, I think this is a great start to the conversation. And who knows? Maybe one day we’ll even have AI that can help us control itself.