Cybersecurity leaders should capitalize on AI mania within the enterprise to address longstanding security problems, urged Arizona State University CISO Lester Godsey.
"Executive leadership is all [in on] AI," Godsey said during a recent session at CactusCon, an annual cybersecurity conference in Mesa, Ariz. "I would encourage you to be shameless in leveraging this moment in time."
AI, with its game-changing capabilities and executive support, presents major technical and strategic opportunities for CISOs. At ASU, for example, Godsey's team is using AI to improve data classification, data loss prevention (DLP) and identity and access management (IAM). In turn, these improvements and adaptations are key to strong security and governance for the university's in-house AI platform, which supports more than 60 large language models and serves the largest student body in the U.S.
At ASU, AI for data classification, and data classification for AI security
Organizations looking to adapt their cybersecurity programs to meet new AI needs, and solve longstanding security problems in the process, might consider starting with data security, Godsey said. With some tweaking, existing data classification, DLP and IAM strategies can readily adapt to new AI security and governance use cases, he added.
ASU, for example, had an existing data security program, but, like many large organizations, it also had a decades-long struggle with data sprawl. Godsey said his team recently ran a proof-of-concept test using AI to automate the classification of unstructured data. It yielded high-fidelity results.
"The result is that we'll finally be able to leverage DLP," Godsey said. "The technology has been around for over 20 years, arguably, but we'll actually be able to use it now thanks to AI."
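To make the idea concrete, here is a minimal sketch of what AI-assisted classification of unstructured files might look like, assuming an OpenAI-compatible chat endpoint. The label taxonomy, prompt, model name and file paths are illustrative assumptions, not details of ASU's actual proof of concept.

```python
# Minimal sketch: LLM-assisted classification of unstructured files for DLP tagging.
# Assumes an OpenAI-compatible chat endpoint; labels, model and paths are illustrative only.
from pathlib import Path
from openai import OpenAI

LABELS = ["public", "internal", "confidential", "restricted"]  # hypothetical taxonomy

client = OpenAI()  # reads API key and base URL from the environment

def classify_document(path: Path) -> str:
    """Ask the model to assign one data-classification label to a document."""
    text = path.read_text(errors="ignore")[:8000]  # truncate to keep the prompt small
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"Classify the document into exactly one label: {', '.join(LABELS)}. "
                        "Reply with the label only."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in LABELS else "unclassified"  # fall back on unexpected output

if __name__ == "__main__":
    for doc in Path("unstructured_share").rglob("*.txt"):
        print(doc, "->", classify_document(doc))  # downstream DLP rules key off these tags
```

In a setup like this, the value comes from feeding the resulting labels into existing DLP policies rather than from the classifier itself.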
In turn, an optimized data security program enables ASU to properly secure and govern its AI systems, according to Godsey. By applying the principle of least privilege, for example, the security team can block both human and nonhuman users from accessing assets they don't need to perform their defined roles.
"One of my biggest fears is agentic AI by default," Godsey said, adding that an overprivileged, rogue AI agent could wreak havoc on an enterprise, for example by posting sensitive data to public channels. "Especially when AI starts doing more and more on its own, you need those guardrails in place, and you need to double- and triple-check them."
In this case, the problem is also part of the solution: ASU has created a custom cybersecurity AI agent whose sole purpose is to ensure that other AI agents operate within secure parameters. It alerts human operators if it finds other agents deviating too far from acceptable set behavior.
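The article doesn't describe how ASU's watchdog agent is built, but a minimal sketch of the general pattern, least-privilege allowlists per agent plus escalation to a human when an action falls outside them, might look like the following. The action schema, allowlists and alerting hook are assumptions for illustration.

```python
# Minimal sketch: a watchdog that checks agent actions against per-agent allowlists
# (least privilege) and escalates deviations to a human operator. Schema is hypothetical.
from dataclasses import dataclass

# Hypothetical least-privilege policy: each agent may only call the tools listed for it.
ALLOWED_ACTIONS = {
    "ticket-triage-agent": {"read_ticket", "update_ticket"},
    "report-agent": {"read_metrics", "write_internal_report"},
}

@dataclass
class AgentAction:
    agent_id: str
    tool: str
    target: str  # e.g. a channel, file or system the tool touches

def alert_human(action: AgentAction, reason: str) -> None:
    # Placeholder: in practice this might page an on-call analyst or open a SOC ticket.
    print(f"[ALERT] {action.agent_id}: {reason} ({action.tool} -> {action.target})")

def review(action: AgentAction) -> bool:
    """Return True if the action fits the agent's defined role; otherwise alert and block."""
    allowed = ALLOWED_ACTIONS.get(action.agent_id, set())
    if action.tool not in allowed:
        alert_human(action, "tool outside defined role")
        return False
    if action.target.startswith("public:"):  # e.g. a public channel; sensitive-data risk
        alert_human(action, "attempt to write to a public destination")
        return False
    return True

if __name__ == "__main__":
    review(AgentAction("report-agent", "post_message", "public:social-feed"))       # blocked and alerted
    review(AgentAction("ticket-triage-agent", "read_ticket", "internal:helpdesk"))  # allowed
```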
Godsey said his team also plans to use AI to further strengthen ASU's asset management, shadow IT discovery and API security strategies.
Alissa Irei is senior site editor of Informa TechTarget's SearchSecurity.