In the summer of 2025, a young tech professional named Trevor Roth* landed a remote job at cybersecurity vendor Exabeam.
Roth had aced his technical interview and test with flying colors. He also passed his video interview — though the hiring team felt he might have leaned on generative AI tools for real-time assistance — and Exabeam extended an offer. After the standard pre-employment clearance process, including a background check and I-9 validation, he received his laptop from IT and immediately got to work.
There was just one problem. “Trevor Roth” was actually a malicious foreign actor from North Korea, using a stolen identity and forged documents. And he was now inside Exabeam’s own network.
Malicious foreign actors from the Democratic People’s Republic of Korea, or DPRK, represent a pervasive and escalating threat to Fortune 500 companies. The U.S. Department of the Treasury estimates thousands are on American companies’ payrolls and have access to their corporate systems. North Korean operatives’ goals are twofold: first, to earn money for their country’s authoritarian regime, and second, to enable malicious intrusions. In recent cases, American employers have been victims of cryptocurrency theft, sensitive data theft and data extortion at the hands of malicious insiders from the DPRK.
Complicating detection efforts is the fact that such foreign threat actors often aim to keep their jobs for months, if not years, motivating them to keep their heads down. “Typically, you’re going to see these low-and-slow types of attacks, living off the land, stuff that’s not super obvious,” said Exabeam Vice President of AI and Security Research Steve Povolny, during a presentation at RSAC 2026. “You might see behaviors that fly under the radar, until they don’t.”
Unfortunately for Exabeam’s new hire, his first day of employment was also his last — thanks in part to agentic AI.
To catch a malicious foreign threat actor
The first time “Trevor Roth” signed into his Exabeam corporate account, the SOC’s threat intelligence feed flagged his username as high risk, noting that it had been associated with North Korean threat actor activity. Based on that information, incident responders quietly accessed Roth’s laptop and isolated it from the rest of the network.
Initially, the incident response team was open to the possibility that the threat intelligence was wrong, said CISO Kevin Kirkwood, who presented alongside Povolny at RSAC. “At first, we ascribed positive intent. It’s a brand-new user, and maybe we just got the wrong guy,” he added.
At the same time, the SIEM began generating scattered alerts on Roth’s activity, which included the following:
Downloaded files from a malicious Zoom site.
Attempted to connect to a third-party VPN.
Installed Jump Desktop software.
Loaded a streaming service.
Taken individually and out of context — and without the heads-up from the threat intelligence feed — each alert could have amounted to little more than noise, according to Kirkwood. That’s when AI entered the chat.
Exabeam Nova, the organization’s investigative AI agent in the SOC, autonomously collected Roth’s scattered user and entity behavior analytics (UEBA) data and evaluated it in the context of his role and new-hire status. Deciding a full investigation was warranted, Nova then analyzed the user’s behavior and likely intent and presented human operators with its conclusion:
“The pattern of actions aligns with the ‘Malicious Software’ threat vector, which is a precursor to a compromised insider scenario.”
Finally, the AI assistant suggested SOC analysts take the following next steps:
Isolate the affected host to prevent further compromise or lateral movement.
Initiate a full forensic analysis of the affected host to identify the initial infection vector and full scope of compromise.
Review the user’s activity, including recent emails and browser history, for potential phishing attempts or unauthorized software downloads that could have led to the malware execution.
Check for persistence mechanisms, including scheduled tasks and modified registry keys.
Analyze network traffic for connections made by the affected host to suspicious external IPs or domains.
Update endpoint protection, ensuring endpoint detection and response and antivirus software are up to date, and perform a full scan on the affected machine and other potentially vulnerable systems.
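One of the recommended steps, checking network traffic against suspicious external IPs or domains, can be sketched in a few lines. This is a minimal, hypothetical illustration: the indicator values, log entries and field names below are invented for the example and do not come from the incident.

```python
# Hypothetical sketch: compare a host's outbound connections against a
# small threat intelligence blocklist. All values here are fabricated;
# 203.0.113.50 is from the reserved documentation IP range.

SUSPICIOUS_INDICATORS = {
    "203.0.113.50",           # stand-in for a command-and-control IP
    "updates.example-c2.net", # stand-in for a malicious domain
}

def flag_connections(connections: list[dict]) -> list[dict]:
    """Return only the connections whose destination matches an indicator."""
    return [c for c in connections if c["dest"] in SUSPICIOUS_INDICATORS]

# Example connection log for the affected host.
host_log = [
    {"dest": "login.microsoftonline.com", "port": 443},
    {"dest": "203.0.113.50", "port": 8443},
    {"dest": "updates.example-c2.net", "port": 443},
]

for hit in flag_connections(host_log):
    print(f"suspicious connection to {hit['dest']}:{hit['port']}")
```

In practice this matching happens inside the SIEM at scale, enriched by feeds that update continuously; the sketch only shows the core set-membership check.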
An investigation that Kirkwood said would have taken SOC analysts three to four hours took the AI agent seconds.
“This is really where the combination of traditional UEBA and modern AI capabilities becomes really, really powerful — being able to take all that scattered, [seemingly] unrelated, nonsuspicious noise and turn it into signals,” Povolny added. “The AI that we had deployed internally caught this very, very quickly.”
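The idea of turning individually weak alerts into a strong combined signal can be illustrated with a toy risk-scoring sketch. The event names, weights, context multipliers and threshold below are illustrative assumptions, not Exabeam Nova’s actual model.

```python
# Toy UEBA-style correlation: individually weak alerts are weighted and
# combined into one composite risk score, then scaled by user context.
# All names, weights and the threshold are invented for illustration.

ALERT_WEIGHTS = {
    "download_from_flagged_domain": 30,
    "third_party_vpn_attempt": 20,
    "remote_desktop_install": 25,
    "streaming_service_load": 5,
}

# The same behavior is riskier for a brand-new hire whose username
# matches a threat intelligence feed.
CONTEXT_MULTIPLIERS = {
    "new_hire": 1.5,
    "threat_intel_match": 2.0,
}

INVESTIGATE_THRESHOLD = 100

def composite_risk(alerts: list[str], context: list[str]) -> float:
    """Sum per-alert weights, then scale by each applicable context factor."""
    score = float(sum(ALERT_WEIGHTS.get(a, 0) for a in alerts))
    for factor in context:
        score *= CONTEXT_MULTIPLIERS.get(factor, 1.0)
    return score

alerts = [
    "download_from_flagged_domain",
    "third_party_vpn_attempt",
    "remote_desktop_install",
    "streaming_service_load",
]
score = composite_risk(alerts, ["new_hire", "threat_intel_match"])
print(score >= INVESTIGATE_THRESHOLD)  # combined, the noise crosses the line
```

No single alert here clears the threshold on its own; only the correlated set, weighted by context, does — which is the point Povolny was making.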
After quietly isolating the DPRK threat actor’s machine, Kirkwood and his incident response team spent the next five hours observing his behavior, which included installing command-and-control software and attempting to exfiltrate company data.
“It was a fun five hours,” Kirkwood said. “It was kind of like sitting back and watching the prize fights. You’re drinking beer and eating peanuts and watching the blows land.”
When the malicious foreign actor finally realized he was being watched, he started trying to delete his temporary files. That’s when Kirkwood called time, and the incident response team bricked the machine. “It was a big piece of metal at that point — nothing more,” he said.
Next, the Exabeam team sent the indicators of compromise they’d collected to the FBI, along with the address in Austin where the threat actor had asked the company to ship his laptop.
“About a week after that, we saw that the FBI had shut down a laptop farm in the Austin area,” Kirkwood said.
How to mitigate the AI-enabled malicious foreign actor threat
North Korean IT workers began infiltrating American companies in large numbers in 2020, during the remote work boom. Now, AI is making an already bad problem worse. According to researchers at CrowdStrike, DPRK-affiliated adversary group Famous Chollima infiltrated more than 320 companies in 2025 — a 220% year-over-year increase. Researchers attributed the group’s recent success to its use of GenAI throughout the hiring and employment processes.
With AI, malicious actors can easily forge official documents and cheat on technical exams. Deepfake and voice cloning technology lets them impersonate others in real time. And according to Kirkwood and Povolny, many job candidates — North Korean and otherwise — now use AI-powered interview copilots to optimize their answers during remote job interviews. Many such tools are designed to be invisible to third parties when users share their screens, making detection difficult.
To vet for unsanctioned AI use and potential malicious foreign actor activity during video interviews, the Exabeam executives suggested the following tactics:
Intentionally under-specify problems to observe candidates’ clarification skills.
Ask candidates to share personal experiences that illustrate how they make decisions.
Change technical problems mid-answer to test candidates’ adaptability.
Introduce off-topic or unexpected prompts — e.g., how would you build a bridge? — to see if the candidate responds with human confusion or AI confidence.
Ask job candidates to use external webcams that show their workspaces and monitors, rather than share their screens.
Kirkwood and Povolny also urged CISOs to put all new hires on a SOC watchlist for enhanced monitoring, ideally with assistance from agentic AI.
“If you have 500 or 1,000 new employees, you need to have agents that are capable of understanding and prioritizing their behaviors, driving a cherry-picked handful to your human analysts, who remain in the loop,” Povolny said. “Those human analysts can then double-click on that employee and dig deeper to see if it’s a threat.”
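The triage pattern Povolny describes — an agent scoring hundreds of watchlisted new hires and surfacing only a handful for human review — reduces to a simple top-N selection. The employee IDs and anomaly scores below are invented for illustration.

```python
# Hedged sketch of watchlist triage: score every new hire's recent
# anomalies, then drive only the highest-scoring handful to human
# analysts. IDs and scores are fabricated examples.
import heapq

def top_for_review(anomaly_scores: dict[str, float], handful: int = 3) -> list[str]:
    """Return the `handful` highest-scoring employees, most anomalous first."""
    return heapq.nlargest(handful, anomaly_scores, key=anomaly_scores.get)

# Hypothetical daily scores for watchlisted new hires.
new_hires = {
    "employee_001": 2.1,
    "employee_002": 0.3,
    "employee_003": 9.7,  # e.g., VPN attempt plus remote desktop install
    "employee_004": 1.4,
    "employee_005": 6.2,
}

print(top_for_review(new_hires))  # analysts "double-click" on these few
```

The humans stay in the loop: the agent only ranks and filters, and the analysts decide whether any surfaced employee is actually a threat.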
*Editor’s note: SearchSecurity has changed the name that the threat actor fraudulently used to protect a potential victim of identity theft.
Alissa Irei is senior site editor of Informa TechTarget Security.