Deepfake-related cybercrime is on the rise as threat actors exploit AI to deceive and defraud unsuspecting targets, including enterprise users. Deepfakes use deep learning, a class of AI that relies on neural networks, to generate synthetic image, video and audio content.
While deepfakes can be used for benign purposes, threat actors create them with the primary goal of duping targets into granting access to digital and financial assets. In 2025, 41% of security professionals reported deepfake campaigns had recently targeted executives at their organizations, according to a Ponemon Institute survey. Deloitte's Center for Financial Services also recently warned that financial losses resulting from generative AI could reach $40 billion by 2027, up from $12.3 billion in 2023.
As deepfake technology becomes both more convincing and more widely accessible, CISOs must take proactive steps to protect their organizations and end users from fraud.
3 ways CISOs can defend against deepfake phishing attacks
Even as attackers race to capitalize on deepfake technology, research suggests that enterprises' defensive capabilities are lagging. Just 12% have safeguards in place to detect and deflect deepfake voice phishing, for example, and only 17% have deployed protections against AI-driven attacks, according to a 2025 Verizon survey.
It is essential that CISOs take the following key steps to identify and repel synthetic AI attacks.
1. Practice good organizational cyber hygiene
As is so often the case, cyber hygiene fundamentals go a long way toward protecting against emerging and evolving threats, including deepfake phishing attacks.
Identity and access management. Carefully manage end users' identities. Promptly decommission those of former employees, for example, and limit users' access privileges to just the resources they need to do their jobs. A minimal deprovisioning sketch follows this list.
Data loss prevention and encryption. Ensure the appropriate policies, procedures and controls are in place to protect sensitive and high-value data.
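To make the identity hygiene item concrete, here is a minimal sketch of a scheduled deprovisioning check. The file names, column layout and disable_account function are hypothetical stand-ins; in practice the disable step would call whatever IdP or directory API the organization actually uses.

```python
import csv
from datetime import date

# Hypothetical input files: adjust paths and column names to your environment.
HR_OFFBOARDED = "hr_offboarded.csv"       # columns: username, termination_date
DIRECTORY_EXPORT = "directory_users.csv"  # columns: username, enabled

def disable_account(username: str) -> None:
    """Placeholder for the real IdP/directory API call (e.g., an admin SDK or SCIM client)."""
    print(f"[{date.today()}] would disable account: {username}")

def load_usernames(path: str, only_enabled: bool = False) -> set[str]:
    """Read a CSV export and return the set of usernames it contains."""
    with open(path, newline="") as f:
        return {
            row["username"].strip().lower()
            for row in csv.DictReader(f)
            if not only_enabled or row.get("enabled", "").lower() == "true"
        }

def main() -> None:
    offboarded = load_usernames(HR_OFFBOARDED)
    active = load_usernames(DIRECTORY_EXPORT, only_enabled=True)

    # Any account that is still enabled but belongs to a former employee is a gap.
    stale = sorted(active & offboarded)
    for username in stale:
        disable_account(username)

    print(f"Checked {len(active)} active accounts; flagged {len(stale)} for decommissioning.")

if __name__ == "__main__":
    main()
```

Run on a schedule, a check like this closes the window in which a deepfake caller could impersonate, or log in as, someone who no longer works at the company.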
2. Consider defensive AI tools
While defensive AI technology is still in its early stages, some providers are already integrating machine learning-driven deepfake detection capabilities into their tools and services. CISOs should keep an eye on available options, as they are likely to expand and improve quickly in the coming months and years.
Alternatively, enterprises with ample resources can build and train in-house AI models to assess and detect synthetic content, based on technical and behavioral baselines, patterns and anomalies.
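As a rough illustration of the in-house approach, the sketch below trains a simple real-vs-synthetic audio classifier. It assumes labeled example clips under data/real/ and data/fake/ (a hypothetical layout), and uses librosa features with a scikit-learn model purely as stand-ins for whatever feature pipeline and model an organization actually selects.

```python
# Minimal sketch: classify audio clips as real vs. synthetic using spectral features.
from pathlib import Path

import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def clip_features(path: Path) -> np.ndarray:
    """Summarize a clip as mean MFCCs, a lightweight audio fingerprint."""
    audio, sample_rate = librosa.load(path, sr=16_000, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=20)
    return mfcc.mean(axis=1)

def load_dataset(root: Path):
    """Build a feature matrix and labels from data/real/*.wav and data/fake/*.wav."""
    features, labels = [], []
    for label, folder in enumerate(["real", "fake"]):
        for clip in (root / folder).glob("*.wav"):
            features.append(clip_features(clip))
            labels.append(label)
    return np.array(features), np.array(labels)

X, y = load_dataset(Path("data"))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")
```

Production detectors rely on far richer features and models, but the structure is the same: establish a baseline from genuine content, then flag deviations that suggest synthetic media.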
3. Step up security awareness training
Even as technology evolves, the first and most important step in phishing prevention remains the same: awareness. But synthetic AI has improved at such a rapid rate that many end users are still unaware of the following:
How convincing deepfake content has become. In one high-profile deepfake phishing case, a staff member joined a video call with what appeared to be the company's CFO, plus several other employees. All were deepfake impersonations, and the scammers successfully tricked the employee into transferring $25 million to their accounts.
How threat actors use deepfakes to threaten individuals and organizations and compromise their reputations. Malicious hackers can create damaging deepfake content that appears to show company staff involved in incriminating activities. They might then try to blackmail employees into giving them access to corporate resources, blackmail the organization into paying a ransom or broadcast the fake content to undermine the company's reputation and stock price.
How criminals combine stolen data and deepfakes. Bad actors often combine stolen identity data, such as usernames and passwords, with AI-generated images and voice cloning to try to impersonate real users and circumvent MFA. They might then apply for credit, access existing business and personal accounts, open new accounts and more.
With social engineering and phishing threats evolving at the speed of AI, the threat landscape now changes too much each year to rely solely on annual cybersecurity awareness training. With this in mind, CISOs should continually disseminate information about new tactics bad actors use to manipulate unsuspecting targets, along with guidance for employees should they encounter such attacks.
CISOs should educate end users on the tell-tale signs of synthetic media, while also emphasizing that the most sophisticated deepfakes are often undetectable to humans.
Amy Larsen DeCarlo has covered the IT industry for more than 30 years as a journalist, editor and analyst. As a principal analyst at GlobalData, she covers managed security and cloud services.