The rapid proliferation of advanced artificial intelligence (AI) systems has forever changed the way we shop, bank, drive, search the internet, create and consume content, and perform various other tasks. While chatbots and algorithms enhance user experiences and are exceptionally adaptable to various applications, they have also opened our personal and corporate lives to new – and darker – social engineering attacks.
The 2025 Identity Fraud Report states that a deepfake attempt occurs every five minutes on average. The World Economic Forum reports that as much as 90% of online content may be synthetically generated by 2026. At first glance, one might assume that the biggest deepfake and AI-phishing targets would be celebrities or high-profile figures. However, the primary targets and objectives remain consistent with those of traditional scams and fraud: individuals, with their personal, banking, and payment information, and businesses, as custodians of valuable data and funds.
Three ways adversaries can already use AI to acquire your data
Phishing enhanced by AI. Phishing is a type of internet fraud designed to trick victims into disclosing credentials or card details. It can target both individuals and businesses, and may be run at mass scale or tailored to a specific victim. Phishing messages usually take the form of fake notifications from banks, service providers, e-payment systems, or other organizations. In targeted phishing, these messages may even appear to come from an acquaintance.
Traditional phishing messages and pages are often generic, poorly written, and riddled with errors. Now, Large Language Models (LLMs) enable attackers to craft personalized, convincing messages and pages with correct grammar, natural flow, and coherent structure. Since phishing is a worldwide threat, generative AI also lets attackers effectively target victims in languages they don’t speak. Additionally, perpetrators can mimic the writing style of specific individuals, such as a business partner or colleague, by analyzing social media posts, comments, or other content associated with their identity.
Moreover, AI technologies make it easy to create striking visuals or even build complete landing pages. Sadly, these same tools can be exploited by cybercriminals to produce even more persuasive phishing materials.
Audio deepfakes. Deepfakes are synthetic media in which AI convincingly replicates a person’s likeness or voice. With just seconds of a voice recording, AI can generate audio clips in that person’s voice, allowing adversaries to create fake voice messages that mimic trusted sources, such as friends or family. Imagine attackers hijacking a messaging app account and using the voice messages stored in its chats to create fake recordings that mimic the owner’s voice. They could then message the owner’s friends and relatives while posing as them. It’s alarming, but entirely possible: attackers could exploit your voice to request urgent financial transfers or sensitive information, abusing personal trust to commit fraud at both the personal and corporate levels.
Video deepfakes. Threat actors can also use AI tools to create video deepfakes from just a single image. If you assume this requires movie-grade CGI and a PhD in computer science, you might be surprised: after a few simple tutorials, even complex face- and lip-swapping software becomes manageable. Widely available tools can swap faces in a video, smooth out AI-generated imperfections, and add a realistic voice to the result. With these tools at their disposal, attackers can mount schemes that once seemed inconceivable: creating fake advertisements, placing deceptive calls, or even conducting live video calls while impersonating a trusted associate or a romantic interest, ultimately leading to significant financial losses.
Real-life examples: from AI-generated phishing to deepfaked high-profile figures
At Kaspersky, we have already observed cybercriminals using LLMs to generate content for large-scale phishing and scam attacks. These attacks often leave distinctive AI-specific artifacts, such as the phrases “As an AI language model…” or “While I can’t do exactly what you want, I can try something similar.” These and other markers expose fraudulent content generated with LLMs. For perpetrators, the appeal is automation: LLMs let them churn out dozens or even hundreds of phishing and scam web pages with convincing content, making these attacks more plausible.
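As a minimal illustration of this kind of marker-based screening, the sketch below flags text containing telltale LLM phrases. It is a toy filter, not Kaspersky’s detection logic: the two quoted phrases come from the paragraph above, while the extra markers and all implementation details are assumptions.

```python
# Toy sketch: flag content containing known LLM-specific artifact phrases.
# Real pipelines combine many more signals than a phrase list.
import re

LLM_ARTIFACTS = [
    r"as an ai language model",
    r"while i can.?t do exactly what you want",
    r"i cannot assist with",               # assumed extra marker
    r"as of my (last )?knowledge cutoff",  # assumed extra marker
]
PATTERN = re.compile("|".join(LLM_ARTIFACTS), re.IGNORECASE)

def has_llm_artifacts(text: str) -> bool:
    """Return True if the text contains a known LLM artifact phrase."""
    return PATTERN.search(text) is not None

print(has_llm_artifacts("Dear customer, as an AI language model, I ..."))  # True
```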
Beyond phishing, numerous high-profile examples of deepfake attacks have already emerged. For example, one victim was scammed after being told they had been selected by Elon Musk to invest in a new project and invited to a conference call. At the appointed time, a deepfake of Musk explained the project details to a group of listeners, who were then asked to contribute funds – resulting in significant losses.
Cybercriminals also exploit fraudulent ads featuring deepfakes of well-known figures, such as global actors or politicians, which they display across various platforms. In one instance, a deepfake video depicted Canada’s Prime Minister Justin Trudeau promoting an investment scheme.
Deepfakes extend beyond investment scams. For instance, AI romance scams use deepfakes to create fictional personas that interact with victims through video calls. After gaining a victim’s trust, the scammers request money for emergencies, travel, or loans. Recently, a group of over two dozen people involved in such scams was arrested after stealing $46 million from victims in Taiwan, Singapore, and India.
Voice deepfakes have also been used in scams targeting individuals, as well as in attacks on banks that rely on voice authentication systems.
How to protect against AI-driven threats
As AI technology evolves, so must our defenses. Defensive measures can be divided into technical and non-technical approaches. Regarding the former, large language models could – and likely will – embed watermarks in the future. These watermarks, imperceptible to humans but detectable by algorithms, can help identify and label AI-generated content online. It’s important to note that this approach is mainly effective for large models developed by major companies. If a malicious actor creates and uses their own large language model, they could easily circumvent these markings.
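To make the idea concrete, here is a toy sketch of one watermark-detection approach discussed in the research literature: the generating model biases its sampling toward a pseudo-random “green list” of tokens, and a detector tests whether a text contains statistically too many green tokens. The whitespace tokenizer and hash-based green-list test below are assumed stand-ins for the real model components.

```python
# Toy sketch of statistical "green list" watermark detection.
# Assumptions: a whitespace tokenizer replaces a real LLM tokenizer, and a
# hash of (previous token, current token) replaces the generator's
# pseudo-random green-list membership function.
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically mark ~50% of tokens as 'green', keyed on the
    previous token, mimicking a watermarking model's sampling bias."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(text: str) -> float:
    """z-score of the observed green fraction against the 0.5 expected
    for unwatermarked text; large positive values suggest a watermark."""
    tokens = text.split()
    if len(tokens) < 2:
        return 0.0
    n = len(tokens) - 1
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

print(watermark_z_score("ordinary human-written text scores near zero"))
```

For genuinely watermarked text the z-score would be large and positive, while ordinary text hovers near zero; this is also why an attacker running their own model, with no sampling bias, defeats the test.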
Another promising defensive technology is deepfake detectors. These tools identify specific characteristics of manipulated images, unexpected vocal fluctuations in audio, atypical phrasing in text, and other artifacts indicative of AI use. The main challenge for these detectors is to evolve as rapidly as generative AI, ensuring they remain effective.
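Production deepfake detectors are trained neural networks, but a toy example can show the kind of signal they examine. The frequency-domain heuristic below is an assumption for illustration, not a deployed method: some generative models leave unusual patterns in an image’s frequency spectrum, so an atypical share of high-frequency energy can flag a frame for closer review.

```python
# Toy illustration of a frequency-domain artifact signal sometimes used
# as a feature in deepfake detection. The cutoff value is an assumption.
import numpy as np

def high_freq_energy_ratio(gray: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a low-frequency disc;
    unusual values can flag synthetic or post-processed imagery."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    low = spectrum[radius <= cutoff * min(h, w) / 2].sum()
    return 1.0 - low / spectrum.sum()

# Stand-in "frame": random noise (replace with a real grayscale frame).
frame = np.random.default_rng(0).random((256, 256))
print(f"high-frequency energy ratio: {high_freq_energy_ratio(frame):.3f}")
```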
Digital signatures, already widely used for bank transactions and important communications, could also be adopted to verify the authenticity of videos or audio, and are among the most reliable technical solutions. Just as many websites today use digital signatures that are verified seamlessly in the background, videos and audio messages will likely carry digital signatures in the future, with verification occurring just as discreetly.
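As a concrete sketch of that workflow, the snippet below signs and verifies a byte string with Ed25519 using the widely deployed Python cryptography library. The in-memory bytes stand in for a video file, and the key-distribution details (who holds the private key, how viewers obtain the public key) are assumptions left out of scope.

```python
# Minimal sketch: sign media bytes at publication time, verify at playback.
# Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

video_bytes = b"...raw video container bytes would go here..."  # stand-in

private_key = Ed25519PrivateKey.generate()  # held by the publisher/device
signature = private_key.sign(video_bytes)   # distributed alongside the file

public_key = private_key.public_key()       # published for viewers
try:
    public_key.verify(signature, video_bytes)  # raises if bytes were altered
    print("Signature valid: content is unmodified and from the key holder")
except InvalidSignature:
    print("Signature check failed: content may have been tampered with")
```

Any edit to the signed bytes, a swapped face or a spliced voice included, invalidates the signature, which is what makes this approach attractive for authenticating media.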
However, countering social engineering cannot rely solely on technology. Fraud existed long before deepfake technology, and education and awareness have always been critical defenses. Today, a significant gap remains: many people are unaware of how easily this technology can be exploited. Attackers take advantage of this lack of knowledge, highlighting the urgent need for open dialogue and comprehensive educational campaigns on the subject.
In conclusion, while AI-driven scams and deepfakes present growing challenges, understanding these risks is an important first step toward addressing them. There is no need for fear; instead, fostering awareness and improving cyber literacy are key. Individuals can protect themselves by staying informed, vigilant, and thoughtful in their online interactions, while organizations can reduce AI-related risks through proven security solutions, stronger security practices, and awareness initiatives. By working together and approaching these challenges with care, we can create a safer and more resilient digital environment.