Imagine receiving a video message from your boss asking you to wire money, or seeing a video of a celebrity or an influencer promoting a new cryptocurrency. Whether or not you spot these videos as fake, witnessing someone you trust doing something out of character can be deeply unsettling and blur the line between genuine and fake content.
As scary as it sounds, scams like these are becoming more common because of recent leaps in artificial intelligence. This kind of synthetic media is known as a deepfake.
Deepfakes are fake but incredibly realistic images, videos, or audio files generated with artificial intelligence. Although deepfakes are often easy to spot as false when they’re used for fun on social media, they are also being weaponized to scam people, steal money, and spread misinformation.
Read on to learn how deepfakes work and how you can spot one without getting scammed.
What are deepfakes?
A deepfake is an image, video, or audio clip that has been manipulated or generated using artificial intelligence.
The purpose of deepfake technology is to make it appear as if someone said or did something that never really happened. Sometimes, deepfake content is harmless and easy to spot as fake, such as funny filters that change your voice or appearance.
However, things get tricky once a deepfake video, image or audio is no longer distinguishable from the real thing. Scammers and other bad actors know this and have started to use deepfake technology to deceive and influence people.
How do deepfakes work?
Deepfakes are powered by deep learning, a type of machine learning that allows computers to analyze vast amounts of data and detect patterns. This is also where the term comes from: a combination of “deep learning” and “fake.”
One key piece of technology that has enabled deepfakes is known as GAN — short for Generative Adversarial Network. It’s a complex process, but here’s a simplified explanation:
One part of the AI (the “generator”) creates fake content.
Another part (the “discriminator”) evaluates how realistic the AI-generated content looks.
These two systems compete with each other until the fake content is almost indistinguishable from the real thing.
These systems require training data to work and improve, such as video clips, images or voice recordings. The more data the AI has, the more realistic it can make an image, video or voice recording seem. With the prevalence of social media and people sharing videos of themselves online, there is no lack of material for AI to learn from.
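The adversarial loop described above can be sketched with a toy example. The following is a minimal, illustrative one-dimensional “GAN” in plain Python, not a real deepfake system: the “real” data are simply numbers clustered around 5, the generator is a linear function of random noise, and the discriminator is a logistic classifier. All names, values, and parameters here are invented for illustration.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Numerically stable logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    ex = math.exp(x)
    return ex / (1.0 + ex)

def real_sample():
    # "Real" data: numbers near 5 (a stand-in for genuine images or audio).
    return 5.0 + random.gauss(0, 0.1)

# Generator g(z) = a*z + b turns random noise z into a fake sample.
a, b = 0.1, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" a sample looks.
w, c = 0.1, 0.0

lr = 0.05
for _ in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    real = real_sample()
    fake = a * random.gauss(0, 1) + b
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    z = random.gauss(0, 1)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    a += lr * (1 - d_fake) * w * z
    b += lr * (1 - d_fake) * w

# After training, generated samples should drift toward the real data's mean.
fakes = [a * random.gauss(0, 1) + b for _ in range(1000)]
mean_fake = sum(fakes) / len(fakes)
print(f"mean of generated samples: {mean_fake:.2f}")
```

In a real deepfake system, the generator and discriminator are deep neural networks trained on images or recordings rather than numbers, but the competition works the same way: the generator keeps improving until the discriminator can no longer reliably tell its output from the training data.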
Furthermore, deepfake technology is now more accessible than ever, with apps and online tools that anyone can use to create a passable image, video or recording of someone else. What once required advanced technical skills is now widely accessible — even to scammers.
Dangers of deepfake technology
Deepfakes pose several serious threats when used with bad intentions, and scammers are not the only ones exploiting the technology. Some dangers of deepfake technology include:
Impersonation and identity theft: Impostors use deepfake videos, images or voice recordings to trick their victims into thinking they are interacting with someone else, such as a family member, friend, or a public figure.
Financial scams: Deepfakes are used in phone and video scams to trick people into transferring money. Scammers can even use a short audio recording from social media or voicemail to fake a person’s speech.
Disinformation and fake news: Fake videos can spread political propaganda or false information. Deepfakes have already been used to influence voters during elections.
Deepfake pornography and sextortion: So-called “face swap” tools can be used to fake compromising material where someone’s facial features are combined with adult content. Deepfakes are also used in revenge porn and sextortion scams.
Don’t fall for deepfake scams
Deepfake scams are far from the only form of fraud on the internet, so staying vigilant can spare you a lot of harm online. Learn the telltale signs of deepfake content, such as fabricated videos on social media or deceptive audio recordings.
Real-life example of a deepfake video scam
Deepfakes are no longer just a theoretical threat — they’re being used to cause real-world harm. Take a widely reported deepfake scam from early 2024, for example:
In this scam, fraudsters in Hong Kong used deepfakes during a Zoom call to impersonate a company’s CFO and other employees. The deepfakes were realistic enough to convince the company’s finance worker to transfer $25 million.
The target of this scam had doubts after receiving suspicious messages from the scammers. However, after joining a Zoom meeting with people he recognized as colleagues, he was convinced to make the transfer.
And it is not just high-profile targets like companies that are under attack — anyone with an internet connection and a smart device can become a victim of a deepfake scam. An extreme example of a scam involving a celebrity is the case of a French woman who lost €830,000 after being led to believe she was in a special relationship with the famous actor Brad Pitt.
The scammers convinced the victim to send money using AI-generated photos of Pitt in a hospital bed. When the victim grew suspicious, the scammers sent her a fake video featuring an AI-generated news anchor discussing Pitt’s alleged relationship with someone sharing the victim’s name.
Because of scams like these, concerns about deepfakes have grown in the past few years. However, face swaps and other deepfake techniques are not a new phenomenon. For instance, BBC News reported on researchers who had created a deepfake video of Barack Obama back in 2017.
Nowadays, almost anyone can use deepfake tools to create convincing fake audio, images, and videos with very little technical skill.
How to detect deepfakes
Deepfakes are getting harder to spot as the technology advances. Furthermore, with so much online content flooding our social media feeds, even a sloppy AI creation can pass as real if you do not pay close attention. Luckily, there are some red flags you can look for to keep yourself safe:
Unnatural eye movement or lack of blinking: Early deepfakes often fail to replicate natural blinking patterns.
Lip-sync issues: The mouth movements in a deepfake video may not perfectly match the audio you are hearing.
Blurry or flickering edges: The edges of deepfake content might look odd or unstable as a result of face swapping.
Inconsistent lighting or shadows: Deepfakes may struggle to replicate natural lighting conditions or the person’s real skin tone.
Strange audio: Audio deepfakes often struggle to replicate the person’s real voice. Listen for robotic, distorted, or unnaturally flat-sounding speech to tell real content from scams.
When in doubt, use reverse image search tools to check whether an image has been altered or taken out of its original context.
If you see a public figure saying or doing something outrageous, always check trusted sources before jumping to conclusions. And always pause before reacting to a suspicious video or message, especially if you are urged to do something, like send money or provide personal information.