
What is a deepfake scam?

F-Secure


Imagine receiving a video message from your boss asking you to wire money, or seeing a video of a celebrity or an influencer promoting a new cryptocurrency. Whether or not you spot these videos as fake, witnessing someone you trust doing something out of character can be deeply unsettling and blur the line between genuine and fake content.

As scary as it sounds, scams like these are becoming more common thanks to recent leaps in artificial intelligence development. This kind of synthetic media is known as a deepfake.

Deepfakes are fake but incredibly realistic images, videos, or audio files generated with artificial intelligence. Although deepfakes are often easy to spot as false when they’re used for fun on social media, they are also being weaponized to scam people, steal money, and spread misinformation.

Read on to learn how a deepfake works and how you can spot one without getting scammed.

What are deepfakes?

A deepfake is usually an image, video, or audio clip that has been manipulated or generated using artificial intelligence.

The purpose of deepfake technology is to make it appear as if someone said or did something that never really happened. Sometimes, deepfake content is harmless and easy to spot as fake, such as funny filters that change your voice or appearance.

However, things get tricky once a deepfake video, image, or audio clip is no longer distinguishable from the real thing. Scammers and other bad actors know this and have started to use deepfake technology to deceive and influence people.

How do deepfakes work?

Deepfakes are powered by deep learning, a type of machine learning that allows computers to analyze vast amounts of data and detect patterns. This is also where the term gets its name: a blend of “deep learning” and “fake.”

One key piece of technology that has enabled deepfakes is the GAN, short for Generative Adversarial Network. It’s a complex process, but here’s a simplified explanation:

  • One part of the AI (the “generator”) creates fake content.

  • Another part (the “discriminator”) evaluates how realistic the AI-generated content looks.

  • These two systems compete with each other until the fake content is almost indistinguishable from the real thing.

These systems require training data to work and improve, such as video clips, images, or voice recordings. The more data the AI has, the more realistic it can make an image, video, or voice recording seem. With the prevalence of social media and people sharing videos of themselves online, there is no shortage of material for AI to learn from.
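To make the generator-versus-discriminator competition concrete, here is a minimal, illustrative sketch in Python. Real deepfake systems use deep neural networks trained on images or audio; this toy version, with invented numbers chosen purely for illustration, instead has a two-parameter generator trying to mimic a stream of "real" numbers centered on 4.0, while a simple logistic discriminator learns to tell real samples from fakes. The same adversarial loop drives both.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# "Real" data: numbers centred on 4.0, standing in for genuine content.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator: turns random noise z into fake samples, G(z) = a*z + b.
a, b = 1.0, 0.0
# Discriminator: scores how "real" a sample looks, D(x) = sigmoid(w*x + c).
w, c = 0.0, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    xr = real_batch(batch)               # genuine samples
    z = rng.normal(0.0, 1.0, batch)      # noise input
    xf = a * z + b                       # fake samples

    # Discriminator update: reward high scores on real, low scores on fake.
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w += lr * (np.mean((1 - dr) * xr) - np.mean(df * xf))
    c += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator update: nudge a and b so the fakes fool the discriminator.
    df = sigmoid(w * xf + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 1000) + b))
print(f"mean of generated samples: {fake_mean:.2f} (real data mean is 4.0)")
```

After training, the generated samples cluster near the real data's mean even though the generator never sees the real data directly; it only sees the discriminator's feedback. Scaled up to millions of parameters and pixels instead of single numbers, this is the dynamic that lets GAN-based tools produce faces and voices that are hard to distinguish from the real thing.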

Furthermore, deepfake technology is more accessible than ever, with apps and online tools that anyone can use to create a passable image, video, or recording of someone else. What once required advanced technical skills is now within reach of almost anyone, including scammers.

Dangers of deepfake technology

Deepfakes pose several serious threats when used with bad intentions, and scammers are not the only ones exploiting them. Some dangers of deepfake technology include:

  • Impersonation and identity theft: Impostors use deepfake videos, images, or voice recordings to trick their victims into thinking they are interacting with someone else, such as a family member, friend, or public figure.

  • Financial scams: Deepfakes are used in phone and video scams to trick people into transferring money. Scammers can clone a person’s voice from as little as a short audio clip taken from social media or voicemail.

  • Disinformation and fake news: Fake videos can spread political propaganda or false information, and deepfakes have already been used to try to influence elections.

  • Deepfake pornography and sextortion: So-called “face swap” tools can be used to fabricate compromising material by combining someone’s facial features with adult content. Deepfakes are also used in revenge porn and sextortion scams.

Don’t fall for deepfake scams

Deepfake scams are far from the only form of fraud on the internet, so staying vigilant can save you from a lot of harm online. Learn the telltale signs of deepfake content, such as fabricated videos on social media or deceptive audio recordings.

Real-life example of a deepfake video scam

Deepfakes are no longer just a theoretical threat; they are being used to cause real-world harm. Take one widely reported recent deepfake scam, for example:

In this successful scam, fraudsters used deepfakes during a Zoom call to impersonate a Hong Kong-based company’s CFO and other employees. The deepfakes were realistic enough to convince one of the company’s finance workers to transfer $25 million.

The target of this scam had doubts after receiving suspicious messages from the scammers. However, after joining a Zoom meeting with people he recognized as colleagues, he was convinced to make the transfer.

And it is not just high-profile targets like companies that are under attack; anyone with an internet connection and a smart device can become a victim of a deepfake scam. An extreme example involving celebrities and well-known figures is the case of a French woman who lost €830,000 to scammers posing as the actor Brad Pitt, with whom she believed she was in a special relationship.

The scammers were able to convince the victim to send money using AI-generated photos of Pitt in a hospital bed. Once the victim started to become suspicious, the scammers sent her a fake video featuring an AI-generated news anchor discussing Pitt’s alleged relationship with someone sharing the victim’s name.

Because of scams like these, concerns about deepfakes have grown in the past few years. However, face swaps and other deepfake techniques are not a new phenomenon: BBC News reported on researchers who had created a deepfake video of Barack Obama back in 2017.

Nowadays, almost anyone can use deepfake tools to create convincing fake audio, images, and videos with very little technical skill.

How to detect deepfakes

Deepfakes are getting harder to spot as the technology advances. Furthermore, with so much online content flooding our social media feeds, even a sloppy AI creation can pass as real if you do not pay close attention. Luckily, there are some red flags you can look for to keep yourself safe:

  • Unnatural eye movement or lack of blinking: Early deepfakes often failed to replicate natural blinking patterns.

  • Lip-sync issues: The mouth movements in a deepfake video may not perfectly match the audio you are hearing.

  • Blurry or flickering edges: The edges of deepfake content might look odd or unstable as a result of face swapping.

  • Inconsistent lighting or shadows: Deepfakes may struggle to replicate natural lighting conditions or the person’s real skin tone.

  • Strange audio: Audio deepfakes often struggle to replicate the person’s real voice. Listen for robotic, distorted, or unnaturally flat speech to tell real content from fakes.

When in doubt, use a reverse image search to check whether an image appears elsewhere online in a different context.

If you see a public figure saying or doing something outrageous, always check trusted sources before jumping to conclusions. And always pause before reacting to a suspicious video or message, especially if you are urged to do something, like send money or provide personal information.

Stay protected from scams with F‑Secure Total

Deepfakes are not the only online threat to look out for. F‑Secure Total protects everything you do online and helps you fend off scammers. Use Total’s advanced antivirus, VPN, identity protection, and password management tools to browse the internet safely. Total’s Scam Protection keeps your personal information and money safe from scammers.

  • Shop safely online with shopping protection

  • Block fake banking sites with banking protection

  • Avoid malicious links with browsing protection