
How imposter scams generated $2.7 billion in just one year

Corey McAuley | Jul 18, 2024 | 7 min read

“The AI revolution will bring us immense rewards — and inconceivable new challenges. We are more reliant on technology than ever, and AI affects computer security and privacy in unlimited ways.”

That’s how Mikko Hyppönen, Principal Research Advisor at F‑Secure, put it in our “Staying ahead of cyber threats in the age of AI” ebook. The latest US Federal Trade Commission (FTC) data on money lost to scams reinforces his point, with imposter scams named 2023’s top fraud category.

US consumers reportedly lost an eye-watering $2.7 billion to imposter scammers in just 12 months, a figure that has climbed sharply year-on-year thanks to rapid technological advancements in, and widespread accessibility of, artificial intelligence. But what tricks did they use?

3 most prevalent imposter scams using AI

Imposter scams come in many forms, but all work in a similar way: scammers impersonate someone you trust to convince you to act in a certain way, such as parting with your money. These can be the most personal and intimate of all online scams, making imposter scammers some of the most callous.

1. Phishing and smishing

If you’ve ever received an email with a malicious link masked as something intriguing to get you to click it, that’s phishing. When a scammer does the same thing but via text message or SMS, that’s smishing.

How does AI make it more effective?

  • WormGPT — a malicious alternative to ChatGPT — is extremely effective for constructing phishing or smishing attacks that steal individuals’ private data or trick victims into installing malware.

  • Even without this customized tool, generative AI apps like ChatGPT can be used to improve the effectiveness of these imposter scams. Thanks to the natural language processing in large language models, a chatbot can write a professional-sounding email or SMS in seconds.

  • The bots can proofread the text of scam emails and automatically eliminate any grammar errors, and even quickly and accurately translate them into different languages, massively expanding a scam’s pool of potential targets.

How to stay secure

  • Try out ChatGPT so you can experience firsthand how convincing a conversation you can have today with an AI. This awareness alone could help you avoid potential phishing attacks.

  • Use strong passwords and multi-factor authentication to protect your accounts.

  • Proactively protect your devices from malware with endpoint protection.

2. Vishing

Vishing is when you receive a phone call from a scammer trying to get your personal information or asking you to send them money, all while masquerading as someone you know or trust.

How does AI make it more effective?

Imagine receiving a phone call in the middle of the night from what sounds like a family member or close friend begging you to help them. Then you find out the audio was faked, the product of AI.

As generative AI becomes more powerful and accessible, and requires ever less existing voice sample data to copy someone’s voice, these highly tailored voice attacks may become far more common, explains Laura Kankaala, Threat Intelligence Lead at F‑Secure.

How to stay secure

  • Beware of any unexpected or unusual demands for money, especially if they ask for gift cards, cryptocurrency, or a wire transfer out of the blue. Take a break and do a little Googling about the situation you’re in before making any decisions to send money.

  • In the era of AI, we must start from a position of skepticism about everything, even contact from the people we love most. Remember that almost anything can now be spoofed.

  • If you can, call the person directly. You may confuse someone or interrupt a deep sleep, but at least you won’t have let criminals take advantage of your best intentions.

3. Deepfakes

With AI, it’s easy to mimic more than text and people’s voices. Given enough computing power and time, criminals can make fake images and videos that transpose one person’s face onto someone else’s body in almost any situation. These phony visuals are known as deepfakes.

How are they enabling cyber crime?

  • In June 2023, the FBI announced an uptick in malicious actors using this technology to create explicit content and commit sextortion. This includes the threat of exposing deepfaked videos to family and friends.

  • There’s no real limitation to how these deepfakes can be used, whether for emotional abuse, political manipulation, or outright fraud. We’ve even seen deepfake videos of Elon Musk on X talking about a cryptocurrency investment he’s not involved with.

  • As the technology for this improves, not only will the number of scam attempts increase, but the quality of them likely will too — making them increasingly difficult to spot.

How to stay secure

To avoid becoming a victim of deepfakes, the FBI suggests:

  • Monitoring the social media activity of your kids.

  • Regularly searching for yourself on the web.

  • Locking down your social media accounts’ privacy and security settings.

If you’re in a situation where you’re being extorted with deepfaked images, Laura Kankaala suggests:

  • The closest thing to an actual fix for deepfakes is simply not giving in to criminals’ demands and instead reporting these incidents to law enforcement.

  • These criminals are going after multiple people at the same time, so they likely don’t have the time to carry out their threats, such as tracking down your family and friends to show them the faked images.

  • And even if the criminals go that far, the only thing you can do is warn your loved ones that they may see fake images. It’s not ideal, but it’s far better than encouraging this kind of extortion.

Who do imposter scammers impersonate?

Scammers will impersonate anyone people consider trustworthy, authoritative, or valuable, such as:

  • Governmental organizations 

  • Landlords or utility companies

  • Potential love interests

  • Well-known companies such as Amazon

  • Family members and close friends

  • Colleagues or the CEO of your company

  • Companies offering jobs

  • Delivery services

  • Nannies and caregivers

  • Celebrities and world leaders

Real-life examples of imposter scams

Imposter scammers generally use one of two tactics: informing you about a problem and then offering you the chance to fix it, or tugging at your heartstrings with a very personal plea for money.

Fake suspicious activity alerts

Abigail Bishop, Head of External Relations for Scam Prevention at Amazon, explains in The Global State of Scams Report that Amazon received many reports of scammers sending fake suspicious activity alerts in 2023.

Consumers who believed their account was compromised and clicked the malicious link asking them to verify their information ended up handing their payment or login details to scammers. Amazon offers cyber security awareness training to help consumers identify imposter scams such as these.

Voice cloning family members

Swedish resident Ann-Lis was deceived in a very different way. She received a text message saying her daughter had changed her phone number, followed by a request for money for a new phone and computer.

Ann-Lis was suspicious until she received a phone call and heard her daughter’s voice encouraging her to send thousands of Swedish kronor, at which point she let down her guard. Fortunately, her bank intervened and blocked the transaction when it suspected that something wasn’t right.


Keep shopping scammers at bay

Shop online worry-free with F‑Secure Total


Ensure safe online shopping with Total

F‑Secure’s Browsing protection (included in F‑Secure Total) enables you to evaluate the safety of shopping sites and prevents you from unintentionally accessing harmful URLs.

  • Quickly identify safe sites in your search results

  • Block scam websites automatically

  • Get feedback on potentially harmful sites with safety ratings

Read more about Total