
The AI Scam Boom

4 Ways Scammers Are Using AI in 2025

7 min read
Megan Squire, Threat Intelligence Researcher, F‑Secure

This year, AI is fueling a new wave of scams—helping threat actors scale faster and appear more convincing. This chapter breaks down how they’re putting AI to work and explores the human cost of these increasingly sophisticated attacks.


How Scammers Are Using AI in 2025

Our comprehensive analysis of documented AI-driven scams in 2025 reveals four distinct categories of AI application in fraudulent activities:

  1. Target Selection – AI is used to identify and profile potential victims.

  2. Infrastructure Development – AI builds the digital tools needed for attacks.

  3. Content Generation – AI improves the personalization and credibility of scam bait.

  4. Victim Communication – AI enables scammers to engage directly with targets.

We found that in 89% of our sample of AI‑enhanced scams, AI was used for content generation. The vast majority of these involved enhancing phishing emails or impersonating people using voice cloning and deepfake video technology.

How Scammers Are Using AI in 2025.

Making Scam Bait More Convincing

In most cases we documented, scammers used AI tools to enhance the customization and credibility of the bait used to deceive victims. Voice cloning, for example, requires only a brief audio sample to build a replica of someone’s voice. This enables scammers to deliver emotionally charged messages that appear to come from relatives in crisis.

These scams—often called “family emergency” or “grandparent” scams—once relied on muffled or garbled messages to impersonate a loved one in distress, claiming to be kidnapped or in urgent need of bail money.


Now, with just a few seconds of audio, AI can generate longer, highly convincing messages in the target’s own voice. Victims consistently report that the recordings sound indistinguishable from their relatives or friends.

Deepfake Videos and Fake Endorsements

AI video generation tools are also being used to fabricate celebrity endorsements—promoting questionable products and fake investment schemes.

Social media platforms are flooded with deepfakes purporting to be public figures like Elon Musk, Al Roker, Jamie Lee Curtis, Keanu Reeves, and many others. In July, The Hollywood Reporter detailed how several frequently impersonated celebrities have turned to AI detection firms to find and remove fake content featuring their likeness.

Policy Responses to Crime Using Deepfakes

The marked increase in deepfakes used for deceptive, inauthentic content has prompted two new pieces of legislation in the US:

  • The TAKE IT DOWN Act – Passed by Congress with overwhelming bipartisan support, criminalizing non-consensual intimate deepfakes.

  • The DEFIANCE Act – Advancing in Congress; if passed, it would create a civil right to sue over this type of deepfake forgery.

While these laws address specific types of harm, our analysis shows that victims of financial fraud still lack adequate protection against deepfakes used for impersonation and scams.

AI for Phone Calls and Messaging

While deepfakes and voice cloning are being actively exploited by scammers, we found limited evidence of threat actors using AI to initiate phone calls or send SMS messages directly. However, as AI‑based chatbots and agents become more reliable, this type of automated contact is likely to become increasingly prevalent.

Building Victim Target Lists with AI

In some cases, AI is being used to analyze previously scraped social media data to identify potential victims and build more effective target lists.

For example, The Financial Times reported that British insurance firm Beazley and ecommerce platform eBay have warned about scammers using AI to craft fraudulent emails by extracting and analyzing victim details from social media profiles.

A Moving Target for Scam Defenses

Scammers are expected to continue applying AI across the broader scam kill chain, making fraud detection and prevention a constantly shifting challenge for the scam protection industry. Our research highlights two emerging areas of concern:

  • Reconnaissance and target acquisition: Using AI to analyze data and prioritize high-value targets.

  • Scam persistence: Deploying AI chatbots to handle the time-consuming psychological manipulation involved in long-term romance or investment scams.

How the F‑Secure Scam Kill Chain Has Evolved

F-Secure Scam Kill Chain.

Since its launch earlier this year, the F‑Secure Scam Kill Chain, our comprehensive knowledge base detailing both high-level scam tactics and specific techniques, has evolved to offer improved readability, accessibility, and a broader range of techniques.

Each scam tactic now includes a clearly defined goal describing what the scammer aims to achieve. For example, the goal of the ‘reconnaissance and target acquisition’ tactic is to identify potential victims based on factors including available data, personal interests, demographics, and overall suitability for the scam. This tactic is then broken down into techniques, such as manually building target profiles or using automated data collection.
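The tactic-goal-technique structure described above can be sketched as a simple data model. This is an illustrative assumption only; the class and field names below are hypothetical and do not reflect F‑Secure's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Tactic:
    """One kill-chain tactic: a named goal broken down into techniques.

    Hypothetical model for illustration; not F-Secure's actual schema.
    """
    name: str
    goal: str
    techniques: list[str] = field(default_factory=list)

# The 'reconnaissance and target acquisition' example from the text
recon = Tactic(
    name="Reconnaissance and target acquisition",
    goal=(
        "Identify potential victims based on available data, personal "
        "interests, demographics, and overall suitability for the scam"
    ),
    techniques=[
        "Manually building target profiles",
        "Automated data collection",
    ],
)

print(f"{recon.name}: {len(recon.techniques)} techniques")
```

A flat structure like this is enough to express the whole knowledge base as a list of `Tactic` records, each pairing a scammer's objective with the concrete methods used to pursue it.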

The Human Cost of AI‑enhanced Scams

There’s no question that AI‑powered scam tactics are evolving rapidly. And while detection tools and defensive measures are improving in parallel, consumers remain at risk—unwittingly caught in an arms race between scammers and those working to stop them.

As scams grow more frequent and convincing, many individuals will begin to doubt their own ability to tell real from fake—leading to a broad erosion of trust in digital spaces. Others will face scam fatigue: overwhelmed by the constant stream of threats, they become desensitized and more likely to overlook red flags.

Looking Ahead: The Role of Human Connection

  • Research shows that one of the most effective ways to help people navigate the AI‑enabled fraud landscape is to emphasize genuine human connections.

  • Whether through peer education as a prevention mechanism, or by encouraging the verification of suspicious communications via trusted human channels, fostering personal connections creates a powerful circuit breaker in the fraud cycle.

  • In an era where seeing is no longer believing, our most powerful tool against AI‑enabled fraud may be the very thing machines can’t replicate.

Download the Report

Explore comprehensive consumer data and scam insights in the F‑Secure Scam Intelligence & Impacts Report 2025.

More From Our Experts

  • 2025 Scam Landscape

    By Timo Salmi
    Senior Solution Marketing Manager, F‑Secure

  • Humanizing Scams

    By Tracy Hall
    Author, Speaker, and Advocate

  • The AI Scam Boom

    By Dr Megan Squire
    Threat Intelligence Researcher, F‑Secure

  • The Silent Toll of Scams

    By Jorij Abraham
    Managing Director, Global Anti‑Scam Alliance

  • Risk vs Reality

    By Laura Kankaala
    Head of Threat Intelligence, F‑Secure

  • The Future of Trust

    By Dr Laura James
    Vice President of Research, F‑Secure

  • Inside Scam Centers

    By Timo Salmi
    Senior Solution Marketing Manager, F‑Secure

  • The Future of Scams

    By Sarogini Muniyandi, Joel Latto and Laura Kankaala