The Future of Scams

What the Next 5 Years Could Bring

F-Secure

Scams are adapting fast—and the next wave could be more personal than ever. This chapter explores what might come next, from AI agents and all-in identity theft to moments of trust that open the door to deception.

Sarogini Muniyandi
Head of Scam Research & Defense Engineering, F-Secure

Trust Triggers: Scammers Are Hacking Human Behavior

Over the past decade, scams have evolved far beyond suspicious links and malicious files. As devices and security systems become more resilient, scammers have shifted their focus—not to vulnerabilities in code, but to vulnerabilities in people. Today, trust, emotion, and behavioral patterns are the new attack surface.

Modern scams increasingly target ‘money moments’: key interactions when people move or manage money online, like paying bills, transferring funds, applying for loans, or hiring services through social media. These aren’t random attacks; they’re engineered to strike when emotional pressure is high and attention is low.

AI now enables scammers to hijack conversations, mimic voices, and craft messages that feel personal and urgent. A fake vendor demanding a deposit. A deepfake voice message from a loved one asking for a favor. The pressure to act fast creates the perfect opening for manipulation.

As cyber security grows stronger, the human layer is becoming—and will remain—the primary attack surface. Scam protection must evolve to recognize behavioral risks, detect emotional manipulation, and intervene before a costly decision is made.

In a world where scams begin with trust, the question is no longer what users click, but why, and when, they click it.

Joel Latto
Threat Advisor, F-Secure

All-In Identity Theft: The Next Frontier for Scammers?

Identity theft isn’t new, but it’s always evolving. As outlined in the F‑Secure Scam Kill Chain, most scams begin with basic personal details: a name, address, phone number, or email address. While not enough to commit fraud directly, this data enables phishing attacks, impersonation, or account recovery abuse.

Next, I expect to see the emergence of something deeper: all-in identity theft. Instead of stopping at surface-level details, scammers could assemble full digital personas.

It begins with common personal data, then adds stronger identifiers like passports or Social Security numbers. Public social media posts, breached health records, and even audio or video clips used to create deepfakes can round out the profile. In some cases, scammers may collect biometric data like fingerprints to bypass advanced security.

This kind of comprehensive identity theft isn’t common—yet. Most scams succeed with less effort. But as with phishing kits and malware-as-a-service, the barrier to entry is dropping. Semi-automated tools may soon build ‘identity packages’ from multiple sources. And biometric data could be key to that shift—you can only change your fingerprint scan nine times, then you’re out of options.

Once all-in identity theft becomes profitable at scale, scammers won’t need much incentive to take it further.

Laura Kankaala
Head of Threat Intelligence, F-Secure

AI Agents: A Future Tool for Scammers—But Not Yet

AI agents are designed to do more than answer questions—they can act on our behalf. That sounds promising for users, but also for scammers.

In theory, scammers could harness AI agents to automate spam campaigns, carry out convincing conversations with victims, or even commit financial theft. They could also use them to validate stolen data, like login credentials or credit card details, without the manual effort usually required.

In the consumer cyber security space, there’s plenty of speculation about what AI agents could do. But here’s the reality: scammers aren’t using them today.

Why not? For one, scams already rely on simpler forms of automation that are cheaper and easier to deploy. AI agents, by contrast, remain costly to operate and too niche for widespread use in the cyber crime ecosystem. While generative AI is already helping scammers create convincing content or translate phishing messages, AI agents aren’t yet practical.

It will take another breakthrough—one that makes AI agents significantly more accessible and affordable—before scammers are likely to adopt them at scale. Whether that tipping point arrives in the next few years remains to be seen.

Download the Report

Explore comprehensive consumer data and scam insights in the F‑Secure Scam Intelligence & Impacts Report 2025.
