Trust Under Attack

The Scam Threats of 2026

Dr. Megan Squire
Threat Intelligence Researcher, F‑Secure

You wake up to a cancelled flight and ask ChatGPT for your airline’s customer service number to book an alternate flight. Later that day, an AI assistant recommends a discounted product from a new online store. In the evening, a phone call arrives from a family member in distress, asking for urgent financial help.

Any one of these interactions could be legitimate. Increasingly, however, they may also be scams.

The global scam economy has grown into a multi-billion-dollar industry operated by organized criminal groups and powered by increasingly sophisticated technology.

Artificial intelligence is accelerating this shift — helping criminals generate convincing messages, impersonate trusted voices, and manipulate the platforms consumers rely on every day. At the same time, large-scale scam compounds and industrialized fraud operations allow criminals to run scams on an unprecedented scale.

This chapter examines four threats likely to shape the fraud landscape in 2026 — from AI manipulation of search and shopping tools to deepfake-enabled deception and the industrialization of scam operations. Understanding these threats will help digital service providers strengthen the trusted services consumers rely on as scams become increasingly difficult to recognize.

1. AI Data Poisoning Undercuts Digital Trust

To trick ChatGPT and other LLMs into promoting scams, fraudsters are experimenting with ways to poison the data AI systems rely on and influence their responses.

This risk is growing as consumer search behavior shifts from traditional search engines to AI assistants. Instead of reviewing multiple links, users increasingly rely on AI‑generated responses for a single answer.

AI search is particularly vulnerable for several reasons:

  • Data poisoning: Threat actors can manipulate the information AI models learn from or retrieve when generating answers.

  • Generative Engine Optimization (GEO): Techniques that attempt to influence how AI systems surface information, potentially bypassing the ranking protections traditional search engines have developed over the last 25 years.

  • Perceived authority: Users often treat AI responses as definitive answers rather than evaluating multiple sources.

Evidence of exploitation

Our internal threat research shows that scammers are already taking advantage of this shift. During a major winter storm in the United States — when more than 5,000 flights were cancelled — we asked ChatGPT for airline customer service numbers. Several of the numbers returned were fraudulent and answered by scammers when called.

ChatGPT response listing fraudulent customer service numbers.

In our analysis, the same phone number appeared for multiple airlines and was answered as “Microsoft Customer Support,” suggesting scammers were repurposing infrastructure from an existing tech support scam.

Search results promoting fraudulent customer service numbers.

This is only one example. The same manipulation techniques can be applied to any scenario where a consumer searches for help — “How do I contact X?” or “How do I resolve Y?” — allowing scammers to intercept victims at the exact moment they seek assistance.
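
For digital service providers, one practical mitigation is to treat contact details surfaced by an AI assistant as untrusted input and verify them against an authoritative source before they are dialed or displayed. The sketch below is a minimal illustration in Python, not a description of any vendor’s implementation: the airline names, the numbers, and the allowlist itself are hypothetical placeholders standing in for a verified directory the provider would actually maintain.

    # Illustrative sketch only: cross-check a phone number surfaced by an AI
    # assistant against a locally maintained allowlist of verified contacts.
    # All airline names and numbers here are hypothetical placeholders.
    import re

    # Hypothetical allowlist, e.g. built from the airline's official domain
    # or an internal, manually verified directory.
    VERIFIED_CONTACTS = {
        "example-air": {"+18005550100"},
        "sample-jet": {"+18005550123"},
    }

    def normalize(number: str) -> str:
        """Strip spaces, dashes, and parentheses so numbers compare cleanly."""
        digits = re.sub(r"[^\d+]", "", number)
        return digits if digits.startswith("+") else "+1" + digits

    def is_verified(airline: str, ai_suggested_number: str) -> bool:
        """Accept a number only if it matches the verified directory."""
        return normalize(ai_suggested_number) in VERIFIED_CONTACTS.get(airline, set())

    # Numbers copied from an AI answer are treated as untrusted input.
    print(is_verified("example-air", "(800) 555-0100"))  # True: matches allowlist
    print(is_verified("example-air", "(888) 555-0199"))  # False: reject and re-verify

In practice, a failed match would ideally trigger re-verification through the organization’s official website rather than a silent block, so legitimate but newly added numbers are not lost.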

Disclaimer: AI system behaviors reflect observations from research conducted in February 2026 and may have changed since publication.

2. AI Shopping Will Change Who We Trust Online

AI‑powered shopping tools are being introduced rapidly, but many lack meaningful anti-scam protections; safeguards against fraudulent merchants are often limited or added as an afterthought.

Unlike traditional search engines that return a list of retailers, AI shopping tools function more like personal recommendations. When an AI assistant suggests a product or store, it can grant credibility to merchants consumers would otherwise treat with skepticism.

This creates several risks:

  • Recommendation framing: AI suggestions can make unfamiliar or fraudulent merchants appear trustworthy simply because they were recommended.

  • Deal prioritization: AI shopping tools often highlight the “best deals,” which aligns with common scam tactics such as steep markdowns, limited-time offers, or “closing down” sales.

  • Amplified messaging: In some cases, AI tools repeat promotional claims from merchant websites verbatim — including urgency tactics frequently used by fake shops.

Evidence of exploitation

Our testing shows that AI shopping tools can inadvertently amplify scam messaging. In one example, ChatGPT repeated a fake shop’s claim that it was “going out of business,” echoing the urgency used on the merchant’s website.

ChatGPT response recommending a fake online store.

As AI‑powered shopping expands, weak merchant verification and limited safeguards may allow fraudulent stores to gain visibility through AI‑generated recommendations — placing scam websites directly in front of consumers at the moment they’re ready to buy. 
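
A related safeguard, sketched below, is to screen merchant copy for urgency language before a shopping assistant repeats it verbatim. This is a minimal, hypothetical illustration in Python: the phrase list is an assumption rather than an exhaustive detector, and a real deployment would combine such signals with merchant verification and reputation data instead of relying on keyword matching alone.

    # Illustrative sketch only: flag urgency language in merchant copy before
    # an AI shopping assistant echoes it. The phrase list is a hypothetical
    # starting point, not a production-grade detector.
    URGENCY_PHRASES = [
        "going out of business",
        "closing down sale",
        "everything must go",
        "today only",
        "limited time offer",
    ]

    def flag_urgency(merchant_copy: str) -> list[str]:
        """Return the urgency phrases found in a merchant's promotional text."""
        text = merchant_copy.lower()
        return [phrase for phrase in URGENCY_PHRASES if phrase in text]

    snippet = "Going out of business! Everything must go, 90% off today only."
    hits = flag_urgency(snippet)
    if hits:
        # Rather than echoing the claim, an assistant could add friction here,
        # for example suppressing the recommendation or routing the merchant
        # for additional vetting.
        print("Urgency signals detected:", hits)

The design choice here is deliberately conservative: the check never blocks a merchant outright, it only withholds amplification of urgency claims until the storefront has been vetted.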

For a deeper analysis of how fraudulent storefronts can exploit advertising networks and AI shopping tools, see our recent Partner Insights article.

Disclaimer: AI system behaviors reflect observations from research conducted in November 2025 and may have changed since publication.

3. Scam Compounds Change the Scale of Cyber Crime

Scamming is now a multi-billion-dollar global industry, with organized crime groups operating large-scale fraud enterprises across multiple countries. By some estimates, the global scam economy now rivals the scale of the illegal narcotics trade.

Despite dozens of law enforcement raids on international scam compounds, the problem persists. These operations continue to grow, often relocating or rebuilding after enforcement actions.

Several factors enable scams to operate at industrial scale:

  • Organized criminal infrastructure: Many scams are run from large compounds employing hundreds or even thousands of workers.

  • Operational playbooks: Investigations regularly uncover scripts and training materials used to teach scammers how to manipulate victims emotionally.

  • Specialized technology: Scam centers often operate sophisticated telecommunications infrastructure that enables large-scale messaging, calling, and victim management.

Evidence of exploitation

Each raid on a scam compound reveals new intelligence about how these operations function, including the software used to run campaigns and the playbooks used to train scammers. An FBI photo from a scam center closed in 2025 shows a training session where new recruits are guided through the steps of executing a romance scam, illustrating how methodical these operations have become.

Photo released by the FBI showing a training session inside a scam compound.

Another photo, from a U.S. scam compound raid, shows the script used in “family emergency” scams targeting grandparents. Running such a scam often requires little more than a script reader and AI software that mimics a victim’s family member, underscoring how repeatable these operations have become.

Photo of a scam script released by the U.S. Department of Justice.

Satellite imagery and investigative photos also reveal the scale of these facilities — some housing thousands of workers — showing how modern scam operations increasingly resemble organized call centers.

Google Earth images showing a scam compound before and after construction.

One thing is clear: modern scamming operates on an industrial scale. These aren’t isolated criminals with disposable phones, but organized enterprises. As long as these operations continue to function across borders, the global fraud industry will keep expanding — delivering a constant stream of scams to consumers worldwide.

4. Deepfakes Distort Reality to Promote Scams

Scammers are increasingly using AI to make their scams more believable. As generative tools become cheaper and easier to use, they’re becoming part of the standard toolkit for fraud operations.

For years, many scams were easy to spot because of obvious signs — bad grammar, awkward phrasing, poorly edited images. AI is erasing those signals, allowing scammers to produce polished messages, convincing voices, and realistic visuals at scale.

Several factors are accelerating this shift:

  • Higher-quality bait: AI tools help scammers produce convincing emails, messages, and profiles.

  • Synthetic voices and images: AI‑generated media can be used to impersonate trusted individuals or organizations.

  • Lower barriers to entry: Tools that once required technical expertise are now widely accessible.

Evidence of exploitation

Our internal threat research shows that 89% of scammers’ AI use focuses on improving the quality of their bait, using AI‑generated messages, images, audio, and video to make scams harder to detect.

At the same time, our latest consumer survey shows growing concern about distinguishing real from fake: 84% of respondents worry AI will make it impossible to tell what’s genuine online (F‑Secure Global Consumer Market Survey 2026, n = 10,000).

As AI‑generated content becomes increasingly difficult to recognize, the warning signs that once helped consumers identify scams may fade. Trusted, AI‑powered security from digital service providers will therefore become more important than ever.

For a deeper look at how consumers use AI — and the conditions required to earn their trust — see the F‑Secure Cyber Trust Report: AI Adoption in an Era of Conditional Trust.

Download the Report

Explore comprehensive consumer data and scam insights in the F‑Secure Scam Intelligence & Impacts Report 2026.

More From Our Experts

  • The Cost of Scams: Same Exposure, Double the Loss

    Scam exposure remains high in 2026, but the real shift is in impact. Financial losses have surged as cyber criminals become more effective at converting attempts into real monetary harm. Our 2026 consumer survey reveals how the scam landscape is changing.

    Timo Salmi
    Senior Solution Manager, F‑Secure

  • The AI Arms Race: Inside the Global Fight Against Scams

    AI is transforming scams into highly efficient operations run by organized crime groups. As attacks become more sophisticated, the challenge isn’t just exposure — it's recognition. This chapter examines how AI is reshaping the scam economy and the global fight against it.

    Jorij Abraham 
    Managing Director, Global Anti‑Scam Alliance

  • Live Your Best Digital Life: A New Model for Trust

    Discover how F‑Secure is building a new model for digital trust — empowering digital service providers with AI‑powered, human-centered cyber security to enable trusted experiences across everyday digital life, so your customers feel secure, confident, and in control.