Artificial Intelligence is rapidly transforming the global scam landscape. What was once a largely manual criminal activity is becoming automated, scalable, and increasingly difficult to detect.
For years, scam operations have been professionalizing. Organized crime groups now run large-scale scam enterprises that resemble legitimate companies, complete with specialized teams, standardized processes, and optimized workflows. Their goal is simple: scam anyone, anywhere, as efficiently as possible.
AI is now accelerating this professionalization. Recent law enforcement raids on scam compounds revealed that virtually all staff were using AI tools in their daily operations — whether to optimize phishing messages, localize scam content, generate fake voices, or create convincing images and videos.
This shift is already visible in the data. In GASA’s Global State of Scams survey, the primary reason victims reported being scammed changed in just one year from “I saw an offer and went for it” to “I didn’t recognize that it was a scam.” As scams become more sophisticated, they are also becoming harder to identify.
In this chapter, we speak with GASA’s Jorij Abraham about how AI will reshape the scam landscape — and the fight against scams — in 2026.
How Will the Scam Landscape Change in 2026?
In short: AI. Mass messaging has long been used to find potential victims. The difference now is that not only can messages be personalized using stolen data, but the entire scam chain can also be automated by professional scam networks.
AI identifies potential victims on social media, after which LLMs initiate contact and quickly move conversations to messaging apps to further manipulate targets. Victims are then directed to AI‑generated online stores, crypto exchanges, or fake banking websites designed to steal their money. Finally, AI helps criminals obscure the money trail by splitting funds into micro-transactions through crypto mixers.
AI‑enabled scams will cause significantly more harm in 2026, particularly in the hands of constantly evolving criminal syndicates.
What Does Meaningful Action Against Scams Look Like?
As the GASA network grows, we’ve added a fourth pillar — action — to our existing focus on networking, knowledge sharing, and research. We now run six working groups developing cross-sector policy recommendations and solutions.
Education: Awareness campaigns aren’t enough. People need continuous education from school to later life. GASA supports this through SpotScam, a free program that helps consumers build practical anti-scam skills.
Prevention and Intervention: As AI scams become harder to recognize, we launched Scam.org — a global hub where people can check suspicious activity, report scams, and access victim support.
Intelligence Sharing: Through the Global Signal Exchange (GSE), members now share more than one million scam signals across sectors every day.
Research: Much scientific research on scams remains underused by the commercial sector. We aim to bridge this gap through stronger collaboration and ongoing research, including the Global State of Scams survey.
Finance: Financial services play a crucial role in preventing criminals from benefiting from scams. This group focuses on key challenges such as money mules.
Enforcement: According to the World Economic Forum, only 0.05% of cybercriminals are prosecuted. This group works to raise that figure through stronger policies and improved intelligence sharing between industry and law enforcement.
Why Is Cross-Sector Cooperation Non-Negotiable?
No single nation or sector sees the full picture. Scam syndicates operate across borders and deliberately exploit gaps in international legislation. While money may be stolen through financial services, scams often begin earlier — through online marketplaces, social media, and telecom networks.
Only coordinated cross-sector and international collaboration can disrupt these criminal ecosystems and prevent scammers from simply moving to the next weak link.
How Does Shared Intelligence Help Stop Scams in Practice?
The internet industry, social media platforms, telecom operators, and the financial sector are all feeling the growing burden of scams. The success of the GSE shows that data can be shared at scale and across borders. In 2025, the platform expanded rapidly:
Signals grew from 40 million to more than 1 billion
Data sources increased from fewer than 10 to 54
More than 160 organizations are now onboarded or in the onboarding pipeline
Major technology companies including Amazon, Microsoft, and Meta have joined co‑founder Google, alongside four law enforcement agencies. Several cross-sector pilots have also launched covering finance, malvertising, law enforcement, cloud, and publishing scams.
These collaborations are already producing results. League tables are exposing weaknesses in scam supply chains and motivating policy interventions. The UK Malvertising pilot with Amazon and Google established core taxonomies and generated entirely new intelligence signals in its first phase. In another example, a GSE-facilitated partnership helped GovTech Singapore restrict 17,000 scam entities on Meta platforms.
Data sharing is still an emerging practice, but its impact is clear. To become truly effective, companies must build trust and actively participate in sharing intelligence.
How Is AI Reshaping the Balance Between Scammers and Defenders?
AI is significantly strengthening the capabilities of scammers. Phishing texts written by AI have a click-through rate of 54% compared to 12% for standard attempts, while vishing increased by 442% between the first and second half of 2024, largely driven by AI impersonation and SIM farms.
Soon, text scams may stand out only because they are better written than those sent by the average user. Experts also believe synthetic voices could become indistinguishable from real ones within a few years.
Defensive AI will help raise the bar by detecting scam behavior rather than simply identifying AI‑generated content. However, criminals are not constrained by regulation and often have significant resources. AI alone will not solve the problem — broader cooperation and policy measures are also needed.
In time, we may reach a plateau similar to the one reached with computer viruses: they still exist and cause harm, but there is some level of control.
What Should Organizations Do If They Want to Contribute to the Fight Against Scams?
Join GASA and become part of the global solution-building community. Turning the tide on scams will require coordinated action across sectors and borders.





