
Discover the latest online threats and cyber security trends impacting businesses and consumers worldwide, brought to you by F-Secure’s threat intelligence specialists.
February’s F‑Alert highlights our cyber threat predictions for 2026, covering autonomous AI agents, synthetic identity fraud, international scam centers, and rising job scam activity on LinkedIn. Throughout, we provide expert commentary and practical guidance to help navigate these risks.
In 2026, agentic AI will fundamentally reshape the digital threat landscape. In this article, Timo Laaksonen examines how these systems will move beyond assistance to act autonomously, making decisions, moving money, and interacting with services on our behalf.
In 2026, industrial-scale scam centers operating out of Southeast Asia will become one of the most visible and politically charged cyber security threats facing the U.S. In this article, Dr Megan Squire explains how these operations manipulate trust and why platforms now need stronger verification safeguards.
A new 'OPCOPRO' scam uses AI and bots to build trust over weeks via fake online communities, drawing mobile users into a fabricated "Truman Show" style digital world. In this article, we break down how victims are lured, how the scheme unfolds, and how consumers can protect themselves.
Instagram is denying an alleged breach after users reported receiving unexpected password reset emails. The claims come amid reports that data tied to more than 17 million accounts was scraped and leaked online. In this article, we explore what may have actually happened.
In 2026, criminals will increasingly use AI to build synthetic identities by mixing stolen personal data with fabricated details, creating personas that can pass verification checks and access financial services. In this article, Bill Lott outlines how scammers obtain this information and what they use AI‑generated identities for.
In 2026, LinkedIn is likely to remain a major channel for employment scams. In this article, Joel Latto outlines how the platform's scale and high-trust reputation can make fraudulent job offers appear credible, and how job market uncertainty—compounded by anxiety about AI replacing roles—may further enable these schemes.
2026 Prediction: AI Agents Acting on Our Behalf Will Redefine Cyber Risk
In 2026, agentic AI will fundamentally reshape the digital threat landscape. These systems will no longer just assist users, but act independently, making decisions, moving money, and interacting with services on our behalf.
Key facts:
This shift in how we interact with the digital world introduces a new class of cyber risk, in which software becomes both the operator and the victim.
Agentic AI lacks the contextual judgment humans rely on to detect manipulation, deception, or social engineering. As adoption accelerates across consumer and business services, attackers will focus on exploiting decision logic rather than user behavior.
Security, oversight, and clearly defined limits on AI autonomy are critical foundations for the next phase of digital life.
When AI systems are trusted to act autonomously, the security challenge changes completely. It's no longer enough to protect people from making bad decisions online. We now must protect AI systems that make those decisions for them, often at machine speed and without human intuition.
Timo Laaksonen, President and CEO at F‑Secure

2026 Prediction: Scam Centers Will Become a High-Profile Security Threat
In 2026, industrial-scale scam centers operating out of Southeast Asia will become one of the most visible and politically charged cyber security threats facing the U.S.
Key facts:
The newly formed Scam Center Strike Force—a joint effort between the U.S. Attorney’s Office, DOJ, FBI, and Secret Service—has already seized more than $400 million in cryptocurrency and dismantled major scam infrastructure linked to Cambodia, Laos, and Myanmar.
Last year, U.S. officials seized $15 billion in bitcoin allegedly tied to scam mastermind Chen Zhi and charged him with fraud and money laundering. He was recently arrested by Cambodian authorities and extradited to China.
While enforcement is accelerating, experts warn that lasting success will require sustained international cooperation and deeper public-private partnerships to stop platforms and internet services from being used to enable cyber crime.
These scam centers represent a massive wealth transfer from American families to organized crime. Nearly $10 billion is stolen from Americans every year, and what’s different now is the coordinated effort to dismantle not just the scammers, but the entire ecosystem that enables them, including U.S.-based internet infrastructure.
Dr Megan Squire, Threat Intelligence Researcher at F‑Secure

Trending Scam: "Truman Show" WhatsApp Scam Steals IDs and Selfies
What's happening:
A new fraud operation dubbed the 'OPCOPRO scam' uses AI and bots to build fake communities and trust over weeks, drawing mobile users into a fabricated "Truman Show" style digital world.
Victims are typically lured via SMS messages impersonating Goldman Sachs and then added to a WhatsApp "investing" group run by two AI‑generated personas: Professor James and his assistant Lily.
After several weeks, victims are pushed to download the OPCOPRO app, complete an identity check by submitting photo ID and a liveness selfie, and deposit funds with promises of 370%–700% returns. The app has no real trading functionality and simply shows fake figures to harvest money and personal data.
What to do:
Be aware that stolen IDs and liveness selfies can enable SIM swaps, workplace account takeovers, and financial fraud.
Treat unsolicited investment offers as scams, avoid links or investing groups sent in unexpected messages, and never submit ID documents or complete liveness checks unless you have independently verified the request through official channels.
Breach That Matters: Instagram Denies Breach as Reset Emails Alarm Users
What's happening:
Instagram continues to deny an alleged breach after many users reported receiving unexpected password reset emails. The reports come amid claims that data linked to more than 17 million accounts was scraped and leaked online.
In response, the platform says it has "fixed an issue that allowed an external party to request password reset emails for some Instagram users" and that "people's Instagram accounts remain secure."
At this stage, there's no confirmed evidence of a new breach. The activity could stem from API abuse, recycled or older data aggregated into a larger list, or unrelated password-reset activity misinterpreted as a hack.
What to do:
Enable two-factor authentication and stay alert for phishing emails, smishing texts, and social engineering attempts designed to steal passwords.
Ignore and delete any password reset emails or SMS messages that were not requested.
2026 Prediction: AI Identity Fraud Will Reach an Industrial Scale
In 2026, criminals will increasingly use AI to generate synthetic identities by combining stolen real data with fabricated details, creating new personas that can pass verification checks, open bank accounts, and secure loans.
Key facts:
Research shows that individuals are willingly selling their identities, giving criminals legitimate credentials to exploit.
The impact is already measurable: in the U.S. alone, more than 6.4 million identity theft and fraud reports are estimated to have been filed with the Federal Trade Commission over the past year.
Financial institutions face mounting pressure to adopt multi-layered verification capable of detecting these hybrid identities. Meanwhile, individuals who sell their credentials risk legal liability for crimes committed and debts incurred in their name, creating victims on both sides of the fraud.
AI has made synthetic identity fraud far more convincing and scalable. Criminals can now generate fake documents, bypass video and photo ID checks, and build credit histories that appear legitimate across multiple systems.
Bill Lott, Head of Marketing, Embedded Solutions at F‑Secure

2026 Prediction: LinkedIn's Trust Signals Will Keep It a Target for Job Scams
In 2026, LinkedIn will remain a major channel for employment scams. The platform's scale makes it a prime target—and its high-trust reputation can lend credibility to fraudulent job offers. Job market uncertainty, combined with anxiety about AI replacing roles, will make conditions even more conducive to these schemes.
Key facts:
Scammers typically reach victims via direct messages or comments under popular posts. Their "job offers" may promise high pay for minimal experience, flexible hours, and fully remote work, which conveniently means the scammer never has to meet the victim in person. Job titles may be vague, and the hiring process unrealistically fast.
These scams often include obvious red flags, but they target people in vulnerable situations (for example, after a layoff), making victims more likely to engage.
Once contact is established, scammers exploit victims by harvesting personal data for identity theft, attempting account takeover, requesting an "advance fee" for recruitment or onboarding, or running a fake equipment reimbursement scheme.
LinkedIn detects and removes tens of millions of fake accounts each year, but some still slip through. To spot these scams, check the sender's profile for low engagement and few connections, look for a verification badge and review its details, and run a reverse image search. If it seems too good to be true, verify the role independently by contacting the company directly.
Joel Latto, Threat Advisor at F‑Secure

