

F-Alert U.S. Cyber Threats Bulletin February 2026

Discover the latest online threats and cyber security trends impacting businesses and consumers in the United States, brought to you by F-Secure's threat intelligence specialists.

February's F‑Alert highlights our cyber threat predictions for 2026, covering autonomous AI agents, synthetic identity fraud, international scam centers, and rising job scam activity on LinkedIn. Throughout, we provide expert commentary and practical guidance to help navigate these risks.

2026 Prediction: AI Agents Acting on Our Behalf Will Redefine Cyber Risk

In 2026, agentic AI will fundamentally reshape the digital threat landscape. These systems will no longer just assist users; they will act independently, making decisions, moving money, and interacting with services on our behalf.

Key facts:

  • This shift in the way we interact with the digital world introduces a new class of cyber risk, where software becomes both the operator and the victim.

  • Agentic AI lacks the contextual judgment humans rely on to detect manipulation, deception, or social engineering. As adoption accelerates across consumer and business services, attackers will focus on exploiting decision logic rather than user behavior.

  • Security, oversight, and clearly defined limits on AI autonomy are critical foundations for the next phase of digital life.

When AI systems are trusted to act autonomously, the security challenge changes completely. It's no longer enough to protect people from making bad decisions online. We now must protect AI systems that make those decisions for them, often at machine speed and without human intuition.

Timo Laaksonen, President and CEO at F‑Secure

2026 Prediction: Scam Centers Will Become a High-Profile Security Threat

In 2026, industrial-scale scam centers operating out of Southeast Asia will become one of the most visible and politically charged cyber security threats facing the U.S.

Key facts:

  • The newly formed Scam Center Strike Force—a joint effort between the U.S. Attorney’s Office, DOJ, FBI, and Secret Service—has already seized more than $400 million in cryptocurrency and dismantled major scam infrastructure linked to Cambodia, Laos, and Myanmar.

  • Last year, U.S. officials seized $15 billion in bitcoin allegedly tied to scam mastermind Chen Zhi and charged him with fraud and money laundering. He was recently arrested by Cambodian authorities and extradited to China.

  • While enforcement is accelerating, experts warn that lasting success will require sustained international cooperation and deeper public-private partnerships to stop platforms and internet services from being used to enable cyber crime.

These scam centers represent a massive wealth transfer from American families to organized crime. Nearly $10 billion is stolen from Americans every year, and what’s different now is the coordinated effort to dismantle not just the scammers, but the entire ecosystem that enables them, including U.S.-based internet infrastructure.

Dr Megan Squire, Threat Intelligence Researcher at F‑Secure

Trending Scam: "Truman Show" WhatsApp Scam Steals IDs and Selfies

What's happening:

  • A new fraud operation dubbed the "OPCOPRO scam" uses AI and bots to build fake communities and trust over weeks, drawing mobile users into a fabricated "Truman Show" style digital world.

  • Victims are typically lured via SMS messages impersonating Goldman Sachs and then added to a WhatsApp "investing" group run by two AI‑generated personas: Professor James and his assistant Lily.

  • After several weeks, victims are pushed to download the OPCOPRO app, complete an identity check by submitting photo ID and a liveness selfie, and deposit funds with promises of 370%–700% returns. The app has no real trading functionality and simply shows fake figures to harvest money and personal data.

What to do:

  • Understand what's at stake: stolen IDs and liveness selfies can enable SIM swaps, workplace account takeovers, and financial fraud.

  • Treat unsolicited investment offers as scams, avoid links or investing groups from unexpected messages, and never submit ID or liveness checks unless independently verified through official channels.

Breach That Matters: Instagram Denies Breach as Reset Emails Alarm Users

What's happening:

  • Instagram continues to deny an alleged breach after many users reported receiving unsolicited password reset emails. The reports come amid claims that data linked to more than 17 million accounts was scraped and leaked online.

  • In response, the platform says it has "fixed an issue that allowed an external party to request password reset emails for some Instagram users" and that "people's Instagram accounts remain secure."

  • At this stage, there's no confirmed evidence of a new breach. The activity could stem from API abuse, recycled or older data aggregated into a larger list, or unrelated password-reset activity misinterpreted as a hack.

What to do:

  • Enable two-factor authentication and stay alert for phishing emails, smishing texts, and social engineering attempts designed to steal passwords.

  • Ignore and delete any password reset emails or SMS messages that were not requested.

2026 Prediction: AI Identity Fraud Will Reach an Industrial Scale

In 2026, criminals will increasingly use AI to generate synthetic identities by combining stolen real data with fabricated details, creating new personas that can pass verification checks, open bank accounts, and secure loans.

Key facts:

  • Research shows that individuals are willingly selling their identities, giving criminals legitimate credentials to exploit.

  • The impact is already measurable: more than 6.4 million identity theft and fraud reports are estimated to have been filed with the Federal Trade Commission in the U.S. alone over the last year.

  • Financial institutions face mounting pressure to adopt multi-layered verification capable of detecting these hybrid identities. Meanwhile, individuals who sell their credentials risk legal liability for crimes and debts committed in their name, creating victims on both sides of the fraud.

AI has made synthetic identity fraud far more convincing and scalable. Criminals can now generate fake documents, bypass video and photo ID checks, and build credit histories that appear legitimate across multiple systems.

Bill Lott, Head of Marketing, Embedded Solutions at F‑Secure

2026 Prediction: LinkedIn's Trust Signals Will Keep It a Target for Job Scams

In 2026, LinkedIn will remain a major channel for employment scams. The platform's scale makes it a prime target—and its high-trust reputation can lend credibility to fraudulent job offers. Job market uncertainty, combined with anxiety about AI replacing roles, will make conditions even more conducive to these schemes.

Key facts:

  • Scammers typically reach victims via direct messages or comments under popular posts. Their "job offers" may promise high pay for minimal experience, flexible hours, and fully remote work, so the scammer never has to meet the victim in person. Job titles may be vague, and the hiring process unrealistically fast.

  • These scams often include obvious red flags, but they target people in vulnerable situations (for example, after a layoff), making victims more likely to engage.

  • Once contact is established, scammers exploit victims by harvesting personal data for identity theft, attempting account takeover, requesting an "advance fee" for recruitment or onboarding, or running a fake equipment reimbursement scheme.

LinkedIn detects and removes tens of millions of fake accounts each year, but some still slip through. To spot these scams, check the sender's profile for low engagement and few connections, look for a verification badge and review its details, and run a reverse image search. If it seems too good to be true, verify the role independently by contacting the company directly.

Joel Latto, Threat Advisor at F‑Secure

Experts Behind the Insights

  • Timo Laaksonen

    President and CEO, F‑Secure

    Timo Laaksonen is President and CEO of F‑Secure, leading the company's strategic growth and operations since 2022. A seasoned leader with extensive experience in cyber security, he also contributes to the broader tech community through industry advisory and board roles.

  • Dr Megan Squire

    Threat Intelligence Researcher, F‑Secure

    Megan Squire holds a PhD in computer science and is the author of two books and 40+ peer-reviewed articles. A recipient of Best Paper Awards and a recognized cyber threat expert, she has been featured in major media including The New York Times, WIRED, and PBS Frontline.

  • Bill Lott

    Head of Marketing, Embedded Solutions, F‑Secure

    Bill Lott is a product marketing leader specializing in launching innovative tech solutions. Holding an Executive Certificate from MIT and an MBA from the University of Georgia, he leads the go-to-market strategy for F‑Secure's embedded security portfolio.

  • Joel Latto

    Threat Advisor, F‑Secure

    Joel Latto is a threat researcher focused on scams and social media. A regular contributor to threat reports, including F‑Secure's F-Alerts, he has also collaborated with Laurea University of Applied Sciences to educate the public about cyber crime.

Contact us

If you’re interested in finding out how we can personalize our solutions to your business strategy and consumer needs, contacting our team of experts is easy. Get in touch to find out:

  • How we integrate our security solutions

  • How we can add value to your business

  • How your customers benefit from our security

