Social media has turned influencers into familiar figures—people we follow, trust, and sometimes feel we know. That dynamic can be powerful. But what happens when the influencer isn't human?
You may have noticed a new kind of creator in your feeds: algorithm-optimized, always on-brand, and sometimes entirely synthetic. AI influencers don't just blur the line between advertising and identity; they can give scammers a reusable front for manufacturing trust at speed and scale.
For brands, AI influencers create a tempting playbook: cheaper, controllable creators that can produce endless content without the unpredictability of human talent. But misused, those same personas create new risk for consumers—persuasive figures that look credible, feel familiar, and can steer audiences toward counterfeit products, phishing links, or fake investment offers.
The question isn't whether AI influencers will become mainstream. They already have. The real question is whether they're actually being used for scams, or whether that threat is still more speculation than reality.
What's an AI Influencer?
AI influencers are virtual characters created with generative AI and deployed on social platforms to influence behavior. Some are photorealistic and hard to distinguish from humans, while others are deliberately stylized. In most cases, they're still tightly scripted and human-directed—but AI is increasingly used to generate images, videos, voices, captions, and even automated interactions.
You'll find AI influencers most often on visual-first platforms like TikTok, YouTube, and Instagram, and increasingly on X. They're typically monetized through a mix of platform revenue, affiliate links, paid partnerships, and subscriptions.
AI Influencers vs Deepfake Celebrity Ads
The AI influencer phenomenon is distinct from scammers splicing AI-generated celebrity likenesses into fraudulent ads. Rather than appearing in one-off ads, these accounts behave like ongoing creators: they post regularly, interact with followers, and build parasocial relationships over time. A consistent identity, storyline, and community are part of the package.
Celebrity deepfakes, by contrast, are typically deployed as short promotional clips. They don't rely on long-term engagement—just instant recognition and authority. That distinction matters because the scam mechanics differ: celebrity deepfakes sell "borrowed trust," while influencer-style personas manufacture trust over time.
How AI Influencers Emerged on Social Media
Social media began as a way to scale real-world relationships online, connecting communities beyond geography. But over time, those incentives shifted toward monetization.
Today, most major platforms run on the attention economy, turning every scroll and interaction into something that can be sold. Influencers became the engine of that system, and brands learned to ride the algorithm alongside them.
Now the system is evolving again. AI influencers—virtual personas that can be generated, scripted, and scaled—are entering the feed not as a novelty, but as a business advantage.
Virtual influencers have been around since the mid-2010s, with Lil Miquela widely cited as the first breakout CGI-created influencer in 2016. She built a huge Instagram following and landed major fashion and music partnerships, proving that a non-human persona could influence at scale. In the 2020s, generative AI accelerated this model, making realistic faces, voices, and always-on content pipelines faster and cheaper to produce—and fueling the AI influencers we recognize today.
Popular AI Influencers and Their Impact on Trust
1. Lil Miquela (@lilmiquela)
LA-based Lil Miquela is the original breakout virtual influencer. She has millions of followers across TikTok, Instagram, and YouTube, and runs paid partnerships like a conventional influencer.
Why it matters for fraud: Lil Miquela helped normalize synthetic personas as credible figures in everyday feeds. That cultural acceptance lowers friction for newer synthetic accounts—legitimate or not—to gain trust quickly.
2. Aitana Lopez (@fit_aitana)
Aitana Lopez is an AI fitness influencer with hundreds of thousands of Instagram followers, reportedly monetized by a Barcelona-based AI modeling agency.
Why it matters for fraud: The fitness and wellness sector is a high-fraud vertical; convincing synthetic influencers can make it easier to sell dubious supplements or counterfeit products using "trusted" lifestyle framing.
3. Mia Zelu (@miazelu)
Mia Zelu is an AI fashion influencer who went viral in 2025 after posting photos of her "attending" Wimbledon, blending seamlessly into an authentic social moment.
Why it matters for fraud: Her popularity shows that a synthetic persona can convincingly fake real-world presence—exactly the credibility trick scammers need to push fake offers or impersonate brands.
How AI Influencers Are Used in Scams: Current Evidence and Limits
While AI influencers are often hyped as a scam risk in the media, real-world evidence suggests misuse is currently concentrated in a small number of use cases. These patterns show where synthetic personas are already profitable for scammers and where technical or human constraints still limit their usefulness.
Below are five ways AI influencers are being used in scams today—or could plausibly be used next—based on documented cases and current threat intelligence.
1. Adult Content Monetization
Goal: Monetize fake personas by building parasocial funnels that push followers to paid adult subscription platforms.
Status: The most profitable and well-documented use of AI influencers in scams.
Key evidence: A 404 Media investigation revealed that Adrianna Avellino, an AI influencer with 94K Instagram followers, used stolen, face-swapped content from real adult content creators.
Fraud methodology:
Creators download videos from real models, use AI to swap faces, and then promote paid subscriptions on platforms like Fanvue, an OnlyFans competitor.
Users may or may not be aware that the influencer is AI-generated.
Using a real model's likeness without permission may constitute impersonation or identity theft.
Why this works: Adult content platforms already rely on parasocial relationships and visual appeal, making them highly vulnerable to synthetic personas. Face-swapping technology lets scammers reuse proven content while presenting a "new" identity, lowering production costs.
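That reliance on recycled footage is also a weakness defenders can exploit. The sketch below shows one common countermeasure, perceptual hash matching, assuming the defender (a platform or the original creator) holds reference frames from the genuine catalog; the imagehash library call is real, but the file paths and the distance threshold of 10 are illustrative assumptions, not a production pipeline.

```python
# Minimal sketch: flag reused footage with perceptual hashing.
# Assumptions (illustrative): the defender holds reference frames from the
# original creator's catalog, the file paths are hypothetical, and the
# distance threshold of 10 is untuned.
from PIL import Image
import imagehash

REFERENCE_FRAMES = ["originals/frame_001.png", "originals/frame_002.png"]
SUSPECT_FRAME = "suspect/frame_001.png"

def phash(path: str) -> imagehash.ImageHash:
    """Perceptual hash that survives re-encoding, resizing, and mild edits."""
    return imagehash.phash(Image.open(path))

suspect_hash = phash(SUSPECT_FRAME)
for ref in REFERENCE_FRAMES:
    distance = phash(ref) - suspect_hash  # Hamming distance between hashes
    if distance <= 10:  # small distance suggests the same underlying footage
        print(f"{SUSPECT_FRAME} likely reuses {ref} (distance={distance})")
```

A face swap alters the face region but usually keeps framing, body, and background intact, which is exactly what a perceptual hash keys on; in practice a defender would hash sampled video frames at scale and tune the threshold against known matches.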
2. Affiliate Marketing and Product Promotion
Goal: Drive sales or affiliate revenue using automated engagement strategies.
Status: Documented use of AI influencers in scams.
Key evidence: A Startup Spells investigation documented Wellness Maddie, an AI influencer with 291.4K TikTok followers and 565K Instagram followers, promoting supplements and using automated chats to convert engagement into sales.
Fraud methodology: Creators deploy deceptive personas, sometimes posing as doctors or health experts, to promote products. This is often supported by automated comments and DM workflows that simulate personal interaction.
Why this works: Affiliate fraud doesn't require long-term credibility—just enough perceived authority to trigger a purchase. AI influencers can rapidly simulate expertise and trust, while automated DMs and comments scale one-to-one persuasion without human labor.
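That automation leaves a detectable fingerprint. As a rough illustration, the Python sketch below flags templated engagement by scoring pairwise comment similarity; the sample comments and the 0.9 cutoff are hypothetical choices, not calibrated values.

```python
# Minimal sketch: flag templated engagement via comment similarity.
# Assumptions: the sample comments are hypothetical, and the 0.9 cutoff is
# an illustrative choice, not a calibrated threshold.
from difflib import SequenceMatcher
from itertools import combinations

comments = [
    "This changed my life, check the link in bio!",
    "This changed my life!! check the link in my bio",
    "Honestly the best supplement I have ever tried",
]

def similarity(a: str, b: str) -> float:
    """Return a ratio in [0, 1]; near-identical strings score close to 1."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

near_duplicates = [
    (a, b) for a, b in combinations(comments, 2) if similarity(a, b) > 0.9
]
print(f"{len(near_duplicates)} near-duplicate pair(s) among {len(comments)} comments")
```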
3. Crypto and Investment Promotion
Goal: Inflate token visibility and price using promotion from AI influencers.
Status: Partially documented use of AI influencers in scams.
Key evidence: An AI agent known as Truth Terminal used social media to promote a cryptocurrency called $GOAT, triggering a reported 9x price surge. Suspicious trading activity was later detected around its promotion of the RUSSELL token.
Fraud methodology: An AI persona hypes a token, framing the pitch as analysis or insight. Followers, who may not recognize the limited or dubious value of financial advice delivered by an AI-generated character, buy in and inflate the price.
Why this works: Crypto markets are highly reactive to narratives and social momentum. AI-generated personas can project confidence, consistency, and technical fluency, creating the illusion of insight or inevitability—even when the underlying financial claims are weak or unverified.
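Those pumps are also measurable after the fact. The sketch below shows the basic idea, assuming you already have a token's hourly prices and the persona's post timestamps; the price series, the post hour, and the z-score threshold of 2 are all illustrative assumptions, not real market data.

```python
# Minimal sketch: test whether abnormal price moves coincide with a persona's
# promotional posts. All inputs and thresholds here are illustrative.
from statistics import mean, stdev

prices = [1.00, 1.01, 0.99, 1.00, 1.02, 1.01, 1.00, 0.99, 1.01, 1.00, 1.48, 2.05]
post_hours = {9}  # hour at which the persona posted about the token (hypothetical)

returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
mu, sigma = mean(returns), stdev(returns)

for hour, r in enumerate(returns):
    z = (r - mu) / sigma if sigma else 0.0
    # An outsized move right after a promotional post is worth investigating.
    if z > 2 and hour in post_hours:
        print(f"hour {hour}: return {r:+.0%} (z={z:.1f}) coincides with a post")
```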
4. Romance Scams
Goal: Extract money through emotional grooming over extended periods.
Status: While AI tools are used in romance scams, there is currently no evidence of AI influencer accounts conducting romance fraud.
Key evidence:
FBI and Interpol warnings about AI use in romance scams refer to chatbots and deepfakes, not influencer accounts.
A raid in the Philippines rescued 800 human scam workers, many forced to pose as romantic interests, suggesting that humans are still preferred even when face-changing tools are used.
AI chatbots often handle initial contact, with humans taking over during later financial extraction stages.
Fraud methodology: None documented for influencer-style AI personas at this time.
Why this hasn't worked yet: Romance fraud depends on prolonged, adaptive emotional manipulation that still favors human operators. While AI tools assist with initial contact, sustaining believable intimacy, improvisation, and long-term grooming remains difficult for influencer-style synthetic personas.
5. Political Manipulation
Goal: Shape beliefs or polarize communities using synthetic parasocial influence.
Status: Despite theoretical potential, there's limited evidence of coordinated AI influencer networks being used for high-risk political manipulation as of late 2025.
Key evidence: Public examples of AI political influencers are sparse and primarily documented in a recent LinkedIn roundup.
Fraud methodology: Attractive AI influencers comment on political news and repeat talking points using emotional language and personal storytelling. Users may not realize the personas are AI-generated, increasing perceived authenticity.
Why this could work:
Political influence relies on identity, emotion, and repetition.
AI influencers can deliver personal stories and opinions at scale, lowering skepticism when audiences perceive them as relatable individuals, especially in low-trust environments.
The highest-risk scenarios involve incitement (e.g., stochastic terrorism) and large-scale amplification of conspiracy theories or disinformation.
What the Evidence Shows and What Comes Next
AI influencers are starting to appear in fraud discussions as multi-purpose tools, but current public evidence shows misuse concentrated in two areas: adult content impersonation and affiliate-driven product promotion. The threat is growing, but it's more specialized than the broader hype suggests.
Where fraud is documented, it relies heavily on stolen content and face-swapping rather than fully synthetic, end-to-end AI personas. That dependency adds friction and limits scale for more complex operations. The most sophisticated scams—romance fraud and high-complexity investment manipulation—still require human operators and improvisation that AI influencer pipelines aren't reliably replicating.
For now, the risk is real but limited in scope. That's good news from a defense perspective. The most visible abuse patterns are already identifiable, measurable, and easier to disrupt. Looking ahead, the biggest risk lies in convergence—more sophisticated influencer-style scam personas, increasingly coordinated synthetic accounts, and persistent gaps in disclosure and enforcement.
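To make "identifiable and measurable" concrete, here is a minimal sketch of the kind of account-level heuristics a defender could compute, assuming access to an account's post timestamps and captions via a platform API or data export; the signal names and cutoffs are illustrative assumptions, not a validated detection model.

```python
# Minimal sketch: weak account-level signals for influencer-style synthetic
# personas. Assumptions: timestamps and captions come from a platform API or
# export; signal names and cutoffs are illustrative, not validated.
from statistics import pstdev

def synthetic_persona_signals(post_intervals_hours: list[float],
                              captions: list[str]) -> list[str]:
    """Return human-readable flags; each is a weak signal, not proof."""
    flags = []
    # 1. Machine-like cadence: human creators post at irregular intervals.
    if len(post_intervals_hours) > 1 and pstdev(post_intervals_hours) < 0.5:
        flags.append("near-constant posting cadence")
    # 2. Templated captions: heavy reuse suggests automated generation.
    if captions and len(set(captions)) / len(captions) < 0.5:
        flags.append("high caption duplication")
    return flags

# Hypothetical account that posts every 24 hours and recycles one caption.
print(synthetic_persona_signals([24.0, 24.1, 23.9, 24.0],
                                ["New drop!"] * 4))
```

Any single flag also matches plenty of legitimate scheduled-posting brand accounts, so real systems would combine many such weak signals with manual review rather than act on one alone.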