
Cozy Sweater, Cold Scam: How Fraudulent Ads Slip Past Publishers and into AI Shopping Tools

Megan Squire


Two incidents involving the same fake retailer reveal how modern scams can slip past both trusted publishers and even AI shopping tools—failing the very consumers who rely on them.

Recently, while scrolling through a popular news app, I noticed an advertisement featuring a cozy green sweater with little sheep on it. It came with a heartwarming story: an elderly artisan was retiring after decades of handcrafting knitwear, and her boutique was holding a 70% off goodbye sale. The copy was warm, nostalgic, and designed to make you feel before you think.

The problem? The whole thing was fake.

The sweater "boutique" was part of a well-documented network of fraudulent storefronts that ship overpriced junk—if they ship anything at all—and make returns nearly impossible. The "Handcraft Weekly" publication promoting the story wasn't an independent outlet, either. It was just a subpage of the scam site itself, dressed up to resemble editorial content. "Owner retiring" stories are a dime a dozen in this ecosystem: sentimental bait, mass-produced at scale.

If this was built to fool humans, I wanted to see how it would perform against a machine. So I tested an AI shopping assistant.

When AI Does the Selling for Scammers

Just days after OpenAI launched its new "Shopping Research" feature, I decided to test it. I described the same green sweater with sheep on it that I saw in the news site's advertisement, and mentioned part of the name of the store running the ad. Within moments, ChatGPT identified the boutique and its flagship product: "The Little Flock Sweater."

It took the bait.

The AI assistant helpfully noted that the sweater was on sale for $59.99, down from $199, as part of a "70% OFF RETIREMENT SALE." It added that only six were left in stock. It acknowledged seeing some negative reviews on a review site and recommended paying with a credit card for buyer protection.

In other words, ChatGPT did exactly what the operation was designed to elicit: it accepted the emotional story at face value, found the "deal," relayed the artificial urgency, and—with only a mild caveat about being cautious—essentially validated the purchase.

Two Failures, One Broken Trust Pipeline

What makes these two incidents notable isn't that scams exist, or that they're becoming more sophisticated. Both are true. What matters is that this one scam successfully exploited two systems that consumers increasingly rely on as proxies for trust.

Failure 1: Trusted publishers as gatekeepers

This news outlet isn't just any website—it's widely respected, known for high journalistic standards and editorial integrity. When a scam ad appears here, it borrows the site's standing and credibility. Even though the publisher didn't create the scam, its presence acts as a trust signal.

And that effect is amplified by advertorial formatting, where ads mimic the look and feel of journalism. When scam advertising is styled like editorial content, it doesn't just grab attention—it inherits legitimacy.

Failure 2: AI assistants as informed advisors

OpenAI has positioned its "Shopping Research" feature as something that "researches deeply across the internet" and "reviews quality sources." The company claims it uses a specialized GPT-5 variant trained to "read trusted sites" and "cite reliable sources."

But when asked about the fake sweater boutique, the AI didn't detect a scam. It essentially functioned like a sales funnel for a known scam site.

What It Will Take to Make AI Shopping Safer

The solution isn't "stop using AI" or "don't trust publishers." The solution is to recognize that the systems we rely on as trust filters are now being actively targeted—and to build defenses that match the new threat model.

  • Publishers need adversarial ad review: The news outlet didn't create this scam, but the scam borrowed their credibility. The same pattern plays out across many other reputable outlets that carry third-party advertising. Publishers should consider adversarial review processes that specifically look for the telltale signs of shopping scams: fabricated editorial properties, too-good-to-be-true narratives, and recently registered domains.

  • AI shopping tools need scam detection layers: OpenAI claims its "Shopping Research" feature is trained on "trusted sites," but it clearly lacks real-time integration with robust scam databases. Before recommending a relatively unknown retailer, AI shopping assistants should cross-reference against multiple sources and weight negative signals more heavily than they currently do.

  • Consumers need realistic expectations about AI shopping: The marketing around AI shopping assistants emphasizes their sophistication—they "research deeply," "review quality sources," and build "personalized buyer's guides." This creates an impression of due diligence that may not match reality. Consumers should understand that AI shopping tools are optimized for finding products that match descriptions, not necessarily for detecting fraud.

  • The ecosystem needs better information sharing: The speed at which scam operations can rebrand and relaunch outpaces the speed at which consumer protection databases update and AI systems retrain. Closing that gap requires a coordinated approach: real-time threat intelligence sharing between publishers and fraud detection services.
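The scam-detection layer described above can be sketched as a simple pre-recommendation gate. This is a minimal illustration, not a production system: the signal names, weights, and threshold are hypothetical, and a real deployment would pull domain age from WHOIS records and blocklist status from live fraud databases rather than taking them as inputs.

```python
from dataclasses import dataclass

@dataclass
class RetailerSignals:
    """Hypothetical trust signals an AI shopping tool could gather
    before recommending an unfamiliar retailer."""
    domain_age_days: int               # e.g. from a WHOIS lookup (assumed available)
    on_scam_blocklist: bool            # e.g. a fraud-database cross-reference
    negative_review_ratio: float       # 0.0-1.0, share of negative reviews found
    urgency_markers: int               # count of "only N left", countdown timers, etc.
    editorial_pages_self_hosted: bool  # the "publication" is a subpage of the shop itself

def scam_risk_score(s: RetailerSignals) -> float:
    """Weighted heuristic; higher means riskier. Weights are illustrative only,
    but negative signals (a blocklist hit) are deliberately weighted heavily."""
    score = 0.0
    if s.on_scam_blocklist:
        score += 0.5                          # hard negative signal
    if s.domain_age_days < 180:
        score += 0.2                          # recently registered domain
    score += 0.2 * s.negative_review_ratio    # review sentiment
    score += min(0.05 * s.urgency_markers, 0.15)  # capped urgency penalty
    if s.editorial_pages_self_hosted:
        score += 0.15                         # fabricated editorial property
    return min(score, 1.0)

def recommendation_gate(s: RetailerSignals, threshold: float = 0.5) -> str:
    """Block the recommendation outright when the risk score crosses a threshold."""
    return "block" if scam_risk_score(s) >= threshold else "allow"
```

A retailer matching the sweater boutique's profile (fresh domain, blocklist hit, self-hosted "publication," stock-countdown urgency) would score well above a 0.5 threshold and be blocked, while an established shop with organic reviews would pass. The design choice worth noting is the gate itself: rather than folding risk into a ranking, a known-bad signal should veto the recommendation entirely.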

Trust at Scale, Exploited at Scale

We're at a turning point in how people shop. As AI mediates more of the buying journey, the trust we place in these systems becomes a vulnerability that can be exploited at scale.

The sheep sweater story isn't an anomaly—it's a case study in what's already happening. And in a world where scams are optimized for both traditional advertising platforms and AI shopping agents, a single well-crafted operation can reach millions of consumers through channels they've been taught to trust.

Expert behind the insights

Dr Megan Squire

Threat Intelligence Researcher, F-Secure

Megan Squire holds a PhD in computer science and is the author of two books and 40+ peer-reviewed articles. A recipient of Best Paper Awards and a recognized cyber threat expert, she has been featured in major media including The New York Times and PBS Frontline.