
US Congress Builds a Deepfake Defense, But Forgets Some Victims

Megan Squire


US Congress is finally taking deepfakes seriously. The TAKE IT DOWN Act just passed with overwhelming bipartisan support, criminalizing non-consensual intimate deepfakes. The DEFIANCE Act, which creates a civil right to sue over these forgeries, is expected to follow.

Together, these bills attempt to remediate the most damaging deepfake harms. But they leave a $12.5 billion blind spot: financial fraud.

The Current Foundation

TAKE IT DOWN and DEFIANCE both address deepfakes as violations of personal dignity. The TAKE IT DOWN Act recognizes that sexual deepfakes are a form of abuse, punishable by criminal law and requiring platforms to remove content within 48 hours. The DEFIANCE Act empowers victims to seek civil damages directly.

This framework makes sense for its targets. When someone creates a pornographic deepfake, the harm is immediate and personal: the victim is violated, their reputation damaged, their opportunities foreclosed. A legislative response that imposes criminal and civil penalties addresses these harms directly.

The Financial Fraud Gap

Financial deepfakes operate differently. When scammers use AI to clone your grandson's voice and claim he's been arrested, they're not stealing his likeness for profit. They're weaponizing your trust to steal your money. The harm falls not just on the person being impersonated but also on you: the family member who wires $10,000 for "bail" money.

The Preventing Deep Fake Scams Act, introduced in both chambers, acknowledges this gap but offers only a band-aid: a Treasury-led task force to study the problem and report back in a year. While the bipartisan sponsorship team deserves credit for recognizing the issue, financial fraud victims need protection now, not recommendations later.

What Financial Fraud Legislation Should Include

Building on the Preventing Deep Fake Scams Act’s task force approach, comprehensive financial deepfake legislation needs several key components:

Liability Framework

Current law leaves victims holding the bag when they are tricked into authorizing fraudulent transfers. We need clear rules about when financial institutions bear responsibility for failing to detect obvious red flags, balanced with safe harbors for the banks that do implement best practices.

Real-Time Response Requirements

The TAKE IT DOWN Act's 48-hour removal deadline shows Congress understands urgency. Financial deepfake legislation needs similar speed requirements for freezing suspicious transactions and responding to fraud reports. Every hour counts when money is being moved through a labyrinth of offshore banks.

Enhanced Penalties

While existing wire fraud statutes apply, deepfake-enabled fraud should carry enhanced penalties similar to how we treat crimes against seniors and other vulnerable victims. Using AI to impersonate someone's loved one represents a particularly heinous form of psychological manipulation.

Information Sharing Between Banks

As a recent ProPublica investigation revealed, fraudsters exploit the fact that banks are not communicating with each other. While Section 314(b) of the USA PATRIOT Act allows voluntary sharing between banks, application is scattershot.

We need mandatory sharing of suspect account information, similar to Australia's Scams Prevention Framework or Thailand's Central Fraud Register. When Chase discovers a fraudulent account, Wells Fargo needs to know immediately, before the same criminals open accounts there.

This information-sharing requirement is particularly crucial because scammers systematically exploit multiple banks, knowing that Bank of America doesn't have to tell Chase about suspicious accounts, and Chase doesn't need to warn Wells Fargo. This silence enables billions in fraud that could be prevented with mandatory real-time communication between financial institutions. 

Enhanced Transaction Verification Standards

Banks sometimes verify that you are who you say you are, but they don't verify the story behind the wire. Legislation could require cooling-off periods for unusual transfers, callback procedures to verified numbers, or automated flags for common scam narratives.

The ProPublica investigation found that one victim successfully wired money 10 out of 11 times he attempted it, despite having no history of making wire payments. Some banks already do this voluntarily by probing customers about wire destinations, but it's not required. As the ProPublica piece shows, most banks simply process the transaction without questioning the underlying story.

Completing the Legislative Framework

The current bills create important protections, and sexual exploitation and celebrity impersonation are real harms deserving real consequences. But as deepfake technology spreads, the primary threat to most Americans isn't having their likeness stolen; it's having their money stolen.

Congress should pass the Preventing Deep Fake Scams Act as a first step, then immediately begin work on comprehensive financial fraud legislation. We cannot afford to wait a year for task force recommendations while Americans lose hundreds of millions to entirely preventable scams.

The current bills are important first steps. Now we need to close the financial fraud gap before more families lose their life savings.