States Demand Meta Act on Rampant Scam Ads

Meta's platforms enable investment scams, costing billions. States demand action to shield consumers from financial devastation.

States demand stronger safeguards as Meta's AI fails to stop fraudulent ads.

Published: June 11, 2025

Written by Alessandro Nguyen

The Digital Deception Devastating Lives

Scrolling through Facebook, you spot an ad featuring a familiar investor like Warren Buffett, promising wealth with a single click. It looks credible, so you follow the link. Soon you're in a WhatsApp group, pressured to buy a rising stock. But it is a trap, a pump-and-dump scheme whose organizers vanish with your savings. This is a widespread problem. Thousands fall victim on Meta's platforms, and the company's defenses are failing to stop the flood of fraud.

These scams inflict profound harm, erasing life savings and leaving victims financially stranded. According to the Federal Trade Commission, investment fraud drained $5.7 billion from Americans in 2024, with social media as the primary conduit. Seniors face especially brutal losses, with median losses exceeding $9,000. Beyond money, the emotional toll is crushing, leading to ruined retirements, eroded trust, and a sense of violation by platforms we use every day.

California Attorney General Rob Bonta, alongside 41 other state attorneys general, has issued a powerful demand. In a June 2025 letter to Meta, the coalition, whose states are home to more than 90% of the U.S. population, calls for immediate action to curb the surge of scam ads on Facebook and WhatsApp. This bipartisan effort underscores a shared urgency, as Meta's current approach, relying on flawed AI and minimal human oversight, is letting consumers down.

Why Meta's Systems Fall Short

Meta touts its AI-driven moderation as a solution, yet the reality exposes glaring weaknesses. State attorneys general highlight how fake ads, often using deepfaked celebrity images, bypass filters and reach millions. These ads persist for weeks, funneling users into WhatsApp groups where scammers orchestrate their schemes. The financial impact is staggering, with losses in the hundreds of millions and some victims losing over $600,000 in a single con.

The root issue lies in Meta's priorities. Its ad-driven model favors speed over scrutiny, approving ads to boost revenue even at the cost of consumer safety. The Consumer Federation of America reported $16 billion in online scam losses in 2024, a 33% rise in one year, with Facebook driving over half of social-media fraud. Meta's automated systems are overwhelmed, missing scams that trained human reviewers could catch.

This situation marks another alarm for Meta. In 2024, a 41-state coalition flagged a 1,000% spike in account takeovers on Facebook and Instagram. In 2023, attorneys general sued Meta for harmful youth-focused features. Despite promises to improve, the problems endure. Can we really expect change without relentless pressure?

Holding Big Tech Accountable

Some defend Meta, arguing that platforms shouldn't be liable for user-generated scams under Section 230 protections. They claim users bear the responsibility to stay vigilant and warn that government oversight could curb free expression. This perspective crumbles under examination. Paid ads are Meta's direct product, selected by its algorithms and sold for profit. When those ads deceive, Meta shares responsibility for the harm caused.

Legal tides are shifting. A 2024 Third Circuit decision against TikTok permitted negligence claims for algorithm-driven harm, suggesting platforms cannot evade accountability when their systems amplify fraud. Scholars advocate for a 'Platform Design Negligence' framework, where companies face liability for design choices that enable deception. This approach prioritizes consumer protection over concerns about speech restrictions.

The attorneys general demand that Meta strengthen its AI filters, implement robust human review, and verify advertiser identities. If it cannot deliver, they propose banning investment ads from its platforms entirely. This solution is practical and aims to protect consumers, not to punish platforms. With the FTC reporting $1.2 billion in social-media fraud losses in 2022, and the numbers rising since, waiting for voluntary fixes is no longer an option.

Protecting Our Communities Now

Every day Meta hesitates, more lives are upended. Seniors, who often cannot recover from such losses, suffer the most, but no one is immune. Your family, your friends, or you could be the next target. Why do we tolerate a trillion-dollar company neglecting its duty to protect us?

The 42-state coalition of attorneys general demonstrates the power of unified action. Their demand serves as a clear signal that legal consequences may loom if Meta fails to respond. States have a strong legacy of consumer protection, from California's 2018 Consumer Privacy Act to the 1998 tobacco Master Settlement Agreement, stepping in when federal efforts falter. Meta must act swiftly or face multistate enforcement.

We need platforms that value our security as much as their profits. Meta must commit to stronger AI, expanded human oversight, and zero tolerance for fraud. If it refuses, lawmakers and attorneys general should impose penalties and reforms to ensure no one else loses their future to a scam. Our savings, our trust, and our communities demand nothing less.