Meta Oversight Board Orders Removal of AI-Faked Ronaldo Ad Promoting Scam Game

Facebook’s parent company, Meta, has been told to take down an AI-altered video of Brazilian football star Ronaldo Nazário after the clip was used to promote an online game. The Oversight Board—an independent group that reviews Meta’s content decisions—ruled that the post broke the platform’s rules on fraud and spam.

The video, which had racked up over 600,000 views before being flagged, showed Ronaldo with a badly synced AI voiceover urging users to download an app called Plinko. It claimed players could earn more money from the game than from regular jobs in Brazil, a claim Ronaldo never actually made.

What’s notable, though, is how long it took for the post to come down. Even after it was reported, Meta didn’t prioritize a review. The person who flagged it had to escalate the case twice before the Oversight Board stepped in.

Why This Case Matters

This isn’t just about one misleading ad. The bigger issue is how often AI-generated fakes slip through the cracks. The Board pointed out that only certain teams at Meta have the authority to remove this kind of content, which might explain why so much of it lingers.

And it’s not just Ronaldo. Last month, actress Jamie Lee Curtis called out Mark Zuckerberg directly after her face was used in a deepfake ad. Meta took down the ad but left the original post up, which feels like a half-measure.

The Board’s verdict pushes Meta to enforce its anti-fraud policies more evenly. But let’s be honest—that’s easier said than done. With AI tools getting cheaper and more accessible, these scams are only going to multiply.

The Wider Fight Against Deepfakes

Governments are starting to take notice. Back in May, a bipartisan U.S. law called the Take It Down Act was signed, forcing platforms to remove non-consensual AI-generated images—especially explicit ones—within 48 hours. It’s a response to the surge in deepfake abuse, particularly targeting women and minors.

Even politicians aren’t safe. Just this week, a bizarre deepfake of Donald Trump circulated, showing him seriously suggesting dinosaurs should patrol the U.S.-Mexico border. Absurd? Yes. But it’s another example of how easily these fakes can spread.

The Oversight Board’s decision is a small win, but it’s clear Meta—and other platforms—have a long way to go. For now, the best defense might just be a healthy dose of skepticism. If a post seems off, it probably is.

Edited by Sebastian Sinclair
