Inside the Reluctant Fight to Ban Deepfake Ads

Without new rules, campaigns could hoodwink voters with AI-generated ads. And no one really seems to be taking the threat seriously.
Photo-illustration: Jacqui VanLiew; Getty Images

Two things happened this week that got me really worried about AI’s role in the US election:

First, WIRED published a massive story on how voters in India have received over 50 million deepfaked voice calls imitating candidates and political figures. That’s a lot of deepfakes, and voters are mistaking them for the real thing.

Second, the Federal Communications Commission announced this week that it’s considering new AI ad rules only a few months after it banned synthetic robocalls. (Synthetic ads are ads that are created or altered with AI.) Excuse me, but why is the FCC the only government entity that’s approved new AI and elections rules this year? The Indian election should be a warning sign for the US to get busy regulating, but the FCC is the only one picking up the phone.

Let’s talk about it.


This is an edition of the WIRED Politics Lab newsletter. Sign up now to get it in your inbox every week.



The US Is Running Out of Time to Stamp Out Deepfake Political Ads

Remember when the Republican National Committee put out an AI-generated ad attacking Biden? Or when Florida governor Ron DeSantis’ super PAC released an AI ad that mimicked former president Donald Trump? It’s been almost a year since both of those ads came out, and despite all the outrage at the time, there are still no new laws governing AI ads.

Last year, Senate majority leader Chuck Schumer started holding meetings with a rotating set of stakeholders and AI industry leaders to develop solutions to issues raised by generative AI. One of the leader’s priorities was to protect US elections from whatever mess the tech may create ahead of November. He has issued a report and pushed senators to turn that guidance into law, but that’s about all that’s happened.

The FCC can’t do as much as Congress can, but of the two, it has done more. In February, the agency outlawed using generative AI in robocalls in response to the New Hampshire call impersonating President Joe Biden. On Wednesday, chairwoman Jessica Rosenworcel went further, proposing that broadcast television, radio, and some cable political ads disclose when synthetic material is used.

“As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used,” Rosenworcel said in a statement. “Today, I’ve shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue.”

This is all great, but voters are probably going to encounter more digital fakes online than over broadcast. And for digital ads, the government hasn’t issued any solutions.

The advocacy group Public Citizen petitioned the Federal Election Commission to create rules requiring FCC-like disclosures for all political ads, regardless of the medium, but the agency has yet to act. A January Washington Post report said that the FEC plans to make some decision by early summer. But summer is around the corner, and we haven’t heard much. The Senate Rules Committee passed three bills earlier this month to regulate the use of AI in elections, including disclosure requirements, but there’s no guarantee they will hit the floor in time to make a difference.

If you really want to get scared, there are only 166 days until the presidential election. That’s not many days to get something related to AI disclosures over the finish line, especially before the Biden and Trump campaigns, and all the downballot politicians, start dumping even more cash into ads on social platforms.

Without regulations, tech companies will carry much of the responsibility for protecting our elections from disinformation. If that doesn’t sound much different from 2020, I feel the same way! It’s a new issue, but with the same companies leading the charge. In November, Meta said that political ads must include disclaimers when they contain AI-generated content. TikTok doesn’t allow political ads, but it does require creators to label synthetic content that depicts realistic images, audio, or video.

It’s something, but what happens if they make a huge mistake? Sure, Mark Zuckerberg and every other tech CEO may get hauled in by Congress for a hearing or two, but it’s unlikely they’d face regulatory consequences before the election takes place.

There’s a lot at stake here, and we’re running out of time. If Congress or an agency were to issue some guidance, they’d need to do it in the next few months. Otherwise, it might not be worth the effort.

The Chatroom

At the end of the podcast this week, we asked listeners to write in, describing how their experience following politics online has changed since the last presidential election. Are you navigating directly to news sites for election updates? Do you still have a decent relationship with X/Twitter? Maybe you subscribe to newsletters like this one? I want to know about it!

Leave a comment on the site, or send me an email at [email protected].



What Else We’re Reading

🔗 See How Easily A.I. Chatbots Can Be Taught to Spew Disinformation: The New York Times created two chatbots, one liberal and one conservative. Each delivered partisan responses to political questions, sounding a bit too similar to how people speak to one another online. (The New York Times)

🔗 The Good News for Biden About Young Voters: While Biden is polling worse with young voters than in 2020, the numbers may not be as disastrous as they seem. (The Atlantic)

🔗 OpenAI Just Gave Away the Entire Game: Scarlett Johansson’s fiery statement responding to OpenAI’s most recent voice model shows how the company gobbles up data no matter what. (The Atlantic)

The Download

Let me gloat and gush about my desk for a sec, sorry. This week, the WIRED Politics Lab podcast reached the top 20 in Apple Podcasts’ news rankings. We were also one of Amazon Music’s best podcasts of the week!

I’m back on the pod this week with Leah and David, talking about the actual final end of Twitter (X, ugh), the future of digital political communication, and what it all has to do with the New York–Dublin Portal. Check it out here!

And one last thing. Sometimes making good posts is recognizing when you’re out of the loop.

That’s it for today—thanks again for subscribing. You can get in touch with me via email, Instagram, X, and Signal at makenakelly.32.