FCC May Require Political Ads to Disclose AI-Generated Content

The FCC is considering a new rule that would require disclosure of the use of artificial intelligence in political ads, though the agency would not prohibit AI-generated content itself.

FCC Chairwoman Jessica Rosenworcel on Wednesday announced a new agency proposal that, if adopted, would examine whether radio and TV broadcasters, cable TV operators and satellite TV providers should be required to disclose when candidate or issue-oriented political ads contain AI-generated content.

“As artificial intelligence tools become more accessible, the commission wants to make sure consumers are fully informed when the technology is used,” Rosenworcel said in a statement. “Today, I’ve shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue.”

Last year, the Republican National Committee released an AI-generated attack ad depicting a dystopian future if President Biden were reelected in 2024 — featuring realistic-looking photos of boarded-up storefronts, military patrols in the streets and waves of immigrants creating panic. The RNC ad did include a disclosure that said, “An AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024.”

Concerns over the use of AI specifically to promulgate political misinformation go back years. In 2018, filmmaker Jordan Peele produced a video for BuzzFeed that put Peele’s words into the mouth of former President Barack Obama, aiming to raise awareness of deepfakes.

As noted by the AP, the FCC earlier this year ruled that AI voice-cloning tools in robocalls are banned under existing law. That came after a Democratic political consultant, saying he wanted to draw attention to the issue of AI-generated political shenanigans, used an AI-faked voice mimicking President Biden to urge Democrats to not vote in the New Hampshire primary.

The FCC’s proposal seeks comment on whether to require an on-air disclosure and a written disclosure in broadcasters’ political files when political ads contain AI-generated content, as well as comment on a “specific definition of AI-generated content.”

Advocacy groups including Common Cause weighed in with support for the FCC proposal.

“This rulemaking is welcome news as the use of deceptive AI and deepfakes threaten our democracy and is already being used to erode trust in our institutions and our elections,” Common Cause media and democracy program director Ishan Mehta said in a statement. “We have seen the impact of AI in politics in the form of primary ads using AI voices and images, and in robocalls during the primary in New Hampshire.”

Mehta added, “We urge Congress and other agencies like the Federal Election Commission to follow the FCC’s lead and take proactive steps to protect our democracy from very serious threat posed by AI.”
