Opinion: Here’s what’s at risk if Big Tech doesn’t address deceptive AI content
Editor’s Note: Timothy Karr is the senior director at Free Press, a non-partisan, not-for-profit organization that advocates for a more just and democratic media system. The views expressed here are his own. Read more opinion on CNN.
Last Friday, 20 technology platforms agreed to better label and curtail AI-generated disinformation that’s being spread online to deceive voters during a busy election year. They pledged to provide “swift and proportionate responses” to deceptive AI content about the election, including sharing more information about “ways citizens can protect themselves from being manipulated or deceived.”
This voluntary commitment, signed by Google, Microsoft, Meta, OpenAI, TikTok and X (formerly Twitter), among others, does not outright ban so-called political "deepfakes," false video or audio depictions of candidates, leaders and other influential public figures. Nor do the platforms agree to restore the sizable teams they had in place to safeguard election integrity in 2020. Even at those previous staffing levels, these teams struggled to stop the spread of disinformation about the election result, which helped fuel the violence at the US Capitol Building as Congress prepared to certify President Joe Biden's victory.
Instead, the platforms have pledged to set high expectations in 2024 for how they "will manage the risks arising from deceptive AI election content," according to the joint accord. Their actions will be guided by several principles, including prevention, detection, evaluation and public awareness.
If the platforms want to prevent a repeat of 2020, they need to do much more now that technology makes it possible to dupe voters with these deceptively believable facsimiles. And they need to match their pledges to do better in 2024 with actual enforcement and debunking that can be documented and shared with the public, something they have failed to do with any consistency in the past.
In December, Free Press found that, between November 2022 and November 2023, Meta, X and YouTube eliminated a total of 17 critical policies across their platforms. This included rolling back election misinformation policies designed to limit "Big Lie" content about the 2020 vote. During roughly the same period, Google, Meta and X collectively laid off approximately 40,000 employees, with significant cuts in content moderation and trust and safety. At the time, the platforms justified the staffing cuts as necessary to align their companies with a "different economic reality" (Google) or as a response to earlier capital expenditures that "did not play out [as] … expected" (Meta).
This backsliding erodes accountability across prominent platforms, as tech companies turn their backs on years of evidence pointing to the outsized role they play in shaping public discourse, civic engagement and democracy. Their role as conduits of misinformation will likely grow as the sophisticated AI tools needed to create deepfakes of politicians become more widely available to social media users. Unless the platforms demonstrably enforce these and even stronger rules against the spread of voter disinformation, we will face more high-tech efforts to hijack elections worldwide.
It’s already happening. Last year, during the Chicago mayoral race, a fake audio recording meant to mimic candidate Paul Vallas circulated on X, falsely depicting him as condoning police violence in the city.
At the end of 2023, Free Press urged companies like Google, Meta and X to implement a detailed set of guardrails against rampant abuse of AI tools during the 2024 election year. These include reinvesting in the human staff needed to safeguard voters and moderate content. The companies must also become more transparent by regularly sharing data on core metrics with researchers, lawmakers and journalists.
At the same time, we called on lawmakers to establish clear rules against abuses of AI technology, especially in light of the increasing use of deepfakes both in the United States and abroad. This includes passing laws that require tech platforms to publish regular transparency reports on their AI vetting and moderation tools and to disclose their decision-making process when taking down questionable political ads.
There has been plenty of activity on the topic in Congress, including numerous briefings, forums and listening sessions, but few actionable results. Senators and representatives have introduced dozens of bills, some good and some bad, but none has made it to a floor vote. Meanwhile, the Federal Trade Commission is stepping in to fill the regulatory void, last week proposing a new rule that would make it illegal to use AI to impersonate anyone, including elected officials. The agency could vote on the rule as early as spring, after inviting and reviewing public comments on the issue.
With the widespread use of AI, the online landscape has shifted dramatically since 2020. But the fact remains: Democracy cannot survive without reliable sources of accurate news and information. As we learned in the aftermath of the 2020 vote, there are dangerous, real-world consequences when platform companies retreat from commitments to root out disinformation.
Voluntary pledges must be more than a PR exercise. Unless the companies permanently restore election integrity teams and actually enforce rules against the rampant abuse of AI tools, democracy worldwide could well hang in the balance.