The era of ChatGPT-powered propaganda is upon us

  • OpenAI said it disrupted five covert influence operations in the last three months.

  • Groups in China, Russia, Iran, and Israel used its products, OpenAI said.

  • The buzzy AI firm has previously pushed back on safety concerns around its tools.

Groups in China, Iran, Russia, and Israel have been using OpenAI's tools for covert influence operations, according to the company.

In a blog post on Thursday, OpenAI said it has been quick to react, disrupting five operations over the past three months that sought to manipulate public opinion and sway political outcomes through deception.

The operations OpenAI shut down harnessed AI to generate comments and articles in different languages, make up names and bios for fake social media accounts, debug code, and more.

OpenAI said it thwarted two operations in Russia, one in China, one in Iran, and one by a commercial company in Israel.

The campaigns involved "Russia's invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments," OpenAI wrote in its blog.

The company said the campaigns didn't rely on AI tools alone; they also used human operators.

Some of the actors used AI to improve the quality of their campaigns, like producing text with fewer errors, while others used AI to increase their output, like generating larger volumes of fake comments on social media posts.

For example, OpenAI said the operation in Israel used AI to produce short texts about the war in Gaza, post them on social media, and then reply to those posts with AI-generated comments from fake accounts.

But, OpenAI noted, none of the campaigns had any meaningful engagement from actual humans, and their use of AI did not help them increase their audience or reach.

OpenAI said its own AI helped it track down the bad actors, and that it partnered with other businesses and government organizations on the AI-assisted investigations.

The investigations "took days, rather than weeks or months, thanks to our tooling," OpenAI said.

The company said its AI products also have built-in safety defenses that limit how much bad actors can misuse them; in multiple cases, its tools refused to produce the images and text the actors requested.

OpenAI continues to flex its commitment to safety and transparency, but not everyone is buying it. Some, including OpenAI CEO Sam Altman himself, have argued that highly advanced AI could pose an existential threat to humanity.

Stuart Russell, a leading AI researcher and a pioneer of the technology, previously told Business Insider that he thinks Altman is building out technology before figuring out how to make it safe — and called that "completely unacceptable."

"This is why most of the safety people at OpenAI have left," Russell told BI.
