Meta and OpenAI ‘disrupt’ Israeli firm’s covert operation to influence views on Gaza war
Meta and OpenAI claim to have disrupted covert online influence operations run by an Israeli company amid the intensifying war in Gaza.
The tech giants said STOIC, a political marketing and business intelligence firm based in Tel Aviv, had been using their products and tools to manipulate political conversations online.
OpenAI, the maker of ChatGPT, said in a report on Thursday that it banned a network of accounts linked to STOIC, which it accused of posting anti-Hamas and pro-Israel content and acting as a “for-hire Israeli threat actor”.
The accounts used OpenAI models to spread disinformation about the war in Gaza and, to a lesser extent, about the ongoing Indian election.
Specifically, they used the AI models to generate “articles and comments that were then posted across multiple platforms, notably Instagram, Facebook, X, and websites associated with this operation”, OpenAI said in a blog post.
These included texts on specific themes, such as the Gaza war.
The influence operation also faked engagement, OpenAI alleged.
“Some of the campaigns we disrupted used our models to create the appearance of engagement across social media, for example, by generating replies to their own posts to create false online engagement,” it said.
But the network’s activity, according to OpenAI, “appears to have attracted little if any engagement, other than from its own inauthentic accounts”.
STOIC markets an AI content creation system that, it says, helps users “automatically create targeted content and organically distribute it quickly to the relevant platforms”.
Meta confirmed in a quarterly security report on Wednesday that it had removed more than 500 Facebook accounts, 11 pages and one group, along with more than 30 Instagram accounts, all tied to the same influence operation.
It said accounts posing as Jewish students, African Americans and other concerned citizens targeted audiences in the US and Canada as part of the covert campaign linked to STOIC.
“There are several examples across these networks of how they use likely generative AI tooling to create content. Perhaps it gives them the ability to do that quicker or to do that with more volume. But it hasn’t really impacted our ability to detect them,” Meta’s head of threat investigations, Mike Dvilyanski, told Reuters.
The Facebook parent said it had banned STOIC and issued a letter “demanding that they immediately stop activity that violates Meta’s policies”.
STOIC did not immediately respond to a request for comment from The Independent.