OpenAI disrupts threat actors that were using its AI models

31 May 2024


Accounts across Russia, China, Iran and Israel were terminated after posting ‘deceptive’ AI-generated content relating to global geopolitics.

OpenAI says it has disrupted five “covert influence operations” based across the world, including in Russia and China, that were using its AI models in deceptive attempts to manipulate public opinion and influence political outcomes.

The San Francisco start-up behind the free, publicly available AI chatbot ChatGPT said these internet campaigns – disrupted by OpenAI over the last three months – do not appear to have “meaningfully” increased their audience engagement or reach through the use of its products.

These covert operations comprise two networks in Russia, one in China, one in Iran, and a commercial company based in Israel.

While the Russian operations mainly focused on spreading content targeting Ukraine, the Baltic countries and the US, the Chinese threat actor described by OpenAI posted content praising China and criticising its critics.

Meanwhile, the Iranian covert operation focused on supporting Iran while criticising Israel and the US. In Israel, OpenAI said a political campaign management firm called Stoic generated content about Israel’s ongoing war in Gaza.

Other topics covered across these AI-generated content operations included the Indian elections and politics in Europe and the US.

“While these campaigns differed widely in their origins, tactics, use of AI and apparent aims, we identified a number of common trends that illustrate the current state of the IO threat, and the ways the defender community can use AI and more traditional tools to disrupt them,” OpenAI wrote in its report.

“Overall, these trends reveal a threat landscape marked by evolution, not revolution. Threat actors are using our platform to improve their content and work more efficiently. But so far, they are still struggling to reach and engage authentic audiences.”

This report comes just days after the company announced the formation of a new safety and security committee as it begins training its next frontier AI model. The committee will be led by Sam Altman, Bret Taylor, Adam D’Angelo and Nicole Seligman.

OpenAI said it expects its new frontier AI model – presumably GPT-5, although the company hasn’t called it that yet – to “bring us to the next level of capabilities on our path” towards artificial general intelligence.

The company is also forging ties with US and global news organisations including News Corp, Axel Springer, The Atlantic and Vox Media to bring mainstream news to its AI products.


Vish Gain was a journalist with Silicon Republic

editorial@siliconrepublic.com