AI employees are asking for more whistleblower protections

5 Jun 2024

Sam Altman in 2022. Image: Village Global/Flickr (CC BY 2.0)

In a letter, OpenAI and Google DeepMind employees said that current and former staff are ‘among the few people’ who can hold AI companies accountable to the public.

Current and former employees of frontier AI companies OpenAI, Google DeepMind and Anthropic have raised concerns that “ordinary” whistleblower protections are insufficient to ensure they can safely talk about risks associated with the technology.

In a letter published yesterday (4 June), the employees – mostly associated with OpenAI – called on the companies leading the charge in AI development to commit to a set of principles establishing safeguards for employees who criticise their companies over risk-related concerns.

“We believe in the potential of AI technology to deliver unprecedented benefits to humanity. We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction,” the letter reads.

The 13 employees, two of whom are associated with DeepMind and Anthropic, say they are hopeful that these risks can be “adequately mitigated” with guidance from the scientific community, policymakers and the public, but warn that AI companies have “strong financial incentives” to avoid oversight.

“We do not believe bespoke structures of corporate governance are sufficient to change this,” the letter continues, adding that the AI companies possess “substantial” non-public information about the capabilities and limitations of their systems.

“[The companies] currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily. So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public.”

‘Ordinary whistleblower protections insufficient’

This comes as current and former OpenAI employees blew the whistle on what they call a culture of “recklessness and secrecy” at the AI start-up led by CEO Sam Altman. Speaking to The New York Times, they called for greater transparency and protections for whistleblowers.

“Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” the whistleblowers’ letter reads.

“Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated. Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues.”

Now, they want these frontier AI firms to make a sweeping set of promises ensuring they neither prohibit criticism of the companies nor retaliate against critics by “hindering any vested economic benefit”. They also want the companies to facilitate an anonymous process for current and former employees to raise risk-related concerns with the board.

Overall, the employees want companies such as OpenAI, Google DeepMind and Anthropic to support a culture of “open criticism” and not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed.

“We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily,” the letter continues.

“Therefore, once an adequate process for anonymously raising concerns to the company’s board, to regulators and to an appropriate independent organisation with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public.”

In response to the New York Times article and the open letter, a spokesperson for OpenAI, Lindsey Held, said in a statement: “We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society and other communities around the world.”

A Google spokesperson declined to comment to the Times.


Vish Gain was a journalist with Silicon Republic.

editorial@siliconrepublic.com