OpenAI gets defensive after losing top safety researchers

20 May 2024

OpenAI CEO Sam Altman at the World Economic Forum 2024. Image: World Economic Forum/Benedikt von Loebell via Flickr (CC BY-NC-SA 2.0)

OpenAI’s president and CEO claim the company has laid the foundations for the safe deployment of future AI technology, after one of its departing employees claimed safety has taken a ‘backseat to shiny products’.

Executives at OpenAI are defending the company’s methods after losing two of its leading AI safety researchers.

Last week saw two veteran OpenAI employees – Ilya Sutskever and Jan Leike – resign from the company. Sutskever was OpenAI's chief scientist, and the two co-led its superalignment team, which focused on the safety of future AI systems.

Following the departure of these two employees, OpenAI opted to dissolve the superalignment team as a standalone group and integrate it across its research efforts, Bloomberg reports. Leike was initially quiet about his resignation last week, but on 17 May he shared more details on X, revealing his issues with OpenAI.

In this post, Leike said he had been disagreeing with OpenAI’s leadership about the company’s core priorities “for some time” and that these issues reached a “breaking point”. He also claimed his team had been “sailing against the wind” for months and that it was getting “harder and harder” to get crucial research done.

“I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact and related topics,” Leike said.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there … Over the past years, safety culture and processes have taken a backseat to shiny products.”

OpenAI’s response

In response to Leike’s claims, OpenAI president Greg Brockman shared a post on X, written by himself and the company’s CEO Sam Altman. The executives claim OpenAI has raised awareness of the “risks and opportunities” presented by AGI (artificial general intelligence).

“We’ve repeatedly demonstrated the incredible possibilities from scaling up deep learning and analysed their implications, called for international governance of AGI before such calls were popular and helped pioneer the science of assessing AI systems for catastrophic risks,” Brockman and Altman said.

The two also claim that OpenAI has been putting in place the “foundations needed” for the safe deployment of “increasingly capable systems”.

“Figuring out how to make a new technology safe for the first time isn’t easy,” they said. “For example, our teams did a great deal of work to bring GPT-4 to the world in a safe way, and since then have continuously improved model behaviour and abuse monitoring in response to lessons learned from deployment.”

In a later post, Altman responded to criticism about the way OpenAI handles the equity of departing employees and said there was a provision about potential equity cancellation that “should never have been” in any documents or communication.

“This is on me and one of the few times I’ve been genuinely embarrassed running OpenAI; I did not know this was happening and I should have,” Altman said.



Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com