Former OpenAI exec’s AI safety start-up raises $1bn

5 Sep 2024


Sutskever left OpenAI and co-founded Safe Superintelligence, which aims to develop advanced AI while ensuring safety ‘always remains ahead’.

Ilya Sutskever, the former chief scientist and co-founder of OpenAI, has secured serious financial backing for his ‘superintelligence’ start-up.

The fledgling company – Safe Superintelligence – was announced in June and has already raised $1bn, with backing from major names such as a16z, NFDG, Sequoia, DST Global and SV Angel.

The start-up confirmed the funding in an X post and also said it is hiring – though it did not specify what roles it is looking to fill. Safe Superintelligence was formed a month after Sutskever resigned from OpenAI, ending a tenure of nearly 10 years with the company. He co-founded the start-up with former Y Combinator partner Daniel Gross and former OpenAI engineer Daniel Levy.

Sutskever was one of the leaders of OpenAI’s superalignment team, which was focused on the safety of future AI systems. At the time of his departure, the other head of this team – Jan Leike – also resigned from OpenAI, claiming AI safety had taken a “backseat to shiny products”.

As the name suggests, Sutskever’s start-up is focused on the creation of “safe superintelligence”. The company claims this is its name, mission, product roadmap and “sole focus”.

“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” the company’s founders said in a blogpost. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace.

“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security and progress are all insulated from short-term commercial pressures.”

The company has 10 employees and plans to use the latest funding to boost its computing power and build a small, trusted team of experts, Reuters reports.

Shortly after resigning from OpenAI, Leike joined Anthropic, where he works in a similar safety role. Leike said he planned to “continue the superalignment mission”.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com