Anthropic updates policy to address AI risks

16 Oct 2024

Image: © Timon/Stock.adobe.com

Anthropic has developed a framework for assessing AI capabilities so it can better respond to emerging risks.

Anthropic, the AI safety and research start-up behind the chatbot Claude, has updated its responsible scaling policy, introducing a more flexible approach to assessing and managing AI risks.

To support this new approach, the start-up is hiring for a number of roles focused on risk management, including a head of responsible scaling.

The risks associated with AI and its capabilities are growing as the technology develops at a staggering speed. Last year, Geoffrey Hinton, the recent Nobel laureate known as the “godfather of AI”, quit Google to speak openly about the dangers of AI. “Given the rate of progress, we expect things to get better quite fast,” Hinton told the BBC at the time. “So we need to worry about that.”

First announced in September 2023, Anthropic’s Responsible Scaling Policy is a framework for managing risks from increasingly “capable” AI systems. The framework proposes stronger security and safety measures depending on an AI model’s capability – the higher the capability, the stricter the measures.

In the announcement yesterday (15 October), Anthropic said it maintains its commitment not to train or deploy AI models “unless we have implemented safety and security measures that keep risks below acceptable levels”, while making updates to how it perceives and addresses emerging risks.

Examples of the lowest-risk AI safety level (ASL-1), the start-up said, include older large language models (LLMs), while the next step up, ASL-2, covers most current LLMs, including Anthropic’s own Claude – models that can provide dangerous information, though no more than a search engine could.

The higher-risk ASL-3 covers models that show low-level autonomous capability, while ASL-4 and above is reserved for future advances, with Anthropic saying this technology could have “catastrophic misuse potential and autonomy”.

Anthropic has now updated its methodology for assessing AI models’ capabilities and their associated risks to focus on capability thresholds – “specific AI abilities that, if reached, would require stronger safeguards than our current baseline” – and required safeguards, “the specific ASL standards needed to mitigate risks once a capability threshold has been reached”.

Anthropic said that all of its current models meet the ASL-2 standard. However, if a model can conduct complex AI research tasks usually requiring human expertise, this would meet a capability threshold and require the greater security of ASL-4 or higher, the company said. Also, if a model can “meaningfully assist someone with a basic technical background” in creating or deploying chemical, biological or nuclear weapons, this would meet another capability threshold and require ASL-3 standards of security and deployment safeguards.

The AI start-up said it will conduct routine evaluations of its AI models to ensure its currently applied safeguards remain appropriate.

Jared Kaplan, the Anthropic co-founder and chief science officer, who previously worked as a research consultant at OpenAI, will take over as the start-up’s responsible scaling officer, a role that was previously held by co-founder and CTO Sam McCandlish.

Anthropic, founded in 2021 by former employees of ChatGPT-creator OpenAI, positions itself as a safety-oriented AI company. Earlier this year, the company announced the opening of an office in Dublin, saying it hopes the office will become its main establishment in the EU market.


Suhasini Srinivasaragavan is a sci-tech reporter for Silicon Republic

editorial@siliconrepublic.com