The company’s safety disclosures come amid ChatGPT’s ban in Italy, a Canadian data investigation and recent reports of false accusations generated by the chatbot.
OpenAI has shared the measures it takes to ensure its AI systems such as ChatGPT are built and deployed in a safe manner.
The company said it works to ensure safety is “built into our system at all levels” due to the risks that come with tools such as ChatGPT, its highly popular chatbot.
OpenAI’s blog post lists the company’s safety measures, such as efforts to protect children by minimising harmful content, improving factual accuracy and removing personal information from its training data “where feasible”.
The company also said it spent six months focusing on the safety measures around its latest AI creation, GPT-4, before releasing it publicly.
OpenAI released these safety details the day after US president Joe Biden made remarks about the risks of AI systems.
In a White House speech, Biden said tech companies have a “responsibility” to make sure their products are safe before making them public and that it “remains to be seen” if AI is dangerous.
OpenAI said learning from real-world examples is a “critical component” of creating and releasing safe AI systems over time.
“We cautiously and gradually release new AI systems – with substantial safeguards in place – to a steadily broadening group of people and make continuous improvements based on the lessons we learn,” OpenAI said.
“Real-world use has also led us to develop increasingly nuanced policies against behaviour that represents a genuine risk to people while still allowing for the many beneficial uses of our technology.”
ChatGPT under scrutiny
Meanwhile, OpenAI’s flagship product, ChatGPT, is facing controversy in multiple countries over data concerns and claims the AI model has released false information on real individuals.
In Australia, a regional mayor has threatened to sue OpenAI if it does not correct false claims from ChatGPT that he served time in prison for bribery, Reuters reports.
US law professor Jonathan Turley claims ChatGPT generated a false story accusing him of sexually assaulting students. In a USA Today column, Turley said the AI model cited a Washington Post article as its source, but the article never existed.
The Washington Post has also written about Turley’s claims and confirmed that the article cited by ChatGPT never existed.
An OpenAI spokesperson told The Washington Post that the company strives “to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress”.
Amid these issues of ChatGPT hallucinating facts, the chatbot is also under investigation in Canada over an allegation that OpenAI is collecting, using and disclosing personal information without consent.
ChatGPT took a significant hit last week, when Italy’s privacy regulator issued a ban on the AI model over alleged privacy violations.
The Italian authority claims OpenAI processes data inaccurately and lacks a legal basis to justify the mass collection and storage of data. It also claims that there is no age verification system in place for children.
OpenAI told the BBC that it complied with data laws, while the move was criticised by Italy’s prime minister, who described the watchdog’s decision as “disproportionate”, Reuters reports.