OpenAI is also working on a ChatGPT business subscription to protect enterprise data, while US agencies have pledged to protect the public from bias in AI systems.
OpenAI has added the ability for users to turn off their chat history when using ChatGPT, preventing these conversations from being used to train the model.
Previously, any conversation a user had with ChatGPT could be used to improve the AI model unless the user filled out a form to opt out of this data usage.
With the new update, users who turn off chat history will have their conversations stored for 30 days before being deleted. OpenAI said these conversations will be reviewed “only when needed” to monitor for potential abuse and will not be used for training purposes.
“We hope this provides an easier way to manage your data than our existing opt-out process,” OpenAI said in a blog post.
The update appears to be part of a larger plan by OpenAI to improve its data collection practices. The company said users can now export their ChatGPT data to understand what information the chatbot collects.
OpenAI also said it is working on a “ChatGPT Business subscription” to give enterprises more control over their data and the data of their end users.
“ChatGPT Business will follow our API’s data usage policies, which means that end users’ data won’t be used to train our models by default,” OpenAI said. “We plan to make ChatGPT Business available in the coming months.”
The company’s focus on data privacy comes amid a period of controversy for ChatGPT, which is being investigated in multiple countries for its data collection and storage practices.
Earlier this month, the EU’s key GDPR regulator created a dedicated ChatGPT taskforce, designed to “foster cooperation and to exchange information on possible enforcement actions”.
The taskforce was created after Italy’s privacy regulator issued a nationwide ChatGPT ban over alleged privacy violations. The agency set out a list of requirements OpenAI must meet for the ban to be lifted, Reuters reports.
ChatGPT is also being investigated in Canada, due to an allegation that OpenAI is collecting, using and disclosing personal information without consent.
Agencies have their eye on AI
Meanwhile, multiple federal agencies in the US have issued a joint statement, pledging to protect the public from discrimination and bias in “automated systems”, which includes AI.
This joint statement claims automated systems rely on “vast amounts of data” and have the potential to produce “outcomes that result in unlawful discrimination”.
The statement was signed by the Federal Trade Commission, the Civil Rights Division of the US Department of Justice, the Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission.
Some of the listed concerns include outcomes being skewed by unbalanced datasets, a lack of transparency in AI systems and developers not understanding how their creations will be used by private and public entities.
“Today, our agencies reiterate our resolve to monitor the development and use of automated systems and promote responsible innovation,” the agencies said in the statement.
“We also pledge to vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”