Despite the call for regulation, Microsoft has been racing to bring AI to various products in recent months, while OpenAI has threatened to leave the EU market over the upcoming AI Act.
Microsoft and OpenAI, two of the biggest names in the current wave of global AI products, are making statements about the regulation of this developing technology.
Microsoft has released a new report on creating responsible AI, which includes its own recommendations for regulating the sector. The report features a foreword by Microsoft president and vice chair Brad Smith, who said countries worldwide are asking how to control AI technology and avoid the new problems it might create.
“These questions call not only for broad and thoughtful conversation, but decisive and effective action,” Smith said. “This paper offers some of our ideas and suggestions as a company.”
Among the report’s five key recommendations are creating new government-led AI frameworks and implementing “effective safety brakes” for AI systems that control critical infrastructure.
Smith also suggested the creation of a “new government agency” to develop new laws and regulations for AI foundation models.
“We need to think early on and in a clear-eyed way about the problems that could lie ahead,” Smith said. “As technology moves forward, it’s just as important to ensure proper control over AI as it is to pursue its benefits.
“We are committed and determined as a company to develop and deploy AI in a safe and responsible way. We also recognise, however, that the guardrails needed for AI require a broadly shared sense of responsibility and should not be left to technology companies alone.”
Last year, Microsoft said it was pushing to tighten access to its AI products. For example, the company restricted access to parts of its facial recognition technology last June over concerns about how the technology could be abused.
But Microsoft appeared to change its cautious stance towards the start of 2023, following the success of ChatGPT, the advanced AI chatbot created by OpenAI.
Microsoft has been one of OpenAI’s biggest investors in recent years and also helped the AI company develop ChatGPT with the support of its Azure infrastructure.
Since then, Microsoft has been integrating AI into a wide range of its products, with new AI assistants announced for enterprises, cybersecurity and personal use on Windows 11.
One of the most significant developments was Microsoft’s push to update Bing with OpenAI technology, creating a shake-up in the search engine market that is dominated by Google.
This was one of the factors driving the recent AI wave, as the two tech giants have been working constantly to gain an edge over each other with new generative AI products.
OpenAI clashes with the EU
OpenAI, meanwhile, has also been making public announcements about the need to regulate AI properly.
In a US Congress hearing earlier this month, OpenAI CEO Sam Altman raised concerns about the potential misuse of AI ahead of the US election and called on lawmakers to regulate the rapidly advancing sector.
“We believe it is essential to develop regulations that incentivise AI safety while ensuring that people are able to access the technology’s many benefits,” Altman said in his testimony.
The company has shared various actions it is taking to push for more responsible AI. Yesterday (25 May), OpenAI launched a programme to award 10 grants worth €100,000 each to fund experiments into “democratic inputs to AI”.
The company said the goal is to create a democratic process for deciding what rules AI systems should follow.
“By ‘democratic process’, we mean a process in which a broadly representative group of people exchange opinions, engage in deliberative discussions and ultimately decide on an outcome via a transparent decision-making process,” OpenAI said in a blog post.
But despite its call for more AI regulation in the US, OpenAI has reacted negatively to news surrounding the AI Act, the EU’s long-awaited rules for this sector.
Altman has been clashing with EU regulators over this act, as the draft legislation will hold AI companies accountable for how their systems are used, Bloomberg reports.
As a result, Altman has warned that OpenAI might leave the EU market entirely if the company feels it can’t comply with the new regulation.