OpenAI CEO Sam Altman made quite an impression on US lawmakers yesterday, when he asked for more, not less, government regulation of AI.
Congressional hearings of tech CEOs in the past, such as the famous Mark Zuckerberg hearing of 2018 or the more recent one involving TikTok chief Shou Zi Chew, have often been described with one word: grilling. But for OpenAI chief Sam Altman, this was far from the case.
At a hearing before a US Senate subcommittee on privacy, technology and the law yesterday (16 May), Altman raised concerns about the potential misuses of AI ahead of the US election next year and – unconventionally for US CEOs – called on lawmakers to regulate the fast-advancing sector.
“We believe it is essential to develop regulations that incentivise AI safety while ensuring that people are able to access the technology’s many benefits,” Altman wrote in the opening lines of his written testimony.
“It is also essential that a technology as powerful as AI is developed with democratic values in mind.”
AI companies should be ‘licensed’ and ‘registered’
Joining him in testifying to the lawmakers were two other AI experts: IBM executive Christina Montgomery and New York University professor emeritus Gary Marcus. The mood was starkly different from other tech hearings, as lawmakers seemed charmed by Altman.
Altman, whose San Francisco-based start-up is behind the AI chatbot ChatGPT, had a few key recommendations to US lawmakers in his speech.
First, that AI companies working on the “most powerful models” should be subject to licensing and registration requirements from the government, alongside “incentives for full compliance”.
For instance, he said models should have to pass certain safety tests, such as whether they could “self-replicate” and “exfiltrate into the wild”.
Altman also suggested the US government should take input from a broad range of AI experts and stakeholders so that it can regularly update the appropriate safety standards, evaluation requirements, disclosure practices and other factors pertaining to approved AI systems.
“We are not alone in developing this technology. It will be important for policymakers to consider how to implement licensing regulations on a global scale and ensure international cooperation on AI safety,” he added in his testimony.
‘There will be enough competition’
What Altman did not propose, however, is requiring AI companies to be transparent about the training data they use or restricting them from using copyrighted works.
But that didn’t stop senators from being charmed by the CEO, who was even asked if he considered himself qualified to oversee a federal body for the regulation of AI.
Lawmakers were also concerned about the concentration of AI advancement in the hands of a few big companies, such as Microsoft (Bing) and Google (Bard). OpenAI itself is backed by billions in investment from Microsoft.
“It is really terrifying to see how few companies now control and affect the lives of so many of us, and these companies are getting bigger and more powerful,” said Cory Booker, a US senator present at the hearing.
Altman argued that it is inevitable generative AI models will be made by “a relatively small number of providers”. Such models require massive amounts of data and computing power, demanding investments and resources that only giant corporations can afford.
“The fewer of us that you really have to keep an eye on … there’s benefits there. But I think there needs to be enough [competition] and there will,” he said.
Across the pond, EU lawmakers are getting closer to fully implementing an AI Act that aims to ensure AI systems work for people and are safe, transparent, non-discriminatory and environmentally friendly, and that citizens’ rights will be protected as the tech advances.
Sam Altman in 2019. Image: Steve Jennings/Getty Images for TechCrunch (CC BY 2.0)