Google and Alphabet CEO Sundar Pichai believes that international alignment will be critical to creating a global regulatory standard for the development of AI.
Artificial intelligence is “too important” not to be regulated because of the damage it could cause if left unchecked, the boss of Google has said.
Sundar Pichai said the correct use of AI had the potential to save lives, but issues such as deep fakes and the “nefarious uses of facial recognition” showed it could also be a danger to public safety.
Writing in the Financial Times, Pichai said regulation of the technology was needed to prevent AI from being influenced by bias, as well as to protect public safety and privacy.
‘Companies such as ours cannot just build promising new technology and let market forces decide how it will be used’
– SUNDAR PICHAI
‘Technology’s virtues aren’t guaranteed’
The CEO said: “Growing up in India, I was fascinated by technology. Each new invention changed my family’s life in meaningful ways. The telephone saved us long trips to the hospital for test results.
“The refrigerator meant we could spend less time preparing meals, and television allowed us to see the world news and cricket matches we had only imagined while listening to the short-wave radio. Now, it is my privilege to help to shape new technologies that we hope will be life-changing for people everywhere. One of the most promising is artificial intelligence.
“Yet history is full of examples of how technology’s virtues aren’t guaranteed. Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents. The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread.”
The foundation for AI regulation
Pichai pointed to Google’s own published principles on AI, and said existing rules such as the EU’s General Data Protection Regulation (GDPR) could serve as the foundation for AI regulation.
“International alignment will be critical to making global standards work. To get there, we need agreement on core values. Companies such as ours cannot just build promising new technology and let market forces decide how it will be used. It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone,” he said.
Pichai added that the tech giant wanted to work with others on crafting regulation.
He said: “Google’s role starts with recognising the need for a principled and regulated approach to applying AI, but it doesn’t end there. We want to be a helpful and engaged partner to regulators as they grapple with the inevitable tensions and trade-offs. We offer our expertise, experience and tools as we navigate these issues together.
“AI has the potential to improve billions of lives, and the biggest risk may be failing to do so. By ensuring it is developed responsibly in a way that benefits everyone, we can inspire future generations to believe in the power of technology as much as I do.”
Google is one of the world’s most prominent AI developers. Its virtual helper, the Google Assistant, is powered by the technology, and the company is also working on a number of other products that use AI, including driverless cars.
Pichai also revealed that Google’s own principles specify that the company will not design or deploy artificial intelligence in some situations, including those which “support mass surveillance or violate human rights”.
– PA Media