Futurist Amy Webb envisions how AI technology could go off the rails

23 Aug 2019

Amy Webb. Image: Elena Seibert

Futurists, Amy Webb explains, are not oracles, but they can give a good indication of the direction in which society is heading.

Amy Webb is a quantitative futurist – a “silly, made-up-sounding title”, she says. You could be forgiven for not immediately grasping what a futurist actually does, but she likens it to a kind of journalism: rather than covering the here-and-now news, a futurist reports on the trends they see flowering at the edges of society.

Webb’s professional background is varied. She spent her first year after high school studying clarinet and piano. “So I dropped out, changed schools and studied game theory, economics, political science, computer science and sociology,” she tells Siliconrepublic.com.

After graduating, she spent her 20s working as a journalist in China and Japan. “But I became interested in uncertainty and decision science, and specifically how the choices we make today would shape the future. That led me to study strategic foresight and to found the Future Today Institute 15 years ago.”

Now, she advises leaders in Fortune 500 companies and the US government about emerging threats and opportunities. She collaborates with screenwriters, producers and showrunners on TV and film. “And, of course, I write books about the future.”

‘What happens to society when we transfer power to a system built by a small group of people that is designed to make decisions for everyone?’
– AMY WEBB

Webb’s latest book, The Big Nine, examines the development of AI and how the ‘Big Nine’ corporations – Amazon, Google, Facebook, Tencent, Baidu, Alibaba, Microsoft, IBM and Apple – have taken control of the direction in which that development is heading. She says that the foundation upon which AI is built is fundamentally broken and that, within our lifetimes, AI will begin to behave unpredictably, to our detriment.

“AI is the next era of computing, and because it intersects with so many other facets of everyday life it became a focus of our research about a decade ago. During the normal course of building models and scenarios, I found that I kept coming back to the same few companies over and over.

“That got me wondering: what happens to society when we transfer power to a system built by a small group of people that is designed to make decisions for everyone? The answer isn’t as simple as it may seem, because we now rely on these companies – and they are entrenched in our economies and systems of governing.”

One of the main issues is that corporations have a far greater incentive to push this kind of technology out quickly than to release it safely.

“There is tremendous pressure for these companies to build practical and commercial applications for AI as quickly as possible,” Webb says. “Paradoxically, systems intended to augment our work and optimise our personal lives are learning to make decisions that we ourselves wouldn’t.”

Misplaced optimism

Webb says that when it comes to AI, both the optimism and the fear surrounding it are “misplaced”. Personally, she does not lean towards optimism about how AI is going to influence our society.

While she reiterates that the corporate incentive is to ship technology quickly rather than safely, she also argues that laying the blame solely at the feet of these companies is a little reductive.

“It isn’t only the big tech companies doing damage. This is a systemic problem, one that involves our governments, financiers, universities, tech companies and everyday people.”

She adds that while certain nations are developing frameworks to address AI, they aren’t collaborating enough internationally. Universities, she argues, have been slow to incorporate ethics training into the curricula of the students who will go on to build AI systems.

The solution that she envisions is a global one that involves the creation of an international entity to oversee AI. “This body would be responsible for setting guardrails for AI and enforcing standards, testing advanced systems before their commercial release, and monitoring activity as AI progresses from narrow to general to superintelligence.” This, she hopes, could help to keep the ‘Big Nine’ in check.

“The tech giants should prioritise our human rights first and should not view us as resources to be mined for either profit or political gain. The economic prosperity AI promises and these companies deliver should broadly benefit everyone.”


Eva Short was a journalist at Silicon Republic
