Will the brave new world ushered in by Industry 4.0 be good or bad for society? Dr Fiachra Ó Brolcháin from DCU’s Institute of Ethics examines the ethics of new technology.
In an era of climate change, political instability, biodiversity loss and economic uncertainty, the pace of technological innovation is widely celebrated. Governments compete with each other to attract tech companies, with tax and education policies increasingly focused on the needs of technology developers. Some people speak of us being in the midst of a new Industrial Revolution. We seem to revere novel technologies and pin many of our hopes for the future upon them.
Many of these technological developments bring societal benefits, but our collective enthusiasm for technology can lead us to overlook or underplay the downsides. The speed of technological change – bringing us big data, driverless cars, genetic engineering and smart cities, with true AI and geo-engineering distinct future possibilities – is truly astounding. Society is like a jockey wearing a blindfold. The power and pace of the horse are exhilarating, but we have little to no idea where we are going.
‘Society is like a jockey wearing a blindfold. The power and pace of the horse are exhilarating, but we have little to no idea where we are going’
That new technologies will significantly change our world is obvious. Whether this will be beneficial or harmful remains to be seen. Novel technologies, and those still in the early stages of development, have the potential either to exacerbate the world’s myriad problems or to mitigate them. Much will depend on the choices we make regarding their use.
These choices do not take place in a vacuum, and ethical philosophy can provide us with guidance as we attempt to navigate our way. The choices available to us in relation to these new technologies are ethical choices. We need to be guided by our best ethical principles if we are to ensure that the current technological revolution does not result in misery for future generations.
‘Technologies used by the most vulnerable members of our society make the ethical issues particularly important’
Take, for instance, the burgeoning field of assistive technologies. A whole range of assistive technologies are now being developed to help people with physical or intellectual disabilities, as well as the ageing populations across the Western world. Addressing a range of needs, these tools are designed to make the lives of users and carers easier. These technologies will be used by the most vulnerable members of our society, making the ethical issues particularly important.
Indeed, the general populace is increasingly using assistive devices, from mobile phones to wearables. While assistive technologies bring clear benefits, they also raise ethical concerns, the most prominent of which is privacy.
What do we mean when we talk about privacy? This is not an easy question to answer. The meaning of privacy is historically and philosophically complex. Some argue that it is a moral right with inherent value; others contend that its value is instrumental.
Conceptually, privacy is often associated with human dignity and with the development of the authentic self. People are likely to behave differently when they know that they are being observed.
We need privacy if we are to avoid self-censorship, or if we are to be able to have certain discussions with each other. Without a space to think and explore various ideas, a person’s psychological development is at risk of being stunted. This has led many thinkers to stress the normative importance of informational privacy – the idea that I should be able to control access to information about myself. Many of my thoughts, acts and words should be inaccessible to others. Novel technologies, including assistive technologies, that monitor and gather data about the person constitute a threat to privacy.
‘Novel technologies that monitor and gather data about the person constitute a threat to privacy’
Why should we care about privacy? Privacy is also connected to the concept of autonomy, ie, being able to form your own opinions and make your own decisions without external influence. Autonomy is a central value in liberal thought, which reveres the liberty of the autonomous individual.
The autonomous individual weighs up their options, ponders their choices, and makes individual decisions without undue external influence. As new technologies – from big data to eye-tracking, facial recognition and emotion capture – undermine privacy, our autonomy is threatened. Increased data about the way individuals are likely to behave, their preferences and dislikes, and their emotional responses to various stimuli, makes them easier to manipulate and control.
One might argue that those who don’t want to share their information could simply refuse to use the new devices. However, this is unlikely to be sufficient. The internet of things – in which connected objects ‘talk’ to each other – promises the creation of ‘smart cities’.
We will be living in cities where buildings can communicate with each other and with our devices, driverless cars will take us from place to place, and our fridges will remind us to buy milk. The benefits of these technologies have been heralded continuously and are, no doubt, real. For example, from an environmental perspective, increased data about air and water quality and energy use can play an important role in combatting climate change.
‘In a capitalist and consumerist society, much of the data about us will be used for commercial purposes’
However, it will also mean that a person living in such a city could be under continuous surveillance. The use that totalitarian regimes could make of such technologies would have been all too familiar to Orwell.
Orwell’s dystopian vision could yet be combined with the one Aldous Huxley imagined in Brave New World. In a capitalist and consumerist society, much of the data about us will be used for commercial purposes. Omnipresent advertisers armed with huge data sets about each of us would make it increasingly difficult to experience anything that has not been engineered and tailored to grab our individual attention. Already, our lives are inundated with demands on our attention – the internet of things and smart cities will exacerbate this while reducing our privacy significantly. Our mental lives will be less our own. Our encounters with the world will be mediated through technologies designed to catch our attention. This is far from the liberty and autonomy envisioned during the Enlightenment.
It is worth asking who will design these technologies and what their aims are. We must address the issue of responsibility for the negative impact of novel technologies. We must also consider our reasons for creating these new technologies – not just in terms of how they will benefit individual people and companies, but in terms of their overall effect on society.
The decisions we make now in relation to the technologies we are inventing will shape the societies we, and future generations, will live in. These choices will not take place in a moral vacuum and it is essential that we give deep consideration to the values guiding them.
Dr Fiachra Ó Brolcháin has worked on various aspects of applied ethics, including the ethical and social implications of virtual reality and social networking in association with the EU’s Reverie Project, and the ethical implications of human enhancement technologies. He is currently working as a Marie Curie ASSISTID Fellow at Dublin City University (DCU), looking at the ethics of the development, use and distribution of assistive technologies for people with intellectual disabilities and autism spectrum disorder.