If AI becomes human, are we the new AI tools?


5 Dec 2024


While AI that mimics human qualities can enhance user experiences and foster emotional connections, it brings with it a host of ethical and legal concerns that we cannot ignore, argues Dr José Albornoz.

In the rapidly evolving world of artificial intelligence (AI), a spirited debate rages on: should AI strive for human-like qualities, or should we focus on creating human-centric AI that serves us without mimicking us? This quandary has deep roots, dating back to Alan Turing’s groundbreaking ‘Imitation Game’ in the 1950s. Today, the stakes are high as the debate evolves in the context of enterprise technologies.

On one hand, proponents of humanising AI argue that the goal behind the Turing Test – a machine that converses so naturally it is indistinguishable from a person – remains worth pursuing. They envision AI not just as a tool but as a companion that understands and emulates our emotions to augment customer and employee experiences.

On the other hand, advocates of human-centric AI favour designing systems that unlock human potential while remaining bound by legal, ethical and moral constraints – and without necessarily humanising the AI itself.

As we stand at this crossroads, the question remains: should we strive for AI that mirrors our humanity – and tricks people into believing it’s human? Or should we focus on crafting technology that personalises our experiences and enhances our lives while retaining its distinctly non-human identity?

Why humanise AI?

Training AI to humanise experiences involves teaching it to empathise, understand and communicate like us. It begins with exposing AI to vast datasets of diverse human interactions to learn patterns and emotional nuances. Machine learning techniques then refine the AI’s understanding of user behaviour, enabling it to respond with a human touch, such as injecting humour or using a conversational tone. Real-time data analysis helps the AI adapt to evolving user expectations, creating a continuously humanising feedback loop that reinforces the idea that another person is behind the screen.
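The loop described above – detect emotional cues in the input, then adapt the tone of the response – can be sketched in miniature. This is an illustrative toy, not a real affective-computing system: production Emotion AI relies on models trained over text, voice and facial data, whereas the keyword lists and canned replies below are invented purely for demonstration.

```python
# Toy sketch of a "humanising" response loop: classify the user's mood,
# then modulate the reply's tone. All cue words and replies are illustrative;
# a real system would use a trained emotion-recognition model, not keywords.

NEGATIVE_CUES = {"angry", "frustrated", "upset", "annoyed"}
POSITIVE_CUES = {"great", "thanks", "happy", "love"}

def detect_mood(message: str) -> str:
    """Crude stand-in for an emotion-recognition model."""
    words = set(message.lower().split())
    if words & NEGATIVE_CUES:
        return "negative"
    if words & POSITIVE_CUES:
        return "positive"
    return "neutral"

def respond(message: str) -> str:
    """Pick a tone based on the detected mood, as Emotion AI aims to do."""
    mood = detect_mood(message)
    if mood == "negative":
        return "I'm sorry this has been frustrating. Let's sort it out together."
    if mood == "positive":
        return "Glad to hear it! Anything else I can help with?"
    return "Sure - can you tell me a bit more?"
```

The "continuously humanising feedback loop" the article describes would, in a real system, also log which tone adjustments kept users engaged and feed that signal back into model training.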

Humanising digital experiences can make conversations feel personal rather than mechanical. Forbes, for example, advocates for humanised AI as a way to democratise industries traditionally seen as elitist, such as financial services, by empowering customers and increasing accessibility for underserved communities. In healthcare, it can pair clinical expertise with, so to speak, a better, more empathetic bedside manner. Others point out that tapping into our emotions is nothing new – it has been part of marketing and advertising campaigns since their inception.

Innovations such as Emotion AI (also known as affective computing or artificial emotional intelligence) – a subset of AI that measures, understands, simulates and reacts to human emotion – are increasingly widespread. By picking up on unconscious reactions such as micro-expressions or changes in inflection, Emotion AI recognises subtle shifts in mood, such as stress or anger, and modulates its outputs accordingly. Companies that develop and supply this technology often require clients to agree not to exploit it for surveillance or decision-making purposes. Still, ethical considerations remain.

Ethical considerations

Humanised AI can create strong emotional connections but raises concerns about irresponsible use, emotional manipulation and transparency. If organisations rely on AI to build customer loyalty, disruptions such as significant changes or discontinuation of the AI could lead to negative consequences.

Trust may be compromised when users realise they’re interacting with a machine instead of a person – imagine discovering that your online therapist is an algorithm or that you are being manipulated into buying a product or service by an AI that can detect emotional nuances a human would miss. Even unintentional personification, through naming chatbots and assigning them personalities, can result in negative outcomes.

On a much deeper level, some researchers worry that making AI seem more like us could lead to broader harms that challenge our very human essence. The argument proposes a paradox wherein making AI seem more human actually makes us less human because it dilutes authenticity and emotional complexity to predictable and replicable responses. Put simply, by treating AI tools as humans, there’s a danger we also start treating humans as mere tools.

The ethical implications of non-transparent AI systems are of enough concern that jurisdictions have begun introducing protective legislation. The first of its kind, the EU AI Act prohibits AI systems that pose an unacceptable risk, including those that rely on manipulation, deception, exploitation or certain forms of profiling-based judgement. This is driving industry-wide changes and prompting us to seek more balanced solutions.

Hyper-personalised AI

AI-powered hyper-personalisation takes the personal touches we're all used to – our names, locations and history popping up in emails and on account pages – to the next level. It uses data, analytics and automation to gather deep insights into our behaviour. In enterprises, hyper-personalisation automatically serves customers highly relevant, real-time content and fires back lightning-fast responses to employee queries.

Human-centric AI doesn't try to trick people into believing it's a real person. It makes it clear that it's a chatbot or automated assistant, for instance. It also sticks to the job it's best at – predictable, repetitive and process-driven tasks – seamlessly escalating more complicated or nuanced queries and decision-making to people. This harmonious blend of human and artificial intelligence not only helps overcome ethical concerns about humanised AI, but also frees people to focus on more engaging and creative work.

Putting AI to work across operations has become a major goal for most organisations. The market for enterprise-grade AI has seen significant growth, reaching a staggering $10.08bn last year. However, as recent research reveals, a substantial proportion of business leaders believe AI requires greater human oversight and efforts around transparency. By taking a human-centric approach to AI, we can create intelligent systems that collaborate with rather than replace people, enhancing existing capabilities and improving experiences across the enterprise.

By Dr José Albornoz

Dr José Albornoz heads the Data Science practice at GlobalLogic in the UK&I region. He has experience in various data science roles at companies such as Catalyst BI, DataRobot, Unisys, Capgemini and Fujitsu, and an academic background as an associate professor of electrical and computer engineering at Universidad de Los Andes.
