Inside the quest to humanise AI

A digital face
(Image credit: Getty Images)

This article originally appeared in issue 28 of IT Pro 20/20.

Artificial intelligence (AI) is seeping into our everyday lives, from smart technologies to digital voice assistants and streaming services. Public perceptions, though, range from appreciating its usefulness in augmenting us to fearing it paves the way toward dystopia. Science fiction is particularly culpable for promoting negative stereotypes, with humanised AI often preceding disaster scenarios in titles such as Ex Machina, Black Mirror and classics like the Alien franchise.

Although existing AI technology can mimic human behaviour, reality hasn’t yet come to mimic art in this way. Many in the field, nevertheless, are determined to teach AI systems how to think, understand and process information in the same way humans do. A handful of companies, too, are advocating for AI humanisation – hoping to bake this concept into their products. This mission isn't without its challenges, though, and achieving fully humanised AI may yet yield a plethora of unintended consequences.

What does it mean to humanise AI?

When analysing the current state of AI, it’s evident these technologies work best when they augment humans and amplify our creativity, innovation, abstract thinking and capacity to empathise with each other.

We’ve already begun the humanisation process by giving AI human-like physical features, mannerisms and names. Consider IBM’s Watson, Hanson’s Sophia, Amazon’s Alexa and Apple’s Siri, for example. Companies will likely take these steps in humanising AI to increase public acceptance of these technologies. The movement to humanise AI, though, involves more than human likeness alone.


“The goal of human-like AI is to replicate the positive capabilities of human beings, not the weaknesses,” according to Mike Myer, CEO of conversational AI platform Quiq. He explains that more human-like AI in these circumstances would involve making the AI more personable so that the interaction feels authentic, as though coming from a human being.

Some may wonder: what if AI could understand thoughts and emotions and gain the ability to empathise and respond to human behaviour? Dr Aditi Paul, assistant professor of communication studies at Pace University, New York, says “researchers agree that added benefits in online interaction not only personalises human connection but hyperpersonalises it.”

An example of significant AI personalisation is Stevie the robot, created by engineering student Conor McGinn and his colleagues at Trinity College Dublin, which was tested in the Knollwood Military Retirement Community in Washington DC. Stevie took over entertainment activities like singing and calling a game of bingo, allowing staff to focus on individual residents’ needs. In this vein, scientists and researchers at major AI companies are now attempting to add complex, human-like features to AI, including self-awareness and consciousness.

What are the barriers to humanising AI?

If the mission to humanise AI were simple, we'd already see human-like AI products and services flooding the market, but various barriers are slowing the process.

Impacts on credit, blame and responsibility

It’s understood that the more human-like AI becomes, the more responsibility people will allocate to it. Sometimes AI systems malfunction, and it can be challenging for IT professionals to determine the root of the problem and who's to blame. Is something wrong with the machine? Did a developer make a mistake? Often, the answers are unclear.

Ethical considerations

IBM’s human-centred AI team – composed of Werner Geyer, Justin Weisz, Claudio Santos Pinhanez and Elizabeth Daly – asserts: “You also need to gauge the negative consequences of AI systems and include ways to minimise bias and measure perceptions of the AI system.”

Deciding how to accomplish this, and who should be held responsible for negative outcomes, is one major roadblock in AI humanisation. The quality of data fed into an AI system is a case in point: if that data is embedded with bias, the AI is likely to replicate and amplify it, a persistent ethical challenge in development.

The answer partly resides in defining the role we expect human-centred AI to perform. As the IBM team states: “The ultimate goal will be for humans to collaborate with AI to achieve more, faster than would ever have been possible before.” Humans should be responsible for tasks such as being the high-level creative driver behind a project, setting goals and governing the task’s completion. AI could supplement human ability through tasks like low-level detail maintenance and scaled design management.

A lack of public acceptance

For AI humanisation to succeed, widespread implementation will be necessary, yet 45% of American adults say they're equally concerned and excited about the increased use of AI. Privacy concerns persist, as do worries about worker displacement and the prospect of an AI's skills surpassing a human's.

Some believe that artificial human connection may lead to a lack of human-to-human connection, with AI possibly misused or relied on too much. AI developers will have to shift these general perceptions towards public feelings of acceptance.

How can human-like AI breakthroughs improve business decision making?

The global AI market will reach $432.8 billion this year, growing by almost one-fifth. Businesses are heavily investing in AI hardware and services, with some major companies looking to integrate human-like functionality into AI, including Microsoft, Apple, IBM, Tesla, OpenAI, DeepMind and SAS.

Because new technology generally reaches businesses before consumers, it's worth considering how human-like AI could affect such companies and their operations. In particular, human-like AI can help leaders make data-driven decisions with greater accuracy. Clunky chatbots, for example, still require human intervention, but AI humanisation may reduce that need in future.


Dr Iain Brown, head of data science at analytic software development company SAS, believes AI in business is about more than just what the technology can do. “A major challenge is how to deploy the technology in a real-world environment at scale,” he says. Deployment will certainly be a challenge, despite the wide range of business benefits humanised AI can provide.

At the Japan Advanced Institute of Science and Technology, researchers are focusing on using physiological signals in AI systems to improve their ability to interact with humans, potentially giving AI sentiment-sensing capabilities. Meanwhile, three machine learning (ML) systems – DALL-E, PaLM and GPT – have reportedly improved their ability to generate creative content. DALL-E 2, for example, can create an image from a text caption, such as “show me a koala dunking a basketball” or “teddy bears working on AI on the moon”. Additionally, researchers at the University of Central Florida are working on an AI brain that doesn't require an internet connection, so it can work in remote regions or in space.

As society relies more heavily on AI, now is the time for companies and researchers to reorient AI in more human-like ways. These businesses, however, must consider the barriers and proceed with caution. If AI can sense and understand human emotions, though, it may be able to provide a better user experience (UX). It seems we’re on the right track for AI humanisation, but more work needs to be done to make that possibility a more widespread reality.