The ethical implications of conversational AI

With chatbots becoming part of daily life, we look at whether the right to data privacy is being impeded

How many conversations have you had with a chatbot in the last three months? Chances are you won't have a clue.

As consumers, we take interactions with online services in our stride and don't tend to recall how many we have in total, let alone how many are with human beings and how many with chatbots. In fact, in some cases, we might not always be able to tell the difference.

Ultimately, does it even matter whether we're talking to people or bots, so long as the experience is smooth and the job gets done? Well, from a data protection standpoint, it certainly does.

Chatbots are the future

Opinions vary on how widespread chatbots will be in the near future, but whichever way you slice and dice the stats, we're in the midst of a revolution. Analyst firm Gartner expects that by 2020 a quarter of all customer service operations will use what it calls 'virtual customer assistants', while IBM has said that by the same year 85% of all customer interaction will be handled without a human involved.

There are plenty of reasons why companies are favouring AI-powered bots over humans. They don't need a lot of training, they can strip away queues because a slot can always be allocated for a customer at any given time, and they're capable of running 24 hours a day.

Consumers, too, are coming to favour using a chatbot over having to deal with a person. According to Gartner, customer satisfaction rises by 33% when bots are used, and they lead to a 70% reduction in call, chat or email enquiries.

Listen and learn

There is no doubt AI can make inferences or deductions about us on the basis of our interactions. "This is relatively trivial to accomplish if there is a substantial dataset behind the AI interface," explains Steffen Sorrell, principal analyst at Juniper Research.

Mark Stephen Meadows, trustee and co-founder of Seed Token, an open source AI framework, explains to IT Pro: "The way I said the word can be modelled and this is of particular relevance when the user utters a phrase or sentence. [Using] over 200 other vectors (like your gender, age, etc) these listening systems can detect 'affect' or 'sentiment' or, simply, emotion. This offers deep insights into the mental state of the user."
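To make that concrete, here is a minimal, hedged sketch of the kind of acoustic-feature pipeline such listening systems build on. The features, thresholds and labels below are invented for illustration; production systems combine the hundreds of learned vectors Meadows describes with trained models, not toy rules.

```python
# Illustrative sketch only: real affect-detection systems combine
# hundreds of acoustic and linguistic features with trained models,
# not the toy features and thresholds below.
import numpy as np

def acoustic_features(samples: np.ndarray, sample_rate: int) -> dict:
    """Derive a couple of simple prosodic features from a mono waveform."""
    rms_energy = float(np.sqrt(np.mean(samples ** 2)))  # loudness proxy
    crossings = np.count_nonzero(np.diff(np.sign(samples)))
    zcr = crossings / (len(samples) / sample_rate)      # crude pitch/noisiness proxy
    return {"rms_energy": rms_energy, "zcr_per_sec": zcr}

def naive_affect_label(features: dict) -> str:
    """Toy thresholding standing in for a trained sentiment classifier."""
    if features["rms_energy"] > 0.3 and features["zcr_per_sec"] > 2000:
        return "agitated"
    return "calm"

# Usage: one second of synthetic 440 Hz audio sampled at 16 kHz
rate = 16_000
waveform = 0.5 * np.sin(2 * np.pi * 440 * np.arange(rate) / rate)
print(naive_affect_label(acoustic_features(waveform, rate)))  # -> calm
```

The point of the sketch is how little raw material is needed: even two crude signals derived from how something was said, rather than what was said, begin to profile the speaker.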

AI doesn't even need our words. The University of Vermont has identified signs in Instagram pictures that might indicate depression. Yet advancements like these are raising questions among the research community as to whether the right to data privacy is being impeded.
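The Vermont work reportedly drew on simple photographic properties such as colour and brightness rather than anything a user wrote. Purely as an illustration, and with no diagnostic value whatsoever, colour statistics of that kind can be extracted in a few lines; the feature set here is an assumption, not the study's actual methodology.

```python
# Purely illustrative, with no diagnostic value: extracts the kind of
# simple colour statistics that photo-based mood research has examined.
# The feature set is an assumption, not the Vermont study's method.
import colorsys
from PIL import Image

def colour_stats(path: str) -> dict:
    """Mean hue, saturation and brightness of an image, downsampled."""
    img = Image.open(path).convert("RGB").resize((64, 64))
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
           for r, g, b in img.getdata()]
    n = len(hsv)
    return {
        "mean_hue": sum(h for h, _, _ in hsv) / n,
        "mean_saturation": sum(s for _, s, _ in hsv) / n,
        "mean_brightness": sum(v for _, _, v in hsv) / n,
    }

# print(colour_stats("photo.jpg"))  # placeholder path
```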

A question of ethics

Perhaps this is a question of power asymmetry: the idea that the company and chatbot are in a stronger position than the consumer.

"We have a potential problem," Meadows explains to IT Pro. "Artificial intelligence quickly becomes unethical because of its capacity to accumulate more information on the person than the person has about the system." A great deal of this gathering is passive in order to inform the training of the AI, and is not always covered by consent.

Sorrell suggests the example of a short-term lending company offering its services to a user who had previously expressed concerns about their financial situation to an AI chatbot.

"Most consumers would consider their reasonable expectations of privacy for such a sensitive issue to have been breached," explains Sorrell. "That changes, however, if the user has explicitly given his or her consent for this kind of information sharing."The UK Information Commissioner agrees. A spokesperson told us: "Wherever personal data is being processed, data protection law applies. Organisations are required to process personal data fairly, lawfully and transparently, irrespective of the technology used. The key is that individuals are properly informed about the processing and are clear on how they can exercise their rights."

A clash with GDPR

Companies should continue to ensure that a policy of data protection by design is followed even in the case of AI development, according to the ICO.

"When looking into certain technologies, organisations should ensure that they follow a data protection by design approach and, if necessary, undertake a data protection impact assessment. This allows them to ensure that their processing complies with the fundamental data protection principles."

Where the General Data Protection Regulation (GDPR) applies, which is to any company processing data on EU citizens, users have improved controls over what information about them is shared. However, Sorrell argues that a lack of information, in a format that's easily understood by the general public, means that users are unlikely to know the full scope of the new powers.

For example, "even in the context of the GDPR, few consumers will understand that they have a right to ask for an explanation of any algorithmically-based decision," he explains.

Meadows' Seed Token project is designed to counter the traditional idea of organisations owning the data that AI generates and instead uses blockchain and open source to allow users to specify their privacy and data sharing preferences. It's pitched as a tool that can reset the balance by taking power away from a select few organisations currently commanding the AI space, opening up development and allowing for greater collaboration between users and developers.
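The article doesn't detail Seed Token's protocol, but the general shape of a user-held, verifiable consent record can be sketched. Everything below, from the schema and field names to the use of a SHA-256 fingerprint as a ledger anchor, is an assumption made for illustration, not Seed Token's actual design.

```python
# Speculative sketch: the schema, field names and use of a SHA-256
# fingerprint as a ledger anchor are assumptions for illustration,
# not Seed Token's actual protocol.
import hashlib
import json
import time
from dataclasses import asdict, dataclass, field

@dataclass
class ConsentRecord:
    user_id: str
    allow_sentiment_analysis: bool = False
    allow_third_party_sharing: bool = False
    issued_at: float = field(default_factory=time.time)

    def fingerprint(self) -> str:
        """Stable hash that could be anchored on a ledger as proof of
        the preferences in force when data was collected."""
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

record = ConsentRecord(user_id="alice", allow_sentiment_analysis=True)
print(record.fingerprint())
```

The design choice worth noting is that the record lives with the user: an organisation would have to check the preferences, and could later be held to the fingerprint, rather than owning the data outright.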

It's a noble idea, but one that is likely to struggle in the highly contested and lucrative market that conversational AI is set to become over the coming years. The current trajectory suggests we will spend more and more time interacting with bots, and the AI behind them is likely to build up ever more knowledge about us, including inferences we aren't necessarily aware of.

It appears as if this emerging conversational AI industry could well be on a collision course with GDPR. Just as the IoT, which enjoyed years of passive information gathering as part of its makeup, was labelled "irreconcilable with GDPR" by the UK's own data protection authority, so too might chatbots be in the coming years.
