The ethical implications of conversational AI

With chatbots becoming part of daily life, we look at whether the right to data privacy is being impeded

How many conversations have you had with a chatbot in the last three months? Chances are that you might not have a clue.

As consumers, we take interaction with online services in our stride, and don't tend to recall how many of them we have in total, let alone how many are with human beings and how many with chatbots. In fact, in some cases, we might not always be able to tell the difference.


Ultimately, does it even matter whether we're talking to people or bots, so long as the experience is smooth and the job gets done? Well, from a data protection standpoint, it certainly does.

Chatbots are the future

Opinions vary on how widespread chatbots will be in the near future, but whichever way you slice and dice the stats, we're in the midst of a revolution. Analyst firm Gartner expects that by 2020 a quarter of all customer service operations will use what it calls 'virtual customer assistants', while IBM has said that by the same year 85% of all customer interaction will be handled without a human involved.

There are plenty of reasons why companies are favouring AI-powered bots over humans. They don't need a lot of training, they can eliminate queues because a slot can always be allocated to a customer at any given time, and they're capable of running 24 hours a day.


Consumers, too, are coming to favour using a chatbot over having to deal with a person. Customer satisfaction rises by 33% when bots are used, according to Gartner, and their use leads to a 70% reduction in call, chat or email enquiries.

Listen and learn

There is no doubt AI can make inferences or deductions about us on the basis of our interactions. "This is relatively trivial to accomplish if there is a substantial dataset behind the AI interface," explains Steffen Sorrell, principal analyst at Juniper Research.

Mark Stephen Meadows, trustee and co-founder of Seed Token, an open source AI framework, explains to IT Pro: "The way I said the word can be modelled and this is of particular relevance when the user utters a phrase or sentence. [Using] over 200 other vectors (like your gender, age, etc) these listening systems can detect 'affect' or 'sentiment' or, simply, emotion. This offers deep insights into the mental state of the user."
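The kind of affect detection Meadows describes can be sketched as a simple classifier over conversational signals. The feature names and toy data below are hypothetical, assumed purely for illustration; real systems model the hundreds of acoustic and linguistic vectors he mentions.

```python
# Toy sketch of affect detection from conversational features.
# All feature names and values here are invented for illustration;
# production systems use far richer acoustic and linguistic models.
from sklearn.linear_model import LogisticRegression

# Each row: [mean_pitch_hz, speech_rate_words_per_sec, negative_word_ratio]
X_train = [
    [220.0, 2.1, 0.05],   # calm caller
    [180.0, 1.8, 0.02],   # calm caller
    [310.0, 3.4, 0.30],   # frustrated caller
    [290.0, 3.1, 0.25],   # frustrated caller
]
y_train = ["calm", "calm", "frustrated", "frustrated"]

model = LogisticRegression().fit(X_train, y_train)

# Classify a new utterance's features: high pitch, fast speech,
# many negative words suggest frustration.
print(model.predict([[300.0, 3.2, 0.28]])[0])
```

Even a toy model like this illustrates the privacy concern: the inferred label ("frustrated") is new personal information the user never explicitly disclosed.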


AI doesn't even need our words. The University of Vermont has identified signs in Instagram pictures that might indicate depression. Yet advancements like these are raising questions among the research community as to whether the right to data privacy is being impeded.

A question of ethics

Perhaps this is a question of power asymmetry: the idea that the company and chatbot are in a stronger position than the consumer.

"We have a potential problem," Meadows explains to IT Pro. "Artificial intelligence quickly becomes unethical because of its capacity to accumulate more information on the person than the person has about the system." A great deal of this gathering is passive in order to inform the training of the AI, and is not always covered by consent.

Steffen Sorrell of Juniper Research suggests the example of a short-term lending company offering its services to a user who had previously expressed concerns about their financial situation to an AI chatbot.


"Most consumers would consider their reasonable expectations of privacy for such a sensitive issue to have been breached," explains Sorrell. "That changes, however, if the user has explicitly given his or her consent for this kind of information sharing."

The UK Information Commissioner agrees. A spokesperson told us: "Wherever personal data is being processed, data protection law applies. Organisations are required to process personal data fairly, lawfully and transparently, irrespective of the technology used. The key is that individuals are properly informed about the processing and are clear on how they can exercise their rights."

A clash with GDPR

Companies should continue to ensure that a policy of data protection by design is followed even in the case of AI development, according to the ICO.

"When looking into certain technologies, organisations should ensure that they follow a data protection by design approach and, if necessary, undertake a data protection impact assessment. This allows them to ensure that their processing complies with the fundamental data protection principles."


Where the General Data Protection Regulation (GDPR) applies, which is to any company processing data on EU citizens, users have improved controls over what information about them is shared. However, Sorrell argues that a lack of information in a format that's easily understood by the general public means users are unlikely to know the full scope of these new powers.

For example, "even in the context of the GDPR, few consumers will understand that they have a right to ask for an explanation of any algorithmically-based decision," he explains.

Meadows' Seed Token project is designed to counter the traditional idea of organisations owning the data that AI generates and instead uses blockchain and open source to allow users to specify their privacy and data sharing preferences. It's pitched as a tool that can reset the balance by taking power away from a select few organisations currently commanding the AI space, opening up development and allowing for greater collaboration between users and developers.


It's a noble idea, but one that is likely to struggle given the highly contested and lucrative market that conversational AI is set to ignite. The current trajectory suggests we will spend more and more time interacting with bots in the coming years, and the AI behind them is likely to generate ever more knowledge about us, including inferences that we aren't necessarily aware of.

It appears as if this emerging conversational AI industry could well be on a collision course with GDPR. Just as the IoT, which enjoyed years of passive information gathering as part of its makeup, was labelled as being "irreconcilable with GDPR" by the UK's own data protection authority, so too might chatbots in the coming years.
