How the driverless vehicles dilemma highlights wider system safety issues

The "trolley problem" is well known, but what are its wider implications for AI?

Artificial intelligence (AI) is already playing a significant part in the work of many organisations, and its role is only set to grow. PwC has predicted AI could deliver a $15 trillion (£11.4 trillion) boost to global GDP by 2030.

While AI has great potential, it also raises challenges, notably around decision ethics. How will non-human systems that have to make decisions do so in an ethically sound way that is acceptable to the humans on the receiving end? Consider the much-used example of a driverless vehicle caught in a circumstance where it has to harm someone, sometimes known as the "trolley problem". Which pedestrians or other road users does it “save”, and which does it “sacrifice”?

As we imbue systems with the ability to learn, how do we ensure their algorithms are set to do the right thing, and how do they make a judgement when none of the possible outcomes are positive?

Can we ever trust AI?

AIs are made by us, so ultimately we are responsible for ensuring they don’t exhibit any skewed ‘thinking’ or bias. Margherita Pagani, director of the Research Centre on Artificial Intelligence in Value Creation at Emlyon Business School, tells IT Pro: “There are three major sources of bias in AI algorithms – the training data set, constraints that are given to algorithms to learn as we want them to, and the principles of AI algorithms themselves – what they look for.”
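The first of Pagani's three sources, a skewed training set, is easy to illustrate. The following sketch (invented for illustration; the dataset, the lending scenario and the "majority label" model are all assumptions, not anything described in the article) shows how a model trained on imbalanced historical data simply reproduces that imbalance:

```python
# A toy illustration of bias from a skewed training set.
# All data and the scenario here are invented for illustration.

from collections import Counter

def train_majority_classifier(labels):
    """'Learn' by memorising the most common label in the training data."""
    counts = Counter(labels)
    return counts.most_common(1)[0][0]

# Suppose 90% of historical decisions in the training data were "approve".
# The model learns nothing except that over-represented outcome.
training_labels = ["approve"] * 9 + ["reject"] * 1
model = train_majority_classifier(training_labels)

# Regardless of any new case's merits, the prediction reproduces
# whatever skew the historical data contained.
print(model)  # prints "approve"
```

Real machine-learning models are far more sophisticated, but the underlying mechanism is the same: whatever imbalance exists in the data is what the algorithm optimises towards.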


This seems perfectly logical. But what about protecting public safety? If we design a system with a great deal of inbuilt AI, how do we ensure humans are protected? Richelle Dumond, UX Researcher at PARC, says: “We know AI systems will not always make the right decision.” She counsels against putting undue faith in them, explaining: “One doesn't have to listen to a judgment an algorithmic system makes, and I would encourage everyone always to question any decision made.”


When it comes to the wider question of trust, Pagani has this to say: “Commercial companies don't always have society's best interests in mind when developing AI systems. A business model is often centered around IP and because of this, their systems can be more opaque than we'd like them to be.”

A regulatory issue

For some, the responsibility for ensuring safety issues are adequately handled lies with regulators and legislators. Dr Jabe Wilson, consulting director of text and data analytics at Elsevier, says: “We’ll need to see regulators design new frameworks and pass additional legislation to ensure unethical use of AI is prohibited, and bias reduced wherever possible.”  

This question of bias is a crucial one when considering system safety issues. Our theoretical driverless vehicle needs to know how to value individuals in its “save or sacrifice” decision, and to understand that, while nothing short of a zero casualty rate is acceptable, it may nonetheless be forced to act in a way we would find unacceptable.

Searching for a single source of truth

In some sectors, AIs are starting to benefit from consortia and other initiatives intent on developing a single source of truth to help avoid some potential system safety issues. Sticking with driverless vehicles, the UK’s Ordnance Survey and Zenzic, the government- and industry-backed self-driving hub, are joining forces to define global standards for mapping. Such standards, plus a neutrally hosted platform for mapping data, would increase confidence in the data, says the Ordnance Survey.

Similar initiatives are emerging in other sectors. In life science, for example, the Pistoia Alliance – a not-for-profit members’ organisation made up of life science companies, technology and service providers, publishers and academic groups – runs an AI community of interest to encourage all parties to collaborate and tackle issues related to AI.


Even with these initiatives, ‘truth’ often doesn’t really exist. As Caryn Tan, responsible AI manager at Accenture explains: “Ethical issues rarely present themselves as black and white. And to complicate matters, fairness doesn’t have a universal definition. This means AI will leave a huge grey area that organisations are yet to navigate.”


Learning to live with uncertainty

Perhaps, in the end, we will just have to learn to live with the fact that there are some situations where it is impossible to make the “right” decision, because things are not that clear cut, and that the best we can hope for are decisions that meet our ethical standards. As Richelle Dumond puts it, we forget that “algorithmic systems are not only made by us ‘to err is human’ people but are also trained by the data we give them.” She also notes that “developers need to be explicit about the limitations of their systems.”

Dr Jabe Wilson concurs, noting: “AI can’t simply be a black box that spits out answers we’re unable to verify or interrogate – we need to know how it has reached its conclusions.”
