How the driverless vehicles dilemma highlights wider system safety issues

The "trolley problem" is well known, but what are its wider implications for AI?

Artificial intelligence (AI) is already playing a significant part in the work of many organisations, and this is only set to increase. PwC has predicted a potential $15 trillion (£11.4 trillion) boost to global GDP from AI by 2030.

While AI has great potential, it also raises challenges, notably around decision ethics. How will non-human systems that have to make decisions do so in an ethically sound way that is acceptable to the humans on the receiving end? Consider the much-used example of a driverless vehicle caught in a circumstance where it has to harm someone, sometimes known as the "trolley problem". Which pedestrians or other road users does it "save", and which does it "sacrifice"?


As we imbue systems with the ability to learn, how do we ensure their algorithms are set to do the right thing, and how do they make a judgement when none of the possible outcomes are positive?

Can we ever trust AI?

AIs are made by us, so ultimately we are responsible for ensuring they don't exhibit any skewed 'thinking' or bias. Margherita Pagani, director of the Research Centre on Artificial Intelligence in Value Creation at Emlyon Business School, tells IT Pro: "There are three major sources of bias in AI algorithms – the training data set, constraints that are given to algorithms to learn as we want them to, and the principles of AI algorithms themselves – what they look for."
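Pagani's first source of bias, the training data set, can be illustrated with a deliberately minimal sketch (the scenario and data here are entirely hypothetical): a naive model trained on a skewed sample simply reproduces that skew in every decision it makes.

```python
# Toy illustration of training-data bias (hypothetical loan-decision data).
# A naive "majority vote" model learns nothing except the most common
# label in its training set, so a skewed history yields a skewed model.
from collections import Counter

def train_majority(labels):
    """Return the most common label in the training set."""
    return Counter(labels).most_common(1)[0][0]

# Skewed history: 90% of past decisions were "reject".
biased_history = ["reject"] * 90 + ["approve"] * 10
model = train_majority(biased_history)

# The "model" now rejects every applicant, regardless of merit.
print(model)  # -> reject
```

A real classifier is far more sophisticated, but the underlying failure mode is the same: the algorithm optimises faithfully against whatever distribution it is given.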

Related Resource

Diversity in the digital workplace

The future of work is a collaborative effort between humans and robots

Download now

This seems perfectly logical. But what about protecting public safety? If we design a system with a great deal of inbuilt AI, how do we ensure humans are protected? Richelle Dumond, UX Researcher at PARC, says: “We know AI systems will not always make the right decision.” She counsels against putting undue faith in them, explaining: “One doesn't have to listen to a judgment an algorithmic system makes, and I would encourage everyone always to question any decision made.”


When it comes to the wider question of trust, Pagani has this to say: “Commercial companies don't always have society's best interests in mind when developing AI systems. A business model is often centered around IP and because of this, their systems can be more opaque than we'd like them to be.”

A regulatory issue

For some, the responsibility for ensuring safety issues are adequately handled lies with regulators and legislators. Dr Jabe Wilson, consulting director of text and data analytics at Elsevier, says: “We’ll need to see regulators design new frameworks and pass additional legislation to ensure unethical use of AI is prohibited, and bias reduced wherever possible.”  

This question of bias is a crucial one when considering system safety issues. Our theoretical driverless vehicle needs to know how to value individuals in its "save or sacrifice" decision, even though nothing short of a zero casualty rate is truly acceptable. When no such option exists, it may have no choice but to act in a way somebody will find unacceptable.
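The bind can be made concrete with a toy sketch (the options and weights are entirely hypothetical, not any real vehicle's policy): when every available action carries a non-zero casualty count, the system can only minimise a cost function, and choosing that cost function is itself an ethical decision made by humans.

```python
# Toy "save or sacrifice" chooser (hypothetical options and costs).
# No option is harmless; the algorithm can only pick the least-bad
# outcome according to a cost its designers defined.

options = {
    "swerve_left":  {"casualties": 1},
    "swerve_right": {"casualties": 2},
    "brake_only":   {"casualties": 3},
}

def least_bad(options):
    """Pick the option with the fewest casualties; none has zero."""
    return min(options, key=lambda name: options[name]["casualties"])

print(least_bad(options))  # -> swerve_left
```

The point of the sketch is what it cannot do: there is no branch that returns "harm nobody", so the "right" answer is determined entirely by how the costs were assigned upstream.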

Searching for a single source of truth

In some sectors AIs are starting to benefit from consortia or other initiatives intent on developing a single source of truth to help avoid some potential system safety issues. Sticking with driverless vehicles, the UK's Ordnance Survey and Zenzic, the government- and industry-backed self-driving hub, are joining forces to define global standards for mapping. Such standards, plus a neutrally hosted platform for mapping data, would increase confidence in the data, says Ordnance Survey.


Other initiatives are being put forward in other sectors. For example, in life science, the Pistoia Alliance, a not-for-profit members' organisation made up of life science companies, technology and service providers, publishers and academic groups, has an AI community of interest to encourage all parties to collaborate and tackle issues related to AI.

Even with these initiatives, ‘truth’ often doesn’t really exist. As Caryn Tan, responsible AI manager at Accenture explains: “Ethical issues rarely present themselves as black and white. And to complicate matters, fairness doesn’t have a universal definition. This means AI will leave a huge grey area that organisations are yet to navigate.”


Learning to live with uncertainty

Perhaps, in the end, we will just have to learn to live with the fact that there are some situations where it is impossible to make the "right" decision, because things are not that clear cut, and that the best we can hope for are decisions that meet our ethical standards. As Richelle Dumond puts it, we forget that "algorithmic systems are not only made by us 'to err is human' people but are also trained by the data we give them." She also notes that "developers need to be explicit about the limitations of their systems."

Dr Jabe Wilson concurs, noting: “AI can’t simply be a black box that spits out answers we’re unable to verify or interrogate – we need to know how it has reached its conclusions.”
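Wilson's point can be sketched as a minimal design pattern (the rules and thresholds below are hypothetical, not from any real system): a decision function that returns its reasons alongside its verdict can be verified and interrogated; one that returns only the answer cannot.

```python
# Minimal sketch of an "interrogable" decision (hypothetical rules).
# Rather than a bare verdict, the function also returns the factors
# that fired, so a human can check or challenge the conclusion.

def assess_risk(speed_kmh, visibility_m):
    reasons = []
    if speed_kmh > 50:
        reasons.append(f"speed {speed_kmh} km/h exceeds 50 km/h limit")
    if visibility_m < 100:
        reasons.append(f"visibility {visibility_m} m below 100 m threshold")
    verdict = "high risk" if reasons else "low risk"
    return verdict, reasons

verdict, reasons = assess_risk(speed_kmh=70, visibility_m=60)
print(verdict)  # -> high risk
for r in reasons:
    print("-", r)  # two human-readable reasons
```

Real explainability for learned models is much harder than for hand-written rules like these, but the contract is the same: the answer arrives with the evidence for it.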
