How the driverless vehicles dilemma highlights wider system safety issues

Good vs Evil

Artificial intelligence (AI) is already playing a significant part in the work of many organisations. This is only set to increase. PwC has predicted a potential $15 trillion (£11.4 trillion) boost to global GDP from AI by 2030.

While AI has great potential, it also raises challenges, notably around decision ethics. How will non-human systems that have to make decisions do so in an ethically sound way that is acceptable to the humans on the receiving end? Consider the much-used example of a driverless vehicle caught in a circumstance where it has to harm someone, sometimes known as the "trolley problem". Which pedestrians or other road users does it “save”, and which does it “sacrifice”?

As we imbue systems with the ability to learn, how do we ensure their algorithms are set to do the right thing, and how do they make a judgement when none of the possible outcomes are positive?
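To make that dilemma concrete, here is a toy sketch of the kind of "least-bad" comparison such a judgement implies. The option names, probabilities and harm scores are entirely hypothetical; this illustrates the framing of the problem, not any real autonomous-driving policy.

```python
# A toy sketch (not a real autonomous-driving policy): when every outcome
# carries some harm, one common framing is to score each option against an
# explicitly stated cost function and pick the least-bad one. The figures
# below are illustrative assumptions, not anyone's actual ethics model.

from typing import Dict, List

def least_bad_option(options: List[Dict]) -> Dict:
    """Return the option with the lowest total expected harm score."""
    def harm(option: Dict) -> float:
        # Each option lists the likelihood of harm and assumed severities (0-1).
        return sum(option["probability"] * s for s in option["severities"])
    return min(options, key=harm)

options = [
    {"name": "swerve left", "probability": 0.8, "severities": [0.9]},       # one serious injury likely
    {"name": "brake only",  "probability": 0.6, "severities": [0.4, 0.4]},  # two moderate injuries possible
]
print(least_bad_option(options)["name"])  # prints "brake only" under these assumed scores
```

Even in this toy form, the uncomfortable part is plain: someone has to choose, and justify, the harm weights.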

Can we ever trust AI?

AIs are made by us, so ultimately we are responsible for ensuring they don’t exhibit any skewed ‘thinking’ or bias. Margherita Pagani, director of the Research Centre on Artificial Intelligence in Value Creation at Emlyon Business School, tells IT Pro: “There are three major sources of bias in AI algorithms – the training data set, constraints that are given to algorithms to learn as we want them to, and the principles of AI algorithms themselves – what they look for.”
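As an illustration of the first source Pagani names, the training data set, a simple check like the one below can reveal skew in a sensitive attribute before any model is trained. The column names and figures are invented for the example.

```python
# A minimal, hypothetical sketch of inspecting a training set for skew in a
# sensitive attribute before training. The rows and column names are made up.

from collections import Counter

training_rows = [
    {"age_group": "18-30", "label": 1},
    {"age_group": "18-30", "label": 0},
    {"age_group": "18-30", "label": 1},
    {"age_group": "60+",   "label": 0},
]

counts = Counter(row["age_group"] for row in training_rows)
total = sum(counts.values())
for group, n in counts.items():
    print(f"{group}: {n / total:.0%} of training data")
# Heavily skewed groups are a warning sign that the resulting model may
# behave poorly, or unfairly, for under-represented groups.
```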


This seems perfectly logical. But what about protecting public safety? If we design a system with a great deal of inbuilt AI, how do we ensure humans are protected? Richelle Dumond, UX researcher at PARC, says: “We know AI systems will not always make the right decision.” She counsels against putting undue faith in them, explaining: “One doesn't have to listen to a judgment an algorithmic system makes, and I would encourage everyone always to question any decision made.”

When it comes to the wider question of trust, Pagani has this to say: “Commercial companies don't always have society's best interests in mind when developing AI systems. A business model is often centered around IP and because of this, their systems can be more opaque than we'd like them to be.”

A regulatory issue

For some, the responsibility for ensuring safety issues are adequately handled lies with regulators and legislators. Dr Jabe Wilson, consulting director of text and data analytics at Elsevier, says: “We’ll need to see regulators design new frameworks and pass additional legislation to ensure unethical use of AI is prohibited, and bias reduced wherever possible.”

This question of bias is a crucial one when considering system safety issues. Our theoretical driverless vehicle needs to know how to value individuals in its “save or sacrifice” decision, and to understand that nothing short of a zero casualty rate is truly acceptable. Yet it may still find itself forced to act in a way we would consider unacceptable.

Searching for a single source of truth

In some sectors, AIs are starting to benefit from consortia or other initiatives intent on developing a single source of truth to help avoid some potential system safety issues. Sticking with driverless vehicles, the UK’s Ordnance Survey and Zenzic, the government- and industry-backed self-driving hub, are joining forces to define global standards for mapping. Such standards, plus a neutrally hosted platform for mapping data, would increase confidence in the data, says Ordnance Survey.

Other initiatives are being put forward in other sectors. For example, in life science, the Pistoia Alliance, a not-for-profit members’ organisation made up of life science companies, technology and service providers, publishers and academic groups, has an AI community of interest to encourage all parties to collaborate and tackle issues related to AI.

Even with these initiatives, ‘truth’ often doesn’t really exist. As Caryn Tan, responsible AI manager at Accenture explains: “Ethical issues rarely present themselves as black and white. And to complicate matters, fairness doesn’t have a universal definition. This means AI will leave a huge grey area that organisations are yet to navigate.”


Learning to live with uncertainty

Perhaps, in the end, we will just have to learn to live with the fact that there are some situations where it is impossible to make the “right” decision, because things are not that clear-cut, and that the best we can hope for are decisions that meet our ethical standards. As Richelle Dumond puts it, we forget that “algorithmic systems are not only made by us ‘to err is human’ people but are also trained by the data we give them.” She also notes that “developers need to be explicit about the limitations of their systems.”


Dr Jabe Wilson concurs, noting: “AI can’t simply be a black box that spits out answers we’re unable to verify or interrogate – we need to know how it has reached its conclusions.”
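One hypothetical way to picture what Wilson is asking for: with a deliberately simple linear scoring model, the output can be decomposed into per-feature contributions, so a human can interrogate how a conclusion was reached. The feature names and weights below are illustrative assumptions, not any real scoring model.

```python
# A minimal, hypothetical sketch of making a model's conclusion inspectable:
# decompose a simple linear score into per-feature contributions.
# Feature names, weights and applicant values are invented for illustration.

weights = {"income": 0.6, "missed_payments": -1.2, "account_age_years": 0.3}
applicant = {"income": 1.5, "missed_payments": 2.0, "account_age_years": 4.0}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"score = {score:.2f}")
for feature, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {feature}: {value:+.2f}")
# Surfacing per-feature contributions is one way to make the reasoning
# behind a decision visible rather than a black box.
```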

Sandra Vogel
Freelance journalist

Sandra Vogel is a freelance journalist with decades of experience in long-form and explainer content, research papers, case studies, white papers, blogs, books, and hardware reviews. She has contributed to ZDNet, national newspapers and many of the best-known technology websites.

At ITPro, Sandra has contributed articles on artificial intelligence (AI), measures that can be taken to cope with inflation, the telecoms industry, risk management, and C-suite strategies. In the past, Sandra also contributed handset reviews for ITPro and has written for the brand for more than 13 years in total.