What is ethical AI?

How do we define what a 'good outcome' is when it comes to algorithms?


The chances of some dystopian rise of the machines in the mould of Skynet are slim, to say the least. Artificial intelligence (AI) technology is certainly improving, but the idea of machines being granted consciousness and, in turn, enslaving the human race is better left to fiction.

Some of the most widely used AI in the world, at least the type most of us are familiar with, takes the form of virtual assistants such as Apple's Siri and Amazon's Alexa, which have driven the development of natural language processing (NLP) technology. AI is also used for automated facial recognition, and in big data analytics, in which data is assessed to derive insights that can drive sharper business decision-making.

Although machines with truly human-like thought and intelligence remain some way off, there's a wealth of potential in AI given the amount of research being done by businesses and academics alike. The more this kind of innovation intrudes on our personal and professional lives, however, the greater the need for a conversation about the ethics involved in elements like recommendation engines and facial recognition.

It's difficult to establish how algorithms arrive at certain conclusions, for example, and whether the results they produce can be trusted to be free of any biases already baked into the data they were trained on.

If we're to create systems that produce human-like results, how do we define human-like? Surely, if we're hoping for something that thinks like a human, shouldn't that come with all the myriad considerations that influence our decisions? How can we trust a machine to be fair?

It's because of these questions that we arrive at the idea of ethics – namely, the moral principles that govern the actions of an individual or group, or, in this case, a machine. This is to say that AI ethics does not simply concern the application of the technology – the results and predictions of AI are just as important.

Defining a 'good outcome'

AI systems represent a divergence from traditional computing, which bases its results on fixed mathematical principles. If you enter 4 + 4 into a computer, the answer should always be 8, regardless of how sophisticated the machine is. In conventional app development, new software can be created to fit a variety of needs, but it's always built on a predefined programming language; in that sense, there's no ambiguity about what the result should or should not be.
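To make that contrast concrete, here's a minimal sketch in Python. The one-nearest-neighbour "model" is a hypothetical stand-in for a learned system, and the data and labels are invented purely for illustration.

```python
def add(a, b):
    # Classical computation: the same inputs always produce the same output.
    return a + b

def nearest_label(sample, training_data):
    # A toy one-nearest-neighbour "model": its answer depends entirely
    # on the examples it happens to have been trained on.
    closest = min(training_data, key=lambda pair: abs(pair[0] - sample))
    return closest[1]

print(add(4, 4))  # Always 8, on any machine.

# The same input, classified against two different training sets:
print(nearest_label(5, [(1, "low"), (8, "high")]))   # -> "high"
print(nearest_label(5, [(1, "low"), (20, "high")]))  # -> "low"
```

The arithmetic never changes, but the learned answer flips with the training data, which is exactly where questions of fairness and trust enter.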

Let's consider the example of a system designed to establish how happy a person is based on their facial characteristics. A system would need to be trained on a variety of demographics to account for all the combinations of race, age and gender possible. What's more, even if we were to assume the system could account for all of that, how do we establish beyond doubt what happiness looks like?
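One practical starting point is auditing how well the training data actually covers each demographic combination. The sketch below assumes a hypothetical set of labelled records; the field names and values are illustrative only, not a real dataset.

```python
from collections import Counter

# Hypothetical training records: (race, age_band, gender, label).
# Field names and values are illustrative assumptions, not a real corpus.
records = [
    ("A", "18-30", "F", "happy"),
    ("A", "18-30", "M", "neutral"),
    ("B", "31-50", "F", "happy"),
    ("A", "51+", "M", "happy"),
]

# Count how often each demographic combination appears in the training set.
coverage = Counter((race, age, gender) for race, age, gender, _ in records)

for group, count in sorted(coverage.items()):
    print(group, count)

# Combinations that never appear are exactly the people the model has
# never "seen" -- its behaviour for them is untested, not neutral.
```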

Bias is one of the major problems with artificial intelligence, as its development is always shaped by the choices of the researchers involved. This effectively makes it impossible to create a system that's entirely neutral, and it's why the field of AI ethics is so important.
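Bias can at least be measured. One common first check, sketched below with made-up numbers, is the disparate impact ratio: comparing the rate of favourable outcomes across groups and flagging anything below the conventional 0.8 threshold.

```python
# A minimal sketch of one common bias check: the disparate impact ratio
# (the "80% rule" used in US employment law). All numbers are made up.

def positive_rate(outcomes):
    # Fraction of people in a group who received the favourable outcome.
    return sum(outcomes) / len(outcomes)

# Hypothetical model decisions (1 = favourable) for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% favourable
group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # 37.5% favourable

ratio = positive_rate(group_b) / positive_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 here

# A ratio below 0.8 is a conventional warning sign that outcomes differ
# across groups more than chance or merit would explain.
```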

Roboethics

Roboethics, or robot ethics, is the practice of designing artificially intelligent systems using codes of conduct that ensure an automated system can respond to situations in an ethical way – that is, ensuring a robot behaves in a way that fits the ethical framework of the society it operates in.

Like traditional ethics, roboethics involves ensuring that when a system that's capable of making its own decisions comes into contact with humans, it's able to prioritise the health and wellbeing of the human above all else, while also behaving in a way that's considered appropriate to the situation.

Roboethics often features heavily in discussions around the use of artificial intelligence in combat situations, a popular school of thought being that robots should never be built to explicitly harm or kill human beings.

While roboethics usually focuses on a robot's resulting actions, the field is concerned only with the thoughts and actions of the human developers behind it, rather than with the robot itself. For that, we turn to machine ethics, which is concerned with the process of adding moral behaviours to AI machines.

Arguments against ethical AI

Some industry thinkers have, however, pushed back against ethical AI, arguing that it's not possible to treat robots and artificial intelligence as equivalent to their human counterparts.

Famed computer scientist Joseph Weizenbaum argued as early as the 1960s that non-human beings shouldn't be used in roles that rely on human interaction or relationship building. He said that roles of responsibility such as customer service agents, therapists, carers for the elderly, police officers, soldiers and judges should never be replaced by artificial intelligence, whether in the form of physical robots or any other system, as doing so would go against human intuition.

In these roles, humans need to experience empathy; however human-like interactions with artificial intelligence become, he argued, they will never replace the emotions at the heart of this kind of work.

Political reaction to ethical AI

The UK is taking a central role in the evolution of ethical AI. Former prime minister Theresa May pledged to develop a Centre for Data Ethics and Innovation to make sure society is prepared for data-driven technologies.

"From helping us deal with the novel ethical issues raised by rapidly-developing technologies such as artificial intelligence, agreeing best practice around data use to identifying potential new regulations, the Centre will set out the measures needed to build trust and enable innovation in data-driven technologies," May said. "Trust underpins a strong economy, and trust in data underpins a strong digital economy."

In April 2019, the European Commission published a set of guidelines for the ethical development of artificial intelligence, chief among them the need for consistent human oversight.

Business reaction to ethical AI

Google was one of the first companies to vow that its AI will only ever be used ethically – i.e. it will never be engineered to become a weapon. The company's boss, Sundar Pichai, said Google won't partake in AI-powered surveillance either.

Google published its own ethical code of practice in June 2018 in response to widespread criticism over its involvement in a US government weapons programme. The company has since said it will no longer cooperate with the US government on projects intended to weaponise algorithms.

Amazon, Google, Facebook, IBM and Microsoft have joined forces to develop best practice for AI, a big part of which involves examining how AI should – and can – be used ethically, as well as sharing ideas on educating the public about the uses of AI and other issues surrounding the technology.

The consortium explained: "This partnership on AI will conduct research, organise discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning."


Following a disastrous trial with its online chatbot Tay in March 2016, Microsoft has since taken steps to overhaul its internal policies regarding the development of AI, particularly when involving sensitive use cases. This includes the creation of the Office for Responsible AI, which is in charge of recommending and implementing AI policy across the business, and the so-called ‘Aether’ (AI, Ethics and Effects in Engineering and Research) Committee, a non-binding advisory body made up of key stakeholders.

Microsoft is also working closely with the European Union on the development of an AI regulatory framework, considered to be the first of its kind. The hope is to create something that offers the same sort of guardrails and principles that GDPR establishes around the use of data.
