
What is ethical AI?

How do we define what a 'good outcome' is when it comes to algorithms?

When you first try to imagine what the term ethical AI means, it's easy to jump straight to fictional AI, such as some kind of Skynet-style technology that exists only to take over the world. Artificial intelligence (AI) technology is indeed improving drastically every year, but thankfully the idea that machines will develop a brain and take over the human race isn't feasible (yet).

Many AI applications are fairly mundane in their day-to-day use, even though they help improve our lives. Take Google Home or Amazon's Alexa, where the revolutionary technology lies hidden inside a virtual assistant: both rely on natural language processing (NLP) to improve the way the AI communicates with users. AI crops up in countless other ways too, from automating office tasks to big data analytics.

AI constantly improves thanks to researchers and academics who create new applications and approaches for businesses to make use of. It's essential, however, that as the technology improves, there are conversations around the ethics of AI, since it sits at the heart of applications that could affect our privacy or data protection rights. One example of this is facial recognition, which is used by law enforcement, sometimes without the public's knowledge, and is seen as highly controversial.

When it comes to ethical AI, a well-known issue is how these algorithms reach their conclusions. Because the algorithms don't tend to be transparent, it can be hard to know whether their datasets contain any kind of bias that feeds into the system's conclusions. A further problem with developing AI that aims to produce human-like results is that we can't be sure these systems weigh the ethical considerations a human would when making a decision.

[Image: Facial recognition is deemed a contentious application of AI technology | Credit: Shutterstock]

It's because of these questions that we arrive at the idea of ethics – namely, the moral principles that govern the actions of an individual or group, or, in this case, a machine. This is to say that AI ethics does not simply concern the application of the technology – the results and predictions of AI are just as important.

Defining a 'good outcome'

AI systems represent a departure from traditional computers, which base their results on mathematical principles. If you enter 4 + 4 into a computer, the answer should always be 8, regardless of how sophisticated the machine is. In conventional app development, new software can be created to fit a variety of needs, but it is always built on a predefined programming language with explicit rules. In that sense, there is no ambiguity about what the result should or should not be.

Let's consider the example of a system designed to establish how happy a person is based on their facial characteristics. A system would need to be trained on a variety of demographics to account for all the combinations of race, age and gender possible. What's more, even if we were to assume the system could account for all of that, how do we establish beyond doubt what happiness looks like?
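To make the problem concrete, here is a minimal, purely illustrative Python sketch of how a classifier "trained" mostly on one demographic group can systematically misclassify another. The "smile width" feature, the data and the threshold are all invented for the sake of the example; real systems learn from image data rather than a single hand-picked number.

```python
# A minimal, hypothetical sketch of how an unrepresentative training set skews
# a "happiness" classifier. The data, feature and threshold are invented for
# illustration only; real systems use image features, not a single number.
from statistics import mean

# Imagined training examples: (smile_width, labelled_happy), collected mostly
# from one demographic group ("A"), with group "B" barely represented.
training_data = {
    "A": [(0.9, True), (0.8, True), (0.3, False), (0.2, False)] * 25,  # 100 samples
    "B": [(0.6, True), (0.1, False)],                                   # 2 samples
}

# "Training": derive a decision threshold from the pooled data. The majority
# group dominates, so the learned threshold mostly reflects group A.
all_samples = [s for group in training_data.values() for s in group]
happy_scores = [score for score, label in all_samples if label]
threshold = mean(happy_scores) * 0.8  # crude learned cut-off

def predict_happy(smile_width: float) -> bool:
    """Classify a face as 'happy' using the learned threshold."""
    return smile_width >= threshold

# Group B's genuinely happy faces (smile_width ~0.6) fall below a threshold
# tuned to group A, so they are systematically misclassified.
print(f"Learned threshold: {threshold:.2f}")
print("Group B happy face (0.6):", predict_happy(0.6))
print("Group A happy face (0.9):", predict_happy(0.9))
```

Run as-is, the group that dominates the training data sets the decision threshold, and the under-represented group's genuinely happy faces are misread as unhappy – which is the essence of dataset bias.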

Bias is one of the major problems with artificial intelligence, as its development is always shaped by the choices of the researchers involved. This effectively makes it impossible to create an entirely neutral system, and it is why the field of AI ethics is so important.

Roboethics

Roboethics, or robot ethics, is the principle of designing artificially intelligent systems using codes of conduct that ensure an automated system can respond to situations ethically. That is, ensuring a robot behaves in a way that fits the ethical framework of the society it's operating in.


Like traditional ethics, roboethics involves ensuring that when a system that's capable of making its own decisions comes into contact with humans, it's able to prioritise the health and wellbeing of the human above all else, while also behaving in a way that's considered appropriate to the situation.

Roboethics often features heavily in discussions around the use of artificial intelligence in combat situations, a popular school of thought being that robots should never be built to explicitly harm or kill human beings.

While roboethics usually focuses on the actions a robot ultimately takes, the field is concerned with the thinking and conduct of the human developers behind it, rather than with the robot itself. For the machine's own moral reasoning, we turn to machine ethics, which is concerned with the process of adding moral behaviours to AI machines.

Arguments against ethical AI

Some industry thinkers have, however, pushed back against ethical AI, arguing that robots and artificial intelligence cannot be treated as counterparts to humans.

Famed computer scientist Joseph Weizenbaum argued as far back as the 1960s that non-human beings shouldn't be used in roles that rely on human interaction or relationship building. He said that roles of responsibility such as customer service agents, therapists, carers for the elderly, police officers, soldiers and judges should never be handed to artificial intelligence – whether physical robots or any other kind of system – as doing so would go against human intuition.

In these roles, humans need to experience genuine empathy, and however human-like interactions with artificial intelligence become, they will never be able to replace the emotions involved in the scenarios where these jobs exist.

Political reaction to ethical AI

The UK is taking a central role in the evolution of ethical AI. Former prime minister Theresa May pledged to develop a Centre for Data Ethics and Innovation to make sure society is prepared for data-driven technologies.


"From helping us deal with the novel ethical issues raised by rapidly-developing technologies such as artificial intelligence, agreeing on best practices around data use to identifying potential new regulations, the Centre will set out the measures needed to build trust and enable innovation in data-driven technologies," May said. "Trust underpins a strong economy, and trust in data underpins a strong digital economy."

In April 2019, the European Commission published a set of guidelines for the ethical development of artificial intelligence, chief among them the need for consistent human oversight.

Business reaction to ethical AI

Google was one of the first companies to vow that its AI will only ever be used ethically – i.e. it will never be engineered to become a weapon. The company's boss, Sundar Pichai, said Google won't partake in AI-powered surveillance either.

Google published its ethical code of practice in June 2018 in response to widespread criticism over its relationship with the US government's weapons programme. The company has since said it will no longer cooperate with the US government on projects intended to weaponise its algorithms.

Amazon, Google, Facebook, IBM, and Microsoft have joined forces to develop best practices for AI, with a big part of that examining how AI should, and can, be used ethically, as well as sharing ideas on educating the public about the uses of AI and other issues surrounding the technology.

The consortium explained: "This partnership on AI will conduct research, organise discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advances the understanding of AI technologies including machine perception, learning, and automated reasoning."

Following a disastrous trial of its online chatbot Tay in March 2016, Microsoft has taken steps to overhaul its internal policies regarding the development of AI, particularly where sensitive use cases are involved. This includes the creation of the Office for Responsible AI, which is in charge of recommending and implementing AI policy across the business, and the so-called ‘Aether’ (AI, Ethics and Effects in Engineering and Research) Committee, a non-binding advisory body made up of key stakeholders.

Microsoft has also cooperated with the European Union on the development of an AI regulatory framework, a draft version of which was finally published on 21 April 2021. Under the proposed regulations, EU citizens will be protected from the use of AI for mass surveillance by law enforcement, a practice ruled unlawful in the UK in 2020. The use of AI in recruitment, credit score evaluation and border control management will also be classified as "high-risk" due to discrimination concerns, while systems that allow ‘social scoring’ by governments will be banned.

Companies that break the rules would face fines of up to 6% of their global turnover or €30 million, whichever is the higher figure – slightly higher than the already steep fines imposed under GDPR. The European Commission will now have to thrash out the details of the proposed regulations with EU national governments and the European Parliament before the rules can come into force, a process that could take several years.

Meanwhile, in the UK, the Trades Union Congress (TUC) is calling for increased legal protections for workers as the use of AI for employee-related decision-making, such as hiring and firing, becomes more common in the workplace. Recent examples include claims made by former Uber Eats workers, who accused the food delivery service of unfair dismissal after the facial identification software used by the company failed to recognise their faces. The system, known as a “photo comparison” tool, asks Uber couriers and drivers to take a photograph of their face, which is then authenticated using AI by comparing it to a photograph in the company’s database. The TUC argues that, as a result, employers seeking to use “high-risk” AI should be legally obligated to consult with trade unions.
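For context, photo-matching tools of this kind generally work by running both images through a face-recognition model that converts each one into a numerical 'embedding', then measuring how similar the two embeddings are. The snippet below is a hypothetical Python sketch of that general technique, not Uber's actual system; the vectors, threshold and function names are all invented for illustration.

```python
# Hypothetical sketch of embedding-based face verification, the general
# technique behind "photo comparison" tools. This is NOT Uber's actual system;
# the embeddings, threshold and helper names are invented for illustration.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(selfie_embedding: list[float],
           reference_embedding: list[float],
           threshold: float = 0.85) -> bool:
    """Accept the worker only if the new selfie is close enough to the
    reference photo on file. A threshold tuned on one demographic can
    silently reject faces the model was not trained to represent well."""
    return cosine_similarity(selfie_embedding, reference_embedding) >= threshold

# Toy vectors standing in for the output of a face-recognition model.
reference = [0.12, 0.98, 0.33]
new_selfie = [0.10, 0.95, 0.40]
print("Verified:", verify(new_selfie, reference))
```

If the underlying model produces poor embeddings for some groups of faces, legitimate workers fall below the similarity threshold and are rejected, which is the failure mode the TUC is concerned about.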
