What is ethical AI?

How do we define what a 'good outcome' is when it comes to algorithms?

While the rise of sentient machines is still a dystopian vision of the far future, the rise of smart software and systems has already begun.

Yet we've only scratched the surface of what artificial intelligence can do. Virtual assistants such as Siri, Alexa and Google Assistant, generally considered the most widespread application of the technology to date, demonstrate only a fraction of what is possible.

That said, our experimentation with AI does raise questions about its use. If we're to create systems that produce human-like results, how do we define human-like? Surely, if we're hoping for something that thinks like a human, shouldn't that come with all the myriad considerations that influence our decisions? How can we trust a machine to be fair?

It's because of these questions that we arrive at the idea of ethics - the moral principles that govern the actions of an individual or group, or, in this case, a machine. This is to say that AI ethics concerns not just how the technology is applied, but also the results and predictions it produces.

Defining a 'good outcome'

AI systems represent a divergence from traditional computing, where results follow deterministically from mathematical rules. If you enter 4 + 4 into a computer, the answer should always be 8, regardless of how sophisticated the machine is. Conventional software can be built to fit a variety of needs, but it is always written in a programming language with well-defined behaviour. In that sense, there is no ambiguity about what the result should or should not be.

Let's consider the example of a system designed to establish how happy a person is based on their facial characteristics. Such a system would need to be trained on a wide variety of demographics to account for every possible combination of race, age and gender. And even if we assume the system could account for all of that, how do we establish beyond doubt what happiness looks like?

Bias is one of the major problems with artificial intelligence, as its development is always shaped by the choices of the researchers involved, from the data they collect to the labels they apply. This makes it effectively impossible to create a system that is entirely neutral, and it is why the field of AI ethics is so important.
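To make the point concrete, here is a minimal, hypothetical sketch (not from the article, and deliberately oversimplified) of how a skewed training set biases a model. The "model" is a trivial classifier that predicts the overall majority label; because one group dominates the hypothetical data, the under-represented group's examples barely influence the result.

```python
# Illustrative only: bias from an unbalanced training set.
from collections import Counter

# Hypothetical training data: (group, label) pairs.
# Group "A" dominates the dataset; group "B" is under-represented.
training = [("A", "happy")] * 90 + [("A", "sad")] * 5 + [("B", "sad")] * 5

# A pooled model ignores group membership and simply predicts
# the most common label across the whole dataset.
overall_majority = Counter(label for _, label in training).most_common(1)[0][0]
print(overall_majority)  # "happy"

def group_accuracy(group):
    """Fraction of a group's examples the pooled model gets right."""
    labels = [label for g, label in training if g == group]
    correct = sum(1 for label in labels if label == overall_majority)
    return correct / len(labels)

print(round(group_accuracy("A"), 2))  # 0.95 - works well for the majority group
print(round(group_accuracy("B"), 2))  # 0.0  - fails entirely for the minority group
```

The aggregate accuracy looks respectable (90 of 100 examples correct), which is exactly why this kind of bias can go unnoticed: it only becomes visible when performance is broken down per group.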

Roboethics

Roboethics refers to the idea that humans should treat robots as if they were real humans. It defines AI as a moral entity, placing obligations on humans to care for the technology much in the same way as they would an animal.

This way of thinking asserts that robots have the right to exist alongside humans and, therefore, should be free to think and conduct their duties without intervention or impediment.

Roboethics is being taken so seriously by lawmakers that the Institute for the Future and the UK Department of Trade and Industry are now considering introducing formal laws protecting the rights of robots.

Some nations are even granting robots residency. For example, humanoid Sophia has been granted citizenship in Saudi Arabia.

Arguments against ethical AI

Some industry thinkers have, however, pushed back against this view, arguing that robots and artificial intelligence cannot be treated as if they were human.

Computer scientist Joseph Weizenbaum argued that non-human systems shouldn't be used in roles that rely on human interaction or relationship building. He said that positions of responsibility such as customer service agents, therapists, carers for the elderly, police officers, soldiers and judges should never be filled by artificial intelligence - whether physical robots or any other system - as doing so would go against human intuition.

These roles demand genuine empathy, and however human-like an interaction with artificial intelligence may seem, it cannot replicate the emotional understanding those jobs require.

Political reaction to ethical AI

The UK is taking a central role in the evolution of ethical AI. Former Prime Minister Theresa May pledged to develop a Centre for Data Ethics and Innovation to make sure society is prepared for data-driven technologies.

"From helping us deal with the novel ethical issues raised by rapidly-developing technologies such as artificial intelligence, agreeing best practice around data use to identifying potential new regulations, the Centre will set out the measures needed to build trust and enable innovation in data-driven technologies," May said. "Trust underpins a strong economy, and trust in data underpins a strong digital economy."

In April 2019, the European Commission published a set of guidelines for the ethical development of artificial intelligence, chief among them the need for consistent human oversight.

Business reaction to ethical AI

Google was one of the first companies to vow that its AI will only ever be used ethically - for example, that it will never be engineered into a weapon. The company's boss, Sundar Pichai, said Google won't take part in AI-powered surveillance either.

Google published its own ethical code of practice in June 2018 in response to widespread criticism over its relationship with the US government's weapons programme. The company has since said it will no longer cooperate with the US government on projects intended to weaponise algorithms.

Amazon, Google, Facebook, IBM and Microsoft have joined forces to develop best practice for AI. A big part of that work examines how AI can - and should - be used ethically, alongside sharing ideas on educating the public about the uses of AI and the other issues surrounding the technology.

The consortium explained: "This partnership on AI will conduct research, organise discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning."
