What is ethical AI?

As a concept, ethical artificial intelligence (AI) can seem quite abstract or like something out of a science-fiction film. Today’s AI systems are thankfully a far cry from the likes of Skynet or HAL 9000, but there is nevertheless an important conversation to be had about how AI can be ethically implemented.

In short, ethical AI is an artificial intelligence system that adheres to principles such as accountability and transparency and has been designed with factors such as privacy and individual human rights in mind.

Work on ethical AI solutions is progressing at pace, as dilemmas around AI have already become a matter of public concern. AI-powered facial recognition, for example, has become a law enforcement tactic, a controversial practice criticized in particular for the lack of transparency with which some systems have been deployed.

When it comes to ethical AI, a well-known issue is how these algorithms arrive at their conclusions. Because the algorithms tend not to be transparent, it can be hard to know whether their training data contains bias that is then reflected in the system's decisions.
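
To make that concrete, the short Python sketch below shows one simple way a hidden skew can be surfaced: comparing a model's approval rates across two groups. The data, group labels, and 80% threshold are invented for illustration and do not describe any real system.

# Illustrative only: audit a model's decisions for group-level skew.
# The data, group labels, and 80% threshold are hypothetical assumptions.
from collections import defaultdict

# (group, model_decision) pairs as they might come out of a scoring system
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision

rates = {g: approvals[g] / totals[g] for g in totals}
print("Approval rate by group:", rates)

# A common rule of thumb (the "four-fifths rule") flags a problem when one
# group's rate falls below 80% of another's.
worst, best = min(rates.values()), max(rates.values())
if best > 0 and worst / best < 0.8:
    print("Possible disparate impact: ratio =", round(worst / best, 2))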

Disclaimer

Ethical AI falls within a broader field of study commonly known as AI ethics, in which sector experts and academics attempt to set out an ethical and legal framework that developers can refer to in order to produce reliable, ethical AI.

In other words, AI ethics is the discipline through which ethical guidelines are set, while ethical AI is a practical implementation of AI technology that fits those guidelines. The two terms are often confused across the internet, but the focus of this article is on ethical AI.

A problem with developing AI that aims to produce human-like results is that one cannot easily encode 'ethics' into a system that operates through an extremely involved series of mathematical decisions.

Have a text conversation with a chatbot powered by generative AI and one could easily convince oneself that the program was responding intelligently, or even taking into account what would be 'right' to say in any given situation.

But this is not the case: the responses you receive are simply those the system has learned are most likely to be relevant given the line of conversation.
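
A toy Python sketch of that idea is shown below: a 'chatbot' that simply returns whichever reply it has seen most often for a given prompt. The miniature corpus is invented for illustration, and real generative models predict tokens with neural networks rather than frequency counts, but the principle of choosing the statistically likely continuation is the same.

# Toy illustration: a "chatbot" that only echoes the statistically most
# common continuation it has seen. The tiny corpus is invented.
from collections import Counter, defaultdict

corpus = [
    "how are you | i am fine thanks",
    "how are you | doing well today",
    "how are you | i am fine thanks",
    "tell me a joke | why did the chicken cross the road",
]

# Count which reply followed each prompt in the "training data"
replies = defaultdict(Counter)
for line in corpus:
    prompt, reply = [part.strip() for part in line.split("|")]
    replies[prompt][reply] += 1

def respond(prompt: str) -> str:
    seen = replies.get(prompt)
    if not seen:
        return "(no learned response)"
    # Pick the most frequent reply: likelihood, not judgement about what is 'right'
    return seen.most_common(1)[0][0]

print(respond("how are you"))      # -> "i am fine thanks"
print(respond("tell me a joke"))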

It is important to note that a great many AI systems do not use any form of generative AI. Particularly in the business world, AI may be used to surface patterns in data or automate tasks rather than create new content from inputs and training data.

All AI carries the risk of magnifying errors or biases to produce unethical results. An AI recruitment system could unethically favor one group of applicants over another, as happened with Amazon's AI hiring tool, which demonstrated sex bias due to the skewed historical data it was fed.

Similarly, generative AI can produce unethical text or image outputs. Microsoft learned this the hard way with its Tay chatbot, which was trained by Twitter users into spewing hate speech within 16 hours of its release in March 2016.

There is no question that all AI has to be approached with ethics in mind. But the risk profile associated with AI systems will vary. 

What is the purpose of ethical AI?

AI presents unique challenges when it comes to output. Unlike traditional algorithms, which can easily be quantified and have predictable outcomes - say, a simple program for multiplication or even one for encryption - the results of an AI system are weighted by a complex series of statistical models.

Input also plays an important role in how 'good' the outcome of an AI system is. Particularly for large generative AI models, adequately prepared datasets and training methodologies can make the difference between insightful and offensive content.

This applies to bias within data or even the bias of the framework through which a model was trained. 

Dell CTO John Roese gave ITPro an example of Dell's efforts to remove all non-inclusive language from its internal code and content. If the company chose to use this data to train an LLM to produce marketing content about its internal environment, this would in theory prevent any of the unwanted language from appearing in the LLM's output.

Using this as a basis for ethical results, one could define a ‘good’ output for AI as that which falls within the ethical boundaries set out by the developers.
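
A minimal Python sketch of the kind of data-hygiene step Roese describes is shown below, assuming a simple blocklist-and-replace approach. The terms and substitutions are illustrative assumptions, not Dell's actual tooling or word list.

# Illustrative sketch: filter a training corpus against a blocklist of
# non-inclusive terms before it is used to train a model.
# The blocklist and replacements are hypothetical, not Dell's actual list.
import re

REPLACEMENTS = {
    "whitelist": "allowlist",
    "blacklist": "blocklist",
    "master": "primary",
    "slave": "replica",
}

pattern = re.compile(r"\b(" + "|".join(REPLACEMENTS) + r")\b", re.IGNORECASE)

def clean(text: str) -> str:
    # Replace each flagged term, preserving simple capitalisation
    def swap(match: re.Match) -> str:
        term = match.group(0)
        repl = REPLACEMENTS[term.lower()]
        return repl.capitalize() if term[0].isupper() else repl
    return pattern.sub(swap, text)

corpus = ["Add the host to the whitelist.", "Promote the slave node."]
print([clean(line) for line in corpus])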

Recognizing that bias in AI cannot be completely eliminated is an important step in its development. It is effectively impossible to create an entirely neutral system, but the field of AI ethics can help shape the guardrails that developers build into their systems.

For example, many popular models such as OpenAI's GPT-4 have been refined using reinforcement learning from human feedback (RLHF), in which human testers are shown two potential outputs and choose the better of the two.

Over time, the AI can infer the parameters within which it is expected to operate, becoming more accurate and also less likely to produce harmful output such as hate speech.
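
Below is a heavily simplified Python sketch of that preference step, using a pairwise (Bradley-Terry style) update over per-response scores. Real RLHF pipelines train a neural reward model and then optimize the language model against it; the responses and judgements here are invented for illustration.

# Simplified sketch of learning from pairwise human preferences.
# A real RLHF pipeline trains a neural reward model; here the "reward" is a
# single score per response and we nudge scores so preferred answers win.
import math

# Hypothetical human judgements: (preferred_response, rejected_response)
comparisons = [("answer_a", "answer_b"), ("answer_a", "answer_c"), ("answer_c", "answer_b")]

rewards = {"answer_a": 0.0, "answer_b": 0.0, "answer_c": 0.0}
lr = 0.1

for _ in range(200):
    for chosen, rejected in comparisons:
        # Bradley-Terry: probability the human prefers `chosen` over `rejected`
        p = 1.0 / (1.0 + math.exp(rewards[rejected] - rewards[chosen]))
        # Gradient step that pushes the preferred response's reward up
        rewards[chosen] += lr * (1.0 - p)
        rewards[rejected] -= lr * (1.0 - p)

print(sorted(rewards.items(), key=lambda kv: -kv[1]))  # answer_a should rank highest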

These methods can carry their own ethical concerns. A January 2023 report by Time alleged that OpenAI outsourced some of its model refinement work to a firm in Kenya where workers were paid as little as $1.32 per hour. 

Ethical AI is a complex field where all variables must be considered. Businesses must approach any AI system from a holistic perspective, and be prepared to continually reassess their strategy.

Arguments against ethical AI

Some industry thinkers have attacked the idea of ethical AI, arguing that it is fundamentally impossible to assess artificial intelligence on the same basis as humans.

This goes hand-in-hand with discussions around whether roles that carry particular ethical weight, such as advising customers or taking part in processes that affect people's fundamental rights, are ever appropriate for AI.

Famed computer scientist Joseph Weizenbaum argued from the 1960s onwards that non-human systems shouldn't be used in roles that rely on human interaction or relationship building.

Computer scientist Joseph Weizenbaum in 1977 (Image credit: Getty Images)

In these roles, humans need to show empathy and, however human-like interactions with artificial intelligence become, they will never be able to replace the emotions experienced in the scenarios where these job roles exist.

However, as developers and many businesses are already implementing AI systems, these theoretical concerns must sit alongside practical ethical mitigations within current AI models.

Prominent AI companies Anthropic, Google, Microsoft, and OpenAI have, for example, formed a group known as the Frontier Model Forum, which seeks to help the private and public sectors implement AI systems safely and ethically.

Boundaries for the use of AI are important considerations that technologists must take into account. Beyond the risks associated with replacing any given role with an AI system, one must also factor in the rights of individuals who come into contact with an AI system, particularly those whose creators attempt to pass them off as real human interaction.

Political reaction to ethical AI

This question of individual rights in relation to AI has been the focal point of proposed EU legislation, which seeks to set out specific responsibilities for AI developers and shield citizens from potential harm.

Under the terms of the risk-based regulation, AI systems would be ranked according to the risk they pose, with some deemed 'unacceptable' due to their potential to violate the rights of citizens.

Such technologies include the use of AI for real-time biometric tracking, such as live facial recognition.

The developers of all AI systems will be required to provide insight into the data they have collected to build their AI training datasets, and high-risk systems - those that pose "significant risks to the health and safety or fundamental rights of persons" - will face particular scrutiny.

Copyright holders in particular could benefit from rigid requirements on developers to disclose the intellectual property on which their models have been trained.
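
To illustrate the tiered structure, the Python sketch below maps a handful of example use cases onto the kind of risk categories the proposal describes. The tier names follow the draft's general approach, but the specific use cases and their assignments are illustrative assumptions rather than legal guidance.

# Illustrative sketch of a risk-based classification in the spirit of the
# proposed EU rules. The tiers reflect the draft's general structure, but the
# example use cases and their assignments are assumptions, not legal advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed with strict obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Hypothetical mapping for illustration only
EXAMPLE_USE_CASES = {
    "real-time facial recognition in public spaces": RiskTier.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk ({tier.value})"

for case in EXAMPLE_USE_CASES:
    print(obligations(case))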

Firms that fail to comply with the laws face up to €20 million in fines, or 4% of their annual worldwide turnover - whichever is higher.

The UK has moved more slowly than the EU on AI regulation, with the government committing not to over-enforce AI rules over fears that doing so could hamper innovation.

Its whitepaper, A pro-innovation approach to AI regulation, lays out five principles for AI that the government says will guide the development of AI throughout the economy:

  • Safety, security, and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

The UK government also subscribes to the AI principles of the United Nations Educational, Scientific and Cultural Organization (UNESCO), as laid out in its extensive recommendations document.

It has also appointed State of AI report co-author Ian Hogarth as the chair of the Foundation Model Taskforce, which will collaborate with experts throughout the sector to establish baselines for the ethics and safety of AI models.

The government hopes that the work of the taskforce will help to establish guidelines for AI that can be used worldwide, which can be held up alongside the upcoming UK-hosted AI summit as proof of the nation’s importance in the field.

Other groups such as Responsible AI UK continue to call for the ethical development of AI, and enable academics across a range of disciplines to collaborate on a safe and fruitful approach to AI systems.

The Trades Union Congress (TUC) has also called for greater legal rights for workers as AI becomes more common in the workplace. It has argued that the government’s whitepaper is too vague and fails to provide workers with reassurances that AI will be implemented ethically in the workplace.

Ethical AI in business

Google was one of the first companies to vow that its AI would only ever be used ethically, but it has faced controversy, notably over its involvement with the US Department of Defense (DoD).

Google published its ethical code of practice in June 2018 in response to widespread criticism over its relationship with the US government's weapons program. The company has since said it will no longer cooperate with the US government on projects intended to weaponize its algorithms.

In recent months, the company has consolidated its DeepMind and Google Brain subsidiaries into one entity named Google DeepMind. Its CEO Demis Hassabis has committed to ethical AI, with a Google executive having stated that Hassabis only took the position on the condition that the firm would approach AI ethically.

DeepMind CEO Demis Hassabis on stage at the WSJ's Future Of Everything Festival 2023 (Image credit: Getty Images)

Google is also a founding member of the Partnership on AI, a consortium that explained its remit at launch: "This partnership on AI will conduct research, organise discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advances the understanding of AI technologies including machine perception, learning, and automated reasoning."

But critics have said this alliance has little chance of success, given the competition between firms and the varying availability and transparency of their models.

The largest tech firms working on AI have public policies on responsible and ethical development of AI.

Both AWS and Meta have focused on open AI models built and maintained by a large community of developers, identifying this as a key factor in keeping AI responsible, safe, and ethical.

In February 2023, AWS partnered with Hugging Face to 'democratize' AI, and its approach to delivering AI through its Bedrock platform has attracted thousands of customers.

Private sector policies regarding AI will be refined in the years to come, as new frameworks become available and the regulatory landscape shifts.

Dale Walker

Dale Walker is the Managing Editor of ITPro and its sibling sites CloudPro and ChannelPro. Dale has a keen interest in IT regulations, data protection, and cyber security. He spent a number of years reporting for ITPro from numerous domestic and international events hosted by the likes of IBM, Red Hat, and Google, and has been a regular reporter at Microsoft's various yearly showcases, including Ignite.