Taming the machine: AI Governance

In the race to stay competitive, artificial intelligence (AI) – particularly machine learning (ML) – has expanded massively. Businesses and organisations increasingly recognise that keeping a competitive advantage means analysing the vast datasets they have collected to reveal value.

However, the speed at which AI is evolving has led to questions over whether these systems are being set up correctly and whether the actionable outputs from the AI are accurate. What’s more, as AI tools can often be described as ‘black box’ technologies, can governance ever be adequately applied?

The critical first step with AI governance is to define what we mean by the term. Governance is often associated with compliance, and even policing. Here, it means clearly defining how the AI is being used and how these systems are applied to specific datasets.

Speaking to IT Pro, Simon McDougall, deputy commissioner for regulatory innovation and technology at the Information Commissioner’s Office (ICO), explains that context and use cases are vital to determining whether an AI system can be adequately governed.

“Today, we have off-the-shelf AI, which is often offered as a cloud-based service, so one area that is vitally important to understand is actually the procurement of these systems. Is the business buying the right system for them? The issue is that the buyers of these services may not be fully aware of the risk of bias. How important explainability is, and context and use cases, are vital to understand if the AI is to be used correctly,” McDougall says.

This concept of explainability, where the outputs from any given system must be fully explained, has become increasingly important in discussions about ethical AI and is central to AI governance. Businesses and organisations – especially within some highly regulated industries – need to be able to explain how their AI is operating to their regulators.
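To make explainability concrete, one widely used technique is permutation feature importance: shuffle one input at a time and measure how much the model’s accuracy falls. The sketch below is a minimal illustration using scikit-learn and a public dataset; it is an assumed example, not a method prescribed by any regulator or firm quoted here:

```python
# Minimal sketch of permutation feature importance (illustrative only).
# The dataset and model are placeholders, not from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model leans on most heavily.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

An output like this gives a business something tangible to show a regulator: not just a prediction, but which inputs the prediction depends on.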

In its report on the future evolution of AI governance, KPMG found that 87% of IT decision-makers believe technologies powered by AI should be subject to regulation. More telling is that 94% of the same group state businesses should consider AI governance within their enterprise's broader ethical responsibilities.

Michelle Lee, AI ethics lead in the risk analytics team at Deloitte, tells IT Pro the practical application of AI governance is still a challenge for many businesses. "While some companies have introduced ‘trustworthy’ AI principles, many are still struggling to roll out these principles into concrete operating models and processes,” Lee says. “It’s important to have a systematic way of assessing the competing objectives in AI – such as privacy, performance, and the ability to explain their processing – as well as trade-offs they may need to make to ensure the technology works effectively and delivers results.”
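One purely illustrative way to picture such a systematic assessment is a weighted scorecard that records how each candidate model fares against agreed governance criteria. The sketch below is hypothetical – the criteria, weights and scores are the author’s assumptions, not a Deloitte methodology:

```python
# Hypothetical governance scorecard: make trade-offs between competing
# objectives explicit and auditable rather than implicit.
from dataclasses import dataclass

@dataclass
class Assessment:
    name: str
    privacy: float         # 0-1, e.g. degree of data minimisation
    performance: float     # 0-1, e.g. normalised accuracy
    explainability: float  # 0-1, e.g. ease of explaining outputs

# Weights a governance board might agree for one specific use case.
WEIGHTS = {"privacy": 0.4, "performance": 0.3, "explainability": 0.3}

def governance_score(a: Assessment) -> float:
    """Weighted sum of the criteria for one candidate model."""
    return (WEIGHTS["privacy"] * a.privacy
            + WEIGHTS["performance"] * a.performance
            + WEIGHTS["explainability"] * a.explainability)

candidates = [
    Assessment("deep neural net", privacy=0.5, performance=0.9, explainability=0.3),
    Assessment("decision tree", privacy=0.7, performance=0.7, explainability=0.9),
]

for a in sorted(candidates, key=governance_score, reverse=True):
    print(f"{a.name}: {governance_score(a):.2f}")
```

The point is not the particular numbers, but that the trade-offs are written down and can be reviewed, rather than being buried in a data science team’s judgement calls.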

Responsible AI

The field of AI is still young and, as a consequence, how these technologies should be governed and regulated is still taking shape. This was highlighted by the Committee on Standards in Public Life report, which concluded that developing risk-based governance for AI should be a priority.

How governance in the AI space should be approached has parallels with GDPR: the regulation establishes in law the concepts of the data controller and accountability, both of which can be applied to the burgeoning AI industry.

Businesses should practise their usual due diligence when considering the purchase of any AI system. Understanding how a system works and, critically, which datasets it was trained on will build confidence in the system's outputs and, by extension, deliver more robust governance.

Svetlana Sicular, Gartner’s vice president of research, says that good AI governance is multifaceted. "Governance policy is only part of AI governance. It also includes standards, guidelines, best practices, and often, education that is commonly known as responsible AI education and data AI literacy,” she explains. “Additionally, AI governance is about how to best organise for AI from the responsible AI and AI ethics perspectives.”

Sulabh Soral, chief AI officer at Deloitte, warns that particular care should be taken to prevent AI systems unintentionally learning bias. “Within AI governance and framework development, there is a growing awareness of ‘vulnerability and bias mining’, particularly due to the experiences with social media,” Soral says. “As most organisations digitise and embed AI within apps to manage customer engagement and personalisation, they will invariably face the challenge of how to control their algorithms from learning psychological or primordial vulnerabilities and biases.”

Watching the machines

The responsibility for AI governance rests firmly with the business using the technology, as McDougall states: “Who has responsibility for how these systems are set up and then deployed is not a complicated question for the ICO. For us, GDPR contains the concept of data controllers.

“Ultimately, the business or organisation that uses these machine learning systems is fully responsible for setting up and then applying it to the dataset. The buyers of these systems have to perform their due diligence. They have to ensure they own their decisions. The responsibility for this isn't with the vendor.”

Sicular, meanwhile, says that AI governance should be a business-wide initiative and not simply an IT exercise. “The AI community increasingly speaks about responsible AI that applies to organisations, people and society. AI governance is a mechanism to implement responsible AI. It should take into consideration the interests of [these groups]. And of course, it should apply to data science where it balances the guardrails with the data science freedom that is necessary for AI creativity,” she says.

Any AI strategy a business develops must have governance at its foundation. As AI – and ML in particular – proliferates, how these systems are governed will increasingly be a critical component of their successful deployment.

There must be some level of pragmatism, too. It’s almost impossible to remove bias completely, for example, so governance should be about recognising and managing the levels of bias that remain. The same can be said for how ethical or explainable an AI system is.
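Managing the level of bias implies measuring it. As a minimal, hypothetical sketch – the metric choice, data and tolerance below are assumptions, not drawn from the article’s sources – a governance process might track a simple fairness statistic such as the demographic parity difference and flag any model that breaches an agreed threshold:

```python
# Illustrative bias monitoring: quantify, rather than claim to eliminate,
# the gap in positive-outcome rates between groups.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap between the highest and lowest group positive-prediction rates."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical model decisions for 10 applicants in two groups (A and B).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")  # 0.60 - 0.40 = 0.20

# A governance policy might set an agreed tolerance and flag breaches.
TOLERANCE = 0.10  # hypothetical threshold
if gap > TOLERANCE:
    print("Model exceeds agreed bias tolerance - escalate for review")
```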

McDougall concludes: “We live in a world that is risk based. If a business thinks it has solved the issue of bias with its systems, I would argue that is the moment its systems are at their most risky for any end user. I would be more comfortable with an AI vendor that states their systems are not perfect, but can show that the level of risk of their systems being biased in some way is acceptable.”

Customer-facing services with a machine learning element must earn a level of trust from their end users. Increasingly, being able to show that good governance has been applied – particularly in sensitive areas such as healthcare and finance – will become a prerequisite for many of these systems.

David Howell

David Howell is a freelance writer, journalist, broadcaster and content creator helping enterprises communicate.

Focussing on business and technology, he has a particular interest in how enterprises are using technology to connect with their customers using AI, VR and mobile innovation.

His work over the past 30 years has appeared in the national press and a diverse range of business and technology publications. You can follow David on LinkedIn.