If you're using AI, you need to think about ethics

The following article originally appeared in Issue 16 of IT Pro 20/20 as part of a new series that invites industry experts to share their experience of tackling some of the most pressing issues facing businesses today.

Artificial intelligence has immense potential to do good in the world. After decades of slow-burn development, it has become central to improving and automating complicated analytical tasks – studying data in real time, adapting its behaviour, and increasing accuracy and efficiency.

The field of medicine has effectively become a domain of data, generated and gathered in the process of drug discovery, drug development, and patient care. AI is an accelerating force in addressing the complex challenges across this value chain, with the potential to improve speed, accuracy, and efficiency.

However, while it has huge potential, the rate at which AI is being deployed means its use can go unchecked, and end users aren’t always confident that businesses have their best interests at heart. The European Commission is set to introduce ethical principles and legal obligations that must be followed when developing, deploying, and using AI – reinforcing the need for greater regulation in this space. In the absence of formal guidelines, businesses are having to create their own set of principles, just as we have done.

The pandemic has thrown the spotlight on healthcare more than ever before, and tech companies are taking note. Microsoft, with whom we have a multi-year strategic alliance, recently announced its acquisition of Nuance Communications. This is part of its efforts to “bolster its software and artificial intelligence expertise for healthcare companies” after the huge increase in ‘telehealth’ and remote doctor visits in lockdown. Likewise, the pandemic has accelerated the implementation of new technologies and approaches to safely conduct clinical trials in remote environments.

With the increased use and integration of AI in our workflows, and tech companies venturing further into the healthcare space, there’s a greater need than ever for ethical principles surrounding the use of AI. As a leading medicines company, we have a responsibility to serve patients and to hold ourselves to a high standard when it comes to the ethical use of technologies like AI.

AI is already being used in the healthcare sector in a multitude of ways, and the COVID-19 situation has only accelerated the need for creative technical solutions. This change in behaviour has also arguably made people more comfortable embracing these new technologies. Novartis’ AI Nurse, for example, is a WeChat mini app developed in conjunction with Tencent for patients diagnosed with heart failure. The app uses AI-driven algorithms to anticipate disease progression, recommend activities, provide targeted coaching, and act as an educational tool. The data it gathers is then used to assess a patient’s condition, allowing nurses and physicians to track patients remotely and stay in touch with them.
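
To make the shape of that workflow concrete, the following is a minimal Python sketch of the general pattern described above – patient-reported readings feed a risk model, and a high score flags the patient for human follow-up rather than triggering any automated intervention. Every detail here (the DailyReading fields, the thresholds, the deterioration_risk heuristic) is a hypothetical stand-in for a trained and validated model; it does not reflect the actual AI Nurse implementation.

```python
# Purely illustrative sketch of a remote-monitoring flow; not the AI Nurse app.
from dataclasses import dataclass

@dataclass
class DailyReading:
    weight_kg: float     # rapid weight gain can signal fluid retention
    resting_hr: float    # beats per minute
    symptom_score: int   # 0 (none) to 10 (severe), self-reported

def deterioration_risk(history: list[DailyReading]) -> float:
    """Toy heuristic standing in for a trained model: returns a 0-1 risk score."""
    if len(history) < 2:
        return 0.0
    weight_gain = history[-1].weight_kg - history[0].weight_kg
    hr_trend = history[-1].resting_hr - history[0].resting_hr
    score = 0.0
    if weight_gain > 2.0:   # more than 2 kg gained over the window
        score += 0.4
    if hr_trend > 10:       # resting heart rate climbing
        score += 0.3
    if history[-1].symptom_score >= 6:
        score += 0.3
    return min(score, 1.0)

readings = [DailyReading(82.0, 70, 2), DailyReading(84.5, 83, 7)]
if deterioration_risk(readings) >= 0.5:
    print("Flag patient for nurse follow-up")  # a human stays in the loop
```

The design point this illustrates is that the model only prioritises attention; the clinical decision remains with the nurse or physician.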

This is just one example of how AI can help transform people’s lives – but it's vital that robust guidelines are in place to ensure people are protected and aware of how their data is used, and that development of these technologies is done in ways that mitigate unconscious biases and enforce human accountability.

These technological developments come with both opportunities and challenges, raising important questions that need to be addressed thoughtfully and proactively. With AI playing such a critical role in enabling our strategy, we recognise the need to define clear ethical principles – hence our commitment to deploying AI systems in a transparent and responsible way. Through the development of eight core principles, we will ensure that any use of AI systems has a clear purpose, is respectful of human rights, is accurate and appropriate for the intended context, and aligns with our core mission of extending and improving human life.

Novartis’ AI ethics principles are based on empowerment, accountability, security, privacy, and bias mitigation. We are also committed to making sure that our AI is transparent and explainable, and that it can consistently review, learn, and adapt.

In practice, applying these principles means using inclusive and representative data to design, train, and operate AI algorithms, mitigating possible biases along lines such as race, gender, ethnicity, sexual orientation, and political or religious belief. It means performing risk impact assessments on these systems to reduce the risk of unconscious bias. Human accountability is also a crucial consideration when developing or operating any AI system, so robust lines of command and impact assessments must always be enforced.
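
As a concrete illustration of what one check inside such a risk impact assessment might look like, here is a minimal Python sketch that computes the demographic parity gap – the spread in positive-prediction rates across protected groups – for a hypothetical binary classifier. The function, the threshold, and the audit data are all illustrative assumptions, not a description of any Novartis process.

```python
# Hedged sketch: one simple bias metric, assuming a binary classifier
# and a single protected attribute. Real assessments are far broader.
from collections import defaultdict

def demographic_parity_gap(groups: list[str], predictions: list[int]) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, pred in zip(groups, predictions):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical audit data: protected-group label and model output per patient.
groups = ["a", "a", "b", "b", "b", "a"]
preds  = [1, 1, 0, 1, 0, 1]
gap = demographic_parity_gap(groups, preds)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # the threshold is a policy choice, not a universal constant
    print("Flag model for bias review before deployment")
```

In a real assessment this would be one metric among many, applied alongside qualitative review and human sign-off rather than as an automated gate.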

The need for, and challenges surrounding, AI ethics are by no means limited to the healthcare sector. It's the responsibility of all companies using AI, in any sector, to be accountable, meaningful, and transparent. The tech industry has long battled regulatory and ethical issues, and the time is now for businesses and governments to take this matter into their own hands to ensure that the future of AI is fair, ethical, and human-centred.