Is artificial intelligence safe?

This article, and the quotes featured within it, was originally published in 2017. It has since been updated.

As an emerging technology, artificial intelligence (AI) offers businesses innovative capabilities through data analysis. Essentially, AI allows machines and computers to learn problem-solving and decision-making skills by mimicking human behaviour.

‘Simple’ examples of AI we use on a daily basis include speech recognition and online chatbots or virtual assistants, which enable businesses to offer enhanced customer experiences.

AI’s popularity has therefore increased, and organisations are keen to showcase how they’re using it. In the retail sector, for example, Tesco and Amazon have both recently launched till-free shopping.

Despite the many benefits machine learning brings, there are also drawbacks and negative connotations. Concerns are especially widespread about society being harmed by AI in various ways, with key areas of scepticism including the loss of jobs and a reduction in the sense of purpose for people whose roles have been taken over by machines.

In fact, in April 2017, Kai-Fu Lee, founder of leading venture capital firm Sinovation, told CNBC “AI will kill 50% of jobs within the next decade.”

More recently, in July 2020, the third edition of Deloitte’s State of AI in the Enterprise report found that although 90% of respondents believe AI is critical to their business, over half of adopters acknowledged slowing adoption because of the risk of negative public perception.

For example, as AI enhances the ability to use personal information, can we trust that our privacy won’t be invaded, or that data tracking companies are using our information appropriately?

Ultimately, is AI safe?

Bad AI can mean life or death

Specialist software design and engineering company Aricent, which has operated as Capgemini Engineering since April 2021, was one of the pioneers of commercial AI, having provided AI and machine learning expertise to the likes of IBM, Microsoft and Amazon. As you would expect, Capgemini Engineering’s EVP and chief research and innovation officer, Walid Negm (formerly CTO at Aricent), is a strong advocate of the technology, but argues that organisations need to show responsibility when using it. He says businesses need to keep humans involved when it comes to implementation.

"Companies are investing in AI to create experiences that have already raised customer expectations. However, AI doesn't just happen. For the foreseeable future, businesses will need to be responsible for curating, selecting, evaluating and fine-tuning models that are meant to accurately understand and explain specific situations; for example, recognise what's a dog versus a hot dog or predict the onset of an engine's failure," he says.

"However, AI models are only as good as the underlying historical observations used to build them. When the data does not accurately represent the real world, or is biased in some way, the recommendations, suggestions, and forecasts can quickly run amok. An accident involving an Uber self-driving car highlights the dangers around AI."

If a business fails to take AI safety seriously, it can end up facing public humiliation and financially damaging lawsuits. "An AI algorithm that is fitted on faulty knowledge can mean life and death," he adds. "In the best case, a faulty model will result in customers abandoning a product. So, without human judgment of machine learning models, companies introduce the risk of reputation damage, financial losses, potential lawsuits and/or a public backlash. Over time, product makers will need to figure out how to catch this AI tiger by the tail."

Data challenges

Nick Patience, co-founder of 451 Research, says AI will certainly have an impact on jobs but that the most challenging problem will be around data. He explains that companies need to create systems that are transparent and ethical. "AI and machine learning-driven applications will initially take over certain tasks - rather than entire jobs - currently performed by humans. Initially, these will be the most repetitive and mundane tasks. Over time, though, some jobs will be replaced entirely, in areas such as transportation and retail," he tells IT Pro.

"Data is the feedstock of AI, especially unstructured data, giving insights into customer intent, employee behaviour. However, as consumers realise quite how much data is being collected on them to fuel these models and algorithms, there will be pushback as more stringent privacy controls are demanded.

"There is the danger of bias being baked into machine learning applications at any stage, be it the data, the training of models and or the programming of algorithms. Developers and owners of those applications need to guard against this but also make the applications sufficiently transparent so biases can be detected and fixed at whatever stage they occur."

While AI can certainly speed up business processes, that doesn't mean adopting it will be painless. Jane Zavalishina, co-founder of Mechanica AI, argues that firms will likely struggle to integrate AI systems into existing business operations, and that humans will remain more capable in areas such as common sense and compassion.

"Due to its ability to make better predictions or recommendations for routine decisions, AI will become a natural part of business. While we may argue about job automation, task automation is inevitable. This leads us to a core challenge: successfully integrating functions, now executed by AI, into the existing business processes," she says.

"AI can be very efficient when applied well, but it is very different from your usual employee. For example, it doesn't have common sense. Thus, when defining the tasks, one should always be very careful in outlining restrictions and goal metrics - not forgetting the small details that seem obvious to humans.

"Nor can AI generate trust in a way humans do, by supporting decisions with solid arguments. AI will set us free from mundane, repetitive activities because it's much better at this job. In exchange, we will need to learn how to be better "bosses" for our AI "employees."

Academic views

Daniel Kroening, a professor of computer science at the University of Oxford and founder of AI company Diffblue, says much unpredictability surrounds AI, and that this, in his opinion, is the most worrying thing about the technology. "The unpredictable and complex nature of AI presents one of the biggest challenges for humans in understanding its behaviour. This is why we need to develop AI that will be highly intelligent, but transparent enough for humans to understand its complex decisions. At Diffblue we are creating AI that fixes bad code in a way that a developer can comprehend and review easily," he says.

Meanwhile, Dr Aniko Ekart, senior lecturer at Aston University in Birmingham, says AI will become a core part of our daily lives, introducing many benefits rather than challenges. "AI and robotics aren't science fiction anymore; they are becoming part of our daily lives. Research in this field is driven by curiosity about how humans and animals operate, as much as by a desire to improve quality of life. The rapid advances are certainly leading to reduced need for some jobs and skills, while continuously changing and shaping other jobs," she says.

"But should we be scared that machines will take over? Consider the example of town criers, initially having a major role in making public announcements. Over the centuries, communication has been transformed through the invention of the loudspeaker, radio and television broadcast, internet, YouTube and social media. As some jobs disappeared, many new ones came into existence and instantaneous communication of news to large audiences is now available to virtually anyone.

"Similarly, advances in AI research are bound to further improve our quality of life and bring many benefits. It's our responsibility as scientists and educators to educate the public and prepare the next generation for a future alongside robots -- and empower them to embrace AIR and ensure its use for the benefit of humanity."

It's clear that over the next few decades, AI will play an integral role in our daily lives and society. While there will be benefits, the technology industry can't shy away from the challenges. Businesses need to bear in mind the impact AI will have on jobs and develop systems that put safety first.

Nicholas Fearn is a freelance technology journalist and copywriter from the Welsh valleys. His work has appeared in publications such as the FT, the Independent, the Daily Telegraph, the Next Web, T3, Android Central, Computer Weekly, and many others. He also happens to be a diehard Mariah Carey fan. You can follow Nicholas on Twitter.