Gartner urges CIOs to consider AI ethics
A new report says CIOs must guarantee good ethics of "smart machines" in order to build trust
CIOs must concentrate on the ethics of "smart machines" - or AI - in business in order to build and maintain trust in the technology, a report from Gartner has found.
While the world is far from developing an artificially intelligent robot, the analyst house has released a report examining the importance of ethics in what it terms smart machines - whether they be connected Internet of Things (IoT) devices or autonomous robots.
Frank Buytendijk, research vice president and analyst at Gartner, said: "Clearly, people must trust smart machines if they are to accept and use them.
"The ability to earn trust must be part of any plan to implement artificial intelligence (AI) or smart machines, and will be an important selling point when marketing this technology.
"CIOs must be able to monitor smart machine technology for unintended consequences of public use and respond immediately, embracing unforeseen positive outcomes and countering undesirable ones."
To achieve this, Gartner has identified five levels of ethical programming, numbered zero to four: "Non-Ethical Programming" (limited ethical responsibility from the manufacturer), "Ethical Oversight" (responsibility rests with the user), "Ethical Programming" (responsibility is shared between the user, the service provider and the designer), "Evolutionary Ethical Programming" (tasks begin to be performed autonomously) and "Machine-Developed Ethics" (machines are self-aware).
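Gartner's taxonomy could be sketched as a simple zero-indexed enumeration - a hypothetical encoding for illustration only, not code from the report - which also makes clear why "Evolutionary Ethical Programming" is referred to below as level three:

```python
from enum import IntEnum

class EthicalProgrammingLevel(IntEnum):
    """Gartner's five levels of smart-machine ethics, numbered 0-4.
    Names and numbering follow the report; the encoding is illustrative."""
    NON_ETHICAL = 0            # limited ethical responsibility from the manufacturer
    ETHICAL_OVERSIGHT = 1      # responsibility rests with the user
    ETHICAL_PROGRAMMING = 2    # shared between user, service provider and designer
    EVOLUTIONARY_ETHICAL = 3   # tasks begin to be performed autonomously
    MACHINE_DEVELOPED = 4      # machines are self-aware

# By level three, user control lessens as the technology's autonomy increases
print(EthicalProgrammingLevel.EVOLUTIONARY_ETHICAL.value)  # 3
```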
It is by level three, "Evolutionary Ethical Programming", that trust in smart machines becomes more important, with user control lessening as the technology's autonomy increases.
The report notes that level four, at which machines become self-aware, is unlikely to come about in the near future.
"The questions that we should ask ourselves are: How will we ensure these machines stick to their responsibilities? Will we treat smart machines like pets, with owners remaining responsible? Or will we treat them like children, raising them until they are able to take responsibility for themselves?", added Buytendijk.
At the beginning of the year, Stephen Hawking warned about the risks of artificial intelligence, signing the Future of Life Institute's open letter on the potential threats AI could bring about.
The letter reads: "Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do."
In contrast to this, Eric Horvitz, head of Microsoft Research, claimed that "doomsday scenarios" surrounding AI are unfounded, saying: "There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences. I fundamentally don't think that's going to happen.
"I think we will be very proactive in terms of how we field AI systems, and that in the end we'll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life."