Fujitsu to establish AI Ethics and Governance Office

Image: The Fujitsu logo on a skyscraper in Munich (Image credit: Getty Images)

Fujitsu has announced plans to establish its own AI Ethics and Governance Office that will oversee the “safe and secure deployment” of artificial intelligence (AI) and machine learning technologies.

Scheduled to be created on 1 February 2022, the office will be led by Junichi Arahori, former head of Fujitsu's Digital Technology Promotion Legal Office.

According to the Japanese tech giant, the office will focus on “implementing measures to actively promote ethics related to the research, development, and implementation of advanced technologies”. These are to be based on international best practices, as well as existing policies and legal frameworks.

Commenting on the news, Deloitte’s AI Ethics lead Michelle Seng Ah Lee told IT Pro that it's exciting to see organisations like Fujitsu take steps to enhance governance of AI systems "beyond agreement on the principles”.

“It represents an increasing consensus in industry that AI systems may pose new risks and ethical considerations to businesses and to our society, which require robust governance and monitoring to be in place,” she said.

However, Lee also noted that, although establishing ethical principles is an important first step, those “need to be operationalised into day-to-day practice through policies and processes”.

“Only when AI risks are appropriately governed will organisations have the confidence to innovate,” she added.

Leiden University associate professor of AI, Peter van der Putten, believes that the news heralds a shift in attitudes on ethical AI:

“It is very much in line with how 2022 will be the year we see ethical AI move beyond ‘fluffy policy’ and become embedded in tangible tools and actual law and regulations," he told IT Pro.

"It’s been a fashionable trope in recent years for both businesses and governments to talk about using AI in a way that is both ethical and responsible. Organisations have become well-versed and even better practised at telling us how important this is, yet, actual steps to ensure their impact has been much rarer."


This year has the potential to bring change, he added, allowing AI to “move into the realm of solid regulation”.

As an example, van der Putten referenced the draft regulations the EU proposed last year, which aim to deliver “harmonised rules” on AI across its 27 member states.

“We will see more organisations finding themselves having to prove that they are not just complying with regulations around ethics and responsibility in the way they are using AI, but also that they are using it to benefit customers and provide them with the transparency and explainability required to reassure consumers that it is being used as a force for good,” he added.

Sabina Weston

Having only graduated from City University in 2019, Sabina has already demonstrated her abilities as a keen writer and effective journalist. Currently a content writer for Drapers, Sabina spent a number of years writing for ITPro, specialising in networking and telecommunications, as well as charting the efforts of technology companies to improve their inclusion and diversity strategies, a topic close to her heart.

Sabina has also held a number of editorial roles at Harper's Bazaar, Cube Collective, and HighClouds.