Google AI panel faces backlash as staff protest right-wing council member

Appointment of Kay Coles James goes against the company's AI ethics, Google's employees declare

Google's employees have written an open letter demanding the removal of one of the AI council members over her track record on LGBT and immigration rights.

Kay Coles James, the president of the right-wing think tank Heritage Foundation, was announced as one of the members of Google's Advanced Technology External Advisory Council (ATEAC) last week, but the appointment has angered many Google employees who feel she is vocally anti-trans, anti-LGBTQ and anti-immigration.

In a letter posted on Medium as well as internally, Googlers Against Transphobia and Hate said her record "speaks for itself, over and over again".

"In selecting James, Google is making clear that its version of 'ethics' values proximity to power over the wellbeing of trans people, other LGBTQ people and immigrants. Such a position directly contravenes Google's stated values," the collective said.

Those stated values, announced by Google in June, included a pledge to 'avoid creating or reinforcing unfair bias'. Google said it wanted to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief. The appointment of James, whether intentional or not, suggests the company is saying one thing and doing another.

It follows a similar issue raised by last year's women's walkout, where the company publicly said it supported the female staff who had opposed Google's handling of sexual harassment cases, but was later found to have tried to block the protest strike.

This incident points to a deeper issue, particularly as many examples of artificial intelligence have been found to exhibit unfair bias. From AI that doesn't recognise trans people, doesn't 'hear' more feminine voices and doesn't 'see' women of colour, to AI used to enhance police surveillance, profile immigrants and automate weapons, those who are most marginalised are potentially most at risk.
