Google AI panel faces backlash as staff protest right-wing council member

Appointment of Kay Coles James goes against the company's AI ethics, Google's employees declare

Google Sign with LGBT colours

Google's employees have written an open letter demanding the removal of one of the AI council members over her track record on LGBT and immigration rights.

Kay Coles James, president of the right-wing think tank the Heritage Foundation, was announced as one of the members of Google's Advanced Technology External Advisory Council (ATEAC) last week, but the appointment has angered many Google employees, who say she is vocally anti-trans, anti-LGBTQ and anti-immigration.

In a letter posted on Medium as well as internally, Googlers Against Transphobia and Hate said her record "speaks for itself, over and over again".

"In selecting James, Google is making clear that its version of 'ethics' values proximity to power over the wellbeing of trans people, other LGBTQ people and immigrants. Such a position directly contravenes Google's stated values," the collective said.


Those stated values, announced by Google in June, included 'avoid creating or reinforcing unfair bias'. Google said it wanted to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief. But the appointment of James suggests the company is saying one thing and doing another, whether intentionally or not.

It follows a similar issue raised by last year's women's walkout, where the company said it supported the female staff who opposed Google's handling of sexual harassment cases, but was later found to have tried to block the protest strike.

But this incident points to a deeper issue, particularly as many artificial intelligence systems have been found to exhibit unfair bias. From AI that doesn't recognise trans people, doesn't 'hear' more feminine voices and doesn't 'see' women of colour, to AI used to enhance police surveillance, profile immigrants and automate weapons, those who are most marginalised are potentially most at risk.
