Google AI panel faces backlash as staff protest right-wing council member

Appointment of Kay Coles James goes against the company's stated AI ethics principles, Google employees declare

Google Sign with LGBT colours

Google's employees have written an open letter demanding the removal of one of the AI council members over her track record on LGBT and immigration rights.

Kay Coles James, president of the right-wing think tank the Heritage Foundation, was announced as a member of Google's Advanced Technology External Advisory Council (ATEAC) last week. The appointment has angered many Google employees, who feel she is vocally anti-trans, anti-LGBTQ and anti-immigration.

In a letter posted on Medium as well as internally, Googlers Against Transphobia and Hate said her record "speaks for itself, over and over again".

"In selecting James, Google is making clear that its version of 'ethics' values proximity to power over the wellbeing of trans people, other LGBTQ people and immigrants. Such a position directly contravenes Google's stated values," the collective said.

Those stated values, announced by Google in June, included 'avoid creating or reinforcing unfair bias'. Google said it wanted to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief. But its appointment of James suggests the company is saying one thing and doing another, whether intentionally or not.

It follows a similar issue raised by last year's women's walkout, where the company said it supported the female staff who opposed its handling of sexual harassment cases, but was actually found to have tried to block the protest strike.

But this incident points to a deeper issue, particularly as many examples of artificial intelligence have been found to exhibit unfair bias. From AI that doesn't recognise trans people, doesn't 'hear' more feminine voices and doesn't 'see' women of colour, to AI used to enhance police surveillance, profile immigrants and automate weapons, those who are most marginalised are potentially most at risk.
