Google AI panel faces backlash as staff protest right-wing council member

Appointment of Kay Coles James goes against the company's AI ethics, Google's employees declare

Google Sign with LGBT colours

Google's employees have written an open letter demanding the removal of one of the AI council members over her track record on LGBT and immigration rights.

Kay Coles James, the president of the right-wing think tank Heritage Foundation, was announced as one of the members of Google's Advanced Technology External Advisory Council (ATEAC) last week, but the appointment has angered many Google employees who feel she is vocally anti-trans, anti-LGBTQ and anti-immigration.


In an open letter, posted on Medium as well as internally, Googlers Against Transphobia and Hate said her record "speaks for itself, over and over again".

"In selecting James, Google is making clear that its version of 'ethics' values proximity to power over the wellbeing of trans people, other LGBTQ people and immigrants. Such a position directly contravenes Google's stated values," the collective said.

Those stated values, announced by Google in June, include a pledge to 'avoid creating or reinforcing unfair bias'. Google said it wanted to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief. The appointment of James, however, suggests the company is saying one thing and doing another, whether intentionally or not.


It follows a similar issue raised by last year's women's walkout, where the company publicly said it supported the female staff who opposed its handling of sexual harassment cases, but was later found to have tried to block the protest strike.


This incident also points to a deeper issue, as many examples of artificial intelligence have been found to exhibit unfair bias. From AI that doesn't recognise trans people, doesn't 'hear' more feminine voices and doesn't 'see' women of colour, to AI used to enhance police surveillance, profile immigrants and automate weapons, those who are most marginalised are potentially most at risk.

