IBM to kill its own facial recognition technology

The unethical use of AI by law enforcement raises alarms after two weeks of Black Lives Matter protests

IBM has decided to “sunset” its general-purpose facial recognition and analysis software suite over ethical concerns following a fortnight of Black Lives Matter protests.

Despite having put considerable effort into developing its AI-powered tools, the cloud giant will no longer distribute these systems for fear they could be used for purposes that go against the company’s principles of trust and transparency.


Specifically, there are concerns the technology could be used for mass surveillance, racial profiling and violations of basic human rights and freedoms. The company now also deplores, in principle, the use of facial recognition for such purposes by rival vendors.

“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” CEO Arvind Krishna outlined in a letter to the US Congress.

“Artificial intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”

The announcement represents a major shift, given the company has previously ploughed considerable money and effort into building out its capabilities, and has occasionally courted controversy in the process.


In March 2019, for example, IBM was called out for using almost a million photos from photo-sharing site Flickr to train its facial recognition algorithms without the consent of the subjects. Those in the pictures weren’t advised the firm was going to use their images to help determine gender, race and other identifiable features, such as hair colour.

Several months before that, the company was found to have been secretly using video footage collected by the New York Police Department (NYPD) to develop software that can identify individuals based on distinguishable characteristics.

IBM had created a system that allowed officers to search for potential criminals based upon tags, including facial features, clothing colour, facial hair, skin colour, age, gender and more. Overall, it could identify more than 16,000 data points, rendering it extremely accurate in recognising faces.

While the use of facial recognition in law enforcement is not uncommon, it has run into legal blockades, with jurisdictions such as San Francisco banning its use altogether.


Police forces in the UK, meanwhile, have been trialling such systems, but the Information Commissioner’s Office (ICO) has effectively neutered these plans after urging branches to assess data protection risks and ensure there’s no bias in the software being used.

In addition to permanently withdrawing its facial recognition technology, IBM has called for a national policy that encourages the use of technology to bring greater transparency and accountability to policing. Such measures may include body cameras and data analytics techniques.

Much like IBM until now, a number of other major companies have developed their own AI-powered facial recognition capabilities, and have often courted controversy in doing so.

AWS has come under fire over alleged racial and gender bias in its highly sophisticated Rekognition technology. The company’s shareholders rejected an internal revolt over the sale of Rekognition to the police by an overwhelming majority of 97% in May 2019, for example.

The claims were based on MIT research that found the software mistakenly identified pictures of women as men 31% of the time, with the error rate higher still for darker-skinned women. Microsoft’s software, by comparison, had an error rate of 1.5%.

