Researchers develop AI to fool facial recognition tech

A team from the University of Toronto has created an algorithm to disrupt the technology

A team of engineering researchers from the University of Toronto has created an algorithm to dynamically disrupt facial recognition systems.

Led by professor Parham Aarabi and graduate student Avishek Bose, the team used a deep learning technique called "adversarial training", which pits two artificial intelligence algorithms against each other.

Aarabi and Bose designed a pair of neural networks: the first identifies faces, while the second works to disrupt the first's facial recognition task. The two constantly battle and learn from each other, setting up an ongoing AI arms race.

"The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose said in an interview with EurekAlert!.

"If the detection AI is looking for the corner of the eyes, for example, it adjusts the corner of the eyes so they're less noticeable. It creates very subtle disturbances in the photo, but to the detector, they're significant enough to fool the system."

The result looks similar to an Instagram filter that can be applied to photos to protect privacy. The algorithm targets very specific pixels in the image, making subtle changes that are almost imperceptible to the human eye.

"The key here was to train the two neural networks against each other, with one creating an increasingly robust facial detection system, and the other creating an ever stronger tool to disable facial detection," added Bose.
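The arms race Bose describes can be illustrated with a toy sketch. This is not the team's actual system: here the "detector" is just a logistic model over pixel values and the "attacker" takes a small signed gradient step against it, but the alternation (attacker perturbs, detector adapts) mirrors the adversarial training idea, and the small step size shows why the pixel changes stay almost imperceptible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two networks (illustrative only -- the
# Toronto team used deep neural networks, not a logistic model).
weights = rng.normal(size=16) * 0.1   # "detector" parameters
faces = rng.normal(size=(32, 16))     # fake "face" pixel vectors

def detect(x, w):
    """Probability the detector assigns to 'this is a face'."""
    return 1.0 / (1.0 + np.exp(-x @ w))

def attack(x, w, eps=0.05):
    """Attacker's turn: a tiny per-pixel step that lowers the score.

    Each pixel moves by at most eps, so the perturbed image is
    nearly indistinguishable from the original to a human eye.
    """
    s = detect(x, w)
    grad = (s * (1 - s))[:, None] * w   # d(score)/d(pixels)
    return x - eps * np.sign(grad)

# Alternating "arms race": the attacker disrupts, the detector adapts.
for step in range(100):
    adv = attack(faces, weights)
    s = detect(adv, weights)
    grad_w = ((s * (1 - s))[:, None] * adv).mean(axis=0)
    weights += 0.1 * grad_w             # detector's turn
```

In this sketch the attack always lowers the detector's confidence while changing each pixel by at most `eps`, which is the essence of the "subtle disturbances" Bose describes; the hyperparameters (`eps`, step size, loop length) are arbitrary choices for the demo.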

Concerns over privacy and data security are high, with questions being asked of the likes of Google, Amazon and London's Metropolitan Police, all of which are implementing or providing facial recognition technology.

Google has unveiled doorbells that use facial recognition cameras, due to go on sale in British suburbs, raising concerns about invasion of privacy.

Amazon has come under fire from the American Civil Liberties Union (ACLU) and others for providing US police forces with its facial recognition software.

London's Met Police were said to be using 'dangerously inaccurate' facial recognition technology, with a claimed failure rate of 98%.

Aarabi believes anti-facial-recognition systems can benefit personal privacy as the neural nets become more and more advanced.

"Personal privacy is a real issue as facial recognition becomes better and better," added Aarabi. "This is one way in which beneficial anti-facial-recognition systems can combat that ability."

"Ten years ago these algorithms would have to be human-defined, but now neural nets learn by themselves, you don't need to supply them anything except training data.

"In the end, they can do some really amazing things. It's a fascinating time in the field, there's enormous potential."

Image credit: Shutterstock 
