Facebook's AI image recognition research aimed at increasing privacy, not reducing it

The system subtly distorts images in videos to hinder image recognition systems


Facebook has created an artificial intelligence system that de-identifies people, rather than analysing images for facial recognition, as is the norm with such technology.

While Facebook has previously used facial recognition tech to support automated photograph tagging on its platform, it has stopped doing so by default as part of a move to improve the privacy of its users.

But now it's potentially going a step further and using the same AI-powered technology to counteract facial-recognition systems.

It does this by combining a trained face classifier with an adversarial auto-encoder. The system maps a person's face, producing both a true image of their appearance and a mask that distorts the identifying features of that face.

These images can then be used in a video: the person remains easily identifiable to someone who knows them, but the footage carries a suite of subtle, almost imperceptible distortions. As a result, a facial recognition system applied to the video will struggle to correctly identify the subject.
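The underlying idea can be illustrated with a toy sketch. To be clear, this is not Facebook's method (the research pairs a trained face classifier with an adversarial auto-encoder): it is a minimal numpy example, under stated assumptions, of the general adversarial-perturbation principle, where pixels are nudged so a classifier's identity confidence drops while the visible change to the image stays small. The linear classifier and all names here are hypothetical.

```python
import numpy as np

def deidentify(image, weights, bias, step=0.5, iters=20, budget=0.05):
    """Toy de-identification: push pixel values against a linear
    face classifier's gradient, while capping total distortion so the
    image stays visually close to the original.

    Illustrative only -- Facebook's system uses a trained adversarial
    auto-encoder, not this simple gradient trick."""
    x = image.copy()
    for _ in range(iters):
        # Classifier's confidence that x matches the target identity.
        logit = weights @ x + bias
        p = 1.0 / (1.0 + np.exp(-logit))
        # Step opposite the gradient of the identity score.
        x -= step * p * (1.0 - p) * weights
        # Cap the per-pixel change so the distortion stays subtle.
        delta = np.clip(x - image, -budget, budget)
        x = image + delta
    return x

rng = np.random.default_rng(0)
face = rng.random(64)                     # stand-in for a flattened face crop
w, b = rng.standard_normal(64), 0.0       # hypothetical linear classifier
before = 1.0 / (1.0 + np.exp(-(w @ face + b)))
masked = deidentify(face, w, b)
after = 1.0 / (1.0 + np.exp(-(w @ masked + b)))
```

After the loop, the classifier's confidence on `masked` is lower than on `face`, yet no pixel has moved by more than the `budget` of 0.05 — a crude analogue of a mask that hinders recognition while leaving the face recognisable to a human.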


"Our approach is called Smart Anonymization and it works by removing facial images from videos and still images, and replacing them with computer-generated, photorealistic faces of nonexistent people," explained Facebook's researchers. "The anonymized faces preserve the key, non-identifying attributes of the original faces, ie age, gender, expression, gaze direction, motion etc., but remove all other personally identifying information."

As this AI tech is the fruit of Facebook's research division, it is unlikely to be rolled out for commercial use or integrated into Facebook's main social network platform for some time.

But it does demonstrate how the technology used to identify people can essentially be reversed and used to obfuscate their identity from AI-powered systems. It could also help curtail the use of deepfakes, whereby an AI system superimposes one person's likeness onto another, potentially giving the impression that they did something they didn't or placing them in a compromising position; the insertion of Hollywood actors into pornographic videos is one such example.

As such, a technology currently seen by some as an invasion of privacy could end up aiding privacy in a potentially ironic turn of fate.
