DeepMind's AI can lip read better than humans

DeepMind AI beats a professional lip reader in tests on BBC footage

Google's DeepMind has partnered with Oxford University researchers to create a new lip-reading AI, called Watch, Listen, Attend and Spell (WLAS).

The researchers released a scientific paper suggesting the newly developed AI could correctly interpret more words than a professionally trained lip reader.

When tested on the same 200 randomly selected clips, a professional human lip reader deciphered words correctly just 12.4% of the time, while WLAS achieved an accuracy rate of 46.8%.
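Those figures are word-level accuracy scores. As a purely illustrative sketch (the paper's exact scoring protocol isn't described here), the snippet below computes the share of reference words a predicted transcript reproduces in the same position; the function name and the example sentences are invented for illustration.

```python
# Minimal, illustrative word-accuracy check between a reference transcript
# and a model's predicted transcript. This is an assumed, simplified metric,
# not the scoring protocol used in the WLAS evaluation.

def word_accuracy(reference: str, prediction: str) -> float:
    """Fraction of reference words matched in the same position by the prediction."""
    ref_words = reference.lower().split()
    pred_words = prediction.lower().split()
    matches = sum(r == p for r, p in zip(ref_words, pred_words))
    return matches / len(ref_words) if ref_words else 0.0

if __name__ == "__main__":
    # Hypothetical example sentences (not from the BBC dataset).
    ref = "we will be talking about the election results tonight"
    hyp = "we will be talking about the economy results tonight"
    print(f"word accuracy: {word_accuracy(ref, hyp):.1%}")  # 88.9%
```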

The paper reads: "The WLAS model trained on the LRS dataset surpasses the performance of all previous work on standard lip reading benchmark datasets, often by a significant margin. This lip reading performance beats a professional lip reader on videos from BBC television, and we also demonstrate that visual information helps to improve speech recognition performance even when the audio is available."


The system was trained on a dataset of 118,000 sentences, covering a vocabulary of 17,500 words, drawn from 5,000 hours of BBC video footage.

The BBC videos were prepared using machine learning algorithms, and the AI was also taught to realign video and audio when they were out of sync.
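The article doesn't detail how that realignment works. As a rough illustration only, one simple approach is to slide an audio loudness envelope against a per-frame mouth-movement signal and keep the offset with the highest correlation; the function and signals below are hypothetical and should not be read as DeepMind's actual method.

```python
# Illustrative audio/video realignment by cross-correlation: find the frame
# lag that best matches an audio envelope to a mouth-motion signal.
# This is a sketch of the general idea, not the method used for the BBC data.

import numpy as np

def estimate_offset(audio_envelope: np.ndarray,
                    mouth_motion: np.ndarray,
                    max_lag: int = 25) -> int:
    """Return the frame lag (within +/- max_lag) that maximises correlation."""
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a = audio_envelope[lag:]
            v = mouth_motion[:len(mouth_motion) - lag]
        else:
            a = audio_envelope[:lag]
            v = mouth_motion[-lag:]
        n = min(len(a), len(v))
        if n == 0:
            continue
        score = np.corrcoef(a[:n], v[:n])[0, 1]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    motion = rng.random(200)          # per-frame mouth movement (synthetic)
    envelope = np.roll(motion, 5)     # audio delayed by 5 frames
    print("estimated offset:", estimate_offset(envelope, motion))  # expect 5
```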

Earlier this month, the University of Oxford published a similar research paper on a lip-reading program called LipNet. LipNet achieved 93.4% lip-reading accuracy, compared with the 52.3% scored by a human expert on the same material.

However, LipNet was tested on videos of volunteers saying formulaic sentences drawn from a vocabulary of only 51 words, whereas WLAS was tested on a much broader range of data, analysing real conversations from BBC shows.

There are various possible applications for this lip-reading technology. An AI tool such as WLAS could help improve the quality of live subtitles and better support people with hearing impairments.

It could also be a useful addition to virtual assistants such as Siri, which could use the phone's camera to lip read, improving their understanding of users' words even in crowded or noisy environments.


Such a tool could also be implemented for surveillance purposes, although reading lips from a grainy CCTV video could prove more challenging.
