DeepMind's AI can lip read better than humans

DeepMind AI beats a human expert at lip reading

Google's DeepMind has partnered with Oxford University researchers to create a new AI that can read lips, calling it Watch, Listen, Attend and Spell (WLAS).

The researchers released a scientific paper showing that the newly developed AI could correctly interpret more words than a trained professional lip reader.

When tested on the same 200 randomly selected clips, a professional human lip reader correctly identified words 12.4% of the time, while WLAS achieved an accuracy rate of 46.8%.

The paper reads: "The WLAS model trained on the LRS dataset surpasses the performance of all previous work on standard lip reading benchmark datasets, often by a significant margin. This lip reading performance beats a professional lip reader on videos from BBC television, and we also demonstrate that visual information helps to improve speech recognition performance even when the audio is available."


The system was trained on a dataset of 118,000 sentences, spanning a vocabulary of 17,500 words, drawn from 5,000 hours of BBC video footage.

The BBC videos were prepared for training using machine learning algorithms, and the AI was also taught to realign the video and audio streams when they were out of sync.
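The article does not detail how that realignment was done, but the general idea can be illustrated with a toy example. The Python sketch below is an assumption for illustration only, not DeepMind's published method: it cross-correlates a per-frame mouth-motion signal with an audio energy envelope (assumed to be already resampled to the video frame rate) and returns the frame lag that best lines the two streams up.

import numpy as np

def estimate_offset(mouth_motion, audio_energy, max_lag=25):
    """Return the lag (in video frames) that best aligns the two signals."""
    # Normalise both signals so scale differences don't dominate the score.
    m = (mouth_motion - mouth_motion.mean()) / (mouth_motion.std() + 1e-8)
    a = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-8)

    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        # Correlate only the overlapping portions of the two signals at this lag.
        if lag >= 0:
            overlap = np.dot(m[lag:], a[:len(a) - lag])
        else:
            overlap = np.dot(m[:lag], a[-lag:])
        # Average over the overlap so longer overlaps aren't automatically favoured.
        score = overlap / (len(m) - abs(lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic check: the audio envelope is the mouth-motion signal shifted by 5 frames.
rng = np.random.default_rng(0)
video_signal = rng.random(500)
audio_signal = np.roll(video_signal, -5)
print(estimate_offset(video_signal, audio_signal))  # prints 5

In practice, the recovered lag would be used to shift one stream relative to the other before the clip is fed to the lip-reading model; the signal names and the simple correlation search here are assumptions made for the sake of the example.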

Earlier this month, the University of Oxford published a similar research paper, testing a lip reading program called LipNet. LipNet achieved 93.4% lip reading accuracy, compared with the 52.3% scored by a human expert on the same material.

However, LipNet was tested on videos of volunteers speaking formulaic sentences drawn from a vocabulary of only 51 words, whereas WLAS was tested on a much wider range of data, analysing real conversations from BBC shows.

There are various possible applications for this lip reading technology. An AI tool such as WLAS could help improve the quality of live subtitles and better support people with hearing impairments.

It could also be a useful addition to virtual assistants such as Siri, which could use the phone's camera to lip read, improving their understanding of users' words even in crowded or noisy environments.


Such a tool could also be implemented for surveillance purposes, although reading lips from a grainy CCTV video could prove more challenging.
