MIT researchers teach AI to spot depression

Researchers used machine learning to build a neural network to recognise the signs of depression in speech and text

Researchers at MIT have created a neural network that can be used to spot the signs of depression in human speech.

In a paper being presented at the Interspeech Conference, the researchers detail a neural-network model that can be unleashed on raw text and audio data from interviews to discover speech patterns indicative of depression.
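As a rough illustration of that kind of model (a minimal sketch under assumptions, not the researchers' actual architecture), the snippet below feeds an LSTM one feature vector per question-answer turn and reads off a single depression probability; the feature dimension, layer size and the use of PyTorch are all illustrative assumptions.

import torch
import torch.nn as nn

class InterviewClassifier(nn.Module):
    """Sketch: a sequence model over per-turn interview features."""
    def __init__(self, feature_dim=300, hidden_dim=128):
        super().__init__()
        # One feature vector per question-answer turn, consumed in order.
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, turns):            # turns: (batch, n_turns, feature_dim)
        _, (h_n, _) = self.lstm(turns)   # final hidden state summarises the interview
        return torch.sigmoid(self.head(h_n[-1]))  # probability of depression

model = InterviewClassifier()
fake_interview = torch.randn(1, 30, 300)  # 30 turns of 300-dimensional features
print(model(fake_interview))              # a value between 0 and 1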

"The first hints we have that a person is happy, excited, sad, or has some serious cognitive condition, such as depression, is through their speech," says first author Tuka Alhanai, a researcher in the Computer Science and Artificial Intelligence Laboratory.

The model is so advanced, the researchers say, that given a new subject it can accurately predict whether the individual is depressed, without needing any other information about the questions and answers.

"If you want to deploy depression-detection models in a scalable way, you want to minimize the number of constraints you have on the data you're using. You want to deploy it in any regular conversation and have the model pick up, from the natural interaction, the state of the individual," said Alhanai. 

The researchers hope the method can be developed into a tool that detects the signs of depression in natural conversation, such as a mobile app that monitors a user's text and voice for signs of mental distress and sends alerts.

The researchers' model was trained and tested on a dataset of 142 interactions, drawn from audio, text and video interviews between patients with mental-health issues and virtual agents controlled by humans.

Each subject was scored for depression on a scale from 0 to 27, using a personal health questionnaire. Scores between 10 and 14 were considered moderate and those between 15 and 19 moderately severe; subjects at or above that cutoff were labelled depressed, while all those below the threshold were considered not depressed. Out of all the subjects in the dataset, 20% were labelled as depressed.
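As a rough sketch of that labelling rule (an assumed helper written for illustration, not code from the study), with the binary cutoff placed at the boundary between the moderate and moderately severe bands:

def label_from_phq(score: int) -> str:
    """Map a personal health questionnaire score (0-27) to the binary label
    described above; the cutoff of 15 is an assumption drawn from the article."""
    if not 0 <= score <= 27:
        raise ValueError("score must be between 0 and 27")
    return "depressed" if score >= 15 else "not depressed"

print(label_from_phq(17))  # depressed
print(label_from_phq(8))   # not depressed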

A key insight from the experiments was that the model needed much more data to predict depression from audio than from text: it accurately detected depression from an average of seven question-answer sequences of text, whereas with audio it needed around 30 sequences.

"That implies that the patterns in words people use that are predictive of depression happen in shorter time span in text than in audio," Alhanai added.
