Our voices may convey subtle clues about our mood and psychological state. Now, scientists are using artificial intelligence to pick up these clues, with the aim of building voice-analyzing technologies that can identify individuals in need of mental-health care. But others caution that such tools could do more harm than good.

At the University of Alberta, computing science PhD student Mashrura Tasnim has developed a machine-learning model that can recognize the speech qualities of people with depression. Her goal is to create a smartphone application that would monitor users’ conversations and alert their emergency contacts or mental-health professionals when it detects depression.

Her work, described in a paper presented in May at the Canadian Conference on Artificial Intelligence, was spurred by tragedy, Ms. Tasnim said. A few years ago, she was working as a lecturer at a university, where a student under the care of a friend of hers, a psychosocial counsellor, unexpectedly took his own life.

“[My friend] was commenting very regretfully that, ‘[If only] I knew at that moment my patient was in so much stress, so much trouble that they couldn’t handle it anymore,’” Ms. Tasnim said, adding this incident drove her to seek a technological solution.

Ms. Tasnim is among a number of researchers creating artificial-intelligence tools to identify individuals with mental-health issues who might otherwise fail to receive help. Researchers at Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory, for instance, developed a machine-learning model that can predict whether people are depressed based on the audio features and text transcriptions of their spoken interactions.

Meanwhile, a team from the University of Vermont and the University of Michigan demonstrated they can use machine learning to identify anxiety and depression in young children through patterns of their speech. And scientists at New York University found they can use a machine-learning program to distinguish the voices of individuals with post-traumatic stress disorder.

These voice-analyzing tools add to the artificial-intelligence detection programs already in use. Facebook, for example, began using machine learning in 2017 to scan posts and flag users at risk of suicide.

But while researchers believe these technologies hold great promise and could eventually help save lives, some also warn of serious potential pitfalls.

“This kind of thing is really exciting because … right now, we don’t have good ways to identify people who might benefit from help,” said Brett Thombs, a professor in the department of psychiatry at McGill University in Montreal and a senior investigator at the Lady Davis Institute research centre. At the same time, he said: “People need to be very careful that they could do a lot of harm if they roll this out … before they actually know that it does provide benefit.”

Prof. Thombs, who also chairs the Canadian Task Force on Preventive Health Care, said artificial-intelligence technologies designed to detect mental illnesses should be regarded in the same way as other kinds of medical screening procedures, which require patients’ consent and careful consideration of the costs and benefits.

When it comes to breast cancer, for example, screening may help extend the lives of some women, but others may get false positive results, which can lead to anxiety and unnecessary, invasive tests and treatment, he said. Similar problems can arise from identifying individuals who may not think they have a mental disorder and then telling them they may have one, Prof. Thombs said. Doing so could cause them anxiety or lead to them taking medications that have side effects, he said.

“We’re definitely going to have some costs to it and some harms to some people. That comes with any kind of [screening] like this,” he said. “The question here becomes, well, are there benefits?”

The trouble is, to date there is no evidence that mental-health screening actually provides any benefit, he said. In other words, those who undergo screening do not necessarily fare better than those who do not. Mental-health screening questionnaires used in doctors’ offices may indeed help physicians identify patients who do not report any problems and do not exhibit any symptoms, he explained. But being identified as having a mental disorder is not, in itself, a benefit; improving patients’ mental health is, he said.

“If we find you and ask you to go through treatment and maybe put you on medication but don’t improve your mental health, we’re actually harming you,” Prof. Thombs said.

A potential reason screening has not been shown to lead to improvements in mental health is that patients with recognizable psychiatric symptoms are generally identified easily without the use of screening. This means screening tests likely tend to detect people with very mild to moderate symptoms, which are often tricky to treat and in many cases get better on their own, Prof. Thombs suggested.

Whether artificial-intelligence tools will lead to better outcomes for patients is not yet known. But for now, there is evidence showing they can detect certain mood disorders with some accuracy.

Using data sets consisting of nearly 500 audio clips of depressed and non-depressed participants, Ms. Tasnim refined her algorithms to distinguish between the two groups with more than 70-per-cent accuracy. The differences are too numerous and too subtle to characterize easily, she explained. For each voice recording, numerical measurements were taken for about 2,200 features in every 20-second segment, including volume, beat, pauses and frequency ranges, she said.
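Her published pipeline is not spelled out here, but the general recipe she describes — summarize each 20-second window of a recording with numerical acoustic measurements, then train a classifier on clips labelled as depressed or non-depressed — can be sketched roughly as follows. This is a minimal illustration using the librosa and scikit-learn libraries; the handful of features, the silence threshold and the random-forest classifier are stand-in assumptions for demonstration, not her roughly 2,200-feature model.

```python
# Illustrative sketch only: a generic acoustic-feature + classifier pipeline,
# not Ms. Tasnim's actual model. Feature set, thresholds and classifier are assumptions.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def segment_features(path, segment_seconds=20.0, sr=16000):
    """Summarize each 20-second window of an audio clip with simple numerical features."""
    y, sr = librosa.load(path, sr=sr)
    hop = int(segment_seconds * sr)
    rows = []
    for start in range(0, len(y), hop):
        seg = y[start:start + hop]
        if len(seg) < sr:  # ignore trailing fragments shorter than one second
            continue
        mfcc = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=13)   # frequency content / spectral shape
        rms = librosa.feature.rms(y=seg)[0]                     # frame-level loudness
        zcr = librosa.feature.zero_crossing_rate(seg)[0]        # rough voicing/noise proxy
        pause_fraction = float(np.mean(rms < 0.1 * np.median(rms + 1e-9)))  # crude "pauses" measure
        rows.append(np.concatenate([
            mfcc.mean(axis=1), mfcc.std(axis=1),
            [rms.mean(), zcr.mean(), pause_fraction],
        ]))
    return np.array(rows)

# With a labelled corpus (hypothetical clip_paths/labels; 1 = depressed speaker, 0 = control):
# feats = [segment_features(p) for p in clip_paths]
# X = np.vstack(feats)
# y = np.concatenate([[label] * len(f) for f, label in zip(feats, labels)])
# clf = RandomForestClassifier(n_estimators=300, random_state=0)
# print(cross_val_score(clf, X, y, cv=5).mean())  # segment-level accuracy estimate
```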

Next, her aim is to train algorithms to detect changes within individuals, to recognize differences in speech patterns when those diagnosed with depression are actually experiencing symptoms, compared with when they are not.

Ms. Tasnim said app developers and users need to be careful to protect users’ privacy and security. (She intends to design her own app to track and use only the numerical measurements of users’ voices, not the content of their conversations. The app would send notifications only to the emergency contacts users designated when installing it.)

For such technologies to be effective, she added, help must be available when needed.

“We can help people to learn to listen better or to know how people around them are feeling, but it is up to us to move forward, to take a step and to offer support,” she said. “If we know [someone needs help] and we do nothing, it is of no use.”

