Is it possible to identify mental illnesses using AI-driven voice analysis? | Technology and Health
Can artificial intelligence (AI)-assisted voice analysis help in the diagnosis of mental illnesses?
New York - Imagine a test as quick and easy as taking your temperature or blood pressure that could reliably detect an anxiety disorder or predict an impending relapse of depression.
Healthcare providers have many tools for gauging a patient's physical condition, but no reliable biomarker for assessing mental health: an objective indicator of a medical condition that can be observed from outside the patient.
But some AI researchers now believe that the sound of your voice may hold the key to understanding your mental state, and that artificial intelligence (AI) is well suited to detecting changes that would otherwise be difficult, if not impossible, to notice. The result is a suite of apps and online tools designed to track your mental state, as well as programs that deliver real-time mental health assessments to telehealth and call-centre providers.
Dr. Maria Espinola, a psychologist and assistant professor at the University of Cincinnati College of Medicine, says psychologists have long known that certain mental health problems can be detected not only by what a person says, but also by how they say it.
For people with depression, Dr. Espinola said, "their speech is generally more monotone, flat and soft. They also have a reduced pitch range and lower volume. They take more pauses. They stop more often."
People with anxiety disorders feel more physical arousal, which also changes their voice, she said. "They tend to speak faster. They have more difficulty breathing."
Today, machine learning researchers use these kinds of vocal signatures to predict depression and anxiety, as well as other mental disorders such as schizophrenia and post-traumatic stress disorder. Deep learning algorithms applied to short voice recordings can uncover additional patterns and features that even a trained expert might not notice.
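To give a sense of what these systems work with, the sketch below is a deliberately simplified illustration, not any company's actual pipeline and far cruder than the deep learning models described above. It estimates three of the vocal cues mentioned here, pausing, pitch range and volume, from a synthetic waveform using plain NumPy; all function names and thresholds are made up for this example.

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames of frame_len samples."""
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def voice_features(x, sr, frame_len=400, hop=200, silence_db=-40.0):
    """Compute (pause_ratio, pitch_range_hz, mean_rms) from a waveform.

    pause_ratio - fraction of frames quieter than silence_db relative to
                  the loudest frame: a crude proxy for pausing
    pitch_range - spread of per-frame pitch estimates (autocorrelation)
    mean_rms    - average frame energy: a crude proxy for volume
    """
    frames = frame_signal(x, frame_len, hop)
    rms = np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12
    db = 20 * np.log10(rms / rms.max())
    voiced = db > silence_db
    pause_ratio = 1.0 - voiced.mean()

    pitches = []
    lo, hi = int(sr / 400), int(sr / 60)  # search the 60-400 Hz range
    for f in frames[voiced]:
        f = f - f.mean()
        ac = np.correlate(f, f, mode="full")[len(f) - 1:]
        lag = lo + int(np.argmax(ac[lo:hi]))  # strongest periodicity
        pitches.append(sr / lag)
    pitch_range = (max(pitches) - min(pitches)) if pitches else 0.0
    return pause_ratio, pitch_range, float(rms.mean())

# Toy "speech": a low tone, a pause, then a higher tone.
sr = 8000
t = np.arange(sr) / sr
x = np.concatenate([
    0.8 * np.sin(2 * np.pi * 120 * t),  # one second near 120 Hz
    np.zeros(sr),                       # one second of silence
    0.8 * np.sin(2 * np.pi * 200 * t),  # one second near 200 Hz
])
pause_ratio, pitch_range, loudness = voice_features(x, sr)
print(f"pause ratio: {pause_ratio:.2f}, pitch range: {pitch_range:.1f} Hz")
```

Real systems feed hundreds of such acoustic features, or the raw audio itself, into trained models rather than hand-set thresholds; the point is only that pauses, pitch and loudness can be quantified from a recording.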
Dr. Kate Bentley, an assistant professor at Harvard Medical School and a clinical psychologist at Massachusetts General Hospital, said: "The technology we are using now can extract meaningful features that even the human ear cannot detect."
"It's exciting to look for biological or more objective methods of psychiatric diagnosis that go beyond the more subjective forms of assessment traditionally used, such as clinician-rated interviews or self-reports," she said, adding that researchers are also studying other data streams, such as activity levels, sleep patterns and social media data.
This technological advancement comes at a time when the need for mental health care is particularly acute. According to the National Alliance on Mental Illness, one in five adults in the United States experienced some form of mental illness in 2020. And the numbers keep rising.
Dr. Bentley said that while AI technology cannot solve the shortage of qualified mental health providers (there are not nearly enough to meet the country's demand), it holds promise for lowering barriers to accurate diagnosis, helping clinicians identify patients who may be hesitant to seek care, and making it easier for patients to monitor themselves between visits.
"A lot can happen between appointments, and technology offers us the potential to improve monitoring and assessment in a more continuous way," Dr. Bentley said.
To test out the new technology, I first downloaded the Mental Fitness app from the health technology company Sonde Health to see whether my unease was a sign of something serious or whether I was simply run down. Described as a "voice-enabled mental health tracking and journaling product", the free app invited me to record my first check-in, a 30-second verbal journal entry that would rank my mental health on a scale of 1 to 100.
A minute later, I received my score: a middling 52. "Pay attention," the app warned.
The app flagged that the liveliness in my voice was low. Was I flat-sounding simply because I had been trying to speak quietly? Should I heed the app's suggestions to improve my mental fitness by going for a walk or decluttering my space? (The first question may point to one of the app's possible flaws: as a consumer, it can be hard to know why your voice results fluctuate.)
Later, feeling jittery between interviews, I tested another voice analysis program, this one focused on detecting anxiety. The StressWaves Test, a free online tool from the health and insurance group Cigna, developed in collaboration with the AI specialist Ellipsis Health, evaluates stress levels from 60-second samples of recorded speech.
"What keeps you awake at night?" the website asks. After I spent a minute recounting my persistent worries, the program scored my recording and sent me an email declaring: "Your stress level is moderate." Unlike the Mental Fitness app, Cigna's email did not offer any helpful self-improvement tips.
Other technologies add a layer of human interaction, such as the offering from Kintsugi, a company based in Berkeley, California, that recently raised $20 million (S$27 million) in a Series A funding round. Kintsugi is named after the Japanese practice of mending broken pottery with veins of gold.
Kintsugi was founded by Ms. Grace Chang and Ms. Rima Seiilova-Olson, who bonded over their shared experience of struggling to access mental health care.
For example, using Kintsugi's voice analysis program, a nurse might be prompted to take an extra minute to ask a harried parent about her own well-being.
One concern in developing these kinds of machine learning technologies is bias: ensuring that the programs work fairly for all patients, regardless of age, gender, race, nationality and other demographic variables.
"For machine learning models to work well, you need a very large, diverse and robust set of data," said Ms. Chang, adding that Kintsugi uses voice recordings in many different languages from around the world specifically to guard against this problem.
Another major concern in this emerging field, Dr. Bentley said, is privacy, particularly because voice data can be used to identify individuals.
Even when patients agree to be recorded, the question of consent is sometimes twofold: in addition to assessing a patient's mental health, some voice analysis programs use the recordings to develop and refine their own algorithms.
Another challenge, Dr. Bentley said, is the distrust consumers may feel toward machine learning and so-called black-box algorithms, which work in ways that even their developers cannot fully explain, particularly regarding which features they use to make predictions.
"There is creating an algorithm, and there is understanding an algorithm," said Dr. Alexander Young, interim director of the Semel Institute for Neuroscience and Human Behavior and chair of the Department of Psychiatry at UCLA, echoing the concerns many researchers have about AI and machine learning in general: that there is little, if any, human oversight during a program's training phase.
For now, Dr. Young remains cautiously optimistic about the potential of voice analysis technology, especially as a tool for patients to monitor themselves.
"I believe you can model people's mental health status, or approximate it, in a general way," he said. "People like to be able to monitor their own status, especially with chronic illnesses."
But before automated voice analysis technology reaches the mainstream, some are calling for rigorous studies of its accuracy.
"We need more validation, not only for voice technology but for AI and machine learning models built on other data streams," Dr. Bentley said. "And we need to achieve that validation through large-scale, well-designed, representative studies."
Until then, AI-driven voice analysis remains a promising but unproven tool, one that may eventually become an everyday way of taking the temperature of our mental health.