People convey meaning by what they say as well as how they say it: Tone, word choice, and the length of a phrase are all crucial cues to understanding what’s going on in someone’s mind. When a psychiatrist or psychologist examines a person, they listen for these signals to get a sense of their wellbeing, drawing on past experience to guide their judgment. Researchers are now applying that same approach, with the help of machine learning, to diagnose people with mental disorders.
In 2015, a team of researchers developed an AI model that correctly predicted which members of a group of young people would develop psychosis—a major feature of schizophrenia—by analyzing transcripts of their speech. The model focused on telltale verbal tics of psychosis: short sentences; confused, frequent use of words like “this,” “that,” and “a”; and a muddled sense of meaning from one sentence to the next.
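Those verbal cues can be imagined as simple text statistics. The sketch below is purely illustrative: the 2015 study measured semantic coherence with Latent Semantic Analysis embeddings rather than the crude word-overlap proxy used here, and none of these function names come from the researchers’ code.

```python
# Illustrative sketch of the kinds of linguistic features described above.
# This is NOT the researchers' actual pipeline; the study used semantic
# embeddings, and these names and proxies are hypothetical.
import re

DETERMINERS = {"this", "that", "a", "an", "the"}

def speech_features(transcript: str) -> dict:
    """Extract rough proxies for the cues the model looked for."""
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    tokenized = [re.findall(r"[a-z']+", s.lower()) for s in sentences]
    all_words = [w for sent in tokenized for w in sent]

    # Cue 1: short sentences -> low average sentence length.
    avg_len = len(all_words) / len(sentences)

    # Cue 2: heavy use of determiners like "this," "that," and "a."
    det_rate = sum(w in DETERMINERS for w in all_words) / len(all_words)

    # Cue 3: muddled meaning from one sentence to the next, proxied here
    # by word overlap between consecutive sentences (the real study
    # measured coherence with semantic embeddings instead).
    overlaps = []
    for a, b in zip(tokenized, tokenized[1:]):
        union = set(a) | set(b)
        overlaps.append(len(set(a) & set(b)) / len(union) if union else 0.0)
    coherence = sum(overlaps) / len(overlaps) if overlaps else 1.0

    return {"avg_sentence_len": avg_len,
            "determiner_rate": det_rate,
            "coherence": coherence}

print(speech_features("I saw that. That was a thing. This is a thing too."))
```

A real screener would feed features like these into a trained classifier rather than reading them off directly.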
Now, Jim Schwoebel, an engineer and the CEO of NeuroLex Diagnostics, wants to build on that work to create a tool that primary-care doctors could use to screen their patients for schizophrenia. NeuroLex’s product would record a patient during the appointment via a smartphone or another device mounted out of sight on a nearby wall (Schwoebel has built a prototype Amazon Alexa app). Using the same model from the psychosis paper, the product would then search a transcript of the patient’s speech for linguistic clues. The AI would present its findings as a number—like a blood-pressure reading—that a psychiatrist could take into account when making a diagnosis. And as the algorithm is “trained” on more and more patients, that reading could come to better reflect a patient’s state of mind.
In addition to the schizophrenia screener, an idea that earned Schwoebel an award from the American Psychiatric Association, NeuroLex is hoping to develop a tool for psychiatric patients who are already being treated in hospitals. Rather than trying to help diagnose a mental disorder from a single sample, the AI would examine a patient’s speech over time to track their progress.
According to Schwoebel, his brother sat through more than 10 primary-care appointments before he was referred to a psychiatrist and eventually received a diagnosis. After that, he was put on one medication that didn’t work for him, and then another. In the years it took to get him diagnosed and on an effective regimen, Schwoebel’s brother experienced three psychotic breaks. The ordeal led Schwoebel to wonder how, in cases that call for medication, a person could be matched with the right prescription, at the right dose, faster.
To find out, NeuroLex is planning a “pre-post study” on people who’ve been hospitalized for mental disorders “to see how their speech patterns change during a psychotic stay or a depressive stay in a hospital.” Ideally, the AI would analyze sample recordings from a person under a mental health provider’s care “to see which drugs are working the best” in order “to reduce the time in the hospital,” Schwoebel said.
If a person’s speech shows fewer signs of depression or bipolar disorder after they’re given a medication, this tool could help show that it’s working. If there are no changes, the AI might suggest trying another medication sooner, sparing the patient undue suffering. And once it had gathered enough data, it could recommend a medication based on what worked for other people with similar speech profiles. Automated approaches to diagnosis have been anticipated across medicine for decades; one company claims that its algorithm recognizes lung cancer with 50 percent more accuracy than human radiologists.
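The pre/post idea reduces to a simple comparison: track a symptom-linked speech score over a hospital stay and flag when it fails to improve. The sketch below is hypothetical; the score, threshold, and decision rule are assumptions for illustration, not anything NeuroLex has described.

```python
# Hypothetical sketch of the "pre-post" idea: monitor a speech-derived
# symptom score (higher = more symptomatic) over a hospital stay and
# flag when it is not dropping. Threshold and rule are illustrative.

def medication_seems_ineffective(daily_scores: list[float],
                                 min_improvement: float = 0.1) -> bool:
    """Return True if the symptom score has not fallen meaningfully
    since treatment began, suggesting a medication review."""
    if len(daily_scores) < 2:
        return False  # not enough data to judge yet
    return daily_scores[0] - daily_scores[-1] < min_improvement

# A falling score suggests the drug is working; a flat one does not.
print(medication_seems_ineffective([0.8, 0.7, 0.5]))   # clear improvement
print(medication_seems_ineffective([0.8, 0.8, 0.79]))  # essentially flat
```

In practice any such flag would inform a clinician’s judgment, not replace it.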
The possibility of bolstering a mental health clinician’s judgment with a more “objective,” “quantitative” assessment appeals to the Massachusetts General Hospital psychiatrist Arshya Vahabzadeh, who has served as a mentor for a start-up accelerator Schwoebel cofounded. “Schizophrenia refers to a cluster of observable or elicitable symptoms” rather than a catchall diagnosis, he said. With a large enough data set, an AI might be able to split diagnoses like schizophrenia into sharper, more helpful categories based on the common patterns it perceives among patients. “I think the data will help us subtype some of these conditions in ways we couldn’t do before.”
As with any medical intervention, AI aids “have to be researched and validated. That’s my big kind of asterisk,” he said, echoing a sentiment I heard from Schwoebel. And while the 2015 study demonstrates that speech analysis can predict psychosis reasonably well, it’s still just one study, and no one has yet published a similar proof of concept for depression or bipolar disorder.
Machine learning is a hot field, but it still has a ways to go—both in and outside of medicine. To take one example, Siri has struggled for years to handle questions and commands from Scottish users. For mental health care, small errors like these could be catastrophic. “If you tell me that a piece of technology is wrong 20 percent of the time”—or 80 percent accurate—“I’m not going to want to deploy it to a patient,” Vahabzadeh said.
This risk becomes more disturbing when considering age, gender, ethnicity, race, or region. If an AI is trained on speech samples drawn entirely from one demographic group, normal speech from people outside that group could trigger false positives.
“If you’re from a certain culture, you might speak softer and at a lower pitch,” which an AI “might interpret as depression when it’s not,” Schwoebel said.
Still, Vahabzadeh believes technology like this could someday help clinicians treat more people, and treat them more efficiently. That could be crucial, he said, given the shortage of mental-health-care providers throughout the U.S. “If humans aren’t going to be the cost-effective solution, we have to leverage tech in some way to extend and augment physicians’ reach.”