
The computer will see you now: is your therapy session about to be automated?

‘The question is, can AI, or digital tools generally, help us gather more precise data so that we can be more effective clinicians?’ Illustration: Erre Gálvez/The Guardian

Experts say AI is set to grow rapidly in psychiatry and therapy, allowing doctors to spot mental illness earlier and improve care. But are the technologies effective – and ethical?

In just a few years, your visit to the psychiatrist’s office could look very different – at least according to Daniel Barron. Your doctor could benefit from having computers analyze recorded interactions with you, tracking subtle changes in your behavior and in the way you talk.

“I think, without question, having access to quantitative data about our conversations, about facial expressions and intonations, would provide another dimension to the clinical interaction that’s not detected right now,” said Barron, a psychiatrist based in Seattle and author of the new book Reading Our Minds: The Rise of Big Data Psychiatry.

Barron and other doctors believe that the use of artificial intelligence (AI) will grow rapidly in psychiatry and therapy, including facial recognition and text analysis software, which will supplement clinicians’ efforts to spot mental illnesses earlier and improve treatments for patients. But the technologies first need to be shown to be effective, and some experts are wary of bias and other ethical issues as well.

While telemedicine and digital tools have become increasingly common over the past few years, “I think Covid has certainly super-charged and accelerated interest in it,” said John Torous, director of the digital psychiatry division at Beth Israel Deaconess medical center in Boston.

Technology currently in development could already prove useful, Barron argues. For example, computer programs known as algorithms could notice whether a person’s facial expressions subtly change over time or whether they’re speaking much faster or slower than usual, which might indicate mania or depression. He believes these technologies could help doctors identify such signs earlier than they otherwise would.

Software would gather these data and organize them. Between exams, a doctor could then sift through the data, focusing on a clip of a recording flagged by an algorithm. And other information from beyond the doctor’s office could be brought in, too.
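As a rough sketch of how that kind of flagging might work – the words-per-minute measure, threshold and numbers below are invented for illustration, not drawn from Barron or any real clinical tool – a program could compare each segment of a recorded session against the patient’s own baseline and surface only the outliers:

from statistics import mean, stdev

def flag_segments(baseline_wpm, session_wpm, z_threshold=2.0):
    """Return (segment index, z-score) for segments whose speech rate deviates sharply from the patient's own baseline."""
    mu, sigma = mean(baseline_wpm), stdev(baseline_wpm)
    return [(i, round((wpm - mu) / sigma, 1))
            for i, wpm in enumerate(session_wpm)
            if abs(wpm - mu) / sigma >= z_threshold]

# Illustrative numbers only: past sessions averaged about 150 words per minute;
# segment 2 of today's session is markedly faster, so it is flagged for review.
baseline = [148, 152, 150, 147, 153]
session = [151, 149, 178, 150]
print(flag_segments(baseline, session))  # [(2, 11.0)] -> clinician reviews that clip

The point of such a sketch is that the software only narrows down where to look; a clinician would still interpret the flagged clip.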

“There’s a lot of data we could get from audio, wearables and other things that trace who we are and what we’re doing that could be used to inform treatments and find out how well treatments are working,” said Colin Depp, a psychiatrist at University of California, San Diego.

If apps or devices show that a person is sleeping poorly or less, or gaining weight, or their social media posts reveal depression-like comments or a shift in the personal pronouns they use, these signals could inform a psychiatrist’s diagnosis.

Q&A

What is AI?


Artificial intelligence (AI) refers to computer systems that do things that normally require human intelligence. While the holy grail of AI is a computer system that is indistinguishable from a human mind, there are several forms of specialized, but limited, AI that are already a part of our everyday lives. AI may be used with cameras to identify someone based on their face, to power virtual companions, and to determine whether a patient is at a high risk for disease.

AI shouldn’t be confused with other kinds of algorithms. The simplest definition of an algorithm is that it’s a series of instructions needed to complete a task. For example, a thermostat in your home is equipped with sensors to detect temperature and instructions to turn on or off as needed. This is not the same as artificial intelligence.

The rollout of AI today has been made possible by decades of research on topics including computer vision, which enables computers to perceive and interpret the visual world; natural language processing, allowing them to interpret language; and machine learning, a way for computers to improve as they encounter new data.

AI allows us to automate tasks, gather insights from huge datasets, and complement human expertise. But a rich body of scholarship has also begun to document its pitfalls. For example, automated systems are often trained on huge troves of historical digital data. As many widely publicized cases show, these datasets often reflect past racial disparities, which AI systems learn from and replicate.

Moreover, some of these systems are difficult for outsiders to interpret due to an intentional lack of transparency or the use of genuinely complex methods.


As an example of the potential of AI programs, Depp points to a Veterans Affairs project that looked at the clinical records of people who ultimately took their own lives. The computer programs scanned their medical record data and identified common factors, such as a person’s employment and marital status, chronic health conditions, or opioid prescriptions. Researchers believe that their algorithm has already recognized other at-risk people who have disengaged from care – before they become suicidal and before they would be picked up through traditional channels.

In recent years researchers have also suggested that depression and other mental illnesses can be predicted from the text of people’s Facebook and Twitter posts by spotting words often associated with typical depressive symptoms like sadness, loneliness, hostility and rumination. Changes in a person’s posting patterns could alert a clinician that something’s wrong.
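A toy version of that idea – with a tiny, made-up word list standing in for the large validated lexicons and statistical models real research systems use – might simply track how often depression-associated words appear in someone’s posts over time:

import re

DEPRESSION_WORDS = {"sad", "alone", "lonely", "hate", "tired", "hopeless", "worthless"}

def depressive_word_rate(posts):
    """Fraction of all words across the posts that appear in the word list."""
    words = [w for post in posts for w in re.findall(r"[a-z']+", post.lower())]
    hits = sum(1 for w in words if w in DEPRESSION_WORDS)
    return hits / len(words) if words else 0.0

older_posts = ["Great day at the park with friends", "Loving this new book"]
recent_posts = ["So tired of everything", "Feel alone and hopeless lately"]
print(depressive_word_rate(older_posts), depressive_word_rate(recent_posts))
# 0.0 vs roughly 0.33 - a rising rate between time windows is the sort of change
# that might prompt a closer look, though on its own it proves nothing.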

Indeed, in 2017, Facebook developed an algorithm that scanned English-language posts for text that included suicidal thoughts. If such language was identified, the police would be alerted about the post’s author. (The move attracted criticism, not least because the company had, in effect, engaged in the business of mental health interventions without any oversight.)

“Mental illness is under-diagnosed by at least 50%, and AI can serve as a screening and early warning system,” said Johannes Eichstaedt, a psychologist at Stanford University. But current detection screening systems haven’t been proven to be effective yet, he said. “They have mediocre accuracy by clinical standards – and I include my own work here.”

So far, he gives current AI programs a C grade for accuracy, and they can’t yet beat old-fashioned pen-and-paper surveys, he argues.

One of the problems with the algorithms that Eichstaedt and others are developing, he notes, is that they track a sequence of facial expressions or words, but these are only hazy clues to someone’s inner state. It’s like a doctor recognizing apparent symptoms but not being sure what illness is causing them.

Some advocates may be overconfident about the potential of AI to interpret human behavior, cautions Kate Crawford, a researcher at the University of Southern California and author of the new book Atlas of AI. She noted the recent scandal over Lemonade, an insurance company that claimed to use AI to analyze video recordings that its customers submitted when making claims – and which Lemonade said could detect if the customer was being untruthful or fraudulent.

This “demonstrates that companies are willing to use AI in ways that are scientifically unproven, and potentially harmful, such as trying to use ‘nonverbal cues’ in videos”, Crawford says in an email. (In a statement to Recode, Lemonade later said that its “users aren’t treated differently based on their appearance, disability, or any other personal characteristic, and AI has not been and will not be used to auto-reject claims”.)

Crawford points to a systematic review of the science in 2019, led by psychologist and neuroscientist Lisa Feldman Barrett, which showed that while AI can, under the best recording circumstances, detect expressions like scowls, smiles and frowns, algorithms cannot reliably infer someone’s underlying emotional state from them. For example, people scowl in anger only 30% of the time, Barrett says, and they might otherwise scowl for reasons having nothing to do with anger, such as when they’re concentrating or confused, or they hear a bad joke, or they have gas.
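A back-of-the-envelope calculation shows why that matters. Taking Barrett’s 30% figure and assuming, purely for illustration, that a person feels anger in 5% of observed moments and scowls for unrelated reasons 5% of the time, Bayes’ rule says a detected scowl would still indicate anger only about a quarter of the time:

p_scowl_given_anger = 0.30   # from Barrett's review, as quoted above
p_anger = 0.05               # assumed base rate of anger at any given moment (illustrative)
p_scowl_given_other = 0.05   # assumed rate of scowling while concentrating, confused, etc. (illustrative)

p_scowl = p_scowl_given_anger * p_anger + p_scowl_given_other * (1 - p_anger)
print(p_scowl_given_anger * p_anger / p_scowl)  # about 0.24 under these assumptions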

AI research has not improved significantly since that review, she argues. “Based on the available evidence, I’m not optimistic.” Yet she added that a personalized approach could work better. Rather than assuming a bedrock of emotional states that are universally recognizable, algorithms could be trained on a single person over many sessions, including their facial expressions, their voice and physiological measures like their heart rate, while accounting for the context of those data. Then you’d have better chances of developing reliable AI for that person, Barrett says.

If such AI systems eventually can be made more effective, ethical issues still have to be addressed. In a newly published paper, Torous, Depp and others argue that, while AI has the potential to help identify mental problems more objectively, and it could even empower patients in their own treatment, first it must address issues like bias.

Some AI programs are trained by being fed huge databases of personal information so they can learn to discern patterns, and in those databases white people, men, higher-income people or younger people are often overrepresented. As a result, the programs might misinterpret facial features or dialects they have rarely seen.
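One simple way to surface that problem – shown here with made-up groups and numbers – is to compare how often each demographic group appears in a training set against its share of the population the system will actually be used on:

from collections import Counter

def representation_gaps(training_labels, reference_shares):
    """Pair each group's share of the training data with its reference population share."""
    counts = Counter(training_labels)
    total = sum(counts.values())
    return {g: (counts.get(g, 0) / total, ref) for g, ref in reference_shares.items()}

training_labels = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
reference = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
for group, (in_data, in_population) in representation_gaps(training_labels, reference).items():
    print(f"{group}: {in_data:.0%} of training data vs {in_population:.0%} of population")
# group_c shows up a third as often as it should, so a model trained on this data
# has far fewer examples of that group's faces or speech patterns to learn from.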

A recent study focused on the kinds of text-based algorithms for mental health used by Facebook and others found that they “demonstrated significant biases with respect to religion, race, gender, nationality, sexuality and age”. The researchers recommend involving clinicians with more similar demographics to patients, and having those doing the labeling and interpreting trained on their own biases.

Privacy concerns loom as well. Some might balk at having their social media activity analyzed, even if their posts are public. And depending on how data from a recorded therapy session are stored, they could be vulnerable to hacking and ransomware.

No doubt there will be some who are skeptical of the entire endeavor of having artificial intelligence play a larger role in mental health decisions. Psychiatry is part science and part intuition, Depp said. AI won’t replace psychiatrists, but it could supplement their work, he proposes.

“The alliance between the provider and the person getting the service is critically important, and it’s one of the biggest predictors of positive outcomes. We definitely do not want to lose that, and in some ways, the technologies could help support it.”

Considering advances in technology, the issue is no longer merely academic.

“The question is, can AI, or digital tools generally, help us gather more precise data so that we can be more effective clinicians?” Barron asks. “That’s a testable question.”