Study Outlines Combating The Gold-Rush Mentality In AI And Mental Health Care
Depression is the leading cause of disability worldwide. Anxiety disorders will affect almost one-third of U.S. adults during their lifetime. Mental health problems are ubiquitous and burdensome.
And while it’s true that AI holds tremendous potential for improving the science and practice of psychotherapy, it remains a decidedly high-stakes area. The goal is not simply to make treatment more efficient but also to improve lives and to avert outcomes as grave as suicide.
In a new working paper with seven co-authors whose disciplinary backgrounds range from psychology to computer science, Johannes Eichstaedt and Elizabeth (Betsy) Stade lay out the potential benefits of, and concerns about, deploying AI in psychotherapy. The authors articulate their vision for how AI might be put to good use in this space. “We outline what rigorous and safe evaluation would look like,” says Stade, the paper’s lead author, a graduate student at the University of Pennsylvania and an incoming postdoc at Stanford. “This really needs to be done responsibly.”
The Value of AI in Psychotherapy
One of the clearest applications of AI in psychotherapy, and one that near-term technology should be able to deliver, is its use as a kind of supercharged secretary. Done right, AI can help clinicians with intake interviews, documentation, notes, and other routine tasks; it is a tool to make their lives easier.
“Important parts of the diagnosis and treatment pipeline can be cumbersome for both the therapist and the client, like symptom-tracking questionnaires or progress notes,” Stade says. “Handing these lower-level tasks and processes to automated systems could free up clinicians to do what they do best: careful differential diagnosis, treatment conceptualization, and big-picture insights.”
Patients stand to reap similar benefits from AI systems. Psychotherapy often involves tasks that are assigned to patients between sessions, like practice worksheets and activities to be completed at home. These may be designed, for example, to help a patient track her thoughts and feelings for discussion in her next therapy session. An AI system could make this process much more engaging and dynamic and, as a result, more effective.
Finally, AI could dramatically improve the scientific and experimental foundations of different therapeutic approaches. As chatbot technology improves, for example, future bots could support controlled trials that combine hundreds of distinct interventions across thousands or hundreds of thousands of patients, an impossibility if human therapists had to introduce and deliver each intervention. Beyond enabling such “super science,” AI is already being used to analyze transcripts of therapy sessions and determine whether interventions are being applied properly.
“We know that psychotherapy works, but we also know it can work better,” Stade says. “If we’re able to use transcripts to track what actually happens in therapy, then link it to therapy outcomes, we can improve our clinical interventions.”
A Road to Responsible Development
Given these prospects, and given that mental health is a $100 billion market, Eichstaedt fears companies will rush into this space advertising solutions without due diligence. He has already been contacted by venture capitalists who want to apply machine learning tools to psychotherapy, hoping, as he put it, to “throw an LLM [large language model] at the problem and see if it sticks.”
To combat this gold-rush mentality, the researchers propose a three-stage process, modeled on the development of autonomous vehicles, for integrating AI into psychotherapy effectively and responsibly. In the first, assistive stage, AI performs simple, concrete tasks that support the therapist’s work. Next, in the collaborative stage, AI takes the lead in suggesting therapeutic options, but humans tailor those suggestions and make the final decisions. Lastly, in the fully autonomous stage, an AI not only manages the entire clinical interaction with patients but also handles tasks like billing and appointment scheduling.
For Eichstaedt, it is essential that engineers and therapists don’t move from the first stage to the second until all of the problems have been unearthed and solved; the same holds for moving from the second stage to the third. This is an admittedly slow process, “more on the scale of decades than years,” he says.
The researchers also highlight the importance of transparency: Patients must know when they are talking to a bot, and they must be able to opt out if they wish. Approval of these systems should follow something like the FDA’s drug approval process, with each system evaluated to ensure safety and efficacy.
The paper, which emerged from an ongoing effort within the World Well-Being Project, a multi-university consortium of computer scientists and psychologists, serves in some ways as an alarm for the broader community of psychologists. Eichstaedt notes that the attention he and his collaborators pay to the technological change underway is not necessarily representative of the field as a whole.
“We understand that this is coming, but this is not at all clear to many psychologists,” he says. “We need the clinical community to wake up and embrace responsibility for these technologies. It would be easy to dismiss how good they are, how quickly they bake themselves into pillars of society, until it’s too late.”