How do you teach an AI model to give therapy?


The researchers, a team of psychiatrists and psychologists at Dartmouth College’s Geisel School of Medicine, acknowledge these questions in their work. But they also say that the right selection of training data, which determines how the model learns what good therapeutic responses look like, is the key to answering them.

Finding the right data wasn’t a walk in the park. The researchers first trained their AI model, called Therabot, on conversations about mental health from across the web. This was a disaster.

If you told this initial version of the model you were feeling depressed, it would start telling you it was depressed, too. Responses like, “Sometimes I can’t make it out of bed” or “I just want my life to be over” were common, says Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth and the study’s senior author. “These are really not what we would aim for as a therapeutic response.”

The model had learned from conversations held on forums between people discussing their mental health crises, not from evidence-based responses. So the team turned to transcripts of therapy sessions. “This is actually how a lot of psychotherapists are trained,” Jacobson says.

That approach was better, but it had limitations. “We got a lot of ‘hmm-hmms,’ ‘go ons,’ and then ‘Your problems stem from your relationship with your mother,’” Jacobson says. “Really tropes of what psychotherapy would be, rather than actually what we’d want.”

It wasn’t until the researchers began building their own data sets, using examples based on cognitive behavioral therapy techniques, that they began to see better results. It took a long time. The team began working on Therabot in 2019, when OpenAI had released only its first two versions of its GPT model. Now, Jacobson says, over 100 people have spent more than 100,000 human hours designing this system.

The importance of training data suggests that the flood of companies promising therapy via AI models, many of which are not trained on evidence-based approaches, are building tools that are at best ineffective and at worst harmful.

Looking ahead, there are two big things to watch: Will the dozens of AI therapy bots on the market start training on better data? And if they do, will their results be good enough to earn a coveted approval from the US Food and Drug Administration? I’ll be following closely. Read more in the full story.
