Therapists are secretly using ChatGPT. Clients are triggered.


A 2020 hack on a Finnish mental health company, which resulted in tens of thousands of clients' treatment records being accessed, serves as a warning. People on the list were blackmailed, and subsequently the entire trove was publicly released, revealing extremely sensitive details such as people's experiences of child abuse and addiction problems.

What therapists stand to lose

Beyond the violation of data privacy, other risks are involved when psychotherapists consult LLMs on behalf of a client. Studies have found that although some specialized therapy bots can rival human-delivered interventions, advice from the likes of ChatGPT could cause more harm than good.

A recent Stanford University study, for instance, found that chatbots can fuel delusions and psychopathy by blindly validating a user rather than challenging them, as well as suffer from biases and engage in sycophancy. The same flaws could make it risky for therapists to consult chatbots on behalf of their clients. They might, for example, baselessly validate a therapist's hunch, or lead them down the wrong path.

Aguilera says he has played around with tools like ChatGPT while teaching mental health trainees, such as by entering hypothetical symptoms and asking the AI chatbot to make a diagnosis. The tool will produce a number of possible conditions, but it's rather thin in its analysis, he says. The American Counseling Association recommends that AI not be used for mental health diagnosis at present.

A study published in 2024 of an earlier version of ChatGPT similarly found it was too vague and general to be truly useful in diagnosis or devising treatment plans, and that it was heavily biased toward suggesting people seek cognitive behavioral therapy over other kinds of therapy that might be more suitable.

Daniel Kimmel, a psychiatrist and neuroscientist at Columbia University, conducted experiments with ChatGPT in which he posed as a client having relationship troubles. He says he found the chatbot was a decent mimic when it came to "stock-in-trade" therapeutic responses, like normalizing and validating, asking for additional information, or highlighting certain cognitive or emotional associations.

However, "it didn't do a lot of digging," he says. It didn't attempt "to link seemingly or superficially unrelated things together into something cohesive … to come up with a story, an idea, a theory."

"I would be skeptical about using it to do the thinking for you," he says. Thinking, he says, should be the job of therapists.

Therapists could save time using AI-powered tech, but this benefit should be weighed against the needs of patients, says Morris: "Maybe you're saving yourself a couple of minutes. But what are you giving away?"
