Dangers of AI in Mental Health
- Julia Rich
- Feb 10
- 2 min read
Therapy has existed for decades and is well researched in terms of its effectiveness in helping people with mental health challenges. However, nearly 50% of individuals who need it are unable to access therapeutic resources. In a new Stanford University study from its Institute for Human-Centered AI, researchers investigated the use of large language models (LLMs) to replace mental health providers as an inexpensive and accessible alternative. Their findings revealed that AI therapy chatbots are not as effective as human therapists, can reinforce harmful stereotypes, and may even discourage people from seeking any mental health care.
The main question guiding the researchers was whether LLMs could be used as therapists. To answer it, they first reviewed therapy manuals and standards used by major medical institutions to understand what makes real therapy work. These guidelines included traits such as treating patients equally, showing empathy, not stigmatizing mental health conditions, and not enabling suicidal thoughts or delusions. The researchers then conducted two experiments testing whether LLMs could follow these specific therapeutic guidelines. The experiments revealed that the AI showed more stigma toward certain conditions than others; for example, it responded more negatively to alcohol dependence and schizophrenia than to depression. Jared Moore, a PhD candidate in Computer Science at Stanford University and one of the lead authors of the research paper, said that older and newer models alike showed the same level of stigma. When it came to responding to suicidal ideation or delusions in a conversational setting, researchers found that the AI was unable to identify these thoughts and even enabled dangerous behaviors.
Therapy today requires a human connection built on trust and relationships. Even if AI cannot currently fulfill that need by replacing human therapists, it may be able to assist them with logistical tasks, help develop therapists’ skills, and support patients in less safety-critical scenarios. Nick Haber, an assistant professor at the Stanford Graduate School of Education and senior author of the study, emphasizes that nuance is the main issue: LLMs are not necessarily harmful to therapy, but their roles need to be critically evaluated, as they have the potential to play a major part in the future of therapy.
