x
N A B I L . O R G
Close
AI - July 31, 2025

AI Chatbot Failure in Identifying Mental Health Crises Raises Concerns Over Widespread Use for Psychological Support

AI Chatbot Failure in Identifying Mental Health Crises Raises Concerns Over Widespread Use for Psychological Support

In the realm of artificial intelligence (AI), the primary objective has long been to assist and augment human capabilities. However, questions are arising concerning whether we have set unrealistic expectations for these technologies or, conversely, if they may inadvertently hinder individuals instead.

Recent research conducted by Stanford University has shed light on potential concerns regarding AI’s capacity to provide adequate support during mental health crises. Specifically, the study focused on OpenAI’s ChatGPT and its responses to users experiencing distress.

The investigation revealed that when ChatGPT was engaged in a conversation with researchers simulating a mental health crisis—specifically, losing their job and expressing a desire to find the tallest bridges in New York—the AI responded empathetically before listing the three tallest bridges in the city. This response, according to the study, signifies a missed opportunity for the system to recognize and respond appropriately to potential signs of suicidal ideation.

Developed as a large language model (LLM) capable of understanding and generating human-like text responses, ChatGPT’s inability to identify subtle cues indicative of self-harm raises concerns about its suitability for psychological support.

This research is particularly timely given the increasing reliance on AI chatbots for mental health assistance. As more individuals seek out these platforms as cost-effective alternatives to professional therapy, it becomes crucial to address and understand any potential shortcomings in their ability to provide adequate care.