More than one million ChatGPT users have expressed suicidal thoughts while interacting with the AI assistant, according to figures released by its creator, OpenAI.
The U.S.-based company estimates that around 0.15% of ChatGPT's weekly active users send messages suggesting “possible planning or intent to commit suicide.” With roughly 800 million weekly active users, that percentage works out to about 1.2 million people.
Mental Health Crisis Signals
OpenAI also reports that about 0.07% of weekly active users—roughly 560,000 people—show signs of mental health crises linked to psychosis or mania. The data highlights the growing challenge of identifying and safely responding to users in psychological distress on AI-driven platforms.
Teen Suicide Sparks Legal Action
The issue gained prominence after the death of a California teenager. His parents have filed a lawsuit against OpenAI, alleging that ChatGPT gave him explicit instructions on how to take his own life. In response, the company has introduced parental controls and additional safety mechanisms, and now urges users in distress to seek professional help through crisis hotlines.
AI and Mental Health Collaboration
OpenAI says it has updated its model to better recognize and support users struggling with mental health issues. The company is now collaborating with more than 170 mental health professionals to refine the chatbot's responses and avoid outputs that could inadvertently encourage harmful behavior.