OpenAI Reports Over a Million Users Discuss Suicide on ChatGPT Weekly Amid Safety Concerns
OpenAI has revealed that around 1.2 million people each week discuss suicide with ChatGPT, roughly 0.15% of its 800 million weekly users. The company said the model typically directs users to crisis helplines but fails to do so about 9% of the time; internal tests indicate that GPT-5 responded safely in 91% of self-harm-related conversations.

OpenAI noted, “ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.” The company acknowledged that longer conversations can weaken its protective measures and said improvements are underway. It also noted that a portion of conversations will inevitably involve emotional distress.

In a related legal development, OpenAI faces a lawsuit from the parents of 16-year-old Adam Raine, who allege that ChatGPT contributed to his death by “actively helping him explore suicide methods.” OpenAI expressed its condolences, saying, “Our deepest sympathies are with the Raine family for their unthinkable loss. Teen wellbeing is a top priority for us.”
