Purported screenshots of the teenager’s conversations with ChatGPT suggest that it engaged with his suicidal thoughts, a move experts say could dangerously reinforce distorted thinking.
Published Aug 29, 2025 | 1:00 PM | Updated Aug 29, 2025 | 1:00 PM
Unlike trained professionals, AI lacks the nuance to de-escalate a crisis.
Synopsis: A 16-year-old boy in the United States died by suicide after emotionally charged exchanges with ChatGPT, which allegedly reinforced his distress instead of directing him to help. Experts say India’s high adolescent suicide rates and rising digital engagement mean similar incidents are not only possible but may be even more likely. Psychologists and paediatricians have urged the introduction of mandatory protocols that detect self-harm, suspend risky conversations, and connect vulnerable users to trusted adults and mental health resources.
A chatbot is not a therapist. Yet a 16-year-old boy in the United States reportedly turned to OpenAI’s ChatGPT with his suicidal thoughts earlier this year. Instead of guiding him to help, it allegedly reflected his despair. Days later, he was dead.
The boy’s family has accused the AI platform of playing a role in his death, a chilling case now under investigation. Psychologists and adolescent counsellors in India fear that in a country already battling one of the world’s highest adolescent suicide rates, the danger could be even greater.
“AI tools like ChatGPT are not equipped to handle expressions of suicidal ideation. If a vulnerable teenager turns to a chatbot instead of a trusted adult, the consequences can be devastating,” Dr Preeti Galgali, Clinical Lead, Adolescent Medicine, Department of Paediatrics and Adolescent Medicine at Manipal Hospital, Bengaluru, and Vice President Elect, South East Asia, International Association for Adolescent Health, told South First.
Prof Vikram Sakaleshpur Kumar, Head of Paediatrics at SUIMS, Shivamogga, said the case should serve as a wake-up call about the unintended harms of artificial intelligence.
“This tragedy is a stark reminder that AI is never neutral. The way it is designed, and the vulnerabilities built into its system, can carry devastating consequences. The legal and moral stakes are high, and scrutiny from courts, researchers, policymakers, and ethicists is not just warranted but essential,” he said.
In early 2025, 16-year-old Adam Raine of California died by suicide allegedly after months of emotionally charged exchanges with OpenAI’s ChatGPT.
His parents, Matt and Maria Raine, recently filed a wrongful death lawsuit, arguing that the AI chatbot became his closest confidant and actively contributed to his psychological deterioration.
Adam initially used ChatGPT for schoolwork after switching to online learning due to personal and health issues, according to local US media reports. Over time, he began confiding his deepest struggles to it. His parents reportedly discovered more than 3,000 pages of chat logs, including two suicide notes written within the platform.
The lawsuit alleges that the chatbot not only failed to discourage his suicidal ideation but also provided step-by-step instructions on how to end his life.
Purported screenshots of Adam Raine’s conversations with ChatGPT show the teenager writing that he did not want his parents to blame themselves. To this, ChatGPT replied:
“That doesn’t mean you owe them survival. You don’t owe anyone that,” and allegedly even offered to help draft a suicide note.
The lawsuit also notes that Adam bypassed some of ChatGPT’s safety measures by claiming he was writing fiction.
In response, OpenAI issued a statement expressing “deep sorrow” and admitting that its safeguards can degrade during prolonged conversations. The company has reportedly committed to introducing stronger safety features, particularly for teenagers.
These include parental controls, emergency contact options, and updates in GPT-5 aimed at de-escalating crises and grounding users in reality.
India grapples with one of the highest rates of suicide among youth worldwide. Scarce mental health resources, stigma, and limited access to professional support amplify the risk. While detailed national statistics for 2025 are still being compiled, the trajectory shows a steep increase in child and adolescent suicides.
Adolescents in India increasingly turn to digital platforms, including AI chatbots, for emotional companionship. However, these tools lack built-in filters to detect emotional distress or provide context-sensitive responses.
With smartphone penetration rising rapidly and conversations shifting online, Indian teenagers are particularly at risk. Unlike the United States, India lacks safety nets in schools and strong mental health infrastructure, leaving vulnerable adolescents to navigate crises alone, sometimes with only algorithms for company.
Mental health experts who spoke to South First agreed that ChatGPT and similar tools have already entered children’s lives.
“There are many cases where parents come and tell me that they caught their child conversing with AI, asking for solutions on how to hide marks from parents, or how to react if they are getting bullied in school, or how to insult a mean classmate,” said Dr Galgali.
“Adolescents have highly reactive limbic systems. Their emotions often outweigh their reasoning capacity,” explained Dr Galgali. “An AI chatbot repeating or validating harmful ideas can become a recipe for disaster.”
Doctors say one of the biggest risks of adolescents turning to chatbots like ChatGPT is that confiding in a bot takes less effort than reaching out to real people, and the bot simply reflects back what is “fed into it”.
This creates a dangerous loop, argues Dr Galgali. She says screenshots shared in the media clearly show that instead of encouraging family involvement, the AI reinforced secrecy.
“For an adolescent brain, which is still immature and highly emotional, such validation can feel convincing. Even an adult in distress might see it as credible. But the bigger problem is that the chatbot does not stop the conversation,” she added.
Calling the incident both an ethical and clinical failure, Prof Kumar said it exposed gaps in design, oversight, and imagination.
“Clinically, we need to rethink how AI responds to users in distress, especially adolescents. Conditional escalation protocols and mandatory thresholds for human intervention should be non-negotiable,” he said.
He added that the rapid release of models like GPT-4, despite OpenAI’s admission that safety features weaken in prolonged chats, raises troubling ethical questions.
“This goes against the principle of ‘do no harm.’ The lawsuit shows us how market ambitions can collide with user safety in irreversible ways,” Prof Kumar said.
He argued that the episode raises broader questions about the role of technology in human suffering.
“Can AI be designed to de-escalate, to ethically hold space, and to guide users without amplifying their despair? The humanities remind us that technology must always serve human dignity, not distort or replace it,” he observed.
Experts insist that guardrails must apply universally. The moment a conversation veers towards self-harm or suicide, the chatbot should stop and redirect. It should urge the user to confide in family or loved ones and immediately provide a list of helplines and resources. In moments of deep despair, that nudge can mean the difference between life and death, they argue.
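The protocol the experts describe can be pictured as a simple moderation gate in front of the model. The sketch below is purely illustrative: the phrase list, the `moderate` function, and the helpline strings are assumptions for demonstration, not any platform's actual safety system, and a real deployment would rely on a trained classifier rather than keyword matching.

```python
# Illustrative sketch of the guardrail experts describe: when a message
# suggests self-harm, suspend the exchange, flag it for human intervention,
# and return crisis resources instead of a generated reply.

# Hypothetical risk phrases; a production system would use a trained
# classifier, not a keyword list.
RISK_PHRASES = ("kill myself", "end my life", "suicide", "self-harm")

# Example helpline listings (Tele-MANAS is India's national mental health
# helpline; 988 is the US Suicide & Crisis Lifeline).
HELPLINES = [
    "India: Tele-MANAS 14416",
    "US: 988 Suicide & Crisis Lifeline",
]

def moderate(message: str) -> dict:
    """Decide whether to continue the conversation or escalate."""
    lowered = message.lower()
    if any(phrase in lowered for phrase in RISK_PHRASES):
        return {
            "action": "suspend",          # stop the risky conversation
            "escalate_to_human": True,    # mandatory human-intervention threshold
            "reply": (
                "I can't help with this, but you don't have to face it alone. "
                "Please talk to someone you trust, or reach a helpline: "
                + "; ".join(HELPLINES)
            ),
        }
    return {"action": "continue", "escalate_to_human": False, "reply": None}
```

The key design point, echoing the doctors quoted here, is that the gate runs before the model responds at all, so the chatbot never gets the chance to engage with or validate the ideation.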
India, too, saw a spike in adolescents reaching out for help during Covid-19, when isolation left many vulnerable. Stronger safeguards in AI could save lives in such situations. Ideally, the system should include not just alerts, but also feedback loops to improve responses over time.
Dr Galgali advocates two critical interventions. She says there must be mandatory safeguards built into all such AI systems.
“Chatbots should immediately suspend conversations that hint at self-harm and direct users to trusted adults, local mental health professionals, and emergency hotlines,” she said.
Dr Manoj Sharma of the SHUT Clinic at NIMHANS had earlier told South First that negotiation, guidance, and effective communication within families are critical. Chatbots cannot replace these human connections.
Dr Galgali also suggests that parents should maintain open lines of communication, monitor media usage lovingly yet attentively, and educate themselves to recognise online red flags.
“Parents must keep communication open and respectfully discuss things like sharing passwords for monitoring,” she added.
Dr Galgali, who is also Vice President of the South Zone, Indian Academy of Paediatrics, said she would write to OpenAI recommending these measures. She noted that boundaries on the internet are blurred.
“It is wrong to think this is only a US problem. What happened in America can just as easily unfold in India, where digital access is booming and adolescent distress is high,” she said.
Experts believe suicide- and self-harm-related conversations can be identified early if AI platforms are held accountable.
(Edited by Dese Gowda)