
Youngsters turn to AI for mental health — sometimes, chatbot itself directs them to helpline


Published Apr 20, 2026, 7:00 AM | Updated Apr 20, 2026, 7:00 AM

Representational image. Credit: iStock

Synopsis: India’s TeleMANAS mental health helpline has fielded over 34 lakh calls since 2022, offering anonymous, stigma-free support. Increasingly, callers arrive after first speaking to AI chatbots like ChatGPT, which provide anonymity but lack human empathy and follow-up. Studies show young Indians use chatbots as a “starter step,” yet experts stress only trained counsellors can provide balanced, therapeutic care.

P Jawahar Lal Nehru (yes, that is his real name) picks up the phone at the TeleMANAS centre in Hyderabad and hears something he did not expect when the service launched. The caller says they spent hours talking to ChatGPT before dialling 14416. ChatGPT, they explain, suggested they call.

“We get calls from people who say, ‘I spoke to ChatGPT, and after discussing everything, it finally suggested that I call 14416,'” Nehru, a senior psychologist at the centre, tells South First. “So we are calling.”

It is a strange loop. An AI system, unable to carry the weight of someone’s distress, redirects them to a human one.

National infrastructure built around stigma

On 10 October 2022, the Government of India launched the National Tele Mental Health Programme. It now operates 53 TeleMANAS cells across 36 states and union territories, runs in 20 languages, and has handled more than 34.34 lakh calls since inception.

The Telangana cell alone has received over 1.5 lakh calls. In 2024, it handled 71,427 calls. In 2025, that number reached 60,306. Tamil Nadu’s Institute of Mental Health in Chennai handled 87,560 calls in 2025. Uttar Pradesh’s Agra centre crossed 82,000.

The programme built itself around one central problem: stigma keeps people silent. Callers need not give their name. Addresses and phone numbers remain optional. Identity, the service promises, stays protected.

“When somebody is not okay, that ‘not okay’ is not disclosed,” Nehru says. “It is treated as a taboo.”

TeleMANAS counsellors handle psychiatric emergencies, active suicidal ideation, domestic violence, and abrupt disconnections. The Ministry of Health mandates follow-up calls in each of these situations. When a caller disconnects without warning, the system reaches back.

ChatGPT does not.


Why young Indians open the app first

A 2025 qualitative study published in the Indian Journal of Health Studies interviewed 20 undergraduate students aged 18 to 25 across Karnataka. All had used AI chatbots for mental health support at least once in the previous six months. Eighty percent used ChatGPT specifically.

They did not turn to it because it replicated therapy. They turned to it because it demanded nothing.

No appointments. No family finding out. No fear of being seen. One participant described opening the app during a panic episode at 2 a.m. Another said ChatGPT “put into words the feelings even I couldn’t express.” A third said: “If my parents knew I was seeing a counsellor they’d overreact. This way I get counselling without anyone finding out.”

The study identified seven reasons students engaged with chatbots. Stigma reduction and anonymity ranked among the strongest. So did availability: the system existed at midnight, when nothing else did.

One participant described the experience as a first step. “I didn’t know anything about therapy, the bot taught me what counselling feels like. It’s like a starter step.”

Another used it differently. “I practised what to say to my parents using ChatGPT’s roleplay and when the real talk happened I felt actually prepared.”

These are not clinical outcomes. The study’s authors are careful to frame them as perceived usefulness, not therapeutic effectiveness. But they point to something real: a generation that grew up with smartphones reaches for them first, including in distress.

What counsellors see that ChatGPT cannot

Nehru does not dismiss what ChatGPT provides. He locates precisely where it breaks.

“ChatGPT gives responses based only on what you input,” he says. “If you ask for the benefits of something, it will list the benefits. If you ask for the harms, it will list the harms. It does not present a balanced view on its own. You get back only what you feed into it.”

In psychology, that gap matters. A person in distress does not always describe their distress accurately. They present one version of a problem. A trained counsellor listens for what sits beneath that version.

“If you come to me and say my ears are paining, I will treat your ears,” Nehru says. “But if your eyes are actually hurting and you miscommunicate, the problem remains.”

The human mind, he explains, works like a balance sheet. “Balance does not mean 50-50. It can be 90-10 or 10-90. But that balance is important.”

The Indian Journal study surfaces the same concern from the students themselves. They reported frustration when chatbots gave generic responses, forgot earlier parts of the conversation, or triggered automated safety messages mid-disclosure.

“Sometimes it suddenly gives a safety message and stops the conversation,” one participant said. “It felt like being cut off when I needed to continue.”

Another described a subtler rupture. “There are moments when the bot sounds robotic and I’m reminded that it’s not real. It’s just a robot.”


A threshold that has been crossed

Part of what makes this complicated is how convincingly these systems communicate.

The Turing test, once considered a near-impossible benchmark, measured whether a machine could communicate well enough to pass as human. That threshold has effectively been crossed. Emerging research suggests ChatGPT can convince human evaluators it is human more reliably than actual humans can in certain controlled tests.

That capacity generates real consequences. Lawsuits have been filed against OpenAI involving suicides and deaths allegedly connected to ChatGPT interactions. Researchers document cases of psychosis potentially worsened by prolonged or compulsive use. The concern does not require science-fiction scenarios. It emerges from ordinary, repeated interactions that begin to feel like relationships.

The Indian Journal study flags this from within its own sample. Some participants described emotional dependence, checking the app for small worries, feeling distressed when they could not access it.

“Sometimes I prefer talking to the bot over real people,” one participant said. “I worry that it’s becoming a problem but I don’t know how to stop.”

A 2023 review of 28 studies on AI in psychotherapy concluded that AI can improve access and reduce costs, particularly valuable in India, where the psychiatrist-to-population ratio sits below the recommended three per 100,000. But the same review cautioned that AI has not replicated the therapeutic bond, and that ethical safeguards, transparency, and clinical oversight remain essential.

What happens after referral

When someone calls TeleMANAS at 3 a.m., Nehru’s team does not open with a checklist. They open with an observation.

“The first thing we tell them is, ‘You called us, thank you. Why did you call us if you have already decided? That means there is still hope in you that something can be done.'”

They draw out that hope. A man in debt gets asked what confidence carried him into the loan, and whether that same confidence can carry him out. A student who wants to quit gets heard first, completely, before anything else happens.

“Our main purpose is active listening and empathetic emotional support,” Nehru says. “Our duty is to empathise and help you find your own solution. That’s all.”

ChatGPT, the study suggests, sometimes starts that journey. The 34.34 lakh calls to TeleMANAS suggest the journey does not end there.

TeleMANAS operates 24 hours a day on the toll-free number 14416.
