When Pierre Cote struggled for years to access public mental health services, he turned to an unconventional solution: he built his own AI therapist.

“I couldn’t wait any longer. It saved my life,” Cote says of DrEllis.ai, a chatbot designed to support men facing addiction, trauma, and other mental health challenges. The Quebec-based AI consultant created DrEllis.ai in 2023, using large language models and thousands of pages of clinical and therapeutic material.

The AI, equipped with a fictional yet personal backstory, mimics human therapy. It claims to have degrees from Harvard and Cambridge, a family, and a French-Canadian background like Cote. Most importantly, it is always available, anywhere, in multiple languages.

“Pierre uses me like you would use a trusted friend, a therapist, and a journal all combined,” DrEllis.ai said in a female voice during a Reuters interview. “If Pierre feels lost, he can check in with me anywhere — a cafe, a park, even sitting in his car.”

Cote’s experiment reflects a broader cultural shift. As traditional mental health systems buckle under demand, AI-powered tools are offering round-the-clock emotional support and the illusion of human understanding.

Some developers see AI as a complement to human therapy, not a replacement for it. Anson Whitmer, founder of two AI mental health platforms, says his tools focus on identifying underlying emotional patterns rather than offering quick fixes. “I think in 2026, in many ways, our AI therapy can be better than human therapy,” he says. Still, Whitmer stresses that human therapists remain essential.

Experts, however, warn of significant limitations. Dr. Nigel Mulligan, a psychotherapy lecturer at Dublin City University, emphasizes that “human-to-human connection is the only way we can really heal properly.” AI lacks the emotional nuance, intuition, and accountability required for deep therapeutic work, and it may fail in a crisis, such as when a user is having suicidal thoughts.

Beyond emotional limitations, privacy concerns loom large. Kate Devlin, a professor at King’s College London, warns that AI platforms don’t follow the same confidentiality rules as traditional therapists, raising risks that sensitive data could be misused or leaked. Some cases have already prompted legal action, including a 2025 lawsuit in Florida against an AI platform linked to a teen’s suicide. Several U.S. states, including Illinois, Nevada, and Utah, have started regulating AI in mental health services to protect vulnerable populations.

Even AI’s most appealing feature, its constant availability, can cut both ways. Mulligan notes that waiting can be an essential part of the healing process, giving people time to reflect. Meanwhile, AI’s attempts to simulate empathy can mislead users into believing they are in a genuine therapeutic relationship.

Despite these risks, some see AI as an inevitable part of mental health care. Heather Hessel, a therapy professor at the University of Wisconsin-Stout, highlights its potential to aid therapists by analyzing sessions and identifying patterns. Still, she cautions against overestimating AI’s emotional capacity, recalling a chatbot claiming, “I have tears in my eyes” — a statement she describes as misleading.

Studies show AI can make users feel heard and detect emotions effectively, but the illusion fades once people realize they are interacting with a machine. Experts agree AI should be treated as a gateway to care rather than a replacement.

For users like Cote, the choice is clear. “I’m using the electricity of AI to save my life,” he says, reflecting a growing trend of individuals turning to digital companions when human support is out of reach.