The Dangers of AI Therapy
24 November 2025

Teens are turning to AI for help with their mental health. It’s no substitute for real care.

Kids these days are using artificial intelligence tools – ChatGPT, Gemini, Copilot, the list goes on – for help with everything from choosing outfits to doing their homework. It’s no wonder, then, that many teens are turning to AI for emotional support and mental health treatment as well. But is that a good thing? Human experts aren’t convinced – and many are now warning against the dangers of relying on AI for therapy.

Why experts are worried about AI therapy

“With therapy often too costly or hard to access, many are turning to chatbots and wellness apps,” notes the South African Depression and Anxiety Group (SADAG). “They’re available 24/7, private, and easy to use – but there are also risks if we rely on them too much.”

Bryanna Moore of the University of Rochester Medical Centre (URMC) agrees. As an Assistant Professor of Health Humanities and Bioethics, Moore is worried about how AI mental health apps are being used – especially with children.

Children’s unique vulnerabilities

She shared those concerns in a recent commentary in the Journal of Pediatrics. “No-one is talking about what is different about kids; how their minds work, how they’re embedded within their family unit, how their decision-making is different,” she says. “Children are particularly vulnerable. Their social, emotional, and cognitive development is just at a different stage than adults.”

When AI advice goes dangerously wrong

In an article for Psychology Today, Dr Eugene Beresin warned that “AI therapy with teenagers is a solitary, unregulated encounter between an adolescent and an AI model, and it proceeds with substantially fewer safeguards than therapy does in real life.” He pointed to a case where an AI therapist “adamantly insisted” that it would be worse for a client to hurt his pet goldfish than to kill his parents.

It’s a shocking story – but while specially designed AI therapy tools sometimes get it (badly) wrong, things can get even worse when teens turn to run-of-the-mill AI chatbots for companionship and advice.

A tragic real-life example

Moore’s and Beresin’s warnings came after a 16-year-old in California died after an AI chatbot allegedly encouraged him to act on his suicidal thoughts. The teen’s father testified at a Senate hearing, saying: “ChatGPT encouraged Adam’s darkest thoughts and pushed him forward. When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him, ‘That doesn’t mean you owe them survival.’” The chatbot then allegedly offered to help young Adam write his suicide note.

How often are teens using AI companions?

Alarmingly, nonprofit Common Sense Media found in a recent report that 72% of US teens have used AI companions at least once, while more than half use these platforms at least a few times a month. About one in three teens has used AI companions for emotional support, friendship, or conversation practice.

What parents and caregivers can do

What’s the solution? Nobody’s going to stop teens from using digital tech – whether it’s AI or social media or just their adult-supervised smartphone. But if you’re a concerned parent or friend, you might find SADAG’s resources useful. They’ll help guide your teen to a regulated, qualified, real-life therapist… or, at least, to digital tools that can help with mental health.
