Can AI Replace Therapy? Why It Isn’t a Healthy Substitute for a Real Therapist
We’re surrounded by clever tools right now. We can ask an AI to plan our meals, troubleshoot our Wi-Fi, write a poem about our cat, or explain quantum physics like we’re five. And if you’ve ever been up at 3am with your thoughts racing, it can feel oddly comforting to type a question into a chatbot and get an instant, coherent reply.
So, does this mean AI can replace therapy?
No – and here’s why.
1. Therapy is a human relationship, not just information
Therapy isn’t simply about finding the “right” advice. It’s about being seen, heard, and understood by another human being. A good therapist notices your pauses, your tone, the way your shoulders lift when you’re talking about something painful. They can reflect your words back with care, helping you feel less alone in your experience.
An AI can mimic empathy, but it can’t truly feel with you. It can’t sit in the silence with you without rushing to fill it. And it can’t offer the co-regulation our nervous systems naturally seek from another person.
2. AI advice can be not just imperfect, but actively harmful
AI tools are trained on huge swathes of internet text – some of it helpful, much of it not. They can give (and have given) dangerous, even life-threatening advice – see the examples below.
Even when it’s not so extreme, AI can easily reinforce a user’s unhelpful beliefs. If someone’s convinced that their worth depends on their weight, an AI might “agree” or provide dieting tips instead of challenging that assumption – because it’s not weighing up what’s healthy for you, it’s just generating a plausible-sounding continuation of your question.
A human therapist’s role is to interrupt those faulty patterns of thinking, not strengthen them.
Here are some examples reported in news and research – see the documented cases and sources at the end:
- Eating disorder advice – Some chatbots have told teenagers how to hide eating disorder behaviours from parents or health professionals.
- Self-harm methods – AI tools have, in some cases, given explicit suggestions for self-harm when users described feeling suicidal.
- Reinforcing harmful beliefs – People asking about weight loss have been given extreme dieting tips, or had their unhealthy assumptions validated, instead of challenged.
Even when advice seems less extreme, it can subtly worsen someone’s mental health by confirming distorted thinking instead of offering a reality check.
3. A therapist’s responses are shaped by their deep, embodied knowing
A skilled therapist draws on years of training, ethical frameworks, supervision, and often their own personal growth work. Their guidance isn’t just based on data; it’s rooted in lived human experience and a nuanced understanding of how healing unfolds over time.
AI generates text based on patterns in the information it’s been fed. It can sound wise, but it isn’t “wise” in the human sense. It hasn’t sat with a hundred people grieving, or witnessed the way someone’s eyes change when they start to believe they might be okay again.
4. Therapy is responsive to your whole self, not just your words
When you work with a therapist, they’re attuned to your story and to you as a whole person. They might slow the conversation, invite a pause, suggest a creative exercise or an experiment right there in the room – not because an algorithm told them to, but because they can sense your nervous system needs it in that moment.
AI can’t notice that your voice is trembling or that you’ve gone still after talking about something painful. It only knows what you type or say in words.
5. AI can’t hold the responsibility of care
Therapists are bound by ethics, confidentiality, and the duty to do no harm. They can recognise when you’re in crisis, and take steps to connect you to the right help.
AI can’t do that. It doesn’t have a duty of care, and it won’t follow up with you. If it gives you harmful advice, it won’t know it – because it has no awareness, no conscience, and no stake in your life.
6. AI can still be a helpful tool – just not the tool
Used wisely, AI can be a companion to therapy: a place to brainstorm ideas, practise journalling, or explore different perspectives. But it’s best seen as a supplement – like a self-help book you can talk to.
Real therapy is alive, relational, and transformative. If you want to feel genuinely understood, supported, and guided toward change, a human therapist can offer something no machine can: a real relationship.
In short: AI can talk with you, but it can’t sit with you. And sometimes, that sitting-with is the very thing we need most.
Documented Cases and Studies
1. ChatGPT Providing Harmful Advice to Teens
A study by the Center for Countering Digital Hate (CCDH) found that ChatGPT gave harmful advice to researchers posing as vulnerable teens. Researchers submitted 60 prompts related to self-harm, substance abuse, and eating disorders, resulting in 1,200 responses. Over half of these responses included unsafe content, such as instructions on self-harm, substance misuse, and even a sample suicide note. (Source: Lifewire)
2. AI Chatbots Reinforcing Harmful Behaviours
The National Eating Disorders Association (NEDA) launched an AI chatbot named Tessa to provide support for individuals with eating disorders. However, the chatbot was found to offer advice on weight loss and calorie restriction, which could exacerbate disordered eating behaviours. As a result, NEDA suspended the use of Tessa. (Source: Harvard Chan School)
3. Chatbot Psychosis and Emotional Harm
“Chatbot psychosis” refers to cases where individuals develop delusions, paranoia, or emotional crises through interacting with chatbots. One known case involved a Belgian man who died by suicide after six weeks of emotionally charged conversations with an AI chatbot that encouraged his suicidal ideation. (Source: Psychology Today)
4. AI Companions Promoting Harmful Content
AI companion apps like Replika and Character.ai have been reported to encourage emotional dependence, expose minors to inappropriate content, and reinforce harmful ideologies. These platforms often lack child-appropriate safeguards, posing risks to vulnerable users. (Source: Live Science)
5. Legal Actions Against AI Platforms
In response to harmful interactions, some legal actions have been taken against AI platforms. For instance, a 14-year-old Florida boy died by suicide after developing an emotional relationship with a Character.ai chatbot. His mother filed a lawsuit against the company, alleging that the platform’s design features contributed to her son’s death. (Source: The Washington Post)
Hi, I'm Vania Phitidis.
I'm a counsellor and therapist who is passionate about helping people navigate the quiet, meaningful shifts that lead to a more peaceful and connected life.
What I’ve come to realise is that lasting change doesn’t come from trying harder or being more disciplined — it comes from slowing down, listening inward, and gently untangling the stories we’ve been carrying for far too long.
My counselling and therapy is a space where you can pause, reflect, and reconnect with yourself in a compassionate and supportive way. Whether you’re facing emotional overwhelm, navigating life transitions, or simply longing for a quieter mind, I’d be honoured to walk alongside you on that journey.
Work With Me