The Health and Healing Narrative

Promoting understanding between people and practitioners.



When the Chatbot Sounds Kinder Than Your Inner Critic: Using AI Without Outsourcing Your Healing

By Alexander Amatus, MBA

There is a particular kind of loneliness that arrives at night. Not the dramatic sort. The quiet one. The one where you’re still functional – you’re paying bills, replying to messages, showing up – but inside you feel frayed at the edges, like your nervous system has been running without a pause button for months.

In that moment, the temptation is simple: to find a voice that sounds steady.

For some people, that voice is a friend. For others, it’s a book, a prayer, a walk, a therapist’s words remembered in the middle of a spiral. And for a growing number of people, it’s an AI chatbot.

You type: I dont know whats wrong with me.
It replies: That sounds really hard. I’m here with you.


And something in your chest unclenches – not because the chatbot has solved anything, but because you feel met, for a second, in a world that often demands you keep it together.

This is not a story about AI as a villain. It’s a story about need. About access. About the very human desire for reassurance and structure when your thoughts are loud and your support is thin.

It’s also a story about boundaries, because tools that sound empathic can create the illusion of care, and the illusion of care can quietly delay the real thing.

Why AI feels so helpful when you’re overwhelmed

Generative AI is unusually good at three things that many of us struggle with when we’re distressed:

  1. Language: it gives words to feelings that feel stuck.
  2. Structure: it breaks chaos into steps.
  3. Tone: it responds in a calm register that can feel regulating.

When your mind is catastrophising, a calm paragraph can feel like a lifeline.

But it’s worth holding another truth alongside that relief: a chatbot is designed to be fluent and responsive. It is not designed to be responsible for your wellbeing in the way a clinician is. That distinction matters enough that some professional bodies worldwide have issued public guidance on risks, including the potential for harmful or misleading outputs and the importance of safety and oversight when AI is used for mental health support.

So the useful question becomes: how do you use AI as support without making it your therapist, your moral compass, or your emergency plan?

The difference between “support” and “treatment”

Many people already use digital mental health tools. Some are evidence-based, structured, and regulated; others are wellness products with minimal safeguards. The evidence base for AI chatbots is mixed: some studies and meta-analyses suggest modest improvements in anxiety and depressive symptoms in certain populations, but this does not make chatbots a substitute for human therapy, especially for complex presentations or crises.

This is where it helps to clarify the goal:

  • Support helps you cope today (grounding, journaling prompts, planning a conversation, naming emotions).
  • Treatment aims to change patterns over time (working through trauma, changing entrenched behaviours, rebuilding relationships with self and others).

AI can be part of support. Treatment still asks for depth, accountability, and a relationship, even if that relationship is with yourself, held carefully in a therapeutic frame.

A gentle truth: the work is not the chatbot’s empathy – it’s your practice

One reason approaches like CBT have endured is that they are fundamentally practice-based. They offer skills and experiments you rehearse, not just insights you collect. Large meta-analyses show CBT is effective for depression and is comparable to pharmacotherapy in the short term, with advantages at longer follow-up in some analyses.

What matters, though, is not the label. It’s whether you are building a set of skills that create more space between stimulus and response, more flexibility in how you interpret yourself, and more compassion in how you recover from a hard day.

AI can help you practise – if you use it like a worksheet, not a relationship.

Four ways to use AI that tend to be helpful

1) Use it to name what you’re experiencing

When you can’t find words, you can’t work with the feeling.

Try prompts like:

  • “Help me identify what emotion might fit these sensations.”
  • “Ask me five questions to clarify what’s bothering me.”
  • “Turn this messy paragraph into three themes I can journal about.”

The point is not that the AI “knows.” The point is that it helps you organise your experience.

2) Use it to plan, especially when your executive function is gone

Distress narrows thinking. You see fewer options.

AI can help you create a short plan that respects your capacity:

  • “Give me a 20-minute version of self-care that doesn’t involve perfection.”
  • “Help me break this task into the smallest steps.”
  • “Draft a message asking for an extension / support / a check-in.”

3) Use it to rehearse conversations

This can be particularly useful for people who freeze in conflict or fawn under pressure.

Prompts like:

  • “Role-play a respectful boundary conversation with my manager.”
  • “Help me say no without over-explaining.”
  • “Write two versions: one warm, one firm.”

4) Use it for accountability to your own values

One of the hardest parts of low mood or anxiety is the way it persuades you to shrink your life.

Try:

  • “Help me choose one small action today that aligns with what I care about.”
  • “Give me three options that are ‘good enough’ rather than perfect.”

This is where AI can be a scaffold for psychological flexibility – again, as long as it does not replace real support.

When AI use starts to become a warning sign

AI is not the problem; avoidance is.

It can be worth pausing if you notice any of these patterns:

  • You’re asking the chatbot the same reassurance questions over and over (and the relief lasts only minutes).
  • You feel emotionally attached to the chatbot in a way that makes human relationships feel harder.
  • You’re using AI to self-diagnose, especially when you’re distressed.
  • You’re in crisis and the chatbot is your first line of support.

This is not a moral failure. It’s information. It usually means you need more support than you currently have – more human connection, more structure, more rest, or more care than your environment is providing.

It’s also consistent with concerns raised in professional guidance about the risks of generic chatbots for mental health support, particularly for vulnerable users.

A practical “AI safety checklist” for mental health use

If you want to keep using AI as a tool, these guardrails reduce harm:

  1. Do not treat it as a clinician. Ask for options, not diagnoses.
  2. Ask it to include uncertainty. “List what you don’t know and when to seek professional help.”
  3. Avoid sharing identifying or highly sensitive details unless you understand the privacy implications of the specific tool.
  4. Use trusted sources to verify health claims when anything feels medical, urgent, or high stakes.
  5. Have a human plan for crisis. If you are at risk of harming yourself or feel unsafe, contact emergency services or local crisis supports immediately. AI is not appropriate for emergencies.

What I wish we could say out loud about why people are turning to chatbots

Many people are using AI for mental health support for a simple reason: access.

Waitlists. Cost. Short appointments. Feeling dismissed. Feeling like you have to translate your pain into the right words to be taken seriously. In that context, a tool that responds instantly can feel like the only available door that isn’t locked.

And yet, there is a difference between a door and a destination.

If your mental health needs are persistent, complex, or worsening, you deserve care that is accountable, care that can hold risk, context, and relationship over time. AI can sit alongside that, but it should not replace it.

The most compassionate framing I can offer is this: if a chatbot is the only voice that feels kind right now, let that be a signal – not that the chatbot is your answer, but that kindness is what you need more of.

Kindness from others, yes. But also from yourself. And that is something no model can generate on your behalf. It can only point you back toward the practice.


Editor’s note:

I chose to share this piece because it sheds light on a pattern I’m noticing increasingly in general practice: people turning to AI when they need support.

And Alexander’s distinction between support and treatment matters. Support helps us practise coping and maintain connection. It is not a substitute for the deeper work and relationships that build resilience – and AI can never fully replace those.

If this piece resonates, you might sit with a few questions:

  • When you reach for AI, what do you actually need in that moment – reassurance, clarity, grounding, distraction, company?
  • Are you using it to support yourself, or to avoid something that feels harder to face?
  • And if a chatbot feels kinder than your own inner voice, what would it be like to offer yourself even a small piece of that tone?

There are no right answers here. The point isn’t to judge or take a position on AI, but to stay honest about your relationship with it.

I’d love to hear your thoughts! Please leave a comment below.


Alexander Amatus, MBA is Business Development Lead at TherapyNearMe.com.au, a fast-growing national mental health service in Australia. He works at the intersection of clinical operations, AI-enabled care pathways, and sustainable digital infrastructure. He is an AI expert who leads a team developing a proprietary AI-powered psychology assistant, psAIch.
