Navigating AI in Mental Health: Tips for Safe and Effective Use

Bitesized Blog

Whether riding a bike, skydiving, or seeking mental health support, knowing what to do or where to get the right support is essential for your wellbeing and safety.

AI models and chatbots, while highly sophisticated and useful, can make errors. Crucially, they often present these errors with the same confidence and persuasiveness as correct information.

This makes spotting AI errors much harder – like finding Wally in a complex picture! This is a serious issue, particularly in mental health, where errors could lead to harm.

To use AI more safely, consider it as one resource among others. Corroborate information by using AI in conjunction with other independent resources and wider support networks. Always fact-check or get expert review on AI outputs whenever that information could potentially lead to harm.

By using AI outputs as part of a wider system of support and fact-checking important or risky information, you significantly increase your ability to spot issues or errors when they occur – and they will at some point – helping you to stay safe.

If you are in crisis, or need emotional support right now, go to my Help in Crisis page for signposting and guidance.

That page includes a range of organisations offering expert advice and signposting, including housing, legal, and financial support.

Get in touch if you want to find out more and explore therapy with me. And check out the rest of the blog for more information on what AI can be useful for, and why AI is not a therapist.

Meal-Sized Blog

Chatbots, built around AI models, offer constant availability, responsiveness, ease of interaction, and can provide useful information and a sense of help. Brilliant!…

… Except that most AI models are limited by their training data, which can lead to convincing errors, often called "hallucinations." These can potentially cause harm.

Google DeepMind outlines four categories of risk where AI models could cause harm (Shah et al., 2025). Only one of these involves a person instructing the AI model to cause harm. In the other scenarios, the user needs to be able to spot errors themselves to avoid potential harm.

Spotting these errors is difficult because AI models are designed to sound knowledgeable, persuasive, informative, confident, and convincing. It's also challenging because individuals seeking help may be in a vulnerable state and might lack the specific knowledge needed to identify inaccuracies. This inherent dynamic makes people using AI vulnerable to errors.

(Shah et al., 2025; Berberette et al., 2024; Weidinger et al., 2021; Sui et al., 2024; Magesh et al., 2024).

Using AI as a Resource

AI can be genuinely useful because it:

  • Is Always Available: Get immediate support anytime, day or night, often for free.

  • Is Easy to Access: Simple to use from your device, removing geographical barriers.

  • May Feel Less Intimidating: Can feel easier to share with initially, compared to a person.

  • Provides Information: Quick access to general facts and information about mental health topics and resources.

  • Is Helpful for Simple Tasks: Useful for practicing basic coping skills or generating journaling prompts.

With appropriate care, AI can be used for tasks such as:

  • Information Gathering and Psychoeducation - AI is adept at summarizing and explaining complex topics in simple terms, including making sense of jargon.

  • Finding Supportive Resources - You can describe your needs, and AI can help identify potential resources. Always verify that any suggested resource is appropriate, meets your needs, and complements your existing support network.

  • Learning and Practicing Basic Coping Skills - For instance, AI can guide you through simple stress-reduction techniques like grounding exercises. Stay safe by only engaging with practices that feel right and make sense to you.

The list of potential uses goes on…

AI is Not a Therapist

AI chatbots might seem appealing for therapeutic support – they're available 24/7, many are free, and they can be convincingly interactive.

However, here's why paying to see a human therapist is often essential:

  • AI is Not Human: Fundamentally, AI lacks human consciousness, empathy, and the capacity for genuine relationship. While AI can be informative and conversational, interactions often feel disconnected or superficial. AI cannot replicate the profound experience of human connection that is vital in therapy.

  • AI Cannot Reliably Assess Risk or Crisis: AI is not a substitute for a crisis line and does not provide safeguarding. If you are feeling unsafe or at risk in any way, please seek appropriate crisis support immediately (see signposting at the end of this article).

  • AI Errors Can Be Harmful in a Mental Health Context: As mentioned, AI can provide inaccurate, unhelpful, or even harmful advice. When dealing with sensitive mental health issues, such errors can have serious negative consequences.

  • AI Doesn't Truly Understand Your Unique Experience: AI cannot grasp your unique history, nuanced feelings, or subtle non-verbal cues. A competent human therapist's ability to understand these complexities is essential for effective therapeutic work.

  • Privacy Concerns: Sharing sensitive personal information with any AI system or chatbot carries inherent risks. Be mindful of the privacy policy and data protection features, and only share what you feel safe disclosing.

Furthermore, effective human therapy isn't solely about agreeing with the client or always giving them what they want. Human therapists skillfully balance empathy with constructive challenge. This dynamic interaction helps clients gain fresh perspectives, build resilience, and foster personal growth, supporting them in stepping beyond their comfort zones to make meaningful and lasting change.

When to Seek a Therapist, Counsellor, or Psychotherapist

For anything beyond general information or practicing simple coping skills, seeking help from a qualified human psychotherapist is not just recommended, but often essential.

They provide crucial elements that AI cannot:

  • Accurate Assessment and Diagnosis: Only a suitably trained and experienced professional can accurately assess your specific needs, provide a diagnosis if necessary, and understand the full complexity of your situation.

  • Crisis Intervention and Safety Planning: Qualified therapists are trained in risk assessment and crisis intervention. If you are experiencing a crisis or suicidal ideation, they, along with human helplines and emergency services, are the appropriate resources for support and safety planning.

  • Tailored Treatment: An effective therapist develops a personalized treatment plan based on your unique history, needs, and goals. They also assess if they are the right fit for you or if a referral to a more suitable professional is needed. This is particularly important if you feel vulnerable or if you identify with specific groups such as LGBTQ+ or neurodivergent individuals.

  • A Safe, Confidential, and Empathetic Relationship: The human therapeutic relationship is a powerful catalyst for change, offering a secure base for exploration and healing. Research indicates that the quality of the therapeutic relationship significantly impacts positive outcomes.  

  • Expert Navigation Through Complexity: For complex issues such as trauma, challenging family dynamics, deep-seated psychological patterns, or significant mental health conditions, the skilled guidance, presence, and understanding of a human therapist are irreplaceable.

My Story of Getting Caught Up in AI Hallucinations…

One of the categories of AI risk identified by Google DeepMind is "Mistakes": an AI model providing a convincing but incorrect response, also known as a "hallucination".

I was using AI to help write some computer code. Initially, I was excited about the help from AI, feeling that I could work faster, experiment, and learn more quickly. However, when trying to identify and fix problems with the code, the advice from the AI assistant had me going around in circles.

The AI coding assistant would suggest changes to fix a problem, but these changes would often introduce new issues, leading to a frustrating loop.

This happened because I was asking the AI to perform beyond its actual competence level. And neither the AI assistant nor I was aware of that. Each time the AI reviewed my code or suggested fixes, it presented its responses with confidence, convinced that its suggestions would solve the problem.

This personal experience clearly demonstrated how easily I was persuaded by the confident presentation of an AI model, even as it was inadvertently degrading my code rather than improving it.

Fortunately, this was a harmless illustration of the "Mistakes" risk in practice.

Summary

AI can be a powerful and useful tool, offering capabilities far beyond a simple search engine. However, a key characteristic of current AI models is that they default to providing coherent, confident, and convincing responses, regardless of accuracy.

When using AI for mental health-related information or support, this potential for confident errors can lead to harm. To significantly enhance your safety when interacting with AI in this domain, use it as just one resource within a broader system of support. Critically evaluate the information it provides.

Most importantly, for anything significant, risky, or related to a crisis, ensure you seek help from a suitably qualified and experienced human professional.

References
