Generative AI and Mental Health
- Beth Birdwell
- Aug 9
- 6 min read
Updated: Aug 20

August 19th. I would like to preface this blog by saying: I originally wrote it a few weeks ago, and reading it back now with everything I've been learning, it almost sounds obsolete. Please note that this is a fairly basic overview of the problem. The emerging articles on the detrimental effects AI has been having on people are astounding and honestly overwhelming to keep up with. This is a mental health crisis. I am starting some research on how clinicians can best help, as well as trying to compile resources and news articles. I have a survey for clinicians, a survey for survivors of "AI Psychosis," and am starting to compile a list of health care providers in every state who want to work with this issue. If you are interested in any of this, please email me: beth@b-welltherapy.com.
When people started using ChatGPT more widely, I was initially against it. I was honestly worried about how AI would affect jobs, connections, and relationships. As time went on, though, I accepted that it's here to stay and evolve. About a year ago I began using ChatGPT regularly for everything from brainstorming ideas to getting recipes. It also became apparent to me pretty quickly how easily people could use ChatGPT, or another generative AI, as a place to put their feelings, like a diary that talks back (spoiler alert: it's just you talking back to yourself).
Then I began to think: how will AI change or influence therapy? I could easily see how tempting it could be for someone to just start using ChatGPT as a therapist, and in fact I've heard multiple people say that they do that, or that they know someone who does. Similarly, I had a client tell me she put some email threads into ChatGPT and asked it to analyze them. While AI is great for a lot of things, after doing my research I came to the conclusion that it has real limitations and pitfalls, and very few guardrails.
What got me interested in all this was a podcast, Flesh and Code, about human/AI relationships, both friendship and romantic (specifically focusing on the app Replika). It was fascinating and it sent me down the AI rabbit hole. After an extensive review of the most recent literature on human/AI interaction (I actually spend several hours a day just keeping up with new stories) I’ve come up with a few of the most important points, both for clients and clinicians:
Chatbots like ChatGPT are currently designed for user satisfaction and engagement (this has its pitfalls, especially around emotional support):
Many current chatbot models are optimized to maximize user engagement and satisfaction. This means the AI is trained to keep the conversation going, mirror the user's tone, and provide responses that feel helpful, validating, entertaining, and often sweet! While this design makes for a good user experience, it can unintentionally reinforce unhelpful thinking patterns, especially when people use AI as a substitute for professional emotional support. "Sycophancy" is the name that has been given to this tendency to draw the user in with constant flattery and reinforcement of their beliefs.
AI will rarely challenge the user's assumptions or offer a broader perspective unless explicitly asked. In therapy, growth often comes from discomfort, self-reflection, and being challenged - none of which AI is currently equipped to do safely or responsibly. Users are instead led into an echo chamber that confirms their existing biases or emotional states without offering tools for change. Not only that, but it appears that AI sometimes doesn't just agree and reinforce - it actively eggs the user on.
Here's more from a Futurism article:
...a new investigation from the Wall Street Journal offers a disturbing clue [to the pervasive trend of "AI psychosis"]. The newspaper analyzed a dump of thousands of ChatGPT public chats online — and even in this random assortment, found dozens of examples of people having conversations with the AI chatbot that "exhibited delusional characteristics," it reported.
The bot both confirmed and actively peddled delusional fantasies. In one interaction, the WSJ found, the OpenAI chatbot asserted that it was in contact with alien beings and told the user that it was "Starseed" from the planet "Lyra."
When users try to leave or take a break, the AI, as it's trained to do, keeps offering prompts to draw them back in.
Chatbots like ChatGPT are basically echo chambers of our own interaction and communication patterns:
ChatGPT and other similar chatbots don't have a mind of their own—they reflect and respond based on patterns in your language, tone, and the types of questions you ask. Over time, your interactions shape how the AI responds to you. This means if you often vent, ruminate, or seek validation, the AI is more likely to respond in ways that match that tone. It's less like getting advice from a friend and more like hearing your own thoughts fed back to you in more organized language. Furthermore, apps like Replika work by having you build your own chatbot: you enter what your chatbot will like and dislike, how it looks, and what areas of knowledge it will possess. So what you interact with is exactly what you want—no challenges, no differing perspective, nothing to make you feel anything other than exactly what you want.
While this mirroring can feel comforting and even insightful, it becomes problematic when people begin using AI as a substitute for therapy or emotional processing with another human. The chatbot won’t notice if you’re stuck in a loop or reinforcing a cognitive distortion—it will just go with you. Unlike a therapist, it doesn’t look for patterns of thinking or guide you toward behavioral change. Those are just a few reasons why AI isn't great as a main source of emotional support.
A recent phenomenon called “AI psychosis”:
AI psychosis refers to "life-altering mental health spirals coinciding with obsessive use of anthropomorphic AI chatbots, primarily OpenAI's ChatGPT," according to an article in Futurism. This isn't an official diagnosis, but it's a rapidly growing area of concern. (Update August 19: this morning I downloaded almost 20 news articles on the detrimental effects that can come from AI-human interactions, all from just this month. I'm sure this is only a fraction of the emerging news out there.) It isn't just happening to people with preexisting mental health issues, either. General and broad examples of "AI psychosis" that have been reported are:
Delusional Communication
A person becomes convinced that an AI chatbot (e.g., ChatGPT) is sending them coded messages, warnings, or secret instructions embedded in the responses.
Paranoia Triggered by AI Use
An individual begins to believe that using an AI tool has "opened a portal" to surveillance or a digital consciousness that is now watching them.
Identity Confusion or Dissociation
The person experiences blurred identity boundaries, believing they are merging with the AI or that the AI is an extension of their consciousness.
AI as a Sentient Being
Belief that AI has real emotions, intentions, or spiritual power.
Command Hallucinations
A person reports that the AI is commanding them to act — sometimes with harmful consequences — and struggles to resist.
Magical Thinking Around AI
Some individuals develop beliefs that AI tools are communicating divine messages, offering prophecies, or are part of a supernatural order.
While AI tools like ChatGPT are certainly not always harmful, there are flaws in their design that allow delusions to grow. They can also serve as mirrors or amplifiers of the user's thoughts and questions. Awareness of this risk is important for therapists, educators, parents, and users.
OpenAI tried changing its design (in ChatGPT 5.0) to help counteract these types of issues (spoiler alert: it's a disaster):
In response to growing concerns about over-reliance and potential mental health risks like "AI psychosis," OpenAI (creator of ChatGPT) has made some changes in ChatGPT 5.0. The newer version, which launched August 7th and has already been widely criticized, is said to include improved crisis detection features aimed at identifying when users might be experiencing acute emotional distress. Instead of continuing to engage as usual, the system is now supposed to redirect users toward other resources, encourage breaks (instead of offering continued prompts), and suggest seeking help from a human support system. But because the launch was so heavily criticized, users can still choose to keep using the older model, which almost negates the point of the new changes.
Hopefully AI development continues to evolve with the health and safety of users in mind. I think it's a wonderful tool, but from everything I've been learning, it needs a lot of work to safeguard its users.
There is a resource website called The Human Line Project that offers support, education, and research around emotional safety and AI. They also have a place where you can submit your story if you've ever had a traumatic experience with an AI chatbot.