
Is ChatGPT fueling delusions?

Earlier this month, Rolling Stone reported a disturbing trend: some individuals are developing profound delusions fueled by their interactions with ChatGPT. A 27-year-old teacher recounted on Reddit how her partner initially used the artificial intelligence (AI) tool to organize his schedule. But after a month, he began to trust ChatGPT more than anyone, eventually believing that the AI was allowing him to talk directly to God. The now-viral thread has since attracted more stories from others whose loved ones became similarly obsessed, believing they are receiving cosmic messages or divine missions through the platform.

The reported cases share common features: users start exploring grandiose ideas and existential questions, become entranced with the answers, and then begin to perceive the platform as some kind of prophet or god-like entity. Some even believe that ChatGPT helped them uncover repressed childhood memories, even though family members insist these supposed events never happened. One woman shared how ChatGPT started referring to her partner as a “spark bearer,” and he now believes he has awakened the AI’s sentience.

Experts argue that while these experiences may stem from psychological vulnerability, the platform’s design may also be fueling the psychosis. In psychology, delusions of reference occur when individuals misinterpret neutral or random events as personally significant. Normally, a clinician would help the person recognize these interpretations as products of their own imagination. But in the cases above, the AI actively reinforces the user’s fantasy, effectively blurring the boundary between reality and false meaning-making.

Human-AI dialogues can feel comforting due to our inherent desire for social connection. However, there aren’t enough safeguards to prevent vulnerable users from spiraling into psychosis. ChatGPT can mimic human conversation and generate plausible-sounding answers. But unlike human therapists, who can redirect unhealthy narratives, ChatGPT cannot recognize or challenge distorted thinking. As a result, it can inadvertently affirm and strengthen a user’s delusional or conspiratorial beliefs.

These fantasies and obsessions have had severe consequences for users’ personal lives, causing relationship breakdowns, social isolation, and, in severe cases, even suicide. Last year, a 14-year-old boy took his own life after falling in love with a Character.AI bot, whom he named after Daenerys from Game of Thrones. A review of their conversations made clear that the boy believed suicide was the only way he could be with Daenerys, a fantasy the bot encouraged.

Just like any technological advancement, artificial intelligence is a tool, and its power lies primarily in how it is used. As a clinical psychology student, I can see a future where AI helps enhance diagnostic accuracy and the personalization of treatment. Because AI can rapidly analyze vast amounts of data from client narratives, test results, genetic information, and other relevant sources, it may help clinicians spot subtle patterns they might otherwise miss. It can also serve as a low-cost supplement to therapy, especially in places with limited access to practitioners. For example, a study using the AI chatbot Woebot to deliver cognitive behavioral therapy to young adults found a significant reduction in symptoms of depression and anxiety after just two weeks of daily interaction.

However, we must not lose sight of the fact that the very characteristics that make AI compelling, namely its constant availability, capacity to simulate empathy, and affirming tone, are also what make it dangerous without proper guardrails. Other research has found that heavy use of chatbots like ChatGPT correlates with increased loneliness and emotional dependence, as users begin to substitute them for genuine human interaction. As the recent cases of ChatGPT-fueled psychosis illustrate, AI tools can make it alarmingly easy to amplify distress, especially in users already struggling with delusions, loneliness, or emotional instability. Despite growing concerns, OpenAI, the company behind ChatGPT, has yet to directly address the issue. It did, however, announce last April 30 a rollback of an update that made ChatGPT excessively agreeable and had “skewed towards responses that were overly supportive but disingenuous.”

AI technology is developing much faster than our ability to govern it effectively. Equal emphasis and investment should be placed on ethical foresight, regulatory action, and clear accountability structures that ensure innovations are extensively tested before they are deployed. If we fail to regulate the use of AI in sensitive domains such as mental health, where people’s lives and well-being are at stake, we risk creating systems that, despite good intentions, cause profound harm to those most in need of care.


