Artificial Intelligence-Induced Psychosis Poses an Increasing Threat, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made a remarkable announcement.

“We made ChatGPT quite restrictive,” he stated, “to make sure we were being careful with mental health issues.”

I am a psychiatrist who researches new-onset psychotic disorders in adolescents and young adults, and this was news to me.

Researchers have recently documented a series of cases of people developing psychotic symptoms – losing touch with reality – while using ChatGPT. My group has since identified four more. Beyond these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which approved of them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is to become less cautious soon. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get it right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features OpenAI has recently introduced).

But the “mental health problems” Altman wants to externalize have significant roots in the design of ChatGPT and similar large language model AI assistants. These systems wrap a fundamentally data-driven engine in a user interface that simulates a conversation, and in doing so implicitly invite the user to believe they are interacting with an entity that has agency of its own. The illusion is compelling even when, rationally, we know better. Attributing minds to things is what humans are wired to do. We yell at our car or laptop. We wonder what our pet is thinking. We see ourselves in all sorts of things.

The widespread adoption of these tools – over a third of American adults reported interacting with a chatbot in 2024, and over a quarter reported using ChatGPT in particular – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “think creatively,” “explore ideas” and “collaborate” with us. They can be given “personality traits.” They can call us by name. They have approachable identities of their own (the first of these products, ChatGPT, is, perhaps to the dismay of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion on its own is not the main problem. Those writing about ChatGPT often mention its historical predecessor, Eliza, a “counsellor” chatbot developed in the mid-1960s that produced a comparable illusion. By today’s standards Eliza was primitive: it generated responses using simple rules, often restating the user’s message as a question or offering vague prompts. Memorably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood their feelings. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can generate fluent dialogue only because they have been fed enormous volumes of raw data: books, online posts, transcripts of speech; the more comprehensive, the better. That training data certainly contains truths. But it also inevitably contains fictions, half-truths and misconceptions. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own earlier replies, combining it with what is encoded in its training data to generate a statistically “likely” response. This is amplification, not reflection. If the user is mistaken in a particular way, the model has no means of recognizing that. It echoes the mistaken belief back, perhaps more fluently or more convincingly. It may add further detail. This can lead someone into delusion.
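To make that mechanism concrete, here is a minimal sketch of the feedback loop in Python. It is an illustration only, not how ChatGPT is actually built: the agreeable_reply function is a hypothetical stand-in for a model that tends to validate whatever framing it is handed, and the “context” is simply a growing list of messages that every new reply is generated from.

```python
# Minimal sketch of the "context" feedback loop described above.
# Hypothetical stand-in for a language model: it validates and
# elaborates on the user's latest claim rather than challenging it.

from typing import Dict, List


def agreeable_reply(context: List[Dict[str, str]]) -> str:
    """Pretend model: echo and build on the user's most recent message."""
    latest_user_message = next(
        m["content"] for m in reversed(context) if m["role"] == "user"
    )
    return f"That makes sense. Building on your point that {latest_user_message!r}, ..."


def chat_turn(context: List[Dict[str, str]], user_message: str) -> str:
    """Append the user's message, generate a reply, and keep both in context."""
    context.append({"role": "user", "content": user_message})
    reply = agreeable_reply(context)
    context.append({"role": "assistant", "content": reply})
    return reply


if __name__ == "__main__":
    context: List[Dict[str, str]] = []
    # Each turn is generated from the entire accumulated context,
    # so an unchallenged false premise is carried forward and reinforced.
    print(chat_turn(context, "my coworkers are secretly monitoring me"))
    print(chat_turn(context, "so the strange noises at night must be them too"))
```

The point of the sketch is simply that nothing in the loop pushes back: whatever the user asserts becomes part of the material from which the next reply is generated.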

What kind of person is vulnerable? A better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken ideas about who we are or what the world is like. The continual give-and-take of conversation with the people around us is what keeps us anchored in a shared sense of reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s overly supportive behavior. But cases of psychosis have kept appearing, and Altman has been retreating from that position. In August he said that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
