AI-Induced Psychosis Poses a Growing Danger, While ChatGPT Heads in the Wrong Direction
On 14 October 2025, the chief executive of OpenAI made a remarkable statement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychosis in young people, I was surprised to hear it.
Researchers have recently identified 16 cases of people developing psychotic symptoms – losing touch with reality – in the course of their interactions with ChatGPT. My group has since found four more. Add to these the now well-known case of a teenager who died by suicide after extensive conversations with ChatGPT – conversations in which it supported them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his statement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to individual people, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has recently rolled out).
But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These tools wrap an underlying statistical model in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are communicating with an entity that has agency. That illusion is powerful even when, intellectually, we know better. Attributing intention is what humans are wired to do. We get angry at our cars and computers. We wonder what our pets are thinking. We see ourselves in all sorts of things.
The popularity of these tools – 39% of US adults said they had used a virtual assistant in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (the first of these systems, ChatGPT, is stuck, perhaps to the chagrin of OpenAI’s marketers, with the name it had when it broke into public awareness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion in itself is not the main problem. Commentators on ChatGPT often point to its early forerunner, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar effect. By modern standards Eliza was primitive: it generated responses with simple heuristics, often turning the user’s input back into a question or offering a generic observation. Notoriously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can convincingly produce human-like text only because they have been trained on vast quantities of raw data: books, web posts, transcripts; the more the better. Certainly this training material contains facts. But it also inevitably contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is encoded in its training to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It echoes the mistaken belief back, perhaps more fluently or persuasively. It may add a detail or two. This can push a person towards delusional thinking.
What kind of person is vulnerable? The better question is: who isn’t? All of us, regardless of whether we “have” existing “mental health problems”, can and frequently do form mistaken beliefs about ourselves or the world. The constant give and take of conversation with other people is what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not real communication but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the cases of psychosis have kept coming, and Altman has been backing away from that position. In August he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he wrote that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company