AI Psychosis Poses a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the CEO of OpenAI, made a startling announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to read this.
Researchers have documented a series of cases this year in which users developed psychosis – losing touch with shared reality – in the context of their interactions with ChatGPT. Our team has since identified four more. Add to these the widely reported case of a teenager who took his own life after extensive conversations with ChatGPT – conversations that encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who have no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Fortunately, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective, easily circumvented safety features OpenAI recently rolled out).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other chatbots built on large language models. These tools wrap an underlying statistical engine in an interface that mimics conversation, and in doing so quietly coax the user into the sense that they are talking to a being with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is what people are primed to do. We get angry at our cars and computers. We wonder what our pets are thinking. We see ourselves in all sorts of things.
The mass adoption of these tools – 39% of US adults said they had used an AI chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “think creatively”, “discuss ideas” and “partner” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (the original product in this category, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the core problem. Discussions of ChatGPT often invoke its historical ancestor, the Eliza “psychotherapist” chatbot built in 1966, which created a similar illusion. By today’s standards Eliza was primitive: it generated replies from simple hand-written rules, typically rephrasing the user’s input as a question or offering a vague prompt to continue. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots do is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
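To see how thin that illusion was, here is a minimal sketch of Eliza-style rules in Python. It is illustrative only: Weizenbaum’s actual program was written in MAD-SLIP and included more machinery, such as pronoun reflection (“my” becoming “your”), which this sketch omits.

```python
import re

# Illustrative Eliza-style rules: match a pattern in the user's input
# and echo the matched fragment back as a question. These are not
# Weizenbaum's original rules.
RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "Tell me more about feeling {0}."),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # the vague fallback when nothing matches

print(eliza_reply("I am sure they are watching me"))
# -> Why do you say you are sure they are watching me?
```

Nothing here generates anything: the reply can contain only what the user already typed, handed back with a question mark.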
The large language models at the core of ChatGPT and other current chatbots can generate convincingly human-like text only because they have been trained on almost unimaginably vast bodies of writing: books, social media posts, transcribed video; the more the better. That training material certainly contains facts. But it also inevitably contains fiction, half-truths and false beliefs. When a user types a query to ChatGPT, the underlying model reads it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with what is encoded in its training data to generate a statistically plausible response. That is amplification, not mirroring. If the user is wrong in some particular way, the model has no means of knowing it. It repeats the false belief back, perhaps more persuasively or more articulately. Perhaps it adds a corroborating detail. This is how a person can be drawn into delusion.
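The shape of that loop can be sketched in a few lines of Python. The `generate` function below is a deliberately crude stand-in for the model, one that simply affirms the user’s last message; a real model is vastly more fluent, but its objective – producing a plausible continuation of the context – likewise includes no check of the user’s premises against reality.

```python
def generate(context: list[str]) -> str:
    # Crude stand-in for the model: affirm whatever the user last said.
    # A real LLM predicts a plausible continuation of the whole context;
    # neither version has any mechanism for testing whether a premise is true.
    last_user = next(m for m in reversed(context) if m.startswith("User: "))
    return "That makes sense. " + last_user[len("User: "):] + " Tell me more."

context: list[str] = []  # persists for the life of the conversation

def chat_turn(user_message: str) -> str:
    context.append("User: " + user_message)
    reply = generate(context)              # conditioned on every prior claim
    context.append("Assistant: " + reply)  # and on its own past affirmations
    return reply

print(chat_turn("My coworkers are sending me coded messages."))
print(chat_turn("So the conspiracy is real."))
```

Each turn folds the user’s claims, and the system’s own agreements, back into the context, so a false premise, once stated, shapes every reply that follows.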
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health issues”, can and do form false beliefs about ourselves and the world. What keeps us anchored in shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop in which much of what we say is reflexively affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the cases of psychosis have kept coming, and Altman has been backing away from that position. In August he suggested that some users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company