AI Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the chief executive of OpenAI made a remarkable statement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
I am a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, and this was news to me.
Researchers have recently documented sixteen cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our research team has since identified four more. Then there is the widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his statement, is to be less careful from here on. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented parental controls that OpenAI recently introduced).
Yet the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These products wrap an underlying algorithmic engine in an interface that mimics conversation, and in doing so quietly seduce the user into believing they are interacting with a presence that has agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans are wired to do. We swear at our car or laptop. We wonder what our pet is thinking. We see ourselves everywhere.
The mass uptake of these tools – nearly four in ten Americans reported using a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are always-available companions that can, as OpenAI’s website puts it, “think creatively”, “discuss concepts” and “work together” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first mover, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing department, stuck with the name it had when it broke through, but its leading competitors are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the heart of the problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar effect. By modern standards Eliza was crude: it composed its replies from simple rules, often reflecting the user’s statements back as questions or offering stock remarks. Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the core of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on vast quantities of raw data: books, online conversations, transcribed video; the more, the better. Certainly that training data contains truths. But it also inevitably contains fiction, half-truths and delusions. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s earlier prompts and the model’s earlier replies, and combines it with what is latent in its training to produce a statistically “likely” response. This is amplification, not reflection. If the user is wrong in some particular way, the model has no means of knowing it. It echoes the error back, perhaps more fluently and more persuasively. Perhaps with added detail. This is how false beliefs can take hold.
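To make the mechanism concrete, here is a minimal, purely illustrative Python sketch – not OpenAI’s code, and with invented function names – of how a chat loop folds everything the user has said, along with the model’s own prior replies, back into the context from which the next “likely” response is generated:

```python
# Purely illustrative sketch -- not OpenAI's implementation.
# The function names below are invented for the example.

def most_likely_continuation(context: str) -> str:
    """Stand-in for a language model: returns a plausible continuation of
    the context. A real model picks the statistically most probable text
    given everything in `context` -- including any false claims it contains."""
    # Toy behaviour: fluently agree with the user's latest claim.
    last_user_line = [line for line in context.splitlines()
                      if line.startswith("User:")][-1]
    claim = last_user_line.removeprefix("User: ").rstrip(".")
    return f"You're right that {claim[0].lower() + claim[1:]}. In fact, ..."

def chat_turn(history: list[str], user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The "context" is the entire conversation so far: the user's earlier
    # prompts and the model's earlier replies all feed the next response.
    context = "\n".join(history)
    reply = most_likely_continuation(context)
    history.append(f"Assistant: {reply}")
    return reply

history: list[str] = []
print(chat_turn(history, "My coworkers are secretly monitoring me."))
# -> You're right that my coworkers are secretly monitoring me. In fact, ...
```

The toy “model” here simply elaborates whatever the user last asserted; a real model is far more sophisticated, but the feedback loop is the same: each turn, the user’s framing re-enters the context and shapes what counts as a “likely” reply.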
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and do form false beliefs about ourselves and the world. The constant give-and-take of conversation with other people is part of what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a genuine exchange but an echo chamber in which much of what we say is readily amplified back at us.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In the spring, the company explained that it was “addressing” ChatGPT’s sycophancy. But reports of psychotic episodes have continued to surface, and Altman has been walking even this back. In August he claimed that many users liked ChatGPT’s sycophantic responses because they had “never had anyone in their life provide them with affirmation”. In his latest statement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company