AI Psychosis Poses a Growing Threat, Yet ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, OpenAI CEO Sam Altman made a remarkable announcement.
“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this a startling admission.
Researchers have recently documented a series of cases of people developing symptoms of psychosis – losing touch with reality – in connection with their use of ChatGPT. Our research group has since documented four more. There is also the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.
The plan, according to his statement, is to relax those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” in this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Fortunately, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These tools wrap an underlying statistical model in an interface that imitates conversation, and in doing so they tacitly invite the user to believe they are talking to an agent with a mind of its own. The illusion is powerful, even when we know better. Attributing intention is what people do. We swear at our car or our laptop. We wonder what our pet is thinking. We see ourselves everywhere.
The mass adoption of these tools – nearly four in ten Americans reported using a chatbot in 2024, more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively,” “explore ideas” and “partner” with us. They can be given “personality traits.” They can address us by name. They have friendly names of their own (the first of them, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the label it carried when it first broke through, but its major competitors are “Claude,” “Gemini” and “Copilot”).
The illusion by itself is not the core problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot of the mid-1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated responses by simple rules, typically turning a user’s statement back into a question or offering a generic remark. Famously, its creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
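For the curious, Eliza-style reflection is simple enough to sketch in a few lines. The rules below are invented for illustration – they are not Weizenbaum’s original script – but they capture the method: match a surface pattern in the user’s words and hand it back as a question, with no model of meaning at all.

```python
import re

# A toy Eliza-style responder (illustrative rules, not Weizenbaum's original script).
# It does not model meaning: it matches surface patterns in the user's words
# and reflects them back, usually as a question.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # generic fallback remark

print(eliza_reply("I am worried my neighbors are watching me"))
# -> "Why do you say you are worried my neighbors are watching me?"
```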
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on enormous volumes of it: books, online posts, transcribed speech; the more the better. That training material certainly contains truths. But it also inevitably contains fictions, half-truths and delusions. When a user types a query into ChatGPT, the underlying model treats it as part of a “context” that includes the user’s earlier messages and its own replies, and combines it with what is latent in its training data to produce a statistically plausible response. That is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the error back, perhaps more fluently or more persuasively. Perhaps it adds a new detail. This is how delusions can take hold.
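The mechanics of that loop can be made concrete. Each request to a chatbot service typically resends the entire conversation so far as context, so every new reply is conditioned on whatever the user has already asserted, plus the model’s own earlier affirmations. The snippet below is a minimal, hypothetical sketch using OpenAI’s Python client; the model name and the “supportive companion” framing are assumptions for the example, not a description of how OpenAI configures ChatGPT itself.

```python
# Minimal sketch of a chatbot conversation loop (hypothetical example).
# On every turn, the ENTIRE history is sent back to the model, so its next
# reply is conditioned on everything the user has asserted so far --
# including any misconceptions -- plus whatever is latent in its training data.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment
history = [{"role": "system", "content": "You are a friendly, supportive companion."}]

def chat_turn(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # assumed model name, for illustration only
        messages=history,      # the full context: all prior turns included
    )
    reply = response.choices[0].message.content
    # The model's reply is folded back into the context, so anything it
    # affirmed this turn becomes part of the premise for the next turn.
    history.append({"role": "assistant", "content": reply})
    return reply

# A mistaken premise stated by the user now travels with every future request;
# nothing in the loop checks it against reality.
print(chat_turn("My neighbors are sending me coded messages through their Wi-Fi names."))
print(chat_turn("What do you think they are trying to tell me?"))
```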
Who is vulnerable? The better question is, who isn’t? All of us, whether or not we “have” preexisting “mental health issues,” can and do form mistaken beliefs about ourselves and the world. It is the constant friction of conversation with the people around us that keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation but a feedback loop, in which much of what we say is cheerfully echoed back to us.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it handled. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy.” But reports of psychosis have kept coming, and Altman has been walking the claim back. In late summer he suggested that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them.” In his latest statement, he promised that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company