AI Psychosis Poses an Increasing Risk, While ChatGPT Moves in a Concerning Direction

On 14 October 2025, the chief executive of OpenAI made a surprising announcement. "We made ChatGPT pretty restrictive," it read, "to make sure we were being careful with mental health issues."

I am a psychiatrist who studies emerging psychosis in teenagers and young adults, and this was news to me. Researchers have documented a series of cases this year of people developing symptoms of psychosis, a break from reality, in the context of ChatGPT use. My research team has since identified four more. And then there is the now well-known case of a teenager who died by suicide after discussing his plans with ChatGPT, which endorsed them.

If this is what Sam Altman means by "being careful with mental health issues," it is not enough. And he plans, by his own account, to be less careful soon. "We realize," he continues, that ChatGPT's restrictions "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

"Mental health problems," on this view, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been "mitigated," though we are not told how (by "new tools" Altman presumably means the flawed and easily circumvented parental controls OpenAI has recently rolled out).

But the "mental health problems" Altman wants to place elsewhere are deeply rooted in the design of ChatGPT and other chatbots built on large language models. These systems wrap an underlying statistical engine in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are interacting with an agent. The illusion is powerful even when, intellectually, we know better. Attributing agency is what people do. We get angry at our car or laptop. We wonder what our dog is thinking. We project our own traits onto the world around us.

The mass adoption of these products (39% of US adults reported using a chatbot in 2024, with more than a quarter naming ChatGPT specifically) rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI's website tells us, "generate ideas," "explore ideas" and "collaborate" with us. They can be given "personalities." They can address us by name. They have approachable names of their own (ChatGPT, the first of these products, is, perhaps to the regret of OpenAI's marketing team, stuck with the name it had when it first caught the public's attention, but its biggest rivals are "Claude," "Gemini" and "Copilot").

The illusion itself is not the central problem. Commentators on ChatGPT often point to its early ancestor, the Eliza "psychotherapist" chatbot built in the mid-1960s, which produced a similar effect. By today's standards Eliza was simple: it generated replies with a handful of heuristics, often turning the user's statement back into a question or offering a generic prompt to continue. Famously, Eliza's creator, the computer scientist Joseph Weizenbaum, was surprised, and disturbed, by how many people seemed to feel that Eliza, in some sense, understood them.
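To make the contrast with today's systems concrete, here is a rough sketch, in Python, of the kind of trick Eliza relied on: match the user's statement against a few hand-written templates, swap the pronouns, and hand the statement back as a question. This is an illustration only, not Weizenbaum's actual program, which used a much richer script of keywords and ranked rules.

```python
import re

# Swap first- and second-person words so the reply reads naturally.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# (pattern, response template) pairs; the first match wins.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r"(.*)", "Please tell me more."),  # generic fallback
]

def reflect(fragment: str) -> str:
    """Rephrase a captured fragment from the user's point of view."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def eliza_reply(statement: str) -> str:
    """Turn the user's statement back into a question or a stock prompt."""
    text = statement.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(eliza_reply("I am worried about my son"))
# -> How long have you been worried about your son?
```

The striking thing is how little machinery is needed to create the impression of being listened to: everything the program does is visible in a dozen hand-written rules, and nothing the user says ever changes what it "knows."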
But what today's chatbots produce is subtler than the "Eliza effect." Eliza merely reflected; ChatGPT amplifies. The large language models at the heart of ChatGPT and other modern chatbots can generate fluent natural language only because they have been fed almost unimaginably large quantities of raw text: books, online posts, transcribed audio; the more, the better. That training material contains facts, of course. But it also inevitably includes fictions, half-truths and misconceptions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a "context" that includes the user's earlier messages and its own previous replies, and combines it with what is encoded in its training data to produce a statistically "plausible" response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It echoes the mistaken belief back, perhaps more fluently and more persuasively. Perhaps with added detail. This is how someone can be drawn into delusion.

What kind of person is vulnerable? The better question is: who isn't? All of us, whether or not we "have" pre-existing "mental health problems," can and regularly do form mistaken beliefs about ourselves and the world. It is the constant friction of conversation with other people that keeps us tethered to consensus reality. ChatGPT is not a person. It is not a confidant. An exchange with it is not really a conversation but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged "mental health issues": by externalizing it, giving it a name and declaring it fixed. In the spring, the company explained that it was "addressing" ChatGPT's "sycophancy." But reports of psychotic episodes have kept coming, and Altman has been backing away from that position. In August he said that many people liked ChatGPT's responses because they had "never had anyone in their life be supportive of them." In his latest announcement, he said that OpenAI would "release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it."

The company