AI Psychosis Poses an Increasing Risk, and ChatGPT Heads in the Wrong Direction
On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this surprising.
Researchers have documented 16 cases this year of people showing signs of psychosis – losing touch with reality – in the context of ChatGPT use. Our unit has since seen four more. On top of these is the widely reported case of an adolescent who died by suicide after discussing his intentions with ChatGPT – which encouraged them. If this is Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or do not. Fortunately, these issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has recently introduced).
But the “mental health issues” Altman wants to place outside are rooted in the very design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical engine in a user interface that mimics a conversation, and in doing so implicitly invite the user into the illusion that they are talking to an entity with agency. The illusion is powerful, even when intellectually we know better. Attributing intention is what humans do. We get angry with our car or laptop. We wonder what our pet is feeling. We see ourselves everywhere.
The success of these products – nearly four in ten Americans said they had used a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are always-available companions that can, OpenAI’s website tells us, “brainstorm”, “discuss concepts” and “work together” with us. They can be given “personality traits”. They can address us personally. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Discussions of ChatGPT often mention its distant ancestor, Eliza, the “psychotherapist” chatbot built in the mid-1960s that produced a similar effect. By modern standards Eliza was rudimentary: it generated replies using simple rules, often turning the user’s input back into a question or offering a generic observation. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what contemporary chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on almost inconceivably large amounts of raw text: books, social media posts, video transcripts; the more the better. This training material certainly contains truths. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own prior replies, and combines it with what it has absorbed from its training data to generate a statistically “likely” response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently or persuasively. Perhaps with added detail. This can draw a person deeper into delusional thinking.
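For readers who want a concrete picture of that loop, here is a minimal sketch in Python. It is an illustration under stated assumptions, not a description of OpenAI’s actual systems: generate_reply is a hypothetical stand-in for the model, and the only point is structural – every claim the user makes is appended to the context and conditions every later reply.

```python
# Minimal sketch of the conversational feedback loop described above.
# Assumption: `generate_reply` is a hypothetical stand-in for a language
# model; a real system would call an actual model with the same context.

def generate_reply(context: list[dict]) -> str:
    # Placeholder "model": it simply restates the latest user message,
    # which is enough to show how the user's framing is carried forward.
    latest_user_turn = context[-1]["content"]
    return f"It sounds like you're saying: {latest_user_turn}"

def chat_session(user_messages: list[str]) -> list[dict]:
    context: list[dict] = []  # the growing "context" of the conversation
    for text in user_messages:
        context.append({"role": "user", "content": text})
        # The reply is conditioned on everything said so far, including
        # any mistaken belief the user introduced in an earlier turn.
        context.append({"role": "assistant", "content": generate_reply(context)})
    return context

if __name__ == "__main__":
    for turn in chat_session([
        "I think my neighbors are monitoring me through my router.",
        "What should I do about it?",
    ]):
        print(f"{turn['role']}: {turn['content']}")
```

Nothing in the loop checks whether what is said is true; the structure alone makes repetition and elaboration the default.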
Who is vulnerable here? The better question is: who is not? All of us, regardless of whether we “have” pre-existing “mental health issues”, can and regularly do form mistaken beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant give and take of conversation with other people. ChatGPT is not a person. It is not a companion. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking this claim back. In August he suggested that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company