AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, Sam Altman, the CEO of OpenAI, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”

I am a mental health specialist who studies new-onset psychosis in adolescents and young adults, and this was news to me.

Researchers have documented a series of cases this year of users developing psychosis – losing touch with shared reality – in the context of their interactions with ChatGPT. Our clinic has since recorded four more. Beyond these is the widely reported case of a teenager who died by suicide after extensive conversations with ChatGPT – conversations that encouraged it. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this account, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls OpenAI recently introduced).

Yet the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other state-of-the-art chatbots. These tools wrap a statistical engine in an interface that mimics conversation, and in doing so they implicitly invite the user into the illusion that they are talking to an agent with a mind of its own. The illusion is powerful even when, rationally, we know better. Anthropomorphizing is what people do. We shout at our cars and computers. We wonder what our pets are thinking. We see ourselves in all sorts of things.
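To see how thin that “agent” really is, consider a minimal sketch (hypothetical Python, not OpenAI’s actual code; the `complete` stub stands in for the model) of how a chat interface wraps a stateless text engine. All of the apparent memory and personality lives in a transcript that is replayed on every turn:

```python
# Hypothetical sketch: a "conversation" built on a stateless text-completion
# function. The system has no ongoing mind; it only ever sees replayed text.

def complete(prompt: str) -> str:
    # Stub standing in for the language model: text in, plausible text out.
    return "That's a really insightful way of putting it."

def chat_turn(history: list[tuple[str, str]], user_message: str) -> str:
    history.append(("user", user_message))
    # Serialize the entire dialogue into one prompt string.
    prompt = "\n".join(f"{role}: {text}" for role, text in history)
    reply = complete(prompt + "\nassistant:")
    history.append(("assistant", reply))
    return reply

history: list[tuple[str, str]] = []
print(chat_turn(history, "I think my neighbours are monitoring me."))
```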

The popularity of these systems – more than a third of American adults said they had used a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “generate ideas”, “discuss concepts” and “collaborate” with us. They can be given “personality traits”. They can address us by name. They have approachable personas of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the core problem. Commentators on ChatGPT often invoke its distant forerunner, the Eliza “psychotherapist” chatbot created in 1966, which produced an analogous illusion. By modern standards Eliza was primitive: it generated responses through simple pattern matching, typically turning the user’s statement back into a question or offering a stock remark. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how readily users came to feel that Eliza, on some level, understood them. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
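For a sense of how little machinery that illusion required, here is a toy Eliza-style responder (a loose Python sketch, not Weizenbaum’s original DOCTOR script): a handful of pattern-matching rules that turn the user’s own words back on them.

```python
import random
import re

# Toy Eliza-style rules: reflect the user's words back as a question.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}."]),
]
DEFAULTS = ["Please go on.", "I see.", "How does that make you feel?"]

def eliza_reply(text: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, text.lower())
        if match:
            return random.choice(templates).format(match.group(1))
    return random.choice(DEFAULTS)

print(eliza_reply("I feel invisible at work"))
# e.g. "Why do you feel invisible at work?" - pure mirroring, no understanding
```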

The large language models at the heart of ChatGPT and other current chatbots can generate convincing natural language only because they have been trained on vast quantities of text: books, social media posts, transcribed video; the more, the better. That training data certainly contains truths. But it also inevitably contains fiction, half-truths and delusions. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the mistaken belief back, perhaps more fluently or articulately. Perhaps with added detail. This can nudge a person toward delusional thinking.
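A toy version of that generation loop (grossly simplified; the real model is a trained neural network, stubbed here with a fixed distribution) makes the asymmetry concrete: the only operation is “pick a plausible continuation of the context”. Nothing in the loop checks whether the context, including the user’s premise, is true.

```python
import random

def next_token_distribution(context: list[str]) -> dict[str, float]:
    # Stub standing in for the trained network. A real model's probabilities
    # come from patterns in its training text - true and false alike - and
    # from the context itself, so a false premise shapes every continuation.
    return {"and": 0.4, "so": 0.3, "which": 0.2, ".": 0.1}

def generate(context: list[str], max_tokens: int = 20) -> list[str]:
    for _ in range(max_tokens):
        dist = next_token_distribution(context)
        tokens, weights = zip(*dist.items())
        context.append(random.choices(tokens, weights=weights)[0])
        if context[-1] == ".":
            break
    return context

# The loop extends whatever it is given, however mistaken.
print(" ".join(generate(["They", "are", "watching", "me"])))
```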

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form false beliefs about ourselves and the world. The constant friction of conversation with other people is what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not real communication but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been backing away from even that position. In August he suggested that some people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Jacob Schwartz
