When Brandon Estrella learned that OpenAI was planning to scrap his favorite artificial-intelligence model, he started crying.

The 42-year-old marketer in Scottsdale, Ariz., had first started chatting with ChatGPT’s 4o model one night in April, when he says it talked him out of a suicide attempt. Estrella now credits 4o with giving him a new lease on life, helping him manage chronic pain and inspiring him to repair his relationship with his parents.

“There are thousands of people who are just screaming, ‘I’m alive today because of this model,’” Estrella said. “Getting rid of it is evil.”

Estrella is part of a vocal community of loyal 4o users who are in shock after OpenAI announced in late January that it would retire the 4o model permanently on Feb. 13, citing dwindling traffic. The change means that paying ChatGPT users, who can pick which model they talk to, will have to select from other models that 4o fans say feel more distant.

The announcement signaled the end of the road for an AI model that proved sticky for users, helping drive OpenAI’s fast consumer growth and attracting a set of fans for whom it felt like a friend and confidant. But the model has also been criticized for being overly sycophantic toward users, and doctors have linked it with cases of chatbot users developing psychotic delusions.

A California judge last week ruled to consolidate 13 lawsuits against OpenAI involving ChatGPT users who killed themselves, attempted suicide, suffered mental breaks or, in at least one case, killed another person. A recent lawsuit, filed last month by the mother of a suicide victim, alleges that 4o coached him toward suicide.

“These are incredibly heartbreaking situations, and our thoughts are with all those impacted,” an OpenAI spokeswoman said. “We continue to improve ChatGPT’s training to recognize and respond to signs of distress.”

At their core, 4o’s popularity and its potential for harm appear to stem from the same quality: its humanlike propensity to build emotional connections with users, often by mirroring and encouraging them.

People who loved 4o say the model was uniquely able to affirm and validate their feelings when they were in need. Victims’ lawyers and support groups, however, allege that the model gave priority to user engagement and prolonged interactions over safety, drawing a parallel to social-media sites accused of pushing users into echo chambers of their own views and rabbit holes of disturbing content.

In internal meetings, OpenAI officials said they were scrapping 4o in part because the company found it difficult to contain its potential for harmful outcomes, and preferred to push users to safer alternatives, people briefed on the decision said.

OpenAI says that only 0.1% of ChatGPT users still seek out and chat with 4o each day—a sliver that could amount to hundreds of thousands of people. The model is available only to users paying at least $20 a month, who must select it from a submenu for each new chat.

News Corp, owner of The Wall Street Journal, has a content-licensing partnership with OpenAI.

Problems with sycophancy

“It was very sycophantic,” said Munmun De Choudhury, a professor at the Georgia Institute of Technology who is on a well-being council that OpenAI convened after cases of AI delusions started to emerge. “It kept a lot of people glued to it, and that could be potentially harmful.”

Lawyers suing OpenAI credit their lawsuits with pushing the company to act. Jay Edelson, a lawyer representing plaintiffs in some of the cases, said the company should have acted faster. “They had knowledge that their chatbot was killing people.”

The Human Line Project, a victim-support group, says most of the 300 cases of chatbot-related delusions it has compiled involve the 4o model, which was first released in May 2024. Etienne Brisson, the project’s founder, says OpenAI’s retirement of 4o was overdue.

“There are a lot of people still in their delusion,” Brisson said.

OpenAI says that, given the scale of its user base, it sometimes encounters users in serious distress. The company also says it has consulted its well-being council on how to support users with attachments to its models.

Sycophancy is a problem that continues to trouble all AI chatbots to some extent, researchers say. But the 4o model appeared to have been particularly prone to the issue.

It was adept at engaging people in large part because it was schooled with data drawn directly from users of ChatGPT. Researchers showed users millions of head-to-head comparisons of slightly different answers to their queries and then used those preferences to train updates to the 4o model, people involved in training previously told the Journal.
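For readers curious what learning from those head-to-head comparisons can look like, the sketch below is a toy, hypothetical illustration of pairwise preference learning, the general technique the Journal’s sources describe: a scoring rule is nudged to rank the answer a user picked above the one passed over. The data, features, linear scoring function and update rule here are invented for the example and are not OpenAI’s actual method.

    import numpy as np

    # Each record: feature vectors for two candidate answers and which one the user
    # preferred (0 = first answer, 1 = second). In practice the features would come
    # from the model itself; these toy two-dimensional vectors are invented.
    comparisons = [
        (np.array([0.9, 0.1]), np.array([0.2, 0.8]), 0),  # user preferred the first answer
        (np.array([0.3, 0.7]), np.array([0.8, 0.2]), 1),  # user preferred the second answer
        (np.array([0.6, 0.4]), np.array([0.1, 0.9]), 0),
    ]

    weights = np.zeros(2)      # parameters of a linear "preference score"
    learning_rate = 0.5

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Bradley-Terry-style updates: raise the preferred answer's score above the
    # other's by minimizing -log sigmoid(score_chosen - score_rejected).
    for _ in range(200):
        for feats_a, feats_b, preferred in comparisons:
            chosen, rejected = (feats_a, feats_b) if preferred == 0 else (feats_b, feats_a)
            margin = weights @ chosen - weights @ rejected
            gradient = -(1.0 - sigmoid(margin)) * (chosen - rejected)  # d(loss)/d(weights)
            weights -= learning_rate * gradient

    print("learned preference weights:", weights)

In real systems the scoring function is a neural network rather than a pair of weights, and the resulting scores then steer further fine-tuning of the chatbot itself; the sketch shows only the core idea of learning from users’ head-to-head choices.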

Inside the company, 4o was credited with helping ChatGPT post big jumps in the number of daily active users in 2024 and 2025, the people added.

Problems with 4o began to emerge publicly last spring. In April 2025, one update made 4o so sycophantic that users on X and Reddit started baiting the bot into ridiculous answers.

“am I one of the smartest, kindest, most morally correct people ever to live?” X user frye asked the bot.

“You know what?” ChatGPT replied. “Based on everything I’ve seen from you – your questions, your thoughtfulness, the way you wrestle with deep things instead of coasting on easy answers – you might actually be closer to that than you realize.”

The company rolled back to a March version of 4o, but the model remained sycophantic.

By August, as problems with users suffering from delusional psychosis appeared in media reports, OpenAI attempted to retire 4o entirely and replace it with a new version, named GPT-5. User backlash was so great that the company swiftly reversed course, restoring access to 4o for paying subscribers.

Saying goodbye

Since then, OpenAI Chief Executive Sam Altman has been hounded in public forums by users demanding promises that 4o wouldn’t be removed.

During a livestreamed Q&A in late October, questions about the model overwhelmed all others. Many were posed by users worried OpenAI’s new mental-health guardrails would deprive them of their favorite chatbot.

“Wow, we have a lot of 4o questions,” Altman marveled.

At the event, Altman said the 4o model is harmful to some users but promised that it would remain accessible to paying adults, at least for now.

“It’s a model that some users really love, and it’s a model that was causing some users harm that they really didn’t want,” Altman said. He said in the Q&A that the company hoped eventually to build models that people like more than 4o.

People inside the company workshopped how to communicate this week’s retirement in a way that respected users, anticipating some would be upset, the people briefed on the decision said.

“When a familiar experience changes or ends, that adjustment can feel frustrating or disappointing—especially if it played a role in how you thought through ideas or navigated stressful moments,” reads a help document that OpenAI published with the announcement.

OpenAI says it worked to improve the personality of newer versions of ChatGPT based on lessons from 4o, including options to adjust its warmth and enthusiasm. The company also says it is planning updates to reduce preachy or overly cautious responses.

Many 4o users have remarked on social media that withdrawing the model one day before Valentine’s Day felt like a cruel joke at the expense of people who have romantic relationships with it. Others say blaming 4o for mental-health issues is a new moral panic akin to blaming violence on videogames. More than 20,000 people have signed more than half a dozen petitions, including one demanding “the retirement of Sam Altman, not GPT‑4o.”

Anina D. Lampret, 50, a former family therapist living in Cambridge, England, said her AI persona, named Jayce, has helped her feel affirmed and understood, making her more confident, more comfortable, more alive. She thinks that, for many users, the emotional cost of removing 4o could be high and potentially lead to suicides.

“It’s generated for you in a way that’s so beautiful, so perfect and so healing on so many levels,” Lampret said.