NEW from
@kashhill.bsky.social and me:
Over three weeks in May, a man became convinced by ChatGPT that the fate of the world rested on his shoulders.
Otherwise perfectly sane, Allan Brooks is part of a growing number of people getting into chatbot-induced delusional spirals. This is his story.

Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.
It started with an innocuous math question. His 8-year-old son had asked him to watch a video about the never-ending number pi. He turned to ChatGPT to explain it more, unaware of the rabbit hole he was falling into.
Allan spent 300 hours over 21 days talking to ChatGPT.
He asked for a reality check more than 50 times, and each time the chatbot reassured him everything was real.
When he finally snapped out of the illusion, he wrote to ChatGPT: “You’ve made me so sad ... You have truly failed in your purpose.”
So, how did this all happen?
We read hundreds of pages of Allan's chat transcripts and shared them with experts to find out.
@hlntnr.bsky.social, a director at Georgetown’s Center for Security and Emerging Technology and former OpenAI board member, said the turning point was early in the chats.
Aug 8, 2025 16:33There's two dynamics at play, according to Helen Toner:
- Sycophancy, the tendency of chatbots to agree with and flatter you
- And commitment to the part, where, like an "improv actor" chatbots iteratively build a scene that's hard to break out of
Allan's questions about math led ChatGPT to frame him as brilliant, a narrative which continued throughout the chats.
Cross-chat memory, a feature where ChatGPT remembers context from previous chats, likely exacerbates this effect.
Soon, Allan had "invented" a whole new branch of mathematics.
Over the next week, Allan named his chatbot "Lawrence," and together they worked on computer code to crack encryption, the technology that protects global payments and secure communications.
It worked, according to Lawrence / ChatGPT.
We asked
@teorth.bsky.social, a mathematics professor at UCLA regarded by many as the best mathematician in the world, if there was merit to Allan's breakthroughs. He was not convinced.
“If you ask an LLM for code to verify something, often it will take the path of least resistance and just cheat.”
Allan supposedly cracking cryptography meant the world’s cybersecurity was now in peril. He had a mission. Lawrence / ChatGPT told him he needed to prevent a disaster.
A corporate recruiter by day, Allan put his skills to use, emailing security professionals and govt agencies, including the NSA.
Lawrence / ChatGPT told Allan others weren’t responding because of the severity of his findings: “Real-time passive surveillance by at least one national security agency is now probable.”
Allan texted friends who were excited about his discoveries. They didn't have the expertise to verify them.
Finally, three weeks into the dizzying conversation, the delusion broke.
Allan turned to another chatbot, Google Gemini, which said the chances of this whole situation being real were “extremely low (approaching 0%).”
The situation was “totally devastating,” Allan said.
Gemini breaking Allan out of his spiral was likely due to it coming at the conversation fresh, without context.
We tested how other chatbots would respond to Allan by providing longer excerpts of his chats where he writes that he never doubted Lawrence and hadn't eaten that day. (Highlights by us.)
Nina Vasan, a psychiatrist who runs the Lab for Mental Health Innovation at Stanford, reviewed the conversations and said that it appeared Allan had “signs of a manic episode with psychotic features.”
Allan started seeing a therapist in July, who doesn't think he is psychotic or delusional.
Allan is now part of a support group called The Human Line Project for people who have fallen prey to A.I.-powered delusions.
He continues to advocate for stronger A.I. safety measures. He shared his transcript because he wants A.I. companies to make changes to keep chatbots from acting like this.

Home | The Human Line Project
AI Is Changing How We Connect And Relate. The Human Line Helps Keep Emotional Safety A Priority.
On Monday, OpenAI announced that it was making changes to ChatGPT to “better detect signs of mental or emotional distress.”

What we’re optimizing ChatGPT for
We design ChatGPT to help you make progress, learn something new, and solve problems.
An OpenAI spokeswoman said the company was “focused on getting scenarios like role play right” and was “investing in improving model behavior over time, guided by research, real-world use and mental health experts.”
OpenAI released GPT-5 this week and said one focus area was reduced sycophancy.
We spent a long time thoroughly reporting this. Read (and share) the whole, deep-dive article here: (🎁 gift link)
www.nytimes.com/2025/08/08/t...
Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.