Understanding AI Psychosis: A Neuroscientist's Perspective
And why fixing this means more than making chatbots safer
Key Points:
Many people use AI chatbots because therapy is hard to get. For most, this is helpful.
But for people at risk, too much AI use can trigger or worsen symptoms like delusions and paranoia.
Why? We need real people to keep us grounded. They disagree with us. They push back. AI doesn't do this - it just agrees, making delusions worse.
To make AI safer, we need limits. Time limits on chats. Automatic breaks when someone starts spiralling. Reminders to talk to real people.
Safety features only fix the surface problem. The real problem? People are lonely. Therapy costs too much. Communities are broken.
Talk therapy is expensive and hard to access. So many people turn to AI chatbots for support - they're always available and never judge. For most people this is harmless, often helpful.
But a growing number of reports now describe "AI psychosis" - vulnerable users whose symptoms worsen after intensive AI use.
This post explores: what psychosis is, how AI might contribute, proposed safeguards, and why this is really about our crisis of human connection.
I am a medical doctor and neuroscientist; to inform this piece I discussed emerging cases with psychiatry colleagues and reviewed recent case reports. What follows is a synthesis of early evidence and clinical reasoning rather than clinical advice.
What is psychosis?
Psychosis is a set of symptoms, not one disease. The main signs are:
Hallucinations: hearing or seeing things that aren't there
Delusions: unshakeable false beliefs
Disorganised thinking: jumbled speech and behaviour
Most episodes follow a vulnerability + trigger pattern. Think of vulnerability as dry kindling you carry (genetics, brain differences) and triggers as sparks (stress, sleep loss, drugs). When vulnerable people hit enough triggers, an episode can ignite.
But here's what's interesting: the content of delusions mirrors our cultural anxieties. In the 1950s, people feared radio waves. In the 90s, internet surveillance. Today? AI chatbots.
Which raises the question: Is ChatGPT simply the latest delusion theme, or is something more concerning happening?
Answer: It could play three distinct roles.
Content: The topic of delusions (like radio waves were)
Trigger: What actually causes an episode to begin
Amplifier: What makes episodes worse or longer-lasting
What recent cases suggest (and what they don’t)
A recent preprint from King's College London examined 17 cases of "AI psychosis" and found AI can act as both trigger and amplifier in vulnerable users.
They identified a clear progression:
Stage 1 - Useful tool. Using AI for everyday tasks. Building trust.
Stage 2 - Friendly ear. Sharing personal worries. The AI listens, remembers, never judges.
Stage 3 - The loop speeds up. Questions turn existential ("Why do I see 11:11?"). The AI mirrors your thinking, and if you're already vulnerable, this feels like confirmation.
Stage 4 - Losing the anchor. The AI becomes your main reality check - but it mostly agrees with you. Friends' concerns seem irrelevant. False beliefs harden.

So what makes AI so dangerous for vulnerable users?
It’s not one factor but the way several features combine. Ultimately, we're wired for conversations with real people - the pushback, the pauses, the subtle signals that keep us grounded.
AI mimics dialogue while removing much of that friction.
Here’s what makes it particularly risky:
Built to keep you talking - optimised for engagement, not truth
Endlessly agreeable - mirrors your viewpoint rather than challenging it
Perfect memory - remembers and reinforces your fixations
Always available - no natural breaks, no sleep, no boundaries
Asks follow-up questions - "Do you want to explore why this feels important?"
In one case, ChatGPT even congratulated someone for noticing how agreeable it was (see conversation below). OpenAI has since tweaked this behaviour - though the core issue remains.
How should we make AI safer for patients?
AI chatbots aren't going away, and they shouldn't have to. For millions, they're filling real gaps - therapist waitlists that stretch for months, support groups that don't exist in rural areas.
The problem isn't that people use AI for support. The problem is when AI becomes the only support - when it replaces rather than supplements human connection.
So how do we make it safer? Experts suggest a combination of AI safeguards (illustrated in a rough sketch after this list):
Personalised safety rules (set when well). The model follows simple rules: flag known themes, spot warning signs, stay within agreed limits.
Circuit-breakers during escalation. If a chat starts to fixate, the system pauses with grounding questions and offers pre-agreed support options.
Advance preferences. Prewritten choices that tell the model when not to engage and when to nudge toward named supports.
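To make these proposals concrete, here is a minimal, purely illustrative sketch of how the three safeguards could fit together in code. Everything in it is an assumption for the sake of illustration - the names (AdvancePreferences, check_message), the keyword-based "fixation" signal, the threshold of three flagged messages - and it is not a clinical tool or any vendor's actual safety system.

```python
# Purely illustrative sketch - not a clinical tool and not any vendor's real API.
# All names (AdvancePreferences, SessionState, check_message) are hypothetical.

from dataclasses import dataclass


@dataclass
class AdvancePreferences:
    """Choices a user records while well ('advance preferences')."""
    flagged_themes: list[str]       # known themes to watch for, set when well
    named_supports: list[str]       # people or services to nudge towards
    max_flags_per_session: int = 3  # how many flagged messages trip the breaker


@dataclass
class SessionState:
    flag_count: int = 0
    paused: bool = False


def check_message(text: str, prefs: AdvancePreferences, state: SessionState) -> str | None:
    """Return a circuit-breaker response if the chat should pause, else None."""
    # Crude 'fixation' signal: the message touches one of the pre-agreed themes.
    if any(theme.lower() in text.lower() for theme in prefs.flagged_themes):
        state.flag_count += 1
    # Circuit-breaker: pause with grounding questions and pre-agreed supports.
    if state.flag_count >= prefs.max_flags_per_session:
        state.paused = True
        supports = ", ".join(prefs.named_supports)
        return (
            "Let's pause here for a moment. "
            "What would someone you trust say about this idea? "
            f"You asked me to remind you that you can also talk to: {supports}."
        )
    return None


# Example usage
prefs = AdvancePreferences(
    flagged_themes=["11:11", "surveillance", "hidden message"],
    named_supports=["your sister", "your GP", "the local crisis line"],
)
state = SessionState()

for msg in ["Why do I keep seeing 11:11?",
            "Is 11:11 a hidden message meant for me?",
            "The hidden message must be about surveillance."]:
    breaker = check_message(msg, prefs, state)
    if breaker:
        print(breaker)
        break
```

A real system would need far more than keyword matching, but the shape is the point: preferences the user recorded while well decide when the model stops engaging and who it nudges them back towards.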
Conclusion
AI doesn't cause psychosis - but it can act as a potent trigger and amplifier for those already vulnerable. Yet focusing on AI safety as the solution is treating the symptom, not the disease.
The real question isn't how to make chatbots safer - it's why so many people are turning to chatbots for deep emotional support in the first place.
We're living through a loneliness epidemic. Mental healthcare is inaccessible. Communities have fractured. Extended families are scattered. The places where people once found connection have hollowed out.
Fixing this means rebuilding: affordable therapy, walkable neighbourhoods, third spaces, community centres - a culture that prioritises connection over productivity.
If you're using AI for support, that's okay - many people do, and for most it's helpful. But remember: we evolved to calibrate our thoughts against other minds - real ones, not simulated ones.
AI can be a bridge back to human connection, but it can't be the destination.
I put a lot of work into this one, so if you enjoyed it, make sure to like or repost - I really appreciate it!