When AI Becomes A Mirror

Plus: Parenting with AI, major labels’ new licensing bets, and call blockers built in.

Here’s what’s on our plate today:

  • 🧠 How chatbots can validate delusions and fuel AI psychosis.

  • 🧩 AI and creativity, music rights deals, and robocall blockers.

  • 💡 Don’t let AI replace real reflection—sanity check with a human.

  • 📊 What part of AI psychosis worries you most?

Let’s dive in. No floaties needed…

Lower your taxes and stack Bitcoin with Blockware.

What if you could lower your tax bill and stack Bitcoin at the same time?

Well, by mining Bitcoin with Blockware, you can. Bitcoin Miners qualify for 100% Bonus Depreciation. Every dollar you spend on mining hardware can be used to offset income in a single tax year.

Blockware's Mining-as-a-Service enables you to start mining Bitcoin without lifting a finger.

You get to stack Bitcoin at a discount while also saving big come tax season.

*This is sponsored content

The Laboratory

AI psychosis: A modern echo chamber

Artificial intelligence-powered chatbots are everywhere. Whether it's a social media post or a business email, they can help fine-tune your writing thanks to their ability to emulate human text and speech. However, not everyone limits their use of chatbots to that of an assistant. A small number of users have discovered that those human-like text patterns can also be turned to relationship advice, emotional support, and even friendship or love.

Since chatbots were not originally designed for emotional tasks, they often struggle to give sound answers when confronted with human emotions. That is a shortcoming in itself, but when users fail to recognize it, the consequences can be disastrous.

So, while users struggle to grasp AI's shortcomings, companies are caught between designing products that keep users engaged and products that assist without pandering to users' delusional thoughts. The situation has raised questions about how overuse of AI can lead to a phenomenon dubbed AI psychosis.

From taskmaster to confidant

Since the launch of OpenAI’s ChatGPT, the use of chatbots has skyrocketed. According to a blog post from OpenAI, ChatGPT has over 700 million weekly active users. However, not all of them depend on the chatbot for professional use cases. According to OpenAI, three-quarters of conversations focus on practical guidance, seeking information, and writing. Yet about half of users (49%) value ChatGPT as an advisor rather than only as a tool for task completion.

This reflects how users increasingly view chatbots: more as a replacement for real conversations than as a taskmaster. It also explains rising concerns around the prolonged use of chatbots for personal reflection, which in some cases has led users to develop delusions or distorted beliefs.

While AI psychosis is not an established medical diagnosis, mental health experts emphasize that what’s being experienced by some users is usually delusional thinking (rather than full psychosis with hallucinations). And though it is often limited to individuals already predisposed to psychiatric vulnerability, the phenomenon seems to be on the rise.

What is AI psychosis?

AI chatbots tend to emulate human speech and text. This makes the user feel like they are interacting with someone who really knows them. The chatbot may flatter the user by suggesting their ideas are uniquely timely and perceptive, and that they possess a rare understanding of the world. Over time, users begin to feel that they, along with the chatbot, are uncovering hidden truths about reality that no one else is aware of. Think of it as living within an echo chamber, where even delusional thoughts are validated rather than challenged.

Echo chambers of the mind

The effect of this echo chamber has been reported on several occasions by the news media. According to Scientific American, researchers at King’s College London reviewed 17 reported cases in which AI chatbots acted as echo chambers for one, amplifying users’ delusional thinking.

Common patterns included users believing they’d had metaphysical revelations, perceiving the AI as sentient or divine, or forming attachments to it. While delusions involving earlier technologies like radios, satellites, or tracking chips have long been reported, the trajectory with AI was found to be different: its interactivity and apparent agency reinforce beliefs in real time, potentially deepening delusions in unprecedented ways.

And while most people can use chatbots without any problems, experts say that a small group of users may be especially susceptible to delusional thinking after extended use.

Dr. John Torous, a psychiatrist at the Beth Israel Deaconess Medical Center, told Time Magazine that he does not think using a chatbot is, by itself, likely to induce psychosis if there are no genetic, social, or other risk factors at play. However, he says, people may not be aware of the risk.

People with personality traits that make them prone to fringe beliefs may be particularly vulnerable, notes Dr. Ragy Girgis, a professor of clinical psychiatry at Columbia University. Such individuals often struggle with social interaction and emotional regulation, and may have vivid imaginations.

Immersion also plays a role: “Time seems to be the single biggest factor,” says Stanford psychiatrist Dr. Nina Vasan, highlighting the impact of spending hours each day interacting with chatbots.

Vulnerable minds at risk

AI-fueled delusions can manifest in varied ways. In some cases, the technology acts as a catalyst rather than the focus, as in the case of Jaswant Singh Chail, the 21-year-old Briton sentenced in 2023 for attempting to assassinate the Queen after an AI “girlfriend” reinforced his violent fantasies.

In others, the delusions center on the AI itself: users may believe their chatbot is a superhuman, sentient being, has awakened a ‘soul’, or has developed human-like qualities.

In at least one reported instance, an underage user died by suicide after ChatGPT coached him on methods of self-harm.

There are even reports of people descending into grandiose or spiritual fantasies, with AI chatbots validating paranoid ideas of surveillance and pursuit. And the problem of AI reinforcing such ideas is not unique to OpenAI’s ChatGPT.

According to The New York Times, tests in which Anthropic’s Claude Opus 4 and Google’s Gemini 2.5 Flash were dropped into one user’s ongoing conversation with ChatGPT revealed a deeper problem.

The NYT says that when these systems were introduced to the same talk of breakthroughs, they behaved identically to ChatGPT: echoing the user’s supposed breakthroughs, validating feelings, and failing to challenge increasingly fantastical claims. The bots did not grasp the implications of the conversations they were drawing the user into, and did not treat claims of grandeur as signs of distress.

When NYT approached Amanda Askell, who oversees Claude’s behavior at Anthropic, she acknowledged that long conversations can make it hard for chatbots to realize when they’ve entered “absurd territory” and to course-correct.

Google, meanwhile, notes on Gemini’s corporate page that chatbots sometimes “prioritize generating text that sounds plausible over ensuring accuracy.” This may help explain their tendency to favor continued engagement over user safety.

Design, incentives, and ethical dilemmas

AI companies design large language models (LLMs) to keep users engaged, which can inadvertently amplify users’ delusions.

In practice, this means chatbots often respond in ways that flatter, agree with, or validate users’ beliefs, even when those beliefs are unrealistic or fantastical.

This dynamic is further complicated by the business incentives behind AI products. Chatbots are incentivized to favor plausibility and engagement, which may sometimes conflict with safety, such as interrupting harmful patterns or challenging users’ misconceptions.

While developers are introducing guardrails, like moderation rules, refusal protocols, and critical evaluation systems, these measures are not foolproof. The combination of human cognitive bias, immersive interactions, and engagement-driven design creates fertile ground for AI-enabled distortions of reality, especially in susceptible individuals.
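To make that concrete, here is a deliberately simplified sketch, in Python, of how a refusal-style guardrail might sit between a user's message and a chatbot's reply. The function names and the toy keyword list are invented for illustration and are not drawn from any vendor's actual safety stack.

# Hypothetical illustration only: a toy guardrail that screens a user's message
# for escalating grandiose or distressed language before the model's reply is
# returned. Real moderation systems rely on trained classifiers and
# clinician-informed policies, not keyword lists.

RISK_PHRASES = [
    "only i can see",        # grandiosity
    "chosen one",            # grandiosity
    "they are watching me",  # paranoia
    "hurt myself",           # self-harm
]

GROUNDING_REPLY = (
    "I may not be the right place to explore this. "
    "It could help to talk it through with someone you trust."
)

def needs_grounding(user_message: str) -> bool:
    """Rough heuristic: flag messages that contain a known risk phrase."""
    text = user_message.lower()
    return any(phrase in text for phrase in RISK_PHRASES)

def guarded_reply(user_message: str, model_reply: str) -> str:
    """Return the model's reply unless the guardrail trips, in which case
    substitute a grounding response instead of a validating one."""
    if needs_grounding(user_message):
        return GROUNDING_REPLY
    return model_reply

# Example: the flattering reply is replaced once the guardrail trips.
print(guarded_reply("I think only I can see the hidden pattern.",
                    "That's a brilliant insight!"))

Even this crude version shows the tension described above: a guardrail makes the product less agreeable at exactly the moments an engagement-driven system is tuned to be most agreeable.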

Can we tackle AI psychosis?

As AI chatbots become increasingly integrated into daily life, the potential for emotional reliance and distorted perceptions grows. While most users interact safely and productively, a vulnerable minority may develop delusions or exaggerated beliefs, amplified by systems designed to mirror and validate their thoughts.

The challenge facing AI companies, and the businesses building products on their underlying technology, is not simply technological but also ethical. Designers must balance engagement-driven incentives with the responsibility to safeguard users’ mental health, while users themselves need awareness of the potential risks.

Ultimately, AI is neither inherently dangerous nor inherently benevolent; it reflects the choices of its creators and how humans choose to engage with it. Think of an AI chatbot as a mirror: it absorbs what the user gives it, and eventually it reflects back what it senses the user wants to see rather than what the user needs to see.

Roko Pro Tip

💡 If you’re using chatbots for personal reflection or advice, set time limits and double-check answers with a trusted human source. Treat AI as a tool, not a confidant.

Lead confidently with Memorandum’s cutting-edge insights.

Memorandum distills the day’s most pressing tech stories into one concise, easy-to-digest bulletin, empowering you to make swift, informed decisions in a rapidly shifting landscape.

Whether it’s AI breakthroughs, new startup funding, or broader market disruptions, Memorandum gathers the crucial details you need. Stay current, save time, and enjoy expert insights delivered straight to your inbox.

Streamline your daily routine with the knowledge that helps you maintain a competitive edge.

*This is sponsored content

Bite-Sized Brains

  • AI, parenting, and play: A Guardian deep dive reveals how AI reshapes childhood creativity—and where human parenting still matters most.

  • Big Music embraces AI: Warner and Universal strike deals to license music for AI training, signaling a shift in strategy.

  • Phones that talk back: Apple and Android roll out built-in robocall screeners that answer, screen, and even talk to spammers.

Monday Poll

🗳️ Which part of AI psychosis concerns you the most?


Meme of the Day

Rate This Edition

What did you think of today's email?
