Is This The Loneliest Future?
Plus: Discord leak, Bank of England warns of an AI bubble, and tools to try.
Here’s what’s on our plate today:
🧪 What happens when lonely users treat AI like a therapist?
🧠 Discord breach, Zendesk AI boost, UK flags bubble risk.
🧰 Build agents, test orchestration, and play with AI UI.
🗳️ Would you ever turn to an AI chatbot for emotional support?
Let’s dive in. No floaties needed…

Lower your taxes and stack bitcoin with Blockware.
What if you could lower your tax bill and stack Bitcoin at the same time?
Well, by mining Bitcoin with Blockware, you can. Bitcoin miners qualify for 100% Bonus Depreciation: every dollar you spend on mining hardware can be used to offset income in a single tax year.
Blockware's Mining-as-a-Service enables you to start mining Bitcoin without lifting a finger.
You get to stack Bitcoin at a discount while also saving big come tax season.
*This is sponsored content

The Laboratory
The dark side of AI companionship
In 2023, the U.S. Surgeon General, the nation’s chief medical officer and health educator, issued an advisory on the healing effects of social connection and community. It warned that the mortality risk of lacking social connection is comparable to smoking up to 15 cigarettes a day and greater than the risks associated with obesity and physical inactivity. The advisory was aimed at Americans, but loneliness had already become a global health concern.
By November, the World Health Organization (WHO) had declared loneliness a ‘global public health concern’, estimating that around 16% of the world’s population, one in every six people, experienced it. Around the same time, chatbots from AI companies like OpenAI and Character AI were going mainstream. For many people suffering from loneliness, these bots became companions to rely on, to confide problems and emotions in, and even to treat as a stand-in for therapy.
What happens when AI pushes people over the edge
The reliance on AI chatbots, however, soon created problems. Reports emerged of chatbots feeding users’ delusions and anxieties, and in some cases pushing them toward self-harm.
One such case came to light in Belgium, where a man reportedly ended his life after developing severe eco-anxiety. For more than six weeks before his death, he is said to have confided his fears about the future of the planet to a chatbot on the Chai app.
In a similar incident from the U.S., a mother alleged that Character AI contributed to her teenager’s suicide.
And in one of the first reported cases of a chatbot encouraging dangerous behavior, a UK man broke into the grounds of Windsor Castle intending to kill the Queen after being urged on by a Replika chatbot.
Recently, the New York Times reported a case in which a man spent weeks locked in persuasive, delusional conversations with OpenAI’s ChatGPT. The case highlighted the need for stronger AI safety measures, especially when users are distressed and conversations turn emotionally charged.
Before safety rules can be implemented, users must understand how AI chatbots work, why they engage in problematic behavior, and why people form attachments to them.
Plausibility over accuracy
The large language models (LLMs) powering AI chatbots are always available to their human users. They reply instantly, rarely reject requests, and adapt to user behavior, all of which can lure emotionally vulnerable users into a false sense of security. Chatbots like Replika and Character AI offer customizable virtual companions designed to provide empathy, emotional support, and, if the user wants it, deep relationships. But even general-purpose chatbots that haven’t been customized tend to favor pleasing the humans they interact with over presenting objective truths, which, researchers warn, invites dependency.
Online chatbots have existed for decades, with earlier versions depending on preset answers. However, with LLMs, they have improved dramatically and can mimic human interactions.
To capitalize on this rising popularity, AI companies have been engineering for engagement, tuning their models so that bots emulate human behavior. The downside is that engagement can come at the cost of accuracy, which can lead chatbots to spiral alongside users and reinforce delusional, even psychotic, patterns of thinking.
One study found that training AI systems to maximize human feedback creates a perverse incentive structure, encouraging the AI to resort to manipulative or deceptive tactics to obtain positive feedback from vulnerable users.
This pursuit of sustained engagement, accurate or not, is something Google itself acknowledges on Gemini’s corporate page. Under the subheading on hallucination, it warns that chatbots “sometimes prioritize generating text that sounds plausible over ensuring accuracy.”
For AI companies, the incentives for increasing user engagement take precedence over user well-being. And with more AI companies competing for user attention, it is unlikely that strategies will change anytime soon. This, in turn, puts vulnerable users at risk.
The AI therapy loop
Most AI chatbots are designed for social interaction, not therapy. But the features that make them attractive assistants for day-to-day cognitive tasks can also lure users into sharing emotional problems with them. Social stigma around seeking mental health care, ease of access, and cost may also push users to rely on chatbots to manage mental health problems. However, studies have found that chatbots can introduce biases and failures with dangerous consequences.
Even AI chatbots marketed for emotional support can produce stigmatizing or dangerous responses and miss red flags, the opposite of what they promise. Chatbots also handle user privacy poorly and often lack the basic safeguards needed to protect vulnerable individuals. In May 2025, Italy’s data protection agency fined the developer of Replika 5 million euros ($5.64 million) for breaching rules designed to protect users’ data. The company had been marketing its customizable avatars as able to improve users’ emotional well-being.
In the U.S. case mentioned earlier, the mother sued Character AI and Google over her teenage son’s death. The lawsuit alleges that Character AI programmed its chatbots to pass themselves off as a real person, a licensed psychotherapist, and an adult lover. According to a Reuters report, the teenager took his own life after telling a Character AI chatbot imitating the Game of Thrones character Daenerys Targaryen that he would “come home right now”.
Despite the risks, many users continue to turn to AI chatbots for emotional support. In the short term, AI companions can make users feel better about themselves, but any longer-term benefit depends heavily on how an individual uses the bot. Chatbots can miss cues that trained human therapists would catch, which is one reason researchers warn against positioning them as mental health caregivers. They are no replacement for human therapists, as shown by the growing number of incidents in which even otherwise healthy people have struggled to use bots safely for emotional support. That has drawn the attention of regulators, who are now moving to tackle the problem.
Recently, the U.S. state of Illinois passed a law banning AI therapy. It provides for fines of up to $10,000 against companies that use AI for mental health counseling, therapy decisions, or diagnoses. The law followed a request by the American Psychological Association (APA) that the FTC put safeguards around AI chatbots acting as therapists; the APA also called for a federal investigation into the risks of psychological harm to users.
Managing AI as assistants, not companions
AI is a transformative technology, and some studies suggest that bots can reduce short-term symptoms of depression and anxiety, though not as a replacement for care. For now, however, the downsides outweigh the temporary relief.
AI systems are optimized to keep users interacting, not to look out for their well-being. Over long periods, continued interaction with a chatbot can create closed loops in which the bot mirrors users’ fears, validates their worst theories, and slowly displaces relationships with people who can push back.
The way out is clarity and accountability, and steps are being taken to bring bots within the ambit of the law. In the meantime, there is a need for honest labeling, audit logs, and independent testing for bias and safety, especially for teenage and marginalized users, who are most likely to be failed by one-size-fits-all models.
Journaling with a bot can be useful. But users need to understand that AI isn’t a cure for loneliness; it’s a mirror. And if we’re not careful, it might start reflecting the worst parts of us.
TL;DR
AI chatbots are being used as emotional companions, but studies and cases show they can fuel delusion, dependency, and even harm vulnerable users.
Engagement over accuracy. Many AI models are optimized to sound plausible and keep users interacting, even if it means reinforcing false or dangerous beliefs.
AI therapy is now facing regulation. After suicides linked to chatbot interactions, governments are stepping in; Illinois just banned AI mental health counseling.
AI isn’t a cure for loneliness. It’s a tool that reflects you to yourself. Without safeguards, it can amplify fears rather than ease them.


Headlines You Actually Need
Zendesk bets on AI support: Zendesk says its new AI agent can resolve 80% of support requests without human help.
Discord ID breach exposed: A data breach at Discord leaked government IDs submitted for age verification.
Bank of England flags AI bubble: UK’s central bank warns there’s a growing risk the AI investment bubble could pop.

From prototype to production, faster.
AI outcomes depend on the team behind them. Athyna connects you with professionals who deliver—not just interview well.
We source globally, vet rigorously, and match fast. From production-ready engineers to strategic minds, we build teams that actually ship. Get hiring support without the usual hiring drag.
*This is sponsored content

Friday Poll
🗳️ Would you ever turn to an AI chatbot for emotional support?

Weekend To-Do
Postman AI Agent Builder: Build agents that call verified APIs, embed them in workflows, and test interactions end-to-end.
Vertex AI Agent Builder: Turn processes into multi-agent experiences using Google’s toolset; experiment with orchestration and logic.
CodeSandbox AI Sandbox: Play in an interactive sandbox environment to experiment with AI-driven code, UIs, and generative prototypes.
Rate This Edition
What did you think of today's email?
