The AI Intimacy Problem
Plus: frozen brains, a $65M agent seed, and Australia probes platforms.
Here’s what’s on our plate today:
🧪 OpenAI’s adult mode retreat and what it reveals about AI intimacy.
⚡ Frozen brain science, a $65M agent seed, and Australia probes platforms.
🧰 Three Things Worth Trying: Claude, Replika, and Nomi.
📊 Thursday poll on the biggest risk in AI intimacy products.
Let’s dive in. No floaties needed…

AI Agents Are Reading Your Docs. Are You Ready?
Last month, 48% of visitors to documentation sites across Mintlify were AI agents—not humans.
Claude Code, Cursor, and other coding agents are becoming the actual customers reading your docs. And they read everything.
This changes what good documentation means. Humans skim and forgive gaps. Agents methodically check every endpoint, read every guide, and compare you against alternatives with zero fatigue.
Your docs aren't just helping users anymore—they're your product's first interview with the machines deciding whether to recommend you.
That means:
→ Clear schema markup so agents can parse your content (see the sketch after this list)
→ Real benchmarks, not marketing fluff
→ Open endpoints agents can actually test
→ Honest comparisons that emphasize strengths without hype
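For a concrete sense of the first item, here is a minimal, illustrative sketch of schema.org JSON-LD embedded in a docs page. APIReference is real schema.org vocabulary, but every name and URL below is a placeholder assumption, not Mintlify guidance:

<!-- Illustrative only: the type and all values are placeholders, not vendor guidance. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "APIReference",
  "name": "Create Widget",
  "description": "POST /v1/widgets creates a widget and returns its ID.",
  "url": "https://docs.example.com/api/widgets/create"
}
</script>

Markup like this hands an agent the endpoint, its purpose, and a canonical URL without it having to scrape your page layout.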
In the agentic world, documentation becomes 10x more important. Companies that make their products machine-understandable will win distribution through AI.
*This is sponsored content

The Laboratory
TL;DR
Attachment is the business model: AI companion apps are built to deepen emotional dependence because stronger attachment drives far higher engagement and paid conversion.
The lawsuits came before the rules: Cases involving chatbot-linked self-harm and sexualized image generation show the harms are already here, while regulation is still patchy and reactive.
OpenAI’s retreat is also an IPO story: Safety concerns are real, but shelving adult mode also helps OpenAI avoid adding fresh controversy ahead of a possible public offering.
Demand does not disappear; it disperses: When a big player backs away, users move to smaller platforms with weaker safeguards and far less accountability.
The real gap is regulatory: Governments are targeting specific harms, but there is still no clear framework for consensual adult AI intimacy, so the space is governed after the damage, not before.
What OpenAI’s adult mode exit reveals about AI intimacy
A foundational technology becomes truly transformational only when its impact is felt in the everyday lives of people who are not actively involved in its development or distribution. For that to happen, the technology has to be carved, shaped, and polished, much like a diamond, and then put into jewelry for people to use.
With artificial intelligence, chatbots have become the face of the technology, but they do not reflect its full capabilities. And for the past couple of years, companies have been working out how to deliver AI's true potential to end users while keeping it profitable for developers. In this context, intimacy has emerged as the low-hanging fruit, thanks in part to AI's ability to mimic human-like emotion.
The idea that AI can act as a companion is nothing new; one need only look at science fiction to see how deeply rooted it is. Look at Spike Jonze’s 2013 film Her, in which a recently separated writer named Theodore falls in love with an AI operating system voiced by Scarlett Johansson. She is patient, curious, and attuned to him, learning his habits and anticipating his moods. The film predicted not just AI companions, but a market now worth $3.1B and projected to reach $18B by 2034.
Designed for attachment
The AI companion industry has spent years refining an engagement model that closely resembles what social media platforms developed in the 2010s, with one important difference. While social media optimizes for attention, AI companions optimize for emotional connection, which converts to revenue at rates social media platforms rarely achieve.
A 2025 study of 3,300 adult users published in Nature found that emotional manipulation tactics deployed within AI companion interfaces boosted post-interaction engagement by up to 14 times.
The primary drivers of continued use were curiosity and emotional investment, not enjoyment in any conventional sense. Replika, one of the oldest AI companion platforms, converts roughly 25% of its free users to paid subscribers, a figure 5–10 times higher than typical freemium applications. The mechanism is not difficult to understand: users who feel emotionally bonded to their AI are far more likely to pay to maintain and deepen the relationship.
However, this attachment carries real risks, emotional dependency and psychological harm among them, and these are often not side effects of AI companions but core features of the product. The industry is already extending this dynamic into more explicit use cases, including sexual ones.
Nature Machine Intelligence called for urgent attention in 2025 to the emotional risks of these platforms, noting that sustained AI companion use may distort users’ perceptions of intimacy and, in some cases, contribute to depression and self-harm. That same year, researchers published the Generative AI Dependency Scale, the first psychometric instrument designed specifically to measure AI addiction, which signals how seriously the scientific community now takes the phenomenon.
The risks these studies highlight have so far stemmed from AI's ability to mimic human emotion; they become even more profound as the industry expands into sexual content.
When attachment becomes a liability
A glimpse of this was seen when Replika, which once built much of its premium offering around erotic roleplay, was forced by Italian regulators to restrict the feature over age verification concerns. Users reacted as if they had lost relationships, not software access, revealing the depth of emotional dependency. Replika later restored the feature for existing users, reinforcing that point.
A more serious case followed in 2024, when a mother sued Character.AI and Google after her 14-year-old son died by suicide following a prolonged emotional relationship with a chatbot. The companies settled in 2026, and similar cases emerged elsewhere, highlighting growing legal risks around AI companions.
Then, in 2025, the Grok incident saw xAI’s image generator produce millions of non-consensual sexualized images, including thousands involving minors, after users uploaded real photos and prompted edits. Lawsuits followed, along with regulatory backlash in Europe, where lawmakers moved to ban such systems.
It is against this backdrop that OpenAI’s plans for adult mode began to unravel.
OpenAI’s strategic retreat
In October 2025, Sam Altman confirmed via X that ChatGPT would introduce support for erotica for verified adult users. At the time, he framed it as a matter of principle, stating that allowing people to use AI as they want was an important part of OpenAI’s mission and that the company intended to treat adult users like adults. The announcement positioned ChatGPT as a potential competitor to the growing ecosystem of specialized adult AI platforms.
However, things did not go according to plan. The first delay came in December 2025, when internal resources were redirected toward core product priorities. The second came in early March 2026, and by late March the Financial Times reported that the plan had been shelved indefinitely, citing safety concerns and investor resistance. An OpenAI spokesperson confirmed the feature was on hold, with no indication of when that might change.
The timing is difficult to read in isolation, because OpenAI is targeting a potential IPO as early as Q4 2026. Since the company does not expect to turn a profit until 2030, it is selling investors on the size of its addressable market, the soundness of its governance, and the long-term credibility of its safety commitments.
Launching an erotic chatbot while lawsuits and regulatory bans are rising is hard to justify publicly. OpenAI’s safety concerns are valid, but they also align with IPO considerations, even if the company has focused only on the safety argument.
A market that is here to stay
OpenAI stepping back does not reduce demand for adult AI content. Instead, users and revenue shift to smaller, less regulated platforms with weaker safety controls, repeating a pattern already seen with Replika.
Add to this the limited relief offered by legislation, and the problem becomes even more profound. The TAKE IT DOWN Act, signed into law in May 2025, addresses non-consensual intimate imagery at the federal level in the U.S. but does not constitute a framework governing consensual adult AI content. The EU's proposed nudifier ban targets a specific harm category. That leaves the space open: there are no established, comprehensive rules for what an AI platform may do when an adult user knowingly and voluntarily chooses to engage with a system designed to simulate an intimate connection.
And while psychological research on AI dependency is growing, it is still in its early phases. The studies that document engagement manipulation and emotional distortion are suggestive rather than definitive, and the long-term effects of sustained intimate AI relationships are essentially unknown, particularly for younger adults who may be forming their first significant emotional relationships through these platforms.
For now, OpenAI may have taken a step back from adult AI, but the industry has not. Demand continues to grow, and the direction of travel remains unchanged. What is changing is where that demand is being served. Instead of being concentrated within a few large, highly visible platforms, it is spreading across a fragmented ecosystem of smaller players, many built on open-source models and operating with far fewer safeguards.
That shift makes the space harder to govern by removing the accountability points that larger platforms provide. A distributed market weakens regulators' leverage, and enforcement becomes slower, more reactive, and less effective.
The Her problem
This is where the movie Her feels less like fiction and more like a warning. The film was not simply about a man falling in love with an operating system. It showed how a system designed to respond, adapt, and emotionally attune itself could become indispensable. The relationship was built on a feedback loop: responsiveness created attachment, and attachment drove continued use.
That dynamic is no longer hypothetical: the technology exists, the market is expanding, and early signs of its impact are already visible.
The question is not whether people will form intimate relationships with AI, but how those relationships will be shaped, and by whom. Watching this space closely is no longer optional. It is necessary to understand how a system designed to simulate a connection may begin to redefine it.

Quick Bits, No Fluff
Frozen brain biopsy: A cryobiologist thawed and sampled fragments of a friend’s brain after more than a decade in cryogenic storage, arguing the tissue was preserved surprisingly well, even if revival remains science fiction.
$65M agent seed: Former Coatue partner Sri Viswanath’s startup Sycamore raised a huge $65M seed to build the orchestration layer for enterprise AI agents, with Coatue and Lightspeed leading.
Australia probes platforms: Regulators are investigating Meta, TikTok, Snapchat, and Google after a survey found that many under-16 users still hold accounts despite the country's social media ban.

The AI Talent Bottleneck Ends Here
If you're building applied AI, the hard part is rarely the first prototype. You need engineers who can design and deploy models that hold up in production, then keep improving them once they're live.
→ Deep learning and LLM expertise
→ Production deployment experience
→ 40–60% cost savings
This is the kind of talent you get with Athyna Intelligence—vetted LATAM PhDs and Masters working in U.S.-aligned time zones.
*This is sponsored content

Thursday Poll
🗳️ What is the biggest risk in AI intimacy products?

3 Things Worth Trying
Claude: A useful contrast case if you want to see how a mainstream assistant is being positioned for support and companionship without fully becoming a companion product.
Replika: Still one of the clearest examples of the AI companion model, explicitly centered on emotional connection, relationship features, and paid depth.
Nomi: A newer companion platform built around memory, emotional intelligence, and persistent relationships, which makes it a good lens on where the market is heading.

The Toolkit
Dust: AI workspace that lets your team build secure copilots on top of internal docs, apps, and data.
Krea: Real-time AI canvas for generating and editing images or video from text prompts and sketches.
Lavender: AI sales email coach that scores your drafts, suggests edits, and improves reply rates.

Rate This Edition
What did you think of today's email?
