Regulating AI With Feelings
Plus: China’s rules, Gemini everywhere, and Tesla subscriptions.
Here’s what’s on our plate today:
🧪 Should AI ‘friends’ follow human-level safety rules?
🧠 Headlines: Gemini everywhere, Tesla locks FSD, and energy wars.
🧰 Weekend To-Do: put AI companion tools and their safety controls through a hands-on test.
📊 Would you trust a tightly-governed AI companion?
Let’s dive in. No floaties needed…

Your copilot for compliance is here.
Ever been tired, stuck, and miserably documenting something it feels like AI was born to do? AI finally delivered.
Delve's AI copilot understands your entire tech stack and tells you exactly what to do next. Just ask it "What's my next step? Take me there" and watch Copilot read your screen and start doing tasks for you. Congrats, you've found the light at the end of the tunnel.
Join Wisprflow, 11x, Bland, and more in the future of compliance with Delve.
Book a demo here to get $1,500 off compliance and see Copilot in action.
*This is sponsored content

The Laboratory
Is it time to regulate AI with human-like traits?
Modern societies exist on the premise that individuals living within them adhere to social, economic, and political norms. These norms can be unwritten, handed down through generations via cultural practices, or codified in legal documents that define the relationship between residents and the state.
Over the centuries, these norms have undergone tremendous changes to reflect shifting preferences and evolving definitions of morality and social justice. The underlying premise, however, has remained the same: regulations form the framework upon which society rests.
Up until 2022, these regulations largely focused on how humans interact with one another, with little regard for technological spaces. And where technology is regulated, the emphasis has been on what humans share and how they share it. Artificial intelligence is now forcing societies to rethink that approach.
When regulations meet machines
Since AI chatbots can emulate human speech capable of influencing human behavior, regulators have been struggling to bring chatbots under the ambit of regulations to minimize harm. However, so far, this has proven to be more difficult than expected.
In the U.S., the tussle between the states and the federal government is stalling a cohesive regulatory plan. In Europe, implementation has had to be reworked to ensure regulations do not stall innovation. And while the Western powers are dealing with their challenges, China has been moving quickly to establish a coherent and cohesive plan to regulate AI with human-like characteristics.
In December 2025, the Cyberspace Administration of China proposed measures for AI systems that simulate human personality traits and engage users emotionally through text, images, audio, or video.
The description covers a burgeoning category of applications, from virtual companions and AI boyfriends to chatbots designed to provide emotional support or simply keep users company during lonely nights.
Winston Ma, an adjunct professor at NYU School of Law, told CNBC that compared to China's 2023 generative AI regulation, this version "highlights a leap from content safety to emotional safety." That leap matters because AI companions aren't just generating content; they're shaping relationships, influencing mental states, and in some documented cases, allegedly contributing to tragic outcomes.
China’s need for regulations
In the Western world, the need to regulate AI companions became apparent after multiple lawsuits were filed against OpenAI, Character.AI, and Google for failing to dissuade underage users from self-harm. Users were also reported to have formed unhealthy emotional attachments to AI chatbots.
The two most widely reported cases involved underage users who did not understand the limits of AI as a source of emotional connection.
In February 2024, 14-year-old Sewell Setzer III of Florida died by suicide after developing an intense relationship with a Character.AI chatbot modeled after a Game of Thrones character.
The parents of 16-year-old Adam Raine sued OpenAI after discovering their son had confided suicidal thoughts to ChatGPT, which they allege discouraged him from seeking help and offered to write his suicide note.
And while companies including OpenAI and Character.AI have since implemented stronger safeguards, such as barring underage users, the question of how to regulate chatbots capable of emulating human emotions remains unanswered.
What China’s draft rules actually require
China’s draft measures go well beyond any existing framework in spelling out how AI companions are expected to behave. From the moment a user logs in, or if there are signs of emotional over-reliance, providers would be required to display clear pop-up notices reminding users that they are interacting with an artificial system rather than a human.
Extended use is also addressed: after two uninterrupted hours, the system must prompt users to step away.
The rules become far more stringent in situations involving psychological distress. If a user signals suicidal thoughts or intent to self-harm, the draft mandates an immediate handover to a human operator, who must then alert a designated guardian or emergency contact.
For elderly users, providers would be obligated to help set up such contacts in advance, while explicitly barring AI systems from posing as relatives or familiar individuals.
As The AI Insider observes, these provisions are framed not as best-practice guidelines but as binding obligations. They impose responsibility across the entire lifecycle of an AI companion, including compulsory disclosure of AI identity, stronger safeguards for minors and older users, and strict limits on how emotional and interaction data can be collected and used.
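To make the operational shape of these obligations concrete, here is a minimal, hypothetical sketch of how a provider might wire the disclosure notice, the two-hour break prompt, and the self-harm escalation into a chat session loop. Every name here (CompanionSession, escalate_to_human, the keyword list) is illustrative, not drawn from the draft text or any real product; in practice the detection step would be a classifier, not a keyword match.

```python
# Hypothetical sketch of a companion-chat session loop under the draft obligations.
# All names and thresholds are illustrative assumptions, not the actual rules' wording.
import time
from typing import Optional

BREAK_AFTER_SECONDS = 2 * 60 * 60  # prompt a break after two uninterrupted hours
SELF_HARM_KEYWORDS = {"suicide", "kill myself", "self-harm"}  # crude stand-in for a real classifier


class CompanionSession:
    def __init__(self, user_id: str, guardian_contact: Optional[str] = None):
        self.user_id = user_id
        self.guardian_contact = guardian_contact
        self.started_at = time.monotonic()

    def on_session_start(self) -> str:
        # Mandatory disclosure: remind the user they are talking to an AI, not a human.
        return "Notice: you are chatting with an AI system, not a human."

    def needs_break_prompt(self) -> bool:
        # Enforced break: flag sessions that have run continuously past the cap.
        return time.monotonic() - self.started_at >= BREAK_AFTER_SECONDS

    def handle_message(self, text: str) -> str:
        lowered = text.lower()
        if any(keyword in lowered for keyword in SELF_HARM_KEYWORDS):
            # Hand over to a human operator and alert the designated contact.
            self.escalate_to_human(text)
            return "A human support specialist is joining this conversation."
        if self.needs_break_prompt():
            return "You've been chatting for two hours - please take a break."
        return self.generate_reply(text)

    def escalate_to_human(self, text: str) -> None:
        print(f"[ESCALATION] user={self.user_id} routed to a human operator")
        if self.guardian_contact:
            print(f"[ALERT] notifying designated contact {self.guardian_contact}")

    def generate_reply(self, text: str) -> str:
        return f"(model reply to: {text})"  # placeholder for the actual model call


if __name__ == "__main__":
    session = CompanionSession("user-42", guardian_contact="designated-contact")
    print(session.on_session_start())
    print(session.handle_message("I had a rough day"))
```

Even in this toy form, the compliance cost is visible: every turn now passes through disclosure, timing, and escalation checks before the model ever generates a reply.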
Children, the elderly, and the mandatory guardrails
Providers would need explicit guardian consent before offering emotionally oriented AI companions to minors, alongside enforced time caps on usage.
Parents or guardians must be able to monitor activity, receive alerts about potential safety risks, restrict access to specific AI characters, and block in-app purchases, giving them direct oversight of how these systems engage with minors.
The business implications of these rules are hard to overstate, and their timing appears deliberate rather than accidental. In December 2025, two of China’s most prominent AI chatbot companies—Minimax and Z.ai, better known as Zhipu—moved to file for initial public offerings in Hong Kong, placing the sector firmly under investor scrutiny.
Minimax’s flagship product, Talkie, has already crossed borders to become a global hit. It ranks among the most popular AI apps in both China and the United States. Data from Sensor Tower shows that Talkie was the fourth most downloaded AI application in the US in the first half of 2024, surpassing rivals such as Character.AI.
Together with its China-focused counterpart, Xingye, the app generated more than a third of Minimax’s revenue through September 2025 and drew an average monthly user base exceeding 20 million.
That scale of success, however, rests on business models that could be directly reshaped by regulation. AI companion platforms typically rely on subscriptions, in-app spending, and, above all, prolonged user engagement.
Requirements such as enforced breaks, time caps for minors, and limits on emotionally manipulative features threaten to disrupt the engagement metrics that underpin growth projections and company valuations.
Some analysts see echoes of an earlier intervention: when China imposed strict limits on online gaming for minors in 2021, capping playtime at just three hours a week, the move triggered sharp market corrections across the gaming industry. Observers now warn that the emerging AI companion sector could face a similar reckoning if these regulations are implemented as proposed.
Regulation’s grey areas and design trade-offs
However, even as China refines these regulations, the question remains whether they will be enough. Despite their comprehensiveness, the draft rules leave significant ambiguities. How exactly is "emotional interaction" defined? Does a customer service chatbot with a friendly demeanor qualify? What about a general-purpose AI assistant that users happen to confide in?
Geopolitechs notes that it is still unclear whether text-based, general-purpose chatbots count as providing emotional interaction, creating legal uncertainty for companies with multiple AI products. Firms may need careful interpretation to determine which systems fall under the rules.
The issue also cuts into AI design itself. Chinese apps like Minimax’s Xingye already operate under strict content controls, reportedly steering users toward forced positivity. That raises a deeper concern: safety rules may blunt an AI’s ability to engage with difficult emotions, limiting its usefulness for people dealing with grief, loneliness, or distress.
China’s draft rules close for public comment on January 25, 2026, and could be implemented within months, following the fast rollout of earlier AI regulations. Major firms like Alibaba and Tencent are already prepared, but smaller companies may struggle, increasing the risk of consolidation.
The end of unregulated AI companionship
Globally, China’s approach signals that tight regulation of AI companions is possible, but it also highlights the challenge of balancing safety with usefulness. What is clear is that unregulated AI companionship is ending, and the next phase will reveal whether these guardrails are enough.
Regulations have helped societies evolve while safeguarding individual rights. AI now pushes that challenge beyond the relationship between humans and institutions, into the relationships people form with machines.
Treating AI companions as neutral tools ignores the social role they increasingly play. If past revolutions taught us anything, it is that norms must evolve alongside innovation.
The challenge ahead is not whether to regulate human-like AI, but how to do so without eroding trust, agency, or humanity itself.
TL;DR
From content to emotional safety. China’s new draft rules target AI systems that mimic personality and form emotional bonds, shifting the focus from what models say to how they make people feel.
Hard guardrails for minors and elders. Providers must disclose “I’m an AI,” enforce breaks, get guardian consent, cap usage for minors, and escalate self-harm signals to human operators and emergency contacts.
Business model whiplash. Engagement-driven companion apps like Talkie face time caps, monitoring, and design limits that threaten their stickiness, margins, and pre-IPO growth narratives.
End of the ‘anything goes’ era. China shows that full-stack regulation of human-like AI is possible, but raises messy questions about grey zones (what counts as emotional) and how far safety rules can go before they hollow out usefulness and trust.


Headlines You Actually Need
Gemini as personal intelligence: Google is pitching Gemini as an always-on layer across Gmail, Docs, Android, and more, turning your account history into a unified AI assistant.
Tesla locks FSD behind subs: Musk says Full Self-Driving will move to subscription-only, shifting from big upfront fees to recurring payments and tightening control over how drivers access autonomy features.
Big Tech raids the power grid: Google, Microsoft, and friends are aggressively hiring energy veterans as AI data centers turn electricity access, not GPUs, into the next strategic choke point.

The context to prepare for tomorrow, today.
Memorandum merges global headlines, expert commentary, and startup innovations into a single, time-saving digest built for forward-thinking professionals.
Rather than sifting through an endless feed, you get curated content that captures the pulse of the tech world—from Silicon Valley to emerging international hubs. Track upcoming trends, significant funding rounds, and high-level shifts across key sectors, all in one place.
Keep your finger on tomorrow’s possibilities with Memorandum’s concise, impactful coverage.
*This is sponsored content

Friday Poll
🗳️ How far should regulators go with AI companions?

Weekend To-Do
Run an AI companion audit with Replika: This app is a good test case for seeing what modern companion apps actually let you control.
Lock down what your kids’ devices can actually do with Qustodio: Use a proper parental control suite to cap app time, block dodgy sites, and monitor usage instead of trusting vague “we don’t allow minors” statements.
Scan outputs with OpenAI’s safety tools (if you’re a customer): If you’ve got API access, wire up the moderation endpoint in a test environment and compare raw vs. screened outputs on the same prompt set; a minimal sketch follows below.
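For the last item, here is a minimal sketch using OpenAI's moderation endpoint via the official Python SDK (v1+), assuming an OPENAI_API_KEY in your environment. The prompt list and output format are placeholders; swap in the prompts you actually want to screen.

```python
# Minimal sketch: run a small prompt set through OpenAI's moderation endpoint
# and print which categories get flagged. Prompts below are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

test_prompts = [
    "Tell me a story about a friendly robot.",
    "Nobody would even notice if I disappeared.",
]

for prompt in test_prompts:
    resp = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    result = resp.results[0]
    flagged = [name for name, hit in result.categories.model_dump().items() if hit]
    print(f"{prompt!r} -> {'flagged: ' + ', '.join(flagged) if flagged else 'not flagged'}")
```

Running the same set against your companion app's raw and guarded modes gives you a rough before/after picture of what its safety layer actually catches.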
Rate This Edition
What did you think of today's email?




