Can The U.S. Really Ban Bots?
Plus: Canva’s AI launch, OpenAI’s Sora changes, and Meta’s tax drama.
Here’s what’s on our plate today:
🧪 Can AI chatbots be banned? Or just driven underground?
🧾 Meta’s Q3 tax hit, Canva’s AI launch, OpenAI charges for Sora.
🧠 Roko’s Tip: Regulate with nuance, not blunt bans.
🗳️ Should AI bots for teens be banned?
Let’s dive in. No floaties needed…

Visa costs are up. Growth can’t wait.
Now, new H-1B petitions come with a $100K price tag.
That’s pushing enterprises to rethink how they hire.
The H-1B Talent Crunch report explores how U.S. companies are turning to Latin America for elite tech talent—AI, data, and engineering pros ready to work in sync with your HQ.
Discover the future of hiring beyond borders.
*This is sponsored content

The Laboratory
Why the U.S.’s new AI chatbot ban may backfire

U.S. Senator Josh Hawley spearheads the GUARD Act to shield children from the psychological and emotional harms of AI chatbots. Photo Credit: Foxnews.com.
Every business, whether it sells products or services, needs a customer base built over years, sometimes decades, of meticulous branding and marketing. For long-term success, businesses often look to young buyers because they represent both immediate spending power and long-term loyalty. Habits formed in youth often last for decades, and younger generations shape trends, social media discourse, and consumer preferences that influence older audiences. In essence, winning over the young helps brands define what’s desirable for everyone and build a durable business model.
Industries have long adapted to win over the young. In the 1960s, Ford and Volkswagen appealed to youth with the Mustang and the Beetle, while Nike and Adidas in the 1980s turned sneakers into cultural symbols through sports and music. In the 2000s, Apple did the same with the iPod and iPhone, and today Netflix, Spotify, and TikTok use personalization and viral content to captivate digital-native generations.
However, for the newest industry, still in its nascent phase, capturing a younger audience may be an uphill task, not because of problems with brand messaging or product delivery, but because of regulation.
Washington’s AI crackdown takes shape
Bipartisan legislation in the U.S. aims to limit access to AI chatbots, dubbed AI companions, for the youngest users, notably teenagers. The bill would require anyone who owns, operates, or otherwise enables access to AI chatbots to verify the age of their users and restrict access for those who are underage.
Proposed by U.S. Senators Josh Hawley and Richard Blumenthal, the GUARD Act aims to protect children from the psychological and emotional risks posed by AI chatbots. The bill comes on the heels of a Senate hearing led by Hawley, where parents of young men who self-harmed or died after using chatbots from OpenAI and Character.AI shared their experiences. Hawley has also investigated Meta after reports revealed its AI bots could engage in romantic or sexual conversations with minors.
The act broadly defines AI companions as any chatbot designed to simulate human-like emotional or conversational interaction, potentially covering systems from OpenAI, Anthropic, Replika, and Character.AI. It would require companies to verify users’ ages through government ID or other reliable methods, and would criminalize offering chatbots that promote or encourage sexual behavior or self-harm, with fines of up to $100,000.
Under the bill, chatbots would have to regularly remind users that they are not human and cannot provide medical, legal, or psychological guidance. A coalition of advocacy groups has endorsed the proposal but called for tougher rules targeting engagement-driven design.
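To make that disclosure rule concrete, here is a minimal sketch of what compliance could look like in code. The `ChatSession` wrapper and `generate_reply()` model call are hypothetical stand-ins, and the bill prescribes no implementation, so the reminder cadence and wording here are illustrative assumptions.

```python
# Illustrative sketch only: injecting the GUARD Act's periodic
# "I am not a human" disclosure into a chat loop. ChatSession and
# generate_reply() are hypothetical stand-ins, not any vendor's real API.

DISCLOSURE = (
    "Reminder: I am an AI system, not a human, and I cannot provide "
    "medical, legal, or psychological guidance."
)
REMIND_EVERY = 10  # assumed cadence; the bill sets no specific interval


def generate_reply(message: str) -> str:
    # Stand-in for a real model call; echoes for demonstration only.
    return f"(model reply to: {message!r})"


class ChatSession:
    def __init__(self) -> None:
        self.turns = 0

    def respond(self, user_message: str) -> str:
        self.turns += 1
        reply = generate_reply(user_message)
        # Disclose on the first turn and on every Nth turn thereafter.
        if self.turns == 1 or self.turns % REMIND_EVERY == 0:
            reply = f"{DISCLOSURE}\n\n{reply}"
        return reply


if __name__ == "__main__":
    session = ChatSession()
    print(session.respond("Are you a real person?"))
```

The text injection itself is trivial; the harder compliance question is enforcing the “no medical, legal, or psychological guidance” carve-out in the model’s actual behavior, not just in boilerplate.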
The bill is part of broader efforts to mitigate the impact of powerful AI systems on vulnerable users who are unable, or unwilling, to distinguish between a computer and a human companion.
The U.S. state of California has also enacted a similar law requiring AI companies to implement safeguards designed to detect and prevent self-harm and suicidal behavior among users. The law comes into effect in January 2026.
Both pieces of legislation have been prompted by an apparent lack of comprehensive self-regulation by AI labs, which have been accused of prioritizing user engagement over user safety.
When self-regulation isn’t enough
AI companies have tried to control teens’ use of their chatbots. However, most of those efforts came only after adverse impacts, including cases of suicide, were reported around the world.
In October 2025, Meta announced it would let parents disable their teens’ private chats with AI characters, after fierce criticism over the behavior of its flirty chatbots. Meta’s announcement did not come in a vacuum.
Right before the announcement, a Reuters investigation revealed that Meta’s internal policies on chatbot behavior permitted its AI creations to engage a child in conversations that are romantic or sensual, generate false medical information, and help users argue that Black people are “dumber than white people.”
OpenAI also announced parental controls for ChatGPT on the web and mobile following a lawsuit by the parents of a teen who died by suicide after the AI startup's chatbot allegedly coached him on methods of self-harm.
Meanwhile, Google’s Gemini was rated high risk by the nonprofit Common Sense Media, despite additional safety measures for users under the age of 13. The review found that Gemini appears to be essentially the adult model with superficial kid-friendly features rather than a platform designed from the ground up for younger users.
These self-imposed measures came amid lawsuits and growing distrust of the AI labs operating chatbots.
Research by the MIT Media Lab and OpenAI also suggests that short-term voice interactions are linked to positive emotions, while prolonged or emotionally dependent use can lead to negative outcomes such as loneliness and overreliance.
However, regulating AI chatbot use by underage users is not as simple as passing one law. In the past, courts have halted enforcement of U.S. state laws barring most social media platforms from allowing youth to have accounts, citing the First Amendment’s protections on free speech. Then there is the question of AI in education, which is being promoted as part of federal policy. As such, a ban could prove counterproductive in a space where boosting AI literacy and skills among American youth is high on the agenda.
Therefore, a blanket ban, while helpful in the short run, may not be enough to limit the adverse effects of chatbots.
Why blanket bans rarely work
Even if the legislation were to pass and impose a hard ban, enforcement would be an uphill task.
Online age verification is not an easy job, as was evident when governments pushed porn sites to limit access by underage users. In a remarkable example of how such bans fail, Aylo, the online pornography giant that operates Pornhub, RedTube, and YouPorn, pulled its content in France and replaced it with a lobbying message. The company accused the government of endangering consumers’ privacy by imposing intrusive and unreliable age-verification mechanisms.
And there may be truth in its message. Verifying user age online is often a cumbersome and data-intensive process that involves uploading sensitive details like personal information and government IDs. That makes such databases a lucrative target for threat actors and a security nightmare for the companies that operate and secure them.
The compliance cost problem
Even if databases are secure, users often find novel ways to access services despite age restrictions. When U.S. states implemented age-verification requirements on adult sites, there was a marked increase in the use of virtual private networks (VPNs), meaning users were simply sidestepping verification by appearing to connect from other locations.
While blanket bans are low on efficacy, they can be a big drain on businesses that struggle with compliance and increased operational costs.
With AI, there are new avenues for age verification. OpenAI has said it is working on an algorithm that would allow its systems to estimate a user’s age. However, as with other such systems, it may not be foolproof.
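For a sense of how inference-based gating might work, here is a minimal sketch. The `predict_age()` classifier, confidence threshold, and access tiers below are all assumptions for illustration; OpenAI has not published its actual method.

```python
# Illustrative sketch only: gate access based on a predicted age, and
# fall back to document-based verification when the model is unsure.
# predict_age() and the thresholds below are hypothetical assumptions.

from dataclasses import dataclass


@dataclass
class AgeEstimate:
    age: float         # predicted age in years
    confidence: float  # 0.0 (no idea) to 1.0 (certain)


def predict_age(signals: dict) -> AgeEstimate:
    # Stand-in for a real classifier trained on account and behavioral
    # signals; here we just echo a self-reported age with low confidence.
    reported = signals.get("self_reported_age", 0)
    return AgeEstimate(age=float(reported), confidence=0.5)


def gate(signals: dict) -> str:
    est = predict_age(signals)
    if est.confidence < 0.8:
        return "verify_id"              # uncertain: escalate to a stronger check
    if est.age < 18:
        return "restricted_experience"  # confident minor: limited mode
    return "full_access"                # confident adult


if __name__ == "__main__":
    print(gate({"self_reported_age": 15}))  # -> "verify_id" (low confidence)
```

Note the design choice: when the classifier is unsure, the system escalates to a heavier check rather than guessing, which is exactly where the privacy and data-security costs discussed above come back in.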
Even if AI labs can successfully implement age verification and restrict underage use, it will add to the already rising costs for enterprises that embed chatbots in education and customer-support roles. These enterprises will face new role-based controls and audit burdens.
The real solution, then, may be found in collaboration between businesses and the state, not in one-sided bans or guardrails.
Beyond binaries, finding the middle ground
Proponents of the ban argue that chatbots can simulate intimacy, escalate self-harm risks, and manipulate vulnerable teens. There is precedent for each of these harms, so they see a bright-line ban as the fastest way to prevent worst-case outcomes.
At the other end of the spectrum, some groups oppose sweeping restrictions, calling them unconstitutional and counterproductive, arguing that they could drive teens to riskier services and undermine AI literacy.
Neither stance, however, accounts for active cooperation between businesses and legislators to build guardrails that find a middle ground.
While regulators may be pushing for a ban on underage use of AI chatbots, the real solution may lie in closely observing and learning from Australia’s rollout of an all-out social media ban for teens. It will provide a real‑world test for platform‑side age inference at a national scale.
Ultimately, the tug-of-war over regulating underage access to AI chatbots reveals a deeper truth: just as past industries competed to capture the loyalty of young consumers, AI companies are now vying for the same demographic, but this time the stakes are higher.
The young still represent the future of markets, culture, and innovation, but they are also the most vulnerable to the psychological and ethical risks posed by emotionally intelligent machines. Unless regulators and businesses can collaborate to balance protection with empowerment, the U.S.’s attempts to shield its youth from AI’s dangers may end up alienating the very generation whose trust will decide the technology’s long-term success.


Roko Pro Tip
💡 AI bans may feel safe, but they rarely solve the root issue. Before building ‘protective’ barriers, ask: Are we empowering youth responsibly, or just deflecting blame?

What if you could lower your tax bill and stack Bitcoin at the same time?
By mining Bitcoin with Blockware, you can. Bitcoin miners qualify for 100% Bonus Depreciation. Every dollar you spend on mining hardware can be used to offset income in a single tax year.
Blockware's Mining-as-a-Service enables you to start mining Bitcoin without lifting a finger.
You get to stack Bitcoin at a discount while also saving big come tax season.
*This is sponsored content

Bite-Sized Brains
Canva unveils AI design model: Adds new image, video, and editing tools to its suite.
OpenAI now sells extra Sora credits: $4 per bundle, with free generations soon to shrink.
Meta takes $1.5B tax hit: Trump calls it a “big, beautiful bill” on Truth Social.
Monday Poll
🗳️ Should underage access to AI chatbots be banned?

Meme Of The Day
My robot after 2 days of filling in for me on Google Meet calls
— Chris Bakke (@ChrisJBakke)
4:17 PM • Oct 30, 2025

Rate This Edition
What did you think of today's email?




