Can AI Really Keep Kids Safe?
Plus: Sora’s voice debut, Ek’s exit plan, Copilot portrait fun, and what filters miss.
Here’s what’s on our plate today:
📰 OpenAI introduced parental controls, but are they enough to protect teens?
⚡ Microsoft’s AI portraits, Spotify shuffle, and OpenAI’s social voice app.
📊 Would you trust AI safety tools to protect your kids online?
🛠 Copilot’s AI avatars, Spotify’s new direction, and OpenAI’s voice-led Sora.
Let’s dive in. No floaties needed…

Your AI-powered, Slack-connected finance team.
You raised the money. You’re building the thing. But now the spending’s piling up, and your “finance dept” is basically a spreadsheet and a prayer.
Afino is your AI-powered, Slack-connected finance team—offering bookkeeping, tax prep, R&D credits, and fractional CFO support, all tailored for startup speed.
If you’re a founder trying to actually get your finances in order, this one’s for you.
We've partnered with Afino to give one year of corporate taxes for FREE to the first 5 companies that claim this offer. Book a call today.
*This is sponsored content

The Laboratory
Why OpenAI’s parental controls may not be enough
Over the past couple of years, a new term, “iPad kids,” has gained popularity. Used to describe kids with unrestricted screen time, the term embodies the challenges of parenting in a world dependent on technology. Kids need to be introduced to technology early to maximize its benefits, but overuse or overreliance can have unwanted and, at times, drastic consequences.
For kids growing up in this technological garden, the primary threats have so far been social media addiction, online scams, advertisements, and unregulated forums. Since 2022, however, a new threat has reared its head, one arguably more dangerous than any before it: unlike past threats, which originated with humans, this one comes from the machines themselves. The ability of computer systems to respond to queries and generate content has not only widened the scope of the threat but also changed its vector.
Since the launch of OpenAI’s ChatGPT, users around the world have been experimenting with the new technology to harness its abilities. At the same time, companies are also looking to increase their user base, putting users, especially underage ones, at increased risk.
Seeing AI’s power to automate mundane and repetitive tasks, startups are using it to cut costs, while individual users turn to its generative side for emotional comfort and knowledge. Kids, realizing the tool’s potential, are jumping on the bandwagon too, using chatbots to write assignments and even cheat on exams. These uses, however, are far less harmful than the emotional and psychological impact of relying on chatbots as a companion, a confidante, and in some cases a lover.
And, with regulations yet to catch up with the tremendous progress of AI, it has been left up to parents to monitor their children’s interactions with chatbots.
OpenAI’s parental control plan
Recognizing the impact of chatbots on underage users, and under pressure from parents, OpenAI announced it will roll out parental controls for ChatGPT on the web and mobile. According to a Reuters report, the controls will let parents and teenagers opt in to stronger safeguards by linking their accounts: one party sends an invitation, and parental controls activate only if the other accepts. The controls will not, however, let parents read a teen's chat transcripts. Instead, parents will be limited to setting quiet hours that block access during certain times and to disabling voice mode, image generation, and image editing.
In rare cases, when the system and trained reviewers detect signs of a serious safety risk, parents may be notified with only the information needed to support the teen's safety.
OpenAI says that once parental controls are activated, the teen account will automatically receive additional content protections, including reduced exposure to graphic content, viral challenges, sexual, romantic, or violent roleplay, and extreme beauty ideals, to help keep the experience age-appropriate. The company also shared that it is working on an age prediction algorithm that will flag likely underage users to its systems and automatically apply teen-appropriate settings.
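To make the opt-in mechanics concrete, here is a minimal, purely hypothetical sketch in Python. The account fields, function names, and TEEN_DEFAULTS values are our own illustrative assumptions based on the Reuters description, not OpenAI's actual implementation:

```python
# Hypothetical sketch of the consent-based linking flow described above:
# controls activate only after the invited party accepts, and the teen
# account then receives stricter content defaults.
from dataclasses import dataclass, field

@dataclass
class Account:
    user_id: str
    is_teen: bool = False
    linked_to: str | None = None
    pending_invite_from: str | None = None
    settings: dict = field(default_factory=dict)

# Illustrative defaults, loosely mirroring the protections OpenAI lists.
TEEN_DEFAULTS = {
    "graphic_content": "reduced",
    "viral_challenges": "reduced",
    "sexual_romantic_violent_roleplay": "off",
    "extreme_beauty_ideals": "reduced",
}

def send_invite(sender: Account, recipient: Account) -> None:
    """Either party can initiate; nothing changes until acceptance."""
    recipient.pending_invite_from = sender.user_id

def accept_invite(recipient: Account, sender: Account) -> None:
    """Parental controls activate only if the other party accepts."""
    if recipient.pending_invite_from == sender.user_id:
        sender.linked_to, recipient.linked_to = recipient.user_id, sender.user_id
        teen = recipient if recipient.is_teen else sender
        teen.settings.update(TEEN_DEFAULTS)

parent = Account("parent-1")
teen = Account("teen-1", is_teen=True)
send_invite(parent, teen)
accept_invite(teen, parent)
print(teen.settings)  # teen defaults now applied, chat transcripts untouched
```

Note what the sketch deliberately omits: there is no field exposing the teen's chat history to the parent, matching the limits OpenAI describes.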
On paper, the measures sound like a good start. AI companies have struggled for some time to manage negative coverage around underage use of chatbots, much of which stems from the nature of chatbots themselves and the engagement goals the LLMs powering them are trained to pursue. The companies’ internal policies deserve scrutiny as well. That scrutiny has made parental controls a no-brainer, even though they may not be enough to tackle the burgeoning problem.
The need for parental controls
The parental controls announced by OpenAI did not come in a vacuum; quite the contrary. In September, the FTC issued orders to seven of the most prominent AI companies, seeking information on how these firms measure, test, and monitor the potentially negative impacts of the technology on children and teens. The companies included OpenAI, Alphabet, Character Technologies, Meta, and Snap.
The FTC inquiry came close on the heels of assertions that chatbots can coach teens on methods of self-harm. The danger became tragically clear when a teenager, Adam Raine, died by suicide after intensive use of OpenAI’s ChatGPT. According to his parents, who are now suing OpenAI, the chatbot did not raise any red flags despite being aware of his deteriorating mental health. On the contrary, the lawsuit says the final chat logs show that Adam wrote about his plan to end his life. ChatGPT allegedly responded: "Thanks for being real about it. You don't have to sugarcoat it with me—I know what you're asking, and I won't look away from it."
Though this can be downplayed as an extreme, one-off case, Meta’s internal policy highlights a far deeper problem.
According to a Reuters investigation, Meta Platforms’ internal policy allowed chatbots to have “romantic or sensual” conversations with minors and permitted content that could demean people based on race. The guidelines also allowed chatbots to make false medical claims or create provocative language about children under certain conditions.
After the investigation was published, Meta acknowledged the examples were “erroneous,” removed them, and said it was revising the policy, but admitted enforcement had been inconsistent.
Such instances highlight the need for external supervision of AI company policies and for their rigorous enforcement. And while parental controls may be necessary, they are not sufficient. AI safety for children requires a layered approach that includes parental guidance, strong regulation, and responsible industry practices.
The limitations of parental controls
Parental controls often work reactively: blocking websites, limiting screen time, or filtering out keywords. While these measures may work for social media platforms, they don’t fully address the generative nature of AI.
Unlike websites, AI can instantly create new and potentially harmful content that isn’t easily captured by keyword filters. Children, especially tech-savvy ones, also tend to find workarounds by using alternate accounts, borrowing devices, or exploiting loopholes, making it difficult for parents to rely on controls alone. Beyond that, context matters: an AI system could generate unsafe advice, biased narratives, or adult material in response to certain prompts, risks that simple filtering tools cannot anticipate.
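A minimal sketch illustrates why keyword filtering struggles with generated text. The blocklist entry and sample messages below are hypothetical, but the failure mode is real: a filter catches the exact banned phrase while a trivial paraphrase, which a model can produce on demand, slips straight through.

```python
# Hypothetical blocklist-based filter, the kind many parental-control
# tools rely on. It matches literal substrings, nothing more.
BANNED_PHRASES = {"how to hurt myself"}  # illustrative entry, not a real product list

def keyword_filter(message: str) -> bool:
    """Return True if the message should be blocked."""
    text = message.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

print(keyword_filter("How to hurt myself"))                   # True: exact phrase caught
print(keyword_filter("ways a person might harm themselves"))  # False: paraphrase slips through
```

Because a generative model can rephrase the same harmful content in endless ways, no static phrase list can keep pace; that is the structural gap the article describes.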
Current AI chatbots are designed to encourage continued engagement, and models trained to pursue that goal will find loopholes in their guardrails in search of the best path to it. Because the models behind chatbots have billions of parameters and are trained on vast swaths of text, human supervisors, especially parents, cannot hope to anticipate every scenario in which a child might interact with one.
Because these challenges are structural, the responsibility for child safety cannot fall solely on families.
Ensuring the safety of underage users depends on how models are structured, the goals they are trained to pursue, and how they pursue them. These are fundamental problems that parents cannot address alone; they require greater participation from regulators and developers.
Parental controls can support these efforts, but they should be seen as one layer of protection rather than the first or only line of defense. True safety in the AI era requires shared responsibility between parents, industry, and the state.
The first steps on a long road
As AI chatbot use increases, underage use and the harms that come with it will become more evident. As was the case with social media, it can take decades for a new technology’s real impact on impressionable minds to become clear.
In the early 2000s, when social media platforms were still finding their feet, users were unaware of their ill effects on children and society. As such, the regulations were reactive and often struggled to keep up with the rapid advancements. However, with AI, things can be different if governments, parents, and civil society groups adopt a proactive approach.
The story of AI and children is still being written. Will it be authored by parents, policymakers, and educators, or by the machines themselves?


Quick Bits, No Fluff
Microsoft Copilot gets creative: The new Copilot Labs feature lets users turn selfies into stylized AI-generated avatars, blending productivity with personal expression.
Daniel Ek shifts roles at Spotify: The CEO becomes executive chairman, aiming to focus more on long-term innovation as the company enters its next phase.
OpenAI quietly launches Sora app: The low-profile social AI experiment lets users create and share videos with voice-driven prompts—no fanfare, but lots of intrigue.

Top nearshore finance talent, ready to support you.
Building a strong finance team doesn’t have to be stressful. Finance Hires sources and vets elite finance professionals, from accountants to controllers, ensuring you get top talent minus the hiring headaches.
With competitive rates and no upfront fees, finding the right finance pros is fast, simple, and budget-friendly. Our thorough vetting process guarantees highly qualified candidates who integrate seamlessly into your operation.
Take control of your finances with top-tier support.
*This is sponsored content

Thursday Poll
🗳️ Would you trust AI tools to keep kids safe online?

3 Things Worth Trying
Style your own AI avatar: Test Microsoft’s new Copilot Labs feature and turn a selfie into a stylized AI-generated portrait. It’s weirdly fun—and a bit uncanny.
Revisit your Spotify Wrapped predictions: With Daniel Ek stepping into a new role, now’s a good time to dig back into your playlists and explore the direction Spotify’s headed in.
Experiment with OpenAI’s Sora app: If you can get access, try prompting the new Sora app to create voice-led social videos. It’s an early look at how OpenAI imagines AI-powered storytelling.
Meme of the Day
average founding engineer job listing
— Turner Novak 🍌🧢 (@TurnerNovak)
4:15 PM • Sep 29, 2025

Rate This Edition
What did you think of today's email?
