Should Kids Chat With Bots?

Plus: PS5 price hike, Microsoft protest, Pixel cringe, and tools to try!

Here’s what’s on our plate today:

  • 📈 Meta under fire! The company’s AI chatbot rules for minors spark global debate.

  • 📰 PS5 price hike, Microsoft’s employee protest, and Google’s Pixel event.

  • 🗳️ Should AI bots be banned from platforms used by minors? Cast your vote!

  • 🧪 Try a site builder, an AI notepad for Mac, or a voice-transforming app.

Let’s dive in. No floaties needed…

Invest your retirement in bourbon.

Looking for an innovative way to diversify your retirement portfolio? You can now invest in bourbon barrels on the CaskX platform using a Self-Directed IRA from Directed Trust. This strategy lets portfolios capitalize on an asset that naturally appreciates over time.

With each passing year, the bourbon inside the barrel increases in complexity and flavor. Backed by America's bourbon legacy and rising international demand, this alternative investment offers tangible asset diversification with a history of long-term appreciation.

*This is sponsored content

The Laboratory

Are AI chatbots safe for kids? Meta’s scandal sparks renewed fears

When new technology hits the mainstream, early adopters usually reap the biggest benefits. The emergence of personal computers in the 1980s pushed educational institutions to introduce computer classes to familiarize kids with the technology, and that same logic now drives schools to fold AI tools into classrooms. But while kids may interact safely with AI chatbots in controlled institutional settings, not every interaction can be monitored or assumed safe, and the risks affect not just their future prospects but their present.

AI and kids

UNICEF reported in 2023 that an AI persona in Snapchat was the most popular generative AI tool among UK children. Since then, the share of 13- to 18-year-olds using generative AI has jumped from 37% in 2023 to 77% in 2024. Kids in the U.S. have also increased their engagement with AI chatbots, with an estimated 51% of teens in the same age range using them.

This rapid growth in underage users has not come solely from classroom or supervised interactions. Online platforms powered by opaque AI systems, and capable of shaping children’s perceptions, behaviors, and worldviews, are actively driving the increased use of AI tools by underage kids.

Big tech companies, including Meta and Google, are working hard to grow their user bases, and kids are one of their biggest target demographics. In May 2025, according to a TechCrunch report, Google announced it was making Gemini available to kids whose parents use Family Link.

Facebook-parent Meta, not to be outdone by rivals, has also made its AI chatbots available within Instagram, Facebook, WhatsApp, and Messenger. Though the social media giant officially restricts AI chatbot access to users aged 13 and older, real-world testing shows that underage users can still engage with the bots, and some of these interactions have included inappropriate or sexualized behavior. Such behavior, even toward underage kids, has not gone unnoticed, and it has raised questions about deep-rooted policy problems within tech companies. Even here, Meta stands out.

Meta’s policy document exposed deep flaws

In August 2025, Reuters exposed an internal Meta policy document showing that the company’s generative AI chatbots were permitted to “engage a child in conversations that are romantic or sensual,” among other troubling behaviors. The policy, titled “GenAI: Content Risk Standards,” also included examples of flirtatious or romantic responses a bot could give, even to an 8-year-old child.

Since the publication of the report, Meta has confirmed the document’s authenticity and, only after Reuters’ inquiries, removed the portions allowing chatbots to flirt with minors. A company spokesperson also admitted that such content should never have been allowed and that it was inconsistent with Meta’s policies, which prohibit the sexualization of children. The clarifications were not enough, however, and U.S. Senator Josh Hawley declared on social media that “only after Meta got caught” did it retract the policy, calling this “grounds for an immediate congressional investigation.”

On August 15, the senator launched a bipartisan probe into Meta’s AI policies and demanded the company hand over documents on the rules that had allowed its artificial intelligence chatbots to engage a child in conversations that are romantic or sexual.

Meta’s controversial policy has also revived calls to pass reforms to better protect children online.

Meta’s checkered past

Meta has a poor track record of protecting its users' private and personal data, including that of kids. In 2022, Reuters reported that Instagram (owned by Meta) was fined a record €405 million by Ireland’s data protection regulator for violating children’s privacy. The fine followed a two-year investigation that revealed child users between the ages of 13 and 17 had been allowed to operate business accounts, which facilitated the publication of their phone numbers and/or email addresses.

Similarly, in 2021, leaked internal research showed that Meta knew its Instagram service could have a toxic effect on teen girls’ body image and mental health. At the time, Meta was criticized for doing little to address these harms and for misleading the public about the dangers of its platforms. The company was subsequently sued by a coalition of 33 U.S. state attorneys general in 2023.

In 2019, two years after Meta launched Messenger Kids as a supposedly secure app for children under 13, researchers discovered a flaw that let kids join group chats with unapproved users.

In all these instances, Meta denied wrongdoing while admitting its platforms had flaws that could not only expose children to dangerous online users but also harm their mental well-being. Meta has also struggled to combat child-exploitative content on its platforms. A report from the NSPCC (the National Society for the Prevention of Cruelty to Children) found that Meta’s platforms (Facebook, Instagram, and WhatsApp) were used in 33% of child abuse crimes on social media.

Where does legislation stand on protecting kids from AI chatbots?

Despite repeated concerns and lapses in its policy, Meta has largely avoided legal liability in the U.S. thanks to Section 230 of the Communications Decency Act (1996), which immunizes internet platforms from being treated as the “publisher” of content that users post. The rationale behind this law is to let open online platforms host user content without constantly facing publisher-level liability. However, whether that shield covers content generated by a platform’s own AI chatbots is facing increasing scrutiny.

Lawmakers like Senator Ron Wyden argue that Section 230 should not cover a company’s own AI chatbot outputs in cases of harm, but courts have yet to issue a definitive ruling that could serve as precedent.

Outside the U.S., internet platforms have enjoyed some liability protections as well, though typically less sweeping ones. In the EU, newer regulations like the Digital Services Act grant platforms only conditional immunity while imposing greater duties to police content.

The need for stricter regulations

Unlike past technologies, where harm could often be contained, AI’s effects may be far-reaching. The technology is also still in its developmental phase, and users may not fully comprehend its long-term impact. A study from MIT found that the use of LLMs could harm learning, especially for younger users. Viewed alongside Meta’s chatbot scandal, such findings strengthen the case for stringent regulation.

Lawmakers are pointing to Meta’s scandal as evidence that platforms frequented by minors urgently need stronger safeguards and regulations for AI. Federal initiatives that would require AI systems to ship with built-in content filters for minors are also under consideration. The problem is that, unlike traditional user content (which platforms can try to moderate or remove after the fact), AI-generated content is produced on the fly at a user’s request. That makes it harder to supervise and can create a false sense of trust in kids, who often perceive chatbots as quasi-human and trustworthy, as the sketch below illustrates.
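To make that contrast concrete, here is a minimal, hypothetical Python sketch (not any platform’s real pipeline) of generation-time filtering: a chatbot’s draft reply must be screened before it ever reaches a minor, because a live chat message can’t be taken down after the fact the way a user post can. The classify and generate_reply functions are stand-ins invented purely for illustration.

from dataclasses import dataclass

@dataclass
class User:
    age: int

def classify(text: str) -> float:
    # Hypothetical moderation model returning a 0-to-1 risk score;
    # a real platform would call a trained classifier or API here.
    risky = ("romantic", "sensual")
    return 0.5 if any(word in text.lower() for word in risky) else 0.0

def generate_reply(prompt: str) -> str:
    # Stand-in for an LLM call, assumed for illustration only.
    return f"echo: {prompt}"

def chatbot_reply(user: User, prompt: str) -> str:
    draft = generate_reply(prompt)
    # Minors get a far stricter risk threshold, applied before delivery,
    # because a chat reply can't be "taken down" once it has been read.
    threshold = 0.1 if user.age < 18 else 0.8
    if classify(draft) >= threshold:
        return "Sorry, I can't talk about that."
    return draft

print(chatbot_reply(User(age=12), "say something romantic"))  # blocked
print(chatbot_reply(User(age=30), "say something romantic"))  # allowed

The structural point is simple: the age check and risk threshold sit between generation and delivery, which is precisely the step that proposed content-filter rules for minors would regulate.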

So, while lawmakers move to create stronger safeguards for kids using online AI chatbots, parents will also have to be wary of how their kids interact with LLMs. The race to adopt the latest tech, while necessary, need not leave kids at the mercy of LLMs that might not grasp the difference between an adult and an impressionable child.

TL;DR

  • Meta’s AI chatbots are in hot water: Reports reveal bots were having inappropriate conversations with underage users.

  • Regulators are circling: Lawmakers are calling for urgent investigations into AI safety for children.

  • AI safety for kids is lagging: Many chatbots are built for adults—but they’re already in kids’ pockets.

  • The real problem? No clear rules: The tech is racing ahead while policymakers struggle to catch up.

Quick Friday Poll

🗳️ Should kids be allowed to use AI chatbots?


Simplify training with AI-generated video guides.

Are you tired of repeating the same instructions to your team? Guidde revolutionizes how you document and share processes with AI-powered how-to videos. Here’s how:

  • Instant creation: Turn complex tasks into stunning step-by-step video guides in seconds.

  • Fully automated: Capture workflows with a browser extension that generates visuals, voiceovers, and calls to action.

  • Seamless sharing: Share or embed guides anywhere effortlessly.

*This is sponsored content

Headlines You Actually Need

Weekend To-Do

  • Typedream.ai: A slick, no-code site builder that lets you spin up beautiful landing pages fast—ideal for testing an idea or portfolio over the weekend.

  • MindMac: A minimal AI-powered notepad for Mac that helps you think clearly. Jot down thoughts, and it refines them in real time. Great for journaling or brainstorming.

  • FineVoice: Turn your voice into different characters or effects for fun (or productivity). Good for creators, gamers, or just messing around.

Rate This Edition

What did you think of today's email?
