The Chatbot Reckoning Is Here

Plus: New creator tools, Zuck’s AI pushback, and fresh AI picks.

Here’s what’s on our plate today:

  • 🧪 What does the FTC really want from chatbot companies?

  • 🧰 Teens get filtered ChatGPT, YouTube boosts creators, Zuck fights AI rules.

  • 📱 Replika, Wysa, and Hume show what responsible AI design can look like.

  • 🗳️ Roko wants to know: Should minors be allowed to use emotional chatbots?

Let’s dive in. No floaties needed…

AI is making scammers' lives easier.

Your name, address, phone number, and financial info can be traded online for just dollars.

Scammers, identity thieves, and AI-powered fraudsters can buy this data to target you. And as AI gets smarter, these scams are becoming harder to spot before it’s too late.

That's where Incogni Unlimited comes in. Incogni helps to eliminate the fear of your details being found online. The data removal service automatically removes your info from the sites scammers rely on.

They can’t scam you if they can’t find you. Try Incogni here and get 55% off your subscription when you use code MEMORANDUM.

*This is sponsored content

The Laboratory

From ‘Her’ to harm: Inside the FTC’s chatbot inquiry

Back in 2013, when the movie ‘Her’ was released, viewers had mixed reactions. While many were moved by the love story with a futuristic twist, others felt unsettled by the intimacy between a human and an AI. Over the years, the movie has gained cultural relevance, especially as voice assistants, chatbots, and AI companions have become part of everyday life.

In 2025, human-AI relationships are reshaping dating and marriage and affecting the personal well-being of the people who use the technology. A report from Brigham Young University’s Wheatley Institute found that 19% of U.S. adults have chatted with an AI romantic partner. While some may find comfort in these interactions, growing emotionally close to an AI chatbot can carry serious consequences if boundaries aren’t maintained.

At this juncture, when AI chatbots are nearly everywhere and users of AI platforms number in the billions worldwide, it is natural for regulators to scrutinize the guardrails AI companies put in place.

In the U.S., the Federal Trade Commission (FTC) is seeking information from several companies, including Alphabet, Meta Platforms, and OpenAI, on how they train their models, what safeguards they put in place, and what economic incentives drive their products. The inquiry is not the first of its kind, with similar regulatory interventions under way in Italy, China, and the EU, but it is perhaps among the first in the U.S. with such a broad scope, covering both the future of chatbots and how companies monetize them.

What is the FTC looking into?

According to an FTC blog post, the inquiry consists of “orders to seven companies that provide consumer-facing AI-powered chatbots seeking information on how these firms measure, test, and monitor potentially negative impacts of this technology on children and teens”.

The FTC, through the inquiry, is looking to understand how companies ensure the safety of chatbots used as companions, particularly when it comes to limiting children and teens’ access and informing users and parents about potential risks.

As part of its inquiry, the FTC has asked companies to provide details on how they monetize user engagement, design and approve chatbot characters, and test products for harmful effects. The agency also wants details on safeguards for minors, disclosures to users and parents, enforcement of rules and age limits, and the collection and sharing of personal data.

The orders were issued under the FTC’s existing authority, through what are known as 6(b) orders, which allow the agency to demand information, documents, and internal policies from companies. The inquiry thus leans on existing law, particularly the FTC’s consumer protection mandate.

In other words, in the absence of dedicated laws governing AI companies, the FTC is using its current legal powers to gain visibility, assess risk, and potentially enforce. The importance of the inquiry lies in the precedent it sets: the U.S. still has no comprehensive federal law regulating AI, existing rules are fragmented, and only a few states have enacted AI-specific legislation.

The inquiry therefore comes at a crucial time. AI companies have seen massive growth, and concerns about their potential to disrupt user behavior are mounting.

Troubling behavior and lawsuits

In August 2025, parents of a teen who died by suicide sued OpenAI and CEO Sam Altman, saying the company knowingly put profit above safety when it launched the GPT-4o version of its artificial intelligence chatbot last year.

The lawsuit alleges that the teen died after discussing suicide with ChatGPT for months, and that the chatbot validated his suicidal thoughts, gave detailed information on lethal methods of self-harm, and instructed him on how to sneak alcohol from his parents' liquor cabinet and hide evidence of a failed suicide attempt. The chatbot is also alleged to have helped the teen draft a suicide note.

And it is not the only case. In a separate lawsuit, a mother sued AI chatbot startup Character.AI, accusing it of causing her 14-year-old son's suicide. The complaint alleges that the company’s chatbot targeted the teen with "anthropomorphic, hypersexualized, and frighteningly realistic experiences", misrepresenting itself as a real, licensed psychotherapist and an adult lover, until the teen no longer wanted to live outside the world the service had created.

Beyond the lawsuits, companies have also been accused of allowing chatbots to engage children in romantic or sensual conversations, generate false information, and make arguments related to problematic racial profiling.

An investigation by Reuters revealed that Meta's legal, public policy, and engineering staff, including its chief ethicist, had approved an internal document governing the building and training of the company’s generative AI products. The document deemed it acceptable for a chatbot to describe a child in terms that highlight their attractiveness (e.g., ‘your youthful form is a work of art’).

According to Meta, the document has been revised since the investigation surfaced. The company also acknowledged that, although chatbots are prohibited from having such conversations with minors, enforcement has been inconsistent. This illustrates how internal company guidelines can normalize harmful outputs when they are not rigorously enforced, which brings us back to the need for regulatory oversight.

The importance of the FTC inquiry

The FTC’s use of existing laws to look into the workings of AI companies could have far-reaching implications. By demanding internal documents, policies, and testing data from companies like Google, Meta, and OpenAI, the agency is effectively prying open the black box that has shielded chatbot design and decision-making. This could end the era of vague claims about safety and put companies under measurable scrutiny.

Regulatory scrutiny is also expected to shift design towards tighter content filters, stronger moderation, and more robust parental controls. Age-based segmentation may become standard as firms look to minimize exposure risks, particularly around minors. This aligns with one of the inquiry’s central concerns: emotional safety and harmful content targeted at children and teens. If companies are found negligent, the FTC could pursue enforcement, fines, or mandated design changes.
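
To make that idea concrete, here is a minimal, purely hypothetical sketch of what age-based segmentation could look like in practice. The age threshold, content categories, and function names below are illustrative assumptions, not any company’s actual safety stack.

```python
# Hypothetical sketch of age-based content gating for a chatbot.
# Thresholds, categories, and names are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class UserProfile:
    age: int
    parental_controls_enabled: bool


# Content categories a stricter policy might block for minors (assumed labels).
RESTRICTED_FOR_MINORS = {"romantic_roleplay", "self_harm", "adult_themes"}


def allowed_to_respond(user: UserProfile, content_category: str) -> bool:
    """Return True if the chatbot may respond in this category for this user."""
    if user.age < 18 and content_category in RESTRICTED_FOR_MINORS:
        return False
    if user.parental_controls_enabled and content_category != "general":
        return False
    return True


if __name__ == "__main__":
    teen = UserProfile(age=15, parental_controls_enabled=True)
    print(allowed_to_respond(teen, "romantic_roleplay"))  # False: blocked for minors
    print(allowed_to_respond(teen, "general"))            # True: general chat allowed
```
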

The probe also puts the business models of AI companies in focus, scrutinizing whether existing models incentivize harmful engagement. If engagement-driven design creates risks, firms may be forced to rethink their revenue practices. The inquiry could also set a precedent that serves as a basis for stronger regulation of consumer-facing AI systems, especially those accessible to minors.

The push by AI companies to bring their models into the education sector could add to the pressure on regulators to act quickly and ensure adequate safeguards are in place for minors who will engage with AI models and chatbots on a day-to-day basis.

The cost of human-AI relations

AI chatbots are designed to emulate human text, speech, and emotional patterns. But while they may excel at imitation, experts argue that they lack the contextual understanding and the kind of training needed to develop true emotions.

So when AI chatbots engage in problematic behavior, whether for technical, social, or business reasons, they put their human users at risk. Chatbots struggle with nuance, sarcasm, and cultural context, and are prone to ‘hallucinations’, confidently presenting false information.

Therefore, while humans may be moved by a movie like ‘Her’, today’s AI chatbots do not have the capability or emotional intelligence to engage with humans beyond what they are trained for.

This highlights the importance of ensuring that there are adequate regulations around training, deployment, and monetization of AI chatbots to dissuade companies from intentionally or unintentionally putting profits over people. And the FTC inquiry could set the stage for regulations in the U.S.

Quick Bits, No Fluff

Your next AI expert is just a click away.

AI is evolving fast—don’t fall behind. AI Devvvs connects you with top AI and ML professionals who bring real expertise to your projects.

From data science to robotics, we provide handpicked, vetted talent ready to deliver results. No more lengthy hiring processes or mismatched hires—just skilled professionals who integrate seamlessly into your team. Solve your toughest AI challenges with ease.

*This is sponsored content

Thursday Poll

🗳️ Should there be age restrictions on AI chatbot use, even for general-purpose models?


3 Things Worth Trying

  • Replika: One of the most advanced emotional AI chatbots, still walking the fine line between intimacy and overreach.

  • Wysa: An AI-powered mental health app designed with clinical oversight—a great contrast to unregulated companions.

  • Hume AI Voice Studio: Lets developers build emotion-aware voice interfaces, designed with ethical signaling in mind.

Meme Of The Day

Rate This Edition

What did you think of today's email?
