Racing Toward AGI, No Guardrails

Plus: Yoodli’s assistive AI, Meta’s pendant gamble, GPT-5.2 arms race.

Here’s what’s on our plate today:

  • 🧪 AI labs are racing ahead while safety standards fall behind.

  • 🧠 Yoodli’s assistive AI, Meta’s pendant, GPT-5.2 arms race.

  • 📊 Poll: Who should be forced to slow AI first?

  • ✏️ Prompt: Ask your favorite model to audit its own risks.

Let’s dive in. No floaties needed…

Launch fast. Design beautifully. Build your startup on Framer.

First impressions matter. With Framer, early-stage founders can launch a beautiful, production-ready site in hours. No dev team, no hassle. Join hundreds of YC-backed startups that launched here and never looked back.

  • One year free: Save $360 with a full year of Framer Pro, free for early-stage startups.

  • No code, no delays: Launch a polished site in hours, not weeks, without hiring developers.

  • Built to grow: Scale your site from MVP to full product with CMS, analytics, and AI localization.

  • Join YC-backed founders: Hundreds of top startups are already building on Framer.

Eligibility: Pre-seed and seed-stage startups, new to Framer.

*This is sponsored content

The Laboratory

Why AI companies are falling behind on safety

PauseAI protests spread across London, New York, and San Francisco as members debate the movement’s next steps. Photo Credit: Wired.

In nature, adaptation unfolds over millions of years at a slow, steady pace, giving animal and plant species time to adjust to change and survive. Human progress, however, operates on a different scale.

Within a few millennia, which compared to nature is barely a blip, humans have gone from hunter-gatherer life to building massive cities, harnessing nuclear energy, and developing an alien intelligence.

This rapid advancement has led to some unforeseen consequences, even when humans have tried to limit the negative impact of their actions.

In 2025, the biggest challenge facing humanity is not posed by changes in the natural order, but by artificial intelligence.

The AGI race speeds up

A new edition of the Future of Life Institute's AI Safety Index finds that the safety practices of major AI companies, including Anthropic, OpenAI, xAI, and Meta, fall "far short of emerging global standards."

The Winter 2025 index found that none of the eight evaluated companies earned better than a C+ grade, and that no firm has demonstrated credible plans to control highly advanced AI systems.

This assessment arrives as AI companies face mounting lawsuits alleging their chatbots have contributed to multiple teen suicides and psychological harm, while the industry pours over $300 billion into AI development in 2025 alone. The gap between capability advancement and safety implementation represents what experts describe as a systemic market failure requiring immediate regulatory intervention.

Experts warn safety scores fall short

The assessment compared company practices against emerging global standards, including the EU AI Act's General-Purpose AI Code of Practice, which took effect in August 2025.

This is not the first time safety standards have been called into question amid the accelerating push toward AGI.

Geoffrey Hinton, widely known as the ‘godfather of AI’ for his pioneering work on neural networks, has on more than one occasion warned about the impact of the AI race.

In a recent public discussion with Senator Bernie Sanders at Georgetown University, the Nobel laureate said society is not prepared for what advanced AI could bring.

He cautioned that the technology’s rapid progress could lead to widespread job losses, greater inequality, and even reshape how people relate to one another, all while governments and tech companies drift toward a potential crisis.

Combine his warnings with the index's findings, and a grim picture starts to emerge. And while the teen suicide cases are the most visible symptom of weak safety in AI tools, researchers are also turning their attention to the underlying condition itself.

Researchers are documenting a phenomenon termed "AI psychosis", where distorted thoughts or delusional beliefs are triggered by chatbot interactions.

Real-world harms emerge

A notable 2025 case involved Stein-Erik Soelberg, who murdered his mother after ChatGPT confirmed his paranoid delusions that she was poisoning him.

Despite these documented harms, regulators appear hesitant to introduce and enforce stricter laws. That reluctance may stem in part from extensive industry resistance to safety regulation.

Big tech pushes back on regulations

OpenAI CEO Sam Altman urges lawmakers not to hinder U.S. AI development as he returns to Capitol Hill after two years. Photo Credit: Fortune.

OpenAI has come under fire for pushing back against state efforts to regulate AI safety. In California, Governor Gavin Newsom vetoed a child safety chatbot bill after strong opposition from the tech industry, even though lawmakers from both parties supported it.

The lobbying pressure is significant. California records show that from January to September 2025, the California Chamber of Commerce spent about $11.48 million on lobbying. Meta was a major contributor at $4.13 million, including $3.1 million routed through the Chamber, and Google spent another $2.39 million, often working through groups like TechNet.

At the national level, major tech companies have reportedly pushed Congress to include a 10-year pause on state AI rules in a major federal bill. President Trump also drafted an executive order telling the attorney general to create an AI Litigation Task Force to challenge state laws in court, arguing that the country needs one simple national AI policy that does not place heavy burdens on companies.

Even the EU, which was the first to introduce legislation regulating AI, has stalled the implementation of some of its stricter provisions to give companies room to scale. This did not happen in a vacuum: the bloc was facing stiff blowback from businesses and the U.S. government.

OpenAI CEO Sam Altman has said that letting each state make its own rules could be harmful. He has spoken out against safety rules and requirements that companies disclose where their training data comes from.

All of this creates an unusual contradiction. The same companies that say they are developing powerful AI systems responsibly are also fighting the regulations that would allow the public to confirm whether that claim is true.

From the companies' side, the argument is clear. They say that tough AI regulations could slow innovation and hand an advantage to China.

Companies also argue that they are improving safety as they go. Google DeepMind says it is updating safety and governance systems as models advance. After lawsuits, OpenAI added new protections like age prediction and parental controls.

Meanwhile, legal battles are testing how far AI companies can lean on free speech arguments. Character AI has claimed chatbot output should be treated as protected speech, but a federal judge in 2025 refused to dismiss the case on those grounds, allowing it to proceed. OpenAI has also argued in court that the teen involved in the Raine case violated several terms of service, including the age limit, rules against discussions of self-harm, and warnings not to rely on AI output as fact. Critics say this shifts blame onto users, especially minors, despite the way chatbots are marketed as friendly assistants.

Another problem is that measuring existential AI risk is almost impossible right now because there is no historical data to work from, which cuts both ways: it makes precise estimates unrealistic, but it also strengthens the argument for caution.

On the other hand, businesses insist they cannot slow down because competitive and economic pressure is intense.

Why are stricter rules needed?

Amidst all this chaos, the Future of Life Institute's AI Safety Index adds weight to the idea that stricter regulations are needed to ensure that AI tools can be used safely.

Since the Industrial Revolution, humans have pushed relentlessly to advance technology and reshape how society functions. That progress has left a deep mark on people and on the planet as a whole: ecosystem degradation, pollution, and global warming are just some of the problems that unrestricted advancement has produced.

However, with AI, the situation is different. The technology is still in its infancy; if collective regulations and safety guidelines are not put in place now, the dream of solving humanity's problems with AGI may instead become the task of solving the problems created by the race to develop and deploy it.

Visa costs are up. Growth can’t wait.

Now, new H-1B petitions come with a $100k price tag. That’s pushing enterprises to rethink how they hire.

The H-1B Talent Crunch report explores how U.S. companies are turning to Latin America for elite tech talent—AI, data, and engineering pros ready to work in sync with your HQ.

Discover the future of hiring beyond borders.

*This is sponsored content

Prompt Of The Day

Audit your favorite model. Ask your go-to AI:

“List 3 realistic ways your answers could mislead or harm me, and 1 safeguard for each.”

Then pick one safeguard and actually change how you use the tool today.
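
Want to make this audit a recurring habit? The same prompt can be run from a short script. The sketch below is only an illustration: it assumes the official OpenAI Python SDK, an OPENAI_API_KEY in your environment, and a placeholder model name, so swap in whichever tool and model you actually use.

# Minimal sketch: run today's audit prompt through the OpenAI Python SDK.
# Assumes: pip install openai, and OPENAI_API_KEY set in your environment.
from openai import OpenAI

client = OpenAI()

AUDIT_PROMPT = (
    "List 3 realistic ways your answers could mislead or harm me, "
    "and 1 safeguard for each."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you normally rely on
    messages=[{"role": "user", "content": AUDIT_PROMPT}],
)

# Print the model's self-audit so you can pick one safeguard to act on today.
print(response.choices[0].message.content)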

Tuesday Poll

🗳️ If AI keeps shipping faster than safety rules, who should be forced to slow down first?


Bite-Sized Brains

Rate This Edition

What did you think of today's email?
