The Agent War Gets Real

Plus: AI novels slip through, writhing robots, and chatbot advice risks.

Here’s what’s on our plate today:

  • 🧪 OpenClaw, agentic AI, and the pricing model that cracked first.

  • 🧠 Bite-Sized Brains: AI novels, writhing robots, and chatbot advice risks.

  • 💡 Roko’s Prompt of the Day on where agent workflows should actually live.

  • 📊 Poll on what the OpenClaw fight really revealed about AI agents.

Let’s dive in. No floaties needed…

🌱 Framer for Startups

First impressions matter. Launch a stunning, production-ready site in hours with Framer, no dev team required. Early-stage startups get one year of Framer Pro free, a $360 value. No code, no delays. Scale from MVP to full product with CMS, analytics, and AI localization. Trusted by hundreds of YC-backed founders.

*This is sponsored content

The Laboratory

How OpenClaw exposed the economics of agentic AI

TL;DR

  • The agent outpaced the platforms: OpenClaw let AI agents act across real workflows, from code to email, and hit 250K GitHub stars in about two months, exposing how far agentic AI demand has outrun enterprise adoption.

  • Google’s pricing model cracked first: In February 2026, Google started restricting users who connected OpenClaw to Antigravity through consumer accounts, showing that flat-rate subscriptions were never built for autonomous, always-on agents.

  • The reversal clarified the rule: Google restored access and updated its FAQ, but the core message stayed the same: agent workflows belong on metered APIs and enterprise pricing, not consumer plans.

  • The absorption cycle repeated fast: Steinberger joined OpenAI, Anthropic shipped Claude Code Channels days later, and OpenClaw became another case of an independent tool surfacing demand before the labs folded the idea into their own platforms.

  • The window for independents is shrinking: Open-source projects still reveal what users actually want, but the gap between breakout traction and platform absorption is getting shorter every cycle.

OpenClaw can run autonomous tasks using modular ‘skills,’ making it highly flexible but also raising security concerns. Photo Credit: Android Central.

In the short history of modern artificial intelligence products, model specifications have become the default yardstick: context window size, reasoning ability, and benchmark scores now shape how users and enterprises judge tools and decide where to invest. But this framing overlooks a simpler truth: success for AI is defined not by model size or context window, but by the model’s ability to make meaningful contributions that reduce human effort.

And while LLMs can generate text, write code, analyze documents, and answer questions with remarkable fluency, they cannot act on their own, limiting their ability to make meaningful contributions to workflows.

To fill this gap, companies began developing AI agents meant to autonomously execute a sequence of actions to complete tasks. Despite that evolution, Deloitte estimates that only 6% of organizations have fully implemented agentic AI, even as more than 70% have introduced generative AI into their operations. In other words, the models are ready, but real-world workflows are not. That gap is exactly what a free, open-source project built by a single Austrian developer exposed, and partly closed, in less than two months.

The tool that moved faster than the rulebook

OpenClaw was first published in November 2025 under the name Clawdbot by Peter Steinberger, a developer better known in iOS engineering circles than AI ones. At its core, the project is a layer between users and LLMs. It connects to messaging apps like Telegram, Discord, Signal, and WhatsApp, letting the AI go beyond answering prompts and take actions: running code, browsing the web, managing files, sending emails, and interacting with other services.

OpenClaw is built around a skills system: capabilities are stored as directories containing a plain-text instructions file, so developers can write and share skills without modifying the core agent, which made the platform extensible almost immediately. Critically, OpenClaw does not enforce a mandatory human-in-the-loop mechanism. Once a user configures it and sets permissions, it can run autonomously, executing chains of actions without asking for approval at each step. That design choice explains both its appeal and the concerns it eventually attracted from security researchers.
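A directory-of-plain-text-instructions design like this is easy to picture in code. The sketch below is illustrative only: the `skills/` layout, the `SKILL.md` file name, and the loader function are assumptions for the example, not OpenClaw’s actual format.

```python
import tempfile
from pathlib import Path

# Hypothetical sketch of a skills loader: each skill is a directory
# holding a plain-text instructions file. The names "skills/" and
# "SKILL.md" are assumptions, not OpenClaw's real layout.

def load_skills(skills_dir: Path) -> dict[str, str]:
    """Map each skill's directory name to its instruction text."""
    skills = {}
    for entry in sorted(skills_dir.iterdir()):
        instructions = entry / "SKILL.md"
        if entry.is_dir() and instructions.exists():
            skills[entry.name] = instructions.read_text()
    return skills

# Demo: create two skill directories, then load them.
root = Path(tempfile.mkdtemp()) / "skills"
for name, text in [("send-email", "Draft and send email on request."),
                   ("browse-web", "Fetch pages and summarize them.")]:
    (root / name).mkdir(parents=True)
    (root / name / "SKILL.md").write_text(text)

loaded = load_skills(root)
print(sorted(loaded))  # ['browse-web', 'send-email']
```

The appeal of this pattern is that adding a capability is just dropping in a folder; no core-agent code changes, no redeploy, which is exactly why third-party skills could spread so fast.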

Within weeks of its release, OpenClaw had accumulated 210,000 GitHub stars, a metric that tracks how many developers have bookmarked or endorsed a repository. By early March, it had crossed 250,000 stars, surpassing React, the JavaScript framework that powers much of the modern web and that took over a decade to reach the same count. OpenClaw did it in roughly 60 days, with no launch event and no venture-backed growth team. One widely circulated user account described letting the agent research car prices, contact dealerships, and negotiate a purchase while the owner was in a meeting, saving $4,200 without any direct involvement.

For developers, OpenClaw was recognition of a problem practitioners had been working around for months: the gap between what a model can do when carefully prompted by an expert and what it can do when left to act on behalf of a non-technical user across a real workflow. But its rapid success and compatibility with almost every major LLM exposed a vulnerability in how hyperscalers had priced and packaged their products. Companies investing billions in model development had built their billing around human-initiated, session-based usage, not autonomous agents running continuous workflows.

Google’s calculation and its miscommunication

OpenClaw supports multiple models, including Claude, DeepSeek, and OpenAI’s GPT models. It also supported Google’s Gemini through Antigravity, Google’s AI-native coding environment (an integrated development environment, or IDE, built on top of Gemini). This very ability to work with multiple models led to a conflict between OpenClaw and Google.

In mid-February 2026, Google began restricting accounts that had connected OpenClaw to Antigravity via OAuth, the standard protocol that lets third-party apps act on a user’s behalf without the user handing over their password. A subscriber paying $250 per month for Google AI Ultra posted about a sudden account restriction on 12 February, with no warning and no path to a refund. Within days, a pattern was clear: Google was enforcing its Terms of Service against users who had connected third-party agent tools to flat-rate subscriptions.

While the underlying economics were legitimate, the enforcement was poorly executed. Google’s terms were written before agentic AI became common and did not clearly address this kind of usage.

Users who were affected had not violated any explicitly stated rule when they set up their systems. The policy was applied retroactively, without warning or refunds, and early communication made the bans seem broader than they actually were, leading some to believe their entire Google accounts had been disabled when only Antigravity access was restricted.

The event highlighted the tension between hyperscalers and agent tools. The platforms are not opposed to third-party developers, but they built their products, pricing, and policies for a world where AI use is human-initiated, short, and predictable. Autonomous agents break those assumptions at once, running continuously, making independent calls, and consuming resources in ways the existing systems were never designed to handle. The result is not open hostility, but friction created by infrastructure that was not built for this kind of automation.

The reversal and what it settled

On 27 February, Google restored the affected accounts and said it was “welcoming back everyone whose account was restricted for using third-party tools.” It also updated its Antigravity FAQ to clarify that tools such as OpenClaw, Claude Code, and OpenCode require a Vertex or AI Studio API key rather than a consumer login. The restriction on the underlying behavior remained unchanged, but the penalty for users who set things up without clear guidance was removed.

The episode made one point explicit: consumer AI subscriptions are not meant to function as developer infrastructure. Platforms expect autonomous or large-scale workflows to run through metered APIs, where usage can be tracked and billed separately instead of included in a flat monthly plan. That shifts the economics of agentic AI toward enterprise pricing, at a time when many of the most interesting tools were first built on consumer accounts. Future projects will likely need API access and a budget from the start.
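The underlying economics can be made concrete with back-of-the-envelope arithmetic. The $250/month figure is the Google AI Ultra price mentioned above; the token volumes and the $3-per-million blended rate below are illustrative assumptions, not any provider’s actual pricing.

```python
# Rough sketch of why flat-rate plans break for always-on agents.
# The per-token rate and usage volumes are illustrative assumptions;
# only the $250/month plan price comes from the article.

FLAT_MONTHLY_USD = 250.0          # consumer plan price (from the article)
METERED_USD_PER_M_TOKENS = 3.0    # assumed blended API rate per 1M tokens

def monthly_metered_cost(tokens_per_hour: int, hours_per_day: float) -> float:
    """What the same usage would cost on a metered API over a 30-day month."""
    tokens = tokens_per_hour * hours_per_day * 30
    return tokens / 1_000_000 * METERED_USD_PER_M_TOKENS

# A human chatting ~1 hour a day vs. an agent running around the clock.
human = monthly_metered_cost(tokens_per_hour=50_000, hours_per_day=1)
agent = monthly_metered_cost(tokens_per_hour=200_000, hours_per_day=24)

print(f"human-style usage: ${human:.2f}/month")  # $4.50
print(f"always-on agent:   ${agent:.2f}/month")  # $432.00
print(agent > FLAT_MONTHLY_USD)                  # True
```

Under these assumed numbers, a flat plan priced against the human usage pattern loses money on every always-on agent, which is exactly the mismatch metered APIs exist to fix.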

The incident also says something about how the broader AI industry functions. The sequence that produced OpenClaw follows a structure that has now repeated several times in the AI era: an independent developer identifies a real gap between what models can do and what users need them to do, builds a tool that closes it, reaches meaningful distribution, and is then absorbed by one of the major labs, even as those labs begin shipping their own versions of the same functionality.

The pattern became visible almost immediately after OpenClaw’s rise. Steinberger joined OpenAI, where, according to CEO Sam Altman, he will “drive the next generation of personal agents.” OpenClaw itself will “live in a foundation as an open source project that OpenAI will continue to support.”

Days later, on 20 March, Anthropic shipped Claude Code Channels, a feature that allows Claude Code to receive messages and respond autonomously via Telegram and Discord. VentureBeat framed it as a direct competitive response to OpenClaw, replicating the exact messaging-based agent interaction model that OpenClaw had popularized. The feature launched as a research preview, required a paid Claude subscription, and ran through Anthropic’s Model Context Protocol (MCP) standard.

The story of OpenClaw reflects an important pivot not only in how end users view AI, but also in how the broader industry continues to function. Open-source projects show what users actually want. Platforms build their own versions. The most successful developers get pulled into the ecosystem. With each cycle, the space for fully independent tools becomes smaller.

Bite-Sized Brains

  • AI books slip through: Publishers are scrambling to spot AI-written novels after Shy Girl was pulled, and experts say detection tools are already losing the race.

  • Robots that keep crawling: Northwestern’s new modular ‘metamachines’ can keep moving even after losing chunks of their bodies, which is either impressive or mildly cursed.

  • Chatbots make bad therapists: A Stanford study warns sycophantic AI advice can reinforce users’ beliefs, encourage dependence, and make people worse at handling hard social situations.

Outperform the competition.

Business is hard. And sometimes you don’t really have the necessary tools to be great in your job. Well, Open Source CEO is here to change that.

  • Tools & resources, ranging from playbooks, databases, courses, and more.

  • Deep dives on famous visionary leaders.

  • Interviews with entrepreneurs and playbook breakdowns.

Are you ready to see what it’s all about?

*This is sponsored content

Prompt Of The Day

Act as an AI product strategist. I will describe a workflow I want to automate. Tell me whether it belongs on a consumer subscription, a metered API, or a fully enterprise setup, and explain where the pricing or platform model breaks first.

Tuesday Poll

🗳️ What did the OpenClaw fight really reveal about agentic AI?

Login or Subscribe to participate in polls.

The Toolkit

  • Mirage: Browser-based 3D design app that lets you prototype interactive product shots and visuals directly in your editor.

  • Speechmatics: Speech recognition engine for real-time, multilingual transcription and voice analytics across audio and video.

  • Superhuman: Rebranded Grammarly platform offering an AI assistant that drafts, edits, and manages communication across email and work apps.

Rate This Edition

What did you think of today's email?

Login or Subscribe to participate in polls.