When AI Vendors Say No
Plus: OpenAI shopping flop, Pichai payday, and OpenClaw’s New York meetup.
Here’s what’s on our plate today:
🧪 Anthropic-Pentagon clash rewrites AI defense ethics.
🧠 OpenAI shopping flop, Pichai payday, and OpenClaw meetup.
💡 Roko’s Prompt: Draft one non-negotiable ethics clause for vendors.
🗳️ Poll: Where should AI companies draw red lines on war?
Let’s dive in. No floaties needed…

Goodies delivered straight into your inbox.
Get the chance to peek inside founders and leaders’ brains and see how they think about going from zero to one and beyond.
Join thousands of weekly readers at Google, OpenAI, Stripe, TikTok, Sequoia, and more.
Check all the tools and more here, and outperform the competition.
*This is sponsored content

The Laboratory
How a $200M Pentagon contract turned into the AI industry’s first major ethics fight
TL;DR
The deal that unraveled: Anthropic lost a $200M Pentagon deal after refusing to drop contract bans on mass domestic surveillance and fully autonomous weapons, then was labeled a supply chain risk.
OpenAI blinked: OpenAI rushed in to take the contract, faced internal backlash, and quietly added a no-domestic-surveillance clause, but still avoided any clear limit on autonomous weapons.
The chilling effect is the real story: The designation tells smaller AI labs that keeping strong ethical-use clauses could cost them government business, so many will likely strip them out in advance.
The stakes go beyond a single contract: The fight now shapes trust and governance; industry groups, allies, and regulators are watching whether the U.S. treats AI ethics as real policy or just a negotiable line item.
For much of Silicon Valley’s history, the relationship between technology companies and the U.S. military has been one of quiet cooperation. Defense contracts brought in money, engineers looked the other way, uncomfortable questions went unasked, and the arrangement persisted for decades.
With AI, the relationship ran even deeper: the military funded a large share of early research in machine learning, robotics, and autonomous systems during the Cold War. Many foundational AI programs were built for military logistics, planning, and simulation rather than combat systems; AI planning systems, for example, were used in the early 1990s to optimize supply chains and deployment logistics during the Gulf War.
As AI systems became more powerful under private enterprise, however, a new relationship emerged. The earliest sign came in 2018, when news broke that Google’s AI technology was being used by the U.S. military in a drone program, Project Maven, causing controversy both inside and outside the company.
In 2024, the Pentagon, having witnessed the abilities of AI models, began experimenting with frontier AI models from major labs such as Anthropic, OpenAI, and xAI.
The deal that looked bulletproof
Among the frontier AI labs, Anthropic seemed to have cracked the code for responsible defense contracting, having signed a $200M agreement with the Pentagon. Under the contract, Claude became the first major AI deployed across U.S. classified government networks.
At the time, the terms, as Anthropic understood them, included specific protections: Claude would not be used for mass domestic surveillance of American citizens, and it would not be used to power autonomous weapons systems, meaning machines that can select and kill targets without a human making that call.
For a company founded explicitly on the idea that AI should be developed safely, these were not negotiating tactics; they were core commitments that set it apart from its rivals.
Then the Trump administration arrived, rewrote the rules, and the arrangement fell apart in the most public, consequential way the AI industry has seen.
The breaking point
The collapse between the Pentagon and Anthropic was not one event; it was a slow accumulation of pressure that hit a wall in January 2026, when an Anthropic employee raised concerns about how Claude was being used in U.S. military operations related to Venezuela.
Defense Secretary Pete Hegseth, by multiple accounts, reacted with fury, issuing a memo directing all Pentagon AI contracts to adopt “any lawful use” language, effectively telling every AI company: remove your restrictions or lose the contract.
Anthropic refused to comply, lost the contract, and was designated a supply chain risk. Within hours of the ban, OpenAI announced it had struck its own deal with the Pentagon. The company that had, just months earlier, voiced support for Anthropic’s position on red lines had moved to fill the gap the moment it opened.
The OpenAI angle
The backlash was immediate and came from inside the company. OpenAI employees publicly vented about leadership’s handling of the negotiations. Critics launched campaigns encouraging ChatGPT users to switch to Claude. In a remarkable twist of market irony, Claude became the most-downloaded free app on Apple’s App Store in the days following the ban, suggesting that a significant portion of AI users actively valued Anthropic’s stance.
CEO Sam Altman, in an all-hands meeting, admitted the deal “looked opportunistic and sloppy.” This was followed by OpenAI amending the contract to add language prohibiting domestic surveillance of U.S. persons, without adding explicit restrictions on autonomous weapons. The MIT Technology Review observed that what OpenAI agreed to is almost precisely the outcome Anthropic had been trying to prevent.
Anthropic, for its part, has vowed to challenge the supply chain designation in court, calling it “legally unsound” and warning that it sets a dangerous precedent.
Legal experts at Lawfare agree the designation is legally shaky, stating that the relevant statute was written to address foreign-controlled infrastructure posing espionage risks, not domestic companies that declined to remove usage restrictions from a contract.
But the legal question, while important, is almost secondary to the structural precedent being set. If the government can designate a domestic AI company a national security risk for maintaining ethical standards, every AI company now understands that those standards come at a price. Smaller companies, without Anthropic’s resources to litigate or its brand equity to absorb a government ban, will almost certainly self-censor in advance to avoid the same fate.
The Center for American Progress has called on Congress to establish a statutory framework governing AI in defense contracts. Without legislation, the terms will continue to be set by whoever holds executive power and has the most leverage in the negotiating room. And with a technology as powerful as AI, this could have serious repercussions for the rule of law.
The Pentagon’s insistence is not entirely without merit. The military’s position that decisions about the lawful use of weapons belong to the government, not to private companies, reflects a genuine constitutional principle that has governed defense procurement for decades. No private company has historically had veto power over how government-acquired technology is used. Boeing does not decide which targets the Air Force strikes; Lockheed Martin does not approve individual F-35 missions.
What the Pentagon did not account for is that AI models are fundamentally different from aircraft or weapons platforms. A jet fighter does what its pilot directs. However, an AI model makes decisions, interprets instructions, and generates outputs in ways that even its designers cannot fully predict or control. The assurance that “we only intend to use it lawfully” provides considerably less comfort when the technology itself can behave in ways its operators did not anticipate, and when the consequences of those surprises play out on a battlefield.
Two lines in the sand
To understand what broke down, you need to understand what Anthropic was actually asking for, because neither restriction was abstract.
The first was a ban on mass domestic surveillance: using AI to monitor, track, and profile American citizens at scale, without the individual warrants or legal oversight that existing law requires. The Pentagon stated that it did not intend to do that. Anthropic’s position was straightforward: then put it in the contract.
The second was fully autonomous weapons, meaning systems that can identify, select, and engage targets without a human deciding to fire. Drone swarms, AI-assisted targeting, and machine-speed battlefield decisions that outpace any human review are not science fiction; these are active Pentagon development areas.
Anthropic’s argument here was technical rather than moral: current AI models are not reliable enough to make life-or-death targeting decisions without producing errors that, in a combat context, could mean collateral deaths and potential violations of international law.
Against this backdrop, the “any lawful use” standard the Pentagon demanded looks less like a safeguard than a moving target: what counts as lawful can change with administration, jurisdiction, and interpretation.
A contractual commitment to restrict autonomous lethal use is substantially more durable than a policy memo that can be rewritten overnight. That is precisely the assurance Anthropic was seeking, and precisely what the Pentagon refused to provide.
The fallout
The consequences of this dispute extend beyond a single contract and a single company. For enterprises building AI-powered products and services, the question of what their model providers can be directed to do, by whom, and under what conditions is now a live procurement concern rather than a theoretical one.
Enterprises, including major Anthropic backers Amazon and Nvidia, have voiced concern over the Pentagon’s handling of the matter through a major tech industry group, the Information Technology Industry Council. In a letter, the group said it is concerned by recent reports regarding the Department of War’s consideration of imposing a supply-chain risk designation in response to a procurement dispute.
Beyond American shores, the Pentagon’s stance has also signaled to allied governments and international bodies that the U.S. government will treat ethical usage restrictions on AI as a commercial obstacle rather than a policy commitment worth honoring. This threatens to weaken American credibility in every global negotiation over the governance of military AI.
And for the industry itself, the precedent is uncomfortable but clarifying: the company that drew a public line, absorbed a government ban, lost a $200M contract, and watched a rival move in to fill the gap ended up as the most downloaded AI app in the country’s largest tech marketplace. Whether that trade-off holds in the long term remains to be seen. But it proved something that had not been tested at this scale before: that there is a real market for AI companies willing to say no.


Bite-Sized Brains
ChatGPT shopping flop: OpenAI is killing its Instant Checkout product after brands saw confusing flows, weak conversions, and proof that AI chat is not ready to replace real e-commerce funnels.
Pichai mega payday: Google approved a performance-heavy stock package that could pay Sundar Pichai up to $692M over three years if Alphabet hits aggressive market cap and share-price targets.
OpenClaw lobster cult: Hundreds packed ClawCon in New York to celebrate the open-source agent platform OpenClaw, treating it as an escape from Big AI even as security researchers warn about malware-filled skills.

The context to prepare for tomorrow, today.
Memorandum merges global headlines, expert commentary, and startup innovations into a single, time-saving digest built for forward-thinking professionals.
Rather than sifting through an endless feed, you get curated content that captures the pulse of the tech world—from Silicon Valley to emerging international hubs. Track upcoming trends, significant funding rounds, and high-level shifts across key sectors, all in one place.
Keep your finger on tomorrow’s possibilities with Memorandum’s concise, impactful coverage.
*This is sponsored content

Prompt Of The Day
Imagine your AI company is offered a massive defense contract that crosses one of your red lines. In 5 sentences, describe the deal and the ethical line it hits, then write the exact sentence you would insist on adding to the contract to protect that boundary.

Tuesday Poll
🗳️ For frontier AI labs, what should the Pentagon line be?
The Toolkit
Together AI: Cloud platform to run, fine-tune, and serve open models with high-performance inference (a quick-start sketch follows this list).
Veed IO: Browser-based editor to record, caption, and quickly polish short-form video.
Writer: Enterprise-grade AI writing platform for brand-safe content, style guides, and workflows.
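Want to kick the tires on Together AI? Here is a minimal sketch using its Python SDK, which mirrors the OpenAI chat-completions interface; the model name and prompt below are illustrative assumptions, and you’ll need a TOGETHER_API_KEY set in your environment:

```python
# pip install together
import os

from together import Together

# The client reads TOGETHER_API_KEY from the environment by default;
# passing it explicitly just makes the assumption visible.
client = Together(api_key=os.environ["TOGETHER_API_KEY"])

# Model name is an assumption -- swap in any chat model your account can access.
response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo",
    messages=[
        {
            "role": "user",
            "content": "Draft one non-negotiable ethics clause for an AI vendor contract.",
        }
    ],
)

print(response.choices[0].message.content)
```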

Rate This Edition
What did you think of today's email?





