The New Military Brain
Plus: DHS Google surveillance, India deep tech, and streaming shakeups.
Here’s what’s on our plate today:
🧪 How AI chatbots are quietly reshaping military decision-making.
🧩 DHS surveillance, India deep tech, and streaming consolidation.
💡 Prompt of the Day: tuning AI for real-world risk.
📊 Poll on how much military AI still needs humans.
Let’s dive in. No floaties needed…

Launch fast. Design beautifully. Build your startup on Framer—free for your first year.
First impressions matter. With Framer, early-stage founders can launch a beautiful, production-ready site in hours. No dev team, no hassle. Join hundreds of YC-backed startups that launched here and never looked back.
Key value props:
One year free: Save $360 with a full year of Framer Pro, free for early-stage startups.
No code, no delays: Launch a polished site in hours, not weeks, without hiring developers.
Built to grow: Scale your site from MVP to full product with CMS, analytics, and AI localization.
Join YC-backed founders: Hundreds of top startups are already building on Framer.
Eligibility: Pre-seed and seed-stage startups, new to Framer.
*This is sponsored content

The Laboratory
How commercial AI is quietly rewiring military decision-making
During the Second World War, Britain’s radar network illustrated how technological superiority in warfare rarely emerges from a single institution. The Chain Home radar system was initiated through government-funded scientific research led by Sir Robert Watson-Watt and the British Air Ministry, which recognized early that detecting enemy aircraft before visual contact could fundamentally change air defence—but translating that breakthrough into a nationwide defensive shield required far more than scientific discovery.
It required the speed, scale, and engineering expertise that only private industry could provide.
Companies such as Marconi, GEC, Metropolitan-Vickers, and Ferranti manufactured radar transmitters, receivers, and associated infrastructure, enabling Britain to deploy a functioning early-warning network before the Battle of Britain began.
The result was not just a new technology but a decisive strategic edge. The episode highlights a recurring pattern in modern warfare: militaries that can quickly integrate cutting-edge research with industrial production capacity often gain the operational upper hand, turning innovation into a battlefield advantage.
Private industry and the new military brain
In 2026, militaries are scrambling to integrate artificial intelligence into their systems to gain an edge over their adversaries. However, as with radar during the Second World War, the task cannot be completed without the speed, scale, and engineering expertise of the private sector.
In the U.S., the Pentagon is seeking to leverage private-sector expertise by engaging and collaborating with commercial AI companies.
Recently, the U.S. Defense Secretary Pete Hegseth announced that Elon Musk’s AI chatbot Grok will join Google’s generative AI engine for use within the Pentagon network. The aim is to feed as much of the military’s data as possible into the developing technology.
However, the announcement that Grok would access Pentagon networks is merely the visible edge of a transformation already underway.
From administrative tool to cognitive infrastructure
Across military installations worldwide, Google’s Gemini now sits on three million desktops, processing intelligence reports, drafting operational plans, and analyzing drone footage.
What began as an administrative convenience, formatting documents and summarizing briefings, is expanding into the cognitive infrastructure of modern warfare.
However, unlike the radar systems that helped defend the British Isles against the Luftwaffe, these AI systems were not developed solely for military purposes. Nor are they exclusive to the military.
Unlike specialized defense systems developed over years of classified research, these are the same models millions use for homework and marketing copy, rapidly adapted for national security.
The implications of this overlap extend beyond mere efficiency gains for the armed forces.
Unlike earlier computer systems that merely sped up decision-making, AI systems can subtly reshape how decisions are framed, which options appear feasible, and which information commands attention. In a conflict, that can blur the line between a human making a decision and an AI system steering it.
Commercial AI and the problem of incentives
The Pentagon’s use of commercial AI systems not only overlooks this but also fails to factor in the incentives that shape them.
Currently, AI companies optimize their models for engagement and revenue, not for resistance to adversary deception or for crisis stability. So, while Grok’s inability to distinguish earthquake depths from seismological placeholder values might mean little on X, in a command center during geopolitical tensions, such errors could lead to catastrophic miscalculation.
And even if AI systems can be retuned for military purposes and their propensity to hallucinate is fixed, the Pentagon’s current push rests on the assumption that decades of fragmented military data can be rapidly cleaned and integrated to feed these systems.
And this assumption has failed before.
Early AI pilots in predictive maintenance and logistics reportedly stalled when algorithms struggled to reconcile inconsistent data standards across the military services. While the use of AI may appear strategically sound on paper, implementation has repeatedly encountered structural barriers.
These challenges typically surface as data fragmentation, network isolation, and, most importantly, the widespread use of proprietary formats maintained by different commercial vendors.
For instance, Marine Corps databases categorize equipment differently from Army systems, while classified networks often remain segregated from unclassified ones. Such fragmentation can slow decision-making in rapidly evolving operational environments while expanding the attack surface that adversaries can exploit.
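To make that fragmentation concrete, here is a minimal sketch in Python, with entirely invented field names, category codes, and stock numbers, of the hand-maintained crosswalk needed before two services’ records even become comparable. Real systems add proprietary formats and classification boundaries on top of this.

```python
# Hypothetical illustration: two services describe the same piece of
# equipment with different field names and category codes, so any
# shared AI pipeline first needs an explicit crosswalk between schemas.

marine_record = {"item_cat": "MTV-7", "nsn": "1234-00-000-0001", "status": "FMC"}
army_record = {"category_code": "TRK-LT", "stock_number": "1234000000001", "readiness": "GREEN"}

# Crosswalk tables maintained by hand -- the fragile part in practice.
CATEGORY_MAP = {"MTV-7": "light_truck", "TRK-LT": "light_truck"}
STATUS_MAP = {"FMC": "ready", "GREEN": "ready"}

def normalize_marine(rec: dict) -> dict:
    """Map a (hypothetical) Marine Corps record into a common schema."""
    return {
        "category": CATEGORY_MAP[rec["item_cat"]],
        "stock_number": rec["nsn"].replace("-", ""),
        "status": STATUS_MAP[rec["status"]],
    }

def normalize_army(rec: dict) -> dict:
    """Map a (hypothetical) Army record into the same common schema."""
    return {
        "category": CATEGORY_MAP[rec["category_code"]],
        "stock_number": rec["stock_number"],
        "status": STATUS_MAP[rec["readiness"]],
    }

# Only after normalization do the two records become comparable.
assert normalize_marine(marine_record) == normalize_army(army_record)
```

Every new data source means another crosswalk to write and maintain, which is one reason enterprise-wide integration keeps proving harder than it looks on paper.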
Lt. Gen. Jack Shanahan, who led the Pentagon’s first Joint AI Center, later acknowledged that the scale and complexity of enterprise-wide data reform had been significantly underestimated.
The problem is not merely technical. Data governance involves turf battles between services, commands, and agencies that control information flows. When decision-makers direct that any DoD data be accessible to cleared users for a valid purpose, they are ordering a cultural transformation, not a software update.
Commercial AI companies excel at processing vast datasets, but military information presents unique challenges. Intelligence sources must remain compartmentalized. Operational security requires limiting system access. Even unclassified data on troop movements or maintenance schedules becomes sensitive when aggregated.
Additionally, the approach of feeding everything into commercial AI models for exploitation creates new attack surfaces. If adversaries compromise these systems, they gain access not only to current operations but also to patterns that reveal future intentions.
Another aspect often overlooked is the influence of AI-managed data on decision-making.
Automation, bias, and the erosion of human control
The Pentagon insists AI serves only advisory roles, with humans retaining ultimate authority. This framing comforts ethicists and reassures allies, but it obscures how automation actually changes behavior.
Research on automation bias shows that personnel trust algorithmic recommendations even when contradictory evidence is available, a phenomenon observed as far back as the 1988 USS Vincennes shootdown.
Project Maven, the AI targeting system that sparked Google’s 2018 employee revolt, demonstrates the slippage. Initially framed as decision support for drone analysts, Maven evolved into near-autonomous targeting.
A senior targeting officer reports processing 80 targets per hour with Maven, versus 30 without. That speed gain doesn’t just assist human judgment; it fundamentally alters what scale of operations becomes possible.
This efficiency creates pressure to expand AI’s role. When systems process targets faster than humans can review them, the human-in-the-loop requirement becomes a bottleneck.
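A rough back-of-the-envelope calculation using the reported figures, and assuming a single reviewer with no other duties, shows how quickly the review budget per target shrinks:

```python
# Back-of-the-envelope sketch using the reported Maven throughput
# figures; assumes one reviewer working continuously, purely illustrative.

targets_per_hour_without = 30
targets_per_hour_with = 80

minutes_per_target_without = 60 / targets_per_hour_without  # 2.0 minutes
minutes_per_target_with = 60 / targets_per_hour_with        # 0.75 minutes (45 seconds)

print(f"Review time per target without Maven: {minutes_per_target_without:.2f} min")
print(f"Review time per target with Maven:    {minutes_per_target_with:.2f} min")

# If meaningful human review takes longer than ~45 seconds per target,
# the reviewer either slows the system back down or becomes a rubber stamp.
```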
Commanders thus face incentives to increase automation, compounded by the National Geospatial-Intelligence Agency's stated goal of rapidly achieving full operational capability. The question is no longer how AI is being used, but what role human oversight can still play once intelligence arrives pre-analyzed, with recommended actions attached.
Even with limits on how the military uses AI, it must be remembered that the technology has yet to prove its reliability.
The risks of unconstrained AI in defence systems
Grok’s integration into military infrastructure introduces variables absent from Google, Anthropic, or OpenAI deployments. Those companies, despite past controversies, maintain safety teams and implement guardrails in response to public pressure.
Musk explicitly rejects constraints that other AI firms have adopted, marketing Grok as unfiltered and anti-woke. The promise of an AI without ideological constraints may appeal to defense leadership, but it raises the question of where ideology ends and safety begins.
Especially given Grok’s track record: it has generated Holocaust denial content, called itself “MechaHitler”, and created sexualized images of minors, prompting Indonesia and Malaysia to ban the service despite regulations that bar such behavior.
If Grok is designed to break with ideological and social conventions, what is to say it won’t treat military rules and regulations the same way?
Beyond behavioral risks, reliance on commercial AI also raises concerns about vendor lock-in.
Lock-in, accountability, and strategic risk
Once the Pentagon starts feeding operational data into an AI system, dependency sets in.
In a September 2025 letter, Senator Elizabeth Warren asked questions that the Department of Defense has not yet clearly answered: Can xAI reuse Pentagon data to train its commercial models? Could servicemember information shape Grok’s public versions? And what prevents insights gained within the military from leaking to other clients?
This isn’t abstract; cloud computing offers ample precedent. Once systems are built around a provider, switching away is expensive, disruptive, and sometimes impossible.
With AI, the trap runs deeper: models shape workflows, which in turn shape habits, which then shape judgment. If Grok becomes the lens through which analysts see intelligence, replacing it later means retraining not just tools, but minds.
Most troubling, however, is the absence of clear responsibility structures. When commercial AI makes errors in civilian contexts, the consequences are annoying: bad recommendations, incorrect search results, offensive chatbot responses. When military AI makes errors, the consequences can be lethal. Yet accountability mechanisms remain undefined.
The changing face of protection
The story of Britain’s radar network is often remembered as a triumph of technological foresight, but its deeper lesson lies in balance. Radar succeeded not simply because it was advanced, but because it was built within structures that aligned scientific innovation, industrial capacity, and strategic control under clear military oversight. Technology amplified human judgment rather than quietly reshaping it.
Today’s race to embed AI into military systems echoes that earlier scramble for technological advantage, but with far greater complexity and risk. Unlike radar, AI is not a single defensive shield. It is an adaptive, opaque system shaped by commercial incentives, vast data flows, and evolving algorithms that even its creators struggle to fully understand.
As militaries integrate these tools, the question is not whether AI can provide an operational edge, but whether armed forces can maintain strategic control over technologies that increasingly influence how wars are planned, fought, and ultimately decided.


Bite-Sized Brains
DHS Data Grab: A TechCrunch investigation reports that the Department of Homeland Security is trying to force Google and other tech firms to hand over data on people critical of Trump, raising fresh alarms about political surveillance and free speech.
India Deep Tech: India just doubled the startup window for deep-tech firms to 20 years and tripled the revenue cap for benefits, aiming to provide longer policy support and access to a new ₹1T RDI fund for space, semiconductor, and biotech startups.
Streaming Shake-Up: A proposed Netflix–Warner Bros. mega-merger would create a dominant bundle and force rival streamers to either merge, sell, or accept being niche players in a suddenly more consolidated market.

The AI Talent Bottleneck Ends Here
If you're building applied AI, the hard part is rarely the first prototype. You need engineers who can design and deploy models that hold up in production, then keep improving them once they're live.
Deep learning and LLM expertise
Production deployment experience
40–60% cost savings
This is the kind of talent you get with Athyna Intelligence—vetted LATAM PhDs and Masters working in U.S.-aligned time zones.
*This is sponsored content

Prompt Of The Day
Take 5 minutes to list where you already let AI pre-filter information for you (search, email, docs). For one of those, design a simple ‘human friction’ step before you act on its suggestions.
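If you want to make that friction step concrete in your own tooling, here is one minimal, hypothetical sketch in Python of a confirmation gate that demands a short written justification before an AI suggestion is acted on; the suggestion text and length threshold are placeholders, not a recommendation of any particular tool.

```python
# Hypothetical "human friction" gate: the user must type a short
# justification before an AI suggestion is applied, forcing a moment
# of deliberate review instead of a reflexive click-through.

def confirm_with_friction(suggestion: str, min_chars: int = 15) -> bool:
    """Return True only if the user writes a non-trivial justification."""
    print(f"AI suggestion: {suggestion}")
    reason = input("In one sentence, why do you agree with this? ").strip()
    if len(reason) < min_chars:
        print("Justification too short; suggestion not applied.")
        return False
    print("Suggestion applied.")
    return True

if __name__ == "__main__":
    confirm_with_friction("Archive all emails from this sender as low priority.")
```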

Tuesday Poll
🗳️ How should militaries use commercial AI systems like Gemini or Grok?
Rate This Edition
What did you think of today's email?





