Outsourcing The State’s Brain
Plus: Robots are doing the dishes, Google's $400B moment, and Super Bowl drama.
Here’s what’s on our plate today:
🇬🇧 The Laboratory: UK’s Meta-backed AI powering public services.
📰 Robots, Google’s $400B, and Altman–Claude drama.
📊 Friday Poll: Would you trust Meta-backed AI in government?
🧰 Weekend To-Do: Test civic AI tools, open-source Llama demos.
Let’s dive in. No floaties needed…

Launch fast. Design beautifully. Build your startup on Framer—free for your first year.
First impressions matter. With Framer, early-stage founders can launch a beautiful, production-ready site in hours. No dev team, no hassle. Join hundreds of YC-backed startups that launched here and never looked back.
Key value props:
One year free: Save $360 with a full year of Framer Pro, free for early-stage startups.
No code, no delays: Launch a polished site in hours, not weeks, without hiring developers.
Built to grow: Scale your site from MVP to full product with CMS, analytics, and AI localization.
Join YC-backed founders: Hundreds of top startups are already building on Framer.
Eligibility: Pre-seed and seed-stage startups, new to Framer.
*This is sponsored content

The Laboratory
Decoding the UK’s Meta-backed AI initiative for public services
In 2026, modern democratic institutions face an uphill task. On one hand, the global rules-based order is being challenged by shifting national priorities and the unravelling of long-standing alliances; on the other, artificial intelligence is forcing governments to simultaneously regulate and deploy technologies that some argue threaten the foundations of human society.
While there is no consensus on the optimal way to regulate the technology, governments in both the EU and the U.S. have embraced it to improve public services. With the two major decision-makers having staked out their positions, the UK, caught between them, has opted for the middle ground.
The British government recently announced it had recruited a team of AI specialists to develop tools to improve transport, public safety, and defense, with funding from Meta.
The partnership deals
According to a Reuters report, the government said the AI experts would spend the next year developing open-source tools to improve how authorities maintain roads and transport networks, manage public safety, and make national security decisions.
The team includes a data scientist from the Alan Turing Institute (ATI) and university researchers whose expertise spans computer vision, applied machine learning for the public sector, robotics-driven imaging, and the design of trustworthy, safety-critical AI systems.
To address concerns about open-source tools being developed with the backing of an American corporation, the government’s press release emphasized a reassuring detail: tools built with Meta’s Llama models would be owned by the government, allowing departments to keep sensitive data in-house.
Notably, the announcement comes months after Meta funded The Open Source AI Fellowship, an initiative run under the Alan Turing Institute that places AI experts in departments across government to help solve big challenges openly and in the public interest.
The language of the government’s release appears carefully calibrated to address the central anxiety haunting public-sector AI adoption: whether we are solving problems or outsourcing sovereignty.
The sovereignty question
The answer is more complicated than either the government’s optimism or its critics’ alarm suggests. The $1M fellowship, which brings AI specialists from the Alan Turing Institute and universities into government to build open-source tools for transport, public safety, and national security, represents an attempt to bridge a gap.
On one side lies the expensive trap of proprietary vendor lock-in, where governments pay escalating fees for closed systems they can’t modify or escape. On the other lies the fantasy of building everything in-house, which the UK’s chronic digital skills shortage and legacy IT crisis make impossible.
Open-source AI offers a theoretical middle path. Unlike commercial APIs, where every query incurs a fee and model updates occur at the vendor’s discretion, Llama’s architecture allows the government to deploy models on its own infrastructure, customise them for specific needs, and avoid sending sensitive data to Meta’s servers. But this assumes a level of technical sophistication the UK public sector demonstrably lacks.
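To make the self-hosting argument concrete, here is a minimal sketch of what running an open-weight Llama model on government-owned hardware could look like, using Hugging Face’s transformers library. The model ID, prompt, and hardware setup are illustrative assumptions, not details of the fellowship’s actual stack; the point is simply that inference runs locally, so queries never leave the machine.

```python
# Minimal sketch: self-hosted inference with an open-weight Llama model.
# Assumptions: the meta-llama/Llama-3.1-8B-Instruct checkpoint (a gated repo,
# so a Hugging Face access token is required) and a GPU with enough memory.
# Nothing here reflects the fellowship's actual stack; it only illustrates
# that inference runs entirely on local hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative model choice

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # halves memory use vs. fp32
    device_map="auto",           # spread layers across available GPUs/CPU
)

# A hypothetical public-sector query, processed entirely in-house.
messages = [
    {"role": "user", "content": "Summarise the key risks in this road-maintenance report: ..."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Contrast this with a commercial API, where the same request would be a metered network call to a vendor’s servers, priced per token and updated on the vendor’s schedule.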
Infrastructure reality check
Consider the baseline reality: approximately 28% of central government systems are classified as end-of-life legacy systems, i.e., digital infrastructure that requires updating or replacement. A third of the 72 highest-risk legacy systems still have no funding for remediation. Half of civil service digital job postings went unfilled in 2024. This is the foundation on which the government proposes to mainstream AI nationwide.
Beyond ageing infrastructure, the UK must also contend with the fact that Meta is a for-profit company, accountable to its shareholders rather than to citizens.
Meta’s strategic play
Meta’s intentions here are easier to read than the government’s ability to execute. By backing the fellowship and nudging Llama into the role of default public-sector AI, Meta isn’t chasing quick wins. It’s playing the long game.
If UK government teams learn and build on Llama, Meta shapes the skills, assumptions, and ecosystem of public-sector AI, even if the government owns the tools.
Government adoption also delivers unmatched validation: being used by the UK government lends Meta a credibility no marketing campaign could buy. Geopolitically, positioning Llama as democratic AI infrastructure would put Meta at the table in future regulatory discussions rather than on the receiving end of them.
Meta’s move, then, isn’t philanthropy; it’s influence, accumulated patiently.
For Meta, however, the task will not be easy. Catering to a private organization is one thing; winning the trust of a nation, and of the successive governments its citizens elect, is quite another.
The task is further complicated by concerns about data security, AI sovereignty, and Meta’s checkered past.
The trust deficit
Research from the Ada Lovelace Institute shows that most people in the UK are deeply uneasy about how their data is used in the public sector, and that a large majority worry the government will put big tech’s interests ahead of the public when it comes to regulating AI.
That anxiety isn’t just theoretical. Investigations by Privacy International and No Tech For Tyrants document how the UK government’s partnership with the data analytics firm Palantir has expanded across multiple departments, including the NHS and the security services, while the details of those contracts remain opaque.
Palantir’s work with NHS patient data has drawn additional scrutiny and criticism from medical professionals and civil liberties groups. Campaigners and the British Medical Association have warned that handing sensitive health information to a private firm with ties to intelligence and surveillance applications could erode public trust.
For Meta, winning over a population that is already unhappy about the involvement of foreign AI companies in public services will be an uphill task.
Then there is Meta’s own baggage, from Cambridge Analytica to persistent data privacy concerns; the company doesn’t exactly have a squeaky-clean record. Against that backdrop, the questions may shift away from assurances about open-source models and in-house data, and toward whether Meta’s involvement in public services ultimately benefits citizens or the corporation.
There are also concerns that the benefits of AI in the public sector may be overstated, and that the project’s experimental phase may be too short to surface the technology’s real risks and benefits.
The 12-month fellowship aims to apply AI to complex public services such as transport safety and national security, systems shaped by decades of policy, fragmented data, and legal constraints.
Past AI failures
Research shows that AI can assist public-sector work, but high-stakes applications, such as welfare eligibility, demand caution. Past failures in the Netherlands, Spain, and Italy, most notoriously the Dutch childcare benefits scandal, demonstrate the harm misapplied algorithms can cause.
The real challenge is not the technology; it is whether institutions can deploy it responsibly, with good data, oversight, and accountability.
For the UK and Meta, the challenge lies not only in ensuring both parties achieve their goals, but also in making the results measurable and tangible enough to replicate in the future.
The visibility problem
While Prime Minister Keir Starmer is betting that AI can boost productivity and improve public services, the challenge is that AI successes in government are largely invisible, while failures attract attention. A system that optimizes traffic or schedules maintenance quietly improves infrastructure, but one that wrongly denies welfare or flags an innocent person creates headlines and legal risks.
The future of AI in the public sector, then, rests not on its ability to complete tasks and boost productivity, but on whether citizens perceive the improvements, and on whether governments can avoid further straining an already fragile relationship with the public.
What’s at stake
The UK’s experiment with AI in public services sits at the center of this shift. By working with open-source models backed by a global technology company, the government aims to leverage the benefits of AI without sacrificing control. Whether that balance holds will depend less on model sophistication and more on the strength of public institutions, the quality of oversight, and citizens’ trust.
Democracies have survived past technological upheavals by bending without breaking. Artificial intelligence will test that resilience again. The real question is not whether governments will use AI; they already are. It is whether citizens will still be able to see, question, and challenge the systems that increasingly shape their lives.
TL;DR
Meta-funded UK AI fellows will build open-source tools for transport, public safety, and defense using government-owned Llama models.
The UK is trying to thread the needle between vendor lock-in and building everything in-house, but it is doing so on top of decaying legacy systems and a severe digital talent shortage.
For Meta, backing Llama in government is a long-game influence play, shaping skills, standards, and public-sector AI ecosystems under the banner of ‘open’ while battling a massive trust deficit.
The real test isn’t model performance; it’s whether citizens believe AI in public services serves them, not big tech—and whether invisible small wins can outweigh highly visible failures.


Friday Poll
🗳️ How should governments approach Big Tech–backed AI in public services?

The context to prepare for tomorrow, today.
Memorandum combines global headlines, expert commentary, and startup innovations into a single, time-saving digest for forward-thinking professionals.
Rather than sifting through an endless feed, you get curated content that captures the pulse of the tech world—from Silicon Valley to emerging international hubs. Track upcoming trends, major funding rounds, and high-level shifts across key sectors—all in one place.
Keep your finger on tomorrow’s possibilities with Memorandum’s concise, impactful coverage.
*This is sponsored content

Headlines You Actually Need
Robot Dishwashers: Figure’s Helix 02 humanoid robot completes a four-minute, fully autonomous unload-and-reload of a dishwasher, showcasing long-horizon home-task automation (with plenty of motion-captured human flair).
Google’s $400B Moment: Alphabet’s annual revenue tops $400B for the first time, driven by a $70B cloud run rate, $60B+ from YouTube, and 750M Gemini users as Google leans harder into AI-powered search and checkout agents.
Altman vs. Claude: Anthropic’s Claude Super Bowl ads poking fun at ChatGPT’s new ad tier prompted Sam Altman to fire off a public rant accusing Anthropic of dishonesty and “authoritarian control”, even as both companies jockey for the “responsible AI” narrative.

Weekend To-Do
Trace your public data trail: Pick one public service you use (health, transport, benefits) and map where your data actually flows: which agencies, which vendors, which cloud. If you can’t find that in 10–15 minutes of searching, that’s your signal about transparency.
Run an ‘AI in my city’ audit: Spend 20 minutes checking your local council, city, or national government sites for mentions of “AI,” “algorithm,” or “automated decision-making.” Note where it’s used (policing, welfare, transport) and whether any impact assessments or appeal routes are published.
Kick the tires on open civic tools: Try one real civic-tech tool this weekend (e.g., a reporting app like FixMyStreet / SeeClickFix, or an open-data portal in your country). Pay attention to how usable it is and what’s missing—that gap is exactly where ‘AI for public good’ either becomes real or stays rhetoric.
Rate This Edition
What did you think of today's email?




