- Roko's Basilisk
Europe's AI Rulebook Cracks
Plus: DualShot Recorder rises, Meta's footage flap, Uber sells its driver data.
Here’s what’s on our plate today:
🧪 Europe's AI Act is under pressure before it kicks in.
📰 DualShot's quiet hit, Meta's smart glasses problem, Uber's sensor grid.
💬 Map your AI use cases against the EU AI Act before August 2.
🗳️ Poll: Hold the line, delay, carve out, or scrap the AI Act?
Let’s dive in. No floaties needed.

Goodies delivered straight into your inbox.
Get the chance to peek inside founders and leaders’ brains and see how they think about going from zero to 1 and beyond.
Join thousands of weekly readers at Google, OpenAI, Stripe, TikTok, Sequoia, and more.
Check all the tools and more here, and outperform the competition.
*This is sponsored content

The Laboratory
TL;DR
Deadline meets deadlock: The EU AI Act’s high-risk rules kick in on August 2, but a 12-hour negotiation session on April 28 collapsed, with the next round expected around May 13.
Germany wants carve-outs: Chancellor Merz and Siemens want industrial AI exempted, but 10 member states are blocking the push in the Council.
Convenient competitiveness framing: U.S. AI investment hit $285.9B, while Europe’s was $20.9B, but that gap predates the AI Act, and Big Tech spends €151M/year lobbying Brussels to weaken it.
Civil society is pushing back: Over 40 groups warn that the proposed Omnibus guts protections around biometric ID, school AI, and medical systems.
Global stakes: The EU’s regulatory credibility has shaped AI frameworks in Japan, Canada, and Singapore. If simplification becomes deregulation, Europe loses the one advantage that doesn’t require matching America’s checkbook.
Europe’s AI Act is under pressure before it even takes effect
The stretch of time between when a technology is released to the masses and when its real impact begins to surface across economic, political, and social structures is rarely smooth. More often than not, this time is defined by friction between those building the future and those trying to regulate it.
That tension has appeared again and again through the history of human civilization, and artificial intelligence is simply the latest chapter. Ever since AI was placed in the hands of the public, anxieties have grown around what it could do to the systems already in place. Jobs, institutions, information flows, security, and power itself now sit inside the conversation. Those concerns have become the foundation for the first generation of AI laws and regulatory frameworks.
Yet just as much of the world is still trying to cut through the fog around what AI actually is and what it is truly capable of doing, regulators are grappling with the same uncertainty. They are being asked to make rules for a moving target, in a world where a heavy hand could choke innovation, while a lenient one could release forces that may become far harder to control once they are fully in motion.
The law that was supposed to lead the way
In August 2024, the European Union’s AI Act entered into force, making it the world’s first comprehensive legal framework for artificial intelligence. The act was built around a simple organizing principle: the riskier the AI system, the stricter the rules. Certain uses, such as government social scoring, were banned outright, while others, including hiring, healthcare, law enforcement, and education, were classified as high risk and placed under strict compliance requirements. The law was designed to phase in gradually, giving companies time to adapt, and the most consequential rules, those applying to high-risk systems, were set to take effect on August 2, 2026.
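The risk-tier principle can be pictured as a simple triage. This is a toy sketch only: the category lists below are illustrative examples pulled from this article, not a complete or authoritative reading of the Act, and the function name and tier labels are our own invention.

```python
# Illustrative sketch of the AI Act's tiered logic: the riskier the
# system, the stricter the rules. Categories here are simplified
# examples, not the Act's actual annexes.

PROHIBITED = {"government social scoring"}          # banned outright
HIGH_RISK = {"hiring", "healthcare", "law enforcement", "education"}

def risk_tier(use_case: str) -> str:
    """Return a coarse risk tier for an AI use case."""
    case = use_case.strip().lower()
    if case in PROHIBITED:
        return "prohibited"
    if case in HIGH_RISK:
        return "high-risk: strict compliance requirements"
    return "lower-risk: lighter or no obligations"

for case in ["hiring", "government social scoring", "spam filtering"]:
    print(f"{case} -> {risk_tier(case)}")
```

The point of the sketch is the ordering: prohibitions are checked first, then high-risk classification, and everything else falls through to lighter treatment, which mirrors how the Act layers obligations by risk rather than by sector.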
That deadline is now three months away, and rather than preparing for it, much of the political energy in Brussels is being spent trying to delay it.
What changed
In November 2025, the European Commission proposed changes called the Digital Omnibus on AI. The reasoning was straightforward: European companies were struggling to compete with U.S. and Asian rivals, and the rules were adding to the burden. The proposed fix was to delay the high-risk compliance deadline to December 2027 for stand-alone AI systems and to August 2028 for AI built into regulated products such as medical devices and industrial machinery.
Both the EU Council (which represents the governments of member states) and the European Parliament (the elected legislative body) agreed that a delay made sense. Where things fell apart was over a deeper question: just how many exemptions should the law allow?
That disagreement came to a head on April 28, 2026, when negotiators from the Parliament, the Council, and the Commission sat down in Brussels for a formal negotiation session. The talks ran roughly 12 hours, stretching past two a.m. on April 29, and ended without an agreement. “It was not possible to reach an agreement with the European Parliament,” a Cypriot official told Reuters.
According to IAPP reporting, the next round of talks is expected around May 13. And if a deal is not reached before August 2, the original rules kick in unchanged.
The argument at the center
To understand the fight, it helps to think about how regulation works in the EU. Europe already has safety rules for medical devices, heavy machinery, toys, and connected cars; these rules existed before the AI Act. The AI Act was designed as a ‘horizontal’ law, meaning it applies across all sectors, in addition to any existing rules.
The question now is whether that layering makes sense. If a medical device already meets strict EU safety requirements, should it also undergo a separate AI-specific compliance process? Industry groups say the overlap creates extra costs without making the product meaningfully safer, and Germany has been the loudest voice advocating this position.
At the Hannover Messe industrial fair on April 19, Chancellor Friedrich Merz said he would push to “exempt industrial AI from the current regulatory straightjacket” in the EU. Siemens CEO Roland Busch separately told Bloomberg that the company would direct most of its €1B in industrial AI investment to the U.S., citing regulatory burdens.
Not everyone in Europe agrees. A group of 10 member states, including Austria, Denmark, the Netherlands, and Spain, formally opposed Germany’s push. That coalition is large enough to block the proposal from passing in the Council.
The money argument, and the parts it skips over
Germany’s efforts to soften the AI Act point to a bigger issue: Europe is struggling to keep up with other regions in AI investment. The 2026 Stanford AI Index Report found that U.S. private AI investment reached $285.9B in 2025, while Europe attracted just $20.9B.
These numbers have become a rallying cry for companies pushing to delay the regulations.
In July 2025, executives from 46 major European companies, including Airbus, ASML, and Mercedes-Benz, signed an open letter calling on the Commission to pause the AI Act’s key obligations for two years. The Commission rejected the request, with spokesperson Thomas Regnier saying there would be “no stop the clock, no grace period, no pause.” Yet just four months later, the Omnibus delivered much of what the industry had asked for, only packaged differently.
The trouble with the competitiveness argument is that Europe’s investment deficit has existed for years, long before the AI Act was even drafted. The deeper roots lie in structural underinvestment in venture capital and digital infrastructure, issues that stretch back decades. A regulation that has not yet been fully enforced is an unlikely explanation for a gap that was already enormous before the law existed.
There is also the question of who has had the most influence over the ‘simplification’ agenda. A joint analysis by Corporate Europe Observatory and LobbyControl found that the digital industry now spends €151M per year lobbying the EU, with Big Tech representatives averaging three meetings per day with senior Commission officials in the first half of 2025. A Tech Policy Press analysis found that 69% of those meetings were with business groups, and only 16% with NGOs. That does not mean the competitiveness concerns are invented, but that the conversation about what ‘simplification’ should look like has been shaped disproportionately by those who stand to benefit most from weaker rules.
What gets lost in the simplification
On the other side of this debate are the people and organizations who pushed for the AI Act in the first place.
In mid-April, more than 40 civil society groups, including Access Now, Amnesty International, and AlgorithmWatch, signed an open letter arguing that the Omnibus weakens protections around biometric identification, AI in schools, and medical AI. An earlier letter from over 133 organizations had urged the Commission not to reopen the law at all.
Even one of the Parliament’s own lead negotiators, Michael McNamara, acknowledged that shifting AI governance into sector-specific laws could end up being ‘deregulatory rather than simplifying.’ Moving obligations out of a single unified framework and into a dozen sectoral regimes does not make the rules simpler. It spreads them across more places, making them harder to enforce.
There is also a timing problem because the Omnibus proposes to delay rules while keeping the law non-retroactive. Bram Vranken of Corporate Europe Observatory warned that companies could rush risky AI systems to market before the rules apply, avoiding compliance costs entirely.
Where this goes from here
The negotiating window is closing quickly, as the Omnibus must take legal effect before August 2. The EU needs a political agreement, a Parliament vote, Council endorsement, and publication in the Official Journal, all within weeks.
Even if the delay passes, there is a separate practical problem: the technical standards that companies need in order to demonstrate compliance are still being drafted. The body responsible, CEN-CENELEC’s Joint Technical Committee 21, is not expected to have the full set ready before December 2026. Even a delayed deadline may arrive before companies have a clear picture of what compliance actually looks like.
The most likely outcome, if any deal is reached, is a narrow compromise: limited exemptions for areas already covered by existing safety rules, while keeping the AI Act in place for medical devices and higher-risk categories.
Beyond the immediate deadline politics, there is a broader question. The AI Act was designed to serve as a model for the rest of the world, and countries such as Japan, South Korea, Singapore, and Canada have drawn on its structure. The Stanford AI Index found that globally, the EU is trusted more than the U.S. or China to regulate AI effectively. That regulatory credibility is a form of competitive advantage, harder to measure than investment flows but no less real. Whether it survives depends on what ‘simplification’ of the EU’s AI Act ultimately means.


Roko’s Prompt Of The Day
Act as an AI compliance strategist for a European company. Map which of my AI use cases fall under the EU AI Act’s high-risk categories, and tell me what to do before August 2 if the Omnibus delay fails.

Hire smarter with Athyna, save up to 70% on salary costs.
Athyna connects you with top LATAM AI talent, fast
Meet vetted professionals in as little as five days, without long, expensive recruiting cycles.
Save up to 70% on salary costs when hiring AI engineers, product leaders, and data scientists.
Get AI-assisted matching plus human vetting, so your shortlist is tight, and your interviews are worth it.
*This is sponsored content

Brain-sized Bites
Meta's smart glasses problem: Meta is responding to reports of naked footage being captured by its smart glasses.
Uber's sensor grid: Uber wants to turn its driver fleet into a sensor grid for self-driving car companies.
DualShot's quiet hit: Creator Derrick Downey Jr. built DualShot Recorder, an iPhone camera app that shoots front and back simultaneously, and it's quietly becoming a creator favorite.

Tuesday Poll
🗳️ Europe is rewriting its AI rulebook before it even kicks in. What's the right move? Hold the line, delay, carve out, or scrap the AI Act?
The Toolkit
Deepgram: Speech-to-text API built for scale, handling real-time transcription and voice intelligence for production apps.
Descript: AI-powered audio and video editor that lets you edit recordings by editing the transcript like a doc.
Drift.ai: AI-powered conversational marketing platform that turns website visitors into qualified pipeline through automated chat.

Rate This Edition
What did you think of today's email?





