When Innovation Meets Regulation

Plus: Anthropic’s big raise, OpenAI’s next move, and 3 tools to try.

Here’s what’s on our plate today:

  • 🧭 Why the U.S. is pushing back on Europe’s AI laws.

  • 💸 Anthropic’s valuation soars, Apple loses exec, OpenAI expands products.

  • 🧪 3 tools to try: Multi-AI hub, prompt-built sites, and Replit’s new agent.

  • 🗳️ Plus: Should global laws shape the future of tech? Vote in our Thursday Poll.

Let’s dive in. No floaties needed…

Your AI-powered, Slack-connected finance team.

You raised the money. You’re building the thing. But now the spending’s piling up, and your “finance dept” is basically a spreadsheet and a prayer.

Afino is your AI-powered, Slack-connected finance team—offering bookkeeping, tax prep, R&D credits, and fractional CFO support, all tailored for startup speed.

If you’re a founder trying to actually get your finances in order, this one’s for you.

We've partnered with Afino to give one year of corporate taxes for FREE to the first 5 companies that claim this offer. Book a call today!

*This is sponsored content

The Laboratory

Why the U.S. is pushing back on EU and UK tech rules

Technology is a double-edged sword. On the one hand, it has the potential to break down prejudices through innovations in information technology; on the other, it fuels the darkest parts of human behavior, with child sexual abuse material on the dark web being a prime example.

With every advancement in technology, lawmakers are tasked with understanding its impact and establishing appropriate guardrails to minimize the associated risks. It is a lengthy process and requires collaboration and cooperation, not just within different arms of governance, but also between nations.

When a technology as important as artificial intelligence comes into the picture, lawmakers must understand its impact and set guardrails to minimize risks without hindering progress. Real success lies in striking a balance: regulations that protect while at the same time encouraging unhindered development.

Recently, the U.S. Federal Trade Commission, in a rare move, warned Apple, Google’s parent company Alphabet, and Meta that efforts to comply with British and European digital content laws could violate U.S. law if they weaken privacy and data security protections for American users.

The warning is a reminder that laws enacted in one part of the world may not align with the values of another. And since big tech companies operate on a global scale, they have to navigate different laws and regulations, which is easier said than done.

The EU was one of the first regions to come up with a comprehensive law on the development and use of platforms that impact ordinary citizens. However, its implementation comes at a time when the EU’s closest ally, the U.S., is trying to hold on to its technological advantage, and countries like China are catching up. Against this backdrop, EU limits on U.S. tech giants could slow U.S. progress and derail efforts to fend off China’s advances.

Inside the FTC’s warning

The FTC’s warning came in the shape of a letter addressed to 13 of the biggest U.S.-based tech companies, including Apple, Alphabet, Meta, Microsoft, and Amazon.

These companies represent the might of the U.S.’s tech dominance.

In the letter, FTC Chair Andrew Ferguson cited the EU Digital Services Act (DSA), the UK Online Safety Act, and the UK Investigatory Powers Act as attempts by foreign powers to impose censorship and weaken end-to-end encryption. Ferguson said these laws would erode Americans’ freedoms, and that complying with requests to censor expression could violate sections of the Federal Trade Commission Act that prohibit unfair or deceptive acts.

Ferguson’s warning stems from the idea that U.S. companies might adopt uniform global standards to simplify compliance.

The warning from the FTC aligns with broader U.S. political concerns about EU regulations. Some U.S. lawmakers view the DSA (and its companion, the Digital Markets Act or DMA) as heavy-handed measures that disproportionately target U.S. tech giants.

Washington’s resistance to EU rules

Even before the current administration under Donald Trump took over, conservative senators did not have a favorable view of the European Commission’s laws to rein in the power of big tech corporations.

In 2023, Republican Senator Ted Cruz criticized the then-FTC Chairwoman Lina Khan and called for details of the FTC’s work with its European counterparts to discuss the new rules around big tech.

So, while the EU and UK have been working on laws to limit the power of big tech companies, the U.S., at least for the most part, does not want a foreign power to dictate laws that could impact the functioning of tech giants within its own borders.

Why is the U.S. against the application of EU and UK laws within its territory?

The EU’s Digital Services Act, enforced since August 2023 for very large platforms, mandates stricter oversight of online services. The law covers transparency, illegal content, risk assessments, and content moderation.

The EU’s AI Act, meanwhile, bans or restricts certain AI applications deemed “unacceptable” (e.g., untargeted biometric surveillance and certain social scoring), and imposes strict obligations on high-risk systems (procedural safeguards, human oversight).

Though the Digital Services Act (DSA) does not explicitly require providers to break end-to-end encryption, its obligations (risk mitigation, detection of illegal content, researcher access, orders to provide information) create substantial practical pressure to implement scanning and monitoring that can be inconsistent with strong encryption, especially when combined with EU and UK policing laws and investigative requests. The EU argues these measures are neutral and aim to protect users and digital markets.

Critics of the law in the U.S. fear it could be used to suppress political expression, enable overreach via ‘trusted flaggers’, and foster what’s known as the Brussels Effect, where global companies adopt EU standards outside Europe.

The timing of these laws is also crucial. The EU and UK laws come into force at a time when the U.S. is working to ensure its dominance in the AI race. If tech companies start adhering to EU regulations that require model-level transparency or impose operational limits (e.g., disallowing certain uses or requiring safety testing before deployment), progress could slow. Additionally, forced disclosure of training data or model internals could raise privacy and security concerns, slowing the training of newer models.

The Brussels Effect: Europe’s rules go global

In response to stricter regulations, companies like Google, Microsoft, and Meta have reworked their risk and transparency frameworks. Google publishes an annual transparency report, while Microsoft says it has taken a proactive, layered approach to compliance with new regulatory requirements, including the European Union’s AI Act, with pre-deployment reviews, red teaming, and documentation across the AI supply chain. Meta has set risk thresholds, performs threat modeling, explicitly ties its framework to model-release decisions, and invites community evaluation, all of which is consistent with regulations that expect tech companies to be more transparent about their AI models.

Tech companies have also started committing to labeling AI-generated content. Meta is using industry-standard indicators and user disclosures for images across Facebook and Instagram. Google, meanwhile, says it’s deploying “provenance technology” in products and publishing model information (e.g., model cards) as part of its risk-mitigation and transparency regime.

This, in turn, is slowing or delaying some model-training plans, pushing firms to build opt-outs, clarify legal bases, and sequence launches differently in Europe.

All these practices are effects of tech companies looking to comply with the EU’s regulations on AI, ranging from the GDPR and the Digital Services Act (DSA) to the recent AI Act (AIA).

Balancing innovation with oversight

The U.S. government’s current priority seems to be dominating the AI race, and it needs big tech companies to power that push. If big tech is subject to strict regulations, however, progress could slow, giving rivals a chance to catch up with, or even surpass, the U.S., since they may not be bogged down by the same compliance burdens.

The U.S., though not against the DSA, DMA, or the UK’s regulations, is wary of how they can impact its tech companies. Additionally, the FTC is trying to limit the application of stricter regulations to U.S. users.

If a company promises ‘end-to-end encryption’ or ‘no viewpoint discrimination’ and then weakens that for Americans, citing EU or UK law, the FTC may treat that as deceptive or unfair under U.S. law.

The varying approaches to regulating tech

Technology has always carried with it a paradox: it connects and empowers, but it also risks enabling harm. This is why regulation is necessary, but also why it must be carefully calibrated.

Artificial intelligence, with its transformative potential, has intensified this dilemma. The EU and UK have chosen a path that prioritizes precaution, not just around AI itself, but also around the companies that build, train, and deploy the models. The U.S., meanwhile, views such restrictions as potential brakes on innovation, especially at a time when geopolitical competition for AI leadership is fierce.

At this point, it is difficult to say which approach is better. But history suggests that, when granted immense power, corporations resort to monopolistic behavior, which ultimately leads to slower innovation.

So Washington is signaling that its priority is not only safeguarding constitutional protections like free expression and encryption, but also ensuring that its tech champions remain globally competitive.

For tech companies, this creates a delicate balancing act: complying with European laws without undermining commitments at home. For lawmakers, the world is watching as they attempt to strike that elusive balance between innovation and protection, knowing that whichever side sets the standard will shape the next era of global technology.

Quick Bits, No Fluff

AI teams built for real-world impact.

AI outcomes depend on the team behind them. Athyna connects you with professionals who deliver—not just interview well.

We source globally, vet rigorously, and match fast. From production-ready engineers to strategic minds, we build teams that actually ship. Get hiring support without the usual hiring drag.

*This is sponsored content

Thursday Poll

🗳️ Are global tech laws helping or hurting innovation?


3 Things Worth Trying

  • Lumio AI: A multi-model AI hub that lets you compare responses from ChatGPT, Gemini, Claude, Grok, and more in one interface—perfect for picking the right tool for the job.

  • Div‑idy: Build fully functional websites or games using plain language prompts. It’s AI-powered, beginner-friendly, and blazing fast—ideal for creators and non-technical founders.

  • Replit Agent (via Replit): Use natural language to generate, debug, and deploy full applications right from your browser. A seamless dev experience powered by AI, now with Replit Agent v2.

Meme Of The Day

Rate This Edition

What did you think of today's email?
