Rule-Makers vs. Rule-Breakers
Plus: YouTube’s teen-ban debate, Meta raids OpenAI, and HDMI 2.2 future-proofs 16K dreams.
Here’s what’s on our plate:
🏛️ We unpack how Brussels, Beijing & D.C. are all writing very different AI laws—and what that means for you.
🗳️ Tell Roko if you’d rather see an EU-style clamp-down, a U.S. state-by-state mash-up, or one big planetary playbook.
📰 YouTube could join Oz’s teen ban, HDMI 2.2 future-proofs your 16K dreams, and Meta poaches OpenAI talent.
🛠️ GitHub Copilot, AgentSpace’s AI Studio, and Cohere Coral—three ways to put those fresh regulations to the test (legally, of course).
Let’s dive in. No floaties needed.

Guidde—Create how-to video guides fast and easy with AI.
Tired of explaining the same thing over and over again to your colleagues? It’s time to delegate that work to AI. Guidde is a GPT-powered tool that helps you explain the most complex tasks in seconds with AI-generated documentation.
Share or embed your guide anywhere
Save valuable time by creating video documentation 11x faster
Simply click capture on the browser extension and the app will automatically generate step-by-step video guides complete with visuals, voiceover and call to action.
The best part? The extension is 100% free.
*This is sponsored content

The Laboratory
How governments are regulating AI
Whether you are a business owner or an individual user, artificial intelligence, in the form of generative AI tools, chatbots, and AI agents, will shape how you interact with technology daily. And while enterprises and individuals alike scramble to understand the use cases of this new technology, another question remains unanswered.
How do we regulate a technology that is evolving at such a rapid pace?
While this is a broad question, answering it has been left to legislators around the world. How they tackle the acceleration and adoption of a technology that many claim could threaten humanity depends not just on their understanding of the technology but also on how far Big Tech can leverage its dominance to its own benefit.
As artificial intelligence begins to reshape human-to-human and human-to-machine interactions, it is wise to keep tabs on what governments around the world are doing to regulate the algorithms that power AI.
EU leads the regulatory charge
During my time as a journalist, I learned an important lesson: ChatGPT was neither the beginning of AI, nor will it be this technology’s final form. Algorithms and natural language processing have been in use at the industrial and organizational levels for some time. ChatGPT simply marked the point at which the technology became accessible to the general public, and its immense popularity pushed tech companies to release products that were, at times, not yet ready for a mass audience.
The European Union was one of the first authorities to take note of how AI could affect users’ lives, not just their mental health and data security but also their worldview and the job market.
In December 2023, European Union policymakers and lawmakers reached a provisional deal on governing the use of artificial intelligence, covering governments’ use of AI in biometric surveillance and the regulation of AI systems such as ChatGPT.
The laws laid down by the economic bloc categorized AI systems based on their potential impact on human lives, their use in law enforcement, and their general-purpose use.
A key feature of the regulations is the treatment of high-risk systems: those deemed to have significant potential to harm health, safety, fundamental rights, the environment, democracy, elections, or the rule of law. These systems must comply with a set of requirements, including a fundamental rights impact assessment, before they can be placed on the EU market.
AI systems that pose little to no risk would be subject to very light transparency obligations, such as disclosure labels declaring that the content was AI-generated.
The bloc also outlined the limited conditions under which law enforcement agencies may use real-time, remote biometric identification systems in public spaces. The law bans biometric categorization systems that use sensitive characteristics such as political, religious, or philosophical beliefs, sexual orientation, and race, as well as the untargeted scraping of facial images from the internet or CCTV footage to create databases.
The EU’s regulations aimed to lay the groundwork for how other countries might develop their own frameworks, and for how creators might handle the question of their work being used to train models. But the rules, though aimed at ensuring greater transparency in AI models and their use, did not sit well with tech giants.
In September 2024, even before the regulations came into force, the trade organization CCIA, whose members include Amazon, Google, and Meta, said: “The code of practice is crucial. If we get it right, we will be able to continue innovating. If it’s too narrow or too specific, that will become very difficult.” The EU also faced criticism from businesses claiming the bloc was prioritizing tech regulation over innovation.
By June 2025, the CCIA was urging the European Union to pause implementation of the AI Act, saying a rushed roll-out risked jeopardizing the continent’s AI ambitions. “With critical parts of the AI Act still missing just weeks before rules kick in, we need a pause to get the Act right, or risk stalling innovation altogether,” said Daniel Friedlaender, CCIA Europe’s senior vice president, according to a report from Reuters.
But while the EU has led the way in regulating AI, countries like the U.S. and China are taking a different approach.
The American patchwork
In the United States, state legislatures are passing AI bills with varying thresholds, scopes, and subject matter. Where the EU has taken a unified approach, the U.S. is following a patchwork one: instead of comprehensive federal legislation, regulation is emerging state by state and agency by agency. To date, these laws generally fall into four categories: consumer protection, employment rights, image and likeness rights, and transparency and risk-assessment requirements for high-risk AI processing.
At the federal level, the FTC has been at the forefront, using its authority over unfair and deceptive practices to crack down on companies that make misleading or fraudulent claims about their AI tools.
More recently, in a bid to encourage innovation, federal lawmakers proposed a 10-year moratorium on state regulation of artificial intelligence.
China aims to balance control with growth
In December 2022, China released a set of measures governing “deep synthesis” technology and services: deepfakes and other text, images, audio, video, or virtual scenes produced using generative models. This marked one of the earliest efforts to regulate AI-generated content.
Since then, the country’s cyberspace regulator has unveiled measures for managing generative artificial intelligence services, requiring firms to submit security assessments to authorities before launching their offerings to the public. The country is home to some of the biggest names in AI, including Baidu, SenseTime, Alibaba, and DeepSeek.
Under the regulations, providers are held responsible for the legitimacy of the data used to train generative AI products and must take measures to prevent discrimination when designing algorithms and selecting training data. The regulator also said service providers must require users to submit their real identities and related information.
By 2023, however, the country had softened its stance to ensure investment continued to pour into its AI efforts. It has also described the current measures as “interim,” as it looks to the AI industry to spur an economy recovering more slowly than expected after the scrapping of COVID-19 curbs.
The need to reflect on regulation
Despite this progress, AI is still in its nascent stages, and the technology can reshape far more than we can quantify right now. Whether it’s intellectual property rights, job security, or mental health, AI’s impact on all of these needs to be assessed. And it is not only in consumer-facing products that regulation needs to catch up: as companies push to build bigger, more capable models, they will need data from reliable sources, and compromises there can entrench political, racial, or gender biases.
While we enjoy the benefits of artificial intelligence, we must also pause and closely examine how these models are developed, how they impact our lives, and how we can regulate organizations to ensure we can make the most of a new turn in the age of automation.
TL;DR
EU goes first: Its sweeping AI Act tiers systems by risk, bans creepy biometric profiling, and makes Big Tech quake—so much that lobbyists want a pause button.
USA goes patch-and-paint: No single federal law yet—just FTC smack-downs and a growing maze of state bills on deepfakes, jobs, and consumer rights.
China’s tightrope walk: Hard rules on real-name verification and content controls, then a quick dial-back to keep investment flowing and stay in the AI race.
Why it matters: Divergent playbooks mean global companies face a compliance Rubik’s Cube—while citizens worry about bias, job loss, and who polices the next ChatGPT.


Quick Poll
🗳️ How Should Governments Handle AI?

Fact-based news without bias awaits. Make 1440 your choice today.
Overwhelmed by biased news? Cut through the clutter and get straight facts with your daily 1440 digest. From politics to sports, join millions who start their day informed.
*This is sponsored content

Headlines You Actually Need
Australia mulls adding YouTube to under-16 social-media ban. Lawmakers debate whether Google’s video giant should join TikTok & Instagram in the proposed age-gate.
HDMI 2.2 is here: 96 Gbps pipes, auto audio-sync, and theoretical 16K video. The new spec future-proofs gaming rigs and theater setups for the next decade.
Meta raids OpenAI—poaches three elite researchers in latest hiring blitz. Zuckerberg’s AI arms race heats up as top talent jumps ship to Reality Labs.

Weekend To-Do (Generated by GPT, Verified by You)
GitHub Copilot – AI pair programmer integrated into GitHub.
Agentspace / AI Studio – Platforms for developing AI applications.
Cohere Coral – Cohere’s enterprise AI assistant, built on its Command family of language models (see the sketch below).
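
Want to take one of these for a spin this weekend? Here’s a minimal sketch of querying Cohere from Python. It assumes the classic cohere.Client chat interface and an API key stored in a CO_API_KEY environment variable; the SDK has changed across versions, so check Cohere’s current docs before running.

import os

import cohere  # pip install cohere

# Classic Client interface (an assumption; newer SDK versions use ClientV2).
co = cohere.Client(os.environ["CO_API_KEY"])

# Ask the model a question inspired by this issue.
response = co.chat(
    message="Summarize how the EU AI Act tiers AI systems by risk.",
)
print(response.text)

Swap in your own prompt, and remember that disclosure rules like the EU’s labeling obligations may apply if you publish the output.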

Rate this edition
What did you think of today's email?
