Can The UN Govern AI?
Plus: $100B chip talks, AI translations, and ExplainThis for research.
Here’s what’s on our plate today:
🧠 Unpacking the Red Lines initiative and why global AI guardrails are long overdue.
🧪 ExplainThis, Humata, and Kadoa — three AI tools that simplify the complex.
⚡ $100B chip talks, Google’s AI tools, and WhatsApp translations.
📊 Thursday Poll: Should the UN lead global AI safety efforts?
Let’s dive in. No floaties needed…

Lead confidently with Memorandum’s cutting-edge insights.
Memorandum distills the day’s most pressing tech stories into one concise, easy-to-digest bulletin, empowering you to make swift, informed decisions in a rapidly shifting landscape.
Whether it’s AI breakthroughs, new startup funding, or broader market disruptions, Memorandum gathers the crucial details you need. Stay current, save time, and enjoy expert insights delivered straight to your inbox.
Streamline your daily routine with the knowledge that helps you maintain a competitive edge.
*This is sponsored content

The Laboratory
Why AI needs a global body like the UN
After the hostilities of the Second World War ended, nations around the world came together to ensure that future conflicts between nuclear-armed states were avoided at all costs. The resulting consultations led to international rules, universal declarations, and the establishment of the UN to facilitate the resolution of conflicts and stop them from spiraling out of control.
In 2025, the United Nations remains a global body responsible for implementing international standards on political and social issues. Over the course of its existence, it has managed to temper large-scale conflicts and, to a large extent, helped nations co-exist.
The establishment of the UN was a reaction to the devastation caused during the course of the Second World War. However, when dealing with threats posed by technological advancements, the kind we have seen with the advent of artificial intelligence, it is better to have a proactive, rather than a reactive, approach.
It was with this intention that more than 200 former heads of state, diplomats, Nobel laureates, AI leaders, and scientists, along with 70 organizations working on AI, recently signed the Global Call for AI Red Lines initiative.
What is the Global Call for AI Red Lines?
The initiative “Global Call for AI Red Lines” is part of a broader project from groups like CeSIA (French Center for AI Safety), The Future Society, and UC Berkeley’s Center for Human-Compatible AI.
The initiative aims to establish a global agreement on red lines that AI systems should not be allowed to cross. These could include not allowing AI to impersonate a human being or self-replicate. The initiative also looks to establish practical mechanisms for monitoring and enforcement of these rules.
According to a report from The Verge, signatories to this call include British-Canadian computer scientist Geoffrey Hinton, OpenAI cofounder Wojciech Zaremba, Anthropic CISO Jason Clinton, Google DeepMind research scientist Ian Goodfellow, and others.
Through the initiative, signatories are calling for the establishment of “red lines” before the use of AI systems leads to some unforeseen catastrophe. The inclusion of notable personalities like Hinton, often touted as a “godfather” of AI, gives weight to the initiative. Hinton was awarded the Nobel Prize in Physics for his work on neural networks.
Since leaving his post at Google, he has openly called for stricter regulation of AI, citing risks including a “10% to 20%” chance that AI would lead to human extinction within the next three decades. And he is not the only one.
A key concern for many AI safety campaigners is that the race toward artificial general intelligence (AGI), or systems that are smarter than humans, could sideline safety. If AI systems were to evade human control, they warn, the consequences could be disastrous.
The signatories have also noted that even setting aside future AGI systems, the accelerated speed at which AI is being developed and deployed calls for clear regulations and oversight.
Jurisdictions like the EU, China, and some U.S. states have taken steps in this direction. But since the use of AI systems transcends geographical borders, clear global guidelines are the need of the hour.
How valid are these concerns?
Regardless of whether one agrees with the signatories, AI is advancing rapidly, and some of its potential capabilities or applications could cause damage that is difficult, even impossible, to reverse, whether that damage stems from misuse or from a loss of human control over the systems.
Defining “red lines” is essentially about drawing clear boundaries: identifying areas where society should never go, no matter how powerful the technology becomes. Even for companies working on AI systems, it is helpful to have a clear understanding of the dos and don’ts.
Without global guardrails, technology companies may face incentives to prioritize speed, competitive advantage, or commercial gain over safety. And though many AI firms act responsibly, inconsistent regulations and weak oversight across countries create room for accidents or bad actors to emerge. The recent spate of lawsuits filed by users’ families against AI companies highlights how ambiguity around broader guardrails can lead to litigation and larger public distrust. This can be disastrous for future applications of AI, as well as for the users.
As AI becomes embedded in sensitive areas such as healthcare, justice, finance, and governance, global guidelines will strengthen public trust and give AI developers ethical legitimacy.
Without guardrails on how, where, and to what extent AI should be used, abuses can proliferate and confidence can erode. Clear limits, backed by enforcement, signal that companies are prioritizing responsibility over profit, which is critical for securing regulatory approval, broad adoption, and long-term social acceptance.
What’s in it for corporations and nations?
Global guidelines will also help create a level playing field for companies, which in turn supports investment decisions, compliance, and trust.
In the geopolitical space, guidelines are particularly important. The race toward advanced AI, including AGI, is already underway between countries like the U.S., China, and members of the EU. If one nation develops powerful systems without constraints, the risk of misuse for surveillance, military purposes, or destabilizing tactics grows. Agreed international boundaries would help reduce arms-race dynamics and escalation.
However, globally accepted guidelines cannot emerge in a vacuum. They require cooperation between nations with different political, social, and economic philosophies. These differences shape not only each nation’s incentives for developing AI, but also how it approaches that development.
China’s DeepSeek is clearly prioritizing efficiency over brute computational power, in stark contrast to how companies in the U.S. approach AI development. Bridging these differences will be difficult, which is why the guidelines would require an international body capable of ensuring compliance and cooperation, akin to what the UN does in the political and social realms.
This brings us to the challenges of standing up such global rules and an organization to enforce them. The UN was formed after a disastrous war that killed millions around the world. The threat from advanced AI models, however, is yet to be fully understood, which makes it difficult to formulate regulations; this, in turn, creates problems for people working in the sector.
The challenges of global AI regulations
For those building AI systems, infrastructure, or policy, the idea of red lines is not theoretical; it directly shapes business strategy.
Risk management expectations are rising, with audits, red-teaming, and pre-deployment testing becoming standard. Falling behind could invite failures or public backlash, as most experts already see these safeguards as essential.
The call for “red lines” on AI could very well be the beginning of a larger mechanism or movement that facilitates dialogue and cooperation between different nations and corporations.
Currently, regulations are sparse and untested. The EU’s AI Act bans certain uses of AI, such as biometric surveillance, subliminal manipulation, inferring emotions, and targeting vulnerable groups. Similarly, the G7 Hiroshima AI Process and other multilateral statements are beginning to assert what kinds of dangerous uses are “not acceptable.”
Even companies have begun talks to assess the risks from advanced AI technology. According to a report from the Financial Times, OpenAI, along with other U.S. AI firms such as Anthropic and Cohere, held private talks with Chinese AI experts focused on dealing with those risks.
However, these efforts are limited to their own jurisdictions, a constraint that does not apply to the technology itself. In this scenario, it is important to ensure global cooperation, which is easier said than done in a highly politicized and fragmented world.
The slow but steady approach to global AI safety
The call for AI red lines is part of a long-running debate around the impact of AI. While it is among the earliest calls for a collective global stand, it is not the only effort in this direction.
The Global Call for AI Red Lines is one such step. It underscores that the world cannot afford to wait for a catastrophic incident before acting.
Just as the world came together after 1945 to place limits on nuclear weapons, today’s challenge is to set global boundaries on AI before it spirals beyond control. The Global Call for AI Red Lines is not about halting progress, but about ensuring that innovation does not outpace humanity’s ability to manage its risks. The choice is clear: either act collectively now, or risk confronting dangers too vast to contain later.


Elite AI talent, matched to your exact needs.
AI is evolving fast—don’t fall behind. AI Devvvs connects you with top AI and ML professionals who bring real expertise to your projects.
From data science to robotics, we provide handpicked, vetted talent ready to deliver results. No more lengthy hiring processes or mismatched hires—just skilled professionals who integrate seamlessly into your team. Solve your toughest AI challenges with ease.
*This is sponsored content

Quick Bits, No Fluff
Altman and Huang’s $100B deal: A behind-the-scenes look at how Sam Altman and Jensen Huang secured OpenAI’s historic $100B Nvidia chip partnership.
Google’s secret AI dev sauce: Meet the VP behind Google’s AI coding tools—and how her team’s workflow is quietly shaping the next wave of dev platforms.
WhatsApp now speaks your language: WhatsApp is rolling out automatic message translations, starting with bilingual users on iPhone and Android.
Thursday Poll
🗳️ Should the UN create global AI “red lines”?

3 Things Worth Trying
Humata – Upload any file, and ask it questions like it’s a teammate.
Kadoa – Extract clean, structured data from messy PDFs or web pages.
ExplainThisToMe – AI that breaks down complex documents or articles in plain English. Great for making sense of dense policy papers.

Meme Of The Day


Rate This Edition
What did you think of today's email?
