Is Global AI Governable?
Plus: Apple’s ICEBlock ban, OpenAI’s next big bet, and Microsoft’s new Copilot push.
Here’s what’s on our plate today:
🧪 Can the UN prevent a global AI divide before it’s too late?
🧠 Apple pulls ICEBlock, Altman teases deals, Microsoft boosts Copilot.
🧰 Understand AI literacy, not just usage—Roko explains why.
🗳️ Is the global AI divide getting enough attention?
Let’s dive in. No floaties needed…

Lower your taxes and stack bitcoin with Blockware.
What if you could lower your tax bill and stack Bitcoin at the same time?
Well, by mining Bitcoin with Blockware, you can. Bitcoin miners qualify for 100% Bonus Depreciation: every dollar you spend on mining hardware can be used to offset income in a single tax year.
Blockware's Mining-as-a-Service enables you to start mining Bitcoin without lifting a finger.
You get to stack Bitcoin at a discount while also saving big come tax season.
*This is sponsored content

The Laboratory
Why the U.N. is building a Global AI Governance Framework
By 2025, the global population had reached about 8.2 billion, with roughly 68.7% of it, around 5.6 billion people, connected to the internet. So while a majority now participates in the global information network, close to a third of the world’s population (around 31.3%) has yet to see the benefits of an interconnected world.
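A quick back-of-the-envelope check of those figures; the 8.2 billion population and 68.7% connectivity share come from the paragraph above, and the rest is simple arithmetic:

```python
# Rough sanity check of the connectivity figures cited above.
population_2025 = 8.2e9   # estimated global population, 2025
online_share = 0.687      # share connected to the internet (68.7%)

online = population_2025 * online_share   # ~5.63 billion people online
offline = population_2025 - online        # ~2.57 billion still offline

print(f"Online:  {online / 1e9:.2f} billion people")
print(f"Offline: {offline / 1e9:.2f} billion people ({(1 - online_share):.1%})")
```

In other words, roughly 2.6 billion people, nearly one in three, remain outside the network.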
The global trend in internet penetration reflects the challenge institutions, both state and private, face in democratizing the benefits of emerging technologies. It took decades for the internet to reach a majority of the world’s population, and it may take many more years to reach the remaining 31%.
In 2025, however, a newer technology shifted attention away from internet access and toward an even bigger problem: the widening AI divide.
Since the mass release of AI tools, the concentration of AI know-how, infrastructure, and access in a handful of countries has raised alarms that some countries may have no agency over the AI models deployed within their borders. This would not only affect their state and private institutions but could also push them toward techno-feudalism, a term for a modern economic arrangement in which big technology companies wield power similar to that of feudal lords. The difference is that instead of owning land, these corporations dominate digital platforms, data, and online marketplaces, and individuals and small businesses depend on them much as peasants once depended on feudal lords for access and security.
Understanding the global AI divide
The AI digital divide is unlike anything we’ve seen before. Previously, it was about who had access to devices like smartphones or computers. Now, it’s about who controls the compute power, data, skills, languages—and even basic resources like electricity and water.
The AI digital divide is a growing concern because, even as billions have yet to access the internet, a small number of countries and cloud providers command most of the world’s GPU clusters and research dollars. That reality locks many regions out and forces them to depend on a few gatekeepers for access.
Beyond technology, the structural gap is intertwined with geopolitics. Countries with the technical know-how understand the technology’s potential for good and for ill, and they have begun imposing export controls and compliance rules that shape where high-end chips can go. Critics say these measures can inadvertently widen access gaps, even when the goal is to stop misuse.
On the infrastructure front, a handful of countries host specialized AI data centers, meaning most of the world is excluded from hosting or controlling powerful AI. The United States and China dominate these hubs; Europe has a few; regions like Africa and Latin America are mostly shut out.
This has happened not just because these countries were first to adopt the technology, but also because they have the necessary capital, power, cooling, skilled staff, and stable infrastructure. Even if the countries that are currently shut out start working on the infrastructure, it could take years before they have enough built up to host their own models.
In the past, state participation in technological adoption has helped countries break free of infrastructural constraints. However, with AI, things are a little different. States can invest in infrastructure, but they also have to contend with the challenge of gathering and cleaning the data needed to train the AI models.
As of now, AI models perform much better in ‘high-resource’ languages (English, Chinese, Spanish, etc.) because there is far more training data for them. Communities speaking lower-resourced languages often get poor accuracy or no support at all. Telecom and tech companies are working to mitigate the issue (Veon recently announced a partnership to support underrepresented languages, and Google has added support for regional languages like Hindi and Tamil), but there is a long way to go before these efforts bear fruit.
Then there is the problem of training a skilled workforce that is well-versed in technology. Having infrastructure and data is one thing; knowing how to use them, understand model behavior, and ask the right questions is another. The 2024–25 Fluency Report argues that the AI divide is driven not just by hardware but by gaps in digital literacy, transparency, and institutional investment.
UNESCO also warns that unless there are increased investments in AI literacy (knowing what AI can and cannot do, how to interpret outputs, and how to use it safely), the benefits will mostly flow to those already well served.
The human and economic costs of the AI divide
The digital divide not only shuts large sections of the population out of the benefits of automation, it also obscures their contribution to building the modern world.
People who label data, moderate content, or manage infrastructure in low-cost regions are often not paid their fair share. Their working conditions, mental health, and labor rights are ignored (though some recent litigation, e.g., in Kenya, has started exposing these costs). Also, detection tools for deepfakes or misinformation work worse in low-resource languages or regions, which means those societies may be more vulnerable to AI misuse.
In healthcare, underfunded safety-net providers struggle to adopt AI because they lack the staff, technical capacity, or dollars for customization. Wealthier systems accelerate ahead while vulnerable populations fall further behind.
In business, micro, small, and medium enterprises (MSMEs) often can’t afford AI infrastructure or specialized talent, so they’re excluded from gains seen by big firms. And in education, AI models may reinforce inequalities if only well-resourced schools can integrate them; low-income or rural students fall further behind.
So, while AI can help usher in systemic change, it brings with it the real threat that a large section of the global population will be left behind. Steps, however, are being taken to address the situation.
How the U.N. plans to bridge the AI gap
In July 2025, the WSIS+20 High-Level Event in Geneva gathered governments, civil society, technical groups, and U.N. agencies to take stock of the information society agenda and to align it more tightly with emerging tech realities.
The event explicitly convened a session titled ‘Bridging Visions: Aligning the Global Digital Compact (GDC) and WSIS+20 Overall Review by the U.N.’, signaling that digital inclusion, trust, and skills would be central inputs into the upcoming U.N. General Assembly’s review in December 2025.
Beyond the Geneva meetings' symbolic alignment, the U.N. General Assembly took a concrete step in August, adopting a new resolution that defines and authorizes two new global mechanisms: an Independent International Scientific Panel on Artificial Intelligence and a Global Dialogue on AI Governance.
The Scientific Panel is envisioned as a body of experts providing evidence-based assessments, foresight studies, and technical reviews of AI developments, including risks, capabilities, and trends to feed policymaking. Meanwhile, the Global Dialogue is intended as an inclusive, ongoing forum bringing states, civil society, academia, industry, and technical communities together to negotiate principles, norms, and strategies for AI governance, especially to make sure that bridging the digital divide is on the agenda.
However, this may not be enough: the mechanisms are frameworks, not funded programs, and their actual impact will depend on states and funders backing them. Skeptics warn that without binding force or coercive power, they may end up as symbolic signals rather than agents of a real redistribution of AI capacity.
Closing the technological canyon
As companies in the U.S., Europe, and China keep pushing cutting-edge models toward the explicit goal of AGI, the global South risks being left behind. Building the infrastructure, workforce, and skills needed to bridge that gap will be far more challenging than setting up telecommunications networks and ensuring access to powerful tools.
Though steps are being taken to build a roadmap for a future where large parts of the population are not left behind, the challenge remains steep. The next decade will reveal whether global leaders choose to entrench power or share it.


Roko Pro Tip
💡 Don’t just learn to use AI; learn how it works. If your team is only using AI tools like ChatGPT for surface-level tasks, you’re building on a fragile foundation. Start investing in AI literacy: understand what large language models can and can’t do, how data bias shows up, and what “compute” actually means.

Stay at the forefront with Memorandum's daily tech insights.
Memorandum distills the day’s most pressing tech stories into one concise, easy-to-digest bulletin, empowering you to make swift, informed decisions in a rapidly shifting landscape.
Whether it’s AI breakthroughs, new startup funding, or broader market disruptions, Memorandum gathers the crucial details you need. Stay current, save time, and enjoy expert insights delivered straight to your inbox.
Streamline your daily routine with the knowledge that helps you maintain a competitive edge.
*This is sponsored content

Monday Poll
🗳️ Is the global AI divide getting enough attention?

Bite-Sized Brains
Apple pulls ICEBlock app: Apple removed ICEBlock, an app for anonymously reporting ICE agent sightings, citing safety concerns raised by law enforcement. The developer says the company caved to pressure from the Trump administration.
OpenAI’s next big deals are incoming: Even after massive hardware deals with AMD, Nvidia, and Oracle for Stargate, Sam Altman says OpenAI has more major partnerships on the way.
OneDrive gets AI + Windows fusion: Microsoft is relaunching OneDrive as a native Windows app, powered by AI Copilot features and better offline sync—rolling out by the end of the year.
Meme Of The Day

Rate This Edition
What did you think of today's email?
