Roko's Basilisk
Is AGI the Endgame?
Plus: Headlines on OpenAI hires, deepfake law, Gemini Sheets tricks—and more.
Here’s what’s on our plate today:
🧠 We break down the $500B moon-shots and what “human-level” really means.
🗳️ Promise, pipe-dream, or just a slick TED Talk? Vote your AGI feelings.
📰 OpenAI’s new rec-engine squad, Denmark’s deepfake law, and more.
⚡ Run every grand-vision pitch through the “Bubble Test”.
Let’s dive in. No floaties needed…

Find accountants, controllers, and tax pros today.
Finding skilled finance professionals shouldn’t be a challenge. Finance Hires connects you with nearshore accountants, controllers, and tax managers—all vetted for expertise in QuickBooks, Xero, and beyond.
Whether you need ongoing support or project-based specialists, we offer top talent at competitive rates. No upfront fees, no long hiring cycles—just experienced professionals ready to make an impact.
Streamline your search and secure reliable financial experts without the usual hassles.
*This is sponsored content

The Laboratory
AGI: holy grail or false prophet of tech?
When I sit down to write an article, I draw on years of experience, research, and an in-depth understanding of the topic to make it engaging, informative, and useful for the reader who invests time in it. When I ask an AI chatbot like ChatGPT to write something for me, it relies on a model with billions of parameters, trained on vast amounts of text, to produce a chain of words that mimics how humans use language to share their thoughts and ideas. So, is a chatbot smarter than a human? So far, the answer is a resounding “no.” But if big tech CEOs are to be believed, that could all change if we claim the holy grail of artificial intelligence: AGI.
What is AGI, and why does it matter?
That contrast is merely one example of the limitations of AI as we currently understand it. There is, however, a bigger goal that AI researchers and engineers are chasing: artificial general intelligence.
AGI is the idea of generalized human cognitive ability in software: a system that can apply cognitive skills across tasks well enough to be considered “generally smarter than humans.” But beyond this broad understanding lies a deeper question: should humanity, as a collective, pour resources into a theoretical concept that, proponents say, promises to solve problems like climate change and gaps in healthcare? Or should we focus our energy on solving smaller problems one step at a time?
So let's take a closer look at AGI, how big tech views it, and how the chase for artificial general intelligence could shape the future of humanity.
The vague inevitability that is AGI
If we were to go by what the leaders of tech companies and venture capitalists say, AGI is inevitable. But beyond the marketing jargon, the term represents a vague idea that is somehow supposed to solve all the problems we currently face.
While companies define AGI as “research that attempts to create software with human-like intelligence and the ability to self-teach,” what is often overlooked is the lack of a precise definition for a supposedly imminent technology with real-world applications. The mere idea of a technology that can do it all, from averting the threat of climate change to curing cancer, draws focus away from how we are tackling these problems right now.
If big tech, which ultimately needs to return a profit on its investments, is to be believed, the current focus should be on developing AGI even at the cost of a drastic impact on the climate, since sufficiently intelligent software would one day be able to undo the damage.
The idea of AGI, though appealing, remains theoretical, yet it demands that scientific focus shift away from the here and now. The biggest problem with the concept, however, is that it lays the foundation for two competing futures: one where AGI is trained with the right values and ushers in a world of limitless abundance, and another where we develop a machine superintelligence too quickly to control it, it realizes it does not need to rely on humans, and humanity is forced to find ways to prevent it from going rogue.
Regardless of which scenario we chase, and whether AGI is even possible, the very idea already has an outsized impact on policy. One of the clearest examples is the Stargate initiative, in which the current U.S. government is backing up to $500 billion in investment. Another is the proposed 10-year moratorium on state-level AI regulation in the U.S., which would let tech companies chase the dream of AGI without regulatory intervention.
The industry divide: visionaries vs skeptics
Leaders of some of the biggest tech companies insist they are on the path to making AGI a reality. Nvidia CEO Jensen Huang says that if we were to give AI “every single test imaginable, I’m guessing in five years, we’ll do well on every single one.” Meanwhile, OpenAI CEO Sam Altman has said, “We are now confident we know how to build AGI as we have traditionally understood it.”
A question arises here, though: can passing tests be considered intelligence, or is it merely a model’s ability to reproduce responses from the datasets it was trained on? In other words, is memorizing, rather than reasoning, a mark of intelligence? The debate was reignited when Apple researchers published a paper called The Illusion of Thinking, which found that popular and buzzy AI models “face a complete accuracy collapse beyond certain complexities,” especially on problems they have never seen before.
Critics of the paper were quick to point out that Apple itself was struggling to develop its AI capabilities; as such, its research should be taken with a grain of salt.
But Apple is not alone. A survey of 475 AI researchers by the Association for the Advancement of Artificial Intelligence (AAAI) found that the “majority of respondents (76%) assert that ‘scaling up current AI approaches’ to yield AGI is ‘unlikely’ or ‘very unlikely’ to succeed, suggesting doubts about whether current machine learning paradigms are sufficient for achieving general intelligence.” So, while companies continue to chase AGI, it appears to be a distant dream, one which requires not just a diversion of finite resources but also of public perception.
Do the ends justify the means?
Currently, if you ask your preferred chatbot whether it is self-aware, or what the difference is between it and AGI, you are likely to receive a response much like the one it gave me: “I’m a specialized language tool, while AGI would be a truly intelligent system capable of learning and reasoning like a human.”
If you were to ask your text-to-image generator if it understands the implications of its image generation capabilities, it will, in all probability, be unable to respond. Which leads us to the question: Should the collective aim of technology be to create an all-understanding, self-aware system that can solve problems the collective human intelligence is unable or unwilling to resolve? Or should we continue down the path of chasing a solution that might not be possible?
Regardless of where you stand on this question, some issues need to be addressed right away, before we delve deeper into the philosophical and ideological debate: setting a clear aim for what AI research is meant to achieve, shifting focus away from relying on a distant AGI to solve collective problems, designing tools with input from multiple fields and communities, and ensuring that, in the collective hunt for AGI, the concerns of the wider communities affected by AI systems are not left unaddressed.


Monday Poll
🗳 What’s the real deal with AGI—promise or pipe dream?

Bite-Sized Brains
OpenAI scoops up Crossing Minds’ rec-engine team — Expect ChatGPT to get eerily good at “you might also like,” as the startup’s personalization brain-trust joins Sam Altman’s ranks.
Denmark rewrites copyright for deepfakes — New rules permit “transformative” AI remixes under strict disclosure, signaling a measured EU path on synthetic media.
Google sneaks Gemini into Sheets — A new “=AI” function lets Gemini fill empty cells with formulae, summaries, or next-step suggestions, turning spreadsheets into chat windows.

Start learning AI in 2025
Keeping up with AI is hard – we get it!
That’s why over 1M professionals read Superhuman AI to stay ahead.
Get daily AI news, tools, and tutorials
Learn new AI skills you can use at work in 3 mins a day
Become 10X more productive
*This is sponsored content

Roko Pro Tip
💡 Before chasing the dream of AGI saving the world, pause and ask: Are we solving today’s real problems, or just dazzled by tomorrow’s tech fantasy?

Meme of the Day


Rate this edition
What did you think of today's email?
