Cheap Models, High Stakes

Plus: Moon catapults, inference startups, and OpenAI reports.

Here’s what’s on our plate today:

  • 🧪 DeepSeek vs US: how cheap Chinese models reshape AI power.

  • 🧩 Moon factories, inference infra, and Deep Research.

  • 🧠 Roko’s Pro Tip on using constraint as your AI edge.

  • 📊 Today’s poll on DeepSeek, pricing pressure, and US-China rivalry.

Let’s dive in. No floaties needed…

In partnership with

Better prompts. Better AI output.

AI gets smarter when your input is complete. Wispr Flow helps you think out loud and capture full context by voice, then turns that speech into a clean, structured prompt you can paste into ChatGPT, Claude, or any assistant. No more chopping up thoughts into typed paragraphs. Preserve constraints, examples, edge cases, and tone by speaking them once. The result is faster iteration, more precise outputs, and less time re-prompting. Try Wispr Flow for AI or see a 30-second demo.

*This is sponsored content

The Laboratory

How DeepSeek reframed the global AI race

There is a long-standing, well-documented idea across psychology, philosophy, economics, sports, and even evolutionary biology: humans often perform better when competing with others, especially when the competition is visible and meaningful.

Consider the space race: the motivational push that propelled the Apollo missions came not from within the U.S. but from competition with the Soviet space program.

When the Soviet Union launched Sputnik in 1957, it spurred Americans to chase the next big milestone in humanity’s quest to understand the cosmos.

From space to AI

In 2026, the space race still exists, but it is no longer defined by rivalry with the Soviet Union or its successor state, Russia. Instead, the competition now centers on China. Even so, it has been eclipsed by a far more consequential technological contest, one that may shape economic power and geopolitical influence for decades to come.

From a geopolitical standpoint, the race for AI dominance has narrowed to two countries: the United States and China.

Two-nation AI race

As of February 2026, the U.S. appears to have the upper hand. The country is home to some of the most prominent names in the AI industry: OpenAI, Nvidia, Anthropic, Meta, Google DeepMind, and Microsoft.

Chinese AI startup DeepSeek is expected to launch its next-generation V4 model, with a focus on coding capabilities, in mid-February. Photo Credit: The Information.

China, meanwhile, is home to companies such as Baidu, Alibaba, Tencent, Huawei, and, most importantly, DeepSeek.

Until recently, DeepSeek was largely unknown outside technical circles. That changed when its reasoning models began circulating among developers and researchers, not because they were flashy, but because they were unexpectedly good and built with far fewer resources than most people thought possible.

In an era where frontier AI development has become synonymous with vast compute budgets and sprawling data centers, DeepSeek offered a quiet counterpoint: that constraint, rather than abundance, could sharpen innovation.

What makes DeepSeek stand out in China’s AI ecosystem is not scale, but intent. Much like OpenAI in the U.S., it operates less as a consumer-facing tech company and more as a frontier research lab, focused on advancing model capabilities rather than simply deploying AI across existing platforms.

Its models have been scrutinized, tested, and debated precisely because they challenge assumptions about the amount of money and hardware required to remain competitive at the cutting edge.

That context matters as DeepSeek prepares to launch a new coding-focused model. Code, after all, sits at the heart of modern technological power, from software infrastructure to autonomous systems. A strong coding model is not just a developer tool; it is a force multiplier.

In the same way Sputnik’s simple radio signal carried outsized symbolic weight, DeepSeek’s next release is being watched less for what it does in isolation and more for what it signals: that China’s AI ambitions are no longer confined to state-backed giants or consumer platforms, but are increasingly being driven by lean, research-first labs willing to compete head-on with their American counterparts.

Adding to the anticipation, reports suggest the new model could outperform rivals such as Anthropic’s Claude and OpenAI’s GPT series on coding tasks.

If the reports are accurate, the upcoming model could rattle markets, much as DeepSeek did in 2025. Last year, the company released its R1 reasoning model alongside a claim that initially sounded implausible: the system had been trained for about $6 million, a rounding error compared with the hundreds of millions spent by OpenAI and Google on comparable models.

Within days, the model shot to the top of Apple’s U.S. App Store. Around the same time, Nvidia’s market value briefly fell by nearly $600B.

The panic was not about whether DeepSeek had built a better model. It was about what the release revealed: the industry’s assumption that progress in AI requires ever-larger models, ever more chips, and ever more data centers might be wrong.

DeepSeek emphasizes efficiency over scale

DeepSeek did not win by throwing more hardware at the problem. Instead, it made smarter design choices. Its model uses a mixture-of-experts design that activates only a small subset of its total parameters for any given token, reducing computational cost while maintaining high performance. It also simplified the training process by removing components that other labs consider essential, thereby reducing memory usage.
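For the technically curious, here is a minimal sketch of that sparse-activation idea, commonly called mixture-of-experts routing. This is illustrative Python, not DeepSeek’s actual architecture; the expert count, dimensions, and top-k routing below are toy assumptions.

```python
import numpy as np

# Toy mixture-of-experts layer: illustrative only, not DeepSeek's code.
# Real models route each token across many more, much larger experts.
NUM_EXPERTS = 8   # total expert sub-networks held in the layer
TOP_K = 2         # experts actually activated per token
DIM = 16          # hidden dimension of the toy model

rng = np.random.default_rng(0)
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS))

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router                    # score every expert for this token
    top = np.argsort(logits)[-TOP_K:]      # keep only the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the chosen experts
    # Only TOP_K of NUM_EXPERTS weight matrices are touched per token, which is
    # why per-token compute stays low while total parameter count stays high.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

print(moe_layer(rng.standard_normal(DIM)).shape)  # -> (16,)
```

The detail to notice: total parameters grow with NUM_EXPERTS, but per-token compute grows only with TOP_K, which is how a model can be enormous on paper yet comparatively cheap to run.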

Perhaps most strikingly, DeepSeek pulled this off using Nvidia’s H800 chips, slower and export-restricted GPUs that many assumed were a dead end for cutting-edge AI. The result helped validate a different philosophy. Models can become more capable not by being endlessly scaled up during training, but by learning to think longer when responding.

In a single release, DeepSeek did not merely ship a model. It forced the industry to confront an uncomfortable possibility. Efficiency, not brute force, might define the next phase of the AI race.

If DeepSeek’s upcoming model achieves what the reports claim (better coding performance than OpenAI’s and Anthropic’s), it could pave the way for another wave of disruption. As in 2025, the impact will not be limited to forcing labs to rethink their strategies; it will also hit the economics of the entire industry.

Market shockwaves

Since the second half of 2025, U.S. AI companies have been pushing to secure more enterprise clients to shore up their revenue models. If DeepSeek’s models outperform theirs, it could shake the very assumptions on which current pricing structures are built.

This would be a major blow to U.S. companies, which, as of February 2026, have already had to cut API prices to match what analysts call the “DeepSeek standard.” A workload costing $50 per million tokens on OpenAI might run $1 to $2 on DeepSeek, according to industry comparisons.

DeepSeek’s pricing reportedly undercuts competitors by up to 95% on some workloads. Discounts of that size erode margins across the AI value chain and, in turn, could undermine the economic foundations of the current U.S. AI industry.
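A quick back-of-the-envelope check of those figures. The prices here are the illustrative numbers quoted above, not official list prices:

```python
# Sanity-check the reported price gap using the article's illustrative figures.
incumbent_per_m = 50.0   # $ per million tokens, per the comparison above
deepseek_per_m = 1.5     # midpoint of the reported $1-2 range

discount = 1 - deepseek_per_m / incumbent_per_m
print(f"Implied discount: {discount:.0%}")   # Implied discount: 97%

# What a 500M-token monthly workload costs at each price point:
tokens_m = 500
print(f"Incumbent: ${incumbent_per_m * tokens_m:,.0f}/month")  # $25,000/month
print(f"DeepSeek:  ${deepseek_per_m * tokens_m:,.0f}/month")   # $750/month
```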

Economics under strain

According to Goldman Sachs, data centers coming online in 2025 will incur $40B in annual depreciation while generating only $15B to $20B in revenue at current usage rates, a shortfall of $20B to $25B a year. The infrastructure is depreciating faster than it generates replacement revenue.

Faced with this gap, U.S. companies might have been expected to rein in spending. They did the opposite. In 2025, Amazon committed $100B to AI infrastructure, Microsoft followed with $80B, Google with $75B, and Meta with up to $65B. In total, Big Tech spent more than $320B on AI, roughly 30% more than the previous year.

The reasoning traces back to the Jevons Paradox, first described by the economist William Stanley Jevons in 1865: when technology becomes more efficient, total consumption often rises rather than falls. Cheaper steam engines led Britain to burn more coal, not less, because lower costs unlocked new uses. Satya Nadella echoed this logic after DeepSeek’s launch, arguing that as AI becomes cheaper and more accessible, demand will surge rather than shrink.

Early data support that view. Companies are using AI more aggressively rather than cautiously. Walmart now operates AI systems that process far more data than before, even at higher energy costs, because the business payoff is worth it. Microsoft’s AI revenue surged soon after DeepSeek’s release, suggesting falling costs expanded the market faster than they compressed margins.

The AI race, then, is not only pushing AI companies in both countries to improve, but also to innovate in ways that suit their conditions and demands.

Constraint as advantage

DeepSeek was shaped by constraint. U.S. export controls limited access to advanced chips, forcing Chinese firms to focus on efficiency and software optimization rather than brute force. That necessity became an advantage, proving that leading models do not always require limitless hardware.

This also puts DeepSeek, and Chinese AI firms more broadly, in a unique position. In much of the world, especially the Global South, affordability matters more than peak performance. Chinese companies are positioning efficient, lower-cost AI as a practical alternative to expensive Western models.

The race to reach the Moon, though expensive, pushed the Soviet Union and the U.S. to innovate with an urgency usually seen only in dire times. That competition between the two superpowers is what ensured humanity set foot on the Moon.

What comes next

In 2026, the race looks different from its Cold War predecessor. The competitors are not just governments but also research labs, cloud providers, and lean startups, and the battlefield is not space but software. Yet the underlying dynamic is familiar: visible competition is forcing rapid progress, hard trade-offs, and uncomfortable rethinks of long-held assumptions.

Whether or not it leads to artificial general intelligence, this new race is already reshaping how power, productivity, and innovation are distributed. As DeepSeek has shown, the next decisive breakthroughs may come not from those with the most resources, but from those most willing to rethink how progress is made.

Roko Pro Tip

💡 

If you’re building with LLMs, stop thinking “who’s smartest?” and start modeling “who’s cheapest at acceptable quality.” Benchmark DeepSeek-style efficient models against your current stack and price out a 12-month infra bill, not just this month’s API spend.
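Here’s a rough sketch of that 12-month comparison in Python. Every input below (prices, quality scores, traffic, growth rate) is a placeholder to swap for your own benchmark results and quoted rates:

```python
# Project a 12-month inference bill per candidate model, not one month's spend.
# All numbers are placeholders; plug in your own traffic and quoted prices.
candidates = {
    "current_stack": {"usd_per_m_tokens": 15.0, "quality": 0.92},
    "efficient_alt": {"usd_per_m_tokens": 0.8, "quality": 0.88},
}
monthly_tokens_m = 2_000   # millions of tokens per month today
monthly_growth = 1.08      # assume usage grows ~8% month over month
MIN_QUALITY = 0.85         # your "acceptable quality" floor from benchmarks

for name, c in candidates.items():
    if c["quality"] < MIN_QUALITY:
        continue           # fails the quality bar, so price is irrelevant
    tokens, total = monthly_tokens_m, 0.0
    for _ in range(12):
        total += tokens * c["usd_per_m_tokens"]
        tokens *= monthly_growth   # growth compounds the price gap over a year
    print(f"{name}: ${total:,.0f} over 12 months")
```

Because usage compounds, a per-token price gap that looks modest in month one can dominate the annual bill by month twelve.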

Outperform the competition.

Business is hard. And sometimes you don’t have the tools you need to be great at your job. Well, Open Source CEO is here to change that.

  • Tools & resources, from playbooks and databases to courses and more.

  • Deep dives on famous visionary leaders.

  • Interviews with entrepreneurs and playbook breakdowns.

Are you ready to see what it’s all about?

*This is sponsored content

Bite-Sized Brains

  • Musk’s lunar catapult plan: Elon Musk reportedly told xAI staff he wants a Moon factory churning out AI satellites and an enormous electromagnetic “mass driver” to fling them into orbit, pitching space-based compute as the next step for his AI empire.

  • Modal’s $2.5B ambition: Inference startup Modal Labs is reportedly raising at a $2.5B valuation to scale its serverless GPU platform for AI workloads, positioning itself as a cheaper, developer-friendly alternative to hyperscalers for running production inference.

  • ChatGPT Deep Research: OpenAI’s new Deep Research mode turns ChatGPT into a long-form research assistant, auto-browsing the web, generating multi-page reports with citations, and offering a full-screen report viewer aimed squarely at knowledge workers who live in documents and decks.

Monday Poll

🗳️ What’s the most important shift DeepSeek forces in the AI race?

Login or Subscribe to participate in polls.

Meme of the Day

Rate This Edition

What did you think of today's email?

Login or Subscribe to participate in polls.