Good Enough Beats Best
Plus: Tinder scans eyeballs, OpenAI says sorry, and AI shapes college choices.
Here’s what’s on our plate today:
• 🧪 Why DeepSeek V4 matters beyond benchmarks.
• 📰 Tinder's iris scan, OpenAI's Tumbler Ridge apology, AI-anxious students.
• 💬 A 30-day plan to test DeepSeek V4 vs your frontier model.
• 🗳️ Poll: switch to DeepSeek, stay frontier, or run hybrid?
Let’s dive in. No floaties needed…

Launch fast. Design beautifully. Build your company's website on Framer
Framer helps teams design, build, and launch their marketing sites lightning fast. With the ability to publish hundreds of CMS pages in a single click, operate at a global scale with seamless localization, and even host unified content across multiple domains, teams have never been able to ship faster.
Trusted by companies like Miro, Bilt, and Perplexity.
*This is sponsored content

The Laboratory
TL;DR
• Close enough at a fraction of the price: DeepSeek’s V4-Pro trails top U.S. models by three to six months of development, but at $3.48 per million output tokens versus the $25 to $30 that comparable American models charge, the cost gap makes “good enough” a serious competitive threat.
• Chinese chips enter the picture: V4 is the first frontier model optimized for Huawei’s Ascend processors rather than NVIDIA, though training may still rely on U.S. chips.
• Distillation tensions boil over: The launch coincided with a U.S. State Department cable warning allies about Chinese IP theft, with OpenAI and Anthropic both alleging large-scale unauthorized use of their models.
• The gap is compressing fast: The Stanford 2026 AI Index found the U.S.-China performance gap shrank to 2.7 percentage points, down from as high as 31.6 in 2023, even as U.S. investment outpaces China’s by 23x.
Why DeepSeek V4 matters beyond benchmarks
In the current geopolitical context, where nuclear-armed nations can no longer depend on direct military confrontation to secure their interests, technology has become one of the most powerful instruments for expanding and preserving spheres of influence. Among these technologies, artificial intelligence is increasingly seen as a defining force shaping the future of global politics.
In that context, AI now stands at a strategic crossroads. On one hand, there are proprietary frontier models largely developed in the Western world, led by the United States. On the other hand, open-source and increasingly capable models are emerging from China. While the debate over which country is truly ahead in the AI race remains complex, and the long-term implications of each strategy are still unfolding, one reality is becoming harder to ignore: Chinese AI models are beginning to pose a serious competitive challenge to those being built in the United States.
On April 24, the Chinese AI lab DeepSeek released preview versions of two new models, V4-Pro and V4-Flash.
DeepSeek says its new V4-Flash and V4-Pro models are mixture-of-experts systems with 1M-token context windows, large enough to handle massive codebases or lengthy documents in a single prompt while cutting inference costs by activating only a subset of parameters for each task.
The flagship V4-Pro contains 1.6T total parameters with 49B active at a time, making it the largest open-weight model currently available. DeepSeek says architectural improvements make both models more efficient and capable than V3.2, bringing them close to the performance of today’s leading open and closed systems.
The company also claims the higher-end V4-Pro-Max surpasses rival open-source models on reasoning benchmarks and beats OpenAI’s GPT-5.2 and Gemini 3.0 Pro on some tasks, while both V4 models deliver coding benchmark results comparable to GPT-5.4.
The rise of good enough AI
Despite the gains, the company itself admits that neither model ranks at the top of the leaderboard. DeepSeek’s technical report states that V4 “falls marginally short” of OpenAI’s GPT-5.4 and Google’s Gemini 3.1 Pro, trailing the best closed-source models by roughly three to six months of development.
But V4’s significance is not about who sits at the top of a benchmark table. It is about three shifts the model represents in the broader AI race: the compression of pricing, the push toward Chinese chip independence, and the escalating confrontation between Washington and Beijing over AI intellectual property.
V4-Pro costs $1.74/M input tokens (the units of text that AI models process) and $3.48/M output tokens, according to MIT Technology Review. OpenAI and Anthropic charge $30 and $25, respectively, for the same volume of output, according to Bloomberg.
That puts V4-Pro at roughly one-seventh to one-eighth the cost of comparable American models.
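Using the per-million-token rates cited above, the math works out as follows (a back-of-the-envelope sketch comparing published output rates; the monthly workload figure is an illustrative assumption, not from the article):

```python
# Back-of-the-envelope cost comparison using the per-million-token
# output rates cited above (illustrative only).
V4_PRO_OUTPUT = 3.48  # $ per 1M output tokens (DeepSeek V4-Pro)
US_OUTPUT = {"OpenAI": 30.0, "Anthropic": 25.0}  # $ per 1M output tokens

for vendor, rate in US_OUTPUT.items():
    ratio = rate / V4_PRO_OUTPUT
    print(f"{vendor}: {ratio:.1f}x the cost of V4-Pro per 1M output tokens")

# A hypothetical workload emitting 500M output tokens per month:
tokens_m = 500
print(f"V4-Pro: ${V4_PRO_OUTPUT * tokens_m:,.0f}/mo "
      f"vs OpenAI: ${US_OUTPUT['OpenAI'] * tokens_m:,.0f}/mo")
```

The ratios land between roughly 7x and 9x, which is where the "one-seventh to one-eighth" framing comes from.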
This is possible because both models use a mixture-of-experts architecture, a design in which only a fraction of the model’s total parameters are active for any given task, thereby keeping computational costs low. V4-Pro has 1.6T total parameters, but only 49B are active at once. A new attention mechanism further reduces computational cost for long texts: in a million-token context, V4-Pro uses only 27% of the computing power and 10% of the memory of its predecessor.
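The "only a fraction of parameters active" idea can be sketched as top-k gated routing, where a small gate network scores every expert but only the k best actually run. This is a toy illustration of the general technique, not DeepSeek's actual implementation; the scalar "experts" stand in for full sub-networks:

```python
import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, experts, gate_weights, k=2):
    """Route input x to the top-k experts only; the rest stay idle."""
    scores = softmax([sum(w * xi for w, xi in zip(row, x)) for row in gate_weights])
    top_k = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:k]
    # Weighted sum over the k active experts; the other experts do no work.
    return sum(scores[i] * experts[i](x) for i in top_k), top_k

random.seed(0)
n_experts, dim = 8, 4
# Each "expert" here is a scalar function standing in for a sub-network.
experts = [lambda x, s=i: s * sum(x) for i in range(n_experts)]
gate = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_experts)]

out, active = moe_forward([0.5, -0.2, 0.1, 0.9], experts, gate, k=2)
print(f"active experts: {active} (2 of {n_experts}, {2 / n_experts:.0%} of capacity)")
```

In V4-Pro's case the same logic keeps 49B of 1.6T parameters active per token, about 3% of the model, which is where the inference savings come from.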
On the major benchmarks, V4-Pro competes with Anthropic’s Claude Opus 4.6, OpenAI’s GPT-5.4, and Google’s Gemini 3.1 on several tests, and exceeds all other open-source models on coding and math.
This means DeepSeek’s models do not need to be the best to matter. Where they score in the same range as frontier American models, the cost difference becomes the deciding factor for many buyers.
And even setting aside the computing-cost differences between the American and Chinese models, the choice of hardware is impossible to ignore, and it could have far-reaching consequences for American chipmakers.
The chip question gets real
V4 is the first frontier-class release explicitly optimized for Huawei’s Ascend AI processors rather than NVIDIA hardware. On launch day, Huawei confirmed that its Ascend SuperNode products based on the Ascend 950 series would support V4. He Hui, director of semiconductor research at consultancy Omdia, told Reuters that V4 shows “top Chinese AI models can now run on Chinese hardware.”
This matters because NVIDIA’s GPUs (the specialized chips that power most AI work) dominate the global market, and U.S. export controls restrict Chinese access to the most advanced ones. DeepSeek’s earlier R1, which shook markets in January 2025, was trained on NVIDIA’s H800 chips. V4 signals a shift away from that dependency.
However, the picture is more complicated than a mere reduction of dependency on Western hardware.
Liu Zhiyuan, a computer science professor at Tsinghua University, told MIT Technology Review that while V4 uses Chinese chips for inference (running the model when a user sends a prompt), training may still have relied primarily on NVIDIA hardware. DeepSeek has not clarified whether that is the case. While running inference on Huawei chips would be a meaningful milestone, it is not the same as training a frontier model entirely on domestic hardware. Multiple anonymous sources told MIT Technology Review that Chinese chips still do not match NVIDIA’s performance and are better suited for inference workloads than for large-scale training.
Adding to the complication, Bloomberg reports that V4’s launch slipped from its original February-March window because DeepSeek spent months reworking its software for Huawei’s chips.
Given that DeepSeek is now using Huawei chips for inference, MIT Technology Review notes that V4-Pro prices could fall further once Huawei ships Ascend 950 processors at scale later in 2026.
Which brings us to the third, and the most important impact of the model: the escalating confrontation between Washington and Beijing over AI intellectual property.
The distillation fight arrives at V4’s door
V4 launched the same day that the U.S. State Department sent a diplomatic cable to embassies worldwide warning about Chinese efforts to steal IP from American AI labs through distillation.
Distillation involves feeding a powerful AI model many carefully crafted questions, collecting its responses, and using those to train a cheaper model that mimics the original. It is a standard technique, routinely used by companies on their own models. However, in the case of DeepSeek, the central question is whether the company used it on competitors’ models without permission, on an industrial scale.
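The distillation loop described above can be sketched in a few lines. This is a toy illustration of the general technique, not anyone's actual pipeline; `query_teacher` and `finetune_student` are hypothetical stand-ins for a frontier-model API call and a training job:

```python
# Toy sketch of distillation: harvest a teacher model's answers, then
# train a cheaper student to imitate them. `query_teacher` and
# `finetune_student` are hypothetical stand-ins, not a real API.
def query_teacher(prompt: str) -> str:
    # In practice: an API call to the frontier model being distilled.
    return f"teacher answer to: {prompt}"

def distill(prompts, finetune_student):
    # Step 1: collect (prompt, teacher response) pairs.
    dataset = [(p, query_teacher(p)) for p in prompts]
    # Step 2: train the student to imitate the teacher's outputs.
    return finetune_student(dataset)

prompts = ["Explain MoE routing.", "Summarize chip export controls."]
student = distill(prompts, finetune_student=lambda ds: {"examples": len(ds)})
print(student)
```

Run legitimately against your own model, this is routine engineering; run at scale against a competitor's API through thousands of accounts, it becomes the IP dispute described below.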
On February 12, 2026, OpenAI told the U.S. House Select Committee on China in a written submission that it had observed “ongoing attempts by DeepSeek to distill frontier models.” Anthropic followed, stating that roughly 24,000 fraudulent accounts had generated over 16M exchanges with Claude, and White House science adviser Michael Kratsios accused Chinese entities of running “industrial-scale” distillation campaigns.
Meanwhile, DeepSeek said that while its earlier models relied on web-crawled data, it did not intentionally use synthetic data from OpenAI, prompting China’s foreign ministry to call the accusations “groundless.”
Regardless of whether DeepSeek actually used data from OpenAI and Anthropic, as the U.S. claims, the central problem remains that Chinese AI labs can now train and run models at much lower cost than their U.S. counterparts.
Why DeepSeek matters even without winning
The Stanford 2026 AI Index Report highlights this tension. The report found that the performance gap between the top U.S. and Chinese AI models had compressed to 2.7 percentage points as of March 2026, down from 17.5 to 31.6 in May 2023.
U.S. private AI investment reached $285.9B in 2025, more than 23 times China’s $12.4B. Despite that spending gap, the capability gap keeps shrinking, and the arguments over how that happened are getting louder.
The latest DeepSeek models do not resolve these questions: they are still in preview and remain text-only while competitors offer image and video, and whether they were trained substantially on Chinese chips is an open question.
DeepSeek has not answered that question, but V4 has already shown why it matters: not because it wins, but because it makes advanced AI cheaper, less dependent on U.S. chips, and more politically contentious.


Bite-Sized Brains
• Tinder scanning eyeballs: Futurism reports that Tinder is rolling out iris scanning, presumably for identity verification.
• OpenAI's Tumbler Ridge apology: Altman apologizes to the Canadian town after OpenAI flagged the mass shooter's ChatGPT account months earlier but failed to alert law enforcement.
• AI anxiety college major: College students are picking majors out of AI-driven career anxiety.

2026 Salary Report: U.S. vs Global hiring.
Want to know what world-class talent actually costs in 2026?
Athyna's Salary Report breaks down real salary data across AI, Tech, Data, Design, and more—so you can see exactly where the savings are.
The numbers might surprise you.
*This is sponsored content

Prompt Of The Day
Act as an AI procurement strategist. Build me a 30-day plan to test whether DeepSeek V4-Pro can replace my frontier U.S. model on specific workloads, covering quality, cost, data risk, and fallback routing.

Tuesday Poll
🗳️ DeepSeek V4 is "good enough" at one-eighth the price. What's your move?
The Toolkit
• Mirage: AI video generator that turns prompts or photos into cinematic clips with multimodal foundation models.
• Speechmatics: Speech-to-text engine built for accuracy across accents, noisy audio, and real-time use.
• Superhuman: AI-powered email client that drafts replies, summarizes threads, and gets you to inbox zero faster.

Rate This Edition
What did you think of today's email?
