When Compliance Becomes The Moat
Plus: SAP’s robots, Oracle layoffs, and Big Tech’s energy crunch.
Here’s what’s on our plate today:
🧪 Tabnine, enterprise coding, and why trust can beat raw model power.
🧠 SAP’s robot rollout, Oracle cuts, and Big Tech’s energy squeeze.
💡 Roko’s Prompt of the Day on choosing between smarts, privacy, and control.
📊 Tuesday poll on what enterprises actually want from AI coding tools.
Let’s dive in. No floaties needed…

The People Behind Better Language Models
LLM quality doesn’t come from architecture alone. It comes from people who can read a model’s output critically, write prompts that expose its weaknesses, and annotate responses with the nuance that generic pipelines miss. Ours are bilingual, trained on real projects, and ready to integrate into your evaluation workflows.
Prompt design, evaluation, and adversarial testing
Response scoring, ranking, and preference data collection
Multilingual annotation across Spanish, Portuguese, and English
Part of our vetted LATAM talent network, working in U.S.-aligned time zones.
*This is sponsored content

The Laboratory
TL;DR
Compliance is the wedge: Tabnine did not try to beat Copilot or Cursor on raw model power. It is built for the question enterprises actually care about: where does our code go, and who can see it?
Training data became a legal feature: By using only permissively licensed code and excluding GPL-style sources, Tabnine turned copyright caution into a product advantage after the Copilot lawsuits.
The same playbook now extends to agents: Its agentic tools run inside customer-controlled environments, with context, rules, and infrastructure defined by the organization, not the vendor.
The long-term risk is convergence: If Microsoft, Google, and Amazon close the privacy and governance gap, Tabnine’s compliance-first edge gets much harder to defend.
How Tabnine survived the Copilot era by offering safety over frontier AI
Over the past few years, artificial intelligence has evolved from chatbots generating poetry and stories to a technology poised to change the very fabric of developed nations. The transformation has, in part, been possible due to the technology’s impact on human cognitive work, especially in the software development space.
Here, the transition has been so rapid that today, every software developer is either using an AI coding assistant or being pushed to adopt one. That demand has made coding assistants one of the most contested categories in software. GitHub Copilot, backed by Microsoft and built on OpenAI’s models, has more than 20M users and is used by 90% of Fortune 100 companies. Cursor, a San Francisco-based startup, has attracted a devoted and vocal developer following. Meanwhile, Amazon has a product built into its cloud services. Even Google made its coding assistant free for individual developers in February 2025, offering 180,000 AI-generated code suggestions per month. The message from the largest technology companies is consistent: AI that helps write software is now a standard feature, not a premium one.
The missing layer
As the hyperscalers continue to make inroads into the AI coding assistant market, they have left a crucial gap unfilled, opening the path for smaller, more focused vendors to make their presence felt.
From the developer’s perspective, the AI coding assistant market looks straightforward: a developer opens their editor, starts typing, and the AI suggests the next step. In that scenario, the tool with the most capable model, the fastest response, and the deepest integration into the editor is the obvious choice.
However, this logic breaks down when one examines how large organizations actually make software purchasing decisions. In large enterprises, the person who approves the tool is usually not the developer, but a procurement committee, a legal team, or a chief information security officer. They are less concerned with price and ease of use than with what happens to the code the tool touches. For them, the bigger questions are where the code sent to the AI goes, who stores it, whether it ends up in a training dataset, and whether using it creates any legal exposure under copyright or data protection laws.
These concerns are legitimate and have become harder to dismiss in recent years. In 2023, engineers at Samsung were reported to have inadvertently shared internal source code and confidential chip specifications with ChatGPT on multiple occasions, with the data reaching OpenAI’s servers before the company could intervene.
The incident was not an anomaly. A research analysis of 22M enterprise AI prompts found that employees in more than 90% of large organizations are actively using AI tools, often through personal accounts that IT departments have not approved. In 26% of organizations, sensitive data regularly reaches public AI systems. This gap between enterprise requirements and what cloud-based tools are designed to offer is where Tabnine has built its business.
Built for enterprise
Tabnine began as Codota in 2013, founded in Tel Aviv by Dror Weiss and Eran Yahav. In its early years, the company focused on AI-assisted code completion for Java, drawing on academic research from the Technion – Israel Institute of Technology.
In 2019, it acquired TabNine, a project built by Jacob Jackson, a graduate of the University of Waterloo, which used deep learning models to expand code completion across multiple programming languages. The combined company adopted this broader approach and rebranded as Tabnine in 2021.
Under the new approach, the core product does what every other AI coding assistant does: it watches a developer type and suggests what should come next, completing lines and blocks of code based on context. What defines Tabnine’s market position, however, is the range of options for where it runs. Tabnine offers four deployment modes: a standard cloud service, a virtual private cloud where the model runs inside the customer’s own cloud environment, a fully on-premises installation where Tabnine has no access to the customer’s network, and a fully air-gapped mode with no external connections at all.
For organizations in regulated industries, the on-premises and air-gapped options mean that code never leaves their infrastructure. And even in the cloud modes, where code does leave, Tabnine says it is processed transiently and immediately discarded, never stored, and never used to train its models.
The company also deliberately chose how it trained its AI. It uses only permissively licensed open-source code and explicitly excludes copyleft licenses such as the GPL, which could create legal obligations for organizations whose developers generate code under them.
This was a direct response to the copyright class action filed against GitHub Copilot in November 2022, which alleged that Copilot reproduced licensed code from public repositories without regard for the licenses’ terms. For corporate legal teams reviewing AI tools, this distinction matters in ways that feature comparisons do not capture.
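To make the curation step concrete, license-based filtering of training data amounts to admitting a repository only when its license is known and permissive. The sketch below is a hypothetical illustration of that policy, not Tabnine’s actual pipeline; the license lists and repository records are invented for the example (identifiers follow the SPDX convention).

```python
# Hypothetical sketch of copyleft exclusion during training-data curation.
# License identifiers use SPDX naming; the repo records are illustrative only.

PERMISSIVE = {"MIT", "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause", "ISC"}
COPYLEFT = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only", "LGPL-3.0-only"}

def keep_for_training(repo):
    """Admit a repo only if its license is known and permissive."""
    license_id = repo.get("license")
    if license_id is None:        # unknown licensing: exclude by default
        return False
    if license_id in COPYLEFT:    # GPL-style terms: exclude
        return False
    return license_id in PERMISSIVE

repos = [
    {"name": "utils", "license": "MIT"},
    {"name": "kernel-fork", "license": "GPL-2.0-only"},
    {"name": "mystery", "license": None},
]
training_set = [r["name"] for r in repos if keep_for_training(r)]
print(training_set)  # only the permissively licensed repo survives
```

The notable design choice is the default: a repository with unknown licensing is excluded rather than admitted, which is the conservative posture a legal team reviewing the pipeline would expect.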
Safety over smarts
Tabnine has not only survived the crowded AI coding market but also carved out a distinct position within it. As coding assistants have grown more capable, competition has shifted away from raw model performance toward how well these systems fit into real organizational environments. For large enterprises, the decision is rarely about which model is smartest. It hinges on compliance requirements, legal exposure, infrastructure constraints, and the governance frameworks that control how code and data are handled.
This is the layer where Tabnine competes. Rather than trying to outperform foundation models from OpenAI, Google, or Anthropic, it focuses on making AI deployable in restricted environments. That means private installations, stricter data controls, and deeper integration with internal systems, turning usability and trust into its primary differentiators.
The agentic shift
The company applied the same approach to its agentic AI push. In November 2025, Tabnine launched Tabnine Agentic, a product designed to handle multi-step workflows rather than single prompts. Unlike competing agentic tools, Tabnine’s version runs inside the customer’s own infrastructure, pulls context from systems the organization already controls, and operates under rules defined by the organization rather than the vendor.
The timing matters. AI agents are becoming the next battleground for coding assistants, sitting closer to actual software delivery, and Tabnine is positioning its compliance-first model as its differentiator in that race.
No single winner
The AI coding assistant market is not heading toward a single winner. It is being split into segments that serve different buyers under different conditions. GitHub Copilot and Cursor will continue competing for the mainstream developer market on model quality and feature depth. Google and Amazon will use pricing leverage and cloud integration to compete at scale. The compliance-first segment that Tabnine occupies is growing in importance as regulators in Europe and the United States tighten requirements for how organizations use AI systems that process sensitive data. The conversations Tabnine has been having with enterprise procurement teams for years are becoming a contest the entire industry is forced to join.
The open question is whether the major platform providers eventually absorb this segment by improving their own governance and isolation options to the point where the difference disappears. Microsoft has already introduced additional isolation options in Copilot Enterprise, and Amazon and Google are building in the same direction. If the capability gap between Tabnine’s privacy-first deployment and a fully isolated version of Copilot continues to close, the argument for choosing a smaller vendor with a weaker model becomes harder to make.
What Tabnine has already demonstrated, regardless of where that competition goes, is that surviving the platform era does not require beating the platforms at their own game. It requires finding the part of the market that the platforms are not sufficiently motivated to serve, and building something specific enough to matter to the organizations that live there.


Bite-Sized Brains
SAP sends robots to work: SAP and ANYbotics are integrating four-legged inspection robots directly into SAP’s backend systems so faults can trigger maintenance workflows automatically, rather than waiting for human reporting.
Oracle cuts to fund AI: Oracle has begun major layoffs, reportedly affecting thousands, as it tries to free up cash for its aggressive AI data center buildout and broader infrastructure push.
Big Tech’s energy test: S&P Global says the roughly $635B to $665B AI spending wave from the biggest tech firms now faces an energy shock, with power costs and grid pressure becoming real constraints.

Outperform the competition.
Business is hard. And sometimes you don’t really have the necessary tools to be great in your job. Well, Open Source CEO is here to change that.
Tools & resources, ranging from playbooks, databases, courses, and more.
Deep dives on famous visionary leaders.
Interviews with entrepreneurs and playbook breakdowns.
Are you ready to see what it’s all about?
*This is sponsored content

Prompt Of The Day
‘Act as an enterprise AI procurement advisor. I will describe my company’s coding workflow and risk profile. Tell me whether we should prioritize model quality, compliance controls, cloud integration, or deployment flexibility, and explain what tradeoff we are actually making.’

Tuesday Poll
🗳️ What matters most when enterprises choose an AI coding assistant?
The Toolkit
Pika: Fast, social-first AI video tool built for creators who want quick scenes, effects, and remixable clips without pro editing skills.
Runway: Higher-end AI video platform for more controlled, cinematic generation and real production workflows.
CapCut: Consumer-friendly editing stack with AI video tools, captions, speech features, and templates already built for short-form distribution.

Rate This Edition
What did you think of today's email?





