Luminance Bets On Legal Memory

Plus: Altman’s Pentagon heat, funky AI valuations, and Google trimming its store cut.

Here’s what’s on our plate today:

  • 🧪 Luminance, legal AI moats, and who really owns the stack.

  • 📰 Altman Pentagon blowback, dual equity, and Google fees.

  • 🗳️ Poll: Legal AI moats, where’s the real edge?

  • 🧠 Roko’s Pro Tip: Treat institutional memory as your real legal moat.

Let’s dive in. No floaties needed…

Build Internal AI Systems That Actually Work

Buying AI tools is easy. Building workflows your team trusts and actually uses is the hard part. You need designers who understand agents, automations, and LLMs well enough to connect them into something that holds up in production.

This is the kind of talent you get with Athyna Intelligence—vetted LATAM AI specialists working in U.S.-aligned time zones.

*This is sponsored content

The Laboratory

What Luminance reveals about the future of legal AI

TL;DR

  • Legal AI is split between generic model wrappers and domain-specific stacks.

  • Luminance trained a legal-focused transformer on 220M mostly private documents.

  • Its moat is institutional memory plus a multi-model ‘panel of judges’ for checks.

  • In legal AI, the real edge is embedded knowledge and workflows, not model choice.

Eleanor Lightbody of Luminance reflects a different vision for legal AI, one where the lasting advantage comes not from the model alone, but from proprietary data, multi-model systems, and institutional memory. Photo Credit: Cambridge Independent.

There are key differences in how enterprises and end users adopt new technology. For the end user, new research and subsequent advancements may lead to a new product or an improvement over the previous generation. For an enterprise, though, a new technology means creating a roadmap to leverage its capabilities to improve its business operations and increase profits.

This difference in approaches is reflected not only in how the developer packages a technology for different customers but also in how it is marketed and refined over time.

For end users, AI often means generative AI, such as chatbots that let users generate text, video, and audio with prompts. For enterprises, the story is very different. For them, AI is both an opportunity to increase the value of their services and a competitor that can pull customers away from existing subscriptions. For businesses, some difficult choices have to be made, and these choices could mean the difference between dominating their industry or becoming obsolete within a few years.

Currently, the biggest question for most enterprises is how to leverage underlying technology to upsell their existing products or to become the layer between AI and the end consumer. To achieve this, they first have to decide whether to build their own specialized models for their industry or rely on general-purpose models from frontier AI labs. And nowhere is this tension better represented than in the field of legal tech.

In 2025, something unusual took place in the legal tech industry. The sector attracted a healthy $6B in funding, but the money did not spread evenly among the prominent names.

Of the $6B, one company, Harvey, raised $818M across four rounds in a single calendar year and reached an $8B valuation. At the same time, another company operating in this space, Robin AI, failed to secure a $50M funding round, was hit with a tax demand from U.K. authorities, and ultimately sold its managed services team to a smaller competitor.

What makes this contrast interesting is that it raises a question bigger than either company’s fortunes. When companies build AI tools for specialized industries, does it really matter whether they develop their own models or rely on someone else’s?

This is a practical concern, not a theoretical one, for CIOs and technology leaders choosing enterprise AI vendors: the answer affects core issues such as data security, reliance on a single provider, long-term product performance, and overall vendor risk.

Build, wrap, or something in between

So far, the dominant playbook in enterprise AI has been to take a powerful general-purpose model, typically from a company like OpenAI, Anthropic, or Google, and build an application layer on top of it. To ensure it caters to a specific industry, domain-specific data is added via RAG (retrieval-augmented generation), which feeds the model relevant documents at query time. To stand out from the competition, enterprises have relied on diverse workflow designs, user experiences, and enterprise integrations.
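The query-time retrieval step described above can be sketched in a few lines. This is a minimal, illustrative RAG loop: the function names (`tokens`, `retrieve`, `answer_with_context`) and the keyword-overlap scoring are assumptions for demonstration, not any vendor’s actual pipeline, and the “model” is a stub standing in for a hosted LLM call.

```python
# Minimal RAG sketch: retrieve the most relevant documents, then feed
# them to the model alongside the query. All names are illustrative.
import re

def tokens(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by crude word overlap with the query, keep the top k."""
    return sorted(corpus, key=lambda d: len(tokens(query) & tokens(d)), reverse=True)[:k]

def answer_with_context(query: str, corpus: list[str], call_model) -> str:
    """Feed the retrieved documents to the model at query time."""
    context = "\n---\n".join(retrieve(query, corpus))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_model(prompt)

corpus = [
    "The indemnification clause caps liability at twelve months of fees.",
    "Employees accrue twenty days of paid leave per calendar year.",
    "Either party may terminate the agreement with ninety days written notice.",
]

# Stub standing in for a hosted LLM call.
stub_model = lambda prompt: prompt.splitlines()[-1]

answer = answer_with_context("What is the notice period for termination?", corpus, stub_model)
```

Real systems replace the overlap score with embedding similarity and the stub with an API call, but the shape is the same: the domain knowledge lives in the corpus, not the model.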

This same playbook has been playing out in the legal tech sector. Harvey, the market’s highest-valued player, was originally built on GPT-4 and received early access and seed funding directly from OpenAI.

Robin, meanwhile, was built primarily on Anthropic’s Claude. For both companies, the logic was the same: why spend years and tens of millions training your own model when you can access the world’s most powerful AI systems through an API?

So, why did one of these companies gain immense traction while the other failed to survive? The answer does not lie in a simple comparison of business models or in how these companies deployed AI; it becomes visible when viewed through the trajectory of another player in the legal tech space: Luminance.

Why Luminance stands out in the legal tech space

Luminance, a Cambridge-founded legal AI company, spent a decade building what it calls a Legal Pre-trained Transformer, a proprietary model trained on over 220M verified legal documents.

Many of these documents are not publicly available, meaning no foundation model provider could have included them in its training data. The company has raised $165M in total, a fraction of Harvey’s war chest, but claims more than 1k clients and revenue doubling for two consecutive years.

Luminance presents a different, more nuanced approach to legal tech than Harvey or Robin AI. Its stance is that the proprietary data corpus, not the model itself, is the real asset. Over 150M legal documents absent from any foundation model’s training set constitute, in theory, a permanent information advantage: general-purpose models can get better at reasoning about legal language, but they cannot learn from contracts they have never seen.

Luminance also has the advantage of not relying on any single proprietary model. The company’s ‘Panel of Judges’ architecture uses multiple models, including proprietary systems, fine-tuned open-source models, and commercial models, which cross-check each other’s work. The company describes this as a ‘Mixture of Experts’ approach, in which different models contribute their strengths and validate each other’s outputs.
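The cross-checking idea can be sketched as a simple consensus vote. To be clear, this is a hypothetical illustration of multi-model validation in general, not Luminance’s actual ‘Panel of Judges’ implementation; the judge callables are stubs standing in for distinct models.

```python
# Hedged sketch of multi-model cross-checking: ask several models the
# same question and only accept an answer the panel agrees on.
from collections import Counter

def panel_verdict(question, judges, quorum=0.5):
    """Query every judge; accept the top answer only if it clears the quorum."""
    answers = [judge(question) for judge in judges]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes / len(judges) > quorum:
        return answer, answers
    return None, answers  # no consensus: flag for human review

# Stub judges standing in for proprietary, open-source, and commercial models.
judges = [
    lambda q: "clause is enforceable",
    lambda q: "clause is enforceable",
    lambda q: "clause is unenforceable",
]

verdict, raw_answers = panel_verdict("Is the non-compete clause enforceable?", judges)
```

The design point is that disagreement becomes a signal: when models diverge, the system can escalate to a human rather than confidently return one model’s hallucination.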

The company has also introduced what it calls institutional memory: a system that retains negotiation history, decision rationale, and organizational context throughout the entire contract lifecycle. This is not just a technical feature; it is a strategic lock-in mechanism. If Luminance holds five years of a company’s negotiation precedents and institutional knowledge, switching to a competitor means losing that memory.
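The lock-in mechanics of institutional memory can be made concrete with a toy store. The schema and method names below are assumptions for illustration, not Luminance’s design; the point is that accumulated precedent lives in the vendor’s system, so switching vendors means starting from an empty history.

```python
# Illustrative sketch of "institutional memory": a per-contract store of
# past decisions and their rationale, consulted on every new negotiation.
from dataclasses import dataclass, field

@dataclass
class ContractMemory:
    """Accumulates decisions across a contract's lifecycle."""
    history: dict[str, list[dict]] = field(default_factory=dict)

    def record(self, contract_id: str, decision: str, rationale: str) -> None:
        """Append a decision and why it was made to the contract's history."""
        self.history.setdefault(contract_id, []).append(
            {"decision": decision, "rationale": rationale}
        )

    def recall(self, contract_id: str) -> list[dict]:
        """Surface past precedent when the same counterparty comes up again."""
        return self.history.get(contract_id, [])

memory = ContractMemory()
memory.record("acme-msa", "rejected unlimited liability", "exceeds risk policy")
memory.record("acme-msa", "accepted 60-day cure period", "standard for vendor MSAs")
```

After five years, `memory` holds thousands of such entries; a competitor’s product, however capable its model, starts with `recall()` returning nothing.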

CEO Eleanor Lightbody framed this directly: “Enterprise amnesia is real, and it’s costly. Current AI systems are helpful in the moment, but become disconnected over time. Our new platform remembers, reasons, and stays with the work in perpetuity.”

While the industry consensus is to wrap general-purpose models with industry-specific and proprietary data, Luminance stands out for this more layered approach.

Despite these differing strategies, the legal tech industry remains in a phase of rapid evolution, and Harvey and Luminance’s trajectories increasingly suggest a gradual convergence toward similar architectural approaches.

Luminance, which started with a proprietary model, now uses a diverse range of models, including commercial and open-source options alongside its proprietary systems. Harvey, which started with OpenAI, now uses multiple foundation models plus custom-trained components. Both companies describe their architecture using language about multi-model orchestration and agent-based systems.

If the most effective approach is to use the best available models, combined with domain-specific data and workflow logic, then the old build-versus-buy debate loses its meaning. The real advantage no longer lies in the model itself, but in how well a company integrates it, the quality of its data, the design of its workflows, and the institutional knowledge it builds over time.

In this scenario, Robin AI serves as an important warning about the wrapper approach. And though there was more to the company’s collapse than its technical choices, it showed that in the age of AI, merely wrapping the underlying technology may not be enough. To stand out and build durable businesses, companies will have to rethink their strategies and strengthen their core assets to keep evolving alongside the technology.

The bet beneath the bet

At its core, the proprietary model wager is a bet on the future pace of foundation model improvement. If general-purpose models continue improving rapidly, they will absorb domain expertise and erode proprietary model advantages quarter by quarter. If progress plateaus or the unique nuances of legal reasoning prove resistant to general-purpose training, companies like Luminance that invested in domain-specific models will be vindicated.

Currently, it is impossible to know which scenario will materialize; however, it is increasingly clear that the strongest positions in this market will belong not to the purest model builders or the fastest API wrappers, but to the companies that control the most defensible layer of the stack: the ones that hold institutional knowledge, integrate into workflows, and become harder to replace the longer they are used.

Roko Pro Tip

💡 If you build vertical AI, stop obsessing over which frontier model you use and start hoarding institutional memory. The product that remembers five years of decisions, edge cases, and negotiation history will beat the prettier interface that forgets every interaction.

Framer for Startups

First impressions matter. With Framer, early-stage founders can launch a beautiful, production-ready site in hours. No dev team, no hassle. Join hundreds of YC-backed startups that launched here and never looked back.

Key value props:

Eligibility: Pre-seed and seed-stage startups, new to Framer.

*This is sponsored content

Bite-Sized Brains

  • Altman Pentagon fallout: Sam Altman admits OpenAI cannot control the Pentagon’s use of its AI, deepening criticism of military deals and safety red lines.  

  • Dual-price AI rounds: Some AI startups are selling the same equity to retail investors at higher prices than VCs, raising fresh questions about fairness and governance.

  • Google trims Play Store fees: Google will cut many US Play Store commissions from 30% to 20% after its Epic settlement, shifting app margins while keeping its tricky eligibility rules.

Monday Poll

🗳️ Where do you think the most durable moat will be in legal AI stacks like Luminance?


Meme Of The Day

The Toolkit

  • Synthesise AI: AI-native data platform that turns messy business data into clean customer, revenue, and product foundations for analytics and AI.

  • Runway: Generative video studio for creators and teams to script, edit, and produce high-quality AI video directly in the browser.

  • Tabnine: IDE-first AI coding assistant that autocompletes, generates, and refactors code with strong privacy controls for dev teams.

Rate This Edition

What did you think of today's email?
