Why The Future Of Coding Isn’t About Typing
An interview with Zach Lloyd, Founder & CEO of Warp
Inside the Agentic Developer Environment with Zach Lloyd
Welcome to Revenge of the Nerds. We’re skipping the hype and going straight to the builders. In this edition, we talked about:
Why the future of coding isn’t about typing faster, but guiding intelligent agents with clear intent
The real difference between vibe coding and production-grade agent workflows
Why context, planning, and interface design—not just model quality—will decide who wins in AI-powered development
Let’s dive in. No floaties needed.

Want to get the most out of ChatGPT?
ChatGPT is a superpower if you know how to use it correctly.
Discover how HubSpot's guide to AI can elevate both your productivity and creativity to get more things done.
Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI.
*This is sponsored content

Revenge of the Nerds
Zach Lloyd, Founder & CEO of Warp
He began his career at Google, where he helped grow Google Sheets to hundreds of millions of users. During his time working with developers, he realized that the terminal, one of the most essential tools in software engineering, had barely changed in decades.
In 2020, Lloyd launched Warp to build a faster, more intuitive terminal that supports collaboration, cloud workflows, and AI-assisted development. Under his leadership, Warp has gained strong adoption across the tech industry and is shaping how developers interact with their core tools.
What lessons from Docs or Sheets shaped how you design for developers?
One of the biggest lessons from working on Docs and Sheets is the value of starting with a form factor people already understand. With Google’s tools, we took something familiar like a document or spreadsheet and made it better through collaboration and accessibility. That made it much easier for users to adopt the next generation of the product.
Warp follows the same idea. We began with the terminal because it is one of the most fundamental tools developers use. Once you build a great version of that core experience, you can layer innovation on top of it. Users can still work the way they always have, but they also get the power to interact in plain English and use AI to move faster. Starting from something familiar creates a smoother bridge into a new category of tooling.
What is the biggest misconception about building real software with AI agents today?
One of the biggest misconceptions is that vibe coding and building with agents are the same thing. People often lump them together, but they are very different approaches. Vibe coding is when you repeatedly ask an AI to build something for you, hoping it eventually works. You ignore the code, treat it as an implementation detail, and focus only on the final output. That approach completely breaks down inside real, professional codebases because the code needs to be secure, maintainable, and understandable.
If you use agents correctly, the workflow looks nothing like vibe coding. You plan changes with the agent, review its work, and treat the final code as your own. With that structure, agents can successfully contribute to real production software. The misconception is thinking the failures of vibe coding apply to agent-based development, when in reality they are two very different practices.
In 10 years, what will the default developer environment look like, and what will disappear?
I think the default environment will feel more like a cockpit than a traditional editor. A developer will oversee and coordinate a network of intelligent agents that handle many of the tasks humans do today. Some of these agents will run automatically when something happens, like a server crash or a cluster of user reports. Others will be launched directly by the developer via a natural-language prompt. In both cases, the developer’s role will be more about guiding, reviewing, and steering work rather than doing every step manually.
What will fade into the background is the constant hand work. Typing long blocks of code, running command after command, and digging through systems by hand will happen far less often. You will still be able to do it when needed, but it will feel like going down a level of abstraction. The everyday workflow will revolve around describing intent, reviewing agent output, and keeping a fleet of autonomous or semi-autonomous systems aligned with what actually needs to be built.
What is one bottleneck that still makes coding with AI feel clunky or unreliable today?
The biggest bottleneck right now is context. Agents never have the full picture of the system they are working in because an LLM can only attend to a limited window of information at once. That means the agent often misses dependencies, forgets what it already did, or makes incorrect assumptions about how different parts of the codebase fit together. When a human engineer understands a system, they carry a mental model of how everything interacts. Agents do not have that model.
They rely entirely on whatever slice of information you can fit into the prompt or retrieve for them. Until agents can maintain and reason about a more complete understanding of a codebase, this lack of context will continue to cause drift, mistakes, and moments when things simply fall apart.
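The hard cutoff Lloyd describes can be made concrete with a toy sketch. This is a hypothetical illustration, not how any real agent works: file snippets are greedily packed into a fixed token budget, and whatever does not fit is simply invisible to the model, including files the included code depends on.

```python
# Hypothetical sketch: why a limited context window makes an agent miss
# dependencies. Snippets are packed greedily into a token budget; anything
# that does not fit never reaches the model at all.

def rough_token_count(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def pack_context(snippets: dict[str, str], budget: int) -> tuple[list[str], list[str]]:
    """Greedily include snippets until the budget runs out.

    Returns (included, excluded) file names. Real retrieval systems rank
    by relevance first, but the hard cutoff works the same way.
    """
    included, excluded = [], []
    used = 0
    for name, text in snippets.items():
        cost = rough_token_count(text)
        if used + cost <= budget:
            included.append(name)
            used += cost
        else:
            excluded.append(name)
    return included, excluded

# Invented example codebase: billing depends on the User model, but it is
# the file that gets cut when the budget runs out.
codebase = {
    "api/routes.py": "def create_user(...): ..." * 50,
    "db/models.py": "class User: ..." * 80,
    "billing/invoices.py": "def charge(...): ..." * 120,
}
included, excluded = pack_context(codebase, budget=700)
print("agent sees:", included)
print("agent is blind to:", excluded)
```

The agent ends up reasoning about `create_user` without ever seeing the billing code that calls it, which is exactly the kind of drift the interview describes.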
With less manual coding, what will make a great developer?
Being a great developer has never really been about typing code. It’s about solving a business problem by telling a computer what to do.
System design becomes even more important—understanding databases, caching, queues, front-end, and how everything fits together. And product sense becomes essential. You have to keep an agent on track and make sure what’s being built solves a real user problem. Writing code won’t be the main job.
What does great human and AI collaboration look like in development?
Great collaboration happens when the human focuses on intent and judgment while the AI handles the execution. The developer sets the direction by explaining what needs to be built, why it matters, and how it should behave. The agent then gathers context, explores the codebase, proposes a plan, and starts doing the work. Throughout the process, the human reviews the reasoning, corrects misunderstandings, and makes decisions that require real product insight or system intuition.
For that to work well, the agent needs real tools, not just a chat box. It should be able to read and write files, create diffs, interact with external systems like Figma or Notion, and store its intermediate state so it does not have to start from zero every time. The human interacts with all of this through an interface built specifically for agents, not a retrofitted IDE window. When the workflow is smooth, the human stays in a high-level problem-solving mode while the agent handles the mechanical work, and together they move far faster than either could on their own.
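The "real tools, not just a chat box" point can be sketched as a minimal workspace object. Everything here is a hypothetical illustration (the class name, the state file, the method names), not Warp's actual API: the agent gets file access plus a scratchpad that survives restarts, so a new session does not start from zero.

```python
# Hypothetical sketch: an agent workspace with file tools and persistent
# intermediate state. Names are invented for illustration only.
import json
import tempfile
from pathlib import Path

class AgentWorkspace:
    """Gives an agent file access plus a scratchpad that survives restarts."""

    def __init__(self, root: str, state_file: str = ".agent_state.json"):
        self.root = Path(root)
        self.state_path = self.root / state_file
        # Reload intermediate state so the agent never starts from zero.
        self.state = (
            json.loads(self.state_path.read_text())
            if self.state_path.exists()
            else {}
        )

    def read_file(self, rel_path: str) -> str:
        return (self.root / rel_path).read_text()

    def write_file(self, rel_path: str, content: str) -> None:
        (self.root / rel_path).write_text(content)

    def remember(self, key: str, value) -> None:
        """Persist a fact (e.g. 'plan approved') across sessions."""
        self.state[key] = value
        self.state_path.write_text(json.dumps(self.state))

# First session: the agent does some work and records where it stopped.
root = tempfile.mkdtemp()
ws = AgentWorkspace(root)
ws.write_file("notes.md", "refactor plan: step 1")
ws.remember("last_step", 1)

# Second session: a fresh object over the same workspace picks the state up.
ws2 = AgentWorkspace(root)
print(ws2.state["last_step"])
```

A real harness would add diffing, external integrations, and access control on top, but the persistence idea is the part that turns a chat box into a collaborator.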
Will coding become a niche skill or evolve into a different kind of literacy?
It will evolve, not disappear. Coding will still be an essential foundation for anyone who wants to build software, but it will function more like math does today. Most people do not do long division by hand, yet the underlying concepts still matter. The same will be true for coding. Developers will need enough literacy to understand how systems work, how to reason about logic, and how to validate what an agent produces.
What will change is how often people write code manually. It will happen less often because agents will take over a lot of the mechanical work. The real value will come from understanding architecture, thinking clearly about requirements, spotting edge cases, and guiding agents in the right direction. Coding becomes a layer in your mental toolkit instead of the main task you perform all day.
If you had to bet on a new kind of agent, which would be most transformative?
I think the most transformative agents will not be a single vertical type like a ‘reviewer’ or a ‘debugger.’ The real breakthrough will come from agents that developers can customize and program for their own workflows. Instead of picking from a menu of fixed agents, you will be able to shape an agent that fits the exact way your team builds software.
Maybe you want an agent that updates documentation every time you touch a file. Or one that automatically checks new logs when an error rate spikes. Or one that keeps your test suite clean and up to date. These are all different tasks, but they can come from the same underlying agent framework if developers can tailor it. Giving developers the power to build their own agents unlocks far more value than any single specialized agent ever could.
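The examples above share one shape: a trigger and a task. A hedged sketch of that idea, with invented event names and tasks rather than any real Warp feature, is a declarative rule list plus a single dispatcher, so every automation comes from the same underlying framework.

```python
# Hypothetical sketch: team-defined agent automations as trigger -> task
# rules dispatched by one shared framework. All names are invented examples.

AUTOMATIONS = [
    {"trigger": "file_saved",       "task": "update the docs for the changed file"},
    {"trigger": "error_rate_spike", "task": "pull recent logs and summarize the failure"},
    {"trigger": "nightly",          "task": "prune flaky tests and refresh fixtures"},
]

def dispatch(event: str) -> list[str]:
    """Return the agent tasks that a given event should launch."""
    return [rule["task"] for rule in AUTOMATIONS if rule["trigger"] == event]

print(dispatch("error_rate_spike"))
```

Adding a new automation is a one-line rule change rather than a new vertical agent, which is the customization advantage the answer describes.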
What is something the AI dev community obsesses over that will not matter?
I think the AI dev community spends too much energy obsessing over tiny differences on public evals, like SWE-bench or Terminal-Bench. Teams treat a one-or-two-point swing as a decisive measure of real-world performance, even though these benchmarks are not representative of the complex workflows developers actually deal with. Many of them have even leaked into the training data, making them even less reliable as indicators of real capability.
What really matters is whether an agent can handle the messy, ambiguous tasks that show up in real codebases. That comes from building your own evals based on real user behavior, finding the failure points that actually frustrate developers, and improving the system there. Benchmarks can be fun to watch for bragging rights, but they will not define who wins in practical, everyday use.
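"Building your own evals based on real user behavior" can be sketched as a small harness: replay recorded tasks through the agent and score the pass rate on checks you actually care about. The `run_agent` stub below is a stand-in for a real agent pipeline, and the cases are invented for illustration.

```python
# Hypothetical sketch of an in-house eval harness. run_agent is a stub
# standing in for the real system under test; cases are invented examples.

def run_agent(task: str) -> str:
    """Stub agent; in practice this would call your real agent pipeline."""
    return "added null check and regression test" if "crash" in task else "no-op"

EVAL_CASES = [
    # (task drawn from a real user session, predicate the output must satisfy)
    ("fix crash when config file is missing", lambda out: "test" in out),
    ("rename the billing module",             lambda out: out != "no-op"),
]

def pass_rate(cases) -> float:
    passed = sum(1 for task, check in cases if check(run_agent(task)))
    return passed / len(cases)

print(f"pass rate: {pass_rate(EVAL_CASES):.0%}")
```

Unlike a public leaderboard, a failing case here points at a specific user-visible behavior to fix, which is the feedback loop the answer argues for.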
What quiet shift today will become a big deal later?
A quiet shift that I think will become huge is the importance of the developer experience around agents. Right now, most people focus almost entirely on the model itself, as if the model is the whole agent. In reality, the leverage comes from the combination of three layers: the model, the agent harness that gathers context and routes work, and the interface that developers use to collaborate with the agent. That last piece is where most teams are still treating AI like a chat widget or a text-only CLI panel, and it is nowhere near enough.
As developers start relying on agents for real work, the tools that make that collaboration fluid, visual, and intuitive will matter a lot more. The teams that build great UX around planning, context, memory, reviewing diffs, and guiding agents will unlock an enormous advantage. It is a slow shift now because people are still glued to traditional workflows, but it will define who actually wins in AI-powered software development.
What does the future look like for Warp, and what is the long-term plan?
Warp’s long-term plan is to become the best environment for developers to build with agents. The interface still looks like a terminal, but only because that is the closest existing form factor for the kind of work developers do every day. Under the hood, the whole product is being rethought from first principles. If the future of development is guiding and managing intelligent agents, then the tools developers use should be built around that workflow rather than retrofitted onto old patterns.
A big focus is on giving developers more power to automate the parts of their job they do not enjoy. Warp is already rolling out features that let developers program and customize their own agents, enabling them to shape the workflow to match their needs. Over time, more of the repetitive or mechanical tasks will be handled by agents, while developers stay focused on architecture, decisions, and intent. The goal is to make Warp the place where you coordinate, review, and direct a whole ecosystem of intelligent assistants.
In the long run, Warp aims to be the primary workbench for real software development in an AI-native world. That means deeper planning tools, richer ways to review agent output, and a development environment built around clarity rather than complexity. The vision is not just faster coding. It is a fundamentally new way of building software where agents do the heavy lifting and developers operate at a higher level of problem-solving.
Can you tell us about your latest product update, Oz?
In February, we launched Oz, an orchestration platform for cloud agents. Deploying agents at scale requires teams to build extensive scaffolding to keep developers in control. Oz makes it easy for teams to embed agents into existing tech stacks, track agents centrally, and continue work locally. With Oz, developers safely turn repetitive tasks into agent automations, run in parallel with multi-threaded agent workflows, and steer agents seamlessly—so velocity goes up, and prod doesn't go down.
Oz works with any model, with or without Warp, and will support other coding agents in the future. Agents run across multiple repositories and handle multi-repo changes, and cloud agents take less than 10 minutes to set up and can be customized extensively. Turn Skills into automations, start agents through the CLI or API, and host on Warp's infrastructure or your own. Oz works locally, in sandboxes, and in the cloud. Agents are tracked in Warp and on the web with links to their sessions and files. Teams can see running agents, guide them, and edit their work with access controls.
The context to prepare for tomorrow, today.
Memorandum merges global headlines, expert commentary, and startup innovations into a single, time-saving digest built for forward-thinking professionals.
Rather than sifting through an endless feed, you get curated content that captures the pulse of the tech world—from Silicon Valley to emerging international hubs. Track upcoming trends, significant funding rounds, and high-level shifts across key sectors, all in one place.
Keep your finger on tomorrow’s possibilities with Memorandum’s concise, impactful coverage.
*This is sponsored content
