Key takeaways

  • The lean startup MVP was designed to minimise building cost. AI has collapsed that cost, making the original premise obsolete.
  • When building is nearly free, the product itself becomes the best validation instrument: faster and more honest than any interview or smoke test.
  • The new process is not build-measure-learn in sequence. It is continuous and parallel: ship immediately, observe behaviour, iterate daily.
  • The new primary risk is not building the wrong thing. It is speed without learning: moving fast without extracting clear insight from each cycle.
  • This changes the economics of corporate innovation: experiments are cheaper, so you can run more. The bottleneck shifts from building to learning.

In 2011, Eric Ries published The Lean Startup. The core idea was elegant: building software is expensive and slow, so you should validate your riskiest assumptions before investing in construction. Do not build a product to learn whether people want it. Build the minimum possible, an MVP, to test one assumption at a time, cheaply, before committing more resources.

This was excellent advice for 2011. The problem is that the constraint it was designed to solve no longer exists.

Why the lean model was right, and why it no longer applies

The lean startup logic rested on a simple economic argument. If building costs a lot and takes a long time, you should be very careful about what you build. Validating assumptions before building reduces the risk of wasting expensive development cycles on ideas that do not work.

That logic produced a specific sequence: discovery first (interviews, observation, problem validation), then solution design, then the minimum viable product as a test instrument, then measurement, then learning, then iteration. The MVP was not a product you sold. It was a research instrument you used to decide whether to build the real product.

AI has broken the economic premise that made this sequence rational.

Today, a working prototype of a digital product can be built in hours. A functional MVP, one that real users can interact with and that does something genuinely useful, can exist in days. Not a mockup, not a Wizard of Oz simulation, not a landing page with a waitlist. An actual product.

The implication is simple but radical: when building is nearly free, the most efficient validation instrument is the product itself. The customer interview tells you what people say they would do. The product tells you what they actually do.

What changes in practice

The shift is not just about speed. It is about which activities belong at which stage of the process. Here is what the two approaches look like side by side.

Lean startup (2011) vs AI-native (2026)

  • Time to first user contact. Lean: weeks to months of discovery before building. AI-native: days from idea to working product in users' hands.
  • Primary validation instrument. Lean: interviews, smoke tests, landing pages, Wizard of Oz. AI-native: the product itself, real behaviour from real users.
  • Role of the MVP. Lean: test the riskiest assumption before committing to build. AI-native: starting point for continuous refinement, not a test.
  • Iteration cycle. Lean: weeks per cycle (build, measure, learn). AI-native: days or hours per cycle, often parallel.
  • Primary risk. Lean: building the wrong thing. AI-native: speed without learning, moving fast without insight.
  • Where the bottleneck sits. Lean: building capacity. AI-native: learning capacity.

The new process: what it looks like step by step

This is not a theoretical model. It is what we do in practice when building a new venture with AI today.

Step 01 (Days 1-3): Build the first working version immediately

Not a prototype. Not a mockup. A functional product that does one thing, the core thing, well enough that a real user can use it. With current AI tools, this is achievable in a single focused sprint. The goal is to get something real in front of people as fast as possible, because real contact with real users generates better information than any amount of upfront analysis.

What changes from before: previously, this phase followed weeks of discovery. Now it runs in parallel with discovery, or precedes it entirely. You are not building to validate. You are building to learn, and the building itself is the fastest path to learning.

Step 02 (Days 3-7): Release to a small, specific group, and watch

Not a broad launch. A deliberate release to a small group of people who represent the target user. The goal of this phase is observation, not feedback collection. You want to see what people actually do with the product, not what they say they would do with a better version of it. Where do they stop? What do they use? What do they ignore? What do they try to do that the product does not yet support?

The discipline here: resist the urge to explain the product, guide users through it, or ask leading questions. Observation of unmediated behaviour is the most valuable signal you can get at this stage. Your job is to watch, not to convince.
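The questions in this step (where do users stop, what do they ignore) can be answered directly from an in-product event stream rather than from conversations. A minimal sketch, assuming you log simple (user, event) pairs; the event names and the data here are invented for illustration:

```python
# Hypothetical event stream captured in-product: (user_id, event_name) pairs.
events = [
    ("u1", "signup"), ("u1", "create_project"), ("u1", "core_action"),
    ("u2", "signup"), ("u2", "create_project"),
    ("u3", "signup"),
]

# The ordered funnel of steps you care about observing.
funnel = ["signup", "create_project", "core_action"]

def funnel_counts(events, funnel):
    """Count how many distinct users reached each funnel step."""
    reached = {step: set() for step in funnel}
    for user, event in events:
        if event in reached:
            reached[event].add(user)
    return [(step, len(reached[step])) for step in funnel]

for step, n in funnel_counts(events, funnel):
    print(step, n)
# signup 3
# create_project 2
# core_action 1
```

Even a sketch this small makes drop-off visible without mediating the experience: in the invented data above, every user signs up but only one reaches the core action, which is exactly the kind of unmediated signal this phase is meant to surface.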

Step 03 (Week 2 onwards): Iterate with a specific question attached to each cycle

Each iteration is not just a set of improvements. It is an experiment with a specific question. "If we change this, will retention on day 3 improve?" "If we remove this feature, will the core action become clearer?" The question must be defined before the iteration ships, and the result must be assessed against that question before the next cycle begins. This is where most teams fall down: they ship continuously without a clear learning agenda, accumulating product complexity without accumulating insight.

In practice: keep a simple log. For each iteration: what was the question, what did we ship, what did we observe, what do we do next. Without this log, speed becomes noise.
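The log described above needs no tooling beyond a shared document, but its structure can be sketched in a few lines. A minimal illustration, with the fields mirroring the cycle described here (question, shipped, observed, decision); the example entry is invented:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LogEntry:
    """One iteration in the learning log."""
    day: date
    question: str   # defined BEFORE the iteration ships
    shipped: str    # what changed in the product
    observed: str   # what users actually did
    decision: str   # what we do next

log: list[LogEntry] = []

# Invented example entry for illustration.
log.append(LogEntry(
    day=date(2026, 1, 12),
    question="If we remove the settings screen, does the core action get used more?",
    shipped="Removed settings screen; core action moved to the home view",
    observed="Core action usage rose; two users asked where settings went",
    decision="Keep the change; restore settings as a single menu item",
))

# The review discipline: any entry shipped without a question
# is building, not learning.
unanswered = [e for e in log if not e.question]
print(len(log), len(unanswered))
```

The point of the structure is the constraint it enforces: an entry cannot be written without naming the question first, which is precisely the discipline the text argues most teams skip.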

Step 04 (Ongoing): Let the product define the discovery, not the other way around

In the lean model, discovery preceded building. In the AI-native model, the product generates the discovery questions. Real user behaviour surfaces assumptions you would never have thought to test in an interview. People use products in ways nobody anticipated, and those unexpected uses often point toward the most valuable features, or reveal fundamental flaws in the original concept that no amount of upfront research would have uncovered.

The mindset shift: stop thinking of the product as the result of discovery. Start thinking of it as the instrument of discovery. It does not arrive at the end of the learning process. It drives the learning process from day one.

The new risk: speed without learning

The lean startup solved for one risk: building the wrong thing by committing too early. AI removes that risk almost entirely, because the cost of rebuilding is so low that building the wrong thing first is not a serious problem.

But it creates a different risk, one that is harder to see and easier to ignore: moving fast without learning anything.

When you can ship a new version every day, the temptation is to keep building: to respond to every piece of user feedback with a new feature, to keep improving the product, to stay in motion. Motion feels like progress. But if each cycle does not generate a clear insight that changes how you think about the product, you are accumulating complexity without accumulating understanding.

Teams that fall into this pattern often have products that grow in features and shrink in clarity. They become responsive to every signal without being guided by any of them. The speed that should be their advantage becomes a mechanism for avoiding the harder work of actually understanding what they are building and for whom.

The AI-native approach requires more learning discipline, not less. Because the speed of building makes it easy to confuse motion with progress, the deliberate practices (the learning log, the question-per-iteration discipline, the commitment to stopping and synthesising before moving forward) matter more than they ever did in the lean model.

What this means for corporate venture building

For corporations building new ventures, this shift is significant in two directions.

First, it dramatically changes the economics of early-stage experimentation. What used to require a six- to twelve-month development cycle before you could put anything real in front of users now takes weeks. A failed experiment costs far less, so you can afford to run many more of them. The question "should we invest in building this?" becomes much easier to answer, because building it to find out is now a reasonable option.

Second, it moves the bottleneck. In the old model, the constraint was building capacity: could you build fast enough to test your ideas? In the new model, the constraint is learning capacity: can you extract clear, actionable insight from the stream of signals that rapid iteration generates? This is a very different skill, and it is one that most corporate teams are not trained for.

The corporate innovation programmes that will succeed in the next five years are not the ones with the most AI tools or the fastest development pipelines. They are the ones that build systematic learning capacity: the ability to ask precise questions, observe behaviour rigorously, synthesise insight quickly, and make clear decisions about what to do next.

A practical note for teams starting now

If you are building a new venture or running an innovation programme and wondering how to apply this in practice, three things matter more than anything else.

Ship something real in the first week. Not a survey, not a landing page, not a deck. A working product that does the core thing. This forces clarity about what the core thing actually is, and it generates real signals immediately.

Attach a question to every iteration before you ship it. "What are we trying to learn?" If you cannot answer that before the cycle begins, you are building, not learning. The discipline of naming the question changes how you read the results.

Build a learning log from day one. A simple document: date, question, what shipped, what you observed, what you decided. Review it every two weeks. The patterns that emerge from this log will tell you more about your product and your users than any amount of analysis.

The tools have changed the game. The discipline of learning has not. That is the part you still have to build yourself.

Work with us

Building a new venture with AI?

We help European corporations design and run AI-native venture building programmes, from the first prototype to a validated, scalable business. If you are starting a new innovation initiative and want to build it the right way from day one, let us talk.
