Don't Rewrite the Playbook: Agile Still Works for AI

There’s a persistent myth that building AI products requires throwing out everything we know about Agile. That AI is fundamentally different. That "AI Success" demands large teams, big budgets, longer runways, bigger bets, or fully formed platforms before users ever see value.

That wasn’t our experience at ZenBusiness in 2025.

What actually worked was the opposite: small increments of real user value, tight feedback loops, and architecture that evolved in lockstep with functionality. In other words, plain old Agile—applied deliberately to AI. The core principles of Agile and Lean haven’t changed. If anything, building AI products has made them more relevant, not less.

The uncomfortable truth is that AI doesn’t break Agile. It exposes where teams had already stopped practicing it.

Start With a True MVP (And Mean It)

Across multiple companies, I’ve seen the definition of “Minimum Viable Product” slowly lose the word minimum. MVPs often become the first iteration of a grand product vision (over-scoped, over-designed, and burdened with assumptions before a single user interaction proves they matter). This opens the door to several forms of waste (to borrow the Lean term) and dramatically increases the chances of failure.

We were adamant that this would not happen here.

We didn’t start 2025 trying to build a fully featured AI assistant. We started with Smart Assists, and we were very intentional about how small that first step was.

Smart Assists weren’t chatbots. They weren’t conversational experiences. They didn’t try to answer every question or guide users end-to-end. They did exactly one thing: provide a short, personalized, contextual tooltip at the exact moment a user might be confused during onboarding.

That constraint was deliberate.

From a product perspective, tooltips have a clear success criterion: did they reduce friction at a known point of confusion? From an AI perspective, they dramatically reduced complexity. There was no need for memory, long conversations, or complex orchestration. We didn’t need personality or multi-step reasoning. We needed clarity, accuracy, and relevance—nothing more.

Technically, this meant building just enough architecture to connect to ChatGPT and Gemini, creating a basic system to manage prompts, passing only information we already knew about the user and their current step, and receiving a single, constrained response. No hidden state. No branching logic. No future-proofing.
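For illustration only, that first integration can be sketched as a single stateless call: compose a prompt from what we already know, request one short response, and render it as a tooltip. The model name, field names, and prompt wording below are placeholders rather than our production implementation, and the sketch assumes the OpenAI Python SDK.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK (openai>=1.0)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def smart_assist_tooltip(user_context: dict, step: str) -> str:
    """Return one short, constrained tooltip for the user's current onboarding step.

    `user_context` holds only fields we already know about the user; nothing new
    is collected, and no conversation state is kept between calls.
    """
    prompt = (
        "You are an onboarding assistant. In two sentences or fewer, explain what "
        f"the user should do at the '{step}' step.\n"
        f"Known context: business type={user_context.get('business_type')}, "
        f"state={user_context.get('state')}."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        max_tokens=80,        # hard cap keeps the tooltip short and cheap
        temperature=0.2,      # low temperature favors clarity over creativity
    )
    return response.choices[0].message.content


# One stateless call per tooltip: no hidden state, no branching logic.
print(smart_assist_tooltip({"business_type": "LLC", "state": "TX"}, "registered agent"))
```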

This is where many AI MVPs fail. Teams jump straight to chat-based experiences because they look like AI. But chat is expensive, both technically and operationally. Tooltips let us validate value without taking on unnecessary risk.

That made Smart Assists a true MVP. They mattered not because they were impressive, but because they forced discipline early.

Build Capability Before Chasing Use Cases

Smart Assists did more than help customers. They forced us to build foundational capability in the right order.

We had to solve secure model access. We had to establish prompt composition patterns that could be reused. We had to put guardrails around user data. We had to understand how to observe and debug AI responses in production. None of this was abstract or theoretical—it was tied directly to a single, narrow use case.
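As a rough sketch of what two of those pieces can look like, the snippet below shows an allow-list guardrail that keeps unapproved user data out of prompts and a small registry of reusable prompt templates. Field names, template names, and wording are illustrative, not our actual schema.

```python
# Only fields explicitly approved for model use ever leave our systems;
# everything else (including any PII) is dropped by default.
ALLOWED_FIELDS = {"business_type", "state", "onboarding_step"}  # illustrative names


def guarded_context(raw_user_data: dict) -> dict:
    """Keep only fields on the allow-list; drop everything else."""
    return {k: v for k, v in raw_user_data.items() if k in ALLOWED_FIELDS}


# A named template per use case makes prompt composition reusable across features.
PROMPT_TEMPLATES = {
    "smart_assist": (
        "You are an onboarding assistant. In two sentences or fewer, help the user "
        "with the '{onboarding_step}' step. Known context: {context}."
    ),
}


def compose_prompt(template_name: str, raw_user_data: dict) -> str:
    """Compose a prompt from a named template plus guarded user context."""
    ctx = guarded_context(raw_user_data)
    return PROMPT_TEMPLATES[template_name].format(
        onboarding_step=ctx.get("onboarding_step", "current"),
        context=ctx,
    )
```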

We weren’t just shipping a feature. We were establishing a repeatable, safe way to integrate AI into our product.

Another critical component was measurement. Early on, we avoided the temptation to over-engineer evaluation frameworks. Instead, we built simple Streamlit apps running on Snowflake to monitor outputs, review responses, and identify patterns. These weren’t elegant or comprehensive, but they were immediate and useful.

That mattered more than precision at this stage. Early AI work benefits more from visibility than from precise measurement.

The goal wasn’t to perfectly score every response. It was to see what the system was doing, understand failure modes, and learn quickly. Those lightweight tools also helped us evaluate third-party platforms later, once we had real data and clearer needs. Again, MVP thinking applied to internal tooling.
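A review tool in that spirit can be only a few dozen lines of Streamlit. The sketch below is hypothetical: it assumes an ai_responses log table in Snowflake, and the credentials, table, and column names are placeholders rather than our real setup.

```python
import streamlit as st
import snowflake.connector  # requires the pandas extra for fetch_pandas_all()


@st.cache_data(ttl=300)
def load_recent_responses(feature: str, limit: int = 200):
    """Pull recent AI outputs for one feature from a (hypothetical) log table."""
    conn = snowflake.connector.connect(
        account="YOUR_ACCOUNT",    # placeholder credentials; use your own secrets
        user="YOUR_USER",
        password="YOUR_PASSWORD",
        warehouse="ANALYTICS_WH",
        database="AI_LOGS",
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT created_at, step, prompt, response "
            "FROM ai_responses WHERE feature = %s "
            "ORDER BY created_at DESC LIMIT %s",
            (feature, limit),
        )
        return cur.fetch_pandas_all()
    finally:
        conn.close()


st.title("AI response review")
feature = st.selectbox("Feature", ["smart_assist", "name_generator"])
df = load_recent_responses(feature)
st.metric("Responses loaded", len(df))
st.dataframe(df)  # eyeball outputs for failure patterns; scoring came later
```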

This early discipline paid off repeatedly as we expanded.

Iterate Relentlessly, Don’t Redesign

Once Smart Assists were in production and delivering value, we didn’t pivot to a grand redesign. We iterated.

Our Business Name Generator is a good example. This was a tool to help entrepreneurs brainstorm names before forming their business. Version one was intentionally simple. It worked, but more importantly, it taught us how users interacted with AI-generated output—where they hesitated, what confused them, and what clarity actually meant in practice.

We also made a mistake. We built the UI components as standalone, custom pieces for that one tool. That decision slowed us down later when we wanted to build additional standalone AI experiences. We had optimized for speed in the moment, but not for reuse.

Version two, and subsequent standalone tools, weren’t better because the model changed. They were better because the product thinking improved and because we corrected that architectural misstep.

As we moved into One-Click Formation (simplifying formation by gathering required information from external sources) and AI Order Tracking (using AI to interpret open orders and better communicate status and expectations), AI stopped feeling like an experiment and started behaving like infrastructure. It explained status. It anticipated questions. It reduced cognitive load for users navigating complex, often stressful processes.

None of this required a new platform or a rewrite. It required consistency: reusing the same architectural patterns and expanding them carefully.

Because we aligned architecture and delivery early, each new increment became easier to ship than the last.

We also learned something else along the way: our teams were too large and too fragmented. AI work amplified coordination costs. Prompt iteration, behavior tuning, and rapid learning all suffer when ownership is diffuse and feedback loops are slow. In response, we reduced team size and focused on a small, tightly aligned core group. That decision played a significant role in accelerating progress toward what would become Velo™.

Velo™ Was a Milestone, Not a Leap

When we launched Velo™ in July, it may have looked like a major leap from the outside. Internally, it didn’t feel that way.

Velo™ is our AI business guide, designed to help entrepreneurs at any stage of their journey. That’s an aspirational goal, but it was built on a solid foundation we had already proven in production.

Velo™ wasn’t a sudden invention. It was the natural aggregation of capabilities we already had: context awareness, conversational handling, user-specific state, and predictable responses. We didn’t “switch on” intelligence. We assembled it from pieces that already worked.

That’s why Velo™ launched as a usable product instead of a concept. And it’s why we didn’t slow down after launch.

As part of the release, we also built the next iteration of our tooling, reporting, and monitoring. Each of these was designed to meet immediate needs while leaving room to evolve. Some were still Streamlit apps and one-off reports. Others began leveraging third-party platforms for prompt evaluation and tuning. Again, the pattern held: deliver what’s needed now, without painting ourselves into a corner.

Iteration Didn’t Stop After Launch—It Accelerated

Once Velo™ was live, we kept doing exactly what had gotten us there. We shipped small, validated increments and let real usage guide the roadmap.

We focused on continuity by adding Velo™ History, accessibility through Anonymous Mode, and low-commitment entry points with Velo™ Q&A. Velo™ Profiles and Starter Mode allowed users to register free accounts, remember progress, and tie experiences together over time.

In parallel, we continued building standalone tools on top of the same shared architecture: AI Business Idea Generation, Startup Cost Estimator, and Business Name Generator V2, with several more launching shortly. Each delivered tangible, focused value for users exploring business formation without introducing new complexity into the system.

Every feature reused the same underlying patterns. No heroics. No replatforming. No need to pause and “rethink AI strategy.” Because the system was designed to evolve, we could move quickly without breaking things. Each release increased value without increasing chaos.

The Real Lesson From 2025

Here’s the part that’s easy to miss: AI didn’t change the rules. It exposed them.

Teams that struggle with AI usually struggle with product fundamentals—unclear MVPs, over-engineered first releases, and a tendency to chase ambition before validating value. Working with AI makes those problems more obvious and more expensive.

At ZenBusiness, coordinating functionality with architecture from day one allowed us to iterate faster after every MVP and compound value over time. We treat AI as a product capability, not a science project. We fully expect models, tools, and techniques to keep changing, and we’ve designed our systems so we can pivot without disrupting teams or customers.

Agile didn’t slow us down in 2025. It’s the reason we shipped so much.

And if there’s one takeaway from our year building AI-powered experiences, it’s this:

Just because it’s AI doesn’t mean Agile doesn’t work.
It means Agile matters more than ever.
