When building with AI, complexity adds up: there is more uncertainty, more unknowns, and more moving parts across teams, tools, and expectations. That's why a solid discovery process is far more vital than when you're building traditional, deterministic software.
According to recent studies, the #1 reason AI projects fail is that companies use AI for the wrong problems. These problems might be:
- too small, so nobody cares
- too easy, and not worth the effort of adding AI and dealing with the extra complexity
- or fundamentally not a fit for AI in the first place
In this article, I'll share how we approach discovery for AI-driven products, breaking it down into three key steps: ideation, specification & validation, and prioritization.
I'll use the example of a recent project in the automotive industry to illustrate the approach. Some of the points described will be new and specific to AI; others are known from traditional development, but gain even more meaning in the context of AI.
Ideation: Finding the right AI opportunities
Let's start with ideation, the first step in any discovery process, in which you try to gather a broad range of ideas for your product. We will look at two familiar ways this plays out: a textbook version, where you follow the best practices of product management, and a typical real-life scenario, where things tend to get a little biased and messy. Rest assured: both paths can lead to success.
The textbook scenario: Problem-first thinking
In an ideal world, you have plenty of time to explore and structure the opportunity space, that is, all the customer needs, desires, and pain points you've identified. These might come from different sources, such as:
- Customer interviews and feedback
- Sales and support conversations
- Competitive research
- And sometimes just the team’s gut feeling and industry experience
For instance, here is an excerpt from the opportunity space for our automotive client, whose goal was to use AI to monitor the global automotive market and create recommendations for strategic innovation:

Note that in this example, we're looking at a brownfield scenario. The opportunity space includes not only new feature ideas, but also critiques of existing features, such as "lack of transparency into sources."
Once you've mapped out the needs, you look at the solution space: all the different ways you could technically solve those problems. For example, these can include:
- Rule-based analytics
- UX improvements
- Artificial Intelligence
- Adding more domain expertise
- …
Importantly, AI is part of the solution space, but it is by no means privileged; it's one option among many others.
Finally, you match opportunities to solutions, as illustrated in the following figure:

Let’s take a look at a few of those links:
- If several users say, you would possibly think about using AI. Nonetheless, a straightforward rule-based system that scrapes competitor offerings from their web sites could solve that too.
- If the issue is, AI starts to shine. Summarizing large amounts of information or text to reframe it and generate latest content is strictly where modern AI excels.
- But when the difficulty is, AI probably isn’t the correct fit in any respect. That’s a UX and transparency challenge, not a machine learning problem.
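To make the rule-based alternative tangible, here is a minimal sketch of such a competitor-monitoring script, assuming the `requests` and `beautifulsoup4` packages are available; the URL and CSS selector are placeholders, not a real competitor site:

```python
# A hypothetical rule-based monitor: fetch a competitor's (placeholder) listing
# page, extract offering names, and report anything new since the last run.
import json
from pathlib import Path

import requests
from bs4 import BeautifulSoup

COMPETITOR_URL = "https://competitor.example.com/models"  # placeholder URL
SNAPSHOT_FILE = Path("offerings_snapshot.json")


def fetch_offerings() -> set[str]:
    html = requests.get(COMPETITOR_URL, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Assumes each offering is rendered as a card with an <h3> title.
    return {tag.get_text(strip=True) for tag in soup.select(".offering-card h3")}


def new_offerings() -> set[str]:
    current = fetch_offerings()
    previous = set(json.loads(SNAPSHOT_FILE.read_text())) if SNAPSHOT_FILE.exists() else set()
    SNAPSHOT_FILE.write_text(json.dumps(sorted(current)))
    return current - previous


if __name__ == "__main__":
    for offering in new_offerings():
        print(f"New competitor offering: {offering}")
```

No model, no training data, no uncertainty: if a simple script like this covers the need, the added complexity of AI is hard to justify.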
In this scenario, it's vital to remain impartial when matching each need to the right solution. Even if you're secretly excited to start building with the latest AI tools (who isn't?), you have to be patient and wait for the right opportunity to surface.
The real-life scenario: "Let's use AI!"
Now, in reality, things often start on a different note. For example, you're in a team meeting and someone says, "Let's use AI!" Or your CEO gives a sweeping speech that suddenly puts AI on your agenda without providing any guidance or direction on what to do with it. If you're not careful, you risk ending up in the "AI for the sake of AI" trap.
However, it doesn't have to be a disaster. We're talking about an extremely versatile technology, and you can work backwards from the AI-first imperative and find great opportunities by ideating around the core benefits and shortcomings of AI.
The AI Opportunity Tree: Focusing on the core benefits of AI
When I work with teams who have already decided they "want to do AI," I help them frame the conversation around what AI is good at. In the B2B context, there are four main benefits you can build around:
- Automation & productivity: Use AI to make existing processes faster and cheaper. For instance, Intercom uses AI chatbots to handle common customer support questions automatically, reducing response times and freeing up human agents for more complex cases.
- Improvement & augmentation: Help people improve the outcomes of their work. For instance, Notion AI assists with drafting, summarizing, and refining content, while leaving the final decision and editing to the human user.
- Innovation & transformation: Unlock entirely new products, capabilities, or business models. For instance, Tesla uses AI to shift from selling hardware to delivering continuous software-driven value with features like driver assistance, battery optimization, and in-car experiences via over-the-air updates.
- Personalization: Tailor outputs to specific users or contexts. For instance, Spotify uses AI to create personalized playlists like Discover Weekly, adapting recommendations to every listener’s unique taste.
When ideating, you should try to build a rich space of ideas by collecting multiple opportunities for each benefit. This results in a structured AI Opportunity Tree. Here's a small part of the opportunity tree we built in the automotive scenario:

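If it helps to keep the tree in a machine-readable form next to your backlog, a slice of it could be captured as nested data like the sketch below; the benefit branches mirror the four benefits above, while the leaf opportunities are illustrative assumptions rather than the client's actual tree:

```python
# A hypothetical slice of an AI Opportunity Tree as nested data. The leaf
# opportunities are illustrative examples for an automotive market-monitoring
# product, not the actual items from the figure above.
AI_OPPORTUNITY_TREE = {
    "Automation & productivity": [
        "Auto-generate market monitoring reports and presentations",
        "Summarize incoming industry news and press releases",
    ],
    "Improvement & augmentation": [
        "Draft strategic innovation recommendations for analysts to refine",
    ],
    "Innovation & transformation": [
        "Predict emerging technology trends across regional markets",
    ],
    "Personalization": [
        "Tailor market insights to each strategy team's focus areas",
    ],
}


def opportunities_for(benefit: str) -> list[str]:
    """Return all collected opportunities under one benefit branch."""
    return AI_OPPORTUNITY_TREE.get(benefit, [])
```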
Use the shortcomings of AI as exclusion criteria
It’s also vital to acknowledge when AI is just not one of the best answer. Listed here are a few of the user-facing shortcomings of AI, which you should use to filter out inappropriate use cases:
- AI is commonly a black box — users don’t all the time understand how it really works.
- AI introduces uncertainty — the identical or similar inputs can produce different outputs.
- AI will make mistakes — sometimes in ways you possibly can’t fully predict.
In case your use case requires full accuracy, explainability, or predictability, move on — AI is probably going not the correct solution.
With your AI opportunities and use cases laid out, let's now see how you can add more flesh to your ideas and specify them for further prioritization and development.
Specification & validation: Iterate your way to the optimal system design
Once you've mapped out your use cases and potential features, the next step is specification and validation. Here, you define how you're going to build an AI system to address a specific use case. Before we dive into the frameworks, let's pause and talk about process, and specifically about the power of iteration in the context of AI.
Adopting the practice of iteration
The cover of my book features a dervish. Just as these dancers rotate in an endless and focused motion, you need to build the habit of iteration to succeed with AI. At the beginning of your journey, uncertainty is high:
- You're exploring new land. Compared to "traditional" software development, where we have plenty of historical wisdom to build upon, the solutions and best practices aren't figured out yet.
- AI systems will make mistakes, which are a major risk for trust and adoption. From the start, you should allocate plenty of time to understanding, anticipating, and preventing these mistakes.
- Your users will have different levels of AI literacy. Some will know how to handle errors and uncertainty; others will blindly trust AI outputs, which can lead to problems down the road.
Through iteration, you reduce this uncertainty and build confidence, both within your team and among your users. The key is to specify and validate in small steps: run quick experiments, build prototypes, and create feedback loops to understand what's working and what's not.
Most importantly, get real feedback early. Today, it's tempting to cocoon yourself in the world of AI-driven research and simulation. However, that's a dangerous comfort zone. If you don't talk to real users and put your prototypes in their hands, you risk a hard clash when your product finally launches. AI is AI, humans are humans. To build something successful, you need to understand and connect both worlds.
Specifying your system with the AI System Blueprint
To make an AI idea more concrete, we use the AI System Blueprint. This model represents both the opportunity and the solution, and its beauty lies in its simplicity and universality. Over the last two years, I have been able to use it in literally every AI project I encountered to clarify what was being built. It helps align everyone around the same vision: product managers, designers, engineers, data scientists, and even executives.

Here’s the right way to fill it out:
- Pick a use case out of your AI Opportunity Tree.
- Map out the worth AI can realistically provide to this use case:
- How much of it might probably you automate? Often, only partial automation is feasible (and sufficient).
- What’s going to the price of the mistakes made by the AI be? Start with a rough estimate of the frequency and potential cost of mistakes, and proper as you get more information from prototyping and user testing.
- Do your users actually want automation? In some contexts — especially creative tasks — users might resist automation. They could prefer to do the duty by themselves, or welcome lightweight AI assistance as an alternative of a black-box system taking up their workflow.
3. Specify the AI solution:
- Data is the raw material powering your AI system.
- Intelligence, which includes your AI models and the larger architecture, uses AI algorithms to distill value from your data.
- The user experience is the channel that transports this value to the user.
Thus, the initial blueprint for our use case of creating presentations and reports can look as follows:

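If your team likes to keep the blueprint in a structured, reviewable form, here is a minimal sketch of how it could be captured as code; the field names and example values are illustrative assumptions for the presentations-and-reports use case, not the actual blueprint from the figure:

```python
# A hypothetical way to capture an AI System Blueprint as structured data, so
# it can live next to the code and be reviewed like any other artifact.
from dataclasses import dataclass, field


@dataclass
class AISystemBlueprint:
    use_case: str
    # Opportunity side
    automation_share: str          # how much of the task can realistically be automated
    mistake_cost: str              # rough estimate of frequency and cost of AI mistakes
    user_appetite: str             # do users actually want automation here?
    # Solution side
    data: list[str] = field(default_factory=list)
    intelligence: list[str] = field(default_factory=list)
    user_experience: list[str] = field(default_factory=list)


blueprint = AISystemBlueprint(
    use_case="Create presentations and reports on automotive market trends",
    automation_share="Partial: AI drafts, analysts review and finalize",
    mistake_cost="Medium: wrong numbers or sources erode stakeholder trust",
    user_appetite="High for drafting help, low for fully automated delivery",
    data=["Market news feeds", "Competitor websites", "Internal research notes"],
    intelligence=["LLM-based summarization and drafting", "Retrieval over curated sources"],
    user_experience=["Editable draft with cited sources", "Feedback controls on each section"],
)
```

Keeping the blueprint this explicit makes it easy to revisit each assumption (automation share, mistake cost, user appetite) as prototyping and user testing deliver new information.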
Avoid narrowing down your solution space too early
The next figure shows a high-level solution space for AI:

A detailed description of this space is out of the scope of this post (you can find it in chapters 3-10 of my book). Here, I would like to protect you against a common mistake: defining your solution space too narrowly. This limits creativity, leads to poor engineering decisions, and can lock you into suboptimal paths. Watch out for these three anti-patterns:
- "Let's build an agent." Right now, every other company wants to build their own AI agent. But when you ask what the agent is actually supposed to do, most teams don't have a clear answer. That's usually a sign of hype over strategy.
- "Let's pick a model and figure it out later." Some teams start by choosing a model or vendor, then scramble to find a use case afterwards. This almost always leads to misalignment, iteration dead-ends, and wasted resources.
- "Let's just go with what our platform offers." Many companies default to whatever their cloud provider suggests, skipping critical architectural decisions. Cloud providers are biased toward their own ecosystems. If you blindly follow their playbook, you'll limit your options and miss the chance to develop AI craft and build something truly differentiated.
Thus, before you decide on tooling, models, or platforms, take a step back and ask:
- What are the high-level decisions we need to make about data, models, AI architecture, and UX?
- How do they interconnect?
- What trade-offs are we willing to make?
Also, make sure your entire team understands the whole solution space. In AI, cross-functional dependencies abound. For example, UX designers need to be familiar with the training data of an AI model because it largely determines the outputs users see. Conversely, data and AI engineers need to understand the UX so they can put the AI system together in a way that allows it to serve the intended insights and interactions. Therefore, everyone should be on board with a shared mental model of the potential solutions and the final specification of your AI system.
Prioritization: Deciding what to build first
The last step in our discovery process is prioritization: deciding what to build first. Now, if you've done a solid job in specification and validation, it will often already point you to use cases with high potential, making your prioritization smoother. Let's start with the simple prioritization matrix and then see how you can refine your prioritization criteria and process.
The prioritization matrix
Most of us are familiar with the classic prioritization matrix: you define criteria like user value, technical feasibility, maybe even risk, and you score your ideas accordingly. Then you add up the points, and the highest-scoring opportunity wins. The following figure shows an example for some of the items in our AI Opportunity Tree:

This kind of framework is popular because it creates clarity and makes stakeholders feel good. There's something reassuring about seeing messy, hairy ideas turned into numbers. However, prioritization matrices are highly simplified projections of reality. They hide the complexity and nuance behind prioritization, so you should avoid over-relying on this representation.
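To see how little mechanics there actually is behind those reassuring numbers, here is a minimal sketch of a weighted scoring matrix; the opportunities, criteria weights, and 1-5 scores below are invented for illustration:

```python
# A minimal weighted prioritization matrix. The point is how easily nuanced
# trade-offs collapse into a single number per opportunity.
CRITERIA_WEIGHTS = {"user_value": 0.4, "business_impact": 0.3, "feasibility": 0.2, "risk": 0.1}

opportunities = {
    "Summarize market news": {"user_value": 4, "business_impact": 3, "feasibility": 5, "risk": 4},
    "Predict technology trends": {"user_value": 5, "business_impact": 5, "feasibility": 2, "risk": 2},
    "Draft innovation reports": {"user_value": 4, "business_impact": 4, "feasibility": 4, "risk": 3},
}


def weighted_score(scores: dict[str, int]) -> float:
    """Multiply each criterion score by its weight and sum up the result."""
    return sum(CRITERIA_WEIGHTS[criterion] * value for criterion, value in scores.items())


for name, scores in sorted(opportunities.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.1f}")
```

A single weighted sum is easy to compute and easy to rally around, which is exactly why it shouldn't be the only input to your decision.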
Adding nuance to your AI prioritization
Especially when you’re nearly to introduce AI, you’re not only rating features, but making long-term bets in your product direction, tech stack, and positioning and differentiation. As an alternative of reducing prioritization to a spreadsheet exercise, sit with the complexity, the deeper conversations and potential misalignments. Take the time to work through the subtle details, weigh the trade-offs, and make decisions that align not only with what’s easy to construct now, but additionally with the longer-term vision for AI in what you are promoting.
1. Pick the low-hanging fruits first
The AI Opportunity Tree from section 1 provides a primary hint in your prioritization. Normally, you’re higher off starting on the left of the tree and moving to the correct as you gain more experience and traction with AI. Here’s why:
- On the left side, you may have easy automation tasks. These are frequently low risk, easy to measure, and an amazing solution to start.
- As you enterprise to the correct side, you see more advanced, strategic use cases like trend prediction, recommendations, and even latest product ideas. These can add more impact, but additionally more risk and complexity.
Starting on the left helps you construct trust and momentum. It delivers quick wins, gives your organization the time to get comfortable with AI, and builds the muse for more ambitious projects down the road.
2. Work on strategic alignment
Before you decide what to build, think about the role of AI in your business. While your organization may not have an explicit AI strategy (yet), you can infer important information from its corporate strategy. For example, is AI a potential differentiator, or are you just playing catch-up with the market? If you want to gain a competitive edge with AI, you will want to move fast along your opportunity tree and implement more advanced, differentiated use cases. Your engineering decisions will lean toward more custom, craft-heavy alternatives like open-source models, custom pipelines, or even on-premise infrastructure. By contrast, if your goal is to follow competitors, you might focus on automation and productivity for longer, and choose safer, off-the-shelf solutions from large cloud vendors and model providers.
3. Define custom criteria for prioritization
AI projects often require custom prioritization dimensions beyond the usual trio of user value, business impact, and feasibility. Consider factors like:
- Scalability & generalization power: Will your AI solution generalize across different user groups, markets, or domains? For instance, if you have to inject heavy domain expertise for each new customer, that limits your scaling curve.
- Privacy & security: Some AI use cases are tightly bound to data governance and privacy concerns. If you're in finance, healthcare, or other regulated industries, this becomes critical.
- Competitive differentiation: Are you building something truly new, or are you following industry trends? If AI is part of your differentiation strategy, prioritize novel use cases or unique capabilities, not just the features everyone else is shipping.
4. Plan for spillover effects
Another important consideration is spillover effects and the long-term value of building reusable AI assets. When you design and develop datasets, models, pipelines, or knowledge representations with reuse in mind, you're not just solving one isolated problem; you're creating a foundational AI capability. It will help you speed up future initiatives, reduce redundancy, and unlock compounding returns in your business. This is especially critical if AI is a strategic differentiator for your business.
Summary
I hope this article helped you better understand the value of a structured discovery process in the messy, complex world of AI product development. Let's summarize the frameworks and best practices we discussed:
- Use the AI Opportunity Tree to collect, map, and prioritize a broad set of potential AI use cases.
- Rely on iteration and continuous feedback to reduce uncertainty and refine your AI product over time.
- Leverage the AI System Blueprint to align your team around a shared vision and avoid cross-functional disconnects.
- Explore the full AI solution space; don't fall into the trap of limiting yourself to specific tools, models, or vendors too early.
- Treat prioritization as strategic alignment, not just feature scoring. It's a way to gradually surface, shape, and refine your larger AI strategy.