“The success of an AI product depends on how intuitively users can interact with its capabilities”

-

You call your “AI Strategy Playbook” a set of mental models that help teams align on what to build and why. Which models most often unlock clarity in executive rooms, and why do they resonate?

One of the most important challenges in executive rooms is communication. People mean different things when they talk about AI, which blocks execution. I use three mental models to create a structured common ground that allows us to move forward without excuses and misunderstandings.

I normally start with the AI Opportunity Tree, which helps us map the landscape of possible AI use cases. Executives often come in with a mix of curiosity and hype — “we want to do something with AI” — but not a clear view of where value really lies. The default path most teams take from there is building a chatbot, but these projects rarely take off (cf. this text). The Opportunity Tree breaks this pattern by systematically uncovering potential AI use cases and providing a structured, objective basis for prioritization.

Once we have clarity on what to build and why, we move on to fill out the AI System Blueprint. This model helps map the data, models, user experience, and governance constraints of the envisioned AI system. It’s especially powerful in multi-stakeholder environments, where business, data science, and compliance teams need a shared language. The blueprint turns the complexity of AI into something tangible and iterative — we can draw it, discuss it, and refine it together.

Finally, I introduce the AI Solution Space Map. It expands the conversation beyond today’s dominant technologies — mainly large language models and agents — and helps teams consider the full space of solution types: from classical ML to hybrid architectures, retrieval systems, and rule-based or simulation-driven approaches. This broader view keeps us grounded in delivering the right solution, not just the fashionable one.

Together, these models create a journey that mirrors how successful AI products evolve: from opportunity discovery, to system design, to continuous exploration. They resonate with executives because they bridge strategy and execution.

In your writing, you emphasize that domain expertise is crucial to building AI products. Where have you seen domain knowledge change the whole shape of an AI solution, rather than simply improving accuracy on the margins?

One vivid example where domain expertise completely reshaped the solution was a logistics project that initially started out to predict shipment delays. Once the domain experts joined, they reframed the problem: delays weren’t random events but symptoms of deeper business risks such as supplier dependencies, regulatory bottlenecks, or network fragility. We “AI experts” weren’t able to spot these patterns.

To incorporate this domain knowledge, we expanded the data layer beyond transit times to include supplier-risk signals and dependency graphs. The AI architecture evolved from a single predictive model to a hybrid system combining prediction, knowledge graphs, and rule-based reasoning. The user experience was expanded from reactive delay forecasts to risk scenarios with suggested mitigations, which were more actionable for experts.
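To give a rough idea of what such a hybrid design can look like, here is a minimal, purely illustrative sketch: a statistical delay prediction is adjusted by rules derived from a toy supplier dependency graph, and the output includes risk factors and a suggested mitigation rather than a bare forecast. All names, thresholds, and data here are invented for illustration, not taken from the actual project.

```python
from dataclasses import dataclass

# Toy dependency "graph": shipment -> upstream suppliers it depends on,
# and supplier -> a known risk signal (0 = low risk, 1 = high risk).
DEPENDENCIES = {"SHP-1": ["sup_a", "sup_b"]}
SUPPLIER_RISK = {"sup_a": 0.1, "sup_b": 0.7}

@dataclass
class RiskAssessment:
    delay_probability: float
    risk_factors: list
    mitigation: str

def predict_delay(transit_hours: float) -> float:
    """Stand-in for a trained model: longer transit -> higher base risk."""
    return min(transit_hours / 200.0, 1.0)

def assess(shipment_id: str, transit_hours: float) -> RiskAssessment:
    base = predict_delay(transit_hours)
    factors = []
    # Rule-based reasoning over the dependency graph: escalate the score
    # whenever an upstream supplier carries a high risk signal.
    for supplier in DEPENDENCIES.get(shipment_id, []):
        if SUPPLIER_RISK.get(supplier, 0.0) > 0.5:
            base = min(base + 0.2, 1.0)
            factors.append(f"high-risk supplier: {supplier}")
    mitigation = "source backup supplier" if factors else "monitor only"
    return RiskAssessment(base, factors, mitigation)

result = assess("SHP-1", transit_hours=100.0)
print(result)
```

The point of the sketch is the shape of the output: instead of a single number, the system surfaces the reasoning behind the score and a next action, which is what made the risk scenarios actionable for the experts.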

Ultimately, domain knowledge didn’t just improve accuracy: it redefined the problem, the system design, and the value the business received. It turned an AI model into a true decision-support tool. After that experience, I always insist on domain experts joining during the early stages of an AI initiative.

Along with your posts on TDS, you also wrote a book: The Art of AI Product Development: Delivering business value. What are the most important takeaways that changed your own approach to building AI products (especially anything that surprised you or overturned a previous belief)?

Writing the book motivated me to reflect on all the bits and pieces of theoretical knowledge, practical experience, and my own convictions, and to structure them into reusable frameworks. Since a book needs to stay relevant for years, it also forced me to distinguish between fundamentals on the one hand, and hype on the other. Here are a few of my own learnings:

  • First, I learned how to find business value in technology. Often, we oscillate between two extremes — either chasing AI for the sake of AI, or relying solely on user-driven discovery. In the first case, you are not creating real value. In the second case, who knows how long you’ll have to wait for the “perfect” AI problem to come to you. In practice, the sweet spot lies in between: using technology’s unique strengths to unlock value that users can feel, but wouldn’t necessarily articulate. We know it from great innovators like Steve Jobs and Henry Ford, who created radically new experiences before customers asked for them. But to do this successfully, you need that magic mixture of technical expertise, courage, and intuition about what the market needs.
  • Second, I realized the value of user experience for AI success. Many AI projects fail not because the models are weak, but because the intelligence isn’t clearly communicated, explained, or made usable. The success of an AI product depends on how intuitively users can interact with its capabilities and how much they trust its outcomes. While writing the book, I was rereading the design classics, like Don Norman’s The Design of Everyday Things, and always asking myself — how does this apply to AI? I think we’re still in the early stages of a new UX era. Chat is an important component, but it is certainly only a part of the full equation. I’m very excited to see the development of new user interface concepts like generative UX.
  • Third, AI systems need to evolve through cycles of feedback and improvement, and that process never really ends. That’s why I use the metaphor of a dervish in the book: spinning, refining, learning constantly. Teams that master early release and constant iteration tend to deliver far more value than those who wait for a “perfect” model. Unfortunately, I still see many teams taking too long before delivering a first baseline and spending not enough time on iterative optimization. These systems might make it into production, but adoption will likely not happen, and they will be shelved as another AI experiment.

For teams shipping an AI feature next quarter, what habits would you recommend, and what key pitfalls should they avoid, to stay focused on delivering real business value rather than chasing hype?

First, as above, master the art of iteration. Ship early, but do it responsibly — release something that’s useful enough to earn user trust, then improve it relentlessly. Every interaction brings you new data, and every piece of feedback is a new training signal.

Second, keep a wider outlook. It’s easy to get tunnel vision around the latest LLM or model release, but the real innovation often comes from how you combine technologies — retrieval, reasoning, analytics, UX, and domain logic. Design your system in a modular way so you can extend it, and continuously monitor AI solutions and developments that could improve it (see also our upcoming AI Radar).

Third, test with real people early and often. AI products live or die by how humans perceive and use them. Internal demos and synthetic tests can’t replace the messy, surprising inputs and feedback you get from actual users.

Your long-form writing (book, deep dives) avoids hype and centers on delivering value to organisations. What’s your approach for choosing topics, and does writing about these topics help you better understand them?

Writing has always been my way of thinking out loud. I use it to learn, process complex ideas, and generate new ones. I usually go with my gut and write about approaches that I truly believe in and that I’ve seen work in real organizations.

At the same time, at my company, we have a bit of our own “secret sauce.” Over the years, we’ve developed an AI-driven system for monitoring new trends and innovations. We offer it to select customers in industries like aerospace and finance, but of course, we also use it for our own purposes. That mix of data and intuition helps me spot topics that are both relevant now and likely to matter not just in a few months, but also two or three years down the road.

For instance, at the beginning of 2025, we published a report about enterprise AI trends, and almost every theme from it has turned out to be highly relevant throughout the year. So, while my writing is intuitive and personal, it’s also grounded in evidence.

To learn more about Janna’s work and stay up-to-date with her latest articles, you can follow her on TDS, Substack, or LinkedIn.
