For an AI project to succeed, mastering expectation management comes first.
When working on AI projects, uncertainty isn't just a side effect; it can make or break the entire initiative.
Most people affected by AI projects don't fully understand how AI works, or that errors are not only inevitable but a natural and vital part of the process. If you've been involved in AI projects before, you've probably seen how quickly things can go wrong when expectations aren't clearly set with stakeholders.
In this post, I'll share practical tips to help you manage expectations and keep your next AI project on track, especially in the B2B (business-to-business) space.
(Rarely) Promise Performance
When you don't yet know the data, the environment, or even the project's exact goal, promising performance upfront is a perfect way to guarantee failure.
You'll likely miss the mark, or worse, feel incentivized to use questionable statistical tricks to make the results look better than they are.
A better approach is to discuss performance expectations only after you've seen the data and explored the problem in depth. At DareData, one of our key practices is adding a "Phase 0" to projects. This early stage lets us explore possible directions, assess feasibility, and establish a potential baseline, all before the client formally approves the project.
The only time I recommend committing to a performance goal from the start is when:
- You have complete confidence in, and deep knowledge of, the available data.
- You've solved the exact same problem successfully many times before.
Map Stakeholders
Another essential step is identifying who will be invested in your project from the very start. Do you have multiple stakeholders? Are they a mix of business and technical profiles?
Each group will have different priorities, perspectives, and measures of success. Your job is to ensure you deliver value that matters to all of them.
This is where stakeholder mapping becomes essential. You need to identify each stakeholder and understand their goals, concerns, and expectations, then tailor your communication and decision-making accordingly throughout the project.
Business stakeholders might care most about ROI and operational impact, while technical stakeholders will focus on data quality, infrastructure, and scalability. If either side feels their needs aren't being addressed, you'll have a hard time shipping your product or solution.
One example from my career: a project where a customer needed an integration with a product-scanning app. From the start, this integration wasn't guaranteed, and we had no idea how easy it would be to implement. We decided to bring the app's developers into the conversation early. That's when we learned they were about to launch the exact feature we planned to build, only two weeks later. This saved the client a lot of time and money, and spared the team the frustration of creating something that would never be used.
Communicate AI’s Probabilistic Nature Early
AI is probabilistic by nature, a fundamental difference from traditional software engineering. Usually, stakeholders aren't accustomed to working with this kind of uncertainty. To make matters worse, humans aren't naturally good at thinking in probabilities unless we've been trained for it (which is why lotteries still sell so well).
That's why it's essential to communicate the probabilistic nature of AI projects from the very start. If stakeholders expect deterministic, 100% consistent results, they'll quickly lose trust when reality doesn't match that vision.
Today, this is easier to illustrate than ever. Generative AI offers clear, relatable examples: even if you give it the exact same input, the output is rarely identical. Use demonstrations early and communicate this from the first meeting. Don't assume that stakeholders understand how AI works.
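If a live demo isn't practical, even a toy simulation gets the point across. The sketch below is not how a real LLM works internally; it just mimics the key property with a made-up token distribution: the same "prompt" produces different outputs across runs.

```python
import random
from collections import Counter

def sample_next_token(weights, temperature=1.0):
    """Sample one token from a toy next-token distribution.
    Lower temperature sharpens the distribution; higher flattens it."""
    adjusted = {tok: w ** (1.0 / temperature) for tok, w in weights.items()}
    total = sum(adjusted.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for tok, w in adjusted.items():
        cumulative += w
        if r <= cumulative:
            return tok
    return tok  # guard against floating-point edge cases

# A made-up distribution over possible answers to the same question.
weights = {"approved": 0.5, "rejected": 0.3, "pending": 0.2}

# Same input, 50 draws: the answer varies from run to run.
counts = Counter(sample_next_token(weights) for _ in range(50))
print(counts)
```

Running this twice gives different tallies, which is exactly the conversation you want to have with stakeholders before they see it in production.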
Set Phased Milestones
Set phased milestones from the start. From day one, define clear checkpoints in the project where stakeholders can assess progress and make a go/no-go decision. This not only builds confidence but also ensures that expectations stay aligned throughout the process.
For each milestone, establish a consistent communication routine with reports, summary emails, or short steering meetings. The goal is to keep everyone informed about progress, risks, and next steps.
Remember: stakeholders would rather hear bad news early than be left in the dark.

Steer Away from Technical Metrics Toward Business Impact
Technical metrics alone rarely tell the full story when it comes to what matters most: business impact.
Take accuracy, for example. If your model scores 60%, is that good or bad? On paper, it might look poor. But what if every true positive generates significant savings for the organization, and false positives cost very little? Suddenly, that same 60% starts looking very attractive.
Business stakeholders often overemphasize technical metrics because they seem easy to grasp, which can lead to misguided perceptions of success or failure. In reality, communicating the business value is far more powerful and easier to understand.
Whenever possible, focus your reporting on business impact and leave the technical metrics to the data science team.
An example from a project at my company: we built an algorithm to detect equipment failures. Every correctly identified failure saved the company over €500 per piece of equipment. However, each false positive stopped the production line for more than two minutes, costing around €300 on average. Because the cost of a false positive was significant, we focused on optimizing for precision rather than pushing accuracy or recall higher. This way, we avoided unnecessary stoppages while still capturing the most valuable failures.
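This kind of translation from metric to money can be made concrete in a few lines. A minimal sketch, using the illustrative €500/€300 figures from the factory example (the function name and break-even derivation are mine, not part of the project's actual tooling):

```python
def expected_value_per_alert(precision, tp_saving=500.0, fp_cost=300.0):
    """Expected value (in EUR) of acting on one model alert,
    given the model's precision. Defaults use the factory example."""
    return precision * tp_saving - (1.0 - precision) * fp_cost

# Even a model that is right only 60% of the time is profitable here:
print(expected_value_per_alert(0.60))  # ~180 EUR per alert

# Break-even precision: p * 500 = (1 - p) * 300  =>  p = 300 / 800
break_even = 300.0 / (500.0 + 300.0)
print(break_even)  # 0.375
```

Framing the same model as "about €180 of expected value per alert" rather than "60% precision" is usually the version business stakeholders actually remember.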
Showcase Interpretability Scenarios
More accurate models are not always more interpretable, and that's a trade-off stakeholders need to understand from day one.
Often, the techniques that deliver the best performance (like complex ensemble methods or deep learning) are also the ones that make it hardest to explain why a particular prediction was made. Simpler models, on the other hand, may be easier to interpret but can sacrifice accuracy.
This trade-off isn't inherently good or bad; it's a choice that must be made in the context of the project's goals. For example:
- In highly regulated industries (finance, healthcare), interpretability may be more valuable than squeezing out the last few points of accuracy.
- In other contexts, such as marketing a product, a performance boost could bring such significant business gains that reduced interpretability is an acceptable compromise.
Don't shy away from raising this early. You need to be sure everyone agrees on the balance between accuracy and transparency before you commit to a path.
Think About Deployment from Day 1
AI models are built to be deployed. From the very start, you should design and develop them with deployment in mind.
The ultimate goal isn't simply to create an impressive model in a lab; it's to make sure it works reliably in the real world, at scale, and integrated into the organization's workflows.
Without deployment, your project is just an expensive proof of concept with no lasting impact.
Consider deployment requirements early (infrastructure, data pipelines, monitoring, retraining processes), and you'll ensure your AI solution is usable, maintainable, and impactful. Your stakeholders will thank you.
(Bonus) In GenAI, Don't Shy Away from Talking About Cost
Solving a problem with Generative AI (GenAI) can deliver higher accuracy, but it often comes at a price.
To achieve the level of performance many business users imagine, such as the experience of ChatGPT, you may have to:
- Call a large language model (LLM) multiple times in a single workflow.
- Implement agentic AI architectures, where the system uses multiple steps and reasoning chains to reach a better answer.
- Use more expensive, higher-capacity LLMs that significantly increase your cost per request.
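These cost drivers multiply, so a back-of-the-envelope estimate is worth doing before any commitment. A minimal sketch; the per-token prices below are placeholders I chose for illustration, not any vendor's real rates:

```python
def workflow_cost_usd(input_tokens, output_tokens, llm_calls,
                      usd_per_1k_in=0.01, usd_per_1k_out=0.03):
    """Back-of-the-envelope cost of one GenAI workflow run.
    Token prices are placeholder assumptions; check your provider's
    current pricing before quoting numbers to stakeholders."""
    per_call = (input_tokens / 1000) * usd_per_1k_in \
               + (output_tokens / 1000) * usd_per_1k_out
    return per_call * llm_calls

# An agentic workflow with 4 chained LLM calls,
# ~2,000 tokens in and ~500 tokens out per call:
one_run = workflow_cost_usd(2000, 500, llm_calls=4)
print(round(one_run, 2))        # cost per run, USD

# At 10,000 runs per day the bill becomes very visible:
print(round(one_run * 10_000))  # daily cost, USD
```

Under these assumptions a single run costs about $0.14, which sounds trivial until the daily volume turns it into roughly $1,400/day. That scaling effect is exactly why cost belongs in the first conversation.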
This means performance in GenAI projects isn't just about quality of output: it's always a balance between quality, speed, scalability, and cost.
When I speak with stakeholders about GenAI performance, I always bring cost into the conversation early. Business users often assume that the high performance they see in consumer-facing tools like ChatGPT will translate directly into their own use case. In reality, those results are achieved with models and configurations that may be prohibitively expensive to run at scale in a production environment (and only feasible for multi-billion-dollar companies).
The key is setting realistic expectations:
- If the business is willing to pay for top-tier performance, great.
- If cost constraints are strict, you may have to optimize for a "good enough" solution that balances performance with affordability.
Those are my tips for setting expectations in AI projects, especially in the B2B space, where stakeholders often come in with strong assumptions.
What about you? Do you have tips or lessons learned to add? Share them in the comments!