Making Smarter Bets: Towards a Winning AI Strategy with Probabilistic Thinking


In a previous article, we covered key theoretical concepts that underpin expected value analysis — which involves probabilistic weighting of uncertain outcomes — and focused on their relevance to AI product management. In this article, we are going to zoom out and consider the bigger picture, taking a look at how probabilistic thinking based on expected values can help AI teams tackle broader strategic problems such as opportunity identification and selection, product portfolio management, and countering behavioral biases that lead to irrational decision making. The target audience of this article includes AI business sponsors and executives, AI product leaders, data scientists and engineers, and any other stakeholders engaged in the conception and execution of AI strategies.

Identifying and Choosing AI Opportunities

How to spot value-creating opportunities in which to invest scarce resources, and then optimally select among them, is an age-old problem. Advances in the theory and practice of investment analysis over the past five hundred years have given us such useful tools and concepts as net present value (NPV), discounted cash flow (DCF) analysis, return on invested capital (ROIC), and real options, to name but a few. All these tools acknowledge the uncertainty inherent in making decisions about the future and try to account for this uncertainty using educated assumptions and — unsurprisingly — the notion of expected value. For instance, NPV, DCF, and ROIC all require us to forecast expected returns (or cash flows) over some future time period. This fundamentally involves estimating the probabilities of potential business outcomes together with their associated returns in that time period and combining these estimates to compute the expected value.
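To make the arithmetic concrete, here is a minimal sketch (in Python) of how probability-weighted cash flow forecasts feed into an expected-value NPV calculation; the scenarios, probabilities, and discount rate are all purely illustrative assumptions:

```python
# Expected-value NPV: probability-weighted cash flow scenarios, discounted to today.
# All scenarios, probabilities, and rates below are illustrative assumptions.

scenarios = [
    # (probability, cash flows for years 1-3)
    (0.5, [100_000, 150_000, 200_000]),  # base case
    (0.3, [150_000, 250_000, 400_000]),  # optimistic case
    (0.2, [-50_000, 0, 50_000]),         # pessimistic case
]

DISCOUNT_RATE = 0.10         # hypothetical required rate of return
INITIAL_INVESTMENT = 250_000

# Expected cash flow per year = sum over scenarios of probability * cash flow.
n_years = len(scenarios[0][1])
expected_cash_flows = [
    sum(p * flows[t] for p, flows in scenarios) for t in range(n_years)
]

# NPV = -initial investment + sum of discounted expected cash flows.
npv = -INITIAL_INVESTMENT + sum(
    cf / (1 + DISCOUNT_RATE) ** (t + 1)
    for t, cf in enumerate(expected_cash_flows)
)

print(f"Expected cash flows per year: {expected_cash_flows}")
print(f"Expected NPV: ${npv:,.0f}")  # a positive NPV suggests the investment creates value
```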

With an understanding of expected value, powerful, field-tested methods of investment analysis such as those mentioned above can be leveraged by AI product teams to identify and select investment opportunities (e.g., projects to work on and features to ship to customers). In a publication by a European institute fostering industry-academic collaboration and the promotion of responsible AI, the authors outline an approach to computing the ROIC of AI products using expected values. They show a tree diagram of the ROIC calculation, which breaks down the “return” term of the formula into the “benefits” of the AI product (based on the quantity and quality of model predictions) and the uncertainty/expected costs of those benefits. They set these returns against the cost of investment, i.e., the total cost of the resources needed (IT, labor, and so on) to develop, operate, and maintain the AI product. Calculating the ROIC of different AI investment opportunities using expected values can help product teams identify and select promising opportunities despite the inherent uncertainty involved.
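As a rough illustration of the tree-style breakdown described above, the following sketch computes an expected-value ROIC for a hypothetical AI product; all figures (prediction volume, accuracy, value per prediction, and cost items) are invented for illustration:

```python
# Simplified expected-value ROIC for a hypothetical AI product (all figures illustrative).
# Benefits depend on the quantity and quality of model predictions;
# costs cover developing, operating, and maintaining the product.

predictions_per_year = 1_000_000
value_per_correct_prediction = 0.50  # e.g., cost saved per correct prediction ($)
expected_accuracy = 0.85             # probability that a prediction creates that value

expected_annual_benefit = (
    predictions_per_year * expected_accuracy * value_per_correct_prediction
)

annual_costs = {
    "development (amortized)": 150_000,
    "infrastructure / IT": 80_000,
    "operations & maintenance": 60_000,
}
invested_capital = sum(annual_costs.values())

expected_return = expected_annual_benefit - invested_capital
roic = expected_return / invested_capital

print(f"Expected annual benefit: ${expected_annual_benefit:,.0f}")
print(f"Invested capital:        ${invested_capital:,.0f}")
print(f"Expected ROIC:           {roic:.1%}")
```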

Using real options can give teams even more flexibility in their decision making (see more information on real options here and here). Common types of real options include the option to expand (e.g., increasing the functionality of an AI product, or offering the product to a broader set of customers), the option to contract or reduce scope (e.g., only offering the product to premium customers in the future), the option to switch (e.g., having the flexibility to move AI workloads from one hyperscaler to another), the option to defer (e.g., deferring the decision to build an AI product until market readiness can be ascertained), and the option to abandon (e.g., sunsetting a product). To decide whether to invest in one or more of these options, product teams can estimate the expected value of each option and proceed accordingly.
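A minimal sketch of how such a comparison might look in practice, assuming hypothetical outcome probabilities and payoffs for three of the option types above:

```python
# Comparing real options by expected value.
# Each option maps to its possible outcomes: (probability, payoff in $).
# All probabilities and payoffs are hypothetical.

options = {
    "expand to a new customer segment": [(0.4, 500_000), (0.6, -100_000)],
    "defer until market readiness":     [(0.7, 200_000), (0.3, 0)],
    "switch hyperscalers":              [(0.5, 150_000), (0.5, -20_000)],
}

def expected_value(outcomes):
    """Probability-weighted sum of payoffs."""
    return sum(p * payoff for p, payoff in outcomes)

# Rank the options from highest to lowest expected value.
for name, outcomes in sorted(options.items(), key=lambda kv: -expected_value(kv[1])):
    print(f"{name}: expected value = ${expected_value(outcomes):,.0f}")
```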

Check out the video below for hands-on examples of how standard frameworks (NPV, DCF) and real options analysis can lead to different conclusions about the attractiveness of investment decisions:

AI Portfolio Management

At any given time, businesses (especially large ones) tend to be active on multiple fronts, launching new products, expanding or streamlining existing products, and sunsetting others. Product leaders are thus faced with the never-ending and non-trivial challenge of portfolio management, which involves allocating scarce resources (budget, staffing, and so on) across an evolving portfolio of products that may be at different stages of their lifecycle, with due consideration of internal factors (e.g., the company’s strengths and weaknesses) and external factors (e.g., threats and opportunities pertaining to macroeconomic trends and changes in the competitive landscape). The challenge becomes especially daunting as new AI products compete for space in the product portfolio with other important products and initiatives (e.g., related to overdue technology migrations, modernization of user interfaces, and enhancements targeting the reliability and security of core services).

Although primarily associated with the field of finance, modern portfolio theory (MPT) is a concept that relies on expected value analysis and can be used to manage AI product portfolios. In essence, MPT can help product leaders construct portfolios that combine different types of assets (products) to maximize expected returns (e.g., revenue, usage, and customer satisfaction over a future time period) while minimizing risk (e.g., due to mounting technical debt, threats from competitors, and regulatory pushback). Probabilistic thinking in the form of expected value analysis can be used to estimate expected returns and account for risks, allowing a more sophisticated, data-driven assessment of the portfolio’s overall risk-return profile; this assessment, in turn, can lead to actionable recommendations for optimally allocating resources across different products.
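The following sketch shows the core MPT calculation (expected portfolio return and risk as a function of resource allocation), using hypothetical products, returns, and covariances:

```python
import numpy as np

# MPT-style risk-return assessment for a product portfolio.
# Products, expected returns, covariances, and weights are all hypothetical.

products = ["mature product", "new AI product", "platform service"]
expected_returns = np.array([0.05, 0.25, 0.10])  # expected annual return per product

# Covariance matrix of product returns:
# diagonal = variance (risk) of each product, off-diagonal = co-movement.
cov = np.array([
    [0.01, 0.00, 0.00],
    [0.00, 0.09, 0.01],
    [0.00, 0.01, 0.04],
])

weights = np.array([0.5, 0.2, 0.3])  # resource allocation across products (sums to 1)

portfolio_return = weights @ expected_returns
portfolio_risk = np.sqrt(weights @ cov @ weights)  # std dev of portfolio returns

for name, w, r in zip(products, weights, expected_returns):
    print(f"{name}: weight {w:.0%}, expected return {r:.0%}")
print(f"Expected portfolio return: {portfolio_return:.1%}")
print(f"Portfolio risk (std dev):  {portfolio_risk:.1%}")
```

Varying the weights traces out different risk-return profiles; MPT formalizes the search for allocations that offer the highest expected return for a given level of risk.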

See this video for a deeper explanation of MPT:

Countering Behavioral Biases

Suppose you’ve won a game and are presented with the following three prize options: (1) a guaranteed $100, (2) a 50% chance of winning $200, and (3) a 10% chance of winning $1,100. Which prize would you choose, and how would you rank the prizes overall? Whereas the first prize guarantees a certain return, the latter two come with varying degrees of risk. However, the expected return of the second prize is $200 × 0.5 + $0 × 0.5 = $100, so we should (at least in theory) be indifferent between the first two prizes; after all, their expected returns are the same. Meanwhile, the third prize offers an expected return of $1,100 × 0.1 + $0 × 0.9 = $110, so clearly, we should (in theory) choose this prize option over the others. In terms of ranking, we would give the third prize option the top rank, and jointly give the other two prize options the second rank. Readers who wish to gain a deeper understanding of the above discussion are encouraged to review the theory section and selected case studies in this article.
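For readers who prefer code to arithmetic, here is the same calculation as a short Python snippet (the prizes are exactly those from the example above):

```python
# Expected value of each prize option: sum of probability * payoff.
prizes = {
    "(1) guaranteed $100":      [(1.0, 100)],
    "(2) 50% chance of $200":   [(0.5, 200), (0.5, 0)],
    "(3) 10% chance of $1,100": [(0.1, 1_100), (0.9, 0)],
}

for name, outcomes in prizes.items():
    ev = sum(p * amount for p, amount in outcomes)
    print(f"{name}: expected value = ${ev:.0f}")
# Prints $100, $100, and $110, so option (3) wins in expectation.
```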

The preceding analysis assumes that we are what economists might refer to as rational agents, always making optimal decisions based on the available information. But in reality, of course, we tend to be anything but perfectly rational. As human beings, we are afflicted by a number of so-called behavioral biases (or cognitive biases), which — despite their potential evolutionary rationale — can often impair our judgment and lead to suboptimal decisions. One important behavioral bias that may have affected your choice of prize in the above example is called loss aversion, which is about having greater sensitivity to losses than gains. Since the first prize option represents a certain gain of $100 (i.e., no feeling of loss), whereas the third prize option comes with a 90% chance of gaining nothing, loss aversion (or risk aversion) may lead you to opt for the first — theoretically suboptimal — prize option. In fact, even the way the prize options are framed or presented can affect your decision. Framing the third prize option as “a 10% chance of winning $1,100” may make it seem more attractive than framing it as “a 90% risk of getting nothing and a 10% chance of getting $1,100,” since the latter framing suggests the possibility of a loss (compared to the guaranteed $100) and makes no explicit mention of “winning.”

Guarding against suboptimal decisions resulting from behavioral biases is vital when developing and executing a sound AI strategy, especially given the hype surrounding generative AI since ChatGPT was released to the public in late 2022. Nowadays, the topic of AI has board-level attention at companies across industry sectors, and calling a company “AI-first” is likely to boost its stock price. The potentially game-changing impact of AI (which could significantly bring down the cost of producing many goods and services) is often compared to pivotal moments in history such as the emergence of the Internet (which reduced the cost of distribution) and cloud computing (which reduced the cost of IT ownership). The hype around AI, even if it may be justified in some cases, puts tremendous pressure on decision makers in leadership positions to jump on the AI bandwagon despite often being ill-prepared to do so effectively. Many companies lack access to the kind of data and AI talent that would allow them to build competitive AI products. Piggybacking on third-party providers may seem expedient in the short term, but entails long-term risks due to vendor lock-in.

Against this backdrop, company leaders can use probabilistic thinking — and the concept of expected value in particular — to counter common behavioral biases such as:

  • Herd mentality: Decision makers tend to follow the crowd. If a CEO sees her counterparts at other companies making substantial investments in generative AI, she may feel compelled to do the same, even though the risks and limitations of the new technology have not been thoroughly evaluated, and her product teams may not yet be ready to properly tackle the challenge. This bias is closely related to the so-called fear of missing out (FOMO). Product leaders can help steer colleagues in the C-suite away from potentially misguided “follow the herd,” FOMO-driven decisions by arguing in favor of creating a diverse set of real options and prioritizing these options based on expected value.
  • Overconfidence: Product leaders may overestimate their ability to predict the success of new AI-powered products. They may think that they understand the underlying technology and the likely receptiveness of customers to the new AI products better than they actually do, leading to unwarranted confidence in their investment decisions. Overconfidence can result in excessive risk-taking, especially when dealing with unproven technologies such as generative AI. Expected value analysis can help temper this confidence and lead to more prudent decision making.
  • Sunk cost fallacy: This logical fallacy is often described as “throwing good money after bad.” It happens when product leaders and teams believe that past investments in something justify additional future investments, even if the return on all these investments may be negative. For example, product leaders today may feel compelled to allocate more and more resources to products built using generative AI, even though the expected returns may be negative due to issues related to hallucinations, data privacy, safety, and security. Thinking in terms of expected value can help guard against this fallacy.
  • Confirmation bias: Company leaders and managers may tend to seek out information that confirms their existing beliefs, leaving them blind to important information that might counter those beliefs. For instance, when evaluating (generative) AI, product managers might selectively focus on success stories and findings from user research that align with their preconceptions, making it harder to objectively assess limitations and risks. By analyzing the expected value of AI investments, product managers can challenge unfounded assumptions and make rational decisions without being swayed by prior beliefs or selective information. Crucially, the concept of expected value allows beliefs to be updated based on new information and encourages a prudent, long-term view of decision making (see the sketch after this list).
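To illustrate that last point, here is a minimal Bayesian sketch of how a belief, and hence an expected value, can be updated in light of new evidence; the prior, likelihoods, and payoffs are all hypothetical:

```python
# Updating a belief with new evidence (Bayes' rule), then recomputing expected value.
# All probabilities and payoffs below are hypothetical.

p_success = 0.6  # prior belief: the AI product succeeds with probability 0.6

# New evidence: a pilot study. Assume a good pilot result is observed with
# probability 0.8 if the product would succeed, and 0.3 if it would fail.
p_good_given_success = 0.8
p_good_given_failure = 0.3

# Bayes' rule: P(success | good pilot result).
p_good = p_good_given_success * p_success + p_good_given_failure * (1 - p_success)
p_success_updated = p_good_given_success * p_success / p_good

payoff_success, payoff_failure = 1_000_000, -400_000

ev_before = p_success * payoff_success + (1 - p_success) * payoff_failure
ev_after = p_success_updated * payoff_success + (1 - p_success_updated) * payoff_failure

print(f"P(success) before/after pilot: {p_success:.2f} / {p_success_updated:.2f}")
print(f"Expected value before/after:   ${ev_before:,.0f} / ${ev_after:,.0f}")
```

The point is not the specific numbers but the discipline: beliefs are revised by evidence, favorable or not, rather than by selectively collected confirmation.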

See this Wikipedia article for a more exhaustive list of such biases.

The Wrap

As this article demonstrates, probabilistic thinking in terms of expected values can help shape a company’s AI strategy in several ways, from discovering real options and constructing robust product portfolios to guarding against behavioral biases. The relevance of probabilistic thinking is perhaps not entirely surprising, given that most companies today operate in a so-called “VUCA” business environment, characterized by varying degrees of volatility, uncertainty, complexity, and ambiguity. In this context, expected value analysis encourages decision makers to acknowledge and quantify the uncertainty of future pay-offs, and to act prudently to capture value while mitigating risks. Overall, probabilistic thinking as a strategic toolkit is likely to gain importance in a future where uncertain technologies such as AI play an outsized role in shaping company growth and shareholder value.
