Evolving Product Operating Models in the Age of AI


In a previous article on organizing for AI (link), we looked at how the interplay between three key dimensions — ownership of outcomes, outsourcing of staff, and the geographical proximity of team members — can yield a wide range of organizational archetypes for implementing strategic AI initiatives, each implying a distinct twist to the product operating model.

Now we take a closer look at how the product operating model, and specifically the core competencies of empowered product teams, can evolve to meet the emerging opportunities and challenges of the age of AI. We start by placing the current orthodoxy in its historical context and present a process model highlighting four key phases in the evolution of team composition in product operating models. We then consider how teams can be reshaped to successfully create AI-powered products and services going forward.

Note: All figures in the following sections have been created by the author of this article.

The Evolution of Product Operating Models

Current Orthodoxy and Historical Context

Product coaches such as Marty Cagan have done much in recent years to popularize the "3-in-a-box" model of empowered product teams. According to the current orthodoxy, these teams should consist of three first-class, core competencies: product management, product design, and engineering. Being first-class means that none of these competencies is subordinate to the others in the org chart, and that the product manager, design lead, and engineering lead are empowered to jointly make strategic product-related decisions. Being core reflects the belief that removing or otherwise compromising on any of these three competencies would lead to worse product outcomes, i.e., products that don't work for customers or for the business.

A central conviction of the current orthodoxy is that the 3-in-a-box model helps address product risks in four key areas: value, viability, usability, and feasibility. Product management is accountable for overall outcomes, and particularly concerned with ensuring that the product is valuable to customers (typically implying a higher willingness to pay) and viable for the business, e.g., in terms of how much it costs to build, operate, and maintain the product in the long run. Product design is accountable for the user experience (UX), and primarily interested in maximizing the usability of the product, e.g., through intuitive onboarding, good use of affordances, and a delightful user interface (UI) that allows for efficient work. Lastly, engineering is accountable for technical delivery, and primarily focused on ensuring the feasibility of the product, e.g., characterized by the ability to ship an AI use case within certain technical constraints, ensuring sufficient predictive performance, inference speed, and safety.

Getting to this 3-in-a-box model has not been a straightforward journey, however, and the model is still not widely adopted outside tech firms. In the early days, product teams — if they could even be called that — mainly consisted of developers who tended to be responsible for both coding and gathering requirements from sales teams or other internal business stakeholders. Such product teams would focus on feature delivery rather than user experience or strategic product development; today such teams are therefore also known as "feature teams". Popular TV shows have vividly depicted tech firms organizing like this in the 1980s and 90s, and others underscore how such disempowered teams can persist in IT departments in modern times.

As software projects grew in complexity in the late 1990s and early 2000s, the need for a dedicated product management competency to align product development with business goals and customer needs became increasingly evident. Firms like Microsoft and IBM began formalizing the role of the product manager, and other firms soon followed. Then, as the 2000s saw the emergence of various online consumer-facing services (e.g., for search, shopping, and social networking), design/UX became a priority. Firms like Apple and Google began emphasizing design, leading to the formalization of corresponding roles. Designers began working closely with developers to ensure that products were not only functional but also visually appealing and user-friendly. Since the 2010s, the increased adoption of agile and lean methodologies has further reinforced the need for cross-functional teams that can iterate quickly and respond to user feedback, all of which paved the way for the current 3-in-a-box orthodoxy.

A Process Framework for the Evolution of Product Operating Models

Looking ahead 5-10 years from today's vantage point in 2025, it is interesting to consider how the emergence of AI as a "table stakes" competency might shake up the current orthodoxy, potentially triggering the next step in the evolution of product operating models. Figure 1 below proposes a four-phase process framework for how existing product models might evolve to incorporate the AI competency over time, drawing on instructive parallels to the situation faced by design/UX only a few years ago. Note that, at the risk of somewhat abusing terminology, but in keeping with today's industry norms, the terms "UX" and "design" are used interchangeably in the following to refer to the competency concerned with minimizing usability risk.

Figure 1: An Evolutionary Process Framework

Phase 1 in the above framework is characterized by ignorance and/or skepticism. UX initially faced the struggle of justifying its value at firms that had previously focused mainly on functional and technical performance, as in the context of non-consumer-facing enterprise software (think ERP systems of the 1990s). AI today faces a similar uphill battle. Not only is AI poorly understood by many stakeholders to begin with, but firms that have been burned by early forays into AI may now be wallowing in the "trough of disillusionment", leading to skepticism and a wait-and-see approach towards adopting AI. There may also be concerns around the ethics of collecting behavioral data, algorithmic decision-making, bias, and getting to grips with the inherently uncertain nature of probabilistic AI output (e.g., consider the implications for software testing).

Phase 2 is marked by a growing recognition of the strategic importance of the new competency. For UX, this phase was catalyzed by the rise of consumer-facing online services, where improvements to UX could significantly drive engagement and monetization. As success stories of firms like Apple and Google began to spread, the strategic value of prioritizing UX became harder to ignore. With the confluence of several key trends over the past decade, such as the availability of cheaper computation via hyper-scalers (e.g., AWS, GCP, Azure), access to Big Data in a wide range of domains, and the development of powerful new machine learning algorithms, our collective awareness of the potential of AI had been growing steadily by the time ChatGPT burst onto the scene and captured everyone's attention. The rise of design patterns to harness probabilistic outcomes and the related success stories of AI-powered firms (e.g., Netflix, Uber) mean that AI is now increasingly seen as a key differentiator, much like UX before it.

In Phase 3, the roles and responsibilities pertaining to the new competency become formalized. For UX, this meant differentiating between the roles of designers (covering experience, interactions, and the look and feel of user interfaces) and researchers (specializing in qualitative and quantitative methods for gaining a deeper understanding of user preferences and behavioral patterns). To remove any doubts about the value of UX, it was made into a first-class, core competency, sitting next to product management and engineering to form the current triumvirate of the standard product operating model. The past few years have witnessed the increased formalization of AI-related roles, expanding beyond the jack-of-all-trades conception of "data scientists" to more specialized roles like "research scientists", "ML engineers", and, more recently, "prompt engineers". Looking ahead, an intriguing open question is how the AI competency will be incorporated into the current 3-in-a-box model. We may well see an iterative formalization of embedded, consultative, and hybrid models, as discussed in the next section.

Finally, Phase 4 sees the emergence of norms and best practices for effectively leveraging the new competency. For UX, this is reflected today in the adoption of practices like design thinking and lean UX. It has also become rare to find top-class, customer-centric product teams without a strong, first-class UX competency. Meanwhile, recent years have seen concerted efforts to develop standardized AI practices and policies (e.g., Google's AI Principles, SAP's AI Ethics Policy, and the EU AI Act), partly to address the dangers that AI already poses, and partly to stave off dangers it might pose in the future (especially as AI becomes more powerful and is put to nefarious uses by bad actors). The extent to which the normalization of AI as a competency might impact the current orthodox framing of the 3-in-a-box product operating model remains to be seen.

Towards AI-Ready Product Operating Models

Leveraging AI Expertise: Embedded, Consultative, and Hybrid Models

Figure 2 below proposes a high-level framework for thinking about how the AI competency could be incorporated into today's orthodox, 3-in-a-box product operating model.

Figure 2: Options for AI-Ready Product Operating Models

In the embedded model, AI (personified by data scientists, ML engineers, etc.) may be added either as a new, durable, first-class competency next to product management, UX/design, and engineering, or as a competency subordinated to these "big three" (e.g., staffing data scientists in an engineering team). By contrast, in the consultative model, the AI competency might reside in some centralized entity, such as an AI Center of Excellence (CoE), and be leveraged by product teams on a case-by-case basis. For instance, AI experts from the CoE may be brought in temporarily to advise a product team on AI-specific issues during product discovery and/or delivery. In the hybrid model, as the name suggests, some AI experts may be embedded as long-term members of the product team while others are brought in at times to provide additional consultative guidance. While Figure 2 only illustrates the case of a single product team, one can imagine these model options scaling to multiple product teams, capturing the interplay between different teams. For instance, an "experience team" (responsible for building customer-facing products) might collaborate closely with a "platform team" (maintaining AI services/APIs that experience teams can leverage) to ship an AI product to customers.

Each of the above models for leveraging AI comes with certain pros and cons. The embedded model can enable closer collaboration, more consistency, and faster decision-making. Having AI experts in the core team can lead to more seamless integration and collaboration; their continuous involvement ensures that AI-related inputs, whether conceptual or implementation-focused, can be integrated consistently throughout the product discovery and delivery phases. Direct access to AI expertise can speed up problem-solving and decision-making. However, embedding AI experts in every product team may be too expensive and difficult to justify, especially for firms or specific teams that cannot articulate a clear and compelling thesis about the expected AI-enabled return on investment. As a scarce resource, AI experts may either only be available to a handful of teams that can make a strong enough business case, or be spread too thinly across several teams, leading to adverse outcomes (e.g., slower turnaround of tasks and employee churn).

With the consultative model, staffing AI experts in a central team can be cheaper. Central experts can be allocated more flexibly to projects, allowing higher utilization per expert. It is also possible for one highly specialized expert (e.g., focused on large language models or AI lifecycle management) to advise multiple product teams at once. However, a purely consultative model can make product teams dependent on colleagues outside the team; these AI consultants may not always be available when needed, and may switch to another company at some point, leaving the product team high and dry. Repeatedly onboarding new AI consultants to the product team is time- and effort-intensive, and such consultants, especially if they are junior or new to the company, may not feel able to challenge the product team even when doing so might be crucial (e.g., warning about data-related bias, privacy concerns, or suboptimal architectural decisions).

The hybrid model aims to balance the trade-offs between the purely embedded and purely consultative models. It can be implemented organizationally as a hub-and-spoke structure to foster regular knowledge sharing and alignment between the hub (the CoE) and the spokes (the embedded experts). Giving product teams access to both embedded and consultative AI experts can provide both consistency and flexibility. The embedded AI experts can develop domain-specific know-how that helps with feature engineering and model performance diagnosis, while specialized AI consultants can advise and up-skill the embedded experts on more general, state-of-the-art technologies and best practices. However, the hybrid model is more complex to manage. Tasks must be divided carefully between the embedded and consultative AI experts to avoid redundant work, delays, and conflicts. Overseeing the alignment between embedded and consultative experts can create additional managerial overhead that may need to be borne to varying degrees by the product manager, design lead, and engineering lead.

The Effect of Boundary Conditions and Path Dependence

Besides weighing the pros and cons of the model options depicted in Figure 2, product teams should also account for boundary conditions and path dependence in deciding how to incorporate the AI competency.

Boundary conditions refer to the constraints that shape the environment in which a team must operate. Such conditions may relate to factors such as organizational structure (encompassing reporting lines, informal hierarchies, and decision-making processes within the company and team), resource availability (in terms of budget, personnel, and tools), regulatory and compliance-related requirements (e.g., legal and/or industry-specific regulations), and market dynamics (spanning the competitive landscape, customer expectations, and market trends). Path dependence refers to how historical decisions can influence current and future decisions; it emphasizes the importance of past events in shaping the later trajectory of an organization. Key factors leading to such dependencies include historical practices (e.g., established routines and processes), past investments (e.g., in infrastructure, technology, and human capital, potentially leading to irrational decision-making by teams and executives due to the sunk cost fallacy), and organizational culture (covering the shared values, beliefs, and behaviors that have developed over time).

Boundary conditions can limit a product team's options when configuring the operating model; some desirable choices may be out of reach (e.g., budget constraints preventing the staffing of an embedded AI expert with a certain specialization). Path dependence can create an adverse kind of inertia, whereby teams continue to follow established processes and methods even when better alternatives exist. This can make it difficult to adopt new operating models that require significant changes to existing practices. One way to work around path dependence is to allow different product teams to evolve their respective operating models at different speeds according to their team-specific needs; a team building an AI-first product may choose to invest in embedded AI experts sooner than another team that is exploring potential AI use cases for the first time.

Finally, it is worth remembering that the choice of a product operating model can have far-reaching consequences for the design of the product itself. Conway's Law states that "any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure." In our context, this means that the way product teams are organized, communicate, and incorporate the AI competency can directly impact the architecture of the products and services that they go on to create. For instance, consultative models may be more likely to result in the use of generic AI APIs (which the consultants can reuse across teams), while embedded AI experts may be better positioned to implement product-specific optimizations aided by domain know-how (albeit at the risk of tighter coupling to other components of the product architecture). Firms and teams should therefore be empowered to configure their own AI-ready product operating models, giving due consideration to the broader, long-term implications.
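The coupling trade-off between generic APIs and product-specific models can be softened architecturally. Here is a minimal Python sketch (all class and function names are hypothetical, invented for illustration) in which product code depends only on a thin interface, so a generic, reusable AI service and a domain-tuned model remain interchangeable:

```python
from typing import Protocol


class Recommender(Protocol):
    """Product-facing contract: the product team codes against this interface,
    not against any particular AI implementation."""

    def recommend(self, user_id: str, k: int) -> list[str]: ...


class GenericApiRecommender:
    """Consultative-model outcome: wraps a generic, reusable AI service
    (stubbed here; in practice this would call a shared platform API)."""

    def recommend(self, user_id: str, k: int) -> list[str]:
        return [f"item-{i}" for i in range(k)]


class DomainTunedRecommender:
    """Embedded-model outcome: product-specific logic built with domain
    know-how, at the cost of tighter coupling to the product's data."""

    def __init__(self, domain_boosts: dict[str, float]):
        self.domain_boosts = domain_boosts

    def recommend(self, user_id: str, k: int) -> list[str]:
        # Rank items by their domain-specific boost, highest first.
        ranked = sorted(self.domain_boosts, key=self.domain_boosts.get, reverse=True)
        return ranked[:k]


def render_homepage(recommender: Recommender, user_id: str) -> list[str]:
    # Product code stays unchanged regardless of which implementation
    # the operating model ends up supplying.
    return recommender.recommend(user_id, k=3)


print(render_homepage(GenericApiRecommender(), "u1"))
boosts = {"a": 0.9, "b": 0.5, "c": 0.7, "d": 0.1}
print(render_homepage(DomainTunedRecommender(boosts), "u1"))
```

The interface acts as a deliberate seam in the architecture: even if Conway's Law pushes the initial implementation toward whichever model the organization adopted, the team retains the option to swap it later without rewriting the product.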
