Introduction
Data science teams can operate in myriad ways inside an organization. These organizational models influence not only the kind of work the team does, but also the team’s culture, goals, impact, and overall value to the company.
Adopting the wrong organizational model can limit impact, cause delays, and compromise a team’s morale. As a result, leadership should pay attention to these different organizational models and explicitly select models aligned with each project’s goals and the team’s strengths.
This article explores six distinct models we’ve observed across numerous organizations. The models are primarily differentiated by who initiates the work, what output the data science team generates, and how the data science team is evaluated. We note common pitfalls, pros, and cons of each model to help you determine which could work best for your organization.
1. The scientist
Prototypical scenario
A scientist at a university studies changing ocean temperatures and publishes peer-reviewed journal articles detailing their findings. They hope that policymakers will one day recognize the importance of changing ocean temperatures, read their papers, and take action based on their research.
Who initiates
Data scientists working within this model typically initiate their own projects, driven by intellectual curiosity and a desire to advance knowledge within a field.
How is the work judged
A scientist’s output is commonly assessed by how their work impacts the thinking of their peers. For instance, did their work draw other experts’ attention to an area of study, resolve fundamental open questions, enable subsequent discoveries, or lay the groundwork for later applications?
Common pitfalls to avoid
Basic scientific research pushes humanity’s knowledge forward, delivering foundational knowledge that enables long-term societal progress. However, data science projects that use this model risk focusing on questions that have large long-term implications but limited opportunities for near-term impact. Furthermore, the model encourages decoupling scientists from decision makers, so it may not cultivate the shared context, communication styles, or relationships that are necessary to drive action (e.g., regrettably little action has resulted from all of the research on climate change).
Pros
- The opportunity to develop deep expertise at the forefront of a field
- Potential for groundbreaking discoveries
- Attracts strong talent that values autonomy
Cons
- May struggle to drive outcomes based on findings
- May lack alignment with organizational priorities
- Many interesting questions don’t have large business implications
2. The business intelligence team
Prototypical scenario
A marketing team requests data about the open and click-through rates for each of their recent emails. The Business Intelligence (BI) team responds with a spreadsheet or dashboard that displays the requested data (the computation behind such a request is sketched below).
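For concreteness, here is a minimal sketch in Python of the kind of computation behind such a request. The campaign data and column names (emails_sent, emails_opened, links_clicked) are hypothetical:

```python
import pandas as pd

# Hypothetical per-campaign email engagement counts.
campaigns = pd.DataFrame({
    "campaign":      ["spring_sale", "new_feature", "newsletter_42"],
    "emails_sent":   [10_000, 8_500, 12_000],
    "emails_opened": [2_400, 1_900, 3_100],
    "links_clicked": [310, 240, 420],
})

# Open rate: share of sent emails that were opened.
campaigns["open_rate"] = campaigns["emails_opened"] / campaigns["emails_sent"]

# Click-through rate (CTR), computed here per email sent; some teams
# instead define CTR relative to opens (the click-to-open rate).
campaigns["ctr"] = campaigns["links_clicked"] / campaigns["emails_sent"]

print(campaigns[["campaign", "open_rate", "ctr"]])
```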
Who initiates
An operational (Marketing, Sales, etc.) or Product team submits a ticket or makes a request directly to a data science team member.
How the DS team is judged
The BI team’s contribution will be judged by how quickly and accurately they service inbound requests.
Common pitfalls to avoid
BI teams can efficiently execute against well-specified inbound requests. Unfortunately, requests don’t typically include substantial context about a domain, the decisions being made, or the company’s larger goals. As a result, BI teams often struggle to drive innovation or strategically meaningful levels of impact. In the worst situations, the BI team’s work is simply used to justify decisions that were already made.
Pros
- Clear roles and responsibilities for the info science team
- Rapid execution against specific requests
- Direct fulfillment of stakeholder needs (Happy partners!)
Cons
- Rarely capitalizes on the non-executional skills of data scientists
- Unlikely to drive substantial innovation
- Top talent will typically seek a broader and less executional scope
3. The analyst
Prototypical scenario
A product team requests an analysis of a recent spike in customer churn. The data science team studies how churn spiked and what might have driven the change. The analyst presents their findings in a meeting, and the analysis is captured in a slide deck that is shared with all attendees.
Who initiates
Similar to the BI model, the Analyst model typically begins with an operational or product team’s request.
How the DS team is judged
The Analyst’s work is usually judged by whether the requester feels they received useful insights. In the best cases, the analysis will point to an action that is subsequently taken and yields a desired outcome (e.g., an analysis indicates that the spike in customer churn occurred just as page load times increased on the platform; subsequent efforts to decrease page load times return churn to normal levels). A minimal sketch of this kind of first-pass analysis follows.
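As a concrete illustration of the churn example above, here is a minimal sketch in Python. The weekly figures and column names are hypothetical, and the correlation check is only a starting point, not proof of causation:

```python
import pandas as pd

# Hypothetical weekly platform metrics around the churn spike.
weeks = pd.DataFrame({
    "week":              pd.date_range("2024-01-01", periods=8, freq="W"),
    "churn_rate":        [0.020, 0.021, 0.019, 0.022, 0.035, 0.038, 0.036, 0.037],
    "p95_page_load_sec": [1.1, 1.2, 1.1, 1.2, 2.9, 3.1, 3.0, 3.0],
})

# A simple first check: do churn and page load times move together?
# Correlation is suggestive, not causal; the analyst would still need to
# rule out confounders (pricing changes, seasonality, outages, etc.).
corr = weeks["churn_rate"].corr(weeks["p95_page_load_sec"])
print(f"Correlation between churn and p95 page load time: {corr:.2f}")
```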
Common pitfalls to avoid
An analyst’s insights can guide critical strategic decisions, while helping the data science team develop valuable domain expertise and relationships. However, if an analyst doesn’t sufficiently understand the operational constraints in a domain, then their analyses may not be directly actionable.
Pros
- Analyses can provide substantive and impactful learnings
- Capitalizes on the data science team’s strengths in interpreting data
- Creates opportunities to build deep subject matter expertise
Cons
- Insights may not always be directly actionable
- May not have visibility into the impact of an analysis
- Analysts are prone to becoming “armchair quarterbacks”
4. The recommender
Prototypical scenario
A product manager requests a system that ranks products on a website. The Recommender develops an algorithm and conducts A/B testing to measure its impact on sales, engagement, etc. The Recommender then iteratively improves the algorithm via a series of A/B tests (one such test readout is sketched below).
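For concreteness, here is a minimal sketch of how one such A/B test might be read out, using a standard two-proportion z-test. The conversion counts are hypothetical, and real tests would also need pre-registered sample sizes and guardrail metrics:

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical A/B test results: conversions out of users exposed to
# the old ranking (control) vs. the new algorithm (treatment).
conv_a, n_a = 480, 10_000   # control
conv_b, n_b = 540, 10_000   # treatment

p_a, p_b = conv_a / n_a, conv_b / n_b

# Two-proportion z-test using the pooled conversion rate.
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))  # two-sided

print(f"lift: {p_b - p_a:+.4f}, z = {z:.2f}, p = {p_value:.3f}")
```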
Who initiates
A product manager typically initiates this type of project, recognizing the need for a recommendation engine to improve the user experience or drive business metrics.
How the DS team is judged
The Recommender is ideally judged by their impact on key performance indicators like sales efficiency or conversion rates. The precise form this takes will often depend on whether the recommendation engine is client-facing or back-office-facing (e.g., lead scores for a sales team).
Common pitfalls to avoid
Recommendation projects thrive when they are aligned to high-frequency decisions that each have low incremental value (e.g., what song to play next). Training and assessing recommendations may be difficult for low-frequency decisions because of low data volume. Even assessing whether recommendation adoption is warranted can be difficult if each decision has high incremental value. For example, consider efforts to develop and deploy computer vision systems for medical diagnoses: despite their objectively strong performance, adoption has been slow because cancer diagnoses are relatively low frequency and each carries very high incremental value.
Pros
- Clear objectives and opportunity for measurable impact via A/B testing
- Potential for significant ROI if the recommendation system is successful
- Direct alignment with customer-facing outcomes and the organization’s goals
Cons
- Errors will directly hurt client or financial outcomes
- Internally facing recommendation engines may be hard to validate
- Potential for algorithm bias and negative externalities
5. The automator
Prototypical scenario
A self-driving car takes its owner to the airport. The owner sits in the driver’s seat, just in case they need to intervene, but they rarely do.
Who initiates
An operational, product, or data science team can spot the opportunity to automate a task.
How the DS team is judged
The Automator is evaluated on whether their system produces better or cheaper outcomes than when a human was executing the task.
Common pitfalls to avoid
Automation can deliver super-human performance or remove substantial costs. However, automating a complex human task can be very difficult and expensive, particularly if the task is embedded in a complex social or legal system. Furthermore, framing a project around automation encourages teams to mimic human processes, which can prove difficult given the distinct strengths and weaknesses of humans versus algorithms.
Pros
- May drive substantial improvements or cost savings
- Consistent performance without the variability intrinsic to human decisions
- Frees up human resources for higher-value, more strategic activities
Cons
- Automating complex tasks can be resource-intensive, and thus low ROI
- Ethical considerations around job displacement and accountability
- Difficult to maintain and update as conditions evolve
6. The decision supporter
Prototypical scenario
An end user opens Google Maps and types in a destination. Google Maps presents multiple possible routes, each optimized for different criteria like travel time, avoiding highways, or using public transit. The user reviews these options and selects the one that best aligns with their preferences before driving along their chosen route.
Who initiates
The data science team often recognizes an opportunity to assist decision-makers by distilling a large space of possible actions into a small set of high-quality options, each optimizing for a different outcome (e.g., shortest route vs. fastest route). One way to construct such a set of options is sketched below.
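One common way to produce such a short list, shown here as a minimal sketch with hypothetical route data, is to keep only the Pareto-optimal candidates, i.e., options that no other candidate beats on every criterion:

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: float      # travel time; lower is better
    highway_km: float   # highway distance; lower is better for highway-averse users

def dominated(a: Route, b: Route) -> bool:
    """True if b is at least as good as a on both criteria and strictly better on one."""
    return (b.minutes <= a.minutes and b.highway_km <= a.highway_km
            and (b.minutes < a.minutes or b.highway_km < a.highway_km))

# Hypothetical candidate routes.
candidates = [
    Route("via I-90", 32, 28.0),
    Route("via Route 2", 41, 3.5),
    Route("back roads", 55, 0.0),
    Route("via I-90 + detour", 38, 28.0),  # dominated by "via I-90"
]

# Keep only options that no other candidate dominates.
options = [r for r in candidates if not any(dominated(r, other) for other in candidates)]
for r in options:
    print(f"{r.name}: {r.minutes:g} min, {r.highway_km:g} km on highways")
```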
How the DS team is judged
The Decision Supporter is evaluated on whether their system helps users select good options and then experience the promised outcomes (e.g., did the trip take the expected time, and did the user avoid highways as promised?).
Common pitfalls to avoid
Decision support systems capitalize on the respective strengths of humans and algorithms, and their success depends on how well the humans and algorithms collaborate. If the human doesn’t want or trust the input of the algorithmic system, then this type of project is far less likely to drive impact.
Pros
- Capitalizes on the strengths of machines to make accurate predictions at large scale, and the strengths of humans to make strategic trade-offs
- Engaging the data science team in the project’s inception and framing increases the likelihood that the project will produce an innovative and strategically differentiating capability for the company
- Provides transparency into the decision-making process
Cons
- Requires significant effort to model and quantify various trade-offs
- Users may struggle to grasp or weigh the presented trade-offs
- Complex to validate that predicted outcomes match actual results
A portfolio of projects
Under- or overutilizing particular models can prove detrimental to a team’s long-term success. For instance, we’ve observed teams avoid BI projects and then suffer from a lack of alignment about how goals are quantified. Similarly, teams that avoid Analyst projects may struggle because they lack critical domain expertise.
Even more often, we’ve observed teams overutilize a subset of models and become entrapped by them. This process is illustrated by a case study that we experienced:
A new data science team was created to partner with an existing operational team. The operational team was excited to become “data driven,” so they submitted many requests for data and analysis. To keep their heads above water, the data science team overutilized the BI and Analyst models. This reinforced the operational team’s tacit belief that the data team existed to service their requests.
Eventually, the data science team became frustrated with their inability to drive innovation or directly quantify their impact. They fought to secure the time and space to build an innovative Decision Support system. But after it launched, the operational team chose not to use it at a high rate.
The data science team had trained their cross-functional partners to view them as a supporting org, rather than as joint owners of decisions. So their new project felt like an “armchair quarterback”: it expressed strong opinions without sharing ownership of execution or outcomes.
Overreliance on the BI and Analyst models had entrapped the team. Launching the new Decision Support system proved a time-consuming and frustrating process for all parties. A top-down mandate was eventually required to drive enough adoption to evaluate the system. It worked!
In hindsight, adopting a broader portfolio of project types earlier could have prevented this situation. For instance, instead of culminating with an insight, some Analysis projects should have generated strong Recommendations about particular actions. And the data science team should have partnered with the operational team to see this work all the way through execution to final assessment.
Conclusion
Data science leaders should intentionally adopt an organizational model for each project based on its goals, constraints, and the surrounding organizational dynamics. Furthermore, they should be mindful to build self-reinforcing portfolios of different project types.
To select a model for a project, consider:
- The nature of the problems you’re solving: Are the motivating questions exploratory or well-defined?
- Desired outcomes: Are you seeking incremental improvements or innovative breakthroughs?
- Organizational hunger: How much support will the project receive from relevant operating teams?
- Your team’s skills and interests: How strong are your team’s communication vs. production coding skills?
- Available resources: Do you have the bandwidth to maintain and extend a system in perpetuity?
- Are you ready: Does your team have the expertise and relationships to make a particular type of project successful?