Segmentation at Scale to Enable Better Health Outcomes


An illustrative example of the segmentation of a member’s journey

At League it’s our mission to empower people to live happier, healthier lives. To best serve our members, it’s imperative that we can predict their individual level of engagement. For instance, if we notice that a segment of members is unlikely to return to the app in the following week, we can send them notifications to nudge them to return. In doing so, we help promote healthy habits like being active and eating well. We call our engagement prediction and segmentation service “Retention Engine”.

So how does all of this work?

We use a Recency, Frequency, Monetary Value (RFM) style model alongside a Logistic Regression classifier to calculate the probability that a member will return in a given week. (For our purposes we’re only interested in the engagement components of the model, so we omit the monetary value feature.) We break these probabilities into segments using business logic to determine the segment a member is predicted to belong to in a given week. After these probability cutoffs are applied, we group members into 4 model-based segments: unengaged, at-risk, engaged, and loyal. In the following week, unengaged members are unlikely to return to the platform, whereas at-risk members require immediate action to retain. The engaged and loyal members have a high predicted probability of returning. We segment the remaining unclassified members using simple rules-based logic.
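As a rough illustration of this approach, the sketch below fits a logistic regression on RFM-style features and maps predicted return probabilities to segments. The feature names, synthetic data, and probability cutoffs are all assumptions for illustration, not League’s production values:

```python
# Illustrative sketch (not production code): logistic regression over
# RFM-style engagement features, with business-logic probability cutoffs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Per-member features: recency (weeks since last login), frequency
# (weeks with a login in the window), t (weeks since the member entered
# the data window). The monetary value feature is omitted.
X = np.column_stack([
    rng.integers(0, 12, n),   # recency
    rng.integers(0, 12, n),   # frequency
    rng.integers(1, 12, n),   # t
])
y = rng.integers(0, 2, n)     # 1 = member returned the following week

model = LogisticRegression().fit(X, y)
p_return = model.predict_proba(X)[:, 1]

# Hypothetical cutoffs on the predicted probability of returning.
cutoffs = [(0.25, "unengaged"), (0.5, "at-risk"), (0.75, "engaged")]

def segment(p):
    for cutoff, label in cutoffs:
        if p < cutoff:
            return label
    return "loyal"

segments = [segment(p) for p in p_return]
```

In practice the cutoffs would be chosen with the downstream business stakeholders rather than fixed quartile-style thresholds.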

League is a strategic partner of Google Cloud (GCP). For this project we wanted to leverage a number of the services that GCP provides:

  • Structured data is managed with BigQuery which provides low latency big data processing at scale.
  • Python functions are executed using Cloud Run.
  • Models are saved and loaded from Cloud Storage on a weekly basis.
  • Artifact Registry is used to administer image versions.
  • Processes are scheduled using Cloud Composer.
  • Looker is used to visualise segment counts every day.
  • Salesforce Marketing Cloud is linked to BigQuery to customize messaging to the individual member.
  • Finally, as League is a FHIR-native healthcare company, we write the results to the Cloud Healthcare API as FHIR Observations.
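To make the last point concrete, here is a hypothetical sketch of what a weekly segment could look like as a FHIR R4 Observation. The coding, identifiers, and field choices are assumptions (the post does not specify League’s actual resource shape); the Cloud Healthcare API accepts such resources as JSON:

```python
# Hypothetical FHIR R4 Observation carrying a member's weekly segment.
# Identifiers and the code text are made up for illustration.
import json

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"text": "retention-engine-segment"},       # assumed local code
    "subject": {"reference": "Patient/example-member"},  # assumed member id
    "effectiveDateTime": "2023-01-02T00:00:00Z",
    "valueString": "at-risk",  # the predicted segment for the week
}

payload = json.dumps(observation)
```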

The granularity of the data plays a big role in the performance and complexity of a model. To simplify the inputs for those consuming the results downstream, we use weekly engagement data. If the member logged in within a given week then they receive a value of 1, if not then 0. The outputs of the RFM model, recency, frequency, and t (time since the member entered the data window), are also of interest for further segmentation.
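A minimal sketch of deriving those three quantities from weekly binary engagement data, with the exact definitions assumed for illustration:

```python
# Derive RFM-style features from weekly 0/1 login flags (oldest first).
# The precise feature definitions here are assumptions.
def rfm_features(weekly_logins):
    t = len(weekly_logins)              # weeks since entering the window
    frequency = sum(weekly_logins)      # number of weeks with a login
    if frequency == 0:
        recency = t                     # never logged in: maximal recency
    else:
        # weeks elapsed since the most recent login
        recency = t - 1 - max(i for i, v in enumerate(weekly_logins) if v)
    return {"recency": recency, "frequency": frequency, "t": t}
```

For example, a member active in the first two of four weeks, `rfm_features([1, 1, 0, 0])`, gets recency 2, frequency 2, and t 4.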

Another consideration is the length of data to make predictions with, also known as the data window. This period is set using business context and performance requirements. From a performance perspective, the data window can be treated as a hyperparameter and is relatively easy to select for. From the business perspective, it is the stakeholders, the downstream users of the segments and associated outputs, who have to be consulted to make sure that the window makes sense. For instance, we may work with a client whose member base experiences high churn, so a longer data window may not be suitable for messaging.
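Treating the window as a hyperparameter can be as simple as sweeping candidate lengths and keeping the one with the best validation score. The window lengths and AUC values below are hypothetical stand-ins; in practice the scoring function would train the RFM and logistic regression model on features built from each window:

```python
# Sketch: pick the data window length (in weeks) by validation score.
def select_window(candidate_weeks, score_fn):
    """Return the window length with the best validation score."""
    return max(candidate_weeks, key=score_fn)

# Hypothetical validation ROC AUCs per candidate window length.
auc_by_window = {4: 0.61, 8: 0.68, 12: 0.71, 26: 0.66}
best = select_window(auc_by_window, auc_by_window.get)  # -> 12
```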

Transparent performance evaluation is essential to getting stakeholders to buy into the results of a model. This can be achieved by considering the baseline performance context in which the downstream consumers of the model previously had to operate. If the retention model can beat out comparator baselines, it becomes clear to stakeholders that they can trust the results and apply them on a machine-to-machine basis at scale.

For our segmentation model we compare its performance to that of two separate baselines. The first baseline (Majority Classifier) is the most naive, as it assumes that the majority action of members from the last week will be the action all members take in the following week. So if most members logged in within the last week, it will predict that all members will log in in the following week. The second baseline is more tailored (Last Label Classifier). In this case it predicts, on a per-member basis, that last week’s action is the action the member will take in the following week.

For the area under the receiver operating characteristic curve (ROC AUC), a higher score is better and the score ranges over [0, 1]. Because the Majority Classifier is a no-skill classifier that predicts the same value for all members, it receives a score of 0.5 each week. Comparatively, the performance of the Last Label Classifier is higher as it is personalized at the member level. The Retention Engine, using the RFM and Logistic Regression combination, performs the best. Knowing a member’s previous engagement is quite indicative of their subsequent engagement, as the performance curves for the Retention Engine and the Last Label Classifier mirror one another.
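The two baselines and the AUC comparison can be sketched as follows. The data is synthetic, generated so that members tend to repeat last week’s behaviour, which is the pattern described above:

```python
# Synthetic comparison of the Majority and Last Label baselines.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 200
last_week = rng.integers(0, 2, n)  # 1 = member logged in last week
# Ground truth: 80% of members repeat last week's action.
this_week = np.where(rng.random(n) < 0.8, last_week, 1 - last_week)

# Majority Classifier: everyone gets last week's most common action.
# A constant prediction is no-skill, so its ROC AUC is exactly 0.5.
majority = np.full(n, int(last_week.mean() >= 0.5))

# Last Label Classifier: each member repeats their own last action.
last_label = last_week

auc_majority = roc_auc_score(this_week, majority)
auc_last_label = roc_auc_score(this_week, last_label)
```

With behaviour this sticky, the Last Label Classifier scores well above the no-skill 0.5, matching the ordering reported for the real data.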

During development we encountered a few challenges that ultimately improved the results of our model. In previous iterations we had tightly constrained the data window. This had the advantage of restricting our predictions to members that our capability teams thought would be most actionable. However, the constrained window resulted in convergence errors, as our model had too few distinct data points to fit with.

Increasing the length of the data window ameliorated this issue but resulted in the model predicting on more members than we would actually want to take action on. Working with our marketing team we were able to create a fourth segment, unengaged, that could be classified but restricted from messaging. This also reduced the amount of data that we needed to write to the Cloud Healthcare API, improving our write times, which had been considerable. With these tweaks we were able to increase the performance and efficiency of the model and process without compromising its business application. In the figure below we provide an example of what a member journey might look like through Retention Engine:

The member begins their journey on day 1 and is active for the first 2 days (black stars). They are labeled as new for the first 2 weeks. After 2 weeks the model has enough data to make a prediction and classifies them as at-risk, as the member only logged in within the first week of the period (during the first 2 days). In the week they are classified as at-risk the member does not log in, so in the following week they are classified as unengaged. After 3 weeks as unengaged and without logins, the member falls out of the model and is marked dormant. After 90 days without a login the member moves from dormant to lapsed. On the 99th day the member returns and is marked reactivated. In the following week the member is picked back up by the model and classified as at-risk. Over the following several weeks the member continues to log back in and moves from engaged to loyal.
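The rules-based lifecycle states in this journey (the ones outside the model’s four segments) can be sketched as a small function. The thresholds follow the narrative above, 2 weeks of history before the model predicts, 3 unengaged weeks to dormant, 90 days without a login to lapsed, but the function’s shape and inputs are hypothetical:

```python
# Sketch of the rules-based lifecycle states for members the model does
# not classify. days_since_last_login is measured before this week's
# activity, so a login after a 90+ day gap reads as a reactivation.
def rules_based_state(weeks_of_history, unengaged_weeks,
                      days_since_last_login, logged_in_this_week):
    if weeks_of_history < 2:
        return "new"              # not enough data for the model yet
    if logged_in_this_week and days_since_last_login > 90:
        return "reactivated"      # returned after a long absence
    if days_since_last_login > 90:
        return "lapsed"
    if unengaged_weeks >= 3:
        return "dormant"          # falls out of the model
    return None                   # handled by the model's segments
```

For the journey above: the member is new in weeks 1–2, dormant after 3 unengaged weeks, lapsed past day 90, and reactivated on their day-99 return.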

Building this service took input from Data Scientists, Analysts, Engineers, Marketing, and Clients. This holistic approach allowed us to overcome several obstacles and produce a performant model to better understand our members. Our Retention Engine is just one of the exciting ways we’re working to drive healthier outcomes as our platform grows!
