A Product Data Scientist’s Take on LinkedIn Games After 500 Days of Play


I’m on a 500-day streak on LinkedIn Games. Yes, LinkedIn also has games, and they’ve been around for over a year. Every now and then, I notice new games, design tweaks, and new features being rolled out. As a Data Scientist, I have always wondered what LinkedIn is trying to achieve with LinkedIn Games and how they’re testing the changes.

With AI augmenting or even automating many coding and basic analytics tasks, product sense and domain expertise become more and more important for data scientists. Therefore, in this article, I’m using LinkedIn Games as an example to show how a Product Data Scientist thinks and works. This is also the type of mental exercise I practice when preparing for product case interviews.


I. What is the goal of LinkedIn Games?

The first step in any product case is to understand the product’s goal. Per LinkedIn, “Games on LinkedIn are thinking-oriented games to help you sharpen your mind, take a quick break, and have the chance to connect with each other and spark conversations.”

These games are quick brain teasers, so they do help users “sharpen your mind” and “take a quick break” to some extent. But I believe the true intention hides behind the last part: “connect with each other and spark conversations.”

Why does this matter? LinkedIn generates most of its revenue from talent solutions, its advertising platform, and premium subscriptions. All of these depend on an active user base: recruiters need a large pool of active candidates, advertisers need targeted impressions, and the value of premium subscriptions increases with the size of the network. Moreover, the key to maintaining an active user base is user engagement and interaction, which ultimately leads to higher retention.

In data language, MAU (monthly active users) is one of the most common metrics for measuring a product’s active user base. MAU in month X = active users in month X-1 + users acquired/resurrected in month X - users churned in month X. For LinkedIn, I believe LinkedIn Games is a feature designed to grow MAU by reducing the last component, “users churned this month.”
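With made-up numbers, the decomposition looks like this:

```python
# Hypothetical numbers to illustrate the MAU decomposition:
# MAU(X) = MAU(X-1) + acquired/resurrected(X) - churned(X)
prev_mau = 1_000_000               # active users last month
acquired_or_resurrected = 80_000   # new + returning users this month
churned = 50_000                   # users who went inactive this month

mau = prev_mau + acquired_or_resurrected - churned
print(mau)  # 1030000
```

A feature like Games moves the needle on the last term: every churned user it prevents adds one to next month’s MAU.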

II. How does LinkedIn Games achieve this goal?

Now that we’re clear about the goal of improving retention, the next question is: how does LinkedIn Games achieve it? I think there are two mechanisms: direct interactions with LinkedIn Games, and indirect engagement driven by coming back to the platform.

1. Direct interactions with LinkedIn Games

Every day, LinkedIn publishes a game post and encourages users to share their scores and tips. This is exactly what they meant by “help you … connect with each other and spark conversations.” Below is a screenshot I took at around 10 PM Pacific Time on 11/29, around 22 hours after the daily Zip game was refreshed. You can find the entry point to this post after finishing a game, or it might show up on your homepage. This post had 1,240 reactions and 1,370 comments. Many users post their scores and interact with one another.

This kind of social interaction is valued by many LinkedIn users. Sharing a good game score is like sharing a small achievement, so it doesn’t work against LinkedIn’s image as a professional social network. As a result, LinkedIn Games creates a network effect that increases retention.

Zip game post by LinkedIn (screenshots by the author)

2. Indirect engagement from returning to the platform

Meanwhile, there are certainly people like me who are simply hooked on the games but never share scores or comment on the post. The lack of interaction doesn’t mean LinkedIn Games fails to achieve its retention goal for this group of users. The fact that it brings users back every day is already a powerful retention lever.

LinkedIn Games achieves this by creating a habit loop. Let me use the Hooked Model (Trigger -> Action -> Reward -> Investment) to unpack it:

  1. Trigger: Users are prompted to return by external triggers like push notifications and homepage modules, and internal triggers such as the desire to maintain a streak.
  2. Action: The puzzles are easy to understand, low-friction to enter, and quick to play.
  3. Reward: Users get a different puzzle every day, earn streak badges, and can compete with their connections.
  4. Investment: Users “invest” by building a streak, getting connections to play, sharing results, improving their leaderboard rank, etc. Every day, users build up sunk effort, making it harder to stop.

With this habit loop, users come back daily. As long as a user opens LinkedIn, there is also a chance they will try other things: network updates, messages, job openings, etc. These actions could all lead to meaningful engagement outside of the Games feature and increase overall retention.

III. Experimentation on LinkedIn Games

We covered the goal of LinkedIn Games and the mechanisms behind it: LinkedIn Games aims to improve user retention by encouraging interaction on Games content and increasing overall product engagement. As a data scientist, if you work on this product, a key part of your job will be collaborating with Product Managers, Designers, and Engineers to brainstorm initiatives and run experiments to measure the retention impact. And this is clearly happening with LinkedIn Games, as I have noticed many design changes over time. Let me walk through some examples and discuss how data scientists would be involved.

1. Entry points to LinkedIn Games

Right now, you can access LinkedIn Games through:

  • The Games hub
  • Searching for games in the LinkedIn search bar
  • The My Network page
  • The Today’s Games section under LinkedIn News on your desktop homepage, or the side panel in the LinkedIn mobile app
  • Notifications

But this hasn’t always been the case. I remember one day the entry point on the My Network page disappeared, and I had to search in the app to find the games. A few days later, it appeared again. The placement of entry points determines how easy it is to find the feature, for both new and returning users. But more entry points aren’t always better. While more entry points increase visibility, each of them can create a contextual bias: users who land on My Network might behave differently than those who come through a notification, so different entry points have different impacts on engagement and retention. In other words, they might cannibalize one another.

For example, the My Network entry point sits below the invitations and above the connection recommendations. When a user visits this page to play the daily game, they will inevitably see their pending invitations, which reminds them to take action; expanding connections is a critical part of making a user’s LinkedIn experience meaningful and valuable. Meanwhile, if they go to their homepage for the games, they will instead see other users’ posts and are more likely to interact with those.

LinkedIn Games entry point on the My Network page (screenshots by the author)
LinkedIn Games entry point on the homepage (screenshots by the author)

Different types of interactions have different impacts on retention, and it is hard to estimate the exact impact of removing or adding an entry point without running an experiment. Now the task falls on the data scientists to design that experiment.

Here is how it could look:

  • Experiment design: control = current design; treatment = removing the entry point on My Network
  • Randomization unit: user-level A/B testing; 50% of users randomly see the control vs. the treatment design
  • Primary metric: 7-day retention rate. The time window can vary based on how quickly we want to measure the retention impact and any past learnings. One caveat is that retention is a lagging metric, and LinkedIn Games might have relatively low traffic compared to the rest of the platform, which makes it hard to detect a retention impact in the short term. In that case, the primary metric might need to shift to a leading indicator of retention, or data scientists might need to rely on causal inference techniques to estimate the retention lift more reliably.
  • Secondary metrics: % of users who played a game; % of users who interacted with network posts; % of users who added connections; average sessions per user; average time spent per user
  • Guardrail metric: average app/website performance

Data scientists will work with the cross-functional team to align on metrics based on the goal of the experiment, run power analysis to determine the experiment’s duration and scope, conduct implementation checks, and finally analyze the results to make a decision on the best combination of entry points.
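The power analysis step can be sketched with the standard two-proportion sample-size formula. The baseline retention rate and target lift below are made-up numbers purely for illustration:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_arm(p1: float, p2: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Users needed per arm to detect a move in a proportion metric
    (e.g., 7-day retention) from p1 to p2 with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for significance
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a lift in 7-day retention from 30% to 31%
# needs roughly 33,000 users per arm
n = sample_size_per_arm(0.30, 0.31)
print(n)
```

Dividing the required sample size by the feature’s expected daily traffic gives the experiment duration, which is exactly why a low-traffic surface may force the team toward a leading indicator instead of retention itself.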

2. Notifications

Several months back, I started receiving reminder notifications like “You’re on a xxx-day streak. Play xxx now to keep it going.” Later, after finishing the games, there was another set of notifications saying “Congrats on finishing xxx.”

LinkedIn Games notification (screenshot by the author)

Notifications can be annoying, but they are very effective at bringing users back. For example, Duolingo is famous for its creative and “psychologically manipulative” notifications (I’m on a 1,735-day Duolingo streak, by the way). An early blog post described how Duolingo used multi-armed bandits to find the best-performing notification.

Similarly, optimizing notifications can have a big impact on LinkedIn Games. Data scientists can run experiments to test:

  1. When to send the reminder notification. It could be during the lunch break or after work, when users are more likely to be available, or at the time the user normally opens the app, or even the time they played the game yesterday.
  2. When to send the congrats notification. The congrats notification could be used to bring a user back to the app and encourage them to post their results and interact with other players. Similarly, we can test sending it right after finishing the game, or later in the day when more users have played.
  3. The message text. Should the tone be neutral or more aggressive? How long should it be?
  4. The CTA (call to action) text. “Solve now”? “Play now”? “Extend your streak”? Different button text could lead to different click-through rates.
  5. Frequency. If a user doesn’t come back to play the game after the first notification, should we send another reminder?

This is just a short list off the top of my head, but it already yields a lot of different combinations of notification designs. It is entirely possible that text A coupled with timing X beats text B coupled with timing Y. Therefore, running experiments for each decision one at a time is both inefficient and likely to end in a sub-optimal result. That is why Duolingo used the multi-armed bandit framework mentioned above. It is a framework for testing multiple variations simultaneously; unlike traditional A/B tests, it speeds up experimentation by automatically diverting more traffic to the winning arms based on a reward function and quickly pruning the number of arms in the test. Therefore, the multi-armed bandit could be very useful for testing LinkedIn Games notifications. If you want to learn more, there is another article by Stitch Fix on how they use multi-armed bandits in their experimentation platform.

So what is the data scientist’s role here? As before, they will brainstorm with stakeholders to come up with the variations, define the reward functions (e.g., whether a user plays today’s puzzle), run the multi-armed bandit setup, and interpret the results.
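To make this concrete, here is a minimal Thompson-sampling sketch of a bandit over CTA variants. The variant names come from the list above, but the conversion rates are invented for illustration, and the reward is simply whether the user plays today’s puzzle after receiving the notification:

```python
import random

# Three hypothetical CTA variants with made-up "true" conversion rates
variants = ["Solve now", "Play now", "Extend your streak"]
true_rates = {"Solve now": 0.10, "Play now": 0.12, "Extend your streak": 0.15}

# Beta(1, 1) prior per arm, tracked as observed successes and failures
wins = {v: 0 for v in variants}
losses = {v: 0 for v in variants}

random.seed(42)
for _ in range(20_000):
    # Sample a plausible conversion rate for each arm from its posterior,
    # then send the notification whose sample is highest
    sampled = {v: random.betavariate(wins[v] + 1, losses[v] + 1) for v in variants}
    arm = max(sampled, key=sampled.get)
    # Simulate the reward: did the user play today's puzzle?
    if random.random() < true_rates[arm]:
        wins[arm] += 1
    else:
        losses[arm] += 1

sent = {v: wins[v] + losses[v] for v in variants}
# Traffic concentrates on the best-performing arm over time
print(sent)
```

The key property this illustrates is the automatic traffic shift: the weaker arms are sent less and less often as evidence accumulates, instead of holding fixed 33% splits for the full duration of a classic A/B test.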

3. Game results page 

Another area where I have noticed many changes is the game results page. After finishing the game, the user first lands on a results summary with attractive stats cards like “On fire 500-day win streak!”, “Top 95% All players”, and “Smarter than 90% of CEOs”. It also has a prominent “Share” button that prompts you to share your results as a post or as a direct message to your connections.

After that, there is a longer results page with seven major sections:

  1. Header: how quickly the user solved the puzzle, with copy and share CTAs.
  2. Connection leaderboard: where you rank among your connections. If you click “see full leaderboard”, there are CTAs to nudge connections who haven’t played today.
  3. A “play another game” CTA that invites you to explore different games.
  4. Another summary panel with more stats, including all-time win rate, best score, streak badges, and a push notification toggle.
  5. Weekly industry and school leaderboards with share options.
  6. A link to the daily game post, where you can react or comment directly.
  7. “Unlock this week’s bonus puzzle” by inviting your connections to play the game.
Game results page (screenshots by the author)

Do you see the pattern? Every section has CTAs to encourage sharing, engagement, or social interaction. However, is the current sequence of cards the best one for the retention outcome? Is there a better UI for the results stat cards, one with a higher share rate? Do people even care about the rankings of their employer and school?

To answer these questions, a data scientist would design experiments similar to the one we discussed in the entry points section to measure click-through rates, interactions, and the overall retention impact.

To take it one step further, different users might have different preferences. For example,

  • User A wants to “show off” their score and how much smarter they are than the CEOs, so the current sequence works perfectly for them, as they can do it right on the first screen.
  • User B feels a strong sense of belonging to their community, so they might share the leaderboard, asking coworkers or classmates to join the game and improve their company’s or school’s ranking. Therefore, displaying the leaderboards on top would improve their engagement.
  • User C likes to share their puzzle tips and talk with other players, so we should move the post up on the results page.
  • User D simply enjoys the game, and they would invite others to unlock more puzzles if offered this option. But with the current design, they might never scroll all the way down and would miss the “unlock this week’s bonus puzzle” card.

This kind of personalized results page makes a lot of sense in theory, but how to make it work is another complicated data science question. Data scientists could segment users based on user profiles and past activities: for example, how many connections they have, how many of their connections have played the games, whether the user often posts or comments, etc. Then they could analyze the experiment results by user segment to identify which design works best for each segment and come up with a personalization strategy. To make the system even smarter, data scientists could build a machine learning model to predict the card layout that maximizes engagement for each user.
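Here is a minimal sketch of that segment-level read. Every number below is fabricated, and “poster” vs. “lurker” stand in for profile-based segments; the point is just how a treatment lift can exist in one segment but not another:

```python
from collections import defaultdict

# Fabricated experiment data: (segment, variant, did the user share their result?)
rows = [
    ("poster", "control", 0), ("poster", "treatment", 1),
    ("poster", "control", 1), ("poster", "treatment", 1),
    ("lurker", "control", 0), ("lurker", "treatment", 0),
    ("lurker", "control", 0), ("lurker", "treatment", 0),
]

totals = defaultdict(lambda: [0, 0])  # (segment, variant) -> [shares, users]
for segment, variant, shared in rows:
    totals[(segment, variant)][0] += shared
    totals[(segment, variant)][1] += 1

share_rate = {key: shares / users for key, (shares, users) in totals.items()}

# The personalization question: does the treatment lift differ by segment?
lift = {
    seg: share_rate[(seg, "treatment")] - share_rate[(seg, "control")]
    for seg in ("poster", "lurker")
}
print(lift)  # posters respond to the new design; lurkers don't
```

In practice the segments would come from profile and activity features rather than labels like these, and the per-segment lifts would feed the personalization strategy (or the training data for a layout-ranking model).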

4. Nuance: Network effect

Last but not least, let me talk about an important nuance of running experiments on a social platform like LinkedIn: the network effect. A/B testing relies on the Stable Unit Treatment Value Assumption (SUTVA), which assumes an individual user’s outcome is determined only by the treatment they receive and is not affected by the treatments of other users. However, this doesn’t always hold on social networks.

Consider the LinkedIn Games example: assume we changed the leaderboard UI and, as a result, users in the treatment group have a higher chance of “nudging” their connections. Many “nudged” users, some of them in the control group, end up playing the game too. This network effect biases the experiment result and dilutes the measured difference between treatment and control. LinkedIn has written about this exact challenge and walked through how they detected this interference using cluster-based experiments. In short, LinkedIn groups closely connected users into one cluster while minimizing interactions between clusters, then uses clusters as the randomization unit (users in the same cluster either all go to treatment or all go to control) to measure the impact with minimal interference. Therefore, for changes that could have a strong network effect, the cluster-based experiment is a good alternative. The trade-off is that cluster-based experiments usually reduce statistical power, so they require careful cluster construction and power analysis.
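A toy sketch of cluster-level assignment follows. The clusters here are hard-coded, whereas LinkedIn constructs them from the real connection graph (e.g., via graph partitioning); the point is only that the coin flip happens per cluster, not per user:

```python
import random

# Hypothetical clusters of closely connected users
clusters = {
    "c1": ["u1", "u2", "u3"],
    "c2": ["u4", "u5"],
    "c3": ["u6", "u7", "u8"],
    "c4": ["u9", "u10"],
}

# Randomize at the cluster level: each cluster is one experimental unit
rng = random.Random(7)
assignment = {cid: rng.choice(["treatment", "control"]) for cid in sorted(clusters)}

# Every user inherits their cluster's arm, so connected users share a treatment
# and nudges mostly stay within the same arm
user_arm = {u: assignment[cid] for cid, users in clusters.items() for u in users}
print(user_arm["u1"] == user_arm["u2"] == user_arm["u3"])  # True: same cluster
```

With four clusters the effective sample size is 4, not 10, which is exactly the power loss mentioned above: fewer, coarser randomization units mean wider confidence intervals for the same number of users.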


LinkedIn Games seems simple, but behind every button, notification, streak badge, and leaderboard, there is likely a chain of product hypotheses, experiments, decisions, and data science work.

Of course, this is just my own brain exercise as a LinkedIn user, but I hope this article helps you better understand what Product Data Science looks like in practice. When preparing for product case interviews, or even when you notice a new feature on a product you use regularly, you can apply similar mental exercises to sharpen your product sense and become a stronger data scientist.
