Batched Bandit Problems

Photo by Erik Mclean on Unsplash

Experiments are vital to any business operation. While A/B tests are the de facto standard, you would be surprised to find how many practitioners are not running proper randomized experiments. A more likely scenario involves seasoned experimenters using their judgement of when to pull a treatment, overriding the sacred power analysis that would allow for proper statistical inference. At its heart, though, this addresses the problem of regret that so many randomized experiments struggle with.

The savvy reader has noticed I teed up a perfect intro to dive right into a conversation about bandits, but this conversation is slightly different from other articles. Here we focus on a class of bandit problems that receives little attention outside of academia: batched bandit problems. In the standard bandit scenario, an agent executes an action and immediately observes a reward. This paradigm is often sufficient in the world of digital marketing and website optimization, but what about when rewards are not instantaneous?

Take this example: you are running an email campaign, but nobody in the marketing department can decide which of five pieces of copy will be the most effective. Your key KPI is click-through rate (CTR), counted if the user clicks through the ad at any point during the five days after receiving the email. For now, assume all users respond equivalently. The window to run the campaign is small. Given this information, how can you quickly determine which creative works best and maximize the CTR over the course of the campaign?

This article is broken up into a few sections. First we discuss the concept of the grid, the configuration of how many subjects receive the treatment during each batch. We then look at how to cast an epsilon-greedy bandit algorithm as a batched bandit problem, and extend that idea to any bandit algorithm. Next we study Batched Successive Elimination (BaSE) and its similarity to the randomized experiment (A/B test). Finally, we look at some experiments studying how the number of batches and the number of arms influence regret, and see how these algorithms compare to a randomized experiment with the same grid configurations. A discussion of the experimental results follows, along with general guidelines drawn from the simulations.

Suppose we’ve got a set of Actions A each corresponding to an arm, each with it’s own associated reward R. In our case, R can be a Bernoulli distributed random variable with a real mean equal to the CTR for our ad copy. In code this looks like:

import numpy as np

class Arm:
    def __init__(self, mean):
        self.mean = mean  # true CTR of this arm

    def sample(self):
        # draw a Bernoulli reward: 1 with probability `mean`, else 0
        return np.random.binomial(1, self.mean)
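For the email campaign above, each piece of copy becomes an arm. The means below are made up purely for illustration; in practice they are unknown and are exactly what the bandit has to learn:

    # five hypothetical copy variants with made-up CTRs (illustration only)
    arms = [Arm(0.050), Arm(0.055), Arm(0.060), Arm(0.048), Arm(0.052)]
    print(arms[2].sample())  # simulate one user receiving copy #3: returns 1 (click) or 0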

The goal of the multi-armed bandit problem is, given the set A, to learn which arm has the maximum payoff, i.e., the highest true mean reward. We define a policy as the function that tells us which arm is best to pull. The catch is that while we are learning, or exploring the action space, we are playing sub-optimal actions. Therefore, the point of bandit algorithms is to balance exploring the possible actions with exploiting the actions that appear promising.

This article assumes readers are familiar with the multi-armed bandit problem and the epsilon-greedy approach to the explore-exploit tradeoff. For those who are not, this article gives a surface-level overview; for a comprehensive treatment, I recommend Chapter 2 of Sutton and Barto [1].

As alluded to above, in the batched bandit problem we don't receive instantaneous rewards. Therefore we must be strategic about how we select actions and update our agent's policy. This introduces the concept of the grid: how many users to sample in each batch so that the agent is able to learn optimally. Perchet et al. [2] introduced the batched bandit problem, along with the grid, in their paper. To formalize the grid, I use the notation from Gao et al. [3].

The first grid is the arithmetic grid, which is quite trivial: it evenly divides the time horizon T into M equal batches. When M = T, this is equivalent to the traditional bandit problem with immediate rewards.

The second grid is the minimax grid, which aims to minimize the maximum regret. The equation for the i-th term is:
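As a sketch of the recursion in [2] (treat the exact rounding and constants here as my own reading rather than a verbatim reproduction): t_1 = floor(a) and t_i = floor(a * sqrt(t_{i-1})), with a chosen so that t_M = T. Unwinding the recursion gives t_i ≈ a^(2 − 2^(1−i)) and a ≈ T^(1/(2 − 2^(1−M))).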

The third grid is the geometric grid, which optimizes the regret with respect to a maximum regret bound. The equation for the i-th term is:
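Again as a sketch of [2] and [3] (rounding details are my own assumption): t_i = floor(b^i) with b = T^(1/M), so each batch endpoint is roughly a factor of T^(1/M) larger than the previous one and the final batch ends at T.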

For more information about the intuition behind the origin of these grids, I recommend the discussion in [2].
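To make the grids concrete, here is a minimal sketch of how they could be constructed in code. The helper names (arithmetic_grid, minimax_grid, geometric_grid) and the rounding and clipping choices are my own, not the repo's API; each function returns the cumulative batch endpoints t_1, ..., t_M with t_M = T:

    import numpy as np

    def arithmetic_grid(T, M):
        # t_i = i * T / M: the horizon split into M equal batches
        return [round(T * (i + 1) / M) for i in range(M)]

    def geometric_grid(T, M):
        # t_i ~ floor(b^i) with b = T^(1/M): batch endpoints grow geometrically
        b = T ** (1 / M)
        grid = [min(T, int(np.floor(b ** (i + 1)))) for i in range(M)]
        grid[-1] = T  # force the final batch to end exactly at the horizon
        return grid

    def minimax_grid(T, M):
        # t_1 = a, t_i = floor(a * sqrt(t_{i-1})), with a = T^(1 / (2 - 2^(1 - M)))
        a = T ** (1 / (2 - 2 ** (1 - M)))
        grid = [int(np.floor(a))]
        for _ in range(M - 1):
            grid.append(min(T, int(np.floor(a * np.sqrt(grid[-1])))))
        grid[-1] = T
        return grid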

The epsilon-greedy algorithm is the usual introduction to the explore-exploit tradeoff fundamental to reinforcement learning. For that reason, I'm opting to start by converting the epsilon-greedy algorithm to the batched framework, and then show how this can be extended to any bandit algorithm.

The only difference between the batched bandit problem and the regular bandit problem is when the agent is allowed to update its policy. A typical bandit algorithm might look something like this:

def eps_greedy_bandit():
    """ Not real code! Repo link below """
    for _ in range(num_steps):
        a = agent.get_eps_greedy_action()
        r = agent.take_action(a)
        agent.update(a, r)

However, in the batched bandit we don't get rewards in real time and must wait until the end of the batch to update the agent's policy. One key point is that the experiment ends with the last batch. Exploring on the last batch provides no utility, since there are no future batches to benefit from it, so we opt to go fully greedy on the last batch. Here's an example of what this might look like as an adaptation of the code above:

def eps_greedy_bandit(grid):
    """ Not real code! Repo link below """
    for batch_size in grid[:-1]:
        a_hist, r_hist = [], []
        for _ in range(batch_size):
            a = agent.get_eps_greedy_action()
            r = agent.take_action(a)
            a_hist.append(a)
            r_hist.append(r)
        agent.update(a_hist, r_hist)  # the update location changed!

    best_action = agent.get_best_action()
    for _ in range(grid[-1]):
        r = agent.take_action(best_action)

We can trivially extend this to any class of bandit algorithm: by only changing when the bandit is allowed to update, any algorithm can be used with any grid in the batched bandit framework.
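To make this concrete, below is a minimal, self-contained sketch of a batched epsilon-greedy agent. It is not the repo's implementation: the class and function names are my own, and it assumes the Arm class from earlier plus a grid of cumulative endpoints like the helpers sketched above (np.diff converts those endpoints into per-batch sizes):

    import numpy as np

    class EpsGreedyAgent:
        def __init__(self, n_arms, eps=0.1):
            self.eps = eps
            self.counts = np.zeros(n_arms)
            self.values = np.zeros(n_arms)  # running mean reward per arm

        def get_eps_greedy_action(self):
            # explore with probability eps, otherwise play the current best arm
            if np.random.rand() < self.eps:
                return np.random.randint(len(self.values))
            return int(np.argmax(self.values))

        def get_best_action(self):
            return int(np.argmax(self.values))

        def update(self, actions, rewards):
            # batch update: fold in every (action, reward) pair observed this batch
            for a, r in zip(actions, rewards):
                self.counts[a] += 1
                self.values[a] += (r - self.values[a]) / self.counts[a]

    def run_batched_eps_greedy(arms, grid, eps=0.1):
        agent = EpsGreedyAgent(len(arms), eps)
        batch_sizes = np.diff(grid, prepend=0)  # cumulative endpoints -> per-batch sizes
        total_reward = 0
        for batch_size in batch_sizes[:-1]:     # explore/exploit on all but the last batch
            a_hist, r_hist = [], []
            for _ in range(batch_size):
                a = agent.get_eps_greedy_action()
                r = arms[a].sample()
                a_hist.append(a)
                r_hist.append(r)
            agent.update(a_hist, r_hist)        # the policy only updates at the batch boundary
            total_reward += sum(r_hist)
        best = agent.get_best_action()          # go fully greedy on the final batch
        total_reward += sum(arms[best].sample() for _ in range(batch_sizes[-1]))
        return total_reward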

In the context of an email campaign, we may decide to target 5000 users (T = 5000). Depending on the time frame available, we can pick a number of batches (M = num_days_available / num_days_response). Say we want to launch the campaign within 30 days and it takes 5 days to observe a response; then the number of batches we can run is 6. We explore in the first 5 batches, but on the last batch the campaign is over, so there we commit to the best action.
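With the hypothetical helpers sketched above, simulating this campaign might look like the following:

    # the five made-up copy variants from earlier
    arms = [Arm(0.050), Arm(0.055), Arm(0.060), Arm(0.048), Arm(0.052)]
    grid = geometric_grid(T=5000, M=6)  # 30-day window / 5-day response time = 6 batches
    clicks = run_batched_eps_greedy(arms, grid, eps=0.1)
    print(f"simulated clicks over the campaign: {clicks}")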

As shown above, it is easy to extend any bandit algorithm to the batched framework. Gao et al. [3] showed this with their adaptation of Successive Elimination to the batched setting. Successive Elimination (SE) works by pruning less promising arms out of the candidate set as early, yet as responsibly, as possible. To do this we sample each arm randomly during the batch. At the end of the batch we construct a confidence threshold as follows:

Confidence threshold for pruning arms

where gamma is a scaling factor, T is the time horizon, K is the number of arms, and tau_m is the number of observations that have been sampled up to that point.
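As a sketch of the form this threshold takes (the precise expression belongs to [3]; treat this rendering as my own approximation): threshold = sqrt(gamma * log(T * K) / tau_m), so the bar for eliminating an arm shrinks as more observations tau_m accumulate.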

To decide whether an arm stays in the candidate set, we take the difference between the average reward of the best arm and the average reward of each other arm. If this difference is greater than our confidence threshold, the arm is removed from the set of arms being explored. Below is the pseudocode for the algorithm provided by the authors.

Algorithm For BaSE. Image taken from Gao et al. [3]

An interesting note about the BaSE agent is how similar it is to a randomized experiment. Looking at step (a), we randomly sample each arm in the set A and observe the reward, exactly as we would in a randomized experiment. The difference is the pruning step (b), where we successively try to remove candidate arms from the current arm set based on the available information. As alluded to at the beginning of the article, many practitioners don't run proper A/B tests, but rather opt to manually review and prune less promising arms. BaSE mimics this process by introducing an automated heuristic that can remove the need for human intervention.
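As a rough sketch of the pruning step (b), here is an illustrative helper of my own using the threshold form assumed above; it is not the authors' reference implementation:

    import numpy as np

    def base_prune(active_arms, sums, counts, gamma, T, K):
        # average reward per active arm so far
        means = {a: sums[a] / counts[a] for a in active_arms}
        best_mean = max(means.values())
        tau_m = min(counts[a] for a in active_arms)  # observations per arm up to this batch
        threshold = np.sqrt(gamma * np.log(T * K) / tau_m)
        # keep only arms whose average is within the threshold of the best arm
        return [a for a in active_arms if best_mean - means[a] < threshold]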

To understand the algorithms in the batched environment, we'll look at a couple of experiments varying the number of batches and the number of arms. Each algorithm was run for 100 trials with T = 5000 for each grid.

To test the effect of the number of batches, an experiment was conducted using a fixed set of arms with means 0.5, 0.51, and 0.6. The number of batches was tested at values of 2, 4, 6, and 8. Each agent was run on each of the three grids presented above. To keep the discussion simple, the best-performing bandit and grid combination was chosen. Results are shown in the figure below.

Distributions for each M for each agent during the batch experiment

To test the effect of the number of arms, an experiment was conducted measuring performance on different sets of arms. Each arm set contained an optimal arm with mean 0.6, and at each step of the experiment an arm with mean around 0.5 was added to the set. This was repeated to generate arm sets with cardinality between 2 and 6. The number of batches for this experiment was fixed at M = 6. The results of the experiment are below.

Distributions for each K for each agent during the arm experiment

Full results from the experiments can be found in this notebook in the repo.

As seen in the experiments, all agents performed best on the geometric grid for nearly all settings of the experiment. The intuition behind this comes from the number of subjects sampled before the final batch.

The figure below shows the cumulative number of subjects treated by the end of each batch for each grid in a scenario where M = 6 and T = 5000. The striking difference is that the geometric grid treats substantially fewer subjects than the arithmetic or minimax grids before the final batch, where the agent switches to a fully greedy strategy. This means that agents on the geometric grid are able to exploit more subjects than those on the minimax or arithmetic grids, which leads to better performance. These findings are supported by the ideas in Bayati et al. [4], where the authors give a deep analysis of why greedy algorithms achieve surprisingly low regret compared with other algorithms.

Cumulative number of subjects sampled per batch for each grid.
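The gist of this comparison can be reproduced with the grid helpers sketched earlier (again, illustrative assumptions rather than the repo code):

    T, M = 5000, 6
    for name, fn in [("arithmetic", arithmetic_grid),
                     ("minimax", minimax_grid),
                     ("geometric", geometric_grid)]:
        # cumulative subjects treated by the end of each batch
        print(f"{name:>10}: {fn(T, M)}")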

This trend, however, does not generalize to grids with small numbers of batches. In the case where M = 2, the number of samples in the first batch of the geometric grid is quite small. In this scenario, the better choice is to consider the minimax grid, or, in the case of sparse rewards (i.e., an extremely small mean for the arm's rewards), an arithmetic grid might be appropriate.

In both experiments the randomized agent (AB Agent) and the BaSE agent display very similar performance. This is true only for the geometric grid, because of the advantages in exploration discussed above. While BaSE does introduce a confidence threshold for pruning arms early, this threshold is not always triggered before the final batch, which leads to performance similar to the randomized trial.

This issue of triggering the confidence threshold highlights the problem of hyperparameters in experimentation systems. BaSE and epsilon-greedy both suffer from it. Looking at the epsilon-greedy agent, we can see that it has extreme variability between trials. This is due to the static hyperparameters used across trials. When using an agent like BaSE or epsilon-greedy, it is crucial to pick hyperparameters appropriate for your problem. These parameters can be determined by a simulation before your experiment.

A surprising note from the experiment comes from the Thompson Sampling agent (TS Agent), which had relatively stable performance between trials. The TS Agent doesn't suffer from the hyperparameter problem discussed above, but it does require some prior knowledge. To use the TS Agent, an implementation must specify a prior distribution and support a derivation of the posterior distribution to update the agent. In the case of CTR this is straightforward: rewards are Bernoulli, the Beta distribution is the conjugate prior, and the posterior is again a Beta distribution. If the reward signal you are working with is continuous, it becomes trickier to make sure your prior is correct. If you're interested in learning more about Thompson Sampling, this article provides a good surface-level introduction and Russo et al. [5] provide a comprehensive overview.
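For the Bernoulli-reward (CTR) case, a minimal sketch of such an agent could look like the following. The names are illustrative, not the repo's implementation, and it drops into the same batched loop as the epsilon-greedy sketch above:

    import numpy as np

    class BetaTSAgent:
        def __init__(self, n_arms):
            # Beta(1, 1) prior for each arm, i.e. uniform over possible CTRs
            self.alpha = np.ones(n_arms)
            self.beta = np.ones(n_arms)

        def get_action(self):
            # sample a plausible CTR for each arm from its posterior and play the argmax
            return int(np.argmax(np.random.beta(self.alpha, self.beta)))

        def get_best_action(self):
            return int(np.argmax(self.alpha / (self.alpha + self.beta)))

        def update(self, actions, rewards):
            # conjugate Beta-Bernoulli update, applied once at the end of the batch
            for a, r in zip(actions, rewards):
                self.alpha[a] += r
                self.beta[a] += 1 - r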

The following appear to be some safe guidelines for general practice based on the simulations:

  • If the experiment will be managed (human interaction between batches), then a BaSE agent with human judgement on key KPIs is a good option to determine when to enter the exploit phase.
  • If the underlying distribution is known and the experiment will be fully automated, then Thompson Sampling is a good option.
  • If the underlying distribution is not known or has a complicated posterior and the experiment is fully automated, then carefully chosen parameters for an epsilon-greedy agent or a BaSE agent are good options.

It's important to note that these takeaways apply to these particular arm distributions. Depending on your scenario, these agents could respond differently. For that reason, it's advisable to run your own simulations to gauge how your experiment should be constructed, based on loose expectations of the rewards from your arms.

This was a brief introduction to some ideas on how to convert bandit algorithms into batched bandit algorithms. All of these algorithms have been implemented and are available for you to explore in the repo. For further study I'd suggest the algorithms proposed in [2] and [3], which offer deep intuition into the batched bandit problem and some of the underlying proofs for regret bounds.

You can access the repo here!

All images unless otherwise noted are by the author.

[1] Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT press.

[2] Perchet, V., Rigollet, P., Chassang, S., & Snowberg, E. (2016). Batched Bandit Problems. The Annals of Statistics, 44(2).

[3] Gao, Z., Han, Y., Ren, Z., & Zhou, Z. (2019). Batched Multi-armed Bandits Problem.

[4] Bayati, M., Hamidi, N., Johari, R., & Khosravi, K. (2020). The Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms.

[5] Russo, D., Van Roy, B., Kazerouni, A., Osband, I., & Wen, Z. (2017). A Tutorial on Thompson Sampling.
