One Turn After Another


While some games, like rock-paper-scissors, only work if all players choose their actions simultaneously, other games, like chess or Monopoly, expect the players to take turns one after another. In game theory, the former kind of game is called a static game, while turn-taking is a property of so-called dynamic games. In this article, we will analyse the latter with methods from game theory.

This article is the fourth part of a four-chapter series on the fundamentals of game theory. I recommend reading the first three articles if you haven't done so yet, because the concepts shown here build on the terms and paradigms introduced in the previous articles. But if you are already familiar with the core fundamentals of game theory, don't let yourself be stopped, and go ahead!

Dynamic games

Dynamic games can be visualized as trees. Photo by Adarsh Kummur on Unsplash

While so far we only looked at static games, we will now introduce dynamic games, where players take turns. As before, such games consist of a number of players, a set of actions for each player, and a reward function that assesses the actions of a player given the other players' actions. Beyond that, for a dynamic game, we need to define an order in which the players take their turns. Consider the following tree-like visualization of a dynamic game.

A visualization of a dynamic game. Figure by author.

At the top we have a node where player 1 has to decide between two actions, L and R. This determines whether to follow the left part or the right part of the tree. After player 1's turn, player 2 takes their turn. If player 1 chooses L, player 2 can decide between l1 and r1. If player 1 chooses R, player 2 has to decide between l2 and r2. At the leaves of the tree (the nodes at the bottom), we see the rewards, just like we had them in the matrix cells of static games. For example, if player 1 decides on L and player 2 decides on r1, the reward is (1,0); that is, player 1 gets a reward of 1, and player 2 gets a reward of 0.

I bet you are eager to find the Nash equilibrium of this game, as that is what game theory is mainly about (if you still struggle with the concept of a Nash equilibrium, you may want to look back at chapter 2 of this series). To do that, we can transform the game into a matrix, as we already know how to find a Nash equilibrium in a game displayed as a matrix. Player 1 decides on the row of the matrix, player 2 decides on the column, and the values in the cell then specify the rewards. However, there is one important point to note. When we look at the game displayed as a tree, player 2 decides on their action after player 1 does and hence only cares about the part of the tree that is actually reached. If player 1 chooses action L, player 2 only decides between l1 and r1 and doesn't care about l2 and r2, because these actions are out of the question anyway. However, when we search for a Nash equilibrium, we need to be aware of what would happen if player 1 were to change their action. Therefore, we must know what player 2 would have done if player 1 had chosen a different option. That is why we have four columns in the following matrix, to always account for decisions in both parts of the tree.

A column like (r1,l2) can be read as "player 2 chooses r1 if player 1 chose L and chooses l2 if player 1 chose R". In this matrix, we can search for the best responses. For example, the cell (L, (l1,l2)) with reward 3,1 is a mutual best response: player 1 has no reason to change from L to R because that would lower his reward (from 3 to 1), and player 2 has no reason to change either because none of the other options is better (one is just as good, though). In total, we find three Nash equilibria, which are underlined in the upcoming matrix:
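This search can also be done programmatically. The following Python sketch builds the strategic form of the tree and enumerates the pure Nash equilibria by checking mutual best responses. Note that the two leaf rewards of the right branch are not stated explicitly in the text, so the values used for them here are assumptions, chosen only to be consistent with the equilibria discussed:

```python
from itertools import product

# Strategic form of the tree game. Player 1 picks a row (L or R); player 2's
# full strategy is a pair (x, y): play x after L, play y after R.
leaf_payoffs = {
    ("L", "l1"): (3, 1),
    ("L", "r1"): (1, 0),
    ("R", "l2"): (1, 0),  # assumed: the figure's value isn't given in the text
    ("R", "r2"): (2, 1),  # assumed: the figure's value isn't given in the text
}

p1_strategies = ["L", "R"]
p2_strategies = list(product(["l1", "r1"], ["l2", "r2"]))  # (after L, after R)

def payoff(a1, s2):
    """Reward pair when player 1 plays a1 and player 2 follows strategy s2."""
    move2 = s2[0] if a1 == "L" else s2[1]
    return leaf_payoffs[(a1, move2)]

# A profile is a Nash equilibrium if neither player gains by deviating alone.
equilibria = []
for a1, s2 in product(p1_strategies, p2_strategies):
    u1, u2 = payoff(a1, s2)
    best1 = all(payoff(b1, s2)[0] <= u1 for b1 in p1_strategies)
    best2 = all(payoff(a1, t2)[1] <= u2 for t2 in p2_strategies)
    if best1 and best2:
        equilibria.append((a1, s2))

print(equilibria)
# [('L', ('l1', 'l2')), ('L', ('l1', 'r2')), ('R', ('r1', 'r2'))]
```

Under these assumed payoffs, the sketch prints exactly the three equilibria underlined in the matrix.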

The chocolate-pudding market

We'll talk about chocolate pudding now. But also about game theory. Photo by American Heritage Chocolate on Unsplash

Our next example brings the idea of dynamic games to life. Let's assume player 2 is a market-leading retailer of chocolate pudding. Player 1 also wants to build up a business but isn't sure yet whether to join the chocolate pudding market or rather sell something else. In our game, player 1 has the first turn and can decide between two actions: join the market (i.e., sell chocolate pudding), or don't join the market (i.e., sell something else). If player 1 decides to sell something other than chocolate pudding, player 2 remains the market-dominating retailer of chocolate pudding, and player 1 makes some money in the other area they decided on. This is reflected by the reward 1,3 in the right part of the tree in the following figure.

The market game as a dynamic game. Figure by author.

But what if player 1 is greedy for the unimaginable riches that lie dormant in the chocolate pudding market? If they decide to join the market, it is player 2's turn. They can decide to accept the new competitor, give in, and share the market. In this case, both players get a reward of 2. But player 2 can also decide to start a price war to demonstrate their superiority to the new competitor. In this case, both players get a reward of 0, because they ruin their profits with dumping prices.

Just like before, we can turn this tree into a matrix and find the Nash equilibria by searching for the best responses:

If player 1 joins the market, the best option for player 2 is to give in. This is an equilibrium because no player has any reason to change. For player 1 it doesn't make sense to leave the market (that would give a reward of 1 instead of 2), and for player 2 it is no good idea to switch to fighting either (which would give a reward of 0 instead of 2). The other Nash equilibrium happens when player 1 just doesn't join the market. However, this scenario includes player 2's decision to fight if player 1 had chosen to join the market instead. They basically make a threat and say: "If you join the market, I will fight you." Remember that earlier we said we need to know what the players would do even in the cases that don't seem to happen? Here we see why this is important. Player 1 must assume that player 2 would fight, because that is the only reason for player 1 to stay out of the market. If player 2 didn't threaten to fight, we wouldn't have a Nash equilibrium, because then joining the market would become a better option for player 1.
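The two equilibria of the market game can be verified with a few lines of Python, using the rewards stated above ((2,2) for join/accept, (0,0) for join/fight, and (1,3) whenever player 1 stays out):

```python
from itertools import product

# Payoff matrix of the chocolate-pudding market game. Player 1 chooses
# "join" or "out"; player 2's strategy is what they would do if player 1
# joined: "accept" or "fight".
payoffs = {
    ("join", "accept"): (2, 2),
    ("join", "fight"): (0, 0),
    ("out", "accept"): (1, 3),  # player 2's plan is never executed here
    ("out", "fight"): (1, 3),
}

def is_nash(a1, a2):
    """True if neither player can improve their reward by deviating alone."""
    u1, u2 = payoffs[(a1, a2)]
    ok1 = all(payoffs[(b1, a2)][0] <= u1 for b1 in ("join", "out"))
    ok2 = all(payoffs[(a1, b2)][1] <= u2 for b2 in ("accept", "fight"))
    return ok1 and ok2

equilibria = [p for p in product(("join", "out"), ("accept", "fight")) if is_nash(*p)]
print(equilibria)  # [('join', 'accept'), ('out', 'fight')]
```

The second equilibrium, ('out', 'fight'), is exactly the one built on the threat discussed above: it survives the check only because player 2's hypothetical plan to fight is never put to the test.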

But how reasonable is this threat? It keeps player 1 out of the market, but what would happen if player 1 didn't believe the threat and decided to join the market anyway? Would player 2 really carry out the threat and fight? That would be very silly, as it would give them a reward of 0, whereas giving in would give a reward of 2. From that perspective, player 2 is using an empty threat that is not very credible. If the case really occurred, they wouldn't carry it out anyway, would they?

Subgame perfect equilibrium

For a subgame perfect equilibrium, you need to start with small parts of the game before you get the whole picture. Photo by Ben Stern on Unsplash

The previous example showed that sometimes Nash equilibria occur that are not very plausible within the game. To address this problem, a stricter concept of equilibrium has been introduced, which is called a subgame perfect equilibrium. It adds stricter conditions to the notion of an equilibrium. Hence every subgame perfect equilibrium is a Nash equilibrium, but not all Nash equilibria are subgame perfect.

A Nash equilibrium is subgame perfect if every subgame of this equilibrium is a Nash equilibrium itself. What does that mean? First, we have to understand that a subgame is a part of the game's tree that starts at any node. For example, if player 1 chooses L, the rest of the tree under the node reached by playing L is a subgame. Likewise, the tree that comes after the node of action R is a subgame. Last but not least, the whole game is always a subgame of itself. As a consequence, the example we started with has three subgames, which are marked in grey, orange, and blue in the following:

The first game has three subgames. Figure by author.

We already saw that this game has three Nash equilibria, namely (L,(l1,l2)), (L,(l1,r2)) and (R,(r1,r2)). Let us now find out which of these are subgame perfect. To this end, we investigate the subgames one after another, starting with the orange one. If we only look at the orange part of the tree, there is a single Nash equilibrium, which occurs when player 2 chooses l1. If we look at the blue subgame, there is also a single Nash equilibrium, which is reached when player 2 chooses r2. That tells us that in every subgame perfect Nash equilibrium, player 2 has to choose option l1 if we arrive in the orange subgame (i.e., if player 1 chooses L), and player 2 has to choose option r2 if we arrive at the blue subgame (i.e., if player 1 chooses R). Only one of the previous Nash equilibria fulfills this condition, namely (L,(l1,r2)). Hence this is the only subgame perfect Nash equilibrium of the whole game. The other two versions are Nash equilibria as well, but they are somewhat illogical in the sense that they contain some kind of empty threat, as we saw in the chocolate pudding market example before. The method we just used to find the subgame perfect Nash equilibrium is called backward induction, by the way.
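Backward induction is easy to express as a recursive function: solve every subgame from the leaves upward, letting the player at each node pick the action that maximizes their own reward. As before, the leaf rewards of the right branch are assumed values (the figure is not reproduced here), chosen to match the equilibria discussed:

```python
# Game tree: an inner node is (label, player_index, {action: child});
# a leaf is a plain (reward_1, reward_2) pair. The rewards after R are
# assumptions consistent with the text.
tree = ("root", 0, {
    "L": ("after_L", 1, {"l1": (3, 1), "r1": (1, 0)}),
    "R": ("after_R", 1, {"l2": (1, 0), "r2": (2, 1)}),
})

def backward_induction(node):
    """Return (payoffs, plan), where plan maps each node label to the
    optimal action there -- including subgames off the played path."""
    if not isinstance(node[-1], dict):      # leaf: just the payoffs
        return node, {}
    label, player, children = node
    plan, best_action, best_payoffs = {}, None, None
    for action, child in children.items():
        payoffs, sub_plan = backward_induction(child)
        plan.update(sub_plan)
        if best_payoffs is None or payoffs[player] > best_payoffs[player]:
            best_action, best_payoffs = action, payoffs
    plan[label] = best_action
    return best_payoffs, plan

payoffs, plan = backward_induction(tree)
print(plan)     # {'after_L': 'l1', 'after_R': 'r2', 'root': 'L'}
print(payoffs)  # (3, 1)
```

The resulting plan is (L,(l1,r2)) with payoffs (3,1): the single subgame perfect equilibrium, because the plan records optimal play in every subgame, not just on the path that is actually reached.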

Uncertainty

In dynamic games, it can happen that you have to make decisions without knowing exactly which node of the game you are in. Photo by Denise Jans on Unsplash

So far in our dynamic games, we always knew which decisions the other players made. For a game like chess, this is indeed the case, as every move your opponent makes is perfectly observable. However, there are other situations in which you might not be sure about the exact moves the other players make. As an example, we return to the chocolate pudding market. You take the perspective of the retailer that is already in the market, and you have to decide whether you would start fighting if the other player joined the market. But there is one thing you don't know, namely how aggressive your opponent will be. When you start fighting, will they be easily frightened and give up? Or will they be aggressive and fight you until only one of you is left? This can be seen as a decision made by the other player that influences your decision. If you expect the other player to be a coward, you might prefer to fight, but if they turn out to be aggressive, you would rather give in (reminds you of the birds fighting for food in the previous chapter, doesn't it?). We can model this scenario in a game like this:

A dynamic game with a hidden decision (indicated by the dotted circle). Figure by author.

The dotted circle around the two nodes indicates that these are hidden decisions that are not observable to everyone. If you are player 2, you know whether player 1 joined the market or not, but if they joined, you don't know whether they are aggressive (left node) or moderate (right node). Hence you act under uncertainty, which is a very common ingredient in many games you play in the real world. Poker would become very boring if everybody knew everyone's cards; that's why there is private information, namely the cards in your hand that only you know about.

Now you still have to decide whether to fight or give in, although you are not exactly sure which node of the tree you are in. To do that, you have to make assumptions about the likelihood of each state. If you are quite certain that the other player is behaving moderately, you might be up for a fight, but if you assume them to be aggressive, you might prefer giving in. Say there is a probability p that the other player is aggressive, and a probability (1-p) that they behave moderately. If you assume p to be high, you should give in, but as p becomes smaller, there should be a point where your decision switches to fighting. Let's try to find that point. In particular, there should be a sweet spot in between, where the probability of the other player being aggressive vs. moderate is such that fighting and giving in are equally good alternatives. That is, the expected rewards would be equal, which we can model as follows:

Do you see how this formula is derived from the rewards for fighting or giving in at the different leaves of the tree? The formula solves to p=1/3, so if the probability of the other player being aggressive is 1/3, it makes no difference whether you fight or give in. But if you assume the other player to be aggressive with a probability greater than 1/3, you should give in, and if you assume aggressiveness to be less likely than 1/3, you should fight. This is a chain of thought you also have in other games where you act under uncertainty. When you play poker, you might not calculate the odds exactly, but you ask yourself, "How likely is it that John has two kings in his hand?" and depending on your assumption of that probability, you check, raise, or fold.
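The indifference calculation can be sketched numerically. Since the formula and figure are not reproduced in the text, the leaf rewards for player 2 below are hypothetical, chosen only so that the indifference point lands at p = 1/3 as stated: fighting pays -1 against an aggressive entrant and 2 against a moderate one, while giving in pays 1 either way.

```python
from fractions import Fraction

# Hypothetical rewards for player 2 (the market incumbent).
fight_vs_aggressive, fight_vs_moderate = -1, 2
give_in = 1

def expected_fight(p):
    """Expected reward of fighting if the entrant is aggressive with prob. p."""
    return p * fight_vs_aggressive + (1 - p) * fight_vs_moderate

# Indifference condition: p*(-1) + (1-p)*2 = 1  =>  2 - 3p = 1  =>  p = 1/3.
p_star = Fraction(fight_vs_moderate - give_in,
                  fight_vs_moderate - fight_vs_aggressive)
print(p_star)                  # 1/3
print(expected_fight(p_star))  # 1, equal to give_in at the sweet spot
```

With these numbers, fighting is indeed the better choice for any p below 1/3 (expected_fight(0) = 2 > 1), and giving in is better for any p above it (expected_fight(1) = -1 < 1), matching the reasoning above.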

Summary & outlook

Your journey on the seas of game theory has only just begun. There is so much more to explore. Photo by George Liapis on Unsplash

We have now learned a lot about dynamic games. Let us summarize our key findings.

  • Dynamic games include an order in which players take turns. 
  • In dynamic games, the players' possible actions depend on the previously executed actions of the other players. 
  • A Nash equilibrium in a dynamic game can be implausible, as it may contain an empty threat that would not be rational.
  • The concept of subgame perfect equilibria prevents such implausible solutions. 
  • In dynamic games, decisions can be hidden. In that case, players may not know exactly which node of the game they are in and must assign probabilities to the different states of the game. 

With that, we have reached the end of our series on the fundamentals of game theory. We have learned a lot, yet there are many things we haven't been able to cover. Game theory is a science in itself, and we have only been able to scratch the surface. Other concepts that expand the possibilities of game-theoretic analyses include:

  • Analysing games that are repeated multiple times. If you play the prisoner's dilemma multiple times, you might be tempted to punish the other player for having betrayed you in the previous round. 
  • In cooperative games, players can conclude binding contracts that determine their actions, in order to reach a solution of the game together. This is different from the non-cooperative games we looked at, where all players decide freely and maximize their own reward. 
  • While we only looked at discrete games, where each player has a finite number of actions to choose from, continuous games allow an infinite number of actions (e.g., any number between 0 and 1). 
  • A big part of game theory considers the use of public goods and the problem that people might consume these goods without contributing to their maintenance. 

These concepts allow us to analyse real-world scenarios from various fields such as auctions, social networks, evolution, markets, information sharing, voting behaviour, and much more. I hope you enjoyed this series and find meaningful applications for the knowledge you gained, be it in the analysis of customer behaviour, political negotiations, or the next game night with your friends. From a game theory perspective, life is a game!

References

The topics introduced here are typically covered in standard textbooks on game theory. I mainly used this one, which is written in German, though:

  • Bartholomae, F., & Wiens, M. (2016). Spieltheorie: Ein anwendungsorientiertes Lehrbuch. Wiesbaden: Springer Fachmedien Wiesbaden.

An alternative in the English language could be this one:

  • Espinola-Arredondo, A., & Muñoz-Garcia, F. (2023). Game Theory: An Introduction with Step-by-Step Examples. Springer Nature.

Game theory is a relatively young field of research, with the first major textbook being this one:

  • Von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior.
