In Part III of the four-part Tragedy of the Commons series, I give my analysis of the game played in class. You should at least read Tragedy of the Commons: Part I — The Setup for a description of the game before reading this section.
The key thing to note about the way the game has been orchestrated is that students are trying to end up with as large a share as possible of all the draw tickets generated across the class, since those tickets determine the odds of winning the real money. This means that a rational player’s competitors/opponents are the entire class, not (just) the people playing in the same game. The rational player, who plays without regard for how his/her opponents play and thus adopts dominant strategies where possible, should therefore try to maximize his/her own utility, even if that means (unfairly) increasing the utility of the three other people in the same game; we shall call the rational player together with those three others his/her “group”.
A dominant strategy is one whose adopter is never worse off than a player adopting any other strategy. In the classroom game without the draw, the dominant strategy is to always defect. However, the monetary draw component introduced by J ties all the simultaneously running games together into one super-game and, as we shall see, eliminates the existence of any dominant strategy. To demonstrate that always defecting is no longer dominant, compare two hypothetical groups: in the first, everyone always defects; in the second, exactly two players always cooperate and the other two always defect. By the end of the game, each player in the always-defect-only group has accumulated $2000 of game tickets, while in the mixed group the defectors end up with $5000 and the cooperators with $3000. The members of the always-defect-only group therefore received fewer draw tickets than the mixed group’s defectors; that is, they did not do at least as well as all of their opponents, so always defecting is no longer dominant.
For this new super-game, no dominant strategy exists at all. First, note that the best per-turn yield for a player who always cooperates is $300 (everyone cooperates), while the worst per-turn yield for a defector is $100 (everyone else also defects). Now swap those two players between groups: the swapped cooperator, alone among three defectors, gets only $75, while the swapped defector, among three cooperators, gets $325. Thus neither a defection on a given turn nor a cooperation on a given turn is dominant, because there could exist a player in another group following the other strategy who does better.
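All of the figures quoted so far, both the per-turn payoffs and the group totals, are consistent with a simple public-goods payoff rule: every group member receives $75 per cooperator in the group, and each defector keeps an extra $100. This rule and the implied 20-round game length are my reconstruction from the numbers, not something stated explicitly here. A quick sketch:

```python
def payoff(defected: bool, num_cooperators: int) -> int:
    """Per-turn payoff in a 4-player group, reconstructed from the
    quoted figures: $75 per cooperator to every member, plus $100
    kept by each defector."""
    return 75 * num_cooperators + (100 if defected else 0)

# Check the reconstruction against every per-turn figure in the text.
assert payoff(False, 4) == 300  # everyone cooperates
assert payoff(True, 0) == 100   # everyone defects
assert payoff(False, 1) == 75   # sole cooperator among three defectors
assert payoff(True, 3) == 325   # sole defector among three cooperators
assert payoff(False, 2) == 150  # two cooperators, two defectors
assert payoff(True, 2) == 250   # defector alongside two cooperators

# The group totals in the defect-vs-mixed comparison imply a 20-round game:
ROUNDS = 20
assert ROUNDS * payoff(True, 0) == 2000   # all-defect group, per member
assert ROUNDS * payoff(True, 2) == 5000   # mixed-group defectors
assert ROUNDS * payoff(False, 2) == 3000  # mixed-group cooperators
```

The rule is equivalent to each cooperator contributing a $100 stake that is tripled and split evenly among the four members, which is the standard public-goods framing.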
The strategy that I adopted in the class was to cooperate on every move unless I became the only cooperator left, at which point I would switch to defecting for the rest of the game. I made the following observations before playing the game to arrive at my strategy:
1. any rational player would realize that being the sole cooperator in a group yields benefits only to the others, to one’s own detriment,
2. if one other player is cooperating, one’s utility is +$150/turn (greater than the $100/turn from everyone defecting),
3. a single round of being the sole cooperator yields minimal losses to one’s position,
4. the utility gained by each of two defectors playing alongside two cooperators is +$250/turn,
5. the class had many groups playing, and
6. the strategies adopted by people in the class would be helter-skelter.
A hypothesis I formed from #1 is that cooperation is monotone decreasing in a group of rational players: any defection could trigger further defections, and recovery to the initial level of cooperation becomes unlikely. From #2 and #4, if at least one other player cooperates, one gains more tickets per turn than if everyone were defecting. From #5, I realized that the extra utility I handed to two defectors was offset by my gains with one other cooperator because of the class size (with exactly two groups playing, this strategy with one consistent cooperator would not improve one’s proportion of the class’ tickets). From #3, a turn as the sole cooperator costs $25 against the $2000 “everyone defects” baseline, a 1.25% reduction in one’s chances to win the real prize money, whereas a turn with one other cooperator gains $50, or 2.5%. Given #6, there was a reasonable chance that at least one person in my group would cooperate for at least the first turn. Comparing the $25 loss with no other cooperators to the $50 gain with one, I could afford a round as the sole cooperator without falling behind, provided someone cooperated with me for at least one round; the expected utility of cooperating therefore exceeded the cost of occasionally being the sole cooperator.

I performed this reasoning while the professor was describing the game, so I was a bit rushed, but upon further reflection, I think the perfectly rational player would defect in the last round and possibly in the second-last round. (The superrational player would cooperate for every round.)
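The percentage arithmetic above can be made explicit. Everything here restates figures from the text; the $2000 baseline implies a 20-round game at $100/turn, which is my inference:

```python
baseline = 2000  # full-game total when everyone defects ($100/turn, 20 turns)

# One turn as the sole cooperator: earn $75 instead of the $100 baseline.
sole_coop_loss = 100 - 75                 # $25 lost on that turn
print(100 * sole_coop_loss / baseline)    # 1.25 (% of the baseline)

# One turn cooperating alongside one other cooperator: $150 instead of $100.
mutual_coop_gain = 150 - 100              # $50 gained on that turn
print(100 * mutual_coop_gain / baseline)  # 2.5 (% of the baseline)

# A single round of mutual cooperation more than pays for a round
# spent as the sole cooperator.
assert mutual_coop_gain > sole_coop_loss
```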
In the post-game analysis given by J, my hypothesis from #1, that cooperation is monotone decreasing in this game, was validated by experiments with many players. In fact, J asserted that most human players defect within the first 5 moves. However, he did express surprise when he looked at the graph for my group. I had apparently been playing against two humans and one “always cooperate” bot. One of the humans defected for the entire game. The other cooperated for a few turns, began defecting, and then, possibly out of guilt, resumed cooperating around turn 9 and continued until the end of the game. Because of the bot, I never defected. Had I adopted J’s supposedly perfectly rational strategy, I would almost undoubtedly have ended up with fewer draw tickets at the end of the day.
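As a sanity check, here is a sketch of my group as described, using a payoff rule consistent with the figures in this series ($75 per cooperator to every member, plus $100 kept by each defector). The 20-round length and the second human’s exact switch points are guesses; the post above only describes the behaviour qualitatively:

```python
ROUNDS = 20  # assumed; the exact game length is not stated

# "C" = cooperate, "D" = defect. Human B's switch points (cooperates for
# rounds 1-3, defects for rounds 4-8, cooperates from round 9 on) are my
# guesses at the qualitative behaviour described above.
history = {
    "me":      ["C"] * ROUNDS,                                # never defected
    "human_A": ["D"] * ROUNDS,                                # defected all game
    "human_B": ["C"] * 3 + ["D"] * 5 + ["C"] * (ROUNDS - 8),  # guilt-driven return
    "bot":     ["C"] * ROUNDS,                                # always-cooperate bot
}

totals = dict.fromkeys(history, 0)
for t in range(ROUNDS):
    # Payoff rule consistent with the figures in this series.
    coops = sum(history[p][t] == "C" for p in history)
    for p in history:
        totals[p] += 75 * coops + (100 if history[p][t] == "D" else 0)

print(totals)
```

Under these assumptions I finish with $4125 in tickets, more than double the $2000 a fully defecting group would have produced per member, even though the permanent defector still does best within the group ($6125).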
While speaking to J after class, I was told that this had been the first time he had seen an increase in cooperation, though he still believed that my strategy was “irrational” (I’m not sure whether he meant as opposed to “rational”, or as in “not perfectly rational”/“suboptimal”). I’m still not sure what the optimal strategy is (I’m fairly sure one needs to defect near the end of the game). I am also uncertain whether backward induction can be applied here at all, since one’s actions may affect future cooperation. Starting to defect permanently at a random point does not seem optimal, because beginning to defect causes a permanent decrease in utility (assuming one’s opponents are rational), and one cannot test the rationality of one’s opponents without risking causing cooperation to collapse to zero. Anyway, this is starting to get over my head; things seemed so much clearer during the few minutes J spent describing the game! Oh, and the results of the draw? I came up empty-handed.
Stay tuned for Part IV, where I apply these results to individual decision-making processes as they relate to some contemporary social issues.