How to Triumph and Cooperate in Game Theory and Evolution
Introduction
At the heart of game theory, one of the foundations of modern economics, lies the concept of “Nash equilibrium,” named after the late mathematician and Nobel laureate John Nash. Nash showed that for any competitive situation or “game” with a finite number of players and strategies, there exists a set of strategies, possibly randomized, such that no player can improve his or her winnings by unilaterally changing strategy. His work continues to appear in important new research today, as described in Erica Klarreich’s recent article “In Game Theory, No Clear Path to Equilibrium” and Emily Singer’s 2015 article “Game Theory Calls Cooperation Into Question.” Anyone interested in gaining simple insights about the world — the very purpose of this column — will want to get acquainted with the fundamental principles of game theory and Nash equilibrium. This month we explore these concepts by playing a variant of an ancient game called Morra, fighting over cake, and reexamining the role of cooperation in natural selection.
Problem 1
Morra is a competitive hand-and-finger game played between two opponents. It begins like Rock-Paper-Scissors, with both players concealing their hands. At a prearranged signal, both players simultaneously show their hands, which reveal one to five outstretched fingers. In some advanced variants of Morra, you have to guess how many fingers your opponent will show, but for our puzzle, we will restrict our attention to a simpler version called Odds and Evens.
In Odds and Evens one of the players is designated Odd, and the other Even. In our variant both players may choose to show either a single outstretched index finger, or the entire palm with a folded thumb, thus showing four fingers. The sum of the number of fingers shown by both players decides who wins and the number of points the winner gets: If the sum is odd, Odd wins that many points; if the sum is even, Even wins that many points. Thus, if the first player shows one finger and the other shows four, the sum is five, so Odd wins and gets five points, and so on.
Imagine two new players trying their hand (so to speak) at this game. Even reasons as follows: “The game obviously gives both players even chances. In four rounds, on average, I will win 2 points once and 8 points once, for a total of 10, while Odd will win 5 points twice, also for a total of 10.” So, leaving all to chance, he went ahead and showed one finger half the time and four fingers half the time, at random. Odd, on the other, ahem, hand, thought: “I think there’s something odd about this game. I’m going to mix it up and randomly play one finger three-fifths of the time, and four fingers two-fifths of the time.”
Who wins in the above game? Why does this happen, even though the game looks symmetric? Does the winner have a better strategy?
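Readers who like to experiment before reasoning it out can compute the long-run average score per round for any pair of mixed strategies. Below is a minimal Python sketch; the scoring (sums of 2, 5 and 8 points) follows the rules above, while the function and variable names are my own:

```python
def expected_points(p_even_one, p_odd_one):
    """Long-run average points per round for (Even, Odd), given the
    probability with which each player shows one finger (versus four)."""
    # Probabilities of the four possible combinations of hands.
    both_one = p_even_one * p_odd_one               # sum 2: Even scores 2
    even_one = p_even_one * (1 - p_odd_one)         # sum 5: Odd scores 5
    odd_one = (1 - p_even_one) * p_odd_one          # sum 5: Odd scores 5
    both_four = (1 - p_even_one) * (1 - p_odd_one)  # sum 8: Even scores 8
    even_avg = 2 * both_one + 8 * both_four
    odd_avg = 5 * (even_one + odd_one)
    return even_avg, odd_avg
```

For instance, `expected_points(0.5, 0.5)` shows that if both players play half-and-half, each averages 2.5 points per round; plugging in the strategies described above reveals who comes out ahead.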
Odds and Evens is a competitive, zero-sum game, beloved by the kinds of people who see the world as divided into winners and losers. But many real-world scenarios invite cooperation and hold the possibility of win-win outcomes. Next, let’s explore such a win-win situation when it comes to sharing cake.
Problem 2
Amy and Bob are a pair of young twins who, like siblings everywhere, fight a lot and love cake. Their mother frequently bakes a cake that she distributes to them in the following way. She talks independently to each twin and asks about the other twin’s behavior. If neither of them has any complaints, each of them gets half the cake. If only one of them reports a valid infraction by the other, that person gets three-quarters of the cake, the other gets none, and mom gets the remaining quarter. If both of them report valid infractions, they each get only one-quarter of the cake and mom gets the remaining half.
A) What is the best strategy for Amy and Bob if they do not trust each other?
B) What is the best strategy for them if, on the other hand, they do trust each other?
C) If there are 100 such events, and you know the total amount of cake that was consumed by the twins, when can you say that there was more cooperation than betrayal between them, and when the reverse?
D) As an aside, the mother’s behavior in this example is interesting. How would you quantify the value she places on various factors like fostering trust, reward and punishment and her own fondness for cake?
As readers familiar with game theory will recognize, this is just a version of one of the staples of elementary game theory, the famous “Prisoner’s Dilemma,” albeit one that is framed in the language of reward and cake rather than punishment and jail sentences. The answer to Problem 1 and part A of Problem 2 is the Nash equilibrium, and the answer to part B of Problem 2 is a “Pareto optimal” solution that also maximizes equality and the common good — it results in the twins’ consuming the most cake. Unlike the Nash equilibrium, which only guarantees that no player can improve his or her own payoff by unilaterally changing strategy, a Pareto optimal solution is one in which neither player can improve his or her situation without the other player’s getting a worse deal. Pareto optimality is named after the Italian engineer and economist Vilfredo Pareto, considered one of the founders of modern mathematical economics. Pareto was also responsible for the well-known “Pareto principle” (or 80/20 rule), which states that roughly 80 percent of effects come from 20 percent of causes. It is important to note that although Pareto optimality is a minimal notion of efficiency and cooperation, it does not guarantee equality or the maximization of the common good: In our example above, the two situations in which just one twin rats out the other are also, technically, Pareto optimal.
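For readers who want to verify these claims mechanically, here is a short Python sketch that encodes the cake payoffs and checks each outcome for the Nash and Pareto properties (the action labels and helper names are my own):

```python
QUIET, REPORT = "quiet", "report"
ACTIONS = (QUIET, REPORT)

# Cake fractions (Amy, Bob) for each pair of actions, per the problem.
payoff = {
    (QUIET, QUIET): (0.50, 0.50),
    (REPORT, QUIET): (0.75, 0.00),
    (QUIET, REPORT): (0.00, 0.75),
    (REPORT, REPORT): (0.25, 0.25),
}

def is_nash(a, b):
    """Neither twin can get more cake by unilaterally switching actions."""
    pa, pb = payoff[(a, b)]
    return (all(payoff[(x, b)][0] <= pa for x in ACTIONS)
            and all(payoff[(a, y)][1] <= pb for y in ACTIONS))

def is_pareto_optimal(a, b):
    """No other outcome gives one twin more cake without costing the other."""
    pa, pb = payoff[(a, b)]
    return not any(qa >= pa and qb >= pb and (qa > pa or qb > pb)
                   for qa, qb in payoff.values())
```

Running both checks over the four outcomes confirms the observations above: mutual reporting is the lone Nash equilibrium, while every outcome except mutual reporting is Pareto optimal.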
Parts A and B of Problem 2 above are just basic game theory, but part C tells us that if we know the history of the payoffs that players actually received, we can determine how much cooperation or betrayal actually took place. Can we apply this to the question of whether genes are selfish or cooperative?
Problem 3
As I described in the solution of “Are Genes Selfish or Cooperative?,” the Hardy-Weinberg principle predicts that if there are two genes A and a (“alleles”) that are vying for the same sites in a genome and their frequencies are p and q respectively, then, in the absence of a change in natural selection, the population stabilizes at the ratios p² for AA, 2pq for Aa and q² for aa (see the original column for a more detailed explanation of genes and alleles). The fact that this equilibrium exists implies that this ratio is optimal and has the highest selection value or fitness. This happens under conditions in which both genes have an equal chance (50 percent) of going into the germ cells, as is true for the vast majority of genes — this is like the twins’ both getting half of the cake.
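The stability of these ratios is easy to check numerically. The sketch below (function names are mine) computes the Hardy-Weinberg genotype proportions and shows that one round of random mating, with each parent passing on its alleles 50-50, returns the same allele frequency:

```python
def genotype_frequencies(p):
    """Hardy-Weinberg proportions (AA, Aa, aa) for allele
    frequencies p and q = 1 - p."""
    q = 1 - p
    return p * p, 2 * p * q, q * q

def next_allele_frequency(p):
    """Frequency of A after one round of random mating with no selection:
    each AA individual contributes two A alleles, each Aa contributes one."""
    freq_aa, freq_het, _ = genotype_frequencies(p)
    return freq_aa + 0.5 * freq_het
```

With p = 0.6, for example, the genotype proportions come out to 0.36, 0.48 and 0.16, and the frequency of A in the next generation is again 0.6.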
One of our readers, Lee Altenberg, mentioned that a gene can cheat using “meiotic drive,” or “segregation distortion” — a rare phenomenon that could happen in an Aa individual, where one of the alleles, say A, finds a way to get a higher proportion than 50 percent of A’s into the germ cells, and therefore, its offspring. This is analogous to the case where just one twin rats out the other and therefore gets a larger piece of cake. But natural selection does not take this lying down. Unlike the mom, evolution tries to optimize the fitness of the organism, which is analogous to maximizing the total amount of cake eaten by the twins. The new allele ratio will render the population slightly less fit, and the mechanisms that maintain the ratio in nature will try to restore it. Let’s put some numbers on this.
Imagine a pair of alleles A and a that exist in equilibrium at a ratio of 0.6 to 0.4 under normal conditions, in a species that lives for a year and reproduces once a year. The allele A is dominant, so both AA and Aa individuals have similar physical characteristics. A constant allele ratio is generally maintained in the long run by “push-pull” mechanisms in nature. There may be some environmental factors that favor individuals carrying the A allele (AA’s and Aa’s) and would, if unchecked, increase its proportion, whereas other factors would tend to favor the a allele and resist A’s increase. For simplicity, let us assume that such factors occur serially. Assume that under normal circumstances, without any segregation distortion, you have three years during which the environment is such that the A allele is favored. Both AA and Aa individuals have a certain survival/reproductive advantage over aa individuals, and this causes the A allele to increase its proportion by 10 percent in the first year, rising to 0.66. The same degree of advantage is present in the second and third years, allowing the proportion of the A allele to rise further. However, in the fourth year the conditions change and the allele ratio falls back again to the equilibrium value. This happens because aa individuals are favored in the fourth year, and extra copies of the a allele survive and find their way into the next generation. The advantage to aa individuals in the fourth year is proportional to the square of the difference between their proportion and the equilibrium value of 0.16. As an example, if the proportion of aa individuals is 0.12 at the start of the fourth year, the advantage they possess will be four times what they would have had if their proportion had been 0.14. Thus the “force” pulling the gene ratio back to equilibrium grows stronger, up to a maximum, the more the ratio deviates from it.
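As a quick sanity check on this setup, the baseline cycle without any segregation distortion can be sketched as follows, assuming (as the numbers above suggest) that the 10 percent advantage compounds multiplicatively each year; the function name and the simple reset rule for year four are my own:

```python
def baseline_cycle(p_start=0.6, advantage_years=3, bump=1.10):
    """Frequency of allele A over one four-year push-pull cycle with no
    segregation distortion: three years of a 10 percent rise, then a
    fourth year in which conditions restore the equilibrium."""
    history = [p_start]
    p = p_start
    for _ in range(advantage_years):
        p *= bump            # years 1-3: A-favoring conditions
        history.append(p)
    history.append(p_start)  # year 4: aa individuals favored, ratio resets
    return history
```

This yields A frequencies of 0.6, 0.66, 0.726 and roughly 0.799 before the fourth-year reset restores 0.6.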
Now let’s say that allele A manages to distort segregation so that 60 percent of the copies of the A gene in an Aa individual go into the next generation, instead of 50 percent. What would the new equilibrium ratio be? How much of A’s cheating will the above mechanism let it get away with?
The moral of the above story is that evolution enforces cooperation among genes quite strongly. But what about cooperation among individuals? Can genes and culture enforce that? As Samuel Bowles and Herbert Gintis state in their 2011 book “A Cooperative Species: Human Reciprocity and Its Evolution”: “Why do humans, uniquely among animals, cooperate in large numbers to advance projects for the common good? Contrary to the conventional wisdom in biology and economics, this generous and civic-minded behavior is widespread and cannot be explained simply by far-sighted self-interest or a desire to help close genealogical kin.” The late anthropologist Robert Sussman found that even primates, some of the most aggressive mammals, spend less than 1 percent of their day fighting or otherwise competing. In his article “Why Humans and Other Primates Cooperate” in the September 2014 issue of Scientific American, the primatologist Frans de Waal, one of the deepest thinkers on this subject, theorizes that the parental genes that enforce caring and empathy (“mother love” genes), which exist because such emotions are required for the survival of young children, have been co-opted broadly to produce wider intraspecies cooperation.
The roots of cooperation can be endlessly debated, but some part of it is hinted at in the two game theory articles referred to above. Game theory emerged from the analysis of competitive games in which the natural posture adopted by both players is that of conflict and mistrust — where you do your damnedest to defeat the other player and not lose yourself. In such situations, the Nash equilibrium is important. But most situations in human life are not like that. Trust, Pareto optimal equilibria and win-win situations abound. If we could not trust most of the people around us to a large extent, we would not be able to live peacefully at all. In her article, Klarreich quotes the game theorist Roger Myerson on how a different concept called “correlated equilibrium” can give more positive societal outcomes than Nash equilibria: “If there is intelligent life on other planets, in a majority of them they would have discovered correlated equilibrium before Nash equilibrium.” And this brings me to a qualitative question for readers: Do you think that in applying game theory and its mathematical techniques to biology and human behavior, scientists have focused too much on competition rather than cooperation?
As I’ve mentioned before, I think that we apply mathematics far too glibly to very complicated subjects like biology, human psychology and sociology. Mathematics cannot even solve the problem of three or more gravitating bodies analytically. The real world is far messier than our simple models. We cannot expect to find easy solutions to problems that may have hundreds or more complexly interacting variables. Although we have made considerable progress, we have a long, long way to go, especially in modeling win-win scenarios. I’d love to hear your comments.
That’s all for now. Happy puzzling!
Editor’s note: The reader who submits the most interesting, creative or insightful solution (as judged by the columnist) in the comments section will receive a Quanta Magazine T-shirt. And if you’d like to suggest a favorite puzzle for a future Insights column, submit it as a comment below, clearly marked “NEW PUZZLE SUGGESTION.” (It will not appear online, so solutions to the puzzle above should be submitted separately.)
Note that we may hold comments for the first day or two to allow for independent contributions by readers.
Update: The solution has been published here.