A Brief Introduction to NON-COOPERATIVE GAME THEORY

(or, "What the Nobel was for in ‘94")

Three game theorists shared the ‘94 Nobel in Economics: John Nash (USA), Reinhard Selten (Germany), and John Harsanyi (Hungary/USA)

1st CONCEPT OF NON-COOPERATIVE EQUILIBRIUM:

BEST-RESPONSE - or "NASH" EQUILIBRIUM (Nash, USA)

"When everyone’s playing their best move to everyone ELSE’S best move, no-one’s going to move."

Like most really powerful ideas, the basic notion of Nash equilibrium is very simple, even obvious. Its mathematical extensions and implications are not, however. The idea of this natural "sticking point" is that no single player can benefit from unilaterally changing his or her move -- a non-cooperative best-response equilibrium.

Competitive Markets come to rest at Nash equilibrium, and the special structure of competitive markets makes them efficient. (As we will see in another game.) But it is important to recognize that MOST Nash-Equilibria are NOT efficient. What do we mean by not efficient?  It's just the idea of getting the "whole pie" -- that if we're really using the whole pie, then no one can get any more unless someone else takes less.  That's the economist's basic idea of allocative efficiency.

A famous game is called "Chicken," named after a famous adolescent hot-rod ritual from the United States of the 1950s.  Say that Boeing and Airbus are both considering entering the jumbo jet market, but that because of increasing returns to scale and relatively low demand, there is only enough room for one of them.  The game matrix (called the "normal form" of a game) could look like this. (This example is taken from an article by Paul R. Krugman, "Is Free Trade Passé?" in the Journal of Economic Perspectives, Fall 1987.)
 

                            Airbus
                      Build           Don't Build
  Boeing
  Build               (-10, -10)      (10, -1)
  Don't Build         (-1, 10)        (5, 5)

  (Payoffs in each cell are listed as (Boeing, Airbus).)

Note here that the upper-right and lower-left squares are both Nash equilibria, sometimes called non-cooperative equilibria.  It can easily be confirmed that neither player will have an interest in moving if it finds itself in either of these cells; doing so would only make it worse off.  As in the hot-rod version of "Chicken," the point is to convince the other player that you are committed to building "no matter what," so that he will take the Don't Build option.  But note also that both of these Nash equilibria are inefficient: the lower-right cell would give the two players a total pie of size 10, while no other cell can do this well.
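These equilibrium claims can be checked mechanically. Here is a minimal Python sketch (my own illustration, not from Krugman's article; the payoff numbers are read off the matrix above, and the (Boeing, Airbus) ordering within each cell is my assumption about the layout):

```python
# Payoffs (Boeing, Airbus) for the Boeing/Airbus "Chicken" game.
# Strategies: 0 = Build, 1 = Don't Build.
payoffs = {
    (0, 0): (-10, -10),  # both build: ruinous competition
    (0, 1): (10, -1),    # Boeing builds alone
    (1, 0): (-1, 10),    # Airbus builds alone
    (1, 1): (5, 5),      # neither builds
}

def is_nash(boeing, airbus):
    """A cell is a Nash equilibrium if neither player gains
    by unilaterally switching to the other strategy."""
    b_pay, a_pay = payoffs[(boeing, airbus)]
    b_deviation = payoffs[(1 - boeing, airbus)][0]
    a_deviation = payoffs[(boeing, 1 - airbus)][1]
    return b_pay >= b_deviation and a_pay >= a_deviation

equilibria = [cell for cell in payoffs if is_nash(*cell)]
print(equilibria)  # the two asymmetric cells: [(0, 1), (1, 0)]
```

The brute-force check confirms the text: only the upper-right and lower-left cells survive, and the efficient lower-right cell (5, 5) is not an equilibrium because either firm would gain 10 by building.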

Consider another famous example of inefficiency, the so-called "Prisoner’s Dilemma." Mugsy and Spike have just been nabbed for a crime. They have promised each other not to rat on the other if caught. The DA offers them the following prison-expectations (in years) if they stay silent or rat on the other:

                            MUGSY
                      Silent          Rat
  SPIKE
  Silent              (2, 2)          (10, 0)
  Rat                 (0, 10)         (7, 7)

  (Prison terms in years are listed as (Spike, Mugsy).)

What should each of them do?  Here each is trying to minimize his time in jail.  A little thinking shows that each can always do better AS AN INDIVIDUAL by ratting on the other.  From Spike’s point of view, if Mugsy stays Silent, he should Rat, because 0 years is better than 2. But if Mugsy Rats, he should also Rat, because 7 years is better than 10. If, on the other hand, they could both really keep their promise to stay silent, they would only get 2 years each. So the Nash equilibrium here is inefficient from the criminals' point of view.  It's easy to see why many kinds of human organizations (not just criminal ones) go to great lengths to try to get people to keep promises!

The lower-right cell is not only a Nash equilibrium; it also illustrates another interesting property with which Nash equilibrium should not be confused: each player is playing a "dominant strategy."  In a Prisoner's Dilemma game, non-cooperation with your partner is always your best move.  The idea of a dominant strategy is that it is your best move regardless of what the other guys do.  Note that this is a stronger requirement than Nash equilibrium, which only says that you have made your best move given what the other guys have actually done.
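The dominance claim can be verified directly from the matrix above. A short Python sketch (my own illustration, using the prison terms from the text):

```python
# Prison terms in years (Spike, Mugsy); each wants to MINIMIZE his term.
# Strategies: 0 = Silent, 1 = Rat.
years = {
    (0, 0): (2, 2),
    (0, 1): (10, 0),
    (1, 0): (0, 10),
    (1, 1): (7, 7),
}

def rat_dominates_for_spike():
    """Rat is dominant for Spike if it gives him strictly fewer years
    than Silent against EVERY possible move by Mugsy."""
    return all(years[(1, m)][0] < years[(0, m)][0] for m in (0, 1))

print(rat_dominates_for_spike())  # True: 0 < 2 and 7 < 10
```

By the symmetry of the matrix, the same check holds for Mugsy, so (Rat, Rat) is the dominant-strategy outcome even though (Silent, Silent) would leave both better off.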
 
 
 

2ND CONCEPT OF NON-COOPERATIVE EQUILIBRIUM:

SUB-GAME PERFECT EQUILIBRIUM (Selten, Germany)

"Think Forward, then Reason Backward"

There is a crucial distinction between the "Fog of Business" game (shown in class) and chess, and that has to do with PERCEPTIONS. I can show that in a game like chess (constant sum, with two players) a perfectly rational player does not have to "worry" about whether or not his or her opponent is irrational. You don’t worry about perceptions.  

The simplest example of this is the "Chain Store" paradox.  The established store (the "incumbent") threatens to fight a price war if the newcomer (the "entrant") comes in.  Analyzed as a "timeless" normal-form game (the matrix), there is no reason not to believe this threat.  And if the entrant believes it, he will stay out.  There are TWO Nash equilibria in this matrix, lower left and upper right.

But in fact the upper-right is stronger than just a static Nash equilibrium.  It is what Selten called a "sub-game perfect" equilibrium: looking just at the last part of the game, where the incumbent finds himself once the entrant has entered (the "sub-game"), it would obviously be irrational for the incumbent to follow through on his threat.  By fighting, he gets 10 points, while by giving in, he gets 30.  The sequential or "extensive form" of the game makes this clearer.  Of course the incumbent would like his threat to be believed, and with less than full information (about motivations, future games, etc.), it may well be.  But if both players have the full information below, and it is "common knowledge" that each is fully informed and rational, then such a threat is absurd, and will probably never be made.
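The "think forward, reason backward" logic can be sketched in Python. Only the incumbent's two payoffs (10 for fighting, 30 for giving in) come from the text; the entrant's payoffs and the stay-out outcome are hypothetical numbers I supply for illustration:

```python
# Backward induction on the chain-store game.
# Hypothetical game tree (only 10 and 30 are from the text):
#   Entrant: Stay Out -> (entrant 0,  incumbent 50)  [assumed]
#   Entrant: Enter    -> Incumbent chooses:
#       Fight   -> (entrant -10, incumbent 10)       [entrant payoff assumed]
#       Give In -> (entrant 20,  incumbent 30)       [entrant payoff assumed]

def solve():
    # Step 1: solve the sub-game the incumbent faces after entry.
    fight = {"entrant": -10, "incumbent": 10}
    give_in = {"entrant": 20, "incumbent": 30}
    incumbent_choice = max((fight, give_in), key=lambda o: o["incumbent"])

    # Step 2: the entrant anticipates that choice and reasons backward.
    stay_out = {"entrant": 0, "incumbent": 50}
    entrant_choice = max((stay_out, incumbent_choice),
                         key=lambda o: o["entrant"])
    return incumbent_choice, entrant_choice

incumbent_choice, entrant_choice = solve()
print(incumbent_choice["incumbent"])  # 30: the incumbent gives in
print(entrant_choice["entrant"])      # 20: so the entrant enters
```

Because the incumbent's threat is not credible in the sub-game (30 beats 10), the entrant who reasons backward enters, which is exactly why the "fight" equilibrium fails Selten's sub-game perfection test.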

I will now try to give another example of sequential reasoning, with the emphasis on rationality. Say that you are Black, the other player is White, and you are about to make your last of three possible moves in a chess game:

[Chess game-tree diagram not reproduced in this copy.]
Let the first score in every pair (b, w) be the score of Black, and the second that of White. 1 is a win, 0 is a loss, and ½ denotes a tie. Note that (0, 1), (1, 0), and (½, ½) are the only possible outcomes. What should B do? There is one right move for B here, on the assumption of W's rationality. And if W is not rational? -- so much the better for B!

Any way you mix up the outcomes, you will fail to find a situation where W's irrationality could possibly hurt B. That would be impossible, because in a constant-sum, two-player game, one player's failure to maximize his payoff must, by the same token, make the other player's payoff greater!
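Since the diagram is not reproduced above, here is one hypothetical last-move tree (my own construction) that makes the argument concrete: B picks the move that maximizes his guaranteed score against a rational W, and any irrational reply by W can only raise B's score.

```python
# Outcomes (b, w) from the text: win (1, 0), loss (0, 1), draw (1/2, 1/2).
# B moves first; W replies. The tree itself is hypothetical.
from fractions import Fraction

half = Fraction(1, 2)
tree = {
    "a": [(1, 0), (0, 1)],              # rational W picks (0, 1): B loses
    "b": [(half, half), (0, 1)],        # rational W picks (0, 1): B loses
    "c": [(half, half), (half, half)],  # B is guaranteed a draw
}

def value(move):
    """B's score from a move, assuming W rationally minimizes b."""
    return min(b for b, w in tree[move])

best_move = max(tree, key=value)
print(best_move, value(best_move))  # move "c" guarantees 1/2

# Constant sum: whatever W actually replies after B's best move,
# B's score is at least the guaranteed value.
assert all(b >= value(best_move) for b, w in tree[best_move])
```

The final assertion is the whole point: because b + w is constant, W's failure to minimize b automatically means b goes up, so assuming a rational opponent can never cost B anything.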

Note that this result does not deny that W might be able to benefit by lulling B into the FALSE BELIEF that W is irrational, when in fact s/he is not. If this belief induces B to relax so much that s/he plays less than optimally, then this would be good for W. The point is, one can never do worse by assuming the opponent is rational, and playing accordingly.

This is different when there are more than two players, or, as in the next example, when the sum of the game is not constant.

3rd CONCEPT OF NON-COOPERATIVE EQUILIBRIUM:

BEST-RESPONSE, GIVEN EXPECTATIONS --

EXPECTATIONS EQUILIBRIUM (Harsanyi, Hungary/USA)

"When everyone’s playing his or her best move CONDITIONAL ON EXPECTATIONS of everyone else’s best move, then no-one’s going to want to move."

This idea builds on the foregoing idea of thinking forward and reasoning backward, but now makes it conditional -- not on the assumption of perfect rationality on the part of all players -- but on one’s PERCEPTIONS of what the other player is likely to do. One way to think about this is by considering what the famous 19th-century Prussian general Carl von Clausewitz called the "Fog of Battle."

Clausewitz said that even the most clear-sighted plans of war must unfold during the actual "fog of battle." UNLIKE the perfect crystal clarity of the chess board, in real battle you are very unsure of what is actually going on: what all the possible moves are, your opponent’s perceptions, his perceptions of your perceptions, his perceptions of your perceptions of his perceptions.... Here we focus on just a couple of levels of this fogginess, by looking at the closest the US has ever come to getting into a two-sided nuclear war (that is, one in which both sides had nuclear weapons).
[Diagram of the Cuban missile crisis game not reproduced in this copy.]
This was the famous Cuban missile crisis of the early 1960s, an episode that is described in the book, "Games for Business and Economics" (Gardner, 1995) with the diagram above.

The crucial point of ambiguity here is that the game is not constant-sum. The "-L" score here is a very large negative number, meaning the "end of the world as we know it." Note also that each player’s final move has to be made simultaneously, i.e., without knowing what the opponent has done. There are two Nash equilibria in this final move: Mutual Doomsday and Mutual Backdown. Doomsday-Backdown combinations are not equilibria: one side can change its move here -- to Backdown -- and improve its payoff.
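These two equilibria can be exhibited with stand-in numbers. Since Gardner's diagram is not reproduced above, the payoffs below are entirely my own assumptions: -L is any very large negative number, and I assume that if EITHER side plays Doomsday, both sides get -L.

```python
# Final simultaneous move of the crisis game, with assumed payoffs.
L = 10**6
# Strategies: 0 = Doomsday, 1 = Backdown. Payoffs (US, USSR).
payoffs = {
    (0, 0): (-L, -L),
    (0, 1): (-L, -L),  # one launch is enough to end the world (assumed)
    (1, 0): (-L, -L),
    (1, 1): (0, 0),    # mutual backdown: status quo (assumed value)
}

def is_nash(us, ussr):
    u, s = payoffs[(us, ussr)]
    return (u >= payoffs[(1 - us, ussr)][0]
            and s >= payoffs[(us, 1 - ussr)][1])

equilibria = [cell for cell in payoffs if is_nash(*cell)]
print(equilibria)  # [(0, 0), (1, 1)]: Mutual Doomsday and Mutual Backdown
```

Under these stand-in payoffs, Mutual Doomsday survives only as a weak equilibrium (the deviator is indifferent, not worse off), while in the mixed cells the Doomsday player strictly gains by switching to Backdown -- matching the text's claim that only the two symmetric cells are equilibria.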

By so-called "backward induction" (thinking forward and then reasoning backward), one can show that the Nash equilibrium of mutual Backdown leads to the Soviets deciding to build the missiles, while the equilibrium of mutual Doomsday leads the Soviets to do nothing; i.e., to not build the missiles.

So different "endings" of the game dictate radically different beginnings, from a strategic point of view. In a game like this, the point is not just to figure out what your opponent is really thinking, but to "psyche out" that opponent, to manipulate her beliefs about what you are thinking! Reading the history of the Cuban missile crisis, recently declassified, it becomes clear that the Soviets did not at first believe that the Americans were willing to go "all the way." And so, consistently with this model, they built the missiles.

But the Americans were willing to play Doomsday – or at least Kennedy made his own Joint Chiefs of Staff believe that. On the evening of his fateful decision, Kennedy told his Joint Chiefs that he hoped they would all be alive tomorrow. With a convincing enough show that the Americans were willing to go Doomsday, the Soviets changed course in midstream, and backed down.


This problem of perceptions, and of managing perceptions, does not just appear in warfare, but in the "war" of business as well. The great English macro-economist John Maynard Keynes described stock market speculation (a game at which he was rather good) as a "newspaper beauty contest." In this contest of his day, the goal was not to vote one's own preferences for the prettiest faces; the winner was rather the contestant who successfully guessed which faces the average contestant found most appealing.

But of course, the average player has a view of what the average player’s view is, a view of what the average player’s view of the average player’s view is, etc., etc., ad infinitum. Keynes said that there were a few people in the stock market able to play on the fifth or sixth level of such a series of expectations. But not many could, and no one could play higher than that.
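This tower of expectations can be made numerical with a standard stand-in game (not in the text, but widely used to illustrate exactly this point): each player names a number in [0, 100], and the winner is whoever comes closest to 2/3 of the average guess. A "level-k" player best-responds to what a level-(k-1) player would do:

```python
# Level-k reasoning in the "guess 2/3 of the average" game,
# a numerical stand-in for Keynes's beauty contest.
def level_guess(k, level0=50.0):
    """Level 0 guesses naively (the midpoint, by assumption);
    level k guesses 2/3 of what level k-1 would guess."""
    guess = level0
    for _ in range(k):
        guess *= 2.0 / 3.0
    return guess

for k in range(7):
    print(k, round(level_guess(k), 2))
# Level 0: 50.0, level 1: 33.33, level 2: 22.22, ... -> toward 0.
```

Each extra level of "what does the average player think the average player thinks..." shrinks the guess, and the sequence converges to a fixed point where expectations about expectations are mutually consistent -- which is exactly the kind of resting point Harsanyi's concept identifies.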

Harsanyi gave us a "fixed point" for analysis, a point at which expectations about other expectations are consistent with one's optimizing strategy. Such consistency is not the same as being correct. The Soviets obviously changed their expectations. Kennedy’s "psyche-out" won, but led us to the brink of nuclear disaster. The "players" in this Cuban missile crisis met in Havana in 1992 for a "30th Anniversary." The consensus from both sides was, "Even we didn’t imagine how close we really were."


Updated: 2001-03-14, 17:17