
Applied

Game Theory
Lecture 1

Pietro Michiardi

Before we even start


The Grade Game
Without showing your neighbors what you are doing, write down
on a form either the letter alpha or the letter beta. Think of this as
a grade bid. I will randomly pair your form with one other form.
Neither you nor your pair will ever know with whom you were
paired. Here is how grades may be assigned for this class:
If you put alpha and your pair puts beta, then you will get grade A,
and your pair grade C;
If both you and your pair put alpha, then you both will get the
grade B-;
If you put beta and your pair puts alpha, then you will get the grade
C and your pair grade A;
If both you and your pair put beta, then you will both get grade B+.

What is game theory?


Game theory is a method of studying strategic
situations
Informally:
At one end we have firms in perfect competition: in this
case, firms are price takers and do not care about what
others do
At the other end we have monopolist firms: in this case, a
firm doesn't have competitors to worry about; it is not a
price-taker but takes the demand curve as given
Everything in between is strategic, i.e., everything that
constitutes imperfect competition
Example: the automotive industry

Literally, a strategic setting is one where the
outcomes that affect you depend on the actions of
others, not only yours

The grade game, explained (1)


Just reading the text is hard to absorb, let's
use a concise way of representing the game.
[Figure: a tree listing my grade and my pair's grade for each pair of choices]
(me alpha, pair alpha) -> B-, B-
(me alpha, pair beta)  -> A, C
(me beta, pair alpha)  -> C, A
(me beta, pair beta)   -> B+, B+

The grade game: outcome matrix


Let's use a more compact representation, an outcome matrix
(rows: me, the row player; columns: my pair, the column player;
each cell lists my grade first, then my pair's grade):

              alpha       beta
    alpha    B-, B-       A, C
    beta      C, A       B+, B+

It tells us everything that was in the game we saw

The grade game, explained (2)


What did you do?
How many chose alpha?
How many chose beta?
Why?

The grade game, explained (3)


Regardless of my partner's choice, there would
be better outcomes for me by choosing alpha
rather than beta;
We could all be collusive and work together,
hence by choosing beta we would all get higher
grades.
What we have examined is not a game yet

The grade game, explained (4)


Right now we have:
The players
Strategies, that is the actions players can take
We know what the outcomes are

We are missing objectives, i.e. payoffs

Basically we don't know what players care
about

The grade game, explained (5)


In our previous discussion we had two different
payoffs:
We had the ones where the only thing we care about
was our own grade
We had the ones where we might care about other
people's payoff as well

We're going to explore all possible combinations
of payoffs in the next couple of slides

The grade game: payoff matrix


Possible payoffs: in this case we only care
about our own grades (rows: me; columns: my pair):

              alpha      beta
    alpha     0, 0       3, -1
    beta     -1, 3       1, 1

# of utiles, or utility:
(A, C)   -> 3
(B-, B-) -> 0
Hence the preference order is:
A > B+ > B- > C

The grade game, explained (6)


What do we call people who only care about their own
grades?
So, now, given the payoff matrix, what should you do?
Play alpha! Indeed, no matter what the pair does, by
playing alpha you would obtain a higher payoff

Definition:
We say that my strategy alpha strictly dominates my
strategy beta if my payoff from alpha is strictly greater
than from beta, regardless of what others do.

Lesson 1
Do not play strictly dominated strategies

Dominated strategies (1)


Why shouldn't you play strictly dominated
strategies?
Because if I play a dominating strategy I'm doing
better than I could otherwise do, regardless of what
the other does

Let's look again at the payoff matrix

If we (me and my pair) reason selfishly, we will
both select alpha, and get a payoff of 0;
But if we reasoned in a different way, we could
end up both with a payoff of 1;

Dominated strategies (2)


What's the problem with this latter reasoning?
Suppose you have super mental powers and
oblige your partner to agree with you and
choose beta, so that you both would end up
with a payoff of 1
Even with communication, it wouldn't work,
because at this point you'd be better off by
choosing alpha, and getting a payoff of 3

Lesson 2
Rational choice (i.e., not choosing a dominated
strategy) can lead to outcomes that suck!

The Prisoner's Dilemma


Did you know it?
Any other examples?
What kind of remedies do we have for such
situations?

The grade game: payoff matrix


Possible payoffs: this time people are more
inclined to be altruistic (rows: me; columns: my pair):

              alpha       beta
    alpha     0, 0       -1, -3
    beta     -3, -1       1, 1

# of utiles, or utility:
(A, C) -> 3 - 4 = -1   (my A, minus my guilt)
(C, A) -> -1 - 2 = -3  (my C, minus my indignation)
This is a coordination problem

The grade game, explained (7)


What would you do in this case?
By choosing alpha you may minimize your losses
By choosing beta you may maximize your profit

We have the same game structure, the same
outcomes, but the payoffs are different
Is there any dominated strategy in this game?

Lesson 3:
Payoffs matter!
<< you can't get what you want unless you
know what you want >>

The grade game: payoff matrix


Selfish vs. Altruistic
(rows: me, selfish; columns: my pair, altruistic):

              alpha       beta
    alpha     0, 0       3, -3
    beta     -1, -1      1, 1

In this case, alpha still dominates

The fact that we (the selfish player) are playing
against an altruistic player doesn't change
my strategy, even though the other
player's payoffs have changed

The grade game: payoff matrix


Altruistic vs. Selfish
(rows: me, altruistic; columns: my pair, selfish):

              alpha       beta
    alpha     0, 0       -1, -1
    beta     -3, 3        1, 1

By thinking of what my opponent will do,
I can decide what to do.

What happened here?

Do I have a dominating strategy?
Does the other player have a dominating
strategy?

Lesson 4:
Put yourself in others' shoes and try to figure out
what they will do

Observations
In realistic settings:
It is often hard to determine what the payoffs
of your opponent are
It is easier to figure out my own payoffs

In general, we have to figure out what the
odds (probability) are of my opponent being
selfish or altruistic

A slightly more complicated game


The Pick a Number Game
Without showing your neighbor what you're doing, write down an
integer between 1 and 100. I will calculate the average
number chosen in the class. The winner in this game is the person
whose number is closest to two-thirds of the average in the class.
The winner will win 5 euro minus the difference in cents between
her choice and that two-thirds of the average.
Example: 3 students
Numbers: 25, 5, 60
Total: 90, Average: 30, 2/3 * average: 20
25 wins: 5 euro - 5 cents = 4.95 euro

While we sum up in the next couple of slides,
pick your number, write it down and give it to
me

The story so far (1)


We've seen a compact representation of games: this is
called the normal form
Lessons we learned:
1. Do not play strictly dominated strategies
2. Put yourself in others' shoes

It doesn't just matter what your payoffs are
It's also important what other people's payoffs are,
because you want to try and figure out what they're
going to do and respond appropriately

The story so far (2)


We've seen an important class of games:
Prisoner's Dilemma games
1. Joint project:
   Each individual may have an incentive to shirk
2. Price competition:
   Each firm has an incentive to undercut prices
   If all firms behave this way, prices are driven down
   towards marginal cost and industry profit will suffer
3. Common resources:
   Carbon emissions
   Fishing

The story so far (3)


In each of the previous examples we end up
with a bad outcome
This is not a failure of communication
Solutions:
Contracts - change the payoffs
Repeated interaction

Let me introduce some notation
... and by the way, hand in your numbers

Notation

Players: i, j, ...
  E.g., in the number game: you all

Strategies:
  si : a particular strategy of player i
    E.g.: 13
  s-i : the strategies of everybody else except player i
  Si : the set of possible strategies of player i
    E.g.: {1, 2, ..., 100}
  s : a particular play of the game, a strategy profile
    (vector, or list)
    E.g.: the collection of your pieces of paper

Payoffs:
  ui(s1, ..., si, ..., sN) = ui(s)
    E.g.: ui(s) = $5 - error if player i wins, 0 otherwise

Assumptions
We assume all the ingredients of the game to
be known
Everybody knows the possible strategies everyone
else could choose
Everybody knows everyone else's payoffs

This is not very realistic, but things are
complicated enough to give us material for
this class

Example 1

A two-player game in normal form
(rows: player 1; columns: player 2):

           L         C         R
    T    5, -1     11, 3      0, 0
    B    6, 4       0, 2      2, 0

Players: 1, 2
Strategy sets: S1 = {T, B}, S2 = {L, C, R}
Payoffs: e.g. u1(T, C) = 11, u2(T, C) = 3

NOTE: this game is not symmetric

Example continued
How is the game going to be played?
Does player 1 have a dominated strategy?
Does player 2 have a dominated strategy?
For a strategy to be dominated, we need
another strategy for the same player that does
always better (in terms of payoffs)

Definition: Strict dominance


We say player i's strategy s'i is strictly dominated
by player i's strategy si if:
ui(si, s-i) > ui(s'i, s-i) for all s-i
No matter what other people do, by choosing si
instead of s'i, player i will always obtain a strictly higher
payoff.
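To make the definition concrete, here is a minimal Python sketch (my own illustration, not part of the original slides) that searches for strictly dominated strategies in the Example 1 bimatrix game above.

# Illustrative sketch: find strictly dominated strategies in Example 1.
# Payoffs indexed as U[(row, col)] = (payoff to player 1, payoff to player 2).
U = {
    ("T", "L"): (5, -1), ("T", "C"): (11, 3), ("T", "R"): (0, 0),
    ("B", "L"): (6, 4),  ("B", "C"): (0, 2),  ("B", "R"): (2, 0),
}
S1, S2 = ["T", "B"], ["L", "C", "R"]

def strictly_dominates(player, s, s_prime):
    """True if strategy s strictly dominates s_prime for the given player
    (0 = row player, 1 = column player), i.e. it does strictly better
    against every strategy of the opponent."""
    others = S2 if player == 0 else S1
    def u(own, other):
        key = (own, other) if player == 0 else (other, own)
        return U[key][player]
    return all(u(s, o) > u(s_prime, o) for o in others)

for player, strategies in ((0, S1), (1, S2)):
    for s in strategies:
        for s_prime in strategies:
            if s != s_prime and strictly_dominates(player, s, s_prime):
                print(f"player {player + 1}: {s} strictly dominates {s_prime}")
# Expected output: player 2: C strictly dominates R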

Example 2: Hannibal game


An invader is thinking about invading a country, and
there are 2 routes through which he can lead his army.
You are the defender of this country and you have to
decide which of these routes to defend: you
can only defend one of them.
One route is a hard pass: if the invader chooses this
route he will lose one battalion of his army (over the
mountains).
If the invader meets your army, whatever route he
chooses, he will lose a battalion.

Example 2: Hannibal game

(rows: defender; columns: attacker; e, E = easy, h, H = hard)

           E         H
    e    1, 1      1, 1
    h    0, 2      2, 0

The attacker's payoff is how many battalions he will
arrive with in your country (the defender's payoff is the
number of battalions that do not get through)

Example 2: Hannibal game


You're the defender: what would you do?
Is it true that defending the easy route
dominates defending the hard one?
You're the attacker: what would you do?
Now, what should the defender do, if he
puts himself in the attacker's shoes?

Definition: Weak dominance


We say player i's strategy s'i is weakly
dominated by player i's strategy si if:
ui(si, s-i) >= ui(s'i, s-i) for all s-i, and
ui(si, s-i) > ui(s'i, s-i) for some s-i
No matter what other people do, by choosing si
instead of s'i, player i will always do at least as
well, and in some cases she does strictly better.
It turns out that, historically, Hannibal chose H!

Back to the pick a number game


What we know:
Do not choose a strictly dominated strategy
Also, do not choose a weakly dominated strategy
You should put yourself in others' shoes, try to
figure out what they are going to play, and
respond appropriately

What did you do in this game?

Back to the pick a number game


A possible assumption:
People choose numbers uniformly at random
The average is 50
2/3 * average = 33.3

What's wrong with this reasoning?

Back to the pick a number game


Let's try to find out whether there are
dominated strategies
If everyone chose 100, then the
winning number would be 66
Numbers > 67 are weakly dominated by 66
Rationality tells us not to choose numbers > 67

Back to the pick a number game


So now we've eliminated dominated strategies; it's like
the game was to be played over the set [1, ..., 67]
Once you figure out that nobody is going to choose a
number above 67, the conclusion is:
Also strategies above 45 are ruled out
They are weakly dominated, but only once we delete
68-100
This requires rationality, and knowledge that others are
rational as well

Back to the pick a number game


Eventually, we can show that also strategies
above 30 are weakly dominated, once we
delete previously dominated strategies
We can go on with this line of reasoning and
end up with the conclusion that:
1 was the winning strategy!!
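As a quick illustration of this argument, here is a minimal Python sketch (my own illustration, not from the slides) that iterates the "two-thirds of the current upper bound" argument and shows how the set of surviving numbers shrinks round by round.

import math

# Illustrative sketch: iterated deletion of dominated strategies in the
# "two-thirds of the average" game on {1, ..., 100}. Once everyone
# restricts play to 1..upper, numbers above two-thirds of upper can
# never win, so they are deleted each round.
upper = 100
round_no = 0
while True:
    new_upper = math.ceil(2 / 3 * upper)   # 100 -> 67 -> 45 -> 30 -> ...
    if new_upper >= upper:                 # integer rounding stalls near 2
        break
    upper = new_upper
    round_no += 1
    print(f"after round {round_no}: surviving numbers are 1..{upper}")
print("pushing the argument to its limit leaves only 1")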

Back to the pick a number game


Common knowledge: you know that others
know that others know ... and so on ... that
rationality underlies all players' choices
In practice:
Average number was: ...
Winning number was: 2/3 * average

Theory vs. Practice


Why was it that 1 wasn't the winning answer?
We need a strong assumption, namely that all
players are rational and they know that
everybody else is rational as well

To sum up (1)
We've explored a bit the idea of deleting
dominated strategies:
Look at a game
Figure out which strategies are dominated
Delete them
Look at the game again
Look at which strategies are dominated now
... and so on

To sum up (2)
Iterative deletion of dominated strategies
seems a powerful idea, but it's also dangerous
if you take it literally
In some games, iterative deletion converges to
a single choice, in others it may not (we'll see
an example shortly)
HINT: try to identify all dominated strategies
at once before you delete; this may save you
some rounds

Our first model: politics (1)


Imagine there are 2 candidates
These candidates are choosing their political
positions on a spectrum
To make life easy let's assume the spectrum
has 10 positions:
1 = LEFT WING ... 10 = RIGHT WING

Our first model: politics (2)


We assume that there are 10% of the voters at
each of these positions:
Voters are uniformly distributed

We assume voters will eventually vote for the
closest candidate, that is for the candidate whose
position is closest to their own
We break ties by splitting votes equally
We have players (candidates), and actions
(political positions): what are we missing?

Our first model: politics (3)


We assume payoffs follow the idea that the
candidates aim to maximize their share of
the vote
What is going to happen in this game?
Are there any dominated strategies here?

Our first model: politics (4)


Is position 1 dominated? If so, what dominates it?
Let's test, e.g., how does 1 compare with 2:

vs. 1:   u1(1,1) = 50%  <  u1(2,1) = 90%
vs. 2:   u1(1,2) = 10%  <  u1(2,2) = 50%
vs. 3:   u1(1,3) = 15%  <  u1(2,3) = 20%
vs. 4:   u1(1,4) = 20%  <  u1(2,4) = 25%

Do you see a pattern coming up here?

We conclude that 2 strictly dominates 1
We're not saying that 2 wins over 1
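The vote shares above follow from the uniform-voter assumption; here is a minimal Python sketch (my own illustration, not from the slides) that computes u1(i, j) on the 10-position spectrum and verifies that position 2 strictly dominates position 1.

# Illustrative sketch: vote share of candidate 1 at position i when
# candidate 2 stands at position j, with 10% of voters at each of the
# positions 1..10, voters voting for the closest candidate, ties split.
def u1(i, j):
    share = 0.0
    for v in range(1, 11):             # each voter position carries 10%
        if abs(v - i) < abs(v - j):
            share += 10
        elif abs(v - i) == abs(v - j):
            share += 5                 # tie: split the 10% equally
    return share

print(u1(1, 1), u1(2, 1))   # 50.0 90.0
print(u1(1, 3), u1(2, 3))   # 15.0 20.0
print(all(u1(2, j) > u1(1, j) for j in range(1, 11)))  # 2 dominates 1: True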

Our first model: politics (5)


Using a similar argument, we have that:
9 strictly dominates 10
Is there anything else dominated here?
What about 2 being dominated by 3?

vs. 1:   u1(2,1) = 90%  >  u1(3,1) = 85%

Our first model: politics (6)


Even though 2 is not a dominated strategy, if
we do the process of iterative deletion and
delete the dominated strategies (1 and 10):
Would 3 dominate 2?

vs. 2:   u1(2,2) = 50%  <  u1(3,2) = 80%
vs. 3:   u1(2,3) = 20%  <  u1(3,3) = 50%
vs. 4:   u1(2,4) = 25%  <  u1(3,4) = 30%
vs. 5:   u1(2,5) = 30%  <  u1(3,5) = 35%

Our first model: politics (7)


Strategies 2 and 9 are not dominated
They become dominated once we realize that
strategies 1 and 10 won't be chosen
If we continued the exercise, where would we
get?

Our first model: politics (9)


It turns out that 5 and 6 are not dominated
What's the prediction that game theory suggests
here?
Candidates will be squeezed towards the center;
they're going to choose positions very close to
each other
In political science this is called the
Median Voter Theorem

The Median Voter Theorem


The same model has applications in economics as
well (and computer science): product placement
Example: in product placement you're placing a
gas station, and you might think that it would be
nice if gas stations spread themselves evenly out
over the town, or on every road, so that there
would be a station close by when you run out of
gas
As we all know, this doesn't happen: all gas
stations tend to crowd into the same corners, all
the fast food places crowd together as well, you name it

WHY?
This is going to be (one of) your next homework assignments

Critiques (1)
We have been using a model of a real-world
situation, and tried to predict the outcome
using game theory
What is missing? Is there anything wrong with
the model?

Critiques (2)
Voters are not evenly distributed
Many voters do not vote
There may be more than 2 candidates
See these in your homework
There may be higher dimensions to the
problem

Critiques (3)
So if we're missing so many things, our model is
useless, and in general modeling (as an
abstraction effort) is useless!!
No: first, analyze a problem with simplifying
assumptions, then relax them and see what
happens
E.g.: would a different voter distribution change
the result?

Limitations of IDEL (1)

A simple game with two players
(rows: player 1; columns: player 2):

           l         r
    U    5, 1      0, 2
    M    1, 3      4, 1
    D    4, 2      2, 3

What are the dominated strategies?
Imagine you're player 1: what would you do?

Limitations of IDEL (2)


Would you choose U?
What if you knew in advance that player 2 was
going to choose l?
U would be the best response to l
E.g.: your boss asks why the heck you chose U:
"Given your beliefs, that was the best thing to
do!!"

Limitations of IDEL (3)


Similarly, if you knew player 2 would choose r,
your best response would be to play M, right?
What if you are not sure what your opponent
is going to play?

Limitations of IDEL (4)

(rows: player 1; columns: player 2)

           l         r
    U    5, 1      0, 2
    M    1, 3      4, 1
    D    4, 2      2, 3

Expected payoff of U vs. 50% l, 50% r:
0.5 * 5 + 0.5 * 0 = 2.5
Expected payoff of M vs. 50% l, 50% r:
0.5 * 1 + 0.5 * 4 = 2.5
Expected payoff of D vs. 50% l, 50% r:
0.5 * 4 + 0.5 * 2 = 3

Limitations of IDEL (5)


Expected payoff of D vs. 50% l, 50% r:
0.5 * 4 + 0.5 * 2 = 3
It turns out that D is the best response when
there's an equal probability that your
opponent will play l or r.

Best Response
Obviously, the 50% l, 50% r is just a belief
I could believe my opponent would lean to the
left, e.g. with 75% l, 25% r probabilities
Can we use a representation to sum up all
these possibilities and come up with a
prediction?

Expected Payoff

[Figure: player 1's expected payoffs plotted against the belief p(r)]

E[u1(U, p(r))] = [1 - p(r)] * 5 + p(r) * 0
E[u1(M, p(r))] = [1 - p(r)] * 1 + p(r) * 4
E[u1(D, p(r))] = [1 - p(r)] * 4 + p(r) * 2

Reading off the plot: for low p(r) the BR is U, for
intermediate p(r) the BR is D, and for high p(r) the BR is M.
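As a quick check of the picture, here is a minimal Python sketch (my own illustration, not from the slides) that evaluates the three expected-payoff lines over a grid of beliefs p(r) and reports where each strategy is the best response.

# Illustrative sketch: best response of player 1 as a function of the
# belief p = probability that player 2 plays r, in the IDEL example.
def expected_payoffs(p):
    return {
        "U": (1 - p) * 5 + p * 0,
        "M": (1 - p) * 1 + p * 4,
        "D": (1 - p) * 4 + p * 2,
    }

def best_response(p):
    payoffs = expected_payoffs(p)
    return max(payoffs, key=payoffs.get)

for p in [i / 20 for i in range(21)]:       # beliefs 0.00, 0.05, ..., 1.00
    print(f"p(r) = {p:.2f}: BR = {best_response(p)}")
# The printout shows U for small p(r), D in the middle range, and M once
# p(r) is large (the crossings are at p = 1/3 and p = 3/5).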

Applied Game Theory


Lecture 2

Pietro Michiardi

Recap
We introduced the idea of best response (BR):
do the best you can do, given your belief
about what the other players will do
We saw a simple game in which we applied
the BR idea and worked with plots

Soccer: Penalty Kick Game (1)

(rows: kicker; columns: goalie)

           l          r
    L    4, -4      9, -9
    M    6, -6      6, -6
    R    9, -9      4, -4

Payoffs approximate the probabilities of scoring for the
kicker, and the negative of that for the goalie
Assumption: we ignore the "stay put" option for the goalie
Example:
u1(L, l) = 4  -> 40% chance of scoring
u1(L, r) = 9  -> 90% chance of scoring

Penalty Kick Game (2)


What would you do here?
Is there any dominated strategy?
If we stopped at the idea of iterative deletion
of dominated strategies, we would be stuck!
If you were the kicker, where would you shoot?

Expected Payoff

[Figure: the kicker's expected payoffs E[u1(L, p(r))], E[u1(M, p(r))] and
E[u1(R, p(r))] plotted against the belief p(r); the line for M lies below
the upper envelope of L and R for every belief.]

Penalty Kick Game (3)


What's the lesson here?
Assume for a moment these numbers are true.
If the goalie is jumping to the right with a
probability less than 0.5, then you should
shoot to the right.
Lesson: don't shoot to the middle

Lesson 1:
Do not choose a strategy that is
never a BR to any(*) belief

(*) "any" means all probabilities
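To back up the "never shoot to the middle" claim, here is a minimal Python sketch (my own illustration, not from the slides) showing that M is never a best response for any belief p(r), even though neither L nor R dominates it on its own.

# Illustrative sketch: the kicker's middle option M is never a best
# response to any belief p = P(goalie jumps right).
def kicker_payoffs(p):
    return {
        "L": (1 - p) * 4 + p * 9,
        "M": (1 - p) * 6 + p * 6,
        "R": (1 - p) * 9 + p * 4,
    }

def best_response(p):
    payoffs = kicker_payoffs(p)
    return max(payoffs, key=payoffs.get)

middle_ever_best = any(best_response(i / 100) == "M" for i in range(101))
print("M is a best response for some belief:", middle_ever_best)  # False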

Penalty Kick Game (4)


Notice how we could eliminate one strategy
even though nothing was dominated
With deletion of dominated strategies we got
nowhere
With BR, we made some progress
Can we do better? What are we missing here?

Penalty Kick Game (5)


Right-footed players find it easier to shoot to
their left!
The goalie might stay in the middle
The probabilities we used before are artificial;
what about reality?
What about considering also the speed?
And the precision?

[Figure: expected payoffs against p(r) with modified numbers.
See what happens? If you are less precise but strong,
you'd be better off shooting to the middle.]

Definition: Best Response


Player i's strategy ŝi is a BR to the strategy s-i of the other
players if:
ui(ŝi, s-i) >= ui(s'i, s-i) for all s'i in Si
or, equivalently,
ŝi solves max_si ui(si, s-i)

Definition: Best Response (general)


Player i's strategy ŝi is a BR to the belief p about
the others' choices if:
E[ui(ŝi, p)] >= E[ui(s'i, p)] for all s'i in Si
or, equivalently,
ŝi solves max_si E[ui(si, p)]

The Partnership Game (1)


Two individuals (players) are going to
supply an input to a joint project
The two individuals share 50% of the profit
The two individuals supply effort individually
Each player chooses the effort level to put into
the project (e.g. working hours)

The Partnership Game (2)


Let's be more formal, and normalize the effort
in hours a player chooses:
Si = [0, 4]

Note: this is a continuous set of strategies

The Partnership Game (3)


Let's now define the profit of the partnership:
Profit = 4 [s1 + s2 + b s1 s2]
Where:
si = the effort level chosen by player i
b = synergy / complementarity
0 <= b <= 1/4

Why is there the term s1 s2?

The Partnership Game (4)


What's missing? Payoffs!
u1(s1, s2) = (1/2) [4 (s1 + s2 + b s1 s2)] - s1^2
u2(s1, s2) = (1/2) [4 (s1 + s2 + b s1 s2)] - s2^2
That is:
Players share the profit in half
They bear a cost proportional to the square of
their effort level
Note: payoff = benefit - cost

The Partnership Game (5)


Alright, how can we proceed now?
Let's analyze this game with the idea of BR

But how can we draw a graph with a
continuous set of strategies?
Recall the definition of best response

The Partnership Game (6)


Definition: Best Response
Player i's strategy ŝi is a BR to strategy s-i of the other
players if:
ŝi = arg max_si ui(si, s-i)
We are going to use some calculus here:
ŝ1 = arg max_s1 { 2 (s1 + s2 + b s1 s2) - s1^2 }

The Partnership Game (7)


So we differentiate:
F.o.c.: 2 (1 + b s2) - 2 s1 = 0
S.o.c.: -2 < 0  (so it is a maximum)
ŝ1 = 1 + b s2 = BR1(s2)
ŝ2 = 1 + b s1 = BR2(s1)
(due to the symmetry of the game)

The Partnership Game (8)


Alright, we have the expressions that tell me
player i's best response, given what player j is
doing
Now, let's draw the two functions we found
and have a look at what we can say
Let's also fix the only parameter of the game:
b = 1/4

[Figure: the two best-response lines BR1(s2) = 1 + b s2 and
BR2(s1) = 1 + b s1 plotted on the (s1, s2) plane; with b = 1/4 they
cross once, at a point between 5/4 and 6/4 on both axes.]

The Partnership Game (9)


We started with a game
We found what player 1's BR was for every possible
choice of player 2
We did the same for player 2
We eliminated all strategies that were never a BR
We looked at the ones that were left, and
eliminated those that were never a best response

Where are we going with this?

The Partnership Game (10)


s*1 = 1 + b s*2
s*2 = 1 + b s*1
At the intersection s*1 = s*2, so
s*1 = s*2 = 1/(1 - b)
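Here is a minimal Python sketch (my own illustration, not from the slides) that iterates the two best-response maps from an arbitrary starting point and confirms they converge to the fixed point s* = 1/(1 - b), e.g. 4/3 when b = 1/4.

# Illustrative sketch: best-response dynamics in the partnership game.
# BR1(s2) = 1 + b*s2 and BR2(s1) = 1 + b*s1; their fixed point is the NE.
b = 0.25                      # synergy parameter (the slides fix b = 1/4)
s1, s2 = 4.0, 0.0             # arbitrary starting efforts in [0, 4]

for step in range(20):
    s1, s2 = 1 + b * s2, 1 + b * s1   # both players best-respond
print(round(s1, 4), round(s2, 4))      # -> 1.3333 1.3333
print(1 / (1 - b))                     # analytical NE effort: 1.3333...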

The Partnership Game (11)


We came up with a prediction on the effort
levels
Question: is the amount of work we found
previously a good amount of work?
Question: are the players working more or less
than the efficient level?

The Partnership Game (12)


Why is it that in a joint project we tend to get
inefficiently little effort when we work out
the best responses in the game?
NOTE: this is not a PD situation
Why?

The Partnership Game (13)


The problem is not really the amount of work
Also, the problem is not about synergy, i.e. the factor b
The problem is that at the margin, I bear the cost for the
extra unit of effort I contribute, but I'm only reaping half of
the induced profits, because of profit sharing
This is known as an externality
When I'm figuring out the effort I have to put in, I don't take
into account the other half of the profit that goes to my
partner
In other words, my effort benefits my partner, not just me

The Partnership Game (14)


By the way, how would the situation change
by varying the only parameter of the game?
Informally, what we have done so far is to
determine the Nash Equilibrium of the game

Introducing NE
So in the partnership game we've seen what a NE
is
Recall the numbers game: what was the NE
there?
Did you play a NE?
Although NE is a central idea in game theory, be
aware that it is not always going to be played
By repeating the numbers game, however, we've
seen that we were converging to the NE

Definition: Nash Equilibrium


A strategy profile (s1*, s2*, ..., sN*) is a Nash
Equilibrium (NE) if, for each i, her choice si* is a
best response to the other players' choices s-i*
Why is it an important concept?
It's in textbooks
It's used in many applications

Don't jump to the conclusion that now that we know
NE, everything we've done so far is irrelevant

NE: observations
It is not always the case that players play a NE!
E.g.: in the numbers game, we saw that playing NE
is not guaranteed

"Rationality implies NE" is NOT true!!!

What are the motivations for studying NE?

NE: motivations (1)


NO REGRETS
Holding everyone else's strategies fixed, no
individual has a strict incentive to move away
Having played a game, suppose you played a
NE: looking back, the answer to the question
"Do I regret my actions?" would be
"No, given what the other players did, I did my
best"

NE: motivations (2)


Self-fulfilling beliefs
If I believe everyone is going to play their parts
of a NE, then everyone will in fact play a NE
Why?

[Figure: the partnership game best-response lines again; the NE is
found graphically as the point where BR1(s2) and BR2(s1) cross.]

Finding NE point(s)
Next we will play some very simple games
involving few players and few strategies
Get familiar with finding NE on normal form
games
We will have a glimpse of algorithmic ways of
finding NE and their complexity

A simple game (1)

(rows: player 1; columns: player 2)

           l         c         r
    U    0, 4      4, 0      5, 3
    M    4, 0      0, 4      5, 3
    D    3, 5      3, 5      6, 6

Is there any dominated strategy for player 1 / player 2?

What is the BR for player 1 if player 2 chooses left?
What is the BR if player 2 chooses center?
What about right?
Can you do it for player 2?

A simple game (2)

           l         c         r
    U    0, 4      4, 0      5, 3
    M    4, 0      0, 4      5, 3
    D    3, 5      3, 5      6, 6

BR1(l) = M    BR2(U) = l
BR1(c) = U    BR2(M) = c
BR1(r) = D    BR2(D) = r
What is the NE?
Why?
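Here is a minimal Python sketch (my own illustration, not from the slides) that finds the pure-strategy Nash equilibria of this game by checking, cell by cell, whether both players are best-responding.

# Illustrative sketch: brute-force search for pure-strategy NE in a
# bimatrix game, by checking the mutual best-response condition.
U = {   # U[(row, col)] = (payoff to player 1, payoff to player 2)
    ("U", "l"): (0, 4), ("U", "c"): (4, 0), ("U", "r"): (5, 3),
    ("M", "l"): (4, 0), ("M", "c"): (0, 4), ("M", "r"): (5, 3),
    ("D", "l"): (3, 5), ("D", "c"): (3, 5), ("D", "r"): (6, 6),
}
S1, S2 = ["U", "M", "D"], ["l", "c", "r"]

def is_nash(r, c):
    best_row = all(U[(r, c)][0] >= U[(other, c)][0] for other in S1)
    best_col = all(U[(r, c)][1] >= U[(r, other)][1] for other in S2)
    return best_row and best_col

print([(r, c) for r in S1 for c in S2 if is_nash(r, c)])   # [('D', 'r')]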

A simple game (3)


It looks like each strategy of player 1 is a BR to
something
And the same is true for player 2
Deletion of dominated strategies wouldn't
lead anywhere here
Would it be rational for player 1 to choose M?

Another simple game (1)

(rows: player 1; columns: player 2)

           l          c         r
    U    0, 2       2, 3      4, 3
    M    11, 1      3, 2      0, 0
    D    0, 3       1, 0      8, 0

What is the NE for this game?

What's tricky in this game?
Do BRs have to be unique?

Are players happy about playing the NE?

NE vs. Dominance (1)


We've seen how to find NE on a normal form
game
We've seen how NE relates to the idea of BR:
we have a NE when the BRs coincide

What is the relation between NE and the
notion of dominance?

NE vs. Dominance (2)

(rows: player 1; columns: player 2)

              alpha      beta
    alpha     0, 0       3, -1
    beta     -1, 3       1, 1

What is this game?

Are there any dominated strategies?
What is the NE for this game?

NE vs. Dominance (3)


Claim: no strictly dominated strategy could
ever be played in a NE
Why?
A strictly dominated strategy is never a best
response to anything
What about weakly dominated strategies?

NE vs. Dominance (4)

(rows: player 1; columns: player 2; player 1's two strategies are
unnamed on the slide, call them u and d)

           l         r
    u    1, 1      0, 0
    d    0, 0      0, 0

Are there any dominated strategies?
What is the NE for this game?

NE vs. Dominance (5)


First observation: the game has 2 NE!
Informally we've seen that in a NE:
Everyone plays a BR
No one has any strict incentive to deviate

What's annoying here? What is the prediction
game theory leads us to?
Is that reasonable?

The Investment Game (1)


The players: you
The strategies: each of you chooses between
investing nothing in a class project ($0) or investing
$10
Payoffs:
If you don't invest, your payoff is $0
If you invest, you're going to make a net profit of $5.
This however requires at least 90% of the class to
invest. Otherwise, you lose your $10

As usual, no communication please!!

The Investment Game (2)


What did you do?
Who invested?
Who did not invest?

What is the NE in this game?

The Investment Game (3)


There are 2 NE in this game:
All invest
None invest

Let's check:
If everyone invests, no one would have regrets, and
indeed the BR would be to invest
If nobody invests, then the BR would be to not
invest
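Here is a minimal Python sketch (my own illustration, not from the slides) that computes a player's best response as a function of the fraction of other students expected to invest, making the two equilibria visible.

# Illustrative sketch: best response in the investment game as a function
# of the fraction of classmates expected to invest.
def payoff_invest(fraction_investing):
    # Net profit of $5 if at least 90% invest, otherwise the $10 is lost.
    return 5 if fraction_investing >= 0.9 else -10

def best_response(fraction_investing):
    return "invest" if payoff_invest(fraction_investing) > 0 else "don't invest"

for f in [0.0, 0.5, 0.85, 0.95, 1.0]:
    print(f"{int(f * 100)}% investing -> BR: {best_response(f)}")
# "All invest" and "none invest" are both self-confirming: the best
# response matches what everybody else is already doing.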

The Investment Game (4)


How did we find the NE?
We could have checked rigorously what
everyone's best response would be in each case
We can just guess and check!

Actually, checking is easy, guessing is hard

What does this remind you of? Can you tell anything
about the complexity of finding a NE?

Note: checking is easy when you have many
players but few strategies

The Investment Game (5)

Let's recap: what did you do in this game?

Players: you
Strategies: invest $0 or invest $10
Payoffs:
If you don't invest -> $0
If you invest -> $5 net profit if at least 90% invest,
                 -$10 if fewer than 90% invest

The Investment Game (6)


I want you to play the game again, no
communication please!!
What did you do?
Who invested?
Who did not invest?

I want you to play again...
Where are we heading?

The Investment Game (7)


We are heading towards an equilibrium
There are certain cases in which play
converges in a natural sense to an equilibrium
But we're converging towards only one of the two
equilibria!
Is either of these two NE better than the other?

The Investment Game (8)


Clearly, everyone investing is the better NE
Nevertheless we were converging very rapidly
to a bad equilibrium, where no one gets
anything, in which all the money is left on the
table!
How can that be?

The Investment Game (9)


Formally, we say that one NE
pareto dominates the other
Why did we end up going to a bad
equilibrium?

The Investment Game (10)


Remember when we started playing?
We were at more or less 50% investing

The starting point was already bad enough for the
people who invested to lose
confidence
Then we just tumbled down
What would have happened if we had started with
95% of the class investing?

The Investment Game (11)


Note also the process of converging towards
the bad equilibrium
It coincides with the idea of a self-fulfilling
prediction:
Provided you think other people are not
going to invest, you are not going to invest

The Investment Game (12)


Does this game belong to the Prisoner's
Dilemma family?
Was there any strictly dominated strategy?
This is a coordination game

The Investment Game (13)


Why is this a coordination game?
We'd like everyone to coordinate their actions
and invest
There are a lot of coordination problems in
real life

Coordination Games (1)


Ok, let's discuss some examples of
coordination games:
Party in a villa
On-line web sites
Establishment of technological monopolies
(Microsoft, HDTV)
Bank runs

A (trusted) third party could drive the crowd
to a better equilibrium!

Coordination Games (2)


Let's try to compare this to the Prisoner's
Dilemma
In that case, even the presence of a TTP would
not help, because the strategy beta would
still be dominated and people would choose alpha
no matter what!
So why does a TTP work in coordination games?

Coordination Games (3)


In coordination games communication helps!
Indeed, a TTP is not going to impose on players
a strictly dominated strategy; it is just
leading the crowd towards a better NE point
In the PD game, you need to change the
payoffs of the game to move people's actions

Let's recap a little

Recap (1)
We've seen the investment game, which is
one instance of a coordination game
Lesson 1: communication can help
Coordination games are very different from
Prisoner's Dilemma games
Do you remember why?

Recap (2)
In the Prisoner's Dilemma game communication
cannot help:
one strategy is dominated - nobody can oblige you to
play a dominated strategy

Instead, in the Investment Game communication helps:
a third party can convince you to play, among the two Nash
Equilibria, the one that is pareto dominant

In coordination games there is scope for leadership

Recap (3)

(rows: player 1; columns: player 2)

           U         D
    U    1, 1      0, 0
    D    0, 0      1, 1

Clearly in this game what matters is coordination

If you played this game, it is quite likely you
would end up being uncoordinated
A little bit of leadership would make sure you
coordinate

Recap (4)
We introduced the notion of
strategic complements
Investment game: the more people invest, the
more likely you are to invest
Partnership game: the more the other person
does, the more likely I am to do more

[Figure: the partnership game BR lines once more, as a reminder that
the more effort player 1 makes, the more effort player 2 is going
to make.]

Another coordination game

The "Going to the Movies" game
(rows: player 1; columns: player 2)
BU = Bourne Ultimatum, GS = The Good Shepherd, SW = Snow White

           BU         GS         SW
    BU    2, 1       0, 0       0, -1
    GS    0, 0       1, 2       0, -1
    SW   -1, 0      -1, 0      -2, -2

A pair is meeting at the movies and has to
decide which movie to watch
How would you play this game?

Going to the movies (1)

           BU         GS         SW
    BU    2, 1       0, 0       0, -1
    GS    0, 0       1, 2       0, -1
    SW   -1, 0      -1, 0      -2, -2

Are there any dominated strategies?
If so, how is the game transformed?

Going to the movies (2)

(rows: player 1; columns: player 2)

           BU        GS
    BU    2, 1      0, 0
    GS    0, 0      1, 2

How do we play this game?

Let's try it out: form a pair, write down what
you would do, without showing!!

Going to the movies (3)

           BU        GS
    BU    2, 1      0, 0
    GS    0, 0      1, 2

Which kind of game is this?

Does communication help here?
Let's find the Nash Equilibria of this game

Going to the movies (4)

           BU        GS
    BU    2, 1      0, 0
    GS    0, 0      1, 2

(compare with the pure coordination game from Recap (3))

NE: (BU, BU) and (GS, GS)

So it looks like a standard coordination game,
with two NE
What is the trick here?

Coordination Games
Pure coordination games: there is no conflict about whether
one NE is better than the other
E.g.: in the investment game, we all agreed that the NE
with everyone investing was the better NE

General coordination games: there is a source of
conflict, as players would agree to coordinate, but one
NE is better for one player and not for the other
E.g.: the Battle of the Sexes

Communication might fail in this case

Enough with coordination games

Let's talk a little economics!

Cournot Duopoly (1)


For those who have seen this, don't worry:
we'll look at it through the eyes of Game
Theory
For those who don't know what I'm talking
about, don't worry, we'll be reviewing the
basic concepts

Cournot Duopoly (2)


Why do we study it? [Game theory answer]
So far we've seen two types of games:
Those with few players and few (discrete)
strategies
Those with a lot of players (e.g. the number game)
and few strategies

CD is a game with few players but a continuum
of strategies

Cournot Duopoly (3)


Why do we study it? [Economics answer]
This game lies between two extreme cases in
economics, in situations where firms (e.g. two
companies) are competing in the same
market:
Perfect competition
Monopoly

We're interested in understanding what happens
in the middle

Cournot Duopoly (4)


Given a Cournot Duopoly model of a market,
we want to understand what will happen in
the market
We want to understand, from the welfare
point of view, whether what happens is good or bad
for producers / consumers

Cournot Duopoly: the game (1)


The players: 2 firms, e.g. Coke and Pepsi
Strategies: the quantities players produce of
identical products: qi, q-i
Products are perfect substitutes

Cournot Duopoly: the game (2)


Cost of production: c * q
A simple model of constant marginal cost
What is marginal cost?

Prices: p = a - b (q1 + q2)

Price in the Cournot Duopoly Game

[Figure: the demand curve, a straight line with intercept a and
slope -b, plotted against total quantity q1 + q2; it tells us the
quantity demanded at a given price.]

Cournot Duopoly: the game (3)


The payoffs: firms aim to maximize profit
u1(q1, q2) = p * q1 - c * q1
Profits = Revenues - Costs
Game vs. maximization problem

Cournot Duopoly: the game (4)


u1(q1, q2) = p * q1 - c * q1
p = a - b (q1 + q2)

u1(q1, q2) = a * q1 - b * q1^2 - b * q1 * q2 - c * q1

Cournot Duopoly: the game (5)


Now, we've defined the players, the strategies
and the payoffs
We want to find the NE of this game
How do we do this?

Cournot Duopoly: the game (6)


First order condition:
du1/dq1 = a - 2 b q1 - b q2 - c = 0
Second order condition [make sure it's a max]:
d^2 u1/dq1^2 = -2b < 0
Solving the f.o.c. gives Firm 1's best response:
BR1(q2) = (a - c - b q2) / (2b)

Cournot Duopoly: the game (7)


We could just find the NE now, right?
How would you go about doing this?
Instead, let's see things graphically

[Figure: the (q1, q2) plane on which we will draw the best-response
curves; let's find the key points with some math.]

What if Firm 2 didn't produce at all, i.e. q2 = 0?
What is the best response for Firm 1?

BR for Firm 1 when q2 = 0


What would be the BR for Firm 1 if Firm 2
didn't produce at all?
BR1(0) = (a - c) / (2b)

Let's put this quantity on the plot: it is the intercept of BR1
on the q1 axis.

Economics 101 interpretation


What is this quantity we just found called?
It is called the monopoly quantity
When Firm 2 does not produce, then Firm 1 is
a monopolist in the market
Can anyone tell me how to find this with the
price curve we saw when defining the game?

MONOPOLY:
when marginal revenue = marginal cost

[Figure: demand curve (slope -b), marginal revenue curve (slope -2b)
and the constant marginal cost line; the monopoly quantity is where
marginal revenue crosses marginal cost.]

Let's now ask the opposite question with some math:
how much would Firm 2 have to produce for Firm 1
not to produce at all?
That is, the q2 such that q1 = 0 is the best response for Firm 1.

When is the BR for Firm 1 q1 = 0?


We simply take the BR expression and set it to
zero:
BR1(q2) = 0  when  q2 = (a - c) / b

Let's put this quantity on the plot: it is the intercept of BR1
on the q2 axis.

Economics 101 interpretation


What is this quantity we just found called?
It is called the perfect competition quantity
When Firm 2 produces this quantity, the best
response for Firm 1 is not to produce
Why?
Can anyone tell me how to find this with the price
curve we saw when defining the game?

PERFECT COMPETITION:
when demand = marginal cost

[Figure: demand curve (slope -b) and the marginal cost line; the
perfect competition quantity is where demand crosses marginal cost.
If Firm 1 were to produce more, the selling price would not cover
her costs.]

The game is symmetric

[Figure: both best-response curves BR1 and BR2 on the (q1, q2) plane;
each runs from the monopoly quantity on its own axis to the perfect
competition quantity on the other axis.]

What is the NE of the Cournot Duopoly?
Graphically we've seen it; formally, solving the two
best-response equations together (and using symmetry,
q1* = q2*) gives:
q1* = q2* = (a - c) / (3b)

We have found the COURNOT QUANTITY

[Figure: the NE is the crossing point of BR1 and BR2, between the
monopoly and perfect competition quantities.]
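Here is a minimal Python sketch (my own illustration, not from the slides) that computes the Cournot best responses for made-up parameter values and checks that the equilibrium quantity (a - c)/(3b) sits between the monopoly and perfect-competition benchmarks.

# Illustrative sketch: Cournot duopoly with inverse demand p = a - b(q1+q2)
# and constant marginal cost c. Parameter values are made up for the example.
a, b, c = 12.0, 1.0, 3.0

def br(q_other):
    """Best response from the f.o.c.: q = (a - c - b*q_other) / (2b)."""
    return max(0.0, (a - c - b * q_other) / (2 * b))

q1 = q2 = 0.0
for _ in range(50):                 # best-response dynamics converge here
    q1, q2 = br(q2), br(q1)

cournot  = (a - c) / (3 * b)        # 3.0 per firm
monopoly = (a - c) / (2 * b)        # 4.5: total quantity under monopoly
perfect  = (a - c) / b              # 9.0: total quantity under perfect competition
print(round(q1, 3), round(q2, 3), cournot)       # each firm produces the Cournot quantity
print(monopoly, "<", 2 * cournot, "<", perfect)  # total output ranking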

Cournot Duopoly: observations (1)


This game is different from the games we've
seen so far:
Partnership game
Investment game

In those games, the more the other player
would do, the more I would do:
strategic complements

Cournot Duopoly: observations (2)


In this game, the more the other player does,
the less I would do
This is a game of strategic substitutes
Note: of course the goods were substitutes too;
here we're talking about strategies

Cournot Duopoly:
what about the market?
Let's now take the perspective of the market
and not of a single player
What about the total industry profits?
Are they maximized?
Where on the plot we drew before are industry
profits maximized?

[Figure: on the (q1, q2) plane, industry profits are maximized where
one firm produces the monopoly quantity and the other produces
nothing -- and, more generally, anywhere along the line on which the
two firms together produce the monopoly quantity, e.g. where both
firms produce half of the monopoly quantity.]

Cartels, agreements (1)


How could Firm 1 and Firm 2 set up an
agreement so as to profit more from the
market?
E.g.: they could both decide to produce half of
the monopoly quantity, and they would earn
more
Can you see this on the previous plots?

Cartels, agreements (2)


What is wrong with this agreement?
What is the BR for a player? Can you see on the
graph where such an agreement would end up?
Is there anything else wrong with this reasoning?
What happened to the production quantities?
The market is not fully exploited
So?

Cournot Duopoly: last observations


How do the quantities and prices we've
encountered so far compare?

QUANTITIES:  monopoly  <  Cournot quantity  <  perfect competition
PRICES:      perfect competition  <  Cournot price  <  monopoly

Applied Game Theory


Lecture 3
Pietro Michiardi

Recap (1)
We started to study imperfect competition
We used the Cournot model:
what happens between monopoly and perfect
competition?

We used three approaches to answer:
Theoretic approach + calculus
Theoretic approach + graphical representation
Economics insights

Recap (2)
This is the general approach you use when
modeling a problem with Game Theory:
You formally solve the problem, with handy
mathematical tools
You graphically solve the problem to gain insight
and intuition into the problem
You translate your findings into the real world
and figure out if they make sense

Recap (3)
In the Cournot equilibrium we have:
The equilibrium sits in between monopoly and perfect
competition
Quantity produced is less than under perfect competition
but more than under monopoly
Industry profit is less than under monopoly but larger than
under perfect competition

In our model we set quantities and we let prices
take care of themselves and settle

Bertrand Competition
What if we take a somewhat more realistic
view and let firms decide on prices instead of
quantities?
Companies compete on prices and let
quantities settle as a consequence of
imperfect competition
Let's define the game

Bertrand Model (1)


Players: 2 companies, e.g. Coke and Pepsi
Producing perfect substitutes
Constant marginal cost c

Strategies: companies set prices p1, p2
Let strategy si be: 0 <= pi <= 1

Firms maximize their profits

Bertrand Model (2)


Where do quantities come from?
The market buys at the lowest of the two prices, p = min(p1, p2),
and all of the demand goes to the firm with the lower price
(it is split equally if p1 = p2).
The demand for company 1 is therefore the whole market demand
if p1 < p2, half of it if p1 = p2, and zero if p1 > p2.

Bertrand Model (3)


What are the payoffs?
Firm 1's profit is u1(p1, p2) = p1 * q1 - c * q1,
i.e. revenues minus costs, with q1 the demand served by Firm 1.

Bertrand Model (4)


Observations:
This is the same basic model we used in Cournot
Before: firms set quantities, the market determines
prices
Now: firms set prices, the market determines
quantities

How do we find a NE of this game?

NOTE: calculus is not going to help here
Why?

Bertrand Model (5)


Best response for Firm 1
Assume p2 < c
What is the BR in this case?
Get out of the market (price above p2 and sell nothing)!
Why?
How?

Bertrand Model (6)


Best response for Firm 1
Assume p2 > c
What is the BR in this case?
Undercut Firm 2!
Why?
How?
What's the corollary condition?

Bertrand Model (7)


Best response for Firm 1
Assume p2 > pMON (the monopoly price)
What is the BR in this case?
Be a monopolist!
Why?
How?

Bertrand Model (8)


Best response for Firm 1
Assume p2 = c
What is the BR in this case?
Price at least as high as Firm 2
Why?
How?

Bertrand Model (9)


Summary for BR1(p2):
p2 < c          -> any p1 > p2 (stay out of the market)
c < p2 <= pMON  -> undercut: p1 just below p2
p2 > pMON       -> p1 = pMON
p2 = c          -> any p1 >= c

Bertrand Model (10)


So now we know the BR for both firms
(symmetric game); what is the NE of the
game?
The NE is for both companies to set their
prices exactly equal to the marginal cost!
Let's check this is a NE

Bertrand Model (11)

Suppose we have (p1, p2) with both prices above marginal cost,
say p1 > p2 > c:

What would Firm 1 do? (undercut Firm 2)
Now what would Firm 2 do? (undercut back)
... you can imagine where we are heading

Bertrand Model (12)


Hence, the NE = (c, c)
The profit for both firms is zero
The outcome is perfect competition
Same setting as Cournot, we only changed the
strategy set, and we got a completely different
outcome!
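Here is a minimal Python sketch (my own illustration, not from the slides) of the undercutting race: each firm best-responds by pricing just below its rival whenever that is profitable, and prices quickly fall to marginal cost.

# Illustrative sketch: price undercutting in the Bertrand model.
# Each firm prices just below its rival (by one "cent") as long as the
# resulting price stays above marginal cost; otherwise it prices at cost.
c = 0.30            # marginal cost (made-up value)
step = 0.01         # smallest price decrement
p1, p2 = 0.90, 0.80 # starting prices, both above cost

def undercut(p_rival):
    return max(c, round(p_rival - step, 2))

for t in range(100):
    p1 = undercut(p2)
    p2 = undercut(p1)
print(p1, p2)       # -> 0.3 0.3: both firms end up pricing at marginal cost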

Remarks on the Bertrand Model


We hardly believe this to be representative of
reality
We need to relax some assumptions here
We would like to get back to imperfect
competition
How can we do that?

Linear City Model (1)


NOTE: do this at home, it's a good exercise for the
exam
The assumption we change in the Bertrand
Model is that products are not identical anymore
And this is somewhat more realistic:
despite being priced equally, you can tell one
beer from another

Linear City Model (2)


Players: 2 firms, e.g. Coke and Pepsi
Constant marginal costs

Strategies: companies set prices p1, p2
Let strategy si be: 0 <= pi <= 1

Firms maximize their profits

Linear City Model (3)


Firm 1 sits at one end of the city (position 0) and Firm 2 at
the other end; a consumer lives at some position y in between.

Each consumer chooses the product whose
total cost to her is smaller
(similar to the demand stated before: the smaller total price wins)

Example: a consumer at position y pays the posted price plus a
transportation cost that grows with her distance from the firm.

Linear City Model (4)


Assumptions:
Uniform distribution of consumers across the city
Consumers buy only one product, from Firm 1 or
Firm 2

Generalization:
The linear city model can be used to think about a
dimension of the product
(think of beer: Bud Light vs. Guinness)

LET'S MOVE FROM ECONOMICS TO POLITICS

The Candidate-Voter Model (1)


This is an extension of the model we saw
during the first lecture, i.e., the Downs and
Hotelling model
Basically we have the same setting, but the
game is a little bit different

The Candidate-Voter Model (2)


The political spectrum runs from left wing to right wing.
We assume an even distribution of voters

Voters vote for the closest candidate
New assumptions:
The number of candidates is not fixed
(it is endogenous)
Candidates cannot choose their positions

The Candidate-Voter Model (3)


Let's now describe the game
Players: voters/candidates
Strategies: run for the election or don't
Voters vote for the closest running candidate
A candidate wins with a plurality (coin flip if tied)

Payoffs:
Prize if you win = B
Cost of running = C, with B = 2C
Disutility (cost) if the winning candidate is at position x and
you are at position y: |x - y|

The Candidate-Voter Model (4)


Example:
If Mr. X enters and wins       -> payoff = B - C
If Mr. X enters but Mr. Y wins -> payoff = -C - |x - y|
If Mr. X stays out             -> payoff = -|x - y|

Let's put in some real numbers:
Suppose we have N = 17 players
B = $2000
C = $1000
Each place is worth 1/17th of $1000: for each
position away from you the winner is, you lose ~ $60

The Candidate-Voter Model (5)

[Figure: candidate Y stands some way to the left of the winning
candidate R.]

Who is going to win the election?
Is this a Nash Equilibrium?
Y runs      -> payoff = -$1000 - $240
Y stays out -> payoff = -$240
A player can profitably deviate by staying out

The Candidate-Voter Model (6)

[Figure: candidate Y stands just to the left of candidate X, who is
closer to the center.]

Who is going to win the election?
Is this a Nash Equilibrium?
Y runs      -> payoff = -$1000 - $60
Y stays out -> payoff = -$60
A player can profitably deviate by staying out

The Candidate-Voter Model (7)

[Figure: a single candidate X standing alone, left of center.]

Who is going to win the election?
Is this a Nash Equilibrium?
What if another candidate enters? Deviation: run

The Candidate-Voter Model (8)


It looks like we understand the mechanics
Question:
Is there a NE with zero candidates running?
Question: given an odd number of players,
is there a NE with only 1 candidate running?

The Candidate-Voter Model (9)

[Figure: a single candidate standing exactly at the center.]

Is this an equilibrium?
What happens if someone else runs?
Who is the winner in case the orange (off-center) player runs?

The Candidate-Voter Model (10)

[Figure: two candidates standing symmetrically about the center.]

Is there a NE with 2 candidates?
What about the example above?
We need to check all possible deviations

The Candidate-Voter Model (11)


Deviation 1: Z enters the scene -> not only does Z lose, but
Z's entry pushes the winner further away from Z
Deviation 2: Z enters the scene -> Z simply loses
Deviation 3: X drops out ->
If X runs, the expected payoff is 50% (B - C), 50% (-C - |x - y|),
which with B = 2C equals -|x - y| / 2
If X drops out, the payoff is -|x - y| for sure!
So dropping out is not profitable either.
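Here is a minimal Python sketch (my own illustration, not from the slides) that plugs in the class numbers (B = $2000, C = $1000, about $60 per position of distance) and compares running versus dropping out for one of two symmetric candidates.

# Illustrative sketch: deviation check for one of two symmetric candidates
# in the candidate-voter model, using the numbers from the slides.
B = 2000               # prize for winning
C = 1000               # cost of running
per_step = 1000 / 17   # ~$60 of disutility per position of distance

def run_payoff(distance_to_rival):
    """Expected payoff of running against one symmetric rival (coin flip)."""
    return 0.5 * (B - C) + 0.5 * (-C - per_step * distance_to_rival)

def drop_payoff(distance_to_rival):
    """Payoff of dropping out: the rival wins for sure."""
    return -per_step * distance_to_rival

for d in [2, 4, 6]:
    print(d, round(run_payoff(d), 1), round(drop_payoff(d), 1))
# Running always beats dropping out here: -0.5*|x-y| vs -|x-y|.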

The Candidate-Voter Model (12)


Summarizing:
NE with 0 candidates -> NO
NE with 1 candidate  -> YES, if odd # of voters and a
centrist candidate
NE with 2 candidates -> YES, if equally distant
from the center!

Can we elaborate more on this last point?

The Candidate-Voter Model (13)


Let's recap first:
There are many NE, not all at the center
Entry can lead to a more distant candidate
winning

The Candidate-Voter Model (14)


If the runners are too far apart, there's an
incentive for a third party to run and win

Question: how far apart can two equilibrium
candidates be?

The Candidate-Voter Model (15)


If the runners are exactly at 1/6 and 5/6, a middle runner could
win with probability 1/3
If they move slightly towards the center, the middle
candidate is squeezed out
Although there is not a full thrust to aggregate exactly at
the center (cf. the Downsian model), there is still a force
pushing candidates towards the center

Game theory lesson


Basically we've seen that the "guess and
check" technique is very effective
HINTS:
Be systematic when guessing
Be careful when checking: do not ignore hidden
deviations of players

LET'S NOW MOVE FROM POLITICS TO SOCIOLOGY

The Location Model (1)


Assume we have 2N players in this game
Players have two types: tall and short
There are N tall players and N short players

Players are people who need to decide which
town to live in
There are two towns: East town and West town
Each town can host no more than N players

The Location Model (2)


Players: 2N people
Strategies: East or West town
Let's put in some numbers:
We have 140 short people and 140 tall people
Each town can host 140 people

As usual, we're missing something: payoffs

The Location Model (3)

[Figure: utility of player i as a function of the number of players of
her own type in her town. It rises from 0 to a peak of 1 at 70 (a
perfectly mixed town) and then falls back to 1/2 at 140 (a town made
up entirely of her own type).]

The Location Model (4)


The idea is:
If you are in the minority in your town you get a payoff
close to zero
If you are in the majority in your town you get a payoff of
1/2
If you are well integrated you get a payoff of 1

People would like to live in mixed towns, but if
they cannot, then they prefer to live in the town
where they are the majority

The Location Model (5)


Let's add a few more rules to define the game
We assume a simultaneous move game
Unrealistic, but it will do for now

We assume that if the number of people choosing
a particular town is larger than the town's capacity,
the surplus will be redistributed randomly

[Figures on the next few slides: a grid of tall and short players
distributed over West Town and East Town, starting from an assumed
initial assignment; we simulate the players' choices by repeating
the game.]

First iteration
For tall players:
there's a minority of tall players in East town to begin with
For short players:
there's a minority of short players in West town to begin with
-> short players will want to switch towns (W -> E)
-> tall players will want to switch towns (E -> W)

Second iteration
We keep the same trend:
short players will want to switch towns (W -> E)
tall players will want to switch towns (E -> W)

There are a few exceptions who still
didn't understand the game.
What is their payoff?

Third iteration
What happened?
People got segregated
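Here is a minimal Python sketch (my own illustration, not from the slides) of this best-response dynamic: starting from a slightly unbalanced split, players of the locally-minority type keep switching towns and the process tips into full segregation.

# Illustrative sketch: best-response dynamics in the two-town location
# game. State = number of tall players living in West town; since each
# town holds exactly N people, short players in West = N - tall_west.
N = 140
tall_west = 75          # start slightly unbalanced: 75 tall / 65 short in West

step = 0
while 0 < tall_west < N:
    short_west = N - tall_west
    if tall_west == short_west:
        break           # the knife-edge integrated equilibrium (70/70)
    # the locally-minority type in each town wants to switch; net effect:
    # the West's majority type grows by one each round
    tall_west += 1 if tall_west > short_west else -1
    step += 1
print(f"after {step} steps: tall in West = {tall_west}, short in West = {N - tall_west}")
# -> after 65 steps: tall in West = 140, short in West = 0 (full segregation)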

The Location Model (6)


The players ended up being segregated
However, the payoff curve didn't say so:
people would have preferred to be in an
integrated town

What happened? Why did we end up like this?

The Location Model (7)


People who started in a minority (even though
not a bad minority) had incentives to
deviate
What are the NE of this game?

The Location Model (8)


It's clear there are two NE in which people are
segregated:
Short in East, Tall in West
Short in West, Tall in East

How can we verify these are indeed NE?
Look for any profitable deviations!

Are there any other equilibria?

The Location Model (9)


We also have an integrated equilibrium:
an EXACTLY 50% split
(50% short + 50% tall in East; 50% short + 50% tall in West)

There is however something worrying about
this equilibrium

The Location Model (10)


Informally, a deviation from the integrated
equilibrium makes us relatively unhappy
Notion of stability:
If we move away from the 50% ratio, even a little
bit, we're going to end up in one of the segregated
equilibria
Similar to what you learned in Physics 101!

The Location Model (11)


Conversely, the segregated equilibria are
stable:
Assume a segregated society and introduce a little
perturbation
It is clear that segregation will happen again pretty
quickly

Let's sum things up

The Location Model (12)


2 segregated NE, which are strict and stable
1 integrated NE, which is weak and unstable
Which of these is the equilibrium preferred by
the players?

Tipping point: a notion introduced by Nobel Prize
winner Schelling
Similar observation for the investment game we
played last time

The Location Model (13)

Claim: there is another equilibrium in this game

The Location Model (14)


All players (tall and short) select exactly the same
town (East or West)
What would happen in this case?
They would be redistributed randomly

By the law of large numbers this would result almost exactly
in an integrated outcome, which is preferred by the
players!

Having society randomize for you ends up being
better than any active choice

Warning
Note how a tiny modeling detail ended up
being very important
We added randomization just to make things add up
correctly
It turned out to yield an equilibrium in the
game

The Location Model (15)


An external device can be used to avoid
active choice and achieve randomization,
which turns out to yield a preferred
equilibrium
In principle, there is another way to achieve
randomization

The Location Model (16)


Players could just flip a fair coin (every player
should do this separately)
Asymptotically, this would end up producing an
(almost) exact 50% integration in the two towns
Randomizing over your strategies can yield an
equilibrium

RANDOMIZATION AND MIXED STRATEGIES

Mixed strategies (1)


So far, we have been discussing how to
achieve NE by players selecting their pure
strategies
In principle, players can also randomize over
their pure strategies
Let's see an example before being more
formal

Rock, Scissors, Paper Game (1)

(rows: player 1; columns: player 2)

           R          S          P
    R    0, 0       1, -1      -1, 1
    S   -1, 1       0, 0        1, -1
    P    1, -1     -1, 1        0, 0

Is there any dominated strategy?
What is the NE of this game?
Notice the cycle?

Pure strategies = {R, S, P}

Rock, Scissors, Paper Game (2)

           R          S          P
    R    0, 0       1, -1      -1, 1
    S   -1, 1       0, 0        1, -1
    P    1, -1     -1, 1        0, 0

Claim: there is a NE in which each player chooses each
of her pure strategies with probability 1/3
How can we verify this is a NE?

Rock, Scissors, Paper Game (3)

[Worked on the board: against an opponent mixing (1/3, 1/3, 1/3),
each of R, S and P yields the same expected payoff, namely 0.]

Rock, Scissors, Paper Game (4)


In the RSP game, playing each strategy with
probability 1/3 against a player doing the
same is a Nash Equilibrium
We'll see in a moment that this is called a
Mixed Strategy NE
Are you convinced it is indeed a BR?
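Here is a minimal Python sketch (my own illustration, not from the slides) verifying the claim: against the uniform mix, every pure strategy earns the same expected payoff, so no deviation is profitable.

# Illustrative sketch: checking the (1/3, 1/3, 1/3) mixed NE claim in
# Rock, Scissors, Paper by computing expected payoffs of each pure
# strategy against an opponent who mixes uniformly.
U1 = {  # payoff to player 1 for (own move, opponent move)
    ("R", "R"): 0,  ("R", "S"): 1,  ("R", "P"): -1,
    ("S", "R"): -1, ("S", "S"): 0,  ("S", "P"): 1,
    ("P", "R"): 1,  ("P", "S"): -1, ("P", "P"): 0,
}
opponent_mix = {"R": 1 / 3, "S": 1 / 3, "P": 1 / 3}

for move in ["R", "S", "P"]:
    expected = sum(q * U1[(move, other)] for other, q in opponent_mix.items())
    print(move, expected)   # all three print 0.0: every pure strategy is a BR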

Definition: Mixed strategies


A mixed strategy pi is a
randomization over player i's pure
strategies
pi(si) is the probability that pi assigns to pure
strategy si
pi(si) could be zero: in RSP, e.g. (1/2, 1/2, 0)
pi(si) could be one: in RSP, e.g. (1, 0, 0), i.e. the pure strategy R

Mixed Strategies
The pure strategies are embedded in our
mixed strategies
Question: what are the payoffs from playing
mixed strategies?
In particular, what is the expected payoff?

Definition: Expected Payoffs


The expected payoff of the mixed
strategy pi is the weighted average of the
expected payoffs of each of the pure
strategies in the mix
Basically, every player is mixing, hence you have
to take the joint probabilities for a strategy profile
to occur

Example: The Battle of the Sexes

(rows: player 1, mixing with p; columns: player 2, mixing with q)

            a           b
    A     2, 1        0, 0        p(A) = 1/5
    B     0, 0        1, 2        p(B) = 4/5

Suppose the following mixed strategies:
Player 1: p = (1/5, 4/5)
Player 2: q = (1/2, 1/2)

What is Player 1's expected payoff from using p?

Example (2)

Against q, Player 1's pure strategies earn:
E[u1(A, q)] = 1/2 * 2 + 1/2 * 0 = 1
E[u1(B, q)] = 1/2 * 0 + 1/2 * 1 = 1/2

Example (3)

The expected payoffs for both players are
computed as the weighted average of the pure
strategies' expected payoffs against the other
player's mix:
E[u1(p, q)] = 1/5 * 1 + 4/5 * 1/2 = 3/5

Example (4)
Let's focus on player 1's expected payoff of 3/5
Obviously we have:
E[u1(B, q)] = 1/2  <  E[u1(p, q)] = 3/5  <  E[u1(A, q)] = 1
The weighted average
must lie between the two
pure strategies' expected
payoffs

Observation
The expected payoff from a mixed strategy
must lie between the pure strategies' expected
payoffs
This simple observation turns out to be the
key to computing mixed strategy NE:
If a mixed strategy is a best response, then each of
the pure strategies in the mix must be a best
response
They must all yield the same expected payoff

Some Details (1)


Main lesson: if a mixed strategy is a best
response, then each of the pure strategies
involved in the mix must itself be a best
response. In particular, each must yield the
same expected payoff
Before explaining why this must be true, let's
just try to rewrite this lesson formally

Some Details (2)


If player i's mixed strategy pi is a best response
to the (mixed) strategies of the other players
p-i, then, for each pure strategy si such that
pi(si) > 0, it must be the case that si is itself a
best response to p-i
In particular, E[ui(si, p-i)] must be the same for
all such strategies

Some Details (3)


Sketch of proof:
Suppose it were not true. Then there must be at least one pure
strategy si that is assigned positive probability by my best-response
mix and that yields a lower expected payoff against p-i
If there is more than one, focus on the one that yields the lowest
expected payoff. Suppose I drop that (low-yield) pure strategy from
my mix, assigning the weight I used to give it to one of the other
(higher-yield) strategies in the mix
This must raise my expected payoff
But then the original mixed strategy cannot have been a best
response: it does not do as well as the new mixed strategy
This is a contradiction

Definition: Mixed Strategy Nash Equilibrium


A mixed strategy profile (p1*, ..., pN*) is a
mixed strategy NE if, for each player i,
pi* is a BR to p-i*
This is the same definition of NE we've been
using so far, except that we've been looking at
pure strategies, and now we look at mixed ones

Observation
Our informal lesson in red before implies that,
in a mixed strategy NE, every pure strategy played with
positive probability earns the same expected payoff:
E[ui(si, p*-i)] = E[ui(pi*, p*-i)] whenever pi*(si) > 0

We've been formal so far; let's play a game to
fix these ideas

Tennis Game (1)


We're going to look at a game within the game:
assume two players (Venus and Serena),
where Serena is at the net

[Figure: the court seen from Venus's viewpoint; Venus can shoot to
her LEFT (L) or RIGHT (R), Serena can jump to her left (l) or right (r).]

Tennis Game (2)

(rows: Venus, mixing L with probability p and R with 1-p;
 columns: Serena, mixing l with probability q and r with 1-q)

            l            r
    L    50, 50       80, 20
    R    90, 10       20, 80

Have a look at the payoffs:
e.g., if Venus chooses L and Serena guesses wrong
and jumps to r, Venus wins the point 80% of the
time

Is there any dominated strategy?
Is there a pure strategy NE?

Tennis Game (3)


Let's find the mixed strategy NE
Lesson 1: each player's randomization is the best
response to the other player's randomization
Lesson 2: if players are playing a mixed strategy
as part of a NE, then each of the pure strategies
involved in the mix must itself be a best response

Tennis Game (4)


What I would like to do is to find a mixture for
Serena and one for Venus that are in
equilibrium
TRICK:
To find Serena's mix (q) I'm going to put myself in
Venus's shoes and look at her payoffs
And vice-versa for Venus's mix (p)

Tennis Game (5)


Venus's expected payoffs:
E[u_V(L)] = 50 q + 80 (1 - q)
E[u_V(R)] = 90 q + 20 (1 - q)

If Venus is mixing in this NE then the payoffs to
L and to R must be equal; they
must both be best responses
Otherwise Venus would not be mixing

Tennis Game (6)


Venus's expected payoffs must be equal:
50 q + 80 (1 - q) = 90 q + 20 (1 - q)  =>  q* = 0.6

I was able to derive Serena's mixing probability:
this is the solution to the equation in one unknown
that equates Venus's payoffs in the mix

Tennis Game (7)


Serena's expected payoffs:
E[u_S(l)] = 50 p + 10 (1 - p)
E[u_S(r)] = 20 p + 80 (1 - p)
Setting them equal gives p* = 0.7

Similarly to before, we computed Venus's mixing
probability

Tennis Game (8)


We found the mixed strategy NE:
Venus: (0.7, 0.3) over (L, R); Serena: (0.6, 0.4) over (l, r)
What would happen if Serena jumped to the left
more often than 0.6?
Venus would be better off playing the pure strategy R
What if she jumped less often than 0.6?
Venus would be shooting to the L all the time!

Tennis Game (7)


                        Serena
                     l            r
Venus     L       30,70        80,20
          R       90,10        20,80

(Venus mixes L/R with probabilities p, 1-p; Serena mixes l/r with q, 1-q)

Suppose a new coach teaches Serena a better


backhand, and the payoffs change
accordingly
There is still no pure strategy NE
What would happen in this game?

Tennis Game (8)


Let's first let our intuition work
Basically Serena is now better on her backhand side, and
when Venus shoots there, Serena scores more
often than before
Direct effect: Serena should increase her q
But Venus knows Serena is better on that side,
hence she will shoot there less often
Indirect effect: Serena should decrease her q

Tennis Game (9)


Let's compute q again:

We see that in the end Serena's q went down from 0.6


to 0.5!!
The indirect, or strategic, effect was predominant
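
(Reconstructing the computation with the new payoffs:)
E[u_Venus(L)] = 30q + 80(1-q)
E[u_Venus(R)] = 90q + 20(1-q)
Setting them equal: 80 - 50q = 20 + 70q, hence q* = 0.5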

Tennis Game (10)


Serenas expected payos:

Similarly to before, the strategic eect was


predominant
Venus will be shooBng to the leq with less probability

Tennis Game: summary


We just performed a comparative statics
exercise
We looked at a game and found an equilibrium, then
we perturbed the original game, found another
equilibrium and compared the two NE

Suppose Serena's q had not changed


Venus would never have shot to the left
But this couldn't be a mixed strategy NE
There was a force pulling things back to equilibrium,
and that was the force that pulled down Serena's q

Applied Game Theory


Lecture 4
Pietro Michiardi

Recap from last lecture


Last :me we formally discussed about mixed
strategies and mixed strategy NE
The big idea was that if a player is playing a
mixed strategy in equilibrium, then every pure
strategy in the mix must also be a best
response to what the other side is doing

Tennis Game (recap)


Were going to look at a game in the game:
assume two players (Venus and Serena),
where Serena is at the net

VENUS

RIGHT

LEFT

SERENA

right

leN

Viewpoint

Tennis Game (recap)


l
Venus

L
R

Serena

50,50 80,20
90,10 20,80
q

p
(1-p)

(1-q)

We iden:ed the mixed strategy NE for this game


Venus Serena
[(0.7, 0.3) , (0.6, 0.4)]
L R l r
p* (1-p*) q* (1-q*)

Tennis Game (recap)


How do we actually check that this is indeed an
equilibrium?
Let's verify that in fact p* is BR(q*)
Venus's payoffs:
Pure strategy L: 50*0.6 + 80*0.4 = 62
Pure strategy R: 90*0.6 + 20*0.4 = 62
Mix p*: 0.7*62 + 0.3*62 = 62

Venus has no strictly profitable pure-strategy


deviation

Tennis Game (recap)


But is this enough? There are no pure-strategy
deviations, but could there be any other mixes?
Any mixed strategy yields a payoff that is a
weighted average of the pure-strategy payoffs
This already tells us: if you didn't find any pure-
strategy deviations, then you'll not find any other
mixes that will be profitable

To check whether a mixed strategy is a NE we only have


to check whether there are any profitable pure-strategy
deviations
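
As a quick illustration (mine, not from the slides), here is a minimal Python sketch of this recipe applied to the tennis payoffs above; the array and variable names are just placeholders.

```python
import numpy as np

# Row player = Venus (L, R), column player = Serena (l, r); payoffs from the tennis game above.
U_venus = np.array([[50.0, 80.0],
                    [90.0, 20.0]])
U_serena = np.array([[50.0, 20.0],
                     [10.0, 80.0]])

p = np.array([0.7, 0.3])  # candidate mix for Venus over (L, R)
q = np.array([0.6, 0.4])  # candidate mix for Serena over (l, r)

# Expected payoff of each PURE strategy against the opponent's candidate mix
venus_pure = U_venus @ q      # -> [62., 62.]
serena_pure = p @ U_serena    # -> [38., 38.]

# Expected payoff of the candidate mixes themselves
venus_mix = p @ U_venus @ q   # -> 62.0
serena_mix = p @ U_serena @ q # -> 38.0

# (p, q) is a mixed NE iff no pure strategy beats the candidate mix for either player
is_ne = venus_pure.max() <= venus_mix + 1e-9 and serena_pure.max() <= serena_mix + 1e-9
print(venus_pure, serena_pure, is_ne)  # [62. 62.] [38. 38.] True
```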

Discussion (1)
Can anybody suggest some other places where we see
randomiza:on or at least mixed strategy equilibria in
spor:ng events?
Pick your own!!

In general, if you listen to sport comments, youll be


surprised to hear all kind of stories around sta:s:cs
and tac:cs
Especially arguing that since theres a sta:s:cally equal
chance when randomizing, there must be no point in
playing those strategies
This is misleading and wrong: why?

Discussion (2)
Since were in a mixed strategy equilibrium, it
must be the case that the payos are equal
Indeed, if it was not the case, then you
shouldnt be randomizing!!

Discussion (3)
After the security problems at U.S. and worldwide airports
due to the high risk of attacks, the need for devices capable of
inspecting luggage has risen considerably
The problem is that there are not enough of such machines
Wrong statements have been promoted by local
governments:

"If we put a check device in NY then all attacks will be shifted to


Boston, but if we put a check device in Boston, the attacks will
be shifted to yet another city"
The claim was that whatever the security countermeasure, it
would only shift the problem

Discussion (4)
The problem with that line of reasoning was that
the concept of mixed strategy was not adopted
What if you wouldnt no:fy where you would
actually put the check devices, which boils down
to randomizing?
The hard thing to do in prac:ce is how to mimic
randomiza:on!!

Da:ng and income tax declara:on

INTERPRETATIONS TO MIXED
STRATEGIES

The Bajle of the Sexes (revisited)


                      Player 2
                    a          b
Player 1     A    2,1        0,0
             B    0,0        1,2

(Player 1 mixes A/B with probabilities p, 1-p; Player 2 mixes a/b with q, 1-q)

We already know a lot about this game


There are two pure-strategy NE:
(A,a) and (B,b)
We know that there is a problem of coordination
We know that without communication, it is possible (and
quite probable) that the two players might fail to
coordinate

The Battle of the Sexes (2)


Let's find the mixed strategy NE

Any volunteer?

The Battle of the Sexes (3)


Player 1's perspective, find NE q:

Player 2's perspective, find NE p:
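
(Reconstructing the two indifference conditions; each player's mix is pinned down by the other player's payoffs:)
Player 1: E[u1(A)] = 2q and E[u1(B)] = 1-q; equal when q* = 1/3
Player 2: E[u2(a)] = p and E[u2(b)] = 2(1-p); equal when p* = 2/3
Mixed strategy NE = [(2/3, 1/3), (1/3, 2/3)]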

The Bajle of the Sexes (3)


Lets check that p=2/3 is indeed a BR for
Player1:

The Bajle of the Sexes (4)


We just found out that there is no strictly
protable pure-strategy devia:on
There is no strictly protable mixed-strategy
devia:on
The mixed strategy NE is:
Player 1

p 1-p

Player 2

1-q

The Battle of the Sexes (5)


What are the payoffs to players when they play
such a mixed strategy NE?
Player 1

Player 2
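
(Filling in the expected payoffs at this mixed NE:)
Player 1: E[u1] = 2*q* = 2*(1/3) = 2/3
Player 2: E[u2] = 1*p* = 2/3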

Why are the payoffs so low?


What is the probability for the two players not to
meet?
Prob(meet) = 2/3*1/3 + 1/3*2/3 = 4/9
1 - Prob(meet) = 5/9 !!!

The Battle of the Sexes (6)


This result seems to confirm our intuition
that magically achieving the pure-strategy
NE would not always be possible
So the real question is: why are those players
randomizing in a way that is not
profitable?

Mixed Strategies:
Interpreta:on #2
Rather than thinking of players actually
randomizing over their strategies, we can
think of them holding beliefs of what the
other players would play
What weve done so far is to nd those beliefs
that make players indierent over what they
play since theyre going to obtain the same
payos

Mixed Strategies:
Interpreta:on #3
We could actually think in terms of frac<on of
a popula<on when we discussed mixed
strategies
Lets mo:vate this line of thinking through an
example/game

The Income Tax Game (1)


                         Tax payer
                    Honest        Cheat
Auditor    A         2,0          4,-10
           N         4,0           0,4

(The auditor audits (A) with probability p, does not audit (N) with 1-p;
the tax payer is honest with probability q, cheats with 1-q)

Let's focus on a simultaneous move game (even though in this case it's not
realistic)
The auditor can decide whether or not to audit a tax payer
The tax payer can decide to be honest or to cheat in declaring income tax
Take a look at the payoffs

The Income Tax Game (2)


Tax payer
Cheat
Honest

Auditor

A
N

2,0 4,-10
4,0 0,4
q

p
(1-p)

1-q

Is there any pure-strategy NE?


Lets nd what is the mixed-strategy NE
Despite the mathema:cs exercise looks and is the
same as we saw so far, well give it a dierent
interpreta:on

The Income Tax Game (3)


Mixed strategies NE:
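
(Reconstructing the computation; recall that the row player's payoffs pin down the column player's mix, and vice-versa:)
Auditor indifferent between A and N: 2q + 4(1-q) = 4q + 0*(1-q), hence q* = 2/3 (fraction of honest tax payers)
Tax payer indifferent between Honest and Cheat: 0 = -10p + 4(1-p), hence p* = 2/7 (probability of an audit)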

The Income Tax Game (4)


From the auditors point of view, he/she is going
to audit a single tax payer 2/7 of the :me
this prac:cally implies that the auditor is going to
audit 2/7th of the popula:on
From the tax payer perspec:ve, he/she is going to
be honest 2/3 of the :me
this implies that 2/3rd of the popula:on is going
to pay taxes honestly

The Income Tax Game (5)


We have been considering so far a randomiza:on
of a single player
Instead, now we say that this is a mixture in the
popula:on
Mixed strategies can be thought of as not players
mixing their pure strategies but as a mix in a
large popula<on of which some people are doing
one thing and the other group are doing the
other

The Income Tax Game (6)


What could ever be done if one policy maker
(e.g. the government) would like to increase
the propor:on of honest tax payers?
One idea could be for example to prevent
fraud by increasing the number of years a tax
payer would spend in jail if found guilty

The Income Tax Game (7)


                         Tax payer
                    Honest        Cheat
Auditor    A         2,0          4,-20
           N         4,0           0,4

(p = probability of auditing; q = probability of being honest)

So we changed the payoff matrix


What happens to q*?
What is now the mixed-strategy NE?

The Income Tax Game (8)


Mixed strategies NE:
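
(Redoing the computation with the stiffer penalty:)
The auditor's own payoffs are unchanged, so q* = 2/3 as before
Tax payer indifferent: 0 = -20p + 4(1-p), hence p* = 1/6
Audits become less frequent (1/6 < 2/7), but the proportion of honest tax payers does not move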

The Income Tax Game (9)


What happened?
It looks like the proportion of honest tax payers
didn't change!
NOTE: what determines the equilibrium mix for the
column player is the row player's payoffs!!

What happened to the probability of checking


a single tax payer with an audit?
This is good news, as audits cost money to society and
having less frequent audits is beneficial for all!!

The Income Tax Game (10)


What can we actually do to increase the
number of honest tax payers?
1. We could modify the payos to auditors
Make audits cheaper
Make more protable an audit

2. We could abandon the idea of Game Theory


and just set the probability of audits out of
band
What would be the problem here?

To sum up
Lesson 1: mixed strategies can have dierent
interpreta:ons (frac:on of popula:on)
Lesson 2: we can verify a mixed strategy NE is
eec:vely one simply by checking pure-
strategy devia:ons
Lesson 3: Row players payos impact Column
players mixing probability and vice-versa

Building up on the last interpreta:ons of mixed strategies

FROM GAME THEORY TO


EVOLUTION

Evolution (1)
Concept related to a specific branch of Biology
Relates to the evolution of species in nature
Powerful modeling tool that has received a lot
of attention lately from the computer science
community
Why look at evolution in the context of Game
Theory?

Evolu:on (2)
Game Theory had a tremendous inuence on
evolu:onary Biology
Study animal behavior and use GT to understand
popula:on dynamics
Idea:
Relate strategies to phenotypes of genes
Relate payos to gene:c tness
Strategies that do well grow, those that obtain lower
payos die out

Important note:

Strategies are hardwired

Evolu:on (3)
Examples:
Group of lions deciding whether to ajack in group
an antelope
Ants deciding to respond to an ajack of a spider
Mobile ad hoc networks
TCP varia:ons
P2P applica:ons

Evolu:on (4)
Evolu:onary biology had a great inuence on
Game Theory
Similar ideas as before, relate strategies and
payos to genes and tness
Example:

Firms in a competitive market


Firms are boundedly rational: they can't compute the best
response, but have rules of thumb and adopt
hardwired (consistent) strategies
Survival of the fittest == rise of firms with low costs
and high profits

Simplifying assumptions
When studying evolution through the lens of
GT, we need to make some assumptions to make
our life easy
We will relax these assumptions later on

1. Within-species competition


We assume no mixing of populations: ants with ants,
lions with lions

2. Asexual reproduction
We assume no gene redistribution

Evolu:onary Game Theory (1)


A simple model
We will look at simple games at rst
Two player symmetric games: all players have the
same strategies and the same payo structure

We will assume random tournaments


In a large popula:on of individuals, we pick two
individuals at random and we make them play the
symmetric game
The player adop:ng the strategy yielding higher payo
will survive (and eventually gain new elements)
whereas the player who lost the game will die out

Evolu:onary Game Theory (2)


A simple model
Assume a large popula:on of players with
hardwired strategies
We suppose the en:re popula:on play strategy s
We then assume a muta<on happens, and a
small group of individuals start playing strategy s
The ques:on we will ask is whether the mutants
will survive and grow or if they will eventually die
out

Evolu:onary Game Theory (3)


A simple model
Study the existence of Evolu:onarily Stable
(ES) strategies
Note:
With our assump:ons we start with a large
frac:on of players adop:ng strategy s and a small
por:on using strategy s
In random matching, the probability for a player
using s to meet another player using s is high,
whereas mee:ng a player using s is low

Example (1)
                        Player 2
                 Cooperate      Defect
Player 1   C        2,2          0,3
           D        3,0          1,1

(a fraction 1-ε of the population plays one strategy, a fraction ε the other)

Have you already seen this game?


Examples:
Lions hunting in a cooperative group
Ants defending the nest as a cooperative group

Question: is cooperation evolutionarily stable?

Example (2)
Player strategy
hardwired C

Spa<al Game
All players are coopera:ve
and get a payo of 2
What happens with a
muta:on?

Example (3)
Player strategy
hardwired C
Player strategy
hardwired D
Focus your ajen:on on this
random tournament:
Coopera:ng player will obtain
a payo of 0
Defec:ng player will obtain a
payo of 3
Survival of the jest:
D wins over C

Example (4)
Player strategy
hardwired C
Player strategy
hardwired D

Example (5)
Player strategy
hardwired C
Player strategy
hardwired D

Example (6)
Player strategy
hardwired C
Player strategy
hardwired D
A small ini:al muta:on is
rapidly expanding instead of
dying out
Lets now try to be a lijle bit
more formal

Example (7)
                        Player 2
                 Cooperate      Defect
Player 1   C        2,2          0,3
           D        3,0          1,1

(the incumbent strategy has population share 1-ε, the mutant has share ε)

For C being a majority, the population mix is [(1-ε)C + εD]


For D being a majority, the population mix is [(1-ε)D + εC]

Is cooperation ES?
C vs. [(1-ε)C + εD]: (1-ε)*2 + ε*0 = 2(1-ε)
D vs. [(1-ε)C + εD]: (1-ε)*3 + ε*1 = 3(1-ε) + ε
3(1-ε) + ε > 2(1-ε)

C is not ES because the average payoff to C is lower than


the average payoff to D

Example (8)
                        Player 2
                 Cooperate      Defect
Player 1   C        2,2          0,3
           D        3,0          1,1

(the incumbent strategy has population share 1-ε, the mutant has share ε)

For C being a majority, the population mix is [(1-ε)C + εD]


For D being a majority, the population mix is [(1-ε)D + εC]

Is defection ES?
D vs. [(1-ε)D + εC]: (1-ε)*1 + ε*3 = (1-ε) + 3ε
C vs. [(1-ε)D + εC]: (1-ε)*0 + ε*2 = 2ε
(1-ε) + 3ε > 2ε

D is ES: any mutation from D gets wiped out!

Observa:ons
Lesson 1: Nature can suck

It looks like animals dont cooperate, but weve seen


so many documentaries showing the opposite!!!
Why?
Sexual reproduc:on, and gene redistribu:on might
help here

Lesson 2: If a strategy is strictly dominated then it


is not Evolu:onarily Stable
The strictly dominant strategy will be a successful
muta:on

Another example (1)

           a        b        c
a        2,2      0,0      0,0
b        0,0      0,0      1,1
c        0,0      1,1      0,0

2-player symmetric game with 3 strategies


Is c ES?
c vs. [(1-ε)c + εb]: (1-ε)*0 + ε*1 = ε
b vs. [(1-ε)c + εb]: (1-ε)*1 + ε*0 = 1-ε
1-ε > ε
c is not evolutionarily stable, as b can invade it

Another
e
xample
(
2)
a
b

a
b
c

2,2 0,0 0,0


0,0 0,0 1,1
0,0 1,1 0,0

So c is not ES, as b can invade an grow to


of the popula:on roughly
NOTE: b, the invader, is itself not ES!!
But it s:ll avoids dying out completely

Another
e
xample
(
3)
a
b

a
b
c

2,2 0,0 0,0


0,0 0,0 1,1
0,0 1,1 0,0

Is (c,c) a NE?
No, because b is a protable devia:on

Observa:ons
Lesson 3:
If s is not Nash (that is (s,s) is not a NE), then s is not
evoluHonary stable (ES)

If s is ES, then (s,s) is a NE


Ques:on: is the opposite true?

Yet another example (1)


                    Player 2
                  a          b
Player 1   a    1,1        0,0
           b    0,0        0,0

(the incumbent strategy has share 1-ε, the mutant has share ε)

What are the NE of this game?


NE = (a,a) and (b,b)

Is b ES?
b vs. [(1-ε)b + εa]: (1-ε)*0 + ε*0 = 0
a vs. [(1-ε)b + εa]: (1-ε)*0 + ε*1 = ε
ε > 0

(b,b) is a NE, but it is not ES!

Yet another example (2)


Player 2
a
b
Player 1

a
b

1,1
0,0

0,0
0,0
1-

Why is b not ES despite it is a NE?


This relates to the idea of a weak NE
If (s,s) is a strict NE then s is ES

Definition 1 [Maynard Smith 1972]


In a symmetric 2-player game, the pure
strategy ŝ is ES (in pure strategies) if there
exists an ε0 > 0 such that:

(1-ε) u(ŝ, ŝ) + ε u(ŝ, s')  >  (1-ε) u(s', ŝ) + ε u(s', s')

for all possible deviations s' and for all


mutation sizes ε < ε0

Definition 2
In a symmetric 2-player game, the pure
strategy ŝ is ES (in pure strategies) if:
A) u(ŝ, ŝ) ≥ u(s', ŝ) for all s'   [i.e., (ŝ, ŝ) is a symmetric NE]
and
B) if u(ŝ, ŝ) = u(s', ŝ), then u(ŝ, s') > u(s', s')

Theorem (1)
Definition 1 ⟺ Definition 2
Let's see Def. 2 ⟹ Def. 1
Sketch of proof:
Fix a deviation s' and suppose (ŝ, ŝ) is a NE, that is
u(ŝ, ŝ) ≥ u(s', ŝ)

There are two possibilities

Theorem (2)
Case 1: u(ŝ, ŝ) > u(s', ŝ)
for small mutation sizes the (1-ε) term dominates, so
the mutant dies out because she meets ŝ often
Case 2: u(ŝ, ŝ) = u(s', ŝ) and u(ŝ, s') > u(s', s')

the mutant does OK against ŝ (the mass) but


badly against s' (itself)

Lets recap in words


Weve seen a deni:on that connects
Evolu:onary Stability to Nash Equilibrium
Basically, all we need to do is:
First check if (,) is a symmetric Nash Equilibrium
If it is a strict NE, were done
Otherwise, we need to compare how performs
against a muta:on, and how a muta:on performs
against a muta:on
If performs bejer, then were done

Guess what? An example!!


Player 2
a
b
Player 1

a
b

1,1
1,1

1,1
0,0
1-

What is the NE of this game?


No prizes: NE = (a,a)

Is it symmetric? Easy to check


a is a good candidate to be ESS
Is (a,a) a strict NE?

Example con:nued
Player 2
a
b
Player 1

a
b

1,1
1,1

1,1
0,0
1-

No, its not a strict NE


If you deviate to b, its easy to no:ce that
u(a,a)=u(b,a)

Have to check our third rule


How does u(a,b) compare to u(b,b)?
Its bigger! Were done: a is an ESS

Evolu:on of social conven:on (1)


Evolu:on is oNen applied to social sciences
Lets have a look at how driving to the leN or
right hand side of the road might evolve
L
R

2,2
0,0

0,0
1,1

Any clues on the interpreta:on of this game?

Evolu:on of social conven:on (2)


L
R

2,2
0,0

0,0
1,1

Whats liable to be evolu:onary stable in this


sezng?
Well, lets nd the NE of this game:
NE = (L,L) and (R,R) , which are in fact symmetric

Are those NE strict?

Evolu:on of social conven:on (2)


L
R

2,2
0,0

0,0
1,1

Yes, they are strict! Were done:


L and R are both ESS

Lesson 1: we can have mul:ple ES conven:ons

Evolu:on of social conven:on (2)


L
R

2,2
0,0

0,0
1,1

Lesson 2: Mul:ple ESS need not to be equally


good
This should remind you something weve
already seen
These are coordina<on games

The game of Chicken (1)


a
b

0,0
1,2

2,1
0,0

This is just a symmetric version of the Bajle of the


Sexes game weve studied extensively

In this version you have to imagine a James Dean version


of the BoS

Biology interpreta:on:

a : individuals that are aggressive


b : individuals that are non-aggressive

The game of Chicken (2)


a
b

0,0
1,2

2,1
0,0

Whats evolu:onary stable in this game?


Easy: look for Nash equilibria
We know already a lot about this game, lets go
straight to the point

There are 2 NE in pure strategies:


(a,b) and (b,a)

The game of Chicken (3)


a
b

0,0
1,2

2,1
0,0

Are the pure strategies NE symmetric?


No, and thats the problem: according to our
deni:on of ESS, neither the pure strategy a
not b can be ES

If you had only aggressive genes, theyd do very badly


against each other, hence they could be invaded by a
gentle gene
Of course, vice-versa is also true

The game of Chicken (4)


              a          b
a           0,0        2,1
b           1,2        0,0

What should we do? Look at mixed strategies!


What's the mixed strategy NE of this game?
Mixed strategy NE = [ (2/3, 1/3) , (2/3, 1/3) ]
Note: now it's symmetric

There is an equilibrium in which 2/3 of the genes


are aggressive and 1/3 are non-aggressive

The game of Chicken (5)


a
b

0,0
1,2

2,1
0,0

Now, before we dene ES in the mix

Ques<on: can a mixed strategy NE be strict?

Definition 2bis
In a symmetric 2-player game, the mixed
strategy p is ES (in mixed strategies) if:
A) u(p, p) ≥ u(p', p) for all p'
and
B) if u(p, p) = u(p', p), then u(p, p') > u(p', p')

The game of Chicken (6)


a
b

0,0
1,2

2,1
0,0

Ques<on: can a mixed strategy NE be strict?


No, by deni:on of a mixed NE: payos are
equal for both pure strategies
In our example, we need to check

The game of Chicken (7)


a
b

0,0
1,2

2,1
0,0

Instead of a formal proof, lets discuss an heuris:c to check


that this is true
Weve got a popula:on in which 2/3 are aggressive and 1/3 are
passive
Suppose there is a muta:on p that is more aggressive than p
(e.g. 90% aggressive, 10% passive)
Since the aggressive muta:on is doing very badly against
herself, it would eventually die out
Indeed, the muta:on would obtain a payo of 0

Recap on Chickens
It turns out that in many cases that arise in
nature, the only equilibrium is a mixed
equilibrium
But what does it mean to have a mix in nature?
It could mean that the gene itself is randomizing,
which is plausible
It could be that there are actually two types surviving
in the popula:on, and this is connected to our
alterna:ve interpreta:on of mixed strategies

The Hawks and Dove game (1)


                    H                    D
H       (v-c)/2, (v-c)/2               v, 0
D              0, v                  v/2, v/2

We're now going to look at a more general


game of aggression vs. non-aggression
Note: we're still in the context of within-species
competition
So it's not a battle between two different animals,
hawks and doves

The Hawks and Dove game (2)


H
H
D

(v-c)/2, (v-c)/2
0, v

v,0
v/2, v/2

The idea is that there is a poten:al bajle


against an aggressive vs. a non-aggressive
animal
The prize is food, and its value is v > 0
Theres a cost for gh:ng, which is c > 0

The Hawks and Dove game (3)


H
H
D

(v-c)/2, (v-c)/2
0, v

v,0
v/2, v/2

Were going to analyze ES strategies (ESS)


Were going to be able to understand what
happens to the ESS mix as we change the
values of prize and costs

The Hawks and Dove game (3)


H
H
D

(v-c)/2, (v-c)/2
0, v

v,0
v/2, v/2

Can we have an ES population of doves?


Is (D,D) a NE?
No, hence D is not ESS
Indeed, a mutation of hawks against doves would
be profitable, in that it would obtain a payoff of v

The Hawks and Dove game (4)


H
H
D

(v-c)/2, (v-c)/2
0, v

v,0
v/2, v/2

Can we have an ES population of Hawks?


Is (H,H) a NE?
It depends: it is a symmetric NE if (v-c)/2 ≥ 0
Case 1: v > c  ⟹  (H,H) is a strict NE  ⟹  H is ESS

The Hawks and Dove game (5)


H
H
D

(v-c)/2, (v-c)/2
0, v

v,0
v/2, v/2

Case 2: v = c  ⟹  (v-c)/2 = 0  ⟹  u(H,H) = u(D,H)


We need to check how H performs against a mutation
of D
Is u(H,D) = v larger than u(D,D) = v/2? Yes

H is ESS if v ≥ c

The Hawks and Dove game (5)


H
H
D

(v-c)/2, (v-c)/2
0, v

v,0
v/2, v/2

What if c > v?
We know H is not ESS and D is not ESS
What about a mixed strategy?

Step 1: we need to nd a mixed NE

The Hawks and Dove game (5)


H
H
D

(v-c)/2, (v-c)/2
0, v

v,0
v/2, v/2
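
(A sketch of the missing computation. Let p̂ be the probability of playing H; at a symmetric mixed NE the opponent must be indifferent between H and D:)
u(H, p̂) = p̂*(v-c)/2 + (1-p̂)*v
u(D, p̂) = p̂*0 + (1-p̂)*v/2
Setting them equal gives p̂ = v/c, i.e. a fraction v/c of hawks (well defined here since v < c)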

The Hawks and Dove game (5)


H
H
D

(v-c)/2, (v-c)/2
0, v

v,0
v/2, v/2

The mixed NE is not strict, by definition


We need to check that u(p̂, p') > u(p', p') for any mutation p' ≠ p̂

No formal proof; the same heuristic as before applies

Recap on H&D (1)


In the case v < c we have an evolutionarily stable
state in which a fraction v/c of the population are hawks
1. As v increases, we will have more hawks in the ESS
2. As c increases, we will have more doves in the ESS

What are the payoffs?

Recap on H&D (2)


H
H
D

(v-c)/2, (v-c)/2
0, v

v,0
v/2, v/2

Lets take the D perspec:ve

What happens if the cost of gh:ng grows?

Recap on H&D (3)


The theory weve learned today is amenable
to iden<ca<on
We can run experiments and measure the
propor:on of H and D
From observa:ons, we can deduce the actual
values of v/c

It turns out that this theory is also able to


predict outcomes that are not well-known
facts

One last
e
xample
(
1)
S
B

S
B
T

1,1 v,0 0,v


0,v 1,1 v,0
v,0 0,v 1,1

Assume 1<v<2
What is this game?

Scratch, bite, trample == Rock, paper, scissors

What was the only NE? Its a mixed NE with


probabili:es 1/3,1/3,1/3
Note: we made it symmetric

One last example (2)

           S        B        T
S        1,1      v,0      0,v
B        0,v      1,1      v,0
T        v,0      0,v      1,1

The only hope for an ESS is p̂ = (1/3, 1/3, 1/3)


This is a NE, but it's not strict, it's weak
We need to check whether u(p̂, s') > u(s', s') for the pure-strategy mutations s'

One last example (3)

           S        B        T
S        1,1      v,0      0,v
B        0,v      1,1      v,0
T        v,0      0,v      1,1

Let's take the mutation s' = S
There is no ESS!!
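
(Filling in the check, under the assumption 1 < v < 2:)
u(S, S) = 1, while u(p̂, S) = (1/3)*1 + (1/3)*0 + (1/3)*v = (1+v)/3 < 1
So u(p̂, S) < u(S, S): the mutation S does better against itself than p̂ does against it, hence p̂ is not evolutionarily stable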

Applied Game Theory


Lecture 5
Pietro Michiardi

Cash in a Hat game (1)


Two players, 1 and 2
Player 1 strategies: put $0, $1 or $3 in a hat
Then, the hat is passed to player 2
Player 2 strategies: either match (i.e., add
the same amount of money in the hat) or take
the cash

Cash in a Hat game (2)


Payos:
$0 $0
Player 1: $1 if match net prot $1, -$1 if not
$3 if match net prot $3, -$3 if not
Match $1 Net prot $1.5
Player 2: Match $3 Net prot $2
Take the cash $ in the hat

Cash in a Hat game (3)


Lets play this game in class
What would you do?
How would you analyze this game?
This game is a toy version of a more important
game, involving a lender and a borrower

Lender & Borrower game


Lets make a couple of moUvaUng examples
Lenders: Banks, VC Firms,
Borrowers: you guys having a cool project idea to
develop

The lender has to decide how much money to


invest in the project
A[er the money has been invested, the borrower
could
Go forward with the project and work hard
Shirk, and run to Mexico with the money

Simultaneous vs. SequenUal Moves


QuesUon: what is dierent about this game with
regards to all the games weve played so far?
This is a sequen9al move game
What really makes this game a sequenUal move
game?
It is not the fact that player 2 chooses a[er player 1,
so Ume is not the really key idea here
The key idea is that player 2 can observe player 1s
choice before having to make his or her choice
NoUce: player 1 knows that this is going to be the
case!

Analyzing sequenUal moves games


A useful representaUon of such games is game
trees also known as the extensive form
For normal form games we used matrices,
here well focus on trees
Each internal node of the tree will represent the
ability of a player to make choices at a certain
stage, and they are called decision nodes
Leafs of the tree are called end nodes and
represent payos to both players

Cash in a hat representation

Game tree (payoffs are to Player 1, Player 2):
1 puts $0  →  (0, 0)
1 puts $1  →  2 matches the $1  →  (1, 1.5)
              2 takes the $1    →  (-1, 1)
1 puts $3  →  2 matches the $3  →  (3, 2)
              2 takes the $3    →  (-3, 3)

What do we do to analyze such a game?

Analyzing sequenUal moves games


The idea is: players that move early on in the
game should put themselves in the shoes of
other players
Here this reasoning takes the form of
an9cipa9on
Basically, look towards the end of the tree and
work back your way along the tree to the root

Backward InducUon
Start with the last player and chose the
strategies yielding higher payo
This simplies the tree
ConUnue with the before-last player and do
the same thing
Repeat unUl you get to the root
This is a fundamental concept in game theory

Backward InducUon in pracUce (1)


2

(0,0)

$0
1

$1 2
$3
2

$1

(1, 1.5)

- $1

(-1, 1)

$3

(3, 2)

- $3

(-3, 3)

Backward InducUon in pracUce (2)


2

(0,0)

$0
1

$1 2

(1, 1.5)

$3
2

(-3, 3)

Backward InducUon in pracUce (3)


2

(0,0)

$0
1

$1 2
$3
2

$1

(1, 1.5)

- $1

(-1, 1)

$3

(3, 2)

- $3

(-3, 3)

Player 1 chooses to invest $1, Player 2 matches

What is the problem in the


outcome of this game?
2

(0,0)

$0
1

$1 2
$3
2

$1

(1, 1.5)

- $1

(-1, 1)

$3

(3, 2)

- $3

(-3, 3)

Very similar to what we learned


with the Prisoners Dilemma

The problem with the


lenders and borrowers game
It is not a disaster:
The lender doubled her money
The borrower was able to go ahead with a small scale
project and make some money

But, we would have liked to end up in another branch:


Larger project funded with $3 and an outcome befer for
both the lender and the borrower

What does prevent us from gegng to this lafer good


outcome?

Moral Hazard
One player (the borrower) has incenUves to do
things that are not in the interests of the other
player (the lender)
By giving a too big loan, the incenUves for the
borrower will be such that they will not be aligned
with the incenUves on the lender
NoUce that moral hazard has also disadvantages
for the borrower

Moral Hazard: an example


Insurance companies oers full-risk policies
People subscribing for this policies may have
no incenUves to take care!
In pracUce, insurance companies force me to
bear some deducUble costs (franchise)

How can we solve the


Moral Hazard problem?
Weve already seen one way of solving the
problem keep your project small
Are there any other ways?

Introduce laws
Similarly to what we discussed for the PD
Today we have such laws: bankruptcy laws
But, there are limits to the degree to which
borrowers can be punished
The lender can say: I cant repay, Im bankrupt
And he/shes more or less allowed to have a
fresh start

Limits/restricUons on money
Another way could be to asking the borrowers a
concrete plan (business plan) on how he/she will
spend the money
This boils down to changing the order of play!
But, whats the problem here?
Lack of exibility, which is the moUvaUon to be an
entrepreneur in the rst place!
Problem of Uming: it is someUmes hard to predict
up-front all the expenses of a project

Break the loan up


Let the loan come in small installments
If a borrower does well on the rst
installment, the lender will give a bigger
installment next Ume
It is similar to taking this one-shot game and
turn it into a repeated game
Do you recall what happens to the PD game with
repeated interacUons?

Change contract to avoid shirk


The borrower could re-design the payos of
the game in case the project is successful
2

(0,0)

$0
1

$1 2
$3
2

$1

(1, 1.5)

- $1

(-1, 1)

$3

(1.9, 3.1)

- $3

(-3, 3)

IncenUve Design (1)


IncenUves have to be designed when dening the
game in order to achieve goals
NoUce that in the last example, the lender is not
gegng a 100% their money back, but they end up
doing befer than what they did with a smaller
loan
SomeUmes a smaller share of a larger pie can be
bigger than a larger share of a smaller pie

IncenUve Design (2)


In the example we saw, even if $1.9 is larger
than $1 in absolute terms, we could look at a
dierent metric to judge a lenders acUons
Return on Investment (ROI)
For example, as an investment banker, you could
also just decide to invest in 3 small projects and
get 100% ROI

IncenUve Design (3)


So should an investment bank care more
about absolute payos or ROI?
It depends! On what?
There are two things to worry about:
The funds supply
The demand for your cash (the project supply)

IncenUve Design (4)


There are two things to worry about:
The funds supply
The demand for your cash (the project supply)

If there are few projects you may want to


maximize the absolute payo
If there are innite projects you may want to
maximize your ROI

Examples of incenUves
IncenUves in contracts for CEOs
Bad interpretaUon, they screw up the world
Mild interpretaUon, they align CEOs acUons
towards the interests of the shareholders

Manager of sport teams


In the middle age, piece rates / share cropping
IncenUve design is a topic per-se, we wont go
into the details in this lecture

Beyond incenUves
Can we do any other things rather than
providing incenUves?
Ever heard of collateral?
Example: subtract house from run away payos
Lowers the payos to borrower at some tree
points, yet makes the borrower befer o!

Collateral example
The borrower could re-design the payos of
the game in case the project is successful
2

(0,0)

$0
1

$1 2

$1
- $1

$3
2

(1, 1.5)
(-1, 1 - HOUSE)

$3

(3,2)

- $3

(-3, 3 - HOUSE)

Collaterals
They do hurt a player enough to change his/
her behavior
Lowering the payos at certain points of the
game, does not mean that a player will be
worse o!!
Collaterals are part of a larger branch called
commitment strategies
Next, an example of commitment strategies

Norman Army vs. Saxon Army Game


Back in 1066, William the Conqueror lead an
invasion from Normandy on the Sussex
beaches
Were talking about military strategy
So basically we have two players (the armies)
and the strategies available to the players are
whether to ght or run

Norman Army vs. Saxon Army Game


N
S

N
invade

ght

ght

run

run

N ght
run

(0,0)
(1,2)
(2,1)
(1,2)

Lets analyze the game with


Backward InducUon

Norman Army vs. Saxon Army Game


N
S

N
invade

ght

ght

run

run

N ght
run

(0,0)
(1,2)
(2,1)
(1,2)

Norman Army vs. Saxon Army Game


N
S

N
invade

ght
run

(1,2)
N
(2,1)

Norman Army vs. Saxon Army Game


N
S

N
invade

ght

ght

run

run

N ght
run

Backward InducUon tells us:


Saxons will ght
Normans will run away

(0,0)
(1,2)
(2,1)
(1,2)

What did William the


Conqueror did?

Norman Army vs. Saxon Army Game


N
S

ght

run

run

N ght

Not burn
boats

run

Burn boats
S

ght

ght
run

N ght

N ght

(0,0)
(1,2)
(2,1)
(1,2)

(0,0)
(2,1)

Norman Army vs. Saxon Army Game


N
S

ght

run

run

N ght

(1,2)
(2,1)

Not burn
boats
Burn boats
S

ght
run

N ght

N ght

(0,0)
(2,1)

Norman Army vs. Saxon Army Game

(1,2)

Not burn
boats
Burn boats
S

(2,1)

Norman Army vs. Saxon Army Game


N
S

ght

run

run

N ght

Not burn
boats

run

Burn boats
S

ght

ght
run

N ght

N ght

(0,0)
(1,2)
(2,1)
(1,2)

(0,0)
(2,1)

Lesson learned
SomeUmes, gegng rid of choices can make me
befer o!
Commitment:

Fewer opUons change the behavior of others


Do you remember another segng weve seen in class
in which this applied?

The other players must know about your


commitments
Example: Dr. Strangelove movie

From simultaneous to sequenUal moves segngs

REVISITING ECONOMICS 101

Cournot CompeUUon (1)


The players: 2 Firms, e.g. Coke and Pepsi
Strategies: quanUUes players produce of
iden9cal products: qi, q-i
Products are perfect subsUtutes

Cournot Competition (2)


Cost of production: c * q
Simple model of constant marginal cost

Prices: p = a - b (q1 + q2)

Price in the Cournot Duopoly Game

[Figure: the demand curve, price against total quantity q1 + q2; intercept a, slope -b.
It tells us the quantity demanded at a given price]

Cournot Competition (3)


The payoffs: firms aim to maximize profit
u1(q1,q2) = p * q1 - c * q1
Profits = Revenues - Costs
Game vs. maximization problem

Cournot Competition (4)


u1(q1,q2) = p * q1 - c * q1
p = a - b (q1 + q2)

u1(q1,q2) = a*q1 - b*q1^2 - b*q1*q2 - c*q1

Cournot Competition (5)


First order condition
Second order condition

Cournot Competition (6)


First order condition
Second order condition
[make sure it's a max]

When is the BR for Firm 1 q1 = 0?


We simply take the BR expression and set it to
zero

That was the perfect competition quantity
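
(A sketch of the omitted algebra, using the profit function above:)
FOC: du1/dq1 = a - 2b*q1 - b*q2 - c = 0   ⟹   BR1(q2): q1 = (a - c - b*q2) / (2b)
SOC: d²u1/dq1² = -2b < 0, so this is indeed a maximum
BR1 = 0 when q2 = (a - c)/b, which is the perfect competition quantity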

What is the NE of the


Cournot Duopoly?
Graphically we've seen it; formally we have:

We have found the COURNOT QUANTITY
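
(Sketching the omitted step: the NE is where the two best responses cross, and by symmetry q1 = q2 = q*:)
q* = (a - c - b*q*) / (2b)   ⟹   q* = (a - c) / (3b)
For comparison, the monopoly quantity is (a - c)/(2b) and the perfect competition quantity is (a - c)/b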

[Figure: best-response curves BR1 and BR2 in the (q1, q2) plane; they cross at the NE,
which lies between the monopoly quantity and the perfect competition quantity]

Stackelberg Model (1)


We are going to assume that one rm gets to
move rst and the other moves a[er
That is one rm gets to set the quanUty rst

Assuming were in the world of compeUUon, is


it an advantage to move rst?
Or maybe it is befer to wait and see what the
other rm is doing and then react?

We are going to use backward induc9on

Stackelberg Model (2)


Unfortunately we wont be able to draw trees,
as the game is too complex
First well go for an intuiUve explanaUon of
what happens, then well gure out the math

Stackelberg Model (3)


Lets assume rm 1 moves rst
Firm 2 is going to observe rm 1s choice and
then move
How would you go about it?

q2

q2

BR2

q2
0

q1

q1

q1

Stackelberg Model (4)


By deniUon of Best Response, we know
whats the prot maximizing strategy of rm 2,
given an output quanUty produced by rm 1
Alright, now we know what rm 2 will do,
whats interesUng is to look at what rm 1 will
come up with

Stackelberg Model (5)


What quanUty should rm 1 produce, knowing
that rm 2 will respond using the BR?
This is a constrained op9miza9on problem

One legiUmate quesUon would be: should rm 1


produce more or less than the quanUty she
produced when the moves were simultaneous?
In parUcular, should rm 1 produce more or less than
the Cournot quanUty?

Stackelberg Model (6)


QuesUon: should rm 1 produce more than

Remember, we are in a strategic subs9tutes


segng
The more rm 1 produces, the less rm 2 will
produce and vice-versa

Firm 1 producing more rm 1 is happy

Stackelberg Model (7)


If q1 increases, then q2 will decrease (as
suggested by the BR curve)
What happens to rm 1s prots?
They go up, for otherwise rm 1 wouldnt have set
higher producUon quanUUes

What happens to rm 2s prots?


The answer is not immediate

What happened to the total output in the


market?
Even here the answer is not immediate

Stackelberg Model (8)


What happened to the total output in the
market?
Consumers would like the total output to go up,
for that would mean that prices would go down!

My claim is that the total output went indeed


up
This is a direct consequence of the BR curve

q2

q2

BR2

q2
0

q1

q1

q1

Stackelberg Model (9)

So, what happens to rm 2s prots?


q1 went up, q2 went down
q1+q2 went up prices went down
Firm 2s costs are the same

Firm 2s prot went down

Stackelberg Model (10)


Lets have a nerdy look at the problem:

Lets apply the Backward InducUon principle


First, solve the maximizaUon problem for rm 2,
taking q1 as given
Then, focus on rm 1

Stackelberg Model (11)


Lets focus on rm 2:
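
(A sketch of the omitted step: firm 2 maximizes its profit taking q1 as given, which is just its best response from before:)
q2 = BR2(q1) = (a - c - b*q1) / (2b)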

We now can take this quanUty and plug it in


the maximizaUon problem for rm 1

Stackelberg Model (12)


Lets focus on rm 1:

Stackelberg Model (13)


Lets derive F.O.C. and S.O.C.

Stackelberg Model (14)


This gives us:
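
(Sketching the omitted algebra: substitute BR2 into firm 1's profit and maximize.)
u1(q1) = q1*[ a - b*q1 - b*BR2(q1) - c ] = q1*(a - c - b*q1) / 2
FOC: (a - c - 2b*q1)/2 = 0   ⟹   q1* = (a - c)/(2b)        SOC: -b < 0, a maximum
q2* = BR2(q1*) = (a - c)/(4b)
Total output q1* + q2* = 3(a - c)/(4b) > 2(a - c)/(3b), the Cournot total: firm 1 produces more than the Cournot quantity, firm 2 less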

Stackelberg Model (15)


All this math to verify our iniUal intuiUon!

ObservaUons (1)
Is what weve looked at really a sequenUal
game?
Despite we said rm 1 was going to move rst,
theres no reason to assume shes really going
to do so!
What do we miss?

ObservaUons (2)
We need a commitment
In this example, sunk cost could help in
believing rm 1 will actually play rst
Assume rm 1 was going to invest a lot of
money in building a plant to support a large
producUon: this would be a credible
commitment!

ObservaUons (3)
Lets make an example: assume the two rms
are NBC and Murdoch trying to gain market
shares for newspapers producUon in a city
Suppose theres a board meeUng where the
strategy of the rms are decided
What could Murdoch do to deviate from
Cournot?

ObservaUons (4)
An example would be to be somehow
dishonest and hire a spy to gain more
informaUon on NBCs strategy!
To make the scenario even more intriguing,
lets assume NBC knows that theres a spy in
the board room
What should NBC do?

Simultaneous vs. Sequential


There are some key ideas involved here
1. Games being simultaneous or sequential is
not really about timing, it is about
information
2. Sometimes, more information can hurt!
3. Sometimes, more options can hurt!

First mover advantage


Advocated by many economics books
Is being the rst mover always good?
Yes, some9mes: as in the Stackelberg model
Not always, as in the Rock, Paper, Scissors game
SomeUmes neither being the rst nor the second
is good, as in the I split you choose game

The NIM game


We have two players
There are two piles of stones, A and B
Each player, in turn, decides to delete some
stones from whatever pile
The player that remains with the last stone
wins
Lets play the game

The NIM game (2)


If the piles are equal ⟹ second-mover advantage
If the piles are unequal ⟹ first-mover advantage
You'll know who will win the game from the
initial setup
You can solve it through backward induction, as sketched below
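
As an illustration (mine, not part of the original slides), here is a minimal Python sketch of that backward induction for the two-pile version of NIM described above; the function name and structure are my own.

```python
from functools import lru_cache

# Two piles; on your turn you remove any positive number of stones from ONE pile.
# Whoever takes the last stone wins.

@lru_cache(maxsize=None)
def first_mover_wins(a: int, b: int) -> bool:
    """True if the player to move can force a win from piles (a, b)."""
    if a == 0 and b == 0:
        return False  # no stones left: the previous player took the last stone and won
    # Backward induction: a position is winning if SOME move leaves the opponent in a losing position
    for take in range(1, a + 1):
        if not first_mover_wins(a - take, b):
            return True
    for take in range(1, b + 1):
        if not first_mover_wins(a, b - take):
            return True
    return False

if __name__ == "__main__":
    print(first_mover_wins(4, 4))  # False: equal piles -> second-mover advantage (mirror strategy)
    print(first_mover_wins(5, 3))  # True:  unequal piles -> first-mover advantage
```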

The Zermelo Theorem (1)


Lets try to draw a grander lesson out of the
games weve seen so far
Would it be possible to state, when and if a
game has a soluUon? In this case, would it be
possible to state whether there is any
advantage for players moving rst or second?

The Zermelo Theorem (2)


Consider a general 2 Player game
We assume perfect informa9on
Players know where they are in the game tree and
how they got there

We assume a nite game, i.e. a game-tree


with a nite number of nodes
There can be three or fewer outcomes:
W1 (player 1 wins), L1 (player 2 wins), T (Ue)

The Zermelo Theorem (3)


The result (or soluUon) of this game is:
Either player 1 can force a win (over player 2)
Or player 1 can force a Ue
Or player 2 can force a loss (on player 1)

The Zermelo Theorem (4)


This theorem appears to be trivial:
Three possible outcomes
Games are subdivided in three categories:
Those in which, whatever player 2 does, player 1 can
win (provided he/she plays well)
Those in which player 1 can always force a draw/Ue
Those in which, player 1 is toast, and can only loose

Examples of games
NIM, that we played earlier
Tic-tac-toe:

If players play correctly, you can always force a Ue


If players make wrong moves, they can loose

Checkers has a soluUon!

Two players
Perfect informaUon
Finite
Three outcomes

Chess has a soluUon!


In fact, the theorem doesnt tell you how to play, it just tells you
there is a soluUon!

Theorem proof (1)


Were going to prove the theorem, in a
sketchy way, as this is relates to backward
inducUon
Proof methodology:
Induc9on on maximum length of a game N
Well start with an inducUon hypothesis
And well prove this is true for longer games

Theorem proof (2)


If N = 1
W1
T

1
W1

L1
L1
L1

L1
L1

W1
L1
T

L1

1
T

L1

Theorem proof (3)


InducUon hypothesis:
Suppose the claim is true for all games of
length N
We claim, therefore it will be true for games of
length N+1
Lets take an example

Theorem proof (4)


1
2

Example of a more
complex game

1
1
1
2
1

What is the
maximum length of
the game?

Theorem proof (5)


1
2

2
1
1
1
2
1

We have two sub-


games
The upper sub-
game: follows 1
and it has length 3
The lower sub-
game: follows 1
and has length 2

Theorem proof (6)


By inducUon hypothesis (for N=3), upper sub-
game has a soluUon, say W1
Again, by inducUon hypothesis (N=2), lower
sub-game has a soluUon, say L1

W1
L1

This game has a


soluUon, it is a game of
length 1 we know
already!

A more complex example


Suppose we have an array
of stones, and two players
SequenUal moves, each
player can delete some
stones
Select one, delete all stones
that lie above and right

The looser is the person


who ends up removing the
last rock

A more complex example


According to Zermelos
Theorem, this game has a
soluUon and the
advantage depends on
NxM, the size of the array
Think hard about it, could
come at the exam

SequenUal move games, and their interpretaUon

SOME FORMAL DEFINITIONS

DeniUon: Perfect InformaUon


A game of perfect informa9on is one in which
at each node of the game tree, the player
whose turn is to move knows which node she
is at and how she got there

DeniUon: Pure Strategy


A pure strategy for player i in a game of
perfect informaUon is a complete plan of
acUons: it species which acUon i will take at
each of its decision nodes

Example (1)
Strategies
1
2
1 U
D

l
r
(1,0)

(2,4)

(3,1)

(0,2)

Player 2:
[l], [r]
Player 1:
[U,u], [U,d]
[D, u], [D,d]

Hey, they look redundant!!

Example (2)
Note:
1
2
1 U
D

l
r
(1,0)

(2,4)

(3,1)

(0,2)

In this game it
appears that
player 2 may
never have the
possibility to play
her strategies
This is also true for
player 1!

Example (3)
Backward InducUon
1
2
1 U
D

l
r

(2,4)

(3,1)

(0,2)

(1,0)

Start from the end


d higher payo

Summarize game
r higher payo

Summarize game
D higher payo

BI :: {[D,d],r}

Example (4)
l
1
2
1 U
D

l
r
(1,0)

u
d
(0,2)

(2,4)
(3,1)

U u

2,4

0,2

U d

3,1

0,2

D u

1,0

1,0

D d

1,0

1,0

From the extensive form


To the normal form

Example (4)
l
1
2
1 U
D

l
r

u
d

(2,4)
(3,1)

(0,2)

(1,0)

U u

2,4

0,2

U d

3,1

0,2

D u

1,0

1,0

D d

1,0

1,0

Backward Induc9on

Nash Equilibrium

{[D, d],r}

{[D, d],r}
{[D, u],r}

A Market Game (1)


Assume there are two players
An incumbent monopolist (MicroSo[, MS) of O.S.
A young start-up company (SU) with a new O.S.

The strategies available to SU are:


Enter the market (IN) or stay out (OUT)
The strategies available to MS are:
Lower prices and do markeUng (FIGHT) or stay
put (NOT FIGHT)

A Market Game (2)


What should you do?
MS F
SU IN
OUT

NF
(0,3)

(-1,0)
(1,1)

Analyze the game with BI


Analyze the normal form
equivalent and nd NE

A Market Game (3)


MS F
SU IN

NF

OUT

NF

IN

-1,0

1,1

OUT

0,3

0,3

(-1,0)
(1,1)

(0,3)

Backward Induc9on

Nash Equilibrium

(IN, NF)

(IN, NF)
(OUT, F)

This is a NE, but relies


on an incredible threat

Applied Game Theory


Lecture 6
Pietro Michiardi

RECAP FROM LAST TIME

A Market Game (1)


Assume there are two players
An incumbent monopolist (MicroSo@, MS) of O.S.
A young start-up company (SU) with a new O.S.

The strategies available to SU are:


Enter the market (IN) or stay out (OUT)
The strategies available to MS are:
Lower prices and do markeKng (FIGHT) or stay
put (NOT FIGHT)

A Market Game (2)


What should you do?
MS F
SU IN
OUT

NF
(0,3)

(-1,0)
(1,1)

Analyze the game with BI


Analyze the normal form
equivalent and nd NE

A Market Game (3)


MS F
SU IN

NF

OUT

NF

IN

-1,0

1,1

OUT

0,3

0,3

(-1,0)
(1,1)

(0,3)

Backward Induc;on

Nash Equilibrium

(IN, NF)

(IN, NF)
(OUT, F)

This is a NE, but relies


on an incredible threat

Informal discussion on how to build up your mojo

REPUTATION

A more elaborate se[ng (1)


Suppose there is one rm
The rm holds a monopoly in ten dierent
markets
In each market, there are reasons to believe
the rm will face an entrant
Assume each entrant will come in order

A more elaborate se[ng (2)


Lets try to see (intuiKon rst) what happens
when each entrant, in order, decides whether
to step in the market or stay out

Ques;on: for each entrant, is the monopolist


going to ght?
Lets play, I will be the monopolist

A more elaborate se[ng (3)


Last Kme, we analyzed the game as a single
market and the outcome (using backward
inducKon) was that as the entrant decided to
step in, the monopolist should not ght
This Kme, instead, there are reasons to believe
the monopolist will try to establish a
reputaBon as being a tough rm

A more elaborate se[ng (4)


The intuiKon tells us that early ghts may keep
later entrants out of the markets
Ques;on: whats worrying about this
argument on establishing a reputaKon?
How should we analyze such a sequenKal game?

A more elaborate se[ng (5)


Lets use backward inducBon and start from
the last entrant
What happened in the game we played? I
didnt ght the last entrant. But why?
When we look at ten markets, the game
seems very complicated
But if we look at the last entrant only, the
game is nothing but the single market game!

A more elaborate se[ng (6)


In the last market, we know what the
monopolist should do: do not ght, and the
entrant should step in!
There are no incenBves to establish a
reputaBon for subsequent markets, as there
arent any
Ques;on: what happens in the 9th market
then?

A more elaborate se[ng (7)


Since we are in a se[ng of perfect informaBon,
the 9th entrant knows where she stands
As the 9th entrant knows that in the last round,
the monopolist is not going to ght, because
theres no point
Whatever the 9th entrant will do, the monopolist
will let the 10th entrant in
There is no point for the monopolist to establish a
reputaKon of ghKng at the 9th market

Hence, the 9th entrant will step in

A more elaborate se[ng (8)


But now we know whats going to happen
when we look at the 8th market
The 8th entrant knows that whatever she will
do, in any case the monopolist will let the 9th
and 10th entrants in
The 8th entrant will step in, and so on

A more elaborate se[ng (9)


Using backward inducKon we arrived at a
completely dierent result than what we
actually played in class
Nevertheless, the idea of establishing a
reputaKon sounded right!
Next, lets try to discuss more about the
concept of reputaKon

ReputaKon (1)
To make our intuiKon work, lets try to introduce
a new idea
Assume theres a small chance (say 1%), that the
monopolist rm is crazy
This implies that the payos are not exactly the
same as in the single market game we looked at
Its like the monopolist, someKmes, actually prefers to
ght just for the fun of it

ReputaKon (2)
Now, lets look at the 1st entrant: she knows
theres a 1% possibility that the monopolist
will go bonkers and ght
If there was only one market, we would be
done: the entrant should step in
When there are 10 markets, things are
dierent. Lets see why

ReputaKon (3)
Assume that the 1st entrant thinks that with
0.9 probability the monopolist will not ght
and she enters
What if the monopolist goes crazy and ght?
What happens to the other 9 entrants?
Subsequent entrants would modify their
beliefs and assume a higher probability for the
monopolist to go bonkers

ReputaKon (4)
Entrants would start believe that the
monopolist is indeed crazy and step out of the
markets!
The small possibility that the monopolist
would be crazy allowed to build a reputaBon
that keeps entrants out of the markets

ReputaKon (5)
Now, this argument can be strengthened
Assume that, in fact, the monopolist is not crazy
By acKng as if she was crazy early on in the game,
she was able to scare the entrants
In game theory, we always assume players know
how to put themselves in others shoes
Hence, the entrants should know that theres a
possibility that the monopolist is not crazy but
shes acKng out

ReputaKon (6)
Entrants stay out not only because they think the
monopolist is crazy
They stay out also because they think that even if the
monopolist was not crazy, she would ght in any case!
We just saw that irrespecKvely of the monopolist being
crazy or not, she will act like crazy. Hence, a raKonal
entrant would know that and would not update her
beliefs entrants learn nothing by observaKon
Ques;on: Can this be an equilibrium?

ReputaKon (7)
To answer the quesKon, lets use backward inducKon and
go to the 10th market
The 10th entrant, learns nothing about the monopolist
being crazy or not
As such, she would sKll believe the monopolist could be sane
with .99 probability

The 10th entrant would step in, and this would unravel back
to the 1st market
The answer to the quesKon involves mixed strategies: a
monopolist should (with some probability) act like crazy,
build a reputaKon and keep entrants out of the markets

Chain Store Paradox


What we just discussed informally has a name:
its called the chain store paradox (by Selten,
Nobel prize for this contribuKon)
ReputaBon is a key concept: by introducing a
small probability to play crazy helps in many
cases
Short fuse people, Markets, Doctors,
Accountants, Hostage negoKaKons,

Or when is more important than what

DUELS

The Duel Game (1)


Two players, with a gun loaded with one bullet
They stand face to face at a certain distance
and the strategies available are:
SHOOT
GET ONE STEP CLOSER

As the distance between the two players is


large, theres a possibility that the shooter will
miss
In that case, theres no second chance

The Duel Game (2)


There are many examples in which duels arise
Historical
Sports: e.g. Tour de France
Economics: R&D eorts to come out with a new
product

Duel games have a unique feature we did not


encounter yet
The strategic decision is not about what to do, but
about when

The Duel Game (3)


Lets introduce some notaKon
Let Pi(d) be player is probability of hi[ng if i
shoots at distance d
Example:

The Duel Game (4)


Assump;ons:
1. At distance d=0 the probability of hi[ng the
opponent is 1
2. As the distance increases, the probability of hi[ng
the opponent decreases
3. The two players have dierent abiliKes
4. The two abiliKes are known by the two players

In the graph shown before, whos the bemer


shot?

The Duel Game (5)


Ques;on: what do you think it is going to
happen, given the game we outlined?
We know that player 1 is bemer at shooKng
than player 2
Is player 1 going to shoot rst?
Is player 2 going to shoot rst?

The Duel Game (6)


Some possible arguments:
Player 1 should shoot rst because in the end hes
bemer at shooKng
Player 2 should shoot rst because he knows that
player 1 is the best shot and hes willing to take
the chance to shoot before being shot

This line of reasoning is called preempBon


It uses concepts of dominance and backward
induc2on

The Duel Game (7)


The analysis of this game is not obvious
We want to answer the following quesKons:
Whos going to shoot rst?
At exactly what distance?

Lets start with some facts

The Duel Game (8)


Assume we start with player 1
Assume player 1 believes player 2 is not going
to shoot in the next move
Ques;on: what should player 1 do then?

The Duel Game (9)

The Duel Game (10)


Assume we start with player 1
Assume player 1 believes player 2 is going to
shoot in the next move
Ques;on: what should player 1 do then?

The Duel Game (11)

The Duel Game (12)


Formally, player i should shoot at distance d (if
she believes player j will shoot at distance d-1)
if and only if:

Pi(d) ≥ 1 - Pj(d-1),  i.e.  Pi(d) + Pj(d-1) ≥ 1

The Duel Game (13)

The Duel Game (14)


Claim: the rst shot should occur at distance
d*
No one should shoot before d*, by dominance
At d* there is no dominance, we need to use
backward inducBon: we need to know what are
the beliefs of what the opponent will do in the
next move

The Duel Game (15)


Let's start at d=0 and assume player 2 is
choosing
Player 2 should shoot (prob. 1 of winning)

Now we are at d=1 and player 1 is choosing


If P1(1) + P2(0) ≥ 1, then player 1 should shoot

Now we are at d=2 and player 2 is choosing

If P2(2) + P1(1) ≥ 1, then player 2 should shoot

The Duel Game (16)


So the answer to our quesKon is: who shoots
rst is not necessarily the bemer or the worse
shooter, but whoevers turn it is rst at d*
d* is determined by the joint abiliKes of the
players

Repeated games, discounted payos and a limle bit of algebra

BARGAINING GAMES

UlKmatum game (1)


There are two players, player 1 and player 2
Player 1 is going to make a take it or leave it
oer to player 2
This oer concerns a pie, thats worth $1
Player 1 is given a pie and has to decide how
to divide it (hence the value each gets)
(S, 1-S)
E.g. (0.75$, .25$)

UlKmatum game (2)


Player 2 has two choices:
Accept the oer
Decline the oer

If player 2 accepts:
Player 1 gets S, player 2 gets 1-S

If player 2 declines:
Player 1 and player 2 get nothing

UlKmatum game (3)


First thing to noKce is that this game doesnt
look like the players are really bargaining
Lets try to play this game in class to get some
intuiKons about it
I provide the $1 pie

UlKmatum game (4)


It turns out that in this game a lot of people
would reject oers
Lets try to analyze it using backward
inducBon
Lets start with the receiver of the oer, choosing
to accept or refuse (1-S)
Assuming player 2 is trying to maximize her prot,
what should she do?

UlKmatum game (5)


Player 2 is choosing between 0 and 1-S
She should always accept the oer!
So why there were so many oers that were
rejected?
Why it seems that the games converges to
an even split, even if this is not what
backward inducKon predicts?

UlKmatum game (6)


Backward inducKon is giving a clear
predicKon:
Player 2 should always accept the oer
Player 1 should oer essenBally nothing

Ques;on: why there seem to be a mismatch


between BI and reality?

UlKmatum game (7)


Reasons why player 2 may reject:
Pride
She may be sensiKve to how her payos relates to
others
IndignaKon
Player 2 may want to teach a lesson to Player 1
to oer more

UlKmatum game (8)


What we really played is a one-shot game
If we have played it more than once, it makes
sense to revisit the concept of reputaBon
By rejecKng an oer, player 2 would also induce player
1 to obtain nothing, which may be an incenKve for
player 1 to oer more in the next round of the game

Ques;on: Why is a fair (50,50) share so focal


here?

UlKmatum game (9)


Lesson learned: even in very simple games,
we should be careful about the results
backward inducKon provides, especially if we
study real world problems

Two-period bargaining game (1)


There are two players, player 1 and player 2
Player 1 is going to make a take it or leave it
oer to player 2
This oer concerns a pie, thats worth $1
Player 1 is given a pie and has to decide how
to divide it (and its value)
(S1, 1-S1)

Two-period bargaining game (2)


Player 2 has two choices:
Accept the oer
Decline the oer

If player 2 accepts:
Player 1 gets S1, player 2 gets S2=1-S1

If player 2 declines:
We ip roles and play the game again
This is the second stage of the game

Two-period bargaining game (3)


The second stage game is nothing but the
ulKmatum game:
Player 2 will oer a split (S2, 1-S2)
Player 1 can:
Accept, and the deal is done
Reject, and all players get nothing

Two-period bargaining game (4)


Now, we add one important element
In the first round, the pie is worth $1
If we end up in the second round, the pie is worth
less; it is as if part of the money were lost

Example:
If I give you $1 today, that's what you get
If I give you $1 in 1 month, we assume it's worth
less, say δ < 1

Two-period bargaining game (5)


DiscounBng factor:
From today perspecKve, $1 tomorrow is worth

<1

Lets now try to play this game in class

Again, I give you guys the $1 worth pie

Two-period bargaining game (6)


It is clear that the decision to accept or reject
partly depends on what you think the other side
is going to do in the second round
This is backward inducBon!

By working backwards, we can see that what you


should oer in the rst round should be just enough
to make sure its accepted, knowing that the person
whos receiving the oer in the rst round is going to
think about the oer theyre going to make you in the
second round, and theyre going to think about
whether youre going to accept or reject

Two-period bargaining game (7)


Lets try to analyze the game formally with
backward inducBon
And lets forget about pride for a moment

One-stage game (the ultimatum game)


             Offerer's split    Receiver's split
1-period           1                   0

Two-period bargaining game (8)


Two-stage game
             Offerer's split    Receiver's split
1-period           1                   0
2-period         1 - δ                 δ          (δ < 1)

Let's be careful:
In the second round of the two-period game, player 2 makes the offer about
the whole pie
We know that this is going to be an ultimatum game, so player 2 will keep the
whole pie and player 1 will accept (by BI)
However, seen from the first round, the pie in the second round that player 2
could get is worth less than $1

Two-period bargaining game (9)

Two-period bargaining game (10)

Two-period bargaining game (11)


Let's comment on the previous graph
We assume player 1 moves first
Our goal is to find what player 1 should offer in the first
round so that player 2 would be indifferent between
accepting and going on to the second round
The second-round slope indicates what player 2 would
gain if she refused the first offer, which is the discounted
value of the pie, as seen from the first round
Then, player 1 should offer at least that amount as
player 2's share

Three-period bargaining game (1)


The rules are the same as for the previous games,
but now there are two possible flips:
Period 1: player 1 offers first
Period 2: if player 2 rejected the offer in period 1, she
gets to offer
Period 3: if player 1 rejected the offer in period 2, he
gets to offer again

NOTE: the value of the pie keeps shrinking


It's not the pie that really shrinks, it's that we assumed
players are discounting

Three-period bargaining game (2)


Discounting: the value to player 1 of a pie in
round three is discounted by δ·δ = δ²
Let's analyze the game with backward
induction
Again, assume no pride
We start from round three, which is our ultimatum
game, and we know that there player 1 can get the
whole pie, since player 2 will accept the offer
Player 1 could get a pie worth δ²

Three-period bargaining game (3)


Three-period game:

                Offerer's split      Receiver's split
1-period              1                    0
2-period            1 - δ                  δ              (δ < 1)
3-period         1 - δ(1 - δ)           δ(1 - δ)

NOTE: in the table, we report the split player 1 should offer in the first
round of the game

In the first round, if the offer is rejected, we go into a 2-period game, and
we know what the split is going to look like

Three-period bargaining game (4)

Bargaining games (1)


What about a 4-period bargaining game?

                Offerer              Receiver
1-period           1                     0
2-period         1 - δ                   δ               (δ < 1)
3-period      1 - δ(1 - δ)            δ(1 - δ)
4-period           ?                     ?

NOTE: give people just enough today so they'll accept the offer, and "just
enough today" is whatever they would get tomorrow, discounted by delta
You don't need to go back all the way up to period 1; see the sketch below
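The NOTE above is exactly a recursion. A minimal Python sketch (the function
name is illustrative; the common discount factor δ is the one assumed in the
slides) that applies it and fills in the missing 4-period row:

# Backward induction for an n-period alternating-offer bargaining game.
def offerer_share(n, delta):
    """Share kept by whoever makes the first offer in an n-period game."""
    if n == 1:
        return 1.0  # ultimatum game: the offerer keeps everything
    # The receiver must get at least what she could keep as next period's
    # offerer, discounted by delta.
    return 1.0 - delta * offerer_share(n - 1, delta)

for n in (1, 2, 3, 4):
    s1 = offerer_share(n, 0.9)
    print(n, round(s1, 4), round(1 - s1, 4))   # offerer's and receiver's shares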

Bargaining games (2)


Does anyone see a pattern emerging here?
Let's clear out the algebra

                Offerer                  Receiver
1-period           1                         0
2-period         1 - δ                       δ
3-period      1 - δ + δ²                  δ - δ²
4-period      1 - δ + δ² - δ³             δ - δ² + δ³

Bargaining games (3)


Say we look at a 10-stage bargaining game
What would be the share of player 1?
That would be:

S1(10) = 1 - δ + δ² - δ³ + δ⁴ - ... - δ⁹

These are geometric series

Do you remember how to compute the sum?

Bargaining games (4)


S1(10) = 1 - δ + δ² - δ³ + δ⁴ - ... - δ⁹

δ·S1(10) = δ(1 - δ + δ² - δ³ + δ⁴ - ... - δ⁹)
         = δ - δ² + δ³ - ... + δ⁹ - δ¹⁰

Adding the two lines, almost every term cancels:

(1 + δ)·S1(10) = 1 - δ¹⁰

S1(10) = (1 - δ¹⁰) / (1 + δ)
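A quick numerical check of this closed form (the value of δ below is
arbitrary, chosen only for illustration):

# Verify S1(10) = (1 - delta**10) / (1 + delta) against the explicit sum.
delta = 0.9
s1_10 = sum((-delta) ** k for k in range(10))   # 1 - δ + δ² - ... - δ⁹
closed_form = (1 - delta ** 10) / (1 + delta)
print(abs(s1_10 - closed_form) < 1e-12)         # -> True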

Bargaining games (5)


A few observations before we move on
In the one-stage game, there's a huge first-mover
advantage
In the two-stage game, it's more difficult: it depends
on how large delta is. If it is large, you'd prefer being
the receiver
In the three-stage game it looks like you'd be better
off making the offer, but again it's not clear-cut
What about the 10-stage game? It seems that the two
players are getting closer in terms of payoffs, and that
the initial bargaining power has diminished

Bargaining games (6)


Let's push it harder: let's study the asymptotic
behavior of this game, when there is an
infinite number of stages

S1(∞) = 1 / (1 + δ)

S2(∞) = 1 - S1(∞) = δ / (1 + δ)

Bargaining games (7)


Now, let's imagine that the offers are made in
rapid succession: this would imply that the
discounting we hinted at is almost
negligible, which boils down to assuming delta
to be very close to 1

S1(∞) = 1 / (1 + δ) → 1/2 as δ → 1

S2(∞) = δ / (1 + δ) → 1/2 as δ → 1
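Numerically, both shares approach one half as δ gets close to 1 (a small
sketch; the δ values below are arbitrary):

# Infinite-horizon shares for increasingly patient players.
for delta in (0.5, 0.9, 0.99, 0.999):
    s1 = 1 / (1 + delta)        # first mover's share
    s2 = delta / (1 + delta)    # second mover's share
    print(delta, round(s1, 4), round(s2, 4))   # both columns tend to 0.5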

Bargaining games (8)


So what is the conclusion?
If we assume rapid alternating offers, the
principle of backward induction seems to
work:
We get a fifty-fifty split, which is what we hinted
at at the beginning as a fair result

Bargaining games (9)


Now, let's shake the model up a bit
Question: what are the assumptions we have
made throughout the previous examples that
seem unrealistic?
Question: backward induction tells us that there
is not actually going to be any bargaining at all!
We can predict what offer to make in the first
stage and make sure it is accepted. What's
disturbing about this argument?

Bargaining Games (10)


We can drop the assumption that players have
equal discount factors, and study the problem
with δ1 ≠ δ2
This relates to players being patient or impatient
Who do you think is better off? A patient or an
impatient player?

We can drop the assumption that a player knows
the delta of the other player
We can drop the assumption that the initial value
of the pie is known

Slowly moving from a perfect world to reality

IMPERFECT INFORMATION

Introduction
We have seen simultaneous move games, in
which players can't observe strategies and have
to reason based on the idea of best response
We have seen sequential move games, in which
observation is allowed, and players reason using
backward induction
Now, let's study a class of games in which these
two approaches are blended

Example (1)

[Game tree: player 1 chooses U, M, or D; player 2 then chooses u or d.
Payoffs (player 1, player 2): after U, (4,0) and (0,4); after M, (0,4) and
(4,0); after D, (1,2) and (0,0)]

Sequential move game


Assume for a moment
perfect information
We know how to solve it
using backward induction
NOTE: player 1 knows that
if he chooses U or M,
player 2 can crush him
Player 2 has a huge second-
mover advantage in the
first branches of the tree

Example (2)

[Same game tree, but now the two nodes reached after U and M lie in a single
information set for player 2]

Sequential move game


Imperfect information
Player 2 cannot distinguish
where she is on (some
parts of) the tree

If player 1 chooses D,
player 2 can observe it
If player 1 chooses U or
M, player 2 doesn't know
which choice was made

Example (3)
Information set

[Game tree as before, with the nodes after U and M circled as one information
set for player 2]

The idea is that the two
internal nodes are in
the same information
set
Player 2 knows that
player 1 chose either
U or M, but not which
one

How can we analyze
this kind of game?

Example (4)
Information set

[Same game tree, with the information set over the nodes after U and M]

The simple backward
induction argument (player
2 could always crush player
1) does not hold anymore
Moreover, player 1 knows
that player 2 cannot
distinguish U from M

Player 1 might decide to
randomize over U and M, and
hope to get an expected
payoff of 2
A payoff of 2 is better than
what player 1 could ever
obtain by choosing D
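A small sketch of that expected-payoff argument, assuming the branch payoffs
are attached as in the tree description above ((U,u) and (M,d) give player 1
a payoff of 4, (U,d) and (M,u) give 0):

# Player 1 mixes 50/50 over U and M; player 2 cannot tell them apart.
p1_payoff = {("U", "u"): 4, ("U", "d"): 0, ("M", "u"): 0, ("M", "d"): 4}

for p2_action in ("u", "d"):
    expected = 0.5 * p1_payoff[("U", p2_action)] + 0.5 * p1_payoff[("M", p2_action)]
    print(p2_action, expected)   # -> 2.0 either way, beating the at most 1 that D yields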

Definition: Information set


An information set of player i is a collection of
player i's nodes among which i cannot
distinguish
Question: Are these information sets?

Information sets: rules


Rule 1: a player must not be able to infer in
which node she is by looking at the number of
available strategies she has
Rule 2: provided a player can recall what she
did earlier on in the tree, she still shouldn't be
able to distinguish where she is within the
information set
This assumption is called perfect recall
NOTE: perfect recall is not always realistic!

Definition: Perfect / Imperfect
Information


A game of perfect information is a game in
which all information sets in the game tree
include just one node
A game of imperfect information is a game
that is not of perfect information

Example (1)

[Game tree: player 1 chooses U or D; player 2, without observing this choice
(one information set), chooses left or right. Payoffs: (2,2) and (-1,3)
after U; (3,-1) and (0,0) after D]

The information set
indicates that player 2
cannot observe whether
player 1 moved up or down
Perfect information: player 2
could have chosen separately,
in each node, whether to
go left or right
Imperfect information: player
2 has only one choice of
left or right, covering both
nodes, since she doesn't
know which one she'll be at

Example (2)

[Same game tree as before]

There's a catch here
that makes the game
easy:
Whatever the
information set, for
player 2 choosing right is
consistently better than
choosing left
This game solves out
rather like when using
backward induction

Example (3)

[Game tree as before, now also written as a matrix]

                     Player 2
                    l         r
Player 1    U      2,2      -1,3
            D      3,-1      0,0

Notice that we don't have
redundant strategies in the
matrix
Indeed, we can't have a complete
action plan when we don't know
where we are in the tree
This implies we have to revisit our
definition of strategy

Example (4)

[Same game tree and matrix as in Example (3)]

Question: What game is this?

Notice that by using information
sets, we were able to represent
a simultaneous move game as a tree
What really matters here is not
time, but information

Definition: Pure strategies


A pure strategy of player i is a complete plan
of action: it specifies what player i will do at
each of its information sets
It looks like the same definition we saw last
time, but this one involves information sets
and it is more general
The idea remains the same: we want to transform
a game tree into a matrix
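As a small illustration of this definition (the information-set labels below
are hypothetical, anticipating the later example with strategies Uu, Ud, Du,
Dd), a pure strategy is one action choice per information set:

from itertools import product

# Each pure strategy picks exactly one action at each information set.
info_sets = {"first move": ["U", "D"], "second move": ["u", "d"]}

pure_strategies = ["".join(choice) for choice in product(*info_sets.values())]
print(pure_strategies)   # -> ['Uu', 'Ud', 'Du', 'Dd']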

Example (1)

[Game tree: player 1 chooses U or D; player 2, in a single information set,
chooses l, m, or r. Payoffs: (a1,a2), (b1,b2), (c1,c2) after U and (d1,d2),
(e1,e2), (f1,f2) after D]

As in the previous
example, player 2 does
not know if player 1
chooses up or down
Player 2 has just three
choices
Our goal now is to
transform the game into
a matrix

Example (2)

[Same game tree as before, now as a matrix]

                     Player 2
                 l          m          r
Player 1  U   a1,a2      b1,b2      c1,c2
          D   d1,d2      e1,e2      f1,f2

CLAIM: if we look at the matrix
above, it is not obvious that the
game tree on the left is the only
possible tree that could generate
the matrix

Example (3)

[Same matrix as in Example (2), alongside an alternative game tree: player 2
moves first, choosing l, m, or r; player 1, in a single information set, then
chooses U or D, reaching the same payoffs]

In this alternative game tree,
player 2 moves first, then player 1
moves, but she doesn't know which
action player 2 chose

CLAIM: these two game trees are equivalent

Observations
What matters is not time, but information
By the end of today we will set up the
machinery to analyze such games and predict
what is going to happen

Example (1)

[Game tree: player 1 chooses U or D; player 2, in a single information set,
chooses l or r. After (U,l) the payoff is (4,2); after (U,r) player 1 moves
again, choosing u for (0,0) or d for (1,4); after D the payoffs are (0,0)
with l and (2,4) with r]

Before we analyze the
game, let's figure out
some basic facts
How many information
sets do we have?

Player 2 has 1 information
set
Player 1 has 2 information
sets

What are the strategies?

Player 1: Uu, Ud, Du, Dd
Player 2: l, r

Example (2)

[Same game tree, now as a matrix]

                  Player 2
                 l         r
       Uu      4,2       0,0
       Ud      4,2       1,4
       Du      0,0       2,4
       Dd      0,0       2,4

Do you notice the redundancy
here?
Let's find the NE of this game

Example (3)
Nash Equilibria:

                 l         r
       Uu      4,2       0,0
       Ud      4,2       1,4
       Du      0,0       2,4
       Dd      0,0       2,4

(Uu, l)
(Du, r)
(Dd, r)
Example (4)
Backward Induction

[Same game tree; the part starting at player 1's second decision node, after
U and r, is a sub-game]

Let's try to use BI

Starting from the end,
player 1 will choose down (d)
Then, although player 2
doesn't know where she
is on the tree, she will
notice that she's always
better off choosing right
This implies that player 1
will then choose down (D)

Example (5)
Nash Equilibria:

(Uu, l)  not compatible with BI

(Du, r)  not compatible with BI
(Dd, r)  this is a sub-game perfect equilibrium

We're not saying these are not NE, it's just that the first two
are inconsistent with what we could predict with BI
We need a new notion of solution, one that is able to treat
games that have both sequential moves and
simultaneous moves

Example (1)

[Game tree: player 1 chooses A, which ends the game with payoffs (1,0,0), or
B; after B, player 2 chooses u or d and player 3 chooses l or r, with payoffs
(0,1,1), (0,0,2), (0,0,-1), (2,1,0) for (u,l), (u,r), (d,l), (d,r)]

This is a three-player
game
We will model, in the
next slide, the game as
follows:
Player 1 chooses a
matrix
Players 2 and 3 will play
the game player 1 chose

Example (2)

Player 1: A                              Player 1: B

                 Player 3                                 Player 3
                l        r                               l         r
Player 2  U   1,0,0    1,0,0             Player 2  U   0,1,1     0,0,2
          D   1,0,0    1,0,0                       D   0,0,-1    2,1,0

There are lots of NE in this game!

E.g.: [A, U, l]
Question: How can you check that it is a NE?
Question: Does this NE make sense?

Example (3)

[Game tree as before, with the part following B highlighted as a sub-game]

Let's have a look at the
sub-game we identify in
the game tree
Observation: it involves
only two players

Sub-game (payoffs: player 2, player 3)

                 Player 3
                l         r
Player 2  U    1,1       0,2
          D    0,-1      1,0

Example (4)

                 Player 3
                l         r
Player 2  U    1,1       0,2
          D    0,-1      1,0

What are the NE of this sub-game?

Notice that player 3 has a dominant strategy
NE = (D, r)

This new equilibrium clashes with the equilibrium
we just found before!
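The same brute-force idea verifies this claim for the sub-game (a minimal
sketch with illustrative names; payoffs are those of players 2 and 3):

# Pure-strategy NE of the sub-game between players 2 and 3.
sub = {  # (player 2 action, player 3 action) -> (u2, u3)
    ("U", "l"): (1, 1), ("U", "r"): (0, 2),
    ("D", "l"): (0, -1), ("D", "r"): (1, 0),
}
rows, cols = ["U", "D"], ["l", "r"]

nash = [(a, b) for a in rows for b in cols
        if all(sub[(x, b)][0] <= sub[(a, b)][0] for x in rows)    # player 2 can't improve
        and all(sub[(a, y)][1] <= sub[(a, b)][1] for y in cols)]  # player 3 can't improve
print(nash)   # -> [('D', 'r')]: r strictly dominates l for player 3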

Definition: sub-games
A sub-game is a part of the game that looks
like a game within the tree. It satisfies the
three following properties:
1. It starts from a single node
2. It comprises all successors to that node
3. It does not break up any information set

Examples of sub-games

[Figure omitted: game trees illustrating which parts of a tree qualify as
sub-games]

Definition: sub-game perfect
equilibrium


A Nash Equilibrium (s1*, s2*, ..., sN*) is a sub-
game perfect equilibrium (SPE) if it induces a
Nash Equilibrium in every sub-game of the
game
Example:
In the example before, the SPE is (B, D, r)
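As a last sanity check (a minimal sketch; the payoffs are read off the
example's tree), player 1 indeed prefers B once the sub-game is expected to
play its NE (D, r):

# Player 1's backward-induction step, given the sub-game NE (D, r).
payoff_1_from_A = 1                 # playing A ends the game at (1, 0, 0)
payoff_1_from_B_at_subgame_NE = 2   # (D, r) in the sub-game yields (2, 1, 0)
print("B" if payoff_1_from_B_at_subgame_NE > payoff_1_from_A else "A")   # -> B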
