Markov Chains
(Chapter 4 of Taylor and Karlin)
Nur Iriawan
Department of Statistics, FMIPA-ITS
n-step transition probabilities into state 1
Sequence of Transition Probability Matrices
Steady-state condition
Doubly Stochastic
For a periodic doubly stochastic example:
    P^n = P, if n is odd,
    P^n = I (the identity matrix), if n is even.

Long-Run Behavior (Steady State) of Markov Chains
    State    0      1      2      3
      0    0.080  0.184  0.368  0.368
      1    0.632  0.368  0      0
      2    0.264  0.368  0.368  0
      3    0.080  0.184  0.368  0.368
[Figure: decomposition of the state space into a transient set T and irreducible sets S1 and S2]
In the long run, the Markov chain will eventually be absorbed into one of its irreducible sets, where the previous analysis still holds, or into an absorbing state. The only question that remains, when there are two or more irreducible sets, is the probability that the chain ends up in each set.
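Those probabilities can be read off numerically from a high power of P. A minimal sketch, assuming a hypothetical 3-state chain (not from the slides) in which state 0 is transient and states 1 and 2 are each absorbing, so there are two irreducible sets {1} and {2}:

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical reducible chain: state 0 transient, states 1 and 2 absorbing.
P = [[0.5, 0.3, 0.2],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

# In a high power of P the transient mass vanishes, and row 0
# converges to the absorption probabilities into {1} and {2}.
Pn = P
for _ in range(200):
    Pn = matmul(Pn, P)

print([round(x, 6) for x in Pn[0]])   # [0.0, 0.6, 0.4]
```

The same values follow from first-step analysis: from state 0 the chain is eventually absorbed into {1} with probability 0.3/(0.3 + 0.2) = 0.6.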
Reducible Markov Chains
[Figure: a reducible chain with transient set T = {s1, ..., sn} and an irreducible set S]
Periodic state, d = 2:

         0     1     2
    0    0     1     0
P=  1   0.5    0    0.5
    2    0     1     0
Equivalence classes:
{5}: an absorbing class
{2, 4}: a recurrent class (each of its states has period 2)
{0}, {1}, {3}: transient classes
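This classification can be automated: two states communicate when each is reachable from the other, and a communicating class is recurrent exactly when no transition leaves it. A sketch, assuming a hypothetical transition structure (the slide's matrix is not reproduced here) chosen to match the classes listed above:

```python
# Hypothetical successor sets consistent with the classes above:
# {5} absorbing, {2,4} recurrent with period 2, {0}, {1}, {3} transient.
succ = {0: {1}, 1: {2, 5}, 2: {4}, 3: {2}, 4: {2}, 5: {5}}

def reachable(i):
    """All states reachable from i (including i itself)."""
    seen, stack = {i}, [i]
    while stack:
        for j in succ[stack.pop()]:
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

reach = {i: reachable(i) for i in succ}

# Communicating class of i: the states j with i -> j and j -> i.
classes = {frozenset(j for j in reach[i] if i in reach[j]) for i in succ}

for c in sorted(classes, key=min):
    # A class is recurrent iff it is closed: no transition leaves it.
    kind = "recurrent" if all(succ[i] <= c for i in c) else "transient"
    print(sorted(c), kind)
```

This prints {2, 4} and {5} as recurrent and {0}, {1}, {3} as transient, matching the hand classification.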
Long Run Properties
Assume that a particular customer switches brands among products A, B, C, and D according to the following Markov chain:

     State    A      B      C      D
       A     0.4    0.2    0.2    0.2
P =    B     0.05   0.05   0.7    0.2
       C     0.05   0.25   0.4    0.3
       D     0.1    0.1    0.6    0.2
      State    A      B      C      D
        A    0.200  0.160  0.420  0.220
P2 =    B    0.078  0.208  0.445  0.270
        C    0.083  0.153  0.525  0.240
        D    0.095  0.195  0.450  0.260

            State    A      B      C      D
              A    0.096  0.173  0.482  0.248
P8 = P4·P4 =  B    0.096  0.173  0.482  0.248
              C    0.096  0.173  0.482  0.248
              D    0.096  0.173  0.482  0.248
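The convergence visible in P8 can be reproduced numerically; a sketch using the brand-switching matrix above and repeated squaring:

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Brand-switching chain over A, B, C, D (each row sums to 1).
P = [[0.40, 0.20, 0.20, 0.20],
     [0.05, 0.05, 0.70, 0.20],
     [0.05, 0.25, 0.40, 0.30],
     [0.10, 0.10, 0.60, 0.20]]

P2 = matmul(P, P)
P4 = matmul(P2, P2)
P8 = matmul(P4, P4)

print([round(x, 3) for x in P2[0]])   # [0.2, 0.16, 0.42, 0.22]
print([round(x, 3) for x in P8[0]])   # [0.096, 0.173, 0.482, 0.248]

# By P^8 every row has essentially converged to the same distribution.
spread = max(abs(P8[i][j] - P8[0][j]) for i in range(4) for j in range(4))
print(spread < 1e-3)                  # True
```

The rows of P8 agreeing with one another is exactly the "independent of the initial state" behavior the next slides formalize.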
Reducible MC

         1    2    3
    1    0    1    0
P=  2    1    0    0
    3    0    0    1

Here {1, 2} and {3} are closed sets that do not communicate, so the chain is reducible.
Periodic MC

         0    1    2
    0    0    1    0
P=  1    0    0    1
    2    1    0    0
Theorem
For any irreducible ergodic Markov chain, lim_{n→∞} p_ij^(n) exists and is independent of i. Furthermore,

    lim_{n→∞} p_ij^(n) = π_j > 0,

where the π_j satisfy the steady-state equations

    π_j = Σ_{i=0}^{M} π_i p_ij   for j = 0, 1, ..., M,
    Σ_{j=0}^{M} π_j = 1.

Letting n → ∞ in P{X_{n+1} = j} = Σ_{i=0}^{M} P{X_n = i} p_ij gives

    π_j = Σ_{i=0}^{M} π_i p_ij.
Linear Algebra Notation
Let π = (π_0, π_1, ..., π_M) for a finite MC. Then the steady-state equations can be written as

    πP = π
and
    π1^T = 1,   where 1 = (1, 1, ..., 1).

Example:   P =  | 0  1 |
                | 1  0 |
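For this 2×2 example the steady-state equations can be solved by hand: πP = π gives π_0 = π_1, and the normalization gives π_0 = π_1 = 1/2. A sketch checking both facts, and showing that lim P^n nevertheless fails to exist for this periodic chain:

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.0, 1.0],
     [1.0, 0.0]]

# pi = (1/2, 1/2) solves pi P = pi together with pi . 1 = 1.
pi = [0.5, 0.5]
piP = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]
print(piP)        # [0.5, 0.5]

# But P^n oscillates: P^2 = I, P^3 = P, ... so lim P^n does not exist.
P2 = matmul(P, P)
P3 = matmul(P2, P)
print(P2)         # [[1.0, 0.0], [0.0, 1.0]]
print(P3 == P)    # True
```

So a solution of the steady-state equations can exist even when the limiting probabilities do not, which is the point of the next slides.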
However, for an irreducible Markov chain (ergodic or not) the following limit always exists:

    lim_{n→∞} (1/n) Σ_{k=1}^{n} p_ij^(k) = π̃_j.
Steady State Probabilities
This limit is the long-run average proportion of time spent in state j. Moreover, this limit satisfies the steady-state equations, although it is not the steady-state probability:

    π̃_j = Σ_{i=0}^{M} π̃_i p_ij   for j = 0, 1, ..., M,
    Σ_{j=0}^{M} π̃_j = 1.

If the MC is ergodic, π̃_i = π_i.
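For the periodic matrix P = [[0, 1], [1, 0]], lim P^n does not exist, but the time average (1/n) Σ_{k=1}^{n} P^k does, and every row converges to π̃ = (1/2, 1/2). A quick numerical sketch:

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.0, 1.0],
     [1.0, 0.0]]

# Accumulate (1/n) sum_{k=1}^{n} P^k for n = 1000 (an even n, so the
# average over the alternating powers P, I, P, I, ... is exact).
n = 1000
Pk = [[1.0, 0.0], [0.0, 1.0]]        # P^0
total = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(n):
    Pk = matmul(Pk, P)               # now holds P^k
    for i in range(2):
        for j in range(2):
            total[i][j] += Pk[i][j]

avg = [[total[i][j] / n for j in range(2)] for i in range(2)]
print(avg)    # [[0.5, 0.5], [0.5, 0.5]]
```

Each row gives the long-run fraction of time spent in each state, exactly the π̃ of the slide.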
Long-run average cost: for a cost function C(·),

    lim_{k→∞} (1/k) Σ_{n=1}^{k} E[C(X_n)] = lim_{k→∞} (1/k) Σ_{n=1}^{k} Σ_{j=0}^{M} C(j) p_ij^(n)
                                          = Σ_{j=0}^{M} C(j) lim_{k→∞} (1/k) Σ_{n=1}^{k} p_ij^(n)
                                          = Σ_{j=0}^{M} C(j) π̃_j,

and likewise, with probability 1,

    lim_{k→∞} (1/k) Σ_{n=1}^{k} C(X_n) = Σ_{j=0}^{M} C(j) π̃_j.

In the worked example, the long-run average cost is $31.46.
Long-Run Average Cost Per Unit Time
The method described can be employed provided that:
- the cost can be written as lim_{k→∞} (1/k) Σ_{n=1}^{k} C(X_n, D_n);
- {X_n, n ≥ 0} is an irreducible MC;
- the {D_n} form a sequence of random variables which are independent and identically distributed (iid).
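A sketch of the long-run average cost computation, assuming a hypothetical two-state ergodic chain and cost function (so π̃ = π and the cost is Σ_j C(j) π_j):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical ergodic chain and per-state costs (not from the slides).
P = [[0.7, 0.3],
     [0.4, 0.6]]
C = [2.0, 9.0]

# pi from a high power of P; for this chain pi = (4/7, 3/7).
Pn = P
for _ in range(200):
    Pn = matmul(Pn, P)
pi = Pn[0]

# Long-run average cost per unit time.
cost = sum(C[j] * pi[j] for j in range(2))
print(round(cost, 6))    # 5.0
```

The answer 5 = (4·2 + 3·9)/7 is just the cost vector weighted by the long-run fraction of time spent in each state.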
For an infinite state space, the steady-state equations become

    π_j = Σ_{i=0}^{∞} π_i P_ij,   j = 0, 1, ...,
    Σ_{i=0}^{∞} π_i = 1,   π_i ≥ 0,

or the π_i can be obtained numerically from P^n.
Example (umbrella problem), 0 < p < 1:

          0     1     2
    0     0     0     1
P=  1     0    1−p    p
    2    1−p    p     0

    π_0 = (1−p)/(3−p),   π_1 = 1/(3−p),   π_2 = 1/(3−p),
    π_0 + π_1 + π_2 = 1.

    P{gets wet} = π_0 · p = p(1−p)/(3−p).
With p = 0.1:

          0     0     1
P =       0    0.9   0.1
         0.9   0.1    0

Numerically determine the limit of P^n:

               0.310  0.345  0.345
lim  P^n  =    0.310  0.345  0.345      (reached by n ≈ 150)
n→∞            0.310  0.345  0.345
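The numerical limit can be reproduced and checked against the closed form π = ((1−p)/(3−p), 1/(3−p), 1/(3−p)); a sketch with p = 0.1 (using a somewhat larger power than 150, for safety):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

p = 0.1
P = [[0.0,   0.0,  1.0],
     [0.0, 1 - p,    p],
     [1 - p,    p, 0.0]]

Pn = P
for _ in range(499):
    Pn = matmul(Pn, P)           # Pn = P^500

# Closed form for the umbrella chain.
pi = [(1 - p) / (3 - p), 1 / (3 - p), 1 / (3 - p)]

print([round(x, 3) for x in Pn[0]])   # [0.31, 0.345, 0.345]
print([round(x, 3) for x in pi])      # [0.31, 0.345, 0.345]
```

Every row of the high power of P agrees with the formula, confirming the slide's limit.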
Balance across a set of states: starting from the full balance equations

    π_j Σ_i P_ji = Σ_i π_i P_ij,

summing over j ∈ S and cancelling the transitions internal to S gives

    Σ_{j∈S} π_j Σ_{i∉S} P_ji = Σ_{i∉S} π_i Σ_{j∈S} P_ij,

i.e., the long-run probability flow out of S equals the flow into S.
[Diagram: birth–death chain 0 – 1 – 2 – ... – n – n+1 with transition probabilities P_00, P_10, ..., P_n,n, P_n,n+1, P_n+1,n]

Taking S = {0, 1, ..., n}, the only transitions across the cut are between n and n+1, so

    π_n P_{n,n+1} = π_{n+1} P_{n+1,n}.
[Diagram: states 0, 1, 2, ..., n, n+1; up-transitions with probability p from state 0 and p(1−q) elsewhere, down-transitions with probability q(1−p), self-loops (1−p) at state 0 and (1−p)(1−q) + pq elsewhere]

Cut between states 0 and 1:
    π_0 p = π_1 q(1−p)   ⟹   π_1 = π_0 · p/(q(1−p)).

Cut between states n and n+1 (n ≥ 1):
    π_n p(1−q) = π_{n+1} q(1−p)   ⟹   π_{n+1} = ρ π_n,   where ρ = p(1−q)/(q(1−p)).

Hence
    π_n = ρ^{n−1} · p/(q(1−p)) · π_0,   n ≥ 1.

Define α = p/q. Normalizing Σ_n π_n = 1 then gives
    π_0 = 1 − α,
    π_n = (1 − α) · p/(q(1−p)) · ρ^{n−1},   n ≥ 1,

which requires ρ < 1; since q(1−p) − p(1−q) = q − p, we have ρ < 1 ⟺ p < q.
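The closed-form solution above can be sanity-checked numerically; a sketch with assumed illustration values p = 0.2, q = 0.5 (so α = 0.4, ρ = 0.25 — any 0 < p < q < 1 works):

```python
# Assumed values for illustration; any 0 < p < q < 1 works.
p, q = 0.2, 0.5
alpha = p / q
rho = p * (1 - q) / (q * (1 - p))

pi0 = 1 - alpha
pi = [pi0] + [pi0 * (p / (q * (1 - p))) * rho ** (n - 1) for n in range(1, 80)]

# The probabilities sum to 1 (up to the truncated geometric tail).
print(round(sum(pi), 9))                                    # 1.0

# Cut balance between 0 and 1: pi_0 p = pi_1 q (1 - p).
print(round(pi[0] * p, 9), round(pi[1] * q * (1 - p), 9))   # 0.12 0.12

# Cut balance between n and n+1: pi_n p(1-q) = pi_{n+1} q(1-p).
ok = all(abs(pi[n] * p * (1 - q) - pi[n + 1] * q * (1 - p)) < 1e-12
         for n in range(1, 78))
print(ok)                                                   # True
```

Both families of cut equations hold and the total mass is 1, so the formulas are self-consistent.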
A transition matrix is doubly stochastic if its columns, as well as its rows, sum to 1:

    Σ_{i=0}^{M} P_ij = 1   for every j.

Try π_j = 1/(M+1):

    Σ_{i=0}^{M} π_i P_ij = (1/(M+1)) Σ_{i=0}^{M} P_ij = 1/(M+1),

and Σ_{j=0}^{M} π_j = 1. Since the steady-state equations have a unique positive solution, the limiting probability of every state is 1/(M+1).
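A quick check of the doubly stochastic result on an assumed 3×3 example (rows and columns each sum to 1, so M = 2 and every limiting probability should be 1/3):

```python
# Assumed doubly stochastic matrix: each row AND each column sums to 1.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]

M = len(P) - 1
pi = [1 / (M + 1)] * (M + 1)

# pi_j = sum_i pi_i P_ij reduces to (1/(M+1)) * (column sum) = 1/(M+1).
piP = [sum(pi[i] * P[i][j] for i in range(M + 1)) for j in range(M + 1)]
print([round(x, 9) for x in piP])   # [0.333333333, 0.333333333, 0.333333333]
```

The uniform distribution passes through P unchanged, exactly as the column-sum argument predicts.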
Condition on the first transition:

    s_ij = δ_ij + Σ_{k=1}^{T} P_ik s_kj,

and note that transitions from recurrent states to transient states are impossible. In matrix form, S = (I − P_T)^{−1}, and the absorption probabilities are F = (I − P_T)^{−1} R.
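The recursion s_ij = δ_ij + Σ_k P_ik s_kj says S = I + P_T S, i.e. S = (I − P_T)^{−1}. A sketch with an assumed 2-state transient block (not from the slides), inverting with the 2×2 formula:

```python
# Assumed transient-to-transient block P_T; each row sums to less
# than 1, the remaining mass going to recurrent states.
PT = [[0.2, 0.3],
      [0.1, 0.4]]

# S = (I - P_T)^(-1) via the closed-form 2x2 inverse.
a, b = 1 - PT[0][0], -PT[0][1]
c, d = -PT[1][0], 1 - PT[1][1]
det = a * d - b * c
S = [[ d / det, -b / det],
     [-c / det,  a / det]]

# Check the first-transition identity S = I + P_T S entrywise.
ok = all(
    abs(S[i][j] - ((1.0 if i == j else 0.0)
                   + sum(PT[i][k] * S[k][j] for k in range(2)))) < 1e-12
    for i in range(2) for j in range(2))
print(ok)                 # True
print(round(S[0][0], 4))  # 1.3333
```

Entry s_ij of S is the expected number of visits to transient state j starting from transient state i.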
Solution:

         0    p1   q1
P =      q2   0    p2
         p3   q3   0

    π_1 + π_2 + π_3 = 1,
    π_1 = q_2 π_2 + p_3 π_3,
    π_2 = p_1 π_1 + q_3 π_3.
    P_00 = (1 − 1/3)(1 − 1/2) = 1/3,   P_01 = 1/3,
    P_02 = 0,   P_03 = (1 − 1/3)(1/2) = 1/3,
    P_10 = 0,   P_11 = 0,   P_12 = 1,   P_13 = 0,
    P_20 = (1 − 1/2)(1 − 1/3) = 1/3,   P_21 = 1/3,   P_22 = 0,   P_23 = (1 − 1/3)(1/2) = 1/3.

        1/3  1/3   0   1/3
         0    0    1    0
P  =    1/3  1/3   0   1/3
        1/3  1/3   0   1/3

        2/9  2/9  3/9  2/9
        3/9  3/9  0/9  3/9
P2 =    2/9  2/9  3/9  2/9
        2/9  2/9  3/9  2/9

and, since this P is doubly stochastic, in the long run

        1/4  1/4  1/4  1/4
        1/4  1/4  1/4  1/4
        1/4  1/4  1/4  1/4
        1/4  1/4  1/4  1/4

For a second chain

        1/3  1/6   0   1/2
         0    0    1    0
P  =    1/3  1/6   0   1/2
        1/3  1/6   0   1/2

and in the long run

        2/7  1/7  1/7  3/7
        2/7  1/7  1/7  3/7
        2/7  1/7  1/7  3/7
        2/7  1/7  1/7  3/7
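Both limits can be verified numerically; a sketch for the second chain, whose long-run rows should all equal (2/7, 1/7, 1/7, 3/7):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[1/3, 1/6, 0.0, 1/2],
     [0.0, 0.0, 1.0, 0.0],
     [1/3, 1/6, 0.0, 1/2],
     [1/3, 1/6, 0.0, 1/2]]

Pn = P
for _ in range(199):
    Pn = matmul(Pn, P)          # Pn = P^200

target = [2/7, 1/7, 1/7, 3/7]
ok = all(abs(Pn[i][j] - target[j]) < 1e-9
         for i in range(4) for j in range(4))
print(ok)   # True
```

Swapping in the first matrix instead gives rows of 1/4, matching the doubly stochastic case.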
Insurance Firms' Market Share
The current market position of an insurance firm and its two competitors is given by the following:

    1  El Rose Ins     12%
    2  Cte dOr Ins     40%
    3  Majestic Ins    48%

          .1  .3  .6
    P =   .1  .5  .4
          .1  .3  .6

Given this matrix, we wish to predict the market share of each firm after n years, starting from the initial market shares above:

    b(n) = b(0) P^n,

so

    b_1(1) = .12(.1) + .40(.1) + .48(.1) = 0.10,
    b_2(1) = .12(.3) + .40(.5) + .48(.3) = 0.38,
    b_3(1) = .12(.6) + .40(.4) + .48(.6) = 0.52.
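The one-step computation, and the eventual stabilization of the shares, can be sketched as follows (initial shares and P from the example; the long-run vector printed at the end is computed here by iteration, not stated in the slides):

```python
# Initial market shares and transition matrix from the example.
b = [0.12, 0.40, 0.48]
P = [[0.1, 0.3, 0.6],
     [0.1, 0.5, 0.4],
     [0.1, 0.3, 0.6]]

def step(b, P):
    """One year of switching: b(n+1) = b(n) P."""
    return [sum(b[i] * P[i][j] for i in range(3)) for j in range(3)]

b1 = step(b, P)
print([round(x, 2) for x in b1])    # [0.1, 0.38, 0.52]

# Iterate b(n) = b(0) P^n until the shares stabilize.
bn = b
for _ in range(50):
    bn = step(bn, P)
print([round(x, 4) for x in bn])    # [0.1, 0.375, 0.525]
```

After one year the shares match the hand computation, and after a few more iterations they settle at the stationary distribution of P.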
    p_{i,i+1} = F_i(ξ_i),   i = 1, 2, 3;
    p_44 = F_4(ξ_4);
    p_{i1} = 1 − F_i(ξ_i);
    otherwise, p_ij = 0.

        p11  p12   0    0
        p21   0   p23   0
P  =    p31   0    0   p34
        p41   0    0   p44

        1−F1(ξ1)  F1(ξ1)     0        0
        1−F2(ξ2)    0     F2(ξ2)      0
P  =    1−F3(ξ3)    0        0     F3(ξ3)
        1−F4(ξ4)    0        0     F4(ξ4)