the value of Xt, the values of Xs for s > t are not influenced by the values of Xu for
u < t. In other words, the probability of any particular future behavior of the
process, when its current state is known exactly, is not altered by additional
knowledge concerning its past behavior. A discrete time Markov chain is a
Markov process whose state space is a finite or countable set, and whose time (or
stage) index set is T = {0, 1, 2, ...}. In formal terms, the Markov property is that
    P{X_{n+1} = j | X_0 = i_0, ..., X_{n-1} = i_{n-1}, X_n = i} = P{X_{n+1} = j | X_n = i}        (2-1)

for all time points n and all states i_0, ..., i_{n-1}, i, j.
It is customary to label the state space of the Markov chain by the nonnegative
integers {0, 1, 2, ...} and to use X_n = i to represent the process being in state i at
time (or stage) n.
The probability of X_{n+1} being in state j given that X_n is in state i is called the
one-step transition probability and is denoted by P_ij^{n,n+1}. That is,

    P_ij^{n,n+1} = P{X_{n+1} = j | X_n = i}        (2-2)

The notation emphasizes that in general the transition probabilities are functions
not only of the initial and final states, but also of the time of transition.
If the one-step transition probabilities are independent of the time variable n, i.e.,
P_ij^{n,n+1} = P_ij, we say that the Markov chain has stationary transition probabilities.
We will limit our discussions to Markov processes with stationary transition
probabilities only. It is customary to arrange these numbers P_ij in a matrix, in the
infinite square array

        [ P_00  P_01  P_02  P_03 ... ]
        [ P_10  P_11  P_12  P_13 ... ]
    P = [ P_20  P_21  P_22  P_23 ... ]        (2-3)
        [ P_i0  P_i1  P_i2  P_i3 ... ]
        [  ...                       ]

which is called the transition probability matrix of the process. The (i+1)st row of P
is the probability distribution of the values of X_{n+1} under the condition that
X_n = i. If the number of states is finite, then P is a finite square matrix whose
order (the number of rows) equals the number of states.
Since probabilities are nonnegative and since the process must make a transition
into some state, it follows that

    P_ij >= 0    for i, j = 0, 1, 2, ...        (2-4)

    sum_{j=0}^inf P_ij = 1    for i = 0, 1, 2, ...        (2-5)
Let p_i = P{X_0 = i} denote the initial distribution. By the Markov property,

    P{X_n = i_n | X_0 = i_0, ..., X_{n-1} = i_{n-1}} = P_{i_{n-1}, i_n}        (2-6)

Thus,

    P{X_0 = i_0, X_1 = i_1, ..., X_n = i_n}
        = P{X_0 = i_0, X_1 = i_1, ..., X_{n-1} = i_{n-1}} P_{i_{n-1}, i_n}        (2-7)

By induction,

    P{X_0 = i_0, X_1 = i_1, ..., X_n = i_n} = p_{i_0} P_{i_0,i_1} P_{i_1,i_2} ... P_{i_{n-1},i_n}        (2-8)

More generally, for any states j_1, ..., j_m,

    P{X_{n+1} = j_1, ..., X_{n+m} = j_m | X_n = i} = P_{i,j_1} P_{j_1,j_2} ... P_{j_{m-1},j_m}        (2-9)
Pijn, n 1 Pij 0 if j i
Pijn , n 1 Pij 0 if j i 1
Therefore,
Pii Pi ,i 1 1 .
C (i,1)C ( N i,1)
C (i,1)C (5 i,1)
Pi ,i 1
0.1
C ( N ,2)
C (5,2)
i (5 i )
10
2 Transition probability matrices of a Markov chain
Let P_ij^(n) denote the probability that the process goes from state i to state j in n
transitions, i.e.,

    P_ij^(n) = P{X_{m+n} = j | X_m = i}        (2-10)

These satisfy

    P_ij^(n) = sum_{k=0}^inf P_ik P_kj^(n-1)        (2-11)

where

    P_ij^(0) = 1 if i = j,  0 if i != j        (2-12)

In other words, the n-step transition probabilities P_ij^(n) are the entries in the
matrix P^n, the nth power of P.
A more general form of Equation (2-11), known as the Chapman-Kolmogorov
equations, is

    P_ij^(n+m) = sum_{k=0}^inf P_ik^(n) P_kj^(m)        (2-13)

If p_j = P{X_0 = j} is the initial distribution, the (unconditional) probability of the
process being in state k at time n is

    P_k^(n) = P{X_n = k} = sum_{j=0}^inf p_j P_jk^(n)        (2-14)
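The Chapman-Kolmogorov identity can be checked numerically with a short helper; the two-state matrix below is purely illustrative.

```python
def mat_mul(A, B):
    # (i, j) entry of the product: sum_k A[i][k] * B[k][j]
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(P, n):
    # n-step transition matrix P^n (n >= 1)
    result = P
    for _ in range(n - 1):
        result = mat_mul(result, P)
    return result

# A hypothetical 2-state chain for illustration.
P = [[0.7, 0.3],
     [0.4, 0.6]]

P2, P3, P5 = mat_pow(P, 2), mat_pow(P, 3), mat_pow(P, 5)

# Chapman-Kolmogorov: P^(2+3) = P^2 * P^3, entry by entry.
CK = mat_mul(P2, P3)
assert all(abs(P5[i][j] - CK[i][j]) < 1e-12 for i in range(2) for j in range(2))
# Each row of P^n is still a probability distribution.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P5)
```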
Example 4  A Markov chain X_n on the states {0, 1, 2} has the transition
probability matrix

            0     1     2
        0 [ 0.1   0.2   0.7 ]
    P = 1 [ 0.2   0.2   0.6 ]
        2 [ 0.6   0.1   0.3 ]

The two-step transition probability P_01^(2) is the (0,1) entry of P^2:

    P_01^(2) = (0.1)(0.2) + (0.2)(0.2) + (0.7)(0.1) = 0.13

If the process starts in state 0, the distribution of X_n is the row vector e_1 P^n,
where e_1 = (1, 0, 0) is the unit vector with a 1 in the position of state 0.

What is P{X_3 = 1 | X_0 = 0}? We need P_01^(3):

    e_1 P^2 = (0.47, 0.13, 0.40),  e_1 P^3 = (0.313, 0.160, 0.527)

so P{X_3 = 1 | X_0 = 0} = P_01^(3) = 0.16.
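A short script confirms both transition probabilities of Example 4 by raising P to a power.

```python
# Verifies the two- and three-step transition probabilities of Example 4
# with plain Python (no external libraries).
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.1, 0.2, 0.7],
     [0.2, 0.2, 0.6],
     [0.6, 0.1, 0.3]]

P2 = mat_mul(P, P)
P3 = mat_mul(P2, P)

assert abs(P2[0][1] - 0.13) < 1e-12   # P_01^(2)
assert abs(P3[0][1] - 0.16) < 1e-12   # P_01^(3) = P{X_3 = 1 | X_0 = 0}
```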
Example 5  A Markov chain X_n on the states {0, 1, 2} has the transition
probability matrix

            0     1     2
        0 [ 0.3   0.2   0.5 ]
    P = 1 [ 0.5   0.1   0.4 ]
        2 [ 0.5   0.2   0.3 ]

and the initial distribution p_0 = P{X_0 = 0} = 0.5, p_1 = P{X_0 = 1} = 0.5.
Computing the powers of P,

          [ 0.44   0.18   0.38 ]          [ 0.412  0.182  0.406 ]
    P^2 = [ 0.40   0.19   0.41 ]    P^3 = [ 0.420  0.181  0.399 ]
          [ 0.40   0.18   0.42 ]          [ 0.420  0.182  0.398 ]

In particular P_00^(2) = 0.440, P_10^(2) = 0.400, P_00^(3) = 0.412, and
P_10^(3) = 0.420, so by (2-14),

    P{X_2 = 0} = sum_j p_j P_j0^(2) = 0.5(0.44) + 0.5(0.40) = 0.42

    P{X_3 = 0} = sum_j p_j P_j0^(3) = 0.5(0.412) + 0.5(0.420) = 0.416

Note that sum_{k=0}^N P_ik^(n) = 1 for every i and n: each row of P^n is itself a
probability distribution.
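The two unconditional probabilities of Example 5 can be reproduced directly from (2-14).

```python
# Verifies Example 5: P{X_2 = 0} = 0.42 and P{X_3 = 0} = 0.416 for the
# initial distribution p = (0.5, 0.5, 0).
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.3, 0.2, 0.5],
     [0.5, 0.1, 0.4],
     [0.5, 0.2, 0.3]]
p0 = [0.5, 0.5, 0.0]

P2 = mat_mul(P, P)
P3 = mat_mul(P2, P)

prob_X2_0 = sum(p0[j] * P2[j][0] for j in range(3))
prob_X3_0 = sum(p0[j] * P3[j][0] for j in range(3))
assert abs(prob_X2_0 - 0.42) < 1e-12
assert abs(prob_X3_0 - 0.416) < 1e-12
```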
Consider a Markov chain whose transition probability matrix is of the form

            0   1   2
        0 [ 1   0   0 ]
    P = 1 [ a   b   c ]
        2 [ 0   0   1 ]

where a, b, c >= 0 and a + b + c = 1. States 0 and 2 are absorbing. Starting from
state 1, let T = min{n >= 0; X_n = 0 or X_n = 2} be the time of absorption, and let
u = P{X_T = 0 | X_0 = 1} and v = E[T | X_0 = 1]. A first step analysis conditions
on the outcome of the first transition:

    u = sum_k P{X_T = 0 | X_1 = k} P{X_1 = k | X_0 = 1} = a(1) + b(u) + c(0)

Thus,

    u = a + b u        (2-15)

    u = a / (1 - b)        (2-16)

Similarly, at least one step always elapses before absorption, and with probability
b the process returns to state 1 and starts afresh:

    v = E[T | X_0 = 1] = 1 + b v        (2-17)

    v = 1 / (1 - b)        (2-18)
Now, let's consider the four state Markov chain whose transition probability
matrix is

            0     1     2     3
        0 [ 1     0     0     0   ]
    P = 1 [ P10   P11   P12   P13 ]
        2 [ P20   P21   P22   P23 ]
        3 [ 0     0     0     1   ]

Absorption now occurs in states 0 and 3, and states 1 and 2 are transient. The
probability of ultimate absorption in state 0 depends on the transient state in which
the process begins. Therefore, we must extend our notation to include the starting
state. Let

    T = min{n >= 0; X_n = 0 or X_n = 3}
    u_i = P{X_T = 0 | X_0 = i} for i = 1, 2   [the probability of being absorbed in state 0]
    v_i = E[T | X_0 = i] for i = 1, 2

with u_0 = 1, u_3 = 0, v_0 = v_3 = 0. Applying first step analysis yields

    u_1 = P10 + P11 u_1 + P12 u_2        (2-19)
    u_2 = P20 + P21 u_1 + P22 u_2        (2-20)

    v_1 = 1 + P11 v_1 + P12 v_2        (2-21)
    v_2 = 1 + P21 v_1 + P22 v_2        (2-22)
Example 6  A Markov chain X_n has the transition probability matrix

            0     1     2     3
        0 [ 1     0     0     0   ]
    P = 1 [ 0.4   0.3   0.2   0.1 ]
        2 [ 0.1   0.3   0.3   0.3 ]
        3 [ 0     0     0     1   ]

Equations (2-19)-(2-20) become

    u_1 = 0.4 + 0.3 u_1 + 0.2 u_2
    u_2 = 0.1 + 0.3 u_1 + 0.3 u_2

whose solution is u_1 = 30/43 and u_2 = 19/43. (If the initial state is random with
distribution {p_j}, then P{X_T = 0} = sum_j p_j u_j.)

Now let's consider the probability of the process being absorbed in state 3.
Applying first step analysis yields

    u'_1 = 0.1 + 0.3 u'_1 + 0.2 u'_2
    u'_2 = 0.3 + 0.3 u'_1 + 0.3 u'_2

with solution u'_1 = 13/43 and u'_2 = 24/43. Note that u_i + u'_i = 1: starting
from either transient state, the process is absorbed in state 0 or state 3 with
certainty.
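The four absorption probabilities of Example 6 can be obtained exactly with rational arithmetic.

```python
from fractions import Fraction as F

def solve2(a11, a12, b1, a21, a22, b2):
    # Cramer's rule for the 2x2 system a11 x + a12 y = b1, a21 x + a22 y = b2
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det

# u1 = 0.4 + 0.3 u1 + 0.2 u2  ->  0.7 u1 - 0.2 u2 = 0.4, etc.
u1, u2 = solve2(F(7, 10), F(-2, 10), F(4, 10), F(-3, 10), F(7, 10), F(1, 10))
assert (u1, u2) == (F(30, 43), F(19, 43))

# Absorption in state 3 instead:
w1, w2 = solve2(F(7, 10), F(-2, 10), F(1, 10), F(-3, 10), F(7, 10), F(3, 10))
assert (w1, w2) == (F(13, 43), F(24, 43))
assert u1 + w1 == 1 and u2 + w2 == 1
```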
Now, we want to develop a general form for an N+1 state Markov chain. Let
states 0, 1, ..., r-1 be transient and let states r, ..., N be absorbing (P_jj = 1 for
r <= j <= N). The transition probability matrix then has the block form

    P = [ Q  R ]
        [ 0  I ]        (2-23)

where Q is the r x r matrix of transition probabilities among the transient states,
R is the r x (N-r+1) matrix of transition probabilities from transient to absorbing
states, 0 is a zero matrix, and I is the identity matrix. Let T be the time of
absorption and let

    U_ik = P{X_T = k | X_0 = i} ,  0 <= i < r ,  r <= k <= N        (2-24)

be the probability of absorption in state k starting from transient state i. First step
analysis yields

    U_ik = P_ik + sum_{j=0}^{r-1} P_ij U_jk ,  0 <= i < r ,  r <= k <= N        (2-25)
More generally, let g be a function defined on the transient states, and let

    w_i = E[ sum_{n=0}^{T-1} g(X_n) | X_0 = i ]        (2-26)

Two special cases are of principal interest. First, taking

    g(X_n) = 1 if X_n is transient,  0 otherwise        (2-27)

makes the sum count the number of steps before absorption, so that

    w_i = E[T | X_0 = i] = v_i        (2-28)

Second, for a fixed transient state k, taking

    g_k(X_n) = 1 if X_n = k ,  0 if X_n != k ,  0 <= k < r        (2-29)

gives

    W_ik = E[ sum_{n=0}^{T-1} g_k(X_n) | X_0 = i ]        (2-30)

the mean number of visits to state k before absorption. First step analysis yields
the general recursion

    w_i = g(i) + sum_{j=0}^{r-1} P_ij w_j ,  0 <= i < r        (2-31)

With the choice (2-27) this gives v_i = E[T | X_0 = i] by

    v_i = 1 + sum_{j=0}^{r-1} P_ij v_j ,  0 <= i < r        (2-32)

and with the choice (2-29),

    W_ik = d_ik + sum_{j=0}^{r-1} P_ij W_jk ,  0 <= i < r        (2-33)

where d_ik = 1 if i = k and d_ik = 0 otherwise.
Example 7  A Markov chain X_n on the states {0, 1, ..., 5} has the transition
probability matrix

            0    1     2     3     4     5
        0 [ 0    0.9   0     0     0     0.1 ]
        1 [ 0    0.5   0.4   0     0     0.1 ]
    P = 2 [ 0    0     0.6   0.2   0.1   0.1 ]
        3 [ 0    0     0.4   0.5   0     0.1 ]
        4 [ 0    0     0.4   0     0.5   0.1 ]
        5 [ 0    0     0     0     0     1.0 ]

State 5 is absorbing. Let w_i = W_i2 denote the mean number of visits to state 2
before absorption, starting from state i. Equation (2-33) gives, for instance,

    w_4 = 0.4 w_2 + 0.5 w_4

and similar equations for the other transient states. The unique solution is
w_0 = 4.5, w_1 = 5.0, w_2 = 6.25, w_3 = w_4 = 5.0.
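As a consistency check, the claimed solution must satisfy the recursion (2-33) in every transient state.

```python
# Checks that the stated solution of Example 7 satisfies w_i = d_i2 + sum_j Q_ij w_j.
Q = [[0.0, 0.9, 0.0, 0.0, 0.0],    # transition probabilities among states 0..4
     [0.0, 0.5, 0.4, 0.0, 0.0],
     [0.0, 0.0, 0.6, 0.2, 0.1],
     [0.0, 0.0, 0.4, 0.5, 0.0],
     [0.0, 0.0, 0.4, 0.0, 0.5]]
w = [4.5, 5.0, 6.25, 5.0, 5.0]      # claimed mean visits to state 2

for i in range(5):
    delta = 1.0 if i == 2 else 0.0
    rhs = delta + sum(Q[i][j] * w[j] for j in range(5))
    assert abs(w[i] - rhs) < 1e-12
```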
Consider now the two state Markov chain with

            0      1
    P = 0 [ 1-a    a   ]
        1 [ b      1-b ]        (2-34)

where 0 < a, b < 1. When b = 1 - a the two rows are identical:

            0      1
    P = 0 [ 1-a    a ]
        1 [ 1-a    a ]

In this case P_ij = P{X_{n+1} = j | X_n = i} = p_c, where p_c = 1 - a (for j = 0) or
a (for j = 1), regardless of the current state i; that is, X_{n+1} is independent of
X_n. Moreover, for any probability distribution (p_i) with sum_i p_i = 1,

    P{X_{n+1} = j} = sum_i p_i P_ij = p_c sum_i p_i = p_c

so the distribution of X_{n+1} is (1-a, a) whatever the distribution of X_n.
For general 0 < a, b < 1,

    P^n = 1/(a+b) [ b  a ]  +  (1-a-b)^n/(a+b) [  a  -a ]
                  [ b  a ]                      [ -b   b ]        (2-35)

[Proof]
Let

    A = [ b  a ]    and    B = [  a  -a ]
        [ b  a ]               [ -b   b ]

so that the claim reads P^n = (1/(a+b)) A + ((1-a-b)^n/(a+b)) B. Direct
computation shows

    AP = A    and    BP = (1-a-b) B

It is easily seen that the formula holds for n = 1, i.e., P = (1/(a+b)) A +
((1-a-b)/(a+b)) B. Assume the formula is true for n; then

    P^{n+1} = P^n P = [ (1/(a+b)) A + ((1-a-b)^n/(a+b)) B ] P
            = (1/(a+b)) AP + ((1-a-b)^n/(a+b)) BP
            = (1/(a+b)) A + ((1-a-b)^{n+1}/(a+b)) B

The formula holds for n+1 if it holds for n. Therefore, it holds for all n.
Note that |1-a-b| < 1 when 0 < a, b < 1; therefore (1-a-b)^n -> 0 as n -> inf, and

    lim_{n->inf} P^n = [ b/(a+b)  a/(a+b) ]
                       [ b/(a+b)  a/(a+b) ]        (2-36)
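The closed form (2-35) and the limit (2-36) can be verified numerically; a = 0.67 and b = 0.75 below anticipate Example 9.

```python
# Numerical check of the closed form (2-35) for P^n of a two-state chain.
a, b = 0.67, 0.75
P = [[1 - a, a], [b, 1 - b]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def closed_form(n):
    s, t = 1 / (a + b), (1 - a - b) ** n / (a + b)
    return [[s * b + t * a, s * a - t * a],
            [s * b - t * b, s * a + t * b]]

Pn = P
for n in range(1, 8):
    C = closed_form(n)
    assert all(abs(Pn[i][j] - C[i][j]) < 1e-12 for i in range(2) for j in range(2))
    Pn = mat_mul(Pn, P)

# Limiting probabilities b/(a+b) and a/(a+b), per (2-36).
assert abs(closed_form(200)[0][0] - b / (a + b)) < 1e-12
```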
Next, let xi_0, xi_1, xi_2, ... be independent, identically distributed nonnegative
integer-valued random variables with P{xi = i} = a_i, where a_i >= 0 and
sum_{i=0}^inf a_i = 1. Let X_n = max{xi_0, xi_1, ..., xi_n}, n = 0, 1, .... Since
xi_0, xi_1, ..., xi_n determine X_n and X_{n+1} = max{X_n, xi_{n+1}}, the process
X_n is a Markov chain with transition probabilities

    P_ij = P{max(i, xi) = j} = { 0   ,  j < i
                               { A_i ,  j = i        (2-37)
                               { a_j ,  j > i

The transition probability matrix is therefore

            0     1     2     3
        0 [ A_0   a_1   a_2   a_3 ... ]
    P = 1 [ 0     A_1   a_2   a_3 ... ]
        2 [ 0     0     A_2   a_3 ... ]        (2-38)
        3 [ 0     0     0     A_3 ... ]

where

    A_i = a_0 + a_1 + ... + a_i        (2-39)
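To make (2-37)-(2-38) concrete, the snippet below builds the matrix for a hypothetical three-point distribution and checks each row against the distribution of max(i, xi).

```python
# Builds the transition matrix (2-38) for the successive-maxima chain,
# using a hypothetical distribution a = (0.2, 0.3, 0.5).
a = [0.2, 0.3, 0.5]
N = len(a)
A = [sum(a[:i + 1]) for i in range(N)]   # A_i = a_0 + ... + a_i

P = [[A[i] if j == i else a[j] if j > i else 0.0 for j in range(N)]
     for i in range(N)]

# Each row is the distribution of max(i, xi): mass A_i at i, a_j at j > i.
for i in range(N):
    assert abs(sum(P[i]) - 1.0) < 1e-12
    for j in range(N):
        direct = sum(a[k] for k in range(N) if max(i, k) == j)
        assert abs(P[i][j] - direct) < 1e-12
```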
Example 8  Suppose xi_1, xi_2, ... represent successive bids on a certain asset that is
offered for sale, and X_n = max{xi_1, ..., xi_n} is the maximum that is bid up to stage
n. Suppose that the bid that is accepted is the first bid that equals or exceeds a
prescribed level M. What is the average time of sale, i.e., the average time that is
required to accept a bid?
[Solution]
Let T = min{n >= 1; xi_n >= M}. Conditioning on the first bid,

    E(T) = E(T | xi_1 >= M) P{xi_1 >= M} + E(T | xi_1 < M) P{xi_1 < M}
         = 1 P{xi_1 >= M} + E(T | xi_1 < M) P{xi_1 < M}

Since future bids xi_2, xi_3, ... have the same probabilistic properties as in the
original problem, E(T | xi_1 < M) = 1 + E(T), and it yields

    E(T) = P{xi_1 >= M} + [1 + E(T)] P{xi_1 < M} = 1 + E(T) P{xi_1 < M}

so that

    E(T) = 1 / P{xi_1 >= M}        (2-40)
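A Monte Carlo sketch of (2-40); the bid distribution (uniform on {1, ..., 10}) and the level M = 8 are made-up values.

```python
import random

# Monte Carlo check that the mean time to the first bid >= M equals
# 1 / P{bid >= M}.  Bids are hypothetical uniform draws on {1,...,10}, M = 8.
random.seed(42)
M = 8
p_accept = 3 / 10                     # P{bid >= 8} for uniform on 1..10

trials = 200_000
total = 0
for _ in range(trials):
    t = 1
    while random.randint(1, 10) < M:  # keep bidding until a bid reaches M
        t += 1
    total += t
avg = total / trials

assert abs(avg - 1 / p_accept) < 0.05  # 1/p = 10/3
```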
Similarly, let X_n = xi_1 + xi_2 + ... + xi_n be the partial sums of independent
nonnegative integer-valued random variables with common distribution
P{xi = i} = a_i. Then X_n is a Markov chain with P_ij = P{xi = j - i} = a_{j-i}
for j >= i and P_ij = 0 for j < i:

            0     1     2     3
        0 [ a_0   a_1   a_2   a_3 ... ]
    P = 1 [ 0     a_0   a_1   a_2 ... ]
        2 [ 0     0     a_0   a_1 ... ]        (2-41)
        3 [ 0     0     0     a_0 ... ]
A success run process moves, at each stage, either from state i up to state i + 1 (a
success) or back to state 0 (a failure). Its transition probability matrix is

            0     1     2     3     4
        0 [ p_0   q_0   0     0     0   ... ]
        1 [ p_1   0     q_1   0     0   ... ]
    P = 2 [ p_2   0     0     q_2   0   ... ]        (2-42)
        3 [ p_3   0     0     0     q_3 ... ]

where q_i = 1 - p_i.
We generalize the above success run process to cases in which state i + 1 can
only be reached from state i and the run length is renewed (set to zero) if a failure
occurs. The corresponding transition probability matrix is therefore given by

            0     1     2     3     4
        0 [ p_0   q_0   0     0     0   ... ]
        1 [ p_1   r_1   q_1   0     0   ... ]
    P = 2 [ p_2   0     r_2   q_2   0   ... ]        (2-43)
        3 [ p_3   0     0     r_3   q_3 ... ]
Note that state 0 can be reached in one transition from any other state.
Another example of a success run process is the current age in a renewal process.
Consider a light bulb whose lifetime, measured in discrete units, is a random
variable xi, where

    P{xi = i} = a_i for i = 1, 2, ...,  and  sum_{i=1}^inf a_i = 1

Each bulb is replaced by a new one when it burns out. Suppose the first bulb lasts
until time xi_1, the second bulb until time xi_1 + xi_2, and the nth bulb until time
xi_1 + ... + xi_n, where the individual lifetimes xi_1, xi_2, ... are independent
observations of the random variable xi. Let X_n be the age of the bulb in service at
time n. Such a current age process is depicted in the following figure. Note that a
failure can only occur at integer times and, by convention, we set X_n = 0 at the
time of a failure.
[Figure: a sample path of the current age X_n (vertical axis, 0 to 5) plotted against time n.]
The current age process is a success run process with

    p_k = a_{k+1} / (a_{k+1} + a_{k+2} + ...)        (2-44)

where p_k is the probability of returning to state 0 at the next stage, given that the
current state is state k. Note also that the age equals k at time n precisely when a
renewal occurred at time n - k and the bulb installed then survives past age k;
hence

    P{X_n = k} = P{X_{n-k} = 0, X_n = k} = P{X_{n-k} = 0} sum_{i=1}^inf a_{k+i}
Consider the success runs Markov chain on N+1 states whose transition matrix is

              0       1     2     3   ...  N-1       N
        0   [ 1       0     0     0   ...  0         0       ]
        1   [ p_1     r_1   q_1   0   ...  0         0       ]
    P = 2   [ p_2     0     r_2   q_2 ...  0         0       ]        (2-45)
        ...
        N-1 [ p_{N-1} 0     0     0   ...  r_{N-1}   q_{N-1} ]
        N   [ 0       0     0     0   ...  0         1       ]

Note that states 0 and N are absorbing states. Let T be the hitting time to states 0
or N, i.e., T = min{n >= 0; X_n = 0 or X_n = N}.
It can be shown that

    u_k = P{X_T = 0 | X_0 = k}
        = 1 - [q_k/(p_k+q_k)] [q_{k+1}/(p_{k+1}+q_{k+1})] ... [q_{N-1}/(p_{N-1}+q_{N-1})] ,
          k = 1, ..., N-1        (2-46)

with u_0 = 1 and u_N = 0. (To reach N before 0, the chain must climb from k to N
through an unbroken run of successes; from state j, given that the process
eventually leaves j, it moves up with probability q_j/(p_j + q_j).)

The mean hitting time v_k = E[T | X_0 = k] is

    v_k = 1/(p_k+q_k) + pi_{k,k+1}/(p_{k+1}+q_{k+1}) + ... + pi_{k,N-1}/(p_{N-1}+q_{N-1}) ,
          k = 1, ..., N-1        (2-47)

where

    pi_{kj} = [q_k/(p_k+q_k)] [q_{k+1}/(p_{k+1}+q_{k+1})] ... [q_{j-1}/(p_{j-1}+q_{j-1})] ,  k < j        (2-48)

For a given state j (0 < j < N), the mean total number of visits to state j starting
from state i, W_ij, is

    W_ij = { 1/(p_i+q_i)        ,  j = i
           { pi_{ij}/(p_j+q_j)  ,  i < j        (2-49)
           { 0                  ,  i > j
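A sketch cross-checking (2-46) and (2-47) on a small chain (N = 3) against a direct first-step solve; the p, q values are illustrative.

```python
# Cross-checks (2-46)-(2-47) for a small success-runs chain (N = 3).
p = {1: 0.3, 2: 0.2}          # failure probabilities p_k
q = {1: 0.5, 2: 0.6}          # success probabilities q_k
r = {k: 1 - p[k] - q[k] for k in p}   # r_k = stay put
N = 3

def u(k):
    # (2-46): u_k = 1 - prod_{j=k}^{N-1} q_j/(p_j+q_j)
    prod = 1.0
    for j in range(k, N):
        prod *= q[j] / (p[j] + q[j])
    return 1 - prod

def v(k):
    # (2-47): v_k = sum_{j=k}^{N-1} pi_{kj} / (p_j+q_j), with pi_{kk} = 1
    total, prod = 0.0, 1.0
    for j in range(k, N):
        total += prod / (p[j] + q[j])
        prod *= q[j] / (p[j] + q[j])
    return total

# First-step equations: u_1 = p_1 + r_1 u_1 + q_1 u_2 ; u_2 = p_2 + r_2 u_2
u2 = p[2] / (1 - r[2])
u1 = (p[1] + q[1] * u2) / (1 - r[1])
assert abs(u(1) - u1) < 1e-12 and abs(u(2) - u2) < 1e-12

# v_1 = 1 + r_1 v_1 + q_1 v_2 ; v_2 = 1 + r_2 v_2
v2 = 1 / (1 - r[2])
v1 = (1 + q[1] * v2) / (1 - r[1])
assert abs(v(1) - v1) < 1e-12 and abs(v(2) - v2) < 1e-12
```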
A one-dimensional random walk is a Markov chain on the states 0, 1, 2, ... in
which, from state i, a transition is possible only to the neighboring states i - 1 and
i + 1, or back to i itself. Its transition probability matrix is

            0     1     2     3
        0 [ r_0   p_0   0     0   ... ]
        1 [ q_1   r_1   p_1   0   ... ]
    P = 2 [ 0     q_2   r_2   p_2 ... ]        (2-50)
        ...
        i [ ...   q_i   r_i   p_i ... ]

where p_i, q_i, r_i >= 0 and q_i + r_i + p_i = 1.
The one-dimensional random walk can be used to depict the fortune of a player
(player A) engaged in a series of contests. Suppose that a player with fortune k
plays a game against an infinitely rich adversary (player B) and has probability p_k
of winning one unit and probability q_k = 1 - p_k of losing one unit in the next
contest, and that r_0 = 1. The process X_n, where X_n represents his fortune after n
contests, is a random walk. Once the state k = 0 is reached, the process remains in
that state. The event of reaching state 0 is known as the "gambler's ruin."

If the adversary (player B) also starts with a limited fortune l and player A has an
initial fortune k (k + l = N), we then consider the Markov chain process X_n
representing player A's fortune after n contests. The transition probability matrix is
              0     1     2     3   ...  N-1       N
        0   [ 1     0     0     0   ...  0         0       ]
        1   [ q_1   r_1   p_1   0   ...  0         0       ]
    P = 2   [ 0     q_2   r_2   p_2 ...  0         0       ]        (2-51)
        ...
        N-1 [ 0     0     0     0   ...  r_{N-1}   p_{N-1} ]
        N   [ 0     0     0     0   ...  0         1       ]
If we further take p_i = p and q_i = q = 1 - p for i = 1, ..., N-1 (so that r_i = 0),
the matrix becomes

            0   1   2   3  ...  N
        0 [ 1   0   0   0  ...  0 ]
        1 [ q   0   p   0  ...  0 ]
    P = 2 [ 0   q   0   p  ...  0 ]        (2-52)
        ...
        N [ 0   0   0   0  ...  1 ]

Let

    u_i = P{X_T = 0 | X_0 = i}        (2-53)

denote the probability of ruin starting from fortune i. It can be shown that

    u_i = { (N - i)/N                           ,  when p = q = 0.5
          { [(q/p)^i - (q/p)^N] / [1 - (q/p)^N] ,  when p != q        (2-54)

Also, u_0 = 1 and u_N = 0.
"Gambler's ruin" is the event that the process reaches state 0 before reaching state
N. This event can be stated more formally by introducing the hitting time T, the
random time at which the process first reaches state 0 or N:

    T = min{n >= 0; X_n = 0 or X_n = N}

The event X_T = 0 is the event of gambler's ruin, and its probability starting from
the initial state k is

    U_k0 = u_k = P{X_T = 0 | X_0 = k}

First step analysis gives

    u_k = q u_{k-1} + p u_{k+1}  for k = 1, ..., N-1        (2-55)

with u_0 = 1, u_N = 0.

Let x_k = u_k - u_{k-1} for k = 1, ..., N. Using p + q = 1, we have
u_k = q u_k + p u_k, and subtracting this from (2-55),

    0 = p(u_{k+1} - u_k) - q(u_k - u_{k-1})

i.e.,

    p x_{k+1} - q x_k = 0  for k = 1, ..., N-1        (2-56)

Written out,

    p x_2 - q x_1 = 0
    p x_3 - q x_2 = 0
    ...
    p x_N - q x_{N-1} = 0
Therefore,

    x_2 = (q/p) x_1 ,  x_3 = (q/p) x_2 = (q/p)^2 x_1 ,  ... ,  x_N = (q/p)^{N-1} x_1

Next, sum the differences x_k:

    x_1 = u_1 - u_0 = u_1 - 1
    x_2 = u_2 - u_1 ,  so  x_1 + x_2 = u_2 - 1
    x_3 = u_3 - u_2 ,  so  x_1 + x_2 + x_3 = u_3 - 1
    ...
    x_1 + ... + x_k = u_k - 1
    ...
    x_1 + ... + x_N = u_N - 1 = -1

Therefore, the equation for general k gives

    u_k = 1 + x_1 + ... + x_k = 1 + [1 + (q/p) + ... + (q/p)^{k-1}] x_1  for k = 1, ..., N-1        (2-57)
Since u_N = 0, it yields

    0 = 1 + [1 + (q/p) + ... + (q/p)^{N-1}] x_1

    x_1 = -1 / [1 + (q/p) + ... + (q/p)^{N-1}]        (2-58)

Now,

    1 + (q/p) + ... + (q/p)^{k-1} = { k                            ,  if p = q = 0.5
                                    { [1 - (q/p)^k] / [1 - (q/p)]  ,  if p != q        (2-59)
Hence,

    u_k = { 1 - (k/N) = (N - k)/N ,  when p = q = 0.5
          { 1 - [1 - (q/p)^k]/[1 - (q/p)^N]
              = [(q/p)^k - (q/p)^N] / [1 - (q/p)^N] ,  when p != q        (2-60)
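The closed form (2-60) can be checked against the first-step recursion (2-55); p = 0.6 and N = 6 are illustrative values.

```python
# Checks the gambler's-ruin formula (2-60) against u_k = q u_{k-1} + p u_{k+1}.
p, N = 0.6, 6
q = 1 - p

def u(k):
    rho = q / p
    return (rho ** k - rho ** N) / (1 - rho ** N)

assert abs(u(0) - 1) < 1e-12 and abs(u(N)) < 1e-12
for k in range(1, N):
    assert abs(u(k) - (q * u(k - 1) + p * u(k + 1))) < 1e-12
```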
Returning to the general random walk (2-51), in which states 0 and N are
absorbing,

              0     1     2   ...  N-1       N
        0   [ 1     0     0   ...  0         0       ]
        1   [ q_1   r_1   p_1 ...  0         0       ]
    P = ...
        N-1 [ 0     0     0   ...  r_{N-1}   p_{N-1} ]
        N   [ 0     0     0   ...  0         1       ]        (2-61)

the probability of ruin (absorption in state 0) is

    u_i = (rho_i + rho_{i+1} + ... + rho_{N-1}) / (1 + rho_1 + ... + rho_{N-1}) ,  i = 1, ..., N-1        (2-62)

where

    rho_k = (q_1 q_2 ... q_k) / (p_1 p_2 ... p_k) ,  k = 1, ..., N-1 ,  and rho_0 = 1        (2-63)

The mean hitting time is

    v_k = (Phi_1 + ... + Phi_{N-1}) (1 + rho_1 + ... + rho_{k-1}) / (1 + rho_1 + ... + rho_{N-1})
          - (Phi_1 + ... + Phi_{k-1}) ,  k = 1, ..., N-1        (2-64)

where

    Phi_k = rho_k [ 1/(rho_0 q_1) + 1/(rho_1 q_2) + ... + 1/(rho_{k-1} q_k) ] ,  k = 1, ..., N-1        (2-65)

For states 0 < i, k < N, the mean total number of visits to state k starting from
state i is

    W_ik = { (1 + rho_1 + ... + rho_{i-1})(rho_k + ... + rho_{N-1}) / [q_k rho_{k-1} (1 + rho_1 + ... + rho_{N-1})] ,  i <= k
           { (1 + rho_1 + ... + rho_{k-1})(rho_i + ... + rho_{N-1}) / [q_k rho_{k-1} (1 + rho_1 + ... + rho_{N-1})] ,  i >= k        (2-66)
[Proof]
(1) First step analysis gives

    u_k = p_k u_{k+1} + r_k u_k + q_k u_{k-1}  for k = 1, 2, ..., N-1 ,  with u_0 = 1 , u_N = 0

Subtracting u_k = p_k u_k + r_k u_k + q_k u_k (since p_k + r_k + q_k = 1),

    0 = p_k (u_{k+1} - u_k) + q_k (u_{k-1} - u_k)

Let x_k = u_k - u_{k-1}, k = 1, 2, ..., N, and we have

    0 = p_k x_{k+1} - q_k x_k ,  i.e.,  x_{k+1} = (q_k/p_k) x_k

Therefore,

    x_2 = (q_1/p_1) x_1 ,  x_3 = (q_2/p_2) x_2 = (q_1 q_2)/(p_1 p_2) x_1 ,  ... ,
    x_k = (q_1 q_2 ... q_{k-1})/(p_1 p_2 ... p_{k-1}) x_1

Let rho_k = (q_1 q_2 ... q_k)/(p_1 p_2 ... p_k); then x_k = rho_{k-1} x_1 for
k = 1, 2, ..., N (with rho_0 = 1). Summing,

    x_1 = u_1 - u_0 ,  so  x_1 = u_1 - 1
    x_2 = u_2 - u_1 ,  so  x_1 + x_2 = u_2 - 1
    x_3 = u_3 - u_2 ,  so  x_1 + x_2 + x_3 = u_3 - 1
    ...
    x_1 + x_2 + ... + x_k = u_k - 1  (k = 1, 2, ..., N)
    x_1 + x_2 + ... + x_N = u_N - 1 = -1

Thus

    u_k = 1 + x_1 + x_2 + ... + x_k = 1 + (1 + rho_1 + ... + rho_{k-1}) x_1

Since u_N = 0, it yields

    0 = u_N = 1 + (1 + rho_1 + ... + rho_{N-1}) x_1

    x_1 = -1 / (1 + rho_1 + ... + rho_{N-1})

Therefore,

    u_k = 1 - (1 + rho_1 + ... + rho_{k-1}) / (1 + rho_1 + ... + rho_{N-1})
        = (rho_k + ... + rho_{N-1}) / (1 + rho_1 + ... + rho_{N-1})

which is (2-62).
(2) For the mean hitting times, first step analysis gives

    v_k = 1 + p_k v_{k+1} + r_k v_k + q_k v_{k-1} ,  k = 1, ..., N-1 ,  with v_0 = v_N = 0

With x_k = v_k - v_{k-1}, the same manipulation yields

    0 = 1 + p_k x_{k+1} - q_k x_k ,  i.e.,  x_{k+1} = (q_k/p_k) x_k - 1/p_k

Iterating,

    x_2 = (q_1/p_1) x_1 - 1/p_1 = rho_1 x_1 - Phi_1
    x_3 = (q_2/p_2) x_2 - 1/p_2 = rho_2 x_1 - [q_2/(p_1 p_2) + 1/p_2] = rho_2 x_1 - Phi_2
    ...
    x_k = rho_{k-1} x_1 - Phi_{k-1}

where Phi_{k-1} = rho_{k-1} [1/(rho_0 q_1) + 1/(rho_1 q_2) + ... + 1/(rho_{k-2} q_{k-1})]
as in (2-65); note the identity rho_{j-1} q_j = rho_j p_j, so that
1/(rho_{j-1} q_j) = 1/(rho_j p_j). Since v_0 = 0,

    x_1 = v_1 - v_0 = v_1
    x_1 + x_2 = v_2 ,  ... ,  x_1 + x_2 + ... + x_k = v_k  (k = 1, 2, ..., N)

so

    v_k = x_1 + x_2 + ... + x_k
        = (1 + rho_1 + ... + rho_{k-1}) x_1 - (Phi_1 + ... + Phi_{k-1})

Let k = N: since v_N = 0,

    0 = (1 + rho_1 + ... + rho_{N-1}) x_1 - (Phi_1 + ... + Phi_{N-1})

    x_1 = (Phi_1 + ... + Phi_{N-1}) / (1 + rho_1 + ... + rho_{N-1})

Substituting x_1 back gives (2-64).
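A numerical cross-check of (2-62)-(2-65) on a small chain (N = 3), against a direct first-step solve; the p, q values are illustrative.

```python
# Cross-checks (2-62)-(2-65) for a small birth-death chain (N = 3).
p = {1: 0.5, 2: 0.4}
q = {1: 0.3, 2: 0.4}
r = {k: 1 - p[k] - q[k] for k in p}
N = 3

rho = {0: 1.0}
for k in range(1, N):
    rho[k] = rho[k - 1] * q[k] / p[k]

S = sum(rho[k] for k in range(N))          # 1 + rho_1 + ... + rho_{N-1}
u = {i: sum(rho[k] for k in range(i, N)) / S for i in range(1, N)}   # (2-62)

phi = {k: rho[k] * sum(1 / (rho[j - 1] * q[j]) for j in range(1, k + 1))
       for k in range(1, N)}                                         # (2-65)
phi_total = sum(phi[k] for k in range(1, N))
v = {k: phi_total * sum(rho[j] for j in range(k)) / S
        - sum(phi[j] for j in range(1, k)) for k in range(1, N)}     # (2-64)

# Direct first-step solve for comparison:
#   u1 = q1 + r1 u1 + p1 u2 ; u2 = q2 u1 + r2 u2
u2_coeff = q[2] / (1 - r[2])
u1_direct = q[1] / (1 - r[1] - p[1] * u2_coeff)
u2_direct = u2_coeff * u1_direct
assert abs(u[1] - u1_direct) < 1e-12 and abs(u[2] - u2_direct) < 1e-12

#   v1 = 1 + r1 v1 + p1 v2 ; v2 = 1 + q2 v1 + r2 v2
v1_direct = (1 + p[1] / (1 - r[2])) / (1 - r[1] - p[1] * q[2] / (1 - r[2]))
v2_direct = (1 + q[2] * v1_direct) / (1 - r[2])
assert abs(v[1] - v1_direct) < 1e-12 and abs(v[2] - v2_direct) < 1e-12
```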
Return now to the block form (2-23),

    P = [ Q  R ]
        [ 0  I ]        (2-67)

Matrix multiplication gives

    P^2 = [ Q^2   (I + Q) R ]
          [ 0     I         ]        (2-68)

    P^3 = [ Q^3   (I + Q + Q^2) R ]
          [ 0     I               ]        (2-69)

and in general

    P^n = [ Q^n   (I + Q + ... + Q^{n-1}) R ]
          [ 0     I                         ]        (2-70)
Let W_ij^(n) be the mean total number of visits to state j up to stage n for a
Markov chain starting from state i, i.e.,

    W_ij^(n) = E[ sum_{l=0}^n 1{X_l = j} | X_0 = i ]        (2-71)

where 1{X_l = j} = 1 if X_l = j and 0 if X_l != j. Considering 1{X_l = j} as a
Bernoulli random variable, we have

    E[ 1{X_l = j} | X_0 = i ] = 1 P{X_l = j | X_0 = i} + 0 P{X_l != j | X_0 = i}
                              = P_ij^(l)        (2-72)
Therefore,

    W_ij^(n) = E[ sum_{l=0}^n 1{X_l = j} | X_0 = i ]
             = sum_{l=0}^n E[ 1{X_l = j} | X_0 = i ] = sum_{l=0}^n P_ij^(l)        (2-73)

For transient states 0 <= i, j < r the relevant entries of P^l are those of Q^l, so in
matrix form

    W^(n) = I + Q + Q^2 + ... + Q^n        (2-74)

    W^(n) = I + Q (I + Q + ... + Q^{n-1}) = I + Q W^(n-1)        (2-75)

In particular, Q_ij^(0) = 1 if i = j and 0 if i != j.

The limit of W_ij^(n) represents the expected value of the total visits to state j
from the initial state i (the limit exists because Q^n -> 0: the process eventually
leaves the transient states), i.e.,

    W = I + Q + Q^2 + ...        (2-76)

and W = I + QW, or entrywise,

    W_ij = d_ij + sum_{k=0}^{r-1} P_ik W_kj ,  0 <= i, j < r        (2-77)

Therefore,

    W - QW = (I - Q) W = I        (2-78)

    W = (I - Q)^{-1}        (2-79)
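Equation (2-79) can be exercised on Example 6: invert I - Q for its two transient states (the row sums of W give the mean time to absorption).

```python
from fractions import Fraction as F

# Fundamental matrix W = (I - Q)^(-1) for the transient states {1, 2} of
# Example 6, computed exactly via the 2x2 inverse formula.
Q = [[F(3, 10), F(2, 10)],
     [F(3, 10), F(3, 10)]]

a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
W = [[d / det, -b / det],
     [-c / det, a / det]]

# Row sums of W give the mean time to absorption v_i.
v1 = W[0][0] + W[0][1]
v2 = W[1][0] + W[1][1]
assert W[0][0] == F(70, 43) and W[0][1] == F(20, 43)
assert v1 == F(90, 43) and v2 == F(100, 43)
```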
Because the process occupies only transient states before the moment of
absorption, the time to absorption decomposes as

    T = sum_{j=0}^{r-1} sum_{n=0}^{T-1} 1{X_n = j}        (2-80)

Since W_ij is the mean number of total visits to state j from the initial state i (both
transient states), it follows that

    W_ij = E[ sum_{n=0}^{T-1} 1{X_n = j} | X_0 = i ] ,  0 <= i, j < r        (2-81)

Let v_i = E[T | X_0 = i] be the mean time to absorption starting from state i.
It follows that

    v_i = E[T | X_0 = i] = sum_{j=0}^{r-1} W_ij        (2-82)

since

    sum_{j=0}^{r-1} W_ij = E[ sum_{j=0}^{r-1} sum_{n=0}^{T-1} 1{X_n = j} | X_0 = i ]
                         = E[T | X_0 = i] = v_i ,  0 <= i < r

Combining (2-82) with (2-77),

    v_i = sum_{j=0}^{r-1} W_ij = sum_{j=0}^{r-1} [ d_ij + sum_{k=0}^{r-1} P_ik W_kj ]
        = 1 + sum_{k=0}^{r-1} P_ik v_k ,  0 <= i < r        (2-83)
Starting from initial state i, the probability of being absorbed in state k, i.e. the
hitting probability of state k, is expressed by

    U_ik = P{X_T = k | X_0 = i}  for 0 <= i < r and r <= k <= N

Let U_ik^(n) = P{X_T = k, T <= n | X_0 = i}. Decomposing according to the time
of absorption,

    U_ik^(n) = P{X_1 = k | X_0 = i} + P{X_1 in S_T, X_2 = k | X_0 = i} + ...
               + P{X_l in S_T, l = 1, 2, ..., n-1; X_n = k | X_0 = i}
             = f_ik^(1) + f_ik^(2) + ... + f_ik^(n)

where f_ik^(l) is the probability that, starting from state i, the process reaches state
k for the first time in l steps, and S_T = {0, 1, ..., r-1} is the set of transient states.

[Remark] Since state k is an absorbing state, once the process enters k it remains
there; hence the event {X_T = k, T <= n} is the same as {X_n = k}, and

    U_ik^(n) = P{X_T = k, T <= n | X_0 = i} = P{X_n = k | X_0 = i} = P_ik^(n)        (2-84)

so that U_ik = lim_{n->inf} U_ik^(n) = lim_{n->inf} P{X_n = k | X_0 = i}.
In matrix notation, using the block powers (2-70),

    U^(n) = (I + Q + ... + Q^{n-1}) R = W^(n-1) R

and letting n -> inf,

    U = lim_{n->inf} U^(n) = (I + Q + Q^2 + ...) R = W R

i.e.,

    U_ik = sum_{j=0}^{r-1} W_ij R_jk ,  0 <= i < r and r <= k <= N

Substituting W_ij = d_ij + sum_l P_il W_lj from (2-77),

    U_ik = sum_{j=0}^{r-1} [ d_ij + sum_{l=0}^{r-1} P_il W_lj ] R_jk
         = R_ik + sum_{l=0}^{r-1} P_il U_lk

which is exactly the first step analysis equation (2-25).
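The identity U = WR can again be tested on Example 6, exactly.

```python
from fractions import Fraction as F

# Absorption probabilities U = W R for Example 6: transient states {1, 2},
# absorbing states {0, 3}.
Q = [[F(3, 10), F(2, 10)],
     [F(3, 10), F(3, 10)]]
R = [[F(4, 10), F(1, 10)],   # rows: from states 1, 2; cols: into states 0, 3
     [F(1, 10), F(3, 10)]]

# W = (I - Q)^(-1) via the 2x2 inverse formula
a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
W = [[d / det, -b / det], [-c / det, a / det]]

U = [[sum(W[i][j] * R[j][k] for j in range(2)) for k in range(2)]
     for i in range(2)]

# Matches the first-step-analysis answers 30/43, 19/43, 13/43, 24/43.
assert U[0][0] == F(30, 43) and U[1][0] == F(19, 43)
assert U[0][1] == F(13, 43) and U[1][1] == F(24, 43)
```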
We turn now to chains in which every state can be reached from every other. For
a regular Markov chain (one for which some power of P has all entries strictly
positive), the limiting probabilities

    pi_j = lim_{n->inf} P_ij^(n) > 0 ,  j = 0, 1, ..., N ,  with sum_{j=0}^N pi_j = 1

exist and do not depend on the initial state i.
Example 9  The Markov chain with the following transition probability matrix

            0      1
    P = 0 [ 0.33   0.67 ]
        1 [ 0.75   0.25 ]

has powers

    P^2 = [ 0.6114  0.3886 ]    P^3 = [ 0.4932  0.5068 ]
          [ 0.4350  0.5650 ]          [ 0.5673  0.4327 ]

    P^4 = [ 0.5428  0.4572 ]    P^5 = [ 0.5220  0.4780 ]
          [ 0.5117  0.4883 ]          [ 0.5351  0.4649 ]

    P^6 = [ 0.5308  0.4692 ]
          [ 0.5253  0.4747 ]

The rows are converging to a common limit. From Eq. (2-36) with a = 0.67 and
b = 0.75, the limiting probabilities are b/(a+b) = 0.5282 and a/(a+b) = 0.4718.
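Iterating the matrix of Example 9 confirms the convergence numerically.

```python
# Convergence of P^n for Example 9 to the limiting matrix of (2-36).
P = [[0.33, 0.67],
     [0.75, 0.25]]
a, b = 0.67, 0.75

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Pn = P
for _ in range(59):          # Pn becomes P^60
    Pn = mat_mul(Pn, P)

# After 60 steps both rows agree with (b/(a+b), a/(a+b)) to high accuracy.
for row in Pn:
    assert abs(row[0] - b / (a + b)) < 1e-12
    assert abs(row[1] - a / (a + b)) < 1e-12
```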
Example 10  The Markov chain with the transition probability matrix

            0      1      2
    P = 0 [ 0.40   0.50   0.10 ]
        1 [  ...    ...    ... ]
        2 [  ...    ...    ... ]

likewise has limiting probabilities, which may be computed as in Example 9 or,
more directly, from the equations that follow.
For a regular Markov chain, the limiting probabilities satisfy

    pi_j = sum_{k=0}^N pi_k P_kj ,  j = 0, 1, ..., N        (2-85)

    sum_{k=0}^N pi_k = 1        (2-86)

[Proof]
Since the Markov chain is regular, we have a limiting distribution

    lim_{n->inf} P_ij^(n) = pi_j > 0 ,  j = 0, 1, ..., N ,  with sum_{j=0}^N pi_j = 1

From P^n = P^{n-1} P, it yields

    P_ij^(n) = sum_{k=0}^N P_ik^(n-1) P_kj ,  j = 0, ..., N

and letting n -> inf on both sides gives

    pi_j = sum_{k=0}^N pi_k P_kj ,  k = 0, 1, ..., N.
Example 11  Suppose that whether or not it is sunny tomorrow depends on the
weather conditions during the last two days, as follows:
(1) if it was sunny today and yesterday, then it will be sunny tomorrow with
probability 0.8;
(2) if it was sunny today but cloudy yesterday, then it will be sunny tomorrow
with probability 0.6;
(3) if it was cloudy today but sunny yesterday, then it will be sunny tomorrow
with probability 0.4;
(4) if it was cloudy for the last two days, then it will be sunny tomorrow with
probability 0.1.
Such a model can be transformed into a Markov chain provided we say that the
state at any time is determined by the weather conditions during both that day and
the previous day. We say the process is in
(S,S) : if it was sunny for both today and yesterday;
(S,C) : if it was sunny yesterday but cloudy today;
(C,S) : if it was cloudy yesterday but sunny today;
(C,C) : if it was cloudy for both today and yesterday.
Then the transition probability matrix is

                  (S,S)  (S,C)  (C,S)  (C,C)
          (S,S) [ 0.8    0.2    0      0   ]
    P =   (S,C) [ 0      0      0.4    0.6 ]
          (C,S) [ 0.6    0.4    0      0   ]
          (C,C) [ 0      0      0.1    0.9 ]

where the row indicates today's state (yesterday's weather, today's weather) and
the column indicates tomorrow's state. Writing (pi_0, pi_1, pi_2, pi_3) for the
limiting probabilities of the states (S,S), (S,C), (C,S), (C,C), equations
(2-85)-(2-86) read

    0.80 pi_0 + 0.60 pi_2 = pi_0
    0.20 pi_0 + 0.40 pi_2 = pi_1
    0.40 pi_1 + 0.10 pi_3 = pi_2
    0.60 pi_1 + 0.90 pi_3 = pi_3
    pi_0 + pi_1 + pi_2 + pi_3 = 1

whose solution is pi_0 = 3/11, pi_1 = 1/11, pi_2 = 1/11, pi_3 = 6/11.
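The stationary equations of Example 11 can be verified exactly with rational arithmetic.

```python
from fractions import Fraction as F

# Stationary distribution of the two-day weather chain: check pi P = pi
# and the normalization (2-86).
P = [[F(8, 10), F(2, 10), F(0), F(0)],     # (S,S)
     [F(0), F(0), F(4, 10), F(6, 10)],     # (S,C)
     [F(6, 10), F(4, 10), F(0), F(0)],     # (C,S)
     [F(0), F(0), F(1, 10), F(9, 10)]]     # (C,C)

pi = [F(3, 11), F(1, 11), F(1, 11), F(6, 11)]

for j in range(4):
    assert sum(pi[k] * P[k][j] for k in range(4)) == pi[j]
assert sum(pi) == 1
```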
8 Eigenvectors and Calculation of the Limiting Probabilities
Example 12  A city is served by two newspapers, UDN and CT. The CT,
however, seems to be in trouble. Currently, the CT has only a 38% market share.
Furthermore, every year, 10% of its readership switches to UDN while only 7% of
the UDN's readership switch to CT. Assume that no one subscribes to both papers
and the total newspaper readership remains constant. What is the long-term
readership of each newspaper?

Writing the market shares as a column vector X = (CT, UDN)^T (in percent), the
yearly change is described by

    X_{n+1} = P^T X_n ,  where  P^T = [ 0.90  0.07 ]
                                      [ 0.10  0.93 ]

(CT keeps 90% of its readers and gains 7% of UDN's; UDN keeps 93% and gains
10% of CT's.) Starting from X_0 = (38, 62)^T, the readerships at the end of the
first year are

    X_1 = P^T X_0 = [ 0.90  0.07 ] [ 38 ]  =  [ 38.54 ]
                    [ 0.10  0.93 ] [ 62 ]     [ 61.46 ]

The readerships at the end of the second year:

    X_2 = P^T X_1 = [ 38.99 ]
                    [ 61.01 ]

Repeatedly, we have

    X_3 = [ 39.36 ]   X_4 = [ 39.67 ]   X_5 = [ 39.93 ]   X_6 = [ 40.14 ]
          [ 60.64 ]         [ 60.33 ]         [ 60.07 ]         [ 59.86 ]

It is clear that CT not only is not in trouble; it is actually thriving. Its market share
grows year after year! However, the rate of growth is slowing, and we can expect
that eventually the readerships will reach an equilibrium state X*, i.e.,

    lim_{n->inf} (P^T)^n X_0 = X*  and  P^T X* = X*        (2-87)
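Iterating the dynamics of Example 12 locates the equilibrium, which works out to (700/17, 1000/17), i.e. about (41.18, 58.82).

```python
# Iterates the newspaper readership dynamics of Example 12 and checks the
# equilibrium P^T X = X.
PT = [[0.90, 0.07],
      [0.10, 0.93]]
x = [38.0, 62.0]                      # initial shares: (CT, UDN), in percent

for _ in range(500):
    x = [PT[0][0] * x[0] + PT[0][1] * x[1],
         PT[1][0] * x[0] + PT[1][1] * x[1]]

assert abs(x[0] - 700 / 17) < 1e-9    # CT  -> 41.18%
assert abs(x[1] - 1000 / 17) < 1e-9   # UDN -> 58.82%
```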
The equilibrium condition P^T X* = X* says that X* is an eigenvector of P^T
with eigenvalue 1. In general, a nonzero vector X satisfying

    A X = lambda X ,  i.e.,  (A - lambda I) X = 0        (2-88)

for some scalar lambda is called an eigenvector of A, and the scalar lambda is
called the corresponding eigenvalue.

Now, let's consider the calculation of A^k B, where B is an n x 1 vector. Suppose
the n x n matrix A has n linearly independent eigenvectors X_1, X_2, ..., X_n.
These eigenvectors form a basis for n-dimensional space; therefore, we can
express the vector B as a linear combination of the X_i's, i.e.,
    X_1 = (x_11, x_21, ..., x_n1)^T ,  X_2 = (x_12, x_22, ..., x_n2)^T ,  ... ,
    X_n = (x_1n, x_2n, ..., x_nn)^T

    B = alpha_1 X_1 + alpha_2 X_2 + ... + alpha_n X_n

Then

    A^k B = A^{k-1} (A B)
          = A^{k-1} ( alpha_1 A X_1 + alpha_2 A X_2 + ... + alpha_n A X_n )

Since A X_i = lambda_i X_i, we have

    A^k B = A^{k-1} ( alpha_1 lambda_1 X_1 + alpha_2 lambda_2 X_2 + ... + alpha_n lambda_n X_n )
          = A^{k-2} ( alpha_1 lambda_1^2 X_1 + alpha_2 lambda_2^2 X_2 + ... + alpha_n lambda_n^2 X_n )
          = ...
          = alpha_1 lambda_1^k X_1 + alpha_2 lambda_2^k X_2 + ... + alpha_n lambda_n^k X_n        (2-89)
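A sketch of (2-89) applied to Example 12: with A = P^T, the eigenvalues are 1 and 0.83 with eigenvectors (7, 10) and (1, -1), so A^k B is recovered from powers of the eigenvalues alone, and as k grows only the eigenvalue-1 term survives.

```python
# Eigen-decomposition check of (2-89) for A = P^T of Example 12.
A = [[0.90, 0.07],
     [0.10, 0.93]]
l1, l2 = 1.0, 0.83                 # eigenvalues of A
X1, X2 = (7.0, 10.0), (1.0, -1.0)  # corresponding eigenvectors

# Solve B = c1 X1 + c2 X2 for B = (38, 62):
#   7 c1 + c2 = 38, 10 c1 - c2 = 62  ->  17 c1 = 100
c1 = 100 / 17
c2 = 38 - 7 * c1

def a_pow_b(k):
    # (2-89): A^k B = c1 l1^k X1 + c2 l2^k X2
    return [c1 * l1 ** k * X1[i] + c2 * l2 ** k * X2[i] for i in range(2)]

# Compare with direct iteration of A for several k.
b = [38.0, 62.0]
for k in range(1, 20):
    b = [A[0][0] * b[0] + A[0][1] * b[1],
         A[1][0] * b[0] + A[1][1] * b[1]]
    eig = a_pow_b(k)
    assert all(abs(b[i] - eig[i]) < 1e-9 for i in range(2))

# As k -> inf only the eigenvalue-1 term survives: the limit is c1 * X1.
assert abs(c1 * X1[0] - 700 / 17) < 1e-12
```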