
Chapter 2 Markov Processes and Markov Chains

1 Definition of a Markov Process


A Markov process $\{X_t,\ t \in T\}$ is a stochastic process with the property that, given the value of $X_t$, the values of $X_s$ for $s > t$ are not influenced by the values of $X_u$ for $u < t$. In other words, the probability of any particular future behavior of the process, when its current state is known exactly, is not altered by additional knowledge concerning its past behavior. A discrete time Markov chain is a Markov process whose state space is a finite or countable set, and whose time (or stage) index set is $T = (0, 1, 2, \ldots)$. In formal terms, the Markov property is that

$$P\{X_{n+1}=j \mid X_0=i_0,\ldots,X_{n-1}=i_{n-1},\ X_n=i\} = P\{X_{n+1}=j \mid X_n=i\} \qquad (2\text{-}1)$$

for all time points $n$ and all states $i_0, \ldots, i_{n-1}, i, j$.
It is customary to label the state space of the Markov chain by the nonnegative integers $\{0, 1, 2, \ldots\}$ and to use $X_n = i$ to represent the process being in state $i$ at time (or stage) $n$.
The probability of $X_{n+1}$ being in state $j$ given that $X_n$ is in state $i$ is called the one-step transition probability and is denoted by $P_{ij}^{n,n+1}$. That is,

$$P_{ij}^{n,n+1} = P\{X_{n+1}=j \mid X_n=i\} \qquad (2\text{-}2)$$
The notation emphasizes that, in general, the transition probabilities are functions not only of the initial and final states, but also of the time of transition.
If the one-step transition probabilities are independent of the time variable $n$, i.e., $P_{ij}^{n,n+1} = P_{ij}$, we say that the Markov chain has stationary transition probabilities. We will limit our discussion to Markov processes with stationary transition probabilities only. It is customary to arrange these numbers $P_{ij}$ in a matrix, in the infinite square array
$$\mathbf{P} = \begin{pmatrix} P_{00} & P_{01} & P_{02} & P_{03} & \cdots \\ P_{10} & P_{11} & P_{12} & P_{13} & \cdots \\ P_{20} & P_{21} & P_{22} & P_{23} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \\ P_{i0} & P_{i1} & P_{i2} & P_{i3} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \end{pmatrix} \qquad (2\text{-}3)$$
and to refer to $\mathbf{P} = \|P_{ij}\|$ as the Markov matrix or transition probability matrix of the process. The $(i+1)$st row of $\mathbf{P}$ is the probability distribution of the values of $X_{n+1}$ under the condition that $X_n = i$. If the number of states is finite, then $\mathbf{P}$ is a finite square matrix whose order (the number of rows) is equal to the number of states.
Since probabilities are non-negative and since the process must make a transition into some state, it follows that

$$P_{ij} \ge 0 \quad \text{for } i, j = 0, 1, 2, \ldots \qquad (2\text{-}4)$$

$$\sum_{j=0}^{\infty} P_{ij} = 1 \quad \text{for } i = 0, 1, 2, \ldots \qquad (2\text{-}5)$$
A Markov process is completely defined if its transition probability matrix and initial state $X_0$ (or, more generally, the probability distribution of $X_0$) are specified.
Conditioning on the first $n$ states,

$$P\{X_0=i_0, X_1=i_1, \ldots, X_n=i_n\} = P\{X_0=i_0, \ldots, X_{n-1}=i_{n-1}\}\,P\{X_n=i_n \mid X_0=i_0, \ldots, X_{n-1}=i_{n-1}\} \qquad (2\text{-}6)$$

By definition of a Markov process, we have

$$P\{X_n=i_n \mid X_0=i_0, \ldots, X_{n-1}=i_{n-1}\} = P\{X_n=i_n \mid X_{n-1}=i_{n-1}\} = P_{i_{n-1},i_n} \qquad (2\text{-}7)$$

Thus,

$$P\{X_0=i_0, X_1=i_1, \ldots, X_n=i_n\} = P\{X_0=i_0, \ldots, X_{n-1}=i_{n-1}\}\,P_{i_{n-1},i_n}$$

By induction,

$$P\{X_0=i_0, X_1=i_1, \ldots, X_n=i_n\} = P\{X_0=i_0\}\,P_{i_0,i_1}\cdots P_{i_{n-1},i_n} \qquad (2\text{-}8)$$

It is also evident that

$$P\{X_{n+1}=j_1, \ldots, X_{n+m}=j_m \mid X_0=i_0, \ldots, X_n=i_n\} = P\{X_{n+1}=j_1, \ldots, X_{n+m}=j_m \mid X_n=i_n\} \qquad (2\text{-}9)$$

for all time points $n$, $m$ and all states $i_0, \ldots, i_n, j_1, \ldots, j_m$.
Example 1  A Markov chain $X_0, X_1, X_2, \ldots$ on the states 0, 1, 2 has the transition probability matrix

$$\mathbf{P} = \begin{pmatrix} 0.6 & 0.3 & 0.1 \\ 0.3 & 0.3 & 0.4 \\ 0.4 & 0.1 & 0.5 \end{pmatrix}$$

If it is known that the process starts in state $X_0 = 1$, determine the probability $P\{X_0=1, X_1=0, X_2=2\}$.
Example 2  A Markov chain $X_0, X_1, X_2, \ldots$ on the states 0, 1, 2 has the transition probability matrix

$$\mathbf{P} = \begin{pmatrix} 0.7 & 0.2 & 0.1 \\ 0 & 0.6 & 0.4 \\ 0.5 & 0 & 0.5 \end{pmatrix}$$

Determine the conditional probabilities $P\{X_2=1, X_3=1 \mid X_1=0\}$ and $P\{X_1=1, X_2=1 \mid X_0=0\}$.
Example 3  A simplified model for the spread of a disease goes this way: the total population size is $N = 5$, of which some are diseased and the remainder are healthy. During a single period of time, two people are selected at random from the population and assumed to interact. The selection is such that an encounter between any pair of individuals in the population is just as likely as between any other pair. If one of these persons is diseased and the other is not, then with probability $\alpha = 0.1$ the transmission takes place. Let $X_n$ denote the number of diseased persons in the population at the end of the $n$th period. Specify the transition probability matrix.
$$\mathbf{P} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0.96 & 0.04 & 0 & 0 & 0 \\ 0 & 0 & 0.94 & 0.06 & 0 & 0 \\ 0 & 0 & 0 & 0.94 & 0.06 & 0 \\ 0 & 0 & 0 & 0 & 0.96 & 0.04 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$

(rows and columns indexed by the states $0, 1, \ldots, 5$). Since the number of diseased persons can never decrease,

$$P_{ij}^{n,n+1} = P_{ij} = 0 \quad \text{if } j < i,$$

and since at most one transmission can occur per period,

$$P_{ij}^{n,n+1} = P_{ij} = 0 \quad \text{if } j > i+1.$$

Therefore, $P_{ii} + P_{i,i+1} = 1$. A transmission occurs when the randomly selected pair consists of one diseased and one healthy person, so

$$P_{i,i+1} = \frac{C(i,1)\,C(N-i,1)}{C(N,2)}\,\alpha = \frac{C(i,1)\,C(5-i,1)}{C(5,2)}(0.1) = \frac{i(5-i)(0.1)}{10}.$$
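As a quick check, the matrix above can be generated directly from this formula. A minimal sketch in Python, assuming NumPy is available:

```python
import numpy as np
from math import comb

N, alpha = 5, 0.1
P = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i == 0 or i == N:
        P[i, i] = 1.0          # no diseased or all diseased: absorbing
    else:
        # transmission requires choosing one diseased and one healthy person
        p_up = comb(i, 1) * comb(N - i, 1) / comb(N, 2) * alpha
        P[i, i + 1] = p_up
        P[i, i] = 1.0 - p_up   # rows must sum to one

print(np.round(P, 2))          # reproduces the matrix of Example 3
```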
2 Transition probability matrices of a Markov chain
Let $P_{ij}^{(n)}$ denote the probability that the process goes from state $i$ to state $j$ in $n$ transitions, i.e.,

$$P_{ij}^{(n)} = P\{X_{m+n}=j \mid X_m=i\}. \qquad (2\text{-}10)$$

The $n$-step transition probability matrix is then expressed by $\mathbf{P}^{(n)} = \|P_{ij}^{(n)}\|$.
Theorem 2.1  The $n$-step transition probabilities of a Markov chain satisfy

$$P_{ij}^{(n)} = \sum_{k=0}^{\infty} P_{ik}\,P_{kj}^{(n-1)} \qquad (2\text{-}11)$$

where

$$P_{ij}^{(0)} = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases}.$$
Equation (2-11) is equivalent to $\mathbf{P}^{(n)} = \mathbf{P}\,\mathbf{P}^{(n-1)}$ and therefore, by iteration, we have

$$\mathbf{P}^{(n)} = \mathbf{P}\,\mathbf{P}\cdots\mathbf{P} = \mathbf{P}^n. \qquad (2\text{-}12)$$

In other words, the $n$-step transition probabilities $P_{ij}^{(n)}$ are the entries in the matrix $\mathbf{P}^n$, the $n$th power of $\mathbf{P}$.
A more general form of Equation (2-11), known as the Chapman-Kolmogorov equations, is

$$P_{ij}^{(n+m)} = \sum_{k=0}^{\infty} P_{ik}^{(n)}\,P_{kj}^{(m)} \qquad (2\text{-}13)$$

for all $n, m \ge 0$ and all $i, j$.
If the probability of the process initially being in state $j$ is $p_j$, i.e., the distribution law of $X_0$ is $P\{X_0=j\} = p_j$, then the probability of the process being in state $k$ at time $n$ is

$$p_k^{(n)} = \sum_{j=0}^{\infty} p_j\,P_{jk}^{(n)} = P\{X_n = k\}. \qquad (2\text{-}14)$$
Example 4  A Markov chain $\{X_n\}$ on the states 0, 1, 2 has the transition probability matrix

$$\mathbf{P} = \begin{pmatrix} 0.1 & 0.2 & 0.7 \\ 0.2 & 0.2 & 0.6 \\ 0.6 & 0.1 & 0.3 \end{pmatrix}$$

(a) Compute the two-step transition matrix $\mathbf{P}^2$.
(b) What is $P\{X_3=1 \mid X_1=0\}$?
$$P\{X_3=1 \mid X_1=0\} = P_{01}^{(2)} = \begin{pmatrix} 0.1 & 0.2 & 0.7 \end{pmatrix}\begin{pmatrix} 0.2 \\ 0.2 \\ 0.1 \end{pmatrix} = 0.13$$

(the product of row 0 of $\mathbf{P}$ with column 1 of $\mathbf{P}$). In matrix notation,

$$P_{01}^{(2)} = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix}\mathbf{P}^2\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = \mathbf{e}_1'\,\mathbf{P}^2\,\mathbf{e}_2$$

where $\mathbf{e}_i = (0, \ldots, 0, 1, 0, \ldots, 0)'$ denotes the unit column vector with a 1 in the $i$th position.
(c) What is $P\{X_3=1 \mid X_0=0\}$?

$$P\{X_3=1 \mid X_0=0\} = P_{01}^{(3)} = \begin{pmatrix} 0.1 & 0.2 & 0.7 \end{pmatrix}\begin{pmatrix} 0.1 & 0.2 & 0.7 \\ 0.2 & 0.2 & 0.6 \\ 0.6 & 0.1 & 0.3 \end{pmatrix}\begin{pmatrix} 0.2 \\ 0.2 \\ 0.1 \end{pmatrix} = 0.16$$

Equivalently,

$$P_{01}^{(3)} = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix}\mathbf{P}^3\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = \mathbf{e}_1'\,\mathbf{P}^3\,\mathbf{e}_2.$$
Example 5  A Markov chain $\{X_n\}$ on the states 0, 1, 2 has the transition probability matrix

$$\mathbf{P} = \begin{pmatrix} 0.3 & 0.2 & 0.5 \\ 0.5 & 0.1 & 0.4 \\ 0.5 & 0.2 & 0.3 \end{pmatrix}$$

and initial distribution $p_0 = p_1 = 0.5$. Compute the probabilities $P\{X_2=0\}$ and $P\{X_3=0\}$.
[Solution]

$$P\{X_2=0\} = \sum_{j=0}^{2} p_j\,P_{j0}^{(2)} = 0.5\,P_{00}^{(2)} + 0.5\,P_{10}^{(2)}$$
$$P\{X_3=0\} = \sum_{j=0}^{2} p_j\,P_{j0}^{(3)} = 0.5\,P_{00}^{(3)} + 0.5\,P_{10}^{(3)}$$

Computing the matrix powers,

$$\mathbf{P}^2 = \begin{pmatrix} 0.3 & 0.2 & 0.5 \\ 0.5 & 0.1 & 0.4 \\ 0.5 & 0.2 & 0.3 \end{pmatrix}\begin{pmatrix} 0.3 & 0.2 & 0.5 \\ 0.5 & 0.1 & 0.4 \\ 0.5 & 0.2 & 0.3 \end{pmatrix} = \begin{pmatrix} 0.44 & 0.18 & 0.38 \\ 0.40 & 0.19 & 0.41 \\ 0.40 & 0.18 & 0.42 \end{pmatrix}$$

$$\mathbf{P}^3 = \mathbf{P}^2\,\mathbf{P} = \begin{pmatrix} 0.412 & 0.182 & 0.406 \\ 0.420 & 0.181 & 0.399 \\ 0.420 & 0.182 & 0.398 \end{pmatrix}$$

so that $P_{00}^{(2)} = 0.440$, $P_{10}^{(2)} = 0.400$, $P_{00}^{(3)} = 0.412$, $P_{10}^{(3)} = 0.420$. Hence

$$P\{X_2=0\} = 0.5(0.44) + 0.5(0.40) = 0.42$$
$$P\{X_3=0\} = 0.5(0.412) + 0.5(0.420) = 0.416.$$

Note that the $n$-step transition matrix $\mathbf{P}^n$ satisfies $\sum_{k=0}^{\infty} P_{ik}^{(n)} = 1$.
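These computations are easy to verify numerically. A minimal sketch for Example 5, assuming Python with NumPy:

```python
import numpy as np

P = np.array([[0.3, 0.2, 0.5],
              [0.5, 0.1, 0.4],
              [0.5, 0.2, 0.3]])
p0 = np.array([0.5, 0.5, 0.0])      # initial distribution of X0

P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)

# Eq. (2-14): the distribution at time n is p0 @ P^n
print((p0 @ P2)[0])   # P{X2 = 0} = 0.42
print((p0 @ P3)[0])   # P{X3 = 0} = 0.416
print(P2.sum(axis=1)) # each row of P^n sums to one
```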
3 First Step Analysis
Consider the Markov chain $\{X_n\}$ whose transition probability matrix is

$$\mathbf{P} = \begin{pmatrix} 1 & 0 & 0 \\ \alpha & \beta & \gamma \\ 0 & 0 & 1 \end{pmatrix}$$

where $\alpha > 0$, $\beta > 0$, $\gamma > 0$ and $\alpha + \beta + \gamma = 1$. Two questions arise:

(1) In which state, 0 or 2, is the process ultimately trapped? [absorption state]
(2) How long, on the average, does it take to reach one of these states? [time of absorption]

The time of absorption of the process can be defined as $T = \min\{n \ge 0;\ X_n = 0 \text{ or } X_n = 2\}$. Also, let
$$u = P\{X_T = 0 \mid X_0 = 1\} \quad \text{[the probability of being absorbed in state 0]}$$
$$v = E[T \mid X_0 = 1] \quad \text{[the average time of absorption]}$$

From the transition probability matrix, it yields

$$u = P\{X_T = 0 \mid X_0 = 1\} = \sum_{k=0}^{2} P\{X_T = 0 \mid X_1 = k\}\,P\{X_1 = k \mid X_0 = 1\} = \alpha \cdot 1 + \beta\,u + \gamma \cdot 0$$

Thus,

$$u = \alpha + \beta u \qquad (2\text{-}15)$$
$$u = \frac{\alpha}{1-\beta} = \frac{\alpha}{\alpha+\gamma}. \qquad (2\text{-}16)$$
The absorption time $T$ is always at least 1. If either $X_1 = 0$ or $X_1 = 2$, then no further steps are required. If, on the other hand, $X_1 = 1$, then the process is back at its starting point, and, on the average, $v = E[T \mid X_0 = 1]$ additional steps are required before absorption occurs. Weighting these contingencies by their respective probabilities, we have

$$v = E[T \mid X_0 = 1] = 1 + \alpha \cdot 0 + \beta\,v + \gamma \cdot 0$$

Thus,

$$v = 1 + \beta v \qquad (2\text{-}17)$$
$$v = \frac{1}{1-\beta}. \qquad (2\text{-}18)$$
Now, let's consider the four state Markov chain whose transition probability matrix is

$$\mathbf{P} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ P_{10} & P_{11} & P_{12} & P_{13} \\ P_{20} & P_{21} & P_{22} & P_{23} \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

Absorption now occurs in states 0 and 3, and states 1 and 2 are transient. The probability of ultimate absorption in state 0 now depends on the transient state in which the process begins. Therefore, we must extend our notation to include the starting state. Let

$$T = \min\{n \ge 0;\ X_n = 0 \text{ or } X_n = 3\}$$
$$u_i = P\{X_T = 0 \mid X_0 = i\} \quad \text{for } i = 1, 2 \quad \text{[the probability of being absorbed in state 0]}$$
$$v_i = E[T \mid X_0 = i] \quad \text{for } i = 1, 2 \quad \text{[average time of absorption]}$$

with $u_0 = 1$, $u_3 = 0$, $v_0 = v_3 = 0$. Applying the first step analysis, it yields

$$u_1 = P_{10} + P_{11}u_1 + P_{12}u_2 \qquad (2\text{-}19)$$
$$u_2 = P_{20} + P_{21}u_1 + P_{22}u_2 \qquad (2\text{-}20)$$

which can be solved simultaneously for $(u_1, u_2)$. As for the mean time to absorption, the first step equations are

$$v_1 = 1 + P_{11}v_1 + P_{12}v_2 \qquad (2\text{-}21)$$
$$v_2 = 1 + P_{21}v_1 + P_{22}v_2 \qquad (2\text{-}22)$$
Example 6  A Markov chain $\{X_n\}$ on the states 0, 1, 2, 3 has the transition probability matrix

$$\mathbf{P} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0.4 & 0.3 & 0.2 & 0.1 \\ 0.1 & 0.3 & 0.3 & 0.3 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

From the first step analysis, it yields

$$u_1 = 0.4 + 0.3u_1 + 0.2u_2$$
$$u_2 = 0.1 + 0.3u_1 + 0.3u_2$$

or

$$0.7u_1 - 0.2u_2 = 0.4$$
$$-0.3u_1 + 0.7u_2 = 0.1$$

whence $u_1 = 30/43$ and $u_2 = 19/43$.

What is the probability for the process being absorbed in state 0? We need to know the distribution law of $X_0$, i.e., $P\{X_0 = j\} = p_j$. Then

$$P\{X_T = 0\} = \sum_{j} p_j\,u_j$$

Suppose that in this example $p_0 = 0.2$, $p_1 = 0.3$, $p_2 = 0.25$, $p_3 = 0.25$. Then

$$P\{X_T = 0\} = 0.2 \cdot 1 + 0.3 \cdot \frac{30}{43} + 0.25 \cdot \frac{19}{43} + 0.25 \cdot 0 = 0.519767$$

For the mean time of absorption, it yields

$$v_1 = 1 + 0.3v_1 + 0.2v_2$$
$$v_2 = 1 + 0.3v_1 + 0.3v_2$$
Now let's consider the probability for the process being absorbed in state 3. Applying the first step analysis (with $u_i$ now denoting the probability of absorption in state 3 from state $i$) yields

$$u_1 = 0.1 + 0.3u_1 + 0.2u_2$$
$$u_2 = 0.3 + 0.3u_1 + 0.3u_2$$

or

$$0.7u_1 - 0.2u_2 = 0.1$$
$$-0.3u_1 + 0.7u_2 = 0.3$$

whence $u_1 = 13/43$ and $u_2 = 24/43$. The probability for the process being absorbed in state 3 is then

$$P\{X_T = 3\} = 0.2 \cdot 0 + 0.3 \cdot \frac{13}{43} + 0.25 \cdot \frac{24}{43} + 0.25 \cdot 1 = 0.480233$$
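The pairs of first step equations above are small linear systems and can be solved mechanically. A sketch for Example 6, assuming Python with NumPy:

```python
import numpy as np

# transient block Q (states 1 and 2) of the Example 6 chain
Q = np.array([[0.3, 0.2],
              [0.3, 0.3]])
I = np.eye(2)

# absorption in state 0: (I - Q) u = b, where b_i = P_{i0}
u0 = np.linalg.solve(I - Q, np.array([0.4, 0.1]))
print(u0)   # [30/43, 19/43] = [0.6977, 0.4419]

# absorption in state 3: right-hand side is P_{i3}
u3 = np.linalg.solve(I - Q, np.array([0.1, 0.3]))
print(u3)   # [13/43, 24/43]

# mean time to absorption: (I - Q) v = 1
v = np.linalg.solve(I - Q, np.ones(2))
print(v)
```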
Now, we want to develop a general form for an $N+1$ state Markov chain. Let $\{X_n\}$ be a Markov chain on the states $0, 1, \ldots, N$. Suppose that states $0, 1, \ldots, r-1$ are transient in that $P_{ij}^{(n)} \to 0$ as $n \to \infty$ for $0 \le i, j \le r-1$, while states $r, \ldots, N$ are absorbing states ($P_{ii} = 1$, $r \le i \le N$). The transition probability matrix has the form

$$\mathbf{P} = \begin{pmatrix} \mathbf{Q} & \mathbf{R} \\ \mathbf{0} & \mathbf{I} \end{pmatrix} \qquad (2\text{-}23)$$

where $\mathbf{0}$ is an $(N-r+1) \times r$ matrix all of whose entries are zero, $\mathbf{I}$ is an $(N-r+1) \times (N-r+1)$ identity matrix, and $Q_{ij} = P_{ij}$ for $0 \le i, j < r$.

Started at one of the transient states $X_0 = i$, where $0 \le i < r$, such a process will remain in the transient states for some random duration, but ultimately the process will be absorbed in one of the absorbing states $i = r, \ldots, N$.
Let the probability of being absorbed in state $k$ ($r \le k \le N$) when the initial state is $X_0 = i$ ($0 \le i < r$) be expressed by $U_{ik}$. The first step analysis gives

$$U_{ik} = P_{ik} + \sum_{\substack{j=r \\ j \ne k}}^{N} P_{ij} \cdot 0 + \sum_{j=0}^{r-1} P_{ij}\,U_{jk} \qquad (2\text{-}24)$$

$$U_{ik} = P_{ik} + \sum_{j=0}^{r-1} P_{ij}\,U_{jk}, \quad 0 \le i < r,\ r \le k \le N. \qquad (2\text{-}25)$$
Let the random absorption time be expressed by $T = \min\{n \ge 0;\ X_n \ge r\}$. Suppose that a rate $g(i)$ is associated with each transient state $i$ and that we wish to determine the mean total rate that is accumulated up to absorption. Let $w_i$ be this total rate when the initial state is $X_0 = i$, i.e.,

$$w_i = E\left[\left.\sum_{n=0}^{T} g(X_n)\ \right|\ X_0 = i\right] \qquad (2\text{-}26)$$

If the rate $g(i)$ is defined as

$$g(X_n = i) = \begin{cases} 1 & 0 \le i \le r-1 \\ 0 & \text{otherwise} \end{cases} \qquad (2\text{-}27)$$

then

$$w_i = E\left[\left.\sum_{n=0}^{T} g(X_n)\ \right|\ X_0 = i\right] = E[T \mid X_0 = i] = v_i \qquad (2\text{-}28)$$

(the mean time of absorption when the initial state is $X_0 = i$).
If, for a specified transient state $k$, the rate $g(i) = g_k(i)$ is defined as

$$g_k(X_n = i) = \begin{cases} 1 & \text{if } i = k \\ 0 & \text{if } i \ne k \end{cases} \quad \text{for } 0 \le i, k < r, \qquad (2\text{-}29)$$

then

$$W_{ik} = E\left[\left.\sum_{n=0}^{T} g_k(X_n)\ \right|\ X_0 = i\right] \qquad (2\text{-}30)$$

is the mean number of visits to state $k$ ($0 \le k < r$) prior to absorption, when the process starts from state $i$.
Applying the first step analysis, we have

$$w_i = g(i) + \sum_{j=0}^{r-1} P_{ij}\,w_j \quad \text{for } 0 \le i < r. \qquad (2\text{-}31)$$

The special case $g(X_n = i) = 1$ for $0 \le i \le r-1$ (and 0 otherwise) gives $v_i = E[T \mid X_0 = i]$ by

$$v_i = 1 + \sum_{j=0}^{r-1} P_{ij}\,v_j, \quad \text{for } i = 0, 1, \ldots, r-1. \qquad (2\text{-}32)$$

For the case $g(i) = g_k(i)$, we have

$$W_{ik} = \delta_{ik} + \sum_{j=0}^{r-1} P_{ij}\,W_{jk} \quad \text{for } 0 \le i < r. \qquad (2\text{-}33)$$
Example 7  A Markov chain $\{X_n\}$ on the states 0, 1, 2, 3, 4, 5 has the transition probability matrix

$$\mathbf{P} = \begin{pmatrix} 0 & 0.9 & 0 & 0 & 0 & 0.1 \\ 0 & 0.5 & 0.4 & 0 & 0 & 0.1 \\ 0 & 0 & 0.6 & 0.2 & 0.1 & 0.1 \\ 0 & 0 & 0.4 & 0.5 & 0 & 0.1 \\ 0 & 0 & 0.4 & 0 & 0.5 & 0.1 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$

Find the mean duration spent in state 2 if the beginning state is 0.

[Solution]
Applying Eq. (2-33) with $k = 2$,

$$W_{02} = \delta_{02} + \sum_{j} P_{0j}\,W_{j2} = 0 + 0.9\,W_{12} + 0.1\,W_{52}$$

Similarly,

$$W_{12} = 0.5\,W_{12} + 0.4\,W_{22} + 0.1\,W_{52}$$
$$W_{22} = 1 + 0.6\,W_{22} + 0.2\,W_{32} + 0.1\,W_{42} + 0.1\,W_{52}$$
$$W_{32} = 0.4\,W_{22} + 0.5\,W_{32} + 0.1\,W_{52}$$
$$W_{42} = 0.4\,W_{22} + 0.5\,W_{42} + 0.1\,W_{52}$$
$$W_{52} = 1 \cdot W_{52}$$

It is apparent that $W_{52} = 0$, since state 5 is absorbing and never visits state 2. Therefore, writing $w_i = W_{i2}$, a simplified system is

$$w_0 = 0.9\,w_1$$
$$w_1 = 0.5\,w_1 + 0.4\,w_2$$
$$w_2 = 1 + 0.6\,w_2 + 0.2\,w_3 + 0.1\,w_4$$
$$w_3 = 0.4\,w_2 + 0.5\,w_3$$
$$w_4 = 0.4\,w_2 + 0.5\,w_4$$

The unique solution is $w_0 = 4.5$, $w_1 = 5.0$, $w_2 = 6.25$, $w_3 = w_4 = 5.0$.
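Equations such as these are again a linear system in the unknowns $W_{i2}$, so the example can be checked by a direct solve. A sketch assuming Python with NumPy (states 0 through 4 are transient, state 5 is absorbing):

```python
import numpy as np

# transient-to-transient block Q of the Example 7 chain (states 0..4)
Q = np.array([[0.0, 0.9, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.4, 0.0, 0.0],
              [0.0, 0.0, 0.6, 0.2, 0.1],
              [0.0, 0.0, 0.4, 0.5, 0.0],
              [0.0, 0.0, 0.4, 0.0, 0.5]])

# Eq. (2-33) in matrix form: W = I + Q W, i.e. (I - Q) W = I
W = np.linalg.solve(np.eye(5) - Q, np.eye(5))
print(W[:, 2])   # mean visits to state 2: [4.5, 5.0, 6.25, 5.0, 5.0]
```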
4 Special Markov Chains
The Two State Markov Chain

Let

$$\mathbf{P} = \begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix}, \quad \text{where } 0 < a, b < 1, \qquad (2\text{-}34)$$

be the transition matrix of a two state Markov chain. If $a + b = 1$, then the states $X_1, X_2, \ldots$ are independent identically distributed random variables with $P\{X_n = 0\} = b$ and $P\{X_n = 1\} = a$.
[Proof]
When $a + b = 1$, i.e., $b = 1 - a$,

$$\mathbf{P} = \begin{pmatrix} 1-a & a \\ 1-a & a \end{pmatrix}$$

so that $P_{ij} = P\{X_{n+1} = j \mid X_n = i\} = p_j$ (equal to $a$ for $j = 1$ and $1-a$ for $j = 0$), independent of the current state $i$. Hence

$$P\{X_{n+1} = j\} = \sum_{i=0}^{1} p_i\,P_{ij} = p_j \sum_{i=0}^{1} p_i = p_j$$

Therefore, $P\{X_{n+1} = j \mid X_n = i\} = P\{X_{n+1} = j\}$, which indicates that $X_n$ and $X_{n+1}$ are independent. (Why are they identically distributed?)
The $n$-step transition matrix is given by

$$\mathbf{P}^n = \frac{1}{a+b}\begin{pmatrix} b & a \\ b & a \end{pmatrix} + \frac{(1-a-b)^n}{a+b}\begin{pmatrix} a & -a \\ -b & b \end{pmatrix} \qquad (2\text{-}35)$$
[Proof]
Let

$$\mathbf{A} = \begin{pmatrix} b & a \\ b & a \end{pmatrix} \quad \text{and} \quad \mathbf{B} = \begin{pmatrix} a & -a \\ -b & b \end{pmatrix}$$

so that (2-35) reads

$$\mathbf{P}^n = \frac{1}{a+b}\left[\mathbf{A} + (1-a-b)^n\,\mathbf{B}\right].$$

Direct computation shows

$$\mathbf{A}\mathbf{P} = \mathbf{A} \quad \text{and} \quad \mathbf{B}\mathbf{P} = (1-a-b)\,\mathbf{B}.$$

It is easily seen that the formula holds for $n = 1$. Assume the formula is true for $n$; then

$$\mathbf{P}^{n+1} = \mathbf{P}^n\,\mathbf{P} = \frac{1}{a+b}\left[\mathbf{A}\mathbf{P} + (1-a-b)^n\,\mathbf{B}\mathbf{P}\right] = \frac{1}{a+b}\left[\mathbf{A} + (1-a-b)^{n+1}\,\mathbf{B}\right].$$

The formula holds for $n+1$ if it holds for $n$. Therefore, it holds for all $n$.
Note that $|1-a-b| < 1$ when $0 < a, b < 1$; therefore,

$$\lim_{n\to\infty}\mathbf{P}^n = \begin{pmatrix} \dfrac{b}{a+b} & \dfrac{a}{a+b} \\[2mm] \dfrac{b}{a+b} & \dfrac{a}{a+b} \end{pmatrix}. \qquad (2\text{-}36)$$
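A quick numerical check of (2-35) and (2-36), assuming Python with NumPy (the values $a = 0.67$, $b = 0.75$ anticipate Example 9 below):

```python
import numpy as np

a, b = 0.67, 0.75
P = np.array([[1 - a, a],
              [b, 1 - b]])

def Pn(n):
    """Closed form (2-35) for P^n."""
    A = np.array([[b, a], [b, a]])
    B = np.array([[a, -a], [-b, b]])
    return (A + (1 - a - b) ** n * B) / (a + b)

print(np.allclose(np.linalg.matrix_power(P, 7), Pn(7)))  # True
print(Pn(200))   # rows approach (b, a)/(a+b) = (0.5282, 0.4718)
```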
Markov Chains Associated with IID Random Variables

Let $\xi$ denote a discrete valued random variable whose values are nonnegative integers, with $P[\xi = i] = a_i$ for $i = 0, 1, \ldots$ and $\sum_{i=0}^{\infty} a_i = 1$. Let $\xi_0, \xi_1, \ldots, \xi_n, \ldots$ represent independent observations of $\xi$. We shall now study the kinds of Markov chains that are generated by these independent observations of $\xi$.

(1) Transition probability matrix characterizing iid random processes

Consider the random process $X_n = \xi_n$, $n = 0, 1, \ldots$. Since $\xi_0, \xi_1, \ldots, \xi_n, \ldots$ are independent observations of $\xi$, $\{X_n\}$ is an iid random process. The following transition probability matrix characterizes such iid random processes:

$$\mathbf{P} = \begin{pmatrix} a_0 & a_1 & a_2 & \cdots \\ a_0 & a_1 & a_2 & \cdots \\ a_0 & a_1 & a_2 & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix} \qquad (2\text{-}37)$$

Each row vector is the probability mass function of $\xi$.
(2) Successive Maximum Series

Let $X_n = \max\{\xi_0, \xi_1, \ldots, \xi_n\}$. It is readily seen that $\{X_n\}$ is a Markov chain, since $X_{n+1} = \max\{X_n, \xi_{n+1}\}$. Therefore, the transition probability matrix of $\{X_n\}$ is given by

$$\mathbf{P} = \begin{pmatrix} a_0 & a_1 & a_2 & a_3 & \cdots \\ 0 & a_0 + a_1 & a_2 & a_3 & \cdots \\ 0 & 0 & a_0 + a_1 + a_2 & a_3 & \cdots \\ 0 & 0 & 0 & a_0 + a_1 + a_2 + a_3 & \cdots \\ \vdots & & & & \end{pmatrix} \qquad (2\text{-}38)$$

$$\mathbf{P} = \begin{pmatrix} A_0 & a_1 & a_2 & a_3 & \cdots \\ 0 & A_1 & a_2 & a_3 & \cdots \\ 0 & 0 & A_2 & a_3 & \cdots \\ 0 & 0 & 0 & A_3 & \cdots \\ \vdots & & & & \end{pmatrix} \qquad (2\text{-}39)$$

where $A_i = \sum_{j=0}^{i} a_j$ for $i = 0, 1, \ldots$.
Example 8  Suppose $\xi_1, \xi_2, \ldots$ represent successive bids on a certain asset that is offered for sale and $X_n = \max\{\xi_1, \ldots, \xi_n\}$ is the maximum that is bid up to stage $n$. Suppose that the bid that is accepted is the first bid that equals or exceeds a prescribed level $M$. What is the average time of sale, i.e., the average time that is required to accept a bid?

[Solution]
The time of sale is the random variable $T = \min\{n \ge 1;\ X_n \ge M\}$. Conditioning on the first bid,

$$E(T) = E(T \mid \xi_1 < M)\,P(\xi_1 < M) + E(T \mid \xi_1 \ge M)\,P(\xi_1 \ge M)$$
$$= E(T \mid \xi_1 < M)\,P(\xi_1 < M) + 1 \cdot P(\xi_1 \ge M)$$

Since future bids $\xi_2, \xi_3, \ldots$ have the same probabilistic properties as in the original problem, $E(T \mid \xi_1 < M) = 1 + E(T)$, and it yields

$$E(T) = [1 + E(T)]\,P(\xi_1 < M) + P(\xi_1 \ge M)$$
$$E(T) = 1 + E(T)\,P(\xi_1 < M) \qquad (2\text{-}40)$$

Solving, $E(T) = 1/P(\xi_1 \ge M)$.
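The result $E(T) = 1/P(\xi_1 \ge M)$ says the time of sale is geometric, which is easy to confirm by simulation. A sketch assuming Python with NumPy; the bid distribution here is a hypothetical one, uniform on $\{0, \ldots, 9\}$ with $M = 8$, so $P(\xi \ge M) = 0.2$ and $E(T) = 5$:

```python
import numpy as np

rng = np.random.default_rng(0)
M, trials = 8, 100_000

# time of sale = index of the first bid >= M (bids uniform on 0..9)
times = []
for _ in range(trials):
    t = 1
    while rng.integers(0, 10) < M:
        t += 1
    times.append(t)

print(np.mean(times))   # close to 1 / P(xi >= M) = 1 / 0.2 = 5
```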
(3) Successive Partial Sums Series

Let $\eta_n = \xi_1 + \xi_2 + \cdots + \xi_n$, $n = 1, 2, \ldots$, and, by definition, $\eta_0 = 0$. The process $X_n = \eta_n$ is a Markov chain with the following transition probability matrix:

$$\mathbf{P} = \begin{pmatrix} a_0 & a_1 & a_2 & a_3 & \cdots \\ 0 & a_0 & a_1 & a_2 & \cdots \\ 0 & 0 & a_0 & a_1 & \cdots \\ 0 & 0 & 0 & a_0 & \cdots \\ \vdots & & & & \end{pmatrix} \qquad (2\text{-}41)$$
Markov Chains of Success Runs

Consider the case of conducting repeated Bernoulli trials (each of which admits only two possible outcomes, success S or failure F). Suppose that in each trial the probability of S is $\alpha$ and the probability of F is $\beta = 1 - \alpha$. We define the run length of a success run (or the success run length) as the number of consecutive trials which yield success. That is, we say a success run of length $r$ happened if the outcomes in the preceding $r+2$ trials, including the present trial as the last, were respectively F, S, S, $\ldots$, S, F. Let's label the present state of the process by the length of the success run currently under way. The process is Markov since the individual trials are independent of each other, and its transition probability matrix is given by

$$\mathbf{P} = \begin{pmatrix} \beta & \alpha & 0 & 0 & 0 & \cdots \\ \beta & 0 & \alpha & 0 & 0 & \cdots \\ \beta & 0 & 0 & \alpha & 0 & \cdots \\ \beta & 0 & 0 & 0 & \alpha & \cdots \\ \vdots & & & & & \end{pmatrix} \qquad (2\text{-}42)$$
We generalize the above success run process to cases in which state $i+1$ can only be reached from state $i$ and the run length is renewed (set to zero) if a failure occurs. The corresponding transition probability matrix is therefore given by

$$\mathbf{P} = \begin{pmatrix} p_0 & q_0 & 0 & 0 & 0 & \cdots \\ p_1 & r_1 & q_1 & 0 & 0 & \cdots \\ p_2 & 0 & r_2 & q_2 & 0 & \cdots \\ p_3 & 0 & 0 & r_3 & q_3 & \cdots \\ \vdots & & & & & \end{pmatrix} \qquad (2\text{-}43)$$

Note that state 0 can be reached in one transition from any other state.
Another example of a success run process is the current age in a renewal process. Consider a light bulb whose lifetime, measured in discrete units, is a random variable $\xi$, where $P[\xi = i] = a_i$ for $i = 1, 2, \ldots$ and $\sum_{i=1}^{\infty} a_i = 1$. Each bulb is replaced by a new one when it burns out. Suppose the first bulb lasts until time $\xi_1$, the second bulb until time $\xi_1 + \xi_2$, and the $n$th bulb until time $\xi_1 + \cdots + \xi_n$, where the individual lifetimes $\xi_1, \xi_2, \ldots$ are independent observations of the random variable $\xi$. Let $X_n$ be the age of the bulb in service at time $n$. Note that failures can only occur at integer times and, by convention, we set $X_n = 0$ at the time of a failure.

[Figure: a sample path of the current age process $X_n$ plotted against time $n$; the age climbs by one each period and drops to 0 at each failure.]
The current age is a success runs Markov process for which

$$p_k = \frac{a_{k+1}}{a_{k+1} + a_{k+2} + \cdots} \qquad (2\text{-}44)$$

where $p_k$ is the probability of returning to state 0 at the next stage, given that the current state is state $k$. Equivalently,

$$p_k = P[X_{n+1} = 0 \mid X_n = k] = \frac{P[X_{n+1} = 0,\ X_n = k]}{P[X_n = k]}$$

with

$$P[X_{n+1} = 0,\ X_n = k] = P[\xi = k+1] = a_{k+1}$$

$$P[X_n = k] = P[\xi > k] = \sum_{i=1}^{\infty} P[\xi = k+i] = \sum_{i=1}^{\infty} a_{k+i}.$$
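For a concrete lifetime distribution, (2-44) is a one-liner. A sketch assuming Python with NumPy; the lifetime pmf below is a made-up example on lifetimes 1 through 4:

```python
import numpy as np

# hypothetical lifetime pmf: P[xi = i] for i = 1, 2, 3, 4
a = np.array([0.2, 0.3, 0.3, 0.2])

# Eq. (2-44): p_k = a_{k+1} / (a_{k+1} + a_{k+2} + ...), for k = 0..3
tails = np.cumsum(a[::-1])[::-1]   # tail sums a_{k+1} + a_{k+2} + ...
p = a / tails
print(p)   # p_3 = 1: a bulb of age 3 must fail at the next step here
```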
Consider the success runs Markov chain on $N+1$ states whose transition matrix is

$$\mathbf{P} = \begin{pmatrix} 1 & 0 & 0 & 0 & \cdots & 0 & 0 \\ p_1 & r_1 & q_1 & 0 & \cdots & 0 & 0 \\ p_2 & 0 & r_2 & q_2 & \cdots & 0 & 0 \\ \vdots & & & \ddots & \ddots & & \\ p_{N-1} & 0 & 0 & \cdots & 0 & r_{N-1} & q_{N-1} \\ 0 & 0 & 0 & \cdots & 0 & 0 & 1 \end{pmatrix} \qquad (2\text{-}45)$$

Note that states 0 and $N$ are absorbing states. Let $T$ be the hitting time to states 0 or $N$, i.e., $T = \min\{n \ge 0;\ X_n = 0 \text{ or } X_n = N\}$.
It can be shown that

$$u_k = P[X_T = 0 \mid X_0 = k] = 1 - \left(\frac{q_k}{p_k + q_k}\right)\left(\frac{q_{k+1}}{p_{k+1} + q_{k+1}}\right)\cdots\left(\frac{q_{N-1}}{p_{N-1} + q_{N-1}}\right), \quad k = 1, \ldots, N-1, \qquad (2\text{-}46)$$

with $u_0 = 1$ and $u_N = 0$. Intuitively, starting from state $k$ the process reaches $N$ only by advancing through states $k, k+1, \ldots, N-1$ without a single failure; given that the process eventually leaves state $j$, it advances with probability $q_j/(p_j + q_j)$, so the product of these terms is the probability of absorption in state $N$, and $u_k$ is its complement.
The mean hitting time $v_k = E[T \mid X_0 = k]$ is

$$v_k = \frac{1}{p_k + q_k} + \frac{\pi_{k,k+1}}{p_{k+1} + q_{k+1}} + \cdots + \frac{\pi_{k,N-1}}{p_{N-1} + q_{N-1}} \qquad (2\text{-}47)$$

where

$$\pi_{kj} = \left(\frac{q_k}{p_k + q_k}\right)\left(\frac{q_{k+1}}{p_{k+1} + q_{k+1}}\right)\cdots\left(\frac{q_{j-1}}{p_{j-1} + q_{j-1}}\right), \quad k < j. \qquad (2\text{-}48)$$

(Here $\pi_{kj}$ is the probability that state $j$ is ever visited starting from state $k$, and each visit to state $j$ lasts $1/(p_j + q_j)$ stages on the average.)
For a given state $j$ ($0 < j < N$), the mean total visits to state $j$ starting from state $i$, $W_{ij}$, is

$$W_{ij} = \begin{cases} \dfrac{1}{p_i + q_i}, & j = i \\[3mm] \left(\dfrac{q_i}{p_i + q_i}\right)\left(\dfrac{q_{i+1}}{p_{i+1} + q_{i+1}}\right)\cdots\left(\dfrac{q_{j-1}}{p_{j-1} + q_{j-1}}\right)\dfrac{1}{p_j + q_j}, & i < j \\[3mm] 0, & i > j \end{cases} \qquad (2\text{-}49)$$
One-Dimensional Random Walks

A one-dimensional random walk is a Markov chain whose state space is a finite or infinite subset of the integers, in which a particle, if it is in state $i$, can in a single transition either stay in $i$ or move to one of the neighboring states $i-1$, $i+1$. If the state space is represented by the nonnegative integers, the transition probability matrix of a random walk has the form

$$\mathbf{P} = \begin{pmatrix} r_0 & p_0 & 0 & 0 & \cdots \\ q_1 & r_1 & p_1 & 0 & \cdots \\ 0 & q_2 & r_2 & p_2 & \cdots \\ \vdots & & \ddots & \ddots & \ddots \end{pmatrix} \qquad (2\text{-}50)$$

where the row for state $i$ has $q_i$, $r_i$, $p_i$ in columns $i-1$, $i$, $i+1$.

The one-dimensional random walk can be used to depict the fortune of a player (player A) engaged in a series of contests. Suppose that a player with fortune $k$ plays a game against an infinitely rich adversary (player B) and has probability $p_k$ of winning one unit and probability $q_k = 1 - p_k$ of losing one unit in the next contest, and $r_0 = 1$. The process $X_n$, where $X_n$ represents his fortune after $n$ contests, is a random walk. Once the state $k = 0$ is reached, the process remains in that state. The event of reaching state 0 is known as the gambler's ruin.
If the adversary (player B) also starts with a limited fortune $l$ and player A has an initial fortune $k$ ($k + l = N$), we then consider the Markov chain $X_n$ representing player A's fortune after $n$ contests. The transition probability matrix is

$$\mathbf{P} = \begin{pmatrix} 1 & 0 & 0 & 0 & \cdots & 0 \\ q_1 & r_1 & p_1 & 0 & \cdots & 0 \\ 0 & q_2 & r_2 & p_2 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \\ 0 & \cdots & 0 & q_{N-1} & r_{N-1} & p_{N-1} \\ 0 & \cdots & 0 & 0 & 0 & 1 \end{pmatrix} \qquad (2\text{-}51)$$

Note that when player A's fortune reaches 0 (player A is ruined) or $N$ (player B is ruined) it remains in that same state forever. Also, different contests may be adopted at different stages, and therefore the probability of winning (or losing) one unit may depend on the player's fortune.
Let's consider the games with identical contests, i.e., $p_k = p$ and $q_k = q = 1 - p$ for all $k \ge 1$, with $r_0 = 1$. The transition probability matrix is

$$\mathbf{P} = \begin{pmatrix} 1 & 0 & 0 & 0 & \cdots & 0 \\ q & 0 & p & 0 & \cdots & 0 \\ 0 & q & 0 & p & \cdots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \\ 0 & \cdots & 0 & q & 0 & p \\ 0 & \cdots & 0 & 0 & 0 & 1 \end{pmatrix} \qquad (2\text{-}52)$$
Let $u_i = U_{i0}$ be the probability of gambler's (player A's) ruin starting with initial fortune $i$. The first step analysis yields

$$u_i = q\,u_{i-1} + p\,u_{i+1} \quad \text{for } i = 1, 2, \ldots, N-1, \qquad (2\text{-}53)$$

with $u_0 = 1$ and $u_N = 0$.
It can be shown that

$$u_i = \begin{cases} \dfrac{N-i}{N} & \text{when } p = q = 0.5 \\[3mm] \dfrac{(q/p)^i - (q/p)^N}{1 - (q/p)^N} & \text{when } p \ne q \end{cases} \qquad (2\text{-}54)$$
Gambler's ruin is the event that the process reaches state 0 before reaching state $N$. This event can be stated more formally if we introduce the concept of hitting time $T$. Let $T$ be the random time that the process first reaches state 0 or $N$:

$$T = \min\{n \ge 0;\ X_n = 0 \text{ or } X_n = N\}.$$

The event $X_T = 0$ is the event of gambler's ruin, and the probability of this event starting from the initial state $k$ is given by

$$U_k = u_k = P[X_T = 0 \mid X_0 = k]$$

with

$$u_k = q\,u_{k-1} + p\,u_{k+1} \quad \text{for } k = 1, \ldots, N-1, \qquad (2\text{-}55)$$

and $u_0 = 1$, $u_N = 0$.
Let $x_k = u_k - u_{k-1}$ for $k = 1, \ldots, N$. Using $p + q = 1$, we subtract $u_k = (p+q)u_k$ from $u_k = qu_{k-1} + pu_{k+1}$ to obtain

$$0 = p(u_{k+1} - u_k) - q(u_k - u_{k-1}) \quad \text{for } k = 1, \ldots, N-1,$$

i.e.,

$$p\,x_{k+1} - q\,x_k = 0 \quad \text{for } k = 1, \ldots, N-1. \qquad (2\text{-}56)$$

Hence

$$x_2 = \left(\frac{q}{p}\right)x_1, \qquad x_3 = \left(\frac{q}{p}\right)x_2 = \left(\frac{q}{p}\right)^2 x_1, \qquad \ldots, \qquad x_N = \left(\frac{q}{p}\right)x_{N-1} = \left(\frac{q}{p}\right)^{N-1}x_1.$$

Moreover, summing the differences,

$$x_1 = u_1 - u_0 = u_1 - 1$$
$$x_1 + x_2 = u_2 - 1$$
$$x_1 + x_2 + x_3 = u_3 - 1$$
$$\vdots$$
$$x_1 + \cdots + x_k = u_k - 1$$
$$\vdots$$
$$x_1 + \cdots + x_N = u_N - 1 = -1.$$

Therefore, the equation for general $k$ gives

$$u_k = 1 + x_1 + x_2 + \cdots + x_k = 1 + \left[1 + (q/p) + \cdots + (q/p)^{k-1}\right]x_1, \quad k = 1, \ldots, N-1. \qquad (2\text{-}57)$$
Since $u_N = 0$, it yields

$$0 = 1 + \left[1 + (q/p) + \cdots + (q/p)^{N-1}\right]x_1$$

$$x_1 = \frac{-1}{1 + (q/p) + \cdots + (q/p)^{N-1}}. \qquad (2\text{-}58)$$

Substituting (2-58) into (2-57) gives

$$u_k = 1 - \frac{1 + (q/p) + \cdots + (q/p)^{k-1}}{1 + (q/p) + \cdots + (q/p)^{N-1}}. \qquad (2\text{-}59)$$
Since

$$1 + (q/p) + \cdots + (q/p)^{k-1} = \begin{cases} k & \text{if } p = q = 0.5 \\[2mm] \dfrac{1 - (q/p)^k}{1 - (q/p)} & \text{if } p \ne q \end{cases}$$

it follows that

$$u_k = \begin{cases} 1 - \dfrac{k}{N} = \dfrac{N-k}{N} & \text{when } p = q = 0.5 \\[3mm] 1 - \dfrac{1 - (q/p)^k}{1 - (q/p)^N} = \dfrac{(q/p)^k - (q/p)^N}{1 - (q/p)^N} & \text{when } p \ne q \end{cases} \qquad (2\text{-}60)$$
Similarly, it can be shown that, when $p = q = 0.5$, the mean duration is

$$v_k = E[T \mid X_0 = k] = k(N-k), \quad \text{for } k = 0, 1, \ldots, N. \qquad (2\text{-}61)$$
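Formulas (2-60) and (2-61) are easy to check against the first step equations. A sketch assuming Python with NumPy:

```python
import numpy as np

def ruin_and_duration(p, N):
    """Solve u_k = q u_{k-1} + p u_{k+1} and v_k = 1 + q v_{k-1} + p v_{k+1}."""
    q = 1 - p
    A = np.zeros((N - 1, N - 1))
    bu = np.zeros(N - 1)            # right-hand side for ruin probabilities
    for k in range(1, N):
        A[k - 1, k - 1] = 1.0
        if k > 1:
            A[k - 1, k - 2] = -q
        else:
            bu[k - 1] = q           # boundary u_0 = 1 moves to the right-hand side
        if k < N - 1:
            A[k - 1, k] = -p        # boundary u_N = 0 contributes nothing
    u = np.linalg.solve(A, bu)
    v = np.linalg.solve(A, np.ones(N - 1))
    return u, v

u, v = ruin_and_duration(0.5, 10)
print(u[2], (10 - 3) / 10)   # u_3 = 0.7, matching Eq. (2-60)
print(v[2], 3 * (10 - 3))    # v_3 = 21,  matching Eq. (2-61)
```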
The General Random Walks

Consider the following general case of a one-dimensional random walk:

$$\mathbf{P} = \begin{pmatrix} 1 & 0 & 0 & 0 & \cdots & 0 \\ q_1 & r_1 & p_1 & 0 & \cdots & 0 \\ 0 & q_2 & r_2 & p_2 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \\ 0 & \cdots & 0 & 0 & 0 & 1 \end{pmatrix}$$

where $p_k > 0$, $q_k > 0$ for $k = 1, 2, \ldots, N-1$. Let $T = \min\{n \ge 0;\ X_n = 0 \text{ or } X_n = N\}$ be the random time that the process first reaches state 0 or $N$, i.e., the hitting time. The probability of gambler's ruin ($u_i = P[X_T = 0 \mid X_0 = i]$), the mean hitting time ($v_k = E[T \mid X_0 = k]$), and the mean total visits to state $k$ from $i$ ($W_{ik}$) can be expressed as follows:

$$u_i = \frac{\rho_i + \rho_{i+1} + \cdots + \rho_{N-1}}{1 + \rho_1 + \cdots + \rho_{N-1}}, \quad i = 1, \ldots, N-1, \qquad (2\text{-}62)$$

where

$$\rho_k = \frac{q_1 q_2 \cdots q_k}{p_1 p_2 \cdots p_k}, \quad k = 1, \ldots, N-1. \qquad (2\text{-}63)$$
$$v_k = \frac{\Phi_1 + \Phi_2 + \cdots + \Phi_{N-1}}{1 + \rho_1 + \cdots + \rho_{N-1}}\left(1 + \rho_1 + \cdots + \rho_{k-1}\right) - \left(\Phi_1 + \cdots + \Phi_{k-1}\right), \quad k = 1, \ldots, N-1, \qquad (2\text{-}64)$$

where

$$\Phi_k = \rho_k\left(\frac{1}{q_1} + \frac{1}{q_2\rho_1} + \frac{1}{q_3\rho_2} + \cdots + \frac{1}{q_k\rho_{k-1}}\right), \quad k = 1, \ldots, N-1. \qquad (2\text{-}65)$$
$$W_{ik} = \begin{cases} \dfrac{1}{q_k\rho_{k-1}} \cdot \dfrac{\left(1 + \rho_1 + \cdots + \rho_{i-1}\right)\left(\rho_k + \cdots + \rho_{N-1}\right)}{1 + \rho_1 + \cdots + \rho_{N-1}}, & i \le k \\[4mm] \dfrac{1}{q_k\rho_{k-1}} \cdot \dfrac{\left(1 + \rho_1 + \cdots + \rho_{k-1}\right)\left(\rho_i + \cdots + \rho_{N-1}\right)}{1 + \rho_1 + \cdots + \rho_{N-1}}, & i > k \end{cases} \qquad (2\text{-}66)$$

with the convention $\rho_0 = 1$.
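These expressions can be validated against a direct first step solve. A sketch assuming Python with NumPy; the $p_k$, $q_k$ values below are arbitrary test data, with $r_k = 1 - p_k - q_k$:

```python
import numpy as np

# hypothetical one-step probabilities for states 1..N-1 (N = 4)
p = {1: 0.5, 2: 0.4, 3: 0.3}
q = {1: 0.3, 2: 0.4, 3: 0.5}
N = 4

rho = {0: 1.0}
for k in range(1, N):
    rho[k] = rho[k - 1] * q[k] / p[k]          # Eq. (2-63)

den = sum(rho[k] for k in range(N))
u1 = sum(rho[k] for k in range(1, N)) / den    # Eq. (2-62) with i = 1

# direct first step analysis: (1 - r_k) u_k - p_k u_{k+1} - q_k u_{k-1} = 0
A = np.array([[p[1] + q[1], -p[1], 0.0],
              [-q[2], p[2] + q[2], -p[2]],
              [0.0, -q[3], p[3] + q[3]]])
b = np.array([q[1], 0.0, 0.0])                 # from the boundary u_0 = 1
print(u1, np.linalg.solve(A, b)[0])            # the two values agree
```

The same comparison works for $v_k$ and $W_{ik}$ by changing the right-hand side.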
[Proof]
(1) First step analysis gives

$$u_k = p_k\,u_{k+1} + r_k\,u_k + q_k\,u_{k-1} \quad \text{for } k = 1, 2, \ldots, N-1, \qquad u_0 = 1,\ u_N = 0.$$

Since $p_k + r_k + q_k = 1$, we may write $u_k = p_k u_k + r_k u_k + q_k u_k$, and subtracting,

$$0 = p_k(u_{k+1} - u_k) - q_k(u_k - u_{k-1}).$$

Let $x_k = u_k - u_{k-1}$, $k = 1, 2, \ldots, N$; then $p_k\,x_{k+1} = q_k\,x_k$, i.e.,

$$x_{k+1} = \frac{q_k}{p_k}\,x_k.$$

Iterating,

$$x_2 = \frac{q_1}{p_1}x_1, \quad x_3 = \frac{q_2}{p_2}x_2 = \frac{q_1 q_2}{p_1 p_2}x_1, \quad \ldots, \quad x_{k+1} = \frac{q_1 q_2 \cdots q_k}{p_1 p_2 \cdots p_k}x_1 = \rho_k\,x_1,$$

so that $x_k = \rho_{k-1}x_1$ for $k = 1, 2, \ldots, N$ (with $\rho_0 = 1$). Moreover,

$$x_1 = u_1 - u_0 = u_1 - 1, \quad x_1 + x_2 = u_2 - 1, \quad x_1 + x_2 + x_3 = u_3 - 1, \quad \ldots$$
$$x_1 + x_2 + \cdots + x_k = u_k - 1 \quad (k = 1, 2, \ldots, N)$$
$$x_1 + x_2 + \cdots + x_N = u_N - 1 = -1.$$

Hence

$$u_k = 1 + x_1 + \cdots + x_k = 1 + \left(1 + \rho_1 + \cdots + \rho_{k-1}\right)x_1.$$

Since $u_N = 0$,

$$0 = 1 + \left(1 + \rho_1 + \cdots + \rho_{N-1}\right)x_1 \quad \Longrightarrow \quad x_1 = \frac{-1}{1 + \rho_1 + \cdots + \rho_{N-1}}.$$

Therefore,

$$u_k = 1 - \frac{1 + \rho_1 + \cdots + \rho_{k-1}}{1 + \rho_1 + \cdots + \rho_{N-1}} = \frac{\rho_k + \rho_{k+1} + \cdots + \rho_{N-1}}{1 + \rho_1 + \cdots + \rho_{N-1}}.$$
(2) First step analysis gives

$$v_k = 1 + p_k\,v_{k+1} + r_k\,v_k + q_k\,v_{k-1} \quad \text{for } k = 1, 2, \ldots, N-1, \qquad v_0 = 0,\ v_N = 0.$$

Again writing $v_k = p_k v_k + r_k v_k + q_k v_k$ and subtracting,

$$0 = 1 + p_k(v_{k+1} - v_k) - q_k(v_k - v_{k-1}).$$

Let $x_k = v_k - v_{k-1}$, $k = 1, 2, \ldots, N$; then

$$x_{k+1} = \frac{q_k}{p_k}\,x_k - \frac{1}{p_k}.$$

Iterating,

$$x_2 = \frac{q_1}{p_1}x_1 - \frac{1}{p_1}$$
$$x_3 = \frac{q_2}{p_2}x_2 - \frac{1}{p_2} = \frac{q_1 q_2}{p_1 p_2}x_1 - \frac{q_2}{p_1 p_2} - \frac{1}{p_2}$$
$$x_4 = \frac{q_1 q_2 q_3}{p_1 p_2 p_3}x_1 - \frac{q_2 q_3}{p_1 p_2 p_3} - \frac{q_3}{p_2 p_3} - \frac{1}{p_3}$$
$$\vdots$$
$$x_{k+1} = \rho_k\,x_1 - \rho_k\left(\frac{1}{q_1} + \frac{1}{q_2\rho_1} + \cdots + \frac{1}{q_k\rho_{k-1}}\right) = \rho_k\,x_1 - \Phi_k,$$

where

$$\Phi_k = \rho_k\left(\frac{1}{q_1} + \frac{1}{q_2\rho_1} + \frac{1}{q_3\rho_2} + \cdots + \frac{1}{q_k\rho_{k-1}}\right),$$

so that $x_k = \rho_{k-1}x_1 - \Phi_{k-1}$ (with $\Phi_0 = 0$). Since $x_1 = v_1 - v_0 = v_1$, $x_1 + x_2 = v_2$, and in general

$$x_1 + x_2 + \cdots + x_k = v_k, \quad k = 1, 2, \ldots, N,$$

we obtain

$$v_k = x_1 + x_2 + \cdots + x_k = \left(1 + \rho_1 + \cdots + \rho_{k-1}\right)x_1 - \left(\Phi_1 + \cdots + \Phi_{k-1}\right).$$

Setting $k = N$ and using $v_N = 0$,

$$0 = \left(1 + \rho_1 + \cdots + \rho_{N-1}\right)x_1 - \left(\Phi_1 + \cdots + \Phi_{N-1}\right) \quad \Longrightarrow \quad x_1 = \frac{\Phi_1 + \cdots + \Phi_{N-1}}{1 + \rho_1 + \cdots + \rho_{N-1}},$$

and therefore

$$v_k = \frac{\Phi_1 + \cdots + \Phi_{N-1}}{1 + \rho_1 + \cdots + \rho_{N-1}}\left(1 + \rho_1 + \cdots + \rho_{k-1}\right) - \left(\Phi_1 + \cdots + \Phi_{k-1}\right),$$

which is Eq. (2-64).
5 Review of the First Step Analysis
Consider the Markov chain of $N+1$ states. Suppose that states $0, 1, \ldots, r-1$ are transient in that $P_{ij}^{(n)} \to 0$ as $n \to \infty$ for $0 \le i, j \le r-1$, while states $r, \ldots, N$ are absorbing states ($P_{ii} = 1$, $r \le i \le N$). The transition probability matrix has the form

$$\mathbf{P} = \begin{pmatrix} \mathbf{Q} & \mathbf{R} \\ \mathbf{0} & \mathbf{I} \end{pmatrix} \qquad (2\text{-}67)$$

where $\mathbf{0}$ is an $(N-r+1) \times r$ matrix all of whose entries are zero, $\mathbf{I}$ is an $(N-r+1) \times (N-r+1)$ identity matrix, and $Q_{ij} = P_{ij}$ for $0 \le i, j < r$. Then

$$\mathbf{P}^2 = \begin{pmatrix} \mathbf{Q}^2 & \mathbf{Q}\mathbf{R} + \mathbf{R} \\ \mathbf{0} & \mathbf{I} \end{pmatrix} \qquad (2\text{-}68)$$

$$\mathbf{P}^3 = \begin{pmatrix} \mathbf{Q}^3 & \mathbf{Q}^2\mathbf{R} + \mathbf{Q}\mathbf{R} + \mathbf{R} \\ \mathbf{0} & \mathbf{I} \end{pmatrix} \qquad (2\text{-}69)$$
For higher values of $n$,

$$\mathbf{P}^n = \begin{pmatrix} \mathbf{Q}^n & \left(\mathbf{I} + \mathbf{Q} + \cdots + \mathbf{Q}^{n-1}\right)\mathbf{R} \\ \mathbf{0} & \mathbf{I} \end{pmatrix} \qquad (2\text{-}70)$$
Let $W_{ij}^{(n)}$ be the mean total visits to state $j$ up to stage $n$ for a Markov chain starting from state $i$, i.e.,

$$W_{ij}^{(n)} = E\left[\left.\sum_{l=0}^{n} \delta(X_l = j)\ \right|\ X_0 = i\right] \qquad (2\text{-}71)$$

where

$$\delta(X_l = j) = \begin{cases} 1 & \text{if } X_l = j \\ 0 & \text{if } X_l \ne j \end{cases}.$$
Consider $\delta(X_l = j)$ as a Bernoulli random variable; then

$$E[\delta(X_l = j) \mid X_0 = i] = 1 \cdot P[X_l = j \mid X_0 = i] + 0 \cdot P[X_l \ne j \mid X_0 = i] = P_{ij}^{(l)}. \qquad (2\text{-}72)$$
Therefore,

$$W_{ij}^{(n)} = E\left[\left.\sum_{l=0}^{n} \delta(X_l = j)\ \right|\ X_0 = i\right] = \sum_{l=0}^{n} E[\delta(X_l = j) \mid X_0 = i]$$

$$W_{ij}^{(n)} = \sum_{l=0}^{n} P_{ij}^{(l)} \quad \text{for all states } i, j. \qquad (2\text{-}73)$$
Eq. (2-70) indicates that $P_{ij}^{(l)} = Q_{ij}^{(l)}$ when $0 \le i, j < r$. Therefore,

$$W_{ij}^{(n)} = \sum_{l=0}^{n} Q_{ij}^{(l)} = Q_{ij}^{(0)} + Q_{ij}^{(1)} + \cdots + Q_{ij}^{(n)}, \quad 0 \le i, j < r,$$

where, in particular,

$$Q_{ij}^{(0)} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases}.$$

In matrix form, we have

$$\mathbf{W}^{(n)} = \mathbf{I} + \mathbf{Q} + \mathbf{Q}^2 + \cdots + \mathbf{Q}^n = \mathbf{I} + \mathbf{Q}\left(\mathbf{I} + \mathbf{Q} + \mathbf{Q}^2 + \cdots + \mathbf{Q}^{n-1}\right)$$

$$\mathbf{W}^{(n)} = \mathbf{I} + \mathbf{Q}\,\mathbf{W}^{(n-1)}. \qquad (2\text{-}74)$$

Componentwise,

$$W_{ij}^{(n)} = \delta_{ij} + \sum_{k=0}^{r-1} Q_{ik}\,W_{kj}^{(n-1)} = \delta_{ij} + \sum_{k=0}^{r-1} P_{ik}\,W_{kj}^{(n-1)}, \quad 0 \le i, j < r. \qquad (2\text{-}75)$$
The limit of $W_{ij}^{(n)}$ represents the expected value of the total visits to state $j$ from the initial state $i$, i.e.,

$$W_{ij} = \lim_{n\to\infty} W_{ij}^{(n)} = E[\text{total visits to } j \mid X_0 = i], \quad 0 \le i, j < r.$$

$$\mathbf{W} = \mathbf{I} + \mathbf{Q} + \mathbf{Q}^2 + \cdots$$
$$\mathbf{W} = \mathbf{I} + \mathbf{Q}\mathbf{W} \qquad (2\text{-}76)$$

Or, componentwise,

$$W_{ij} = \delta_{ij} + \sum_{k=0}^{r-1} P_{ik}\,W_{kj}, \quad 0 \le i, j < r. \qquad (2\text{-}77)$$

$$\mathbf{W} - \mathbf{Q}\mathbf{W} = (\mathbf{I} - \mathbf{Q})\mathbf{W} = \mathbf{I} \qquad (2\text{-}78)$$
$$\mathbf{W} = (\mathbf{I} - \mathbf{Q})^{-1} \qquad (2\text{-}79)$$

The matrix $\mathbf{W} = (\mathbf{I} - \mathbf{Q})^{-1}$ is called the fundamental matrix associated with $\mathbf{Q}$.
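A sketch computing the fundamental matrix for the chain of Example 6, assuming Python with NumPy:

```python
import numpy as np

# Q = transient block (states 1 and 2) of the Example 6 chain
Q = np.array([[0.3, 0.2],
              [0.3, 0.3]])

W = np.linalg.inv(np.eye(2) - Q)   # Eq. (2-79): W = (I - Q)^{-1}
print(W)
print(W.sum(axis=1))               # Eq. (2-82): v_i = sum_j W_ij
```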
The random absorption time is expressed by $T = \min\{n \ge 0;\ X_n \ge r\}$, and it can be written as

$$T = \sum_{j=0}^{r-1}\sum_{n=0}^{T-1} \delta(X_n = j). \qquad (2\text{-}80)$$

Since $W_{ij}$ is the mean number of total visits to state $j$ from the initial state $i$ (both transient states), it follows that

$$W_{ij} = E\left[\left.\sum_{n=0}^{T-1} \delta(X_n = j)\ \right|\ X_0 = i\right], \quad 0 \le i, j < r. \qquad (2\text{-}81)$$
Let $v_i = E[T \mid X_0 = i]$ be the mean time to absorption starting from state $i$. It follows from (2-80) and (2-81) that

$$v_i = E[T \mid X_0 = i] = \sum_{j=0}^{r-1} W_{ij} \qquad (2\text{-}82)$$

Summing Eq. (2-77) over the transient states $j$ gives

$$\sum_{j=0}^{r-1} W_{ij} = \sum_{j=0}^{r-1}\delta_{ij} + \sum_{j=0}^{r-1}\sum_{k=0}^{r-1} P_{ik}\,W_{kj} = 1 + \sum_{k=0}^{r-1} P_{ik}\sum_{j=0}^{r-1} W_{kj}, \quad 0 \le i < r,$$

i.e.,

$$v_i = \sum_{j=0}^{r-1} W_{ij} = 1 + \sum_{k=0}^{r-1} P_{ik}\,v_k, \quad 0 \le i < r. \qquad (2\text{-}83)$$
Starting from initial state $i$, the probability of being absorbed in state $k$, i.e., the hitting probability of state $k$, is expressed by

$$U_{ik} = P[X_T = k \mid X_0 = i] \quad \text{for } 0 \le i < r \text{ and } r \le k \le N.$$

Decomposing this event according to the stage at which absorption occurs,

$$U_{ik} = P[X_T = k \mid X_0 = i] = \lim_{n\to\infty} P[X_T = k,\ T \le n \mid X_0 = i]$$

and

$$P[X_T = k,\ T \le n \mid X_0 = i] = f_{ik}^{(1)} + f_{ik}^{(2)} + \cdots + f_{ik}^{(n)},$$

where $f_{ik}^{(l)}$ is the probability that, starting from state $i$, the process reaches state $k$ for the first time in $l$ steps,

$$f_{ik}^{(l)} = P[X_l = k,\ X_m \in S_T \text{ for } m = 1, 2, \ldots, l-1 \mid X_0 = i],$$

and $S_T$ represents the set of all transient states, i.e., $S_T = \{0, 1, \ldots, r-1\}$.
Let $U_{ik}^{(n)} = P[X_T = k,\ T \le n \mid X_0 = i] = \sum_{l=1}^{n} f_{ik}^{(l)}$. Since state $k$ is an absorbing state, once the process reaches state $k$ it will always remain in that state. Therefore, the process is in state $k$ at stage $n$ exactly when it was absorbed in $k$ at some stage $l \le n$, and

$$U_{ik}^{(n)} = P[X_n = k \mid X_0 = i] = P_{ik}^{(n)} \quad \text{for } 0 \le i < r \text{ and } r \le k \le N. \qquad (2\text{-}84)$$
[Remark] It is worth noting that the expression $P[X_T = k,\ T \le n \mid X_0 = i]$ requires careful interpretation. The meaning of $X_T = k$ for $T = l$ is not the same as that of $X_l = k$. Since $T$ is the time of absorption, $X_T = k$ for $T = l$ implies that the process reaches the absorbing state $k$ for the first time at stage $l$, whereas $X_l = k$ implies only that the process is in state $k$ at stage $l$.
Expanding $U_{ik}^{(n)}$ by the stage of absorption,

$$U_{ik}^{(n)} = P[X_T = k,\ T = 1 \mid X_0 = i] + P[X_T = k,\ T = 2 \mid X_0 = i] + \cdots + P[X_T = k,\ T = n \mid X_0 = i] = \sum_{l=1}^{n} f_{ik}^{(l)},$$

and therefore

$$U_{ik} = \lim_{n\to\infty} U_{ik}^{(n)} = \lim_{n\to\infty} P_{ik}^{(n)}.$$
From Eq. (2-70),

$$\mathbf{U}^{(n)} = \left(\mathbf{I} + \mathbf{Q} + \cdots + \mathbf{Q}^{n-1}\right)\mathbf{R} = \mathbf{W}^{(n-1)}\mathbf{R}$$

Taking $n$ to infinity, it yields

$$\mathbf{U} = \lim_{n\to\infty}\mathbf{U}^{(n)} = \left(\mathbf{I} + \mathbf{Q} + \mathbf{Q}^2 + \cdots\right)\mathbf{R} = \mathbf{W}\mathbf{R}$$

$$U_{ik} = \sum_{j=0}^{r-1} W_{ij}\,R_{jk}, \quad 0 \le i < r \text{ and } r \le k \le N.$$
From Eq. (2-77),

$$U_{ik} = \sum_{j=0}^{r-1} W_{ij}\,R_{jk} = \sum_{j=0}^{r-1}\left(\delta_{ij} + \sum_{l=0}^{r-1} P_{il}\,W_{lj}\right)R_{jk} = R_{ik} + \sum_{l=0}^{r-1} P_{il}\sum_{j=0}^{r-1} W_{lj}\,R_{jk}$$

$$U_{ik} = R_{ik} + \sum_{l=0}^{r-1} P_{il}\,U_{lk}$$

$$U_{ik} = P_{ik} + \sum_{l=0}^{r-1} P_{il}\,U_{lk}, \quad 0 \le i < r \text{ and } r \le k \le N,$$

which recovers Eq. (2-25).
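Continuing the fundamental-matrix sketch above, the hitting probabilities of Example 6 follow from $\mathbf{U} = \mathbf{W}\mathbf{R}$ (Python with NumPy assumed):

```python
import numpy as np

Q = np.array([[0.3, 0.2],
              [0.3, 0.3]])
R = np.array([[0.4, 0.1],    # P_{10}, P_{13}
              [0.1, 0.3]])   # P_{20}, P_{23}

W = np.linalg.inv(np.eye(2) - Q)
U = W @ R                    # U = W R
print(U)   # rows: [30/43, 13/43] and [19/43, 24/43]
```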
6 The Long Run Behavior of Markov Chains
Suppose that a transition matrix $\mathbf{P}$ on a finite number of states labeled $0, 1, \ldots, N$ has the property that the matrix $\mathbf{P}^k$ has all its elements strictly positive for some power $k$. Such a transition matrix, or the corresponding Markov chain, is called regular. For a regular Markov chain, there exists a limiting probability distribution $\boldsymbol{\pi} = (\pi_0, \pi_1, \ldots, \pi_N)$ where

$$\pi_j = \lim_{n\to\infty} P_{ij}^{(n)} > 0, \quad j = 0, 1, \ldots, N, \quad \text{and} \quad \sum_{j=0}^{N}\pi_j = 1.$$
Example 9  The Markov chain with the transition probability matrix

$$\mathbf{P} = \begin{pmatrix} 0.33 & 0.67 \\ 0.75 & 0.25 \end{pmatrix}$$

has powers

$$\mathbf{P}^2 = \begin{pmatrix} 0.6114 & 0.3886 \\ 0.4350 & 0.5650 \end{pmatrix}, \quad \mathbf{P}^3 = \begin{pmatrix} 0.4932 & 0.5068 \\ 0.5673 & 0.4327 \end{pmatrix},$$
$$\mathbf{P}^4 = \begin{pmatrix} 0.5428 & 0.4572 \\ 0.5117 & 0.4883 \end{pmatrix}, \quad \mathbf{P}^5 = \begin{pmatrix} 0.5220 & 0.4780 \\ 0.5350 & 0.4650 \end{pmatrix},$$
$$\mathbf{P}^6 = \begin{pmatrix} 0.5307 & 0.4693 \\ 0.5253 & 0.4747 \end{pmatrix}, \quad \mathbf{P}^7 = \begin{pmatrix} 0.5271 & 0.4729 \\ 0.5294 & 0.4706 \end{pmatrix}.$$

From Eq. (2-36) with $a = 0.67$ and $b = 0.75$, the limiting probabilities are $0.75/1.42 = 0.5282$ and $0.67/1.42 = 0.4718$.
Example 10  The Markov chain with the transition probability matrix

$$\mathbf{P} = \begin{pmatrix} 0.40 & 0.50 & 0.10 \\ 0.05 & 0.70 & 0.25 \\ 0.05 & 0.50 & 0.45 \end{pmatrix}$$

has powers

$$\mathbf{P}^2 = \begin{pmatrix} 0.1900 & 0.6000 & 0.2100 \\ 0.0675 & 0.6400 & 0.2925 \\ 0.0675 & 0.6000 & 0.3325 \end{pmatrix}, \quad \mathbf{P}^4 = \begin{pmatrix} 0.0908 & 0.6240 & 0.2852 \\ 0.0758 & 0.6256 & 0.2986 \\ 0.0758 & 0.6240 & 0.3002 \end{pmatrix},$$
$$\mathbf{P}^8 = \begin{pmatrix} 0.0772 & 0.6250 & 0.2978 \\ 0.0769 & 0.6250 & 0.2981 \\ 0.0769 & 0.6250 & 0.2981 \end{pmatrix}.$$
Every transition probability matrix on the states $0, 1, \ldots, N$ that satisfies the following two conditions is regular:

(1) For every pair of states $i, j$ there is a path $k_1, k_2, \ldots, k_r$ for which $P_{ik_1}P_{k_1k_2}\cdots P_{k_rj} > 0$.
(2) There is at least one state $i$ for which $P_{ii} > 0$.
Theorem  Let $\mathbf{P}$ be a regular transition probability matrix on the states $0, 1, \ldots, N$. Then the limiting distribution $\boldsymbol{\pi} = (\pi_0, \pi_1, \ldots, \pi_N)$ is the unique nonnegative solution of the equations

$$\pi_j = \sum_{k=0}^{N} \pi_k\,P_{kj}, \quad j = 0, 1, \ldots, N, \qquad (2\text{-}85)$$

$$\sum_{k=0}^{N} \pi_k = 1. \qquad (2\text{-}86)$$
[Proof]
Since the Markov chain is regular, we have a limiting distribution with $\pi_j = \lim_{n\to\infty} P_{ij}^{(n)} > 0$, $j = 0, 1, \ldots, N$, and $\sum_{j=0}^{N}\pi_j = 1$. From $\mathbf{P}^n = \mathbf{P}^{n-1}\mathbf{P}$, it yields

$$P_{ij}^{(n)} = \sum_{k=0}^{N} P_{ik}^{(n-1)}\,P_{kj}, \quad j = 0, \ldots, N.$$

Let $n \to \infty$; then $P_{ij}^{(n)} \to \pi_j$ while $P_{ik}^{(n-1)} \to \pi_k$. Therefore,

$$\pi_j = \sum_{k=0}^{N} \pi_k\,P_{kj}, \quad j = 0, 1, \ldots, N.$$

Proof of the uniqueness of the solution is skipped.
Example 11  Determine the limiting distribution for the Markov chain with the transition probability matrix

$$\mathbf{P} = \begin{pmatrix} 0.40 & 0.50 & 0.10 \\ 0.05 & 0.70 & 0.25 \\ 0.05 & 0.50 & 0.45 \end{pmatrix}$$

[Solution]

$$0.40\pi_0 + 0.05\pi_1 + 0.05\pi_2 = \pi_0$$
$$0.50\pi_0 + 0.70\pi_1 + 0.50\pi_2 = \pi_1$$
$$0.10\pi_0 + 0.25\pi_1 + 0.45\pi_2 = \pi_2$$
$$\pi_0 + \pi_1 + \pi_2 = 1$$

$$\pi_0 = 1/13, \quad \pi_1 = 5/8, \quad \pi_2 = 31/104.$$

The limiting distribution $\pi_j$, $j = 0, 1, \ldots, N$, can also be interpreted as the long run mean fraction of time that the random process $\{X_n\}$ is in state $j$.
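Numerically, the limiting distribution solves the linear system (2-85) and (2-86); one convenient trick replaces one balance equation with the normalization constraint. A sketch assuming Python with NumPy:

```python
import numpy as np

P = np.array([[0.40, 0.50, 0.10],
              [0.05, 0.70, 0.25],
              [0.05, 0.50, 0.45]])

# balance equations pi (P - I) = 0, with the last one replaced by sum(pi) = 1
A = np.vstack([(P.T - np.eye(3))[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)   # [1/13, 5/8, 31/104] = [0.0769, 0.6250, 0.2981]
```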
7 Including History in the State Description
There are cases in which the phenomenon under investigation is not naturally a Markov process. However, such a phenomenon can still be modeled as a Markov process by including part of the past history in the state description.
Suppose that the weather on any day depends on the weather conditions for the
previous two days. To be exact, we suppose that
(1) if it was sunny today and yesterday, then it will be sunny tomorrow with
probability 0.8;
(2) if it was sunny today but cloudy yesterday, then it will be sunny tomorrow
with probability 0.6;
(3) if it was cloudy today but sunny yesterday, then it will be sunny tomorrow
with probability 0.4;
(4) if it was cloudy for the last two days, then it will be sunny tomorrow with
probability 0.1.
Such a model can be transformed into a Markov chain provided we say that the
state at any time is determined by the weather conditions during both that day and
the previous day. We say the process is in
(S,S) : if it was sunny for both today and yesterday;
(S,C) : if it was sunny yesterday but cloudy today;
(C,S) : if it was cloudy yesterday but sunny today;
(C,C) : if it was cloudy for both today and yesterday.
Then the transition matrix, with rows (today's state) and columns (tomorrow's state) ordered $(S,S), (S,C), (C,S), (C,C)$, is

$$\mathbf{P} = \begin{pmatrix} 0.8 & 0.2 & 0 & 0 \\ 0 & 0 & 0.4 & 0.6 \\ 0.6 & 0.4 & 0 & 0 \\ 0 & 0 & 0.1 & 0.9 \end{pmatrix}$$

Labeling these states $0, 1, 2, 3$, the limiting distribution satisfies

$$0.8\pi_0 + 0.6\pi_2 = \pi_0$$
$$0.2\pi_0 + 0.4\pi_2 = \pi_1$$
$$0.4\pi_1 + 0.1\pi_3 = \pi_2$$
$$0.6\pi_1 + 0.9\pi_3 = \pi_3$$
$$\pi_0 + \pi_1 + \pi_2 + \pi_3 = 1$$

$$\pi_0 = 3/11, \quad \pi_1 = 1/11, \quad \pi_2 = 1/11, \quad \pi_3 = 6/11.$$
8 Eigenvectors and Calculation of the Limiting Probabilities
Example 12  A city is served by two newspapers, UDN and CT. The CT, however, seems to be in trouble. Currently, the CT has only a 38% market share. Furthermore, every year, 10% of its readership switches to UDN while only 7% of the UDN's readership switches to CT. Assume that no one subscribes to both papers and the total newspaper readership remains constant. What is the long-term outlook for CT?
[Solution]
The readerships (in percentages) one year later:

$$\begin{pmatrix} 38 & 62 \end{pmatrix}\begin{pmatrix} 0.90 & 0.10 \\ 0.07 & 0.93 \end{pmatrix} = \begin{pmatrix} 38.54 & 61.46 \end{pmatrix}$$

Or, in column form,

$$\begin{pmatrix} 0.90 & 0.07 \\ 0.10 & 0.93 \end{pmatrix}\begin{pmatrix} 38 \\ 62 \end{pmatrix} = \begin{pmatrix} 38.54 \\ 61.46 \end{pmatrix} = \mathbf{X}_1, \qquad \mathbf{X}_1 = \mathbf{P}^T\mathbf{X}_0$$

The readerships at the end of the second year:

$$\begin{pmatrix} 0.90 & 0.07 \\ 0.10 & 0.93 \end{pmatrix}\begin{pmatrix} 38.54 \\ 61.46 \end{pmatrix} = \begin{pmatrix} 38.99 \\ 61.01 \end{pmatrix} = \mathbf{X}_2, \qquad \mathbf{X}_2 = \mathbf{P}^T\mathbf{X}_1$$

Repeating, we have

$$\mathbf{X}_3 = \begin{pmatrix} 39.36 \\ 60.64 \end{pmatrix}, \quad \mathbf{X}_4 = \begin{pmatrix} 39.67 \\ 60.33 \end{pmatrix}, \quad \mathbf{X}_5 = \begin{pmatrix} 39.93 \\ 60.07 \end{pmatrix}, \quad \mathbf{X}_6 = \begin{pmatrix} 40.14 \\ 59.86 \end{pmatrix}$$

It is clear that CT not only is not in trouble, it is actually thriving: its market share grows year after year. However, the rate of growth is slowing, and we can expect that eventually the readerships will reach an equilibrium state $\mathbf{X}$, i.e.,

$$\mathbf{X} = \lim_{n\to\infty}\left(\mathbf{P}^T\right)^n\mathbf{X}_0 \quad \text{and} \quad \mathbf{P}^T\mathbf{X} = \mathbf{X}.$$
Eigenvector of a square matrix

Let $\mathbf{A}$ be an $n \times n$ matrix. A non-zero vector $\mathbf{X}$ such that

$$\mathbf{A}\mathbf{X} = \lambda\mathbf{X} \qquad (2\text{-}87)$$
$$(\mathbf{A} - \lambda\mathbf{I})\mathbf{X} = \mathbf{0} \qquad (2\text{-}88)$$

for some scalar $\lambda$ is called an eigenvector of $\mathbf{A}$, and the scalar $\lambda$ is called the eigenvalue.
Now, let's consider the calculation of $\mathbf{A}^k\mathbf{B}$, where $\mathbf{B}$ is an $n \times 1$ vector. Suppose the matrix $\mathbf{A}$ has $n$ linearly independent eigenvectors

$$\mathbf{X}_1 = \begin{pmatrix} x_{11} \\ x_{21} \\ \vdots \\ x_{n1} \end{pmatrix}, \quad \mathbf{X}_2 = \begin{pmatrix} x_{12} \\ x_{22} \\ \vdots \\ x_{n2} \end{pmatrix}, \quad \ldots, \quad \mathbf{X}_n = \begin{pmatrix} x_{1n} \\ x_{2n} \\ \vdots \\ x_{nn} \end{pmatrix}$$

with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. These eigenvectors form a basis for an $n$-dimensional space. Therefore, we can express the vector $\mathbf{B}$ as a linear combination of the $\mathbf{X}_i$'s, i.e.,

$$\mathbf{B} = \alpha_1\mathbf{X}_1 + \alpha_2\mathbf{X}_2 + \cdots + \alpha_n\mathbf{X}_n = \begin{pmatrix} \mathbf{X}_1 & \mathbf{X}_2 & \cdots & \mathbf{X}_n \end{pmatrix}\begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix}$$

Then

$$\mathbf{A}^k\mathbf{B} = \mathbf{A}^{k-1}\begin{pmatrix} \mathbf{A}\mathbf{X}_1 & \mathbf{A}\mathbf{X}_2 & \cdots & \mathbf{A}\mathbf{X}_n \end{pmatrix}\begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix}$$

Since $\mathbf{A}\mathbf{X}_i = \lambda_i\mathbf{X}_i$, we have

$$\mathbf{A}^k\mathbf{B} = \mathbf{A}^{k-1}\begin{pmatrix} \lambda_1\mathbf{X}_1 & \lambda_2\mathbf{X}_2 & \cdots & \lambda_n\mathbf{X}_n \end{pmatrix}\begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix} = \mathbf{A}^{k-1}\begin{pmatrix} \mathbf{X}_1 & \mathbf{X}_2 & \cdots & \mathbf{X}_n \end{pmatrix}\boldsymbol{\Lambda}\begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix}$$

$$= \mathbf{A}^{k-2}\begin{pmatrix} \mathbf{X}_1 & \mathbf{X}_2 & \cdots & \mathbf{X}_n \end{pmatrix}\boldsymbol{\Lambda}^2\begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix} = \cdots$$

$$\mathbf{A}^k\mathbf{B} = \begin{pmatrix} \mathbf{X}_1 & \mathbf{X}_2 & \cdots & \mathbf{X}_n \end{pmatrix}\boldsymbol{\Lambda}^k\begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix} \qquad (2\text{-}89)$$

Note that $\boldsymbol{\Lambda} = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$ is a diagonal matrix and

$$\boldsymbol{\Lambda}^k = \begin{pmatrix} \lambda_1^k & 0 & \cdots & 0 \\ 0 & \lambda_2^k & \cdots & 0 \\ \vdots & & \ddots & \\ 0 & 0 & \cdots & \lambda_n^k \end{pmatrix}.$$
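For the newspaper chain of Example 12, Equation (2-89) makes the convergence transparent: $\mathbf{P}^T$ has eigenvalues 1 and 0.83, so the component of $\mathbf{X}_0$ along the second eigenvector decays like $0.83^k$. A sketch assuming Python with NumPy:

```python
import numpy as np

PT = np.array([[0.90, 0.07],
               [0.10, 0.93]])
B = np.array([38.0, 62.0])        # initial readerships, X0

lam, X = np.linalg.eig(PT)        # columns of X are eigenvectors
alpha = np.linalg.solve(X, B)     # express B = X alpha

def power(k):
    # Eq. (2-89): A^k B = X Lambda^k alpha
    return X @ (lam ** k * alpha)

print(power(1))      # [38.54, 61.46]
print(power(1000))   # equilibrium: [41.18, 58.82] = (7/17, 10/17) * 100
```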