
ECE 534 RANDOM PROCESSES FALL 2011

SOLUTIONS TO PROBLEM SET 5


1 Estimation of the parameter of an exponential in additive exponential noise
(a) By assumption, $Z$ has the exponential distribution with parameter $\theta$, and given $Z = z$, the conditional distribution of $Y - z$ is the exponential distribution with parameter one (for any $\theta$). So $f_{cd}(y,z|\theta) = f(z|\theta)\, f(y|z,\theta)$, where
$$ f(z|\theta) = \begin{cases} \theta e^{-\theta z} & z \ge 0 \\ 0 & \text{else} \end{cases} \qquad \text{and, for } z \ge 0: \quad f(y|z,\theta) = \begin{cases} e^{-(y-z)} & y \ge z \\ 0 & \text{else.} \end{cases} $$
(b)
$$ f(y|\theta) = \int_0^y f_{cd}(y,z|\theta)\, dz = \begin{cases} \dfrac{\theta e^{-y}\left(e^{(1-\theta)y} - 1\right)}{1-\theta} & \theta \neq 1 \\[1ex] y e^{-y} & \theta = 1. \end{cases} $$
(Apparently there is no closed-form solution for the maximizing $\theta$. I suspect that $\widehat{\theta}_{ML}(y) = +\infty$ for $0 \le y \le 1$ and that $\widehat{\theta}_{ML}(y)$ is finite and monotone decreasing over $1 < y < \infty$, with $\widehat{\theta}_{ML}(2) = 1$. BH)
(c)
$$ Q(\theta|\theta^{(k)}) = E[\,\ln f_{cd}(Y,Z|\theta)\,|\,y,\theta^{(k)}] = \ln\theta + (1-\theta)\,E[Z|y,\theta^{(k)}] - y, $$
which is a concave function of $\theta$. The maximum over $\theta$ can be identified by setting the derivative with respect to $\theta$ equal to zero, yielding:
$$ \theta^{(k+1)} = \arg\max_\theta Q(\theta|\theta^{(k)}) = \frac{1}{E[Z|y,\theta^{(k)}]} = \frac{1}{\widehat{z}(y,\theta^{(k)})}. $$
2 Maximum likelihood estimation for HMMs
(a) APPROACH ONE: Note that $p(y|z) = \prod_{t=1}^T b_{z_t,y_t}$. Thus, for fixed $y$, $p(y|z)$ is maximized with respect to $z$ by selecting $z_t$ to maximize $b_{z_t,y_t}$ for each $t$. Thus, $(\widehat{Z}_{ML}(y))_t = \arg\max_i b_{i,y_t}$ for $1 \le t \le T$.
APPROACH TWO: Let $\widetilde{\pi}_i = \frac{1}{N_s}$ and $\widetilde{a}_{i,j} = \frac{1}{N_s}$ for all states $i,j$ of the hidden Markov process $Z$. The HMM for parameter $\widetilde{\theta} = (\widetilde{\pi}, \widetilde{A}, B)$ is such that all $N_s^T$ possible values for $Z$ are equally likely, and the conditional distribution of $Y$ given $Z$ is the same as for the HMM with parameter $\theta$. Use the Viterbi algorithm with parameter $\widetilde{\theta}$ to compute $\widehat{Z}_{MAP}$, and that is equal to $\widehat{Z}_{ML}$ for the HMM with parameter $\theta$.
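APPROACH ONE is a one-line computation. The sketch below assumes an observation matrix $B$ with entry $(i,k)$ equal to $b_{i,k}$ and an observation sequence $y$ given as column indices (our own conventions, chosen for illustration):

```python
import numpy as np

def z_ml(B, y):
    """Componentwise ML state estimate: (Z_ML(y))_t = argmax_i b_{i, y_t}."""
    # B[:, y] gathers the column for each observation; argmax over states per column.
    return np.argmax(B[:, y], axis=0)

B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])
print(z_ml(B, np.array([0, 2, 1, 0])))  # -> [0 1 1 0]
```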
(b) Let $\widetilde{\pi}_i = 1$ if $\pi_i > 0$ and $\widetilde{\pi}_i = 0$ if $\pi_i = 0$, for $1 \le i \le N_s$. Similarly, let $\widetilde{a}_{i,j} = 1$ if $a_{i,j} > 0$ and $\widetilde{a}_{i,j} = 0$ if $a_{i,j} = 0$, for $1 \le i,j \le N_s$. While $\widetilde{\pi}$ and the rows of $\widetilde{A}$ are not normalized to sum to one, they can still be used in the Viterbi algorithm. Under parameter $\widetilde{\theta} = (\widetilde{\pi}, \widetilde{A}, B)$, every possible trajectory for $Z$ has weight one, every other trajectory has weight zero, and the conditional distribution of $Y$ given $Z$ is the same as for the HMM with parameter $\theta$. Use the Viterbi algorithm with parameter $\widetilde{\theta}$ to compute $\widehat{Z}_{MAP}$, and that is equal to the constrained estimator $\widehat{Z}_{ML}$ for the HMM with parameter $\theta$.
(c) Note that $P(Y=y|Z_1=i) = \beta_i(1)\, b_{i,y_1}$, where $\beta_i(1)$ can be computed for all $i$ using the backward algorithm. Therefore, $\widehat{Z}_{1,ML}(y) = \arg\max_i \beta_i(1)\, b_{i,y_1}$.
(d) Note that $P(Y=y|Z_{t_o}=i) = \frac{\gamma_i(t_o)\, P\{Y=y\}}{P\{Z_{t_o}=i\}}$, where $\gamma_i(t_o)$ can be computed by the forward-backward algorithm, and $P\{Z_{t_o}=i\} = (\pi A^{t_o-1})_i$. Then $\widehat{Z}_{t_o,ML}(y) = \arg\max_i \frac{\gamma_i(t_o)}{P\{Z_{t_o}=i\}}$.
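Parts (c) and (d) can be sketched with a minimal forward-backward pass. This is an illustrative implementation under our own naming conventions ($\pi$, $A$, $B$ as arrays, $y$ as a list of observation indices), not code from the course:

```python
import numpy as np

def forward_backward(pi, A, B, y):
    """Unnormalized forward/backward recursions and the smoothed marginals gamma."""
    T, Ns = len(y), len(pi)
    alpha = np.zeros((T, Ns)); beta = np.ones((T, Ns))
    alpha[0] = pi * B[:, y[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, y[t]]
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, y[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    return alpha, beta, gamma / gamma.sum(axis=1, keepdims=True)

def z1_ml(pi, A, B, y):
    """Part (c): argmax_i beta_i(1) * b_{i, y_1}."""
    _, beta, _ = forward_backward(pi, A, B, y)
    return np.argmax(beta[0] * B[:, y[0]])

def zto_ml(pi, A, B, y, to):
    """Part (d): argmax_i gamma_i(t_o) / P{Z_{t_o} = i}; to is 1-indexed as in the solutions."""
    _, _, gamma = forward_backward(pi, A, B, y)
    marg = pi @ np.linalg.matrix_power(A, to - 1)  # P{Z_{t_o} = i} = (pi A^{t_o - 1})_i
    return np.argmax(gamma[to - 1] / marg)
```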
3 Estimation of a critical transition time of hidden state in HMM
(a) Since $F$ is an absorbing set, for $1 \le t \le T$, the event $\{\tau_F \le t\}$ is the same as the event $\{Z_t \in F\}$. So, for $1 \le t \le T$,
$$ P(\tau_F \le t \,|\, Y) = P(Z_t \in F \,|\, Y) = \sum_{i \in F} \gamma_i(t), $$
and the $\gamma$'s can be computed by the forward-backward algorithm.
(b) This problem is more complicated than part (a) because the state process $Z$ might enter $F$ and then exit it again. One way to handle this is to replace each state in $F^c$ by two states, with one of them indicating that the state process has not yet entered $F$ and the other indicating that the process has entered $F$. To be specific, consider a new state space $\widetilde{S}$ defined by
$$ \widetilde{S} = \{(i,0) : i \in F^c\} \cup \{(i,1) : i \in S\}. $$
Note that for each state $i \in F^c$, there are two states in $\widetilde{S}$, $(i,0)$ and $(i,1)$, while for each state $i \in F$, there is only one state, $(i,1)$, in $\widetilde{S}$. Let $\widetilde{Z}$ be the new state process with state space $\widetilde{S}$ defined by
$$ \widetilde{Z}_t = \begin{cases} (Z_t, 0) & \text{if } t < \tau_F \\ (Z_t, 1) & \text{if } t \ge \tau_F \end{cases} $$
and let $\widetilde{F} = \{(i,1) : i \in S\}$. Since $\widetilde{Z}$ is a function of $Z$, and the distribution of $(Z, Y)$ is already specified, the distribution of $(\widetilde{Z}, Y)$ is well defined. In fact, $(\widetilde{Z}, Y)$ is an HMM with parameter $\widetilde{\theta} = (\widetilde{\pi}, \widetilde{A}, \widetilde{B})$, with
$$ \widetilde{\pi}_{(i,\ell)} = \begin{cases} \pi_i & \text{if } \ell = 0 \\ \pi_i & \text{if } i \in F \text{ and } \ell = 1 \\ 0 & \text{if } i \in F^c \text{ and } \ell = 1 \end{cases} \qquad\quad \widetilde{A}_{(i,\ell),(j,\ell')} = \begin{cases} a_{i,j} & \text{if } \ell = \ell' \\ a_{i,j} & \text{if } \ell = 0,\ \ell' = 1,\ j \in F \\ 0 & \text{else} \end{cases} $$
and $\widetilde{b}_{(i,\ell),k} = b_{i,k}$. Note that $\tau_F$ for $Z$ is equal to $\tau_{\widetilde{F}}$ for $\widetilde{Z}$. Since $\widetilde{F}$ is an absorbing set for $\widetilde{Z}$, the conditional distribution of $\tau_F$ given $Y$ can therefore be computed by the method of part (a).
4 On distributions of three discrete-time Markov processes
(a) A probability vector $\pi$ is an equilibrium distribution if and only if $\pi$ satisfies the balance equations: $\pi = \pi P$. This yields $\pi_1 = \pi_0$ and $\pi_2 = \pi_3 = \pi_1/2$. Thus, $\pi = \left(\frac{1}{3}, \frac{1}{3}, \frac{1}{6}, \frac{1}{6}\right)$ is the unique equilibrium distribution. However, this Markov process is periodic with period 2, so $\lim_{t\to\infty} \pi(t)$ does not necessarily exist. (The limit exists if and only if $\pi_0(0) + \pi_2(0) = 0.5$.)
(b) The balance equations yield $\pi_n = \frac{1}{n}\,\pi_{n-1}$ for all $n \ge 1$, so that $\pi_n = \frac{\pi_0}{n!}$. Thus, the Poisson distribution with mean one, $\pi_n = \frac{e^{-1}}{n!}$, is the unique equilibrium distribution. Since there is an equilibrium distribution and the process is irreducible and aperiodic, all states are positive recurrent, and $\lim_{t\to\infty} \pi(t)$ exists and is equal to the equilibrium distribution for any choice of initial distribution.
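As a quick numerical sanity check (ours, not part of the original solutions), the Poisson(1) weights satisfy the recursion $\pi_n = \pi_{n-1}/n$ and sum to one up to truncation error:

```python
import math

# pi_n = e^{-1}/n! for n = 0, ..., 29
pi = [math.exp(-1) / math.factorial(n) for n in range(30)]
assert all(abs(pi[n] - pi[n - 1] / n) < 1e-15 for n in range(1, 30))
print(sum(pi))  # ~ 1.0 (the tail beyond n = 29 is negligible)
```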
(c) The balance equations yield $\pi_n = \frac{n-1}{n}\,\pi_{n-1}$ for all $n \ge 2$ (with $\pi_1 = \pi_0$), so that $\pi_n = \frac{\pi_0}{n}$. But since $\sum_{n=1}^{\infty} \frac{1}{n} = \infty$, there is no way to normalize this to make it a probability distribution. Thus, there does not exist an equilibrium distribution. The process is therefore transient or null recurrent: $\lim_{t\to\infty} \pi_n(t) = 0$ for each state $n$. (It can be shown that the process is recurrent.)
5 On distributions of three continuous-time Markov processes
(a) Since the state space is finite (so explosion is impossible), a probability vector $\pi$ is an equilibrium distribution if and only if $\pi$ satisfies the balance equations: $\pi Q = 0$. This yields $\pi_1 = 0$ and $\pi_2 = 3\pi_3$. Thus, the probability distributions of the form $\left(0, \frac{3\alpha}{4}, \frac{\alpha}{4}, 1-\alpha\right)$, where $0 \le \alpha \le 1$, form the set of equilibrium distributions. Yes, for any initial distribution, $\lim_{t\to\infty} \pi(t)$ exists and is equal to one of the equilibrium distributions. (This is true for any finite-state, continuous-time, time-homogeneous Markov process.)
(b) The transition rate diagram is that of an M/M/1 queue with parameters $\lambda = 1$ and $\mu = 2$. Thus, $\pi_n = 2^{-(n+1)}$ gives the unique equilibrium distribution, and since this is a positive recurrent Markov process, $\lim_{t\to\infty} \pi_n(t) = 2^{-(n+1)}$ for all $n$.
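The claimed equilibrium can be verified against the M/M/1 detailed balance relation $\lambda \pi_n = \mu \pi_{n+1}$ (a check of ours, under the stated parameters):

```python
lam, mu = 1.0, 2.0
pi = lambda n: 2.0 ** -(n + 1)  # pi_n = 2^{-(n+1)}
# detailed balance: lambda * pi_n = mu * pi_{n+1} for every n
assert all(abs(lam * pi(n) - mu * pi(n + 1)) < 1e-15 for n in range(50))
print(sum(pi(n) for n in range(60)))  # ~ 1.0
```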
(c) This transition rate diagram is that of a birth-death process. It is nonexplosive by Proposition 6.3.7 (the maximum jump intensity out of any state is at most $2 + c$), so $\pi$ is an equilibrium distribution if and only if $\pi Q = 0$. As seen in Section 6.4, the equilibrium distribution exists if and only if $S_1 < \infty$, where
$$ S_1 = \sum_{n=0}^{\infty} \frac{\lambda_0 \cdots \lambda_{n-1}}{\mu_1 \cdots \mu_n} = 1 + \sum_{n=1}^{\infty} a_n \qquad \text{where } a_n = \prod_{k=1}^{n} \left(1 + \frac{c}{k}\right)^{-1}. $$
As shown in Problem 1 of Problem Set 2, $S_1 < \infty$ if and only if $c > 1$. Thus, if $c > 1$, there is a unique equilibrium distribution and it is given by $\pi_n = a_n/S_1$, and $\lim_{t\to\infty} \pi(t)$ is equal to that distribution. If $0 \le c \le 1$, then there is no equilibrium distribution; the Markov process is null-recurrent (it is recurrent because $S_2 = \infty$) and $\lim_{t\to\infty} \pi_n(t) = 0$ for all $n$.
6 Generating a random spanning tree
(a) Yes. Let $T$ and $T_f$ be any two distinct spanning trees. Suppose $|T \cap T_f| = k$, that is, suppose the two trees have $k$ edges in common. It suffices to show that there is a spanning tree $T'$ with $p_{T,T'} > 0$ such that $|T' \cap T_f| = k+1$. Beginning with tree $T$, there is positive probability ($\frac{n-1-k}{m-n+1}$, to be exact) that the edge $e$ selected from $E - T$ is an edge in $T_f$. Given that $e \in T_f$, since $T_f$ does not have any cycles, at least one edge in the cycle in $T \cup \{e\}$ is not in $T_f$, and the conditional probability that that edge is deleted is positive. Deletion of that edge would result in a tree $T'$ satisfying the given requirements, and $p_{T,T'} > 0$.
(b) Yes, because $p_{T,T} > 0$ for each state $T$; each state has period one.
(c) Consider any pair of distinct states, $T$ and $T'$. Then $p_{T,T'} > 0$ if and only if $|T \cup T'| = n$ (that is, $T'$ can be obtained from $T$ by a single edge swap), which is true if and only if $p_{T',T} > 0$. Furthermore, if $p_{T,T'} > 0$, then there is a unique cycle $C$ in $T \cup T'$, and $p_{T,T'}$ and $p_{T',T}$ are both equal to $\frac{1}{(m-n+1)|C|}$.
(d) If $P$ is a symmetric one-step transition probability matrix on a state space $S$ and $\pi_i = \frac{1}{|S|}$ for all states $i \in S$, then for any state $j$,
$$ \sum_i \pi_i p_{i,j} = \frac{1}{|S|} \sum_i p_{i,j} = \frac{1}{|S|} \sum_i p_{j,i} = \frac{1}{|S|} = \pi_j. $$
That is, $\pi = \pi P$, so $\pi$ is the equilibrium distribution. (More generally, if $P$ is a one-step transition probability matrix and $\pi$ is a probability distribution satisfying the detailed balance condition $\pi_i p_{i,j} = \pi_j p_{j,i}$ for all $i,j$, then $\pi$ is an equilibrium distribution and the process is reversible.)
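The computation in part (d) can be illustrated on any small symmetric stochastic matrix (the matrix below is our own example, not from the problem):

```python
import numpy as np

# A symmetric stochastic matrix: rows sum to 1 and P = P^T.
P = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
assert np.allclose(P, P.T) and np.allclose(P.sum(axis=1), 1.0)

pi = np.full(3, 1 / 3)             # the uniform distribution on S
print(np.allclose(pi @ P, pi))     # True: pi = pi P, so uniform is the equilibrium
```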