
ECE531 Homework Assignment Number 4

Solution

Make sure your reasoning and work are clear to receive full credit for each problem.
1. 4 points. Consider our standard coin flipping problem where you have an unknown coin,
either fair (HT) or double headed (HH), and you observe the outcome of n flips of this coin.
Assume a uniform cost assignment. For notational consistency, let the state and hypothesis
x0 and H0 be the case when the coin is HT and x1 and H1 be the case when the coin is HH.
When n = 2, find the Neyman-Pearson decision rule and corresponding power for a false
alarm probability 0 < α < 1. Repeat this for n = 3 and comment on any changes.
Solution: When n = 2, we can form the conditional probability matrix as

$$P = \begin{bmatrix} 0.25 & 0 \\ 0.5 & 0 \\ 0.25 & 1 \end{bmatrix}$$

and the likelihood ratio vector as

$$L = \begin{bmatrix} 0 \\ 0 \\ 4 \end{bmatrix}.$$

Case 1: $0 < \alpha < 0.25 \Rightarrow v = 4$; accordingly we have

$$\tilde{\delta}_{NP}(y) = \begin{cases} \gamma & \text{if } L(y) = 4 \\ 0 & \text{if } L(y) < 4 \end{cases}$$

where

$$\gamma = \frac{\alpha - \sum_{j : L_j > v} P_{j,0}}{\sum_{j : L_j = v} P_{j,0}} = \frac{\alpha - 0}{0.25} = 4\alpha.$$

The power of the test is then $\beta = P_D = \gamma = 4\alpha$.


Case 2: $\alpha = 0.25 \Rightarrow v = 0$, and

$$\tilde{\delta}_{NP}(y) = \begin{cases} 1 & \text{if } L(y) > 0 \\ 0 & \text{if } L(y) \le 0 \end{cases}.$$

The power of the test is then $\beta = P_D = 1$.


Case 3: $0.25 < \alpha < 1 \Rightarrow v = 0$. Notice that, in this case, if the state is $x_1$, it is impossible for $y = 0$ or $y = 1$ heads to be observed. Thus increasing $\alpha$ cannot increase the probability of detection. Hence the decision rule in Case 2 also applies here in Case 3.

When n = 3, the conditional probability matrix is

$$P = \begin{bmatrix} 0.125 & 0 \\ 0.375 & 0 \\ 0.375 & 0 \\ 0.125 & 1 \end{bmatrix}$$

and the likelihood ratio vector is

$$L = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 8 \end{bmatrix}.$$

Case 1: $0 < \alpha < 0.125 \Rightarrow v = 8$; hence

$$\tilde{\delta}_{NP}(y) = \begin{cases} \gamma & \text{if } L(y) = 8 \\ 0 & \text{if } L(y) < 8 \end{cases}$$

where

$$\gamma = \frac{\alpha - \sum_{j : L_j > v} P_{j,0}}{\sum_{j : L_j = v} P_{j,0}} = \frac{\alpha - 0}{0.125} = 8\alpha,$$

and the power of the test is $\beta = P_D = 8\alpha$. Note that the additional observation effectively
doubles the power of the test with respect to the n = 2 case when $0 < \alpha < 0.125$.
Case 2: $\alpha = 0.125 \Rightarrow v = 0$, and

$$\tilde{\delta}_{NP}(y) = \begin{cases} 1 & \text{if } L(y) > 0 \\ 0 & \text{if } L(y) \le 0 \end{cases},$$

and the power of the test is $\beta = P_D = 1$.


Case 3: $0.125 < \alpha < 1 \Rightarrow v = 0$. This case is the same as the n = 2 case with
$0.25 < \alpha < 1$ and $v = 0$.
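
As a quick numerical check, here is a minimal Python sketch (our own illustration, not part of the printed solution) that builds the randomized NP rule for any finite observation space from its conditional probability matrix and reports the power; running it on the two matrices above reproduces $P_D = 4\alpha$ and $P_D = 8\alpha$ in the small-$\alpha$ regime.

```python
import numpy as np

def np_rule(P, alpha):
    """Randomized Neyman-Pearson rule for a finite observation space.
    P[:, 0] = p(y | x0), P[:, 1] = p(y | x1); assumes p(y | x0) > 0 for
    every y.  Returns (v, gamma, P_D) at false-alarm level alpha."""
    L = P[:, 1] / P[:, 0]                          # likelihood ratios
    for v in sorted(set(L), reverse=True):         # sweep candidate thresholds
        p_above = P[L > v, 0].sum()                # false-alarm mass strictly above v
        p_at = P[L == v, 0].sum()                  # false-alarm mass on the boundary
        if p_above + p_at >= alpha:
            gamma = (alpha - p_above) / p_at if p_at > 0 else 0.0
            power = P[L > v, 1].sum() + gamma * P[L == v, 1].sum()
            return v, gamma, min(power, 1.0)

P2 = np.array([[0.25, 0.0], [0.50, 0.0], [0.25, 1.0]])                   # n = 2
P3 = np.array([[0.125, 0.0], [0.375, 0.0], [0.375, 0.0], [0.125, 1.0]])  # n = 3
for alpha in (0.05, 0.125, 0.25, 0.5):
    print(alpha, np_rule(P2, alpha), np_rule(P3, alpha))
```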
2. 4 points. Poor textbook Chapter II, Problem 2 (c).
Solution: From the N-P lemma, the optimum decision rule must be of the form

$$\tilde{\delta}_{NP}(y) = \begin{cases} 1 & \text{if } \frac{3}{2(y+1)} > v \\ \gamma & \text{if } \frac{3}{2(y+1)} = v \\ 0 & \text{if } \frac{3}{2(y+1)} < v \end{cases}.$$
Since L(y) is monotone decreasing in y, the above test is equivalent to

$$\tilde{\delta}_{NP}(y) = \begin{cases} 1 & \text{if } y < \tau \\ \gamma & \text{if } y = \tau \\ 0 & \text{if } y > \tau \end{cases}$$

where $\tau = \frac{3}{2v} - 1$. Thus, the false-positive probability is


$$P_{fp}(\tilde{\delta}_{NP}) = P_0(Y < \tau) = \int_0^{\tau} \frac{2}{3}(y+1)\,dy = \begin{cases} 0 & \text{if } \tau \le 0 \\ \frac{\tau^2 + 2\tau}{3} & \text{if } 0 < \tau < 1 \\ 1 & \text{if } \tau \ge 1 \end{cases}.$$

The threshold for $P_{fp}(\tilde{\delta}_{NP}) = \alpha$ is the solution to

$$\frac{\tau^2 + 2\tau}{3} = \alpha,$$

which is $\tau = \sqrt{1 + 3\alpha} - 1$. Hence, the $\alpha$-level Neyman-Pearson test is

$$\tilde{\delta}_{NP}(y) = \begin{cases} 1 & \text{if } y \le \sqrt{1 + 3\alpha} - 1 \\ 0 & \text{if } y > \sqrt{1 + 3\alpha} - 1 \end{cases}$$

where randomization is not necessary because of the continuous observation space.


The power of the test is

$$\beta = P_D(\tilde{\delta}_{NP}) = \int_0^{\sqrt{1+3\alpha}-1} dy = \sqrt{1 + 3\alpha} - 1, \qquad 0 < \alpha < 1.$$
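
As a sanity check, here is a short Python sketch (ours, not from the text) that verifies the threshold and power formulas by direct numerical integration, assuming the densities used above: $p_0(y) = \frac{2}{3}(y+1)$ and $p_1(y) = 1$ on $[0, 1]$.

```python
import numpy as np
from scipy.integrate import quad

p0 = lambda y: 2.0 * (y + 1.0) / 3.0   # density under H0 on [0, 1]
p1 = lambda y: 1.0                     # density under H1 on [0, 1]

for alpha in (0.1, 0.5, 0.9):
    tau = np.sqrt(1.0 + 3.0 * alpha) - 1.0   # closed-form threshold
    pfa, _ = quad(p0, 0.0, tau)              # should equal alpha
    pd, _ = quad(p1, 0.0, tau)               # should equal tau
    print(f"alpha={alpha}: P_fa={pfa:.6f}, P_D={pd:.6f}")
```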

3. 4 points. Poor textbook Chapter II, Problem 6 (c).


Solution: Here we have $p_0(y) = p_N(y + s)$ and $p_1(y) = p_N(y - s)$, which gives

$$L(y) = \frac{1 + (y+s)^2}{1 + (y-s)^2}.$$

Figure 1 shows L(y) versus y. Note that there is no monotonic relationship.

[Figure 1: L(y) versus y.]

The critical region where we decide $H_1$ is

$$\Gamma_1 = \{ y \mid L(y) > v \}.$$

It should be clear from Figure 1 that as we decrease v from $\infty$ to 1, the region $\Gamma_1$ goes from
$\emptyset$ to a single interval, eventually filling $(0, \infty)$. As we decrease v from 1 to 0, $\Gamma_1$ becomes
two disjoint intervals $(-\infty, \tau_1]$ and $[\tau_2, \infty)$, where $\tau_1 < \tau_2$ are the solutions to $L(y) = v$ for
$0 < v < 1$. We can formalize this with a precise description of the following three cases:
Case 1: $v > 1$,

$$\Gamma_1 = \left\{ y \,\middle|\, \frac{s(v+1) - \sqrt{4s^2 v - (v-1)^2}}{v-1} < y < \frac{s(v+1) + \sqrt{4s^2 v - (v-1)^2}}{v-1} \right\}$$

Case 2: $v = 1$,

$$\Gamma_1 = \{ y \mid 0 < y < \infty \}$$

Case 3: $0 < v < 1$,

$$\Gamma_1 = \left\{ y \,\middle|\, y < \frac{s(v+1) + \sqrt{4s^2 v - (v-1)^2}}{v-1} \ \text{ or } \ y > \frac{s(v+1) - \sqrt{4s^2 v - (v-1)^2}}{v-1} \right\}$$
Given $\alpha$ and s, it is easy to determine which case you are in by computing

$$p = \int_0^{\infty} p_0(y)\,dy = \frac{1}{2} - \frac{\arctan(s)}{\pi}.$$

If $p > \alpha$, then $v > 1$. If $p = \alpha$, then $v = 1$. Otherwise, $0 < v < 1$. If you are in Case 1 or
Case 3, you will need to numerically determine v by integrating $p_0(y)$ over the appropriate
critical region $\Gamma_1$ such that

$$\int_{\Gamma_1} p_0(y)\,dy = \alpha.$$

With v, the detection probability can be found numerically by

$$\beta = \int_{\Gamma_1} p_1(y)\,dy.$$
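
The numerical step can be sketched in Python (our own illustration; the `scipy` routines named are real, but the structure is just one way to organize the computation). It brackets v within the appropriate case and bisects so that the $H_0$ probability of $\Gamma_1$ equals $\alpha$, then evaluates $P_D$ over the same region. The bracket endpoints use the fact (a short side derivation of ours) that $L(y)$ attains its maximum $(\sqrt{1+s^2}+s)^2$ and minimum $(\sqrt{1+s^2}-s)^2$ at $y = \pm\sqrt{1+s^2}$.

```python
import numpy as np
from scipy.optimize import brentq

s, alpha = 1.0, 0.1                                  # example values

def cauchy_cdf(y, c):
    """CDF of a standard Cauchy density centered at c."""
    return 0.5 + np.arctan(y - c) / np.pi

def endpoints(v):
    """Roots of L(y) = v, the boundary points of Gamma_1."""
    d = np.sqrt(4.0 * s * s * v - (v - 1.0) ** 2)
    r1 = (s * (v + 1.0) - d) / (v - 1.0)
    r2 = (s * (v + 1.0) + d) / (v - 1.0)
    return min(r1, r2), max(r1, r2)

def mass(c, v):
    """Probability of Gamma_1 = {y : L(y) > v} under the Cauchy centered at c."""
    if v == 1.0:
        return 1.0 - cauchy_cdf(0.0, c)              # Case 2: Gamma_1 = (0, inf)
    a, b = endpoints(v)
    if v > 1.0:                                      # Case 1: single interval (a, b)
        return cauchy_cdf(b, c) - cauchy_cdf(a, c)
    return cauchy_cdf(a, c) + 1.0 - cauchy_cdf(b, c) # Case 3: two tails

p_split = mass(-s, 1.0)                              # = 1/2 - arctan(s)/pi, decides the case
vmax = (np.sqrt(1.0 + s * s) + s) ** 2               # maximum of L(y)
vmin = (np.sqrt(1.0 + s * s) - s) ** 2               # minimum of L(y)
if p_split > alpha:                                  # Case 1: v > 1
    v = brentq(lambda v: mass(-s, v) - alpha, 1.0 + 1e-9, vmax - 1e-9)
elif p_split < alpha:                                # Case 3: 0 < v < 1
    v = brentq(lambda v: mass(-s, v) - alpha, vmin + 1e-9, 1.0 - 1e-9)
else:
    v = 1.0                                          # Case 2: v = 1
print("v =", v, " P_D =", mass(s, v))
```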

4. 4 points. Poor textbook Chapter II, Problem 19.


Solution:
(a) The likelihood ratio is given by

$$L(y) = \frac{\prod_{k=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma_1} e^{-(y_k - \mu_1)^2 / 2\sigma_1^2}}{\prod_{k=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma_0} e^{-(y_k - \mu_0)^2 / 2\sigma_0^2}} = \left(\frac{\sigma_0}{\sigma_1}\right)^{\!n} \exp\left\{ n \left( \frac{\mu_0^2}{2\sigma_0^2} - \frac{\mu_1^2}{2\sigma_1^2} \right) \right\} \exp\left\{ \left( \frac{1}{2\sigma_0^2} - \frac{1}{2\sigma_1^2} \right) \sum_{k=1}^{n} y_k^2 \right\} \exp\left\{ \left( \frac{\mu_1}{\sigma_1^2} - \frac{\mu_0}{\sigma_0^2} \right) \sum_{k=1}^{n} y_k \right\}$$

which has the desired form.
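
A quick numerical check of this factorization (our sketch; the parameter values are arbitrary examples):

```python
import numpy as np

mu0, mu1, s0, s1, n = 0.0, 1.0, 1.0, 2.0, 5
y = np.random.default_rng(0).normal(size=n)

def gauss(y, mu, s):
    # N(mu, s^2) density, evaluated elementwise
    return np.exp(-(y - mu) ** 2 / (2 * s * s)) / (np.sqrt(2 * np.pi) * s)

direct = np.prod(gauss(y, mu1, s1)) / np.prod(gauss(y, mu0, s0))
factored = ((s0 / s1) ** n
            * np.exp(n * (mu0**2 / (2 * s0**2) - mu1**2 / (2 * s1**2)))
            * np.exp((1 / (2 * s0**2) - 1 / (2 * s1**2)) * np.sum(y**2))
            * np.exp((mu1 / s1**2 - mu0 / s0**2) * np.sum(y)))
print(direct, factored)   # the two values agree to machine precision
```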

(b) If $\mu_1 = \mu_0 = \mu$ and $\sigma_1^2 > \sigma_0^2$, then we can simplify the comparison $L(y) > v$. Skipping
the algebraic details, the Neyman-Pearson test in this case can be written as

$$\tilde{\delta}_{NP}(y) = \begin{cases} 1 & \text{if } \sum_{k=1}^{n} (y_k - \mu)^2 > \tau \\ 0/1 & \text{if } \sum_{k=1}^{n} (y_k - \mu)^2 = \tau \\ 0 & \text{if } \sum_{k=1}^{n} (y_k - \mu)^2 < \tau \end{cases}$$

where $\tau$ is the appropriate threshold selected to satisfy the false positive probability
constraint.
Alternatively, if $\mu_1 > \mu_0$ and $\sigma_1^2 = \sigma_0^2$, then the NP test can be written in the form

$$\tilde{\delta}_{NP}(y) = \begin{cases} 1 & \text{if } \sum_{k=1}^{n} y_k > \tau \\ 0/1 & \text{if } \sum_{k=1}^{n} y_k = \tau \\ 0 & \text{if } \sum_{k=1}^{n} y_k < \tau \end{cases}$$

where $\tau$ is the appropriate threshold selected to satisfy the false positive probability
constraint. Note that, in the first case, the test statistic is quadratic in the observations,
and in the second case it is linear.
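
As an aside (our sketch, not part of the printed solution): in the linear case the statistic $\sum_k Y_k$ is $\mathcal{N}(n\mu_i, n\sigma^2)$ under hypothesis $i$, so the threshold and power have closed forms:

```python
import numpy as np
from scipy.stats import norm

n, mu0, mu1, sigma, alpha = 10, 0.0, 1.0, 2.0, 0.05   # example values
# Under H0, sum(Y) ~ N(n*mu0, n*sigma^2); choose tau so that P_fa = alpha.
tau = n * mu0 + np.sqrt(n) * sigma * norm.isf(alpha)
# Power: P1(sum(Y) > tau), with sum(Y) ~ N(n*mu1, n*sigma^2) under H1.
power = norm.sf((tau - n * mu1) / (np.sqrt(n) * sigma))
print(f"tau = {tau:.4f}, power = {power:.4f}")
```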
(c) For $n = 1$, $\mu_1 = \mu_0 = \mu$, and $\sigma_1^2 > \sigma_0^2$, the NP test is of the form

$$\tilde{\delta}_{NP}(y) = \begin{cases} 1 & \text{if } (y_1 - \mu)^2 > \tau \\ 0/1 & \text{if } (y_1 - \mu)^2 = \tau \\ 0 & \text{if } (y_1 - \mu)^2 < \tau \end{cases}$$

where $\tau > 0$ is an appropriate threshold. We have

$$P_{fp}(\tilde{\delta}_{NP}) = \text{Prob}\left[ (Y_1 - \mu)^2 > \tau \mid x_0 \right] = 2 Q\!\left( \frac{\sqrt{\tau}}{\sigma_0} \right)$$

where $Q(x)$ is the usual Q-function. Thus, for a test with significance level $\alpha$ we have to
solve $2Q(\sqrt{\tau}/\sigma_0) = \alpha$, which can be done in Matlab numerically or, even better, with the
erfcinv function. The detection probability is

$$P_D(\tilde{\delta}_{NP}) = \text{Prob}\left[ (Y_1 - \mu)^2 > \tau \mid x_1 \right] = 2 Q\!\left( \frac{\sqrt{\tau}}{\sigma_1} \right)$$

where $\tau$ is the solution to $2Q(\sqrt{\tau}/\sigma_0) = \alpha$.
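
In Python, the same computation can be done with `scipy.special.erfcinv` (a real function; the parameter values below are just examples). Since $Q(x) = \frac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$, solving $2Q(\sqrt{\tau}/\sigma_0) = \alpha$ gives $\sqrt{\tau} = \sigma_0 \sqrt{2}\,\mathrm{erfcinv}(\alpha)$:

```python
import numpy as np
from scipy.special import erfc, erfcinv

sigma0, sigma1, alpha = 1.0, 2.0, 0.1               # example values, sigma1 > sigma0
sqrt_tau = sigma0 * np.sqrt(2.0) * erfcinv(alpha)   # solves 2Q(sqrt(tau)/sigma0) = alpha
P_D = erfc(sqrt_tau / (sigma1 * np.sqrt(2.0)))      # equals 2Q(sqrt(tau)/sigma1)
print(f"tau = {sqrt_tau**2:.4f}, P_D = {P_D:.4f}")
```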

5. 4 points. Poor textbook Chapter III, Problem 3. Also, try part (a) for the case when the
noise is distributed as $\mathcal{N}(0, \Sigma)$ where $\Sigma \in \mathbb{R}^{n \times n}$ is the covariance matrix of the noise.
Solution:

(a) Since this is an M-ary hypothesis testing problem with equiprobable priors, and we wish
to minimize error probability, we will have the critical regions:

$$\Gamma_k = \left\{ y \in \mathbb{R}^n \,\middle|\, p_k(y) = \max_{m \in \{0,\dots,M-1\}} p_m(y) \right\}$$

Since $p_m(y)$ is the density of $\mathcal{N}(s_m, \sigma^2 I)$, this critical region can be reduced to

$$\Gamma_k = \left\{ y \in \mathbb{R}^n \,\middle|\, \| y - s_k \|^2 = \min_{m \in \{0,\dots,M-1\}} \| y - s_m \|^2 \right\} = \left\{ y \in \mathbb{R}^n \,\middle|\, s_k^T y = \max_{m \in \{0,\dots,M-1\}} s_m^T y \right\}$$

where the second equality uses the fact that the signal energies $\|s_m\|^2$ are equal by assumption, so the energy term common to all m can be dropped.

Intuitively, the detector here is just computing the deterministic correlation between
each signal vector $s_m$ and the observation vector y and selecting the one that is largest.
The minimum error probability decision rule is simply

$$\delta(y) = \arg\max_{m \in \{0,\dots,M-1\}} s_m^T y.$$

If we have noise that is distributed as $\mathcal{N}(0, \Sigma)$, we can use the decorrelation trick discussed in Lecture 4. We factor $\Sigma = S S^T$ (e.g., by Cholesky factorization) and do a coordinate transformation on the
observation and the signal vectors so that the noise in the transformed coordinate space
is white. Then all of the above analysis applies directly.
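
A compact sketch of this detector in Python (our illustration; Cholesky-based whitening is one standard way to realize the coordinate transformation). It decides by nearest neighbor in the whitened space, which reduces to the max-correlation rule above when the signal energies are equal:

```python
import numpy as np

def detect(y, signals, Sigma=None):
    """Minimum-error detector for M equiprobable Gaussian hypotheses.
    If Sigma is given, whiten the observation and signals first."""
    if Sigma is not None:
        C = np.linalg.cholesky(Sigma)                       # Sigma = C C^T
        y = np.linalg.solve(C, y)                           # whitened observation
        signals = [np.linalg.solve(C, s) for s in signals]  # whitened signals
    # Nearest signal vector minimizes ||y - s_m||.
    return int(np.argmin([np.linalg.norm(y - s) for s in signals]))

# Example: three orthonormal signals in white noise
rng = np.random.default_rng(1)
S = [np.eye(3)[m] for m in range(3)]
y = S[2] + 0.1 * rng.normal(size=3)
print(detect(y, S))                      # expect 2

# The same observation under an assumed colored-noise covariance
Sigma = np.array([[1.0, 0.5, 0.0],
                  [0.5, 1.0, 0.5],
                  [0.0, 0.5, 1.0]])
print(detect(y, S, Sigma))
```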
(b) Under the assumption of M equiprobable signal vectors, the error probability can be
written as

$$P_e = \frac{1}{M} \sum_{k=0}^{M-1} \text{Prob}\left[ Y \in \Gamma_k^c \mid x_k \right]$$

where $\Gamma_k^c = \mathbb{R}^n \setminus \Gamma_k$ and $x_k$ means that signal vector $s_k$ was sent. We can write


$$\text{Prob}\left[ Y \in \Gamma_k^c \mid x_k \right] = 1 - \text{Prob}\left[ Y \in \Gamma_k \mid x_k \right] = 1 - \text{Prob}\left[ \arg\max_{m \in \{0,\dots,M-1\}} s_m^T Y = k \,\middle|\, x_k \right].$$

Under the assumed orthogonality of the signal vectors $s_1, \dots, s_M$, it is not difficult to
show that, conditioned on $x_k$, the deterministic correlations $s_1^T Y, s_2^T Y, \dots, s_M^T Y$ are
independent Gaussian random variables with variances $\sigma^2 \|s_1\|^2$, and with means $\mu_m = 0$
for $m \ne k$ and mean $\mu_m = \|s_1\|^2$ for $m = k$. Thus

$$\text{Prob}\left[ \arg\max_{m \in \{0,\dots,M-1\}} s_m^T Y = k \,\middle|\, x_k \right] = \int_{-\infty}^{\infty} \text{Prob}\left[ \max_{m \ne k} s_m^T Y < z \,\middle|\, x_k \right] \frac{1}{\sqrt{2\pi}\,\sigma \|s_1\|} e^{-(z - \|s_1\|^2)^2 / 2\sigma^2 \|s_1\|^2} \, dz.$$
Now

$$\text{Prob}\left[ \max_{m \ne k} s_m^T Y < z \,\middle|\, x_k \right] = \prod_{m \ne k} \text{Prob}\left[ s_m^T Y < z \mid x_k \right] = \left[ \Phi\!\left( \frac{z}{\sigma \|s_1\|} \right) \right]^{M-1}$$

where the first equality is a consequence of the independence of the individual deterministic correlations and $\Phi(x)$ is the usual CDF of a zero-mean, unit-variance Gaussian
random variable.
Combining the above and setting $x = z / (\sigma \|s_1\|)$ yields

$$1 - \text{Prob}\left[ Y \in \Gamma_k \mid x_k \right] = 1 - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} [\Phi(x)]^{M-1} e^{-(x-d)^2/2} \, dx$$

where $d = \|s_1\|/\sigma$, for $k = 0, \dots, M-1$. Since these are all the same, the desired expression for $P_e$ immediately follows.
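
The final integral is easy to evaluate numerically; here is a short sketch (ours, with example values for M and $d = \|s_1\|/\sigma$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def prob_error(M, d):
    """P_e = 1 - integral of Phi(x)^(M-1) * phi(x - d) over the real line,
    for M orthogonal, equal-energy, equiprobable signals and d = ||s_1||/sigma."""
    integrand = lambda x: norm.cdf(x) ** (M - 1) * norm.pdf(x - d)
    p_correct, _ = quad(integrand, -np.inf, np.inf)
    return 1.0 - p_correct

for d in (0.0, 1.0, 2.0, 3.0):
    # At d = 0 this returns (M-1)/M (pure guessing); it decays toward 0 as d grows.
    print(d, prob_error(4, d))
```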
