\delta_{\tau_L} = \lim_{\tau \uparrow \tau_L} \delta_{\tau} \quad \text{and} \quad \delta_{\tau_L}^{+} = \lim_{\tau \downarrow \tau_L} \delta_{\tau} .
The critical regions for these two decision rules are,
\Gamma_1 = \{\, y \in \Gamma : (1-\tau_L)(C_{01}-C_{11})\, p(y|H_1) \ge \tau_L (C_{10}-C_{00})\, p(y|H_0) \,\}

and

\Gamma_1^{+} = \{\, y \in \Gamma : (1-\tau_L)(C_{01}-C_{11})\, p(y|H_1) > \tau_L (C_{10}-C_{00})\, p(y|H_0) \,\}.
Take a number q \in [0,1] and devise a decision rule \tilde\delta_{\tau_L} that uses the decision rule \delta_{\tau_L} with probability q and uses \delta_{\tau_L}^{+} with probability 1-q. This means that it decides H_1 if y \in \Gamma_1^{+}, decides H_0 if y \in (\Gamma_1)^c, and decides H_1 with probability q if y is on the boundary of \Gamma_1 (the set \Gamma_1 \setminus \Gamma_1^{+}, where the two rules disagree).
ELEC6111: Detection and Estimation Theory
Minimax Hypothesis Testing
Discussion
Note that the Bayes risk is not a function of q, so r(\tau_L, \tilde\delta_{\tau_L}) = V(\tau_L), but the conditional risks depend on q:

R_j(\tilde\delta_{\tau_L}) = q\, R_j(\delta_{\tau_L}) + (1-q)\, R_j(\delta_{\tau_L}^{+}), \qquad j = 0, 1.
To achieve R_0(\tilde\delta_{\tau_L}) = R_1(\tilde\delta_{\tau_L}), we need to choose,

q = \frac{R_1(\delta_{\tau_L}^{+}) - R_0(\delta_{\tau_L}^{+})}{R_1(\delta_{\tau_L}^{+}) - R_0(\delta_{\tau_L}^{+}) + R_0(\delta_{\tau_L}) - R_1(\delta_{\tau_L})} .
Note that V'(\tau_L^{-}) = R_0(\delta_{\tau_L}) - R_1(\delta_{\tau_L}) and V'(\tau_L^{+}) = R_0(\delta_{\tau_L}^{+}) - R_1(\delta_{\tau_L}^{+}), so we have:

q = \frac{V'(\tau_L^{+})}{V'(\tau_L^{+}) - V'(\tau_L^{-})} .
This is called a randomized decision rule.
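To see numerically how this choice of q equalizes the two conditional risks, here is a minimal Python sketch; the four conditional-risk values are hypothetical numbers chosen only to illustrate the formula, not values taken from these notes.

```python
# Illustrative sketch of the randomization weight q that equalizes the
# conditional risks. The risk values below are hypothetical assumptions.

def randomization_weight(R0, R1, R0p, R1p):
    """q = (R1(d+) - R0(d+)) / (R1(d+) - R0(d+) + R0(d) - R1(d))."""
    num = R1p - R0p
    return num / (num + (R0 - R1))

R0, R1 = 0.40, 0.10      # conditional risks of delta_tauL (hypothetical)
R0p, R1p = 0.05, 0.35    # conditional risks of delta_tauL^+ (hypothetical)

q = randomization_weight(R0, R1, R0p, R1p)
R0_mix = q * R0 + (1 - q) * R0p   # risk of the randomized rule under H0
R1_mix = q * R1 + (1 - q) * R1p   # risk of the randomized rule under H1
print(q, R0_mix, R1_mix)          # the two mixed risks coincide
```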
Example: Measurement with Gaussian Error
Consider the measurement with Gaussian error with uniform costs. The function V(\tau_0) can be written as,

V(\tau_0) = \tau_0\, Q\!\left(\frac{\tau' - \mu_0}{\sigma}\right) + (1-\tau_0)\left[1 - Q\!\left(\frac{\tau' - \mu_1}{\sigma}\right)\right],
with

\tau' = \frac{\mu_0 + \mu_1}{2} + \frac{\sigma^2}{\mu_1 - \mu_0}\, \log\!\left(\frac{\tau_0}{1 - \tau_0}\right).
We can find the rule making the conditional risks R_0(\delta) and R_1(\delta) equal by letting

Q\!\left(\frac{\tau' - \mu_0}{\sigma}\right) = 1 - Q\!\left(\frac{\tau' - \mu_1}{\sigma}\right)

and solving for \tau'.
We can solve this by inspection and get:

\tau'_L = \frac{\mu_0 + \mu_1}{2} .

So, the minimax decision rule is:

\delta_{\tau_L}(y) = \begin{cases} 1 & \text{if } y > (\mu_0 + \mu_1)/2 \\ 0 & \text{if } y < (\mu_0 + \mu_1)/2 . \end{cases}
[Figure: conditional risks for measurement with Gaussian error]
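The equalizer property of this rule can be checked directly; a minimal sketch, assuming example values mu0 = 0, mu1 = 2, sigma = 1 (illustrative assumptions, not values from the notes):

```python
from statistics import NormalDist

# Minimax rule for the Gaussian example with uniform costs: the threshold
# (mu0 + mu1)/2 equalizes the two conditional risks. Parameter values are
# illustrative assumptions.
mu0, mu1, sigma = 0.0, 2.0, 1.0
tau_L = (mu0 + mu1) / 2                    # minimax threshold tau'_L

Q = lambda x: 1.0 - NormalDist().cdf(x)    # Gaussian tail function

R0 = Q((tau_L - mu0) / sigma)              # P(decide H1 | H0), false alarm
R1 = 1.0 - Q((tau_L - mu1) / sigma)        # P(decide H0 | H1), miss
print(R0, R1)                              # both equal Q(d/2), d = (mu1 - mu0)/sigma
```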
Neyman-Pearson Hypothesis Testing
In Bayes hypothesis testing, as well as in minimax, we are concerned with the average risk, i.e., the conditional risk averaged over the two hypotheses. The Neyman-Pearson test, on the other hand, recognizes the asymmetry between the two hypotheses: it tries to minimize one of the two conditional risks while the other conditional risk is fixed (or bounded).
In testing the two hypotheses H_0 and H_1, the following situations may arise:
- H_0 is true but H_1 is decided. This is called a type I error or a false alarm. The term comes from the radar application, where H_0 represents no target and H_1 is the case of a target being present. The probability of this event is called the false alarm probability or false alarm rate and is denoted P_F(\delta).
- H_1 is true but H_0 is decided. This is called a type II error or a miss. The probability of this event is called the miss probability and is denoted P_M(\delta).
- H_0 is true and H_0 is decided. The probability of this event is 1 - P_F(\delta).
- H_1 is true and H_1 is decided. This case represents a detection. The detection probability is P_D(\delta) = 1 - P_M(\delta).
In testing H_0 versus H_1, one has to trade off between the probabilities of the two types of errors. The Neyman-Pearson criterion makes this tradeoff by bounding the probability of false alarm and minimizing the miss probability subject to this constraint, i.e., the Neyman-Pearson test is

\max_{\delta} P_D(\delta) \quad \text{subject to} \quad P_F(\delta) \le \alpha,

where \alpha is the bound on the false alarm rate. It is called the level of the test.
For obtaining a general solution to the Neyman-Pearson test, we need to define a randomized decision rule. We define the randomized test,

\tilde\delta(y) = \begin{cases} 1 & \text{if } L(y) > \eta \\ q & \text{if } L(y) = \eta \\ 0 & \text{if } L(y) < \eta, \end{cases}

where L(y) = p(y|H_1)/p(y|H_0) is the likelihood ratio and \eta is the threshold.
While in a non-randomized rule, \delta(y) gives the decision, in a randomized rule, \tilde\delta(y) gives the probability of deciding H_1.
Then we have,

P_F(\tilde\delta) = E_0\{\tilde\delta(Y)\} = \int_\Gamma \tilde\delta(y)\, p(y|H_0)\, dy,

where E_0\{\cdot\} is the expectation under hypothesis H_0. Also,

P_D(\tilde\delta) = E_1\{\tilde\delta(Y)\} = \int_\Gamma \tilde\delta(y)\, p(y|H_1)\, dy.
Neyman-Pearson Lemma
Consider a hypothesis pair H_0 and H_1:

H_0 : Y \sim P_0
versus
H_1 : Y \sim P_1,

where P_j has density p_j(y) = p(y|H_j) for j = 0, 1. For \alpha > 0, the following statements are true:
1. Optimality: Let \tilde\delta be any decision rule satisfying P_F(\tilde\delta) \le \alpha. Let \tilde\delta' be any decision rule of the form

\tilde\delta'(y) = \begin{cases} 1 & \text{if } p(y|H_1) > \eta\, p(y|H_0) \\ \gamma(y) & \text{if } p(y|H_1) = \eta\, p(y|H_0) \\ 0 & \text{if } p(y|H_1) < \eta\, p(y|H_0), \end{cases} \qquad \text{(A)}

where \eta \ge 0 and 0 \le \gamma(y) \le 1 are such that P_F(\tilde\delta') = \alpha. Then P_D(\tilde\delta') \ge P_D(\tilde\delta).
This means that any size-\alpha decision rule of form (A) is a Neyman-Pearson rule.
2. Existence: For any \alpha \in (0, 1) there is a decision rule, \tilde\delta_{NP}, of form (A), with \gamma(y) = \gamma_0, for which P_F(\tilde\delta_{NP}) = \alpha.
3. Uniqueness: Suppose that \delta'' is any Neyman-Pearson rule of size \alpha for H_0 versus H_1. Then \delta'' must be of the form (A).
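The optimality statement can be sanity-checked on a toy discrete observation space; the pmfs p0, p1 and the level alpha below are illustrative assumptions, not from the notes:

```python
# Toy check of Neyman-Pearson optimality on a three-point alphabet.
# The pmfs and the level alpha are illustrative assumptions.
p0 = {0: 0.6, 1: 0.3, 2: 0.1}   # p(y|H0)
p1 = {0: 0.1, 1: 0.3, 2: 0.6}   # p(y|H1)
alpha = 0.2

# Rule of form (A): threshold the likelihood ratio at eta = 1 and randomize
# with gamma on the boundary point so that P_F hits alpha exactly.
eta = 1.0
gamma = (alpha - 0.1) / 0.3      # boundary is y = 1 (L(1) = 1), p0-mass 0.3

def delta_np(y):
    L = p1[y] / p0[y]
    return 1.0 if L > eta else (gamma if L == eta else 0.0)

PF = sum(delta_np(y) * p0[y] for y in p0)
PD = sum(delta_np(y) * p1[y] for y in p1)

# A competing randomized rule of the same size that ignores the likelihood ratio:
def delta_other(y):
    return {0: 0.25, 1: 0.0, 2: 0.5}[y]

PF_other = sum(delta_other(y) * p0[y] for y in p0)   # also equals alpha
PD_other = sum(delta_other(y) * p1[y] for y in p1)   # strictly smaller than PD
```

The form-(A) rule spends its false-alarm budget on the largest likelihood ratios first, which is exactly why it dominates the competitor.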
Neyman-Pearson Lemma (Proof)
1. Note that, by definition, we always have

[\tilde\delta'(y) - \tilde\delta(y)]\,[p(y|H_1) - \eta\, p(y|H_0)] \ge 0 \quad \text{(why?)}

So, we have,

\int_\Gamma [\tilde\delta'(y) - \tilde\delta(y)]\,[p(y|H_1) - \eta\, p(y|H_0)]\, dy \ge 0 .

Expanding the above expression, we get,

\int_\Gamma \tilde\delta'(y)\, p(y|H_1)\, dy - \int_\Gamma \tilde\delta(y)\, p(y|H_1)\, dy \ge \eta \left[ \int_\Gamma \tilde\delta'(y)\, p(y|H_0)\, dy - \int_\Gamma \tilde\delta(y)\, p(y|H_0)\, dy \right],

i.e., P_D(\tilde\delta') - P_D(\tilde\delta) \ge \eta\, [P_F(\tilde\delta') - P_F(\tilde\delta)] = \eta\, [\alpha - P_F(\tilde\delta)] \ge 0 .
2. Choose \eta_0 \ge 0 such that P_0[p(Y|H_1) > \eta_0\, p(Y|H_0)] \le \alpha \le P_0[p(Y|H_1) \ge \eta_0\, p(Y|H_0)]. If P_0[p(Y|H_1) = \eta_0\, p(Y|H_0)] > 0, set

\gamma_0 = \frac{\alpha - P_0[p(Y|H_1) > \eta_0\, p(Y|H_0)]}{P_0[p(Y|H_1) = \eta_0\, p(Y|H_0)]} .

Otherwise, choose \gamma_0 arbitrarily. Consider a Neyman-Pearson decision rule, \tilde\delta_{NP}, with \eta = \eta_0 and \gamma(y) = \gamma_0. For this decision rule, the false alarm rate is,

P_F(\tilde\delta_{NP}) = E_0\{\tilde\delta_{NP}(Y)\} = P_0[p(Y|H_1) > \eta_0\, p(Y|H_0)] + \gamma_0\, P_0[p(Y|H_1) = \eta_0\, p(Y|H_0)] = \alpha .
3. See the text.
Neyman-Pearson Lemma (Example): Measurement with Gaussian Error
For this problem, we have,

P_0[p(Y|H_1) > \eta\, p(Y|H_0)] = P_0[L(Y) > \eta] = P_0(Y > \eta') = 1 - \Phi\!\left(\frac{\eta' - \mu_0}{\sigma}\right) = Q\!\left(\frac{\eta' - \mu_0}{\sigma}\right),
where

\eta' = \frac{\mu_0 + \mu_1}{2} + \frac{\sigma^2 \log \eta}{\mu_1 - \mu_0} .
Any value of \alpha can be achieved by choosing,

\eta'_0 = \mu_0 + \sigma\, Q^{-1}(\alpha) = \mu_0 + \sigma\, \Phi^{-1}(1 - \alpha) .
Since P_0(Y = \eta'_0) = 0, the choice of \gamma_0 is arbitrary and we can choose \gamma_0 = 1. So, we have

\tilde\delta_{NP}(y) = \begin{cases} 1 & \text{if } y \ge \eta'_0 \\ 0 & \text{if } y < \eta'_0 . \end{cases}
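A minimal numeric sketch of this threshold choice, assuming example values mu0 = 0, sigma = 1, alpha = 0.1 (illustrative assumptions):

```python
from statistics import NormalDist

# NP threshold for the Gaussian example: eta'_0 = mu0 + sigma * Q^{-1}(alpha).
# Parameter values are illustrative assumptions.
N = NormalDist()
Q = lambda x: 1.0 - N.cdf(x)
Qinv = lambda a: N.inv_cdf(1.0 - a)    # Q^{-1}(a) = Phi^{-1}(1 - a)

mu0, sigma, alpha = 0.0, 1.0, 0.1
thr = mu0 + sigma * Qinv(alpha)        # eta'_0
PF = Q((thr - mu0) / sigma)            # false-alarm rate of deciding H1 when y >= thr
```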
The detection probability for \tilde\delta_{NP} is

P_D(\tilde\delta_{NP}) = E_1\{\tilde\delta_{NP}(Y)\} = P_1(Y \ge \eta'_0) = Q\!\left(\frac{\eta'_0 - \mu_1}{\sigma}\right) = Q\!\left(Q^{-1}(\alpha) - \frac{\mu_1 - \mu_0}{\sigma}\right) = Q\!\left(Q^{-1}(\alpha) - d\right),

where d = (\mu_1 - \mu_0)/\sigma .
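This closed form can be cross-checked against the direct route through the threshold; the parameter values below are illustrative assumptions:

```python
from statistics import NormalDist

# P_D = Q(Q^{-1}(alpha) - d) with d = (mu1 - mu0)/sigma, cross-checked against
# computing P1(Y >= eta'_0) directly. Parameter values are assumptions.
N = NormalDist()
Q = lambda x: 1.0 - N.cdf(x)
Qinv = lambda a: N.inv_cdf(1.0 - a)

mu0, mu1, sigma, alpha = 0.0, 2.0, 1.0, 0.05
d = (mu1 - mu0) / sigma
PD = Q(Qinv(alpha) - d)

thr = mu0 + sigma * Qinv(alpha)        # eta'_0
PD_direct = Q((thr - mu1) / sigma)     # P1(Y >= eta'_0)
```

Sweeping alpha in this sketch traces out the receiver operating characteristic for a given d.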