Estimation Theory
Examples
1. Ranging
2. Localization
3. Communications
4. Speech
5. Imaging
6. ...
Example: estimation of the mean

Given data $x[n] = A + w[n]$, $n = 0, \ldots, N-1$, estimate $A$:

1. Choose $\hat{A} = \frac{1}{N}\sum_{n=0}^{N-1} x[n]$
2. Or choose $\check{A} = x[0]$
Introduction
Look at the mean:

$$E(\hat{A}) = E\left(\frac{1}{N}\sum_{n=0}^{N-1} x[n]\right) = \frac{1}{N}\sum_{n=0}^{N-1} E(x[n]) = A$$

and also

$$E(\check{A}) = E(x[0]) = A$$
Both estimators are unbiased.
Look at the variance; define $\sigma^2 = \mathrm{var}(w[n])$:

$$\mathrm{var}(\hat{A}) = \mathrm{var}\left(\frac{1}{N}\sum_{n=0}^{N-1} x[n]\right) = \frac{1}{N^2}\sum_{n=0}^{N-1} \mathrm{var}(x[n]) = \frac{\sigma^2}{N}$$

whereas

$$\mathrm{var}(\check{A}) = \sigma^2$$

$\hat{A}$ has the smaller variance. Also, $\mathrm{var}(\hat{A}) \to 0$ as $N \to \infty$: the estimator is consistent.
Conclusions: (i) estimators are themselves random variables; (ii) how do we define and find optimal estimators?
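The two estimators above can be compared with a quick Monte Carlo sketch (the values of $A$, $\sigma$, and $N$ are illustrative assumptions): both are unbiased, but their variances differ by a factor of $N$.

```python
import numpy as np

rng = np.random.default_rng(0)
A, sigma, N, trials = 3.0, 2.0, 100, 200_000

# x[n] = A + w[n], w[n] ~ N(0, sigma^2); one row per trial
x = A + sigma * rng.standard_normal((trials, N))

A_hat = x.mean(axis=1)   # sample-mean estimator
A_chk = x[:, 0]          # single-sample estimator

print(A_hat.mean(), A_chk.mean())   # both close to A (unbiased)
print(A_hat.var(), sigma**2 / N)    # variance close to sigma^2 / N
print(A_chk.var(), sigma**2)        # variance close to sigma^2
```

Increasing `N` shrinks the variance of `A_hat` but leaves `A_chk` unchanged, which is the consistency statement above.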
Estimation Techniques
A natural criterion that comes to mind is the Mean Square Error (MSE):

$$\mathrm{mse}(\hat\theta) = E\left[(\hat\theta - \theta)^2\right] = E\left[\left((\hat\theta - E(\hat\theta)) + (E(\hat\theta) - \theta)\right)^2\right] = \underbrace{\mathrm{var}(\hat\theta)}_{\text{variance}} + \underbrace{(E(\hat\theta) - \theta)^2}_{\text{bias}^2}$$
The MSE depends not only on the variance but also on the bias, a function of the unknown $\theta$. This means that an estimator that tries to minimize the MSE will often depend on the parameter $\theta$ itself, and is therefore unrealizable.
Example: let $x[n] = A + w[n]$, $n = 0, \ldots, N-1$, with $w[n] \sim \mathcal{N}(0, \sigma^2)$, and consider the scaled sample mean $\check{A} = a\,\frac{1}{N}\sum_{n=0}^{N-1} x[n]$:

$$\mathrm{mse}(\check{A}) = \frac{a^2\sigma^2}{N} + (a-1)^2 A^2$$

Differentiate the MSE with respect to $a$ to find

$$\frac{d\,\mathrm{mse}(\check{A})}{da} = \frac{2a\sigma^2}{N} + 2(a-1)A^2 = 0 \quad\Rightarrow\quad a_{\mathrm{opt}} = \frac{A^2}{A^2 + \sigma^2/N}$$

The optimal scaling depends on the unknown $A$, so this minimum-MSE estimator cannot be realized.
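As a sanity check, a brute-force grid search over $a$ (with illustrative values for $A$, $\sigma^2$, $N$) recovers the closed-form $a_{\mathrm{opt}}$; note that evaluating it requires knowing $A$.

```python
import numpy as np

A, sigma, N = 1.0, 1.0, 10

def mse(a):
    # mse(a * sample mean) = a^2 sigma^2 / N + (a - 1)^2 A^2
    return a**2 * sigma**2 / N + (a - 1)**2 * A**2

a_grid = np.linspace(0.0, 2.0, 200_001)
a_min = a_grid[np.argmin(mse(a_grid))]          # numerical minimizer
a_opt = A**2 / (A**2 + sigma**2 / N)            # closed-form minimizer

print(a_min, a_opt)   # both near 1 / 1.1, and both depend on the unknown A
```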
Solution: constrain the bias to zero and choose the estimator that minimizes the variance. This leads to the so-called Minimum Variance Unbiased (MVU) estimator:

1. unbiased: $E(\hat\theta) = \theta$ for all $\theta$
2. minimum variance: $\mathrm{var}(\hat\theta)$ is minimal for all $\theta$
Remark: The MVU does not always exist and is generally difficult to find.
[Figure: $\mathrm{var}(\hat\theta)$ versus $\theta$ for three estimators $\hat\theta_1$, $\hat\theta_2$, $\hat\theta_3$. Left panel: $\hat\theta_3$ has the smallest variance for every $\theta$ and is the MVU estimator. Right panel: no single estimator has the smallest variance for all $\theta$, so no MVU estimator exists.]
N (, 1) , 0
x N (, 1) ,
y
N (, 2) , < 0
1
1 = (x + y) ,
2
2
1
2 = x + y
3
3
18 ,
1
36
var(1 ) = (var(x) + var(y)) =
27 ,
4
36
1
4
var(2 ) = var(x) + var(y) =
9
9
<0
20
36
24
36
<0
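The case-by-case variances can be checked by simulation (a sketch; the sampled values $\theta = \pm 1$ are arbitrary choices on each side of zero):

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 1_000_000

def variances(theta):
    # x ~ N(theta, 1); y ~ N(theta, 1) for theta >= 0, N(theta, 2) for theta < 0
    x = rng.normal(theta, 1.0, trials)
    y = rng.normal(theta, np.sqrt(1.0 if theta >= 0 else 2.0), trials)
    return (0.5 * (x + y)).var(), ((2.0 * x + y) / 3.0).var()

v1_pos, v2_pos = variances(+1.0)   # near 18/36 and 20/36: theta_hat_1 wins
v1_neg, v2_neg = variances(-1.0)   # near 27/36 and 24/36: theta_hat_2 wins
print(v1_pos, v2_pos, v1_neg, v2_neg)
```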
Cramér–Rao Lower Bound (CRLB): any unbiased estimator $\hat\theta$ satisfies

$$\mathrm{var}(\hat\theta) \ge \frac{1}{E\left[\left(\frac{\partial \ln p(x;\theta)}{\partial\theta}\right)^2\right]} = \frac{-1}{E\left[\frac{\partial^2 \ln p(x;\theta)}{\partial\theta^2}\right]}$$
An unbiased estimator may be found that attains the bound for all $\theta$ iff

$$\frac{\partial \ln p(x;\theta)}{\partial\theta} = I(\theta)\left(g(x) - \theta\right)$$

for some functions $g$ and $I$. The estimator then is $\hat\theta = g(x)$, with mean $E(\hat\theta) = \theta$ and variance $\mathrm{var}(\hat\theta) = \frac{1}{I(\theta)}$.

An estimator is called efficient if it attains the CRLB with equality. In that case it is the MVU estimator (the converse is not necessarily true).
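For the running mean-estimation example $x[n] = A + w[n]$ with $w[n] \sim \mathcal{N}(0,\sigma^2)$, the score factors exactly as required: $\partial \ln p(x;A)/\partial A = (N/\sigma^2)(\bar{x} - A)$, so $g(x) = \bar{x}$ and $I(A) = N/\sigma^2$, and the sample mean is efficient. A Monte Carlo spot check (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
A, sigma, N, trials = 1.0, 2.0, 50, 100_000

x = A + sigma * rng.standard_normal((trials, N))

# score = d/dA ln p(x; A) = (N / sigma^2) (xbar - A), i.e. I(A)(g(x) - A)
score = (N / sigma**2) * (x.mean(axis=1) - A)

I_A = N / sigma**2
print(score.var(), I_A)                 # E[score^2] close to I(A)
print(x.mean(axis=1).var(), 1 / I_A)    # var(xbar) close to 1/I(A) = sigma^2/N
```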
Proof of the CRLB: unbiasedness means $E(\hat\theta - \theta) = 0$, i.e.

$$\int (\hat\theta - \theta)\, p(x;\theta)\, dx = 0$$

Differentiating with respect to $\theta$:

$$-\int p(x;\theta)\, dx + \int (\hat\theta - \theta)\, \frac{\partial p(x;\theta)}{\partial\theta}\, dx = 0$$

$$\int (\hat\theta - \theta)\, \frac{\partial \ln p(x;\theta)}{\partial\theta}\, p(x;\theta)\, dx = 1$$

$$\int \left[(\hat\theta - \theta)\sqrt{p(x;\theta)}\right] \left[\frac{\partial \ln p(x;\theta)}{\partial\theta}\, \sqrt{p(x;\theta)}\right] dx = 1$$

By the Cauchy–Schwarz inequality,

$$\left[\int (\hat\theta - \theta)^2\, p(x;\theta)\, dx\right] \left[\int \left(\frac{\partial \ln p(x;\theta)}{\partial\theta}\right)^2 p(x;\theta)\, dx\right] \ge 1$$

$$\mathrm{var}(\hat\theta) \ge \frac{1}{E\left[\left(\frac{\partial \ln p(x;\theta)}{\partial\theta}\right)^2\right]}$$
For the alternative form, start from the identity $E\left[\frac{\partial \ln p(x;\theta)}{\partial\theta}\right] = 0$:

$$\int \frac{\partial \ln p(x;\theta)}{\partial\theta}\, p(x;\theta)\, dx = 0$$

Differentiating with respect to $\theta$:

$$\int \left[\frac{\partial^2 \ln p(x;\theta)}{\partial\theta^2}\, p(x;\theta) + \frac{\partial \ln p(x;\theta)}{\partial\theta}\, \frac{\partial p(x;\theta)}{\partial\theta}\right] dx = 0$$

$$\int \frac{\partial^2 \ln p(x;\theta)}{\partial\theta^2}\, p(x;\theta)\, dx = -\int \left(\frac{\partial \ln p(x;\theta)}{\partial\theta}\right)^2 p(x;\theta)\, dx$$

$$E\left[\frac{\partial^2 \ln p(x;\theta)}{\partial\theta^2}\right] = -E\left[\left(\frac{\partial \ln p(x;\theta)}{\partial\theta}\right)^2\right]$$
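This identity can be spot-checked numerically. Here is a sketch using a Poisson likelihood (an example of my choosing, not from the slides), for which $\partial \ln p/\partial\theta = x/\theta - 1$ and $\partial^2 \ln p/\partial\theta^2 = -x/\theta^2$, so both sides of the identity equal $1/\theta$:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, trials = 4.0, 1_000_000

# Poisson example: ln p(x; theta) = x ln(theta) - theta - ln(x!)
x = rng.poisson(theta, trials)

score = x / theta - 1.0   # d ln p / d theta
d2 = -x / theta**2        # d^2 ln p / d theta^2

print((score**2).mean(), -d2.mean(), 1 / theta)  # all three close to 0.25
```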
When $\frac{\partial \ln p(x;\theta)}{\partial\theta} = I(\theta)(\hat\theta - \theta)$, it is easy to show that

$$E(\hat\theta) = \theta, \qquad \mathrm{var}(\hat\theta) = \frac{E\left[\left(\frac{\partial \ln p(x;\theta)}{\partial\theta}\right)^2\right]}{I^2(\theta)}$$

Proof: since $E\left[\frac{\partial \ln p(x;\theta)}{\partial\theta}\right] = 0$, we get $I(\theta)\,E(\hat\theta - \theta) = 0$, hence $E(\hat\theta) = \theta$; the variance expression follows from squaring the condition. Differentiating

$$\frac{\partial \ln p(x;\theta)}{\partial\theta} = I(\theta)(\hat\theta - \theta)$$

with respect to $\theta$ gives

$$\frac{\partial^2 \ln p(x;\theta)}{\partial\theta^2} = \frac{\partial I(\theta)}{\partial\theta}(\hat\theta - \theta) - I(\theta)$$

and taking expectations (using $E(\hat\theta) = \theta$),

$$E\left[\frac{\partial^2 \ln p(x;\theta)}{\partial\theta^2}\right] = -I(\theta)$$

so $E\left[\left(\frac{\partial \ln p(x;\theta)}{\partial\theta}\right)^2\right] = I(\theta)$ and $\mathrm{var}(\hat\theta) = I(\theta)/I^2(\theta) = 1/I(\theta)$: the bound is attained.