
Estimation theory
Johan E. Carlson
Dept. of Computer Science, Electrical and Space Engineering
Luleå University of Technology

Lecture 1


Outline
1. General course information
2. Introduction (Chapter 1)
2.1. Estimation theory in Signal Processing
2.2. Problem formulation
2.3. Assessing estimator performance

3. Minimum variance unbiased estimation (Chapter 2)


3.1. Unbiased estimators
3.2. Existence of the MVUB estimator
3.3. Finding the MVUB estimator
3.4. Extension to a vector parameter


General course information


Course web page:
http://staff.www.ltu.se/~johanc/estimation_theory/

Textbook:
Steven M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Vol. 1. Prentice Hall, 1993. ISBN-10: 0133457117.

Examination:
Completion of theoretical homework assignments (written solutions to be handed in to me).
Completion of computer assignments (short lab reports to me).


Estimation in Signal Processing


Modern estimation theory is central to many electrical systems, e.g.
Radar, Sonar, Acoustics
Speech
Image and video
Biomedicine (Biomedical engineering)
Communications (Channel estimation, synchronization, etc.)
Automatic control
Seismology


Estimation in Signal Processing

Example: Ultrasonic characterization of thin multi-layered materials

Estimate thickness of layers
Locate flaws/cracks in internal layers

Estimation in Signal Processing


Simply put, given an observed $N$-point data set
$$\{x[0], x[1], \ldots, x[N-1]\}$$
which depends on an unknown parameter $\theta$, we define the estimator
$$\hat{\theta} = g(x[0], x[1], \ldots, x[N-1]),$$
where $g$ is some function.

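To make the definition concrete, here is a minimal sketch (Python with NumPy; the data and the level 1.5 are made-up values, anticipating the DC-level example later in the lecture) of an estimator as a function $g$ of the data record:

```python
import numpy as np

def g(x):
    """An estimator: a function mapping the data record to a number."""
    return np.mean(x)  # here g is the sample mean

# Hypothetical data: an unknown level theta = 1.5 in unit-variance WGN.
rng = np.random.default_rng(0)
x = 1.5 + rng.standard_normal(100)

theta_hat = g(x)  # theta_hat = g(x[0], ..., x[N-1])
print(theta_hat)  # close to 1.5, but itself a random quantity
```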

Mathematical problem formulation


Since data are inherently random, we will describe them in terms of probability density functions (PDFs), i.e.
$$p(x[0], x[1], \ldots, x[N-1]; \theta),$$
where the semicolon (;) denotes that the PDF is parameterized by the unknown parameter $\theta$.
The estimation problem is thus to find (infer) the value of $\theta$ from the observations. The PDF should be chosen so that
It takes into account any prior knowledge or constraints
It is mathematically tractable

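One way to read the semicolon notation is sketched below (Python with NumPy; the i.i.d. $\mathcal{N}(\theta, 1)$ model and the numbers are assumptions chosen for illustration): once the data are fixed, $p(\mathbf{x}; \theta)$ becomes a function of $\theta$ alone that we can evaluate and compare across candidate values.

```python
import numpy as np

# Hypothetical data: N = 5 i.i.d. samples from N(theta_true = 2, 1).
rng = np.random.default_rng(5)
x = 2.0 + rng.standard_normal(5)

def pdf(x, theta):
    """p(x; theta) for i.i.d. N(theta, 1) data, parameterized by theta."""
    return np.prod(np.exp(-0.5 * (x - theta) ** 2) / np.sqrt(2 * np.pi))

# With the data fixed, p(x; theta) is a function of theta alone.
thetas = np.linspace(0.0, 4.0, 401)
values = np.array([pdf(x, t) for t in thetas])
print(thetas[values.argmax()])  # the theta under which the data are most likely
```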

Mathematical problem formulation


Example: The Dow-Jones index

[Figure: Dow-Jones average (roughly 2800–3200) plotted against day number (0–100).]

It appears it is "on average increasing".


Mathematical problem formulation


Example: The Dow-Jones index

A reasonable model could then be
$$x[n] = A + Bn + w[n], \quad n = 0, 1, \ldots, N-1,$$
where $w[n]$ is white Gaussian noise (WGN), i.e. each sample of $w[n]$ has the PDF $\mathcal{N}(0, \sigma^2)$ and is uncorrelated with all the other samples. The unknown parameters can be arranged in the vector $\boldsymbol\theta = [A\; B]^T$. The PDF of $\mathbf{x}$ is then
$$p(\mathbf{x}; \boldsymbol\theta) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left( -\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} \left( x[n] - A - Bn \right)^2 \right),$$
where $\mathbf{x} = [x[0], x[1], \ldots, x[N-1]]^T$.
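A minimal sketch of this model (Python with NumPy; the values A = 2900, B = 3, σ = 20 are made up for illustration, not taken from the actual index) simulates x[n] and evaluates the log of the PDF above at the true parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
A, B, sigma = 2900.0, 3.0, 20.0  # hypothetical "true" parameters

n = np.arange(N)
x = A + B * n + sigma * rng.standard_normal(N)  # x[n] = A + Bn + w[n]

def log_pdf(x, A, B, sigma):
    """log p(x; theta) for the straight-line-in-WGN model above."""
    N = len(x)
    resid = x - (A + B * np.arange(N))
    return -0.5 * N * np.log(2.0 * np.pi * sigma**2) \
           - np.sum(resid**2) / (2.0 * sigma**2)

print(log_pdf(x, A, B, sigma))
```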

Mathematical problem formulation


Example: The Dow-Jones index
The straight-line assumption is consistent with the data. A models the offset and B models the linear increase over time.
The choice of the Gaussian PDF makes the model mathematically tractable.
Here the parameters are assumed to be unknown but deterministic.
One could also assume that $\boldsymbol\theta$ is random but constrained, say A is in [2800, 3200] and uniformly distributed over this interval. This would lead to a Bayesian approach, where the joint PDF is
$$p(\mathbf{x}, \boldsymbol\theta) = p(\mathbf{x}|\boldsymbol\theta)\, p(\boldsymbol\theta).$$


Assessing estimator performance


Consider the following signal (a realization of a DC voltage corrupted by noise).

[Figure: a noisy realization x[n] for n = 0, ..., 100, fluctuating between roughly −1 and 3 around a constant level.]

A realistic signal model would then be
$$x[n] = A + w[n],$$
where $w[n]$ is $\mathcal{N}(0, \sigma^2)$.

Assessing estimator performance


How to estimate A?
A reasonable estimator would be the sample mean
$$\hat{A} = \frac{1}{N} \sum_{n=0}^{N-1} x[n].$$
How close will $\hat{A}$ be to A?
Are there any better estimators than the sample mean?


Assessing estimator performance


Another estimator could be
$$\hat{A} = x[0].$$
Intuitively, this should not perform as well, since we're not using all available data.
But, for any given realization of x[n], it might actually be closer to the true A than the sample mean.
So, how do we assess the performance?
We need to consider the estimators from a statistical perspective!
Let's consider $E(\hat{A})$ and $\mathrm{var}(\hat{A})$.
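Before deriving $E(\hat{A})$ and $\mathrm{var}(\hat{A})$ analytically, a Monte Carlo sketch (Python with NumPy; A = 1, σ = 1, N = 100 are arbitrary illustrative choices) estimates both quantities empirically for the two candidate estimators:

```python
import numpy as np

rng = np.random.default_rng(42)
A, sigma, N, trials = 1.0, 1.0, 100, 100_000

# Each row is one realization of x[n] = A + w[n], n = 0, ..., N-1.
x = A + sigma * rng.standard_normal((trials, N))

A_hat_1 = x.mean(axis=1)  # sample-mean estimator, one value per realization
A_hat_2 = x[:, 0]         # "use only x[0]" estimator

print(A_hat_1.mean(), A_hat_1.var())  # ~A and ~sigma^2/N = 0.01
print(A_hat_2.mean(), A_hat_2.var())  # ~A and ~sigma^2   = 1.0
```

Both estimators average to A, but the sample-mean variance shrinks with N, matching the derivations on the next slides.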

Assessing estimator performance


For the first estimator,
$$E(\hat{A}) = E\left( \frac{1}{N} \sum_{n=0}^{N-1} x[n] \right) = \frac{1}{N} \sum_{n=0}^{N-1} E(x[n]) = A.$$
For the second estimator,
$$E(\hat{A}) = E(x[0]) = A.$$


Assessing estimator performance


For the first estimator,
$$\mathrm{var}(\hat{A}) = \mathrm{var}\left( \frac{1}{N} \sum_{n=0}^{N-1} x[n] \right) = \frac{1}{N^2} \sum_{n=0}^{N-1} \mathrm{var}(x[n]) = \frac{1}{N^2}\, N\sigma^2 = \frac{\sigma^2}{N}.$$
For the second estimator,
$$\mathrm{var}(\hat{A}) = \mathrm{var}(x[0]) = \sigma^2.$$

Assessing estimator performance


So, the expected value of both estimators is $E(\hat{A}) = A$, i.e. they are both unbiased.
The variance of the second estimator is $\sigma^2$, which is larger than the variance of the first estimator, $\sigma^2/N$.
It appears that the sample mean is indeed a better estimator than x[0]!


Minimum variance unbiased (MVUB) estimation


Let's start by considering estimation of unknown but deterministic parameters.
We will restrict the search for estimators to those that on average yield the true parameter value, i.e. to unbiased estimators.
Among all possible unbiased estimators, we will then look for the one with the minimum variance, i.e. the minimum variance unbiased (MVUB) estimator.


Unbiased estimators
An estimator is said to be unbiased if
$$E(\hat{\theta}) = \theta$$
for all possible values of $\theta$.
If $\hat{\theta} = g(\mathbf{x})$, this means that
$$E(\hat{\theta}) = \int g(\mathbf{x})\, p(\mathbf{x}; \theta)\, d\mathbf{x} = \theta$$
for all $\theta$.


The minimum variance criterion


Let's look at a natural optimality criterion, known as the mean square error (MSE):
$$\mathrm{mse}(\hat{\theta}) = E\left[ \left( \hat{\theta} - \theta \right)^2 \right].$$


The minimum variance criterion


Unfortunately, this criterion often leads to unrealizable estimators, since
$$\mathrm{mse}(\hat{\theta}) = E\left\{ \left[ \left( \hat{\theta} - E(\hat{\theta}) \right) + \left( E(\hat{\theta}) - \theta \right) \right]^2 \right\} = \mathrm{var}(\hat{\theta}) + \left[ E(\hat{\theta}) - \theta \right]^2 = \mathrm{var}(\hat{\theta}) + b^2(\theta),$$
which shows that the MSE depends both on the variance of the estimator and on the bias. If the bias depends on the parameter itself, we're in trouble!
Let's restrict ourselves to searching only for unbiased estimators!
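To see why this matters, consider scaling the sample mean in the DC-level example: for $\check{A} = a\,\frac{1}{N}\sum x[n]$, the variance is $a^2\sigma^2/N$ and the bias is $(a-1)A$, so $\mathrm{mse}(\check{A}) = a^2\sigma^2/N + (a-1)^2 A^2$, and the minimizing $a$ turns out to depend on the unknown A itself. A small numerical sketch (Python with NumPy; the specific numbers are illustrative):

```python
import numpy as np

sigma2_over_N = 0.01  # sigma^2 / N, e.g. sigma = 1, N = 100

def mse(a, A):
    """MSE of a * (sample mean): a^2 sigma^2/N + (a - 1)^2 A^2."""
    return a**2 * sigma2_over_N + (a - 1.0) ** 2 * A**2

a = np.linspace(0.0, 1.2, 1201)
for A in (0.1, 1.0):  # two hypothetical true DC levels
    print(f"A = {A}: MSE-optimal a = {a[mse(a, A).argmin()]:.3f}")
# Prints a ~ 0.5 and a ~ 0.99: the optimal scaling A^2 / (A^2 + sigma^2/N)
# depends on the unknown A, so this "optimal" estimator cannot be realized.
```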


Existence of the MVUB estimator


Does an MVUB estimator always exist?


No!


Existence of the MVUB estimator


A counterexample to existence:
Assume that we have two independent observations x[0] and x[1] with PDFs
$$x[0] \sim \mathcal{N}(\theta, 1)$$
$$x[1] \sim \begin{cases} \mathcal{N}(\theta, 1), & \theta \ge 0 \\ \mathcal{N}(\theta, 2), & \theta < 0 \end{cases}$$

Existence of the MVUB estimator


The two estimators
$$\hat{\theta}_1 = \frac{1}{2}\left( x[0] + x[1] \right), \qquad \hat{\theta}_2 = \frac{2}{3}\, x[0] + \frac{1}{3}\, x[1]$$
can easily be shown to be unbiased. The variances are
$$\mathrm{var}(\hat{\theta}_1) = \frac{1}{4}\left( \mathrm{var}(x[0]) + \mathrm{var}(x[1]) \right), \qquad \mathrm{var}(\hat{\theta}_2) = \frac{4}{9}\,\mathrm{var}(x[0]) + \frac{1}{9}\,\mathrm{var}(x[1]).$$

Existence of the MVUB estimator


As a result (looking back at the PDFs), we have that
$$\mathrm{var}(\hat{\theta}_1) = \begin{cases} 18/36, & \theta \ge 0 \\ 27/36, & \theta < 0 \end{cases}
\qquad
\mathrm{var}(\hat{\theta}_2) = \begin{cases} 20/36, & \theta \ge 0 \\ 24/36, & \theta < 0 \end{cases}$$
So, for $\theta \ge 0$ the minimum variance is 18/36 (estimator 1), and for $\theta < 0$ it is 24/36 (estimator 2). Hence, no single estimator has uniformly minimum variance for all $\theta$.

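These variances are easy to verify by simulation; a Monte Carlo sketch (Python with NumPy, using θ = 1 and θ = −1 as representative values for the two regimes):

```python
import numpy as np

rng = np.random.default_rng(7)
trials = 200_000

for theta in (1.0, -1.0):                 # one representative value per regime
    var2 = 1.0 if theta >= 0 else 2.0     # x[1] variance depends on sign(theta)
    x0 = theta + rng.standard_normal(trials)                  # x[0] ~ N(theta, 1)
    x1 = theta + np.sqrt(var2) * rng.standard_normal(trials)  # x[1]
    t1 = 0.5 * (x0 + x1)                  # theta_hat_1
    t2 = (2.0 * x0 + x1) / 3.0            # theta_hat_2
    print(theta, 36 * t1.var(), 36 * t2.var())
# Expected (in units of 1/36): theta >= 0 -> (18, 20); theta < 0 -> (27, 24).
```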

Finding the MVUB estimator


There is no known universal method that will always produce the MVUB estimator.
There are some possible approaches, though:
Determine the Cramér-Rao lower bound (CRLB) and check if some estimator satisfies it (Chapters 3 and 4).
Apply the Rao-Blackwell-Lehmann-Scheffé (RBLS) theorem (Chapter 5).
Further restrict the estimators to also be linear, and then find the MVUB estimator within this class (Chapter 6).


Extension to a vector parameter


If $\boldsymbol\theta = [\theta_1, \theta_2, \ldots, \theta_p]^T$ is a vector of unknown parameters, we say that an estimator is unbiased if
$$E(\hat{\theta}_i) = \theta_i, \quad i = 1, 2, \ldots, p.$$
By defining
$$E(\hat{\boldsymbol\theta}) = \left[ E(\hat{\theta}_1),\, E(\hat{\theta}_2),\, \ldots,\, E(\hat{\theta}_p) \right]^T,$$
we can define an unbiased estimator as having the property
$$E(\hat{\boldsymbol\theta}) = \boldsymbol\theta.$$
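As a closing illustration (a sketch in Python with NumPy; the least-squares estimator and the Dow-Jones-style numbers are illustrative assumptions, not something derived on these slides), a Monte Carlo check that a vector estimator of $\boldsymbol\theta = [A\; B]^T$ can be unbiased componentwise:

```python
import numpy as np

rng = np.random.default_rng(3)
N, trials = 100, 20_000
A, B, sigma = 2900.0, 3.0, 20.0       # hypothetical true theta = [A, B]^T

n = np.arange(N)
H = np.column_stack([np.ones(N), n])  # observation matrix for A + B*n

estimates = np.empty((trials, 2))
for k in range(trials):
    x = H @ np.array([A, B]) + sigma * rng.standard_normal(N)
    estimates[k] = np.linalg.lstsq(H, x, rcond=None)[0]  # least-squares theta_hat

print(estimates.mean(axis=0))         # ~[2900, 3]: E(theta_hat_i) = theta_i
```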
