
1/15

Ch. 1 Introduction to Estimation


2/15
An Example Estimation Problem: DSB Rx
[Figure: block diagram of a DSB receiver. Antenna into BPF & Amp, whose electronics add noise $w(t)$ (usually white); mixer multiplies by $\cos(2\pi \hat{f}_o t + \hat{\phi}_o)$ from a local oscillator; audio amp recovers $\hat{M}(f)$. An estimation algorithm drives the oscillator with $\hat{f}_o$ & $\hat{\phi}_o$. Spectra $S(f)$ (centered at $f_o$) and $M(f)$ (baseband) are shown.]

Transmitted DSB signal: $s(t; f_o, \phi_o) = m(t)\cos(2\pi f_o t + \phi_o)$

Goal: Given $x(t) = s(t; f_o, \phi_o) + w(t)$,
find estimates $\hat{f}_o$ & $\hat{\phi}_o$ (that are optimal in some sense).

Describe $w(t)$ with a probability model: PDF & correlation.
3/15
Discrete-Time Estimation Problem
These days, we almost always work with samples of the observed signal (signal plus noise):

$x[n] = s[n; f_o, \phi_o] + w[n]$

Our Thought Model: Each time you observe x[n] it contains the same s[n] but a different realization of the noise w[n], so the estimate is different each time.

Our Job: Given the finite data set x[0], x[1], …, x[N-1],
find estimator functions that map the data into estimates:

$\hat{f}_o = g_1(x[0], x[1], \ldots, x[N-1]) = g_1(\mathbf{x})$
$\hat{\phi}_o = g_2(x[0], x[1], \ldots, x[N-1]) = g_2(\mathbf{x})$

These estimates are RVs, so we need to describe them with a probability model.
4/15
PDF of Estimate
Because estimates are RVs, we describe them with a PDF, $p(\hat{f}_o)$.

It will depend on:
1. the structure of s[n]
2. the probability model of w[n]
3. the form of the estimation function g(x)

[Figure: PDF $p(\hat{f}_o)$ plotted vs. $\hat{f}_o$, centered near the true $f_o$.]

The mean measures the centroid; the std. dev. & variance measure the spread.

Desire: $E\{\hat{f}_o\} = f_o$ and $\sigma_{\hat{f}_o}^2 = E\{(\hat{f}_o - E\{\hat{f}_o\})^2\}$ small
5/15
1.2 Mathematical Estimation Problem
General Mathematical Statement of Estimation Problem:
For measured data $\mathbf{x} = [\,x[0]\ x[1]\ \cdots\ x[N-1]\,]^T$
and unknown parameter vector $\boldsymbol{\theta} = [\,\theta_1\ \theta_2\ \cdots\ \theta_p\,]^T$:
$\boldsymbol{\theta}$ is not random;
$\mathbf{x}$ is an N-dimensional random data vector.

Q: What captures all the statistical information needed for an estimation problem?
A: The N-dimensional PDF of the data, parameterized by $\boldsymbol{\theta}$: $p(\mathbf{x}; \boldsymbol{\theta})$

In practice, we are not given the PDF!!! We choose a suitable model that:
captures the essence of reality, and
leads to a tractable answer.

We'll use $p(\mathbf{x}; \boldsymbol{\theta})$ to find $\hat{\boldsymbol{\theta}} = g(\mathbf{x})$.
6/15
Ex. Estimating a DC Level in Zero Mean AWGN
Consider the case where a single data point is observed:

$x[0] = \theta + w[0]$, with $w[0]$ Gaussian, zero mean, variance $\sigma^2$

So $x[0] \sim N(\theta, \sigma^2)$, and the needed parameterized PDF is $p(x[0]; \theta)$, which is Gaussian with mean $\theta$.

So… in this case the parameterization shifts the mean of the data PDF:

[Figure: three Gaussian PDFs $p(x[0]; \theta_1)$, $p(x[0]; \theta_2)$, $p(x[0]; \theta_3)$ plotted vs. x[0], centered at $\theta_1$, $\theta_2$, $\theta_3$.]
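A minimal sketch of the idea above: the single-sample PDF $p(x[0]; \theta)$ evaluated for a few candidate $\theta$ values. The numbers (x0 = 1.3, sigma = 1) are hypothetical, not from the slides.

```python
# Sketch (hypothetical values): evaluate the parameterized PDF
# p(x[0]; theta), i.e. a Gaussian with mean theta and variance sigma^2.
import math

def pdf(x, theta, sigma=1.0):
    """PDF of the single sample x[0] = theta + w[0], w[0] ~ N(0, sigma^2)."""
    return math.exp(-(x - theta) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

x0 = 1.3  # one hypothetical observed data point
for theta in (0.0, 1.0, 2.0):
    print(f"p(x[0]={x0}; theta={theta}) = {pdf(x0, theta):.4f}")
```

Note that the PDF is largest for the $\theta$ closest to the observed x[0]; this observation foreshadows the maximum-likelihood idea covered later in the course.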
7/15
Ex. Modeling Data with Linear Trend
See Fig. 1.6 in the text. Looking at the figure we see what looks like a linear trend perturbed by some noise, so the engineer proposes signal and noise models:

$x[n] = \underbrace{A + Bn}_{s[n;\,A,\,B]} + w[n]$

Signal Model: linear trend. Noise Model: AWGN with zero mean.

AWGN = Additive White Gaussian Noise
White = w[n] and w[m] are uncorrelated for n ≠ m: $E\{(\mathbf{w} - \bar{\mathbf{w}})(\mathbf{w} - \bar{\mathbf{w}})^T\} = \sigma^2 \mathbf{I}$
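A quick sketch of this model in code: generate data from $x[n] = A + Bn + w[n]$ and fit the trend by ordinary least squares (one of the classical methods listed later in the course). The values A = 2, B = 0.5, sigma = 1, N = 50 are hypothetical.

```python
# Sketch (hypothetical values): simulate the linear-trend-plus-AWGN model
# and fit A, B by closed-form ordinary least squares.
import random

A_true, B_true, sigma, N = 2.0, 0.5, 1.0, 50
rng = random.Random(0)
n = list(range(N))
x = [A_true + B_true * k + rng.gauss(0.0, sigma) for k in n]

# Closed-form least-squares fit of x[n] ~ A + B*n
n_bar = sum(n) / N
x_bar = sum(x) / N
B_hat = sum((k - n_bar) * (xi - x_bar) for k, xi in zip(n, x)) \
        / sum((k - n_bar) ** 2 for k in n)
A_hat = x_bar - B_hat * n_bar
print(f"A_hat = {A_hat:.3f}, B_hat = {B_hat:.3f}")  # near A=2, B=0.5
```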
8/15
Typical Assumptions for Noise Model
White and Gaussian is always the easiest case to analyze.
It is usually assumed unless you have reason to believe otherwise.
Whiteness is usually the first assumption removed.
Gaussian is less often removed, due to the validity of the Central Limit Thm.

Zero mean is a nearly universal assumption.
Most practical cases have zero mean.
But if not: $w[n] = w_{zm}[n] + \mu$ (zero-mean noise plus a non-zero mean $\mu$; group $\mu$ into the signal model).

The variance of the noise doesn't always have to be known to make an estimate.
BUT, it must be known to assess the expected goodness of the estimate.
We usually perform goodness analysis as a function of the noise variance (or SNR = Signal-to-Noise Ratio).
The noise variance sets the SNR level of the problem.
9/15
Classical vs. Bayesian Estimation Approaches
If we view θ (the parameter to estimate) as non-random: Classical Estimation.
Provides no way to include a priori information about θ.

If we view θ (the parameter to estimate) as random: Bayesian Estimation.
Allows use of an a priori PDF on θ.

The first part of the course: Classical Methods
Minimum Variance, Maximum Likelihood, Least Squares
The last part of the course: Bayesian Methods
MMSE, MAP, Wiener Filter, Kalman Filter
10/15
1.3 Assessing Estimator Performance
We can only do this when the value of θ is known: theoretical analysis, simulations, field tests, etc.

Recall that the estimate $\hat{\theta} = g(\mathbf{x})$ is a random variable.
Thus it has a PDF of its own, and that PDF completely displays the quality of the estimate.

[Figure: illustration for the 1-D parameter case; PDF $p(\hat{\theta})$ centered near θ.]

Often we just capture quality through the mean and variance of $\hat{\theta} = g(\mathbf{x})$.

Desire: $m_{\hat{\theta}} = E\{\hat{\theta}\} = \theta$ (if this is true, we say the estimate is unbiased)
and $\sigma_{\hat{\theta}}^2 = E\{(\hat{\theta} - E\{\hat{\theta}\})^2\}$ small
11/15
Equivalent View of Assessing Performance
Define the estimation error: $e = \hat{\theta} - \theta$
($\hat{\theta}$ is an RV, θ is not an RV, so e is an RV).

Completely describe estimator quality with the error PDF p(e).

[Figure: error PDF p(e) plotted vs. e, centered near 0.]

Desire: $m_e = E\{e\} = 0$ (if this is true, we say the estimate is unbiased)
and $\sigma_e^2 = E\{(e - E\{e\})^2\} = E\{e^2\}$ small
12/15
Example: DC Level in AWGN
Model: $x[n] = A + w[n]$, $n = 0, 1, \ldots, N-1$
with w[n] Gaussian, zero mean, variance $\sigma^2$, and white (uncorrelated sample-to-sample).

PDF of an individual data sample:

$p(x[i]) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x[i] - A)^2}{2\sigma^2} \right)$

Uncorrelated Gaussian RVs are independent, so the joint PDF is the product of the individual PDFs:

$p(\mathbf{x}) = \prod_{n=0}^{N-1} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x[n] - A)^2}{2\sigma^2} \right) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left( -\sum_{n=0}^{N-1} \frac{(x[n] - A)^2}{2\sigma^2} \right)$

(property: a product of exponentials gives a sum inside the exponential)
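The product-to-sum property above can be checked numerically. This sketch uses hypothetical data values and evaluates both forms of the joint PDF; they agree to floating-point precision.

```python
# Sketch (hypothetical data): the product of individual Gaussian PDFs
# equals the single exponential-of-a-sum form of the joint PDF.
import math

sigma, A = 1.0, 3.0
x = [2.7, 3.4, 2.9, 3.1]  # hypothetical samples of x[n] = A + w[n]
N = len(x)

# Form 1: product of the individual PDFs
prod_form = 1.0
for xn in x:
    prod_form *= math.exp(-(xn - A) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

# Form 2: exponential of the sum, with the (2*pi*sigma^2)^(N/2) factor out front
sum_form = math.exp(-sum((xn - A) ** 2 for xn in x) / (2 * sigma ** 2)) \
           / (2 * math.pi * sigma ** 2) ** (N / 2)

print(prod_form, sum_form)  # the two forms agree
```

In practice one works with the log-likelihood (the sum itself) precisely because the product of many small PDF values underflows for large N.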
13/15
Each data sample has the same mean (A), which is the thing we are trying to estimate… so, we can imagine estimating A by finding the sample mean of the data (a statistic, which we analyze with probability theory):

$\hat{A} = \frac{1}{N} \sum_{n=0}^{N-1} x[n]$

Let's analyze the quality of this estimator.

Is it unbiased?

$E\{\hat{A}\} = E\left\{ \frac{1}{N} \sum_{n=0}^{N-1} x[n] \right\} = \frac{1}{N} \sum_{n=0}^{N-1} \underbrace{E\{x[n]\}}_{A} = \frac{1}{N}\, NA = A$

Yes! Unbiased!

Can we get a small variance?

$\mathrm{var}(\hat{A}) = \mathrm{var}\left( \frac{1}{N} \sum_{n=0}^{N-1} x[n] \right) = \frac{1}{N^2} \sum_{n=0}^{N-1} \mathrm{var}(x[n]) = \frac{1}{N^2}\, N\sigma^2 = \frac{\sigma^2}{N}$

(the variances add due to independence: white & Gaussian ⇒ independent)

We can make the variance small by increasing N!!!
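A Monte Carlo sketch of these two results: over many repeated experiments the sample mean averages to A (unbiased) and its variance comes out near $\sigma^2/N$. The values A = 5, sigma = 1, N = 100 are hypothetical.

```python
# Sketch (hypothetical values): Monte Carlo check that the sample mean
# of x[n] = A + w[n] is unbiased with variance sigma^2 / N.
import random

A, sigma, N, trials = 5.0, 1.0, 100, 10000
rng = random.Random(1)

A_hats = []
for _ in range(trials):
    x = [A + rng.gauss(0.0, sigma) for _ in range(N)]
    A_hats.append(sum(x) / N)  # the sample-mean estimator

mean_A_hat = sum(A_hats) / trials
var_A_hat = sum((a - mean_A_hat) ** 2 for a in A_hats) / trials
print(f"mean of A_hat = {mean_A_hat:.4f}  (theory: {A})")
print(f"var  of A_hat = {var_A_hat:.5f}  (theory: sigma^2/N = {sigma**2 / N})")
```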
14/15
Theoretical Analysis vs. Simulations
Ideally we'd like to always be able to theoretically analyze the problem to find the bias and variance of the estimator.
Theoretical results show how performance depends on the problem specifications.

But sometimes we make use of simulations:
to verify that our theoretical analysis is correct, and
because sometimes we can't find theoretical results.
15/15
Course Goal = Find Optimal Estimators
There are several different definitions or criteria for optimality!
Most Logical: Minimum MSE (Mean-Square-Error)
See Sect. 2.4
To see this result:
$\mathrm{mse}(\hat{\theta}) = E\{(\hat{\theta} - \theta)^2\} = \mathrm{var}\{\hat{\theta}\} + b^2(\theta)$

$\mathrm{mse}(\hat{\theta}) = E\{(\hat{\theta} - \theta)^2\} = E\left\{ \left[ (\hat{\theta} - E\{\hat{\theta}\}) + (E\{\hat{\theta}\} - \theta) \right]^2 \right\}$
$= E\{(\hat{\theta} - E\{\hat{\theta}\})^2\} + 2\,\underbrace{E\{\hat{\theta} - E\{\hat{\theta}\}\}}_{=\,0}\,(E\{\hat{\theta}\} - \theta) + (E\{\hat{\theta}\} - \theta)^2$
$= \mathrm{var}\{\hat{\theta}\} + b^2(\theta)$

where $b(\theta) = E\{\hat{\theta}\} - \theta$ is the bias.
Although MSE makes sense as a criterion, the resulting estimates usually rely on the unknown parameter θ itself, so they usually cannot be realized (see Sect. 2.4).
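The decomposition mse = var + b² can be verified empirically. This sketch uses a deliberately biased estimator of a DC level, $\hat{A} = \frac{1}{N+1}\sum x[n]$, chosen purely for illustration (hypothetical values throughout), and checks that the empirical MSE equals the empirical variance plus the squared bias.

```python
# Sketch (hypothetical setup): empirical check of mse = var + b^2
# using a deliberately biased estimator A_hat = (1/(N+1)) * sum(x[n]).
import random

A, sigma, N, trials = 5.0, 1.0, 10, 20000
rng = random.Random(2)

A_hats = []
for _ in range(trials):
    x = [A + rng.gauss(0.0, sigma) for _ in range(N)]
    A_hats.append(sum(x) / (N + 1))  # biased on purpose: E{A_hat} = N*A/(N+1)

mean_hat = sum(A_hats) / trials
var_hat = sum((a - mean_hat) ** 2 for a in A_hats) / trials
bias = mean_hat - A                                   # b = E{A_hat} - A
mse = sum((a - A) ** 2 for a in A_hats) / trials      # E{(A_hat - A)^2}

print(f"mse = {mse:.4f}, var + b^2 = {var_hat + bias ** 2:.4f}")  # the two agree
```

For the empirical quantities the identity holds exactly (up to rounding), since the cross term vanishes by construction, mirroring the derivation above.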