Non-Parametric Methods
• Minimum Variance Method (MVSE)
LO-2.4.3, H-8.3
1/17
Recall: Family of Non-Parametric Methods
2/17
Minimum Variance Method
3/17
Minimum Variance Method – Terminology
The Minimum Variance Spectral Estimation (MVSE) method has
two other names:
• Maximum Likelihood Method (MLM)
• Capon’s Method
Note: The names MVSE and MLM are actually misnomers – this
method:
• does NOT minimize the variance of the estimate
• does NOT maximize the “likelihood function”
4/17
Recall: Filter Bank View of Periodogram
See Fig. 8.4 & 8.3 of Hayes
The problem is leakage from nearby frequencies:
[Figure: $|H_i(\omega)|^2$ vs. $\omega$: filter sidelobes leak "out-of-band" power into the estimate at $\omega_i$]
5/17
Goal for MVSE Method
Figure out a way to design each filter bank channel response to
minimize the leakage – this is thus a data-dependent design.
Collect Data ⇒ "Design" Filters for Filter Bank
Want to “design” filters to minimize the sidelobes while keeping
the mainlobe height at 1:
“Design” Goals:
1. Want Hi(ωi) = 1 …to let through the desired Sx(ωi)
2. Minimize total output power in the filter:
$$\rho_i = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left|H_i(\omega)\right|^2 S_x(\omega)\,d\omega
= \sum_{k=0}^{p-1}\sum_{l=0}^{p-1} h_i^*[k]\,h_i[l]\,r_x[l-k]
= \mathbf{h}_i^H \mathbf{R}_x \mathbf{h}_i$$
where $\mathbf{R}_x$ = Autocorrelation Matrix, and the "H" superscript = Hermitian Transpose = Transpose & Conjugate
7/17
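The equivalence between the double sum and the quadratic form $\mathbf{h}_i^H \mathbf{R}_x \mathbf{h}_i$ can be checked numerically; the lag values and filter below are arbitrary (a sketch, not from the lecture), using a real process so that $r_x[-k] = r_x[k]$:

```python
import numpy as np

p = 4
# Hypothetical autocorrelation lags of a real WSS process (r_x[-k] = r_x[k])
r = np.array([2.0, 1.2, 0.5, 0.1])
# Autocorrelation matrix: (i, j) element is r_x[j - i]
Rx = np.array([[r[abs(j - i)] for j in range(p)] for i in range(p)])

h = np.array([0.5, 0.25, -0.1, 0.3])          # an arbitrary filter h_i

quad_form = h.conj() @ Rx @ h                 # h_i^H R_x h_i
# Explicit double sum:  sum_k sum_l h*[k] h[l] r_x[l - k]
double_sum = sum(np.conj(h[k]) * h[l] * r[abs(l - k)]
                 for k in range(p) for l in range(p))

assert np.isclose(quad_form, double_sum)
```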
Autocorrelation Matrix
The AC matrix is the p×p matrix whose i,j element is rx[j – i].
Example for p = 4:
$$\mathbf{R}_x = \begin{bmatrix}
r_x[0] & r_x[1] & r_x[2] & r_x[3] \\
r_x[-1] & r_x[0] & r_x[1] & r_x[2] \\
r_x[-2] & r_x[-1] & r_x[0] & r_x[1] \\
r_x[-3] & r_x[-2] & r_x[-1] & r_x[0]
\end{bmatrix}$$
8/17
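Because $\mathbf{R}_x$ is Hermitian Toeplitz, it can be built directly from the lag values; a minimal sketch using `scipy.linalg.toeplitz`, with made-up lags (for a complex process $r_x[-k] = r_x^*[k]$):

```python
import numpy as np
from scipy.linalg import toeplitz

p = 4
# Hypothetical lags r_x[0] .. r_x[p-1]  (r_x[0] must be real)
r_lags = np.array([2.0, 1.2 + 0.3j, 0.5 - 0.2j, 0.1j])

# First row holds r_x[0..p-1]; first column holds r_x[0], r_x[-1], ...
# i.e. the conjugates of the positive lags
Rx = toeplitz(np.conj(r_lags), r_lags)

assert Rx.shape == (p, p)
assert np.allclose(Rx, Rx.conj().T)       # Hermitian
assert Rx[0, 3] == r_lags[3]              # (i, j) element is r_x[j - i]
assert Rx[2, 0] == np.conj(r_lags[2])     # r_x[-2] = r_x*[2]
```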
Not in Book
Now Minimize the Matrix Form:
For each $i$, minimize this: $\rho_i = \mathbf{h}_i^H \mathbf{R}_x \mathbf{h}_i$
Under this constraint:
$$H_i(\omega_i) = 1 \;\Rightarrow\; \mathbf{h}_i^H \mathbf{e}_i = 1, \qquad \mathbf{e}_i = \left[1\;\; e^{j\omega_i}\;\cdots\; e^{j(p-1)\omega_i}\right]^T$$
Form the Lagrangian:
$$J = \mathbf{h}_i^H \mathbf{R}_x \mathbf{h}_i - \lambda\left(\mathbf{h}_i^H \mathbf{e}_i - 1\right)$$
Lagrange says: Choose hi and λ to minimize J
9/17
Lagrange Minimization:
To find the $\mathbf{h}_i$ and $\lambda$ that minimize $J$, in general set:
$$\frac{\partial J}{\partial \mathbf{h}_i} = \mathbf{0}^T \quad \& \quad \frac{\partial J}{\partial \lambda} = 0$$
But often an easier way is to do these two steps:
1. Do the partial w.r.t. hi and solve for hi
2. Then choose λ to ensure solution meets the constraint
So… we need:
$$\frac{\partial J}{\partial \mathbf{h}_i} = \mathbf{0}^T$$
Aside – the derivative of a scalar w.r.t. a vector gives a row vector. For $g(\mathbf{x}) = \mathbf{c}^T\mathbf{x} = \sum_{i=1}^{N} c_i x_i$:
$$\frac{\partial g(\mathbf{x})}{\partial \mathbf{x}} = \left[\frac{\partial g(\mathbf{x})}{\partial x_1}\;\;\frac{\partial g(\mathbf{x})}{\partial x_2}\;\cdots\;\frac{\partial g(\mathbf{x})}{\partial x_N}\right] = \left[c_1\;\; c_2\;\cdots\; c_N\right] = \mathbf{c}^T$$
11/17
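The gradient rule in the aside can be checked numerically with central finite differences (the vectors below are arbitrary):

```python
import numpy as np

N = 5
c = np.array([1.0, -2.0, 0.5, 3.0, 0.0])     # arbitrary coefficient vector
x0 = np.array([0.2, 1.0, -0.7, 0.4, 2.0])    # arbitrary evaluation point

g = lambda x: c @ x                          # g(x) = c^T x
eps = 1e-6
# Central-difference estimate of each partial derivative dg/dx_i
grad = np.array([(g(x0 + eps * np.eye(N)[i]) - g(x0 - eps * np.eye(N)[i]))
                 / (2 * eps) for i in range(N)])

assert np.allclose(grad, c)                  # the gradient row vector is c^T
```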
Lagrange Minimization (cont.):
Step #1: For our scalar-valued function $J$ we get:
$$\frac{\partial J}{\partial \mathbf{h}_i} = \mathbf{h}_{i,o}^H \mathbf{R}_x - \lambda\,\mathbf{e}_i^H = \mathbf{0}^T$$
(Subscript "o" indicates "optimal" – the $\mathbf{h}_i$ needed to get $\mathbf{0}^T$.)

Now solve this for $\mathbf{h}_{i,o}$:
$$\mathbf{h}_{i,o}^H = \lambda\,\mathbf{e}_i^H \mathbf{R}_x^{-1}$$
But… this depends on $\lambda$!! Choosing $\lambda$ to meet the constraint $\mathbf{h}_{i,o}^H\mathbf{e}_i = 1$ gives $\lambda = 1/\left(\mathbf{e}_i^H \mathbf{R}_x^{-1}\mathbf{e}_i\right)$, so:
$$\mathbf{h}_{i,o}^H = \frac{\mathbf{e}_i^H \mathbf{R}_x^{-1}}{\mathbf{e}_i^H \mathbf{R}_x^{-1}\mathbf{e}_i} \qquad\qquad \mathbf{h}_{i,o} = \frac{\mathbf{R}_x^{-1}\mathbf{e}_i}{\mathbf{e}_i^H \mathbf{R}_x^{-1}\mathbf{e}_i}$$
12/17
LO-2.4.3
MVSE – Filter Solution
$$\mathbf{h}_{i,o}^H = \frac{\mathbf{e}_i^H \mathbf{R}_x^{-1}}{\mathbf{e}_i^H \mathbf{R}_x^{-1}\mathbf{e}_i} \qquad\qquad \mathbf{h}_{i,o} = \frac{\mathbf{R}_x^{-1}\mathbf{e}_i}{\mathbf{e}_i^H \mathbf{R}_x^{-1}\mathbf{e}_i}$$

The resulting minimum output power – the power estimate for channel $i$ – is:
$$\hat{\sigma}_x^2(\omega_i) = \frac{1}{\mathbf{e}_i^H \mathbf{R}_x^{-1}\mathbf{e}_i}$$
14/17
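A quick numerical check of this solution (a sketch: the autocorrelation matrix below is hypothetical, a power-4 complex sinusoid in unit-variance white noise). It verifies the unit-gain constraint and that the minimized output power equals $1/(\mathbf{e}_i^H\mathbf{R}_x^{-1}\mathbf{e}_i)$:

```python
import numpy as np

p = 8
w0 = 0.3 * np.pi
e0 = np.exp(1j * w0 * np.arange(p))            # steering vector at w0

# Hypothetical R_x: complex sinusoid of power 4 plus white noise of power 1
Rx = 4.0 * np.outer(e0, e0.conj()) + np.eye(p)

Rinv_e = np.linalg.solve(Rx, e0)
denom = np.real(e0.conj() @ Rinv_e)            # e^H Rx^{-1} e  (real, > 0)
h_opt = Rinv_e / denom                         # h_o = Rx^{-1} e / (e^H Rx^{-1} e)

gain = h_opt.conj() @ e0                       # H(w0) = h^H e, must equal 1
power = np.real(h_opt.conj() @ Rx @ h_opt)     # minimized output power

assert np.isclose(gain, 1.0)
assert np.isclose(power, 1.0 / denom)
# Here the output power is the sinusoid power plus the noise attenuated by p:
assert np.isclose(power, 4.0 + 1.0 / p)
```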
MVSE – PSD Estimate in Each Channel
To get the power spectral density we need to divide by the
filter’s bandwidth – for a filter of length p the BW is
approximately 1/p so our MVSE PSD estimate is:
$$\hat{S}_{MV}(\omega_i) = \frac{p}{\mathbf{e}_i^H \mathbf{R}_x^{-1}\mathbf{e}_i}$$
15/17
MVSE – Estimating The AC Matrix
The estimated AC matrix is built from the biased ACF estimate:
$$\hat{r}_x[k] = \frac{1}{N}\sum_{n=0}^{N-1-k} x[n+k]\,x^*[n]$$
$$\hat{\mathbf{R}}_x = \begin{bmatrix}
\hat{r}_x[0] & \hat{r}_x[1] & \cdots & \hat{r}_x[p-1] \\
\hat{r}_x[-1] & \hat{r}_x[0] & \ddots & \vdots \\
\vdots & \ddots & \ddots & \hat{r}_x[1] \\
\hat{r}_x[-p+1] & \cdots & \hat{r}_x[-1] & \hat{r}_x[0]
\end{bmatrix}$$

Note: The $p \times p$ AC matrix MUST be estimated.

$$\hat{S}_{MV}(\omega_i) = \frac{p}{\mathbf{e}_i^H \hat{\mathbf{R}}_x^{-1}\mathbf{e}_i}$$
Choose $p < N$ so that high-order ACF lag estimates are reasonably accurate.
16/17
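Putting the pieces together, a sketch of the full MVSE estimate from data. The signal parameters are made up (a real sinusoid in white noise, so $\hat{r}_x[-k] = \hat{r}_x[k]$ and the spectrum is symmetric):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 512, 16
w0 = 0.4 * np.pi
n = np.arange(N)
x = np.cos(w0 * n) + 0.3 * rng.standard_normal(N)   # sinusoid + white noise

# Biased ACF estimate: r_hat[k] = (1/N) sum_{n=0}^{N-1-k} x[n+k] x[n]
r_hat = np.array([np.dot(x[k:], x[:N - k]) / N for k in range(p)])

# p x p symmetric Toeplitz AC matrix ((i, j) element is r_hat[|j - i|])
R_hat = np.array([[r_hat[abs(j - i)] for j in range(p)] for i in range(p)])
R_inv = np.linalg.inv(R_hat)

# MVSE PSD estimate S_MV(w) = p / (e^H R_hat^{-1} e) on a grid over [0, pi]
w_grid = np.linspace(0, np.pi, 512)
E = np.exp(1j * np.outer(np.arange(p), w_grid))     # columns: steering vectors
S_mv = p / np.real(np.sum(np.conj(E) * (R_inv @ E), axis=0))

w_peak = w_grid[np.argmax(S_mv)]
assert abs(w_peak - w0) < 0.1        # spectral peak lands near the sinusoid
```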
Performance of MVSE:
• Provides better resolution than the classical methods
• Mostly used when spiky spectra are expected
(although the AR methods are usually better in that case)
17/17