The idea of signal modeling is to represent the signal via (some) model
parameters.
Signal modeling is used for signal compression, prediction, reconstruction
and understanding.
Two generic model classes will be considered:
ARMA, AR, and MA models;
low-rank models.
The general ARMA difference equation:
$$\sum_{k=0}^{N} a_k\, y(n-k) = \sum_{k=0}^{M} b_k\, x(n-k) \qquad \text{(ARMA)}.$$
Particular cases:
$$y(n) = \sum_{k=0}^{M} b_k\, x(n-k) \qquad \text{(MA)},$$
$$\sum_{k=0}^{N} a_k\, y(n-k) = x(n) \qquad \text{(AR)}.$$
Taking the z-transform of both sides of the ARMA equation:
$$\sum_{k=0}^{N} a_k\, \mathcal{Z}\{y(n-k)\} = \sum_{k=0}^{M} b_k\, \mathcal{Z}\{x(n-k)\},$$
$$Y(z)\sum_{k=0}^{N} a_k z^{-k} = X(z)\sum_{k=0}^{M} b_k z^{-k},$$
so the system function is
$$H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{k=0}^{M} b_k z^{-k}}{\sum_{k=0}^{N} a_k z^{-k}} = \frac{B_M(z)}{A_N(z)}.$$
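As a concrete illustration (a minimal sketch, not from the slides; the coefficients below are arbitrary), this difference equation is exactly what scipy.signal.lfilter implements, and scipy.signal.freqz samples the resulting H(z) on the unit circle:

```python
import numpy as np
from scipy import signal

# Illustrative coefficients: A(z) = 1 - 0.9 z^-1 + 0.5 z^-2,
# B(z) = 1 + 0.5 z^-1 (a_0 = b_0 = 1).
a = [1.0, -0.9, 0.5]
b = [1.0, 0.5]

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)      # white-noise input x(n)
y = signal.lfilter(b, a, x)        # ARMA output: sum a_k y(n-k) = sum b_k x(n-k)
w, H = signal.freqz(b, a)          # samples of H(e^{jw}) = B(e^{jw})/A(e^{jw})
```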
Pole-zero Model
Modeling a signal y(n) as the response of an LTI filter to an input x(n).
The goal is to find the filter coefficients and the input x(n) that make the
modeled signal $\hat{y}(n)$ as close as possible to y(n). The most general model
is the so-called pole-zero model:
$$H(z) = \frac{B(z)}{A(z)}.$$
All-pole Modeling

All-pole model:
$$H(z) = \frac{1}{A(z)}.$$
Consider the real AR equation:
$$y(n) + a_1 y(n-1) + \cdots + a_N y(n-N) = x(n)$$
(with $a_0 = 1$) and assume that $E\{x(n)\, x(n-k)\} = \sigma^2 \delta(k)$. Since the AR
model implies that
$$y(n) = x(n) + h_1 x(n-1) + h_2 x(n-2) + \cdots$$
(with $h_k$ the impulse response of $1/A(z)$), we get
$$E\{y(n)\,x(n)\} = E\{[x(n) + h_1 x(n-1) + h_2 x(n-2) + \cdots]\,x(n)\} = \sigma^2.$$
In vector form:
$$E\{y(n)\,x(n)\} = E\left\{ y(n)\,[y(n),\, y(n-1),\, \ldots,\, y(n-N)]\begin{bmatrix}1\\ a_1\\ \vdots\\ a_N\end{bmatrix}\right\} = [r_0,\, r_1,\, \ldots,\, r_N]\begin{bmatrix}1\\ a_1\\ \vdots\\ a_N\end{bmatrix} = \sigma^2.$$
Similarly, since $x(n)$ is white and $y(n-k)$ depends only on inputs up to time $n-k$, for $k = 1, \ldots, N$:
$$E\{y(n-k)\,x(n)\} = E\left\{ y(n-k)\,[y(n),\, \ldots,\, y(n-N)]\begin{bmatrix}1\\ a_1\\ \vdots\\ a_N\end{bmatrix}\right\} = [r_k,\, r_{k-1},\, \ldots,\, r_{k-N}]\begin{bmatrix}1\\ a_1\\ \vdots\\ a_N\end{bmatrix} = 0.$$
Collecting these equations:
$$\begin{bmatrix} r_0 & r_1 & \cdots & r_N\\ r_1 & r_0 & \cdots & r_{N-1}\\ \vdots & \vdots & \ddots & \vdots\\ r_N & r_{N-1} & \cdots & r_0 \end{bmatrix}\begin{bmatrix}1\\ a_1\\ \vdots\\ a_N\end{bmatrix} = \begin{bmatrix}\sigma^2\\ 0\\ \vdots\\ 0\end{bmatrix}.$$
If we omit the first equation, we get
$$\begin{bmatrix} r_0 & r_1 & \cdots & r_{N-1}\\ r_1 & r_0 & \cdots & r_{N-2}\\ \vdots & \vdots & \ddots & \vdots\\ r_{N-1} & r_{N-2} & \cdots & r_0 \end{bmatrix}\begin{bmatrix}a_1\\ a_2\\ \vdots\\ a_N\end{bmatrix} = -\begin{bmatrix}r_1\\ r_2\\ \vdots\\ r_N\end{bmatrix},$$
the Yule-Walker equations.
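A minimal NumPy sketch of this estimator (the function name yule_walker_ar and the AR(2) test coefficients are illustrative, not from the slides; biased autocorrelation estimates are used, as is standard for this method):

```python
import numpy as np

def yule_walker_ar(y, N):
    """Fit an AR(N) model via the Yule-Walker equations.
    Returns a = [a_1, ..., a_N] and the driving-noise variance sigma^2."""
    y = np.asarray(y, dtype=float)
    L = len(y)
    # Biased autocorrelation estimates r_0, ..., r_N.
    r = np.array([np.dot(y[:L - k], y[k:]) / L for k in range(N + 1)])
    # Toeplitz system R a = -[r_1, ..., r_N]^T (first equation omitted).
    R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
    a = np.linalg.solve(R, -r[1:])
    # The omitted first equation recovers sigma^2 = r_0 + sum_k a_k r_k.
    sigma2 = r[0] + np.dot(a, r[1:])
    return a, sigma2

# Sanity check on a synthetic AR(2) process, y(n) + a1 y(n-1) + a2 y(n-2) = x(n).
rng = np.random.default_rng(0)
a_true = np.array([-0.9, 0.5])
x = rng.standard_normal(100_000)
y = np.zeros_like(x)
for n in range(2, len(y)):
    y[n] = -a_true[0] * y[n - 1] - a_true[1] * y[n - 2] + x[n]
print(yule_walker_ar(y, 2))    # expect approximately [-0.9, 0.5] and 1.0
```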
Levinson Recursion

The Yule-Walker equations can be solved order-recursively. For this purpose,
let us introduce a slightly different notation (a second subscript for the
model order):
$$\begin{bmatrix} r_0 & r_1 & \cdots & r_N\\ r_1 & r_0 & \cdots & r_{N-1}\\ \vdots & \vdots & \ddots & \vdots\\ r_N & r_{N-1} & \cdots & r_0 \end{bmatrix}\begin{bmatrix}1\\ a_{N,1}\\ \vdots\\ a_{N,N}\end{bmatrix} = \begin{bmatrix}\sigma_N^2\\ 0\\ \vdots\\ 0\end{bmatrix},$$
i.e. $R_{N+1}\, a_N = \sigma_N^2\, e_1$.
For N = 1, we have:
$$r_0 + r_1 a_{1,1} = \sigma_1^2, \qquad r_1 + r_0 a_{1,1} = 0,$$
and thus
$$a_{1,1} = -\frac{r_1}{r_0}, \qquad \sigma_1^2 = r_0\left\{1 - \left[\frac{r_1}{r_0}\right]^2\right\}.$$
To go from order $N$ to order $N+1$, append a zero to the order-$N$ solution and
multiply by the $(N+2)\times(N+2)$ Toeplitz matrix $R_{N+2}$:
$$\begin{bmatrix} r_0 & r_1 & \cdots & r_N & r_{N+1}\\ r_1 & r_0 & \cdots & r_{N-1} & r_N\\ \vdots & & & & \vdots\\ r_N & r_{N-1} & \cdots & r_0 & r_1\\ r_{N+1} & r_N & \cdots & r_1 & r_0 \end{bmatrix}\begin{bmatrix}1\\ a_{N,1}\\ \vdots\\ a_{N,N}\\ 0\end{bmatrix} = \begin{bmatrix}\sigma_N^2\\ 0\\ \vdots\\ 0\\ \Delta_N\end{bmatrix},$$
where $\Delta_N = r_{N+1} + \sum_{k=1}^{N} a_{N,k}\, r_{N+1-k}$. Use the persymmetry
property of $R_{N+2}$ to rewrite this system with the coefficient vector reversed:
$$\begin{bmatrix} r_0 & r_1 & \cdots & r_N & r_{N+1}\\ r_1 & r_0 & \cdots & r_{N-1} & r_N\\ \vdots & & & & \vdots\\ r_{N+1} & r_N & \cdots & r_1 & r_0 \end{bmatrix}\begin{bmatrix}0\\ a_{N,N}\\ \vdots\\ a_{N,1}\\ 1\end{bmatrix} = \begin{bmatrix}\Delta_N\\ 0\\ \vdots\\ 0\\ \sigma_N^2\end{bmatrix}.$$
Now, pick the combination
$$R_{N+2}\left(\begin{bmatrix}1\\ a_{N,1}\\ \vdots\\ a_{N,N}\\ 0\end{bmatrix} + \Gamma_{N+1}\begin{bmatrix}0\\ a_{N,N}\\ \vdots\\ a_{N,1}\\ 1\end{bmatrix}\right) = \begin{bmatrix}\sigma_N^2\\ 0\\ \vdots\\ 0\\ \Delta_N\end{bmatrix} + \Gamma_{N+1}\begin{bmatrix}\Delta_N\\ 0\\ \vdots\\ 0\\ \sigma_N^2\end{bmatrix}$$
with
$$\Gamma_{N+1} = -\frac{\Delta_N}{\sigma_N^2},$$
which reduces the above equation to $R_{N+2}\, a_{N+1} = \sigma_{N+1}^2\, e_1$, where
$$a_{N+1} = \begin{bmatrix}1\\ a_{N,1}\\ \vdots\\ a_{N,N}\\ 0\end{bmatrix} + \Gamma_{N+1}\begin{bmatrix}0\\ a_{N,N}\\ \vdots\\ a_{N,1}\\ 1\end{bmatrix} \qquad \text{and} \qquad \sigma_{N+1}^2 = \sigma_N^2\left[1 - \Gamma_{N+1}^2\right].$$
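The whole recursion fits in a few lines; a sketch (NumPy assumed, the function name levinson is chosen here for illustration):

```python
import numpy as np

def levinson(r):
    """Levinson recursion for the Yule-Walker equations.
    Input: autocorrelations r = [r_0, ..., r_N].
    Output: [a_{N,1}, ..., a_{N,N}] and sigma_N^2."""
    r = np.asarray(r, dtype=float)
    N = len(r) - 1
    a = np.zeros(0)
    sigma2 = r[0]                            # sigma_0^2 = r_0
    for n in range(N):
        # Delta_n = r_{n+1} + sum_{k=1}^{n} a_{n,k} r_{n+1-k}
        delta = r[n + 1] + np.dot(a, r[n:0:-1])
        gamma = -delta / sigma2              # reflection coefficient Gamma_{n+1}
        # a_{n+1} = [a_{n,1..n}, 0] + Gamma_{n+1} [a_{n,n..1}, 1]
        a = np.append(a, 0.0) + gamma * np.append(a[::-1], 1.0)
        sigma2 *= 1.0 - gamma ** 2           # sigma_{n+1}^2 = sigma_n^2 (1 - Gamma^2)
    return a, sigma2
```

Each order update costs O(N) operations, so the full solve is O(N²) rather than the O(N³) of a general linear solver.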
Least-squares AR Modeling

Write the AR model as
$$y(n) = -\sum_{k=1}^{N} a_k\, y(n-k) + x(n),$$
and apply it to the observed data points $\{y(n)\}_{n=0}^{L-1}$. In matrix form:
$$\begin{bmatrix} y(N)\\ y(N+1)\\ \vdots\\ y(L-1)\end{bmatrix} = -\begin{bmatrix} y(N-1) & y(N-2) & \cdots & y(0)\\ y(N) & y(N-1) & \cdots & y(1)\\ \vdots & & & \vdots\\ y(L-2) & y(L-3) & \cdots & y(L-N-1)\end{bmatrix}\begin{bmatrix}a_1\\ a_2\\ \vdots\\ a_N\end{bmatrix} + \begin{bmatrix}x(N)\\ x(N+1)\\ \vdots\\ x(L-1)\end{bmatrix},$$
i.e. $y = -Y a + x$.
Minimizing the residual $\|x\|^2 = \|y + Y a\|^2$ with respect to $a$ yields the
normal equations:
$$Y^H Y\, a = -Y^H y,$$
i.e.
$$\widehat{R}\, a = -\widehat{r}, \qquad \widehat{R} = Y^H Y, \quad \widehat{r} = Y^H y.$$
In practice, $\widehat{R} = Y^H Y$ and $\widehat{r} = Y^H y$
represent sample estimates of the exact covariance matrix R and covariance
vector r, respectively!
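A least-squares sketch (NumPy assumed; ls_ar is an illustrative name). np.linalg.lstsq solves the normal equations without forming Y^H Y explicitly:

```python
import numpy as np

def ls_ar(y, N):
    """Covariance-method AR(N) fit: minimize ||y + Y a||^2 over a."""
    y = np.asarray(y, dtype=float)
    L = len(y)
    # Row n of Y is [y(N-1+n), y(N-2+n), ..., y(n)]; target entry is y(N+n).
    Y = np.array([y[n + N - 1::-1][:N] for n in range(L - N)])
    rhs = y[N:]
    a, *_ = np.linalg.lstsq(Y, -rhs, rcond=None)
    return a
```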
Linear Prediction

Predict $y(n)$ from its $N$ past values:
$$\hat{y}(n) = \sum_{i=1}^{N} w_i\, y(n-i) = w^T y,$$
where
$$w = \begin{bmatrix}w_1\\ w_2\\ \vdots\\ w_N\end{bmatrix}, \qquad y = \begin{bmatrix}y(n-1)\\ y(n-2)\\ \vdots\\ y(n-N)\end{bmatrix}.$$
Minimizing the mean-square prediction error $E\{[y(n) - \hat{y}(n)]^2\}$ leads to
$$R\, w = r,$$
where $R = E\{y\, y^T\}$ and $r = E\{y(n)\, y\}$. For an AR($N$) process this
coincides with the Yule-Walker solution: $w_i = -a_i$.
All-zero Modeling

All-zero model:
$$H(z) = B(z).$$
Consider the real MA equation:
$$y(n) = \sum_{i=0}^{M} b_i\, x(n-i).$$
All-zero Modeling

One idea: find the MA coefficients through the coefficients of an auxiliary
higher-order AR model. We know that a finite MA model can be approximated
by an infinite AR model:
$$B_M(z) = \sum_{k=0}^{M} b_k z^{-k} = \frac{1}{A(z)} \approx \frac{1}{\sum_{k=0}^{N} a_{k,\mathrm{aux}}\, z^{-k}}, \qquad N \gg M.$$
The MA coefficients can then be found by a least-squares fit, e.g. minimizing
$$\left\| A_{N,\mathrm{aux}}(e^{j\omega})\, B_M(e^{j\omega}) - 1 \right\|^2 = \int_{-\pi}^{\pi} \left| A_{N,\mathrm{aux}}(e^{j\omega})\, B_M(e^{j\omega}) - 1 \right|^2 d\omega.$$
Durbin's Method

Step 1: Given the MA(M) signal y(n), find for it an auxiliary high-order
AR(N) model with N ≫ M using the Yule-Walker or normal equations.

Step 2: Using the AR coefficients obtained in the previous step, find the
coefficients of the MA(M) model for the signal y(n).
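A rough sketch of both steps, reusing yule_walker_ar from the earlier Yule-Walker example (sign and scaling conventions for the recovered coefficients vary across references, so treat this as illustrative):

```python
import numpy as np

def durbin_ma(y, M, N):
    """Durbin's method sketch for MA(M) fitting, with auxiliary order N >> M."""
    # Step 1: high-order AR fit to the MA data.
    a_aux, _ = yule_walker_ar(y, N)
    # Step 2: since B_M(z) ~ 1/A_aux(z), fitting an AR(M) model to the
    # coefficient sequence of A_aux(z) recovers the MA parameters.
    a_seq = np.concatenate(([1.0], a_aux))
    b, _ = yule_walker_ar(a_seq, M)
    return b       # approximates [b_1, ..., b_M] (with b_0 = 1)
```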
Pole-zero Modeling

Pole-zero model:
$$H(z) = \frac{B(z)}{A(z)}.$$
Consider the real ARMA equation:
$$y(n) + \sum_{i=1}^{N} a_i\, y(n-i) = x(n) + \sum_{i=1}^{M} b_i\, x(n-i).$$
Multiplying the ARMA equation by $y(n-k)$ and taking expectations yields
correlation relations; for $k = 0$:
$$[r_0, r_1, \ldots, r_N]\begin{bmatrix}1\\ a_1\\ \vdots\\ a_N\end{bmatrix} = \sigma^2\,[h_0, h_1, \ldots, h_M]\begin{bmatrix}1\\ b_1\\ \vdots\\ b_M\end{bmatrix},$$
where $h_n$ is the impulse response of $H(z)$ (so that $E\{y(n)\, x(n-i)\} = \sigma^2 h_i$).
For $k = 1$:
$$[r_1, r_0, \ldots, r_{N-1}]\begin{bmatrix}1\\ a_1\\ \vdots\\ a_N\end{bmatrix} = \sigma^2\,[h_0, h_1, \ldots, h_{M-1}]\begin{bmatrix}b_1\\ b_2\\ \vdots\\ b_M\end{bmatrix},$$
and so on until $k = M$. For $k \ge M + 1$:
$$[r_k, r_{k-1}, \ldots, r_{k-N}]\begin{bmatrix}1\\ a_1\\ \vdots\\ a_N\end{bmatrix} = 0.$$
Collecting the equations for $k = M+1, \ldots, M+N$:
$$\begin{bmatrix} r_{M+1} & r_M & \cdots & r_{M-N+1}\\ r_{M+2} & r_{M+1} & \cdots & r_{M-N+2}\\ \vdots & & & \vdots\\ r_{M+N} & r_{M+N-1} & \cdots & r_M \end{bmatrix}\begin{bmatrix}1\\ a_1\\ \vdots\\ a_N\end{bmatrix} = 0,$$
or equivalently
$$\begin{bmatrix} r_M & r_{M-1} & \cdots & r_{M-N+1}\\ r_{M+1} & r_M & \cdots & r_{M-N+2}\\ \vdots & & & \vdots\\ r_{M+N-1} & r_{M+N-2} & \cdots & r_M \end{bmatrix}\begin{bmatrix}a_1\\ a_2\\ \vdots\\ a_N\end{bmatrix} = -\begin{bmatrix}r_{M+1}\\ r_{M+2}\\ \vdots\\ r_{M+N}\end{bmatrix},$$
the modified Yule-Walker equations.
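A short sketch of solving the modified Yule-Walker equations for the AR part (NumPy assumed; the symmetry $r_{-k} = r_k$ of a real process handles the negative lags):

```python
import numpy as np

def modified_yule_walker(r, N, M):
    """AR part of an ARMA(N, M) model from autocorrelations r[0..M+N]."""
    # Row i <-> lag k = M+1+i; column j <-> a_{j+1}; entry r_{k-(j+1)}.
    R = np.array([[r[abs(M + i - j)] for j in range(N)] for i in range(N)])
    rhs = np.array([r[M + 1 + i] for i in range(N)])
    return np.linalg.solve(R, -rhs)    # [a_1, ..., a_N]
```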
Remarks:
AR models represent peaky PSDs better;
MA models represent PSDs with valleys better;
ARMA models are used for PSDs with both peaks and valleys.
Spectral Factorization

$$H(e^{j\omega}) = \frac{B(e^{j\omega})}{A(e^{j\omega})}.$$
The corresponding power spectral density is
$$P(e^{j\omega}) = \sigma^2 \left| \frac{B(e^{j\omega})}{A(e^{j\omega})} \right|^2 = \sigma^2\, \frac{B(e^{j\omega})\, B^*(e^{j\omega})}{A(e^{j\omega})\, A^*(e^{j\omega})}.$$
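Evaluating this PSD numerically takes one polynomial evaluation per factor; a sketch with arbitrary illustrative coefficients:

```python
import numpy as np

w = np.linspace(0, np.pi, 512)
z = np.exp(1j * w)
b = np.array([1.0, 0.5])            # B(z) = 1 + 0.5 z^-1  (illustrative)
a = np.array([1.0, -0.9, 0.5])      # A(z) = 1 - 0.9 z^-1 + 0.5 z^-2
sigma2 = 1.0
B = np.polyval(b[::-1], 1 / z)      # sum_k b_k z^-k on the unit circle
A = np.polyval(a[::-1], 1 / z)
P = sigma2 * np.abs(B / A) ** 2     # P(e^{jw}) = sigma^2 |B/A|^2
```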
Spectral Factorization

Consider the real case:
$$P(z) = \sigma^2\, \frac{B(z)\, B(z^{-1})}{A(z)\, A(z^{-1})}.$$
Remarks:
If $\beta$ is a zero of $P(z)$, so is $1/\beta$.
If $\alpha$ is a pole of $P(z)$, so is $1/\alpha$.
Since $a_1, \ldots, a_N, b_1, \ldots, b_M$ are real, the poles and zeros of $P(z)$ occur
in complex conjugate pairs.
Spectral Factorization

Remarks:
If the poles of $\frac{1}{A(z)}$ are inside the unit circle, then $\frac{B(z)}{A(z)}$ is BIBO stable.
If the zeros of $B(z)$ are inside the unit circle, then $\frac{B(z)}{A(z)}$ is minimum phase.
We choose $H(z)$ so that both its zeros and poles are inside the unit circle.
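One way to enforce this in practice (a sketch; minimum_phase is a name chosen here) is to reflect offending zeros to their conjugate-reciprocal locations, which leaves $|B(e^{j\omega})|$ unchanged after a compensating gain:

```python
import numpy as np

def minimum_phase(b):
    """Reflect zeros of B(z) outside the unit circle to 1/conj(zero),
    rescaling so that |B(e^{jw})| is preserved."""
    b = np.asarray(b, dtype=complex)
    zeros = np.roots(b)
    outside = np.abs(zeros) > 1.0
    gain = np.prod(np.abs(zeros[outside]))      # compensates the reflections
    zeros[outside] = 1.0 / np.conj(zeros[outside])
    b_min = b[0] * gain * np.poly(zeros)
    return b_min.real if np.allclose(b_min.imag, 0.0) else b_min
```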
Low-rank Models

A low-rank model for the data vector x:
$$x = A s + n,$$
where A is the model basis matrix, s is the vector of model basis parameters,
and n is noise.

s is unknown; A is sometimes completely known, sometimes completely unknown,
and sometimes known up to an unknown parameter vector θ.

Low-rank Models

When A is completely known, minimizing the LS criterion $\|x - A s\|^2$ gives
$$\hat{s} = (A^H A)^{-1} A^H x.$$
This approach can be generalized to the multiple-snapshot case:
$$X = A S + N,$$
where
$$X = [x_1, x_2, \ldots, x_K], \quad S = [s_1, s_2, \ldots, s_K], \quad N = [n_1, n_2, \ldots, n_K].$$
Then
$$\min_S \|X - \widehat{X}\|^2 = \min_S \|X - A S\|^2 \quad \Longrightarrow \quad \widehat{S} = (A^H A)^{-1} A^H X.$$
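A minimal multiple-snapshot check (NumPy assumed; the sizes and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
m, p, K = 8, 2, 100                       # illustrative dimensions
A = rng.standard_normal((m, p))
S = rng.standard_normal((p, K))
X = A @ S + 0.1 * rng.standard_normal((m, K))

# S_hat = (A^H A)^{-1} A^H X; lstsq avoids forming the inverse explicitly.
S_hat, *_ = np.linalg.lstsq(A, X, rcond=None)
print(np.allclose(S_hat, np.linalg.solve(A.T @ A, A.T @ X)))   # True
```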
When $A = A(\theta)$ is known only up to the parameter vector $\theta$:
$$\min_{s,\theta} \|x - \hat{x}\|^2 = \min_{s,\theta} \|x - A(\theta)\, s\|^2.$$
For fixed $\theta$: $\hat{s} = [A^H(\theta)A(\theta)]^{-1}A^H(\theta)\, x$. Substituting this back into the
LS criterion:
$$\min_{\theta} \left\| x - A(\theta)[A^H(\theta)A(\theta)]^{-1}A^H(\theta)\, x \right\|^2
= \min_{\theta} \left\| \left\{ I - A(\theta)[A^H(\theta)A(\theta)]^{-1}A^H(\theta) \right\} x \right\|^2
= \min_{\theta} \left\| P_A^{\perp}(\theta)\, x \right\|^2
\iff \max_{\theta}\; x^H P_A(\theta)\, x,$$
where $P_A(\theta) = A(\theta)[A^H(\theta)A(\theta)]^{-1}A^H(\theta)$ is the projection onto the column
space of $A(\theta)$ and $P_A^{\perp}(\theta) = I - P_A(\theta)$.
For multiple snapshots, similarly:
$$\min_{S,\theta} \|X - \widehat{X}\|^2 = \min_{S,\theta} \|X - A(\theta)\, S\|^2
= \min_{\theta} \left\| P_A^{\perp}(\theta)\, X \right\|^2
= \min_{\theta} \operatorname{tr}\{P_A^{\perp}(\theta)\, X X^H P_A^{\perp}(\theta)\}
= \min_{\theta} \operatorname{tr}\{[P_A^{\perp}(\theta)]^2 X X^H\}
= \min_{\theta} \operatorname{tr}\{P_A^{\perp}(\theta)\, X X^H\}
\iff \max_{\theta} \operatorname{tr}\{P_A(\theta)\, X X^H\},$$
using the cyclic property of the trace and the idempotency of projection matrices.
where
$$X X^H = [x_1, x_2, \ldots, x_K]\begin{bmatrix}x_1^H\\ x_2^H\\ \vdots\\ x_K^H\end{bmatrix} = \sum_{k=1}^{K} x_k x_k^H = K \widehat{R}, \qquad \widehat{R} = \frac{1}{K}\sum_{k=1}^{K} x_k x_k^H.$$
Hence, equivalently,
$$\min_{\theta} \operatorname{tr}\{P_A^{\perp}(\theta)\, \widehat{R}\} \iff \max_{\theta} \operatorname{tr}\{P_A(\theta)\, \widehat{R}\}.$$
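A sketch of the resulting estimator for a toy one-parameter basis (everything here — the steering vector a(θ), the sizes, and the noise level — is hypothetical, not from the slides):

```python
import numpy as np

def steering(theta, m):
    """Toy rank-one basis A(theta) = [1, e^{j theta}, ..., e^{j (m-1) theta}]^T."""
    return np.exp(1j * theta * np.arange(m))[:, None]

def nls_scan(R_hat, m, grid):
    """Maximize tr{P_A(theta) R_hat} over a grid of candidate thetas."""
    scores = []
    for theta in grid:
        A = steering(theta, m)
        P = A @ np.linalg.inv(A.conj().T @ A) @ A.conj().T   # projection P_A(theta)
        scores.append(np.real(np.trace(P @ R_hat)))
    return grid[int(np.argmax(scores))]

rng = np.random.default_rng(2)
m, K, theta0 = 8, 200, 0.7
X = steering(theta0, m) @ rng.standard_normal((1, K)) \
    + 0.1 * (rng.standard_normal((m, K)) + 1j * rng.standard_normal((m, K)))
R_hat = (X @ X.conj().T) / K                       # sample covariance
print(nls_scan(R_hat, m, np.linspace(0.0, np.pi, 512)))   # approx 0.7
```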