A Prediction Problem

Problem: given a sample set of a stationary process,

$\{x[n], x[n-1], x[n-2], \ldots, x[n-M]\}$

predict the value of the process some time into the future as

$\hat{x}[n+m] = f(x[n], x[n-1], x[n-2], \ldots, x[n-M])$

The function $f$ may be linear or non-linear. We concentrate only on linear prediction functions.

A Prediction Problem

Linear prediction dates back to Gauss in the 18th century.
It is extensively used in DSP theory and applications (spectrum analysis, speech processing, radar, sonar, seismology, mobile telephony, financial systems, etc.).
The difference between the predicted and actual value at a specific point in time is called the prediction error.
A Prediction Problem

The objective of prediction is: given the data, to select the linear function that minimises the prediction error.
The Wiener approach examined earlier may be cast into a predictive form in which the desired signal to be followed is the next sample of the given process.
Forward & Backward Prediction

If the prediction is written as

$\hat{x}[n] = f(x[n-1], x[n-2], \ldots, x[n-M])$

then we have a one-step forward prediction.
If the prediction is written as

$\hat{x}[n-M] = f(x[n], x[n-1], x[n-2], \ldots, x[n-M+1])$

then we have a one-step backward prediction.
Forward Prediction Problem

The forward prediction error is then

$e_f[n] = x[n] - \hat{x}[n]$

Write the prediction equation as

$\hat{x}[n] = \sum_{k=1}^{M} w[k]\, x[n-k]$

and, as in the Wiener case, we minimise the second-order norm of the prediction error.
Forward Prediction Problem

Thus the solution accrues from

$J = \min E\{(e_f[n])^2\} = \min E\{(x[n] - \hat{x}[n])^2\}$

Expanding, we have

$J = \min \left[\, E\{(x[n])^2\} - 2E\{x[n]\hat{x}[n]\} + E\{(\hat{x}[n])^2\} \,\right]$

Differentiating with respect to the weight vector, we obtain

$\dfrac{\partial J}{\partial w_i} = -2E\!\left\{x[n]\dfrac{\partial \hat{x}[n]}{\partial w_i}\right\} + 2E\!\left\{\hat{x}[n]\dfrac{\partial \hat{x}[n]}{\partial w_i}\right\}$

Forward Prediction Problem

However,

$\dfrac{\partial \hat{x}[n]}{\partial w_i} = x[n-i]$

and hence

$\dfrac{\partial J}{\partial w_i} = -2E\{x[n]\,x[n-i]\} + 2E\{\hat{x}[n]\,x[n-i]\}$

or

$\dfrac{\partial J}{\partial w_i} = -2E\{x[n]\,x[n-i]\} + 2E\!\left\{\sum_{k=1}^{M} w[k]\, x[n-k]\, x[n-i]\right\}$

Forward Prediction Problem

On substituting the corresponding correlation sequences we have

$\dfrac{\partial J}{\partial w_i} = -2 r_{xx}[i] + 2 \sum_{k=1}^{M} w[k]\, r_{xx}[i-k]$

Set this expression to zero for minimisation, to yield

$\sum_{k=1}^{M} w[k]\, r_{xx}[i-k] = r_{xx}[i], \quad i = 1, 2, 3, \ldots, M$

Forward Prediction Problem

These are the Normal Equations, or Wiener-Hopf, or Yule-Walker equations, structured for the one-step forward predictor.

In this specific case it is clear that we need only know the autocorrelation properties of the given process to determine the predictor coefficients.
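As a minimal numerical sketch (not part of the original slides), the normal equations can be solved from an estimated autocorrelation sequence; the function name, the biased autocorrelation estimate, and the use of scipy.linalg.solve_toeplitz are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def forward_predictor(x, M):
    """Estimate one-step forward predictor weights w[1..M] from data x.

    Solves sum_k w[k] r_xx[i-k] = r_xx[i], i = 1..M (the normal equations),
    using a biased autocorrelation estimate. Illustrative sketch only.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Biased autocorrelation estimate r_xx[0..M]
    r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(M + 1)])
    # Symmetric Toeplitz system with right-hand side r[1..M]
    w = solve_toeplitz((r[:M], r[:M]), r[1:M + 1])
    return w, r

# Example on a synthetic AR(2) process: x[n] = 1.5 x[n-1] - 0.7 x[n-2] + e[n]
rng = np.random.default_rng(0)
e = rng.standard_normal(10000)
x = np.zeros_like(e)
for n in range(2, len(e)):
    x[n] = 1.5 * x[n - 1] - 0.7 * x[n - 2] + e[n]
w, r = forward_predictor(x, M=2)
print(w)   # close to [1.5, -0.7]
```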
Forward Prediction Filter

Set

$a_M[m] = \begin{cases} 1 & m = 0 \\ -w[m] & m = 1, \ldots, M \\ 0 & m > M \end{cases}$

and rewrite the earlier expression as

$\sum_{m=0}^{M} a_M[m]\, r_{xx}[m-k] = \begin{cases} P_M & k = 0 \\ 0 & k = 1, 2, \ldots, M \end{cases}$

where $P_M$ is the minimum forward prediction error power.

These equations are sometimes known as the augmented forward prediction normal equations.
Forward Prediction Filter

The prediction error is then given as

$e_f[n] = \sum_{k=0}^{M} a_M[k]\, x[n-k]$

This is an FIR filter, known as the prediction-error filter:

$A_f(z) = 1 + a_M[1]z^{-1} + a_M[2]z^{-2} + \ldots + a_M[M]z^{-M}$
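A short illustrative continuation of the earlier sketch (the variables w and x are carried over from that sketch and are not part of the slides): form the prediction-error filter and filter the data with it.

```python
import numpy as np
from scipy.signal import lfilter

# Prediction-error filter a_M = [1, -w[1], ..., -w[M]], applied to the data x
# from the earlier sketch (variable names carried over for illustration).
a_f = np.concatenate(([1.0], -w))   # A_f(z) coefficients
e_f = lfilter(a_f, [1.0], x)        # forward prediction error e_f[n]
print(np.var(e_f))                  # approximately the innovation variance
```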

Backward Prediction Problem

In a similar manner, for the backward prediction case we write

$e_b[n] = x[n-M] - \hat{x}[n-M]$

and

$\hat{x}[n-M] = \sum_{k=1}^{M} \tilde{w}[k]\, x[n-k+1]$

where we assume that the backward predictor filter weights are different from those of the forward case.
Backward Prediction Problem

Thus, on comparing the forward and backward formulations with the Wiener least-squares conditions, we see that the desired signal is now

$x[n-M]$

Hence the normal equations for the backward case can be written as

$\sum_{m=1}^{M} \tilde{w}[m]\, r_{xx}[m-k] = r_{xx}[M-k+1], \quad k = 1, 2, 3, \ldots, M$
Backward Prediction Problem

This can be slightly adjusted as

$\sum_{m=1}^{M} \tilde{w}[M-m+1]\, r_{xx}[k-m] = r_{xx}[k], \quad k = 1, 2, 3, \ldots, M$

On comparing this equation with the corresponding forward case, it is seen that the two have the same mathematical form and

$\tilde{w}[M-m+1] = w[m], \quad m = 1, 2, \ldots, M$

or, equivalently,

$\tilde{w}[m] = w[M-m+1], \quad m = 1, 2, \ldots, M$
Backward Prediction Filter

I.e. the backward prediction-error filter has the same weights as the forward case, but reversed:

$A_b(z) = a_M[M] + a_M[M-1]z^{-1} + a_M[M-2]z^{-2} + \ldots + z^{-M}$

This result is significant, and many properties of efficient predictors follow from it.
Observe that the ratio of the backward prediction-error filter to the forward prediction-error filter is allpass.
This yields the lattice predictor structures. More on this later.
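As a quick illustrative check (assumed code, not from the slides, reusing a_f from the earlier sketch): reverse the forward coefficients to obtain the backward filter and verify numerically that their ratio has unit magnitude.

```python
import numpy as np
from scipy.signal import freqz

# Continuing the earlier sketch: a_f holds the forward prediction-error filter a_M[0..M].
a_b = a_f[::-1]                               # backward filter: coefficients reversed
_, H_f = freqz(a_f, worN=512)
_, H_b = freqz(a_b, worN=512)
print(np.allclose(np.abs(H_b / H_f), 1.0))    # True: the ratio A_b/A_f is allpass
```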
Levinson-Durbin

Solution of the Normal Equations

The Durbin algorithm solves the following problem:

$\mathbf{R}_m \mathbf{w}_m = \mathbf{r}_m$

where the right-hand side is a column of $\mathbf{R}$, as in the normal equations.
Assume we have a solution for

$\mathbf{R}_k \mathbf{w}_k = \mathbf{r}_k, \quad 1 \le k \le m$

where

$\mathbf{r}_k = [r_1, r_2, r_3, \ldots, r_k]^T$
Levinson-Durbin

For the next iteration the normal equations can be written as

$\begin{bmatrix} \mathbf{R}_k & \mathbf{J}_k \mathbf{r}_k^* \\ \mathbf{r}_k^T \mathbf{J}_k & r_0 \end{bmatrix} \begin{bmatrix} \mathbf{z}_k \\ \alpha_k \end{bmatrix} = \begin{bmatrix} \mathbf{r}_k \\ r_{k+1} \end{bmatrix}$

where $\mathbf{J}_k$ is the $k$-order counteridentity (exchange) matrix, and we set

$\mathbf{w}_{k+1} = \begin{bmatrix} \mathbf{z}_k \\ \alpha_k \end{bmatrix}$

Levinson-Durbin

Multiply out to yield

$\mathbf{z}_k = \mathbf{R}_k^{-1}(\mathbf{r}_k - \alpha_k \mathbf{J}_k \mathbf{r}_k^*) = \mathbf{w}_k - \alpha_k \mathbf{R}_k^{-1} \mathbf{J}_k \mathbf{r}_k^*$

Note that $\mathbf{R}_k^{-1}\mathbf{J}_k = \mathbf{J}_k (\mathbf{R}_k^*)^{-1}$, and hence

$\mathbf{z}_k = \mathbf{w}_k - \alpha_k \mathbf{J}_k \mathbf{w}_k^*$

I.e. the first $k$ elements of $\mathbf{w}_{k+1}$ are adjusted versions of the previous solution.
Levinson-Durbin

The last element follows from the second (block) equation of

$\begin{bmatrix} \mathbf{R}_k & \mathbf{J}_k \mathbf{r}_k^* \\ \mathbf{r}_k^T \mathbf{J}_k & r_0 \end{bmatrix} \begin{bmatrix} \mathbf{z}_k \\ \alpha_k \end{bmatrix} = \begin{bmatrix} \mathbf{r}_k \\ r_{k+1} \end{bmatrix}$

i.e.

$\alpha_k = \dfrac{1}{r_0}\left(r_{k+1} - \mathbf{r}_k^T \mathbf{J}_k \mathbf{z}_k\right)$
Levinson-Durbin

The parameters $\alpha_k$ are known as the reflection coefficients.
These are crucial from the signal processing point of view.
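A minimal real-valued sketch of the Durbin recursion follows (not from the slides; it uses the standard closed form with the prediction error power $E_k$ rather than the implicit $1/r_0$ form above, and the function name and the reuse of the autocorrelation r from the earlier sketch are illustrative assumptions).

```python
import numpy as np

def durbin(r, M):
    """Order-recursive solution of the Yule-Walker system R_M w_M = r_M.

    r : autocorrelation sequence r[0..M] (real-valued sketch)
    Returns the predictor weights w[1..M] and the reflection coefficients.
    """
    w = np.array([r[1] / r[0]])              # order-1 solution
    refl = [w[0]]                            # first reflection coefficient
    E = r[0] - w[0] * r[1]                   # prediction error power E_1
    for k in range(1, M):
        alpha = (r[k + 1] - np.dot(r[1:k + 1], w[::-1])) / E   # next reflection coefficient
        z = w - alpha * w[::-1]              # first k elements: adjusted previous solution
        w = np.append(z, alpha)              # w_{k+1} = [z_k ; alpha_k]
        E *= 1.0 - alpha ** 2                # update the error power
        refl.append(alpha)
    return w, np.array(refl)

# Example: the autocorrelation sequence r from the earlier sketch
w_ld, refl = durbin(r, M=2)
print(w_ld, refl)                            # w_ld close to [1.5, -0.7]
```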

Levinson-Durbin

The Levinson algorithm solves the problem

$\mathbf{R}_m \mathbf{y} = \mathbf{b}$

for a general right-hand side. In the same way as for Durbin, we keep track of the solutions to the problems

$\mathbf{R}_k \mathbf{y}_k = \mathbf{b}_k$
Levinson-Durbin

Thus, assuming $\mathbf{w}_k$ and $\mathbf{y}_k$ to be known at the $k$-th step, we solve at the next step the problem

$\begin{bmatrix} \mathbf{R}_k & \mathbf{J}_k \mathbf{r}_k^* \\ \mathbf{r}_k^T \mathbf{J}_k & r_0 \end{bmatrix} \begin{bmatrix} \mathbf{v}_k \\ \beta_k \end{bmatrix} = \begin{bmatrix} \mathbf{b}_k \\ b_{k+1} \end{bmatrix}$

Levinson-Durbin

where

$\mathbf{y}_{k+1} = \begin{bmatrix} \mathbf{v}_k \\ \beta_k \end{bmatrix}$

Thus

$\mathbf{v}_k = \mathbf{R}_k^{-1}(\mathbf{b}_k - \beta_k \mathbf{J}_k \mathbf{r}_k^*) = \mathbf{y}_k - \beta_k \mathbf{J}_k \mathbf{w}_k^*$

$\beta_k = \dfrac{b_{k+1} - \mathbf{r}_k^T \mathbf{J}_k \mathbf{y}_k}{r_0 - \mathbf{r}_k^T \mathbf{w}_k^*}$
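A real-valued sketch of the Levinson recursion for a general right-hand side is given below (conjugates dropped, function name and toy numbers are illustrative assumptions, not from the slides); it runs the Durbin update for $\mathbf{w}_k$ alongside the $\mathbf{y}_k$ update, exactly as described above.

```python
import numpy as np
from scipy.linalg import toeplitz

def levinson(r, b):
    """Solve the symmetric Toeplitz system R y = b by the Levinson recursion.

    r : autocorrelation sequence r[0..M-1] (first column of R), real-valued sketch
    b : general right-hand side of length M
    """
    M = len(b)
    w = np.array([r[1] / r[0]])       # Durbin solution (order 1)
    y = np.array([b[0] / r[0]])       # Levinson solution (order 1)
    for k in range(1, M):
        E = r[0] - np.dot(r[1:k + 1], w)                  # error power r_0 - r_k^T w_k
        beta = (b[k] - np.dot(r[1:k + 1], y[::-1])) / E   # last element beta_k
        y = np.append(y - beta * w[::-1], beta)           # y_{k+1} = [v_k ; beta_k]
        if k < M - 1:                                     # Durbin step for the next order
            alpha = (r[k + 1] - np.dot(r[1:k + 1], w[::-1])) / E
            w = np.append(w - alpha * w[::-1], alpha)
    return y

# Toy check against a direct solve (hypothetical numbers)
r_ex = np.array([2.0, 1.0, 0.5])
b_ex = np.array([1.0, 2.0, 3.0])
print(np.allclose(levinson(r_ex, b_ex), np.linalg.solve(toeplitz(r_ex), b_ex)))
```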

Lattice Predictors

Return to the lattice case. We write

$T_M(z) = \dfrac{A_b(z)}{A_f(z)}$

or

$T_M(z) = \dfrac{a_M[M] + a_M[M-1]z^{-1} + a_M[M-2]z^{-2} + \ldots + z^{-M}}{1 + a_M[1]z^{-1} + a_M[2]z^{-2} + \ldots + a_M[M]z^{-M}}$
Lattice Predictors

The above transfer function is allpass of order M.
It can be thought of as the reflection coefficient of a cascade of lossless transmission lines, or acoustic tubes.
In this sense it can furnish a simple algorithm for the estimation of the reflection coefficients.
We start with the observation that the transfer function can be written in terms of another allpass filter embedded in a first-order allpass structure.

Lattice Predictors

This takes the form

$T_M(z) = \dfrac{\gamma_1 + z^{-1}T_{M-1}(z)}{1 + \gamma_1 z^{-1}T_{M-1}(z)}$

where $\gamma_1$ is to be chosen to make $T_{M-1}(z)$ of degree $(M-1)$.
From the above we have

$T_{M-1}(z) = \dfrac{T_M(z) - \gamma_1}{z^{-1}\left(1 - \gamma_1 T_M(z)\right)}$
Lattice Predictors

And hence

$T_{M-1}(z) = \dfrac{a_{M-1}[M] + a_{M-1}[M-1]z^{-1} + \ldots + z^{-M}}{z^{-1}\left(1 + a_{M-1}[1]z^{-1} + a_{M-1}[2]z^{-2} + \ldots + a_{M-1}[M]z^{-M}\right)}$

where

$a_{M-1}[r] = \dfrac{a_M[r] - \gamma_1 a_M[M-r]}{1 - \gamma_1 a_M[M]}$

Thus, for a reduction in the order, the constant term in the numerator, which is also equal to the highest-order term in the denominator, must be zero.
Lattice Predictors

This requirement yields

$\gamma_1 = a_M[M]$

The realisation structure is:

[Block diagram: $T_M(z)$ realised as a first-order allpass section, with multiplier $\gamma_1$ and delay $z^{-1}$, around the embedded lower-order allpass $T_{M-1}(z)$.]
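Iterating the order-reduction step above maps the prediction-error (AR) coefficients to the lattice reflection coefficients. A minimal sketch follows (the function name is illustrative and real coefficients are assumed; note that these $\gamma_m$ follow the $a_M[m] = -w[m]$ sign convention).

```python
import numpy as np

def ar_to_reflection(a):
    """Map prediction-error coefficients a = [1, a_M[1], ..., a_M[M]] to the
    lattice reflection coefficients by repeated order reduction (step-down sketch).
    """
    a = np.asarray(a, dtype=float)
    gammas = []
    while len(a) > 1:
        gamma = a[-1]                          # gamma = a_M[M]
        gammas.append(gamma)
        # a_{M-1}[r] = (a_M[r] - gamma * a_M[M-r]) / (1 - gamma * a_M[M])
        a = (a - gamma * a[::-1]) / (1.0 - gamma * gamma)
        a = a[:-1]                             # highest-order term is now zero; drop it
    return gammas[::-1]                        # ordered gamma_1, ..., gamma_M

# Example with the prediction-error filter used earlier: A_f(z) = 1 - 1.5 z^-1 + 0.7 z^-2
print(ar_to_reflection([1.0, -1.5, 0.7]))
```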

Lattice Predictors

There are many rearrangements that can be made of this structure, through the use of Signal Flow Graphs.
One such rearrangement would be to reverse the direction of signal flow in the lower path. This would yield the standard Lattice Structure as found in several textbooks (viz. the Inverse Lattice).
The lattice structure and the above development are intimately related to the Levinson-Durbin Algorithm.
Lattice Predictors

The form of lattice presented is not the usual approach to the Levinson algorithm, in that we have developed the inverse filter.
Since the denominator of the allpass is also the denominator of the AR process, the procedure can be seen as an AR-coefficient-to-lattice-structure mapping.
For the lattice-to-AR-coefficient mapping we follow the opposite route, i.e. we construct the allpass and read off its denominator.
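A matching sketch of the opposite (lattice-to-AR, step-up) mapping is given below; it rebuilds the denominator order by order from the reflection coefficients and is intended only as an illustrative inverse of the step-down sketch above.

```python
import numpy as np

def reflection_to_ar(gammas):
    """Rebuild A_f(z) from reflection coefficients gamma_1..gamma_M by the
    order-recursive (step-up) construction; real-valued sketch only.
    """
    a = np.array([1.0])
    for gamma in gammas:
        a = np.append(a, 0.0)        # extend the order by one
        a = a + gamma * a[::-1]      # A_m(z) = A_{m-1}(z) + gamma_m z^{-m} A_{m-1}(1/z)
    return a

# Round-trip with the step-down sketch above: recovers [1, -1.5, 0.7]
print(reflection_to_ar(ar_to_reflection([1.0, -1.5, 0.7])))
```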
PSD Estimation

It is evident that if the PSD of the prediction error is white, then the squared magnitude of the prediction-error filter response multiplied by the input PSD yields a constant.
Therefore the input PSD is determined.
Moreover, the inverse prediction-error filter gives us a means to generate the process as the output of the filter when the input is white noise.
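A hedged sketch of the resulting parametric (AR-model) PSD estimate, $S_{xx}(\omega) \approx \sigma_e^2 / |A_f(e^{j\omega})|^2$, reusing a_f and e_f from the earlier sketches (those variable names are assumptions carried over for illustration):

```python
import numpy as np
from scipy.signal import freqz

# AR-model (parametric) PSD estimate: S_xx(w) ~ sigma_e^2 / |A_f(e^{jw})|^2,
# reusing a_f and e_f from the earlier sketches (illustrative assumption).
sigma2 = np.var(e_f)                       # prediction-error (innovation) power
w_freq, H = freqz([1.0], a_f, worN=1024)   # inverse (synthesis) filter 1 / A_f(e^{jw})
S_ar = sigma2 * np.abs(H) ** 2             # PSD estimate of the process x[n]
```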