
An Introduction to Adaptive Filtering & Its Applications


Introduction
Linear filters: the filter output is a linear function of the filter input.
Design methods:
The classical approach: frequency-selective filters such as low-pass / band-pass / notch filters, etc.
Optimal filter design: mostly based on minimizing the mean-square value of the error signal.
Wiener filter
Based on the work of Wiener in 1942 and Kolmogorov in 1939.
Its design relies on a priori statistical information about the signals.
When such a priori information is not available, which is usually the case, it is not possible to design a Wiener filter in the first place.
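To make "a priori statistical information" concrete, here is a minimal Python sketch (not from the original slides): the Wiener solution needs the autocorrelation matrix R of the tap inputs and the cross-correlation vector p between input and desired signal; both are estimated here from synthetic signals chosen purely for illustration.

import numpy as np

# Minimal sketch: the Wiener filter weights solve the Wiener-Hopf
# equations R w = p, where R is the autocorrelation matrix of the tap
# inputs and p the cross-correlation with the desired signal. The
# signals and the 3-tap "true" system below are synthetic assumptions.
rng = np.random.default_rng(0)
M = 4                                   # number of filter taps
u = rng.standard_normal(10_000)         # filter input
d = np.convolve(u, [0.5, -0.3, 0.2])[:u.size]  # desired signal

# Row n of U holds the tap-input vector [u(n), u(n-1), ..., u(n-M+1)]
# (np.roll wraps at the edges; negligible over this many samples).
U = np.column_stack([np.roll(u, k) for k in range(M)])
R = U.T @ U / u.size                    # estimate of E[u(n) u(n)^T]
p = U.T @ d / u.size                    # estimate of E[u(n) d(n)]

w_opt = np.linalg.solve(R, p)           # Wiener solution w = R^{-1} p
print(w_opt)                            # approx. [0.5, -0.3, 0.2, 0]

Without the statistics behind R and p, this solution cannot be formed, which is exactly the gap the adaptive filter fills.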
Adaptive filter
In practice, the signal and/or noise characteristics are often nonstationary, and the statistical parameters vary with time.
An adaptive filter has an adaptation algorithm that monitors the environment and varies the filter transfer function accordingly.
Based on the actual signals received, it attempts to find the optimum filter design.
Adaptive filter
The basic operation involves two processes:
1. a filtering process, which produces an output signal in response to a given input signal;
2. an adaptation process, which aims to adjust the filter parameters (filter transfer function) to the (possibly time-varying) environment.
Often, the (average) square value of the error signal is used as the optimization criterion.
Adaptive filter
Because of the complexity of the optimizing algorithms, most adaptive filters are digital filters that perform digital signal processing.
When processing analog signals, the adaptive filter is preceded by an A/D converter and followed by a D/A converter.
Adaptive filter
The generalization to adaptive IIR filters leads to stability problems.
It is therefore common to use an FIR digital filter with adjustable coefficients.
Applications of Adaptive Filters: Identification
Used to provide a linear model of an unknown plant.
Applications: system identification.
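As an illustration (not from the slides), the sketch below identifies a hypothetical 3-tap FIR plant: the adaptive filter and the plant share the same input, the plant output serves as the desired signal, and the LMS weights converge to the plant coefficients.

import numpy as np

# The "unknown plant" here is a hypothetical 3-tap FIR filter; the
# adaptive filter sees the same input u(n) and the plant output is the
# desired signal d(n), so the LMS weights converge to the plant.
rng = np.random.default_rng(1)
plant = np.array([0.8, -0.4, 0.15])      # unknown plant (assumed)
M, mu = 3, 0.01
u = rng.standard_normal(5_000)
d = np.convolve(u, plant)[:u.size]       # plant output = desired signal

w = np.zeros(M)
for n in range(M - 1, u.size):
    x = u[n - M + 1:n + 1][::-1]         # tap inputs [u(n), ..., u(n-M+1)]
    e = d[n] - w @ x                     # estimation error
    w = w + mu * e * x                   # LMS update

print(w)                                 # approx. [0.8, -0.4, 0.15]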
Applications of Adaptive Filters: Inverse Modeling
Used to provide an inverse model of an unknown plant.
Applications: equalization (communication channels).
Applications of Adaptive Filters: Prediction
Used to provide a prediction of the present value of a random signal from its past values.
Applications: linear predictive coding.
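A minimal prediction sketch, assuming a synthetic AR(2) test signal: an LMS predictor estimates the present sample from its two most recent past samples, and its weights approach the AR coefficients.

import numpy as np

# Hypothetical AR(2) test signal; the LMS predictor estimates x(n) from
# [x(n-1), x(n-2)], and its weights approach the AR coefficients.
rng = np.random.default_rng(2)
N, M, mu = 20_000, 2, 0.002
x = np.zeros(N)
for n in range(2, N):                    # x(n) = 1.2 x(n-1) - 0.6 x(n-2) + v(n)
    x[n] = 1.2 * x[n - 1] - 0.6 * x[n - 2] + rng.standard_normal()

w = np.zeros(M)
for n in range(M, N):
    past = x[n - M:n][::-1]              # [x(n-1), x(n-2)]
    e = x[n] - w @ past                  # prediction error
    w = w + mu * e * past                # LMS update

print(w)                                 # approx. [1.2, -0.6]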
Applications of Adaptive Filters: Interference Cancellation
Used to cancel unknown interference from a primary signal.
Applications: echo / noise cancellation (hands-free car phones, aircraft headphones, etc.).
Example:
Acoustic Echo Cancellation
LMS Algorithm
The most popular adaptation algorithm is LMS.
The cost function is defined as the mean-squared error.
LMS is based on the method of steepest descent:
move towards the minimum on the error surface, with the gradient of the error surface estimated at every iteration.
LMS Adaptive Algorithm
Introduced by Widrow & Hoff in 1959.
Simple; no matrix calculations are involved in the adaptation.
In the family of stochastic gradient algorithms.
An approximation of the steepest descent method.
Based on the MMSE (Minimum Mean-Square Error) criterion.
The adaptive process involves two input signals:
1.) the filter input, on which the filtering process operates to produce the output signal;
2.) the desired signal (training sequence).
Adaptive process: recursive adjustment of the filter tap weights.
LMS Algorithm Steps
Filter output:
    y(n) = \sum_{k=0}^{M-1} w_k^*(n) u(n-k)
Estimation error:
    e(n) = d(n) - y(n)
Tap-weight adaptation:
    w_k(n+1) = w_k(n) + \mu u(n-k) e^*(n)
In words: the updated value of the tap-weight vector equals its old value, plus the learning-rate (step-size) parameter times the tap-input vector times the error signal.
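The three steps translate directly into code. The sketch below is a real-valued implementation (so the complex conjugates drop out); the input u, desired signal d, tap count M and step size mu are supplied by the caller.

import numpy as np

# Real-valued implementation of the three LMS steps above. e(n) is
# returned alongside the final weights so convergence can be inspected.
def lms(u, d, M, mu):
    w = np.zeros(M)                      # initial tap-weight vector
    e = np.zeros(u.size)
    for n in range(M - 1, u.size):
        x = u[n - M + 1:n + 1][::-1]     # tap inputs [u(n), ..., u(n-M+1)]
        y = w @ x                        # 1) filter output y(n)
        e[n] = d[n] - y                  # 2) estimation error e(n)
        w = w + mu * x * e[n]            # 3) tap-weight adaptation
    return w, e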
Stability of LMS
The LMS algorithm is convergent in the mean square if and only if the step-size parameter satisfies
    0 < \mu < 2 / \lambda_{max}
where \lambda_{max} is the largest eigenvalue of the correlation matrix of the input data.
Since the eigenvalues are rarely known, a more practical test for stability is
    0 < \mu < 2 / (tap-input power)
where the tap-input power is the sum of the mean-square values of the tap inputs.
Larger values of the step size:
increase the adaptation rate (faster adaptation);
increase the residual mean-squared error.
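A small helper (an assumption for illustration, not part of the slides) that applies the practical test: it estimates the tap-input power M * E[u^2] from the data and returns the corresponding upper bound on the step size.

import numpy as np

# Practical step-size check: tap-input power = M * E[u^2], estimated
# from the data; the LMS step size must satisfy 0 < mu < the bound.
def max_stable_mu(u, M):
    tap_input_power = M * np.mean(u ** 2)
    return 2.0 / tap_input_power

u = np.random.default_rng(3).standard_normal(10_000)
print(max_stable_mu(u, M=8))             # mu should stay well below this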
STEEPEST DESCENT EXAMPLE
Given the following function, we need to obtain the vector that gives the absolute minimum:
    y(C_1, C_2) = C_1^2 + C_2^2
It is obvious that C_1 = C_2 = 0 gives the minimum.
(Figure: the quadratic error function, a "quadratic bowl", plotted over the C_1-C_2 plane.)
Now let us find the solution by the steepest descent method.
STEEPEST DESCENT EXAMPLE
We start by assuming (C_1 = 5, C_2 = 7).
We select the constant μ. If it is too big, we miss the minimum. If it is too small, it takes a long time to reach the minimum. Here we select μ = 0.1.
The gradient vector is:
    \nabla y = [dy/dC_1, dy/dC_2]^T = [2C_1, 2C_2]^T
So our iterative equation is:
    [C_1, C_2]^T_{[n+1]} = [C_1, C_2]^T_{[n]} - (\mu/2) \nabla y = (1 - \mu) [C_1, C_2]^T_{[n]} = 0.9 [C_1, C_2]^T_{[n]}
STEEPEST DESCENT EXAMPLE
Iteration 1 (initial guess): C_1 = 5,    C_2 = 7
Iteration 2:                 C_1 = 4.5,  C_2 = 6.3
Iteration 3:                 C_1 = 4.05, C_2 = 5.67
......
Iteration 60:                C_1 ≈ 0.01, C_2 ≈ 0.013
As n → ∞, [C_1, C_2]^T → [0, 0]^T.
(Figure: the iterates descending the contours of the bowl in the C_1-C_2 plane toward the minimum.)
As we can see, the vector [C_1, C_2] converges to the value which yields the function minimum, and the speed of this convergence depends on μ.
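The iteration above can be reproduced in a few lines; with μ = 0.1 and the update C ← C − (μ/2)∇y used here, each step multiplies the vector by 0.9, matching the numbers in the table.

import numpy as np

# Reproduces the table: with mu = 0.1 and C <- C - (mu/2) * grad,
# every iteration multiplies [C1, C2] by (1 - mu) = 0.9.
mu = 0.1
C = np.array([5.0, 7.0])                 # initial guess (iteration 1)
for iteration in range(2, 61):           # iterations 2 .. 60
    grad = 2 * C                         # gradient of C1^2 + C2^2
    C = C - (mu / 2) * grad              # shrink toward the minimum
print(C)                                 # approx. [0.01, 0.014]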
LMS CONVERGENCE GRAPH
Example for an unknown channel of 2nd order.
(Figure: the MSE surface over the two tap weights; the minimum marks the desired combination of taps.)
This graph illustrates the LMS algorithm. First we start by guessing the tap weights. Then we repeatedly step in the direction opposite to the gradient vector to calculate the next taps, and so on, until we reach the MMSE, meaning the MSE is 0 or very close to it. (In practice we cannot get an error of exactly 0, because the noise is a random process; we can only decrease the error below a desired minimum.)
SMART ANTENNAS
Adaptive Array Antenna
(Figure: an adaptive array whose element outputs feed a linear combiner; the combiner weights are adapted to pass the desired signal and reject interference.)
Adaptive Array Antenna
Applications are many:
Digital communications (OFDM, MIMO, CDMA, and RFID)
Channel equalisation
Adaptive noise cancellation
Adaptive echo cancellation
System identification
Smart antenna systems
Blind system equalisation
Adaptive Equalization
Introduction
Wireless communication is the most interesting field of communication these days, because it supports mobility (mobile users). However, many applications of wireless communication now require high-speed (high-data-rate) communications.
What is ISI?
Inter-symbol interference (ISI) takes place when a given transmitted symbol is distorted by other transmitted symbols.
Cause of ISI
ISI is imposed by the band-limiting effect of a practical channel, and also by multi-path effects (delay spread).
Definition of the Equalizer
The equalizer is a digital filter that provides an approximate inverse of the channel frequency response.
Need for equalization
The aim is to mitigate the effects of ISI and so decrease the probability of error that occurs without suppression of ISI; however, this reduction of ISI effects has to be balanced against the accompanying enhancement of the noise power.
Types of Equalization Techniques
Linear equalization techniques:
simple to implement, but they greatly enhance the noise power because they work by inverting the channel frequency response.
Non-linear equalization techniques:
more complex to implement, but they avoid the noise enhancement of direct channel inversion (e.g., decision-feedback equalizers).
Equalization Techniques
Fig. 3: Classification of equalizers.
(Figure: a linear equalizer with N taps and (N-1) delay elements.)
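To connect this structure to the algorithms compared in the table below, here is a hypothetical training-mode sketch: BPSK symbols pass through an assumed two-tap channel that introduces ISI, and an 11-tap LMS equalizer is trained against the known (suitably delayed) transmitted sequence.

import numpy as np

# Hypothetical training-mode equalizer: BPSK through an assumed channel
# [1.0, 0.5], then an N-tap LMS equalizer trained on the known symbols.
rng = np.random.default_rng(4)
N, mu, delay = 11, 0.01, 5
s = rng.choice([-1.0, 1.0], size=20_000)         # known training symbols
r = np.convolve(s, [1.0, 0.5])[:s.size]          # channel output with ISI
r += 0.01 * rng.standard_normal(s.size)          # small additive noise

w = np.zeros(N)
for n in range(N - 1, s.size):
    x = r[n - N + 1:n + 1][::-1]                 # equalizer tap inputs
    e = s[n - delay] - w @ x                     # error vs delayed symbol
    w = w + mu * e * x                           # LMS update

y = np.convolve(r, w)[:s.size]                   # equalized signal
print(np.sum(np.sign(y[delay:]) != s[:-delay]))  # symbol errors, ideally 0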
Table of various algorithms and their trade-offs:

Algorithm    | Multiplying operations | Complexity | Convergence | Tracking
LMS          | 2N + 1                 | Low        | Slow        | Poor
MMSE         | N^2 to N^3             | Very high  | Fast        | Good
RLS          | 2.5N^2 + 4.5N          | High       | Fast        | Good
Fast Kalman  | 20N + 5                | Low        | Fairly fast | Good
RLS-DFE      | 1.5N^2 + 6.5N          | High       | Fast        | Good
Adaptive Noise Cancellation
Adaptive Filter Block Diagram
(Block diagram: the filter input x(n) drives the adaptive filter, producing the filter output y(n); y(n) is subtracted from the desired signal d(n) to give the error output e(n), which is fed back to adapt the filter.)
The LMS Equation
The Least Mean Squares (LMS) algorithm updates each coefficient on a sample-by-sample basis according to the error e(n):
    w_k(n+1) = w_k(n) + \mu e(n) x_k(n)
This equation minimises the power of the error e(n).
The Least Mean Squares Algorithm
The value of μ (mu) is critical:
If μ is too small, the filter reacts slowly.
If μ is too large, the filter resolution is poor.
The selected value of μ is therefore a compromise.
LMS Convergence vs. μ
Audio Noise Reduction
A popular application of acoustic noise reduction is in headsets for pilots. This uses two microphones.
Block Diagram of a Noise Reduction Headset
(Block diagram: the near microphone picks up d(n) = speech + noise; the far microphone picks up x(n) = noise', which feeds the adaptive filter; the filter output y(n), an estimate of the noise, is subtracted from d(n), and the error e(n) is both the speech output and the feedback signal that adapts the filter.)
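A simulation sketch of this diagram, with all signals synthetic: a sinusoid stands in for speech, the far-microphone noise reaches the near microphone through an assumed short acoustic path, and the error e(n) converges to the speech.

import numpy as np

# All-synthetic headset simulation: a sinusoid stands in for speech and
# the acoustic path from the far to the near microphone is an assumed
# short FIR filter. After convergence, e(n) tracks the speech.
rng = np.random.default_rng(5)
N, M, mu = 20_000, 8, 0.005
t = np.arange(N)
speech = np.sin(2 * np.pi * 0.01 * t)            # stand-in for speech
noise = rng.standard_normal(N)                   # far microphone x(n)
noise_at_ear = np.convolve(noise, [0.6, 0.3, -0.2])[:N]
d = speech + noise_at_ear                        # near microphone d(n)

w = np.zeros(M)
e = np.zeros(N)
for n in range(M - 1, N):
    x = noise[n - M + 1:n + 1][::-1]             # reference noise taps
    y = w @ x                                    # noise estimate y(n)
    e[n] = d[n] - y                              # speech output e(n)
    w = w + mu * e[n] * x                        # LMS update

print(np.mean((e[N // 2:] - speech[N // 2:]) ** 2))  # small residual power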
The Simulink Model
Setting the Step Size (mu)
The rate of convergence of the LMS algorithm is controlled by the step size μ. This is the critical variable.
Trace of Input to Model
Input = signal + noise.
Trace of LMS Filter Output
The output starts at zero and grows.
Trace of LMS Filter Error
The error contains the noise.
Typical C6713 DSK Setup
(Photo: the DSK board, showing the USB connection to the PC, the +5 V power supply, and the headphone and microphone connections.)
Adaptive Echo
Cancellation
Acoustic Echo Canceller
New Trends in Adaptive Filtering
Partial updating of weights
Sub-band adaptive filtering
Adaptive Kalman filtering
Affine projection method
Time-space adaptive processing
Non-linear adaptive filtering:
  Neural networks
  The Volterra series algorithm
  Genetic & fuzzy methods
Blind adaptive filtering
Thank You
