
Signal Processing 86 (2006) 1365–1374

State-space recursive least-squares with adaptive memory


Mohammad Bilal Malik

College of Electrical and Mechanical Engineering, National University of Sciences and Technology, Rawalpindi, Pakistan
Tel.: +92 51 9278045. E-mail address: mbmalik@ceme.edu.pk
Received 5 July 2004; received in revised form 12 February 2005; accepted 13 February 2005
Available online 15 March 2006
Abstract
State-space recursive least-squares (SSRLS) enhances the tracking ability of the standard recursive least-squares (RLS) by incorporating the underlying model of the environment. Its overall performance, however, depends on model uncertainty, the presence of external disturbances, the time-varying nature of the observed signal or nonstationary behavior of the observation noise. It turns out that the forgetting factor plays an important role in this context. However, depending on the problem, it may be difficult or even impossible to have a prior estimate of the best value of the forgetting factor. As a logical approach to such situations, SSRLS with adaptive memory (SSRLSWAM) is developed in this paper. This is achieved by stochastic gradient tuning of the forgetting factor. An approximation based on steady-state SSRLS is also derived; the resultant filter alleviates the computational burden of the full-fledged algorithm. An example of tracking a noisy chirp demonstrates the overall capability and power of the new algorithm. It is expected that this new filter will be able to track and estimate time-varying signals that are difficult to handle with the available tools.
© 2006 Elsevier B.V. All rights reserved.
Keywords: State-space RLS; SSRLS; Adaptive memory; Tracking
1. Introduction
The theory and analysis of state-space recursive least-squares (SSRLS) was presented in our prior work [1–4]. The concept was initially introduced by Chun et al. [5]. SSRLS gives the designer the freedom to choose an appropriate signal model; therefore, SSRLS exhibits superior tracking characteristics compared to the standard RLS [6–8]. Both filters bear a certain resemblance to the Kalman filter [3,9–13]. The performance of SSRLS, however, depends on model uncertainty, the presence of unknown external disturbances, the time-varying nature of the observed signal and/or nonstationary behavior of the observation noise. It turns out that the forgetting factor plays an important role in this context. However, depending on the problem, it may be difficult or even impossible to have a prior estimate of the best value of the forgetting factor. As a logical approach to such situations, we develop SSRLS with adaptive memory (SSRLSWAM) in this paper. This algorithm was initially proposed in our earlier paper [14]. Half of the mean-square value of the prediction error [1,3] is chosen as the cost function to be minimized, and the minimization is achieved by stochastic gradient [8] tuning of the forgetting factor. Motivated by somewhat similar reasons, the idea of adaptive memory in the standard RLS was developed earlier and is well known [8,15].
We begin with a brief account of SSRLS that is necessary for further development; a reader would also get a fairly good idea of the theory formally presented in [1,3]. The key issues discussed are the signal model, the recursive formulae, and steady-state SSRLS. A brief discussion of the behavior of SSRLS in the presence of model uncertainty and/or unknown external disturbances follows. The results suggest the existence of a best forgetting factor that minimizes the steady-state mean square prediction error.
SSRLSWAM is powerful, flexible and versatile, but requires somewhat intensive computations. Real-time applicability of algorithms always depends on their numerical simplicity. With this constraint in mind, we develop a computationally efficient approximate solution of the actual filter. This algorithm is based on the steady-state solution of SSRLS [1,3] and hence is termed steady-state SSRLS with adaptive memory (S4RLSWAM). As a special case we discuss the constant acceleration model. Besides being important in itself, this discussion provides a guideline to the designer for other problems as well. A discussion on initialization of SSRLSWAM suggests that the method of delayed recursion [3] is the preferred choice.
A complete section is dedicated to the computational aspects of the different algorithms under discussion. An example of tracking a noisy chirp concludes the paper. The example compares the performances of the different algorithms and provides a good perspective on the potential of the new algorithm.
2. Preview of SSRLS
The SSRLS algorithm was presented in [3]. Since the present development is a continuation of our previous work, we reproduce the important and relevant information very briefly.
2.1. Signal model and SSRLS filter
Consider the following unforced discrete state-space system:
$x_{k+1} = A x_k,$
$y_k = C x_k + v_k,$  (2.1)
where $x \in \mathbb{R}^n$ is the state vector, $y \in \mathbb{R}^m$ is the output vector (which is the observed random process), and $v_k$ is the observation noise vector. We assume that the pair $(A, C)$ is $l$-step observable [16] with an invertible state transition matrix $A$. The unforced nature of (2.1) is particularly useful if the signal is modeled such that $A$ is neutrally stable (i.e. all of its eigenvalues lie exactly on the unit circle) [1,3]. According to SSRLS, the state estimate $\hat{x}_k$ is given by
$\hat{x}_k = \bar{x}_k + K_k \varepsilon_k,$  (2.2)
where
$\bar{x}_k = A \hat{x}_{k-1}$  (2.3)
is the predicted state estimate. The observer gain $K_k$ is determined according to the method of least-squares. The prediction error, which is also referred to as the innovations, is defined as
$\varepsilon_k = y_k - \bar{y}_k$  (2.4)
with
$\bar{y}_k = C \bar{x}_k$  (2.5)
as the predicted output. We assume that the observations $y_k$ start appearing at time $k = 1$. In order to start the recursion we need knowledge of certain quantities at time $k = 0$: an initial estimate of the states $\hat{x}_0$ and an initial observer gain $K_0$. A discussion on initialization is given in Sections 2.1.4 and 4.1.
2.1.1. Riccati equation of SSRLS
The Riccati equation for SSRLS is given as follows:
$P_k = \lambda^{-1} A P_{k-1} A^T - \lambda^{-2} A P_{k-1} A^T C^T \left[ I + \lambda^{-1} C A P_{k-1} A^T C^T \right]^{-1} C A P_{k-1} A^T,$  (2.6)
where $P_k$ is an $n \times n$ matrix and $\lambda$ is the forgetting factor. $P_0$ is the initial condition, which should preferably be positive definite [3].
2.1.2. Observer gain
The following expression for the observer gain makes (2.2) an SSRLS estimator with forgetting factor $\lambda$:
$K_k = \lambda^{-1} A P_{k-1} A^T C^T \left[ I + \lambda^{-1} C A P_{k-1} A^T C^T \right]^{-1}.$  (2.7)
From (2.6) and (2.7) we get
$K_k = P_k C^T.$  (2.8)
When the observer gain is given by (2.8), we term the filter SSRLS form I [3]. The complete algorithm is given by (2.2)–(2.6) and (2.8).
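For concreteness, a minimal numpy sketch of one SSRLS form I iteration is given below; the function and variable names are ours, not from the paper, and the pair $(\hat{x}_k, P_k)$ is carried between calls (column-vector convention: $\hat{x}$ is $n \times 1$, $y$ is $m \times 1$).

```python
import numpy as np

def ssrls_form1_step(x_hat, P, y, A, C, lam):
    """One SSRLS form I iteration: Riccati update (2.6), gain (2.8),
    prediction (2.3)-(2.5) and correction (2.2)."""
    m = C.shape[0]
    APA = A @ P @ A.T / lam                          # lambda^{-1} A P A^T
    # Riccati update (2.6), grouped via the matrix inversion lemma
    G = APA @ C.T @ np.linalg.inv(np.eye(m) + C @ APA @ C.T)
    P = APA - G @ C @ APA
    K = P @ C.T                                      # observer gain (2.8)
    x_bar = A @ x_hat                                # predicted state (2.3)
    eps = y - C @ x_bar                              # innovations (2.4), (2.5)
    return x_bar + K @ eps, P, eps                   # state estimate (2.2)
```

Note that, by the equivalence of (2.7) and (2.8), the intermediate matrix G above already equals the gain $K_k$; it is recomputed through (2.8) only to mirror the text.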
2.1.3. Recursive update of $\Phi_k$
Define $\Phi_k = P_k^{-1}$. The equation for the recursive update of $\Phi_k$ is
$\Phi_k = \lambda A^{-T} \Phi_{k-1} A^{-1} + C^T C.$  (2.9)
The observer gain can now be written as
$K_k = \Phi_k^{-1} C^T.$  (2.10)
The introduction of $\Phi_k$ gives SSRLS form II, which consists of (2.2)–(2.5), (2.9) and (2.10). Although mathematically equivalent, the computational efforts of the two forms of SSRLS are different [3].
2.1.4. Initialization
SSRLS can be initialized by a regularization term or by delayed recursion [3]. The latter method is the preferred choice, as it provides superior convergence properties and also provides an initial estimate of the states $\hat{x}_0$.
2.1.5. Memory length
The concept of forgetting the distant past amounts to saying that the filter memorizes a certain amount of recent data; any data before that period are forgotten. The length of the filter memory can be approximated by
$\text{Length of memory} \approx \dfrac{1}{1 - \lambda}.$  (2.11)
The expression uses the fact that $1 + \lambda + \lambda^2 + \cdots = 1/(1 - \lambda)$.
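For example, $\lambda = 0.99$ corresponds to an effective memory of roughly $1/(1 - 0.99) = 100$ samples, whereas $\lambda = 0.9$ shortens the memory to about 10 samples.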
2.2. Steady-state SSRLS
SSRLS is a time-varying filter and is computationally intensive. In this section we discuss its steady-state solution. The time-varying observer gain (2.8) (or (2.10)) settles down asymptotically to a unique and well-behaved value if
$\sqrt{\lambda} < \min |\text{eigenvalues}(A)|.$  (2.12)
The expression for the steady-state observer gain is
$K = \Phi^{-1} C^T,$  (2.13)
where $\Phi$ is a solution of the following algebraic Lyapunov equation:
$\Phi = \lambda A^{-T} \Phi A^{-1} + C^T C.$  (2.14)
$\Phi$ is given by
$\Phi = \sum_{k=0}^{\infty} \lambda^k (A^{-T})^k C^T C (A^{-1})^k.$  (2.15)
For a neutrally stable system the stability condition (2.12) translates to
$\lambda < 1.$  (2.16)
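As an illustration, the steady-state quantities can be obtained numerically by iterating (2.14) to a fixed point, which amounts to accumulating the series (2.15). The sketch below uses our own naming and assumes condition (2.12) holds.

```python
import numpy as np

def steady_state_gain(A, C, lam, tol=1e-12, max_iter=100_000):
    """Steady-state SSRLS gain K = Phi^{-1} C^T of (2.13), with Phi found
    by fixed-point iteration of the Lyapunov equation (2.14). The iteration
    converges when sqrt(lam) < min |eig(A)|, i.e. condition (2.12)."""
    A_inv = np.linalg.inv(A)
    Phi = C.T @ C                                    # k = 0 term of (2.15)
    for _ in range(max_iter):
        Phi_next = lam * A_inv.T @ Phi @ A_inv + C.T @ C
        converged = np.linalg.norm(Phi_next - Phi) < tol
        Phi = Phi_next
        if converged:
            break
    return np.linalg.solve(Phi, C.T)                 # K = Phi^{-1} C^T
```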
3. Model uncertainty and unknown external disturbances
If the system model is not exactly known, the estimation performance of SSRLS may deteriorate. The situation worsens in the presence of unknown external disturbances. In order to investigate the behavior of SSRLS under such conditions, we assume that the underlying model of the environment is as follows:
$x_{k+1} = A' x_k + B' w_k,$
$y_k = C' x_k + v_k,$  (3.1)
where $w_k$ is a bounded deterministic disturbance, $A'$ is the state transition matrix, $B'$ is the input matrix and $C'$ is the observation matrix. To avoid unnecessary complications, the pair $(A', B')$ is assumed to be controllable. $v_k$ is the observation noise, which is assumed to be a zero-mean process, i.e.
$E[v_k] = 0,$  (3.2)
and white with
$E[v_k v_j^T] = \begin{cases} \sigma_v^2 I & \text{if } k = j, \\ 0 & \text{otherwise}, \end{cases}$  (3.3)
where $E$ is the expectation operator. We also assume that the states $x_k$ are bounded for $k > 0$. This is a reasonable assumption if (3.1) is used to model a physical signal. The nominal model used for state estimation remains (2.1). This framework can be construed as fitting (2.1) to the observed data according to the method of weighted least-squares, whereas the observations actually originate from (3.1). The total number of samples taken for this fit is mathematically equal to the memory depth of SSRLS. Our focus will be to investigate the asymptotic behavior of the SSRLS estimator under the above stated conditions. As SSRLS converges to steady-state SSRLS [3,4] for $0 < \lambda < 1$, it is sufficient to restrict our discussion to the latter. This simplifies the requisite analysis [1,17]. We begin by defining the perturbation matrices as follows:
$A_\delta = A - A', \qquad C_\delta = C - C'.$  (3.4)
In this case, the estimation error can be written as
$e_k = x_k - \hat{x}_k = F e_{k-1} - K v_k + d_{k-1},$  (3.5)
where we have defined
$F = A - KCA,$
$d_k = F_\delta x_k + B' w_k,$
$F_\delta = A_\delta - K(C A_\delta + C_\delta A - C_\delta A_\delta).$  (3.6)
As the perturbation term $d_k$ is not a function of the estimated states $\hat{x}_k$, it is completely deterministic. Therefore (3.5) has two inputs: one is deterministic and the other is completely random (i.e. white). We proceed with a discussion of the steady-state mean estimation error.
3.1. Steady-state mean estimation error
Invoking assumption (3.2), we get the following relation from Eq. (3.5):
$E[e_k] = F E[e_{k-1}] + d_{k-1}.$  (3.7)
We may view (3.7) as a discrete system with $d$ as the input and $e$ as the state.
Theorem 3.1. Assume that the states $x_k$ are bounded for $k > 0$ and that (3.2) holds. If SSRLS is based on (2.1) and the actual model of the underlying environment is (3.1), then the steady-state mean error is bounded as follows:
$\lim_{k \to \infty} \| E[e_k] \| \le \dfrac{\alpha \beta}{1 - \lambda}, \qquad 0 < \lambda < 1,$  (3.8)
where $\alpha$ is a positive constant and $\beta$ is defined as the supremum of $d_k$ as follows:
$\beta = \sup_{0 \le j < k} \| d_j \| \le \| F_\delta \| \sup_{0 \le j < k} \| x_j \| + \| B' \| \sup_{0 \le j < k} \| w_j \|.$  (3.9)
Proof. See [1,17]. Note that the finiteness of the suprema in (3.9) is guaranteed by the boundedness assumption on the states $x_k$. □
Remarks. The effect of observation noise on the mean estimation error is eliminated by the zero-mean assumption. The bound (3.8) on the mean steady-state error increases with the memory of the filter and vice versa. This fact is in accordance with high-gain observer theory [18], which makes use of wide-bandwidth filters in order to minimize the effect of model uncertainties.
3.1.1. Steady-state mean square error
The estimation error correlation matrix is given as
$R_k = E[e_k e_k^T].$  (3.10)
From Eq. (3.5) and assumptions (3.2) and (3.3), we get
$R_k = F R_{k-1} F^T + \sigma_v^2 K K^T + D_{k-1},$  (3.11)
where
$D_k = d_k d_k^T + F E[e_k] d_k^T + d_k E[e_k^T] F^T.$  (3.12)
It has been emphasized in [1–3] that a neutrally stable system matrix $A$ plays an important role in the realm of SSRLS filtering. If we restrict our attention to such cases, it is possible to find meaningful closed-form bounds on the steady-state mean square error. Without much loss of generality, we can ignore the effect of $C_\delta$. Under these conditions, the following theorem summarizes the results.
Theorem 3.2. Under the stated conditions of Theorem 3.1 and assumptions (3.2) and (3.3), the asymptotic bound on the mean square estimation error correlation matrix for a neutrally stable system matrix $A$ is given as
$\lim_{k \to \infty} \| R_k \| \le \alpha (1 - \lambda) \sigma_v^2 + \left( \dfrac{\gamma}{1 - \lambda} + \dfrac{\eta}{(1 - \lambda)^2} \right) \| A_\delta \|^2,$  (3.13)
where $\alpha$, $\gamma$ and $\eta$ are positive constants.
Proof. See [1]. Proving this theorem is a formidable task; the proof is planned to appear in another paper [17]. □
Remarks. The bound (3.13) on the steady-state mean square error provides valuable information about the second-order statistical behavior of SSRLS in the presence of both observation noise and model uncertainty. The first term on the right-hand side of (3.13) is directly proportional to the variance of the observation noise. Increasing the memory of the filter (making $\lambda$ close to unity) enhances the averaging or filtering action of SSRLS, which results in better noise suppression. On the other hand, the second term on the right-hand side of (3.13) is directly proportional to $\| A_\delta \|^2$, the squared norm of the matrix representing the model mismatch. The contribution of this term can be curtailed by decreasing $\lambda$. While this is a familiar concept in curve fitting [19], it is also reminiscent of high-gain observer theory, which overcomes the impact of model uncertainty using wide-band filters [18]. Finally, the form of (3.13) indicates the possible existence of some $\lambda$ that would result in the minimum steady-state mean square error. However, due to the uncertain nature of the problem, it is not possible to have prior knowledge of the best forgetting factor. The problem is further complicated in a time-varying environment [2]. This situation calls for adaptive tuning of the forgetting factor so as to minimize some cost function. We set this function to be half of the learning curve [3] of SSRLS.
4. SSRLS with adaptive memory (SSRLSWAM)
Our objective is to tune the forgetting factor $\lambda$ so as to minimize the cost function
$J_k = \tfrac{1}{2} E[\varepsilon_k^T \varepsilon_k],$  (4.1)
where $E$ is the expectation operator and $\varepsilon_k$ is the prediction error defined in (2.4). Differentiating $J_k$ with respect to $\lambda$ gives
$\nabla_\lambda(k) = \dfrac{\partial J_k}{\partial \lambda} = E\left[ \dfrac{\partial \varepsilon_k^T}{\partial \lambda} \varepsilon_k \right],$  (4.2)
where $\partial \varepsilon_k^T / \partial \lambda$ is a row vector. Define
$\psi_k = \dfrac{\partial \hat{x}_k}{\partial \lambda}.$  (4.3)
We get
$\dfrac{\partial \varepsilon_k}{\partial \lambda} = \dfrac{\partial}{\partial \lambda} \left[ y_k - C A \hat{x}_{k-1} \right] = -C A \psi_{k-1},$  (4.4)
which implies that
$\nabla_\lambda(k) = -E[\psi_{k-1}^T A^T C^T \varepsilon_k].$  (4.5)
Using (2.8), we can rewrite the SSRLS estimator (2.2) as
$\hat{x}_k = \bar{x}_k + P_k C^T \varepsilon_k.$  (4.6)
Let
$S_k = \dfrac{\partial P_k}{\partial \lambda}.$  (4.7)
Differentiating (4.6) with respect to $\lambda$ and using (4.4), (4.5) and (4.7), we get
$\psi_k = (A - K_k C A) \psi_{k-1} + S_k C^T \varepsilon_k.$  (4.8)
Proposition 4.1 (Wilson [16]). If $X(\lambda)$ is a continuously differentiable invertible matrix for $0 < \lambda \le 1$, then
$\dfrac{\partial X^{-1}(\lambda)}{\partial \lambda} = -X^{-1}(\lambda) \dfrac{\partial X(\lambda)}{\partial \lambda} X^{-1}(\lambda),$  (4.9)
which follows from differentiating the identity $X(\lambda) X^{-1}(\lambda) = I$.
Differentiating (2.6) with respect to $\lambda$ and making use of Proposition 4.1, we arrive at
$S_k = \lambda^{-1} (I - K_k C) A S_{k-1} A^T (I - C^T K_k^T) - \lambda^{-1} P_k + \lambda^{-1} K_k K_k^T.$  (4.10)
In order to formulate SSRLSWAM, we make the forgetting factor a function of time. The stochastic gradient method [8] updates $\lambda_k$ as follows:
$\lambda_k = \lambda_{k-1} - a \nabla_\lambda(k),$  (4.11)
where $a$ is a small positive learning-rate parameter. Based on (4.5), an instantaneous estimate of the scalar gradient $\nabla_\lambda(k)$ can be taken as
$\hat{\nabla}_\lambda(k) = -\psi_{k-1}^T A^T C^T \varepsilon_k,$  (4.12)
which modifies (4.11) into
$\lambda_k = \lambda_{k-1} - a \hat{\nabla}_\lambda(k) = \lambda_{k-1} + a \psi_{k-1}^T A^T C^T \varepsilon_k.$  (4.13)
We can summarize the complete SSRLSWAM algorithm as follows:
$K_k = \lambda_{k-1}^{-1} A P_{k-1} A^T C^T \left[ I + \lambda_{k-1}^{-1} C A P_{k-1} A^T C^T \right]^{-1},$
$\varepsilon_k = y_k - C A \hat{x}_{k-1},$
$\hat{x}_k = A \hat{x}_{k-1} + K_k \varepsilon_k,$
$P_k = \lambda_{k-1}^{-1} A P_{k-1} A^T - \lambda_{k-1}^{-1} K_k C A P_{k-1} A^T,$
$\lambda_k = \left[ \lambda_{k-1} + a \psi_{k-1}^T A^T C^T \varepsilon_k \right]_{\lambda^-}^{\lambda^+},$
$S_k = \lambda_k^{-1} (I - K_k C) A S_{k-1} A^T (I - C^T K_k^T) - \lambda_k^{-1} P_k + \lambda_k^{-1} K_k K_k^T,$
$\psi_k = (A - K_k C A) \psi_{k-1} + S_k C^T \varepsilon_k.$  (4.14)
For this algorithm to be meaningful we require $0 < \lambda_k \le 1$. The bracket with limits $\lambda^+$ and $\lambda^-$ in the update of $\lambda_k$ in (4.14) indicates truncation that restricts the forgetting factor to the interval $[\lambda^-, \lambda^+]$. The upper limit is generally set close to unity, whereas the lower limit is determined by the user through experimentation [8].
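To make the recursion concrete, here is a minimal numpy sketch of one SSRLSWAM iteration following (4.14); the naming is ours, and $\hat{x}$, $P$, $S$, $\psi$ and $\lambda$ are carried between calls.

```python
import numpy as np

def ssrlswam_step(x_hat, P, S, psi, lam, y, A, C, a, lam_lo, lam_hi):
    """One SSRLSWAM iteration, following (4.14) line by line.
    Column-vector convention: x_hat and psi are n x 1; y is m x 1."""
    n, m = A.shape[0], C.shape[0]
    APA = A @ P @ A.T / lam                       # lambda_{k-1}^{-1} A P A^T
    K = APA @ C.T @ np.linalg.inv(np.eye(m) + C @ APA @ C.T)
    eps = y - C @ A @ x_hat                       # innovations
    x_hat = A @ x_hat + K @ eps                   # state update
    P = APA - K @ C @ APA                         # Riccati update
    # stochastic-gradient update of lambda, truncated to [lam_lo, lam_hi]
    lam = float(np.clip(lam + a * (psi.T @ A.T @ C.T @ eps).item(),
                        lam_lo, lam_hi))
    IKC = np.eye(n) - K @ C
    S = (IKC @ A @ S @ A.T @ IKC.T - P + K @ K.T) / lam
    psi = (A - K @ C @ A) @ psi + S @ C.T @ eps   # sensitivity d(x_hat)/d(lambda)
    return x_hat, P, S, psi, lam
```

Note that, as in (4.14), $P_k$ is formed with $\lambda_{k-1}$ while $S_k$ uses the already updated $\lambda_k$.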
4.1. Initialization
Initialization of SSRLSWAM is more sensitive than that of the standard SSRLS. Improper initialization could lead to large peaking [18,20], which in turn could adversely affect the initial tuning of the forgetting factor. Therefore, the method of delayed recursion (Section 2.1.4) is highly recommended over initialization by a regularization term. The former method gives suitable initial values of $\hat{x}_0$ and $P_0$. The recursion can be started with some reasonable estimate of $\lambda_0$, whereas $\psi_0 = 0$ is a good choice.
5. Approximate solution
It is apparent from (4.14) that SSRLSWAM is computationally intensive. We therefore develop an approximation to this algorithm based on steady-state SSRLS. The algorithm is termed steady-state state-space recursive least-squares with adaptive memory (S4RLSWAM). The resultant filter remains time-varying, because $\lambda_k$ has to be a function of time in order to have adaptive memory. On the other hand, steady-state SSRLS is a time-invariant filter if the system (2.1) is also time invariant [3]. It is convenient to denote certain quantities as functions of $\lambda$ rather than of time, as will become apparent in the upcoming discussion. Rewrite Eq. (2.14) as
$\Phi = \lambda_k A^{-T} \Phi A^{-1} + C^T C.$  (5.1)
We can solve this equation to find $\Phi(\lambda_k) = \Phi(\lambda)$, or alternately $P(\lambda) = \Phi^{-1}(\lambda)$. This enables us to calculate $S_k = S(\lambda) = \partial P(\lambda)/\partial \lambda$ offline. Keeping these considerations in view, the net algorithm becomes
$\varepsilon_k = y_k - C A \hat{x}_{k-1},$
$\lambda_k = \left[ \lambda_{k-1} + a \psi_{k-1}^T A^T C^T \varepsilon_k \right]_{\lambda^-}^{\lambda^+},$
$P_k = P(\lambda_k),$
$S_k = S(\lambda_k),$
$K_k = K(\lambda_k) = P(\lambda_k) C^T = \Phi^{-1}(\lambda_k) C^T,$
$\hat{x}_k = A \hat{x}_{k-1} + K_k \varepsilon_k,$
$\psi_k = (A - K_k C A) \psi_{k-1} + S_k C^T \varepsilon_k,$  (5.2)
where the closed form for $K_k$ is based on (2.8). One significant change from (4.14) is the order of the equations: here we can evaluate $P(\lambda_k)$ without prior knowledge of $K_k$, which results in a considerable simplification of the computations. A closer look at (5.2) shows that, in order to implement the algorithm, we only need expressions for $K(\lambda)$, $A - K(\lambda) C A$ and $S(\lambda) C^T$. Furthermore, all of these expressions can be calculated (symbolically) offline for a specific case. Let us define
$F(\lambda) = A - K(\lambda) C A,$
$G(\lambda) = S(\lambda) C^T.$  (5.3)
Now we can rewrite algorithm (5.2) as follows:
$\varepsilon_k = y_k - C A \hat{x}_{k-1},$
$\lambda_k = \left[ \lambda_{k-1} + a \psi_{k-1}^T A^T C^T \varepsilon_k \right]_{\lambda^-}^{\lambda^+},$
$\hat{x}_k = A \hat{x}_{k-1} + K(\lambda_k) \varepsilon_k,$
$\psi_k = F(\lambda_k) \psi_{k-1} + G(\lambda_k) \varepsilon_k.$  (5.4)
The simplicity of (5.4) is apparent from its form. The order of the equations in (5.4) is important, as $K(\lambda)$, $F(\lambda)$ and $G(\lambda)$ all use the updated value of $\lambda$ obtained from the second equation of (5.4). We follow our development by discussing a special case.
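Before doing so, we note how little computation (5.4) actually requires. A minimal sketch of one S4RLSWAM iteration follows (our naming, same column-vector convention as the earlier sketches); $K$, $F$ and $G$ are supplied as precomputed offline maps of $\lambda$.

```python
import numpy as np

def s4rlswam_step(x_hat, psi, lam, y, A, C,
                  K_fn, F_fn, G_fn, a, lam_lo, lam_hi):
    """One S4RLSWAM iteration (5.4). K_fn, F_fn and G_fn are the offline,
    closed-form maps lambda -> K(lambda), F(lambda), G(lambda) of (5.3)."""
    eps = y - C @ A @ x_hat                       # innovations
    lam = float(np.clip(lam + a * (psi.T @ A.T @ C.T @ eps).item(),
                        lam_lo, lam_hi))          # truncated lambda update
    x_hat = A @ x_hat + K_fn(lam) @ eps           # state update
    psi = F_fn(lam) @ psi + G_fn(lam) @ eps       # sensitivity update
    return x_hat, psi, lam
```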
5.1. A special case (constant acceleration)
The discrete-time equivalent of the constant acceleration model [3,21] is obtained by evaluating the state transition matrix over one sampling period $T$. The state-space matrices are
$A = \begin{bmatrix} 1 & T & T^2/2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix}, \qquad C = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}.$  (5.5)
Solving the Lyapunov equation (2.14) and using the expression for $K_k$ in (5.2), we can calculate
$K(\lambda) = \begin{bmatrix} 1 - \lambda^3 \\ \dfrac{3(1-\lambda)^2(1+\lambda)}{2T} \\ \dfrac{(1-\lambda)^3}{T^2} \end{bmatrix}.$  (5.6)
Similarly, from (5.3) we get
$F(\lambda) = \begin{bmatrix} \lambda^3 & T\lambda^3 & \dfrac{T^2 \lambda^3}{2} \\ -\dfrac{3(1-\lambda)^2(1+\lambda)}{2T} & 1 - \dfrac{3(1-\lambda)^2(1+\lambda)}{2} & T - \dfrac{3T(1-\lambda)^2(1+\lambda)}{4} \\ -\dfrac{(1-\lambda)^3}{T^2} & -\dfrac{(1-\lambda)^3}{T} & 1 - \dfrac{(1-\lambda)^3}{2} \end{bmatrix}$  (5.7)
and
$G(\lambda) = -3 \begin{bmatrix} \lambda^2 \\ \dfrac{(1-\lambda)(1+3\lambda)}{2T} \\ \dfrac{(1-\lambda)^2}{T^2} \end{bmatrix}.$  (5.8)
Expressions (5.6)–(5.8), when incorporated in (5.4), completely specify S4RLSWAM when the signal is modeled as having constant acceleration.
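A direct transcription of (5.5)–(5.8) might read as follows (function names are ours). Since $K(\lambda) = P(\lambda) C^T$, the vector $G(\lambda) = S(\lambda) C^T$ is simply $\partial K(\lambda)/\partial \lambda$, which provides a quick consistency check on (5.6) and (5.8).

```python
import numpy as np

def ca_model(T):
    """Constant acceleration model (5.5)."""
    A = np.array([[1.0, T, T**2 / 2],
                  [0.0, 1.0, T],
                  [0.0, 0.0, 1.0]])
    C = np.array([[1.0, 0.0, 0.0]])
    return A, C

def K_ca(lam, T):
    """Steady-state gain (5.6), as a 3 x 1 column."""
    return np.array([[1 - lam**3],
                     [3 * (1 - lam)**2 * (1 + lam) / (2 * T)],
                     [(1 - lam)**3 / T**2]])

def G_ca(lam, T):
    """G(lambda) = S(lambda) C^T of (5.8); equals dK/dlambda."""
    return -3 * np.array([[lam**2],
                          [(1 - lam) * (1 + 3 * lam) / (2 * T)],
                          [(1 - lam)**2 / T**2]])

def F_ca(lam, T):
    """F(lambda) = A - K(lambda) C A of (5.7), built from (5.5) and (5.6)."""
    A, C = ca_model(T)
    return A - K_ca(lam, T) @ (C @ A)
```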
6. Computational complexity
The computational complexities of the standard adaptive filters are listed in Table 1, whereas the corresponding results for variants of SSRLSWAM are summarized in Table 2. A comparison of the two shows that SSRLS and SSRLSWAM are both $O(n^3)$. However, the latter roughly requires three to four times more computations than the standard algorithm. This also becomes apparent from (4.14), whose last two equations are computationally intensive and are not required by the standard SSRLS. The S4RLSWAM algorithm only offers a case-specific solution, as discussed in Section 5. In order to compare SSRLSWAM and S4RLSWAM, we choose the constant acceleration model. Table 2 shows that the alleviation of the computational burden by S4RLSWAM is significant. Further discussion on this aspect will appear in Section 7.1.
Table 1
Computational complexities of standard algorithms

S. no. | Filter type | Multiplications and additions/subtractions | Divisions
1 | SSRLS Riccati equation (2.6) | $4n^3 + 4n^2 m + 4nm^2 + \tfrac{2}{3}m^3 + O(n^2) + O(m^2) + O(nm)$ | $m^2/2 + O(m)$
2 | Matrix difference equation (2.9) | $4n^3$ | 0
3 | Gauss–Jordan inversion of $n \times n$ matrix | $\tfrac{2}{3}n^3 + O(n^2)$ | $n^2/2 + O(n)$
4 | SSRLS (form I) | $4n^3 + 4n^2 m + 4nm^2 + \tfrac{2}{3}m^3 + O(n^2) + O(m^2) + O(nm)$ | $m^2/2 + O(m)$
5 | SSRLS (form II) | $4\tfrac{2}{3}n^3 + 2n^2 m + O(n^2) + O(nm)$ | $n^2/2 + O(n)$
6 | Steady-state SSRLS | $2n^2 + 4nm + O(n) + O(m)$ | 0
7 | SSRLS (form I), $m = 1$ | $4n^3 + O(n^2)$ | 1
8 | SSRLS (form II), $m = 1$ | $4\tfrac{2}{3}n^3 + O(n^2)$ | $n^2/2 + O(n)$
9 | Steady-state SSRLS, $m = 1$ | $2n^2 + O(n)$ | 0
10 | RLS | $5n^2 + O(n)$ | 1
11 | LMS | $4n + 1$ | 0
12 | NLMS | $6n$ | 1
Both tables represent worst-case situations. There could be case-specific simplifications, which may include smart manipulation of formulas and expressions. There are certain faster schemes, like the fast transversal filter, that may improve computational efficiency but may face problems of numerical stability [8]. We restrict the scope of the discussion to the basic filters in this paper, as this serves the purpose of comparing the different algorithms. As a final comment, a change of variables may prove to be a useful tool in this context. Some minor errors in [3] have also been corrected in Table 1.
7. Example of tracking a noisy chirp
In order to illustrate the capabilities of SSRLSWAM, we track a noisy chirp. This example is of historical importance and has served as a kind of benchmark in evaluating the tracking performance of algorithms [8,11]. We make the problem more difficult by assuming that the signal model is not available for the estimator design. Under such conditions (signal model completely unknown), polynomial signal models like the constant acceleration model (Section 5.1) are usually a good choice [1,3]. This essentially becomes a problem of fitting a polynomial to the observed data [19]. The suggested framework results in an obvious model mismatch. However, it is expected that SSRLSWAM will partly compensate for the model uncertainty by adapting the forgetting factor (refer to Section 3).
The chirp signal to be estimated is
$y(t) = \sin(0.0001 t^2 + \pi/3).$  (7.1)
We observe this signal in the discrete domain after sampling with sampling time $T = 0.1$ s. White noise of variance 0.1 corrupts the observations. The learning-rate parameter is chosen to be $a = 0.0005$. The simulations are performed for SSRLSWAM (algorithm (4.14)) and S4RLSWAM (algorithm (5.4) with (5.6)–(5.8)).
Table 2
Computational complexities of variants of SSRLSWAM algorithms

S. no. | Filter type | Multiplications and additions/subtractions | Divisions
1 | SSRLSWAM | $16n^3 + 11n^2 m + 4nm^2 + \tfrac{2}{3}m^3 + O(n^2) + O(m^2) + O(nm)$ | $m^2/2 + O(m)$
2 | SSRLSWAM, $m = 1$ | $16n^3 + O(n^2)$ | $m^2/2 + O(m)$
3 | SSRLSWAM (constant acceleration model) | 308 (195 multiplications, 113 additions/subtractions) | 3
4 | S4RLSWAM (constant acceleration model) | 76 (50 multiplications, 26 additions/subtractions) | 0
Fig. 1. Performance of SSRLSWAM.
The former algorithm is initialized both by the method of delayed recursion and by a regularization term, whereas the latter is initialized by the method of delayed recursion with a zero initial condition. Keeping in view the model uncertainty and the large noise variance, 25 samples are reserved for the initialization phase. We also set $\psi_0 = 0$ and $\lambda_0 = 0.98$. The estimation errors are illustrated in Fig. 1, and the adaptation of the forgetting factor is shown in Fig. 2. In order to signify the importance of adaptive memory, we also perform the simulations for SSRLS with fixed memory; the performances for different forgetting factors are illustrated in Fig. 3. The method of delayed recursion is used in all cases.
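For the reader who wishes to reproduce the flavor of this experiment, the sketch below runs S4RLSWAM on the noisy chirp, reusing ca_model, K_ca, F_ca, G_ca and s4rlswam_step from the earlier code sketches. The truncation limits lam_lo and lam_hi and the zero initial state are our assumptions: the paper leaves the limits to the user and initializes by delayed recursion instead.

```python
import numpy as np

T, a = 0.1, 0.0005                # sampling period and learning rate (Section 7)
lam_lo, lam_hi = 0.8, 0.9999      # assumed truncation interval [lam^-, lam^+]
N = 20_000
rng = np.random.default_rng(0)

t = np.arange(N) * T
clean = np.sin(0.0001 * t**2 + np.pi / 3)            # chirp (7.1)
y = clean + rng.normal(scale=np.sqrt(0.1), size=N)   # noise variance 0.1

A, C = ca_model(T)                # constant acceleration model (5.5)
x_hat = np.zeros((3, 1))          # simplification: zero initial state
psi = np.zeros((3, 1))
lam = 0.98                        # lambda_0 as in the paper

est = np.empty(N)
for k in range(N):
    x_hat, psi, lam = s4rlswam_step(
        x_hat, psi, lam, np.array([[y[k]]]), A, C,
        lambda l: K_ca(l, T), lambda l: F_ca(l, T), lambda l: G_ca(l, T),
        a, lam_lo, lam_hi)
    est[k] = x_hat[0, 0]          # position estimate tracks the chirp

print("RMS error over second half:",
      np.sqrt(np.mean((est[N // 2:] - clean[N // 2:]) ** 2)))
```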
7.1. Comments
The constant acceleration model only approximates a chirp. Nevertheless, the estimation performance depicted in Fig. 1 is a fine result: good tracking is achieved, and the noise filtration is reasonable. As the frequency of the signal increases, the model mismatch increases; the algorithm accordingly adapts the forgetting factor $\lambda$, as illustrated in Fig. 2. As expected, the algorithms initialized by the method of delayed recursion show a better transient response. An interesting observation is that the tuning of the forgetting factor appears to be smoother in S4RLSWAM than in SSRLSWAM.

Fig. 2. Tuning of forgetting factor.

Fig. 3. Performance of SSRLS with fixed memory.

The performance of fixed-memory SSRLS is illustrated in Fig. 3. High-gain observers are known to overcome model uncertainty but perform poorly in the presence of observation noise; the same is seen in the first plot of Fig. 3. On the other hand, the Kalman filter (SSRLS with $\lambda = 1$) optimally filters out the observation noise but is critically sensitive to model uncertainty, as illustrated in the last plot of Fig. 3. The second and third plots offer a compromise between the two extreme choices. However, the performance of SSRLS for any one fixed forgetting factor is not satisfactory, due to the time-varying nature of the model uncertainty.
Keeping in view both the tracking performance of SSRLSWAM (initialized by delayed recursion) and its numerical complexity (Section 6), we can assert that it is the filter of choice amongst the ones discussed in this paper.
8. Conclusion
In this paper we have developed SSRLS with adaptive memory (SSRLSWAM). This new dimension adds versatility to SSRLS, which is already a valuable tool in estimation theory and adaptive filtering. The mathematical derivation of SSRLSWAM is a logical combination of two independent schemes, viz. SSRLS and the stochastic gradient method. The result is a highly versatile and flexible filter. To alleviate the computational burden of the full-fledged algorithm, we devise an approximate solution based on steady-state SSRLS (S4RLSWAM). It is no exaggeration to say that, in a time-varying environment with model uncertainties, the tracking performance of S4RLSWAM would surpass many of the existing tools when both computational complexity and performance are taken into consideration. The example of tracking a noisy chirp demonstrates the overall capability and power of SSRLSWAM. Although this paper is a self-contained exposition, it is expected that this new algorithm will address a much wider range of problems in the research that follows this work.
References
[1] M.B. Malik, State-space recursive least-squares, Ph.D. Dissertation, College of Electrical and Mechanical Engineering, National University of Sciences and Technology, Pakistan, 2004.
[2] M.B. Malik, H. Qureshi, R.A. Bhatti, Tracking of linear time-varying systems by state-space recursive least squares, IEEE Internat. Symp. Circuits and Systems III (2004) 305–308.
[3] M.B. Malik, State-space recursive least-squares: Parts I & II, Signal Process. 84 (2004) 1709–1728.
[4] M.B. Malik, E. Muhammad, M.A. Maud, Convergence analysis of SSRLS, in: International Networking and Communications Conference, vol. 1, 2004, pp. 175–178.
[5] B. Chun, B. Kim, Y.H. Lee, Generalization of exponentially weighted RLS algorithm based on a state-space model, IEEE Internat. Symp. Circuits and Systems V (1998) 198–201.
[6] E. Eleftheriou, D.D. Falconer, Tracking properties and steady-state performance of RLS adaptive filter algorithms, IEEE Trans. Acoust. Speech Signal Process. ASSP-34 (1986) 1097–1110.
[7] E. Eweda, Comparison of RLS, LMS and sign algorithms for tracking randomly time-varying channels, IEEE Trans. Signal Process. 42 (1994) 2937–2944.
[8] S. Haykin, Adaptive Filter Theory, fourth ed., Prentice-Hall, Englewood Cliffs, NJ, 2001.
[9] R.G. Brown, P.Y.C. Hwang, Introduction to Random Signals and Applied Kalman Filtering, Wiley, New York, 1997.
[10] G.F. Franklin, J.D. Powell, M.L. Workman, Digital Control of Dynamic Systems, second ed., Addison-Wesley, Reading, MA, 1990.
[11] S. Haykin, A.H. Sayed, J. Zeidler, P. Yee, P. Wei, Adaptive tracking of linear time-variant systems by extended RLS algorithms, IEEE Trans. Signal Process. 45 (5) (1997) 1118–1128.
[12] S.M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Prentice-Hall, Englewood Cliffs, NJ, 1993.
[13] A.H. Sayed, T. Kailath, A state-space approach to adaptive RLS filtering, IEEE Signal Process. Mag. 11 (3) (1994) 18–60.
[14] M.B. Malik, State-space RLS with adaptive memory, Internat. Symp. Image and Signal Processing and Analysis I (2003) 146–151.
[15] A. Benveniste, Design of adaptive algorithms for the tracking of time-varying systems, Int. J. Adaptive Control Signal Process. 1 (1987) 3–29.
[16] W.J. Rugh, Linear System Theory, Prentice-Hall, Englewood Cliffs, NJ, 1996.
[17] M.B. Malik, Performance of SSRLS under model uncertainty and external disturbance, under preparation.
[18] H.K. Khalil, Nonlinear Systems, third ed., Prentice-Hall, Englewood Cliffs, NJ, 2002.
[19] E. Kreyszig, Advanced Engineering Mathematics, Wiley, New York, 1983.
[20] H.J. Sussmann, P.V. Kokotovic, The peaking phenomenon and the global stabilization of nonlinear systems, IEEE Trans. Automat. Control 36 (1991) 424–440.
[21] Y. Bar-Shalom, X.-R. Li, T. Kirubarajan, Estimation with Applications to Tracking and Navigation, Wiley, New York, 2001.