
Discrete-time Kalman Filter and the Particle Filter

Com S 477/577
Dec 7, 2010
1 Discrete-Time System
A discrete-time system is a process that transforms input discrete-time signals into output discrete-
time signals. Simply stated, it takes an input sequence and produces an output sequence. Such a
system usually takes the form of
    x_k = F_{k-1} x_{k-1} + G_{k-1} u_{k-1} + w_{k-1},                        (1)

where u_k is a known input, and w_k is white noise with zero mean and covariance Q_k. We are
interested in how the mean of the state x_k evolves with time.
From now on, we denote the mean of a random variable with an overbar. Take the expected
value of both sides of equation (1):
    \bar{x}_k = E(x_k) = F_{k-1} \bar{x}_{k-1} + G_{k-1} u_{k-1}.              (2)
How does the covariance P_k = cov(x_k) evolve with time? Subtraction of (2) from (1) yields

    x_k - \bar{x}_k = F_{k-1}(x_{k-1} - \bar{x}_{k-1}) + w_{k-1}.
The covariance is then computed as follows:
    P_k = E[(x_k - \bar{x}_k)(x_k - \bar{x}_k)^T]
        = E[(F_{k-1}(x_{k-1} - \bar{x}_{k-1}) + w_{k-1})(F_{k-1}(x_{k-1} - \bar{x}_{k-1}) + w_{k-1})^T]
        = E[F_{k-1}(x_{k-1} - \bar{x}_{k-1})(x_{k-1} - \bar{x}_{k-1})^T F_{k-1}^T + w_{k-1} w_{k-1}^T
            + F_{k-1}(x_{k-1} - \bar{x}_{k-1}) w_{k-1}^T + w_{k-1}(x_{k-1} - \bar{x}_{k-1})^T F_{k-1}^T].
That x_{k-1} - \bar{x}_{k-1} and w_{k-1} are uncorrelated implies

    E[(x_{k-1} - \bar{x}_{k-1}) w_{k-1}^T] = E(x_{k-1} - \bar{x}_{k-1}) E(w_{k-1}^T) = 0,
    E[w_{k-1}(x_{k-1} - \bar{x}_{k-1})^T] = 0.

Note: The material is adapted from Chapters 4.1 and 5.1-5.3 of Dan Simon's book Optimal State Estimation [1].
Two of the summands in the equation for P_k vanish. The recursive covariance equation reduces to
    P_k = F_{k-1} P_{k-1} F_{k-1}^T + Q_{k-1},                                 (3)

where Q_{k-1} is the covariance of w_{k-1}. The above is called a discrete-time Lyapunov equation or a
Stein equation.
Example 1. The following is a linear system that describes the population of a predator x_1 and that of its
prey x_2, where the second subscript of each item denotes the time step:
    x_{1,k+1} = x_{1,k} - 0.8 x_{1,k} + 0.4 x_{2,k} + w_{1,k},
    x_{2,k+1} = x_{2,k} - 0.4 x_{1,k} + u_k + w_{2,k}.
The first equation says that the predator population decreases due to overcrowding as well as due to a reduction
in the prey population. The second equation says that the prey population decreases due to the predator
population and increases with an external food supply u_k. Both populations are subject to random disturbances
(with variances 1 and 2, respectively) due to environmental factors. We rewrite the system in the state-space
form as
    x_{k+1} = \begin{pmatrix} 0.2 & 0.4 \\ -0.4 & 1 \end{pmatrix} x_k
              + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u_k + w_k,        w_k ~ (0, Q),

where Q = diag(1, 2).
The mean and covariance of the populations evolve with time according to equations (2) and (3). Set
u_k = 1 and the initial conditions as \bar{x}_0 = (10, 20)^T and P_0 = diag(40, 40). The figure below (Figure 4.1
on p. 110 in [1]) depicts the two means and the two diagonal elements of the covariance matrix for the first few steps.
From the figure, we see that the mean and covariance eventually reach steady-state values. The
steady-state value of the mean is the solution of the linear system after setting w_k = 0:

    \bar{x} = \begin{pmatrix} 2.5 \\ 5 \end{pmatrix}.
The steady-state value of P, found by control system software (for example, the MATLAB Control System
Toolbox function DLYAP(F, Q)), is

    P \approx \begin{pmatrix} 2.88 & 3.08 \\ 3.08 & 7.96 \end{pmatrix}.
2 Four Estimates of a State
We would like to estimate the state, i.e., the values of all variables of a dynamic system, as it
evolves with time. Our method is to derive equations from the system that describe how the mean
and covariance of its state x propagate with time. These equations form a new dynamic system,
which is simulated on a computer. They are the basis for the derivation of the Kalman filter, whose
estimate is the mean of the original system's state x and whose covariance equals the covariance
of x. Every time a new measurement is taken, the mean and covariance of x are updated, much as
we recursively updated the estimate of a constant in an earlier lecture.
Consider a linear discrete-time system as follows:

    x_k = F_{k-1} x_{k-1} + G_{k-1} u_{k-1} + w_{k-1},                         (4)
    y_k = H_k x_k + v_k.                                                       (5)
Here, the evolutions of the state x and the output y are subject to some noise processes w and
v, respectively. These two noise processes are white, zero-mean, and uncorrelated. Let Q_k and R_k be their
covariance matrices, respectively. Then the noise characteristics are given as follows:
    w_k ~ (0, Q_k),                                                            (6)
    v_k ~ (0, R_k),                                                            (7)
    E(w_k w_j^T) = Q_k if k = j, and 0 otherwise,                              (8)
    E(v_k v_j^T) = R_k if k = j, and 0 otherwise,                              (9)
    E(v_k w_j^T) = 0.                                                          (10)
The goal is to estimate the state x_k based on the known system dynamics (4)-(5) and the noisy
measurements {y_k}.
In estimating the state x_k at time k, if we have all the measurements up to and including time
k available, then we can form an a posteriori (or a posterior) estimate, denoted \hat{x}_k^+. One way is
to compute the expected value of x_k as

    \hat{x}_k^+ = E(x_k | y_1, y_2, ..., y_k).                                 (11)
If we have only the measurements before time k available, then we can form an a priori (or a prior)
estimate, denoted \hat{x}_k^-. One way to form this latter estimate is to compute the expected value of
x_k as

    \hat{x}_k^- = E(x_k | y_1, y_2, ..., y_{k-1}).                             (12)
Note that \hat{x}_k^- and \hat{x}_k^+ are both estimates of x_k. However, the former is our estimate before the
measurement y_k is taken into account, while the latter is our estimate after the measurement y_k
is taken into account. With more information, \hat{x}_k^+ is expected to be a better estimate than \hat{x}_k^-.
A smoothed estimate of x_k is formed if we also have measurements after time k available for
use. One way of doing this is to compute the following expected value:

    \hat{x}_{k|k+n} = E(x_k | y_1, y_2, ..., y_{k+n}),                         (13)
where n > 0 depends on the specific problem that is being solved. A predicted estimate is a
prediction of x_k one step or more ahead of the available measurements. This is often computed as
the expected value of x_k based on the limited measurements that are available:

    \hat{x}_{k|k-m} = E(x_k | y_1, y_2, ..., y_{k-m}),                         (14)

where m > 0 is problem specific.
The figure below depicts the relationship between the a posteriori, a priori, smoothed, and predicted
state estimates. Suppose we are making estimates based on all measurements taken at times
k <= 5. An estimate of the state at k = 1 is a smoothed estimate. An estimate at k = 5 is the a
posteriori estimate. An estimate at k = 6 is the a priori estimate. And an estimate of the state at
k = 9 is a prediction.
[Figure: a time line from k = 1 to 10 with measurements available through k = 5, marking the
smoothed estimate \hat{x}_{1|5}, the a posteriori estimate \hat{x}_5^+, the a priori estimate \hat{x}_6^-, and the
predicted estimate \hat{x}_{9|5}.]
Starting from now, we denote by \hat{x}_0^+ our initial estimate of x_0 before the first measurement is
taken (at time k = 1). Given that no measurements are available, we form \hat{x}_0^+ as the expected value of the
initial state:

    \hat{x}_0^+ = \bar{x}_0 = E(x_0).                                          (15)
We let P_k^- be the covariance of the estimation error of \hat{x}_k^-, and P_k^+ the covariance of the estimation
error of \hat{x}_k^+, all at time k:

    P_k^- = E[(x_k - \hat{x}_k^-)(x_k - \hat{x}_k^-)^T],
    P_k^+ = E[(x_k - \hat{x}_k^+)(x_k - \hat{x}_k^+)^T].
The next figure shows the relationships from time k-1 to k. After we process the measurement
at time k-1, we have a posterior estimate \hat{x}_{k-1}^+ and the covariance P_{k-1}^+. At time k, before
we process the new measurement, we compute a prior estimate \hat{x}_k^- and the covariance P_k^- of the
estimate based on the system dynamics. Then we process the measurement y_k at time k to refine
our estimate of x_k. The resulting estimate, which takes into account the measurement at time k, is
\hat{x}_k^+ with covariance P_k^+.
[Figure: the estimates and covariances along the time axis from k-1 to k: (\hat{x}_{k-1}^+, P_{k-1}^+)
propagate through the system dynamics to (\hat{x}_k^-, P_k^-), which the measurement y_k then updates
to (\hat{x}_k^+, P_k^+).]
3 From a posteriori Estimate to a priori Estimate
The estimation process starts with \hat{x}_0^+, the best estimate of the initial state x_0. It then computes
\hat{x}_1^-. Recall from (2) that the mean of x propagates with time, so we obtain

    \hat{x}_1^- = F_0 \hat{x}_0^+ + G_0 u_0.
This equation generalizes to obtain the prior estimate at time k from the posterior estimate at the
previous time k 1:
x

k
= F
k1
x
+
k1
+G
k1
u
k1
.
Namely, the state estimate propagates the same way that the mean of the state does. This update
is based on the system dynamics (1).
Next, we derive the equations for updating the covariance of the state estimation error from
time k-1 to time k, but before the measurement at time k is considered. We begin with P_0^+, the
covariance of our initial estimate \hat{x}_0^+. The covariance represents the uncertainty in x_0:
    P_0^+ = E[(x_0 - \bar{x}_0)(x_0 - \bar{x}_0)^T] = E[(x_0 - \hat{x}_0^+)(x_0 - \hat{x}_0^+)^T].   (16)
If the initial state x_0 is perfectly known, then P_0^+ = 0; if it is completely unknown, then P_0^+ = \infty I.
Since the covariance of the state propagates with time according to (3), we immediately obtain

    P_1^- = F_0 P_0^+ F_0^T + Q_0.                                             (17)
The above update is generalized to time k:

    P_k^- = F_{k-1} P_{k-1}^+ F_{k-1}^T + Q_{k-1}.                             (18)
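In code, this time update is a one-line transcription of each of the two propagation equations above.
Below is a minimal sketch assuming NumPy; the helper name is ours, and u is taken to be a vector
of matching dimension.

    import numpy as np

    def time_update(x_post, P_post, F, G, u, Q):
        """Map (x^+_{k-1}, P^+_{k-1}) to (x^-_k, P^-_k)."""
        x_prior = F @ x_post + G @ u        # prior state estimate
        P_prior = F @ P_post @ F.T + Q      # prior covariance, equation (18)
        return x_prior, P_prior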
4 From a priori Estimate to a posteriori Estimate
Now we need to update the state estimate at time k by taking into account the measurement y_k
at time k. In other words, we compute \hat{x}_k^+ from \hat{x}_k^-, which was obtained without knowledge of y_k.
Recall from the lecture on recursive least squares that the availability of y_k changes the estimate
as follows:
    K_k = P_{k-1} H_k^T (H_k P_{k-1} H_k^T + R_k)^{-1},
    \hat{x}_k = \hat{x}_{k-1} + K_k (y_k - H_k \hat{x}_{k-1}),
    P_k = (I - K_k H_k) P_{k-1},
where \hat{x}_{k-1} and P_{k-1} are the estimate and its covariance before the measurement y_k is processed
(so they are essentially \hat{x}_k^- and P_k^- here), and \hat{x}_k and P_k are the estimate and its covariance based
on y_k (so they are \hat{x}_k^+ and P_k^+ here). The correspondences are shown below:
    \hat{x}_{k-1} -> \hat{x}_k^-,        P_{k-1} -> P_k^-,
    \hat{x}_k     -> \hat{x}_k^+,        P_k     -> P_k^+.
Applying these replacements, we obtain the measurement update equations
    K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1},
    \hat{x}_k^+ = \hat{x}_k^- + K_k (y_k - H_k \hat{x}_k^-),
    P_k^+ = (I - K_k H_k) P_k^-.
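These three equations translate directly into a measurement-update helper. The sketch below assumes
NumPy; the function name is ours.

    import numpy as np

    def measurement_update(x_prior, P_prior, y, H, R):
        """Fold the measurement y_k into (x^-_k, P^-_k) to obtain (x^+_k, P^+_k)."""
        K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)   # Kalman gain
        x_post = x_prior + K @ (y - H @ x_prior)                   # posterior estimate
        P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior          # posterior covariance
        return x_post, P_post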
5 Kalman Filter
Now we summarize all the equations into an algorithm referred to as the discrete-time Kalman
filter. The system dynamics are given by (4) and (5), with noise statistics (8)-(10).
1. Initialize the Kalman filter according to (15) and (16), namely,

       \hat{x}_0^+ = E(x_0) = \bar{x}_0,
       P_0^+ = E[(x_0 - \bar{x}_0)(x_0 - \bar{x}_0)^T].
2. Iterate the following sequentially at each time step k = 1, 2, ...:

       P_k^- = F_{k-1} P_{k-1}^+ F_{k-1}^T + Q_{k-1},                          (19)
       K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1},                         (20)
       \hat{x}_k^- = F_{k-1} \hat{x}_{k-1}^+ + G_{k-1} u_{k-1},        (prior state estimate)      (21)
       \hat{x}_k^+ = \hat{x}_k^- + K_k (y_k - H_k \hat{x}_k^-),        (posterior state estimate)  (22)
       P_k^+ = (I - K_k H_k) P_k^- (I - K_k H_k)^T + K_k R_k K_k^T             (23)
             = (I - K_k H_k) P_k^-.                                            (24)
The first expression for P_k^+ is guaranteed to be symmetric positive definite as long as P_k^- is, whereas
the second expression is simpler but does not guarantee symmetry or positive definiteness.
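Collecting (19)-(24) into a single loop gives a complete filter. The sketch below assumes NumPy; the
function name and the convention of passing sequences of (possibly time-varying) matrices are
illustrative choices, not taken from [1], and the Joseph form (23) is used for the covariance update.

    import numpy as np

    def kalman_filter(x0, P0, Fs, Gs, us, Hs, Qs, Rs, ys):
        """Run the discrete-time Kalman filter over the measurements ys.

        Fs[k], Gs[k], us[k], Hs[k], Qs[k], Rs[k], ys[k] play the roles of
        F_{k-1}, G_{k-1}, u_{k-1}, H_k, Q_{k-1}, R_k, and y_k, respectively.
        Returns the lists of a posteriori estimates and covariances.
        """
        x_post, P_post = x0, P0
        n = len(x0)
        estimates, covariances = [], []
        for F, G, u, H, Q, R, y in zip(Fs, Gs, us, Hs, Qs, Rs, ys):
            P_prior = F @ P_post @ F.T + Q                              # (19)
            K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)    # (20)
            x_prior = F @ x_post + G @ u                                # (21)
            x_post = x_prior + K @ (y - H @ x_prior)                    # (22)
            A = np.eye(n) - K @ H
            P_post = A @ P_prior @ A.T + K @ R @ K.T                    # (23), Joseph form
            estimates.append(x_post)
            covariances.append(P_post)
        return estimates, covariances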
When x_k is a constant, then F_k = I, Q_k = 0, and u_k = 0 for all k. So we have

    P_k^- = P_{k-1}^+,
    \hat{x}_k^- = \hat{x}_{k-1}^+,
    \hat{x}_k^+ = \hat{x}_{k-1}^+ + K_k (y_k - H_k \hat{x}_k^-),
    P_k^+ = (I - K_k H_k) P_{k-1}^+.
The Kalman filter consists of the last two equations above and (20). It reduces to the recursive
least squares algorithm for estimating a constant vector.
Note that the calculation of P_k^-, K_k, and P_k^+ does not depend on the measurement y_k; it
depends only on the system parameters F_k, H_k, Q_k, and R_k. For efficiency, we can precalculate the
Kalman gains K_k up to a large enough k and store them beforehand. When the dynamical system
starts operating, only \hat{x}_k^- and \hat{x}_k^+ need to be computed in real time. This also means that the
performance of the filter can be investigated and evaluated before it is actually run, because the
accuracy of the filter is indicated by P_k, which does not depend on the measurements.
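The sketch below illustrates this point: the whole gain schedule is computed without a single
measurement (assuming NumPy and, for brevity, time-invariant F, H, Q, R; the helper name is ours).
At run time only (21) and (22) remain to be evaluated with the stored gains.

    import numpy as np

    def precompute_gains(P0, F, H, Q, R, num_steps):
        """Precompute K_k and P^+_k for k = 1, ..., num_steps using (19), (20), (24)."""
        gains, P_post = [], P0
        for _ in range(num_steps):
            P_prior = F @ P_post @ F.T + Q                              # (19)
            K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)    # (20)
            P_post = (np.eye(P0.shape[0]) - K @ H) @ P_prior            # (24)
            gains.append(K)
        return gains    # stored offline; no measurement y_k is ever needed here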
Example 2. Consider a noise-free Newtonian system involving position r, velocity v, and constant acceleration a.
The system equation is

    \begin{pmatrix} \dot{r} \\ \dot{v} \\ \dot{a} \end{pmatrix}
      = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}
        \begin{pmatrix} r \\ v \\ a \end{pmatrix}.
Writing x = (r, v, a)^T, the system becomes

    \dot{x} = Ax.

It has a solution

    x = e^{At} x_0,

where

    e^{At} = \sum_{j=0}^{\infty} \frac{(At)^j}{j!}.
Discretize the system with a sample time interval of h:

    x_{k+1} = F x_k,

where

    F = e^{Ah} = I + Ah + \frac{(Ah)^2}{2!} + \cdots
      = \begin{pmatrix} 1 & h & h^2/2 \\ 0 & 1 & h \\ 0 & 0 & 1 \end{pmatrix}.
There is no state noise or output noise. Hence Q_k = 0 and R_k = 0 for all k. Also, there is no input, so
G_k = 0. The Kalman filter for the system follows from equations (19) and (21):

    P_k^- = F P_{k-1}^+ F^T,
    \hat{x}_k^- = F \hat{x}_{k-1}^+.                                           (25)
The equations (20), (22), (23), and (24) are not needed since we do not make any measurements between
time (k-1)^+ and time k^-. For the same reason, we see that the covariance of the estimation error increases
between time (k-1)^+ and time k^-, as indicated in the first equation above.
Now suppose that we measure the position with a variance of \sigma^2:

    y_k = H_k x_k + v_k = (1  0  0) x_k + v_k,
    v_k ~ (0, R_k),        R_k = \sigma^2.
We denote by P_{k,ij}^- the (i, j)-th entry of the prior covariance. Substituting H_k = (1, 0, 0) and R_k = \sigma^2 into
the above equation, we obtain the Kalman gain from (20):

    K_k = \frac{1}{P_{k,11}^- + \sigma^2}
          \begin{pmatrix} P_{k,11}^- \\ P_{k,12}^- \\ P_{k,13}^- \end{pmatrix}.
Meanwhile, the posterior covariance follows from (24) as

    P_k^+ = P_k^- - K_k H_k P_k^-.                                             (26)
Taking the trace of both sides, we obtain

    tr(P_k^+) = tr(P_k^-) - \frac{(P_{k,11}^-)^2 + (P_{k,12}^-)^2 + (P_{k,13}^-)^2}{P_{k,11}^- + \sigma^2}.
The above equation shows that the covariance decreases when we get a new measurement. Consequently,
our state estimate improves.
The figure below (Figure 5.3 on p. 134 of [1]) shows the variance of the position estimate (P_{k,11}^- and
P_{k,11}^+) for the first five time steps of the Kalman filter. We see that the uncertainty increases from one
step to the next, but then decreases as a new measurement is taken into account.
The second figure (Figure 5.4 on p. 134 of [1]) shows the variance evolution when the Kalman filter runs for
the first 60 time steps. It can be seen that, despite the fluctuations before and after each new measurement,
the variance converges to a steady-state value.
The next figure (Figure 5.5 on p. 135 of [1]) shows the position-measurement error (with a standard deviation
of 30) and the error of the posterior estimate \hat{x}^+. The estimate error starts out with a standard deviation
close to 30, and decreases to about 11 by the end of the simulation. The standard deviations are obtained from
the covariances.
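The rise-and-fall variance behavior described above can be reproduced with a short simulation. The
sketch below assumes NumPy; the sample interval h = 1, the true initial state, and the initial
covariance are illustrative choices, while \sigma = 30 and H = (1 0 0) come from the example.

    import numpy as np

    h, sigma = 1.0, 30.0
    F = np.array([[1.0, h, h**2 / 2],
                  [0.0, 1.0, h],
                  [0.0, 0.0, 1.0]])
    H = np.array([[1.0, 0.0, 0.0]])          # only the position is measured
    rng = np.random.default_rng(0)

    x_true = np.array([0.0, 1.0, 0.1])       # illustrative true (r, v, a)
    x_post = np.zeros(3)                     # initial estimate
    P_post = np.diag([900.0, 10.0, 1.0])     # illustrative initial uncertainty

    for k in range(60):
        x_true = F @ x_true                  # noise-free dynamics (Q_k = 0)
        x_prior = F @ x_post
        P_prior = F @ P_post @ F.T           # variance grows before the measurement
        y = H @ x_true + rng.normal(0.0, sigma, size=1)
        K = P_prior @ H.T / (P_prior[0, 0] + sigma**2)     # gain from (20)
        x_post = x_prior + K @ (y - H @ x_prior)
        P_post = P_prior - K @ H @ P_prior                 # equation (26)
        # P_prior[0, 0] and P_post[0, 0] trace the rise-and-fall variance pattern.

    print(np.sqrt(P_post[0, 0]))             # posterior position standard deviation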
6 One-step Kalman Filter
This section combines the a priori and a posteriori Kalman filter equations into one. At time k+1,
the prior state estimate is, by (21),

    \hat{x}_{k+1}^- = F_k \hat{x}_k^+ + G_k u_k.
Now, we substitute (22) into the above and obtain

    \hat{x}_{k+1}^- = F_k [\hat{x}_k^- + K_k (y_k - H_k \hat{x}_k^-)] + G_k u_k
                    = F_k (I - K_k H_k) \hat{x}_k^- + F_k K_k y_k + G_k u_k.   (27)
Similarly, we can obtain a one-step expression for the prior covariance. First, increase the index
in (19) by one:

    P_{k+1}^- = F_k P_k^+ F_k^T + Q_k.
Next, substitute (24) into the above:

    P_{k+1}^- = F_k (P_k^- - K_k H_k P_k^-) F_k^T + Q_k
              = F_k P_k^- F_k^T - F_k K_k H_k P_k^- F_k^T + Q_k.               (28)
The Kalman filter at each iteration step sequentially executes (20), (27), and (28).
Similarly, we can derive one-step expressions for the a posteriori state estimate and covariance
as follows:

    \hat{x}_k^+ = (I - K_k H_k)(F_{k-1} \hat{x}_{k-1}^+ + G_{k-1} u_{k-1}) + K_k y_k,   (29)
    P_k^+ = (I - K_k H_k)(F_{k-1} P_{k-1}^+ F_{k-1}^T + Q_{k-1}).                       (30)
The Kalman filter sequentially evaluates (20), (29), and (30) at each iteration step.
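As a sketch (assuming NumPy; the helper name is ours), the a posteriori one-step form carries only
\hat{x}_k^+ and P_k^+ between iterations; the quantity F P^+ F^T + Q computed first is exactly P_k^-,
which both (20) and (30) need.

    import numpy as np

    def one_step_posterior(x_post, P_post, y, F, G, u, H, Q, R):
        """One iteration of the a posteriori form: evaluates (20), (29), (30)."""
        P_prior = F @ P_post @ F.T + Q                              # P^-_k, used in (20) and (30)
        K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)    # (20)
        A = np.eye(len(x_post)) - K @ H
        x_post = A @ (F @ x_post + G @ u) + K @ y                   # (29)
        P_post = A @ P_prior                                        # (30)
        return x_post, P_post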
7 The Particle Filter
The particle filter is a statistical, brute-force approach to estimation that often works well on
problems that are difficult for the conventional Kalman filter. These problems typically involve
systems that are highly nonlinear. For nonlinear systems, the extended Kalman filter (EKF) is
the most widely applied state estimation technique. When the nonlinearities are severe, though,
the EKF often gives unreliable estimates, since it employs linearization to propagate the mean and
covariance of the state estimate.

The particle filter is a nonlinear state estimator that trades computational effort for the ability to
handle strong nonlinearities. A probability-based estimator, it has proven very effective in machine
learning, mapping, and robot learning [2].
At the beginning of estimation, we randomly generate a number of state vectors based on the
probability distribution of the initial state (which is assumed to be known). These state vectors
are called particles. At each time step, we propagate the particles to the next time step using the
process equation, compute the likelihood of each particle based on the measurement taken at that
step, and then generate a set of new particles based on the computed relative likelihoods. We iterate
for a number of steps and output the mean and covariance of the particles at the last step.
More formally, suppose the nonlinear system and measurement equations are given as follows
for time step k:

    x_{k+1} = f_k(x_k, w_k),
    y_k = h_k(x_k, v_k),

where w_k and v_k are the process noise and measurement noise, respectively, with known probability
density functions (pdfs). The functions f_k and h_k vary with time (i.e., with the time step). The pdf
p(x_0) of the initial state is assumed to be known.
The particle filtering algorithm executes the following steps:
1. Randomly generate n initial particles based on p(x_0), and denote them as x_{0,i}^+ (i = 1, ..., n).
   The parameter n is chosen as a trade-off between computational effort and estimation accuracy.
2. Repeat the following for k = 1, 2, ...:
   Propagation: For each particle x_{k-1,i}^+, perform the time propagation step to obtain a priori
   particles x_{k,i}^-, based on the system dynamics:

       x_{k,i}^- = f_{k-1}(x_{k-1,i}^+, w_{k-1}^i),    for i = 1, ..., n,

   where each noise vector w_{k-1}^i is randomly generated based on the known pdf of w_{k-1}.
   Likelihood determination: Compute the relative likelihood q_i of each particle x_{k,i}^-, conditioned
   on the measurement y_k. This is done by evaluating the pdf p(y_k | x_{k,i}^-), since we
   know the pdf of the measurement noise. For instance, suppose an m-dimensional measurement
   is given as

       y_k = h(x_k) + v_k,    where v_k ~ N(0, R).

   Then the relative likelihood of a specific measurement y^* can be computed as follows:

       q_i = Pr(y_k = y^* | x = x_{k,i}^-)
           = Pr(v_k = y^* - h(x_{k,i}^-))
           \propto \frac{1}{(2\pi)^{m/2} |R|^{1/2}}
                   \exp\left(-\frac{(y^* - h(x_{k,i}^-))^T R^{-1} (y^* - h(x_{k,i}^-))}{2}\right).   (31)

   The symbol \propto means that the probability is proportional to the right-hand side. We
   use it to compare one relative likelihood with another, because all of them are scaled by the same
   factor.
   Normalization: Scale the relative likelihoods obtained in the previous step, for example by
   (31), as follows:

       q_i = \frac{q_i}{\sum_{j=1}^{n} q_j}.

   After the scaling, all the likelihoods add up to one.
   Resampling: Generate n posterior particles x_{k,i}^+ based on the relative likelihoods q_i; one
   common resampling scheme is sketched in the code after this list.
In each iteration step, we have a set of particles x_{k,i}^+ distributed according to the pdf p(x_k | y_k).
We can compute the mean and covariance of these particles as the state estimate and its uncertainty.
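The resampling step admits several implementations. A simple and common choice is multinomial
resampling: draw each a posteriori particle independently from the a priori particles with probability
equal to its normalized likelihood. A minimal sketch, assuming NumPy (the helper name is ours):

    import numpy as np

    def resample(prior_particles, weights, rng=None):
        """Multinomial resampling: draw n particles x^+_{k,i} from the x^-_{k,i}
        with probabilities given by the normalized likelihoods q_i."""
        rng = np.random.default_rng() if rng is None else rng
        n = len(prior_particles)
        indices = rng.choice(n, size=n, replace=True, p=weights)
        return prior_particles[indices]

Lower-variance alternatives such as systematic resampling are also widely used; the choice does not
change the overall structure of the algorithm.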
Example 3. We consider a benchmark problem in nonlinear estimation where the system is given as follows:

    x_k = \frac{1}{2} x_{k-1} + \frac{25 x_{k-1}}{1 + x_{k-1}^2} + 8 \cos(1.2(k-1)) + w_k,
    y_k = \frac{1}{20} x_k^2 + v_k,

where {w_k} and {v_k} are Gaussian white noise sequences with zero means and unit variances.
The high degree of nonlinearity in the process and the measurement makes the problem difficult for a
Kalman filter. We take the initial state x_0 = 0.1, the initial estimate \hat{x}_0 = x_0, and the initial estimation
covariance for the Kalman filter P_0^+ = 2. We simulate the extended Kalman filter (EKF) [1] and the particle
filter (using 100 particles) over 50 time steps.

The figure below (Figure 15.3 on p. 470 of [1]) compares the EKF and particle filter estimates. Not only is the
EKF estimate poor, but the EKF also thinks (based on the computed covariance) that the estimate is better
than it really is. In comparison, the particle filter does a better job. The estimation errors, measured as
root-mean-square errors, for the Kalman and particle filters are 16.3 and 2.6, respectively.
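A compact particle filter for this benchmark might look as follows. This is a sketch under the setup
described above (100 particles, 50 steps, unit noise variances); the random seed and the multinomial
resampling scheme are our own choices, so the number it prints will not exactly match the value
reported in [1].

    import numpy as np

    rng = np.random.default_rng(1)
    n, steps = 100, 50

    def f(x, k):                 # process model without the noise term
        return 0.5 * x + 25.0 * x / (1.0 + x**2) + 8.0 * np.cos(1.2 * k)

    def h(x):                    # measurement model without the noise term
        return x**2 / 20.0

    x_true = 0.1
    particles = rng.normal(0.1, np.sqrt(2.0), size=n)   # drawn from the initial pdf
    errors = []

    for k in range(1, steps + 1):
        # Simulate the true system and its measurement.
        x_true = f(x_true, k - 1) + rng.normal()
        y = h(x_true) + rng.normal()
        # Propagation: push every particle through the process equation.
        particles = f(particles, k - 1) + rng.normal(size=n)
        # Likelihood of each particle given y (Gaussian noise, R = 1), then normalize.
        q = np.exp(-0.5 * (y - h(particles))**2) + 1e-300   # floor avoids all-zero weights
        q /= q.sum()
        # Resampling (multinomial) and state estimate.
        particles = particles[rng.choice(n, size=n, p=q)]
        errors.append(x_true - particles.mean())

    print("particle filter RMS error:", np.sqrt(np.mean(np.square(errors))))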
References
[1] D. Simon. Optimal State Estimation. John Wiley & Sons, Inc., Hoboken, New Jersey, 2006.
[2] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. The MIT Press, Cambridge, MA, 2005.