Applications of Sequential
Tests to Target Tracking by
Multiple Models
X. RONG LI
University of New Orleans, New Orleans, USA
12.1 INTRODUCTION
220 Li and Solanky
use of multiple models can achieve performance usually significantly superior to that of the best (single-model-based) elemental filter, similar to the superiority in modeling power of a mixture density over a single best density.
The MM method has three generations. In the first generation, which can be called autonomous MM, the elemental filters operate completely independently, and the power of this generation stems from its superior "decision" rule for fusing the outputs of the elemental filters. These elemental filters interact with one another (by sharing information) in the second generation, known as interacting MM, on top of the superior fusion rule. While inheriting the first two generations' strengths, the third generation, known as variable-structure MM (VSMM), emphasizes the use of a variant set of models that adapts to the environment, whereas the first two generations use a fixed set of models, as in Li and Bar-Shalom (1996). The second and third generations are particularly popular for target tracking. For a survey of the MM approach to estimation, the reader is referred to Li (1996). The VSMM approach provides state-of-the-art solutions to a number of complex engineering problems, in particular target tracking, and is promising for many other applications. An easily accessible account of the VSMM approach in the context of target tracking is given in Li (2000a).
The most important and rather difficult task in the VSMM approach is model-set adaptation (MSA), the determination of the model set in real time based on the data available sequentially. MSA is usually decomposed into two sub-problems: model-set activation (or generation) and model-set termination. In general, while the former determines (that is, activates or generates) a family of candidate model-sets M1, ..., MN, the latter selects from the candidates M1, ..., MN the smallest model-set having the largest probability of including the true model of the process being estimated. Typical examples of the MSA problem are
Problem 1: Is it better to delete a subset M1 from the current model-set M?
Problem 2: Given the current model-set M, which subset is better, M1 or M2?
Problem 3: Is it better to add one of the sets M1, ..., MN to the current set M?
Problem 4: Is it better to delete one of the sets M1, ..., MN from the current set M?
Problem 5: Is it better to add some of the sets M1, ..., MN to the current set M?
Problem 6: Is it better to delete some of the sets M1, ..., MN from the current set M?
Another important task when applying the MM approach is the design of model-sets. This is needed for all three generations of MM methods. For more information about this difficult problem, the reader is referred to Li (2002) and Li et al. (2002). We mention only a typical subproblem here:
Problem 7: Choose the L (known) best sets from the family of candidate sets M1, ..., MN.
Problem 7 differs from Problems 3, 4, 5, and 6 in that it does not involve a special model-set M.
Solutions to these problems are useful for virtually all applications of MM estimation. For example, in maneuvering target tracking, the greatest challenge arises from a lack of knowledge about the motion models of the target being tracked, although quite often we do know that the true model belongs to a family of candidate model-sets. Another common application of the MM method is in the detection and identification of a fault in a system or process, where different types of faults are represented by different model-sets.
In general, given a set of possible models at some stage and based on sequentially available data, we are interested in addressing various queries related to the current set of models, as defined in Problems 1-7. Evidently, these problems can be formulated as statistical decision problems, particularly as problems of testing statistical hypotheses. Here, sequential methods are preferred primarily for the following reasons.
Observations are available sequentially.
A sequential test usually arrives at a decision substantially faster than a non-sequential test when they are subject to the same decision error rates. This is crucial because the model-set needs to adapt itself as quickly as possible, and the pace at which it adapts amounts to the (average) sample size needed for the test. For example, Wald's Sequential Probability Ratio Test (SPRT) is fastest among all tests with the same given decision error rates if the true distribution is really equal to one of its two assumed distributions. Even in the worst case, when the true distribution lies between the two assumed ones, the SPRT usually requires a smaller sample size than a non-sequential test, provided that the allowed Type-I error probability is not too small, which is the case for virtually all MSA problems.
Using an SPRT-based sequential test, appropriate thresholds for the test statistics can be determined (approximately) without knowledge of the distribution of the observations, while for a non-sequential test the optimal thresholds depend on the underlying distribution and are hard to come by.
A sequential test does not need to determine the sample size in advance, while a non-sequential test does, which is not an easy job in the context of MSA.
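The points above can be made concrete with a minimal sketch of Wald's SPRT. This is illustrative only: it assumes i.i.d. observations and that the caller supplies the per-sample log-likelihood-ratio terms; the function name and interface are ours, not the chapter's.

```python
import math

def sprt(log_lr_terms, alpha=0.05, beta=0.05):
    """Wald's SPRT: accumulate log-likelihood-ratio terms until a boundary
    is crossed. Returns ("H1" or "H2", samples used), or ("continue", n)
    if the data run out before a decision."""
    # Wald's approximate boundaries (exact when there is no overshoot).
    a = math.log(beta / (1 - alpha))   # lower boundary -> accept H1
    b = math.log((1 - beta) / alpha)   # upper boundary -> accept H2
    s = 0.0
    for n, term in enumerate(log_lr_terms, start=1):
        s += term                      # log of the running likelihood ratio
        if s >= b:
            return "H2", n
        if s <= a:
            return "H1", n
    return "continue", n
```

Note that no sample size is fixed in advance: the loop stops as soon as the evidence crosses either boundary, which is exactly why the average sample size is small.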
However, these hypothesis-testing problems are challenging in at least the following aspects.
Candidate hypotheses (model-sets) are not necessarily disjoint. It is difficult to obtain the corresponding regions of acceptance, rejection, and continuation in an optimal fashion. Such testing problems are usually considered ill-posed and have rarely been studied.
Since MSA has to be done as quickly as possible, even at the risk of high decision error rates, the sample size cannot be large (it is usually smaller than five). For this reason, tests with nice large-sample (asymptotic) properties are not necessarily good choices.
Most problems involve multiple hypotheses, each being composite, and the parameters that characterize each hypothesis are usually multi-dimensional.
The sequential observations are usually correlated and almost never independent or identically distributed.
12.2 AN EXAMPLE
12.3 STATISTICAL METHODS
Let s be the time-invariant unknown true mode in effect during the time period over which a test of hypotheses is performed. In hypothesis-testing terminology, s is the unknown parameter on which hypotheses are tested. For convenience, let m be a generic hypothesized value of s; it represents the parameter of a generic model in the MM approach. A set of hypothesized values of s is denoted by M and referred to as a model-set in the MM approach for obvious reasons. Since we only deal with the MM approach here, we assume throughout that model-sets have finitely many elements. However, the results presented here are valid for many other hypothesis-testing scenarios, including those with many other classes of model-sets. Finally, denote by S the unknown set of possible values of the true mode s.
It has been shown in Li (1996) that the optimal model set for the MM approach is M = S. The performance of MM estimators deteriorates if either extra models are used (M ⊃ S) or some models are missing (M ⊂ S). The deterioration worsens as M and S become more mismatched. Given the same degree of mismatch, however, the case of missing models is usually worse than the case of having extra models.
Given these effects and a collection of not necessarily disjoint model-sets M1, ..., MN, various definitions of the best model set have been proposed in Li (2002) and Li et al. (2002) for a variety of purposes. For example, it may be defined as the one with the smallest cardinality among the model-sets with the largest probability of including the true mode s. In other words, the best among M1, ..., MN is a set Mj with cardinality |Mj| = min_{q ∈ Q} |Mq|, where

P{s ∈ Mq} = max_{i ∈ {1,...,N}} P{s ∈ Mi},   ∀ q ∈ Q ⊆ {1, ..., N}.   (12.3.1)
The use of this definition would lead to probability-based model-set adaptation. In this article, however, we present an alternative approach based on sequential hypothesis testing.
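The cardinality-based definition (12.3.1) can be sketched in a few lines. The helper below is illustrative only (names are ours); it assumes modes are atomic, so that P{s ∈ M} is the sum of the mode probabilities over M even when model-sets overlap.

```python
def best_model_set(model_sets, mode_prob):
    """Definition (12.3.1): among the sets with the largest probability of
    containing the true mode s, return one with the smallest cardinality.

    model_sets: iterable of frozensets of modes.
    mode_prob:  dict mapping each mode m to P{s = m}.
    """
    def p(M):  # P{s in M} as a sum over atomic modes
        return sum(mode_prob.get(m, 0.0) for m in M)

    model_sets = list(model_sets)
    p_max = max(p(M) for M in model_sets)
    # A small tolerance guards against floating-point ties.
    candidates = [M for M in model_sets if p(M) >= p_max - 1e-12]
    return min(candidates, key=len)
```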
Problem formulation
P{"H2" | s = m} ≤ α  ∀ m ∈ M1,   P{"H1" | s = m} ≤ β  ∀ m ∈ M2.   (12.3.4)
Such requirements make a decision quite safe, but perhaps so safe as to lose a good deal of efficiency, in the sense of requiring a sample size that is too large. Following Wald (1947), the requirements are replaced by considering a weighted average of the Type-I error probabilities, P{"H2" | s = m}, over the region A = {s ∈ M1} and of the Type-II error probabilities, P{"H1" | s = m}, over the region R = {s ∈ M2}, with weight functions satisfying

∫_A dW_a(s) = 1   and   ∫_R dW_r(s) = 1.   (12.3.5)
Then the question is how to select the weight functions dW_a(s) and dW_r(s). This is not trivial in general, though Wald (1947) suggested a number of ways of doing it. For MSA, however, a very natural and intuitively appealing choice is

dW_a(s) = P{s = m | s ∈ M1},   dW_r(s) = P{s = m | s ∈ M2}.   (12.3.6)

This choice satisfies the normalization requirements (12.3.5). As such, we propose in effect the use of the following modified requirements on the expected error probabilities:

P{"H2" | s ∈ M1} ≤ α   and   P{"H1" | s ∈ M2} ≤ β,   (12.3.7)

where

P{"Hp" | s ∈ Mq} = Σ_{m ∈ Mq} P{"Hp" | s = m} P{s = m | s ∈ Mq}.

We do so because P{s = m | s ∈ Mq}, known as the model probability, is available in an MM estimator; see Li (1996).
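The expected error probability in (12.3.7) is simply a model-probability-weighted mixture of per-mode error rates, as the sum above shows. A hypothetical helper (names and interface are ours, not the chapter's):

```python
def expected_error_prob(cond_error, model_probs):
    """Compute P{"Hp" | s in Mq} = sum_m P{"Hp" | s = m} P{s = m | s in Mq},
    mixing per-mode error rates with the model probabilities that an MM
    estimator already maintains.

    cond_error:  dict m -> P{"Hp" | s = m}
    model_probs: dict m -> P{s = m | s in Mq} (sums to 1 over Mq)
    """
    return sum(cond_error[m] * w for m, w in model_probs.items())
```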
Model-set probability and likelihood
The tests outlined in Theorems 12.3.1 and 12.3.2 are actually SPRTs using the model-set joint likelihood ratio and the probability ratio, respectively. P{s = m | s ∈ Ml, z^{k-1}} is nothing but the probability at time k, using data z^{k-1}, of mode s = m in model-set Ml. These probabilities are available from the MM estimator based on the model-set Ml. This is a unique characteristic of MSA and, thus, the key to the usefulness of the above theorems. It is this characteristic that enables the weight-function technique to convert a composite hypothesis-testing problem into a simple one with a clear physical justification. These tests also have the additional advantage that virtually no extra computation is needed to carry them out. Note also that the test in Theorem 12.3.2 is more compatible with the definition of the best model-set in (12.3.1).
The use of the weight functions (12.3.5) or (12.3.11) converts the composite and possibly non-disjoint hypothesis-testing problem into a simple one. As a result, any overlap between model-sets M1 and M2 poses no problem after this conversion. Here, the model-set likelihood (or probability) acts in exactly the same way as the likelihood for a simple hypothesis-testing problem. This is the key to Theorems 12.3.1 and 12.3.2. They are optimal in the sense of achieving the quickest MSA for the non-disjoint and composite hypothesis-testing problems.
In practice, the constants A and B can be treated as design parameters, obtained by tuning an MSA algorithm. Their relationships with the decision error probabilities are given by Wald's approximation:

A = β / (1 − α),   B = (1 − β) / α,   (12.3.13)

which is exact if the likelihood ratio Λ_k (or the probability ratio P_k) can only be either in (A, B) or on the boundaries A or B (that is, there is no overshoot, or excess over the boundaries), and is very accurate if the amount of overshoot is small. In case there is an overshoot, the above values of A and B are more conservative than the optimal choices.
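Wald's approximation is straightforward to compute; the sketch below follows (12.3.13) as written above (the function name is ours).

```python
def wald_thresholds(alpha, beta):
    """Wald's approximation (12.3.13): A = beta/(1 - alpha), B = (1 - beta)/alpha.
    Exact when the test statistic hits a boundary with no overshoot;
    otherwise conservative (error rates no worse than requested)."""
    return beta / (1 - alpha), (1 - beta) / alpha
```

With α = β the thresholds satisfy AB = 1, which is consistent with the range 9 ≤ B ≤ 99 used in the performance evaluation later in the chapter.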
The weight functions used in Theorem 12.3.1 satisfy the normalization requirements (12.3.5), but those in Theorem 12.3.2 do not. However, the latter can be justified as follows. To satisfy the normalization requirements, the following weight functions could be used instead:

dW_a(s) = P{s = m | s ∈ M}/c_a,   dW_r(s) = P{s = m | s ∈ M}/c_r,

where 0 < c_a, c_r ≤ 1 are factors such that the normalization requirements (12.3.5) are satisfied. However,

P{"H2", s ∈ M1 | s ∈ M} ≤ c_a α,   P{"H1", s ∈ M2 | s ∈ M} ≤ c_r β

is equivalent to (12.3.12) with

α′ = c_a α,   β′ = c_r β.   (12.3.14)
Requirements (12.3.7) and (12.3.12) are different. The latter uses the same weight for the two error probabilities under the same system mode, which seems more reasonable than using different weights, as in the case of the former. This is achieved at the price of losing the normalization property. Their allowable error probabilities differ, and thus the corresponding optimal tests are different.
Theorems 12.3.1 and 12.3.2 are closely related. The test statistic in Theorem 12.3.1 is the product of ratios of model-set likelihoods (for the present and the past), while in Theorem 12.3.2 it is the ratio of posterior model-set probabilities at the present time only, which depends on the past as well as current information. Specifically, posterior probability ratio = joint likelihood ratio × prior probability ratio. Thus, the MS-SPRT is better than the MS-SLRT if prior probabilities of the model-sets are available, which is usually the case for MSA. In fact, the normalization requirements imply c_a = P{s ∈ M1}, c_r = P{s ∈ M2}. However, although (12.3.7) and (12.3.12) are equivalent under (12.3.14), the tests in Theorems 12.3.1 and 12.3.2 are distinct because the likelihood ratio in Theorem 12.3.1 accounts for the factors c_a and c_r, which is not the case for Theorem 12.3.2.
For testing a simple hypothesis H0: θ = θ0 against a simple alternative H1: θ = θ1, the optimality of the SPRT does not hold if the true θ takes on a value other than θ0 or θ1. In fact, if θ is between θ0 and θ1, the SPRT may require a sample size even larger than a non-sequential test. The above two tests may suffer from a similar deficiency when s ∉ Ml. Note, however, that this deficiency is less serious for MSA since the likelihoods (12.5.1) are given over a set of points, rather than just a single point. A possible remedy for this deficiency is to consider the Kiefer-Weiss problem (see Ghosh and Sen (1991) and Siegmund (1985)) of finding a test δ with specified bounds on the error probabilities that minimizes the maximum expected sample size, min_δ sup_θ E_θ[N], over all values of θ. Unfortunately, a satisfactory solution of this problem is not available, although asymptotically optimal solutions are; for example, there is the so-called 2-SPRT of Lorden for distributions in the exponential family (see Huffman (1983)). The SPRT is usually very efficient when the error probabilities are not very small, even if the true θ is between θ0 and θ1. This is important for MSA since it has to be done as quickly as possible and, thus, cannot afford a large sample size (i.e., small error probabilities).
Another significant drawback of the SPRT is that it does not have a guaranteed finite upper bound on its stopping time. In other words, the SPRT may keep running for an arbitrarily long time. Since a finite bound on the stopping time is important for MSA, the above theorems can be modified to guarantee such a bound as well as the decision error probabilities, following an SPRT-type procedure developed recently in Zhu et al. (2002).
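One generic way to impose such a bound is simple truncation. The sketch below is not the Zhu et al. (2002) procedure, only a common-sense fallback (names ours): if neither boundary is crossed by a cap n_max, it forces a decision by the sign of the accumulated log-likelihood ratio.

```python
import math

def truncated_sprt(log_lr_terms, alpha=0.05, beta=0.05, n_max=5):
    """SPRT with a hard cap on the stopping time: decide by the sign of the
    accumulated log-likelihood ratio if no boundary is crossed by n_max."""
    a = math.log(beta / (1 - alpha))   # lower boundary -> accept H1
    b = math.log((1 - beta) / alpha)   # upper boundary -> accept H2
    s = 0.0
    for n, term in enumerate(log_lr_terms, start=1):
        s += term
        if s >= b:
            return "H2", n
        if s <= a:
            return "H1", n
        if n >= n_max:                 # forced decision at truncation
            return ("H2" if s > 0 else "H1"), n
    return "continue", n
```

Note that truncation trades the guaranteed error rates of the untruncated test for a guaranteed stopping time; the procedure of Zhu et al. (2002) is designed to control both.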
Remark 12.3.1 Note that Problem 2 is somewhat related to a problem typically considered under Gupta's (1956) subset-selection setup, in the general area of multiple comparisons. Along the lines of the subset-selection approach, one may consider the following version of Problem 2. Given the current model-set M, the goal is to find a subset of M such that the true model is inside the selected subset with some prespecified probability. However, for target tracking purposes, the formulation given in Problem 2 is more directly applicable, as it allows one to have non-disjoint sets M1 and M2. Also, in Problem 2 the subsets M1 and M2 are not random, whereas in the subset-selection approach the selected subset of M is random. One may look at Gupta and Huang (1981), Gupta and Panchapakesan (1991), and Mukhopadhyay and Solanky (1994) for subset selection and other common formulations available in the area of multiple comparisons.
Problems
Problem formulation
Solutions
Rank the model-set probabilities at time k in descending order as M_k^{(1)} ≥ M_k^{(2)} ≥ ... ≥ M_k^{(N_k)}.

For Problems 5 and 6: consider deleting M^{(j)}, ..., M^{(N_k)} if M_k^{(1)}/M_k^{(j)} ≥ B, and continue the test for the remaining model-sets.

For Problem 7: Accept M^{(1)}, ..., M^{(i)} if M_k^{(i)}/M_k^{(L+1)} ≥ B; reject M^{(j)}, ..., M^{(N_k)} if M_k^{(j)}/M_k^{(L)} ≤ A; continue to test until exactly L model-sets have been accepted or not rejected, and thus are chosen.
The model-set probabilities can be replaced by the model-set joint like-
lihoods, resulting in a Model-Set Likelihood Sequential Ranking Test
(MSL-SRT).
In the above, A and B are design parameters, which control the
error probabilities.
Problems 5-7 are thus solved by sequential ranking tests without
the need for an explicit formulation.
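One step of such a ranking test might look as follows. This is only a sketch of one plausible reading of the rule, with thresholds A < 1 < B: rank the model-set probabilities, accept a top set once it dominates the (L+1)-th largest by the factor B, and reject a lower set once it falls below the L-th largest by the factor A. The function and its interface are our own, and it assumes L is smaller than the number of candidate sets.

```python
def ranking_step(probs, L, A, B):
    """One step of a model-set sequential ranking test (a sketch).

    probs: dict set_id -> current model-set probability.
    Returns (accepted ids, rejected ids) for this step; all other
    sets continue to the next observation.
    """
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    ref_accept = ranked[L][1]      # (L+1)-th largest probability
    ref_reject = ranked[L - 1][1]  # L-th largest probability
    accepted = [i for i, p in ranked[:L]
                if ref_accept > 0 and p / ref_accept >= B]
    rejected = [i for i, p in ranked[L:]
                if ref_reject > 0 and p / ref_reject <= A]
    return accepted, rejected
```

In an MSP-SRT the probabilities would come from the MM estimator at each time step; replacing them with model-set joint likelihoods would give the MSL-SRT variant.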
12.4 EVALUATING PERFORMANCE
with state x = [x1, ẋ1, x2, ẋ2]′, where the process and measurement noise sequences w and v are white, uncorrelated with the initial state x0, mutually uncorrelated, and have constant means w̄, v̄ and covariances Q = 0.004²I, R = 100²I, respectively. Two generic types of model are considered: the nearly constant-velocity (CV) model and the constant-turn (CT) model (with a known turn rate), given in Li and Bar-Shalom (1993) and Bar-Shalom et al. (2001).
F_CV = [ F2  0  ]        F2 = [ 1  T ]
       [ 0   F2 ],            [ 0  1 ]

F_CT = [ 1    sin ωT / ω        0    −(1 − cos ωT)/ω ]
       [ 0    cos ωT            0    −sin ωT          ]
       [ 0    (1 − cos ωT)/ω    1     sin ωT / ω      ]
       [ 0    sin ωT            0     cos ωT          ]
where T = 5 seconds is the sampling period. Denote by 3°/s a CT model with known turn rate 3°/s. Note that CV corresponds to 0°/s. These models are commonly used in maneuvering target tracking and, in particular, in surveillance for air traffic control, as indicated in Bar-Shalom and Li (1995) and Bar-Shalom et al. (2001).
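The two transition matrices can be written down programmatically. The sketch below assumes the standard coordinated-turn form for the state [x1, ẋ1, x2, ẋ2]; ω is the turn rate in rad/s, so a 3°/s model would use math.radians(3). The sign convention here is the usual one and is our reconstruction, not the chapter's exact typesetting.

```python
import math

def f_cv(T):
    """Nearly-constant-velocity transition matrix for [x1, x1dot, x2, x2dot]."""
    return [[1, T, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, T],
            [0, 0, 0, 1]]

def f_ct(T, omega):
    """Known-turn-rate (coordinated-turn) transition matrix; omega in rad/s.
    Approaches the CV matrix as omega -> 0."""
    s, c = math.sin(omega * T), math.cos(omega * T)
    return [[1, s / omega,       0, -(1 - c) / omega],
            [0, c,               0, -s],
            [0, (1 - c) / omega, 1, s / omega],
            [0, s,               0, c]]
```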
A performance evaluator was used to evaluate the sequential procedures of the previous sections using 100 observation sequences {z_k}, where the true mode was known to the evaluator but not to the sequential procedures. The distribution of the true mode was known to neither the evaluator nor the procedures. The thresholds A and B below were determined by (12.3.13) with α = β, and thus AB = 1; therefore, only B is indicated. The range of thresholds used was 9 ≤ B ≤ 99, which corresponds to Type-I and Type-II error probabilities α = β ∈ [0.01, 0.1] for binary problems. All results are expressed in terms of the discrete time (sample size) k, that is, multiples of T.
First, consider two model-sets M1 = {0, 3°/s} and M2 = {0, −3°/s}. For the 100 observation sequences with two cases (2°/s and 5°/s, respectively) of the truth, the MS-SLRT and MS-SPRT (denoted as "likelihood" and "probability" in the plots, respectively) both led to 100% correct decisions (choosing M1). The average decision times (that is, sample sizes) are shown in Figure 12.4.1(b). When the truth was 1°/s, with a high percentage both the MS-SLRT and MS-SPRT refused to make a choice between M1 and M2 because they both include the true model (see Figure 12.4.1(a)). The solid and dashed curves correspond to the results of the MS-SLRT and MS-SPRT, respectively.
Now, consider three model-sets M1 = {−1°/s, 0, 1°/s}, M2 = {−3°/s, −1°/s, 0}, and M3 = {0, 1°/s, 3°/s}, with three cases of the truth: 1°/s, 2°/s, and 3°/s. The results of the MMS-SLRT and MMS-SPRT are displayed in Figure 12.4.2.
Figure 12.4.1: Average Decision Time and Percent of No Decisions of MS-SLRT and MS-SPRT for Two Model Sets. (a) Percent of no decisions versus threshold (truth: CT = 1°/s); (b) average decision time k versus threshold (truth: CT = 2°/s); "likelihood" and "probability" curves shown in each panel.
Figure 12.4.2: Average Decision Time and Percent of No Decisions of MMS-SLRT and MMS-SPRT for Three Model Sets. (a) Percent of no decisions versus threshold (truths 1°/s and 2°/s); (b) average decision time k versus threshold (truths 2°/s and 3°/s).
Figure 12.4.3: Percent of Incorrect Decisions and Average Decision Time of Sequential Ranking Tests for Five Model Sets. (a) Percent of incorrect decisions versus threshold (likelihood and probability tests, truths 1°/s and 3°/s); (b) average decision time k versus threshold (truths 1°/s and 3°/s).
When the truth was 1°/s, the tests refused to decide between model-sets M1 and M3 because both included the true model, although M2 was ruled out. When the truth was 2°/s, M3 was better than M1, but their only difference came from that between the models 3°/s and −1°/s, which is not large. When the truth was 3°/s, the MMS-SLRT and MMS-SPRT led to 100% correct decisions (not shown) and the decision time was shorter.
Finally, consider five model-sets
M1 = {−0.5°/s, 0, 0.5°/s}, M2 = {1°/s, 4°/s}, M3 = {−1°/s, −4°/s}, M4 = {3°/s, 7°/s, 15°/s}, M5 = {−3°/s, −7°/s, −15°/s},
with two cases of the truth: 3°/s and 1°/s. The results for the MSP-SRT and MSL-SRT are shown in Figure 12.4.3. In both cases there was not a single instance of "no decision," the decision error was low, and the decision time was short. When the truth was 3°/s, M4 rather than M2 was chosen because M4 included the true model while M2 did not (although it "covered" the true model). It is interesting to note that the decision time was shorter when the truth was 3°/s than when it was 1°/s.
The following observations can be made from these evaluations. The percent of decision errors is low and the average decision time (that is, sample size) is short. If the truth lies in an area that makes it too risky to decide on one model-set, the tests refuse to make a decision in order to avoid a large probability of decision error. Since the probability- and likelihood-ratio tests are optimal under different criteria, one is not superior to the other in general (Figure 12.4.1 favors the former but Figure 12.4.3 does not).
More information about these tests can be found in Li (2000a, 2000b), Li et al. (1999), and Li and Zhang (2000). For example, the MS-SLRT and MS-SPRT are used in the Model-Group Switching (MGS) algorithm and verified indirectly by its superior performance. The MMS-SLRT and MMS-SPRT are used in the extended MGS algorithm.
Sequential methodologies are widely used in statistical signal and data processing. They have found many applications in target detection, tracking, and recognition. We mention here a few examples worked out recently. Tartakovsky et al. (2001a, 2002) developed a scheme for the detection of targets using distributed sensors as an application of sequential procedures. An extension of that work to target recognition as well as detection is reported in Tartakovsky et al. (2001b), using invariant sequential procedures. Li et al. (2002) developed an SPRT-type sequential procedure to confirm, maintain, and delete tracks, along with an analysis of the life-span of a track.
12.5 APPENDIX
Since the test is actually the SPRT for the above modified simple hypothesis-testing problem, the optimality of the test follows from that of the SPRT for this problem.
A regularity condition for the optimality of this test is that the sequence of model-set log-likelihood ratios log Λ_k has to be a random walk.
Proof of Theorem 12.3.2. This theorem follows essentially from Theorem 12.3.1 by replacing P{s = m | s ∈ Ml} with P{s = m | s ∈ M}. Specifically, with the modified requirements (12.3.12), testing the composite hypothesis H1 against the composite alternative H2 of (12.3.2) reduces to testing a simple hypothesis against a simple alternative:

H1: p = p[z̃^k, s ∈ M1]   vs.   H2: p = p[z̃^k, s ∈ M2].

The likelihood ratio of these hypotheses turns out to be the probability ratio of the model-sets. Since the test is actually the SPRT for the above modified simple hypothesis-testing problem, the optimality of the test follows from that of the SPRT for this situation.
ACKNOWLEDGMENT
[4] Ghosh, B.K. and Sen, P.K. (1991). Handbook of Sequential Analysis,
edited volume, Marcel Dekker: New York.
[5] Ghosh, M., Mukhopadhyay, N. and Sen, P.K. (1997). Sequential
Estimation, Wiley: New York.
[18] Li, X.R. and Jilkov, V.P. (2001a). A Survey of Maneuvering Target Tracking, Part II: Ballistic Target Models. In Proc. 2000 SPIE Conf. on Signal and Data Processing of Small Targets, San Diego, California, U.S.A., 4473, 559-581.
[19] Li, X.R. and Jilkov, V.P. (2001b). A Survey of Maneuvering Target Tracking, Part III: Measurement Models. In Proc. 2000 SPIE Conf. on Signal and Data Processing of Small Targets, San Diego, California, U.S.A., 4473, 423-446.
[20] Li, X.R., Li, N. and Jilkov, V.P. (2002). An SPRT-Based Sequential Approach to Track Confirmation and Termination. In Proc. 2002 Int. Conf. on Inform. Fusion, Annapolis, Maryland, U.S.A., 951-958.
[21] Li, X.R., Zhao, Z.-L., Zhang, P. and He, C. (2002). Model-Set Design, Choice and Comparison for Multiple-Model Estimation, Part II: Examples. In Proc. 2002 Int. Conf. on Inform. Fusion, Annapolis, Maryland, U.S.A., 1347-1354.
[22] Li, X.R., Zhi, X.R. and Zhang, Y.M. (1999). Multiple-Model Estimation with Variable Structure, Part III: Model-Group Switching Algorithm, IEEE Trans. Aerosp. Elec. Sys., AES-35, 225-241.
[23] Lorden, G. (1976). 2-SPRT's and the Modified Kiefer-Weiss Problem of Minimizing an Expected Sample Size, Ann. Statist., 4, 281-291.
[24] Mukhopadhyay, N. and Solanky, T.K.S. (1994). Multistage Se-
lection and Ranking Procedures: Second-Order Asymptotics, Marcel
Dekker: New York.
[25] Ross, W.H. (1987). The Expectation of the Likelihood Ratio Criterion, Int. Statist. Rev., 55, 315-329.
[26] Siegmund, D. (1985). Sequential Analysis, Springer-Verlag: New
York.
[27] Tartakovsky, A.G., Li, X.R. and Yaralov, G. (2001b). Invariant Sequential Detection and Recognition of Targets in Distributed Systems. In Proc. 2001 Int. Conf. on Inform. Fusion, Montreal, Quebec, Canada, WeA3.11-WeA3.18.
[28] Tartakovsky, A.G., Li, X.R. and Yaralov, G. (2001a). Sequential Detection of Targets in Distributed Systems. In Proc. 2001 SPIE Conf. Signal Processing, Sensor Fusion, and Target Recognition, Orlando, Florida, U.S.A., 4380, 501-513.
[29] Tartakovsky, A.G., Li, X.R. and Yaralov, G. (2002). Sequential Detection of Targets in Multichannel Systems, IEEE Trans. Info. Theory, 48.
[32] Zhu, Y.M., Zhang, K.S. and Li, X.R. (2002). An SPRT-Type Procedure with Finite Upper Bound on Stopping Time (submitted for publication).
Addresses for communication