
IEEE Transactions on Power Apparatus and Systems, Vol. PAS-104, No. 11, November 1985

BAD DATA IDENTIFICATION METHODS IN POWER SYSTEM STATE ESTIMATION - A COMPARATIVE STUDY
M. Ribbens-Pavella

Th. Van Cutsem

L. Mili (*)

Department of Electrical Engineering; University of Liege


Sart-Tilman, B- 4000 Liege, Belgium
Abstract - The identification techniques available today are first classified into three broad classes. Their behaviour with respect to selected criteria is then explored and assessed. Further, a series of simulations is carried out with various types of bad data. Investigating the way these identification techniques behave allows completing and validating the theoretical comparisons and conclusions.

1. INTRODUCTION

In the list of a power system state estimator software routines, bad data identification is the last - but not least - satellite function. Its task is to guarantee the reliability of the data base generated through the estimator. Indeed, despite the preprocessing data validation techniques used to clear the data received at a control center, gross anomalies (such as bad data, modelling and parameter errors) may still exist during estimation. To avoid corrupting the resulting data base, it is of great importance that these anomalies are identified and further eliminated from the set of measurements. This explains why the need for a function capable of identifying bad data has been felt almost simultaneously with the need for the state estimation function itself. It also explains the number and diversity of research works carried out on the subject.
This paper aims at providing a comparative assessment of the "post-estimation" identification methods (1) available today. More specifically, it concentrates on evaluating the techniques able to identify bad data (BD), i.e. grossly erroneous measurements. These techniques are first classified, then explored and compared. Three broad classes are distinguished : the class of identification by elimination (IBE) [1-14], that of the non-quadratic criteria (NQC) [3,15-20], and the hypothesis testing identification (HTI) [21]. The investigations are based upon both theoretical considerations and practical experience. The latter has been acquired through simulations performed on four different power systems. The results reported here concern simulations performed on the IEEE 30-bus system, with the three possible types of multiple BD : noninteracting, interacting, and unidentifiable ones.
The paper is organized as follows. Section 2 gathers the material necessary for the intended exploration. The reader is supposed to be familiar at least with state estimation and BD detection techniques; so this Section focuses essentially on topological identifiability aspects and the selection of identifiability criteria. Section 3 gives a brief description of the various identification methods within their corresponding categories, while Section 4 investigates further and compares the three main methodologies. Finally, the exploration is completed and validated through the simulation results of Section 5.

85 WM 060-9  A paper recommended and approved by the IEEE Power System Engineering Committee of the IEEE Power Engineering Society for presentation at the IEEE/PES 1985 Winter Meeting, New York, New York, February 3 - 8, 1985. Manuscript submitted January 19, 1984; made available for printing November 19, 1984.

2. MISCELLANIES

Somewhat hybrid, this Section groups the various pieces of information necessary for the subsequent developments. The degree of the authors' personal perception and interpretation increases along the paragraphs. Starting with definitions of the usual symbols in 2.1, one is led up to some useful topological considerations and definitions in 2.3 and 2.4, and finally to the selection of relevant identifiability criteria to be used in the comparative assessment of the various identification methodologies.
2.1. STATE ESTIMATION : DEFINITIONS AND SYMBOLS

N.B. With some obvious exceptions, lower case italic letters indicate vectors; capital italic and capital Greek letters denote matrices.

One seeks the estimate x̂ of the true state x which best fits the measurements z related to x through the model

z = h(x) + e    (1)

where the customary notation is used:
z : the m-dimensional measurement vector;
x : the n-dimensional state vector of voltage magnitudes and phase angles; n = 2N - 1, N being the number of system nodes;
e : the m-dimensional measurement error vector; its i-th component is:
  - a normal noise N(0, σ_i²) if the corresponding measurement is valid,
  - an unknown quantity otherwise.
Moreover use will be made of the variable μ_i = E[e_i], where E stands for expectation.


The weighted least squares (WLS) estimate x̂ satisfies the optimality condition

H^T(x̂) R^-1 [z - h(x̂)] = H^T(x̂) R^-1 r = 0    (2)

where H = ∂h/∂x denotes the Jacobian matrix, R = diag(σ_i²), and the measurement residual vector is by definition

r = z - h(x̂) = W e    (3)

where W = I - H Σ H^T R^-1 and Σ = (H^T R^-1 H)^-1.

In the absence of BD, the measurement residual vector is distributed N(0, WR).
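As a purely illustrative numerical check (ours, not from the paper), relations (2)-(3) can be verified on a small linear model with a hypothetical Jacobian and noise covariances:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 2))                # Jacobian of a toy linear model
R = np.diag(rng.uniform(0.5, 2.0, 6))      # R = diag(sigma_i^2), hypothetical values
Ri = np.linalg.inv(R)
x_true = np.array([1.0, -0.5])
e = rng.normal(size=6) * np.sqrt(np.diag(R))
z = H @ x_true + e                         # z = h(x) + e, with h linear

Sigma = np.linalg.inv(H.T @ Ri @ H)        # Sigma = (H^T R^-1 H)^-1
x_hat = Sigma @ H.T @ Ri @ z               # solves the optimality condition (2)
r = z - H @ x_hat                          # measurement residual vector
W = np.eye(6) - H @ Sigma @ H.T @ Ri       # W = I - H Sigma H^T R^-1

assert np.allclose(r, W @ e)               # eq. (3): r = W e
assert np.allclose(W @ W, W)               # W is idempotent
assert np.allclose(W @ R, (W @ R).T)       # WR is symmetric, hence cov(r) = WR
```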
The presence of BD is currently detected through one of the variables below:

- the weighted residual vector

r_W = √(R^-1) r    (4)

- the normalized residual vector

r_N = √(D^-1) r , with D = diag(WR)    (5)

- the quadratic cost function

J(x̂) = r^T R^-1 r = r_W^T r_W    (6)
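The three detection variables can likewise be computed on a toy linear example (hypothetical data, ours); the identity J = r_W^T r_W of eq. (6) is checked numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(6, 2))                # hypothetical Jacobian
R = np.diag(rng.uniform(0.5, 2.0, 6))
Ri = np.linalg.inv(R)
z = H @ np.array([0.5, 2.0]) + rng.normal(size=6) * np.sqrt(np.diag(R))

Sigma = np.linalg.inv(H.T @ Ri @ H)
x_hat = Sigma @ H.T @ Ri @ z
r = z - H @ x_hat
W = np.eye(6) - H @ Sigma @ H.T @ Ri

rW = r / np.sqrt(np.diag(R))               # weighted residuals, eq. (4)
rN = r / np.sqrt(np.diag(W @ R))           # normalized residuals, eq. (5)
J = r @ Ri @ r                             # quadratic cost, eq. (6)

assert np.isclose(J, rW @ rW)              # J = rW^T rW
```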

2.2. DETECTABILITY OF BAD DATA

For any detection test, the probability β of non-detecting BD is given by

β = prob(|ξ| < λ)

(*) On leave from "Société Tunisienne de l'Electricité et du Gaz" (STEG), Tunis, Tunisia.
(1) The term "post" is used to clearly differentiate the identification methods of concern here from the preprocessing data validation techniques, which are beyond the scope of this paper.

0018-9510/85/1100-3037$01.00 © 1985 IEEE

Authorized licensed use limited to: to IEEExplore provided by Virginia Tech Libraries. Downloaded on January 16, 2010 at 13:31 from IEEE Xplore. Restrictions apply.

where ξ is the statistical variable of concern (r_Wi, r_Ni or J) with mean value μ_ξ and variance σ_ξ²; λ is the detection threshold. Hence, detecting the presence of BD requires that [24]

|μ_ξ| > λ - N_β σ_ξ    (7)
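A quick numerical check (ours, with hypothetical values λ = 3 and β = 1 %, so that the β-quantile of N(0,1) is N_β ≈ -2.33): a variable whose mean sits exactly at λ - N_β σ is missed with probability β, up to the negligible opposite tail:

```python
from math import erf, sqrt

def Phi(t):                                    # standard normal CDF
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

beta = 0.01
N_beta = -2.3263                               # beta-quantile of N(0,1)
lam, sigma = 3.0, 1.0                          # hypothetical threshold and std
mu = lam - N_beta * sigma                      # borderline mean from eq. (7)

miss = Phi(lam - mu) - Phi(-lam - mu)          # prob(|xi| < lam), xi ~ N(mu, sigma^2)
assert abs(miss - beta) < 1e-3
```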

Let us now consider the case of a single BD.

Definition. Given an error probability β, the detectability threshold of the i-th measurement is defined as the minimal magnitude of the corresponding weighted error e'_i necessary to detect the presence of BD with a probability Pd = 1 - β of success (the other measurements being affected by gaussian noises).

Fig. 1 shows the value of the relative detectability threshold corresponding to the r_W, r_N and J tests as a function of the W_ii coefficient. These curves, plotted via eq. (7), inspire the following comments:
(i) in the presence of a single BD (and in the absence of "critical pairs" [21]), the most powerful test is the one based on r_N; recall moreover that, within the linearized approximation and provided that e_j = 0 (for all j ≠ i), the largest normalized residual, |r_Ni|max, corresponds to the erroneous measurement. This is generally not true for |r_Wi|max. Hence the advantage of relying on normalized rather than on weighted residuals;
(ii) when the local redundancy decreases, W_ii decreases too; hence, in order to be detectable, the errors must be larger;
(iii) critical measurements are characterized by W_ii = 0 : their errors are thus undetectable. Indeed such measurements always have null residuals;
(iv) in the presence of multiple BD, property (i) does not hold anymore. Indeed, in this case, E[r_Ni] is a linear combination of the gross errors (e.g. see (2.11) in [21]);
(v) despite the above risk of erroneous judgement, the r_N criterion still remains the most reliable one; it will therefore be used to determine the suspected measurements : these are the measurements possessing normalized residuals larger than the fixed threshold.

Fig. 1 : Detectability thresholds vs. W_ii (α is the false alarm probability)

2.3. TOPOLOGICAL IDENTIFIABILITY OF BAD DATA

Given a set of BD, it is interesting to determine whether the measurement configuration is rich enough to allow their proper identification.

Definition. A set of BD is said to be topologically identifiable if their suppression causes neither the system's unobservability, nor the creation of critical measurements.

Proposition. To be identifiable, a set of BD must necessarily be topologically identifiable.

This proposition expresses the following evidence : in order to identify f BD among m' measurements, it is necessary that f ≤ m' - n', where n' is the number of unknowns to be estimated. Note that this is a necessary but not sufficient condition for proper identification; indeed, numerical aspects also have to be taken into account.
A reliable identification procedure should be able to recognize topologically unidentifiable BD; in such cases, it should declare the problem unsolvable and warn the operator against the lack of reliability of the available state estimate, rather than give unusable results.

2.4. MEASUREMENTS BECOMING CRITICAL DURING ELIMINATION

Identification methods based on (successive) eliminations of measurements may lead to situations where the remaining measurements are critical : the detection tests are then negative, since errors on critical measurements are undetectable. Now it is possible that errors remain on these critical measurements, which would heavily affect the accuracy of the final state estimate (the remaining errors being no longer filtered). In such cases, neither of the first two objectives of 2.5 is attained. Note that new critical measurements may be generated because of:
- the presence of topologically unidentifiable BD,
- the undue elimination of valid measurements.
In order to enhance the reliability of the final data base, we propose the following post-elimination procedure:
(i) search for all measurements which have become critical after elimination;
(ii) add these critical measurements to the list of the measurements declared false;
(iii) determine the estimates which would be affected by possible errors on the critical measurements and join this qualitative information to the final data base.
Step (i) can be carried out by simply comparing the lists of critical measurements before and after elimination.
The above procedure may apply to any identification method which involves elimination of measurements.

2.5. PERFORMANCE ASSESSMENT CRITERIA

Five criteria are selected for assessing the quality of the various identification methods. The first three of them are the main objectives sought by any identification approach as such. The two others concern its practical feasibility, i.e. the applicability requirements.

Localization of the BD : ability to localize exactly the BD, or at least to furnish a list of suspected measurements which includes all the BD and as few valid data as possible.

Correction of the final data base : the aptitude for clearing the final data base is of great practical importance and one of the most essential tasks of the overall state estimation process.

Recognition of topologically unidentifiable BD : whenever such BD arise, the algorithm should be able to draw up a list of suspected measurements as reduced as possible while containing all the BD; moreover, it should warn the operator of its inability to identify the suspected data which have become critical, and thereby of the possible existence of erroneous estimates, rather than provide him with misleading results.

Implementation requirements : practical considerations on the implementation and design should be taken into account, such as simplicity, adaptability to system modifications and, to a lesser extent, memory storage.

Computing time : it should be as short as possible so as to comply with the real-time requirements of the overall operation.
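The characterization of critical measurements by W_ii = 0 (comment (iii) of 2.2) also provides a simple computational test for step (i) of the post-elimination procedure. The following sketch (ours, on a deliberately poor toy configuration) flags measurements that are, or become, critical:

```python
import numpy as np

def W_matrix(H, R):
    """Residual sensitivity matrix W = I - H Sigma H^T R^-1."""
    Sigma = np.linalg.inv(H.T @ np.linalg.inv(R) @ H)
    return np.eye(H.shape[0]) - H @ Sigma @ H.T @ np.linalg.inv(R)

def critical(H, R, tol=1e-9):
    """Indices i with W_ii ~ 0, i.e. critical measurements."""
    return [i for i, w in enumerate(np.diag(W_matrix(H, R))) if w < tol]

# Hypothetical 3-measurement, 2-state linear model:
# measurement 2 is the only one observing x[1], hence critical.
H = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
R = np.eye(3)
assert critical(H, R) == [2]

# Eliminating measurement 1 makes both remaining measurements critical.
keep = [0, 2]
assert critical(H[keep], R[np.ix_(keep, keep)]) == [0, 1]
```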

3. BAD DATA IDENTIFICATION. BRIEF OVERVIEW

Two criteria are used to classify the various BD identification methods:
- the nature of the statistical tests of concern, determined by the variables they involve,
- the way of eliminating BD and clearing the data base.
The first criterion leads to distinguishing HTI from the other methods, whereas the second leads to regrouping the various nonquadratic criteria in a class distinct from that of the elimination procedures.


3.1. IDENTIFICATION BY ELIMINATION (IBE)
Conceptually, this identification is the continuation of the BD detection step, which is a global criterion involving the residual vector r. The leading idea is that, in the event of a positive detection test, a first list of candidate BD is drawn up on the basis of an r_N (or r_W) test; then successive cycles of elimination-reestimation-redetection are performed until the detection test becomes negative.
Two subclasses may be distinguished, corresponding to the elimination of single or of grouped BD. Introduced by Schweppe et al. [1] almost at the same time as the state estimation itself, the former consists in eliminating at each cycle the measurement having the largest magnitude of the normalized or weighted residual. As for the grouped elimination, a grouped residual search has been proposed by Handschin et al. [3]; it consists in eliminating a group of suspected measurements which supposedly includes all BD, and reinserting them afterwards one-by-one.
Another variant of these procedures consists in solving eqs. (3) with respect to one or several suspected measurement errors, then in correcting them by subtracting these errors. This measurement error estimation has first been proposed by Aboytes and Cory [6]. Later on, Garcia et al. [7,8] have explored the simplified way of correcting one measurement at a time (the one having at each step the largest |r_Ni|) and keeping the W matrix constant during the subsequent computations of r_N. Note that this technique has also been applied by Simoes-Costa et al. [14] to the orthogonal row processing sequential estimator. The work by Xiang Nian-de et al. [9-11] has significantly contributed to elucidate this question. These authors have brought up the singular character of W and have proposed its partitioning so as to estimate only s (s < m - n) out of the m measurement errors. Moreover, they have clearly pointed out the fact that correcting these s measurements amounts to eliminating them. Attempting to improve this technique, Ma Zhi-qiang proposed to process combinatorial sets of suspected measurements and to identify the BD through a detection test based on an interesting formula he established in Ref. [12] (see 4.1.3 below). Now, because of the equivalence between correction and elimination, the fact remains that all these techniques belong to the class of the procedures by elimination.
3.2. NON QUADRATIC CRITERIA (NQC)


Almost in parallel with the above approach, the NQC have started being developed and explored. The idea of this methodology differs totally from the preceding one : here the identification-elimination of BD is part of the state estimation itself. The rejection of the suspected measurements depends upon the magnitudes of the (normalized or weighted) residuals : the larger the residual, the smaller the weight allocated to the corresponding measurement, and the larger the degree of its rejection.
Initiated by Merrill and Schweppe [15], the NQC methods have been further developed and analyzed by Handschin et al. [3] and by Muller [17]. More recently, a comparative study of some of them has been carried out by Lo et al. [19] and by Falcao et al. [20].
3.3. HYPOTHESIS TESTING IDENTIFICATION (HTI)
Unlike the two previous methodologies, HTI uses individual criteria, particularized to each suspected measurement. The variables of concern here are the error estimates ê_si of some of the suspected measurements; these are evaluated through a suitable partitioning of eq. (3) and a linear estimation. Exploiting the statistical properties of each ê_si through an individual identification test allows deciding whether the corresponding measurement is erroneous or not. This method, along with two strategies for taking decisions, is developed in Ref. [21].

4. BAD DATA IDENTIFICATION. CRITICAL ANALYSIS


4.1. IDENTIFICATION BY ELIMINATION (IBE)

4.1.1. Description
The methods of this class rely on the r_W or the r_N test. The choice between r_W and r_N implies a trade-off between good applicability features (simplicity, time and core savings) and reliability. Generally, the poor performances of r_W (apart from the special case of high redundancy and a single BD) make the r_N test worth the additional implementation effort. Nevertheless, the latter is not reliable enough either; indeed, in the case of multiple interacting BD, the one-to-one correspondence between largest |r_Ni| and erroneous measurement stops being guaranteed : valid measurements may thus be declared false and vice-versa.
Note that the decision is taken on a global basis given by the sole detection test, which just informs about the existence of BD among the measurements, but does not indicate whether the eliminated ones are actually erroneous.
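The elimination-reestimation-redetection cycle described above can be sketched as follows on a toy linear model (ours; data, gross error and detection threshold are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 10, 2
H = rng.normal(size=(m, n))                    # hypothetical Jacobian
sig = np.full(m, 0.1)
z = H @ np.array([1.0, -1.0]) + rng.normal(size=m) * sig
z[3] += 4.0                                    # one gross error (40 sigma)

idx = list(range(m))                           # surviving measurements
for _ in range(m - n - 1):
    Hm, zm, sm = H[idx], z[idx], sig[idx]
    Ri = np.diag(1.0 / sm**2)
    Sigma = np.linalg.inv(Hm.T @ Ri @ Hm)
    x = Sigma @ Hm.T @ Ri @ zm                 # (re)estimation
    r = zm - Hm @ x
    if r @ Ri @ r < 30.0:                      # detection test (hypothetical threshold)
        break
    W = np.eye(len(idx)) - Hm @ Sigma @ Hm.T @ Ri
    rN = r / np.sqrt(np.diag(W) * sm**2)       # normalized residuals, eq. (5)
    idx.pop(int(np.argmax(np.abs(rN))))        # eliminate the largest-|rN| measurement

assert 3 not in idx                            # the gross error has been eliminated
```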

4.1.2. Assessment

Pros:
* it is simple, since the only computation it needs besides estimation is that of the residuals;
* it is capable of warning the operator that the BD are topologically unidentifiable, provided the method of 2.4 is implemented.

Cons:
* it is heavy, since it requires a series of reestimations-detections after each elimination; this may lead to computer times incompatible with the on-line requirements;
* it may lead to a degradation of the measurement configuration and a subsequent drop of the power of the detection test (see Fig. 1); this in turn may cause an important probability of non-detection of remaining BD (especially when they become critical);
* it can provoke an undue elimination of valid measurements, causing not only a rough identification but also a drop of the detection test power. When using the r_N test, this situation arises in the case of multiple interacting BD or of BD located in regions with low local redundancy, i.e. in the case of stringent identification conditions. On the other hand, the r_W test may lead to a degradation even in mild situations.

4.1.3. Remarks on the correction of measurements

Within the procedure by elimination, two variants may be distinguished. The first consists in correcting, after each reestimation, the measurement having the largest |r_Ni| by subtracting from its value the estimate

ê_i = W_ii^-1 r_i    (8)

while keeping the W matrix constant.
The second variant consists in correcting a group of s selected measurements among the suspected ones by subtracting from their values the estimates

ê_s = W_ss^-1 r_s    (9)

where:
s : denotes the selected measurements,
W_ss : is the corresponding (s x s)-dimensional submatrix of W,
r_s : is the corresponding s-dimensional subvector of r.

To avoid successive reestimations of the state vector, the following correction formula for J(x̂) proposed by Ma Zhi-qiang [12] can be used:

J(x̂_c) = J(x̂) - ê_s^T R_s^-1 r_s    (10)

Here x̂_c is the new state vector obtained from the measurements corrected by ê_s (i.e. eliminated). Therefore, J(x̂_c) has a χ²-distribution with (m - n - s) degrees of freedom.
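The equivalence between grouped correction and elimination, and the corresponding reduction of the quadratic cost J, can be verified numerically on a toy linear model (ours; data and gross errors hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 8, 3
H = rng.normal(size=(m, n))                    # hypothetical Jacobian
R = np.diag(rng.uniform(0.5, 2.0, m))
Ri = np.linalg.inv(R)
z = H @ rng.normal(size=n) + rng.normal(size=m) * np.sqrt(np.diag(R))
z[1] += 15.0
z[4] -= 12.0                                   # two gross errors

def wls(Hm, Rm, zm):
    Rmi = np.linalg.inv(Rm)
    return np.linalg.solve(Hm.T @ Rmi @ Hm, Hm.T @ Rmi @ zm)

Sigma = np.linalg.inv(H.T @ Ri @ H)
W = np.eye(m) - H @ Sigma @ H.T @ Ri
x_hat = wls(H, R, z)
r = z - H @ x_hat
J = r @ Ri @ r                                 # quadratic cost of eq. (6)

s = [1, 4]                                     # selected suspected measurements
e_s = np.linalg.solve(W[np.ix_(s, s)], r[s])   # ê_s = W_ss^-1 r_s

zc = z.copy()
zc[s] -= e_s                                   # corrected measurements
k = [i for i in range(m) if i not in s]        # measurements kept after elimination
x_corr = wls(H, R, zc)
x_elim = wls(H[k], R[np.ix_(k, k)], z[k])
assert np.allclose(x_corr, x_elim)             # correcting == eliminating

r_elim = z[k] - H[k] @ x_elim
J_elim = r_elim @ np.linalg.inv(R[np.ix_(k, k)]) @ r_elim
assert np.isclose(J_elim, J - e_s @ Ri[np.ix_(s, s)] @ r[s])  # cost drop ê_s^T R_s^-1 r_s
```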


The advantages and drawbacks of the above techniques are summarized hereafter.

Pros:
* The main attractiveness of these techniques is that the correction does not affect the measurement configuration. Hence, the gain matrix can be kept constant during the successive reestimations of the whole identification procedure, while preserving the good convergence of the minimization procedure. On the contrary, eliminating BD may deteriorate this convergence. It may even happen that a procedure which converges properly with the above technique diverges when eliminating the BD.

Cons:
In addition to the weaknesses of the very procedure by elimination listed above, these correction techniques induce the following disadvantages.
As for the single correction-elimination:
* there is a risk that some measurements previously corrected become erroneous again. Indeed, in order for correction and elimination to be equivalent at each step, all the (s-1) previously corrected measurements must be corrected again along with the last one through eq. (9) (see Ref. [21]);
* there is a greater risk of declaring false a valid measurement, because of the approximation of the normalized residual. Indeed, the variances of the residuals computed on the basis of the initial W matrix are no longer valid, since the residuals of the non-corrected measurements are equal to those resulting from the actual elimination of the corrected measurements, and the residuals of the corrected ones are zero (if the correction is carried out only through eq. (10)).
Concerning the grouped correction-elimination:
* the computation time may increase significantly (and may even become prohibitive with the number of times the linear system given by (8) and (9) is solved), even if a grouped residual search is used.

4.2. IDENTIFICATION BY NQC

4.2.1. Description
The NQC methodology consists in minimizing the cost function

J(x) = Σ_{i=1}^{m} f_i(r_i/σ_i)    (11)

where f_i is quadratic when |r_Xi| < γ; here r_Xi denotes either r_Wi or r_Ni, and γ is a properly chosen threshold. When |r_Xi| ≥ γ, f_i takes one of the following forms [3] : quadratic-tangent (QT), quadratic-linear (QL), quadratic-square root (QR), quadratic-constant (QC), etc.
Applying the Gauss-Newton algorithm to minimize (11) gives the following iterative algorithm [3,17]:

H^T P H [x(l+1) - x(l)] = H^T Q [z - h(x(l))]    (12)

where P and Q are diagonal weighting matrices depending on the residuals. The comparison of eq. (12) with the corresponding basic WLS algorithm (P = Q = R^-1) shows that the method consists in modifying the weights of the measurements according to their residuals. Fig. 2 indicates the variation of the weight Q_ii with the magnitude of the corresponding residual. As can be seen, the larger |r_Wi| (resp. |r_Ni|), the stronger the rejection of the corresponding measurement. This figure also compares the rejection effect of the various criteria. In particular, the QC criterion is a borderline case, since it purely eliminates those measurements whose residuals are larger than γ. Therefore, minimizing the QC criterion amounts to eliminating and/or reinserting measurements at each iteration.
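The rejection mechanism can be sketched with the QC criterion on a toy linear model (ours, hypothetical data): the weight of any measurement whose weighted residual exceeds γ is set to zero at each iteration, which is exactly the eliminate-and/or-reinsert behaviour described above:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 10, 2
H = rng.normal(size=(m, n))                  # hypothetical Jacobian
sig = np.full(m, 0.1)
x_true = np.array([1.0, 2.0])
z = H @ x_true + rng.normal(size=m) * sig
z[0] += 5.0                                  # one gross error (50 sigma)

gamma = 5.0                                  # rejection threshold
q = 1.0 / sig**2                             # initial WLS weights, R^-1
x = np.linalg.solve(H.T @ (q[:, None] * H), H.T @ (q * z))   # WLS starting point
for _ in range(10):
    rW = (z - H @ x) / sig                   # weighted residuals
    q = np.where(np.abs(rW) <= gamma, 1.0 / sig**2, 0.0)     # QC: reject if |rW| > gamma
    x = np.linalg.solve(H.T @ (q[:, None] * H), H.T @ (q * z))

assert q[0] == 0.0                           # the bad datum is rejected
assert np.allclose(x, x_true, atol=0.5)      # estimate recovered from the valid data
```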

4.2.2. Assessment

Pros. The main advantage of the NQC method lies in its simplicity. Indeed, on one hand, it can be implemented through a simple transformation of the basic WLS algorithm; on the other hand, the estimation and identification steps are carried out in a single procedure, which avoids successive reestimations.

Fig. 2 : Variation of Q_ii vs. r_Wi/γ

Cons. The method suffers from the following serious drawbacks.
* Possible existence of local minima : this, however, can be circumvented by using as a starting point the result of a WLS estimation.
* Strong tendency to slow convergence or even to divergence : the NQC exhibit a slower convergence than the corresponding quadratic criterion. This can be explained as follows:
- the shape of the cost function is more intricate;
- the rejection of many measurements may lead to numerically unobservable situations, especially in cases of poor local redundancy and/or multiple interacting BD.
* High risk of wrong identification. Schematically, the NQC rely on measurements having small residuals (with respect to γ) and tend to reject the others. Now, since there is no one-to-one correspondence between large residuals and large measurement errors (see 4.1), it may happen that valid measurements are rejected whereas erroneous ones are kept. In such a case, the estimate is much less reliable than that given by the quadratic estimation without any BD processing.
* No recognition of topologically unidentifiable BD situations : in this case, results are unpredictable. Moreover, the convergence is generally affected, since the NQC tend to reject too many suspected measurements.
* Partial rejection of BD, except for the QC criterion. This implies a (vicious) compromise between valid data and BD; the accuracy of the resulting estimate is thus corrupted, since it is influenced by wrong information contained in the BD. Inspired by Muller [17], Fig. 2 shows that the QT criterion is more subject to this degradation.

4.3. IDENTIFICATION BY HTI

4.3.1. Description
The HTI method comprises three main steps.
(i) At the end of a standard detection test, which presumably has shown the presence of BD, the measurements are arranged in decreasing values of |r_Ni|, i.e. in decreasing suspicion. A list of s measurements selected among the suspected ones is drawn up, and an estimate ê_s of the measurement error vector e_s is computed via eq. (9). By means of eq. (10), the J(x̂_c) test allows verifying whether all the BD have been selected.
(ii) On the basis of the variance of ê_si, of the i-th measurement assumed to be valid and for a fixed risk α, a threshold is computed:

λ_i = N_{1-α} √var(ê_si) = ν_i σ_{êsi}    (13)

where ν_i = N_{1-α}.
(iii) Comparing |ê_si| with λ_i allows deciding whether the i-th measurement is valid (|ê_si| < λ_i) or false. Note that, unlike the detection test, this identification test is particularized to each processed measurement; indeed (see [21])

E[ê_si] = e_si    (14)

The HTI method may be exploited through either of the two strategies proposed in Ref. [21]:

Strategy α : the decision is taken with a fixed type α error probability of declaring false a measurement which is valid.

Strategy β : the decision is taken with a fixed type β error probability of declaring valid a measurement which is false. More explicitly, this strategy consists in adjusting the parameter ν_i for each selected measurement and in refining the successive s lists by selecting at each cycle only the measurements which have yielded a positive hypothesis test.
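The key property (14) - unbiasedness of ê_s even in the presence of interacting gross errors within the selected set - can be checked by a small Monte Carlo experiment (ours, hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 8, 3
H = rng.normal(size=(m, n))                    # hypothetical Jacobian
R = np.diag(np.full(m, 0.04))                  # sigma_i = 0.2 for all i
Ri = np.linalg.inv(R)
Sigma = np.linalg.inv(H.T @ Ri @ H)
W = np.eye(m) - H @ Sigma @ H.T @ Ri

s = [0, 1]                                     # two selected (interacting) bad data
bias = np.array([3.0, -2.0])                   # gross errors on the s measurements
Wss = W[np.ix_(s, s)]

trials = 2000
acc = np.zeros(len(s))
for _ in range(trials):
    e = rng.normal(size=m) * 0.2               # gaussian noise on all measurements
    e[s] += bias                               # plus the gross errors
    r = W @ e                                  # residuals, eq. (3)
    acc += np.linalg.solve(Wss, r[s])          # ê_s = W_ss^-1 r_s

assert np.allclose(acc / trials, bias, atol=0.05)   # E[ê_s] = e_s, eq. (14)
```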

4.3.2. Assessment

Pros:
* The HTI method is generally able to identify all BD within a single step (or at worst within two steps). This is especially true for strategy β. Concerning strategy α, experience has shown that, when all the BD have not been identified by the first test, a second one, performed after a reestimation, is sufficient to complete the identification. Note that in both strategies, situations where all BD have not been selected may lead to a slightly larger number of reestimations.
* This method is able to identify strongly interacting BD. This important advantage results from eq. (14), which shows that, unlike the residuals, the estimate ê_si is not affected by the presence of BD among the other measurements. In other words, the very notion of interacting BD becomes meaningless.
* The method treats properly topologically unidentifiable BD. Indeed, the procedure of 2.4 applies to the HTI method as well.

Cons:
* There is a risk of poor identification, corresponding to the case where one or several BD are not selected. This risk can however be alleviated through appropriate techniques [21].
* The method requires the computation of the W_ss matrix, whereas the other procedures merely need the diagonal of the W matrix. Note however that the technique proposed in [21] avoids the necessity of computing the complete Σ matrix.

5. COMPARING SIMULATION RESULTS


5.1. SIMULATION CONDITIONS

of successive reestimations (and hence to save computer


time), the active and reactive measurement subsets are
processed in parallel, i.e. an active and a reactive
measurements are eliminated at the same time (as proposed in Ref. [7]). This shortening is based on the hypothesis of decoupling between active and reactive variables in E.H.V. power systems.
NQC. When the detection tests reveal presence of BD
among the measurements, a new estimation is performed
based on one of the proposed NQC. To overcome the difficulty of local minima, the starting point of the iterative procedure is the estimate given by the WLS estimator (as proposed in Ref. [3]).
The threshold y - which determines the transition
from quadratic to nonquadratic estimation - has been
taken equal to 5. Experience has shown that this choice
is reasonable; indeed a too small value for this threshold leads to the rejection of too many measurements
and hence to convergence problems, whereas a too large
value results in a poor BD rejection.
The study of NQC has not been extended to the case
of a threshold varying during the iterative process;our
experience makes us think that this refinement is not
capable of significant improvements.
HTI. The elements of the W matrix needed for the computation of the normalized residuals and for the Wss,
submatrix are obtained from the available jacobian H
and gain G matrices. In practice, H and G are kept
constant after the first two iterations (i.e. they are
computed and/or factorized only twice). Experience has
shown that this does not affect the accuracy of Wss
provided that H and G are kept constant at the same
iteration step.
The number s of selected measurements is arbitrarily limited to 30 but -when the test on J(Xc) (see
4.1.3)
detects the presence of BD among the remaining
measurements, groups of 10 additional measurements are
successively appended to the previous selection.
Concerning the strategy a , the parameter V has
been taken equal to 2 (a= 4.6%). The choice of a higher
value (3.0 for example) could result in an incomplete
BD identification; indeed, in the presenceof inaccurate
the corresponding S error probability
estimates &Si
is too high. This is one of the reasons for considering strategy β.
As for strategy β, the parameters of concern take on the following values :

|e_si| = 40 MW/MVar ,   N = -2.32 ,   λi = 2.3 √Σii      (15)

(Nλi)max = 3 ,   0 < vi < 3                               (15')

5.1.1. The test systems

All the identification methods have been tested on two test networks and two real systems, namely the IEEE 30-bus and 118-bus networks, and the Belgian 400/225/150/70 kV and the Tunisian 220/150/90 kV power systems. For the former two, the measurement configurations have been fixed randomly and further adjusted so as to comply with observability constraints while keeping an overall redundancy of about 2. As for the two others, their (actual) configurations have a redundancy of 1.9 (Belgian) and 2.8 (Tunisian). The variety of the systems' characteristics (with respect to size, topology, electrical parameters and measurement locations) allows drawing valid conclusions as regards BD analysis.
For purposes of illustration, the well-known IEEE 30-bus system is chosen here; its diagram along with the adopted measurement configuration and characteristics are shortly described in the Appendix.

5.1.2. The tested methods

The results reported below are merely concerned with the most important variants of each of the three identification methodologies. Some specific implementation questions are also discussed.
IBE. Because of the inappropriateness of the grouped elimination, only the single elimination scheme is considered here. However, in order to decrease the number

5.1.3. The test cases

In order for an identification method to be practically effective, it has to pass the exam on multiple BD. The cases chosen to be reported below pertain to the three possible types of such BD :
- 1st case : multiple interacting BD located around the same node;
- 2nd case : multiple noninteracting BD having very different magnitudes and belonging to poor and rich areas;
- 3rd case : topologically unidentifiable BD.
The above list is certainly not exhaustive but nevertheless sufficient to illustrate the considerations of Section 4.

5.2. FIRST CASE : MULTIPLE INTERACTING BAD DATA

Four interacting BD surrounding node 1 have been introduced. Their degree of interaction is low to moder-
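The single elimination scheme retained for IBE can be sketched on a deliberately tiny model: one scalar state observed by m direct measurements, so that the WLS estimate is the sample mean and the residual sensitivity matrix W is diagonal with terms (1 - 1/m). This is an illustrative sketch under those assumptions, not the paper's implementation; the threshold 3 mirrors the |rN i| < 3 test used below.

```python
import math

def ibe_single_elimination(z, sigma, threshold=3.0):
    """Single-elimination IBE sketch on the toy model z_i = x + e_i:
    estimate x by WLS (here the sample mean), form normalized residuals
    rN_i = r_i / (sigma * sqrt(1 - 1/m)), and while the detection test
    fires, eliminate the measurement with the largest |rN_i| and re-estimate."""
    kept = list(range(len(z)))
    eliminated = []
    while len(kept) > 1:
        m = len(kept)
        x_hat = sum(z[i] for i in kept) / m            # WLS estimate
        denom = sigma * math.sqrt(1.0 - 1.0 / m)       # sigma * sqrt of diag(W)
        rN = {i: (z[i] - x_hat) / denom for i in kept}
        worst = max(kept, key=lambda i: abs(rN[i]))
        if abs(rN[worst]) <= threshold:                # detection test negative
            break
        eliminated.append(worst)                       # eliminate, then re-estimate
        kept.remove(worst)
    return x_hat, eliminated

# one gross error among five measurements of the true value x = 10
z = [10.1, 9.9, 10.2, 25.0, 9.8]
x_hat, bad = ibe_single_elimination(z, sigma=0.2)
print(round(x_hat, 6), bad)   # -> 10.0 [3]
```

With a single, strongly deviating bad datum the loop stops after one elimination; the interacting multiple-BD cases studied in this section are precisely those where this greedy largest-residual rule can go wrong.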

TABLE I - CHARACTERISTICS OF THE FOUR INTERACTING BD

Bad data    Actual value h_i(x)   "Measured" value z_i   e_i = z_i - h_i(x)   |e_si|
FLP 1-2           177.3                   0.0                  -177.3          118.2
FLQ 1-2           -25.7                  30.0                    55.7           37.1
INP 1             261.2                   0.0                  -261.2          174.1
INQ 1             -27.1                  30.0                    57.1           38.1

Authorized licensed use limited to: to IEEExplore provided by Virginia Tech Libraries. Downloaded on January 16, 2010 at 13:31 from IEEE Xplore. Restrictions apply.
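The last column of Table I is the weighted error |e_si| = |z_i - h_i(x)| / σ_i; with the σ = 1.5 MW/MVar given in the Appendix for power measurements, the tabulated values can be checked directly (a small sketch):

```python
# bad data of Table I : (name, actual value h_i(x), "measured" value z_i)
table_1 = [("FLP 1-2", 177.3,  0.0),
           ("FLQ 1-2", -25.7, 30.0),
           ("INP 1",   261.2,  0.0),
           ("INQ 1",   -27.1, 30.0)]
SIGMA = 1.5   # MW/MVar, power-measurement standard deviation (Appendix)

# e_i = z_i - h_i(x) and weighted error |e_si| = |e_i| / sigma_i
rows = [(name, z - h, abs(z - h) / SIGMA) for name, h, z in table_1]
for name, e, e_s in rows:
    print(f"{name}: e_i = {e:7.1f}, |e_si| = {e_s:6.1f}")
```

Running this reproduces the e_i and |e_si| columns of Table I to one decimal.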

TABLE II - SUCCESSIVE LISTS OF SUSPECTED MEASUREMENTS IN THE SINGLE ELIMINATION PROCEDURE THROUGH THE rN TEST

1st estimation - J(x̂) = 15211.6 > 87.0
  Active   : FLP 2-1 (-81.7), INP 1 (-74.1), FLP 1-3 (49.8), FLP 1-2 (-46.7), INP 2 (-41.6)
  Reactive : FLQ 2-1 (28.0), FLQ 1-2 (22.7), INQ 1 (18.2), INQ 2 (17.0), FLQ 1-3 (-10.5)

2nd estimation - J(x̂) = 7693.8 > 84.5
  Active   : INP 2 (-71.8), FLP 1-3 (56.6), INP 1 (-40.5), FLP 4-3 (-28.7), FLP 1-2 (-24.0)
  Reactive : INQ 2 (29.3), FLQ 1-2 (15.7), INQ 5 (13.4), FLQ 1-3 (-12.0), FLQ 4-3 (10.6)

3rd estimation - J(x̂) = 1753.3 > 82.1
  Active   : FLP 1-3 (39.5), INP 1 (-23.6), FLP 4-3 (-22.1), FLP 1-2 (10.5), FLP 2-6 (-7.8)
  Reactive : FLQ 4-3 (10.8), FLQ 1-3 (-6.3), FLQ 6-2 (-4.0), FLQ 1-2 (3.2)

4th estimation - J(x̂) = 156.2 > 79.6
  Active   : INP 1 (-10.6), FLP 4-3 (10.6), FLP 1-2 (10.6)
  Reactive : all |rN i| < 3

5th estimation - all |rN i| < 3 ; J(x̂) = 43.5 : detection test negative

ate. Their characteristics are given in Table I (values


in MW/MVar). They are of both types, IN (injection) and
FL (flow), of P/Q (active/reactive power).

5.2.1. Identification by elimination

5.2.1.1. Elimination based on rN

The identification procedure requires four successive elimination-reestimation cycles, after the alarm of
the detection test. They are summarized in Table II. The elimination of the fourth active measurement makes two others critical. The final list of measurements labelled false is thus the following :
- eliminated : FLP 2-1, FLQ 2-1; INP 2, INQ 2; FLP 1-3,
FLQ 4-3; INP 1;
- become critical : FLP 4-3, FLP 1-2.
The final state estimate is the one obtained at the
end of the fifth estimation; some characteristic values
are reported in column four of Table IV (see next page).
The results inspire the following comments.
(i) Both erroneous active measurements are present in
the final list, even if one of them has been included
thanks to the critical measurement analysis.
(ii) Three valid measurements have incorrectly been declared false.
(iii) Neither of the two erroneous reactive measurements has been identified. Indeed the improper elimination of three valid (reactive) data caused an important weakening of the measurement configuration. This in turn provoked a decrease in the value of the Wii coefficients and hence in the detection capability, as described in 2.2. A more detailed analysis of this question is given below.
(iv) The final state estimate is completely erroneous in a certain neighbourhood of node 1, since FLP 1-2, FLQ 1-2 and INQ 1 have not been eliminated.

It is interesting to explore further the mechanism of detection capability decrease by considering the degree of BD interaction. Let e1 (resp. e2) be the weighted error affecting FLQ 1-2 (resp. INQ 1). We determine the domain D1 of the two-dimensional space (e1, e2) in which the probability to detect the presence of BD is smaller than a given value Pd (Pd = 0.9 hereafter, hence NPd = 1.28). Using eq. (7) and taking into account that NPd = -N(1-Pd) yields

| W11 e1 + W12 e2 | / √W11  <  λ + NPd          (16)

| W21 e1 + W22 e2 | / √W22  <  λ + NPd          (17)

Substituting into (16) and (17) the values of the Wij coefficients before any elimination (see Table III) yields

-4.28 <  0.886 e1 - 0.310 e2 < 4.28             (18)

-4.28 < -0.380 e1 + 0.724 e2 < 4.28             (19)

These inequalities define the domain D1 plotted in Fig. 3.
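Membership of an error pair (e1, e2) in the non-detection domain D1 can be checked numerically; the sketch below evaluates inequalities (16)-(17) with the W terms of Table III and the bound λ + NPd = 3 + 1.28 = 4.28 (an illustrative check, with the Table I error pair as second example):

```python
import math

# W terms relative to FLQ 1-2 and INQ 1 before any elimination (Table III)
W11, W12, W22 = 0.785, -0.275, 0.524
BOUND = 3.0 + 1.28        # lambda + N_Pd = 4.28

def in_domain_D1(e1, e2):
    """True when (e1, e2) satisfies both (16) and (17), i.e. when the BD
    pair is detected with probability smaller than Pd = 0.9."""
    lhs16 = (W11 * e1 + W12 * e2) / math.sqrt(W11)   # eq. (16), numerically (18)
    lhs17 = (W12 * e1 + W22 * e2) / math.sqrt(W22)   # eq. (17), numerically (19)
    return abs(lhs16) < BOUND and abs(lhs17) < BOUND

print(in_domain_D1(4.0, 4.0))      # small conforming errors escape detection: True
print(in_domain_D1(37.1, 38.1))    # the Table I pair is detected at this stage: False
```

Re-running the same check with the post-elimination W terms of Table III shows the domain widening that Figs. 3 and 4 illustrate.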

TABLE III - SUCCESSIVE VALUES OF W-MATRIX TERMS RELATIVE TO THE BD

                 Before any          After elim. of        After elim. of FLQ 2-1,
                 elimination         FLQ 2-1 and INQ 2     INQ 2 and FLQ 4-3
                 FLQ 1-2   INQ 1     FLQ 1-2   INQ 1       FLQ 1-2   INQ 1
FLQ 1-2           0.785   -0.275      0.564   -0.430        0.344   -0.330
INQ 1            -0.275    0.524     -0.430    0.382       -0.330    0.336

The relatively restricted extent of D1 denotes a good ability of BD detection.
On the other hand, substituting the values of the
Wij coefficients after elimination of FLQ 2-1, INQ 2
and FLQ 4-3 (see Table III) gives

-4.28 <  0.587 e1 - 0.563 e2 < 4.28             (20)

-4.28 < -0.569 e1 + 0.580 e2 < 4.28             (21)

The corresponding domain D2 is plotted in Fig. 4. One can see that D2 is notably larger than D1. This illustrates the drop in the power of the detection test. Note that the actual values of the two BD (see Table I) are located just inside D2; this explains why they are no longer detected. Table III shows the successive decrease in the terms of concern of the W matrix resulting from the successive eliminations, and hence the corresponding increase in the degree of BD interaction.

5.2.1.2. Elimination based on rW

The results and the conclusions are similar except that the measurements are not eliminated in the same order :
- eliminated : FLP 2-1, FLQ 2-1; FLP 1-3, INQ 2; INP 2; FLP 1-2, FLQ 1-3;
- become critical : INP 1, FLP 4-3.
Moreover, the corresponding domains D1 and D2 are larger than in the previous case.

5.2.2. Identification by NQC


The state estimates given by the QT, QL and QR criteria through the residuals rW are reported in Table IV along with the actual values of the corresponding parameters. Table V lists the suspected measurements (i.e. those characterized by |rW i| > 3) obtained after estimation. The salient results are the following.



TABLE IV - ESTIMATION RESULTS PROVIDED BY NQC AND BY IBE METHODS (MW, MVar, p.u., degree)

Electrical   Actual  |------- NQC -------|  |---- IBE ----|
variables    values     QT      QL      QR      rW      rN
MOD 1        1.060    1.052   1.058   1.065   1.063   1.058
FLP 1-2      177.3    155.9   166.5   174.6   -19.9     0.0
FLQ 1-2      -25.7     -6.2   -10.4   -20.7    26.6    31.4
FLP 1-3       83.9     71.2    77.4    82.6    20.0    28.3
FLQ 1-3       -1.4      6.3    -2.7     7.6     0.9     4.9
INP 1        261.2    227.1   243.9   257.1     0.0    28.3
INQ 1        -27.1      1.4    -5.5   -19.8    32.9    28.7
MOD 2        1.045    1.040   1.040   1.042   1.041   1.040
PHA 2         -5.5      0.3    -4.7    -5.0    -5.4     0.9
INP 2         18.3    210.2   190.2    25.8    22.4    20.4
INQ 2         31.9    -36.6   -40.2    20.2    20.6    29.7
MOD 3        1.033    1.029   1.048   1.025   1.026   1.027
PHA 3         -8.1    -1.75    -2.7    -6.7    -7.4    -8.0
INP 3         -2.4     59.1    51.1     3.8    -1.4     9.7
INQ 3         -1.2    -18.4    47.3   -12.6    -8.4    -2.5
MOD 4        1.027    1.022   1.021   1.018   1.019   1.020
PHA 4         -9.8     -3.4    -4.0    -8.4    -9.1    -9.7
INP 4         -7.6     -1.3    -3.8    -5.8    -6.3    -6.2
INQ 4         -1.6    -61.2   -11.2    -8.1    -6.5    -4.5
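The non-quadratic criteria compared here replace the quadratic cost ρ(r) = r² beyond a breakpoint by a flatter branch, so that large residuals are down-weighted. The paper does not restate the exact branch definitions at this point; the sketch below uses common textbook forms (constant, linear and square-root branches beyond an assumed breakpoint A = 3) purely for illustration:

```python
import math

A = 3.0   # breakpoint in normalized-residual units (an assumed value)

def rho_quadratic(r):
    return r * r

def rho_qc(r):
    """Quadratic-constant : the cost saturates, gross errors fully rejected."""
    return r * r if abs(r) <= A else A * A

def rho_ql(r):
    """Quadratic-linear (Huber-like) : linear growth beyond the breakpoint."""
    return r * r if abs(r) <= A else 2.0 * A * abs(r) - A * A

def rho_qr(r):
    """Quadratic-square-root : growth slower than linear beyond the breakpoint
    (branch chosen here so that value and slope are continuous at |r| = A)."""
    return r * r if abs(r) <= A else 4.0 * A**1.5 * math.sqrt(abs(r)) - 3.0 * A * A

# the flatter the branch, the stronger the rejection of a gross residual r = 10
print(rho_qc(10.0), rho_qr(10.0), rho_ql(10.0), rho_quadratic(10.0))
```

The ordering of the printed costs (constant < square-root < linear < quadratic) mirrors the observation in comment (ii) below: the stronger the rejection, the shorter the list of suspected measurements.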

TABLE VI - FIRST SELECTION RESULTS OF HTI THROUGH STRATEGIES α AND β. NUMBER OF SELECTED MEASUREMENTS : 25

Selected measurement    ês_i        λ_i     3√λ_i     v_i
FLP 2-1               -23.27    1056.00     97.47    0.00
INP 1                -211.65    3345.00    173.51    0.00
FLP 1-3                23.51     606.20     73.86    0.00
FLP 1-2              -148.89    1106.00     99.75    0.00
FLQ 2-1                19.34     222.50     44.75    0.37
FLQ 1-2                41.84     172.40     39.39    0.73
FLP 4-2                -9.67     195.40     41.94    0.55
FLP 6-2                -8.79     131.20     34.37    1.18
INQ 1                  39.74     318.10     53.51    0.00
FLP 2-6                 7.03     141.10     35.64    1.06
INQ 2                  17.63     205.30     42.99    0.48
FLP 2-5                 5.92      43.12     19.70    3.00
FLQ 1-3                -6.15      28.89     16.12    3.00
FLQ 4-2                 4.95      25.50     15.15    3.00
FLQ 6-2                 2.94      26.83     15.54    3.00
FLP 6-8                 1.37      15.37     11.76    3.00
FLQ 6-7                -6.72      49.30     21.06    3.00
INQ 5                  -6.31      55.55     22.36    3.00
FLQ 2-6                -2.14      16.88     12.32    3.00
FLP 4-6               -10.12      93.52     29.01    1.83
FLQ 6-8                -0.18      15.24     11.71    3.00
FLP 6-4                 9.22      90.23     28.50    1.83
FLQ 2-5                 0.38       7.68      8.32    3.00
FLQ 4-6                 1.25       2.31      4.56    3.00
FLP 6-9                 1.02       2.08      4.33    3.00

TABLE V - SUSPECTED MEASUREMENTS BY NQC, ALONG WITH THEIR rW i OBTAINED AFTER ESTIMATION

QT - active   : INP 1 (-103.9), FLP 1-2 (-11.8), FLP 2-1 (10.0), FLP 1-3 (-4.4), INP 2 (-3.2), FLP 6-2 (3.1);
     reactive : FLQ 1-2 (26.9), INQ 1 (23.7), FLQ 2-1 (9.8), INQ 2 (7.7), FLQ 1-3 (-5.9).
QL - active   : INP 1 (-162.6), FLP 1-2 (-111.0), FLP 1-3 (5.9), FLP 2-1 (-5.2);
     reactive : FLQ 1-2 (22.1), INQ 1 (19.1), FLQ 2-1 (13.8).
QR - active   : INP 1 (-171.4), FLP 1-2 (-116.4);
     reactive : FLQ 1-2 (33.8), INQ 1 (33.2), FLQ 1-3 (-3.3).

(i) The BD have not been completely rejected and the final state estimate is still erroneous in the vicinity of node 1 (see Table IV).
(ii) Therefore, too many valid measurements are suspected at the end of the estimation. Note that the stronger the rejection (as for example for the QR criterion), the smaller the list of suspected measurements (see Table V).
(iii) Except for the QC criterion, which has shown itself unable to provide an estimation, all the other NQC have required a great - if not prohibitive - number of iterations (see Table X below). This slow convergence is due to the rejection of all measurements around node 1, which in turn tends to make the network numerically unobservable. The QC criterion is particularly unreliable since by eliminating all the suspected measurements it makes the network topologically unobservable.
(iv) All the NQC diverge if the gain matrix is kept constant after the first two iterations. Thus, unlike for the WLS estimation, this matrix has been computed at each cycle.

5.2.3. Identification by HTI

Among the 31 suspected measurements given by the rN test, only 25 are chosen (s = 25). Indeed the 6 remaining ones (INP 2, FLP 6-7, INP 5, FLP 4-3, FLQ 4-3, FLP 4-12) are necessary to ensure the observability of the system (i.e. they would become critical after eliminating the 25 above-mentioned measurements). Computa-

TABLE VII - STRATEGY α : 2nd SELECTION

Selected measurement     ês_i     rW i
FLP 2-1                  4.47    -0.94
INP 1                 -261.23     1.37
FLP 1-2               -177.28     7.04
FLQ 2-1                  4.91     3.84
FLQ 1-2                 54.32     7.86
INQ 1                   57.42    26.63
INQ 2                    4.91    23.30
FLQ 1-3                  6.64    -3.57
FLQ 4-2                  4.98     5.46
FLQ 6-7                 20.72     0.32
FLQ 2-6                 20.68     3.78

TABLE VIII - STRATEGY β : SUCCESSIVE SELECTIONS

Strategy β : 2nd selection

Selected measurement     ês_i      λ_i     v_i   1.5 vi √λi
FLP 2-1                  0.80     4.34    3.00      9.38
INP 1                 -254.73     9.19    3.00     13.64
FLP 1-3                  5.03     2.17    3.00      6.63
FLP 1-2               -173.50     4.50    3.00      9.55
FLQ 2-1                 10.18    47.45    3.00     31.00
FLQ 1-2                 49.31    46.99    3.00     30.85
INQ 1                   53.35    47.99    3.00     31.17
INQ 2                   14.72    52.97    3.00     32.75
FLP 6-7                  3.76     4.68    3.00      9.74
INP 5                    7.07     9.63    3.00     13.96
FLQ 4-3                  8.51   121.00    1.33     21.94
FLP 4-12                 2.00     2.24    3.00      6.74

Strategy β : 3rd selection

Selected measurement     ês_i      λ_i     v_i   1.5 vi √λi
INP 1                 -259.04     3.03    3.00      7.83
FLP 1-2               -172.79     2.04    3.00      6.43
INP 2                   -0.83     4.01    3.00      9.01
FLQ 1-2                 60.51     1.60    3.00      5.69
INQ 1                   64.51     2.45    3.00      7.04
FLP 4-3                -22.39    30.08    3.00     24.68

Strategy β : 4th selection

Selected measurement     ês_i      λ_i     v_i   1.5 vi √λi
INP 1                 -257.25     2.54    3.00      7.18
FLP 1-2               -174.51     1.65    3.00      5.79
FLQ 1-2                 60.34     1.60    3.00      5.69
INQ 1                   64.91     2.44    3.00      7.03

tion of J(x̂c) relative to the corresponding (m-s) measurements gives

J(x̂c) = 15211.6 - 15178.3 = 33.3

J(x̂c) is chi-squared with (m-n)-s = 118-59-25 = 34 degrees of freedom. The threshold corresponding to a risk α = 1% is 55.3. Hence the test on J(x̂c) is negative : one concludes (with of course a certain error probability) that there are no more BD among the remaining redundant measurements (but not necessarily among the six above-mentioned ones).
The results corresponding to strategy α are reported in Tables VI and VII. As can be seen, only three BD have been identified by the first test. The fourth one (INQ 1) has not, because of a too high error probability βi (λii = 318.1, hence βi = 45%). These three measurements are eliminated and the state is estimated again. The second selection is composed of eight new suspected measurements along with the three previously eliminated ones. The identification is now correctly performed.
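The chi-square threshold used in the test above can be obtained in closed form with the Wilson-Hilferty approximation; a sketch under that approximation follows (its value for 34 degrees of freedom at α = 1% is about 56, close to but not identical with the paper's tabulated 55.3):

```python
import math

def chi2_quantile(z_alpha, dof):
    """Wilson-Hilferty closed-form approximation of the chi-square quantile:
    chi2 ~ dof * (1 - 2/(9*dof) + z_alpha * sqrt(2/(9*dof)))**3,
    where z_alpha is the standard-normal quantile of the risk level."""
    c = 2.0 / (9.0 * dof)
    return dof * (1.0 - c + z_alpha * math.sqrt(c)) ** 3

Z_1PCT = 2.326                    # standard-normal quantile for a 1 % risk
dof = (118 - 59) - 25             # (m - n) - s = 34 degrees of freedom
threshold = chi2_quantile(Z_1PCT, dof)

J_c = 15211.6 - 15178.3           # J(x_c) = 33.3
print(J_c < threshold)            # detection test negative -> True
```

The same helper gives the decreasing sequence of thresholds (87.0, 84.5, ...) quoted in Table II as measurements are eliminated and the degrees of freedom shrink.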



Tables VI and VIII summarize the results of strategy β. Four cycles of selection were needed. The six suspected measurements which were not inserted in the first selection (for observability reasons) are introduced in the second and third ones. Note that for the first test, the value of vi is equal to zero for five measurements : this results from the poor accuracy of the corresponding estimates. However, for most of the measurements, vi reaches its maximal value (3.0) at the second test. This shows the rapid increase in accuracy of the estimates and hence in power of the identification test. Finally, the fourth selection is simply composed of the four BD.

5.3. SECOND CASE : MULTIPLE NONINTERACTING BAD DATA

Eight noninteracting BD have been simulated. Table IX lists their characteristics along with the values of the diagonal terms of the W matrix, which inform about the "quality" of the corresponding local redundancy (poor for 12-15, moderate for 29, moderate to high for the others).

TABLE IX - CHARACTERISTICS OF THE EIGHT NONINTERACTING BD

Bad data     Actual value h_i(x)   "Measured" value z_i   e_i = z_i - h_i(x)   |e_si|   W_ii
FLP 2-5            82.6                  184.6                  102.0           68.0    0.84
FLQ 2-5             2.8                  101.7                   98.9           65.9    0.85
FLP 12-15          17.6                   69.2                   51.6           64.5    0.15
FLQ 12-15           7.0                   56.1                   49.1           61.4    0.16
FLP 24-25          -0.5                   19.0                   19.5           24.4    0.62
FLQ 24-25           2.5                   22.4                   19.9           24.9    0.64
INP 29             -2.4                  -12.1                   -9.7           12.1    0.47
INQ 29             -0.9                  -10.2                   -9.3           11.6    0.47

5.3.1. Identification by elimination

5.3.1.1. IBE based on rN

The procedure has required 5 successive cycles corresponding to the following final list :
- eliminated : INP 5, FLQ 2-5; FLP 2-5, FLQ 12-15; FLP 12-15, FLQ 24-25; FLP 24-25, INQ 29; INP 29;
- become critical : INP 2.
All the BD have been eliminated. The incorrect elimination of INP 5 has made INP 2 critical. Note that the latter measurement is not erroneous; however this cannot be verified a posteriori.

5.3.1.2. IBE based on rW

The identification has required 7 successive re-estimations. The final list of measurements declared false is the following :
- eliminated : FLP & FLQ 2-5; FLP & FLQ 24-25; FLP 12-14, FLQ 12-15; FLP 12-16, INQ 29; FLP & FLQ 4-12; FLP 10-17; MODV 13; INP 29, MODV 12;
- become critical : FLP 12-15, INP 16, INP 17.
Seven valid measurements have improperly been eliminated. These undue eliminations are essentially caused by FLP & FLQ 12-15, which are located in a region of low local redundancy (Wii = 0.15). Moreover, the final estimate is erroneous since one BD has not been rejected; indeed, the latter has become critical (it has been labelled false, as explained in 2.4).

5.3.2. Identification by NQC

Conclusions are similar to those drawn for the preceding case, even if the identification conditions are less stringent here. As in the interacting case, the QC has been unable to provide an estimation. Note that, because of a low local redundancy, the quality of the state estimation in the vicinity of node 15 is rather bad for all the NQC (see Table X). It is worth mentioning that the NQC efficiency is found to vary with the noise attached to the valid measurements. This gives NQC a "capricious" behaviour.

5.3.3. Identification by HTI

Both strategies have identified in one step all the 8 BD. This identification has required a single test for strategy α, and 4 successive cycles for strategy β.

5.4. THIRD CASE : TOPOLOGICALLY UNIDENTIFIABLE BAD DATA

A gross error has been introduced in the value of FLP 10-20. This measurement is redundant only with FLP 19-20. The elimination method has drawn up a list comprising both measurements. The HTI method has led to the same conclusion. On the contrary some NQC tend to reject FLP 19-20 and to keep FLP 10-20.

5.5. SUMMING UP SIMULATION RESULTS

Table X summarizes the salient simulation results of this Section, along with computer times given here for information only. Indeed, many parameters - and especially the system's size - influence significantly the speed of the various identification methods. For example, in the cases considered here the reduced system's size is to the advantage of the IBE methods since generally they require many state re-estimations.
Note that for the IBE method based on rN, the Sherman-Morrison formula and the sparse inverse matrix method proposed in [4,22] have been used. Note also that the number of the Z matrix terms necessary to be computed for the HTI method has been assessed with respect to 17 and 49 state variables respectively for the interacting and noninteracting BD cases. The latter should be regarded as an upper bound.
The simulations have been performed on a DEC 20 computer.

TABLE X - SALIENT SIMULATION RESULTS OF THE VARIOUS IDENTIFICATION METHODS
(For each method - IBE based on rN and on rW, the NQC QT, QL and QR, and HTI - and for both the 4 interacting BD and the 8 noninteracting BD cases, the table reports : the measurements labelled false, split into actual BD and valid data; the quality of the state estimation, rated from bad to good; the number of state re-estimations; the number of iterations per estimation; the CPU time in seconds, ranging from about 1 to 5.5 s; and the number of the Z matrix terms to be computed for HTI, 455 to 560 in the interacting case and 1360 in the noninteracting one.)
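Eliminating one measurement amounts to a rank-one downdate of the gain matrix G = HᵀR⁻¹H, which the Sherman-Morrison formula applies without refactorizing; the dense 2-state sketch below is only an illustration of that identity (the paper's actual implementation is sparse and follows [4,22]):

```python
def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def remove_measurement(G_inv, h_row, sigma2):
    """Sherman-Morrison downdate : returns the inverse of G - h h^T / sigma2,
    i.e. the gain-matrix inverse after eliminating the measurement whose
    Jacobian row is h_row and variance sigma2, without any refactorization."""
    u = mat_vec(G_inv, h_row)                 # G^-1 h
    denom = sigma2 - dot(h_row, u)            # sigma2 - h^T G^-1 h
    n = len(G_inv)
    return [[G_inv[i][j] + u[i] * u[j] / denom for j in range(n)]
            for i in range(n)]

# gain matrix of three unit-variance measurements with Jacobian rows
# [1,0], [0,1], [1,1] is G = [[2,1],[1,2]], whose inverse is:
G_inv = [[2/3, -1/3], [-1/3, 2/3]]
# eliminating the third measurement must leave G = I, hence G^-1 = I :
G_inv_new = remove_measurement(G_inv, [1.0, 1.0], 1.0)
print(G_inv_new)
```

Each elimination-reestimation cycle of IBE can thus reuse the previous factorization, which is what keeps the repeated re-estimations of Table X affordable.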


6. CONCLUSION
The identification techniques available today have been classified into three broad classes; their capability to face various types of BD has been found to differ significantly from one class to another.
The NQC exhibit the poorest performances; they are very sensitive to low local redundancy and to interaction of BD; they have a slow convergence and a "capricious" behaviour. In brief, they do not prove suitable enough.
On the other hand, the IBE techniques are attractive with respect to implementation considerations : they are easy to use and simple to implement. They prove quite interesting as long as the BD are non- (or weakly) interacting and located in regions of moderate redundancies. They start being inefficient, however, when the number of BD and their spreading increase and when the local redundancy decreases. Although much more reliable than the NQC, the IBE methods lead to inaccurate BD identification results at a certain level of severity of the identifiability conditions.
The HTI method, finally, seems to combine effectiveness, reliability and compatibility with on-line implementation requirements. This latter aspect receives at present further consideration.

REFERENCES
[1] F.C. Schweppe, J. Wildes, D.B. Rom, "Power System
Static State Estimation. Parts I, II, III", IEEE
Trans. on PAS, vol.PAS-89, No.1, Jan.1970, pp.120-135.
[2] J.F. Dopazo, O.A. Klitin, A.M. Sasson, "State Estimation for Power Systems : Detection and Identification of Gross Measurement Errors", Proc. of the 8th
PICA Conf., Minneapolis, 1973, pp. 313-318.
[3] E. Handschin, F.C. Schweppe, J. Kohlas, A. Fiechter,
"Bad Data Analysis for Power System State Estimation", IEEE Trans. on PAS, vol.PAS-94, No.2, March/
April 1975, pp. 329-337.
[4] A. Merlin, F. Broussolle, "Fast Method for Bad Data
Identification in Power System State Estimation",
Proc. of the IFAC Symp., Melbourne, Feb.1977, pp.
449-453.
[5] N.Q. Le, H.R. Outhred, "Identification and Elimination of Bad Data and Line Errors for Power System
State Estimators", Proc. of the IFAC Symp., Melbourne,

Feb.1977, pp.459-463.

[6] F. Aboytes, B.J. Cory, "Identification of Measure-

ment, Parameter and Configuration Errors in Static


State Estimation", Proc. of the 9th PICA Conf., New

Orleans, June 1975, pp. 298-302.
[7] A. Garcia, A. Monticelli, P. Abreu, "Fast Decoupled State Estimation and Bad Data Processing", IEEE Trans. on PAS, vol.PAS-98, No.5, Sept./Oct. 1979, pp. 1645-1652.
[8] A. Monticelli, A. Garcia, "Reliable Bad Data Processing for Real-Time State Estimation", IEEE Trans. on PAS, vol.PAS-102, No.5, May 1983, pp. 1126-1139.
[9] Xiang Nian-de, Wang Shi-ying, Yu Er-keng, "A New Approach for Detection and Identification of Multiple Bad Data in Power System State Estimation", IEEE Trans. on PAS, vol.PAS-101, No.2, Feb.1982, pp. 454-462.
[10] Xiang Nian-de, Wang Shi-ying, Yu Er-keng, "An Application of Estimation-Identification Approach of Multiple Bad Data in Power System State Estimation", presented at the IEEE/PES 1983 Summer Meeting, Los Angeles, Cal., July 17-22, Paper No. 83 SM 355-5.
[11] Xiang Nian-de, Wang Shi-ying, "Estimation and Identification of Multiple Bad Data in Power System State Estimation", Proc. of the 7th PSCC Conf., Lausanne, July 1981, pp. 1061-1065.
[12] Ma Zhi-qiang, "Bad Data Reestimation-Identification Using Residual Sensitivity Matrix", Proc. of the 7th PSCC Conf., Lausanne, July 1981, pp. 1056-1060.
[13] V.H. Quintana, A. Simoes-Costa, M. Mier, "Bad Data Detection and Identification Techniques Using Estimation Orthogonal Methods", IEEE Trans. on PAS, vol.PAS-101, No.9, Sept.1982, pp. 3356-3364.


[14] A. Simoes-Costa, R. Salgado, "Bad Data Recovery for Orthogonal Row Processing State Estimators", Proc. of the CIGRE-IFAC Symp. on Control Appl. for Power System Security, Florence, Sept.1983, Paper 101-01.
[15] H.M. Merrill, F.C. Schweppe, "Bad Data Suppression in Power System Static Estimation", IEEE Trans. on PAS, vol.PAS-90, No.6, Nov./Dec. 1971, pp. 2718-2725.
[16] J. Kohlas, "On Bad Data Suppression in Estimation", IEEE Trans. on AC, vol.AC-17, No.6, Dec.1972, pp. 827-828.
[17] H. Muller, "An Approach to Suppression of Unexpected Large Measurement Errors in Power Systems State Estimation", Proc. of the 5th PSCC, Cambridge, Sept. 1975, Paper 2.3/5.
[18] W.W. Kotiuga, M. Vidyasagar, "Bad Data Rejection Properties of Weighted Least Absolute Value Techniques Applied to Static State Estimation", IEEE Trans. on PAS, vol.PAS-101, No.4, April 1982, pp. 844-853.
[19] K.L. Lo, P.S. Ong, R.D. McColl, A.M. Moffatt, J.L. Sulley, "Development of a Static State Estimator", Parts I, II, IEEE Trans. on PAS, vol.PAS-102, No.8, August 1983, pp. 2486-2500.
[20] D.M. Falcao, S.H. Karaki, A. Brameller, "Nonquadratic State Estimation : A Comparison of Methods", Proc. of the 7th PSCC Conf., Lausanne, July 1981, pp. 1002-1006.
[21] L. Mili, Th. Van Cutsem, M. Ribbens-Pavella, "Hypothesis Testing Identification : A New Method for Bad Data Analysis in Power System State Estimation", IEEE Trans. on PAS, vol.PAS-103, No.11, November 1984, pp. 3239-3252.
[22] F. Broussolle, "State Estimation in Power Systems : Detecting Bad Data Through the Sparse Inverse Matrix Method", IEEE Trans. on PAS, vol.PAS-97, No.3, May/June 1978, pp. 678-682.
[23] L. Mili, "Traitement statistique des fausses données : méthode d'identification par test d'hypothèses" (in French), Int. Rep., Univ. of Liège, No. MBC/1, May 1983.
[24] L. Mili, "Théorie de la décision appliquée aux réseaux électriques : détection des fausses données" (in French), Int. Rep., Univ. of Liège, No. LML/7, Oct. 1982.

APPENDIX
The IEEE 30-bus system, along with the measurement configuration, is schematically given in the figure below. It comprises 118 measurements, leading to a redundancy of 2. The following standard deviations have been used :
- for power measurements : σ = 1.5 MW/MVar at 132 kV and σ = 0.8 MW/MVar at 33 kV;
- for voltage measurements : σ = 0.005 p.u.;
- for injection pseudo-measurements : σ = 0.2 MW/MVar.



Discussion
M. S. Kurzyn (Transmission Development Department, State Electricity Commission of Victoria, Melbourne, Australia): The HTI method uses
ranking of suspect measurements and a one-shot identification procedure,
both being the salient features of the simple bad data identification scheme
described in [A]. However, many details of this method are necessarily
different from those of [A], and the authors should be commended for
a development of what appears to be a highly effective bad data identification technique.
The HTI method and two representatives of the existing methods have
been tested and compared using four networks, but perhaps due to space
limitations the paper shows only the results pertaining to the smallest
network. Could the authors bother to present the remaining test results?
Is the HTI method effective and reliable for the larger networks as well?
What is the sensitivity of the HTI method to different system operating

conditions?
The authors' response to the above questions would be greatly
appreciated.
REFERENCE

[Al M. S. Kurzyn, "Real-Time State Estimation for Large-Scale Power


Systems." IEEE Trans. on Power App. and Syst., vol. PAS-102,
pp. 2055-2063, July 1983.
Manuscript received February 20, 1985

A. Monticelli and Felix F. Wu (University of California, Berkeley, CA):


This paper provides a valuable service to the research in bad data identification by supplying stringent test cases. The HTI method performs remarkably well in these cases. One should be cautioned, however, that
the performance of a method for a set of selected test cases may differ
from that in a practical environment. The authors, in our opinion, may
be a little too harsh on the assessment of the conventional IBE method
and lenient on the HTI method.
It has generally been recognized that the IBE method works quite
satisfactorily in almost all bad data cases encountered in practice. As
a matter of fact, the IBE method fails only in the rare cases where multiple
interacting and conforming bad data are present [A]. It is perhaps too
rash to make generalizations based on some special test cases.
Conceptually the IBE method requires re-estimations, as will be explained later; this fact may very well add strength rather than weakness
to the method. Computationally, however, the method can be implemented in one step without actually carrying out the re-estimation,
provided that the same assumption as in the HTI method holds, namely, the validity of the linear relation of the residuals and the errors. But
we believe whether a method is one step or not is irrelevant, the bottom
line is the computation time.
As explained in Ref. B, the residuals (using linearization) r = We can be interpreted as the projection of the error vector e onto the subspace N(H^T W) := {r in R^m : H^T W r = 0}. (Note H^T W r = 0.) Since R^m can be decomposed into two orthogonal subspaces, N(H^T W) and R(H) := {r in R^m : r = H ξ for some ξ}, the other component of e, namely the projection of e onto the subspace R(H), goes into the estimation of x. Thus
* The residual vector r has only partial information on the error vector e.

* Re-estimation provides a new residual vector adding our knowledge


about e.
We therefore conclude that the re-estimation (conceptually) may be
something desirable, rather than merely computational nuisance.
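The discussers' geometric point - the residual is the component of the (linearized) measurement vector lying in N(HᵀW), so HᵀWr = 0 while the R(H) component goes into x̂ - is easy to verify on a small dense example. The sketch below assumes unit weights (W = I) for brevity and hand-rolls the matrix algebra:

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def inv2(M):                       # inverse of a 2x2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# m = 3 measurements, n = 2 states, unit weights (W = I for brevity)
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
z = [[1.0], [2.0], [4.0]]          # measurement column vector

Ht = transpose(H)
x_hat = matmul(matmul(inv2(matmul(Ht, H)), Ht), z)   # WLS estimate
Hx = matmul(H, x_hat)
r = [[z[i][0] - Hx[i][0]] for i in range(3)]          # residual vector

# r is the projection of z onto N(H^T) : H^T r vanishes up to rounding,
# while the component of z lying in R(H) went into x_hat
print(matmul(Ht, r))
```

The printed product is numerically zero, which is exactly why r carries only the N(HᵀW) part of the error and why re-estimation after an elimination brings genuinely new information.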
It can easily be seen that r = We = W(e + e1) for any e1 in R(H), i.e., there are many error vectors e giving rise to the same residual vector r. Also, r = z - Hx̂ = Wz, which means that when the components of r are
treated as measurements, as in the HTI method, one actually takes the
"processed" measurements-the projection of z onto the subspace
N(HTW). The HTI method performs on the subspace N(HTW). The
so-called "optimal" solution in the sense of least square estimation is
to find one solution e lying in a particular subspace. There are many
"feasible" solutions to the bad data problem (i.e., different combinations of declaring measurements bad that lead to acceptable rN-test and
network observability), the HTI method finds one such solution. We
believe that the selection of an "optimal" solution should take into account meter reliability, rather than letting the least square to run its course

[A]. The above reasoning implies that the HTI may fail to identify "correctly" the bad data. Have the authors encountered failed cases in their
testing of HTI method?
It would be very helpful if the authors could give explicitly a step-by-step description of the HTI algorithm. Though not explicitly stated in the
description of the HTI method, the testing of observability in selecting
suspected measurements is an important part in the algorithm. Would
the authors care to comment on (i) what method is used for observability test, (ii) what is the percentage of computation time spent on it, and
(iii) the effect of decreasing in measurement redundancy on the observability test.
REFERENCES

[A] A. Monticelli, F. F. Wu, and M. Yen, "Multiple Bad Data Identification for State Estimation by Combinatorial Optimization,"
the IEEE Power Industry Computer Application Conference,
(PICA) pp, 452-460, May 6-10, 1985, San Francisco.
[B] R. J. Kaye, "A Geometric Approach to Bad Data Analysis in Electric Power Systems State Estimation," to be presented at 1985 International Symp. on Circuits and Systems, June 5-7, 1985, Kyoto.
Manuscript received March 1, 1985
L. Mili, Th. Van Cutsem, and M. Ribbens-Pavella: We thank the
discussers for their interest in our paper and their constructive remarks.
We shall group our answers by subject matter, starting with those
relative to IBE, then to HTI method; for the latter, increasing order of
generality will be followed.
Professors Wu and Monticelli discuss many interesting issues relative
to IBE and HTI methods.
A. As concerning the IBE, they raise two practical aspects considered
but not developed enough in the paper because of space limitations. We
are clarifying them hereafter.
A.1 With regard to the frequency of failure of the IBE method, we
don't share the discussers' opinion that multiple interacting bad data cases
where IBE fails are "special and artificial". Of course, the example of
Section 5 of the paper was chosen for the purpose of illustrating the
theoretical considerations of previous sections. But we have never claimed
that IBE performs always as unsatisfactorily as it does in this example.
Nevertheless, it is not true either that these cases are "rare." A means
to tentatively assess IBE's frequency of failure is given below.
Let us consider the IEEE 30-bus system of the paper with the two erroneous active measurements FLP 1-2 and INP 1 (similar results are obtained in the reactive case). Let us identify the domain D of the corresponding measurement errors (ek, el) for which the IBE method unduly eliminates valid measurements and fails to identify the two bad ones.

This domain is such that the normalized residual of a bad data is never
the largest one, i.e.
D = ∪ Di ,  the union being taken over the valid measurements i (i ≠ k, l), with

Di = { (ek, el) : |rNk| < |rNi|  and  |rNl| < |rNi| }

For a given suspected measurement i, expressing the inequality |rNk| < |rNi| (i ≠ k) in terms of the W and R matrices and neglecting the contribution of the valid measurement noises leads to

| (Wkk ek + Wkl el) / (σk √Wkk) |  <  | (Wik ek + Wil el) / (σi √Wii) |      (A1)

A similar relation can be written for the inequality |rNl| < |rNi| (i ≠ l). Substituting the numerical values of the Wij terms, the corresponding domain Di can be easily determined. Fig. A shows the final domain D (note that in this particular case, the domain Di corresponding to measurement FLP 2-1 contains all the other Di). Assuming that the errors ek and el are bounded by measurement full scale values, we obtain a ratio

of failure of about 10%. Taking into account the effect of the noises
in (Al) results in a negligible decrease in this ratio (see dotted lines on
Fig. A). Note that after the undue elimination of measurement FLP 2-1,
the new corresponding domain D includes the preceding one, so that
another valid measurement (INP 2) will be elmiinated. Therefore we may
conclude that the domain D of Fig. A is the domain of failure of the
IBE method, for the two bad data.
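The failure-domain check above can be sketched numerically. The snippet below is an illustration only: it uses a small made-up linear measurement model (H, R and the error values are assumptions, not the paper's IEEE-30 data), builds the residual sensitivity matrix W = I - H G^-1 H^T R^-1, and checks which measurement carries the largest normalized residual when two interacting bad data are present.

```python
import numpy as np

# Illustration of the failure-domain check for IBE. The measurement model
# below (H, R, error values) is a made-up linear example, NOT the paper's
# IEEE-30 bus data.
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0],
              [2.0,  1.0]])
m, n = H.shape
R = np.eye(m)                                   # unit variances (assumption)

G = H.T @ np.linalg.inv(R) @ H                  # gain matrix
W = np.eye(m) - H @ np.linalg.solve(G, H.T) @ np.linalg.inv(R)  # residual sensitivity

def normalized_residuals(e):
    """r = W e; valid-measurement noise neglected, as in Eq. (A1)."""
    r = W @ e
    return r / (np.sqrt(np.diag(R)) * np.sqrt(np.diag(W)))

# Two interacting bad data on measurements k = 0 and l = 2:
e = np.zeros(m)
e[0], e[2] = 10.0, -8.0
rN = normalized_residuals(e)
worst = int(np.argmax(np.abs(rN)))              # measurement IBE eliminates first
print(worst, np.round(rN, 2))
```

When the index returned is that of a valid measurement, the error pair lies in a domain of the kind denoted D in the discussion.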

Authorized licensed use limited to: to IEEExplore provided by Virginia Tech Libraries. Downloaded on January 16, 2010 at 13:31 from IEEE Xplore. Restrictions apply.


Obviously, this development should be repeated for all the possible locations of two, three, ... bad data in order to get an assessment of the overall behavior of IBE, for a given measurement configuration. Note that the representative point of the example of Section 5.2 falls into the hatched area, but admittedly this case is not isolated.

Fig. A. Domain D of failure of the IBE method


A.2. Concerning the one-step implementation of the measurement
elimination: we agree with the discussers that computationally the elimination of multiple measurements can be carried out in one step, by simply
correcting the corresponding measured values (which allows keeping the
same factorized gain matrix in the subsequent re-estimations). However
the main concern is to properly correct these measurements, i.e. correct
them in such a way that the correction will have exactly the same effect
as a "pure" elimination.

As is mentioned in Section 4.1.3 of the paper, this requires:


(i) computing the measurement correction by means of Eq. (9):
        (z_s)_cor = z_s - W_ss^-1 r_s
    where the off-diagonal terms of W_ss cannot be neglected for interacting measurements;
(ii) correcting correspondingly the W-matrix, in order to reflect the (fictitious) eliminations;
(iii) re-correcting the previously treated measurements when new measurements are to be corrected.
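Requirement (i) can be illustrated with a small numerical sketch. The W_ss block, measured values and residuals below are invented for illustration; the point is the gap between the exact group correction of Eq. (9) and a one-at-a-time correction that ignores the off-diagonal coupling.

```python
import numpy as np

# Exact group correction (z_s)_cor = z_s - W_ss^{-1} r_s versus the
# approximate per-measurement correction z_i - r_i / W_ii.
# All numbers below are illustrative, not taken from the paper.
W_ss = np.array([[ 0.40, -0.25],
                 [-0.25,  0.35]])   # strongly interacting pair (assumption)
z_s = np.array([100.0, 50.0])
r_s = np.array([30.0, 12.0])

z_cor_exact = z_s - np.linalg.solve(W_ss, r_s)   # Eq. (9), full W_ss block
z_cor_approx = z_s - r_s / np.diag(W_ss)         # ignores off-diagonal coupling

print(z_cor_exact, z_cor_approx)
```

The stronger the interaction (large off-diagonal terms of W_ss), the further the approximate correction drifts from the exact one, which is what the first example below demonstrates on the IEEE system.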
Practical experience has shown that corrections which do not fulfill
these three requirements may lead to meaningless results. This is illustrated
in the two following examples.
Considering the IEEE system again, two bad data have been introduced as follows:
    INP 29:  z = 100. MW,  h(x) = -2.4 MW
    INP 30:  z = 50. MW,   h(x) = -10.6 MW
These two bad data are interacting but the values of the errors are such
that the IBE method with "pure" elimination performs well. The list
of suspected measurements obtained after estimation is given in column
I of Table A below.
In a first case, the measurements are treated through the approximate
correction formula (8) of the paper, while keeping the Wii coefficients
always constant. Measurement INP 29 is first corrected. This yields the
suspected measurements listed in column II. Since there remains only one bad data (INP 30), the latter should have the largest |r_Ni|; however, because incorrect W_ii coefficients have been used when computing the new r_Ni, FLP 27-30 is the most suspected one and is unduly corrected. Due to erratic corrections, nine successive cycles have been required before the detection test becomes negative, and INP 30 has never been corrected at all!
In a second case, we have tried to improve the above results by "refreshing" the W_ii terms after each correction (according to requirement (ii)); these coefficients are given in Table B below (note that W_ii
is undefined for the corrected measurements). Column IV of Table A
shows that INP 30 is, as expected, at the top of the list. Although the
two erroneous measurements have been corrected, the detection test remains positive (see column V).
These two examples illustrate that a proper correction of multiple bad
data must obey the above three requirements. We are thus led to the

Table A: Successive Lists of Suspected Measurements with One-by-One Correction
(Columns II-III: diag(W) not refreshed; Columns IV-V: diag(W) refreshed)

COLUMN I (1st estimation):
    Measurement    r_N
    INP 29        118.95
    INP 30        108.46
    FLP 29-27     -45.10
    FLP 27-29      42.84
    INP 26         34.22
    FLP 27-28     -26.07
    FLP 27-30      21.63
    FLP 24-25      13.56
    FLP 28-6       -9.88
    FLP 6-28        8.83
    FLQ 24-25      -6.09
    FLP 24-22       5.67
    INQ 26         -4.15
    FLP 28-8       -3.95
    IV 27          -3.87
    FLP 4-12       -3.64
    FLQ 4-12       -3.22
    FLP 6-9        -3.20
    FLP 12-15      -3.19

[Columns II (2nd estimation) through V (3rd estimation) are garbled in the source scan; their entries interleave the measurements INP 30, INP 26, FLP 27-30, FLP 27-29, FLP 29-27, FLP 27-28, FLP 24-25, FLP 28-6, FLP 6-28, INQ 29 and INQ 30 with their r_N values.]

Performed corrections: INP 29: -42.69 MW | FLP 27-30: -14.9 MW | INP 30: 23.20 MW
Adequate corrections: INP 29: -42.69 MW | INP 29: -39.99 MW, FLP 27-30: -20.87 MW | INP 29: -5.52 MW, INP 30: 10.19 MW

conclusion that the W_ss^-1 matrix - required in the HTI method - is anyway necessary to properly and efficiently face the problem of multiple bad data, whatever their interaction.

Table B: Diagonal Elements of W-Matrix

    Measurement    initial          INP 29        INP 29 and INP 30
                   configuration    eliminated    eliminated
    INP 29         0.445            -             -
    INP 30         0.352            0.157         -
    FLP 29-27      0.773            0.665         0.574
    FLP 27-29      0.814            0.629         0.557
    INP 26         0.354            0.303         0.260
    FLP 27-28      0.714            0.698         0.670
    FLP 27-30      0.840            0.840         0.262
    FLP 24-25      0.610            0.608         0.602
    FLP 28-6       0.944            0.940         0.938
    INQ 29         0.469            0.465         0.469

B. The other points raised by Professors Wu and Monticelli concern the very principle of the HTI method and are now successively discussed.
B.1 We fully agree that elimination-reestimation adds knowledge about
e: the HTI method also takes full advantage of this additional information. This follows from a property of the POE estimate, proved in Ref.

[21]:
    ê_s = W_ss^-1 r_s = z_s - h_s(x_c)        (A2)
where x_c is the state estimate based on the remaining z_t measurements, i.e. those remaining after eliminating the s selected ones.
This property is precisely at the root of the equivalence between correction and elimination (as was already observed by the authors of Ref.
[11]). However, the discussers do not consider the counterpart of the
elimination: eliminating valid measurements wastes useful information,
decreases the power of the detection test, and increases the interaction
of the remaining bad data. These drawbacks are nonexistent in the HTI method, which operates on the initial configuration: Eq. (A2) shows that ê_s is based on z_s as well as z_t.
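Property (A2) is easy to check numerically in the linear WLS case. The model below (random H, unit R, one gross error) is an illustrative stand-in, not the paper's system.

```python
import numpy as np

# Numerical check of property (A2) on a linear WLS model z = H x + e:
# e_hat_s = W_ss^{-1} r_s  equals  z_s - H_s x_t, where x_t is estimated
# from the remaining measurements only. H, R and z are illustrative.
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 2))
x_true = np.array([1.0, -2.0])
z = H @ x_true + rng.normal(scale=0.1, size=6)
z[0] += 5.0                                    # one gross error; s = {0}

G = H.T @ H                                    # gain matrix (R = I)
x_all = np.linalg.solve(G, H.T @ z)            # estimate on all measurements
r = z - H @ x_all                              # residual vector
W = np.eye(6) - H @ np.linalg.solve(G, H.T)    # residual sensitivity (R = I)

s = [0]
t = [1, 2, 3, 4, 5]
e_hat_s = np.linalg.solve(W[np.ix_(s, s)], r[s])

x_t = np.linalg.lstsq(H[t], z[t], rcond=None)[0]   # estimate w/o selected meas.
print(e_hat_s, z[s] - H[s] @ x_t)
```

The two printed vectors coincide, which is the equivalence between correction and elimination invoked in the reply.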
B.2. Considering projection properties is indeed worthwhile since it
allows gaining good insight into the theoretical understanding of the WLS
estimator, as has already been observed and used in Ref. [C] below.
Within this context, we agree with the discussers that the residual vector r belongs to N(H^T R^-1), since
    H^T R^-1 r = 0        (A3)
Now, what probably has escaped the discussers' attention is that the vector of concern in POE, and hence in HTI, is not r but a subvector r_s of it, for which the above property is no longer valid. Indeed, partitioning (A3) according to selected and other measurements yields:
    H_s^T R_s^-1 r_s + H_t^T R_t^-1 r_t = 0        (A4)
and Eq. (A4) must be considered instead of Eq. (A3).


B.3. We agree with the discussers that the residual vector r contains
partial information on the error vector e: this is precisely why we do not
rely on residuals but rather on error estimates which are "faithful pictures" of the errors. Another question evoked by the discussers is the existence of multiple error vectors leading to the same residual vector r_s, i.e. there are many e_s, e_t satisfying the basic relation:
    r_s = W_ss e_s + W_st e_t        (W_ss regular by construction)

Among all the possible solutions, HTI definitely provides the OPTIMAL one in the statistical sense, since the estimate ê_s = W_ss^-1 r_s is:
(i) unbiased, provided that all erroneous measurements are selected, which is ensured by adequate measurement selection (see Ref. [21]);
(ii) minimum-variance (i.e. ê_s is the most accurate estimate of e_s): this important property is a recent result, shown and discussed in Ref. [D,E].
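The non-uniqueness noted here, and the choice made by POE, can be shown with a toy example (the W blocks and residual values are invented):

```python
import numpy as np

# Many error vectors (e_s, e_t) explain the same r_s through
# r_s = W_ss e_s + W_st e_t; POE/HTI picks e_t = 0 and solves for e_s.
# The matrices below are illustrative stand-ins, not taken from the paper.
W_ss = np.array([[ 0.6, -0.2],
                 [-0.2,  0.5]])
W_st = np.array([[0.1, -0.3],
                 [0.2,  0.1]])
r_s = np.array([1.0, 0.4])

e_hat_s = np.linalg.solve(W_ss, r_s)            # POE estimate (e_t = 0)

# Another, equally residual-consistent explanation with nonzero e_t:
e_t = np.array([2.0, -1.0])
e_s_alt = np.linalg.solve(W_ss, r_s - W_st @ e_t)

print(np.allclose(W_ss @ e_hat_s, r_s))
print(np.allclose(W_ss @ e_s_alt + W_st @ e_t, r_s))
```

Both candidate error vectors reproduce r_s exactly; the statistical properties (i) and (ii) above are what single out the POE choice among them.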

B.4. Referring to the terminology of Ref. [A] of the discussers, it is clear that the first selection of the HTI method provides a so-called "feasible solution" which contains all the bad data; starting with this solution, the successive refinements of the strategy lead to the smallest list of suspected measurements. The major differences with the method proposed in Ref. [A] are the following.
(i) In HTI, the convergence towards the final solution is guided by the information contained in ê_s (through the statistical test); HTI does not consider various "feasible solutions" as equiprobable, but rather it follows an "optimal path".
(ii) The procedure is not combinatorial since each selection is a subset of the preceding one.
(iii) In HTI each decision is taken with an upper-bounded β-risk (the risk of declaring valid a false measurement is thus controlled).
B.5. The concept of measurement reliability could indeed enhance the
efficiency of bad data identification. It has not been considered here while
modeling measurements, but it could certainly be included in the various
methods compared in our paper. In the HTI method, the meter reliability can easily be taken into account by preferentially selecting the less reliable measurements and/or by adapting the threshold in the statistical test.
In particular, one could consider as unreliable a measurement that has
been labeled false in a previous estimation but has not been eliminated
from the data base.
Ref. [21] is mainly devoted to the theoretical aspects of the HTI method
and offers a relatively wide spectrum of practical schemes. In the meantime, intensive numerical testing as well as the implementation of HTI in the regional control center of the Belgian UNERG Company led us
to establish appropriate practical schemes. These considerations and the
corresponding results will be the subject of a future publication; only
essential aspects are discussed hereafter at the request of Professors Wu
and Monticelli.
Fig. B presents the main steps of HTI, while Fig. C details the initial
selection, a key point of the method. The other measurement selection
mentioned on Fig. B is similar to the latter and need not be detailed.
The most salient feature of this procedure is the use of the well-known matrix inversion lemma for the computation of W_ss^-1, i.e.

    W_ss^-1 = [I_s - H_s G^-1 H_s^T R_s^-1]^-1 = I_s + H_s Σ_t H_s^T R_s^-1        (A5)
with
    Σ_t = [H_t^T R_t^-1 H_t]^-1        (A6)
These two formulae are used in a recursive way, each time a new measurement is added to the selection; in fact, in this case, Eq. (A6) reduces to:

    Σ_{s+1} = Σ_s + (1/D_i) (Σ_s h_i)(Σ_s h_i)^T,   where  D_i = σ_i² - h_i^T Σ_s h_i   (scalar)        (A7)

h_i is the column vector of H relative to the i-th measurement.


The advantages of this technique are manifold.
(i) It avoids a full inversion of WsS for each group of selected
measurements.
(ii) The computation of W_ss^-1 being rapid, the J(x_c)-test (which requires ê_s and hence W_ss^-1) can be applied each time a new measurement is selected; this allows stopping the selection as soon as all the bad data have been embedded. Now, the smaller the initial selection, the faster the identification.
(iii) The quantity D_i in Eq. (A7) is used as a NUMERICAL observability test. Indeed D_i = 0 (in practice D_i < ε_D σ_i²) means that the corresponding measurement may not be included in the selection (this measurement is in stand-by and will replace a measurement declared valid in a subsequent selection). D_i being available anyway, the observability test is thus a simple by-product of the error estimation. The corresponding computing time is negligible and no special additional program is required.
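The recursion (A7) can be sketched as follows. H and the variances below are illustrative; the snippet moves two measurements into the selection, updates Σ by the rank-one formula, and verifies that the result equals the inverse gain matrix of the remaining measurements, with D_i available as the observability by-product.

```python
import numpy as np

# Sketch of the recursive use of Eq. (A7). Sigma = G_t^{-1} is updated by a
# rank-one (Sherman-Morrison) correction each time a measurement enters the
# selection; the scalar D_i doubles as a numerical observability test.
# H and the variances are illustrative, not the paper's data.
rng = np.random.default_rng(1)
H = rng.normal(size=(7, 3))
sigma2 = np.ones(7)                             # unit variances (assumption)
G = (H / sigma2[:, None]).T @ H                 # full gain matrix
Sigma = np.linalg.inv(G)                        # start: no measurement selected

eps_D = 1e-8
selected = [0, 4]                               # measurements entering the selection
for i in selected:
    hi = H[i]
    Di = sigma2[i] - hi @ Sigma @ hi            # scalar of Eq. (A7)
    if Di < eps_D * sigma2[i]:
        print(f"measurement {i}: stand-by (observability test)")
        continue
    v = Sigma @ hi
    Sigma = Sigma + np.outer(v, v) / Di         # Eq. (A7)

# Sigma should now equal the inverse gain matrix of the remaining measurements:
t = [j for j in range(7) if j not in selected]
G_t = (H[t] / sigma2[t][:, None]).T @ H[t]
print(np.allclose(Sigma, np.linalg.inv(G_t)))
```

Each update costs only a matrix-vector product and an outer product, which is what makes re-testing after every selection cheap.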
Finally, let us observe that in the case of a single bad data, HTI is
as fast as IBE. Indeed in such a situation:
- the erroneous measurement is selected first;
- the J(x_c)-test is directly negative;
- the hypothesis test on ê_s is necessarily positive and no further refinement is required;
- the corresponding measured value is directly corrected.
Note also that a new detection test, after the state reestimation, is useless.

Fig. B. The overall algorithm

Make a (proper) initial selection
    [measurements are classified into: selected, stand-by (suspected); others (valid)]
REPEAT:
    Apply the hypothesis test to each selected measurement;
    IF the test is positive for some measurements
    THEN: make a new selection =
          measurements with positive test
          + as many stand-by measurements as possible;
UNTIL the test is negative for all the selected measurements;
    [the latter are the bad data]
Correct selected measurements: z_s := z_s - ê_s;
Re-estimate the state (with fixed gain matrix).

Fig. C. The initial selection

s := 0 ; Σ := G^-1
REPEAT:
    Search for the (not yet selected) measurement with maximum |r_Ni| ; let i be this measurement;
    Compute D_i = σ_i² - h_i^T Σ h_i;
    IF D_i ≤ ε_D σ_i²
    THEN: this measurement is in stand-by;
    ELSE: this measurement is selected;
          s := s + 1;
          Σ := Σ + (1/D_i) (Σ h_i)(Σ h_i)^T;
          W_ss^-1 = I_s + H_s Σ H_s^T R_s^-1;
          ê_s = W_ss^-1 r_s;
UNTIL s = s_max or J(x_c) < χ²(m-n-s);
FOR each suspected measurement neither already selected nor already in stand-by:
    IF this measurement had initially W_ii > ε_W
    THEN: compute D_i = σ_i² - h_i^T Σ h_i;
          IF D_i < ε_D σ_i²
          THEN: this measurement is also in stand-by.

Dr. Kurzyn: We are pleased that he confirms our conclusions relative to the performance of the various identification methods.
1. Concerning our choice of the IEEE-30 bus system for the presentation of simulation results: it has been guided by the fact that this system is largely known. Otherwise, the comparative study presented in the paper
is general and applies to any network, whatever its size. Because similar
simulation results have been found on the four quoted networks, the
illustrative examples of the paper are representative of the behavior of
the various identification methods in front of the three types of BD which
may arise in practice.

2. The only difference existing between small and large networks concerns the computation time. As for the HTI method, the difficulty (particularly for large systems) presented by the computation of W_ss may be cleared through efficient ways: (i) The submatrix of L involved in W_ss may be calculated through an updating of the factors of G. Presently,
this new approach is under testing. (ii) A straightforward method is now
available. It consists of multiplying the inverse factored form of G by
the appropriate columns of the identity matrix and of using the sparse
vector technique (see Ref. [F]). As mentioned in the closure of this latter paper, this technique speeds up dramatically the computation of the
off-diagonal terms of the inverse matrix. Let us mention that, whatever
the size of the network, a selection involving more than 30 measurements
is rare.
3. About the sensitivity of HTI method: experience has shown that
the linear assumption is valid whatever the operating point of the system
and whatever the measurement weighting factors.
Finally, we would like to draw some general conclusions inspired by
the experience we gained during the implementation of HTI.
(i) In obvious single bad data cases, the HTI method reduces to
the conventional IBE method.
(ii) In the case of multiple (whether interacting or not) bad data, the main part of the computing effort, i.e. the computation of the W_ss^-1 matrix, is required by both IBE and HTI methods. At the expense of very little additional effort, HTI guarantees the identification of all bad data, and only of them.
(iii) The HTI method is easy to program and to implement (we have implemented it on a PDP-11/70 computer which is in charge of various tasks currently existing in the control center). Of
course, a prerequisite of its programming is its understanding;
in other words, one cannot expect an inexperienced engineer
(who, e.g., has seldom got involved with state estimation-related
functions) to successfully program it.
(iv) The claim according to which one should not bother about
special multiple interacting bad data cases, which are anyway
likely to be successfully handled by the system's operator is in
our opinion arguable. Indeed, as long as the system is small
and the operator experienced, the need for fully automatic state
estimation, security and the like functions is hardly felt. The
difficulties arise when complexity increases beyond human capacities.

REFERENCES
[C] K. A. Clements, G. R. Krumpholz, P. W. Davis, "Power System State Estimation Residual Analysis: An Algorithm Using Network Topology", IEEE Trans. on PAS, vol. PAS-100, no. 4, pp. 1779-1787, April 1981.
[D] L. Mili, Th. Van Cutsem, M. Ribbens-Pavella, "Decision Theory Applied to Bad Data Identification in Power System State Estimation", to be presented at the 7th IFAC Symp. on Identification and System Parameter Estimation, Univ. of York, U.K., July 3-7, 1985.
[E] L. Mili, "Algorithmes Fiables d'Identification des Fausses Donnees par Tests d'Hypotheses", Internal Report (in French), Univ. of Liege, No. MBC/3, May 1984.
[F] W. F. Tinney, V. Brandwajn, S. M. Chan, "Sparse Vector
Methods", IEEE Trans. on PAS, vol. PAS-104, no. 2, pp. 295-301,
Feb. 1985.

Manuscript received March 26, 1985
