Peter S. Probst
1 Introduction
One of the major problems in intelligence analysis and counter-terrorism research is
the use or, more precisely, the misuse of metrics as a means of measuring success.
Such quantification may be admirable and necessary when dealing with rocket motors
or physical phenomena, but it can be self-defeating and unrealistic when dealing with
people and human events, which, after all, are the ultimate underpinnings of terrorism,
insurgency and political instability. Human behavior is notoriously hard to predict,
and outcomes are difficult to assess without historical perspective. Measures of
success touted as useful and accurate too often prove, in the real world, to be little
more than intellectual snake oil. Quantifiable data that are also meaningful are hard
to come by, and so we often willingly settle for data that are merely accessible and
quantifiable, hoping that our extrapolations are sufficiently accurate to guide or
assess a course of action or the conduct of a conflict.
P. Kantor et al. (Eds.): ISI 2005, LNCS 3495, pp. 316-321, 2005.
Springer-Verlag Berlin Heidelberg 2005
Measuring Success in Countering Terrorism: Problems and Pitfalls
much to themselves. In other words, short-term success paved the way for long-term
failure and a significant setback in the war for hearts and minds. The point is that
defining success can be tricky, depending on one's ultimate objectives and timeframe.
3 Measures of Success
Attempts to implement quantitative systems for evaluating success have too often
backfired, with serious unintended consequences. In the early 1960s, a process called
Management by Objectives was regarded by government as a cutting-edge management
tool. Its use by the Intelligence Community proved unfortunate and, too often,
counterproductive. The principle was to define a series of professional objectives
for intelligence officers and then determine how well they measured up. The aim, of
course, was to institute accountability and provide an objective tool for assessing
the relative success of field personnel and their operations.
Tremendous weight was given to the number of intelligence reports submitted by
the Case Officer in the field. This was regarded as an objective gauge of
effectiveness and worth. Case Officers also might be tasked to make a specific
number of new agent recruitments each fiscal year. Of course, the process was
somewhat more complex, but this was the general idea. Performance depended on
productivity and productivity was defined by numbers. Those serving overseas soon
discovered that they had entered the numbers game big time.
Members of the CIA's Clandestine Service have always had a reputation for being
savvy, with a healthy regard for self-preservation, and this extended to the
bureaucratic arena as well. Many realized this was a whole new ballgame with a new
set of rules. The result was adaptation. Realizing that numbers were critical, they
would take a solid, detailed report that the agent and Case Officer, at no small
risk, had spent considerable time developing, and divide it into two or three highly
rated shorter reports, thereby increasing their production numbers for that month.
Yet even such adjustments were rarely enough to keep pace, and a large quantity of
useful but not particularly valuable reporting began to flood Headquarters as a
result of the pressure to best the previous month's total or, at the very least, to
maintain the numbers. As in academia, a publish-or-perish mentality became
increasingly pervasive.
Of course, ways of weighting the value of reports were ultimately introduced to
level the playing field, but numbers too often trumped quality, and a system that
had been introduced to measure success ended up measuring the wrong criterion and
creating pressures that tended to compromise the integrity of the intelligence
process and those participating in it.
A similar situation developed with agent recruitment, with each officer tasked to
recruit a minimum number of new agents per fiscal year. The reality was that a
Case Officer could literally work for years to recruit a single high-level communist
party penetration, whereas a colleague might spend only a couple of months recruiting
two or three agents that were useful but of considerably lesser value. It was like
comparing apples and oranges, with the numerical comparisons creating a false sense
of equivalence, objectivity and fairness. As a consequence there was a tremendous
temptation to make easy recruitments that might not have been of great value but had
the virtue of keeping the Station's numbers up and Headquarters at bay.
killings may comprise an integral part. But the results of such operations are also
difficult to quantify, and they usually lack the drama of large enemy casualty counts.
It is important to realize that the mindset and value systems of the Jihadists are
considerably different from ours. In general, they are not encumbered by our Western
mindset. They may have their own issues, but a preoccupation with statistical counts
is not one of them. Their operations are not tied to the fiscal year or an annual budget
cycle. They are not saddled with annual project renewals. They take the long view.
The American mindset, in contrast, is tied to the fiscal year and to other
artificial, short-term and too often self-defeating constraints and pressures,
including the political timetable of Presidential and Congressional elections. As one
terrorism expert put it, "This is not a war that can be won by an impatient people."
Yet we are impatient, and we demand immediate results; this is one of our greatest
strengths but also one of our greatest failings. Although such traits may be
admirable in other circumstances, they are counterproductive in a protracted
conflict such as the one in which we are currently engaged against a determined
terrorist foe such as al Qaeda and other global Jihadists.
7 Conclusion
Over-reliance on and misuse of statistical measurement have not only distorted the
intelligence product but too often corrupted the intelligence process, as
intelligence officers find themselves chasing the numbers, with less time available
to chase the hard but elusive information needed to advance the country's security
interests. The mindset that produced the body-count syndrome of the Vietnam War is,
unfortunately, alive and well; it is part and parcel of our cultural baggage. As a
consequence, we have misread or failed to identify critical trends and have engaged
in practices that are transparently counterproductive. In seeking to measure success,
we measure the things that are easy to quantify, but these are too often off the
mark, providing only the illusion of accuracy and precision rather than a valid
measure of meaningful progress.
Statistical analysis as used by the government to assess terrorism and counter-
terrorism efforts remains primitive and, too often, dangerously misleading. We measure
what can easily be quantified rather than what is truly meaningful. We strive to
capture extremely complex phenomena in a simple sound bite, reinforced by seemingly
compelling but simplistic statistical comparisons, and then wonder why our instant
analysis fails to comport with reality, leaving us embarrassed and scratching our
heads. Numbers, as we use them, provide a false sense of objectivity, accuracy and
precision, too often leaving decision makers frustrated and angry, and leaving the
public with the feeling that they have somehow been conned.
To use our limited resources to best effect, we need to introduce concepts and
analytic strategies that more accurately reflect the reality on the ground, enable us
to better predict trends, and allow us to more accurately assess the effectiveness of
our efforts, our programs and our people.