
Abstract: Profile, Predict & Prevent

Automating Normality: Detecting Abnormal Behaviors


A Case-Study in Computational Vision
In this paper we discuss the results of a philosophical inquiry building on a surveillance project
entitled Privacy Perimeter Preserving Protection Project (P5) and funded by the Seventh Framework
Programme of the European Commission. In brief, the project aimed at detecting, in real time,
threatening intrusions within the outdoor perimeter of any Critical Infrastructure by using a wide
variety of sensors including radars, video and visual cameras. While automation of image analysis
promises crucial impacts on the organization of surveillance, important challenges must still be
overcome before a robust technological system can be available; in the words of the engineers, a system
guaranteeing a very low rate of detection failures under any weather conditions. This paper
concentrates on the very last component of this broad system, namely the set of algorithms enabling the
discrimination between normal and threatening behaviors. Such a technological focus will allow us
to investigate the interplay between machine-learning algorithms and the socio-technical definition of
normal behavior. More fundamentally, we claim that these algorithmic practices significantly reshape
our concepts of statistical and social norms (Hacking 1990, Macherey 2014). Beforehand, we propose
to carefully unfold one of the many algorithms developed by the University of Reading's
Computational Vision Group, a partner in this European project. The considered algorithm for behavioral
recognition centrally draws upon Hidden Markov Models and Bayesian Networks, two widespread
pattern-recognition techniques showing the particularity of turning quite old statistical
techniques (Bayes 1763, Markov 1906) into innovative algorithmic devices (Pearl 1985, Bishop 1986);
the general principle is sketched below.
A genetic description of the algorithm's technical scheme (Simondon 1958) will help us in deploying
three focal points where social and technical normativities intertwine: (i) the metrics' definition, (ii) the
algorithm's tests and (iii) the dataset's constitution.
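
To give the reader a concrete grip on the technique, the following minimal Python sketch (using only numpy) illustrates the general principle of such HMM-based behavioral recognition: an observed trajectory is scored under two competing models and an alarm is raised when the "threatening" model explains it better. Everything in it, from the two-model setup to the state semantics and the parameter values, is a hypothetical illustration of the technique rather than the P5 algorithm itself, and only the Hidden Markov Model component is sketched.

import numpy as np

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    # Scaled forward algorithm: returns log P(obs | model) without underflow.
    alpha = start_p * emit_p[:, obs[0]]
    scale = alpha.sum()
    log_lik = np.log(scale)
    alpha = alpha / scale
    for symbol in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, symbol]
        scale = alpha.sum()
        log_lik += np.log(scale)
        alpha = alpha / scale
    return log_lik

# Hypothetical models. Hidden states: 0 = walking, 1 = loitering. Observed
# symbols: the quantized speed of a tracked blob (0 = still, 1 = slow,
# 2 = fast), itself a metric computed from pixel coordinates.
normal = dict(start_p=np.array([0.9, 0.1]),
              trans_p=np.array([[0.95, 0.05], [0.30, 0.70]]),
              emit_p=np.array([[0.05, 0.45, 0.50], [0.70, 0.25, 0.05]]))
threat = dict(start_p=np.array([0.3, 0.7]),
              trans_p=np.array([[0.60, 0.40], [0.10, 0.90]]),
              emit_p=np.array([[0.10, 0.60, 0.30], [0.85, 0.10, 0.05]]))

track = [2, 1, 0, 0, 0, 0, 1, 0, 0, 0]  # a blob that slows down and lingers
if forward_log_likelihood(track, **threat) > forward_log_likelihood(track, **normal):
    print("ALARM: trajectory better explained by the 'threatening' model")

In a deployed system the models' parameters would be learned from annotated footage rather than written by hand, which is precisely where the three loci discussed below intervene.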
(i) Starting by carefully looking at the coding and writing of such algorithms will help us dissipate
the hazy aura surrounding these learning techniques. Rather evidently, every machine-learning
algorithm must be given a metric according to which measurements of quantified reality (i.e.
pixels) will be taken and, eventually, alarms triggered. Indeed, these design decisions strongly
determine the algorithm's ability to model and recognize behaviors. (ii) Focusing hereafter on the
process of setting up and assessing an algorithm allows us to grasp what makes
a good algorithm for researchers in computational vision, and how much of the algorithm is reshaped
during this testing phase. Thus, rather traditional questions in the history and sociology of technology
will here be addressed to our machine-learning techniques: How accurate are the algorithm's threat
detections (MacKenzie 1993)? How can we measure and ensure this accuracy (Wise 1995)? How well does it
perform against other algorithms' benchmarks (see the sketch below)? (iii) Finally, the status of data has to be adequately
qualified. Indeed, the main role of data here is to teach the learning algorithms to differentiate between
various types of behavior. Consequently, in order for the training to be pertinent, the data feeding these
algorithms have to be both massively collected (e.g. the PETS datasets) and carefully produced (e.g.
synchronization, geographical calibration, production of typical situations).
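
As an illustration of the testing phase (ii), and of its dependence on the annotated datasets of (iii), the sketch below computes the two figures of merit around which such benchmarks revolve: the detection rate (the complement of the engineers' "rate of detection failures") and the false alarm rate. The labels and alarms are invented for the example; real evaluations would run over annotated corpora such as the PETS datasets.

def detection_metrics(ground_truth, predicted):
    # Compare per-sequence ground-truth labels (True = threatening)
    # with the alarms the algorithm actually raised.
    tp = sum(g and p for g, p in zip(ground_truth, predicted))
    fn = sum(g and not p for g, p in zip(ground_truth, predicted))
    fp = sum(not g and p for g, p in zip(ground_truth, predicted))
    tn = sum(not g and not p for g, p in zip(ground_truth, predicted))
    return {"detection_rate": tp / (tp + fn),    # share of real threats caught
            "false_alarm_rate": fp / (fp + tn)}  # share of normal scenes flagged

# Invented annotations for ten test sequences and the alarms raised on them.
ground_truth = [True, True, True, False, False, False, False, True, False, False]
alarms       = [True, False, True, False, True, False, False, True, False, False]
print(detection_metrics(ground_truth, alarms))
# {'detection_rate': 0.75, 'false_alarm_rate': 0.16666666666666666}

The trade-off between these two rates, typically explored by varying the alarm threshold, is exactly where the socio-technical definition of a "good" algorithm gets negotiated.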
Although this analysis leads us to the identification of three privileged loci (metrics, tests, datasets)
through which regulatory practices could take hold of these algorithms, we would like here to conclude by
emphasizing three philosophical issues raised by the application of machine-learning to surveillance
systems. Firstly, the thorough understanding of the algorithms exposed here called for a properly
technological perspective, one which could not harmlessly be reduced to an epistemology of statistics
(Gigerenzer et al. 1989). As the description of these practices has suggested, the supposed automation of
clustering and inferences produces practical effects absent from any equivalent statistical formalism
or from its application. Secondly, the selected case-study impelled us to describe the system in terms of
efficiency and accuracy, rather than truth and causation. Consequently, the analysis of machine-learning
systems required us to requalify the relation between technical systems and predictability, emphasizing
the difficulty, not to say the practical impossibility, of formally assessing the reliability of any
algorithmic system (MacKenzie 2001). Finally, the great philosophical interest of this case-study lies in
the active production, through machine-learning techniques, of an equivalence between two very different sorts of
normativities. This socio-technical negotiation supposes the very possibility of translating a rather
complex set of social norms (e.g. characterizing a threatening behavior) into accurate technical
requirements. Describing the norm's successive metamorphoses and, in turn, the effect of its technical
embodiment upon its initial formulation shall allow us to address the problem we initially raised: How
does the possibility of such an implementation affect our concept of norm?


Primary Literature
Baker, J., The DRAGON system: an overview, IEEE Transactions on Acoustics, Speech,
and Signal Processing, vol. 23, no. 1, pp. 24–29, 1975.
Bayes, T., An Essay towards solving a Problem in the Doctrine of Chances. By the late
Rev. Mr. Bayes, communicated by Mr. Price, in a letter to John Canton, Philosophical
Transactions of the Royal Society of London, vol. 53, pp. 370–418, 1763.
Baum, L. E. and Petrie, T., Statistical Inference for Probabilistic Functions of Finite State Markov
Chains, The Annals of Mathematical Statistics, vol. 37, no. 6, pp. 1554–1563, 1966.
Bishop, M. and Thompson, E., Maximum Likelihood Alignment of DNA Sequences, Journal
of Molecular Biology, vol. 190, no. 2, pp. 159–165, 1986.
Markov, A. A., Extension of the limit theorems of probability theory to a sum of variables
connected in a chain [1906], reprinted in Appendix B of: R. Howard, Dynamic Probabilistic Systems,
volume 1: Markov Chains, John Wiley and Sons, 1971.
Pearl, J., Bayesian Networks: A Model of Self-Activated Memory for Evidential Reasoning, in
Proceedings of the 7th Conference of the Cognitive Science Society, University of California,
pp. 329–334, 1985.

Secondary Literature
Norton Wise (ed.), The Values of Precision, Princeton University Press, 1995.
Ian Hacking, The Taming of Chance, Cambridge University Press, 1990.
Gerd Gigerenzer, Zeno Swijtink, Theodore Porter, Lorraine Daston, John Beatty, Lorenz
Krüger, The Empire of Chance: How Probability Changed Science and Everyday Life,
Cambridge University Press, 1989.
Lisa Gitelman (ed.), "Raw Data" Is an Oxymoron, MIT Press, 2013.
Pierre Macherey, Le sujet des normes, Éditions Amsterdam, 2014.
Donald MacKenzie, Inventing Accuracy, MIT Press, 1993.
Donald MacKenzie, Mechanizing Proof: Computing, Risk and Trust, MIT Press, 2001.
Reviel Netz, Barbed Wire: An Ecology of Modernity, Wesleyan University Press, 2009.
Antoinette Rouvroy and Thomas Berns, Gouvernementalité algorithmique et perspectives
d'émancipation : le disparate comme condition d'individuation par la relation ?, Réseaux,
no. 177, 2013.
Gilbert Simondon, Du mode d'existence des objets techniques, Aubier, 1958.

Most of the material we draw upon was gathered in the context of the Privacy Perimeter Preserving Protection Project
(P5), funded by the European Commission, in which the Centre de Recherche en Information, Droit et Société (CRIDS) of
the Université de Namur (UNamur) is leading the ethical and legal reports. Our inquiry extensively draws on written
surveys and oral interviews with partner engineers, as well as ethnographic observations during field trials. This
proposal is part of a doctorate in Philosophy of Science and Technology, entitled Computing Economics: At the
Crossroads of Scientific, Technical and Economic Normativities, conducted at UNamur under the supervision of
Antoinette Rouvroy (F.N.R.S., UNamur) and Thomas Berns (Université Libre de Bruxelles). Dominique Deprins, T.
Berns and A. Rouvroy are presently leading a research project, funded by the F.N.R.S., in turn entitled Algorithmic
Governmentality.
