
Facial Recognition 1ac

1AC FRT

Plan
Plan: The United States federal government should
substantially curtail its domestic facial biometric surveillance.

Exclusion
Contention 1 is exclusion
FRT has become an instrument of power that creates
ontological gaps between populations and functions under a
paradigm of security-based biopolitics
Kember 13 (Sarah Kember is a writer and academic. Her work incorporates new
media, photography and feminist cultural approaches to science and technology,
Gender Estimation in Face Recognition Technology: How Smart Algorithms Learn to
Discriminate, November 30, 2013, pg. 4-6,
http://static1.1.sqspcdn.com/static/f/707453/23986911/1385871002407/Kember_MF
7.pdf?token=aCDMxWa5aZ54giSsrZsP%2BJRBRFs%3D accessed 6/23/15)//CS
Jacques Penry's PhotoFIT pack came into use in the 1970s and consisted of
photographic images of five features (hair and forehead, eyes, nose, mouth, and
chin) mounted on card.[12] He included a male and female database but
established what he claimed was a universal "genderless" facial topography. This
was actually derived from a norm, a young white male that face recognition
systems continue to use, but with the aim, for example, of restricting access to
certain areas based on gender or collecting valuable demographics such as the
number of women entering a retail store on a given day.[13] The segue from
disciplinary to biopower is, for Foucault, contingent on the increasing use of
demographics and statistics that orient governance more towards the populace
than the individual.[14] Face recognition systems demonstrate both forms of
power and perhaps even the shift from one to the other. This becomes clearer as
we track back from the biopolitical uses and applications of face recognition
technology to the disciplinary design and architecture of the technology itself. Koray
Balci and Volkan Atalay present two algorithms for gender estimation.[15] They
point out that the same algorithms can be used for different face specific tasks
such as race or age estimation, without any modification.[16] In the first
algorithm, the training face images are normalized and the eigenfaces are
established using PCA.[17] PCA is described here as a statistical technique for
dimensionality reduction and feature extraction.[18] The performance of the
system is improved by the subsequent use of a pruning algorithm, which
identifies statistical connections extraneous to gender (race or age) estimation and
deletes them. After deletion, the system is re-trained and the pruning is repeated
until all the connections are deleted.[19] A performance table is produced,
showing the relation between each iteration of pruning, the percentage of deleted
connections, and the accuracy of the system. The accuracy of gender estimation in
Balci and Atalay's experiment actually diminishes after the eighth iteration, albeit
by only a few percentage points, allowing them to claim that the system is stable.
They maintain that pruning or the deletion of statistical connections improves
gender estimation not in a linear or absolute sense but by enhancing the process of
classification itself. For Geoffrey Bowker and Susan Leigh Star, classification is a
largely invisible, increasingly technological, and fundamentally infrastructural
means of sorting things out.[20] It is an instrument of power-knowledge that is
productive of the things it sorts: things such as faces that are by no means
unambiguous entities that precede their sorting.[21] The existence of a pruning
algorithm that renders faces less ambiguous testifies to their elusiveness, or their
inherent resistance to classification as one mode of representationalism. It would,
perhaps, be going too far to suggest that there is a crisis of representationalism in
appearance-based face recognition systems. However, their designers and
engineers are clearly aware that faces are things that "resist depiction"[22] because
they are "complex and multidimensional"[23] and not "unique, rigid objects".[24]
The advantage of a more dynamic and relational approach to the production of
faces in face recognition technology would include recognizing representationalism
as a claim, a defensive manoeuvre in the face of faces' non-essential ontology and
dynamic co-evolution with technological systems. Still, this defensive manoeuvre
matters in a double sense: it is both meaningful and material, reproducing norms
for example, norms of gender in a machine that is learning to classify, sort, and
discriminate among the population, better than it could before. If this is a last push
to representationalism, it is one that reinforces it rather than shows it the door.
Face recognition technology upholds a belief in the existence of ontological
gaps between representations and that which they represent. It also re-produces
the norms of nineteenth-century disciplinary photography even as photography
becomes allied to the security-based biopolitics of computational vision and smart
algorithmic sorting. In this sense, Kelly Gates is right to argue that new vantage
points can underscore old visions as well as old claims to unmediated visuality.[25]
Like her, I question the autonomy of face recognition systems without denying that,
in conjunction with human input of various kinds, they enact what Barad calls
agential realism, generating both categories and entities by cutting and sorting
male from female, black from white, old from young.[26] In a context in which
security systems are fully integrated with those of marketing, these particular
epistem-ontologies intersect in predictable ways with the category of
criminal/citizen-consumer.[27] Since the events of 9/11, the stereotypical face of
terror (gendered, racialized) has been perhaps the most represented and most
elusive of all. If the problem, from a system point of view, is that the categories leak
and the classification structure does not hold, the solution is to reinforce it by
pruning it. This process of agential cutting and sorting strengthens statistical groups
by deleting connections between them and is precisely the point of a possible
intervention, the means by which the biopolitics and ethics of computational vision
can be intercepted in order to make a difference.
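The PCA "eigenface" step Kember summarizes above (normalize the training face images, then extract a small number of statistical features) can be sketched roughly as follows. This is an illustration only, not Balci and Atalay's code; the image data, sizes, and component count are hypothetical.

```python
import numpy as np

# Hypothetical stand-in for normalized training face images:
# 20 images, each flattened to a 32x32 = 1024-pixel vector.
rng = np.random.default_rng(0)
faces = rng.random((20, 1024))

# PCA: center the data, then take the principal directions via SVD.
# Each right-singular vector, reshaped to 32x32, is an "eigenface".
mean_face = faces.mean(axis=0)
centered = faces - mean_face
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

k = 5                               # keep the k highest-variance components
eigenfaces = Vt[:k]                 # (5, 1024) basis of "face space"
weights = centered @ eigenfaces.T   # each face reduced to 5 numbers

# Dimensionality reduction and feature extraction: 1024 pixels
# become 5 feature weights, which a downstream classifier
# (gender, race, or age estimation) then operates on.
print(weights.shape)  # (20, 5)
```

The pruning the card then describes operates on the classifier trained over these reduced features, iteratively deleting statistical connections judged extraneous to the estimation task and re-training.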

Scenario 1 is Biopower

Biometrics reflect the management of life: biometrics appear
in multiple aspects of our daily lives and represent the
governing of life
Ajana and Beer 14 (Btihaj Ajana is Lecturer in Culture, Digital Humanities and
Creative Industries at King's College London & David Beer is Senior Lecturer in
Sociology at the University of York, UK, May 12, 2014, The Biopolitics of Biometrics:
An Interview with Btihaj Ajana http://theoryculturesociety.org/the-biopolitics-ofbiometrics-an-interview-with-btihaj-ajana/ ) //CS
DB: Your book focuses upon the biopolitics of biometrics. This allows you to explore
the implications of forms of measurement for the politics of life and bodies. When
you explore these connections you introduce some interesting conceptual
resources. Michel Foucault plays a central role in your analysis here. What does
Foucault's work offer in terms of the ongoing analysis of new biometrics? BA: If we
consider the literal meaning of biometrics, it is all about measuring life, measuring
the uniqueness of the bio and its identity. Biometrics as such provides us with a
very valid example of what Foucault terms biopower, that is, the form of power
being directed at the biological existence of individuals and populations, at man-as-species-body. The management of life, which Foucault refers to as biopolitics, is
performed through a variety of means and techniques of which biometrics is an
example. We can see this unfolding in a variety of domains and spaces including
borders, citizenship and immigration policy, social services, healthcare and many
other areas of governance that are increasingly reliant on biometrics for managing
and controlling the life of the living. Foucault's concepts provide us with pertinent
and valuable points of departure for analysing the ways in which biometrics is
implicated in processes of categorisation and classification which allow the
(sub)division of the population into manageable groups according to their level of
risk and identity profiles. Crucially, Foucault identified the paradoxical aspect of
biopolitics: the same techniques that are used to protect and enhance certain lives
can be used to endanger and obstruct others. And this is something we can clearly
observe in the politics and policies governing asylum, immigration and citizenship.
Throughout this book, I stage an encounter between the figure of the citizen and
that of the asylum seeker by way of elucidating this paradoxical yet constitutive
biopolitical relation between the two, a relation that is increasingly being mediated
through biometric technology. But instead of relying exclusively upon a Foucauldian
conceptualisation of biopolitics, I extend my analysis to other relevant theorists
including Agamben and Rose in order to achieve a more nuanced, complex and
critical approach to the biopolitics of biometrics. Ultimately, the concept of
biopolitics becomes here both a method of analysis as well as a subject of enquiry.

Facial recognition specifically exerts the disciplinary power of
the panopticon: power stems from FRT's ability to sort and
categorize
Gray 3 (Mitchell, Surveillance and Society. Urban Surveillance and Panopticism:
will we recognize the facial recognition society? JJZ
http://www.surveillance-and-society.org/articles1(3)/facial.pdf)
This paper explores the implementation of facial recognition surveillance
mechanisms as a reaction to perceptions of insecurity in urban spaces. Facial
recognition systems are part of an attempt to reduce insecurity through knowledge
and vision, but, paradoxically, their use may add to insecurity by transforming
society in unanticipated directions. Facial recognition promises to bring the
disciplinary power of panoptic surveillance envisioned by Bentham - and then
examined by Foucault - into the contemporary urban environment. The potential of
facial recognition systems (the seamless integration of linked databases of human
images and the automated digital recollection of the past) will necessarily alter
societal conceptions of privacy as well as the dynamics of individual and group
interactions in public space. More strikingly, psychological theory linked to facial
recognition technology holds the potential to breach a final frontier of surveillance,
enabling attempts to read the minds of those under its gaze by analyzing the
flickers of involuntary microexpressions that cross their faces and betray their
emotions. It is rapidly becoming an urban instinct to grasp at security through
surveillance and knowledge, but this, paradoxically, may add to urban insecurity in
a fundamental way: by transforming society in unforeseen directions. There is a
threshold point in urban surveillance beyond which quantitative change - the
addition of devices used and areas watched - becomes qualitative change. It follows
that we might not recognize the facial recognition society. In Discipline and
Punish, Michel Foucault highlighted the transformative, disciplinary potential of
surveillance, explaining the power inherent to the acts of information collection and
analysis. Facial recognition software, with its ability to digitally archive a limitless
gaze over urban space, represents a leap in this disciplinary influence. There is no
limit to how completely facial recognition may permeate society as its underlying
technologies continue to develop. One cannot assume that traditional conceptions
of privacy would have meaning in a society riddled with facial recognition cameras.
And it is not just privacy that could be affected; fundamental ways in which
members of society interrelate are also vulnerable to change. It is vital to
contemplate how those within would experience this type of surveillance-saturated
urban environment, and how it could alter urban life. This paper proceeds through
four main sections. First, facial recognition surveillance is defined, highlighting the
significant advance it represents over other forms of surveillance. Second, I explore
general background issues relevant to surveillance. The third section highlights the
transformative potential of facial recognition surveillance with an overview of the
panoptic tradition of thought and its disciplinary power. It also contemplates the
manner in which facial recognition surveillance relates to governance based on risk
management. The final section of the paper explores the technology's potential
ramifications, especially in the realm of privacy. It also considers how facial

recognition systems may affect those who are observed. The conclusion addresses
ways in which societies that value the balance between privacy and security must
respond. Facial Recognition Defined Facial recognition programs are part of the
growing realm of biometrics, or body measurement. Face images, fingerprints, hand
geometries, retinal patterns, voice modulations and DNA are all identification
sources unique to individuals. Facial recognition software maps details and ratios of
facial geometry using algorithms, the most popular of which results in a
computation of what is called the eigenface, composed of eigenvalues (Selinger
and Socolinsky, 2002). Many basic uses of facial recognition technology are
relatively benign and receive little criticism. For example, the technology can be
used like a high-tech key, allowing access to virtual or actual spaces. Instead of
presenting a password, magnetic card or other such identifier, the face of the
person seeking access is screened to ensure it matches an authorized identity. This
eliminates the problem of stolen passwords or access cards. In heightened security
situations, facial recognition could be used in conjunction with other forms of
identification (Lyon, 2001: 75). The next step in facial recognition is to connect the
systems to digital surveillance cameras, which can then be used to monitor spaces
for the presence of individuals whose digital images are stored in databases. Images
of those present in the spaces under watch can also be recorded and subsequently
paired with identities. Surveillance power grows as various systems, public and
private, are networked together to share information. Risk management is able to
enhance security by cataloguing and analysing observable behaviour, but it also
has a deeper significance: the ability to directly affect that behaviour. For Foucault
(1994), the modern art of governance arose with a turning away from the blunt
forces of sovereign power and control over a state to a disciplinary influence on the
population within a state through the acquisition of knowledge and conduct of
analysis about that population. Clive Norris and Gary Armstrong (1998:7) list three
types of power created by surveillance. First is a direct, authoritative response seen,
for example, when a security guard using CCTV observes a person behaving
inappropriately and asks the person to cease the behaviour. The second form is
deterrence, exemplified by an individual who refrains from inappropriate behaviour
due to a fear of being caught based in the perceived ability of CCTV monitors to
identify him. The third form is not meant to punish or deter, but to abolish the
potential for deviance. This requires an internalisation of the power of surveillance
that transforms those under its gaze. Understanding this third type of power begins
with Jeremy Bentham's eighteenth-century disciplinary concept of the panopticon.
The panopticon is a simple architectural design meant to impose order on the lives
of those within, be they criminals, insane, workers or school children. A multi-level
circular building surrounds a central observation tower. The building is divided into
individual cells traversing its entire width, so that sunlight from a window in the
outside wall of the cells illuminates each inhabitant for viewing by disciplinarians in
the tower. Windows on the tower are fitted with blinds or other mechanisms,
allowing disciplinarians to observe those sequestered in the cells without being seen
themselves (Foucault, 1995:200). The surveillance ability from the tower is
complete: each actor is alone, perfectly individualized and constantly visible. The
panoptic mechanism arranges spatial unities that make it possible to see constantly
and to recognize immediately (Foucault, 1995:200). The strength of the panopticon

derives from the visible yet unverifiable operation of power within. Captives
constantly sense the presence of the tower and the possibility they are being
observed at any given time, yet have no way to determine exactly when they are
under scrutiny (Foucault, 1995:201). In this way, the panopticon induces in the
inmate a state of conscious and permanent visibility that assures the automatic
functioning of power (Foucault, 1995:201). Carefully orchestrated power of this sort
does not need to be exercised constantly, because subjects internalise the power
relationship. As Foucault explains: He who is subjected to a field of visibility, and
who knows it, assumes responsibility for the constraints of power; he makes them
play spontaneously upon himself; he inscribes in himself the power relation in which
he simultaneously plays both roles; he becomes the principle of his own subjection.
(1995:202-203). In a similar vein, Norris and Armstrong note that the power of
surveillance is not merely that it is exercised over someone but through them,
and [s]urveillance therefore involves not only being watched but watching over
one's self. The result is habituated anticipatory conformity and social control that
automatically enforces commonly accepted societal norms and values (1998:5-6).
An urban space permeated with facial recognition systems is the apotheosis of the
panopticon. While CCTV has the power to see constantly like the panopticon, only
facial recognition can recognize immediately. Disciplinary influence can be
achieved in this way over bodies on the move; bodies no longer need to be
physically sequestered for panoptic discipline to affect them. New dangers lurk in
the contemporary panopticon. As surveillance spreads throughout society and its
control disperses accordingly, its influence is also dispersed, and in unanticipated
ways. The operation of surveillance power stems largely from its ability to sort and
categorize. David Lyon calls contemporary panopticism the phenetic fix. It acts to
capture personal data triggered by human bodies and to use these abstractions to
place people in new social classes of income, attributes, habits, preferences, or
offences, in order to influence, manage, or control them (2002). Little is known
about the overall effects of the process on urban centres and their liveability, and it
must be analysed critically. As Lyon says, We simply do not understand ... the full
implications of networked surveillance for power relationships, or of the phenetic
fix for security and social justice (2002).
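Gray's "high-tech key" use above (screening a presented face against an authorized identity) can be illustrated as a distance check between feature vectors such as eigenface weights. A minimal sketch, assuming made-up feature vectors and an arbitrary acceptance threshold:

```python
import numpy as np

# Hypothetical feature vectors (e.g. eigenface weights) for the
# enrolled, authorized face and for faces presented at the door.
enrolled = np.array([0.9, 0.1, 0.4, 0.7])
presented = np.array([0.8, 0.2, 0.4, 0.6])
impostor = np.array([0.1, 0.9, 0.8, 0.2])

THRESHOLD = 0.3  # made-up: largest Euclidean distance still accepted

def verify(probe, reference, threshold=THRESHOLD):
    """Accept if the probe face is close enough to the reference face."""
    return bool(np.linalg.norm(probe - reference) <= threshold)

print(verify(presented, enrolled))  # close match -> True
print(verify(impostor, enrolled))   # far away -> False
```

The surveillance mode the card goes on to describe differs only in scale: instead of one enrolled reference, every captured face is compared against a networked database of stored identities.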

Following 9/11, Homeland Security has continued to rely on
FRT to achieve security. This narrative and push for FRT rely
on a repressed paranoia surrounding Arab others.
Gates 6 Assistant Professor in the Department of Communication and the
Science Studies Program at the University of California, San Diego. (Kelly, Cultural
Studies Vol. 20, Nos. 4-5, July/September 2006, pp. 417-440, ISSN 0950-2386
print/ISSN 1466-4348 online 2006 Taylor & Francis http://www.tandf.co.uk/journals
DOI: 10.1080/09502380600708820, IDENTIFYING THE 9/11 FACES OF TERROR The
promise and problem of facial recognition technology) NAR

A complicated rush of forces and alliances converged and parted ways in response
to the September 11 catastrophe in the United States. From the cacophony of public
debates and policy responses emerged the official rubric of Homeland Security,
but what exactly this type of security would entail in practice would require
additional political energy and resources to define. What sort of political priorities
would define Homeland Security and how would they be translated into programs,
technologies, and practices for security provision? Among the range of security
solutions proposed in the immediate aftermath of 9/11, and later as part of the
responsibilities of the Department of Homeland Security (DHS), was the widespread
deployment of new identification technologies called biometrics. Following the
attacks, security through identification, or the securitization of identity (Rose 1999,
p. 240), emerged as a major political and governmental imperative, with significant
attention and resources applied to conceive of how to improve upon existing
identification systems, and how to deploy those improved techniques at a
proliferation of sites. The biometrics industry, made up of private companies
developing and marketing digital fingerprinting, iris scanning, voice recognition, and
similar systems, reoriented itself around the new political priorities of homeland
security. The press gave considerable coverage to biometrics and their developers,
and awareness of the technologies mushroomed among policy makers and the
public. Biometric industry stocks rose to inflated levels, and industry representatives
appeared in the press touting the prospects of their product offerings to protect the
nation from the new threat of catastrophic terrorism on US soil. Hearings were held
on Capitol Hill to debate the potential uses of biometrics in new state security
programs, and Congress subsequently funded more research and development
toward their integration into airport security and border and immigration control
systems. Every major piece of post-9/11 federal security legislation included
biometrics provisions, including the USA Patriot Act, the Enhanced Border Security
and Visa Entry Reform Act, the Aviation and Transportation Security Act, and the
Homeland Security Act, officially establishing the Department of Homeland Security
and its duties. One specific type of biometric that drew the attention of policy
makers and the press was automated facial recognition. More of an ongoing set of
experiments than an actually existing technical system, automated facial
recognition included a number of different competing computer processes designed
to identify individuals using digitized visual images of the face. While civil
libertarians have raised critical questions about the privacy implications of
computerized facial recognition, this essay sets aside the unresolved privacy issues,
and the security versus privacy dichotomy, to approach facial recognition from a
different critical lens.1 Why was facial recognition considered a solution to the
newly salient problem of terrorism after 9/11? And what problems and
contradictions had to be forgotten or glossed over in order to construct it as a so-called security solution? I explore three answers to these questions. First and most
generally, interest in facial recognition in the wake of 9/11 was indicative of the
preoccupation with technical concerns in contemporary political life. As Andrew
Barry (2001) has argued, present governmental rationality is concerned with all
things technical, and problems of government are conceived as having technical
solutions, often involving networking and interactivity. In this context, it made sense
for the problem of terrorism to be thought of as having a high-tech, state-of-the-art

solution involving computer science, digital technologies, and information networks.


Second, facial recognition and other biometrics not only connected directly to the
demand for security through identification, they also held out the promise of
significantly improving identification systems by enabling the identification of
individuals at a distance and in real-time, two central governmental and
technological aims at present. But while new identification technologies have been
posited as improvements over existing systems in the degree of distance, speed,
accuracy and objectivity that they bring to identification techniques, in fact these
new techniques are yet another manifestation in a long history of efforts to
standardize identification systems in order to stabilize identity, a fundamentally
unstable construct. Third, the idea of automated facial recognition seemed uniquely
suited to identifying the terrorist threat embodied in the mugshot images of Arab
men paraded across media screens and the pages of print. These fetishized objects
were simultaneously unidentifiable and readily identified by their characteristic
faces of terror, a contradiction repeated across the field of post-9/11 chatter,
from security industry white papers, press releases, and promotional materials to
agency briefings, congressional hearings, and newspaper headlines. There seemed
no better way to identify these so-called unidentifiable enemies with their
distinctive face of terror than the high-tech, state-of-the-art technique of
computerized facial recognition. In both practical and symbolic terms, then, ongoing
efforts to transform automated facial recognition from a set of technical
experiments to a functioning identification technology have relied on an
essentialized construction of the terrorist identity, a construction that has had
particularly grave consequences for some. In the effort to define automated facial
recognition as a new homeland security technology, proponents and developers
have attempted to build a particularly doctrinaire and troubling politics of identity
into both the rationale for and the design of technical systems. Identifying how
proponents of new identification technologies attempt to incorporate rigid, and
thereby flawed, notions of identity into the rationale for and design of such systems
troubles claims about their technical neutrality, and raises further questions about
their suitability for providing something called homeland security.

The use of FRT to prevent terrorism constructs enemies
through a political project that frames the Arab other as
a threat based on a discourse of racialized otherness.
Gates 6 Assistant Professor in the Department of Communication and the
Science Studies Program at the University of California, San Diego. (Kelly, Cultural
Studies Vol. 20, Nos. 4-5, July/September 2006, pp. 417-440, ISSN 0950-2386
print/ISSN 1466-4348 online 2006 Taylor & Francis http://www.tandf.co.uk/journals
DOI: 10.1080/09502380600708820, IDENTIFYING THE 9/11 FACES OF TERROR The
promise and problem of facial recognition technology) NAR
In the US, a portion of the federal funding for research and development into
automated facial recognition has come from the Defense Advanced Research
Projects Agency (DARPA), the central research and development arm of the

Department of Defense. Following the bombing of the Khobar Towers US military


barracks in Saudi Arabia in 1996, DARPA began to develop a program called Image
Understanding Force Protection (IUFP), which would provide funding to universities
and private companies for research and development on multi-modal biometric
technologies for identifying humans at a distance, literally at a range of 50 to 500
feet, and without their actual cooperation or direct interface with a biometric
system. By multi-modal they meant using and combining multiple biometrics in
one system, merging facial, voice, gait, and other forms of automated recognition
into one, more robust system. The program was renamed Human ID at a Distance,
or HumanID for short, and following 9/11, it was placed under the auspices of
DARPA's controversial Total Information Awareness program, headed by John
Poindexter. HumanID was one of a number of projects conceptualized in the 1990s
to address the newly defined problem of asymmetric threats, a concept coined by
military researchers to define the insidious post-Cold War adversaries, including
guerrilla insurgents, drug smuggling cartels, and other state-less actors: those
loosely organized networks that exploit [Western] society's openness (Grimes
2003, p. 28). As the chosen mantra of the new Total Information Awareness system,
the concept of asymmetric threats was briefly addressed in a promotional video for
TIA targeted to industry and state officials. The video, outlining the various research
and development projects that were combined to form TIA, was shown as the
introduction to the keynote address at the September 2002 Biometrics Consortium
Conference, delivered by Dr Robert L. Popp, then Deputy Director of DARPA's newly
formed Information Awareness Office. The text opens with a montage of images and
sounds signifying the Cold War, the fall of the Berlin Wall, and the newly defined US
security threats of the 1990s and early twenty-first century. The Cold War images,
black and white photos of suffering peasants and video of Soviet soldiers marching
in file, are followed by mug shot images of recognizable terrorist suspects and a
characteristic video image of a crowd of Arab men moving rhythmically en masse.
The montage is accompanied by the following voice-over narration: During the Cold
War, the enemy was predictable, identifiable, and consistent. We knew the threats,
the targets were clear. But times change. Today, with the demise of the other
superpower, America is in a different position: a position of vulnerability. When the
enemy strikes, it isn't predictable. It isn't identifiable. It is anything but consistent.
Times change. We are in a world of asymmetrics, and we need transformational
solutions. The asymmetric threat is now a reality of global life. How do we detect it?
How do we predict it? How do we prevent it? (DARPA, 2002) The opening sequence
of this video invokes a nostalgic longing for the so-called predictability and
consistency of the Cold War, when the enemy was ostensibly well-defined and
identifiable. This nostalgic idea of an identifiable enemy is used to define a new
form of national vulnerability: the construction of America as vulnerable precisely
because it cannot identify its enemy, literally or symbolically. The inference is that
the United States is more vulnerable than ever, that the asymmetric threats facing
the nation today are even greater than the perpetual threat of nuclear holocaust
during the 40-year Cold War. The collapse of the Soviet Union may have eliminated
the symmetric threat of nuclear warfare with the communist Other, but in its place
came many smaller villains, ostensibly more difficult to locate, define, and identify.
The US may no longer have an enemy that can match its military might, according

to this message, but it has many small enemies that do not play by the
conventional rules of state warfare and thus represent significant threats,
disproportionate to their small size and military resources. These new
unidentifiable and unpredictable enemies are constructed as major risks, a
construction given considerable leverage by the enormity of the violence on 9/11
along with its precession as simulacra. Of course, the mug shot images of specific
faces in this video contradict the notion that the new national threats are
unidentifiable. The visual text, including images of specific faces and groups of
ethnically coded people, exemplifies the way in which the problem of asymmetric
threats is bound symbolically to the stereotype of the Arab terrorist. While the
implication is that facial recognition and other technologies can accomplish the truly
magical feat of identifying the unidentifiable threats to the nation, we are invited
to imagine precisely who will be identified. In the aftermath of 9/11, the
identity of asymmetric, unidentifiable threats articulated not only to visual
images of Arab men, but also to repeated references to the face of terror. One
example was the title of the Visionics white paper discussed above, Protecting
Civilization from the Faces of Terror. In addition, the Washington Post published an
article on facial recognition headlined In the Face of Terror; Recognition Technology
Spreads Quickly (OHarrow 2001b, p. E1), and the Technology, Terrorism, and
Government Information Subcommittee of the US Senate Judiciary Committee held
a hearing on Biometric Identifiers and the Modern Face of Terror: New Technologies
in the Global War on Terrorism (2001). The headline of John Poindexters September
2003 New York Times op-ed piece, defending the politically unpopular Total
Information Awareness system, read Finding the Face of Terror in Data. The faces
of terror metaphor, while obviously used as a clever turn of phrase in order to
position facial recognition as a solution to airport security, cannot be dismissed as a
clever copy.7 Ostensibly referencing the individual faces of the 9/11 hijackers as
well as potential future terrorists, it also conjured up the idea of an amorphous,
racialized, and fetishized enemy Other that had penetrated both the
national territory and the national imagination. With images of the faces of
the hijackers and of Osama bin Laden circulating in the press, the faces of terror
metaphor invoked specific objects: mug-shots and grainy video images of Arab
men. It is not surprising or unusual that the facial images stood in for the individuals
themselves; we commonly understand the image of the face as a signifier for
individual identity. However, the idea that certain faces could be inherently faces of terror, that individuals embody terror or evil in their faces, could not help but invoke a paranoid discourse of racialized otherness. Such discourse
recuperated the famous eugenicist Francis Galton's pseudo-scientific research into
the typology of criminal faces, as well as the guiding principle of the eighteenth and
nineteenth-century science of physiognomy: that a person's true character could be
read from the features of the face, the window to the soul. Further, it is not difficult
to read a subtext of incubation and national contamination in the reference to
finding the face of terror in data, along with an implicit effort to posit facial
recognition and other new identification technologies as capable of purifying the
nation of its enemies within. In the fall of 1993, TIME published its now famous
cover depicting "The New Face of America": a computer-generated image of a woman's face, morphed together from the facial images of seven men and seven women of various ethnic and racial backgrounds.8 According to Lauren Berlant (1996), the composite face was cast as an imaginary solution to the problems of
immigration, multiculturalism, sexuality, gender, and (trans)national identity that
haunt the US present tense (p. 398). The morphed image was feminine,
conventionally pretty, light skinned, and non-threatening, preparing white America
for the new multicultural citizenship norm. The post-9/11 face of terror is a similar
sort of fetishized object, in reverse. It is an imaginary target for directed attention
and hatred, but one that is likewise aimed at preparing mainstream America for new
citizenship norms, norms that involve intensified state practices of surveillance and
identification. Like TIMEs fictitious multicultural citizen, the Modern Face of Terror
is a technologically constructed archetype, and one for which racial categories still
deeply matter despite the absence of overtly racist references. Where the former
allegedly represented progress toward an assimilated ideal, the latter deeply
negated those same ideals of integration. Skillfully glossing over the tension
between the individualizing and classifying logics of identification, "the tension between identity as the self-same, in an individualizing, subjective sense, and identity as sameness with another, in a classifying, objective sense" (Caplan 2001, p. 51), the xenophobic face of terror discourse helped to construct terrorism as a
problem with a specific technical solution: automated facial recognition.

This form of biopolitics leads to a militarized security state; this justifies state killings based on no more than racist prejudices
Giroux 6 (Henry Giroux, an American and Canadian scholar and cultural critic. One of the founding theorists of critical pedagogy in the United States, he is best known for his pioneering work in public pedagogy, Reading Hurricane Katrina: Race, Class, and the Biopolitics of Disposability, Summer 2006, http://cnqzu.com/library/Politics/Henry%20Giroux/reading%20hurricane%20katrinarace,%20class,%20and%20the%20biopolitics%20of%20disposability.pdf, pp. 176-177, accessed 6/25/15)//CS
In one of the most blatant displays of racism underscoring the biopolitical "live free
or die" agenda in Bush's America, the dominant media increasingly framed the
events that unfolded during and immediately after the hurricane by focusing on acts
of crime, looting, rape, and murder, allegedly perpetrated by the black residents of
New Orleans. In predictable fashion, politicians such as Louisiana Governor Kathleen
Blanco issued an order allowing soldiers to shoot to kill looters in an effort to restore
calm. Later inquiries revealed that almost all of these crimes did not take place. The
philosopher, Slavoj Zizek, argued that "what motivated these stories were not
facts, but racist prejudices, the satisfaction felt by those who would be able
to say: 'You see, Blacks really are like that, violent barbarians under the
thin layer of civilization!'" (2005). It must be noted that there is more at stake here than the resurgence of old-style racism; there is the recognition that some groups have the power to protect themselves from such stereotypes and others do not, and for those who do not, especially poor blacks, racist myths have a way of producing precise, if not deadly, material consequences. Given
the public's preoccupation with violence and safety, crime and terror merge in the
all too-familiar equation of black culture with the culture of criminality, and images
of poor blacks are made indistinguishable from images of crime and violence.
Criminalizing black behavior and relying on punitive measures to solve social
problems do more than legitimate a biopolitics defined increasingly by the
authority of an expanding national security state under George W. Bush. They also
legitimize a state in which the police and military, often operating behind closed
doors, take on public functions that are not subject to public scrutiny (Bleifuss 2005,
22). This becomes particularly dangerous in a democracy when paramilitary or
military organisations gain their legitimacy increasingly from an appeal to fear and
terror, prompted largely by the presence of those racialized and class-specific
groups considered both dangerous and disposable.

Biopolitical domination enables use of even the most benign technology for global slaughter; WWII proves
Dean, 04 professor of sociology at the University of Newcastle (Mitchell, Four
Theses on the Powers of Life and Death, Contretemps 5, December 2004,
http://sydney.edu.au/contretemps/5december2004/dean.pdf)//HK
For Foucault, at least in the History of Sexuality and related texts, modern powers
are more closely aligned to a bio-politics, a politics of life. This bio-politics emerges
in the eighteenth century with the concerns for the health, housing, habitation,
welfare and living conditions of the population. Such an observation leads him to
place his concerns with health, discipline, the body, and sexuality within a more
general horizon. Again the notion of bio-politics is quite complex. The idea of the
population as a kind of species body subject to bio-political knowledge and power
operating in concert with the individual body subject to disciplinary powers would
appear central.11 No matter how bloody things were under the exercise of
sovereign power with its atrocious crimes and retributions, it is only with the advent
of this modern form of the politics of life that the same logic and technology
applied to the care and development of human life is applied to the
destruction of entire populations. The link between social welfare and mass
slaughters can at times appear to be a fairly direct one. Of one of its first
manifestations in German police science, Foucault argues, it "wields its power over living beings as living beings, and its politics, therefore has to be a bio-politics. Since the population is nothing more than what the state takes care of for its own sake, of course, the state is entitled to slaughter it. So the reverse of bio-politics is thanato-politics."12 Despite such statements, there is a hesitation, a point of
indeterminacy, in this relation between bio-politics and thanato-politics. Foucault
seems to identify a puzzle or an aporia of contemporary politics, which he cannot
resolve or which may itself be irresolvable. "The coexistence in political structures of large destructive mechanisms and institutions oriented to the care of individual life is something puzzling," he states.13 But he immediately adds, "I don't mean that mass slaughters are the effect, the result, the logical consequence of our rationality, nor do I mean that the state has the obligation of taking care of individuals since it has the right to kill millions of people." After proceeding through this set of inconclusive negatives he avers, as if trying to defer the answer to the questions he poses: "It is this rationality, and the death and life game which takes place in it, that I'd like to investigate from a historical point of view." One aspect of this historical
investigation occurred in Foucaults 1976 lectures. These lectures cover such
concerns as the seventeenth-century historical-political narrative of the war of the
races, and the biological and social class re-inscriptions of racial discourse in the
nineteenth century.14 He concludes with the development of the biological state
racisms and the genocidal politics of the twentieth century, including a radical
analysis of the Nazi state and of socialism. From this perspective, there is a certain
potentiality within the human sciences which, when alloyed to notions such as race,
can help make intelligible the catastrophes
of the twentieth century. Such lectures seem to make the totalitarian rule of the
twentieth century a capstone on the histories of confinement, internment and
punishment that had made up his genealogical work. This thesis is perhaps close to
the work of the first generation of the Frankfurt School and a certain reading of Max
Weber. Here the one-sided development of rationality and application of reason to
man in the human sciences has the consequence of converting instrumental
rationality into forms of domination. Bio-politics in this reading is the application of
instrumental rationality to life. The dreadful outcomes of the twentieth century then
result from this kind of scientization and technologization of earlier notions of race.
There is also a similarity in this reading of Foucault and the work of Zygmunt
Bauman.15 The latter presents the Holocaust as something that must be
understood as endogenous to Western civilization and its processes of
rationalization rather than as an aberrant psychological, social or political pathology.

Scenario 2 is racism
FRT's lack of institutional checks exacerbates racial profiling
Volz 14 (Dustin, National Journal, June 24, 2014, "Privacy Groups Sound the Alarm Over FBI's Facial-Recognition Technology." JJZ http://www.nationaljournal.com/tech/privacy-groups-sound-the-alarm-over-fbi-sfacial-recognition-technology-20140624)
More than 30 privacy and civil-liberties groups are asking the Justice Department to
complete a long-promised audit of the FBI's facial-recognition database. The groups
argue the database, which the FBI says it uses to identify targets, could pose
privacy risks to every American citizen because it has not been properly vetted,
possesses dubious accuracy benchmarks, and may sweep up images of ordinary
people not suspected of wrongdoing. In a joint letter sent Tuesday to Attorney
General Eric Holder, the American Civil Liberties Union, the Electronic Frontier
Foundation, and others warn that an FBI facial-recognition program "has undergone
a radical transformation" since its last privacy review six years ago. That lack of

recent oversight "raises serious privacy and civil-liberty concerns," the groups
contend. "The capacity of the FBI to collect and retain information, even on innocent
Americans, has grown exponentially," the letter reads. "It is essential for the
American public to have a complete picture of all the programs and authorities the
FBI uses to track our daily lives, and an understanding of how those programs affect
our civil rights and civil liberties." The Next Generation Identification program, a biometric database that includes iris scans and palm prints along with facial recognition, is scheduled to become fully operational later this year and has not undergone a rigorous privacy litmus test, known as a Privacy Impact Assessment, since 2008, despite pledges from government officials. "One of the risks here,
without assessing the privacy considerations, is the prospect of mission creep with
the use of biometric identifiers," said Jeramie Scott, national security counsel with
the Electronic Privacy Information Center, another of the letter's signatories. "It's
been almost two years since the FBI said they were going to do an updated privacy
assessment, and nothing has occurred." The facial-recognition component of the
database, however, is what privacy advocates find most alarming. The FBI projects
that by 2015 the facial-recognition database could catalog up to 52 million face
photos. A substantial portion of those, about 4.3 million, are expected to be
gleaned from noncriminal photography, such as employer background checks,
according to privacy groups. But earlier this month, FBI Director James Comey told
Congress the database would not collect and store photos of average civilians and is
intended to "find bad guys by matching pictures to mugshots." But privacy hawks
remain concerned that images may be shared among the FBI and other agencies,
such as the Defense Department and National Security Agency, and even state
motor-vehicle departments. Comey, during his testimony, did
not completely refute the suggestion that photos would be shared with states.
"There are some circumstances in which when states send us records, they'll send
us pictures of people who are getting special driving licenses to transport children
or explosive materials or something," Comey said. "But as I understand it, those are
not part of the searchable Next Generation Identification database." Currently, no
federal laws limit the use of facial-recognition software, either by the private sector
or the government. A 2010 government report made public last year through a
Freedom of Information Act request filed by the Electronic Privacy Information
Center stated that the agency's facial-recognition technology could fail up to 20
percent of the time. When used against a searchable repository, that failure rate
could be as high as 15 percent. But even those numbers are misleading, privacy
groups contend, because a search can be considered a success if the correct
suspect is listed within the top 50 candidates. Such an "overwhelming number" of
false matches could lead to "greater racial profiling by law enforcement by shifting
the burden of identification onto certain ethnicities." Facial-recognition technology
has recently endured heightened scrutiny from the anti-government-surveillance
crowd for its potential as an invasive means of tracking. Last month, documents
supplied by Edward Snowden to The New York Times revealed that the National
Security Agency intercepts "millions of images per day" as part of a program
officials believe could fundamentally revolutionize the way government spies on
intelligence targets across the globe. That daily cache includes about 55,000 "facial-recognition quality images," which the NSA considers possibly more important to its
mission than the surveillance of more traditional communications. When asked for
comment, the Justice Department would only say it was reviewing the letter.

FRT's model of surveillance creates conditions of trauma through constant policing; this reduces accountability for those that craft these policies.
Brown 14 Associate Professor of Law, University of Baltimore School of Law.
B.A., Cornell; J.D., University of Michigan (Kimberly N, ANONYMITY, FACEPRINTS,
AND THE CONSTITUTION, GEO. MASON L. REV. Vol. 21:2, 2014,
http://georgemasonlawreview.org/wp-content/uploads/2014/03/Brown-Website.pdf)
NAR
In the big data universe, FRT enables users to trace a passerby's life in real time (past, present, and future) through the relation of faceprint algorithms with other data points. Although the American public's reaction to the NSA's Prism program was relatively muted,203 most people understand the awkward feeling of being stared at on a bus.204 Constant surveillance by the government is more pernicious. It discovers a person's identity and then augments that information based on intelligence that today's technology renders limitless. The loss of anonymity that
results from the detailed construction of a persons identity through ongoing
monitoring can lead to at least three categories of harm, discussed below. First, as
the panopticon suggests, ongoing identification and tracking can adversely
influence behavior. People involuntarily experience self-censorship and inhibition
in response to the feeling of being watched.205 They might think twice, for
example, before visiting websites of extreme sports or watching sitcoms glorifying
couch potatoes if they felt this might result in higher insurance premiums.206 The
social norms that develop around expectations of surveillance can, in turn, become
a tool for controlling others. To be sure, social control is beneficial for the deterrence
of crime and management of other negative behavior. Too much social control,
however, can adversely impact freedom, creativity, and self-development.207 Professor Jeffrey Rosen has explained that the pressures of having one's private
scandals outed can push people toward socially influenced courses of action that,
without public disclosure and discussion, would never happen.208 They are less
willing to voice controversial ideas or associate with fringe groups for fear of bias or
reprisal.209 With the sharing of images online, the public contributes to what
commentators have called "sousveillance" or a reverse Panopticon effect,
whereby the watched become the watchers.210 Second, dragnet-style monitoring
can cause emotional harm. Living with constant monitoring is stressful, inhibiting
the subject's ability to relax and negatively affecting social relationships.211 When disclosures involve particularly sensitive physical or emotional characteristics that are normally concealed, such as "[g]rief, suffering, trauma, injury, nudity, sex, urination, and defecation," a person's dignity and self-esteem is affected, and incivility toward that person increases.212 Third, constant surveillance through

modern technologies reduces accountability for those who use the data to make
decisions that affect the people they are monitoring.213 The collection of images
for FRT applications is indiscriminate, with no basis for suspecting a particular subject of wrongdoing. It allows users to cluster disparate bits of information together from one or more random, unidentified images such that "[t]he whole becomes greater than the parts."214 The individuals whose images are captured do
not know how their data is being used and have no ability to control the
manipulation of their faceprints, even though the connections that are made reveal
new facts that the subjects did not knowingly disclose. The party doing the
aggregating gains a powerful tool for forming and disseminating personal
judgments that render the subject vulnerable to public humiliation and other
tangible harms, including criminal investigation.215 Incorrect surveillance
information can lead to lost job opportunities, intense scrutiny at airports, false
arrest, and denials of public benefits.216 In turn, a lack of transparency,
accountability, and public participation in and around surveillance activities fosters
distrust in government. The recent scandal and fractured diplomatic relations over
NSA surveillance of U.S. allies is a case in point.217 Perhaps most troubling, FRT
enhances users' capacity to identify and track individuals' propensity to take
particular actions,218 which stands in tension with the common law presumption of
innocence embodied in the Due Process Clause of the Fifth and Fourteenth
Amendments.219 As described below, prevailing constitutional doctrine does not
account for the use of technology to identify, track, and predict the behavior of a
subject using an anonymous public image and big data correlations.

Racist dichotomies grant the state power to exterminate; this is the root of all war
Mendieta 2 (Eduardo Mendieta, PhD, Associate Professor, Stony Brook School of Philosophy, "To Make Live and to Let Die: Foucault on Racism," Meeting of the Foucault Circle, APA Central Division Meeting, http://www.stonybrook.edu/commcms/philosophy/people/faculty_pages/docs/foucault.pdf)

This is where racism intervenes, not from without, exogenously, but from within,
constitutively. For the emergence of biopower as the form of a new form of political
rationality, entails the inscription within the very logic of the modern state the logic
of racism. For racism grants, and here I am quoting: the conditions for the
acceptability of putting to death in a society of normalization. Where there is a
society of normalization, where there is a power that is, in all of its surface and in
first instance, and first line, a bio-power, racism is indispensable as a condition to
be able to put to death someone, in order to be able to put to death others. The
homicidal [meurtrire] function of the state, to the degree that the state functions
on the modality of bio-power, can only be assured by racism (Foucault 1997, 227)
To use the formulations from his 1982 lecture The Political Technology of
Individuals which incidentally, echo his 1979 Tanner Lectures the power of the
state after the 18th century, a power which is enacted through the police, and is

enacted over the population, is a power over living beings, and as such it is a
biopolitics. And, to quote more directly, "since the population is nothing more than what the state takes care of for its own sake, of course, the state is entitled to slaughter it, if necessary. So the reverse of biopolitics is thanatopolitics" (Foucault 2000, 416). Racism is the thanatopolitics of the biopolitics of the total state. They
are two sides of one same political technology, one same political rationality: the
management of life, the life of a population, the tending to the continuum of life of
a people. And with the inscription of racism within the state of biopower, the long
history of war that Foucault has been telling in these dazzling lectures has made a
new turn: the war of peoples, a war against invaders, imperials colonizers, which
turned into a war of races, to then turn into a war of classes, has now turned into
the war of a race, a biological unit, against its polluters and threats. Racism is the
means by which bourgeois political power, biopower, re-kindles the fires of war
within civil society. Racism normalizes and medicalizes war. Racism makes war
the permanent condition of society, while at the same time masking its
weapons of death and torture. As I wrote somewhere else, racism banalizes
genocide by making quotidian the lynching of suspect threats to the health of the
social body. Racism makes the killing of the other, of others, an everyday
occurrence by internalizing and normalizing the war of society against its enemies.
To protect society entails we be ready to kill its threats, its foes, and if we
understand society as a unity of life, as a continuum of the living, then these threat
and foes are biological in nature.

Structural violence is the prerequisite for war. The 1AC is the starting point of a social movement which spills over against other forms of oppression
Szentes 8 [Tamas Szentes, Professor Emeritus at the Corvinus University of
Budapest and a member of the Hungarian Academy of Sciences, Globalization and
prospects of the world society,
http://www.eadi.org/fileadmin/Documents/Events/exco/Glob.___prospects_-_jav..pdf ]
It's a commonplace that human society can survive and develop only in a lasting real peace. Without peace countries cannot develop. Although since 1945 there has been no world war:
- Numerous local wars took place
- Terrorism has spread all over the world, undermining security even in the most developed and powerful countries
- Arms race and militarization have not ended with the collapse of the Soviet bloc, but escalated and continued, extending also to weapons of mass destruction and misusing enormous resources badly needed for development
- Many invisible wars are suffered by the poor and oppressed people, manifested in mass misery, poverty, unemployment, homelessness, starvation and malnutrition, epidemics and poor health conditions, exploitation and oppression, racial and other discrimination, physical terror, organized injustice, disguised forms of violence, the denial or regular infringement of the democratic rights of citizens, women, youth, ethnic or religious minorities, etc., and last but not least, in the degradation of human environment, which means that
- The war against nature, i.e. the disturbance of ecological balance, wasteful management of natural resources, and large-scale pollution of our environment, is still going on, causing also losses and fatal dangers for human life.
Behind global terrorism and invisible wars we find striking
international and intra-society inequities and distorted development patterns, which
tend to generate social as well as international tensions, thus pacing the way for
unrest and visible wars. It is a commonplace now that peace is not merely the absence of war. The prerequisites of a lasting peace between and within societies involve not only (though, of course, necessarily) demilitarization, but also a
systematic and gradual elimination of the roots of violence, of the causes of
invisible wars, of the structural and institutional cases of large-scale international
and intra-society inequalities, exploitation and oppression. Peace requires a process
of social and national emancipation, a progressive, democratic transformation of
societies and the world bringing about equal rights and opportunities for all people,
sovereign participation and mutually advantageous co-operation among nations. It
further requires a pluralistic democracy on global level with an appropriate system
of proportional representation of the world society, articulation of diverse interests
and their peaceful reconciliation, by non-violent conflict management, and thus also
a global governance with a really global institutional system. Under the
contemporary conditions of accelerating globalization and deepening global
interdependencies in our world, peace is indivisible in both time and space. It
cannot exist if reduced to a period only after or before war, and cannot be
safeguarded in one part of the world when some others suffer visible or invisible
wars. Thus, peace requires, indeed, a new, demilitarized and democratic world
order, which can provide equal opportunities for sustainable development.
[Continues] The causes of inequalities on local, national, regional and world levels
are often interlinked. Dominance and exploitation relations go across country
boundaries, oppressors are supporting each other and oppressing other oppressors.
Societies that exploit others can hardly stay free of exploitation, themselves.
Nations that hinder others in democratic transformation can hardly live in democracy. Monopolies induce also others to monopolize. Narrow, selfish interests generate narrow, selfish interests. Discrimination gives birth to discrimination. And so on.

Solvency
The US government must preserve the right to anonymity
Chayka 14
Kyle Chayka, 3 October 2014, The Guardian, "The facial recognition databases are coming. Why aren't the privacy laws?"
http://www.theguardian.com/commentisfree/2014/apr/30/facial-recognition-databases-privacy-laws
Online dating is kind of like going on a shopping trip. But instead of looking at pairs of shoes, we're perusing people, glancing over their photos and profiles in an effort to gauge how
interested we might be in them. So why shouldn't we be warned, like a grocery-store expiration date, when one is rotten? Such is the intention of CreepShield, a new web-browser extension that uses facial recognition technology to allow users to scan the faces they see on social networking websites (Facebook, eHarmony, OKCupid, even Grindr) and see if the faces match any public records in databases of sex offenders. The app seems somewhat useful. Unless, of course, you're mistakenly identified as a sex offender. When I uploaded my own photo after writing a recent Newsweek cover story on biometric surveillance, the CreepShield search engine showed results that were less than 50% sure I was a match, though
there were some people in the database who looked eerily similar to me. Thankfully, my data-filled profile photo didn't have a match, but actual sex offenders "have no right to privacy",
CreepShield's founder told me. This raises the question: will the rest of us have the right to our own faces when they get stored in search engines of the future? The US government is
currently building the largest biometrics database in the world with Next Generation Identification, a system meant to help identify criminals. The FBI estimates that it will store over 50m
face images by 2015, according to documents obtained by the Electronic Frontier Foundation. Facial recognition technology has plenty of practical applications. Germany is beginning to
use biometric data to scan individuals at border crossings, and Facebook even collects face patterns to suggest who should be tagged in photos. The technology is contributing to what
will become a $20bn market by 2020, according to the Secure Identity Biometrics Association (Siba). Companies including Animetrics and Cognitec are selling their technology to
startups like CreepShield as well as to police and military, with success rates of over 98% for facial matching. From a clear face image, ethnicity can be identified with an error rate of
13% and gender with an error rate of 3%. Unfortunately, United States law has not caught up with the technology's expansion. And as facial recognition becomes more present in
everyday life, we are going to need new regulations protecting the anonymity of our faces, just as we are protecting our cellphones and, hopefully, the metadata therein. If we don't, we
will lose our ability to be anonymous and even when we're talking about identifying sex offenders, retaining some measure of anonymity is important. Would you really want to cast a
controversial vote or publicly protest in a world where your peers or the cops could track down your cheekbone pattern in seconds? With Facebook's burgeoning databases as well as the
FBI's s Next Generation Identification system, it's now easier than ever to get access to a photo of a person's face and turn it into a kind of fingerprint on steroids, without them knowing.

We need to find a way to preserve our anonymity, and fast. Fingerprints and DNA data are protected under US Supreme Court law, providing a possible precedent for face-prints. If a fingerprint or DNA test is collected without due cause, it can't be used in court as evidence; it constitutes an
unreasonable search and seizure, outlawed by the Fourth Amendment. The Supreme Court is just this week embroiled in debate over whether or not search and seizure of social media
and cellphone data should require a warrant. While we grapple with today's dominant technologies, we should also be looking forward to tomorrow's, regulating the Fourth Amendment's

Laws should allow


us to control which businesses and government entities have access to our faces
and when. Individuals might opt in to facial recognition for interactive marketing
campaigns or to be tracked in a retail store, but choose to be left out of
unwarranted public government surveillance. A face-print should fall under the
same category as a fingerprint and be controlled just as stringently . The US must
take this opportunity to give citizens a right to their own face-prints and an ability to
opt out of any facial recognition not explicitly connected to active criminal
investigation. This will regulate the technology's application before it's too late to prevent a situation like a wrongful conviction because of a facial recognition mistake.
application to futuristic technologies like CreepShield and Google Glass, which has banned facial recognition for now but might not forever.

Otherwise, the future gets dystopian quickly. The door opens to a version of CreepShield that runs on gossip or Yelp-like reviews of people instead of a sex-offender database. (That random guy you see in the bar? Forget Lulu; try "see you later".) Indeed, a world without facial-recognition laws is a world without strangers. Not being able to lie about height on OKCupid is the least of our worries.

Global politics, alliances, and economic interdependence mean there is no risk of great power war
Robb 12, Doug; Lieutenant, US Navy; M.A. "Now Hear This: Why the Age of Great Power War Is Over." Proceedings Magazine, May 2012, Vol. 138. http://www.usni.org/magazines/proceedings/2012-05/now-hear-why-agegreat-power-war-over
The 20th century brought seismic shifts as the global political system transitioned from being multipolar during the first 40 years to bipolar during the Cold War before emerging as the American-led, unipolar international order we know today. These changes notwithstanding, major world powers have been at peace for nearly seven decades, the longest such period since the 1648 Treaty of Westphalia codified the sovereign nation-state. Whereas in years past nations allied with their neighbors in ephemeral bonds of convenience, today's global politics are tempered by permanent international organizations, regional military alliances, and formal economic partnerships. Thanks in large part to the prevalence of liberal democracies, these groups are able to moderate international disputes and provide forums for nations to air grievances, assuage security concerns, and negotiate settlements, thereby making war a distant (and distasteful) option. As a result, China (and any other global power) has much to lose by flouting international opinion, as evidenced by its advocacy of the recent Syrian uprising, which has drawn widespread condemnation. In addition to geopolitical and diplomacy issues, globalization continues to transform the world. This interdependence has blurred the lines between economic security and physical security. Increasingly, great-power interests demand cooperation rather than conflict. To that end, maritime nations such as the United States and China desire open sea lines of communication and protected trade routes, a common security challenge that could bring these powers together, rather than drive them apart (witness China's response to the issue of piracy in its backyard). Facing these security tasks cooperatively is both mutually advantageous and common sense. Democratic Peace Theory, championed by Thomas Paine and international relations theorists such as New York Times columnist Thomas Friedman, presumes that great-power war will likely occur between a democratic and non-democratic state. However, as information flows freely and people find outlets for and access to new ideas, authoritarian leaders will find it harder to cultivate popular support for total war, an argument advanced by philosopher Immanuel Kant in his 1795 essay "Perpetual Peace." Consider, for example, China's unceasing attempts to control Internet access. The 2011 Arab Spring demonstrated that organized opposition to unpopular despotic rule has begun to reshape the political order, a change galvanized largely by social media. Moreover, few would argue that China today is not socially more liberal, economically more capitalistic, and governmentally more inclusive than during Mao Tse-tung's regime. As these trends continue, nations will find large-scale conflict increasingly disagreeable. In terms of the military, ongoing fiscal constraints and socio-economic problems likely will marginalize defense issues. All the more reason why great powers will find it mutually beneficial to work together to find solutions to common security problems, such as countering drug smuggling, piracy, climate change, human trafficking, and terrorism, missions that Admiral Robert F. Willard, former Commander, U.S. Pacific Command, called "deterrence and reassurance."

There is no nuclear risk from great powers, rogue states, or terrorism; low risk should be treated as no risk
Mueller 10, John Mueller is a professor of political science at Ohio State University. "Calming Our Nuclear Jitters," Issues in Science and Technology, Winter 2010, http://issues.org/26-2/mueller/
An exaggerated fear of nuclear weapons has led to many wrongheaded policy decisions. A more sober assessment is needed. The fearsome destructive power of nuclear weapons provokes understandable dread, but in crafting public policy we must move beyond this initial reaction to soberly assess the risks and consider appropriate actions. Out of awe over and anxiety about nuclear weapons, the world's superpowers accumulated enormous arsenals of them for nearly 50 years. But then, in the wake of the Cold War, fears that the bombs would be used vanished almost entirely. At the same time, concerns that terrorists and rogue nations could acquire nuclear weapons have sparked a new surge of fear and speculation. In the past, excessive fear about nuclear weapons led to many policies that turned out to be wasteful and unnecessary. We should take the time to assess these new risks to avoid an overreaction that will take resources and attention away from other problems. Indeed, a more thoughtful analysis will reveal that the new perceived danger is far less likely than it might at first appear. Albert Einstein memorably proclaimed that nuclear weapons "have changed everything except our way of thinking." But the weapons actually seem to have changed little except our way of thinking, as well as our ways of declaiming, gesticulating, deploying military forces, and spending lots of money. To begin with, the bomb's impact on substantive historical developments has turned out to be minimal. Nuclear weapons are, of course, routinely given credit for preventing or deterring a major war during the Cold War era. However, it is increasingly clear that the Soviet Union never had the slightest interest in engaging in any kind of conflict that would remotely resemble World War II, whether nuclear or not. Its agenda emphasized revolution, class rebellion, and civil war, conflict areas in which nuclear weapons are irrelevant. Thus, there was no threat of direct military aggression to deter. Moreover, the possessors of nuclear weapons have never been able to find much military reason to use them, even in principle, in actual armed conflicts. Although they may have failed to alter substantive history, nuclear weapons have inspired legions of strategists to spend whole careers agonizing over what one analyst has called "nuclear metaphysics," arguing, for example, over how many MIRVs (multiple independently targetable reentry vehicles) could dance on the head of an ICBM (intercontinental ballistic missile). The result was a colossal expenditure of funds. Most important for current policy is the fact that, contrary to decades of hand-wringing about the inherent appeal of nuclear weapons, most countries have actually found them to be a substantial and even ridiculous misdirection of funds, effort, and scientific talent. This is a major if much-underappreciated reason why nuclear proliferation has been so much slower than predicted over the decades. In addition, the proliferation that has taken place has been substantially inconsequential. When the quintessential rogue state, Communist China, obtained nuclear weapons in 1964, Central Intelligence Agency Director John McCone sternly proclaimed that nuclear war was "almost inevitable." But far from engaging in the nuclear blackmail expected at the time by almost everyone, China built its weapons quietly and has never made a real nuclear threat. Despite this experience, proliferation anxiety continues to flourish. For more than a decade, U.S. policymakers obsessed about the possibility that Saddam Hussein's pathetic and technologically dysfunctional regime in Iraq could in time obtain nuclear weapons, even though it took the far more advanced Pakistan 28 years. To prevent this imagined and highly unlikely calamity, damaging and destructive economic sanctions were imposed and then a war was waged, and each venture has probably resulted in more deaths than were suffered at Hiroshima and Nagasaki combined. (At Hiroshima and Nagasaki, about 67,000 people died immediately and 36,000 more died over the next four months. Most estimates of the Iraq war have put total deaths there at about the Hiroshima-Nagasaki levels, or higher.) Today, alarm is focused on the even more pathetic regime in North Korea, which has now tested a couple of atomic devices that seem to have been fizzles. There is even more hysteria about Iran, which has repeatedly insisted it has no intention of developing weapons. If that regime changes its mind or is lying, experience suggests it is likely to find that, except for stoking the national ego for a while, the bombs are substantially valueless and a very considerable waste of money and effort. Politicians of all stripes preach to an anxious, appreciative, and very numerous choir when they, like President Obama, proclaim atomic terrorism to be "the most immediate and extreme threat to global security." It is the problem that, according to Defense Secretary Robert Gates, currently keeps every senior leader awake at night. This is hardly a new anxiety. In 1946, atomic bomb maker J. Robert Oppenheimer ominously warned that if three or four men could smuggle in units for an atomic bomb, they could "blow up New York." This was an early expression of a pattern of dramatic risk inflation that has persisted throughout the nuclear age. In fact, although expanding fires and fallout might increase the effective destructive radius, the blast of a Hiroshima-size device would "blow up" about 1% of the city's area, a tragedy, of course, but not the same as one 100 times greater. In the early 1970s, nuclear physicist Theodore Taylor proclaimed the atomic terrorist problem to be "immediate," explaining at length how comparatively easy it would be to steal nuclear material and step by step make it into a bomb. At the time he thought it was already too late to "prevent the making of a few bombs, here and there, now and then," or "in another ten or fifteen years, it will be too late." Three decades after Taylor, we continue to wait for terrorists to carry out their "easy" task. In contrast to these predictions, terrorist groups seem to have exhibited only limited desire and even less progress in going atomic. This may be because, after brief exploration of the possible routes, they, unlike generations of alarmists, have discovered that the tremendous effort required is scarcely likely to be successful. The most plausible route for terrorists, according to most experts, would be to manufacture an atomic device themselves from purloined fissile material (plutonium or, more likely, highly enriched uranium). This task, however, remains a daunting one, requiring that a considerable series of difficult hurdles be conquered and in sequence. Outright armed theft of fissile material is exceedingly unlikely not only because of the resistance of guards, but because chase would be immediate. A more promising approach would be to corrupt insiders to smuggle out the required substances. However, this requires the terrorists to pay off a host of greedy confederates, including brokers and money-transmitters, any one of whom could turn on them or, either out of guile or incompetence, furnish them with stuff that is useless. Insiders might also consider the possibility that once the heist was accomplished, the terrorists would, as analyst Brian Jenkins none too delicately puts it, "have every incentive to cover their trail, beginning with eliminating their confederates." If terrorists were somehow successful at obtaining a sufficient mass of relevant material, they would then probably have to transport it a long distance over unfamiliar terrain and probably while being pursued by security forces.

Crossing international borders would be facilitated by following established smuggling routes, but these are not as chaotic as they appear and are often under the watch of suspicious and careful criminal regulators. If border personnel became suspicious of the commodity being smuggled, some of them might find it in their interest to disrupt passage, perhaps to collect the bounteous reward money that would probably be offered by alarmed governments once the uranium theft had been discovered. Once outside the country with their precious booty, terrorists would need to set up a large and well-equipped machine shop to manufacture a bomb and then to populate it with a very select team of highly skilled scientists, technicians, machinists, and administrators. The group would have to be assembled and retained for the monumental task while no consequential suspicions were generated among friends, family, and police about their curious and sudden absence from normal pursuits back home. Members of the bomb-building team would also have to be utterly devoted to the cause, of course, and they would have to be willing to put their lives and certainly their careers at high risk, because after their bomb was discovered or exploded they would probably become the targets of an intense worldwide dragnet operation. Some observers have insisted that it would be easy for terrorists to assemble a crude bomb if they could get enough fissile material. But Christoph Wirz and Emmanuel Egger, two senior physicists in charge of nuclear issues at Switzerland's Spiez Laboratory, bluntly conclude that the task "could hardly be accomplished by a subnational group." They point out that precise blueprints are required, not just sketches and general ideas, and that even with a good blueprint the terrorist group would most certainly be forced to redesign. They also stress that the work is difficult, dangerous, and extremely exacting, and that the technical requirements in several fields verge on the unfeasible. Stephen Younger, former director of nuclear weapons research at Los Alamos Laboratories, has made a similar argument, pointing out that uranium is "exceptionally difficult to machine" whereas "plutonium is one of the most complex metals ever discovered, a material whose basic properties are sensitive to exactly how it is processed." Stressing the daunting problems associated with material purity, machining, and a host of other issues, Younger concludes that to think a terrorist group, working in isolation with an unreliable supply of electricity and little access to tools and supplies, could fabricate a bomb "is farfetched at best." Under the best circumstances, the process of making a bomb could take months or even a year or more, which would, of course, have to be carried out in utter secrecy. In addition, people in the area, including criminals, may observe with increasing curiosity and puzzlement the constant coming and going of technicians unlikely to be locals. If the effort to build a bomb was successful, the finished product, weighing a ton or more, would then have to be transported to and smuggled into the relevant target country, where it would have to be received by collaborators who are at once totally dedicated and technically proficient at handling, maintaining, detonating, and perhaps assembling the weapon after it arrives. The financial costs of this extensive and extended operation could easily become monumental. There would be expensive equipment to buy, smuggle, and set up, and people to pay or pay off. Some operatives might work for free out of utter dedication to the cause, but the vast conspiracy also requires the subversion of a considerable array of criminals and opportunists, each of whom has every incentive to push the price for cooperation as high as possible. Any criminals competent and capable enough to be effective allies are also likely to be both smart enough to see boundless opportunities for extortion and psychologically equipped by their profession to be willing to exploit them. Those who warn about the likelihood of a terrorist bomb contend that a terrorist group could, if with great difficulty, overcome each obstacle and that doing so in each case is not impossible. But although it may not be impossible to surmount each individual step, the likelihood that a group could surmount a series of them quickly becomes vanishingly small. Table 1 attempts to catalogue the barriers that must be overcome under the scenario considered most likely to be successful. In contemplating the task before them, would-be atomic terrorists would effectively be required to go through an exercise that looks much like this. If and when they do, they will undoubtedly conclude that their prospects are daunting and accordingly uninspiring or even terminally dispiriting. It is possible to calculate the chances for success. Adopting probability estimates that purposely and heavily bias the case in the terrorists' favor (for example, assuming the terrorists have a 50% chance of overcoming each of the 20 obstacles), the chances that a concerted effort would be successful come out to be less than one in a million. If one assumes, somewhat more realistically, that their chances at each barrier are one in three, the cumulative odds that they will be able to pull off the deed drop to one in well over three billion. Other routes would-be terrorists might take to acquire a bomb are even more problematic. They are unlikely to be given or sold a bomb by a generous like-minded nuclear state for delivery abroad because the risk would be high, even for a country led by extremists, that the bomb (and its source) would be discovered even before delivery or that it would be exploded in a manner and on a target the donor would not approve, including on the donor itself. Another concern would be that the terrorist group might be infiltrated by foreign intelligence. The terrorist group might also seek to steal or illicitly purchase a "loose nuke" somewhere. However, it seems probable that none exist. All governments have an intense interest in controlling any weapons on their territory because of fears that they might become the primary target. Moreover, as technology has developed, finished bombs have been outfitted with devices that trigger a non-nuclear explosion that destroys the bomb if it is tampered with. And there are other security techniques: bombs can be kept disassembled with the component parts stored in separate high-security vaults, and a process can be set up in which two people and multiple codes are required not only to use the bomb but to store, maintain, and deploy it. As Younger points out, "only a few people in the world have the knowledge to cause an unauthorized detonation of a nuclear weapon." There could be dangers in the chaos that would emerge if a nuclear state were to utterly collapse; Pakistan is frequently cited in this context and sometimes North Korea as well. However, even under such conditions, nuclear weapons would probably remain under heavy guard by people who know that a purloined bomb might be used in their own territory. They would still have locks and, in the case of Pakistan, the weapons would be disassembled.

The al Qaeda factor. The degree to which al Qaeda, the only terrorist group that seems to want to target the United States, has pursued or even has much interest in a nuclear weapon may have been exaggerated. The 9/11 Commission stated that "al Qaeda has tried to acquire or make nuclear weapons for at least ten years," but the only substantial evidence it supplies comes from an episode that is supposed to have taken place about 1993 in Sudan, when al Qaeda members may have sought to purchase some uranium that turned out to be bogus. Information about this supposed venture apparently comes entirely from Jamal al Fadl, who defected from al Qaeda in 1996 after being caught stealing $110,000 from the organization. Others, including the man who allegedly purchased the uranium, assert that although there were various other scams taking place at the time that may have served as grist for Fadl, the uranium episode never happened. As a key indication of al Qaeda's desire to obtain atomic weapons, many have focused on a set of conversations in Afghanistan in August 2001 that two Pakistani nuclear scientists reportedly had with Osama bin Laden and three other al Qaeda officials. Pakistani intelligence officers characterize the discussions as "academic" in nature. It seems that the discussion was wide-ranging and rudimentary and that the scientists provided no material or specific plans. Moreover, the scientists probably were incapable of providing truly helpful information because their expertise was not in bomb design but in the processing of fissile material, which is almost certainly beyond the capacities of a nonstate group. Khalid Sheikh Mohammed, the apparent planner of the 9/11 attacks, reportedly says that al Qaeda's bomb efforts never went beyond searching the Internet. After the fall of the Taliban in 2001, technical experts from the CIA and the Department of Energy examined documents and other information that were uncovered by intelligence agencies and the media in Afghanistan. They uncovered no credible information that al Qaeda had obtained fissile material or acquired a nuclear weapon. Moreover, they found no evidence of any radioactive material suitable for weapons. They did uncover, however, a "nuclear-related" document discussing openly available concepts about the nuclear fuel cycle and some weapons-related issues. Just a day or two before al Qaeda was to flee from Afghanistan in 2001, bin Laden supposedly told a Pakistani journalist, "If the United States uses chemical or nuclear weapons against us, we might respond with chemical and nuclear weapons. We possess these weapons as a deterrent." Given the military pressure that they were then under and taking into account the evidence of the primitive or more probably nonexistent nature of al Qaeda's nuclear program, the reported assertions, although unsettling, appear at best to be a desperate bluff. Bin Laden has made statements about nuclear weapons a few other times. Some of these pronouncements can be seen to be threatening, but they are rather coy and indirect, indicating perhaps something of an interest, but not acknowledging a capability. And as terrorism specialist Louise Richardson observes, "Statements claiming a right to possess nuclear weapons have been misinterpreted as expressing a determination to use them. This in turn has fed the exaggeration of the threat we face." Norwegian researcher Anne Stenersen concluded after an exhaustive study of available materials that, although "it is likely that al Qaeda central has considered the option of using non-conventional weapons," there is "little evidence that such ideas ever developed into actual plans, or that they were given any kind of priority at the expense of more traditional types of terrorist attacks." She also notes that information on an al Qaeda computer left behind in Afghanistan in 2001 indicates that only $2,000 to $4,000 was earmarked for weapons of mass destruction research and that the money was mainly for very crude work on chemical weapons. Today, the key portions of al Qaeda central may well total only a few hundred people, apparently assisting the Taliban's distinctly separate, far larger, and very troublesome insurgency in Afghanistan. Beyond this tiny band, there are thousands of sympathizers and would-be jihadists spread around the globe. They mainly connect in Internet chat rooms, engage in radicalizing conversations, and variously dare each other to actually do something. Any threat, particularly to the West, appears, then, principally to derive from self-selected people, often isolated from each other, who fantasize about performing dire deeds. From time to time some of these people, or ones closer to al Qaeda central, actually manage to do some harm. And occasionally, they may even be able to pull off something large, such as 9/11. But in most cases, their capacities and schemes, or alleged schemes, seem to be far less dangerous than initial press reports vividly, even hysterically, suggest. Most important for present purposes, however, is that any notion that al Qaeda has the capacity to acquire nuclear weapons, even if it wanted to, looks farfetched in the extreme. It is also noteworthy that, although there have been plenty of terrorist attacks in the world since 2001, all have relied on conventional destructive methods. For the most part, terrorists seem to be heeding the advice found in a memo on an al Qaeda laptop seized in Pakistan in 2004: "Make use of that which is available rather than waste valuable time becoming despondent over that which is not within your reach." In fact, history consistently demonstrates that terrorists prefer weapons that they know and understand, not new, exotic ones. Glenn Carle, a 23-year CIA veteran and once its deputy intelligence officer for transnational threats, warns, "We must not take fright at the specter our leaders have exaggerated. In fact, we must see jihadists for the small, lethal, disjointed, and miserable opponents that they are." Al Qaeda, he says, has only a handful of individuals capable of planning, organizing, and leading a terrorist organization, and although the group has threatened attacks with nuclear weapons, "its capabilities are far inferior to its desires."

Policy alternatives. The purpose here has not been to argue that policies designed to inconvenience the atomic terrorist are necessarily unneeded or unwise. Rather, in contrast with the many who insist that atomic terrorism under current conditions is rather likely, indeed exceedingly likely, to come about, I have contended that it is hugely unlikely. However, it is important to consider not only the likelihood that an event will take place, but also its consequences.

Therefore, one must be concerned about catastrophic events even if their probability is small, and efforts to reduce that likelihood even further may well be justified. At some point, however, probabilities become so low that, even for catastrophic events, it may make sense to ignore them or at least put them on the back burner; in short, the risk becomes acceptable. For example, the British could at any time attack the United States with their submarine-launched missiles and kill millions of Americans, far more than even the most monumentally gifted and lucky terrorist group. Yet the risk that this potential calamity might take place evokes little concern; essentially it is an acceptable risk. Meanwhile, Russia, with whom the United States has a rather strained relationship, could at any time do vastly more damage with its nuclear weapons, a fully imaginable calamity that is substantially ignored. In constructing what he calls "a case for fear," Cass Sunstein, a scholar and current Obama administration official, has pointed out that if there is a yearly probability of 1 in 100,000 that terrorists could launch a nuclear or massive biological attack, the risk would cumulate to 1 in 10,000 over 10 years and to 1 in 5,000 over 20. These odds, he suggests, are "not the most comforting." Comfort, of course, lies in the viscera of those to be comforted, and, as he suggests, many would probably have difficulty settling down with odds like that. But there must be some point at which the concerns even of these people would ease. Just perhaps it is at one of the levels suggested above: one in a million or one in three billion per attempt. As for that other central policy concern, nuclear proliferation, it seems to me that policymakers should maintain their composure. The pathetic North Korean regime mostly seems to be engaged in a process of extracting aid and recognition from outside. A viable policy toward it might be to reduce the threat level and to wait while continuing to be extorted, rather than to carry out policies that increase the already intense misery of the North Korean people. If the Iranians do break their pledge not to develop nuclear weapons (a conversion perhaps stimulated by an airstrike on its facilities), they will probably use any nuclear capacity in the same way all other nuclear states have: for prestige (or ego-stoking) and deterrence. Indeed, suggests strategist and Nobel laureate Thomas Schelling, deterrence is about the only value the weapons might have for Iran. Nuclear weapons, he points out, "would be too precious to give away or to sell" and "too precious to waste killing people" when they could make other countries "hesitant to consider military action." It seems overwhelmingly probable that, if a nuclear Iran brandishes its weapons to intimidate others or to get its way, it will find that those threatened, rather than capitulating to its blandishments or rushing off to build a compensating arsenal of their own, will ally with others, including conceivably Israel, to stand up to the intimidation. The popular notion that nuclear weapons furnish a country with the capacity to dominate its region has little or no historical support. The application of diplomacy and bribery in an effort to dissuade these countries from pursuing nuclear weapons programs may be useful; in fact, if successful, we would be doing them a favor. But although it may be heresy to say so, the world can live with a nuclear Iran or North Korea, as it has lived now for 45 years with a nuclear China, a country once viewed as the ultimate rogue. Should push eventually come to shove in these areas, the problem will be to establish orderly deterrent and containment strategies and to avoid the temptation to lash out mindlessly at fancied threats. Although there is nothing wrong with making nonproliferation a high priority, it should be topped with a somewhat higher one: avoiding policies that can lead to the deaths of tens or hundreds of thousands of people under the obsessive sway of worst-case scenario fantasies. In the end, it appears to me that, whatever their impact on activist rhetoric, strategic theorizing, defense budgets, and political posturing, nuclear weapons have had at best a quite limited effect on history, have been a substantial waste of money and effort, do not seem to have been terribly appealing to most states that do not have them, are out of reach for terrorists, and are unlikely to materially shape much of our future.

Case Extensions

Inherency
Facial recognition lacks any governmental regulation
Peterson, 6/16/2015. Andrea, Washington Post. "The government's plan to regulate facial recognition tech is falling apart." http://www.washingtonpost.com/blogs/the-switch/wp/2015/06/16/the-governmentsplan-to-regulate-facial-recognition-tech-is-falling-apart/ JJZ
Facial recognition is being used by the government and big tech companies free of
federal regulation. Now, the government process trying to craft a voluntary code of
conduct to govern the technology appears to be falling apart. Privacy groups are dropping out of the multi-stakeholder meetings organized by
the Department of Commerce's National Telecommunication and Information Administration (NTIA), they said in a letter obtained by The Washington Post that will be sent Tuesday. "At this point, we do not believe that the NTIA process is likely to yield a set of privacy rules that offer adequate protections for the use of facial recognition technology," the letter says. "We are convinced that in many contexts, facial recognition of consumers should only occur when an individual has affirmatively decided to allow it to occur. In recent NTIA meetings,
however, industry stakeholders were unable to agree on any concrete scenario where companies should employ facial recognition only with a consumers permission." NTIA has hosted
12 meetings on the issue since February 2014. But the tipping point was at the most recent meeting, on Thursday. First, Alvaro Bedoya, the executive director of Georgetown University's
Center on Privacy and Law, asked if companies could agree to making opt-in for facial recognition technology the default for when identifying people -- meaning that if companies wanted
to use someone's face to name them, the person would have to agree to it. No companies or trade associations would commit to that, according to multiple attendees at the meeting.
Then Justin Brookman, the director of the Center for Democracy & Technology's consumer privacy project, asked if companies would agree to a concrete scenario: What if a company set
up a camera on a public street and surreptitiously used it identify people by name? Could companies agree to opt-in consent there? Again, no companies would commit, according to
several attendees. "This is a pretty remarkable opposition to a core privacy concept that's already in state laws on this issue," Bedoya said in an interview. Some on the business side
argue that privacy advocates "drew a line in the sand" and weren't willing to negotiate. Privacy advocates blame the industry for the impasse. "Trade associations have successfully
blocked any expansion of privacy rights since 2009, and here they are successfully shutting down a process that could have given consumers more control," Bedoya said. Others go even
further, blaming the Obama administration's ties with Silicon Valley. "The White House staff are veterans from Google and Facebook -- they see this sector as vital to the American
economy and they used data mining techniques in elections, so it is no surprise that they are ambivalent about protecting privacy, to say the least," said Jeff Chester, the executive
director of the Center for Digital Democracy. Members of the administration disputed that description. "This process is being spearheaded by people who come out of the public interest
community," said John Morris, associate administrator and director of Internet policy at the NTIA. But, he agreed, there are few federal standards for how companies can collect
information about consumers right now. Most of what little protection people have at the national level stems from the Federal Trade Commission's ability to go after companies that engage in unfair or deceptive practices. "We're trying to work on facial recognition without legislation," Morris said. "Consumers and companies need to know what the rules are for this technology and we think stopping the discussion at this point doesn't get clarity that's needed."

The NTIA process was a major part of the White House's draft proposal for comprehensive consumer privacy legislation, which received a chilly reception from the privacy community
and even the FTC when it was released this spring. The NTIA meetings were designed to bring the private sector and privacy advocates together to help develop "legally enforceable
codes of conduct" based on concepts in the White House's 2012 Privacy Bill of Rights Blueprint for use in the real world. The approach was first used to come up with rules for mobile app
data, but the process was grueling and few companies have adopted the code of conduct that resulted from it, according to privacy advocates. And now representatives from the
consumer advocacy world are pulling out of the facial recognition meetings. They include representatives from the Electronic Frontier Foundation, the Consumer Federation of America,
the American Civil Liberties Union, the Center for Digital Democracy and the Center for Democracy & Technology. Privacy advocates believe their decision to withdraw could be a
significant blow to the legitimacy of proceedings. "Without the consumer and privacy groups there is no multi-stakeholder process," Chester said. But the meetings will continue, the agency said. "NTIA is disappointed that some stakeholders have chosen to stop participating in our multistakeholder engagement process regarding privacy and commercial facial recognition technology," an agency spokesperson said. "A substantial number of stakeholders want to continue the process and are establishing a working group that will tackle some of the thorniest privacy topics concerning facial recognition technology. The process is the strongest when all interested parties participate and are willing to engage on all issues." Industry representatives also have committed to continuing with the process. "It is disappointing that some stakeholders have chosen to stop participating, but we'll continue to engage with the goal of building guidelines that help people enjoy the benefits of this technology while protecting their privacy," a Facebook spokeswoman said in a statement. At this point, consumers
are better off turning to state legislatures for protection from facial recognition technology, Bedoya said. Two states, Texas and Illinois, have biometric data privacy laws on the books that
may already provide some protection against the use of facial technology without informed consent, he said. Under the Illinois law, companies must tell users whenever biometric
information is collected, why it's being collected and how long the companies will keep it. Consumers then must provide written release that they consent to the data collection.
Facebook is being sued over its image tagging feature, which relies on facial recognition technology. The case will decide if the feature triggers the Illinois law -- and if so, if clicking
through the terms and conditions when signing up counts as adequate consent.

Facial tracking expanding now: nearly 55,000 photo results per day.
Pagliery, 14. Jose, CNN Money, 9/16. "FBI launches a face recognition system." JJZ
http://money.cnn.com/2014/09/16/technology/security/fbi-facial-recognition/
The FBI can now quickly identify people just by looking at their faces. Coming soon: eyes, voice, palm print and walking stride. It's called the FBI's Next Generation Identification system, and the agency said it became fully operational Monday. The government expects the system's database to house 51 million photographs by next year -- and keep growing. But it's not just for the FBI. Police everywhere will be able to tap into the system. They'll quickly ID fingerprints during a routine traffic stop -- or look up a face while investigating a crime. Hawaii, Maryland and Michigan took part in the NGI system's pilot program, documents show. A dozen others including California, Florida and New York have discussed participation in the program as well, according to the Electronic Frontier Foundation. By 2015, the system is expected to produce results on more than 55,000 photo searches every day. Facial recognition is only expected to be used in just a small portion of those
searches. Police nationwide are expected to use it 196 times a day, government documents show. There are several ways your photo could end up in this massive, one-of-a-kind human tracker. Police agencies can submit your postarrest mug shot, video feed from a security camera, or photos from your family and friends. But the FBI database will also keep photos that it receives when conducting background checks, which it does for lots of private sector and

government job candidates. Surprised the FBI didn't have this before? It actually had a limited, low-tech version that only stored fingerprints. But that old system was slow to respond. Police who took fingerprints from people they
arrested would wait two hours for a response from the FBI's database. The new wait time? 10 minutes. And the 24-hour wait for employers performing background checks is now down to 15 minutes. The NGI system, which started as
a pilot program in 2009, was designed by defense contractor Lockheed Martin (LMT) in a deal worth up to $1 billion. The facial recognition software was built by MorphoTrust, a Billerica, Massachusetts-based company that already
does the biometric scans at 450 U.S. airports and DMVs in 42 states. The fingerprint features of the system are meant to help police officers identify suspects in real time. Facial recognition is meant to help detectives identify

suspects using clues at crime scenes. But it's already a major privacy concern, because of its potential to relentlessly track innocent people. The Electronic Frontier Foundation has sued the Justice Department to get details on the program, but questions remain. For example, the

FBI said it will gather data from security cameras at a crime scene. But does that include the estimated 30 million surveillance cameras installed at street corners and parks? If the FBI information slide below is any indication, the FBI
is interested in using NGI system to identify a random person in a crowd -- and track them as they move, said EFF attorney Jennifer Lynch. That's why the Electronic Privacy Information Center worries the NGI system will get
integrated with CCTV cameras everywhere -- including at private businesses -- and let the government track folks without justification. FBI NGI slideFBI slide from a presentation about its new program's facial recognition capabilities.
To that point, the FBI has already mentioned it will store all photos -- even those with faces it can't immediately pinpoint -- for later identification. However, there are a few ground rules cited in FBI documents: This tool doesn't let
the government start collecting your fingerprint and body data if it couldn't before. Police aren't supposed to rely solely on the facial recognition software to arrest anyone. Photos on people's social media accounts (Facebook,
Instagram, etc.) cannot be submitted into the NGI database (at least during the pilot phase). NGI isn't just about cameras, though. The system is also designed to alert police if someone "holding positions of trust," such as a school
teacher, has run-ins with the law. And the identification system isn't limited to your face. The system is able to spot and search for scars, tattoos, birth marks. FBI documents show the agency built the system to accommodate for
future collection of biometric data. If and when our eyeballs, voices and walking style are recorded and categorized, the system will be able to uniquely identify a person that way too.

Despite need for regulation, Congress hasn't acted.
Weise, 6/16/15. Elizabeth Weise, USA TODAY, June 16, 2015. "Privacy groups leave over dispute on facial recognition software."
http://www.usatoday.com/story/tech/2015/06/16/facial-recognition-software-google-facebook-moments-ntia/28793157/
Consumer groups that believe companies like Facebook need to get individuals'
permission before their images can be identified using increasingly advanced facial
recognition technology have abandoned talks to create a voluntary code of conduct
for the controversial software. "This isn't just your anonymous online profile. It's you; they're tracking your face," said Jeffrey Chester with the Center for Digital Democracy, one of the groups. Facial recognition software automatically identifies individuals in a digital image by comparing facial features in the image and a database. It
allows computers to link a person's name to their face in photos or video. The discussion was being organized by the National Telecommunications and Information Administration, a
division of the Department of Commerce.As part of the process, industry and privacy groups had been meeting for 18 months to create a voluntary Code of Conduct around the use of
facial recognition software in commercial contexts. The NTIA successfully created a voluntary code for mobile apps last year.However, a code for facial recognition software looks to be in
trouble after most of the consumer and privacy groups involved in the discussion announced Tuesday they're leaving the group.The groups that left included the Center for Democracy &
Technology, the Center for Digital Democracy, the Consumer Federation of America, the Electronic Frontier Foundation, the American Civil Liberties Union, Consumer Action
and Consumer Watchdog.While such voluntary codes don't carry the force of law, if a company agreed to one and then didn't abide by it, it would be open to Federal Trade Commission
Enforcement. Chester said the consumer groups realized they saw opt-in as the baseline, while companies wanted opt-out. Opt-in would require companies to ask permission before using the software on a person's image. Opt-out would require people to tell each company they didn't want their images identified. "The position that companies never need to ask permission to use biometric identification is at odds with consumer expectations, current industry practices, as well as existing state law," the group's letter, released Tuesday, said. "We were disappointed that some of the public interest groups have decided to step out of the process," said John Morris, associate administrator of NTIA's Office of Policy Analysis and
Development. The walkout comes amidst sharpened concerns over people's privacy, with new photo apps appearing that promise to view and tag by name billions of images. On
Monday, Facebook rolled out its new Moments photo app, which makes use of powerful facial recognition features to identify who is in a given picture. Google announced a similar app
earlier this month. In the Facebook app, however, in order to avoid having their pictures recognized, users must go into their Facebook settings and turn tagging off. A Facebook
spokeswoman said it would continue to engage in the process "with the goal of building guidelines that help people enjoy the benefits of this technology while protecting their privacy.
However, most users don't have the "time or awareness" to understand the complex privacy policies they click to agree to online or in apps, said Marc Rotenberg, a privacy advocate with the Electronic Privacy Information Center in Washington D.C. EPIC is not a part of the NTIA discussion. It's the power of facial recognition software that worries privacy advocates. "Facebook for years has insisted on turning on facial recognition by default, where other services have the user turn it on by choice. So they are essentially creating one of the largest collections out there, because so few people opt out of it," said Alvaro Bedoya, executive director on the Center on Privacy and Tech at Georgetown Law. Facial recognition software can
be used in a wide variety of situations. For example, ATMs could use cameras to verify that the person in front of the machine was the holder of the card being used. Stores could identify known shoplifters as they came through the door. But privacy advocates worry it could also be used to market or track consumers when they imagine themselves to be anonymously
walking down the street or browsing in a store. Some in the coalition said they'd doubted that companies would ever voluntarily agree to give up a possible marketing edge. "The multi-stakeholder process was doomed from the start. Companies had no real incentive to develop a code with meaningful protections," said John Simpson, the privacy project director for the group Consumer Watchdog. There is currently no federal privacy legislation regarding the commercial use of facial recognition software, though Illinois and Texas have state laws governing it on their books. The administration released a privacy blueprint with proposed legislation in February but it has not yet been introduced into Congress.

Technology exists now to make biometric data interoperable.
Lynch, 12. Jennifer, Attorney for the Electronic Frontier Foundation, July 18. "What Facial Recognition Technology Means for Privacy and Civil Liberties." Presentation to the Senate Committee on the Judiciary. JJZ
https://www.eff.org/files/filenode/jenniferlynch_eff-senate-testimony-face_recognition.pdf
Recent advancements in camera and surveillance technology over the last few
years support law enforcement goals to use face recognition to track Americans . For
example, the National Institute of Justice has developed a 3D binocular and camera that allows realtime facial acquisition and recognition at 1000 meters.31 The tool wirelessly transmits
images to a server, which searches them against a photo database and identifies the photo's subject. As of 2010, these binoculars were already in field-testing with the Los Angeles Sheriff's Department. Presumably, the back-end technology for these binoculars could be incorporated into other tools like body-mounted video cameras or the MORIS (Mobile Offender Recognition and Information System) iPhone add-on that some police officers are already using.32 Private security cameras and the cameras already in use by police departments have also advanced. They are more capable of capturing the details and facial features necessary to support facial recognition-based searches, and the software supporting them allows photo manipulation that can improve the

chances of matching a photo to a person already in the database. For example, Gigapixel technology, which creates a panorama photo of many megapixel images stitched together (like
those taken by security cameras), allows anyone viewing the photo to drill down to see and tag faces from even the largest crowd photos.33 It also shows not just a face but also what
that person is wearing; what books and political or religious materials he is carrying; and whom he is with. And image enhancement software, already in use by some local law
enforcement, can adjust photos taken in the wild34 so they work better with facial recognition searches. Cameras are also being incorporated into more and more devices that are
capable of tracking Americans and that can provide that data to law enforcement. For example, one of the largest manufacturers of highway toll collection systems filed a patent
application in 2011 to incorporate cameras into the transponder that sits on the dashboard in your car.35 This manufacturer's transponders are already in 22 million cars, and law
enforcement already uses this data to track subjects. While a patent application does not mean the company is currently manufacturing or trying to sell the devices, it certainly shows
it's interested. Interoperability and Data Sharing: Before September 11, 2001, the federal government had many policies and practices in place to silo data and information within each
agency. Since that time the government has enacted several measures that allow, and in many cases require, information sharing within and among federal intelligence and federal, state, and local law-enforcement agencies.36 For example, currently the FBI, DHS, and Department of Defense's biometrics databases are interoperable, which means the systems can easily share and exchange data.37 This has allowed information sharing between FBI and DHS under ICE's Secure Communities program.38 And states are collecting and sharing biometric data with the federal government as well. At least 31 states have already started using some form of facial recognition with their DMV photos,39 generally to stop fraud and identity theft, and the FBI has already worked with North

Carolina, one of a handful of states reported to be in the NGI pilot program, to track criminals using the states DMV records.40 States also share fingerprints (and face prints soon)

indirectly with DHS through Secure Communities. According to the FBI, NGI will allow all states to share and access face prints as easily as they now share and access fingerprints by 2014.41 The federal government also exchanges biometric data with foreign governments through direct and ad-hoc access to criminal and terrorist databases.42 And ICE and the FBI share biometric data on deportees with the countries to which they are deported.43

The federal government uses it; it has simply moved offices.


Fretty 11 Associate at Irell & Manella, LLP, in Los Angeles, California,
J.D from UCLA School of Law (Douglas A., Face-Recognition Surveillance: A
Moment of Truth for Fourth Amendment Rights in Public Places, Vol 16 No 3, Fall
2011, Virginia Journal of Law and Technology,
http://www.vjolt.net/vol16/issue3/v16i3_430-Fretty.pdf) NAR
The federal governments activeness in this arena is hard to measure, but the
Department of Defense has historically shown great enthusiasm for human
identification at a distance, or HumanID.33 Shortly after the USA PATRIOT Act
passed in 2001,34 DARPA established a data-mining and pattern-recognition
program to provide tools to better detect, classify, and identify potential foreign
terrorists.35 Called TIA (originally Total Information Awareness but redubbed
Terrorism Information Awareness to avoid an overtly Orwellian moniker),36 the
program included a HumanID component, intended to identify humans using a
combination of biometric modes at distances up to 500 feet.37 In a 2003 report to
Congress, DARPA revealed that its then-existing HumanID technology could detect
the presence of human faces at 20 to 150 feet and then zoom[] in to recognize the
detected face.38 Members of Congress, disturbed by TIAs potential invasiveness,
defunded the program in the fall of 2003.39 Nevertheless, TIA appears not to have
vanished but merely moved to the Disruptive Technology Office ,40 a
department under the auspices of the Director of National Intelligence.41 DARPAs
2003 report, then, likely describes not simply FRT research conducted in the past
but ongoing activity in the U.S. intelligence community.42

Solvency
Legislative actions are key to establishing necessary oversight mandates on biometric collections.
Lynch, 12. Jennifer, Attorney for the Electronic Frontier Foundation, July 18. "What Facial Recognition Technology Means for Privacy and Civil Liberties." Presentation to the Senate Committee on the Judiciary. JJZ
https://www.eff.org/files/filenode/jenniferlynch_eff-senate-testimony-face_recognition.pdf
The over-collection of biometrics has become a real concern, but there are still opportunities, both technological and legal, for change. Given the current uncertainty of Fourth Amendment jurisprudence in the context of biometrics and the fact that biometrics capabilities are undergoing dramatic technological change,121 legislative action could be a good solution to curb the over-collection and over-use of biometrics in society today and in the future. If so, the federal government's response to two seminal wiretapping cases in the late '60s could be used as a model.122 In the wake of Katz v. United States123 and New York v. Berger,124 the federal government enacted the Wiretap Act,125 which laid out specific rules that govern federal wiretapping, including the evidence necessary to obtain a wiretap order, limits on a wiretap's duration, reporting requirements, and a notice provision.126 Since then,
law enforcements ability to wiretap a suspects phone or electronic device has been governed primarily by statute rather than Constitutional case law. Congress could also look to the
Video Privacy Protection Act (VPPA),127 enacted in 1988, which prohibits the wrongful disclosure of video tape rental or sale records or similar audio-visual materials, requires a

warrant before a video service provider may disclose personally identifiable information to law enforcement, and includes a civil remedies enforcement provision. If legislation or regulations are proposed in the biometrics context, the following principles should be considered to protect privacy and security. These principles are based in part on key provisions of the Wiretap Act and VPPA and in part on the Fair Information Practice Principles (FIPPs), an internationally recognized set of privacy-protecting principles.128 Limit the Collection of Biometrics: The collection of biometrics should be limited to the minimum necessary to achieve the government's stated purpose. For example, collecting more than one biometric from a given person is unnecessary in many situations. Similarly, the government's acquisition of biometrics from sources other than the individual to populate a database should be limited. For example, the government should not obtain biometrics en masse to populate its criminal databases from sources such as state DMV records, where the biometric was originally acquired for a non-criminal purpose, or from crowd photos or data collected by the private sector. Techniques should also be employed to avoid over-collection of face prints (such as from security cameras or crowd photos) by, for example, scrubbing the images of faces that are not central to an investigation. Define Clear Rules on the Legal Process Required for Collection: Each type of biometric should be subject to clear rules on when it may be collected and which specific legal process, such as a warrant based on probable cause, is required prior to

collection. Collection and retention should be specifically disallowed without legal process unless the collection falls under a few very limited and defined exceptions. For example, clear
rules should be defined to govern when law enforcement or similar agencies may collect biometrics revealed to the public, such as a face print. Limit the Amount and Type of Data Stored
and Retained: For biometrics such as a face print that can reveal much more information about a person than his or her identity, rules should be set to limit the amount of data stored.

Retention periods should be defined by statute and should be limited to no longer


than necessary to achieve the goals of the program . Data that is deemed to be safe from a privacy perspective today
could become highly identifying tomorrow. For example, a data set that includes crowd images could become much more identifying as technology improves. Similarly, data that was
separate and siloed or unjoinable today might be easily joinable tomorrow. For this reason retention should be limited, and there should be clear and simple methods for a person to
request removal of his or her biometric from the system if, for example, the person has been acquitted or is no longer under investigation.129 Limit the Combination of More than One Biometric in a Single Database: Different biometric data sources should be stored in separate databases. If biometrics need to be combined, that should happen on an ephemeral basis for a particular investigation. Similarly, biometric data should not be stored together with non-biometric contextual data that would increase the scope of a privacy invasion or the harm that would result if a data breach occurred. For example, combining facial

recognition technology from public cameras with license plate information increases the potential for tracking and surveillance. This should be avoided or limited to specific individual
investigations. Define Clear Rules for Use and Sharing: Biometrics collected for one purpose should not be used for another purpose. For example, face prints collected for use in a
criminal context should not automatically be used or shared with an agency to identify a person in an immigration context. Similarly, photos taken in a non-criminal context, such as for a
drivers license, should not be shared with law enforcement without proper legal process. For private sector databases, users should be required to consent or opt-in to any face
recognition system. Enact Robust Security Procedures to Avoid Data Compromise: Because biometrics are immutable, data compromise is especially problematic. Using traditional
security procedures, such as basic access controls that require strong passwords and exclude unauthorized users, as well as encrypting data transmitted throughout the system, is
paramount. However security procedures specific to biometrics should also be enacted to protect the data. For example, data should be anonymized or stored separate from personal
biographical information. Strategies should also be employed at the outset to counter data compromise after the fact and to prevent digital copies of biometrics. Biometric encryption130
or hashing protocols that introduce controllable distortions into the biometric before matching can reduce the risk of problems later. The distortion parameters can easily be changed to

make it technically difficult to recover the original privacy-sensitive data from the distorted data, should the data ever be breached or compromised.131 Mandate Notice Procedures: Because of the real risk that face prints will be collected without people's knowledge, rules should define clear notice requirements to alert people to the fact that a face print has been collected. The notice provision should also make clear how long the biometric will be stored and how to request its removal from the database. Define and Standardize Audit Trails and Accountability Throughout the System: All database transactions, including biometric input, access to and searches of the system, data transmission, etc. should be logged and recorded
in a way that assures accountability. Privacy and security impact assessments, including independent certification of device design and accuracy, should be conducted regularly. Ensure Independent Oversight: Government entities that collect or use biometrics must be subject to meaningful oversight from an independent entity. Individuals whose biometrics are compromised, whether by the government or the private sector, should have a strong and meaningful private right of action. Conclusion: Face recognition and its accompanying privacy concerns are not going away. Given this, it is imperative that government act now to limit unnecessary biometrics collection; instill proper protections on data collection, transfer, and search; ensure accountability; mandate independent oversight; require appropriate legal process before government collection; and define clear rules for data sharing at all levels. This is important to preserve the democratic and constitutional values that are bedrock to American society.

This can potentially be both a mechanism and a source of a ton of wacky advantages.
Nguyen 2 - J.D. Candidate, Yale Law School, 2003 (Alexander T., "Here's Looking At You, Kid: Has Face-Recognition Technology Completely Outflanked The Fourth Amendment?", Virginia Journal of Law and Technology, University of Virginia, Spring 2002, 7 VA. J.L. & TECH. 2, http://www.vjolt.net/vol7/issue1/v7i1_a02-Nguyen.PDF) NAR
45. These potential Fourth Amendment-based arguments against FaceIt are mostly based on dicta or extrapolation, and therefore offer very weak opposition to technology such as FaceIt. Even though the arguments are intellectually interesting, to contend that the Fourth Amendment would prohibit the use of technology such as FaceIt is simply to fight a quixotic battle, and it might take too long for courts to reformulate a new conceptualization of the Fourth Amendment to protect citizens against FaceIt. Instead, one must realize that the expectation of privacy has crumbled with the onslaught of technology, and it might be time to turn to another potential, and more immediately available, source of opposition to FaceIt technology. That source is anonymity. 46. If technology has eroded the expectation of privacy, one could argue that courts have consistently upheld what might be termed the expectation of anonymity. The definition of privacy is almost certainly too broad to meaningfully protect individuals against FaceIt. Privacy has become as nebulous a concept as happiness or security.162 To simply say that FaceIt violates privacy by infringing on "the right to be left alone,"163 for example, is not useful because in the FaceIt case, the people being scanned are technically being left alone. The great simplicity of this definition gives it rhetorical force and attractiveness, but also denies it the distinctiveness that is necessary for the phrase to be useful in more than a conclusory sense.164 As a spokesman for the Tampa Police department stated after the use of video surveillance at the Super Bowl: "There is no expectation of privacy in a crowd of 100,000 people."165 Such a definition of privacy exempts biometric surveillance because proponents can simply claim that such technology leaves citizens alone while ignoring the argument that privacy claims also have to do with, for example, an individual's reluctance to have a file in a database or to have his or her face scanned unknowingly. Anonymity is a much narrower conception of the value at stake insofar as biometric technology is concerned. While there may be no "expectation of privacy" in a crowd, there may be an "expectation of anonymity" in such a space.166 Because this technology is primarily concerned with identification rather than searches, anonymity is a value that is tailored much more narrowly and is therefore better equipped to deal with biometric surveillance. 47. Privacy is closely allied with anonymity. We may commute for years, same train, same compartment, same fellow-travelers, and yet the man to whom we reveal our hopes, our opinions, our beliefs, our business and domestic joys and crises remains "The chap who gets on at Dorking with The Times and a pipe; I don't know who he is." And he does not know who we are, because we have never exchanged names, and thus the necessary communication and release of our private concerns is accomplished without violation of our privacy. In our anonymity is our security.167 But the value of anonymity is its role as buffer to privacy intrusions. In other words, we will tolerate considerable intrusion, and even volunteer supererogatory circumstantial detail of our lives, if our anonymity is preserved.168 48. The strength of using anonymity to oppose FaceIt
rather than expectations of privacy lies in the fact that courts have generally protected anonymity
in public spaces whereas they have in general held that there is no expectation of
privacy in public places. This is because anonymity has implications for the First
Amendment and has strong political dimensions, from the earliest beginnings of the
country. The Federalist papers of Alexander Hamilton, James Madison and John Jay were published anonymously,
under the pen name of Publius.169 Over the years, at least six presidents, fifteen cabinet members, and thirty-four congressmen published anonymous political writings.170 In McIntyre v. Ohio Elections Comm'n, the court
indicated in striking down an ordinance requiring that political pamphlets bear the name of the author that: Under
our Constitution, anonymous pamphleteering is not a pernicious, fraudulent practice, but an honorable tradition of
advocacy and of dissent. Anonymity is a shield from the tyranny of the majority [citing J. Mill, On Liberty]. It thus
exemplifies the purpose behind the Bill of Rights, and of the First Amendment in particular: to protect unpopular
individuals from retaliationand their ideas from suppressionat the hand of an intolerant society. The right to
remain anonymous may be abused when it shields fraudulent conduct. But political speech by its nature will
sometimes have unpalatable consequences, and, in general, our society accords greater weight to the value of free
speech than to the dangers of its misuse.171 49. In Thomas v. Collins, 172 the Court held that the president of the
United Auto Workers did not have to register as a labor organizer with the Secretary of State in Texas before being
able to identify himself as such on business cards and solicit new members. Although the ambiguities in the
Thomas opinion leave its scope in doubt, it may be read as a recognition of a right of anonymity.173 The Court has
also upheld the refusal of individuals to disclose the names of individuals who had bought defendants book,174 the
refusal of party officials to divulge the names of other members of the Progressive Party175 and the refusal of a
witness to reveal to the House Committee on Un-American Activities if other individuals had participated in the
Communist Party.176 The right to anonymity was even more firmly expounded on in NAACP v. Alabama ex rel.
Patterson177 in which the Supreme Court upheld the refusal of the NAACP to disclose its membership lists because
to do so would be a violation of the associational privacy implied by the First Amendment. And in Shelton v.
Tucker178 the Court struck down a statute requiring teachers to list their group affiliations on an annual basis.
Despite this line of cases, the scope of anonymity has not really been specified.179 This term, the Supreme Court
will hear Watchtower Bible & Tract Soc. of New York, Inc. v. Village of Stratton, 180 in which Jehovah's Witnesses are
challenging the constitutionality of an ordinance that requires door-to-door proselytizers to register first. 50. Courts
have further upheld anonymity in another prominent public forum: The Internet.181 Various scholars have decried
the fact that cookies and other technology are eroding anonymity on the Internet.182 Individuals and organizations
have argued, and courts have agreed, that there is a strong interest in being anonymous on the Internet because in
the discussion of sensitive topics, they would like to avoid ostracism or embarrassment.183 In some cases,
scholars have even argued, anonymity might even change race relations.184 51. Internet anonymity is easy to
come by, unlike anonymity off-line. Instead of having to go outside to find a payphone and making a call using a disguised voice, now users could simply find a re-mailer service that would ensure anonymity. "One of the most valuable democratic aspects of the Internet is its capability for anonymous communication."185 Thus, it is evident that anonymity is a fundamental right that courts have in general

been very aggressive in protecting, and it is this right that might offer a foundation for constitutional protection against FaceIt. B. A PER SE RIGHT? ANONYMITY DECOUPLES FROM SPEECH 52. From these cases it would appear that a speech nexus is always required and that a right of anonymity only exists insofar as it has consequences for speech. But in fact, the Court has recognized a right to anonymity that is broader than simply political anonymity. In other words, even though the speech nexus will make a court's protection of anonymity more likely, such a nexus is not necessary in order to be protected by anonymity. Thus, courts have in general upheld juror anonymity,186 anonymity with respect to the abortion decision of a minor,187 the anonymity of a rape victim in newspaper articles,188 the anonymity of a pregnant student in a student newspaper,189 and the right to proceed anonymously in a court action, even though a court is a public forum.190 The interests protected by anonymity vary widely. Anonymity is praised as a necessary component of free society on one hand, but condemned as a vehicle for nefarious activity on the other.191 Still, the right to anonymity is a quasi-right that is protected in some instances but not in others.192 Under this broader conception of anonymity, one might argue that FaceIt violates a per se right to anonymity because the program allows citizens in public places to be identified indiscriminately. 53. Another way to

reformulate the value of anonymity is to argue that it encompasses a broader range of non-speech activities that
nevertheless implicate speech. Under this conception, activities that are formative of identity (such as attending
certain meetings, going into certain stores, viewing certain movies, and so on) are part of speech. Similarly,
activities that help an individual formulate his or her thoughtssuch as readingare also closely tied to speech.
These activities therefore should also be granted anonymity. Julie Cohen therefore argues for a right to read
anonymously, because the activity of reading is as intimate and prior to the activity of speaking.193 Logically, that
zone of protection should encompass the entire series of intellectual transactions through which they formed the
opinions they ultimately chose to express. Any less protection would chill inquiry, and as a result, public discourse,

concerning politically and socially controversial issues[].194 One could argue that there is a right to be anonymous in public as well, as it is expressive conduct. Attending a Green Party meeting or a Catholic mass requires walking in public and would almost certainly qualify as political and expressive conduct to which there might be a right to anonymity. But what about attending a New York Giants game, where surely the expression implied is one's support for one of the teams, or a Yo-Yo Ma concert? What about walking into the local McDonald's? No matter how trivial or incidental the expressive conduct, one could still argue that they have expressive value and should therefore be protected. The case for protection of
anonymity is further bolstered by the fact that individuals appearing in public often do not have the option of hiding
their faces under a mask, for instance. Court authority has been divided over whether or not ordinances prohibiting
masks violate the First Amendment.195 Usually, courts have held, however, that unless the masks themselves
constituted symbolic speech (such as a KKK hood), ordinances preventing the wearing of masks that just hide
identity are constitutional.196 Once FaceIt is a common occurrence, ordinary citizens ought to have the right to
protect their anonymity as well, either by wearing masks or by taking down the cameras. In any case, it is
anonymity that might offer a vindication of rights and the privacy invasion that FaceIt carries itself. VIII.
CONCLUSION 54. Current biometric research has reached a point at which identification of human beings can take place from a distance, without touching individuals or stopping them on the
street. Research currently includes efforts to identify humans simply from the way they walk, or their gait.197
Because of the rise of national databases that keep track of financial, health, sexual, consumer and other types of
information, the danger to personal privacy comes from being able to link an individual to all these sources of
information. FaceIt is different from other sense-enhancing technologies because it is almost used exclusively in a
public sphere where it is able to sidestep the Fourth Amendment protection of the reasonable expectation privacy.
55. Under the current jurisprudence, it is not likely that the Fourth Amendment will protect citizens from
technologies such as FaceIt. This article has argued that a re-conceptualization of the Fourth Amendment not so
much as protecting individuals but more as demanding accountability and justification from the government as it
does its searches might be necessary. Under this conception, a de minimis individualized suspicion would be
required before FaceIt can scan an individual's face; this requirement might then essentially prohibit its indiscriminate use in public spheres. Anonymity might be another alternative because of its nexus to the public sphere and because FaceIt is a tool that is used to identify rather than to pry, so anonymity is a better fit than privacy. 56. With the rise of each new technology, courts strive to carry forward established constitutional principles into the new context.198 At the same time, however, the new technology strains the old principles and often requires a new approach that has consequences for older technologies as well as the newer ones.199 FaceIt is such a technology. It differs from the other technologies since the Katz decision in that it is used almost exclusively in public, but prominently so, and is able to identify even innocent parties from a distance without intrusion. The technology is by far the most Orwellian, but at the same time the Fourth Amendment's expectation of privacy that traditionally has, with some difficulty, ruled out some technology is helpless against FaceIt. Only through a new conception of the Fourth Amendment that stresses the government's accountability and justification in conducting searches, or a First Amendment protection of anonymity, could this dystopia be postponed or even eliminated.
The aff reverses how government uses legal speak to require consent from citizens before collecting data.

Welinder 12 - Visiting Assistant Professor, California Western School of Law; Junior Affiliate Scholar, Center for Internet and Society at Stanford Law School; LL.M., Harvard Law School; J.D., University of Southern California; LL.B., London School of Economics and Political Science (Yana, "A Face Tells More Than a Thousand Posts: Developing Face Recognition Privacy in Social Networks," Harvard Journal of Law & Technology, Volume 26, Number 1, Fall 2012, http://jolt.law.harvard.edu/articles/pdf/v26/26HarvJLTech165.pdf) NAR
At this juncture in my analysis, it should be clear that I view biometric data as particularly sensitive information because of its power to de-anonymize a face (which we cannot easily hide or alter) and instantly connect it to all our online activities. This type of sensitive information should never be collected or used without a person's consent.260 The consent needs to be specific as to the collection or use of the data and be based on detailed information of the type that I outline in the next section. It must also be affirmative, and it must precede the collection or use of the information in question.

From the users' perspective, however, the notice and consent model can only be effective if it is accompanied by freedom to withhold consent. For that reason, the architectural and market solutions below are intended to give users more choice by making social networks interoperable with distributed social networks and providing users with more autonomy over their data use. Once users understand how social networks use their biometric data and have realistic alternatives, they will be able to select only those data practices that they find acceptable. But if users fail to make a choice either

because they still do not understand the particular data process or because they have not had time to become informed, no new data should be collected and previously collected data should not be used in a new way.261 Privacy settings262 that allow users to opt out of collection and use of biometric data simply cannot serve as consent, not least because by the time a user opts out, the data has already been collected and potentially used to identify the person in new photos.263 Professor James Grimmelmann has noted that opt-out consent is particularly insufficient when a new practice in a social network involves a large cultural shift.264 Collection and use of biometric data from photos previously shared with friends involves such a cultural shift because it uses technology that, for most users, is completely unimaginable. Further, opt-out consent is often designed so that users are not

even aware of the change that they are accepting by default. But even if a social network were to actually notify a
user that she can opt out if she does not want to have her friends automatically find her in new photos, the user
would simply not understand the issue. This is because it would be presented in terms of trust vis-à-vis the user's friends, not the social network that will be collecting, storing, and using highly sensitive and personally identifiable information about that user. A user cannot be expected to take affirmative (and cumbersome) steps to object to something that she does not understand. As a result, specific opt-in consent should be solicited from users. One problematic

aspect of consent with respect to face recognition technology is that in order to know whether an unidentified
individual consents to automatic face recognition, you need to first extract and process her biometric data to
compare it against a database of consenting individuals.265 This could be addressed by allowing automatic face
recognition for the limited purpose of determining consent and requiring immediate deletion of any data derived from the process if it turns out that there was no such consent.266 The consent also needs to be innovatively designed to ensure that users truly understand what they approve. In this

respect, Google's opt-in notice for its face recognition technology, Find My Face, is a good start, though not perfect. Google launched Find My Face in December 2011.267 At that time, it used a cartoon to illustrate how Find My Face would "[h]elp people tag you in photos" and it provided the user's real name above a face in the cartoon to make the example feel realistic.268 Users could then select to turn on the function. The problem was that the notice did not indicate that this function would use old photos to find users' faces in new photos. If users did not know how face recognition technology works, and most users do not, this notice did not tell them that it was asking for permission to collect and use their data in a new way. The notice had a link that users could click to learn more, but given the small print that usually appears after clicking on links of this sort, by now most users have learned not to be too curious online.269 Yet, while this notice failed to inform users of every relevant aspect of the face recognition process, it demonstrated Google's ability to communicate abstract information like automatic face recognition through a simple cartoon. Google is treading a narrow path here. On the one hand, it tries to live up to its motto, "Don't be evil."270 On the other hand, it does not want to provide more information than its competitors so as not to overwhelm the users. But if Google had clearer instructions about what information it needed to present, and the same requirements applied to its competitors, it could have designed a notice to obtain
adequate consent from its users. 2. Notice Regarding the Collection and Processing of Biometrics. What sort of information should companies provide before collecting or using biometric data? At the very least, users should know specifically what biometric data is collected and from which photos. They should also know how long the data will be stored and who will have access to it in the meantime. Companies need to explain in detail how they will use the data and who will have access to the end results of that use, i.e., once the data has been aggregated or processed in some way. Users should also know how they can delete biometric data, as it is not clear that deleting a photo necessarily deletes the biometric data collected from that photo. The current lack of information leads users to make erroneous assumptions about how social networks use their photos. Users may, for example, mistakenly believe that biometric data will not be collected from photos to

which they restrict access through their privacy settings. Most users probably think that if they opt out of automatic
face recognition their biometric data will never be collected. But as a function of the opt-out consent, chances are
that a social network collects biometrics when it rolls out a service, which then resides in a database even after a

These issues should be no mystery to users whose


information is collected. Companies should further have an incentive to think
creatively about how they can present this information to users in an accessible
way. Crucially, this information cannot just be buried in a privacy policy full of
legalese and tech-speak, 272 which no one reads.273 Some scholars are very skeptical about
user opts out of automatic face recognition.

whether information about privacy practices can ever be effectively communicated to users.274 Professor
Nissenbaum argues that attempts to concisely communicate this information in plain language present a
transparency paradox. 275 Thorough information overwhelms users, while concise notices contain general
provisions and do not describe the details that differentiate between good and bad practices.276 I am more
optimistic about companies ability to concisely present this information if they have the right incentives. Work in
infographics has shown that it is possible to explain incredibly complex information, such as geography or medical
information, with graphs and charts that can easily be understood by non-experts.277 The recent start-up trend of
creating demo videos to communicate often very complex online business models to users and investors in only a
few minutes is another example of this capability.278 Emerging research in user experience design further suggests
that websites can be designed to notify users of the data collection in real time and show how it will be used.279
Indeed, social networks already spend most of their time thinking about how to present our intricate social
relationships, correspondence, and social lives in a clear and accessible manner so that the platforms can be used
by children and grandparents alike.280 Organizing information about data practices is in fact a very similar task
that they have the resources to handle.281 The cartoon in the Google Plus notice, though not perfect, is a good example of how social networks can communicate very detailed information through a simple picture.282 Another example is Facebook's Interactive Tools that allow a user to browse her own profile as if she were another person to experience what that particular individual can learn about her.283 Were comprehensible information in non-traditional form incentivized by legal requirements and user expectations, these companies could extend their
innovative solutions to provide simple and informative notice about biometric data collection and processing.

Impact Framing

No War
War isn't a threat
John Aziz 14, 3-6-2014, "Don't worry: World War III will almost certainly never happen," The Week, http://theweek.com/articles/449783/dont-worry-world-war-iii-almost-certainly-never-happen
Next year will be the seventieth anniversary of the end of the last global conflict.
There have been points on that timeline such as the Cuban missile crisis in 1962,
and a Soviet computer malfunction in 1983 that erroneously suggested that the U.S.
had attacked, and perhaps even the Kosovo War in 1999 when a global conflict
was a real possibility. Yet today, in the shadow of a flare-up which some are calling a new Cold War between Russia and the U.S., I believe the threat of World War III
has almost faded into nothingness. That is, the probability of a world war is the
lowest it has been in decades, and perhaps the lowest it has ever been since the
dawn of modernity. This is certainly a view that current data supports. Steven
Pinker's studies into the decline of violence reveal that deaths from war have fallen
and fallen since World War II. But we should not just assume that the past is an
accurate guide to the future. Instead, we must look at the factors which have led to
the reduction in war and try to conclude whether the decrease in war is sustainable.
So what's changed? Well, the first big change after the last world war was the
arrival of mutually assured destruction. It's no coincidence that the end of the last
global war coincided with the invention of atomic weapons. The possibility of
complete annihilation provided a huge disincentive to launching and expanding
total wars. Instead, the great powers now fight proxy wars like Vietnam and
Afghanistan (the 1980 version, that is), rather than letting their rivalries expand into
full-on, globe-spanning struggles against each other. Sure, accidents could happen,
but the possibility is incredibly remote. More importantly, nobody in power wants to
be the cause of Armageddon.

Nuclear war isn't a threat


John Aziz 14, 3-6-2014, "Don't worry: World War III will almost certainly never happen," The Week, http://theweek.com/articles/449783/dont-worry-world-war-iii-almost-certainly-never-happen
But what about a non-nuclear global war? Other changes economic and social in
nature have made that highly unlikely too. The world has become much more
economically interconnected since the last global war. Economic cooperation
treaties and free trade agreements have intertwined the economies of countries
around the world. This has meant there has been a huge rise in the volume of global trade since World War II, and especially since the 1980s. Today consumer goods like
smartphones, laptops, cars, jewelery, food, cosmetics, and medicine are produced
on a global level, with supply-chains criss-crossing the planet. An example: The
laptop I am typing this on is the cumulative culmination of thousands of hours of
work, as well as resources and manufacturing processes across the globe. It
incorporates metals like tellurium, indium, cobalt, gallium, and manganese mined in

Africa. Neodymium mined in China. Plastics forged out of oil, perhaps from Saudi
Arabia, or Russia, or Venezuela. Aluminum from bauxite, perhaps mined in Brazil.
Iron, perhaps mined in Australia. These raw materials are turned into components
memory manufactured in Korea, semiconductors forged in Germany, glass made in
the United States. And it takes gallons and gallons of oil to ship all the resources
and components back and forth around the world, until they are finally assembled in
China, and shipped once again around the world to the consumer.

A2: nuke War


Default to structural impacts over nuclear war
Martin 82 - Professor of Social Sciences at the University of Wollongong,
Australia, Brian Martin, Critique of nuclear extinction, Published in Journal of Peace
Research, Vol. 19, No. 4, 1982, pp. 287-300, http://www.bmartin.cc/pubs/82jpr.html
To summarise the above points, a major global nuclear war in which population
centres in the US, Soviet Union, Europe and China were targeted, with no effective
civil defence measures taken, could kill directly perhaps 400 to 450 million people.
Induced effects, in particular starvation or epidemics following agricultural failure or
economic breakdown, might add up to several hundred million deaths to the total,
though this is most uncertain. Such an eventuality would be a catastrophe of
enormous proportions, but it is far from extinction. Even in the most extreme case
there would remain alive some 4000 million people, about nine-tenths of the world's
population, most of them unaffected physically by the nuclear war. The following
areas would be relatively unscathed, unless nuclear attacks were made in these
regions: South and Central America, Africa, the Middle East, the Indian
subcontinent, Southeast Asia, Australasia, Oceania and large parts of China. Even in
the mid-latitudes of the northern hemisphere where most of the nuclear weapons
would be exploded, areas upwind of nuclear attacks would remain free of heavy
radioactive contamination, such as Portugal, Ireland and British Columbia. Many
people, perhaps especially in the peace movement, believe that global nuclear war
will lead to the death of most or all of the world's population.[12] Yet the available
scientific evidence provides no basis for this belief. Furthermore, there seem to be
no convincing scientific arguments that nuclear war could cause human extinction.
[13] In particular, the idea of 'overkill', if taken to imply the capacity to kill everyone
on earth, is highly misleading.[14] In the absence of any positive evidence,
statements that nuclear war will lead to the death of all or most people on earth
should be considered exaggerations. In most cases the exaggeration is unintended,
since people holding or stating a belief in nuclear extinction are quite sincere.[15]
Another major point to be made in relation to statements about nuclear war is that
almost exclusive attention has been focussed on the 'worst case' of a major global
nuclear war, as indeed has been done in the previous paragraphs. A major global
nuclear war is a possibility, but not the only one. In the case of 'limited' nuclear war,
anywhere from hundreds of people to many tens of millions of people might die.[16]
This is a real possibility, but peace movement theory and practice have developed
almost as if this possibility does not exist. Why the effects of nuclear war are exaggerated: Why do so many people have an exaggerated idea of the effects of
nuclear war, or focus on the worst possible outcome? Many people tend to believe
what they hear, but in the case of nuclear war there are both very pessimistic
accounts and other accounts which minimise the dangers. Many people, though not
all by any means, seem to assume the worst and not look into the technical details as indeed I myself did until a few years ago. Why? Here I outline a number of
possible reasons for exaggeration of the effects of nuclear war and emphasis on
worst cases. While the importance of most of these reasons may be disputed, I feel
it is necessary to raise them for discussion. The points raised are not meant to lay

blame on anyone, but rather to help ensure that peace movement theory and
strategy are founded on sound beliefs. By understanding our motivations and
emotional responses, some insight may be gained into how better to struggle
against nuclear war. (a) Exaggeration to justify inaction. For many people, nuclear
war is seen as such a terrible event, and as something that people can do so little
about, that they can see no point in taking action on peace issues and do not even
think about the danger. For those who have never been concerned or taken action
on the issue, accepting an extreme account of the effects of nuclear war can
provide conscious or unconscious justification for this inaction. In short, one
removes from one's awareness the upsetting topic of nuclear war, and justifies this
psychological denial by believing the worst. This suggests two things. First, it may
be more effective in mobilising people against nuclear war to describe the dangers
in milder terms. Some experiments have shown that strong accounts of danger - for
example, of smoking[17] - can be less effective than weaker accounts in changing
behaviour. Second, the peace movement should devote less attention to the
dangers of nuclear war and more attention to what people can do to oppose it in
their day-to-day lives. (b) Fear of death. Although death receives a large amount of
attention in the media, the consideration of one's own death has been one of the
most taboo topics in western culture, at least until recently.[18] Nuclear war as an
issue raises the topic insistently, and unconsciously many people may prefer to
avoid the issue for this reason. The fear of and repression of conscious thoughts
about personal death may also lead to an unconscious tendency to exaggerate the
effects of nuclear war. One's own personal death - the end of consciousness - can be
especially threatening in the context of others remaining alive and conscious.
Somehow the death of everyone may be less threatening. Robert Lifton[19] argues
that children who learn at roughly the same age about both personal death and
nuclear holocaust may be unable to separate the two concepts, and as a result
equate death with annihilation, with undesirable consequences for coping
individually with life and working collectively against nuclear war. Another factor
here may be a feeling of potential guilt at the thought of surviving and having done
nothing, or not enough or not the right thing, to prevent the deaths of others. Again,
the idea that nearly everyone will die in nuclear war does not raise such disturbing
possibilities. (c) Exaggeration to stimulate action. When people concerned about
nuclear war describe the threat to others, in many cases this does not trigger any
action. An understandable response by the concerned people is to expand the
threat until action is triggered. This is valid procedure in many physiological and
other domains. If a person does not heed a call of 'Fire!', shouting louder may do the
trick. But in many instances of intellectual argument this procedure is not
appropriate. In the case of nuclear war it seems clear that the threat, even when
stated very conservatively, is already past the point of sufficient stimulation. This
means that what is needed is not an expansion of the threat but rather some
avenue which allows and encourages people to take action to challenge the threat.
A carefully thought out and planned strategy for challenging the war system, a
strategy which makes sense to uncommitted people and which can easily
accommodate their involvement, is one such avenue.[20] (d) Planning and
defeatism. People may identify thinking about and planning for an undesirable
future - namely the occurrence and aftermath of nuclear war - with accepting its inevitability (defeatism) or even actually wanting it. By exaggerating the effects of nuclear war and emphasising the worst possible case, there becomes no post-war
future at all to prepare for, and so this difficulty does not arise. The limitations of
this response are apparent in cases other than nuclear war. Surely it is not
defeatism to think about what will happen when a labour strike is broken, when a
social revolution is destroyed (as in Chile) or turns bad (as in the Soviet Union), or
when political events develop in an expected though unpleasant way (as Nazism in
the 1920s and 1930s). Since, I would argue, some sort of nuclear war is virtually
inevitable unless radical changes occur in industrialised societies, it is realism rather
than defeatism to think about and take account of the likely aftermath of nuclear
war. An effective way to deal with the feeling or charge of defeatism is to prepare
for the political aftermath of nuclear war in ways which reduce the likelihood of
nuclear war occurring in the first place. This can be done for example by developing
campaigns for social defence, peace conversion and community self-management
in ways which serve both as preparation to resist political repression in time of
nuclear crisis or war, and as positive steps to build alternatives now to war-linked
institutions.[21] (e) Exaggeration to justify concern (I). People involved with any
issue or activity tend to exaggerate its importance so as to justify and sustain their
concern and involvement. Nuclear war is only one problem among many pressing
problems in the world, which include starvation, poverty, exploitation, racial and
sexual inequality and repressive governments. By concentrating on peace issues,
one must by necessity give less attention to other pressing issues. An unconscious
tendency to exaggerate the effects of nuclear war has the effect of reducing
conscious or unconscious guilt at not doing more on other issues. Guilt of this sort
is undoubtedly common, especially among those who are active on social issues
and who become familiar with the wide range of social problems needing attention.
The irony is that those who feel guilt for this reason tend to be those who have least
cause to feel so. One politically effective way to overcome this guilt may be to
strengthen and expand links between anti-war struggles and struggles for justice,
equality and the like. (f) Exaggeration to justify concern (II). Spokespeople and
apologists for the military establishment tend to emphasise conservative estimates
of the effects of nuclear war. They also are primarily concerned with military and
economic 'survival' of society so as to confront further threats to the state. One
response to this orientation by people favouring non-military approaches to world
order and peace is to assume that the military-based estimates are too low, and
hence to exaggerate the effects and emphasise worst cases. The emotional
underpinning for this response seems to be something like this: 'if a militarist thinks
nuclear war will kill 100 million people and still wants more nuclear weapons, and
because I am totally opposed to nuclear war or plans for waging it, therefore
nuclear war surely would kill 500 million people or everyone on earth.' This sort of
unconscious reasoning confuses one's estimate of the size of a threat with one's
attitude towards it. A more tenable conclusion is that the value structures of the
militarist and the peace activist are sufficiently different to favour very different
courses of action when considering the same evidence. The assumption that a given
item of information will lead to a uniform emotional response or conclusion about its
implications is false. The primary factor underlying differences in response to the
threat of nuclear war is not differences in assessments of devastation, but political differences. The identification of the degree of opposition to nuclear war with the
degree of devastation envisaged may also lead to the labelling of those who make
moderate estimates of the danger as lukewarm opponents of nuclear war. In many
cases such an identification has some degree of validity: those with more awareness
of the extent of racism, sexism, exploitation and misery in the world are often the
ones who take the strongest action. But the connection is not invariable. Extremism
of belief and action does not automatically ensure accurate beliefs or effective
action. A recurrent problem is how to talk about nuclear war and wide scale
devastation without appearing - or being - hardhearted. Peace activists are quite
right to reject sterilised language and doublethink ('Peace is war') in discussions on
nuclear death and destruction, especially when the facade of objectivity masks
dangerous policies. But an exclusive reliance on highly emotional arguments, or an
unofficial contest to see who can paint the worst picture of nuclear doom, is
undesirable too, especially to the degree it subverts or paralyses critical thinking
and creative development of strategy. Another unconscious identification, related
to the identification of the level of opposition to nuclear war with the level of
destruction thought to be caused by it, arises out of people's abhorrence at
'thinking about the unthinkable', namely post-nuclear war planning by military and
strategic planners. This abhorrence easily becomes abhorrence at 'thinking about
the unthinkable' in another sense, namely thinking about nuclear war and its
aftermath from a peace activist point of view. The abhorrence, though, should be
directed at the morality and politics of the military and strategic planners, not at
thinking about the 'unthinkable' event itself. Many peace activists have accepted
the reality of nuclear war as 'unthinkable', leaving the likes of strategic planner
Herman Kahn with a virtual monopoly on thinking about nuclear war. So while post-nuclear war planning is seriously carried out by some military and government
bodies, the strategies of the peace movement are seriously hampered by the gap
created by self-imposed 'unthinkability'. (g) White, western orientation. Most of the
continuing large-scale suffering in the world - caused by poverty, starvation, disease
and torture - is borne by the poor, non-white peoples of the third world. A global
nuclear war might well kill fewer people than have died of starvation and hunger-related disease in the past 50 or 100 years.[22] Smaller nuclear wars would make
this sort of contrast greater.[23] Nuclear war is the one source of possible deaths of
millions of people that would affect mainly white, rich, western societies (China and
Japan are the prime possible exceptions). By comparison, the direct effect of global
nuclear war on nonwhite, poor, third world populations would be relatively small.

Framing
Detachment from crisis-driven politics spurs practical
resistance
Cuomo, 96 [Chris Cuomo 1996 - Professor of Philosophy and Women's Studies, and Director of the Institute for
Women's Studies at the Univerity of Georgia 1996 War Is Not Just an Event: Reflections on the Significance of
Everyday Violence Published in Hypatia 11.4 nb, pp. 30-46
(https://www.academia.edu/476274/War_is_not_just_an_event_Reflections_on_the_significance_of_everyday_violenc
e)]

Moving away from crisis-driven politics and ontologies concerning war and military violence
also enables consideration of relationships among seemingly disparate phenomena, and therefore can shape
more nuanced theoretical and practical forms of resistance. For example, investigating the ways in
which war is part of a presence allows consideration of the relationships among the events of war and the following:
how militarism is a foundational trope in the social and political imagination; how the pervasive presence and symbolism of soldiers/warriors/patriots shape meanings of gender; the ways in which threats of state-sponsored violence are a sometimes invisible/sometimes bold agent of racism, nationalism, and corporate interests; the fact that vast numbers of communities, cities, and nations are currently in the midst of excruciatingly violent circumstances. It also provides a lens for considering the
relationships among the various kinds of violence that get labeled "war." Given current American obsessions with
nationalism, guns, and militias, and growing hunger for the death penalty, prisons, and a more powerful police
state, one cannot underestimate the need for philosophical and political attention to connections among
phenomena like the "war on drugs," the "war on crime," and other state-funded militaristic campaigns. . I propose
that the constancy of militarism and its effects on social reality be reintroduced as a crucial locus of contemporary
feminist attentions, and that feminists emphasize how wars are eruptions and manifestations of omnipresent
militarism that is a product and tool of multiply oppressive, corporate, technocratic states.(2) Feminists should be
particularly interested in making this shift because it better allows consideration of the effects of war and militarism
on women, subjugated peoples, and environments. While giving attention to the constancy of militarism in
contemporary life we need not neglect the importance of addressing the specific qualities of direct, large-scale, declared military conflicts. But the dramatic nature of declared, large-scale conflicts should not obfuscate the ways in which military violence pervades most societies in increasingly technologically sophisticated ways and the significance of military institutions and everyday practices
in shaping reality. Philosophical discussions that focus only on the ethics of declaring and fighting wars miss these
connections, and also miss the ways in which even declared military conflicts are often experienced as omnipresent
horrors. These approaches also leave unquestioned tendencies to suspend or distort moral judgement in the face of
what appears to be the inevitability of war and militarism.

Worst-case risk assessment leads to policy failure, failed predictions, and makes risks more likely to occur
Schneier 10 Bruce Schneier is a security technologist, and author of "Beyond Fear:
Thinking Sensibly About Security in an Uncertain World." Worst-case thinking makes us
nuts, not safe May 12, 2010
http://www.cnn.com/2010/OPINION/05/12/schneier.worst.case.thinking/ *We do not endorse
ableist language
There's a certain blindness that comes from worst-case thinking. An extension of the precautionary principle, it
involves imagining the worst possible outcome and then acting as if it were a certainty. It substitutes imagination
for thinking, speculation for risk analysis and fear for reason. It fosters powerlessness and vulnerability and magnifies social paralysis. And it makes us more vulnerable to the effects of terrorism. Worst-case thinking means generally bad decision making for several reasons. First, it's only half of the cost-benefit equation. Every decision has costs and benefits, risks and rewards.

By speculating about what can possibly go wrong, and then acting as if that is likely to happen, worst-case thinking focuses only on the extreme but improbable risks and does a poor job at assessing outcomes. Second, it's based on flawed logic. It begs the question by assuming that
a proponent of an action must prove that the nightmare scenario is
impossible. Third, it can be used to support any position or its opposite. If we
build a nuclear power plant, it could melt down. If we don't build it, we will
run short of power and society will collapse into anarchy. If we allow flights near
Iceland's volcanic ash, planes will crash and people will die. If we don't, organs won't arrive in time for transplant
operations and people will die. If we don't invade Iraq, Saddam Hussein might use the nuclear weapons he might
have. If we do, we might destabilize the Middle East, leading to widespread violence and death. Of course, not all
fears are equal. Those that we tend to exaggerate are more easily justified by worst-case thinking. So terrorism
fears trump privacy fears, and almost everything else; technology is hard to understand and therefore scary;
nuclear weapons are worse than conventional weapons; our children need to be protected at all costs; and annihilating the planet is bad. Basically, any fear that would make a good movie plot is amenable to worst-case thinking. Fourth and finally, worst-case thinking validates ignorance. Instead of focusing on what we know, it focuses on what we don't know -- and what we can imagine. Remember Defense Secretary Donald Rumsfeld's quote? "Reports that say that something hasn't
happened are always interesting to me, because as we know, there are known knowns; there are things we know
we know. We also know there are known unknowns; that is to say we know there are some things we do not know.
But there are also unknown unknowns -- the ones we don't know we don't know." And this: "the absence of
evidence is not evidence of absence." Ignorance isn't a cause for doubt; when you can fill that ignorance with
imagination, it can be a call to action. Even worse, it can lead to hasty and dangerous acts. You can't wait for a smoking gun, so you act as if the gun is about to go off. Rather than making us safer, worst-case thinking has the potential to cause dangerous escalation. The new undercurrent in this is that our society no longer has the ability to calculate probabilities. Risk assessment is
devalued. Probabilistic thinking is repudiated in favor of "possibilistic thinking": Since we can't know what's likely to go wrong, let's speculate about what can possibly go wrong. Worst-case thinking leads to bad decisions, bad systems design, and bad security. And we all have direct experience with its effects: airline security and the TSA, which we make fun of when we're not appalled that they're harassing 93-year-old women or keeping first-graders off airplanes. You can't be too careful! Actually, you can. You can refuse to fly because of the possibility of plane crashes. You can lock your children in the house because of the possibility of child predators. You can eschew all contact with people because of the possibility of hurt. Steven Hawking wants to avoid trying to
communicate with aliens because they might be hostile; does he want to turn off all the planet's television
broadcasts because they're radiating into space? It isn't hard to parody worst-case thinking, and at its extreme it's a
psychological condition. Frank Furedi, a sociology professor at the University of Kent, writes: "Worst-case thinking
encourages society to adopt fear as one of the dominant principles around which the public, the government and
institutions should organize their life. It institutionalizes insecurity and fosters a mood of confusion and
powerlessness. Through popularizing the belief that worst cases are normal, it incites people to feel defenseless and
vulnerable to a wide range of future threats." Even worse, it plays directly into the hands of terrorists, creating a
population that is easily terrorized -- even by failed terrorist attacks like the Christmas Day underwear bomber and
the Times Square SUV bomber. When someone is proposing a change, the onus should be on them to justify it over
the status quo. But worst-case thinking is a way of looking at the world that exaggerates the rare and unusual and gives the rare much more credence than it deserves. It isn't really a principle; it's a cheap trick to justify what you already believe. It lets lazy or biased people make what seem to be cogent arguments without understanding the whole issue. And when people don't need to refute counterarguments, there's no point in listening to them.

Democracy/Anonymous Advantage

FRT k2 Anonymous Speech


FRT represents a serious threat to anonymous
speech/assembly.
Brown 14 Associate Professor of Law, University of Baltimore School
of Law. B.A., Cornell; J.D., University of Michigan (Kimberly N, ANONYMITY,
FACEPRINTS, AND THE CONSTITUTION, GEO. MASON L. REV. [VOL. 21:2, 2014,
http://georgemasonlawreview.org/wp-content/uploads/2014/03/Brown-Website.pdf)
NAR
With the exception of Doe, an important distinguishing feature of the First Amendment anonymity cases is the involvement of legislative attempts to coerce the disclosure of personal identities. FRT presents a particularly difficult problem under prevailing constitutional law because most faces are routinely exposed in public. No domestic law requires that a person's facial features be unobstructed while she maneuvers about in public places so that the government can use them for identification purposes. Her visage is there for the government's taking. Technology has thus become deterministic of personal privacy today. Yet there is no reciprocal power on the part of individuals to direct how technology will evolve in relationship to their privacy interests or even to opt out of its implications for their daily lives. The First Amendment anonymity cases and Fourth Amendment doctrine assume that a person possesses the discretion to take steps to protect communications or other effects from governmental intrusion - that is, by keeping personal information private. In the First Amendment context, the Court has upheld private individuals' ability to choose to keep their identities anonymous in some respect. Indeed, the fact that the McIntyre plaintiff simultaneously disclosed her identity in other pamphlets was irrelevant to the Court's analysis and ultimate conclusion that her choice to remain anonymous was protected by the First Amendment.430 In the Fourth Amendment arena, disclosure operates as a waiver of sorts, but the Court has taken pains to identify how the subject of police inquiry could have effectively invoked constitutional protections by keeping information private. In both contexts, the underlying assumption supporting the Court's analyses of the constitutional guarantee at issue is that citizens have a choice and - caveat emptor - if they choose public disclosure, the Constitution cannot save them from the consequences of that choice. Facebook's FRT features are active by default.431 It takes six clicks to reach a disclosure that Facebook uses FRT.432 Apple's iPhoto does not have an opt-out function at all.433 Currently, there are no laws requiring private entities to provide individuals with notice that they are collecting personal data using FRT, how long that data will be stored, whether and how it will be shared, or how it will be used.434 Other countries have regulations that give Internet users control over their own data.435 In the United States, however, private companies are free to sell, trade, and profit from individuals' biometric information. Private companies can also disclose individuals' data to government authorities without their consent.436 Fourth and First Amendment law is remarkably consistent in its deference to the subject's choice to remain anonymous or put information into the public domain. If people protect their privacy, the Constitution protects it too. In modern times, the problem with this tautology is that the concept of choice implies that there is more than one meaningful option. With FRT and other emerging technologies, there is no mechanism for opting out of the various sources that are amalgamated into what amounts to surveillance. The theory behind the Fourth Amendment doctrines that lift its protections
for information disclosed publicly or to third parties is thus unsustainable. Accordingly, the recognition of
anonymity as a constitutional value that warrants protection under the First and
Fourth Amendments may require numerous safeguards in place for forestalling
indiscriminate disclosure, as Justice Brennan suggested in Whalen. 437 In his words, whether
sophisticated storage and matching technology amount[s] to a deprivation of constitutionally protected privacy
interests might depend in part on congressional or regulatory protections put in place to forbid the governments

This will not be


easy. Choosing to opt out of Googles tracking technologies itself leaves a trace,
and technology exists to re-identify people whose personal identifiers, such as
name, address, credit card information, birth date, and social security number had
been removed from a dataset.439 But constitutional limits on the governments
ability to work around individuals attempts to protect their privacy would be an
important step toward rescuing the constitutional value of anonymity before FRT
and big data are used to do more than simply predict who may commit crimesi.e.,
to punish people for future acts.440 The writing is on the wall. One day soon, [y]our
use of big data for arbitrary monitoring of the populace without individuals consent.438

phone or in some years your glasses and, in a few more, your contact lenses will tell you the name of that
person at the party whose name you always forget . . . . Or it will tell the stalker in the bar the address where you

FRT is rapidly moving


society toward a world in which the Constitutions scope needs to be meaningfully
reformulated, else it risks irrelevance when it comes to individuals ability
to hide from the prying eyes of government. 442 The third party doctrine and the
live, or it will tell the police where you have been and where you are going.441

longstanding judicial rejection of a reasonable expectation of privacy in matters made public have depleted the
Fourth Amendment of vitality for purposes of establishing constitutional barriers to the governments use of FRT to
profile and monitor individual citizens.

Although the Court has expressly affirmed protections for anonymous speech under the First Amendment, that doctrine has not been extended to address the harms that flow from dragnet-style surveillance. Yet every member of the modern Court has at some point recognized that technology necessitates a rethinking of traditional constitutional boundaries. This Article argued that existing First Amendment protections for anonymity should be brought to bear in assessing how Fourth Amendment doctrine can adapt to the challenges of modern surveillance methods. Today, the conglomerate of publicly available data is colossal and constantly expanding. Technology enables the government and private companies to identify patterns within such data which reveal new information that does not exist anywhere in isolation. As a consequence, information in the digital age is fundamentally distinct from information in the pre-digital age, in which the Court's Fourth Amendment doctrine evolved. This Article thus identified constitutionally derived guidelines for courts and lawmakers to consider in crafting judicial, legislative, and regulatory responses to the government's newfound capacity to create new information from storehouses of data gleaned from social media sites, public cameras, and increasingly sophisticated technologies like FRT. By giving these guidelines serious consideration, courts and lawmakers can tether foundational constitutional protections against over-surveillance with the development of the law - law that is otherwise broken and outdated.

Face recognition technology is spreading and constitutes a fundamental speed bump for fourth amendment jurisprudence.
Fretty 11 Associate at Irell & Manella, LLP, in Los Angeles, California,
J.D from UCLA School of Law (Douglas A., Face-Recognition Surveillance: A
Moment of Truth for Fourth Amendment Rights in Public Places, Vol 16 No 3, Fall
2011, Virginia Journal of Law and Technology,
http://www.vjolt.net/vol16/issue3/v16i3_430-Fretty.pdf) NAR
Since the 2001 Super Bowl, when Tampa Bay installed face-recognizing cameras in its stadium to catch criminals attending the big game, Americans have been increasingly monitored with face-recognition technology (FRT). Though the technique remains crude, face-based surveillance is already used in airports and on city streets to detect fugitives, teenage runaways, criminal suspects, or anyone who was ever arrested. As it spreads, FRT will be an unusually fraught topic for courts to address, because it straddles so many fault lines currently lying beneath our Fourth Amendment jurisprudence. These include whether: (1) people enjoy a reasonable expectation of anonymity in public, (2) a seizure can occur without halting a person's movement, (3) long-term aggregation of data about individuals can constitute a search, and (4) the probable-cause standard tolerates generalized surveillance with a high rate of false positives. These fault lines are not minor questions but fundamental challenges of the digital-surveillance movement. While most courts to address these issues have erred toward diminished Fourth Amendment protection, this Article cites an emerging minority that would reclaim
basic privacy rights currently threatened by electronic monitoring in public.

The 4th amendment is allowed to be stretched for people to be surveilled, but believing that people should expect to be RECOGNIZED fundamentally hollows out the amendment.
Fretty 11 Associate at Irell & Manella, LLP, in Los Angeles, California,
J.D from UCLA School of Law (Douglas A., Face-Recognition Surveillance: A
Moment of Truth for Fourth Amendment Rights in Public Places, Vol 16 No 3, Fall
2011, Virginia Journal of Law and Technology,
http://www.vjolt.net/vol16/issue3/v16i3_430-Fretty.pdf) NAR
When people exit their homes, they risk being observed by others and thereby
forego any reasonable expectation of not being captured by surveillance , even if they
believe they are not observed by anyone.82 The case of Edward Kowalski illustrates the point.83 Mr. Kowalski
suffered a neck injury while working for the Pennsylvania State Police and, a few months after filing for workers
compensation, took a vacation to Florida.84 While at the beach with his wife, he was unknowingly videotaped for
days by a private investigator, hired by the State Police to verify Mr. Kowalskis medical condition.85 Though most
people would not expect or want to be surreptitiously recorded while sunbathing, Mr. Kowalski had no expectation of
privacy and therefore no Fourth Amendment claim against the State Police.86 This doctrine extends even to
secluded spaces such as the elevators and hallways of commercial buildings, where recessed cameras often record
goings-on. 87 Government agencies have a strong argument, then, that where people lack an expectation of not
being observed, they equally lack an expectation of not being recognized. Because one could unexpectedly be
recognized by a fellow pedestrian, so would go the argument, one cannot expect that FRT-equipped cameras will
not match ones face against a government photobase. This reasoning may strike some as strained, but it is the
analysis that the Supreme Court has applied to surveillance since 1986, when California v. Ciraolo and Dow Chemical Co. v. United States were decided on the same day.88 The cases presented similar facts. In Ciraolo, police officers flew an airplane 1,000 feet over a suspect's fenced-off property and observed a small marijuana field.89 In Dow Chemical, EPA agents photographed the company's property from varying altitudes with a precision aerial mapping camera.90 Because the evidence gathering in both cases occurred from public airspace, the Court reasoned,
any air traveler could have observed what the government agents did, had they bothered to look down.91 EPA's
reliance on a sophisticated camera did not amount to a search, said the Court,
because: (1) the camera was available for public use,92 and (2) the agents used the
camera only to augment their natural sensory abilities.93 The first fact matters because, if
aerial mapping cameras are available in commerce, Dow could not have expected its land to be immune from the
technology.94 The second fact reflects the Court's view that, as long as technology does not give police novel powers of perception - the ability to see through walls or hear private conversations95 - sensory-enhancing tools are not offensive to public expectations.96
Based on the example of Dow, police are able to enhance their noses with drugsniffing dogs97 and enhance their
eyes with telescopes and binoculars. 98 Police cannot, however, aim a heat-sensing camera at a suspects garage,
since this technique is uncomfortably analogous to looking through a wall into a private space.99 Still, as Justice
Powell admonished in his Dow dissent, the availability and sensory enhancement tests inevitably abrogate
public privacy as snooping technology becomes more pervasive.100 Linking surveillance cameras to FRT, then,
arguably only enhances the police's already-existing senses: many surveillance advocates posit that scanning a face with FRT is simply a highly efficient version of looking through a traditional mug shot book.101 Further support comes from cases where the police have sought to
subpoena a suspect's handwriting or voice sample without a warrant. Because a person's handwriting and speech are frequently made public, the Court upholds such subpoenas, even though the requested sample is for the unusual purpose of matching the suspect's writing or speech to that of a criminal.102 The pro-FRT interpretation is that, just as the government can demand a voice recording for matching purposes, so too can the government digitize a pedestrian's likeness for processing with a face-matching algorithm. As the Court stated in dictum in United States v. Dionisio, "No person can have a reasonable expectation that others will not know the sound of his voice, any more than he can reasonably expect that his face will be a mystery to the world."103 Though these cases were not decided in the surveillance context and so would not bind an FRT dispute, they foreshadow the Court's low-ebbing protection of facial privacy. Nevertheless, challengers to FRT should engage the Harlan standard head-on by demonstrating that Americans reasonably expect not to be identified in public by sophisticated algorithms. Indeed, the Court has at times cast itself as a bulwark against novel technology that takes away privacies we once took for granted.104 As evidence that people expect a degree of anonymity while moving in public, civil libertarians could point to the popular outcries that often accompany a city's installation of face-recognizing cameras.105 Public reaction to Tampa Bay's use of FRT at the Super Bowl was overwhelmingly negative;106 the subsequent installation of FRT cameras in Tampa's nightlife district prompted vociferous protests, effectively ending the city's FRT experiment two years later.107 Courts may respond that a person's outrage means nothing at the point at which surveillance technology meets the Dow test.
This argument, made by lower courts in other contexts, is that as long as people know a technology could
conceivably be used against them by strangers, the governments use of the technology is not a constitutional
issue.108 As articulated in one district opinion, The proper inquiry . . . is not what a random stranger would
actually or likely do [with surveillance technology], but rather what he feasibly could.109 Members of the public
could conceivably use an online FRT program such as Polar Rose to identify strangers on the street based on a
furtivelysnapped digital photo.110 Making such a scenario all the more plausible, Google is now building an
application that would locate a persons online Google Profile based on any photo of the persons face.111 Thus, like

pedestrians have relinquished their expectation


of facial-identity privacy. Against this mechanical reading, however, a small revolt
is stirring. In August 2010, the D.C. Circuit in United States v. Maynard held that police could not track
it or not, under a strict reading of the Dow line,

suspects via their cell phone records without a warrant.112 The holding was despite the governments truthful
argument that a cell phone company could easily track any subscribers movements by cataloguing the cell phone
towers that received the subscribers signal.113 Maynard reviewed the Courts important reasonable expectation
cases114 and concluded: In

considering whether something is exposed to the public . . .

we ask not what another person can physically and may lawfully do but rather what
a reasonable person expects another might actually do. 115 Were the D.C. Circuit to review
state-run FRT, the inquiry would then be whether D.C. pedestrians expect their fellow travelers to discover their
identities via FRT software. Three weeks after Maynard, a district court followed its result, emboldened by several
rulings in recent years that reclaim domains of personal privacy threatened by encroaching technology.116 Though

the Maynard reasoning is for now the minority view,117 it reflects a broadly felt
instinct to reclaim the reasonable expectation test as a guardian of Fourth
Amendment rights in public spaces.118 Face-recognition challenges offer the
potential to push Maynard further into the mainstream.

FRT violates the First Amendment and destroys our right to freedom of anonymous speech.
Brown 14 - Associate Professor of Law, University of Baltimore School of Law. B.A., Cornell; J.D., University of Michigan (Kimberly N., "ANONYMITY, FACEPRINTS, AND THE CONSTITUTION," Geo. Mason L. Rev., Vol. 21:2, 2014, http://georgemasonlawreview.org/wp-content/uploads/2014/03/Brown-Website.pdf) NAR
Separately, a line of First Amendment cases confirms that the privacy threat posed by technologies like FRT (the government's unfettered identification and monitoring of personal associations, speech, activities, and beliefs, for no justifiable purpose) is one of constitutional dimension. In fact, the Supreme Court has steadfastly protected anonymous speech.336 The Court's repeated pronouncements that the First Amendment337 safeguards the right of anonymous speech, that is, the right to distribute written materials without personal identification of the author, largely came about in response to government attempts to mandate disclosures in public writings.338 In Talley v. California,339 the Court struck down a Los Angeles ordinance restricting the distribution of a handbill "in any place under any circumstances, which does not have printed on the cover . . . the name and address of . . . [t]he person who printed, wrote, compiled or manufactured the same."340 Finding that the law infringed on freedom of expression, the Court observed that "[a]nonymous pamphlets, leaflets, brochures and even books have played an important role in the progress of mankind" by enabling persecuted groups to criticize oppressive practices and other matters of public importance, particularly where the alternative may be not speaking at all.341

The Talley Court342 relied on two cases that linked anonymous speech with the ability to freely associate in private. Both involved constitutional challenges343 to laws requiring members of the National Association for the Advancement of Colored People (NAACP) to furnish government officials with its member lists. In NAACP v. Alabama ex rel. Patterson,344 the lower court imposed a $100,000 civil contempt fine after the organization refused to comply with a court order requiring production of its lists.345 The Supreme Court lifted the judgment and fine, holding that "immunity from state scrutiny of membership lists . . . is here so related to the right of the members to pursue their lawful private interests privately" as to be constitutionally protected on privacy and free association grounds.346 Although association is not listed among the First Amendment's enumerated freedoms, the Court declared in Talley that freedom to engage in association for the advancement of beliefs and ideas is an inseparable aspect of liberty.347 In Bates v. City of Little Rock,348 the NAACP's records custodian was tried, convicted, and fined for refusing to comply with state ordinances requiring that membership lists be public and subject to the inspection of any interested party at all reasonable business hours.349 The organization claimed a right on the part of its members to participate in NAACP activities anonymously and free from any restraints or interference from city or state officials, a right that it felt "has been recognized as the basic right of every American citizen since the founding of this country."350 The Supreme Court again struck down the ordinances, asserting that the freedom of speech, a free press, freedom of association, and a right to peaceably assemble are protected from being stifled by such "subtle governmental influence" as a requirement to divulge membership lists.351

Over four decades later, the Court in McIntyre v. Ohio Elections Commission characterized anonymous speech as important to the preservation of personal privacy.352 In McIntyre, the plaintiff distributed leaflets opposing a school superintendent's referendum which were anonymously attributed to "CONCERNED PARENTS AND TAX PAYERS."353 The Ohio Election Commission fined the plaintiff for violating state laws banning the distribution of unsigned leaflets.354 The U.S. Supreme Court reversed a lower court ruling upholding the ordinance, explaining that "an author's decision to remain anonymous . . . is an aspect of the freedom of speech protected by the First Amendment," even if "[t]he decision in favor of anonymity [is] motivated . . . merely by a desire to preserve as much of one's privacy as possible."355 The Court extolled the virtues of anonymity as fostering "[g]reat works of literature . . . under assumed names," enabling groups to criticize the government without the threat of persecution, and "provid[ing] a way for a writer who may be personally unpopular to ensure that readers will not prejudge her message simply because they do not like its proponent."356 As core political speech, it concluded, "[n]o form of speech is entitled to greater constitutional protection."357 Justice Stevens went on in his majority opinion to tether anonymity to the purpose behind the Bill of Rights and the First Amendment: to protect unpopular individuals from retaliation, and their ideas from suppression, at the hand of an intolerant society.358 Anonymity, he explained, is "a shield from the tyranny of the majority."359 In a concurring opinion, Justice Thomas commented that the Founders' practices and beliefs on the subject "indicate[] that they believed the freedom of the press to include the right to author anonymous political articles and pamphlets."360 That most other Americans shared this understanding, he added, is reflected in the Federalists' hasty retreat before the withering criticism of their assault on the liberty of the press.361 Justice Scalia dissented, arguing that anonymity facilitates wrong by eliminating accountability, which is ordinarily its very purpose.362 To treat all anonymous communication "in our society [as] traditionally sacrosanct," he continued, "seems to me a distortion of the past that will lead to a coarsening of the future."363

In Watchtower Bible & Tract Society of New York, Inc. v. Village of Stratton,364 the Court struck down an ordinance requiring permits for door-to-door canvassing as a prior restraint on speech but also because the law vitiated the possibility of anonymous speech.365 It characterized the permit requirement as "result[ing] in a surrender of . . . anonymity" even where circulators revealed their physical identities, because strangers to the resident certainly maintain their anonymity.366 The Court was thus unmoved by the fact that speakers who ring doorbells necessarily make themselves physically known to their audience, thus revealing themselves to some extent. For the Court, it was the recognition that occurs when a name on a permit is connected to a face which triggered the Constitution's protection of anonymity.

Most recently, a fractured plurality in Doe v. Reed367 upheld a state law compelling public disclosure of the identities of referendum petition signatories while squarely acknowledging the vitality of a First Amendment right to anonymous speech.368 Significantly, all but one Justice recognized that the government's ability to correlate identifying information with online data created a First Amendment hazard of unprecedented dimension. Writing for the majority, Chief Justice Roberts found that an individual's expression of a political view through a signature on a referendum petition implicated a First Amendment right.369 The Court nonetheless held that the state's interest in preserving the integrity of the electoral process and informing the public about who supports a petition justified the burdens of compelled disclosure.370 Justice Roberts made a point of deeming significant the plaintiffs' argument that, once on the Internet, their names and addresses could be matched with other publicly available information about them in what will effectively become "a blueprint for harassment and intimidation."371 Because the majority only considered the facial challenge to the law, Justice Roberts found the burdens imposed by typical referendum petitions unlike those that the plaintiffs feared.372 Justice Alito wrote separately to emphasize that government access to personal data online gave rise to a strong as-applied challenge based on the individual right to privacy of belief and association.373 He considered "breathtaking" the implications of the state's argument that it has an interest in providing information to the public about supporters of a referendum petition; if true, the State would be free to require petition signers to disclose all kinds of demographic information, including the signer's race, religion, political affiliation, sexual orientation, ethnic background, and interest-group memberships.374 Justice Alito added that the posting of names and addresses online could allow "anyone with access to a computer [to] compile a wealth of information about" all of those persons, with vast potential for use in harassment.375 Justice Thomas dissented on similar grounds, asserting that he would sustain a facial challenge precisely because "[t]he advent of the Internet" enables rapid dissemination of the information needed to threaten or harass every referendum signer, thus "chill[ing]" protected First Amendment activity.376 Concurring separately, Justice Scalia stood alone in his complete rejection of First Amendment protections for anonymous speech.377

When considered in conjunction with the digital-age Fourth Amendment cases, Doe is remarkable in its recognition of the pressures that modern technology puts on the viability of existing constitutional doctrine relating to individual privacy. Although Jones addressed GPS monitoring under the Fourth Amendment, Justice Sotomayor invoked the First Amendment to emphasize that "[a]wareness that the Government may be watching chills associational and expressive freedoms," and that the Government's unrestrained power to assemble data that reveal private aspects of identity "is susceptible to abuse."378 When inexpensive technology is paired with massive amounts of readily accessible personal information and unfettered government discretion to track individual citizens, she explained, democracy itself suffers.379 Although pre-digital-age Fourth Amendment case law appears to paint FRT surveillance into a doctrinal corner, in the right case the Supreme Court may well find constitutional limits on surveillance conducted with cutting-edge technology like FRT and publicly available data. The next Part offers guidelines derived from the Court's Fourth and First Amendment jurisprudence which courts and legislators should bear in mind in crafting legal limits on surveillance through technologies like FRT.380

Mechanism
The plan represents an enshrined attempt to protect anonymous assembly and communication.
Nguyen 2 - J.D. Candidate, Yale Law School, 2003 (Alexander T., "Here's Looking At You, Kid: Has Face-Recognition Technology Completely Outflanked the Fourth Amendment?," Virginia Journal of Law and Technology, University of Virginia, Spring 2002, 7 VA. J.L. & TECH. 2, http://www.vjolt.net/vol7/issue1/v7i1_a02-Nguyen.PDF) NAR
45. These potential Fourth Amendment-based arguments against FaceIt are mostly based on dicta or extrapolation, and therefore offer very weak opposition to technology such as FaceIt. Even though the arguments are intellectually interesting, to contend that the Fourth Amendment would prohibit the use of technology such as FaceIt is simply to fight a quixotic battle, and it might take too long for courts to reformulate a new conceptualization of the Fourth Amendment to protect citizens against FaceIt. Instead, one must realize that the expectation of privacy has crumbled with the onslaught of technology, and it might be time to turn to another potential, and more immediately available, source of opposition to FaceIt technology. That source is anonymity.

46. If technology has eroded the expectation of privacy, one could argue that courts have consistently upheld what might be termed the expectation of anonymity. The definition of privacy is almost certainly too broad in order to meaningfully protect individuals against FaceIt. Privacy has become as nebulous a concept as "happiness" or "security."162 To simply say that FaceIt violates privacy by infringing on "the right to be left alone,"163 for example, is not useful because in the FaceIt case, the people being scanned are technically being left alone. The great simplicity of this definition gives it rhetorical force and attractiveness, but also denies it the distinctiveness that is necessary for the phrase to be useful in more than a conclusory sense.164 As a spokesman for the Tampa Police department stated after the use of video surveillance at the Super Bowl: "There is no expectation of privacy in a crowd of 100,000 people."165 Such a definition of privacy exempts biometric surveillance because proponents can simply claim that such technology leaves citizens alone while ignoring the argument that privacy claims also have to do with, for example, an individual's reluctance to have a file in a database or to have his or her face scanned unknowingly. Anonymity is a much narrower conception of the value at stake insofar as biometric technology is concerned. While there may be no expectation of privacy in a crowd, there may be an expectation of anonymity in such a space.166 Because this technology is primarily concerned with identification rather than searches, anonymity is a value that is tailored much more narrowly and is therefore better equipped to deal with biometric surveillance.

47. Privacy is closely allied with anonymity. "We may commute for years (same train, same compartment, same fellow-travelers) and yet the man to whom we reveal our hopes, our opinions, our beliefs, our business and domestic joys and crises remains 'The chap who gets on at Dorking with The Times and a pipe; I don't know who he is.' And he does not know who we are, because we have never exchanged names, and thus the necessary communication and release of our private concerns is accomplished without violation of our privacy. In our anonymity is our security."167 But the value of anonymity is its role as buffer to privacy intrusions. In other words, "we will tolerate considerable intrusion, and even volunteer supererogatory circumstantial detail of our lives, if our anonymity is preserved."168

48. The strength of using anonymity rather than expectations of privacy to oppose FaceIt lies in the fact that courts have generally protected anonymity in public spaces whereas they have in general held that there is no expectation of privacy in public places. This is because anonymity has implications for the First Amendment and has strong political dimensions, from the earliest beginnings of the country. The Federalist papers of Alexander Hamilton, James Madison and John Jay were published anonymously, under the pen name of Publius.169 Over the years, at least six presidents, fifteen cabinet members, and thirty-four congressmen published anonymous political writings.170 In McIntyre v. Ohio Elections Comm'n, the court
indicated in striking down an ordinance requiring that political pamphlets bear the name of the author that: "Under our Constitution, anonymous pamphleteering is not a pernicious, fraudulent practice, but an honorable tradition of advocacy and of dissent. Anonymity is a shield from the tyranny of the majority [citing J. Mill, On Liberty]. It thus exemplifies the purpose behind the Bill of Rights, and of the First Amendment in particular: to protect unpopular individuals from retaliation, and their ideas from suppression, at the hand of an intolerant society. The right to remain anonymous may be abused when it shields fraudulent conduct. But political speech by its nature will sometimes have unpalatable consequences, and, in general, our society accords greater weight to the value of free speech than to the dangers of its misuse."171

49. In Thomas v. Collins,172 the Court held that the president of the United Auto Workers did not have to register as a labor organizer with the Secretary of State in Texas before being able to identify himself as such on business cards and solicit new members. Although the ambiguities in the Thomas opinion leave its scope in doubt, it may be read as a recognition of a right of anonymity.173 The Court has also upheld the refusal of individuals to disclose the names of individuals who had bought defendant's book,174 the refusal of party officials to divulge the names of other members of the Progressive Party,175 and the refusal of a witness to reveal to the House Committee on Un-American Activities if other individuals had participated in the Communist Party.176 The right to anonymity was even more firmly expounded on in NAACP v. Alabama ex rel. Patterson,177 in which the Supreme Court upheld the refusal of the NAACP to disclose its membership lists because to do so would be a violation of the associational privacy implied by the First Amendment. And in Shelton v. Tucker,178 the Court struck down a statute requiring teachers to list their group affiliations on an annual basis. Despite this line of cases, the scope of anonymity has not really been specified.179 This term, the Supreme Court will hear Watchtower Bible & Tract Soc. of New York, Inc. v. Village of Stratton,180 in which Jehovah's Witnesses are challenging the constitutionality of an ordinance that requires door-to-door proselytizers to register first.

50. Courts have further upheld anonymity in another prominent public forum: the Internet.181 Various scholars have decried the fact that cookies and other technology are eroding anonymity on the Internet.182 Individuals and organizations have argued, and courts have agreed, that there is a strong interest in being anonymous on the Internet because in the discussion of sensitive topics, they would like to avoid ostracism or embarrassment.183 In some cases,
scholars have even argued, anonymity might even change race relations.184

51. Internet anonymity is easy to come by, unlike anonymity off-line. Instead of having to go outside to find a payphone and making a call using a disguised voice, now users could simply find a re-mailer service that would ensure anonymity. "One of the most valuable democratic aspects of the Internet is its capability for anonymous communication."185 Thus, it is evident that anonymity is a fundamental right that courts have in general been very aggressive in protecting, and it is this right that might offer a foundation for constitutional protection against FaceIt.

B. A PER SE RIGHT? ANONYMITY DECOUPLES FROM SPEECH

52. From these cases it would appear that a speech nexus is always required and that a right of anonymity only exists insofar as it has consequences for speech. But in fact, the Court has recognized a right to anonymity that is broader than simply political anonymity. In other words, even though the speech nexus will make a court's protection of anonymity more likely, such a nexus is not necessary in order to be protected by anonymity. Thus, courts have in general upheld juror anonymity,186 anonymity with respect to the abortion decision of a minor,187 the anonymity of a rape victim in newspaper articles,188 the anonymity of a pregnant student in a student newspaper,189 and the right to proceed anonymously in a court action, even though a court is a public forum.190 The interests protected by anonymity vary widely. "Anonymity is praised as a necessary component of free society on one hand, but condemned as a vehicle for nefarious activity on the other."191 Still, the right to anonymity is a quasi-right that is protected in some instances but not in others.192 Under this broader conception of anonymity, one might argue that FaceIt violates a per se right to anonymity because the program allows citizens in public places to be identified indiscriminately.

53. Another way to

reformulate the value of anonymity is to argue that it encompasses a broader range of non-speech activities that nevertheless implicate speech. Under this conception, activities that are formative of identity (such as attending certain meetings, going into certain stores, viewing certain movies, and so on) are part of speech. Similarly, activities that help an individual formulate his or her thoughts, such as reading, are also closely tied to speech. These activities therefore should also be granted anonymity. Julie Cohen therefore argues for a right to read anonymously, because the activity of reading is as intimate and prior to the activity of speaking.193 Logically, that zone of protection "should encompass the entire series of intellectual transactions through which they formed the opinions they ultimately chose to express. Any less protection would chill inquiry, and as a result, public discourse, concerning politically and socially controversial issues[]."194 One could argue that there is a right to be anonymous in public as well, as it is expressive conduct. Attending a Green Party meeting or a Catholic mass requires walking in public and would almost certainly qualify as political and expressive conduct to which there might be a right to anonymity. But what about attending a New York Giants game, where surely the expression implied is one's support for one of the teams, or a Yo-Yo Ma concert? What about walking into the local McDonald's? No matter how trivial or incidental the expressive conduct, one could still argue that such activities have expressive value and should therefore be protected. The case for protection of anonymity is further bolstered by the fact that individuals appearing in public often do not have the option of hiding
their faces under a mask, for instance. Court authority has been divided over whether or not ordinances prohibiting
masks violate the First Amendment.195 Usually, courts have held, however, that unless the masks themselves
constituted symbolic speech (such as a KKK hood), ordinances preventing the wearing of masks that just hide
identity are constitutional.196 Once FaceIt is a common occurrence, ordinary citizens ought to have the right to
protect their anonymity as well, either by wearing masks or by taking down the cameras. In any case, it is
anonymity that might offer a vindication of rights against the privacy invasion that FaceIt itself carries.

VIII. CONCLUSION

54. Current biometric research has reached a point at which identification of human beings can take place from a distance, without touching individuals or stopping them on the street. Research currently includes efforts to identify humans simply from the way they walk, or their gait.197
Because of the rise of national databases that keep track of financial, health, sexual, consumer and other types of
information, the danger to personal privacy comes from being able to link an individual to all these sources of
information. FaceIt is different from other sense-enhancing technologies because it is almost used exclusively in a
public sphere where it is able to sidestep the Fourth Amendment protection of the reasonable expectation privacy.
55. Under the current jurisprudence, it is not likely that the Fourth Amendment will protect citizens from
technologies such as FaceIt. This article has argued that a re-conceptualization of the Fourth Amendment not so
much as protecting individuals but more as demanding accountability and justification from the government as it
does its searches might be necessary. Under this conception, a de minimis individualized suspicion would be
required before FaceIt can scan an individual's face; this requirement might then essentially prohibit its indiscriminate use in public spheres. Anonymity might be another alternative because of its nexus to the public sphere and because FaceIt is a tool that is used to identify rather than to pry, so anonymity is a better fit than privacy.

56. With the rise of each new technology, courts strive to carry forward established constitutional principles into the new context.198 At the same time, however, the new technology strains the old principles and often requires a new approach that has consequences for older technologies as well as the newer ones.199 FaceIt is such a technology. It differs from the other technologies since the Katz decision in that it is used almost exclusively in public, but prominently so, and is able to identify even innocent parties from a distance without intrusion. The technology is by far the most Orwellian, but at the same time the Fourth Amendment's expectation of privacy, which traditionally has, with some difficulty, ruled out some technology, is helpless against FaceIt. Only through a new conception of the Fourth Amendment that stresses the government's accountability and justification in conducting searches, or a First Amendment protection of anonymity, could this dystopia be postponed or even eliminated.

Card
The precedent from the plan protects anonymous speech and assembly in elections, which is increasingly under threat.
*That's also key to global democracy.

Dranias 14 - President & Executive Director of Compact for America Educational Foundation, Heartland Institute Expert, and former General Counsel & Constitutional Policy Director at the Goldwater Institute (Nick, April 20th 2015, https://www.heartland.org/policy-documents/defense-private-civicengagement-why-assault-dark-money-threatens-free-speech-and-h) NAR
Since at least 1590, anonymous speech has been the refuge of dissidents and patriots resisting oppression and tyranny in the Anglo-American tradition.5 In 17th century Britain, anonymous and pseudonymous speech was a common means used by publishers and authors to avoid the forced disclosure the licensing of printing presses required.6 Decades later, the pseudonyms adopted by the Framers and their political opponents before and after ratification of the Constitution ensured that the merits of their arguments stayed front and center.7 This was important not only to prevent ad hominem discounting of the opinions expressed, but also because regional jealousies would have prevented Virginians like James Madison and New Yorkers like Alexander Hamilton from being persuasive in other regions.8 An historical review conducted in 1919 revealed that between 1789 and 1809 no fewer than six presidents, fifteen cabinet members, twenty senators, and thirty-four congressmen published political writings either unsigned or under pen names.9 More than 100 years later, private and anonymous association protected not only members of the Socialist Party during the Cold War,10 but also the NAACP during the fight against segregation.11 The Supreme Court did not back down from affording such protection.12 Even in cases of alleged defamation, contemporary courts have developed and enforced a variety of balancing tests of varying stringency to protect anonymous speakers.13 Anonymous sourcing in the media has a long and storied history.

But in electoral politics, the protection of anonymity and pseudonyms for speakers, donors and their associates is on the verge of disappearing. Citizens United appeared to bless14 the Federal Election Commission's broadcast disclosure and disclaimer mandates after applying a level of judicial review (lawyers might call it intermediate scrutiny) lower than what the majority applied when it struck down the restrictions on independent speech for which the case has become known.15 During the same term, the Court upheld mandatory disclosure of the identities of individuals who sign a ballot measure petition in Doe v. Reed.16 Private speech and association is also under increasing assault in the wider policy world, with calls for publicity mandates to force disclosure of donors to traditional center-right and center-left think tanks.17 A federal case has already been filed by a blogger challenging laws that forced him to disclose his identity, and the blogger lost his case in the first round.18 Indeed, courts have largely sustained such publicity mandates.19 This is despite the cross-ideological majority opinion of Justice John Paul Stevens and powerful concurring opinion of Justice Clarence Thomas in McIntyre v. Ohio Election Comm'n, in which the Court shielded an opponent of a proposed ballot measure tax levy from being forced to disclose her identity under the First Amendment.20 As previously recognized in McIntyre, private civic engagement serves a critically important purpose in keeping the marketplace of ideas focused on the message, not the messenger.21 It also protects the messenger from retaliation when she speaks truth to power. More than 30 years before McIntyre, the Supreme Court noted in Talley v. California that "[p]ersecuted groups and sects from time to time throughout history have been able to criticize oppressive practices and laws either anonymously or not at all."22 There was a time when that recognition called into question all publicity mandates and bans on anonymous speech.23 Citizens United and Doe v. Reed, however, have clearly limited the reach of McIntyre. And the Court's lax application of intermediate scrutiny also put considerable distance between its analysis and that in Buckley v. Valeo (the foundational case of modern campaign finance law), which sustained disclosure requirements as the least restrictive means of curbing the evils of campaign ignorance and corruption.24 As a result, people engaged in politics and political issues face being thrust into the spotlight, which in today's polarized political environment encourages retaliation, deters civic engagement, and thereby enables those already in the incumbent political class to consolidate their power. To prevent the resulting ossification of existing power structures and to protect individual liberty, this paper seeks to point the way back to our nation's heritage of private civic engagement.

A.S. k2 Democracy
Anonymous speech key to democracy – McCabe, 14.
[Katherine; JD Candidate at Fordham University School of Law; Founding Era Free
Speech Theory: Applying Traditional Speech Protection to the Regulation of
Anonymous Cyberspeech; Fordham Intellectual Property, Media & Entertainment
Law Journal; Spring 2014; 24 Fordham Intell. Prop. Media & Ent. L.J. 823]
The First Amendment has protected anonymous speech since the Founding Era. Historically, freedom of speech has been justified for three main reasons: advancing knowledge and truth in the marketplace of ideas; facilitating representative democracy and self-government; and promoting individual autonomy, self-expression and self-fulfillment. n11 Anonymous speech has been held to have inherent value and is thus protected by the First [*827] Amendment. n12 From an originalist perspective, the mere fact that anonymous speech was allowed and protected during the Founding Era is enough to justify the protection of anonymous speech today. However, there are prudential concerns to this argument, as the goal was arguably not to protect vicious anonymous hate speech and harassment.

Forced disclosure undermines democracy – anonymous free speech key.
Arizona Capitol Times, 14. 6/27/14. "Value of anonymous speech too often ignored."
http://azcapitoltimes.com/news/2014/06/27/value-of-anonymous-speech-too-often-ignored/
Virtually all commentators are critical of an organization engaging in political activities without publicly identifying the organization's donors. The entire vocabulary of the discussion (e.g., "dark money," "secretive practices," and "shadowy organizations") manifests this perspective. I dissent from the majority view. As I see it, the benefits of disclosure are overstated, and the value of anonymous speech is too often ignored or minimized. This is significant because federal and state regulators are regularly reviewing policy proposals in this area of law, and our policy debate should reflect the realities of the situation and not an unthoughtful idealization of disclosure. Disclosure Is Overrated. In thinking about the (over)value of disclosure, five points are noteworthy: 1. Disclosure chills political speech. Academic research has indicated that mandatory disclosure deters political contributions and expressive association. Most readers of this newspaper probably know this from personal experience, having refused to contribute to candidates, or given less to candidates, to avoid being listed in campaign finance reports. 2. The ultimate goal of disclosure rules – less corruption – is still only a theoretical benefit. Despite decades of experience, research has not established a significant link between disclosure rules and reductions in political corruption. 3. Disclosure rules impose a tremendous burden on political campaigns and staffers. The amount of time, stress, and money poured into campaign finance reports is disproportionate to their modest benefits. And in an increasing number of cases, disclosure rules result in criminal prosecutions and incarceration – something that should be very sobering to First Amendment enthusiasts. 4. The information otherwise available to voters is more significant than the information provided by mandatory disclosure laws. Scholarly research shows that, after taking into account the other information available to voters, the information contained in campaign finance reports adds little value. 5. Even if we assume disclosure is generally important, our current disclosure rules are not sensible. Someone who donates $25.01 to a campaign must be publicly identified (and presents virtually no risk of corruption) in Arizona campaign finance reports. Meanwhile, someone who bundles $25,000 or hosts large fundraisers (and presents a significant risk of corruption) need not be publicly identified. That makes no sense. If we're going to have a system of mandatory disclosures, this is the wrong way to do it. Anonymity Has Value. While disclosure is overrated, the benefits of anonymity are almost always overlooked. Three issues are particularly worthy of discussion: 1. Many individuals, organizations, and causes have a legitimate need for anonymity. Federal courts have long recognized that certain First Amendment activities will not occur without an option for anonymous participation, and in our election law practice, we regularly encounter donors with bona fide concerns justifying anonymity. Individuals critical of powerful government officials, for example, are unlikely to engage in extensive expressive activities unless their participation remains confidential. In this sense, disclosure rules systematically favor the establishment and disadvantage dissenters by deterring the public from speaking truth to power and opposing inept and corrupt government officials. 2. As a practical matter, there is little risk that vehicles for anonymous speech will, in the near future, be overused. Creating and operating a politically active 501(c)(4) organization, for example, is expensive, legally perilous, and raises suspicions. Because the existing disclosure rules make anonymity costly and inconvenient, the use of vehicles for anonymous speech is naturally limited to individuals and organizations with a non-trivial interest in anonymity (such as a credible fear of retaliation by incumbents or electoral favorites). 3. Anonymity can clarify, rather than obscure, a message. Research shows that viewers become skeptical of political arguments based solely on the speaker's identity. While the significance of this fact is disputed – some treat this as a reason to support disclosure rules – I tend to believe that the persuasiveness of an argument should be assessed without regard for the speaker's identity. People we dislike, and people with a financial interest in a political issue, are sometimes right – and by distancing a funder from his or her message, the public is more likely to engage on the merits of an argument rather than fixate on the speaker's personality or motives. None of this means that disclosure should be abandoned wholesale – but these factors support a much more nuanced view of political disclosures. They recommend, specifically, reconsidering the categories of information political organizations are required to disclose, and a recognition that anonymous political speech plays an important role in the democratic process.

Even if anonymous speech has flaws, still comparatively better for democracy.
Barr and Klein '14
[Benjamin, Counsel to the Wyoming Liberty Group; and Stephen R., Staff Attorney
with Wyoming Liberty Group; PUBLIUS WAS NOT A PAC: RECONCILING ANONYMOUS
POLITICAL SPEECH, THE FIRST AMENDMENT, AND CAMPAIGN FINANCE DISCLOSURE;
Wyoming Law Review; 2014; 14 Wyo. L. Rev. 253]
Before discussing the burdens of campaign finance disclosure on political speech, it is important to establish the relevance of anonymous political speech. Even when disclosure laws are simple enough for the average citizen to understand, they foreclose most avenues of anonymity. Simply, this is because these laws require political speech to include disclaimers that identify the speaker [*256] and for certain organizations to report the names of their contributors to the government. n19 Unlike political corruption, anonymity is not an evil to be cured. In fact, considering the role of anonymous political speech in American history, its benefits to individual speakers and political discourse at large far outweigh its negative effects. This article identifies three liberty interests in anonymity to secure: preventing prejudice, keeping the message central, and preventing retaliation from those in power. This section discusses prominent historical examples of anonymous political speech and describes various legitimate reasons why Americans have elected to voice their political opinions anonymously.

Anonymous speech key to democracy.
Washington Post, 15. 1/25/15. "The benefits of anonymous political speech."
http://www.washingtonpost.com/opinions/the-benefits-of-anonymous-political-speech/2015/01/25/092abefe-a326-11e4-91fc-7dff95a14458_story.html
The Jan. 21 editorial "Undisclosed consequences" repeated the tired notion that disclosure in political speech is an unmitigated good. In doing so, it underestimated the ability of Americans to tolerate and benefit from anonymous speech. Anonymous speech has a rich history. "Publius" of the Federalist Papers and Thomas Paine's Common Sense played crucial roles in shaping this nation, and they might not have been able to if forced to reveal their identities. Indeed, anonymous speech allows for a debate to focus on ideas rather than personal attacks on the speaker. No one doubts that Americans are capable of considering anonymous speech in all other realms of life. There is no reason to assume we are less capable when it comes to elections. Doing so is insulting to voters and unfair to speakers, who have every right to convey messages as they see fit.

Social Movements Adv


Full-scale facial recognition discourages people from freely associating, engaging in politics, and challenging the government.
Lynch, 12. Jennifer, Attorney for the Electronic Frontier Foundation, July 18. "What Facial Recognition Technology Means for Privacy and Civil Liberties." Presentation to the Senate Committee on the Judiciary. JJZ
https://www.eff.org/files/filenode/jenniferlynch_eff-senate-testimony-face_recognition.pdf
Face recognition implicates important Constitutional values, including privacy, free speech and association, and the right to be free from unlawful searches and seizures. If the government starts regularly collecting and indexing public photographs or obtains similar data from private companies, this would have a chilling effect on Americans' willingness to engage in public debate and to associate with others whose values, religion or political views may be considered questionable. And yet the fact that face images can be captured without a detention and in public, or may be uploaded voluntarily to a third party such as Facebook, or may be collected and stored by private security firms and data aggregators, presents significant challenges in applying Constitutional protections. The Fourth Amendment's prohibition of unreasonable searches and seizures presents a baseline protection for governmental biometrics collection in the United States.82 Although there are significant exceptions to Fourth Amendment protections that may make it difficult to map to biometric collection such as facial recognition,83 a recent Supreme Court case, U.S. v. Jones,84 and a few other cases85 show that courts are concerned about mass collection of identifying information – even collection of information revealed to the public or a third party – and are trying to identify solutions. Cases like Jones suggest support for the premise that although we may tacitly consent to someone noticing our face or our movements when we walk around in public, it is unreasonable to assume that consent extends to our data being collected and retained in a database, to be subject to repeated searches for the rest of our lives. This is buttressed by important privacy research showing that even though people voluntarily share a significant amount of information about themselves with others online, they still consider much of this information to be private, in that they don't expect it to be shared outside of the networks they designate.86 In United States v. Jones,87 nine justices held that a GPS device planted on a car without a warrant and used to track a suspect's movements constantly for 28 days violated the Fourth Amendment. For five of the justices, a person's expectation of privacy in not having his movements tracked constantly – even in public – was an important factor in determining the outcome of the case.88 Justice Sotomayor would have gone even further, questioning the continued validity of the third-party doctrine (holding that people lack a reasonable expectation of privacy in data such as bank records that they share with a third party such as the bank).89 She also recognized that "[a]wareness that the Government may be watching chills associational and expressive freedoms. And the Government's unrestrained power to assemble data that reveal private aspects of identity is susceptible to abuse."90 She questioned whether people reasonably expect that their movements will be recorded and aggregated in a manner that enables the Government to ascertain, more or less at will, their political and religious beliefs, sexual habits, and so on.91 The fact that several members of the Court were willing to reexamine the "reasonable expectation of privacy" test92 in light of newly intrusive technology could prove important for future legal challenges to biometrics collection. And some of the questions posed by the justices, both during oral argument and in their various opinions, could be used as models for establishing greater protections for data like facial recognition that is both shared with a third party such as Facebook and gathered in public.93

This legacy of repression through surveillance continues today – the use of facial recognition technology in Baltimore proves that the state makes a coordinated effort to repress activism.
Gaist in 15 <Thomas. "FBI spy planes used in police-military operation against Baltimore protests." May 7, 2015. http://www.wsws.org/en/articles/2015/05/07/balt-m07.html>
Two small planes flew carefully planned routes over crowds assembled in West Baltimore during the mass demonstrations against the police murder, according to flight records from a third party called Flightradar24, cited by the Post. An unnamed source said the planes were equipped with infrared technology designed to track the movements of individual human beings on the ground. The Baltimore Police Department deferred all inquiries on the matter to the FBI, which has refused to comment. Thus far, the US government has rejected demands by the ACLU and other civil liberties groups for an explanation of the domestic spying operation. The extended aerial surveillance operation only came to light as a result of personal investigations by citizens who noticed unusual airplane activity overhead. It remains unclear precisely what other forms of technology outside of infrared surveillance were onboard the aircraft. One could assume, however, that the monitoring of crowd movements was used to help coordinate the military response against protesters, much like satellite, drone and other airborne technologies do for battlefield operations in Afghanistan or Iraq. Known technologies used by the government include mass cell phone data capture, high-resolution photography and facial recognition. An unnamed US official who spoke with the Post confirmed the use of federal surveillance assets in support of the military-police mobilization in Baltimore. The FBI deployed the planes to bolster the surveillance assets deployed by the city's own police agencies, the Post source said. Government aviation watchdogs had previously tracked one of the planes used in the operation while it was making unexplained patrols around Langley, Virginia, the headquarters of the Central Intelligence Agency. The revelations have highlighted the increasing deployment of militarized spying technologies and urban warfare techniques, honed by US military forces in the Middle East and Central Asia, against the American people. Private spying firms with ties to the US government, such as Persistent Surveillance Systems, have developed airborne surveillance technologies on behalf of the government that are capable of recording huge areas, up to 25 square miles of urban environment at a time. Persistent CEO Ross McNutt noted that standard-scale aerial surveillance technology used by the government records high-resolution footage of urban environments simultaneously across five city blocks, at a minimum. In response to the eruption of social outrage after the April 20 funeral of police murder victim Freddie Gray, the state's Republican governor declared a state of emergency and deployed thousands of National Guard troops and militarized police units. The city's Democratic mayor also imposed a 10 p.m.-5 a.m. curfew. Humvees, armored vehicles, helicopters and fixed-wing aircraft were put into operation by state and federal security forces as part of a coordinated plan to suppress the demonstrations. Gatherings of discontented, unarmed civilians were subjected to a barrage of non-lethal weaponry and hundreds were arrested. The crackdown in Baltimore proceeded in line with doctrines drawn up by the US Defense Department in its "Graduated Defense Matrix," which lays out tactics for the military suppression of large-scale demonstrations. The operation in Baltimore was a further test of martial law-style operations. Previous exercises took place in connection with the lockdown in Boston in 2013 and last year's suppression of protests against the police killing of Michael Brown in Ferguson, Missouri. Ferguson is a small suburb of St. Louis, but Baltimore is a city of 622,000, located just 40 miles from Washington, DC. During the Baltimore protests various career police officers boasted on CNN that federal and state law enforcement agencies had intelligence assets among the crowds in Baltimore. The police also said they were monitoring social media to track protests. The military operation in Baltimore is the latest indication of the advanced preparations for military rule in America. Politically isolated, incapable of and opposed to any measures to ameliorate the conditions of grinding poverty and immense social inequality in America, the ruling class looks upon the masses of workers and youth with hatred and fear. It has recently been revealed that paramilitary forces referred to protesters in Ferguson last year as "enemy forces." The operation in Baltimore is part of an expanding program of "persistent surveillance." According to the ACLU, new technologies, using a sophisticated high-tech version of radar that is "akin to a camera" to track movements in detail across an immense territory, have been deployed or are in development.

Social movements are effective in shaping the culture and structure of society – the aff is necessary to combat the surveillance aimed at preventing them from flourishing.
Assadourian in 10 <Erik. "The Power of Social Movements." 2010. http://blogs.worldwatch.org/transformingcultures/contents/social-movements/>
Throughout history, social movements have played a powerful part in stimulating rapid periods of cultural evolution, where new sets of ideas, values, policies, or norms are rapidly adopted by large groups of people and subsequently embedded firmly into a culture. From abolishing slavery and ensuring civil rights for all to securing women's suffrage and liberating states nonviolently from colonial rulers, social movements have dramatically redirected societal paths in just an eye blink of human history. For sustainable societies to take root quickly in the decades to come, the power of social movements will need to be fully tapped. Already, interconnected environmental and social movements have emerged across the world that under the right circumstances could catalyze into just the force needed to accelerate this cultural shift. Yet it will be important to find ways to frame the sustainability movement to make it not just possible but attractive. This will increase the likelihood that the changes will spread beyond the pioneers and excite vast populations. This section looks at some ways this is happening already. John de Graaf of the Take Back Your Time movement describes one way to sell sustainability that is likely to appeal to many people: working fewer hours. Many employees are working longer hours even as gains in productivity would allow shorter workdays and longer vacations. Taking back time will help lower stress, allow healthier lifestyles, better distribute work, and even help the environment. This last effect will be due not just to less consumption thanks to lower discretionary incomes but also to people having enough free time to choose the more rewarding and often more sustainable choice – cooking at home with friends instead of eating fast food, for example, making more careful consumer decisions, even taking slower but more active and relaxing modes of transport. Closely connected to Take Back Your Time is the voluntary simplicity movement, as Cecile Andrews, co-editor of Less is More, and Wanda Urbanska, producer and host of Simple Living with Wanda Urbanska, discuss. This encourages people to simplify their lives and focus on inner well-being instead of material wealth. It can help inspire people to shift away from the consumer dream and instead rebuild personal ties, spend more time with family and on leisure activities, and find space in their lives for being engaged citizens. Through educational efforts, storytelling, and community organizing, the benefits of the lost wisdom of living simply can be rediscovered and spread, transforming not just personal lifestyles but broader societal priorities. A third movement that could help redirect broader cultural norms, traditions, and values is the fairly recent development of ecovillages. Sustainability educator Jonathan Dawson of the Findhorn ecovillage paints a picture of the exciting role that these are playing around the world. These sustainability incubators are reinventing what is natural and spreading these ideas to broader society not just through modeling these new norms but through training and courses in ecovillage living, permaculture, and local economics. Similar ideas are also spreading through cohousing communities, Transition Towns, and even green commercial developments like Dockside Green in Canada and Hammarby Sjöstad in Sweden. Two Boxes in this section describe some other exciting initiatives. One provides an overview of a new political movement called décroissance (in English, degrowth), which is an important effort to remind people that not only can growth be detrimental, but sometimes a sustainable decline is actually optimal. And a Box on the Slow Food movement describes the succulent power of organizing people through their taste buds. Across cultures and time, food has played an important role in helping to define people's realities. Mobilizing food producers as well as consumers to clamor for healthy, fair, tasty, sustainable cuisines can be a shrewd strategy to shift food systems and, through them, broader social and economic systems. These are just a few of the dozens and dozens of social movements that could have been examined. It is just our imaginations that limit how we can present sustainability in ways that inspire people to turn off their televisions and join the movement. Only then, with millions of people rallying to confront political and economic systems and working to shift perceptions of what should feel "natural" and what should not, will we be able to transform our cultures into something that will withstand the test of time.

Biopower Adv
The use of government surveillance has expanded and legitimized the criminal justice system's war on crime.
Natapoff 2014 [09/10/14, Alexandra Natapoff, Associate Dean for Research,
Theodore A. Bruinsma Fellow & Professor of Law, Loyola Law School, Los Angeles,
MISDEMEANOR DECRIMINALIZATION, 68 VANDERBILT L. REV. (forthcoming 2015),
http://ssrn.com/abstract=2494414]
Decriminalization also sheds light on some seemingly contradictory historical developments. In important ways, the U.S. criminal process is shrinking. The national correctional population has decreased for four years in a row.229 At least six states have closed prisons and arrests are down for the sixth year in a row.230 California – once a leader of the prison boom – is cutting its prison population and easing its harshest juvenile sentences.231 At the federal level, Congress repealed the infamous crack-cocaine sentencing disparity,232 while Attorney General Eric Holder has instructed U.S. Attorneys around the country to go easier on first-time low level drug offenders.233 The conservative Right-on-Crime coalition advocates more rehabilitation and less incarceration.234 There is growing international agreement across the political spectrum that the war on drugs is a failed, destructive and overly expensive policy that should be rolled back.235 Scholars and commentators say hopeful things like "there seems good reason to hope the war on crime may soon wind down,"236 mass incarceration "has come to an end,"237 the war on drugs is "over,"238 and the U.S. has become a "more benevolent nation."239 At the very same time, the penal apparatus is quietly expanding. While state prison populations declined in 2012, jail populations went up.240 Supervisory programs like diversion, privatized probation, community supervision and GPS monitoring are growth industries.241 Public cameras, COMPSTAT, gang and DNA databases, and easy access to personal information have created a surveillance state of heretofore unimaginable proportions.242 Defendants are on the hook for an increasing array of fines and fees that can require years to pay.243 The collateral consequences of even a minor conviction – from employment restrictions to housing, education and immigration – have become a new and burdensome form of restraint and stigma.244 Misdemeanor decriminalization epitomizes this tectonic shift away from mass incarceration towards other expansive forms of intrusion and criminalized disadvantage. Decriminalization may reject the fiscal and human costs of incarceration, but it has not relinquished the notion that the criminal process should track, label, and control dangerous and disfavored populations over the long term. And its technologies represent the cutting edge of the shift: fines and supervision, data collection and monitoring, and collateral consequences that haunt offenders indefinitely. Moreover, because decriminalization eliminates counsel and other procedural protections for defendants, it actually expands the penal process's ability to touch, mark, and burden an ever-growing population, the very same socially disadvantaged population historically subject to the excesses of mass incarceration. Decriminalization thus provides an updated understanding of the concrete mechanisms through which we now govern through crime and perpetuate a culture of control.245 Today's criminal apparatus reaches far beyond the jail and courthouse deep into civilian life, even for the most minor of offenses. It influences not only the offender, but his or her family, neighborhood, community, and the social institutions around them. It operates directly, by imposing fines, supervision, and criminal records, and indirectly by changing social and institutional relationships. An offender whose formal punishment is limited to a nonjailable misdemeanor conviction and a fine may nevertheless experience long-term restrictions on their earnings, credit, housing, employment, public benefits, and immigration.246 It is this expansive social process that represents the full punishment triggered by an encounter with the American criminal system.247 Because decriminalization preserves and even intensifies many of these consequences, it functionally extends the punitive reach of the state even as it purports to roll it back.

FRT relies on a defense of user choice, while increasingly partnering with business to rapidly erode protections of anonymity.
Brown '14 – Associate Professor of Law, University of Baltimore School of Law. B.A., Cornell; J.D., University of Michigan (Kimberly N., ANONYMITY, FACEPRINTS, AND THE CONSTITUTION, GEO. MASON L. REV. [VOL. 21:2, 2014, http://georgemasonlawreview.org/wp-content/uploads/2014/03/Brown-Website.pdf)
NAR
With the exception of Doe, an important distinguishing feature of the First Amendment anonymity cases is the involvement of legislative attempts to coerce the disclosure of personal identities. FRT presents a particularly difficult problem under prevailing constitutional law because most faces are routinely exposed in public. No domestic law requires that a person's facial features be unobstructed while she maneuvers about in public places so that the government can use them for identification purposes. Her visage is there for the government's taking. Technology has thus become deterministic of personal privacy today. Yet there is no reciprocal power on the part of individuals to direct how technology will evolve in relationship to their privacy interests or even to opt out of its implications for their daily lives. The First Amendment anonymity cases and Fourth Amendment doctrine assume that a person possesses the discretion to take steps to protect communications or other effects from governmental intrusion – that is, by keeping personal information private. In the First Amendment context, the Court has upheld private individuals' ability to choose to keep their identities anonymous in some respect. Indeed, the fact that the McIntyre plaintiff simultaneously disclosed her identity in other pamphlets was irrelevant to the Court's analysis and ultimate conclusion that her choice to remain anonymous was protected by the First Amendment.430 In the Fourth Amendment arena, disclosure operates as a waiver of sorts, but the Court has taken pains to identify how the subject of police inquiry could have effectively invoked constitutional protections by keeping information private. In both contexts, the underlying assumption supporting the Court's analyses of the constitutional guarantee at issue is that citizens have a choice and – caveat emptor – if they choose public disclosure, the Constitution cannot save them from the consequences of that choice. Facebook's FRT features are active by default.431 It takes six clicks to reach a disclosure that Facebook uses FRT.432 Apple's iPhoto does not have an opt-out function at all.433 Currently, there are no laws requiring private entities to provide individuals with notice that they are collecting personal data using FRT, how long that data will be stored, whether and how it will be shared, or how it will be used.434 Other countries have regulations that give Internet users control over their own data.435 In the United States, however, private companies are free to sell, trade, and profit from individuals' biometric information. Private companies can also disclose individuals' data to government authorities without their consent.436 Fourth and First Amendment law is remarkably consistent in its deference to the subject's choice to remain anonymous or put information into the public domain. If people protect their privacy, the Constitution protects it too. In modern times, the problem with this tautology is that the concept of choice implies that there is more than one meaningful option. With FRT and other emerging technologies, there is no mechanism for opting out of the various sources that are amalgamated into what amounts to surveillance. The theory behind the Fourth Amendment doctrines that lift its protections for information disclosed publicly or to third parties is thus unsustainable. Accordingly, the recognition of anonymity as a constitutional value that warrants protection under the First and Fourth Amendments may require numerous safeguards in place for forestalling indiscriminate disclosure, as Justice Brennan suggested in Whalen.437 In his words, whether sophisticated storage and matching technology "amount[s] to a deprivation of constitutionally protected privacy interests" might depend in part on congressional or regulatory protections put in place to forbid the government's use of big data for arbitrary monitoring of the populace without individuals' consent.438 This will not be easy. Choosing to opt out of Google's tracking technologies itself leaves a trace, and technology exists to re-identify people whose personal identifiers, such as name, address, credit card information, birth date, and social security number, had been removed from a dataset.439 But constitutional limits on the government's ability to work around individuals' attempts to protect their privacy would be an important step toward rescuing the constitutional value of anonymity before FRT and big data are used to do more than simply predict who may commit crimes – i.e., to punish people for future acts.440 The writing is on the wall. One day soon, "[y]our phone – or in some years your glasses and, in a few more, your contact lenses – will tell you the name of that person at the party whose name you always forget . . . . Or it will tell the stalker in the bar the address where you live, or it will tell the police where you have been and where you are going."441 FRT is rapidly moving society toward a world in which the Constitution's scope needs to be meaningfully reformulated, else it risks irrelevance when it comes to individuals' ability to hide from the prying eyes of government.442 The third party doctrine and the

longstanding judicial rejection of a reasonable expectation of privacy in matters made public have depleted the
Fourth Amendment of vitality for purposes of establishing constitutional barriers to the governments use of FRT to
profile and monitor individual citizens.

Although the Court has expressly affirmed protections


for anonymous speech under the First Amendment, that doctrine has not been
extended to address the harms that flow from dragnet-style surveillance . Yet every
member of the modern Court has at some point recognized that technology necessitates a rethinking of traditional

existing First Amendment protections for


anonymity should be brought to bear in assessing how Fourth Amendment doctrine
can adapt to the challenges of modern surveillance methods. Today, the
conglomerate of publicly available data is colossal and constantly expanding.
Technology enables the government and private companies to identify patterns
within such data which reveal new information that does not exist anywhere in
isolation. As a consequence, information in the digital age is fundamentally distinct from information in the predigital age, in which the Courts Fourth Amendment doctrine evolved. This Article thus identified
constitutionally derived guidelines for courts and lawmakers to consider in crafting
judicial, legislative, and regulatory responses to the governments newfound
capacity to create new information from storehouses of data gleaned from social
media sites, public cameras, and increasingly sophisticated technologies like FRT. By
giving these guidelines serious consideration, courts and lawmakers can tether
foundational constitutional protections against over-surveillance with the
development of the lawlaw that is otherwise broken and outdated.
constitutional boundaries. This Article argued that

Value to life comes first- we must reject every violation of freedom.
Petro 74. (Sylvester, Prof of Law @ Wake Forest U, University of Toledo Law Review, pg. 4801)
However, one may still insist, echoing Ernest Hemingway - "I believe in only one thing: liberty." And it is always well to bear in mind David Hume's observation: "It is seldom that liberty of any kind is lost all at once." Thus, it is unacceptable to say that the invasion of one aspect of freedom is of no import because there have been invasions of so many other aspects. That road leads to chaos, tyranny, despotism, and the end of all human aspiration. Ask Solzhenitsyn. Ask Milovan Djilas. In sum, if one believes in freedom as a supreme value and the proper ordering principle for any society aiming to maximize spiritual and material welfare, then every invasion of freedom must be emphatically identified and resisted with undying spirit.

It has created a consolidated database where monitoring no longer collects data just on criminals.
Lynch, 12. Jennifer, Attorney for the Electronic Frontier Foundation, July 18. "What Facial Recognition Technology Means for Privacy and Civil Liberties." Presentation to the Senate Committee on the Judiciary. JJZ
https://www.eff.org/files/filenode/jenniferlynch_eff-senate-testimony-face_recognition.pdf
Law enforcement and government at all levels in the United States regularly collect biometrics; combine them with biographic data such as name, address, immigration status, criminal record, gender and race; store them in databases accessible to many different entities; and share them with other agencies and governments. These collection programs have, in the past, typically included only one biometric identifier (generally a fingerprint or DNA). However, many are rapidly expanding to include facial recognition-ready photographs.

Federal and State Biometrics Databases. The two largest biometrics databases in the world are the FBI's Integrated Automated Fingerprint Identification System (IAFIS) and DHS's Automated Biometric Identification System (IDENT), a part of its U.S. Visitor and Immigration Status Indicator Technology (US-VISIT) program.11 Each database holds more than 100 million records - more than one third the population of the United States. Although each of these databases currently relies on fingerprints, both are in the process of incorporating facial recognition. IAFIS's criminal file includes records on people arrested at the local, state, and federal level and latent prints taken from crime scenes. IAFIS's civil file stores biometric and biographic data collected from members of the military, federal employees and as part of a background check for many types of jobs, such as childcare workers, law-enforcement officers, and lawyers.12 IAFIS includes over 71 million subjects in the criminal master file and more than 33 million civil fingerprints,13 and supports over 18,000 law-enforcement agencies at the state, local, tribal, federal, and international level. IDENT stores biometric and biographical data for individuals who interact with the various agencies under the DHS umbrella, including Immigration and Customs Enforcement (ICE), U.S. Citizenship and Immigration Services (USCIS), Customs and Border Protection (CBP), the Transportation Security Administration (TSA), the U.S. Coast Guard, and others.14 Through US-VISIT, DHS collects fingerprints from all international travelers to the United States who do not hold U.S. passports.15 USCIS also collects fingerprints from citizenship applicants and all individuals seeking to study, live, or work in the United States.16 And the State Department transmits fingerprints to IDENT from all visa applicants.17 IDENT processes more than 300,000 encounters every day and has 130 million records on file.18 In addition to the federal databases, each of the states has its own biometrics databases, and some larger metropolitan areas like Los Angeles also have regional databases. The prints entered into these databases are shared with the FBI, and under the Secure Communities program, with DHS.

Incorporating Face Recognition Capabilities into Existing Government Databases. In the last few years, federal, state and local governments have been pushing to develop multimodal biometric systems that collect and combine two or more biometrics (for example, photographs and fingerprints19), arguing that collecting multiple biometrics from each subject will make identification systems more accurate.20 The FBI's Next Generation Identification (NGI) database represents the most robust effort to introduce and streamline multimodal biometrics collection. The FBI has stated it needs to "collect as much biometric data as possible . . . and to make this information accessible to all levels of law enforcement, including International agencies."21 Accordingly, it has been working aggressively to "build biometric databases that are comprehensive and international in scope."22 The biggest and perhaps most controversial change brought about by NGI will be the addition of face-recognition ready photographs.23 The FBI has already started collecting such photographs through a pilot program with a handful of states.24 Unlike traditional mug shots, the new NGI photos may be taken from any angle and may include close-ups of scars, marks and tattoos.25 They may come from public and private sources, including from private security cameras, and may or may not be linked to a specific person's record (for example, NGI may include crowd photos in which many subjects may not be identified). NGI will allow law enforcement, correctional facilities, and criminal justice agencies at the local, state, federal, and international level to submit and access photos, and will allow them to submit photos in bulk. The FBI has stated that a future goal of NGI is to allow law-enforcement agencies to identify subjects in public datasets, which could include publicly available photographs, such as those posted on Facebook or elsewhere on the Internet.26 Although a 2008 FBI Privacy Impact Assessment (PIA) stated that the NGI/IAFIS photo database does not collect information from commercial data aggregators, the PIA acknowledged this information could be collected and added to the database by other NGI users such as state and local law-enforcement agencies.27 The FBI has also stated that it hopes to be able to use NGI to track people as they move from one location to another.28 Another big change in NGI will be the addition of non-criminal photos. If someone applies for any type of job that requires fingerprinting or a background check, his potential employer could require him to submit a photo to the FBI. And, as the 2008 FBI PIA notes, expanding the photo capability within the NGI [Interstate Photo System] will also expand the searchable photos that are currently maintained in the repository. Although noncriminal information has always been kept separate from criminal, the FBI is currently developing a "master name" system that will link criminal and civil data and will allow a single search query to access all data. The Bureau has stated that it believes that electronic bulk searching of civil records "would be desirable."29 DHS is poised to expand IDENT to include face recognition, which would further increase data sharing between DHS and DOJ through Secure Communities and between both agencies and DOD through other programs.30 DHS has not yet released a Privacy Impact Assessment discussing this change.

Increasing surveillance space allows for government social control to disempower communities
Gray, 03. Mitchell, Surveillance and Society. Urban Surveillance and
Panopticism: will we recognize the facial recognition society? JJZ
http://www.surveillance-and-society.org/articles1(3)/facial.pdf
The fact that problems remain in the implementation of facial recognition tools makes the study of this mode of surveillance no less crucial. Its usage will increase as research continues and the technology becomes more accurate and less expensive. There is little limit to the knowledge that could be compiled about an individual's presence in monitored spaces using a network of facial recognition systems. As the technology advances, the software will effortlessly track individuals moving through urban space, public and private. Any appearance of a person deemed threatening can be set to trigger an alarm, assuming that person's face has been recorded in a linked database. The systems represent a significant advance from closed circuit television (CCTV), which requires constant human attention to scan for potential threats. Facial recognition also has the ability to reach quickly into the past for information, dramatically extending the effective temporal scope of surveillance data analysis. Once an image is included in the database, stored surveillance data can be searched for occurrences of that image with a few keystrokes. Searching videotape for evidence, by contrast, is extremely time-consuming. The process of determining whether a suspected terrorist visited Berlin in 2002, for example, could require watching thousands of hours of videotape from potentially hundreds of cameras. If those cameras operated digital facial recognition systems, and the suspect's face were available in a linked database, the same search could conceivably be executed in a fraction of the time. The next section situates these qualities of facial recognition in the context of surveillance in general.

Trends in Surveillance. In their everyday story of video surveillance in Britain, Norris and Armstrong (1999) estimate that more than three hundred cameras may film an individual on an eventful day, and they list reasons this number will continue to rise. First, arguments that CCTV is not successful in reducing crime are often dismissed summarily in the media as contrary to common sense. Second, when a particular area introduces a CCTV system, it can displace crime into surrounding areas lacking surveillance, causing the latter to adopt systems also. Third, the presence of surveillance systems is argued to make cities attractive to business. Fourth, the systems have proven useful in gathering evidence pertaining to serious crimes like murder, and in helping police allocate resources by viewing the nature of an event before responding (Norris and Armstrong, 1999:205-206). The current trajectory of surveillance is toward omnipresence; more spaces are watched in more ways, capturing information about those within. The focus of this proliferation in recent years has been CCTV, but subsequent sections explore reasons the spread of facial recognition will continue these trends. Increases in surveillance are accompanied by a series of important related concerns. Facial recognition systems are prey to most common critiques of surveillance (and others unique to it that will be addressed below). In general, all arguments against camera surveillance apply, because cameras are the carrier for facial recognition technology. Most importantly, surveillance systems jeopardize privacy, and the challenge as surveillance grows is to prevent security solutions from evolving into greater threats to the urban fabric than the ones they are meant to solve. Privacy is inherently valuable, serving a crucial function in the development of individuals and groups. Michael Curry explains: "It is in private that people have the opportunity to become individuals in the sense that we think of the term. People, after all, become individuals in the public realm just by selectively making public certain things about themselves. Whether this is a matter of being selective about one's religious or political views, work history, education, income, or complexion, the important point is this: in a complex society, people adjust their public identities in ways that they believe best, and they develop those identities in more private settings" (1997:688). To create a group is to erect a boundary of privacy separating members and nonmembers. It is also only through privacy that the distribution of political power can change. The less powerful require control over ideas and information in order to formulate an empowerment strategy (Curry, 1997:688). Like other surveillance tools, facial recognition systems share the problems that arise from secrecy of implementation and the possibility of data errors. Urbanites often remain unaware they are being observed and even when aware, they generally have no access to information collected and therefore no ability to correct erroneous data. The most egregious case of mistaken data in facial recognition terms is pairing a facial image with the wrong identity. Further problems arise when information is networked. Discrete pieces of information about an individual may be relatively harmless to privacy, but when information is shared, a comprehensive dossier on the individual can be assembled. Privacy advocates complain that information ostensibly collected for a specific purpose is frequently used in a myriad of ways, most of which have not been consented to by the subject. This situation becomes more complex and delicate when public and private institutions share information. The ethical issues of governments purchasing information from private entities, which may or may not follow the collection guidelines approved by a democratically elected government, are complex and only slowly being examined. As the spaces of surveillance grow, private space shrinks. It must be asked whether the potential security and public safety gains from facial recognition systems outweigh the costs to privacy incurred by their use. Healthy societies seek a balance. Drawing the policy line too close to the public safety end of the spectrum could result in an undesirably restricted and unnecessarily transparent society. Conversely, to unconditionally favour privacy could maintain security vulnerabilities at an unacceptable level. The effects of pervasive surveillance stretch beyond issues related to privacy. At risk, for example, is an erosion of the benefits of routine urban social interaction. Surveillance saturation could cause a shrinking perception of accountability among those present together in urban space. Hille Koskela explains that electronic means have more and more often [been] used to replace informal social control in an urban environment: "the eyes of the people on the street" are replaced by the eyes of surveillance cameras (2002:259). There may be less incentive to assist someone in distress when a camera is viewing the event. Why interfere yourself when you can let the experts behind the lens do it? At its most essential level, omnipresent surveillance simply has the power to reduce quality of life. Conservative New York Times columnist William Safire (2002) describes succinctly how constant surveillance is experienced: "To be watched at all times, especially when doing nothing seriously wrong, is to be afflicted with a creepy feeling .... It is the pervasive, inescapable feeling of being unfree."

Racial Profiling Adv


Specifically- police secretly employ FRT against minorities,
guaranteeing racial injustice.
Lochner 13 J.D. Candidate, University of Arizona James E. Rogers
College of Law, 2013 (Sabrina A., SAVING FACE: REGULATING LAW
ENFORCEMENTS USE OF MOBILE FACIAL RECOGNITION TECHNOLOGY & IRIS SCANS,
http://www.arizonalawreview.org/pdf/55-1/55arizlrev201.pdf, ARIZONA LAW REVIEW
VOL. 55:201, 2013) NAR
The mobility of MORIS makes it impracticable for citizens to avoid police using the technology; there is no opt-out option. And MORIS's design leaves room for police bias and error in its operation. These biases manifest themselves in the form of discriminatory targeting, racial bias, and context bias. This means that police may more frequently use MORIS to identify certain groups of people without oversight; police may not be able to correctly identify the facial features of a person of another race to make accurate identifications; and outside distractions may cause the police to make incorrect identifications. The technology also does not eliminate errors inherent in lineups or the possibility of the data being collected and stored for unanticipated purposes. Lastly, the facial and iris databases are unregulated and have no guidelines for how to enroll new persons. This Part addresses each of these policy concerns and proposes regulatory solutions.

A. Lack of Notice or Opt-Out Option. The mobility of MORIS does not give citizens notice of the device's use or the ability to opt out of getting scanned in the way stationary checkpoints allow. If using FRT is not a Fourth Amendment search, and probable cause or reasonable suspicion is not a prerequisite to data collection and use, then the police can legally take a picture of anyone and run it through the database without suspicion that the person has done something illegal.146 Moreover, unlike a stationary checkpoint, where all who pass by are subject to FRT, although people can opt out of going to sporting events or airports to avoid FRT and iris scans, people cannot opt out of going about their daily lives. Thus, no matter where one goes in the United States, the possibility exists that an officer may use MORIS to take a picture and run it through a database to learn that person's identity and criminal history. Because the device works from 5 feet away, this investigation could be done secretly. Just as covert GPS tracking can alter the relationship between citizen and government "in a way that is inimical to democratic society,"147 covert FRT could similarly sabotage this relationship.

B. Discriminatory Targeting and Racial Bias. MORIS's portability grants police discretion in deciding whom to identify. Without guidelines, nothing prohibits police from acting on potential racial, gender, or class biases.148 Legally, law enforcement could primarily run pictures of a certain "type" of person, without justifiable cause. Jay Stanley, an ACLU senior policy analyst, worries about the new type of "facial profiling" MORIS could create.149 Not only may police take pictures discriminatorily, but a racial bias also may arise while police search for a match. MORIS finds the three most similar faces and displays these headshots on the screen; however, an officer makes the final selection as to which picture matches the person he is trying to identify.150 If the police officer is of a different race than the person to be identified, the officer may not make this selection accurately.151 Psychology studies show that people can more accurately recall specific faces if they are of their own race rather than of another race.152 Due to the "other-race effect," people outside one's own race subjectively look more alike153 unless that person has had ample exposure to another race.154 A Northwestern University study shows that the brain encodes same-race faces with an emphasis on unique identifiers; however, the brain does not encode other-race faces with this level of detail.155 Consequently, we have poorer memory for other-race faces, and are therefore less likely to [recognize] them or to distinguish between them.156 Lay witnesses have made inaccurate lineup identifications because of the other-race effect.157 In 1984, an innocent man was convicted of rape after the victim, of another race, identified him as the perpetrator.158 When the man was exonerated through DNA evidence, the victim said that the other-race effect contributed to her misidentification.159 Given that MORIS creates a photographic lineup with the three most mathematically similar faces and that people struggle with distinguishing another race's facial features, the Arizona legislature should give police procedures to follow when making the final match. The other-race bias can be reduced by informing the witness of the potential bias and by telling the witness to look for individual facial features instead of looking at the face as a whole.160 In one study, researchers eliminated the other-race bias by giving these warnings before the brain could encode the face.161 To ensure more accurate identifications, officers using MORIS should be required to learn about other-race bias and how to look for unique features on faces of other races.

This constructs a generalized image of the enemy, causing racialized scapegoating and structural violence
Gates 6 Assistant Professor in the Department of Communication
and the Science Studies Program at the University of California, San
Diego. (Kelly, Cultural Studies Vol. 20, Nos. 4 5 July/September 2006, pp. 417 440
ISSN 0950-2386 print/ISSN 1466-4348 online 2006 Taylor & Francis
http://www.tandf.co.uk/journals DOI: 10.1080/09502380600708820, IDENTIFYING
THE 9/11 FACES OF TERROR The promise and problem of facial recognition
technology) NAR
In the wake of 9/11, authorities worked feverishly to transform the political priorities of homeland security into specific technologies and technical systems, in keeping with a long-standing preoccupation with technology in political thought. As we have seen, one set of proposals and experiments centered on enhancing existing systems or networks for security through identification, including the integration of automated facial recognition into security networks. Not only would automated facial recognition be more accurate, objective, and robust than the recognition capacity of human agents, but it also promised to accomplish identification at a distance and in real time. These temporal and spatial improvements to standardized identification systems would bring more individuals in contact with identification systems more often, enabling an automated form of governing at a distance, tying individuals into circuits of inclusion, while identifying and isolating dangerous, terrorist identities from "civilized society." The problem with this model of automated real-time identification at a distance is that, no matter how sophisticated and state-of-the-art, it must always contend with the complexity of identity: its variability in individuals and among populations, as well as its status as both an individualizing and classifying construct. To be sure, identities and faces are inextricably connected, but their hybrid, unstable quality complicates efforts at stabilization, a complication that leads to persistent new efforts at technical improvements. To function effectively as technologies of homeland security, new identification systems, enhanced with automated facial recognition, would have to achieve some measure of stabilization. As their "face of terror" rhetoric confirmed, proponents had no problem moving well beyond a measured approach, adopting an uncompromisingly rigid and essentialized conception of identity in their effort to transition automated facial recognition from its experimental stages into a marketable product, ostensibly capable of identifying the nation's new unidentifiable Other. In exploring the implications of wielding essentialized conceptions of the terrorist identity, it would be perhaps too easy to point to evidence that many of the individuals swept up and incarcerated as terrorists since 9/11 had no connection to terrorist acts or plans. It might be too opportunistic to point to the unfortunate death of Mr Dilawar, the 22-year-old Afghani cab driver who was beaten repeatedly by US soldiers at the Bagram Collection Point because they thought his agonized cries of "Allah!" sounded funny. In fact, the consequences that result from broadly construed notions of "the terrorist identity" manifest themselves in a variety of ways, some more subtle than the incarceration or deaths of individuals designated as terrorists. However, the fact that the designation essentially strips individuals of their right to life or humane treatment means that the criteria for the designation must themselves be interrogated, tortured, and interrogated again. Surely there are ways of accurately and unambiguously designating individuals as terrorists, such as individuals who have committed acts of violence against civilians, or individuals who have attempted to do so. Only the most relativist minds would have a problem with designating as terrorist those individuals planning or in training to commit acts of violence against civilians. But in fact precisely where and when individuals begin that training has been subject to interpretation. In a 14 June 2005 article in The New York Times, Peter Bergen and Swati Pandey (a fellow and a research associate at the New America Foundation) summarized their research debunking the myth that Muslim religious schools, known as madrassas, are training grounds for future terrorists. They investigated the educational backgrounds of 75 terrorists involved in major recent terrorist attacks against Westerners and found that only nine of them had attended madrassas, and all nine had taken part in one attack, the Bali bombings in 2002. None of the 9/11 hijackers had attended madrassas. Does the potentially faulty assumption about madrassas provide anyone with more security? To be effective, security through identification must rely on the broadest criteria possible for designating terrorists, lest the "face of terror" go unidentified. The error that results from this broad designation and the way that it is rendered in technical form requires vigilant critical analysis.

The use of facial recognition exacerbates the monitoring and false positive identifications of minority populations
Introna, 05. Lucas, Center for the Study of Technology and Organisation, Lancaster University Management School, Lancaster, LA1 4YX, UK. "Disclosive ethics and information technology: disclosing facial recognition systems." JJZ
http://link.springer.com/article/10.1007/s10676-005-4583-2
We would propose that two possibilities are most likely. First, it is possible that the
operators will become so used to false positives that they will start to treat all
alarms as false positives thereby rendering the system useless. Alternatively, they
may deal with it by increasing the identification threshold (requesting the system to
reduce the number of false positives). This will obviously also increase the false
negatives, thereby raising all sorts of questions about the value of the system into
question. However, more important to us, with an increased threshold small
differences in identifiability (the biases outlined above) will mean that those that are

easier to identify by the algorithms (African-Americans, Asians, dark skinned


persons and older people) will have a greater probability of being scrutinised. If the
alarm is an actual positive recognition then one could argue that nothing is lost.
However, it also means that these groups would be subjected to a higher probability
of scrutiny as false positives, i.e. mistaken identity. Moreover, we would propose
that this scrutiny will be more intense as it would be based on the assumption that
the system is working at a higher level and therefore would be more accurate. In
such a case existing biases, against the usual suspects (such as minorities), will
tend to come into play (Norris and Armstrong 1999). The operators may even
override their own judgements as they may think that the system under such high
conditions of operation must see something that they do not. This is highly likely
as humans are not generally very good at facial recognition in pressurised situations
as was indicated in a study by Kemp et al. (1997). Thus, under these conditions the
bias group (African-Americans, Asians, dark skinned persons and older people) may
be subjected to disproportionate scrutiny, thereby creating a new type of digital
divide (Jupp in Graham and Wood, 2003: 234).

Specifically- FRT allows the legal system to rely on racist policies by flipping the burden of proof onto minorities.
Lynch, 12. Jennifer, Attorney for the Electronic Frontier Foundation, July 18. What Facial Recognition Technology Means for Privacy and Civil Liberties. Presentation to the Senate Committee on the Judiciary. JJZ
https://www.eff.org/files/filenode/jenniferlynch_eff-senate-testimony-face_recognition.pdf
The extensive collection and sharing of biometric data at the local, national, and
international level should raise significant concerns among Americans. Data
accumulation and sharing can be good for solving crimes across jurisdictions or borders, but can also perpetuate racial and ethnic
profiling, social stigma, and inaccuracies throughout all systems and can allow for
government tracking and surveillance on a level not before possible . Some of these concerns are
endemic to all data collection and are merely exacerbated by combining biographic data with any non-changeable biometric. For example, courts have recognized the social stigma involved with merely having a record in a criminal database.65 Additionally, data inaccuracies, such as those common in immigration66 and Automated Targeting System67 records, become much more damaging and difficult to correct as they are perpetuated through cross-database sharing. Data sharing can also mean that data collected for non-criminal purposes, such as immigration-related records or employment verification, are combined with and used for criminal or national-security purposes with little or no standards, oversight, or transparency. When some of this data comes from sources such as local fusion centers and private security guards in the form of Suspicious Activity Reports (SARs),68 it can perpetuate racially or politically motivated targeting.69 Standardization of biometrics data (necessary to enable data sharing) causes additional concerns. Once data are standardized, they become much easier to use as linking identifiers, not just in interactions with the government but also across disparate databases and throughout society. For example, Social Security numbers were created to serve one purpose, to track wages for Social Security benefits, but are now used to identify a person for credit and background checks, insurance, to obtain food stamps and student loans, and for many other private and government purposes.70 If biometrics become similarly standardized, they could replace Social Security numbers, and the next time someone applies for insurance, sees her doctor, or fills out an apartment rental application, she could be asked for her face print. This is problematic if records are ever compromised because, unlike a Social Security Number or other unique identifying number, a person cannot change her biometric data.71 And the many recent security breaches and reports of falsified data show that the government and private sector can never fully protect against these kinds of data losses.72 Data standardization also increases the ability of government and the private sector to locate and track a given person throughout her life. And finally, extensive data retention periods73 can lead to further problems; data that may be less identifying today, such as a photograph of a large crowd or political protest, could become more identifiable in the future as technology improves. However, advanced biometrics like face recognition create additional concerns because the data may be collected in public without a person's knowledge. For example, the addition of crowd and security camera photographs into NGI means that anyone could end up in the database, even if they're not involved in a crime, by just happening to be in the wrong place at the wrong time, by fitting a stereotype that some in society have decided is a threat, or by, for example, engaging in "suspect" activities such as political protest in areas rife with cameras.74 Given the FBI's history of misuse of data gathered on people during former FBI director J. Edgar Hoover's tenure75 and the years following September 11, 2001,76 (data collection and misuse based on religious beliefs, race, ethnicity and political leanings), Americans have good reason to be concerned about expanding government biometrics databases to include face recognition technology. Technical issues specific to facial recognition make its use worrisome for Americans. For example, facial recognition's accuracy is

strongly dependent on consistent lighting conditions and angles of view.77 It may be less accurate with certain ethnicities and with large age discrepancies (for example, if a person is
compared against a photo taken of himself when he was ten years younger). These issues can lead to a high rate of false positives: when, for example, the system falsely identifies
someone as the perpetrator of a crime or as having overstayed their visa. In a 2009 New York University report on facial recognition, the researchers noted that facial recognition

performs rather poorly in more complex attempts to identify individuals who do not voluntarily self-identify . . . Specifically, the face in the crowd scenario, in which a face is picked
out from a crowd in an uncontrolled environment.78 The researchers concluded "the challenges in controlling face imaging conditions and the lack of variation in faces over large populations of people make it unlikely that an accurate face recognition system will become an operational reality for the foreseeable future."79 Some have also suggested the false-positive risk inherent in large facial recognition databases could result in even greater racial profiling by disproportionately shifting the burden of identification onto certain ethnicities.80 This can alter the traditional presumption of innocence in criminal cases by placing more of a burden on the defendant to show he is not who the system identifies him to be. And this is true even if a face recognition system such as NGI offers several results for a search instead of one, because each of the people identified could be brought in for questioning, even if he or she was not involved in the crime. In light of this, German Federal Data Protection Commissioner Peter Schaar has noted that false positives in facial recognition systems pose a large problem for democratic societies: in the event of a genuine hunt, "[they] render innocent people suspects for a time, create a need for justification on their part and make further checks by the authorities unavoidable."81

Biometrics monitor criminal and non-criminal behavior without any legal boundaries
Watson, 14. Steve, Info Wars, 6/24. OVERWHELMING NUMBER OF FALSE MATCHES TO COME FROM FBI BIOMETRIC DATABASE. JJZ
http://www.infowars.com/privacy-groups-warn-overwhelming-number-of-false-matches-to-come-from-fbi-biometric-database/
A coalition of over thirty civil liberties groups have joined forces to lobby the Department of Justice to scrutinize the FBI's controversial new biometric database. The groups, including the ACLU and the Electronic Frontier Foundation, sent an open letter on Tuesday to Attorney General Eric Holder, arguing that the system has not been properly vetted, and will facilitate the collection and storage of a hugely broad sweep of images of law abiding Americans who have done nothing wrong. The Justice Department long ago promised to complete an audit of the biometric system, which will be made up of iris scans and palm prints, along with images linked in to facial recognition systems. So far neither the FBI nor the DOJ has done anything toward fulfilling the promise. The liberty groups warn that the FBI program has undergone a radical transformation since it was last reviewed some six years ago. A complete absence of oversight raises serious privacy and civil-liberty concerns, the groups contend. "The capacity of the FBI to collect and retain information, even on innocent Americans, has grown exponentially," the letter reads. "It is essential for the American public to have a complete picture of all the programs and authorities the FBI uses to track our daily lives, and an understanding of how those programs affect our civil rights and civil liberties." The database, which already holds millions of fingerprint and photographic records, is scheduled to go live before the end of the year, but has never even been subjected to a routine Privacy Impact Assessment. "One of the risks here, without assessing the privacy considerations, is the prospect of mission creep with the use of biometric identifiers," said Electronic Privacy Information Center spokesperson Jeramie Scott. "It's been almost two years since the FBI said they were going to do an updated privacy assessment, and nothing has occurred." The FBI plans to have up to a third of all Americans on the database by next year. There are currently no federal laws limiting the use of facial-recognition technology, either by private companies or by the government. The Electronic Frontier Foundation noted in an April communique that some 52 million Americans could be on the Next Generation Identification (NGI) biometric database by 2015, regardless of whether they have ever committed a crime or been arrested. Profiles on the system will contain other personal details such as name, address, age and race. The group managed to obtain information pertaining to the program via a freedom of information request. The system will be capable of searching through millions of facial records obtained not only via mugshots, but also via so called "civil images," the origin of which is vague at best. "[T]he FBI does not define either the Special Population Cognizant database or the new repositories category," the EFF writes. "This is a problem because we do not know what rules govern these categories, where the data comes from, how the images are gathered, who has access to them, and whose privacy is impacted." A map within the EFF's piece shows which states are already complying with the program, and which ones are close to agreeing deals to do so. The EFF notes that currently, the FBI has access to fingerprint records of non-criminals who have submitted them for any kind of background check, by an employer or government agency. Going forward, however, all records, both criminal and non-criminal, will be stored on the same database. "This means that even if you have never been arrested for a crime, if your employer requires you to submit a photo as part of your background check, your face image could be searched and you could be implicated as a criminal suspect, just by virtue of having that image in the non-criminal file," notes the EFF. EFF points to a disturbing assertion from the FBI that it will not make positive identifications via the database, but will use it to produce "investigative leads." The Feds claim that "Therefore, there is no false positive [identification] rate." "[T]he FBI only ensures that the candidate will be returned in the top 50 candidates 85 percent of the time when the true candidate exists in the gallery," EFF states. "It is unclear what happens when the true candidate does not exist in the gallery: does NGI still return possible matches?" the feature asks, noting that those identified could potentially be subjected to criminal investigation purely because a computer has decided that their face is similar to a suspect's. EFF continues: "This doesn't seem to matter much to the FBI; the Bureau notes that because this is an investigative search and caveats will be prevalent on the return detailing that the [non-FBI] agency is responsible for determining the identity of the subject, there should be NO legal issues." "This is not how our system of justice was designed and should not be a system that Americans tacitly consent to move towards," the EFF piece concludes. A previous government report, dating from 2010, was made public last year via FOIA, revealing that the FBI's facial-recognition technology could fail up to 20 percent of the time. The coalition of privacy groups notes that even that is a conservative estimate, owing to the fact that a search on the database will be dubbed a success if the eventual correct suspect is flagged up within the top 50 possibilities. This means that 49 other innocent people who have never done anything wrong could be potentially marked as suspects without being considered false matches. The groups say that this overwhelming number of wrong matches may lead to greater racial profiling by law enforcement by shifting the burden of identification onto certain ethnicities. The Justice Department says it is reviewing the letter from the privacy groups. It is somewhat remarkable that when Google announced the release of its Glass product, it was forced to ban applications with the capability for facial recognition due to a huge privacy backlash. The Federal government, however, continues to use such technology unhindered to create biometric profiles on anyone and everyone. The Department of Homeland Security also has its own facial recognition program, which it routinely outsources to police departments. Meanwhile, new innovations in facial recognition technology continue to be billed as potential tools for law enforcement, including the prediction of future crime. The NSA, it has been revealed via the leaked Snowden documents, intercepts approximately 55,000 facial-recognition quality images every day.

This type of advocacy is the only way to bring an end to the system of institutional racism. The reason reforms have failed is because they haven't challenged the underlying racial ideology; it's a flawed public consensus, not a flawed public policy, that lies at the heart of this system of control. As a result, solvency for this debate should be measured not in what reforms we utilize, but how reforms come into existence
Alexander 10, Associate Professor of Law
[2010, Michelle Alexander, is an associate professor of law at Ohio State University,
a civil rights advocate and a writer. The New Jim Crow: Mass Incarceration in the Age of Colorblindness. ProQuest ebrary, pp. 221-224]
The list could go on, of course, but the point has been made. The central question for racial justice advocates is this: are we serious about ending this system of control, or not? If we are, there is a tremendous amount of work to be done. The notion that all of these reforms can be accomplished piecemeal, one at a time, through disconnected advocacy strategies, seems deeply misguided. All of the needed reforms have less to do with failed policies than a deeply flawed public consensus, one that is indifferent, at best, to the experience of poor people of color. As Martin Luther King Jr. explained back in 1965, when describing why it was far more important to engage in mass mobilizations than file lawsuits, "We're trying to win the right to vote and we have to focus the attention of the world on that. We can't do that making legal cases. We have to make the case in the court of public opinion." 21 King certainly appreciated the contributions of civil rights lawyers (he relied on them to get him out of jail), but he opposed the tendency of civil rights lawyers to identify a handful of individuals who could make great plaintiffs in a court of law, then file isolated cases. He believed what was necessary was to mobilize thousands to make their case in the court of public opinion. In his view, it was a flawed public consensus, not merely flawed policy, that was at the root of racial oppression. Today, no less than fifty years ago, a flawed public consensus lies at the core of the prevailing caste system. When people think about crime, especially drug crime, they do not think about suburban housewives violating laws regulating prescription drugs or white frat boys using ecstasy. Drug crime in this country is understood to be black and brown, and it is because drug crime is racially defined in the public consciousness that the electorate has not cared much what happens to drug criminals, at least not the way they would have cared if the criminals were understood to be white. It is this failure to care, really care across color lines, that lies at the core of this system of control and every racial caste system that has existed in the United States or anywhere else in the world. Those who believe that advocacy challenging mass incarceration can be successful without overturning the public consensus that gave rise to it are engaging in fanciful thinking, a form of denial. Isolated victories can be won, even a string of victories, but in the absence of a fundamental shift in public consciousness, the system as a whole will remain intact. To the extent that major changes are achieved without a complete shift, the system will rebound. The caste system will reemerge in a new form, just as convict leasing replaced slavery, or it will be
reborn, just as mass incarceration replaced Jim Crow. Sociologists Michael Omi and
Howard Winant make a similar point in their book Racial Formation in the United States. They attribute the
cyclical nature of racial progress to the unstable equilibrium that characterizes
the United States racial order. 22 Under normal conditions, they argue, state
institutions are able to normalize the organization and enforcement of the
prevailing racial order, and the system functions relatively automatically.
Challenges to the racial order during these periods are easily marginalized or
suppressed, and the prevailing system of racial meanings, identity, and ideology
seems natural. These conditions clearly prevailed during slavery and Jim Crow. When the
equilibrium is disrupted, however, as in Reconstruction and the Civil Rights Movement, the state
initially resists, then attempts to absorb the challenge through a series of reforms
that are, if not entirely symbolic, at least not critical to the operation of the racial
order. In the absence of a truly egalitarian racial consensus, these predictable
cycles inevitably give rise to new, extraordinarily comprehensive systems of
racialized social control. One example of the way in which a well established racial order easily
absorbs legal challenges is the infamous aftermath of the Brown v. Board of Education
decision. After the Supreme Court declared separate schools inherently unequal in 1954, segregation persisted
unabated. One commentator notes: The statistics from the Southern states are truly
amazing. For ten years, 1954-1964, virtually nothing happened. 23 Not a single black child attended an integrated public grade school in South Carolina, Alabama, or Mississippi as of the 1962-1963 school year. Across the South as a whole, a mere 1 percent of black school children were attending school with whites in 1964, a full decade after Brown was decided. 24 Brown did not end Jim Crow; a mass movement had to emerge first, one that aimed to create a new public consensus opposed to the evils of Jim Crow. This does not mean Brown v. Board was meaningless, as some commentators have claimed. 25 Brown gave critical legitimacy to the demands of civil rights activists who risked their lives to end Jim Crow, and it helped to inspire the movement (as well as a fierce backlash). 26 But standing alone, Brown accomplished for African Americans little more than Abraham Lincoln's Emancipation Proclamation. A civil war had to be waged to end slavery; a mass movement was necessary to bring a formal end to Jim Crow. Those who imagine that far less is required to dismantle mass incarceration and build a new, egalitarian racial consensus reflecting a compassionate rather than punitive impulse toward poor people of color fail to appreciate the distance between Martin Luther King Jr.'s dream and the ongoing racial nightmare for those locked up and locked out of American society. The foregoing should not be read as a call for movement building to the exclusion of reform work. To the contrary, reform work is the work of movement building, provided that it is done consciously as movement-building work. If all the reforms mentioned above were actually adopted, a radical transformation in our society would have taken place. The relevant question is not whether to engage in reform work, but how. There is no shortage of worthy reform efforts and goals. Differences of opinion are inevitable about which reforms are most important and in what order of priority they should be pursued. These debates are worthwhile, but it is critical to keep in mind that the question of how we do reform work is even more important than the specific reforms we seek. If the way we pursue reforms does not contribute to the building of a movement to dismantle the system of mass incarceration, and if our advocacy does not upset the prevailing public consensus that supports the new caste system, none of the reforms, even if won, will successfully disrupt the nation's racial equilibrium. Challenges to the
system will be easily absorbed or deflected, and the accommodations
made will serve primarily to legitimate the system, not undermine it. We
run the risk of winning isolated battles but losing the larger war.

Impacts
Combating Racism addresses the root cause for nuclear war
LaBalme 2002 (www.activism.net/peace/nvcdh/discrimination.shtml)

In this action, our struggle is not only against missiles and bombs, but against the
system of power they defend: a system based on domination, on the belief that
some people have more value than others, and therefore have the right to control
others, to exploit them so that they can lead better lives than those they oppress.
We say that all people have value. No person, no group, has the right to wield power over the decisions and
resources of others. The structure of our organizations and the processes we use among ourselves are our best attempt to live our
belief in self-determination. Besides working against discrimination of all kinds among ourselves, we must try to understand how
such discrimination supports the system which produces nuclear weapons. For some people who come to this action, the overriding
issue is the struggle to prevent nuclear destruction. For others, that struggle is not separate from the struggles against racism,
sexism, classism, and the oppression of groups of people because of their sexual orientation, religion, age, physical (dis)ability,
appearance, or life history. Understood this way, it is clear that nuclear weapons are already killing people, forcing them to lead lives
of difficulty and struggle. Nuclear war has already begun, and it claims its victims disproportionately from native peoples, the Third World, women, and those who are economically vulnerable because of the history of oppression. All oppressions are interlocking. We separate racism, classism, etc. in order to discuss them, not to imply that any form of oppression works in isolation. We know that to work against any one of these is not just to try to stop something negative, but to build a positive vision. Many in the movement call this larger goal feminism. Calling our process "feminist process" does not mean that women
dominate or exclude men; on the contrary, it challenges all systems of domination. The term recognizes the historical importance of
the feminist movement in insisting that nonviolence begins at home, in the ways we treat each other. In a sexist or patriarchal
society, women are relegated to limited roles and valued primarily for their sexual and reproductive functions, while men are seen
as the central makers of culture, the primary actors in history. Patriarchy is enforced by the language and images of our culture; by
keeping women in the lowest paying and lowest status jobs, and by violence against women in the home and on the streets. Women
are portrayed by the media as objects to be violated; 50% of women are battered by men in their lives, 75% are sexually assaulted.
The sexist splitting of humanity which turns women into others, lesser beings whose purpose is to serve men, is the same split
which allows us to see our enemies as non-human, fair game for any means of destruction or cruelty. In war, the victors frequently
rape the women of the conquered peoples. Our country's foreign policy often seems directed by teenage boys desparately trying to
live up to stereotypes of male toughness, with no regard for the humanity or land of their "enemy." Men are socialized to repress
emotions, to ignore their needs to nurture and cherish other people and the earth. Emotions, tender feelings, care for the living, and
for those to come are not seen as appropriate concerns of public policy. This makes it possible for policymakers to conceive of nuclear war as "winnable." Similarly, racism, or the institutionalized devaluation of darker peoples, supports both the idea and the practice of the military and the production of nuclear weapons. Racism operates as a system of divide and conquer. It helps to perpetuate a system in which some people consistently are "haves" and others are "have nots." Racism tries to make white people forget that all people need and are entitled to self-determination, good health care, and challenging work. Racism limits our horizons to what presently exists; it makes us suppose that current injustices are "natural," or it makes those injustices invisible.
For example, most of the uranium used in making nuclear weapons is mined under incredibly hazardous conditions by people of
color: Native Americans and black South Africans. Similarly, most radioactive and hazardous waste dumps are located on lands
owned or occupied by people of color. If all those people suffering right now from exposure to nuclear materials were white, would
nuclear production remain acceptable to the white-dominated power structure?

Racism also underlies the concept of "national security": that the U.S. must protect its "interests" in Third World countries through the exercise of military force and economic manipulation. In this world-view, the darker peoples of the world are incapable of managing their own affairs and do not have the right to self-determination. Their struggles to democratize their countries and become independent of U.S. military and economic institutions are portrayed as "fanatic," "terrorist," or "Communist." The greatest danger of nuclear war today lies in the likelihood of superpower intervention in Third World countries, fueled by government appeals to nationalistic and racist interests.

India Adv
India expanding use of biometric software now
Gonzalez, 14. Deborah, author for Elsevier, Oct. 9, 2014. Amid rampant data breaches and hacks, biometrics takes off.
http://www.elsevier.com/connect/amid-rampant-data-breaches-and-hacks-biometrics-takes-off
Privacy advocates do recognize security benefits of some of the biometric technologies but caution and urge their developers and users to apply them responsibly and with transparency.
So since there are no laws, can the industry agree on some voluntary guidelines or best practices, like posting notices if facial recognition technology is being used in an event? What
triggered this particular concern was a recent Super Bowl game where attendees were facially scanned without their knowledge by law enforcement. After the fact, individuals
understood the possible benefit of identifying criminals, but they felt their privacy was violated because they were not told and did not give their consent.Another concern that gets
raised is that these biometrics are stored in a database, so all the information system security concerns are still there. Also, if a regular database of passwords gets hacked, you can
change the password, but if a biometrics database gets hacked, you can't change your face. The use of biometric data that has been collected also raises the question of what the data will be used for. India is being watched for one of the most ambitious biometrics data collection projects in the world: Aadhaar. With over a billion people, most of whom are poor and undocumented, the Indian government thinks biometrics could be the answer to identifying their own population and improving government services. The Aadhaar database has collected fingerprints, iris scans, and photos of over 500 million Indian citizens so far, who receive in exchange a 12-digit national ID number. But human rights activists in India and abroad fear that the data will be used to marginalize even more the poorer classes, demonstrating a level of mistrust not necessarily of the technology but of the government entity using it.

US action key to dissuading India from using monitoring technology
Hayes, 15. Lisa, Center for Democracy and Technology, 1/20. Digital India's Impact on Privacy: Aadhaar numbers, biometrics, and more.
https://cdt.org/blog/digital-indias-impact-on-privacy-aadhaar-numbers-biometrics-and-more/
Last week, the State Department hosted a meeting of the India-U.S. Information and Communications Technologies Working Group, bringing together government, industry, and civil society for thoughtful discussion about ICT issues between each country. Much of the discussion focused on the Indian government's Digital India initiative to promote universal connectivity, with the goal of providing every citizen with broadband connection by December 2016. In furtherance of this plan, the government is building a massive

optical fiber network throughout the country, and creating 250,000 computer service centers to provide high-speed access to residents in rural areas. As part of Digital India's goal of providing government services to every individual, however, the government envisions a "cradle-to-grave digital identity" that is unique, lifelong, and authenticable. To accomplish this, the government plans to draw on the Aadhaar program, a controversial unique identification system that has led the Indian government to create the world's largest biometric database. Using Aadhaar numbers, the government hopes to digitally link every person in India
to the Internet with a unique 12 digit identifier, to allow them to securely access cutting-edge tools such as digital welfare benefits and online medical services. Digital India brings some
clear benefits to the country and people: universal connectivity is an outstanding goal for individuals and industry alike. Technology can be a powerful, life-changing tool and we applaud
the governments efforts to ensure that people in rural areas have secure, high-speed access for education, commerce, health, and access to the global flow of ideas and information.
However, linking this access to the Aadhaar number brings with it significant risks. In return for secure online access to government services, citizens of India are being asked to give up
vast amounts of personal information. In addition to collecting a name, birth date, and address from each participant, the government is collecting biometric information by using iris and

fingerprint scanners before assigning each person his or her Aadhaar number. This is the first time that sensitive biometric information has ever been collected on such a broad scale. Already, widespread data integrity issues have arisen
with the storage of the Aadhaar data from the time of collection. How this information is protected, what is done with the data, who has access to the data, and how it will be shared
between government agencies remain troubling questions in a shifting legal landscape, without further legislative guidance. It is also not clear how the Digital India plan conforms with
the 2013 ruling from the Supreme Court of India that no one should be required to obtain an Aadhaar card in order to access government services. Moreover, India has been
unapologetic about its existing surveillance programs. Advocates have raised significant privacy and free expression concerns with the authority the government claims to conduct
surveillance, and in 2013 the government granted itself even broader latitude to monitor citizens in a process that lacked the opportunity for open public debate or parliamentary
approval. While governments have legitimate national security concerns, increased security must not come at the cost of fundamental human rights. And given the extreme sensitivities
of biometric identification data, and continued concerns over potential government misuse of individuals unique identifiers, its essential that India establish far greater protections for
the digital identities and privacy of all of its citizens. Last weeks ICT Working Group meetings culminated in an agreement between the US and India to collaborate on implementing the
Digital India initiative. President Obama is traveling to India later this month to join Prime Minister Narendra Modi in the celebration of Indias Republic Day, honoring the Indian

The President should take this opportunity to raise these critical privacy and
free expression issues as Digital India marches forward.
constitution.

Data Leaks Adv


Absent restrictions, the industry faces data leaks.
Lumb, 15. David, Fast Company.com, 1/26. Tech writer who dabbled in the
startup world and once did an investigative article on pizza. Is Facial Recognition
The Next Privacy Battleground?. JJZ
http://www.fastcompany.com/3040375/is-facial-recognition-the-next-privacybattleground
Surprisingly, almost no U.S. airports use facial recognition tech. FaceFirst has sold its tech to the Panama City airport, among other South American government customers. But to secure a U.S. government contract would require an extremely expensive lobbying campaign and long negotiations with Homeland Security and the Transportation Security Administration. "Panama, Colombia, and Brazil went out of their way to approach FaceFirst to use their facial recognition systems in bus depots," says Rosenkrantz. The potential for businesses to compile new customer data is too tempting to pass up for long. Facial recognition technology has advanced, and if privacy advocates and businesses don't agree on what precautions to take with all that stored personal data -- and the government doesn't step in -- some future Target-size leak could spill biometric-linked data that's even more personal and much harder to recover.

Privacy

1st amendment
FRT violates the First Amendment and destroys our right to
freedom of anonymous speech.
Brown 14 Associate Professor of Law, University of Baltimore School
of Law. B.A., Cornell; J.D., University of Michigan (Kimberly N, ANONYMITY,
FACEPRINTS, AND THE CONSTITUTION, GEO. MASON L. REV. [VOL. 21:2, 2014,
http://georgemasonlawreview.org/wp-content/uploads/2014/03/Brown-Website.pdf)
NAR
Separately, a line of First Amendment cases confirms that the privacy threat posed by technologies like FRT -- the government's unfettered identification and monitoring of personal associations, speech, activities, and beliefs, for no justifiable purpose -- is one of constitutional dimension. In fact, the Supreme Court has steadfastly protected anonymous speech.336 The Court's repeated pronouncements that the First Amendment337 safeguards the right of anonymous speech -- that is, the right to distribute written materials without personal identification of the author -- largely came about in response to government attempts to mandate disclosures in public writings.338 In Talley v. California,339 the Court struck down a Los Angeles ordinance restricting the distribution of a handbill in any place under any circumstances, which does not have printed on the cover . . . the name and address of . . . [t]he person who printed, wrote, compiled or manufactured the same.340 Finding that the law infringed on freedom of expression, the Court observed that [a]nonymous pamphlets, leaflets, brochures and even books have played an important role in the progress of mankind by enabling persecuted groups to criticize oppressive practices and other matters of public importance, particularly where the alternative may be not speaking at all.341 The Talley Court342 relied on two cases that linked anonymous speech with the ability to freely associate in private. Both involved constitutional challenges343 to laws requiring members of the National Association for the Advancement of Colored People (NAACP) to furnish government officials with its member lists. In NAACP v. Alabama ex rel. Patterson,344 the lower court imposed a $100,000 civil contempt fine after the organization refused to comply with a court order requiring production of its lists.345 The Supreme Court lifted the judgment and fine, holding that immunity from state scrutiny of membership lists . . . is here so related to the right of the members to pursue their lawful private interests privately as to be constitutionally protected on privacy and free association grounds.346 Although association is not listed among the First Amendment's enumerated freedoms, the Court declared in Talley that freedom to engage in association for the advancement of beliefs and ideas is an inseparable aspect of . . . liberty.347 In Bates v. City of Little Rock,348 the NAACP's records custodian was tried, convicted, and fined for refusing to comply with state ordinances requiring that membership lists be public and subject to the inspection of any interested party at all reasonable business hours.349 The organization claimed a right on the part of its members to participate in NAACP activities anonymously and free from any restraints or interference from city or state officials -- a right that it felt has been recognized as the basic right of every American citizen since the founding of this country.350 The Supreme Court again struck down the ordinances, asserting that the freedoms of speech, a free press, freedom of association, and a right to peaceably assemble are protected from being stifled by [such] subtle governmental influence as a requirement to divulge membership lists.351 Over four decades later, the Court in McIntyre v. Ohio Elections Commission characterized anonymous speech as important to the preservation of personal privacy.352 In McIntyre, the plaintiff distributed leaflets opposing a school superintendent's referendum which were anonymously attributed to CONCERNED PARENTS AND TAX PAYERS.353 The Ohio Election Commission fined the plaintiff for violating state laws banning the distribution of unsigned leaflets.354 The U.S. Supreme Court reversed a lower court ruling upholding the ordinance, explaining that an author's decision to remain anonymous . . . is an aspect of the freedom of speech protected by the First Amendment -- even if [t]he decision in favor of anonymity [is] motivated . . . merely by a desire to preserve as much of one's privacy as possible.355 The Court extolled the virtues of anonymity as fostering [g]reat works of literature . . . under assumed names, enabling groups to criticize the government without the threat of persecution, and provid[ing] a way for a writer who may be personally unpopular to ensure that readers will not prejudge her message simply because they do not like its proponent.356 As core political speech, it concluded, [n]o form of speech is entitled to greater constitutional protection.357 Justice Stevens went on in his majority opinion to tether anonymity to the purpose behind the Bill of Rights and the First Amendment: to protect unpopular individuals from retaliation -- and their ideas from suppression -- at the hand of an intolerant society.358 Anonymity, he explained, is a shield from the tyranny of the majority.359 In a concurring opinion, Justice Thomas commented that the Founders' practices and beliefs on the subject indicate[] that they believed the freedom of the press to include the right to author anonymous political articles and pamphlets.360 That most other Americans shared this understanding, he added, is reflected in the Federalists' hasty retreat before the withering criticism of their assault on the liberty of the press.361 Justice Scalia dissented, arguing that anonymity facilitates wrong by eliminating accountability, which is ordinarily [its] very purpose.362 To treat all anonymous communication . . . in our society [as] traditionally sacrosanct, he continued, seems to me a distortion of the past that will lead to a coarsening of the future.363 In Watchtower Bible & Tract Society of New York, Inc. v. Village of Stratton,364 the Court struck down an ordinance requiring permits for door-to-door canvassing as a prior restraint on speech but also because the law vitiated the possibility of anonymous speech.365 It characterized the permit requirement as result[ing] in a surrender of . . . anonymity -- even where circulators revealed their physical identities -- because strangers to the resident certainly maintain their anonymity.366 The Court was thus unmoved by the fact that speakers who ring doorbells necessarily make themselves physically known to their audience, thus revealing themselves to some extent. For the Court, it was the recognition that occurs when a name on a permit is connected to a face which triggered the Constitution's protection of anonymity. Most recently, a fractured plurality in Doe v. Reed367 upheld a state law compelling public disclosure of the identities of referendum petition signatories while squarely acknowledging the vitality of a First Amendment right to anonymous speech.368 Significantly, all but one Justice recognized that the government's ability to correlate identifying information with online data created a First Amendment hazard of unprecedented dimension. Writing for the majority, Chief Justice Roberts found that an individual's expression of a political view through a signature on a referendum petition implicated a First Amendment right.369 The Court nonetheless held that the state's interest in preserving the integrity of the electoral process and informing the public about who supports a petition justified the burdens of compelled disclosure.370 Justice Roberts made a point of deeming significant the plaintiffs' argument that, once on the Internet, their names and addresses could be matched with other publicly available information about them in what will effectively become a blueprint for harassment and intimidation.371 Because the majority only considered the facial challenge to the law, Justice Roberts found the burdens imposed by typical referendum petitions unlike those that the plaintiffs feared.372 Justice Alito wrote separately to emphasize that government access to personal data online gave rise to a strong as-applied challenge based on the individual . . . right to privacy of belief and association.373 He considered breathtaking the implications of the state's argument that it has an interest in providing information to the public about supporters of a referendum petition; if true, the State would be free to require petition signers to disclose all kinds of demographic information, including the signers' race, religion, political affiliation, sexual orientation, ethnic background, and interest-group memberships.374 Justice Alito added that the posting of names and addresses online could allow anyone with access to a computer [to] compile a wealth of information about all of those persons, with vast potential for use in harassment.375 Justice Thomas dissented on similar grounds, asserting that he would sustain a facial challenge precisely because [t]he advent of the Internet enables rapid dissemination of the information needed to threaten or harass every referendum signer, thus chill[ing] protected First Amendment activity.376 Concurring separately, Justice Scalia stood alone in his complete rejection of First Amendment protections for anonymous speech.377 When considered in conjunction with the digital-age Fourth Amendment cases, Doe is remarkable in its recognition of the pressures that modern technology puts on the viability of existing constitutional doctrine relating to individual privacy. Although Jones addressed GPS monitoring under the Fourth Amendment, Justice Sotomayor invoked the First Amendment to emphasize that [a]wareness that the Government may be watching chills associational and expressive freedoms, and that the Government's unrestrained power to assemble data that reveal private aspects of identity is susceptible to abuse.378 When inexpensive technology is paired with massive amounts of readily accessible personal information and unfettered government discretion to track individual citizens, she explained, democracy itself suffers.379 Although pre-digital-age Fourth Amendment case law appears to paint FRT surveillance into a doctrinal corner, in the right case the Supreme Court may well find constitutional limits on surveillance conducted with cutting-edge technology like FRT and publicly available data. The next Part offers guidelines derived from the Court's Fourth and First Amendment jurisprudence which courts and legislators should bear in mind in crafting legal limits on surveillance through technologies like FRT.380

4th amendment
It incentivizes officers to ignore probable cause.
Ganeva, 11. Tana, AlterNet, Aug. 30. 5 unexpected places you can be tracked
with facial recognition technology. JJZ
http://www.alternet.org/story/152231/5_unexpected_places_you_can_be_tracked_with_facial_recognition_technology
Earlier this summer Facebook rolled out facial recognition software that identifies users even when they appear in untagged photos. Like every other time the social networking site has introduced a creepy, invasive new feature, they made it the default setting without telling anyone. Once people realized that Facebook was basically harvesting biometric data, the usual uproar over the site's relentless corrosion of privacy ensued. Germany even threatened to sue Facebook for violating German and EU data protection laws and a few other countries are investigating. But facial recognition technology is hardly confined to Facebook -- and unlike the social networking site, there's no "opt-out" of leaving your house. Post-9/11, many airports and a few cities rushed to install cameras hooked to facial recognition technology, a futuristic apparatus that promised to pick out terrorists and criminals from milling crowds by matching their faces to biometric data in large databases. Many programs were abandoned a few years later, when it became clear they accomplished little beyond creeping people out. Boston's Logan Airport scrapped face recognition surveillance after two separate tests showed only a 61.4 percent success rate. When the city of Tampa tried to keep tabs on revelers in the city's night-club district, the sophisticated technology was bested by people wearing masks and flicking off the cameras. Human ingenuity aside, most facial recognition software could also be foiled by eyewear, a bad angle or somebody making a weird face. But nothing drives innovation like the promise of government contracts! In the past few years, face recognition technology has advanced substantially, moving from 2-d to 3-d scanning that can capture identifying information about faces even in profile. Another great leap forward, courtesy of Identex (now L-1 Identity Solutions, Inc.), combines geometric face scanning and "skinprint" technology that maps pores, skin texture, scars and other identifying facial marks captured in high-resolution photos. As face recognition and other biometrics advance, the technology has begun to proliferate in two predictable realms: law enforcement and commerce. Here are 5 places besides Facebook you might encounter face recognition and other biometric technology -- not that, for the most part, you would know it if you did. 1. In the fall, police officers from 40 departments will hit the streets armed with the Mobile Offender Recognition and Information System (MORIS) device. The gadget, which attaches to an iPhone, can take an iris scan from 6 inches away, a measure of a person's face from 5 feet away, or electronic fingerprints, according to Computer Vision Central. This biometric information can be matched to any database of pictures, including, potentially, one of the largest collections of tagged photos in existence: Facebook. The process is almost instant, so no time for a suspect to opt out of supplying law enforcement with a record of their biometric data. Lee Tien of the Electronic Frontier Foundation told AlterNet that while it's unclear how individual departments will use the technology, there are two obvious ways it tempts abuse. Since officers don't have to haul in an unidentified suspect to get their fingerprints, they have more incentive to pull people over, increasing the likelihood of racial profiling. The second danger lurks in the creation and growth of personal information databases. Biometric information is basically worthless to law enforcement unless, for example, the pattern of someone's iris can be run against a big database full of many people's irises.

Face recognition technology is spreading and constitutes a
fundamental speed bump for Fourth Amendment jurisprudence.
Fretty 11 Associate at Irell & Manella, LLP, in Los Angeles, California,
J.D from UCLA School of Law (Douglas A., Face-Recognition Surveillance: A
Moment of Truth for Fourth Amendment Rights in Public Places, Vol 16 No 3, Fall
2011, Virginia Journal of Law and Technology,
http://www.vjolt.net/vol16/issue3/v16i3_430-Fretty.pdf) NAR
Since the 2001 Super Bowl, when Tampa Bay installed face-recognizing cameras in its stadium to catch criminals attending the big game, Americans have been increasingly monitored with face-recognition technology (FRT). Though the technique remains crude, face-based surveillance is already used in airports and on city streets to detect fugitives, teenage runaways, criminal suspects, or anyone who was ever arrested. As it spreads, FRT will be an unusually fraught topic for courts to address, because it straddles so many fault lines currently lying beneath our Fourth Amendment jurisprudence. These include whether: (1) people enjoy a reasonable expectation of anonymity in public, (2) a seizure can occur without halting a person's movement, (3) long-term aggregation of data about individuals can constitute a search, and (4) the probable-cause standard tolerates generalized surveillance with a high rate of false positives. These fault lines are not minor questions but fundamental challenges of the digital-surveillance movement. While most courts to address these issues have erred toward diminished Fourth Amendment protection, this Article cites an emerging minority that would reclaim basic privacy rights currently threatened by electronic monitoring in public.

The Fourth Amendment can be stretched to allow people to be
surveilled, but holding that people should expect to be
RECOGNIZED fundamentally hollows out the amendment.
Fretty 11 Associate at Irell & Manella, LLP, in Los Angeles, California,
J.D from UCLA School of Law (Douglas A., Face-Recognition Surveillance: A
Moment of Truth for Fourth Amendment Rights in Public Places, Vol 16 No 3, Fall
2011, Virginia Journal of Law and Technology,
http://www.vjolt.net/vol16/issue3/v16i3_430-Fretty.pdf) NAR
When people exit their homes, they risk being observed by others and thereby forego any reasonable expectation of not being captured by surveillance, even if they believe they are not observed by anyone.82 The case of Edward Kowalski illustrates the point.83 Mr. Kowalski suffered a neck injury while working for the Pennsylvania State Police and, a few months after filing for workers' compensation, took a vacation to Florida.84 While at the beach with his wife, he was unknowingly videotaped for days by a private investigator, hired by the State Police to verify Mr. Kowalski's medical condition.85 Though most people would not expect or want to be surreptitiously recorded while sunbathing, Mr. Kowalski had no expectation of privacy and therefore no Fourth Amendment claim against the State Police.86 This doctrine extends even to secluded spaces such as the elevators and hallways of commercial buildings, where recessed cameras often record goings-on.87 Government agencies have a strong argument, then, that where people lack an expectation of not being observed, they equally lack an expectation of not being recognized. Because one could unexpectedly be recognized by a fellow pedestrian, so would go the argument, one cannot expect that FRT-equipped cameras will not match one's face against a government photobase. This reasoning may strike some as strained, but it is the analysis that the Supreme Court has applied to surveillance since 1986, when California v. Ciraolo and Dow Chemical Co. v. United States were decided on the same day.88 The cases presented similar facts. In Ciraolo, police officers flew an airplane 1,000 feet over a suspect's fenced-off property and observed a small marijuana field.89 In Dow Chemical, EPA agents photographed the company's property from varying altitudes with a precision aerial mapping camera.90 Because the evidence gathering in both cases occurred from public airspace, the Court reasoned, any air traveler could have observed what the government agents did, had they bothered to look down.91 EPA's reliance on a sophisticated camera did not amount to a search, said the Court, because: (1) the camera was available for public use,92 and (2) the agents used the camera only to augment their natural sensory abilities.93 The first fact matters because, if aerial mapping cameras are available in commerce, Dow could not have expected its land to be immune from the technology.94 The second fact reflects the Court's view that, as long as technology does not give police novel powers of perception -- the ability to see through walls or hear private conversations95 -- sensory-enhancing tools are not offensive to public expectations.96 Based on the example of Dow, police are able to enhance their noses with drug-sniffing dogs97 and enhance their eyes with telescopes and binoculars.98 Police cannot, however, aim a heat-sensing camera at a suspect's garage, since this technique is uncomfortably analogous to looking through a wall into a private space.99 Still, as Justice Powell admonished in his Dow dissent, the availability and sensory enhancement tests inevitably abrogate public privacy as snooping technology becomes more pervasive.100 Linking surveillance cameras to FRT, then, arguably only enhances the police's already-existing senses: many surveillance advocates posit that scanning a face with FRT is simply a highly efficient version of looking through a traditional mug shot book.101 Further support comes from cases where the police have sought to subpoena a suspect's handwriting or voice sample without a warrant. Because a person's handwriting and speech are frequently made public, the Court upholds such subpoenas, even though the requested sample is for the unusual purpose of matching the suspect's writing or speech to that of a criminal.102 The pro-FRT interpretation is that, just as the government can demand a voice recording for matching purposes, so too can the government digitize a pedestrian's likeness for processing with a face-matching algorithm. As the Court stated in dictum in United States v. Dionisio, No person can have a reasonable expectation that others will not know the sound of his voice, any more than he can reasonably expect that his face will be a mystery to the world.103 Though these cases were not decided in the surveillance context and so would not bind an FRT dispute, they foreshadow the Court's low-ebbing protection of facial privacy. Nevertheless, challengers to FRT should engage the Harlan standard head-on by demonstrating that Americans reasonably expect not to be identified in public by sophisticated algorithms. Indeed, the Court has at times cast itself as a bulwark against novel technology that takes away privacies we once took for granted.104 As evidence that people expect a degree of anonymity while moving in public, civil libertarians could point to the popular outcries that often accompany a city's installation of face-recognizing cameras.105 Public reaction to Tampa Bay's use of FRT at the Super Bowl was overwhelmingly negative;106 the subsequent installation of FRT cameras in Tampa's nightlife district prompted vociferous protests, effectively ending the city's FRT experiment two years later.107 Courts may respond that a person's outrage means nothing at the point at which surveillance technology meets the Dow test. This argument, made by lower courts in other contexts, is that as long as people know a technology could conceivably be used against them by strangers, the government's use of the technology is not a constitutional issue.108 As articulated in one district opinion, The proper inquiry . . . is not what a random stranger would actually or likely do [with surveillance technology], but rather what he feasibly could.109 Members of the public could conceivably use an online FRT program such as Polar Rose to identify strangers on the street based on a furtively snapped digital photo.110 Making such a scenario all the more plausible, Google is now building an application that would locate a person's online Google Profile based on any photo of the person's face.111 Thus, like it or not, under a strict reading of the Dow line, pedestrians have relinquished their expectation of facial-identity privacy. Against this mechanical reading, however, a small revolt is stirring. In August 2010, the D.C. Circuit in United States v. Maynard held that police could not track suspects via their cell phone records without a warrant.112 The holding was despite the government's truthful argument that a cell phone company could easily track any subscriber's movements by cataloguing the cell phone towers that received the subscriber's signal.113 Maynard reviewed the Court's important reasonable expectation cases114 and concluded: In considering whether something is exposed to the public . . . we ask not what another person can physically and may lawfully do but rather what a reasonable person expects another might actually do.115 Were the D.C. Circuit to review state-run FRT, the inquiry would then be whether D.C. pedestrians expect their fellow travelers to discover their identities via FRT software. Three weeks after Maynard, a district court followed its result, emboldened by several rulings in recent years that reclaim domains of personal privacy threatened by encroaching technology.116 Though the Maynard reasoning is for now the minority view,117 it reflects a broadly felt instinct to reclaim the reasonable expectation test as a guardian of Fourth Amendment rights in public spaces.118 Face-recognition challenges offer the potential to push Maynard further into the mainstream.

Function Creep
Monitoring technology is susceptible to function creep
Brey, 04. P. (2004). 'Ethical Aspects of Face Recognition Systems in Public
Places,' Journal of Information, Communication & Ethics in Society, 2:2, 97-109. JJZ
http://www.utwente.nl/bms/wijsb/organization/brey/Publicaties_Brey/Brey_2004_Face
-Recognition.pdf
A second, and more pressing, problem with facecams is the problem of function creep, an expression that I borrow from RAND report author John Woodward. Function creep is the phenomenon by which a technology designed for a limited purpose may gain additional, unanticipated purposes or functions. This may occur either through institutionalized expansions of its purposes or through systematic abuse. In relation to Smart CCTV, it is the problem that, because of the flexibility of the technology, the purposes for which the system is used may be easily extended from recognizing criminals and missing persons to include other purposes. There are, I claim, four basic ways in which Smart CCTV can become the subject of function creep. The first is by widening of the database. The databases used in London, Birmingham, Tampa and Virginia Beach only included felons on a warrant, past sexual offenders and missing persons. Such databases can be easily expanded with the use of already existing databases such as those of the departments of motor vehicles (DMVs) in the U.S., which include digitized photographs of licensed drivers. It is then relatively easy to include new categories of people that are to be monitored, like people with misdemeanors, political activists, or people with a certain ethnic background. Needless to say, some of these expansions, if they were to occur, would be morally highly problematic. The second way in which function creep may occur is by purpose widening. This is the widening of the purpose for which the technology is used. For example, a police force using Smart CCTV may start using it not only to identify wanted individuals in crowds, but for example to do routine analysis of the composition of crowds in public places, or to do statistical analysis of faceprints for the purpose of predicting criminal activity, or to track individuals over longer distances. Smart CCTV has the potential to do these things, and police departments may be tempted to use the technology for such additional purposes in their efforts to fight crime and improve the quality of life in neighborhoods. A third way for function creep to occur is by user shifts. Systems, once developed, may come to be used by new types of users. For instance, the FBI or CIA may require access to a system used by a police department in a search for terrorists. Or a city government or commercial organization may ask a police department to use the system for its demographic research. Also, individual operators may be using the system for their own personal reasons. As Reuters journalist Richard Meares reports, there have been several occurrences of CCTV operators being sacked because of their repeated abuse of the system, for example by tracking and zooming in on attractive women.33 A fourth and final occurrence of function creep lies in domain shifts: changes in the type of area or situation in which the system is used, such as changes from city neighborhoods to small villages or nature parks, or from public to private areas, or from domestic areas to war zones. Function creep in Smart CCTV may hence occur in several ways, which may add up to result in new uses of the technology for new purposes by new users in new domains. Studies of technology use have shown that function creep almost invariably occurs when a new technology is used, and should therefore be taken into account.34 Function creep can be limited by strict regulation of the technology (which is not currently in place), but cannot be wholly avoided. This imposes an obligation on the developers and users of the technology, therefore, to anticipate function creep and to take steps to prevent undesirable forms of function creep from occurring.

FRT is susceptible to function creep; it will be used for more
than finding criminals.
Lochner 13 J.D. Candidate, University of Arizona James E. Rogers
College of Law, 2013 (Sabrina A., SAVING FACE: REGULATING LAW
ENFORCEMENTS USE OF MOBILE FACIAL RECOGNITION TECHNOLOGY & IRIS SCANS,
http://www.arizonalawreview.org/pdf/55-1/55arizlrev201.pdf, ARIZONA LAW REVIEW
VOL. 55:201, 2013) NAR
E. Function Creep. Function creep arises when technology designed to be used in a specific way or for a distinctive purpose starts to be used in unanticipated ways or to serve unintended purposes.195 A concern exists that the use of MORIS will shift from storing photographs of the guilty196 to storing photographs of the innocent. This is problematic because people in prison or on probation have a reduced expectation of privacy.197 The lower level of privacy afforded to prisoners and probationers comes from the state's "special needs" function to ensure safety. Conversely, innocent citizens retain protection from unreasonable, warrantless searches. Currently, MORIS does not save the images run through the program,198 but relying on a technical restraint to construct a safeguard is unworkable. BI2 Technologies, or a competing company, could choose to store these images. BI2 Technologies has already expressed interest in compiling data from other databases; it sees a benefit in including driver's license photos.199 Similar programs, like the Automated License Plate Recognition system, store license plate numbers of the innocent and guilty so the database can be mined during Amber Alerts or for leads in cases.200 If police know that the databases MORIS uses could be mined in other events, they may have an incentive to expand the databases by taking photographs of persons without any level of suspicion for wrongdoing. And although the Automated License Plate Recognition Program is legal, there is something inherently more private about our faces than our license plates. "Our country has a long history of function creep of databases, which are created for one discrete purpose and which, despite the initial promises of their creators, eventually take on new functions and purposes," said Barry Steinhardt, ACLU associate director, in 2000.201 For example, social security numbers that were originally to be used for retirement purposes are now also used to identify individuals in a variety of settings.202 Many law enforcement agencies using MORIS have vowed to only use the technology in certain circumstances. The Pinellas County Sheriff's Office, in Florida, obtains consent before taking someone's picture.203 The Brockton, Massachusetts, police department announced that it would only use MORIS when actively searching for someone or when someone has committed an offense.204 Likewise, the Pinal County Sheriff's Office said it will only use FRT to identify people suspected of arrestable offenses or people from whom the officers have obtained consent.205 However, these law enforcement agencies could choose to expand the use of FRT beyond what they have set forth as their limits. Some police departments have already demonstrated a willingness to use stored pictures and information about license plates to follow gang members. The Los Angeles Police Department wanted to use license plate information for more purposes but had to limit its use due to public pushback.207 While it is a violation of Pinellas County Sheriff's Office's guidelines to learn the identity of people without consent, it would be acceptable under the Fourth Amendment.208 Therefore, MORIS's use creates a potential for function creep.

The status quo takes advantage of gray areas within the law.
Steel, 11. Emily, Wall St. Journal, July 13. Device raises fear of facial profiling. JJZ
http://www.wsj.com/articles/SB10001424052702303678704576440253307985070
Dozens of law-enforcement agencies from Massachusetts to Arizona are preparing to outfit their forces with controversial hand-held facial-recognition devices as soon as September,
raising significant questions about privacy and civil liberties. Police across the nation will soon be using facial-recognition devices that easily connect to an iPhone. Civil liberties groups
have warned that the technology could infringe on privacy rights. With the device, which attaches to an iPhone, an officer can snap a picture of a face from up to five feet away, or scan
a person's irises from up to six inches away, and do an immediate search to see if there is a match with a database of people with criminal records. The gadget also collects fingerprints.
Until recently, this type of portable technology has mostly been limited to military uses, for instance to identify possible insurgents in Iraq or Afghanistan. The device isn't yet in police
hands, and the database isn't yet complete. Still, the arrival of the new gadgets, made by BI2 Technologies of Plymouth, Mass., is yet another sign that futuristic facial-recognition technologies are becoming reality after a decade of false starts. The rollout has raised concerns among some privacy advocates about the potential for misuse. A fundamental question is whether or not using the device in certain ways would constitute a "search" that requires a warrant. Courts haven't decided the issue. It is
generally legal for anyone with a camera, including the police, to take pictures of people freely passing through a public space. (One exception: Some courts have limited video
surveillance of political protests, saying it violates demonstrators' First Amendment rights.) However, once a law-enforcement officer stops or detains someone, a different standard

might apply, experts say. The Supreme Court has ruled that there must be "reasonable suspicion" to force individuals to be fingerprinted. Because face- and iris-recognition technology hasn't been put to a similar legal test, it remains "a gray area of the law,"

says Orin Kerr, a law professor at George Washington University with an expertise in search-and-seizure law. "A warrant might be required to force someone to open their eyes." BI2 says
it has agreements with about 40 agencies to deliver roughly 1,000 of the devices, which cost $3,000 apiece. Some law-enforcement officials believe the new gear could be an important
weapon against crime. "We are living in an age where a lot of people try to live under the radar and in the shadows and avoid law enforcement," says Sheriff Paul Babeu of Pinal County,
Ariz. He is equipping 75 deputies under his command with the device in the fall. Mr. Babeu says his deputies will start using the gadget try to identify people they stop who aren't
carrying other identification. (In Arizona, police can arrest people not carrying valid photo ID.) Mr. Babeu says it also will be used to verify the identity of people arrested for a crime,
potentially exposing the use of fake IDs and quickly determining a person's criminal history. Other police officials urge caution in using the device, which is known as Moris, for Mobile
Offender Recognition and Information System. Bill Johnson, executive director at the National Association of Police Organizations, a group of police unions and associations, says he is
concerned in particular that iris scanning, which must be done at close range and requires special technology, could be considered a "search." "Even technically if some law says you can
do it, it is not worth itit is just not the right thing to do," Mr. Johnson says, adding that developing guidelines for use of the technology is "a moral responsibility." Sheriff Joseph
McDonald Jr. of Plymouth County in Massachusetts, who tested early versions of the device and will get a handful of them in the fall, says he plans to tell his deputies not to use facial

recognition without reasonable suspicion. "Two hundred years of constitutional law isn't going away," he says. BI2 says it urges officers to use it only when they have reasonable
suspicion of criminal activity. "Sheriffs and law enforcement should not use this on anybody but suspected criminals," says Sean Mullin, BI2's chief executive. With this device, made by
BI2 Technologies, an officer can snap a picture of a face from up to five feet away, or scan a person's irises from up to six inches away, and do an immediate search to see if there is a
match with a database of people with criminal records. The Department of
Justice referred questions about the device to the Federal Bureau of Investigation, which didn't respond to a request for comment by late Tuesday. Facial-recognition technology is going
mainstream not just in police departments. Facebook Inc., the social-networking giant, recently rolled out facial-recognition technology to let its users more easily identify their friends in
photos. Several iPhone and Android apps claimwith varying successto be able to use cellphone cameras to identify Facebook friends by snapping pictures of them. Middle Eastern
and European countries use iris scans to recognize travelers at airports and border crossings. Some U.S. troops carry hand-held devices to capture faces, eyes and fingerprints of "known
and suspected insurgents," according to Lt. Col. Thomas Pratt of the Defense Department's Biometric Identity Management Agency. The agency says more than 7,000 devices,
manufactured by L-1 Identity Solutions Inc. and Cross Match Technologies Inc., are being used in the field. Internet search giant Google Inc. also considered, but rejected, a project that
would have offered facial recognition on mobile phones. Google's technology would have let cellphone users take pictures of people, then conduct an image search on Google to find a
person with matching facial features. Google's chairman, Eric Schmidt, discussed the decision to shut down the project at a May conference. "I'm very concerned by the union of mobile
tracking and face recognition," he said. "My guess is in free societies, it will be regulated." A spokesman for Google says the company won't launch the facial recognition tools "unless we
have strong privacy protections in place." Face- and iris-recognition technologies are still a small portion, about 16%, of the $4.3 billion biometrics industry, which is dominated by
fingerprint technology, according to market research by New York-based International Biometric Group LLC. The technology has advanced greatly since a series of embarrassing setbacks
after the Sept. 11, 2001, terror attacks. In 2002, Boston's Logan International Airport tested facial-recognition software, but pulled the plug after cameras failed to recognize airport
employees whose photos were in the system. Since then, face-recognition technology has improved, and has been augmented to recognize irises, which are unique to individuals. BI2's
device attaches to the back of an iPhone, adding about 1.75 inches to its width. It plans to offer a version for Android phones in the future. The company says Moris will be sold only to
law-enforcement agencies, although it is considering building applications for the health-care and financial industries. The device links to a database of criminal records, iris and face
images contributed by local law enforcement that use other BI2 technologies. "The database is the golden nugget of the whole thing," says BI2's Mr. Mullin. The database includes face
and iris data collected primarily when people are admitted to or released from a correctional facility, Mr. Mullin says. Some states also are contributing mug shots to the database. BI2
says it doesn't sell the data, since it doesn't own it. The company hopes to eventually access additional data from larger state and federal databases, such as the FBI's registry of
fingerprints or the driver's-license photos from motor-vehicle departments. William Conlon, chief of police in Brockton, Mass., says he doesn't consider the mobile device to be an
invasion of privacy. "It is just a picture. If you are out in public, I can take a picture of anybody," says Mr. Conlon, whose police department tested a prototype last summer and is
planning to adopt the device. "Most people will say, 'I don't have anything to hide, go ahead.'"

2AC Stuff

Yes Topical
We meet- the plan curtails domestic surveillance.
Brown 14 Associate Professor of Law, University of Baltimore School
of Law. B.A., Cornell; J.D., University of Michigan (Kimberly N, ANONYMITY,
FACEPRINTS, AND THE CONSTITUTION, GEO. MASON L. REV. [VOL. 21:2, 2014,
http://georgemasonlawreview.org/wp-content/uploads/2014/03/Brown-Website.pdf)
NAR
The law enforcement and national security benefits of using FRT in targeted criminal investigations are self-evident.186 But its potential for enabling surveillance of common citizens is troubling. Before FRT, driver's license photos were of limited utility to investigators unless a subject's name was already known.187 Law enforcement can now capture a facial image of an unknown individual without the subject's knowledge, match the image with other bits of data using FRT algorithms, and come up with a rich dossier of personal information.188 Although fingerprint data similarly enables investigators to attach a name to unidentified biometric data,189 FRT goes much further. Once a person is identified, rapid correlations with countless other images and data points in cyberspace and self-contained databases can detect past activity and predict future movements.190 Taken to its extreme, this technology could be used to perform identity sweeps of random subjects for relatively benign activities like walking a dog without a leash.191 A few states have imposed legislative barriers to police collection of and access to FRT data, but many others have not.192 The feasibility of domestic police state surveillance operations is thus no longer a matter of science fiction.193 FRT renders innocent people susceptible to intrusive police investigations for being tagged in a photo with someone suspected of a crime.194

Prefer contextual we-meets- key to legal education.


Millhiser 15 Reporter @ Think Progress Quoting Supreme Court
officials and history,
(http://thinkprogress.org/justice/2015/02/25/3627161/supreme-court-just-explainedcase-obamacare-lose/, Ian, FEBRUARY 25, 2015) NAR
An obscure case involving fish provides a road map that the Supreme Court could use
to reject a much more high-profile case seeking to gut the Affordable Care Act . Indeed,
while the justices divided 5-4 over the correct result in the fish case, Yates v. United States, 8 of the 9 justices
agreed with broad legal principles that are incompatible with the legal attack on Obamacare. On the surface, Yates
and the health care case, King v. Burwell, have little in common. John Yates is a commercial fisherman who was
boarded by law enforcement while at sea and caught with several dozen undersized fish. He later ordered a
crewman to toss these fish overboard, and was charged with violating a federal law which provides that anyone
who knowingly alters, destroys, mutilates, conceals, covers up, falsifies, or makes a false entry in any record,
document, or tangible object with the intent to impede, obstruct, or influence the investigation or proper
administration of any matter within the jurisdiction of any department or agency of the United States is guilty of a
felony. King, by contrast, asks the Supreme Court to read six words of the Affordable Care Act an exchange
established by the State out of context in order to cut off tax credits that enable millions of Americans to afford
insurance. Obamacare gives states flexibility to decide whether to set up their own exchange where people can
buy subsidized health plans or to allow the federal government to do so for the state. The six words the King plaintiffs rely upon appear in a provision that, if read entirely in isolation, seems to suggest that tax credits are unavailable in federally-run exchanges. If that provision is read in context with the entire law, however, it is clear that the law
provides for credits in all 50 states. In determining that the fisherman in Yates could not be charged under the
statute he was accused of violating, Justice Ruth Bader Ginsburg's opinion for a plurality of the Court rejects the view that provisions of a law can be read in isolation. "Whether a statutory term is unambiguous," Ginsburg explains, "does not turn solely on dictionary definitions of its component words. Rather, '[t]he plainness or ambiguity of statutory language is determined [not only] by reference to the language itself, [but as well by] the specific context in which that language is used, and the broader context of the statute as a whole.'" Though a word's meaning ordinarily lines up with its dictionary definition, ". . . the same words, placed in different contexts, sometimes mean different things." Yates' conviction hinged upon the meaning of the words "tangible object" in the law he was charged with violating. Though, under the dictionary definitions of the words "tangible" and "object," a fish certainly qualifies as a tangible object, Ginsburg writes that this fact is not dispositive of its meaning under this particular law. Her opinion then cites other evidence in the law indicating that Congress intended to give these two words a narrow meaning, including the fact that it was passed as part of a broader statute targeting corporate document-shredding to hide evidence of financial wrongdoing, and that the words "tangible object" appear under the caption "Destruction, alteration, or falsification of records in Federal investigations and bankruptcy." Based on this evidence, Ginsburg concludes that this particular crime should only apply to objects used to record or preserve information, not to fish. Though Justice Elena Kagan authored a dissent, joined by three other justices, disagreeing with Ginsburg's analysis of this particular law, her dissent includes an even stronger statement against the notion that a few words of a law can be divorced from their

broader context. "I agree with the plurality (really, who does not?) that context matters in interpreting statutes," Kagan writes. "We do not construe the meaning of statutory terms in a vacuum. Rather, we interpret particular words in their context and with a view to their place in the overall statutory scheme." And sometimes that means, as the plurality says, that

the dictionary definition of a disputed term cannot control. Chief Justice John Roberts, along with Justices Stephen
Breyer and Sonia Sotomayor joined Ginsburgs opinion, while Justices Antonin Scalia, Anthony Kennedy and
Clarence Thomas joined Kagan's opinion. That's eight justices who reject the context-free way of reading federal laws that animates the King litigation. Only Justice Samuel Alito, who wrote a brief opinion
agreeing with the result in Ginsburgs opinion but not with its rationale, did not author or join an opinion that casts
doubt over the King plaintiffs reading of the Affordable Care Act. None of this means, of course, that King will be an
8-1 decision upholding Obamacare. The last time the fate of the Affordable Care Act was before the Supreme Court,
Justice Scalia voted to repeal the entire law, even though he once authored an opinion that left little doubt that
Obamacare is constitutional. Nevertheless, the fact that the justices would hand down a decision like Yates just one
week before they hear oral arguments in King is a hopeful sign for the millions of people whose health care is
threatened by the legal attack on Obamacare.

A2: Real-ID
No link.
Tatelman 8 - Legislative Attorney American Law Division (Todd B, The
REAL ID Act of 2005: Legal, Regulatory, and Implementation Issues, CRS Report for
Congress, April 1, 2008) NAR
Contrary to the assertion of some REAL ID opponents,131 neither biometric
technology nor radio-frequency identification (RFID) is required by the regulations to be used
on REAL ID-compliant licenses or personal identification cards. Although these more
advanced technologies are not required, the machine-readable requirement does raise security
and personal privacy concerns that were addressed by DHS in the final rule. With respect to security of
the information contained on the bar code, many commentators suggested that DHS prohibit the collection and
storage of the data on the bar codes by third parties, specifically private businesses.132 DHS responded by noting
that although the underlying statute does not provide them with the legal authority to prohibit such data collection,
at least four states (California, Nebraska, New Hampshire, and Texas) currently have such provisions in place,
and DHS is supportive of additional state efforts in this regard.133

If you want to thump the DA read this article and cut state-by-state
^^^^http://object.cato.org/sites/cato.org/files/pubs/pdf/pa749_web_1.pdf^^^^^^

A2: Business Confidence/Innovation DA

2AC
N/U- existing lawsuits should have triggered the disad.
Sobel, 6/11. Ben, Washington Post, 2015. Facial recognition technology is
everywhere. It may not be legal.
http://www.washingtonpost.com/blogs/the-switch/wp/2015/06/11/facial-recognitiontechnology-is-everywhere-it-may-not-be-legal/
Being anonymous in public might be a thing of the past. Facial recognition technology is already being deployed to let brick-and-mortar stores scan the face of every shopper, identify returning customers and offer them individualized pricing or find pre-identified shoplifters and known litigious individuals. Microsoft has patented a billboard that identifies you as you walk by and serves ads personalized to your purchase history. An app called NameTag claims it can identify people on the street just by looking at them through Google Glass. Privacy advocates and representatives from companies like Facebook and Google are meeting in Washington on Thursday to try to set rules for how companies should use this powerful technology. They may be forgetting that a good deal of it could already be illegal. There are no federal laws that specifically govern the use of facial recognition technology. But while few people know it, and even fewer are talking about it, both Illinois and Texas have laws against using such technology to identify people without their informed consent. That means that one out of every eight Americans currently has a legal right to biometric privacy. The Illinois law is facing the most public test to date of what its protections mean for facial recognition technology. A lawsuit filed in Illinois trial court in April alleges Facebook violates the state's Biometric Information Privacy Act by taking users' faceprints without even informing its users, let alone obtaining their informed written consent. This suit, Licata v. Facebook, could reshape Facebook's practices for getting user consent, and may even influence the expansion of facial recognition technology. How common, and how accurate, is facial recognition technology? You may not be walking

by ads that address you by name, but odds are that your facial geometry is already being analyzed regularly. Law enforcement agencies deploy facial recognition technology in public and can identify someone by searching a
biometric database that contains information on as many as one-third of Americans. Companies like Facebook and Google routinely collect facial recognition data from their users, too.
(Facebooks system is on by default; Googles only works if you opt in to it.) Their technology may be even more accurate than the governments. Googles FaceNet algorithm can identify faces with 99.63 percent accuracy.
Facebooks algorithm, DeepFace, gets a 97.25 percent rating. The FBI, on the other hand, has roughly 85 percent accuracy in identifying potential matchesthough, admittedly, the photographs it handles may be harder to analyze
than those used by the social networks. Facebook and Google use facial recognition to detect when a user appears in a photograph and to suggest that he or she be tagged. Facebook calls this Tag Suggestions and explains it as
follows: We currently use facial recognition software that uses an algorithm to calculate a unique number (template) based on someones facial featuresThis template is based on your profile pictures and photos youve been
tagged in on Facebook. Once it has built this template, Tag Suggestions analyzes photos uploaded by your friends to see if your face appears in them. If its algorithm detects your face, Facebook can encourage the uploader to tag
you. With the boom in personalized advertising technology, a facial recognition database of its users is likely very, very valuable to Facebook. The company hasnt disclosed the size of its faceprint repository, but it does acknowledge
that it has more than 250 billion user-uploaded photos with 350 million more uploaded every day. The director of engineering at Facebooks AI research lab recently suggested that this information was the biggest human dataset
in the world. Eager to extract that value, Facebook signed users up by default when it introduced Tag Suggestions in 2011. This meant that Facebook calculated faceprints for every user who didnt take the steps to opt out. The Tag
Suggestions rollout prompted Sen. Al Franken (D-Minn.) to worry that Facebook may have created the worlds largest privately held data base of faceprints without the explicit consent of its users. Tag Suggestions was more
controversial in Europe, where Facebook committed to stop using facial identification technology after European regulators complained. The introduction of Tag Suggestions is whats at issue in the Illinois lawsuit. In Illinois,
companies have to inform users whenever biometric information is being collected, explain the purpose of the collection and disclose how long theyll keep the data. Once informed, users must provide written release that they
consent to the data collection. Only after receiving this written consent may companies obtain biometric information, including scans of facial geometry. Facebook declined to comment
on the lawsuit and has not filed a written response in court. Its unclear whether todays paradigm for consent clicking a "Sign Up" button that attests you've read and agreed to a lengthy privacy policy fulfills the requirements
written into the Illinois law. Its also unclear whether the statute will cover the Tag Suggestions data that Facebook derives from photographs. If the law does apply, Facebook could be on the hook for significant financial penalties.
This case is one of the first applications of the Illinois law to facial recognition, and it will set a hugely important precedent for consumer privacy. Why biometric privacy laws? Biometric information like face geometry is high-stakes
data because it encodes physical properties that are immutable, or at least very hard to conceal. Moreover, unlike other biometrics, faceprints are easy to collect remotely and surreptitiously by staking out a public place with a
decent camera. Anticipating the importance of this information, Texas passed a law in 2001 that restricts how commercial entities can collect, store, trade in and use biometric data. Illinois passed a similar law in 2008 called the
Biometric Information Privacy Act, or BIPA. A year later, Texas followed up with another law to further regulate biometric data in commerce. The Texas laws were passed with facial recognition in mind. Brian McCall, now chancellor of
the Texas State University system, introduced both Texas bills during his tenure as a state representative. Legislation is seldom ahead of science, and in this case I felt it was absolutely necessary that legislation get ahead of
common practice," McCall explained. "And in fact, we were concerned about how the market would use personally identifiable information. Sean Cunningham, McCalls chief of staff, added the use of facial recognition by law
enforcement at the 2001 Super Bowl in Tampa helped bring the issue to their attention. However, it appears that the Texas statute has not been used very often to litigate the commercial collection of facial identification information.
On the other hand, the Illinois law was galvanized by a few high-profile incidents of in-state collection of fingerprint data. Most notably, a company called Pay By Touch had installed
machines in supermarkets across Illinois that allowed customers to pay by a fingerprint scan, which was linked to their bank and credit card information. Pay By Touch subsequently went bankrupt, and its liquidation prompted
concerns about what might happen to its database of biometric information. James Ferg-Cadima, a former attorney with the ACLU of Illinois who worked on drafting and lobbying for the BIPA, told me that the original vision of the bill
was tied to the specific issue that was presenting itself across Illinois, and that was the deploying of thumbprint technologies Oddly enough, Ferg-Cadima added, this was a bill where there was little voice from the private
business sector. This corporate indifference might be a thing of the past. Tech companies of all stripes have grown more and more interested in biometrics. Theyve become more politically powerful, too: For instance, Facebooks
federal lobbying expenditures grew from $207,878 in 2009 to $9,340,000 in 2014. Testing the Illinois law The crucial question here is whether the Illinois and Texas laws can be applied to todays most common uses of biometric
identifiers. What real-world business practices would meet the standard of informed consent that Illinois law requires for biometric data collection? When asked about the privacy law cited in the Licata case, Jay Edelson, the
managing partner of the firm representing the plaintiff, said, The key thing to understand is that almost all privacy statutes are really consent statutes. The lawsuit stands to determine precisely what kind of consent the Illinois law

demands. If the court finds that Facebook can be sued for violating the Illinois biometrics law, and that its opt-out consent framework for Tag Suggestions violated the law, it may upend the practices of one of the world's largest Internet companies, one that is possibly the single largest user of commercial facial recognition technology. And if the lawsuit fails for one reason or another, it would emphasize that regulation of facial recognition needs to take place on a federal level, if it is to happen at all. Either way, there's a chance this lawsuit will end up shaping the future of facial recognition technology.

N/U and turn- industry self-regulating now and wants stronger privacy standards.
Gross, 6/16. Grant, CIO News, 2015. Privacy groups to quit US talks on facial
recognition standards. JJZ
http://www.cio.com.au/article/577553/privacy-groups-quit-us-talks-facialrecognition-standards/
In many cases, facial recognition vendors have been more careful with deploying the technology than negotiators at the NTIA meetings have advocated, Bedoya added. "Due to state laws and just good business sense, most of the leading companies have refused to turn facial recognition on automatically," he said. "Instead, they turn it on only if customers choose to turn it on. Industry associations have staked out a position that is less protective of privacy than the companies they represent -- and far less protective of
what consumers deserve." If the NTIA process goes forward without privacy and consumer groups, that will raise questions about the product, Bedoya added. "If all consumer groups who have been active withdraw, I don't think it
can be called a 'multistakeholder' process," he said. "It can be called an 'industry' stakeholder process." Still,

one industry participant said Monday he remained optimistic that the NTIA process would produce a strong set of facial recognition privacy standards. Despite disagreements about the consent issues, participants have made a lot of progress, said Carl Szabo, policy counsel with NetChoice, an e-commerce trade group. "We're

getting to a point when we can start putting pen to paper," he said. The final standards need to incorporate compromise from both industry and privacy groups, Szabo added. All the new privacy standards being negotiated are
"actually limiting on business, in some capacity," he said. Since mid-2012, the NTIA has convened for a series of negotiations related to technology and privacy, with the first meetings focused on mobile application privacy. The NTIAled discussions produced a set of app privacy standards that some companies are now adopting, although two privacy groups declined to sign on to the final product. In March, the NTIA announced it would next host negotiations on
privacy standards for aerial drones.

No link - the aff only places restrictions on federal programs, which means there's zero risk we impact corporate investment.
Even if we're a regulation - only voting aff provides legal certainty to the industry to develop. That's a better internal link to market certainty and investment.

1AR
Turn - industry supports regulations on facial recognition.
Chayka, 14. Kyle, Newsweek, 4/25. Biometric Surveillance Means Someone Is
Always Watching. JJZ
http://www.newsweek.com/2014/04/25/biometric-surveillance-means-someonealways-watching-248161.html
In the private sector, efforts are being made to ensure face recognition isn't abused, but standards are similarly vague. A 2012 Federal Trade Commission report recommends that companies should obtain "affirmative express consent before collecting or using biometric data from facial images." Facebook collects face-prints by default, but users can opt out of having their face-prints collected. Technology entrepreneurs argue that passing strict laws before face recognition technology matures will hamper its growth. "What I'm worried about is policies being made inappropriately before their time," Animetrics's Schuepp says. "I don't think it's face recognition we want to pick on." He suggests that the technology itself is not the problem; rather, it's how the biometrics data are controlled. Yet precedents for biometric surveillance must be set early in order to control its application. "I would like to see regulation of this before it goes too far," Lynch says. "There should be laws to prevent misuse of biometric data by the government and by private companies. We should decide whether we want to be able to track people through society or not."

A2: Terror DA
N/U: Surveillance is proving to be less effective against terrorists
Economist Jan 17th 2015 | From the print edition
http://www.economist.com/news/briefing/21639538-western-security-agencies-arelosing-capabilities-they-used-count-getting-harder
ONCE the shock that a terrorist outrage generates begins to fade,
questions start to be asked about whether the security services could
have done better in preventing it. Nearly all the perpetrators of recent
attacks in the West were people the security services of their various
countries already knew about. The Kouachi brothers and Amdy Coulibaly were
no exception; the Direction Générale de la Sécurité Intérieure (DGSI), France's
internal security agency, and the police knew them to be radicalised and potentially
dangerous. Yet their plot or plots, which probably involved more people and may
have been triggered either by al-Qaeda in Yemen or the so-called Islamic State (IS)
in Syria, went undiscovered. There may have been a blunder, and there will
undoubtedly be lessons to be learned, just as there were in Britain after the 2013
murder of Fusilier Lee Rigby by Michael Adebolajo and Michael Adebowale, both of
whom had featured in several prior operations by MI5, the internal-security agency.
But it is worth reflecting on the extent to which Western security agencies have
succeeded in keeping their countries safe in the 13 years since September 2001.
And it is worth noting that their job looks set to get harder. Europe has suffered
many Islamist terrorist attacks in recent years, but before the assault on Charlie
Hebdo, only two of them caused more than ten deaths: the Madrid train attack in
May 2004 and the London tube and bus bombings 14 months later (see chart). This
was not for want of trying; intelligence sources say they have been thwarting
several big plots a year. Sometimes this has meant arresting the people involved:
more than 140 people have been convicted of terrorism-related offences in Britain
since 2010. But often plots have been disrupted in order to protect the public before
the authorities have enough evidence to bring charges. Three factors threaten this
broadly reassuring success. The first is the break-up of states in the Middle East.
The civil wars in Libya, Yemen and Syria mean there is a much broader
range of places and groups from which threats can come than there was
five years ago. And there has never previously been anything remotely on
the same scale as IS in terms of financial resources, number of fighters,
territory controlled, sophistication in its use of media and ability to
radicalise young Muslims in the West. Andrew Parker, the head of MI5, says
that since October 2013 there have been more than 20 plots either directed or
provoked by extremist groups in Syria. In September 2014 Abu Mohammed al-Adnani, an IS leader, told would-be recruits not to bother coming to Syria or Iraq but
to launch attacks in their home countries. Attempts to reduce the risks posed by
fighters who join the wars in the Middle East and then return to Europe range from
employment programmes (in Denmark) to banning their return unless they agree to
be monitored and tagged (in Britain). But the sheer number of those returning
makes it almost impossible to guarantee that all will be defanged. A second

problem for the security forces is that the nature of terrorist attacks has
changed. Al-Qaeda, and in particular its Yemeni offshoot al-Qaeda in the
Arabian Peninsula, is still keen on complex plots involving explosions and
airliners. But others prefer to use fewer people, as in commando-style
raids such as the one on Charlie Hebdo and lone-wolf attacks that are
not linked to any organisation. IS has called for attacks on soft targets in the
West by any means availableone method is to drive a car at pedestrians, as in
Dijon on December 21st last year. At any one time MI5 and DGSI will each be keeping an eye on around 3,000 people who range from fairly low-priority targets - people who hold extremist views that they may or may not one day want to put into practice - through those who have attended training camps or been involved in terrorist activity in the past to those who are thought likely to be actively plotting an attack. But only a small number at the top are subjected to resource-intensive surveillance. The amount of monitoring available for the others, particularly those
towards the bottom, varies widely. This provides holes for smaller plots to get
through. And a smaller plot can still be large in its outrage - see the decapitation of Fusilier Rigby - and in its body count. Anders Breivik killed 77 Norwegians in 2011
with no co-conspirators at all. Even when there are identified co-conspirators,
though, it is getting harder to tell what they might be up to. This is
because of the third factor that is worrying the heads of Western security
agencies; the increasing difficulty they say they have in monitoring the
communications within terrorist networks. The explosion of often-encrypted new means of communication, from Skype to gaming forums to
WhatsApp, has made surveillance far more technically demanding and in
some instances close to impossible. Apple's latest mobile operating system comes with default encryption and Google's Android is about to follow suit. In such systems the companies do not have access to their customers' passwords and therefore cannot provide security agencies access to
messages even if the law requires them to. They say that they are simply
responding to the demands of their users for privacy, but the heads of the security
agencies see the new approach as, at least in part, a response to what Edward
Snowden, a contractor for America's intelligence services, revealed about their
abilities in 2014. The tech firms are very different from the once-publicly owned
telephone companies that spooks used to work with, which were always happy to
help with a wire tap when asked. Some, especially some of the smaller ones, have a
strong libertarian distrust of government. And technology tends to move faster than
legislation. Although the security agencies may have ways into some of the new
systems, others will stymie them from the modern equivalent of steaming open
envelopes. The citizens of the West have grown used to the idea that their security
services can protect them from the worst that might happen. Faced by a new range
of threats and with countermeasures apparently of rapidly declining effectiveness,
that may be about to change.

Tech fails - creates illusion of security.
The Economist, 2001. "Watching you." JJZ
http://www.economist.com/node/787987
The traditional objections to facial-recognition systems - that they violate privacy, and could end up being used to pick out people with overdue parking tickets, as well as terrorists - are likely to be drowned out in the fearful atmosphere following last week's attacks. But according to Richard Smith of the Privacy Foundation, a lobby group based in Denver, Colorado, even if facial-recognition systems were in place, the technology would not be a silver bullet. Most of the hijackers, he notes, were not suspected terrorists, so no pictures of them were available. And in the case of the two who were suspects, attempts to track them down had begun only a few weeks before the attack. He concludes that a breakdown in communications, rather than inadequate technology, is the problem. Another technology that has been the focus of renewed interest is advanced forms of luggage scanner, such as three-dimensional scanners, remote scanners that can look at people at a distance, and scanners with threat image projection (TIP). The idea behind TIP is to keep luggage screeners on their toes by randomly projecting a fake threat image - in other words, a picture of a knife, gun or explosive device - into occasional items of luggage. When the screener presses the threat button, the fake image is removed, and the luggage can be checked again for real threats. In this way, it is possible to monitor the performance of individual screeners. TIP-capable luggage scanners are now being installed in America's largest airports. But however clever scanning machinery becomes, the real problem, once again, is human. Luggage screeners are paid little, and the work is tedious, so that it is hard to concentrate for long. Research by the FAA, which has not been published in detail for security reasons, found that screeners' ability to detect suspect objects is not improving, despite the new technology. In any case, last week's hijackers seem to have used weapons that would not have been picked up as threats by existing scanners. A really determined hijacker, notes Frank Taylor of the Aviation Security Centre at Cranfield University in Britain, could use almost any blunt object, or even a piece of in-flight cutlery, as a weapon. Already, some airlines have switched to plastic cutlery as a precautionary measure. Faced with terrorists who may not be carrying weapons, and are travelling under their own names, another technology that might help to identify them is computer-assisted passenger screening (CAPS), which was first introduced by a number of American airlines in 1998. CAPS uses information from the reservation system, and a passenger's prior travel history, to select passengers for additional security procedures. It has been fiercely criticised by civil-liberties campaigners who accuse it of picking on members of particular ethnic groups or nationalities. Besides, terrorists expect to be questioned at check-in, says Mr Taylor. He suggests that CCTV surveillance should be extended to cover passengers away from areas where they expect to be observed. Some of last week's hijackers were reported to have had an argument in the car park at Boston airport. But Mr Taylor admits that this process could not be automated. There would also be privacy implications, plus the usual accusations of bias. On autopilot into the future If spotting terrorists on the ground is so hard, what can be done to make aircraft harder to hijack in the air? Again, there has been no shortage of suggestions. Robert Ayling, a former boss of British Airways, suggested in the Financial Times this week that aircraft could be commandeered from the ground and controlled remotely in the event of a hijack. The problem with this, says Mr Taylor, is that remote-control systems might themselves open aircraft up to hijacking by malicious computer hackers. He suggests instead that automated landing systems should be modified so that, in the event of a hijack, the pilot could order his aircraft to land itself, with no option to cancel the command. Another idea is that existing collision-avoidance and terrain-avoidance systems could be modified to prevent aircraft from being crashed deliberately. But such proposals, says Chris Yates, an aviation-security expert at Jane's Defence Weekly, belong in the realms of science fiction. (Mr Yates advocates simpler, low-tech fixes, such as doing away with curbside and city-centre check-ins, and allowing only passengers to have access to departure gates.) In short, for every quick fix, there is an unseen drawback. Clever gizmos can do only so much, and they may also provide a dangerous illusion of invulnerability. No matter how advanced the technology, it has to be backed up with skilled personnel and appropriate procedures. Until now, people's priorities when travelling have been convenience and price, not security. That may change. But the reality is that no technology can neutralise the threat of terrorism. Indeed, nothing ever can.

No link - reliance on facial recognition fails and undermines security efforts.
Gray, 03. Mitchell, Surveillance and Society. "Urban Surveillance and Panopticism: will we recognize the facial recognition society?" JJZ
http://www.surveillance-and-society.org/articles1(3)/facial.pdf
Facial recognition technology requires further development, however, before reaching maximal surveillance utility.3 The American Civil Liberties Union explains: Facial recognition software is easily tripped up by changes in hairstyle or facial hair, by aging, weight gain or loss, and by simple disguises. It adds that the U.S. Department of Defense found very high error rates even under ideal conditions, where the subject is staring directly into the camera under bright lights.4 The Department of Defense study demonstrated significant rates of false positive test responses, in which observed faces were incorrectly matched with faces in the database. Many false negatives were also revealed, meaning the system failed to recognize faces in the database. The A.C.L.U. argues that the implementation of facial recognition systems is undesirable, because these systems would miss a high proportion of suspects included in the photo database, and flag huge numbers of innocent people - thereby lessening vigilance, wasting precious manpower resources, and creating a false sense of security.5

FRT has a high rate of error.
Olsen and Lemos, 1. Stefanie and Robert, 9/1/01, CNET. "Can face recognition keep airports safe?"
http://news.cnet.com/2100-1023-275313.html
The group cited a study by the Department of Defense that recorded a high rate of error when identifying suspects - even under ideal settings such as scanning a person's image under bright lights face forward. The study showed a large number of "false positives," wrongly matching people with photos of others, and "false negatives," missing people not in the database. "Facial-recognition software is easily tripped up by changes in hairstyle or facial hair, by aging, weight gain or loss, and by simple disguises," the ACLU report said. "That suggests, if installed in airports, these systems would miss a high proportion of suspects included in the photo database, and flag huge numbers of innocent people--thereby lessening vigilance, wasting precious manpower resources, and creating a false sense of security."

Facial recognition is not reliable.
Electronic Privacy Information Center, 5. 9/2005, Electronic Privacy Information Center, "Spotlight on Surveillance."
https://epic.org/privacy/surveillance/spotlight/1105/
In 2002, NIST conducted a comprehensive study of facial recognition systems. NIST found that the recognition rate for faces captured in uncontrolled environments, such as outdoors, could be as low as 50%. NIST also determined that, because of the high failure rates when applied to large groups of people, facial recognition was not a viable technology for large-scale identification. Time also affects facial recognition systems. The longer the time between the original photograph in the database and the new image captured, the less likely the facial recognition system will make a correct match.

A2: Courts CP
Legislative action is the only way to uphold constitutional protections of privacy.
Lynch, 12. Jennifer, Attorney for the Electronic Frontier Foundation, July 18. "What Facial Recognition Technology Means for Privacy and Civil Liberties." Presentation to the Senate Committee on the Judiciary. JJZ
https://www.eff.org/files/filenode/jenniferlynch_eff-senate-testimonyface_recognition.pdf
Face recognition allows for covert, remote and mass capture and identification of images - and the photos that may end up in a database include not just a person's face but also how she is dressed and possibly whom she is with. This creates threats to free association and free expression not evident in other biometrics. Americans cannot participate in society without exposing their faces to public view. Similarly, connecting with friends, family and the broader world through social media has quickly become a daily (and some would say necessary) experience for Americans of all ages. Though face recognition implicates important First and Fourth Amendment values, it is unclear whether the Constitution would protect against over-collection. Without legal protections in place, it could be relatively easy for the government or private companies to amass a database of images on all Americans. This presents opportunities for Congress to develop legislation that would protect Americans from inappropriate and excessive biometrics collection. The Constitution creates a baseline, but Congress can legislate significant additional privacy protections. As I discuss further below, Congress could use statutes like the Wiretap Act9 and the Video Privacy Protection Act10 as models for this legislation. Both were passed in direct response to privacy threats posed by new technologies and each includes
meaningful limits and protections to guard against over-collection, retention and misuse of data. My testimony will discuss some of the larger current and proposed facial recognition
collection programs and their implications for privacy and civil liberties in a democratic society. It will also review some of the laws that may govern biometrics collection and will outline
best practices for developing effective and responsible biometrics programsand legislation to regulate those programsin the future.

A2: PTX
Their entire understanding of the politics disad is educational
and wrong - the plan would not emerge from Obama spending
political capital, but rather by riding the coat-tails of an
exigency which reformulates acceptable standards of privacy
and government intrusion.
Ni & Ho 8 - *graduated from the doctoral program in public
administration, assistant professor at California State University, San Bernardino, ** associate professor in the School of Public and
Environmental Affairs at Indiana University Purdue University
Indianapolis. (Anna Ya, Alfred Tat-Kei, A Quiet Revolution or a Flashy Blip? The
Real ID Act and U.S. National Identification System Reforms,
http://www.jstor.org/stable/pdf/25145704.pdf, Published by: Wiley on behalf of the
American Society for Public Administration) NAR
There is a rich body of literature on how policies are formulated. Many studies of the policymaking process in modern democracies reveal that policies are usually not made based on
economic rationality and often do not reflect a clear relationship between problems,
goals, and policy solutions (Cohen, March, and Olsen 1972; Kingdon 2003). Rather, policies are the result
of dynamic and fluid interactions among interest groups and parties that try to shape the policy agenda in the
legislative process, capture public debate and media attention on certain issues, and engage each other in
persuasion and interest exchanges to establish a political equilibrium (Baumgartner and Jones 1993; Downs 1972;
Kelman 1987; Lindblom 1977; Majone 1989). Such equilibrium is akin to Theodore Lowi's (1969) "iron triangles," in
which interest groups, bureaucrats, and Congress dominate a particular policy area. Over time, these actors will
accommodate each other in the policy subsystem (Griffith 1939), resisting new ideas and outside pressures and

drastic
policy changes do happen, especially in response to system shocks caused by pivotal
political events that are marked by urgent peril, intense threat, and massive horror
(Kingdon 2003; Lewis 2006). Such events force entrenched policy subsystems to
restructure and generate sufficient political support and attention to
leverage an alteration of views in the subsystem (Baumgartner and Jones 1993; Wood
2006). For example, a major education policy shift in the 1960s was triggered by the civil
rights movement and was successfully advocated by reformers who took
advantage of the national mood of "equality and justice for all."28 The development of
trying to maintain the status quo in order to protect their mutually compromised interests. Nevertheless,

the U.S. identification system over the past few years seems to follow this pattern. Prior to the 2000s, many
attempts had been made to change the system and to introduce a national ID to meet different policy needs (see
figure l).29 For example, in the 1970s and the early 1980s, illegal immigration and employ ment status verification
were the key concerns of policy makers. In the 1990s, e-commerce and identity thefts were the new concerns.30
Nonetheless, none of these issues was sufficient to shake up the political equilibrium built by privacy advocacy groups, state governments, and businesses. It was not until the 9/11 tragedies that "a window of opportunity" was opened to bring the problem stream, the policy stream, and the political stream together to drastically change the U.S. identification system. An unprecedented number of congressional hearings were held in the aftermath of 9/11 to examine the fundamental
limits of the system (see figure 1). Nonetheless, it should be pointed out that even the 9/11 crisis did not seem to
be sufficient to break the old powerful subsystems that opposed the idea of creating a national ID. Despite the
surge of congressional hearings in 2002-4, no legislation on national ID could be passed immediately following the 9/11 attack. This echoes Kingdon's observation that even when a policy window is open, there still can be considerable room for disagreement (Kingdon 2003, 171). What is critically important is policy entrepreneurship and craftsmanship to "seize the moment" and to fully exploit different opportunities and come up with effective strategies to
and to fully exploit different opportunities and come up with effective strategies to
enforce a policy change within an institutional setting (Bryson and Crosby 1992; Bostdorff 1994;
Burns 1978; Kingdon 2003; Stimson 1999; Stimson, MacKuen, and Erikson 1995). Our case study shows that the
Bush administration and some of its congressional supporters demonstrated such
craftsmanship. By early 2005, the administration had successfully framed the
national security concern as a "war" against terrorism and used it to support
military action in Afghanistan and Iraq. These actions reinforced the national mood
that the country was at war, which helped the administration push through a series
of antiterrorism bills, including the USA PATRIOT Act. Finally, when opposition to the
identification reform lingered, they tactfully attached the Real ID Act to an
emergency military spending bill in 2005 so that it could be passed without much
opposition. Our analysis also affirms that disruption of policy equilibrium is never a mere result of a single
socioeco nomic or political event. Policy entrepreneurship and effective strategies are often
critical in turning an event into a political opportunity and finally bringing about a
policy change. This seems to largely conform to the multiple streams framework suggested by Kingdon.
However, the abrupt passage of the Real ID Act raises several interesting questions that have not been exam ined
carefully by the past literature. The primary issue is what happens to a legislation or policy in the long run when it is
passed under unusual circumstances? Could it be long lasting? Would the established interests and policy
subsystems reinstitute the previous policy equilibrium once the national crisis mood is over? What are the lessons
for policy makers and public managers who need to respond quickly to a crisis and yet maintain the spirit of
democratic discourse and the institutional integrity of checks and balances? The Real ID Act may offer some
insights into these questions. Even though the law was successfully passed in 2005, there are now questions about
whether it was just a "flashy blip" in the U.S. identification system debate. As the crisis mentality of 9/11 fades and
the integrity of the Bush administration in handling the war in Iraq has been questioned more openly and publicly,
the proportion of Americans identifying national security or antiterrorism as the most impor tant issue facing the
country has begun to decline.31 Also, while the Bush administration was very effective in capitalizing on the
national mood and policy circumstances to get the law passed, it never allowed sufficient open debate of the policy
to shake up the policy subsystems and soften (but not silence) the opposition, nor was it effective in building a
strong, long-lasting coalition to support the policy change. As a result, we witness tremendous pressure on the
administration to backpedal on the execution of the Real ID Act in recent years. Legal challenges to the Real ID Act
have already been filed at the time this article was written.32 Moreover, the National Conference of State
Legislatures has expressed concerns over the legislation because of its high implementation cost.33 Eight states
have already passed legislations that refuse the standards established in the Real ID Act, and nine states have
passed resolutions calling on Congress to repeal or amend the law.34 There have also been growing grassroots and
legislative campaigns in different states to tie the law to illegal immigration issues and to make the law less
politically acceptable to some groups (Schuck 2007; see also AP 2007; Grand Rapids Press 2007; Washington Post
2007). As a result, the Real ID Act has basically been put on hold. By December 2007, two years after the passage
of the law, only four states - Arizona, New York, Washington, and Vermont - had signed on (see Lancaster New Era
2007). It originally required state governments to comply with the national ID standards by May 2008, but the
deadline was extended to 2013 and was recently extended further by the Department of Homeland Security to
2018. Furthermore, the department has eased its demands that the new licenses be renewed every five years and
that expensive, tamper-resistant materials be used to create the ID cards (Hsu 2007). Hence, the permanence of
the Real ID Act is now an open question. This recent development suggests several important lessons. As the
multiple streams policy literature suggests, national crises may offer a window of opportunity for policy
entrepreneurs to get a policy on the agenda or even to get it passed under a certain national mood. However, they
may be insufficient to sustain a policy shift in the long run unless policy entrepreneurs also invest time and political

a major policy
shift often takes decades to build up momentum in a democratic system. Policy
options have to be sorted out, debated, evaluated, and examined critically by
diverse groups and all these need time. The translation from an idea to a policy
can be painfully slow, but rushing it through the political system can actually be
counterproductive and may risk policy backpedaling later . Regardless of the future outcome of
capital to realign the established interests. The development of the Real ID Act also affirms why

the legislative and legal challenges, the way in which the Real ID Act was passed by Congress causes us to rethink
the ethical responsibilities of policy makers and public administrators in times of national crises.

When crises

occur, many checks and balances in the political system can be easily weakened or
put on hold. Politicians can take advantage of the situation and the public sentiment
to introduce controversial policy changes. As shown in World War II and in the modern experiences
of many developing countries that are headed by authoritarian regimes today, the failure of political leaders to
uphold the core democratic values can have serious conse quences for the security and stability of a country. We
believe that policy makers and public administrators need to uphold the principles of accountability and checks and
balances of power even in times of crises. They have an ethical responsibility to safeguard the democratic process,
to help the press and the public to fully understand the implications of any policy change within the legal
framework, and to ensure that there is sufficient public discourse to reflect the public good and to protect the
pluralistic interests of society (Rohr 1989). Otherwise, a nation may be merely trading many fundamental
democratic values for a false sense of security.

A2: Neolib
Their critique of the privatization of surveillance is too sweeping - the government will continue to be a PRODUCER, not a consumer, of genetic surveillance.
*Genetic surveillance-industrial complex

Kreag 15 Visiting Assistant Professor, University of Arizona James E.


Rogers College of Law (Jason, GOING LOCAL: THE FRAGMENTATION OF GENETIC
SURVEILLANCE, Boston University Law Review (Forthcoming October 2015), p. 31-3)
NAR
Current Fourth Amendment doctrine - in particular the principles of the third-party doctrine153 -
allows law enforcement to benefit from the vast amount of information the public
voluntarily shares with private companies.154 This has led some scholars to conclude that
law enforcement will respond by altering their surveillance practices. Professor Paul Ohm
predicts that [a]s the surveillance society expands, the police will learn to rely more on
the products of private surveillance, and will shift their time, energy, and money
away from traditional self-help policing, becoming passive consumers rather
than active producers of surveillance.155 Professor Ohm's instincts are correct about certain
types of surveillance activities. It seems likely that police will be inclined to use information
amassed by private sources, decreasing the need for law enforcement to conduct
duplicative surveillance. However, not all information sought by law
enforcement is captured in the private sector. Specifically, genetic surveillance
is one area where law enforcement will continue to be producers , as opposed
to consumers, of surveillance. Whereas Google, Facebook, and other companies will
feed law enforcement's desire for digital surveillance, the expansion of local
databases demonstrates that law enforcement will be the driver of collecting
and analyzing genetic evidence. In addition, local law enforcement's use of genetic
surveillance will be shaped by corporate interests.156 Corporate interests have
played a role in the development of local DNA databases since their inception. 157 The
first local DNA database was designed jointly by a private DNA lab and the Palm Bay Police Department.158 And
private firms are integral to the continued expansion of these databases. Large
firms, such as Bode Technology and Orchid Cellmark, view local law enforcement databases as
potential revenue streams, particularly because they promise to promote the use of
DNA beyond violent crimes (sexual assaults and homicides) to property crimes.159 These
firms see a business opportunity in processing the evidence swabs collected from
property crimes. Indeed, in marketing their products, they trumpet the studies that
have highlighted DNA's promise for solving these crimes.160 Similarly, smaller firms have also
sought to benefit from and to drive the expansion of local databases. These include SmallPond and IntegenX.161
These companies have been consistent participants in law enforcement conferences in the last several years,162
and they have sought meetings with local agencies to pitch their products. Furthermore, IntegenX offers to help
potential buyers secure grants to purchase its products.163 The influence of private firms on policing techniques is
not new and is certainly not unique to genetic surveillance.164 However, it is important to recognize that these
private interests will influence the expansion, use, and long-term viability of this surveillance tool. And because
these private interests have evolved simultaneously with local law enforcement's
push to enter the genetic surveillance space, the prospect of a genetic
surveillance-industrial complex further entrenching the practice of local
databases seems likely. Finally, the very use of these databases will also contribute
to the public's acceptance of them. Even those with only a casual understanding of
surveillance techniques accept without question law enforcement's ability to collect
personal information (including photographs, fingerprints, addresses, etc.) for
investigative databases. Furthermore, because CODIS has been around for 20 years, there is widespread
understanding that law enforcement collects DNA profiles from at least some segments of the population. Thus,
local databases are not a completely new surveillance tool. This incremental evolution of law enforcement
investigative databases in general, and DNA databases in particular, will help to solidify local databases as a
tolerated, if not accepted, law enforcement tool.165

Perm do both.
Use of state-based technologies to monitor and control
relationships reinforces neoliberalism.
Stroo, 13. Sara, School of Journalism and Communication and the Graduate
School of the University of Oregon, 6/13. JJZ
https://scholarsbank.uoregon.edu/xmlui/bitstream/handle/1794/13316/Stroo_oregon
_0171N_10750.pdf?sequence=1
From this, one can easily see why neoliberal governmentality is all too frequently presented as the retreat of the state. These feeds come from
equipment which was installed and maintained by a private company but funded by grants from the state of Texas, and the responsibility for sovereign
security and border patrol are made the purview of the Virtual Deputies. I want to argue however that this example will actually show that it is much more
productive to ideate neoliberalism as a transformation of politics. Neoliberalism is not a retreat, it is a restructuring of power relations away from the
formal techniques of the state, toward informal techniques of power which are introduced and overseen by new, non-governmental actors. As Wendy
Brown writes in her seminal essay on neoliberalism: "We are not simply in the throes of a right-wing or conservative positioning within liberal democracy
but rather at the threshold of a different political formation, a formation made possible by the production of citizens as individual entrepreneurial actors
across all dimensions of their lives, reduction of civil society to a domain for exercising this entrepreneurship, and figuration of the state as a firm whose
products are rational individual subjects, an expanding economy, national security, and global power."12 In practice then, the neoliberal
turn is marked by intense deregulation of the marketplace, decreased welfare spending, privatization of services, financialization of wealth,
and tax cuts for the very rich. In such a regime the role of government is reduced mainly to deregulating markets and acting as the lender of last resort
to mitigate the risk of this increasingly financialized market.13 According to Sim and Coleman, "[N]eo-liberal conditions [trend] towards multiple centres
of government, autonomous forms of expertise and localized technologies and mechanics
of rule. Thus contemporary forms of crime control, and more broadly social control,
are understood as phenomena exercised and nurtured through neo-liberal rule
within dense networks and alliances acting at a distance from central and national
public powers."14 Foucault concluded that the distinguishing feature of American Neoliberals was the unprecedented expansion of the
economic enterprise to the entire social realm, a dynamic clearly at play in the creation and use of BlueServo. Hamman concluded from this state of
affairs, "Within the reason of state of American neoliberalism, the role of government is
defined by its obligations to foster competition through the installation of market-based mechanisms for constraining and conditioning the actions of individuals,
institutions, and the population as a whole."15 As a result of these new relationships
between the state and forces of civil society, indirect techniques of
power/knowledge are injected into the social fabric. Notably there is a shift of responsibility such that the notion
of being "off work" becomes a question of demonstrating self-discipline. John Carey called the creation of the Sabbath a form of resistance to state and
market powers; it was a time to rest, to reconnect with family and community and the self under the moral auspice of piety.16 Neoliberalism eliminates a
day of rest by shifting the moral center away from worship of an omnipotent and toward worship of production. In this way, the loss of leisure is not seen
as an imposition, but takes on the sort of righteousness that honoring the Sabbath used to hold. The neoliberal discipline taps in to a sense of moral
superiority, and notably, it is a morality that can be managed by the self and by others.17 The active, morality-managing
subject in the neoliberal ideal is an individual that is fit, flexible and autonomous. As a
result, we see a rise of predictive or actuarial control, which is marked by a shift away from normativity and individual treatment and toward technicality
and classificatory management.18 Where this fitness is most often measured is in the reaction and mitigation of risk. Risk, as identified by Ulrich Beck and
Anthony Giddens in the early 1990s, is a mode of decision making based on the possibility of future dangers and negative effects.19 Risk is not defined by
ignorance of threat, but rather by positive knowledge of its existence; to be capable of best averting risk is to learn to see risk everywhere. "Risks are the
reflection of human actions and omissions, the expression of highly developed productive forces. That means that the sources of danger are no longer
ignorance but knowledge."20 This conception of risk shares with Foucault's notion of governmentality a preoccupation with developing control strategies
that are technically efficient, and politically neutral.21 As a result, we see a rise of an era of predictive or
actuarial control, which is marked by a shift away from normativity and individual
treatment and a shift toward technicality and classificatory management.22 In this era entire
categories or classes of people become potential risks and objects of control. Threats to society are no longer seen as an action committed by homo
penalis, the law breaker, but as embodied and theoretically identifiable in homo criminalis, the criminal person.23 Through the introduction of this type,
hostile elements are
segregated, monitored, and excluded in the name of "the greater good"24 and the protection of the social body is achieved through the prevention of crime in the first place; ideal
self-caring, risk managing citizens aid in this monitoring and exclusion however they
are able. To that end, two contemporary social processes are critical to the surveillance performed in the name of this type of preemptive risk
management: the first is the expansion of information systems, and the second is reliance on computerized technologies.25 Homo economicus can be
seen as a foil to homo criminalis. If one wishes to remain part of the social body, one must fall into
a category which is deemed worthy of inclusion. According to Beck, "Even outside of work, industrial society is a
wage labor society through and through in the plan of its life, in its joys and sorrows, in its concept of achievement, in its justification of inequality, in its
social welfare laws, in its balance of power and in its politics and culture."26 As Foucault has noted, this new mechanism of power is notable then, that it
permits extraction of time and immaterial labor from bodies as proof of worthiness, rather than tangible wealth and commodities as proof of capital
productivity.27 In such a regime the notion of a worker undergoes radical redefinition: "The primary economic image offered to the modern citizen is not
that of the producer but of the consumer... The worker is portrayed neither as an economic actor, rationally pursuing financial advantage, nor as a social
creature seeking satisfaction of needs for solidarity and security. The worker is an individual in search of meaning, responsibility, a sense of personal
achievement, a maximized quality of life, and hence of work."28 The neoliberal subject produces more than things, he produces himself. Homo
economicus is above all an entrepreneur of the self.29

Aff is a negative state action; biometric surveillance is driven
by economic purposes.
Liberatore, 07. Angela, Eur J Crim Policy Res (2007). Balancing Security and
Democracy, and the Role of Expertise: Biometrics Politics in the European Union.
JJZ
http://download.springer.com/static/pdf/133/art%253A10.1007%252Fs10610-0069016-1.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Farticle
%2F10.1007%2Fs10610-006-9016-1&token2=exp=1435157031~acl=%2Fstatic
%2Fpdf%2F133%2Fart%25253A10.1007%25252Fs10610-006-9016-1.pdf
%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Farticle
%252F10.1007%252Fs10610-006-90161*~hmac=d5b3e91ae501f8800570af914e9fa4c180cedb68744620335bc5dde61ae2
a4af
Biometric technology (from the Greek: bios, life and metron, measurement) identifies individuals automatically by
using their biological or behavioural characteristics . First applications of non-digitalised biometrics, namely fingerprints,
appear to date back to 14th century China, developed in Europe in the 19th century for law enforcement purposes, and were explicitly associated with crime and the suspicion of crime.
Biometrics was then expanded to other fields and digitalised biometrics and the related industry developed in the 1990s (see IPTS, 2003). Current applications of biometrics are
multipurpose, ranging from regulation of asylum and migration to electronic patients records, access by personnel to sensitive areas of governmental or commercial buildings to aviation
security and the fight against terrorism. Biometric applications in such broad and diverse areas involve a range of impacts including on market competition, civil liberties and
fundamental rights, the legitimacy of state and supranational policies, and the role of the private sector in managing personal data. Pervasive surveillance is quite a sensitive and
controversial issue in democratic societies as indicated by the popularity of Orwell's 1984, the public and media attention devoted to the Echelon case, or the academic literature on
Foucault's conceptualisation of social control.6 At the same time, support for them has been argued and implemented
by executive and legislative authorities of democratic countries in connection with
the fight against crime, most recently with the fight against terrorism, in particular after the attacks of 11 September 2001, and support or even active
request for surveillance measures has been expressed by citizens in local contexts, e.g., to control street crime in urban environments. A paradox of surveillance is that surveillance
techniques such as census and civil registration were developed as means of granting civil rights and, at the same time, serve as potential means for states to gain informational power
over citizens. This paradoxical character is retained with globalisation (Lyon, 2004), where states as
well as corporations boast technologies such as satellite tracking stations or supercomputer filtering devices and have access to international flows of personal data.
Such developments may be driven by economic purposes rather than surveillance
ones; however, they also produce unprecedented surveillance capacities by public
and private actors on a global scale. Such a paradox on a global scale is well illustrated by the positive connotations of notions such as
information society, global village, Internet democracy versus the concerns for security both with regard to the security of IT itself (e.g., in relation to cyber-crime) and with regard
to their security applications and related implications for civil liberties and fundamental rights. Last but not least, the dual-use (civilian and military) IT applications, including
identification devices, raises specific questions with regard to the possibility or desirability of keeping different functions distinct (e.g., identification in relationship to migration control or
in relation to military intelligence gathering) and allow for democratic oversight. Issues of dual-use (or, as also frequently referred to, multi-functionality) have been recently addressed in
EU-level research policy, which used to be civilian only.7 These issues cannot be pursued in depth in this article, but they point to science and technology itself as a specific field
blurring internal and external dimensions of security, and the intermingling roles of public and private sectors in the definition of security issues and options.8 Another paradoxical
aspect of surveillance technologies is that they may mitigate as well as reinforce fears. Someone working with them may feel more in control, and they may enhance perception of
being safer in controlled areas; at the same time, they may induce suspicion and fears by subjects of surveillance who may wonder why they are screened (Why do I have to provide
my fingerprints? I am not a criminal.) or whether data could be manipulated and misused. Surveillance evokes uncertainties, risks, threats and the related questions of how significant
these are, whether and in how far they can be prevented, at what costs. As noted by Frank Furedi (2002), stressing fears may lead to an obsession with theoretical risks and the
unintended effect of distracting from some of the daily ones; surveillance technologies may contribute to this process by amplifying perceptions that something bad could happen,
while, as mentioned above, they may also provide a sense of enhanced control.

A2: Heidegger
We're a negative state action; the status quo uses these
technologies to monitor and control the relationships we have.
Stroo, 13. Sara, School of Journalism and Communication and the Graduate
School of the University of Oregon, 6/13. JJZ
https://scholarsbank.uoregon.edu/xmlui/bitstream/handle/1794/13316/Stroo_oregon
_0171N_10750.pdf?sequence=1
From this, one can easily see why neoliberal governmentality is all too frequently presented as the retreat of the state. These feeds come from
equipment which was installed and maintained by a private company but funded by grants from the state of Texas, and the responsibility for sovereign
security and border patrol are made the purview of the Virtual Deputies. I want to argue however that this example will actually show that it is much more
productive to ideate neoliberalism as a transformation of politics. Neoliberalism is not a retreat, it is a restructuring of power relations away from the
formal techniques of the state, toward informal techniques of power which are introduced and overseen by new, non-governmental actors. As Wendy
Brown writes in her seminal essay on neoliberalism: "We are not simply in the throes of a right-wing or conservative positioning within liberal democracy
but rather at the threshold of a different political formation, a formation made possible by the production of citizens as individual entrepreneurial actors
across all dimensions of their lives, reduction of civil society to a domain for exercising this entrepreneurship, and figuration of the state as a firm whose
products are rational individual subjects, an expanding economy, national security, and global power."12 In practice then, the neoliberal
turn is marked by intense deregulation of the marketplace, decreased welfare spending, privatization of services, financialization of wealth,
and tax cuts for the very rich. In such a regime the role of government is reduced mainly to deregulating markets and acting as the lender of last resort
to mitigate the risk of this increasingly financialized market.13 According to Sim and Coleman, "[N]eo-liberal conditions [trend] towards multiple centres
of government, autonomous forms of expertise and localized technologies and mechanics
of rule. Thus contemporary forms of crime control, and more broadly social control,
are understood as phenomena exercised and nurtured through neo-liberal rule
within dense networks and alliances acting at a distance from central and national
public powers."14 Foucault concluded that the distinguishing feature of American Neoliberals was the unprecedented expansion of the
economic enterprise to the entire social realm, a dynamic clearly at play in the creation and use of BlueServo. Hamman concluded from this state of
affairs, "Within the reason of state of American neoliberalism, the role of government is
defined by its obligations to foster competition through the installation of market-based mechanisms for constraining and conditioning the actions of individuals,
institutions, and the population as a whole."15 As a result of these new relationships
between the state and forces of civil society, indirect techniques of
power/knowledge are injected into the social fabric. Notably there is a shift of responsibility such that the notion
of being "off work" becomes a question of demonstrating self-discipline. John Carey called the creation of the Sabbath a form of resistance to state and
market powers; it was a time to rest, to reconnect with family and community and the self under the moral auspice of piety.16 Neoliberalism eliminates a
day of rest by shifting the moral center away from worship of an omnipotent and toward worship of production. In this way, the loss of leisure is not seen
as an imposition, but takes on the sort of righteousness that honoring the Sabbath used to hold. The neoliberal discipline taps in to a sense of moral
superiority, and notably, it is a morality that can be managed by the self and by others.17 The active, morality-managing
subject in the neoliberal ideal is an individual that is fit, flexible and autonomous. As a
result, we see a rise of predictive or actuarial control, which is marked by a shift away from normativity and individual treatment and toward technicality
and classificatory management.18 Where this fitness is most often measured is in the reaction and mitigation of risk. Risk, as identified by Ulrich Beck and
Anthony Giddens in the early 1990s, is a mode of decision making based on the possibility of future dangers and negative effects.19 Risk is not defined by
ignorance of threat, but rather by positive knowledge of its existence; to be capable of best averting risk is to learn to see risk everywhere. "Risks are the
reflection of human actions and omissions, the expression of highly developed productive forces. That means that the sources of danger are no longer
ignorance but knowledge."20 This conception of risk shares with Foucault's notion of governmentality a preoccupation with developing control strategies
that are technically efficient, and politically neutral.21 As a result, we see a rise of an era of predictive or
actuarial control, which is marked by a shift away from normativity and individual
treatment and a shift toward technicality and classificatory management.22 In this era entire
categories or classes of people become potential risks and objects of control. Threats to society are no longer seen as an action committed by homo
penalis, the law breaker, but as embodied and theoretically identifiable in homo criminalis, the criminal person.23 Through the introduction of this type,
hostile elements are
segregated, monitored, and excluded in the name of "the greater good"24 and the protection of the social body is achieved through the prevention of crime in the first place; ideal
self-caring, risk managing citizens aid in this monitoring and exclusion however they
are able. To that end, two contemporary social processes are critical to the surveillance performed in the name of this type of preemptive risk
management: the first is the expansion of information systems, and the second is reliance on computerized technologies.25 Homo economicus can be
seen as a foil to homo criminalis. If one wishes to remain part of the social body, one must fall into
a category which is deemed worthy of inclusion. According to Beck, "Even outside of work, industrial society is a
wage labor society through and through in the plan of its life, in its joys and sorrows, in its concept of achievement, in its justification of inequality, in its
social welfare laws, in its balance of power and in its politics and culture."26 As Foucault has noted, this new mechanism of power is notable then, that it
permits extraction of time and immaterial labor from bodies as proof of worthiness, rather than tangible wealth and commodities as proof of capital
productivity.27 In such a regime the notion of a worker undergoes radical redefinition: "The primary economic image offered to the modern citizen is not
that of the producer but of the consumer... The worker is portrayed neither as an economic actor, rationally pursuing financial advantage, nor as a social
creature seeking satisfaction of needs for solidarity and security. The worker is an individual in search of meaning, responsibility, a sense of personal
achievement, a maximized quality of life, and hence of work."28 The neoliberal subject produces more than things, he produces himself. Homo
economicus is above all an entrepreneur of the self.29

A2: K
The aff functions as a negative state action, exposing social
control and the politics of facial recognition
Introna, 05. Lucas, Center for the Study of Technology and Organisation,
Lancaster University Management School, Lancaster, LA1 4YX, UK. Disclosive
ethics and information technology: disclosing facial recognition systems. JJZ
http://download.springer.com/static/pdf/187/art%253A10.1007%252Fs10676-0054583-2.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Farticle
%2F10.1007%2Fs10676-005-4583-2&token2=exp=1435154780~acl=%2Fstatic
%2Fpdf%2F187%2Fart%25253A10.1007%25252Fs10676-005-4583-2.pdf
%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Farticle
%252F10.1007%252Fs10676-005-45832*~hmac=d6beb5349104917c9211b8a938d7de486b38fb07b4d5abc23c044c64265
ce2bf
Facial recognition
algorithms, which we will discuss below, is a particularly good example of an opaque technology. The facial recognition capability can be imbedded into existing CCTV networks, making
its operation impossible to detect. Furthermore, it is passive in its operation. It requires no participation or consent
from its targets; it is a non-intrusive, contact-free process (Woodward et al. 2003: 7). Its application is flexible. It can as easily be used by a supermarket to monitor potential shoplifters (as was proposed

and later abandoned, by the Borders bookstore), by casinos to track potential fraudsters, by law enforcement to monitor spectators at a Super Bowl match (as was done in Tampa, Florida), or used for identifying terrorists at airports
(as is currently in operation at various US airports). However, most important of all is the obscurity of its operation. Most of the software algorithms at the heart of facial recognition systems (and other information technology
products) are propriety software objects. Thus, it is very difficult to get access to them for inspection and scrutiny. More specifically, however, even if you can go through the code line by line, it is impossible to inspect that code in
operation, as it becomes implemented through multiple layers of translation for its execution. At the most basic level we have electric currents flowing through silicon chips, at the highest level we have programme instructions, yet it
is almost impossible to trace the connection between these as it is being executed. Thus, it is virtually impossible to know if the code you inspected is the code being executed, when executed. In short, software algorithms are

operationally obscure. It is our argument that the opaque and silent nature of digital technology makes it
particularly difficult for society to scrutinise it. Furthermore, this inability to
scrutinise creates unprecedented opportunities for this silent and invisible micro-politics to become pervasive
(Graham and Wood 2003). Thus, a profound sort of micro-politics can emerge as these opaque (closed) algorithms become enclosed in the social-technical infrastructure of everyday life. We tend to have extensive community consultation and impact studies when we build a new motorway. However, we tend not to do this when we install CCTV in public places or when we
install facial recognition systems in public spaces such as airports, shopping malls, etc. To put it simply: most informed people tend to understand the cost (economic, personal, social, environmental) of more transparent
technologies such as a motorway, or a motorcar, or maybe even cloning. However, we would argue that they do not often understand the cost of the more opaque information technologies that increasingly pervade our everyday
life. We will aim to disclose this in the case of facial recognition systems below. Before we do this we want to give an account of what we mean by this disclosure of disclosive ethics. Ethics is always and already the other side of

politics (Critchley 1999). When we use the term politics (with a small "p") as indicated above we refer to the actual operation of power in serving or enclosing particular interests, and not others. For politics
to function as politics it seeks closure, one could say enrolment in the actor
network theory language. Decisions (and technologies) need to be made and programmes (and technologies) need to be implemented. Without closure politics cannot be effective as a
programme of action and change. Obviously, if the interests of the many are included in the enclosure as it were then we might say that it is a good politics (such as democracy). If the interests of only a few are included we
might say it is a bad politics (such as totalitarianism). Nevertheless, all political events of enclosing are violent as they always include and exclude as their condition of operation. It is the excluded the other on the outside as it
were that is the concern of ethics. Thus, every political action has, always and immediately, tied to its very operation an ethical question or concern it is the other side of politics. When making this claim it is clear that for us ethics
(with a small e) is not ethical theory or moral reasoning about how we ought live (Caputo 1993). It is rather the question of the actual operation of closure in which the interests of some become excluded as an implicit part of the
material operation of power in plans, programmes, technologies and the like. More particularly, we are concerned with the way in which the interest of some become excluded through the operation of closure as an implicit and
essential part of the design of information technology and its operation in socialtechnical networks. As those concerned with ethics, we can see the operation of this closure or enclosure in many related ways. We can see it
operating as already closed from the start where the voices (or interests) of some are shut out from the design process and use context from the start. We can also see it as an ongoing operation of closing where the possibility
for suggesting or requesting alternatives are progressively excluded. We can also see it as an ongoing operation of enclosing where the design decisions become progressively black-boxed so as to be inaccessible for further
scrutiny. And finally, we can see it as enclosed in as much as the artefacts become subsumed into larger socio-technical networks from which it becomes difficult to unentangle or scrutinise. Fundamental to all these senses of

We need to acknowledge that


politics
is fundamental
Decisions have to be made,
technologies have to be designed and implemented, as part of the ongoing ordering
of society
we are not suggesting an end to
politics as the operation of closure.
Ethics cannot escape politics
closure is the event of closure [as] a delimitation which shows the double appartenance of an inside and an outside... (Critchley 1999: 63).
or the operation of closure

to the ongoing production of social order.

. Agendas cannot be kept open forever, designs cannot be discussed and considered indefinitely. Thus,

Closure is a pragmatic condition for life. Equally, we are not arguing that the question of ethics can, and ought to be, divorced from

politics.

. The concern of ethics is always and already also a political concern. To choose, propose or argue for certain values such as justice, autonomy,

democracy and privacy as suggested by Brey (2000) is already a political act of closure. We may all agree with these values as they might seem to serve our interests, or not. Nevertheless, one could argue that they are very

anthropocentric and potentially exclude the claims of many others: animals, nature, the environment, things, etc. If ethics cannot escape politics then it is
equally true that politics cannot escape ethics. This is our starting point, a powerful one in our view. The design or use
of information technology is not morally wrong as such. The moral wrongdoing is
rather the nondisclosure of the closure, or the operation of politics as if ethics does
not matter, whether it is intended or not. We know that power is most effective when it hides itself (Foucault 1975). Thus, power has a very good reason to seek and maintain nondisclosure. Disclosive ethics takes

as its moral imperative the disclosure of this nondisclosure the presumption that politics can operate without regard to ethics as well as the disclosure of all attempts at closing or enclosing that are implicitly part of the design and
use of information technology in the pursuit of social order. Many security analysts see FRSs as the ideal biometric to deal with the new emerging security environment (post 11 September). They claim that it is efficient (FaceIt only
requires a single 733 Mhz Pentium PC to run) and effective, often quoting close to 80% recognition rates from the FRVT 2002 evaluation while leaving out of the discussion issues of the quality of the images used in the FRVT, the size
of the database, the elapsed time between database image and probe image, etc. But most of all they claim that these systems performs equally well on all races and both genders. Does not matter if population is homogeneous or
heterogeneous in facial appearance (Faceit technical specification1). This claim is not only made by the suppliers of FRSs such as Identix and Imagis Technologies. It is also echoed in various security forums: Face recognition is
completely oblivious to differences in appearance as a result of race or gender differences and is a highly robust Biometrics2 Even the critical scholar Gary Marx (1995: 238) argued that algorithmic surveillance provides the
possibility of eliminating discrimination. The question is not whether these claims are correct or not. One could argue that in a certain sense they are correct. The significance of these claims is the way they frame the technology. It

presents the technology itself as neutral and unproblematic. More than this it presents the technology as a solution to the problem of terrorism. Atick of Identix claimed, in the wake of the 9/11 attacks, that with FaceIt the US has the
ability to turn all of these cameras around the country into a national shield (OHarrow 2001). He might argue that in the face of terrorism minor injustices (biases in the algorithms) and loss of privacy is a small price to pay for
security. This may be so, although we would disagree. Nevertheless, our main concern is that these arguments present the technical artefacts in isolation with disregard to the socio-technical networks within which they will become

We need
to disclose the network effects
There is every
reason to believe that the silent and non-invasiveness of FRSs make it highly
desirable as a biometric for digital surveillance. It is therefore important that this
technology becomes disclosed for its potential politics in the socio-technical
network of digital surveillance.
enclosed. As argued above, it is not just the micro-politics of the artefact that is the issue. It is how these become multiplied and magnified as they become tied to other social practices that is of significance.
, as it were, of the micro-politics of artefacts. This is especially so for opaque digital technology.

Thus, not just as isolated software objects as was done in the FRVTs but in its multiplicity of implementations and practices. We would claim it is

here where the seemingly trivial exclusions may become very important as they become incorporated into actual practices.

The aff deconstructs security logic by rejecting biometric surveillance.
Liberatore, 07. Angela, Eur J Crim Policy Res (2007). Balancing Security and Democracy, and the Role of Expertise: Biometrics Politics in the European Union. JJZ
http://download.springer.com/static/pdf/133/art%253A10.1007%252Fs10610-006-9016-1.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Farticle%2F10.1007%2Fs10610-006-9016-1&token2=exp=1435157031~acl=%2Fstatic%2Fpdf%2F133%2Fart%25253A10.1007%25252Fs10610-006-9016-1.pdf%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Farticle%252F10.1007%252Fs10610-006-9016-1*~hmac=d5b3e91ae501f8800570af914e9fa4c180cedb68744620335bc5dde61ae2a4af
Summing up, it can be argued that a new emphasis on security is emerging in the EU context and is accompanied by attempts at (further) democratising the EU, and that a purely technocratic mode of decision making is not applied even in a field like biometrics where specialised expertise is crucial. The strong role of executives is a key factor in the vigorous pursuit of biometric identification, a certain degree of pluralism explains the importance of civil liberties and fundamental rights in public discourse, and some diffusion of expertise (including technical, legal, social) enables such pluralism. New policy areas that managed to take the lead in these developments did so in conjunction more than in competition with more established ones, and made more explicit the issue of the interactions between internal and external dimensions. The EU is establishing itself as a new security actor, but the nature of such a role on its part is still unclear, between its longstanding civilian power role and an unlikely fully-fledged superpower role.

From a more explicitly normative standpoint, it can be argued that we are already living in a surveillance society, partly due to the pace and pervasiveness of technological change and partly due to the influence of security concerns and discourses. Biometric identification is only one, but clearly significant, component of such a surveillance society. Differently from the image of one centralised Big Brother, we may consider that there are a number of bigger and smaller brothers; this should relax the fears of a totalitarian risk; however, it should not lead to the neglect of the problem of accountability of multiple actors. Also, the various brothers may look mainly benevolent, but how benevolent will depend on the state of health of democracy, namely pluralism, accountability, checks and balances, and binding protection of fundamental rights. This in turn will be influenced by the intelligence of democracy, which is the capacity to avoid both the possibility of debating without knowing (a charge often addressed by experts to lay citizens as well as parliaments) and the tendency of knowing without debating that characterises forms of secretive expertise. In the EU context some interesting developments can be noted, as well as some difficult challenges ahead. The developments include the limited but nevertheless significant pluralism connected with the role of different EU and national institutions, some emerging forms of public debate and some diffusion of expertise. Current efforts to move areas of the third (intergovernmental) pillar on justice and home affairs to become part of the first (Community) one could increase the role of the EP, the related increase of oversight and the likely enhanced access to information by civil society organisations. At the same time, the search for establishing a new legal basis for international cooperation based on third pillar provisions and explicitly on security grounds, e.g., in response to the ECJ judgement on PNR, may counter-balance the communitarisation trend. With regard to diffusion of expertise, this can be seen also in the context of the broader impact assessment culture of EU policy-making. Such a culture stresses the need to explain and justify the policy options considered and selected. Obviously even the most refined procedures and methods for impact assessment cannot completely avoid the technocratic temptation to find arguments to justify pre-selected options, nor the risk of regulatory capture by well-resourced (including in terms of expertise) groups. Nevertheless the important link between expertise and accountability is explicitly drawn. Last but surely not least, the still nonbinding legal status of the Charter of Fundamental Rights weakens its weight, even if the ECJ and some national courts are referring to it in their judgements. In this regard, it would be desirable to have the Charter become binding, either as part of a binding EU Constitutional Treaty or in another form. To conclude, the EU experimental capacity will be put once again to a hard test by multiple and possibly contradictory expectations: to deliver security, be a champion of peace and democracy, provide welfare internally and not become a fortress. It may fail with hard consequences or may reach maturity as a supranational democratic polity. Strengthening accountability and safeguarding fundamental rights can lead us there; weakening them, or even opting out through undue exceptions in the name of security, would undermine some of the very foundations of the European project.

The debate space is key to raising awareness of the widespread consequences of facial recognition.
Gray, 03. Mitchell, Surveillance and Society. Urban Surveillance and Panopticism: will we recognize the facial recognition society? JJZ
http://www.surveillance-and-society.org/articles1(3)/facial.pdf
Conclusion: Balancing Privacy and Security

The first step in harnessing the progress of facial recognition tools is to raise public awareness concerning their use and potential consequences. Privacy International presented its 2001 U.S. Big Brother Award for Worst Public Official to the City of Tampa for spying on all of the Super Bowl attendees with facial recognition. The annual award presents Orwell-inspired statues to the government agencies, companies and initiatives which have done most to invade personal privacy. Beyond raising awareness, there must be an active and widespread debate about the consequences of facial recognition systems and the power they give to their controllers. Cindy Cohn, legal director of the Electronic Frontier Foundation (U.S.), says, "If we are going to decide as a country that because of our worry about terrorism we are willing to give up our basic privacy, we need an open and full debate on whether we want to make such a fundamental change." The objective of surveillance studies must be to ensure that people are more than just objects of information. The power of the panopticon is limited by the process of giving those observed a degree of control over, and knowledge of, facial recognition systems. As surveillance systems are implemented, they must be carefully scrutinized to ensure accountability among those who gain power from the systems. Benefits to public safety must be clearly described, and government must justify any secrecy.

There are important openings for dissent in the nascent facial recognition society. The U.S. Supreme Court may have denied a right of privacy over facial features, but there is sociological evidence suggesting people observe a customary right to facial privacy. Journalist Malcolm Gladwell (2002) says we tend to focus on audible communication and ignore much of the visual information given in the face, because to do otherwise would challenge the ordinary boundaries of human relationships. Gladwell refers to an essay written by psychologist Paul Ekman in which Ekman discusses Erving Goffman's sociological work: "Goffman said that part of what it means to be civilized is not to steal information that is not freely given to us. When someone picks his nose or cleans his ears, out of unthinking habit, we look away .... for Goffman the spoken word is the acknowledged information, the information for which the person who states it is willing to take responsibility" (2002). Gladwell writes that it is disrespectful and an invasion of privacy to probe people's faces for information their words leave out. Awareness of the information also entails an obligation, Gladwell says, to react to a message that was never intended to be transmitted. To see what is intended to be hidden, or, at least, what is usually missed, Gladwell explains, "opens up a world of uncomfortable possibilities" (2002). Ideas such as these, that examine the forms of interaction a facial recognition society would create, can be exploited in mounting a defence against the observation onslaught. They may be of little consequence now, when people have yet to experience the full brunt of facial surveillance, but as its drawbacks become increasingly apparent, the arguments will become more salient.

Paradoxically, it may be precisely the potential for surveillance to influence behaviour that may ultimately destroy the possibility of exercising that influence. As panoptic surveillance continues to cover more of the urban space and be experienced more constantly and intrusively by urban dwellers, there is a theoretical threshold point beyond which the surveillance ceases to achieve control. If most members of a society develop the expectation that their mistakes and indiscretions have been recorded and may be revealed, the stigmatisation of their behaviour that encourages orderliness will slowly disappear. If an individual can no longer anticipate that his life, especially the rough edges, is safely hidden from view, there is less incentive for that person to maintain the false distinction between his actual and reported behaviour. Society would gradually adopt new norms, ones that less strictly censure behaviours that were previously common yet concealed. Criticism could be pre-empted at this stage by embracing publicly our foibles and declaring them normal before society at large can say otherwise. It is the same model used by the politician who calmly discloses that he has smoked marijuana with a "Who hasn't tried it?" tone in his voice.

Left alone, the trajectory of facial recognition surveillance could result in dramatic changes in individual access to privacy and group interaction. Powerful forces, especially governments but also marketers and others, seek to weaken privacy provisions, even at the risk of the potentially negative changes this could entail. This is not a new situation, but these institutions and individuals were dealt a favourable hand when the September 11 terrorist attacks aggravated the risk society and facilitated the manipulation of the public consciousness. Only by actively engaging in the ongoing debate between privacy and security advocates immediately, before facial recognition systems are omnipresent, can the evolution of surveillance be balanced with society's need for privacy. Raising awareness of dangers, speculating on the future and taking small steps of resistance is a beginning.

Perm do both. Pragmatism is key to resolve problems ignored by the state.
Burns 08 Professor in History of Medicine at King's University College at the University of Western Ontario (Lawrence, "Identifying concrete ethical demands in the face of the abstract other: Emmanuel Levinas' pragmatic ethics," Philosophy & Social Criticism, March 2008, Vol. 34, No. 3)
The link between the face of the other and the demand for justification establishes
the pragmatic character of Levinas' ethics. To see the other is to be obligated
to respond to a need. I may agree, disagree, explain why I cannot help the other,
or I may even act to help the other, but the obligation to respond is not diminished
no matter what my response may be. Thus, even though the other may call my
joyous possession of the world into question (TI, 756/73), I can still turn a blind eye

and a deaf ear to the face and voice of the other and develop an alibi. Given the
distinction between shouldering responsibility on the one hand and acting on that
responsibility on the other, i.e. between the experience of obligation (the
imperative) and the subject's responsive performance (Gibbs, 2000: 3), we need to
situate this pragmatic ethics at the proper level of analysis. Thus, the
responsive performance will require that the subject draw up a plan in which
traditional moral norms of the kind that Ricoeur envisions are invoked and repaired.
The guidance for that repair cannot come from the internal force of the norms
themselves because they are broken norms that cause suffering. Instead, a
prophetic response is required that looks beyond the norms in order to repair them.
However, even though it is necessary to look beyond the norms, those norms do not
disappear. They are repaired, revised, and justified, but only because of the
subject's assumption of responsibility for the other.
