1AC: FRT
Plan
Plan: The United States federal government should
substantially curtail its domestic facial biometric surveillance.
Exclusion
Contention 1 is Exclusion
FRT has become an instrument of power that creates
ontological gaps between populations and functions under a
paradigm of security-based biopolitics
Kember 13 (Sarah Kember is a writer and academic. Her work incorporates new
media, photography and feminist cultural approaches to science and technology,
Gender Estimation in Face Recognition Technology: How Smart Algorithms Learn to
Discriminate, November 30, 2013, pg. 4-6,
http://static1.1.sqspcdn.com/static/f/707453/23986911/1385871002407/Kember_MF
7.pdf?token=aCDMxWa5aZ54giSsrZsP%2BJRBRFs%3D accessed 6/23/15)//CS
Jacques Penry's PhotoFIT pack came into use in the 1970s and consisted of
photographic images of five features (hair and forehead, eyes, nose, mouth, and
chin) mounted on card.[12] He included a male and female database but
established what he claimed was a universal, genderless facial topography. This
was actually derived from a norm, a young white male that face recognition
systems continue to use, but with the aim, for example, of restricting access to
certain areas based on gender or collecting valuable demographics such as the
number of women entering a retail store on a given day.[13] The segue from
disciplinary to biopower is, for Foucault, contingent on the increasing use of
demographics and statistics that orient governance more towards the populace
than the individual.[14] Face recognition systems demonstrate both forms of
power and perhaps even the shift from one to the other. This becomes clearer as
we track back from the biopolitical uses and applications of face recognition
technology to the disciplinary design and architecture of the technology itself. Koray
Balci and Volkan Atalay present two algorithms for gender estimation.[15] They
point out that the same algorithms can be used for different face specific tasks
such as race or age estimation, without any modification.[16] In the first
algorithm, the training face images are normalized and the eigenfaces are
established using PCA.[17] PCA is described here as a statistical technique for
dimensionality reduction and feature extraction.[18] The performance of the
system is improved by the subsequent use of a pruning algorithm, which
identifies statistical connections extraneous to gender (race or age) estimation and
deletes them. After deletion, the system is re-trained and the pruning is repeated
until all the connections are deleted.[19] A performance table is produced,
showing the relation between each iteration of pruning, the percentage of deleted
connections, and the accuracy of the system. The accuracy of gender estimation in
Balci and Atalay's experiment actually diminishes after the eighth iteration, albeit
by only a few percentage points, allowing them to claim that the system is stable.
They maintain that pruning or the deletion of statistical connections improves
gender estimation not in a linear or absolute sense but by enhancing the process of
classification itself. For Geoffrey Bowker and Susan Leigh Star, classification is a
largely invisible, increasingly technological, and fundamentally infrastructural
means of sorting things out.[20] It is an instrument of power-knowledge that is
productive of the things it sorts: things such as faces that are by no means
unambiguous entities that precede their sorting.[21] The existence of a pruning
algorithm that renders faces less ambiguous testifies to their elusiveness, or their
inherent resistance to classification as one mode of representationalism. It would,
perhaps, be going too far to suggest that there is a crisis of representationalism in
appearance-based face recognition systems. However, their designers and
engineers are clearly aware that faces are things that "resist depiction"[22] because
they are "complex and multidimensional"[23] and not "unique, rigid objects."[24]
The advantage of a more dynamic and relational approach to the production of
faces in face recognition technology would include recognizing representationalism
as a claim, a defensive manoeuvre in the face of faces' non-essential ontology and
dynamic co-evolution with technological systems. Still, this defensive manoeuvre
matters in a double sense: it is both meaningful and material, reproducing norms,
for example norms of gender, in a machine that is learning to classify, sort, and
discriminate among the population better than it could before. If this is a last push
to representationalism, it is one that reinforces it rather than shows it the door.
Face recognition technology upholds a belief in the existence of ontological
gaps between representations and that which they represent. It also re-produces
the norms of nineteenth-century disciplinary photography even as photography
becomes allied to the security-based biopolitics of computational vision and smart
algorithmic sorting. In this sense, Kelly Gates is right to argue that new vantage
points can underscore old visions as well as old claims to unmediated visuality.[25]
Like her, I question the autonomy of face recognition systems without denying that,
in conjunction with human input of various kinds, they enact what Barad calls
agential realism, generating both categories and entities by cutting and sorting
male from female, black from white, old from young.[26] In a context in which
security systems are fully integrated with those of marketing, these particular
epistem-ontologies intersect in predictable ways with the category of
criminal/citizen-consumer.[27] Since the events of 9/11, the stereotypical face of
terror (gendered, racialized) has been perhaps the most represented and most
elusive of all. If the problem, from a system point of view, is that the categories leak
and the classification structure does not hold, the solution is to reinforce it by
pruning it. This process of agential cutting and sorting strengthens statistical groups
by deleting connections between them and is precisely the point of a possible
intervention, the means by which the biopolitics and ethics of computational vision
can be intercepted in order to make a difference.
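The Kember card describes a concrete pipeline: normalize training face images, extract eigenfaces with PCA, train a gender classifier, then iteratively prune the weakest statistical connections and retrain. The sketch below is illustrative only, not Balci and Atalay's actual code; the data is random and the least-squares classifier, component count, and pruning rule are all invented stand-ins for their neural-network setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for normalized training face images: 200 samples, 64 "pixels".
# (Invented random data; a real system would use aligned grayscale faces.)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)          # toy binary gender labels

# --- PCA: project faces onto their top eigenfaces -----------------------
X_centered = X - X.mean(axis=0)
# Rows of Vt are the principal axes ("eigenfaces" when reshaped to images).
_, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
k = 10
eigenfaces = Vt[:k]                       # top-k eigenfaces
features = X_centered @ eigenfaces.T      # dimensionality-reduced features

# --- A linear classifier fit by least squares (stand-in for their net) --
def train(F, y, mask):
    """Fit weights by least squares, holding pruned connections at zero."""
    w = np.zeros(F.shape[1])
    cols = np.flatnonzero(mask)
    if cols.size:
        w[cols], *_ = np.linalg.lstsq(F[:, cols], y - 0.5, rcond=None)
    return w

def accuracy(F, y, w):
    return float(np.mean((F @ w > 0) == (y == 1)))

# --- Iterative pruning: delete the weakest connection, then re-train ----
mask = np.ones(k, dtype=bool)
w = train(features, y, mask)
for iteration in range(1, k):
    weakest = np.argmin(np.where(mask, np.abs(w), np.inf))
    mask[weakest] = False                 # "delete" that connection
    w = train(features, y, mask)          # re-train after deletion
    print(iteration, int(mask.sum()), round(accuracy(features, y, w), 3))
```

As in the performance table the card describes, each iteration reports the pruning step, the surviving connections, and the resulting accuracy; the point of the card is that this ordinary-looking loop is where classification norms get built in.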
Scenario 1 is Biopower
recognition systems may affect those who are observed. The conclusion addresses
ways in which societies that value the balance between privacy and security must
respond. Facial Recognition Defined Facial recognition programs are part of the
growing realm of biometrics, or body measurement. Face images, fingerprints, hand
geometries, retinal patterns, voice modulations and DNA are all identification
sources unique to individuals. Facial recognition software maps details and ratios of
facial geometry using algorithms, the most popular of which results in a
computation of what is called the "eigenface," composed of eigenvalues (Selinger
and Socolinsky, 2002). Many basic uses of facial recognition technology are
relatively benign and receive little criticism. For example, the technology can be
used like a high-tech key, allowing access to virtual or actual spaces. Instead of
presenting a password, magnetic card or other such identifier, the face of the
person seeking access is screened to ensure it matches an authorized identity. This
eliminates the problem of stolen passwords or access cards. In heightened security
situations, facial recognition could be used in conjunction with other forms of
identification (Lyon, 2001: 75). The next step in facial recognition is to connect the
systems to digital surveillance cameras, which can then be used to monitor spaces
for the presence of individuals whose digital images are stored in databases. Images
of those present in the spaces under watch can also be recorded and subsequently
paired with identities. Surveillance power grows as various systems, public and
private, are networked together to share information. Risk management is able to
enhance security by cataloguing and analysing observable behaviour, but it also
has a deeper significance: the ability to directly affect that behaviour. For Foucault
(1994), the modern art of governance arose with a turning away from the blunt
forces of sovereign power and control over a state to a disciplinary influence on the
population within a state through the acquisition of knowledge and conduct of
analysis about that population. Clive Norris and Gary Armstrong (1998:7) list three
types of power created by surveillance. First is a direct, authoritative response seen,
for example, when a security guard using CCTV observes a person behaving
inappropriately and asks the person to cease the behaviour. The second form is
deterrence, exemplified by an individual who refrains from inappropriate behaviour
due to a fear of being caught based in the perceived ability of CCTV monitors to
identify him. The third form is not meant to punish or deter, but to abolish the
potential for deviance. This requires an internalisation of the power of surveillance
that transforms those under its gaze. Understanding this third type of power begins
with Jeremy Bentham's eighteenth-century disciplinary concept of the panopticon.
The panopticon is a simple architectural design meant to impose order on the lives
of those within, be they criminals, insane, workers or school children. A multi-level
circular building surrounds a central observation tower. The building is divided into
individual cells traversing its entire width, so that sunlight from a window in the
outside wall of the cells illuminates each inhabitant for viewing by disciplinarians in
the tower. Windows on the tower are fitted with blinds or other mechanisms,
allowing disciplinarians to observe those sequestered in the cells without being seen
themselves (Foucault, 1995:200). The surveillance ability from the tower is
complete: each actor is alone, "perfectly individualized and constantly visible." The
panoptic mechanism "arranges spatial unities that make it possible to see constantly
and to recognize immediately" (Foucault, 1995:200). The strength of the panopticon
derives from the visible yet unverifiable operation of power within. Captives
constantly sense the presence of the tower and the possibility they are being
observed at any given time, yet have no way to determine exactly when they are
under scrutiny (Foucault, 1995:201). In this way, the panopticon "induces in the
inmate a state of conscious and permanent visibility that assures the automatic
functioning of power" (Foucault, 1995:201). Carefully orchestrated power of this sort
does not need to be exercised constantly, because subjects internalise the power
relationship. As Foucault explains: He who is subjected to a field of visibility, and
who knows it, assumes responsibility for the constraints of power; he makes them
play spontaneously upon himself; he inscribes in himself the power relation in which
he simultaneously plays both roles; he becomes the principle of his own subjection.
(1995:202-203). In a similar vein, Norris and Armstrong note that the power of
surveillance is "not merely that it is exercised over someone but through them,"
and "[s]urveillance therefore involves not only being watched but watching over
one's self." The result is habituated anticipatory conformity and social control that
automatically enforces commonly accepted societal norms and values (1998:5-6).
An urban space permeated with facial recognition systems is the apotheosis of the
panopticon. While CCTV has the power to "see constantly" like the panopticon, only
facial recognition can "recognize immediately." Disciplinary influence can be
achieved in this way over bodies on the move; bodies no longer need to be
physically sequestered for panoptic discipline to affect them. New dangers lurk in
the contemporary panopticon. As surveillance spreads throughout society and its
control disperses accordingly, its influence is also dispersed, and in unanticipated
ways. The operation of surveillance power stems largely from its ability to sort and
categorize. David Lyon calls contemporary panopticism the "phenetic fix." It acts to
capture personal data triggered by human bodies and to use these abstractions to
place people in new social classes of income, attributes, habits, preferences, or
offences, in order to influence, manage, or control them (2002). Little is known
about the overall effects of the process on urban centres and their liveability, and it
must be analysed critically. As Lyon says, We simply do not understand ... the full
implications of networked surveillance for power relationships, or of the phenetic
fix for security and social justice (2002).
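The card above distinguishes the relatively benign "high-tech key" use of facial recognition (checking one face against one claimed identity) from the surveillance use (scanning every captured face against a database). In matcher terms these are 1:1 verification and 1:N identification, and the sketch below makes the difference concrete. Everything here is invented for illustration: the "faceprints" are random vectors and the distance threshold is a hypothetical stand-in for a real matching score.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in for enrolled faceprints: each identity is a small
# feature vector (a real system would derive these from facial geometry).
enrolled = {name: rng.normal(size=8) for name in ("alice", "bob", "carol")}

def distance(a, b):
    """Euclidean distance between two faceprints (toy similarity measure)."""
    return float(np.linalg.norm(a - b))

def verify(probe, claimed, threshold=1.0):
    """1:1 check, the 'high-tech key': does the probe match one claimed identity?"""
    return distance(probe, enrolled[claimed]) < threshold

def identify(probe, threshold=1.0):
    """1:N search, the surveillance mode: scan the whole database for a match."""
    name, d = min(((n, distance(probe, v)) for n, v in enrolled.items()),
                  key=lambda t: t[1])
    return name if d < threshold else None

# A noisy capture of an enrolled face, as from a camera.
probe = enrolled["alice"] + rng.normal(scale=0.05, size=8)
print(verify(probe, "alice"))   # access-control use: one consenting subject
print(identify(probe))          # watchlist use: everyone in view is searched
```

The code is trivial, which is the point the card builds toward: moving from `verify` to `identify` changes nothing algorithmically, only who gets searched and whether they know it.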
A complicated rush of forces and alliances converged and parted ways in response
to the September 11 catastrophe in the United States. From the cacophony of public
debates and policy responses emerged the official rubric of Homeland Security,
but what exactly this type of security would entail in practice would require
additional political energy and resources to define. What sort of political priorities
would define Homeland Security and how would they be translated into programs,
technologies, and practices for security provision? Among the range of security
solutions proposed in the immediate aftermath of 9/11, and later as part of the
responsibilities of the Department of Homeland Security (DHS), was the widespread
deployment of new identification technologies called biometrics. Following the
attacks, security through identification, or the "securitization of identity" (Rose 1999,
p. 240), emerged as a major political and governmental imperative, with significant
attention and resources applied to conceive of how to improve upon existing
identification systems, and how to deploy those improved techniques at a
proliferation of sites. The biometrics industry, made up of private companies
developing and marketing digital fingerprinting, iris scanning, voice recognition, and
similar systems, reoriented itself around the new political priorities of homeland
security. The press gave considerable coverage to biometrics and their developers,
and awareness of the technologies mushroomed among policy makers and the
public. Biometric industry stocks rose to inflated levels, and industry representatives
appeared in the press touting the prospects of their product offerings to protect the
nation from the new threat of catastrophic terrorism on US soil. Hearings were held
on Capitol Hill to debate the potential uses of biometrics in new state security
programs, and Congress subsequently funded more research and development
toward their integration into airport security and border and immigration control
systems. Every major piece of post-9/11 federal security legislation included
biometrics provisions, including the USA Patriot Act, the Enhanced Border Security
and Visa Entry Reform Act, the Aviation and Transportation Security Act, and the
Homeland Security Act, officially establishing the Department of Homeland Security
and its duties. One specific type of biometric that drew the attention of policy
makers and the press was automated facial recognition. More of an ongoing set of
experiments than an actually existing technical system, automated facial
recognition included a number of different competing computer processes designed
to identify individuals using digitized visual images of the face. While civil
libertarians have raised critical questions about the privacy implications of
computerized facial recognition, this essay sets aside the unresolved privacy issues,
and the security versus privacy dichotomy, to approach facial recognition from a
different critical lens.1 Why was facial recognition considered a solution to the
newly salient problem of terrorism after 9/11? And what problems and
contradictions had to be forgotten or glossed over in order to construct it as a so-called security solution? I explore three answers to these questions. First and most
generally, interest in facial recognition in the wake of 9/11 was indicative of the
preoccupation with technical concerns in contemporary political life. As Andrew
Barry (2001) has argued, present governmental rationality is concerned with all
things technical, and problems of government are conceived as having technical
solutions, often involving networking and interactivity. In this context, it made sense
for the problem of terrorism to be thought of as having a high-tech, state-of-the-art
to this message, but it has many small enemies that do not play by the
conventional rules of state warfare and thus represent significant threats,
disproportionate to their small size and military resources. These new
unidentifiable and unpredictable enemies are constructed as major risks, a
construction given considerable leverage by the enormity of the violence on 9/11
along with its precession as simulacra. Of course, the mug shot images of specific
faces in this video contradict the notion that the new national threats are
unidentifiable. The visual text, including images of specific faces and groups of
ethnically coded people, exemplifies the way in which the problem of asymmetric
threats is bound symbolically to the stereotype of the Arab terrorist. While the
implication is that facial recognition and other technologies can accomplish the truly
magical feat of identifying the unidentifiable threats to the nation, we are invited
to imagine precisely who will be identified. In the aftermath of 9/11, the
identity of asymmetric, unidentifiable threats articulated not only to visual
images of Arab men, but also to repeated references to the "face of terror." One
example was the title of the Visionics white paper discussed above, "Protecting
Civilization from the Faces of Terror." In addition, the Washington Post published an
article on facial recognition headlined "In the Face of Terror; Recognition Technology
Spreads Quickly" (O'Harrow 2001b, p. E1), and the Technology, Terrorism, and
Government Information Subcommittee of the US Senate Judiciary Committee held
a hearing on "Biometric Identifiers and the Modern Face of Terror: New Technologies
in the Global War on Terrorism" (2001). The headline of John Poindexter's September
2003 New York Times op-ed piece, defending the politically unpopular Total
Information Awareness system, read Finding the Face of Terror in Data. The faces
of terror metaphor, while obviously used as a clever turn of phrase in order to
position facial recognition as a solution to airport security, cannot be dismissed as a
clever copy.7 Ostensibly referencing the individual faces of the 9/11 hijackers as
well as potential future terrorists, it also conjured up the idea of an amorphous,
racialized, and fetishized enemy Other that had penetrated both the
national territory and the national imagination. With images of the faces of
the hijackers and of Osama bin Laden circulating in the press, the faces of terror
metaphor invoked specific objects: mug-shots and grainy video images of Arab
men. It is not surprising or unusual that the facial images stood in for the individuals
themselves; we commonly understand the image of the face as a signifier for
individual identity. However, the idea that certain faces could be inherently "faces of
terror," that individuals embody terror or evil in their faces, could not help but
invoke a paranoid discourse of racialized otherness. Such discourse
recuperated the famous eugenicist Francis Galton's pseudo-scientific research into
the typology of criminal faces, as well as the guiding principle of the eighteenth and
nineteenth-century science of physiognomy: that a person's true character could be
read from the features of the face, the window to the soul. Further, it is not difficult
to read a subtext of incubation and national contamination in the reference to
finding the face of terror in data, along with an implicit effort to posit facial
recognition and other new identification technologies as capable of purifying the
nation of its enemies within. In the fall of 1993, TIME published its now famous
cover depicting "The New Face of America": a computer-generated image of a
woman's face, morphed together from the facial images of seven men and seven
here than the resurgence of old-style racism; there is the recognition that some
groups have the power to protect themselves from such stereotypes and others do
not, and for those who do not, especially poor blacks, racist
myths have a way of producing precise, if not deadly, material consequences. Given
the public's preoccupation with violence and safety, crime and terror merge in the
all too-familiar equation of black culture with the culture of criminality, and images
of poor blacks are made indistinguishable from images of crime and violence.
Criminalizing black behavior and relying on punitive measures to solve social
problems do more than legitimate a biopolitics defined increasingly by the
authority of an expanding national security state under George W. Bush. They also
legitimize a state in which the police and military, often operating behind closed
doors, take on public functions that are not subject to public scrutiny (Bleifuss 2005,
22). This becomes particularly dangerous in a democracy when paramilitary or
military organisations gain their legitimacy increasingly from an appeal to fear and
terror, prompted largely by the presence of those racialized and class-specific
groups considered both dangerous and disposable.
mass slaughters are the effect, the result, the logical consequence of our rationality,
nor do I mean that the state has the obligation of taking care of individuals since it
has the right to kill millions of people. After proceeding through this set of
inconclusive negatives he avers, as if trying to defer the answer to the questions he
poses: "It is this rationality, and the death and life game which takes place in it, that
I'd like to investigate from a historical point of view." One aspect of this historical
investigation occurred in Foucault's 1976 lectures. These lectures cover such
concerns as the seventeenth-century historical-political narrative of the war of the
races, and the biological and social class re-inscriptions of racial discourse in the
nineteenth century.14 He concludes with the development of the biological state
racisms and the genocidal politics of the twentieth century, including a radical
analysis of the Nazi state and of socialism. From this perspective, there is a certain
potentiality within the human sciences which, when alloyed to notions such as race,
can help make intelligible the catastrophes
of the twentieth century. Such lectures seem to make the totalitarian rule of the
twentieth century a capstone on the histories of confinement, internment and
punishment that had made up his genealogical work. This thesis is perhaps close to
the work of the first generation of the Frankfurt School and a certain reading of Max
Weber. Here the one-sided development of rationality and application of reason to
man in the human sciences has the consequence of converting instrumental
rationality into forms of domination. Bio-politics in this reading is the application of
instrumental rationality to life. The dreadful outcomes of the twentieth century then
result from this kind of scientization and technologization of earlier notions of race.
There is also a similarity in this reading of Foucault and the work of Zygmunt
Bauman.15 The latter presents the Holocaust as something that must be
understood as endogenous to Western civilization and its processes of
rationalization rather than as an aberrant psychological, social or political pathology.
Scenario 2 is Racism
FRT's lack of institutional checks exacerbates racial profiling
Volz 14 (Dustin, National Journal, June 14. "Privacy Groups Sound the Alarm Over
FBIs Facial-Recognition Technology. JJZ
http://www.nationaljournal.com/tech/privacy-groups-sound-the-alarm-over-fbi-sfacial-recognition-technology-20140624)
More than 30 privacy and civil-liberties groups are asking the Justice Department to
complete a long-promised audit of the FBI's facial-recognition database. The groups
argue the database, which the FBI says it uses to identify targets, could pose
privacy risks to every American citizen because it has not been properly vetted,
possesses dubious accuracy benchmarks, and may sweep up images of ordinary
people not suspected of wrongdoing. In a joint letter sent Tuesday to Attorney
General Eric Holder, the American Civil Liberties Union, the Electronic Frontier
Foundation, and others warn that an FBI facial-recognition program "has undergone
a radical transformation" since its last privacy review six years ago. That lack of
recent oversight "raises serious privacy and civil-liberty concerns," the groups
contend. "The capacity of the FBI to collect and retain information , even on innocent
Americans, has grown exponentially," the letter reads. "It is essential for the
American public to have a complete picture of all the programs and authorities the
FBI uses to track our daily lives, and an understanding of how those programs affect
our civil rights and civil liberties." The Next Generation Identification program, a
biometric database that includes iris scans and palm prints along with facial
recognition, is scheduled to become fully operational later this year and has not
undergone a rigorous privacy litmus test, known as a Privacy Impact Assessment,
since 2008, despite pledges from government officials. "One of the risks here,
without assessing the privacy considerations, is the prospect of mission creep with
the use of biometric identifiers," said Jeramie Scott, national security counsel with
the Electronic Privacy Information Center, another of the letter's signatories. "It's
been almost two years since the FBI said they were going to do an updated privacy
assessment, and nothing has occurred." The facial-recognition component of the
database, however, is what privacy advocates find most alarming. The FBI projects
that by 2015 the facial-recognition database could catalog up to 52 million face
photos. A substantial portion of those, about 4.3 million, are expected to be
gleaned from noncriminal photography, such as employer background checks,
according to privacy groups. But earlier this month, FBI Director James Comey told
Congress the database would not collect and store photos of average civilians and is
intended to "find bad guys by matching pictures to mugshots." But privacy hawks
remain concerned that images may be shared among the FBI and other agencies,
such as the Defense Department and National Security Agency, and even state
motor-vehicle departments. FBI Director James Comey defends the limited scope of
the agency's new facial recognition technology. Comey, during his testimony, did
not completely refute the suggestion that photos would be shared with states.
"There are some circumstances in which when states send us records, they'll send
us pictures of people who are getting special driving licenses to transport children
or explosive materials or something," Comey said. "But as I understand it, those are
not part of the searchable Next Generation Identification database." Currently, no
federal laws limit the use of facial-recognition software, either by the private sector
or the government. A 2010 government report made public last year through a
Freedom of Information Act request filed by the Electronic Privacy Information
Center stated that the agency's facial-recognition technology could fail up to 20
percent of the time. When used against a searchable repository, that failure rate
could be as high as 15 percent. But even those numbers are misleading, privacy
groups contend, because a search can be considered a success if the correct
suspect is listed within the top 50 candidates. Such an "overwhelming number" of
false matches could lead to "greater racial profiling by law enforcement by shifting
the burden of identification onto certain ethnicities." Facial-recognition technology
has recently endured heightened scrutiny from the anti-government-surveillance
crowd for its potential as an invasive means of tracking. Last month, documents
supplied by Edward Snowden to The New York Times revealed that the National
Security Agency intercepts "millions of images per day" as part of a program
officials believe could fundamentally revolutionize the way government spies on
intelligence targets across the globe. That daily cache includes about 55,000 "facial-
recognition quality images," which the NSA considers possibly more important to its
mission than the surveillance of more traditional communications. When asked for
comment, the Justice Department would only say it was reviewing the letter.
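The Volz card's point about misleading accuracy numbers, that a search counts as a "success" if the correct suspect merely appears in the top 50 candidates, can be made concrete with a toy simulation. All the distributions below are invented for illustration; the sketch only shows why a rank-50 benchmark can look strong even when the top-ranked candidate is usually a false match.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate 10,000 database searches. For each, record the rank at which
# the true subject appears in the returned candidate list (assumed toy
# distribution: ranks skewed toward the top, plus some outright misses).
n_searches = 10_000
ranks = rng.geometric(p=0.05, size=n_searches)   # rank of the true match
missed = rng.random(n_searches) < 0.10           # 10% never returned at all
ranks[missed] = 10_000_000                       # effectively absent

def hit_rate(ranks, k):
    """Fraction of searches where the true subject appears in the top k."""
    return float(np.mean(ranks <= k))

# The rank-50 'success' rate dwarfs rank-1 accuracy, yet every rank-50
# 'success' can still hand an analyst up to 49 false matches to sort through.
print(round(hit_rate(ranks, 1), 3))
print(round(hit_rate(ranks, 50), 3))
```

This gap between the reported metric and top-candidate reliability is exactly the mechanism the privacy groups cite: the burden of resolving the dozens of false matches falls on whoever appears in the candidate lists.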
modern technologies reduces accountability for those who use the data to make
decisions that affect the people they are monitoring.213 The collection of images
for FRT applications is indiscriminate, with no basis for suspecting a particular
subject of wrongdoing. It allows users to cluster disparate bits of information
together from one or more random, unidentified images such that "[t]he whole
becomes greater than the parts."214 The individuals whose images are captured do
not know how their data is being used and have no ability to control the
manipulation of their faceprints, even though the connections that are made reveal
new facts that the subjects did not knowingly disclose. The party doing the
aggregating gains a powerful tool for forming and disseminating personal
judgments that render the subject vulnerable to public humiliation and other
tangible harms, including criminal investigation. 215 Incorrect surveillance
information can lead to lost job opportunities, intense scrutiny at airports, false
arrest, and denials of public benefits.216 In turn, a lack of transparency,
accountability, and public participation in and around surveillance activities fosters
distrust in government. The recent scandal and fractured diplomatic relations over
NSA surveillance of U.S. allies is a case in point.217 Perhaps most troubling, FRT
enhances users' capacity to identify and track individuals' propensity to take
particular actions,218 which stands in tension with the common law presumption of
innocence embodied in the Due Process Clause of the Fifth and Fourteenth
Amendments.219 As described below, prevailing constitutional doctrine does not
account for the use of technology to identify, track, and predict the behavior of a
subject using an anonymous public image and big data correlations.
This is where racism intervenes, not from without, exogenously, but from within,
constitutively. For the emergence of biopower as the form of a new form of political
rationality, entails the inscription within the very logic of the modern state the logic
of racism. For racism grants, and here I am quoting: the conditions for the
acceptability of putting to death in a society of normalization. Where there is a
society of normalization, where there is a power that is, in all of its surface and in
first instance, and first line, a bio-power, racism is indispensable as a condition to
be able to put to death someone, in order to be able to put to death others. The
homicidal [meurtrière] function of the state, to the degree that the state functions
on the modality of bio-power, can only be assured by racism (Foucault 1997, 227)
To use the formulations from his 1982 lecture "The Political Technology of
Individuals," which, incidentally, echo his 1979 Tanner Lectures, the power of the
state after the 18th century, a power which is enacted through the police, and is
enacted over the population, is a power over living beings, and as such it is a
biopolitics. And, to quote more directly, "since the population is nothing more than
what the state takes care of for its own sake, of course, the state is entitled to
slaughter it, if necessary. So the reverse of biopolitics is thanatopolitics" (Foucault
2000, 416). Racism is the thanatopolitics of the biopolitics of the total state. They
are two sides of one same political technology, one same political rationality: the
management of life, the life of a population, the tending to the continuum of life of
a people. And with the inscription of racism within the state of biopower, the long
history of war that Foucault has been telling in these dazzling lectures has made a
new turn: the war of peoples, a war against invaders, imperial colonizers, which
turned into a war of races, to then turn into a war of classes, has now turned into
the war of a race, a biological unit, against its polluters and threats. Racism is the
means by which bourgeois political power, biopower, re-kindles the fires of war
within civil society. Racism normalizes and medicalizes war. Racism makes war
the permanent condition of society, while at the same time masking its
weapons of death and torture. As I wrote somewhere else, racism banalizes
genocide by making quotidian the lynching of suspect threats to the health of the
social body. Racism makes the killing of the other, of others, an everyday
occurrence by internalizing and normalizing the war of society against its enemies .
To protect society entails that we be ready to kill its threats, its foes, and if we
understand society as a unity of life, as a continuum of the living, then these threats
and foes are biological in nature.
pollution of our environment, is still going on, causing also losses and fatal dangers
for human life. Behind global terrorism and invisible wars we find striking
international and intra-society inequities and distorted development patterns, which
tend to generate social as well as international tensions, thus paving the way for
unrest and visible wars. It is commonplace now that peace is not merely the
absence of war. The prerequisites of a lasting peace between and within societies
involve not only (though, of course, necessarily) demilitarization, but also a
systematic and gradual elimination of the roots of violence, of the causes of
invisible wars, of the structural and institutional causes of large-scale international
and intra-society inequalities, exploitation and oppression. Peace requires a process
of social and national emancipation, a progressive, democratic transformation of
societies and the world bringing about equal rights and opportunities for all people,
sovereign participation and mutually advantageous co-operation among nations. It
further requires a pluralistic democracy on a global level with an appropriate system
of proportional representation of the world society, articulation of diverse interests
and their peaceful reconciliation, by non-violent conflict management, and thus also
a global governance with a really global institutional system. Under the
contemporary conditions of accelerating globalization and deepening global
interdependencies in our world, peace is indivisible in both time and space. It
cannot exist if reduced to a period only after or before war, and cannot be
safeguarded in one part of the world when some others suffer visible or invisible
wars. Thus, peace requires, indeed, a new, demilitarized and democratic world
order, which can provide equal opportunities for sustainable development.
Continues… The causes of inequalities on local, national, regional and world levels
are often interlinked. Dominance and exploitation relations go across country
boundaries, oppressors are supporting each other and oppressing other oppressors.
Societies that exploit others can hardly stay free of exploitation themselves.
Nations that hinder others in democratic transformation can hardly live in democracy.
Monopolies induce also others to monopolize. Narrow, selfish interests generate
narrow, selfish interests. Discrimination gives birth to discrimination. And so on.
Solvency
The US government must preserve the right to anonymity
Chayka, 14
Kyle Chayka, 3 October 2014, The Guardian, "The facial recognition databases are
coming. Why aren't the privacy laws?"
http://www.theguardian.com/commentisfree/2014/apr/30/facial-recognition-databases-privacy-laws
Online dating is kind of like going on a shopping trip. But instead of looking at pairs of shoes, we're perusing people, glancing over their photos and profiles in an effort to gauge how
interested we might be in them. So why shouldn't we be warned, like a grocery-store expiration date, when one is rotten? Such is the intention of CreepShield, a new web-browser
extension that uses facial recognition technology to allow users to scan the faces they see on social networking websites Facebook, eHarmony, OKCupid, even Grindr and see if the
faces match any public records in databases of sex offenders. The app seems somewhat useful. Unless, of course, you're mistakenly identified as a sex offender. When I uploaded my
own photo after writing a recent Newsweek cover story on biometric surveillance, the CreepShield search engine showed results that were less than 50% sure I was a match though
there were some people in the database who looked eerily similar to me. Thankfully, my data-filled profile photo didn't have a match, but actual sex offenders "have no right to privacy",
CreepShield's founder told me. This raises the question: will the rest of us have the right to our own faces when they get stored in search engines of the future? The US government is
currently building the largest biometrics database in the world with Next Generation Identification, a system meant to help identify criminals. The FBI estimates that it will store over 50m
face images by 2015, according to documents obtained by the Electronic Frontier Foundation. Facial recognition technology has plenty of practical applications. Germany is beginning to
use biometric data to scan individuals at border crossings, and Facebook even collects face patterns to suggest who should be tagged in photos. The technology is contributing to what
will become a $20bn market by 2020, according to the Secure Identity Biometrics Association (Siba). Companies including Animetrics and Cognitec are selling their technology to
startups like CreepShield as well as to police and military, with success rates of over 98% for facial matching. From a clear face image, ethnicity can be identified with an error rate of
13% and gender with an error rate of 3%. Unfortunately, United States law has not caught up with the technology's expansion. And as facial recognition becomes more present in
everyday life, we are going to need new regulations protecting the anonymity of our faces, just as we are protecting our cellphones and, hopefully, the metadata therein. If we don't, we
will lose our ability to be anonymous and even when we're talking about identifying sex offenders, retaining some measure of anonymity is important. Would you really want to cast a
controversial vote or publicly protest in a world where your peers or the cops could track down your cheekbone pattern in seconds? With Facebook's burgeoning databases as well as the
FBI's Next Generation Identification system, it's now easier than ever to get access to a photo of a person's face and turn it into a kind of fingerprint on steroids, without them knowing.
We need to find a way to preserve our anonymity , and fast. Fingerprints and DNA data are protected under US
Supreme Court law, providing a possible precedent for face-prints. If a fingerprint or DNA test is collected without due cause, it can't be used in court as evidence it constitutes an
unreasonable search and seizure, outlawed by the Fourth Amendment. The Supreme Court is just this week embroiled in debate over whether or not search and seizure of social media
and cellphone data should require a warrant. While we grapple with today's dominant technologies, we should also be looking forward to tomorrow's, regulating the Fourth Amendment's
Otherwise, the future gets dystopian quickly. The door opens to a version of CreepShield that runs on gossip or Yelp-like reviews of people instead of a sex-offender database. (That
random guy you see in the bar? Forget Lulu try "see you later".) Indeed, a world without facial-recognition laws is a world without strangers. Not being able to lie about height on
OKCupid is the least of our worries.
Doug; Lieutenant, US Navy. M.A. from Navy. "Now Hear This: Why the Age of Great-Power War Is Over."
Proceedings Magazine, May 2012, Vol. 138. http://www.usni.org/magazines/proceedings/2012-05/now-hear-why-agegreat-power-war-over
The 20th century brought seismic shifts as the global political system transitioned from being multipolar during the
first 40 years to bipolar during the Cold War before emerging as the American-led, unipolar international order we
state. Whereas in years past, when nations allied with their neighbors in ephemeral bonds of convenience, today's
between economic security and physical security . Increasingly, great-power interests demand
cooperation rather than conflict. To that end, maritime nations such as the United States and China desire open sea
lines of communication and protected trade routes, a common security challenge that could bring these powers
together, rather than drive them apart (witness China's response to the issue of piracy in its backyard). Facing
these security tasks cooperatively is both mutually advantageous and common sense. Democratic Peace Theory
(championed by Thomas Paine and international relations theorists such as New York Times columnist Thomas
Friedman) presumes that great-power war will likely occur between a democratic and non-democratic state.
However, as information flows freely and people find outlets for and access to new ideas, authoritarian leaders will
find it harder to cultivate popular support for total war, an argument advanced by philosopher Immanuel Kant in
his 1795 essay "Perpetual Peace." Consider, for example, China's unceasing attempts to control Internet access.
The 2011 Arab Spring demonstrated that organized opposition to unpopular despotic rule has begun to reshape the
political order, a change galvanized largely by social media. Moreover, few would argue that China today is not
socially more liberal, economically more capitalistic, and governmentally more inclusive than during Mao Tse-tung's
regime. As these trends continue, nations will find large-scale conflict increasingly disagreeable. In terms of the
military, ongoing fiscal constraints and socio-economic problems likely will marginalize defense issues. All the more
reason why great powers will find it mutually beneficial to work together to find solutions to
common security problems, such as countering drug smuggling, piracy, climate change, human trafficking, and
terrorism: missions that Admiral Robert F. Willard, former Commander, U.S. Pacific Command, called "deterrence
and reassurance."
[Fears] that terrorists and rogue nations could acquire nuclear weapons have sparked a new surge of fear and speculation. In the past, excessive fear
about nuclear weapons led to many policies that turned out to be wasteful and
unnecessary. We should take the time to assess these new risks to avoid an overreaction that will take resources and attention away from other
problems. Indeed, a more thoughtful analysis will reveal that the new perceived danger is far less likely than it might at first appear. Albert Einstein
memorably proclaimed that nuclear weapons have changed everything except our way of thinking. But the weapons actually seem to have changed
little except our way of thinking, as well as our ways of declaiming, gesticulating, deploying military forces, and spending lots of money. To begin with, the
find much military reason to use them, even in principle, in actual armed conflicts. Although they may have failed to alter substantive history, nuclear
weapons have inspired legions of strategists to spend whole careers agonizing over what one analyst has called "nuclear metaphysics," arguing, for
example, over how many MIRVs (multiple independently targetable reentry vehicles) could dance on the head of an ICBM (intercontinental ballistic
missile). The result was a colossal expenditure of funds. Most important for current policy is the fact that contrary to decades of hand-wringing about the
has taken place has been substantially inconsequential. When the quintessential rogue state, Communist China, obtained nuclear weapons in 1964,
Central Intelligence Agency Director John McCone sternly proclaimed that nuclear war was almost inevitable. But far from engaging in the nuclear
were imposed and then a war was waged, and each venture has probably resulted in more deaths than were suffered at Hiroshima and Nagasaki
combined. (At Hiroshima and Nagasaki, about 67,000 people died immediately and 36,000 more died over the next four months. Most estimates of the
Iraq war have put total deaths there at about the Hiroshima-Nagasaki levels, or higher.) Today, alarm is focused on the even more pathetic regime in North
Korea, which has now tested a couple of atomic devices that seem to have been fizzles. There is even more hysteria about Iran, which has repeatedly
insisted it has no intention of developing weapons. If that regime changes its mind or is lying, experience suggests it is likely to find that, except for
stoking the national ego for a while, the bombs are substantially valueless and a very considerable waste of money and effort. Politicians of
all stripes preach to an anxious, appreciative, and very numerous choir when they,
like President Obama, proclaim atomic terrorism to be "the most immediate and extreme
threat to global security." It is the problem that, according to Defense Secretary Robert Gates, currently keeps every senior leader
awake at night. This is hardly a new anxiety. In 1946, atomic bomb maker J. Robert Oppenheimer ominously warned that if three or
four men could smuggle in units for an atomic bomb, they could blow up New York. This was an early expression of a pattern of dramatic risk inflation that
has persisted throughout the nuclear age. In fact, although expanding fires and fallout might increase the effective destructive radius, the blast of a
Hiroshima-size device would blow up about 1% of the citys areaa tragedy, of course, but not the same as one 100 times greater. In the early 1970s,
nuclear physicist Theodore Taylor proclaimed the atomic terrorist problem to be immediate, explaining at length how comparatively easy it would be to
steal nuclear material and step by step make it into a bomb. At the time he thought it was already too late to prevent the making of a few bombs, here
and there, now and then, or in another ten or fifteen years, it will be too late. Three decades after Taylor, we continue to wait for terrorists to carry out
their easy task. In contrast to these predictions, terrorist groups seem to have exhibited only limited desire and even less progress in going atomic. This
may be because, after brief exploration of the possible routes, they, unlike generations of alarmists, have discovered that the tremendous effort required
is scarcely likely to be successful. The most plausible route for terrorists, according to most experts, would be to manufacture an atomic device
themselves from purloined fissile material (plutonium or, more likely, highly enriched uranium). This task, however, remains a daunting one, requiring that
a considerable series of difficult hurdles be conquered and in sequence. Outright armed theft of fissile material is exceedingly unlikely not only because of
the resistance of guards, but because chase would be immediate. A more promising approach would be to corrupt insiders to smuggle out the required
substances. However, this requires the terrorists to pay off a host of greedy confederates, including brokers and money-transmitters, any one of whom
could turn on them or, either out of guile or incompetence, furnish them with stuff that is useless. Insiders might also consider the possibility that once the
heist was accomplished, the terrorists would, as analyst Brian Jenkins none too delicately puts it, have every incentive to cover their trail, beginning with
Crossing international borders would be facilitated by following established smuggling routes, but these are not as chaotic as they appear and are often under the watch of suspicious and careful criminal regulators. If border personnel became suspicious of the commodity being
smuggled, some of them might find it in their interest to disrupt passage, perhaps to collect the bounteous reward money that would probably be offered by alarmed governments once the uranium theft had been discovered. Once outside the country with their precious booty, terrorists would need to set up a
large and well-equipped machine shop to manufacture a bomb and then to populate it with a very select team of highly skilled scientists, technicians, machinists, and administrators. The group would have to be assembled and retained for the monumental task while no consequential suspicions were
generated among friends, family, and police about their curious and sudden absence from normal pursuits back home. Members of the bomb-building team would also have to be utterly devoted to the cause, of course, and they would have to be willing to put their lives and certainly their careers at high risk,
because after their bomb was discovered or exploded they would probably become the targets of an intense worldwide dragnet operation. Some observers have insisted that it would be easy for terrorists to assemble a crude bomb if they could get enough fissile material. But Christoph Wirz and Emmanuel
Egger, two senior physicists in charge of nuclear issues at Switzerlands Spiez Laboratory, bluntly conclude that the task could hardly be accomplished by a subnational group. They point out that precise blueprints are required, not just sketches and general ideas, and that even with a good blueprint the
terrorist group would most certainly be forced to redesign. They also stress that the work is difficult, dangerous, and extremely exacting, and that the technical requirements in several fields verge on the unfeasible. Stephen Younger, former director of nuclear weapons research at Los Alamos Laboratories, has
made a similar argument, pointing out that uranium is exceptionally difficult to machine whereas plutonium is one of the most complex metals ever discovered, a material whose basic properties are sensitive to exactly how it is processed. Stressing the daunting problems associated with material purity,
machining, and a host of other issues, Younger concludes, to think that a terrorist group, working in isolation with an unreliable supply of electricity and little access to tools and supplies could fabricate a bomb is farfetched at best. Under the best circumstances, the process of making a bomb could take
months or even a year or more, which would, of course, have to be carried out in utter secrecy. In addition, people in the area, including criminals, may observe with increasing curiosity and puzzlement the constant coming and going of technicians unlikely to be locals. If the effort to build a bomb was
successful, the finished product, weighing a ton or more, would then have to be transported to and smuggled into the relevant target country where it would have to be received by collaborators who are at once totally dedicated and technically proficient at handling, maintaining, detonating, and perhaps
assembling the weapon after it arrives. The financial costs of this extensive and extended operation could easily become monumental. There would be expensive equipment to buy, smuggle, and set up and people to pay or pay off. Some operatives might work for free out of utter dedication to the cause, but
the vast conspiracy also requires the subversion of a considerable array of criminals and opportunists, each of whom has every incentive to push the price for cooperation as high as possible. Any criminals competent and capable enough to be effective allies are also likely to be both smart enough to see
boundless opportunities for extortion and psychologically equipped by their profession to be willing to exploit them. Those who warn about the likelihood of a terrorist bomb contend that a terrorist group could, if with great difficulty, overcome each obstacle and that doing so in each case is not impossible.
But although it may not be impossible to surmount each individual step, the likelihood that a group could surmount a series of them quickly becomes vanishingly small. Table 1 attempts to catalogue the barriers that must be overcome under the scenario considered most likely to be successful. In
contemplating the task before them, would-be atomic terrorists would effectively be required to go through an exercise that looks much like this. If and when they do, they will undoubtedly conclude that their prospects are daunting and accordingly uninspiring or even terminally dispiriting. It is possible to
calculate the chances for success. Adopting probability estimates that purposely and heavily bias the case in the terrorists favorfor example, assuming the terrorists have a 50% chance of overcoming each of the 20 obstaclesthe chances that a concerted effort would be successful comes out to be less
than one in a million. If one assumes, somewhat more realistically, that their chances at each barrier are one in three, the cumulative odds that they will be able to pull off the deed drop to one in well over three billion. Other routes would-be terrorists might take to acquire a bomb are even more problematic.
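The cumulative odds just given are a straight multiplication of independent per-obstacle probabilities. A minimal check of the arithmetic (the 20-step count and the 50% and one-in-three per-step figures come from the card; the function name is ours):

```python
# Mueller's cumulative-odds arithmetic: clearing every one of a series of
# independent hurdles multiplies the per-step probabilities together.

def success_odds(per_step: float, steps: int = 20) -> float:
    """Probability of clearing all `steps` independent hurdles in sequence."""
    return per_step ** steps

generous = success_odds(0.5)      # 50% chance at each of 20 barriers
realistic = success_odds(1 / 3)   # one-in-three chance at each barrier

print(f"1 in {1 / generous:,.0f}")   # 1 in 1,048,576 -- "less than one in a million"
print(f"1 in {1 / realistic:,.0f}")  # 1 in 3,486,784,401 -- "well over three billion"
```

The numbers match the card: 0.5 to the 20th power is one in 1,048,576, and (1/3) to the 20th is one in roughly 3.5 billion.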
They are unlikely to be given or sold a bomb by a generous like-minded nuclear state for delivery abroad because the risk would be high, even for a country led by extremists, that the bomb (and its source) would be discovered even before delivery or that it would be exploded in a manner and on a target the
donor would not approve, including on the donor itself. Another concern would be that the terrorist group might be infiltrated by foreign intelligence. The terrorist group might also seek to steal or illicitly purchase a loose nuke somewhere. However, it seems probable that none exist. All governments have an
intense interest in controlling any weapons on their territory because of fears that they might become the primary target. Moreover, as technology has developed, finished bombs have been out-fitted with devices that trigger a non-nuclear explosion that destroys the bomb if it is tampered with. And there are
other security techniques: Bombs can be kept disassembled with the component parts stored in separate high-security vaults, and a process can be set up in which two people and multiple codes are required not only to use the bomb but to store, maintain, and deploy it. As Younger points out, only a few
people in the world have the knowledge to cause an unauthorized detonation of a nuclear weapon. There could be dangers in the chaos that would emerge if a nuclear state were to utterly collapse; Pakistan is frequently cited in this context and sometimes North Korea as well. However, even under such
conditions, nuclear weapons would probably remain under heavy guard by people who know that a purloined bomb might be used in their own territory. They would still have locks and, in the case of Pakistan, the weapons would be disassembled. The al Qaeda factor The degree to which al Qaeda, the only
terrorist group that seems to want to target the United States, has pursued or even has much interest in a nuclear weapon may have been exaggerated. The 9/11 Commission stated that al Qaeda has tried to acquire or make nuclear weapons for at least ten years, but the only substantial evidence it
supplies comes from an episode that is supposed to have taken place about 1993 in Sudan, when al Qaeda members may have sought to purchase some uranium that turned out to be bogus. Information about this supposed venture apparently comes entirely from Jamal al Fadl, who defected from al Qaeda in
1996 after being caught stealing $110,000 from the organization. Others, including the man who allegedly purchased the uranium, assert that although there were various other scams taking place at the time that may have served as grist for Fadl, the uranium episode never happened. As a key indication of
al Qaedas desire to obtain atomic weapons, many have focused on a set of conversations in Afghanistan in August 2001 that two Pakistani nuclear scientists reportedly had with Osama bin Laden and three other al Qaeda officials. Pakistani intelligence officers characterize the discussions as academic in
nature. It seems that the discussion was wide-ranging and rudimentary and that the scientists provided no material or specific plans. Moreover, the scientists probably were incapable of providing truly helpful information because their expertise was not in bomb design but in the processing of fissile material,
which is almost certainly beyond the capacities of a nonstate group. Khalid Sheikh Mohammed, the apparent planner of the 9/11 attacks, reportedly says that al Qaeda's bomb efforts never went beyond searching the Internet. After the fall of the Taliban in 2001, technical experts from the CIA and the
Department of Energy examined documents and other information that were uncovered by intelligence agencies and the media in Afghanistan. They uncovered no credible information that al Qaeda had obtained fissile material or acquired a nuclear weapon. Moreover, they found no evidence of any
radioactive material suitable for weapons. They did uncover, however, a nuclear-related document discussing openly available concepts about the nuclear fuel cycle and some weapons-related issues. Just a day or two before al Qaeda was to flee from Afghanistan in 2001, bin Laden supposedly told a
Pakistani journalist, If the United States uses chemical or nuclear weapons against us, we might respond with chemical and nuclear weapons. We possess these weapons as a deterrent. Given the military pressure that they were then under and taking into account the evidence of the primitive or more
probably nonexistent nature of al Qaedas nuclear program, the reported assertions, although unsettling, appear at best to be a desperate bluff. Bin Laden has made statements about nuclear weapons a few other times. Some of these pronouncements can be seen to be threatening, but they are rather coy
and indirect, indicating perhaps something of an interest, but not acknowledging a capability. And as terrorism specialist Louise Richardson observes, Statements claiming a right to possess nuclear weapons have been misinterpreted as expressing a determination to use them. This in turn has fed the
exaggeration of the threat we face. Norwegian researcher Anne Stenersen concluded after an exhaustive study of available materials that, although it is likely that al Qaeda central has considered the option of using non-conventional weapons, there is little evidence that such ideas ever developed into
actual plans, or that they were given any kind of priority at the expense of more traditional types of terrorist attacks. She also notes that information on an al Qaeda computer left behind in Afghanistan in 2001 indicates that only $2,000 to $4,000 was earmarked for weapons of mass destruction research and
that the money was mainly for very crude work on chemical weapons. Today, the key portions of al Qaeda central may well total only a few hundred people, apparently assisting the Taliban's distinctly separate, far larger, and very troublesome insurgency in Afghanistan. Beyond this tiny band, there are
thousands of sympathizers and would-be jihadists spread around the globe. They mainly connect in Internet chat rooms, engage in radicalizing conversations, and variously dare each other to actually do something. Any threat, particularly to the West, appears, then, principally to derive from self-selected
people, often isolated from each other, who fantasize about performing dire deeds. From time to time some of these people, or ones closer to al Qaeda central, actually manage to do some harm. And occasionally, they may even be able to pull off something large, such as 9/11. But in most cases, their
capacities and schemes, or alleged schemes, seem to be far less dangerous than initial press reports vividly, even hysterically, suggest. Most important for present purposes, however, is that any notion that al Qaeda has the capacity to acquire nuclear weapons, even if it wanted to, looks farfetched in the
extreme. It is also noteworthy that, although there have been plenty of terrorist attacks in the world since 2001, all have relied on conventional destructive methods. For the most part, terrorists seem to be heeding the advice found in a memo on an al Qaeda laptop seized in Pakistan in 2004: Make use of
that which is available rather than waste valuable time becoming despondent over that which is not within your reach. In fact, history consistently demonstrates that terrorists prefer weapons that they know and understand, not new, exotic ones. Glenn Carle, a 23-year CIA veteran and once its deputy
intelligence officer for transnational threats, warns, We must not take fright at the specter our leaders have exaggerated. In fact, we must see jihadists for the small, lethal, disjointed, and miserable opponents that they are. al Qaeda, he says, has only a handful of individuals capable of planning, organizing,
and leading a terrorist organization, and although the group has threatened attacks with nuclear weapons, its capabilities are far inferior to its desires. Policy alternatives The purpose here has not been to argue that policies designed to inconvenience the atomic terrorist are necessarily unneeded or unwise.
Rather, in contrast with the many who insist that atomic terrorism under current conditions is rather likely indeed, exceedingly likelyto come about, I have contended that it is hugely unlikely. However, it is important to consider not only the likelihood that an event will take place, but also its consequences.
any time attack the United States with their submarine-launched missiles and kill millions of Americans, far more than even the most monumentally gifted
and lucky terrorist group. Yet the risk that this potential calamity might take place evokes little concern; essentially it is an acceptable risk. Meanwhile,
Russia, with whom the United States has a rather strained relationship, could at any time do vastly more damage with its nuclear weapons, a fully
imaginable calamity that is substantially ignored. In constructing what he calls a case for fear, Cass Sunstein, a scholar and current Obama
administration official, has pointed out that if there is a yearly probability of 1 in 100,000 that terrorists could launch a nuclear or massive biological
attack, the risk would cumulate to 1 in 10,000 over 10 years and to 1 in 5,000 over 20. These odds, he suggests, are not the most comforting. Comfort,
of course, lies in the viscera of those to be comforted, and, as he suggests, many would probably have difficulty settling down with odds like that. But
there must be some point at which the concerns even of these people would ease. Just perhaps it is at one of the levels suggested above: one in a million
or one in three billion per attempt. As for that other central policy concern, nuclear proliferation, it seems to me that
policymakers should maintain their composure. The pathetic North Korean regime mostly seems to be engaged in a
process of extracting aid and recognition from outside. A viable policy toward it might be to reduce the threat level and to wait while continuing to be
extorted, rather than to carry out policies that increase the already intense misery of the North Korean people. If the Iranians do break their pledge not to
develop nuclear weapons (a conversion perhaps stimulated by an airstrike on its facilities), they will probably use any nuclear capacity in the same way
all other nuclear states have: for prestige (or ego-stoking) and deterrence. Indeed, suggests strategist and Nobel laureate Thomas Schelling, deterrence is
about the only value the weapons might have for Iran. Nuclear weapons, he points out, would be too precious to give away or to sell and too precious
to waste killing people when they could make other countries hesitant to consider military action. It seems overwhelmingly probable that, if a nuclear
Iran brandishes its weapons to intimidate others or to get its way, it will find that those threatened, rather than capitulating to its blandishments or rushing
off to build a compensating arsenal of their own, will ally with others, including conceivably Israel, to stand up to the intimidation. The popular notion that
nuclear weapons furnish a country with the capacity to dominate its region has little or no historical support. The application of diplomacy and bribery in
an effort to dissuade these countries from pursuing nuclear weapons programs may be useful; in fact, if successful, we would be doing them a favor. But
although it may be heresy to say so, the world can live with a nuclear Iran or North Korea, as it has lived now for 45 years with a nuclear China, a country once viewed as the ultimate rogue.
Should push eventually come to shove in these areas, the problem will be to establish orderly deterrent and containment strategies. In the end, it appears that, whatever their impact on activist rhetoric, strategic theorizing, defense budgets, and political posturing, nuclear weapons have had at best a quite
limited effect on history, have been a substantial waste of money and effort, do not seem to have been terribly appealing to most states that do not have
them, are out of reach for terrorists, and are unlikely to materially shape much of our future.
Case Extensions
Inh
Facial recognition lacks any governmental regulation
Peterson, 6/16. Andrea, Washington Post, 2015. The government's plan to
regulate facial recognition tech is falling apart. JJZ
http://www.washingtonpost.com/blogs/the-switch/wp/2015/06/16/the-governmentsplan-to-regulate-facial-recognition-tech-is-falling-apart/
Facial recognition is being used by the government and big tech companies free of
federal regulation. Now, the government process trying to craft a voluntary code of
conduct to govern the technology appears to be falling apart. Privacy groups are dropping out of the multi-stakeholder meetings organized by
the Department of Commerce's National Telecommunication and Information Administration (NTIA), they said in a letter obtained by The Washington Post that will be sent Tuesday. "At
this point, we do not believe that the NTIA process is likely to yield a set of privacy rules that offer adequate protections for the use of facial recognition technology," the letter says. "We
are convinced that in many contexts, facial recognition of consumers should only occur when an individual has affirmatively decided to allow it to occur. In recent NTIA meetings,
however, industry stakeholders were unable to agree on any concrete scenario where companies should employ facial recognition only with a consumer's permission." NTIA has hosted
12 meetings on the issue since February 2014. But the tipping point was at the most recent meeting, on Thursday. First, Alvaro Bedoya, the executive director of Georgetown University's
Center on Privacy and Law, asked if companies could agree to making opt-in for facial recognition technology the default for when identifying people -- meaning that if companies wanted
to use someone's face to name them, the person would have to agree to it. No companies or trade associations would commit to that, according to multiple attendees at the meeting.
Then Justin Brookman, the director of the Center for Democracy & Technology's consumer privacy project, asked if companies would agree to a concrete scenario: What if a company set
up a camera on a public street and surreptitiously used it to identify people by name? Could companies agree to opt-in consent there? Again, no companies would commit, according to
several attendees. "This is a pretty remarkable opposition to a core privacy concept that's already in state laws on this issue," Bedoya said in an interview. Some on the business side
argue that privacy advocates "drew a line in the sand" and weren't willing to negotiate. Privacy advocates blame the industry for the impasse. "Trade associations have successfully
blocked any expansion of privacy rights since 2009, and here they are successfully shutting down a process that could have given consumers more control," Bedoya said. Others go even
further, blaming the Obama administration's ties with Silicon Valley. "The White House staff are veterans from Google and Facebook -- they see this sector as vital to the American
economy and they used data mining techniques in elections, so it is no surprise that they are ambivalent about protecting privacy, to say the least," said Jeff Chester, the executive
director of the Center for Digital Democracy. Members of the administration disputed that description. "This process is being spearheaded by people who come out of the public interest
community," said John Morris, associate administrator and director of Internet policy at the NTIA. But, he agreed, there are few federal standards for how companies can collect
information about consumers right now. Most of what little protection people have at the national level stems from the Federal Trade Commission's ability to go after companies that engage in unfair or deceptive practices.
The NTIA process was a major part of the White House's draft proposal for comprehensive consumer privacy legislation, which received a chilly reception from the privacy community
and even the FTC when it was released this spring. The NTIA meetings were designed to bring the private sector and privacy advocates together to help develop "legally enforceable
codes of conduct" based on concepts in the White House's 2012 Privacy Bill of Rights Blueprint for use in the real world. The approach was first used to come up with rules for mobile app
data, but the process was grueling and few companies have adopted the code of conduct that resulted from it, according to privacy advocates. And now representatives from the
consumer advocacy world are pulling out of the facial recognition meetings. They include representatives from the Electronic Frontier Foundation, the Consumer Federation of America,
the American Civil Liberties Union, the Center for Digital Democracy and the Center for Democracy & Technology. Privacy advocates believe their decision to withdraw could be a
significant blow to the legitimacy of proceedings. "Without the consumer and privacy groups there is no multi-stakeholder process," Chester said. But the meetings will continue, the
agency said. "NTIA is disappointed that some stakeholders have chosen to stop participating in our multistakeholder engagement process regarding privacy and commercial facial
recognition technology," an agency spokesperson said. "A substantial number of stakeholders want to continue the process and are establishing a working group that will tackle some of
the thorniest privacy topics concerning facial recognition technology. The process is the strongest when all interested parties participate and are willing to engage on all issues." Industry
representatives also have committed to continuing with the process. "It is disappointing that some stakeholders have chosen to stop participating, but we'll continue to engage with the
goal of building guidelines that help people enjoy the benefits of this technology while protecting their privacy," a Facebook spokeswoman said in a statement. At this point, consumers
are better off turning to state legislatures for protection from facial recognition technology, Bedoya said. Two states, Texas and Illinois, have biometric data privacy laws on the books that
may already provide some protection against the use of facial technology without informed consent, he said. Under the Illinois law, companies must tell users whenever biometric
information is collected, why it's being collected and how long the companies will keep it. Consumers then must provide written release that they consent to the data collection.
Facebook is being sued over its image tagging feature, which relies on facial recognition technology. The case will decide if the feature triggers the Illinois law -- and if so, if clicking
through the terms and conditions when signing up counts as adequate consent.
and walking stride. It's called the FBI's Next Generation Identification system, and the agency said it became fully operational Monday.
Hawaii, Maryland and Michigan took part in the NGI system's pilot program, documents show. A dozen others including California, Florida and
New York have discussed participation in the program as well, according to the Electronic Frontier Foundation.
searches. Police nationwide are expected to use it 196 times a day, government documents show. There are several ways your photo could end up in this massive, one-of-a-kind human tracker. Police agencies can submit your postarrest mug shot, video feed from a security camera, or photos from your family and friends. But the FBI database will also keep photos that it receives when conducting background checks, which it does for lots of private sector and
government job candidates. Surprised the FBI didn't have this before? It actually had a limited, low-tech version that only stored fingerprints. But that old system was slow to respond. Police who took fingerprints from people they
arrested would wait two hours for a response from the FBI's database. The new wait time? 10 minutes. And the 24-hour wait for employers performing background checks is now down to 15 minutes. The NGI system, which started as
a pilot program in 2009, was designed by defense contractor Lockheed Martin (LMT) in a deal worth up to $1 billion. The facial recognition software was built by MorphoTrust, a Billerica, Massachusetts-based company that already
does the biometric scans at 450 U.S. airports and DMVs in 42 states. The fingerprint features of the system are meant to help police officers identify suspects in real time. Facial recognition is meant to help detectives identify
. The Electronic Frontier Foundation has sued the Justice Department to get details on the program, but questions remain. For example, the
FBI said it will gather data from security cameras at a crime scene. But does that include the estimated 30 million surveillance cameras installed at street corners and parks? If the FBI information slide below is any indication, the FBI
is interested in using NGI system to identify a random person in a crowd -- and track them as they move, said EFF attorney Jennifer Lynch. That's why the Electronic Privacy Information Center worries the NGI system will get
integrated with CCTV cameras everywhere -- including at private businesses -- and let the government track folks without justification. FBI NGI slideFBI slide from a presentation about its new program's facial recognition capabilities.
To that point, the FBI has already mentioned it will store all photos -- even those with faces it can't immediately pinpoint -- for later identification. However, there are a few ground rules cited in FBI documents: This tool doesn't let
the government start collecting your fingerprint and body data if it couldn't before. Police aren't supposed to rely solely on the facial recognition software to arrest anyone. Photos on people's social media accounts (Facebook,
Instagram, etc.) cannot be submitted into the NGI database (at least during the pilot phase). NGI isn't just about cameras, though. The system is also designed to alert police if someone "holding positions of trust," such as a school
teacher, has run-ins with the law. And the identification system isn't limited to your face. The system is able to spot and search for scars, tattoos, birth marks. FBI documents show the agency built the system to accommodate for
future collection of biometric data. If and when our eyeballs, voices and walking style are recorded and categorized, the system will be able to uniquely identify a person that way too.
Technology exists now to make biometric data interoperable
Lynch, 12. Jennifer, Attorney for the Electronic Frontier Foundation, July 18. What Facial
Recognition Technology Means for Privacy and Civil Liberties. Presentation to the
Senate Committee on the Judiciary. JJZ
https://www.eff.org/files/filenode/jenniferlynch_eff-senate-testimonyface_recognition.pdf
Recent advancements in camera and surveillance technology over the last few
years support law enforcement goals to use face recognition to track Americans. For
example, the National Institute of Justice has developed a 3D binocular and camera that allows realtime facial acquisition and recognition at 1000 meters.31 The tool wirelessly transmits
images to a server, which searches them against a photo database and identifies the photo's subject. As of 2010, these binoculars were already in field-testing with the Los Angeles
Sheriff's Department. Presumably, the back-end technology for these binoculars could be incorporated into other tools like body-mounted video cameras or the MORIS (Mobile Offender
Recognition and Information System) iPhone add-on that some police officers are already using.32 Private security cameras and the cameras already in use by police departments have
also advanced. They are more capable of capturing the details and facial features necessary to support facial recognition-based searches, and the software supporting them allows photo manipulation that can improve the
chances of matching a photo to a person already in the database. For example, Gigapixel technology, which creates a panorama photo of many megapixel images stitched together (like
those taken by security cameras), allows anyone viewing the photo to drill down to see and tag faces from even the largest crowd photos.33 It also shows not just a face but also what
that person is wearing; what books and political or religious materials he is carrying; and whom he is with. And image enhancement software, already in use by some local law
enforcement, can adjust photos taken "in the wild"34 so they work better with facial recognition searches. Cameras are also being incorporated into more and more devices that are
capable of tracking Americans and that can provide that data to law enforcement. For example, one of the largest manufacturers of highway toll collection systems filed a patent
application in 2011 to incorporate cameras into the transponder that sits on the dashboard in your car.35 This manufacturer's transponders are already in 22 million cars, and law
enforcement already uses this data to track subjects. While a patent application does not mean the company is currently manufacturing or trying to sell the devices, it certainly shows
it's interested. Interoperability and Data Sharing: Before September 11, 2001, the federal government had many policies and practices in place to silo data and information within each
agency. Since that time the government has enacted several measures that allow, and in many cases require, information sharing within and among federal intelligence and federal,
state, and local law enforcement agencies. For example, the FBI has worked with North Carolina, one of a handful of states reported to be in the NGI pilot program, to track criminals using the state's DMV records.40 States also share fingerprints (and face prints soon)
access to criminal and terrorist databases. 42 And ICE and the FBI share biometric data on deportees with the countries to which they are deported.43
Solvency
Legislative actions are key to establishing necessary oversight
mandates on biometric collections
Lynch, 12. Jennifer, Attorney for the Electronic Frontier Foundation, July 18. What Facial
Recognition Technology Means for Privacy and Civil Liberties. Presentation to the
Senate Committee on the Judiciary. JJZ
https://www.eff.org/files/filenode/jenniferlynch_eff-senate-testimonyface_recognition.pdf
The over-collection of biometrics has become a real concern, but there are still opportunities, both technological and legal, for change. Given the current uncertainty of Fourth Amendment jurisprudence in the context of biometrics and the fact that biometrics capabilities are undergoing dramatic technological change,121 legislative action could be a good solution to curb the over-collection and over-use of biometrics in society today and in the future. If so, the federal government's response to two seminal wiretapping cases in the late '60s could be used as a model.122 In the wake of Katz v. United States123 and
New York v. Berger, 124 the federal government enacted the Wiretap Act,125 which laid out specific rules that govern
federal wiretapping, including the evidence necessary to obtain a wiretap order,
limits on a wiretap's duration, reporting requirements, and a notice provision.126 Since then, law enforcement's ability to wiretap a suspect's phone or electronic device has been governed primarily by statute rather than Constitutional case law. Congress could also look to the
Video Privacy Protection Act (VPPA),127 enacted in 1988, which prohibits the wrongful disclosure of video tape rental or sale records or similar audio-visual materials, requires a
warrant before a video service provider may disclose personally identifiable information to law enforcement, and includes a civil remedies enforcement provision. If legislation or regulations are proposed in the biometrics context, the following principles should be considered to protect privacy and security. These principles are based in part on key
provisions of the Wiretap Act and VPPA and in part on the Fair Information Practice Principles (FIPPs), an internationally recognized set of privacy-protecting principles.128 Limit the Collection: Collection and retention should be specifically disallowed without legal process unless the collection falls under a few very limited and defined exceptions. For example, clear rules should be defined to govern when law enforcement or similar agencies may collect biometrics revealed to the public, such as a face print. Limit the Amount and Type of Data Stored
and Retained: For biometrics such as a face print that can reveal much more information about a person than his or her identity, rules should be set to limit the amount of data stored. Combining face recognition technology from public cameras with license plate information increases the potential for tracking and surveillance. This should be avoided or limited to specific individual
investigations. Define Clear Rules for Use and Sharing: Biometrics collected for one purpose should not be used for another purpose. For example, face prints collected for use in a
criminal context should not automatically be used or shared with an agency to identify a person in an immigration context. Similarly, photos taken in a non-criminal context, such as for a
driver's license, should not be shared with law enforcement without proper legal process. For private sector databases, users should be required to consent or opt-in to any face recognition system. Enact Robust Security Procedures to Avoid Data Compromise: Because biometrics are immutable, data compromise is especially problematic. Using traditional
security procedures, such as basic access controls that require strong passwords and exclude unauthorized users, as well as encrypting data transmitted throughout the system, is
paramount. However security procedures specific to biometrics should also be enacted to protect the data. For example, data should be anonymized or stored separate from personal
biographical information. Strategies should also be employed at the outset to counter data compromise after the fact and to prevent digital copies of biometrics. Biometric encryption130
or hashing protocols that introduce controllable distortions into the biometric before matching can reduce the risk of problems later. The distortion parameters can easily be changed to
make it technically difficult to recover the original privacy-sensitive data from the distorted data, should the data ever be breached or compromised.131 Mandate Notice Procedures: Because of the real risk that face prints will be collected without people's knowledge, rules should define clear notice requirements to alert people to the fact that a face print has been collected. The notice provision should also make clear how long the biometric will be stored and how to request its removal from the database. Define and Standardize Audit Trails and Accountability Throughout the System: All database transactions, including biometric input, access to and searches of the system, data transmission, etc. should be logged and recorded in a way that assures accountability. Privacy and security impact assessments, including independent certification of device design and accuracy, should be conducted regularly. Ensure
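The Lynch evidence above recommends biometric encryption or hashing protocols that introduce controllable, revocable distortions into a biometric before matching (often called cancelable biometrics). A minimal sketch of that idea follows; the keyed permutation-and-sign-flip transform, the toy feature vectors, and the key names are all illustrative assumptions, not a description of any deployed scheme.

```python
import math
import random

def distort(template, key):
    """Apply a keyed, revocable distortion: permute the feature vector and
    flip signs using a PRNG seeded by the secret key. Matching is done in
    the distorted space, so the raw template never needs to be stored."""
    rng = random.Random(key)
    idx = list(range(len(template)))
    rng.shuffle(idx)                                 # keyed permutation
    signs = [rng.choice((-1.0, 1.0)) for _ in idx]   # keyed sign flips
    return [signs[i] * template[j] for i, j in enumerate(idx)]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Two captures of the same face yield similar (not identical) vectors.
enroll = [0.9, 0.1, 0.4, 0.7, 0.2, 0.8]
probe = [0.88, 0.12, 0.41, 0.69, 0.18, 0.79]
key_a = "site-A-secret"
key_b = "site-B-secret"

# The transform is orthogonal, so similarity is preserved exactly when
# both vectors are distorted under the same key...
same_key = cosine(distort(enroll, key_a), distort(probe, key_a))
# ...while templates held under different keys are not directly
# comparable, and a breached template can be revoked by changing the key.
diff_key = cosine(distort(enroll, key_a), distort(enroll, key_b))
print(same_key, diff_key)
```

Because the distortion preserves match scores under a given key but is cheap to re-key, a breach can be answered by re-enrolling with a new key, which is the property Lynch's "controllable distortions" language points at.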
infringing on the right to be left alone,163 for example, is not useful because in the FaceIt case, the people being scanned are technically being left alone. The great simplicity of this definition gives it rhetorical force and attractiveness, but also denies it the distinctiveness that is necessary for the phrase to be useful in more than a conclusory sense.164 As a spokesman for the Tampa Police department stated after the use of the technology, there is no expectation of privacy in a crowd of 100,000 people.165 Such a definition of privacy exempts biometric surveillance because proponents can simply claim that such technology leaves citizens alone while ignoring the argument that privacy claims also have to do with, for example, an individual's reluctance to have a file in a database or to have his or her face scanned unknowingly. Anonymity is a much narrower conception of the value at stake insofar as biometric technology is concerned. While there may be no expectation of privacy in a crowd, there may be an expectation of anonymity in such a space.166 Because this technology is primarily concerned with identification rather than searches, anonymity is a value that is tailored much more narrowly and is therefore better equipped to deal with biometric surveillance. 47. Privacy is closely allied with anonymity.
We may commute for years (same train, same compartment, same fellow-travelers) and yet the man to whom we reveal our hopes, our opinions, our beliefs, our business and domestic joys and crises remains "the chap who gets on at Dorking with The Times and a pipe; I don't know who he is." And he does not know who we are, because we have never exchanged names, and thus the necessary communication and release of our private concerns is accomplished without violation of our privacy. In our anonymity is our security.167
But the value of anonymity is its role as buffer to privacy intrusions. In other words, we will tolerate
considerable intrusion, and even volunteer supererogatory circumstantial detail of
our lives, if our anonymity is preserved.168 48. The strength of using anonymity to oppose FaceIt
rather than expectations of privacy lies in the fact that courts have generally protected anonymity
in public spaces whereas they have in general held that there is no expectation of
privacy in public places. This is because anonymity has implications for the First
Amendment and has strong political dimensions, from the earliest beginnings of the
country. The Federalist papers of Alexander Hamilton, James Madison and John Jay were published anonymously,
under the pen name of Publius.169 Over the years, at least six presidents, fifteen cabinet members, and thirty-four congressmen published anonymous political writings.170 In McIntyre v. Ohio Elections Comm'n, the court
indicated in striking down an ordinance requiring that political pamphlets bear the name of the author that: Under
our Constitution, anonymous pamphleteering is not a pernicious, fraudulent practice, but an honorable tradition of
advocacy and of dissent. Anonymity is a shield from the tyranny of the majority [citing J. Mill, On Liberty]. It thus
exemplifies the purpose behind the Bill of Rights, and of the First Amendment in particular: to protect unpopular
individuals from retaliation, and their ideas from suppression, at the hand of an intolerant society. The right to
remain anonymous may be abused when it shields fraudulent conduct. But political speech by its nature will
sometimes have unpalatable consequences, and, in general, our society accords greater weight to the value of free
speech than to the dangers of its misuse.171 49. In Thomas v. Collins, 172 the Court held that the president of the
United Auto Workers did not have to register as a labor organizer with the Secretary of State in Texas before being
able to identify himself as such on business cards and solicit new members. Although the ambiguities in the
Thomas opinion leave its scope in doubt, it may be read as a recognition of a right of anonymity.173 The Court has
also upheld the refusal of individuals to disclose the names of individuals who had bought defendants book,174 the
refusal of party officials to divulge the names of other members of the Progressive Party175 and the refusal of a
witness to reveal to the House Committee on Un-American Activities if other individuals had participated in the
Communist Party.176 The right to anonymity was even more firmly expounded on in NAACP v. Alabama ex rel.
Patterson177 in which the Supreme Court upheld the refusal of the NAACP to disclose its membership lists because
to do so would be a violation of the associational privacy implied by the First Amendment. And in Shelton v.
Tucker178 the Court struck down a statute requiring teachers to list their group affiliations on an annual basis.
Despite this line of cases, the scope of anonymity has not really been specified.179 This term, the Supreme Court
will hear Watchtower Bible & Tract Soc. of New York, Inc. v. Village of Stratton, 180 in which Jehovah's Witnesses are
challenging the constitutionality of an ordinance that requires door-to-door proselytizers to register first. 50. Courts
have further upheld anonymity in another prominent public forum: The Internet.181 Various scholars have decried
the fact that cookies and other technology are eroding anonymity on the Internet.182 Individuals and organizations
have argued, and courts have agreed, that there is a strong interest in being anonymous on the Internet because in
the discussion of sensitive topics, they would like to avoid ostracism or embarrassment.183 In some cases,
scholars have even argued, anonymity might even change race relations.184 51. Internet anonymity is easy to
come by, unlike anonymity off-line. Instead of having to go outside to find a payphone and making a call using a
disguised voice, now users could simply find a re-mailer service that would ensure anonymity. One of the most valuable democratic aspects of the Internet is its capability for anonymous communication.185 Thus, it is evident that anonymity is a fundamental right that courts have in general
been very aggressive in protecting and it is this right that might offer a foundation for constitutional protection
against FaceIt. B. A PER SE RIGHT? ANONYMITY DECOUPLES FROM SPEECH 52. From these cases it would appear
that a speech nexus is always required and that a right of anonymity only exists insofar as it has consequences for
speech. But in fact, one way to reformulate the value of anonymity is to argue that it encompasses a broader range of non-speech activities that
nevertheless implicate speech. Under this conception, activities that are formative of identity (such as attending
certain meetings, going into certain stores, viewing certain movies, and so on) are part of speech. Similarly,
activities that help an individual formulate his or her thoughtssuch as readingare also closely tied to speech.
These activities therefore should also be granted anonymity. Julie Cohen therefore argues for a right to read
anonymously, because the activity of reading is as intimate and prior to the activity of speaking.193 Logically, that
zone of protection should encompass the entire series of intellectual transactions through which they formed the
opinions they ultimately chose to express. Any less protection would chill inquiry, and as a result, public discourse,
concerning politically and socially controversial issues[].194 One could argue that there is a right to be anonymous in public as well, as it is expressive conduct. Attending a Green Party meeting or a Catholic mass requires walking in public and would almost certainly qualify as political and expressive conduct to which there might be a right to anonymity. But what about attending a New York Giants game (surely the expression implied is one's support for one of the teams) or a Yo-Yo Ma concert? What about walking into the local McDonald's? No matter how trivial or incidental the expressive conduct, one could still argue that they have expressive value and should therefore be protected. The case for protection of
anonymity is further bolstered by the fact that individuals appearing in public often do not have the option of hiding
their faces under a mask, for instance. Court authority has been divided over whether or not ordinances prohibiting
masks violate the First Amendment.195 Usually, courts have held, however, that unless the masks themselves
constituted symbolic speech (such as a KKK hood), ordinances preventing the wearing of masks that just hide
identity are constitutional.196 Once FaceIt is a common occurrence, ordinary citizens ought to have the right to
protect their anonymity as well, either by wearing masks or by taking down the cameras. In any case, it is anonymity that might offer a vindication of rights against the privacy invasion that FaceIt itself carries. VIII.
CONCLUSION 54.
From the user's perspective, however, the notice and consent model can only be effective if it is accompanied by
freedom to withhold consent. For that reason, the architectural and market solutions below are intended to give
users more choice by making social networks interoperable with distributed social networks and providing users
because they still do not understand the particular data process or because they have not had time to become informed, no new data should be collected and previously collected data should not be used in a new way.261 Privacy settings262 that allow users to opt out of collection and use of biometric data simply cannot serve as consent, not least because by the time a user opts out, the data has already been
collected and potentially used to identify the person in new photos.263 Professor James Grimmelmann has noted
that opt-out consent is particularly insufficient when a new practice in a social network involves a large cultural
shift.264 Collection and use of biometric data from photos previously shared with friends involves such a cultural shift because it uses technology that, for most users, is completely unimaginable. Further, opt-out consent is often designed so that users are not
even aware of the change that they are accepting by default. But even if a social network were to actually notify a
user that she can opt out if she does not want to have her friends automatically find her in new photos, the user
would simply not understand the issue. This is because it would be presented in terms of trust vis-à-vis the user's friends, not the social network that will be collecting, storing, and using highly sensitive and personally identifiable data. Another aspect of consent with respect to face recognition technology is that in order to know whether an unidentified
individual consents to automatic face recognition, you need to first extract and process her biometric data to
compare it against a database of consenting individuals.265 This could be addressed by allowing automatic face
recognition for the limited purpose of determining consent and requiring immediate deletion of any data derived in the process. In this respect, Google's opt-in notice for its face recognition technology, Find My Face, is a good start, though not
perfect. Google launched Find My Face in December 2011.267 At that time, it used a cartoon to illustrate how Find My Face would "[h]elp people tag you in photos," and it provided the user's real name above a face in the cartoon to make the example feel realistic.268 Users could then select to turn on the function. The problem was that the notice did not indicate that this function would use old photos to find users' faces in new photos. If users did not know how face recognition technology works (and most users do not), this notice did not tell them that it was asking for permission to collect and use their data in a new way. The notice had a link that users could click to learn more, but given the small print that usually appears after clicking on links of this sort, by now most users have learned not to be too curious online.269 Yet, while this notice failed to inform users of every relevant aspect of the face recognition process, it demonstrated Google's ability to communicate abstract information like automatic face recognition through a simple cartoon. Google is treading a narrow path here. On the one hand, it tries to live up to its motto, "Don't be evil."270 On the other hand, it does not want to provide more information than its competitors so as not to overwhelm the users. But if Google had clearer instructions about what information it needed to present, and the same requirements applied to its competitors, it could have designed a notice to obtain adequate consent from its users.
2. Notice Regarding the Collection and Processing of Biometrics
What sort of
which they restrict access through their privacy settings. Most users probably think that if they opt out of automatic
face recognition their biometric data will never be collected. But as a function of the opt-out consent, chances are
that a social network collects biometrics when it rolls out a service, which then resides in a database even after a
whether information about privacy practices can ever be effectively communicated to users.274 Professor
Nissenbaum argues that attempts to concisely communicate this information in plain language present a "transparency paradox."275 Thorough information overwhelms users, while concise notices contain general
provisions and do not describe the details that differentiate between good and bad practices.276 I am more
optimistic about companies ability to concisely present this information if they have the right incentives. Work in
infographics has shown that it is possible to explain incredibly complex information, such as geography or medical
information, with graphs and charts that can easily be understood by non-experts.277 The recent start-up trend of
creating demo videos to communicate often very complex online business models to users and investors in only a
few minutes is another example of this capability.278 Emerging research in user experience design further suggests
that websites can be designed to notify users of the data collection in real time and show how it will be used.279
Indeed, social networks already spend most of their time thinking about how to present our intricate social
relationships, correspondence, and social lives in a clear and accessible manner so that the platforms can be used
by children and grandparents alike.280 Organizing information about data practices is in fact a very similar task
that they have the resources to handle.281 The cartoon in the Google Plus notice, though not perfect, is a good example of how social networks can communicate very detailed information through a simple picture.282 Another example is Facebook's Interactive Tools that allow a user to browse her own profile as if she were another person to experience what that particular individual can learn about her.283 Were comprehensible information in nontraditional form incentivized by legal requirements and user expectations, these companies could extend their
innovative solutions to provide simple and informative notice about biometric data collection and processing.
Impact Framing
No War
War isn't a threat
John Aziz 14, 3-6-2014, "Don't worry: World War III will almost certainly never happen," The Week, http://theweek.com/articles/449783/dont-worry-world-war-iii-almost-certainly-never-happen
Next year will be the seventieth anniversary of the end of the last global conflict.
There have been points on that timeline such as the Cuban missile crisis in 1962,
and a Soviet computer malfunction in 1983 that erroneously suggested that the U.S.
had attacked, and perhaps even the Kosovo War in 1999, when a global conflict was a real possibility. Yet today, in the shadow of a flare-up between Russia and the U.S. which some are calling a new Cold War, I believe the threat of World War III has almost faded into nothingness. That is, the probability of a world war is the
lowest it has been in decades, and perhaps the lowest it has ever been since the
dawn of modernity. This is certainly a view that current data supports. Steven
Pinker's studies into the decline of violence reveal that deaths from war have fallen
and fallen since World War II. But we should not just assume that the past is an
accurate guide to the future. Instead, we must look at the factors which have led to
the reduction in war and try to conclude whether the decrease in war is sustainable.
So what's changed? Well, the first big change after the last world war was the
arrival of mutually assured destruction. It's no coincidence that the end of the last
global war coincided with the invention of atomic weapons. The possibility of
complete annihilation provided a huge disincentive to launching and expanding
total wars. Instead, the great powers now fight proxy wars like Vietnam and
Afghanistan (the 1980 version, that is), rather than letting their rivalries expand into
full-on, globe-spanning struggles against each other. Sure, accidents could happen,
but the possibility is incredibly remote. More importantly, nobody in power wants to
be the cause of Armageddon.
Africa. Neodymium mined in China. Plastics forged out of oil, perhaps from Saudi
Arabia, or Russia, or Venezuela. Aluminum from bauxite, perhaps mined in Brazil.
Iron, perhaps mined in Australia. These raw materials are turned into components: memory manufactured in Korea, semiconductors forged in Germany, glass made in
the United States. And it takes gallons and gallons of oil to ship all the resources
and components back and forth around the world, until they are finally assembled in
China, and shipped once again around the world to the consumer.
blame on anyone, but rather to help ensure that peace movement theory and
strategy are founded on sound beliefs. By understanding our motivations and
emotional responses, some insight may be gained into how better to struggle
against nuclear war. (a) Exaggeration to justify inaction. For many people, nuclear
war is seen as such a terrible event, and as something that people can do so little
about, that they can see no point in taking action on peace issues and do not even
think about the danger. For those who have never been concerned or taken action
on the issue, accepting an extreme account of the effects of nuclear war can
provide conscious or unconscious justification for this inaction. In short, one
removes from one's awareness the upsetting topic of nuclear war, and justifies this
psychological denial by believing the worst. This suggests two things. First, it may
be more effective in mobilising people against nuclear war to describe the dangers
in milder terms. Some experiments have shown that strong accounts of danger - for
example, of smoking[17] - can be less effective than weaker accounts in changing
behaviour. Second, the peace movement should devote less attention to the
dangers of nuclear war and more attention to what people can do to oppose it in
their day-to-day lives. (b) Fear of death. Although death receives a large amount of
attention in the media, the consideration of one's own death has been one of the
most taboo topics in western culture, at least until recently.[18] Nuclear war as an
issue raises the topic insistently, and unconsciously many people may prefer to
avoid the issue for this reason. The fear of and repression of conscious thoughts
about personal death may also lead to an unconscious tendency to exaggerate the
effects of nuclear war. One's own personal death - the end of consciousness - can be
especially threatening in the context of others remaining alive and conscious.
Somehow the death of everyone may be less threatening. Robert Lifton[19] argues
that children who learn at roughly the same age about both personal death and
nuclear holocaust may be unable to separate the two concepts, and as a result
equate death with annihilation, with undesirable consequences for coping
individually with life and working collectively against nuclear war. Another factor
here may be a feeling of potential guilt at the thought of surviving and having done
nothing, or not enough or not the right thing, to prevent the deaths of others. Again,
the idea that nearly everyone will die in nuclear war does not raise such disturbing
possibilities. (c) Exaggeration to stimulate action. When people concerned about
nuclear war describe the threat to others, in many cases this does not trigger any
action. An understandable response by the concerned people is to expand the
threat until action is triggered. This is valid procedure in many physiological and
other domains. If a person does not heed a call of 'Fire!', shouting louder may do the
trick. But in many instances of intellectual argument this procedure is not
appropriate. In the case of nuclear war it seems clear that the threat, even when
stated very conservatively, is already past the point of sufficient stimulation. This
means that what is needed is not an expansion of the threat but rather some
avenue which allows and encourages people to take action to challenge the threat.
A carefully thought out and planned strategy for challenging the war system, a
strategy which makes sense to uncommitted people and which can easily
accommodate their involvement, is one such avenue.[20] (d) Planning and
defeatism. People may identify thinking about and planning for an undesirable
future - namely the occurrence and aftermath of nuclear war - with accepting its [inevitability]. The identification of the degree of opposition to nuclear war with the
degree of devastation envisaged may also lead to the labelling of those who make
moderate estimates of the danger as lukewarm opponents of nuclear war. In many
cases such an identification has some degree of validity: those with more awareness
of the extent of racism, sexism, exploitation and misery in the world are often the
ones who take the strongest action. But the connection is not invariable. Extremism
of belief and action does not automatically ensure accurate beliefs or effective
action. A recurrent problem is how to talk about nuclear war and wide scale
devastation without appearing - or being - hardhearted. Peace activists are quite
right to reject sterilised language and doublethink ('Peace is war') in discussions on
nuclear death and destruction, especially when the facade of objectivity masks
dangerous policies. But an exclusive reliance on highly emotional arguments, or an
unofficial contest to see who can paint the worst picture of nuclear doom, is
undesirable too, especially to the degree it subverts or paralyses critical thinking
and creative development of strategy. Another unconscious identification, related
to the identification of the level of opposition to nuclear war with the level of
destruction thought to be caused by it, arises out of people's abhorrence at
'thinking about the unthinkable', namely post-nuclear war planning by military and
strategic planners. This abhorrence easily becomes abhorrence at 'thinking about
the unthinkable' in another sense, namely thinking about nuclear war and its
aftermath from a peace activist point of view. The abhorrence, though, should be
directed at the morality and politics of the military and strategic planners, not at
thinking about the 'unthinkable' event itself. Many peace activists have accepted
the reality of nuclear war as 'unthinkable', leaving the likes of strategic planner
Herman Kahn with a virtual monopoly on thinking about nuclear war. So while post-nuclear war planning is seriously carried out by some military and government
bodies, the strategies of the peace movement are seriously hampered by the gap
created by self-imposed 'unthinkability'. (g) White, western orientation. Most of the
continuing large-scale suffering in the world - caused by poverty, starvation, disease
and torture - is borne by the poor, non-white peoples of the third world. A global
nuclear war might well kill fewer people than have died of starvation and hunger-related disease in the past 50 or 100 years.[22] Smaller nuclear wars would make
this sort of contrast greater.[23] Nuclear war is the one source of possible deaths of
millions of people that would affect mainly white, rich, western societies (China and
Japan are the prime possible exceptions). By comparison, the direct effect of global
nuclear war on nonwhite, poor, third world populations would be relatively small.
Framing
Detachment from crisis-driven politics spurs practical
resistance
Cuomo, 96 [Chris Cuomo 1996 - Professor of Philosophy and Women's Studies, and Director of the Institute for
Women's Studies at the University of Georgia 1996 War Is Not Just an Event: Reflections on the Significance of
Everyday Violence Published in Hypatia 11.4 nb, pp. 30-46
(https://www.academia.edu/476274/War_is_not_just_an_event_Reflections_on_the_significance_of_everyday_violenc
e)]
Moving away from crisis-driven politics and ontologies concerning war and military violence
also enables consideration of relationships among seemingly disparate phenomena, and therefore can shape
more nuanced theoretical and practical forms of resistance. For example, investigating the ways in
which war is part of a presence allows consideration of the relationships among the events of war and the following:
how militarism is a foundational trope in the social and political imagination; how the pervasive presence and symbolism of soldiers/warriors/patriots shape meanings of gender; the ways in which threats of state-sponsored violence are a sometimes invisible/sometimes bold agent of racism, nationalism, and corporate interests; the fact that vast numbers of communities, cities, and nations are currently in the midst of excruciatingly violent circumstances. It also provides a lens for considering the
relationships among the various kinds of violence that get labeled "war." Given current American obsessions with
nationalism, guns, and militias, and growing hunger for the death penalty, prisons, and a more powerful police
state, one cannot underestimate the need for philosophical and political attention to connections among
phenomena like the "war on drugs," the "war on crime," and other state-funded militaristic campaigns. I propose
that the constancy of militarism and its effects on social reality be reintroduced as a crucial locus of contemporary
feminist attentions, and that feminists emphasize how wars are eruptions and manifestations of omnipresent
militarism that is a product and tool of multiply oppressive, corporate, technocratic states.(2) Feminists should be
particularly interested in making this shift because it better allows consideration of the effects of war and militarism
on women, subjugated peoples, and environments. While giving attention to the constancy of militarism in
contemporary life, we need not neglect the importance of addressing the specific qualities of direct, large-scale [conflicts waged in] increasingly technologically sophisticated ways and the significance of military institutions and everyday practices
in shaping reality. Philosophical discussions that focus only on the ethics of declaring and fighting wars miss these
connections, and also miss the ways in which even declared military conflicts are often experienced as omnipresent
horrors. These approaches also leave unquestioned tendencies to suspend or distort moral judgement in the face of
what appears to be the inevitability of war and militarism.
magnifies social paralysis. And it makes us more vulnerable to the effects of terrorism. Worst-case thinking means generally bad decision making for several reasons. First, it's only half of the cost-benefit equation. Every decision has costs and benefits, risks and rewards. By speculating about what can possibly go wrong, and then acting as if that is likely to happen, worst-case thinking focuses only on the extreme but improbable risks [and does a poor job of assessing] outcomes. Second, it's based on flawed logic. It begs the question by assuming that a proponent of an action must prove that the nightmare scenario is impossible. Third, it can be used to support any position or its opposite. If we
build a nuclear power plant, it could melt down. If we don't build it, we will
run short of power and society will collapse into anarchy. If we allow flights near
Iceland's volcanic ash, planes will crash and people will die. If we don't, organs won't arrive in time for transplant
operations and people will die. If we don't invade Iraq, Saddam Hussein might use the nuclear weapons he might
have. If we do, we might destabilize the Middle East, leading to widespread violence and death. Of course, not all
fears are equal. Those that we tend to exaggerate are more easily justified by worst-case thinking. So terrorism
fears trump privacy fears, and almost everything else; technology is hard to understand and therefore scary;
nuclear weapons are worse than conventional weapons; our children need to be protected at all costs; and [worst-case thinking validates ignorance, focusing not on what we know but on what] we can imagine. Remember Defense Secretary Donald Rumsfeld's quote? "Reports that say that something hasn't
happened are always interesting to me, because as we know, there are known knowns; there are things we know
we know. We also know there are known unknowns; that is to say we know there are some things we do not know.
But there are also unknown unknowns -- the ones we don't know we don't know." And this: "the absence of
evidence is not evidence of absence." Ignorance isn't a cause for doubt; when you can fill that ignorance with
imagination, it can be a call to action. Even worse, it can lead to hasty and dangerous acts. You can't wait for a smoking gun, so you act as if the gun is about to go off. Rather than making us safer, worst-case thinking has the potential to cause dangerous escalation. The new undercurrent in this is that our society no longer has the ability to calculate probabilities. Risk assessment is devalued. Probabilistic thinking is repudiated in favor of "possibilistic thinking": since we can't know what's likely to [go wrong, we speculate about what can possibly go wrong. We all have direct experience with] its effects: airline security and the TSA, which we make fun of when we're not appalled that they're harassing 93-year-old women or keeping first-graders off airplanes. You can't be too careful! Actually, you can. You can refuse to fly because of the possibility of plane crashes. You can lock your children in the house because of the possibility of child predators. You can
eschew all contact with people because of the possibility of hurt. Steven Hawking wants to avoid trying to
communicate with aliens because they might be hostile; does he want to turn off all the planet's television
broadcasts because they're radiating into space? It isn't hard to parody worst-case thinking, and at its extreme it's a
psychological condition. Frank Furedi, a sociology professor at the University of Kent, writes: "Worst-case thinking
encourages society to adopt fear as one of the dominant principles around which the public, the government and
institutions should organize their life. It institutionalizes insecurity and fosters a mood of confusion and
powerlessness. Through popularizing the belief that worst cases are normal, it incites people to feel defenseless and
vulnerable to a wide range of future threats." Even worse, it plays directly into the hands of terrorists, creating a
population that is easily terrorized -- even by failed terrorist attacks like the Christmas Day underwear bomber and
the Times Square SUV bomber. When someone is proposing a change, the onus should be on them to justify it over
the status quo. But worst-case thinking is a way of looking at the world that exaggerates the rare and unusual and gives the rare much more credence than it deserves. It isn't really a principle; it's a cheap trick to [justify what you already believe].
Democracy/Anonymous Advantage
FRT presents a
particularly difficult problem under prevailing constitutional law because most faces
are routinely exposed in public. No domestic law requires that a person's facial
features be unobstructed while she maneuvers about in public places so that the
government can use them for identification purposes. Her visage is there for the government's taking. Technology has thus become deterministic of personal privacy
today. Yet there is no reciprocal power on the part of individuals to direct how
technology will evolve in relationship to their privacy interests or even to opt out of
its implications for their daily lives. The First Amendment anonymity cases and Fourth Amendment doctrine assume that a person possesses the discretion to take steps to protect communications or other effects. The anonymity cases largely grew out of legislative attempts to coerce the disclosure of personal identities; that a speaker had revealed her identity in other pamphlets was irrelevant to the Court's analysis and ultimate conclusion [protecting] her choice to [remain anonymous. The government's use of FRT] amounts to surveillance. The theory behind the Fourth Amendment doctrines that lift its protections for information disclosed publicly or to third parties is thus unsustainable. Accordingly, the recognition of
anonymity as a constitutional value that warrants protection under the First and
Fourth Amendments may require numerous safeguards in place for forestalling
indiscriminate disclosure, as Justice Brennan suggested in Whalen.437 In his words, whether sophisticated storage and matching technology amount[s] to a deprivation of constitutionally protected privacy interests might depend in part on congressional or regulatory protections put in place to forbid the government's [indiscriminate use of such data. Your] phone or in some years your glasses and, in a few more, your contact lenses will tell you the name of that person at the party whose name you always forget . . . . Or it will tell the stalker in the bar the address where you [live]. The longstanding judicial rejection of a reasonable expectation of privacy in matters made public has depleted the Fourth Amendment of vitality for purposes of establishing constitutional barriers to the government's use of FRT to
profile and monitor individual citizens.
Americans have been increasingly monitored with face-recognition technology (FRT). Though the technique remains crude, face-based surveillance is already used in airports, on city streets, and on crowds attending the big game to detect fugitives, teenage runaways, criminal suspects, or anyone who was ever arrested. As it spreads, FRT will be an unusually fraught topic for courts to address, because it straddles so many fault lines currently lying beneath our Fourth Amendment jurisprudence. These include whether: (1) people enjoy a reasonable expectation of anonymity in public, (2) a seizure can occur without halting a person's movement, (3) long-term aggregation of data about individuals can constitute a search, and (4) the probable-cause standard tolerates generalized surveillance with a high rate of false positives. These fault lines are not minor questions but fundamental challenges of the digital-surveillance movement. While most courts to address these issues have erred toward diminished Fourth Amendment protection, this Article cites an emerging minority that would reclaim basic privacy rights currently threatened by electronic monitoring in public.
Ciraolo and Dow Chemical Co. v. United States were decided on the same day.88 The cases presented similar facts. In Ciraolo, police officers flew an airplane 1,000 feet over a suspect's fenced-off property and observed a small marijuana field.89 In Dow Chemical, EPA agents photographed the [company's facility from the air. So long as they do not give] police novel powers of perception (the ability to see through walls or hear private conversations95), sensory-enhancing tools are not offensive to public expectations.96
Based on the example of Dow, police are able to enhance their noses with drug-sniffing dogs97 and enhance their eyes with telescopes and binoculars.98 Police cannot, however, aim a heat-sensing camera at a suspect's garage, since this technique is uncomfortably analogous to looking through a wall into a private space.99 Still, as Justice Powell admonished in his Dow dissent, the availability and sensory enhancement tests inevitably abrogate public privacy as snooping technology becomes more pervasive.100 Linking surveillance cameras to FRT, then, arguably only enhances the police's already-existing senses:
respond that a person's outrage means nothing at the point at which surveillance technology meets the Dow test.
This argument, made by lower courts in other contexts, is that as long as people know a technology could
conceivably be used against them by strangers, the governments use of the technology is not a constitutional
issue.108 As articulated in one district opinion, The proper inquiry . . . is not what a random stranger would
actually or likely do [with surveillance technology], but rather what he feasibly could.109 Members of the public
could conceivably use an online FRT program such as Polar Rose to identify strangers on the street based on a furtively snapped digital photo.110 Making such a scenario all the more plausible, Google is now building an
application that would locate a person's online Google Profile based on any photo of the person's face.111 Thus, like
suspects via their cell phone records without a warrant.112 The holding was despite the government's truthful argument that a cell phone company could easily track any subscriber's movements by cataloguing the cell phone towers that received the subscriber's signal.113 Maynard reviewed the Court's important reasonable expectation cases114 and concluded: in [determining whether an expectation of privacy is reasonable,] we ask not what another person can physically and may lawfully do but rather what a reasonable person expects another might actually do.115 Were the D.C. Circuit to review
state-run FRT, the inquiry would then be whether D.C. pedestrians expect their fellow travelers to discover their
identities via FRT software. Three weeks after Maynard, a district court followed its result, emboldened by several
rulings in recent years that reclaim domains of personal privacy threatened by encroaching technology.116 Though
the Maynard reasoning is for now the minority view,117 it reflects a broadly felt
instinct to reclaim the reasonable expectation test as a guardian of Fourth
Amendment rights in public spaces.118 Face-recognition challenges offer the
potential to push Maynard further into the mainstream.
technologies like
written materials without personal identification of the author largely came about in response to government
attempts to mandate disclosures in public writings. 338 In Talley v. California,339 the Court struck down a Los
Angeles ordinance restricting the distribution of a handbill in any place under any circumstances, which does not
have printed on the cover . . . the name and address of . . . [t]he person who printed, wrote, compiled or
manufactured the same.340 Finding that the law infringed on freedom of expression, the Court observed that
[a]nonymous
much of one's privacy as possible.355 The Court extolled the virtues of anonymity as fostering [g]reat works of
literature . . . under assumed names, enabling groups to criticize the government without the threat of
persecution, and provid[ing] a way for a writer who may be personally unpopular to ensure that readers will not
prejudge her message simply because they do not like its proponent.356 As core political speech, it concluded,
[n]o form of speech is entitled to greater constitutional protection.357 Justice Stevens went on in his majority
opinion to tether anonymity to the purpose behind the Bill of Rights and the First Amendment: to protect unpopular
individuals from retaliationand their ideas from suppression at the hand of an intolerant society.358
Anonymity, he explained, is a shield from the tyranny of the majority.359 In a concurring opinion, Justice
Thomas commented that the Founders' practices and beliefs on the subject indicate[] that they believed the freedom of the press to include the right to author anonymous political articles and pamphlets.360 That most other Americans shared this understanding, he added, is reflected in the Federalists' hasty retreat before the
withering criticism of their assault on the liberty of the press.361 Justice Scalia dissented, arguing that anonymity
facilitates wrong by eliminating accountability, which is ordinarily [its] very purpose.362 To treat all anonymous
communication . . . in our society [as] traditionally sacrosanct, he continued, seems to me a distortion of the past
that will lead to a coarsening of the future.363 In Watchtower Bible & Tract Society of New York, Inc. v. Village of
Stratton,364 the Court struck down an ordinance requiring permits for door-to-door canvassing as a prior restraint
on speech but also because the law vitiated the possibility of anonymous speech.365 It characterized the permit
requirement as result[ing] in a surrender of . . . anonymity, even where circulators revealed their physical identities, because strangers to the resident certainly maintain their anonymity.366 The Court was thus
unmoved by the fact that speakers who ring doorbells necessarily make themselves physically known to their
audience, thus revealing themselves to some extent. For the Court, it was the recognition that occurs when a name
on a permit is connected to a face which triggered the Constitutions protection of anonymity. Most recently, a
fractured plurality in Doe v. Reed367 upheld a state law compelling public disclosure of the identities of referendum
petition signatories while squarely acknowledging the vitality of a First Amendment right to anonymous speech. 368
Significantly, all but one Justice recognized that the government's ability to correlate identifying information with online data created a First Amendment hazard of unprecedented dimension. Writing for the majority, Chief Justice Roberts found that an individual's expression of a political view through a signature on a referendum petition
implicated a First Amendment right.369 The Court nonetheless held that the states interest in preserving the
integrity of the electoral process and informing the public about who supports a petition justified the burdens of
compelled disclosure.370 Justice Roberts made a point of deeming significant the plaintiffs' argument that, once on the Internet, their names and addresses could be matched with other publicly available information about them in what will effectively become a blueprint for harassment and intimidation.371 Because the majority only
considered the facial challenge to the law, Justice Roberts found the burdens imposed by typical referendum
petitions unlike those that the plaintiffs feared.372 Justice Alito wrote separately to emphasize that government
access to personal data online gave rise to a strong as-applied challenge based on the individual . . . right to
privacy of belief and association.373 He considered breathtaking the implications of the state's argument that it has an interest in providing information to the public about supporters of a referendum petition; if true, the State would be free to require petition signers to disclose all kinds of demographic information, including the signers' race, religion, political affiliation, sexual orientation, ethnic background, and interest-group memberships.374
Justice Alito added that the posting of names and addresses online could allow anyone with access to a computer
[to] compile a wealth of information about all of those persons, with vast potential for use in harassment.375
Justice Thomas dissented on similar grounds, asserting that he would sustain a facial challenge precisely because
[t]he advent of the Internet enables rapid dissemination of the information needed to threaten or harass every
referendum signer, thus chill[ing] protected First Amendment activity.376 Concurring separately, Justice Scalia
stood alone in his complete rejection of First Amendment protections for anonymous speech.377 When considered
in conjunction with the digital-age Fourth Amendment cases, Doe is remarkable in its recognition of the pressures
that modern technology puts on the viability of existing constitutional doctrine relating to individual privacy.
Although Jones addressed GPS monitoring under the Fourth Amendment, Justice Sotomayor invoked the First
Amendment to emphasize that [a]wareness that the Government may be watching chills associational and
expressive freedoms, and that the Government's unrestrained power to assemble data that reveal private aspects
of identity is susceptible to abuse.378 When inexpensive technology is paired with massive amounts of readily
accessible personal information and unfettered government discretion to track individual citizens, she explained,
Mechanism
The plan represents an enshrined attempt to protect
anonymous assembly and communication
Nguyen 02 - J.D. Candidate, Yale Law School, 2003 (Alexander T., Here's
Looking At You, Kid: Has Face-Recognition Technology Completely Outflanked The
Fourth Amendment, VIRGINIA JOURNAL of LAW and TECHNOLOGY, UNIVERSITY OF
VIRGINIA, SPRING 2002, 7 VA. J.L. & TECH. 2,
http://www.vjolt.net/vol7/issue1/v7i1_a02-Nguyen.PDF) NAR
45. These potential Fourth Amendment-based arguments against FaceIt are mostly based on dicta or
extrapolation, and therefore offer very weak opposition to technology such as FaceIt. Even
though the arguments are intellectually interesting, to contend that the Fourth Amendment would
prohibit the use of technology such as FaceIt is simply to fight a quixotic battle, and it
might take too long for courts to reformulate a new conceptualization of the Fourth
Amendment to protect citizens against FaceIt. Instead, one must realize that the
expectation of privacy has crumbled with the onslaught of technology, and it might
be time to turn to another potential, and more immediately available, source of opposition to
FaceIt technology. That source is anonymity. 46. If technology has eroded the expectation of privacy,
one could argue that courts have consistently upheld what might be termed the
expectation of anonymity. The definition of privacy is almost certainly too broad to
meaningfully protect individuals against FaceIt. Privacy has become as
nebulous a concept as happiness or security.162 To simply say that FaceIt violates privacy by
infringing on the right to be left alone,163 for example, is not useful because in the FaceIt case, the people being
scanned are technically being left alone. The great simplicity of this definition gives it rhetorical force and
attractiveness, but also denies it the distinctiveness that is necessary for the phrase to be useful in more than a
conclusory sense.164 As a spokesman for the Tampa Police Department stated after the use
of video surveillance at the Super Bowl: There is no expectation of privacy in a crowd of
100,000 people.165 Such a definition of privacy exempts biometric surveillance
because proponents can simply claim that such technology leaves citizens alone
while ignoring the argument that privacy claims also have to do with, for example, an
individual's reluctance to have a file in a database or to have his or her face scanned unknowingly.
Anonymity is a much narrower conception of the value at stake insofar as
biometric technology is concerned. While there may be no expectation of
privacy in a crowd, there may be an expectation of anonymity in such a space. 166
Because this technology is primarily concerned with identification rather than
searches, anonymity is a value that is tailored much more narrowly and is therefore
better equipped to deal with biometric surveillance. 47. Privacy is closely allied with anonymity.
We may commute for years (same train, same compartment, same fellow travelers) and yet the man to whom we reveal our hopes, our opinions, our beliefs,
our business and domestic joys and crises remains "The chap who gets on at Dorking with The
Times and a pipe; I don't know who he is." And he does not know who we are, because
we have never exchanged names, and thus the necessary communication and release of our private
concerns is accomplished without violation of our privacy. In our anonymity is our security.167
But the value of anonymity is its role as buffer to privacy intrusions. In other words, we will tolerate
considerable intrusion, and even volunteer supererogatory circumstantial detail of
our lives, if our anonymity is preserved.168 48. The strength of using anonymity to oppose FaceIt
under the pen name of Publius.169 Over the years, at least six presidents, fifteen cabinet members, and thirty-four congressmen published anonymous political writings.170 In McIntyre v. Ohio Elections Comm'n, the Court
indicated, in striking down an ordinance requiring that political pamphlets bear the name of the author, that: Under
our Constitution, anonymous pamphleteering is not a pernicious, fraudulent practice, but an honorable tradition of
advocacy and of dissent. Anonymity is a shield from the tyranny of the majority [citing J. Mill, On Liberty]. It thus
exemplifies the purpose behind the Bill of Rights, and of the First Amendment in particular: to protect unpopular
individuals from retaliation, and their ideas from suppression, at the hand of an intolerant society. The right to
remain anonymous may be abused when it shields fraudulent conduct. But political speech by its nature will
sometimes have unpalatable consequences, and, in general, our society accords greater weight to the value of free
speech than to the dangers of its misuse.171 49. In Thomas v. Collins, 172 the Court held that the president of the
United Auto Workers did not have to register as a labor organizer with the Secretary of State in Texas before being
able to identify himself as such on business cards and solicit new members. Although the ambiguities in the
Thomas opinion leave its scope in doubt, it may be read as a recognition of a right of anonymity.173 The Court has
also upheld the refusal of individuals to disclose the names of individuals who had bought defendants book,174 the
refusal of party officials to divulge the names of other members of the Progressive Party175 and the refusal of a
witness to reveal to the House Committee on Un-American Activities if other individuals had participated in the
Communist Party.176 The right to anonymity was even more firmly expounded on in NAACP v. Alabama ex rel.
Patterson177 in which the Supreme Court upheld the refusal of the NAACP to disclose its membership lists because
to do so would be a violation of the associational privacy implied by the First Amendment. And in Shelton v.
Tucker178 the Court struck down a statute requiring teachers to list their group affiliations on an annual basis.
Despite this line of cases, the scope of anonymity has not really been specified.179 This term, the Supreme Court
will hear Watchtower Bible & Tract Soc. of New York, Inc. v. Village of Stratton, 180 in which Jehovah's Witnesses are
challenging the constitutionality of an ordinance that requires door-to-door proselytizers to register first. 50. Courts
have further upheld anonymity in another prominent public forum: The Internet.181 Various scholars have decried
the fact that cookies and other technology are eroding anonymity on the Internet.182 Individuals and organizations
have argued, and courts have agreed, that there is a strong interest in being anonymous on the Internet because in
the discussion of sensitive topics, they would like to avoid ostracism or embarrassment.183 In some cases,
scholars have even argued, anonymity might even change race relations.184 51. Internet anonymity is easy to
come by, unlike anonymity off-line. Instead of having to go outside to find a payphone and making a call using a
disguised voice, now users could simply find a re-mailer service that would ensure anonymity. One of the
most valuable democratic aspects of the Internet is its capability for anonymous
communication.185 Thus, it is evident that anonymity is a fundamental right that courts have in general
been very aggressive in protecting, and it is this right that might offer a foundation for constitutional protection
against FaceIt. B. A PER SE RIGHT? ANONYMITY DECOUPLES FROM SPEECH 52. From these cases it would appear
that a speech nexus is always required and that a right of anonymity only exists insofar as it has consequences for
speech. But in fact,
reformulate the value of anonymity is to argue that it encompasses a broader range of non-speech activities that
nevertheless implicate speech. Under this conception, activities that are formative of identity (such as attending
certain meetings, going into certain stores, viewing certain movies, and so on) are part of speech. Similarly,
activities that help an individual formulate his or her thoughtssuch as readingare also closely tied to speech.
These activities therefore should also be granted anonymity. Julie Cohen therefore argues for a right to read
anonymously, because the activity of reading is as intimate as, and prior to, the activity of speaking.193 Logically, that
zone of protection should encompass the entire series of intellectual transactions through which they formed the
opinions they ultimately chose to express. Any less protection would chill inquiry, and as a result, public discourse,
concerning politically and socially controversial issues[].194 One could argue that
there is a right to be
anonymous in public as well, as it is expressive conduct. Attending a Green Party meeting or a Catholic mass
requires walking in public and would almost certainly qualify as political and expressive conduct to which there
might be a right to anonymity. But what about attending a New York Giants game (surely the expression implied is
one's support for one of the teams) or a Yo-Yo Ma concert? What about walking into the local McDonald's? No
matter how trivial or incidental the expressive conduct, one could still argue that
they have expressive value and should therefore be protected. The case for protection of
anonymity is further bolstered by the fact that individuals appearing in public often do not have the option of hiding
their faces under a mask, for instance. Court authority has been divided over whether or not ordinances prohibiting
masks violate the First Amendment.195 Usually, courts have held, however, that unless the masks themselves
constituted symbolic speech (such as a KKK hood), ordinances preventing the wearing of masks that just hide
identity are constitutional.196 Once FaceIt is a common occurrence, ordinary citizens ought to have the right to
protect their anonymity as well, either by wearing masks or by taking down the cameras. In any case, it is
anonymity that might offer a vindication of rights against the privacy invasion that FaceIt carries with it. VIII.
CONCLUSION 54.
Card
The precedent from the plan protects anonymous speech and
assembly in elections which is increasingly under threat
*That's also key to global democracy.
Commission's broadcast disclosure and disclaimer mandates after applying a level of judicial review (lawyers might
call it intermediate scrutiny) lower than what the majority applied when it struck down the restrictions on
independent speech for which the case has become known.15 During the same term, the
Court upheld
mandatory disclosure of the identities of individuals who sign a ballot measure
petition in Doe v. Reed.16 Private speech and association is also under increasing
assault in the wider policy world, with calls for publicity mandates to force
disclosure of donors to traditional center-right and center-left think tanks.17
A federal case has already been filed by a blogger challenging laws that forced
him to disclose his identity, and the blogger lost his case in the first round.18 Indeed, courts have
largely sustained such publicity mandates.19 This is despite the cross-ideological majority opinion of
Justice John Paul Stevens and powerful concurring opinion of Justice Clarence Thomas in McIntyre v. Ohio Election
Comm'n, in which the Court shielded an opponent of a proposed ballot measure tax levy from being forced to
disclose her identity under the First Amendment.20 As previously recognized in McIntyre, private civic
engagement serves a critically important purpose in keeping the marketplace of
ideas focused on the message, not the messenger.21 It also protects the messenger
from retaliation when she speaks truth to power. More than 30 years before McIntyre, the
Supreme Court noted in Talley v. California that [p]ersecuted groups and sects from time to time throughout
history have been able to criticize oppressive practices and laws either anonymously or not at all.22 There was a
time when that recognition called into question all publicity mandates and bans on anonymous speech.23 Citizens
United and Doe v. Reed, however, have clearly limited the reach of McIntyre. And the Court's lax application of
intermediate scrutiny also put considerable distance between its analysis and that in Buckley v. Valeo (the
foundational case of modern campaign finance law), which sustained disclosure requirements as the least
restrictive means of curbing the evils of campaign ignorance and corruption.24 As a
result, people
engaged in politics and political issues face being thrust into the spotlight, which in
today's polarized political environment encourages retaliation, deters civic
engagement, and thereby enables those already in the incumbent political class to
consolidate their power. To prevent the resulting ossification of existing power
structures and to protect individual liberty, this paper seeks to point the way back
to our nation's heritage of private civic engagement.
A.S. k2 Democracy
Anonymous speech key to democracy. McCabe, 14.
[Katherine; JD Candidate at Fordham University School of Law; Founding Era Free
Speech Theory: Applying Traditional Speech Protection to the Regulation of
Anonymous Cyberspeech; Fordham Intellectual Property, Media & Entertainment
Law Journal; Spring 2014; 24 Fordham Intell. Prop. Media & Ent. L.J. 823]
The First Amendment has protected anonymous speech since the Founding Era.
Historically, freedom of speech has been justified
for three main reasons: advancing knowledge and truth in the marketplace of ideas;
facilitating representative democracy and self-government;
and promoting individual autonomy, self-expression and self-fulfillment.n11
Anonymous speech has been held to have inherent value and is thus
protected by the First [*827] Amendment.n12
From an originalist perspective, the mere fact that anonymous speech was allowed and protected during the
Founding Era is enough to justify the protection of anonymous speech today. However, there are prudential concerns to this argument, as the goal was arguably not to protect vicious anonymous hate speech and harassment.
an organization engaging in
(e.g.,
. This is significant because federal and state regulators are regularly reviewing policy proposals in this area of law, and our policy debate should reflect the realities of the situation and not an
unthoughtful idealization of disclosure. Disclosure Is Overrated In thinking about the (over) value of disclosure, five points are noteworthy: 1
and expressive association. Most readers of this newspaper probably know this from personal experience, having refused to contribute to candidates, or given less to candidates, to avoid being
listed in campaign finance reports. 2. The ultimate goal of disclosure rules, less corruption, is still only a theoretical benefit. Despite decades of experience, research has not established a significant link between disclosure rules
and reductions in political corruption. 3. Disclosure rules impose a tremendous burden on political campaigns and staffers. The amount of time, stress, and money poured into campaign finance reports is disproportionate to their
modest benefits. And in an increasing number of cases, disclosure rules result in criminal prosecutions and incarceration, something that should be very sobering to First Amendment enthusiasts. 4. The information otherwise
available to voters is more significant than the information provided by mandatory disclosure laws. Scholarly research shows that, after taking into account the other information available to voters, the information contained in
(and presents
Anonymity Has Value While disclosure is overrated, the benefits of anonymity are almost always overlooked. Three
issues are particularly worthy of discussion: 1. Many individuals, organizations, and causes have a legitimate need for anonymity.
and in our election law practice, we regularly encounter donors with bona fide concerns justifying anonymity. Individuals critical of powerful government officials,
for example, are unlikely to engage in extensive expressive activities unless their participation remains confidential. In this sense, disclosure rules systematically favor the establishment and disadvantage dissenters by deterring the
public from speaking truth to power and opposing inept and corrupt government officials. 2. As a practical matter, there is little risk that vehicles for anonymous speech will, in the near future, be overused. Creating and operating a
politically active 501(c)(4) organization, for example, is expensive, legally perilous, and raises suspicions. Because the existing disclosure rules make anonymity costly and inconvenient, the use of vehicles for anonymous speech is
naturally limited to individuals and organizations with a non-trivial interest in anonymity (such as a credible fear of retaliation by incumbents or electoral favorites). 3. Anonymity can clarify, rather than obscure, a message. Research
shows that viewers become skeptical of political arguments based solely on the speaker's identity. While the significance of this fact is disputed (some treat this as a reason to support disclosure rules), I tend to believe
that
political issue, are sometimes right, and by distancing a funder from his or her
message, the public is more likely to engage on the merits of an argument rather
than fixate on the speaker's personality or motives
. None of this means that disclosure should be abandoned wholesale, but these factors
support a much more nuanced view of political disclosures. They recommend, specifically, reconsidering the categories of information political organizations are required to disclose, and a recognition that anonymous political speech
plays an important role in the democratic process.
identify the speaker [*256] and for certain organizations to report the names of their contributors to the government.n19 Unlike political corruption, anonymity is
not an evil to be cured. In fact, considering the role of anonymous political speech in American history, its benefits to individual
speakers and political discourse at large far outweigh its negative effects. This
article identifies three liberty interests in anonymity to secure: preventing prejudice,
keeping the message central, and preventing retaliation from those in power. This section
discusses prominent historical examples of anonymous political speech and describes various legitimate reasons why Americans have elected to voice their political opinions
anonymously.
Anonymous speech key to democracy. Washington Post, 15. 1/25/15. The benefits of anonymous political speech.
http://www.washingtonpost.com/opinions/the-benefits-of-anonymous-politicalspeech/2015/01/25/092abefe-a326-11e4-91fc-7dff95a14458_story.html
The Jan. 21 editorial Undisclosed consequences repeated the tired notion that disclosure in political speech is an unmitigated good. In doing so, it underestimated the ability of
no reason to assume we are less capable when it comes to elections. Doing so is insulting to voters and unfair to speakers, who have every right to convey messages as they see fit.
. And yet the fact that face images can be captured without a detention
and in public, or may be uploaded voluntarily to a third party such as Facebook, or may be collected and stored by private security firms and data aggregators, presents significant challenges in applying Constitutional protections.
for governmental biometrics collection in the United States.82 Although there are significant exceptions to Fourth
Amendment protections that may make it difficult to map to biometric collection such as facial recognition, 83 a recent Supreme Court case, U.S. v. Jones, 84 and a few other cases85 show that courts are concerned about mass
collection of identifying information (even collection of information revealed to the public or a third party) and are trying to identify solutions. Cases like Jones suggest support for the premise that although we may tacitly consent to
networks they designate.86 In United States v. Jones, 87 nine justices held that a GPS device planted on a car without a warrant and used to track a suspects movements constantly for 28 days violated the Fourth Amendment. For
was an important factor in determining the outcome of the case.88 Justice Sotomayor would have gone even further, questioning the continued validity of the
third-party doctrine (holding that people lack a reasonable expectation of privacy in data such as bank records that they share with a third-party such as the bank).89 She also recognized that: [a]wareness that the Government may
be watching chills associational and expressive freedoms. And the Government's unrestrained power to assemble data that reveal private aspects of identity is susceptible to abuse.90 She questioned whether people reasonably
expect that their movements will be recorded and aggregated in a manner that enables the Government to ascertain, more or less at will, their political and religious beliefs, sexual habits, and so on.91
The fact
that several members of the Court were willing to reexamine the reasonable
expectation of privacy test92 in light of newly intrusive technology could prove
important for future legal challenges to biometrics collection.
oral argument and in their various opinions, could be used as models for establishing greater protections for data like facial recognition that is both shared with a third party such as Facebook and gathered in public.93
the latest indication of the advanced preparations for military rule in America. Politically isolated, incapable of and
opposed to any measures to ameliorate the conditions of grinding poverty and immense social inequality in
America, the ruling class looks upon the masses of workers and youth with hatred and fear. It has recently been
revealed that paramilitary forces referred to protesters in Ferguson last year as enemy forces.
The operation
in Baltimore is part of an expanding program of persistent surveillance. According
to the ACLU, new technologies, using a sophisticated high-tech version of radar that is
akin to a camera to track movements in detail across an immense territory, have
been deployed or are in development,
frame the sustainability movement to make it not just possible but attractive. This will increase the likelihood that
the changes will spread beyond the pioneers and excite vast populations. This section looks at some ways this is
happening already. John de Graaf of the Take Back Your Time movement describes one way to sell sustainability
that is likely to appeal to many people: working fewer hours. Many employees are working longer hours even as
gains in productivity would allow shorter workdays and longer vacations. Taking back time will help lower stress,
allow healthier lifestyles, better distribute work, and even help the environment. This last effect will be due not just
to less consumption thanks to lower discretionary incomes but also to people having enough free time to choose
the more rewarding and often more sustainable choicecooking at home with friends instead of eating fast food,
for example, making more careful consumer decisions, even taking slower but more active and relaxing modes of
transport. Closely connected to Take Back Your Time is the voluntary simplicity movement, as Cecile Andrews, coeditor of Less is More, and Wanda Urbanska, producer and host of Simple Living with Wanda Urbanska, discuss. This
encourages people to simplify their lives and focus on inner well-being instead of material wealth. It can help inspire
people to shift away from the consumer dream and instead rebuild personal ties, spend more time with family and
cultural norms, traditions, and values is the fairly recent development of ecovillages. Sustainability educator
Jonathan Dawson of the Findhorn ecovillage paints a picture of the exciting role that these are playing around the
world. These sustainability incubators are reinventing what is natural and spreading these ideas to broader society
not just through modeling these new norms but through training and courses in ecovillage living, permaculture,
and local economics. Similar ideas are also spreading through cohousing communities, Transition Towns, and even
green commercial developments like Dockside Green in Canada and Hammarby Sjöstad in Sweden. Two Boxes in
this section describe some other exciting initiatives. One provides an overview of a new political movement called
décroissance (in English, degrowth), which is an important effort to remind people that not only can growth be
detrimental, but sometimes a sustainable decline is actually optimal. And a Box on the Slow Food movement
describes the succulent power of organizing people through their taste buds. Across cultures and time, food has
played an important role in helping to define people's realities. Mobilizing food producers as well as consumers to
clamor for healthy, fair, tasty, sustainable cuisines can be a shrewd strategy to shift food systems and, through
them, broader social and economic systems. These are just a few of the dozens and dozens of social movements
Biopower Adv
The use of government surveillance has expanded and
legitimized the criminal justice system's war on crime.
Natapoff 2014 [09/10/14, Alexandra Natapoff, Associate Dean for Research,
Theodore A. Bruinsma Fellow & Professor of Law, Loyola Law School, Los Angeles,
MISDEMEANOR DECRIMINALIZATION, 68 VANDERBILT L. REV. (forthcoming 2015),
http://ssrn.com/abstract=2494414]
Decriminalization also sheds light on some seemingly contradictory historical developments. In important ways, the
U.S. criminal process is shrinking. The national correctional population has decreased for four years in a row.229 At
least six states have closed prisons and arrests are down for the sixth year in a row.230 California, once a leader of
the prison boom, is cutting its prison population and easing its harshest juvenile sentences.231 At the federal level,
Congress repealed the infamous crack-cocaine sentencing disparity,232 while Attorney General Eric Holder has
instructed U.S. Attorneys around the country to go easier on first-time low level drug offenders.233 The
conservative Right-on-Crime coalition advocates more rehabilitation and less incarceration.234 There is growing
international agreement across the political spectrum that the war on drugs is a failed, destructive and overly
jail and courthouse deep into civilian life, even for the most minor of offenses. It influences not only the offender,
but his or her family, neighborhood, community, and the social institutions around them. It operates directly, by
imposing fines, supervision, and criminal records, and indirectly by changing social and institutional relationships.
An offender whose formal punishment is limited to a nonjailable misdemeanor conviction and a fine may
nevertheless experience long-term restrictions on their earnings, credit, housing, employment, public benefits, and
immigration.246 It is this expansive social process that represents the full punishment triggered by an
FRT presents a
particularly difficult problem under prevailing constitutional law because most faces
are routinely exposed in public. No domestic law requires that a person's facial
features be unobstructed while she maneuvers about in public places so that the
government can use them for identification purposes. Her visage is there for the
government's taking. Technology has thus become deterministic of personal privacy
today. Yet there is no reciprocal power on the part of individuals to direct how
technology will evolve in relationship to their privacy interests or even to opt out of
its implications for their daily lives. The First Amendment anonymity cases and Fourth Amendment
doctrine assume that a person possesses the discretion to take steps to protect communications or other effects
involvement of legislative attempts to coerce the disclosure of personal identities.
her identity in other pamphlets was irrelevant to the Court's analysis and ultimate conclusion that her choice to
opting out of the various sources that are amalgamated into what
amounts to surveillance. The theory behind the Fourth Amendment doctrines that lift its protections
for information disclosed publicly or to third parties is thus unsustainable. Accordingly, the recognition of
anonymity as a constitutional value that warrants protection under the First and
Fourth Amendments may require numerous safeguards in place for forestalling
indiscriminate disclosure, as Justice Brennan suggested in Whalen. 437 In his words, whether
sophisticated storage and matching technology amount[s] to a deprivation of constitutionally protected privacy
interests might depend in part on congressional or regulatory protections put in place to forbid the government's
phone or in some years your glasses and, in a few more, your contact lenses will tell you the name of that
person at the party whose name you always forget . . . . Or it will tell the stalker in the bar the address where you
longstanding judicial rejection of a reasonable expectation of privacy in matters made public have depleted the
Fourth Amendment of vitality for purposes of establishing constitutional barriers to the governments use of FRT to
profile and monitor individual citizens.
. These collection programs have, in the past, typically included only one biometric identifier (generally a fingerprint or DNA). However,
databases in the world are the FBI's Integrated Automated Fingerprint Identification System (IAFIS) and DHS's Automated Biometric Identification System (IDENT), a part of its U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT)
program.11 Each database holds more than 100 million records, more than one-third the population of the United States. Although each of these databases currently relies on fingerprints, both are in the process of incorporating
facial recognition. IAFIS's criminal file includes records on people arrested at the local, state, and federal level and latent prints taken from crime scenes. IAFIS's civil file stores biometric and biographic data collected from members
of the military, federal employees and as part of a background check for many types of jobs, such as childcare workers, law-enforcement officers, and lawyers.12 IAFIS includes over 71 million subjects in the criminal master file and
more than 33 million civil fingerprints,13 and supports over 18,000 law-enforcement agencies at the state, local, tribal, federal, and international level. IDENT stores biometric and biographical data for individuals who interact with
the various agencies under the DHS umbrella, including Immigration and Customs Enforcement (ICE), U.S. Citizenship and Immigration Services (USCIS), Customs and Border Protection (CBP), the Transportation Security
Administration (TSA), the U.S. Coast Guard, and others.14 Through US-VISIT, DHS collects fingerprints from all international travelers to the United States who do not hold U.S. passports.15 USCIS also collects fingerprints from
citizenship applicants and all individuals seeking to study, live, or work in the United States.16 And the State Department transmits fingerprints to IDENT from all visa applicants.17 IDENT processes more than 300,000 encounters
every day and has 130 million records on file.18 In addition to the federal databases, each of the states has its own biometrics databases, and some larger metropolitan areas like Los Angeles also have regional databases. The prints
(for example, photographs and fingerprints19), arguing that collecting multiple biometrics from each subject will make identification systems more accurate.20 The FBI's Next Generation Identification (NGI) database represents the most
Unlike traditional mug shots, the new NGI photos may be
taken from any angle and may include close-ups of scars, marks and tattoos.25
(for example, NGI may include crowd photos in which many subjects may not be identified). NGI will allow law enforcement, correctional facilities, and criminal justice agencies at the local,
state, federal, and international level to submit and access photos, and will allow them to submit photos in bulk. The FBI has stated that a future goal of NGI is to allow law-enforcement agencies to identify subjects in public
datasets, which could include publicly available photographs, such as those posted on Facebook or elsewhere on the Internet.26 Although a 2008 FBI Privacy Impact Assessment (PIA) stated that the NGI/IAFIS photo database does
not collect information from commercial data aggregators, the PIA acknowledged this information could be collected and added to the database by other NGI users such as state and local law-enforcement agencies.27 The FBI has
also stated that it hopes to be able to use NGI to track people as they move from one location to another.28 Another big change in NGI will be the addition of non-criminal photos. If someone applies for any type of job that requires
fingerprinting or a background check, his potential employer could require him to submit a photo to the FBI. And, as the 2008 FBI PIA notes, expanding the photo capability within the NGI [Interstate Photo System] will also expand
the searchable photos that are currently maintained in the repository.
[Though civil files have traditionally been kept] separate from criminal [records], the FBI is currently developing a master name system
that will link criminal and civil data and will allow a single search query to access all
data.
The Bureau has stated that it believes that electronic bulk searching of civil records would be desirable.29 DHS is poised to expand IDENT to include face recognition, which would further increase data sharing
between DHS and DOJ through Secure Communities and between both agencies and DOD through other programs. 30 DHS has not yet released a Privacy Impact Assessment discussing this change.
surveillance
systems jeopardize privacy, and the challenge as surveillance grows is to prevent security solutions from evolving into greater threats to the urban fabric
than the ones they are meant to solve. In general, all arguments against camera surveillance apply, because cameras are the carrier for facial recognition technology. Most importantly, privacy is inherently valuable, serving a crucial function in the development of individuals and groups. Michael Curry explains: It is in private that
people have the opportunity to become individuals in the sense that we think of the term. People, after all, become individuals in the public realm just by selectively making public
certain things about themselves. Whether this is a matter of being selective about one's religious or political views, work history, education, income, or complexion, the important point
is this: in a complex society, people adjust their public identities in ways that they believe best, and they develop those identities in more private settings. (1997:688) To create a group
mistaken data in facial recognition terms is pairing a facial image with the wrong identity. Further problems arise when information is networked. Discrete pieces of information about an
individual may be relatively harmless to privacy, but when information is shared, a comprehensive dossier on the individual can be assembled. Privacy advocates complain that
information ostensibly collected for a specific purpose is frequently used in a myriad of ways, most of which have not been consented to by the subject. This situation becomes more
complex and delicate when public and private institutions share information. The ethical issues of governments purchasing information from private entities, which may or may not follow
the collection guidelines approved by a democratically elected government, are complex and only slowly being examined
As the spaces of
surveillance grow, private space shrinks. It must be asked whether the potential security and public safety gains from facial recognition
systems outweigh the costs to privacy incurred by their use. Healthy societies seek a balance. Drawing the policy line too close to the
public safety end of the spectrum could result in an undesirably restricted and
unnecessarily transparent society. Conversely, to unconditionally favour privacy could maintain security vulnerabilities at an unacceptable level.
The effects of pervasive surveillance stretch beyond issues related to privacy. At risk, for example, is an erosion of the benefits
of routine urban social interaction. Surveillance saturation could cause a shrinking perception of accountability among those present together
in urban space. Hille Koskela explains that electronic means have more and more often [been] used to
replace informal social control in an urban environment: the eyes of the people on the street are replaced by the eyes
of surveillance cameras (2002:259). There may be less incentive to assist someone in distress when a camera is
viewing the event. Why interfere yourself when you can let the experts behind the lens do it? At its most essential level, omnipresent surveillance
simply has the power to reduce quality of life. Conservative New York Times columnist William Safire (2002) describes succinctly
how constant surveillance is experienced: To be watched at all times, especially when doing nothing seriously wrong, is to be afflicted with a creepy feeling . . . . It is the
pervasive, inescapable feeling of being unfree.
The mobility of MORIS does not give citizens notice of the device's use or the
ability to opt out of getting scanned in the way stationary checkpoints allow. If using
Option
FRT is not a Fourth Amendment search, and probable cause or reasonable suspicion is not a prerequisite to data
Although people can opt out of going to sporting events or airports to avoid FRT and iris scans, people cannot opt
GPS tracking can alter the relationship between citizen and government in a way that is inimical to democratic
society, 147 covert FRT could similarly sabotage this relationship. B. Discriminatory Targeting and Racial Bias
MORISs
portability grants police discretion in deciding whom to identify. Without guidelines,
nothing prohibits police from acting on potential racial, gender, or class biases.148
Legally, law enforcement could primarily run pictures of a certain type of person,
without justifiable cause. Jay Stanley, an ACLU senior policy analyst, worries about
the new type of facial profiling MORIS could create.149 Not only may police take
pictures discriminatorily, but a racial bias also may arise while police search for a
match. MORIS finds the three most similar faces and displays these headshots on
the screen; however, an officer makes the final selection as to which picture
matches the person he is trying to identify.150 If the police officer is of a different
race than the person to be identified, the officer may not make this selection
accurately.151 Psychology studies show that people can more accurately recall
specific faces if they are of their own race rather than of another race.152 Due to
the other-race effect, people outside one's own race subjectively look more alike
Moreover, unlike a stationary checkpoint, where all who pass by are subject to FRT,
153 unless that person has had ample exposure to another race.154 A Northwestern
University study shows that the brain encodes same-race faces with an emphasis on unique identifiers; however,
the brain does not encode other-race faces with this level of detail.155 Consequently, we have poorer memory for
other-race faces, and are therefore less likely to [recognize] them or to distinguish between them. 156 Lay
witnesses have made inaccurate lineup identifications because of the other-race effect.157 In 1984, an innocent
man was convicted of rape after the victim, of another race, identified him as the perpetrator.158 When the man
was exonerated through DNA evidence, the victim said that the other-race effect contributed to her
misidentification.159 Given that MORIS creates a photographic lineup with the three most mathematically similar
faces and that people struggle with distinguishing another race's facial features, the Arizona legislature should give
police procedures to follow when making the final match. The other-race bias can be reduced by informing the
witness of the potential bias and by telling the witness to look for individual facial features instead of looking at the
face as a whole.160 In one study, researchers eliminated the other-race bias by giving these warnings before the
brain could encode the face.161 To ensure more accurate identifications, officers using MORIS should be required to
learn about other-race bias and how to look for unique features on faces of other races.
and spatial improvements to standardized identification systems would bring more individuals in contact with
identification systems more often, enabling an automated form of governing at a distance, tying individuals into
circuits of inclusion, while identifying and isolating dangerous, terrorist identities from civilized society. The
problem with this model of automated real-time identification at a distance is that,
no matter how sophisticated and state-of-the-art, it must always contend with the
complexity of identity: its variability in individuals and among populations, as well as its status as both an
individualizing and classifying construct. To be sure, identities and faces are inextricably connected, but their
hybrid, unstable quality complicates efforts at stabilization, a complication that leads to persistent new efforts at
Peter Bergen and Swati Pandey (a fellow and a research associate at the New America Foundation) summarized
their research debunking the myth that Muslim religious schools, known as madrassas, are training grounds for
future terrorists. They investigated the educational backgrounds of 75 terrorists involved in major recent terrorist
attacks against Westerners and found that only nine of them had attended madrassas, and all nine had taken part
in one attack, the Bali bombings in 2002. None of the 9/11 hijackers had attended madrassas. Does the
potentially faulty assumption about madrassas provide anyone with more security?
To be effective, security through identification must rely on the broadest criteria
possible for designating terrorists, lest the face of terror go unidentified. The error
that results from this broad designation and the way that it is rendered in technical
form requires vigilant critical analysis.
example, Social Security numbers were created to serve one purpose, to track wages for Social Security benefits, but are now used to identify a person for credit and background
checks, insurance, to obtain food stamps and student loans, and for many other private and government purposes.70 If biometrics become similarly standardized, they could replace
Social Security numbers, and the next time someone applies for insurance, sees her doctor, or fills out an apartment rental application, she could be asked for her face print. This is
problematic if records are ever compromised because, unlike a Social Security Number or other unique identifying number, a person cannot change her biometric data. 71 And the many
recent security breaches and reports of falsified data show that the government and private sector can never fully protect against these kinds of data losses.72 Data standardization also
increases the ability of government and the private sector to locate and track a given person throughout her life. And finally, extensive data retention periods73 can lead to further
problems; data that may be less identifying today, such as a photograph of a large crowd or political protest, could become more identifiable in the future as technology improves.
However, advanced biometrics like face recognition create additional concerns because the data may be collected in public without a person's knowledge. For example, the addition of
crowd and security camera photographs into NGI means that anyone could end up in the database, even if they're not involved in a crime, by just happening to be in the wrong place at
the wrong time, by fitting a stereotype that some in society have decided is a threat, or by, for example, engaging in suspect activities such as political protest in areas rife with
cameras.74 Given the FBI's history of misuse of data gathered on people during former FBI director J. Edgar Hoover's tenure75 and the years following September 11, 2001,76 data
strongly dependent on consistent lighting conditions and angles of view.77 It may be less accurate with certain ethnicities and with large age discrepancies (for example, if a person is
compared against a photo taken of himself when he was ten years younger). These issues can lead to a high rate of false positives, when, for example, the system falsely identifies
someone as the perpetrator of a crime or as having overstayed their visa. In a 2009 New York University report on facial recognition, the researchers noted that facial recognition
performs rather poorly in more complex attempts to identify individuals who do not voluntarily self-identify . . . Specifically, the face in the crowd scenario, in which a face is picked
out from a crowd in an uncontrolled environment.78 The researchers concluded the challenges in controlling face imaging conditions and the lack of variation in faces over large
for a search instead of one, because each of the people identified could be brought in for questioning, even if he or she was not involved in the crime. In light of this, German Federal
Data Protection Commissioner Peter Schaar has noted that false positives in facial recognition systems pose a large problem for democratic societies: in the event of a genuine hunt,
[they] render innocent people suspects for a time, create a need for justification on their part and make further checks by the authorities unavoidable.81
, which will be made up of iris scans and palm prints, along with images linked in to facial recognition systems.
undergone a radical transformation since it was last reviewed some six years ago. A
essential for the American public to have a complete picture of all the programs and authorities the FBI uses to track our daily lives, and an understanding of how those programs affect our civil rights and civil liberties. The
database, which already holds millions of fingerprint and photographic records, is scheduled to go live before the end of the year, but has never even been subjected to a routine Privacy Impact Assessment. One of the risks here,
without assessing the privacy considerations, is the prospect of mission creep with the use of biometric identifiers, said Electronic Privacy Information Center spokesperson Jeramie Scott. It's been almost two years since the FBI
, either by private companies or by the government. The Electronic Frontier Foundation noted in an April communique that
Profiles on the system will contain other personal details such as name, address, age and race. The group managed to obtain information pertaining to the program via a freedom of information request. The
system will be capable of searching through millions of facial records obtained not only via mugshots, but also via so-called civil images, the origin of which is vague at best. [T]he FBI does not define either the Special Population
Cognizant database or the new repositories category, the EFF writes. This is a problem because we do not know what rules govern these categories, where the data comes from, how the images are gathered, who has access to
them, and whose privacy is impacted. A map within the EFF's piece shows which states are already complying
with the program, and which ones are close to agreeing deals to do so. The EFF notes that currently, the FBI has access to fingerprint records of non-criminals who have submitted them for any kind of background check, by an
assertion from the FBI that it will not make positive identifications, via the database, but will use it to produce investigative leads. The Feds claim that Therefore, there is no false positive [identification] rate. [T]he FBI only
ensures that the candidate will be returned in the top 50 candidates 85 percent of the time when the true candidate exists in the gallery, the EFF states. It is unclear what happens when the true candidate does not exist in the
gallery, does NGI still return possible matches? the feature asks, noting that those identified could potentially be subjected to criminal investigation purely because a computer has decided that their face is similar to a suspect's.
EFF continues: This doesn't seem to matter much to the FBI; the Bureau notes that because this is an investigative search and caveats will be prevalent on the return detailing that the [non-FBI] agency is responsible for
determining the identity of the subject, there should be NO legal issues. This is not how our system of justice was designed and should not be a system that Americans tacitly consent to move towards, the EFF piece concludes.
owing to the fact that a search on the database will be dubbed a success if the eventual correct suspect is flagged up within the top 50 possibilities. This means
that 49 other innocent people who have never done anything wrong could be potentially marked as suspects without being considered false matches. The groups say that
the privacy groups. It is somewhat remarkable that when Google announced the release of its Glass product, it was forced to ban applications with the capability for facial recognition due to a huge privacy backlash. The Federal
government, however, continues to use such technology unhindered to create biometric profiles on anyone and everyone. The Department of Homeland Security also has its own facial recognition program, which it routinely
outsources to police departments. Meanwhile, new innovations in facial recognition technology continue to be billed as potential tools for law enforcement, including the prediction of future crime. The NSA, it has been revealed via
the leaked Snowden documents, intercepts approximately 55,000 facial-recognition quality images every day.
reborn, just as mass incarceration replaced Jim Crow. Sociologists Michael Omi and
Howard Winant make a similar point in their book Racial Formation in the United States. They attribute the
cyclical nature of racial progress to the unstable equilibrium that characterizes
the United States' racial order.22 Under normal conditions, they argue, state
institutions are able to normalize the organization and enforcement of the
prevailing racial order, and the system functions relatively automatically.
Challenges to the racial order during these periods are easily marginalized or
suppressed, and the prevailing system of racial meanings, identity, and ideology
seems natural. These conditions clearly prevailed during slavery and Jim Crow. When the
equilibrium is disrupted, however, as in Reconstruction and the Civil Rights Movement, the state
initially resists, then attempts to absorb the challenge through a series of reforms
that are, if not entirely symbolic, at least not critical to the operation of the racial
order. In the absence of a truly egalitarian racial consensus, these predictable
cycles inevitably give rise to new, extraordinarily comprehensive systems of
racialized social control. One example of the way in which a well established racial order easily
absorbs legal challenges is the infamous aftermath of the Brown v. Board of Education
decision. After the Supreme Court declared separate schools inherently unequal in 1954, segregation persisted
unabated. One commentator notes: The statistics from the Southern states are truly
amazing. For ten years, 1954-1964, virtually nothing happened.23 Not a single black
child attended an integrated public grade school in South Carolina, Alabama, or Mississippi as of the 1962-1963
won, will successfully disrupt the nation's racial equilibrium. Challenges to the
system will be easily absorbed or deflected, and the accommodations
made will serve primarily to legitimate the system, not undermine it. We
run the risk of winning isolated battles but losing the larger war.
Impacts
Combating racism addresses the root cause for nuclear war
LaBalme 2002 (www.activism.net/peace/nvcdh/discrimination.shtml)
In this action, our struggle is not only against missiles and bombs, but against the
system of power they defend: a system based on domination, on the belief that
some people have more value than others, and therefore have the right to control
others, to exploit them so that they can lead better lives than those they oppress.
We say that all people have value. No person, no group, has the right to wield power over the decisions and
resources of others. The structure of our organizations and the processes we use among ourselves are our best attempt to live our
belief in self-determination. Besides working against discrimination of all kinds among ourselves, we must try to understand how
such discrimination supports the system which produces nuclear weapons. For some people who come to this action, the overriding
issue is the struggle to prevent nuclear destruction. For others, that struggle is not separate from the struggles against racism,
sexism, classism, and the oppression of groups of people because of their sexual orientation, religion, age, physical (dis)ability,
appearance, or life history. Understood this way, it is clear that nuclear weapons are already killing people, forcing them to lead lives
of difficulty and struggle. Nuclear war has already begun, and it claims its victims disproportionately from native peoples, the Third
dominate or exclude men; on the contrary, it challenges all systems of domination. The term recognizes the historical importance of
the feminist movement in insisting that nonviolence begins at home, in the ways we treat each other. In a sexist or patriarchal
society, women are relegated to limited roles and valued primarily for their sexual and reproductive functions, while men are seen
as the central makers of culture, the primary actors in history. Patriarchy is enforced by the language and images of our culture; by
keeping women in the lowest paying and lowest status jobs, and by violence against women in the home and on the streets. Women
are portrayed by the media as objects to be violated; 50% of women are battered by men in their lives, 75% are sexually assaulted.
The sexist splitting of humanity which turns women into others, lesser beings whose purpose is to serve men, is the same split
which allows us to see our enemies as non-human, fair game for any means of destruction or cruelty. In war, the victors frequently
rape the women of the conquered peoples. Our country's foreign policy often seems directed by teenage boys desperately trying to
live up to stereotypes of male toughness, with no regard for the humanity or land of their "enemy." Men are socialized to repress
emotions, to ignore their needs to nurture and cherish other people and the earth. Emotions, tender feelings, care for the living, and
for those to come are not seen as appropriate concerns of public policy. This makes it possible for policymakers to conceive of
make white people forget that all people need and are entitled to self-determination, good health care, and
challenging work.
Racism also underlies the concept of "national security": that the U.S. must protect
its "interests" in Third World countries through the exercise of military force and
economic manipulation. In this world-view, the darker peoples of the world are incapable of managing their own affairs
and do not have the right to self-determination. Their struggles to democratize their countries and become independent of U.S.
India Adv
India expanding use of biometric software now
Gonzalez, 14. Deborah, author for Elsevier, Oct. 9, 2014. Amid rampant data
breaches and hacks, biometrics takes off.
http://www.elsevier.com/connect/amid-rampant-data-breaches-and-hacksbiometrics-takes-off
Privacy advocates do recognize security benefits of some of the biometric technologies but caution and urge their developers and users to apply them responsibly and with transparency.
So since there are no laws, can the industry agree on some voluntary guidelines or best practices, like posting notices if facial recognition technology is being used in an event? What
triggered this particular concern was a recent Super Bowl game where attendees were facially scanned without their knowledge by law enforcement. After the fact, individuals
understood the possible benefit of identifying criminals, but they felt their privacy was violated because they were not told and did not give their consent. Another concern that gets
raised is that these biometrics are stored in a database, so all the information system security concerns are still there. Also, if a regular database of passwords gets hacked, you can
change the password, but if a biometrics database gets hacked, you can't change your face. The use of biometric data that has been collected also raises the question of what the data
will be used for. India is being watched for one of the most ambitious biometrics data
collection projects in the world: Aadhaar. With over a billion people, most of whom are poor and
undocumented, the Indian government thinks biometrics could be the answer to identifying
their own population and improving government services. The Aadhaar database has collected fingerprints, iris
scans, and photos of over 500 million Indian citizens so far, who receive in exchange a 12-digit national ID number. But human rights activists in India
and abroad fear that the data will be used to marginalize even more the poorer
classes, demonstrating a level of mistrust not necessarily of the technology but of
the government entity using it.
optical fiber network throughout the country, and creating 250,000 computer service centers to provide high-speed access to residents in rural areas. As part of Digital India's goal of
providing government services to every individual, however, the government envisions a cradle-to-grave digital identity that is unique, lifelong, and authenticable. To
accomplish this, the government plans to draw on the Aadhaar program, a
controversial unique identification system that has led the Indian government to
create the world's largest biometric database. Using Aadhaar numbers, the government hopes to digitally link every person in India
to the Internet with a unique 12 digit identifier, to allow them to securely access cutting-edge tools such as digital welfare benefits and online medical services. Digital India brings some
clear benefits to the country and people: universal connectivity is an outstanding goal for individuals and industry alike. Technology can be a powerful, life-changing tool and we applaud
the governments efforts to ensure that people in rural areas have secure, high-speed access for education, commerce, health, and access to the global flow of ideas and information.
However, linking this access to the Aadhaar number brings with it significant risks. In return for secure online access to government services, citizens of India are being asked to give up
vast amounts of personal information. In addition to collecting a name, birth date, and address from each participant, the government is collecting biometric information by using iris and
with the storage of the Aadhaar data from the time of collection. How this information is protected, what is done with the data, who has access to the data, and how it will be shared
between government agencies remain troubling questions in a shifting legal landscape, without further legislative guidance. It is also not clear how the Digital India plan conforms with
the 2013 ruling from the Supreme Court of India that no one should be required to obtain an Aadhaar card in order to access government services. Moreover, India has been
unapologetic about its existing surveillance programs. Advocates have raised significant privacy and free expression concerns with the authority the government claims to conduct
surveillance, and in 2013 the government granted itself even broader latitude to monitor citizens in a process that lacked the opportunity for open public debate or parliamentary
approval. While governments have legitimate national security concerns, increased security must not come at the cost of fundamental human rights. And given the extreme sensitivities
of biometric identification data, and continued concerns over potential government misuse of individuals' unique identifiers, it's essential that India establish far greater protections for
the digital identities and privacy of all of its citizens. Last week's ICT Working Group meetings culminated in an agreement between the US and India to collaborate on implementing the
Digital India initiative. President Obama is traveling to India later this month to join Prime Minister Narendra Modi in the celebration of India's Republic Day, honoring the Indian
constitution. The President should take this opportunity to raise these critical privacy and
free expression issues as Digital India marches forward.
Privacy
1st amendment
FRT violates the First Amendment and destroys our right to
freedom of anonymous speech.
Brown 14 Associate Professor of Law, University of Baltimore School
of Law. B.A., Cornell; J.D., University of Michigan (Kimberly N, ANONYMITY,
FACEPRINTS, AND THE CONSTITUTION, GEO. MASON L. REV. [VOL. 21:2, 2014,
http://georgemasonlawreview.org/wp-content/uploads/2014/03/Brown-Website.pdf)
NAR
Separately, a line of First Amendment cases confirms that the privacy threat posed by
technologies like FRT (the government's unfettered identification and monitoring of
personal associations, speech, activities, and beliefs, for no justifiable purpose) is
one of constitutional dimension. In fact, the Supreme Court has steadfastly
protected anonymous speech.336 The Court's repeated pronouncements that the First
Amendment337 safeguards the right of anonymous speech, that is, the right to distribute
written materials without personal identification of the author, largely came about in response to government
attempts to mandate disclosures in public writings. 338 In Talley v. California,339 the Court struck down a Los
Angeles ordinance restricting the distribution of a handbill in any place under any circumstances, which does not
have printed on the cover . . . the name and address of . . . [t]he person who printed, wrote, compiled or
manufactured the same.340 Finding that the law infringed on freedom of expression, the Court observed that
[a]nonymous
much of one's privacy as possible.355 The Court extolled the virtues of anonymity as fostering [g]reat works of
literature . . . under assumed names, enabling groups to criticize the government without the threat of
persecution, and provid[ing] a way for a writer who may be personally unpopular to ensure that readers will not
prejudge her message simply because they do not like its proponent.356 As core political speech, it concluded,
[n]o form of speech is entitled to greater constitutional protection.357 Justice Stevens went on in his majority
opinion to tether anonymity to the purpose behind the Bill of Rights and the First Amendment: to protect unpopular
individuals from retaliationand their ideas from suppression at the hand of an intolerant society.358
Anonymity, he explained, is a shield from the tyranny of the majority.359 In a concurring opinion, Justice
Thomas commented that the Founders' practices and beliefs on the subject indicate[] that they believed the
freedom of the press to include the right to author anonymous political articles and pamphlets.360 That most
other Americans shared this understanding, he added, is reflected in the Federalists' hasty retreat before the
withering criticism of their assault on the liberty of the press.361 Justice Scalia dissented, arguing that anonymity
facilitates wrong by eliminating accountability, which is ordinarily [its] very purpose.362 To treat all anonymous
communication . . . in our society [as] traditionally sacrosanct, he continued, seems to me a distortion of the past
that will lead to a coarsening of the future.363 In Watchtower Bible & Tract Society of New York, Inc. v. Village of
Stratton,364 the Court struck down an ordinance requiring permits for door-to-door canvassing as a prior restraint
on speech but also because the law vitiated the possibility of anonymous speech.365 It characterized the permit
requirement as result[ing] in a surrender of . . . anonymity, even where circulators revealed their physical identities, because strangers to the resident certainly maintain their anonymity.366 The Court was thus
unmoved by the fact that speakers who ring doorbells necessarily make themselves physically known to their
audience, thus revealing themselves to some extent. For the Court, it was the recognition that occurs when a name
on a permit is connected to a face which triggered the Constitution's protection of anonymity. Most recently, a
fractured plurality in Doe v. Reed367 upheld a state law compelling public disclosure of the identities of referendum
petition signatories while squarely acknowledging the vitality of a First Amendment right to anonymous speech.368
Significantly, all but one Justice recognized that the government's ability to correlate identifying information with
online data created a First Amendment hazard of unprecedented dimension. Writing for the majority, Chief Justice
Roberts found that an individuals expression of a political view through a signature on a referendum petition
implicated a First Amendment right.369 The Court nonetheless held that the states interest in preserving the
integrity of the electoral process and informing the public about who supports a petition justified the burdens of
compelled disclosure.370 Justice Roberts made a point of deeming significant the plaintiffs' argument that, once
on the Internet, their names and addresses could be matched with other publicly available information about them
in what will effectively become a blueprint for harassment and intimidation.371 Because the majority only
considered the facial challenge to the law, Justice Roberts found the burdens imposed by typical referendum
petitions unlike those that the plaintiffs feared.372 Justice Alito wrote separately to emphasize that government
access to personal data online gave rise to a strong as-applied challenge based on the individual . . . right to
privacy of belief and association.373 He considered breathtaking the implications of the state's argument that it has an interest in providing information to the public about supporters of a referendum petition; if true, the State would be free to require petition signers to disclose all kinds of demographic information, including the signer's race, religion, political affiliation, sexual orientation, ethnic background, and interest-group memberships.374
Justice Alito added that the posting of names and addresses online could allow anyone with access to a computer
[to] compile a wealth of information about all of those persons, with vast potential for use in harassment.375
Justice Thomas dissented on similar grounds, asserting that he would sustain a facial challenge precisely because
[t]he advent of the Internet enables rapid dissemination of the information needed to threaten or harass every
referendum signer, thus chill[ing] protected First Amendment activity.376 Concurring separately, Justice Scalia
stood alone in his complete rejection of First Amendment protections for anonymous speech.377 When considered
in conjunction with the digital-age Fourth Amendment cases, Doe is remarkable in its recognition of the pressures
that modern technology puts on the viability of existing constitutional doctrine relating to individual privacy.
Although Jones addressed GPS monitoring under the Fourth Amendment, Justice Sotomayor invoked the First
Amendment to emphasize that [a]wareness that the Government may be watching chills associational and
expressive freedoms, and that the Government's unrestrained power to assemble data that reveal private aspects
of identity is susceptible to abuse.378 When inexpensive technology is paired with massive amounts of readily
accessible personal information and unfettered government discretion to track individual citizens, she explained,
4th amendment
It incentivizes officers to ignore probable cause
Ganeva, 11. Tana, AlterNet, Aug. 30. 5 unexpected places you can be tracked with facial recognition technology. JJZ
http://www.alternet.org/story/152231/5_unexpected_places_you_can_be_tracked_with_facial_recognition_technology
Earlier this summer Facebook rolled out facial recognition software that identifies users even when they appear in untagged photos. Like every other time the social networking site has introduced a creepy, invasive new feature, they
made it the default setting without telling anyone. Once people realized that Facebook was basically harvesting biometric data, the usual uproar over the site's relentless corrosion of privacy ensued. Germany even threatened to sue Facebook for violating German and EU data protection laws and a few other countries are investigating. But facial recognition technology is hardly confined to Facebook -- and unlike the social networking site, there's no "opt-out" of leaving your house. Post-9/11, many airports and a few cities rushed to install cameras hooked to facial recognition technology,
from milling crowds by matching their faces to biometric data in large databases. Many programs were abandoned a few years later, when it became clear they accomplished little beyond creeping people out. Boston's Logan Airport
scrapped face recognition surveillance after two separate tests showed only a 61.4 percent success rate. When the city of Tampa tried to keep tabs on revelers in the city's night-club district, the sophisticated technology was bested
by people wearing masks and flicking off the cameras. Human ingenuity aside, most facial recognition software could also be foiled by eyewear, a bad angle or somebody making a weird face. But nothing drives innovation like the
, moving from 2-d to 3-d scanning that can capture identifying information about faces even in profile. Another great leap forward, courtesy of Identex (now L-1 Identity Solutions, Inc.), combines
geometric face scanning and "skinprint" technology that maps pores, skin texture, scars and other identifying facial marks captured in high-resolution photos
Here are 5 places besides Facebook you might encounter face recognition and other
biometric technology -- not that, for the most part, you would know it if you did. 1. In the fall,
from 40 departments will hit the streets armed with the Mobile Offender Recognition and
Information System (MORIS) device. The gadget, which attaches to an iPhone,
of pictures, including, potentially, one of the largest collections of tagged photos in existence: Facebook.
, so no time for a
suspect to opt out of supplying law enforcement with a record of their biometric data. Lee Tien of the Electronic Frontier Foundation told AlterNet that while it's unclear how individual departments will use the technology, there are
. The second danger lurks in the creation and growth of personal information databases. Biometric information is basically worthless to law enforcement unless, for example, the pattern
of someone's iris can be run against a big database full of many people's irises.
Americans have been increasingly monitored with face-recognition technology (FRT). Though the technique remains crude, face-based surveillance is already used in airports, on city streets, and on crowds attending the big game to detect fugitives, teenage runaways, criminal suspects, or anyone who was ever arrested. As it spreads, FRT will be an unusually fraught topic for courts to address, because it straddles so many fault lines currently lying beneath our Fourth Amendment jurisprudence. These include whether: (1) people enjoy a reasonable expectation of anonymity in public, (2) a seizure can occur without halting a person's movement, (3) long-term aggregation of data about individuals can constitute a search, and (4) the probable-cause standard tolerates generalized surveillance with a high rate of false positives. These fault lines are not minor questions but fundamental challenges of the digital-surveillance movement. While most courts to address these issues have erred toward diminished Fourth Amendment protection, this Article cites an emerging minority that would reclaim
basic privacy rights currently threatened by electronic monitoring in public.
Chemical Co. v. United States were decided on the same day.88 The cases presented similar facts. In Ciraolo, police officers flew an airplane 1,000 feet over a suspect's fenced-off property and observed a small marijuana field.89 In Dow Chemical, EPA agents photographed the company's property from varying altitudes with a precision aerial mapping camera.90 Because the evidence gathering in both cases occurred from public airspace, the Court reasoned, any air traveler could have observed what the government agents did, had they bothered to look down.91 EPA's reliance on a sophisticated camera did not amount to a search, said the Court, because: (1) the camera was available for public use,92 and (2) the agents used the camera only to augment their natural sensory abilities.93 The first fact matters because, if
aerial mapping cameras are available in commerce, Dow could not have expected its land to be immune from the
technology.94 The second fact reflects the Court's view that, so long as a technology does not give police novel powers of perception (the ability to see through walls or hear private conversations95), sensory-enhancing tools are not offensive to public expectations.96
Based on the example of Dow, police are able to enhance their noses with drug-sniffing dogs97 and enhance their
eyes with telescopes and binoculars. 98 Police cannot, however, aim a heat-sensing camera at a suspects garage,
since this technique is uncomfortably analogous to looking through a wall into a private space.99 Still, as Justice
Powell admonished in his Dow dissent, the availability and sensory enhancement tests inevitably abrogate
public privacy as snooping technology becomes more pervasive.100 Linking surveillance cameras to FRT, then,
arguably only enhances the police's already-existing senses:
that others will not know the sound of his voice, any more than he can reasonably
expect that his face will be a mystery to the world.103 Though these cases were not
decided in the surveillance context and so would not bind an FRT dispute, they foreshadow the Court's
low-ebbing protection of facial privacy. Nevertheless, challengers to FRT should
engage the Harlan standard head-on by demonstrating that Americans reasonably
expect not to be identified in public by sophisticated algorithms. Indeed, the
Court has at times cast itself as a bulwark against novel technology that takes away
privacies we once took for granted.104 As evidence that people expect a degree of anonymity while
moving in public, civil libertarians could point to the popular outcries that often accompany a city's installation of face-recognizing cameras.105 Public reaction to Tampa Bay's use of FRT at the Super Bowl was overwhelmingly negative;106 the subsequent installation of FRT cameras in Tampa's nightlife district prompted vociferous protests, effectively ending the city's FRT experiment two years later.107 Courts may
respond that a person's outrage means nothing at the point at which surveillance technology meets the Dow test.
This argument, made by lower courts in other contexts, is that as long as people know a technology could
conceivably be used against them by strangers, the governments use of the technology is not a constitutional
issue.108 As articulated in one district opinion, The proper inquiry . . . is not what a random stranger would
actually or likely do [with surveillance technology], but rather what he feasibly could.109 Members of the public
could conceivably use an online FRT program such as Polar Rose to identify strangers on the street based on a
furtively snapped digital photo.110 Making such a scenario all the more plausible, Google is now building an
application that would locate a person's online Google Profile based on any photo of the person's face.111 Thus, like
suspects via their cell phone records without a warrant.112 The holding was despite the governments truthful
argument that a cell phone company could easily track any subscriber's movements by cataloguing the cell phone towers that received the subscriber's signal.113 Maynard reviewed the Court's important reasonable expectation
cases114 and concluded: In
the Maynard reasoning is for now the minority view,117 it reflects a broadly felt
instinct to reclaim the reasonable expectation test as a guardian of Fourth
Amendment rights in public spaces.118 Face-recognition challenges offer the
potential to push Maynard further into the mainstream.
Function Creep
Monitoring technology is susceptible to function creep
Brey, 04. P. (2004). 'Ethical Aspects of Face Recognition Systems in Public
Places,' Journal of Information, Communication & Ethics in Society, 2:2, 97-109. JJZ
http://www.utwente.nl/bms/wijsb/organization/brey/Publicaties_Brey/Brey_2004_Face-Recognition.pdf
A second, and more pressing, problem with facecams is the problem of function creep, an expression that I borrow from RAND report author John Woodward. Function creep is the phenomenon by which a technology designed for a
limited purpose may gain additional, unanticipated purposes or functions. This may occur either through institutionalized expansions of its purposes or through systematic abuse. In relation to Smart CCTV, it is the problem that,
is used
There are, I claim, four basic ways in which Smart CCTV can become the subject of function creep. The first is by widening of the database. The databases used in London, Birmingham, Tampa and Virginia Beach only included felons
, then
. Needless to say, some of these expansions, if they were to occur, would be morally highly problematic. The second way in which function creep may occur is by purpose widening. This is the
widening of the purpose for which the technology is used. For example, a police force using Smart CCTV may start using it not only to identify wanted individuals in crowds, but for example to do routine analysis of the composition of
police
departments may be tempted to use the technology for such additional purposes in
their efforts to fight crime
crowds in public places, or to do statistical analysis of faceprints for the purpose of predicting criminal activity, or to track individuals over longer distances. Smart CCTV has the potential to do these things, and
and improve the quality of life in neighborhoods. A third way for function creep to occur is by user shifts. Systems, once developed, may come to be used by
new types of users. For instance, the FBI or CIA may require access to a system used by a police department in a search for terrorists. Or a city government or commercial organization may ask a police department to use the system
for its demographic research. Also, individual operators may be using the system for their own personal reasons. As Reuters journalist Richard Meares reports, there have been several occurrences of CCTV operators being sacked
because of their repeated abuse of the system, for example by tracking and zooming in on attractive women.33 A fourth and final occurrence of function creep lies in domain shifts: changes in the type of area of situation in which
the system is used, such as changes from city neighborhoods to small villages or nature parks, or from public to private areas, or from domestic areas to war zones. Function creep in Smart CCTV may hence occur in several ways,
which may add up to result in new uses of the technology for new purposes by new users in new domains. Studies of technology use have shown that
(which is not currently in place), but cannot be wholly avoided. This imposes an obligation on the developers and users of the technology, therefore, to anticipate function creep and to take steps to prevent undesirable forms of function creep from occurring.
numbers of the innocent and guilty so the database can be mined during Amber
Alerts or for leads in cases.200 If police know that the databases MORIS uses could
be mined in other events, they may have an incentive to expand the databases by
taking photographs of persons without any level of suspicion for wrongdoing. And
although the Automated License Plate Recognition Program is legal, there is
something inherently more private about our faces than our license plates. Our
country has a long history of function creep of databases, which are created for one discrete purpose and which, despite the initial promises of their creators,
eventually take on new functions and purposes, said Barry Steinhardt, ACLU
associate director in 2000.201 For example, social security numbers that were
originally to be used for retirement purposes, are now also used to identify
individuals in a variety of settings.202 Many law enforcement agencies using MORIS
have vowed to only use the technology in certain circumstances. The Pinellas
County Sheriff's Office, in Florida, obtains consent before taking someone's
picture.203 The Brockton, Massachusetts, police department announced that it
would only use MORIS when actively searching for someone or when someone has
committed an offense.204 Likewise, the Pinal County Sheriff's Office said it will only
use FRT to identify people suspected of arrestable offenses or people from whom
the officers have obtained consent.205 However, these law enforcement agencies
could choose to expand the use of FRT beyond what they have set forth as their
limits. Some police departments have already demonstrated a willingness to use
stored pictures and information about license plates to follow gang members. The
Los Angeles Police Department wanted to use license plate information for more
purposes but had to limit its use due to public pushback.207 While it is a violation of
Pinellas County Sheriff's Office's guidelines to learn the identity of people without
consent, it would be acceptable under the Fourth Amendment. 208 Therefore,
MORIS's use creates a potential for function creep.
The status quo takes advantage of gray areas within the law
Steel, 11. Emily, Wall St. Journal, July 13. Device raises fear of facial profiling.
JJZ
http://www.wsj.com/articles/SB10001424052702303678704576440253307985070
Dozens of law-enforcement agencies from Massachusetts to Arizona are preparing to outfit their forces with controversial hand-held facial-recognition devices as soon as September,
raising significant questions about privacy and civil liberties. Police across the nation will soon be using facial-recognition devices that easily connect to an iPhone. Civil liberties groups
have warned that the technology could infringe on privacy rights. . With the device, which attaches to an iPhone, an officer can snap a picture of a face from up to five feet away, or scan
a person's irises from up to six inches away, and do an immediate search to see if there is a match with a database of people with criminal records. The gadget also collects fingerprints.
Until recently, this type of portable technology has mostly been limited to military uses, for instance to identify possible insurgents in Iraq or Afghanistan. The device isn't yet in police
hands, and the database isn't yet complete. Still, the arrival of the new gadgets, made by BI2 Technologies of Plymouth, Mass., is yet another sign that futuristic facial-recognition
technologies are becoming reality after a decade of false starts. The rollout has raised concerns among some privacy advocates about the potential for misuse. A fundamental question is whether or not using the device in certain ways would constitute a "search" that requires a warrant. Courts haven't decided the issue. It is
generally legal for anyone with a camera, including the police, to take pictures of people freely passing through a public space. (One exception: Some courts have limited video
surveillance of political protests, saying it violates demonstrators' First Amendment rights.) However, once a law-enforcement officer stops or detains someone, a different standard might apply, experts say. The Supreme Court has ruled that there must be "reasonable suspicion" to force individuals to be fingerprinted. Because face- and iris-recognition technology hasn't been put to a similar legal test, it remains "a gray area of the law,"
says Orin Kerr, a law professor at George Washington University with an expertise in search-and-seizure law. "A warrant might be required to force someone to open their eyes." BI2 says
it has agreements with about 40 agencies to deliver roughly 1,000 of the devices, which cost $3,000 apiece. Some law-enforcement officials believe the new gear could be an important
weapon against crime. "We are living in an age where a lot of people try to live under the radar and in the shadows and avoid law enforcement," says Sheriff Paul Babeu of Pinal County,
Ariz. He is equipping 75 deputies under his command with the device in the fall. Mr. Babeu says his deputies will start using the gadget try to identify people they stop who aren't
carrying other identification. (In Arizona, police can arrest people not carrying valid photo ID.) Mr. Babeu says it also will be used to verify the identity of people arrested for a crime,
potentially exposing the use of fake IDs and quickly determining a person's criminal history. Other police officials urge caution in using the device, which is known as Moris, for Mobile
Offender Recognition and Information System. Bill Johnson, executive director at the National Association of Police Organizations, a group of police unions and associations, says he is
concerned in particular that iris scanning, which must be done at close range and requires special technology, could be considered a "search." "Even technically if some law says you can
do it, it is not worth it -- it is just not the right thing to do," Mr. Johnson says, adding that developing guidelines for use of the technology is "a moral responsibility." Sheriff Joseph
McDonald Jr. of Plymouth County in Massachusetts, who tested early versions of the device and will get a handful of them in the fall, says he plans to tell his deputies not to use facial
recognition without reasonable suspicion. "Two hundred years of constitutional law isn't going away," he says. BI2 says it urges officers to use it only when they have reasonable
suspicion of criminal activity. "Sheriffs and law enforcement should not use this on anybody but suspected criminals," says Sean Mullin, BI2's chief executive. The Department of
Justice referred questions about the device to the Federal Bureau of Investigation, which didn't respond to a request for comment by late Tuesday. Facial-recognition technology is going
mainstream not just in police departments. Facebook Inc., the social-networking giant, recently rolled out facial-recognition technology to let its users more easily identify their friends in
photos. Several iPhone and Android apps claim, with varying success, to be able to use cellphone cameras to identify Facebook friends by snapping pictures of them. Middle Eastern
and European countries use iris scans to recognize travelers at airports and border crossings. Some U.S. troops carry hand-held devices to capture faces, eyes and fingerprints of "known
and suspected insurgents," according to Lt. Col. Thomas Pratt of the Defense Department's Biometric Identity Management Agency. The agency says more than 7,000 devices,
manufactured by L-1 Identity Solutions Inc. and Cross Match Technologies Inc., are being used in the field. Internet search giant Google Inc. also considered, but rejected, a project that
would have offered facial recognition on mobile phones. Google's technology would have let cellphone users take pictures of people, then conduct an image search on Google to find a
person with matching facial features. Google's chairman, Eric Schmidt, discussed the decision to shut down the project at a May conference. "I'm very concerned by the union of mobile
tracking and face recognition," he said. "My guess is in free societies, it will be regulated." A spokesman for Google says the company won't launch the facial recognition tools "unless we
have strong privacy protections in place." Face- and iris-recognition technologies are still a small portion, about 16%, of the $4.3 billion biometrics industry, which is dominated by
fingerprint technology, according to market research by New York-based International Biometric Group LLC. The technology has advanced greatly since a series of embarrassing setbacks
after the Sept. 11, 2001, terror attacks. In 2002, Boston's Logan International Airport tested facial-recognition software, but pulled the plug after cameras failed to recognize airport
employees whose photos were in the system. Since then, face-recognition technology has improved, and has been augmented to recognize irises, which are unique to individuals. BI2's
device attaches to the back of an iPhone, adding about 1.75 inches to its width. It plans to offer a version for Android phones in the future. The company says Moris will be sold only to
law-enforcement agencies, although it is considering building applications for the health-care and financial industries. The device links to a database of criminal records, iris and face
images contributed by local law enforcement that use other BI2 technologies. "The database is the golden nugget of the whole thing," says BI2's Mr. Mullin. The database includes face
and iris data collected primarily when people are admitted to or released from a correctional facility, Mr. Mullin says. Some states also are contributing mug shots to the database. BI2
says it doesn't sell the data, since it doesn't own it. The company hopes to eventually access additional data from larger state and federal databases, such as the FBI's registry of
fingerprints or the driver's-license photos from motor-vehicle departments. William Conlon, chief of police in Brockton, Mass., says he doesn't consider the mobile device to be an
invasion of privacy. "It is just a picture. If you are out in public, I can take a picture of anybody," says Mr. Conlon, whose police department tested a prototype last summer and is
planning to adopt the device. "Most people will say, 'I don't have anything to hide, go ahead.'"
2AC Stuff
Yes Topical
We meet: the plan curtails domestic surveillance.
Brown 14 Associate Professor of Law, University of Baltimore School
of Law. B.A., Cornell; J.D., University of Michigan (Kimberly N, ANONYMITY,
FACEPRINTS, AND THE CONSTITUTION, GEO. MASON L. REV. [VOL. 21:2, 2014,
http://georgemasonlawreview.org/wp-content/uploads/2014/03/Brown-Website.pdf)
NAR
national security benefits of using FRT in targeted criminal investigations
But its potential for enabling surveillance of common citizens is
troubling. Before FRT, driver's license photos were of limited utility to investigators unless a subject's name was already known.187 Law enforcement can now capture a facial image of an unknown individual without the subject's knowledge, match the image with other bits of data using FRT algorithms, and come up with a rich dossier of
personal information.188 Although fingerprint data similarly enables investigators to attach a name to unidentified
biometric data,189 FRT goes much further. Once a person is identified, rapid correlations with countless other
images and data points in cyberspace and self-contained databases can detect past activity and predict future
movements.190 Taken to its extreme,
sweeps of random subjects for relatively benign activities like walking a dog without a leash.191 A few states
have imposed legislative barriers to police collection of and access to FRT data, but many others have not.192
the dictionary definition of a disputed term cannot control. Chief Justice John Roberts, along with Justices Stephen
Breyer and Sonia Sotomayor joined Ginsburg's opinion, while Justices Antonin Scalia, Anthony Kennedy and Clarence Thomas joined Kagan's opinion. That's
reading federal laws that animates the King litigation. Only Justice Samuel Alito, who wrote a brief opinion
agreeing with the result in Ginsburgs opinion but not with its rationale, did not author or join an opinion that casts
doubt over the King plaintiffs' reading of the Affordable Care Act. None of this means, of course, that King will be an
8-1 decision upholding Obamacare. The last time the fate of the Affordable Care Act was before the Supreme Court,
Justice Scalia voted to repeal the entire law, even though he once authored an opinion that left little doubt that
Obamacare is constitutional. Nevertheless, the fact that the justices would hand down a decision like Yates just one
week before they hear oral arguments in King is a hopeful sign for the millions of people whose health care is
threatened by the legal attack on Obamacare.
A2: Real-ID
No link.
Tatelman 8 - Legislative Attorney American Law Division (Todd B, The
REAL ID Act of 2005: Legal, Regulatory, and Implementation Issues, CRS Report for
Congress, April 1, 2008) NAR
Contrary to the assertion of some REAL ID opponents,131 neither biometric
technology nor radio-frequency identification (RFID) is required by the regulations to be used
on REAL ID-compliant licenses or personal identification cards. Although these more
advanced technologies are not required, the machine-readable requirement does raise security
and personal privacy concerns that were addressed by DHS in the final rule. With respect to security of
the information contained on the bar code, many commentators suggested that DHS prohibit the collection and
storage of the data on the bar codes by third parties, specifically private businesses.132 DHS responded by noting
that although the underlying statute does not provide them with the legal authority to prohibit such data collection,
at least four states (California, Nebraska, New Hampshire, and Texas) currently have such provisions in place,
and DHS is supportive of additional state efforts in this regard.133
If you want to thump the DA, read this article and cut state-by-state
http://object.cato.org/sites/cato.org/files/pubs/pdf/pa749_web_1.pdf
2AC
N/U: existing lawsuits should have triggered the disad.
Sobel, 15. Ben, Washington Post, 6/11. Facial recognition technology is
everywhere. It may not be legal.
http://www.washingtonpost.com/blogs/the-switch/wp/2015/06/11/facial-recognitiontechnology-is-everywhere-it-may-not-be-legal/
Being anonymous in public might be a thing of the past. [Facial recognition technology is] already
being deployed [to scan] the face of every shopper, identify returning customers and offer them individualized pricing or find pre-identified shoplifters and known litigious individuals. Microsoft has patented a billboard that identifies you as you walk by
and serves ads personalized to your purchase history. An app called NameTag claims it can identify people on the street just by looking at them through Google Glass. Privacy advocates and representatives from companies like
Facebook and Google are meeting in Washington on Thursday to try to set rules for how companies should use this powerful technology. They may be forgetting that a good deal of it could already be illegal. There
are no federal laws that specifically govern the use of facial recognition technology. But
while few people know it, and even fewer are talking about it, both Illinois and Texas have laws against using such technology
to identify people without their informed consent. That means that one out of every eight Americans currently has a legal right to biometric privacy. The Illinois law is facing the most public test to date: a lawsuit filed in Illinois
trial court in April alleges Facebook violates the state's Biometric Information Privacy Act by taking users'
faceprints without even informing its users, let alone obtaining their informed written consent. This suit, Licata v. Facebook,
could reshape Facebook's practices for getting user consent, and may even influence the expansion of facial recognition technology. How common (and how accurate) is facial recognition technology? You may not be walking
by ads that address you by name, but odds are that your facial geometry is already being analyzed regularly. Law enforcement agencies deploy facial recognition technology in public and can identify someone by searching a
biometric database that contains information on as many as one-third of Americans. Companies like Facebook and Google routinely collect facial recognition data from their users, too.
(Facebook's system is on by default; Google's only works if you opt in to it.) Their technology may be even more accurate than the government's. Google's FaceNet algorithm can identify faces with 99.63 percent accuracy.
Facebook's algorithm, DeepFace, gets a 97.25 percent rating. The FBI, on the other hand, has roughly 85 percent accuracy in identifying potential matches, though, admittedly, the photographs it handles may be harder to analyze
than those used by the social networks. Facebook and Google use facial recognition to detect when a user appears in a photograph and to suggest that he or she be tagged. Facebook calls this Tag Suggestions and explains it as
follows: "We currently use facial recognition software that uses an algorithm to calculate a unique number (template) based on someone's facial features … This template is based on your profile pictures and photos you've been
tagged in on Facebook." Once it has built this template, Tag Suggestions analyzes photos uploaded by your friends to see if your face appears in them. If its algorithm detects your face, Facebook can encourage the uploader to tag
you. With the boom in personalized advertising technology, a facial recognition database of its users is likely very, very valuable to Facebook. The company hasn't disclosed the size of its faceprint repository, but it does acknowledge
that it has more than 250 billion user-uploaded photos, with 350 million more uploaded every day. The director of engineering at Facebook's AI research lab recently suggested that this information was the biggest human dataset
in the world. Eager to extract that value, Facebook signed users up by default when it introduced Tag Suggestions in 2011. This meant that Facebook calculated faceprints for every user who didn't take the steps to opt out. The Tag
Suggestions rollout prompted Sen. Al Franken (D-Minn.) to worry that Facebook may have created the world's largest privately held database of faceprints without the explicit consent of its users. Tag Suggestions was more
controversial in Europe, where Facebook committed to stop using facial identification technology after European regulators complained. The introduction of Tag Suggestions is what's at issue in the Illinois lawsuit. In Illinois,
companies have to inform users whenever biometric information is being collected, explain the purpose of the collection and disclose how long they'll keep the data. Once informed, users must provide written release that they
consent to the data collection. Only after receiving this written consent may companies obtain biometric information, including scans of facial geometry. Facebook declined to comment
on the lawsuit and has not filed a written response in court. It's unclear whether today's paradigm for consent (clicking a "Sign Up" button that attests you've read and agreed to a lengthy privacy policy) fulfills the requirements
written into the Illinois law. It's also unclear whether the statute will cover the Tag Suggestions data that Facebook derives from photographs. If the law does apply, Facebook could be on the hook for significant financial penalties.
This case is one of the first applications of the Illinois law to facial recognition, and it will set a hugely important precedent for consumer privacy. Why biometric privacy laws? Biometric information like face geometry is high-stakes
data because it encodes physical properties that are immutable, or at least very hard to conceal. Moreover, unlike other biometrics, faceprints are easy to collect remotely and surreptitiously by staking out a public place with a
decent camera. Anticipating the importance of this information, Texas passed a law in 2001 that restricts how commercial entities can collect, store, trade in and use biometric data. Illinois passed a similar law in 2008 called the
Biometric Information Privacy Act, or BIPA. A year later, Texas followed up with another law to further regulate biometric data in commerce. The Texas laws were passed with facial recognition in mind. Brian McCall, now chancellor of
the Texas State University system, introduced both Texas bills during his tenure as a state representative. "Legislation is seldom ahead of science, and in this case I felt it was absolutely necessary that legislation get ahead of
common practice," McCall explained. "And in fact, we were concerned about how the market would use personally identifiable information." Sean Cunningham, McCall's chief of staff, added the use of facial recognition by law
enforcement at the 2001 Super Bowl in Tampa helped bring the issue to their attention. However, it appears that the Texas statute has not been used very often to litigate the commercial collection of facial identification information.
On the other hand, the Illinois law was galvanized by a few high-profile incidents of in-state collection of fingerprint data. Most notably, a company called Pay By Touch had installed
machines in supermarkets across Illinois that allowed customers to pay by a fingerprint scan, which was linked to their bank and credit card information. Pay By Touch subsequently went bankrupt, and its liquidation prompted
concerns about what might happen to its database of biometric information. James Ferg-Cadima, a former attorney with the ACLU of Illinois who worked on drafting and lobbying for the BIPA, told me that the original vision of the bill
was "tied to the specific issue that was presenting itself across Illinois, and that was the deploying of thumbprint technologies." "Oddly enough," Ferg-Cadima added, "this was a bill where there was little voice from the private
business sector." This corporate indifference might be a thing of the past. Tech companies of all stripes have grown more and more interested in biometrics. They've become more politically powerful, too: For instance, Facebook's
federal lobbying expenditures grew from $207,878 in 2009 to $9,340,000 in 2014. Testing the Illinois law The crucial question here is whether the Illinois and Texas laws can be applied to today's most common uses of biometric
identifiers. What real-world business practices would meet the standard of informed consent that Illinois law requires for biometric data collection? When asked about the privacy law cited in the Licata case, Jay Edelson, the
managing partner of the firm representing the plaintiff, said, "The key thing to understand is that almost all privacy statutes are really consent statutes." The lawsuit stands to determine precisely what kind of consent the Illinois law
[requires] … and that its opt-out consent framework for Tag Suggestions violated the law …
if it is to happen at all. Either way, there's a chance this lawsuit will end up shaping the future of facial recognition technology.
… Industry associations have staked out a position that is less protective of privacy than the companies they represent -- and far less protective of
what consumers deserve." If the NTIA process goes forward without privacy and consumer groups, that will raise questions about the product, Bedoya added. "If all consumer groups who have been active withdraw, I don't think it
can be called a 'multistakeholder' process," he said. "It can be called an 'industry' stakeholder process." Still, … said Monday he remained optimistic that the NTIA process would produce a strong set of facial recognition privacy standards. Despite disagreements about the consent issues, participants have made a lot of progress, said Carl Szabo, policy counsel with NetChoice, an e-commerce trade group. "We're
getting to a point when we can start putting pen to paper," he said. The final standards need to incorporate compromise from both industry and privacy groups, Szabo added. All the new privacy standards being negotiated are
"actually limiting on business, in some capacity," he said. Since mid-2012, the NTIA has convened for a series of negotiations related to technology and privacy, with the first meetings focused on mobile application privacy. The NTIA-led discussions produced a set of app privacy standards that some companies are now adopting, although two privacy groups declined to sign on to the final product. In March, the NTIA announced it would next host negotiations on
privacy standards for aerial drones.
1AR
Turn- industry supports regulations on facial recognition.
Chayka, 14. Kyle, Newsweek, 4/25. Biometric Surveillance Means Someone Is
Always Watching. JJZ
http://www.newsweek.com/2014/04/25/biometric-surveillance-means-someonealways-watching-248161.html
In the private sector, efforts are being made to ensure face recognition isn't abused,
but standards are similarly vague. A 2012 Federal Trade Commission report recommends that companies should obtain "affirmative express consent before collecting or using biometric
data from facial images." Facebook collects face-prints by default, but users can opt out of having their face-prints collected. Technology entrepreneurs argue that passing strict laws
before face recognition technology matures will hamper its growth. "What I'm worried about is policies being made inappropriately before their time," Animetrics's Schuepp says. "I don't
A2: Terror DA
N/U: Surveillance is proving to be less effective against terrorists
Economist Jan 17th 2015 | From the print edition
http://www.economist.com/news/briefing/21639538-western-security-agencies-arelosing-capabilities-they-used-count-getting-harder
ONCE the shock that a terrorist outrage generates begins to fade,
questions start to be asked about whether the security services could
have done better in preventing it. Nearly all the perpetrators of recent
attacks in the West were people the security services of their various
countries already knew about. The Kouachi brothers and Amedy Coulibaly were
no exception; the Direction Générale de la Sécurité Intérieure (DGSI), France's
internal security agency, and the police knew them to be radicalised and potentially
dangerous. Yet their plot or plots, which probably involved more people and may
have been triggered either by al-Qaeda in Yemen or the so-called Islamic State (IS)
in Syria, went undiscovered. There may have been a blunder, and there will
undoubtedly be lessons to be learned, just as there were in Britain after the 2013
murder of Fusilier Lee Rigby by Michael Adebolajo and Michael Adebowale, both of
whom had featured in several prior operations by MI5, the internal-security agency.
But it is worth reflecting on the extent to which Western security agencies have
succeeded in keeping their countries safe in the 13 years since September 2001.
And it is worth noting that their job looks set to get harder. Europe has suffered
many Islamist terrorist attacks in recent years, but before the assault on Charlie
Hebdo, only two of them caused more than ten deaths: the Madrid train attack in
March 2004 and the London tube and bus bombings 16 months later (see chart). This
was not for want of trying; intelligence sources say they have been thwarting
several big plots a year. Sometimes this has meant arresting the people involved:
more than 140 people have been convicted of terrorism-related offences in Britain
since 2010. But often plots have been disrupted in order to protect the public before
the authorities have enough evidence to bring charges. Three factors threaten this
broadly reassuring success. The first is the break-up of states in the Middle East.
The civil wars in Libya, Yemen and Syria mean there is a much broader
range of places and groups from which threats can come than there was
five years ago. And there has never previously been anything remotely on
the same scale as IS in terms of financial resources, number of fighters,
territory controlled, sophistication in its use of media and ability to
radicalise young Muslims in the West. Andrew Parker, the head of MI5, says
that since October 2013 there have been more than 20 plots either directed or
provoked by extremist groups in Syria. In September 2014 Abu Mohammed al-Adnani,
an IS leader, told would-be recruits not to bother coming to Syria or Iraq but
to launch attacks in their home countries. Attempts to reduce the risks posed by
fighters who join the wars in the Middle East and then return to Europe range from
employment programmes (in Denmark) to banning their return unless they agree to
be monitored and tagged (in Britain). But the sheer number of those returning
makes it almost impossible to guarantee that all will be defanged. A second
problem for the security forces is that the nature of terrorist attacks has
changed. Al-Qaeda, and in particular its Yemeni offshoot al-Qaeda in the
Arabian Peninsula, is still keen on complex plots involving explosions and
airliners. But others prefer to use fewer people, as in commando-style
raids such as the one on Charlie Hebdo and lone-wolf attacks that are
not linked to any organisation. IS has called for attacks on soft targets in the
West by any means available; one method is to drive a car at pedestrians, as in
Dijon on December 21st last year. At any one time MI5 and DGSI will each be
keeping an eye on around 3,000 people who range from fairly low-priority targets
(people who hold extremist views that they may or may not one day want to put into
practice) through those who have attended training camps or been involved in
terrorist activity in the past to those who are thought likely to be actively plotting an
attack. But only a small number at the top are subjected to resource-intensive
surveillance. The amount of monitoring available for the others, particularly those
towards the bottom, varies widely. This provides holes for smaller plots to get
through. And a smaller plot can still be large in its outrage (see the decapitation of
Fusilier Rigby) and in its body count. Anders Breivik killed 77 Norwegians in 2011
with no co-conspirators at all. Even when there are identified co-conspirators,
though, it is getting harder to tell what they might be up to. This is
because of the third factor that is worrying the heads of Western security
agencies: the increasing difficulty they say they have in monitoring the
communications within terrorist networks. The explosion of often-encrypted new means of communication, from Skype to gaming forums to
WhatsApp, has made surveillance far more technically demanding and in
some instances close to impossible. Apple's latest mobile operating system
comes with default encryption and Google's Android is about to follow suit. In
such systems the companies do not have access to their customers'
passwords and therefore cannot provide security agencies access to
messages even if the law requires them to. They say that they are simply
responding to the demands of their users for privacy, but the heads of the security
agencies see the new approach as, at least in part, a response to what Edward
Snowden, a contractor for America's intelligence services, revealed about their
abilities in 2013. The tech firms are very different from the once-publicly owned
telephone companies that spooks used to work with, which were always happy to
help with a wire tap when asked. Some, especially some of the smaller ones, have a
strong libertarian distrust of government. And technology tends to move faster than
legislation. Although the security agencies may have ways into some of the new
systems, others will stymie them from the modern equivalent of steaming open
envelopes. The citizens of the West have grown used to the idea that their security
services can protect them from the worst that might happen. Faced by a new range
of threats and with countermeasures apparently of rapidly declining effectiveness,
that may be about to change.
is computer-assisted passenger screening (CAPS), which was first introduced by a number of American airlines in 1998. CAPS uses information from the reservation system, and a
passenger's prior travel history, to select passengers for additional security procedures. It has been fiercely criticised by civil-liberties campaigners who accuse it of picking on members
of particular ethnic groups or nationalities. Besides, terrorists expect to be questioned at check-in, says Mr Taylor. He suggests that CCTV surveillance should be extended to cover
passengers away from areas where they expect to be observed. Some of last week's hijackers were reported to have had an argument in the car park at Boston airport. But Mr Taylor
admits that this process could not be automated. There would also be privacy implications, plus the usual accusations of bias. On autopilot into the future If spotting terrorists on the
ground is so hard, what can be done to make aircraft harder to hijack in the air? Again, there has been no shortage of suggestions. Robert Ayling, a former boss of British Airways,
suggested in the Financial Times this week that aircraft could be commandeered from the ground and controlled remotely in the event of a hijack. The problem with this, says Mr Taylor,
is that remote-control systems might themselves open aircraft up to hijacking by malicious computer hackers. He suggests instead that automated landing systems should be modified
so that, in the event of a hijack, the pilot could order his aircraft to land itself, with no option to cancel the command. Another idea is that existing collision-avoidance and terrain-avoidance systems could be modified to prevent aircraft from being crashed deliberately. But such proposals, says Chris Yates, an aviation-security expert at Jane's Defence Weekly,
belong in the realms of science fiction. (Mr Yates advocates simpler, low-tech fixes, such as doing away with curbside and city-centre check-ins, and allowing only passengers to have
"false negatives," missing people not in the database. "Facial-recognition software is easily tripped up
by changes in hairstyle or facial hair, by aging, weight gain or loss, and by simple
disguises," the ACLU report said. "That suggests, if installed in airports, these systems would miss a high
proportion of suspects included in the photo database, and flag huge numbers of
innocent people--thereby lessening vigilance, wasting precious manpower resources, and creating a false sense of security."
A2: Courts CP
Legislative action is the only way to uphold constitutional
protections of privacy
Lynch, 12. Jennifer, Attorney for the Electronic Frontier Foundation, July 18. What Facial
Recognition Technology Means for Privacy and Civil Liberties. Presentation to the
Senate Committee on the Judiciary. JJZ
https://www.eff.org/files/filenode/jenniferlynch_eff-senate-testimonyface_recognition.pdf
Face recognition allows for covert, remote and mass capture and identification of images, and the photos that may
end up in a database include not just a person's face but also how she is dressed and possibly whom she is with. This creates threats to free association and free expression not evident
Act9 and the Video Privacy Protection Act10 as models for this legislation. Both were passed in direct response to privacy threats posed by new technologies and each includes
meaningful limits and protections to guard against over-collection, retention and misuse of data. My testimony will discuss some of the larger current and proposed facial recognition
collection programs and their implications for privacy and civil liberties in a democratic society. It will also review some of the laws that may govern biometrics collection and will outline
best practices for developing effective and responsible biometrics programsand legislation to regulate those programsin the future.
A2: PTX
Their entire understanding of the politics disad is uneducational
and wrong - the plan would not emerge from Obama spending
political capital, but rather by riding the coat-tails of an
exigency which reformulates acceptable standards of privacy
and government intrusion.
Ni & Ho 8 - *graduated from the doctoral program in public
administration, assistant professor at California State University, San
Bernardino, ** associate professor in the School of Public and
Environmental Affairs at Indiana University Purdue University
Indianapolis. (Anna Ya, Alfred Tat-Kei, A Quiet Revolution or a Flashy Blip? The
Real ID Act and U.S. National Identification System Reforms,
http://www.jstor.org/stable/pdf/25145704.pdf, Published by: Wiley on behalf of the
American Society for Public Administration) NAR
There is a rich body of literature on how policies are formulated. Many studies of the policymaking process in modern democracies reveal that policies are usually not made based on
economic rationality and often do not reflect a clear relationship between problems,
goals, and policy solutions (Cohen, March, and Olsen 1972; Kingdon 2003). Rather, policies are the result
of dynamic and fluid interactions among interest groups and parties that try to shape the policy agenda in the
legislative process, capture public debate and media attention on certain issues, and engage each other in
persuasion and interest exchanges to establish a political equilibrium (Baumgartner and Jones 1993; Downs 1972;
Kelman 1987; Lindblom 1977; Majone 1989). Such equilibrium is akin to Theodore Lowi's (1969) "iron triangles," in
which interest groups, bureaucrats, and Congress dominate a particular policy area. Over time, these actors will
accommodate each other in the policy subsystem (Griffith 1939), resisting new ideas and outside pressures and
trying to maintain the status quo in order to protect their mutually compromised interests. Nevertheless, drastic
policy changes do happen, especially in response to system shocks caused by pivotal
political events that are marked by urgent peril, intense threat, and massive horror
(Kingdon 2003; Lewis 2006). Such events force entrenched policy subsystems to
restructure and generate sufficient political support and attention to
leverage an alteration of views in the subsystem (Baumgartner and Jones 1993; Wood
2006). For example, a major education policy shift in the 1960s was triggered by the civil
rights movement and was successfully advocated by reformers who took
advantage of the national mood of "equality and justice for all."28 The development of
the U.S. identification system over the past few years seems to follow this pattern. Prior to the 2000s, many
attempts had been made to change the system and to introduce a national ID to meet different policy needs (see
figure l).29 For example, in the 1970s and the early 1980s, illegal immigration and employ ment status verification
were the key concerns of policy makers. In the 1990s, e-commerce and identity thefts were the new concerns.30
Nonetheless, none of these issues was sufficient to shake up the political equilibrium built by privacy advocacy
unprecedented number of congressional hearings were held in the aftermath of 9/11 to examine the funda mental
limits of the system (see figure 1). Nonetheless, it should be pointed out that even the 9/11 crisis did not seem to
be sufficient to break the old powerful subsystems that opposed the idea of creating a national ID. Despite the
surge of congressional hearings in 2002-4, no legislation on national ID could be passed immediately following the
… capital to realign the established interests. The development of the Real ID Act also affirms why a major policy
shift often takes decades to build up momentum in a democratic system. Policy
options have to be sorted out, debated, evaluated, and examined critically by
diverse groups, and all these need time. The translation from an idea to a policy
can be painfully slow, but rushing it through the political system can actually be
counterproductive and may risk policy backpedaling later. Regardless of the future outcome of
the legislative and legal challenges, the way in which the Real ID Act was passed by Congress causes us to rethink
the ethical responsibilities of policy makers and public administrators in times of national crises.
When crises
occur, many checks and balances in the political system can be easily weakened or
put on hold. Politicians can take advantage of the situation and the public sentiment
to introduce controversial policy changes. As shown in World War II and in the modern experiences
of many developing countries that are headed by authoritarian regimes today, the failure of political leaders to
uphold the core democratic values can have serious consequences for the security and stability of a country. We
believe that policy makers and public administrators need to uphold the principles of accountability and checks and
balances of power even in times of crises. They have an ethical responsibility to safeguard the democratic process,
to help the press and the public to fully understand the implications of any policy change within the legal
framework, and to ensure that there is sufficient public discourse to reflect the public good and to protect the
pluralistic interests of society (Rohr 1989). Otherwise, a nation may be merely trading many fundamental
democratic values for a false sense of security.
A2: Neolib
Their critique of the privatization of surveillance is too
sweeping: the government will continue to be a PRODUCER,
not a consumer, of genetic surveillance.
*Genetic surveillance-industrial complex
private firms are integral to the continued expansion of these databases. Large
firms, such as Bode Technology and Orchid Cellmark, view local law enforcement databases as
potential revenue streams, particularly because they promise to promote the use of
DNA beyond violent crimes (sexual assaults and homicides) to property crimes.159 These
firms see a business opportunity in processing the evidence swabs collected from
property crimes. Indeed, in marketing their products, they trumpet the studies that
have highlighted DNA's promise for solving these crimes.160 Similarly, smaller firms have also
sought to benefit from and to drive the expansion of local databases. These include SmallPond and IntegenX.161
These companies have been consistent participants in law enforcement conferences in the last several years,162
and they have sought meetings with local agencies to pitch their products. Furthermore, IntegenX offers to help
potential buyers secure grants to purchase its products.163 The influence of private firms on policing techniques is
not new and is certainly not unique to genetic surveillance.164 However, it is important to recognize that these
private interests will influence the expansion, use, and long-term viability of this surveillance tool. And because
these private interests have evolved simultaneously with local law enforcement's
push to enter the genetic surveillance space, the prospect of a genetic
surveillance-industrial complex further entrenching the practice of local
databases seems likely. Finally, the very use of these databases will also contribute
to the public's acceptance of them. Even those with only a casual understanding of
surveillance techniques accept without question law enforcement's ability to collect
personal information (including photographs, fingerprints, addresses, etc.) for
investigative databases. Furthermore, because CODIS has been around for 20 years, there is widespread
understanding that law enforcement collects DNA profiles from at least some segments of the population. Thus,
local databases are not a completely new surveillance tool. This incremental evolution of law enforcement
investigative databases in general, and DNA databases in particular, will help to solidify local databases as a
tolerated, if not accepted, law enforcement tool.165
Perm do both.
Use of state-based technologies to monitor and control
relationships reinforces neoliberalism.
Stroo, 13. Sara, School of Journalism and Communication and the Graduate
School of the University of Oregon, 6/13. JJZ
https://scholarsbank.uoregon.edu/xmlui/bitstream/handle/1794/13316/Stroo_oregon
_0171N_10750.pdf?sequence=1
From this, one can easily see why neoliberal governmentality is all too frequently presented as the retreat of the state. These feeds come from
equipment which was installed and maintained by a private company but funded by grants from the state of Texas, and the responsibility for sovereign
security and border patrol is made the purview of the Virtual Deputies. I want to argue, however, that this example will actually show that it is much more
productive to ideate neoliberalism as a transformation of politics. Neoliberalism is not a retreat, it is a restructuring of power relations away from the
formal techniques of the state, toward informal techniques of power which are introduced and overseen by new, non-governmental actors. As Wendy
Brown writes in her seminal essay on neoliberalism: "We are not simply in the throes of a right-wing or conservative positioning within liberal democracy
but rather at the threshold of a different political formation, a formation made possible by the production of citizens as individual entrepreneurial actors
across all dimensions of their lives, reduction of civil society to a domain for exercising this entrepreneurship, and figuration of the state as a firm whose
products are rational individual subjects, an expanding economy, national security, and global power."12 In practice, then, the neoliberal
turn is marked by intense deregulation of the marketplace, decreased welfare spending, privatization of services, financialization of wealth,
and tax cuts for the very rich. In such a regime the role of government is reduced mainly to deregulating markets and acting as the lender of last resort
to mitigate the risk of this increasingly financialized market.13 According to Sim and Coleman, "[n]eo-liberal conditions [trend] towards multiple centres …
economic enterprise to the entire social realm," a dynamic clearly at play in the creation and use of BlueServo. Hamman concluded from this state of …
being off work becomes a question of demonstrating self-discipline. John Carey called the creation of the Sabbath a form of resistance to state and
market powers; it was a time to rest, to reconnect with family and community and the self under the moral auspice of piety.16 Neoliberalism eliminates a
day of rest by shifting the moral center away from worship of an omnipotent [God] and toward worship of production. In this way, the loss of leisure is not seen
as an imposition, but takes on the sort of righteousness that honoring the Sabbath used to hold. The neoliberal discipline taps into a sense of moral [fitness] …
As a result, we see a rise of predictive or actuarial control, which is marked by a shift away from normativity and individual treatment and toward technicality
and classificatory management.18 Where this fitness is most often measured is in the reaction and mitigation of risk. Risk, as identified by Ulrich Beck and
Anthony Giddens in the early 1990s, is a mode of decision-making based on the possibility of future dangers and negative effects.19 Risk is not defined by
ignorance of threat, but rather by positive knowledge of its existence; to be capable of best averting risk is to learn to see risk everywhere. "Risks are the
reflection of human actions and omissions, the expression of highly developed productive forces. That means that the sources of danger are no longer
ignorance but knowledge."20 This conception of risk shares with Foucault's notion of governmentality a preoccupation with developing control strategies …
categories or classes of people become potential risks and objects of control. Threats to society are no longer seen as an action committed by homo
penalis, the law-breaker, but as embodied and theoretically identifiable in homo criminalis, the criminal person.23 Through the introduction of this type, …
management: the first is the expansion of information systems, and the second is reliance on computerized technologies.25 Homo economicus can be
seen as a foil to homo criminalis. If one wishes to remain part of the social body, one must fall into
a category which is deemed worthy of inclusion. According to Beck, "Even outside of work, industrial society is a
wage labor society through and through in the plan of its life, in its joys and sorrows, in its concept of achievement, in its justification of inequality, in its
social welfare laws, in its balance of power and in its politics and culture."26 As Foucault has noted, this new mechanism of power is notable, then, in that it
permits extraction of time and immaterial labor from bodies as proof of worthiness, rather than tangible wealth and commodities as proof of capital
productivity.27 In such a regime the notion of a worker undergoes radical redefinition: "The primary economic image offered to the modern citizen is not
that of the producer but of the consumer … The worker is portrayed neither as an economic actor, rationally pursuing financial advantage, nor as a social
creature seeking satisfaction of needs for solidarity and security. The worker is an individual in search of meaning, responsibility, a sense of personal
achievement, a maximized quality of life, and hence of work."28 The neoliberal subject produces more than things; he produces himself. Homo
economicus is above all an entrepreneur of the self.29
request for surveillance measures has been expressed by citizens in local contexts, e.g., to control street crime in urban environments. A paradox of surveillance is that surveillance
techniques such as census and civil registration were developed as means of granting civil rights and, at the same time, serve as potential means for states to gain informational power
over citizens. This paradoxical character is retained with globalisation (Lyon, 2004), where states as
well as corporations boast technologies such as satellite tracking stations or supercomputer filtering devices and have access to international flows of personal data.
Such developments may be driven by economic purposes rather than surveillance
ones; however, they also produce unprecedented surveillance capacities by public
and private actors on a global scale. Such a paradox on a global scale is well illustrated by the positive connotations of notions such as
"information society," "global village," and "Internet democracy" versus the concerns for security, both with regard to the security of IT itself (e.g., in relation to cyber-crime) and with regard
to their security applications and related implications for civil liberties and fundamental rights. Last but not least, dual-use (civilian and military) IT applications, including
identification devices, raise specific questions with regard to the possibility or desirability of keeping different functions distinct (e.g., identification in relation to migration control or
in relation to military intelligence gathering) and of allowing for democratic oversight. Issues of dual-use (or, as also frequently referred to, multi-functionality) have been recently addressed in
EU-level research policy, which used to be civilian only.7 These issues cannot be pursued in depth in this article, but they point to science and technology itself as a specific field
blurring internal and external dimensions of security, and the intermingling roles of public and private sectors in the definition of security issues and options.8 Another paradoxical
aspect of surveillance technologies is that they may mitigate as well as reinforce fears. Someone working with them may feel more in control, and they may enhance perception of
being safer in controlled areas; at the same time, they may induce suspicion and fears in subjects of surveillance, who may wonder why they are screened ("Why do I have to provide
my fingerprints? I am not a criminal.") or whether data could be manipulated and misused. Surveillance evokes uncertainties, risks, threats and the related questions of how significant
these are, whether and in how far they can be prevented, at what costs. As noted by Frank Furedi (2002), stressing fears may lead to an obsession with theoretical risks and the
unintended effect of distracting from some of the daily ones; surveillance technologies may contribute to this process by amplifying perceptions that something bad could happen,
while, as mentioned above, they may also provide a sense of enhanced control.
A2: Heidegger
We're a negative state action; the status quo uses these
technologies to monitor and control the relationships we have.
Stroo, 13. Sara, School of Journalism and Communication and the Graduate
School of the University of Oregon, 6/13. JJZ
https://scholarsbank.uoregon.edu/xmlui/bitstream/handle/1794/13316/Stroo_oregon
_0171N_10750.pdf?sequence=1
From this, one can easily see why neoliberal governmentality is all too frequently presented as the retreat of the state. These feeds come from
equipment which was installed and maintained by a private company but funded by grants from the state of Texas, and the responsibility for sovereign
security and border patrol is made the purview of the "Virtual Deputies." I want to argue, however, that this example will actually show that it is much more
productive to ideate neoliberalism as a transformation of politics. Neoliberalism is not a retreat; it is a restructuring of power relations away from the
formal techniques of the state, toward informal techniques of power which are introduced and overseen by new, non-governmental actors. As Wendy
Brown writes in her seminal essay on neoliberalism: "We are not simply in the throes of a right-wing or conservative positioning within liberal democracy
but rather at the threshold of a different political formation, a formation made possible by the production of citizens as individual entrepreneurial actors
across all dimensions of their lives, reduction of civil society to a domain for exercising this entrepreneurship, and figuration of the state as a firm whose
products are rational individual subjects, an expanding economy, national security, and global power."12 In practice, then, the neoliberal
turn is marked by intense deregulation of the marketplace, decreased welfare spending, privatization of services, financialization of wealth,
and tax cuts for the very rich. In such a regime the role of government is reduced mainly to deregulating markets and acting as the lender of last resort
to mitigate the risk of this increasingly financialized market.13 According to Sim and Coleman, "[n]eo-liberal conditions [trend] towards multiple centres …
economic enterprise to the entire social realm," a dynamic clearly at play in the creation and use of BlueServo. Hamman concluded from this state of …
being off work becomes a question of demonstrating self-discipline. John Carey called the creation of the Sabbath a form of resistance to state and
market powers; it was a time to rest, to reconnect with family and community and the self under the moral auspice of piety.16 Neoliberalism eliminates a
day of rest by shifting the moral center away from worship of an omnipotent [God] and toward worship of production. In this way, the loss of leisure is not seen
as an imposition, but takes on the sort of righteousness that honoring the Sabbath used to hold. The neoliberal discipline taps into a sense of moral [fitness] …
As a result, we see a rise of predictive or actuarial control, which is marked by a shift away from normativity and individual treatment and toward technicality
and classificatory management.18 Where this fitness is most often measured is in the reaction and mitigation of risk. Risk, as identified by Ulrich Beck and
Anthony Giddens in the early 1990s, is a mode of decision-making based on the possibility of future dangers and negative effects.19 Risk is not defined by
ignorance of threat, but rather by positive knowledge of its existence; to be capable of best averting risk is to learn to see risk everywhere. "Risks are the
reflection of human actions and omissions, the expression of highly developed productive forces. That means that the sources of danger are no longer
ignorance but knowledge."20 This conception of risk shares with Foucault's notion of governmentality a preoccupation with developing control strategies …
categories or classes of people become potential risks and objects of control. Threats to society are no longer seen as an action committed by homo
penalis, the law-breaker, but as embodied and theoretically identifiable in homo criminalis, the criminal person.23 Through the introduction of this type, …
management: the first is the expansion of information systems, and the second is reliance on computerized technologies.25 Homo economicus can be
seen as a foil to homo criminalis. If one wishes to remain part of the social body, one must fall into
a category which is deemed worthy of inclusion. According to Beck, "Even outside of work, industrial society is a
wage labor society through and through in the plan of its life, in its joys and sorrows, in its concept of achievement, in its justification of inequality, in its
social welfare laws, in its balance of power and in its politics and culture."26 As Foucault has noted, this new mechanism of power is notable, then, in that it
permits extraction of time and immaterial labor from bodies as proof of worthiness, rather than tangible wealth and commodities as proof of capital
productivity.27 In such a regime the notion of a worker undergoes radical redefinition: "The primary economic image offered to the modern citizen is not
that of the producer but of the consumer … The worker is portrayed neither as an economic actor, rationally pursuing financial advantage, nor as a social
creature seeking satisfaction of needs for solidarity and security. The worker is an individual in search of meaning, responsibility, a sense of personal
achievement, a maximized quality of life, and hence of work."28 The neoliberal subject produces more than things; he produces himself. Homo
economicus is above all an entrepreneur of the self.29
A2: K
The aff functions as a negative state action: exposing social
control and the politics of facial recognition
Introna, 05. Lucas, Center for the Study of Technology and Organisation,
Lancaster University Management School, Lancaster, LA1 4YX, UK. Disclosive
ethics and information technology: disclosing facial recognition systems. JJZ
http://download.springer.com/static/pdf/187/art%253A10.1007%252Fs10676-0054583-2.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Farticle
%2F10.1007%2Fs10676-005-4583-2&token2=exp=1435154780~acl=%2Fstatic
%2Fpdf%2F187%2Fart%25253A10.1007%25252Fs10676-005-4583-2.pdf
%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Farticle
%252F10.1007%252Fs10676-005-45832*~hmac=d6beb5349104917c9211b8a938d7de486b38fb07b4d5abc23c044c64265
ce2bf
Facial recognition
algorithms, which we will discuss below, are a particularly good example of an opaque technology. The facial recognition capability can be embedded into existing CCTV networks, making
it a non-intrusive, contact-free process (Woodward et al. 2003: 7). Its application is flexible. It can as easily be used by a supermarket to monitor potential shoplifters (as was proposed
and later abandoned, by the Borders bookstore), by casinos to track potential fraudsters, by law enforcement to monitor spectators at a Super Bowl match (as was done in Tampa, Florida), or used for identifying terrorists at airports
(as is currently in operation at various US airports). However, most important of all is the obscurity of its operation. Most of the software algorithms at the heart of facial recognition systems (and other information technology
products) are propriety software objects. Thus, it is very difficult to get access to them for inspection and scrutiny. More specifically, however, even if you can go through the code line by line, it is impossible to inspect that code in
operation, as it becomes implemented through multiple layers of translation for its execution. At the most basic level we have electric currents flowing through silicon chips, at the highest level we have programme instructions, yet it
is almost impossible to trace the connection between these as it is being executed. Thus, it is virtually impossible to know if the code you inspected is the code being executed, when executed. In short, software algorithms are
… (Graham and Wood 2003). Thus, a profound sort of micro-politics can emerge as these opaque (closed) algorithms become enclosed in the social-
technical infrastructure of everyday life. We tend to have extensive community consultation and impact studies when we build a new motorway. However, we tend not to do this when we install CCTV in public places or when we
install facial recognition systems in public spaces such as airports, shopping malls, etc. To put it simply: most informed people tend to understand the cost (economic, personal, social, environmental) of more transparent
technologies such as a motorway, or a motorcar, or maybe even cloning. However, we would argue that they do not often understand the cost of the more opaque information technologies that increasingly pervade our everyday
life. We will aim to disclose this in the case of facial recognition systems below. Before we do this we want to give an account of what we mean by the disclosure of disclosive ethics. Ethics is always and already the other side of
politics (Critchley 1999). When we use the term politics (with a small p) as indicated above we refer to the actual operation of power in serving or enclosing particular interests, and not others. For politics
to function as politics it seeks closure, one could say enrolment, in actor network theory language. Decisions (and technologies) need to be made and programmes (and technologies) need to be implemented. Without closure politics cannot be effective as a
programme of action and change. Obviously, if the interests of the many are included in the enclosure, as it were, then we might say that it is a good politics (such as democracy). If the interests of only a few are included we
might say it is a bad politics (such as totalitarianism). Nevertheless, all political events of enclosing are violent as they always include and exclude as their condition of operation. It is the excluded, the other on the outside as it
were, that is the concern of ethics. Thus, every political action has, always and immediately, tied to its very operation an ethical question or concern; it is the other side of politics. When making this claim it is clear that for us ethics
(with a small e) is not ethical theory or moral reasoning about how we ought live (Caputo 1993). It is rather the question of the actual operation of closure in which the interests of some become excluded as an implicit part of the
material operation of power in plans, programmes, technologies and the like. More particularly, we are concerned with the way in which the interest of some become excluded through the operation of closure as an implicit and
essential part of the design of information technology and its operation in socialtechnical networks. As those concerned with ethics, we can see the operation of this closure or enclosure in many related ways. We can see it
operating as already closed from the start where the voices (or interests) of some are shut out from the design process and use context from the start. We can also see it as an ongoing operation of closing where the possibility
for suggesting or requesting alternatives are progressively excluded. We can also see it as an ongoing operation of enclosing where the design decisions become progressively black-boxed so as to be inaccessible for further
scrutiny. And finally, we can see it as enclosed in as much as the artefacts become subsumed into larger socio-technical networks from which it becomes difficult to unentangle or scrutinise. Fundamental to all these senses of
… Agendas cannot be kept open forever; designs cannot be discussed and considered indefinitely. Thus, closure is a pragmatic condition for life. Equally, we are not arguing that the question of ethics can, and ought to be, divorced from
politics. The concern of ethics is always and already also a political concern. To choose, propose or argue for certain values such as justice, autonomy,
democracy and privacy, as suggested by Brey (2000), is already a political act of closure. We may all agree with these values as they might seem to serve our interests, or not. Nevertheless, one could argue that they are very
… whether it is intended or not. We know that power is most effective when it hides itself (Foucault 1975). Thus, power has a very good reason to seek and maintain nondisclosure. Disclosive ethics takes
as its moral imperative the disclosure of this nondisclosure the presumption that politics can operate without regard to ethics as well as the disclosure of all attempts at closing or enclosing that are implicitly part of the design and
use of information technology in the pursuit of social order. Many security analysts see FRSs as the ideal biometric to deal with the new emerging security environment (post 11 September). They claim that it is efficient (FaceIt only
requires a single 733 MHz Pentium PC to run) and effective, often quoting close to 80% recognition rates from the FRVT 2002 evaluation while leaving out of the discussion issues of the quality of the images used in the FRVT, the size
of the database, the elapsed time between database image and probe image, etc. But most of all they claim that these systems perform equally well on all races and both genders: "Does not matter if population is homogeneous or
heterogeneous in facial appearance" (FaceIt technical specification1). This claim is not only made by the suppliers of FRSs such as Identix and Imagis Technologies. It is also echoed in various security forums: "Face recognition is
completely oblivious to differences in appearance as a result of race or gender differences and is a highly robust biometric."2 Even the critical scholar Gary Marx (1995: 238) argued that algorithmic surveillance provides the
possibility of eliminating discrimination. The question is not whether these claims are correct or not. One could argue that in a certain sense they are correct. The significance of these claims is the way they frame the technology. It
presents the technology itself as neutral and unproblematic. More than this, it presents the technology as a solution to the problem of terrorism. Atick of Identix claimed, in the wake of the 9/11 attacks, that with FaceIt the US has the
ability to turn all of these cameras around the country "into a national shield" (O'Harrow 2001). He might argue that in the face of terrorism minor injustices (biases in the algorithms) and loss of privacy is a small price to pay for
security. This may be so, although we would disagree. Nevertheless, our main concern is that these arguments present the technical artefacts in isolation with disregard to the socio-technical networks within which they will become
enclosed. As argued above, it is not just the micro-politics of the artefact that is the issue. It is how these become multiplied and magnified as they become tied to other social practices that is of significance. We need
to disclose the network effects, as it were, of the micro-politics of artefacts. This is especially so for opaque digital technology. There is every
reason to believe that the silence and non-invasiveness of FRSs make it highly
desirable as a biometric for digital surveillance. It is therefore important that this
technology becomes disclosed for its potential politics in the socio-technical
network of digital surveillance. Thus, not just as isolated software objects as was done in the FRVTs, but in its multiplicity of implementations and practices. We would claim it is
here where the seemingly trivial exclusions may become very important as they become incorporated into actual practices.
should relax the fears of a totalitarian risk; however, it should not lead to the neglect of the problem of accountability of multiple actors. Also, the various
brothers may look mainly benevolent, but how benevolent will depend on the state of health of democracy, namely pluralism, accountability, checks
and balances, binding protection of fundamental rights. This in turn will be influenced by the intelligence of democracy, which is the capacity to avoid both
the possibility of debating without knowing a charge often addressed by experts to lay citizens as well as parliaments and the tendency of knowing
without debating that characterises forms of secretive expertise. In the EU context some interesting developments can be noted, as well as some difficult
challenges ahead. The developments include the limited but nevertheless significant pluralism connected with the role of different EU and national
institutions, some emerging forms of public debate and some diffusion of expertise. Current efforts to bring areas of the third (intergovernmental)
pillar on justice and home affairs into the first (Community) pillar could increase the role of the EP, with a related increase of oversight and
likely enhanced access to information by civil society organisations. At the same time, the search for establishing a new legal basis for international co-operation based on third pillar provisions and explicitly on security grounds, e.g., in response to the ECJ judgement on
PNR,45 may counter-balance the communitarisation trend. With regard to
diffusion of expertise, this can be seen also in the context of the broader impact assessment culture of EU policy-making.46 Such a culture stresses the
need to explain and justify the policy options considered and selected. Obviously even the most refined procedure and methods for impact assessment
cannot completely avoid the technocratic temptation to find arguments to justify pre-selected options nor the risk of regulatory capture by well resourced
(including in terms of expertise) groups. Nevertheless the important link between expertise and accountability is explicitly drawn. Last but surely not least,
the still non-binding legal status of the Charter of Fundamental Rights weakens its weight, even if the ECJ and some national courts are referring to it in
their judgements. In this regard, it would be desirable to have the Charter becoming binding either as part of a binding EU Constitutional Treaty or in other
form. To conclude, the EU experimental capacity will be put once again to a hard test by multiple and possibly contradictory expectations to deliver
security, be a champion of peace and democracy, provide welfare internally and not become a fortress. It may fail with hard consequences or may
reach maturity as a supranational democratic polity. Strengthening accountability and safeguarding fundamental rights can lead us there; weakening
them, or even opting out through undue exceptions in the name of security would undermine some of the very foundations of the European project.
Award for "Worst Public Official" to the City of Tampa for spying on all of the Super Bowl attendees with facial recognition. The annual award presents Orwell-inspired statues to the
… on whether we want to make such a fundamental change.9 The objective of surveillance studies must be to ensure that people are more than just objects of information. The power of
the panopticon is limited by the process of giving those observed a degree of control over, and knowledge of, facial recognition systems. As surveillance systems are implemented, they
must be carefully scrutinized to ensure accountability among those who gain power from the systems. Benefits to public safety must be clearly described, and government must justify
any secrecy. There are important openings for dissent in the nascent facial recognition society. The U.S. Supreme Court may have denied a right of privacy over facial features, but there
is sociological evidence suggesting people observe a customary right to facial privacy. Journalist Malcolm Gladwell (2002) says we tend to focus on audible communication and ignore
much of the visual information given in the face, because to do otherwise would challenge the ordinary boundaries of human relationships. Gladwell refers to an essay written by
psychologist Paul Ekman in which Ekman discusses Erving Goffman's sociological work: "Goffman said that part of what it means to be civilized is not to steal information that is not
freely given to us. When someone picks his nose or cleans his ears, out of unthinking habit, we look away … for Goffman the spoken word is the acknowledged information, the
information for which the person who states it is willing to take responsibility …" (2002). Gladwell writes that it is disrespectful and an invasion of privacy to probe people's faces for
information their words leave out. Awareness of the information also entails an obligation, Gladwell says, to react to a message that was never intended to be transmitted. To see what
is intended to be hidden, or, at least, what is usually missed, Gladwell explains, opens up a world of uncomfortable possibilities (2002). Ideas such as these, that examine the forms of
interaction a facial recognition society would create, can be exploited in mounting a defence against the observation onslaught. They may be of little consequence now, when people
have yet to experience the full brunt of facial surveillance, but as its drawbacks become increasingly apparent, the arguments will become more salient. Paradoxically, it may be
precisely the potential for surveillance to influence behaviour that may ultimately destroy the possibility of exercising that influence. As panoptic
surveillance continues to cover more of the urban space and be experienced more
constantly and intrusively by urban dwellers, there is a theoretical threshold point
beyond which the surveillance ceases to achieve control. If most members of a society develop the expectation that
their mistakes and indiscretions have been recorded and may be revealed, the stigmatisation of their behaviour that encourages orderliness will slowly disappear. If an individual can no
longer anticipate that his life - especially the rough edges - is safely hidden from view, there is less incentive for that person to maintain the false distinction between his actual and
reported behaviour. Society would gradually adopt new norms, ones that less strictly censure behaviours that were previously common yet concealed. Criticism could be pre-empted at
this stage by embracing publicly our foibles and declaring them normal before society at large can say otherwise. It is the same model used by the politician who calmly discloses that he
… situation, but these institutions and individuals were dealt a favourable hand when the September 11 terrorist attacks aggravated the risk society and facilitated the manipulation of the
… awareness of dangers, speculating on the future and taking small steps of resistance is a beginning.
Perm do both.
Pragmatism is key to resolve problems ignored by the state.
Burns 08 Professor in History of Medicine at King's University College at
the University of Western Ontario (Lawrence, Identifying concrete ethical demands
in the face of the abstract other: Emmanuel Levinas' pragmatic ethics, Philosophy &
Social Criticism, March 2008, Vol. 34, No. 3)
The link between the face of the other and the demand for justification establishes
the pragmatic character of Levinas' ethics. To see the other is to be obligated
to respond to a need. I may agree, disagree, explain why I cannot help the other,
or I may even act to help the other, but the obligation to respond is not diminished
no matter what my response may be. Thus, even though the other may call my
joyous possession of the world into question (TI, 756/73), I can still turn a blind eye
and a deaf ear to the face and voice of the other and develop an alibi. Given the
distinction between shouldering responsibility on the one hand and acting on that
responsibility on the other, i.e. between the experience of obligation (the
imperative) and the subject's responsive performance (Gibbs, 2000: 3), we need to
situate this pragmatic ethics at the proper level of analysis. Thus, the
responsive performance will require that the subject draw up a plan in which
traditional moral norms of the kind that Ricoeur envisions are invoked and repaired.
The guidance for that repair cannot come from the internal force of the norms
themselves because they are broken norms that cause suffering. Instead, a
prophetic response is required that looks beyond the norms in order to repair them.
However, even though it is necessary to look beyond the norms, those norms do not
disappear. They are repaired, revised, and justified, but only because of the
subject's assumption of responsibility for the other.