CHAPTER 1

GOOGLE GLASS INTRODUCTION

1.1. GOOGLE GLASS

Google Glass is a type of wearable technology with an optical head-mounted display (OHMD). It was developed by Google with the mission of producing a mass-market ubiquitous computer. Google Glass displays information in a smart-phone-like, hands-free format. Wearers communicate with the Internet via natural language voice commands.
Google started selling a prototype of Google Glass to qualified "Glass Explorers" in the US
on April 15, 2013, for a limited period for $1,500, before it became available to the public
on May 15, 2014, for the same price. On January 15, 2015, Google announced that it would
stop producing the Google Glass prototype but remained committed to the development of
the product. According to Google, Project Glass was ready to "graduate" from Google Labs,
the experimental phase of the project.
Google Inc. is an American multinational corporation specializing in Internet-related
services and products. These include search, cloud computing, software and online
advertising technologies. Google began in January 1996 as a research project by Larry Page
and Sergey Brin. It was originally a search engine that ranked websites (PageRank) and returned them as search results according to the user's query. Over time Google grew, and it now provides many features beyond search results, e.g. image search, YouTube (the largest collection of online videos) and many more. It has its own R&D department, known as Google X, where the Google Glass project was developed. Google Glass uses virtual and augmented reality to interact with the user.
Virtual reality is a term that applies to computer-simulated environments that can
simulate physical presence in places in the real world, as well as in imaginary worlds. It
covers remote communication environments which provide virtual presence of users with
the concepts of telepresence and telexistence or a virtual artifact (VA). The simulated
environment can be similar to the real world in order to create a lifelike experience. Virtual reality is often used to describe a wide variety of applications commonly associated with immersive, highly visual, 3D environments. The development of CAD software, graphics hardware acceleration, head-mounted displays, data gloves, and miniaturization has helped popularize the notion.

Augmented reality is a live, direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one's current perception of reality. By contrast, virtual reality replaces the real world with a simulated one.
Augmentation is conventionally in real-time and in semantic context with environmental
elements.
Project Glass is a research and development program by Google to develop an
augmented reality head-mounted display (HMD). It is part of the Google X Lab, which
works on other futuristic technologies. The intended purpose of Project Glass products
would be the hands-free displaying of information currently available to most smart phone
users, and allowing for interaction with the Internet via natural language voice commands.
The functionality and physical appearance (the minimalist design of the aluminium strip with two
nose pads) have been compared to Steve Mann's Eye Tap, which was also referred to as
"Glass" ("Eye Tap Digital Eye Glass", i.e. uses of the word "Glass" in singular rather than
plural form "Glasses"). The operating system software used in the glass will be Google's
Android.
Android is a Linux-based operating system designed primarily for touch screen
mobile devices such as smart phones and tablet computers. Initially developed by Android,
Inc., which Google backed financially and later bought in 2005, Android was unveiled in
2007 along with the founding of the Open Handset Alliance: a consortium of hardware,
software, and telecommunication companies devoted to advancing open standards for
mobile devices.

1.2. OVERVIEW

As per many reports, Google is expected to start selling eyeglasses that will project
information, entertainment and, this being a Google product, advertisements onto the lenses.
These glasses will have the combined features of virtual reality and augmented reality. The
Google Glasses can use a 4G cell connection to pull in information from Google's mountain of data and display info about the real world in augmented reality on the lens in front of your eye. As you turn your head you'll get information about your surroundings and nearby objects from Google Goggles, info on buildings and establishments from Google Maps, even your friends' nearby check-ins from Latitude. The company has no plans to sell ads
into your newly augmented view of the world, but will consider it if the product really
catches on.

Fig. 1.1 Glass Design


The glasses are not designed to be worn constantly, although Google engineers expect some users will wear them a lot; rather, like smart phones, they will be used when needed, with the lenses serving as a kind of see-through computer monitor. Google glasses
are basically wearable computers that will use the same Android software that powers
Android smart phones and tablets. Like smart phones and tablets, the glasses will be
equipped with GPS and motion sensors. They will also contain a camera and audio inputs
and outputs. Several people who have seen the glasses, but who are not allowed to speak
publicly about them, said that the location information was a major feature of the glasses.
Through the built-in camera on the glasses, Google will be able to stream images to its rack
computers and return augmented reality information to the person wearing them. For
instance, a person looking at a landmark could see detailed historical information and
comments about it left by friends. If facial recognition software becomes accurate enough,
the glasses could remind a wearer of when and how he met the vaguely familiar person
standing in front of him at a party. They might also be used for virtual reality games that use
the real world as the playground.

1.3 DEVELOPMENT
Google Glass (2013) and Steve Mann's Digital Eye Glass (1980) were displayed side by side at the "History of AR Vision" exhibit at the 2013 Augmented World Expo, both shown recording video with each device lit up accordingly. Google Glass was developed by Google X, the facility
within Google devoted to technological advancements such as driverless cars. Google Glass
is smaller and slimmer than previous head-mounted display designs. The Google Glass
prototype resembled standard eyeglasses with the lens replaced by a head-up display. In
mid-2011, Google engineered a prototype that weighed 8 pounds (3,600 g); by 2013 they
were lighter than the average pair of sunglasses. The product was publicly announced in April 2012, and in April 2013 the Explorer Edition was made available to Google I/O developers in the United States for $1,500. Sergey Brin wore a prototype of the Glass to an April 5,
2012, Foundation Fighting Blindness event in San Francisco. In May 2012, Google
demonstrated for the first time how Google Glass could be used to shoot video. Google
provided four prescription frame choices for $225 each, or free with the purchase of any new Glass unit. Google entered into a partnership with the Italian eyewear company Luxottica, owner of the Ray-Ban, Oakley, and other brands, to offer additional frame designs. In June 2014, the Nepal Government adopted Google Glass for tackling poachers of wild animals and herbs in Chitwan National Park and other parks listed as World Heritage Sites. The Gurkha military currently uses Google Glass to track the animals and birds in the jungle; this made Nepal the first country in the world to use Google Glass for military purposes. In January 2015, Google ended the beta
period of Glass (the "Google Glass Explorer" program). In early 2013, interested potential
Glass users were invited to post a Twitter message with a designated hashtag to qualify as an early user
of the product. The qualifiers, dubbed "Glass Explorers" and numbering 8,000 individuals,
were notified in March 2013, and were later invited to pay $1,500 and visit a Google office
in Los Angeles, New York or San Francisco, to pick up their unit following "fitting" and
training from Google Glass guides. On May 13, 2014, Google announced a move to a
"more open beta", via its Google Plus page. In February 2015, The New York Times
reported that Google Glass was being redesigned by former Apple executive Tony Fadell,
and that it would not be released until he deemed it to be "perfect."

1.4 FEATURES

Touchpad: A touchpad is located on the side of Google Glass, allowing users to control
the device by swiping through a timeline-like interface displayed on the screen. Sliding
backward shows current events, such as weather, and sliding forward shows past
events, such as phone calls, photos, circle updates, etc.

Camera: Google Glass has the ability to take photos and record 720p HD video.
Display: The Explorer version of Google Glass uses a Liquid Crystal on Silicon (LCoS)
(based on an LCoS chip from Himax Technologies), field-sequential color, LED
illuminated display. The display's LED illumination is first P-polarized and then shines
through the in-coupling polarizing beam splitter (PBS) to the LCoS panel. The panel
reflects the light and alters it to S-polarization at active pixel sites. The in-coupling PBS
then reflects the S-polarized areas of light at 45° through the out-coupling beam splitter
to a collimating reflector at the other end. Finally, the out-coupling beam splitter (which
is a partially reflecting mirror, not a polarizing beam splitter) reflects the collimated
light another 45° into the wearer's eye.
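As a rough intuition aid, the optical path just described can be modeled as a short pipeline of polarization states. This is purely an illustrative sketch: the stage names and the function below are our own invention and have nothing to do with actual Glass firmware.

```python
# Sketch of the LCoS display light path described above, modeled as a
# pipeline of polarization states. All names here are illustrative
# assumptions; this is a teaching aid, not real display code.

def trace_display_path(pixel_active: bool) -> list:
    """Return the stages a ray of LED light passes through."""
    path = ["LED illumination: unpolarized"]
    path.append("polarizer: P-polarized")
    # The in-coupling polarizing beam splitter (PBS) transmits the
    # P-polarized light through to the LCoS panel.
    path.append("in-coupling PBS: transmitted to LCoS panel")
    if pixel_active:
        # Active pixel sites reflect the light, flipping it to S-polarization.
        path.append("LCoS panel: reflected, S-polarized")
        # The PBS reflects S-polarized light 45 degrees toward the collimator.
        path.append("in-coupling PBS: reflected 45 deg")
        path.append("collimating reflector: beam collimated")
        # The out-coupling splitter (a partially reflecting mirror) bends
        # the collimated beam another 45 degrees into the wearer's eye.
        path.append("out-coupling splitter: reflected 45 deg into eye")
    else:
        # Inactive pixels leave the light P-polarized, so the PBS does not
        # redirect it toward the eye: the pixel appears dark.
        path.append("LCoS panel: reflected, still P-polarized")
        path.append("in-coupling PBS: not redirected (dark pixel)")
    return path

print(trace_display_path(True)[-1])
```

The key design point the sketch captures is that the pixel's on/off state is encoded entirely in polarization, and the beam splitters act as polarization-selective routers.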

CHAPTER 2
GOOGLE GLASS TECHNOLOGY

2.1. TECHNOLOGY INTRODUCTION


During our introduction to the progression course, Techniques of Project Work, we were
instructed in how to create a proper problem-formulation, how to use the RUB-library, how
to use the search engines to find research materials, and the necessary tools for executing
them. Within the seminar we gained some great advice on how to handle internal conflicts within the group, and additionally some comprehension of how to deal with these conflicts when unknown factors could interfere with the overall focus of our work.
Like any project group that spends most of their days together in front of their
computer screens, we sometimes felt frustrated and annoyed with each other. It was here
that it became helpful to apply the knowledge acquired from the lecture we had on October
11th. We were told to remember one of our first group sessions, where we found out how
important it is to discuss and determine the strengths and weaknesses within the group at an early stage, and furthermore, how it is possible to apply them to bring forth the most
productive features of the individual group members; and assign them a suitable role
according to their abilities. Additionally, problem-oriented project work aims to investigate
something the group did not already know. In turn we discovered a lack of knowledge with regard to the project and decided to gather empirical data so that we could get started with
our project. At the core of our project we had to formulate our project statement. It was
formed by the combined diverse knowledge we had accumulated individually, and used to
agree upon the direction we wanted the project to take. As described in Problem-Oriented Project Work: A Workbook, there is a certain methodology for the problem-formulation phase, namely the search for methods, theory, and data, where texts and articles are sorted and assessed for their relevance; by doing so, the project can move forward.
During the midterm seminar we received a lot of helpful advice, and at an opportune time, as our friendship-group and their supervisor helped us realize the necessary changes we had to make to our former problem-formulation, and that we needed to narrow the focus of our project. At the time we had a severe case of writer's block, having bitten off more than we could chew, and the seminar gave us the needed perspective to move forward with a renewed and focused outlook on things.

In our final course, we were given the tools to best prepare for our oral examination
and which things to keep in mind when striving to get a high grade. This course was much
needed since all of us were quite unsure of how the oral group examinations are done at
RUC. Furthermore, it was also very helpful to know what a supervisor values during the
examination, since we only have a set amount of time to show what we have learned. All in
all, the Techniques of Project Work was a well-taught course with a lot of useful information
to guide us, to make it easier for us, as a group, to progress comfortably in the desired
direction.

2.2 METHODOLOGY
Due to the technological aspect of our project, we started by mainly researching how Google Glass works and how we are surveilled every day without being aware or cautious of it. After acquiring books, articles, and videos demonstrating Google Glass, and from these gathering an understanding of how the technology works, we started looking at reactions to Google Glass. How was the reception of Google Glass? This product is not yet
available to the public; still it has caused outrage and is already banned from several places.
We looked into the problems and solutions that come with having a life online, and
we took a philosophical approach to it. We have an analysis based on cases, combined with the theories we found relevant. Since we have not been able to test the Google Glasses
ourselves, we had to rely on articles from people who have had the opportunity to try the
device. For this reason we have maintained a critical view in the analysis on the origin of
our sources. Furthermore, we have had to deal with a lot of speculation, which fits well
with our philosophical angle.

2.3 METHODS
The following chapter will deal with a few select methods which we found very relevant to this project when looking at the problem definition. We have an analysis based on cases, combined with the theories we found relevant. We have decided to incorporate a method for each of the dimensions, in order to answer the questions in the problem statement. This is due to the vast amount of information we have found on the subject.

2.3.1 HYPOTHETICAL-DEDUCTIVE METHOD

In our project we will use the hypothetical-deductive method. The hypothetical-deductive method is a scientific method where a hypothesis is made and tested to see whether it holds. The method starts from a hypothesis or theory, from which concrete predictions are derived through deduction. For the method to be scientific, claims derived from the theory must be deductively falsifiable. Karl Popper, an Austrian philosopher of science, established this method. He claimed that for a scientific theory to work, it must be formulated so that it can be contradicted by observations. The better the theory, the easier it will be to disprove it.
The course of the method can be divided into four stages:
1. Through observation we find a problem (what/why?): collect data and look for explanations.
2. A hypothesis is made from theoretical knowledge, including assumptions about the problem: if nothing new is known, state an explanation.
3. A deduction is made from the assumption (consequences, explanations): if we assume that step two is true, what are the consequences?
4. Empirical verification: confirming the assumption puts the theory to a test. Look for any evidence that conflicts with the predictions in order to disprove it (Godfrey-Smith, 2003: 236).
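The four stages above can be sketched as a tiny program. The hypothesis ("all swans are white"), the prediction and the observations below are invented purely for illustration; only the falsification logic mirrors the method described.

```python
# A minimal sketch of Popper-style falsification (stage 4 above): a single
# conflicting observation falsifies the hypothesis, while agreement only
# means "not yet falsified". The example data is invented for illustration.

def empirical_verification(prediction, observations) -> str:
    """Put the theory to a test against collected data."""
    for obs in observations:
        if not prediction(obs):
            return "falsified"
    # Popper: no amount of agreement can prove the hypothesis true.
    return "not falsified"

# Stages 1-3: the hypothesis "all swans are white" deduces a prediction
# that every observed swan should be white.
all_swans_white = lambda swan: swan == "white"

# Stage 4: check the prediction against two sets of observations.
print(empirical_verification(all_swans_white, ["white", "white"]))  # not falsified
print(empirical_verification(all_swans_white, ["white", "black"]))  # falsified
```

Note how the function can only ever return "falsified" or "not falsified", never "proven": this asymmetry is exactly the point Popper (and the Einstein quote below it in the text) makes.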
Karl Popper was an enthusiast of Einstein's relativity theory, because Einstein had very precise theories. He believed that for a scientific theory to be good, it must be open to critical testing; if a theory could not be falsified, he would not consider it a scientific theory. Using this method, a theory can never be 100% verified; it can only be falsified. Einstein captured this view when he said: "No amount of experimentation can ever prove me right; a single experiment can prove me wrong." (Calaprice, 2005: 291)
This project works with the hypothesis that Glass has a connection to direct and
indirect surveillance, much like many social networks and our use of smart phones. With
the new Glass it is said that Google can monitor everything the user sees and does with the
glasses. We consider this very likely to be true, since our research shows how much online surveillance is going on nowadays. Because Glass is very new and not yet available to the public, we are not able to test it ourselves; our answers will therefore be based on articles and the knowledge of people who have tested it, and on the likelihood of our hypothesis.

2.3.2 SEMIOTICS
Since we are working with a Text and Sign dimension in our project, we will be using
semiotics. We will incorporate Saussure and his way of working with signs, which says that a sign consists of a signifier and a signified. The signifier is the sign's image as we perceive it; it is the basic physical existence of the sign. The signified is the mental concept to which the signifier refers. This mental concept can differ between cultures, but is common to people from the same culture, as they share the same language and thereby understanding. The relation between these two is termed signification, and the signifier plus the signified, via signification, yield the external reality or meaning (John, 1982: 47-56). Social semiotics is relevant to our project, as we will focus on how the human signifying habit is shaped and influenced by cultural surroundings and social circumstances. Exploring human habits, this method also tries to make meaning out of human behaviour given a certain social condition. This method allows us to investigate the impact of surveillance, and especially Glass, on society, and how the symbols that come with it are decoded in a social context. The method delivers an analytic view on how surveillance and Glass may be changing the way society, and we as individuals, function in a future virtual world.

2.4 THEORETICAL FRAMEWORK


The chosen theories all relate to the project and consist of a variety of technological aspects, where we look into the functionality of devices, as well as how surveillance has moved from visible cameras, such as CCTV cameras, to hidden cameras and tools, as part of the mobile development of phones and laptops, which users carry with them everywhere. The project also has a very important philosophical and ethical aspect, questioning whether companies such as Google should be allowed to follow every move the user makes online and collect all personal data.
The study uses theories formed by the theorists Foucault, Cooper, Chalkley, Pariser and Jeremy Bentham, who developed their theories side by side with technological advancement. By the use of the aforementioned theories we will reveal and compare the thoughts concerning technology and signs at that given time with the present situation, where people are still dealing with similar worries about the technological revolution. We will also discuss whether it is a good or a bad thing how the world's technology has developed and continues to do so. We combine the theories to reach a better understanding of network surveillance and why we as human beings have such a fascination with technology and the newest devices.

2.5 DELIMITATION
Although the project has many objectives, it sets its focus on surveillance as a consequence
of mediated technology. In this context we have chosen to deal with Google Glass as our
main case and involve it as much as possible in our project as it is a valid example of how
we all contribute to the surveillance society. Google Glass is relevant because it is one of the
newest and amongst the most innovative products of mediated society, and it contains
endless possibilities as well as risks. Google Glass has, however, not yet been released to
the majority of the public. Our project will mainly concern the United States, where the
culture is very affected by mediated life and engaged with a variety of surveillance
capabilities; however, our focus will be on network surveillance.

2.6 COMMUNICATION IN MEDIATED LIFE


What is interesting about communication is the way its form is constantly changing and
evolving. Lately we have seen a technological revolution, a fast evolution of communication technology that has influenced the way we communicate remarkably
(Hamelink, 2000: 23-29). There are a lot of positive thoughts and fascination about the technology and mediated evolvement. In the past decade we have seen a vastly growing, constant development in new communication technologies. It is becoming more strange than normal not to have a smart phone. According to renowned neuro-psychiatrist Dr. Gary Small, daily exposure to technology has caused the brain to evolve and change. Due to the current technical revolution, our brains are evolving at a speed never seen before. Even though we are unaware of the changes occurring in our brains, these alterations can become permanent if repeated (Small et al., 2008: 1). If the brain evolves towards focusing on new technology and the skills that come with it, it will drift away from fundamental social skills such as reading subtle gestures, facial expressions, and social contact (Small et al., 2008: 2-3). The accusation is that modern communication makes people spend more time looking each other up online instead of spending physical quality time together. Our primary focus will be what mediated technology has done to the way we communicate with each other, a phenomenon also known as CMC (Computer-Mediated Communication). CMC has been exploding over the last few years; take an

obvious example like Facebook, which grew from having one million active users in 2004, when it started as an American college social network, to being used worldwide by all age groups, with the newest results showing over a billion active users in October 2013. This is a clear example of how the world has gradually accepted the change in how we communicate and the evolution of communication. We cannot stay focused anymore when we are physically socializing with friends and family.
We keep checking our phones and updating our social network sites to see if anyone has virtually "poked" us. But it is hard to define whether or not this makes us more social or asocial. People are now expected to respond to chats and texts the moment they are received, update their Facebook and re-tweet something of minor importance, which does
not necessarily make us rude or asocial if we are maintaining our online social profile while
spending time with our friends. Maybe we are not present in the current situation and
socializing with the one person we are physically next to, but we are present on so many
sites and socializing with thousands of people. It is becoming more of a common agreement
that it is okay to not really be present while we are uploading our lives to the World Wide Web for everyone to see. We post self-chosen information about ourselves online. It can be anything from photos of loved ones, relationship status or even photos of what we eat. CMC has really brought this noteworthy phenomenon to our attention. It is on one hand a one-way communication to document how our life is going, and on the other hand a public conversation starter, say, if someone is in the same situation as you and can relate to it with a much-appreciated "like". The receiver is not chosen or specified by the sender (i.e. the person leaking it), the information is self-chosen, and the sender is aware that the leaked updates are public. But what are the consequences of this vast development? We forget to think about them and what effects they might have. What are we actually agreeing to when we click "agree to terms"? And how much privacy do our privacy settings really ensure us?

2.7 THE SURVEILLANCE STATE OF AMERICA


2.7.1 THE PRESIDENT'S SURVEILLANCE PROGRAM / "THE PROGRAM"
"Defending Our Nation. Securing the Future." Those are the first six words to appear when looking at the US National Security Agency's homepage. They are central to the purpose of the organization and written in large font on every single page of their website. Defending
and securing the future of the United States is perhaps the most important thing a US

government agency can do. But it is worth noticing that this is a very broadly defined
mission. If the NSA is securing and defending the future of the USA, is it not fair to say that
by limiting the powers of the NSA America becomes less safe? By objecting to the practices
of the NSA, is one actively attacking the future of America? That is what their broadly
defined mission statement implies. Rather than take the NSA at their word, it is worth
considering their actions over the recent past.
There is no doubt that the NSA wants the United States to be a safe place. As stated in the NSA & CSS's Core Values Brochure, the NSA's main goal is to secure the future and to keep the United States of America protected: "Collect (including through clandestine means), process, analyze, produce, and disseminate signals intelligence information and data for foreign intelligence and counterintelligence purposes to support national and departmental missions."
It sounds impressive, but what does it mean? How do they go about collecting and
analysing information? And from whom do they collect this information? Are there any
limits to their reach? The President's Surveillance Program ("The Program") was an NSA program authorized by President George W. Bush shortly after the attacks of September 11, 2001. The program made it possible for the NSA to conduct a wide range of surveillance activities inside the United States, activities that had not been possible before. Since 2005 various whistleblowers and major newspapers have exposed a remarkable amount of information gathered as a result of the program, such as call-detail records collected from major telecommunications companies in the US. These records were collected without a warrant or judicial oversight, but through The Program and The Patriot Act, tens of millions of Americans were spied on. As US Senator Patrick Leahy, who opposed the program, asked Congress: "Are you telling me tens of millions of Americans are involved with Al-Qaida?" These are tens of millions of Americans who are not suspected of anything. According to research by USA Today, the call-detail records included customers' names, street addresses, and other personal information, as well as detailed records of calls they made across town or across the country, to family members, co-workers, business contacts and others (Cauley, 2006). An anonymous source told USA Today that the agency's ambition was "to create a database of every call ever made" (Cauley, 2006). All of this was done without a warrant or any judicial supervision.


A few weeks later, former AT&T technician Mark Klein revealed to the New York Times that those same telecommunications companies had also agreed to install complex communication surveillance equipment in secret locations at strategic telecommunication facilities around the country. The order came from the NSA (Markoff, 2006). This technology enabled the NSA to gain autonomous and free access to large streams of local and international communication; to be more specific, the NSA was now able to collect at least 1.7 billion emails a day due to this surveillance equipment, according to The Washington Post. Again, all of this was done without a warrant, in violation of federal law and the Constitution (Priest, 2010).
The Program was first criticised by the New York Times in 2005. President Bush then admitted to a small aspect of the program: he acknowledged that the NSA, without warrants, monitored the communications of 500-1000 people in the US with suspected connections to Al Qaeda. He called this the Terrorist Surveillance Program (Priest, 2010). But what about those people not connected to terror: why collect and keep billions of emails from innocent, non-suspicious citizens? And what happens with the collected data?

2.7.2 SHARED AND TRACKED INFORMATION


Since 2005, the use of social media and technological devices has expanded vastly. For most of us, not a day goes by without CMC from our smartphone or computer. What happens when we use the Internet is this: data travels from our device through telecommunication companies' wires and fiber-optic networks, and finally to our intended recipient. To capture these communications, the government has installed devices known as fiber-optic splitters at many of the main telecommunication connection points in the US. The fiber-optic splitters make an exact copy of the data passing through them: one stream is directed to the government, the NSA/CSS, while the other stream is directed to the intended recipients (Markoff, 2006). Once the NSA has the data, they have the right to retain the records for up to six years, that is, if no potential threat or crime is present in the information gathered (Greenwald, 2013). The reason why all data is found relevant is
explained by Gus Hunt, the chief technology officer of the Central Intelligence Agency: "The value of any piece of information is only known when you can connect it with something else which arrives at a future point in time. ... Since you cannot connect dots you do not have, it drives us into this mode of: We fundamentally try to collect everything and hang on to it forever." (Sledge, 2013) Hunt's quote becomes highly chilling when juxtaposed with George Orwell's famous line from 1984: "He who controls the past controls the future. He who controls the present controls the past." (Orwell, 1949: 17) With so much data to pick and choose from, the NSA has a remarkable amount of power over the past. Either by design or accident, they could use their compiled data to incriminate
the innocent. The amount of power granted to any organization is astounding, and the lack
of government oversight is highly disturbing. One would expect Big Brother was wrought
from a similar beginning. It is, however, not just the government and the NSA that end up with our information. It is no secret that the NSA shares its collected data with the FBI, CIA, and the DEA. Major companies also play a vital part when it comes to collecting and using data: with more than 950 million users, Facebook obviously has a great opportunity to track and store data on millions of people around the world (Schneier, 2013). As Bruce Schneier, technologist and author, said: "The Internet is a surveillance state. Whether we admit it to ourselves or not, and whether we like it or not, we are being tracked all the time. Google tracks us, both on its pages and on other pages it has access to. Facebook does the same; it even tracks non-Facebook users. Apple tracks us on our iPhones and iPads. One reporter used a tool called Collusion to track who was tracking him; 105 companies tracked his Internet use during one 36-hour period." (Schneier, 2013)
The electronic footprints we leave are rapidly increasing as the technological
development expands, making it easy for anyone interested to track down our every move: movements that can be cross-indexed, correlated, and used for secondary purposes, because information about us has value. The Justice Department uses details from Google searches to look for patterns that could help identify child pornographers and potential criminals.
Google uses that same information to deliver context-sensitive advertising messages
(Schneier, 2013). The majority of us have gladly given out personal information in
exchange for social media and specific services. What we object to is the surreptitious
collection of personal information and the secondary use of information once it is collected:
the buying and selling of our information behind our back.

2.7.3 THE CONVERGENCE OF NSA SPYING WITH THE DIGITAL ERA


Most of the fury and national discussion caused by the NSA leaks of 2006 faded away as the news cycle rolled on. The economy crashed, Barack Obama was elected twice, and thousands of new fads, from twerking to tweeting, arrived. So when the Guardian began publishing the information leaked by former NSA contractor Edward Snowden in June 2013, it shocked not only the nation but the entire world. It became clear that NSA programs like PRISM, XKeyscore and Tempora were collecting the data not just of US citizens but also of leaders around the world, most recently the German Chancellor, Angela Merkel.12 Why had the invasions of privacy committed by the NSA scaled up so dramatically, despite all the protest in 2006? Answering this question requires a consideration of the changes in the technological landscape of the 21st century, best represented by the search engine Google. Google requires that we agree to terms and conditions that allow it to use our search information, and over the years it has changed those terms and conditions to allow the government access to users' search results. Prior to September 11th, users were completely anonymous under the terms and conditions. Google's privacy policy in December 2000 stated:
"Google may also choose to use cookies to store user preferences. A cookie can tell us, 'This is the same computer that visited Google two days ago,' but it cannot tell us, 'This person is Joe Smith' or even, 'This person lives in the United States'" (10:43 in Terms and Conditions May Apply, 2013). In other words, Google stated that we remain totally anonymous. "The Patriot Act expanded the ability of the federal government to do surveillance in a lot of little ways. You don't need a judge's approval, for instance, to find out what websites someone visited or what search terms they typed into Google" - Declan McCullagh (9:47 in Terms and Conditions May Apply, 2013). Only one year later, in December 2001, Google's privacy policy had changed to: "Google does this by storing user preferences in cookies and by tracking user trends and patterns of how people search. Google will not disclose its cookies to third parties except as required by a valid legal process such as a search warrant, subpoena, statute, or court order" (11:06 in Terms and Conditions May Apply, 2013). This piece of text, from Google's own website, states that, when legally required, we are not anonymous. In December 2001 Google would not deliver user information such as cookies, identification or user preferences to a third party without a valid legal process such as a court order or search warrant. It is necessary to keep in mind that Google has economic motives for tracking users: its business model depends on using user information to target advertising rather than on having users pay for the service. People exchange a degree of privacy for the free service. Other companies picked up on this trend, which is part of what has made Twitter, Facebook and others such valuable companies. All of these companies also have similar terms and conditions policies, with far too much legal fine print to read in a reasonable amount of time. What is so interesting about this story is not really that Google changed its privacy policy, but that it claims, on its own website, that its privacy policy from 2000 is the one from 2001. A non-profit Internet service called the Wayback Machine takes snapshots of what websites used to look like and saves them in its own archive; it has been doing so since the 1990s. On Google's own official webpage, the company lists its history of privacy policies - every single privacy policy Google has ever had since its start. But what Google shows as its original privacy policy does not match the one from the Wayback Machine. Instead it shows the one from December 2001; the one that says that, when necessary, we will not remain anonymous.
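The distinction drawn in the 2000 policy - a cookie identifies a browser, not a person - can be made concrete with a minimal sketch. The code below is our own toy illustration, not Google's actual implementation: a server mints a random identifier on a browser's first visit and can later recognize "the same computer", while the identifier itself encodes no name or location.

```python
import uuid

# Toy illustration (our construction, not Google's implementation) of the
# claim in the 2000 privacy policy: a cookie is an opaque browser identifier.
cookie_jar = {}   # what the browser would store
visit_log = {}    # all the server can learn from the cookie alone

def handle_visit(browser_cookies):
    """Return (cookies to store in the browser, what the server now knows)."""
    visitor_id = browser_cookies.get("prefs_id")
    if visitor_id is None:
        # First visit: mint a random ID. It contains no name, no location.
        visitor_id = uuid.uuid4().hex
        visit_log[visitor_id] = 0
    visit_log[visitor_id] += 1
    knowledge = f"computer {visitor_id[:8]}... seen {visit_log[visitor_id]} time(s)"
    return {"prefs_id": visitor_id}, knowledge

# First visit: the server has never seen this browser before.
cookie_jar, knowledge = handle_visit(cookie_jar)
print(knowledge)  # e.g. "computer 9f3c2a1b... seen 1 time(s)"

# A later visit: "this is the same computer that visited Google two days
# ago" -- but still nothing that says "this person is Joe Smith".
cookie_jar, knowledge = handle_visit(cookie_jar)
print(knowledge)  # the count rises, the identity stays opaque
```

The point of the sketch is exactly the asymmetry the later policies exploit: the identifier is anonymous on its own, but once it can be joined with other records (a login, a subpoenaed log), the anonymity evaporates.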
In January 2012, Google changed its privacy policy once again, so that all user information could be combined into one personal profile. Detailed histories of every user can be accessed in the database for the purpose of market research or background checks (18:04 in Terms and Conditions May Apply, 2013). The current terms and conditions, when compared to those before 9/11, are quite remarkable. What is disturbing about the massive amounts of data that Google collects is not that it shares them with advertisers, because that is what consenting consumers agree to when they use Google's services; this is the way Google pays its bills and funds better services. What is disturbing is the ease with which governments around the world can use the information acquired by Google. Google CEO Eric Schmidt says: "the question of your, if you will, information being retained by Google is not at this point a Google decision, it's really a political or public policy decision enforced by different governments in different ways" (10:04 in Terms and Conditions May Apply, 2013). This metadata, or "Big Data" as it has come to be known - often discussed in reference to Orwell's Big Brother in 1984 - is what the NSA claims is necessary for protecting the future of the USA. General Keith Alexander told Congress on December 11th, 2013, that "threats are growing" and explained that metadata collection is like a traditional library index card system: "Metadata is our way of knowing where those books are in the library, and where the bad books are" (McCarthy, 2013). It is certainly true that the NSA is tasked with a very difficult mission. In the post-9/11 world, America's enemies are not entire nations but cells of terrorists. The terrorists hide and plot against America, so a level of surveillance is necessary. But what level of surveillance is permissible? It is an endless debate, most likely without a right answer. The intersection of security and freedom is not a simple matter, and perhaps what is most important is that people and governments keep the surveillance state in check.

2.8 THEORY

The next section will present our main case: Google Glass. Throughout the project we will refer to Google Glass simply as Glass.

Glass is among the newest devices within communication technology. It is a platform that enables the user to experience augmented reality: the overlaying of digital images onto what the user sees of their surroundings. In its basic form, Glass is a computer that one can wear on one's head. The product contains a see-through screen, placed at the top right corner of the user's field of vision, on which data manifests. It enables one to be "always on, always available", with the power to stream live video from the user's point of view, advanced voice recognition software, and access to the World Wide Web on the spot. Because the device is hands-free, Glass functions, as mentioned, via voice recognition. A command is started by simply saying "OK, Glass..." followed by the given command, e.g. "take a picture" to take a picture; one can likewise record a video, browse the Internet, or even speak the message you want to send. Glass can also translate signs; for example, if travelling in China, a street sign can be translated directly into English right in front of you.13 Not all commands can be controlled by voice, so Glass also has a touchpad that gives additional control. This touchpad is placed on the right side of the frame, next to the camera. This is also where the device can be turned on and off with a tap on the pad.

With Glass, the user is integrated into a higher level of connectivity. Glass is essentially a smartphone in different packaging. Though it shares many of a smartphone's parts, Glass depends on actual apps14 in order to have all its functions working. Apps are essential for Glass, as Google depends on developers to help it invent new apps to use with Glass. The development of new apps creates an opportunity to widen the usage of Glass and thereby extend its creative possibilities.
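To give a sense of how such apps ("Glassware") talked to the device: early Glass apps pushed cards to the wearer's timeline through Google's Mirror API (since retired). The sketch below only constructs the JSON payload for a simple card; the endpoint URL and field names are our recollection of Google's documentation at the time and should be treated as assumptions, not a verified client.

```python
import json

# Hypothetical sketch of a Glassware timeline card for the (retired) Mirror
# API. Endpoint and field names are assumptions based on period documentation.
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def make_timeline_card(text):
    """Build the JSON body for a card pushed to the wearer's timeline."""
    return {
        "text": text,                          # what appears on the prism display
        "notification": {"level": "DEFAULT"},  # nudge the wearer with a chime
        "menuItems": [{"action": "REPLY"},     # built-in voice-reply action
                      {"action": "DELETE"}],
    }

card = make_timeline_card("Your train leaves in 10 minutes")
# A real Glassware server would POST this JSON, with an OAuth token, to
# MIRROR_TIMELINE_URL; here we only show the payload.
print(json.dumps(card, indent=2))
```

The design is telling: the app runs on a server, not on the device, and Google's infrastructure mediates every card - which matters for the surveillance discussion later in this project.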
Glass also gives the user the ability to interact in a new social and cultural way. This ability to be connected wherever we find ourselves is considered a tech revolution. The new technology gives the user another dimension to his or her own reality and affects the way we live our lives. Google as a company is a major player in this tech revolution and offers many services of its own for Glass, including Maps, Calendar, Gmail, Google+ and Google Places, all of which simplify our lifestyle in an organized and convenient way that most people find appealing. Combined, Google's services offer the online user a neat and ergonomic package to include in one's lifestyle with Glass.
Google has already released two versions of Glass to be tested by selected people: version 1, the first version, and version 2, called the Glass Explorer Edition. The differences between the two are very small. The newer version only has slightly updated hardware, which makes no remarkable difference to the performance of the device; the changes amount at best to a ten percent increase in overall performance, shown in slightly smoother animations. The most noticeable differences are the mono earbud included with the product and the option of fitting specific prescription lenses.15


For purchasers of Glass, the Explorer Edition, Google stated the following in its Terms of Sale:

"You must be 18 years or older, a resident of the United States, and authorized by Google as part of the Glass Explorer program in order to purchase or use Glass Explorer Edition. Unless otherwise authorized by Google, you may only purchase one Device, and you may not resell, rent, or lease your Device to any other person. If you resell, rent, or lease your Device to any other person without Google's authorization, Google reserves the right to deactivate the Device, and neither you nor the unauthorized person using the Device will be entitled to any refund, product support, or product warranty."16
Google thereby becomes one of the first companies to claim control over a product even after it has been sold. Glass is a phenomenon of simplicity, and it is slowly entering our daily lives. It is a technology based on existing technology, which most of us are already carrying around. The big difference is that this product is situated in something as old-fashioned as a pair of glasses, which has been designed to be new and hip. The aim of the product is to introduce a new way of gaining access to the content you want: faster and with less trouble, all in a slick design.

2.8.1 THE FILTER BUBBLE


The human lifestyle has changed radically over the past decades. This change is closely related to the exponential growth in mediated technologies and the online connectivity we find ourselves with today. There is no doubt that being online is a vital part of living in the 21st century.

In December 2009, the world saw the beginning of something new: the search results from Google started to be customized to each user, and a new era of personalization had begun (Pariser, 2011: 1). When it comes to consuming information, this was a revolution (Pariser, 2011: 3). It can shape which new things we learn and how we learn them; it could even affect how democracy works. By collecting as much data as possible, our online experience can be tailored. Our personal information is tracked by data companies, and the consequence is that each of us will live more and more in our own unique universe of information - a personal bubble. Eli Pariser calls this "the filter bubble". He says that as a result of this personalization, most of the news we receive will be known, pleasant and familiar, but it will not be possible to know what is hidden from us. Our interests from the past will decide what we are exposed to in the future, and learning from unpredicted encounters will be minimized (Pariser, 2011: 1).

According to Pariser, search results are customized and personalized for everyone; this means we are tracked, and the Internet can gather all our information and learn everything about us. The filter bubble fundamentally alters the way we come across ideas and information (Pariser, 2011: 2).
Pariser explains that the filter bubble introduces three new dynamics:

1. You are alone in it - The bubble does not reach out and familiarize you with new things outside your comfort zone. Since it is personalized, it becomes a universe with you at the centre, and you grow more and more distant from other people and new things.

2. The filter is invisible - The user is unaware of the filter. There is no warning or sign of entering this bubble. Each time a user is online, the filter bubble grows, and the user is increasingly trapped in their own little universe. The Internet becomes like a bottle, with the filter as the bottleneck: the further you go, meaning the more you are captured by the Internet, the narrower and more familiar the world online becomes, and you thereby learn less about the world.

3. You do not choose to enter the bubble - The bubble consumes you without you realizing it. You are not actively choosing the filter. When turning on the TV to hear the news, or reading a newspaper, you make the decision about what to hear and see; you choose your filter and actively seek new information. With personalized filters, you are not entirely able to choose what you want to see. Instead of you making a choice to learn about something, things come to you, and they are becoming harder and harder to avoid (Pariser, 2011: 9-10).
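The narrowing that all three dynamics describe can be sketched as a feedback loop. The toy model below is our own construction - not Pariser's formulation and not any real recommender system: every click updates an interest profile, the profile re-ranks the next feed, and exposure to new topics collapses.

```python
from collections import Counter

# Toy model of the filter-bubble feedback loop (our construction, not any
# real recommender): clicks train a profile, the profile narrows the feed.
CATALOG = ["politics", "sports", "tech", "art", "science"] * 5  # 25 items

def rank(catalog, profile):
    # Items matching past interests float to the top (stable sort, so
    # ties keep the catalog's original, diverse order).
    return sorted(catalog, key=lambda topic: -profile[topic])

def simulate(rounds=5, feed_size=5):
    profile = Counter()          # starts empty: no filter yet
    diversity_per_round = []
    for _ in range(rounds):
        feed = rank(CATALOG, profile)[:feed_size]
        diversity_per_round.append(len(set(feed)))  # distinct topics shown
        profile[feed[0]] += 1    # the user clicks the top item; the filter learns
    return diversity_per_round

print(simulate())  # [5, 1, 1, 1, 1] -- a single click collapses the feed
```

Crude as it is, the model shows why the bubble is invisible and involuntary: the user only ever clicks what is in front of them, yet that alone is enough to shrink the feed from five topics to one.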
Pariser also notes some positive things that come with the filter bubble. The bubble keeps our interests organized: a user is never bored and never annoyed. It is an appealing prospect, a return to a Ptolemaic universe17 in which we are the centre and the world and everything else revolves around us. But, as so often, everything comes at a cost, and if the Internet is made more personal, we risk losing some of the qualities and traits that made it so appealing to begin with (Pariser, 2011: 12). As Pariser says:

"In the filter bubble there is less room for the chance encounters that bring insight and learning; serendipity is at risk. We get a lot of bonding but very little bridging." (Pariser, 2011: 17)
The first idea of personalization came from Nicholas Negroponte, who talked about it in the mid-nineties, but at the time people were not ready for it (Pariser, 2011: 21). In order to personalize, companies needed a lot of data about their users, and in 2004 Google came up with an innovative strategy: by providing other services that required users to log in, such as Gmail, the users themselves would provide Google with a huge amount of data (Pariser, 2011: 33). Pariser further explains how important personalized data is for companies; for example, up to 60% of Netflix's18 rentals are based on the personalized data of its users (Pariser, 2011: 8).
In recent years, big companies like Google and Facebook have been collecting more and more information about their users. The recent decade has seen a vast development in companies willing to pay for information about their demography in order to improve business marketing (Lace, 2005: 99). With these data comes an opportunity to know a customer on another level. From the user's point of view, personalized data exists because it makes it easy for them to move around in the online universe: it caters to one's particular needs and in the process makes interaction easy. The ability to reach the needed content as fast as possible is a key factor for the online user, and the companies take advantage of this.
Pariser explains how it is of the utmost importance for big companies to know how to choose the right content for their consumers. The three-step process of creating a personalized filter for the user (Pariser, 2011: 112) works because the user's identity shapes their media. However, Pariser argues that the media also shapes identity: by shaping our media through our identity, our identity is in turn shaped by the media we experience. This is a self-reinforcing concept which seems to have no end. Pariser discusses the consequences of these services and how they can in fact create a good fit between the individual and his media by changing him. In other words, choosing our own destiny is not an option here; Pariser believes that the destiny is already chosen for us (Pariser, 2011: 112).
The filter bubble makes the individual's online choices easier by reflecting their own personality in the material presented as relevant to them, but the consequence is that they do not get a chance to decide what that material should be; it is already decided for them. As Pariser describes:

"Personalized filtering can even affect your ability to choose your own destiny. In 'Of Sirens and Amish Children,' a much-cited tract, information law theorist Yochai Benkler describes how more-diverse information sources make us freer. Autonomy, Benkler points out, is a tricky concept: To be free, you have to be able not only to do what you want, but to know what's possible to do." (Pariser, 2011: 112)
Since Google is able to follow every step a user makes with Glass, it will be easy for a user to acquire a filter bubble via the glasses too. Glass is an extension of our mobile phone, and through personalization Google can use all the information it receives from Glass to create a bigger filter, since the glasses, just like a phone, will likely be used most of the time. Nowadays people are practically addicted to laptops. We could once escape the filter by not being at our computer all the time, but now that all smartphones have network access, they are practically small laptops we carry around with us wherever we are. The same goes for Glass, but with the glasses it is barely even necessary to do anything but talk to them, which makes it even easier to be consumed by the filter. Glass is not just to be thought of as an extension of the smartphone, but as an extension of our individual identities. With Glass we are moving even closer to a society where we watch each other.

2.8.2 LITTLE BROTHERS' SOCIETY


Tony Chalkley develops this thought further with an interesting theory of how we, via social media, actually contribute to surveillance ourselves. This concerns the massive sharing of information about each other and ourselves. We constantly upload statuses, pictures and videos, and we check in to let people know where we are and who we are with. This makes it easy for anybody to watch over us, and all of our leaked information can easily be used by third parties. Still, most of us do it more and more, which makes it worth considering how much we are actually watched by a Big Brother - or whether we are in fact living in a society with an omnopticon, the concept of everybody watching everybody. Chalkley describes this as a society of "Little Brothers and Sisters" instead of the traditional Big Brother; the point being that we are all little brothers and sisters sharing personal information about ourselves and each other, with each other. Mostly we share it via the Internet, where it is accessible to almost everybody. Of course, some services like Facebook give us the choice of making our profiles private, but as soon as something is uploaded to Facebook, it is situated in an Internet database and therefore no longer private. As just mentioned, many people nowadays choose to share pictures, feelings (via status updates), whereabouts (via check-ins) and interests (via follows or likes); one can argue that we try to show ourselves the way we wish to be seen. Most of us share with each other and watch each other; in other words, it could be said that we are all spying on each other. It is, however, important to remember that we only have access to the things that people have allowed to be public, and it is therefore more reasonable to call it watching than spying (Chalkley et al., 2012: 207-210).
A final argument by Chalkley, strongly connected to the Little Brother theory, questions why we accept so much surveillance in our society. He argues that, first of all, it makes us feel safe: even though we might be watched, there is security in the fact that the bad guys are also observed and can therefore be stopped by the watching authorities. Taking up the Little Brother theory, it might be more a question of acceptance than of security. Secondly, we accept surveillance because of what he refers to as the normalization of surveillance. We have become so used to surveillance that it has now become a part of our everyday life. We cannot even imagine our lives without our smartphones and the daily use of the Internet. Most people are exceedingly dependent on the Internet; we might be surveilled on a daily basis, but we cannot stop using the Internet, because we need it for research, work and practical information, as well as for the arguably less necessary things such as socializing and entertainment (Chalkley et al., 2012: 213).

2.8.3 THE THEORIES OF ETHICS


The following chapters will provide different theories on the effects of online surveillance and the ethics involved when discussing privacy. The definition of ethics is therefore relevant when taking a philosophical and ethical angle on the subject. Defining whether something is ethical comes down to the difference between morality and law. It is essential to define whether ethics is about right or wrong, or about choosing between the good and the right.
The theory that puts the good over the right is called teleological; it evaluates actions by their consequences. The two British philosophers Jeremy Bentham and John Stuart Mill developed a theory called utilitarianism: the moral doctrine that we should act in order to produce the greatest happiness for everyone affected by an action (Spinello, 1995: 19-20). In a utilitarian analysis, the focus is on the benefits and costs to all the individuals involved. The goal is, as mentioned, the greatest happiness, but it is essential to remember that it has to be for the greatest number of people. Utility is another term for this type of good; it is the foundation of morality. The consequences are what matter most, even if wrongs must be done in order to achieve the goal - the goal being as much happiness as possible for everyone affected.

When analyzing with the utilitarian theory, we compare benefits against costs to see which weighs more. If the benefits are larger, the action is, from the utilitarian point of view, worth the costs. Happiness is the goal and main interest of utilitarianism, but there is the unavoidable struggle of how to define happiness. This definitional dilemma also makes it hard to determine when the benefits outweigh the costs. According to utilitarianism, happiness is the main benefit, and it supposedly excuses whatever costs arise, as long as the achieved happiness is greater than the costs. But if we cannot measure how great the benefit is (the benefit being the happiness of the largest number of people), it is hard to tell whether the benefit is greater than the costs; thus the theory is a hypothetical one that cannot always be applied with a direct outcome.
The deontological framework is the opposite of the teleological one, and thus of utilitarianism. Here it is about always choosing the right over the good, because thereby the right is chosen over any wrongdoing at all; it is simply a duty-based theory. Immanuel Kant's (1724-1804) moral philosophy is a great example of deontological theory. It is completely opposed to utilitarianism; what Kant wants is for the moral law to be as rational as the laws of physics. There will never be doubt about what is a moral decision once you have decided that you believe in the deontological definition of morality. It is easier to tell right from wrong than to tell what is actually good; as just mentioned, defining happiness and goodness is a serious struggle in the utilitarian field. A flaw in Kant's deontological theory is that the duty can sometimes be truly more hurtful than, for example, a so-called white lie. Take an extreme example where you have to lie to someone in order to save them from cold-blooded murder: here there is no doubt that most people would find it more ethically right to tell a lie than to let an innocent victim be murdered.

William David Ross (1877-1971) therefore came up with a more flexible extension of Kant's duty-based theory, in which he includes prima facie duties. These are duties that can be superseded by a higher obligation; in other words, he makes space for exceptions to the normally preferable duties (Spinello, 1995: 14-32).

2.8.4 FOUCAULT'S DISCIPLINE AND PUNISH


One of today's most influential contemporary sociological theorists, Michel Foucault, was born in France in 1926 and grew up during the German occupation of the country. There is no doubt that the occupation and World War II had a great influence on the mindset of the young Foucault. His fascination with power relations and the human sciences is a fundamental theme in his philosophical, psychological and historical work; most acknowledged is his Surveiller et Punir (in English, Discipline and Punish), known as Foucault's tome on disciplinary society, punishment and, most importantly in this case, surveillance.19

Discipline and Punish, written in 1975, is divided into four parts, each explaining Foucault's theories and thoughts. As mentioned earlier, power is at the core. Foucault's concept of power was at the time considered radical and complex, as "... instead of a simple relationship between the oppressed and the oppressor, power involves a much more multifaceted chain of relations weaved throughout society" (Sheridan, 1977: 196). According to Foucault, power is not exclusive to two parties, but works on a higher level where a great part of society is included through the consequences of power relations. Foucault additionally argues that power therefore shapes the behaviours and actions of all individuals. This is explained in part one, along with the notion that knowledge and power go hand in hand: one cannot exert power without knowledge, while at the same time knowledge always provokes power. Foucault termed this concept "power/knowledge" (Sheridan, 1977).

2.8.5 FOUCAULT AND JEREMY BENTHAM'S PANOPTICON


Moving on to part three of Discipline and Punish, Foucault discusses the English philosopher and social theorist Jeremy Bentham's term "Panopticon". The Panopticon is an institutional building with a watchtower at its centre, designed so that others can be observed without their knowing whether they are being watched or not - as opposed to the omnopticon, where everybody is watching everybody. The name Panopticon is a clear reference to the Greek mythological figure Panoptes, a giant with a hundred eyes.20

In Foucault's account, the Panopticon represents the way discipline and power work in modern society. The discussion starts with a portrayal of methods used against the plague in the seventeenth century - the separation of institutions, constant inspection and continuous registration - methods that all suggest a careful, comprehensive surveillance of citizens to prevent a plague from spreading. As Foucault describes, "there is an exceptional situation [during which surveillance is deployed]: against an extraordinary evil, power is mobilized" (Sheridan, 1977: 200). The surveillance is in this case temporary, due to the peculiar situation, and the watcher is in control.
The Panopticon emerged from this need for surveillance, leaving a permanent instrument for observational techniques. According to Foucault, the panoptic model introduced a new way of thinking in modern society: the government and several institutions began to implement the practice of keeping observations. In Discipline and Punish, Foucault paints a picture of Bentham's perfect prison, a prison that in some ways resembles George Orwell's novel 1984. The watchtower would be located at the centre of the prison, making it possible for the watcher to observe the prisoners' every move without the prisoners knowing. Bentham saw this reform as a perfect society: to maintain authority in a democratic and capitalist society, the public needs to believe that any person could be surveilled at any time. As described by Foucault:

"In order to be exercised, this power had to be given the instrument of permanent, exhaustive, omnipresent surveillance, capable of making all visible, as long as it could itself remain invisible" (Sheridan, 1977: 201).
Bentham's hope was that, with time, people would adapt to this kind of structure and as a result internalize the panoptic tower and eventually police themselves:

"He who is subjected to a field of visibility, and who knows it, assumes responsibility for the constraints of power; he makes them play spontaneously upon himself; he inscribes in himself the power relation in which he simultaneously plays both roles; he becomes the principle of his own subjection" (Sheridan, 1977: 202-203).
This change in power relations provoked Foucault, as he claimed privacy to be a basic human right. His main point is that surveillance constructs inequality and gives a huge power advantage to the watcher, whereas the watched becomes a victim. He compares the prison surveillance, which gives the guards almost unlimited power over the convicts, to the way the government watches its citizens and will thereby always hold information that benefits it in keeping its supreme control and power (Chalkley et al., 2012: 203-204). One can argue that this form of control has been aided in our own society by new technological advancements that allow the government and big corporations to track any movement and behaviour.

2.9 ANALYSIS

The purpose of the analytic section is to provide an understanding of the problem statement: What are the possible effects between communication technology and the surveillance state? We will apply the above-mentioned theories to selected cases in order to give a satisfactory perspective on mediated life and the technology that comes with it.

2.9.1 APPLICATION OF FOUCAULT ON THE UNITED STATES


The system of espionage being thus established, the country will swarm with informers,
spies,
delators, and all the odious reptile tribe that breed in the sunshine of despotic power. The
hours of the most unsuspected confidence, the intimacies of friendship or the recesses of
domestic retirement, will afford no security. The companion whom you must trust, the friend
in
whom you must confide, are tempted to betray your imprudence; to misrepresent your
words;
to convey them, distorted by calumny, to the secret tribunal where suspicion is the only
evidence that is heard. (Rep. Edward Livingston, Annals, 5th U.S. Congress, 1798)21
Judging from the citation above by Rep. Edward Livingston, numerous Americans have
seen
surveillance and hidden observation as an insult to their key beliefs. When comparing this
view with
the warrantless wiretaps used during the George W. Bush presidency, the shift experienced
within
American society is rather clear.
Foucault's panoptic society and the concept of passive bodies through the use of surveillance are, however, the main focus of this chapter. To briefly summarize, Foucault stated that a society
of
passive bodies would emerge thanks to constant surveillance. This could be relevant to
consider
when looking at the use of closed-circuit television (CCTV). Today, CCTV can be found in
almost
every place imaginable: businesses, institutions, public spheres, etc. The purpose of
mounting these
cameras varies from supposedly preventing criminality to catching traffic law violators at
intersections. Examples of CCTV are also present within the household, as many
homeowners have
chosen to install cameras to add security to their property. Even if CCTV were not as widespread as it is today, we still think it would be meaningful to study CCTV to show how far surveillance technology has advanced since Foucault's death. Foucault did not have the chance to consider surveillance with these new advances and what they meant for the panoptic model. This
unavoidable exclusion is especially obvious today when considering the commonness of
CCTV
cameras. It is estimated that there is one surveillance camera for every 96 people in the
United
States.22
So, how closely does American society mirror the panoptic model with CCTV in the
picture? An
article to help address this question, "How Closed-Circuit Television Surveillance Organizes
the Social: An Institutional Ethnography,"23 by Kevin Walby, a professor of sociology, sheds light on the use of CCTV in the United States and Canada. Immediate validation of some of Foucault's theory
is in the fact that there has been a great increase in the use of CCTV, verifying Foucault's
idea that
surveillance will continue to increase. This increasing tendency is largely due to
technology. The
ones watching the cameras do not have to be in the same area, or even the same country, as the place where
the watching is taking place. Walby writes, "It is now common for banks and other commercial entities to outsource their video monitoring to settings situated thousands of kilometers away." This again strengthens Foucault's vision of a society with intense surveillance. One feature of the Panopticon that Foucault emphasises is the prison's function of individualization. Each prisoner is supposed to receive detailed attention so that the person's needs are met. If Foucault's beliefs are correct, then this individualization should have spread to American society at large.
Another professor, Graham Sewell, argues that this individualization of the panoptic model definitively exists in the United States. He writes, "By scrutinizing our every activity, surveillance places us in categories - for example, criminals, consumers, patients, or workers - that are easily understood by our peers and ourselves alike." This argument manifests itself in racial profiling, which casts great suspicion on minorities. For example, "suspicious names" (the majority being
Muslim
names) are mentioned on a "No Fly List" in the United States. These people are particularly
watched
at the airport (Shawki, 2009). The phenomenon of surveillance-labeling is expressed
through
several patterns in the United States. An obvious example is security at any regular
shopping mall or
boutique. Security officers openly admit that they do not treat everyone as being equally
likely to
commit a crime. Walby interviewed several security officers who all admitted that they
customized
their way of surveillance by only observing certain types of people. Walby describes that the
security
guards he interviewed "... do not target suspicion equally towards all shoppers; rather, their
informal
watching rules direct intensified surveillance at racialized minorities, single mothers,
persons
receiving income assistance, and other socially constructed categories." The security guards operate off categories that give certain people special attention (Walby, 2005). The Panopticon
model here
gains validity as people are treated in a heterogeneous manner. The United States has not
entered
an age where everyone is treated with the same respect. Managers can also track their employees' every movement with several kinds of management information systems. A maldistribution within power relations occurs.
Graham Sewell, a professor of marketing and philosophy, argues that the "vision of elite groups exercising control using management information systems also bears a striking resemblance to the principles of panoptic surveillance." He continues:
It bears a striking resemblance because managers have such complete control over their
subordinates. Virtually all of their activities can be checked to make sure they are
performing
their duties correctly. (Sewell, 2006)
It is important to emphasize that, of course, not every single move of the employees is monitored, but the possibility alone could have a great influence on the employees' behaviour. Hypothetically,
and according to Foucault, these people would have adopted the fear of potential negative
consequences due to the fact that they know they are being observed. Therefore, there is no actual need for surveillance, since they act as if they are always being watched. This is a direct connection to the arguments made in Discipline and Punish.
So far, several arguments presented in Foucault's theory concerning surveillance have been validated. But modern American society does not completely comply with Foucault's hypothesis. The problem is not in any specific detail, but is found in an all-encompassing theme. In Discipline and Punish, Foucault predicts that essentially all of society will function like the Panopticon.
This is, to say the least, a bold claim. We cannot, of course, reject the idea that society as Foucault describes it is yet to come, and that the Panopticon is indeed our future. But at this point one cannot plausibly argue that the United States has become the new Panopticon.
Foucault's book argues that prisoners in a Panopticon are constantly aware that someone
may be
watching them. This awareness triggers them to modify their behaviour, ultimately
becoming dead,
passive bodies. However, CCTV is an obvious example of how Foucault's theory fails. Walby
claims that these CCTV cameras have become so common and discreet that Americans no
longer
register that someone might be watching them. He explains:
The prevalence of discrete and mundane surveillance practices does not create the
automatic
functioning of power that Foucault had envisioned. For instance, CCTV cameras are not
noticed by the people who fall under the optical gaze. The presence of cameras does not
directly alter people's behaviour. (Walby, 2005)
American citizens do not change their behaviour in response to the cameras; an explanation could
be that technology has become so refined and such a big part of the culture that Americans
have, to an extent, forgotten about CCTV. For CCTV to function as it would in a panoptic
society, it would
have to be much more obvious, and exert much more power over individuals.
Another reason why Foucault's concepts of surveillance are inconsistent in today's world
comes
from another reflection by Walby. In the Panopticon, the guards simply cast a stare on the
inmates.
The inmates, on the other hand, must adjust their behaviour to avoid punishment. Yet in a
society in
which CCTV is discreet and Americans do not constantly think about its presence, it is
actually those
that do the gazing who alter their own actions. Walby elaborates:
"... it is the CCTV operators' watching behaviour which is normalized along institutional
lines by
being behind the camera at Suburban Mall, not the shoppers' behaviour by being pored
over
by the all-seeing eye." (Walby, 2005)
This is a twist to the Panopticon model. It is supposed to be the shoppers, not the guards,
who
change. Yet it is the guards who change regarding their view on specific people as they see
them
with greater suspicion than before they began their employment. The application of the Panopticon to American society has therefore not been completely successful, since it is actually those who are observing who have been influenced so much by surveillance that they change their approaches and behaviours. This is a strong contrast to Foucault's visualization: a complete reversal in power relations.

2.9.2 GOOGLE'S PANOPTICON

As mentioned above, Foucault's interpretation of the Panopticon and its ensuing effects was not entirely applicable when looking at CCTV. But what if Kevin Walby's concern became a reality? What if these cameras recording our every move were no longer subtle and discreet, but more in-your-face?
Glass is expected to launch in early 2014, and is marketed as making augmented reality a part of our lifestyle. "Imagine your brain being augmented by Google," Google CEO and co-founder Larry Page said in a 2004 interview (Rhrict, 2013). By now, this may no longer be just imagination.
Many great things can be said about Glass: this new kind of technology brings endless opportunities, such as doctors filming a medical operation for educational purposes, walking directions on the spot, etc. But everything comes with a price. Besides the $1,500 the glasses cost (Rhrict, 2013), Glass also gathers real-time information about our every move, every conversation, every text message. What we see, what we hear, where we go and what we do, be it at home, at work or in public, Google will know, and possibly use to their benefit. As
mentioned before, publicists and corporations have been data mining for as long as it has been an option, and we as consumers already give away most of this information through the use of CMC, apps, GPS devices, etc.
Glass does not essentially do anything new, but by wearing Glass, we are basically offering up our personal data on a silver platter. As a consequence, we are inviting companies, not to forget the government, to view our thoughts, patterns, and consumer habits. When using Glass, we
are unavoidably inviting everyone into our private lives even further, literally placing them
directly in
front of our faces 24/7. Knowing this, Foucault's vision of a society of passive bodies may not sound so far away. We would all be monitoring each other, ultimately doing fieldwork for Google.
This has raised numerous alarming questions concerning privacy. The Australian privacy commissioner and 36 other data protection authorities have written an open letter to CEO Larry Page, raising concerns about Glass's privacy settings (Essers, 2013).
Jennifer Stoddart, Canada's privacy commissioner, signed the letter. One of their main
worries in the
letter is that people can use Glass to film and record others:
"We are writing to you as data protection authorities to raise questions from a privacy perspective about the development of Google Glass. (...) Fears of ubiquitous surveillance of individuals by other individuals, whether through such recordings or through other applications currently being developed, have been raised. Questions about Google's collection of such data and what it means in terms of Google's revamped privacy policy have also started to appear. (...) The details of how Google Glass operates, how it could be used and how Google might use the data the technology collects have so far largely come from media reports that contain a great deal of speculation." (Essers, 2013)
In addition, the authorities strongly urged Page to "engage in a real dialogue with data
protection
authorities about Glass." The letter also questioned Google on how the gathered
information is
shared with third parties (see Appendix 3), if Google had done a privacy risk assessment,
and if they
would share the outcomes. Google's vice president of public policy and government relations, Susan
Molinari, replied on behalf of Google on June 7th. Several questions raised in the Congress's letter were not answered in Google's response, including a request for examples of what Google would do in order to secure the privacy of non-Glass users.
"Use of Google Glass will be governed by the terms of the Google Privacy Policy. No
changes
to the Google Privacy Policy are planned for Glass," the letter states. In response to the
Congress's question: "What proactive steps is Google taking to protect the privacy of non-users when Google Glass is in use?" Google responded, "We have built some social signals
into the way Glass is used. These signals help people understand what users are doing, and
give Glass users means for employing etiquette in any given situation." (Essers, 2013)
Google's response letter generally highlights the device's positive features and evaluates the
existing regulations as sufficient. Congressman Joe Barton later commented:
"I am disappointed in the responses we received from Google. There were questions that
were not adequately answered and some not answered at all. Glass has the potential to
change the way people communicate and interact. When new technology like this is
introduced that could change societal norms, I believe it is important that people's rights be
protected and vital that privacy is built into the device..." (Essers, 2013)
Google has tried to convince Barton by demonstrating Glass to him in person, but without much impact.

2.9.3 THE INTEREST COLLAR


"I fear the day technology will surpass our human interaction. The world will have a generation of idiots." - a quote commonly attributed to Albert Einstein (1879-1955).
Glass is not only a new way of surveilling; it is also a way of being more enclosed in one's own world. It could seem that Einstein's fear (the quote above) is coming true, as we, especially the younger generation, seem to be more interested in being social online than in real life.
In Pariser's theory, The Filter Bubble, it is argued that we are in fact being decentralized into
individual bubbles. These bubbles function as a collar-tag that lets everybody know what your specific interests are. It is a personal ID, unique to the person behind it. As Pariser describes in his book:
"The future of the web is about personalization ... now the web is about me. It is about weaving the web together in a way that is smart and personalized for the user."
(Pariser, 2011: 8)
Personalizing through the web has become a major factor in the mediated society. Glass is the ultimate product of this evolution. This next step in the evolution of technology has the potential to become the direct channel in which the online user's activities are gathered. Glass will be the frontier for many companies that want to advertise and get their consumers' attention through Google, which, of all big internet companies, is well known for making its bread and butter through vast amounts of advertisement. Thus there is no doubt that Google will take advantage of their new product's abilities and incorporate ads within the product's interface. It is logical to think that Google will merge the personalized data into Glass, as they have done with all of their other products. As it is described in the Filter Bubble theory: "You are getting a free service, and the cost is information about you." (Pariser, 2011: 6). The fact that Google is most likely to deliver ads through Glass is also emphasized in the user agreements of the product, where it is clearly stated that some features require the user to have a Google account.24
Pariser believes the personalization of user data is both an advantage and a disadvantage. It
can often happen that we get a friend invitation from a program on Facebook that is not relevant to us. Regardless, we are still going to get the invitation, because the user data that we provided by being online has generated a demographic stamp on our person; a stamp that tells the program that we should get this invitation (Pariser, 2011: 187-188). Likewise, it is also plausible that a Glass user will keep getting ads about computers if that is what they search for when online. The user does not have a say in whether or not they will be getting ads about computers. The automatic software algorithms have already decided this, because they have analyzed the personalized data and found the user relevant enough to receive ads about computers. Pariser describes these automated programs as "advertars" (Pariser 2011: 190). The advertar has been created solely for the purpose of commercial use. The advertar is a direct product of the personalized data, and a lot of companies are making a huge effort in trying to "de-anonymize the Web" (Pariser, 2011: 111).
Whatever the individual does online will be reflected in Glass.
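The behaviour of such an "advertar" can be caricatured in a few lines of Python. This is purely illustrative: the categories, the ad inventory, and the most-frequent-category rule are invented for this sketch and are not Pariser's or Google's actual mechanism.

```python
from collections import Counter

# Hypothetical ad inventory, keyed by interest category (invented for illustration).
AD_INVENTORY = {
    "computers": "Laptop sale - 20% off",
    "travel": "Cheap flights to Paris",
    "fitness": "Running shoes",
}

def dominant_interest(search_history):
    """The user's most frequent search category becomes their demographic 'stamp'."""
    return Counter(search_history).most_common(1)[0][0]

def pick_ad(search_history):
    """The user has no say: the algorithm picks the ad from their profile."""
    return AD_INVENTORY.get(dominant_interest(search_history))

history = ["computers", "travel", "computers", "computers"]
print(pick_ad(history))  # prints: Laptop sale - 20% off
```

The point of the sketch is the one Pariser makes: the selection happens entirely on the provider's side, driven by accumulated personal data, with no input from the user at decision time.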
When we look at their website, Glass is presented as a new and smart technical object
which makes
life more enriching and ergonomic for the user. The process of gaining access to
information has
never been easier than now. When we wear Glass, we wear our information. The process of
physically going to a computer to retrieve the information is over. Now the user only has to ask Glass for the information and it is given. It seems that Glass could very well function as an extension
of the human body. It gives us a new way of interacting online, as well as on the social level. The new possibilities Glass represents are revolutionary for the modern age, but are we free? Is it possible to use this technology without losing one's identity? This is a very important question in Pariser's Filter Bubble theory. Are we in fact taking on the identity that is given to us by the Internet, and neglecting the original blueprint of our personality?
Does Glass hold the potential to arrange a new identity for its users? When one has to maintain a working Google account in order for Glass to function properly, there is no doubt that the content and ads presented on the user's computer will also be presented on Glass, on the basis of the user's personal data. It is, however, clear that Pariser wants more transparency from companies holding vast amounts of personal data. The big online companies function as relevance-seeking machines, fuelled with vulnerable personal data (Pariser 2011: 229). They must begin to realize that they have a much more important role in society than what they initially intended. It may be that Google's motto is "Don't be evil," but what if they unintendedly happened to be evil? As Pariser explains:
I once explained to a Google search engineer that while I did not think the company was currently evil, it seemed to have at its fingertips everything it needed to do evil if it wished. He smiled broadly. "Right," he said. "We are not evil. We try really hard not to be evil. But if we wanted to, man, could we ever!" (Pariser, 2011: 147)
Pariser is definitely trying to send a message to his readers: it may be that individuals find their
online life easier with the help of the personalization of their data. However easy it may seem, though, there is no guarantee that their personal data will not be used for bad purposes. If Google wants to, it can do whatever it wants with an individual's data, and the individual cannot do anything about it. If an individual uses Google, that person has agreed to Google's terms and conditions, and if Google wants to use this personal data, it will have the right to do so, without the person having any say in it.
The Internet has changed radically from being a free source of information flow, with a
decent
amount of anonymity for its users, to becoming a possible surveillance and marketing tool
for big
organisations. This is a general concern for Pariser. He mentions that although he likes the
shortcuts from Google, he cannot ignore the fact that this new era of personalization is
completely
invisible to the general user. Thus it becomes harder to know what information the Internet
has on us as individuals, as well as how and where it applies that personal information. How
are we able to
trust those companies that have access to all this information?:
While the Internet has the potential to decentralize knowledge and control, in practice it is
concentrating control over what we see and what opportunities we are offered in the hands
of
fewer people than ever before. (Pariser, 2011: 218)
As Pariser says:
The Internet may know who we are, but we do not know who it thinks we are or how it is
using
that information. Technology designed to give us more control over our lives is actually
taking
control away. (Pariser 2011: 218-219)
To cast some more light on the consequences of the personalization of data we have chosen
to deal
with a story about a pregnant teenage daughter. She had an account at Target25 where she
was
searching for, and buying, specific articles that women are most likely to buy whilst pregnant. What she did not know, however, was that with each search she did online, she was revealing information that she was pregnant. As a consequence of her online behaviour, she
began to
receive baby-related commercials from Target in the shape of coupons and online ads.
Eventually
her dad got angry and called Target to ask why they sent all this baby-related material to his daughter when she was still a kid attending high school. In the meantime he discovered that his daughter was actually pregnant, and he had to apologize to the Target employee a few days later. So Target actually predicted the daughter's pregnancy before her father found out about it. They were able to do it because they used exactly the same system as Google: they required users to have an account at their company, and then looked at what those users searched for when they were online. In this case the pregnant daughter was looking at certain products related to pregnant women. If a woman bought more of these products, then there was a great chance that she was pregnant, and the system would automatically kick in (Hill 2012).
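The kind of purchase-based prediction described in the Target case can be caricatured as a simple scoring rule. This is entirely hypothetical: the products, weights, and threshold below are invented for illustration, and Target's real model, as reported by Hill (2012), used far more signals.

```python
# Invented example weights: items loosely associated with pregnancy.
PREGNANCY_SIGNALS = {
    "unscented lotion": 2.0,
    "prenatal vitamins": 5.0,
    "cotton balls": 1.0,
    "large tote bag": 1.5,
}

def pregnancy_score(purchases):
    """Sum the weights of purchased items that correlate with pregnancy."""
    return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in purchases)

def should_send_baby_coupons(purchases, threshold=6.0):
    """The system 'kicks in' once the score crosses a threshold."""
    return pregnancy_score(purchases) >= threshold

print(should_send_baby_coupons(["prenatal vitamins", "unscented lotion"]))  # prints: True
```

Even this toy version shows why the father was caught off guard: no single purchase reveals anything, but the accumulated pattern crosses a threshold the shopper never sees.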
The case of the pregnant daughter is an excellent example of the positive and negative sides
of
personalized data. It is nice that she can get all the relevant products that she would need while pregnant, but on the other hand she also receives a mark that places her in that category. Her
search results have been collected and transformed into a personal data pattern, which Target uses. One might say that Target has targeted her. When looking at Target's logo, it looks remarkably like an actual target (see Appendix 1). Speaking in a symbolic manner, we could say that all the online companies, including Target, take aim at their users - a target. All individuals risk being targeted, but we are rarely conscious of it, and therefore not very cautious about it. Whatever one's activities online are, these activities set us up as targets for the major Internet companies. This way we make ourselves targets by searching and leaking personal data, simply by being ourselves online.

2.9.4 SOCIAL SEMIOTICS ON GLASS


Consider a scenario from 2012, where an American tourist, Professor Steve Mann, was in France, vacationing with his family. Dr. Mann is the inventor and wearer of the sight-enhancing Eyetap Digital Eyeglass (Mann, 2012), which helps his vision. Glass was a completely new invention at the time, and is still not available in Europe, but since Mann was an American tourist, the perpetrators might have thought he was wearing Glass, simply based on what they had heard or read about Glass. Dr. Mann's Eyetap is attached to his skull and is only removable with the use of special tools, and, on top of that, it looks quite similar to Glass.
The appearance of an Eyetap closely resembles that of Glass (see Appendix 2).
It is very likely that the perpetrators did not mistake it for Glass itself, but for the idea of Glass: glasses that can film, with which the user is practically spying on everyone around him wherever he goes. The biggest difference in the looks of Glass and Eyetap is that the camera attached to Glass is hardly visible, while on Dr. Mann's Eyetap the camera is
very noticeable and more likely to make the surroundings apprehensive. Glass really contains so much more than a camera; it is like having a tiny computer attached to your glasses, but it is quite plausible that the perpetrators only thought as far as: "camera, surveillance, lack of privacy, get that man out of here!" A camera is a symbol of spying and surveillance, and makes some people act in certain ways, based on their associations. A pair of glasses is usually a symbol of intelligence, and this was probably also considered by Google when they designed their new device. Glasses generally give an association with someone smart, which is exactly what Google is aiming for.
Dr. Mann initially received some suspicious questions from an employee who wanted to know what those glasses were about, but Mann happened to have his medical papers with him, stating that these glasses were simply digital eyeglasses, which calmed the employee down. Then, while Dr. Mann was eating his food, not just one but altogether three perpetrators verbally insulted him, tore up his medical papers, and ended up physically assaulting him, pushing him out of the restaurant and, consequently, damaging his Eyetap glasses.
This case is interesting on many levels, but using social semiotics we can actually try to see it from
the perpetrators' point of view. Why would they insult a man because of the glasses he is wearing? It must have been an honest mistake that they thought he was spying on them with his Eyetap camera, but why would they assume that? Why did it offend them so much? And why did they not believe Mann's medical papers, which proved he was just wearing an Eyetap? This all might have to do with the modern society that we live in, where surveillance has become a factor that people
apparently worry about, especially when it is not just at a workplace, where it is somewhat more reasonable to have cameras installed to watch the assistants as well as the customers. To be surveilled while eating at McDonald's is an invasion of privacy.
According to Saussure's semiotic theory, a sign (a sign being anything that communicates; it can be a picture, a word, etc.) comprises a signifier, which is the physical existence of the sign, and a signified, which is the mental concept: the way people read the sign based on their personal background. The interplay between the signifier and the signified is called signification, and its outcome is the external meaning (Fiske, 1982: 42-49). Applying Saussure's theory to this case shows how the society that we live in, with increasing technological devices and surveillance all over, caused these perpetrators to be completely certain that Dr. Mann's glasses were some sort of spying ware, possibly Glass. Though this case took place in France, and our main focus in this project is the United States of America, it has to be noted that the technological revolution has had a huge impact on the whole world, and especially the western part of it, meaning Europe and North America. Mann's incident at McDonald's illustrates how these perpetrators felt so threatened by those digital glasses that they became aggressive, when really it was all because of their prejudices. If we, as Saussure proposed with his signification, see, read, and experience the world around us based on our background knowledge and previous experiences, this would be a plausible explanation for why the perpetrators reacted so strongly to Dr. Mann's Eyetap.
This leaves us speculating about what the consequences may be once people start wearing Glass
wherever they go. Is there going to be a group of anti-Glass people who assault Glass users, or is it going to be forbidden to wear them in most restaurants, shops, workplaces, clubs, etc.? Will there be signs on the doors saying "No Glass Allowed," just like there are "No Smoking" signs? Would it not be ethically right to keep some sort of privacy? Though we live in a society where most people are leaking all kinds of information about themselves, it is at least up to ourselves to choose what we share. If Glass is not controlled at all, we will have no idea of when we are not just being watched but actually filmed, who is filming and, more importantly, what they are going to do with it. In order to avoid loose speculation, we will, after a brief overview of ethics, apply the different ethical theories to put a more philosophical angle on this case.

2.9.5 ETHICAL THEORY OF GLASS


In the following section of the analysis part of this project, we will apply the aforementioned teleological utilitarian theory of Jeremy Bentham and William D. Ross's extension of the deontological pluralism theory of Immanuel Kant, to give a better understanding of the ethical aspects that come into play when they are applied to Glass.
When looking at the ethical rights and wrongs of using Google Glass in public, it is
imperative
that one keeps in mind that the laws, which are established in any given society, such as the
4th
Amendment,27 cannot be assumed to be morally acceptable. Additionally, individual persons and corporations have different moral obligations in this sense: the individual is responsible for their own actions, while the corporation must take its longevity, name, and image into account in its actions (Spinello, 1995: 17).
With the rapid introduction of more and more devices that both gather and give us digital information on the go, there is a lack of agreed ethical norms for using these devices in our everyday life, as such norms have not yet had time to manifest. Is it wrong to sit and text our friends on our new smartphone while having a conversation with our parents? Is it okay to update our Instagram gallery when we are dining at a restaurant with our friends? In order to find out what is deemed right or wrong when using Glass in public, more specifically its ability to record video without anyone but the user knowing, we will apply the teleological utilitarian theory that strives for "the greatest happiness for the greatest number" (Spinello 1995: 19). We will then, in turn, use William D. Ross's extension of the deontological pluralism theory of Immanuel Kant, which is firmly opposed to utilitarianism (Spinello 1995: 24) and emphasizes that we should "Do unto others as you would have them do unto you" (Spinello, 1995: 28), applying both to the above case: Dr. Mann's incident at McDonald's.

CHAPTER 3
DMD & ELECTRONIC OPERATION

3.1. DMD LIGHT SWITCH


The DMD light switch is a member of a class of devices known as microelectromechanical systems (MEMS). Other MEMS devices include pressure sensors, accelerometers, and microactuators. The DMD is monolithically fabricated by CMOS-like processes over a CMOS memory. Each light switch has an aluminum mirror, 16 μm square, that can reflect light in one of two directions depending on the state of the underlying memory cell. Rotation of the mirror is accomplished through electrostatic attraction produced by voltage differences
developed between the mirror and the underlying memory cell. With the memory cell in the
on (1) state, the mirror rotates to +10 degrees. With the memory cell in the off (0) state, the
mirror rotates to -10 degrees. A close-up of DMD mirrors operating in a scanning electron
microscope (SEM) is shown in Figure.

Fig. 3.1 SEM video images of operating DMD


By combining the DMD with a suitable light source and projection optics, the mirror
reflects incident light either into or out of the pupil of the projection lens by a simple beam-steering technique. Thus, the (1) state of the mirror appears bright and the (0) state of the
mirror appears dark. Compared to diffraction-based light switches, the beam-steering action
of the DMD light switch provides a superior tradeoff between contrast ratio and the overall
brightness efficiency of the system.
By electrically addressing the memory cell below each mirror with the binary bit
plane signal, each mirror on the DMD array is electrostatically tilted to the on or off
positions. The technique that determines how long each mirror tilts in either direction is
called pulse width modulation (PWM). The mirrors are capable of switching on and off
more than 1000 times a second. This rapid speed allows digital gray scale and color
reproduction. At this point, DLP becomes a simple optical system. After passing through
condensing optics and a color filter system, the light from the projection lamp is directed at
the DMD. When the mirrors are in the on position, they reflect light through the projection
lens and onto the screen to form a digital, square-pixel projected image.

Fig. 3.2 DMD optical switching principle


Three mirrors efficiently reflect light to project a digital image. Incoming light hits the three
mirror pixels. The two outer mirrors that are turned on reflect the light through the
projection lens and onto the screen. These two "on" mirrors produce square, white pixel
images. The central mirror is tilted to the "off" position. This mirror reflects light away from
the projection lens to a light absorber so no light reaches the screen at that particular pixel,
producing a square, dark pixel image. In the same way, the remaining 508,797 mirror pixels
reflect light to the screen or away from it. By using a color filter system and by varying the
amount of time each of the 508,800 DMD mirror pixels is on, a full-color, digital picture is
projected onto the screen.

3.2. GRAYSCALE AND COLOR OPERATION


Grayscale is achieved by binary pulse width modulation of the incident light. Color is
achieved by using color filters, either stationary or rotating, in combination with one, two,
or three DMD chips. The DMD light switch is able to turn light on and off rapidly by the
beam-steering action of the mirror. As the mirror rotates, it either reflects light into or out
of the pupil of the projection lens, creating a burst of digital light pulses that the eye
interprets as an analog image. The optical switching time for the DMD light switch is ~2 μs.
The mechanical switching time, including the time for the mirror to settle and latch, is
~15 μs. The technique for producing the sensation of grayscale to the observer's eye is called
binary pulse width modulation. The DMD accepts electrical words representing gray levels
of brightness at its input and outputs optical words, which are interpreted by the eye of the
observer as analog brightness levels.

The details of the binary pulse width modulation (PWM) technique are illustrated in
Figure. For simplicity, the PWM technique is illustrated for a 4-bit word (2⁴, or 16, gray
levels).

Fig. 3.3 DMD binary pulse width modulation


Each bit in the word represents a time duration for light to be on or off (1 or 0). The
time durations have relative values of 2⁰, 2¹, 2², 2³, or 1, 2, 4, 8. The shortest interval (1)
is called the least significant bit (LSB). The longest interval (8) is called the most significant
bit (MSB). The video field time is divided into four time durations of 1/15, 2/15, 4/15, and
8/15 of the video field time. The possible gray levels produced by all combinations of bits in
the 4-bit word are 2⁴, or 16, equally spaced gray levels (0, 1/15, 2/15 . . . 15/15). Current
DLP systems are either 24-bit color (8 bits or 256 gray levels per primary color) or 30-bit
color (10 bits or 1024 gray levels per primary color).
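The mapping from a binary word to mirror on-time can be sketched in a few lines of Python (the function names are illustrative, not part of any DLP API):

```python
def pwm_schedule(gray, bits=4):
    """Per-bit (duration, on) pairs for a binary PWM word.

    gray -- integer gray level, 0 .. 2**bits - 1.  Bit b of the word
    keeps the mirror 'on' for 2**b time units (LSB = 1, MSB = 8 here),
    so the total on-time is proportional to the gray level itself.
    """
    assert 0 <= gray < 2 ** bits
    return [(2 ** b, bool(gray >> b & 1)) for b in range(bits)]

def on_time(gray, bits=4):
    """Fraction of the video field time the mirror spends 'on'."""
    total = 2 ** bits - 1                      # 15 time units for 4 bits
    lit = sum(d for d, on in pwm_schedule(gray, bits) if on)
    return lit / total

print(on_time(5))   # gray level 5 = binary 0101 -> (1 + 4)/15, about 0.333
```

The eye integrates these pulses, so a gray level of 5 out of 15 is seen as one third of full brightness.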
In the simple example shown in Figure, spatial and temporal artifacts can be
produced because of imperfect integration of the pulsed light by the viewer's eye. These
artifacts can be reduced to negligible levels by a bit-splitting technique. In this technique,
the longer-duration bits are subdivided into shorter durations, and these split bits are
distributed throughout the video field time.
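As a minimal sketch of the bit-splitting idea, the fragment below splits only the MSB of a 4-bit field into evenly distributed slices (real formatters split several bits; the scheme here is mine):

```python
def split_msb_timeline(bits=4, slices=4):
    """Field timeline (list of bit indices) with the MSB bit-split.

    Instead of lighting the MSB as one long 2**(bits-1)-unit burst, its
    duration is cut into `slices` equal pieces interleaved with the
    shorter bits, so no long light pulse remains for the eye to notice.
    """
    msb = bits - 1
    piece = 2 ** msb // slices              # e.g. 8 units -> 4 slices of 2
    others = []                             # the shorter bits stay contiguous
    for b in range(msb):
        others.extend([b] * (2 ** b))       # 1 + 2 + 4 = 7 units
    chunk = len(others) // slices
    timeline = []
    for i in range(slices):                 # interleave the MSB pieces
        timeline.extend([msb] * piece)
        timeline.extend(others[i * chunk:(i + 1) * chunk])
    timeline.extend(others[slices * chunk:])
    return timeline

print(split_msb_timeline())   # the 8-unit MSB now appears as four 2-unit slices
```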

3.3. DMD CELL ARCHITECTURE AND FABRICATION


The DMD pixel is a monolithically integrated MEMS super-structure cell fabricated over a
CMOS SRAM cell. An organic sacrificial layer is removed by plasma etching to produce air
gaps between the metal layers of the superstructure. The air gaps free the structure to rotate
about two compliant torsion hinges. The mirror is rigidly connected to an underlying yoke.
The yoke, in turn, is connected by two thin, mechanically compliant torsion hinges to
support posts that are attached to the underlying substrate.

Fig. 3.4 DMD pixel exploded view


The address electrodes for the mirror and yoke are connected to the complementary
sides of the underlying SRAM cell. The yoke and mirror are connected to a bias bus
fabricated at the metal-3 layer. The bias bus interconnects the yoke and mirrors of each
pixel to a bond pad at the chip perimeter. An off-chip driver supplies the bias waveform
necessary for proper digital operation. The DMD mirrors are 16 micrometer square and
made of aluminum for maximum reflectivity. They are arrayed on 17 micrometer centers to
form a matrix having a high fill factor (~90%). The high fill factor produces high efficiency
for light use at the pixel level and a seamless (pixelation-free) projected image.
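The quoted ~90% fill factor can be checked with one line of arithmetic, assuming it is simply the ratio of mirror area to cell area (the mirror via and gaps are ignored, so this slightly overstates the optical value):

```python
# Fill factor of a 16 um mirror on a 17 um pitch.
mirror = 16.0   # mirror edge length, micrometres
pitch = 17.0    # centre-to-centre mirror spacing, micrometres

fill_factor = (mirror / pitch) ** 2   # mirror area over cell area
print(f"{fill_factor:.1%}")           # 88.6%, i.e. the quoted ~90%
```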
Electrostatic fields are developed between the mirror and its address electrode and
the yoke and its address electrode, creating an efficient electrostatic torque. This torque
works against the restoring torque of the hinges to produce mirror and yoke rotation in the
positive or negative direction. The mirror and yoke rotate until the yoke comes to rest (or
lands) against mechanical stops that are at the same potential as the yoke. Because geometry
determines the rotation angle, as opposed to a balance of electrostatic torques employed in
earlier analog devices, the rotation angle is precisely determined.

The fabrication of the DMD superstructure begins with a completed CMOS memory
circuit. A thick oxide is deposited over metal-2 of the CMOS and then planarized using a
chemical mechanical polish (CMP) technique. The CMP step provides a completely flat
substrate for DMD superstructure fabrication, ensuring that the projector's brightness
uniformity and contrast ratio are not degraded.
Through the use of six photo mask layers, the superstructure is formed with layers of
aluminum for the address electrode (metal-3), hinge, yoke and mirror layers and hardened
photo-resist for the sacrificial layers (spacer-1 and spacer-2) that form the two air gaps. The
aluminum is sputter-deposited and plasma-etched using plasma-deposited SiO2 as the etch
mask. Later in the packaging flow, the sacrificial layers are plasma-ashed to form the air
gaps.
The packaging flow begins with the wafers partially sawed along the chip scribe
lines to a depth that will allow the chips to be easily broken apart later. The partially sawed
and cleaned wafers then proceed to a plasma etcher that is used to selectively strip the
organic sacrificial layers from under the DMD mirror, yoke, and hinges. Following this
process, a thin lubrication layer is deposited to prevent the landing tips of the yoke from
adhering to the landing pads during operation. Before separating the chips from one another,
each chip is tested for full electrical and optical functionality by a high-speed automated
wafer tester. Finally, the chips are separated from the wafer, plasma-cleaned, relubricated,
and hermetically sealed in a package.

Fig. 3.5 package of DMD chip

An 848 × 600 Digital Micromirror Device. The central, reflective portion of the
device consists of 508,800 tiny, tiltable mirrors. A glass window seals and protects the
mirrors.

3.4. ELECTRONIC OPERATION


The DMD pixel is inherently digital because of the way it is electronically driven. It is
operated in an electro statically bistable mode by the application of a bias voltage to the
mirror to minimize the address voltage requirements. Thus, large rotation angles can be
achieved with a conventional 5-volt CMOS address circuit.
The pulse width modulation scheme for the DMD requires that the video field time
be divided into binary time intervals or bit times. During each bit time, while the mirrors of
the array are modulating light, the underlying memory array is refreshed or updated for the
next bit time. Once the memory array has been updated, all the mirrors in the array are
released simultaneously and allowed to move to their new address states.
This simultaneous update of all mirrors, when coupled with the PWM bit-splitting
algorithm produces an inherently low-flicker display. Flicker is the visual artifact that can
be produced in CRTs as a result of brightness decay with time of the phosphor. Because
CRTs are refreshed in an interlaced scan-line format, there is both a line-to-line temporal
phase shift in brightness as well as an overall decay in brightness. DLP-based displays have
inherently low flicker because all pixels are updated at the same time (there is no
line-to-line temporal phase shift) and because the PWM bit-splitting algorithm produces
short-duration light pulses that are uniformly distributed throughout the video field time (no
temporal decay in brightness).
Proper operation of the DMD is achieved by using the bias and address sequence
shown in Figure and detailed in Table

Fig. 3.6 DMD addresses and reset sequence

S. No.  Reset Sequence        Operation
1.      Memory ready          All memory cells under the DMD have been loaded with
                              the new address states for the mirrors.
2.      Reset                 All mirrors are reset in parallel (voltage pulse applied
                              to the bias bus).
3.      Unlatch               The bias is turned off to unlatch the mirrors and allow
                              them to release and begin to rotate to the flat state.
4.      Differentiate         Retarding fields are applied to the yoke and mirrors in
                              order to rotationally separate the mirrors that remain
                              in the same state from those that are to cross over to
                              a new state.
5.      Land and latch        The bias is turned on to capture the rotationally
                              separated mirrors and enable them to rotate to the
                              addressed states, then settle and latch.
6.      Update memory array   The bias remains turned on to keep the mirrors latched,
                              so as to prevent them from responding to changes in the
                              memory, while the memory is written with new video data.
7.      Last sequence         Repeat the sequence, beginning at step 1.

Table 3.1 DMD address and reset sequence
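As an illustration, the cycle in Table 3.1 can be modelled as a tiny cyclic state machine (the phase names follow the table; this is a sketch, not TI's actual controller logic):

```python
# Illustrative model of the Table 3.1 bias/reset cycle. Phase names
# follow the table; step 7 ("last sequence") is the wrap back to step 1.
SEQUENCE = [
    "memory ready",    # 1. new address states loaded under every mirror
    "reset",           # 2. reset pulse applied in parallel via the bias bus
    "unlatch",         # 3. bias off: mirrors release toward the flat state
    "differentiate",   # 4. retarding fields separate the crossing mirrors
    "land and latch",  # 5. bias on: mirrors rotate, settle and latch
    "update memory",   # 6. bias held on while new video data is written
]

def step(phase):
    """Return the phase that follows `phase`, wrapping back to step 1."""
    i = SEQUENCE.index(phase)
    return SEQUENCE[(i + 1) % len(SEQUENCE)]

print(step("update memory"))   # back to "memory ready": the cycle repeats
```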


The bias voltage has three functions. First, it produces a bistable condition to
minimize the address voltage requirement, as previously mentioned. In this manner, large
rotation angles can be achieved with conventional 5-volt CMOS. Second, it
electromechanically latches the mirrors so that they cannot respond to changes in the
address voltage until the mirrors are reset. The third function of the bias is to reset the pixels
so that they can reliably break free of surface adhesive forces and begin to rotate to their
new address states.
Although the metal surfaces of the superstructure are coated with a passivation or
lubrication layer, the remaining van der Waals (surface) forces between molecules require
more than the hinge-restoring force to reliably reset the mirrors. A reset voltage pulse
applied to the mirror and yoke causes the spring tips of the yoke to flex.

Fig. 3.7 SEM photomicrograph of Yoke and spring tips


As the spring tips unflex, they produce a reaction force that causes the yoke landing
tips to accelerate away from the landing pads, producing a reliable release from the surface.

3.5. DLP SYSTEM DESCRIPTION AND OPERATION

Fig. 3.8 generic DLP system diagram

Figure illustrates a generic three-chip DLP system broken down into its functional
components (video front-end, digital processor, digital formatter, and digital display). The
generic video front-end accepts a variety of video sources (digital, digital compressed,
digital graphics, analog composite, analog video, and analog graphics). The video front-end
performs the functions of decompression, decoding, and analog-to-digital conversion,
depending on the nature of the video source.
The first operation in the digital processor is progressive-scan conversion. This
conversion is required if the original source material is interlaced. An interlaced format
provides even lines of video during one video field time and odd lines during the next field
time. Progressive-scan conversion is the process of creating (by an interpolation algorithm)
new scan lines between the odd or even lines of each video field.
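A naive line-averaging version of this interpolation can be sketched as follows (real systems use motion-adaptive algorithms; this simple spatial interpolator is only an illustration):

```python
def deinterlace(field, parity, height):
    """Fill a full frame from one field by averaging neighbouring lines.

    field  -- list of scan lines (each a list of pixel values)
    parity -- 0 if the field carries the even lines, 1 for the odd lines
    height -- number of lines in the full progressive frame
    """
    frame = [None] * height
    for i, line in enumerate(field):          # keep the lines we have
        frame[2 * i + parity] = line
    for y in range(height):                   # interpolate the gaps
        if frame[y] is None:
            above = frame[y - 1] if y > 0 else frame[y + 1]
            below = frame[y + 1] if y + 1 < height else frame[y - 1]
            frame[y] = [(a + b) / 2 for a, b in zip(above, below)]
    return frame

even_field = [[0, 0], [100, 100]]             # lines 0 and 2 of a 4-line frame
full = deinterlace(even_field, parity=0, height=4)
print(full[1])                                # [50.0, 50.0]
```

The missing odd line comes out as the average of its neighbours; motion-adaptive converters would instead blend this spatial estimate with the previous field where the image is still.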
Interlacing has been historically used in CRT-based systems to reduce the video
bandwidth requirements without producing objectionable flicker effects created by the
temporal decay in phosphor brightness. For progressively scanned CRTs, interlacing is
unnecessary because additional bandwidth is allocated so that every line of the CRT is
refreshed during each field time. Progressive scanning that incorporates motion-adaptive
algorithms helps to reduce interlaced-scan artifacts such as interline flicker, raster line
visibility, and field flicker. These are particularly noticeable in larger display formats.
The next operation in the digital processor is digital resampling (or scaling). This
operation resizes the video data to fit the DMD's pixel array, expands letterbox video
sources, and maintains a correct aspect ratio for the square pixel DMD format. After the
scaling operation, the video data is input to the color space conversion block. If the video is
not already in a red, green, blue (R, G, B) format, it is converted from luminance and color
difference encoding (e.g., Y, Cr, Cb) into R, G, B. Next, a degamma (inverse gamma)
function is performed. CRT systems have non-linear signal-to-light characteristics; to
compensate, an error correction called gamma correction is applied to images. Because
the DLP system has linear signal-to-light characteristics, this correction must be
removed, which is done in the degamma section. The degamma operation can produce
low-light-level contouring effects, but these are minimized by using an error diffusion technique.
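A degamma stage is often implemented as a lookup table; the sketch below assumes a simple power-law with gamma = 2.2 as a stand-in for the encoding curve (real systems use measured data):

```python
# Sketch of a degamma lookup table for 8-bit video.
GAMMA = 2.2   # assumed encoding gamma, not a measured value

def degamma_lut(bits=8):
    """Map gamma-encoded pixel codes to codes linear in light output."""
    top = 2 ** bits - 1
    return [round(top * (v / top) ** GAMMA) for v in range(top + 1)]

lut = degamma_lut()
print(lut[0], lut[255])   # the endpoints are preserved: 0 255
# Mid-grey code 128 maps to well below 128; the coarse spacing of the
# resulting dark codes is the source of the low-light contouring that
# error diffusion must smooth out.
```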
Finally, the R, G, B signals are input to the digital formatter. First, the scan-line format
data is converted into an R, G, B bit-plane format. The bit planes are stored in a dual
synchronous DRAM (SDRAM) frame buffer for fast access of the bit-plane data. The
bit-plane data is then output to the DMDs in a PWM bit-splitting sequence. The DMD chip has
multiple data inputs that allow it to match the frequency capability of the on-chip CMOS
with the required video data rates. The bit-plane data coming out of the frame buffer is
multiplexed 16:1 and fed to the multiple data inputs of each DMD. The bit-plane data is
then demultiplexed 1:16 and fed to the frame-memory underlying the DMD pixel array.
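The scan-line to bit-plane conversion described above can be sketched as (a minimal illustration; the 16:1 multiplexing and SDRAM buffering are omitted):

```python
def to_bit_planes(frame, bits=8):
    """Split a frame of `bits`-bit pixels into `bits` binary bit planes.

    frame -- 2-D list of integer pixel values.
    Returns planes, where planes[b][y][x] is bit b of pixel (x, y);
    plane 0 is the LSB plane, plane bits-1 the MSB plane.
    """
    return [[[(px >> b) & 1 for px in row] for row in frame]
            for b in range(bits)]

frame = [[0, 255], [128, 1]]
planes = to_bit_planes(frame)
print(planes[7])   # MSB plane: [[0, 1], [1, 0]]
print(planes[0])   # LSB plane: [[0, 1], [0, 1]]
```

Each plane is then displayed for a duration proportional to its bit weight, which is exactly the PWM scheme of Section 3.2.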

3.6. PROJECTION OPTICS


DLP optical systems have been designed in a variety of configurations distinguished by the
number of DMD chips (one, two, or three) in the system. The one chip and two chip
systems rely on a rotating color disk to time-multiplex the colors. The one chip
configuration is used for lower brightness applications and is the most compact. Two chip
systems yield higher brightness performance but are primarily intended to compensate for
the color deficiencies resulting from spectrally imbalanced lamps (e.g., the red deficiency in
many metal halide lamps). For the highest brightness applications, three chip systems are
required.

Fig. 3.9 DLP three chip projection system


A DLP optical system with three chips is shown in Figure. Because the DMD is a simple
array of reflective light switches, no polarizers are required. Light from a metal halide or
xenon lamp is collected by a condenser lens. For proper operation of the DMD light switch,
this light must be directed at 20 degrees relative to the normal of the DMD chip. To
accomplish this in a way that eliminates mechanical interference between the
illuminating and projecting optics, a total internal reflection (TIR) prism is interposed
between the projection lens and the DMD color-splitting/-combining prisms.
The color-splitting/-combining prisms use dichroic interference filters deposited on
their surfaces to split the light by reflection and transmission into red, green, and blue
components. The red and blue prisms require an additional reflection from a TIR surface of
the prism in order to direct the light at the correct angle to the red and blue DMDs. Light
reflected from the on-state mirrors of the three DMDs is directed back through the prisms
and the color components are recombined. The combined light then passes through the TIR
prism and into the projection lens because its angle has been reduced below the critical
angle for total internal reflection in the prism air gap.
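This last step relies on the critical-angle condition at the prism air gap; a quick calculation, assuming a glass index of 1.5 (a BK7-like value, not given in the text):

```python
import math

def critical_angle(n_glass, n_air=1.0):
    """Critical angle (degrees) for total internal reflection at a
    glass-air interface: sin(theta_c) = n_air / n_glass."""
    return math.degrees(math.asin(n_air / n_glass))

theta_c = critical_angle(1.5)   # 1.5 is an assumed glass index
print(round(theta_c, 1))        # 41.8 -- rays steeper than this escape the prism
```

Light arriving at the gap below this angle is transmitted toward the projection lens; light above it is totally internally reflected, which is how the TIR prism separates the illumination and projection paths.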

Fig. 3.10 DLP three chip prototype projection engine


A DLP three-chip prototype projection engine is shown in Figure. It projects 1100
lumens with a 500-watt xenon lamp. The engine measures 19.5 × 12.8 × 10 inches, and it
weighs 38 pounds. One of the DMD package assemblies with thermoelectric cooler and fan
is visible.

3.7. MICRO ELECTROMECHANICAL SYSTEMS


Micro electromechanical systems (MEMS) (also written as micro-electro-mechanical,
Micro Electro Mechanical or microelectronic and micro electromechanical systems and the
related micro mechatronics) is the technology of very small devices; it merges at the
nanoscale into nanoelectromechanical systems (NEMS) and nanotechnology. MEMS are also
referred to as micromachines (in Japan) or microsystems technology, MST (in Europe).
MEMS are separate and distinct from the hypothetical vision of molecular nanotechnology
or molecular electronics. MEMS are made up of components between 1 and 100 micrometres
in size (i.e. 0.001 to 0.1 mm), and MEMS devices generally range in size from 20
micrometres (20 millionths of a metre) to a millimetre (i.e. 0.02 to 1.0 mm). They usually
consist of a central unit that processes data (the microprocessor) and several components
that interact with the surroundings such as microsensors. At these size scales, the standard
constructs of classical physics are not always useful. Because of the large surface area to
volume ratio of MEMS, surface effects such as electrostatics and wetting dominate over
volume effects such as inertia or thermal mass. The potential of very small machines was
appreciated before the technology existed that could make them; see, for example, Richard
Feynman's famous 1959 lecture "There's Plenty of Room at the Bottom". MEMS became
practical once they could be fabricated using modified semiconductor device fabrication
technologies, normally used to make electronics. These include molding and plating, wet
etching (KOH, TMAH) and dry etching (RIE and DRIE), electro discharge machining
(EDM), and other technologies capable of manufacturing small devices. An early example
of a MEMS device is the resonistor, an electromechanical monolithic resonator.
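The surface-to-volume argument above can be made concrete with a cube as a toy model (the 20 μm size is taken from the MEMS range quoted in the text):

```python
def surface_to_volume(edge):
    """Surface-area-to-volume ratio of a cube of side `edge` (metres)."""
    return 6 * edge ** 2 / edge ** 3   # simplifies to 6 / edge

macro = surface_to_volume(1e-2)    # a 1 cm cube
mems = surface_to_volume(20e-6)    # a 20 um, MEMS-scale cube
print(mems / macro)                # ~500x more surface per unit volume
```

Since the ratio scales as 1/edge, shrinking a part by 500x multiplies its relative surface area by 500x, which is why electrostatics and wetting dominate inertia and thermal mass at MEMS scales.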

3.7.1. MATERIALS FOR MEMS MANUFACTURING


The fabrication of MEMS evolved from the process technology in semiconductor device
fabrication, i.e. the basic techniques are deposition of material layers, patterning by
photolithography and etching to produce the required shapes.
3.7.1.1. Silicon
Silicon is the material used to create most integrated circuits used in consumer electronics in
the modern industry. The economies of scale, ready availability of cheap high-quality
materials and ability to incorporate electronic functionality make silicon attractive for a
wide variety of MEMS applications. Silicon also has significant advantages engendered
through its material properties. In single crystal form, silicon is an almost perfect Hookean
material, meaning that when it is flexed there is virtually no hysteresis and hence almost no
energy dissipation. As well as making for highly repeatable motion, this also makes silicon
very reliable as it suffers very little fatigue and can have service lifetimes in the range of
billions to trillions of cycles without breaking.
3.7.1.2. Polymers
Even though the electronics industry provides an economy of scale for the silicon industry,
crystalline silicon is still a complex and relatively expensive material to be produced.
Polymers on the other hand can be produced in huge volumes, with a great variety of
material characteristics. MEMS devices can be made from polymers by processes such as
injection molding, embossing or stereo lithography and are especially well suited to micro
fluidic applications such as disposable blood testing cartridges.
3.7.1.3. Metals
Metals can also be used to create MEMS elements. While metals do not have some of the
advantages displayed by silicon in terms of mechanical properties, when used within their
limitations, metals can exhibit very high degrees of reliability. Metals can be deposited by
electroplating, evaporation, and sputtering processes. Commonly used metals include gold,
nickel, aluminium, copper, chromium, titanium, tungsten, platinum, and silver.
3.7.1.4. Ceramics
The nitrides of silicon, aluminium and titanium as well as silicon carbide and other ceramics
are increasingly applied in MEMS fabrication due to advantageous combinations of
material properties. AlN crystallizes in the wurtzite structure and thus shows pyroelectric
and piezoelectric properties enabling sensors, for instance, with sensitivity to normal and
shear forces. TiN, on the other hand, exhibits a high electrical conductivity and large elastic
modulus, allowing the realization of electrostatic MEMS actuation schemes with ultrathin
membranes. Moreover, the high resistance of TiN against biocorrosion qualifies the
material for applications in biogenic environments and in biosensors.

3.7.2. MEMS BASIC PROCESSES


3.7.2.1. Deposition processes
One of the basic building blocks in MEMS processing is the ability to deposit thin films of
material with a thickness anywhere from a few nanometres to about 100 micrometres.
There are two types of deposition processes, as follows.
Physical deposition: - Physical vapour deposition ("PVD") consists of a process in which a
material is removed from a target, and deposited on a surface. Techniques to do this include
the process of sputtering, in which an ion beam liberates atoms from a target, allowing them
to move through the intervening space and deposit on the desired substrate, and Evaporation
(deposition), in which a material is evaporated from a target using either heat (thermal
evaporation) or an electron beam (e-beam evaporation) in a vacuum system.

Chemical deposition: - Chemical deposition techniques include chemical vapour
deposition ("CVD"), in which a stream of source gas reacts on the substrate to grow the
material desired. This can be further divided into categories depending on the details of the
technique, for example, LPCVD (Low Pressure chemical vapour deposition) and PECVD
(Plasma Enhanced chemical vapour deposition). Oxide films can also be grown by the
technique of thermal oxidation, in which the (typically silicon) wafer is exposed to oxygen
and/or steam, to grow a thin surface layer of silicon dioxide.
3.7.2.2. Patterning
Patterning in MEMS is the transfer of a pattern into a material.
Lithography: - Lithography in MEMS context is typically the transfer of a pattern into a
photosensitive material by selective exposure to a radiation source such as light. A
photosensitive material is a material that experiences a change in its physical properties
when exposed to a radiation source. If a photosensitive material is selectively exposed to
radiation (e.g. by masking some of the radiation) the pattern of the radiation on the material
is transferred to the material exposed, as the properties of the exposed and unexposed
regions differ. This exposed region can then be removed or treated providing a mask for the
underlying substrate. Photolithography is typically used with metal or other thin film
deposition, wet and dry etching.
Diamond patterning: - A simple way to carve or create patterns on the surface of
nanodiamonds without damaging them could lead to new photonic devices. Diamond patterning
is a method of forming diamond MEMS. It is achieved by the lithographic application of
diamond films to a substrate such as silicon. The patterns can be formed by selective
deposition through a silicon dioxide mask, or by deposition followed by micromachining or
focused ion beam milling.
3.7.2.3. Etching processes
There are two basic categories of etching processes: wet etching and dry etching. In the
former, the material is dissolved when immersed in a chemical solution. In the latter, the
material is sputtered or dissolved using reactive ions or a vapor-phase etchant.
Wet etching: - Wet chemical etching consists in selective removal of material by dipping a
substrate into a solution that dissolves it. The chemical nature of this etching process
provides a good selectivity, which means the etching rate of the target material is
considerably higher than the mask material if selected carefully.

Isotropic etching: - Etching progresses at the same speed in all directions, so the
etchant undercuts the mask and produces rounded cavities; masked features are widened
by roughly the etch depth on each side.
Anisotropic etching: - Some single crystal materials, such as silicon, will have different
etching rates depending on the crystallographic orientation of the substrate. This is known
as anisotropic etching and one of the most common examples is the etching of silicon in
KOH (potassium hydroxide), where Si <111> planes etch approximately 100 times slower
than other planes (crystallographic orientations). Therefore, etching a rectangular hole in a
(100)-Si wafer results in a pyramid-shaped etch pit with 54.7° walls, instead of a hole with
curved sidewalls as with isotropic etching.
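The 54.7° geometry fixes the pit depth for a given mask opening; a small worked example (54.74° is the angle between the (111) and (100) planes, whose tangent is √2):

```python
import math

SIDEWALL = 54.74   # angle between a (111) sidewall and the (100) surface

def pit_depth(opening):
    """Depth at which a KOH etch pit under a square mask opening of width
    `opening` self-terminates in a pyramidal tip (the slow residual (111)
    etch rate is ignored)."""
    return (opening / 2) * math.tan(math.radians(SIDEWALL))

print(round(pit_depth(100.0), 1))   # a 100 um opening bottoms out ~70.7 um deep
```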
HF etching: - Hydrofluoric acid is commonly used as an aqueous etchant for silicon
dioxide (SiO2, also known as BOX for SOI), usually in 49% concentrated form, 5:1, 10:1 or
20:1 BOE (buffered oxide etchant) or BHF (buffered HF). HF has long been used for
glass etching. It was used in IC fabrication for patterning the gate oxide until the
process step was replaced by RIE. Hydrofluoric acid is considered one of the more
dangerous acids in the clean room. It penetrates the skin upon contact and it diffuses straight
to the bone. Therefore the damage is not felt until it is too late.
Electrochemical etching: - Electrochemical etching (ECE) for dopant-selective removal of
silicon is a common method to automate and to selectively control etching. An active p-n
diode junction is required, and either type of dopant can be etching resistant ("etch-stop")
material. Boron is the most common etch-stop dopant. In combination with wet anisotropic
etching as described above, ECE has been used successfully for controlling silicon
diaphragm thickness in commercial piezoresistive silicon pressure sensors. Selectively
doped regions can be created either by implantation, diffusion, or epitaxial deposition of
silicon.
Vapor etching: - Xenon difluoride (XeF2) is a dry vapour-phase isotropic etchant for silicon,
originally applied to MEMS in 1995 at the University of California, Los Angeles. Primarily
used for releasing metal and dielectric structures by undercutting silicon, XeF2 has the
advantage of a stiction-free release unlike wet etchants. Its etch selectivity to silicon is very
high, allowing it to work with photo resist, SiO2, silicon nitride, and various metals for
masking. Its reaction to silicon is "plasmaless", purely chemical and spontaneous, and is
often operated in pulsed mode. Models of the etching action are available, and university
laboratories and various commercial tools offer solutions using this approach.
Plasma etching: - Modern VLSI processes avoid wet etching, and use plasma etching
instead. Plasma etchers can operate in several modes by adjusting the parameters of the
plasma. Ordinary plasma etching operates between 0.1 and 5 Torr. (This unit of pressure,
commonly used in vacuum engineering, equals approximately 133.3 Pascals.) The plasma
produces energetic free radicals, neutrally charged, that react at the surface of the wafer.
Since neutral particles attack the wafer from all angles, this process is isotropic. Plasma
etching can be isotropic, i.e., exhibiting a lateral undercut rate on a patterned surface
approximately the same as its downward etch rate, or can be anisotropic, i.e., exhibiting a
smaller lateral undercut rate than its downward etch rate. Such anisotropy is maximized in
deep reactive ion etching. The use of the term anisotropy for plasma etching should not be
conflated with the use of the same term when referring to orientation dependent etching.
The source gas for the plasma usually contains small molecules rich in chlorine or fluorine.
For instance, carbon tetrachloride (CCl4) etches silicon and aluminium, and
trifluoromethane etches silicon dioxide and silicon nitride.
A plasma containing oxygen is used to oxidize ("ash") photo resist and facilitate its
removal. Ion milling, or sputter etching, uses lower pressures, often as low as 10⁻⁴ Torr (10
mPa). It bombards the wafer with energetic ions of noble gases, often Ar+, which knock
atoms from the substrate by transferring momentum. Because the etching is performed by
ions, which approach the wafer approximately from one direction, this process is highly
anisotropic. On the other hand, it tends to display poor selectivity. Reactive-ion etching
(RIE) operates under conditions intermediate between sputter and plasma etching (between
10⁻³ and 10⁻¹ Torr). Deep reactive-ion etching (DRIE) modifies the RIE technique to
produce deep, narrow features.
Reactive ion etching (RIE): - In reactive ion etching (RIE), the substrate is placed inside a
reactor, and several gases are introduced. Plasma is struck in the gas mixture using an RF
power source, which breaks the gas molecules into ions. The ions accelerate towards, and
react with, the surface of the material being etched, forming another gaseous material. This
is known as the chemical part of reactive ion etching. There is also a physical part, which is
similar to the sputtering deposition process. If the ions have high enough energy, they can
knock atoms out of the material to be etched without a chemical reaction. It is a very
complex task to develop dry etch processes that balance chemical and physical etching,
since there are many parameters to adjust. By changing the balance it is possible to
influence the anisotropy of the etching: since the chemical part is isotropic and the physical
part highly anisotropic, the combination can form sidewalls with shapes from rounded
to vertical. RIE can be deep (deep RIE, or deep reactive ion etching, DRIE).
Deep RIE (DRIE) is a special subclass of RIE that is growing in popularity. In this
process, etch depths of hundreds of micrometres are achieved with almost vertical
sidewalls. The primary technology is based on the so-called "Bosch process", named after
the German company Robert Bosch, which filed the original patent, where two different gas
compositions alternate in the reactor. Currently there are two variations of the DRIE.
The first variation consists of three distinct steps (the Bosch Process as used in the
Plasma-Therm tool) while the second variation only consists of two steps (ASE, used in the
STS tool).
In the 1st Variation, the etch cycle is as follows:
(i) SF6 isotropic etch;
(ii) C4F8 passivation;
(iii) SF6 anisotropic etch for floor cleaning.
In the 2nd variation, steps (i) and (iii) are combined.
Both variations operate similarly. The C4F8 creates a polymer on the surface of the
substrate, and the second gas composition (SF6 and O2) etches the substrate. The polymer
is immediately sputtered away by the physical part of the etching, but only on the horizontal
surfaces and not the sidewalls. Since the polymer only dissolves very slowly in the chemical
part of the etching, it builds up on the sidewalls and protects them from etching. As a result,
etching aspect ratios of 50 to 1 can be achieved. The process can easily be used to etch
completely through a silicon substrate, and etch rates are 3-6 times higher than wet etching.
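The alternating etch/passivation cycle described above can be sketched numerically. The per-cycle etch depth below is a made-up illustrative value, not data from any real tool; only the roughly 50:1 aspect ratio comes from the text.

```python
def bosch_etch_depth(cycles, etch_per_cycle_um=0.8):
    """Total depth after a number of alternating passivate/etch
    cycles, assuming each SF6 etch step removes a fixed depth."""
    return cycles * etch_per_cycle_um

def max_depth_for_aspect_ratio(trench_width_um, aspect_ratio=50):
    """Deepest trench achievable for a given opening width at the
    roughly 50:1 aspect ratio quoted in the text."""
    return trench_width_um * aspect_ratio

# A 10 um wide trench at 50:1 can reach 500 um deep, enough to
# etch completely through a typical silicon wafer.
print(max_depth_for_aspect_ratio(10))   # 500
print(bosch_etch_depth(625))            # 500.0 um after 625 cycles
```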
3.7.2.4. Die preparation
After preparing a large number of MEMS devices on a silicon wafer, individual dies have to
be separated, which is called die preparation in semiconductor technology. For some
applications, the separation is preceded by wafer back grinding in order to reduce the wafer
thickness. Wafer dicing may then be performed either by sawing using a cooling liquid or a
dry laser process called stealth dicing.

3.7.3. MEMS MANUFACTURING TECHNOLOGIES


3.7.3.1. Bulk micromachining


Bulk micromachining is the oldest paradigm of silicon based MEMS. The whole thickness
of a silicon wafer is used for building the micro-mechanical structures. Silicon is machined
using various etching processes. Anodic bonding of glass plates or additional silicon wafers
is used for adding features in the third dimension and for hermetic encapsulation. Bulk
micromachining has been essential in enabling high-performance pressure sensors and
accelerometers that changed the sensor industry in the 1980s and 1990s.
3.7.3.2. Surface micromachining
Surface micromachining uses layers deposited on the surface of a substrate as the structural
materials, rather than using the substrate itself. Surface micromachining was created in
the late 1980s to render micromachining of silicon more compatible with planar integrated
circuit technology, with the goal of combining MEMS and integrated circuits on the same
silicon wafer. The original surface micromachining concept was based on thin
polycrystalline silicon layers patterned as movable mechanical structures and released by
sacrificial etching of the underlying oxide layer. Interdigital comb electrodes were used to
produce in-plane forces and to detect in-plane movement capacitively. This MEMS
paradigm has enabled the manufacturing of low-cost accelerometers for, e.g., automotive
airbag systems and other applications where low performance and/or high g-ranges are
sufficient. Analog Devices pioneered the industrialization of surface micromachining
and realized the co-integration of MEMS and integrated circuits.
3.7.3.3. High aspect ratio (HAR) silicon micromachining
Both bulk and surface silicon micromachining are used in the industrial production of
sensors, ink-jet nozzles and other devices. But in many cases the distinction between these
two has diminished. A new etching technology, deep reactive-ion etching, has made it
possible to combine good performance typical of bulk micromachining with comb
structures and in-plane operation typical of surface micromachining. While it is common in
surface micromachining to have structural layer thickness in the range of 2 µm, in HAR
silicon micromachining the thickness can be from 10 to 100 µm. The materials commonly
used in HAR silicon micromachining are thick polycrystalline silicon, known as epi-poly,
and bonded silicon-on-insulator (SOI) wafers although processes for bulk silicon wafer also
have been created (SCREAM). Bonding a second wafer by glass frit bonding, anodic
bonding or alloy bonding is used to protect the MEMS structures. Integrated circuits are
typically not combined with HAR silicon micromachining.

3.7.4. APPLICATIONS OF MEMS



In one viewpoint, MEMS applications are categorized by type of use:
1. Sensor
2. Actuator
3. Structure

In another viewpoint, MEMS applications are categorized by the field of application
(commercial applications include):
1. Inkjet printers, which use piezoelectric or thermal bubble ejection to deposit ink on
paper
2. Accelerometers in modern cars for a large number of purposes including airbag
deployment in collisions
3. Accelerometers and MEMS gyroscopes in radio controlled, or autonomous, helicopters,
planes and multirotors (also known as drones), used for automatically sensing and
balancing flying characteristics of roll, pitch and yaw
4. Accelerometers in consumer electronics devices such as game controllers (Nintendo
Wii), personal media players / cell phones (Apple iPhone, various Nokia mobile phone
models, various HTC PDA models) and a number of Digital Cameras (various Canon
Digital IXUS models) Also used in PCs to park the hard disk head when free-fall is
detected, to prevent damage and data loss
5. MEMS gyroscopes used in modern cars and other applications to detect yaw; e.g., to
deploy a roll over bar or trigger dynamic stability control
6. MEMS microphones in portable devices, e.g., mobile phones, head sets and laptops
7. Silicon pressure sensors e.g., car tire pressure sensors, and disposable blood pressure
sensors
8. Displays e.g., the DMD chip in a projector based on DLP technology
9. Optical switching technology
10. Bio-MEMS applications in medical and health related technologies
11. Interferometric modulator display (IMOD) applications
12. Fluid acceleration such as for micro-cooling
13. Micro-scale energy harvesting, including piezoelectric, electrostatic and
electromagnetic microharvesters
14. Micromachined ultrasound transducers, including piezoelectric micromachined
ultrasonic transducers and capacitive micromachined ultrasonic transducers
Companies with strong MEMS programs come in many sizes. The larger firms
specialize in manufacturing high volume inexpensive components or packaged solutions for
end markets such as automobiles, biomedical, and electronics. The successful small firms
provide value in innovative solutions and absorb the expense of custom fabrication with
high sales margins. In addition, both large and small companies work in R&D to explore
MEMS technology.

3.8. EVOLUTION OF THE DMD ARCHITECTURE

The basic bistable concept was developed in the Central Research Laboratories of Texas
Instruments (now Corporate Research & Development). The first structure, known as the
conventional pixel, did not hide the mechanical structures of the hinges or the support posts.
This resulted in less area available for the mirror and greater light diffraction from the
exposed mechanical structures. The result was a contrast ratio and optical efficiency that
could not support a commercial business.

Fig. 3.11 Evolution of DMD Pixel


The first improvement made by the newly formed Digital Imaging Venture Project
of Texas Instruments was to hide the hinges and support posts under the mirror (Hidden
Hinge 1). This modification resulted in a greater mirror area and less light diffraction with
an attendant improvement in contrast ratio (>100:1) and greater optical efficiency. But this
structure could not work reliably with 5 volt CMOS levels. Two more superstructure
designs were required before reliable operation was achieved. The current structure (Hidden
Hinge 3) maximizes the available area for electrostatic attraction, using both the yoke and
mirror as active elements. Thus, almost every bit of area is used to develop electrostatic
torque, resulting in greater electrical efficiency and reliability.

3.9. PROJECTION OPERATION


3.9.1. DMD OPTICAL SWITCHING PRINCIPLE
Light from a projection source illuminates the DMD array at an angle of +2θL from the
normal to the plane of the mirrors in their flat state. The angle θL is the rotation angle of the
mirror when the yoke is touching its mechanical stops, or landed. The mirror in its flat state
reflects the incident light to an angle of -2θL. The projection lens is designed so that flat
state light misses the pupil of the projection lens, allowing very little light to be projected
through the lens. But the mirrors are only briefly at the flat state as they make a transition
from one landed state to the other. When the mirror is in its off state, the reflected light is
further removed from the pupil of the projection lens and even less light is collected by the
projection lens. When the mirror is in its on state, the reflected light is directed into the
pupil of the projection lens, and nearly all the light is collected by the projection lens and
imaged to the projection screen. Because of the large rotation angles of the mirror, the off
state light and on-state light are widely separated, allowing fast projection optics to be used.
The result is efficient light collection while maintaining a high contrast ratio.
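The switching geometry above can be checked with a few lines of arithmetic. The ±10 degree landed tilt used below is an assumed example value; the only rule applied is that tilting a mirror by t rotates the reflected beam by 2t.

```python
def reflected_angle(incident_deg, mirror_tilt_deg):
    """Reflected-beam angle from the system normal: tilting the
    mirror by t rotates the reflected beam by 2t."""
    return -incident_deg + 2 * mirror_tilt_deg

theta_L = 10.0           # landed rotation angle in degrees (assumed)
illum = 2 * theta_L      # illumination arrives at +2*theta_L per the text

print(reflected_angle(illum, 0.0))       # flat state: -20, misses the pupil
print(reflected_angle(illum, theta_L))   # on state: 0, into the lens pupil
print(reflected_angle(illum, -theta_L))  # off state: -40, far from the pupil
```

The 40 degree separation between on-state and off-state light is what allows fast projection optics to be used.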

3.9.2. GRAY SCALE


As previously mentioned, the DMD accepts electrical words representing gray levels of
brightness at its input and outputs optical words. Suppose, for the sake of simplicity, that the
input words have 4 bits.
Each bit in the word represents a time duration for the light to be on or off (1 or 0). The
time durations have relative values of 2^0, 2^1, 2^2, 2^3, or 1, 2, 4, 8. The first bit (or least
significant bit, LSB) represents a duration of 1/15, the second 2/15, the third 4/15, and the
fourth bit (or most significant bit, MSB) represents a duration of 8/15 of the video field
time. The possible gray levels produced by all combinations of bits in the 4-bit word are
2^4, or 16, equally spaced gray levels (0, 1/15, 2/15, ..., 15/15). For example, (0000) = 0,
(1000) = 8/15, and (1111) = 15/15. The DMD commonly uses 8-bit words, representing 2^8,
or 256, possible gray levels.
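This binary weighting amounts to interpreting the word as an integer and dividing by 2^n - 1; a minimal sketch:

```python
def gray_fraction(word):
    """Fraction of the field time the light is on for an n-bit
    word written MSB first, e.g. '1000' -> 8/15 for 4 bits."""
    n = len(word)
    return int(word, 2) / (2 ** n - 1)

print(gray_fraction('0000'))   # 0.0
print(gray_fraction('1000'))   # 0.533..., i.e. 8/15
print(gray_fraction('1111'))   # 1.0
```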
In this simple example, the DMD array is illuminated with constant intensity light
(not shown) and only 4-bit words are input to the array, representing 16 possible gray levels.
A projection lens focuses and magnifies the light reflected from each pixel onto a distant
projection screen. For clarity, only the central column is addressed. It is assumed that the
others are addressed to the dark state (0000). An electrical word is input into the memory
element of each light switch one bit at a time, beginning with the MSB for each word.

Fig. 3.12 Binary time intervals for 4-bit gray scale


When the entire array of light switches has been addressed with the MSB, the
individual pixels are enabled (reset) so that they can respond in parallel to their MSB state
(1 or 0). During each bit time, the next bit is loaded into the memory array. At the end of
each bit time, the pixels are reset and they respond in parallel to the next address bit. The
process is repeated until all address bits are loaded into memory.
Incident light is reflected from the light switches and is switched or modulated into
light bundles having durations represented by each bit in the electrical word. To an observer,
the light bundles occur over such a small time compared to the integration time of the eye
that they give the physical sensation of light having a constant intensity represented by the
value of the 4-bit input word.

3.9.3. OPTICAL SWITCHING TIME


Conventionally, the DMD is addressed with an 8-bit word yielding 2^8 = 256 gray levels.
For 8-bit gray scale, the minimum duration of a light bundle has to be 1/256 of the total
field time. For a one-chip projection system, the DMD is sequentially illuminated with the
three primary colours, red, green, and blue (RGB). For NTSC video, the time occupied by
one colour field is 16/3 ms, or 5.3 ms. The LSB time is, therefore, (16/3) x (1/256) = 0.021 ms,
or 21 µs. The optical switching time of the DMD and projection lens combination must be
small compared to 21 µs in order to support 8-bit gray scale for a single-chip projector.
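The LSB-time arithmetic generalizes to any field time and bit depth; a small sketch reproducing the 21 µs figure (the 16 ms NTSC field time is the value implied by the (16/3) x (1/256) calculation):

```python
def lsb_time_us(field_time_ms, bits, colors=3):
    """Minimum light-bundle (LSB) duration in microseconds for a
    sequential-color single-chip system: the field time is shared
    among the colors, then divided into 2**bits gray-scale slots."""
    color_field_ms = field_time_ms / colors
    return color_field_ms / (2 ** bits) * 1000.0

print(round(lsb_time_us(16.0, 8)))   # 21 (microseconds)
```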

Fig. 3.13 Mechanical and Optical Switching Response


Figure shows the measured switching response of the DMD. Three variables are plotted as a
function of time: the bias/reset voltage, the cross-over transition from +10 degrees to -10
degrees, and the same-side transition for a mirror that is to remain at +10 degrees. Shortly
before the reset pulse is applied, all the SRAM memory cells in the DMD array are updated.
The mirrors have not responded to the new memory states because the bias voltage keeps
them electromechanically latched. The mechanical switching time is the interval between
when the reset pulse is applied and the crossover mirrors have landed and settled to a level
where they are electromechanically latched and the SRAM cells can once again be updated.
The optical switching time is the time from when the light first enters the aperture of the
projection lens to when the aperture is fully filled with light from the rotating mirror. Figure
shows that the mechanical switching time is measured as ~15 µs and the optical switching
time is ~2 µs. The optical switching time is ~10% of the LSB time, and therefore supports
8-bit gray scale under the most demanding condition of a single-chip projector.


CHAPTER 4
NEW IN DIGITAL LIGHT PROCESSING

4.1. TECHNOLOGICAL ADVANTAGES


4.1.1. DIGITAL ADVANTAGES
The audio world started the trend toward digital technology well over a decade ago.
Recently, an abundance of new digital video technology has been introduced to the
entertainment and communications markets. The digital satellite system (DSS) quickly
became the fastest selling consumer electronics product of all time, selling record numbers
of units in its first year of introduction. Sony, JVC, and Panasonic have all recently
introduced digital camcorders.
Epson, Kodak, and Apple are a few of the companies that now have digital cameras
on the market. The digital versatile disc (DVD), a widely anticipated new storage medium,
will feature full-length films with better than laser disc video quality by placing up to 17
gigabytes of information on a single disc. Today we have the ability to capture, edit,
broadcast, and receive digital information, only to have it converted to an analog signal just
before it is displayed. DLP has the ability to complete the final link to a digital video
infrastructure as well as to provide a platform on which to develop a digital visual
communications environment. Each time a signal is converted from digital to analog (D/A)
or analog to digital (A/D), signal noise enters the data path. Fewer conversions translate to
lower noise and lead to lower cost as the number of A/D and D/A converters decreases.
DLP offers a scalable projection solution for displaying a digital signal, thus completing an
all-digital infrastructure (figure shown below).


Fig. 4.1 Digital infrastructure


DLP offers the final link to a complete digital video infrastructure. Another digital
advantage is DLP's accurate reproduction of gray scale and color levels. Because each
video or graphics frame is generated by a digital, 8- to 10-bits-per-color gray scale, the
exact digital picture can be recreated time and time again. For example, an 8-bits-per-color
gray scale gives 256 different shades of each of the primary colors, which allows for 256^3,
or 16.7 million, different color combinations that can be digitally created (Figure shown
below).

Fig. 4.2 Digital color control


DLP can generate digital gray scale and color levels. Assuming 8 bits per color, 16.7 million
digitally created color combinations are possible. Above are several combinations of
different gray scale levels for each of the primary colors.

4.1.2. THE REFLECTIVE ADVANTAGE


Because the DMD is a reflective device, it has a light efficiency of greater than 60%,
making DLP systems more efficient than LCD projection displays. This efficiency is the
product of reflectivity, fill factor, diffraction efficiency, and actual mirror "on" time.
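The text states only that the overall efficiency exceeds 60% and that it is the product of four factors; the individual values below are hypothetical, chosen to illustrate how such a product works out.

```python
# All four factors are assumed example values except fill_factor,
# which the text quotes as up to 90%.
reflectivity   = 0.88   # mirror surface reflectivity (assumed)
fill_factor    = 0.90   # active mirror area fraction (from the text)
diffraction    = 0.86   # diffraction efficiency (assumed)
mirror_on_time = 0.92   # temporal duty of the "on" state (assumed)

efficiency = reflectivity * fill_factor * diffraction * mirror_on_time
print(f"{efficiency:.1%}")   # roughly 63%, i.e. greater than 60%
```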
LCDs are polarization-dependent, so one of the polarized light components is not
used. This means that 50% of the lamp light never even gets to the LCD because it is
filtered out by a polarizer. Other light is blocked by the transistors, gate, and source lines in
the LCD cell. In addition to these light losses, the liquid crystal material itself absorbs a
portion of the light. The result is that only a small amount of the incident light is transmitted
through the LCD panel and onto the screen. Recently, LCDs have experienced advances in
apertures and light transmission, but their performance is still limited because of their
dependence on polarized light.

4.1.3. SEAMLESS PICTURE ADVANTAGE


The square mirrors on DMDs are 16 µm, separated by 1 µm gaps, giving a fill factor of up
to 90%. In other words, 90% of the pixel/mirror area can actively reflect light to create a
projected image. Pixel size and gap uniformity are maintained over the entire array and are
independent of resolution. LCDs have, at best, a 70% fill factor. The higher DMD fill factor
gives a higher perceived resolution, and this, combined with the progressive scanning,
creates a projected image that is much more natural and lifelike than conventional
projection displays.
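The quoted fill factor follows directly from a 16 µm mirror on a 17 µm pitch (mirror plus 1 µm gap):

```python
def fill_factor(mirror_um, gap_um):
    """Fraction of each pixel cell occupied by the mirror:
    mirror area over the (mirror + gap) pitch area."""
    pitch = mirror_um + gap_um
    return (mirror_um / pitch) ** 2

print(f"{fill_factor(16, 1):.1%}")   # 88.6%, consistent with 'up to 90%'
```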

Fig. 4.3 Projection display


Photograph used to demonstrate the DLP advantage. This digitized photograph of a
parrot was used to demonstrate the seamless, film-like DLP picture advantage. A leading
video graphics adapter (VGA) LCD projector was used to project the image of the parrot
shown in Figure.
The same image of the parrot was projected using a DLP projector and is displayed
in Figure. Because of the high fill factor of DLP, the screen-door effect is gone. What is
seen is a digitally projected image made up of square pixels of information. With DLP, the
human eye sees more visual information and perceives higher resolution, although, as
demonstrated, the actual resolution shown in both projected images is the same. As the
photographs illustrate, DLP offers compellingly superior picture quality.

4.2. BENEFITS
4.2.1. CLARITY


As already explained, the DLP system provides high-clarity images due to its improved
digital technology.

4.2.2. BETTER RESOLUTION


The mirrors are very closely packed, giving a high fill factor of 90%. This high fill factor
gives a higher perceived resolution, which results in a much more natural and lifelike
projected image.

4.2.3. MAXIMUM BRIGHTNESS


At high luminous flux densities (lumens/cm²), optical absorption creates heating effects.
Excessive temperature can cause degradation of performance for both LCDs and DMDs. In
the case of LCDs, excessive heating causes degradation of the polarizer. Furthermore,
without adequate cooling of the LCD panel, the temperature of the LCD material can rise
above its clearing temperature Tc. This renders the LCD material useless for polarization
rotation and the display fails. For transmissive AM-LCD panels, a heat sink cannot be
attached to the substrate, so forced air cooling must be relied upon. Larger transmissive
panels mitigate this problem. Currently, AM-LCD projectors having 3000-lumen outputs
use 5.8x5.8 inch panels.
Excessive temperatures can also affect the long-term reliability of the DMD by
accelerating hinge deformation (metal creep) that can occur under high-duty-factor
operation of the mirror. Special hinge alloys have been developed to minimize this
deformation and guarantee reliable operation. High duty factors occur when the mirror is
operated in one direction for a much greater part of the time, on average, than in the other
direction. For example, 95/5 duty factor operation means that a mirror is 95% of the time at
one rotation angle (e.g., -10 degrees) and 5% of the time at the other rotation angle (e.g.,
+10 degrees). This situation would correspond to DMD operation with a video source
having a temporal average brightness of 5% (or 95%) of the peak brightness. Although
these extreme temporal averages are un-likely to occur for extended periods of time, 95/5
duty factor is chosen as a worst case reliability test condition for hinge deformation. With
current hinge metal alloys, long-term, reliable DMD operation at the 95/5 duty factor is
assured, provided the operating temperature of the hinge is limited to <65°C.
For high-brightness applications, the mirrors can absorb enough energy to raise the
hinge temperature above 65°C unless active cooling is applied to the package. Because the
DMD is reflective and built on a single-crystal silicon (X-silicon) backplane, the absorbed
heat can be efficiently extracted by connecting a thermoelectric cooler (TEC) to the
backside of the DMD package. In Figure, one of the DMD package assemblies with the
thermoelectric cooler is visible. The DMD package contains a thermal via to provide a
low-thermal-impedance path between the DMD chip and the TEC. A thermal model
predicts that for a three-chip SXGA projector producing 10,000 screen lumens, the hinge
temperature can be held to <65°C (with TEC cooling and an internal ambient air temperature
of 55°C).

4.2.4. BRIGHTNESS UNIFORMITY


Brightness uniformity is also an important part of image quality. Uniformity represents the
percentage of brightness carried throughout a projected image. A higher uniformity
percentage indicates that the projector delivers brightness more evenly from center to the
corners of the projected image, eliminating hot spots and distortion. The DLP system has a
brightness uniformity of more than 85%.

4.2.5. LOW FLICKER EFFECT


Flicker is the visual artifact that can be produced in CRTs as a result of brightness decay
with time. In the usual raster scanning technique, the lines of a video frame are scanned
sequentially, so at a particular instant of time different points on the screen have different
brightness levels. This gives rise to flicker. In DLP, however, all the mirror pixels are
updated in parallel at the same time, which minimizes flicker.

4.2.6. LIFE LIKE COLOR


As already seen, the digital technology gives lifelike color and makes it possible to
display 16.7 million different color combinations on the screen at a time.

4.2.7. CONTRAST RATIO


Contrast ratio is the difference between the lightest and the darkest portions of an image.
The larger the contrast ratio, the greater the ability of a projector to show subtle color details
and tolerate a room's ambient light. The inherent contrast ratio of the DMD is determined
by measuring the ratio of the light flux with all pixels turned on versus the flux with all
pixels turned off. The system contrast ratio is determined by measuring the light flux ratio
between bright and dark portions of a 4 x 4 checkerboard image according to ANSI
specifications. The checkerboard measurement takes into account light scatter and
reflections in the lens, which can degrade the inherent contrast ratio of the DMD.
The full on/off contrast ratio determines the dark level for scenes having a low
average luminance level (e.g., outdoor night scenes) as well as the video black level. The
checkerboard contrast ratio is a measure of the contrast for objects in scenes containing a
full range of luminance levels. The inherent contrast ratio of the DMD is limited by light
diffraction from the mirror edges, from the underlying substrate, and from the mirror via
(the metalized hole in the middle of the mirror that acts as the mirror support post, as shown
in Figure). Recent architectural improvements to the DMD pixels have led to improved
contrast ratios.
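The ANSI checkerboard measurement reduces to a ratio of mean luminances. The readings below are hypothetical example values; a real measurement uses the 8 bright and 8 dark squares of the 4 x 4 pattern.

```python
def ansi_contrast(bright, dark):
    """ANSI checkerboard contrast: mean luminance of the bright
    patches divided by mean luminance of the dark patches."""
    return (sum(bright) / len(bright)) / (sum(dark) / len(dark))

# Hypothetical luminance readings (e.g. cd/m^2) for the 8 bright
# and 8 dark squares of a 4 x 4 checkerboard.
bright = [410, 395, 402, 398, 405, 400, 399, 401]
dark   = [2.1, 1.9, 2.0, 2.0, 2.1, 1.9, 2.0, 2.0]
print(round(ansi_contrast(bright, dark)))   # about 200:1
```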

4.2.8. PORTABILITY
Because of its small size and weight, a DLP projector system is highly portable. A
projector giving an output of 2000 lumens weighs only 6.6 pounds, and one giving 1000
lumens, called the micro projector, weighs only 2 pounds.

4.2.9. ACCURACY AND STABILITY


Current high-brightness projection displays for use in the audio/visual rental and staging
business and for private and corporate use have a number of limitations. These include
warm-up or stabilization time; setup time for convergence, color balance, and gamma; and,
finally, the stability of the image quality once the system is operating. Maintaining stability
over a wide range of environmental conditions encountered in outdoor applications is
particularly difficult.
For video wall applications or other applications requiring multiple side-by-side
projectors, the setup time needed to make all of the displays look identical is often unacceptable.
Even when great care has been taken in this procedure, lack of stability makes periodic
adjustments necessary.
DLP-based projection systems offer the potential of short setup time and stable,
adjustment-free images. Initial stabilization time is minimal. Operation is also very fast:
the optical switching time of the mirror is only 2 µs, and the mechanical switching time,
including the time for the mirror to settle and latch, is only 15 µs.
Convergence is fixed by internal alignment of the three DMDs and is stable with time and
independent of throw distance. Color balance, uniformity, and gamma are digitally
controlled by pulse width modulation and are not affected by temperature. Brightness
roll-off is stable (fixed by a light integrator) and can be made small to accommodate video
wall applications.

4.3. DMD RELIABILITY


Steady improvements in DMD reliability have been made. Some of these are listed below:

1. An improved hinge material that reduces the metal creep that can occur under
high-duty-factor and high-temperature operating conditions. The hinge material is
manufactured using thin-film technology to obtain a less stiff material.
2. Improved packaging techniques that preserve the lubricity of the landing surface over
a wide range of environmental conditions.
3. A new architecture that incorporates spring tips at the landing tip of the yoke. These
springs store energy upon landing and push the mirror away from the surface upon
release. The result is greater operating margins as the yoke releases (resets) from the
underlying surface.
4. A particle reduction program that has dramatically reduced particle contamination
within the DMD package.
The DMD has been subjected to a series of tests simulating actual environmental operating
conditions, including thermal shock, temperature cycling, moisture resistance, mechanical
shock, vibration, and acceleration, and has passed all of these tests. In addition to
these, other tests have been conducted to determine the long-term result of repeated cycling
of mirrors between the on and off states. Mirror cycling tests look for hinge fatigue (broken
hinges) and failure of the mirrors to release because of increased adhesion (reset failure). To
date, in accelerated tests, a lifetime of more than 765 billion cycles has been demonstrated
(equivalent lifetime >76,000 hours, i.e., approximately 20 years of reliable operation for a
10-bit/primary-color, three-chip projector configuration).
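A back-of-envelope check of those lifetime figures; the switching rate and daily usage below are derived here for illustration, since the source quotes only the totals.

```python
cycles = 765e9    # demonstrated mirror cycles (from the text)
hours  = 76_000   # equivalent operating hours (from the text)

# Implied average mirror switching rate during the accelerated test.
cycles_per_second = cycles / (hours * 3600)

# Daily usage over which 76,000 hours stretches to about 20 years.
hours_per_day = hours / (20 * 365)

print(round(cycles_per_second))   # roughly 2800 transitions per second
print(round(hours_per_day, 1))    # roughly 10.4 hours of use per day
```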


CHAPTER 5
CONCLUSION

DLP brand projection displays are well-suited to high-brightness and high-resolution
applications. The digital light switch is reflective and has a high fill factor that results in
high optical efficiency at the pixel level and low pixelation effects in the projected image.
The DMD family of chips uses a common pixel design and a monolithic CMOS-like
process. These factors, taken together, mean that scaling to higher resolutions is
straightforward, without loss of pixel optical efficiency.
At higher resolutions, the DLP brand projector becomes even more efficient in its use
of light because of higher lamp-coupling efficiency. Because the DMD is a reflective
technology, the DMD chip can be effectively cooled through the chip substrate, thus
facilitating the use of high-power projection lamps without thermal degradation of the
DMD. DLP brand systems are all-digital (digital video in, digital light out) and give
accurate, stable reproduction of the original source material.
This digital light display system is set to revolutionize the field of video display
technology, providing high-clarity, high-resolution, high-brightness seamless images.
The convergence of market needs and technology advances has created a unique
business opportunity for an all-digital display technology based on the Digital Micromirror
Device (DMD). This paper presents an overview of this important new technology in terms
of its architecture, projection operation, fabrication, and reliability. Digital Light Processing
(DLP) systems incorporating the DMD are being developed for projection displays and
hardcopy applications. Hardcopy systems using DLP are in an evaluation phase, with
promising, near photographic quality printing having already been demonstrated. DLP-based
projection display systems have been demonstrated in a variety of sizes and form factors. By
the end of 1995, the first projection displays based on DLP will be available on the market.


REFERENCES

1. R.J. Gove, "DMD Display Systems: The Impact of an All-Digital Display," Society for
Information Display International Symposium (June 1994).
2. L.J. Hornbeck and W.E. Nelson, "Bistable Deformable Mirror Device," OSA Technical
Digest Series Vol. 8, Spatial Light Modulators and Applications, p. 107 (1988).
3. L.J. Hornbeck, "Deformable-Mirror Spatial Light Modulators," Spatial Light
Modulators and Applications III, SPIE Critical Reviews, Vol. 1150, pp. 86-102 (August
1989).
4. W.E. Nelson and L.J. Hornbeck, "Micromechanical Spatial Light Modulator for
Electrophotographic Printers," SPSE Fourth International Congress on Advances in Non-
Impact Printing Technologies, p. 427 (March 20, 1988).
5. J.B. Sampsell, "An Overview of Texas Instruments Digital Micromirror Device (DMD)
and Its Application to Projection Displays," Society for Information Display Internal
Symposium Digest of Tech. Papers, Vol. 24, pp. 1012-1015 (May 1993).
6. L.J. Hornbeck, "Current Status of the Digital Micromirror Device (DMD) for Projection
Television Applications (Invited Paper)," International Electron Devices Technical
Digest, pp. 381-384 (1993)
7. J.M Younse and D.W. Monk, "The Digital Micromirror Device (DMD) and Its
Transition to HDTV," Proc. of 13th International Display Research Conf. (Late News
Papers), pp. 613-616 (August 31-September 3, 1993)
8. J.B. Sampsell, "The Digital Micromirror Device," 7th ICSS&A, Yokohama, Japan
(1993).
9. J.M. Younse, "Mirrors on a Chip," IEEE Spectrum, pp. 27-31 (November 1993).
10. M.A. Mignardi, "Digital Micromirror Array for Projection TV," Solid State Technology,
Vol. 37, pp. 63-66 (July 1994).
11. V. Markandey et al., "Motion Adaptive Deinterlacer for DMD (Digital Micromirror
Device) Based Digital Television," IEEE Trans. on Consumer Electronics, Vol. 40, No.
3, pp. 735-742 (August 1994).
12. V. Markandey and R. Gove, "Digital Display Systems Based on the Digital Micromirror
Device," SMPTE 136th Technical Conference and World Media Expo (October 1994).

