
Augmented Reality: Technologies, Applications, and Limitations

D.W.F. van Krevelen


krevelen@cs.vu.nl

Vrije Universiteit Amsterdam, Department of Computer Science


De Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands

April 18, 2007

In the near future we may enrich our perception of reality through revolutionary virtual augmentation. Augmented reality (AR) technologies offer an enhanced perception to help us see, hear, and feel our environments in new and enriched ways that will benefit us in fields such as education, maintenance, design, and reconnaissance, to name but a few. This essay describes the field of AR, including its definition, its development history, its enabling technologies, and the technological problems that developers need to overcome. To give an idea of the state of the art, some recent applications of AR technology are also discussed, as well as a number of known limitations regarding human factors in the use of AR systems.

1 Introduction

Imagine a technology with which you could see more than others see, hear more than others hear, and perhaps even touch, smell and taste things that others can not. What if we had technology to perceive completely computational elements and objects within our real world experience, entire creatures and structures even that help us in our daily activities, while interacting almost unconsciously through mere gestures and speech?

With such technology, mechanics could see instructions on what to do next when repairing an unknown piece of equipment, surgeons could see ultrasound scans of organs while performing surgery on them, firefighters could see building layouts to avoid otherwise invisible hazards, soldiers could see positions of enemy snipers spotted by unmanned reconnaissance aircraft, and we could read reviews for each restaurant in the street we're walking in, or battle 10-foot tall aliens on the way to work (Feiner, 2002).

Augmented reality (AR) is the technology to create such a next generation, reality-based interface (Jacob, 2006), and it in fact already exists, moving from laboratories around the world into various industries and consumer markets. AR supplements the real world with virtual (computer-generated) objects that appear to coexist in the same space as the real world. According to Technology Review1, AR is recognised at MIT as one of ten emerging technologies of 2007, and we are on the verge of embracing this very new and exciting kind of human-computer interaction.

For anyone who is interested and wants to get acquainted with the field, this essay provides an overview of important technologies, applications and limitations of AR systems. First, a short definition and history of AR are given below. Section 2 describes technologies that enable an augmented reality experience. To give an idea of the possibilities of AR systems, a number of recent applications are discussed in Section 3. In Section 4 a number of technological challenges and limitations regarding human factors are discussed that AR system designers have to take into consideration. Finally, the essay concludes in Section 5 with a number of directions that the author envisions AR research might take.

Figure 1: Reality-virtuality continuum, adapted from Azuma et al. (2001).

This essay is an effort to explore the field of augmented reality beyond the summary presented in the course material (Dix et al., 2004, pp. 736-7). The author wishes to thank the course lecturer, Bert Bongers, for his feedback and shared insights.

1 http://www.techreview.com/special/emerging/


1.1 Definition

On the reality-virtuality continuum by Milgram and Kishino (1994) (Fig. 1), AR is one part of the general area of mixed reality. Both virtual environments (or virtual reality) and augmented virtuality, in which real objects are added to virtual ones, replace the surrounding environment by a virtual one. In contrast, AR takes place in the real world. Following the definitions in (Azuma, 1997; Azuma et al., 2001), an AR system:

• combines real and virtual objects in a real environment;
• registers (aligns) real and virtual objects with each other; and
• runs interactively, in three dimensions, and in real time.

Three aspects of this definition are important to mention. Firstly, it is not restricted to particular display technologies such as a head-mounted display (HMD). Nor is the definition limited to the sense of sight, as AR can and potentially will apply to all senses, including hearing, touch, and smell. Finally, removing real objects by overlaying virtual ones, an approach known as mediated or diminished reality, is also considered AR.
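The three criteria map directly onto the main loop that virtually every AR system runs once per frame: sense the real scene, estimate the viewer's pose so that real and virtual stay registered, and redraw fast enough to appear interactive. The following minimal sketch only illustrates that structure; the pose and overlay functions are hypothetical placeholders, not part of any specific toolkit.

```python
import time

TARGET_FPS = 30  # the "interactive, in real time" criterion


def estimate_pose(frame):
    # Placeholder: a real system derives the viewer's 6DOF pose from
    # trackers and/or the camera image itself (see Section 2.2).
    return (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)  # x, y, z, yaw, pitch, roll


def render_overlay(pose):
    # Placeholder: draw the virtual objects as seen from `pose`.
    return "overlay"


def ar_loop(num_frames=3):
    """One pass per frame: combine, register, and render in real time."""
    for _ in range(num_frames):
        t0 = time.time()
        frame = "camera image"         # combine: sense the real environment
        pose = estimate_pose(frame)    # register: align virtual with real
        overlay = render_overlay(pose)
        print("composited", frame, "+", overlay, "at pose", pose)
        # sleep off whatever remains of this frame's time budget
        time.sleep(max(0.0, 1.0 / TARGET_FPS - (time.time() - t0)))


if __name__ == "__main__":
    ar_loop()
```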
Figure 2: The world's first head-mounted display with the Sword of Damocles (Sutherland, 1968).

1.2 Brief history

The first AR prototypes (Fig. 2), created by computer graphics pioneer Ivan Sutherland and his students at Harvard University and the University of Utah, appeared in the 1960s and used a see-through HMD2 to present 3D graphics (Sutherland, 1968). A small group of researchers at the U.S. Air Force's Armstrong Laboratory, the NASA Ames Research Center, the Massachusetts Institute of Technology, and the University of North Carolina at Chapel Hill continued research during the 1970s and 1980s. During this time, mobile devices like the Sony Walkman (1979), digital watches and personal digital organisers were introduced. This paved the way for wearable computing (Mann, 1997; Starner et al., 1997) in the 1990s as personal computers became small enough to be worn at all times. Early palmtop computers include the Psion I (1984), the Apple Newton MessagePad (1993), and the Palm Pilot (1996). Today, many mobile platforms exist that may support AR, such as personal digital assistants (PDAs), tablet PCs, and mobile phones.

It took until the early 1990s before the term augmented reality was coined by Caudell and Mizell (1992), scientists at Boeing Corporation who were developing an experimental AR system to help workers put together wiring harnesses. True mobile AR was still out of reach, but a few years later Loomis et al. (1993) developed a GPS-based outdoor system that presents navigational assistance to the visually impaired with spatial audio overlays. Soon computing and tracking devices became sufficiently powerful and small enough to support graphical overlay in mobile settings. Feiner et al. (1997) created an early prototype of a mobile AR system (MARS) that registers 3D graphical tour guide information with the buildings and artifacts the visitor sees.

By the late 1990s, as AR became a research field of its own, several conferences on AR began, including the International Workshop and Symposium on Augmented Reality, the International Symposium on Mixed Reality, and the Designing Augmented Reality Environments workshop. Around this time, well-funded organisations were formed, such as the Mixed Reality Systems Laboratory3 (MRLab) in Japan and the Arvika consortium4 in Germany. Also, it became possible to rapidly build AR applications thanks to freely available software toolkits like the ARToolKit. In the mean time, several surveys appeared that give an overview of AR advances, describe its problems, and summarise developments (Azuma, 1997; Azuma et al., 2001). By 2001, MRLab finished their pilot research, and the symposia were united in the International Symposium on Mixed and Augmented Reality5 (ISMAR), which has become the major symposium for industry and research to exchange problems and solutions.

2 http://www.telemusic.org:8080/ramgen/realvideo/HeadMounted.rm
3 http://www.mr-system.com/
4 http://www.arvika.de/
5 http://www.augmented-reality.org/


2 Enabling technologies

The technological demands for AR are much higher than for virtual environments or VR, which is why the field of AR took longer to mature than that of VR. However, the key components needed to build an AR system have remained the same since Ivan Sutherland's pioneering work of the 1960s. Displays, trackers, and graphics computers and software remain essential in many AR experiences. Following the definition of AR step by step, this section first describes display technologies that combine the real and virtual worlds, followed by sensors and approaches to track user position and orientation for correct registration of the virtual with the real, and user interface technologies that allow real-time, 3D interaction. Finally, some remaining AR requirements are discussed.

2.1 Displays

Of all modalities in human sensory input, sight, sound and/or touch are currently the senses that AR systems commonly apply. This section mainly focuses on visual displays; however, aural (sound) displays are mentioned briefly below. Haptic (touch) displays are discussed with the interfaces in Section 2.3, while olfactory (smell) and gustatory (taste) displays are less developed or practically non-existent and will not be discussed in this essay.

2.1.1 Aural display

Aural display application in AR is mostly limited to self-explanatory mono (0-dimensional), stereo (1-dimensional) or surround (2-dimensional) headphones and loudspeakers. True 3D aural display is currently found in more immersive simulations of virtual environments and augmented virtuality, or is still in experimental stages.

Haptic audio refers to sound that is felt rather than heard (Hughes et al., 2006) and is already applied in consumer devices such as Turtle Beach's Ear Force6 headphones to increase the sense of realism and impact, but also to enhance user interfaces of e.g. mobile phones (Chang and O'Sullivan, 2005). Recent developments in this area are presented in workshops such as the international workshop on Haptic Audio Visual Environments7 and the upcoming second international workshop on Haptic and Audio Interaction Design8.

2.1.2 Visual display

There are basically three ways to visually present an augmented reality. Closest to virtual reality is video see-through, where the virtual environment is replaced by a video feed of reality and the AR overlay is composited onto the digitised images. Another way, which includes Sutherland's approach, is optical see-through: it leaves the real-world perception alone and displays only the AR overlay by means of transparent mirrors and lenses. The third approach is to project the AR overlay onto real objects themselves, resulting in projective displays. The three techniques may be applied at varying distances from the viewer (Fig. 3). Each combination of technique and distance is listed in the overview presented in Table 1 with a comparison of their individual advantages.

Figure 3: Visual display techniques and positioning (Bimber and Raskar, 2005a).

Video see-through. Besides being the cheapest and easiest to implement, this display technique offers the following advantages. Since reality is digitised, it is easier to mediate or remove objects from reality; this includes removing or replacing fiducial markers or placeholders with virtual objects (see for instance Figs. 11 and 28). Also, the brightness and contrast of virtual objects are matched easily with the real environment. The digitised images allow tracking of head movement for better registration, and it also becomes possible to match perception delays of the real and virtual. Disadvantages of video see-through include a low resolution of reality, a limited field-of-view (although this can easily be increased), and user disorientation due to parallax (eye offset) caused by the camera being positioned at a distance from the viewer's true eye location, which demands significant adjustment effort from the viewer (Biocca and Rolland, 1998). This problem was solved at the MR Lab by aligning the video capture with the viewer's optical axis (Takagi et al., 2000). A final drawback is the focus distance of this technique, which is fixed in most display types, providing poor eye accommodation. Some head-mounted setups can however move the display (or a lens in front of it) to cover a range of .25 meters to infinity within .3 seconds (Sugihara and Miyasato, 1998).

6 http://www.turtlebeach.com/site/products/earforce/x2/
7 http://www.discover.uottawa.ca/have2006/
8 http://www.haid2007.org/
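Because reality arrives as pixels in video see-through, combining it with the virtual layer is a per-pixel operation. The sketch below composites a rendered overlay onto a camera frame through a transparency mask, the basic operation behind this display class; the array shapes and 0-255 value range are assumptions for the example, not requirements of any particular device.

```python
import numpy as np

def composite(frame, overlay, alpha):
    """Blend a rendered overlay onto a camera frame.

    frame:   H x W x 3 uint8 camera image (digitised reality)
    overlay: H x W x 3 uint8 rendering of the virtual objects
    alpha:   H x W float in [0, 1]; 0 keeps reality, 1 shows the overlay,
             so fully opaque virtual pixels can also *remove* real objects
             (mediated or diminished reality) by covering them completely.
    """
    a = alpha[..., np.newaxis]  # broadcast the mask over the colour channels
    return (frame * (1.0 - a) + overlay * a).astype(np.uint8)

# Toy usage: a 2x2 frame where only the top-left pixel is augmented.
frame = np.full((2, 2, 3), 100, dtype=np.uint8)
overlay = np.full((2, 2, 3), 250, dtype=np.uint8)
alpha = np.array([[1.0, 0.0], [0.0, 0.0]])
print(composite(frame, overlay, alpha)[:, :, 0])  # [[250 100] [100 100]]
```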


Figure 4: Mobile optical see-through setups (Bimber and Raskar, 2005a).

Figure 5: Spatial optical see-through setups (Bimber and Raskar, 2005a).

Optical see-through. These displays not only leave the real-world resolution intact, they also have the advantage of being cheaper, safer, and parallax-free (no eye offset due to camera positioning). Optical techniques are safer because users can still see when power fails, making this an ideal technique for military and medical purposes. However, other input devices such as cameras are required for interaction and registration. Also, combining the virtual objects holographically through transparent mirrors and lenses creates disadvantages, as it reduces the brightness and contrast of both the images and the real-world perception, making this technique less suited for outdoor use. The all-important field-of-view is limited for this technique and may cause clipping of virtual images at the edges of the mirrors or lenses. Finally, occlusion (or mediation) of real objects is difficult because their light is always combined with the virtual image. Kiyoshi Kiyokawa solved this problem for head-worn displays by adding an opaque overlay using an LCD panel with pixels that opacify areas to be occluded (Kiyokawa et al., 2003).

As shown in Figures 4 and 5, optical see-through techniques with beam-splitting holographic optical elements (HOEs) may be applied in head-worn displays (Fig. 4a), hand-held displays (Fig. 4b), and spatial setups where the AR overlay is mirrored from a planar screen (Fig. 5a) or through a curved screen (Fig. 5b).

Virtual retinal displays or retinal scanning displays (RSDs) solve the problems of low brightness and low field-of-view in (head-worn) optical see-through displays. A low-power laser draws a virtual image directly onto the retina, which yields high brightness and a wide field-of-view (Fig. 6). RSD quality is not limited by the size of pixels but only by diffraction and aberrations in the light source, making (very) high resolutions possible as well. In postgraduate course material, Fiambolis (1999) provides further information on RSD technology. Together with their low power consumption, these displays are well suited for extended outdoor use. Still under development at the University of Washington and funded by MicroVision9 and the U.S. military, current RSDs are mostly monochrome (red only) and monocular (single-eye) displays. Schowengerdt et al. (2003) already developed a full-colour, binocular version with dynamic refocus to accommodate the eyes (Fig. 7) that is promised to be low-cost and light-weight.

9 http://www.microvision.com


Figure 6: Head-worn retinal scanning display (Bimber and Raskar, 2005a).

Figure 7: Binocular (stereoscopic) vision (Schowengerdt et al., 2003).

Figure 8: Reflection types and mobile projective setups (Bimber and Raskar, 2005a).

Projective. These displays have the advantage that they do not require special eye-wear, thus accommodating the users' eyes during focusing, and they can cover large surfaces for a wide field-of-view. Projection surfaces may range from flat, plain coloured walls to complex scale models (Bimber and Raskar, 2005b). However, as with optical see-through displays, other input devices are required for (indirect) interaction. Also, projectors need to be calibrated each time the environment or the distance to the projection surface changes (crucial in mobile setups). Fortunately, calibration may be automated using cameras, e.g. in a multi-walled Cave automatic virtual environment (CAVE) with irregular surfaces (Raskar et al., 1999). Furthermore, this type of display is limited to indoor use only due to the low brightness and contrast of the projected images. Occlusion or mediation of objects is also quite poor, but for head-worn projectors this may be improved by covering surfaces with retro-reflective material. Objects and instruments covered in this material reflect the projection directly towards the light source, which is close to the viewer's eyes, thus not interfering with the projection. From top to bottom, Fig. 8a shows how light reflects from three different surfaces: a mirror from which light is bounced or reflected; a normal (non-black) surface from which light is spread or diffused; and a coated surface from which light bounces right back to its source, or is retro-reflected, like a car-light reflector. Next to these surfaces are two projective AR displays: head-worn (Fig. 8b) and handheld (Fig. 8c) in the center. A spatial setup is shown in Figure 9.
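For flat projection surfaces, the camera-assisted calibration that projective displays depend on often reduces to estimating a plane-to-plane mapping (a homography) between projector pixels and points on the surface. Below is a minimal sketch of that idea using OpenCV; the four point correspondences are invented values standing in for corners of a projected calibration pattern found in a camera image.

```python
import numpy as np
import cv2

# Projector pixels (x, y) and where a camera saw them land on the wall.
# These correspondences are made up here; a real setup finds them by
# projecting known patterns and detecting them in the camera image.
proj_pts = np.float32([[0, 0], [1023, 0], [1023, 767], [0, 767]])
wall_pts = np.float32([[12, 30], [980, 55], [1010, 720], [25, 760]])

# Homography mapping projector space onto the (planar) surface.
H, _ = cv2.findHomography(proj_pts, wall_pts)

# To make `desired` appear undistorted on the wall, pre-warp it from
# wall space back into projector space with the inverse mapping.
desired = np.zeros((768, 1024, 3), dtype=np.uint8)  # what the wall should show
frame_for_projector = cv2.warpPerspective(desired, np.linalg.inv(H), (1024, 768))
```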
2.1.3 Display positioning

AR displays may be classified into three categories based on their position between the viewer and the real environment: head-worn, hand-held, and spatial (see Fig. 3).

Head-worn. Visual displays attached to the head include the video/optical see-through head-mounted display (HMD), the virtual retinal display (VRD), and the head-mounted projective display (HMPD). Cakmakci and Rolland (2006) give a recent detailed review of head-worn display technology. A current drawback of head-worn displays is the fact that they have to connect to graphics computers like laptops that restrict mobility due to limited battery life. Battery life may be extended by moving computation to distant locations and providing (wireless) connections using standards such as IEEE 802.11b/g or Bluetooth.


Figure 9: Spatial projective setup (Bimber and Raskar, 2005a).

Figure 10: Head-worn visual displays. Courtesy: Canon/MR Lab, Konica Minolta, MicroVision.

Fig. 10 shows examples of four (parallax-free) head-worn display types: Canon's Co-Optical Axis See-through Augmented Reality (COASTAR) video see-through display (Tamura et al., 2001) (Fig. 10a), Konica Minolta's holographic optical see-through Forgettable Display prototype (Kasai et al., 2000) (Fig. 10b), MicroVision's monochromatic and monocular Nomad retinal scanning display (Hollerer and Feiner, 2004) (Fig. 10c), and an organic light-emitting diode (OLED) based head-mounted projective display (Rolland et al., 2005) (Fig. 10d).

Hand-held. This category includes hand-held video/optical see-through displays as well as hand-held projectors. Although this category of displays is bulkier than head-worn displays, it is currently the best work-around to introduce AR to a mass market due to low production costs and ease of use. For instance, hand-held video see-through AR acting as a magnifying glass may be based on existing consumer products like mobile phones (Mohring et al., 2004) (Fig. 11a) that show 3D objects, or personal digital assistants (PDAs) (Wagner and Schmalstieg, 2003) (Fig. 11b) with e.g. navigation information. Stetten et al. (2001) apply optical see-through in their hand-held sonic flashlight to display medical ultrasound imaging directly over the scanned organ (Fig. 12a). One example of a hand-held projective display or AR flashlight is the iLamp by Raskar et al. (2003); this context-aware or tracked projector adjusts the imagery based on the current orientation of the projector relative to the environment (Fig. 12b). Recently, MicroVision (known from the retinal scanning displays above) introduced the small Pico Projector (PicoP), which is 8 mm thick, provides full-colour imagery of 1366×1024 pixels at 60 Hz using three lasers, and will probably appear embedded in mobile phones soon.

Spatial. The last category of displays is placed statically within the environment and includes screen-based video see-through displays, spatial optical see-through displays, and projective displays. These techniques lend themselves well to large presentations and exhibitions with limited interaction. Early ways of creating AR are based on conventional screens (computer or television) that show a camera feed with an AR overlay. This technique is now being applied in the world of sports television, where environments such as swimming pools and race tracks are well defined and easy to augment. Head-up displays (HUDs) in military cockpits are a form of spatial optical see-through and are becoming a standard extension for production cars to project navigational directions onto the windshield (Narzt et al., 2006). User viewpoints relative to the AR overlay hardly change in these cases due to the confined space. Spatial see-through displays may however appear misaligned when users move around in open spaces, for instance when the AR overlay is presented on a transparent screen such as the invisible interface by Ogi et al. (2001) (Fig. 13a). 3D holographs solve the alignment problem, as Goebbels et al. (2001) show with the ARSyS TriCorder10 (Fig. 13b) by the German Fraunhofer IMK (now IAIS11) research center.

10 http://www.arsys-tricorder.de
11 http://www.iais.fraunhofer.de/


Positioning                    |            Head-worn                      | Hand-held |           Spatial
Technique                      | Retinal | Optical | Video   | Projective  | All       | Video   | Optical | Projective
-------------------------------|---------|---------|---------|-------------|-----------|---------|---------|-----------
Mobile                         | +       | +       | +       | +           | +         |         |         |
Outdoor use                    | +       |         |         |             |           |         |         |
Interaction                    | +       | +       | +       | +           | +         | Remote  |         |
Multi-user                     | +       | +       | +       | +           | +         | Limited | Limited |
Brightness                     | +       |         | +       | Limited     | +         | +       | Limited | Limited
Contrast                       | +       |         | +       | Limited     | +         | +       | Limited | Limited
Resolution                     | Growing | Growing | Growing | Growing     | Limited   | Limited | +       | +
Field-of-view                  | Growing | Limited | Limited | Growing     | Limited   | Limited | +       | +
Full-colour                    | +       | +       | +       | +           | +         | +       | +       | +
Stereoscopic                   | +       | +       | +       | +           |           |         | +       | +
Dynamic refocus (eye strain)   | +       |         |         | +           |           |         | +       | +
Occlusion                      |         |         | +       | Limited     | +         |         | Limited | Limited
Power economy                  | +       |         |         |             |           |         |         |

Opportunities: retinal displays promise future dominance; optical and video see-through currently dominate; hand-helds are realistic and mass-market; spatial video is cheap and off-the-shelf; spatial optical and projective setups offer tuning and ergonomy.
Drawbacks: optical see-through requires tuning and tracking; video see-through suffers delays; projective displays need retro-reflective material; hand-helds face processor and memory limits; spatial video lacks a see-through metaphor; spatial optical suffers clipping; spatial projective suffers clipping and shadows.

Table 1: Characteristics of surveyed visual AR displays.

2.2 Tracking sensors and approaches

Before an AR system can display virtual objects in a real environment, the system must be able to sense the environment and track the viewer's (relative) movement, preferably with six degrees of freedom (6DOF): three variables (x, y, and z) for position and three angles (yaw, pitch, and roll) for orientation.
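A 6DOF pose is conveniently handled as a single rigid-body transform. The sketch below builds a 4×4 matrix from position and yaw/pitch/roll and uses its inverse to bring a world-space point into the viewer's frame, the core step in registering a virtual object with the real scene; the Z-Y-X angle order is one common convention among several, and the coordinates are invented.

```python
import numpy as np

def pose_matrix(x, y, z, yaw, pitch, roll):
    """4x4 world-from-viewer transform for a 6DOF pose (angles in radians,
    applied in Z-Y-X order; other conventions exist)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = (x, y, z)
    return T

# A virtual label anchored 5 m north of the world origin, seen by a viewer
# standing at (0, -2, 1.7) and looking straight ahead:
viewer = pose_matrix(0.0, -2.0, 1.7, 0.0, 0.0, 0.0)
label_world = np.array([0.0, 5.0, 1.7, 1.0])         # homogeneous point
label_in_view = np.linalg.inv(viewer) @ label_world  # viewer-space position
print(label_in_view[:3])  # -> [0. 7. 0.]: 7 m in front of the viewer
```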
There must be some model of the environment to allow tracking for correct AR registration. Furthermore, most environments have to be prepared before an AR system is able to track 6DOF movement, but not all tracking techniques work in all environments. To this day, determining the orientation of a user is still a complex problem with no single best solution.

2.2.1 Modelling environments

Both tracking and registration techniques rely on environmental models, often 3D geometrical models. To annotate for instance windows, entrances, or rooms, an AR system needs to know where they are located with regard to the user's current position and field of view. Sometimes the annotations themselves may be occluded based on the environmental model; for instance, when an annotated building is occluded by other objects, the annotation should point to the non-occluded parts only (Bell et al., 2001).

Fortunately, most environmental models do not need to be very detailed about textures or materials. Usually a cloud of unconnected 3D sample points suffices, for example to present occluded buildings and essentially let users see through walls. To create a traveller guidance service (TGS), Kim et al. (2006) used models from a geographical information system (GIS). Stoakley et al. (1995) present users with the spatial model itself, an oriented map of the environment or world in miniature (WIM), to assist in navigation.
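A sparse point cloud already supports the occlusion decision described above: if modelled points lie clearly in front of an annotation's anchor along the viewing ray, the anchor is hidden and the label can be drawn in a "see-through" style instead. The helper below is a simplified sketch of that test; the function name, the fixed tolerance, and the sample data are illustrative only.

```python
import numpy as np

def anchor_occluded(anchor, cloud, viewer_pos, tolerance=0.5):
    """Return True if some modelled 3D point blocks the line of sight
    from viewer_pos to anchor (all arguments are xyz arrays in meters).

    Simplified test: a cloud point occludes the anchor when it lies close
    to the viewing ray and closer to the viewer than the anchor is.
    """
    ray = anchor - viewer_pos
    dist = np.linalg.norm(ray)
    direction = ray / dist
    rel = cloud - viewer_pos                 # cloud points relative to viewer
    along = rel @ direction                  # distance along the viewing ray
    perp = np.linalg.norm(rel - np.outer(along, direction), axis=1)
    return bool(np.any((perp < tolerance) & (0 < along) & (along < dist)))

# Example: a wall sample point midway between the viewer and the anchor.
viewer = np.zeros(3)
anchor = np.array([0.0, 10.0, 0.0])
cloud = np.array([[0.1, 5.0, 0.0]])   # one modelled point near the ray
print(anchor_occluded(anchor, cloud, viewer))  # True: draw x-ray style
```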
environments have to be prepared before an AR system is
able to track 6DOF movement, but not all tracking tech-
niques work in all environments. To this day, determining
the orientation of a user is still a complex problem with Modelling techniques Creating 3D models of large
no single best solution. environments is a research challenge in its own right. Au-
tomatic, semiautomatic, and manual techniques can be
2.2.1 Modelling environments employed, and Piekarski and Thomas (2001) even em-
ployed AR itself for modelling purposes. Conversely, a
Both tracking and registration techniques rely on envi- laser range finder used for environmental modelling may
ronmental models, often 3D geometrical models. To an- also enable users themselves to place notes into the envi-
notate for instance windows, entrances, or rooms, an AR ronment (Patel et al., 2006).
system needs to know where they are located with regard
to the users current position and field of view. There are significant research problems involved in
Sometimes the annotations themselves may be oc- both the modelling of arbitrary complex 3D spatial mod-
cluded based on environmental model. For instance when els as well as the organisation of storage and querying of
an annotated building is occluded by other objects, the such data in spatial databases. These databases may also
need to change quite rapidly as real environments are of-
11
http://www.iais.fraunhofer.de/ ten also dynamic.


Figure 11: Hand-held video see-through displays.

Figure 12: Hand-held optical and projective displays.

2.2.2 User movement tracking

Compared to virtual environments, AR tracking devices must have higher accuracy, a wider input variety and bandwidth, and longer ranges (Azuma, 1997). Registration accuracy depends not only on the geometrical model but also on the distance of the objects to be annotated: the further away an object is, (i) the less impact errors in position tracking have and (ii) the more impact errors in orientation tracking have on the overall misregistration (Azuma, 1999).
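This trade-off can be made concrete with a little arithmetic: a position error shifts an annotation by a constant amount regardless of range, while an orientation error shifts it in proportion to the distance. The numbers below are illustrative, not taken from the cited sources.

```python
import math

pos_err_m = 0.01        # assume a 1 cm position tracking error
orient_err_deg = 0.5    # assume a half-degree orientation error

for distance_m in (1, 10, 100):
    # Offset on the annotated object caused by the angular error alone.
    angular_shift = distance_m * math.tan(math.radians(orient_err_deg))
    print(f"{distance_m:>4} m: position error -> {pos_err_m * 100:.1f} cm, "
          f"orientation error -> {angular_shift * 100:.1f} cm")
# At 1 m the two errors are comparable (1.0 vs 0.9 cm); at 100 m the
# orientation error dominates completely (1.0 vs 87.3 cm).
```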
Tracking is usually easier in indoor settings than in outdoor settings, as the tracking devices do not have to be completely mobile and wearable or deal with shock, abuse, weather, etc. Instead, the indoor environment is easily modelled and prepared, and conditions such as lighting and temperature may be controlled. Currently, unprepared outdoor environments still pose tracking problems with no single best solution.

Mechanical, ultrasonic, and magnetic. Early tracking techniques are restricted to indoor use as they require special equipment to be placed around the user. The first HMD by Sutherland (1968) was tracked mechanically (Fig. 2) through ceiling-mounted hardware, also nicknamed the Sword of Damocles. Devices that send and receive ultrasonic chirps to determine position, i.e. ultrasonic positioning, were also already experimented with by Sutherland (1968) and are still used today. A decade or so later, Polhemus magnetic trackers that measure distances within electromagnetic fields were introduced by Raab et al. (1979). These too are still in use today and had much more impact on VR and AR research.

Global positioning systems. For outdoor tracking by global positioning system (GPS) there exist the American 24-satellite Navstar GPS (Getting, 1993), its Russian counterpart constellation Glonass, and the 30-satellite GPS Galileo, currently being launched by the European Union and expected to be operational in 2010. Direct visibility with at least four satellites is no longer necessary with assisted GPS12 (A-GPS), where a worldwide network of servers and base stations enables signal broadcast in, for instance, urban canyons and indoor environments. Plain GPS is accurate to about 10-15 meters, but with wide area augmentation system (WAAS) technology this may be improved to 3-4 meters. For more accuracy, the environments have to be prepared with a local base station that sends a differential error-correction signal to the roaming unit: differential GPS yields 1-3 meter accuracy, while real-time-kinematic (RTK) GPS, based on carrier-phase ambiguity resolution, can estimate positions accurately to within centimeters. Update rates of commercial GPS systems such as the MS750 RTK receiver by Trimble13 have increased from five to twenty times a second and are deemed suitable for tracking fast motion of people and objects (Hollerer and Feiner, 2004).
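To drive registration, GPS fixes in latitude and longitude must be converted into the metric coordinates of the local environment model. Over the short distances AR cares about, a flat-earth (equirectangular) approximation around a reference point is usually adequate, as sketched below; the earth-radius constant is standard, but the reference coordinates are invented.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean earth radius; accurate enough locally

def gps_to_local(lat, lon, ref_lat, ref_lon):
    """Approximate east/north offsets in meters from a reference fix.

    Valid for the short ranges relevant to AR registration; real systems
    refine the fix itself with differential or RTK corrections (see text).
    """
    d_lat = math.radians(lat - ref_lat)
    d_lon = math.radians(lon - ref_lon)
    north = d_lat * EARTH_RADIUS_M
    east = d_lon * EARTH_RADIUS_M * math.cos(math.radians(ref_lat))
    return east, north

# Roughly 68 m east and 111 m north of the (made-up) reference point:
print(gps_to_local(52.3341, 4.8661, 52.3331, 4.8651))
```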
Radio. Other tracking methods that require environment preparation by placing devices are based on ultra wide band radio waves. Active radio frequency identification (RFID) chips may be positioned inside structures such as aircraft (Willers, 2006) to allow in situ positioning. Complementary to RFID, one can apply the wide-area IEEE 802.11b/g standards for both wireless networking and tracking as well. The achievable resolution depends on the density of deployed access points in the network. Several techniques are researched by Bahl and Padmanabhan (2000) and Castro et al. (2001), and vendors like PanGo14 (Fig. 14), AeroScout15 and Ekahau16 offer integrated systems for personnel and equipment tracking in, for instance, hospitals.

Figure 14: An ultra wide band (UWB) based positioning system. © 2007 PanGo.

12 http://www.globallocate.com/
13 http://www.trimble.com/
14 http://www.pango.com/
15 http://aeroscout.com/
16 http://www.ekahau.com/


Figure 13: Spatial visual displays. Courtesy: Fraunhofer IMK/IAIS.

Inertial. Accelerometers and gyroscopes are sourceless inertial sensors, usually part of hybrid tracking systems, that do not require prepared environments. Timed measurements can provide a practical dead-reckoning method to estimate position when combined with accurate heading information. To minimise errors due to drift, the estimates must periodically be updated with accurate measurements. The act of taking a step can also be measured, i.e. these sensors can function as pedometers. Currently, micro-electromechanical (MEM) accelerometers and gyroscopes are already making their way into mobile phones (e.g. Samsung's SCH-S310 and Sharp's V603SH) to allow writing of phone numbers in the air, etc.
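Dead reckoning from inertial data is just repeated integration: acceleration to velocity to position, with each step compounding sensor error, which is why the text stresses periodic corrections from absolute sources. A bare-bones 1D sketch of the update, with invented sample values:

```python
def dead_reckon(position, velocity, accel, dt):
    """Advance a 1D position estimate by integrating acceleration twice.

    Any constant bias in `accel` grows quadratically into position error,
    so real systems periodically re-anchor on GPS or vision fixes.
    """
    velocity += accel * dt
    position += velocity * dt
    return position, velocity

pos, vel = 0.0, 0.0
for accel in (0.2, 0.2, 0.0, -0.2, -0.2):  # made-up accelerometer samples (m/s^2)
    pos, vel = dead_reckon(pos, vel, accel, dt=0.1)
print(round(pos, 4), round(vel, 4))  # net displacement from the burst of motion
```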
Optical. Promising approaches for 6DOF pose estimation of users and objects in general settings are vision-based. In closed-loop tracking, the field of view of the camera coincides with that of the user (e.g. in video see-through), allowing for pixel-perfect registration of virtual objects. Conversely, in open-loop tracking, the system relies only on the sensed pose of the user and the environmental model.

Using one or two tiny cameras, model-based approaches can recognise landmarks (given an accurate environmental model) or detect relative movement dynamically between frames. There are a number of techniques to detect scene geometry (e.g. landmark or template matching) and camera motion in both 2D (e.g. optical flow) and 3D, which require varying amounts of computation; Julier and Bishop (2002) combine some in a number of test scenarios. Although early vision-based tracking and interaction applications in prepared environments use fiducial markers (e.g. Naimark and Foxlin, 2002; Schmalstieg et al., 2000) or light emitting diodes (LEDs) to see how and where to register virtual objects, there is a growing body of research on markerless AR for tracking physical positions (Chia et al., 2002; Comport et al., 2003; Ferrari et al., 2001; Gordon and Lowe, 2004; Koch et al., 2005). Robustness is still low and computational costs high, but the results of these pure vision-based approaches (hybrid and/or markerless) for general-case, real-time tracking are very promising.
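Fiducial-marker tracking reduces to the perspective-n-point problem: given the known physical size of a square marker and its four detected corner pixels, solve for the camera pose relative to the marker. The sketch below uses OpenCV's generic solver; the corner pixel coordinates and camera intrinsics are made-up stand-ins for detector and calibration output.

```python
import numpy as np
import cv2

MARKER_SIZE = 0.08  # marker edge length in meters (assumed known)

# 3D corner positions in the marker's own coordinate frame.
object_pts = np.float32([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0], [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0], [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
])
# Corner pixels as a marker detector might report them (invented values).
image_pts = np.float32([[310, 220], [390, 224], [386, 302], [306, 298]])
# Pinhole camera intrinsics from a prior calibration (invented values).
K = np.float32([[800, 0, 320], [0, 800, 240], [0, 0, 1]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    print("camera-from-marker rotation (Rodrigues):", rvec.ravel())
    print("marker position in camera frame (m):", tvec.ravel())
```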
Hybrid. Commercial hybrid tracking systems became available during the 1990s and use for instance electromagnetic compasses (magnetometers), gravitational tilt sensors (inclinometers), and gyroscopes (mechanical and optical) for orientation tracking, alongside ultrasonic, magnetic, and optical position tracking. Hybrid tracking approaches are currently the most promising way to deal with the difficulties posed by general indoor and outdoor mobile AR environments (Hollerer and Feiner, 2004).


Azuma et al. (2006) investigate hybrid methods without vision-based tracking, suitable for military use at night in an outdoor environment, with less than ten beacons mounted on, for instance, unmanned air vehicles (UAVs).
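A common way to fuse such sensors is a complementary filter: the gyroscope's fast but drifting orientation estimate is continuously nudged toward the compass's slow but absolute heading. The blend factor below is a typical illustrative choice, not a value from the cited systems.

```python
ALPHA = 0.98  # trust the gyro short-term, the compass long-term

def fuse_heading(heading, gyro_rate, compass_heading, dt):
    """Complementary filter for one heading update (all values in degrees).

    Integrating the gyro alone drifts without bound; the compass is noisy
    but drift-free, so blending a small fraction of it in corrects each step.
    """
    integrated = heading + gyro_rate * dt
    return ALPHA * integrated + (1.0 - ALPHA) * compass_heading

heading = 90.0
for _ in range(50):  # gyro misses a slow turn; the compass reads 95 degrees
    heading = fuse_heading(heading, gyro_rate=0.0, compass_heading=95.0, dt=0.02)
print(round(heading, 2))  # creeps from 90 toward the true compass heading
```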
2.3 User interface and interaction

Besides registering virtual data with the user's real-world perception, the system needs to provide some kind of interface with both virtual and real objects.

New UI paradigm. WIMP (windows, icons, menus, and pointing), as the conventional desktop UI metaphor is referred to, does not apply that well to AR systems. Not only is interaction required with six degrees of freedom (6DOF) rather than in 2D, conventional devices like a mouse and keyboard are also cumbersome to wear and reduce the AR experience.

Like WIMP UIs, AR interfaces have to support selecting, positioning, and rotating of virtual objects, drawing paths or trajectories, assigning quantitative values (quantification) and text input. However, as a general UI principle, AR interaction also includes the selection, annotation, and, possibly, direct manipulation of physical objects. This computing paradigm is still a challenge (Hollerer and Feiner, 2004).

Figure 15: StudierStube's general-purpose Personal Interaction Panel with 2D and 3D widgets and a 6DOF pen (Schmalstieg et al., 2000).

Tangible UI and 3D pointing. Early mobile AR systems simply use mobile trackballs, trackpads and gyroscopic mice to support continuous 2D pointing tasks. This is largely because the systems still use a WIMP interface, and accurate gesturing to WIMP menus would otherwise require well-tuned motor skills from the users. Ideally, the number of extra devices that have to be carried around in mobile UIs is reduced, but this may be difficult with current mobile computing and UI technologies.

Devices like the mouse are tangible and unidirectional: they communicate from the user to the AR system only. Common 3D equivalents are tangible user interfaces (TUIs) like paddles and wands. Ishii and Ullmer (1997) discuss a number of tangible interfaces developed at MIT's Tangible Media Group17, including phicons (physical icons) and sliding instruments. Some TUIs have placeholders or markers on them so the AR system can replace them visually with virtual objects. Poupyrev et al. (2001) use tiles with fiducial markers, while in StudierStube, Schmalstieg et al. (2000) allow users to interact through a Personal Interaction Panel with 2D and 3D widgets that also recognises pen-based gestures in 6DOF (Fig. 15).

Figure 16: SensAble's PHANTOM Premium 3.0 6DOF haptic device.

Haptic UI and gesture recognition. TUIs with bi-directional, programmable communication through touch are called haptic UIs. Haptics is like teleoperation, but the remote slave system is purely computational, i.e. virtual; haptic devices are in effect robots with a single task: to interact with humans (Hayward et al., 2004). The haptic sense is divided into the kinaesthetic sense (force, motion) and the tactile sense (tact, touch). Force feedback devices like joysticks and steering wheels can suggest impact or resistance and are well known among gamers. A popular 6DOF haptic device in teleoperation and other areas is the PHANTOM (Fig. 16), which optionally provides 7DOF interaction through a pinch or scissors extension. Tactile feedback devices convey parameters such as roughness, rigidity, and temperature. Benali-Khoudja et al. (2004) survey tactile interfaces used in teleoperation, 3D surface simulation, games, etc.

17 http://tangible.media.mit.edu/


Common in indoor virtual or augmented environments is the use of additional orientation and position trackers to provide 6DOF hand tracking for manipulating virtual objects. For outdoor environments, Foxlin and Harrington (2000) experimented with ultrasonic tracking of finger-worn acoustic emitters using three head-worn microphones.

Figure 17: SensAble's CyberTouch tactile data glove provides pulses and vibrations to each finger.

Data gloves use diverse technologies to sense and actuate and are very reliable, flexible and widely used in VR for gesture recognition. In AR, however, they are suitable only for brief, casual use, as they impede the use of the hands in real-world activities and are somewhat awkward looking for general application. Buchmann et al. (2004) connected buzzers to the fingertips informing users whether they are touching a virtual object correctly for manipulation, much like the CyberGlove with CyberTouch by SensAble18 (Fig. 17).

Visual UI and gesture recognition. Instead of using hand-worn trackers, hand movement may also be tracked visually, leaving the hands unencumbered. A head-worn or collar-mounted camera pointed at the user's hands can be used for gesture recognition. Through gesture recognition, an AR system could automatically draw up reports of activities (Mayol and Murray, 2005). For 3D interaction, UbiHand19 uses wrist-mounted cameras to enable gesture recognition (Ahmad and Musilek, 2006), while the Mobile Augmented Reality Interface Sign Interpretation Language20 (Antoniac and Pulli, 2001) recognises hand gestures on a virtual keyboard displayed on the user's hand (Fig. 18).
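Most vision-based gesture recognisers bottom out in simple geometric tests on tracked feature positions. As a flavour of the idea, the sketch below flags a pinch gesture whenever tracked thumb and index fingertips come close enough together; the threshold and coordinates are invented, and a real hand tracker would supply the fingertip positions.

```python
import math

PINCH_THRESHOLD_M = 0.03  # fingertips closer than 3 cm count as a pinch

def is_pinch(thumb_tip, index_tip):
    """Classify a pinch from two tracked fingertip positions (x, y, z in m)."""
    return math.dist(thumb_tip, index_tip) < PINCH_THRESHOLD_M

# Fingertip positions as a hand tracker might report them (invented):
print(is_pinch((0.10, 0.20, 0.40), (0.11, 0.21, 0.41)))  # True, ~1.7 cm apart
print(is_pinch((0.10, 0.20, 0.40), (0.18, 0.25, 0.45)))  # False, ~10.7 cm apart
```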
Cameras are also useful to record and document the user's view, e.g. for providing a live video feed for teleconferencing, for informing a remote expert about the findings of AR field-workers, or simply for documenting and storing everything that is taking place in front of the mobile AR system user.

Gaze tracking. Using tiny cameras to observe users' pupils and determine the direction of their gaze is a technology with potential for AR. The difficulties are that it needs to be incorporated into the eye-wear, calibrated to the user to filter out involuntary eye movement, and positioned at a fixed distance. With enough error correction, gaze-tracking alternatives for the mouse such as Stanford's EyePoint21 (Kumar and Winograd, 2007) provide a dynamic history of users' interests and intentions that may help the UI adapt to future contexts.

Aural UI and speech recognition. To reach the ideal of an inconspicuous UI, auditory UIs may become an important part of the solution. Microphones and earphones are easily hidden and allow auditory UIs to deal with speech recognition, speech recording for human-to-human interaction, audio information presentation, and audio dialogue. Although noisy environments pose problems, audio can be valuable in multimodal and multimedia UIs.

Text input. Achieving fast and reliable text input to a mobile computer remains hard. Standard keyboards require much space and a flat surface, and current commercial options such as small, foldable, inflatable, or laser-projected virtual keyboards are cumbersome, while soft keyboards take up valuable screen space. Popular choices in the mobile community include chording keyboards such as the Twiddler2 by Handykey22, which require key combinations to encode a single character. Of course, mobile AR systems based on hand-held devices like tablet PCs, PDAs or mobile phones already support alphanumeric input through keypads or pen-based handwriting recognition (facilitated by e.g. dictionaries or shape-writing technologies), but this cannot be applied in all situations. Glove-based and vision-based hand gesture tracking do not yet provide the ease of use and accuracy needed for serious adoption. Speech recognition, however, has improved over the years in both speed and accuracy and, when combined with a fall-back device (e.g. pen-based systems or special-purpose chording or miniature keyboards), may be a likely candidate for providing text input to mobile devices in a wide variety of situations (Hollerer and Feiner, 2004).

18 http://www.sensable.com/
19 http://www.ece.ualberta.ca/~fahmad
20 http://marisil.org/
21 http://hci.stanford.edu/research/GUIDe/
22 http://www.handykey.com/


Figure 18: Mobile Augmented Reality Interface Sign Interpretation Language. © 2004 Peter Antoniac.

Hybrid UI. With each modality having its drawbacks and benefits, AR systems are likely to use a multimodal UI. A synchronised combination of, for instance, gestures, speech, sound, vision and haptics may provide users with a more natural and robust, yet predictable UI.

Context awareness. The display and tracking devices discussed earlier already provide some advantages for an AR interface. A mobile AR system is aware of the user's position and orientation and can adjust the UI accordingly. Such context awareness can reduce UI complexity, for example by dealing only with virtual or real objects that are nearby or within visual range.
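Such filtering can be as simple as culling content against the tracked pose: keep only annotations that lie within some radius and inside the user's horizontal field of view. The following sketch illustrates this; the radius, field-of-view, and annotation data are invented.

```python
import math

def visible(annotations, user_xy, heading_deg, max_dist=50.0, fov_deg=60.0):
    """Keep annotations within `max_dist` meters and inside the user's
    horizontal field of view (a simple context-awareness filter)."""
    kept = []
    for name, (x, y) in annotations:
        dx, dy = x - user_xy[0], y - user_xy[1]
        if math.hypot(dx, dy) > max_dist:
            continue  # too far away to matter right now
        bearing = math.degrees(math.atan2(dx, dy))       # 0 deg = north
        off = (bearing - heading_deg + 180.0) % 360.0 - 180.0
        if abs(off) <= fov_deg / 2:
            kept.append(name)
    return kept

pois = [("cafe", (5.0, 20.0)), ("station", (-80.0, 10.0)), ("shop", (30.0, -5.0))]
print(visible(pois, user_xy=(0.0, 0.0), heading_deg=0.0))  # ['cafe']
```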
Towards human-machine symbiosis. Another class of sensors gathers information about the user's state. Biometric devices can measure heart rate and bioelectric signals, such as galvanic skin response, electroencephalogram (neural activity), or electromyogram (muscle activity) data, in order to monitor biological activity. Affective computing (Picard, 1997) aims to make computers more aware of the emotional state of their users and able to adapt accordingly. Although the future may hold human-machine symbioses (Licklider, 1960), current integration of UI technology is restricted to devices that are worn or perhaps embroidered to create computationally aware clothes (Farringdon et al., 1999).

2.4 More AR requirements

Besides tracking, registration, and interaction, Hollerer and Feiner (2004) mention three more requirements for a mobile AR system: a computational framework, wireless networking, and data storage and access technology. Content is of course also required, so some authoring tools are mentioned here as well.

Figure 19: Typical AR system framework tasks.

Frameworks. AR systems have to perform some typical tasks like tracking, sensing, display and interaction (Fig. 19). These can be supported by fast prototyping frameworks that are developed independently from their applications. Easy integration of AR devices and quick creation of user interfaces can be achieved with frameworks like the ARToolKit23, probably the best known and most widely used. Other frameworks include COTERIE24, StudierStube25 (Szalavari et al., 1998), DWARF26, D'Fusion by Total Immersion27, etc.

Networks and databases. AR systems usually present a lot of knowledge to the user, which is obtained through networks. Especially mobile and collaborative AR systems will require suitable (wireless) networks to support data retrieval and multi-user interaction over larger distances. Moving the computation load to remote servers is one approach to reduce the weight and bulk of mobile AR systems (Behringer et al., 2000; Mann, 1997). How to get the most relevant information from databases with the least effort, and how to minimise information presentation, are still open research questions.

23 http://artoolkit.sourceforge.net/
24 http://www.cs.columbia.edu/graphics/projects/coterie/
25 http://studierstube.icg.tu-graz.ac.at/
26 http://www.augmentedreality.de/
27 http://www.t-immersion.com/


Content. The author believes that the commercial success of AR systems will depend heavily on the available types of content. Scientific and industrial applications are usually based on specialised content, but presenting commercial content to the common user will remain a challenge if AR is not applied in everyday life.

Some of the available AR authoring tools are the CREATE tool from Information in Place28, the DART toolkit29 and the MARS Authoring Tool30. Companies like Thinglab31 assist in 3D scanning or digitising of objects. Optical capture systems, capture suits, and other tracking devices available at companies like Inition32 are tools for creating live AR content beyond simple annotation.

Creating or recording dynamic content could benefit from techniques already developed in the movie and games industries, but also from accessible 3D drawing software like Google SketchUp33. Storing and replaying user experiences is a valuable extension to MR system functionality and is provided for instance in HyperMem (Correia and Romero, 2006).

3 Applications

Over the years, researchers and developers have found more and more areas that could benefit from augmentation. The first systems focused on military, industrial and medical applications, but AR systems for commercial use and entertainment appeared soon after. Which of these applications will trigger wide-spread use is anybody's guess. This section discusses some areas of application, grouped similarly to the ISMAR 2007 symposium34 categorisation.

3.1 Personal information systems

Hollerer and Feiner (2004) believe one of the biggest potential markets for AR could prove to be personal wearable computing. AR may serve as an advanced, immediate, and more natural UI for wearable and mobile computing in personal, daily use. For instance, AR could integrate phone and email communication with context-aware overlays, manage personal information related to specific locations or people, provide navigational guidance, and provide a unified control interface for all kinds of appliances in and around the home. Such a platform also presents direct marketing agencies with many opportunities to offer coupons to passing pedestrians, place virtual billboards, show virtual prototypes, etc. With all these different uses, AR platforms should preferably offer a filter to manage what content they display.

Figure 20: Personal Awareness Assistant. © Accenture.

Personal assistance. Available from Accenture is the Personal Awareness Assistant (Fig. 20), which automatically stores names and faces of people you meet, cued by phrases such as 'how do you do'. Speech recognition also provides a natural interface to retrieve the information that was recorded earlier. Journalists, police, geographers and archaeologists could use AR to place notes or signs in the environment they are reporting on or working in.

Figure 21: Pedestrian navigation (Narzt et al., 2003) and traffic warning (Tonnis et al., 2005).

Navigation. Navigation in prepared environments has been tried and tested for some time. Rekimoto (1997) presented NaviCam for indoor use, which augmented a video stream from a hand-held camera using fiducial markers for position tracking. Starner et al. (1997) consider applications and limitations of AR for wearable computers, including problems of finger tracking and facial recognition. Narzt et al. (2003, 2006) discuss navigation paradigms for (outdoor) pedestrians (Fig. 21a) and cars that overlay routes, highway exits, follow-me cars, dangers, fuel prices, etc. They prototyped video see-through PDAs and mobile phones and envision eventual use in car windshield head-up displays. Tonnis et al. (2005) investigate the success of using AR warnings to direct a car driver's attention towards danger (Fig. 21b). Kim et al. (2006) describe how a 2D traveller guidance service can be made 3D using GIS data for AR navigation. Nokia's MARA project35 researches deployment of AR on current mobile phone technology.

28 http://www.informationinplace.com/
29 http://www.gvu.gatech.edu/dart/
30 http://www.cs.columbia.edu/graphics/projects/mars/
31 http://www.thinglab.co.uk/
32 http://www.inition.co.uk/
33 http://www.sketchup.com/
34 http://www.ismar07.org/
35 http://research.nokia.com/research/projects/mara/


Touring. Hollerer et al. (1999) use AR to create situated documentaries about historic events, while Vlahakis et al. (2001) present the ArcheoGuide project, which reconstructs a cultural heritage site in Olympia, Greece. With this system, visitors can view and learn about ancient architecture and customs. Similar systems have been developed for the Pompeii site (Magnenat-Thalmann and Papagiannakis, 2005; Papagiannakis et al., 2005). The lifeClipper36 project does about the same for structures and technologies in medieval Germany and is moving from an art project to serious AR exhibition. Bartie and Mackaness (2006) introduced a touring system to explore landmarks in the cityscape of Edinburgh that works with speech recognition.

3.2 Industrial and military applications

Design, assembly, and maintenance are typical areas where AR may prove useful. These activities may be augmented in both corporate and military settings.

Figure 22: Spacedesign (Fiorentino et al., 2002) and Clear and Present Car (Tamura, 2002; Tamura et al., 2001).

Design. Fiorentino et al. (2002) introduced the SpaceDesign MR workspace (based on the StudierStube framework) that allows, for instance, visualisation and modification of car body curvature and engine layout (Fig. 22a). Volkswagen intends to use AR for comparing calculated and actual crash test imagery (Friedrich and Wohlgemuth, 2002). The MR Lab used data from DaimlerChrysler's cars to create Clear and Present Car, a simulation where one can open the door of a virtual concept car and experience the interior, dashboard layout and interface design for usability testing (Tamura, 2002; Tamura et al., 2001). Notice how the steering wheel is drawn around the hands, rather than over them (Fig. 22b).

Figure 23: Robot sensor data visualisation (Collett and MacDonald, 2006).

Another interesting application, presented by Collett and MacDonald (2006), is the visualisation of robot programs (Fig. 23). With small robots such as the automated vacuum cleaner Roomba from iRobot37 entering our daily lives, visualising their sensor ranges and intended trajectories might be a welcome extension.

Assembly. BMW is experimenting with AR to improve welding processes on their cars (Sandor and Klinker, 2005). Assisting the production process at Boeing, Mizell (2001) uses AR to overlay schematic diagrams and accompanying documentation directly onto the wooden boards on which electrical wires are routed, bundled, and sleeved. Curtis et al. (1998) verify the AR approach and find that workers using AR create wire bundles as well as with conventional approaches, even though tracking and display technologies were limited at the time.

Figure 24: Airbus water system assembly (Willers, 2006).

At EADS, support for the EuroFighter's nose gear assembly is researched (Friedrich and Wohlgemuth, 2002), while Willers (2006) researches AR support for Airbus cable and water systems (Fig. 24). Leading (and talking) workers through the assembly process of large aircraft is not suited to stationary AR solutions, yet mobility and tracking with so much metal around also prove to be challenging.

36 http://www.torpus.com/lifeclipper/
37 http://www.irobot.com/


An extra benefit of augmented assembly and construction is the possibility to monitor and schedule individual progress in order to manage large, complex construction projects. An example by Feiner et al. (1999) generates overview renderings of the entire construction scene while workers use their HMDs to see which strut is to be placed where in a space-frame structure. Distributed interaction on construction is further studied by Olwal and Feiner (2004).

Maintenance. Complex machinery or structures require a lot of skill from maintenance personnel, and AR is proving useful in this area, for instance in providing x-ray vision or automatically probing the environment with extra sensors to direct the user's attention to problem sites. Klinker et al. (2001) present an AR system for the inspection of power plants at Framatome ANP (today AREVA). Friedrich and Wohlgemuth (2002) show the intention to support electrical troubleshooting of vehicles at Ford, and according to a MicroVision employee38, Honda and Volvo ordered Nomad Expert Vision Technician systems to assist their technicians with vehicle history and repair information (Kaplan-Leiserson, 2004).

Combat and simulation. Satellite navigation, head-up displays for pilots, and also much of the current AR research at universities and corporations are the result of military funding. Companies like Information in Place have contracts with the Army, Air Force and Coast Guard, as land warrior and civilian use of AR may overlap in, for instance, navigational support, communications enhancement, repair and maintenance, and emergency medicine. Extra benefits specific to military users may be training in large-scale combat scenarios and simulating real-time enemy action, as in the Battlefield Augmented Reality System (BARS) by Julier et al. (1999) and research by Piekarski et al. (1999). Not overloading the user with too much information is critical and is being studied by Julier et al. (2000). The BARS system also provides tools to author the environment with new 3D information that other system users see in turn (Baillot et al., 2001). Azuma et al. (2006) investigate the projection of reconnaissance data from unmanned air vehicles for land warriors.

3.3 Medical applications

Similar to maintenance personnel, roaming nurses and doctors could benefit from important information being delivered directly to their glasses (Hasvold, 2002). Surgeons, however, require very precise registration, while AR system mobility is less of an issue.

Figure 25: Simulated visualisation in laparoscopic surgery for left and right eye (Fuchs et al., 1998).

An early optical see-through augmentation is presented by Fuchs et al. (1998) for laparoscopic surgery39, where the overlaid view of the laparoscopes inserted through small incisions is simulated (Fig. 25). Pietrzak et al. (2006) confirm that the use of 3D imagery in laparoscopic surgery still has to prove itself, but the opportunities are well documented.

Figure 26: AR overlay of a medical scan (Merten, 2007).

There are many AR approaches being tested in medicine, with live overlays of ultrasound, CT, and MR scans. Navab et al. (1999) take advantage of the physical constraints of a C-arm x-ray machine to automatically calibrate the cameras with the machine and register the x-ray imagery with the real objects. Vogt et al. (2006) use a video see-through HMD to overlay MR scans on heads and provide views of tool manipulation hidden beneath tissue and surfaces, while Merten (2007) gives an impression of MR scans overlaid on feet (Fig. 26).

38 http://microvision.blogspot.com/2004/05/looking-up.html
39 Laparoscopic surgery uses slender camera systems (laparoscopes) and instruments inserted in the abdomen and/or pelvis cavities through small incisions for reduced patient recovery times.


3.4 AR for entertainment

Like VR, AR can be applied in the entertainment industry to create AR games, but also to increase the visibility of important game aspects in live sports broadcasting. In these cases, where a large public is reached, AR can also serve advertisers to show virtual ads and product placements.

Figure 27: AR in live sports broadcasting: NASCAR racing and football (Azuma et al., 2001).

Sports broadcasting. Swimming pools, football fields, race tracks and other sports environments are well known and easily prepared, which makes video see-through augmentation through tracked camera feeds easy. One example is the FoxTrax system (Cavallaro, 1997), used to highlight the location of a hard-to-see hockey puck as it moves rapidly across the ice, but AR is also applied to annotate racing cars (Fig. 27a), snooker ball trajectories, live swimmer performances, etc. Thanks to predictable environments (uniformed players on a green, white, and brown field) and chroma-keying techniques, the annotations are shown on the field and not on the players (Fig. 27b).
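Chroma keying exploits exactly that predictability: overlay pixels are only applied where the broadcast frame shows field colours, so players in other colours stay on top of the graphics. Below is a toy version of the mask test, with invented colour bounds standing in for the chroma range a broadcaster would calibrate per venue and lighting.

```python
import numpy as np

# Field-colour bounds in RGB (invented illustrative values).
FIELD_LO = np.array([0, 90, 0], dtype=np.uint8)
FIELD_HI = np.array([110, 200, 110], dtype=np.uint8)

def key_overlay(frame, overlay):
    """Draw overlay only on field-coloured pixels, leaving players intact."""
    on_field = np.all((frame >= FIELD_LO) & (frame <= FIELD_HI), axis=-1)
    out = frame.copy()
    out[on_field] = overlay[on_field]  # annotation appears 'behind' players
    return out

field_px = np.array([60, 150, 60], dtype=np.uint8)    # grass green
player_px = np.array([200, 30, 30], dtype=np.uint8)   # red shirt
frame = np.stack([field_px, player_px])[np.newaxis]   # a 1x2-pixel frame
overlay = np.full((1, 2, 3), 255, dtype=np.uint8)     # white line graphic
print(key_overlay(frame, overlay))  # line drawn on grass, not on the shirt
```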
Games. Extending a platform for military simulation (Piekarski et al., 1999) based on the ARToolKit, Thomas et al. (2000) created ARQuake, where mobile users fight virtual enemies in a real environment. A general-purpose outdoor AR platform, Tinmith-Metro, evolved from this work and is available at the Wearable Computer Lab40 (Piekarski and Thomas, 2001, 2002). At A-Rage41 a similar platform for outdoor games such as Sky Invaders is available, and Cheok et al. (2002) introduce the adventurous Game-City. Crabtree et al. (2004) and Flintham et al. (2003) discuss experiences with the mobile MR game Bystander, in which virtual online players avoid capture by real-world cooperating runners.

Figure 28: Mobile AR tennis with the phones used as rackets (Henrysson et al., 2005).

A number of games have been developed for prepared indoor environments, such as the alien-battling AquaGauntlet (Tamura et al., 2001), dolphin-juggling ContactWater, ARHockey, and 2001 AR Odyssey (Tamura, 2002). In AR-Bowling, Matysczok et al. (2004) study game-play, and Henrysson et al. (2005) created AR tennis for the Nokia mobile phone (Fig. 28). Early AR games also include AR air hockey (Ohshima et al., 1998), collaborative combat against virtual enemies (Ohshima et al., 1999), and an AR-enhanced pool game (Jebara et al., 1997).

3.5 AR for the office

Besides games, collaboration in office spaces is another area where AR may prove useful, for example in public management or crisis situations, urban planning, etc.

Collaboration. Having multiple people view, discuss, and interact with 3D models simultaneously is a major potential benefit of AR. Collaborative environments allow seamless integration with existing tools and practices and enhance practice by supporting remote and collocated activities that would otherwise be impossible (Billinghurst and Kato, 1999). Benford et al. (1998) name four examples where shared MR spaces may apply: doctors diagnosing 3D scan data, construction engineers discussing plans and progress data, environmental planners discussing geographical data and urban development, and distributed control rooms such as Air Traffic Control operating through a common visualisation.

Augmented Surfaces by Rekimoto and Saitoh (1999) leaves users unencumbered but is limited to adding virtual information to the projected surfaces. Examples of collaborative AR systems using see-through displays include both those that use see-through hand-held displays, such as Transvision (Rekimoto, 1996) and the MagicBook (Billinghurst et al., 2001), and those that use see-through head-worn displays, such as Emmie (Butz et al., 1999), StudierStube (Szalavari et al., 1998), MR2 (Tamura, 2002) and ARTHUR (Broll et al., 2004).

40 http://www.tinmith.net/
41 http://www.a-rage.com/
16
displays (such as Emmie (Butz et al., 1999), StudierStube (Szalavári et
al., 1998), MR2 (Tamura, 2002) and ARTHUR (Broll et al., 2004)). Privacy
management is handled in the Emmie system through such metaphors as
lamps and mirrors. Making sure everybody knows what someone is pointing
at is a problem that StudierStube overcomes by using virtual
representations of physical pointers. Similarly, Tamura (2002) presented
a mixed reality meeting room (MR2) for 3D presentations (Fig. 29a). For
urban planning purposes, Broll et al. (2004) introduced ARTHUR, complete
with pedestrian flow visualisation (Fig. 29b) but lacking augmented
pointers.

Figure 29: MR2 (Tamura, 2002) and ARTHUR (Broll et al., 2004).

3.6 Education and training

Close to the collaborative applications mentioned earlier, like games
and planning, are AR tools that support education with 3D objects. Many
studies research this area of application (Billinghurst, 2003;
Heidemann et al., 2005; Hughes et al., 2005; Kommers and Zhiming;
Kritzenberger et al., 2002; Pan et al., 2006; Winkler et al., 2003).

Kaufmann (2002) and Kaufmann et al. (2000) introduce the Construct3D
tool for math and geometry education, based on the StudierStube
framework (Fig. 30a). In MARIE (Fig. 30b), based in turn on the
Construct3D tool, Liarokapis et al. (2004) employ screen-based AR with
Web3D to support engineering education. Recently the MIT Education
Arcade introduced game-based learning in Mystery at the Museum and
Environmental Detectives, where each educative game has an engaging
backstory, differentiated character roles, reactive third parties,
guided debriefing, synthetic activities, and embedded recall/replay to
promote both engagement and learning (Kaplan-Leiserson, 2004).

Figure 30: Construct3D (Kaufmann et al., 2000) and MARIE (Liarokapis et
al., 2004).

4 Limitations

AR faces technical challenges regarding, for example, binocular (stereo)
view, high resolution, colour depth, luminance, contrast, field of view,
and focus depth. However, before AR becomes accepted as part of users'
everyday life, just like the mobile phone and the personal digital
assistant (PDA), issues regarding intuitive interfaces, costs, weight,
power usage, ergonomics, and appearance must also be addressed. A number
of limitations, some of which have been mentioned earlier, are
categorised here.

Portability and outdoor use  Most mobile AR systems mentioned in this
survey are cumbersome, requiring a heavy backpack to carry the PC,
sensors, display, batteries, and everything else. Connections between
all the devices must be able to withstand outdoor use, including weather
and shock, but universal serial bus (USB) connectors are known to fail
easily. However, recent developments in mobile technology like cell
phones and PDAs are bridging the gap towards mobile AR.

Optical and video see-through displays are usually unsuited for outdoor
use due to low brightness, contrast, resolution, and field of view.
However, laser-powered displays recently developed at MicroVision offer
a new dimension in head-mounted and hand-held displays that overcomes
this problem.

Most portable computers have only one CPU, which limits the amount of
visual and hybrid tracking. More generally, consumer operating systems
are not suited for real-time computing, while specialised real-time
operating systems don't have the drivers to support the sensors and
graphics in modern hardware.

Tracking and (auto)calibration  Tracking in unprepared environments
remains a challenge, but hybrid approaches are becoming small enough to
be added to mobile phones or PDAs. Calibration of these devices is still
complicated and extensive, but this may be solved
through calibration-free or auto-calibrating approaches that minimise
set-up requirements. The latter use redundant sensor information to
automatically measure and compensate for changing calibration parameters
(Azuma et al., 2001).
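As a toy illustration of this redundant-sensor idea (the sketch, its
names and its gain value are the author's own illustration, not the
algorithm of Azuma et al. or any cited system), a slowly drifting
gyroscope bias can be re-estimated online whenever a drift-free but
low-rate vision-based orientation fix is available:

    class GyroAutoCalibrator:
        """Toy auto-calibration: gyro-integrated yaw drifts with the
        sensor's bias; a redundant, drift-free measurement (e.g. from
        visual marker tracking) lets us re-estimate that bias online."""

        def __init__(self, gain=0.05):
            self.bias = 0.0   # estimated gyro bias (deg/s)
            self.yaw = 0.0    # integrated yaw (deg)
            self.gain = gain  # fraction of residual attributed to bias change

        def on_gyro(self, rate_deg_s, dt):
            # High-rate path: integrate the bias-corrected angular rate.
            self.yaw += (rate_deg_s - self.bias) * dt

        def on_vision_fix(self, yaw_vision, dt_since_last_fix):
            # Low-rate path: residual drift accumulated since the last
            # fix is partly blamed on a changed bias; then re-anchor.
            residual = self.yaw - yaw_vision
            self.bias += self.gain * residual / max(dt_since_last_fix, 1e-3)
            self.yaw = yaw_vision

In the same spirit, full auto-calibration would estimate many more
parameters (alignments, offsets, focal lengths) from whatever sensor
redundancy is available.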
Latency  System delays are a large source of dynamic registration errors
(Azuma et al., 2001). Techniques like precalculation, temporal stream
matching (in video see-through applications such as live broadcasts),
and prediction of future viewpoints may compensate for some delay.
System latency can also be scheduled to reduce errors through careful
system design, and pre-rendered images may be shifted at the last
instant to compensate for pan-tilt motions. Similarly, image warping may
correct delays in 6DOF motion (both translation and rotation).
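A minimal sketch of two of these ideas, viewpoint prediction and the
last-instant shift of a pre-rendered image, under a pinhole camera model
(the constant-rate assumption and all parameter values below are the
author's illustration, not a cited design):

    import math

    def predict_angle(angle_rad, angular_rate, latency_s):
        # Extrapolate a head angle over the known end-to-end latency,
        # assuming the angular rate stays constant over that interval.
        return angle_rad + angular_rate * latency_s

    def late_pixel_shift(yaw_err_rad, pitch_err_rad, focal_px):
        # Convert the residual pan-tilt error, measured just before
        # scan-out, into a 2D shift of the pre-rendered image (pixels),
        # using a pinhole model with the focal length given in pixels.
        return (focal_px * math.tan(yaw_err_rad),
                focal_px * math.tan(pitch_err_rad))

    # A 60 deg/s head turn seen through 50 ms of latency leaves a
    # 3 degree error; with focal_px = 800 that is roughly a 42-pixel
    # shift, which can still be corrected just before display.
    dx, dy = late_pixel_shift(math.radians(60.0 * 0.050), 0.0, 800.0)

Such a 2D shift is exact only for pure rotation; translation errors
require the image warping mentioned above.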
Depth perception  One difficult registration problem is accurate depth
perception. Stereoscopic displays help, but additional problems,
including accommodation-vergence conflicts or low resolution and dim
displays, cause objects to appear further away than they should be
(Drascic and Milgram, 1996). Correct occlusion ameliorates some depth
problems (Rolland and Fuchs, 2000), as does consistent registration for
different eyepoint locations (Vaissie and Rolland, 2000).

Adaptation  In early video see-through systems with a parallax, users
need to adapt to vertically displaced viewpoints. In an experiment by
Biocca and Rolland (1998), subjects exhibit a large overshoot in a
depth-pointing task after removing the HMD.

Fatigue and eye strain  Like the parallax problem, biocular displays
(where both eyes see the same image) cause significantly more discomfort
than monocular or binocular displays, both in eye strain and fatigue
(Ellis et al., 1997).
Overload and over-reliance  Aside from technical challenges, the user
interface must also follow some guidelines so as not to overload the
user with information, while also preventing the user from relying so
heavily on the AR system that important cues from the environment are
missed (Tang et al., 2003). At BMW, Bengler and Passaro (2004) use
guidelines for AR system design in cars, including orientation on the
driving task, no moving or obstructing imagery, adding only information
that improves driving performance, avoiding side effects like tunnel
vision and cognitive capture, and using only information that does not
distract, intrude or disturb given different situations.

Social acceptance  Getting people to use AR may be more challenging than
expected, and many factors play a role in the social acceptance of AR,
ranging from unobtrusive fashionable appearance (gloves, helmets, etc.)
to privacy concerns. For instance, Accenture's Assistant (Fig. 20)
blinks a light when it records, for the sole purpose of alerting the
person who is being recorded. These fundamental issues must be addressed
before AR is widely accepted (Höllerer and Feiner, 2004).

5 Conclusion

This essay presented a state-of-the-art survey of AR technologies,
applications and limitations. The comparative table on displays
(Table 1) and the survey of frameworks and content authoring tools
(Section 2.4) are specifically new to the AR research literature. With
over a hundred references, this essay has become a comprehensive
investigation into AR and hopefully provides a suitable starting point
for readers new to the field.

AR has come a long way but still has some distance to go before
industries, the military and the general public will accept it as a
familiar user interface. For example, Airbus CIMPA still struggles to
get their AR systems for assembly support accepted by the workers
(Willers, 2006). On the other hand, companies like Information in Place
estimate that by 2014, 30% of mobile workers will be using augmented
reality. Within 5-10 years, Feiner (2002) believes, augmented reality
will have a more profound effect on the way in which we develop and
interact with future computers. With the advent of such complementary
technologies as tactile networks by RedTacton42, artificial
intelligence, cybernetics, and (non-invasive) brain-computer interfaces,
AR might soon pave the way for ubiquitous (anytime-anywhere) computing
(Weiser, 1991) of a more natural kind (Abowd and Mynatt, 2000), or even
for the human-machine symbiosis envisioned by Licklider (1960).

42 http://www.redtacton.com/

References

Abowd, G. D. and Mynatt, E. D. Charting past, present, and future
research in ubiquitous computing. ACM Trans. on Computer-Human
Interaction 7, 1 (Mar. 2000), pp. 29-58.

Ahmad, F. and Musilek, P. A keystroke and pointer control input
interface for wearable computers. In PerCom06: Proc. 4th IEEE Int'l
Conf. Pervasive Computing and Communications (2006), pp. 2-11.

Antoniac, P. and Pulli, P. MARISIL: mobile user interface framework for
virtual enterprise. In Proc. 7th Int'l Conf. Concurrent Enterprising
(Bremen, June 2001), pp. 171-180.

Azuma, R. T. A survey of augmented reality. Presence 6, 4 (Aug. 1997),
pp. 355-385.

Azuma, R. T. Mixed Reality: Merging Real and Virtual Worlds.
Ohmsha/Springer, Tokyo/New York, 1999, ch. The challenge of making
augmented reality work outdoors, pp. 379-390. ISBN 3-540-65623-5.

Azuma, R. T., Baillot, Y., Behringer, R., Feiner, S. K., Julier, S. and
MacIntyre, B. Recent advances in augmented reality. Computer Graphics
and Applications 21, 6 (Nov./Dec. 2001), pp. 34-47.

Azuma, R. T., Neely III, H., Daily, M. and Leonard, J. Performance
analysis of an outdoor augmented reality tracking system that relies
upon a few mobile beacons. In ISMAR06: Proc. 5th Int'l Symp. on Mixed
and Augmented Reality (Santa Barbara, CA, USA, Oct. 2006), pp. 101-104.

Bahl, P. and Padmanabhan, V. N. RADAR: an in-building RF-based user
location and tracking system. In Proc. IEEE Infocom (Tel Aviv, Israel,
2000), IEEE CS Press, pp. 775-784. ISBN 0-7803-5880-5.

Baillot, Y., Brown, D. and Julier, S. Authoring of physical models
using mobile computers. In ISWC01: Proc. 5th Int'l Symp. on Wearable
Computers (Washington, DC, USA, 2001), IEEE CS Press, p. 39. ISBN
0-7695-1318-2.

Bartie, P. J. and Mackaness, W. A. Development of a speech-based
augmented reality system to support exploration of cityscape. Trans. in
GIS 10, 1 (2006), pp. 63-86.

Behringer, R., Tam, C., McGee, J., Sundareswaran, S. and Vassiliou, M.
Wearable augmented reality testbed for navigation and control, built
solely with commercial-off-the-shelf (COTS) hardware. In ISAR00: Proc.
Int'l Symp. Augmented Reality (Washington, DC, USA, Oct. 2000), IEEE CS
Press, pp. 12-19.

Bell, B., Feiner, S. and Höllerer, T. View management for virtual and
augmented reality. In UIST01: Proc. ACM Symp. on User Interface
Software and Technology (Orlando, FL, 2001), vol. 3 of CHI Letters, ACM
Press, pp. 101-110.

Benali-Khoudja, M., Hafez, M., Alexandre, J.-M. and Kheddar, A. Tactile
interfaces: a state-of-the-art survey. In ISR04: Proc. 35th Int'l Symp.
on Robotics (Paris, France, Mar. 2004).

Benford, S., Greenhalgh, C., Reynard, G., Brown, C. and Koleva, B.
Understanding and constructing shared spaces with mixed-reality
boundaries. ACM Trans. on Computer-Human Interaction 5, 3 (Sep. 1998),
pp. 185-223.

Bengler, K. and Passaro, R. Augmented reality in cars: Requirements and
constraints. In ISMAR04: Proc. 3rd Int'l Symp. on Mixed and Augmented
Reality (2004).

Billinghurst, M. Augmented reality in education. New Horizons in
Learning 9, 1 (Oct. 2003).

Billinghurst, M. and Kato, H. Collaborative mixed reality. In ISMR99:
Proc. Int'l Symp. on Mixed Reality (Yokohama, Japan, Mar. 1999),
Springer Verlag, pp. 261-284.

Billinghurst, M., Kato, H. and Poupyrev, I. The MagicBook: moving
seamlessly between reality and virtuality. Computer Graphics and
Applications 21, 3 (May/June 2001), pp. 6-8.

Bimber, O. and Raskar, R. Modern approaches to augmented reality. In
SIGGRAPH05 Tutorial on Spatial AR (2005a).

Bimber, O. and Raskar, R. Spatial Augmented Reality: Merging Real and
Virtual Worlds. A. K. Peters, Ltd., Wellesley, MA, USA, 2005b. ISBN
1-56881-230-2.

Biocca, F. A. and Rolland, J. P. Virtual eyes can rearrange your body:
adaptation to visual displacement in see-through, head-mounted
displays. Presence 7, 3 (June 1998), pp. 262-277.

Broll, W., Lindt, I., Ohlenburg, J., Wittkämper, M., Yuan, C., Novotny,
T., Fatah gen. Schieck, A., Mottram, C. and Strothmann, A. ARTHUR: A
collaborative augmented environment for architectural design and urban
planning. Journal of Virtual Reality and Broadcasting 1, 1 (Dec. 2004).

Buchmann, V., Violich, S., Billinghurst, M. and Cockburn, A. FingARtips:
Gesture based direct manipulation in augmented reality. In GRAPHITE04:
Proc. 2nd Int'l Conf. on Computer graphics and interactive techniques
(Singapore, 2004), ACM Press, pp. 212-221. ISBN 1-58113-883-0.


Butz, A., Höllerer, T., Feiner, S., MacIntyre, B. and Beshers, C.
Enveloping users and computers in a collaborative 3D augmented reality.
In IWAR99: Proc. 2nd Int'l Workshop on Augmented Reality (Washington,
DC, USA, 1999), IEEE CS Press, pp. 35-44. ISBN 0-7695-0359-4.

Cakmakci, O. and Rolland, J. Head-worn displays: A review. Display
Technology 2, 3 (Sep. 2006), pp. 199-216. Invited.

Castro, P., Chiu, P., Kremenek, T. and Muntz, R. A probabilistic room
location service for wireless networked environments. In UbiComp01:
Proc. Ubiquitous Computing (New York, NY, USA, 2001), vol. 2201 of
Lecture Notes in Computer Science, ACM Press, pp. 18-35.

Caudell, T. P. and Mizell, D. W. Augmented reality: An application of
heads-up display technology to manual manufacturing processes. In Proc.
Hawaii Int'l Conf. on Systems Sciences (Washington, DC, USA, 1992),
IEEE CS Press.

Cavallaro, R. The FoxTrax hockey puck tracking system. Computer
Graphics and Applications 17, 2 (Mar./Apr. 1997), pp. 6-12.

Chang, A. and O'Sullivan, C. Audio-haptic feedback in mobile phones. In
CHI05: Proc. SIGCHI Conf. on Human Factors in Computing Systems
(Portland, OR, USA, 2005), ACM Press, pp. 1264-1267. ISBN
1-59593-002-7.

Cheok, A. D., Wan, F. S., Yang, X., Weihua, W., Huang, L. M.,
Billinghurst, M. and Kato, H. Game-City: A ubiquitous large area
multi-interface mixed reality game space for wearable computers. IEEE
CS Press. ISBN 0-7695-1816-8/02.

Chia, K. W., Cheok, A. D. and Prince, S. J. D. Online 6 DOF augmented
reality registration from natural features. In ISMAR02: Proc. Int'l
Symp. on Mixed and Augmented Reality (Darmstadt, Germany, Oct. 2002),
IEEE CS Press.

Collett, T. H. J. and MacDonald, B. A. Developer oriented visualisation
of a robot program: An augmented reality approach. In HRI06 (Salt Lake
City, Utah, USA, Mar. 2006), ACM Press, pp. 49-56. ISBN
1-59593-294-1/06/0003.

Comport, A. I., Marchand, E. and Chaumette, F. A real-time tracker for
markerless augmented reality. In ISMAR03: Proc. 2nd Int'l Symp. on
Mixed and Augmented Reality (Tokyo, Japan, Oct. 2003), IEEE CS Press.

Correia, N. and Romero, L. Storing user experiences in mixed reality
using hypermedia. The Visual Computer 22, 12 (2006), pp. 991-1001.

Crabtree, A., Benford, S., Rodden, T., Greenhalgh, C., Flintham, M.,
Anastasi, R., Drozd, A., Adams, M., Row-Farr, J., Tandavanitj, N. and
Steed, A. Orchestrating a mixed reality game on the ground. In Proc.
CHI Conf. on Human Factors in Computing Systems (Vienna, Austria, Apr.
2004), ACM Press.

Curtis, D., Mizell, D., Gruenbaum, P. and Janin, A. Several devils in
the details: Making an AR application work in the airplane factory. In
IWAR98: Proc. Int'l Workshop on Augmented Reality (Natick, MA, USA,
1998), A. K. Peters, Ltd., pp. 47-60. ISBN 1-56881-098-9.

Dix, A., Finlay, J. E., Abowd, G. D. and Beale, R. Human-Computer
Interaction, 3rd ed. Pearson Education Ltd., Harlow, England, 2004.
ISBN 0-13-046109-1.

Drascic, D. and Milgram, P. Perceptual issues in augmented reality. In
Proc. SPIE, Stereoscopic Displays VII and Virtual Systems III
(Bellingham, WA, USA, 1996), vol. 2653, SPIE Press, pp. 123-134.

Ellis, S. R., Breant, F., Manges, B., Jacoby, R. and Adelstein, B. D.
Factors influencing operator interaction with virtual objects viewed
via head-mounted see-through displays: Viewing conditions and rendering
latency. In VRAIS97: Proc. Virtual Reality Ann. Int'l Symp.
(Washington, DC, USA, 1997), IEEE CS Press, pp. 138-145. ISBN
0-8186-7843-7.

Farringdon, J., Moore, A. J., Tilbury, N., Church, J. and Biemond, P.
D. Wearable sensor badge and sensor jacket for context awareness. In
ISWC99: Proc. 3rd Int'l Symp. on Wearable Computers (San Francisco, CA,
1999), IEEE CS Press, pp. 107-113.

Feiner, S., MacIntyre, B., Höllerer, T. and Webster, A. A touring
machine: Prototyping 3D mobile augmented reality systems for exploring
the urban environment. In ISWC97: Proc. 1st Int'l Symp. on Wearable
Computers (Cambridge, MA, 1997), IEEE CS Press, pp. 74-81.

Feiner, S., MacIntyre, B. and Höllerer, T. Mixed Reality: Merging Real
and Virtual Worlds. Ohmsha/Springer, Tokyo/New York, 1999, ch. Wearing
it out: First steps toward mobile augmented reality systems, pp.
363-377.

Feiner, S. K. Augmented reality: A new way of seeing. Scientific
American (Apr. 2002).

Ferrari, V., Tuytelaars, T. and Van Gool, L. Markerless augmented
reality with a real-time affine region tracker. In ISAR01: Proc. 2nd
Int'l Symp. Augmented Reality (Washington, DC, USA, 2001), IEEE CS
Press.

Fiambolis, P. Virtual retinal display (VRD) technology. Available at
http://www.cs.nps.navy.mil/people/faculty/capps/4473/projects/fiambolis/vrd/vrd_full.html.
Aug. 1999.

Fiorentino, M., de Amicis, R., Monno, G. and Stork, A. Spacedesign: A
mixed reality workspace for aesthetic industrial design. In ISMAR02:
Proc. Int'l Symp. on Mixed and Augmented Reality (Darmstadt, Germany,
Oct. 2002), IEEE CS Press, p. 86. ISBN 0-7695-1781-1.

Flintham, M., Anastasi, R., Benford, S., Hemmings, T., Crabtree, A.,
Greenhalgh, C., Rodden, T., Tandavanitj, N., Adams, M. and Row-Farr, J.
Where on-line meets on-the-streets: Experiences with mobile mixed
reality games. 2003.

Foxlin, E. and Harrington, M. WearTrack: A self-referenced head and
hand tracker for wearable computers and portable VR. In ISWC00: Proc.
4th Int'l Symp. on Wearable Computers (Atlanta, GA, 2000), IEEE CS
Press, pp. 155-162.

Friedrich, W. and Wohlgemuth, W. ARVIKA: augmented reality for
development, production and servicing. ARVR Beitrag, 2002.

Fuchs, H., Livingston, M. A., Raskar, R., Colucci, D., Keller, K.,
State, A., Crawford, J. R., Rademacher, P., Drake, S. H. and Meyer, A.
A. Augmented reality visualization for laparoscopic surgery. In
MICCAI98: Proc. 1st Int'l Conf. Medical Image Computing and
Computer-Assisted Intervention (Heidelberg, Germany, 1998),
Springer-Verlag, pp. 934-943. ISBN 3-540-65136-5.

Getting, I. The global positioning system. Spectrum 30, 12 (1993), pp.
36-47.

Goebbels, G., Troche, K., Braun, M., Ivanovic, A., Grab, A., von
Lobtow, K., Zeilhofer, H. F., Sader, R., Thieringer, F., Albrecht, K.,
Praxmarer, K., Keeve, E., Hanssen, N., Krol, Z. and Hasenbrink, F.
Development of an augmented reality system for intra-operative
navigation in maxillo-facial surgery. In Proc. BMBF Statustagung
(Stuttgart, Germany, 2001), pp. 237-246.

Gordon, I. and Lowe, D. G. What and where: 3D object recognition with
accurate pose. In ISMAR04: Proc. 3rd Int'l Symp. on Mixed and Augmented
Reality (Arlington, VA, USA, Nov. 2004), IEEE CS Press.

Hasvold, P. In-the-field health informatics. In The Open Group conf.
(Paris, France, 2002).

Hayward, V., Astley, O., Cruz-Hernandez, M., Grant, D. and Robles-De La
Torre, G. Haptic interfaces and devices. Sensor Review 24 (2004), pp.
16-29.

Heidemann, G., Bekel, H., Bax, I. and Ritter, H. Interactive online
learning. Pattern Recognition and Image Analysis 15, 1 (2005), pp.
55-58.

Henrysson, A., Billinghurst, M. and Ollila, M. Face to face
collaborative AR on mobile phones. In ISMAR05: Proc. 4th Int'l Symp. on
Mixed and Augmented Reality (Vienna, Austria, Oct. 2005), IEEE CS
Press, pp. 80-89. ISBN 0-7695-2459-1.

Höllerer, T., Feiner, S. and Pavlik, J. Situated documentaries:
Embedding multimedia presentations in the real world. In ISWC99: Proc.
3rd Int'l Symp. on Wearable Computers (San Francisco, CA, 1999), IEEE
CS Press, pp. 79-86.

Höllerer, T. H. and Feiner, S. K. Telegeoinformatics: Location-Based
Computing and Services. Taylor & Francis Books Ltd., 2004, ch. Mobile
Augmented Reality.

Hughes, C. E., Stapleton, C. B., Hughes, D. E. and Smith, E. M. Mixed
reality in education, entertainment, and training. Computer Graphics
and Applications (Nov./Dec. 2005), pp. 24-30.

Hughes, C. E., Stapleton, C. B. and O'Connor, M. The evolution of a
framework for mixed reality experiences. In Emerging Technologies of
Augmented Reality: Interfaces and Design (Hershey, PA, USA, Nov. 2006),
Idea Group Publishing.

Ishii, H. and Ullmer, B. Tangible bits: Towards seamless interfaces
between people, bits and atoms. In CHI97: Proc. SIGCHI Conf. on Human
Factors in Computing Systems (New York, NY, USA, Mar. 1997), ACM Press,
pp. 234-241.

Jacob, R. J. K. What is the next generation of human-computer
interaction? In CHI06: Proc. SIGCHI Conf. on Human Factors in Computing
Systems (Montreal, Quebec, Canada, Apr. 2006), ACM Press, pp.
1707-1710. ISBN 1-59593-298-4.

Jebara, T., Eyster, C., Weaver, J., Starner, T. and Pentland, A.
Stochasticks: Augmenting the billiards experience with probabilistic
vision and wearable computers. In ISWC97: Proc. 1st Int'l Symp. on
Wearable Computers (Washington, DC, USA, 1997), IEEE CS Press, pp.
138-145.

Julier, S. and Bishop, G., Eds. Computer Graphics and Applications,
Special Issue on Tracking, vol. 6. IEEE CS Press, Washington, DC, USA,
2002.

Julier, S., King, R., Colbert, B., Durbin, J. and Rosenblum, L. The
software architecture of a real-time battlefield visualization virtual
environment. In VR99: Proc. IEEE Virtual Reality (Washington, DC, USA,
1999), IEEE CS Press, pp. 29-36. ISBN 0-7695-0093-5.

Julier, S. J., Lanzagorta, M., Baillot, Y., Rosenblum, L. J., Feiner,
S., Höllerer, T. and Sestito, S. Information filtering for mobile
augmented reality. In ISAR00: Proc. Int'l Symp. Augmented Reality
(Washington, DC, USA, 2000), IEEE CS Press, pp. 3-11.

Kaplan-Leiserson, E. Trend: Augmented reality check. Learning Circuits
(Dec. 2004).

Kasai, I., Tanijiri, Y., Endo, T. and Ueda, H. A forgettable near eye
display. In ISWC00: Proc. 4th Int'l Symp. on Wearable Computers
(Atlanta, GA, USA, 2000), IEEE CS Press, pp. 115-118.

Kaufmann, H. Construct3D: an augmented reality application for
mathematics and geometry education. In MULTIMEDIA02: Proc. 10th ACM
Int'l Conf. on Multimedia (Juan-les-Pins, France, 2002), ACM Press, pp.
656-657. ISBN 1-58113-620-X.

Kaufmann, H., Schmalstieg, D. and Wagner, M. Construct3D: A virtual
reality application for mathematics and geometry education. Education
and Information Technologies 5, 4 (Dec. 2000), pp. 263-276.

Kim, S., Kim, H., Eom, S., Mahalik, N. P. and Ahn, B. A reliable new
2-stage distributed interactive TGS system based on GIS database and
augmented reality. In IEICE Trans. on Information and Systems, vol.
E89-D. Jan. 2006.

Kiyokawa, K., Billinghurst, M., Campbell, B. and Woods, E. An
occlusion-capable optical see-through head mount display for supporting
co-located collaboration. In ISMAR03: Proc. 2nd Int'l Symp. on Mixed
and Augmented Reality (Tokyo, Japan, Oct. 2003), IEEE CS Press, p. 133.
ISBN 0-7695-2006-5.

Klinker, G., Stricker, D. and Reiners, D. Augmented Reality and
Wearable Computers. Lawrence Erlbaum Press, 2001, ch. Augmented Reality
for Exterior Construction Applications.

Koch, R., Koeser, K., Streckel, B. and Evers-Senne, J.-F. Markerless
image-based 3D tracking for real-time augmented reality applications.
2005.

Kommers, P. A. and Zhiming, Z. Virtual reality for education. Available
at http://projects.edte.utwente.nl/proo/kommers.htm.

Kritzenberger, H., Winkler, T. and Herczeg, M. Collaborative and
constructive learning of elementary school children in experiental
learning spaces along the virtuality continuum. In Mensch & Computer
2002: Vom interaktiven Werkzeug zu kooperativen Arbeits- und
Lernwelten, M. Herczeg, W. Prinz, and H. Oberquelle, Eds. B. G.
Teubner, Stuttgart, Germany, 2002.

Kumar, M. and Winograd, T. GUIDe: Gaze-enhanced UI design. In CHI07:
Proc. SIGCHI Conf. on Human Factors in Computing Systems (San Jose, CA,
Apr./May 2007).

Liarokapis, F., Mourkoussis, N., White, M., Darcy, J., Sifniotis, M.,
Petridis, P., Basu, A. and Lister, P. F. Web3D and augmented reality to
support engineering education. World Trans. on Engineering and
Technology Education 3, 1 (2004), pp. 1-4. UICEE.

Licklider, J. Man-computer symbiosis. IRE Trans. on Human Factors in
Electronics 1 (1960), pp. 4-11.

Loomis, J., Golledge, R. and Klatzky, R. Personal guidance system for
the visually impaired using GPS, GIS, and VR technologies. In Proc.
Conf. on Virtual Reality and Persons with Disabilities (Millbrae, CA,
1993).

Magnenat-Thalmann, N. and Papagiannakis, G. Virtual worlds and
augmented reality in cultural heritage applications. 2005.

Mann, S. Wearable computing: A first step toward personal imaging.
Computer 30, 2 (Feb. 1997), pp. 25-32.

Matysczok, C., Radkowski, R. and Berssenbruegge, J. AR-Bowling:
immersive and realistic game play in real environments using augmented
reality. In ACE04: Proc. ACM SIGCHI Int'l Conf. on Advances in Computer
Entertainment technology (Singapore, 2004), ACM Press, pp. 269-276.
ISBN 1-58113-882-2.

Mayol, W. W. and Murray, D. W. Wearable hand activity recognition for
event summarization. In ISWC05: Proc. 9th Int'l Symp. on Wearable
Computers (Osaka, Japan, Oct. 2005), IEEE CS Press.

Merten, M. Erweiterte Realität: Verschmelzung zweier Welten. Deutsches
Ärzteblatt 104, 13 (Mar. 2007).

Milgram, P. and Kishino, F. A taxonomy of mixed reality visual
displays. IEICE Trans. on Information and Systems E77-D, 12 (Dec.
1994), pp. 1321-1329.

Mizell, D. Fundamentals of Wearable Computers and Augmented Reality.
Lawrence Erlbaum Assoc, Mahwah, NJ, 2001, ch. Boeing's wire bundle
assembly project, pp. 447-467.

Mohring, M., Lessig, C. and Bimber, O. Video see-through AR on consumer
cell-phones. In ISMAR04: Proc. 3rd Int'l Symp. on Mixed and Augmented
Reality (Arlington, VA, USA, Nov. 2004), IEEE CS Press, pp. 252-253.
ISBN 0-7695-2191-6.

Naimark, L. and Foxlin, E. Circular data matrix fiducial system and
robust image processing for a wearable vision-inertial self-tracker. In
ISMAR02: Proc. Int'l Symp. on Mixed and Augmented Reality (Darmstadt,
Germany, Oct. 2002), IEEE CS Press.

Narzt, W., Pomberger, G., Ferscha, A., Kolb, D., Müller, R., Wieghardt,
J., Hörtner, H. and Lindinger, C. Pervasive information acquisition for
mobile AR-navigation systems. In WMCSA03: Proc. 5th Workshop on Mobile
Computing Systems and Applications (Oct. 2003), IEEE CS Press, pp.
13-20. ISBN 0-7695-1995-4/03.

Narzt, W., Pomberger, G., Ferscha, A., Kolb, D., Müller, R., Wieghardt,
J., Hörtner, H. and Lindinger, C. Augmented reality navigation systems.
Universal Access in the Information Society 4, 3 (2006), pp. 177-187.

Navab, N., Bani-Hashemi, A. and Mitschke, M. Merging visible and
invisible: Two camera-augmented mobile C-arm (CAMC) applications. In
IWAR99: Proc. 2nd Int'l Workshop on Augmented Reality (Washington, DC,
USA, 1999), IEEE CS Press, pp. 134-141. ISBN 0-7695-0359-4.

Ogi, T., Yamada, T., Yamamoto, K. and Hirose, M. Invisible interface
for immersive virtual world. In IPT01: Proc. Immersive Projection
Technology Workshop (Stuttgart, Germany, 2001), pp. 237-246.

Ohshima, T., Satoh, K., Yamamoto, H. and Tamura, H. AR2 Hockey: A case
study of collaborative augmented reality. In VRAIS98: Proc. Virtual
Reality Ann. Int'l Symp. (Washington, DC, USA, 1998), IEEE CS Press,
pp. 268-275.

Ohshima, T., Satoh, K., Yamamoto, H. and Tamura, H. RV-Border Guards: A
multi-player mixed reality entertainment. Trans. Virtual Reality Soc.
Japan 4, 4 (1999), pp. 699-705.

Olwal, A. and Feiner, S. Unit: modular development of distributed
interaction techniques for highly interactive user interfaces. In
GRAPHITE04: Proc. 2nd Int'l Conf. on Computer graphics and interactive
techniques (Singapore, 2004), ACM Press, pp. 131-138. ISBN
1-58113-883-0.

Pan, Z., Cheok, A. D., Yang, H., Zhu, J. and Shi, J. Virtual reality
and mixed reality for virtual learning environments. Computers &
Graphics 30, 1 (Feb. 2006), pp. 20-28.

Papagiannakis, G., Schertenleib, S., O'Kennedy, B., Arevalo-Poizat, M.,
Magnenat-Thalmann, N., Stoddart, A. and Thalmann, D. Mixing virtual and
real scenes in the site of ancient Pompeii. Computer Animation and
Virtual Worlds 16 (2005), pp. 11-24.

Patel, S. N., Rekimoto, J. and Abowd, G. D. iCam: Precise at-a-distance
interaction in the physical environment. Feb. 2006.

Picard, R. W. Affective Computing. MIT Press, Cambridge, MA, USA, 1997.

Piekarski, W. and Thomas, B. Tinmith-Metro: New outdoor techniques for
creating city models with an augmented reality wearable computer. In
ISWC01: Proc. 5th Int'l Symp. on Wearable Computers (Zurich,
Switzerland, 2001), IEEE CS Press, pp. 31-38.

Piekarski, W. and Thomas, B. ARQuake: the outdoor augmented reality
gaming system. Communications of the ACM 45, 1 (2002), pp. 36-38.

Piekarski, W., Gunther, B. and Thomas, B. Integrating virtual and
augmented realities in an outdoor application. In IWAR99: Proc. 2nd
Int'l Workshop on Augmented Reality (Washington, DC, USA, 1999), IEEE
CS Press, pp. 45-54.

Pietrzak, P., Arya, M., Joseph, J. V. and Patel, H. R. H.
Three-dimensional visualization in laparoscopic surgery. British
Journal of Urology International 98, 2 (Aug. 2006), pp. 253-256.

Poupyrev, I., Tan, D., Billinghurst, M., Kato, H., Regenbrecht, H. and
Tetsutani, N. Tiles: A mixed reality authoring interface. 2001.

Raab, F. H., Blood, E. B., Steiner, T. O. and Jones, H. R. Magnetic
position and orientation tracking system. Trans. on Aerospace and
Electronic Systems 15, 5 (1979), pp. 709-717.

Raskar, R., Welch, G. and Chen, W.-C. Table-top spatially-augmented
reality: Bringing physical models to life with projected imagery. In
IWAR99: Proc. 2nd Int'l Workshop on Augmented Reality (Washington, DC,
USA, 1999), IEEE CS Press, pp. 64-70. ISBN 0-7695-0359-4.

Raskar, R., van Baar, J., Beardsley, P., Willwacher, T., Rao, S. and
Forlines, C. iLamps: geometrically aware and self-configuring
projectors. ACM Trans. on Graphics 22, 3 (2003), pp. 809-818.

Rekimoto, J. Transvision: A hand-held augmented reality system for
collaborative design. In VSMM96: Proc. Virtual Systems and Multimedia
(Gifu, Japan, 1996), Int'l Soc. Virtual Systems and Multimedia, pp.
85-90.

Rekimoto, J. NaviCam: A magnifying glass approach to augmented reality.
Presence 6, 4 (Aug. 1997), pp. 399-412.

Rekimoto, J. and Saitoh, M. Augmented surfaces: A spatially continuous
workspace for hybrid computing environments. In CHI99: Proc. SIGCHI
Conf. on Human Factors in Computing Systems (Pittsburgh, Pennsylvania,
USA, 1999), ACM Press, pp. 378-385. ISBN 0-201-48559-1.

Rolland, J. P. and Fuchs, H. Optical versus video see-through
head-mounted displays in medical visualization. Presence 9, 3 (June
2000), pp. 287-309.

Rolland, J. P., Biocca, F., Hamza-Lup, F., Ha, Y. and Martins, R.
Development of head-mounted projection displays for distributed,
collaborative, augmented reality applications. Presence 14, 5 (2005),
pp. 528-549.

Sandor, C. and Klinker, G. A rapid prototyping software infrastructure
for user interfaces in ubiquitous augmented reality. Personal and
Ubiquitous Computing 9, 3 (May 2005), pp. 169-185.

Schmalstieg, D., Fuhrmann, A. and Hesina, G. Bridging multiple user
interface dimensions with augmented reality. In ISAR00: Proc. Int'l
Symp. Augmented Reality (2000).

Schowengerdt, B. T., Seibel, E. J., Kelly, J. P., Silverman, N. L. and
Furness III, T. A. Binocular retinal scanning laser display with
integrated focus cues for ocular accommodation. In Proc. SPIE,
Electronic Imaging Science and Technology, Stereoscopic Displays and
Applications XIV (Bellingham, WA, USA, Jan. 2003), vol. 5006, SPIE
Press.

Starner, T., Mann, S., Rhodes, B., Levine, J., Healey, J., Kirsch, D.,
Picard, R. and Pentland, A. Augmented reality through wearable
computing. Presence 6, 4 (1997), pp. 386-398.

Stetten, G., Chib, V., Hildebrand, D. and Bursee, J. Real time
tomographic reflection: Phantoms for calibration and biopsy. In ISAR01:
Proc. 2nd Int'l Symp. Augmented Reality (Washington, DC, USA, 2001),
IEEE CS Press, pp. 11-19. ISBN 0-7695-1375-1.

Stoakley, R., Conway, M. and Pausch, R. Virtual reality on a WIM:
Interactive worlds in miniature. In CHI95: Proc. SIGCHI Conf. on Human
Factors in Computing Systems (1995), pp. 265-272.

Sugihara, T. and Miyasato, T. A lightweight 3-D HMD with accommodative
compensation. In SID98: Proc. 29th Society for Information Display (San
Jose, CA, USA, May 1998), pp. 927-930.

Sutherland, I. E. A head-mounted three-dimensional display. In Proc.
Fall Joint Computer Conf. (Washington, DC, 1968), Thompson Books, pp.
757-764.

Szalavári, Z., Schmalstieg, D., Fuhrmann, A. and Gervautz, M.
Studierstube: An environment for collaboration in augmented reality.
Virtual Reality 3, 1 (1998), pp. 37-49.

Takagi, A., Yamazaki, S., Saito, Y. and Taniguchi, N. Development of a
stereo video see-through HMD for AR systems. In ISAR00: Proc. Int'l
Symp. Augmented Reality (München, Germany, 2000), ACM Press. ISBN
0-7695-0846-4.

Tamura, H. Steady steps and giant leap toward practical mixed reality
systems and applications. In VAR02: Proc. Int'l Status Conf. on Virtual
and Augmented Reality (Leipzig, Germany, Nov. 2002).

Tamura, H., Yamamoto, H. and Katayama, A. Mixed reality: Future dreams
seen at the border between real and virtual worlds. Computer Graphics
and Applications 21, 6 (Nov./Dec. 2001), pp. 64-70.

Tang, A., Owen, C., Biocca, F. and Mou, W. Comparative effectiveness of
augmented reality in object assembly. In CHI03: Proc. SIGCHI Conf. on
Human Factors in Computing Systems (Ft. Lauderdale, Florida, USA,
2003), ACM Press, pp. 73-80. ISBN 1-58113-630-7.

Thomas, B., Close, B., Donoghue, J., Squires, J., De Bondi, P., Morris,
M. and Piekarski, W. ARQuake: An outdoor/indoor augmented reality first
person application. In ISWC00: Proc. 4th Int'l Symp. on Wearable
Computers (2000), pp. 139-146. ISBN 0-7695-0795-6.

Tönnis, M., Sandor, C., Klinker, G., Lange, C. and Bubb, H.
Experimental evaluation of an augmented reality visualization for
directing a car driver's attention. In ISMAR05: Proc. 4th Int'l Symp.
on Mixed and Augmented Reality (Vienna, Austria, Oct. 2005), IEEE CS
Press, pp. 56-59. ISBN 0-7695-2459-1.

Vaissie, L. and Rolland, J. Accuracy of rendered depth in head-mounted
displays: Choice of eyepoint locations. In Proc. AeroSense (Bellingham,
WA, 2000), vol. 4021, SPIE, pp. 343-353.

Vlahakis, V., Karigiannis, J., Tsotros, M., Gounaris, M., Almeida, L.,
Stricker, D., Gleue, T., Christou, I. T., Carlucci, R. and Ioannidis,
N. Archeoguide: first results of an augmented reality, mobile computing
system in cultural heritage sites. In VAST01: Proc. Conf. on Virtual
reality, archeology, and cultural heritage (Glyfada, Greece, 2001), ACM
Press, pp. 131-140. ISBN 1-58113-447-9.

Vogt, S., Khamene, A. and Sauer, F. Reality augmentation for medical
procedures: System architecture, single camera marker tracking, and
system evaluation. Int'l Journal of Computer Vision 70, 2 (2006), pp.
179-190.

Wagner, D. and Schmalstieg, D. First steps towards handheld augmented
reality. In ISWC03: Proc. 7th Int'l Symp. on Wearable Computers
(Washington, DC, USA, 2003), IEEE CS Press, p. 127. ISBN 0-7695-2034-0.

Weiser, M. The computer for the 21st century. Scientific American 265,
3 (Sep. 1991), pp. 94-104.

Willers, D. Augmented reality at Airbus. In ISMAR06: Proc. 5th Int'l
Symp. on Mixed and Augmented Reality (Santa Barbara, CA, USA, Oct.
2006), Airbus CIMPA GmbH, IEEE CS Press.

Winkler, T., Kritzenberger, H. and Herczeg, M. Mixed reality
environments as collaborative and constructive learning spaces for
elementary school children. 2003.