
Mobile Augmented Reality System Architecture for Ubiquitous e-Learning

Jayfus T. Doswell, M. Brian Blake, and Jerome Butcher-Green
The Juxtopia Group, Baltimore, MD, USA; Georgetown University, Washington, DC, USA
doswellj@hotmail.com; {mb7,jdb52}@georgetown.edu
Abstract
Mobile Augmented Reality Systems (MARS) e-learning has the potential to provide continuous, context-based, and autonomous instruction to human learners anytime, anyplace, and at any pace. MARS e-learning enables mobility for the learner and hands-free human-computer interactivity. Advances in MARS-based learning provide the advantages of a natural human-computer interface, flexible mobility, and context-aware instruction, allowing learners to develop psychomotor skills while interacting with their natural environment through augmented perceptual cues. These perceptual cues, combining multi-modal animation, graphics, text, video, and voice along with empirical instructional techniques, can elegantly orchestrate a mobile instructional tool. The challenge, however, is building a MARS e-learning tool capable of adapting to various learning environments while also considering the cultural, geographical, and other contexts of the learner. This paper discusses a novel system/software architecture, CAARS, for developing context-aware mobile augmented reality instructional systems.

1. Introduction

Ubiquitous learning with a Mobile Augmented Reality System (MARS) that delivers on-demand instructional services requires a well-engineered system/software architecture. Target applications generated from the architecture require instructional capabilities for understanding individual learning strengths while tailoring empirically evaluated pedagogical and andragogical techniques to enhance learning for children and adults, respectively. In order to significantly impact learning, a MARS e-learning tool needs to consistently measure learning progress and continuously update information about the learner for the duration of the learning interaction. Hence, a MARS e-learning tool may continually process learning data associated with a given context for a given learner. This contextual data may include, but is not limited to, the age/gender/culture of the learner; instruction location; instruction time; previous knowledge of the learner; and cognitive capabilities (e.g., any cognitive disabilities). This paper discusses a Context Aware Agent Supported Augmented Reality System (CAARS) architecture that was developed to support an infrastructure for generating context-aware MARS e-learning. MARS e-learning applications may include, but are not limited to, military, industrial, medical, and manufacturing training applications as well as edutainment applications.
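To illustrate the kind of contextual data enumerated above, the following sketch shows one possible learner-context record. The type and field names (e.g., LearnerContext) are hypothetical assumptions for illustration and are not taken from the CAARS implementation.

```cpp
// Hypothetical sketch of a learner-context record; all names are
// illustrative assumptions, not CAARS source code.
#include <string>
#include <vector>

struct LearnerContext {
    // Demographic and cultural context
    int age = 0;
    std::string gender;
    std::string culture;

    // Situational context
    std::string instructionLocation;  // e.g., "assembly line 3"
    std::string instructionTime;      // e.g., an ISO-8601 timestamp

    // Learner history and capabilities
    std::vector<std::string> priorKnowledgeTopics;
    std::vector<std::string> cognitiveDisabilities;  // empty if none
};
```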

2. Background

A MARS falls within Milgram's mixed reality continuum, as illustrated in Figure 1.

Figure 1: Milgram's Mixed Reality Continuum

In augmented reality, digital objects are added to the real environment. In augmented virtuality, real objects are added to virtual ones. In virtual environments (or virtual reality), the surrounding environment is completely digital [7][8]. MARS combines research in Augmented Reality (AR) and mobile/pervasive computing, in which a wearable see-through heads-up display and increasingly small computing devices facilitate communication over wireless networks. The MARS system infrastructure allows users to freely perceive real-world objects over which digital annotations are superimposed. Additionally, MARS provides flexible mobility and location-independent services without constraining the individual to a specific geographical location. By doing so, MARS technology holds the potential to revolutionize the way in which information is presented to learners and has enormous potential for on-demand, context-aware, and collaborative training [4]. With this mobile instructional tool, individual and collaborative learning may be enhanced in areas ranging from K-12 history, science, technology, engineering, and math training to medical, manufacturing, military, and security workforce training [1][4][5][6].

3. CAARS Architecture and Prototype


Researchers from Georgetown University and the Juxtopia Group, Inc. designed and prototyped a MARS e-learning architecture, the Context Aware Agent-Supported Augmented Reality System (CAARS). CAARS was designed and developed as two major components: the CAARS goggles and a distributed CAARS service.

3.1 CAARS System/Software Architecture

Figure 2: CAARS Goggle Architecture

The CAARS goggles are depicted schematically in Figure 2. The goggles were prototyped as a ruggedized, light-weight, head-mounted display designed for comfort and safety and equipped with several major components, including an optical see-through display, a mini-antenna, mini-speakers, a mini-microphone, and lithium-ion batteries. Because current optical see-through displays are bulky and awkwardly structured, the research investigators designed a novel optical see-through display that is both light-weight and comfortable. They chose an optical see-through display rather than a video see-through display because the optical approach preserves the real image while reducing degradation. Additionally, the CAARS goggles were designed with a miniature PC-compatible camera to capture visual elements in a learner's environment. As images are captured by the camera, they are transmitted wirelessly to the CAARS Training Service for processing by software agents. The camera is also used to audit training sessions by recording multiple image frames during a task and analyzing the corresponding meta-data to measure the learner's performance. A wireless antenna integrated into the CAARS goggles functions as a transceiver, allowing the goggle sensors to remotely transmit image, position/head-orientation, and sound data from the mini-cameras, inertia sensors, and mini-microphone input sensors, respectively. Other components were incorporated into the CAARS goggles so that learners may request just-in-time task assistance through simple voice commands and, in response, see and hear automated instruction through the optical see-through display and mini-speakers, respectively.
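To make the data flow from the goggles to the distributed service concrete, the sketch below shows one way the transmitted sensor payload could be structured. This is a minimal sketch under assumed names (SensorPacket, HeadPose); the paper does not specify the actual wire format.

```cpp
// Hypothetical sensor payload sent from the CAARS goggles to the remote
// training service; all type and field names are illustrative assumptions.
#include <cstdint>
#include <vector>

struct HeadPose {
    float x, y, z;           // position coordinates from the inertia sensors
    float yaw, pitch, roll;  // head orientation
};

struct SensorPacket {
    std::uint64_t timestampMs;              // capture time in milliseconds
    HeadPose pose;                          // position/head-orientation data
    std::vector<std::uint8_t> imageFrame;   // encoded frame from the mini-camera
    std::vector<std::int16_t> audioSamples; // PCM audio from the mini-microphone
};
```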

3.2 CAARS Software Architecture

Figure 3: CAARS High-Level Architecture

Figure 3 illustrates the CAARS high-level software architecture. MARS e-learning is facilitated by a service-oriented software architecture made up of several subsystems that provide visual, human-computer interface (HCI), mobility, and training services. The services are high-level software interfaces that encapsulate the lower-level objects, allowing improved software algorithms to be continually implemented and incorporated into the system. The Visual subsystem handles image recognition and analysis to facilitate accurate real-world object recognition (e.g., recognizing a historical landmark). The HCI subsystem controls the speech recognition and speech synthesis services that enable hands-free, more natural interaction with the CAARS learning environment. The Training subsystem uses a combination of intelligent software agents and pedagogical models to guide learners in understanding concepts and performing step-by-step procedures for completing tasks.
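The subsystem encapsulation described above can be pictured as a set of narrow service interfaces behind which lower-level components (e.g., the image-recognition or speech engines) can be swapped out. The interface and method names below are illustrative assumptions, not the CAARS codebase.

```cpp
// Illustrative service interfaces for the Visual, HCI, and Training
// subsystems; names are assumptions, not taken from the CAARS implementation.
#include <string>
#include <vector>

struct RecognizedObject {
    std::string label;  // e.g., "wire harness connector"
    float confidence;   // recognition confidence in [0, 1]
};

class VisualService {
public:
    virtual ~VisualService() = default;
    // Analyze one camera frame and return recognized real-world objects.
    virtual std::vector<RecognizedObject>
    recognize(const std::vector<unsigned char>& encodedFrame) = 0;
};

class HciService {
public:
    virtual ~HciService() = default;
    virtual std::string recognizeSpeech(const std::vector<short>& audio) = 0;
    virtual void synthesizeSpeech(const std::string& text) = 0;
};

class TrainingService {
public:
    virtual ~TrainingService() = default;
    // Given what the learner is currently looking at, return the next cue.
    virtual std::string nextInstruction(
        const std::vector<RecognizedObject>& observed) = 0;
};
```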


CAARS Automotive Training

Generated from the CAARS software architecture, a system/software prototype was developed to demonstrate the effectiveness of CAARS e-learning for training automotive manufacturing production workers off-line and for providing decision support on the line. The CAARS prototype applied a pedagogical technique, scaffolding, to guide workers through a wire harness assembly task. Hence, while the production worker learns the wire harness assembly task, CAARS (1) continually processes recognized automotive objects against the expected order of the assembly process, and (2) gradually decreases step-by-step instruction until the auto worker consistently completes the task without error. The prototype was engineered using a combination of open-source and custom C++ components that reside in a distributed system decoupled from the CAARS goggles; hence, the CAARS software operates as a service. For the Visual subsystem, an open-source image recognition framework, OpenCV, was leveraged to build a custom object recognition and real-world object annotation service. The Human Computer Interface (HCI) subsystem, including a speech interface, was developed based on the Java Speech API to create a speech recognition and synthesis service.

Figure 4: CAARS Training Architecture

For the Training subsystem, a fully custom intelligent agent subsystem was developed. Figure 4 illustrates the software control entities for the training workflow facilitated by the CAARS software.
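The scaffolding behavior described above, checking each assembled part against the expected order and gradually withdrawing step-by-step guidance as the worker succeeds, could be expressed with logic along the following lines. This is a minimal sketch assuming hypothetical names such as ScaffoldingController and GuidanceLevel; it is not the authors' agent implementation.

```cpp
// Minimal sketch of the scaffolding idea: verify each step against the
// expected assembly order and reduce guidance after consecutive successes.
// All names are hypothetical; this is not the CAARS agent code.
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

enum class GuidanceLevel { Detailed, Brief, None };

class ScaffoldingController {
public:
    explicit ScaffoldingController(std::vector<std::string> expectedOrder)
        : expected_(std::move(expectedOrder)) {}

    // Called when the Visual subsystem recognizes the part just assembled.
    // Returns the instruction cue to present for the next step, if any.
    std::string onPartAssembled(const std::string& recognizedPart) {
        bool correct = step_ < expected_.size() && recognizedPart == expected_[step_];
        if (correct) {
            ++step_;
            if (++consecutiveCorrect_ >= 3 && level_ != GuidanceLevel::None) {
                // Fade support one level: Detailed -> Brief -> None.
                level_ = (level_ == GuidanceLevel::Detailed) ? GuidanceLevel::Brief
                                                             : GuidanceLevel::None;
                consecutiveCorrect_ = 0;
            }
        } else {
            consecutiveCorrect_ = 0;
            level_ = GuidanceLevel::Detailed;  // reintroduce full guidance on error
        }
        if (step_ >= expected_.size()) return "Task complete.";
        switch (level_) {
            case GuidanceLevel::Detailed:
                return "Next step: attach " + expected_[step_] +
                       " (detailed visual overlay shown).";
            case GuidanceLevel::Brief:
                return "Next: " + expected_[step_];
            default:
                return "";  // learner proceeds unassisted
        }
    }

private:
    std::vector<std::string> expected_;
    std::size_t step_ = 0;
    int consecutiveCorrect_ = 0;
    GuidanceLevel level_ = GuidanceLevel::Detailed;
};
```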

Conclusion and Future Work

Applications created from the CAARS research prototype have implications for K-12 learning in both the classroom and informal learning environments (e.g., home, outdoors), edutainment games, medical training, military war-fighter and homeland security training exercises, training for hearing- and visually-impaired individuals, and various forms of manufacturing training. Continued investigation is planned to extend the CAARS system/software architecture to support these other training areas. Additionally, the researchers plan to conduct longitudinal studies that build upon the existing CAARS prototype in various mobile contexts (e.g., inside facilities and in outdoor environments).

Acknowledgements
The authors acknowledge the system/software expertise contributed by Mr. Charles Griffin and Mr. Ojai Mallory. This research was funded by the National Science Foundation under award number 0512610.

References
[1] Aist, G., Kort, B., Reilly, R., Mostow, J., and Picard, R. Experimentally Augmenting an Intelligent Tutoring System with Human-Supplied Capabilities: Adding Human-Provided Emotional Scaffolding to an Automated Reading Tutor that Listens. 4th IEEE International Conference on Multimodal Interfaces, pp. 483-490, October 2002.

[2] Blake, M.B. and Doswell, J.T. Context-Aware Augmented Reality System (CAARS). SALT New Learning Technologies Conference, Orlando, FL, February 2006.

[3] Boulanger, P. Application of Augmented Reality to Industrial Tele-Training. Proceedings of the First Canadian Conference on Computer and Robot Vision (CRV'04), pp. 320-328, 2004.

[4] Doil, F., Schreiber, W., Alt, T., and Patron, C. Augmented Reality for Manufacturing Planning. In Proceedings of the Workshop on Virtual Environments, ACM Press, Zurich, Switzerland, pp. 71-76, 2003.

[5] Höllerer, T., et al. User Interface Management Techniques for Collaborative Mobile Augmented Reality. Computers and Graphics, 25(5), pp. 799-810, 2001.

[6] Höllerer, T. and Feiner, S. Mobile Augmented Reality. In Karimi, H. and Hammad, A., editors, Telegeoinformatics: Location-Based Computing and Services. Taylor and Francis Books Ltd., London, UK, 2004.

[7] Hughes, C., Stapleton, C.B., et al. Mixed Reality in Education, Entertainment, and Training. IEEE Computer Graphics and Applications, pp. 24-30, November/December 2005.

[8] Milgram, P. and Kishino, F. A Taxonomy of Mixed Reality Visual Displays. IEICE Transactions on Information and Systems, vol. E77-D, no. 12, pp. 1321-1329, 1994.

[9] Reiners, D., et al. Augmented Reality for Construction Tasks: Doorlock Assembly. 1st International Workshop on Augmented Reality (IWAR '98), 1998.

