
EMERGENCY FIRE INCIDENT TRAINING IN A VIRTUAL WORLD

Julie Dugdale, Bernard Pavard, Nico Pallamin, Mehdi el Jed


GRIC-IRIT (Cognitive Engineering Research Group, Computer Science Research Institute of Toulouse).
UPS-CNRS (UMR 5505), Toulouse, France.
Email: {dugdale | pavard | pallamin | eljed}@irit.fr

Commander Laurent Maugan


EDIS (Ecole Départementale d'Incendie et de Secours, the Departmental School of Fire and Rescue). 11, Avenue des Peupliers,
91705 Fleury-Mérogis, France.

Keywords: Virtual reality, Fire-fighting, Training simulator

Abstract: The effectiveness of close to reality training simulations stems from the fact that they provide a sense of immersion and allow several participants to interact naturally. However, they are expensive, time-consuming, difficult to organise and have a limited scope. We present a virtual reality training simulator which overcomes these disadvantages. We describe the approach and methodology and conclude with a discussion of the most crucial challenges in developing such a system. In this paper we introduce the notion of cultural technologies, which produce a sense of social as well as cultural immersion. We discuss the main ingredients of such an immersion, in particular the notion of situated virtual interaction (how interactions in a virtual world can be comparable with human interactions in real situations). We also discuss the role of interfaces (real-time motion capture) and emotional expression in the design of such environments.

1 INTRODUCTION

Social and cultural interactive technologies are an exciting new development because, for the first time, they carry the possibility of immersing users in situations where they can interact using not only their cognition but also their emotions and their social and cultural intelligence. To validate this concept we are currently developing an emergency situation simulator in which people have to take decisions in stressful environments. In particular, we focus on human-human relationships in the context of the organisation of emergency operations with fire-fighters and medics (red plans, etc.).

The training of firemen is a complex, expensive and time-consuming process. In addition to traditional methods such as written procedures and manuals, close to reality simulations are often employed. In these simulations, emergency incidents (e.g. a hotel fire or a serious road traffic accident) are organised by the instructors in controlled surroundings. The firemen then react to the simulated event as if it were a real incident and are subsequently informed of their performance in a post-simulation debriefing session. The benefits of such training in a closely realistic, controlled and relatively safe situation are well proven and stem from the fact that such simulations provide a good sense of immersion. In effect, this means that the trainees' decisions are strongly influenced by what they see and hear in the immediate environment and by the feelings that it evokes (e.g. an emotional response to critical situations). Equally important is that other individuals are present who are also actively participating in the event and with whom the trainee can naturally interact and communicate. Thus, the trainee's behaviour is similar to what would occur in the real situation and is not simply a calculated response taken in isolation in a sterile training environment.

Figure 1: A close to reality simulation of a hotel fire. Two hotel guests (played by fire-fighters) are attempting to escape from the top floor balcony after being trapped by the fire. The fire-fighter, on the balcony below, attempts to reach the guests using a ladder.

Unfortunately, close to reality simulations are expensive to organise, difficult to design correctly, time-consuming to perform, and occupy resources which may be needed for a real emergency. Furthermore, it is impossible to re-run the exercise changing only one or two aspects and observing the new consequences.

The challenge lies in developing a cost-effective and efficient training tool in which the trainees are sufficiently immersed in the situation to exhibit normal social and emotional behaviours, and in which they can communicate and interact naturally with other participants.

2 APPROACH AND METHODOLOGY

Virtual reality is a powerful technology for creating a sense of immersion, and intelligent virtual characters have recently been integrated into virtual worlds for the purpose of training (e.g. in military training: Marsella and Gratch, 2001; Rickel et al., 2002). Despite some impressive results, there are several drawbacks: such systems typically require huge computational resources, use simplistic models of cognitive behaviour and emotion to simulate intelligence, and usually allow only one user at a time to participate in the training scenario.

We present a virtual reality training tool which allows several users to participate actively in a training scenario. In our approach the trainees communicate with each other through the virtual environment, thus reducing the need to represent human intelligence in a virtual character. As a result, the necessary computational resources are greatly decreased. By placing the control of the virtual character in the hands of the user, we utilise the trainees' own problem-solving capabilities, including the influential emotional and cultural aspects.

Together with the Departmental School of Fire and Rescue (EDIS) in the south of Paris, we are developing a virtual reality training tool for fire-fighters. We have adopted a highly participatory approach to the design of the system, with extensive involvement and support from EDIS.

Figure 2: Schema of several trainees guiding their virtual characters in the virtual training environment.
Figure 3: Left: a user driving his virtual character in the training environment. Right: screenshot of the environment.

The virtual world is based on a close to reality simulation of a hotel fire, which is one training scenario currently used by the Fire Service. The simulation involves several participants: the hotel owner (whose main concern is for the hotel to be saved), several guests (some are trapped inside the hotel and others are hanging from a balcony and are hysterical), the support firemen (who act according to orders), and the chief fireman (who is in charge of directing the emergency rescue operation). Since the users drive their own virtual characters, each user can play the role of any participant. Thus, one user can play the part of the chief fireman whilst another drives the virtual character representing the hotel owner.

The primary goal is to train the chief fireman to supervise a rescue operation correctly. Thus, he should coordinate his squads' actions, ensure the safety of his crew and by-standers, and establish a general overview of the emergency situation in order to report back to the control centre and request further resources if necessary.

The methodology used to develop the training tool consisted of four stages (Darcy et al., 2003).

Firstly, we conducted extensive field studies of several close to reality simulations in order to assess the aims of the training and to establish what formal and informal procedures were used in the fire emergency process. The field studies involved an analysis of formal documents, interviews with instructors and senior fire-fighters, and on-site observations, during which we gathered approximately 12 hours of video footage. This work was undertaken in close collaboration with EDIS, who allowed us to film the training scenarios, to conduct post-observation interviews with fire-fighters, and to attend the trainees' debriefing exercise (Dugdale and Pavard, 2002).

Secondly, a video-based analysis of the field data was performed in order to identify the relationship between gestures and verbal activities and to assess the role of non-verbal communication. Since gestures play a crucial role in human communication (Cassell et al., 1998; McNeill, 1992), this stage was necessary for the design of realistic interactions between the virtual characters. The analysis was conducted at multiple levels with the aim of identifying a set of generic gestures performed during conversations, as well as a set of rules to determine when and how such gestures have to be performed (Dugdale et al., 2003).

Note that in our virtual reality simulator not all of the avatar's gestures are directly controlled by the human trainee. We make a distinction between intentional gestures (primary animation) and non-intentional gestures (secondary animation). Intentional gestures, such as walking forwards or backwards or using a tool, are performed consciously by the human trainee and are transmitted to his avatar equivalent in the virtual world by means of a data glove, keyboard or joystick. The avatar then performs (mirrors) the same action. Non-intentional gestures, such as unconsciously nodding the head during a conversation to signal agreement, or automatically turning the head to greet another person, are undertaken autonomously by the avatar as part of its basic communication abilities in managing social interaction.

The actions of the avatar are also influenced by the contextual environment (see figure 4). The behaviour of the virtual character is modified according to the character's perception of the situation, its emotional state (satisfaction, disappointment, fear and anger), its personality (5 dimensions), and its mood. For the emotional model we use an adaptation of the OCC model proposed by Ortony, Clore and Collins (Ortony et al., 1988), and for the model of personality we base our work on the Five Factor Model (FFM) (McCrae and John, 1992). In our model, mood is an affective filter for moderating the intensity of the felt emotion, and it is modelled using a simple vector representation. Further details on the modelling of emotion, mood and personality can be found in (El Jed et al., 2004).
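To make the idea of mood as an affective filter concrete, the following sketch shows one plausible reading of it. This is an illustration only, not the authors' implementation: the `Mood` fields mirror the four emotions named above, and the scaling rule and clamping are our own assumptions about what "moderating the intensity of the felt emotion" with a "simple vector representation" could look like.

```python
from dataclasses import dataclass

@dataclass
class Mood:
    """Hypothetical mood vector: one bias per modelled emotion, in [-1, 1]."""
    satisfaction: float = 0.0
    disappointment: float = 0.0
    fear: float = 0.0
    anger: float = 0.0

def felt_intensity(emotion: str, raw: float, mood: Mood) -> float:
    """Moderate a raw appraisal intensity by the matching mood component.

    The result is clamped to [0, 1]; the multiplicative rule is an assumption.
    """
    bias = getattr(mood, emotion)
    return max(0.0, min(1.0, raw * (1.0 + bias)))

# An anxious mood amplifies a fear stimulus; a calm mood dampens it.
anxious = Mood(fear=0.5)
calm = Mood(fear=-0.5)
assert felt_intensity("fear", 0.6, anxious) > felt_intensity("fear", 0.6, calm)
```

The same raw appraisal thus yields different felt intensities depending on the character's current mood, which is the filtering role the text describes.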
Figure 4: Example of how a virtual character's walking action changes as a function of its contextual environment. The leftmost picture shows a purely intentional animation of a virtual character walking towards a victim (e.g. with the user animating the character with a joystick). On the right-hand side, the virtual character takes into account the environmental context (a nearby explosion) and adjusts its way of walking independently of the will of the user. This change in behaviour can then be visually interpreted by the other trainee users (what we term cognitively and socially situated animation).
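The contextual adjustment illustrated in Figure 4 can be pictured as a small dispatch step between the user's intentional command and the animation actually played. The sketch below is purely illustrative; the event names and gait labels are hypothetical and do not come from the system described here.

```python
def select_gait(user_command: str, context_events: set[str]) -> str:
    """Pick the animation to play for a user-issued movement command.

    Context events can override the purely intentional animation,
    independently of the user's will (cf. Figure 4).
    """
    if user_command != "walk":
        return user_command            # non-locomotion commands pass through
    if "nearby_explosion" in context_events:
        return "crouch_walk"           # protective gait the user did not choose
    if "thick_smoke" in context_events:
        return "crawl"
    return "walk"                      # purely intentional animation

assert select_gait("walk", set()) == "walk"
assert select_gait("walk", {"nearby_explosion"}) == "crouch_walk"
```

The point of the design is that the override is visible to the other trainees, so the adjusted gait itself carries situational information.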

In order for the character to exhibit believable non-verbal communicative behaviour, we first compute the most probable emotional state and then dynamically generate gestures and facial expressions. Figure 5 shows the architecture of the emotional model which contributes to the behaviour of the virtual character. This architecture is composed of four modules: the PERCEPTION module, which detects the events perceived in the virtual reality environment, and the EMOTION module, which evaluates the importance of the perceived event depending on the personality and mood and decides on the emotion felt. The output of these two modules is passed to the BEHAVIOUR module, which selects the relevant emotive counterpart for each of the three emotional reactions: the affective reaction (facial expression), the behavioural reaction (emotional gesture) and the cognitive reaction (a change in cognitive parameters). Finally, the ACTION module executes the actions decided upon by the BEHAVIOUR module.

The third stage of our methodology concerned the implementation of the training environment using a suite of virtual reality development tools¹.

The fourth and final stage of the methodology dealt with the evaluation of the virtual environment; here we adopted a three-pronged approach (Dugdale et al., 2004).

The first part of the evaluation concentrated solely on validating the virtual characters (isolated from their normal fire-training virtual environment) in order to assess their expressiveness. The aim was to assess whether the virtual characters are conversationally equivalent to a human being. If a human being's interpretation of the communicative acts of the virtual character is equivalent to the interpretation of its human counterpart, then we have successfully grounded our model of the character. Thus, the character should seem believable by exhibiting meaningful non-verbal behaviour and should possess credible communication skills (use of gestures, body positioning, head movements, eye-gaze, etc.).

Figure 5: Architecture of the emotional model.

¹ Virtools, Filmbox, Poser 4.
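The four-module architecture of Figure 5 might be rendered, in heavily simplified form, as the pipeline below. This is a sketch under our own assumptions: the module responsibilities follow the text, but the event names, data shapes and appraisal rule are invented for illustration.

```python
def perception(world_events: list[str]) -> list[str]:
    """PERCEPTION: detect the events perceived in the virtual environment."""
    return [e for e in world_events if e != "out_of_sight"]

def emotion(events: list[str], personality: dict, mood: dict) -> tuple[str, float]:
    """EMOTION: weigh perceived events by personality and mood; decide the felt emotion."""
    if "explosion" in events:
        return "fear", min(1.0, 0.8 * (1.0 + mood.get("fear", 0.0)))
    return "satisfaction", 0.2

def behaviour(felt: tuple[str, float]) -> dict:
    """BEHAVIOUR: select the emotive counterpart for each of the three reactions."""
    name, intensity = felt
    return {
        "affective": f"facial_{name}",        # facial expression
        "behavioural": f"gesture_{name}",     # emotional gesture
        "cognitive": {"arousal": intensity},  # change in cognitive parameters
    }

def action(reactions: dict) -> list[str]:
    """ACTION: execute the actions decided upon by the BEHAVIOUR module."""
    return [reactions["affective"], reactions["behavioural"]]

# A perceived explosion flows through the four modules into played animations.
events = perception(["explosion", "out_of_sight"])
played = action(behaviour(emotion(events, personality={}, mood={"fear": 0.2})))
assert played == ["facial_fear", "gesture_fear"]
```

The design point of the real architecture is the separation of concerns: appraisal (PERCEPTION and EMOTION) is decoupled from expression (BEHAVIOUR and ACTION), so the same felt emotion can drive facial, gestural and cognitive reactions consistently.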
Four short video fragments showing an avatar during a conversation were created. These videos and the equivalent four videos showing a human performing the same actions were then presented to two separate groups of third-year psychology students at the Mirail University in Toulouse, France (the first group of 36 students watched only the videos of the avatar, whilst the other group of 24 watched the videos of the human). The participants were then asked to complete a questionnaire focusing on the interpretation of the gestures and emotional expressions of the characters in the videos.

The second part of our evaluation concentrated on validating the contextual virtual environment. The aim was to assess whether the virtual environment can provide the same decision-making support as that found in the real situation. Stated simply, we analyse decisions or actions that have taken place in the real world and then see if it is possible to make the same decisions or actions in the virtual world. The rationale is as follows: a training environment must allow the human trainee to make the same decisions (potentially mistakes) as in the real world. Whereas the aim of the first part was to see if virtual characters are conversationally equivalent to humans, here we want to assess whether the virtual world is logically equivalent (in a decision-making sense) to the real world. Using data obtained from our field studies, we constructed a library of short video clips which showed firemen performing an action or making a decision in the real world concerning the management of the emergency fire situation. The video clips were then analysed in order to establish the relevant information that was used to make the decision (e.g. visual perception of an object or the gesture of another person). The virtual environment was then analysed to see if the same information was available in the virtual world. In effect, this would mean that the virtual characters or the human user had the necessary information to make the decision or effect an action that could occur in the real world. If the virtual world lacked the relevant information, we attempted to find a tangible solution which could provide the missing information. The solutions were often of a technical nature (e.g. redesigning the virtual world to add extra functionality or considering another input device for the user).

The third part of our evaluation was to conduct a fine-grained ethnomethodological analysis to compare the interaction among virtual actors in the virtual world with human interaction in the real world situation. This comparison was not intended as a validation but as a resource for calibrating the environment. Specifically, we were concerned with the ways in which bodily conduct can be implemented in the virtual characters in order to improve the experience of realism and to support the interaction and communication among the users and their embodiments in the virtual environment.

3 RESULTS AND DISCUSSION

There are several important technological and scientific challenges involved in the development of this tool. The heart of the problem is to ensure a believable and natural interaction between the virtual characters, and between the human user and his or her virtual character. Firstly, the virtual characters must exhibit the right degree of expressiveness to be believable (thus promoting a sense of immersion). Secondly, since the virtual character must have some degree of autonomy (i.e. the trainee cannot be expected to direct the virtual character's smallest movements), we need to delimit which aspects are under the control of the agent and which aspects are under the control of the trainee. Finally, the trainee needs to transmit his actions (such as pointing) to the virtual character, so that the character can perform the same action.

From the results of the validation experiments we can draw some general conclusions. Broadly, the virtual character was considered to be realistic, and in the few cases when it was considered unrealistic this was largely due to the facial appearance of the avatar and not to the animations, which were seen as being quite natural. These results have driven us to adopt a different approach of using textures based on photographs of real people in order to try to increase the facial realism of the virtual characters.

In general, the avatar's gestures were seen as appropriate, but small details, such as gesture fluidity or self-touching, can lead to some small differences in how a gesture is interpreted. Furthermore, we found that accuracy of movement is particularly important for eye-gaze, which is often used as a main cue to infer the state of the actor.

A final important result concerns the number of gestures that an avatar may perform. Our results show that embodying an avatar with a large set of gestures can lead to ambiguous interpretations of their meaning. Conversely, in some cases the same gesture, presented in different contexts, can be used to express several different meanings in a completely sufficient way.

Regarding the assessment of whether the virtual environment offers the same decision-making support as found in the real world, our results show that in general the training environment does allow the trainees to make the same decisions as they
would in the real world. Specifically, whilst the virtual environment allows trainee users to gather all the necessary visual and auditory contextual information to make the necessary decisions, it does not allow users to obtain the necessary tactile information (e.g. physical contact was used between trainees to reinforce a verbal message or to initiate an action). This problem could be rectified by the use of tactile suits, but this solution was considered to be outside the scope of the project. Other results have led us to make some minor technical modifications to the capabilities of our virtual characters, or to add new interface devices (e.g. an electronic whiteboard) to allow the trainees to communicate via the virtual world using the same artefacts as in the real world.

In our calibration work we were interested in the ways in which human bodily conduct can be implemented in the virtual characters. Comparing the interactions which take place in the virtual world with those in the real world, we found that it is extremely difficult to formulate rules that can generate relevant actions and produce behaviours of the virtual actors that are close to reality. The problem with simple formal rules is that humans constantly make moment-by-moment interpretations of the context and of the contributions of their own and others' actions. A human gesture is a response to the recognition of the action of another and is a device for influencing the next action.

One major difference between interactions in the real world and those in the virtual world is the ways in which the human participants orientate to each other before an encounter and the ways in which the beginning of utterances changes the situation. A second point concerns the multiple roles of gestures (e.g. a gesture can be used as a resource to coordinate the conversation as well as a way of gaining the attention of another participant).

These points raise a number of issues concerning the design and development of virtual environments. Firstly, it is relevant to consider the ways in which gestures can be made more explicit. Secondly, the generation of automatic action needs to be sensitive to a complex web of prior actions. Thirdly, it is important that the user is able to drive the virtual person in such a way as to produce the appearance of orientating to other virtual persons. This could, for example, signal to another virtual character that they are looking at each other. Looking at another may be a device to judge whether a gesture is necessary.

REFERENCES

Cassell, J., McNeill, D., and McCullough, K. E., 1998. Speech-gesture mismatches: evidence for one underlying representation of linguistic and nonlinguistic information. Pragmatics and Cognition, 6:2.

Darcy, S., Dugdale, J., El Jed, M., Pallamin, N., and Pavard, B., 2003. Virtual Reality Story Building Story Telling. International Conference on Virtual Storytelling, Toulouse, France, November 2003.

Dugdale, J., and Pavard, B., 2002. The complexity of an emergency fire incident: the case of a hotel fire. Complexity in Social Science (COSI) project report, work task two.

Dugdale, J., Pallamin, N., Pavard, B., Sanchez-Svensson, M., and Heath, C., 2003. A modelling framework for improving the interactivity in virtual worlds. GRIC-IRIT, Toulouse and WIT, King's College, London. Complexity in Social Science (COSI) project report, work task three.

Dugdale, J., Pallamin, N., Pavard, B., Sanchez-Svensson, M., and Heath, C., 2004. Holistic validation and calibration of a mixed virtual reality environment. Complexity in Social Science (COSI) project report, work task four.

El Jed, M., Pallamin, N., Dugdale, J., and Pavard, B., 2004. Modelling character emotion in an interactive virtual environment. In Proceedings of the AISB 2004 Symposium: Motion, Emotion and Cognition, 29 March to 1 April 2004, Leeds, UK (forthcoming).

Gratch, J., and Marsella, S., 2001. Tears and Fears: Modeling emotions and emotional behaviors in synthetic agents. In Proceedings of the 5th International Conference on Autonomous Agents, Montreal, Canada, June 2001.

McCrae, R. R., and John, O. P., 1992. An introduction to the five-factor model and its applications. Special Issue: The five-factor model: Issues and applications. Journal of Personality, 60, pages 175-215.

McNeill, D., 1992. Hand and mind: What gestures reveal about thought. Chicago: University of Chicago Press.

Ortony, A., Clore, G. L., and Collins, A., 1988. The cognitive structure of emotions. Cambridge University Press.

Rickel, J., Marsella, S., Gratch, J., Hill, R., Traum, D., and Swartout, B., 2002. Towards a New Generation of Virtual Humans for Interactive Experiences. IEEE Intelligent Systems, July/August 2002, pp. 32-38.
