
Cognition, Technology & Work (2002) 4:37–47



© 2002 Springer-Verlag London Limited


Role of a Common Frame of Reference in Cognitive Cooperation: Sharing Tasks between Agents in Air Traffic Control
J.-M. Hoc and X. Carlier
CNRS, Institut de Recherche en Communications et Cybernétique de Nantes, Nantes, France
Abstract: This study deals with cognitive cooperation in the context of the design of cooperative computer support for sharing aircraft conflict detection and resolution tasks between human and machine in air traffic control. In order to specify some necessary cooperative capabilities of such a system, we observed an artificial situation on a simulator where two radar controllers (RCs) had to cooperate to manage heavy traffic within a single sector. This paper reports the analysis of the verbal communications between the two RCs recorded during the simulation. The results enabled us to describe the elaboration and updating of a common frame of reference (COFOR) by the two RCs as a crucial cooperative activity in this kind of situation. They also show the role of this COFOR in the implicit detection and resolution of interference between the RCs' individual activities. Their contribution to design, associated with other investigations and the state of the art, is discussed.
Keywords: Air traffic control; Cognitive task analysis; Common frame of reference; Design; Dynamic task allocation; Human-human cooperation; Human-machine cooperation

1. INTRODUCTION
This study deals with human-human cooperation in air traffic control (ATC) as a model to guide the design of human-machine cooperation. It is based on the assumption that sharing aircraft conflict resolution between human controllers and an automatic detection and resolution device (ADRD) could reduce the human workload while maintaining the same safety level. At present, such cooperation in conflict detection and resolution does not fully exist, either between human controllers or between controllers and an ADRD. Thus, we designed an experiment in which two radar controllers (RCs) were forced to cooperate in this way, sharing aircraft conflict resolution within the same sector. Although the ecological validity of the experiment is not satisfied with respect to the current work situation, this artificial situation was adopted to simulate a future work situation where a single RC would have to cooperate with an ADRD. By using a second controller instead of a machine in the experiment, our aim was to identify efficient cooperation mechanisms between humans before designing a cooperative ADRD, in the sense of human-human cooperation. Apart from this task-sharing principle, the ecological validity of the experimental situation was high in terms of work practices. Here, ecological validity is taken as the presence in an artificial situation of an actual activity, which is a future one when designing a new system, as opposed to improving an existing system (Hoc in press).
After several studies on the same topic within a dynamic task allocation paradigm (Debernard et al 1992; Lemoine et al 1996; Hoc and Lemoine 1998), the current study addresses the question of human-machine cooperation during problem solving in this dynamic situation with moderate, but real, temporal constraints. The cognitive aspects of cooperation appeared to be crucial in these attempts to design artificial agents sharing tasks with human operators at an acceptable level of performance and reliability (Jones and Mitchell 1994; Jones and Jasek 1997; Older et al 1997; Roth et al 1997; Hoc 1998; Millot and Lemoine 1998; McCarthy et al 2000).
In France, the En Route airspace is divided into sectors controlled by two operators. The RC is in radio contact with the aircraft and has the tactical task of guiding the aircraft through the sector. The planning controller (PC) is in charge of the coordination between adjacent sectors, of aircraft conflict pre-detection, and of the regulation of the RC's workload (e.g., by negotiating aircraft entry or exit levels with the adjacent sectors). ATC controllers are confronted with a continuous increase in air traffic (about 7% per year in Europe). Several solutions to this problem have been implemented or envisioned: regulation of take-off clearances at the European level, decomposition of sectors into smaller ones, free flight principles with automatic anti-collision devices aboard the aircraft, etc. (Hopkin 1995). Various ergonomic approaches have also been explored: representational support (e.g., the ERATO project: Leroux 1991) and operational support (e.g., SPECTRA, using an automatic aircraft conflict detection and resolution device: Debernard et al 1992; Lemoine et al 1996; Hoc and Lemoine 1998).
From the three SPECTRA studies cited, which made use of an ADRD capable of resolving some two-aircraft conflicts, three main lessons were drawn with regard to design improvement:

• The simultaneous allocation of a tactical task (conflict detection and resolution) and of a strategic task (conflict allocation to the human or to the machine) to the same controller could result in an increase in workload to the detriment of performance (as has also been recently shown in commercial aircraft piloting: Jentsch et al 1999).
• In order to enable the controller to supervise the overall traffic, the tasks allocated to the ADRD should be defined as such in the controller's problem space. This means, for example, that where the controller identifies a three- or four-aircraft conflict, the allocation of a two-aircraft sub-problem to the ADRD will not be consistent with the controller's decomposition of the traffic into independent problems. In addition, problem independence must be identified taking intentions into account (e.g., the realisation of a resolution intention for one conflict can result in sending a new aircraft into another conflict, increasing its complexity). Such inconsistency could result in the controller's refusal to allocate a conflict to the ADRD, reducing the opportunity to alleviate workload.
• The classical complacency phenomenon was moderately observed in an automatic allocation mode. Such a phenomenon is well known in other domains (Roth et al 1997). When the automatic system becomes brittle, near the limits of its validity domain, even expert operators, aware of the fact, continue using it, whereas manual control would have been more efficient (e.g., in the cockpit: Layton et al 1994; Smith et al 1997). This phenomenon implied two kinds of drawback in the SPECTRA studies. First, the controllers adopted solutions likely to minimise interference with the ADRD (e.g., avoiding aircraft deviations at the same level where the ADRD was resolving a conflict), but suboptimal ones. Second, mutual control (of the controller over the ADRD) was limited when the controller was not in charge of task allocation. This resulted in a detrimental reduction of the controller's supervision field.
For all these reasons, a better integration of the ADRD assistance tool into the controller's activity, and especially into the controller's overall problem space representation, was considered. This study was part of a multidisciplinary context, integrating supervisory control researchers and cognitive ergonomics psychologists. The former were exploring the possible design of an ADRD able to receive a plan concerning one aircraft (e.g., turn to the right), to compute an optimal trajectory (with regard to fuel consumption, planned route, possible conflicts with other aircraft, etc.), to return feedback to the human operator in terms of possible problems, and to put the aircraft back on its route as efficiently as possible. Before coming to precise specifications of the new ADRD, the psychologists were in charge of anticipating what kinds of benefit could be expected from such an idea and what kinds of problem could be identified. The psychologists selected two questions. The first one was addressed in a separate study and concerned the evaluation of the controllers' ability to anticipate when interacting with the ADRD during plan specification, stressing the role of subsymbolic processes (Hoc et al 2000). The purpose of the present study (the second one) was to design an artificial situation where a PC is working with two RCs. The aim was to study human-human cooperation between RCs in order to produce some recommendations to the supervisory control researchers in terms of communication between controllers and the ADRD. With this kind of method, human-human cooperation is taken as a reference model to design human-machine cooperation. Obviously the transfer is not perfect, due to machine limitations. In order to address this question of cognitive cooperation, we adopted a theoretical and methodological framework (Hoc 1998, 2001) already applied to ATC (between RC and PC, and between an RC and an ADRD: Hoc and Lemoine 1998), and to cooperation in the two-seater fighter aircraft cockpit (Loiselet and Hoc 1999).

2. THEORETICAL FRAMEWORK
The theoretical framework adopted here is the approach to cognitive cooperation introduced by Hoc (1998, 2001), especially to deal with human-machine cooperation, going beyond too restricted a view of human-machine interaction when the machine presents a certain degree of autonomy. It stresses two minimal conditions for cooperation to develop. This minimal approach takes machine limitations into account, but does not exclude the fact that further requirements could usefully result in more efficient cooperation (especially between humans):

• Each (cooperative) agent strives towards goals and can interfere with others on goals, resources, procedures, etc.
• Each agent tries to detect and manage such interference to make the individual and common activities easier.

The concept of interference is borrowed from studies on planning (Hoc 1988) and may appear in diverse forms (e.g., a precondition relation, when one of the goals can be seen as a subgoal of the other; an interaction relation, when the two procedures must be changed; see Castelfranchi 1998). At first glance, interference could always be considered as negative; indeed, it increases the workload. However, very often interference plays a positive role (e.g., mutual control, when one agent expresses a disagreement with the other's activity and introduces an improvement). The positive or negative aspect of interference depends upon an evaluation that should consider the management of a trade-off between the cost of interference resolution and the benefit with respect to the adaptive power of the multi-agent system.
Following this conception, cooperation is not simply considered as a multi-agent structure (structural relationships between agents), but also and mainly as a cognitive activity that cannot be seen as developed by any agent working alone. Three levels of cooperative activities are considered, in terms of abstraction and temporal span: cooperation in action, cooperation in planning (Fig. 1) and metacooperation. Abstraction is understood here as the access to high-level representations that are not restricted to the specific conditions of the immediate action (Hoc 1988; Rasmussen 1986). Abstraction enables agents to enlarge the temporal span of their activities, from representations just valid for the present action (e.g., interference detection during the short-term course of action), to representations valid for larger episodes (e.g., elaboration of a plan covering the medium term), and finally to representations of long-term value, for several tasks (e.g., elaboration of a representation of oneself and of the other agents).

Fig. 1. Cooperative activities in action and in planning.
The two following subsections define the cooperative
activities identified in the present study. The coding
scheme and some examples are detailed in Appendix 1.
2.1. Cooperation in Action
This class of activities is directly embedded in the action execution level. It integrates interference creation (e.g., mutual control), detection and resolution (at the action level, by local adjustments). It also embodies identification of the other agents' goals, enabling interference management by moderate anticipation (interference anticipation). Such an identification can be immediately derived from expertise in the work domain (e.g., if a controller turns an aircraft, deviating it from a route that crosses another aircraft's route, one can infer from domain knowledge that the goal is to resolve a conflict).
2.2. Cooperation in Planning
These activities develop at the planning level and can improve the performance of the activities belonging to the first class. They contribute to the elaboration and maintenance of a common frame of reference (COFOR). This notion is very close to team situation awareness (Salas et al 1995) and to common ground (Clark 1996). Salas et al (1995) defined team situation awareness as, at least, the shared understanding of a situation among team members at one point in time. It embodies shared representations such as contextual ones, agreements on problem or task definition, etc. Our conception of the situation is not restricted to the external situation (as is the case for the concept of situation awareness: Endsley 1995), but integrates the agents' goals, plans and metaknowledge. This is why we prefer to use another term (COFOR), in the line of French studies on human-human cooperation (De Terssac and Chabaud 1990).
COFOR is a mental structure playing a functional role in cooperation. As is the case for individual representations, COFOR is only accessible to the observer by means of external entities, such as inputs and outputs (communications between agents) or externalised representations on common media (e.g., a duty roster). Conversely, what is lacking in the COFOR at a certain time can be inferred from cooperation deficiencies. For example, an agent can perform a subtask while another agent is already doing the same subtask, for lack of knowing the other agent's current activity.
Thus, COFOR can integrate two kinds of element: representations of the process under control and representations of the control activity (Fig. 1). For the purpose of the present study, we decomposed the representations of the process into two categories: individual aircraft and group of aircraft. In the first case, controllers are processing an individual aircraft for itself; in the second case, they are considering a conflict between several aircraft. Representations of the control activity were decomposed into three categories: common plan or goal, action fulfilment (in relation to monitoring the execution of a plan) and action evaluation.
Finally, in terms of depth of processing, COFOR maintenance and COFOR elaboration can be distinguished on the basis of the number of speech turns. Only two turns are needed for maintenance: an agent communicates a piece of information and the other simply acknowledges reception by an agreement. In the case of elaboration, several turns are needed to manage any lack of understanding or disagreement and to reach a consensus.
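
To make this turn-count criterion concrete, here is a minimal sketch (our illustration; the class and function names are hypothetical, not part of the original coding apparatus) that classifies a COFOR-related exchange by its number of speech turns:

    from dataclasses import dataclass

    @dataclass
    class Exchange:
        """A COFOR-related sequence of speech turns between the two RCs."""
        turns: list  # utterances, alternating between the two agents

    def depth_of_processing(exchange: Exchange) -> str:
        """Classify depth of processing by the number of speech turns.

        Two turns (statement + acknowledgement) count as maintenance;
        longer negotiations, needed to repair a misunderstanding or a
        disagreement and to reach a consensus, count as elaboration.
        """
        return "maintenance" if len(exchange.turns) <= 2 else "elaboration"

    # A two-turn exchange: simple COFOR maintenance.
    ex = Exchange(turns=["The BAL is heading 220.", "OK."])
    assert depth_of_processing(ex) == "maintenance"
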
2.3. Metacooperation
This level of cooperative activities was not addressed in this study. It can take a very long time to develop, but can considerably improve the cooperative activities at the two previous levels (see Hoc 1998, 2001, for a more precise presentation). It integrates the elaboration of compatible representations (the ability to translate one's representations into other kinds of representation more compatible with the other's goals or type of knowledge) and the elaboration of a model of oneself or of the others. In this study of cooperation between two professional RCs, these elements were already available.

3. METHOD
3.1. Subjects
Seven pairs of professional ATC controllers collaborated in this study as RCs, operating together with the same PC, in the regional En Route ATC Centre of Bordeaux.
3.2. Experimental Task
A near full-scale air traffic simulator was used to reproduce a realistic scenario on a sector familiar to the controllers. A high traffic load was defined in order to justify the presence of two RCs (and possible assistance from a machine). The scenario lasted about 40 minutes and included 46 aircraft, 34 of which were involved in 5 two-aircraft conflicts, 4 three-aircraft conflicts and 3 four-aircraft conflicts. Before the experiment, an allocation of aircraft to the controllers was chosen in order to maximise the possible interference between their activities. The aircraft conflicts generated by the simulator could not be resolved without communication between the controllers to agree on solutions. Each time a new aircraft appeared, the controllers were informed of its allocation by a colour. A controller could only give instructions to aircraft under his or her control (i.e., allocated to that controller). Before the present experiment, the controllers were trained on the platform for 6 hours.
Sharing the air traffic between two RCs is a purely experimental situation, not encountered in real work. However, we respected the controllers' work habits, generating scenarios from real traffic samples in the sector.
3.3. Types of Conflict
There were two types of conflict:

• a conflict between several aircraft controlled by the same RC, where the resolution of the conflict might produce another conflict with an aircraft belonging to the other RC;
• a conflict between several aircraft distributed between the two RCs.

In the first case, the resolution was the task of one RC, but adjustment of the two RCs' activities was necessary to resolve possible interference. In the second case, the resolution was a common task: the RCs had to resolve the conflict and the possible future problems together (e.g., if the resolution caused another conflict).
3.4. Data Collection and Analysis
Three types of data were recorded:

• actions on the interface (e.g., instructions to aircraft);
• main events in the traffic (e.g., aircraft entry, strip¹ entry, etc.);
• spontaneous oral communications between the controllers.

The cognitive activities, especially the cooperative ones, were inferred from the data flow and coded using the theoretical framework (Hoc 2001) and the MacSHAPA software (Sanderson et al 1994). The activities are described following a predicate/argument formalism. When the emphasis is put on the activity itself, the predicates code the elementary activities. When the accent is placed on the representation generated by the activity, the predicates code the representations. The results presented here only concern cooperative activities, which were coded in a way consistent with the categorisation presented above (see Fig. 1, and the appendices, which describe the coding scheme and coding examples (Appendix 1) and give an excerpt of a protocol (Appendix 2)). These activities were inferred from the various data that were available, but especially from the oral communications. The coding scheme allowed us to decompose each protocol into elementary units, on the basis of the predicate occurrences.

¹ A paper sheet summing up the main features of an aircraft, including its planned trajectory in the sector.

Fig. 2. Cooperative activities in action.
The structure of the coding scheme has already been presented in Fig. 1. Four predicates code cooperative activity in action: interference creation, anticipation, detection and resolution. Two predicates code representational aspects of cooperative activity in planning (COFOR elaboration and maintenance): common representation of the process under control and common representation of the control activity. Arguments specify these common representations: individual aircraft or group of conflicting aircraft for the process under control; common plan or goal, and action fulfilment or evaluation, for the control activity.
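
As an illustration of this structure, the following sketch represents the six predicates and their arguments as data types; the type names are ours and merely mirror the coding scheme described above and in Appendix 1:

    from dataclasses import dataclass
    from enum import Enum

    class ActionPredicate(Enum):
        """Cooperation in action."""
        INT_CREAT = "interference creation"        # e.g., mutual control
        INT_ANTICIP = "interference anticipation"
        INT_DETECT = "interference detection"
        INT_RESOL = "interference resolution"

    class PlanningPredicate(Enum):
        """Cooperation in planning (COFOR management)."""
        REF_COM_PC = "common representation of the process under control"
        REF_COM_CA = "common representation of the control activity"

    # Argument values for the planning predicates.
    PC_TYPES = ("individual aircraft", "group of conflicting aircraft")
    CA_TYPES = ("common plan/goal", "action fulfilment", "action evaluation")
    DEPTHS = ("elaboration", "maintenance")

    @dataclass
    class CodedUnit:
        """One elementary unit of a protocol."""
        predicate: Enum        # an ActionPredicate or a PlanningPredicate
        arg_type: str = None   # one of PC_TYPES or CA_TYPES (planning only)
        depth: str = None      # one of DEPTHS (planning only)

    # Example from Appendix 2: "RC2: You are going to cross my HLF."
    unit = CodedUnit(PlanningPredicate.REF_COM_PC,
                     "group of conflicting aircraft", "maintenance")
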

Although part of the cooperative activities could remain implicit (i.e., performed through actions, without oral communication), COFOR management appears much more frequent than local interference management in the explicit communications.

4. RESULTS

First, we will consider the distribution over the two activity levels before considering the distribution over the cooperative activities situated within a given level.
4.1. Levels of Cooperative Activities
On average, there were 90.1 units per protocol, of which the overall distribution was:

• cooperation in planning: 79.7% ± 10.6%;²
• cooperation in action: 20.3%.

² ± in the whole paper indicates the half width of the confidence interval at the level α = 0.10.
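
The paper does not spell out how these half widths were computed; assuming a standard Student t interval over the seven protocols (one value per protocol, our assumption), the half width would read:

    % Assumed form of the half width (not stated in the paper):
    \[
      \Delta \;=\; t_{1-\alpha/2,\,n-1}\,\frac{s}{\sqrt{n}},
      \qquad \alpha = 0.10,\; n = 7
      \;\Rightarrow\;
      \Delta \;=\; t_{0.95,\,6}\,\frac{s}{\sqrt{7}} \;\approx\; 1.943\,\frac{s}{\sqrt{7}},
    \]
    % where s is the sample standard deviation of the percentage over protocols.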

4.2. Cooperation in Action


The distribution of the elementary cooperative activities at the action level (on average, 18.3 units per protocol) is presented in Fig. 2. These activities are mainly distributed over interference creation (mutual control), anticipation (identification of the other agent's goal) and detection. Interference resolution plays a minor role in the communications.
4.3. Cooperation in Planning

On average, 71.8 units per protocol were identified at this level. On the whole, COFOR maintenance is almost twice as frequent (64.7% ± 11.7%) as COFOR elaboration (35.3%).
Three distributions will now be presented. In each one, COFOR elaboration and maintenance are distinguished. The first corresponds to the decomposition of the COFOR elements into two categories: process under control and control activity (Fig. 3). The second distributes the representations of the process under control over two categories: individual aircraft and group of aircraft (Fig. 4). The third decomposes the representations of the control activity into three categories: common plan and goal, action fulfilment and action evaluation (Fig. 5).

Fig. 3. Cooperative activities in planning: process under control vs. control activity.

Fig. 4. Cooperative activities in planning: process under control: individual aircraft vs. group of aircraft.

Communications deal much more with the control activity than with the process under control (Fig. 3). The latter (process under control) takes the form of COFOR maintenance much more than elaboration. For the former (control activity), it is not possible to be conclusive because of an overlap between the confidence intervals; however, the observed data show a roughly equal distribution.
COFOR elements concerning the process under control are much more related to groups of aircraft (conflicts) than to individual aircraft (Fig. 4). COFOR maintenance is more frequent than elaboration in both cases.
Finally, among the COFOR elements concerning the control activity, the representations of a common plan or goal are much more frequent than the others. The representations of action fulfilment (plan execution monitoring) are a little more frequent than the representations of action evaluation (however, the difference is negligible). Due to an overlap between the confidence intervals, it is impossible to clearly separate COFOR maintenance and elaboration activities with regard to the common plan or goal. However, the observed data show a higher proportion of elaboration activities. Action fulfilment is mainly checked by maintenance activities. It is impossible to be conclusive between maintenance and elaboration for action evaluation, but the observed data are compatible with an equal distribution.

Fig. 5. Cooperative activities in planning: control activity: common plan and goal, action fulfilment, action evaluation.

5. DISCUSSION
The allocation of aircraft among the RCs was chosen to maximise possible interference between their activities. Comparison between the two levels of cooperative activities shows the prominence of the planning level. This means that interference between the RCs' activities was largely and positively managed at a global coordination level (planning) as opposed to a local interference resolution level, where solutions could be suboptimal. This result introduces more requirements in terms of machine intelligence, because of anticipation requirements.
Although the distribution can vary from one type of COFOR element to another, on average COFOR maintenance is an important aspect of this activity characterised by time constraints (as is the case of cooperation in a two-seater fighter aircraft: Loiselet and Hoc 1999). Most of the time, COFOR management is described as a demanding activity, implying elaboration and explanation. In this kind of situation (shared expertise and time constraints), it can be performed easily by simple maintenance activities. COFOR maintenance is probably crucial for cooperation to develop adequately. This is clearly attainable by machines, since only a simple transfer of available information is required.
Not much interference resolution was reported at the action level. This is probably due to the prominence of the planning level in the management of interference and to some automatic activity. Although interference detection is not negligible, interference creation (mutual control) and anticipation, which play a positive role in cooperation, are well represented at the action level. Almost half of the interference occasions are anticipated and do not actually occur. Shared expertise in the domain is possibly the main reason for this positive aspect of cooperation. This could be partly attainable by a machine, provided that it is informed of the controllers' intentions.
The large share of cooperation in planning can be explained by the fact that sharing a large amount of information on the situation (process under control and control activity) can make the constraints on the RCs' activity more salient. Thus a large proportion of interference detection and resolution is completely implicit (implicitly anticipated). The analysis of cooperation at the planning level reinforces this interpretation:

• At this level we can see that the information shared by the RCs mainly concerns the control activity (common plan and goal, execution and evaluation of actions) as opposed to the process under control. Thus, interference that could occur at the action level is resolved at this planning level. This justifies the enlargement of the concept of situation awareness, which is too often restricted to the external situation. The superiority of the COFOR maintenance (vs. elaboration) activities concerning the process under control shows that the controllers frequently agreed on the external situation analysis and simply checked it. For the control activity, elaboration could take on more importance, but our data are not precise enough to draw any definite conclusion. For a machine, it could be easy to maintain a common problem space, but it could be difficult to perform a common elaboration.
• The common representation of the process under control is much more devoted to conflicting aircraft than to individual ones. This means that cooperation in the management of the problem space (representation of the conflicts) plays a major role. This reinforces the need for an adequate integration of the machine into the controllers' representation of the problem space. The prominence of COFOR maintenance activities in both cases reduces the need for explanation if the machine is able to enter into the controllers' problem space. This can be done by informing the machine, in real time, of the list of aircraft conflicts the controller has in mind (see below).

The elaboration of a common goal or plan accounts for a large proportion of the elementary cooperative activities related to the control activity at the planning level. This result may be explained by the fact that many conflicts constituted a common task for the RCs, due to the allocation of conflicting aircraft between them. This type of conflict requires a large amount of information concerning plans and goals to be shared before it can be resolved. If the machine is fully under the operator's control, this could be easily solved. Concerning plans and goals, COFOR elaboration could be the greater part, but maintenance is not negligible. Common plan elaboration with a machine could be difficult to implement. We have also noted that plan monitoring is not negligible at this level and is effected through maintenance activities. This should be introduced into the machine design.
The COFOR appeared to fulfil two functions related to the problem space:
1. It enables agents to understand each other's activities.
2. It enables agents to adjust the compatibility of each RC's individual problem space.
Problem space elaboration goes hand in hand with resolution elaboration. For example, if an RC considers the resolution of a conflict between two aircraft and the other RC identifies that this solution may cause a new conflict with one of the aircraft under his or her control, then the first RC must reconsider the decision; it is no longer a two-aircraft conflict but a three-aircraft conflict. With this example we can see the impact of cooperation on the evolution of the individual problem spaces. One RC considers a resolution corresponding to a certain representation of the conflict. This RC communicates a solution based on this representation. Then the other RC, who has a different representation of the same problem, can intervene on the first RC's problem representation by introducing a new aircraft that must be considered in the choice of a new resolution. Thus, the two RCs now have a common representation of the conflict. The new ADRD currently being designed will implement this principle (see below).

6. TOWARDS MACHINE DESIGN


From this study, the associated one on anticipation, and the state of the art in the domain, the development of a new ADRD (AMANDA) is in progress. Starting from the supervisory control researchers' idea of developing a machine capable of detecting two-aircraft conflicts and of implementing abstract plans on individual aircraft (e.g., turn to the left, or make an aircraft cross behind another), we have tried to make this machine compatible with the controllers' strategies. In the current situation with two controllers (PC and RC), we think that the PC can progressively define the problem space and solution proposals, with acknowledgements and possible modifications from the RC. It actually belongs to the PC's task to prepare the RC's work (for the moment by writing comments on strips). Thus, we have opted for the suppression of strips and their replacement by a second screen, adjacent to the radar screen, representing the problem space, with a zooming function on a particular problem. Zooming in on a problem (a conflict between two or more aircraft) will give access to a radar view (of the same size as the adjacent one) with some filtering (of non-relevant aircraft) and highlighting of the problematic aircraft (as defined by the controllers).
With the problem view, the RC will be able to access AMANDA and communicate with it: sending plans, following solution proposals, introducing or releasing constraints, and monitoring plan implementation. When elaborating a solution from a plan, the machine will be able to detect new conflicts created by plan implementation. Thus, the machine will be capable of critiquing plans and of questioning the controller's current representation of the problem space. In addition, the controllers will have access to a simulation facility on each problem view to evaluate the present status and the results of possible actions. The interface will satisfy the two COFOR functions previously described. Firstly, AMANDA will be capable of considering the controller's intentions (receiving plans to implement) and problem representations (receiving problem definitions in the form of lists of aircraft belonging to each problem). Secondly, when AMANDA is unable to implement a plan because of an aircraft not considered by the controllers in their problem definition, it will be capable of enriching the controllers' problem representation by introducing the neglected aircraft, or of receiving a plan for this aircraft. In this way, with a machine that is not very complex, we think we can almost reach something like a common COFOR elaboration mechanism between the controllers and the machine, initiated by the RC.
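
As a sketch of this exchange (our own illustration; the paper does not specify AMANDA's programming interface), the machine either returns a computed solution or feeds the neglected aircraft back into the problem definition:

    from dataclasses import dataclass

    @dataclass
    class Plan:
        aircraft: str      # callsign, e.g. "AWD"
        directive: str     # abstract plan, e.g. "turn right"

    @dataclass
    class Problem:
        aircraft: set      # the conflict as defined by the controllers

    def validate_plan(plan: Plan, problem: Problem, conflicts_found: set):
        """Either accept the plan or question the problem definition.

        conflicts_found: aircraft detected by the machine as conflicting
        with the trajectory implementing the plan.
        """
        neglected = conflicts_found - problem.aircraft
        if neglected:
            # Enrich the controllers' problem representation: the plan
            # cannot be implemented with the current problem definition.
            problem.aircraft |= neglected
            return ("problem enriched", sorted(neglected))
        return ("solution computed", plan)

    # Example from Appendix 2: turning AWD right conflicts with HLF,
    # which was not in the controllers' initial problem definition.
    problem = Problem(aircraft={"AWD", "RKA"})
    print(validate_plan(Plan("AWD", "turn right"), problem, {"HLF"}))
    # -> ('problem enriched', ['HLF']); the problem now includes HLF.
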
An experiment in preparation will evaluate the three main principles of AMANDA:

• Problem space maintenance: updating the list of problems in real time, organising it according to a time-to-collision criterion (a sketch follows this list).
• Plan and problem space validation: elaborating a precise solution, or informing the controller that the plan cannot be implemented because of a bad problem definition.
• Solution delegation: executing the solution and returning the deviated aircraft to their routes.
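
A minimal sketch of the first principle, under our assumption (not the paper's) that each problem carries an estimated time to collision; the names and numbers are illustrative:

    from dataclasses import dataclass, field

    @dataclass(order=True)
    class ProblemEntry:
        time_to_collision: float        # seconds until loss of separation
        aircraft: set = field(compare=False)

    def ordered_problem_space(problems):
        """Organise the problem list by urgency (smallest time first)."""
        return sorted(problems)

    problems = [ProblemEntry(480.0, {"AFR1101", "IEA531"}),
                ProblemEntry(240.0, {"AWD", "RKA", "HLF"})]
    for p in ordered_problem_space(problems):
        print(sorted(p.aircraft), "in", int(p.time_to_collision), "s")
    # The most urgent problem (AWD/RKA/HLF at 240 s) is listed first.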

When analysing the results of this experiment, we will devote particular attention to the integration of AMANDA into the controller's overall activity. We want to avoid a situation where, in trying to alleviate the workload, the use of the system introduces new sources of workload. For example, a large part of the controller's activity is subsymbolic, with low demands on symbolic attentional control. AMANDA, which needs symbolic communication to be efficient, could result in a bad trade-off between subsymbolic and symbolic processes.

7. CONCLUSION
The results of this study show that the most crucial task in ATC is the elaboration and maintenance of an appropriate problem space. Within this structure, the choice of solutions and their interpretation is almost obvious. That is why COFOR management activities are prominent in human-human cooperation when two controllers share the traffic. Certainly, the allocation of schematic solutions and of their implementation to a machine could alleviate the RC's workload, especially working memory load, since the execution of the operations needed for conflict resolution is distributed over 10 or 20 minutes. But the relationships between these solutions and a changing problem space should be carefully considered.

The results presented in this study are useful for the design of the computer system that is being developed through collaboration between supervisory control researchers and cognitive ergonomics psychologists. The current limitations of machines in terms of COFOR elaboration do not let us hope that the human-human cooperation model suggested by these results could be fully transferred to human-machine cooperation. However, we have suggested that support for the COFOR be designed to serve as a communication medium between the RC and the machine. The use of such a medium could be very different for the human and for the machine. The RC could use it to spatially define and update the problem space on an interface similar to the radar screen (the problem view). The machine could use it in a less interpretative way, just displaying information on conflicts unanticipated by the RC on the problem views, and gathering information from the RC on aircraft allocation and plans.
In the present experiment, the COFOR allowed each RC to know the constraints on the other's activities and then to forecast:

• interference produced on the other RC's activity;
• interference resulting from the other RC's activity.

The RCs then had to integrate interference between their individual activities into their decision making.
The relevance of this methodology, which consists in using data on human-human cooperation to design human-machine cooperation, rests on the assumption that expert controllers' strategies are efficient. Obviously, if one has the least doubt about that, measures of effectiveness are required to identify design problems (Long 1996). The next experiment will consist in evaluating the design principle presented above. Measures of effectiveness in different modes of implementation of the principle will be utilised to identify further design problems (e.g., the trade-off between subsymbolic and symbolic processes) and to suggest solutions for local adjustments of the principle, as is always the case in incremental design.
Acknowledgements
This study was conducted with the financial support of CENA (the French national research centre in ATC) while the authors were working at LAMIH (CNRS, Valenciennes, France). We thank the controllers of the regional ATC centre of Bordeaux for their collaboration, and especially Bernard Diot. We also thank the members of the LAMIH multidisciplinary research team involved in this programme for their technical contribution: Igor Crevits, Serge Debernard, Pascal Denecker and Thierry Poulain.


References
Castelfranchi C (1998). Modelling social action for agents. Artificial Intelligence 103:157–182.
Clark HH (1996). Using language. Cambridge University Press, Cambridge.
De Terssac G, Chabaud C (1990). Référentiel opératif commun et fiabilité [Operative frame of reference and reliability]. In Leplat J, De Terssac G (eds). Les facteurs humains de la fiabilité [Human factors in reliability]. Octarès, Toulouse, pp 110–139.
Debernard S, Vanderhaegen F, Millot P (1992). An experimental investigation of dynamic allocation of tasks between air traffic controller and AI system. Paper presented at the 5th IFAC/IFIP/IFORS/IEA MMS, Amsterdam.
Endsley M (1995). Toward a theory of situation awareness in dynamic systems. Human Factors 37:32–64.
Hoc JM (1988). Cognitive psychology of planning. Academic Press, London.
Hoc JM (1998). How can we evaluate the quality of human-machine cooperation? In Darses F, Zaraté P (eds). COOP'98, Third international conference on the design of cooperative systems. INRIA, Le Chesnay, pp 121–130.
Hoc JM (2001). Towards a cognitive approach to human-machine cooperation in dynamic situations. International Journal of Human-Computer Studies 54:509–540.
Hoc JM (in press). Toward ecological validity of research in cognitive ergonomics. Theoretical Issues in Ergonomics Science.
Hoc JM, Lemoine MP (1998). Cognitive evaluation of human-human and human-machine cooperation modes in air traffic control. International Journal of Aviation Psychology 8:1–32.
Hoc JM, Morineau T, Denecker P (2000). Gestion de l'espace problème et organisation temporelle de l'activité de contrôleurs aériens professionnels sur simulateur [Problem space management and temporal organisation of the activity of professional air traffic controllers on a simulator] (research report). UVHC, LAMIH, PERCOTEC, Valenciennes.
Hopkin VD (1995). Human factors in air traffic control. Taylor & Francis, London.
Jentsch F, Barnett J, Bowers C, Salas E (1999). Who is flying this plane anyway? What mishaps tell us about crew member role assignment and air crew situation awareness. Human Factors 41:1–14.
Jones PM, Jasek CA (1997). Intelligent support for activity management (ISAM): an architecture to support distributed supervisory control. IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans 27:274–288.
Jones PM, Mitchell CM (1994). Model-based communicative acts: human-computer collaboration in supervisory control. International Journal of Human-Computer Studies 41:527–551.
Layton C, Smith PJ, McCoy E (1994). Design of a cooperative problem-solving system for en-route flight planning: an empirical evaluation. Human Factors 36:94–119.
Lemoine MP, Debernard S, Crevits I, Millot P (1996). Cooperation between humans and machines: first results of an experiment with a multi-level cooperative organisation in air traffic control. Computer Supported Cooperative Work 5:299–321.
Leroux M (1991). ERATO: cognitive engineering applied to air traffic control (research report). CENA, Toulouse.
Loiselet A, Hoc JM (1999). Assessment of a method to study cognitive cooperation in fighter aircraft piloting. Paper presented at CSAPC'99, Villeneuve d'Ascq, September.
Long J (1996). Specifying relations between research and the design of human-computer interaction. International Journal of Human-Computer Studies 44:875–920.
McCarthy JC, Fallon E, Bannon L (2000). Dialogues on function allocation [special issue]. International Journal of Human-Computer Studies 52(2).
Millot P, Lemoine MP (1998). An attempt for generic concepts toward human-machine cooperation. Paper presented at IEEE SMC, San Diego, CA, October.
Older MT, Waterson PE, Clegg CW (1997). A critical assessment of task allocation methods and their applicability. Ergonomics 40:151–171.
Rasmussen J (1986). Information processing and human-machine interaction. North-Holland, Amsterdam.
Roth EM, Malin JT, Schreckenghost DL (1997). Paradigms for intelligent interface design. In Helander M, Landauer TK, Prabhu P (eds). Handbook of human-computer interaction. North-Holland, Amsterdam, pp 1177–1201.
Salas E, Prince C, Baker P, Shrestha L (1995). Situation awareness in team performance: implications for measurement and training. Human Factors 37:123–136.
Sanderson P, Scott J, Johnston T, Mainzer J, Watanabe L, James J (1994). MacSHAPA and the enterprise of exploratory sequential data analysis (ESDA). International Journal of Human-Computer Studies 41:633–681.
Smith PJ, McCoy E, Layton C (1997). Brittleness in the design of cooperative problem-solving systems: the effects on user performance. IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans 27:360–371.
Correspondence and offprint requests to: J.-M. Hoc, CNRS, IRCCyN,
PsyCoTec, Ecole Centrale de Nantes, BP 92101, F-44321 Nantes Cedex 3,
France. Email: Jean-Michel.Hoc@irccyn.ec-nantes.fr

APPENDIX 1: CODING SCHEME AND CODING EXAMPLES

Six predicates have been utilised to encode the cooperative activities. They are distributed over two classes: cooperation in action and cooperation in planning.
1. Cooperation in Action

The arguments of the predicates in this class are not presented, because they were not utilised in this paper.

• INT.CREAT: Interference creation
"Are you sure . . . you want to turn the AZA behind the MPH?"
(mutual control: detection of a possible error, the two aircraft being under the control of the same RC).
• INT.ANTICIP: Interference anticipation
"If I turn my AZA to the right, you must turn your AEA to the right too."
(anticipation of a possible problem for the partner if an action is executed).
• INT.DETECT: Interference detection
"Your VIV is behind my DIATC and goes faster."
(detection of an immediate problem between two aircraft allocated to two different RCs, i.e., between the two RCs' activities).
• INT.RESOL: Interference resolution
"Between my AWD and your HLF, you must turn yours or we will have a problem."
(immediate solution proposal).
2. Cooperation in Planning

Two predicates have been utilised at the planning level. Each predicate has two common arguments: <dep.proc>, coding the depth of processing (values: elaboration and maintenance), and <type>, coding the subtype of activity (with specific values for each predicate, see below).

• REF.COM.PC: Common representation of the process under control

For this predicate, the argument <type> can have two specific values:
Individual aircraft: "The BAL is heading 220."
Group of aircraft: "AFR1101 and IEA531, they will come across."

• REF.COM.CA: Common representation of the control activity

For this predicate, the argument <type> can have three specific values:
Common plan/goal: "I turn my MON to the left and you maintain your HLF."
Action fulfilment: "I've put my AZA direct to Sauveterre." (This RC is telling the other one that a planned action has been done.)
Action evaluation: "The resolution between the BAL and the IBE is almost complete."

APPENDIX 2: EXCERPT OF A PROTOCOL

Situational context: Three aircraft are involved in a single conflict. AWD (an aircraft) is conflicting with RKA. These two aircraft are under the control of RC1 (the first radar controller). Behind RKA, there is another aircraft, HLF, under RC2's control. In the context limited to these two conflicting aircraft, turning AWD to the right is the best solution. Unfortunately, doing this may produce a new conflict between AWD and HLF. So, the RCs have to discuss the situation if they don't want to produce a new conflict between their aircraft. Consequently, the problem consists of three aircraft distributed over the two controllers, and cooperation is needed.

Verbal reports | Coding scheme | Comments

... | ... | RC1 is resolving the conflict between the AWD and the RKA.

RC1: Where is my AWD going? It is going to Nantes. | REF.COM.PC (individual aircraft, maintenance) | RC1 gives some information about one of his aircraft.

RC1: So, I will turn it to the right. | REF.COM.CA (common plan, maintenance) | RC1 informs the other RC of his intention to resolve the conflict by turning the AWD to the right.

RC2: Beware! You are going to cross my HLF. | REF.COM.CA (action evaluation, maintenance); REF.COM.PC (group of aircraft, maintenance) | RC2 evaluates this solution negatively because it will produce another conflict with his HLF. RC2 informs RC1 of the disturbing presence of the HLF.

RC1: Oh yes, there is also the HLF. I've not seen it. | REF.COM.PC (group of aircraft, maintenance) | RC1 agrees with the disturbing presence of the HLF.

RC2: So, you turn your RKA? | REF.COM.CA (common plan, elaboration) | RC2 suggests a new solution.

RC1: No, it's okay. I don't move my RKA. | REF.COM.CA (common plan, elaboration) | RC1 rejects this new solution.

RC1: I will turn my AWD and you have to turn your HLF. | REF.COM.CA (common plan, elaboration) | RC1 suggests maintaining his previous solution, but also suggests that RC2 turn his own aircraft too.

... | ... | The discussion between the two RCs continues until they reach an agreement.
