
Decision Making

The information we process may be simple or complex, clear or distorted, and complete or filled with gaps. Because of this variability in information complexity and completeness, we adopt different decision processes depending on the situation.

Sometimes we carefully calculate and evaluate alternatives, but often we simply interpret the information as best we can and make educated guesses about what to do.

Some decisions are so routine that we might not even consider them to be decisions (naturalistic decision making?).

What is a decision-making task?

Generally, it is a task in which

(a) a person must select one option from a number of alternatives,

(b) there is some amount of information available with respect to the options,

(c) the timeframe is relatively long (longer than a second), and

(d) the choice is associated with uncertainty; that is, it is not necessarily clear which is the best option.

By definition, decision making involves risk, and a good decision maker effectively assesses the risks associated with each option.

Decision making can generally be represented by three phases, each of which itself can be elaborated into subphases:

(1) acquiring and perceiving information or cues relevant for the decision,

(2) generating and selecting hypotheses or situation assessments about what the cues mean, regarding the current and future state relevant to the decision, and

(3) planning and selecting choices to take, on the basis of the inferred state and the costs and values of different outcomes.

The three stages often cycle and iterate within a single decision.

Most of the initial research on decision making focused on the study of optimal, rational decision making.

The assumption was that if researchers could specify the values (costs or benefits) associated with different choices, mathematical models could be applied to those values, yielding the optimal choice that would maximize these values.

Rational models of decision making are also sometimes called normative models because they specify what people ideally should do; they do not necessarily describe how people actually perform decision-making tasks.

Later researchers became interested in describing the cognitive processes associated with human decision-making behavior and developed a number of descriptive models.

Normative Decision Models

Normative decision models revolve around the central concept of utility: the overall value of a choice, or how much each outcome or product is "worth" to the decision maker.

This model has application in engineering decisions as well as decisions in personal life.

Choosing between different corporate investments, materials for a product, jobs, or even cars are all examples of choices that can be modeled using multi-attribute utility theory.

Multi-attribute utility theory can be used to guide engineering design decisions. Similarly, it has been used to resolve conflicting objectives, to guide environmental cleanup of contaminated sites, and to support operators of flexible manufacturing systems.

The number of potential options, the number of attributes or
features that describe each option, and the difficulty in
comparing alternatives on very different dimensions make
decisions complicated.

Multi-attribute utility theory addresses this complexity, using a utility function to translate the multidimensional space of attributes into a single dimension that reflects the overall utility or value of each option.

In theory, this makes it possible to compare apples and oranges and pick the best one.

So how does multi-attribute utility theory work?

Multi-attribute utility theory assumes that the overall value of a decision option is the sum of the magnitude of each attribute multiplied by the utility of each attribute:

U(v) = Σ a(i) × u(i), summed over i = 1 to n,

where U(v) is the overall utility of an option, a(i) is the magnitude of the option on the ith attribute, u(i) is the utility (goodness or importance) of the ith attribute, and n is the number of attributes.

The figure shows the analysis of four different options,
where the options are different cars that a student might
purchase.

Each car is described by five attributes.

These attributes might include the initial purchase price, the fuel economy, insurance costs, sound quality of the stereo system, and maintenance costs.

The utility of each attribute reflects its importance to the student.

For example, the student cannot afford frequent and expensive repairs, so the utility or importance of the fifth attribute (maintenance costs) is quite high (8), whereas the student does not care about the sound quality of the stereo, so the utility of the fourth attribute (stereo system quality) is quite low (1).

For this example, higher values reflect a more desirable situation.

For example, the third car has a poor stereo but low maintenance costs. In contrast, the first car has a slightly better stereo but high maintenance costs.

Combining the magnitudes of all the attributes shows that the third car (option 3) is the most appealing, or "optimal," choice and that the first car (option 1) is the least appealing.
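As a rough illustration (in Python), the sketch below applies U(v) = Σ a(i) × u(i) to four hypothetical cars. Only the attribute utilities for the stereo (1) and maintenance costs (8) are taken from the example above; the other utilities and all of the attribute magnitudes are invented numbers, chosen so that option 3 scores highest and option 1 lowest, as in the figure.

# Multi-attribute utility sketch. Attribute order: purchase price, fuel
# economy, insurance, stereo quality, maintenance costs. Higher magnitudes
# are more desirable. All values except the stereo (1) and maintenance (8)
# utilities are hypothetical.
attribute_utility = [5, 6, 4, 1, 8]

options = {
    "car 1": [6, 5, 4, 3, 2],   # slightly better stereo, high maintenance costs
    "car 2": [5, 6, 5, 4, 5],
    "car 3": [6, 6, 5, 2, 9],   # poor stereo, low maintenance costs
    "car 4": [4, 7, 6, 5, 6],
}

def overall_utility(magnitudes, utilities):
    # U(v) = sum over attributes of a(i) * u(i)
    return sum(a * u for a, u in zip(magnitudes, utilities))

for name, magnitudes in options.items():
    print(name, overall_utility(magnitudes, attribute_utility))
# car 1 -> 95, car 2 -> 125, car 3 -> 160, car 4 -> 139

best = max(options, key=lambda name: overall_utility(options[name], attribute_utility))
print("best option:", best)   # best option: car 3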

Multi-attribute utility theory, shown in the previous figure, assumes that all outcomes are certain.

However, life is uncertain, and probabilities often define the likelihood of various outcomes (e.g., you cannot predict maintenance costs precisely or when parts will fail or break down).

Another example of a normative model is expected value theory, which addresses uncertainty.

This theory replaces the concept of utility in the previous context with that of expected value and applies to any decision that involves a "gamble," where each choice has one or more outcomes with an associated worth and probability.

For example, a person might be offered a choice between

1. winning $50 with a probability of .20, or

2. winning $20 with a probability of .60.

Expected value theory assumes that the overall value of a choice is the sum of the worth of each outcome multiplied by its probability:

E(v) = Σ p(i) × v(i), summed over i = 1 to n,

where E(v) is the expected value of the choice, p(i) is the probability of the ith outcome, and v(i) is the value of the ith outcome.

The expected value of the first choice in the example is $50 × .20, or $10, meaning that if the choice were selected many times, one would expect an average gain of $10.

The expected value of the second choice is $20 × .60, or $12, which is a higher overall value. Therefore, the optimal or normative decision maker should always choose the second gamble.
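A minimal worked version of this calculation, in Python; the two gambles are exactly the ones from the example above.

# Expected value: E(v) = sum over outcomes of p(i) * v(i)
def expected_value(outcomes):
    # outcomes is a list of (probability, value) pairs for one choice
    return sum(p * v for p, v in outcomes)

choice_1 = [(0.20, 50)]   # win $50 with probability .20
choice_2 = [(0.60, 20)]   # win $20 with probability .60

print(expected_value(choice_1))   # 10.0
print(expected_value(choice_2))   # 12.0 -> the normatively better gamble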

In a variety of decision tasks, researchers have compared the results of the normative model to actual human decision making and found that people often vary from the optimal choice. This model does not predict the decisions people actually make.

Expected value theory is relatively limited in scope because it quickly becomes clear that many choices in life have different values to different people. For example, one person might value fuel efficiency in an automobile, whereas another might not.

This facet of human decision making led to the
development of subjective expected utility (SEU)
theory.

SEU theory still relies on the concept of subjective probability times worth or value for each possible outcome.

However, the worth component is subjective, determined for each person; that is, instead of an objective (e.g., monetary) worth, an outcome has some value or utility to each individual.

Thus, each choice a person can make is associated with one or more outcomes, and each outcome has an associated probability and some subjective utility.
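The sketch below (Python) shows how SEU differs from plain expected value: each outcome's worth is passed through a person-specific utility function before being weighted by its probability. The two utility functions are illustrative assumptions, not part of the text; they simply show that the preferred gamble can change with the individual's utilities.

import math

# SEU = sum over outcomes of subjective_probability * utility(worth)
def seu(outcomes, utility):
    return sum(p * utility(v) for p, v in outcomes)

choice_1 = [(0.20, 50)]   # win $50 with probability .20
choice_2 = [(0.60, 20)]   # win $20 with probability .60

risk_averse  = lambda v: math.sqrt(v)   # assumed: diminishing value of money
risk_seeking = lambda v: v ** 2         # assumed: large wins valued disproportionately

print(seu(choice_1, risk_averse), seu(choice_2, risk_averse))     # ~1.4  ~2.7 (choice 2 preferred)
print(seu(choice_1, risk_seeking), seu(choice_2, risk_seeking))   # 500.0  240.0 (choice 1 preferred)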

Descriptive Decision Models

Numerous researchers have evaluated the extent to which humans follow normative decision models, especially SEU theory. The conclusion, based on several years of experimentation, is that human decision making frequently violates key assumptions of the normative models.

Because actual decision making commonly showed violations of normative model assumptions, researchers began to search for more descriptive models that would capture how humans actually make decisions.

These researchers believed that rational consideration of all factors associated with all possible choices, as well as their outcomes, is frequently just too time consuming and effort demanding.

They suggested descriptive models of decision making
where people rely on simpler and less-complete means of
selecting among choices.

People often rely on simplified shortcuts or rules of thumb that are sometimes referred to as heuristics.

One well-known example of an early descriptive model is Simon's concept of satisficing.

Simon (1957) argued that people do not usually follow a goal of making the
absolutely best or optimal decision. Instead, they opt for a choice that is
"good enough" for their purposes, something satisfactory. This shortcut
method of decision making is termed satisficing. In satisficing, the decision
maker generates and considers choices only until one is found that is
acceptable. Going beyond this choice to identify something that is better
simply has too little advantage to make it worth the effort.
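A minimal sketch of the difference, in Python; the options, their utilities, and the acceptability threshold are invented for illustration.

# Satisficing: stop at the first option that is "good enough".
# Optimizing: examine every option and take the best one.
options = [("A", 6), ("B", 8), ("C", 9), ("D", 7)]   # (name, utility), in the order considered

def satisfice(options, threshold):
    for name, utility in options:
        if utility >= threshold:
            return name        # good enough -- stop searching
    return None                # nothing acceptable was found

def optimize(options):
    return max(options, key=lambda item: item[1])[0]

print(satisfice(options, threshold=7))   # 'B' -- found after considering two options
print(optimize(options))                 # 'C' -- required scanning all four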

Satisficing is a very reasonable approach given that people have limited cognitive capacities and limited time.

Indeed, if minimizing the time (or effort) to make a decision is itself considered to be an attribute of the decision process, then satisficing (or other shortcutting heuristics) can sometimes be said to be optimal; for example, when a decision must be made before a deadline or all is lost.

Heuristics such as satisficing are often quite effective (Gigerenzer & Todd, 1999), but they can also lead to biases and poor decisions.

In summary, if the amount of information is relatively
small and time is unconstrained, careful analysis of
the choices and their utilities is desirable and possible.

To the extent that the amount of information exceeds cognitive processing limitations, time is limited, or both, people shift to using simplifying heuristics.

The following slides describe some common heuristics and associated biases.

Following the discussion of heuristics and biases, we describe the range of decision-making processes that people adopt and how the decision-making process depends on the decision-making context.

HEURISTICS AND BIASES

Cognitive heuristics represent rules of thumb that are easy ways of making decisions.

Heuristics are usually very powerful and efficient (Gigerenzer & Todd, 1999), but they do not always guarantee the best solution (Kahneman et al., 1982).

Unfortunately, because they represent simplifications, heuristics occasionally lead to systematic flaws and errors.

These systematic flaws represent deviations from a rational or normative model; they are sometimes referred to as biases and can be represented in terms of a basic information-processing model.

Information Processing Limits in Decision Making

The following figure shows a relatively simple information-processing framework that highlights some of the cognitive limits critical to conscious, effortful decision making.

Just as they were related to troubleshooting and problem solving, selective attention, activities performed within working memory, and information retrieval from long-term memory all have an important influence on decision making.

These processes impose important limits on human decision making and are one reason why people use heuristics to make decisions.

Information-processing model of decision making. Cues are selectively sampled (on the left); hypotheses are generated through retrieval from long-term memory. Possible actions are retrieved from long-term memory, and an action is selected on the basis of the risks and values of their outcomes.

According to this model, the following occur in working memory:

1. Cue reception and integration. A number of cues, or pieces of information, are received from the environment and go into working memory.

For example, an engineer trying to identify the problem in a manufacturing process might receive a number of cues, including unusual vibrations, particularly rapid tool wear, and strange noises. The cues must be selectively attended, interpreted, and somehow integrated with respect to one another. The cues may also be incomplete, fuzzy, or erroneous; that is, they may be associated with some amount of uncertainty.

2. Hypothesis generation and selection. A person may then use these cues to generate one or more hypotheses, "educated" guesses, diagnoses, or inferences as to what the cues mean. This is accomplished by retrieving information from long-term memory.

For example, an engineer might hypothesize that the set of cues described above is caused by a worn bearing. Many of the decision tasks studied in human factors require such inferential diagnosis, which is the process of inferring the underlying or "true" state of a system. Examples of inferential diagnosis include medical diagnosis, fault diagnosis of a mechanical or electrical system, inference of weather conditions based on measurement values or displays, and so on. Sometimes this diagnosis is of the current state, and sometimes it is of the predicted or forecast state, such as in weather forecasting or economic projections.

The hypotheses brought into working memory are evaluated with respect to how likely they are to be correct. This is accomplished by gathering additional cues from the environment to either confirm or disconfirm each hypothesis.

In addition, hypotheses may need to be revised, or a new one may need to be generated.

When a hypothesis is found to be adequately supported by the information, that hypothesis is chosen as the basis for a course of action.

3. Plan generation and action choice. One or more alternative actions are generated by retrieving possibilities from memory.

For example, after diagnosing acute appendicitis, the surgeon in our scenario generated several alternative actions, including waiting, conducting additional tests, and performing surgery. Depending on the decision time available, one or more of the alternatives are generated and considered.

To choose an action, the decision maker might evaluate information such as

• the possible outcomes of each action (where there may be multiple possible outcomes for each action),

• the likelihood of each outcome, and

• the negative and positive factors associated with each outcome, following certain procedures.

Each action is associated with multiple possible outcomes, some of which are more likely than others. In addition, these outcomes may vary from mildly to extremely positive or from mildly to extremely negative.

If the working hypothesis, plan, or action proves unsatisfactory,
the decision maker may generate a new hypothesis, plan, or
action.

When a plan is finally selected, it is executed, and the person monitors the environment to update his or her situation assessment and to determine whether changes in procedures must be made.

The decision process depends on limited cognitive resources, such as working memory.

Heuristics and Biases in Receiving and Using Cues

1. Attention to a limited number of cues.

Due to working memory limitations, people can use only a relatively small number of cues to develop a picture of the world or system. This is one reason why configural displays that visually integrate several variables or factors into one display are useful.

2. Cue primacy and anchoring.

In decisions where people receive cues over a period of time, there are certain trends or biases in the use of that information.

The first few cues receive greater than average weight or importance. This is a primacy effect, found in many information-processing tasks, where preliminary information tends to carry more weight than subsequent information.

It often leads people to "anchor" on hypotheses supported
by initial evidence and is therefore sometimes called the
anchoring heuristic, characterizing the familiar
phenomenon that first impressions are lasting.

The order of information has an effect because people use the information to construct plausible stories or mental models of the world or system.

The key point is that, for whatever reason, information processed early is often most influential, and this will ultimately affect decision making.
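One way to picture this (an illustrative sketch, not a model from the text) is to compare an estimate that weights all cues equally with one that discounts each later cue, as the primacy/anchoring effect describes. The cue values and the decay rate below are made up.

# Integrating a sequence of numeric cues into a single estimate.
cues = [2.0, 2.2, 7.8, 8.1, 8.0]   # early cues suggest a low value, later cues a high value

def equal_weight_estimate(cues):
    return sum(cues) / len(cues)

def primacy_weighted_estimate(cues, decay=0.6):
    # The i-th cue gets weight decay**i, so earlier cues dominate.
    weights = [decay ** i for i in range(len(cues))]
    return sum(w * c for w, c in zip(weights, cues)) / sum(weights)

print(equal_weight_estimate(cues))      # ~5.6 -- reflects all cues
print(primacy_weighted_estimate(cues))  # ~3.9 -- anchored near the early, low cues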

3. Inattention to later cues.

In contrast to primacy, cues occurring later in time or cues that change over time are often likely to be totally ignored, which may be attributable to attentional factors.

In medical diagnosis, this would mean that symptoms, or cues, that are presented first would be more likely to be brought into working memory and remain dominant.

It is important to consider that in many dynamic environments with changing information, limitations 2 and 3 can be counterproductive. How? Older information, which is recalled when primacy is dominant, may become less accurate as time goes on; it is more likely to be outdated and to have been superseded by more recent changes.

4. Cue salience.

Perceptually salient cues are more likely to capture attention and be given more weight. As one might expect, salient cues in displays are things such as information at the top of a display, the loudest alarm, the largest display, and so forth.

Unfortunately, the most salient display cue is not necessarily the most diagnostic.

5. Overweighting of unreliable cues.

Not all cues are equally reliable.

In a trial, for example, some witnesses will always tell the truth. Others might have faulty memories, and still others might intentionally lie.

However, when integrating cues, people often simplify the process by treating all cues as if they were equally valid and reliable. The result is that people tend to give too much weight to unreliable information.
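An illustrative sketch (Python, made-up values): when cues differ in reliability, treating them all as equally valid, the simplification described above, lets an unreliable cue pull the estimate away from what the reliable cues suggest.

# Each cue is a (reported value, reliability between 0 and 1) pair.
cues = [
    (10.0, 0.9),   # reliable source
    (10.5, 0.8),   # reliable source
    (25.0, 0.1),   # very unreliable source
]

def equal_weight(cues):
    return sum(v for v, _ in cues) / len(cues)

def reliability_weighted(cues):
    return sum(v * r for v, r in cues) / sum(r for _, r in cues)

print(equal_weight(cues))           # ~15.2 -- the unreliable cue gets full weight
print(reliability_weighted(cues))   # ~11.1 -- dominated by the reliable cues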

Heuristics and Biases in Hypothesis Generation,
Evaluation and Selection

After a limited set of cues is processed in working memory, the decision maker generates hypotheses by retrieving one or more from long-term memory. There are a number of heuristics and biases that affect this process:

1. Generation of a limited number of hypotheses.

People generate a limited number of hypotheses because of working memory limitations. Thus, people will bring in somewhere between one and four hypotheses for evaluation. People consider a small subset of possible hypotheses at one time and often never consider all relevant hypotheses.

2. Availability heuristic.

Memory research suggests that people more easily retrieve hypotheses that have been considered recently or that have been considered frequently. Unusual illnesses are simply not the first things that come to mind for a physician.

This is related to another heuristic, the availability heuristic, which assumes that people make certain types of judgment, for example, estimates of frequency, by cognitively assessing how easily the state or event is brought to mind.

3. Representativeness heuristic.

Sometimes people diagnose a situation because the pattern of cues "looks like," or is representative of, the prototypical example of that situation. This is the representativeness heuristic.
4. Overconfidence.

People are often biased in their confidence with respect to the hypotheses they have brought into working memory, believing that they are correct more often than they actually are; this reflects the more general tendency toward overconfidence.

5. Cognitive tunneling.

As we have noted above in the context of anchoring, once a hypothesis has been generated or chosen, people tend to underutilize subsequent cues. We remain stuck on our initial hypothesis.

6. Confirmation bias.

Closely related to cognitive fixation are the biases that arise when people consider additional cues to evaluate working hypotheses. First, they tend to seek out only confirming information and not disconfirming information, even when the disconfirming evidence can be more diagnostic.

Heuristics and Biases in Action Selection

1. Retrieve a small number of actions.

Long-term memory may provide many possible action plans, but people are limited in the number they can retrieve and keep in working memory.

2. Availability heuristic for actions.

In retrieving possible courses of action from long-term memory, people retrieve the most "available" actions. In general, the availability of items from memory is a function of recency, frequency, and how strongly they are associated with the hypothesis or situational assessment that has been selected, through the use of "if-then" rules.

3. Availability of possible outcomes.

Other types of availability effects also occur, including the generation or retrieval of associated outcomes.

When more than one possible action is retrieved, the decision maker must select one based on how well the action will yield desirable outcomes. Each action often has more than one associated consequence, and these consequences are probabilistic.

As an example, a worker might consider adhering to a safety procedure and wearing a hardhat versus ignoring the procedure and going without one.

4. Framing bias.

The framing bias is the influence of the framing, or presentation, of a decision on a person's judgment (Kahneman & Tversky, 1984). According to the normative utility theory model, the way the problem is presented should have no effect on the judgment.

Yet anyone would likely feel that he or she is performing better when told that he or she answered 80 percent of the questions on an exam correctly than when told that he or she answered 20 percent of the questions incorrectly, even though the two descriptions are equivalent.

The way a decision is framed can bias decisions.

DEPENDENCY OF DECISION MAKING ON THE DECISION CONTEXT

The long list of decision-making biases and heuristics above may suggest that people are not very effective decision makers in everyday situations.

In fact, this is not the case. Most people make good decisions most of the time, but the list can help account for the infrequent circumstances in which decisions go wrong.

One reason that most decisions are good is that heuristics are accurate most of the time.

A second reason is that people have a profile of resources: information-processing capabilities, experiences, and decision aids (e.g., a decision matrix) that they can adapt to the situations they face. To the extent that people have the appropriate resources and can adapt them, they make good decisions.

One way people adapt to different decision
circumstances is by moving from an analytical
approach, where they might try to maximize utility, to
the use of simplifying heuristics, such as satisficing.

This is commonly found in complex and dynamic operational control environments, such as hospitals, power or manufacturing plant control rooms, air traffic control towers, and aircraft cockpits.

Naturalistic decision situations lead people to adopt different strategies than what might be observed in controlled laboratory situations.

Skill-, Rule-, and Knowledge-Based Behavior

The distinctions among skill-, rule-, and knowledge-based behavior describe different decision-making processes that people can adopt depending on their level of expertise and the decision situation (Rasmussen).

Rasmussen's SRK (skill, rule, knowledge) model of behavior has received increasing attention in the field of human factors. It is consistent with accepted and empirically supported models of cognitive information processing and has also been used in popular accounts of human error.

The figure shows the three levels of cognitive control: skill-based behavior, rule-based behavior, and knowledge-based behavior.

Sensory input enters at the lower left, selected as a function of attentional processes. This input results in cognitive processing at either the skill-based level, the rule-based level, or the knowledge-based level, depending on the operator's degree of experience with the particular circumstance.

People who are extremely experienced with a task tend to process the input at the skill-based level. They do not have to interpret and integrate the cues or think of possible actions, but only respond to cues as signals that guide responses. Signals are used to select the appropriate motor pattern for the situation.

When people are familiar with the task but do not have
extensive experience, they process input and perform
at the rule-based level.

The input is recognized in relation to typical system states, termed signs, which trigger rules accumulated from past experience. This accumulated knowledge can be in the person's head or written down in formal procedures. Following a recipe to bake bread is an example of rule-based behavior.

The rules are "if-then" associations between cue sets and the appropriate actions. An operator might interpret the meter reading as a sign and reduce the flow because the procedure is to reduce the flow when the meter is above the set point.
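A minimal sketch of such an if-then rule in Python, following the meter example above; the set-point value and the second rule are assumptions added for illustration.

SET_POINT = 75.0   # assumed set point for the meter

def rule_based_response(meter_reading):
    # Map the recognized sign (the meter state) directly to a stored action.
    if meter_reading > SET_POINT:        # sign: meter above the set point
        return "reduce flow"
    if meter_reading < SET_POINT - 10:   # assumed rule: meter well below the set point
        return "increase flow"
    return "no action"

print(rule_based_response(82.0))   # 'reduce flow'
print(rule_based_response(60.0))   # 'increase flow'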

When the situation is novel, decision makers do not have any rules stored from previous experience to call on. They therefore have to operate at the knowledge-based level, which is essentially analytical processing using conceptual information.

After the person assigns meaning to the cues and integrates them to identify what is happening, he or she processes the cues as symbols that relate to the goals and an action plan.

Thus, the same sensory input, a meter reading for example, can be interpreted as a signal, a sign, or a symbol.

The SRK levels can describe different levels of expertise.

A novice can work only at the analytical knowledge-based level or, if there are written procedures, at the rule-based level.

At an intermediate point of learning, people have some rules in their repertoire from training or experience. They work mostly at the rule-based level but must move to knowledge-based processing when encountering new situations.

The expert has a greatly expanded rule base and a skill base
as well. Thus, the expert tends to use skill-based behavior,
but moves between the three levels depending on the task.
When a novel situation arises, such as a system disturbance
not previously experienced, lack of familiarity with the
situation moves even the expert back to the analytical
knowledge-based level. Effective decision making depends
on all three levels of behavior.

Integrated model: Adaptive decision making.

The figure shows that not everyone needs to, or is able to,
achieve all three levels for every decision making
situation.

The level of situation awareness (SA) required for adequate performance depends on the degree to which the person depends on skill-, rule-, or knowledge-based behavior for a particular decision.

In many real-world decisions, a person may iterate many times through the steps we have described. With clear and diagnostic feedback, people can correct poor decisions.

From the previous figure, it is apparent that there are a variety of factors and cognitive limitations that strongly influence decision making.

These include the following factors:

• Inadequate cue integration. This can be due to environmental constraints (such as poor or unreliable data) or to cognitive factors that disrupt selective attention and biases that lead people to weigh cues inappropriately.

• Inadequate or poor-quality knowledge that the person holds in long-term memory relevant to a particular activity (possible hypotheses, courses of action, or likely outcomes). This limited knowledge results in systematic biases when people use poorly refined rules, such as those associated with the representativeness and availability heuristics.

• Tendency to adopt a single course of action and fail to consider the problem space broadly, even when time is available. Working-memory limits make it difficult to consider many alternatives simultaneously, and the tendency toward cognitive fixation leads people to neglect cues after identifying an initial hypothesis.

• Incorrect or incomplete mental model that leads to
inaccurate assessments of system state or the
effects of an action.

• Working-memory capacity and attentional limits that result in a very limited ability to consider all possible hypotheses simultaneously, along with their associated cues, the costs and benefits of outcomes, and so forth.

• Poor awareness of a changing situation and the need to adjust the application of a rule; for example, failing to adjust your car's speed when the road becomes icy.

• Inadequate metacognition leading to an inappropriate decision strategy for the situation; for example, persisting with a rule-based approach when a more precise analytic approach is needed.

• Poor feedback regarding past decisions, which makes error recovery or learning difficult.

