You have a great need for other people to like and admire you. You have a tendency to be critical of yourself. You have a great deal of unused
capacity which you have not turned to your advantage. While you have some personality weaknesses, you are generally able to compensate for
them. Your sexual adjustment has presented problems for you. Disciplined and self-controlled outside, you tend to be worrisome and insecure
inside. At times you have serious doubts as to whether you have made the right decision or done the right thing. You prefer a certain amount
of change and variety and become dissatisfied when hemmed in by restrictions and limitations. You pride yourself as an independent thinker
and do not accept others' statements without satisfactory proof. You have found it unwise to be too frank in revealing yourself to others. At
times you are extroverted, affable, sociable, while at other times you are introverted, wary, reserved. Some of your aspirations tend to be pretty
unrealistic. Security is one of your major goals in life.
On average, the rating was 4.26, but only after the ratings were turned in was it revealed that each student had received an identical profile that Forer had assembled from various horoscopes.[2] As can be seen from the profile, a number of the statements could apply equally well to anyone. Such statements later became known as Barnum statements, after P. T. Barnum.
In another study examining the Barnum effect, students took the MMPI personality assessment and researchers evaluated their responses. The researchers wrote accurate evaluations of the students' personalities, but gave the students both the accurate assessment and a fake one composed of vague generalities. When asked to choose which personality assessment was their own, more than half of the students (59%) chose the fake assessment over the real one.[3]
The Forer effect is more frequently referred to as "the Barnum effect". This term was coined in 1956 by American psychologist Paul Meehl in his essay "Wanted – A Good Cookbook". He relates the vague personality descriptions used in certain "pseudo-successful" psychological tests to those given by entertainer and businessman P. T. Barnum, who was a notorious hoaxer.[4][5]
Repeating the study
Two factors are important in ensuring that the study is replicable. The first is the content of the description offered, with specific emphasis on the ratio of positive to negative trait assessments. The other is that the subject trusts the person giving feedback to base that feedback on an honest, objective assessment.[6][7]
The effect is so consistent because the statements are so vague. People are able to read their own meaning into the statements they receive, so the statements become "personal" to them. The most effective statements are built around the phrase "at times", for example: "At times you feel very sure of yourself, while at other times you are not as confident." This phrase can apply to almost anybody, so each person can read their own meaning into it. Keeping statements vague in this manner ensures high rates of reliability when the study is repeated.[8]
Variables influencing the effect
Studies have shown that the Barnum effect is seemingly universal: it has been observed in people from many different cultures and geographic locations. In 2009, psychologists Paul Rogers and Janice Soule conducted a study comparing the tendencies of Westerners and of Chinese people to accept Barnum personality profiles. They were unable to find any significant differences.[9]
However, later studies have found that subjects give higher accuracy ratings if the following are true:
the subject believes that the analysis applies only to him or her, and thus applies their own meaning to the statements;[10]
the subject believes in the authority of the evaluator;
the analysis lists mainly positive traits.
See Dickson and Kelly for a review of the literature.[11]
Sex has also proven to play a role in how accurate the subject believes the description to be: women are more likely than men to believe that the vague statements are accurate.[12]
The method in which the Barnum personality profiles are presented can also affect the extent to which people accept them as their own. For instance, Barnum profiles that are more personalized, perhaps containing a specific person's name, are more likely to yield higher acceptability ratings than those that could be applied to anyone.[13]
Recent research
Belief in the paranormal
There is evidence that prior belief in the paranormal leads to greater influence of the effect.[14] Subjects who, for example, believe in the accuracy of horoscopes have a greater tendency to believe that the vague generalities of the response apply specifically to them. Other paranormal beliefs associated with schizotypy include belief in magical powers, spiritual happenings, or other supernatural influences. Studies on the relationship between schizotypy and susceptibility to the Barnum effect have shown strong correlations.[15] However, Rogers and Soule's 2009 study (see "Variables influencing the effect" above) also tested subjects' astrological beliefs, and both the Chinese and the Western skeptics were more likely to identify the ambiguity within the Barnum profiles. This suggests that individuals who do not believe in astrology may be less influenced by the effect.
Self-serving bias
Self-serving bias has been shown to cancel the Barnum effect. According to the self-serving bias, subjects accept positive attributes about themselves while rejecting negative ones. In one study, subjects were given one of three personality reports: one contained Barnum profiles with socially desirable personality traits, one contained profiles full of negative traits (also called "common faults"), and the last contained a mixture of the two. Subjects who received the socially desirable and mixed reports were far more likely to agree with the personality assessments than subjects who received negative reports, though there was no significant difference between the first two groups. In another study, subjects were given a list of traits instead of the usual fake personality assessment and were asked to rate how much they felt these traits applied to them. In line with the self-serving bias, the majority of subjects agreed with positive traits about themselves and disagreed with negative ones. The study concluded that the self-serving bias is powerful enough to cancel out the usual Barnum effect.[16]
In popular culture
A similar experiment was conducted during the second episode of the seventh season of the TV show Penn & Teller: Bullshit!. The episode was about astrology and also discussed confirmation bias. The results were similar to Forer's
study.
A version of the original experiment was performed by illusionist Derren Brown. He described the experiment in his
book Tricks of the Mind.
References
[1] Marks, David F. (2000). The Psychology of the Psychic (2nd ed.). Amherst, New York: Prometheus Books. p. 41. ISBN 1-57392-798-8.
[2] Forer, B. R. (1949). "The fallacy of personal validation: A classroom demonstration of gullibility". Journal of Abnormal and Social Psychology (American Psychological Association) 44 (1): 118–123. doi:10.1037/h0059240.
[3] Cline, Austin. "Flaws in Reasoning and Arguments: Barnum Effect & Gullibility" (http://atheism.about.com/od/logicalflawsinreasoning/a/barnum.htm). About.com. Retrieved 12 November 2012.
[4] Meehl, Paul (1956). "Wanted – A Good Cookbook" (http://psycnet.apa.org/index.cfm?fa=fulltext.journal&jcode=amp&vol=11&issue=6&page=263&format=PDF). p. 266.
[5] Dutton, Denis. "The Cold Reading Technique" (http://denisdutton.com/cold_reading.htm). Retrieved 28 November 2012.
[6] Claridge, G.; Clark, K.; Powney, E.; Hassan, E. (2008). "Schizotypy and the Barnum effect". Personality and Individual Differences 44 (2): 436–444.
[7] "Something for Everyone – The Barnum Effect" (http://thearticulateceo.typepad.com/my-blog/2012/01/something-for-everyone-the-barnum-effect.html). The Articulate CEO. Retrieved 25 November 2012.
[8] Krauss-Whitbourne, Susan. "When it comes to personality tests, skepticism is a good thing" (http://www.psychologytoday.com/blog/fulfillment-any-age/201008/when-it-comes-personality-tests-dose-skepticism-is-good-thing). Psychology Today. Retrieved 25 November 2012.
[9] Rogers, Paul; Soule, Janice (2009). "Cross-Cultural Differences in the Acceptance of Barnum Profiles Supposedly Derived From Western Versus Chinese Astrology" (http://jcc.sagepub.com/content/40/3/381.full.pdf+html). Journal of Cross-Cultural Psychology. Retrieved 11 November 2012.
[10] Krauss-Whitbourne, Susan. "When it comes to personality tests, skepticism is a good thing" (http://www.psychologytoday.com/blog/fulfillment-any-age/201008/when-it-comes-personality-tests-dose-skepticism-is-good-thing). Psychology Today. Retrieved 25 November 2012.
[11] Dickson, D. H.; Kelly, I. W. (1985). "The 'Barnum Effect' in Personality Assessment: A Review of the Literature". Psychological Reports (Missoula) 57 (1): 367–382. ISSN 0033-2941. OCLC 1318827.
[12] Piper-Terry, M. L.; Downey, J. L. (1998). "Sex, gullibility, and the Barnum effect". Psychological Reports 82: 571–575.
[13] Farley-Icard, Roberta Lynn (2007). Factors that Influence the Barnum Effect: Social Desirability, Base Rates and Personalization.
[14] "Balance-Today – Astrology" (http://balance-today.org/bias/bias_examples/astrology.html). Retrieved 28 November 2012.
[15] Claridge, G.; Clark, K.; Powney, E.; Hassan, E. (2008). "Schizotypy and the Barnum effect". Personality and Individual Differences 44 (2): 436–444.
[16] MacDonald, D. J.; Standing, L. G. (2002). "Does self-serving bias cancel the Barnum effect?". Social Behavior and Personality 30 (6): 625–630.
External links
The Fallacy of Personal Validation: A Classroom Demonstration of Gullibility, by Bertram R. Forer (full text) (http://www.scribd.com/doc/17378132/The-Fallacy-of-Personal-Validation-a-Classroom-Demonstration-of-Gullibility)
An autotest (http://homepage.bluewin.ch/Ysewijn/english_Barnum.htm)
Online test demonstrating the effect (http://forer.netopti.net/)
Skeptic's Dictionary: the Forer effect (http://www.skepdic.com/forer.html)
Framing effect (psychology)
The framing effect is an example of cognitive bias, in which people react differently to a particular choice depending on whether it is presented as a loss or as a gain.[1]
Experiments
Amos Tversky and Daniel Kahneman (1981) explored how different phrasing affected participants' responses to a
choice in a hypothetical life and death situation.
Participants were asked to choose between two treatments for 600 people affected by a deadly disease. Treatment A was predicted to result in 400 deaths, whereas treatment B had a 33% chance that no one would die but a 66% chance that everyone would die. This choice was then presented to participants either with positive framing (how many people would live) or with negative framing (how many people would die).
Framing  | Treatment A           | Treatment B
Positive | "Saves 200 lives"     | "A 33% chance of saving all 600 people, 66% possibility of saving no one."
Negative | "400 people will die" | "A 33% chance that no people will die, 66% probability that all 600 will die."
Treatment A was chosen by 72% of participants when it was presented with positive framing ("saves 200 lives")
dropping to only 22% when the same choice was presented with negative framing ("400 people will die").
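The point of the experiment is that the two treatments are numerically identical and only the framing differs. This can be checked with a few lines of arithmetic; the sketch below uses the exact 1/3 and 2/3 probabilities behind the rounded 33%/66% figures, and the variable names are illustrative:

```python
from fractions import Fraction

total = 600

# Treatment A: a certain outcome.
saved_a = 200               # positive frame: "saves 200 lives"
deaths_a = total - saved_a  # negative frame: "400 people will die"

# Treatment B: a gamble with the same expected outcome.
p_all_saved = Fraction(1, 3)
expected_saved_b = p_all_saved * total
expected_deaths_b = (1 - p_all_saved) * total

assert saved_a == expected_saved_b == 200
assert deaths_a == expected_deaths_b == 400
```

Because the expected numbers of lives saved and lost are the same under both treatments, any systematic preference reversal between the two frames reflects the framing itself, not the outcomes.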
This effect has been shown in other contexts:
93% of PhD students registered early when a penalty fee for late registration was emphasised, but only 67% did so when the same difference was presented as a discount for earlier registration.[2]
62% of people disagreed with allowing "public condemnation of democracy", but only 46% of people agreed that it was right to "forbid public condemnation of democracy" (Rugg, as cited in Plous, 1993).
More people will support an economic policy if the employment rate is emphasised than when the associated unemployment rate is highlighted.[3]
It has been argued that pretrial detention may increase a defendant's willingness to accept a plea bargain, since imprisonment, rather than freedom, will be his baseline, and pleading guilty will be viewed as an event that will cause his earlier release rather than as an event that will put him in prison.[4]
Applications
Frame analysis has been a significant part of scholarly work on topics like social movements and political opinion
formation in both sociology and political science.
Political polls will often be framed to encourage a response beneficial to the organisation that has commissioned the poll. The effect is so pronounced that it has been suggested that political polls may be discredited by such framing.[5]
Amelioration
One of the dangers of framing effects is that people are often offered options within the context of only one of the two frames.[6] Furthermore, framing effects may persist even when monetary incentives are provided.[7] Thus, individuals' decisions may be malleable through manipulation with the framing effect, and the consequences of framing effects may be inescapable. However, Druckman (2001b) argues that framing effects and their societal implications may be emphasized more than they should be. He demonstrated that the effects of framing can be reduced, or even eliminated, if ample, credible information is provided to people.[8]
Causes
Framing impacts people because individuals perceive losses and gains differently, as illustrated in prospect theory
(Tversky & Kahneman, 1981). The value function, founded in prospect theory, illustrates an important underlying
factor to the framing effect: a loss is more devastating than the equivalent gain is gratifying (Tversky & Kahneman,
1981). Thus, people tend to avoid risk when a positive frame is presented but seek risks when a negative frame is
presented (Tversky & Kahneman, 1981). Additionally, the value function takes on a sigmoid shape, which indicates
that gains for smaller values are psychologically larger than equivalent increases for larger quantities (Tversky &
Kahneman, 1981). Another important factor contributing to framing is certainty effect and pseudocertainty effect in
which a sure gain is favored to a probabilistic gain (Clark, 2009), but a probabilistic loss is preferred to a definite
loss.
[9]
For example, in Tversky and Kahneman's (1981) experiment, in the first problem, treatment A, which saved a
sure 200 people, was favored due to the certainty effect.
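The asymmetry of the value function can be sketched numerically. The functional form and the parameter estimates below (α ≈ 0.88, λ ≈ 2.25) come from Tversky and Kahneman's later (1992) cumulative prospect theory work, not the 1981 paper, and should be read as illustrative assumptions:

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex and
    steeper for losses (loss aversion). Parameters are illustrative."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# Loss aversion: a loss of 100 hurts more than a gain of 100 gratifies.
assert abs(value(-100)) > value(100)

# Diminishing sensitivity: the first 100 gained is worth more
# psychologically than the second 100 gained.
assert value(100) > value(200) - value(100)
```

The two assertions correspond directly to the two properties described above: the steeper loss branch and the concave gain branch of the S-shaped curve.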
According to fuzzy-trace theory, this phenomenon is attributed to categorical gist attitudes, a tendency to generalize percentages into categories such as high, some, or none. In the Asian disease problem, one is more likely to choose the sure option in the gain frame to avoid the risk of no one being saved, and more likely to choose the risky option in the loss frame. This is because we generalize the options into "some people will die" (sure option) versus "some people will die or no one will die" (risky option).[10]
References
Druckman, J. (2001a). "Evaluating framing effects". Journal of Economic Psychology 22: 96–101.
Druckman, J. (2001b). "Using credible advice to overcome framing effects". Journal of Law, Economics, and Organization 17: 62–82. doi:10.1093/jleo/17.1.62.
Clark, D. (2009). Framing Effects Exposed. Pearson Education.
Gächter, S.; Orzen, H.; Renner, E.; Stamer, C. (in press). "Are experimental economists prone to framing effects? A natural field experiment". Journal of Economic Behavior & Organization.
Plous, Scott (1993). The Psychology of Judgment and Decision Making. McGraw-Hill. ISBN 978-0-07-050477-6.
Tversky, Amos; Kahneman, Daniel (1981). "The framing of decisions and the psychology of choice". Science 211 (4481): 453–458. doi:10.1126/science.7455683. PMID 7455683.
Kühberger, Anton; Tanner, Carmen (2010). "Risky choice framing: Task versions and a comparison of prospect theory and fuzzy-trace theory". Journal of Behavioral Decision Making 23 (3): 314–329. doi:10.1002/bdm.656.
References
[1] Plous, 1993
[2] Gächter, Orzen, Renner, & Stamer, in press
[3] Druckman, 2001b
[4] Stephanos Bibas (June 2004). "Plea Bargaining outside the Shadow of Trial". Harvard Law Review 117: 2463–2547.
[5] Druckman, 2001b
[6] Druckman, 2001a
[7] Tversky & Kahneman, 1981
[8] Druckman, 2001b
[9] Tversky & Kahneman, 1981
[10] Kühberger, 2010
Gambler's fallacy
The gambler's fallacy, also known as the Monte Carlo fallacy (because its most famous example happened in a Monte Carlo casino in 1913)[1][2] and also referred to as the fallacy of the maturity of chances, is the belief that if deviations from expected behaviour are observed in repeated independent trials of some random process, future deviations in the opposite direction are then more likely.
An example: coin-tossing
[Figure: Simulation of coin tosses. Each frame, a coin that is red on one side and blue on the other is flipped, and the result is added as a colored dot in the corresponding column. As the pie chart shows, the proportion of red versus blue approaches 50–50 (the law of large numbers), but the difference between red and blue does not systematically decrease to zero.]
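The behaviour shown in the figure can be reproduced with a short simulation. The seed and sample sizes below are arbitrary choices; the point is that the proportion converges to 1/2 while the absolute gap between the two counts does not shrink toward zero:

```python
import random

random.seed(42)

# 1 = red, 0 = blue; one million simulated coin flips.
flips = [random.randint(0, 1) for _ in range(1_000_000)]

results = {}
for n in (100, 10_000, 1_000_000):
    reds = sum(flips[:n])
    results[n] = {
        "proportion_red": reds / n,          # tends toward 0.5
        "red_minus_blue": abs(2 * reds - n), # does NOT tend toward 0
    }

for n, r in results.items():
    print(n, round(r["proportion_red"], 4), r["red_minus_blue"])
```

Typically the absolute difference grows roughly like the square root of the number of flips even as the proportion homes in on 50%, which is exactly the distinction the gambler's fallacy misses.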
The gambler's fallacy can be illustrated by considering the repeated toss of a fair coin. With a fair coin, the outcomes in different tosses are statistically independent and the probability of getting heads on a single toss is exactly 1/2 (one in two). It follows that the probability of getting two heads in two tosses is 1/4 (one in four) and the probability of getting three heads in three tosses is 1/8 (one in eight). In general, if we let A_i be the event that toss i of a fair coin comes up heads, then:

Pr(A_1 ∩ A_2 ∩ ⋯ ∩ A_n) = Pr(A_1) Pr(A_2) ⋯ Pr(A_n) = 1/2^n.
Now suppose that we have just tossed four heads in a row, so that if the next coin toss were also to come up heads, it would complete a run of five successive heads. Since the probability of a run of five successive heads is only 1/32 (one in thirty-two), a believer in the gambler's fallacy might believe that this next flip is less likely to be heads than to be tails. However, this is not correct, and is a manifestation of the gambler's fallacy; the event of 5 heads in a row and the event of "first 4 heads, then a tail" are equally likely, each having probability 1/32. Given that the first four tosses turn up heads, the probability that the next toss is a head is in fact:

Pr(A_5 | A_1 ∩ A_2 ∩ A_3 ∩ A_4) = Pr(A_5) = 1/2.
While a run of five heads has a probability of only 1/32 = 0.03125, that is only true before the coin is first tossed. After the first four tosses, the results are no longer unknown, so their probabilities are 1. Reasoning that it is more likely that the next toss will be a tail than a head due to the past tosses, that a run of luck in the past somehow influences the odds in the future, is the fallacy.
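These probabilities can be confirmed by brute-force enumeration of all 32 equally likely five-toss sequences; a minimal sketch:

```python
from itertools import product

# All 2^5 = 32 equally likely sequences of five fair-coin tosses.
sequences = list(product("HT", repeat=5))
assert len(sequences) == 32

# "Five heads" and "four heads then a tail" each occur in exactly
# one sequence, so both have probability 1/32.
p_hhhhh = sum(s == ("H", "H", "H", "H", "H") for s in sequences) / len(sequences)
p_hhhht = sum(s == ("H", "H", "H", "H", "T") for s in sequences) / len(sequences)
assert p_hhhhh == p_hhhht == 1 / 32

# Conditioning on the first four tosses being heads, the fifth toss
# is heads in exactly half of the remaining sequences.
given_four_heads = [s for s in sequences if s[:4] == ("H", "H", "H", "H")]
p_next_head = sum(s[4] == "H" for s in given_four_heads) / len(given_four_heads)
assert p_next_head == 0.5
```

The enumeration makes the key point concrete: the run of heads shrinks the set of possible worlds, but within that set, heads and tails on the next toss remain equally represented.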
Explaining why the probability is 1/2 for a fair coin
We can see from the above that, if one flips a fair coin 21 times, then the probability of 21 heads is 1 in 2,097,152. However, the probability of flipping a head after having already flipped 20 heads in a row is simply 1/2. This is an application of Bayes' theorem.
This can also be seen without knowing that 20 heads have occurred for certain (without applying Bayes' theorem). Consider the following two probabilities, assuming a fair coin:

probability of 20 heads, then 1 tail = 0.5^20 × 0.5 = 0.5^21
probability of 20 heads, then 1 head = 0.5^20 × 0.5 = 0.5^21

The probability of getting 20 heads then 1 tail, and the probability of getting 20 heads then another head, are both 1 in 2,097,152. Therefore, it is equally likely to flip 21 heads as it is to flip 20 heads and then 1 tail when flipping a fair coin 21 times. Furthermore, these two probabilities are equally as likely as any other 21-flip combination that can be obtained (there are 2,097,152 in total); all 21-flip combinations have probability 0.5^21, or 1 in 2,097,152. From these observations, there is no reason to assume at any point that a change of luck is warranted based on prior trials (flips), because every outcome observed will always have been as likely as the other outcomes that were not observed for that particular trial, given a fair coin. Therefore, just as Bayes' theorem shows, the result of each trial comes down to the base probability of the fair coin: 1/2.
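The arithmetic above can be checked directly:

```python
# Probability of any one specific 21-flip sequence of a fair coin.
p_sequence = 0.5 ** 21
assert round(1 / p_sequence) == 2_097_152  # 1 in 2,097,152

# "20 heads then a tail" and "20 heads then a head" are specific
# sequences, so each has exactly that same probability.
p_20h_then_t = 0.5 ** 20 * 0.5
p_20h_then_h = 0.5 ** 20 * 0.5
assert p_20h_then_t == p_20h_then_h == p_sequence

# Conditional probability of a head given 20 heads already observed:
# P(21 heads) / P(first 20 heads) = 1/2.
p_head_after_20 = p_20h_then_h / (0.5 ** 20)
assert p_head_after_20 == 0.5
```

Since powers of one half are exact in binary floating point, these comparisons hold exactly rather than approximately.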
Other examples
There is another way to emphasize the fallacy. As already mentioned, the fallacy is built on the notion that previous failures indicate an increased probability of success on subsequent attempts. This is, in fact, the inverse of what actually happens, even for a fair chance of a successful event, given a set number of iterations. Assume a fair 16-sided die, where a win is defined as rolling a 1. Assume a player is given 16 rolls to obtain at least one win (1 − p(rolling no ones)). The low winning odds are just to make the change in probability more noticeable. The probability of having at least one win in the 16 rolls is:

1 − (15/16)^16 ≈ 64.4%

However, assume now that the first roll was a loss (a 93.75% chance of that, i.e. 15/16). The player now only has 15 rolls left and, according to the fallacy, should have a higher chance of winning since one loss has occurred. His chances of having at least one win are now:

1 − (15/16)^15 ≈ 62.0%

Simply by losing one toss the player's probability of winning dropped by about 2 percentage points. By the time this reaches 5 losses (11 rolls left), his probability of winning on one of the remaining rolls will have dropped to approximately 50%. The player's odds for at least one win in those 16 rolls have not increased given a series of losses; his odds have decreased because he has fewer iterations left to win. In other words, the previous losses in no way contribute to the odds of the remaining attempts, but there are fewer remaining attempts to gain a win, which results in a lower probability of obtaining it.
The player becomes more likely to lose in a set number of iterations as he fails to win, and eventually his probability
of winning will again equal the probability of winning a single toss, when only one toss is left: 6.25% in this
instance.
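These figures all follow from the complement rule 1 − (15/16)^n, where n is the number of rolls remaining; a short sketch verifying them:

```python
def p_at_least_one_win(rolls, sides=16):
    """Probability of rolling at least one 1 in `rolls` rolls of a
    fair `sides`-sided die: the complement of rolling no 1s at all."""
    return 1 - (1 - 1 / sides) ** rolls

assert round(p_at_least_one_win(16), 3) == 0.644  # ~64.4% with all 16 rolls
assert round(p_at_least_one_win(15), 3) == 0.620  # ~62.0% after one loss
assert round(p_at_least_one_win(11), 2) == 0.51   # ~50% after five losses
assert p_at_least_one_win(1) == 1 / 16            # 6.25% on the final roll
```

Each lost roll lowers the remaining probability of a win, which is the opposite of what the fallacy predicts.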
Some lottery players will choose the same numbers every time, or intentionally change their numbers, but both are equally likely to win any individual lottery draw. Copying the numbers that won the previous lottery draw gives an equal probability, although a rational gambler might attempt to predict other players' choices and then deliberately avoid these numbers. Low numbers (below 31 and especially below 12) are popular because people play birthdays as their so-called lucky numbers; hence a win in which these numbers are over-represented is more likely to result in a shared payout.
A joke told among mathematicians demonstrates the nature of the fallacy. When flying on an aircraft, a man decides
to always bring a bomb with him. "The chances of an aircraft having a bomb on it are very small," he reasons, "and
certainly the chances of having two are almost none!" A similar example is in the book The World According to
Garp when the hero Garp decides to buy a house a moment after a small plane crashes into it, reasoning that the
chances of another plane hitting the house have just dropped to zero.
Reverse fallacy
The reversal is also a fallacy (not to be confused with the inverse gambler's fallacy) in which a gambler may instead
decide that tails are more likely out of some mystical preconception that fate has thus far allowed for consistent
results of tails. Believing the odds to favor tails, the gambler sees no reason to change to heads. Again, the fallacy is
the belief that the "universe" somehow carries a memory of past results which tend to favor or disfavor future
outcomes.
Caveats
In most illustrations of the gambler's fallacy and the reversed gambler's fallacy, the trial (e.g. flipping a coin) is
assumed to be fair. In practice, this assumption may not hold.
For example, if one flips a fair coin 21 times, then the probability of 21 heads is 1 in 2,097,152 (as above). If the coin is fair, then the probability of the next flip being heads is 1/2. However, because the odds of flipping 21 heads in a row are so slim, it may well be that the coin is somehow biased towards landing on heads, or that it is being controlled by hidden magnets, or similar.[3] In this case, the smart bet is "heads", because the empirical evidence (21 "heads" in a row) suggests that the coin is likely to be biased toward "heads", contradicting the general assumption that the coin is fair.
Childbirth
Instances of the gambler's fallacy applied to childbirth can be traced all the way back to 1796, in Pierre-Simon Laplace's A Philosophical Essay on Probabilities. Laplace wrote of the ways men calculated their probability of having sons: "I have seen men, ardently desirous of having a son, who could learn only with anxiety of the births of boys in the month when they expected to become fathers. Imagining that the ratio of these births to those of girls ought to be the same at the end of each month, they judged that the boys already born would render more probable the births next of girls." In short, the expectant fathers feared that if more sons were born in the surrounding community, then they themselves would be more likely to have a daughter.[4]
Some expectant parents believe that, after having multiple children of the same sex, they are "due" to have a child of
the opposite sex. While the Trivers–Willard hypothesis predicts that birth sex is dependent on living conditions (i.e.
more male children are born in "good" living conditions, while more female children are born in poorer living
conditions), the probability of having a child of either gender is still regarded as 50/50.
Monte Carlo Casino
The most famous example happened in a game of roulette at the Monte Carlo Casino in the summer of 1913, when the ball fell in black 26 times in a row, an extremely uncommon occurrence (though no more nor less common than any of the other 67,108,863 sequences of 26 red or black, neglecting the 0 slot on the wheel). Gamblers lost millions of francs betting against black after the streak began, reasoning incorrectly that the streak was causing an "imbalance" in the randomness of the wheel, and that it had to be followed by a long streak of red.[1]
Non-examples of the fallacy
There are many scenarios where the gambler's fallacy might superficially seem to apply but actually does not. When the probability of different events is not independent, the probability of future events can change based on the outcome of past events (see statistical permutation). Formally, the system is said to have memory. An example of this is cards drawn without replacement. If an ace is drawn from a deck and not reinserted, the next draw is less likely to be an ace and more likely to be of another rank. Assuming the ace was the first card drawn and that there are no jokers, the odds of drawing another ace have decreased from 4/52 (7.69%) to 3/51 (5.88%), while the odds for each other rank have increased from 4/52 (7.69%) to 4/51 (7.84%). This type of effect is what allows card-counting schemes to work (for example, in the game of blackjack).
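The odds quoted above can be verified with exact rational arithmetic:

```python
from fractions import Fraction

# Drawing without replacement: removing an ace changes the composition
# of the remaining 51-card deck, so the system has "memory".
p_first_ace = Fraction(4, 52)   # any of 4 aces among 52 cards
p_second_ace = Fraction(3, 51)  # one ace (and one card) already gone
p_other_rank = Fraction(4, 51)  # all four cards of every other rank remain

assert round(float(p_first_ace) * 100, 2) == 7.69
assert round(float(p_second_ace) * 100, 2) == 5.88
assert round(float(p_other_rank) * 100, 2) == 7.84
```

Contrast this with the coin: here the probabilities genuinely shift after each draw because the underlying population changes, which is what the gambler's fallacy wrongly assumes happens with independent trials.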
Meanwhile, the reversed gambler's fallacy may appear to apply in the story of Joseph Jagger, who hired clerks to
record the results of roulette wheels in Monte Carlo. He discovered that one wheel favored nine numbers and won
large sums of money until the casino started rebalancing the roulette wheels daily. In this situation, the observation
of the wheel's behavior provided information about the physical properties of the wheel rather than its "probability"
in some abstract sense, a concept which is the basis of both the gambler's fallacy and its reversal. Even a biased
wheel's past results will not affect future results, but the results can provide information about what sort of results the
wheel tends to produce. However, if it is known for certain that the wheel is completely fair, then past results provide
no information about future ones.
The outcome of future events can be affected if external factors are allowed to change the probability of the events
(e.g., changes in the rules of a game affecting a sports team's performance levels). Additionally, an inexperienced
player's success may decrease after opposing teams discover his weaknesses and exploit them. The player must then
attempt to compensate and randomize his strategy. (See Game theory).
Many riddles trick the reader into believing that they are an example of the gambler's fallacy, such as the Monty Hall
problem.
Non-example: unknown probability of event
When the probability of repeated events is not known, outcomes may not be equally probable. In the case of coin tossing, as a run of heads gets longer and longer, the likelihood that the coin is biased towards heads increases. If one flips a coin 21 times in a row and obtains 21 heads, one might rationally conclude a high probability of bias towards heads, and hence conclude that future flips of this coin are also highly likely to be heads. In fact, Bayesian inference can be used to show that when the long-run proportions of the different outcomes are unknown but exchangeable (meaning that the random process from which they are generated may be biased, but is equally likely to be biased in any direction), previous observations demonstrate the likely direction of the bias, such that the outcome which has occurred the most in the observed data is the most likely to occur again.[5]
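One standard way to make this Bayesian argument concrete (an illustrative choice, not the specific model cited in the text) is Laplace's rule of succession, which assumes a uniform prior on the coin's unknown bias:

```python
from fractions import Fraction

def posterior_predictive_heads(heads, flips):
    """Laplace's rule of succession: with a uniform (Beta(1,1)) prior
    on the coin's unknown bias, the probability that the next toss is
    heads given `heads` heads in `flips` flips is (heads+1)/(flips+2)."""
    return Fraction(heads + 1, flips + 2)

# After 21 heads in 21 flips, the Bayesian answer leans strongly
# toward another head, unlike the fair-coin answer of 1/2.
assert posterior_predictive_heads(21, 21) == Fraction(22, 23)
assert float(posterior_predictive_heads(21, 21)) > 0.95

# With balanced evidence, the prediction returns to 1/2.
assert posterior_predictive_heads(10, 20) == Fraction(1, 2)
```

The outcome seen most often in the data is predicted to be the most likely to recur, exactly as the paragraph above states.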
Psychology behind the fallacy
Origins
The gambler's fallacy arises out of a belief in the law of small numbers, the erroneous belief that small samples must be representative of the larger population. According to the fallacy, "streaks" must eventually even out in order to be representative.[6] Amos Tversky and Daniel Kahneman first proposed that the gambler's fallacy is a cognitive bias produced by a psychological heuristic called the representativeness heuristic, which states that people evaluate the probability of a certain event by assessing how similar it is to events they have experienced before, and how similar the events surrounding those two processes are.[7][8]
According to this view, "after observing a long run of red on the roulette wheel, for example, most people erroneously believe that black will result in a more representative sequence than the occurrence of an additional red",[9] so people expect that a short run of random outcomes should share the properties of a longer run, specifically in that deviations from average should balance out. When people are asked to make up a random-looking sequence of coin tosses, they tend to make sequences where the proportion of heads to tails stays closer to 0.5 in any short segment than would be predicted by chance (insensitivity to sample size).[10] Kahneman and Tversky interpret this to mean that people believe short sequences of random events should be representative of longer ones.[11] The representativeness heuristic is also cited behind the related phenomenon of the clustering illusion, according to which people see streaks of random events as non-random, when such streaks are actually much more likely to occur in small samples than people expect.[12]
The gambler's fallacy can also be attributed to the mistaken belief that gambling (or even chance itself) is a fair process that can correct itself in the event of streaks, otherwise known as the just-world hypothesis.[13] Other researchers believe that individuals with an internal locus of control, that is, people who believe that gambling outcomes are the result of their own skill, are more susceptible to the gambler's fallacy because they reject the idea that chance could overcome skill or talent.[14]
Variations of the gambler's fallacy
Some researchers believe that there are actually two types of gambler's fallacy: Type I and Type II. Type I is the "classic" gambler's fallacy, in which individuals believe that a certain outcome is "due" after a long streak of another outcome. The Type II gambler's fallacy, as defined by Gideon Keren and Charles Lewis, occurs when a gambler underestimates how many observations are needed to detect a favorable outcome (such as watching a roulette wheel for a length of time and then betting on the numbers that appear most often). Detecting a bias that will lead to a favorable outcome takes an impractically large number of observations and is very difficult, if not impossible, to do; people therefore fall prey to the Type II gambler's fallacy.[15] The two types differ in that Type I wrongly assumes that gambling conditions are fair and perfect, while Type II assumes that the conditions are biased, and that this bias can be detected after a certain amount of time.
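How impractical the required observation really is can be estimated with the standard normal-approximation sample-size formula for a single proportion. The sketch below assumes a hypothetical worn pocket on an American wheel that hits with probability 1/36 instead of the fair 1/38; both the bias and the test parameters are illustrative, not from the cited study:

```python
from math import sqrt
from statistics import NormalDist

def spins_to_detect_bias(p0: float, p1: float,
                         alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate number of roulette spins needed to detect that one
    pocket hits with probability p1 rather than the fair p0, using the
    normal-approximation sample-size formula for one proportion."""
    z_a = NormalDist().inv_cdf(1 - alpha)  # one-sided significance level
    z_b = NormalDist().inv_cdf(power)      # desired power
    num = (z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))) ** 2
    return int(num / (p1 - p0) ** 2) + 1

# Fair American wheel: each pocket hits with p0 = 1/38. Suppose a worn
# pocket actually hits with p1 = 1/36 (an assumed, illustrative bias).
print(spins_to_detect_bias(1 / 38, 1 / 36))  # tens of thousands of spins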
Another variety, known as the retrospective gambler's fallacy, occurs when individuals judge that a seemingly rare event must come from a longer sequence than a more common event does. For example, people believe that an imaginary sequence of die rolls is more than three times as long when a set of three 6's is observed than when only two 6's are observed. This effect can be observed in isolated instances, or even sequentially. In a real-world example, when a teenager becomes pregnant after having unprotected sex, people assume that she has been engaging in unprotected sex for longer than someone who has been engaging in unprotected sex but is not pregnant.[16]
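For reference, the arithmetic behind such run lengths can be made exact: the mean number of fair die rolls until a run of k consecutive sixes first appears satisfies the recurrence E_k = 6·E_(k-1) + 6 with E_0 = 0. A short sketch (illustrative background computation, not part of the cited study):

```python
def expected_rolls_for_run(k: int, sides: int = 6) -> int:
    """Expected number of fair die rolls until a run of k consecutive
    target faces (e.g. k sixes) first appears.
    Satisfies E_k = sides * E_{k-1} + sides, with E_0 = 0."""
    e = 0
    for _ in range(k):
        e = sides * e + sides
    return e

print(expected_rolls_for_run(2))  # 42
print(expected_rolls_for_run(3))  # 258
```

On average a first run of three 6's appears after 258 rolls versus 42 for two 6's, even though the dice themselves remain memoryless throughout.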
Relationship to hot-hand fallacy
Another psychological perspective states that the gambler's fallacy can be seen as the counterpart to basketball's hot-hand fallacy. In the hot-hand fallacy, people tend to predict the same outcome as the last event (positive recency) - for example, that a high scorer will continue to score. In the gambler's fallacy, however, people predict the opposite outcome of the last event (negative recency) - that, for example, since the roulette wheel has landed on black the last six times, it is due to land on red on the next spin. Ayton and Fischer have theorized that people display positive recency for the hot-hand fallacy because the fallacy deals with human performance, and people do not believe that an inanimate object can become "hot."[17] Human performance is not perceived as random, and people are more likely to continue streaks when they believe that the process generating the results is nonrandom.[6] Usually, when a person exhibits the gambler's fallacy, they are more likely to exhibit the hot-hand fallacy as well, suggesting that one construct is responsible for the two fallacies.[18]
The difference between the two fallacies is also seen in economic decision-making. A study by Huber, Kirchler, and Stöckl (2010) examined how the hot hand and the gambler's fallacy are exhibited in the financial market. The researchers gave their participants a choice: they could either bet on the outcome of a series of coin tosses, use an "expert" opinion to sway their decision, or choose a risk-free alternative instead for a smaller financial reward. Participants turned to the "expert" opinion to make their decision 24% of the time based on their past experience of success, which exemplifies the hot hand: if the expert had been correct, 78% of the participants chose the expert's opinion again, as opposed to 57% doing so when the expert had been wrong. The participants also exhibited the gambler's fallacy, with their selection of either heads or tails decreasing after noticing a streak of that outcome. This experiment helped bolster Ayton and Fischer's theory that people put more faith in human performance than they do in seemingly random processes.[19]
Neurophysiology
While the representativeness heuristic and other cognitive biases are the most commonly cited causes of the gambler's fallacy, research suggests that there may be a neurological component as well. Functional magnetic resonance imaging has revealed that, after losing a bet or gamble (a "riskloss"), the frontoparietal network of the brain is activated, resulting in more risk-taking behavior. In contrast, there is decreased activity in the amygdala, caudate, and ventral striatum after a riskloss. Activation in the amygdala is negatively correlated with the gambler's fallacy - the more activity exhibited in the amygdala, the less likely an individual is to fall prey to the gambler's fallacy. These results suggest that the gambler's fallacy relies more on the prefrontal cortex (responsible for executive, goal-directed processes) and less on the brain areas that control affective decision-making.
The desire to continue gambling or betting is controlled by the striatum, which supports a choice-outcome contingency learning method. The striatum processes the errors in prediction, and behavior changes accordingly. After a win, the positive behavior is reinforced; after a loss, the behavior is conditioned to be avoided. In individuals exhibiting the gambler's fallacy, this choice-outcome contingency method is impaired, and they continue to take risks after a series of losses.[20]
Possible solutions
The gambler's fallacy is a deep-seated cognitive bias and is therefore very difficult to eliminate. For the most part, educating individuals about the nature of randomness has not proven effective in reducing or eliminating any manifestation of the fallacy. Participants in an early study by Beach and Swensson (1967) were shown a shuffled deck of index cards with shapes on them and were told to guess which shape would come next in a sequence. The experimental group of participants was informed about the nature and existence of the gambler's fallacy and was explicitly instructed not to rely on "run dependency" to make their guesses. The control group was not given this information. Even so, the response styles of the two groups were similar, indicating that the experimental group still based its choices on the length of the run sequence. Clearly, instructing individuals about randomness is not sufficient to lessen the gambler's fallacy.[21]
It does appear, however, that an individual's susceptibility to the gambler's fallacy decreases with age. Fischbein and Schnarch (1997) administered a questionnaire to five groups: students in grades 5, 7, 9, and 11, and college students specializing in teaching mathematics. None of the participants had received any prior education regarding probability. The question was, "Ronni flipped a coin three times and in all cases heads came up. Ronni intends to flip the coin again. What is the chance of getting heads the fourth time?" The results indicated that the older the students were, the less likely they were to answer "smaller than the chance of getting tails," which would indicate a negative recency effect. 35% of the 5th graders, 35% of the 7th graders, and 20% of the 9th graders exhibited the negative recency effect; only 10% of the 11th graders answered this way, and none of the college students did. Fischbein and Schnarch therefore theorized that an individual's tendency to rely on the representativeness heuristic and other cognitive biases can be overcome with age.
[22]
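The normatively correct answer to the questionnaire item - that the fourth flip is still 50/50 regardless of the first three - can be checked with a simple Monte Carlo sketch (illustrative code, not from the study):

```python
import random

# Among simulated 4-flip sequences that start with three heads, the
# fourth flip should still come up heads about half the time.
rng = random.Random(42)  # fixed seed for reproducibility
heads_after_hhh = 0
trials_with_hhh = 0
for _ in range(200_000):
    flips = [rng.random() < 0.5 for _ in range(4)]
    if flips[0] and flips[1] and flips[2]:  # first three flips were heads
        trials_with_hhh += 1
        heads_after_hhh += flips[3]
print(heads_after_hhh / trials_with_hhh)  # ≈ 0.5: the coin has no memory
```

The conditional frequency stays at about 0.5, matching the independence of the flips; any answer other than "equal to the chance of tails" reflects a recency effect rather than the mathematics.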
Another, arguably more proactive, solution comes from Roney and Trick, Gestalt psychologists who suggest that the fallacy may be eliminated as a result of grouping. When a future event (e.g., a coin toss) is described as part of a sequence, no matter how arbitrarily, a person will automatically consider the event in relation to past events, resulting in the gambler's fallacy. When a person considers every event as independent, however, the fallacy can be greatly reduced.[23]
In their experiment, Roney and Trick told participants that they were betting on either two blocks of six coin tosses or two blocks of seven coin tosses. The fourth, fifth, and sixth tosses all had the same outcome: either three heads or three tails. The seventh toss was grouped either with the end of one block or with the beginning of the next block. Participants exhibited the strongest gambler's fallacy when the seventh trial was part of the first block, directly after the sequence of three heads or tails; when the seventh trial was grouped with the second block (and was therefore perceived as not being part of a streak), the gambler's fallacy did not occur. The researchers also pointed out how insidious the fallacy can be: the participants who did not show the gambler's fallacy were less confident in their bets and bet fewer times than the participants who picked "with" the gambler's fallacy.
Roney and Trick argue that a solution to the gambler's fallacy, instead of teaching individuals about the nature of randomness, could be training people to treat each event as a beginning and not a continuation of previous events. This would prevent people from gambling when they are losing, in the vain hope that their chances of winning are due to increase.
References
[1] Lehrer, Jonah (2009). How We Decide. New York: Houghton Mifflin Harcourt. p. 66. ISBN 978-0-618-62011-1.
[2] "Fallacy Files" blog (http://www.fallacyfiles.org/gamblers.html): What happened at Monte Carlo in 1913.
[3] Martin Gardner, Entertaining Mathematical Puzzles, Dover Publications, 69-70.
[4] Barron, G. and Leider, S. (2010). The role of experience in the gambler's fallacy. Journal of Behavioral Decision Making, 23, 117-129.
[5] O'Neill, B. and Puza, B.D. (2004). Dice have no memories but I do: A defence of the reverse gambler's belief (http://cbe.anu.edu.au/research/papers/pdf/STAT0004WP.pdf). Reprinted in abridged form as O'Neill, B. and Puza, B.D. (2005). In defence of the reverse gambler's belief. The Mathematical Scientist 30(1), pp. 13-16.
[6] Burns, B.D. and Corpus, B. (2004). Randomness and inductions from streaks: "Gambler's fallacy" versus "hot hand." Psychonomic Bulletin and Review, 11, 179-184.
[7] Tversky, Amos; Kahneman, Daniel (1974). "Judgment under uncertainty: Heuristics and biases". Science 185 (4157): 1124-1131. doi:10.1126/science.185.4157.1124. PMID 17835457.
[8] Tversky, Amos; Kahneman, Daniel (1971). "Belief in the law of small numbers". Psychological Bulletin 76 (2): 105-110. doi:10.1037/h0031322.
[9] Tversky & Kahneman, 1974.
[10] Tune, G.S. (1964). "Response preferences: A review of some relevant literature". Psychological Bulletin 61 (4): 286-302. doi:10.1037/h0048618. PMID 14140335.
[11] Tversky & Kahneman, 1971.
[12] Gilovich, Thomas (1991). How we know what isn't so. New York: The Free Press. pp. 16-19. ISBN 0-02-911706-2.
[13] Rogers, P. (1998). The cognitive psychology of lottery gambling: A theoretical review. Journal of Gambling Studies, 14, 111-134.
[14] Sundali, J. and Croson, R. (2006). Biases in casino betting: The hot hand and the gambler's fallacy. Judgment and Decision Making, 1, 1-12.
[15] Keren, G. and Lewis, C. (1994). The two fallacies of gamblers: Type I and Type II. Organizational Behavior and Human Decision Processes, 60, 75-89.
[16] Oppenheimer, D.M. and Monin, B. (2009). The retrospective gambler's fallacy: Unlikely events, constructing the past, and multiple universes. Judgment and Decision Making, 4, 326-334.
[17] Ayton, P.; Fischer, I. (2004). "The hot hand fallacy and the gambler's fallacy: Two faces of subjective randomness?". Memory and Cognition 32: 1369-1378.
[18] Sundali, J.; Croson, R. (2006). "Biases in casino betting: The hot hand and the gambler's fallacy". Judgment and Decision Making 1: 1-12.
[19] Huber, J.; Kirchler, M.; Stöckl, T. (2010). "The hot hand belief and the gambler's fallacy in investment decisions under risk". Theory and Decision 68: 445-462.
[20] Xue, G.; Lu, Z.; Levin, I.P.; Bechara, A. (2011). "An fMRI study of risk-taking following wins and losses: Implications for the gambler's fallacy". Human Brain Mapping 32: 271-281.
[21] Beach, L.R.; Swensson, R.G. (1967). "Instructions about randomness and run dependency in two-choice learning". Journal of Experimental Psychology 75: 279-282.
[22] Fischbein, E.; Schnarch, D. (1997). "The evolution with age of probabilistic, intuitively based misconceptions". Journal for Research in Mathematics Education 28: 96-105.
[23] Roney, C.J.; Trick, L.M. (2003). "Grouping and gambling: A gestalt approach to understanding the gambler's fallacy". Canadian Journal of Experimental Psychology 57: 69-75.
Hindsight bias
Hindsight bias, also known as the knew-it-all-along effect or creeping determinism, is the inclination to see events that have already occurred as being more predictable than they were before they took place.[1] It is a multifaceted phenomenon that can affect different stages of designs, processes, contexts, and situations.[2] Hindsight bias may cause memory distortion, where the recollection and reconstruction of content can lead to false theoretical outcomes. It has been suggested that the effect can cause extreme methodological problems while trying to analyze, understand, and interpret results in experimental studies. A basic example of the hindsight bias is when, after viewing the outcome of a potentially unforeseeable event, a person believes he or she "knew it all along." Such examples are present in the writings of historians describing outcomes of battles, physicians recalling clinical trials, and judicial systems trying to attribute responsibility and predictability of accidents.[3]
History
The hindsight bias, although not yet named as such, was not a new concept when it emerged in psychological research in the 1970s; it had been indirectly described numerous times by historians, philosophers, and physicians.[3] In 1973 Baruch Fischhoff attended a seminar where Paul E. Meehl stated an observation that clinicians often overestimate their ability to have foreseen the outcome of a particular case, as they claim to have known it all along.[4] Fischhoff, a psychology graduate student at the time, saw an opportunity in psychological research to explain these observations.[4]
In the early seventies, the investigation of heuristics and biases was a large area of study in psychology, led by Amos Tversky and Daniel Kahneman.[4] Two heuristics developed by Tversky and Kahneman were of immediate importance in the development of the hindsight bias: the availability heuristic and the representativeness heuristic.[5] In an elaboration of these heuristics, Beyth and Fischhoff devised the first experiment directly testing the hindsight bias.[6] They asked participants to judge the likelihood of several outcomes of U.S. President Richard Nixon's upcoming visit to Peking (now romanized as Beijing) and Moscow. Some time after President Nixon's return, participants were asked to recall, or reconstruct, the probabilities they had assigned to each possible outcome, and their perceived likelihoods were greater (overestimated) for events that actually had occurred.[6] This study is frequently referred to in definitions of the hindsight bias, and the title of the paper, "I knew it would happen," may have contributed to the hindsight bias being interchangeable with the term "knew-it-all-along hypothesis."
In 1975 Fischhoff developed another method for investigating the hindsight bias, which at the time was referred to as the "creeping determinism hypothesis".[3] This method involves giving participants a short story with four possible outcomes, one of which they are told is true; they are then asked to assign a likelihood to each particular outcome.[3] Participants frequently assign a higher likelihood of occurrence to whichever outcome they have been told is true.[3] Remaining relatively unmodified, this method is still used in psychological and behavioural experiments investigating aspects of the hindsight bias. Having evolved from the heuristics of Tversky and Kahneman into the creeping determinism hypothesis and finally into the hindsight bias as we now know it, the concept has many practical applications and is still at the forefront of research today. Recent studies involving the hindsight bias have investigated the effect age has on the bias, how hindsight may impact interference and confusion, and how it may affect banking and investment strategies.[7][8][9]
Function
The hindsight bias is defined as a tendency to change a recollection from an original thought to something different because of newly provided information.[10] Since 1973, when Fischhoff started the hindsight bias research, there has been a focus on two main explanations of the bias: distorted event probabilities and distorted memory for judgments of factual knowledge.[11] In tests for hindsight bias, a person is asked to remember a specific event from the past or recall some descriptive information that they had been tested on earlier. In between the first test and the final test, they are given the correct information about the event or knowledge. At the final test they will report that they knew the answer all along, when in fact they have changed their answer to fit the correct information they were given after the initial test. Hindsight bias has been found to take place both in memory for experienced situations (events that the person is familiar with) and in hypothetical situations (made-up events where the person must imagine being involved). More recently, it has been found that hindsight bias also exists in recall with visual material:[11] subjects initially tested on blurry images, who learn what the true image was after the fact, later remember a clear, recognizable picture.
Cognitive models
To understand how a person can so easily change the foundation of knowledge and belief for events after receiving new information, three cognitive models of hindsight bias have been reviewed:[12] SARA (Selective Activation and Reconstructive Anchoring), RAFT (Reconstruction After Feedback with Take the Best), and CMT (Causal Model Theory). SARA and RAFT focus on distortions or changes in a memory process, while CMT focuses on probability judgments of hindsight bias.
The SARA model, created by Rüdiger Pohl and associates, explains hindsight bias for descriptive information in memory and hypothetical situations.[12][13] SARA assumes that people have a set of images to draw their memories from. They suffer from the hindsight bias due to selective activation or biased sampling of that set of images. Essentially, people remember only small, select amounts of information, and when asked to recall it at a later time they use that biased image to support their own opinions about the situation. The set of images is originally processed in the brain when first experienced. When remembered, this image is reactivated and becomes open to editing and alteration, which takes place in hindsight bias when new and correct information is presented, leading one to believe that this new information, when remembered at a later time, is the person's original memory. Due to this reactivation in the brain, a more permanent memory trace can be created. The new information acts as a memory anchor, causing retrieval impairment.[14]
The RAFT model[15] explains hindsight bias as comparisons of objects using knowledge-based probability, followed by the application of interpretations to those probabilities.[12] When given two choices, a person recalls the information on both topics and makes assumptions based on how reasonable they find the information to be. An example would be comparing two cities to determine which is larger. If one city is well known (say, for a popular sporting team) while the other is not as recognizable, the person's mental cues for the more popular city increase. The person then "takes the best" option in assessing their own probabilities: having recognized a city because of its sports team, they assume that city is the most populated. "Take the best" refers to the cue that is viewed as most valid and becomes support for the person's interpretation. RAFT is a by-product of adaptive learning: feedback information updates a person's knowledge base. This can leave a person unable to retrieve the initial information, since the information cue has been replaced by a cue that they thought was more fitting. The "best" cue has been replaced, and the person remembers only the answer that seems most likely, believing they thought this was the best point all along.[12]
Both SARA and RAFT descriptions include a memory trace impairment or cognitive distortion that is caused by
feedback of information and reconstruction of memory.
CMT is a non-formal theory based on work by many researchers to create a collaborative process model for hindsight bias that involves event outcomes.[12] People try to make sense of an event that has not turned out as they expected by creating causal reasoning for the starting event conditions. This can give a person the idea that the event outcome was inevitable and that nothing could have prevented it from happening. CMT can be triggered by a discrepancy between a person's expectation of the event and the reality of the outcome. People consciously want to make sense of what has happened and selectively retrieve memories that support the current outcome. The causal attribution can be motivated by a desire to feel more positive about the outcome, and possibly about themselves.[16]
Are people lying, or are they tricking themselves into believing that they knew the right answer? These models suggest that memory distortions and personal bias both play a role.
Memory distortions
Hindsight bias has similarities to other memory distortions, such as the misinformation effect and false autobiographical memory.[10] The misinformation effect occurs after an event is witnessed: new information received after the fact influences how the person remembers the event, and can be called post-event misinformation. This is an important issue in eyewitness testimony. False autobiographical memory takes place when suggestions or additional outside information are provided that distort and change memory of events; this can also lead to false memory syndrome. At times this can lead to the creation of new memories that are completely false and describe events that never took place. All three of these memory distortions involve a three-stage procedure.[10] The details of each procedure differ, but all can result in some psychological manipulation and alteration of memory. Stage one differs between the three paradigms, although all involve an event: an event that has taken place (misinformation effect), an event that has not taken place (false autobiographical memory), or a judgment made by a person about an event that must be remembered (hindsight bias). Stage two consists of further information that the person receives after the event has taken place. The new information given in hindsight bias is correct and presented up front to the person, while the extra information in the other two memory distortions is wrong and presented in an indirect and possibly manipulative way. The third stage consists of recalling the starting information. The person must recall the original information in hindsight bias and the misinformation effect, while a person with a false autobiographical memory is expected to remember the incorrect information as a true memory.[10]
For a false autobiographical memory to be created, the person must believe a memory that is not real. To seem real, the information given must be influenced by their own personal judgments. There is no real episode of an event to remember, so this memory construction must be logical to that person's knowledge base. Hindsight bias and the misinformation effect, by contrast, recall a specific time and event; this is called an episodic memory process.[10] These two memory distortions both use memory-based mechanisms that involve a memory trace that has been changed. Hippocampus activation takes place when an episodic memory is recalled,[17] and the memory then becomes available for alteration by new information. The person believes that the remembered information is the original memory trace, not an altered memory. Because this new memory is made from accurate information, the person has little motivation to admit they were originally wrong by remembering the original memory. This can lead to motivated forgetting.
Motivated forgetting
Following a negative outcome of a situation, people do not want to accept blame. Instead of accepting their role in the event, they may view themselves as caught up in a situation that was unforeseeable, and therefore not see themselves as the culprit (referred to as defensive processing), or view the situation as inevitable, with nothing that could have been done to prevent it (retroactive pessimism).[18] Defensive processing involves less hindsight bias, as it entails playing ignorant of the event. Retroactive pessimism makes use of hindsight bias after a negative, unwanted outcome. Events in life can be hard to control or predict, and it is no surprise that people want to view themselves in a more positive light and do not want to take responsibility for situations they could have altered. This leads to hindsight bias in the form of retroactive pessimism, which inhibits upward counterfactual thinking and instead interprets the outcome as succumbing to an inevitable fate.[19] This memory inhibition, which prevents a person from recalling what really happened, may lead to a failure to accept one's mistakes, and therefore to an inability to learn and grow so as to prevent a similar mistake in the future.[18] Hindsight bias can also lead to overconfidence in one's decisions without consideration of other options.[20] Such people see themselves as remembering correctly, even though they are merely forgetting that they were wrong. Avoiding responsibility is common in the human population; examples are discussed below to show the regularity and severity of hindsight bias in society.
Elimination
Research shows that people still exhibit the hindsight bias even when they are informed about it.[21] Researchers' attempts to decrease the bias in participants have failed, suggesting that hindsight bias has an automatic source in cognitive reconstruction. This supports the Causal Model Theory and the use of sense-making to understand event outcomes.[12] The only observable way to decrease hindsight bias in testing is to increase the accountability of the participant's answer.[20]
Related disorders
Schizophrenia
Schizophrenia is an example of a disorder that directly affects the hindsight bias. The hindsight bias has a stronger effect on schizophrenic individuals than on individuals from the general public.[22] The hindsight bias effect is a paradigm that demonstrates how recently acquired knowledge influences the recollection of past information. Recently acquired knowledge has a strange but strong influence on schizophrenic individuals in relation to previously learned information. New information, combined with the lack of acceptable influence from past reality-based memories, can disconfirm behaviour and delusional belief of the kind that typifies patients suffering from schizophrenia.[22] This can cause faulty memory, which can lead to hindsight thinking and believing one knows something one does not.[22] Delusion-prone individuals suffering from schizophrenia can falsely jump to conclusions,[23] and jumping to conclusions can lead to hindsight, which strongly influences the delusional conviction in schizophrenic individuals.[23] In numerous studies, cognitive functional deficits in schizophrenic individuals impair their ability to represent and uphold contextual processing.[24]
Post-traumatic stress disorder
Post-traumatic stress disorder (PTSD) is the re-experiencing and avoidance of trauma-related stressors, emotions, and memories from a past event or events that have a profound cognitive impact on an individual.[25]
PTSD can be attributed to functional impairment of the prefrontal cortex (PFC). Dysfunctions in the cognitive processing of context and other abnormalities from which PTSD patients suffer can affect hindsight thinking, as when combat soldiers perceive that they could have altered the outcomes of events in war.[26] The PFC and dopamine (DA) systems are parts of the brain that can be responsible for impairment in the cognitive control processing of context information. The PFC is well known for controlling the thought process involved in hindsight bias - the belief that something would happen when it evidently did not. Impairment in certain brain regions can also affect the thought process of an individual who may engage in hindsight thinking.[27]
Cognitive flashbacks and other associated features of a traumatic event can trigger severe stress and negative emotions such as unpardonable guilt. For example, studies have been done on the trauma-related guilt characteristics of war veterans with chronic PTSD.[28] Although research has been limited, the available data suggest that hindsight bias, in terms of guilt and responsibility for traumatic events of war, affects war veterans' personal perception of wrongdoing: they blame themselves and, in hindsight, perceive that they could have prevented what happened.
Examples
Health care system
Accidents are prone to happen in any human undertaking, but accidents within the health care system seem more salient and severe because of their profound effect on the lives of those involved, sometimes resulting in the death of a patient. Hindsight bias has been shown to be a disadvantage of nearly all methods of measuring error and adverse events within the healthcare system.[29] These methods include morbidity and mortality conferences, autopsies, case analysis, medical malpractice claims analysis, staff interviews, and even patient observation. Furthermore, studies of injury or death rates resulting from error, and virtually all incident review procedures used in healthcare today, fail to control for hindsight bias, severely limiting the generalizability and integrity of the research.[30] Physicians who are primed with a possible diagnosis before evaluating a patient's symptoms themselves are more likely to arrive at the primed diagnosis than physicians who were given only the symptoms.[31] According to the Harvard Medical Practice Studies, 44,000-98,000 deaths in the United States each year are the result of safety incidents within the healthcare system.[29] Many of these deaths are viewed as preventable after the fact, clearly indicating the presence and importance of hindsight bias in this field.
Judicial system
Hindsight bias results in being held to a higher standard in court. The defense is particularly susceptible to these
effects since their actions are the ones being scrutinized by the jury. Due to the hindsight bias, defendants will be
judged as being capable of preventing the bad outcome.
[32]
Though much stronger for the defendants, hindsight bias
also affects the plaintiffs. In cases where there is an assumption of risk, hindsight bias may contribute to the jurors
perceiving the event as riskier due to the poor outcome. This may lead the jury to feel that the plaintiff should have
exercised greater caution in the situation. Both of these effects can be minimized if attorneys put the jury in a
position of foresight rather than hindsight through the use of language and timelines. Encouraging people to
think explicitly about counterfactuals has been shown to be an effective means of reducing hindsight bias.
[33]
In other words,
people became less attached to the actual outcome and were more open to consider alternative lines of reasoning
prior to the event. Judges involved in fraudulent transfer litigation cases were subject to the hindsight bias as well,
resulting in an unfair advantage for the plaintiff.
[34]
This shows that jurors are not the only ones sensitive to the
effects of the hindsight bias in the courtroom.
References
[1] Hoffrage, U., & Pohl, R. (2003). Hindsight Bias: A Special Issue of Memory. Champlain, NY: Psychology Press.
[2] Rudiger, F. (2007). Ways to Assess Hindsight Bias. Social Cognition, 25(1), 14-31.
[3] Fischhoff, B. (2003). Hindsight ≠ foresight: the effect of outcome knowledge on judgment under uncertainty. Quality and Safety in Health
Care, 12, 304-312.
[4] Fischhoff, B. (2007). An early history of hindsight research. Social Cognition, 25, 10-13.
[5] Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207-232.
[6] Fischhoff, B., & Beyth, R. (1975). "I knew it would happen": Remembered probabilities of once-future things. Organizational Behavior and
Human Performance, 13, 1-16.
[7] Bernstein, D. M., Erdfelder, E., Meltzoff, A. N., Peria, W., & Loftus, G. R. (2011). Hindsight bias from 3 to 95 years of age. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 37(2), 378-391.
[8] Marks, A. Z., & Arkes, H. R. (2010). The effects of mental contamination on the hindsight bias: Source confusion determines success in
disregarding knowledge. Journal of Behavioral Decision Making, 23, 131-160.
[9] Biais, B., & Weber, M. (2009). Hindsight bias, risk perception and investment performance. Management Science, 55, 1018-1029.
[10] Mazzoni, G., & Vannucci, M. (2007). Hindsight bias, the misinformation effect, and false autobiographical memories. Social Cognition,
25(1), 203-220.
[11] Blank, H., Musch, J., & Pohl, R. F. (2007). Hindsight Bias: On Being Wise After the Event. Social Cognition, 25(1), 1-9.
[12] Blank, H., & Nestler, S. (2007). Cognitive Process Models of Hindsight Bias. Social Cognition, 25(1), 132-147.
[13] Pohl, R. F., Eisenhauer, M., & Hardt, O. (2003). SARA: A Cognitive Process Model to Simulate the Anchoring Effect and Hindsight Bias.
Memory, 11, 337-356.
[14] Loftus, E. F. (1991). Made in Memory: Distortions in Recollection After Misleading Information. The Psychology of Learning and
Motivation, 25, 187-215. New York: Academic Press.
[15] Hertwig, R., Fenselow, C., & Hoffrage, U. (2003). Hindsight Bias: How Knowledge and Heuristics Affect Our Reconstruction of the Past.
Memory, 11, 357-377.
[16] Nestler, S., Blank, H., & von Collani, G. (2008). A Causal Model Theory of Creeping Determinism. Social Psychology, 39(3), 182-188.
[17] Nadel, L., Hupbach, A., Hardt, O., & Gomez, R. (2008). Episodic Memory: Reconsolidation. In Dere, D., Easton, A., Nadel, L., & Huston,
J. P. (Eds.), Handbook of Episodic Memory (pp. 43-56). The Netherlands: Elsevier.
[18] Pezzo, M., & Pezzo, S. P. (2007). Making Sense of Failure: A Motivated Model of Hindsight Bias. Social Cognition, 25(1), 147-165.
[19] Tykocinski, O. E., & Steinberg, N. (2005). Coping with disappointing outcomes: Retroactive pessimism and motivated inhibition of
counterfactuals. Journal of Experimental Social Psychology, 41, 551-558.
[20] Arkes, H., Faust, D., Guilmette, T. J., & Hart, K. (1988). Eliminating Hindsight Bias. Journal of Applied Psychology, 73(2), 305-307.
[21] Pohl, R. F., & Hell, W. (1996). No Reduction in Hindsight Bias after Complete Information and Repeated Testing. Organizational Behavior
and Human Decision Processes, 67(1), 49-58.
[22] Woodward, T. S., Moritz, S., Arnold, M. M., Cuttler, C., Whitman, J. C., & Lindsay, S. (2006). Increased hindsight bias in schizophrenia.
Neuropsychology, 20, 462-467.
[23] Freeman, D., Pugh, K., & Garety, P. A. (2008). Jumping to conclusions and paranoid ideation in the general population. Schizophrenia
Research, 102, 254-260.
[24] Holmes, A. J., MacDonald, A., III, Carter, C. S., Barch, D. M., Stenger, V. A., & Cohen, J. D. (2005). Prefrontal functioning during context
processing in schizophrenia and major depression: An event-related fMRI study. Schizophrenia Research, 76(2-3), 199-206.
[25] Brewin, C., Dalgleish, T., & Joseph, S. (1996). A dual representation theory of posttraumatic stress disorder. Psychological Review,
103(4), 670-686.
[26] Richert, K. A., Carrion, V. G., Karchemskiy, A., & Reiss, A. L. (2006). Regional differences of the prefrontal cortex in pediatric PTSD: An
MRI study. Depression and Anxiety, 23, 17-25.
[27] Braver, T. S., Barch, D. M., Keys, B. A., Carter, C. S., Cohen, J. D., Kaye, J. A., Janowsky, J. S., Taylor, S. F., Yesavage, J. A.,
Mumenthaler, M. S., Jagust, W. J., & Reed, B. R. (2001). Context processing in older adults: Evidence for a theory relating cognitive control
to neurobiology in healthy aging. Journal of Experimental Psychology: General, 130(4), 746-763.
[28] Beckham, J. C., Feldman, M. E., & Kirby, A. C. (1998). Atrocities Exposure in Vietnam Combat Veterans with Chronic Posttraumatic
Stress Disorder: Relationship to Combat Exposure, Symptom Severity, Guilt, and Interpersonal Violence. Journal of Traumatic Stress, 11,
777-785.
[29] Hurwitz, B., & Sheikh, A. (2009). Healthcare Errors and Patient Safety. Hoboken, NJ: Blackwell Publishing.
[30] Carayon, P. (2007). Handbook of Human Factors and Ergonomics in Healthcare and Patient Safety. Hoboken, NJ: Wiley Publishing.
[31] Arkes, H. R., Saville, P. D., & Harkness, A. R. (1981). Hindsight bias among physicians weighing the likelihood of diagnoses. Journal of
Applied Psychology, 66, 252-254.
[32] Starr, V. H., & McCormick, M. (2001). Jury Selection (3rd ed.). Aspen Law and Business.
[33] Peterson, R. L. (2007). Inside the Investor's Brain: The Power of Mind over Money. Hoboken, NJ: Wiley Publishing.
[34] Simkovic, M., & Kaminetzky, B. (2010). Leveraged Buyout Bankruptcies, the Problem of Hindsight Bias, and the Credit Default Swap
Solution. Seton Hall Public Research Paper, August 29, 2010.
External links
Excerpt from: David G. Myers, Exploring Social Psychology. New York: McGraw-Hill, 1994, pp. 15-19.
(http://csml.som.ohio-state.edu/Music829C/hindsight.bias.html) (More discussion of Paul Lazarsfeld's experimental
questions.)
Forecasting (Macro and Micro) and Future Concepts (http://www.cxoadvisory.com/gurus/Fisher/article/)
Ken Fisher on Market Analysis (4/7/06)
Iraq War Naysayers May Have Hindsight Bias (http://www.washingtonpost.com/wp-dyn/content/article/2006/10/01/AR2006100100784.html).
Shankar Vedantam. Washington Post.
Why Hindsight Can Damage Foresight (http://www.forecasters.org/pdfs/foresight/free/Issue17_Goodwin.pdf).
Paul Goodwin. Foresight: The International Journal of Applied Forecasting, Spring 2010.
Hostile media effect
The hostile media effect, sometimes called the hostile media phenomenon, refers to the finding that people with
strong biases toward an issue (partisans) perceive media coverage as biased against their opinions, regardless of the
reality. Proponents of the hostile media effect argue that this finding cannot be attributed to the presence of bias in
the news reports, since partisans from opposing sides of an issue rate the same coverage as biased against their side
and biased in favor of the opposing side.
[1]
The phenomenon was first proposed and studied experimentally by
Robert Vallone, Lee Ross and Mark Lepper.
[1][2]
Studies
In the first major study of this phenomenon,
[1]
pro-Palestinian students and pro-Israeli students at Stanford
University were shown the same news filmstrips pertaining to the then-recent (1982) Sabra and Shatila massacre of
Palestinian refugees by Christian Lebanese militia fighters in Beirut during the Lebanese Civil War. On a number of
objective measures, both sides found that these identical news clips were slanted in favor of the other side.
Pro-Israeli students reported seeing more anti-Israel references and fewer favorable references to Israel in the news
report and pro-Palestinian students reported seeing more anti-Palestinian references, and so on. Both sides said a
neutral observer would have a more negative view of their side from viewing the clips, and that the media would
have excused the other side where it blamed their side.
Notably, the two sides were not asked for subjective generalizations about the media coverage as a whole, such as
might be expressed as "I thought that the news has been generally biased against this side of the issue." Instead,
when viewing identical news clips, subjects differed along partisan lines on simple,
objective criteria such as the number of references to a given subject. The research suggests the hostile media effect
is not just a difference of opinion but a difference of perception (selective perception).
Studies have also found hostile media effects related to other political conflicts, such as strife in Bosnia[3] and in
U.S. presidential elections.[4]
This effect is interesting to psychologists because it appears to be a reversal of the
otherwise pervasive effects of confirmation bias: in this area, people seem to pay more attention to information that
contradicts rather than supports their existing views. This is an example of disconfirmation bias.
An oft-cited forerunner to the Vallone et al. study was conducted by Albert Hastorf and Hadley Cantril in 1954.
[5]
Princeton and Dartmouth students were shown a filmstrip of a controversial Princeton-Dartmouth football game.
Asked to count the number of infractions committed by both sides, students at both universities "saw" many more
infractions committed by the opposing side, in addition to making different generalizations about the game. Hastorf
and Cantril concluded that "there is no such 'thing' as a 'game' existing 'out there' in its own right which people
merely 'observe.' ... For the 'thing' simply is not the same for different people whether the 'thing' is a football game, a
presidential candidate, Communism, or spinach."
[6]
References
[1] Vallone, R. P., Ross, L., & Lepper, M. R. (1985). The hostile media phenomenon: Biased Perception and Perceptions of Media Bias in
Coverage of the "Beirut Massacre" (http://www.ssc.wisc.edu/~jpiliavi/965/hwang.pdf). Journal of Personality and Social Psychology,
49, 577-585. Summary (http://faculty.babson.edu/krollag/org_site/soc_psych/vallone_beirut.html).
[2] Vallone, R. E., Lepper, M. R., & Ross, L. (1981). Perceptions of media bias in the 1980 presidential election. Unpublished manuscript,
Stanford University. As cited in Vallone, Ross & Lepper, 1985.
[3] Matheson, K., & Dursun, S. (2001). Social identity precursors to the hostile media phenomenon: Partisan perceptions of coverage of the
Bosnian conflict (http://gpi.sagepub.com/cgi/reprint/4/2/116). Group Processes and Intergroup Relations, 4, 117-126.
[4] Dalton, R. J.; Beck, P. A.; Huckfeldt, R. (1998). "Partisan Cues and the Media: Information Flows in the 1992 Presidential Election".
American Political Science Review 92 (1): 111-126. JSTOR 2585932.
[5] Hastorf, A. H.; Cantril, H. (1954). "They Saw a Game: A Case Study". Journal of Abnormal and Social Psychology 49 (1): 129-134.
doi:10.1037/h0057880.
[6] Hastorf & Cantril (1954), pp. 132-133. Emphasis as in original.
External links
Ohio State: Think Political News Is Biased? Depends Who You Ask (http://researchnews.osu.edu/archive/talkbias.htm)
Cancelling Each Other Out? Interest Group Perceptions of the News Media (http://www.cjc-online.ca/index.php/journal/article/view/960/866)
Public Perceptions of Bias in the News Media: Taking A Closer Look at the Hostile Media Phenomenon
(http://www.uky.edu/AS/PoliSci/Peffley/pdf/MediaBiasMidwest2001_4-04-01_.PDF) (PDF)
Hyperbolic discounting
In economics, hyperbolic discounting is a time-inconsistent model of discounting.
Given two similar rewards, humans show a preference for one that arrives sooner rather than later. Humans are said
to discount the value of the later reward, by a factor that increases with the length of the delay. This process is
traditionally modeled in the form of exponential discounting, a time-consistent model of discounting. A large number of
studies have since demonstrated that the constant discount rate assumed by exponential discounting is systematically
violated.
[1]
Hyperbolic discounting is a particular mathematical model devised as an improvement over
exponential discounting. Hyperbolic discounting has been observed in humans and animals.
In hyperbolic discounting, valuations fall very rapidly for small delay periods, but then fall slowly for longer delay
periods. This contrasts with exponential discounting, in which valuation falls by a constant factor per unit delay,
regardless of the total length of the delay. The standard experiment used to reveal a test subject's hyperbolic
discounting curve is to compare short-term preferences with long-term preferences. For instance: "Would you prefer
a dollar today or three dollars tomorrow?" or "Would you prefer a dollar in one year or three dollars in one year and
one day?" For a certain range of offerings, a significant fraction of subjects will take the lesser amount today, but will
gladly wait one extra day in a year in order to receive the higher amount instead.
[2]
Individuals with such preferences
are described as "present-biased".
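The contrast between the two curves can be sketched numerically. A minimal sketch, assuming the standard one-parameter forms (hyperbolic factor 1/(1 + kD), exponential factor e^(-kD)); the parameter value k = 1 is chosen only for illustration:

```python
import math

def hyperbolic(delay, k=1.0):
    # One-parameter hyperbolic discount factor: 1 / (1 + k * delay).
    return 1.0 / (1.0 + k * delay)

def exponential(delay, k=1.0):
    # Exponential discount factor: a constant proportional drop per unit delay.
    return math.exp(-k * delay)

# Per-period retention ratio f(D + 1) / f(D): constant for the exponential
# curve, but rising toward 1 for the hyperbolic curve, so valuations fall
# steeply for short delays and only slowly for long ones.
for D in (0, 1, 10, 50):
    print(D,
          round(hyperbolic(D + 1) / hyperbolic(D), 3),
          round(exponential(D + 1) / exponential(D), 3))
```

The rising ratio is the signature of hyperbolic discounting: each extra period of delay matters less the further away it already is.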
Individuals using hyperbolic discounting reveal a strong tendency to make choices that are inconsistent over
time: they make choices today that their future self would prefer not to make, despite using the same reasoning.
This dynamic inconsistency happens because the value of future rewards is much lower under hyperbolic
discounting than under exponential discounting.
[3]
Observations
The phenomenon of hyperbolic discounting is implicit in Richard Herrnstein's "matching law," which states that
when dividing their time or effort between two non-exclusive, ongoing sources of reward, most subjects allocate in
direct proportion to the rate and size of rewards from the two sources, and in inverse proportion to their delays. That
is, subjects' choices "match" these parameters.
After the report of this effect in the case of delay,[4] George Ainslie pointed out that in a single choice between a
larger, later and a smaller, sooner reward, inverse proportionality to delay would be described by a plot of value by
delay that had a hyperbolic shape, and that when the larger, later reward is preferred, this preference can be reversed
by reducing both rewards' delays by the same absolute amount. That is, for values of x for which under current
conditions it would be obviously rational to prefer x dollars in (n + 1) days over one dollar in n days (e.g., x = 3), a
large subset of the population would (rationally) prefer the former alternative given large values of n, but even
among this subset, a large (sub-)subset would (irrationally) prefer one dollar in n days when n = 0. Ainslie
demonstrated that the predicted reversal occurs in pigeons.
[5]
A large number of subsequent experiments have confirmed that spontaneous preferences by both human and
nonhuman subjects follow a hyperbolic curve rather than the conventional, "exponential" curve that would produce
consistent choice over time.
[6][7]
For instance, when offered the choice between $50 now and $100 a year from now,
many people will choose the immediate $50. However, given the choice between $50 in five years or $100 in six
years, almost everyone will choose $100 in six years, even though that is the same choice seen at five years' greater
distance.
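This reversal follows directly from the hyperbolic form. A small sketch, assuming a hyperbolic factor 1/(1 + kD) with an illustrative rate of k = 1.5 per year (the specific k is an assumption, not from any cited study):

```python
def present_value(amount, delay_years, k=1.5):
    # Hyperbolically discounted value of a delayed reward;
    # k = 1.5 per year is chosen purely for illustration.
    return amount / (1.0 + k * delay_years)

# Choice 1: $50 now vs. $100 in one year -> the $50 wins.
choice_now = present_value(50, 0)       # 50.0
choice_1yr = present_value(100, 1)      # 40.0

# Choice 2: the same pair pushed five years out -> the $100 wins.
choice_5yr = present_value(50, 5)       # ~5.9
choice_6yr = present_value(100, 6)      # 10.0
```

Under exponential discounting the ranking of the two rewards could never flip when both delays are shifted by the same amount; under the hyperbolic factor it does.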
Hyperbolic discounting has also been found to relate to real-world examples of self-control. Indeed, a variety of
studies have used measures of hyperbolic discounting to find that drug-dependent individuals discount delayed
consequences more than matched nondependent controls, suggesting that extreme delay discounting is a fundamental
behavioral process in drug dependence.
[8][9][10]
Some evidence suggests pathological gamblers also discount delayed
outcomes at higher rates than matched controls.
[11]
Whether high rates of hyperbolic discounting precede addictions
or vice-versa is currently unknown, although some studies have reported that high-rate discounting rats are more
likely to consume alcohol
[12]
and cocaine
[13]
than lower-rate discounters. Likewise, some have suggested that
high-rate hyperbolic discounting makes unpredictable (gambling) outcomes more satisfying.
[14]
The degree of discounting is vitally important in describing hyperbolic discounting, especially in the discounting of
specific rewards such as money. The discounting of monetary rewards varies across age groups owing to
differences in the underlying discount rate.
[6]
The rate depends on a variety of factors, including the species being observed, age, experience, and
the amount of time needed to consume the reward.
[15][16]
Criticism
An article from 2003 noted that the evidence might be better explained by a similarity heuristic than by hyperbolic
discounting.
[17]
Similarly, a 2011 paper criticized the existing studies for mostly using data collected from university
students and being too quick to conclude that the hyperbolic model of discounting is correct.
[18]
A study by Daniel Read introduces "subadditive discounting": the fact that discounting over a delay increases if the
delay is divided into smaller intervals. This hypothesis may explain the main finding of many studies in support of
hyperbolic discounting (the observation that impatience declines with time) while also accounting for observations
not predicted by hyperbolic discounting.
[19]
Mathematical model
Step-by-step explanation
Suppose that in a study, participants are offered the choice between taking x dollars immediately or taking y dollars n
days later. Suppose further that one participant in that study employs exponential discounting and another employs
hyperbolic discounting.
Each participant will realize that a) s/he should take x dollars immediately if s/he can invest that money in a different
venture that will yield more than y dollars n days later and b) s/he will be indifferent between the choices (selecting
one at random) if the best available alternative will likewise yield y dollars n days later. (Assume, for the sake of
simplicity, that the values of all available investments are compounded daily.) Each participant correctly understands
the fundamental question being asked: "For any given value of y dollars and n days, what is the minimum amount of
money, i.e., the minimum value for x dollars, that I should be willing to accept? In other words, how many dollars
would I need to invest today to get y dollars n days from now?" Each will take x dollars if x is greater than the
answer that s/he calculates, and each will take y dollars n days from now if x is smaller than that answer. However,
the methods that they use to calculate that amount and the answers that they get will be different, and only the
exponential discounter will use the correct method and get a reliably correct result:
The exponential discounter will think "The best alternative investment available (that is, the best investment
available in the absence of this choice) gives me a return of r percent per day; in
other words, once a day it adds to its value r percent of the value that it had the previous day. That is, every day it
multiplies its value once by (100% + r%). So if I hold the investment for n days, its value will have multiplied
itself by this amount n times, making that value (100% + r%)^n of what it was at the start, that is, (1 + r%)^n
times what it was at the start. So to figure out how much I would need to start with today to get y dollars n days
from now, I need to divide y dollars by ([1 + r%]^n). If my other choice of how much money to take is greater
than this result, then I should take the other amount, invest it in the other venture that I have in mind, and get even
more at the end. If this result is greater than my other choice, then I should take y dollars n days from now,
because it turns out that by giving up the other choice I am essentially investing that smaller amount of money to
get y dollars n days from now, meaning that I'm getting an even greater return by waiting n days for y dollars,
making this my best available investment."
The hyperbolic discounter, however, will think "If I want y dollars n days from now, then the amount that I need
to invest today is y divided by n, because that amount times n equals y dollars. [There lies the hyperbolic
discounter's error.] If my other choice is greater than this result, I should take it instead because x times n will be
greater than y times n; if it is less than this result, then I should wait n days for y dollars."
Where the exponential discounter reasons correctly and the hyperbolic discounter goes wrong is that as n becomes
very large, the value of ([1 + r%]^n) becomes much larger than the value of n, with the effect that the value of
(y/[1 + r%]^n) becomes much smaller than the value of (y/n). Therefore, the minimum value of x (the number of dollars in
the immediate choice) that suffices to be greater than that amount will be much smaller than the hyperbolic
discounter thinks, with the result that s/he will perceive x-values in the range from (y/[1 + r%]^n) to (y/n) (inclusive
at the low end) as being too small and, as a result, irrationally turn those alternatives down when they are in fact the
better investment.
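The two thresholds can be written out directly. A minimal sketch of the rules described above, with r expressed as a daily fraction rather than a percentage:

```python
def exponential_minimum_x(y, n, r):
    # Correct rule: the amount that, invested at daily return r,
    # grows to y dollars after n days, i.e. y / (1 + r)^n.
    return y / (1.0 + r) ** n

def hyperbolic_minimum_x(y, n):
    # The hyperbolic discounter's mistaken rule from the text:
    # simply divide the future amount by the number of days.
    return y / n

# For large n, (1 + r)^n dwarfs n, so the correct threshold falls far
# below the hyperbolic one; any immediate offer x between the two is
# wrongly rejected by the hyperbolic discounter.
print(exponential_minimum_x(1000, 1000, 0.01))  # well under a dollar
print(hyperbolic_minimum_x(1000, 1000))         # exactly one dollar
```
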
Formal model
[Figure: comparison of the discount factors of hyperbolic and exponential discounting; hyperbolic discounting is
shown to value future assets higher than exponential discounting.]
Hyperbolic discounting is mathematically described as:
f(D) = 1 / (1 + kD)
where f(D) is the discount factor that multiplies the value of the reward, D is the delay in the reward, and k is a
parameter governing the degree of discounting. This is compared with the formula for exponential discounting:
f(D) = e^(-kD)
Simple derivation
If f(n) = 2^(-n) is an exponential discounting function and g(n) = 1/(1 + n) a hyperbolic function (with n the number
of weeks), then the exponential discounting a week later from "now" (n = 0) is f(1)/f(0) = 1/2, and the exponential
discounting a week from week n is f(n + 1)/f(n) = 1/2, which means they are the same. For g(n), g(1)/g(0) = 1/2,
which is the same as for f, while g(n + 1)/g(n) = (1 + n)/(2 + n). From this one can see that the two types of
discounting are the same "now", but when n is much greater than 1, for instance 52 (one year), g(n + 1)/g(n) will
tend to go to 1, so that the hyperbolic discounting of a week in the far future is virtually zero, while the exponential
is still 1/2.
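These week-over-week ratios are easy to check numerically. A small sketch, taking f(n) = 2^(-n) as the exponential discounting function and g(n) = 1/(1 + n) as the hyperbolic one (with n in weeks):

```python
def f(n):
    # Exponential discounting: value halves every week.
    return 0.5 ** n

def g(n):
    # Hyperbolic discounting.
    return 1.0 / (1.0 + n)

# One week out from "now", both curves discount by the same factor.
print(f(1) / f(0), g(1) / g(0))      # 0.5 and 0.5

# A year out, the exponential ratio is still 1/2, but the hyperbolic
# ratio has crept up toward 1.
print(f(53) / f(52), g(53) / g(52))  # 0.5 and ~0.98
```
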
Quasi-hyperbolic approximation
The "quasi-hyperbolic" discount function, proposed by Laibson (1997),[3] approximates the hyperbolic discount
function above in discrete time by
f(0) = 1
and
f(D) = β δ^D for all D > 0,
where β and δ are constants between 0 and 1; and again D is the delay in the reward, and f(D) is the discount factor.
The condition f(0) = 1 states that rewards taken at the present time are not discounted.
Quasi-hyperbolic time preferences are also referred to as "beta-delta" preferences. They retain much of the analytical
tractability of exponential discounting while capturing the key qualitative feature of discounting with true
hyperbolas.
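A sketch of this beta-delta form; the parameter values β = 0.7 and δ = 0.95 are chosen only for illustration:

```python
def quasi_hyperbolic(D, beta=0.7, delta=0.95):
    # Beta-delta discount factor in discrete time: no discount at D = 0,
    # a one-off drop of beta at the first period, then plain exponential
    # decay in delta thereafter.
    if D == 0:
        return 1.0
    return beta * delta ** D

# The ratio between consecutive periods is delta everywhere except at
# the present, where the extra beta makes the very first delay costly —
# the key qualitative feature of hyperbolic discounting.
```
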
Explanations
Uncertain risks
Notice that whether discounting future gains is rational or not (and at what rate such gains should be
discounted) depends greatly on circumstances. Many examples exist in the financial world, for example, where it is
reasonable to assume that there is an implicit risk that the reward will not be available at the future date, and
furthermore that this risk increases with time. Consider: Paying $50 for dinner today or delaying payment for sixty
years but paying $100,000. In this case, the restaurateur would be reasonable to discount the promised future value
as there is significant risk that it might not be paid (e.g. due to the death of the restaurateur or the diner).
Uncertainty of this type can be quantified with Bayesian analysis.
[20]
For example, suppose that the probability for
the reward to be available after time t is, for a known hazard rate λ,
P(R|λ, t) = exp(-λt),
but the rate is unknown to the decision maker. If the prior probability distribution of λ is
p(λ) = exp(-λ),
then the decision maker will expect that the probability of the reward after time t is
P(R|t) = ∫₀^∞ exp(-λt) exp(-λ) dλ = 1/(1 + t),
which is exactly the hyperbolic discount rate. Similar conclusions can be obtained from other plausible distributions
for λ.[20]
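This link between an uncertain hazard rate and hyperbolic discounting can be checked by numerical integration. A sketch assuming a survival probability exp(-λt) for hazard rate λ and an exponential prior exp(-λ), whose average works out to the hyperbolic factor 1/(1 + t):

```python
import math

def expected_survival(t, lam_max=30.0, steps=300_000):
    # Midpoint-rule approximation of
    #   integral over lam in [0, inf) of exp(-lam) * exp(-lam * t) dlam,
    # which equals 1 / (1 + t); the tail beyond lam_max is negligible.
    dlam = lam_max / steps
    total = 0.0
    for i in range(steps):
        lam = (i + 0.5) * dlam
        total += math.exp(-lam * (1.0 + t)) * dlam
    return total

print(expected_survival(3.0))  # ≈ 0.25 = 1 / (1 + 3)
```

So a decision maker who is merely uncertain about a constant hazard rate behaves, on average, like a hyperbolic discounter.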
Applications
More recently these observations about discount functions have been used to study saving for retirement, borrowing
on credit cards, and procrastination. It has frequently been used to explain addiction.
[21][22]
Hyperbolic discounting
has also been offered as an explanation of the divergence between privacy attitudes and behaviour.
[23]
Present values of annuities
Present value of a standard annuity
The present value of a series of equal annual cash flows in arrears discounted hyperbolically is:
V = P × ln(1 + kD) / k,
where V is the present value, P is the annual cash flow, D is the number of annual payments and k is the factor
governing the discounting.
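As a sketch, the year-by-year sum of hyperbolically discounted payments can be compared with the closed form (P/k)·ln(1 + kD), which is its continuous-time counterpart (the integral of P/(1 + kt) over [0, D]); the inputs below are illustrative:

```python
import math

def annuity_pv_sum(P, D, k):
    # Discount each year-end payment by the hyperbolic factor 1 / (1 + k*t).
    return sum(P / (1.0 + k * t) for t in range(1, D + 1))

def annuity_pv_closed(P, D, k):
    # Continuous-time counterpart: integral of P / (1 + k*t) dt from 0 to D.
    return (P / k) * math.log(1.0 + k * D)

print(round(annuity_pv_sum(100, 10, 0.1), 1))     # ≈ 668.8
print(round(annuity_pv_closed(100, 10, 0.1), 1))  # ≈ 693.1
```

The discrete sum is slightly below the closed form because each payment arrives at the end of its year rather than continuously through it.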
References
[1] Frederick, Shane; Loewenstein, George; O'Donoghue, Ted (2002). "Time Discounting and Time Preference: A Critical Review". Journal of
Economic Literature 40 (2): 351-401. doi:10.1257/002205102320161311.
[2] Thaler, R. H. (1981). "Some Empirical Evidence on Dynamic Inconsistency". Economics Letters 8 (3): 201-207.
doi:10.1016/0165-1765(81)90067-7.
[3] Laibson, David (1997). "Golden Eggs and Hyperbolic Discounting". Quarterly Journal of Economics 112 (2): 443-477.
doi:10.1162/003355397555253.
[4] Chung, S. H.; Herrnstein, R. J. (1967). "Choice and delay of reinforcement". Journal of the Experimental Analysis of Behavior 10 (1):
67-74. doi:10.1901/jeab.1967.10-67.
[5] Ainslie, G. W. (1974). "Impulse control in pigeons". Journal of the Experimental Analysis of Behavior 21 (3): 485-489.
doi:10.1901/jeab.1974.21-485.
[6] Green, L.; Fry, A. F.; Myerson, J. (1994). "Discounting of delayed rewards: A life span comparison". Psychological Science 5 (1): 33-36.
doi:10.1111/j.1467-9280.1994.tb00610.x.
[7] Kirby, K. N. (1997). "Bidding on the future: Evidence against normative discounting of delayed rewards". Journal of Experimental
Psychology: General 126 (1): 54-70. doi:10.1037/0096-3445.126.1.54.
[8] Bickel, W. K.; Johnson, M. W. (2003). "Delay discounting: A fundamental behavioral process of drug dependence". In Loewenstein, G.;
Read, D.; Baumeister, R. F. (eds.), Time and Decision. New York: Russell Sage Foundation. ISBN 0-87154-549-7.
[9] Madden, G. J.; Petry, N. M.; Bickel, W. K.; Badger, G. J. (1997). "Impulsive and self-control choices in opiate-dependent patients and
non-drug-using control participants: Drug and monetary rewards". Experimental and Clinical Psychopharmacology 5: 256-262.
PMID 9260073.
[10] Vuchinich, R. E.; Simpson, C. A. (1998). "Hyperbolic temporal discounting in social drinkers and problem drinkers". Experimental and
Clinical Psychopharmacology 6 (3): 292-305. doi:10.1037/1064-1297.6.3.292.
[11] Petry, N. M.; Casarella, T. (1999). "Excessive discounting of delayed rewards in substance abusers with gambling problems". Drug and
Alcohol Dependence 56 (1): 25-32. doi:10.1016/S0376-8716(99)00010-1.
[12] Poulos, C. X.; Le, A. D.; Parker, J. L. (1995). "Impulsivity predicts individual susceptibility to high levels of alcohol self-administration".
Behavioral Pharmacology 6 (8): 810-814. doi:10.1097/00008877-199512000-00006.
[13] Perry, J. L.; Larson, E. B.; German, J. P.; Madden, G. J.; Carroll, M. E. (2005). "Impulsivity (delay discounting) as a predictor of acquisition
of i.v. cocaine self-administration in female rats". Psychopharmacology 178 (2-3): 193-201. doi:10.1007/s00213-004-1994-4.
PMID 15338104.
[14] Madden, G. J.; Ewan, E. E.; Lagorio, C. H. (2007). "Toward an animal model of gambling: Delay discounting and the allure of
unpredictable outcomes". Journal of Gambling Studies 23 (1): 63-83. doi:10.1007/s10899-006-9041-5.
[15] Loewenstein, G.; Prelec, D. (1992). Choices Over Time. New York: Russell Sage Foundation. ISBN 0-87154-558-6.
[16] Raineri, A.; Rachlin, H. (1993). "The effect of temporal constraints on the value of money and other commodities". Journal of Behavioral
Decision-Making 6 (2): 77-94. doi:10.1002/bdm.3960060202.
[17] Rubinstein, Ariel (2003). "Economics and Psychology? The Case of Hyperbolic Discounting". International Economic Review 44 (4):
1207-1216.
[18] Andersen, Steffen; Harrison, Glenn W.; Lau, Morten; Rutström, E. Elisabet (2011). "Discounting Behavior: A Reconsideration".
[19] Read, Daniel (2001). "Is time-discounting hyperbolic or subadditive?". Journal of Risk and Uncertainty 23 (1): 5-32.
doi:10.1023/A:1011198414683.
[20] Sozou, P. D. (1998). "On hyperbolic discounting and uncertain hazard rates". Proceedings of the Royal Society B: Biological Sciences 265
(1409): 2015. doi:10.1098/rspb.1998.0534.
[21] O'Donoghue, T.; Rabin, M. (1999). "Doing it now or later". The American Economic Review 89: 103-124.
[22] O'Donoghue, T.; Rabin, M. (2000). "The economics of immediate gratification". Journal of Behavioral Decision Making 13: 233-250.
[23] Acquisti, Alessandro; Grossklags, Jens (2004). "Losses, Gains, and Hyperbolic Discounting: Privacy Attitudes and Privacy Behavior". In
Camp, J.; Lewis, R. (eds.), The Economics of Information Security. Kluwer. pp. 179-186.
Further reading
Ainslie, G. W. (1975). "Specious reward: A behavioral theory of impulsiveness and impulse control".
Psychological Bulletin 82 (4): 463-496. doi:10.1037/h0076860. PMID 1099599.
Ainslie, G. (1992). Picoeconomics: The Strategic Interaction of Successive Motivational States Within the Person.
Cambridge: Cambridge University Press.
Ainslie, G. (2001). Breakdown of Will. Cambridge: Cambridge University Press. ISBN 978-0-521-59694-7.
Rachlin, H. (2000). The Science of Self-Control. Cambridge; London: Harvard University Press.
Illusion of control
The illusion of control is the tendency for people to overestimate their ability to control events, for instance to feel
that they control outcomes that they demonstrably have no influence over.
[1]
The effect was named by psychologist
Ellen Langer and has been replicated in many different contexts.
[2]
It is thought to influence gambling behavior and
belief in the paranormal.
[3]
Along with illusory superiority and optimism bias, the illusion of control is one of the
positive illusions.
The illusion is more common in familiar situations, and in situations where the person knows the desired outcome.
[4]
Feedback that emphasizes success rather than failure can increase the effect, while feedback that emphasizes failure
can decrease or reverse the effect.
[5]
The illusion is weaker for depressed individuals and is stronger when
individuals have an emotional need to control the outcome.
[4]
The illusion is strengthened by stressful and
competitive situations, including financial trading.
[6]
Though people are likely to overestimate their control when the
situations are heavily chance-determined, they also tend to underestimate their control when they actually have it,
which runs contrary to some theories of the illusion and its adaptiveness.
[7]
The illusion might arise because people lack direct introspective insight into whether they are in control of events.
This has been called the introspection illusion. Instead, they may judge their degree of control by a process that is
often unreliable. As a result, they see themselves as responsible for events when there is little or no causal link.
Demonstration
The illusion of control is demonstrated by three converging lines of evidence: 1) laboratory experiments, 2) observed
behavior in familiar games of chance such as lotteries, and 3) self-reports of real-world behavior.
[8]
One kind of laboratory demonstration involves two lights marked "Score" and "No Score". Subjects have to try to control which one lights up. In one version of this experiment, subjects could press either of two buttons.[9] Another version had one button, which subjects decided on each trial to press or not.[10] Subjects had a variable degree of control over the lights, or none at all, depending on how the buttons were connected. The experimenters made clear that there might be no relation between the subjects' actions and the lights.[10] Subjects estimated how much control they had over the lights. These estimates bore no relation to how much control they actually had, but were related to how often the "Score" light lit up. Even when their choices made no difference at all, subjects confidently reported exerting some control over the lights.[10]
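The mismatch between actual and judged control in these button-pressing tasks can be illustrated with a small simulation. This is a sketch, not the original procedure: the function names and the 75% score rate are invented. Actual control is conventionally measured as the contingency ΔP = P(score | press) − P(score | no press), while subjects' ratings instead track the overall frequency of the "Score" light:

```python
import random

def run_trials(p_score_press, p_score_no_press, n=100, seed=0):
    """Simulate n trials: the subject presses on roughly half the trials at random."""
    rng = random.Random(seed)
    presses, scores = [], []
    for _ in range(n):
        press = rng.random() < 0.5
        p = p_score_press if press else p_score_no_press
        presses.append(press)
        scores.append(rng.random() < p)
    return presses, scores

def delta_p(presses, scores):
    """Actual contingency: P(score | press) - P(score | no press)."""
    hits_press = [s for pr, s in zip(presses, scores) if pr]
    hits_nopress = [s for pr, s in zip(presses, scores) if not pr]
    return sum(hits_press) / len(hits_press) - sum(hits_nopress) / len(hits_nopress)

def score_rate(scores):
    """The cue subjects appear to use: overall frequency of the 'Score' light."""
    return sum(scores) / len(scores)

# Zero-contingency condition: the button does nothing, but "Score"
# lights on 75% of trials either way, so frequency-based judgments
# report control where deltaP is essentially zero.
pr, sc = run_trials(0.75, 0.75, n=1000)
print(f"deltaP = {delta_p(pr, sc):+.2f}, score rate = {score_rate(sc):.2f}")
```

Varying `p_score_press` and `p_score_no_press` independently reproduces both the real-control and no-control conditions described above.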
Ellen Langer's research demonstrated that people were more likely to behave as if they could exercise control in a
chance situation where "skill cues" were present.
[11][12]
By skill cues, Langer meant properties of the situation more
normally associated with the exercise of skill, in particular the exercise of choice, competition, familiarity with the
stimulus and involvement in decisions. One simple form of this effect is found in casinos: when rolling dice in a
craps game people tend to throw harder when they need high numbers and softer for low numbers.
[2][13]
In another experiment, subjects had to predict the outcome of thirty coin tosses. The feedback was rigged so that
each subject was right exactly half the time, but the groups differed in where their "hits" occurred. Some were told
that their early guesses were accurate. Others were told that their successes were distributed evenly through the thirty
trials. Afterwards, they were surveyed about their performance. Subjects with early "hits" overestimated their total
successes and had higher expectations of how they would perform on future guessing games.
[2][12]
This result
resembles the irrational primacy effect in which people give greater weight to information that occurs earlier in a
series.
[2]
Forty percent of the subjects believed their performance on this chance task would improve with practice,
and twenty-five percent said that distraction would impair their performance.
[2][12]
Another of Langer's experiments, replicated by other researchers, involves a lottery. Subjects are either given
tickets at random or allowed to choose their own. They can then trade their tickets for others with a higher chance of
paying out. Subjects who had chosen their own ticket were more reluctant to part with it. Tickets bearing familiar
symbols were less likely to be exchanged than others with unfamiliar symbols. Although these lotteries were
random, subjects behaved as though their choice of ticket affected the outcome.
[11][14]
Another way to investigate perceptions of control is to ask people about hypothetical situations, for example their
likelihood of being involved in a motor vehicle accident. On average, drivers regard accidents as much less likely in
"high-control" situations, such as when they are driving, than in "low-control" situations, such as when they are in
the passenger seat. They also rate a high-control accident, such as driving into the car in front, as much less likely
than a low-control accident such as being hit from behind by another driver.
[8][15][16]
Explanations
Ellen Langer, who first demonstrated the illusion of control, explained her findings in terms of a confusion between
skill and chance situations. She proposed that people base their judgments of control on "skill cues". These are
features of a situation that are usually associated with games of skill, such as competitiveness, familiarity and
individual choice. When more of these skill cues are present, the illusion is stronger.
[5][6][17]
Suzanne Thompson and colleagues argued that Langer's explanation was inadequate to explain all the variations in
the effect. As an alternative, they proposed that judgments about control are based on a procedure that they called the
"control heuristic".
[5][18]
This theory proposes that judgments of control depend on two conditions: an intention to
create the outcome, and a relationship between the action and outcome. In games of chance, these two conditions
frequently go together. As well as an intention to win, there is an action, such as throwing a die or pulling a lever on
a slot machine, which is immediately followed by an outcome. Even though the outcome is selected randomly, the
control heuristic would result in the player feeling a degree of control over the outcome.
[17]
Self-regulation theory offers another explanation. To the extent that people are driven by internal goals concerned
with the exercise of control over their environment, they will seek to reassert control in conditions of chaos,
uncertainty or stress. One way of coping with a lack of real control is to falsely attribute control of the situation to oneself.
[6]
The core self-evaluations (CSE) trait is a stable personality trait composed of locus of control, neuroticism,
self-efficacy, and self-esteem.
[19]
While those with high core self-evaluations are likely to believe that they control their own environment (i.e., internal locus of control),[20] very high levels of CSE may lead to the illusion of control.
Benefits and costs to the individual
Taylor and Brown have argued that positive illusions, including the illusion of control, are adaptive as they motivate
people to persist at tasks when they might otherwise give up.
[21]
This position is supported by Albert Bandura's claim
that "optimistic self-appraisals of capability, that are not unduly disparate from what is possible, can be
advantageous, whereas veridical judgements can be self-limiting".
[22]
His argument is essentially concerned with the
adaptive effect of optimistic beliefs about control and performance in circumstances where control is possible, rather
than perceived control in circumstances where outcomes do not depend on an individual's behavior.
Bandura has also suggested that:
"In activities where the margins of error are narrow and missteps can produce costly or injurious
consequences, personal well-being is best served by highly accurate efficacy appraisal."
[23]
Taylor and Brown argue that positive illusions are adaptive, since there is evidence that they are more common in
normally mentally healthy individuals than in depressed individuals. However, Pacini, Muir and Epstein have shown
that this may be because depressed people overcompensate for a tendency toward maladaptive intuitive processing
by exercising excessive rational control in trivial situations, and note that the difference with non-depressed people
disappears in more consequential circumstances.
[24]
There is also empirical evidence that high self-efficacy can be maladaptive in some circumstances. In a
scenario-based study, Whyte et al. showed that participants in whom they had induced high self-efficacy were
significantly more likely to escalate commitment to a failing course of action.
[25]
Knee and Zuckerman have
challenged the definition of mental health used by Taylor and Brown and argue that lack of illusions is associated
with a non-defensive personality oriented towards growth and learning and with low ego involvement in
outcomes.
[26]
They present evidence that self-determined individuals are less prone to these illusions. In the late
1970s, Abramson and Alloy demonstrated that depressed individuals held a more accurate view than their
non-depressed counterparts in a test which measured illusion of control.
[27]
This finding held true even when the
depression was manipulated experimentally. However, when replicating these findings, Msetfi et al. (2005, 2007) found
that the overestimation of control in nondepressed people only showed up when the interval was long enough,
implying that this is because they take more aspects of a situation into account than their depressed
counterparts.
[28][29]
Also, Dykman et al. (1989) showed that depressed people believe they have no control in
situations where they actually do, so their perception is not more accurate overall.
[30]
Allan et al. (2007) proposed that the pessimistic bias of depressives produces "depressive realism" when they are asked to estimate control, because depressed individuals are more likely to say no even if they have control.
[31]
A number of studies have found a link between a sense of control and health, especially in older people.
[32]
Fenton-O'Creevy et al.[6] argue, as do Gollwitzer and Kinney,[33] that while illusory beliefs about control may
promote goal striving, they are not conducive to sound decision-making. Illusions of control may cause insensitivity
to feedback, impede learning and predispose toward greater objective risk taking (since subjective risk will be
reduced by illusion of control).
Applications
Psychologist Daniel Wegner argues that an illusion of control over external events underlies belief in psychokinesis,
a supposed paranormal ability to move objects directly using the mind.
[34]
As evidence, Wegner cites a series of
experiments on magical thinking in which subjects were induced to think they had influenced external events. In one
experiment, subjects watched a basketball player taking a series of free throws. When they were instructed to
visualise him making his shots, they felt that they had contributed to his success.
[35]
One study examined traders working in the City of London's investment banks. They each watched a graph being
plotted on a computer screen, similar to a real-time graph of a stock price or index. Using three computer keys, they
had to raise the value as high as possible. They were warned that the value showed random variations, but that the
keys might have some effect. In fact, the fluctuations were not affected by the keys.
[6][16]
The traders' ratings of their
success measured their susceptibility to the illusion of control. This score was then compared with each trader's
performance. Those who were more prone to the illusion scored significantly lower on analysis, risk management
and contribution to profits. They also earned significantly less.
[6][16][36]
Notes
[1] Thompson 1999, pp. 187, 124
[2] Plous 1993, p. 171
[3] Vyse 1997, pp. 129–130
[4] Thompson 1999, p. 187
[5] Thompson 1999, p. 188
[6] Fenton-O'Creevy, Mark; Nigel Nicholson, Emma Soane, Paul Willman (2003), "Trading on illusions: Unrealistic perceptions of control and trading performance", Journal of Occupational and Organizational Psychology (British Psychological Society) 76: 53–68, doi:10.1348/096317903321208880, ISSN 2044-8325
[7] Gino, Francesca; Zachariah Sharek, Don A. Moore (March 2011). "Keeping the illusion of control under control: Ceilings, floors, and imperfect calibration". Organizational Behavior and Human Decision Processes 114 (2): 104–114. doi:10.1016/j.obhdp.2010.10.002. Retrieved 23 April 2011.
[8] Thompson 2004, p. 116
[9] Jenkins, H. M. & Ward, W. C. (1965). Judgement of contingency between responses and outcomes. Psychological Monographs, 79 (1, Whole No. 79).
[10] Allan, L. G.; Jenkins, H. M. (1980), "The judgment of contingency and the nature of the response alternatives", Canadian Journal of Psychology 34: 1–11
[11] Langer, Ellen J. (1975), "The Illusion of Control", Journal of Personality and Social Psychology 32 (2): 311–328
[12] Langer, Ellen J.; Roth, Jane (1975), "Heads I win, tails it's chance: The illusion of control as a function of the sequence of outcomes in a purely chance task", Journal of Personality and Social Psychology 32 (6): 951–955
[13] Henslin, J. M. (1967), "Craps and magic", American Journal of Sociology 73: 316–330
[14] Thompson 2004, p. 115
[15] McKenna, F. P. (1993), "It won't happen to me: Unrealistic optimism or illusion of control?", British Journal of Psychology (British Psychological Society) 84 (1): 39–50, ISSN 0007-1269
[16] Hardman 2009, pp. 101–103
[17] Thompson 2004, p. 122
[18] Thompson, Suzanne C.; Armstrong, Wade; Thomas, Craig (1998), "Illusions of Control, Underestimations, and Accuracy: A Control Heuristic Explanation", Psychological Bulletin (American Psychological Association) 123 (2): 143–161, doi:10.1037/0033-2909.123.2.143, ISSN 0033-2909, PMID 9522682
[19] Judge, T. A., Locke, E. A., & Durham, C. C. (1997). The dispositional causes of job satisfaction: A core evaluations approach. Research in Organizational Behavior, 19, 151–188.
[20] Judge, T. A., Kammeyer-Mueller, J. D. (2011). Implications of core self-evaluations for a changing organizational context. Human Resource Management Review, 21, 331–341.
[21] Taylor, S. E., & Brown, J. D. (1988). Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin, 103(2), 193–210.
[22] Bandura, A. (1989), "Human Agency in Social Cognitive Theory", American Psychologist 44 (9): 1175–1184
[23] Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W.H. Freeman and Company.
[24] Pacini, R., Muir, F., & Epstein, S. (1998). Depressive realism from the perspective of Cognitive-Experiential Self-Theory. Journal of Personality and Social Psychology, 74(4), 1056–1068.
[25] Whyte, G., Saks, A. & Hook, S. (1997). When success breeds failure: The role of self-efficacy in escalating commitment to a losing course of action. Journal of Organizational Behavior, 18, 415–432.
[26] Knee, C. R., & Zuckerman, M. (1998). A nondefensive personality: Autonomy and control as moderators of defensive coping and self-handicapping. Journal of Research in Personality, 32(2), 115–130.
[27] Abramson, L. Y., & Alloy, L. B. (1980). The judgment of contingency: Errors and their implications. In J. Singer and A. Baum (Eds.), Advances in environmental psychology. Vol. II. New York: Erlbaum.
[28] Msetfi RM, Murphy RA, Simpson J (2007). "Depressive realism and the effect of intertrial interval on judgements of zero, positive, and negative contingencies". The Quarterly Journal of Experimental Psychology 60 (3): 461–481. doi:10.1080/17470210601002595. PMID 17366312.
[29] Msetfi RM, Murphy RA, Simpson J, Kornbrot DE (2005). "Depressive realism and outcome density bias in contingency judgments: the effect of the context and intertrial interval" (http://www.lancs.ac.uk/shm/dhr/publications/janesimpson/depressiverealism.pdf) (PDF). Journal of Experimental Psychology: General 134 (1): 10–22. doi:10.1037/0096-3445.134.1.10. PMID 15702960.
[30] Dykman, B. M., Abramson, L. Y., Alloy, L. B., Hartlage, S. (1989). "Processing of ambiguous and unambiguous feedback by depressed and nondepressed college students: Schematic biases and their implications for depressive realism". Journal of Personality and Social Psychology 56 (3): 431–445. doi:10.1037/0022-3514.56.3.431. PMID 2926638.
[31] Allan LG, Siegel S, Hannah S. (2007). "The sad truth about depressive realism" (http://psych.mcmaster.ca/hannahsd/pubs/AllanSiegelHannah'07.pdf) (PDF). The Quarterly Journal of Experimental Psychology 60 (3): 482–495. doi:10.1080/17470210601002686. PMID 17366313.
[32] Plous 1993, p. 172
[33] Gollwitzer, P. M.; Kinney, R. F. (1989), "Effects of Deliberative and Implemental Mind-Sets On Illusion of Control", Journal of Personality and Social Psychology 56 (4): 531–542
[34] Wegner, Daniel M. (2008), "Self is Magic" (http://isites.harvard.edu/fs/docs/icb.topic67047.files/2_13_07_Wegner.pdf), in John Baer, James C. Kaufman, Roy F. Baumeister, Are we free?: psychology and free will, New York: Oxford University Press, ISBN 978-0-19-518963-6, retrieved 2008-07-02
[35] Pronin, Emily; Daniel M. Wegner, Kimberly McCarthy, Sylvia Rodriguez (2006), "Everyday Magical Powers: The Role of Apparent Mental Causation in the Overestimation of Personal Influence", Journal of Personality and Social Psychology (American Psychological Association) 91 (2): 218–231, doi:10.1037/0022-3514.91.2.218, ISSN 0022-3514, PMID 16881760, retrieved 2009-07-03
[36] Fenton-O'Creevy, M., Nicholson, N., Soane, E. and Willman, P. (2005). Traders: Risks, Decisions, and Management in Financial Markets. ISBN 0-19-926948-3
References
Baron, Jonathan (2000), Thinking and deciding (3rd ed.), New York: Cambridge University Press, ISBN 0-521-65030-5, OCLC 316403966
Hardman, David (2009), Judgment and decision making: psychological perspectives, Wiley-Blackwell, ISBN 978-1-4051-2398-3
Plous, Scott (1993), The Psychology of Judgment and Decision Making, McGraw-Hill, ISBN 978-0-07-050477-6, OCLC 26931106
Thompson, Suzanne C. (1999), "Illusions of Control: How We Overestimate Our Personal Influence", Current Directions in Psychological Science (Association for Psychological Science) 8 (6): 187–190, ISSN 0963-7214, JSTOR 20182602
Thompson, Suzanne C. (2004), "Illusions of control", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 115–125, ISBN 978-1-84169-351-4, OCLC 55124398
Vyse, Stuart A. (1997), Believing in Magic: The Psychology of Superstition, Oxford University Press US, ISBN 0-19-513634-9
Further reading
Fast, Nathanael J.; Gruenfeld, Deborah H.; Sivanathan, Niro; Galinsky, Adam D. (2009). "Illusory Control: A Generative Force Behind Power's Far-Reaching Effects". Psychological Science 20 (4): 502–508. doi:10.1111/j.1467-9280.2009.02311.x. ISSN 0956-7976.
Illusion of validity
The illusion of validity is a cognitive bias described by Amos Tversky and Daniel Kahneman in which consistent
evidence persistently leads to confident predictions even after the predictive value of the evidence has been
discredited.
[1]
Kahneman describes this as the first cognitive bias he discovered: he evaluated officer candidates for the Israeli Defense Forces according to a test that he knew to be nearly worthless, yet he still found it compelling to make strong predictions from it.
[2]
In one study, subjects reported higher confidence in a prediction of the final grade point average of a student after seeing a first-year record of consistent B's than after seeing a first-year record with an equal number of A's and C's.
[3]
Consistent patterns may be observed when input variables are highly redundant or correlated, which may increase
subjective confidence. However, a number of highly correlated inputs should not increase confidence much more
than only one of the inputs; instead higher confidence should be merited when a number of highly independent
inputs show a consistent pattern.
[4]
For example, some studies have shown a high degree of correlation between IQ and SAT scores,[5] so once you know someone's SAT score, knowing their IQ in addition would not add much information and should increase confidence only very little; and, to whatever degree SAT scores correlate with GPA, GPA could be predicted from an IQ score nearly as effectively as from an SAT score.
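This redundancy argument can be illustrated with a toy simulation (all numbers are invented for illustration): if IQ and SAT are modeled as two noisy measures of one latent ability, adding the second, highly correlated predictor barely raises the variance explained in GPA, whereas an independent predictor raises it substantially:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
ability = rng.standard_normal(n)                 # shared latent factor
motivation = rng.standard_normal(n)              # an independent second factor
iq = ability + 0.3 * rng.standard_normal(n)
sat = ability + 0.3 * rng.standard_normal(n)     # corr(iq, sat) ~ 0.9: largely redundant
gpa = 0.5 * ability + 0.5 * motivation + 0.8 * rng.standard_normal(n)

def r_squared(predictors, y):
    """Fraction of variance in y explained by an ordinary least-squares fit."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - np.var(y - X @ beta) / np.var(y)

print("SAT alone:        R^2 =", round(r_squared([sat], gpa), 3))
print("SAT + IQ:         R^2 =", round(r_squared([sat, iq], gpa), 3))          # redundant: tiny gain
print("SAT + motivation: R^2 =", round(r_squared([sat, motivation], gpa), 3))  # independent: large gain
```

The consistency of redundant inputs raises subjective confidence, but as the simulation shows, it adds little predictive validity.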
The illusion of validity may be caused in part by confirmation bias[6] and/or the representativeness heuristic, and could in turn cause the overconfidence effect.[7]
References
Kahneman on YouTube: Explorations of the Mind: Intuition [8]
[1] Tversky, Amos; Daniel Kahneman (1974). "Judgment under uncertainty: Heuristics and biases". Science 185 (4157): 1124–1131. doi:10.1126/science.185.4157.1124. PMID 17835457.
[2] Kahneman, Daniel (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux. pp. 209–211.
[3] Tversky & Kahneman, 1974
[4] Tversky & Kahneman, 1974
[5] Frey, Meredith C.; Douglas K. Detterman (2004). "Scholastic Assessment or g? The relationship between the Scholastic Assessment Test and general cognitive ability". Psychological Science 15 (6): 373–378. doi:10.1111/j.0956-7976.2004.00687.x. PMID 15147489.
[6] Einhorn, Hillel; Robyn M. Hogarth (1978). "Confidence in judgment: Persistence of the illusion of validity". Psychological Review 85 (5): 395–416.
[7] Einhorn & Hogarth, 1978
[8] http://www.youtube.com/watch?v=dddFfRaBPqg
Illusory correlation
Illusory correlation is the phenomenon of seeing a relationship between variables (typically people, events, or behaviors) even when no such relationship exists. A common example of this phenomenon is forming false associations between membership in a statistical minority group and rare (typically negative) behaviors, because novel or deviant stimuli tend to capture attention.[1] This is one way stereotypes form and endure.
David Hamilton and Terrence Rose (1980) found that stereotypes can lead people to expect certain groups and traits
to fit together, and then to overestimate the frequency with which these correlations actually occur.
[2]
History
"Illusory correlation" was originally coined by Chapman and Chapman (1967) to describe people's tendencies to
overestimate relationships between two groups when distinctive and unusual information is presented.
[3][4]
The
concept was used to question claims about objective knowledge in clinical psychology through the Chapmans' refutation of many clinicians' widely used Wheeler signs for homosexuality in Rorschach tests.
[5]
Example
David Hamilton and Robert Gifford (1976) conducted a series of experiments that demonstrated how stereotypic
beliefs regarding minorities could derive from illusory correlation processes.
[6]
To test their hypothesis, Hamilton
and Gifford had research participants read a series of sentences describing either desirable or undesirable behaviors,
which were attributed to either Group A or Group B.
[3]
Abstract groups were used so that no previously established
stereotypes would influence results. Most of the sentences were associated with Group A, and the remaining few
were associated with Group B.
[6]
The following table summarizes the information given.
Behaviors      Group A (Majority)   Group B (Minority)   Total
Desirable      18 (69%)             9 (69%)              27
Undesirable     8 (31%)             4 (31%)              12
Total          26                   13                   39
Each group had the same proportions of positive and negative behaviors, so there was no real association between
behaviors and group membership. Results of the study show that positive, desirable behaviors were not seen as
distinctive so people were accurate in their associations. On the other hand, when distinctive, undesirable behaviors
were represented in the sentences, the participants overestimated how much the minority group exhibited the
behaviors.
[6]
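The null contingency in this design can be checked directly from the table. This is a minimal sketch using the cell counts above; the variable names are illustrative:

```python
from fractions import Fraction

# Stimulus counts from the Hamilton and Gifford design (see the table above).
counts = {
    ("A", "desirable"): 18, ("A", "undesirable"): 8,
    ("B", "desirable"): 9,  ("B", "undesirable"): 4,
}

def p_undesirable(group):
    """Base rate of undesirable behavior within one group, as an exact fraction."""
    total = counts[(group, "desirable")] + counts[(group, "undesirable")]
    return Fraction(counts[(group, "undesirable")], total)

# The two groups have identical base rates of undesirable behavior...
assert p_undesirable("A") == p_undesirable("B") == Fraction(4, 13)

# ...so the true contingency between group membership and behavior is zero:
delta_p = p_undesirable("B") - p_undesirable("A")
print("deltaP =", delta_p)

# Yet the doubly distinctive pairing (minority group + undesirable behavior)
# is the rarest cell, and availability-based judgment inflates it.
print("rarest cell:", min(counts, key=counts.get))
```

Running the sketch confirms ΔP = 0 and that ("B", "undesirable") is the least frequent, most distinctive pairing.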
A parallel effect occurs when people judge whether two events, such as pain and bad weather, are correlated. They
rely heavily on the relatively small number of cases where the two events occur together. People pay relatively little
attention to the other kinds of observation (of no pain and/or good weather).
[7][8]
Theories
General theory
Most explanations for illusory correlation involve psychological heuristics: information processing short-cuts that
underlie many human judgments.
[9]
One of these is availability: the ease with which an idea comes to mind.
Availability is often used to estimate how likely an event is or how often it occurs.
[10]
This can result in illusory
correlation, because some pairings can come easily and vividly to mind even though they are not especially
frequent.
[9]
Information processing
Martin Hilbert (2012) proposes an information processing mechanism that assumes a noisy conversion of objective
observations into subjective judgments. The theory defines noise as the mixing of these observations during retrieval
from memory.
[11]
According to the model, subjective judgments amount to objective observations blended with noise, which can lead to overconfidence or to what is known as conservatism bias: when asked about the behaviors, participants underestimate the majority (larger) group and overestimate the minority (smaller) group. These distorted estimates are illusory correlations.
Working-memory capacity
In an experimental study done by Eder, Fiedler and Hamm-Eder (2011), the effects of working-memory capacity on
illusory correlations were investigated. They first looked at the individual differences in working memory, and then
looked to see if that had any effect on the formation of illusory correlations. They found that individuals with higher
working memory capacity viewed minority group members more positively than individuals with lower working
memory capacity. In a second experiment, the authors looked into the effects of memory load in working memory on
illusory correlations. They found that increased memory load in working memory led to an increase in the
prevalence of illusory correlations. The experiment was designed to specifically test working memory and not
substantial stimulus memory. This means that the development of illusory correlations was caused by deficiencies in
central cognitive resources caused by the load in working memory, not selective recall.
[12]
Attention theory of learning
Attention theory of learning proposes that features of majority groups are learned first, and then features of minority
groups. This results in an attempt to distinguish the minority group from the majority, leading to these differences
being learned more quickly. The Attention theory also argues that, instead of forming one stereotype regarding the
minority group, two stereotypes, one for the majority and one for the minority, are formed.
[13]
Learning effects on illusory correlations
A study by Murphy et al. (2011) was conducted to investigate whether increased learning would have any effect on
illusory correlations. It was found that educating people about how illusory correlation occurs resulted in a decreased
incidence of illusory correlations.
[14]
Age
Johnson and Jacobs (2003) performed an experiment to see how early in life individuals begin forming illusory
correlations. Children in grades 2 and 5 were exposed to a typical illusory correlation paradigm to see if negative
attributes were associated with the minority group. The authors found that both groups formed illusory
correlations.
[15]
A study performed by Primi and Agnoli (2002) found that children also create illusory correlations. In their experiment, children in grades 1, 3, 5, and 7, and adults all looked at the same illusory correlation paradigm. The study
found that children did create significant illusory correlations, but those correlations were weaker than those created
by adults. In a second study, groups of shapes with different colors were used. The formation of illusory correlation
persisted, showing that social stimuli are not necessary for creating these correlations.
[16]
Explicit versus implicit attitudes
Two studies performed by Ratliff and Nosek examined whether or not explicit and implicit attitudes affected illusory
correlations. In one study, Ratliff and Nosek had two groups: one a majority and the other a minority. They then had
three groups of participants, all with readings about the two groups. One group of participants received
overwhelming pro-majority readings, one was given pro-minority readings, and one received neutral readings. The
groups that had pro-majority and pro-minority readings favored their respective pro groups both explicitly and
implicitly. The group that had neutral readings favored the majority explicitly, but not implicitly. The second study
was similar, but instead of readings, pictures of behaviors were shown, and the participants wrote a sentence
describing the behavior they saw in the pictures presented. The findings of both studies supported the authors'
argument that the differences found between the explicit and implicit attitudes are a result of the interpretation of the
covariation and making judgments based on these interpretations (explicit) instead of just accounting for the
covariation (implicit).
[17]
Paradigm structure
Berndsen et al. (1999) wanted to determine if the structure of testing for illusory correlations could lead to the
formation of illusory correlations. The hypothesis was that identifying test variables as Group A and Group B might
be causing the participants to look for differences between the groups, resulting in the creation of illusory
correlations. An experiment was set up where one set of participants were told the groups were Group A and Group
B, while another set of participants were given groups labeled as students who graduated in 1993 or 1994. This study
found that illusory correlations were more likely to be created when the groups were Group A and B, as compared to
students of the class of 1993 or the class of 1994.
[18]
References
Notes
[1] Pelham, Brett (2007). Conducting Research in Psychology: Measuring the Weight of Smoke. Belmont, CA: Wadsworth Publishing. ISBN 0-534-53294-2. OCLC 70836619.
[2] "Stereotypes" (http://www.colorado.edu/conflict/peace/problem/stereoty.htm).
[3] Whitley & Kite 2010
[4] Chapman, L (1967). "Illusory correlation in observational report". Journal of Verbal Learning and Verbal Behavior 6 (1): 151–155. doi:10.1016/S0022-5371(67)80066-5. ISSN 0022-5371.
[5] Chapman, Loren J.; Chapman, Jean P. (1969). "Illusory Correlation as an Obstacle to the Use of Valid Psychodiagnostic Signs". Journal of Abnormal Psychology 74 (3): 271–280.
[6] Hamilton, D; Gifford, R (1976). "Illusory correlation in interpersonal perception: A cognitive basis of stereotypic judgments". Journal of Experimental Social Psychology 12 (4): 392–407. doi:10.1016/S0022-1031(76)80006-6. ISSN 0022-1031.
[7] Kunda 1999, pp. 127–130
[8] Plous 1993, pp. 162–164
[9] Plous 1993, pp. 164–167
[10] Plous 1993, p. 121
[11] Hilbert, Martin (2012). "Toward a synthesis of cognitive biases: How noisy information processing can bias human decision making" (http://psycnet.apa.org/psycinfo/2011-27261-001). Psychological Bulletin 138 (2): 211–237; free access to the study here: martinhilbert.net/HilbertPsychBull.pdf
[12] Eder, Andreas B.; Fiedler, Klaus & Hamm-Eder, Silke (2011). "Illusory correlations revisited: The role of pseudocontingencies and working-memory capacity". The Quarterly Journal of Experimental Psychology 64 (3): 517–532. doi:10.1080/17470218.2010.509917.
[13] Sherman, Jeffrey; Kruschke, Sherman, Percy, Petrocelli and Conrey (2009). "Attentional processes in stereotype formation: A common model for category accentuation and illusory correlation". Journal of Personality and Social Psychology 96 (2): 305–323. doi:10.1037/a0013778.
[14] Murphy, Robin; Schmeer, Stefanie; Frederic Vallee-Tourangeau; Esther Mondragon; Denis Hilton (2011). "Making the illusory correlation effect appear and then disappear: The effects of increased learning". The Quarterly Journal of Experimental Psychology 64: 24–40. doi:10.1080/17470218.2010.493615.
[15] Johnston, Kristen E.; Jacobs, J. E. (2003). "Children's Illusory Correlations: The role of attentional bias in group impression formation". Journal of Cognition and Development 4 (2): 129–160.
[16] Primi, Caterina; Agnoli (2002). "Children correlate infrequent behaviors with minority groups: a case of illusory correlation". Cognitive Development 17: 1105–1131.
[17] Ratliff, Kate A.; Nosek, Brian A. (2010). "Creating distinct implicit and explicit attitudes with an illusory correlation paradigm". Journal of Experimental Social Psychology 46: 721–728. doi:10.1016/j.jesp.2010.04.011.
[18] Berndsen, Mariette; Spears, and van der Pligt (1999). "Determinants of intergroup differentiation in the illusory correlation task". British Journal of Psychology 90: 201–220.
Sources
Kunda, Ziva (1999). Social Cognition: Making Sense of People. MIT Press. ISBN 978-0-262-61143-5. OCLC 40618974.
Plous, Scott (1993). The Psychology of Judgment and Decision Making. McGraw-Hill. ISBN 978-0-07-050477-6. OCLC 26931106.
Whitley, Bernard E.; Kite, Mary E. (2010). The Psychology of Prejudice and Discrimination. Belmont, CA: Wadsworth. ISBN 978-0-495-59964-7. OCLC 695689517.
Information bias (psychology)
Information bias is a type of cognitive bias that involves a distorted evaluation of information. Information bias occurs due to people's curiosity and confusion of goals when trying to choose a course of action.
Over-evaluation of information
An example of information bias is believing that the more information that can be acquired to make a decision, the
better, even if that extra information is irrelevant for the decision.
Examples of information bias are prevalent in medical diagnosis. Subjects in experiments concerning medical
diagnostic problems show an information bias in which they seek information that is unnecessary in deciding the
course of treatment.
Globoma experiment
In an experiment,[1] subjects considered this diagnostic problem involving fictitious diseases:
A female patient is presenting symptoms and a history which both suggest a diagnosis of globoma, with about 80%
probability. If it isn't globoma, it's either popitis or flapemia. Each disease has its own treatment which is ineffective
against the other two diseases. A test called the ET scan would certainly yield a positive result if the patient has popitis, and a negative result if she has flapemia. If the patient has globoma, positive and negative results are equally likely. If the ET scan were the only test you could do, should you do it? Why or why not?
Many subjects answered that they would conduct the ET scan even if it were costly, and even if it were the only test
that could be done. However, the test result cannot change which treatment should be given. Because the prior probability of globoma is 80%, the patient would be treated for globoma no matter what the test says: globoma is the most probable disease both before and after the ET scan.
In this example, we can calculate the value of the ET scan. Out of 100 patients, 80 will have globoma. Since a patient with globoma is equally likely to have a positive or a negative result, 40 of these 80 will have a positive ET scan and 40 a negative one. The remaining 20 patients will have either popitis or flapemia, regardless of the result of the ET scan. Whether the scan comes back positive or negative, patients with globoma therefore outnumber patients with either other disease, so the scan would indicate treating globoma regardless of its result and is useless for deciding which disease to treat.
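The counting argument above is equivalent to a Bayesian update. The sketch below is illustrative only: the problem does not say how the remaining 20% splits between popitis and flapemia, so an even 10%/10% split is assumed here.

```python
# Posterior over the three fictitious diseases after an ET scan result.
# Priors: globoma 0.80; the remaining 0.20 is ASSUMED to split evenly
# between popitis and flapemia (the problem does not specify the split).
prior = {"globoma": 0.80, "popitis": 0.10, "flapemia": 0.10}

# Probability of a POSITIVE ET scan under each disease, as stated:
# certain for popitis, impossible for flapemia, a coin flip for globoma.
p_positive = {"globoma": 0.5, "popitis": 1.0, "flapemia": 0.0}

def posterior(result):
    """Bayes' rule: P(disease | result) is proportional to
    P(result | disease) * P(disease)."""
    likelihood = (p_positive if result == "positive"
                  else {d: 1.0 - p for d, p in p_positive.items()})
    unnormalized = {d: prior[d] * likelihood[d] for d in prior}
    total = sum(unnormalized.values())
    return {d: v / total for d, v in unnormalized.items()}

for result in ("positive", "negative"):
    post = posterior(result)
    print(result, {d: round(p, 2) for d, p in post.items()})
# Either way, P(globoma | result) = 0.8, so the treatment decision
# never changes and the scan has no value for choosing a therapy.
```

Under these assumptions globoma stays at 80% probability after either result, which is why the scan cannot alter the treatment.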
References
[1] Baron, J. (1988, 1994, 2000). Thinking and Deciding. Cambridge University Press. (http://www.amazon.com/dp/0521659728)
Insensitivity to sample size
Insensitivity to sample size is a cognitive bias that occurs when people judge the probability of obtaining a sample
statistic without respect to the sample size. For example, in one study subjects assigned the same probability to the
likelihood of obtaining a mean height of above six feet [183 cm] in samples of 10, 100, and 1,000 men. In other
words, variation is more likely in smaller samples, but people may not expect this.[1]
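A quick simulation makes the height example concrete. The height distribution used below (mean 5.9 ft, standard deviation 0.25 ft) is an assumed, purely illustrative choice, not a figure from the study; the point is only that the chance of an extreme sample mean shrinks as the sample grows.

```python
import random

random.seed(0)

def prob_mean_above_six_feet(n, trials=5_000):
    """Monte Carlo estimate of the probability that the mean height of a
    random sample of n men exceeds 6 ft, with individual heights drawn
    from Normal(5.9, 0.25) -- assumed, illustrative parameters."""
    hits = 0
    for _ in range(trials):
        sample_mean = sum(random.gauss(5.9, 0.25) for _ in range(n)) / n
        hits += sample_mean > 6.0
    return hits / trials

estimates = {n: prob_mean_above_six_feet(n) for n in (10, 100, 1000)}
for n, p in estimates.items():
    print(f"n = {n:4d}: P(mean > 6 ft) = {p:.4f}")
# The probability drops sharply as n grows, contrary to the subjects'
# intuition that it is the same for all three sample ssizes.
```

Because the standard error of the mean falls as 1/sqrt(n), the same threshold that is routinely exceeded by samples of 10 is almost never exceeded by samples of 1,000.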
In another example, Amos Tversky and Daniel Kahneman asked subjects
A certain town is served by two hospitals. In the larger hospital about 45 babies are born each day, and
in the smaller hospital about 15 babies are born each day. As you know, about 50% of all babies are
boys. However, the exact percentage varies from day to day. Sometimes it may be higher than 50%,
sometimes lower.
For a period of 1 year, each hospital recorded the days on which more than 60% of the babies born were
boys. Which hospital do you think recorded more such days?
1. The larger hospital
2. The smaller hospital
3. About the same (that is, within 5% of each other)[1]
56% of subjects chose option 3, while 22% each chose options 1 and 2. However, according to
sampling theory the larger hospital is much more likely to report a sex ratio close to 50% on a given day than the
smaller hospital (see the law of large numbers).
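The sampling-theory claim can be checked directly with the binomial distribution (assuming independent births with probability 0.5 of a boy): a day with more than 60% boys means at least 10 of 15 births in the small hospital, but at least 28 of 45 in the large one.

```python
from math import comb

def prob_more_than_60pct_boys(n, p=0.5):
    """P(strictly more than 60% of n independent births are boys),
    summing the Binomial(n, p) upper tail."""
    cutoff = (6 * n) // 10  # largest boy-count not exceeding 60% of n
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(cutoff + 1, n + 1))

small = prob_more_than_60pct_boys(15)   # ~15 births/day
large = prob_more_than_60pct_boys(45)   # ~45 births/day
print(f"small hospital: {small:.3f}, large hospital: {large:.3f}")
# The small hospital is roughly twice as likely to record such a day,
# so option 2 is correct, not option 3.
```

With 15 births the tail probability is about 0.15 per day, versus roughly 0.07 with 45 births, so over a year the smaller hospital records far more such days.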
Relative neglect of sample size was also observed in a study of statistically sophisticated psychologists.[2]
Tversky and Kahneman explained these results as being caused by the representativeness heuristic, according to which people intuitively judge samples as having properties similar to their population, without taking other considerations into account. A related bias is the clustering illusion, in which people under-expect streaks or runs in small samples. Insensitivity to sample size is a subtype of extension neglect.[3]
References
[1] Tversky, Amos; Daniel Kahneman (1974). "Judgment under uncertainty: Heuristics and biases". Science 185 (4157): 1124–1131. doi:10.1126/science.185.4157.1124. PMID 17835457.
[2] Tversky, Amos; Daniel Kahneman (1971). "Belief in the law of small numbers". Psychological Bulletin 76 (2): 105–110. doi:10.1037/h0031322.
[3] Kahneman, Daniel (2000). "Evaluation by moments, past and future". In Daniel Kahneman and Amos Tversky (Eds.). Choices, Values and Frames. p. 708.
Just-world hypothesis
The just-world hypothesis (or just-world fallacy) is the cognitive bias that human actions eventually yield morally
fair and fitting consequences, so that, ultimately, noble actions are duly rewarded and evil actions are duly punished.
In other words, the just-world hypothesis is the tendency to attribute consequences to, or expect consequences as the
result of, an unspecified power that restores moral balance; the fallacy is that this implies (often unintentionally) the
existence of such a power in terms of some cosmic force of justice, desert, stability, or order in the universe.
The fallacy popularly appears in the English language in various figures of speech, which often imply a negative
reprisal of justice, such as: "You got what was coming to you," "What goes around comes around," and "You reap
what you sow." This fallacy has been widely studied by social psychologists since Melvin J. Lerner conducted seminal work on the belief in a just world in the early 1960s.[1] Since that time, research has continued, examining the predictive capacity of the hypothesis in various situations and across cultures, and clarifying and expanding the theoretical understandings of just world beliefs.[2]
Emergence
The phenomenon of belief in a just world has been observed and considered by many philosophers and social
theorists. Psychologist Melvin Lerner's work made the just world hypothesis a focus of social psychological
research.
Melvin Lerner
Melvin Lerner was prompted to study justice beliefs and the just world hypothesis in the context of social psychological inquiry into negative social and societal interactions.[3] Lerner saw his work as extending Stanley Milgram's work on obedience. He sought to answer the questions of how regimes that cause cruelty and suffering maintain popular support, and how people come to accept social norms and laws that produce misery and suffering.[4]
Lerner's inquiry was influenced by repeatedly witnessing the tendency of observers to blame victims for their suffering. During his clinical training as a psychologist, he observed the treatment of mentally ill persons by the health care practitioners with whom he worked. Though he knew them to be kindhearted, educated people, they blamed patients for their own suffering.[5] He also describes his surprise at hearing his students derogate the poor, seemingly oblivious to the structural forces that contribute to poverty.[3] In a study on rewards, he observed that when one of two men was chosen at random to receive a reward for a task, observers' evaluations were more positive for the man who had been randomly rewarded than for the man who did not receive a reward.[6][7] Existing social psychological theories, including cognitive dissonance, could not fully explain these phenomena.[7] The desire to understand the processes that caused these observed phenomena led Lerner to conduct his first experiments on what is now called the just world hypothesis.
Early evidence
In 1966, Lerner and his colleagues began a series of experiments that used shock paradigms to investigate observer
responses to victimization. In the first of these experiments conducted at the University of Kansas, 72 female
subjects were made to watch a confederate receiving electrical shocks under a variety of conditions. Initially,
subjects were upset by observing the apparent suffering of the confederate. However, as the suffering continued and
observers remained unable to intervene, the observers began to derogate the victim. Derogation was greater when the
observed suffering from shock treatments was greater. However, under conditions in which subjects were told that
the victim would receive compensation for her suffering, subjects did not derogate the victim.[4] Lerner and colleagues replicated these findings in subsequent studies, as did other researchers.[6]
Theory
To explain the findings of these studies, Lerner theorized the prevalence of the belief in a just world. A just world is
one in which actions and conditions have predictable, appropriate consequences. These actions and conditions are
typically individuals' behaviors or attributes. The specific conditions that correspond to certain consequences are
socially determined by the norms and ideologies of a society. Lerner presents the belief in a just world as functional:
it maintains the idea that one can impact the world in a predictable way. Belief in a just world functions as a sort of
"contract" with the world regarding the consequences of behavior. This allows people to plan for the future and
engage in effective, goal-driven behavior. Lerner summarized his findings and his theoretical work in his 1980 monograph The Belief in a Just World: A Fundamental Delusion.[5]
Lerner hypothesized that the belief in a just world is crucially important for people to maintain for their own
well-being. However, people are confronted daily with evidence that the world is not just: people suffer without
apparent cause. Lerner explained that people use strategies to eliminate threats to their belief in a just world. These
strategies can be rational or irrational. Rational strategies include accepting the reality of injustice, trying to prevent
injustice or provide restitution, and accepting one's own limitations. Non-rational strategies include denial or
withdrawal, and reinterpretation of the event.
There are a few modes of reinterpretation that could make an event fit the belief in a just world. One can reinterpret
the outcome, the cause, and/or the character of the victim. In the case of observing the injustice of the suffering of
innocent others, one major way to rearrange the cognition of an event is to interpret the victim of suffering as
deserving of that suffering.[1] Specifically, observers can blame victims for their suffering on the basis of their behaviors and/or their characteristics. This would result in observers both derogating victims and blaming victims for their own suffering.[6] Much psychological research on the belief in a just world has focused on these negative social phenomena of victim blaming and victim derogation in different contexts.[2]
An additional effect of this thinking is that individuals experience less personal vulnerability because they do not believe they have done anything to deserve or cause negative outcomes.[2] This is related to the self-serving bias observed by social psychologists.[8]
Many researchers have interpreted just world beliefs as an example of causal attribution. In victim blaming, the causes of victimization are attributed to an individual rather than to a situation. Thus, the consequences of belief in a just world may be related to or explained in terms of particular patterns of causal attribution.[9]
Alternatives
Veridical judgment
Others have suggested alternative explanations for the derogation of victims. One suggestion is that derogation
effects are based on accurate judgments of a victim's character. In particular, in relation to Lerner's first studies, some
have hypothesized that it would be logical for observers to derogate an individual who would allow herself to be shocked without reason.[10] A subsequent study by Lerner challenged this alternative hypothesis by showing that individuals are only derogated when they actually suffer; individuals who agreed to undergo suffering but did not were viewed positively.[11]
Guilt reduction
Another alternative explanation offered for the derogation of victims early in the development of the just world
hypothesis is that observers derogate victims to reduce their own feelings of guilt. Observers may feel responsible, or
guilty, for a victim's suffering if they themselves are involved in the situation or experiment. In order to reduce the
guilt, they may devalue the victim.[12][13][14] Lerner and colleagues claim that there has not been adequate evidence to support this interpretation. They conducted one study that found derogation of victims occurred even by observers who were not implicated in the process of the experiment and thus had no reason to feel guilty.[6]
Additional evidence
Following Lerner's first studies, other researchers replicated these findings in other settings in which individuals are
victimized. This work, which began in the 1970s and continues today, has investigated how observers react to
victims of random calamities, like traffic accidents, as well as rape and domestic violence, illnesses, and poverty.[1] Generally, researchers have found that observers of the suffering of innocent victims tend to both derogate victims and blame them for their suffering. Thus, observers maintain their belief in a just world by changing their cognitions about the character of victims.[15]
In the early 1970s, social psychologists Zick Rubin and Letitia Anne Peplau developed a measure of belief in a just world.[16] This measure and its revised form published in 1975 allowed for the study of individual differences in just world beliefs.[17] Much of the subsequent research on the just world hypothesis utilized these measurement scales.
Violence
Researchers have looked at how observers react to victims of rape and other violence. In a formative experiment on
rape and belief in a just world by Linda Carli and colleagues, researchers gave two groups of subjects a narrative
about interactions between a man and a woman. The description of the interaction was the same until the end; one
group received a narrative that had a neutral ending and the other group received a narrative that ended with the man
raping the woman. Subjects judged the rape ending as inevitable and blamed the woman in the narrative for the rape
on the basis of her behavior, but not her characteristics.[18] These findings have been replicated repeatedly, including using a rape ending and a 'happy ending' (a marriage proposal).[19][2]
Other researchers have found a similar phenomenon for judgments of battered partners. One study found that observers' blaming of female victims of relationship violence increases with the intimacy of the relationship. Observers blamed the perpetrator only in the most significant case of violence, in which a male struck an acquaintance.[20]
Bullying
Researchers have employed the just world hypothesis to help understand bullying. Given other research on beliefs in
a just world, it would be expected that observers would derogate and blame victims of bullying. However, the
opposite has been found: individuals high in just world belief have stronger anti-bullying attitudes.[21] Other researchers have found that strong belief in a just world is associated with lower levels of bullying behavior.[22] This finding is in keeping with Lerner's understanding of belief in a just world as functioning as a "contract" that governs behavior.[5] There is additional evidence that belief in a just world is protective of the well-being of children and adolescents in the school environment,[23] as has been shown for the general population.
Illness
Other researchers have found that observers judge sick people as responsible for their illnesses. One experiment
showed that persons suffering from a variety of illnesses were derogated on a measure of attractiveness more so than
healthy individuals were. Victim derogation was found to be higher for those suffering from more severe illnesses,
except in the case of cancer victims.[24] Many studies have looked at derogation of AIDS victims specifically. Higher beliefs in a just world have been found to be related to greater derogation of AIDS victims.[25]
Poverty
More recently, researchers have explored how people react to poverty through the lens of the just world hypothesis.
High belief in a just world is associated with blaming the poor, and low belief in a just world is associated with
identifying external causes of poverty, including world economic systems, war, and exploitation.[26][27]
The self as victim
Some research on belief in a just world has examined how people react when they themselves are victimized. An
early paper by researcher Ronnie Janoff-Bulman found that rape victims often engage in blaming their own
behaviors, but not their own characteristics, for their victimization.[28] It was hypothesized that this may be because blaming one's own behaviors makes an event seem more controllable.
These studies on victims of violence, illness, and poverty, and others like them, have provided consistent support for the link between observers' just world beliefs and their tendency to blame victims for their suffering.[1] As a result, the just-world hypothesis has become widely accepted as a psychological phenomenon.
Theoretical refinement
Subsequent work on measuring belief in a just world has focused on identifying multiple dimensions of the belief.
This work has resulted in the development of new measures of just world belief and additional research.[2] Hypothesized dimensions of just world beliefs include belief in an unjust world,[29] beliefs in immanent justice and ultimate justice,[30] hope for justice, and belief in one's ability to reduce injustices.[31] Other work has focused on looking at the different domains in which the belief may function; individuals may have different just world beliefs for the personal domain, the sociopolitical domain, the social domain, etc.[25] An especially fruitful distinction is between the belief in a just world for the self (personal) and the belief in a just world for others (general). These distinct beliefs are differentially associated with health.[32]
Correlates
Researchers have used measures of belief in a just world to look at correlates of high and low levels of belief in a just
world.
Limited studies have examined ideological correlates of the belief in a just world. These studies have found sociopolitical correlates of just world beliefs, including right-wing authoritarianism and the Protestant work ethic.[33][34] Studies have also found belief in a just world to be correlated with aspects of religiousness.[35][36]
Studies of demographic differences, including gender and racial differences, have not shown systematic differences, but do suggest racial differences, with Black and African Americans having the lowest levels of belief in a just world.[37][38]
The development of measures of just world beliefs has also allowed researchers to assess cross-cultural differences in just world beliefs. Much of the research conducted shows that beliefs in a just world are evident cross-culturally. One study tested beliefs in a just world of students in 12 countries. This study found that in countries where the majority of inhabitants are powerless, belief in a just world tends to be weaker than in other countries.[39] This supports the theory of the just-world hypothesis, because the powerless have had more personal and societal experiences that have provided evidence that the world is not just and predictable.[40]
Current research
Positive mental health effects
Though much of the initial work on belief in a just world focused on its negative social effects, other research suggests that belief in a just world is good, and even necessary, for the mental health of individuals.[41] Belief in a just world is associated with greater life satisfaction and well-being and less depressive affect.[32][42] Researchers are actively exploring the reasons that belief in a just world might have these relationships to mental health; it has been suggested that such beliefs could be a personal resource or coping strategy that buffers stress associated with daily life and with traumatic events.[43] This hypothesis suggests that belief in a just world can be understood as a positive illusion.[44]
Correlational studies have also shown that beliefs in a just world are correlated with internal locus of control.[17] Strong belief in a just world is associated with greater acceptance of, and less dissatisfaction with, the negative events in one's life.[43] This may be one pathway through which belief in a just world affects mental health. Others have suggested that this relationship holds only for beliefs in a just world that apply to the self. Beliefs in a just world that apply to others are related instead to the negative social phenomena of victim blaming and victim derogation observed in other studies.[45]
International research
Over forty years after Lerner's seminal work on belief in a just world, researchers continue to study the phenomenon.
Work continues primarily in the United States, Europe, Australia, and Asia.[7] Researchers in Germany have contributed disproportionately to recent research.[3] Their work resulted in a volume, edited by Lerner and a German researcher, entitled Responses to Victimizations and Belief in a Just World.[46]
References
[1] Lerner, M.J. & Montada, L. (1998). An Overview: Advances in Belief in a Just World Theory and Methods, in Leo Montada & M.J. Lerner (Eds.). Responses to Victimizations and Belief in a Just World (pp. 1–7). Plenum Press: New York.
[2] Furnham, A. (2003). Belief in a just world: research progress over the past decade. Personality and Individual Differences, 34, 795–817.
[3] Montada, L. & Lerner, M.J. (1998). Preface, in Leo Montada & M.J. Lerner (Eds.). Responses to Victimizations and Belief in a Just World (pp. vii–viii). Plenum Press: New York.
[4] Lerner, M. J., & Simmons, C. H. (1966). Observers' reaction to the innocent victim: Compassion or rejection? Journal of Personality and Social Psychology, 4(2), 203–210.
[5] Lerner (1980). The Belief in a Just World: A Fundamental Delusion. Plenum: New York.
[6] Lerner, M. J., & Miller, D. T. (1978). Just world research and the attribution process: Looking back and ahead. Psychological Bulletin, 85(5), 1030–1051.
[7] Maes, J. (1998). Eight Stages in the Development of Research on the Construct of BJW?, in Leo Montada & M.J. Lerner (Eds.). Responses to Victimizations and Belief in a Just World (pp. 163–185). Plenum Press: New York.
[8] Linden, M. & Maercker, A. (2011). Embitterment: Societal, psychological, and clinical perspectives. Wien: Springer.
[9] Howard, J. (1984). Societal influences on attribution: Blaming some victims more than others. Journal of Personality and Social Psychology, 47(3), 494–505.
[10] Godfrey, B. & Lowe, C. (1975). Devaluation of innocent victims: An attribution analysis within the just world paradigm. Journal of Personality and Social Psychology, 31, 944–951.
[11] Lerner, M.J. (1970). The desire for justice and reactions to victims. In J. Macaulay & L. Berkowitz (Eds.), Altruism and helping behavior (pp. 205–229). New York: Academic Press.
[12] Davis, K. & Jones, E. (1960). Changes in interpersonal perception as a means of reducing cognitive dissonance. Journal of Abnormal and Social Psychology, 61, 402–410.
[13] Glass, D. (1964). Changes in liking as a means of reducing cognitive discrepancies between self-esteem and aggression. Journal of Personality, 32, 531–549.
[14] Cialdini, R. B., Kenrick, D. T., & Hoerig, J. H. (1976). Victim derogation in the Lerner paradigm: Just world or just justification? Journal of Personality and Social Psychology, 33(6), 719–724.
[15] Reichle, B., Schneider, A., & Montada, L. (1998). How do observers of victimization preserve their belief in a just world cognitively or actionally? In L. Montada & M. J. Lerner (Eds.), Responses to victimization and belief in a just world (pp. 55–86). New York: Plenum.
[16] Rubin, Z. & Peplau, A. (1973). Belief in a just world and reactions to another's lot: A study of participants in the national draft lottery. Journal of Social Issues, 29, 73–93.
[17] Rubin, Z. & Peplau, L.A. (1975). Who believes in a just world? Journal of Social Issues, 31, 65–89.
[18] Janoff-Bulman, R., Timko, C., & Carli, L. L. (1985). Cognitive biases in blaming the victim. Journal of Experimental Social Psychology, 21(2), 161–177.
[19] Carli, L. L. (1999). Cognitive Reconstruction, Hindsight, and Reactions to Victims and Perpetrators. Personality and Social Psychology Bulletin, 25(8), 966–979.
[20] Summers, G., & Feldman, N. S. (1984). Blaming the victim versus blaming the perpetrator: An attributional analysis of spouse abuse. Symposium A Quarterly Journal In Modern Foreign Literatures, 2(4), 339–347.
[21] Fox, C. L., Elder, T., Gater, J., & Johnson, E. (2010). The association between adolescents' beliefs in a just world and their attitudes to victims of bullying. The British Journal of Educational Psychology, 80(Pt 2), 183–198.
[22] Correia, I., & Dalbert, C. (2008). School Bullying. European Psychologist, 13(4), 248–254.
[23] Correia, I., Kamble, S. V., & Dalbert, C. (2009). Belief in a just world and well-being of bullies, victims and defenders: a study with Portuguese and Indian students. Anxiety, Stress, and Coping, 22(5), 497–508.
[24] Gruman, J. C., & Sloan, R. P. (1983). Disease as Justice: Perceptions of the Victims of Physical Illness. Basic and Applied Social Psychology, 4(1), 39–46.
[25] Furnham, A. & Procter, E. (1992). Sphere-specific just world beliefs and attitudes to AIDS. Human Relations, 45, 265–280.
[26] Harper, D. J., Wagstaff, G. F., Newton, J. T., & Harrison, K. R. (1990). Lay causal perceptions of third world poverty and the just world theory. Social Behavior and Personality: An International Journal, 18(2), 235–238. Scientific Journal Publishers.
[27] Harper, D. J., & Manasse, P. R. (1992). The Just World and the Third World: British explanations for poverty abroad. The Journal of Social Psychology, 6. Heldref Publications.
[28] Janoff-Bulman, R. (1979). Characterological versus behavioral self-blame: inquiries into depression and rape. Journal of Personality and Social Psychology, 37(10), 1798–1809.
[29] Dalbert, C., Lipkus, I. M., Sallay, H., & Goch, I. (2001). A just and unjust world: Structure and validity of different world beliefs. Personality and Individual Differences, 30, 561–577.
[30] Maes, J. (1998). Immanent justice and ultimate justice: two ways of believing in justice. In L. Montada, & M. Lerner (Eds.), Responses to victimization and belief in a just world (pp. 9–40). New York: Plenum Press.
[31] Mohiyeddini, C., & Montada, L. (1998). BJW and self-efficacy in coping with observed victimization. In L. Montada, & M. Lerner (Eds.), Responses to victimizations and belief in the just world (pp. 43–53). New York: Plenum.
[32] Lipkus, I. M., Dalbert, C., & Siegler, I. C. (1996). The Importance of Distinguishing the Belief in a Just World for Self Versus for Others: Implications for Psychological Well-Being. Personality and Social Psychology Bulletin, 22(7), 666–677.
[33] Lambert, A. J., Burroughs, T., & Nguyen, T. (1999). Perceptions of risk and the buffering hypothesis: The role of just world beliefs and right-wing authoritarianism. Personality and Social Psychology Bulletin, 25(6), 643–656.
[34] Furnham, A. & Procter, E. (1989). Belief in a just world: review and critique of the individual difference literature. British Journal of Social Psychology, 28, 365–384.
[35] Begue, L. (2002). Beliefs in justice and faith in people: just world, religiosity and interpersonal trust. Personality and Individual Differences, 32(3), 375–382.
[36] Kurst, J., Bjorck, J., & Tan, S. (2000). Causal attributions for uncontrollable negative events. Journal of Psychology and Christianity, 19, 47–60.
[37] Calhoun, L., & Cann, A. (1994). Differences in assumptions about a just world: ethnicity and point of view. Journal of Social Psychology, 134, 765–770.
[38] Hunt, M. (2000). Status, religion, and the belief in a just world: comparing African Americans, Latinos, and Whites. Social Science Quarterly, 81, 325–343.
[39] Furnham, A. (1991). Just world beliefs in twelve societies. Journal of Social Psychology, 133, 317–329.
[40] Furnham, A. (1992). Relationship knowledge and attitudes towards AIDS. Psychological Reports, 71, 1149–1150.
[41] Dalbert, C. (2001). The justice motive as a personal resource: dealing with challenges and critical life events. New York: Plenum.
[42] Ritter, C., Benson, D. E., & Snyder, C. (1990). Belief in a just world and depression. Sociological Perspective, 25, 235–252.
[43] Hafer, C., & Olson, J. (1998). Individual differences in beliefs in a just world and responses to personal misfortune. In L. Montada, & M. Lerner (Eds.), Responses to victimizations and belief in the just world (pp. 65–86). New York: Plenum.
[44] Taylor, S.E., & Brown, J. (1988). Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin, 103, 193–210.
[45] Sutton, R., & Douglas, K. (2005). Justice for all, or just for me? More evidence of the importance of the self-other distinction in just-world beliefs. Personality and Individual Differences, 39(3), 637–645.
[46] Montada, L. & Lerner, M. (Eds.) (1998). Responses to victimizations and belief in the just world. New York: Plenum.
Further reading
Hafer, C. L.; Bègue, L. (2005). "Experimental research on just-world theory: problems, developments, and future challenges" (http://www.brocku.ca/psychology/people/Hafer_Begue_05.pdf). Psychological Bulletin 131 (1): 128–167. doi:10.1037/0033-2909.131.1.128.
Lerner, Melvin J. (1980). The Belief in a Just World: A Fundamental Delusion. Perspectives in Social Psychology. New York: Plenum Press. ISBN 978-0-306-40495-5.
Lerner, M.; Simmons, C. H. (1966). "Observers' Reaction to the Innocent Victim: Compassion or Rejection?". Journal of Personality and Social Psychology 4 (2): 203–210. doi:10.1037/h0023562. PMID 5969146.
Montada, Leo; Lerner, Melvin J. (1998). Responses to Victimization and Belief in a Just World. Critical Issues in Social Justice. ISBN 978-0-306-46030-2.
Rubin, Z.; Peplau, L. A. (1975). "Who believes in a just world?" (http://www.peplaulab.ucla.edu/Publications_files/Rubin & Peplau 1975s.pdf). Journal of Social Issues 31 (3): 65–90. Reprinted (1977) in Reflections, XII(1), 1–26.
Rubin, Z.; Peplau, L. A. (1973). "Belief in a just world and reactions to another's lot: A study of participants in the national draft lottery" (http://www.peplaulab.ucla.edu/Publications_files/Rubin_Peplau_73.pdf). Journal of Social Issues 29 (4): 73–94.
External links
The Just World Hypothesis (http://www.units.muohio.edu/psybersite/justworld/index.shtml)
Issues in Ethics: The Just World Theory (http://www.scu.edu/ethics/publications/iie/v3n2/justworld.html)
Less-is-better effect
The less-is-better effect is a type of preference reversal that occurs when an option is preferred under joint evaluation but not under separate evaluation. The term was first proposed by Christopher Hsee.[1] The effect has also been studied by Dan Ariely.
Christopher Hsee demonstrated the effect in a number of experiments, including some which found:[1]
an expensive $45 scarf was preferred to a cheap $55 coat
7 ounces of ice cream overflowing a small cup was preferred to 8 ounces of ice cream in a much larger cup
a dinnerware set with 24 intact pieces was preferred to a set of 24 pieces plus 7 broken pieces
a smaller dictionary was preferred to a larger one with a torn cover
When both options were offered jointly (at the same time) the larger set was preferred, but if they were judged
separately (against control options) the preference was reversed.
Theoretical causes of the less-is-better effect include:
counterfactual thinking: a study found that bronze medalists are happier than silver medalists, apparently because silver invites comparison to gold whereas bronze invites comparison to not receiving a medal.[2]
the evaluability heuristic and/or fluency heuristic: Hsee hypothesized that subjects evaluated proposals more highly based on attributes which were easier to evaluate[1] (attribute substitution). Another study found that students preferred funny versus artistic posters according to attributes they could verbalize easily, but the preference was reversed when they did not need to give a reason[3] (see also introspection illusion).
the representativeness heuristic, or judgment by prototype: people judge things by the average of a set more easily than by its size, a component of extension neglect.[4]
References
[1] Hsee, Christopher K. (1998). "Less Is Better: When Low-value Options Are Valued More Highly than High-value Options". Journal of Behavioral Decision Making 11: 107–121. doi:10.1002/(SICI)1099-0771(199806)11:2<107::AID-BDM292>3.0.CO;2-Y.
[2] Medvec, V. H.; S. Madey & T. Gilovich (1995). "When less is more: Counterfactual thinking and satisfaction among Olympic medalists". Journal of Personality and Social Psychology 69: 603–610.
[3] Wilson, T. D.; J. W. Schooler (1991). "Thinking too much: Introspection can reduce the quality of preferences and decisions". Journal of Personality and Social Psychology 60: 181–192.
[4] Kahneman, Daniel (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Loss aversion
In economics and decision theory, loss aversion refers to people's tendency to strongly prefer avoiding losses to acquiring gains. Some studies suggest that losses are twice as powerful, psychologically, as gains. Loss aversion was first convincingly demonstrated by Amos Tversky and Daniel Kahneman.[1]
This leads to risk aversion when people evaluate a possible gain: since people prefer avoiding losses to making gains, they accept a smaller sure gain over a risky larger one. This explains the curvilinear shape of the prospect theory utility graph in the positive domain. Conversely, people strongly prefer risks that might mitigate a loss (called risk-seeking behavior).
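This asymmetry is often captured by the prospect-theory value function, steeper for losses than for gains. A minimal sketch follows; the parameter values (alpha = 0.88, lambda = 2.25) are Tversky and Kahneman's later 1992 estimates, assumed here for illustration rather than stated in this article:

```python
# Sketch of the prospect-theory value function.
# ALPHA and LAMBDA are assumed parameter values (Tversky & Kahneman, 1992).
ALPHA = 0.88    # diminishing sensitivity to outcome magnitude
LAMBDA = 2.25   # loss-aversion coefficient: losses loom ~2x larger

def value(x: float) -> float:
    """Subjective value of an outcome x (gain if x > 0, loss if x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

gain, loss = value(100), value(-100)
print(round(gain, 1), round(loss, 1))  # the loss outweighs the equal gain
```

With these parameters the magnitude of value(-100) is exactly lambda times value(100), matching the rough claim that losses are about twice as powerful as gains.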
Loss aversion may also explain sunk cost effects.
Loss aversion implies that someone who loses $100 will lose more satisfaction than another person gains from a $100 windfall. In marketing, the use of trial periods and rebates tries to take advantage of the buyer's tendency to value the good more after incorporating it into the status quo.
Note that whether a transaction is framed as a loss or as a gain is very important to this calculation: would you rather get a $5 discount, or avoid a $5 surcharge? The same change in price, framed differently, has a significant effect on consumer behavior. Traditional economists consider this "endowment effect", and all other effects of loss aversion, to be completely irrational; that is why the concept is so important to the fields of marketing and behavioral finance.
The effect of loss aversion in a marketing setting was demonstrated in a study of consumer reaction to price changes to insurance policies.[2] The study found that price increases had twice the effect on customer switching as price decreases.
Loss aversion and the endowment effect
Loss aversion was first proposed as an explanation for the endowment effect (the fact that people place a higher value on a good that they own than on an identical good that they do not own) by Kahneman, Knetsch, and Thaler (1990).[3] Loss aversion and the endowment effect lead to a violation of the Coase theorem, which holds that "the allocation of resources will be independent of the assignment of property rights when costless trades are possible" (p. 1326).
In several studies, the authors demonstrated that the endowment effect could be explained by loss aversion but not by five alternatives: (1) transaction costs, (2) misunderstandings, (3) habitual bargaining behaviors, (4) income effects, or (5) trophy effects. In each experiment, half of the subjects were randomly assigned a good and asked for the minimum amount they would be willing to sell it for, while the other half were given nothing and asked for the maximum amount they would be willing to spend to buy the good. Since the value of the good is fixed and individual valuations of the good vary from this fixed value only due to sampling variation, the supply and demand curves should be perfect mirrors of each other, and thus half the goods should be traded. Kahneman, Knetsch, and Thaler (KKT) also ruled out the explanation that lack of experience with trading would lead to the endowment effect by conducting repeated markets.
The first two alternative explanations (that under-trading was due to transaction costs or misunderstanding) were tested by comparing goods markets to induced-value markets under the same rules. If it was possible to trade to the optimal level in induced-value markets under the same rules, there should be no difference in goods markets.
The results showed drastic differences between induced-value markets and goods markets. The median prices of buyers and sellers in induced-value markets matched almost every time, leading to near-perfect market efficiency, but in goods markets sellers' asking prices were much higher than buyers' offers. This effect was consistent over trials, indicating that it was not due to inexperience with the procedure or the market. Since any transaction cost due to the procedure was equal in the induced-value and goods markets, transaction costs were eliminated as an explanation for the endowment effect.
The third alternative explanation was that people have habitual bargaining behaviors, such as overstating their minimum selling price or understating their maximum buying price, that may spill over from strategic interactions, where these behaviors are useful, to the laboratory setting, where they are sub-optimal. An experiment addressed this by having the clearing prices selected at random. Buyers who indicated a willingness-to-pay (WTP) higher than the randomly drawn price got the good, and vice versa for those who indicated a lower WTP. Likewise, sellers who indicated a willingness-to-accept lower than the randomly drawn price sold the good, and vice versa. This incentive-compatible value elicitation method did not eliminate the endowment effect, but it did rule out habitual bargaining behavior as an alternative explanation.
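The random clearing-price procedure can be sketched as follows; the price range and function name are illustrative, not details from the original experiment. Because trades settle at the random draw rather than at the stated value, misreporting can only hurt, which is what makes the elicitation incentive-compatible:

```python
import random

def clear_market(stated_wtp, stated_wta, lo=0.0, hi=10.0, rng=None):
    """Draw one clearing price at random. The buyer trades iff the stated
    willingness-to-pay covers the price; the seller trades iff the stated
    willingness-to-accept is at or below it. Trades settle at the drawn
    price, not at the stated values, so truthful reporting is optimal."""
    rng = rng or random.Random()
    price = rng.uniform(lo, hi)
    return stated_wtp >= price, stated_wta <= price

# A buyer willing to pay the maximum always trades; a seller demanding
# more than the maximum never does.
buyer_trades, seller_trades = clear_market(10.0, 11.0)
print(buyer_trades, seller_trades)  # True False
```

Overstating your selling price here cannot raise the price you receive; it can only make you miss trades you would have wanted, so the habitual bargaining strategy loses its payoff.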
Income effects were ruled out by giving one third of the participants mugs, one third chocolates, and one third neither mugs nor chocolates. They were then given the option of trading the mug for the chocolate or vice versa, and those with neither were asked merely to choose between mug and chocolate. Thus, wealth effects were controlled for across the groups that received mugs or chocolates. The results showed that 86% of those starting with mugs chose mugs, 10% of those starting with chocolates chose mugs, and 56% of those with nothing chose mugs. This ruled out income effects as an explanation for the endowment effect. Also, since all participants in a group had the same good, it could not be considered a "trophy", eliminating the final alternative explanation.
Thus, the five alternative explanations were eliminated in the following ways:
1 & 2: Induced-value market vs. consumption goods market;
3: Incentive compatible value elicitation procedure;
4 & 5: Choice between endowed or alternative good.
Questions about the existence of loss aversion
Recently, studies have questioned the existence of loss aversion. In several studies examining the effect of losses in decision making under risk and uncertainty, no loss aversion was found.[4] There are several explanations for these findings: one is that loss aversion does not exist for small payoff magnitudes; another is that the loss aversion pattern is less general than previously thought. Finally, losses may have an effect on attention but not on the weighting of outcomes, as suggested, for instance, by the fact that losses lead to more autonomic arousal than gains even in the absence of loss aversion.[5]
Loss aversion may be more salient when people compete. Gill and Prowse (2012) provide experimental evidence that people are loss averse around reference points given by their expectations in a competitive environment with real effort.[6]
Loss aversion and the endowment effect are often confused. Gal (2006) argued that the endowment effect, previously
attributed to loss aversion, is more parsimoniously explained by inertia than by a loss/gain asymmetry.
Loss aversion
144
Loss aversion in nonhuman subjects
In 2005, experiments were conducted on the ability of capuchin monkeys to use money. After several months of
training, the monkeys began showing behavior considered to reflect understanding of the concept of a medium of
exchange. They exhibited the same propensity to avoid perceived losses demonstrated by human subjects and
investors.
[7]
However, a subsequent study by Silberberg and colleagues suggested that in fact the 2005 results were
not indicative of loss aversion because there was an unequal time delay in the presentation of gains and losses.
Losses were presented with a delay. Hence, the results can also be interpreted as indicating "delay aversion".
Loss aversion in education
Loss aversion experimentation has recently been applied within an educational setting in an effort to improve achievement within the U.S. Results from the 2009 Programme for International Student Assessment (PISA) ranked the U.S. 31st in math and 17th in reading.[8]
In this experiment, Fryer et al. posit that framing merit pay in terms of a loss makes it most effective. The study was performed in the city of Chicago Heights within nine K-8 urban schools, which included 3,200 students. 150 out of 160 eligible teachers participated and were assigned to one of four treatment groups or a control group. Teachers in the incentive groups received rewards based on their students' end-of-year performance on the ThinkLink Predictive Assessment (K-2 students took the Iowa Test of Basic Skills (ITBS) in March). The control group followed the traditional merit pay process of receiving bonus pay at the end of the year based on student performance on standardized exams. The experimental groups, however, received a lump sum at the beginning of the year that would have to be paid back if their students did not meet performance targets. The bonus was equivalent to approximately 8% of the average teacher salary in Chicago Heights, approximately $8,000.
Methodology: "gain" and "loss" teachers received identical net payments for a given level of performance; the only difference was the timing and framing of the rewards. With an advance on the payment and the reframing of the incentive as avoidance of a loss, the researchers observed treatment effects in excess of 0.20, and some as high as 0.398, standard deviations. According to the authors, "this suggests that there may be significant potential for exploiting loss aversion in the pursuit of both optimal public policy and the pursuit of profits".[9]
The use of loss aversion, specifically within the realm of education, has received much attention in blogs and mainstream media:
The Washington Post discussed merit pay in a recent article, and specifically the study conducted by Fryer et al. The article discusses the positive results of the experiment and estimates that the testing gains of the loss group are associated with an increase in lifetime earnings of between $37,180 and $77,740. It also comments on the fact that it didn't matter much whether the pay was tied to the performance of a given teacher or to the team to which that teacher was assigned, stating that a merit pay regime need not pit teachers in a given school against each other to get results.[10]
Science Daily covers the Fryer study, stating that students gained as much as a 10-percentile increase in their scores compared to students with similar backgrounds if their teacher received a bonus at the beginning of the year, with conditions attached. It also explains that there was no gain for students when teachers were offered the bonus at the end of the school year. Thomas Amadio, superintendent of Chicago Heights Elementary School District 170, where the experiment was conducted, is quoted in the article as stating that the study shows the value of merit pay as an encouragement for better teacher performance.[11]
The Education Gadfly Weekly also weighs in on the use of loss aversion within education, specifically merit pay. The article states there are a few noteworthy limitations to the study, particularly relative to scope and sample size; further, the outcome measure was a low-stakes diagnostic assessment, not the state test, so it's unclear whether findings would look the same if the test were used for accountability purposes. Still, Fryer et al. have added an interesting tumbling element to the merit-pay routine.[12]
The Sun-Times interviewed John List, chairman of the University of Chicago's department of economics. He stated, "It's a deeply ingrained behavioral trait ... that all human beings have this underlying phenomenon that I really, really dislike losses, and I will do all I can to avoid losing something." The article also mentions the only other study of a loss aversion payment plan used to enhance performance in a work environment: the only prior field study, they said, occurred in Nanjing, China, where it improved productivity among factory workers who made and inspected DVD players and other consumer electronics. The article also covers the reaction of Barnett Berry, president of the Center for Teaching Quality, who stated that the study "seems to suggest that districts pay teachers working with children and adolescents in the same way Chinese factory workers were paid for producing widgets. I think this suggests a dire lack of understanding of the complexities of teaching."[13]
There has also been other criticism of the notion of loss aversion as an explanation of the observed effects.
Larry Ferlazzo, in his blog, questioned what kind of positive classroom culture a loss-aversion strategy would create with students, and what effect a similar plan with teachers would have on school culture. He states that "the usual kind of teacher merit pay is bad enough, but a threatened take-away strategy might even be more offensive".[14]
Researchers Nathan Novemsky and Daniel Kahneman also state that there are limits to loss aversion. Their article focuses on individual intentions and how such intentions can produce or inhibit loss aversion. They further state that "the coding of outcomes as gains and losses depends on the agents' intentions and not only on the objective state of affairs at the moment of decision". They provide an example of two individuals with different intentions performing a transaction: a consumer who owns a pair of shoes would consider giving them up a loss, because his intention is to keep them, whereas a shoe salesman, whose intentions differ, would not be affected by loss aversion when giving up the shoes from his store.[15]
Bill Ferriter also posted an article on the Teacher Leaders Network arguing that no matter when external incentives are awarded, they are not effective in education because teachers are already working as hard as they can.[16]
References
[1] Kahneman, D. and Tversky, A. (1984). "Choices, Values, and Frames" (http://dirkbergemann.commons.yale.edu/files/kahnemann-1984-choices-values-frames.pdf). American Psychologist 39 (4): 341–350.
[2] Dawes, J. (2004). "Price Changes and Defection Levels in a Subscription-type Market." Journal of Services Marketing 18 (1).
[3] Kahneman, D., Knetsch, J., & Thaler, R. (1990). Experimental Test of the endowment effect and the Coase Theorem. Journal of Political
Economy 98(6), 1325-1348.
[4] Erev, Ert & Yechiam, 2008; Ert & Erev, 2008; Harinck, Van Dijk, Van Beest, & Mersmann, 2007; Kermer, Driver-Linn, Wilson, & Gilbert,
2006; Nicolau, 2012; Yechiam & Telpaz, in press
[5] Hochman & Yechiam, 2011
[6] Gill, David and Victoria Prowse (2012). "A structural analysis of disappointment aversion in a real effort competition" (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1578847). American Economic Review 102 (1): 469–503.
[7] Dubner, Stephen J.; Levitt, Steven D. (2005-06-05). "Monkey Business" (http://www.nytimes.com/2005/06/05/magazine/05FREAK.html?pagewanted=all). Freakonomics column. New York Times. Retrieved 2010-08-23.
[8] http://www.oecd.org/pisa/pisaproducts/pisa2009/pisa2009keyfindings.htm
[9] Fryer et al., Enhancing the efficacy of teacher incentives through loss aversion (http://www.economics.harvard.edu/faculty/fryer/files/enhancing_teacher_incentives.pdf), Harvard University, 2012
[10] http://www.washingtonpost.com/blogs/wonkblog/wp/2012/07/23/does-teacher-merit-pay-work-a-new-study-says-yes/
[11] http://www.sciencedaily.com/releases/2012/08/120809090335.htm
[12] http://www.edexcellence.net/commentary/education-gadfly-weekly/2012/august-2/enhancing-the-efficacy-of-teacher-incentives-through-loss-aversion.html
[13] http://www.suntimes.com/news/education/14687664-418/cash-upfront-the-way-to-get-teachers-to-rack-up-better-student-test-scores-study.html
[14] http://larryferlazzo.edublogs.org/2012/07/21/if-you-only-have-a-hammer-you-tend-to-see-every-problem-as-a-nail-economists-go-after-schools-again/
[15] http://wolfweb.unr.edu/homepage/pingle/Teaching/BADM%20791/Week%205%20Decision%20Invariance/Kahneman-Novemsky-Loss%20Aversion.pdf
[16] http://teacherleaders.typepad.com/the_tempered_radical/2012/07/what-economists-dont-understand-about-educators.html
Sources
Ert, E., & Erev, I. (2008). The rejection of attractive gambles, loss aversion, and the lemon avoidance heuristic.
Journal of Economic Psychology, 29, 715-723.
Erev, I., Ert, E., & Yechiam, E. (2008). Loss aversion, diminishing sensitivity, and the effect of experience on
repeated decisions. Journal of Behavioral Decision Making, 21, 575-597.
Gal, D. (2006). A psychological law of inertia and the illusion of loss aversion. Judgment and Decision Making,
1, 23-32.
Harinck, F., Van Dijk, E., Van Beest, I., & Mersmann, P. (2007). When gains loom larger than losses: Reversed
loss aversion for small amounts of money. Psychological Science, 18, 1099-1105.
Hochman, G., and Yechiam, E. (2011). Loss aversion in the eye and in the heart: The Autonomic Nervous System's responses to losses. Journal of Behavioral Decision Making, 24, 140-156.
Kahneman, D., Knetsch, J., & Thaler, R. (1990). Experimental Test of the endowment effect and the Coase
Theorem. Journal of Political Economy 98(6), 1325-1348.
Kahneman, D. & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica 47,
263-291.
Kermer, D.A., Driver-Linn, E., Wilson, T.D., & Gilbert, D.T. (2006). Loss aversion is an affective forecasting
error. Psychological Science, 17, 649-653.
McGraw, A.P., Larsen, J.T., Kahneman, D., & Schkade, D. (2010). Comparing gains and losses. Psychological
Science.
Nicolau, J.L. (2012). Battle Royal: Zero-price effect vs relative vs referent thinking, Marketing Letters, 23, 3,
661-669.
Silberberg, A., et al. (2008). On loss aversion in capuchin monkeys. Journal of the experimental analysis of
behavior, 89, 145-155
Tversky, A. & Kahneman, D. (1991). Loss Aversion in Riskless Choice: A Reference Dependent Model.
Quarterly Journal of Economics 106, 1039-1061.
Yechiam, E., and Telpaz, A. (in press). Losses induce consistency in risk taking even without loss aversion.
Journal of Behavioral Decision Making.
Ludic fallacy
The ludic fallacy is a term coined by Nassim Nicholas Taleb in his 2007 book The Black Swan. "Ludic" is from the Latin ludus, meaning "play, game, sport, pastime."[1] It is summarized as "the misuse of games to model real-life situations."[2] Taleb explains the fallacy as "basing studies of chance on the narrow world of games and dice."[3]
It is a central argument in the book: a rebuttal of the mathematical models used to predict the future, as well as an attack on the idea of applying naïve and simplified statistical models in complex domains. According to Taleb, statistics works only in some domains, like casinos, in which the odds are visible and defined. Taleb's argument centers on the idea that predictive models are based on platonified forms, gravitating towards mathematical purity and failing to take some key ideas into account:
It is impossible to be in possession of all the information.
Very small unknown variations in the data could have a huge impact. Taleb differentiates his idea from the related mathematical notions in chaos theory, e.g. the butterfly effect.
Theories and models based on empirical data are flawed, as they cannot account for events that have not taken place before, for which no conclusive explanation can be provided.
Examples
Example 1: Suspicious coin
One example given in the book is the following thought experiment. There are two people:
Dr John, who is regarded as a man of science and logical thinking.
Fat Tony, who is regarded as a man who lives by his wits.
A third party asks them, "Assume a fair coin is flipped 99 times, and each time it comes up heads. What are the odds that the 100th flip will also come up heads?"
Dr John says that the odds are not affected by the previous outcomes, so the odds must still be 50:50.
Fat Tony says that the odds of the coin coming up heads 99 times in a row are so low (less than 1 in 6.33 × 10^29) that the initial assumption that the coin had a 50:50 chance of coming up heads is most likely incorrect.
The ludic fallacy here is to assume that in real life the rules from the purely hypothetical model (where Dr John is
correct) apply. Would a reasonable person bet on black on a roulette table that has come up red 99 times in a row
(especially as the reward for a correct guess is so low when compared with the probable odds that the game is fixed)?
In classical terms, highly statistically significant (unlikely) events should make one question one's model assumptions. In Bayesian statistics, this can be modelled by placing a prior distribution on one's assumptions about the fairness of the coin, then using Bayesian inference to update that distribution as flips are observed.
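As a toy illustration of that Bayesian update (the 1-in-1,000 prior on a two-headed coin is an assumption chosen for the example, not a figure from Taleb):

```python
# Bayesian update for the suspicious coin: compare a fair coin against
# a two-headed coin after observing 99 heads in a row.
prior_fair, prior_biased = 0.999, 0.001   # assumed prior beliefs
lik_fair = 0.5 ** 99                      # P(99 heads | fair coin)
lik_biased = 1.0                          # P(99 heads | two-headed coin)

evidence = prior_fair * lik_fair + prior_biased * lik_biased
posterior_fair = prior_fair * lik_fair / evidence

print(f"P(fair | 99 heads) = {posterior_fair:.2e}")
```

Even with overwhelming prior confidence in fairness, 99 heads drive the posterior probability of a fair coin down to roughly 10^-27, vindicating Fat Tony's suspicion.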
Example 2: Job interview
A man considers going to a job interview. He recently studied statistics and utility theory in college and performed
well in the exams. Considering whether to take the interview, he tries to calculate the probability he will get the job
versus the cost of the time spent.
This young job seeker forgets that real life has more variables than the small set he has chosen to estimate. Even with
a low probability of success, a really good job may be worth the effort of going to the interview. Will he enjoy the
process of the interview? Will his interview technique improve regardless of whether he gets the job or not? Even the
statistics of the job business are non-linear. What other jobs could come the man's way by meeting the interviewer?
Might there be a possibility of a very high pay-off in this company that he has not thought of?
Example 3: Stock returns
Any decision theory based on a fixed universe or model of possible outcomes ignores and minimizes the impact of
events which are "outside model." For instance, a simple model of daily stock market returns may include extreme
moves such as Black Monday (1987) but might not model the market breakdowns following the 2011 Japanese
tsunami and its consequences. A fixed model considers the "known unknowns," but ignores the "unknown
unknowns."
Relation to Platonicity
The ludic fallacy is a specific case of the more general problem of Platonicity defined by Taleb as:
the focus on those pure, well-defined, and easily discernible objects like triangles, or more social
notions like friendship or love, at the cost of ignoring those objects of seemingly messier and less
tractable structures.
References
[1] D.P. Simpson, Cassell's Latin and English Dictionary (New York: Hungry Minds, 1987) p. 134.
[2] Black Swans, the Ludic Fallacy and Wealth Management (http://www.tocqueville.com/article/show/204), François Sicart.
[3] Nassim Taleb, The Black Swan (New York: Random House, 2007) p. 309.
Further reading
The Ludic Fallacy, chapter from the book The Black Swan (http://www.fooledbyrandomness.com/LudicFallacy.pdf)
Taleb, Nassim N. (2007). The Black Swan. Random House. ISBN 1-4000-6351-5.
Medin, D. & Atran, S. (2004). The native mind: Biological categorization and reasoning in development and across cultures. Psychological Review, 111, 960–983.
Fodor, J. (1983). Modularity of mind. Cambridge, MA: MIT Press.
Tales of the Unexpected, Wilmott Magazine, June 2006, pp. 30–36 (http://www.fooledbyrandomness.com/0603_coverstory.pdf)
"A misplaced question". Taleb at Freakonomics blog (http://freakonomics.blogs.nytimes.com/2007/08/09/freakonomics-quorum-the-economics-of-street-charity/)
Mere-exposure effect
The mere-exposure effect is a psychological phenomenon by which people tend to develop a preference for things merely because they are familiar with them. In social psychology, this effect is sometimes called the familiarity principle. The effect has been demonstrated with many kinds of things, including words, Chinese characters, paintings, pictures of faces, geometric figures, and sounds.[1] In studies of interpersonal attraction, the more often a person is seen by someone, the more pleasing and likeable that person appears to be.
Research
The earliest known research on the effect was conducted by Gustav Fechner in 1876.[2] Edward B. Titchener also documented the effect and described the "glow of warmth" felt in the presence of something familiar.[3] However, Titchener's hypothesis was rejected once tested: results showed that the enhancement of preferences for objects did not depend on the individual's subjective impressions of how familiar the objects were. The rejection of Titchener's hypothesis spurred further research and the development of current theory.
The scholar best known for developing the mere-exposure effect is Robert Zajonc. Before conducting his research, he observed that exposure to a novel stimulus initially elicits a fear/avoidance response in all organisms, and that each repeated exposure to the novel stimulus causes less fear and more approach behavior in the observing organism. After repeated exposure, the observing organism will begin to react fondly to the once-novel stimulus. This observation led to the research and development of the mere-exposure effect.
Zajonc (1960s)
In the 1960s, a series of laboratory experiments by Robert Zajonc demonstrated that simply exposing subjects to a familiar stimulus led them to rate it more positively than other, similar stimuli which had not been presented.[4] At the beginning of his research, Zajonc looked at language and the frequency of words used. He found that, overall, positive words received more usage than their negative counterparts.[4]
One experiment testing the mere-exposure effect used fertile chicken eggs as test subjects. Tones of two different frequencies were played to different groups of chicks while they were still unhatched. Once hatched, each tone was played to both groups of chicks. Each set of chicks consistently chose the tone prenatally played to it.[1] Zajonc also tested the mere-exposure effect by showing meaningless Chinese characters to two groups of individuals. The individuals were then told that these symbols represented adjectives and were asked to rate whether the symbols held positive or negative connotations. The symbols that had been previously seen by the test subjects were consistently rated more positively than those unseen. After this experiment, the group with repeated exposure to certain characters reported being in better moods and feeling more positive than those who did not receive repeated exposure.[1]
In one variation, subjects were shown an image on a tachistoscope for a duration too brief to be perceived consciously. This subliminal exposure produced the same effect,[5] though it is important to note that subliminal effects are unlikely to occur outside controlled laboratory conditions.[6]
According to Zajonc, the mere-exposure effect is capable of taking place without conscious cognition: "preferences need no inferences".[7] This statement has spurred much research on the relationship between cognition and affect. Zajonc explains that if preferences (or attitudes) were merely based upon units of information with affect attached to them, then persuasion would be fairly simple. He argues that this is not the case: such simple persuasion tactics have failed miserably.[7] Zajonc states that affective responses to stimuli happen much more quickly than cognitive responses, and that these responses are often made with much more confidence. He states that thought (cognition) and feeling (affect) are distinct, and that cognitions are not free from affect, nor is affect free of cognition.[7] Zajonc states, "...the form of experience that we came to call feeling accompanies all cognitions, that it arises early in the process of registration and retrieval, albeit weakly and vaguely, and that it derives from a parallel, separate, and partly independent system in the organism."[7]
With regard to the mere-exposure effect and decision making, Zajonc states that there is no empirical proof that cognition precedes any form of decision making. While this is a common assumption, Zajonc argues that the opposite is more likely: decisions are made with little to no cognitive process. He equates deciding upon something with liking it, meaning that more often we cognize reasons to rationalize a decision rather than decide upon it.[7] Be that as it may, once we have decided that we like something, it is very difficult to sway that opinion: we are experts on ourselves and know what we like, whether or not we have formed cognitions to back it up.
Goetzinger (1968)
Charles Goetzinger conducted an experiment using the mere-exposure effect on his class at Oregon State University.
Goetzinger had a student come to class in a large black bag with only his feet visible. The black bag sat on a table
in the back of the classroom. Goetzinger's experiment was to observe whether the students would treat the black bag
in accordance with Zajonc's mere-exposure effect. His hypothesis was confirmed: the students first treated the black
bag with hostility, which over time turned into curiosity, and eventually friendship.[4] This experiment supports
Zajonc's mere-exposure effect; by simply presenting the black bag to the students over and over again, their
attitudes were changed, or, as Zajonc states, "mere repeated exposure of the individual to a stimulus is a sufficient
condition for the enhancement of his attitude toward it".[4]
Bornstein (1989)
A meta-analysis of 208 experiments found that the mere-exposure effect is robust and reliable, with an effect size of
r = 0.26. The analysis found that the effect is strongest when unfamiliar stimuli are presented briefly. Mere
exposure typically reaches its maximum effect within 10–20 presentations, and some studies even show that liking may
decline after a longer series of exposures. For example, people generally like a song more after they have heard it a
few times, but many repetitions can reduce this preference. A delay between exposure and the measurement of liking
actually tends to increase the strength of the effect. The effect is weaker for children, and for drawings and
paintings as compared to other types of stimuli.[8] One social psychology experiment showed that exposure to people
we initially dislike makes us dislike them even more.[9]
Zola-Morgan (2001)
In support of Zajonc's claim that affect does not need cognition to occur, Zola-Morgan conducted experiments on
monkeys with lesions to the amygdala (the brain structure responsive to affective stimuli). In these experiments,
Zola-Morgan showed that lesions to the amygdala impair affective functioning but not cognitive processes. Conversely,
lesions to the hippocampus (the brain structure responsible for memory) impair cognitive functions but leave
emotional responses fully functional.[1]
Two-factor theory
The mere-exposure effect has been explained by a two-factor theory, which posits that repeated exposure to a stimulus
increases perceptual fluency, the ease with which a stimulus can be processed. Perceptual fluency, in turn, increases
positive affect.[10][11] Studies showed that repeated exposure increases perceptual fluency, confirming the first
part of the theory.[12] Later studies observed that perceptual fluency is affectively positive, confirming the second
part of the fluency account of the mere-exposure effect.[13][14]
Application
Advertising
The most obvious application of the mere-exposure effect is found in advertising, but research has been mixed as to
its effectiveness at enhancing consumer attitudes toward particular companies and products. One study tested the
mere-exposure effect with banner ads seen on a computer screen. The study was conducted on college-aged students
who were asked to read an article on the computer while banner ads flashed at the top of the screen. The results
showed that each group exposed to the "test" banner rated the ad more favorably than other ads shown less
frequently or not at all. This research bolsters the evidence for the mere-exposure effect.
A different study showed that higher levels of media exposure are associated with lower reputations for companies,
even when the exposure is mostly positive.[15] A subsequent review of the research concluded that exposure leads to
ambivalence because it brings about a large number of associations, which tend to be both favorable and
unfavorable.[16] Exposure is most likely to be helpful when a company or product is new and unfamiliar to consumers.
An 'optimal' level of exposure to an advertisement may or may not exist. In a third study, experimenters primed
consumers with affective motives. One group of thirsty consumers was primed with a happy face before being offered a
beverage, while a second group was primed with an unpleasant face. The group primed with the happy face bought more
beverages, and was also willing to pay more for them, than their unhappy counterparts. This study bolsters Zajonc's
claim that choices do not require cognition: buyers often choose what they 'like' instead of what they have
substantially cognized.[17]
In the advertising world, the mere-exposure effect suggests that consumers need not cognize advertisements: simple
repetition is enough to create a 'memory trace' in the consumer's mind and unconsciously affect consuming behavior.
One scholar explains this relationship as follows: "The approach tendencies created by mere exposure may be
preattitudinal in the sense that they do not require the type of deliberate processing that is required to form brand
attitude."[18]
Other areas
The mere-exposure effect exists in most areas of human decision making. For example, many stock traders tend to
invest in securities of domestic companies merely because they are more familiar with them, despite the fact that
international markets offer similar or even better alternatives.[19] The mere-exposure effect also distorts the
results of journal-ranking surveys: academics who previously published in, or completed reviews for, a particular
academic journal rate it dramatically higher than those who did not.[20] There are mixed results on the question of
whether mere exposure can promote good relations between different social groups.[21] When groups already have
negative attitudes toward each other, further exposure can increase hostility.[21] A statistical analysis of voting
patterns found that a candidate's exposure has a strong effect on the number of votes they receive, distinct from the
popularity of their policies.[21] Another example is an automotive journalist claiming his own car is the best in the
world despite having driven countless others.
References
[1] Zajonc, R.B. (December 2001). "Mere Exposure: A Gateway to the Subliminal" (http://cdp.sagepub.com/content/10/6/224). Current Directions in Psychological Science 10 (6). doi:10.1111/1467-8721.00154. Retrieved April 10, 2011.
[2] Fechner, G.T. (1876). Vorschule der Ästhetik. Leipzig, Germany: Breitkopf & Härtel.
[3] Titchener, E.B. (1910). Textbook of Psychology. New York: Macmillan.
[4] Zajonc, Robert B. (1968). "Attitudinal Effects of Mere Exposure". Journal of Personality and Social Psychology 9 (2, Pt. 2): 1–27. doi:10.1037/h0025848. ISSN 1939-1315.
[5] Kunst-Wilson, W.; Zajonc, R. (1980). "Affective discrimination of stimuli that cannot be recognized". Science 207 (4430): 557–558. doi:10.1126/science.7352271. ISSN 0036-8075.
[6] De Houwer, J.; Hendrickx, H.; Baeyens, F. (1997). "Evaluative learning with 'subliminally' presented stimuli". Consciousness and Cognition 6: 87–107.
[7] Zajonc, R.B. (February 1980). "Feeling and thinking: Preferences need no inferences". American Psychologist 35 (2): 151–175.
[8] Bornstein, R.F. (1989). "Exposure and affect: Overview and meta-analysis of research, 1968–1987". Psychological Bulletin 106: 265–289.
[9] Swap, W. C. (1977). "Interpersonal Attraction and Repeated Exposure to Rewarders and Punishers". Personality and Social Psychology Bulletin 3 (2): 248–251. doi:10.1177/014616727700300219. ISSN 0146-1672.
[10] Seamon, John G.; Brody, Nathan; Kauff, David M. (1983). "Affective discrimination of stimuli that are not recognized: Effects of shadowing, masking, and cerebral laterality". Journal of Experimental Psychology: Learning, Memory, and Cognition 9 (3): 544–555. doi:10.1037/0278-7393.9.3.544. ISSN 0278-7393.
[11] Bornstein, Robert F.; D'Agostino, Paul R. (1994). "The Attribution and Discounting of Perceptual Fluency: Preliminary Tests of a Perceptual Fluency/Attributional Model of the Mere Exposure Effect". Social Cognition 12 (2): 103–128. doi:10.1521/soco.1994.12.2.103. ISSN 0278-016X.
[12] Jacoby, Larry L.; Dallas, Mark (1981). "On the relationship between autobiographical memory and perceptual learning". Journal of Experimental Psychology: General 110 (3): 306–340. doi:10.1037/0096-3445.110.3.306. ISSN 0096-3445.
[13] Reber, R.; Winkielman, P.; Schwarz, N. (1998). "Effects of Perceptual Fluency on Affective Judgments". Psychological Science 9 (1): 45–48. doi:10.1111/1467-9280.00008. ISSN 0956-7976.
[14] Winkielman, Piotr; Cacioppo, John T. (2001). "Mind at ease puts a smile on the face: Psychophysiological evidence that processing facilitation elicits positive affect". Journal of Personality and Social Psychology 81 (6): 989–1000. doi:10.1037/0022-3514.81.6.989. ISSN 0022-3514.
[15] Fombrun, Charles; Shanley, Mark (1990). "What's in a Name? Reputation Building and Corporate Strategy". The Academy of Management Journal 33 (2): 233. doi:10.2307/256324. ISSN 0001-4273.
[16] Brooks, Margaret E.; Highhouse, Scott (2006). "Familiarity Breeds Ambivalence". Corporate Reputation Review 9 (2): 105–113. doi:10.1057/palgrave.crr.1550016. ISSN 1363-3589.
[17] Tom, Gail; Nelson, Carolyn; Srzentic, Tamara; King, Ryan (2007). "Mere Exposure and the Endowment Effect on Consumer Decision Making". The Journal of Psychology 141 (2): 117–125. doi:10.3200/JRLP.141.2.117-126. ISSN 0022-3980.
[18] Grimes, Anthony; Kitchen, Phillip J. (2007). "Researching Mere Exposure Effects to Advertising". International Journal of Market Research 49 (2): 191–221.
[19] Huberman, G. (2001). "Familiarity Breeds Investment". Review of Financial Studies 14 (3): 659–680. doi:10.1093/rfs/14.3.659. ISSN 1465-7368.
[20] Serenko, A.; Bontis, N. (2011). "What's familiar is excellent: The impact of exposure effect on perceived journal quality" (http://foba.lakeheadu.ca/serenko/papers/JOI_Serenko_Bontis_Published.pdf). Journal of Informetrics 5: 219–223.
[21] Bornstein, Robert F.; Craver-Lemley, Catherine (2004). "Mere exposure effect". In Pohl, Rüdiger F. Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory. Hove, UK: Psychology Press. pp. 215–234. ISBN 978-1-84169-351-4. OCLC 55124398.
External links
Changing minds: Mere exposure theory (http://changingminds.org/explanations/theories/mere_exposure.htm)
Money illusion
In economics, money illusion refers to the tendency of people to think of currency in nominal, rather than real,
terms. In other words, the numerical or face value (nominal value) of money is mistaken for its purchasing power
(real value). This is a fallacy because modern fiat currencies have no intrinsic value: their real value derives from
their ability to be exchanged for goods (purchasing power) and used for payment of taxes.
The term was coined by Irving Fisher in Stabilizing the Dollar. It was popularized by John Maynard Keynes in the
early twentieth century, and Fisher wrote an important book on the subject, The Money Illusion, in 1928.[1]
The existence of money illusion is disputed by monetary economists who contend that people act rationally (i.e. think
in real prices) with regard to their wealth.[2] Eldar Shafir, Peter A. Diamond, and Amos Tversky (1997) have provided
compelling empirical evidence for the existence of the effect, and it has been shown to affect behaviour in a variety
of experimental and real-world situations.[3]
Shafir et al.[3] also state that money illusion influences economic behaviour in three main ways:
Price stickiness. Money illusion has been proposed as one reason why nominal prices are slow to change even
where inflation has caused real prices or costs to rise.
Contracts and laws are not indexed to inflation as frequently as one would rationally expect.
Social discourse, in formal media and more generally, reflects some confusion about real and nominal value.
Money illusion can also influence people's perceptions of outcomes. Experiments have shown that people generally
perceive an approximate 2% cut in nominal income with no change in monetary value as unfair, but see a 2% rise in
nominal income where there is 4% inflation as fair, despite the two being almost rationally equivalent. However, this
result is consistent with the 'myopic loss aversion' theory.[4] Furthermore, money illusion means nominal changes in
price can influence demand even if real prices have remained constant.[5]
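The near-equivalence of those two scenarios can be checked with a few lines of arithmetic (an illustrative sketch; the function name is mine, not from the literature):

```python
def real_change(nominal_change, inflation):
    """Real income change implied by a nominal change at a given inflation rate."""
    return (1 + nominal_change) / (1 + inflation) - 1

# 2% nominal cut with no inflation -- typically judged "unfair"
cut = real_change(-0.02, 0.00)
# 2% nominal raise with 4% inflation -- typically judged "fair"
raise_ = real_change(0.02, 0.04)

print(f"real change after the cut:   {cut:.2%}")     # -2.00%
print(f"real change after the raise: {raise_:.2%}")  # -1.92%
```

Both outcomes leave real income roughly 2% lower; only the nominal framing differs.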
On the money illusion
Some have suggested that money illusion implies that the negative relationship between inflation and unemployment
described by the Phillips curve might hold, contrary to more recent macroeconomic theories such as the
"expectations-augmented Phillips curve".[6] If workers use their nominal wage as a reference point when evaluating
wage offers, firms can keep real wages relatively low in a period of high inflation, as workers accept seemingly high
nominal wage increases. These lower real wages allow firms to hire more workers in periods of high inflation.
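This reference-point mechanism amounts to comparing a nominal raise against inflation. A minimal sketch (the numbers and names are illustrative, not from the source): a nominal raise below the inflation rate is still a real pay cut.

```python
def real_wage(nominal_wage, price_level):
    """Purchasing power of a wage, with the base period's price level set to 1.0."""
    return nominal_wage / price_level

base_wage = 100.0
nominal_raise = 0.05   # a seemingly generous 5% raise...
inflation = 0.08       # ...during 8% inflation

new_nominal = base_wage * (1 + nominal_raise)
new_real = real_wage(new_nominal, 1 + inflation)

print(f"nominal wage: {new_nominal:.2f}")  # 105.00
print(f"real wage:    {new_real:.2f}")     # 97.22 -- a real pay cut
```

A worker anchored on the nominal figure sees a raise; in real terms the firm has cut the wage by almost 3%.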
Explanations of money illusion generally describe the phenomenon in terms of heuristics. Nominal prices provide a
convenient rule of thumb for determining value, and real prices are only calculated when they seem highly salient
(e.g. in periods of hyperinflation or in long-term contracts).
A hypothetical example: suppose a man has $1,000,000 in the bank, which doubles every 10 years, while his living
expenses (beginning at $100,000 per 10 years) also double every 10 years. The man will have $1,900,000 after the
first decade, $3,600,000 after the second, and $6,800,000 after the third (evaluating balances at each 10-year mark),
and will thus feel safe, because each decade his net gain (interest minus living expenses) exceeds that of the
previous decade, even though his purchasing power is decreasing because the interest rate merely matches the
inflation rate.
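The decade-by-decade figures can be reproduced with a short simulation (a sketch of the hypothetical only; the function name and the assumption that the price level doubles with the balance are mine, matching the "interest rate matches inflation" premise):

```python
def simulate(decades, balance=1_000_000, expenses=100_000):
    """Track nominal balance vs. real purchasing power at each 10-year mark.

    The balance doubles each decade (interest), but expenses and the
    price level double too, so the real interest rate is zero.
    """
    price_level = 1.0
    rows = []
    for _ in range(decades):
        balance = balance * 2 - expenses  # interest earned minus living costs
        expenses *= 2
        price_level *= 2                  # inflation matches the interest rate
        rows.append((balance, balance / price_level))
    return rows

for nominal, real in simulate(3):
    print(f"nominal: ${nominal:>9,}   real (base-year dollars): ${real:>9,.0f}")
```

The nominal balance grows ($1.9M, $3.6M, $6.8M), so the man feels safe, while his real purchasing power steadily falls ($950k, $900k, $850k in base-year dollars).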
References
[1] Fisher, Irving (1928). The Money Illusion. New York: Adelphi Company.
[2] Bertrand, Marianne; Mullainathan, Sendhil; Shafir, Eldar (May 2004). "A behavioral-economics view of poverty". The American Economic Review 94 (2): 419–423. doi:10.1257/0002828041302019. JSTOR 3592921.
[3] Shafir, E.; Diamond, P. A.; Tversky, A. (1997). "On Money Illusion". Quarterly Journal of Economics 112 (2): 341–374. doi:10.1162/003355397555208.
[4] http://ideas.repec.org/a/tpr/qjecon/v110y1995i1p73-92.html
[5] Patinkin, 1969
[6] Romer 2006, p. 252
Further reading
Fehr, Ernst; Tyran, Jean-Robert (2001), "Does Money Illusion Matter?", American Economic Review 91 (5): 1239–1262, doi:10.1257/aer.91.5.1239, JSTOR 2677924
Howitt, P. (1987), "money illusion", The New Palgrave: A Dictionary of Economics, 3, London: Macmillan, pp. 518–519, ISBN 0-333-37235-2
Weber, Bernd; Rangel, Antonio; Wibral, Matthias; Falk, Armin (2009), "The medial prefrontal cortex exhibits money illusion", PNAS 106 (13): 5025–5028, doi:10.1073/pnas.0901490106, PMC 2664018, PMID 19307555
Akerlof, George A.; Shiller, Robert J. (2009), Animal Spirits (http://press.princeton.edu/titles/8967.html), Princeton University Press, pp. 41–50
Thaler, Richard H. (1997), "Irving Fisher: Modern Behavioral Economist" (http://faculty.chicagobooth.edu/richard.thaler/research/pdf/IrvingFisher.pdf), The American Economic Review 87 (2), Papers and Proceedings of the Hundred and Fourth Annual Meeting of the American Economic Association (May 1997)
Huw Dixon (2008), "New Keynesian Economics" (http://www.dictionaryofeconomics.com/article?id=pde2008_N000166), New Palgrave Dictionary of Economics; New Keynesian macroeconomics (http://www.cardiff.ac.uk/carbs/econ/workingpapers/papers/E2007_3.pdf)
Moral credential
The moral credential effect is a bias that occurs when a person's track record as a good egalitarian establishes in
them an unconscious ethical certification, endorsement, or license that increases the likelihood of less egalitarian
decisions later. This effect occurs even when the audience or moral peer group is unaware of the affected person's
previously established moral credential. For example, individuals who had the opportunity to recruit a woman or
African American in one setting were more likely to say later, in a different setting, that a job would be better
suited for a man or a Caucasian.[1] Similar effects also appear to occur when a person observes another person from a
group they identify with making an egalitarian decision.[2]
Group membership
It has been found that moral credentials can be obtained vicariously. That is, a person will behave as if they
themselves have moral credentials when they observe another person from a group they identify with making an
egalitarian decision.[3] In research that draws on social identity theory, it was also found that group membership
moderates the effectiveness of moral credentials in mitigating perceptions of prejudice. Specifically, it was
observed that displays of moral credentials have more effect between people who share in-group status.[4]
Quotes
Philosopher Friedrich Nietzsche in Human, All Too Human (1878):
Innocent corruption. In all institutions that do not feel the sharp wind of public criticism (as, for
example, in scholarly organizations and senates), an innocent corruption grows up, like a mushroom.[5][6]
References
[1] Monin, B.; Miller, D. T. (2001). "Moral credentials and the expression of prejudice". Journal of Personality and Social Psychology 81 (1): 33–43.
[2] Kouchaki, M. (Jul 2011). "Vicarious moral licensing: The influence of others' past moral actions on moral behavior". J Pers Soc Psychol 101 (4): 702–715. doi:10.1037/a0024552. PMID 21744973.
[3] Kouchaki, M. (Jul 2011). "Vicarious moral licensing: The influence of others' past moral actions on moral behavior". J Pers Soc Psychol 101 (4): 702–715. doi:10.1037/a0024552. PMID 21744973.
[4] Krumm, Angela J.; Corning, Alexandra F. (1 December 2008). "Who Believes Us When We Try to Conceal Our Prejudices? The Effectiveness of Moral Credentials With In-Groups Versus Out-Groups". The Journal of Social Psychology 148 (6): 689–709. doi:10.3200/SOCP.148.6.689-710.
[5] Human, All Too Human, 468
[6] Zimmern, Helen (translator) (1909). "8. A Look at the State" (http://www.wordsworthclassics.com/wordsworth/details2.aspx?isbn=9781840220834&cat=world). Human, All Too Human. London, England: Wordsworth Editions Limited. p. 210. ISBN 978-1-84022-083-4.
Negativity bias
Negativity bias is the psychological phenomenon by which humans pay more attention to and give more weight to
negative rather than positive experiences or other kinds of information.
Neurological evidence
In the brain, there are two different systems for negative and positive stimuli. The left hemisphere, which is known
for articulate language, is specialized for positive experiences, whereas the right hemisphere focuses on negative
experiences. Another area of the brain involved in the negativity bias is the amygdala, which devotes about
two-thirds of its neurons to detecting negative experiences; once the amygdala flags the bad news, it is stored in
long-term memory. Positive experiences, by contrast, have to be held in awareness for more than twelve seconds for
the transfer from short-term to long-term memory to take place.[1] We remember more after we hear disapproving or
disappointing news than before, which shows how the brain processes criticism. A common management tool for
delivering criticism, the "criticism sandwich", works with this bias: offer words of praise, discuss the critical
issues, then add more words of praise.[2] Implicit memory registers and responds to negative events almost
immediately, while it takes five to twenty seconds for positive experiences to even register in the brain.[3]
Emotional information circulates within the limbic system, which therefore ties directly into the negativity bias;
the limbic system can become overloaded with negative information and in turn take control of the brain. The
neocortex is responsible for higher-level cognitive processes, and a person uses it when trying to control the
negative signals dispersed by the limbic system. Because of the connection between the limbic system and the nervous
system, the body reacts harshly even when one is merely speaking about negative events.[4]
Explanations
Research suggests many explanations for the negativity bias. Listed below are several, ranging from small to large
instances of information integration; each tries to clarify why negativity biases occur. However, future research
must be conducted in order to fully understand the causes of humans' negative mindset.[5]
Selective attention: Research shows that people pay more attention to negative issues. Since humans can focus on only
one message at a time, due to selective attention, the negative message becomes more prominent.
Retrieval and accessibility:[6] Some studies found certain negativity biases to appear only over time, which
demonstrates that memory plays an important role in the negativity bias. Negativity biases arise during the retrieval
process: people retain the impression of information rather than its features, and since negative experiences and
memories are more distinct in one's mind, they are retrieved more rapidly and are therefore more easily accessible.
Definitiveness:[7] Humans rely heavily on the distinguishing features of an object. For example, when talking about
cars, people rely on the features that make a certain car stand out from another. When this is applied to the
perception of people, however, it is the negative traits that stand out: normal traits tend to be positive, so when
perceiving other people, humans rely heavily on negative features such as a big nose or a round tummy.
The judgment process: People weigh negative information more than positive information because that is how they think
it should be weighed; it makes sense to people to think in negative terms.
The figure–ground hypothesis:[8] There are many happy people in the world, and most people expect and report high
levels of personal happiness. Because people tend to evaluate others positively, the rarer negative information
stands out all the more.
Novelty and distinctiveness: Negative information is more distinctive and more novel than positive information. Its
greater novelty means it will be remembered better and recalled more easily; its greater distinctiveness means it
will be more distinguishable among different objects. If negative information loses its surprisingness or
informativeness, its impact is reduced.
Credibility: Negative information is more credible than positive information. Since there is strong normative
pressure to say positive things, the person who says something negative is more likely to seem sincere.
Interference effects: Humans have a very hard time enjoying the positive attributes of an object or event when a
negative attribute clings to that same object or event. For example, if an iPhone screen is cracked, then it is a
cracked iPhone and no longer a great and fabulous iPhone.
Research
Hamlin et al. researched three-month-olds and found that they process negativity much as adults do, suggesting that
the negativity bias is instinctual in humans rather than a conscious decision.[9]
John Cacioppo showed participants pictures that he knew would arouse positive, negative, and neutral feelings, and
recorded electrical activity in the brain's cerebral cortex to track the information processing taking place.
Participants' electrical activity was stronger in response to the negative stimuli than to the positive or neutral
stimuli.[10]
Researchers found that the negativity bias is noticeable during the work day. Amabile studied professionals and
looked at what made their day good or bad. The findings showed that when professionals made even the slightest step
forward on a project, their day was good; however, a minor setback resulted in a bad day. Furthermore, Amabile found
that negative setbacks were more than twice as strong as positive steps forward in their effect on individuals'
happiness that day.[2]
Researchers examined the negativity bias with respect to reward and punishment, concluding that learning develops
faster from negative reinforcement than from positive reinforcement.[11]
Researchers analyzed language to study the negativity bias. There are more negative than positive emotional words in
the human lexicon: one study found that 62% of emotional words were negative and 32% were positive, and that 74% of
the personality-trait words in the English language are negative.[12]
Researchers studied facial expressions in order to study the negativity bias. Participants' facial expressions were
monitored as they were exposed to pleasant, neutral, and unpleasant odors. The results show that participants'
negative reactions to unpleasant odors were stronger than their positive reactions to pleasant odors.[12]
Researchers also tested the negativity bias in children with respect to facial expressions. Children perceived both
negative and neutral facial expressions as negative.[13]
Examples
Researchers have found that children and adults have greater recall of unpleasant memories than of positive
memories, and can recall detailed descriptions of unpleasant behaviors more readily than of positive ones. As humans,
we learn faster under negative reinforcement.[11]
Negativity plays a key role in maintaining a healthy marriage. Couples who engage in both negative and positive
interactions stay together; however, the interactions should not be split 50-50. Since the brain weighs negative
situations and experiences more heavily than positive ones, the ratio in marriages must be five-to-one: couples must
engage in five times as many positive experiences as negative ones.[10]
Almost everyone has pleasant experiences with dogs throughout their life; however, if someone has a single experience
of a dog attacking or biting them, they will most likely become scared of dogs, weighing the one unpleasant
experience more heavily than the many pleasant ones.[14]
The use of social media by large organizations demonstrates the negativity bias. McDonald's used Twitter to get
customers to tell their favorite stories of their experiences with the restaurant (#McFail[15]). Of the 79,000 tweets
about McDonald's, 2,000 were negative. Even though there were far more positive tweets overall, most of the headlines
focused on the failure of the campaign.[16]
Managers who give no new opportunities to employees because of previous mistakes, or because of something they
disliked on a past project, provide a common example of the negativity bias.[17]
Is bad stronger than good?
Roy F. Baumeister, a professor of social psychology at Florida State University, co-authored a 2001 journal article
on the negativity bias entitled "Bad Is Stronger Than Good". In one experiment, participants gained or lost the same
amount of money ($50); the findings showed that people are more upset about losing money than they are pleased about
gaining it. Baumeister also found that negative events have longer-lasting effects on emotions than positive events
do. We also tend to think that people who say negative things are smarter than those who say positive things, which
makes us give more weight to critical reviews and insights.[2]
The tendency of bad to be stronger than good extends into almost every aspect of human existence. For example, if a
person makes a bad first impression on another person, it will be remembered far more easily than a good first
impression; furthermore, the person who made the bad impression will have a harder time changing it. Additionally,
when receiving feedback on a presentation or a finished job, negative feedback makes a much more profound impact on
the person receiving it. These are everyday examples in which negativity affects humans more than positivity, and
the tendency plays into most situations a person faces throughout their lifetime.[12]
References
[1] Hanson, Rick. "Confronting the Negativity Bias" (http://www.rickhanson.net/your-wise-brain/how-your-brain-makes-you-easily-intimidated). Retrieved 8 October 2012.
[2] Tugend, Alina. "Praise Is Fleeting, but Brickbats We Recall" (http://www.nytimes.com/2012/03/24/your-money/why-people-remember-negative-events-more-than-positive-ones.html?pagewanted=all&_r=0). Retrieved 9 October 2012.
[3] Moon, Tom. "Are We Hardwired for Unhappiness?" (http://www.tommoon.net/articles/are_we_hardwired-1.html). Retrieved October 25, 2012.
[4] Manley, Ron. "The Nervous System and Self-Regulation" (http://drronmanley.com/wisdom/the-nervous-system-and-self-regulation/). Retrieved October 25, 2012.
[5] Kanouse, David. "Explaining Negativity Biases in Evaluation and Choice Behavior: Theory and Research" (http://www.acrwebsite.org/search/view-conference-proceedings.aspx?Id=6335). Retrieved October 25, 2012.
[6] http://psychology.about.com/od/cognitivepsychology/a/memory_retrival.htm
[7] http://eesenor.blogspot.com/2010/04/definitiveness.html
[8] http://www.turnyourhead.com/psych.php
[9] Hamlin, J. Kiley et al. (2010). "Three-month-olds show a negativity bias in their social evaluations". Developmental Science 13 (6): 923–929. Retrieved 2012-10-02.
[10] Marano, Hara E. "Our Brain's Negative Bias" (http://www.psychologytoday.com/articles/200306/our-brains-negative-bias). Psychology Today. Sussex Publishers, LLC. Retrieved 9 October 2012.
[11] Haizlip, Julie et al. "Perspective: The Negativity Bias, Medical Education, and the Culture of Academic Medicine: Why Culture Change Is Hard" (http://journals.lww.com/academicmedicine/Fulltext/2012/09000/Perspective___The_Negativity_Bias,_Medical.19.aspx). Retrieved October 3, 2012.
[12] Bosman, Manie. "You Might Not Like it, But Bad is Stronger than Good" (http://www.strategicleadershipinstitute.net/news/you-might-not-like-it-but-bad-is-stronger-than-good/). Retrieved 9 October 2012.
[13] Tottenham, N.; Phuong, Flannery, Gabard-Durnam, & Goff (2012). "A Negativity Bias for Ambiguous Facial-Expression Valence During Childhood: Converging Evidence From Behavior and Facial Corrugator Muscle Responses". Emotion. doi:10.1037/a0029431.
[14] Moon, Tom. "Are We Hardwired for Unhappiness?" (http://www.tommoon.net/articles/are_we_hardwired-1.html). Retrieved 8 October 2012.
[15] http://www.businessesgrow.com/tag/psychology-and-social-media-2/
[16] Schaefer, Mark. "We are all standing on digital quicksand" (http://www.businessesgrow.com/tag/psychology-and-social-media-2/). Retrieved October 25, 2012.
[17] Gonzalez, Al. "Leading through Negativity" (http://www.aboutleaders.com/bid/160556/Leading-through-Negativity). Retrieved October 25, 2012.
Further reading
Early negativity bias occurring prior to experiencing of emotion: An ERP study (http://psycnet.apa.org/index.cfm?fa=buy.optionToBuy&id=2011-02799-002). Dong, G. et al. (2011)
Switch: How to Change Things When Change Is Hard. Written by Chip Heath and Dan Heath
Bad is Stronger Than Good (http://www.carlsonmba.umn.edu/Assets/71516.pdf) Article
A note on negativity bias and framing response asymmetry (http://www.springerlink.com/content/f4h28734v301068u/). Sonsino, D. (2011)
External links
Theory and Research (http://www.acrwebsite.org/search/view-conference-proceedings.aspx?Id=6335). Information on the theoretical aspects of negativity bias.
Negativity Bias: description (http://www.youtube.com/watch?v=E09077HRurg). Video.
Neglect of probability
The neglect of probability, a type of cognitive bias, is the tendency to disregard probability entirely when making a decision under uncertainty; it is one simple way in which people regularly violate the normative rules of decision making. Small risks are typically either neglected entirely or hugely overrated; the continuum between the extremes is ignored. The term probability neglect was coined by Cass Sunstein.[1]
There are many related ways in which people violate the normative rules of decision making with regard to probability, including hindsight bias, the neglect of prior base rates, and the gambler's fallacy. This bias is notably different from those, however, because here the actor completely disregards probability when deciding, rather than using probability incorrectly.
Baron, Granato, Spranca, and Teubal (1993) studied the bias by asking children the following question:
Susan and Jennifer are arguing about whether they should wear seat belts when they ride in a car.
Susan says that you should. Jennifer says you shouldn't... Jennifer says that she heard of an accident
where a car fell into a lake and a woman was kept from getting out in time because of wearing her seat
belt, and another accident where a seat belt kept someone from getting out of the car in time when there
was a fire. What do you think about this?
Jonathan Baron (2000) notes that subject X responded in the following manner:
A: Well, in that case I don't think you should wear a seat belt.
Q (interviewer): How do you know when that's gonna happen?
A: Like, just hope it doesn't!
Q: So, should you or shouldn't you wear seat belts?
A: Well, tell-you-the-truth we should wear seat belts.
Q: How come?
A: Just in case of an accident. You won't get hurt as much as you will if you didn't wear a seat belt.
Q: OK, well what about these kinds of things, when people get trapped?
A: I don't think you should, in that case.
It is clear that subject X completely disregards the probability of an accident happening versus the probability of getting hurt by the seat belt in making his decision. A normative model for this decision would advise the use of expected-utility theory to decide which option is likely to maximize utility. This would involve weighting the change in utility under each option by the probability of each outcome, something that subject X ignores.
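The normative calculation described above can be sketched with a short example. All probabilities and utilities below are hypothetical, chosen only to illustrate how expected-utility theory weights each outcome by its likelihood:

```python
# Illustrative expected-utility comparison for the seat-belt decision.
# All probabilities and utilities are hypothetical, chosen only to show
# how a normative model weights each outcome by its likelihood.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs; probabilities sum to 1."""
    return sum(p * u for p, u in outcomes)

# Wearing a seat belt: crashes are rare; belts usually reduce injury,
# and only very rarely trap the occupant (the case subject X fixates on).
wear = [
    (0.989, 0.0),     # no accident
    (0.010, -10.0),   # accident, belt reduces injury
    (0.001, -100.0),  # rare case: belt traps the occupant
]

# Not wearing: the rare "trapped" outcome is avoided, but ordinary
# crashes are far more harmful to an unrestrained occupant.
no_wear = [
    (0.989, 0.0),     # no accident
    (0.011, -60.0),   # accident, no belt
]

print(round(expected_utility(wear), 3))     # -0.2
print(round(expected_utility(no_wear), 3))  # -0.66
```

Weighting by probability makes wearing the belt the better option despite the trapped-occupant outcome, which subject X treats as decisive on its own.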
Another subject responded to the same question:
A: If you have a long trip, you wear seat belts half way.
Q: Which is more likely?
A: That you'll go flyin' through the windshield.
Q: Doesn't that mean you should wear them all the time?
A: No, it doesn't mean that.
Q: How do you know if you're gonna have one kind of accident or the other?
A: You don't know. You just hope and pray that you don't.
Here again, the subject disregards probability in making the decision, treating each possible outcome as equally likely in his reasoning.
Baron (2000) suggests that adults may exhibit the bias as well, especially when it comes to difficult decisions, such as medical decisions made under uncertainty. The bias could make actors drastically violate expected-utility theory in their decision making, especially when a decision must be made in which one possible outcome has a much lower or higher utility but a small probability of occurring (e.g., in medical or gambling situations). In this respect, the neglect of probability bias is similar to the neglect of prior base rates effect.
In another example of near-total neglect of probability, Rottenstreich and Hsee (2001) found that the typical subject was willing to pay $10 to avoid a 99% chance of a painful electric shock, but $7 to avoid a 1% chance of the same shock. (They suggest that probability is more likely to be neglected when the outcomes are emotionally arousing.)
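The gap between these two payments can be made concrete with a simple risk-neutral expected-value baseline (an illustrative standard, not the authors' model):

```python
# Rottenstreich & Hsee (2001): median willingness to pay (WTP) to avoid
# an electric shock. Under risk-neutral expected-value reasoning, WTP
# should scale linearly with the probability of the shock.

wtp_99 = 10.0  # observed: pay $10 to avoid a 99% chance of the shock
wtp_01 = 7.0   # observed: pay $7 to avoid a 1% chance of the same shock

# Treat the 99% payment as revealing the subjective cost of the shock...
implied_cost = wtp_99 / 0.99            # about $10.10
# ...then the normative prediction for the 1% case is tiny:
predicted_wtp_01 = 0.01 * implied_cost  # about $0.10

print(round(predicted_wtp_01, 2))        # 0.1
print(round(wtp_01 / predicted_wtp_01))  # 69: observed WTP is ~69x the prediction
```

The observed $7 payment is roughly 69 times what linear scaling in probability would predict, which is why the authors describe the result as near-total neglect of probability.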
References
Baron, J. (2000). Thinking and Deciding (3rd ed.). Cambridge University Press. pp. 260-261.
Rottenstreich, Y. & Hsee, C. K. (2001). Money, kisses, and electric shocks: on the affective psychology of risk. Psychological Science, 12, 185-190.
[1] Kahneman, D. (2011). Thinking, Fast and Slow (http://www.penguin.co.uk/nf/Book/BookDisplay/0,,9780141918921,00.html), Allen Lane, pp. 143 f.
Normalcy bias
The normalcy bias, or normality bias, refers to a mental state people enter when facing a disaster. It causes people to underestimate both the possibility of a disaster occurring and its possible effects. This often results in situations where people fail to adequately prepare for a disaster and, on a larger scale, in governments failing to include the populace in their disaster preparations. The assumption made under the normalcy bias is that since a disaster has never occurred, it never will occur. The bias also leaves people unable to cope with a disaster once it occurs: people with a normalcy bias have difficulty reacting to something they have not experienced before. People also tend to interpret warnings in the most optimistic way possible, seizing on any ambiguities to infer a less serious situation.[1]
Possible causes
The normalcy bias may be caused in part by the way the brain processes new data. Research suggests that even when the brain is calm, it takes 8-10 seconds to process new information. Stress slows the process, and when the brain cannot find an acceptable response to a situation, it fixates on a single, sometimes default, solution that may or may not be correct. An evolutionary reason for this response could be that paralysis gives an animal a better chance of surviving an attack; predators are less likely to eat prey that isn't struggling.[2]
Effects
The normalcy bias often results in unnecessary deaths in disaster situations. The lack of preparation for disasters often leads to inadequate shelter, supplies, and evacuation plans. Even when all these things are in place, individuals with a normalcy bias often refuse to leave their homes. Studies have shown that more than 70% of people check with others before deciding to evacuate.[2]
The normalcy bias also causes people to drastically underestimate the effects of the disaster. They assume that everything will be all right, even while information from the radio, television, or neighbors gives them reason to believe there is a risk. This creates a cognitive dissonance that they then must work to eliminate. Some manage to eliminate it by refusing to believe new warnings and refusing to evacuate (maintaining the normalcy bias), while others eliminate the dissonance by escaping the danger. The possibility that some may refuse to evacuate causes significant problems in disaster planning.[3]
Examples
The Nazi genocide of millions of Jews is a notable example. Even after learning that friends and family were being taken against their will, many in the Jewish community stayed put, refusing to believe that something was "going on." Given the extreme nature of the situation, it is understandable why most would deny it.
Another example is the Little Sioux Scout camp tragedy of June 2008: despite being in the middle of "Tornado Alley," the campground had no tornado shelter to offer protection from a strong tornado.[4]
New Orleans before Hurricane Katrina provides a further example. Inadequate government and citizen preparation and the denial that the levees could fail reflected the normalcy bias, as did the thousands of people who refused to evacuate.
Prevention
The negative effects of the normalcy bias can be combated through the four stages of disaster response:
preparation, including publicly acknowledging the possibility of disaster and forming contingency plans;
warning, including issuing clear, unambiguous, and frequent warnings and helping the public to understand and believe them;
impact, the stage at which the contingency plans take effect and emergency services, rescue teams, and disaster relief teams work in tandem;
aftermath, reestablishing equilibrium after the fact by providing supplies and aid to those in need.
References
[1] "Finding Something to Do: the Disaster Continuity Care Model".
[2] "How to Survive".
[3] "Information Technology for Advancement of Evacuation" (http://www.ysk.nilim.go.jp/kakubu/engan/engan/taigai/hapyoronbun/07-17.pdf).
[4] "Thoughts about Tornadoes and Camping Safety after the Iowa Tragedy on June 11, 2008" (http://www.flame.org/~cdoswell/scout_tragedy/scout_tragedy_2008.html).
External links
Doswell, Chuck. "Thoughts about Tornadoes and Camping Safety after the Iowa Tragedy on June 11, 2008." Flame.org. 26 July 2008. http://web.archive.org/web/20081120003313/http://www.flame.org/~cdoswell/Scout_tragedy/Scout_tragedy_2008.html
Oda, Katsuya. "Information Technology for Advancement of Evacuation." http://www.ysk.nilim.go.jp/kakubu/engan/engan/taigai/hapyoronbun/07-17.pdf
Ripley, Amanda. "How to Get Out Alive." Time 25 Apr. 2005. http://www.time.com/time/magazine/article/0,9171,1053663,00.html
Valentine, Pamela V., and Thomas E. Smith. "Finding Something to Do: the Disaster Continuity Care Model." Brief Treatment and Crisis Intervention 2 (2002): 183-196.
Observer-expectancy effect
"Participant-observer effect" redirects here.
The observer-expectancy effect (also called the experimenter-expectancy effect, expectancy bias, observer
effect, or experimenter effect) is a form of reactivity in which a researcher's cognitive bias causes them to
unconsciously influence the participants of an experiment. It is a significant threat to a study's internal validity, and
is therefore typically controlled using a double-blind experimental design.
An example of the observer-expectancy effect is demonstrated in music backmasking, in which hidden verbal messages are said to be audible when a recording is played backwards. Some people expect to hear hidden messages when reversing songs and therefore hear them, while to others the reversed audio sounds like nothing more than random noise. Often when a song is played backwards, a listener will fail to notice the "hidden" lyrics until they are explicitly pointed out, after which they seem obvious. Other prominent examples include facilitated communication and dowsing.
External links
Skeptic's Dictionary on the Experimenter Effect[1]
An article on expectancy effects in paranormal investigation[2]
Another article, by Rupert Sheldrake[3]
References
[1] http://skepdic.com/experimentereffect.html
[2] http://www.williamjames.com/Science/ESP.htm
[3] http://www.sheldrake.org/experiments/expectations/
Omission bias
The omission bias is an alleged type of cognitive bias: the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions). It is contentious whether this represents a systematic error in thinking or is supported by a substantive moral theory. For a consequentialist, judging harmful actions as worse than inaction would indeed be inconsistent, but deontological ethics may, and normally does, draw a moral distinction between doing and allowing.[1]
Spranca, Minsk, and Baron extended the omission bias to judgments of the morality of choices. In one scenario, John, a tennis player, faces a tough opponent the next day in a decisive match. John knows his opponent is allergic to a particular food. Subjects were presented with two conditions: John recommends the allergenic food to hurt his opponent's performance, or the opponent orders the allergenic food himself and John says nothing. A majority of people judged John's action of recommending the allergenic food to be more immoral than his inaction of not informing the opponent about the allergen.
References
[1] Frances Howard-Snyder, Doing vs. Allowing Harm (http://plato.stanford.edu/entries/doing-allowing/) (Stanford Encyclopaedia of Philosophy)
Baron, Jonathan. (1988, 1994, 2000). Thinking and Deciding. Cambridge University Press.
Asch, D. A., Baron, J., Hershey, J. C., Kunreuther, H., Meszaros, J. R., Ritov, I., & Spranca, M. (1994). Omission bias and pertussis vaccination. Medical Decision Making, 14, 118-124.
Optimism bias
The optimism bias (also known as unrealistic or comparative optimism) is a bias that causes a person to believe that they are less at risk of experiencing a negative event than others. Four factors cause a person to be optimistically biased: their desired end state, their cognitive mechanisms, the information they have about themselves versus others, and their overall mood.[1]
The optimistic bias is seen in a number of situations. For example: people believing that they are less at risk of being a crime victim,[2] smokers believing that they are less likely to contract lung cancer or disease than other smokers, first-time bungee jumpers believing that they are less at risk of an injury than other jumpers,[3] or traders who think they are less exposed to losses in the markets.[4]
Although the optimism bias occurs for both positive events, such as believing oneself to be more financially successful than others, and negative events, such as being less likely to have a drinking problem, there is more research and evidence suggesting that the bias is stronger for negative events.[1][5] Different consequences result from these two types of events, however: positive events often lead to feelings of well-being and self-esteem, while negative events lead to consequences involving more risk, such as engaging in risky behaviors and not taking precautionary measures for safety.[1]
Measuring optimistic bias
The optimistic bias is typically measured through two determinants of risk: absolute risk, where individuals are asked to estimate their likelihood of experiencing a negative event compared to their actual chance of experiencing it (comparison against self), and comparative risk, where individuals are asked to estimate their likelihood of experiencing a negative event (their personal risk estimate) compared to others of the same age and sex (a target risk estimate).[5][6] Problems can occur when trying to measure absolute risk because it is extremely difficult to determine the actual risk statistic for a person.[6][7] Therefore, the optimistic bias is primarily measured in comparative risk forms, where people compare themselves against others, through direct and indirect comparisons.[3]
Direct comparisons ask whether an individual's own risk of experiencing an event is lower than, higher than, or equal to someone else's risk, while indirect comparisons ask individuals to provide separate estimates of their own risk of experiencing an event and of others' risk of experiencing the same event.[6][8]
After obtaining scores, researchers can use the information to determine whether there is a difference between the average risk estimate of the individual and the average risk estimate of their peers. Generally, for negative events, the mean risk of an individual appears lower than the risk estimate of others.[6] This is then used to demonstrate the bias's effect. The optimistic bias can only be defined at the group level, because at the individual level the positive assessment could be true.[5] Likewise, difficulties can arise in measurement procedures, as it is difficult to determine when someone is being optimistic, realistic, or pessimistic.[6][8] Research suggests that the bias comes from an overestimate of group risks rather than an underestimate of one's own risk.[6]
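The group-level logic described above can be sketched as follows, using hypothetical comparative ratings where negative numbers mean "below-average risk":

```python
# Group-level check for optimistic bias using direct comparative ratings.
# Each (hypothetical) participant rates their own risk of a negative event
# relative to the average peer, from -3 ("much below average") to
# +3 ("much above average"). A well-calibrated group should average near 0;
# a reliably negative mean indicates optimistic bias, even though any
# single below-average rating could be accurate for that individual.

ratings = [-2, -1, 0, -3, -1, -2, 1, -1, 0, -2]

mean_rating = sum(ratings) / len(ratings)
print(mean_rating)  # -1.1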
Factors of optimistic bias
The factors leading to the optimistic bias can be categorized into four groups: desired end states of comparative judgment, cognitive mechanisms, information about the self versus a target, and underlying affect.[1] These are explained in more detail below.
1. Desired end states of comparative judgment
Many explanations for the optimistic bias come from the goals that people want and the outcomes they wish to see.[1] People tend to view their risks as lower than others' because that is the outcome they would like to see. These explanations include self-enhancement, self-presentation, and perceived control.
Self-enhancement
Self-enhancement suggests that optimistic predictions are satisfying and that it feels good to think that positive events will happen.[1] People can control their anxiety and other negative emotions if they believe they are better off than others.[1] People tend to focus on finding information that supports what they want to see happen, rather than what will happen to them.[1] With regard to the optimistic bias, individuals perceive events more favorably because that is what they would like the outcome to be. This also suggests that people might lower their risks compared to others to make themselves look better than average: they are less at risk than others and therefore better.[1]
Self-presentation
Studies suggest that people attempt to establish and maintain a desired personal image in social situations. People are motivated to present themselves to others in a good light, and some researchers suggest that the optimistic bias is representative of self-presentational processes: people want to appear better off than others. This is not through conscious effort, however. In a study where participants believed their driving skills would be tested either in real life or in driving simulations, people who believed they were to be tested showed less optimistic bias and were more modest about their skills than individuals who would not be tested.[9] Studies also suggest that individuals who present themselves in a pessimistic and more negative light are generally less accepted by the rest of society.[10] This might contribute to overly optimistic attitudes.
Personal control/perceived control
People tend to be more optimistically biased when they believe they have more control over events than others.[1][7][11] For example, people are more likely to think that they will not be harmed in a car accident if they are driving the vehicle.[11] Another example: if someone believes that they have a lot of control over becoming infected with HIV, they are more likely to view their risk of contracting the disease as low.[6] Studies have suggested that the greater the perceived control someone has, the greater their optimistic bias.[11][12] Stemming from this, control is a stronger factor in personal risk assessments than in assessments of others.[7][11]
A meta-analysis reviewing the relationship between the optimistic bias and perceived control found that a number of moderators contribute to this relationship.[7] In previous research, participants from the United States generally had higher levels of optimistic bias relating to perceived control than those of other nationalities. Students also showed larger levels of the optimistic bias than non-students.[7] The format of the study also demonstrated differences in the relationship between perceived control and the optimistic bias: direct methods of measurement suggested greater perceived control and greater optimistic bias than indirect measures of the bias did.[7] The optimistic bias is strongest in situations where an individual needs to rely heavily on direct action and responsibility for situations.[7]
An opposite factor to perceived control is prior experience.[6] Prior experience is typically associated with less optimistic bias, which some studies suggest occurs either through a decrease in the perception of personal control or because prior experience makes it easier for individuals to imagine themselves at risk.[6][12] Prior experience suggests that events may be less controllable than previously believed.[6]
2. Cognitive mechanisms
The optimistic bias is possibly also influenced by three cognitive mechanisms that guide judgments and decision-making processes: the representativeness heuristic, singular target focus, and interpersonal distance.[1]
Representativeness heuristic
The estimates of likelihood associated with the optimistic bias are based on how closely an event matches a person's overall idea of that event.[1] Some researchers suggest that the representativeness heuristic is a reason for the optimistic bias: individuals tend to think in stereotypical categories rather than about their actual targets when making comparisons.[12] For example, when drivers are asked to think about a car accident, they are more likely to picture a bad driver rather than the average driver.[1] Individuals compare themselves with the negative elements that come to mind, rather than making an overall accurate comparison between themselves and another driver. Additionally, when individuals were asked to compare themselves to friends, they chose more vulnerable friends based on the events they were considering.[13] Individuals generally chose a specific friend based on whether they resembled a given example, rather than an average friend.[13] People find examples that relate directly to what they are asked, resulting in representativeness heuristics.
Singular target focus
One of the difficulties of the optimistic bias is that people know more about themselves than they do about others. While individuals know how to think about themselves as a single person, they still think of others as a generalized group, which leads to biased estimates and an inability to sufficiently understand their target or comparison group. Likewise, when making judgments and comparisons about their risk relative to others, people generally ignore the average person and focus primarily on their own feelings and experiences.[1]
Interpersonal distance
Perceived risk differences occur depending on how far or close a compared target is to the individual making a risk estimate.[1] The greater the perceived distance between the self and the comparison target, the greater the perceived difference in risk. When the comparison target is brought closer to the individual, risk estimates appear closer together than when the comparison target is someone more distant from the participant.[1] There is support for perceived social distance in determining the optimistic bias.[14] Comparisons of personal and target risk at the in-group level contribute to more perceived similarity than out-group comparisons, which lead to greater perceived differences.[14] In one study, researchers manipulated the social context of the comparison group, where participants made judgments for two different comparison targets: the typical student at their university and a typical student at another university. Their findings showed that not only did people work with the closer comparison first, they also gave ratings closer to their own for that group than for the "more different" group.[14]
Studies have also noticed that people demonstrate more optimistic bias when making comparisons with a vague individual, but the bias is reduced when the other is a familiar person, such as a friend or family member. This is determined by the information people have about the individuals closest to them, and their lack of the same information about other people.[5]
3. Information about self versus target
Individuals know much more about themselves than they do about others.[1] Because information about others is less available, information about the self versus others leads people to make specific conclusions about their own risk, but leaves them with a harder time drawing conclusions about the risks of others. This leads to differences in judgments and conclusions about self-risks compared to the risks of others, widening the gap of the optimistic bias.[1]
Person-positivity bias
Person-positivity bias is the tendency to evaluate an object more favorably the more it resembles an individual human being. Generally, the more a comparison target resembles a specific person, the more familiar it will be. Groups of people, however, are regarded as more abstract concepts, which leads to less favorable judgments. With regard to the optimistic bias, when people compare themselves to an average person, whether someone of the same sex or age, the target continues to be viewed as less human and less personified, resulting in less favorable comparisons between the self and others.[1]
Egocentric thinking
Egocentric thinking refers to how individuals know more of their own personal information and risk, which they can use to form judgments and make decisions. One difficulty, though, is that people have a large amount of knowledge about themselves but little knowledge about others. Therefore, when making decisions, people have to use other information available to them, such as population data, in order to learn more about their comparison group.[1] This can relate to the optimism bias because, while people are using the available information they have about themselves, they have more difficulty understanding correct information about others.[1] This self-centered thinking is seen most commonly in adolescents and college students, who generally think more about themselves than about others.[15]
It is also possible to escape egocentric thinking. In one study, researchers had one group of participants list all factors that influenced their chances of experiencing a variety of events, and then had a second group read the list. Those who read the list showed less optimistic bias in their own reports. It is possible that greater knowledge about others and their perceptions of their chances of risk brings the comparison group closer to the participant.[12]
Underestimating the average person's control
Also related to egocentric thinking, it is possible that individuals underestimate the amount of control the average person has. This is explained in two different ways:
1. People underestimate the control that others have in their lives.[12]
2. People completely overlook that others have control over their own outcomes.
For example, many smokers believe that they are taking all necessary precautionary measures so that they won't get lung cancer, such as smoking only once a day or using filtered cigarettes, and believe that others are not taking the same precautions. However, it is likely that many other smokers are doing the same things.[1]
4. Underlying affect
The last factor of optimistic bias is underlying affect and affective experience. Research has found that people show less optimistic bias when experiencing a negative mood and more optimistic bias when in a positive mood.[6] Sad moods reflect greater recall of negative events, which leads to more negative judgments, while positive moods promote happy memories and more positive feelings.[1] This suggests that overall negative moods, including depression, result in increased personal risk estimates but less optimistic bias overall.[6] Anxiety also leads to less optimistic bias, further suggesting that overall positive experiences and positive attitudes lead to more optimistic bias in events.[6]
Why do we care about the optimistic bias?
In health, the optimistic bias tends to prevent individuals from taking preventative measures for good health.[16] Therefore, researchers need to be aware of the optimistic bias and the ways it can prevent people from taking precautionary measures in life choices. For example, people who underestimate their comparative risk of heart disease know less about heart disease, and even after reading an article with more information, are still less concerned about their risk of heart disease.[8] Because the optimistic bias can be a strong force in decision-making, it is important to look at how risk perception is determined and how it results in preventative behaviors. Risk perceptions are particularly important for individual behaviors such as exercise, diet, and even sunscreen use.[17]
A large portion of risk prevention focuses on adolescents. Especially with health risk perception, adolescence is associated with an increased frequency of risky health-related behaviors such as smoking, drug use, and unsafe sex. While adolescents are aware of the risks, this awareness does not change their behavior.[18] Adolescents with a strong positive optimistic bias toward risky behaviors showed an overall increase in the optimistic bias with age.[16]
However, there are often methodological problems in these tests. Unconditional risk questions in cross-sectional studies are used consistently, leading to problems: they ask about the likelihood of an action occurring but do not determine whether there is an outcome, nor do they compare events that haven't happened to events that have.[17] Concerning vaccines, perceptions of those who have not been vaccinated are compared to the perceptions of those who have been. Other problems include the failure to know a person's perception of a risk.[17] Knowing this information will be helpful for continued research on optimistic bias and preventative behaviors.
Attempts to alter and eliminate optimistic bias
Studies have shown that it is very difficult to eliminate the optimistic bias. Some believe that trying to reduce it will encourage people to adopt health-protective behaviors, yet researchers suggest that it cannot be reduced, and that attempts to reduce it generally leave people even more optimistically biased.[19] In a study of four different tests intended to reduce the optimistic bias, researchers found that regardless of the approach (presenting lists of risk factors, having participants perceive themselves as inferior to others, asking participants to think of high-risk individuals, or giving attributes of why they were at risk), all attempts increased the bias rather than decreased it.[19] Although studies have tried to reduce the optimistic bias by reducing distance, overall it has remained.[14]
Although research suggests that the bias is very difficult to eliminate, some factors may help close the gap between an individual's risk estimate and that of their target risk group. First, placing the comparison group closer to the individual can reduce the optimistic bias: studies found that when individuals were asked to make comparisons between themselves and close friends, there was almost no difference in the estimated likelihood of an event occurring.[13] Additionally, actually experiencing an event leads to a decrease in the optimistic bias.[6] While this applies only to events with prior experience, knowing the previously unknown results in less optimism about the event not occurring.
Optimism bias in policy, planning, and management
Optimism bias influences decisions and forecasts in policy, planning, and management, e.g., the costs and
completion times of planned decisions tend to be underestimated and the benefits overestimated due to optimism
bias. The term planning fallacy for this effect was first proposed by Daniel Kahneman and Amos Tversky.
[20][21]
Reference class forecasting was developed by Oxford professor Bent Flyvbjerg to reduce optimism bias and increase
forecasting accuracy by framing decisions so they take into account available distributional information about
previous, comparable outcomes.
[22]
Daniel Kahneman, winner of the Nobel Prize in economics, calls the use of
reference class forecasting "the single most important piece of advice regarding how to increase accuracy in
forecasting."
[23]
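The mechanics of reference class forecasting can be sketched in a few lines of code. This is an illustrative sketch only: the historical overrun figures, the function name, and the risk threshold are invented for the example, not drawn from Flyvbjerg's published data.

```python
# Minimal sketch of reference class forecasting. The overrun figures,
# function name, and risk threshold are invented for illustration.

# Cost overrun ratios (actual / estimated) from comparable past projects:
past_overruns = [0.95, 1.10, 1.25, 1.40, 1.60, 1.80, 2.10]

def reference_class_forecast(inside_view_estimate, overruns, acceptable_risk=0.2):
    """Uplift an inside-view estimate so that it would have been exceeded
    in only `acceptable_risk` of the comparable past cases."""
    ranked = sorted(overruns)
    # pick the overrun ratio at the (1 - acceptable_risk) quantile
    idx = min(len(ranked) - 1, int((1 - acceptable_risk) * len(ranked)))
    return round(inside_view_estimate * ranked[idx], 2)

# An inside-view estimate of 100 becomes 180 with these invented data:
print(reference_class_forecast(100.0, past_overruns))  # 180.0
```

The point of the technique is visible in the sketch: the forecast is anchored in the distribution of previous, comparable outcomes (the "outside view") rather than in the planner's own optimistic estimate.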
Pessimistic bias
Researchers have not coined the term pessimism bias, because the principles of the optimistic bias continue to be in
effect in situations where individuals regard themselves as worse off than others.
[1]
Optimism may occur from either
a distortion of personal estimates, representing personal optimism, or a distortion for others, representing personal
pessimism,
[1]
making the term "pessimistic bias" obsolete.
References
[1] Shepperd, James A.; Patrick Carroll, Jodi Grace, Meredith Terry (2002). "Exploring the Causes of Comparative Optimism" (http://www.psych.ufl.edu/~shepperd/articles/PsychBelgica2002.pdf). Psychologica Belgica 42: 65–98.
[2] Chapin, John; Grace Coleman (2009). "Optimistic Bias: What you Think, What you Know, or Whom you Know?" (http://findarticles.com/p/articles/mi_6894/is_1_11/ai_n31528106/). North American Journal of Psychology 11 (1): 121–132.
[3] Weinstein, Neil D.; William M. Klein (1996). "Unrealistic Optimism: Present and Future". Journal of Social and Clinical Psychology 15 (1): 1–8. doi:10.1521/jscp.1996.15.1.1.
[4] Elder, Alexander (1993). Trading for a Living: Psychology, Trading Tactics, Money Management. John Wiley & Sons. Intro, sections "Psychology is the Key" and "The Odds are against You", and Part I "Individual Psychology", Section 5 "Fantasy versus Reality". ISBN 0-471-59224-2.
[5] Gouveia, Susana O.; Valerie Clarke (2001). "Optimistic bias for negative and positive events". Health Education 101 (5): 228–234. doi:10.1108/09654280110402080.
[6] Helweg-Larsen, Marie; James A. Shepperd (2001). "Do Moderators of the Optimistic Bias Affect Personal or Target Risk Estimates? A Review of the Literature" (http://users.dickinson.edu/~helwegm/pdfversion/do_moderators_of_the_optimistic_bias.pdf). Personality and Social Psychology Review 5 (1): 74–95. doi:10.1207/S15327957PSPR0501_5.
[7] Klein, Cynthia T. F.; Marie Helweg-Larsen (2002). "Perceived Control and the Optimistic Bias: A Meta-analytic Review" (http://www2.dickinson.edu/departments/psych/helwegm/PDFVersion/Perceived_control_and_the_optimistic.pdf). Psychology and Health 17 (4): 437–446. doi:10.1080/0887044022000004920.
[8] Radcliffe, Nathan M.; William M. P. Klein (2002). "Dispositional, Unrealistic, and Comparative Optimism: Differential Relations with the Knowledge and Processing of Risk Information and Beliefs about Personal Risk". Personality and Social Psychology Bulletin 28: 836–846. doi:10.1177/0146167202289012.
[9] McKenna, F. P.; R. A. Stanier, C. Lewis (1991). "Factors underlying illusionary self-assessment of driving skill in males and females". Accident Analysis and Prevention 23: 45–52. doi:10.1016/0001-4575(91)90034-3. PMID 2021403.
[10] Helweg-Larsen, Marie; Pedram Sadeghian, Mary S. Webb (2002). "The stigma of being pessimistically biased" (http://users.dickinson.edu/~helwegm/PDFVersion/The_Stigma_of_Being_Pessimistically_Biased.pdf). Journal of Social and Clinical Psychology 21 (1): 92–107.
[11] Harris, Peter (1996). "Sufficient grounds for optimism?: The relationship between perceived controllability and optimistic bias" (http://search.proquest.com/docview/61536420/136283117CD63890645/1?accountid=10506). Journal of Social and Clinical Psychology 15 (1): 9–52.
[12] Weinstein, Neil D. (1980). "Unrealistic optimism about future life events". Journal of Personality and Social Psychology 39: 806–820. doi:10.1037/0022-3514.39.5.806.
[13] Perloff, Linda S.; Barbara K. Fetzer (1986). "Self-other judgments and perceived vulnerability to victimization". Journal of Personality and Social Psychology 50: 502–510. doi:10.1037/0022-3514.50.3.502.
[14] Harris, P.; Wendy Middleton, Richard Joiner (2000). "The typical student as an in-group member: eliminating optimistic bias by reducing social distance". European Journal of Social Psychology 30: 235–253. doi:10.1002/(SICI)1099-0992(200003/04)30:2<235::AID-EJSP990>3.0.CO;2-G.
[15] Weinstein, Neil D. (1987). "Unrealistic Optimism About Susceptibility in Health Problems: Conclusions from a Community-Wide Sample". Journal of Behavioral Medicine 10 (5): 481–500. doi:10.1007/BF00846146. PMID 3430590.
[16] Brånström, Richard; Yvonne Brandberg (2010). "Health Risk Perception, Optimistic Bias, and Personal Satisfaction" (http://www.ajhb.org/ISSUES/2010/2/02MarApr0710Branstrom.pdf). American Journal of Health Behavior 34 (2): 197–205. PMID 19814599.
[17] Brewer, Noel T.; Gretchen B. Chapman, Fredrick X. Gibbons, Meg Gerrard, Kevin D. McCaul, Neil D. Weinstein (2007). "Meta-analysis of the Relationship Between Risk Perception and Health Behavior: The Example of Vaccination" (http://www.unc.edu/~ntbrewer/pubs/2007, brewer, chpaman, gibbons, et al. pdf). Health Psychology 26 (2): 136–145. doi:10.1037/0278-6133.26.2.136.
[18] Gerrard, Meg; Frederick X. Gibbons, Alida C. Benthin, Robert M. Hessling (1996). "A Longitudinal Study of the Reciprocal Nature of Risk Behaviors and Cognitions in Adolescents: What You Do Shapes What You Think, and Vice Versa" (http://faculty.weber.edu/eamsel/Classes/Adolescent Risk taking/Lectures/4-5 - Cognitive/Gerrard et al (1996).pdf). Health Psychology 15 (5): 344–354. PMID 8891713.
[19] Weinstein, Neil D.; William M. Klein (1995). "Resistance of Personal Risk Perceptions to Debiasing Interventions". Health Psychology 14 (2): 132–140. doi:10.1037/0278-6133.14.2.132. PMID 7789348.
[20] Pezzo, Mark V.; Litman, Jordan A.; Pezzo, Stephanie P. (2006). "On the distinction between yuppies and hippies: Individual differences in prediction biases for planning future tasks". Personality and Individual Differences 41 (7): 1359–1371. doi:10.1016/j.paid.2006.03.029. ISSN 0191-8869.
[21] Kahneman, Daniel; Tversky, Amos (1979). "Intuitive prediction: biases and corrective procedures". TIMS Studies in Management Science 12: 313–327.
[22] Flyvbjerg, B. (2008). "Curbing Optimism Bias and Strategic Misrepresentation in Planning: Reference Class Forecasting in Practice" (http://www.sbs.ox.ac.uk/centres/bt/Documents/Curbing Optimism Bias and Strategic Misrepresentation.pdf). European Planning Studies 16 (1): 3–21.
[23] Kahneman, Daniel (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux. p. 251.
Ostrich effect
In behavioral finance, the ostrich effect is the avoidance of apparently risky financial situations by pretending they
do not exist. The name comes from the common (but false)
[1]
legend that ostriches bury their heads in the sand to
avoid danger.
Galai and Sade (2006) explain differences in returns in the fixed income market by using a psychological
explanation, which they name the "ostrich effect," attributing this anomalous behavior to an aversion to receiving
information on potential interim losses.
[2]
They also provide evidence that traffic to a leading financial portal
in Israel was positively related to the equity market. Later research by George Loewenstein and Duane Seppi
determined that people in Scandinavia looked up the value of their investments 50% to 80% less often during bad
markets.
[3]
References
[1] Karl Kruszelnicki, Ostrich Head in Sand (http://www.abc.net.au/science/articles/2006/11/02/1777947.htm), ABC Science: In Depth.
[2] Galai, Dan; Sade, Orly (2006). "The "Ostrich Effect" and the Relationship between the Liquidity and the Yields of Financial Assets". Journal of Business 79 (5).
[3] Zweig, Jason (September 13, 2008). "Should You Fear the Ostrich Effect?". The Wall Street Journal: p. B1.
Outcome bias
The outcome bias is an error made in evaluating the quality of a decision when the outcome of that decision is
already known.
Overview
One will often judge a past decision by its ultimate outcome rather than by the quality of the decision at the
time it was made, given what was known at that time. This is an error because no decision maker ever knows
whether or not a calculated risk will turn out for the best. The actual outcome of the decision will often be
determined by chance, with some risks working out and others not. Individuals whose judgments are influenced by
outcome bias are seemingly holding decision makers responsible for events beyond their control.
Baron and Hershey (1988) presented subjects with hypothetical situations in order to test this.
[1]
One such example
involved a surgeon deciding whether or not to do a risky surgery on a patient. The surgery had a known probability
of success. Subjects were presented with either a good or bad outcome (in this case living or dying), and asked to
rate the quality of the surgeon's pre-operation decision. Those presented with bad outcomes rated the decision worse
than those who had good outcomes.
The reason why an individual makes this mistake is that he or she will incorporate presently available information
when evaluating a past decision. To avoid the influence of outcome bias, one should evaluate a decision by ignoring
information collected after the fact and focusing on what the right answer was, given the information available at the
time the decision was made.
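The distinction between a good decision and a good outcome can be made concrete with a small expected-value calculation. The probability and payoffs below are invented for illustration, loosely echoing the risky-surgery example.

```python
# Illustrative expected-value comparison; the probability and payoffs
# are invented for the example.

P_SUCCESS = 0.70      # known probability the risky option succeeds
VALUE_SUCCESS = 10.0  # payoff if it succeeds
VALUE_FAILURE = -5.0  # payoff if it fails
VALUE_SAFE = 2.0      # payoff of the safe alternative

ev_risky = P_SUCCESS * VALUE_SUCCESS + (1 - P_SUCCESS) * VALUE_FAILURE

# Ex ante, the risky option is the better decision (5.5 > 2.0). That
# judgment does not change in the 30% of cases where it happens to fail:
# the outcome is chance, but the decision quality was fixed at decision time.
better_choice = "risky" if ev_risky > VALUE_SAFE else "safe"
print(ev_risky, better_choice)
```

A judge influenced by outcome bias would rate the same choice well after a success and poorly after a failure, even though the numbers above, and hence the quality of the decision, are identical in both cases.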
References
[1] Baron J. & Hershey J.C. (1988). Outcome bias in decision evaluation. Journal of Personality and Social Psychology. Vol 54(4) Apr, 569-579.
Overconfidence effect
The overconfidence effect is a well-established bias in which someone's subjective confidence in their judgments is
reliably greater than their objective accuracy, especially when confidence is relatively high.
[1]
For example, in some
quizzes, people rate their answers as "99% certain" but are wrong 40% of the time. It has been proposed that a
metacognitive trait mediates the accuracy of confidence judgments,
[2]
but this trait's relationship to variations in
cognitive ability and personality remains uncertain.
[1]
Overconfidence is one example of a miscalibration of
subjective probabilities.
Demonstration
The most common way in which overconfidence has been studied is by asking people how confident they are of
specific beliefs they hold or answers they provide. The data show that confidence systematically exceeds accuracy,
implying people are more sure that they are correct than they deserve to be. If human confidence had perfect
calibration, judgments with 100% confidence would be correct 100% of the time, 90% confidence correct 90% of the
time, and so on for the other levels of confidence. By contrast, the key finding is that confidence exceeds accuracy so
long as the subject is answering hard questions about an unfamiliar topic. For example, in a spelling task, subjects
were correct about 80% of the time when they were "100% certain."
[3]
Put another way, the error rate was 20% when
subjects expected it to be 0%. In a series where subjects made true-or-false responses to general knowledge
statements, they were overconfident at all levels. When they were 100% certain of their answer to a question, they
were wrong 20% of the time.
[4]
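Calibration of this kind can be checked mechanically by grouping answers by stated confidence and comparing each group's stated confidence with its observed accuracy. The following sketch uses invented (confidence, correct) pairs; with perfect calibration every gap would be zero.

```python
# Sketch of a calibration check: bucket answers by stated confidence and
# compare each bucket's stated confidence with its observed accuracy.
# The (confidence, correct) pairs are invented for illustration.
from collections import defaultdict

answers = [
    (1.0, True), (1.0, True), (1.0, True), (1.0, False), (1.0, True),
    (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.6, True), (0.6, False), (0.6, False),
]

def calibration_table(pairs):
    buckets = defaultdict(list)
    for confidence, correct in pairs:
        buckets[confidence].append(correct)
    # With perfect calibration, accuracy equals confidence in every bucket.
    return {c: sum(v) / len(v) for c, v in sorted(buckets.items())}

for confidence, accuracy in calibration_table(answers).items():
    gap = confidence - accuracy  # a positive gap indicates overconfidence
    print(f"stated {confidence:.0%} -> observed {accuracy:.0%} (gap {gap:+.0%})")
```

With these made-up data, answers rated "100% certain" are right only 80% of the time, the same pattern the spelling and general-knowledge studies report.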
In a confidence-intervals task, where subjects had to judge quantities such as the total egg production of the U.S. or
the total number of physicians and surgeons in the Boston Yellow Pages, they expected an error rate of 2% when
their real error rate was 46%.
[5]
Once subjects had been thoroughly warned about the bias, they still showed a high
degree of overconfidence.
Overprecision is the excessive confidence that one knows the truth. For reviews, see Harvey (1997) or Hoffrage
(2004).
[6][7]
Much of the evidence for overprecision comes from studies in which participants are asked about their
confidence that individual items are correct. This paradigm, while useful, cannot distinguish overestimation from
overprecision; they are one and the same in these item-confidence judgments. After making a series of
item-confidence judgments, if people try to estimate the number of items they got right, they do not tend to
systematically overestimate their scores. The average of their item-confidence judgments exceeds the count of items
they claim to have gotten right.
[8]
One possible explanation for this is that item-confidence judgments were inflated
by overprecision, and that their judgments do not demonstrate systematic overestimation.
Confidence intervals
The strongest evidence of overprecision comes from studies in which participants are asked to indicate how precise
their knowledge is by specifying a 90% confidence interval around estimates of specific quantities. If people were
perfectly calibrated, their 90% confidence intervals would include the correct answer 90% of the time.
[5]
In fact, hit
rates are often as low as 50%, suggesting people have drawn their confidence intervals too narrowly, implying that
they think their knowledge is more accurate than it actually is.
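The hit rate in such a confidence-interval task is simple to compute: count how many stated intervals contain the true value. The intervals and true values below are invented for illustration.

```python
# Sketch of measuring 90% confidence-interval coverage ("hit rate").
# The intervals and true values are invented; with perfect calibration
# roughly 90% of the intervals would contain the truth.

estimates = [  # (low, high, true_value)
    (50, 150, 120),
    (10, 20, 25),
    (1000, 2000, 2500),
    (5, 9, 7),
    (100, 300, 410),
]

hits = sum(low <= truth <= high for (low, high, truth) in estimates)
hit_rate = hits / len(estimates)
print(f"hit rate: {hit_rate:.0%}")  # 40% here, far below the stated 90%
```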
Planning fallacy
The planning fallacy describes the tendency for people to overestimate their rate of work or to underestimate how
long it will take them to get things done.
[9]
It is strongest for long and complicated tasks, and disappears or reverses
for simple tasks that are quick to complete.
Illusion of control
Illusion of control describes the tendency for people to behave as if they might have some control when in fact they
have none.
[10]
However, evidence does not support the notion that people systematically overestimate how much
control they have; when they have a great deal of control, people tend to underestimate how much control they
have.
[11]
Contrary evidence
Wishful thinking effects, in which people overestimate the likelihood of an event because of its desirability are
relatively rare.
[12]
This may be in part because people engage in more defensive pessimism in advance of important
outcomes,
[13]
in an attempt to reduce the disappointment that follows overly optimistic predictions.
[14]
Overplacement
Overplacement is the false belief that one is better than others. For a review, see Alicke and Govorun (2005).
[15]
Better-than-average effects
Perhaps the most celebrated better-than-average finding is Svenson's (1981) finding that 93% of American drivers
rate themselves as better than the median.
[16]
The frequency with which school systems claim their students
outperform national averages has been dubbed the Lake Wobegon effect, after Garrison Keillor's apocryphal town
in which all the children are above average.
[17]
Overplacement has likewise been documented in a wide variety of
other circumstances.
[18]
Kruger (1999), however, showed that this effect is limited to easy tasks in which success is
common or in which people feel competent. For difficult tasks, the effect reverses itself and people believe they are
worse than others.
[19]
Comparative-optimism effects
Some researchers have claimed that people think good things are more likely to happen to them than to others,
whereas bad events are less likely to happen to them than to others.
[20]
But others (Chambers & Windschitl, 2004;
Chambers, Windschitl, & Suls, 2003; Kruger & Burrus, 2004) have pointed out that prior work tended to examine
good outcomes that happened to be common (such as owning one's own home) and bad outcomes that happened to
be rare (such as being struck by lightning).
[21][22][23]
Event frequency accounts for a proportion of prior findings of
comparative optimism. People think common events (such as living past 70) are more likely to happen to them than
to others, and rare events (such as living past 100) are less likely to happen to them than to others.
Positive illusions
Taylor and Brown (1988) have argued that people cling to overly positive beliefs about themselves, illusions of
control, and beliefs in false superiority, because it helps them cope and thrive.
[24]
While there is some evidence that
optimistic beliefs are correlated with better life outcomes, most of the research documenting such links is vulnerable
to the alternative explanation that their forecasts are accurate. The cancer patients who are most optimistic about
their survival chances are optimistic because they have good reason to be.
Contrary evidence
Recent work has critiqued the methodology used in older research on overplacement, calling some of the effects
documented in prior research into question.
[25]
Practical implications
"Overconfident professionals sincerely believe they have expertise, act as experts and look like experts. You will have to struggle to
remind yourself that they may be in the grip of an illusion."
Daniel Kahneman
[26]
Overconfidence has been called the most pervasive and potentially catastrophic of all the cognitive biases to which
human beings fall victim.
[27]
It has been blamed for lawsuits, strikes, wars, and stock market bubbles and crashes.
Strikes, lawsuits, and wars could arise from overplacement. If plaintiffs and defendants were prone to believe that
they were more deserving, fair, and righteous than their legal opponents, that could help account for the persistence
of inefficient enduring legal disputes.
[28]
If corporations and unions were prone to believe that they were stronger
and more justified than the other side, that could contribute to their willingness to endure labor strikes.
[29]
If nations
were prone to believe that their militaries were stronger than were those of other nations, that could explain their
willingness to go to war.
[30]
Overprecision could have important implications for investing behavior and stock market trading. Because
Bayesians cannot agree to disagree,
[31]
classical finance theory has trouble explaining why, if stock market traders
are fully rational Bayesians, there is so much trading in the stock market. Overprecision might be one answer.
[32]
If
market actors are too sure that their estimates of an asset's value are correct, they will be too willing to trade with others
who have different information than they do.
Oskamp (1965) tested groups of clinical psychologists and psychology students on a multiple-choice task in which
they drew conclusions from a case study.
[33]
Along with their answers, subjects gave a confidence rating in the form
of a percentage likelihood of being correct. This allowed confidence to be compared against accuracy. As the
subjects were given more information about the case study, their confidence increased from 33% to 53%. However,
their accuracy did not significantly improve, staying under 30%. Hence this experiment demonstrated
overconfidence that increased as the subjects had more information on which to base their judgment.
[33]
Even if there is no general tendency toward overconfidence, social dynamics and adverse selection could
conceivably promote it. For instance, those most likely to have the courage to start a new business are those who
most overplace their abilities relative to those of other potential entrants. And if voters find confident leaders more
credible, then contenders for leadership learn that they should express more confidence than their opponents in order
to win election.
[34]
Overconfidence can be beneficial to individual self-esteem as well as giving an individual the will to succeed in their
desired goal. Simply believing in oneself may carry one's endeavours further than those of people who do not believe
in themselves.
[35]
Related biases
Overconfidence bias often serves to increase the effects of escalating commitment, causing decision makers to
refuse to withdraw from a losing situation, or to continue to throw good money, effort, time and other resources
after bad investments.
People often tend to ignore base rates or undervalue their effect. For example, if one is competing against
individuals who are already winners of previous competitions, one's odds of winning should be adjusted
downward considerably. People tend to fail to do so sufficiently.
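A base-rate adjustment of this kind is just an application of Bayes' rule. Every number in the sketch below is invented for illustration.

```python
# Sketch of a base-rate correction with Bayes' rule; all numbers are
# invented. Suppose 1 entrant in 8 wins (the base rate in a field of
# previous winners), and that feeling confident is reported by 80% of
# eventual winners but also by 60% of eventual losers.

p_win = 1 / 8                # base rate of winning in this field
p_feel_given_win = 0.80
p_feel_given_lose = 0.60

p_feel = p_feel_given_win * p_win + p_feel_given_lose * (1 - p_win)
p_win_given_feel = p_feel_given_win * p_win / p_feel

# The feeling barely moves the odds: the posterior stays near the base
# rate, far below the 50% a confident competitor might assume.
print(round(p_win_given_feel, 3))  # 0.16
```

Because the confident feeling is common among losers as well as winners, it is weak evidence, and the posterior probability of winning stays close to the 1-in-8 base rate rather than the intuitive, unadjusted estimate.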
Core self-evaluations
Very high levels of core self-evaluations (CSE), a stable personality trait composed of locus of control, neuroticism,
self-efficacy, and self-esteem,
[36]
may lead to the overconfidence effect. People who have high core self-evaluations
will think positively of themselves and be confident in their own abilities,
[36]
although extremely high levels of CSE
may cause an individual to be more confident than is warranted.
Notes
[1] Pallier, Gerry, et al. "The role of individual differences in the accuracy of confidence judgments." The Journal of General Psychology 129.3
(2002): 257+.
[2] Stankov, L. (1999). Mining on the "no man's land" between intelligence and personality. In P. L. Ackerman, P. C. Kyllonen, & R. D. Roberts
(Eds.), Learning and individual differences: Process, trait, and content determinants (pp. 314–337). Washington, DC: American Psychological
Association.
[3] Adams, P. A., & Adams, J. K. (1960). Confidence in the recognition and reproduction of words difficult to spell. The American Journal of
Psychology, 73(4), 544-552.
[4] Lichtenstein, Sarah; Baruch Fischhoff, Lawrence D. Phillips (1982). "Calibration of probabilities: The state of the art to 1980". In Daniel
Kahneman, Paul Slovic, Amos Tversky. Judgment under uncertainty: Heuristics and biases. Cambridge University Press. pp. 306–334.
ISBN 978-0-521-28414-1.
[5] Alpert, Marc; Howard Raiffa (1982). "A progress report on the training of probability assessors". In Daniel Kahneman, Paul Slovic, Amos
Tversky. Judgment under uncertainty: Heuristics and biases. Cambridge University Press. pp. 294–305. ISBN 978-0-521-28414-1.
[6] Harvey, N. (1997). Confidence in judgment. Trends in Cognitive Sciences, 1(2), 78-82.
[7] Hoffrage, Ulrich (2004). "Overconfidence". In Rüdiger Pohl. Cognitive Illusions: a handbook on fallacies and biases in thinking, judgement
and memory. Psychology Press. ISBN 978-1-84169-351-4.
[8] Gigerenzer, G. (1993). The bounded rationality of probabilistic mental modules. In K. I. Manktelow & D. E. Over (Eds.), Rationality (pp.
127-171). London: Routledge.
[9] Buehler, R., Griffin, D., & Ross, M. (1994). Exploring the "planning fallacy": Why people underestimate their task completion times. Journal
of Personality and Social Psychology, 67(3), 366-381.
[10] Langer, E. J. (1975). The illusion of control. Journal of Personality and Social Psychology, 32(2), 311-328.
[11] Gino, F., Sharek, Z., & Moore, D. A. (2011). Keeping the illusion of control under control: Ceilings, floors, and imperfect calibration.
Organizational Behavior & Human Decision Processes, 114, 104-114.
[12] Krizan, Z., & Windschitl, P. D. (2007). The influence of outcome desirability on optimism. Psychological Bulletin, 133(1), 95-121.
[13] Norem, J. K., & Cantor, N. (1986). Defensive pessimism: Harnessing anxiety as motivation. Journal of Personality and Social Psychology,
51(6), 1208-1217.
[14] McGraw, A. P., Mellers, B. A., & Ritov, I. (2004). The affective costs of overconfidence. Journal of Behavioral Decision Making, 17(4),
281-295.
[15] Alicke, M. D., & Govorun, O. (2005). The better-than-average effect. In M. D. Alicke, D. Dunning & J. Krueger (Eds.), The self in social
judgment (pp. 85-106). New York: Psychology Press.
[16] Svenson, O. (1981). Are we less risky and more skillful than our fellow drivers? Acta Psychologica, 47, 143-151.
[17] Cannell, J. J. (1989). How public educators cheat on standardized achievement tests: The "Lake Wobegon" report.
[18] Dunning, D. (2005). Self-insight: Roadblocks and detours on the path to knowing thyself. New York: Psychology Press.
[19] Kruger, J. (1999). Lake Wobegon be gone! The "below-average effect" and the egocentric nature of comparative ability judgments. Journal
of Personality and Social Psychology, 77(2), 221-232.
[20] Weinstein, N. D. (1980). Unrealistic optimism about future life events. Journal of Personality and Social Psychology, 39(5), 806-820.
[21] Chambers, J. R., & Windschitl, P. D. (2004). Biases in social comparative judgments: The role of nonmotivational factors in above-average
and comparative-optimism effects. Psychological Bulletin, 130(5).
[22] Chambers, J. R., Windschitl, P. D., & Suls, J. (2003). Egocentrism, event frequency, and comparative optimism: When what happens
frequently is "more likely to happen to me". Personality and Social Psychology Bulletin, 29(11), 1343-1356.
[23] Kruger, J., & Burrus, J. (2004). Egocentrism and focalism in unrealistic optimism (and pessimism). Journal of Experimental Social
Psychology, 40(3), 332-340.
[24] Taylor, S. E., & Brown, J. D. (1988). Illusion and well-being: a social psychological perspective on mental health. Psychological Bulletin,
103(2), 193-210.
[25] Harris, A. J. L., & Hahn, U. (2011). Unrealistic Optimism about Future Life Events: A cautionary note. Psychological Review, 118(1),
135-154.
[26] Kahneman, Daniel (19 October 2011). "Don't Blink! The Hazards of Confidence" (http://www.nytimes.com/2011/10/23/magazine/
dont-blink-the-hazards-of-confidence.html?ref=general&src=me&pagewanted=all). New York Times. Retrieved 25 October 2011.
[27] Plous, S. (1993). The psychology of judgment and decision making. New York: McGraw-Hill.
[28] Thompson, L., & Loewenstein, G. (1992). Egocentric interpretations of fairness and interpersonal conflict. Organizational Behavior and
Human Decision Processes, 51(2), 176-197.
[29] Babcock, L., & Olson, C. (1992). The causes of impasses in labor disputes. Industrial Relations, 31, 348-360.
[30] Johnson, D. D. P. (2004). Overconfidence and war: The havoc and glory of positive illusions. Cambridge, MA: Harvard University Press.
[31] Aumann, R. J. (1976). Agreeing to disagree. Annals of Statistics, 4, 1236-1239.
[32] Daniel, K. D., Hirshleifer, D. A., & Subrahmanyam, A. (1998). Investor psychology and security market under- and overreactions. Journal
of Finance, 53(6), 1839-1885.
[33] Oskamp, Stuart (1965). "Overconfidence in case-study judgements". The Journal of Consulting Psychology (American Psychological
Association) 2: 261–265. Reprinted in Kahneman, Daniel; Paul Slovic, Amos Tversky (1982). Judgment under uncertainty: Heuristics and
biases. Cambridge University Press. pp. 287–293. ISBN 978-0-521-28414-1.
[34] Radzevick, J. R., & Moore, D. A. (2011). Competing to be certain (but wrong): Social pressure and overprecision in judgment. Management
Science, 57(1), 93-106.
[35] Fowler, James, and Dominic Johnson. "On Overconfidence." Seed Magazine. Seed Magazine, January 7, 2011. Web. 22 Jul 2011. http:/ /
seedmagazine.com/ content/ article/ on_overconfidence/
[36] Judge, T. A., Locke, E. A., & Durham, C. C. (1997). The dispositional causes of job satisfaction: A core evaluations approach. Research in
Organizational Behavior, 19, 151–188.
Further reading
Larrick, R. P., Burson, K. A., & Soll, J. (2007). Social comparison and confidence: When thinking you're better
than average predicts overconfidence (and when it does not). Organizational Behavior & Human Decision
Processes, 102(1), 76-94.
Moore, D. A., & Healy, P. J. (2008). The trouble with overconfidence. Psychological Review, 115(2), 502-517.
Baron, Jonathan (1994). Thinking and Deciding. Cambridge University Press. pp. 219–224.
ISBN 0-521-43732-6.
Gilovich, Thomas; Dale Griffin, Daniel Kahneman (Eds.). (2002). Heuristics and biases: The psychology of
intuitive judgment. Cambridge, UK: Cambridge University Press. ISBN 0-521-79679-2
Sutherland, Stuart (2007). Irrationality. Pinter & Martin. pp. 172–178. ISBN 978-1-905177-07-3.
Pareidolia
A satellite photo of a mesa in Cydonia, often
called the Face on Mars. Later imagery from
other angles did not contain the illusion.
Pareidolia (/pærɪˈdoʊliə/ parr-i-DOH-lee-ə) is a psychological
phenomenon involving a vague and random stimulus (often an image or
sound) being perceived as significant. Common examples include
seeing images of animals or faces in clouds, the man in the moon or the
Moon rabbit, and hearing hidden messages on records when played in
reverse.
The word comes from the Greek words para (παρά, "beside, alongside,
instead"), in this context meaning something faulty, wrong, instead of,
and the noun eidōlon (εἴδωλον, "image, form, shape"), the diminutive of
eidos. Pareidolia is a type of apophenia, seeing patterns in random data.
Examples
Religious
There have been many instances of perceptions of religious imagery and themes, especially the faces of religious
figures, in ordinary phenomena. Many involve images of Jesus,
[1]
the Virgin Mary,
[2]
the word Allah,
[3]
or other
religious phenomena: In September 2007 in Singapore, for example, a callus on a tree resembled a monkey, leading
believers to pay homage to the "Monkey god" (either Sun Wukong or Hanuman) in the so-called "monkey tree
phenomenon".
[4]
Publicity surrounding sightings of religious figures and other surprising images in ordinary objects has spawned a
market for such items on online auctions like eBay. One famous instance was a grilled cheese sandwich with the
Virgin Mary's face.
[5]
Divination
Various European ancient divination practices involve the interpretation of shadows cast by objects. For example, in
Nordic molybdomancy, a random shape produced by pouring molten tin into cold water is interpreted by the shadow
it casts in candlelight.
Fossils
From the late 1970s through the early 1980s, Japanese researcher Chonosuke Okamura self-published a famous
series of reports titled "Original Report of the Okamura Fossil Laboratory" in which he described tiny inclusions in
polished limestone from the Silurian period (425 mya) as being preserved fossil remains of tiny humans, gorillas,
dogs, dragons, dinosaurs, and other organisms, all of them only millimeters long, leading him to claim "There have
been no changes in the bodies of mankind since the Silurian period ... except for a growth in stature from 3.5 mm to
1,700 mm."
[6][7]
Okamura's research earned him an Ig Nobel Prize (a parody of the Nobel Prizes) in
biodiversity.
[8]
See List of Ig Nobel Prize winners (1996).
[9]
Projective tests
The Rorschach inkblot test uses pareidolia in an attempt to gain insight into a person's mental state. The Rorschach is
a projective test, as it intentionally elicits the thoughts or feelings of respondents which are "projected" onto the
ambiguous inkblot images. Projection in this instance is a form of "directed pareidolia" because the cards have been
deliberately designed not to resemble anything in particular.
[1]
Electronic voice phenomenon
In 1971, Konstantns Raudive wrote Breakthrough, detailing what he believed was the discovery of electronic voice
phenomenon (EVP). EVP has been described as auditory pareidolia.
[1]
Backmasking
The allegations of backmasking in popular music have also been described as auditory pareidolia.
[1][10]
Art
In his notebooks, Leonardo da Vinci wrote of pareidolia as a device for painters, writing "if you look at any walls
spotted with various stains or with a mixture of different kinds of stones, if you are about to invent some scene you
will be able to see in it a resemblance to various different landscapes adorned with mountains, rivers, rocks, trees,
plains, wide valleys, and various groups of hills. You will also be able to see divers combats and figures in quick
movement, and strange expressions of faces, and outlandish costumes, and an infinite number of things which you
can then reduce into separate and well conceived forms."[11]
Explanations
Evolutionary advantage
A drawing which, despite not bearing much resemblance to a real face, most people will identify as a picture of one.
Carl Sagan hypothesized that as a survival technique, human beings are
"hard-wired" from birth to identify the human face. This allows people
to use only minimal details to recognize faces from a distance and in
poor visibility but can also lead them to interpret random images or
patterns of light and shade as being faces.[12] The evolutionary
advantages of being able to discern friend from foe with split-second
accuracy are numerous; prehistoric (and even modern) men and women
who accidentally identify an enemy as a friend could face deadly
consequences for this mistake. This is only one among many
evolutionary pressures responsible for the development of the facial
recognition capability of modern humans.[13]
In Cosmos: A Personal Voyage Sagan claimed that Heikegani crabs'
occasional resemblance to samurai resulted in their being spared from
capture, thus exaggerating the trait in their offspring, a hypothesis
proposed by Julian Huxley in 1952. Such claims have been met with skepticism.[14]
A 2009 magnetoencephalography study found that objects incidentally perceived as faces evoke an early (165 ms)
activation in the ventral fusiform cortex, at a time and location similar to that evoked by faces, whereas other
common objects do not evoke such activation. This activation is similar to a slightly earlier peak at 130 ms seen for
images of real faces. The authors suggest that face perception evoked by face-like objects is a relatively early
process, and not a late cognitive reinterpretation phenomenon.[15] An fMRI study in 2011 similarly showed that
repeated presentation of novel visual shapes that were interpreted as meaningful led to decreased fMRI responses for
real objects. These results indicate that interpretation of ambiguous stimuli depends on similar processes as those
elicited for known objects.[16]
These studies help to explain why people identify a few circles and a line as a "face" so quickly and without
hesitation. Cognitive processes are activated by the "face-like" object, which alert the observer to both the emotional
state and identity of the subject even before the conscious mind begins to process or even receive the
information. The "stick figure face," despite its simplicity, conveys mood information (in this case, disappointment
or mild unhappiness). It would be just as simple to draw a stick figure face that would be perceived (by most people)
as hostile and aggressive. This robust and subtle capability is the result of eons of natural selection favoring people
most able to quickly identify the mental state, for example, of threatening people, thus providing the individual an
opportunity to flee or attack preemptively. In other words, processing this information subcortically (and therefore
subconsciously) before it is passed on to the rest of the brain for detailed processing accelerates judgment and
decision making when alacrity is paramount.[13] This ability, though highly specialized for the processing and
recognition of human emotions, also functions to determine the demeanor of wildlife.[17]
Combined with apophenia (seeing patterns in randomness) and hierophany (a manifestation of the sacred),
pareidolia may have helped early societies organize chaos and make the world intelligible.[18][19]
Pathologies
There are a number of conditions that can cause an individual to lose the ability to recognize faces; stroke,
tumors, and trauma to the ventral fusiform gyrus are the most common culprits. This condition is known as
prosopagnosia. Pareidolia can also be related to obsessive–compulsive disorder, as seen in one woman's case.[20]
Natural
Smiley face in Galle Crater on Mars.
Human face on Pedra da Gávea near Rio de Janeiro.
Garuda (Ancient Eagle) seen from the sides of Tirumala Hills.
Apache head in rocks near Ebihens, France.
A face of Lord Venkateshwara seen on Tirumala Hills; it appears to be sleeping.
Anthropomorphic grimacing face visible in the red shale of the gorges of Cians, France.
Bust of a woman with a hat, in profile, in the gorges of Daluis, France, better known as "La Gardienne des Gorges".
Tree with mature ivy, suggestive of a person clinging to the tree's trunk, in Scotland.
Romanian Sphinx in Bucegi Mountains.
Artificial
Pareidolia examples
This alarm clock appears to have a sad face.
False wood with multiple pareidolia aspects.
A rusty piece of machinery looks like the face of a beast.
A cardboard box that appears to be shocked and unhappy.
An electrical outlet (center) that appears to be happy.
References
[1] Zusne, Leonard; Jones, Warren H. (1989). Anomalistic Psychology: A Study of Magical Thinking (http://books.google.com/books?isbn=0805805087). Lawrence Erlbaum Associates. pp. 77–79. ISBN 0-8058-0508-7. Retrieved 2007-04-06.
[2] "In New Jersey, a Knot in a Tree Trunk Draws the Faithful and the Skeptical" (http://www.nytimes.com/2012/07/23/nyregion/in-a-tree-trunk-in-new-jersey-some-see-our-lady-of-guadalupe.html?hpw). The New York Times. July 23, 2012.
[3] Ibrahim, Yahaya (2011-01-02). "In Maiduguri, a tree with engraved name of God turns spot to a Mecca of sorts" (http://sundaytrust.com.ng/index.php?option=com_content&view=article&id=5698:in-maiduguri-a-tree-with-engraved-name-of-god-turns-spot-to-a-mecca-of-sorts&catid=17:community-news-kanem-trust&Itemid=28). Sunday Trust (Media Trust Limited, Abuja). Retrieved 2012-03-21.
[4] Ng Hui Hui (13 September 2007). "Monkey See, Monkey Do?" (http://newpaper.asia1.com.sg/printfriendly/0,4139,141806,00.html). The New Paper. pp. 12–13.
[5] "'Virgin Mary' toast fetches $28,000" (http://news.bbc.co.uk/2/hi/americas/4034787.stm). BBC News. 23 November 2004. Retrieved 2006-10-27.
[6] Spamer, E. "Chonosuke Okamura, Visionary" (http://improbable.com/airchives/paperair/volume6/v6i6/okamura-6-6.html). Philadelphia: Academy of Natural Sciences. Archived at Improbable Research (http://improbable.com/).
[7] Berenbaum, May (2009). The Earwig's Tail: A Modern Bestiary of Multi-legged Legends. Harvard University Press. pp. 72–73. ISBN 0-674-03540-2.
[8] Abrahams, Marc (2004-03-16). "Tiny tall tales: Marc Abrahams uncovers the minute, but astonishing, evidence of our fossilised past" (http://www.guardian.co.uk/education/2004/mar/16/highereducation.research). London: The Guardian.
[9] Conner, Susan; Kitchen, Linda (2002). Science's Most Wanted: The Top 10 Book of Outrageous Innovators, Deadly Disasters, and Shocking Discoveries. Most Wanted Series. Brassey's. p. 93. ISBN 1-57488-481-6.
[10] Vokey, John R.; Read, J. Don (November 1985). "Subliminal messages: between the devil and the media". American Psychologist 40 (11): 1231–1239. doi:10.1037/0003-066X.40.11.1231. PMID 4083611.
[11] Leonardo da Vinci's Note-Books, Arranged and Rendered into English (http://www.archive.org/stream/leonardodavincis007918mbp/leonardodavincis007918mbp_djvu.txt). Empire State Book Company, 1923.
[12] Sagan, Carl (1995). The Demon-Haunted World: Science as a Candle in the Dark. New York: Random House. ISBN 0-394-53512-X.
[13] Svoboda, Elizabeth (2007-02-13). "Faces, Faces Everywhere" (http://www.nytimes.com/2007/02/13/health/psychology/13face.html). The New York Times. Retrieved July 3, 2010.
[14] Martin, Joel W. (1993). "The Samurai Crab" (http://crustacea.nhm.org/people/martin/publications/pdf/103.pdf) (PDF). Terra 31 (4): 30–34.
[15] Hadjikhani, N.; Kveraga, K.; Naik, P.; Ahlfors, S.P. (February 2009). "Early (M170) activation of face-specific cortex by face-like objects". Neuroreport 20 (4): 403–407. doi:10.1097/WNR.0b013e328325a8e1. PMC 2713437. PMID 19218867.
[16] Voss, J.L.; Federmeier, K.D.; Paller, K. (2011). "The potato chip really does look like Elvis! Neural hallmarks of conceptual processing associated with finding novel shapes subjectively meaningful" (http://cercor.oxfordjournals.org/content/early/2011/11/10/cercor.bhr315). Cerebral Cortex. doi:10.1093/cercor/bhr315. PMID 22079921.
[17] "Dog Tips: Emotions in Canines and Humans" (http://www.paw-rescue.org/PAW/PETTIPS/DogTip_EmotionsInCaninesAndHumans.php). Partnership for Animal Welfare. Retrieved July 3, 2010.
[18] Bustamante, Patricio; Yao, Fay; Bustamante, Daniela (2010). The Worship to the Mountains: A Study of the Creation Myths of the Chinese Culture (http://www.rupestreweb.info/china.html).
[19] Bustamante, Patricio; Yao, Fay; Bustamante, Daniela (2010). "Pleistocene Art: the archeological material and its anthropological meanings" (http://www.ifraoariege2010.fr/docs/Articles/Bustamante_et_al-Signes.pdf) (PDF).
[20] Fontenelle, Leonardo F. "Pareidolias in obsessive-compulsive disorder" (http://www.ingentaconnect.com/content/psych/nncs/2008/00000014/00000005/art00004). Retrieved October 28, 2011.
External links
Cloud pareidolia (http://www.environmentalgraffiti.com/featured/33-creepiest-clouds-on-earth/1515): 33 examples of meteorological pareidolia.
Religious pareidolia (http://www.yoism.org/?q=node/129): Extensive video and photographic collection of pareidolia.
Skepdic.com (http://skepdic.com/pareidol.html): Skeptic's Dictionary definition of pareidolia.
Lenin in my shower curtain (http://www.badastronomy.com/bad/misc/lenin.html) (Bad Astronomy).
The Stone Face: Fragments of An Earlier World (http://www.mnmuseumofthems.org/Faces/intro.html).
Feb. 13, 2007, article in The New York Times about the cognitive science of face recognition (http://www.nytimes.com/2007/02/13/health/psychology/13face.html).
Famous Pareidolias (http://www.pareidolias.net).
Snopes.com (http://www.snopes.com/rumors/wtcface.asp): Faces of 'Satan' seen in World Trade Center smoke.
Pessimism bias
Pessimism bias is an effect in which people exaggerate the likelihood that negative things will happen to them. It
contrasts with optimism bias, which is a more general, systematic tendency to underestimate personal risks and
overestimate the likelihood of positive life events.[1][2] Depressed people are particularly likely to exhibit a
pessimism bias.[3][4] Surveys of smokers have found that their ratings of their risk of heart disease showed a small
but significant pessimism bias; however, the literature as a whole is inconclusive.[1]
References
[1] Sutton, S.R. (1999). "How accurate are smokers' perceptions of risk?" (http://www.informaworld.com/index/784101725.pdf). Health, Risk & Society.
[2] de Palma, André; Picard, Nathalie (2009). "Behaviour Under Uncertainty" (http://books.google.com/books?id=qlp8itjp-RcC&pg=PA423). In Kitamura, Ryuichi; Yoshii, Toshio; Yamamoto, Toshiyuki (eds.), The Expanding Sphere of Travel Behaviour Research: Selected Papers from the 11th International Conference on Travel Behaviour Research. Emerald Group Publishing. p. 423. ISBN 978-1-84855-936-3. Retrieved 6 January 2011.
[3] Sharot, Tali; Riccardi, Alison M.; Raio, Candace M.; Phelps, Elizabeth A. (2007). "Neural mechanisms mediating optimism bias". Nature 450 (7166): 102–105. doi:10.1038/nature06280. ISSN 0028-0836. PMID 17960136.
[4] Wang, P.S.; Beck, A.L.; Berglund, P. (2004). "Effects of major depression on moment-in-time work performance" (http://ajp.psychiatryonline.org/cgi/content/abstract/161/10/1885). American Journal of Psychiatry 161: 1885–1891.
Planning fallacy
Daniel Kahneman
The planning fallacy is a tendency for people and organizations to
underestimate how long they will need to complete a task, even when they
have experience of similar tasks over-running. The term was first proposed
in a 1979 paper by Daniel Kahneman and Amos Tversky.[1][2] Since then
the effect has been found for predictions of a wide variety of tasks,
including tax form completion, school work, furniture assembly, computer
programming and origami.[1][3] The bias only affects predictions about
one's own tasks; when uninvolved observers predict task completion times,
they show a pessimistic bias, overestimating the time taken.[3][4] In 2003,
Lovallo and Kahneman proposed an expanded definition as the tendency to
underestimate the time, costs, and risks of future actions and at the same
time overestimate the benefits of the same actions. According to this
definition, the planning fallacy results in not only time overruns, but also
cost overruns and benefit shortfalls.[5]
Demonstration
In a 1994 study, 37 psychology students were asked to estimate how long it would take to finish their senior theses.
The average estimate was 33.9 days. They also estimated how long it would take "if everything went as well as it
possibly could" (averaging 27.4 days) and "if everything went as poorly as it possibly could" (averaging 48.6 days).
The average actual completion time was 55.5 days, with only about 30% of the students completing their thesis in
the amount of time they predicted.[6]
Another study asked students to estimate when they would complete their personal academic projects. Specifically,
the researchers asked for estimated times by which the students thought it was 50%, 75%, and 99% probable their
personal projects would be done.[4]
13% of subjects finished their project by the time they had assigned a 50% probability level;
19% finished by the time assigned a 75% probability level;
45% finished by the time of their 99% probability level.
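The gap between stated confidence and actual completion in these figures can be made explicit; a minimal sketch using the reported numbers (the variable names are illustrative):

```python
# Stated confidence level -> fraction of students who actually finished by
# their self-assigned deadline, from the study described above.
calibration = {0.50: 0.13, 0.75: 0.19, 0.99: 0.45}

for stated, observed in calibration.items():
    gap = stated - observed
    print(f"claimed {stated:.0%} sure; {observed:.0%} finished (gap {gap:.0%})")
```

At every confidence level the observed completion rate falls far below the stated probability, which is what makes this a calibration failure rather than mere noise.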
A survey of Canadian tax payers, published in 1997, found that they mailed in their tax forms about a week later than
they predicted. They had no misconceptions about their past record of getting forms mailed in, but expected that they
would get it done more quickly next time.[7] This illustrates a defining feature of the planning fallacy; that people
recognize that their past predictions have been over-optimistic, while insisting that their current predictions are
realistic.[3]
Explanations
Kahneman and Tversky's original explanation for the fallacy was that planners focus on the most optimistic scenario
for the task, rather than using their full experience of how much time similar tasks require.[3] One explanation
offered by Roger Buehler and colleagues is wishful thinking; in other words, people think tasks will be finished
quickly and easily because that is what they want to be the case.[1] In a different paper, Buehler and colleagues
suggest an explanation in terms of the self-serving bias in how people interpret their past performance. By taking
credit for tasks that went well but blaming delays on outside influences, people can discount past evidence of how
long a task should take.[1] One experiment found that when people made their predictions anonymously, they did not
show the optimistic bias. This suggests that people make optimistic estimates so as to create a favorable
impression with others.[1]
Some have attempted to explain the planning fallacy in terms of impression management theory.
One explanation, focalism, may account for the mental discounting of off-project risks. People formulating the plan
may eliminate factors they perceive to lie outside the specifics of the project. Additionally, they may discount
multiple improbable high-impact risks because each one is so unlikely to happen.
Planners tend to focus on the project and underestimate time for sickness, vacation, meetings, and other "overhead"
tasks. Planners also tend not to plan projects to a detail level that allows estimation of individual tasks, like placing
one brick in one wall; this enhances optimism bias and prohibits use of actual metrics, like timing the placing of an
average brick and multiplying by the number of bricks. Complex projects that lack immutable goals are also subject
to mission creep, scope creep, and featuritis. As described by Fred Brooks in The Mythical Man-Month, adding new
personnel to an already-late project incurs a variety of risks and overhead costs that tend to make it even later; this is
known as Brooks's law.
Another possible explanation is the "authorization imperative": Much of project planning takes place in a context
where financial approval is needed to proceed with the project and the planner often has a stake in getting the project
approved. This dynamic may lead to a tendency on the part of the planner to deliberately underestimate the project
effort required. It is easier to get forgiveness (for overruns) than permission (to commence the project, had a
realistic effort estimate been provided). Such deliberate underestimation has been named strategic misrepresentation.
Methods to curb the planning fallacy
Daniel Kahneman, Amos Tversky, and Bent Flyvbjerg developed reference class forecasting to eliminate or reduce
the effects of the planning fallacy in decision making.[8]
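Reference class forecasting replaces the planner's inside view with the distribution of outcomes actually observed in similar past projects. A minimal sketch, assuming a hypothetical reference class of actual-to-estimated duration ratios (the data and the 80% certainty level are illustrative, not taken from the cited work):

```python
def reference_class_forecast(past_ratios, inside_estimate, certainty=0.8):
    """Uplift an inside-view estimate by the ratio at the chosen certainty
    level in the empirical distribution of past actual/estimated ratios."""
    ratios = sorted(past_ratios)
    # Index of the ratio that `certainty` of past projects stayed within.
    index = min(int(certainty * len(ratios)), len(ratios) - 1)
    return inside_estimate * ratios[index]

# Hypothetical reference class: actual time / estimated time for ten projects.
history = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.8, 2.0, 2.4]
print(reference_class_forecast(history, inside_estimate=30))  # 60.0
```

The point is structural: the forecast is anchored to how similar projects actually went, not to the optimistic scenario the planner has in mind.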
Notes
[1] Pezzo, Mark V.; Litman, Jordan A.; Pezzo, Stephanie P. (2006). "On the distinction between yuppies and hippies: Individual differences in prediction biases for planning future tasks". Personality and Individual Differences 41 (7): 1359–1371. doi:10.1016/j.paid.2006.03.029. ISSN 0191-8869.
[2] Kahneman, Daniel; Tversky, Amos (1979). "Intuitive prediction: biases and corrective procedures". TIMS Studies in Management Science 12: 313–327.
[3] Buehler, Roger; Griffin, Dale; Ross, Michael (2002). "Inside the planning fallacy: The causes and consequences of optimistic time predictions". In Thomas Gilovich, Dale Griffin, & Daniel Kahneman (eds.), Heuristics and Biases: The Psychology of Intuitive Judgment, pp. 250–270. Cambridge, UK: Cambridge University Press.
[4] Buehler, Roger; Griffin, Dale; Ross, Michael (1995). "It's about time: Optimistic predictions in work and love". European Review of Social Psychology 6: 1–32. doi:10.1080/14792779343000112.
[5] Lovallo, Dan; Kahneman, Daniel (July 2003). "Delusions of Success: How Optimism Undermines Executives' Decisions". Harvard Business Review: 56–63.
[6] Buehler, Roger; Griffin, Dale; Ross, Michael (1994). "Exploring the "planning fallacy": Why people underestimate their task completion times". Journal of Personality and Social Psychology 67 (3): 366–381. doi:10.1037/0022-3514.67.3.366.
[7] Buehler, Roger; Griffin, Dale; Peetz, Johanna (2010). "The Planning Fallacy: Cognitive, Motivational, and Social Origins" (http://www.psych.nyu.edu/trope/Ledgerwood et al_Advances chapter.PDF#page=10). Advances in Experimental Social Psychology 43: 9. Retrieved 2012-09-15.
[8] Flyvbjerg, B. (2008). "Curbing Optimism Bias and Strategic Misrepresentation in Planning: Reference Class Forecasting in Practice" (http://www.sbs.ox.ac.uk/centres/bt/Documents/Curbing Optimism Bias and Strategic Misrepresentation.pdf). European Planning Studies 16 (1): 3–21.
References
Virine, Lev; Trumper, Michael. Project Decisions: The Art and Science (http://www.projectdecisions.org). Vienna, VA: Management Concepts, 2008. ISBN 978-1-56726-217-9.
Further reading
"If you don't want to be late, enumerate: Unpacking reduces the planning fallacy" (http://dx.doi.org/10.1016/j.jesp.2003.11.001) by Justin Kruger and Matt Evans.
Post-purchase rationalization
Post-purchase rationalization, also known as Buyer's Stockholm Syndrome, is a cognitive bias whereby someone
who purchases an expensive product or service overlooks any faults or defects in order to justify their purchase. It is
a special case of choice-supportive bias.
Expensive purchases often involve a lot of careful research and deliberation, and many consumers will often refuse
to admit that their decision was made in poor judgement. Many purchasing decisions are made emotionally, based on
factors such as brand-loyalty and advertising, and so are often rationalized retrospectively in an attempt to justify the
choice.
For example, a consumer who cannot decide between two popular video game consoles may in the end decide to
purchase the one that many of their peers also own. After purchasing it, they may find few games for their console
worth purchasing, and more for the console they did not purchase. However, they do not wish to feel they made the
wrong decision, and so will attempt to convince themselves, and their peers, that their original choice was the correct
one, for example by using sour-grapes arguments.[1]
This rationalization is based on the Principle of Commitment and the psychological desire to stay consistent to that
commitment. Some authorities would also consider this rationalization a manifestation of cognitive dissonance.
References
[1] Cohen, Joel B.; Goldberg, Marvin E. (August 1970). "The Dissonance Model in Post-Decision Product Evaluation". Journal of Marketing Research 7 (3): 315–321. doi:10.2307/3150288. JSTOR 3150288.
Pro-innovation bias
In diffusion of innovation theory, a pro-innovation bias reflects a personal bias toward an innovation that someone
is trying to implement or diffuse among a population.[1] The bias refers to the fact that the innovation's "champion"
has such a strong bias in favor of the innovation that he/she may not see its limitations or weaknesses and continues
to promote it nonetheless.
An example may be an inventor who creates a new process or product and wants to take it to market for financial
gain. While the invention may be interesting and have promise, if the inventor is experiencing pro-innovation bias,
he/she may not heed market data (or even seek such data) suggesting that the invention may not sell.
References
[1] "Beyond the pro-innovation bias" (http://www.hanken.fi/public/en/beyondtheproinnovationbias). January 26, 2010. Retrieved April 17, 2011.
Further reading
Rogers, Everett (2003). Diffusion of Innovations. Free Press. p. 512. ISBN 0-7432-2209-1.
Pseudocertainty effect
The pseudocertainty effect is a concept from prospect theory. It refers to people's tendency to perceive an outcome
as certain while it is in fact uncertain (Kahneman & Tversky, 1986).[1] It is observed in multi-stage decisions, in
which the evaluation of outcomes at a previous decision stage is discarded when choosing an option at subsequent
stages.
Example
Kahneman and Tversky (1986) illustrated the pseudocertainty effect by the following examples.
First, consider this problem:
Which of the following options do you prefer?
C. 25% chance to win $30 and 75% chance to win nothing
D. 20% chance to win $45 and 80% chance to win nothing
In this case, 42% of participants chose option C while 58% chose option D.
Now, consider this problem:
Consider the following two stage game. In the first stage, there is a 75% chance to end the game without winning
anything, and a 25% chance to move into the second stage. If you reach the second stage you have a choice between:
E. a sure win of $30
F. 80% chance to win $45 and 20% chance to win nothing
Your choice must be made before the outcome of the first stage is known.
This time, 74% of participants chose option E while only 26% chose option F.
In fact, the overall probability of winning money with option E (25% × 100% = 25%) and option F (25% × 80% =
20%) is the same as with option C (25%) and option D (20%) respectively. In the second
problem, since individuals have no choice over the first stage, they tend to discard it
when evaluating the overall probability of winning money and consider only the second-stage options, over which
they do have a choice. This is also known as cancellation: components that lead to the same outcome across options
are ignored in the decision process.
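The equivalence of the one-stage and two-stage problems can be verified by multiplying through the stages; a short sketch:

```python
def overall_win_probability(stage_probabilities):
    """Unconditional probability of winning a multi-stage gamble:
    the product of the per-stage success probabilities."""
    p = 1.0
    for stage_p in stage_probabilities:
        p *= stage_p
    return p

# One-stage options C and D.
p_c = overall_win_probability([0.25])        # 0.25
p_d = overall_win_probability([0.20])        # 0.20
# Two-stage options E and F: a 25% chance of reaching the second stage.
p_e = overall_win_probability([0.25, 1.00])  # 0.25
p_f = overall_win_probability([0.25, 0.80])  # 0.20

# The gambles are numerically identical, yet most subjects prefer D in the
# one-stage frame and the "sure thing" E in the two-stage frame.
print(p_c == p_e, p_d == p_f)
```

The reversal of preference between the two frames, despite identical probabilities, is the pseudocertainty effect.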
External links
Kahneman, Daniel; Tversky, Amos. "The Framing of Decisions and the Psychology of Choice". Science 211 (1981), pp. 453–458. Copyright 1981 by the American Association for the Advancement of Science. [2]
References
[1] Tversky, A.; Kahneman, D. (1986). "Rational Choice and the Framing of Decisions" (http://www.cog.brown.edu/courses/cg195/pdf_files/fall07/Kahneman&Tversky1986.pdf). The Journal of Business 59: S251–S278.
[2] http://www.cs.umu.se/kurser/TDBC12/HT99/Tversky.html
Reactance (psychology)
Reactance is a motivational reaction to offers, persons, rules, or regulations that threaten or eliminate specific
behavioral freedoms. Reactance occurs when a person feels that someone or something is taking away his or her
choices or limiting the range of alternatives.
Reactance can occur when someone is heavily pressured to accept a certain view or attitude. Reactance can cause the
person to adopt or strengthen a view or attitude that is contrary to what was intended, and also increases resistance to
persuasion. People using reverse psychology are playing on at least an informal awareness of reactance, attempting
to influence someone to choose the opposite of what they request.
Definition
Psychological reactance occurs in response to threats to perceived behavioral freedoms.[1][2] An example of such
behavior can be observed when an individual engages in a prohibited activity in order to deliberately taunt the
authority who prohibits it, regardless of the utility or disutility that the activity confers. An individual's freedom to
select when and how to conduct their behavior, and the level to which they are aware of the relevant freedom (and
are able to determine behaviors necessary to satisfy that freedom), affect the generation of psychological reactance.
It is assumed that if a person's behavioral freedom is threatened or reduced, they become motivationally aroused. The
fear of loss of further freedoms can spark this arousal and motivate them to re-establish the threatened freedom.
Because this motivational state is a result of the perceived reduction of one's freedom of action, it is considered a
counterforce, and thus is called "psychological reactance".
There are four important elements to reactance theory: perceived freedom, threat to freedom, reactance, and
restoration of freedom. Freedom is not an abstract consideration, but rather a feeling associated with real behaviors,
including actions, emotions, and attitudes.
Reactance also explains denial as it is encountered in addiction counselling. According to William R. Miller,[3]
"Research demonstrates that a counselor can drive resistance (denial) levels up and down dramatically according to
his or her personal counseling style". Use of a "respectful, reflective approach", as described in motivational
interviewing and applied as motivation enhancement therapy, rather than argumentation, accusations of "being
in denial", and direct confrontation, leads to motivation to change and avoids the resistance and denial, or
reactance, elicited by strong direct confrontation.[4] For a complete review of how confrontation became popular in
addiction treatment, see Miller, W.R. & White, W.[5]
Theory
Reactance theory assumes there are "free behaviors" individuals perceive and can take part in at any given moment.
For a behavior to be free, the individual must have the relevant physical and psychological abilities to partake in it,
and must know they can engage in it at the moment, or in the near future.
"Behavior" includes any imaginable act. More specifically, behaviors may be explained as "what one does (or
doesn't do)", "how one does something", or "when one does something". It is not always clear, to an observer, or the
individuals themselves, if they hold a particular freedom to engage in a given behavior. When a person has such a
free behavior they are likely to experience reactance whenever that behavior is restricted, eliminated, or threatened
with elimination.
There are several rules associated with free behaviors and reactance:
1. When certain free behaviors are threatened or removed, the more important a free behavior is to a certain
individual the greater the magnitude of the reactance.
a. The level of reactance has a direct relationship to the importance of the eliminated or threatened behavioral
freedom, in relationship to the importance of other freedoms at the time.
2. With a given set of free behaviors, the greater the proportion threatened or eliminated, the greater will be the total
level of reactance.
3. When an important free behavior has been threatened with elimination, the greater the threat, the greater will be
the level of reactance.
a. When there is a loss of a single free behavior, there may be by implication a related threat of removal of other
free behaviors now or in the future.
b. A free behavior may be threatened or eliminated by virtue of the elimination (or threat of elimination) of
another free behavior; therefore a free behavior may be threatened by the elimination of (or
threat to) another person's free behavior.
Other core concepts of the theory are justification and legitimacy. A possible effect of justification is a limitation of
the threat to a specific behavior or set of behaviors. For example, if Mr. Doe states that he is interfering with Mrs.
Smith's expectations because of an emergency, this keeps Mrs. Smith from imagining that Mr. Doe will interfere on
future occasions as well. Likewise, legitimacy may point to a set of behaviors threatened since there will be a general
assumption that an illegitimate interference with a person's freedom is less likely to occur. With legitimacy there is
an additional implication that a person's freedom is equivocal.
Effects of reactance
In the phenomenology of reactance there is no assumption that a person will be aware of reactance. When a person
becomes aware of reactance, they will feel a higher level of self-direction in relationship to their own behavior. In
other words, they will feel that if they are able to do what they want, then they do not have to do what they do not
want. In this case when the freedom is in question, that person alone is the director of their own behavior.
When considering the direct re-establishment of freedom, the greater the magnitude of reactance, the more the
individual will try to re-establish the freedom that has been lost or threatened. When a freedom is threatened by a
social pressure then reactance will lead a person to resist that pressure. Also, when there are restraints against a direct
re-establishment of freedom, there can be attempts at re-establishment by implication whenever possible.
Freedom can and may be reestablished by a social implication. When an individual has lost a free behavior because
of a social threat, then the participation in a free-like behavior by another person similar to himself will allow him to
re-establish his own freedom.
In summary, psychological reactance is a motivational state aimed at the re-establishment of a
threatened or eliminated freedom. In short, the level of reactance is directly related both to the
importance of the freedom that is eliminated or threatened and to the proportion of free
behaviors eliminated or threatened.
Empirical evidence
A number of studies have looked at psychological reactance, providing empirical evidence for the behaviour; some
key studies are discussed below.
Brehm's 1981 study Psychological reactance and the attractiveness of unobtainable objects: sex differences in
children's responses to an elimination of freedom examined sex and age differences in children's views of the
attractiveness of obtainable and unobtainable objects. The study reviewed how well children respond in such
situations and determined whether the children observed thought the "grass was greener on the other side". It also
examined how well a child made peace with the world by devaluing what they could not have. This work
concluded that when children cannot have what they want, they experience emotional consequences of not getting
it.[6]
This study replicated the results of a previous study by Hammock and J. Brehm (1966). The male subjects
wanted what they could not obtain; the female subjects, however, did not conform to the theory of reactance:
although their freedom to choose was taken away, this had no overall effect on them.
Silvia's 2005 study Deflecting reactance: the role of similarity in increasing compliance and reducing resistance
concluded that censoring an activity, or directing a threatening message at it, increases the appeal of the
threatened freedom. In turn a "boomerang effect" occurs, in which people choose the forbidden alternative.
This study also shows that social influence has better results when it does not threaten one's core freedoms. Two
concepts revealed in this study are that a communicator may be able to increase the positive force towards
compliance by increasing their credibility, and that increasing the positive communication force and decreasing the
negative communication force simultaneously should increase compliance.[7]
Miller et al., concluded in their 2006 study, Identifying principal risk factors for the initiation of adolescent smoking
behaviors: the significance of psychological reactance, that psychological reactance is an important indicator in
adolescent smoking initiation. Peer intimacy, peer individuation, and intergenerational individuation are strong
predictors of psychological reactance. The overall results of the study indicate that children think that they are
capable of making their own decisions, although they are not aware of their own limitations. This is an indicator that
adolescents will experience reactance to authoritative control, especially the proscriptions and prescriptions of adult
behaviors that they view as hedonically relevant.[8]
Measurement of reactance
Dillard & Shen, in their 2005 paper On the nature of reactance and its role in persuasive health communication,
provided evidence that psychological reactance could be measured,[9] contrary to the opinion of Jack Brehm,
who developed the theory. In their work they measured the impact of psychological reactance with two
parallel studies: one advocating flossing and the other urging students to limit their alcohol intake.
They formed several conclusions about reactance. Firstly, reactance is mostly cognitive; this allows reactance to be
measurable by self-report techniques. Also, in support of previous research, they conclude reactance is in part related
to an anger response. This verifies Brehm's description that during the reactance experience one tends to have hostile
or aggressive feelings, often aimed more at the source of a threatening message than at the message itself. Finally,
within reactance, both cognition and affect are intertwined; Dillard and Shen suggest they are so intertwined that
their effects on persuasion cannot be distinguished from each other.
Dillard and Shen's research indicates reactance can effectively be studied using established self-report methods.
Furthermore, it provided a better understanding of reactance theory and its relationship to persuasive health
communication.
Miller et al. conducted their 2007 study Psychological reactance and promotional health messages: the effects of
controlling language, lexical concreteness, and the restoration of freedom at the University of Oklahoma, with the
primary goal being to measure the effects of controlling language in promotional health messages. Their research
revisited the notion of restoring freedom by examining the use of a short postscripted message tagged on the end of a
promotional health appeal. Results of the study indicated that more concrete messages generate greater attention than
less concrete (more abstract) messages. Also, the source of concrete messages can be seen as more credible than the
source of abstract messages. They concluded that the use of more concrete, low-controlling language, and the
restoration of freedom through inclusion of a choice-emphasizing postscript, may offer the best solution to reducing
ambiguity and reactance created by overtly persuasive health appeals.[10]
References
[1] Brehm, J. W. (1966). A theory of psychological reactance. Academic Press.
[2] Brehm, S. S., & Brehm, J. W. (1981). Psychological Reactance: A Theory of Freedom and Control. Academic Press.
[3] Miller, W. R. (2000). Motivational Enhancement Therapy: Description of Counseling Approach. In Boren, J. J., Onken, L. S., & Carroll, K. M.
(Eds.), Approaches to Drug Abuse Counseling. National Institute on Drug Abuse, pp. 89–93.
[4] Miller, W.R. & Rollnick, S. Motivational Interviewing: Preparing People to Change Addictive Behavior. NY: Guilford Press, 1991.
[5] Miller, W. R., & White, W., (2007) Confrontation in Addiction Treatment Counselor Magazine October 4, 2007
[6] Brehm, Sharon S. (1981). Psychological reactance and the attractiveness of unobtainable objects: Sex differences in children's responses to an
elimination of freedom. Sex Roles, 7(9), 937–949.
[7] Silvia, P. J. (2005). Deflecting reactance: The role of similarity in increasing compliance and reducing resistance. Basic and Applied Social
Psychology, 27, 277–284.
[8] Miller, C. H., Burgoon, M., Grandpre, J., & Alvaro, E. (2006). Identifying principal risk factors for the initiation of adolescent smoking
behaviors: The significance of psychological reactance. Health Communication 19, 241-252.
[9] Dillard, J., & Shen, L. (2005). On the nature of reactance and its role in persuasive health communication. Communication Monographs, 72,
144-168.
[10] Miller, C. H., Lane, L. T., Deatrick, L. M., Young, A. M., & Potts, K. A. (2007). Psychological reactance and promotional health messages:
The effects of controlling language, lexical concreteness, and the restoration of freedom. Human Communication Research, 33, 219-240.
Baron, R. A., et al. (2006). Social Psychology. Pearson.
Reactive devaluation
Reactive devaluation is a cognitive bias that occurs when a proposal is devalued if it appears to originate from an
antagonist. The bias was proposed by Lee Ross and Constance Stillinger.
In an initial experiment conducted in 1991, Stillinger and co-authors asked pedestrians whether they would support a
drastic bilateral nuclear arms reduction program. If they were told the proposal came from President Ronald Reagan,
90 percent said it would be favorable or even-handed to the United States; if they were told the proposal came from a
group of unspecified policy analysts, 80 percent thought it was favorable or even-handed; but if respondents were
told it came from Mikhail Gorbachev, only 44 percent thought it was favorable or neutral to the United States.[1]
In another experiment, a contemporaneous controversy at Stanford University led the university to divest itself of
South African assets because of the apartheid regime. Stanford students were asked to evaluate the university's
divestment plan both before and after it was announced publicly. Proposals, including the one eventually adopted,
were rated more highly while they were still hypothetical.[1]
In another study, experimenters showed Israeli participants a peace proposal which had been actually proposed by
Israel. If participants were told the proposal came from a Palestinian source, they rated it lower than if they were
told (correctly) that the identical proposal came from the Israeli government. Participants who identified as
"hawkish" and were told it came from a "dovish" Israeli government believed it was relatively bad for their people
and good for the other side; participants who identified as "doves" did not.[2]
Reactive devaluation could be caused by loss aversion or attitude polarization,[3] or naïve realism.[4]
References
[1] Ross, Lee; Constance Stillinger (1991). "Barriers to conflict resolution". Negotiation Journal 8: 389–404.
[2] Maoz, Ifat; Andrew Ward, Michael Katz & Lee Ross (2002). "Reactive Devaluation of an "Israeli" vs. "Palestinian" Peace Proposal". Journal
of Conflict Resolution 46 (4): 515–546.
[3] Ross, Lee (1995). "Reactive Devaluation in Negotiation and Conflict Resolution". In Kenneth Arrow, Robert Mnookin, Lee Ross, Amos
Tversky, Robert B. Wilson (Eds.). Barriers to Conflict Resolution. New York: WW Norton & Co.
[4] Ross, L., & Ward, A. (1996). Naive realism in everyday life: Implications for social conflict and misunderstanding. In T. Brown, E. S. Reed
& E. Turiel (Eds.), Values and knowledge (pp. 103–135). Hillsdale, NJ: Erlbaum.
Serial position effect
Graph showing the serial position effect. The vertical axis shows the
percentage of words recalled; the horizontal axis shows their position
in the sequence.
The serial position effect, a term coined by Hermann
Ebbinghaus through studies he performed on himself,
refers to the finding that recall accuracy varies as a
function of an item's position within a study list.[1]
When asked to recall a list of items in any order (free
recall), people tend to begin recall with the end of the
list, recalling those items best (the recency effect).
Among earlier list items, the first few items are recalled
more frequently than the middle items (the primacy
effect).[2][3]
One suggested reason for the primacy effect is that the
initial items presented are most effectively stored in
long-term memory because of the greater amount of
processing devoted to them. (The first list item can be rehearsed by itself; the second must be rehearsed along with
the first, the third along with the first and second, and so on.) The primacy effect is reduced when items are
presented quickly and is enhanced when presented slowly (factors that reduce and enhance processing of each item
and thus permanent storage). Longer presentation lists have been found to reduce the primacy effect.[4]
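The cumulative-rehearsal account above lends itself to a simple illustration. The sketch below is not a model from the cited studies; it merely counts, under a naive scheme in which each new item is rehearsed together with all earlier ones, how many rehearsals each list position receives.

```python
def rehearsal_counts(list_length):
    """Count rehearsals per item under a naive cumulative scheme:
    when item i is presented, items 0..i are each rehearsed once."""
    counts = [0] * list_length
    for presented in range(list_length):
        for item in range(presented + 1):
            counts[item] += 1
    return counts

# Earlier items accumulate the most rehearsals.
print(rehearsal_counts(5))  # [5, 4, 3, 2, 1]
```

On this account, the extra rehearsals received by early items are what favor their transfer to long-term memory, producing the primacy effect.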
One theorized reason for the recency effect is that these items are still present in working memory when recall is
solicited. Items that benefit from neither (the middle items) are recalled most poorly. An additional explanation for
the recency effect is related to temporal context: if tested immediately after rehearsal, the current temporal context
can serve as a retrieval cue, which would predict more recent items to have a higher likelihood of recall than items
that were studied in a different temporal context (earlier in the list).[5] The recency effect is reduced when an
interfering task is given: intervening tasks engage working memory, and a distractor activity exceeding 15 to 30
seconds in duration can cancel out the recency effect.[6] Additionally, if recall comes immediately after study, the
recency effect is consistent regardless of the length of the studied list[4] or presentation rate.[7]
Amnesiacs with poor ability to form permanent long-term memories do not show a primacy effect, but do show a
recency effect if recall comes immediately after study.[8] Patients with Alzheimer's disease exhibit a reduced
primacy effect but do not produce a recency effect in recall.[9]
Primacy effect
The primacy effect, in psychology and sociology, is a cognitive bias that results in a subject recalling information
presented early in a sequence better than information presented later. For example, a subject who reads a
sufficiently long list of words is more likely to remember words toward the beginning than words in the middle.
Many researchers have tried to explain this phenomenon through free recall tests. In some experiments in the late
20th century, it was noted that participants who knew they were going to be tested on a list presented to them
would rehearse its items: as items were presented, the participants would repeat them to themselves, and as new
items appeared they would continue to rehearse the earlier items along with the newer ones. It was demonstrated
that the primacy effect had a greater influence on recall when there was more time between the presentation of
items, giving participants a greater chance to rehearse previous (prime) items.[10][11][12]
Overt rehearsal is a technique used to study participants' rehearsal patterns. In an experiment using this
technique, participants were asked to recite aloud the items that came to mind. In this way, the experimenter could
see that participants repeated earlier items more than items in the middle of the list, rehearsing them more
frequently and thus recalling the prime items better than the middle items later on.[13]
In another experiment, by Brodie and Murdock, the recency effect was found to be partially responsible for the
primacy effect.[14] In their experiment, they also used the overt-rehearsal technique and found that, in addition to
rehearsing earlier items more than later ones, participants rehearsed earlier items again later in the list. In this
way, earlier items were effectively closer to the test period by way of rehearsal, so the primacy effect could be
partially explained by the recency effect.
Recency effect
Two traditional classes of theories explain the recency effect.
Dual Store Models
These models postulate that later study list items are retrieved from a highly accessible short-term buffer, i.e. the
short-term store (STS) in human memory. This allows items that are recently studied to have an advantage over
those that were studied earlier, as earlier study items have to be retrieved with greater effort from one's long-term
memory store (LTS).
An important prediction of such models is that the presentation of a distractor, for example solving arithmetic
problems for 10-30 seconds, during the retention period (the time between list presentation and test), attenuates
the recency effect. Since the STS has limited capacity, the distractor displaces later study list items from the STS so
that at test, these items can only be retrieved from the LTS, and have lost their earlier advantage of being more easily
retrieved from the short-term buffer. As such, dual-store models successfully account for both the recency effect in
immediate recall tasks, and the attenuation of such an effect in the delayed free recall task.
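The dual-store account described above can be sketched as a toy simulation (an illustration only, not a published model; the function name and buffer capacity are invented for the example): the most recent items sit in a fixed-capacity buffer, and a distractor task empties it, leaving only effortful long-term retrieval.

```python
def buffer_contents(study_list, capacity=4, distractor=False):
    """Simulate a fixed-capacity short-term buffer: each new item
    displaces the oldest; a distractor task empties the buffer."""
    buffer = []
    for item in study_list:
        buffer.append(item)
        if len(buffer) > capacity:
            buffer.pop(0)  # oldest item displaced
    if distractor:
        buffer = []  # distractor displaces the remaining items
    return buffer

items = ["cat", "pen", "oak", "cup", "map", "key"]
print(buffer_contents(items))                   # last items remain accessible
print(buffer_contents(items, distractor=True))  # empty: LTS retrieval only
```

With no distractor, the final list items are still in the buffer at test (the recency advantage); with a distractor, every item must be retrieved from the LTS, so the advantage disappears.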
A major problem with this model, however, is that it cannot predict the long-term recency effect observed in delayed
recall, when a distractor intervenes between each study item during the interstimulus interval (continuous distractor
task).[15] Since the distractor is still present after the last study item, it should displace that item from the STS
such that the recency effect is attenuated. The existence of this long-term recency effect thus raises the possibility
that immediate and long-term recency effects share a common mechanism.[16]
Single Store Models
According to single-store theories, a single mechanism is responsible for serial position effects.
A first type of model is based on relative temporal distinctiveness, in which the time lag between test and the study
of each list item determines the relative competitiveness of an item's memory trace at retrieval.[17][18] In this
model, end-of-list items are thought to be more distinct, and hence more easily retrieved.
Another type of model is based on contextual variability, which postulates that retrieval of items from memory is
cued not only by one's mental representation of the study item itself, but also by that of the study context.[19][20]
Since context varies and increasingly changes with time, on an immediate free-recall test, when memory items
compete for retrieval, more recently studied items will have encoding contexts more similar to the test context and
are more likely to be recalled.
Outside of immediate free recall, these models are also able to predict the presence of the recency effect (or lack
thereof) in delayed free recall and continual-distractor free recall conditions. Under delayed recall conditions, the
state of test context would have drifted away with an increasing retention interval, leading to an attenuated recency
effect. Under continual-distractor recall conditions, while the increased interpresentation intervals reduce the
similarities between the given study contexts and the test context, the relative similarities among items remain
unchanged. As long as the recall process is competitive, recent items will win out, so a recency effect is observed.
Ratio Rule
Overall, an important empirical observation regarding the recency effect is that it is not the absolute duration of
retention intervals (RI, the time between end of study and test period) or of inter-presentation intervals (IPI, the time
between different study items) that matters. Rather, the amount of recency is determined by the ratio of RI to IPI (the
ratio rule). As a result, as long as this ratio is fixed, recency will be observed regardless of the absolute values of
intervals, so that recency can be observed at all time scales, a phenomenon known as time scale invariance. This
contradicts dual-store models, which assume that recency depends on the size of STS, and the rule governing the
displacement of items in the STS.
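The ratio rule can be illustrated with a toy calculation. The index below is an invented stand-in for any recency measure that depends only on the RI/IPI ratio, not a formula from the cited literature:

```python
def recency_index(ri, ipi):
    """Toy recency index that depends only on the ratio RI/IPI:
    scaling both intervals by the same factor leaves it unchanged."""
    return ipi / (ri + ipi)

# The same RI/IPI ratio at two very different time scales...
short_scale = recency_index(ri=2, ipi=10)     # seconds-scale intervals
long_scale = recency_index(ri=200, ipi=1000)  # intervals 100x longer
print(short_scale == long_scale)  # identical: time scale invariance
```

Because only the ratio enters the formula, stretching both intervals by the same factor leaves the index unchanged, which is the time-scale invariance the text describes and which dual-store models cannot accommodate.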
Potential explanations therefore either explain the recency effect through a single, common mechanism, or
re-explain it through a different type of model that postulates two distinct mechanisms for immediate and
long-term recency effects. One such explanation is provided by Davelaar et al. (2005),[21] who argue that there are
dissociations between immediate and long-term recency phenomena that cannot be explained by a
single-component memory model, and who propose the existence of an STS that explains immediate recency and a
second mechanism, based on contextual drift, that explains long-term recency.
Related effects
In 1977, William Crano decided to outline a study to further the previous conclusions on the nature of order effects,
in particular those of primacy vs. recency, which were said to be unambiguous and opposed in their predictions. The
specifics tested by Crano were:[22]
Change of meaning hypothesis
"adjectives presented first on a stimulus list established a set, or expectation, through which the meanings of
the later descriptors were modified in an attempt to maintain consistency in the mind of the receiver."
Inconsistency discounting
"later descriptions on the stimulus list were discounted if inconsistent with earlier trait adjectives."
Attention decrement hypothesis
"earlier adjectives would wield considerably more influence than the later ones, and a primacy effect in the
typical impression formation task would be expected to occur ... even when the stimulus list contains traits of a
high degree of consistency."
The continuity effect or lag-recency effect predicts that having made a successful recall, the next recall is likely to
be a neighboring item in serial position during the study period. The difference between the two items' serial position
is referred to as serial position lag. Another factor, called the conditional-response probability, represents the
likelihood that a recall at a certain serial-position lag is made. A graph of serial-position lag versus
conditional-response probability reveals that the next item recalled tends to minimize absolute lag, with a higher
likelihood for the following item than for the preceding one.
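The lag analysis described above can be sketched computationally. The function below is an illustrative sketch only; a full conditional-response probability analysis would also normalize these counts by the number of transitions available at each lag.

```python
from collections import Counter

def lag_counts(recall_order):
    """Tally serial-position lags between successive recalls.
    recall_order lists study positions in the order they were recalled."""
    lags = [b - a for a, b in zip(recall_order, recall_order[1:])]
    return Counter(lags)

# Recalling study positions 5, 6, 4, 3 yields lags +1, -2, -1:
# small absolute lags dominate, with a forward (+1) transition present.
print(lag_counts([5, 6, 4, 3]))
```

Aggregated over many trials, such tallies produce the characteristic lag-CRP curve: peaks at lags of +1 and -1, with the +1 peak higher.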
Notes
[1] Ebbinghaus, Hermann (1913). On memory: A contribution to experimental psychology. New York: Teachers College.
[2] Deese and Kaufman (1957) Serial effects in recall of unorganized and sequentially organized verbal material, J Exp Psychol. 1957
Sep;54(3):180-7
[3] Murdock, B.B., Jr. (1962) The Serial Position Effect of Free Recall, Journal of Experimental Psychology, 64, 482-488.
[4] Murdock, Bennet (1962). "Serial Position Effect of Free Recall". Journal of Experimental Psychology 64 (2): 482–488.
[5] Howard, Marc W.; Michael J. Kahana (2001). "A Distributed Representation of Temporal Context". Journal of Mathematical Psychology.
doi:10.1006/jmps.2001.1388.
[6] Bjork, Robert A.; William B. Whitten (1974). "Recency-Sensitive Retrieval Processes in Long-Term Free Recall". Cognitive Psychology 6:
173–189.
[7] Murdock, Bennet; Janet Metcalf (1978). "Controlled Rehearsal in Single-Trial Free Recall". Journal of Verbal Learning and Verbal Behavior
17: 309–324.
[8] Carlesimo, Giovanni; G.A. Marfia, A. Loasses, and C. Caltagirone (1996). "Recency effect in anterograde amnesia: Evidence for distinct
memory stores underlying enhanced retrieval of terminal items in immediate and delayed recall paradigms". Neuropsychologia 34 (3):
177–184.
[9] Bayley, Peter J.; David P. Salmon, Mark W. Bondi, Barbara K. Bui, John Olichney, Dean C. Delis, Ronald G. Thomas, and Leon J. Thal
(March 2000). "Comparison of the serial position effect in very mild Alzheimer's disease, mild Alzheimer's disease, and amnesia associated
with electroconvulsive therapy". Journal of the International Neuropsychological Society 6 (3): 290–298. doi:10.1017/S1355617700633040.
[10] Glenberg, A.M.; M.M. Bradley, J.A. Stevenson, T.A. Kraus, M.J. Tkachuk, A.L. Gretz (1980). "A two-process account of long-term serial
position effects". Journal of Experimental Psychology: Human Learning and Memory (6): 355–369.
[11] Marshall, P.H.; P.R. Werder (1972). "The effects of the elimination of rehearsal on primacy and recency". Journal of Verbal Learning and
Verbal Behavior (11): 649–653.
[12] Rundus, D. "Maintenance rehearsal and long-term recency". Memory and Cognition (8(3)): 226–230.
[13] Rundus, D (1971). "An analysis of rehearsal processes in free recall". Journal of Experimental Psychology (89): 63–77.
[14] Brodie, D.A.; B.B. Murdock. "Effects of presentation time on nominal and functional serial position curves in free recall". Journal of Verbal
Learning and Verbal Behavior (16): 185–200.
[15] Bjork & Whitten (1974). Recency sensitive retrieval processes in long-term free recall, Cognitive Psychology, 6, 173-189.
[16] Greene, R. L. (1986). Sources of recency effects in free recall. Psychological Bulletin, 99(12), 221–228.
[17] Bjork & Whitten (1974). Recency sensitive retrieval processes in long-term free recall, Cognitive Psychology, 6, 173-189.
[18] Neath, I., & Knoedler, A. J. (1994). Distinctiveness and serial position effects in recognition and sentence processing. Journal of Memory
and Language, 33, 776-795
[19] Howard, M. W., & Kahana, M. (1999). Contextual variability and serial position effects in free recall. Journal of Experimental Psychology:
Learning, Memory and Cognition, 24(4), 923-941.
[20] Howard, M. W., & Kahana, M. J. (2002). A distributed representation of temporal context. Journal of Mathematical Psychology, 46(3),
269-299.
[21] Davelaar, E. K., Goshen-Gottstein, Y., Ashkenazi, A., Haarmann, H. J., & Usher, M. (2005). The demise of short-term memory revisited:
Empirical and computational investigations of recency effects. Psychological Review, 112, 3-42.
[22] Kohler, Christine. "Order Effects Theory: Primacy versus Recency" (http://www.ciadvertising.org/sa/spring_04/adv382j/christine/primacy.html).
Center for Interactive Advertising, The University of Texas at Austin. Retrieved 2007-11-04.
References
Frensch, P.A. (1994). Composition during serial learning: a serial position effect. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 20(2), 423-443.
Healy, A.F., Havas, D.A., & Parker, J.T. (2000). Comparing serial position effects in semantic and episodic
memory using reconstruction of order tasks. Journal of Memory and Language, 42, 147–167.
Glanzer, M. & Cunitz, A. R. (1966). Two storage mechanisms in free recall. Journal of Verbal Learning and
Verbal Behaviour, 5, 351–360.
Kahana, M. J., Howard, M. W., & Polyn, S. M. (2008). Associative Retrieval Processes in Episodic Memory.
Psychology, Paper 3.
Howard, M. W. & Kahana, M. (1999). Contextual Variability and Serial Position Effects in Free Recall. Journal
of Experimental Psychology: Learning, Memory & Cognition, 25(4), 923-941.
Further reading
Luchins, Abraham S. (1959) Primacy-recency in impression formation
Liebermann, David A. Learning and memory: An integrative approach. Belmont, CA: Thomson/Wadsworth,
2004, ISBN 978-0-534-61974-9.
Recency illusion
The recency illusion is the belief or impression that something is of recent origin when it is in fact long-established.
The term was invented by Arnold Zwicky, a linguist at Stanford University who was primarily interested in
examples involving words, meanings, phrases, and grammatical constructions.[1] However, use of the term is not
restricted to linguistic phenomena: Zwicky has defined it simply as "the belief that things you have noticed only
recently are in fact recent".[2]
Linguistic items prone to the recency illusion include:
"Singular they" - the use of they to reference a singular antecedent, as in someone said they liked the play.
Although this usage is often cited as a modern invention, it is found in Jane Austen and Shakespeare.[3]
The grammatically incorrect phrase between you and I, a hypercorrection today which could also be found
occasionally in Early Modern English.
The intensifier really, as in it was a really wonderful experience, and the moderating adverb pretty, as in it was a
pretty exciting experience. Many people have the impression that these usages are somewhat slang-like and
developed relatively recently. In fact, they go back to at least the 18th century and are commonly found in the
works and letters of writers such as Benjamin Franklin.
The perception of "aks" as a production of African-American English only. Use of "aks" in place of "ask" dates
back to the 1600s and Middle English, though typically in that context spelled "ax".[4]
According to Zwicky, the illusion is caused by selective attention.[2]
References
[1] Intensive and Quotative ALL: something old, something new, John R. Rickford, Thomas Wasow, Arnold Zwicky, Isabelle Buchstaller,
American Speech 2007 82(1): 3–31; Duke University Press (what Arnold Zwicky (2005) has dubbed the "recency illusion," whereby people
think that linguistic features they've only recently noticed are in fact new).
[2] Language Log: Just between Dr. Language and I (http://itre.cis.upenn.edu/~myl/languagelog/archives/002386.html)
[3] Shakespeare, The Comedy of Errors, Act IV, Scene 3 (1594): "There's not a man I meet but doth salute me / As if I were their
well-acquainted friend"
[4] Lippi-Green, Rosina. English with an Accent: Language, Ideology, and Discrimination in the United States. London: Routledge, 1997.
Print.
External links
New Scientist article (http://www.newscientist.com/channel/being-human/mg19626302.300-the-word-recency-illusion.html)
(subscription only; hard copy at New Scientist, 17 November 2007, p. 60)
Restraint bias
Restraint bias is the tendency for people to overestimate their ability to control impulsive behavior. An inflated
self-control belief may lead to greater exposure to temptation, and increased impulsiveness. Therefore, the restraint
bias has bearing on addiction. For example, someone might experiment with drugs, simply because they believe they
can resist any potential addiction.[1] An individual's inability to control themselves, or their temptation, can come
from several different visceral impulses. Visceral impulses can include hunger, sexual arousal, and fatigue. These
impulses provide information about the current state of the body and the behavior needed to keep it satisfied.[1]
Empathy gap effect: The empathy gap effect refers to individuals' difficulty in appreciating the power that impulse
states have over their behavior. There is a cold-to-hot empathy gap: when people are in a cold state, such as not
experiencing hunger, they tend to underestimate the influence those impulses have in a hot state. The
underestimation of visceral impulses can be attributed to restricted memory for the visceral experience, meaning
the individual can recall the impulsive state but cannot recreate its sensation.[1]
Impulse control and attention: Studies have shown that when people believe they have a stronger sense of
self-control over situations in their environment, they have greater impulse control. Individuals also tend to
overestimate their capacity for self-control when told that they have a high capacity for self-restraint.[1] The more
someone is told that they have a high capacity for self-restraint, the more they believe it and the higher the level of
impulse control they display. Attention also plays a large role in self-control and impulse control: the less attention
an individual pays to something, the less control they have over it. Focusing attention on oneself can lead to
successful self-control, which can be helpful in many aspects of life. Self-control involves conflict between
competing pressures, which can be brought on by situational or internal prompts from the environment; some cues
push the individual to engage in a behavior, while others act to prevent the individual from taking action.[2]
References
[1] Nordgren, L. F., van Harreveld, F., & van der Pligt, J. (2009). "The restraint bias: How the illusion of self-restraint promotes impulsive
behavior". Psychological Science 20 (12): 1523–1528. doi:10.1111/j.1467-9280.2009.02468.x. PMID 19883487.
[2] Mann, T., & Ward, A. (2007). Attention, self-control, and health behaviors. Current Directions in Psychological Science, 280–283.
Rhyme-as-reason effect
The rhyme-as-reason effect is a cognitive bias whereby a saying or aphorism is judged as more accurate or
truthful when it is rewritten to rhyme.
In experiments, subjects judged variations of sayings that did and did not rhyme, and tended to evaluate those that
rhymed as more truthful (controlled for meaning). For example, the statement "What sobriety conceals, alcohol
reveals" was judged more accurate by one group of participants than the non-rhyming "What sobriety conceals,
alcohol unmasks" was by a different group.[1]
The effect could be caused by the Keats heuristic, according to which a statement's truth is evaluated according to
its aesthetic qualities,[2] or the fluency heuristic, according to which things are preferred due to their ease of
cognitive processing.[3]
For an example of the persuasive quality of the rhyme-as-reason effect, see "if it doesn't fit, you must acquit," the
signature phrase used by Johnnie Cochran to gain acquittal for O.J. Simpson in Simpson's murder trial.
References
[1] McGlone, M. S.; J. Tofighbakhsh (2000). "Birds of a feather flock conjointly (?): rhyme as reason in aphorisms.". Psychological Science 11
(5): 424–428.
[2] McGlone, M. S.; J. Tofighbakhsh (1999). "The Keats heuristic: Rhyme as reason in aphorism interpretation". Poetics 26 (4): 235–244.
[3] Kahneman, Daniel (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Risk compensation
Booth's rule #2: "The safer skydiving gear becomes, the more chances skydivers will take,
in order to keep the fatality rate constant"
Risk compensation (also the Peltzman effect or risk homeostasis) is an observed effect in ethology whereby people
tend to adjust their behavior in response to the perceived level of risk, behaving less cautiously where they feel
more protected and more cautiously where they feel a higher level of risk. The theory emerged out of road safety
research after it was observed that many interventions failed to achieve the expected level of benefits, but has
since found application in many other fields.
Notable examples include observations
of increased levels of risky behaviour
by road users following the introduction of compulsory seat belts and bicycle helmets, and of motorists driving faster and
following more closely behind the vehicle in front after the introduction of anti-lock brakes. It has also been
suggested that free condom distribution programs often fail to reduce HIV prevalence as predicted, due to an
increase in risky sexual behavior, and that "the safer skydiving gear becomes, the more chances skydivers will take,
in order to keep the fatality rate constant". This balancing behaviour does not mean an intervention does not work,
and the effect may be less than, equal to, or greater than the true efficacy of the intervention. It is likely to be least
when an intervention is imperceptible and greatest when an intervention is intrusive or conspicuous.
Shared space is a relatively new approach to the design of roads where the level of uncertainty for drivers and other
road users is deliberately increased by removing traditional demarcations between vehicle traffic such as railings and
traffic signals, and has been observed to result in lower vehicle speeds and fewer road casualties. In Sweden,
following the change from driving on the left to driving on the right there was a 40% drop in crashes, which was
linked to the increased apparent risk. The crash rate returned to its former level after people became familiar with the
new arrangement.
Moral hazard is a related effect where a decision-maker benefits from the positive effects of a decision, with others
suffering the related negative effects.
Examples
Road transport
Anti-lock brakes
There are at least three studies which show that drivers' response to antilock brakes is to drive faster, follow closer
and brake later, accounting for the failure of ABS to result in any measurable improvement in road safety. The
studies were performed in Canada, Denmark and Germany.[1][2][3] A study led by Fred Mannering, a professor of
civil engineering at Purdue University, supports risk compensation, terming it the "offset hypothesis".[4]
Bicycle helmets
The issue of risk compensation has been a central topic in the heated debate concerning the effectiveness of bicycle
helmet legislation. A March 2007 study, first published in Accident Analysis & Prevention and reported in
Scientific American, found that drivers drove an average of 8.5 cm closer, and came within 1 meter 23% more
often, when a cyclist was wearing a helmet. Statements made in the report included: "The closer a driver is to the
cyclist, the greater chance of a collision", "Drivers passed closer to the rider the further out into the road he was",
and "The bicyclist's apparel affects the amount of clearance the overtaking motorist gives the bicyclist". This
research thus implies risk compensation, not among cyclists but among fellow road users.[5]
Seat belts
A 2007 study based on data from the Fatality Analysis Reporting System (FARS) of the National Highway Traffic
Safety Administration concluded that between 1985 and 2002 there were "significant reductions in fatality rates for
occupants and motorcyclists after the implementation of belt use laws", and that "seatbelt use rate is significantly
related to lower fatality rates for the total, pedestrian, and all non-occupant models even when controlling for the
presence of other state traffic safety policies and a variety of demographic factors."
[6] A 1994 study of drivers who habitually did and did not wear seat belts concluded that drivers drove faster and
less carefully when belted.[7]
Earlier research carried out by John Adams in 1981 had suggested that there was no correlation between the passing
of seat belt legislation and the total reductions in injuries or fatalities based on comparisons between states with and
without seat belt laws. He also suggested that some injuries were being displaced from car drivers to pedestrians and
other road users.[8] This paper was published at a time when Britain was considering a seat belt law, so the
Department of Transport commissioned a report into the issue, in which the author, Isles, agreed with Adams'
conclusions.[9] The Isles Report was never published officially, but a copy was leaked to the press some years
later.[10] The law was duly passed, and subsequent investigation showed some reduction in fatalities, though the
cause could not be conclusively established because evidential breath testing was introduced at the same time.[11]
Shared space
Shared space is an urban design approach which seeks to minimise demarcations between vehicle traffic and
pedestrians, often by removing features such as curbs, road surface markings, traffic signs and regulations. Typically
used on narrower streets within the urban core and as part of living streets within residential areas, the approach has
also been applied to busier roads, including Exhibition Road in Kensington, London.
Schemes are often motivated by a desire to reduce the dominance of vehicles, vehicle speeds and road casualty rates.
First proposed in 1991, the term is now strongly associated with the work of Hans Monderman, who suggested that by
creating a greater sense of uncertainty and making it unclear who had right of way, drivers reduce their speed and
everyone reduces their level of risk compensation. The approach is frequently opposed by organisations representing
the interests of blind, partially sighted and deaf people, who often express a strong preference for the clear separation of
pedestrian and vehicular traffic.
Speed limits
There is strong evidence that reducing speed limits normally reduces crash, injury and fatality rates.[12] For example,
a 2003 review of changes to speed limits in a number of jurisdictions showed that in most cases the number of
crashes and fatalities decreased where speed limits had been lowered, and increased where speed limits had been
raised.[12]
A 1994 driving-simulator study by Jeremy Jackson and Roger Blackman reported that increased speed limits
and reduced speeding fines significantly increased driving speed but produced no change in accident
frequency. It also showed that increased accident cost caused large and significant reductions in accident frequency
but no change in speed choice. The abstract states that the results suggest that regulation of specific risky behaviors
such as speed choice may have little influence on accident rates.[13]
Sport
Ski helmets
On the use of ski helmets, Dr. Jasper Shealy, a professor from Rochester Institute of Technology who has
been studying skiing and snowboarding injuries for more than 30 years, said "There is no evidence they reduce
fatalities", and that "We are up to 40 percent usage but there has been no change in fatalities in a 10-year
period."[14][15] There is evidence that helmeted skiers tend to go faster.[16]
Skydiving
Booth's rule #2, coined by skydiving pioneer Bill Booth, states that "The safer skydiving gear becomes, the more
chances skydivers will take, in order to keep the fatality rate constant". Even though skydiving equipment has made
huge leaps forward in terms of reliability in the past two decades, and safety devices such as AADs have been
introduced, the fatality rate has stayed roughly constant since the early 1980s.[17] This can largely be attributed to an
increase in the popularity of high-performance canopies, which fly much faster than traditional parachutes. High-speed
manoeuvres close to the ground have increased the number of landing fatalities in recent years,[18] even
though these jumpers have perfectly functioning parachutes over their heads.
Safety equipment in children
Experimental studies have suggested that children who wear protective equipment are likely to take more risks.
[19]
Health
Risky sexual behavior and HIV/AIDS
Evidence on risk compensation associated with HIV prevention interventions is mixed. Harvard researcher Edward
C. Green argued that the risk compensation phenomenon could explain the failure of condom distribution programs
to reverse HIV prevalence, providing a detailed explanation of his views in an op-ed article for The Washington
Post[20] and an extended interview with the BBC.[21] A 2007 article in The Lancet suggested that "condoms seem to
foster disinhibition, in which people engage in risky sex either with condoms or with the intention of using
condoms".[22][23] Another report compared risk behaviour of men based on whether they were circumcised.[24]
Peltzman effect
The Peltzman effect is the hypothesized tendency of people to react to a safety regulation by increasing other risky
behavior, offsetting some or all of the benefit of the regulation. It is named after Sam Peltzman, a professor of
Economics at the University of Chicago Booth School of Business. When the offsetting risky behavior encouraged
by the safety regulation has negative externalities, the Peltzman effect can result in redistributing risk to innocent
bystanders who would behave in a risk-averse manner even without the regulation. For example, if some
risk-tolerant drivers who would not otherwise wear a seat belt respond to a seat belt law by driving less safely, there
would be more total collisions. Overall injuries and fatalities may still decrease due to greater seat belt use, but
drivers who would wear seat belts regardless would see their overall risk increase. Similarly, safety regulations for
automobiles may put pedestrians or bicyclists in more danger by encouraging risky behavior in drivers without
offering additional protection for pedestrians and cyclists.
The Peltzman effect has been used to explain Smeed's Law, an empirical claim that traffic fatality rates increase with
the number of vehicle registrations per capita, and that differing safety standards have no effect. Recent empirical studies
have rejected Smeed's Law, along with the associated theory of risk homeostasis, as inconsistent with the observation
of declining fatality rates in many countries.[25] Roy Baumeister has suggested that the use of helmets in
football and gloves in boxing provides examples of the Peltzman effect.[26]
Risk homeostasis
Gerald J. S. Wilde, a professor emeritus of psychology at Queen's University in Kingston, Ontario, Canada,
noted that when Sweden changed from driving on the left to driving on the right in 1967, this was followed by a
marked reduction in the traffic fatality rate for 18 months after which the trend returned to its previous values. It was
suggested that drivers had responded to increased perceived danger by taking more care, only to revert to previous
habits as they became accustomed to the new regime. This hypothesis is elucidated in Wilde's book.[27] The
hypothesis of risk homeostasis holds that everyone has his or her own fixed level of acceptable risk. When the level
of risk in one part of the individual's life changes, there will be a corresponding rise or fall in risk elsewhere to bring
the overall risk back to that individual's equilibrium. Wilde argues that the same is true of larger human systems, e.g.
a population of drivers.
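The homeostasis idea can be sketched as a simple negative-feedback loop. The following is a toy model under assumed numbers, not Wilde's own formalism: an agent chooses a level of risky behaviour b (e.g. speed), perceives risk as hazard * b, and nudges b each period until perceived risk returns to a fixed personal target.

```python
# Toy model (an assumption for illustration, not Wilde's formalism): an agent
# adjusts its behaviour b until the risk it perceives, hazard * b, matches a
# fixed personal target.

def simulate(hazard, target=1.0, b0=1.0, rate=0.5, steps=100):
    b = b0
    for _ in range(steps):
        perceived = hazard * b
        b += rate * (target - perceived) / hazard  # move toward target risk
    return b, hazard * b

# A safety intervention halves the hazard per unit of behaviour.
b_before, risk_before = simulate(hazard=0.02)
b_after, risk_after = simulate(hazard=0.01)

print(round(b_after / b_before, 2))  # behaviour roughly doubles: 2.0
print(round(risk_before, 3), round(risk_after, 3))  # realized risk stays at 1.0
```

In this sketch the intervention changes behaviour rather than realized risk, which is exactly the homeostasis claim; whether real populations behave this way is the contested empirical question discussed below.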
For example, in a Munich study, part of a fleet of taxicabs was equipped with anti-lock brakes (ABS), while the
remainder had conventional brake systems. In other respects, the two types of cars were identical. The crash rates,
studied over three years, were a little higher for the cabs with ABS, and Wilde concluded that drivers of ABS-equipped
cabs took more risks, assuming that ABS would take care of them; non-ABS drivers were said to drive more
carefully since they could not rely on ABS in a dangerous situation. There is much more to this study, as shown in
the following reference.[28] Likewise, it has been found that drivers behave less carefully around bicyclists wearing
helmets than around unhelmeted riders.[29] The idea of risk homeostasis has garnered criticism.[30] Some critics say
that risk homeostasis theory is contradicted by car crash fatality rates, which have fallen after the introduction of seat
belt laws.[31][32][33][34]
References
[1] Grant and Smiley, "Driver response to antilock brakes: a demonstration of behavioural adaptation". Proceedings, Canadian
Multidisciplinary Road Safety Conference VIII, June 14–16, Saskatchewan, 1993.
[2] Sagberg, Fosser, and Saetermo, "An investigation of behavioural adaptation to airbags and antilock brakes among taxi drivers". Accident
Analysis and Prevention 29: 293–302, 1997.
[3] Aschenbrenner and Biehl, "Improved safety through improved technical measures? Empirical studies regarding risk compensation processes
in relation to anti-lock braking systems". In Trimpop and Wilde, Challenges to Accident Prevention: The issue of risk compensation behaviour
(Groningen, NL, Styx Publications, 1994).
[4] Venere, Emil. (2006-09-27). Study: Airbags, antilock brakes not likely to reduce accidents, injuries (http:/ / news. uns. purdue. edu/
html4ever/ 2006/ 060927ManneringOffset. html). Purdue University News Service.
[5] "Strange but True: Helmets Attract Cars to Cyclists" (http:/ / www. sciam. com/ article. cfm?chanID=sa029&
articleID=778EF0AB-E7F2-99DF-3594A60E4D9A76B2). Scientific American. .
[6] Houston, David J., and Lilliard E. Richardson. "Risk Compensation or Risk Reduction? Seatbelts, State Laws, and Traffic Fatalities." Social
Science Quarterly (Blackwell Publishing Limited) 88.4 (2007): 913–936.
[7] Janssen, W. (1994). "Seat belt wearing and driving behaviour: An instrumented-vehicle study Apr; Vol 26(2)" (http:/ / www. ncbi. nlm. nih.
gov/ entrez/ query. fcgi?cmd=Retrieve& db=PubMed& list_uids=8198694& dopt=Abstract). Accident Analysis and Prevention. pp.2492. .
[8] Adams, John. "The efficacy of seat belt legislation: A comparative study of road accident fatality statistics from 18 countries"
(http://www.geog.ucl.ac.uk/~jadams/PDFs/SAE seatbelts.pdf). Dept of Geography, University College, London. 1981.
[9] Adams, John (4 January 2007). "Seat belt legislation and the Isles Report" (http:/ / www. john-adams. co. uk/ 2007/ 01/ 04/
seat-belt-legislation-and-the-isles-report/ ). Risk in a Hypermobile World. . Retrieved 1 August 2012.
[10] Isles, J. E. (April 1981). "The implications of European Statistics" (http:/ / john-adams. co. uk/ wp-content/ uploads/ 2007/ 01/ isles report.
pdf). Department for Transport. . Retrieved 1 August 2012.
[11] Adams (1995), Risk
[12] British Columbia Ministry of Transportation (2003). "Review and Analysis of Posted Speed Limits and Speed Limit Setting Practices in
British Columbia" (http:/ / www.th. gov. bc. ca/ publications/ eng_publications/ speed_review/ Speed_Review_Report. pdf). p.26 (tables 10
and 11). . Retrieved 2009-09-17.
[13] Jackson JSH, Blackman R (1994). A driving-simulator test of Wilde's risk homeostasis theory. Journal of Applied Psychology.
[14] Use your head on the ski slopes. Don't just rely on your helmet. By Fletcher Doyle, News Sports Reporter. Buffalo News. Updated 03/04/08.
(http://www.rit.edu/news/utilities/pdf/2008/2008_03_04_Buffalo_News_use_head_on_slopes_Shealy.pdf)
[15] Do Helmets Reduce Fatalities or Merely Alter the Patterns of Death? Journal of ASTM International Volume 5, Issue 10 (November 2008).
ISSN: 1546-962X. Shealy, Jasper E. Professor Emeritus, Rochester Institute of Technology, NY. Johnson, Robert J. Professor, University of
Vermont College of Medicine, VT. Ettlinger, Carl F. President, Vermont Safety Research, VT. doi:10.1520/JAI101504 http:/ / www. astm.
org/ DIGITAL_LIBRARY/ JOURNALS/ JAI/ PAGES/ 1043. htm
[16] How Fast Do Winter Sports Participants Travel on Alpine Slopes? Shealy, JE; Ettlinger, CF; Johnson, RJ. Journal of ASTM International
Volume 2, Issue 7 (July/August 2005): "The average speed for helmet users of 45.8 km/h (28.4 mph) was significantly higher than those not
using a helmet at 41.0 km/h (25.4 mph)." doi:10.1520/JAI12092
[17] "US Skydiving Fatalities History" (http:/ / web. archive. org/ web/ 20030211051448/ http:/ / www. skydivenet. com/ fatalities/
fatalities_history. html). .
[18] "http:/ / mypages. iit.edu/ ~kallend/ skydive/ fatalities.gif" (http:/ / www. iit. edu/ ~kallend/ skydive/ fatalities. gif). .
[19] Understanding children's injury-risk behavior: Wearing safety gear can lead to increased risk taking. Morrongiello BA, Walpole B, and
Lasenby J. Accident Analysis & Prevention Volume 39, Issue 3, May 2007, Pages 618–623.
[20] Green, Edward C. (2009-03-29). "The Pope May Be Right" (http:/ / www. washingtonpost. com/ wp-dyn/ content/ article/ 2009/ 03/ 27/
AR2009032702825. html). The Washington Post. .
[21] "The pope was right about condoms, says Harvard HIV expert" (http:/ / www. bbc. co. uk/ blogs/ ni/ 2009/ 03/
aids_expert_who_defended_the_p.html). Sunday Sequence. BBC Radio Ulster. 2009-03-29. .
[22] Shelton, James D (2007-12-01). "Ten myths and one truth about generalised HIV epidemics" (http:/ / www. thelancet. com/ journals/ lancet/
article/PIIS0140-6736(07)61755-3/fulltext). The Lancet 370 (9602): 1809–1811. doi:10.1016/S0140-6736(07)61755-3.
[23] Gray, Ronald; Kigozi, Godfrey; Serwadda, David; Makumbi, Frederick; Watya, Stephen; Nalugoda, Fred; Kiwanuka, Noah; Moulton,
Lawrence H; et al. (2007-02-01). "Male circumcision for HIV prevention in men in Rakai, Uganda: a randomised trial"
(http://www.thelancet.com/journals/lancet/article/PIIS0140673607603134/abstract). The Lancet 369 (9562): 657–666.
doi:10.1016/S0140-6736(07)60313-4.
[24] Wilson, Nicholas, Wentao Xiong, and Christine Mattson (2011). "Is Sex Like Driving? Risk Compensation Associated with Randomized
Male Circumcision in Kisumu, Kenya" (http:/ / web. williams. edu/ Economics/ wp/ Wilson_Circumcision. pdf). Williams College Economics
Department Working Paper Series. .
[25] http:/ / onlinelibrary.wiley. com/ doi/ 10. 1111/ j.1539-6924. 1986. tb00196. x/ abstract
[26] http:/ / www.econtalk. org/ archives/ 2011/ 11/ baumeister_on_g. html
[27] Wilde, Gerald J.S. (2001). Target Risk 2: A New Psychology of Safety and Health (http:/ / psyc. queensu. ca/ target/ index. html).
ISBN 0-9699124-3-9.
[28] Munich taxicab experiment discussion (http:/ / psyc.queensu. ca/ target/ chapter07. html)
[29] Drivers leave less margin when overtaking helmeted cyclists (http:/ / www. educatedguesswork. org/ movabletype/ archives/ 2006/ 09/
risk_homeostasi_1.html)
[30] O'Neill B, Williams A (June 1998). "Risk homeostasis hypothesis: a rebuttal". Inj. Prev. 4 (2): 92–93. doi:10.1136/ip.4.2.92. PMC 1730350.
PMID 9666359.
[31] D. C. Andreassen (1985). "Linking deaths with vehicles and population". Traffic Engineering and Control 26 (11): 547–549.
[32] J. Broughton (1988). "Predictive models of road accident fatalities". Traffic Engineering and Control 29 (5): 296–300.
[33] S. Oppe (1991). "The development of traffic and traffic safety in six developed countries". Accident Analysis and Prevention 23 (5):
401–412. doi:10.1016/0001-4575(91)90059-E. PMID 1741895.
[34] J. R. M. Ameen and J. A. Naji (2001). "Causal models for road accident fatalities in Yemen". Accident Analysis and Prevention 33 (4):
547–561. doi:10.1016/S0001-4575(00)00069-5. PMID 11426685.
Other sources
Adams, John (1995). Risk. Routledge. ISBN 1-85728-068-7.
Wilde, Gerald J.S. (1994). Target Risk (http://psyc.queensu.ca/target/). PDE Publications. ISBN 0-9699124-0-4.
Retrieved 2006-04-26.
External links
'Naked' streets are safer, say Tories (http:/ / www. timesonline. co. uk/ tol/ news/ article1295120. ece) The
Times
Sam Peltzman on IDEAS (http:/ / ideas. repec. org/ f/ ppe234. html) at RePEc
Sam Peltzman podcast (http:/ / www. econtalk. org/ archives/ 2006/ 11/ peltzman_on_reg. html) Interview at
EconTalk
"Regulation and the Wealth of Nations" (http://pcpe.libinst.cz/nppe/3_2/nppe3_2_3.pdf) (New Perspectives
on Political Economy, Volume 3, Number 2, 2007, pp. 185–204)
"Scrap the Traffic Lights" (http:/ / www. foxnews. com/ opinion/ 2010/ 08/ 03/
john-stossel-private-sector-government-business-economy-traffic-accidents/ ) John Stossel shows some concrete
examples.
"Regulation and the Natural Progress of Opulence" (http:/ / faculty. chicagobooth. edu/ sam. peltzman/ teaching/
aei brookings0904. pdf) (PDF), a lecture by Peltzman at the American Enterprise Institute in 2004
Selective perception
Selective perception is the process by which individuals perceive what they want to in media messages and
disregard the rest. It is a broad term for the tendency of all people to "see things" based on their own frame of
reference. Selective perception may refer to any number of cognitive biases in psychology related to the way
expectations affect perception. Human judgment and decision making are distorted by an array of cognitive,
perceptual and motivational biases, and people tend not to recognise their own bias, though they easily recognise
(and even overestimate) the operation of bias in the judgment of others.[1] One reason this may occur is that people
are simply bombarded with too many stimuli every day to pay equal attention to everything, so they pick and choose
according to their own needs.[2]
To understand when and why a particular region of a scene is selected, studies have observed and described the eye
movements of individuals as they perform specific tasks. In these studies, vision is an active process that integrates
scene properties with specific, goal-oriented oculomotor behaviour.[3]
Several other studies have shown that students who were told they were consuming alcoholic beverages (which in
fact were non-alcoholic) perceived themselves as being "drunk", exhibited fewer physiological symptoms of social
stress, and drove a simulated car similarly to other subjects who had actually consumed alcohol. The result is
somewhat similar to the placebo effect.
In one classic study on this subject related to the hostile media effect (which is itself an example of selective
perception), viewers watched a filmstrip of a particularly violent Princeton-Dartmouth American football game.
Princeton viewers reported seeing nearly twice as many rule infractions committed by the Dartmouth team as did
Dartmouth viewers. One Dartmouth alumnus did not see any infractions committed by the Dartmouth side and
erroneously assumed he had been sent only part of the film, sending word requesting the rest.
[4]
Selective perception is also an issue for advertisers, as consumers may engage with some ads and not others based on
their pre-existing beliefs about the brand.
Seymour Smith, a prominent advertising researcher, found evidence for selective perception in advertising research
in the early 1960s, and he defined it to be "a procedure by which people let in, or screen out, advertising material
they have an opportunity to see or hear. They do so because of their attitudes, beliefs, usage preferences and habits,
conditioning, etc."[5] People who like, buy, or are considering buying a brand are more likely to notice advertising
than are those who are neutral toward the brand. This fact has repercussions within the field of advertising research
because any post-advertising analysis that examines the differences in attitudes or buying behavior among those
aware versus those unaware of advertising is flawed unless pre-existing differences are controlled for. Advertising
research methods that utilize a longitudinal design are arguably better equipped to control for selective perception.
Selective perception is of two types:
Low level: perceptual vigilance
High level: perceptual defense
References
[1] Emily Pronin, "Perception and misperception of bias in human judgment" (http://web.missouri.edu/~segerti/capstone/Biasinjudgement.pdf),
Trends in Cognitive Sciences, Volume 11, Issue 1, January 2007, pp. 37–43.
[2] http:/ / lilt. ilstu.edu/ rrpope/ rrpopepwd/ articles/ perception3. html
[3] [3] Canosa, R.L. (2009). Real-world vision: selective perception and task. ACM Trans. Appl. Percpt., 6, 2, Article 11, 34 pages.
[4] Hastorf, A.H. & Cantril, H. (1954). They saw a game: A case study. Journal of Abnormal and Social Psychology, 49, 129-134.
[5] Nowak, Theodore and Smith, Seymour. "Advertising Works, and Advertising Research Does Too." Presentation to ESOMAR. Spain:
1970s.
Further reading
Selective Perception in Stock Investing (http:/ / www. investingator. org/ selective-perception. html)
Semmelweis reflex
The Semmelweis reflex or "Semmelweis effect" is a metaphor for the reflex-like tendency to reject new evidence or
new knowledge because it contradicts established norms, beliefs or paradigms.
The term originated from Ignaz Semmelweis, who discovered that childbed fever mortality rates could be reduced
ten-fold if doctors washed their hands (we would now say disinfected them) with a chlorine solution between having
contact with infected patients and non-infected patients. His hand-washing suggestions were rejected by his
contemporaries (see Contemporary reaction to Ignaz Semmelweis).
While there is some uncertainty regarding the origin and generally accepted use of the expression, the expression
"Semmelweis reflex" has been documented and used at least by the author Robert Anton Wilson.[1] In his
book The Game of Life, Dr. Timothy Leary provided the following polemical definition of the Semmelweis reflex:
"Mob behavior found among primates and larval hominids on undeveloped planets, in which a discovery of
important scientific fact is punished". The expression has found its way into philosophy and religious studies as
"unmitigated human skepticism concerning causality".
References
[1] Wilson, Robert Anton (1991). The Game of Life. New Falcon Publications. ISBN 1561840505.
Selection bias
Selection bias is a statistical bias in which there is an error in choosing the individuals or groups to take part in a
scientific study.[1] It is sometimes referred to as the selection effect. The term "selection bias" most often refers to
the distortion of a statistical analysis resulting from the method of collecting samples. If selection bias is not
taken into account, certain conclusions drawn may be wrong.
Types
There are many types of possible selection bias, including:
Sampling bias
Sampling bias is systematic error due to a non-random sample of a population,[2] causing some members of the
population to be less likely to be included than others, resulting in a biased sample, defined as a statistical sample of
a population (or non-human factors) in which all participants are not equally balanced or objectively represented.[3]
It is mostly classified as a subtype of selection bias,[4] sometimes specifically termed sample selection bias,[5][6] but
some classify it as a separate type of bias.[7]
A distinction of sampling bias, albeit not universally accepted, is that it undermines the external validity of a test (the
ability of its results to be generalized to the rest of the population), while selection bias mainly addresses internal
validity for differences or similarities found in the sample at hand. In this sense, errors occurring in the process of
gathering the sample or cohort cause sampling bias, while errors in any process thereafter cause selection bias.
Examples of sampling bias include self-selection, pre-screening of trial participants, discounting trial subjects/tests
that did not run to completion and migration bias by excluding subjects who have recently moved into or out of the
study area.
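A minimal sketch of self-selection, the first sampling bias listed above (all numbers are hypothetical): if the probability of entering the sample depends on the very quantity being measured, the sample mean drifts away from the population mean.

```python
# Hypothetical illustration of self-selection: respondents with higher scores
# are more likely to answer the survey, so the sample mean is biased upward.
import random

random.seed(0)

# Population of 100,000 "satisfaction" scores, mean 5, sd 2 (made-up numbers).
population = [random.gauss(5.0, 2.0) for _ in range(100_000)]

def survey(pop):
    # Happier people are more likely to respond: P(respond) grows with x.
    return [x for x in pop if random.random() < min(max(x / 10.0, 0.0), 1.0)]

sample = survey(population)
pop_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)

print(round(pop_mean, 2), round(sample_mean, 2))  # sample mean is clearly higher
```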
Time interval
Early termination of a trial at a time when its results support a desired conclusion.
A trial may be terminated early at an extreme value (often for ethical reasons), but the extreme value is likely to
be reached by the variable with the largest variance, even if all variables have a similar mean.
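The early-termination bias can be illustrated with a toy simulation (the threshold and sample sizes are hypothetical): even when the true effect is zero, stopping and reporting a trial whenever an interim result looks extreme inflates the average reported effect.

```python
# Hypothetical illustration: the true effect is zero, but each trial is
# stopped and reported as soon as its running mean looks "extreme"
# (> 0.4 after at least 10 observations), biasing the average reported
# effect above zero.
import random

random.seed(3)

def run_trial(max_n=200, threshold=0.4):
    total = 0.0
    for i in range(1, max_n + 1):
        total += random.gauss(0.0, 1.0)  # true effect is zero
        mean = total / i
        if i >= 10 and mean > threshold:
            return mean  # stopped early at an extreme interim value
    return mean  # trial ran to completion

reported = [run_trial() for _ in range(2_000)]
avg = sum(reported) / len(reported)
print(round(avg, 2))  # positive, despite a true effect of zero
```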
Exposure
Susceptibility bias
Clinical susceptibility bias, when one disease predisposes for a second disease, and the treatment for the first
disease erroneously appears to predispose to the second disease. For example, postmenopausal syndrome gives
a higher likelihood of also developing endometrial cancer, so estrogens given for the postmenopausal
syndrome may receive a higher than actual blame for causing endometrial cancer.[8]
Protopathic bias, when a treatment for the first symptoms of a disease or other outcome appears to cause the
outcome. It is a potential bias when there is a lag time from the first symptoms and start of treatment before
actual diagnosis.[8] It can be mitigated by lagging, that is, exclusion of exposures that occurred in a certain time
period before diagnosis.[9]
Indication bias, a potential mix-up between cause and effect when exposure is dependent on indication, e.g. a
treatment is given to people at high risk of acquiring a disease, potentially causing a preponderance of treated
people among those acquiring the disease. This may cause an erroneous appearance of the treatment being a
cause of the disease.[10]
Data
Partitioning data with knowledge of the contents of the partitions, and then analyzing them with tests designed for
blindly chosen partitions.
Rejection of "bad" data on arbitrary grounds, instead of according to previously stated or generally agreed criteria.
Rejection of "outliers" on statistical grounds that fail to take into account important information that could be
derived from "wild" observations.
[11]
Studies
Selection of which studies to include in a meta-analysis (see also combinatorial meta-analysis).
Performing repeated experiments and reporting only the most favorable results, perhaps relabelling lab records of
other experiments as "calibration tests", "instrumentation errors" or "preliminary surveys".
Presenting the most significant result of a data dredge as if it were a single experiment (which is logically the
same as the previous item, but is seen as much less dishonest).
Attrition
Attrition bias is a kind of selection bias caused by attrition (loss of participants),[12] i.e. discounting trial subjects or
tests that did not run to completion. It includes dropout, nonresponse (lower response rate), withdrawal and protocol
deviators. It gives biased results where it is unequal in regard to exposure and/or outcome. For example, in a test of a
dieting program, the researcher may simply reject everyone who drops out of the trial, but most of those who drop
out are those for whom it was not working. Different loss of subjects in the intervention and comparison groups may
change the characteristics of these groups and outcomes irrespective of the studied intervention.[12]
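The dieting example above can be sketched numerically (all figures are hypothetical): discarding dropouts, who are disproportionately the participants for whom the program failed, makes a useless program appear effective.

```python
# Hypothetical illustration: the program does nothing (weight changes are
# centred on zero), but participants who are not losing weight mostly drop
# out, so analyzing only completers shows apparent "weight loss".
import random

random.seed(1)

true_changes = [random.gauss(0.0, 3.0) for _ in range(10_000)]  # kg, made up

def completers(changes):
    # Those losing weight stay with probability 0.9; others with only 0.2.
    return [c for c in changes if random.random() < (0.9 if c < 0 else 0.2)]

kept = completers(true_changes)
true_mean = sum(true_changes) / len(true_changes)
observed_mean = sum(kept) / len(kept)

print(round(true_mean, 2), round(observed_mean, 2))  # ~0 vs clearly negative
```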
Observer selection
Data are filtered not only by study design and measurement, but by the necessary precondition that there has to be
someone doing a study. In situations where the existence of the observer or the study is correlated with the data,
observation selection effects occur, and anthropic reasoning is required.[13]
An example is the past impact event record of Earth: if large impacts cause mass extinctions and ecological
disruptions precluding the evolution of intelligent observers for long periods, no one will observe any evidence of
large impacts in the recent past (since they would have prevented intelligent observers from evolving). Hence there
is a potential bias in the impact record of Earth.[14]
Astronomical existential risks might similarly be underestimated
due to selection bias, and an anthropic correction has to be introduced.[15]
Avoidance
In the general case, selection biases cannot be overcome with statistical analysis of existing data alone, though
Heckman correction may be used in special cases. An informal assessment of the degree of selection bias can be
made by examining correlations between exogenous (background) variables and a treatment indicator. However, in
regression models, it is correlation between unobserved determinants of the outcome and unobserved determinants of
selection into the sample which bias estimates, and this correlation between unobservables cannot be directly
assessed by the observed determinants of treatment.[16]
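As a rough illustration of the Heckman correction mentioned above, here is a two-step sketch on simulated data using only NumPy and SciPy. All variable names and parameter values are invented for the example: a probit selection model is fitted, each selected observation's inverse Mills ratio is computed, and that ratio is added as a regressor in the outcome equation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 5_000

# Simulated data: selection depends on z and x, and the selection and outcome
# errors are correlated (rho = 0.8), which biases naive OLS on the selected sample.
x = rng.normal(size=n)
z = rng.normal(size=n)
u, e = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=n).T
selected = (0.5 + z + x + u) > 0      # selection equation
y = 0.5 + 2.0 * x + e                 # outcome equation (true slope = 2)

# Step 1: probit of the selection indicator on its covariates, by MLE.
def probit_nll(g):
    idx = g[0] + g[1] * z + g[2] * x
    p = norm.cdf(idx).clip(1e-10, 1 - 1e-10)
    return -np.where(selected, np.log(p), np.log(1 - p)).sum()

g = minimize(probit_nll, x0=np.zeros(3)).x

# Step 2: OLS on the selected sample with the inverse Mills ratio added.
idx = (g[0] + g[1] * z + g[2] * x)[selected]
mills = norm.pdf(idx) / norm.cdf(idx)
X = np.column_stack([np.ones(mills.size), x[selected], mills])
corrected = np.linalg.lstsq(X, y[selected], rcond=None)[0]
naive = np.linalg.lstsq(X[:, :2], y[selected], rcond=None)[0]

print("naive OLS slope:        ", round(naive[1], 2))   # biased downward
print("Heckman-corrected slope:", round(corrected[1], 2))
```

The naive regression on the selected sample underestimates the slope because the outcome error is correlated with selection; adding the Mills-ratio term recovers an estimate near the true value of 2.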
Related issues
Selection bias is closely related to:
publication bias or reporting bias, the distortion produced in community perception or meta-analyses by not
publishing uninteresting (usually negative) results, or results which go against the experimenter's prejudices, a
sponsor's interests, or community expectations.
confirmation bias, the distortion produced by experiments that are designed to seek confirmatory evidence instead
of trying to disprove the hypothesis.
exclusion bias, which results from applying different criteria to cases and controls with regard to participation
eligibility for a study, or from different variables serving as the basis for exclusion.
Notes
[1] Dictionary of Cancer Terms selection bias (http:/ / www. cancer. gov/ Templates/ db_alpha. aspx?CdrID=44087) Retrieved on September
23, 2009.
[2] Medical Dictionary - 'Sampling Bias' (http:/ / www. medilexicon. com/ medicaldictionary. php?t=10087) Retrieved on September 23, 2009
[3] TheFreeDictionary biased sample (http:/ / medical-dictionary. thefreedictionary. com/ Sample+ bias) Retrieved on 2009-09-23. Site in turn
cites: Mosby's Medical Dictionary, 8th edition.
[4] Dictionary of Cancer Terms Selection Bias (http:/ / medical. webends. com/ kw/ Selection Bias) Retrieved on September 23, 2009
[5] The effects of sample selection bias on racial differences in child abuse reporting (http:/ / www. ncbi. nlm. nih. gov/ pubmed/ 9504213) Ards
S, Chung C, Myers SL Jr. Child Abuse Negl. 1999 Dec;23(12):1209; author reply 1211-5. PMID 9504213
[6] Sample Selection Bias Correction Theory (http:/ / www. cs. nyu. edu/ ~mohri/ postscript/ bias. pdf) Corinna Cortes, Mehryar Mohri, Michael
Riley, and Afshin Rostamizadeh. New York University.
[7] Page 262 in: Behavioral Science. Board Review Series. (http:/ / books. google. com/ books?id=f0IDHvLiWqUC& printsec=frontcover&
source=gbs_navlinks_s#v=onepage& q=& f=false) By Barbara Fadem. ISBN 0-7817-8257-0, ISBN 978-0-7817-8257-9. 216 pages
[8] Feinstein AR, Horwitz RI (November 1978). "A critique of the statistical evidence associating estrogens with endometrial cancer". Cancer
Res. 38 (11 Pt 2): 4001–5. PMID 698947.
[9] Tamim H, Monfared AA, LeLorier J (March 2007). "Application of lag-time into exposure definitions to control for protopathic bias".
Pharmacoepidemiol Drug Saf 16 (3): 250–8. doi:10.1002/pds.1360. PMID 17245804.
[10] Matthew R. Weir (2005). Hypertension (Key Diseases) (Acp Key Diseases Series). Philadelphia, Pa: American College of Physicians.
p. 159. ISBN 1-930513-58-5.
[11] Kruskal, W. (1960) Some notes on wild observations, Technometrics. (http:/ / www. tufts. edu/ ~gdallal/ out. htm)
[12] Jüni P, Egger M. Empirical evidence of attrition bias in clinical trials. Int J Epidemiol. 2005 Feb;34(1):87-8.
[13] Nick Bostrom, Anthropic Bias: Observation selection effects in science and philosophy. Routledge, New York 2002
[14] Milan M. Ćirković, Anders Sandberg, and Nick Bostrom. Anthropic Shadow: Observation Selection Effects and Human Extinction Risks
(http:/ / www. nickbostrom.com/ papers/ anthropicshadow. pdf). Risk Analysis, Vol. 30, No. 10, 2010.
[15] Max Tegmark and Nick Bostrom, How unlikely is a doomsday catastrophe? (http:/ / arxiv. org/ abs/ astro-ph/ 0512204) Nature, Vol. 438
(2005): 75. arXiv:astro-ph/0512204
[16] Heckman, J. (1979) Sample selection bias as a specification error. Econometrica, 47, 153–161.
Social comparison bias
Social comparison bias is the tendency to feel dislike and competitiveness toward someone who is seen as
physically or mentally better than oneself.
Introduction
Many people base their moods and feelings on how well they are doing compared with other people in their
environment, and social comparison bias occurs regularly in everyday life. Social comparison bias is the tendency
to feel dislike and competitiveness toward someone who is seen as physically or mentally better than oneself.[1]
This can be compared with social comparison, which is believed to be central to achievement motivation, feelings
of injustice, depression, jealousy and people's willingness to remain in relationships or jobs.[2][3]
In this competitive day and age, people compete for the best grades, the best jobs and the best houses, and in many
situations social comparison bias is fairly self-explanatory. For example, you might make a comparison if you shop
at low-end department stores while a peer shops at designer stores, and you are overcome with feelings of
resentment, anger and envy toward that peer; this social comparison bias involves wealth and social status. Some of
us make social comparisons but are largely unaware of them.[4] In most cases, we try to compare ourselves to
those in our peer group or to those with whom we are similar.[5]
Research
There are many studies on social comparison and the effects it has on mental health. One study examined the
relationship between depression and social comparison.[6] Thwaites and Dagnan, in "Moderating variables in the
relationship between social comparison and depression", investigated the relationship between social comparison
and depression within an evolutionary framework. Their hypothesis was that depression resulted from the social
comparisons that people carried out. The study investigated how the relationship was moderated by the importance
of the comparison dimensions to the person, and by the perceived importance of those dimensions to other people.
To measure depression in their participants, the researchers used a self-esteem test, the Self-Attributes
Questionnaire, created by Pelham and Swann in 1989. The test consisted of 10-point Likert-scale ratings on 10
individual social comparison dimensions (e.g. intelligence, social skills, sense of humor), with additional questions
exploring beliefs about the importance of the social comparison dimensions. Data were collected from a combined
clinical and non-clinical sample of 174 people.[6] Based on the data collected, they concluded that social
comparison did have a relationship with depression: participants who engaged in social comparison more often had
higher levels of depression than those who rarely did.
Cognitive effects
One major disorder that can occur with social comparison bias is depression. Depression is typically diagnosed
during a clinical encounter using the Diagnostic and Statistical Manual of Mental Disorders, fourth edition
(DSM-IV) criteria. Symptoms include depressed mood, hopelessness, and sleep difficulties, including both
hypersomnia and insomnia.[6] Clinical depression can be caused by many factors in a person's life, and it is the
mental illness most commonly associated with social comparison bias.[7] Biologically, depression has been linked
to a decrease in the size of the hippocampus and to lowered levels of serotonin circulating through the brain.[8]
Another negative outcome associated with social comparison bias is suicidal ideation, which can be defined as
recurrent thoughts about suicide and suicide attempts; suicide is the taking of one's own life.[5] Suicidal ideation
can occur with social comparison bias because people who compare themselves to those seen as better than
themselves become mentally discouraged, believing they cannot perform or look a certain way, which causes low
self-esteem. Low self-esteem is one of the main factors in suicidal ideation.[1]
Social comparison bias in the media
Mainstream media are also a main contributor to social comparisons.[9] Everywhere you go, advertisements try to
portray to the public what beauty should be; magazines, commercials and billboards all show what beauty is
supposed to look like. When a growing generation of youth and adults sees this, they socially compare themselves
to the advertisements around them.[10] When they do not look a certain way or weigh a certain amount, society
puts them down for it. This can cause low self-esteem and an onset of depression because they do not fit the mold
of what beauty is seen to be.[9] People are criticized when they do not look like the models in magazines or on TV.
Socially comparing oneself to the people in the media can have negative effects and cause mental anxiety, stress,
negative body image and even eating disorders.[11] With media being such an important part of this generation,
low self-esteem and negative self-images affect society through tragic incidents including suicide and self-harm.
Social comparison to others, whether on TV or in magazines, can cause people to lose confidence in themselves
and stress over trying to be perfect and be what society expects them to be. In an experiment that studied women's
body image after they compared themselves to different types of models, body image was significantly more
negative after viewing thin media images than after viewing images of either average-size or plus-size models.[11]
Media are one of the leading causes of poor body image among youth and adults because of social comparison.[12]
Social comparison bias through social media
With social media now a main source of news and breaking stories, people can connect with others from all over
the world and learn in new ways.[11] It is easier than ever to see people's private lives on a public network; social
networks such as Facebook make viewing someone's daily life as simple as sending a request. Society is exposed
to everyone's lives, and people are starting to compare themselves with the friends they have on Facebook. It is
easy to log in, see someone brag about their success or their new belongings, and feel bad about yourself. In recent
studies, researchers have linked Facebook with depression in this generation of social media.[11] Users may
develop low self-esteem from seeing friends online who have more exciting lives and more popularity. This social
comparison bias among social network users can lead people to think of their lives as less fulfilling than they want
them to be: they see pictures or statuses about job promotions or new jobs, vacations, new relationships, fun
outings, or friends who can afford nice things. This can affect a person's self-esteem and cause depression,[13] and
they can start to feel bad about their appearance and their life in general. Social media do have an impact on the
amount of social comparison people engage in.[14]
Social comparison bias in the classroom
Social comparisons are also very important in the school system. Depending on their grade level, students can be
very competitive about the grades they receive compared with their peers. Social comparisons not only influence
students' self-concepts but also improve their performance.[15] This social comparison process leads to a lower
self-concept when the class level is high and to a higher self-concept when the class level is low.[15] Therefore,
two students with equal performance in a domain may develop different self-concepts when they belong to
different classes with different performance levels.[10] Social comparisons are important and valid predictors of
students' self-evaluations and achievement behavior. Students may feel jealousy or competitiveness when it comes
to grades and getting into better colleges and universities than their peers, but social comparison can also motivate
students to do well because they want to keep up with their peers.
Conclusion
Social comparison bias can occur in people's everyday lives, whether on social networking sites, in the media, in
society regarding wealth and social status, or in the school system. It can harm one's mental health by increasing
the risk of depression, suicidal ideation and other mental disorders.[16] Social comparison in this generation is
everywhere, and society revolves around comparing ourselves to each other, whether to gain higher self-esteem or
to try to better ourselves as a whole. With social comparison being so pervasive, it can lead to social comparison
bias and cause negative effects in a person's life. The research reviewed supports the hypothesis that depression has
a relationship with the social comparison that people in society engage in.
References
[1] Garcia, Song & Tesser 2010, pp. 97–101
[2] Buunk & Gibbons 1997
[3] Suls & Wheeler 2000
[4] Smith & Leach 2004, pp. 297–308
[5] Taylor & Lobel 1989, pp. 569–575
[6] Thwaites & Dagnan 2004, pp. 309–323
[7] Pyszczynski & Greenberg 1987, pp. 122–138
[8] Nolen-Hoeksema & Morrow 1993, pp. 561–570
[9] Richins 1991, pp. 73–81
[10] Wood V. J. 1989
[11] Kaplan & Haenlein 2010, pp. 59–68
[12] Menon, Kyung & Agrawal 2008, pp. 39–52
[13] Pappas 2012
[14] Kendler & Karkowski-Shuman 1997, pp. 539–547
[15] Möller 2006
[16] Kendler 1995, pp. 5–9
Sources
Burson, A.; Larrick, R.; Soll, J. (2005). "Social Comparison and Confidence: When Thinking You're Better than
Average Predicts Overconfidence". Ross School of Business (Paper No. 1016).
Garcia, S.; Song, H.; Tesser, A. (2010). "Tainted recommendations: The social comparison bias". Organizational
Behavior and Human Decision Processes 113 (2): 97–101.
Huguet, P.; Dumas, F. (2001). "Social comparison choices in the classroom: further evidence for students' upward
comparison tendency and its beneficial impact on performance". European Journal of Social Psychology:
557–578.
Kaplan, A.; Haenlein, M. (2010). "Users of the world, unite! The challenges and opportunities of Social Media".
Business Horizons 53: 59–68.
Kendler, K. S. (1995). "Major depression and the environment: a psychiatric genetic perspective".
Pharmacopsychiatry 31: 5–9.
Kendler, K. S.; Karkowski-Shuman, L. (1997). "Stressful life events and genetic liability to major depression:
genetic control of exposure to the environment?". Psychol Med 27: 539–547.
Menon, G.; Kyung, E.; Agrawal, N. (2008). "Biases in social comparisons: Optimism or pessimism?".
Organizational Behavior and Human Decision Processes 108 (2009): 39–52.
Möller, J.; Köller, O. (2001). "Dimensional comparisons: An experimental approach to the internal/external frame
of reference model". Journal of Educational Psychology 93: 826–835.
Monteil, J.; Huguet, P. (1993). "The Influence of Social Comparison Situations on Individual Task Performance:
Experimental Illustrations". International Journal Of Psychology 28 (5): 627–643.
Nolen-Hoeksema, S.; Morrow, J. (1993). "Effects of rumination and distraction on naturally occurring depressed
mood". Cognitive Emotion 7: 561–570.
Pappas, S. (2012). "Facebook With Care: Social Networking Site Can Hurt Self-Esteem". Live Science Journal.
Pyszczynski, T.; Greenberg, J. (1987). "Self-regulatory perseveration and the depressive self-focusing style: a
self-awareness theory of reactive depression". Psychol Bull 102: 122–138.
Richins, M. (1991). "Social Comparison and the Idealized Images of Advertising". Journal of Consumer
Research 18 (1): 73–81.
Smith, H.; Leach, C. (2004). "Group membership and everyday social comparison experiences". Eur J Soc
Psychol 34 (3): 297–308.
Taylor, S.; Lobel, M. (1989). "Social Comparison Activity Under Threat: Downward Evaluation and Upward
Contacts". Psychological Review (The American Psychological Association) 96 (4): 569–575.
Thwaites, R.; Dagnan, D. (2004). "Moderating variables in the relationship between social comparison and
depression: An evolutionary perspective". Psychology and Psychotherapy (Theo, Res, Pra) 77: 309–323.
Social desirability bias
Social desirability bias is the tendency of respondents to answer questions in a manner that will be viewed
favorably by others. It can take the form of over-reporting "good behavior" or under-reporting "bad," or undesirable
behavior. The tendency poses a serious problem with conducting research with self-reports, especially
questionnaires. This bias interferes with the interpretation of average tendencies as well as individual differences.
Topics where SDR is of special concern are self-reports of abilities, personality, sexual behavior, and drug use.
When confronted with the question "How often do you masturbate?", for example, respondents may be pressured by
the societal taboo against masturbation, and either under-report the frequency or avoid answering the question.
Therefore, the mean rates of masturbation derived from self-report surveys are likely to be severe underestimates.
When confronted with the question, "Do you use drugs/illicit substances?" the respondent may be influenced by the
fact that controlled substances, including the more commonly used marijuana, are generally illegal. Respondents
may feel pressured to deny any drug use or rationalize it, e.g., "I only smoke marijuana when my friends are around."
The bias can also influence reports of number of sexual partners. In fact, the bias may operate in opposite directions
for different subgroups: Whereas men tend to inflate the numbers, women tend to underestimate theirs. In either
case, the mean reports from both groups are likely to be distorted by social desirability bias.
Other topics that are sensitive to social desirability bias include:
Personal income and earnings, often inflated when low and deflated when high.
Feelings of low self-worth and/or powerlessness, often denied.
Excretory functions, often approached uncomfortably, if discussed at all.
Compliance with medicinal dosing schedules, often inflated.
Religion, often either avoided or uncomfortably approached.
Patriotism, either inflated or, if denied, done so with a fear of other party's judgement.
Bigotry and intolerance, often denied, even if it exists within the responder.
Intellectual achievements, often inflated.
Physical appearance, either inflated or deflated.
Acts of real or imagined physical violence, often denied.
Indicators of charity or "benevolence," often inflated.
Illegal acts, often denied.
Individual differences
The fact that people differ in their tendency to engage in socially desirable responding (SDR) is a special concern to
those measuring individual differences with self-reports. Individual differences in SDR make it difficult to
distinguish those people with good traits who are responding factually from those distorting their answers in a
positive direction.
When socially desirable responding (SDR) cannot be eliminated, researchers may resort to evaluating the tendency
and then controlling for it. A separate measure of SDR must be administered together with the primary measure
(test or interview) aimed at the subject matter of the research. The key assumption is that respondents who answer
in a socially desirable manner on that scale are also responding desirably to all self-reports throughout the study.
In some cases the entire questionnaire package from high scoring respondents may simply be discarded.
Alternatively, respondents' answers on the primary questionnaires may be statistically adjusted commensurate with
their SDR tendencies. For example, this adjustment is performed automatically in the standard scoring of MMPI
scales.
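The statistical adjustment described above can be sketched as a simple regression-based correction. This is a generic illustration, not the MMPI's actual scoring formula, and the scores below are invented: the component of the primary score predicted by the SDR scale is estimated and removed.

```python
# Hypothetical primary-trait scores and SDR-scale scores for ten respondents.
primary = [12, 15, 14, 18, 20, 11, 16, 19, 13, 17]
sdr = [3, 6, 5, 8, 9, 2, 6, 9, 4, 7]

n = len(primary)
mean_sdr = sum(sdr) / n
mean_primary = sum(primary) / n

# Slope of the regression of the primary score on the SDR score.
b = sum((s - mean_sdr) * (p - mean_primary) for s, p in zip(sdr, primary)) \
    / sum((s - mean_sdr) ** 2 for s in sdr)

# Adjusted score: remove the component predicted by SDR, keeping the mean.
adjusted = [p - b * (s - mean_sdr) for s, p in zip(sdr, primary)]
print([round(a, 1) for a in adjusted])
```

By construction the adjusted scores are uncorrelated with the SDR scale, which illustrates the confounding problem: any real trait differences that happen to correlate with desirability are removed along with the response bias.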
The major concern with SDR scales is that they confound style with content. After all, people actually differ in the
degree to which they possess desirable traits (e.g., nuns versus criminals). Consequently, measures of social
desirability confound true differences with social-desirability bias.
Standard measures
Until recently, the most commonly used measure of socially desirable responding was the Marlowe–Crowne Social
Desirability Scale.[1] The original version comprised 33 true–false items. A shortened version, the Strahan–Gerbasi
scale, comprises only 10 items, but some, such as Thompson and Phua, have raised questions regarding the
reliability of this measure.[2][3]
In 1991, Delroy Paulhus published the Balanced Inventory of Desirable Responding, a questionnaire designed to
measure two forms of SDR.[4] This 40-item instrument provides separate subscales for "impression management",
the tendency to give inflated self-descriptions to an audience, and "self-deceptive enhancement", the tendency to
give honest but inflated self-descriptions. The commercial version of the BIDR is called the "Paulhus Deception
Scales (PDS)".[5]
Non-English measures
Scales designed to tap response styles are available in all major languages, including Italian[6] and German.[7]
Another measure has been used in surveys or opinion polls carried out by interviewing people face-to-face or
through the telephone.[8]
Other response styles
'Extreme response bias' (ERB) takes the form of exaggerated extremity preference, e.g. for '1' or '7' on 7-point
scales. Its converse, 'moderacy bias', entails a preference for middle-range (or midpoint) responses (e.g. 3–5 on
7-point scales). 'Acquiescence' is the tendency to prefer higher ratings over lower ratings, whatever the content of
the question.
Anonymity and confidentiality
When the subjects' details are not required, as in sample investigations and screenings, anonymous administration
is preferable, as the person does not feel directly and personally involved in the answers he or she is going to give.
Anonymous self-administration provides neutrality, detachment and reassurance. An even better result is obtained
by returning the questionnaires by mail or ballot box, which further guarantees anonymity and makes it impossible
to identify the subjects who filled in the questionnaires.
Neutralized administration
SDR tends to be reduced by wording questions in a neutral fashion. Another approach is to use forced-choice
questions in which the two options have been equated for their desirability. A third approach is to administer tests
through a computer (self-administration software).[9] A computer, even compared to the most competent
interviewer, provides a higher sense of neutrality: it does not appear to be judging.
Behavioral measurement
The most recent approach, the Over-claiming Technique, assesses the tendency to claim knowledge about
non-existent items. More complex methods to promote honest answers include the Randomized Response and
Unmatched Count techniques, as well as the Bogus Pipeline Technique.
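The randomized response idea can be sketched with Warner's original design (a simulation with made-up numbers): each respondent privately flips a biased coin and, with probability p, answers the sensitive question itself, otherwise answers its negation. No individual answer reveals the respondent's status, yet the true prevalence is recoverable from the aggregate:

```python
import random

random.seed(42)

p = 0.7          # chance the respondent answers the sensitive question itself
true_rate = 0.3  # actual prevalence (unknown to the researcher)
n = 100_000

yes = 0
for _ in range(n):
    has_trait = random.random() < true_rate
    if random.random() < p:
        yes += has_trait        # answers "do you have the trait?"
    else:
        yes += not has_trait    # answers "do you NOT have the trait?"

# Observed proportion: lam = p*pi + (1 - p)*(1 - pi); solve for pi.
lam = yes / n
estimate = (lam - (1 - p)) / (2 * p - 1)
print(f"estimated prevalence: {estimate:.3f}")
```

Because each "yes" is plausibly deniable, respondents have less incentive to distort their answers, while the researcher still recovers an estimate close to the true prevalence.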
References
[1] Crowne, D. P., & Marlowe, D. (1960). A new scale of social desirability independent of psychopathology. Journal of Consulting Psychology,
24, 349-354.
[2] http:/ / www. springerlink. com/ content/ g5771006303277ww/ fulltext. pdf
[3] Thompson, E. R. & Phua, F. T. T. 2005. Reliability among senior managers of the Marlowe-Crowne short-form social desirability scale
(http:/ / www. springerlink.com/ content/ g5771006303277ww/ fulltext. pdf), Journal of Business and Psychology, 19, 541-554.
[4] Paulhus, D.L. (1991). Measurement and control of response biases. In J.P. Robinson et al. (Eds.), Measures of personality and social
psychological attitudes. San Diego: Academic Press
[5] Paulhus D.L., (1998) Paulhus Deception Scales (PDS) is published by Multi-Health Systems of Toronto.
[6] Roccato M., (2003) Desiderabilità Sociale e Acquiescenza. Alcune Trappole delle Inchieste e dei Sondaggi. LED Edizioni Universitarie,
Torino. ISBN 88-7916-216-0
[7] Stoeber, J. (2001). The social desirability scale-17 (SD-17). European Journal of Psychological Assessment, 17, 222-232.
[8] Corbetta P., (2003) La ricerca sociale: metodologia e tecniche. Vol. I-IV. Il Mulino, Bologna.
[9] McBurney D.H., (1994) Research Methods. Brooks/Cole, Pacific Grove, California.
Status quo bias
Status quo bias is a cognitive bias: an irrational preference for the current state of affairs. The current baseline (or
status quo) is taken as a reference point, and any change from that baseline is perceived as a loss. Status quo bias
should be distinguished from a rational preference for the status quo ante, as when the current state of affairs is
objectively superior to the available alternatives, or when imperfect information is a significant problem. A large
body of evidence, however, shows that an irrational preference for the status quo, a status quo bias, frequently
affects human decision-making.
Status quo bias interacts with other non-rational cognitive processes such as loss aversion, existence bias,
endowment effect, longevity, mere exposure, and regret avoidance. Experimental evidence for the detection of
status quo bias is seen through the use of the Reversal Test. A vast number of experimental and field examples
exist: behavior regarding retirement plans, health, and ethical choices shows evidence of the status quo bias.
Examples
Kahneman, Thaler, and Knetsch created experiments that could produce this effect reliably.[1] Samuelson and
Zeckhauser (1988) demonstrated status quo bias using a questionnaire in which subjects faced a series of decision
problems, which were alternately framed to be with and without a pre-existing status quo position. Subjects tended
to remain with the status quo when such a position was offered to them.[2]
Hypothetical choice tasks: Subjects were given a hypothetical choice task in the following "neutral" version, in
which no status quo was defined: "You are a serious reader of the financial pages but until recently you have had few
funds to invest. That is when you inherited a large sum of money from your great-uncle. You are considering
different portfolios. Your choices are to invest in: a moderate-risk company, a high-risk company, treasury bills,
municipal bonds." Other subjects were presented with the same problem but with one of the options designated as
the status quo. In this case, the opening passage continued: "A significant portion of this portfolio is invested in a
moderate risk company . . . (The tax and broker commission consequences of any changes are insignificant.)" The
result was that an alternative became much more popular when it was designated as the status quo.[3]
Electric power consumers: California electric power consumers were asked about their preferences regarding
trade-offs between service reliability and rates. The respondents fell into two groups, one with much more reliable
service than the other. Each group was asked to state a preference among six combinations of reliability and rates,
with one of the combinations designated as the status quo. A strong bias to the status quo was observed. Of those in
the high-reliability group, 60.2 percent chose the status quo, whereas a mere 5.7 percent chose the low-reliability
option that the other group had been experiencing, despite its lower rates. Similarly, of those in the low-reliability
group, 58.3 percent chose their low-reliability status quo, and only 5.8 percent chose the high-reliability option.[4]
The US states of New Jersey and Pennsylvania inadvertently ran a real-life experiment providing evidence of status
quo bias in the early 1990s. As part of tort law reform programs, citizens were offered two options for their
automotive insurance: an expensive option giving them full right to sue and a less expensive option with restricted
rights to sue. In New Jersey the cheaper option was the default and most citizens selected it. Only a minority chose
the cheaper option in Pennsylvania, where the more expensive option was the default. Similar effects have been
shown for contributions to retirement plans, choice of internet privacy policies and the decision to become an organ
donor.
Status quo bias
217
Explanations
Status quo bias has been attributed to a combination of loss aversion and the endowment effect, two ideas relevant to
prospect theory. An individual weighs the potential losses of switching from the status quo more heavily than the
potential gains; this is due to the prospect theory value function being steeper in the loss domain.[2] As a result, the
individual will prefer not to switch at all. However, the status quo bias is maintained even in the absence of gain/loss
framing: for example, when subjects were asked to choose the colour of their new car, they tended towards one
colour arbitrarily framed as the status quo.[2] Loss aversion, therefore, cannot wholly explain the status quo
bias,[5] with other potential causes including regret avoidance,[5] transaction costs[6] and psychological
commitment.[2]
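The loss-aversion account can be made concrete with the prospect-theory value function. The sketch below uses the median parameter estimates reported by Tversky and Kahneman (1992); the symmetric $100 gamble is an invented example, not from the source:

```python
# Prospect-theory value function: gains are valued as x**ALPHA, losses as
# -LAM * (-x)**BETA. LAM > 1 makes the function steeper in the loss domain.
ALPHA, BETA, LAM = 0.88, 0.88, 2.25

def value(x: float) -> float:
    return x ** ALPHA if x >= 0 else -LAM * (-x) ** BETA

# Relative to the status quo (the reference point), switching offers an equal
# chance of gaining or losing $100; staying changes nothing.
switch = 0.5 * value(100) + 0.5 * value(-100)
stay = value(0)
print(f"subjective value of switching: {switch:.1f}")  # negative
print(f"subjective value of staying:   {stay:.1f}")    # 0.0
```

Because the $100 loss is weighted about 2.25 times as heavily as the $100 gain, a fair gamble away from the status quo has negative subjective value, so the decision maker prefers to stay put.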
Rational Routes to Status Quo Maintenance
Maintaining the status quo can also be a rational choice if there are cognitive or informational limitations.
Informational limitations
Decision outcomes are rarely certain, nor is the utility they may bring. Because some errors are more costly than
others (Haselton & Nettle, 2006),[7] sticking with what worked in the past is a safe option, as long as previous
decisions were 'good enough'.[8]
Cognitive limitations
Choice is often difficult,[9] and decision makers may prefer to do nothing[10] or to maintain their current course of
action[11] because it is easier. Status quo alternatives often require less mental effort to maintain (Eidelman &
Crandall, 2009).
Irrational Routes to the Status Quo Bias
The irrational maintenance of the status quo bias links and confounds many cognitive biases.
Existence bias
An assumption of longevity and goodness is part of the status quo bias. People treat existence as a prima facie case
for goodness and aesthetic value, and longevity increases this preference.[12] The status quo bias affects people's
preferences; people report preferences for what they are likely, rather than unlikely, to receive. People simply
assume, with little reason or deliberation, the goodness of existing states.[12]
Longevity is a corollary of the existence bias: if existence is good, longer existence should be better. This thinking
resembles quasi-evolutionary notions of survival of the fittest, and also the augmentation principle in attribution
theory.[13]
Inertia is another reason used to explain a bias towards the status quo. Another explanation is fear of regret in
making a wrong decision, e.g. if we choose a partner when we think there could be someone better out there.[14]
Mere exposure
Mere exposure is an explanation for the status quo bias: existing states are encountered more frequently than
non-existent states, and because of this they will be perceived as more true and evaluated more preferably. One way
to increase liking for something is repeated exposure over time.[15]
Loss aversion
Loss aversion also leads to greater regret for action than for inaction;
[16]
more regret is experienced when a decision
changes the status quo than when it maintains it.
[17]
Together these forces provide an advantage for the status quo;
people are motivated to do nothing or to maintain current or previous decisions.
[11]
Change is avoided, and decision
makers stick with what has been done in the past.
Changes from the status quo will typically involve both gains and losses, with the change having good overall
consequences if the gains outweigh the losses. A tendency to overemphasize the avoidance of losses will thus
favor retaining the status quo, resulting in a status quo bias. Even though choosing the status quo may entail
forfeiting certain positive consequences, when these are represented as forfeited "gains" they are psychologically
given less weight than the "losses" that would be incurred if the status quo were changed.
[18]
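The weighting asymmetry described above can be sketched numerically. The loss-aversion coefficient of 2.25 below is an illustrative value drawn from common prospect-theory estimates, not a figure taken from the cited sources:

```python
# Illustrative sketch (not from the cited papers): losses are weighted
# more heavily than equal-sized gains, here by an assumed factor of 2.25.

def subjective_value_of_change(gains, losses, loss_aversion=2.25):
    """Perceived value of abandoning the status quo when losses loom larger."""
    return gains - loss_aversion * losses

# A change with an objective net benefit of +40...
objective_net = 100 - 60
# ...can still feel like a net loss once losses are overweighted:
felt_net = subjective_value_of_change(gains=100, losses=60)
print(objective_net, felt_net)  # 40 -35.0
```

Under this weighting, a decision maker rejects the change and retains the status quo even though the change is objectively beneficial.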
Omission bias
Omission bias may account for some of the findings previously ascribed to status quo bias. Omission bias is
diagnosed when a decision maker prefers a harmful outcome that results from an omission to a less harmful outcome
that results from an action (Ilana Ritov and Jonathan Baron, "Status-Quo and Omission Biases," Journal of Risk and
Uncertainty 5 [1992]: 49–61).
Detection
The Reversal Test: When a proposal to change a certain parameter is thought to have bad overall consequences,
consider a change to the same parameter in the opposite direction. If this is also thought to have bad overall
consequences, then the onus is on those who reach these conclusions to explain why our position cannot be improved
through changes to this parameter. If they are unable to do so, then we have reason to suspect that they suffer from
status quo bias. The rationale of the Reversal Test is: if a continuous parameter admits of a wide range of possible
values, only a tiny subset of which can be local optima, then it is prima facie implausible that the actual value of that
parameter should just happen to be at one of these rare local optima.
[18]
Neural Activity
A study found that erroneous status quo rejections have a greater neural impact than erroneous status quo
acceptances. This asymmetry in the genesis of regret might drive the status quo bias on subsequent decisions.
[19]
A study was done using a visual detection task in which subjects tended to favor the default when making difficult,
but not easy, decisions. This bias was suboptimal in that more errors were made when the default was accepted. A
selective increase in subthalamic nucleus (STN) activity was found when the status quo was rejected in the face of
heightened decision difficulty. Analysis of effective connectivity showed that inferior frontal cortex, a region more
active for difficult decisions, exerted an enhanced modulatory influence on the STN during switches away from the
status quo.
[20]
Research by UCL scientists examining the neural pathways involved in 'status quo bias' in the human brain
found that the more difficult the decision we face, the more likely we are not to act. The study, published in
Proceedings of the National Academy of Sciences (PNAS), looked at the decision-making of participants taking part
in a tennis 'line judgement' game while their brains were scanned using functional MRI (fMRI). The 16 study
participants were asked to look at a cross between two tramlines on a screen while holding down a 'default' key.
They then saw a ball land in the court and had to make a decision as to whether it was in or out. On each trial, the
computer signalled which was the current default option - 'in' or 'out'. The participants continued to hold down the
key to accept the default and had to release it and change to another key to reject the default. The results showed a
consistent bias towards the default, which led to errors. As the task became more difficult, the bias became even
more pronounced. The fMRI scans showed that a region of the brain known as the subthalamic nucleus (STN) was
more active in the cases when the default was rejected. Also, greater flow of information was seen from a separate
region sensitive to difficulty (the prefrontal cortex) to the STN. This indicates that the STN plays a key role in
overcoming status quo bias when the decision is difficult.
[20]
Behavioral Economics and the Default position
Against this background, two behavioral economists devised an opt-out plan to help employees of a particular
company build their retirement savings. In an opt-out plan, the employees are automatically enrolled unless they
explicitly ask to be excluded. They found evidence for status quo bias and other associated effects. They also noted
that changing the default alternatives has, in some instances, been shown to have dramatic effects on people's
choices.
[21]
Conflict
Status quo educational bias can be both a barrier to political progress and a threat to the state's legitimacy.
MacMullen argues that the values of stability, compliance, and patriotism underpin important reasons for status quo
bias that appeal not to the substantive merits of existing institutions but merely to the fact that those institutions are
the status quo.
[22]
Relevant fields
The status quo bias is seen in important real-life decisions; it has been found to be prominent in data on selections of
health care plans and retirement programs.
[2]
Politics
Preference for the status quo represents a core component of conservative ideology. Because this preference is a
significant element of conservative ideology, the bias in its favor plays a role, under certain conditions, in promoting
political conservatism.
[12]
Ethics
Status quo bias may be responsible for much of the opposition to human enhancement in general and to genetic
cognitive enhancement in particular.
[18]
Education
Education can (sometimes unintentionally) encourage children's belief in the substantive merits of a particular
existing law or political institution, where the effect does not derive from an improvement in their ability to think
critically about that law or institution. However, this biasing effect is not automatically illegitimate or
counterproductive: a balance between social inculcation and openness needs to be maintained.
[23]
Within the elementary classroom, read-aloud sessions that exclude ethnically diverse materials create a bias in
favor of the status quo that is harmful to children's education.
[24]
Health
An experiment tested whether status quo bias (bias toward a current medication even when better alternatives are
offered) exists in a stated-choice study among asthma patients who take prescription combination maintenance
medications. The results indicate that status quo bias may exist in stated-choice studies, especially with medications
that patients must take daily, such as asthma maintenance medications. Stated-choice practitioners should include a
current medication in choice surveys to control for this bias.
[25]
Retirement plans
An example of the status quo bias affecting retirement plans comes from a study of the U.S. equity mutual fund
industry, which found that people maintained the plan they had chosen previously, even if it was no longer the
optimal choice.
[26]
References
[1] Kahneman, D.; Knetsch, J. L.; Thaler, R. H. (1991). "Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias". Journal of
Economic Perspectives 5 (1): 193–206.
[2] Samuelson, W.; Zeckhauser, R. (1988). "Status quo bias in decision making". Journal of Risk and Uncertainty 1: 7–59.
[3] Samuelson, William; Richard Zeckhauser (1988). "Status Quo Bias in Decision Making". Journal of Risk and Uncertainty: 7–59.
[4] Hartman, Raymond S.; Chi-Keung Woo (1991). "Consumer Rationality and the Status Quo". Quarterly Journal of Economics 106: 141–162.
[5] Korobkin, R. (1997). "The status quo bias and contract default rules". Cornell Law Review 83: 608–687.
[6] Tversky, A.; Kahneman, D. (1991). "Loss aversion in riskless choice: a reference-dependent model". The Quarterly Journal of Economics
106 (4): 1039–1061.
[7] Haselton; Nettle (2006). Personality and Social Psychology Review 10 (1): 47–66.
[8] Simon, H. A. (March 1956). "Rational Choice and the Structure of the Environment". Psychological Review 63 (2): 129–138.
doi:10.1037/h0042769.
[9] Iyengar, Sheena; Lepper, Mark R. (December 2000). "When choice is demotivating: Can one desire too much of a good thing?". Journal of
Personality and Social Psychology 79 (6): 995–1006.
[10] Baron, Jonathan; Ilana Ritov (2004). "Omission bias, individual differences, and normality". Organizational Behavior and Human Decision
Processes 94: 74–85.
[11] Samuelson, William; Zeckhauser (1988). "Status Quo Bias in Decision Making". Boston University.
[12] Eidelman, Scott; Christian S. Crandall (March 2012). "Bias in Favour of the Status Quo". Social and Personality Psychology Compass 3 (6):
270–281.
[13] Kelley, H. H. (1972). "Attribution in Social Interaction". General Learning Press.
[14] Venkatesh, B. "Benefitting from status quo bias".
[15] Bornstein, R. F. (1989). "Exposure and affect: Overview and meta-analysis of research". Psychological Bulletin 106: 265–289.
[16] Kahneman; Slovic; Tversky (1982). "Judgment Under Uncertainty: Heuristics and Biases". Cambridge University Press.
[17] Inman, J. J.; Zeelenberg (2002). "Regret in repeat versus switch decisions: The attenuating role of decision justifiability". Journal of Consumer
Research 29: 116–128.
[18] Bostrom, Nick; Toby Ord (July 2006). "The Reversal Test: Eliminating Status Quo Bias in Applied Ethics". Ethics 116.
[19] Nicolle; Fleming; Bach; Driver; Dolan (March 2011). "A Regret-Induced Status Quo Bias". The Journal of Neuroscience 31 (9): 3320–3327.
[20] Fleming; Thomas; Dolan (February 2010). "Overcoming status quo bias in the human brain". Proceedings of the National Academy of Sciences.
[21] Thaler, Richard H.; Shlomo Benartzi (2004). "Save More Tomorrow: Using Behavioural Economics to Increase Employee Saving". Journal
of Political Economy 112: 164–187.
[22] MacMullen (July 2011). "On Status-Quo Bias in Civic Education". Journal of Politics 73 (3): 872–886.
[23] MacMullen (July 2011). "On Status Quo Bias in Civic Education". Journal of Politics 73 (3): 872–886.
[24] Gonzalez-Jensen, Margarita; Sadler, Norma (April 1997). "Behind Closed Doors: Status Quo Bias in Read Aloud Selections". Equity &
Excellence in Education 30 (1): 27–31.
[25] Hauber; Mohamed; Meddis; Johnson; Wagner (2008). "Status Quo Bias in Stated Choice Studies: Is it Real?". Health Values: 567–568.
[26] Kempf, Alexander; Stefan Ruenzi (2006). "Status Quo Bias and the Number of Alternatives: An Empirical Illustration from the Mutual
Fund Industry". Journal of Behavioral Finance 7 (4): 204–213.
Further reading
Barry, W. J. (2012). "Challenging the Status Quo Meaning of Educational Quality: Introducing Transformational
Quality (TQ) Theory". Educational Journal of Living Theories 4: 129.
Johnson, E. J.; Hershey, J.; Meszaros, J.; Kunreuther, H. (1993). "Framing, Probability Distortions, and Insurance
Decisions". Journal of Risk and Uncertainty 7: 35–51.
Seiler, Michael J.; Vicky, Traub; Harrison (2008). "Familiarity Bias and the Status Quo Alternative". Journal of
Housing Research 17: 139–154.
Mandler, Michael (June 2004). Welfare economics with status quo bias: a policy paralysis problem and cure.
Royal Holloway College, University of London.
Wittman, Donald (2007). "Is Status Quo Bias Consistent With Downward-Sloping Demand?". Economic Inquiry
46: 243–288.
Kim, H.W. and A. Kankanhalli. (2008). Investigating User Resistance to Information Systems Implementation: A
Status Quo Bias Perspective. "MIS Quarterly".
Stereotype
Police officers buying donuts and coffee, an
example of perceived stereotypical behavior in
North America.
A stereotype is a thought that may be adopted
[1]
about specific types
of individuals or certain ways of doing things, but that belief may or
may not accurately reflect reality.
[2][3]
However, this is only a
fundamental psychological definition of a stereotype.
[3]
Within and
across different psychology disciplines, there are different concepts
and theories of stereotyping that provide their own expanded
definition. Some of these definitions share commonalities, though each
one may also harbor unique aspects that may complement or contradict
the others.
Etymology
The term stereotype derives from the Greek words στερεός (stereos), "firm, solid"
[4]
and τύπος (typos),
"impression,"
[5]
hence "solid impression".
The term comes from the printing trade and was first adopted in 1798 by Firmin Didot to describe a printing plate
that duplicated any typography. The duplicate printing plate, or the stereotype, is used for printing instead of the
original.
The first reference to "stereotype" in its modern use in English, outside of printing, was in 1850, as a noun meaning
"image perpetuated without change."
[6]
But it was not until 1922 that "stereotype" was first used in the modern
psychological sense by American journalist Walter Lippmann in his work Public Opinion.
[7]
Relationship with other types of intergroup attitudes
Stereotypes, prejudice and discrimination are understood as related but different concepts.
[8][9][10][11]
Stereotypes are
regarded as the most cognitive component, prejudice as the affective and discrimination as the behavioral component
of prejudicial reactions.
[8][9]
In this tripartite view of intergroup attitudes, stereotypes reflect expectations and beliefs
about the characteristics of members of groups perceived as different from one's own, prejudice represents the
emotional response, and discrimination refers to actions.
[8][9]
Although related, the three concepts can exist independently of each other.
[12][9]
According to Daniel Katz and
Kenneth Braly, stereotyping leads to racial prejudice when people emotionally react to the name of a group, ascribe
characteristics to members of that group, and then evaluate those characteristics.
[10]
Possible prejudicial effects of stereotypes
[3]
are:
Justification of ill-founded prejudices or ignorance
Unwillingness to rethink one's attitudes and behavior towards stereotyped group
Preventing some people of stereotyped groups from entering or succeeding in activities or fields
[13]
Content
Stereotype content model, adapted from Fiske et al. (2002): Four types of
stereotypes resulting from combinations of perceived warmth and competence.
Stereotype content refers to the attributes
that people think characterize a group.
Studies of stereotype content examine what
people think of others rather than the
reasons and mechanisms involved in
stereotyping.
[14]
Early theories of stereotype content
proposed by social psychologists like
Gordon Allport assumed that stereotypes of
outgroups reflected uniform
antipathy.
[15][16]
Katz and Braly, for
instance, argued in their classic 1933 study
that ethnic stereotypes were uniformly
negative.
[14]
By contrast, a newer model of stereotype
content theorizes that stereotypes are frequently ambivalent and vary along two dimensions: warmth and
competence. Warmth and competence are respectively predicted by lack of competition and status; groups that do
not compete with the ingroup for the same resources (e.g., college space) are perceived as warm while high-status
(e.g., economically or educationally successful) groups are considered competent. The groups within each of the four
combinations of high and low levels of warmth and competence elicit distinct emotions.
[17]
The model explains the
phenomenon that some outgroups are admired but disliked while others are liked but disrespected. It was empirically
tested on a variety of national and international samples and was found to reliably predict stereotype content.
[15][18]
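As a rough illustration, the four warmth-by-competence combinations and the emotions they are said to elicit can be written as a simple lookup. The emotion labels below follow common summaries of Fiske et al.'s model; the example groups in the comments are assumptions, not claims from the text:

```python
# Minimal sketch of the stereotype content model's four quadrants.
# Emotion labels follow common summaries of Fiske et al. (2002);
# treat the mapping and comments as illustrative, not definitive.

SCM_EMOTIONS = {
    ("high", "high"): "admiration",  # warm and competent (e.g., the ingroup)
    ("high", "low"):  "pity",        # liked but disrespected
    ("low",  "high"): "envy",        # admired for competence but disliked
    ("low",  "low"):  "contempt",    # neither warm nor competent
}

def elicited_emotion(warmth: str, competence: str) -> str:
    """Return the emotion a group's perceived warmth/competence elicits."""
    return SCM_EMOTIONS[(warmth, competence)]

print(elicited_emotion("low", "high"))   # envy
print(elicited_emotion("high", "low"))   # pity
```

This mirrors the observation above that some outgroups are admired but disliked (envy) while others are liked but disrespected (pity).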
Functions
Early studies suggested that stereotypes were only used by rigid, repressed, and authoritarian people. This idea has
been overturned by more recent studies that suggest stereotypes are commonplace. Now, stereotypes are said
to be collective group beliefs, meaning that people who belong to the same social group share the same set of
stereotypes.
[12]
Relationship between cognitive and social functions
Stereotyping can serve cognitive functions on an interpersonal level, and social functions on an intergroup
level.
[12][3]
For stereotyping to function on an intergroup level (see social identity approaches: social identity theory
and self-categorization theory), an individual must see themselves as part of a group and being part of that group
must also be salient for the individual.
[12]
Craig McGarty, Russell Spears, and Vincent Y. Yzerbyt (2002) argued that the cognitive functions of stereotyping
are best understood in relation to its social functions, and vice versa.
[19]
Cognitive functions
Stereotypes can help make sense of the world. They are a form of categorization that helps to simplify and
systematize information so that it is easier to identify, recall, predict, and react to.
[12]
Stereotypes are categories of objects or people. Between stereotypes, objects or people are as different from each other
as possible.
[1]
Within stereotypes, objects or people are as similar to each other as possible.
[1]
As to why people find it easier to understand categorized information, Gordon Allport has suggested possible
answers in his 1954 publication:
[20]
First, people can consult the category of something for ways to respond to that
thing. Second, things are more specific when they are in a category than when they are not, because categorization
accentuates properties that are shared by all members of a group. Third, people can readily describe things in a
category, because, fourth and related, things in the same category have distinct characteristics. Finally, people can
take for granted the characteristics of a particular category because the category itself may be an arbitrary grouping.
Moreover, stereotypes function as time- and energy-savers which allow people to act more efficiently.
[1]
David
Hamilton's 1981 publication gave rise to the view that stereotypes are people's biased perceptions of their social
contexts.
[1]
In this view, people use stereotypes as shortcuts to make sense of their social contexts, and this makes
people's task of understanding their world less cognitively demanding.
[1]
Social functions: social categorization
When stereotypes are used for explaining social events, for justifying activities of one's own group (ingroup) to
another group (outgroup), or for differentiating the ingroup as positively distinct from outgroups, the overarching
purpose of stereotyping is for people to put their collective self (their ingroup membership) in a positive light.
[21]
Explanation purposes
Stereotypes can be used to explain social events.
[12][21]
Henri Tajfel
[12]
gave the following example: Some people
found that the anti-Semitic contents of The Protocols of the Elders of Zion only made sense if Jews had certain
characteristics. According to Tajfel,
[12]
Jews were stereotyped as being evil, as yearning for world domination, etc.,
because these stereotypes could explain the anti-Semitic facts as presented in The Protocols of the Elders of Zion.
Justification purposes
People create stereotypes of an outgroup to justify the actions that their ingroup has or plans to commit towards that
outgroup.
[12][20][21]
For example, according to Tajfel,
[12]
Europeans stereotyped Turkish, Indian, and Chinese people
as being incapable of achieving financial advances without European help. This stereotype was used to justify
European colonialism in Turkey, India, and China.
Intergroup differentiation
An assumption is that people want their ingroup to have a positive image relative to outgroups, and so people want to
differentiate their ingroup from relevant outgroups in a desirable way.
[12]
If an outgroup does not affect the ingroups
image, then from an image preservation point of view, there is no point for the ingroup to be positively distinct from
that outgroup.
[12]
People can actively create certain images for relevant outgroups by stereotyping. People do so when they see that
their ingroup is no longer as clearly and/or as positively differentiated from relevant outgroups, and they want to
restore the intergroup differentiation to a state that favours the ingroup.
[12][21]
Social functions: self categorisation
People will change their stereotype of their ingroups and outgroups to suit the context they are in.
[21][3]
People are
likely to self-stereotype their ingroup as homogenous in an intergroup context, and they are less likely to do so in an
intragroup context where the need to emphasise their group membership is not as great.
[21]
Stereotypes can
emphasise a person's group membership in two steps: First, stereotypes emphasise the person's similarities with
ingroup members on relevant dimensions, and also the person's differences from outgroup members on relevant
dimensions.
[21]
Second, the more the stereotypes emphasise within-group similarities and between-group
differences, the more salient the person's social identity will become, and the more depersonalised that person will
be.
[21]
A depersonalised person will abandon his/her individual differences and embrace the stereotypes associated
with his/her relevant group membership.
[21]
Social functions: social influence and consensus
Stereotypes are an indicator of ingroup consensus.
[21]
When there is intragroup disagreement over stereotypes of
the ingroup and/or outgroups, ingroup members will take collective action to prevent other ingroup members from
diverging from each other.
[21]
John C. Turner proposed in 1987
[21]
that if ingroup members disagree on an outgroup stereotype, then one of three
possible collective actions will follow: First, ingroup members may negotiate with each other and conclude that they
have different outgroup stereotypes because they are stereotyping different subgroups of an outgroup (e.g., Russian
gymnasts versus Russian boxers). Second, ingroup members may negotiate with each other, but conclude that they
are disagreeing because of categorical differences amongst themselves. Accordingly, in this context, it is better to
categorise ingroup members under different categories (e.g., Democrats versus Republicans) than under a shared
category (e.g., American). Finally, ingroup members may influence each other to arrive at a common outgroup
stereotype.
Formation
Different disciplines give different accounts of how stereotypes develop: Psychologists may focus on an individual's
experience with groups, patterns of communication about those groups, and intergroup conflict. Sociologists, by
contrast, may focus on the relations among different groups in a social structure, suggesting that stereotypes are the
result of conflict, poor parenting, and inadequate mental and emotional development.
Illusory correlation
Research has shown that stereotypes can develop based on a cognitive mechanism known as illusory correlation, an
erroneous inference about the relationship between two events.
[1][22][23]
If two events which are statistically
infrequent co-occur, observers overestimate the frequency of co-occurrence of these events. The underlying reason is
that rare, infrequent events are distinctive and salient and, when paired, become even more so. The heightened
salience results in more attention and more effective encoding, which strengthens the belief that the events are
correlated.
[24][25]
In the intergroup context, illusory correlations lead people to misattribute rare behaviors or traits at higher rates to
minority group members than to majority groups, even when both display the same proportion of the behaviors or
traits. Black people, for instance, are a minority group in the United States and interaction with blacks is a relatively
infrequent event for an average white American. Similarly, undesirable behavior (e.g. crime) is statistically less
frequent than desirable behavior. Since both events, "blackness" and "undesirable behavior", are distinctive in the
sense that they are infrequent, the combination of the two leads observers to overestimate the rate of
co-occurrence.
[24]
Similarly, in workplaces where women are underrepresented and negative behaviors such as
errors occur less frequently than positive behaviors, women become more strongly associated with mistakes than
men.
[26]
In a landmark study, David Hamilton and Richard Gifford (1976) examined the role of illusory correlation in
stereotype formation. Subjects were instructed to read descriptions of behaviors performed by members of groups A
and B. Negative behaviors outnumbered positive actions and group B was smaller than group A, making negative
behaviors and membership in group B relatively infrequent and distinctive. Participants were then asked who had
performed a set of actions: a person of group A or group B. Results showed that subjects overestimated the
frequency with which both distinctive events, membership in group B and negative behavior, co-occurred, and
evaluated group B more negatively. This was despite the fact that the proportion of positive to negative behaviors was
equivalent for both groups and that there was no actual correlation between group membership and behaviors.
[24]
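The "no actual correlation" point can be checked with simple arithmetic. The behavior counts below follow the numbers commonly reported for the 1976 design (an assumption; the key property is only that group A is twice the size of group B while the desirable-to-undesirable ratio is identical in both):

```python
from fractions import Fraction

# Behavior counts as commonly reported for Hamilton and Gifford's 1976
# design (assumed here): group A is twice the size of group B, and
# desirable behaviors outnumber undesirable ones in the same ratio.
group_a = {"desirable": 18, "undesirable": 8}   # majority group
group_b = {"desirable": 9,  "undesirable": 4}   # minority group

def undesirable_rate(group):
    """Exact fraction of a group's behaviors that are undesirable."""
    return Fraction(group["undesirable"], sum(group.values()))

# The objective rates are identical (8/26 == 4/13), so any harsher
# judgement of the smaller group B is an illusory correlation.
print(undesirable_rate(group_a) == undesirable_rate(group_b))  # True
```

The distinctive pairing (rare group, rare behavior) drives the overestimation even though the underlying rates are equal.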
Although Hamilton and Gifford found a similar effect for positive behaviors as the infrequent events, a meta-analytic
review of studies showed that illusory correlation effects are stronger when the infrequent, distinctive information is
negative.
[22]
Hamilton and Gifford's distinctiveness-based explanation of stereotype formation was subsequently extended.
[25]
A
1994 study by McConnell, Sherman, and Hamilton found that people formed stereotypes based on information that
was not distinctive at the time of presentation, but was considered distinctive at the time of judgement.
[27]
Once a
person judges non-distinctive information in memory to be distinctive, that information is re-encoded and
re-represented as if it had been distinctive when it was first processed.
[27]
Common environment
One explanation for why stereotypes are shared is that they are the result of a common environment that stimulates
people to react in the same way.
[1]
The problem with the common environment explanation in general is that it does not explain how shared
stereotypes can occur without direct stimuli.
[1]
Research since the 1930s has suggested that people are highly similar
to one another in how they describe different racial and national groups, even when they have no personal
experience with the groups they are describing.
[28]
Socialisation and upbringing
Another explanation says that people are socialised to adopt the same stereotypes.
[1]
Some psychologists believe that
although stereotypes can be absorbed at any age, stereotypes are usually acquired in early childhood under the
influence of parents, teachers, peers, and the media.
If stereotypes are defined by social values, then stereotypes will change only as those social values change.
[1]
The
suggestion that stereotype content depends on social values reflects Walter Lippmann's argument in his 1922 publication
that stereotypes are rigid because they cannot be changed at people's will.
[10]
Studies emerging since the 1940s refuted the suggestion that stereotype contents cannot be changed at people's will.
Those studies suggested that one group's stereotype of another group will become more or less positive depending on
whether their intergroup relationship has improved or degraded.
[10][29][30]
Intergroup events (e.g., World War Two, the
Persian Gulf conflict) often changed intergroup relationships. For example, after WWII, Black American students
held a more negative stereotype of people from countries that were America's WWII enemies.
[10]
If there are no
changes to an intergroup relationship, then relevant stereotypes will not change.
[11]
Intergroup relations
According to a third explanation, shared stereotypes are neither caused by the coincidence of common stimuli, nor
by socialisation. It explains that stereotypes are shared because group members are motivated to behave in certain
ways, and stereotypes reflect those behaviours.
[1]
It is important to note from this explanation that stereotypes are the
consequence, not the cause, of intergroup relations. This explanation assumes that when it is important for people to
acknowledge both their ingroup and outgroup, then those people will aim to emphasise their difference from
outgroup members, and their similarity to ingroup members.
[1]
Activation
An initial study of stereotype activation was conducted by Patricia Devine in 1989. She suggested that stereotypes
are automatically activated in the presence of a member (or some symbolic equivalent) of a stereotyped group and
that the unintentional activation of the stereotype is equally strong for high- and low-prejudice persons. To test her
hypothesis, Devine used a priming paradigm: words related to the cultural stereotype of blacks were presented
rapidly in subjects' parafoveal visual field (i.e., out of their direct line of vision) so that the participants could not
consciously identify the primes. Some participants were presented with a high proportion (80%) of words related to
the racial stereotype, and others with a lower proportion (20%). Then, during an ostensibly unrelated
impression-formation task, subjects read a paragraph describing a race-unspecified target person's ambiguously
hostile behaviors and rated the target person on several trait scales. Ambiguously hostile behaviors were examined
because pretesting had revealed that hostility was an important component of the cultural stereotype of blacks.
Results showed that subjects who received the high proportion of racial primes rated the target person in the story as
significantly more hostile than participants who were presented with the lower proportion of ethnic primes. This
effect held true for both high- and low-prejudice subjects (as measured by the Modern Racism Scale). Thus, the
racial stereotype was activated even for low-prejudice individuals who did not personally endorse it.
[31][32][33]
Subsequent research challenged Devine's findings.
[34]
Lepore and Brown (1997), in particular, noted that the primes
used by Devine were both neutral category labels (e.g., "Blacks") and stereotypic attributes (e.g., "lazy"). They
argued that if only the neutral category labels were primed, people high and low in prejudice would respond
differently. In a design similar to Devine's, Lepore and Brown primed the category of African-Americans using
labels such as "blacks" and "West Indians" and then assessed the differential activation of the associated stereotype
in the subsequent impression-formation task. They found that high-prejudice participants increased their ratings of
the target person on the negative stereotypic dimensions and decreased them on the positive dimension whereas
low-prejudice subjects tended in the opposite direction. The results suggest that the level of prejudice and stereotype
endorsement affects people's judgements when the category and not the stereotype per se is primed.
[35]
Accuracy
A magazine feature from Beauty Parade from March 1952 stereotyping women
drivers. It features Bettie Page as the model.
Stereotypes can be efficient shortcuts and
sense-making tools. They can, however,
keep people from processing new or
unexpected information about each
individual, thus biasing the impression
formation process.
[1]
Early researchers
believed that stereotypes were inaccurate
representations of reality.
[28]
A series of
pioneering studies which appeared in the
1930s found no empirical support for widely
held racial stereotypes.
[10]
By the
mid-1950s, Gordon Allport wrote that "it is
possible for a stereotype to grow in defiance
of all evidence".
[20]
Research on the role of illusory correlations
in the formation of stereotypes suggests that stereotypes can develop because of incorrect inferences about the
relationship between two events (e.g., membership in a social group and bad or good attributes). This means that at
least some stereotypes are inaccurate.
[24][27][22]
There is empirical social science research showing that stereotypes are often accurate.[36] Jussim et al. reviewed four studies concerning racial stereotypes and seven studies of gender stereotypes about demographic characteristics, academic achievement, personality and behavior. Based on that review, the authors argued that some aspects of ethnic and gender stereotypes are accurate, while stereotypes concerning political affiliation and nationality are much less accurate.[37] A study by Terracciano et al. also found that stereotypic beliefs about nationality do not reflect the actual personality traits of people from different cultures.[38]
Effects
Attributional ambiguity
Attributional ambiguity refers to the uncertainty that members of stereotyped groups experience in interpreting the causes of others' behavior toward them. Stereotyped individuals who receive negative feedback can attribute it either to personal shortcomings, such as lack of ability or poor effort, or to the evaluator's stereotypes and prejudice toward their social group. Alternatively, positive feedback can either be attributed to personal merit or discounted as a form of sympathy or pity.[39][40][41]
Crocker et al. (1991) showed that when black participants were evaluated by a white person who was aware of their race, they mistrusted the feedback, attributing negative feedback to the evaluator's stereotypes and positive feedback to the evaluator's desire to appear unbiased. When the black participants' race was unknown to the evaluator, they were more accepting of the feedback.[42]
Attributional ambiguity has been shown to impact a person's self-esteem. When they receive positive evaluations,
stereotyped individuals are uncertain of whether they really deserved their success and, consequently, they find it
difficult to take credit for their achievements. In the case of negative feedback, ambiguity has been shown to have a
protective effect on self-esteem, as it allows people to assign blame to external causes. Some studies, however, have found that this effect holds only when stereotyped individuals can be absolutely certain that their negative outcomes are due to the evaluator's prejudice. If any room for uncertainty remains, stereotyped individuals tend to blame themselves.[40]
Attributional ambiguity can also make it difficult to assess one's skills because performance-related evaluations are
mistrusted or discounted. Moreover, it can lead to the belief that one's efforts are not directly linked to the outcomes,
thereby depressing one's motivation to succeed.[39]
Stereotype threat
The effect of stereotype threat (ST) on math test scores for girls and boys. Data from Osborne (2007).[43]
Stereotype threat occurs when people are aware of a negative stereotype about their social group and experience anxiety or concern that they might confirm the stereotype.[44] Stereotype threat has been shown to undermine performance in a variety of domains.[45][46]
Claude M. Steele and Joshua Aronson conducted the
first experiments showing that stereotype threat can
depress intellectual performance on standardized tests.
In one study, they found that black college students
performed worse than white students on a verbal test
when the task was framed as a measure of intelligence.
When it was not presented in that manner, the
performance gap narrowed. Subsequent experiments
showed that framing the test as diagnostic of intellectual ability made black students more aware of negative stereotypes about their group, which in turn impaired their performance.[47]
Stereotype threat effects have been demonstrated for an array of social groups in many different arenas, including not only academics but also sports,[48] chess[49] and business.[50]
Self-fulfilling prophecy
Stereotypes lead people to expect certain actions from members of social groups. These stereotype-based
expectations may lead to self-fulfilling prophecies, in which one's inaccurate expectations about a person's behavior,
through social interaction, prompt that person to act in stereotype-consistent ways, thus confirming one's erroneous
expectations and validating the stereotype.[51][52]
Word, Zanna and Cooper (1974) demonstrated the effects of stereotypes in the context of a job interview. White
participants interviewed black and white subjects who, prior to the experiments, had been trained to act in a
standardized manner. Analysis of the videotaped interviews showed that black job applicants were treated
differently: They received shorter amounts of interview time and less eye contact; interviewers made more speech
errors (e.g., stutters, sentence incompletions, incoherent sounds) and physically distanced themselves from black
applicants. In a second experiment, trained interviewers were instructed to treat applicants, all of whom were white,
like the whites or blacks had been treated in the first experiment. As a result, applicants treated like the blacks of the
first experiment behaved in a more nervous manner and received more negative performance ratings than
interviewees receiving the treatment previously afforded to whites.[53]
A 1977 study by Snyder, Tanke and Berscheid found a similar pattern in social interactions between men and
women. Male undergraduate students were asked to talk to female undergraduates, whom they believed to be
physically attractive or unattractive, on the phone. The conversations were taped and analysis showed that men who
thought that they were talking to an attractive woman communicated in a more positive and friendlier manner than
men who believed that they were talking to unattractive women. This altered the women's behavior: Female subjects who, unbeknownst to them, were perceived to be physically attractive behaved in a friendly, likeable, and sociable manner in comparison with subjects who were regarded as unattractive.[54]
Discrimination
Because stereotypes simplify and justify social reality, they have potentially powerful effects on how people perceive and treat one another.[55] As a result, stereotypes can lead to discrimination in labor markets and other domains.[56] For example, Tilcsik (2011) found that employers who seek job applicants with stereotypically male heterosexual traits are particularly likely to discriminate against gay men, suggesting that discrimination on the basis of sexual orientation is partly rooted in specific stereotypes and that these stereotypes loom large in many labor markets.[13] Agerström and Rooth (2011) showed that automatic obesity stereotypes captured by the Implicit Association Test can predict real hiring discrimination against the obese.[57] Similarly, experiments suggest that gender stereotypes play an important role in judgments that affect hiring decisions.[58][59]
Self-stereotyping
Stereotypes can affect self-evaluations and lead to self-stereotyping.[60][3] For instance, Correll (2001, 2004) found that specific stereotypes (e.g., the stereotype that women have lower mathematical ability) affect women's and men's evaluations of their own abilities (e.g., in math and science), such that men assess their task ability higher than women performing at the same level.[61][62] Similarly, a study by Sinclair et al. (2006) showed that Asian American women rated their math ability more favorably when their ethnicity and the relevant stereotype that Asian Americans excel in math were made salient. In contrast, they rated their math ability less favorably when their gender and the corresponding stereotype of women's inferior math skills were made salient. Sinclair et al. found, however, that the effect of stereotypes on self-evaluations is mediated by the degree to which close people in someone's life endorse these stereotypes. People's self-stereotyping can increase or decrease depending on whether close others view them in a stereotype-consistent or stereotype-inconsistent manner.[63]
Stereotyping can also play a central role in depression when people hold negative self-stereotypes about themselves, according to Cox, Abramson, Devine, and Hollon (2012).[3] This depression caused by prejudice (i.e., "deprejudice") can be related to a group membership (e.g., Me–Gay–Bad) or not (e.g., Me–Bad). If someone holds prejudicial beliefs about a stigmatized group and then becomes a member of that group, they may internalize their prejudice and develop depression. People may also show prejudice internalization through self-stereotyping because of negative childhood experiences such as verbal and physical abuse.
Role in art and culture
American political cartoon titled The Usual Irish Way of Doing Things, depicting a drunken Irishman lighting a powder keg and swinging a bottle. Published in Harper's Weekly, 1871.
Stereotypes are common in various cultural media, where they take the form of dramatic stock characters. Such characters appear in the works of the playwrights Bertolt Brecht, Dario Fo, and Jacques Lecoq, who characterize their actors as stereotypes for theatrical effect; the same is common in commedia dell'arte. The instantly recognizable nature of stereotypes means that they are effective in advertising and situation comedy. These stereotypes change over time, and in modern times only a few of the stereotyped characters shown in John Bunyan's The Pilgrim's Progress would be recognizable.
Media stereotypes of women first emerged in the early 20th
century. Various stereotypic depictions or "types" of women
appeared in magazines, including Victorian ideals of femininity,
the New Woman, the Gibson Girl, the Femme fatale, and the
Flapper.[64] More recently, artists such as Anne Taintor and Matthew Weiner (the producer of Mad Men) have used vintage images or ideas to insert their own commentary on stereotypes of specific eras. Weiner's character Peggy Olson continually battles gender stereotypes throughout the series, excelling in a workplace dominated by men.
Some contemporary studies indicate that racial, ethnic and cultural stereotypes are still widespread in Hollywood
blockbuster movies.[65] Portrayals of Latin Americans in film and print media are restricted to a narrow set of characters. Latin Americans are largely depicted as sexualized figures such as the Latino macho or the Latina vixen, as gang members, as (illegal) immigrants, or as entertainers. By comparison, they are rarely portrayed as working professionals, business leaders or politicians.[66]
In literature and art, stereotypes are clichéd or predictable characters or situations. Throughout history, storytellers have drawn on stereotypical characters and situations in order to connect the audience with new tales immediately. Sometimes such stereotypes can be sophisticated, such as Shakespeare's Shylock in The Merchant of Venice. Arguably, a stereotype that becomes complex and sophisticated ceases to be a stereotype per se by its unique characterization. Thus, while Shylock remains politically unstable in being a stereotypical Jew, the subject of prejudicial derision in Shakespeare's era, his many other detailed features raise him above a simple stereotype and into a unique character worthy of modern performance. Simply because one feature of a character can be categorized as typical does not make the entire character a stereotype.
Despite the proximity of their etymological roots, cliché and stereotype are not used synonymously in cultural spheres. For example, a cliché is a strong criticism in narratology, where genre and categorization automatically associate a story with its recognizable group. Labeling a situation or character in a story as typical suggests it is fitting for its genre or category, whereas declaring that a storyteller has relied on cliché is to pejoratively observe a simplicity and lack of originality in the tale. To criticize Ian Fleming for a stereotypically unlikely escape for James Bond would be understood by the reader or listener, but the escape would be more appropriately criticized as a cliché in that it is overused and reproduced. Narrative genre relies heavily on typical features to remain recognizable and to generate meaning for the reader or viewer.
References
[1] McGarty, Craig; Yzerbyt, Vincent Y.; Spears, Russell (2002). "Social, cultural and cognitive factors in stereotype formation" (http://catdir.loc.gov/catdir/samples/cam033/2002073438.pdf). Stereotypes as explanations: The formation of meaningful beliefs about social groups. Cambridge: Cambridge University Press. pp. 1–15. ISBN 978-0-521-80047-1.
[2] Judd, Charles M.; Park, Bernadette (1993). "Definition and assessment of accuracy in social stereotypes". Psychological Review 100 (1): 109–128. doi:10.1037/0033-295X.100.1.109.
[3] Cox, William T. L.; Abramson, Lyn Y.; Devine, Patricia G.; Hollon, Steven D. (2012). "Stereotypes, Prejudice, and Depression: The Integrated Perspective" (http://www.archpsychological.com/blog/wp-content/uploads/2012/09/deprejudice-txng-dep-n-prejudice-w-tx-for-other.pdf). Perspectives on Psychological Science 7 (5): 427–449. doi:10.1177/1745691612455204.
[4] στερεός (http://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1999.04.0057:entry=stereo/s), Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Library.
[5] τύπος (http://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1999.04.0057:entry=tu/pos), Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Library.
[6] Online Etymology Dictionary (http://www.etymonline.com/index.php?term=stereotype).
[7] Kleg, Milton (1993). Hate Prejudice and Racism (http://books.google.com/books?id=yKrrSa7WqNwC&pg=PA135). Albany: State University of New York Press. pp. 135–137. ISBN 978-0-585-05491-9.
[8] Fiske, Susan T. (1998). "Stereotyping, Prejudice, and Discrimination" (http://books.google.com/books?id=w27pSuHLnLYC&pg=PA357). In Gilbert, Daniel T.; Fiske, Susan T.; Lindzey, Gardner. The Handbook of Social Psychology. Volume Two (4th ed.). Boston, Mass.: McGraw-Hill. p. 357. ISBN 978-0-19-521376-8.
[9] Denmark, Florence L. (2010). "Prejudice and Discrimination" (http://books.google.com/books?id=hhGdag3Wf-YC&pg=PA1276). In Weiner, Irving B.; Craighead, W. Edward. The Corsini Encyclopedia of Psychology. Volume Three (4th ed.). Hoboken, N.J.: John Wiley. p. 1277. ISBN 978-0-470-47921-6.
[10] Katz, Daniel; Braly, Kenneth W. (1935). "Racial prejudice and racial stereotypes". The Journal of Abnormal and Social Psychology (American Psychological Association) 30 (2): 175–193. doi:10.1037/h0059800.
[11] Oakes, P. J.; Haslam, S. A.; Turner, J. C. (1994). Stereotyping and social reality. Oxford: Blackwell.
[12] Tajfel, Henri (1981). "Social stereotypes and social groups". In Turner, John C.; Giles, Howard. Intergroup behaviour. Oxford: Blackwell. pp. 144–167. ISBN 978-0-631-11711-7.
[13] Tilcsik, András (2011). "Pride and Prejudice: Employment Discrimination against Openly Gay Men in the United States". American Journal of Sociology 117 (2): 586–626. doi:10.1086/661653.
[14] Operario, Don; Fiske, Susan T. (2003). "Stereotypes: Content, Structures, Processes, and Context" (http://books.google.com/books?id=Wfx55Z-Dw10C&pg=PA22). In Brown, Rupert; Gaertner, Samuel L. Blackwell Handbook of Social Psychology: Intergroup Processes. Malden, MA: Blackwell. pp. 22–44. ISBN 978-1-4051-0654-2.
[15] Fiske, Susan T.; Cuddy, Amy J. C.; Glick, Peter; Xu, Jun (2002). "A Model of (Often Mixed) Stereotype Content: Competence and Warmth Respectively Follow From Perceived Status and Competition" (http://www.cos.gatech.edu/facultyres/Diversity_Studies/Fiske_StereotypeContent.pdf). Journal of Personality and Social Psychology (American Psychological Association) 82 (6): 878–902. doi:10.1037//0022-3514.82.6.878.
[16] Cuddy, Amy J. C.; Fiske, Susan T. (2002). "Doddering But Dear: Process, Content, and Function in Stereotyping of Older Persons" (http://books.google.com/books?id=UvxEoFQ0LYwC&pg=PA7). In Nelson, Todd D. Ageism: Stereotyping and Prejudice against Older Persons. Cambridge, Mass.: MIT Press. pp. 7–8. ISBN 978-0-262-14077-5.
[17] Dovidio, John F.; Gaertner, Samuel L. (2010). "Intergroup Bias" (http://books.google.com/books?id=Pye5IkCFgRYC&pg=PA1085&lpg=PA1084). In Fiske, Susan T.; Gilbert, Daniel T.; Lindzey, Gardner. Handbook of Social Psychology. Volume Two (5th ed.). Hoboken, N.J.: John Wiley. p. 1085. ISBN 978-0-470-13747-5.
[18] Cuddy, Amy J. C.; et al. (2009). "Stereotype content model across cultures: Towards universal similarities and some differences" (http://www.people.hbs.edu/acuddy/2009, cuddy et al., BJSP.pdf). British Journal of Social Psychology (British Psychological Society) 48 (1): 1–33. doi:10.1348/014466608X314935.
[19] McGarty, Craig; Spears, Russell; Yzerbyt, Vincent Y. (2002). "Conclusion: stereotypes are selective, variable and contested explanations". Stereotypes as explanations: The formation of meaningful beliefs about social groups. Cambridge: Cambridge University Press. pp. 186–199. ISBN 978-0-521-80047-1.
[20] Allport, Gordon W. (1954). The Nature of Prejudice. Cambridge, MA: Addison-Wesley. p. 189. ISBN 978-0-201-00175-4.
[21] Haslam, S. A.; Turner, J. C.; Oakes, P. J.; Reynolds, K. J.; Doosje, B. (2002). "From personal pictures in the head to collective tools in the world: how shared stereotypes allow groups to represent and change social reality". In McGarty, C.; Yzerbyt, V. Y.; Spears, R. (eds.). Stereotypes as explanations: The formation of meaningful beliefs about social groups. Cambridge: Cambridge University Press. pp. 157–185.
[22] Mullen, Brian; Johnson, Craig (1990). "Distinctiveness-based illusory correlations and stereotyping: A meta-analytic integration". British Journal of Social Psychology (Wiley-Blackwell on behalf of the British Psychological Society) 29 (1): 11–28. doi:10.1111/j.2044-8309.1990.tb00883.x.
[23] Meiser, Thorsten (2006). "Contingency Learning and Biased Group Impressions" (http://books.google.com/books?id=RMZL_2H8A4kC&pg=PA183). In Fiedler, Klaus; Juslin, Peter. Information Sampling and Adaptive Cognition. Cambridge: Cambridge University Press. pp. 183–209. ISBN 978-0-521-83159-8.
[24] Hamilton, David L.; Gifford, Robert K. (1976). "Illusory correlation in interpersonal perception: A cognitive basis of stereotypic judgments". Journal of Experimental Social Psychology (Elsevier) 12 (4): 392–407. doi:10.1016/S0022-1031(76)80006-6.
[25] Berndsen, Mariëtte; Spears, Russell; van der Pligt, Joop; McGarty, Craig (2002). "Illusory correlation and stereotype formation: making sense of group differences and cognitive biases" (http://books.google.fr/books?id=dkn8dceHRg8C&pg=PA90). In McGarty, Craig; Yzerbyt, Vincent Y.; Spears, Russell. Stereotypes as explanations: The formation of meaningful beliefs about social groups. Cambridge: Cambridge University Press. pp. 90–110. ISBN 978-0-521-80047-1.
[26] Moskowitz, Gordon B. (2005). Social Cognition: Understanding Self and Others (http://books.google.com/books?id=_-NLW8Ynvp8C&pg=PA182). New York: Guilford Press. p. 182. ISBN 978-1-59385-085-2.
[27] McConnell, Allen R.; Sherman, Steven J.; Hamilton, David L. (1994). "Illusory correlation in the perception of groups: an extension of the distinctiveness-based account" (http://allenmcconnell.net/pdfs/edbe-JPSP-1994.pdf). Journal of Personality and Social Psychology 67 (3): 414–429. doi:10.1037/0022-3514.67.3.414.
[28] Katz, Daniel; Braly, Kenneth (1933). "Racial stereotypes of one hundred college students". The Journal of Abnormal and Social Psychology 28 (3): 280–290. doi:10.1037/h0074049.
[29] Meenes, Max (1943). "A Comparison of Racial Stereotypes of 1935 and 1942". Journal of Social Psychology 17 (2): 327–336. doi:10.1080/00224545.1943.9712287.
[30] Haslam, S. Alexander; Turner, John C.; Oakes, Penelope J.; McGarty, Craig; Hayes, Brett K. (1992). "Context-dependent variation in social stereotyping 1: The effects of intergroup relations as mediated by social change and frame of reference". European Journal of Social Psychology 22 (1): 3–20. doi:10.1002/ejsp.2420220104.
[31] Devine, Patricia G. (1989). "Stereotypes and Prejudice: Their Automatic and Controlled Components" (http://faculty.washington.edu/donnaw/Devine 1989.pdf). Journal of Personality and Social Psychology 56 (1): 5–18. doi:10.1037/0022-3514.56.1.5.
[32] Devine, Patricia G.; Monteith, Margo J. (1999). "Automaticity and Control in Stereotyping" (http://books.google.com/books?id=5X_auIBx99EC&pg=PA341). In Chaiken, Shelly; Trope, Yaacov. Dual-Process Theories in Social Psychology. New York: Guilford Press. pp. 341–342. ISBN 978-1-57230-421-5.
[33] Bargh, John A. (1994). "The Four Horsemen of Automaticity: Awareness, Intention, Efficiency, Control in Social Cognition" (http://books.google.com/books?id=5ncW0DyNqVwC&pg=PA21). In Wyer, Robert S.; Srull, Thomas K. Handbook of Social Cognition. Volume Two (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum. p. 21. ISBN 978-0-8058-1056-1.
[34] Brown, Rupert (2010). Prejudice: Its Social Psychology (http://books.google.com/books?id=PygYKbRoZjcC&pg=PA88) (2nd ed.). Oxford: Wiley-Blackwell. p. 88. ISBN 978-1-4051-1306-9.
[35] Lepore, Lorella; Brown, Rupert (1997). "Category and Stereotype Activation: Is Prejudice Inevitable?" (http://www.atkinson.yorku.ca/~jsteele/PDF/Optional Readings/Lepore_Brown_JPSP_1997.pdf). Journal of Personality and Social Psychology 72 (2): 275–287. doi:10.1037/0022-3514.72.2.275.
[36] Lee, Yueh-Ting; Jussim, Lee J.; McCauley, Clark R., eds. (September 1995). Stereotype Accuracy: Toward Appreciating Group Differences. American Psychological Association. ISBN 978-1-55798-307-7.
[37] Jussim, Lee; Cain, Thomas R.; Crawford, Jarret T.; Harber, Kent; Cohen, Florette (2009). "The unbearable accuracy of stereotypes". In Nelson, Todd D. Handbook of prejudice, stereotyping, and discrimination. New York: Psychology Press. pp. 199–227. ISBN 978-0-8058-5952-2.
[38] Terracciano, A.; Abdel-Khalek, A. M.; Ádám, N.; Adamovová, L.; Ahn, C. K.; Ahn, H. N.; Alansari, B. M.; Alcalay, L.; et al. (2005). "National Character Does Not Reflect Mean Personality Trait Levels in 49 Cultures" (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2775052/). Science 310 (5745): 96–100. doi:10.1126/science.1117199. PMC 2775052. PMID 16210536.
[39] Zemore, Sarah E.; Fiske, Susan T.; Kim, Hyun-Jeong (2000). "Gender Stereotypes and the Dynamics of Social Interaction" (http://books.google.com/books?id=yJ43_5tJGycC&pg=PA229). In Eckes, Thomas; Trautner, Hanns Martin. The Developmental Social Psychology of Gender. Mahwah, NJ: Lawrence Erlbaum Associates. pp. 229–230. ISBN 978-0-585-30065-8.
[40] Crocker, Jennifer; Major, Brenda; Steele, Claude (1998). "Social Stigma" (http://books.google.com/books?id=w27pSuHLnLYC&pg=PA519). In Gilbert, Daniel T.; Fiske, Susan T.; Lindzey, Gardner. The Handbook of Social Psychology. Volume Two (4th ed.). Oxford: Oxford University Press. pp. 519–521. ISBN 978-0-19-521376-8.
[41] Whitley, Bernard E.; Kite, Mary E. (2010). The Psychology of Prejudice and Discrimination (http://books.google.fr/books?id=mXSJEjl4uZYC&pg=PA428) (2nd ed.). Belmont, CA: Wadsworth Cengage Learning. pp. 428–435. ISBN 978-0-495-59964-7.
[42] Crocker, Jennifer; Voelkl, Kristin; Testa, Maria; Major, Brenda (1991). "Social stigma: The affective consequences of attributional ambiguity". Journal of Personality and Social Psychology 60 (2): 218–228. doi:10.1037/0022-3514.60.2.218.
[43] Osborne, Jason W. (2007). "Linking Stereotype Threat and Anxiety". Educational Psychology 27 (1): 135–154. doi:10.1080/01443410601069929.
[44] Quinn, Diane M.; Kallen, Rachel W.; Spencer, Steven J. (2010). "Stereotype Threat". In Dovidio, John F.; et al. The SAGE Handbook of Prejudice, Stereotyping and Discrimination. Thousand Oaks, CA: SAGE Publications. pp. 379–394. ISBN 978-1-4129-3453-4.
[45] Inzlicht, Michael; Tullett, Alexa M.; Gutsell, Jennifer N. (2012). "Stereotype Threat Spillover: The Short- and Long-Term Effects of Coping with Threats to Social Identity" (http://books.google.com/books?id=o1JBcAv3f14C&pg=PA108). In Inzlicht, Michael; Schmader, Toni. Stereotype Threat: Theory, Process, and Application. New York, NY: Oxford University Press. p. 108. ISBN 978-0-19-973244-9.
[46] Aronson, Joshua; Steele, Claude M. (2005). "Chapter 24: Stereotypes and the Fragility of Academic Competence, Motivation, and Self-Concept" (http://books.google.com/books?id=B14TMHRtYBcC&pg=PA436). In Elliot, Andrew J.; Dweck, Carol S. Handbook of Competence and Motivation. New York: Guilford Press. pp. 436, 443. ISBN 978-1-59385-123-1.
[47] Steele, Claude M.; Aronson, Joshua (November 1995). "Stereotype threat and the intellectual test performance of African Americans" (http://users.nber.org/~sewp/events/2005.01.14/Bios+Links/Good-rec2-Steele_&_Aronson_95.pdf). Journal of Personality and Social Psychology 69 (5): 797–811. doi:10.1037/0022-3514.69.5.797. PMID 7473032.
[48] Stone, Jeff; Lynch, Christian I.; Sjomeling, Mike; Darley, John M. (1999). "Stereotype threat effects on Black and White athletic performance". Journal of Personality and Social Psychology 77 (6): 1213–1227. doi:10.1037/0022-3514.77.6.1213.
[49] Maass, Anne; D'Ettole, Claudio; Cadinu, Mara (2008). "Checkmate? The role of gender stereotypes in the ultimate intellectual sport" (http://clarksvillechessclub.org/pdf files/The role of gender stereotypes in chess.pdf). European Journal of Social Psychology 38 (2): 231–245. doi:10.1002/ejsp.440.
[50] Gupta, V. K.; Bhawe, N. M. (2007). "The Influence of Proactive Personality and Stereotype Threat on Women's Entrepreneurial Intentions". Journal of Leadership & Organizational Studies 13 (4): 73–85. doi:10.1177/10717919070130040901.
[51] Kassin, Saul M.; Fein, Steven; Markus, Hazel Rose (2011). Social Psychology (http://books.google.com/books?id=3aCdjhGxDjgC&pg=PA172) (8th ed.). Belmont, CA: Wadsworth, Cengage Learning. p. 172. ISBN 978-0-495-81240-1.
[52] Brown, Rupert (2010). Prejudice: Its Social Psychology (http://books.google.com/books?id=PygYKbRoZjcC&pg=PA94) (2nd ed.). Oxford: Wiley-Blackwell. pp. 94–97. ISBN 978-1-4051-1306-9.
[53] Word, Carl O.; Zanna, Mark P.; Cooper, Joel (1974). "The nonverbal mediation of self-fulfilling prophecies in interracial interaction". Journal of Experimental Social Psychology (Elsevier) 10 (2): 109–120. doi:10.1016/0022-1031(74)90059-6.
[54] Snyder, Mark; Tanke, Elizabeth D.; Berscheid, Ellen (1977). "Social perception and interpersonal behavior: On the self-fulfilling nature of social stereotypes" (http://jefferson.library.millersville.edu/reserve/COMM301_Paul_SocialPerception.pdf). Journal of Personality and Social Psychology 35 (9): 656–666. doi:10.1037/0022-3514.35.9.656.
[55] Banaji, Mahzarin R. (2002). "The Social Psychology of Stereotypes". In Smelser, Neil; Baltes, Paul. International Encyclopedia of the Social and Behavioral Sciences. New York: Pergamon. pp. 15100–15104. doi:10.1016/B0-08-043076-7/01754-X. ISBN 978-0-08-043076-8.
[56] Fiske, Susan T.; Lee, Tiane L. (2008). "Stereotypes and prejudice create workplace discrimination" (http://books.google.fr/books?hl=en&lr=&id=8edJmBsyRHwC&oi=fnd&pg=PA13). In Brief, Arthur P. Diversity at Work. New York: Cambridge University Press. pp. 13–52. ISBN 978-0-521-86030-7.
[57] Agerström, Jens; Rooth, Dan-Olof (2011). "The role of automatic obesity stereotypes in real hiring discrimination". Journal of Applied Psychology 96 (4): 790–805. doi:10.1037/a0021594. PMID 21280934.
[58] Davison, Heather K.; Burke, Michael J. (2000). "Sex Discrimination in Simulated Employment Contexts: A Meta-analytic Investigation". Journal of Vocational Behavior 56 (2): 225–248. doi:10.1006/jvbe.1999.1711.
[59] Rudman, Laurie A.; Glick, Peter (2001). "Prescriptive Gender Stereotypes and Backlash toward Agentic Women" (https://wesfiles.wesleyan.edu/courses/PSYC-309-clwilkins/Week3/Rudman.Glick.2001.pdf). Journal of Social Issues 57 (4): 743–762. doi:10.1111/0022-4537.00239.
[60] Sinclair, Stacey; Huntsinger, Jeff (2006). "The Interpersonal Basis of Self-Stereotyping" (http://books.google.com/books?id=7WtgXfECza8C&pg=PA239). In Levin, Shana; Van Laar, Colette. Stigma and Group Inequality: Social Psychological Perspectives. Claremont Symposium on Applied Social Psychology. Mahwah, NJ: Lawrence Erlbaum Associates. p. 239. ISBN 978-0-8058-4415-3.
[61] Correll, Shelley J. (2001). "Gender and the career choice process: The role of biased self-assessments" (http://www.chaire-crsng-inal.fsg.ulaval.ca/fileadmin/docs/documents/Article/Gender_and_career_choice_process_2001.pdf). American Journal of Sociology 106 (6): 1691–1730. doi:10.1086/321299.
[62] Correll, Shelley J. (2004). "Constraints into Preferences: Gender, Status, and Emerging Career Aspirations" (http://people.uncw.edu/maumem/soc500/Correll2004.pdf). American Sociological Review 69 (1): 93–113. doi:10.1177/000312240406900106.
[63] Sinclair, Stacey; Hardin, Curtis D.; Lowery, Brian S. (2006). "Self-Stereotyping in the Context of Multiple Social Identities" (http://psych.princeton.edu/psychology/research/sinclair/pubs/self stereo and multiple identities.PDF). Journal of Personality and Social Psychology (American Psychological Association) 90 (4): 529–542. doi:10.1037/0022-3514.90.4.529.
[64] Kitch, Carolyn L. (2001). The Girl on the Magazine Cover: The Origins of Visual Stereotypes in American Mass Media. Chapel Hill, NC: University of North Carolina Press. pp. 1–16. ISBN 978-0-8078-2653-9.
[65] van Ginneken, Jaap (2007). Screening Difference: How Hollywood's Blockbuster Films Imagine Race, Ethnicity, and Culture (http://books.google.fr/books?id=kd8WqdD7qIUC&printsec=frontcover). Lanham: Rowman & Littlefield. ISBN 978-0-7425-5583-9.
[66] Román, Ediberto (2000). "Who Exactly Is Living La Vida Loca: The Legal and Political Consequences of Latino–Latina Ethnic and Racial Stereotypes in Film and Other Media". Journal of Gender, Race & Justice 4 (1): 37–68.
Further reading
Hilton, James L.; von Hippel, William (1996). "Stereotypes". Annual Review of Psychology 47 (1): 237–271. doi:10.1146/annurev.psych.47.1.237.
Ewen, Stuart; Ewen, Elizabeth (2006). Typecasting: On the Arts and Sciences of Human Inequality. New York: Seven Stories Press.
Stereotype & Society (http://www.stereotypeandsociety.typepad.com), a major resource, constantly updated and archived.
Regenberg, Nina (2007). "Are Blonds Really Dumb?" (http://beta.in-mind.org/issue-3/are-blonds-really-dumb). In-Mind (3).
Are Stereotypes True? (http://beta.in-mind.org/node/126)
Shih, Margaret; Pittinsky, Todd L.; Ambady, Nalini. "Stereotype Susceptibility: Identity Salience and Shifts in Quantitative Performance" (http://www.blackwell-synergy.com/doi/abs/10.1111/.00111). Research on the effects of 'positive' and negative stereotypes in encouraging or discouraging performance.
Turner, Chris (2004). Planet Simpson: How a Cartoon Masterpiece Documented an Era and Defined a Generation. Toronto: Random House Canada.
Crawford, M.; Unger, R. (2004). Women and Gender: A Feminist Psychology. New York: McGraw-Hill. pp. 45–49.
Spitzer, B. L.; Henderson, K. A.; Zavian, M. T. (1999). "Gender differences in population versus media body sizes: A comparison over four decades". Sex Roles 40: 545–565.
External links
Interview (http://www.overfiftyandoutofwork.com/experts/susan-fiske-mike-north/) with social psychologists Susan Fiske and Mike North about the stereotyping of older people
How gender stereotypes influence emerging career aspirations (http://www.youtube.com/watch?v=jwviTwO8M8Q), a lecture by Stanford University sociologist Shelley Correll on October 21, 2010
Social Psychology Network (http://www.understandingprejudice.org/apa/english/page11.htm), Stereotyping
Stereotypes (http://mediasmarts.ca/backgrounder/stereotypes-teaching-backgrounder), Media Smarts, Canada's Centre for Digital and Media Literacy
Age and health based stereotyping (http://www.ahealthcareer.co.uk/age-stereotypes-health-sector.html)
Subadditivity effect
The subadditivity effect is the tendency to judge the probability of the whole to be less than the sum of the
probabilities of its parts.
[1]
For instance, subjects in one experiment judged the probability of death from cancer in the United States to be 18%,
the probability of death from heart attack to be 22%, and the probability of death from "other natural causes" to be
33%. Other participants judged the probability of death from any natural cause to be 58%. Natural causes comprise
precisely cancer, heart attack, and "other natural causes"; yet the sum of those three probabilities is 73%, not 58%.
According to Tversky and Koehler (1994), this kind of result is observed consistently.
[2]
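The inconsistency in this experiment is easy to check directly. A minimal sketch in Python, using the judged probabilities from the study described above:

```python
# Judged probabilities of death from each "part" (from the experiment above).
parts = {"cancer": 0.18, "heart attack": 0.22, "other natural causes": 0.33}
whole = 0.58  # judged probability of death from any natural cause

sum_of_parts = sum(parts.values())  # 0.73
print(f"sum of parts: {sum_of_parts:.2f}, judged whole: {whole:.2f}")

# Subadditivity: the whole is judged less probable than the sum of its parts.
assert whole < sum_of_parts
```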
The same mechanism may underlie an effect of familiarity on probability judgment. More familiar events are more
available (this is known as the availability heuristic): we find it easier to think of reasons why such events will or
will not happen. In an experiment carried out by Fox and Levav (2000),
[3]
students at Duke University were asked
which of two events was more likely to occur. The first was "Duke men's basketball defeats UNC men's basketball
at Duke's Cameron Indoor Stadium in January 1999," and the other was "Duke men's fencing defeats UNC men's
fencing at Duke's Cameron Card Gym in January 1999." Because Duke students are much more familiar with
basketball than with fencing, the researchers predicted that familiarity would inflate the basketball judgment, and
indeed 75% of the students thought the basketball victory was more likely. Other students answered exactly the
same questions with Duke and UNC switched around; after the change, 44% of the students said that a UNC victory
in basketball was more likely than a UNC victory in fencing. Since only one such basketball game would be played,
these answers are jointly incoherent: 75% plus 44% is 119%, more than 100%. In this experiment, familiarity with
basketball led subjects to think of the basketball event as more likely than the fencing event, regardless of which
basketball outcome was described.
Explanations
A 2012 article in Psychological Bulletin suggests that the subadditivity effect can be explained by an
information-theoretic generative mechanism that assumes a noisy conversion of objective evidence (observation)
into subjective estimates (judgment).
[4]
This explanation differs from support theory, proposed as an explanation
by Tversky and Koehler,
[2]
which requires additional assumptions. Since mental noise is a sufficient explanation that
is much simpler and more straightforward than any explanation involving heuristics or behavior, Occam's razor
argues in its favor as the underlying generative mechanism (it is the hypothesis that makes the fewest
assumptions).
[4]
References
[1] Baron, J. (in preparation). Thinking and Deciding, 4th edition. New York: Cambridge University Press.
[2] Tversky, A., & Koehler, D. J. (1994). Support theory: A nonextensional representation of subjective probability. Psychological Review, 101,
547–567.
[3] Fox, C. R., & Levav, J. (2000). Familiarity bias and belief reversal in relative likelihood judgments. Organizational Behavior and Human
Decision Processes, 82, 268–292.
[4] Hilbert, Martin (2012). "Toward a synthesis of cognitive biases: How noisy information processing can bias human decision making" (http://
psycnet.apa.org/psycinfo/2011-27261-001). Psychological Bulletin, 138(2), 211–237; free access to the study here:
martinhilbert.net/HilbertPsychBull.pdf
Subjective validation
Subjective validation, sometimes called the personal validation effect, is a cognitive bias by which a person will
consider a statement or another piece of information to be correct if it has any personal meaning or significance to
them.
[1]
In other words, a person whose opinion is affected by subjective validation will perceive two unrelated
events (i.e., a coincidence) to be related because their personal belief demands that they be related. Closely related to
the Forer effect, subjective validation is an important element in cold reading. It is considered to be the main reason
behind most reports of paranormal phenomena.
[2]
According to Bob Carroll, psychologist Ray Hyman is considered
to be the foremost expert on subjective validation and cold reading.
[3]
References
[1] Forer, B. R. (1949). "The fallacy of personal validation: A classroom demonstration of gullibility." Journal of Abnormal Psychology, 44,
118–121.
[2] Cline, Austin. Flaws in Reasoning and Arguments: Subjective Validation, Seeing Patterns & Connections That Aren't Really There (http://
atheism.about.com/od/logicalflawsinreasoning/a/subjective.htm), About.com, September 10, 2007. Accessed January 10, 2008.
[3] Carroll, Bob. "Hope in Small Doses" (http://www.skepticality.com/hope-in-small-doses/). Skepticality. Retrieved August 17, 2012.
External links
The Skeptic's Dictionary entry on subjective validation (http://skepdic.com/subjectivevalidation.html)
Survivorship bias
Survivorship bias is the logical error of concentrating on the people or things that "survived" some process and
inadvertently overlooking those that didn't because of their lack of visibility. This can lead to false conclusions in
several different ways. The survivors may literally be people, as in a medical study, or could be companies or
research subjects or applicants for a job, or anything that must make it past some selection process to be considered
further.
Survivorship bias can lead to overly optimistic beliefs because failures are ignored, such as when companies that no
longer exist are excluded from analyses of financial performance. It can also lead to the false belief that the successes
in a group have some special property, rather than being merely lucky. For example, if three of the five students with
the best college grades went to the same high school, one might conclude that the high school must offer an
excellent education. This could be true, but the question cannot be answered without looking at the grades of all the
other students from that high school, not just the ones who "survived" the top-five selection process.
Survivorship bias is a type of selection bias.
In finance
In finance, survivorship bias is the tendency for failed companies to be excluded from performance studies because
they no longer exist. It often causes the results of studies to skew higher because only companies which were
successful enough to survive until the end of the period are included.
For example, a mutual fund company's selection of funds today will include only those that are successful now.
Many losing funds are closed and merged into other funds to hide poor performance. In theory, 90% of extant funds
could truthfully claim to have performance in the first quartile of their peers if the peer group includes funds that
have closed.
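As a hypothetical illustration (not from the source), a small simulation shows how averaging only the surviving funds inflates measured performance; the fund count, return distribution, and closure rule below are all assumptions:

```python
import random

random.seed(0)  # deterministic illustration

# 1,000 hypothetical funds with random annual returns (mean 5%, s.d. 10%).
returns = [random.gauss(0.05, 0.10) for _ in range(1000)]

# Suppose the worst-performing 40% of funds are closed or merged away.
cutoff = sorted(returns)[int(0.4 * len(returns))]
survivors = [r for r in returns if r >= cutoff]

avg_all = sum(returns) / len(returns)
avg_survivors = sum(survivors) / len(survivors)

# Measuring only the survivors overstates average performance.
print(f"all funds: {avg_all:.3f}, survivors only: {avg_survivors:.3f}")
```

Because the closed funds are drawn disproportionately from the losers, the survivor-only average is necessarily higher than the all-fund average.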
In 1996 Elton, Gruber, and Blake showed that survivorship bias is larger in the small-fund sector than among large
mutual funds (presumably because small funds have a high probability of folding).
[1]
They estimate the size of the bias
across the U.S. mutual fund industry as 0.9% per annum, where the bias is defined and measured as:
"Bias is defined as average α for surviving funds minus average α for all funds"
(where α is the risk-adjusted return over the S&P 500, the standard measure of mutual fund
out-performance).
Additionally, in quantitative backtesting of market performance or other characteristics, survivorship bias is the use
of a current index membership set rather than using the actual constituent changes over time. Consider a backtest to
1990 to find the average performance (total return) of S&P 500 members who have paid dividends within the
previous year. To use the current 500 members only and create an historical equity line of the total return of the
companies that met the criteria, would be adding survivorship bias to the results. S&P maintains an index of healthy
companies, removing companies that no longer meet their criteria as a representative of the large-cap U.S. stock
market. Companies that had healthy growth on their way to inclusion in the S&P 500, would be counted as if they
were in the index during that growth period, when they were not. Instead there may have been another company in
the index that was losing market capitalization and was destined for the S&P 600 Small-cap Index, that was later
removed and would not be counted in the results. Using the actual membership of the index, applying entry and exit
dates to gain the appropriate return during inclusion in the index, would allow for a bias-free output.
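The bias-free approach described above can be sketched in code; the membership records, tickers, and dates below are hypothetical:

```python
from datetime import date

# Hypothetical membership records: (ticker, entry date, exit date or None).
membership = [
    ("AAA", date(1990, 1, 1), None),               # still in the index
    ("BBB", date(1990, 1, 1), date(1999, 6, 30)),  # later removed
    ("CCC", date(1995, 3, 1), None),               # added mid-period
]

def members_on(day):
    """Constituents as of a given day: using actual entry/exit dates rather
    than today's membership set avoids survivorship bias in a backtest."""
    return [ticker for ticker, entry, exit_ in membership
            if entry <= day and (exit_ is None or day < exit_)]

print(members_on(date(1992, 1, 1)))  # → ['AAA', 'BBB']
print(members_on(date(2000, 1, 1)))  # → ['AAA', 'CCC']
```

A backtest that instead took today's membership (`['AAA', 'CCC']`) for every historical date would silently drop the removed fund "BBB" from the 1990s, exactly the error the paragraph describes.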
As a general experimental flaw
Survivorship bias (or survivor bias) is a statistical artifact in applications outside finance, where studies on the
remaining population are fallaciously compared with the historic average despite the survivors having unusual
properties. Mostly, the unusual property in question is a track record of success (like the successful funds).
For example, the parapsychology researcher Joseph Banks Rhine believed he had identified the few individuals from
hundreds of potential subjects who had powers of ESP. His calculations were based on the improbability of these
few subjects guessing the Zener cards shown to a partner by chance.
A major criticism which surfaced against his calculations was the possibility of unconscious survivor bias in subject
selections. He was accused of failing to take into account the large effective size of his sample (all the people he
didn't choose as 'strong telepaths' because they failed at an earlier testing stage). Had he done this he might have seen
that from the large sample, one or two individuals would probably achieve the track record of success he had found
purely by chance.
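The point about the large effective sample can be made numerically. The sketch below assumes standard Zener-card parameters (25 guesses, a 1-in-5 chance per guess) and a hypothetical pool of 500 screened subjects:

```python
from math import comb

p, n = 0.2, 25  # chance of guessing one Zener card (5 symbols); 25 trials

def p_at_least(k):
    """Probability of at least k correct guesses in n trials, by chance alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A score of 10+/25 looks impressive for a single subject...
p_one = p_at_least(10)
# ...but across a pool of 500 screened subjects, such a "star" becomes likely.
p_any = 1 - (1 - p_one) ** 500
print(f"one subject: {p_one:.4f}, at least one of 500: {p_any:.4f}")
```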
Writing about the Rhine case, Martin Gardner explained that he did not think the experimenters had made such
obvious mistakes out of statistical naïveté, but as a result of subtly disregarding some poor subjects. He said that
without trickery of any kind, there would always be some people who had improbable success, if a large enough
sample were taken. To illustrate this, he speculates about what would happen if one hundred professors of
psychology read Rhine's work and decided to make their own tests; he said that survivor bias would winnow out the
typical failed experiments, but encourage the lucky successes to continue testing. He thought that the common null
hypothesis (of no result) wouldn't be reported, but:
"Eventually, one experimenter remains whose subject has made high scores for six or seven successive
sessions. Neither experimenter nor subject is aware of the other ninety-nine projects, and so both have a strong
delusion that ESP is operating."
He concludes:
"The experimenter writes an enthusiastic paper, sends it to Rhine who publishes it in his magazine, and the
readers are greatly impressed".
If enough scientists study a phenomenon, some will find statistically significant results by chance, and these are the
experiments submitted for publication. Additionally, papers showing positive results may be more appealing to
editors.
[2]
This problem is known as positive results bias, a type of publication bias. To combat this, some editors
now call for the submission of 'negative' scientific findings, where "nothing happened."
Survivorship bias is one of the issues discussed in the provocative 2005 paper "Why Most Published Research
Findings Are False."
[2]
In business law
Survivorship bias can raise truth-in-advertising problems when the success rate advertised for a product or service is
measured on a population whose makeup differs from that of the target audience at which the advertising is aimed.
These problems become especially significant when: a) the advertisement fails to disclose the existence of relevant
differences between the two populations, or describes them in insufficient detail; b) these differences result from the
company's deliberate "pre-screening" of prospective customers to ensure that only customers with traits increasing
their likelihood of success are allowed to purchase the product or service, especially when the company's selection
procedures or evaluation standards are kept secret; and c) the company charges a fee, especially one that is
non-refundable or not disclosed in the advertisement, for the privilege of attempting to become a
customer.
For example, the advertisements of the online dating service eHarmony.com pass this test because they satisfy the
first two prongs but not the third: they claim a success rate significantly higher than that of competing services while
generally not disclosing that the rate is calculated with respect to a subset of viewers who possess traits that
increase their likelihood of finding and maintaining relationships and lack traits that pose obstacles to doing
so (a); and the company deliberately selects for these traits by administering a lengthy pre-screening process
designed to reject prospective customers who lack the former traits or possess the latter ones (b); but the
company does not charge a fee for administering its pre-screening test, so its prospective
customers face no "downside risk" other than the time and effort involved in completing the
pre-screening process (negating c).
[3]
Similarly, many investors believe that chance is the main reason most successful fund managers have the track
records they do.
References
[1] Elton, Gruber, & Blake (1996). Survivorship Bias and Mutual Fund Performance. The Review of Financial Studies, 9(4)
(http://rfs.oupjournals.org/cgi/reprint/9/4/1097). In this paper the researchers eliminate survivorship bias by following the returns on
all funds extant at the end of 1976. They show that other researchers have drawn spurious conclusions by failing to include the bias in
regressions on fund performance.
[2] Ioannidis JPA (2005). Why Most Published Research Findings Are False. PLoS Med 2(8): e124
(http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124).
[3] http://www.washingtonpost.com/wp-dyn/content/article/2007/05/12/AR2007051201350.html
Texas sharpshooter fallacy
The Texas sharpshooter fallacy is an informal fallacy in which pieces of information that have no relationship to
one another are called out for their similarities, and that similarity is used for claiming the existence of a pattern.
[1]
This fallacy is the philosophical/rhetorical application of the multiple comparisons problem (in statistics) and
apophenia (in cognitive psychology). It is related to the clustering illusion, which refers to the tendency in human
cognition to interpret patterns in randomness where none actually exist.
The name comes from a joke about a Texan who fires some shots at the side of a barn, then paints a target centered
on the biggest cluster of hits and claims to be a sharpshooter.
[2][3]
Structure
The Texas sharpshooter fallacy often arises when a person has a large amount of data at their disposal but focuses
only on a small subset of that data. Random chance may give all the elements in that subset some kind of
common property (or pair of common properties, when arguing for correlation). If the person fails to account for the
likelihood of finding some subset with some common property in the large data set strictly by chance alone, that person
is likely committing a Texas sharpshooter fallacy.
To illustrate, if we pay attention to a cluster of cancer cases in a certain sub-population and then draw our "circle"
around the smallest area that includes this cluster, this sample will appear to be suffering an unusually high rate of
cancer, but if we included the rest of the population, the incidence would regress to the average.
[4]
The fallacy is characterized by the lack of a specific hypothesis prior to the gathering of data, or the formulation of a
hypothesis only after the data have already been gathered and examined.
[5]
Thus, it typically does not apply if one had an
ex ante, or prior, expectation of the particular relationship in question before examining the data. For example, one
might, prior to examining the information, have in mind a specific physical mechanism implying the particular
relationship; one could then use the information to support or cast doubt on the presence of that mechanism.
Alternatively, if additional information can be generated using the same process as the original information, one can
use the original information to construct a hypothesis and then test the hypothesis on the new data (see hypothesis
testing). What one cannot do is use the same information to construct and test the same hypothesis (see hypotheses
suggested by the data); to do so would be to commit the Texas sharpshooter fallacy.
Examples
A Swedish study in 1992 tried to determine whether or not power lines caused some kind of poor health effects.
The researchers surveyed everyone living within 300 meters of high-voltage power lines over a 25-year period
and looked for statistically significant increases in rates of over 800 ailments. The study found that the incidence
of childhood leukemia was four times higher among those that lived closest to the power lines, and it spurred calls
to action by the Swedish government. The problem with the conclusion, however, was that the number of
potential ailments, i.e. over 800, was so large that it created a high probability that at least one ailment would
exhibit a statistically significant difference just by chance alone. Subsequent studies failed to show any link
between power lines and childhood leukemia, in either causation or correlation.
[6]
Attempts to find cryptograms in the works of William Shakespeare, which tended to report results only for those
passages of Shakespeare for which the proposed decoding algorithm produced an intelligible result. This could be
explained as an example of the fallacy because passages which do not match the algorithm have not been
accounted for.
Attempts to find cryptograms in the Bible, and the Quran Code.
This fallacy is often found in modern-day interpretations of the quatrains of Nostradamus. Nostradamus' quatrains
are often liberally translated from the original (archaic) French, stripped of their historical context, and then
applied to support the conclusion that Nostradamus predicted a given modern-day event, after the event actually
occurred. For instance, the Nostradamus lines that supposedly predicted 9/11 were taken from three separate and
unrelated passages and a fictional line was added.
[7]
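The multiple-comparisons problem behind the Swedish power-line example can be made concrete. Assuming roughly 800 independent tests at a conventional 5% significance level (an idealization of the study described above):

```python
# 800 ailments tested at the 5% significance level, assumed independent.
alpha, n_tests = 0.05, 800

# Probability that at least one test comes out "significant" even if power
# lines have no effect at all, and the expected number of such false alarms.
p_at_least_one = 1 - (1 - alpha) ** n_tests
expected_false_positives = alpha * n_tests  # 40 spurious "findings" expected

print(f"P(at least one spurious result) = {p_at_least_one:.6f}")
print(f"expected spurious results = {expected_false_positives:.0f}")
```

With 800 comparisons, a spurious "significant" result is essentially guaranteed, which is why the childhood-leukemia finding on its own carried little evidential weight.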
References
[1] Bennett, Bo (2010). Logically Fallacious: The Ultimate Collection of Over 300 Logical Fallacies
(http://books.google.com/books?id=WFvhN9lSm5gC). ebookit.com. ISBN 1456607375. Retrieved 2012-03-25.
"description: ignoring the difference while focusing on the similarities, thus coming to an inaccurate conclusion"
[2] Gawande, Atul (February 8, 1999). "The cancer-cluster myth" (http://www.crab.rutgers.edu/~mbravo/cluster.pdf). The New Yorker. Retrieved
2009-10-10.
[3] Carroll, Robert Todd (2003). The Skeptic's Dictionary: a collection of strange beliefs, amusing deceptions, and dangerous delusions
(http://books.google.com/books?id=6FPqDFx40vYC). John Wiley & Sons. p. 375. ISBN 0471272426. Retrieved 2012-03-25. "The term
refers to the story of the Texan who shoots holes in the side of a barn and then draws a bull's-eye around the bullet holes"
[4] "Cancer Clusters" (http://www.ncri.ie/cancerinfo/clusters.shtml). NCR. 2010. Retrieved 2012-03-25. "The Texas sharpshooter shoots at
the side of a barn and then draws a bull's-eye around the bullet holes. In the same way, we might notice a number of cancer cases, then draw
our population base around the smallest area possible, neglecting to remember that the cancer cases actually came from a much larger
population."
[5] Thompson, William C. (July 18, 2009). "Painting the target around the matching profile: the Texas sharpshooter fallacy in forensic DNA
interpretation" (http://lpr.oxfordjournals.org/content/8/3/257.full.pdf+html). Law, Probability, & Risk 8 (3): 257–258.
doi:10.1093/lpr/mgp013. Retrieved 2012-03-25. "Texas sharpshooter fallacy...this article demonstrates how post hoc target shifting occurs
and how it can distort the frequency and likelihood ratio statistics used to characterize DNA matches, making matches appear more probative
than they actually are."
[6] "FRONTLINE: previous reports: transcripts: currents of fear" (http://www.pbs.org/wgbh/pages/frontline/programs/transcripts/1319.html).
PBS. 1995-06-13. Retrieved 2012-07-03.
[7] "Nostradamus Predicted 9/11?" (http://www.snopes.com/rumors/nostradamus.asp). snopes.com. Retrieved 2012-07-03.
External links
Fallacy Files entry (http://www.fallacyfiles.org/texsharp.html)
Time-saving bias
The time-saving bias describes people's tendency to misestimate the time that could be saved (or lost) when
increasing (or decreasing) speed. In general, people underestimate the time that could be saved when increasing from
a relatively low speed (e.g., 25 mph or 40 km/h) and overestimate the time that could be saved when increasing from
a relatively high speed (e.g., 55 mph or 90 km/h). People also underestimate the time that could be lost when
decreasing from a low speed and overestimate the time that could be lost when decreasing from a high speed.
Examples
In one study, participants were asked to judge which of two road improvement plans would be more efficient in
reducing mean journey time. Respondents preferred a plan that would increase the mean speed from 70 to 110 km/h
over a plan that would increase the mean speed from 30 to 40 km/h, although the latter actually saves more
time (Svenson, 2008, Experiment 1).
In another study, drivers were asked to indicate how much time they felt could be saved when increasing from either a
low (30 mph) or high (60 mph) speed (Fuller et al., 2009). For example, participants were asked the following
question: "You are driving along an open road. How much time do you feel you would gain if you drove for 10 miles
at 40 mph instead of 30 mph?" (Fuller et al., 2009, p. 14). Another question had a higher starting speed (60 mph), and
two other questions asked about losing time when decreasing speed (from either 30 or 60 mph).
Results supported the predictions of the time-saving bias as participants underestimated the time saved when
increasing from a low speed and overestimated the time saved when increasing from a relatively high speed. In
addition, participants also misestimated the time lost when decreasing speed: they generally underestimated the time
lost when decreasing from a low speed and overestimated the time lost when decreasing from a relatively high speed
(Fuller et al., 2009).
Explanation
The physical formula for calculating the time gained when increasing speed is:
(1) t = cD(1/V1 - 1/V2),
where c is a constant used to convert between units of measurement, t is the time gained, D is the distance
traveled, and V1 and V2 are the original and increased speeds, respectively. This formula shows that the relationship
between increasing speed and journey time is curvilinear: the same speed increase results in more time saved
when increasing from a low speed than from a higher speed. For example, when increasing from 20 to 30 mph, the
time required to complete 10 miles decreases from 30 to 20 minutes, saving 10 minutes. However, the same speed
increase of 10 mph results in less time saved if the initial speed is higher (e.g., only 2 minutes saved when
increasing from 50 mph to 60 mph). Changing the distance of the journey from 10 miles to a longer or shorter
distance will increase or decrease these time savings, but will not affect the relationship between speed and time
savings.
Svenson (2008) suggested that people's judgments of time savings actually follow a proportion heuristic, by which
people judge the time saved as proportional to the speed increase relative to the initial speed. Another study suggested
that people might follow a simpler difference heuristic, by which they judge the time saved based solely on the
difference between the initial and higher speed (Peer, 2010b, Study 3). It seems that people falsely believe that
journey time decreases roughly linearly as driving speed increases, irrespective of the initial speed, causing the
time-saving bias. Although it is still unclear which heuristic people predominantly use to estimate time savings, it is
evident that almost none follow the curvilinear relationship above.
Consequences in driving
Drivers who underestimated the time saved when increasing from a low speed, or overestimated the time lost when
decreasing from a high speed, overestimated the speed required to arrive at a specific time and chose unduly high
speeds, sometimes even exceeding the stated speed limit (Peer, 2010a). Similarly, drivers who overestimated the
time saved when increasing from a high speed underestimated the speed required to arrive on time and chose
lower speeds (Peer, 2011).
Consequences in other domains
The time-saving bias is not limited to driving. The same faulty estimates emerge when people are asked to estimate
savings in patients' waiting time when adding more physicians to a health care center (Svenson, 2008, Experiment 2),
or when estimating the increase in the productivity of a manufacturing line from adding more workers (Svenson, 2011).
References
1. Fuller, R., Gormley, M., Stradling, S., Broughton, P., Kinnear, N., O'Dolan, C., & Hannigan, B. (2009). Impact of
speed change on estimated journey time: Failure of drivers to appreciate relevance of initial speed. Accident
Analysis and Prevention, 41, 10–14.
2. Peer, E. (2011). The time-saving bias, speed choices and driving behavior. Transportation Research Part F:
Traffic Psychology and Behaviour, 14, 543–554.
3. Peer, E. (2010a). Speeding and the time-saving bias: How drivers' estimations of time saved when increasing
speed affects their choice of speed. Accident Analysis and Prevention, 42, 1978–1982.
4. Peer, E. (2010b). Exploring the time-saving bias: How drivers misestimate time saved when increasing speed.
Judgment and Decision Making, 5(7), 477–488.
5. Svenson, O. (2008). Decisions among time saving options: When intuition is strong and wrong. Acta
Psychologica, 127, 501–509.
6. Svenson, O. (2009). Driving speed changes and subjective estimates of time savings, accident risks and braking.
Applied Cognitive Psychology, 23, 543–560.
7. Svenson, O. (2011). Biased decisions concerning productivity increase options. Journal of Economic Psychology,
32(3), 440–445.
External links
The MPG Illusion (http://www.mpgillusion.com/)
Well travelled road effect
The well travelled road effect is a cognitive bias in which travellers will estimate the time taken to traverse routes
differently depending on their familiarity with the route. Frequently travelled routes are assessed as taking a shorter
time than unfamiliar routes.
[1][2]
This effect creates errors when estimating the most efficient route to an unfamiliar
destination, when one candidate route includes a familiar route, whilst the other candidate route includes no familiar
routes. The effect is most salient when subjects are driving, but is still detectable for pedestrians and users of public
transport. Much like the Stroop Task
[3][4]
it is hypothesised that drivers use less cognitive effort when traversing
familiar routes and therefore underestimate the time taken to traverse the familiar route.
[5]
The effect has been
observed for centuries but was first studied scientifically in the 1980s and '90s, following earlier work on cognitive
fallacies by Daniel Kahneman and Amos Tversky.
[6]
The well travelled road effect has been hypothesised as a
reason that self-reported experience curve effects are overestimated (see Experience curve effects).
References
[1] Allan, L. G. (1979). The perception of time. Perception & Psychophysics, 26, 340–354.
[2] Zakay, D., & Block, R. A. (2004). Prospective and retrospective duration judgments: an executive-control perspective. Acta Neurobiologiae
Experimentalis, 64, 319–328.
[3] http://www.snre.umich.edu/eplab/demos/st0/stroopdesc.html
[4] Zakay, D., & Fallach, E. (1984). Immediate and remote time estimation: a comparison. Acta Psychologica, 57, 69–81.
[5] Rubia, K., & Smith, A. (2004). The neural correlates of cognitive time management: a review. Acta Neurobiologica, 64, 329–340.
[6] Jackson, W., & Jucker, J. (1981). An empirical study of travel time variability and travel choice behavior. Transportation Science, 16,
460–475.
Zero-risk bias
Zero-risk bias is the tendency to prefer the complete elimination of a risk even when alternative options produce a
greater overall reduction in risk. This effect on decision making has been observed in surveys presenting
hypothetical scenarios, and certain real-world policies have been interpreted as being influenced by it.
Baron, Gowda, and Kunreuther identified a zero-risk bias in responses to a questionnaire about a hypothetical
cleanup scenario involving two hazardous sites X and Y, with X causing 8 cases of cancer annually and Y causing 4
cases annually. The respondents ranked three cleanup approaches: two options each reduced the number of cancer
cases by 6, while the third reduced the number by 5 but completely eliminated the cases at site Y. Although the latter
option offered the smallest reduction overall, 42% of the respondents ranked it better than at least one of the other
options. This finding resembled one from an earlier economics study, which found that people were willing to pay high
costs to completely eliminate a risk.
[1][2]
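The structure of the questionnaire scenario can be laid out explicitly. A minimal sketch of the numbers as described above (the split of option C's reduction between the two sites, 4 cases at Y plus 1 at X, is inferred from the scenario):

```python
# Annual cancer cases at the two hypothetical hazardous sites.
cases = {"X": 8, "Y": 4}

# Cases prevented by each cleanup option; option C eliminates site Y entirely
# (all 4 of its cases) plus 1 case at site X.
prevented = {"A": 6, "B": 6, "C": 5}

best = max(prevented.values())
print(f"largest overall reduction: {best}; zero-risk option C: {prevented['C']}")

# The zero-risk option prevents fewer cases overall, yet 42% of respondents
# ranked it above at least one of the better options.
assert prevented["C"] < best
```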
Multiple real world policies have been said to be affected by this bias. In American federal policy, the Delaney
clause outlawing cancer-causing additives from foods (regardless of actual risk) and the desire for perfect cleanup of
Superfund sites have been alleged to be overly focused on complete elimination. Furthermore, the effort needed to
implement zero-risk laws grew as technological advances enabled the detection of smaller quantities of hazardous
substances. Limited resources were increasingly being devoted to low-risk issues.
[3]
Other biases might underlie the zero-risk bias. One is a tendency to think in terms of proportions rather than
differences: a larger reduction in the proportion of deaths is valued more highly than a larger reduction in the actual
number of deaths. The zero-risk bias can then be seen as the extreme end of a broader bias about quantities as applied
to risk. Framing effects can enhance the bias, for example by emphasizing a large proportion in a small set, or can
attempt to mitigate the bias by emphasizing total quantities.
[4]
References
[1] Baron, Jonathan; Gowda, Rajeev; Kunreuther, Howard (1993). "Attitudes toward managing hazardous waste: What should be cleaned up and
who should pay for it?" (http://www.sas.upenn.edu/~baron/papers.htm/gowda.html). Risk Analysis 13: 183–192.
[2] Viscusi, W. K.; Magat, W. A.; Huber, J. (1987). "An investigation of the rationality of consumer valuation of multiple health risks". Rand
Journal of Economics 18: 465–479.
[3] Kunreuther, Howard (1991). "Managing hazardous waste: past, present and future" (http://opim.wharton.upenn.edu/risk/downloads/
archive/arch156.pdf). Risk Analysis 11: 19–26.
[4] Baron, Jonathan (2003). "Value analysis of political behavior - self-interested : moralistic :: altruistic : moral" (http://www.sas.upenn.edu/
~baron/papers.htm/ratsymp.html). University of Pennsylvania Law Review 151: 1135–1167.
Actor–observer asymmetry
Actor–observer asymmetry (also actor–observer bias) describes the errors that one makes when forming attributions about behavior (Jones & Nisbett, 1971). When a person judges their own behavior, as the actor, they are more likely to attribute their actions to the particular situation than to a generalization about their personality. Yet when a person attributes the behavior of another person, acting as the observer, they are more likely to attribute this behavior to the person's overall disposition than to situational factors. This frequent error shows the bias that people hold in their evaluations of behavior (Miller & Norman, 1975). People are more likely to see their own behavior as affected by the situation they are in, or by the sequence of occurrences that have happened to them throughout their day. But they see other people's actions as solely a product of their overall personality, and do not afford them the chance to explain their behavior as exclusively a result of situational effects.
This term falls under "attribution" or "attribution theory". The specific hypothesis of an actor–observer asymmetry in attribution (explanations of behavior) was originally proposed by Jones and Nisbett (1971), who claimed that "actors tend to attribute the causes of their behavior to stimuli inherent in the situation, while observers tend to attribute behavior to stable dispositions of the actor" (p. 93). Supported by initial evidence, the hypothesis was long held as firmly established, describing a robust and pervasive phenomenon of social cognition.
However, a meta-analysis of all the published tests of the hypothesis between 1971 and 2004 (Malle, 2006) yielded a stunning finding: there was no actor-observer asymmetry of the sort Jones and Nisbett (1971) had proposed. Malle (2006) interpreted this result not so much as proof that actors and observers explain behavior exactly the same way but as evidence that the original hypothesis was fundamentally flawed in the way it framed people's explanations of behavior, namely, as attributions either to stable dispositions or to the situation. Against the background of a different theory of explanation, Malle, Knobe, and Nelson (2007) tested an alternative set of three actor-observer asymmetries and found consistent support for all of them. Thus, the actor-observer asymmetry does not exist in one theoretical formulation (traditional attribution theory) but does exist in the new alternative theoretical formulation. Malle (2011) argues that this favors the alternative formulation, but current textbooks have not yet fully addressed this theoretical challenge.
Considerations of actor-observer differences can be found in other disciplines as well, such as philosophy (e.g.,
privileged access, incorrigibility), management studies, artificial intelligence, semiotics, anthropology, and political
science (see Malle, Knobe, & Nelson, 2007, for relevant references).
Background and initial formulation
The background to this hypothesis was social psychology's increasing interest in the 1960s in the cognitive
mechanisms by which people make sense of their own and other people's behavior. This interest was instigated by
Fritz Heider's (1958) book, The Psychology of Interpersonal Relations, and the research in its wake has become
known as "attribution research" or "attribution theory."
The specific hypothesis of an "actor–observer asymmetry" was first proposed by social psychologists Jones and Nisbett in 1971, who hypothesized that these two roles produce asymmetric explanations: "Actors tend to attribute the causes of their behavior to stimuli inherent in the situation, while observers tend to attribute behavior to stable dispositions of the actor" (Jones & Nisbett, 1971, p. 93). According to this hypothesis, a student who studies hard for an exam is likely to explain her own (the "actor"'s) intensive studying by referring to the upcoming difficult exam, whereas other people (the "observers") are likely to explain her studying by referring to her dispositions, such as being hardworking or ambitious.
Early evidence and reception
Soon after the publication of the actor-observer hypothesis, numerous research studies tested its validity, most
notably the first such test by Nisbett, Caputo, Legant, and Marecek (1973). The authors found initial evidence for the
hypothesis, and so did Storms (1973), who also examined one possible explanation of the hypothesis: that actors
explain their behaviors by reference to the situation because they attend to the situation (not to their own behaviors)
whereas observers explain the actor's behavior by reference to the actor's dispositions because they attend to the
actor's behavior (not to the situation). Based largely on this initial supporting evidence, the confidence in the
hypothesis became uniformly high. The asymmetry was described as robust and quite general,[1] "firmly established" (Watson, 1982, p. 698), and an entrenched part of scientific psychology.[2] Likewise, evidence for the asymmetry was considered to be "plentiful"[3] and pervasive.[4]
Recent evidence
Over 100 studies have been published since 1971 in which the hypothesis was put to further tests (often in the
context of testing another hypothesis about causal attributions). Malle (2006) examined this entire literature in a
meta-analysis, which is a robust way of identifying consistent patterns of evidence regarding a given hypothesis
across a broad set of studies. The result of this analysis was stunning: across 170 individual tests, the asymmetry
practically did not exist. (The average effect sizes, computed in several accepted ways, ranged from d = -0.016 to d =
0.095; corrected for publication bias, the average effect size was 0.) Under circumscribed conditions, it could
sometimes be found, but under other conditions, the opposite was found. The conclusion was that the widely held
assumption of an actor-observer asymmetry in attribution was false.
Theoretical reformulation
The result of the meta-analysis implied that, across the board, actors and observers explain behaviors the same way.
But all the tests of the classic hypothesis presupposed that people explain behavior by referring to "dispositional" vs.
"situational" causes. This assumption turned out to be incorrect for the class of behavioral events that people explain
most frequently in real life (Malle & Knobe, 1997): intentional behaviors (e.g., buying a new car, making a mean
comment). People explain unintentional behaviors in ways that the traditional disposition-situation framework can
capture, but they explain intentional behaviors by using very different concepts (Buss, 1978; Heider, 1958). A recent
empirical theory of how people explain behavior was proposed and tested by Malle (1999, 2004), centering on the
postulate that intentional behaviors are typically explained by reasons: the mental states (typically beliefs and
desires) in light of which and on the grounds of which the agent decided to act (a postulate long discussed in the
philosophy of action). But people who explain intentional behavior have several choices to make, and the theory
identifies the psychological antecedents and consequences of these choices: (a) giving either reason explanations or
"causal history of reason (CHR) explanations" (which refer to background factors such as culture, personality, or
context, that is, causal factors that brought about the agent's reasons but were not themselves reasons to act); (b) giving
either desire reasons or belief reasons; and (c) linguistically marking a belief reason with its mental state verb (e.g.,
"She thought that..."; "He assumes that..."). Empirical studies have so far supported this theoretical framework (for a
review see Malle, 2011).
Within this framework, the actor-observer asymmetry was then reformulated as in fact consisting of three
asymmetries: that actors offer more reason explanations (relative to CHR explanations) than observers do; that actors
offer more belief reasons (relative to desire reasons) than observers do; and that actors use fewer belief reason
markers than observers do (Malle, 1999). Malle, Knobe, and Nelson (2007) tested these asymmetries across 9 studies
and found consistent support for them. In the same studies they also tested the classic person/disposition vs. situation
hypothesis and consistently found no support for it.
Thus, people do seem to explain their own actions differently from how they explain other people's actions. But
these differences do not lie in a predominance of using "dispositional" vs. "situational" causes. Only when people's
explanations are separated into theoretically meaningful distinctions (e.g., reasons vs. causal history of reason
explanations) do the differences emerge.
Implications
The choices of different explanations for intentional behavior (reasons, belief reasons, etc.) indicate particular
psychological functions. Reasons, for example, appear to reflect (among other things) psychological closeness.
People increase reason explanations (relative to CHR explanations) when they explain their own rather than another
person's behavior (Malle et al., 2007), when they portray another person in a positive light (Malle et al., 2007), and
when they explain behaviors of nonhuman agents for whom they have ownership and affection (e.g., a pet fish;
Kiesler, Lee, & Kramer, 2006). Conversely, people use fewer reasons and more CHR explanations when explaining
behaviors of collectives or aggregate groups (O'Laughlin & Malle, 2002). Actor-observer asymmetries can therefore
be seen as part of a broader continuum of psychological distance people have to various kinds of minds (their own,
others', groups', animals' etc.).
Related but distinct concepts
Actor-observer "bias"
Instead of speaking of a hypothesis of an actor-observer asymmetry, some textbooks and research articles speak of
an "actor-observer bias" (within the framework of dispositional vs. situation causes). The term "bias" is typically
used to imply that one of the explainers, either the actor or the observer, is biased or incorrect in their explanations. But which one, the actor or the observer, is supposed to be incorrect is not clear from the literature.
On the one hand, Ross's (1977) hypothesis of a fundamental attribution error suggests that observers are incorrect,
because they show a general tendency to overemphasize dispositional explanations and underemphasize situational
ones. On the other hand, Nisbett and Wilson (1977) argued that actors do not really know the true causes of their
actions and often merely invent plausible explanations. Jones and Nisbett (1971) themselves did not commit to
calling the hypothesized actor-observer asymmetry a bias or an error. Similarly, recent theoretical positions consider
asymmetries not a bias but rather the result of multiple cognitive and motivational differences that fundamentally
exist between actors and observers (Malle et al., 2007; Robins et al., 1996).
Self-serving bias
The actor-observer asymmetry is often confused with the hypothesis of a self-serving bias in attribution, that is, the claim
that people choose explanations in a strategic way so as to make themselves appear in a more positive light. The
important difference between the two hypotheses is that the assumed actor-observer asymmetry is expected to hold
for all events and behaviors (whether positive or negative) and to require a specific comparison between actor
explanations and observer explanations. The self-serving bias is often formulated as a complete reversal in actors'
and observers' explanation tendencies as a function of positive vs. negative events. In traditional attribution terms,
this means that for positive events (e.g., getting an A on an exam), actors will select explanations that refer to their
own dispositions (e.g., "I am smart") whereas observers will select explanations that refer to the actor's situation
(e.g., "The test was easy"); however, for negative events (e.g., receiving an F on the exam), actors will select
explanations that refer to the situation (e.g., "The test was impossibly hard") whereas observers will select
explanations that refer to the actor's dispositions (e.g., "She is not smart enough").
References
[1] Jones, 1976, p. 304
[2] Robins, Spranca, & Mendelsohn, 1996, p. 376
[3] Fiske & Taylor, 1991, p. 73
[4] Aronson, 2002, p. 168
Aronson, E. (2002). The social animal (8th ed.). New York, NY: Wiley.
Buss, A. R. (1978). Causes and reasons in attribution theory: A conceptual critique. Journal of Personality and Social Psychology, 36, 1311–1321.
Fiske, S. T., & Taylor, S. E. (1991). Social cognition (2nd ed.). New York: McGraw-Hill.
Heider, F. (1958). The psychology of interpersonal relations. New York: Wiley.
Jones, E. E., & Nisbett, R. E. (1971). The actor and the observer: Divergent perceptions of the causes of behavior. New York: General Learning Press.
Jones, E. E. (1976). How do people perceive the causes of behavior? American Scientist, 64, 300–305.
Kiesler, S., Lee, S. L., & Kramer, A. D. I. (2006). Relationship effects in psychological explanations of nonhuman behavior. Anthrozoos, 19, 335–352.
Malle, B. F. (1999). How people explain behavior: A new theoretical framework. Personality and Social Psychology Review, 3, 23–48.
Malle, B. F. (2004). How the mind explains behavior: Folk explanations, meaning, and social interaction. Cambridge, MA: MIT Press.
Malle, B. F. (2006). The actor-observer asymmetry in causal attribution: A (surprising) meta-analysis. Psychological Bulletin, 132, 895–919.
Malle, B. F. (2011). Time to give up the dogmas of attribution: An alternative theory of behavior explanation. In J. M. Olson and M. P. Zanna (Eds.), Advances in Experimental Social Psychology (Vol. 44, pp. 297–352). Burlington: Academic Press.
Malle, B. F., & Knobe, J. (1997). Which behaviors do people explain? A basic actor-observer asymmetry. Journal of Personality and Social Psychology, 72, 288–304.
Malle, B. F., Knobe, J., & Nelson, S. (2007). Actor-observer asymmetries in explanations of behavior: New answers to an old question. Journal of Personality and Social Psychology, 93, 491–514.
Nisbett, R. E., Caputo, C., Legant, P., & Marecek, J. (1973). Behavior as seen by the actor and as seen by the observer. Journal of Personality and Social Psychology, 27, 154–164.
Robins, R. W., Spranca, M. D., & Mendelsohn, G. A. (1996). The actor–observer effect revisited: Effects of individual differences and repeated social interactions on actor and observer attributions. Journal of Personality and Social Psychology, 71, 375–389.
Ross, L. D. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution process. Advances in Experimental Social Psychology, 10, 173–220.
Storms, M. D. (1973). Videotape and the attribution process: Reversing actors' and observers' points of view. Journal of Personality and Social Psychology, 27, 165–175.
Watson, D. (1982). The actor and the observer: How are their perceptions of causality divergent? Psychological Bulletin, 92, 682–700.
Defensive attribution hypothesis
The defensive attribution hypothesis (or defensive attribution bias) is a social psychological term from the
attributional approach referring to a set of beliefs held by an individual with the function of defending the individual
from concern that they will be the cause or victim of a mishap. Commonly, defensive attributions are made when
individuals witness or learn of a mishap happening to another person. In these situations, attributions of
responsibility to the victim or harm-doer for the mishap will depend upon the severity of the outcomes of the mishap
and the level of personal and situational similarity between the individual and victim. More responsibility will be
attributed to the harm-doer as the outcome becomes more severe, and as personal or situational similarity decreases.
Holding the victim or harm-doer more responsible allows the individual to believe that the mishap was controllable and, thus, that the individual is able to prevent suffering the same mishap in the future.[1] Decreasing attributions of responsibility as similarity increases allows the individual to proactively lay the groundwork to protect their own self-esteem: if they were to suffer the mishap themselves, they could see themselves as not blameworthy.[2] The use of defensive attributions is considered a bias because an individual will change their beliefs about a situation based on a motivation to protect their self-esteem rather than on the characteristics of the situation.[2]:112
Foundational studies: Walster (1966 & 1967) and Shaver (1970)
The basis of the defensive attribution bias was developed in studies conducted by Elaine Walster (Hatfield) and
Kelly Shaver. Walster (1966) hypothesized that as the consequences of an accident increase, so does the likelihood that an individual will assign blame to the harm-doer, and presented experimental evidence to support this hypothesis. Walster assumed that, when consequences are mild, it is easy to feel sympathy for a victim or harm-doer and not blame them, but that as the severity of the consequences increases, it becomes more unpleasant to believe that such a misfortune could happen to anyone, and attributing responsibility helps an individual manage this emotional reaction.[1]
Shaver (1970) recognized that Walster had identified an important concept but had not fully applied it in her own study. Walster (1966) stated that the defensive attribution bias would occur in response to concerns that the accident could befall the perceiver. Thus, similarity between the perceiver and the victim, in terms of situational or personality similarity, is required for the defensive attribution bias to be activated. Shaver (1970) manipulated the severity of consequences and the personal similarity of the target person described in his experiments to his research participants and found support for the defensive attribution bias: as personal similarity increased, attributions of responsibility decreased.
Later research: confusion and clarity
The foundational research of Walster (1966) and Shaver (1970) was not as clear-cut as presented above. In a follow-up study, Walster (1967) was unable to replicate her findings in two separate experiments. Shaver (1970) found a small negative relationship between the severity of the consequences and the responsibility attributed to the harm-doer.
Clarity came in 1981, when Burger[3] published a meta-analysis of 22 published studies on the defensive attribution hypothesis. First, he concluded that there is evidence for Walster's hypothesized positive relationship between severity and attributions of responsibility, though the relationship was only moderate to weak in strength. Second, he concluded that there is strong evidence for Shaver's hypothesized negative relationship between similarity and responsibility.
Applied uses
The defensive attribution hypothesis has found many applied uses, especially with regard to blame attributions in cases of sexual assault.
Researchers examining how individuals attribute blame to victims (women who are rape victims) and harm-doers (rapists) in sexual assault cases have consistently found that male research participants blamed rapists less than female research participants did, and that male research participants blamed rape victims more than female research participants did.[4] These findings support Shaver's similarity-responsibility hypothesis: male participants, who are personally similar to (male) rapists, blame rapists less than female participants, who are dissimilar to rapists, do. Conversely, female participants, who are personally similar to (female) rape victims, blame the victims less than male participants do.
References
[1] Walster, E. (1966). "Assignment of responsibility for an accident". Journal of Personality and Social Psychology 3 (1): 73–79.
[2] Shaver, K. G. (1970). "Defensive attribution: Effects of severity and relevance on the responsibility assigned for an accident". Journal of Personality and Social Psychology 14 (2): 101–113.
[3] Burger, J. M. (1981). "Motivational biases in the attribution of responsibility for an accident: A meta-analysis of the defensive-attribution hypothesis". Psychological Bulletin 90 (3): 496–512.
[4] Grubb, A.; Harrower, J. (2008). "Attribution of blame in cases of rape: An analysis of participant gender, type of rape and perceived similarity to the victim". Aggression and Violent Behavior 13: 396–405.
Dunning–Kruger effect
The Dunning–Kruger effect is a cognitive bias in which unskilled individuals suffer from illusory superiority, mistakenly rating their ability much higher than average. This bias is attributed to a metacognitive inability of the unskilled to recognize their mistakes.[1]
Actual competence may weaken self-confidence, as competent individuals may falsely assume that others have an equivalent understanding. David Dunning and Justin Kruger conclude, "the miscalibration of the incompetent stems from an error about the self, whereas the miscalibration of the highly competent stems from an error about others".[2]
Historical references
Although the Dunning–Kruger effect was put forward in 1999, Dunning and Kruger have quoted Charles Darwin ("Ignorance more frequently begets confidence than does knowledge")[3] and Bertrand Russell ("One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision")[4] as authors who have recognised the phenomenon. Geraint Fuller, commenting on the paper, notes that Shakespeare expressed a similar sentiment in As You Like It: "The fool doth think he is wise, but the wise man knows himself to be a fool" (V.i).[5]
Hypothesis
The hypothesized phenomenon was tested in a series of experiments performed by Justin Kruger and David Dunning, both of Cornell University.[2][6] Kruger and Dunning noted earlier studies suggesting that ignorance of standards of performance underlies a great deal of incompetence. This pattern was seen in studies of skills as diverse as reading comprehension, operating a motor vehicle, and playing chess or tennis.
Kruger and Dunning proposed that, for a given skill, incompetent people will:
1. tend to overestimate their own level of skill;
2. fail to recognize genuine skill in others;
3. fail to recognize the extremity of their inadequacy;
4. recognize and acknowledge their own previous lack of skill, if they are exposed to training for that skill.
Dunning has since drawn an analogy ("the anosognosia of everyday life")[1][7] to a condition in which a person who suffers a physical disability because of brain injury seems unaware of or denies the existence of the disability, even for dramatic impairments such as blindness or paralysis.
Supporting studies
Kruger and Dunning set out to test these hypotheses on Cornell undergraduates in various psychology courses. In a
series of studies, they examined the subjects' self-assessment of logical reasoning skills, grammatical skills, and
humor. After being shown their test scores, the subjects were again asked to estimate their own rank, whereupon the
competent group accurately estimated their rank, while the incompetent group still overestimated their own rank. As
Dunning and Kruger noted,
Across four studies, the authors found that participants scoring in the bottom quartile on tests of humor,
grammar, and logic grossly overestimated their test performance and ability. Although test scores put them in
the 12th percentile, they estimated themselves to be in the 62nd.
Meanwhile, people with true ability tended to underestimate their relative competence. Roughly, participants who
found tasks to be relatively easy erroneously assumed, to some extent, that the tasks must also be easy for others.
A follow-up study, reported in the same paper, suggests that grossly incompetent students improved their ability to estimate their rank after minimal tutoring in the skills they had previously lacked, regardless of the negligible improvement in actual skills.
In 2003, Dunning and Joyce Ehrlinger, also of Cornell University, published a study that detailed a shift in people's views of themselves when influenced by external cues. Participants in the study (Cornell University undergraduates) were given tests of their knowledge of geography, some intended to positively affect their self-views and some intended to affect them negatively. They were then asked to rate their performance, and those given the positive tests reported significantly better performance than those given the negative tests.[8]
Daniel Ames and Lara Kammrath extended this work to sensitivity to others, and the subjects' perception of how sensitive they were.[9] Other research has suggested that the effect is not so obvious and may be due to noise and bias levels.[10]
Dunning, Kruger, and coauthors' 2008 paper on this subject comes to qualitatively similar conclusions to their original work, after making some attempt to test alternative explanations. They conclude that the root cause is that, in contrast to high performers, "poor performers do not learn from feedback suggesting a need to improve".[4]
Studies on the Dunning–Kruger effect tend to focus on American test subjects. A study of some East Asian subjects suggested that something like the opposite of the Dunning–Kruger effect may operate on self-assessment and motivation to improve.[11]
Awards
Dunning and Kruger were awarded the 2000 Ig Nobel Prize in Psychology for their paper, "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments".[12]
References
[1] Morris, Errol (2010-06-20). "The Anosognosic's Dilemma: Something's Wrong but You'll Never Know What It Is (Part 1)" (http://opinionator.blogs.nytimes.com/2010/06/20/the-anosognosics-dilemma-1/). New York Times. Retrieved 2011-03-07.
[2] Kruger, Justin; David Dunning (1999). "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments". Journal of Personality and Social Psychology 77 (6): 1121–1134. doi:10.1037/0022-3514.77.6.1121. PMID 10626367. CiteSeerX: 10.1.1.64.2655 (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.2655).
[3] Charles Darwin (1871). "The Descent of Man" (http://en.wikiquote.org/wiki/Charles_Darwin#The_Descent_of_Man_.281871.29). Introduction, page 4. Retrieved 2008-07-18.
[4] Ehrlinger, Joyce; Johnson, Kerri; Banner, Matthew; Dunning, David; Kruger, Justin (2008). "Why the unskilled are unaware: Further explorations of (absent) self-insight among the incompetent" (PDF). Organizational Behavior and Human Decision Processes 105 (1): 98–121. doi:10.1016/j.obhdp.2007.05.002. PMC 2702783. PMID 19568317.
[5] Fuller, Geraint (2011). "Ignorant of ignorance?" (http://pn.bmj.com/content/11/6/365.short). Practical Neurology 11 (6): 365. doi:10.1136/practneurol-2011-000117. PMID 22100949.
[6] Dunning, David; Kerri Johnson; Joyce Ehrlinger; Justin Kruger (2003). "Why people fail to recognize their own incompetence" (http://psy.mq.edu.au/vision/~peterw/corella/237/incompetence.pdf) (PDF). Current Directions in Psychological Science 12 (3): 83–87. doi:10.1111/1467-8721.01235. Retrieved 29 December 2012.
[7] Dunning, David, Self-Insight: Roadblocks and Detours on the Path to Knowing Thyself (Essays in Social Psychology), Psychology Press: 2005, pp. 14–15. ISBN 1-84169-074-0.
[8] Joyce Ehrlinger; David Dunning (January 2003). "How Chronic Self-Views Influence (and Potentially Mislead) Estimates of Performance". Journal of Personality and Social Psychology (American Psychological Association) 84 (1): 5–17. doi:10.1037/0022-3514.84.1.5. PMID 12518967.
[9] Daniel R. Ames; Lara K. Kammrath (September 2004). "Mind-Reading and Metacognition: Narcissism, not Actual Competence, Predicts Self-Estimated Ability" (http://www.columbia.edu/~da358/.../ames_kammrath_mindreading.pdf) (PDF). Journal of Nonverbal Behavior 28 (3): 187–209. doi:10.1023/B:JONB.0000039649.20015.0e. Retrieved 29 December 2012.
[10] Burson, K.; Larrick, R.; Klayman, J. (2006). "Skilled or unskilled, but still unaware of it: how perceptions of difficulty drive miscalibration in relative comparisons". Journal of Personality and Social Psychology 90 (1): 60–77. doi:10.1037/0022-3514.90.1.60. PMID 16448310. hdl:2027.42/39168.
[11] DeAngelis, Tori (February 2003). "Why we overestimate our competence" (http://www.apa.org/monitor/feb03/overestimate.aspx). Monitor on Psychology (American Psychological Association) 34 (2): 60. Retrieved 2011-03-07.
[12] "Ig Nobel Past Winners" (http://improbable.com/ig/ig-pastwinners.html#ig2000). Retrieved 2011-03-07.
Egocentric bias
Egocentric bias is the inclination to overstate changes between the present and the past in order to make ourselves look better than we actually are.[1]
Besides claiming credit for positive outcomes, which might simply be self-serving bias, people exhibiting egocentric bias also cite themselves as overly responsible for negative outcomes of group behavior (though this last attribute would seem to be lacking in megalomania).
This may be because people's own actions are more immediately accessible to them than others' actions are. This is an example of what is called the availability heuristic.
Egocentric bias in estimates of consensus could be interpreted as supporting and/or justifying one's feeling that one's own behavioral choices are appropriate, normal, or correct.[2]
Michael Ross and Fiore Sicoly first identified this cognitive bias.
One study found that egocentric bias influences perceived fairness. Subjects felt that overpayment to themselves was more fair than overpayment to others; by contrast, they felt that underpayment to themselves was less fair than underpayment to others. Greenberg's studies showed that this egocentrism was eliminated when subjects were put in a self-aware state, which in his study was induced by placing a mirror in front of the subjects. When a person is not self-aware, they perceive that something can be fair to them but not necessarily fair to others, and so fairness is biased and in the eye of the beholder; when a person is self-aware, there is a uniform standard of fairness and no bias. When made self-aware, subjects rated overpayment and underpayment, both to themselves and to others, as equally unfair. It is believed that these results were obtained because self-awareness elevated subjects' concerns about perceived fairness in payment, thereby overriding egocentric tendencies.[3]
Egocentric bias has influenced ethical judgments to the point where people believe that self-interested outcomes are not only preferable but also the morally sound way to proceed.
Example
The best-known example of egocentric bias is a study by Ross, Greene and House in 1977. Students were asked to walk around campus wearing a sandwich board bearing the word "repent". Those who agreed to do so (50%) estimated that most of their peers would also agree (average estimate 63.5%). Vice versa, those who refused thought that most people would make the same decision they did.
[4]
Another study of egocentric bias took place in Japan. Subjects were asked to write down fair or unfair behaviors that they themselves or others had performed. When writing about fair behavior, they tended to begin with "I" rather than "others"; likewise, they began descriptions of unfair behavior with "others" rather than "I".
[5]
False-consensus effect
Considered to be a facet of egocentric bias, the false-consensus effect contributes to people believing that their
thoughts, actions, and opinions are much more common than they are in reality. They think that they are more
normal and typical than others would consider them.
[2]
Results from a study comparing the perceptual distortion and motivational explanations of egocentric bias in
estimates of consensus showed that an egocentric bias in estimates of consensus was more likely a result of
perceptual distortion than of motivational strategies.
[2]
References
Epley, N., & Caruso, E. M. (2004). Egocentric ethics. Social Justice Research, 17, 171-185. OCLC 363254336
Ross, M., & Sicoly, F. (1979). Egocentric biases in availability and attribution. Journal of Personality and Social Psychology, 37, 322-336. OCLC 4646238323
Footnotes
[1] Schacter, Daniel L. (2011). Psychology (2nd ed.). New York, NY: Worth Publishers. ISBN 1429237198.
[2] Mullen, Brian (1983-10-01). "Egocentric bias in estimates of consensus". Journal of Social Psychology 121 (1): 31-38. doi:10.1080/00224545.1983.9924463.
[3] Greenberg, Jerald (1983). "Overcoming Egocentric Bias in Perceived Fairness Through Self-Awareness". Social Psychology Quarterly 46 (2): 152-156. OCLC 483814059.
[4] Wallin, A. (2011). "Is egocentric bias evidence for simulation theory?". Synthese 178 (3): 503-514.
[5] Tanaka, K. (1993). "Egocentric bias in perceived fairness: is it observed in Japan?". Social Justice Research 6 (3): 273-285. doi:10.1007/BF01054462.
Extrinsic incentives bias
The extrinsic incentives bias is an attributional bias according to which people attribute relatively more weight to "extrinsic incentives" (such as monetary reward) than to "intrinsic incentives" (such as learning a new skill) when judging others' motivations rather than their own. It is related to but distinct from the self-serving bias, and it is a counter-example to the fundamental attribution error: according to the extrinsic incentives bias, others are presumed to have situational motivations while oneself is seen as having dispositional motivations, the opposite of what the fundamental attribution error would predict. The term was first proposed by Chip Heath, citing earlier research by others in management science.
[1]
In the simplest experiment Heath reported, MBA students were asked to rank the expected job motivations of
Citibank customer service representatives. Their average ratings were as follows:
1. Amount of pay
2. Having job security
3. Quality of fringe benefits
4. Amount of praise from your supervisor
5. Doing something that makes you feel good about yourself
6. Developing skills and abilities
7. Accomplishing something worthwhile
8. Learning new things
Actual customer service representatives rank ordered their own motivations as follows:
1. Developing skills and abilities
2. Accomplishing something worthwhile
3. Learning new things
4. Quality of fringe benefits
5. Having job security
6. Doing something that makes you feel good about yourself
7. Amount of pay
8. Amount of praise from your supervisor
The order of the predicted and actual reported motivations was nearly reversed; in particular, pay was predicted to rank first for others but was ranked near last by respondents for themselves. Similar effects were observed when MBA students rated managers and their classmates.
[1]
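The near-reversal of the two rankings can be quantified with a rank correlation. A minimal sketch, using the two rankings reported above (the Spearman computation itself is an illustration, not part of Heath's reported analysis):

```python
# Ranks (1 = most important) from the two lists above.
predicted = {"pay": 1, "job security": 2, "fringe benefits": 3, "praise": 4,
             "feeling good": 5, "developing skills": 6, "worthwhile work": 7,
             "learning": 8}
actual = {"developing skills": 1, "worthwhile work": 2, "learning": 3,
          "fringe benefits": 4, "job security": 5, "feeling good": 6,
          "pay": 7, "praise": 8}

n = len(predicted)
# Spearman's rho for untied ranks: 1 - 6 * sum(d^2) / (n * (n^2 - 1))
d2 = sum((predicted[k] - actual[k]) ** 2 for k in predicted)
rho = 1 - 6 * d2 / (n * (n ** 2 - 1))
print(round(rho, 2))  # -0.64
```

A rho of about -0.64 confirms that the two orderings are strongly inverted rather than merely uncorrelated.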
Debiasing
Heath suggests inferring others' motivations in the same way one infers one's own.
[1]
References
[1] Heath, Chip (1999). "On the Social Psychology of Agency Relationships: Lay Theories of Motivation Overemphasize Extrinsic Incentives" (http://faculty-gsb.stanford.edu/heath/documents/social psych of agency.pdf). Organizational Behavior and Human Decision Processes 78 (1): 25-62.
Halo effect
The halo effect or halo error is a cognitive bias in which our judgments of a person's character are influenced by our overall impression of him or her. It can be found in a range of situations, from the courtroom to the classroom, and in everyday interactions. The halo effect was given its name by psychologist Edward Thorndike, and since then several researchers have studied it in relation to attractiveness and its bearing on the judicial and educational systems.
History
Edward Thorndike, known for his contributions to educational psychology, coined the term "halo effect" and was the first to support it with empirical research.[1] He gave the phenomenon its name in his 1920 article "A Constant Error in Psychological Ratings". He had noted in a previous study, made in 1915, that estimates of traits in the same person were very highly and evenly correlated. In "A Constant Error", Thorndike set out to replicate the study in hopes of pinning down the bias that he thought was present in these ratings.
Supporting evidence
Thorndike's first study of the halo effect was published in 1920. The study included two commanding officers who
were asked to evaluate their soldiers in terms of physical qualities (neatness, voice, physique, bearing, and energy),
intellect, leadership skills, and personal qualities (including dependability, loyalty, responsibility, selflessness, and
cooperation). Thorndike's goal was to see how the ratings of one characteristic affected other characteristics.
Thorndike's experiment showed that the correlations in the commanding officers' responses were too high. In his review he stated, "The correlations were too high and too even. For example, for the three raters next studied the average correlation for physique with intelligence is .31; for physique with leadership, .39; and for physique with character, .28."[1] The rating of one particular quality of an officer tended to set a trend in the rest of the ratings: if a soldier conveyed a particular "negative" attribute to the commanding officer, it carried over into the rest of that soldier's results.
The correlation in the halo effect experiment was concluded to be a halo error. The halo error showed that the
officers relied mainly on general perception of certain characteristics that determined the results of their answers.
Role of attractiveness
The halo effect is not limited to individual traits or an individual's overall appearance: a person's attractiveness has also been found to produce a halo effect.
On personality and happiness
Dion and Berscheid (1972) conducted a study on the relationship between attractiveness and the halo effect.[2] Sixty students from the University of Minnesota took part in the experiment, half male and half female. Each subject was given three different photos to examine: one of an attractive individual, one of an individual of average attractiveness, and one of an unattractive individual.
The participants judged the photos' subjects along 27 different personality traits (including altruism, conventionality,
self-assertiveness, stability, emotionality, trustworthiness, extraversion, kindness, and sexual promiscuity).
Participants were then asked to predict the overall happiness the photos' subjects would feel for the rest of their lives,
including marital happiness (least likely to get divorced), parental happiness (most likely to be a good parent), social
and professional happiness (most likely to experience life fulfillment), and overall happiness. Finally, participants
were asked if the subjects would hold a job of high status, medium status, or low status.
Results showed that participants overwhelmingly believed the more attractive subjects to have more socially
desirable personality traits than either the averagely attractive or unattractive subjects. Participants also believed that
the attractive individuals would lead happier lives in general, have happier marriages, be better parents, and have
more career success than the unattractive or averagely attractive individuals. Also, results showed that attractive
people were believed to be more likely to hold secure, prestigious jobs compared to unattractive individuals.
[3]
Academics and intelligence
Landy and Sigall's 1974 study demonstrated the halo effect in judgments of intelligence and competence on academic tasks. Sixty male undergraduate students rated the quality of written essays, which included both well-written
and poorly written samples. One third of the participants were presented with a photo of an attractive female as an
author, another third were presented with a photo of an unattractive female as the author, and the last third were not
shown a photo.
Participants gave significantly better writing evaluations for the more attractive author. On a scale of 1-9, with 1
being the poorest, the well-written essay by the attractive author received an average of 6.7 while the unattractive
author received a 5.9 (with a 6.6 as a control). The gap was larger on the poor essay: the attractive author received an
average of 5.2, the control a 4.7, and the unattractive a 2.7. These results suggest that people are generally more
willing to give physically attractive people the benefit of the doubt when performance is below standard, whereas
unattractive people are less likely to receive this favored treatment.
[4]
In Moore, Filippou, and Perrett's 2011 study, the researchers sought to determine whether residual cues to intelligence and personality existed in male and female faces. The researchers attempted to control for the attractiveness halo effect, but failed: faces manipulated to look high in perceived intelligence were also rated as more attractive. Faces high in perceived intelligence were likewise rated highly on perceived friendliness and sense of humor.
[5]
Effects on jurors
Multiple studies have found the halo effect operating within juries. Research shows that attractive individuals receive lighter sentences and are less likely to be found guilty than unattractive individuals. Efran (1974) found that
subjects were more generous when giving out sentences to attractive individuals than to unattractive individuals,
even when exactly the same crime was committed. One reason why this occurs is because people with a high level of
attractiveness are seen as more likely to have a brighter future in society due to the socially desirable traits they are
believed to possess.
[6]
Monahan (1941) studied social workers who were accustomed to interacting with people from many different types of backgrounds. The study found that the majority of these social workers found it very difficult to believe that beautiful people were guilty of a crime.
[7]
The relation of the crime itself to attractiveness is also subject to the halo effect.
[8]
A study presented two
hypothetical crimes: a burglary and a swindle. The burglary involved a woman illegally obtaining a key and stealing
$2,200; the swindle involved a woman manipulating a man to invest $2,000 in a fabricated business. The results
showed that when the offense was not related to attractiveness (in this case, the burglary), the unattractive defendant
was punished more severely than the attractive one. However, when the offense was related to attractiveness (the
swindle), the attractive defendant was punished more severely than the unattractive one. Participants may have
believed the attractive person more likely to manipulate someone using their looks.
Halo effect in education
Abikoff found that the halo effect is also present in the classroom. In this study, both regular and special education
elementary school teachers watched videotapes of what they believed to be children in regular 4th-grade classrooms.
In reality, the children were actors, depicting behaviors present in attention deficit hyperactivity disorder (ADHD),
oppositional defiant disorder (ODD), or standard behavior. The teachers were asked to rate the frequency of
hyperactive behaviors observed in the children. Teachers rated hyperactive behaviors accurately for children with ADHD; however, hyperactivity and other behaviors associated with ADHD were also rated much higher for the children with ODD-like behaviors, showing a halo effect for children with oppositional defiant disorder.
[9]
Foster and Ysseldyke (1976) also found the halo effect present in teachers' evaluations of children. Regular and
special education elementary school teachers watched videos of a normal child whom they were told was either
emotionally disturbed, possessing a learning disorder, mentally retarded, or "normal". The teachers then completed
referral forms based on the child's behavior. The results showed that teachers held negative expectancies toward
emotionally disturbed children, maintaining these expectancies even when presented with normal behavior. In
addition, the mentally retarded label showed a greater degree of negative bias than the emotionally disturbed or learning disabled labels.
[10]
Criticisms and limitations
Some researchers allege that the halo effect is not as pervasive as once believed. Kaplan's 1978 study yielded much the same results as other studies focusing on the halo effect: attractive individuals were rated higher in qualities such as creativity, intelligence, and sensitivity than unattractive individuals. In addition to these results, Kaplan found that women were influenced by the attractiveness halo effect only when presented with members of the opposite sex; when presented with an attractive member of the same sex, women actually tended to rate the individual lower on socially desirable qualities.
[11]
Criticisms have also pointed out that jealousy of an attractive individual could be a major factor in evaluation of that
person. A study by Dermer and Thiel has shown this to be more prevalent among females than males, with females
describing physically attractive women as having socially undesirable traits.
[12]
Halo effect and NGOs
The term "halo effect" has been applied to human rights organizations that have used their status to move away from
their stated goals. Political scientist Gerald Steinberg has claimed that non-governmental organizations (NGOs) take
advantage of the "halo effect" and are "given the status of impartial moral watchdogs" by governments and the
media.
[13][14]
Devil effect
The devil effect, also known as the reverse halo effect, occurs when people allow an undesirable trait to influence their evaluation of other traits, as in Nisbett and Wilson's study on likeable versus unlikeable lecturers.
[15]
The devil
effect can work outside the scope of personality traits and is expressed by both children and adults.
[16]
The Guardian
wrote of the devil effect in relation to Hugo Chavez: "Some leaders can become so demonised that it's impossible to
assess their achievements and failures in a balanced way."
[17]
References
[1] Thorndike, E. L. (1920). "A constant error in psychological ratings". Journal of Applied Psychology 4 (1): 25-29. doi:10.1037/h0071663.
[2] Dion, K.; Berscheid, E.; Walster, E. (December 1972). "What is beautiful is good". Journal of Personality and Social Psychology 24 (3): 285-290. doi:10.1037/h0033731. PMID 4655540.
[3] Dion, Karen; Berscheid, Ellen; Walster, Elaine. "What is Beautiful is Good". Journal of Personality and Social Psychology 24 (3): 285-290.
[4] Landy, D.; Sigall, H. "Task Evaluation as a Function of the Performers' Physical Attractiveness". Journal of Personality and Social Psychology 29 (3): 299-304.
[5] Moore, F. R.; Filippou, D.; Perrett, D. (2011). "Intelligence and Attractiveness in the Face: Beyond the Attractiveness Halo Effect". Journal of Evolutionary Psychology 9 (3): 205-217.
[6] Efran, M. G. "The Effect of Physical Appearance on the Judgment of Guilt, Interpersonal Attraction, and Severity of Recommended Punishment in a Simulated Jury Task". Journal of Research in Personality 8: 45-54.
[7] Monahan, F. (1941). Women in Crime. New York: Washburn.
[8] Ostrove, Nancy; Sigall, Harold (1975). "Beautiful but Dangerous: Effects of Offender Attractiveness and Nature of the Crime on Juridic Judgment" (http://dtrebouxclasses.pbworks.com/w/file/fetch/50029614/sigall and ostrove.pdf). Journal of Personality and Social Psychology 31 (3): 410-414.
[9] Abikoff, H.; Courtney, M.; Pelham, W. E.; Koplewicz, H. S. "Teachers' Ratings of Disruptive Behaviors: The Influence of Halo Effects". Journal of Abnormal Child Psychology 21 (5): 519-533.
[10] Foster, Glen; Ysseldyke, James (1976). "Expectancy and Halo Effects as a Result of Artificially Induced Teacher Bias". Contemporary Educational Psychology 1 (1): 37-45.
[11] Kaplan, Robert M. (1978). "Is Beauty Talent? Sex Interaction in the Attractiveness Halo Effect". Sex Roles 4 (2): 195-204.
[12] Dermer, M.; Thiel, D. L. (1975). "When beauty may fail" (https://pantherfile.uwm.edu/dermer/public/vita/dermer_beauty.pdf). Journal of Personality and Social Psychology 31 (6): 1168-1176.
[13] Jeffray, Nathan (24 June 2010). "Interview: Gerald Steinberg" (http://www.thejc.com/news/uk-news/33415/interview-gerald-steinberg). The Jewish Chronicle.
[14] Balanson, Naftali (8 October 2008). "The 'halo effect' shields NGOs from media scrutiny" (http://www.jpost.com/Opinion/Op-EdContributors/Article.aspx?id=110648). The Jerusalem Post.
[15] Nisbett, Richard E.; Wilson, Timothy D. (1977). "The halo effect: Evidence for unconscious alteration of judgments". Journal of Personality and Social Psychology 35 (4): 250-256. doi:10.1037/0022-3514.35.4.250. ISSN 1939-1315.
[16] Koenig, Melissa A.; Jaswal, Vikram K. (1 September 2011). "Characterizing Children's Expectations About Expertise and Incompetence: Halo or Pitchfork Effects?". Child Development 82 (5): 1634-1647. doi:10.1111/j.1467-8624.2011.01618.x.
[17] Glennie, Jonathan (3 May 2011). "Hugo Chávez's reverse-halo effect" (http://www.guardian.co.uk/global-development/poverty-matters/2011/may/03/hugo-chavez-reverso-halo-effect). The Guardian.
Further reading
Sutherland, Stuart (2007). Irrationality (reprint ed.). London: Pinter & Martin. ISBN 978-1-905177-07-3.
Dean, Jeremy (2007). "The Halo Effect: When Your Own Mind is a Mystery" (http://www.spring.org.uk/2007/10/halo-effect-when-your-own-mind-is.php). PsyBlog.
Rosenzweig, Phil (2007). The Halo Effect: ... and the Eight Other Business Delusions That Deceive Managers (1st Free Press trade pbk. ed.). New York, NY: Free Press. ISBN 978-0-7432-9125-5.
Steinberg, Gerald M. (30 December 2009). "Human Rights NGOs Need a Monitor" (http://forward.com/articles/122209/human-rights-ngos-need-a-monitor/). The Jewish Daily Forward.
Chandra, Ramesh (2004). Social Development in India. Delhi, India: Isha. ISBN 81-8205-024-3.
Illusion of asymmetric insight
The illusion of asymmetric insight is a cognitive bias whereby people perceive their knowledge of others to surpass
other people's knowledge of themselves. This bias seems to be due to the conviction that observed behaviors are
more revealing of others than self, while private thoughts and feelings are more revealing of the self.
[1]
A study found that people seem to believe that they know themselves better than their peers know themselves, and that their social group knows and understands other social groups better than other social groups know them.[1]
References
[1] Pronin, E.; Kruger, J.; Savitsky, K.; Ross, L. (October 2001). "You don't know me, but I know you: the illusion of asymmetric insight" (http://content.apa.org/journals/psp/81/4/639). Journal of Personality and Social Psychology 81 (4): 639-656. doi:10.1037/0022-3514.81.4.639. PMID 11642351.
External links
http://youarenotsosmart.com/2011/08/21/the-illusion-of-asymmetric-insight/
Illusion of external agency
The illusion of external agency is a set of attributional biases consisting of illusions of influence, insight and
benevolence, proposed by Daniel Gilbert, Timothy D. Wilson, Ryan Brown and Elizabeth Pinel.
[1][2]
In a series of experiments, experimenters induced participants to rationalize a choice or experience (called the
"optimizing" condition) after which they were more likely to make certain attributions of an external agent, as
follows:
illusion of influence. Subjects who had been induced to rationalize liking for a teammate were more likely to attribute this liking to the influence of "subliminal messages" by which the experimenters claimed to have tried to steer them toward the best outcome. In this experiment the experimenters were presumed to have "insight" into the problem and "benevolence" towards participants.
illusion of insight. Subjects listened to a song chosen for them by a "SmartRadio" that they were told was
benevolent and effective. Some subjects were informed of and rated the song before listening, and these subjects
rated the song more highly and were more likely to continue using it, attributing their liking to the device's
"insight".
illusion of benevolence. Subjects were given a gift; some rated it before receiving it and some rated it afterwards.
Those in the afterwards condition rated it more highly (endowment effect). All participants were told that they
were given the gift by another (unseen) participant as the best gift for them based on a questionnaire; those in the
afterwards condition were more likely to believe that their liking was due to the benevolence of the gift-giver.
Gilbert et al. argued that "participants confused their own optimization of subjective reality with an external agent's optimizing of objective reality. Simply speaking, participants mistook 'the magic in here' for 'the magic out there.'"
[1]
References
[1] Gilbert, Daniel T.; Brown, Ryan P.; Pinel, Elizabeth C.; Wilson, Timothy D. (2000). "The Illusion of External Agency" (http://wjh-www.harvard.edu/~dtg/Gilbert et al (External Agency).PDF). Journal of Personality and Social Psychology 79 (5): 690-700.
[2] Gilbert, Daniel (2005). "The vagaries of religious experience" (http://www.edge.org/3rd_culture/gilbert05/gilbert05_index.html). Edge.org.
Illusion of transparency
The illusion of transparency is a tendency for people to overestimate the degree to which their personal mental
state is known by others. Another manifestation of the illusion of transparency (sometimes called the observer's
illusion of transparency) is a tendency for people to overestimate how well they understand others' personal mental
states. This cognitive bias is similar to the illusion of asymmetric insight.
Experimental support
Psychologist Elizabeth Newton created a simple test that she regarded as an illustration of the phenomenon. She would tap out a well-known song, such as "Happy Birthday" or the national anthem, with her finger and have the test subject guess the song. Tappers usually estimated that the song would be guessed correctly in about 50 percent of the tests, but only 3 percent of listeners picked the correct song. The tapper can hear every note and the lyrics in his or her head; however, the observer, with no access to what the tapper is thinking, only hears a rhythmic tapping.
[1]
Public speaking and stage fright
The illusion of transparency is particularly prominent in public speakers. It may be increased by the spotlight effect.
The speaker has an exaggerated sense of how obvious his or her nervousness about a speech is to the audience.
Studies have shown that when the audience is surveyed, the speaker's emotions were not nearly so evident to the
crowd as the speaker perceived them to be.
[2]
Initial anxiety in a public speaking situation can cause stress that,
because of the illusion of transparency, the speaker may feel is evident to the listeners. This mistaken perception can
cause the speaker to compensate, which he or she then feels is even more obvious to the crowd, and the stress
increases in a feedback loop. Awareness of the limits of others' perceptions of one's mental state can help break the
cycle and reduce speech anxiety.
[3]
Studies on public speaking and the illusion of transparency
Kenneth Savitsky and Thomas Gilovich performed two experiments on public speaking anxiety in relation to the
illusion of transparency. The first focused on the speaker's perception of his or her anxiety levels versus an observer's
perception of the speaker's anxiety levels. The results were as expected: the speaker judged himself or herself more
harshly than the observer did.
[3]
In their second study, Savitsky and Gilovich focused on the connection between the illusion of transparency and the
exacerbation of speech anxiety. Participants in this study were divided into three groups: control, reassured, and
informed. All were given a topic and had five minutes to prepare a speech in front of a crowd, after which they rated
themselves on anxiety, speech quality, and appearance, and observers also rated them on anxiety levels and speech
quality. The control group were given no other advance instructions. The reassured and informed groups were both
told in advance that it is normal to feel anxiety about giving a speech. The reassured group were told that research
indicates they should not worry about this. The informed group were told about the illusion of transparency and that
research indicates that emotions are usually not as evident to others as people believe they are. The informed group
rated themselves higher in every respect and were also rated higher by the observers. The informed group,
understanding that the audience would not be able to perceive their nervousness, had less stress and their speech
tended to be better.
[3]
The bystander effect
Thomas Gilovich, Victoria Husted Medvec, and Kenneth Savitsky believe that this phenomenon is partially the
reason for the bystander effect. They found that concern or alarm were not as apparent to observers as the individual
experiencing them thought, and that people believed they would be able to read others' expressions better than they
actually could.
[4]
When confronted with a potential emergency, people typically play it cool, adopt a look of nonchalance, and
monitor the reactions of others to determine if a crisis is really at hand. No one wants to overreact, after all, if
it might not be a true emergency. However, because each individual holds back, looks nonchalant, and
monitors the reactions of others, sometimes everyone concludes (perhaps erroneously) that the situation is not
an emergency and hence does not require intervention.
Thomas Gilovich, Victoria Husted Medvec, and Kenneth Savitsky, Journal of Personality and Social Psychology, Vol. 75, No. 2
Further reading
Kenneth Savitsky and Thomas Gilovich published findings
[5]
Thomas Gilovich, Victoria Husted Medvec, and Kenneth Savitsky published findings
[6]
References
Footnotes
[1] McRaney, David (14 July 2010). "The Illusion of Transparency" (http://youarenotsosmart.com/2010/07/14/the-illusion-of-transparency/). Retrieved 20 July 2011.
[2] "The illusion of transparency and the alleviation of speech anxiety" (http://www.psych.cornell.edu/sec/pubPeople/tdg1/Savitsky&Gilovich.03.pdf). Journal of Experimental Social Psychology 39. 25 March 2003. Retrieved 20 July 2011.
[3] "The illusion of transparency and the alleviation of speech anxiety" (http://www.psych.cornell.edu/sec/pubPeople/tdg1/Savitsky&Gilovich.03.pdf). Journal of Experimental Social Psychology 39. 25 March 2003. Retrieved 20 July 2011.
[4] "The Illusion of Transparency: Biased Assessments of Others' Ability to Read One's Emotional States". Journal of Personality and Social Psychology 75 (2): 332-346. 1998.
[5] http://www.psych.cornell.edu/sec/pubPeople/tdg1/Savitsky&Gilovich.03.pdf
[6] http://www.psych.cornell.edu/sec/pubPeople/tdg1/Gilo.Sav.Medvec.pdf
Bibliography
The three illusions on interpersonal perception: Effects of relationship intimacy on two types of illusion of transparency and the illusion of asymmetric insight (PDF) (http://psywww.human.metro-u.ac.jp/Personal/miamia/Take_Numa05spsp.pdf)
K. Savitsky, T. Gilovich, 2003. "The illusion of transparency and the alleviation of speech anxiety." Journal of Experimental Social Psychology. http://www.psych.cornell.edu/sec/pubPeople/tdg1/Savitsky&Gilovich.03.pdf
T. Gilovich, V. Medvec, K. Savitsky, 1998. "The Illusion of Transparency: Biased Assessments of Others' Ability to Read One's Emotional States." Journal of Personality and Social Psychology. http://www.psych.cornell.edu/sec/pubPeople/tdg1/Gilo.Sav.Medvec.pdf
D. McRaney, 2010. "The Illusion of Transparency." http://youarenotsosmart.com/2010/07/14/the-illusion-of-transparency/
O. Burkeman, 2011. "The illusion of transparency: Why your feelings aren't really written all over your face." http://www.oliverburkeman.com/2011/01/the-illusion-of-transparency-why-your-feelings-arent-really-written-all-over-your-face/
Illusory superiority
Illusory superiority is a cognitive bias that causes people to overestimate their positive qualities and abilities and to
underestimate their negative qualities, relative to others. This is evident in a variety of areas including intelligence,
performance on tasks or tests, and the possession of desirable characteristics or personality traits. It is one of many
positive illusions relating to the self, and is a phenomenon studied in social psychology.
Illusory superiority is often referred to as the above-average effect. Other terms include superiority bias, leniency error, sense of relative superiority, the primus inter pares effect,[1] and the Lake Wobegon effect (named after Garrison Keillor's fictional town where "all the children are above average"). The phrase "illusory superiority" was first used by Van Yperen and Buunk in 1991.[1]
Effects in different situations
Illusory superiority has been found in individuals' comparisons of themselves with others in a wide variety of
different aspects of life, including performance in academic circumstances (such as class performance, exams and
overall intelligence), in working environments (for example in job performance), and in social settings (for example
in estimating one's popularity, or the extent to which one possesses desirable personality traits, such as honesty or
confidence), as well as everyday abilities requiring particular skill.
[1]
For illusory superiority to be demonstrated by social comparison, two logical hurdles have to be overcome. One is the ambiguity of the word "average". It is logically possible for nearly all of a set to be above the mean if the distribution of abilities is highly skewed. For example, the mean number of human legs is slightly lower than two, because a small number of people have only one leg or none. Hence experiments usually compare subjects to the median of the peer group, since by definition it is impossible for a majority to exceed the median.
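The legs example can be made concrete: in a skewed distribution, nearly everyone can sit above the mean, but never above the median. A small sketch (the population figures are invented for the illustration):

```python
# Invented population: 9,998 people with two legs, one with one leg, one with none.
legs = [2] * 9998 + [1] + [0]

mean = sum(legs) / len(legs)          # slightly below 2
median = sorted(legs)[len(legs) // 2] # exactly 2

above_mean = sum(1 for x in legs if x > mean)
above_median = sum(1 for x in legs if x > median)

print(mean)          # 1.9997
print(above_mean)    # 9998 people are strictly "above average"
print(above_median)  # 0 -- a majority can never exceed the median
```

This is why comparing self-ratings to the median, rather than the mean, closes the loophole: a claim that most people rate themselves above the median is necessarily inconsistent.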
A further problem in inferring inconsistency is that subjects might interpret the question in different ways, so it is
logically possible that a majority of them are, for example, more generous than the rest of the group, each on their
own understanding of generosity.[2] This interpretation is confirmed by experiments which varied the amount of
interpretive freedom subjects were given. As subjects evaluate themselves on a specific, well-defined attribute,
illusory superiority remains.[3]
Cognitive ability
IQ
One of the main effects of illusory superiority in IQ is the Downing effect. This describes the tendency of people
with a below average IQ to overestimate their IQ, and of people with an above average IQ to underestimate their IQ.
The propensity to predictably misjudge one's own IQ was first noted by C. L. Downing, who conducted the first
cross-cultural studies on perceived 'intelligence'. His studies also showed that the ability to accurately estimate
others' IQ was proportional to one's own IQ: the lower an individual's IQ, the less capable they are of appreciating
and accurately appraising others' IQ. Therefore, individuals with a lower IQ are more likely to rate themselves as
having a higher IQ than those around them. Conversely, people with a higher IQ, while better at appraising others'
IQ overall, are still likely to rate people of similar IQ to themselves as having higher IQs.
The disparity between actual IQ and perceived IQ has also been noted between genders by British psychologist
Adrian Furnham, whose work suggested that, on average, men are more likely to overestimate their intelligence by
5 points, while women are more likely to underestimate their IQ by a similar margin.[4][5]
Memory
Illusory superiority has been found in studies comparing memory self-reports, such as Schmidt, Berg & Deelman's
research in older adults. This study involved participants aged between 46 and 89 comparing their own memory to
that of peers of the same age group, to that of 25-year-olds, and to their own memory at age 25. The research showed
that participants exhibited illusory superiority when comparing themselves to both peers and younger adults;
however, the researchers asserted that these judgements were only slightly related to age.[6]
Cognitive tasks
In Kruger and Dunning's experiments, participants were given specific tasks (such as solving logic problems,
analyzing grammar questions, and determining whether or not jokes were funny), and were asked to evaluate their
performance on these tasks relative to the rest of the group, enabling a direct comparison of their actual and
perceived performance.[7]
Results were divided into four groups depending on actual performance, and it was found that all four groups
evaluated their performance as above average, meaning that the lowest-scoring group (the bottom 25%) showed a
very large illusory superiority bias. The researchers attributed this to the fact that the individuals who were worst at
performing the tasks were also worst at recognizing skill in those tasks. This was supported by the fact that, given
training, the worst subjects improved their estimate of their rank as well as getting better at the tasks.[7]
The paper, titled "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to
Inflated Self-Assessments", won a 2000 Ig Nobel Prize.[8]
In 2003 Dunning and Joyce Ehrlinger, also of Cornell University, published a study that detailed a shift in people's
views of themselves influenced by external cues. Participants in the study (Cornell University undergraduates) were
given tests of their knowledge of geography, some intended to affect their self-views positively and some negatively.
They were then asked to rate their performance, and those given the positive tests reported significantly better
performance than those given the negative tests.[9]
Daniel Ames and Lara Kammrath extended this work to sensitivity to others, and the subjects' perception of how
sensitive they were.[10] Work by Katherine Burson, Richard Larrick, and Joshua Klayman has suggested that the
effect is not so obvious and may be due to levels of noise and bias.[11] A later paper on this subject by Dunning,
Kruger, and coauthors comes to qualitatively similar conclusions after making some attempt to test alternative
explanations.[12]
Academic ability and job performance
In a survey of faculty at the University of Nebraska, 68% rated themselves in the top 25% for teaching ability.[13]
In a similar survey, 87% of MBA students at Stanford University rated their academic performance as above the
median.[14]
Findings of illusory superiority in research have also explained phenomena such as the large amount of stock market
trading (as each trader thinks they are the best, and most likely to succeed)[15] and the number of lawsuits that go to
trial (because, due to illusory superiority, many lawyers have an inflated belief that they will win a case).[16]
Self, friends and peers
One of the first studies that found the effect of illusory superiority was carried out in 1976 by the College Board in
the USA.[17] A survey was attached to the SAT exams (taken by approximately one million students per year),
asking the students to rate themselves relative to the median of the sample (rather than the average peer) on a number
of vague positive characteristics. In ratings of leadership ability, 70% of the students put themselves above the
median. In ability to get on well with others, 85% put themselves above the median, and 25% rated themselves in the
top 1%.
More recent research[18] has found illusory superiority in a social context, with participants comparing themselves
to friends and other peers on positive characteristics (such as punctuality and sensitivity) and negative characteristics
(such as naivety or inconsistency). This study found that participants rated themselves more favorably than their
friends, but rated their friends more favorably than other peers. These findings were, however, affected by several
moderating factors.
Research by Perloff and Fetzer,[19] Brown,[20] and Tajfel and Turner[21] also found similar effects of participants
rating friends higher than other peers. Tajfel and Turner attributed this to an "ingroup bias" and suggested that this
was motivated by the individual's desire for a "positive social identity".
Popularity
In Zuckerman and Jost's study, participants were given detailed questionnaires about their friendships and asked to
assess their own popularity. Using social network analysis, the researchers were able to show that participants
generally had exaggerated perceptions of their own popularity, particularly in comparison to their own friends.[22]
Relationship happiness
Researchers have also found effects of illusory superiority in studies of relationship satisfaction. For example, one
study found that participants perceived their own relationships as better on average than others' relationships, but
thought that the majority of people were happy with their relationships. The study also found evidence that the
higher participants rated their own relationship happiness, the more superior they believed their relationship was.
The illusory superiority exhibited by participants also served to increase their own relationship satisfaction: among
men, satisfaction was particularly related to the perception that one's own relationship was superior and to the
assumption that few others were unhappy in their relationships, whereas women's satisfaction was particularly
related to the assumption that most others were happy with their relationships.[23]
Health
Illusory superiority effects have been found in a self-report study of health behaviors (Hoorens & Harris, 1998). The
study involved asking participants to estimate how often they, and their peers, carried out healthy and unhealthy
behaviors. Participants reported that they carried out healthy behaviors more often than the average peer, and
unhealthy behaviors less often, as would be expected given the effect of illusory superiority. These findings held for
both past self-reports of behavior and expected future behavior.[24]
Driving ability
Svenson (1981) surveyed 161 students in Sweden and the United States, asking them to compare their driving safety
and skill to the other people in the experiment. For driving skill, 93% of the US sample and 69% of the Swedish
sample put themselves in the top 50% (above the median). For safety, 88% of the US group and 77% of the Swedish
sample put themselves in the top 50%.[25]
McCormick, Walkey and Green (1986) found similar results in their study, asking 178 participants to evaluate their
position on eight different dimensions relating to driving skill (examples include the "dangerous–safe" dimension
and the "considerate–inconsiderate" dimension). Only a small minority rated themselves as below average (the
midpoint of the dimension scale) at any point, and when all eight dimensions were considered together it was found
that almost 80% of participants had evaluated themselves as being above the average driver.[26]
Immunity to bias
Subjects describe themselves in positive terms compared to other people, and this includes describing themselves as
less susceptible to bias than other people. This effect is called the bias blind spot and has been demonstrated
independently.
Cultural differences
The vast majority of the literature on self-esteem originates from studies of participants in the United States.
However, research that investigates the effects in only one specific population is severely limited, as its results may
not be a true representation of human psychology as a whole. More recent research has therefore focused on
investigating quantities and qualities of self-esteem around the globe. The findings of such studies suggest that
illusory superiority varies between cultures.
Self-esteem
While a great deal of evidence suggests that we compare ourselves favorably to others on a wide variety of traits, the
links to self-esteem are uncertain. The theory that those with high self-esteem maintain this high level by rating
themselves above others carries some evidence: non-depressed subjects have been reported to rate their control over
positive outcomes higher than that of a peer, despite identical levels of performance between the two individuals.[27]
Furthermore, it has been found that non-depressed students will also actively rate peers below themselves, as
opposed to rating themselves higher; students were able to recall many more negative personality traits about others
than about themselves.[28]
These data suggest that those with a positive self-view are more likely to display the above-average effect than those
with a negative self-appraisal. Similarly, those with low self-esteem appear to engage in far less illusory superiority,
showing more realism in their self-ratings.
These results run against a basic humanistic principle within psychology. In particular, Carl Rogers, a pioneer of
humanistic psychology, claimed that those with low self-esteem would be far more likely to attempt to belittle
others, with the aim of strengthening their fragile self-view. Rogers hypothesized, on the other hand, that those with
high self-esteem would have no need to put others down or below themselves, and therefore would be unlikely to
exhibit illusory superiority.
In these studies, however, no distinction was made between people with legitimate and illegitimate high self-esteem.
Other studies have found that an absence of positive illusions may coexist with high self-esteem[29] and that
self-determined individuals with personalities oriented towards growth and learning are less prone to these
illusions.[30] It may thus be that while illusory superiority is associated with illegitimate high self-esteem, people
with legitimate high self-esteem do not exhibit it.
Relation to mental health
Psychology has traditionally assumed that generally accurate self-perceptions are essential to good mental health.[2]
This was challenged by a 1988 paper by Taylor and Brown, who argued that mentally healthy individuals typically
manifest three cognitive illusions, namely illusory superiority, illusion of control and optimism bias.[2] This idea
rapidly became very influential, with some authorities concluding that it would be therapeutic to deliberately induce
these biases.[31] Since then, further research has both undermined that conclusion and offered new evidence
associating illusory superiority with negative effects on the individual.[2]
One line of argument was that in the Taylor and Brown paper, the classification of people as mentally healthy or
unhealthy was based on self-reports rather than objective criteria.[31] Hence it was not surprising that people prone
to self-enhancement would exaggerate how well-adjusted they are. One study claimed that "mentally normal" groups
were contaminated by defensive deniers, who are the most subject to positive illusions.[31] A longitudinal study
found that self-enhancement biases were associated with poor social skills and psychological maladjustment.[2]
In a separate experiment where videotaped conversations between men and women were rated by independent
observers, self-enhancing individuals were more likely to show socially problematic behaviors such as hostility or
irritability.[2] A 2007 study found that self-enhancement biases were associated with psychological benefits (such as
subjective well-being) but also inter- and intra-personal costs (such as anti-social behavior).[32]
Neuroimaging
The degree to which people view themselves as more desirable than the average person links to reduced activation in
their orbitofrontal cortex and dorsal anterior cingulate cortex. This is suggested to link to the role of these areas in
processing "cognitive control".[33]
Explanations
Noisy mental information processing
A recent Psychological Bulletin article suggests that illusory superiority (as well as other biases) can be explained by
a simple information-theoretic generative mechanism that assumes a noisy conversion of objective evidence
(observation) into subjective estimates (judgment).[34] The study suggests that the underlying cognitive mechanism
is essentially similar to the noisy mixing of memories that can cause the conservatism bias or overconfidence: after
our own performance, we readjust our estimates of our own performance more than we readjust our estimates of
others' performances. This implies that our estimates of the scores of others are even more conservative (more
influenced by the previous expectation) than our estimates of our own performance (more influenced by the new
evidence received after taking the test). The difference in the conservative bias of the two estimates (a conservative
estimate of our own performance, and an even more conservative estimate of the performance of others) is enough
to create illusory superiority. Since mental noise is a sufficient explanation that is much simpler and more
straightforward than any other explanation involving heuristics, behavior, or social interaction,[17] Occam's razor
would argue in its favor as the underlying generative mechanism (it is the hypothesis that makes the fewest
assumptions).
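The differential-conservatism idea can be illustrated with a toy numerical sketch. Note this is not the cited article's actual information-theoretic model; the prior, scores, and update weights below are assumptions chosen for illustration:

```python
def estimate(prior, evidence, weight):
    """Conservative update: blend new evidence with the prior expectation.
    weight says how strongly the estimate follows the evidence (0..1)."""
    return weight * evidence + (1 - weight) * prior

PRIOR = 50.0    # expected score before the test (percent), same for self and others
W_SELF = 0.8    # self-estimates follow the new evidence closely...
W_OTHER = 0.3   # ...while estimates of others stay closer to the prior

# Easy task: everyone actually scores 80, so evidence lies above the prior.
self_easy = estimate(PRIOR, 80, W_SELF)    # 74.0
other_easy = estimate(PRIOR, 80, W_OTHER)  # 59.0
# The self-estimate ends up higher than the estimate of others,
# even though actual performance is identical: illusory superiority.

# Hard task: everyone actually scores 30, so evidence lies below the prior.
self_hard = estimate(PRIOR, 30, W_SELF)    # 34.0
other_hard = estimate(PRIOR, 30, W_OTHER)  # 44.0
# The same mechanism now predicts a worse-than-average effect.
```

The hard-task case is consistent with the worse-than-average results on difficult tasks discussed elsewhere in this article (e.g. Kruger, 1999).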
Selective recruitment
This is the idea that when making a comparison with a peer an individual will select their own strengths and the
other's weaknesses in order that they appear better on the whole. This theory was first tested by Weinstein (1980);
however, this was in an experiment relating to optimistic bias, rather than the better-than-average effect. The study
involved participants rating certain behaviors as likely to increase or decrease the chance of a series of life events
happening to them. It was found that individuals showed less optimistic bias when they were allowed to see others'
answers.[35]
Perloff and Fetzer (1986) suggested that when comparing themselves to an average peer on a particular ability or
characteristic, an individual would choose a comparison target (the peer being compared) that scored less well on
that ability or characteristic, in order that the individual would appear to be better than average. To test this theory,
Perloff and Fetzer asked participants to compare themselves to specific comparison targets (a close friend), and
found that illusory superiority decreased when specific targets were given, rather than vague constructs such as the
"average peer". However, these results are not completely reliable: they could be affected by the fact that individuals
like their close friends more than an "average peer" and may as a result rate their friend as being higher than average,
so the friend would not be an objective comparison target.[19]
Illusory superiority
267
Egocentrism
The second explanation for how the better-than-average effect works is egocentrism. This is the idea that an
individual places greater importance and significance on their own abilities, characteristics and behaviors than those
of others. Egocentrism is therefore a less overtly self-serving bias. According to egocentrism, individuals will
overestimate themselves in relation to others because they believe that they have an advantage that others do not
have, as an individual considering their own performance and another's performance will consider their performance
to be better, even when they are in fact equal. Kruger (1999) found support for the egocentrism explanation in his
research involving participant ratings of their ability on easy and difficult tasks. It was found that individuals were
consistent in their ratings of themselves as above the median in the tasks classified as "easy" and below the median
in the tasks classified as "difficult", regardless of their actual ability. In this experiment the better-than-average effect
was observed when it was suggested to participants that they would be successful, but a worse-than-average effect
was found when it was suggested that participants would be unsuccessful.[36]
Focalism
The third explanation for the better-than-average effect is focalism, the idea that greater significance is placed on the
object that is the focus of attention. Most studies of the better-than-average effect place greater focus on the self
when asking participants to make comparisons (the question will often be phrased with the self being presented
before the comparison target e.g. "compare yourself to the average person..."). According to focalism this means
that the individual will place greater significance on their own ability or characteristic than that of the comparison
target. This also means that, in theory, if in an experiment on the better-than-average effect the questions were
phrased so that the self and other were switched (e.g. "compare the average peer to yourself"), the
better-than-average effect should be lessened.[37]
Research into focalism has focused primarily on optimistic bias rather than the better-than-average effect. However,
two studies found a decreased effect of optimistic bias when participants were asked to compare an average peer to
themselves, rather than themselves to an average peer.[38][39]
Windschitl, Kruger & Simms (2003) have conducted research into focalism focusing specifically on the
better-than-average effect, and found that participants' estimates of ability and likelihood of success in a task were
lower when they were asked about others' chances of success rather than their own.[40]
"Self versus aggregate" comparisons
This idea, put forward by Giladi and Klar, suggests that when making comparisons, any single member of a group
will tend to be evaluated as ranking above that group's statistical mean performance level or the median performance
level of its members. Research has found this effect in many different areas of human performance and has even
generalized it beyond individuals' attempts to draw comparisons involving themselves.[41] The findings of this
research therefore suggest that rather than individuals evaluating themselves as above average in a self-serving
manner, the better-than-average effect is actually due to a general tendency to evaluate any single person or object as
better than average.
Better-than-average heuristic
Alicke and Govorun proposed the idea that, rather than individuals consciously reviewing and thinking about their
own abilities, behaviors and characteristics and comparing them to those of others, it is likely that people instead
have what they describe as an "automatic tendency to assimilate positively-evaluated social objects toward ideal trait
conceptions".[17] For example, if individuals evaluated themselves as honest, they would be likely to then
exaggerate this characteristic towards their perceived ideal position on a scale of honesty. Importantly, Alicke noted
that this ideal position is not always the top of the scale; in the case of honesty, for example, someone who is always
brutally honest may be regarded as rude. Instead, the ideal is a balance, perceived differently by different individuals.
Non-social explanations
The better-than-average effect may not have wholly social origins: judgements about inanimate objects suffer similar
distortions.[41]
Moderating factors
While illusory superiority has been found to be somewhat self-serving, this does not mean that it will predictably
occur: it is not constant. Instead, the strength of the effect is moderated by many factors, the main examples of which
have been summarized by Alicke and Govorun (2005).[17]
Interpretability/ambiguity of trait
This is a phenomenon that Alicke and Govorun have described as "the nature of the judgement dimension", and it
refers to how subjective (abstract) or objective (concrete) the ability or characteristic being evaluated is.[17]
Research by Sedikides & Strube (1997) has found that people are more self-serving (the effect of illusory superiority
is stronger) when the event in question is more open to interpretation;[42] for example, social constructs such as
popularity and attractiveness are more interpretable than characteristics such as intelligence and physical ability.[43]
This has been partly attributed also to the need for a believable self-view.[44]
The idea that ambiguity moderates illusory superiority has empirical research support from a study involving two
conditions: in one, participants were given criteria for assessing a trait as ambiguous or unambiguous, and in the
other, participants were free to assess the traits according to their own criteria. It was found that the effect of illusory
superiority was greater in the condition where participants were free to assess the traits.[45]
The effects of illusory superiority have also been found to be strongest when people rate themselves on abilities at
which they are totally incompetent. These subjects have the greatest disparity between their actual performance (at
the low end of the distribution) and their self-rating (placing themselves above average). This Dunning–Kruger
effect is interpreted as a lack of metacognitive ability to recognize one's own incompetence.[7]
Method of comparison
The method used in research into illusory superiority has been found to have an implication for the strength of the
effect found. Most studies of illusory superiority involve a comparison between an individual and an average peer,
for which there are two methods: direct comparison and indirect comparison. A direct comparison, which is more
commonly used, involves participants rating themselves and the average peer on the same scale, from "below
average" to "above average",[46] and results in participants being far more self-serving.[47] Researchers have
suggested that this occurs due to the closer comparison between the individual and the average peer; however, use of
this method means that it is impossible to know whether a participant has overestimated themselves, underestimated
the average peer, or both.
The indirect method of comparison involves participants rating themselves and the average peer on separate scales,
and the illusory superiority effect is found by subtracting the average peer score from the individual's score (with a
higher score indicating a greater effect). While the indirect comparison method is used less often, it is more
informative in terms of whether participants have overestimated themselves or underestimated the average peer, and
can therefore provide more information about the nature of illusory superiority.[46]
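The arithmetic of the indirect method can be sketched briefly; the ratings below are hypothetical:

```python
# Indirect method: self and average peer are rated on separate 1-7 scales.
self_ratings = [6, 5, 7, 4, 6]
peer_ratings = [4, 4, 5, 4, 5]  # each participant's rating of the average peer

# Illusory-superiority score per participant is self minus peer;
# a positive mean indicates a better-than-average effect.
scores = [s - p for s, p in zip(self_ratings, peer_ratings)]
mean_score = sum(scores) / len(scores)
print(scores, mean_score)  # [2, 1, 2, 0, 1] 1.2
```

Because the two ratings are kept separate, each can also be compared against the scale midpoint, which is what lets researchers tell self-overestimation apart from peer-underestimation.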
Comparison target
The nature of the comparison target is one of the most fundamental moderating factors of the effect of illusory
superiority, and there are two main issues relating to the comparison target that need to be considered.
First, research into illusory superiority is distinct in terms of the comparison target, because an individual compares
themselves with a hypothetical average peer rather than a tangible person. Alicke et al. (1995) found that the effect
of illusory superiority was still present, but significantly reduced, when participants compared themselves with real
people (also participants in the experiment, who were seated in the same room), as opposed to when participants
compared themselves with an average peer. This suggests that research into illusory superiority may itself be biasing
results and finding a greater effect than would actually occur in real life.[46]
Further research into the differences between comparison targets involved four conditions where participants were at
varying proximity to an interview with the comparison target: watching live in the same room; watching on tape;
reading a written transcript; or making self-other comparisons with an average peer. It was found that when the
participant was further removed from the interview situation (in the tape and transcript conditions), the effect of
illusory superiority was greater. The researchers asserted that these findings suggest that the effect of illusory
superiority is reduced by two main factors: individuation of the target and live contact with the target.
Second, Alicke et al.'s (1995) studies investigated whether the negative connotations of the word "average" might
affect the extent to which individuals exhibit illusory superiority, namely whether the use of the word "average"
increases illusory superiority. Participants were asked to evaluate themselves, the average peer, and a person they
had sat next to in the previous experiment on various dimensions. It was found that they placed themselves highest,
followed by the real person, followed by the average peer; however, the average peer was consistently placed above
the mean point on the scale, suggesting that the word "average" did not have a negative effect on participants' views
of the average peer.[46]
Controllability
An important moderating factor of the effect of illusory superiority is the extent to which an individual believes they
are able to control and change their position on the dimension concerned. According to Alicke & Govorun, positive
characteristics that an individual believes are within their control produce more self-serving ratings, and negative
characteristics that are seen as uncontrollable are less detrimental to self-enhancement.[17] This theory was
supported by Alicke's (1985) research, which found that individuals rated themselves as higher than an average peer
on positive controllable traits and lower than an average peer on negative uncontrollable traits. The idea, suggested
by these findings, that individuals believe they are responsible for their success while some other factor is
responsible for their failure is known as the self-serving bias.
Individual differences of judge
Personality characteristics vary widely between people and have been found to moderate the effects of illusory
superiority; one of the main examples of this is self-esteem. Brown (1986) found that in self-evaluations of positive
characteristics, participants with higher self-esteem showed greater illusory superiority bias than participants with
lower self-esteem.[48] Similar findings come from a study by Suls, Lemos & Stewart (2002), who in addition found
that participants pre-classified as having high self-esteem interpreted ambiguous traits in a self-serving way, whereas
participants pre-classified as having low self-esteem did not.[18]
Worse-than-average effect
In contrast to what is commonly believed, research has found that better-than-average effects are not universal. In
fact, much recent research has found the opposite effect in many tasks, especially more difficult ones.[49]
Notes
[1] Hoorens, Vera (1993). "Self-enhancement and Superiority Biases in Social Comparison". European Review of Social Psychology (Psychology Press) 4 (1): 113–139. doi:10.1080/14792779343000040.
[2] Colvin, C. Randall; Jack Block; David C. Funder (1995). "Overly Positive Self-Evaluations and Personality: Negative Implications for Mental Health". Journal of Personality and Social Psychology (American Psychological Association) 68 (6): 1152–1162. doi:10.1037/0022-3514.68.6.1152. PMID 7608859.
[3] Dunning, David; Judith A. Meyerowitz; Amy D. Holzberg (1989). "Ambiguity and self-evaluation: The role of idiosyncratic trait definitions in self-serving assessments of ability". Journal of Personality and Social Psychology (American Psychological Association) 57 (6): 1082–1090. doi:10.1037/0022-3514.57.6.1082. ISSN 1939-1315.
[4] Davidson, J. E. & C. L. Downing, "Contemporary Models of Intelligence", in Handbook of Intelligence, 2000.
[5] International Journal of Selection and Assessment, Vol. 13, No. 1, pp. 11–24, March 2005.
[6] Schmidt, I.W.; I.J. Berg; B.G. Deelman (1999). "Illusory superiority in self-reported memory of older adults". Aging, Neuropsychology, and Cognition 6 (4): 288–301. doi:10.1076/1382-5585(199912)06:04;1-B;FT288.
[7] Kruger, Justin; David Dunning (1999). "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments". Journal of Personality and Social Psychology 77 (6): 1121–1134. doi:10.1037/0022-3514.77.6.1121. PMID 10626367.
[8] "The 2000 Ig Nobel Prize Winners" (http://www.improb.com/ig/ig-pastwinners.html#ig2000). Improbable Research. Retrieved 2008-05-27.
[9] Ehrlinger, Joyce; David Dunning (January 2003). "How Chronic Self-Views Influence (and Potentially Mislead) Estimates of Performance". Journal of Personality and Social Psychology (American Psychological Association) 84 (1): 5–17. doi:10.1037/0022-3514.84.1.5. PMID 12518967.
[10] Ames, Daniel R.; Lara K. Kammrath (September 2004). "Mind-Reading and Metacognition: Narcissism, not Actual Competence, Predicts Self-Estimated Ability". Journal of Nonverbal Behavior (Springer Netherlands) 28 (3): 187–209. doi:10.1023/B:JONB.0000039649.20015.0e.
[11] Burson, Katherine A.; Richard P. Larrick; Joshua Klayman (2006). "Skilled or Unskilled, but Still Unaware of It: How Perceptions of Difficulty Drive Miscalibration in Relative Comparisons" (http://faculty.fuqua.duke.edu/~larrick/bio/Files/2006 Burson Larrick Klayman JPSP.pdf). Journal of Personality and Social Psychology 90 (1): 60–77. doi:10.1037/0022-3514.90.1.60. PMID 16448310.
[12] Ehrlinger, Joyce; Johnson, Kerri; Banner, Matthew; Dunning, David; Kruger, Justin (2008). "Why the unskilled are unaware: Further explorations of (absent) self-insight among the incompetent" (http://www.psy.fsu.edu/~ehrlinger/Self_&_Social_Judgment/Ehrlinger_et_al_2008.pdf) (PDF). Organizational Behavior and Human Decision Processes 105: 98–121.
[13] Cross, P. (1977). "Not can but will college teachers be improved?". New Directions for Higher Education 17: 1–15.
[14] "It's Academic." 2000. Stanford GSB Reporter, April 24, pp. 14–5. via Zuckerman, Ezra W.; John T. Jost (2001). "What Makes You Think You're So Popular? Self Evaluation Maintenance and the Subjective Side of the "Friendship Paradox"" (http://www.psych.nyu.edu/jost/Zuckerman & Jost (2001) What Makes You Think You're So Popular1.pdf). Social Psychology Quarterly (American Sociological Association) 64 (3): 207–223. doi:10.2307/3090112. JSTOR 3090112. Retrieved 2009-08-29.
[15] Odean, T. (1998). "Volume, volatility, price, and profit when all traders are above average". Journal of Finance 53 (6): 1887–1934. doi:10.1111/0022-1082.00078.
[16] Neale, M.A. & Bazerman, M.H. (1985). "The effects of framing and negotiator overconfidence on bargaining behaviours and outcomes". Academy of Management Journal 28 (1): 34–49.
[17] Alicke, Mark D.; Olesya Govorun (2005). "The Better-Than-Average Effect". In Mark D. Alicke, David A. Dunning, Joachim I. Krueger. The Self in Social Judgment. Studies in Self and Identity. Psychology Press. pp. 85–106. ISBN 978-1-84169-418-4. OCLC 58054791.
[18] Suls, J.; K. Lemos; H.L. Stewart (2002). "Self-esteem, construal, and comparisons with the self, friends and peers". Journal of Personality and Social Psychology (American Psychological Association) 82 (2): 252–261. doi:10.1037/0022-3514.82.2.252. PMID 11831414.
[19] Perloff, L.S.; B.K. Fetzer (1986). "Self-other judgments and perceived vulnerability to victimization". Journal of Personality and Social Psychology (American Psychological Association) 50 (3): 502–510. doi:10.1037/0022-3514.50.3.502.
[20] Brown, J.D. (1986). "Evaluations of self and others: Self-enhancement biases in social judgments". Social Cognition 4 (4): 353–376. doi:10.1521/soco.1986.4.4.353.
[21] Tajfel, H.; J.C. Turner. "The social identity theory of intergroup behaviour". In S. Worchel & W.G. Austin. Psychology of intergroup
relations (2nd ed.). pp.724. ISBN0-12-682550-5.
[22] Zuckerman, Ezra W.; John T. Jost (2001). "What Makes You Think You're So Popular? Self Evaluation Maintenance and the Subjective
Side of the "Friendship Paradox"" (http:/ / www. psych. nyu. edu/ jost/ Zuckerman & Jost (2001) What Makes You Think You're So Popular1.
pdf). Social Psychology Quarterly (American Sociological Association) 64 (3): 207223. doi:10.2307/3090112. JSTOR3090112. . Retrieved
2009-08-29.
Illusory superiority
271
[23] Buunk, B.P. (2001). "Perceived superiority of one's own relationship and perceived prevalence of happy and unhappy relationships". British
Journal of Social Psychology 40 (4): 565574. doi:10.1348/014466601164984.
[24] Hoorens, V.; P. Harris (1998). "Distortions in reports of health behaviours: The time span effect and illusory superiority". Psychology and
Health 13 (3): 451466. doi:10.1080/08870449808407303.
[25] Svenson, O. (February 1981). "Are we all less risky and more skillful than our fellow drivers?". Acta Psychologica 47 (2): 143148.
doi:10.1016/0001-6918(81)90005-6.
[26] McCormick, Iain A.; Frank H. Walkey, Dianne E. Green (June 1986). "Comparative perceptions of driver ability A confirmation and
expansion". Accident Analysis & Prevention 18 (3): 205208. doi:10.1016/0001-4575(86)90004-7.
[27] Martin, D.J.; Abramson, L.Y.; Alloy, L.B. (1984). "Illusion of control for self and others in depressed and non-depressed college students".
Journal of Personality and Social Psychology 46: 126136.
[28] Kuiper, N.A.; Macdonald, M.R. (1982). "Self and other perception in mild depressives". Social Cognition 1 (3): 223239.
doi:10.1521/soco.1982.1.3.223.
[29] Compton William C. (1992). "Are positive illusions necessary for self-esteem: a research note". Personality and Individual Differences 13
(12): 13431344. doi:10.1016/0191-8869(92)90177-Q.
[30] Knee, C.R., & Zuckerman, M. (1998). A nondefensive personality: Autonomy and control as moderators of defensive coping and
self-handicapping. Journal of Research in Personality, 32(2), 115-130. doi = http:/ / dx. doi. org/ 10. 1006/ jrpe. 1997. 2207
[31] Shedler, Jonathan; Martin Mayman, Melvin Manis (1993). "The Illusion of Mental Health". American Psychologist (American
Psychological Association) 48 (11): 11171131. doi:10.1037/0003-066X.48.11.1117. PMID8259825.
[32] Sedikides, Constantine; Robert S. Horton, Aiden P. Gregg (2007). "The Why's the Limit: Curtailing Self-Enhancement With Explanatory
Introspection". Journal of Personality (Wiley Periodicals) 75 (4): 783824. doi:10.1111/j.1467-6494.2007.00457.x. ISSN0022-3506.
PMID17576359.
[33] Beer JS, Hughes BL. (2010). "Neural systems of social comparison and the "above-average" effect". Neuroimage 49 (3): 26712679.
doi:10.1016/j.neuroimage.2009.10.075. PMID19883771.
[34] Martin Hilbert (2012) "Toward a synthesis of cognitive biases: How noisy information processing can bias human decision making" (http:/ /
psycnet.apa. org/ psycinfo/ 2011-27261-001). Psychological Bulletin, 138(2), 211237; free access to the study here:
martinhilbert.net/HilbertPsychBull.pdf
[35] Weinstein, N.D. (1980). "Unrealistic optimism about future life events". Journal of Personality and Social Psychology (American
Psychological Association) 39 (5): 806820. doi:10.1037/0022-3514.39.5.806.
[36] Kruger, J. (1999). "Lake Woebegon be gone! The "below-average effect" and the egocentric nature of comparative ability judgments".
Journal of Personality and Social Psychology 77 (2): 221232. doi:10.1037/0022-3514.77.2.221. PMID10474208.
[37] Schkade, D.A.; D. Kahneman (1998). "Does living in California make people happy? A focussing illusion in judgments of life satisfaction".
Psychological Science 9 (5): 340346. doi:10.1111/1467-9280.00066.
[38] Otten, W.; J. Van der Pligt (1966). "Context effects in the measurement of comparative optimism in probability judgments". Journal of
Personality and Social Psychology 15: 80101.
[39] Eiser, J.R.; S. Pahl, Y.R.A. Prins (2001). "Optimism, pessimism, and the direction of self-other comparisons". Journal of Experimental
Social Psychology 37: 7784. doi:10.1006/jesp.2000.1438.
[40] Windschitl, P.D.; J. Kruger, E.N. Sims (2003). "The influence of egocentrism and focalism on people's optimism in competition: When what
affects us equally affects me more". Journal of Personality and Social Psychology (American Psychological Association) 85 (3): 389408.
doi:10.1037/0022-3514.85.3.389. PMID14498778.
[41] E.E. Giladi & Y. Klar (2002). "When standards are wide of the mark: Nonselective superiority and inferiority biases in comparative
judgments of objects and concepts". Journal of Experimental Psychology: General 131 (4): 538551. doi:10.1037/0096-3445.131.4.538.
[42] Sedikides, C., & Strube, M.J. (1997). Self-evaluation: To thine own self be good, to thine own self be sure, to thine own self be true, and to
thine own self be better. In M.P. Zanna (Ed.), Advances in experimental social psychology (Vol. 29, pp. 209269). New York: Academic
Press.
[43] Reeder, G.D.; Brewer, M.B. (1979). "A schematic model of dispositional attribution in interpersonal perception". Psychological Review 86:
6179. doi:10.1037/0033-295X.86.1.61.
[44] Swann, W.B., Rentfrow, P.J., & Guinn, J.S. (2003). Self Verification: The search for coherence. In M.R. Leary & J.P. Tangney (Eds.),
Handbook of Self and Identity (pp. 367383). New York: Guildford Press.
[45] Dunning, D.; Meyerowitz, J.A.; Holzberg, A.D. (1989). "Ambiguity and self-evaluation: The role of idiosyncratic trait definitions in
self-serving assessments of ability". Journal of Personality and Social Psychology 57 (6): 10821090. doi:10.1037/0022-3514.57.6.1082.
[46] Alicke, M.D.; Klotz, M.L.; Breitenbecher, D.L.; Yurak, T.J.; Vredenburg, D.S. (1995). "Personal contact, individuation, and the
better-than-average effect". Journal of Personality and Social Psychology 68 (5): 804825. doi:10.1037/0022-3514.68.5.804.
[47] Otten, W.; Van; der Pligt, J. (1966). "Context effects in the measurement of comparative optimism in probability judgments". Journal of
Social and Clinical Psychology 15: 80101.
[48] Brown, J.D. (1986). "Evaluations of self and others: Self-enhancement biases in social judgments". Social Cognition 4 (4): 353376.
doi:10.1521/soco.1986.4.4.353.
[49] Moore, D.A. (2007). "Not so above average after all: When people believe they are worse than average and its implications for theories of
bias in social comparison". Organizational Behaviour and Human Decision Processes 102 (1): 4258. doi:10.1016/j.obhdp.2006.09.005.
Illusory superiority
272
References
Alicke, Mark D.; David A. Dunning, Joachim I. Kruger (2005). The Self in Social Judgment. Psychology Press.
pp.85106. ISBN978-1-84169-418-4. especially chapters 5 and 4
Kruger, J. (1999). Lake Wobegon be gone! The "below-average effect" and the egocentric nature of comparative
ability judgments. Journal of Personality and Social Psychology, 77, 221232.
Matlin, Margaret W. (2004). "Pollyanna Principle". In Rdiger Pohl. Cognitive Illusions: a handbook on fallacies
and biases in thinking, judgement and memory. Psychology Press. ISBN978-1-84169-351-4.
Myers, David G. (1980). The inflated self: human illusions and the Biblical call to hope. New York: Seabury
Press. ISBN 978-0-8164-0459-9
Sedikides, Constantine; Aiden P. Gregg. (2003). "Portraits of the Self" in Sage handbook of social psychology
Further reading
Dunning, David; Kerri Johnson, Joyce Ehrlinger and Justin Kruger (2003). "Why people fail to recognize their
own incompetence" (http:/ / www3. interscience. wiley. com/ journal/ 118890796/ abstract?CRETRY=1&
SRETRY=0). Current Directions in Psychological Science 12 (3): 8387. doi:10.1111/1467-8721.01235.
E.E. Giladi & Y. Klar (2002). "When standards are wide of the mark: Nonselective superiority and inferiority
biases in comparative judgments of objects and concepts". Journal of Experimental Psychology: General 131 (4):
538551. doi:10.1037/0096-3445.131.4.538.
In-group favoritism
In-group favoritism, sometimes known as in-group–out-group bias, in-group bias, or intergroup bias, refers to a
pattern of favoring members of one's in-group over out-group members. This can be expressed in the evaluation of
others, in the allocation of resources, and in many other ways.[1]
This interaction has been researched by many psychologists and linked to many theories related to group conflict and
prejudice. The phenomenon is primarily viewed from a social psychology standpoint rather than a personality
psychology perspective. Two prominent theoretical approaches to in-group favoritism are realistic conflict theory
and social identity theory. Realistic conflict theory proposes that intergroup competition, and sometimes intergroup
conflict, arises when two groups have opposing claims to scarce resources. In contrast, social identity theory posits
a psychological drive for positively distinct social identities as the general root cause of in-group-favoring behavior.
Origins of the research tradition
In 1906, the sociologist William Graham Sumner posited that humans are a species that joins together in groups by
its very nature. However, he also maintained that humans had an innate tendency to favor their own group over
others, saying, "Each group nourishes its own pride and vanity, boasts itself superior, exalts its own divinities, and
looks with contempt on outsiders" (p. 13).[2] This is seen at the group level in in-group–out-group bias; when
experienced in larger groups such as tribes, ethnic groups, or nations, it is referred to as ethnocentrism.
Explanations
Competition
Competition between groups for resources has been suggested as the cause of negative prejudices and stereotypes of
the out-group, a phenomenon described by realistic conflict theory, or realistic group conflict.[3] The Robbers Cave
Experiment is commonly used to exemplify this perspective. In this experiment, 22 eleven-year-old boys with
similar backgrounds were studied in a mock summer-camp situation. The boys were divided into two groups of
eleven and were observed in several situations that tested their in-group and out-group behavior. The research
revealed striking evidence that, regardless of the two groups' similarity, group members behave viciously toward the
out-group when competing for limited resources.[4] The in-group–out-group bias could readily be seen in the boys'
behavior toward each other: they underestimated the performance of the other group and overestimated the
performance of their own group. Moreover, "the pro-ingroup tendency went hand in hand with the anti-outgroup
tendency" (p. 423).[5]
Self-esteem
It is argued that one of the key determinants of group biases is the need to improve self-esteem: individuals will find
a reason, no matter how insignificant, to prove to themselves that their group is superior. This phenomenon was
pioneered and studied most extensively by Henri Tajfel, a British social psychologist who examined the
psychological roots of in-group/out-group bias. To study this in the laboratory, Tajfel and colleagues created what
are now known as minimal groups (see minimal group paradigm), formed from complete strangers using the most
trivial criteria imaginable. In Tajfel's studies, participants were split into groups by flipping a coin, and each group
was then told that it appreciated a certain style of painting that none of the participants were familiar with when the
experiment began. Tajfel and his colleagues discovered that, even though (a) participants did not know each other,
(b) their groups were completely meaningless, and (c) none of the participants had any inclination as to which style
they liked better, participants almost always liked the members of their own group better and rated the members of
their in-group as more likely to have pleasant personalities. By forming a more positive impression of individuals in
the in-group, individuals are able to boost their own self-esteem as members of that group.[1]
Robert Cialdini and his research team looked at the number of university T-shirts worn on college campuses
following either a win or a loss in a football game. Unsurprisingly, on the Monday after a win, more T-shirts were
worn, on average, than after a loss.[1][6]
In another set of studies, conducted in the 1980s by Jennifer Crocker and colleagues using minimal group processes,
individuals with high self-esteem who suffered a threat to the self-concept exhibited greater in-group biases than did
individuals with low self-esteem who suffered such a threat.[7] While some studies have supported this notion of a
negative correlation between self-esteem and in-group bias,[8] other researchers have found that individuals with
low self-esteem show greater prejudice toward both in-group and out-group members,[7] and some studies have
even shown that high-self-esteem groups display greater prejudice than low-self-esteem groups.[9] This research
suggests that there may be an alternative explanation, and additional reasoning, for the relationship between
self-esteem and in-group/out-group biases. Alternatively, it is possible that researchers have used the wrong sort of
self-esteem measures to test the link between self-esteem and in-group bias (global personal self-esteem rather than
specific social self-esteem).[10]
In-group favoritism versus out-group negativity
Social psychologists have long distinguished between in-group favoritism and out-group negativity, where
out-group negativity is the act of punishing or placing burdens upon the out-group.[11] Indeed, a significant body of
research exists that attempts to identify the relationship between in-group favoritism and out-group negativity, as
well as the conditions that lead to out-group negativity.[12][13][14] For example, Struch and Schwartz found support
for the predictions of belief congruence theory.[15] Belief congruence theory concerns the degree of similarity in
beliefs, attitudes, and values perceived to exist between individuals, and holds that dissimilarity increases negative
orientations toward others. Applied to racial discrimination, belief congruence theory holds that the perceived
dissimilarity of beliefs has more of an impact on racial discrimination than race itself.
References
[1] Aronson, E., Wilson, T. D., & Akert, R. (2010). Social Psychology (7th ed.). Upper Saddle River: Prentice Hall.
[2] Sumner, William (1906).
[3] Whitley, B.E., & Kite, M.E. (2010). The Psychology of Prejudice and Discrimination. Belmont, CA: Wadsworth.
[4] Muzafer Sherif, O. J. Harvey, B. Jack White, William R. Hood, Carolyn W. Sherif (1954/1961). "Intergroup Conflict and Cooperation: The
Robbers Cave Experiment".
[5] Forsythe (2009).
[6] Cialdini, R., Borden, R., Thorne, A., Walker, M., Freeman, S., & Sloan, L. (1976). Basking in reflected glory: Three (football) field studies.
Journal of Personality and Social Psychology, 34, 366–375.
[7] Crocker, J., Thompson, L., McGraw, K., & Ingerman, C. (1987). Downward comparison, prejudice, and evaluations of others: Effects of
self-esteem and threat. Journal of Personality and Social Psychology, 52, 907–916.
[8] Abrams, D., & Hogg, M. (1988). Comments on the motivational status of self-esteem in social identity and intergroup discrimination.
European Journal of Social Psychology, 18, 317–344.
[9] Sachdev, I., & Bourhis, R. (1987). Status differentials and intergroup behavior. European Journal of Social Psychology, 17, 277–293.
[10] Rubin, M., & Hewstone, M. (1998). Social identity theory's self-esteem hypothesis: A review and some suggestions for clarification.
Personality and Social Psychology Review, 2, 40–62. (http://dx.doi.org/10.1207/s15327957pspr0201_3)
[11] Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. G. Austin & S. Worchel (Eds.), The Social Psychology
of Intergroup Relations (pp. 33–47). Monterey, CA: Brooks/Cole.
[12] Bourhis, R. Y.; Gagnon, A. (2001). "Social Orientations in the Minimal Group Paradigm". In R. Brown & S. L. Gaertner (Eds.), Blackwell
Handbook of Social Psychology: Intergroup Processes, pp. 133–152.
[13] Mummendey, A.; Otten, S. (2001). "Aversive Discrimination". In R. Brown & S. L. Gaertner (Eds.), Blackwell Handbook of Social
Psychology: Intergroup Processes, pp. 112–132.
[14] Turner, J. C.; Reynolds, K. H. (2001). "The Social Identity Perspective in Intergroup Relations: Theories, Themes, and Controversies". In
R. Brown & S. L. Gaertner (Eds.), Blackwell Handbook of Social Psychology: Intergroup Processes, pp. 133–152.
[15] Struch, Naomi; Shalom Schwartz (1989). "Intergroup aggression: Its predictors and distinctness from in-group bias". Journal of Personality
and Social Psychology 56 (3): 364–373.
Naïve cynicism
Naïve cynicism is a cognitive bias that occurs when people expect more egocentric bias in others than is actually the
case. The term was proposed by Justin Kruger and Thomas Gilovich.
In one series of experiments, groups including married couples, video-game players, darts players, and debaters
were asked how often they were responsible for good or bad events relative to a partner. Participants apportioned
responsibility to themselves about evenly for good and bad events, but expected their partners to claim more
responsibility for good events than for bad events (an egocentric bias) to a greater degree than the partners actually
did.
Naïve cynicism may complement naïve realism and the bias blind spot.
References
Kruger, J., & Gilovich, T. (1999). "Naive cynicism" in everyday theories of responsibility assessment: On biased
assumptions of bias. Journal of Personality and Social Psychology, 76, 743–753.
Worse-than-average effect
The worse-than-average effect or below-average effect is the human tendency to underestimate one's achievements
and capabilities relative to others.[1]
It is the opposite of the usually pervasive better-than-average effect (or, in other contexts, the overconfidence
effect), and it has been proposed more recently to explain reversals of that effect, in which people underestimate
their own desirable traits.
The effect seems to occur when chances of success are perceived to be extremely rare. Abilities that people tend to
underestimate include juggling, riding a unicycle, living past 100, and finding a U.S. twenty-dollar bill on the
ground in the next two weeks.
Some have attempted to explain this cognitive bias in terms of the regression fallacy or of self-handicapping. A
2012 article in Psychological Bulletin suggests that the worse-than-average effect (like other cognitive biases) can
be explained by a simple information-theoretic generative mechanism that assumes a noisy conversion of objective
evidence (observation) into subjective estimates (judgment).[2]
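The noisy-conversion idea can be illustrated with a short simulation. The distributions, noise levels, and the assumption that people hold noisier evidence about others than about themselves are illustrative choices, not parameters taken from the study; the sketch only shows that bounded, noisy estimates are enough to produce worse-than-average judgments for rare skills and better-than-average judgments for common ones:

```python
import random


def noisy_estimate(true_value, noise_sd, lo=0.0, hi=1.0, trials=10000):
    """Average of a noisy estimate of true_value, clipped to the [lo, hi] scale.

    Clipping a bounded rating scale pulls extreme values toward the middle:
    the noisier the evidence, the stronger the regression toward 0.5.
    """
    total = 0.0
    for _ in range(trials):
        sample = random.gauss(true_value, noise_sd)
        total += min(hi, max(lo, sample))
    return total / trials


random.seed(42)
# Assumed setup: everyone has the same true score; people have sharper
# evidence about themselves (noise_sd=0.1) than about others (noise_sd=0.4).
for true_score, task in [(0.1, "hard/rare skill"), (0.9, "easy/common skill")]:
    self_est = noisy_estimate(true_score, noise_sd=0.1)
    other_est = noisy_estimate(true_score, noise_sd=0.4)
    verdict = "worse" if self_est < other_est else "better"
    print(f"{task}: self={self_est:.2f}, others={other_est:.2f} "
          f"-> {verdict}-than-average judgment")
```

For the hard task, the noisy estimate of others regresses up toward the scale midpoint while the self-estimate stays near the true low score, yielding a worse-than-average judgment; for the easy task the same mechanism flips the sign.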
References
[1] Kruger, J. (1999). "Lake Wobegon be gone! The "below-average effect" and the egocentric nature of comparative ability judgments". Journal
of Personality and Social Psychology 77 (2): 221–232.
[2] Hilbert, Martin (2012). "Toward a synthesis of cognitive biases: How noisy information processing can bias human decision making"
(http://psycnet.apa.org/psycinfo/2011-27261-001). Psychological Bulletin 138 (2): 211–237. Free access to the study:
martinhilbert.net/HilbertPsychBull.pdf
Google effect
The Google effect is the tendency to forget information that can be easily found using Internet search engines such
as Google, rather than remembering it.
The phenomenon was described and named by Betsy Sparrow (Columbia), Jenny Liu (Wisconsin), and Daniel M.
Wegner (Harvard) in July 2011.[1][2]
Having easy access to the Internet, the study showed, makes people less likely to remember certain details they
believe will be accessible online. People can still remember: they tend to remember what they cannot find online,
and they remember how to find what they need on the Internet.[3] Sparrow said this made the Internet a type of
transactive memory.[2] One result of this phenomenon is dependence on the Internet; if an online connection is lost,
the researchers said, it is similar to losing a friend.
The study included four experiments conducted with students at Columbia and Harvard.[3] In part one, subjects
answered trivia questions and then named the colors of words, some of which related to searching on the Internet.
In part two, the subjects read statements related to the trivia questions and had to remember what they read; they
had an easier time with the statements they believed they could find online. In part three, the subjects had to
remember the details of statements based on whether they believed the information could be found somewhere,
could be found in a specific place, or could not be found; they most easily remembered the information they
believed had been deleted. In the final part, the subjects believed the statements would be stored in folders, and they
had an easier time remembering the folder names than the statements themselves. One conclusion: people can
remember information if they do not know where to find it, and they can remember how to find what they need if
they cannot remember the information.[2]
Sparrow said, "We're not thoughtless empty-headed people who don't have memories anymore. But we are
becoming particularly adept at remembering where to go find things. And that's kind of amazing."[3]
The theory has one major drawback, pointed out by several sources: there is no reason why the effect would not
also have been common before the days of Internet-available information, given that reference books and libraries
were readily available. The response, of course, is that the Internet makes information almost instantaneously
available, in a wide variety of locations (even more so with the advent of smartphones), in a way that was never
possible with books, which are not always close at hand.
References
[1] Betsy Sparrow, et al., "Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips". Science 333 (6043):
776–778. doi:10.1126/science.1207745. July 15, 2011.
[2] "Study Finds That Memory Works Differently in the Age of Google" (http://news.columbia.edu/research/2490). Columbia University.
2011-07-14. Retrieved 2011-08-04.
[3] Krieger, Lisa M. (2011-07-16). "Google changing what we remember" (http://www.charlotteobserver.com/2011/07/16/2457835/
google-changing-what-we-remember.html). San Jose Mercury News. Retrieved 2011-08-04.
External links
Link to the Science study (http://scim.ag/B-Sparrow)
Link to video of Betsy Sparrow discussing her research (http://news.columbia.edu/research/2490)
Faradayplank, Fastily, Fayenatic london, FileMaster, Fillmore Jive, Flavio.tosi, Fluffernutter, FrankTobia, Frosted14, FunPika, GeeJo, Gnevin, Gregbard, Gscshoyru, Hackenabush, Hayden120,
Heegoop, IceCreamAntisocial, Idkbro, It's The Economics, Stupid!, J.delanoy, JHP, JYi, James McBride, Jamesmoorhouse23, Jarble, Jeff Dahl, Jkomlos, Joejones2028, Joerg Kurt Wegner,
Josh3580, Jpoelma13, Kaihsu, Kaonslau, Kingturtle, Kopitarian, Kotosan, Krabmeat, LW77, LibraryLion, LinguistAtLarge, Lmatt, Loremaster, Lova Falk, MaCRoEco, Mark Arsten,
MartinPoulter, Masalih, Matty1469, Maurreen, Mcorazao, Mellery, Mfhiller, Mgiganteus1, Mmortal03, Moorlock, Mpcoder, Nareek, NeonMerlin, NoelFlicken, Northamerica1000, Ohnoitsjamie,
OlEnglish, Omphaloscope, Opticburst, Orphan Wiki, Patrick, Penbat, Peter Karlsen, Piepie, Pixelface, Pollinator, Portillo, Retired username, Revth, Rhobite, Richard75, Rjlabs, Rjwilmsi,
Rmfitzgerald50, Roadstaa, Robin63, Sanskritg, Scartelak, Schuym1, Scott Keeler, Seglea, Shanes, Shawnc, ShelfSkewed, Shivambvij, Shoy, SimonP, Skysmith, Smyth, Snowolf, SpaceFlight89,
Sprints, Stefanomione, SteinbDJ, Stephen378, Stevietheman, Student7, Sun Creator, Tabletop, Tasc, Tesseran, The Anome, The wub, ThePoorGuy, Thespian, Thingg, Thudworthy, Tktktk,
Tmtoulouse, Torrentweb, Tresrboles, Trialsanderrors, Twinsday, User2004, VMS Mosaic, Voldemore, Wassermann, Wikipelli, Xxsmarthxx1, 237 anonymous edits
Base rate fallacy Source: http://en.wikipedia.org/w/index.php?oldid=522023101 Contributors: Aaron Kauppi, Aboulis, Aunt Entropy, Barrkel, Bettis211, Bryan Derksen, Diberri, Dragice,
Drilnoth, Eelamstylez77, EmersonLowry, Enochlau, EverGreg, Gary King, Greyengine5, Groyolo, Hairy Dude, Hanxu9, Harold f, Hob Gadling, Huji, Itinerant1, Jacob Robertson, Jcblackmon,
Jesin, JohnWEPurchase, Josang, Julian Brown, Lavenderbunny, Logicchecker, Lova Falk, MartinPoulter, MathewTownsend, Mindmatrix, Omicronpersei8, OnBeyondZebrax, Pgreenfinch, Pjf,
Qbk711, Quantic.quintic, Reedy, Reschly, Revolving Bugbear, Richard001, Rjwilmsi, Schmancy47, Schneelocke, SchuminWeb, Shirley Hou, Sietse, Silence, Taak, Tanner Swett, Tehcarp, The
Anome, WavePart, Wikihiki, Zack, Zingus, 122 anonymous edits
Belief bias Source: http://en.wikipedia.org/w/index.php?oldid=522780155 Contributors: Aaron Kauppi, Apgold, Duckalicious, Fatfingers, Grumpyyoungman01, Grutness, MartinPoulter,
MathewTownsend, Mecanismo, Noamohana, RichardF, Rjwilmsi, SP612, StatsTeacher, The Anome, Warmstar, 9 anonymous edits
Bias blind spot Source: http://en.wikipedia.org/w/index.php?oldid=497108534 Contributors: 2004-12-29T22:45Z, AED, Aaron Kauppi, Andreasmperu2008, Arthur Rubin, D4g0thur,
Flammingo, Grumpyyoungman01, Grutness, Gyan Veda, JHP, Jruderman, Karada, MartinPoulter, Taak, Thespian, 18 anonymous edits
Choice-supportive bias Source: http://en.wikipedia.org/w/index.php?oldid=510817866 Contributors: Aaron Kauppi, Aleksd, Bookschoice, Ccarlson6, Cogpsych, Edit650, JHP, JLaTondre,
JaGa, Jfurr1981, Kostmo, Markclark, MathewTownsend, Mattisse, Neutrality, Pichpich, RichardF, Rjwilmsi, Ruakh, SlayerBloodySlayer, SpaceFlight89, Taak, Thespian, Tom Morris,
Woohookitty, ^demon, 24 anonymous edits
Clustering illusion Source: http://en.wikipedia.org/w/index.php?oldid=520096828 Contributors: 2004-12-29T22:45Z, Aaron Kauppi, Atakdoug, Audacity, Bainemo, Bkell, BlazingThunder,
Boffob, Brandon5485, DavidWBrooks, Dcoetzee, Freude.schoner.gotterfunken, Fyyer, Ground, Gwalla, Hohum, Ieopo, Inky, Janko, Lambiam, Lazarus666, Localh77, Lova Falk, LtNOWIS,
MER-C, Makescleaf, McGeddon, Michael Hardy, Nandesuka, Nickg, Pavel Vozenilek, Peter.C, Rich Farmbrough, Rjwilmsi, Robin S, ScopyCat, Skeptiker, Taak, Tabor, Tangopaso, Tlogmer,
Zippanova, 42 anonymous edits
Congruence bias Source: http://en.wikipedia.org/w/index.php?oldid=488897155 Contributors: Aaron Kauppi, Anomalocaris, Bearian, Cmdrjameson, Heycam, Jon.baron, MathewTownsend,
Poliquin, RafaelRGarcia, The Anome, Thespian, 6 anonymous edits
Conjunction fallacy Source: http://en.wikipedia.org/w/index.php?oldid=527768433 Contributors: Aaron Kauppi, Andeggs, BenFrantzDale, Bryan Derksen, Bunnyhop11, Claytondaley,
Dfoxvog, Ed g2s, El mbs, Evencorrigeren, Fnielsen, Gaius Cornelius, Iridescent, Karada, Machine Elf 1735, MarkSweep, MartinPoulter, Mattisse, Maximus Rex, Michael Hardy, Mliggett,
Petiatil, Phillipsmcgee, Puellanivis, Qbk, Rasmus Faber, Rbarreira, Riccardofranco, SparxDragon, Strike Eagle, Taak, Unimaxium, Vyznev Xnebara, 38 anonymous edits
Conservatism (belief revision) Source: http://en.wikipedia.org/w/index.php?oldid=521119960 Contributors: Alexwagner, Andrewaskew, Auric, GEBStgo, Gregbard, Kiefer.Wolfowitz,
Melcombe, Porejide, Qwfp, Taak, 3 anonymous edits
Contrast effect Source: http://en.wikipedia.org/w/index.php?oldid=527665047 Contributors: Aaron Kauppi, Bulwersator, Charles Matthews, Danlock8, Danorux, Grutness, Jacobolus,
Jamesmcardle, Jennavecia, Johnkarp, Jonadin93, Katharineamy, Kpmiyapuram, Lugia2453, Mandarax, MartinPoulter, Neparis, Nuvitauy07, Nyttend, Psych psych, Qef, RichardF, Robert P.
O'Shea, Rsabbatini, SF007, Saber girl08, Sgeo, Shwimla, Terpsichoreus, Tevildo, The Anome, Thespian, Typhoon, WBNS, Wykypydya, Xanzzibar, 28 anonymous edits
Curse of knowledge Source: http://en.wikipedia.org/w/index.php?oldid=529335345 Contributors: Barumpus, Bearcat, Bellemonde, Benvewikilerim, CompliantDrone, Donthedev, Gracefool,
Heroeswithmetaphors, Katharineamy, Polskivinnik, Robert1947, Taak, 2 anonymous edits
Decoy effect Source: http://en.wikipedia.org/w/index.php?oldid=507283380 Contributors: Gnobal, Loqi, Rich Farmbrough, Rjwilmsi, Ruakh, Skapur, Wasabe3543, 12 anonymous edits
Denomination effect Source: http://en.wikipedia.org/w/index.php?oldid=526469929 Contributors: DustFormsWords, Grutness, Melchoir, Milowent, 1 anonymous edits
Distinction bias Source: http://en.wikipedia.org/w/index.php?oldid=467584855 Contributors: Aaron Kauppi, Avalon, Bugbrain 04, Fabrictramp, Gmoney5jr, Ioeth, Leolaursen, Mbrooks21,
Rjwilmsi, 3 anonymous edits
Duration neglect Source: http://en.wikipedia.org/w/index.php?oldid=506088439 Contributors: Evercat, Magioladitis, Rjwilmsi, Taak
Empathy gap Source: http://en.wikipedia.org/w/index.php?oldid=530670094 Contributors: 08alisalutsc, 2004-12-29T22:45Z, Aaron Kauppi, Bellemonde, Darkwind, Doczilla, Dravecky,
Framhein, F, Grutness, Guptakhy, Khazar, Lord Arador, Mattisse, Michael Hardy, Must We, Remuel, RichardF, Rjwilmsi, Ronbredo, Skrewler, SocialPsyc, Vegaswikian, Yale3000, 11
anonymous edits
Endowment effect Source: http://en.wikipedia.org/w/index.php?oldid=522652171 Contributors: ARG1900, Aaron Kauppi, Auric, CALR, ChrisGualtieri, Clefticjayjay, Cretog8, David-Sarah
Hopwood, Dillard421, Emfraser, Everything counts, FrankTobia, Heshacher, Isaacdealey, John Broughton, JohnKiat, Jweiss11, Karada, Lesath, MathewTownsend, Mattisse, Mogism, Monre,
NellieBly, Nirmanor, NoWikiFeedbackLoops, Paul Magnussen, Pgreenfinch, Psychobabble, Recury, Rich Farmbrough, RichardF, Rjwilmsi, Rlove, Rongrong.shu, Rory096, Sicherlich,
Simetrical, StradivariusTV, Taak, Teratornis, The Anome, Thespian, 38 anonymous edits
Essentialism Source: http://en.wikipedia.org/w/index.php?oldid=528116499 Contributors: 2002:CFC2:E35E:B:21E:C2FF:FEA0:E72, Abiyoyo, Abuskell, Adomh, Adoniscik, Algae,
Alinerbeaner, Anarchia, Andres, Anon423, Aykantspel, Aztolens, Bataille23, Beland, Bobo192, Bordello, Byelf2007, Camerong, Can't sleep, clown will eat me, Celtlen, Cesarschirmer, Chris the
speller, ChrisChantrill, Clicketyclack, Cybercobra, DNewhall, DSatz, Daedalus71, DancingPenguin, Dannyvocal, Dbachmann, Delaraha, Djmckee1, Dysepsion, Dzethmayr, EPM, Eldamorie,
Evercat, Fearquit87, Fokion, Georg Fedor, George100, Goethean, Goldavius, Green caterpillar, Gregbard, Guanaco, Hippalus, HotshotCleaner, Hriber, Hyacinth, Iamthedeus, Infophile, Infovoria,
Itafroma, J.delanoy, J04n, JWLogue, Jamesofur, Jaredwf, Jbmurray, Jfraatz, Joel Justiss, John Wilkins, Jon Awbrey, Juliancolton, Lauranrg, Leatherstocking, Levineps, Lgbtoz, Life of Riley,
Lucidish, M.O.X, MER-C, Madprofessional, Magister Mathematicae, Mani1, Mendel, Mercury, Mgoodyear, Nathan Laing, Ndaco, Nonexistant User, Novickas, Ontoraul, Owen, Patrick0Moran,
Paul Barlow, PedR, Penyulap, Philogik, Pion, PotatoSamurai, Quadell, Ramaksoud2000, RedWordSmith, Redrose64, Richard001, Rjwilmsi, Robina Fox, RodC, Rorybowman, Rtc, Rumostra,
SP612, Sam Hocevar, Sam Spade, Samlabrier, Santa Sangre, SchreiberBike, ScienceApologist, Silentelkofyesterday, SimonP, SlipperyN, Slrubenstein, StaticElectric, SteveMcCluskey,
Stevertigo, T of Locri, Taoboy49, Template namespace initialisation script, Tesseract2, The Famous Movie Director, Tim bates, Tommy2010, Toric, Twinsday, TyrS, Wetman, Whoosit, Will
Ockham, William Avery, Yonghokim, Yst, Yvwv, 148 anonymous edits
Experimenter's bias Source: http://en.wikipedia.org/w/index.php?oldid=494403939 Contributors: 2over0, Adrian-from-london, Amatulic, Arctic.gnome, Bender235, Bert56, Btyner,
Captain-n00dle, Chris the speller, Cpl Syx, Davidmack, Doczilla, Eewild, Emble64, Fnielsen, Galloping Ghost U of I, Henrygb, Jasongilder, Jfromcanada, Kostmo, Lethe, Limulus, Qwfp,
Rjwilmsi, Rsabbatini, Sisyphustkd, Steve carlson, Tide rolls, Uncle Milty, WikHead, 22 anonymous edits
False-consensus effect Source: http://en.wikipedia.org/w/index.php?oldid=521921813 Contributors: 2602:306:2472:B0D9:125:AB2C:D914:244F, Aaron Kauppi, Alan Liefting, Avb, Avihu,
Bishonen, Black Shadow, Bus Bax, Christopherlin, Coin945, Cretog8, David Delony, Dejitarob, DerHexer, Hazard-SJ, Heidimo, JaGa, Jabowery, Jake Wartenberg, Johnkarp, Jonathan Kovaciny,
JorisvS, Karada, Kesal, Koavf, Lawrencekhoo, Lotje, MartinPoulter, Mattisse, Midway, Namazukage7, Northamerica1000, Nroese, Piano non troppo, PoisonedQuill, RJBurkhart, RichardF,
RyanHoliday, Skysmith, Sslevine, Taak, Thanatos7474, TheRegicider, Thefreemarket, Tp1024, Trymybest, Vandypsyddbc, WhatamIdoing, 37 anonymous edits
Functional fixedness Source: http://en.wikipedia.org/w/index.php?oldid=525540924 Contributors: Arbitrarily0, Artichoke-Boy, Bahboom03, Captain Quirk, Closedmouth, Damian Yerrick,
Dcflyer, Dental, DragonflySixtyseven, EPM, Gary King, GregorB, Hinrik, Imnotminkus, Jackk, Jdick1, Jdralphs, John Vandenberg, Klnorman, Mattisse, Mboverload, Perfect Proposal, Piotrus,
Pmanderson, Reedy, Robert Stevenson, Salthizar, Stepha, SteveBaker, StriderTol, Teratornis, VsevolodKrolikov, Youneedmoretrees, 70 anonymous edits
Forer effect Source: http://en.wikipedia.org/w/index.php?oldid=530567405 Contributors: 806f0F, ALargeElk, Aaron Kauppi, Aeroknight, Android Mouse, Andy4226uk, Andycjp, Argumzio,
Ashmoo, Astrologist, AxelBoldt, BevvyB, Big Bird, Bobrayner, Brinerustle, Ccacsmss, Chealer, Cognoscente18, Consumed Crustacean, Cosmic Latte, Cybercobra, David Edgar, David H Braun
(1964), DavidFarmbrough, Denisgomes, Diberri, Doczilla, Download, Drbug, Dream of Nyx, Dugwiki, Dupontgu, EEng, Fnielsen, GreenReaper, Henrygb, Hu12, Iohannes Animosus, Ioverka,
Jazzwick, Jdevola, Jefffire, Jezcentral, Jfliu, JoeSmack, Jokestress, Jonathan.s.kt, Jonathanbishop, Jynus, Kelner, Kokey, Krelnik, Kvn8907, Livajo, Lova Falk, Martarius, Mattisse, Mboverload,
McGeddon, Mikker, Mrwojo, Mushroom, Myanw, NL Derek, Nathan Johnson, Of, Padillah, Panslabrinth, Paul Magnussen, Phe, Poko, Pppwikiwikime, Quiddity, Rabidwolfe, Rainbrow Man,
Repku, Rich Farmbrough, RichLow, RichardF, SGGH, SURIV, Samuella, Sardaka, Skomorokh, Smyth, SpeedyGonsales, Stevage, Steven Walling, T@Di, Taak, Tfiascone, The bellman,
Themindset, Tom harrison, Tom.Reding, Trivialist, UnicornTapestry, Urhixidur, Vanished user 47736712, Vaughan, Vrenator, WikiDoctorchecker, William Pietri, WinkJunior, Xnuala, -jlfr,
107 anonymous edits
Framing effect (psychology) Source: http://en.wikipedia.org/w/index.php?oldid=530272433 Contributors: Aaron Kauppi, Alfredo ougaowen, Cogpsych, Cretog8, Dark Silver Crow,
DarwinPeacock, Dzforman, Emurphy42, Fabrictramp, Glaje, Hazard-SJ, Helvetius, Hwang.joshua, Jodyng888, Kebes, Lova Falk, MartinPoulter, MathewTownsend, MatthewVanitas, MrOllie,
PeterEastern, Rjwilmsi, Stefanomione, Superp, Tisane, Ucucha, WissensDürster, Wodrow, 18 anonymous edits
Gambler's fallacy Source: http://en.wikipedia.org/w/index.php?oldid=530273222 Contributors: 2004-12-29T22:45Z, 2005, AKGhetto, Aaron Kauppi, Alex W, AlexanderM, Alksub, Aly89,
Andeggs, Andyroo316, Antarctic-adventurer, Aoxiang, Areldyb, Ashley Pomeroy, Aveekbh, Avirunes, AxelBoldt, Baccyak4H, Badger Drink, Banedon, Barklund, Bender235, Bfinn, Bigturtle,
Bkell, Blue Tie, Brad78, Bryan Derksen, Bush6984, CSTAR, Cablehorn, Calair, Camaj, Camw, Cgwaldman, Cjrcl, Cmglee, CobaltBlue, Constructive editor, Conversion script, Courcelles,
Cyclist, D o m e, DanielCD, DavidDouthitt, DavidWBrooks, Day viewing, Dcoetzee, Den fjättrade ankan, Deor, DocWatson42, Doniago, Donnaidh sidhe, DragonHawk, E090, Edward,
Electricbassguy, Emurphy42, Enric Naval, Eurosong, Evanreyes, Father Goose, FeatherPluma, Feezo, Fiveclubs, Fleecemaster, FoCuSandLeArN, Fosterd2, Fr, Furrykef, GDstew4, GVnayR,
Gayasri, Gazpacho, Giftlite, Gonzalo Diethelm, Grace Note, Graham87, GreenReaper, Gregbard, Grumpyyoungman01, Gwern, HAGADAG, Headcase88, Heron, Horovits, HumphreyW, Hyphz,
Iceberg3k, Jaguar9a9, Jasperdoomen, JesseRafe, Jimjam27, Jnestorius, Jokes Free4Me, Julesd, Karada, Kazvorpal, Labans, Lenoxus, Lifefeed, Liko81, LilHelpa, LukeH, Luqui, Magog the Ogre,
Malcolm Farmer, MathHisSci, McGeddon, Melchoir, Melcombe, Meno25, Michaelbluejay, Mike Van Emmerik, Molinari, Musiphil, Navigatr85, Nbarth, Netsumdisc, Notoldyet, NykeYoung,
O'kelly, Orthologist, Ozkaplan, PAR, Pacomartin, Pakaran, PanagosTheOther, Pat Hayes, PatrikR, Pigman, Pimnl, Pratik.mallya, Quarl, Quiddity, Qwfp, RatnimSnave, Rbarreira, Reki, Rhalah,
Rjwilmsi, Roma emu, Ruzbehabbasi, S2000magician, SCF71, Sandebert, Sbyrnes321, Serious stam, Shantavira, Sietse Snel, Silence, Slicing, SmartGuy Old, Smjg, Snoyes, Sockatume, Songm,
Spoon!, Statoman71, StuRat, Superm401, Taak, Takwish, Tamfang, Tarquin, The Anome, TheFix63, TheOtherStephan, Thumperward, Timo Honkasalo, Tomeasy, Tomisti, Torbad,
UltimateHombre, Uusijani, Uvaphdman, Vbailo, Vicki Rosenzweig, Vonbontee, Waleswatcher, Wolfkeeper, Woodstone, Wotnow, Wrp103, Xanzzibar, 192 anonymous edits
Hindsight bias Source: http://en.wikipedia.org/w/index.php?oldid=530212244 Contributors: 1-is-blue, Aaron Kauppi, Acidjazz1, Adiseven, Arno Matthias, Ashmoo, Auric, Baseball Bugs,
Billymac00, Bnosek, CCRoxtar, Can't sleep, clown will eat me, Cmcalder, Cogpsych, Cookiehead, David Gerard, Dcoetzee, Diza, Dzhim, EEaton, Elekhh, Epeefleche, Exiled Ambition,
GenDanvs, Gilliam, Gioto, Ground Zero, Grumpyyoungman01, Hraefen, IceKarma, Inimino, Irimi, JorisvS, Karada, Kevin Gorman, Khazar, Kimleonard, LOL, Lamro, Lawandeconomics1,
Letranova, LoveMonkey, Maikel, MartinPoulter, MathewTownsend, Mattisse, Michael Hardy, Mild Bill Hiccup, Mmendis, Mogism, Mpontes, NewEnglandYankee, Nickg, Northern bear,
PMLawrence, Peterlewis, Piledhigheranddeeper, Psyc3330 w11, RichardF, Rjwilmsi, Sam Spade, Shakescene, Sikyanakotik, Sillyjack, Skywalker415, Sordon1234, Spidey104, Taak, Tabletop,
Tariqabjotu, Tktktk, Twistedream13, Txomin, Tyciol, Ubiq, Unreal7, Vadmium, Wetman, Wikidea, Wtatour, Wykypydya, Xenlab, 89 anonymous edits
Hostile media effect Source: http://en.wikipedia.org/w/index.php?oldid=512126904 Contributors: Aaron Kauppi, Afa86, Andrewaskew, Bender235, Bookowl, Circeus, Crosbiesmith,
DanielCD, Dcoetzee, Duncharris, Dusty78, Ec5618, Florian Blaschke, Ginagrammatica, Hall Monitor, Harizotoh9, Hkimscil, Homo sapiens, Humblefool, Humus sapiens, Ida Shaw, J.smith,
J04n, JQF, Jfdwolff, Johnthescavenger, Jruderman, Jwillbur, Karada, Keith-264, Ken Gallager, Livewireo, MartinPoulter, Mattbuck, Mattisse, Maurreen, Michael Safyan, Mr. Wood,
NorthernThunder, Padidliwa, Palmiro, RichardF, Rick Norwood, Rjwilmsi, SimonP, Spot87, Subversive.sound, Taak, Tempshill, Unara, Uriber, VeryVerily, Viajero, 26 anonymous edits
Hyperbolic discounting Source: http://en.wikipedia.org/w/index.php?oldid=522720487 Contributors: 1-is-blue, 271828182, Aaron Kauppi, AaronSw, Allion, Andrem07, Atticusdrew,
Bender235, Berendt, Boing! said Zebedee, Btyner, Can't sleep, clown will eat me, Cancan101, Ciphergoth, Cybercobra, David Eppstein, Devadatta, Dmmaus, Drpickem, Edward, Eon01,
Gainslie, IamNotU, Insignia1983, John Quiggin, Karada, Kgrad, Khazar2, Lova Falk, Lpetrazickis, Mandarax, MarginalCost, Matthewcgirling, Matthewfallshaw, Moxfyre, NathanHurst,
PegArmPaul, Pgreenfinch, Pkearney, Quorn3000, R'n'B, Redglasses, Rjwilmsi, Rl, Seglea, Taak, Temporaluser, Ticoneva, Tobacman, Undead warrior, Wrelwser43, WriterHound, 61 anonymous
edits
Illusion of control Source: http://en.wikipedia.org/w/index.php?oldid=524422632 Contributors: Aaron Kauppi, Andreasmperu2008, Andrewaskew, Archola, Axblood, Belovedfreak,
Byelf2007, Charles Matthews, ChicJanowicz, Croweml11, Edgar181, Emperor, Giraffedata, Gmukskr, Gregbard, Hanxu9, Karada, Kevinkor2, Kwertii, Magenta1, Markoc, MartinPoulter,
MathewTownsend, Narssarssuaq, Noblige, Peterdjones, Populus, Presearch, Rfl, Rich Farmbrough, Rjwilmsi, Sam Spade, Schwnj, SleekWeasel, Taak, Tesseract2, The Anome, Thespian, WAS,
Xyzzyplugh, 27 anonymous edits
Illusion of validity Source: http://en.wikipedia.org/w/index.php?oldid=498971850 Contributors: Rjwilmsi, Sun Creator, Taak, 1 anonymous edits
Illusory correlation Source: http://en.wikipedia.org/w/index.php?oldid=525487227 Contributors: 1000Faces, Aaron Kauppi, Aarre, Adashiel, Arbitrarily0, Avicennasis, BL Lacertae, Bkel,
Bolgj, BrooxBroox91, Chester Markel, Chriszuma, DynamoDegsy, Eleland, Fotherge, Gretchen Tiddlywinks, InfoCmplx, Jennisama84, Karmela, Kelly Martin, Khazar2, Kschutz, Lova Falk,
MartinPoulter, MathewTownsend, Mav, Mrrhum, NinjaKid, Picaroon, Poliquin, Qwfp, Rebecca.lidh, Rjwilmsi, TFF 23, The Anome, U3964057, Uncle G, Wizardman, 32 anonymous edits
Information bias (psychology) Source: http://en.wikipedia.org/w/index.php?oldid=457016985 Contributors: Aaron Kauppi, Alansohn, Bporopat, Can't sleep, clown will eat me, Cmdrjameson,
Croctotheface, Hanxu9, MUNpsych, Mikael Häggström, Poliquin, The Anome, Whitepaw, Yingsen, 16 anonymous edits
Insensitivity to sample size Source: http://en.wikipedia.org/w/index.php?oldid=519290452 Contributors: Magioladitis, Mistercow, Rjwilmsi, Sun Creator, Taak, 1 anonymous edits
Just-world hypothesis Source: http://en.wikipedia.org/w/index.php?oldid=525903161 Contributors: 2001:708:30:1480:D593:D9DE:4CF:1894, 2004-12-29T22:45Z, 20040302, Aaron
Brenneman, Aaron Kauppi, Achowat, Antaeus Feldspar, Audriusa, Azcat1997, Badger Drink, BenFrantzDale, Benandorsqueaks, Bender235, CaptainVideo890, Careds44, Cheaal01, DrL,
Edgar181, Eep, EurekaLott, Exeunt, Fragglet, Frédérick Lacasse, George Ponderevo, Gioto, Gregbard, Gunza, Gurch, Gökhan, Halil1, Icarus3, Ilkali, Jayen466, Jesin, John Bessa, JohnOwens,
Jokestress, JoshuaZ, Kai-Hendrik, Karada, Lisamh, Lova Falk, Machine Elf 1735, Meshach, Mindmatrix, Mohehab, Naomi Gratis, Niceguyedc, Paulscrawl, Penbat, Piledhigheranddeeper, Pnrj,
Portillo, R'n'B, Rebroad, Rjwilmsi, SkpVwls, Soultaco, Steven X, SummerWithMorons, TJRC, Taak, The Anome, TurtleTrax, U3964057, Ubiq, Vipinhari, Vladkornea, Wolfdog, Yngvadottir,
106 anonymous edits
Less-is-better effect Source: http://en.wikipedia.org/w/index.php?oldid=506091628 Contributors: Bearian, Magioladitis, Rjwilmsi, Taak
Loss aversion Source: http://en.wikipedia.org/w/index.php?oldid=529417370 Contributors: Aaron Kauppi, Amorymeltzer, Andycjp, Atakdoug, BenjaminBarber, Can't sleep, clown will eat me,
Charlespeirce11, Choess, Ciphergoth, Cosfly, Cpryby, Cretog8, DVirus101, Dw31415, E0N, Epeefleche, Ethanpew, Fmerenda, GAdam, Galinarou, Gnfnrf, Harryfdoherty, Iaoth, Jlngjlng, John
Broughton, John Quiggin, JohnChrysostom, Jzietz, Karada, Leelee Sobieskihamper, Luk, Mandarax, MartinPoulter, MathewTownsend, Maurreen, Maxcanada, Mydogategodshat, Netsumdisc,
Outofthebox, Proberts2003, Psychobabble, Quiddity, RandomP, Reagle, Regancy42, RegentsPark, SHIMONSHA, Sewadj, Stefano85, Sunbeam44, Taak, Tabledhote, The Gnome,
Themightyrambo, Umutarikan, Visualday, WolfmanSF, WpZurp, ZildjianAVC, 80 anonymous edits
Ludic fallacy Source: http://en.wikipedia.org/w/index.php?oldid=516079793 Contributors: Aaron Kauppi, Andycjp, Arthur Rubin, BigHairRef, Boffob, Chris 73, Ciphergoth, Cretog8,
Cybercobra, Edenscourt, Epistemeter, Eric Kvaalen, Everyking, FeralOink, Gregbard, Hakeem.gadi, Herda050, Heron, Horrisgoesskiing, Intellectual47, JMSwtlk, Jrf, Jweiss11, KelleyCook,
Khazar, Lachambre, Lamro, Lasindi, Lolo Sambinho, LoveMonkey, MER-C, Malcolmxl5, Merriam, Michael Hardy, Nbarth, OrangeDog, Paradoctor, ReverendDave, Robert Brockway,
Rrostrom, Schi, Skomorokh, Srich32977, Staszek Lem, Stephen378, The Mysterious El Willstro, Uranographer, VinceBowdren, YechezkelZilber, Znmlnkth, 39 anonymous edits
Mere-exposure effect Source: http://en.wikipedia.org/w/index.php?oldid=494940346 Contributors: Aaron Kauppi, Aeternus, AndyJones, Brighterorange, Causa sui, Ciphergoth, CloudNine,
Cogpsych, Cymru.lass, Dcoetzee, Dpr, Guslacerda, Harb7833, Janviermichelle, Jcbutler, Johnkarp, Jonah.harris, Jonathan.s.kt, JorisvS, Joyous!, Karada, LilHelpa, MER-C, MartinPoulter,
Masterofpsi, Mattisse, Nectarflowed, Oakraiders3184, Pathoschild, Psp2010, Psychonaut, Rhinoracer, RichardF, Sadi Carnot, Sam Spade, Schwnj, Snacks good, Steven X, Subversive.sound,
Taak, Tabletop, The Magnificent Clean-keeper, TheProject, Woohookitty, 41 anonymous edits
Money illusion Source: http://en.wikipedia.org/w/index.php?oldid=524415930 Contributors: Anual, Atlastawake, Beland, Bender235, Bob Hu, Bombastus, Byelf2007, Byronmercury, Calm,
CommonsDelinker, Copyeditor42, Cybercobra, Dkish1, DontClickMeName, Dusik, Esaintpierre, Fplay, Fuzzbox, Fyrael, Gareth Jones, Grafen, Gregbard, Jasonbook99, John Nevard, John
Quiggin, Johnkarp, Karada, Magioladitis, Magister, MartinPoulter, MathewTownsend, Mfulvio, MrBurns, One, Pamar1908, Pgreenfinch, Plastikspork, Psychobabble, Rich Farmbrough,
Rjwilmsi, Sam Hocevar, SimonP, Taak, TheGeoffMeister, Thomasmeeks, Unimath, X!, XMCHx, 43 anonymous edits
Moral credential Source: http://en.wikipedia.org/w/index.php?oldid=522959124 Contributors: Dozerbraum, Dweezle7, Grutness, Hongmt, Keith Cascio, Loogel, Pathoschild, Peace01234,
Rallette, Rjwilmsi, U3964057, 2 anonymous edits
Negativity bias Source: http://en.wikipedia.org/w/index.php?oldid=529562663 Contributors: Aaron Kauppi, Batard0, Belovedfreak, BrandiCarolyn, Brighterorange, Cmdrcool,
CommonsDelinker, Doczilla, Edit650, Editor Emeritus, Fnielsen, Funandtrvl, Geoffjw1978, Gobonobo, HatchMcGatch, Inhumandecency, JorisvS, Kenton Scott, Lockley, Magioladitis,
Mandarax, Mattisse, Mogism, Moonriddengirl, PonyToast, Recury, Rjwilmsi, Selket, Ups46694, Wolfdog, YK Times, 10 anonymous edits
Neglect of probability Source: http://en.wikipedia.org/w/index.php?oldid=525872719 Contributors: Aaron Kauppi, Arno Matthias, Can't sleep, clown will eat me, Danman3459, Exeunt,
Groyolo, Jon.baron, Loma66, Primalmoon, The Anome, 3 anonymous edits
Normalcy bias Source: http://en.wikipedia.org/w/index.php?oldid=526783504 Contributors: 4RugbyRd, Aaron Kauppi, Crystallina, Goodwin-Brent, Gunnanmon, Hibana, J04n,
LaFolleCycliste, Lova Falk, Malcolma, Meclee, Mheenan, Michaeltaft, Niteowlneils, Piotrus, Remedia8, Rjwilmsi, SchreiberBike, Snorre, StvnLunsford, Teratornis, Woohookitty, X736e65616b,
Zstauber, 35 anonymous edits
Observer-expectancy effect Source: http://en.wikipedia.org/w/index.php?oldid=508874056 Contributors: 2over0, Aaron Kauppi, Akeron, Alienlifeformz, Andycjp, AstroHurricane001,
BD2412, Brandon5485, BullRangifer, Chealer, Circeus, Darrenhusted, DavidWBrooks, Dino, Doctor Dodge, Draconiszeta, Ed Poor, Elabro, Emc2, Grutness, Herd of Swine, JH-man, Jokl,
Karada, Kermit2, Koavf, Ksyrie, LMackinnon, Lectonar, Lindsay658, Lova Falk, Mattisse, Maury Markowitz, Mikael Häggström, MrX, Neutrality, Notoldyet, Oligomous, One Salient Oversight,
Paulduffill, Plastikspork, Res.being, Richard001, RichardF, Roberto Almeida, Rul3rOfW1k1p3d1a, Salsb, Steve carlson, Themightyquill, Unreal, WLU, Wikid77, 29 anonymous edits
Omission bias Source: http://en.wikipedia.org/w/index.php?oldid=517058838 Contributors: Aaron Kauppi, Echo927, Harro5, Headwes, Jarble, John Cross, Nutcracker, Rami R, Sergei
Peysakhov, The Anome, 10 anonymous edits
Optimism bias Source: http://en.wikipedia.org/w/index.php?oldid=521546678 Contributors: 4RugbyRd, Aaron Kauppi, Allion, Cheapskate08, Circeus, Cutler, Cybercobra, Ehheh, Elektrik
Shoos, Gsaup, J04n, Jim Sukwutput, JonPoley, Joseph Solis in Australia, Kwhitten, Leibelk, Lova Falk, MartinPoulter, MathewTownsend, Mdd, Mileworth, Nclean, Northamerica1000, Pilgaard,
Pm master, Pnm, ProjectSavior, Psinu, Quisquillian, Redhanker, Renesis, Rjwilmsi, Seglea, Simon Kilpin, Smcg8374, SocialNeuro, Sonderbro, Ssscienccce, Sun Creator, The Thing That Should
Not Be, Thumperward, Tisane, Tom Morris, Trigger hurt, Twinsday, Wcby205, 60 anonymous edits
Ostrich effect Source: http://en.wikipedia.org/w/index.php?oldid=529398643 Contributors: 72Dino, Andycjp, Biscuittin, Cretog8, Cybercobra, Fabrictramp, Father Goose, KConWiki,
MartinPoulter, Mikhailovich, Nem0sum, Onyu2008, Pan Dan, Rackabello, Radagast83, Rjwilmsi, Solomonfromfinland, Speednat, Taak, 7 anonymous edits
Outcome bias Source: http://en.wikipedia.org/w/index.php?oldid=415004403 Contributors: Aaron Kauppi, Absentis, Comrade Graham, Engineeringsimon, Hanxu9, JHunterJ, JubalHarshaw,
Loodog, Merehap, Mrwojo, Pseudomonas, Shnookle72, The Anome, Wykypydya, 3 anonymous edits
Overconfidence effect Source: http://en.wikipedia.org/w/index.php?oldid=529132146 Contributors: A crazy cranium, Aaron Kauppi, Ancheta Wis, Argumzio, Arno Matthias, Ashertg,
Bender235, CIreland, ChrisKnott, Cretog8, CyberSkull, DarkAdonis255, DigitalNinja, Donandrewmoore, Ewlyahoocom, Funguscheese, Geoffrey Pruitt, Giraffedata, Gmukskr, Iaoth,
Isaacdealey, Jacobisq, JorisvS, Karoch, KillerChihuahua, Lamro, LilHelpa, Lova Falk, MartinPoulter, Martinlc, Mattisse, Muboshgu, Penbat, Pilcrow, RJBurkhart, RichardF, Sam Spade,
SheeEttin, Simon1223hk, Smalljim, Theconfidenceman, Timrichardson, Tomeasy, Wikiliki, WissensDürster, 36 anonymous edits
Pareidolia Source: http://en.wikipedia.org/w/index.php?oldid=530675940 Contributors: 2001:470:816F:0:14B7:43DC:739D:CF92, 2004-12-29T22:45Z,
2A01:E35:2EF7:6150:B413:5AF9:660:D975, 7&6=thirteen, 83d40m, A2Z, AV3000, Aaron Kauppi, Adamantios, Agamemnon2, Agota, Aiken drum, Al Lemos, Alan Liefting, Ankimai, Arno
Matthias, Arrivisto, Asenine, AstroHurricane001, Audacity, Augustosfaces, Axeman89, BD2412, Befuddled steve, Beland, Beyondsolipsist, Binksternet, Bluefist, Bostwickenator, Brandon5485,
Cablehorn, Cacycle, Calicocat, Can't sleep, clown will eat me, Celuici, Cgingold, Chinju, Chris Capoccia, Chrisorapello, Chrissmith, Citizen Premier, Cmdrjameson, CobraWiki, Codicorumus,
CoombaDelray, Crystalroseluv, Csernica, Cybercobra, Cynwolfe, DSatz, Daniel J. Leivick, Darrenhusted, David Kernow, DavidWBrooks, Davkal, Deflective, Deglr6328, Dirkbike, Djdole,
Docether, Dorgan, Drbreznjev, DreamGuy, Drugonot, Duomillia, Dureo, Dyanega, Ec5618, Edhubbard, Embe111, Emilylbaker, Emurphy42, Exok, Fabrice Ferrer, Fama Clamosa, Flcelloguy,
Fotaun, Gadz, Gelbukh, Genghisgandhi, Geoff B, Ghelae, Ghostexorcist, GorillaWarfare, Grendelkhan, Groyolo, Gwalla, Halsteadk, HamburgerRadio, Heah, Hellbus, Hormigo,
ILoveGarrysmod, Icy118, Incnis Mrsi, IsmaelCavazos, JATorres6, JFlav, JJ Harrison, JMCC1, JamesMLane, Jchthys, Jclerman, Jeffq, Johann Gambolputty, Johnuniq, Joseph Banks,
JoshHolloway, JuanTres, Justaperfectday, Karada, Kazvorpal, Kencf0618, KennethBarnes, Keraunoscopia, Kieff, Koven.rm, Krsont, Kwamikagami, Kylemcinnes, Light current, Lotje,
MacGyverMagic, Macedonian, Machead, Magnetic hill, Maproom, Mark Renier, Martarius, Martinevans123, Marx01, Master Jay, Maury Markowitz, Mauveipedia, McSly, MementoVivere,
Mendaliv, Metapsych27, Michael Hardy, Miklos legrady, Mimzy1990, Mizunoryu, Mogism, Nagelfar, Nameyxe, Nestify, Nivas28, Nufy8, Omnipaedista, Osiris333, Ost316, Pablo-flores, Paul
Richter, Paulburnett, Paulnasca, Perey, Perfectblue97, Plantigrade, Plasmic Physics, PoccilScript, Portillo, President Rhapsody, Preslethe, Psychonaut3000, Pt, Quiddity, Quinet, Raistlinknight,
Randallmcdabb, Rjwilmsi, Robin Johnson, Roi 1986, Rui Silva, Rursus, Saintlink, Saltywood, Scientific29, Seduisant, Seraphita, Sergeant Cribb, Shawn81, Shenme, Shirahadasha, Shirudo,
Sibanak, Sonjaaa, Speedoflight, Spiral5800, Subfonic, SudoGhost, Swdev, Synthe, THB, Taak, Tecsaz, Teratornis, Tevildo, ThePedanticPrick, Thrissel, Til Eulenspiegel, Timotheus Canens II,
Toddst1, Tommy2010, Treeman1234, Tronno, Twang, Twthmoses, Undomelin, V-Man737, Valugi, Vegaswikian, Velella, Violetmermaid, Viriditas, Vsion, Vuo, Wetman, Wikipedian231,
WikipedianProlific, WurmWoode, Xanzzibar, Xasodfuih, Yomna 1, Zzyzx11, 207 anonymous edits
Pessimism bias Source: http://en.wikipedia.org/w/index.php?oldid=491636057 Contributors: Bearcat, George Ho, Grutness, MartinPoulter, Rjwilmsi, Tisane, Xezbeth, 4 anonymous edits
Planning fallacy Source: http://en.wikipedia.org/w/index.php?oldid=512859155 Contributors: 4RugbyRd, A bit iffy, Aaron Kauppi, Acidjazz1, Allion, Boffob, Cheapskate08, DJ Clayworth,
DavidLevinson, Dontdoit, Ehheh, Elpincha, Engi08, Epeefleche, Eric Hawthorne, Gsaup, Ike9898, Jackvinson, JonDePlume, Karada, Kwhitten, MartinPoulter, MathewTownsend, Mbarbier,
Michael Hardy, Pilgaard, Pm master, Poli08, SchuminWeb, Sonderbro, Taak, The Anome, Trilliumz, TuomoPaqvalin, Van helsing, Widefox, 17 anonymous edits
Post-purchase rationalization Source: http://en.wikipedia.org/w/index.php?oldid=512281749 Contributors: Aeluwas, Aymatth2, Benlisquare, Blazemore, Bluetooth954, CanadianPenguin,
Closedmouth, Corbenine, Denisarona, Dizzious, Excirial, Gfoley4, Googfan, Grayfell, GregorB, Grutness, Havermayer, Hqb, Jim1138, Jtneill, Kbh3rd, Kostmo, LFaraone, MartinPoulter,
Medicineluke, Mejogid, Nixeagle, Odie5533, Oligophagy, PKT, Penbat, Pikamander2, PlasticPackage, Snigbrook, Tbhotch, The Anome, Wikipelli, ZildjianAVC, Zzuuzz, 141 anonymous
edits
Pro-innovation bias Source: http://en.wikipedia.org/w/index.php?oldid=524507238 Contributors: Buddy23Lee, Chowbok, DoctorKubla, Emeraude, Malcolma, Mbamark, 1 anonymous edits
Pseudocertainty effect Source: http://en.wikipedia.org/w/index.php?oldid=488718082 Contributors: Aaron Kauppi, Bluemoose, Charles Matthews, Grutness, Jodyng888, Loudsox,
MathewTownsend, Mattisse, Maurreen, Mostargue, RichardF, Rjwilmsi, Smmurphy, The Anome, 7 anonymous edits
Reactance (psychology) Source: http://en.wikipedia.org/w/index.php?oldid=506340880 Contributors: Abiliocesar, Adambro, Atlantia, B.S. Lawrence, Chemturion, Curlie, Dicklyon, Dinomite,
DrMel, Dravecky, Efiiamagus, Ferrarimangp, Fifelfoo, Fratrep, Fubar Obfusco, GregorB, Gypsydancer, Henrysteinberger, Internoob, Iridescent, IronGargoyle, JamesAM, Jjron, Jmah, Johnor,
JorisvS, KHAAAAAAAAAAN, Mandarax, Mattisse, Pathoschild, Pbugyi, Pegship, PhnomPencil, Piercetheorganist, Poco a poco, Psywikiuser, Recury, Sam Spade, SchreiberBike, Sketchmoose,
Steve carlson, Stifle, That Guy, From That Show!, Topbanana, Versageek, Wikieditor1988, 50 anonymous edits
Reactive devaluation Source: http://en.wikipedia.org/w/index.php?oldid=522654680 Contributors: Magioladitis, Paul Magnussen, Taak, 2 anonymous edits
Serial position effect Source: http://en.wikipedia.org/w/index.php?oldid=524639218 Contributors: 12dstring, Aaron Kauppi, Beland, Bfinn, Brossow, Dhaluza, Discospinster, Doczilla,
Egboring, Elizkatz, Eshana89, François Pichette, Gary King, HereToHelp, Hintswen, IanManka, Jarble, Jason Quinn, Jeff3000, Kevin.strong, Kku, Kpmiyapuram, Lova Falk, MER-C,
Malcolmxl5, Mattisse, Mikemoral, Mindmatrix, Mkahana, Obli, Piledhigheranddeeper, RichardF, Rkj22, SJS1971, Sonicyouth86, Taak, 61 anonymous edits
Recency illusion Source: http://en.wikipedia.org/w/index.php?oldid=528104307 Contributors: Bedivere, Cnilep, Entail, Jmk, Keith Cascio, Kyoakoa, Pol098, Tabledhote, Windharp, 23
anonymous edits
Restraint bias Source: http://en.wikipedia.org/w/index.php?oldid=513536079 Contributors: Ahart1psy, Sun Creator, ThePastIsObdurate, 2 anonymous edits
Rhyme-as-reason effect Source: http://en.wikipedia.org/w/index.php?oldid=506094709 Contributors: Archieboy2, Magioladitis, Taak, 4 anonymous edits
Risk compensation Source: http://en.wikipedia.org/w/index.php?oldid=528120836 Contributors: ADM, Adambro, Alanbraggins, Alex Sims, AmericanEnglish, Aslakken, Bearian, Beefman,
Beland, Boonukem, Brian the Editor, Cutler, Dan100, Daniel J. Leivick, DeFacto, Dennis Bratland, Dhodges, Ephebi, Fmalina, Fratrep, George Ponderevo, Gfbs, GregorB, InTheZone, J36miles,
JHP, John Nevard, Just zis Guy, you know?, JzG, LMB, Mattisse, Mokgand, Nubiatech, Old Moonraker, PPBlais, Parrot of Doom, Peregrine981, PeterEastern, Prumpf, Pruss, RedWolf, Richard
Keatinge, RichardF, Rjwilmsi, Russell Thomas, SabreWolfy, Sf, Sharkford, Snori, SrJoben, Srich32977, Steinsky, TeamZissou, Triku, Varitek, Walraslaws, 45 anonymous edits
Selective perception Source: http://en.wikipedia.org/w/index.php?oldid=526104389 Contributors: Aaron Kauppi, Adresearchpro, Alan Au, Amtiss, Bjoram11@yahoo.co.in, Duke toaster,
Embram, Emrahertr, Gyan Veda, Jleboeuf, Karada, Lithoderm, Mattisse, Myasuda, Nsanshaman, Pmanderson, Rschmertz, Skrshnn, Spidern, TSullivan, Taak, Tiffuny, Ugncreative Usergname,
20 anonymous edits
Semmelweis reflex Source: http://en.wikipedia.org/w/index.php?oldid=499015448 Contributors: Adam78, AguC, Auric, Bdamokos, Bender235, Boffob, DragonflySixtyseven, Florian
Blaschke, LilHelpa, LordIlford, Margaret9mary, Misarxist, Polisher of Cobwebs, Power.corrupts, Seb az86556, Seren-dipper, Tapalmer99, William Avery, Wireless Keyboard, Zellskelington, 7
anonymous edits
Selection bias Source: http://en.wikipedia.org/w/index.php?oldid=515277688 Contributors: Aaron Kauppi, Adrian J. Hunter, Amatulic, Anders Sandberg, AoS1014, Arcadian, Beland, Boffob,
Cureden, Darrenhusted, Dbachmann, Den fjättrade ankan, Diza, Doors22, Dr.K., Drae, Ed Poor, Esben.juel, Farmanesh, Gazpacho, Giftlite, Graeme Bartlett, HaeB, Henrygb, Hlovdal, Hstovring,
Hylas Chung, Impossiblepolis, Insanity Incarnate, Iris lorain, Isomorphic, Jpatokal, Kanodin, Karada, Kiefer.Wolfowitz, Kvng, Lamro, Luciuskwok, Madhero88, Michael Hardy, Mikael
Häggström, Millahnna, MishaPan, MistyMorn, Naadia07, Oddity, PeR, Quantanew, Raimundo Pastor, Reyk, Rjwilmsi, Roadrunner, RodC, RoyBoy, Salih, Securiger, Sigma 7, Sked123,
SlayerBloodySlayer, StAnselm, Tabbbycat, Tabletop, Taxman, Thosjleep, Tobacman, Tom Lougheed, WinstonSmith, Zvika, 41 anonymous edits
Social comparison bias Source: http://en.wikipedia.org/w/index.php?oldid=503122275 Contributors: F, Grutness, John, Lish792, LittleWink, Pjoef, Redrose64, Seren-dipper,
ThePastIsObdurate, Zach014, 2 anonymous edits
Social desirability bias Source: http://en.wikipedia.org/w/index.php?oldid=527123437 Contributors: Ancodia, Anschelsc, Arno Matthias, BDD, Bender235, Biglovinb, Brentt, Chris the speller,
Cjmclark, Dcoetzee, DisplayGeek, Dpaulhus, Ecrooker, Etfp2008, Fallenangei, Htra0497, Joejones2028, Joseph Solis in Australia, LilHelpa, Mattisse, Melcombe, Michaelcarraher, Mycatharsis,
NaBUru38, Nick Number, Nick Wilson, NisJorgensen, Outercell, Polocrunch, Psinu, Scientific29, Sioraf, Someones life, TigerShark, Trevinci, Underpants, Wandatheavenger, Ybbor, 20
anonymous edits
Status quo bias Source: http://en.wikipedia.org/w/index.php?oldid=530608856 Contributors: 806f0F, ASmartKid, Ackatsis, Anders Sandberg, Andrewaskew, Arcandam, Btyner, Byelf2007,
CALR, Chameleon, Cherubino, Colipon, Comrade Graham, Cretog8, Eumolpo, Futurix, Giftlite, Grumpyyoungman01, Grunfe07, Grutness, Gurchzilla, Gwern, JHP, Jeneralist, Justinfr, Karada,
Liza Freeman, Luna Santin, MathewTownsend, Mlichter, Netsumdisc, Psychobabble, R'n'B, Rjwilmsi, Smyth, Taak, TheJJJunk, Ute in DC, Wykypydya, 32 anonymous edits
Stereotype Source: http://en.wikipedia.org/w/index.php?oldid=527535391 Contributors: (, -- April, 159753, 21655, 78.26, A-giau, A3RO, A8UDI, ABF, Aasb, Abce2, Abeg92, Abomasnow,
Abusive Aussie Husband-Battered Southern Wife stereotype, Ace of Spades, Ace ofgabriel, Acroterion, Adam78, Addihockey10, Addshore, Aditya, AdjustShift, Aeusoes1, Ahoerstemeier,
Airpirate545, Ajo Mama, Aka042, Alan Liefting, Alansohn, Alexandre Vassalotti, Alexf, AlexiusHoratius, Alexwany, All Hallow's Wraith, Allens, Allstar86, Alphachimp, Altenmann, Andrew
Levine, AndrewHowse, Andrewaskew, Andy Marchbanks, Andy pyro, Angela, Animum, Anna Lincoln, Anshuk, Anthius, Aranel, Arbor, ArchonMagnus, Arkwatem, Artdemon01, Ashley
Pomeroy, Attys, Auntof6, Auric, Avalean, Avenged Eightfold, Avjoska, Avoided, Avono, Awesomeguy92, B, BD2412, Bazonka, Bbbrown, Bearcat, Beezhive, Beland, Bellerophon5685, Ben
Ward, Ben@liddicott.com, Benc, Bencherlite, Benefros, BennettL, Benson85, Bfoxius, BigHairRef, Bigcitydeserter, Billinghurst, Bit Lordy, Bjsmd, BjrnEF, Blaise Joshua, Blazin213,
Bloodkith, Boaby, Bobbaxter, Bobo192, Boffob, Boing! said Zebedee, Bonadea, Bongwarrior, BoogieRock, Bookgrrl, BorderlineWaxwork, Boris Barowski, Bows&Arrows, Brendan Moody,
Brews ohare, Bsadowski1, Bubblegumwrapper, Bz2, C xong, CJGB, CTF83!, Cab88, Cajade, Calicore, Calineed, Callmarcus, Calmer Waters, Caltas, Cameron Dewe, Capricorn42,
Captain-n00dle, Capybara21, CardinalDan, Carlsotr, Carolinamnz, Cat10001a, Cautioned band, Cdc, Cenarium, Chamal N, Changchih228, CharlesC, Chartran, Chickenfeeders, Chris the speller,
Christopher Connor, Christopher Kraus, Cinco555, Ck lostsword, ClanCC, Classicstruggle, ClaudineChionh, Clegs, Cleopatra*Cate, Cliffy01, Clintville, Clovis Sangrail, ClydeOnline,
Cmptrsvyfm, Cobi, Cocytus, Cogibyte, Cogito-ergo-sum, Computerjoe, Conical Johnson, Cpiral, Cramyourspam, Cremepuff222, Crimzon Sun, Crito2161, Cro fever, Crohall, Crosbiesmith,
Cryptic, Cspalletta, Cst17, Curps, D6, DMacks, DVD R W, DVdm, Dancingwombatsrule, Daniel Quinlan, Darth Panda, Davemarshall04, Dc freethinker, Deb, Deitrib, Delbart27, DeltaQuad,
Dennisthe2, DennyColt, Deon, Der Falke, Der kenner, DerHexer, Deutschgirl, Dgreen34, Dgw, Diddims, Difluoroethene, Dina, Djm256, Dneyder, Dnvrfantj, DoctorW, Donreed, Doris Don't,
Doulos Christos, Dpbsmith, Dpr, DrOliPo, Drbreznjev, Dreaded Walrus, Dreamafter, Drib55, Drilnoth, Dubious Irony, Dylan Lake, Dysepsion, E2eamon, Eb00kie, Edhabib, Edwalton, Edward,
El aprendelenguas, Elagatis, Elias Enoc, Elipongo, Eliteunited, Emeraldcityserendipity, EncephalonSeven, Endy Leo, Ensrifraff, Enviroboy, Epbr123, Equalityactiv, Escape Orbit, Etafly,
Euchiasmus, Euryalus, Everyking, Exert, Extransit, Fagtard123, Falcon8765, Falconleaf, Favonian, Fieldday-sunday, Fisher.G, FlareNUKE, Fluffernutter, Flyer22, FlyingToaster, Flyspeck,
FonsScientiae, Formeruser-81, Frecklefoot, FredR, Freechild, Fritz freiheit, Fritzpoll, Froid, FrostyBytes, Frymaster, Furrykef, Fyyer, Fæ, GB fan, Gaff, Gambiteer, Gatorgirl7563, Gdo01,
Gene.arboit, GeorgeBuchanan, Gilliam, Gjd001, Glane23, Goatasaur, GoingBatty, Gointemm, Goplat, GorillaWarfare, Govus, Graeme Bartlett, Grafen, Greatrobo76, Grim-Gym, Grim23, Gssq,
Gurch, Gurchzilla, Guybrarian, Gw2005, Gkhan, Hadal, Haham hanuka, Hajahmz, Hamera123, HappyCamper, Happyapples19, Hariva, Hauskalainen, Haze120190, Hbackman, Hbent, Hdt83,
Hello71, Hemanshu, Henry W. Schmitt, Heracles31, Hersfold, Herunumen, Highvale, Hignopulp, Hmains, Hoof Hearted, Howth575, Hunt567, I run like a Welshman, IGeMiNix, IPb0mb3r,
IZAK, Ian Moody, Iantnm, IceCreamSammich, IceUnshattered, Ikanreed, Iluvdawgs, Imaperson123, ImperatorExercitus, ImperfectlyInformed, Inferno, Lord of Penguins, Inklein, Insanity
Incarnate, InverseHypercube, Ipharvey09, Iridescent, Iritakamas, IronGargoyle, Irunwithscissors, Ixfd64, J.delanoy, J3ff, JD554, JDoorjam, JForget, JNW, JSpung, Ja 62, Jacek Kendysz, Jagged
85, Jagz, Jambronination, JamesAM, JamieJones, Janejellyroll, Janus Shadowsong, JayFout, Jayinhar, Jcbutler, Jclemens, Jcw69, Jd027, Jeandré du Toit, Jengod, Jeodesic, Jiang, Jim.henderson,
Jim1138, Jimmy da tuna, Jimphilos, JmanofAus, Jmatter1, JoanneB, Joe9320, John of Reading, JohnBlackburne, JohnInDC, Johnny 42, Jokestress, Joost26, Jorobeq, Jovianeye, Jtneill, Jtoomim,
Juffodnreofdniruneo, Juliaguar, Jumping cheese, JustPhil, Justinfr, Justinphd, Kan06e, KaragouniS, Karpouzi, Kaszeta, Katalaveno, Kathleen.sheedy, Katieh5584, Kavehmz, Keitei, Keith D,
Kerotan, Kevinngo1234, Keyblade5, Kg3042, Khazar2, Kiiimiko, Kilo-Lima, King Lopez, King of Hearts, Kingpin13, Kingturtle, Kirkevan11, Kirzmac, Kiteinthewind, Kittykat94,
Kiyokoakiyama, Kjell Knudde, KlappCK, Knight of Truth, KnowledgeOfSelf, KnowlegeFirst, Koavf, Konye obaji ori, Kowkamurka, Krychek, Kruter-Oliven, Kschutz, Kubigula, Kukini,
Kurt10, Kyoko, Kzzl, L Kensington, L.to.the.P, LERK, Lanztrain, LcawteHuggle, LeeJ55, LeedsKing, Leonardo2505, Levineps, Light current, Lijnema, Likeminas, LilHelpa, Lilleskvat,
Llykstw, Lokionly, Loodog, Lord Lugie, Lordoliver, Loremaster, Lova Falk, Lowtech42, Lph, Lu-igi board, Lucas Duke, Lucidish, Lukefulford, MNAdam, Maberk, Macedonian, MagicBear,
Magiclite, Mahmud Halimi Wardag, Mailer diablo, Makeemlighter, MalakronikMausi, Malone23kid, Mancl20, Manticore, MapsMan, Marek69, Maris stella, Mark Arsten, Martarius,
Martian.knight, MartinPoulter, Master Jay, MatthewVanitas, Mattis, Maximillion Pegasus, Maywoods, Mboverload, Mc95, Meaghan, Memo@sdsu.edu, Menchi, Mendaliv, Mentifisto, Meol,
Meph1986, Mephistoe, MichaK, Michael Hardy, Mihai Capot, Mihalis, Mike Klaassen, Mike2000, Milnivri, Mirokado, Mirv, Mishatx, Miss Madeline, Miss Mama Bear, Miss kat,
MissQCgold2005, Moe Epsilon, Monnicat, Moonriddengirl, Morenoodles, Mr. Billion, Mr. Stradivarius, Mrmuk, Mrvoid, Msikma, Mwelch, My76Strat, Mygerardromance, N. Harmonik, N5iln,
NByz, Nabeth, Navy Blue, NawlinWiki, Nazgul812, Nburden, Neg, NellieBly, Nemesis 961, Neo-Jay, NeonNiteLite, Neuropsychology, Neutrality, Niceguyedc, Nightenbelle, Nightscream,
Ninja-4976, Nkocharh, Nmatavka, Nnp, Noctibus, Noleander, Northamerica1000, NorthernThunder, NorwegianBlue, Notay001, Nowheresville, OGoncho, ONEder Boy, Oda Mari,
Oddball31593, Ohnoitsjamie, Oleg Alexandrov, Oli Filth, Oliver Lineham, Omicronpersei8, Omnipaedista, Onceonthisisland, Onexdata, Optiguy54, Optoi, Oranjeboom31, Oxymoron83,
PDXblazers, PL290, PM800, Packages, Pastinakel, Patrick, Paul A, Paul Magnussen, Pax85, Penbat, Pengyanan, Philip Trueman, Phoenix7777, PhoenixWing, Piano non troppo, Pietru,
Pink!Teen, Piotrus, Planetary, Plushpuffin, Polozooza, PonileExpress, Ponyo, PotentialDanger, Pretzelpaws, Protonk, Pryd3, Psy463 1029, Pundit, PurpleAlex, Qtoktok, QuixoticKate, Qwyrxian,
R-41, R.G., RA0808, RB972, RJHall, Ranjithsutari, Rapturerocks, Ratemonth, Razorflame, ReZips, Reach Out to the Truth, Realismadder, Rebeleleven, Recardojoe, Recognizance, RedWolf,
Rednbluearmy, Reenem, Res2216firestar, ResearchRave, Revolutionary, Rhotard, Rich Farmbrough, Richardspraus, RickDC, Rippa76, Rjwilmsi, Rlquall, Robbie098, RobbieTitwank, Robert K
S, Robertvan1, Ronhjones, Roodaman1, Rotem Dan, Rowmn, Rrburke, Rushbugled13, Rx4evr, Ryan032, SAE1962, SJP, SMC, Saint-Paddy, Salvio giuliano, Sandwichsauce, Sapphire Flame,
Sardanaphalus, SatyrTN, SaveThePoint, Sceptre, SchfiftyThree, Schickel, ScholarK93, Schroeder74, Seb144, Secretlondon, Semperf, Sentenal01, Sephiroth BCR, Seraphcrono, Sfrostee,
Shadowjams, Shannon.jones553, Shanny98pretty, Shedlund, Shifter95, Shirik, Shnitzled, Shovan Luessi, Siebrand, Sifaka, SigPig, SimonP, Sizzlefoshizzle, SkyWalker, Slakr, Slawojarek,
Slysplace, Smallman12q, Smaug123, Snowdog, Snowmanradio, Socialpsychra, Soetermans, Soliloquial, Sonicyouth86, Sophixer, Southafrican41, SpaceFlight89, SpeedyGonsales, Stacin61,
StaticGull, Stefanomione, StillmakerR, StradivariusTV, Stuartewen, Suffusion of Yellow, Suidafrikaan, Suncrafter, SuperHamster, Superking, Sutcher, Sven Manguard, Svetlana Miljkovic,
Sweetfreek, Swimmerz, Syst3mfailur3, T-borg, TFOWR, THB, Taak, Tanaats, Temporaluser, Terracciano, Th1rt3en, Thand, The Anome, The Iconoclast, The MoUsY spell-checker, The
Rambling Man, The Squicks, The Thing That Should Not Be, TheDoober, TheLadyRaven, TheTechieGeek63, Thebanjohype, Theli34, Thingg, Thisis0, ThomasO1989, Thomasmeeks,
TiagoTiago, Tide rolls, TigerBasenji, Titoxd, Tktktk, Tobby72, Tommy2010, Tomsega, Tonsofpcs, Toolboks, Tregoweth, Trusilver, Tstormcandy, Tucker001, Twinsday, U3964057, U4667275,
USN1977, Ubardak, Ughmypussyhurts, Ukexpat, Ulric1313, Ultraexactzz, UnDeRsCoRe, User2004, User92361, Uvmcdi, Vague, Vanished user e99239jf9rf980239ifmlsmlsi4u, Vdegroot,
Vegetator, Vemblut, Verne Equinox, Versus22, Vicarious, Vicenarian, Vicpro, Vinny Burgoo, Violetriga, Vlad2000Plus, Voivod616, Vrenator, Vwu, Vzbs34, WBardwin, WODUP, WTucker,
WadeSimMiser, Wafulz, Wakaw, Wavehunter, Wavelength, Wcp07, Welshleprechaun, Westendgirl, WhatamIdoing, Whomp, WikHead, Wiki13, Wikidenizen, Wikiwatcher1, Will Beback,
WillMak050389, William Avery, Willie44, Wimt, Wiwaxia, Wknight94, Wolfdog, WolfgangFaber, Woohookitty, Writ Keeper, Wtmitchell, Wykypydya, Xerodn, Xiner, YUL89YYZ, YVNP,
Yahel Guhan, Yamamoto Ichiro, Ydong2, Yopie, Yuvn86, Z-d, Zachary8222, Zadcat, Zane RH, Zanibas, Zanimum, Zeboko13, ZeiP, Zeraeph, Zib Blooog, Zigger, Ziggurat, Zimmygirl7,
Zorro-the-coyote, 2182 anonymous edits
Subadditivity effect Source: http://en.wikipedia.org/w/index.php?oldid=517392441 Contributors: Aaron Kauppi, CRGreathouse, Craig Pemberton, GoingBatty, JeffreyN, Jon.baron, Jweiss11,
Pedrobh, The Anome, 2 anonymous edits
Subjective validation Source: http://en.wikipedia.org/w/index.php?oldid=517612569 Contributors: Aaron Kauppi, Argumzio, Big Bird, Fasten, Gregbard, Grutness, Ilikeliljon, Jokestress,
MartinPoulter, Mattisse, Neothunder, Robofish, Sgerbic, Wavelength, 7 anonymous edits
Survivorship bias Source: http://en.wikipedia.org/w/index.php?oldid=528174026 Contributors: Alternator, BunnyandYummy, Charliebruce, Den fjättrade ankan, Destynova, DireColt,
DocendoDiscimus, Ehn, Farmanesh, Foobaz, Gettingtoit, Gracefool, Iridescent, JimHardy, Jonathanstray, Koavf, MartinPoulter, Rahulkamath, Saxifrage, Shawnc, Skarsa72, UnitedStatesian,
Wordsmith, Wragge, Ze miguel, 26 anonymous edits
Texas sharpshooter fallacy Source: http://en.wikipedia.org/w/index.php?oldid=529286319 Contributors: A. di M., AlexWangombe, Amcbride, An Sealgair, Andeggs, Auto469680, Bassington,
BenFrantzDale, BrainMagMo, Bryan Derksen, BryanD, Chardish, Cold Light, Cydmab, DavidWBrooks, Dcljr, Dom Kaos, Duoduoduo, Dysmorodrepanis, Editor2020, Erebos12345, Everyking,
Gdr, George100, Gwern, Hippo43, Hu, JH-man, Jemmy Button, Kvn8907, L33th4x0rguy, Lazarus666, Lo2u, Logan, Lova Falk, Machine Elf 1735, Matt Gies, Mrdice, Mukadderat, Namangwari,
NantucketNoon, NiD.29, Omedalus, Penfield, Planet-man828, Primarscources, Pudge MclameO, Redfell, Rfl, Rjwilmsi, Robert K S, Rumping, ShowToddSomeLove, Silence, Skaaii, Skeptiker,
Slyguy, Stefanomione, StradivariusTV, Taak, Terpsichoreus, The Anome, Thunderbunny, Tktktk, User2004, Xerces8, Yworo, 54 anonymous edits
Time-saving bias Source: http://en.wikipedia.org/w/index.php?oldid=526804394 Contributors: Delusion23, Eyal.peer, Gregbard, Kolbasz, Righteousskills, RudolfRed, SwisterTwister
Well travelled road effect Source: http://en.wikipedia.org/w/index.php?oldid=505913326 Contributors: Aaron Kauppi, Chris the speller, Gregbard, Jeffpc2, SHIMONSHA, Shadowjams, 2
anonymous edits
Zero-risk bias Source: http://en.wikipedia.org/w/index.php?oldid=518648158 Contributors: Aspects, Cesiumfrog, Effie.wang, Evercat, Jeepday, Jon.baron, Mrwojo, Omnipaedista,
Rodneylbrownjr, Schutz, Tlogmer, Wk muriithi, ZildjianAVC, 7 anonymous edits
Actor–observer asymmetry Source: http://en.wikipedia.org/w/index.php?oldid=529824516 Contributors: 1000Faces, Akegarasu, Arno Matthias, BD2412, Bovineone, Chronulator, David0811,
Editor64, Elimegrover, Frédérick Lacasse, Gyrobo, JorisvS, Kanadajinlee, Koavf, MartinPoulter, Mattg82, Mboverload, Northamerica1000, Peace01234, Phorapples, Rlove, Ruakh,
Rul3rOfW1k1p3d1a, Unara, Ynhockey, 14 anonymous edits
Defensive attribution hypothesis Source: http://en.wikipedia.org/w/index.php?oldid=525696550 Contributors: DoctorKubla, Dr Ashton, Funnyfarmofdoom, JorisvS, Rich Farmbrough,
SwisterTwister, Taak, 2 anonymous edits
Dunning–Kruger effect Source: http://en.wikipedia.org/w/index.php?oldid=530362582 Contributors: 19cass20, 2over0, 49oxen, AKAF, Aaron Kauppi, AaronTovo, Abominatorz,
Airplaneman, Al E., Alansohn, Algebraist, Andres, Andycjp, Antandrus, Anthon.Eff, Anthonyhcole, Antonielly, Aprock, Argumzio, Arjuna909, Arthur Rubin, Ashahmur, Aunt Entropy,
AussieScribe, Avish, Awgy, Aymatth2, BaShildy, Badger151, Bearian, Ben Standeven, Billswelden, Blaxthos, Brian A Schmidt, Bryan Derksen, Callidior, Captain obtuse, Carmichael,
Chameleon, Chasingtheflow, Chowbok, Chrylis, Chuunen Baka, Cogiati, Cognita, Colorfulharp233, CommunistPancake, Control.valve, Cowtown850, DANKASHEN, DMacks, DRTllbrg,
DVdm, Dabomb87, DanielCD, Darrell Greenwood, David Gerard, Davididd, Dennis Bratland, Dicklyon, Diego Moya, Dogface, Dusti, ERcheck, ElbridgeGerry, Enigmocracy, Enquire,
Enzodogoslo, Ernestfax, Eshafto, Everdred, Ewlyahoocom, Extremidiz2000, EyeKnows, F5487jin4, Famspear, Fastily, Fifty53, Florian Blaschke, Fæ, Gabriel Kielland, Garkbit, Gary King,
Gavin.collins, Geoffrey.landis, Geometry guy, Glrx, Gracefool, Gstrz, Gzuckier, Hallows AG, Hans Adler, HonoreDB, Hu, IRWolfie-, IjonTichyIjonTichy, Immanos, InBalance,
Informationtheory, Jack Merridew, JamesBWatson, Janeky, Jarbon, Jesusaurous, JoeSmith9751, Jonathanischoice, Jrtayloriv, Jtneill, Julia Rossi, Julian Herzog, Julianonions, Jumbolino,
Jusdafax, Just plain Bill, Kakofonous, Kazrak, KennethSides, Kephir, KillerChihuahua, KimDabelsteinPetersen, Kintetsubuffalo, Kmpolacek, Koavf, Krausertoss, Kuru, L Kensington, LZ6387,
Lawrencekhoo, Leandrod, Leftmostcat, Lesath, LiberalDiggingEffect, LilHelpa, Logos5557, Lova Falk, MZMcBride, Mandarax, MarSch, MartinPoulter, Materialscientist, Mattisse, Mavigogun,
Mbmiller, McGeddon, MelbourneStar, Mephistophelian, Metazeno, Mgiganteus1, Michael C Price, Micmachete, MikeDawg, Mindmatrix, MonoApe, Mooveoveryou, Nial2k7, OisinisiO,
PatrickFisher, Patrickgoold, Penbat, Perkerk, Phil Boswell, Pietrow, Pigsonthewing, Pinethicket, Power.corrupts, Prari, Pyrospirit, Raithlin, Raul654, Reconsider the static, RedHouse18,
RevWaldo, Ritartederchild, Rjwilmsi, Robert1947, Rolo Tamasi, Roman clef, Ronbtni, Ronz, Rtyq2, Salamurai, SamJain1975, Secretza, Shadowjams, Shaggorama, Siawase, SkepticalRaptor,
Soap, Splashburn, Sroc, Suffusion of Yellow, SunshineSet, Superborsuk, Svick, TJPotomac, Tagishsimon, TangLung, Tbhotch, Tempodivalse, Teratornis, The Thing That Should Not Be,
TheFSaviator, Thine Antique Pen, ThreeOfCups, Thumperward, Timp21337, Tom Sauce, Tommy2010, Tritium6, Tulkss, Tyrol5, Unomi, Utcursch, Uucp, V2Blast, Vaughan, Vicki Rosenzweig,
Vrenator, VsevolodKrolikov, Wegesrand, WhatamIdoing, Wikipelli, Willerror, William Avery, William Pietri, Wingman4l7, Winston365, Wyldweasil, XP1, Xanzzibar, Xerographica, Xmacro,
Yakushima, Yamamoto Ichiro, Yayay, Youremyjuliet, ZX81, Zachweiss0491, ZoneSeek, 376 anonymous edits
Egocentric bias Source: http://en.wikipedia.org/w/index.php?oldid=527533557 Contributors: Aaron Kauppi, Archie06, Aventureuse, Bovineone, Diberri, Ellery7, George100, Grutness,
J.delanoy, Jahiegel, JorisvS, Koavf, Ktspectate, MartinPoulter, Melaniegyq, Mikimacmiki, Rebrane, Schwnj, Silkroses123, Sly1993, Taak, The wub, WikHead, Wmahan, 16 anonymous edits
Extrinsic incentives bias Source: http://en.wikipedia.org/w/index.php?oldid=506088658 Contributors: Magioladitis, Taak, 2 anonymous edits
Halo effect Source: http://en.wikipedia.org/w/index.php?oldid=528555846 Contributors: ***Ria777, 16@r, 2004-12-29T22:45Z, Aaron Kauppi, Andycjp, Anna Frodesiak, Anna Lincoln, Arno
Matthias, Ashmoo, Audriusa, Bagatelle, Bedson21, Betacommand, Bryantl05, CZmarlin, Calen11, Carmichael, CatherineMunro, Cavenba, Chameleon, ColdFeet, Cretog8, D o m e,
DanEdmonds, David Hoeffer, Dcflyer, Dddaye, Dorftrottel, Download, DropDeadGorgias, Dryman, E2eamon, Henrygb, Hobartimus, Ibn Battuta, Im Kwando, Inhumandecency,
InverseHypercube, Ishikawa Minoru, Joriki, JoshuaZ, Jrockley, JustAGal, Karada, Kent Wang, Kimhaney3, Kjkolb, Koavf, Kramsti, Kripkenstein, Kukini, L Square, Leosdad, Lionel Allorge,
Liquidblue8388, Lord Spring Onion, Lova Falk, Lue3378, MartinPoulter, MathewTownsend, Matthew.murdoch, Mattis, McSly, Mild Bill Hiccup, Mindloss, Mrpaintedwings, Nesbit, Netkinetic,
Nuujinn, Ocaasi, Oliphaunt, Patriarch, PaulAndrewAnderson, Photobiker, Pinethicket, Prof. Squirrel, Quenjames, RadManCF, Raymondwinn, Recognizance, Recury, Redbull addict, Rfl,
RichardF, Rjwilmsi, Robin S, Rorro, Rosalien, Rossami, SP612, Salavat, Slogby, SlubGlub, Soosim, Spencer, Spencerk, Squiddy, Suidafrikaan, Superking, Susko, SynergyStar, Taak, Tesseran,
Tom.k, TomTheHand, Usernamefortonyd, Vegetator, Viriditas, Waldo333, WereSpielChequers, Whaiaun, WhyBeNormal, Xyzzyplugh, Yaksar, 2009, 207 anonymous edits
Illusion of asymmetric insight Source: http://en.wikipedia.org/w/index.php?oldid=498447704 Contributors: Andrewaskew, Arno Matthias, Cat Cubed, M4gnum0n, MathewTownsend, Pol098,
RDBrown, Rjwilmsi, Sketchmoose, The Anome, Yngvadottir, 3 anonymous edits
Illusion of external agency Source: http://en.wikipedia.org/w/index.php?oldid=481451264 Contributors: Giraffedata, Taak, TucsonDavid
Illusion of transparency Source: http://en.wikipedia.org/w/index.php?oldid=516534076 Contributors: Aaron Kauppi, Cat Cubed, George Ponderevo, GregorB, Int3gr4te, KJamison7,
MTHarden, Mandarax, Mgiganteus1, Pnrj, Reaper Eternal, Robin klein, Sharktopus, The Anome, Yngvadottir, 3 anonymous edits
Illusory superiority Source: http://en.wikipedia.org/w/index.php?oldid=529544667 Contributors: AKAF, Aaron Kauppi, AdrianLozano, Alansohn, Andrewaskew, Andycjp, Antonielly,
Ayvengo21, BGH122, Barticus88, Bayardo San Roman, Bender235, Brooke87, Bruce1ee, Chris the speller, Chronulator, Coldnorth, Crossmr, Crzer07, Cybercobra, Darrell Greenwood,
DreamGuy, Dvd-junkie, Eastlaw, Edward, Elvey, Florian Blaschke, Gioto, Groyolo, Imersion, Jack Merridew, Jake Wartenberg, Jakebarrington, JamesDC, Jeff Silvers, Jim1138, Johnuniq,
Kai-Hendrik, Koavf, Ksyrie, LittleHow, Male1979, MartinPoulter, Michael C Price, Monkats, MoraSique, Northamerica1000, OpenFuture, PatrickFisher, Penbat, Pietrow, Pinethicket, Psych
psych, Rinick, RiverDesPeres, Rjwilmsi, Safety Cap, Slmcguinness, Sue Rangell, Sun Creator, SuzanneIAM, Svick, Teratornis, Timstreet1, Usernameandnonsense, Viniciusmc, WhatamIdoing,
Wjejskenewr, Z8, Zanotam, Zenomax, 74 anonymous edits
In-group favoritism Source: http://en.wikipedia.org/w/index.php?oldid=529771533 Contributors: Aaron Kauppi, Acadmica Orientlis, Alliefaye13, Antaeus Feldspar, Avb, B528491, Beland,
Ben Ben, Chameleon, ChrisGualtieri, Coreyrudd, CourtsW, Dicklyon, Donkeykong0303, Drmies, Eey, GhostDude, Iaoth, Inkblot svr, Jlchenn, Johnkarp, Karada, Kentma, Khazar2, Koavf,
Manop, MartinPoulter, Michael Snow, Myriddin07, Nyttend, Okcmorgan, Pace212, Penbat, R'n'B, RichardF, Robertsteadman, Roop25, SchreiberBike, Spokewrote, SummerWithMorons, Sun
Creator, Taak, Tanner Swett, Tassedethe, Tectonicura, Thishyperreality, Trnj2000, U3964057, Unara, Vodafone3, ZigZagZoug, 20 anonymous edits
Naïve cynicism Source: http://en.wikipedia.org/w/index.php?oldid=511263453 Contributors: Bgwhite, Darkwind, Magioladitis, Taak
Worse-than-average effect Source: http://en.wikipedia.org/w/index.php?oldid=528073309 Contributors: Aaron Kauppi, CWenger, Fanra, Fenice, Gioto, Graymornings, HerbertHuey,
Johnkarp, Karada, Kephir, MartinPoulter, Mattisse, Mike-stalkfleet, RichardF, Rushbugled13, Schwnj, Taak, 7 anonymous edits
Google effect Source: http://en.wikipedia.org/w/index.php?oldid=520275788 Contributors: BDS2006, Baseball Watcher, Evasivo, Grutness, Hajatvrc, Macrakis, Nohomers48, SchreiberBike,
Ser Amantio di Nicolao, Tim!, Vchimpanzee, 4 anonymous edits
Image Sources, Licenses and Contributors
File:Daniel KAHNEMAN.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Daniel_KAHNEMAN.jpg License: Public Domain Contributors: Ephraim33, InverseHypercube,
Tabularius, Urbourbo
File:Fred Barnard07.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Fred_Barnard07.jpg License: Public Domain Contributors: Fred Barnard (1846-1896)
File:MRI-Philips.JPG Source: http://en.wikipedia.org/w/index.php?title=File:MRI-Philips.JPG License: Creative Commons Attribution 3.0 Contributors: Jan Ainali
File:Handgun collection.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Handgun_collection.JPG License: Creative Commons Attribution-Sharealike 3.0 Contributors:
Joshuashearn
File:Pourbus Francis Bacon.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Pourbus_Francis_Bacon.jpg License: Public Domain Contributors: BurgererSF, Hsarrazin, Shakko
Image:Klayman Ha1.svg Source: http://en.wikipedia.org/w/index.php?title=File:Klayman_Ha1.svg License: Public Domain Contributors: MartinPoulter
Image:Klayman Ha2.svg Source: http://en.wikipedia.org/w/index.php?title=File:Klayman_Ha2.svg License: Public Domain Contributors: MartinPoulter
Image:Klayman ha3 annotations.svg Source: http://en.wikipedia.org/w/index.php?title=File:Klayman_ha3_annotations.svg License: Creative Commons Attribution 3.0 Contributors:
MartinPoulter
Image:Witness impeachment.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Witness_impeachment.jpg License: Creative Commons Attribution 2.0 Contributors: Eric Chan from
Palo Alto, United States
Image:Simultaneous Contrast.svg Source: http://en.wikipedia.org/w/index.php?title=File:Simultaneous_Contrast.svg License: Public Domain Contributors: This hand-written SVG version by
Qef Original bitmap version by English Wikipedia user Xanzzibar Based on a similar bitmap image by K. P. Miyapuram
Image:Successive contrast.svg Source: http://en.wikipedia.org/w/index.php?title=File:Successive_contrast.svg License: Public Domain Contributors: This hand-written SVG version by Qef
Original bitmap version by K. P. Miyapuram (public domain)
Image:Contrast.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Contrast.jpg License: Public Domain Contributors: Nuvitauy07 (talk)
File:ValunFunProspectTheory2.png Source: http://en.wikipedia.org/w/index.php?title=File:ValunFunProspectTheory2.png License: Creative Commons Attribution-ShareAlike 3.0 Unported
Contributors: ValunFunProspectTheory.png: *Valuefun.jpg: Rieger at en.wikipedia derivative work: JohnKiat (talk)
File:Simple-indifference-curves-2.png Source: http://en.wikipedia.org/w/index.php?title=File:Simple-indifference-curves-2.png License: Creative Commons Attribution 2.5 Contributors:
Simple-indifference-curves.svg: Original uploader was SilverStar at en.wikipedia derivative work: JohnKiat (talk)
File:Toronto Maple Leafs bild.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Toronto_Maple_Leafs_bild.JPG License: Creative Commons Attribution-ShareAlike 3.0 Unported
Contributors: Egon Eagle
File:Little Rock integration protest.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Little_Rock_integration_protest.jpg License: Public Domain Contributors: John T. Bledsoe
Image:genimage.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Genimage.jpg License: Public Domain Contributors: Karl Duncker
File:Lawoflargenumbersanimation2.gif Source: http://en.wikipedia.org/w/index.php?title=File:Lawoflargenumbersanimation2.gif License: Creative Commons Zero Contributors: User:Sbyrnes321
File:Hyperbolic vs. exponential discount factors.svg Source: http://en.wikipedia.org/w/index.php?title=File:Hyperbolic_vs._exponential_discount_factors.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Moxfyre
Image:Martian face viking cropped.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Martian_face_viking_cropped.jpg License: Public Domain Contributors: Viking 1, NASA
File:Fakeface.svg Source: http://en.wikipedia.org/w/index.php?title=File:Fakeface.svg License: Public Domain Contributors: Germo
File:Galle crater.gif Source: http://en.wikipedia.org/w/index.php?title=File:Galle_crater.gif License: Public Domain Contributors: Foroa, Lotse, Ruslik0, Waldir, WinstonSmith
Image:Pedra da Gavea proche.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Pedra_da_Gavea_proche.jpg License: Public Domain Contributors: Anaximander, Carlos Luis M C da Cruz, CarolSpears, Chronus, Dantadd, Parigot, Zephynelsson Von, 4 anonymous edits
Image:Garuda_Pareidolia.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Garuda_Pareidolia.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Kavyagandhi21
Image:Apache head in rocks, Ebihens, France.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Apache_head_in_rocks,_Ebihens,_France.jpg License: Public Domain Contributors: Erwan Mirabeau
Image:Tirupati2.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Tirupati2.jpg License: Creative Commons Zero Contributors: Nivas28
Image:Paridolie Cians.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Paridolie_Cians.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Agota
Image:Gardienne.Daluis.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Gardienne.Daluis.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Agota
File:Ivy on tree in Burn anne Woodland.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Ivy_on_tree_in_Burn_anne_Woodland.JPG License: Public Domain Contributors: Roger Griffith
File:Bucegi Sphinx - Romania - August 2007.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Bucegi_Sphinx_-_Romania_-_August_2007.jpg License: Creative Commons Attribution 2.0 Contributors: Cristian Bortes from Cluj-Napoca, Romania
Image:Pareidolia 3.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Pareidolia_3.jpg License: Creative Commons Attribution-Sharealike 2.0 Contributors: Bukk, Geofrog, Imbrettjackson, JMCC1, Jat, Markus3, Nikola Smolenski, Wst, 1 anonymous edits
Image:Pareidolia false wood.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Pareidolia_false_wood.jpg License: Public Domain Contributors: Paulnasca, 1 anonymous edits
Image:Pareidolia.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Pareidolia.jpg License: Creative Commons Attribution 3.0 Contributors: Thom Quine
Image:box-pareidolia-2011-01-30.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Box-pareidolia-2011-01-30.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Bostwickenator
File:107-2-D1 - Danish electrical plugs - Studio 2011.jpg Source: http://en.wikipedia.org/w/index.php?title=File:107-2-D1_-_Danish_electrical_plugs_-_Studio_2011.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Atomicbre
Image:Serial position.png Source: http://en.wikipedia.org/w/index.php?title=File:Serial_position.png License: GNU Free Documentation License Contributors: Obli (talk) ( Uploads)
Image:Dingyjump.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Dingyjump.jpg License: Public Domain Contributors: Alext606, 1 anonymous edits
File:Cops in a Donut Shop 2011 Shankbone.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Cops_in_a_Donut_Shop_2011_Shankbone.jpg License: Creative Commons Attribution 3.0 Contributors: David Shankbone
File:Mixed stereotype content model (Fiske et al.).png Source: http://en.wikipedia.org/w/index.php?title=File:Mixed_stereotype_content_model_(Fiske_et_al.).png License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Sonicyouth86
File:Bettie Page driving.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Bettie_Page_driving.jpg License: Public Domain Contributors: Handcuffed
File:Stereotype threat - osborne 2007.png Source: http://en.wikipedia.org/w/index.php?title=File:Stereotype_threat_-_osborne_2007.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Sonicyouth86
File:TheUsualIrishWayofDoingThings.jpg Source: http://en.wikipedia.org/w/index.php?title=File:TheUsualIrishWayofDoingThings.jpg License: Public Domain Contributors: Chechof, Editor at Large, GeorgHH, Infrogmation, Innotata, InverseHypercube, Mdd, Sreejithk2000, Timeshifter, 1 anonymous edits
License
Creative Commons Attribution-Share Alike 3.0 Unported
//creativecommons.org/licenses/by-sa/3.0/