
Experimental Economics and Deception §

Shane Bonetti
Department of Economics
University of St Andrews
St Andrews, Scotland
December 1996
Revised May 1997

Abstract
Several leading experimental economists have independently
proposed that deception should be proscribed on methodological
grounds as an experimental technique. The basis for this
prescription is the assertion that the psychological reaction to
suspected manipulation jeopardises experimental control and
validity, and contaminates the subject pool. According to this view,
honesty is a methodological public good and deception is
equivalent to not contributing. This paper reviews the literature on
the consequences of the use of deception. It is concluded that there
is little evidence to support the argument that deception should be
proscribed. It is argued that there are potential gains from
deception in data validity and experimental control. These gains
are illustrated by examining ultimatum games and public goods
experiments.

Keywords: experimental economics, deception, public goods, free riding, ultimatum.

JEL Classification: C9

§ I am grateful to two anonymous referees for their comments, and to Alan Lewis, Friedel Bolle
and Peter Lunt for their kind help.
The Orthodoxy on Deception
Several eminent experimental economists have independently proposed a simple

rule to govern the conduct of experiments. Douglas Davis and Charles Holt

instruct budding experimenters thus:


"The researcher should ... avoid deceiving participants. Most
economists are very concerned about developing and maintaining a
reputation ... for honesty in order to ensure that subject actions are
motivated by the ... monetary rewards rather than by psychological
reaction to suspected manipulation. Subjects may suspect deception
if it is present. Moreover, even if subjects fail to detect deception
within a session, it will jeopardize future experiments if the
subjects ever find out they were deceived and report this
information to their friends" (Davis & Holt, 1992, pages 23-24).

They add an injunction and an unexplained observation:


"Many economists believe that deception is highly undesirable in
economics experiments, and ... argue that the results of experiments
using deceptive procedures should not be published. Deceptive
procedures are more common and perhaps less objectionable in
other disciplines (e.g. psychology)" (Davis & Holt, 1992, page 24,
note 28).

John Ledyard offers the following similar observations:


"It is believed by many undergraduates that psychologists are
intentionally deceptive in most experiments. If undergraduates
believe the same about economists, we have lost control. It is for
this reason that modern experimental economists have been
carefully nurturing a reputation for absolute honesty in all their
experiments. ... [I]f the data are to be valid, honesty in procedures is
absolutely crucial. Any deception can be discovered and
contaminate a subject pool not only for that experimenter but for
others. Honesty is a methodological public good and deception is
equivalent to not contributing." (Ledyard, 1995, page 134)

Finally, in his Experiments in Economics, John Hey makes the proscription of

deception a recurring motif:


"The question of trust is an important one: it is an unfortunate fact
that experiments in psychology are tainted by distrust. We do not
want the same taint to be attached to experiments in economics."
(Hey, 1991, page 21)
"[I]t is crucially important that economics experiments actually do
what they say they do and that subjects believe this. I would not
like to see experiments in economics degenerate to the state
witnessed in some areas of experimental psychology where it is
common knowledge that the experimenters say one thing and do
another. This would be very harmful to experimental economics."
(Hey, 1991, page 119)

"[Subjects] believing what the experimenters tell them ... seems to


me to be of paramount importance: once subjects start to distrust
the experimenter, then the tight control that is needed is lost." (Hey,
1991, page 173)

"[I]t seems to me particularly crucial that experiments in economics


remain 'whiter than white', and are seen to be so by subjects." (Hey,
1991, page 225)

For want of a better term, I refer to the proponents of this ban on deception as

The Prohibitionists. In an earlier version of this paper I referred to this group as

The Honest Johns, by adoption of the modal given name of the four experimental

economists cited above. The Prohibitionists cite no experimental or other

evidence in support of their methodological proscription. They neither refer to

nor discuss the extensive literature on experimental methodology. They appear

innocent of the literature directly addressing experimental deception. Little is

offered surpassing conjecture in justification or defence of their methodological

prescriptions. The next section compares the views of The Prohibitionists with

the available evidence.

It should be noted at the outset that the impression of experimental

psychology conveyed by The Prohibitionists is rather hyperbolical and

inaccurate. First, to the limited extent that the concern of The Prohibitionists is

with the reputation of experimental economists per se rather than with the effect

of deception on experimental control and validity, it should be noted that

subjects generally report positive evaluations of deceptive research and

deceptive researchers (Christensen, 1988; Collins et al., 1979; Gerdes, 1979;


Glasgow et al., 1977; Korn, 1987; Lustig et al., 1993; Pihl et al., 1981; Schwartz &

Gottlieb, 1981; Smith & Richardson, 1985a, 1985b). Second, it is not the case that

all or even most psychological experiments use deception. Finally, there is no

evidence that deception methodologies are being pursued with decreasing

frequency by psychologists (Dunston & Ross, 1986; Gross & Fleming, 1982;

Levenson et al., 1976; Sieber et al., 1995). Of course, that psychologists show no

sign of abandoning deceptive experimental techniques does not of itself

demonstrate that deception is acceptable. It is not a sound response to The

Prohibitionists to adopt a justificationist line: if deception bore calamitous


consequences for experimental validity, then psychologists, being rational,

would eschew deception. They have not, therefore it does not! This is

unacceptable as a rebuff to The Prohibitionists because their argument is that

deception is a public bad. This implies that there may be private benefits from

deception but that experimenters acting egoistically will ignore the costs

imposed on others by choosing deception. That is, the persistence of deception in

psychology might be interpreted by The Prohibitionists as a failure of collective

rationality rather than a testament to the rationality of deception.

Evidence
It is reasonably clear that The Prohibitionists oppose deception on the ground

that it causes the data produced to be invalid. That is, the argument against

deception is methodological. Methodological arguments are notoriously difficult

to assess or rebut. It must be admitted that the assertions of The Prohibitionists

are at least plausible or logically possible.

Psychologists have been experimenting with human subjects for more than a

century, and have subjected the practice of experimental deception to active

scrutiny for at least the past four decades. Important early contributors include
Vinacke (1954), Rosenthal (1963) and Kelman (1967). It seems likely that

experimental economists may be able to learn from the psychologists' experience

and evidence.

There is a school of thought which argues that this is not the case. Davis &

Holt assert that there may be some distinction between experimental

psychologists and experimental economists, and that this distinction explains the
inter-disciplinary difference in the use of deception, or in the desirability of its
use. Davis & Holt offer no argument in support

of this asserted difference. Similarly, the author has encountered some


experimental economists unconvinced by the arguments against prohibition, and

prepared to adopt a strained "reverse analogy" to justify ignoring the evidence

and experience of psychologists. The reverse analogy is about experimental

payoffs. Psychologists use hypothetical payoffs, at least in some experimental

settings. Economists do not regard hypothetical payoffs as methodologically

sound. That is, there are differences between the methodology of valid

psychological and economic experimentation. Therefore, it is argued, it is

plausible that these differences extend to the use of deception.

Whatever the merits of this reverse analogy on purely logical grounds, this
paper proceeds from the assumption that the psychological responses of human

subjects generally do not change when the subjects are moved from a psychology

laboratory to an economics laboratory. That is, the starting point for this paper is

a presumption that psychological evidence on experimental methodology should

be of interest to economists.

The Prohibitionist view has two key elements: the direct effect of deception

on subject behaviour, and the indirect or pooling effect on later groups of

experimental subjects. These are considered below.


Direct Effect

The essence of The Prohibitionist direct argument is that deceiving subjects taints

their behaviour. Knowledge or belief or suspicion that they might be deceived

significantly alters subject behaviour.

Subjects who have been previously deceived by experimentalists are more

likely to expect future deception (Beins, 1993; Christensen, 1977; Krupat &

Garonzik, 1994). However, the mere anticipation of deception does not of itself

cause, or establish the existence of, an alteration of behaviour on the basis of that

apprehension. There are certainly reported cases of alteration in laboratory


behaviour because of perceived deception. For example, MacCoun & Kerr (1987)

base their methodological opposition to the use of deception on an unfortunate

but bizarre and improbable episode in which an experimental subject suffered an

epileptic seizure during an experimental session. Three of the other five

experimental subjects in the room suspected that the seizure was a confabulation,

and part of the experiment.

More than anecdote is required to test the effect of deception. Fortunately, the

proposition that anticipated deception alters behaviour has been tested in a wide

range of experimental settings. The most important work has been performed in
three fields: (i) obedience and conformity; (ii) attribution; and (iii) social

dilemmas. The evidence varies according to the experimental setting, but tends

toward the conclusion that anticipated deception does not usually alter

measured behaviour.

(i) Obedience experiments test the extent to which individuals obey the

commands of an authority figure. They permit identification of the determinants

of the decision to obey or disobey. The classic study is Milgram (1974).

Conformity experiments test the effect of group pressure upon the modification
and distortion of individual opinions, attitudes, perceptions and actions. The

classic studies are those of Asch (1951, 1955). The influence of suspicion on

subject behaviour in such experiments has been examined several times. As

McGuire notes of this "suspiciousness hypothesis":


"[A]ny sign that the subject is suspicious of the persuasive intent of
the experimenter is likely to elicit alarm. ... [T]he experimental
manipulation might ... affect the subject's suspiciousness of
persuasive intent. Any relationship which is found might be due ...
to ... the subject's suspiciousness" (McGuire, 1969, page 22).

Finney (1987) studied the behaviour of 120 psychology students undertaking

Asch's line-judgement task. He found that informing subjects that they might be
deceived did not influence conformity. Chipman (1966) drew the same

conclusion.

However, there are two apparently conflicting results. Stang (1976) had 65

female undergraduate subjects make visual and informational judgements. He

found less conformity by suspicious subjects, but suggested that this was a

consequence of flawed design of deceptive conformity experiments rather than

of any generalised or inevitable tendency for deception to taint behaviour.

Christensen (1977) conducted two verbal conditioning experiments with

64 and 30 undergraduate student subjects respectively. In a verbal conditioning


experiment the subject is required to learn paired or associated relations between

a set of stimulus items and a set of response items. Subjects who had been
exposed to a prior manipulative experimental experience did not reveal verbal
conditioning, suggesting that the earlier deceptive experience had altered their
subsequent laboratory behaviour.

In their review of deception in conformity experiments Stricker et al. (1969)

propose a conclusion not overturned by subsequent evidence on conformity and

related experiments: "suspicion need not be associated invariably with

differences in level of performance". Indeed, McGuire (1969, pages 25-6) comes to

precisely the same conclusion. After a survey of conformity and attitude change
research he concludes that "the experimental results seem to defy description by

the suspiciousness hypothesis".

(ii) Attribution experiments test the process of attributing motives, intentions

and characteristics to another person. Wiener & Erker (1986) conducted a 2x2

factorial design experiment dealing with attribution of responsibility. A 2x2

factorial experimental design is one in which each level of one independent
variable is combined with each level of another independent variable. They

divided their sample of 64 undergraduates into two groups. Half of the sample
were informed that they might be misinformed. The two experimental conditions

were this pre-briefing (present or absent) and misinformation (present or absent).

Attributions of responsibility, based on details of an actual rape trial, were

unaffected by the presence or absence of pre-briefing.

(iii) Social dilemma experiments are more familiar to economists. Allen (1983)

conducted a prisoner's dilemma experiment to examine reactions to a

psychological experiment involving deception and forewarning. The experiment

had a 2x2 factorial design with two levels of deception (deception present vs
deception absent) and two levels of forewarning (presence of deception

suggested vs presence of deception not suggested). 48 undergraduates

participated in a 2-person prisoner's dilemma cooperation game, 12 in each

experimental condition. With regard to the deception factor, it was found that

deceived subjects performed more competitively than did nondeceived subjects.

More importantly, Allen concludes that forewarning of deception does not

eliminate the possibility that subjects can subsequently be deceived.

The Pooling Effect


The alternative ground on which The Prohibitionist argument rests is an indirect

pooling effect. The inadvertent discovery of experimental deception or the

debriefing of experimental subjects after deceptive experiments inevitably taints

the behaviour of the pool of experimental subjects for later experiments,

according to The Prohibitionist view. McGuire remarks that:


"There has long been some concern that participation in deception
experiments ... produces suspicious ... persons who are unsuited to
serve as subjects in subsequent experiments because this acquired
sophistication will cause them to behave in a way unrepresentative
of the ... population" (McGuire, 1969, page 32).

However, after reviewing the evidence available at that time, McGuire


concluded that "the results to date provide little substantiation for this

reasonable concern". A decade later, West & Gunn (1978) drew the same

conclusion:
"Even a casual reading of the research literature fails to confirm the
... expectation that the proportion of subjects classified as
suspicious of experimental procedures has evidenced a significant
increase in recent years".

Key support for this view comes from Stricker's (1967) review of 16 studies

which report the existence and extent of suspicion of deception among subjects

in deception experiments. The median percentage of undeceived subjects was

four percent, and the range was from zero to 23 percent. Similar results are

reported in Stricker et al. (1969). In general, subjects exposed to prior deception

behave little differently from "naive" subjects (Fillenbaum, 1966). Even this small

proportion of undeceived subjects may overstate any "tainting" effect. Brock &

Becker (1966) found that behaviour in a subsequent experiment was affected only

for subjects who had performed similar tasks in an earlier deceptive experiment

and had been completely debriefed.

As for the post-experiment debriefing or "dehoaxing", it is certainly true that

badly timed and constructed debriefing can occasionally cause contamination of


a pool of subjects for later runs of the same experiment (Lipton & Garza, 1978).

However, this can be minimised by careful design of the debriefing method

(Brock & Becker, 1966; Gruder et al., 1977; Mills, 1976; Stricker et al., 1969; Walsh,

1976).

Why Deceive?
This brief review of the available evidence reveals that the concerns which The

Prohibitionists express about the use of deception in experiments are

exaggerated. Data supportive of The Prohibitionist view are rare, and the balance
of the evidence is certainly to the contrary. Deceiving experimental subjects

seems unlikely to bring experimental economists into disrepute. It is unlikely to

taint the behaviour of experimental subjects. However, this does not establish the

need for deception, or the gains which deception might bring. One way to

understand these gains is to review several areas of experimental economics.

Binmore et al. (1985) and Data Validity

The experimental work of Binmore, Shaked & Sutton (1985) is a good starting

place. They examine a two-stage ultimatum game. Their hypotheses are that
human agents' utility functions do not include fairness or the welfare of others,

and that human players choose Nash equilibria rather than focal points. There is

an instructive lesson about deception to be gleaned from this research, though it

is to be found in a less than obvious place. Binmore and his colleagues do engage

in deception, in concealing from their subjects the existence of a second stage of

the game until the first stage has been completed. However, more illuminating

for our purposes are the instructions which Binmore et al. offer their subjects:
"How do we want you to play? YOU WILL BE DOING US A
FAVOUR IF YOU SIMPLY SET OUT TO MAXIMIZE YOUR
WINNINGS". (The emphasis and capitals appear in the original)
This is, of course, quite extraordinary advice to offer subjects, as Thaler (1988)

has remarked. What exactly is wrong with the Binmore et al. experimental

design, and with this advice to their subjects? There is powerful evidence that

subjects conceptualise experiments in terms of a general desire to help the

experimenter, and in particular a desire to learn hypotheses and to act in

accordance with them. This means that useful experimentation on responses over

which subjects can exercise voluntary control cannot be done if the investigator

tells the subject the theoretical hypothesis beforehand. It means in addition that if
you tell subjects how you would like them to behave, then on the whole that is

how they can be expected to behave (Crano & Brewer, 1973; Flay & Hamid, 1977;

Jones & Gerard, 1967; Rosenthal, 1963, 1976; Turner, 1981).

If experimental economists wish validly to test strategy selection, or how

humans respond to threats in bargaining, whether they free ride, care about

fairness, take sunk costs into account or commit the gambler's fallacy, there is a

general rule: beware of telling subjects how they should behave or how they are

expected to behave. Cues to the socially desirable, expected or typical response

usually undermine the validity of experimental results (Turner, 1981; Roese &
Jamieson, 1993), unless the experimental hypothesis involves and requires a test

of the consequence of such instructions. That is, such cues are usually only

acceptable when a "demand effect" is part of the experimental hypothesis.

The inference might be drawn that this canon of experimental design implies

only that: (i) subjects should not usually be told how to behave; (ii) subjects

should not usually be told the experimental hypothesis, for fear that they will

purposefully conform to it or purposefully contradict it.

However, it carries a further implication. Subjects told neither of these things

will wonder for themselves what it is the experimenter is after, what hypothesis
the experimenter is testing. From the subject's point of view, the "typically

ambiguous experiment is a problem-solving situation, and the goal is to discover

the study's real nature" (Stricker et al., 1969), or more prosaically "to penetrate

the experimenter's inscrutability" (Orne, 1969, citing Riecken, 1962, 31). There is

therefore an argument for filling this gap in the subjects' minds by announcing a

purpose of the experiment which is not the true purpose. This "deception ... is an

attempt on the part of the investigator to circumvent those cognitive processes of

the subject which would interfere with his research" (Orne & Holland, 1968), or

"to keep ... subjects naive about the purpose of the experiment so that they can
respond ... spontaneously" (Kelman, 1967). If the deception is plausible, then

there is little danger that the experimental subjects will figure out the true

hypothesis for themselves. As Walster et al. observe:


"[I]n many experiments, the experimenter's hypothesis would be all
too apparent to the subjects if a false explanation of the
experimenter's purposes were not provided. If the purpose of the
experiment were clear to subjects they might try to assist or thwart
what they believe to be the experimenter's aims" (Walster et al.,
1967, page 371).

The typical experimental subject will always attend to cues, information and

hunches about features of the experimental design. Deception is a way in which

the attention of the subjects can be effectively distracted, thus ensuring that the
behaviour which is measured is more natural and spontaneous, and less affected

and contrived.

Public Goods Experiments and Control

The analysis of free riding and public goods problems is one of the most difficult

and exciting areas of experimental economics. There is now considerable

evidence that people do not behave as predicted by the standard models of

public goods (Andreoni, 1988; Bohm, 1972; Isaac et al., 1985; Isaac & Walker,
1988; Isaac et al., 1994; Ledyard, 1995; Marwell & Ames, 1981; Molander, 1992;

Schneider & Pommerhene, 1981; Weimann, 1994). The standard public goods

experiment uses the "voluntary contribution mechanism". Subjects play a

repeated game in which they are endowed with tokens which they must allocate
between a private and a public investment. The private rate of return (by convention set

to unity) exceeds the public rate of return (called the marginal per capita return

or MPCR). However, public investment yields a benefit for all subjects so that the

payoff function for subject i is:


(1)   πi = Ei – xi + MPCR Σj xj        (the sum taken over j = 1, ..., N)

where:
πi = payoff to subject i
Ei = endowment of subject i
xi = public investment by subject i
MPCR = marginal per capita return
Σj xj = sum of all subjects' public investments
N = number of subjects

This equation means that each subject's payoff is equal to his or her endowment
(Ei), less the amount contributed to the public good by that subject (xi), plus the

return to the subject from the public good investment of all subjects (MPCR
Σ xj). In general, there is a public good problem if:
(2) (1/N) < MPCR < 1

where N is the group size and MPCR is the marginal per capita return. If (2)
holds, the Nash equilibrium is xi = 0 ∀ i. There are many ways in which

deceptive experimental design can cast light upon the puzzles which public

goods experiments reveal. Indeed, deception is not unknown in the public goods

and free rider literature (E.g. Bohm, 1972; Schneider & Pommerehne, 1981;

Andreoni, 1988, page 295, esp. note 7; Weimann, 1994). Four examples illustrate

the potential utility of deception in increasing experimental control.
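Before turning to the examples, the payoff function (1) and condition (2) can be made concrete with a minimal numerical sketch. All parameter values below (a five-person group, a 60-token endowment, an MPCR of 0.3) are hypothetical choices for illustration, not values drawn from any of the experiments cited.

```python
# Minimal sketch of the voluntary contribution mechanism payoff in (1),
# using hypothetical parameters: N = 5 subjects, endowment E = 60 tokens,
# MPCR = 0.3, so that condition (2), 1/N < MPCR < 1, holds.

N = 5        # group size
E = 60       # endowment per subject (tokens)
MPCR = 0.3   # marginal per capita return on the public investment

def payoff(i, contributions):
    """Payoff to subject i: endowment, minus own public contribution,
    plus MPCR times the sum of everyone's public contributions."""
    return E - contributions[i] + MPCR * sum(contributions)

assert 1 / N < MPCR < 1          # condition (2): a public good problem exists

print(payoff(0, [0] * 5))        # 60.0  full free riding (the Nash equilibrium)
print(payoff(0, [60] * 5))       # 90.0  full contribution (the efficient outcome)
print(payoff(0, [0, 60, 60, 60, 60]))  # 132.0  unilateral free riding pays,
                                       # which is why x_i = 0 is the equilibrium
```

Because MPCR < 1, each token moved into the public account lowers the contributor's own payoff even though it raises the group total; that tension is what the experiments exploit.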


(i) "Exploitation aversion"

The choice between cooperation and free-riding may be a consequence of

behaviour similar to that observed in conformity experiments. That is, individual

choices may depend on the actions of other players. It may be that any given

player is more likely to be cooperative if the other players are very cooperative,

and more likely to free ride if the other players are very selfish. However, as

Weimann (1994, page 187) remarks, in standard public goods experiments this

question "cannot be answered because 'behaviour of others' was never a

controlled variable in such experiments". Weimann therefore conducts an


experiment involving 5 person groups in which:
"[T]he contributions of the 4 other players must be controlled. ...
Each player played on his own against 4 fictitious players whose
contributions were made by the experimenter. In E6 (Weimann's
experiment 6) these 'phantoms' were very cooperative: they
invested on average 89.75% of their tokens in the public good, and
in E7 (Weimann's experiment 7) they were very selfish and
contributed only 15.75% on average. The 15 subjects in E6 had to
believe they were playing with highly cooperative people, the 14
subjects in E7 had to have the impression they were dealing with
very uncooperative people". (Weimann, 1994, page 189, parenthetic
comments added)

Weimann uses this clever experimental design to demonstrate that "[i]f all

others do not cooperate, subjects react in a very natural way: because they do not

like to be exploited they also stop cooperation" (Weimann, 1994, page 198). The
discovery of this evidence supporting what Weimann calls "exploitation

aversion" necessarily required deception.

(ii) Size effects

According to standard collective choice theory, the tendency for free-riding

increases with group size. That is, free riding will be more severe the larger is the

social group (Sandler, 1992). The laboratory and field evidence usually
demonstrates no such group size effect (Isaac & Walker, 1988; Isaac et al., 1994;

Lipford, 1995; Olson & Caddell, 1994). Clearly, it is not the actual size of the

group which is the potentially important variable, but the perceived size of the

group. Far more experimental data can be generated for any given experimental
cost by using deceptive procedures in which subjects are told the size of the group of

which they are a member, and receive fictitious information regarding the

contributions of the other members of the group. In particular, very large group

experiments are possible using this technique when they would be prohibitively

expensive if deception were prohibited. Isaac et al. (1994) attempt to solve this
cost problem by using course-related credit points as the payoff in their

experiment. A major problem with this method is that it necessarily limits the

experimental economist to using economics students as subjects. Despite the

evidence presented by Isaac et al. (1985), there is a lingering doubt that

economics students are typical in their predisposition to cooperate (Frank et al.,

1993). Deceptive procedures would improve the accuracy and power of

hypothesis testing in at least two ways. First, a wider range of the group sizes

can be studied for any given number of experimental subjects. Second, the

hypothesis that the absence of complete free-riding is a consequence of a


combination of satisficing behaviour and the search for a minimum profitable

coalition could be tested more accurately by focusing precisely on those

combinations of group size and MPCR which seem to yield cooperative

behaviour. Satisficing means that players are "happy enough" to end up with

more than they started with, which requires a minimum profitable coalition, i.e.
MPCR × N* > 1, where N* is the expected number of players in the coalition. If the
probability that any randomly chosen person will cooperate is α, then the expected
number of cooperators is N* = αN, and the condition for a minimum profitable
coalition, MPCR × N* > 1, implies MPCR × αN > 1. For given α and MPCR, the
larger is N the greater is the chance that this condition will be satisfied.
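The arithmetic of the coalition condition is easy to check directly. The α and MPCR values in the sketch below are hypothetical and chosen only to show how the condition turns on group size.

```python
# Minimum profitable coalition check: MPCR * N_star > 1, where
# N_star = alpha * N is the expected number of cooperators.
# alpha and MPCR are hypothetical values for illustration.

MPCR = 0.3    # marginal per capita return
alpha = 0.4   # probability that a randomly chosen player cooperates

for N in (5, 10, 20, 40):
    n_star = alpha * N
    print(N, n_star, MPCR * n_star > 1)
# Output: 5 2.0 False / 10 4.0 True / 20 8.0 True / 40 16.0 True.
# With these values the condition first holds at N = 10, so larger groups
# make a "satisficing" coalition easier to achieve, consistent with the
# argument in the text.
```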

(iii) The "warm glow" hypothesis

Ledyard (1995) and Andreoni (1989) hypothesise that investing in the public

good may in part be a consequence of a "warm inner glow", that is that such

investment may yield some utility independent of the associated monetary

payoffs. It is usual in public good experiments to avoid the use of the term

"public", possibly because of the fear that economics student subjects will make
the association "public good" and therefore infer that free riding is the

experimental hypothesis. Most experimenters using the standard voluntary

contribution mechanism adopt the terms "group" and "private" to denote the

available investments. However, these terms too may carry implications which

result in "warm inner glow" behaviour: "private" may be regarded as roughly

synonymous with "selfish", for example. A little dissimulation would permit

testing of the hypothesis that "warm inner glow" cooperation is the consequence

of experimental artefact caused by nomenclature, rather than a true response to

the payoffs. For instance, behaviour could be compared using the standard
"group/private" terminology" and some neutral terminology like "A/B". These

letters of the alphabet are probably not the best choices, and some test of an
ordering hypothesis would surely be required; letters from later in the alphabet,
which would be more neutral from a student subject's point of view, would
probably be preferable.
(iv) Pulsing

The experimental evidence shows that cooperation tends to diminish with

repetition. However, the decay is not monotonic (E.g. Andreoni, 1988, Tables 1

and 3, pages 296, 299; Isaac & Walker, 1988, Figures II and IV, pages 190, 194;

Isaac et al., 1994, Figures 2-4, pages 11-12). There appear sporadic attempts by

some subjects "to get others to cooperate by unilateral increases in contributions"

(Isaac et al., 1985, page 65). Such "pulsing behaviour could be interpreted as an

attempt to influence others' allocations through signalling" (Isaac et al., 1994,

pages 24-25). Pulsing is therefore a behavioural phenomenon of critical


importance: it represents an attempt to reignite the dying embers of cooperation.

However, under the usual experimental conditions, without deception or proper

control, "pulses do not, in general, prevent the continued decay of contribution"

(Isaac et al., 1985, 66). It is therefore necessary, if the causes and consequences of

pulsing are to be understood, to "control" (that is, simulate) the behaviour of N–1

members of an N member group. Deception-free experimental methods offer

little hope of unravelling this crucial element of behaviour in free-riding

experiments.

Summary and Conclusions


The experimental evidence directly undercuts the basis of attempts to proscribe

deception. The implications of that evidence can be summarised briefly.

Deception and the suspicion of deception do not generally or necessarily alter

subject behaviour. Deception does not appear to "jeopardize future experiments"

or "contaminate a subject pool". It does not mean that "we have lost control". Nor

does it "taint" experiments or cause the data they produce to be invalid. Indeed,

there is good reason to think that the selective use of deception can enhance

control and ensure validity.


It is for that reason that there are relatively few psychologists who have

emerged from the methodological debate on deception adopting the extreme

position advocated by The Prohibitionists. The general consensus is that any

dogmatic proscription of experimental deception is inappropriate, unnecessary

and dangerous (Aitkenhead & Dordoy, 1983; Oetting, 1975; Rosenthal, 1976;

Swingle, 1968, page 31; Trice, 1986; West & Gunn, 1978). Thus Kelman (1967), one

of the more widely cited sceptics regarding the use of experimental deception,

admits that:
"There are good reasons for using deception in many experiments.
There are many significant problems that probably cannot be
investigated without the use of deception. ... I have not forsworn
the use of deception under any and all circumstances".

Two exceptions to this general consensus may be mentioned.1 First, there are

some psychologists who have concluded that deception is ethically unacceptable

because it may cause serious harm to subjects, or because subjects cannot give a

genuine consent to involvement in a deceptive experiment (Baumrind, 1985;

Brandt, 1978; Flanagan, 1973; Jung, 1971; Kroger & Wood, 1986; Mixon, 1977a,

1977b; Noble, 1983; Oliansky, 1991; Rubin, 1973, 1985; Shipley, 1977; Zimbardo,

1974). None of The Prohibitionists have relied on such ethical arguments in

support of their case. Second, there is a group of psychologists who have


advocated role playing experiments as a substitute for deceptive experiments

(Eisner, 1977; Forward et al., 1976; Greenwood, 1983; Hendrick, 1977; Krupat,
1977; Mitchell et al., 1977; Petranek, 1985, 1994; cf. Meeus & Raaijmakers, 1985).

There has been comparatively little interest in role playing experiments among
economists, by comparison to their colleagues in management and international
relations departments.

1 In addition to these two, there are the fundamentalist Christian non-empiricist anti-
psychologists (e.g. Foster & Ledbetter, 1987) who assert that the deception inherent in
psychological research is sinful, and that in any case knowledge gained by empirical methods is
invalid because it does not derive from the Bible. These would be strange bedfellows indeed for
economists.

However, there is a similarity between the expressed

view of The Prohibitionists and the approach of some "role playing

experimenters". In particular, Hynan (1982) argues that if the experimenter is

able to engender competition within role-playing "experiments", thus effectively

distracting the subjects from the fiction of the experience, subjects will behave in

a way which is natural. This view echoes that of many experimental economists

who believe that if only we could persuade subjects to attend closely to their

payoffs, no other feature of the experimental design is terribly important. In

other words, there seems to be a view among some experimental economists that
a "steep objective function" is the key feature in the design of an experiment. The

discussion of ultimatum games and data validity above should make clear that

the typical experimental subject will always attend to cues, information and

hunches about features of the design other than the payoffs. Competition or the

egoistic instinct does not guarantee data validity.

In expansive mood in the closing chapter of his Experiments in Economics,

John Hey suggests a vision splendid of the future of experimental economics:


"[W]e need to cast ourselves adrift from neoclassical economics and
take the plunge into the sea of ill-defined experiments. ... Almost
definitionally, they need to be both ill-defined as far as the subjects
are concerned and well structured as far as the experimenter is
concerned: he or she needs to retain control over the experiment.
The problem then is to control how much and what part of the
structure is to be hidden from the subjects" (Hey, 1991, pages 228-
229).

If experimental economics is to proceed according to this vision, a necessary

corollary of keeping structure "hidden from the subjects" is experimental

deception. Indeed, that some deceptive experimental results have been published

in the economics literature would seem to imply that editors are sensibly

ignoring The Prohibitionist view, or that it has not yet fully taken hold.
Rather than providing glib answers to difficult methodological and design

questions, it would seem useful general practice for experimental economists to

follow the Roth Rule: "It is important, I think, to avoid establishing rigid

orthodoxies on questions of methodology" (Roth, 1995, page 86).


References
Marilyn Aitkenhead & Jackie Dordoy (1983), "Research on the ethics of research",
Bulletin of the British Psychological Society, 36, 315-318.
David F. Allen (1983), "Follow-up analysis of use of forewarning and deception
in psychological experiments", Psychological Reports, 52, 3, 899-906.
James Andreoni (1988), "Why free ride: Strategies and learning in public goods
experiments", Journal of Public Economics, 37, 291-304.
James Andreoni (1989), "Giving with impure altruism: Applications to charity
and Ricardian equivalence", Journal of Political Economy, 97, 1447-1458.
Solomon E. Asch (1951), "Effects of group pressure upon the modification and
distortion of judgement", in H. Guetzkow (Ed.), Groups, Leadership and
Men, (Carnegie Press, Pittsburgh, PA).
Solomon E. Asch (1955), "Opinions and social pressure", Scientific American, 193,
5, 31-35.
Diana Baumrind (1985), "Research using intentional deception: Ethical issues
revisited", American Psychologist, 40, 2, 165-174.
Bernard C. Beins (1993), "Using the Barnum effect to teach about ethics and
deception in research", Teaching of Psychology, 20, 1, 33-35.
Ken Binmore, A. Shaked & John Sutton (1985), "Testing Noncooperative
Bargaining Theory: A Preliminary Study", American Economic Review, 75, 5,
1178-1180.
P. Bohm (1972), "Estimating the demand for public goods: An Experiment",
European Economic Review, 3, 111-130.
Lewis W. Brandt (1978), "Don't sweep the ethical problems under the rug –
Totalitarian versus equalitarian ethics", Canadian Psychological Review, 19, 1,
63-66.
T. C. Brock & L. A. Becker (1966), "'Debriefing' and susceptibility to subsequent
experimental manipulations", Journal of Experimental Social Psychology, 2,
314-323.
Raymond R. Burke, Wayne S. DeSarbo, Richard L. Oliver & Thomas S. Robertson
(1988), "Deception by implication: An experimental investigation", Journal of
Consumer Research, 14, 4, 483-494.
A. Chipman (1966), "Conformity as a differential function of social pressure and
judgement difficulty", Journal of Social Psychology, 4, 532-537.
Larry Christensen (1977), "The negative subject: Myth, reality, or a prior
experimental experience effect?", Journal of Personality and Social
Psychology, 35, 6, 392-400.
Larry Christensen (1988), "Deception in psychological research: When is its use
justified?", Personality and Social Psychology Bulletin, 14, 4,664-675.
Frank L. Collins, I. Franklin Kuhn & Glen D. King (1979), "Variables affecting
subjects' ethical ratings of proposed experiments", Psychological Reports, 44,
1, 155-164.
W. D. Crano & M. B. Brewer (1973), Principles of Research in Social Psychology,
(McGraw-Hill, London).
Douglas D. Davis & Charles A. Holt (1992), Experimental Economics, (Princeton
University Press, Princeton, N.J.).
Patricia J. Dunston & Sherman Ross (1986), "Deception in psychological research:
A continuing problem", Perceptual and Motor Skills, 62, 1, 290
Margaret S. Eisner (1977), "Ethical problems in social psychological
experimentation in the laboratory", Canadian Psychological Review, 18, 3,
233-241.
S. Fillenbaum (1966), "Prior deception and subsequent experimental
performance: The 'faithful' subject", Journal of Personality and Social
Psychology, 4, 532-537.
Phillip D. Finney (1987), "When consent information refers to risk and deception:
Implications for social research", Journal of Social Behavior and Personality, 2,
1, 37-48
Michael F. Flanagan (1973), "Role playing: An alternative to deception? A review
of the issue", American Psychologist, 28, 5, 444-445.
Brian R. Flay & Nicholas P. Hamid (1977), "Artifact in social psychological
research: The subject's view", New Zealand Psychologist, 6, 2, 84-96.
John Forward, Rachelle Canter & Ned Kirsch (1976), "Role-enactment and
deception methodologies? Alternative paradigms?", American Psychologist,
31, 8, 595-604.
James D. Foster & Mark F. Ledbetter (1987), "Christian anti-psychology and the
scientific method", Journal of Psychology and Theology, Spring, 15, 1, 10-18.
Robert H. Frank, Thomas Gilovich & Dennis T. Regan (1993), "Does studying
economics inhibit cooperation?", Journal of Economic Perspectives, 7, 2, 159-
171.
Eugenia P. Gerdes (1979), "College students' reactions to social psychological
experiments involving deception", Journal of Social Psychology, 107, 1, 99-
110.
David R. Glasgow, Cyril J. Sadowski & Stephen F. Davis (1977), "The project
must count: Fostering positive attitudes toward the conduct of research",
Bulletin of the Psychonomic Society, 10, 6, 471-474.
John D. Greenwood (1983), "Role-playing as an experimental strategy in social
psychology", European Journal of Social Psychology, 13, 3, 235-254.
Alan E. Gross & India Fleming (1982), "Twenty years of deception in social
psychology", Personality and Social Psychology Bulletin, 8, 3, 402-408.
Charles. L. Gruder, Alfred Stumpfhauser & Robert S. Wyer (1977), "Improvement
in experimental performance as a result of debriefing about deception",
Personality and Social Psychology Bulletin, 3, 3, 434-437.
Clyde Hendrick (1977), "Role-playing as a methodology for social research: A
symposium", Personality and Social Psychology Bulletin, 3, 3, 454.
John D. Hey (1991), Experiments in Economics, (Basil Blackwell, Oxford).
Michael T. Hynan (1982), "Aggression in a competitive task", Psychological
Reports, 50, 2, 663-672.
R. Mark Isaac, Kenneth F. McCue & Charles R. Plott (1985), "Public goods
provision in an experimental environment", Journal of Public Economics, 26,
51-74.
R. Mark Isaac & James Walker (1988), "Group size effects in public goods
provision: The voluntary contribution mechanism", Quarterly Journal of
Economics, 103, 179-200.
R. Mark Isaac, James Walker & Arlington W. Williams (1994), "Group size and
the voluntary provision of public goods: Experimental evidence utilizing
large groups", Journal of Public Economics, 54, 1-36.
Edward E. Jones & Harold B. Gerard (1967), Foundations of Social Psychology,
(John Wiley & Sons, New York).
John Jung (1971), The Experimenter's Dilemma, (Harper & Row, New York).
Herbert C. Kelman (1967), "Human Use of Human Subjects: The Problem of
Deception in Social Psychological Experiments", Psychological Bulletin, 67, 1-
11.
James H. Korn (1987), "Judgments of acceptability of deception in psychological
research.", Journal of General Psychology, 114, 3, 205-216.
Rolf O. Kroger & Linda A. Wood (1986), "Needed: Radical surgery", American
Psychologist, 41, 3, 317-318.
Edward Krupat (1977), "A re-assessment of role playing as a technique in social
psychology", Personality and Social Psychology Bulletin, 3, 3, 498-504.
Edward Krupat & Ron Garonzik (1994), "Subjects' expectations and the search for
alternatives to deception in social psychology", British Journal of Social
Psychology, 33, 2, 211-222.
John O. Ledyard (1995), "Public Goods: A Survey of Experimental Research", in
John H. Kagel & Alvin E. Roth (Eds.) (1995), The Handbook of Experimental
Economics, (Princeton University Press, Princeton, NJ).
Hanna Levenson, Morris J. Gray & Arnette Ingram (1976), "Research methods in
personality five years after Carlson's survey", Personality and Social
Psychology Bulletin, 2, 2, 158-161.
J. W. Lipford (1995), "Group-Size and the free-rider hypothesis - an examination
of new evidence from churches", Public Choice, 83, 3-4, 291-303.
Jack P. Lipton & Raymond T. Garza (1978), "Further evidence for subject pool
contamination", European Journal of Social Psychology, 8, 4, 535-539.
B. Andrew Lustig, John Coverdale, Timothy Bayer & Elizabeth Chiang (1993),
"Attitudes toward the use of deception in psychologically induced pain", IRB-
A Review of Human Subjects Research, 15, 6, 6-8.
Robert J. MacCoun & Norbert L. Kerr (1987), "Suspicion in the psychological
laboratory: Kelman's prophecy revisited", American Psychologist, 42, 2, 199.
William J. McGuire (1969), "Suspicioness of experimenter's intent", in Robert
Rosenthal & Ralph L. Rosnow (Eds.), Artifact in Behavioral Research,
(Academic Press, New York).
Gerald Marwell & Ruth E. Ames (1981), "Economists free ride, does anyone
else?", Journal of Public Economics, 15, 295-310.
W. Meeus & Q. Raaijmakers (1985), "Is role-playing an alternative to deception?
A comparison of research strategies", Gedrag Tijdschrift voor Psychologie, 13,
2, 1-12.
Stanley Milgram (1974), Obedience to Authority, (Harper & Row, New York).
Judson Mills (1976), "A procedure for explaining experiments involving
deception", Personality and Social Psychology Bulletin, 2, 1, 3-13.
Elizabeth V. Mitchell, Theodore J. Kaul & Harold B. Pepinsky (1977), "The limited
role of psychology in the roleplaying controversy", Personality and Social
Psychology Bulletin, 3, 3, 514-518.
Don Mixon (1977a), "Temporary false belief", Personality and Social Psychology
Bulletin, 3, 3, 479-488.
Don Mixon (1977b), "Why pretend to deceive?", Personality and Social
Psychology Bulletin, 3, 4, 647-653.
Per Molander (1992), "The Prevalence of Free Riding", Journal of Conflict
Resolution, 36, 4, 756-71.
William Noble (1983), "Ethics, or issues in methodology?", Australian
Psychologist, March, 18, 1, 25-38.
E. R. Oetting (1975), "A response to 'Science, psychology and deception'", Bulletin
of the British Psychological Society, 28, 268-269.
Adam Oliansky (1991), "A confederate's perspective on deception", Ethics and
Behavior, 1, 4, 253-258.
D. V. A. Olson & D. Caddell (1994), "Generous congregations, generous givers -
congregational contexts that stimulate individual giving", Review of
Religious Research, 36, 2, 168-180.
Martin T. Orne (1969), "Demand characteristics and the concept of quasi-
controls", in Robert Rosenthal & Ralph L. Rosnow (Eds.), Artifact in
Behavioral Research, (Academic Press, New York).
Martin T. Orne & Charles H. Holland (1968), "On the ecological validity of
laboratory deceptions", International Journal of Psychiatry, 6, 282-293.
Charles F. Petranek (1985), "The use of innocent deception in simulations",
Simulation and Games, 16, 3, 351-352.
Charles Petranek (1994), "A maturation in experiential learning: Principles of
simulation and gaming", Simulation and Gaming, 25, 4, 513-523.
R. O. Pihl, Camillo Zacchia & Amos Zeichner (1981), "Follow-up analysis of the
use of deception and aversive contingencies in psychological experiments",
Psychological Reports, 48, 3, 927-930.
H. W. Riecken (1962), "A program for research on experiments in social
psychology", in N. F. Washburne (Ed.), Decisions, Values and Groups,
Volume 2, (Pergamon Press, New York).
N. J. Roese & D. W. Jamieson (1993), "20 years of bogus pipeline research: A
critical review and meta-analysis", Psychological Bulletin, 114, 2, 363-375.
Robert Rosenthal (1963), "On the social psychology of the psychological
experiment: The experimenter's hypothesis as unintended determinant of
experimental results", American Scientist, 51, 268-283.
Robert Rosenthal (1976), Experimenter Effects in Behavioral Research, (John
Wiley & Sons, New York).
Alvin E. Roth (1995), "Introduction", in John H. Kagel & Alvin E. Roth (Eds.)
(1995), The Handbook of Experimental Economics, (Princeton University
Press, Princeton, NJ).
Zick Rubin (1973), "Designing honest experiments", American Psychologist, 28, 5,
445-448.
Zick Rubin (1985), "Deceiving ourselves about deception: Comment on Smith &
Richardson's "Amelioration of deception and harm in psychological
research"", Journal of Personality and Social Psychology, 48, 1, 252-253.
Todd Sandler (1992), Collective Action: Theory and Applications, (Harvester
Wheatsheaf, New York).
Friedrich Schneider & Werner W. Pommerehne (1981), "Free riding and collective
action: An experiment in public microeconomics", Quarterly Journal of
Economics, 96, 689-704.
Shalom H. Schwartz & Avi Gottlieb (1981), "Participants' postexperimental
reactions and the ethics of bystander research", Journal of Experimental Social
Psychology, 17, 4, 396-407.
Thorne Shipley (1977), "Misinformed consent: An enigma in modern social
science research", Ethics in Science and Medicine, 4, 3-4, 93-106.
Joan E. Sieber, Rebecca Iannuzzo & Beverley Rodriguez (1995), "Deception
methods in psychology: Have they changed in 23 years?", Ethics and
Behavior, 5, 1, 67-85.
Stevens S. Smith & Deborah Richardson (1985a), "On deceiving ourselves about
deception: Reply to Rubin", Journal of Personality and Social Psychology, 48,
1, 254-255.
Stevens S. Smith & Deborah Richardson (1985b), "Amelioration of deception and
harm in psychological research: The important role of debriefing", Journal of
Personality and Social Psychology, 44, 5, 1075-1082.
David J. Stang (1976), "Ineffective deception in conformity research: Some causes
and consequences", European Journal of Social Psychology, 6, 3, 353-367.
Lawrence J. Stricker (1967), "The true deceiver", Psychological Bulletin, 68, 13-20.
Lawrence J. Stricker, Samuel Messick & Douglas N. Jackson (1969), "Evaluating
deception in psychological research", Psychological Bulletin, 71, 343-351.
Paul G. Swingle (1968), Experiments in social psychology, (Academic Press, New
York).
Richard Thaler (1988), "The Ultimatum Game", Journal of Economic Perspectives,
2, 4, 195-206.
Ashton D. Trice (1986), "Ethical variables?", American Psychologist, 41, 4, 482-
483.
John C. Turner (1981), "Some considerations in generalizing experimental social
psychology", in G. M. Stephenson & J. M. Davis (Eds.), Progress in Applied
Social Psychology, Volume 1, (John Wiley, Chichester).
W. E. Vinacke (1954), "Deceiving experimental subjects", American Psychologist,
9, 155.
Bruce W. Walsh (1976), "Disclosure of deception by debriefed subjects: Another
look", Psychological Reports, 38, 3, 783-786.
Elaine Walster, Ellen Berscheid, Darcy Abrahams & Vera Aronson (1967),
"Effectiveness of debriefing following deceptive experiments", Journal of
Personality and Social Psychology, 6, 371-380.
Stephen G. West & Steven P. Gunn (1978), "Some issues of ethics in social
psychology", in Dennis Krebs (Ed.), Readings in Social Psychology:
Contemporary Perspectives, (Second Edition, Harper & Row, New York).
Joachim Weimann (1994), "Individual behavior in a free riding experiment",
Journal of Public Economics, 54, 185-200.
Richard L. Wiener & Patricia V. Erker (1986), "The effects of prebriefing
misinformed research participants on their attributions of responsibility",
Journal of Psychology, 120, 4, 397-410.
P. G. Zimbardo (1974), "On the ethics of intervention in human psychological
research: With special reference to the Stanford prison experiment",
Cognition, 2, 243-256.
