
Journal of Retailing 88 (4, 2012) 542–555

Common Method Bias in Marketing: Causes, Mechanisms, and Procedural Remedies

Scott B. MacKenzie a,∗, Philip M. Podsakoff b,1

a Department of Marketing, Kelley School of Business, Indiana University, Bloomington, IN 47405, United States
b Department of Management at the Kelley School of Business, Indiana University, Bloomington, IN 47405, United States

∗ Corresponding author. Tel.: +1 812 855 1101. E-mail addresses: mackenz@indiana.edu (S.B. MacKenzie), podsakof@indiana.edu (P.M. Podsakoff).
1 Tel.: +1 812 855 2747.
http://dx.doi.org/10.1016/j.jretai.2012.08.001

Abstract
There is a great deal of evidence that method bias influences item validities, item reliabilities, and the covariation between latent constructs.
In this paper, we identify a series of factors that may cause method bias by undermining the capabilities of the respondent, making the task of
responding accurately more difficult, decreasing the motivation to respond accurately, and making it easier for respondents to satisfice. In addition,
we discuss the psychological mechanisms through which these factors produce their biasing effects and propose several procedural remedies that
counterbalance or offset each of these specific effects. We hope that this discussion will help researchers anticipate when method bias is likely to
be a problem and provide ideas about how to avoid it through the careful design of a study.
© 2012 New York University. Published by Elsevier Inc. All rights reserved.

Keywords: Research Dialog on common method bias; Research methods in retailing

The dangers of the effects of method bias have long been recognized in the research literature (e.g., Arndt and Crane 1975; Bagozzi 1980, 1984; Campbell and Fiske 1959; Cote and Buckley 1987, 1988; Fiske 1982; Greenleaf 1992; McGuire 1969). For example, over 50 years ago in the psychology literature, Campbell and Fiske (1959) called attention to the fact that any measuring instrument inevitably has: (a) systematic trait/construct variance due to features that are intended to represent the trait/construct of interest, (b) systematic error variance due to characteristics of the specific method being employed which may be common to measures of other traits/constructs, and (c) random error variance. In this research, they (Fiske 1982, p. 82) defined the term "method" broadly to encompass several key aspects of the measurement process including: the content of the items, the response format, the general instructions and other features of the test-task as a whole, the characteristics of the examiner, other features of the total setting, and the reason why the subject is taking the test.

Twenty-five years later in the marketing literature, Bagozzi (1984, p. 24) further warned that it is important to understand the sources of systematic measurement error because they can induce regular or irregular changes in the means, variances, and/or covariances of observations. This especially becomes a problem when a subset of measurements is so affected. By suppressing or enhancing variation in selected observations in an orderly way, sources of systematic error may influence key phenomena of interest over and above the true causes and random error. A few years later Cote and Buckley (1987, p. 317) conducted a meta-analysis of 70 MTMM studies to obtain an estimate of the amount of systematic measurement error present in different types of measures and concluded that, although trait/construct variance often is assumed to be large in relation to systematic and random measurement error, "Our findings indicate the general assumption of minimal measurement error is highly questionable. Measurement error, on average, accounts for most of the variance in a measure. This observation raises questions about the practice of applying statistical techniques based on the assumption that trait variance is large in relation to measurement error variance."

There are two detrimental effects produced by systematic method variance. First, systematic method variance biases estimates of construct validity and reliability (e.g., Bagozzi 1984; Baumgartner and Steenkamp 2001; Buckley, Cote, and Comstock 1990; Cote and Buckley 1987; Doty and Glick 1998; Lance et al. 2010; MacKenzie, Podsakoff, and Podsakoff 2011; Podsakoff et al. 2003; Podsakoff, MacKenzie, and Podsakoff 2012; Williams, Cote, and Buckley 1989).
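In the generic confirmatory factor analytic notation often used for this decomposition (a sketch added here for concreteness, not the authors' own formulation), a standardized measure x_i of a trait T obtained with method M can be written as

    x_i = \lambda_{T,i}\, T + \lambda_{M,i}\, M + e_i ,
    \qquad
    \mathrm{Var}(x_i) = \lambda_{T,i}^{2} + \lambda_{M,i}^{2} + \theta_i ,

where, assuming T and M are standardized and uncorrelated, \lambda_{T,i}^{2} is the systematic trait/construct variance, \lambda_{M,i}^{2} is the systematic method variance, and \theta_i = \mathrm{Var}(e_i) is the random error variance.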

Table 1
Empirical evidence of the effects of method bias.

Nature of method bias effect: Effects of method bias on item reliability or validity.
Evidence: CFA of MTMM matrices (Buckley, Cote, and Comstock 1990; Cote and Buckley 1987; Doty and Glick 1998; Lance et al. 2010; Williams, Cote, and Buckley 1989).
Conclusion: Evidence suggests that between 18 percent and 32 percent of the variance in items is attributable to method factors.

Nature of method bias effect: Effects of method bias on the covariation between constructs.
Evidence: Estimates based on MTMM meta-analytic studies (Buckley, Cote, and Comstock 1990; Lance et al. 2009).
Conclusion: Meta-analytic evidence suggests that the true correlation between traits in these studies was inflated between 38 percent and 92 percent by method bias.

Evidence: Effects of obtaining measures of predictor and criterion variables from the same versus different sources (summarized by Podsakoff, MacKenzie, and Podsakoff 2012).
Conclusion: Summary of evidence from meta-analytic studies suggests that correlations between many widely studied constructs are inflated from 133 percent to 304 percent when the predictor and criterion variables are obtained from the same source as opposed to different sources.

Evidence: Effects of response styles (Baumgartner and Steenkamp 2001).
Conclusion: Evidence suggests that 27 percent of the variance in the magnitude of correlations between fourteen consumer behavior constructs was attributable to five response styles.

Evidence: Effects of proximity (Weijters, Geuens, and Schillewaert 2009).
Conclusion: The correlation between items measuring unrelated constructs increased by 225 percent when they were positioned next to each other compared to when they were positioned six items apart.

Evidence: Effects of item wording (Harris and Bladen 1994).
Conclusion: Evidence suggests that the correlation between constructs was 0.21 when item wording bias was controlled, but 0.50 when it was not controlled (an increase of 238 percent).

Source: Adapted from evidence summarized in Podsakoff, MacKenzie, and Podsakoff (2012).
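The inflation of between-construct correlations summarized in Table 1 follows directly from the decomposition sketched above. As an added illustration (under the usual assumptions of that sketch, not a result reported in the table), if items x and y measure two different traits but load on the same method factor M, their model-implied correlation is

    \mathrm{Corr}(x, y) = \lambda_{T,x}\,\lambda_{T,y}\,\phi_{xy} + \lambda_{M,x}\,\lambda_{M,y} ,

where \phi_{xy} is the true correlation between the traits. Even when \phi_{xy} = 0, the observed correlation is shifted by the product of the method loadings, upward when they share the same sign and downward when they do not, which is one way to see why method bias can either inflate or deflate estimated relationships.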

Because a latent construct captures systematic variance among its measures, if systematic method variance is not controlled, this variance will be lumped together with systematic trait variance in the construct. This can lead to: (a) incorrect conclusions about the adequacy of a scale's reliability and convergent validity (Baumgartner and Steenkamp 2001; Lance in Brannick et al. 2010; Williams, Hartman, and Cavazotte 2010); (b) underestimates of corrected correlations in meta-analyses because the reliability estimates will be artificially inflated due to method variance (Le, Schmidt, and Putka 2009); and (c) biased estimates of the effects of other correlated predictors on the criterion variable (Bollen 1989).

The second detrimental effect of systematic method variance is that it can bias parameter estimates of the relationship between two different constructs. It has been widely recognized that method bias can inflate, deflate, or have no effect on estimates of the relationship between two constructs (e.g., Bagozzi 1984; Baumgartner and Steenkamp 2001; Cote and Buckley 1988; Podsakoff et al. 2003; Podsakoff, MacKenzie, and Podsakoff 2012; Siemsen, Roth, and Oliveira 2010). This is a serious problem because it can: (a) bias hypothesis tests and cause Type I or Type II errors, (b) lead to incorrect conclusions about the proportion of variance accounted for in a criterion variable, and (c) alter conclusions about the nomological and/or discriminant validity of a scale.

Recently, Podsakoff, MacKenzie, and Podsakoff (2012) reviewed the empirical evidence of the effects that method bias has on item reliability and validity as well as on the covariation between constructs. This evidence is summarized in Table 1. As indicated in this table, an average of 18–32 percent of the variance in a typical measure is attributable to method variance. In addition, estimates of the covariation between constructs are typically inflated 27–304 percent by method variance depending upon the nature of the method factor examined. They concluded that (2012, p. 10.27), "the evidence shows that method biases can significantly influence item validities and reliabilities as well as the covariation between latent constructs." This suggests that researchers must be knowledgeable about the ways to control method biases that might be present in their studies.

Several researchers (Bagozzi 1984; Baumgartner and Steenkamp 2001; Podsakoff et al. 2003; Podsakoff, MacKenzie, and Podsakoff 2012; Williams, Hartman, and Cavazotte 2010) have noted that there are two fundamental ways to control for method biases. One way is to statistically control for the effects of method biases after the data have been gathered; the other is to minimize their effects through the careful design of the study's procedures. Far more has been written about the post hoc ways to statistically control for method bias than about the procedural controls. Based on their recent review of this extensive literature, Podsakoff, MacKenzie, and Podsakoff (2012) recommended that researchers try to use what they called the directly measured latent factor technique (Bagozzi 1984; Podsakoff et al. 2003; Williams, Gavin, and Williams 1996) or the measured response style technique (Baumgartner and Steenkamp 2001; Weijters, Schillewaert, and Geuens 2008). The advantage of these techniques is that they identify the nature of the method bias being statistically controlled. If the specific source of the method bias is unknown or cannot be measured, then they suggested the use of the CFA marker technique (Williams, Hartman, and Cavazotte 2010) or the common method factor technique (Bagozzi 1984; Podsakoff et al. 2003).
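As an illustration of the first of these recommendations, a directly measured latent method factor can be specified in any SEM package that accepts lavaan-style model syntax. The sketch below uses the third-party Python package semopy; the variable names, the data file, and the choice of a social desirability scale (sd1-sd3) as the measured method cause are illustrative assumptions added here, not part of the original article.

    # A minimal sketch of a "directly measured latent factor" specification,
    # assuming a pandas DataFrame with columns x1-x3, y1-y3, sd1-sd3.
    # pip install semopy pandas
    import pandas as pd
    import semopy

    model_desc = """
    # substantive constructs
    Trait1 =~ x1 + x2 + x3
    Trait2 =~ y1 + y2 + y3
    # directly measured method factor (illustrative: social desirability),
    # anchored by its own indicators and allowed to load on every substantive item
    Method =~ sd1 + sd2 + sd3 + x1 + x2 + x3 + y1 + y2 + y3
    """

    df = pd.read_csv("survey_responses.csv")  # hypothetical data file
    model = semopy.Model(model_desc)
    model.fit(df)
    print(model.inspect())  # trait loadings, method loadings, factor covariances

The Trait1-Trait2 covariance estimated by such a model is purged of the variance the items share with the measured method factor; comparing it with the covariance from a model that omits the Method factor gives a rough sense of how much the method bias was inflating (or deflating) the relationship.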

Although neither of these techniques is as good as the previously recommended ones, in some situations they may be the best a researcher can do, and they are better than relying on the weaker, conceptually flawed statistical control procedures (i.e., Lindell and Whitney's correlation-based marker variable technique or Harman's one factor test) that have been used in some recent retailing research (e.g., see Arnold and Reynolds 2009; Babakus, Ugur, and Ashill 2009; Grace and Weaven 2011; Spralls, Hunt, and Wilcox 2011).

In contrast, the literature provides less guidance about what kinds of procedural controls should be implemented in a given study to control for method bias. In part this may be because, although many of the sources of method bias have been identified, the mechanisms through which they produce their effects are not well understood. In view of this, the current paper contributes to the literature by: (a) examining the conditions under which method bias is likely to be a particularly serious problem, (b) identifying the potential mechanisms responsible for it, and (c) suggesting several procedural techniques that can be used to diminish it. Hopefully, this information will provide researchers with better tools to avoid or minimize the detrimental effects of method bias.

When is method bias likely to be a problem?

Several researchers (Krosnick 1991, 1999; Sudman, Bradburn, and Schwarz 1996; Tourangeau, Rips, and Rasinski 2000) have speculated that, because the cognitive effort required to generate an optimal answer to a long series of questions on a wide range of topics is often substantial, respondents cope with these demands by seeking easier ways to generate their answers. More specifically, when the difficulty of the task of generating an optimal answer is high, and a respondent's ability, natural predisposition, or motivation to expend the required amount of cognitive effort are low, Krosnick (1991, 1999) has argued that respondents may satisfice by being less thorough in question comprehension, memory retrieval, judgment, and response selection. In other words, they may expend less effort: thinking about the meaning of a question; searching their memories for information to answer the question; integrating the information that has been retrieved to form a judgment; and matching their judgments to the response options presented in the question.

Our fundamental hypothesis is that responses will be more strongly influenced by method bias when respondents cannot provide accurate responses and/or when they are unwilling to try to provide accurate responses. In other words, we expect that when respondents are satisficing rather than optimizing they will be more likely to respond stylistically, and their responses will be more susceptible to method bias. This is consistent with Krosnick (1999, p. 548) who observes that when respondents are unwilling or unable to expend the cognitive effort required to accurately answer a question, they,

. . . can use a number of possible decision heuristics to arrive at a satisfactory answer without expending substantial effort. A person might select the first reasonable response he or she encounters in a list rather than carefully processing all possible alternatives. Respondents could be inclined to accept assertions made in the questions regardless of content, rather than performing the cognitive work required to evaluate those assertions. Respondents might offer safe answers, such as the neutral point of a rating scale, endorsement of the status quo, or saying "don't know" so as to avoid expending the effort necessary to consider and possibly take more risky stands. In the extreme, respondents could randomly select a response from those offered by a closed-ended question.

Therefore, the key to understanding when method bias will be a problem is to identify when respondents are likely to be satisficing rather than optimizing. Generally speaking, as shown in Fig. 1, respondents will optimize when they are able to provide accurate answers and they are motivated to provide accurate answers. Both are necessary. If respondents are able to provide accurate answers, but unwilling to try to do so, then satisficing will result. Similarly, if respondents are motivated to provide accurate answers, but are unable to do so, once again satisficing may be the result. The ability of respondents to answer accurately is itself jointly determined by the match between the respondent's capabilities and the difficulty of the task of answering the question. If a respondent lacks the experience, intelligence, and so forth needed to answer accurately, or the task is too difficult, then satisficing will be the likely result. This suggests that method bias is more likely to be a problem when factors are present that: (a) undermine the capabilities of the respondent; (b) make the task of responding accurately more difficult; (c) decrease the motivation to respond accurately; and (d) make it easier for respondents to satisfice (i.e., decrease the difficulty of the task of satisficing).

Krosnick (1991, 1999) reviews a considerable amount of empirical evidence that is consistent with these general propositions. For example, there is a great deal of evidence that acquiescence bias is more common among respondents with limited cognitive capabilities (Elliott 1961; Jackson 1959; Schuman and Presser 1981), for items that are more difficult to answer (Trott and Jackson 1967), and for items that appear toward the end of a long questionnaire when respondents are presumably fatigued (Clancy and Wachsler 1971). Similarly, there is evidence that a nondifferentiated response style and random measurement error are more common when respondents have limited cognitive capabilities and toward the end of a long questionnaire when respondents are fatigued (Herzog and Bachman 1981; Kraut, Wolfson, and Rothenberg 1975; Krosnick and Alwin 1988). The same is true for response order effects (i.e., the tendency to select the first or the last response alternative). More specifically, several researchers have found that response order effects are stronger for: less educated respondents (Krosnick and Alwin 1987; McClendon 1986, 1991); respondents with limited cognitive skills (Krosnick 1991; Krosnick and Alwin 1987); and questions that are more difficult and when respondents are fatigued (Mathews 1927).

Finally, empirical evidence also demonstrates that respondents are less likely to say "don't know" or "no opinion" when motivation is high due to: (a) interest in the topic (Francis and Busch 1975; Rapoport 1982); (b) the perceived ability to process and understand information relevant to the topic (Krosnick and Milburn 1990); and (c) inducements to optimize (McDaniel and Rao 1980; Wotruba 1966).

Fig. 1. When is method bias likely to be a problem?

In contrast, "no opinion" or "don't know" responses are more common for questions at the end of a long questionnaire when motivation is low due to fatigue (Dickinson and Kirzner 1985; Ferber 1966), and when intrinsic motivation to optimize has been undermined (Hansen 1980).

Therefore, in the sections below we will identify some of the factors that may increase method bias by increasing the likelihood of satisficing, either by undermining the capabilities of respondents; decreasing their motivation to respond accurately; or making the task of responding accurately more difficult or the task of satisficing easier. In addition, we will attempt to link these mechanisms to the stages in the question response process (Krosnick 1999; Tourangeau, Rips, and Rasinski 2000). Finally, we will suggest procedural changes researchers can make to offset these biases.

Remedies for factors that decrease the ability to respond accurately

To answer a question accurately, the difficulty of the task must not exceed the capabilities of the respondent. Therefore, factors that limit a respondent's capabilities and/or make the task of answering accurately more difficult are potential sources of method bias. There are a number of factors that may cause biased responding by decreasing the ability of the respondent to answer accurately. These factors relate to the respondent's ability, experience, and/or characteristics of the items that s/he is asked to answer (Table 2).

Lack of ability
Several researchers (Krosnick 1991, 1999; Krosnick and Alwin 1987; Schuman and Presser 1981) have noted that low ability (e.g., as reflected in a lack of verbal skills, education, or cognitive sophistication) increases the likelihood of satisficing because respondents who lack these abilities have more difficulty comprehending the meaning of the questions, retrieving relevant information from memory, and making judgments. To mitigate this problem, researchers need to do a better job of aligning the difficulty of the task of answering the questions with the capabilities of the respondents. For example, if the concern is about the cognitive sophistication of respondents, researchers need to ensure that the questions are written at a level that the respondents can comprehend. This is something that should be assessed through pretesting. Alternatively, if the concern is about the literacy of respondents, questions can be recorded and presented to respondents in audio form to augment the written form (Bowling 2005). This can be done using audio computer-assisted self-administered interviewing (ACASI) techniques.

Lack of experience thinking about the topic
Several researchers have suggested that a lack of experience thinking about a given topic may impair a respondent's ability to answer relevant questions about that topic (Fiske and Kinder 1981; Krosnick 1991). This can produce several detrimental effects. Comprehension may be impaired because a lack of experience reduces the respondent's ability to link key terms to relevant concepts. Information retrieval may also be more difficult because there is less information to retrieve or there has been less practice retrieving it. Finally, a lack of experience thinking about a topic may make it more difficult to draw inferences needed to fill in missing information, and to integrate material that is retrieved. One obvious way to remedy this is to make sure that you do not ask respondents to tell more than they can know (Ericsson and Simon 1980; Nisbett and Wilson 1977). This can be avoided by exercising caution when asking respondents about the motives for their behavior, the effects of situational factors on their behavior, or other things pertaining to cognitive processes that they are unlikely to have attended to or stored in short-term memory. Perhaps more importantly, this suggests that it is crucial to select respondents who have the necessary experience thinking about the issues of interest in the questionnaire. In instances where several diverse topics are of interest, this may require asking one set of respondents (e.g., employees) about some topics, and another set of respondents (e.g., managers) about other topics.

Complex or abstract questions
Complex, abstract questions are more difficult for respondents to answer (Doty and Glick 1998) and increase the likelihood of satisficing (Krosnick 1991) because they: (a) decrease the ability of respondents to comprehend the meaning of the questions, (b) make it more difficult for respondents to know what information to retrieve from memory to answer the question, and (c) make judgments more difficult because they make it harder for respondents to assess the completeness of what has been recalled and to identify and fill in gaps in what is recalled. Obvious ways to diminish this problem are to: avoid referring to vague concepts without providing clear examples; simplify complex or compound questions; and use language, vocabulary, and syntax that match the reading capabilities of the respondents.
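One low-cost way to act on the pretesting and readability advice above is to screen draft items for reading difficulty before they ever reach respondents. The sketch below uses the third-party Python package textstat; the example items and the grade-level threshold are invented for illustration.

    # Flag draft questionnaire items whose reading level may exceed the
    # capabilities of the intended respondents.
    # pip install textstat
    import textstat

    draft_items = [
        "I am happy with the service I got at this store.",
        "The utilitarian benefits conferred by the focal retail establishment exceeded my a priori expectations.",
    ]

    MAX_GRADE = 8  # illustrative threshold; set to the target population's reading level

    for item in draft_items:
        grade = textstat.flesch_kincaid_grade(item)  # approximate U.S. school grade level
        flag = "REVISE" if grade > MAX_GRADE else "ok"
        print(f"{grade:5.1f}  {flag:6s}  {item}")

Automated indices are only a screen, of course; they complement rather than replace pretesting with respondents drawn from the target population.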

Table 2
Factors that increase method bias by decreasing the ability to respond accurately.

Condition that causes method bias: Lack of verbal ability, education, or cognitive sophistication (Krosnick 1999; Krosnick and Alwin 1987; Schuman and Presser 1981).
Mechanism: May increase the difficulty of the task of comprehending the meaning of the questions, retrieving information, and making judgments (Krosnick 1991).
Potential remedies: Align the difficulty of the task with the capabilities of the respondents by: (a) pretesting questions to ensure they are written at a level the respondents can comprehend; and/or (b) presenting the questions in audio form to augment the written form (e.g., audio computer-assisted self-administered interviewing (ACASI)).

Condition that causes method bias: Lack of experience thinking about the topic (e.g., Fiske and Kinder 1981; Schwarz, Hippler, and Noelle-Neumann 1992).
Mechanism: May impair a respondent's ability to answer because it: (a) hinders comprehension by reducing the respondent's ability to link key terms to relevant concepts, (b) makes information retrieval more difficult (less to retrieve, less practice retrieving), and (c) makes it harder to draw inferences needed to fill in gaps and to integrate material that is retrieved.
Potential remedies: Select respondents who have the necessary experience thinking about the issues of interest. Exercise caution when asking respondents about the motives for their behavior, the effects of situational factors on their behavior, or other things pertaining to cognitive processes that are unlikely to have been attended to or stored in short-term memory.

Condition that causes method bias: Complex or abstract questions (Doty and Glick 1998; Krosnick 1991).
Mechanism: May increase the difficulty of comprehending the meaning of the questions, retrieving relevant information, and making judgments.
Potential remedies: Avoid referring to vague concepts without providing clear examples; simplify complex or compound questions; and use language, vocabulary, and syntax that match the reading capabilities of the respondents.

Condition that causes method bias: Item ambiguity (Krosnick 1991; Podsakoff et al. 2003; Tourangeau, Rips, and Rasinski 2000).
Mechanism: May increase the difficulty of comprehending the questions, retrieving relevant information, and making judgments. Can also increase the sensitivity of answers to context effects.
Potential remedies: Use clear and concise language; avoid complicated syntax; define ambiguous or unfamiliar terms; and label all response options rather than just the end points.

Condition that causes method bias: Double-barreled questions (Bradburn, Sudman, and Wansink 2004; Krosnick 1991; Sudman and Bradburn 1982).
Mechanism: Make the retrieval task more demanding (Krosnick 1991) and introduce ambiguities into the response selection task by making it unclear whether respondents should: (a) answer only one part of the question, or (b) average their responses to both parts of the question.
Potential remedies: Avoid double-barreled questions.

Condition that causes method bias: Questions that rely on retrospective recall (Krosnick 1991).
Mechanism: May increase the difficulty of the retrieval process and the likelihood of satisficing because questions that require retrospective recall are more difficult to answer due to the relative remoteness of the relevant information in memory.
Potential remedies: Refocus the questions to ask about current states because this reduces the effort required for retrieval. Take steps to increase the respondent's motivation to expend the effort required to retrieve the information necessary to answer the question accurately by explaining why the questions are important and how accurate responses will have useful consequences for the respondent and/or the organization. Make it easier for respondents to recall the information necessary to answer the question accurately.

Condition that causes method bias: Auditory only presentation of item (telephone) versus written presentation of item (print or web).
Mechanism: Increases the memory load because respondents must keep the meaning of the question and all response options in short-term memory before responding.
Potential remedies: Simplify questions and/or response options. Present long, complex questions with many response options in written form or with visual aids.

Item ambiguity
Item ambiguity decreases the ability of respondents to generate an accurate response (MacKenzie, Podsakoff, and Podsakoff 2011; Podsakoff et al. 2003) and thus increases the likelihood that respondents will rely on stylistic response tendencies to generate a merely satisfactory answer (Krosnick 1991) and increases the sensitivity of their answers to measurement context effects (see Tourangeau, Rips, and Rasinski 2000). This can happen because item ambiguity impairs the respondent's ability to comprehend the meaning of the question, which makes it difficult for the respondent to generate an effective retrieval strategy, assess the completeness of what has been recalled, and fill in gaps in what is recalled. This problem can be addressed by using clear and concise language; avoiding complicated syntax; defining ambiguous or unfamiliar terms; and labeling all response options rather than just the end points.

Double-barreled items
Double-barreled questions are ones in which opinions about two subjects are joined together so that respondents must answer two questions with one answer (Bradburn, Sudman, and Wansink 2004; Krosnick 1991; MacKenzie, Podsakoff, and Podsakoff 2011; Podsakoff et al. 2003; Sudman and Bradburn 1982). Krosnick (1991, p. 221) notes that this type of question can make the retrieval task more demanding: "A question that asks respondents how much they like spinach requires only that cognitions about spinach be retrieved. But a question that asks whether an individual prefers spinach or turnips requires

that information about both vegetables be recalled. The more difficult the required retrieval task is, the more likely satisficing is to occur." In addition, because double-barreled questions decrease a respondent's ability to generate an accurate response, they may increase the likelihood that respondents will answer only one part of the question, or average their responses to both parts of the question. Finally, this type of question can also make it difficult to accurately map judgments onto response categories. The simple solution is to avoid double-barreled questions by splitting the question up into two separate questions.

Questions that rely on retrospective recall
Questions that require the retrieval of information about current states are easier to answer than questions that require retrospective recall because of the relative remoteness of the relevant information in memory (Krosnick 1991). The more difficult the retrieval process, the less complete the information retrieval, the less able the respondent is to fill in missing details and gaps in what is recalled, and the greater the motivation to satisfice. To compensate for this, one obvious strategy would be to refocus the questions to ask about current states because this reduces the effort required for retrieval. However, this may not always be possible. In these instances, another strategy would be to take steps to increase the respondent's motivation to expend the effort required to retrieve the information necessary to answer the question accurately (Cannell, Miller, and Oksenberg 1981), perhaps by explaining why the questions are important and how their accurate responses will have useful consequences for them and/or their organization. Yet another strategy would be to make it easier for respondents to recall the information necessary to answer the question accurately. Depending upon the nature of the question of interest, Schröder (2011, p. 15) notes three things that can be done to make this information easier for respondents to recall: (a) to facilitate top-down retrieval, specify large topics first and then move within these topics to the more specific events; (b) to facilitate temporal retrieval, start with the first (or the last) event and then move forward (or backward) chronologically; and (c) to facilitate parallel retrieval, ask about contemporaneous events all at the same time.

Auditory versus written presentation of items
The most common data collection methods used in marketing studies are: (a) auditory only presentation of items via telephone or face-to-face interviews or (b) written presentation of items by means of traditional pencil and paper self-completion questionnaires or computer-assisted self-completion questionnaires. These methods confound two key factors that potentially affect method bias. One is the modality through which the items are presented (auditory mode or visual mode). The other is who administers the questions (an interviewer or self-administration by the respondent). The former primarily affects the difficulty of the task of answering the question accurately, whereas the latter primarily affects a respondent's motivation to answer accurately. Consequently, we will discuss the issues related to the modality through which the items are presented in this section and the issues related to the presence of an interviewer in the next section.

The modality through which the items are presented (auditory or visual) can potentially influence method bias because it can affect the cognitive demands placed on the respondent. Compared to visual presentation, auditory only presentation of the items may increase the difficulty of the task of responding accurately because respondents must keep the meaning of the question and all response options in short-term memory before answering. In addition, an auditory only presentation does not permit the respondent to control the rate of information exchange, and no visual cues are present to reinforce the meaning of the items or the response options. Indeed, Tourangeau, Rips, and Rasinski (2000, pp. 292–293) have noted that, "Auditory presentation (without simultaneous visual display) of long or complicated questions may overtax the respondents' listening ability, reducing their comprehension. Moreover, when the question has numerous response options, it may be difficult for respondents to keep them all in mind . . . [and] When the questions are only read aloud, respondents have little control over the pace at which questions and response options are presented and may begin by considering the options that come at the end of the list." One way to diminish this problem would be to present the items in visual form in addition to auditory form (e.g., computer assisted self-completion questionnaire with audio), or present long, complex questions with many response options with visual aids. Beyond this, it would help to simplify the questions and/or response options to the extent possible.

Remedies for factors that decrease the motivation to respond accurately

To answer accurately, it is essential for respondents to have not only the ability to answer questions correctly, but also the motivation to do so. Consequently, factors that diminish the motivation of respondents are potential sources of method bias. Broadly speaking, many of these factors identified in previous research are dispositional characteristics of the respondent, self-referential characteristics, or characteristics of the measurement context (Table 3).

Low personal relevance of the issue
Both the Elaboration Likelihood Model (Petty and Cacioppo 1986) and the Heuristic Systematic Model (Chaiken, Liberman, and Eagly 1989) imply that when an issue is perceived to have little personal relevance to respondents, it may decrease their motivation to exert cognitive effort to answer the question and increase the desire to satisfice. Consequently, this may decrease their willingness to assess the completeness and accuracy of information retrieved, fill in gaps in what is recalled, and integrate the information retrieved. Similarly, Krosnick (1999) has noted that questions that are perceived to be unimportant or that are not expected to have useful consequences may undermine motivation and result in poorer comprehension, less thorough retrieval, less careful judgment and mapping of judgments onto response categories, and can lead to responding in a stylistic or nondifferentiated manner.

Table 3
Factors that increase method bias by decreasing the motivation to respond accurately.

Condition that causes method bias: Low personal relevance of the issue (Chaiken, Liberman, and Eagly 1989; Krosnick 1999; Petty and Cacioppo 1986).
Mechanism: May decrease a respondent's motivation to exert cognitive effort and result in: poorer comprehension; less thorough retrieval; and less careful judgment and mapping of judgments onto response categories.
Potential remedies: Explain to respondents why the questions are important and how their accurate responses will have useful consequences for them and/or their organization; and/or promise feedback to respondents to motivate them to respond more accurately so that they can gain greater self-understanding, enhance self-efficacy, and improve performance.

Condition that causes method bias: Low self-efficacy to provide a correct answer (Chaiken, Liberman, and Eagly 1989).
Mechanism: May decrease motivation to exert cognitive effort, which decreases a person's willingness to: assess the completeness and accuracy of information retrieved, fill in gaps in what is recalled, and trust his/her own inferences based on partial retrieval.
Potential remedies: Emphasize to respondents that it is their personal opinions that are important, and that only their personal experience or knowledge is required to answer the questions.

Condition that causes method bias: Low need for cognition (Cacioppo and Petty 1982).
Mechanism: May decrease motivation to exert cognitive effort and thereby diminish: (a) the thoroughness of information retrieval and integration processes, and (b) the filling in of gaps in what is recalled.
Potential remedies: Enhance motivation to exert cognitive effort by emphasizing the importance of the issues; reminding respondents of how research can benefit them or help the organization; or increasing personal relevance of the task.

Condition that causes method bias: Low need for self-expression, self-disclosure, or emotional catharsis (Krosnick 1999).
Mechanism: May decrease motivation to exert cognitive effort and thereby decrease (a) the thoroughness of information retrieval and (b) the filling in of gaps in what is recalled. As a result, these factors may cause people to respond to items carelessly, randomly, or nonpurposefully.
Potential remedies: Enhance the motivation for self-expression by explaining in the cover story or instructions that "we value your opinion," "we need your feedback," or that we want respondents to "tell us what you think," and so forth. Similarly, enhance willingness to self-disclose by emphasizing the personal benefits of the research to them (e.g., improved performance and increased self-awareness) in the instructions.

Condition that causes method bias: Low feelings of altruism (Krosnick 1999; Orne 1962; Viswanathan 2005).
Mechanism: May decrease intentions to exert cognitive effort on behalf of the researcher, which can decrease the thoroughness of information retrieval and the filling in of gaps in what is recalled.
Potential remedies: Explain how much the respondent's help is needed, indicating that others are depending upon the accuracy of the responses; suggest that no one else can provide the needed information (or "you are one of the few that can"); and/or remind them how research can improve the quality of life for others or help the organization.

Condition that causes method bias: Agreeableness (Costa and McCrae 1992; Knowles and Condon 1999).
Mechanism: Increases the tendency to uncritically endorse or acquiesce to statements, search for cues suggesting how to respond, and edit responses for acceptability. According to the dual process theory, acquiescence results from a premature truncation of the reconsideration stage of comprehension.
Potential remedies: Through instructions, stress the fact that the best way to help the researcher is to answer the questions as accurately as possible. Enhance motivation by emphasizing the importance of the issues; reminding respondents of how research can benefit them or help the organization; or increasing personal relevance of the task. In addition, measure acquiescence response style and control for it.

Condition that causes method bias: Impulsiveness (Couch and Keniston 1960; Messick 1991).
Mechanism: May: (a) impair comprehension by decreasing attention to questions and instructions; (b) diminish the tendency to assess the completeness and accuracy of information retrieved, fill in gaps in what is recalled, and integrate the information retrieved; and (c) result in carelessness in mapping judgments onto response categories.
Potential remedies: Stress the importance of conscientiousness and accuracy, and encourage respondents to carefully weigh the alternatives before responding.

Condition that causes method bias: Dogmatism, rigidity, or intolerance of ambiguity (Baumgartner and Steenkamp 2001; Hamilton 1968).
Mechanism: Dogmatism (or rigidity) may heighten feelings of certainty and thus increase willingness to: make an estimate based on partial retrieval; and/or draw inferences based on accessibility or to fill in gaps in what is recalled. Dogmatism (or intolerance for ambiguity) can cause people to view things as either black or white, thus increasing the likelihood that they will map judgments onto extreme response categories.
Potential remedies: Stress the importance of conscientiousness and accuracy, and encourage respondents to carefully weigh the alternatives before responding. Measure extreme response style and control for it.

Condition that causes method bias: Implicit theories (Lord et al. 1978; Podsakoff et al. 2003; Staw 1975).
Mechanism: May motivate respondents to edit their responses in a manner that is consistent with their theory.
Potential remedies: Introduce a temporal, proximal, or spatial separation; and/or obtain the information about the predictor and criterion variables from separate sources.

Condition that causes method bias: Repetitiveness of the items (Petty and Cacioppo 1986).
Mechanism: May decrease motivation to maintain the cognitive effort required to provide optimal answers and increase the tendency to respond in a nondifferentiated manner or stylistically.
Potential remedies: Increase motivation by minimizing the repetitiveness of the items, making the questions seem less repetitive by reversing some items (i.e., polar opposites, not negations), or changing the format.

Condition that causes method bias: Lengthy scales (Krosnick 1999).
Mechanism: May decrease motivation to maintain the cognitive effort required to provide optimal answers, and result in poorer comprehension, less thorough retrieval, less careful judgment and mapping of judgments onto response categories, and/or stylistic responding.
Potential remedies: Increase motivation by minimizing the length of the survey, simplifying the questions, making the questions seem less repetitive by reversing some items (i.e., polar opposites, not negations), or changing the format.

Condition that causes method bias: Forced participation (Brehm 1966).
Mechanism: May increase psychological reactance and consequently decrease the motivation to exert cognitive effort to generate accurate answers or to faithfully report those answers.
Potential remedies: Solicit participation by promising rewards rather than by threatening punishment. Treat participants in a respectful manner, show that you value their time, and express appreciation for their participation.

Condition that causes method bias: Presence of an interviewer (Bowling 2005).
Mechanism: May motivate respondents to edit their answers to make them more socially desirable to avoid any social consequences of expressing their true judgments.
Potential remedies: If appropriate, utilize a self-administered method of data collection (e.g., traditional paper and pencil or computer-assisted questionnaire). If this is inappropriate, assure respondents in the cover story or instructions that there are no right or wrong answers, that people have many different opinions about the issues addressed in the questionnaire, that their responses will only be used for research purposes, and that their individual responses will not be revealed to anyone else.

Condition that causes method bias: Source of the survey is disliked (Krosnick 1999).
Mechanism: May decrease: the desire to cooperate; willingness to exert the cognitive effort required to generate optimal answers; or motivation to faithfully report those answers.
Potential remedies: Treat participants in a respectful manner, show that you value their time, and express appreciation for their participation. If the dislike relates to an impersonal source of the survey, you can attempt to disguise the source.

Condition that causes method bias: Contexts that arouse suspicions (Baumgartner and Steenkamp 2001; Schmitt 1994).
Mechanism: May motivate respondents to conceal their true opinion by editing their responses. They might do this by using the middle scale category regardless of their true feelings, or by responding to items carelessly, randomly, or nonpurposefully.
Potential remedies: Suspicions may be mitigated by explaining how the information will be used, why the information is being requested, who will see the responses, and how the information will be kept secure. In addition, one could assure participants that their responses will be used only for research purposes, will be aggregated with the responses of others, and that no one in their organization will see their individual responses. Measure midpoint response style and control for it.

Condition that causes method bias: Measurement conditions that make the consequences of a response salient (see Paulhus 1984; Steenkamp, DeJong, and Baumgartner 2010).
Mechanism: May increase desire to edit answers in order to provide a socially acceptable response or to avoid undesirable consequences.
Potential remedies: Can be diminished by guaranteeing anonymity, telling respondents there are no right or wrong answers, and assuring them that people have different opinions about the issues addressed in the questionnaire.
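Several of the remedies in Table 3 call for measuring a response style and then controlling for it. One common way to operationalize these styles (in the spirit of Baumgartner and Steenkamp 2001, although the simplified scoring rules below are assumptions for illustration rather than their exact procedure) is to score each respondent on a set of heterogeneous, conceptually unrelated items:

    # Simple respondent-level response-style indices computed from a set of
    # heterogeneous items (columns of a pandas DataFrame of 1..7 ratings).
    # The indices can then be entered as control variables or as indicators
    # of latent response-style factors.
    import pandas as pd

    def response_style_indices(df, style_items, scale_min=1, scale_max=7):
        items = df[style_items]
        midpoint = (scale_min + scale_max) / 2
        return pd.DataFrame({
            "ARS": (items > midpoint).mean(axis=1),                            # acquiescence: share of agreements
            "ERS": ((items == scale_min) | (items == scale_max)).mean(axis=1),  # extremity: share of endpoint answers
            "MRS": (items == midpoint).mean(axis=1),                            # midpoint responding
        }, index=df.index)

The style items should be drawn from content domains unrelated to the focal constructs, so that agreement with them reflects the response style rather than substantive trait standing.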

To compensate for this, an attempt could be made to increase motivation by: (a) explaining to respondents why the questions are important and how their accurate responses will have useful consequences for them and/or their organization; or (b) promising feedback to respondents (where appropriate) so that they can gain greater self-understanding, enhance self-efficacy, and/or improve their performance.

Low self-efficacy to provide a correct answer
Chaiken, Liberman, and Eagly (1989) noted that a respondent's perceived self-efficacy to provide a correct answer can influence how much cognitive effort s/he is willing to exert to answer a question. This suggests that low perceived self-efficacy to answer a question can decrease motivation to exert cognitive effort, which can subsequently decrease a respondent's willingness to: (a) assess the completeness and accuracy of information retrieved, (b) fill in gaps in what is recalled, and (c) trust his/her own inferences based on partial retrieval. In addition, low perceived self-efficacy may also increase the likelihood that respondents will use the middle scale category regardless of their true feelings because they lack confidence in their responses. One remedy for this perceived lack of self-efficacy would be to emphasize to respondents that it is their personal opinions that are important and that only their personal experience or knowledge is required to answer the questions.

Low need for cognition
Cacioppo and Petty (1982) demonstrated that if need for cognition (i.e., chronic motivation to exert cognitive effort) is low, it is likely to decrease the thoroughness of information retrieval and integration processes and the extent to which the respondent attempts to fill in missing details and gaps in what is recalled.

More generally, it can decrease a respondent's motivation to exert the cognitive effort required to provide optimal answers and increase his/her desire to satisfice by responding stylistically or in a nondifferentiated manner. This chronically low level of motivation to exert cognitive effort might be temporarily compensated for by enhancing motivation through other means. This might be done by emphasizing the importance of the issues; reminding respondents of how research can benefit them or help their organization; or increasing the personal relevance of the task of answering the questions.

Low need for self-expression, self-disclosure, or emotional catharsis
Krosnick (1999) noted that these self-referential factors can decrease a respondent's motivation to exert the cognitive effort required to provide optimal answers. This is likely to decrease the thoroughness of information retrieval and the extent to which the respondent attempts to fill in missing details and gaps in what is recalled. Thus, these factors may have a tendency to cause people to respond to items carelessly, randomly, or nonpurposefully. To counter this tendency, the desire for self-expression or emotional catharsis may be enhanced by explaining in the cover story or instructions that "we value your opinion," "we need your feedback," or that "we want you to tell us what you think," and so forth. Similarly, to increase the willingness of respondents to self-disclose, the instructions could emphasize the personal benefits of the research to them (e.g., improved performance and increased self-awareness).

Low feelings of altruism
Low feelings of altruism toward the sponsor of the research project can decrease intentions to exert cognitive effort to help the sponsor, and thereby decrease the thoroughness of information retrieval and the respondent's willingness to try to fill in missing details and gaps in what is recalled. More generally, this may increase the respondent's tendency to satisfice by responding in a nondifferentiated manner or stylistically (Krosnick 1991, 1999). Feelings of altruism toward the sponsor of the research might be increased by explaining why the respondent's help is needed, indicating that others (who the respondent cares about) are depending upon the accuracy of the responses, or suggesting that no one else can provide the needed information (or the respondent is one of the few that can). It might also help to remind the respondents of how research can improve the quality of life for others and/or help the organization.

Agreeableness
Costa and McCrae (1992) have noted that agreeableness is a tendency to be compassionate and cooperative rather than suspicious and antagonistic toward others, and that agreeable people are especially concerned with maintaining or enhancing social harmony. Consequently, people high in agreeableness may have a tendency to uncritically endorse or acquiesce to statements, search for cues that suggest how they should respond, and edit their responses for acceptability. One way to compensate for this would be to stress the fact that the best way for respondents to help the researcher is to answer the questions as accurately as possible. This could be done through instructions that encourage respondents to "tell us what you honestly think" and that "we need your opinion." Another way to control this bias might be to measure the respondent's acquiescence response style and control for it using the procedures recommended by Baumgartner and Steenkamp (2001) and Weijters, Schillewaert, and Geuens (2008).

Impulsiveness
Impulsiveness causes respondents to react to questions quickly, with little reflection, and little monitoring of judgments. This trait has been linked to response bias in several studies (Couch and Keniston 1960; Messick 1991). Impulsiveness can: (a) impair comprehension by decreasing attention to questions and instructions; (b) diminish the tendency to assess the completeness and accuracy of information retrieved, fill in gaps in what is recalled, and integrate the information retrieved; and (c) result in carelessness in mapping judgments onto response categories. One way to offset this tendency might be to stress the importance of conscientiousness and accuracy in the instructions to respondents and/or to encourage them to carefully weigh the alternatives before responding.

Dogmatism, rigidity, or intolerance for ambiguity
Research (Baumgartner and Steenkamp 2001; Hamilton 1968) suggests that dogmatism, rigidity, and intolerance for ambiguity can produce biased responding for several reasons. First, rigidity and dogmatism can heighten a person's willingness to make judgments or fill in gaps based on only partial retrieval, and cause people to be more willing to draw inferences based on information accessibility, because people who are high in these traits tend to feel certain of everything. Second, dogmatism (or intolerance for ambiguity) can make people view things as either black or white, thus increasing the likelihood that they will map judgments onto extreme response categories. One way to manage the latter problem would be to measure extreme response style and control for it (Baumgartner and Steenkamp 2001). Managing the former problem is more difficult, but might be mitigated to some degree by stressing the importance of conscientiousness and accuracy in the instructions to respondents and/or by encouraging respondents to carefully weigh the alternatives before answering.

Implicit theories
If respondents have an implicit theory (Lord et al. 1978; Staw 1975) that two constructs are related, they may be motivated to fill in gaps in what is recalled in a manner that is consistent with their implicit theory, or to edit their responses in a manner that is consistent with their theory. Podsakoff et al. (2003) and Podsakoff, MacKenzie, and Podsakoff (2012) have argued that one way to diminish this tendency is to obtain the information about the two constructs linked by the implicit theory from separate sources, or introduce a temporal, psychological, or spatial separation between the measures of the two constructs linked by the implicit theory. Introducing a temporal separation between the measures of two constructs linked by an implicit theory decreases the likelihood that the answers to the first set of measures will be available in the respondent's short-term memory at the time s/he answers the second set of measures. Introducing a psychological separation between the measures of two constructs linked by an implicit theory decreases the diagnosticity of the answers to the first set of measures as cues to how to respond to the second set of measures (cf. Feldman and Lynch 1988). Finally, introducing a spatial separation between the measures of two constructs linked by an implicit theory can (under some circumstances) make the answers to the first set of measures physically unavailable at the time the respondent answers the second set of measures.
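Temporal separation usually requires a second measurement occasion, but the spatial (and, to some extent, psychological) separation just described can be built into a single questionnaire when it is assembled. The sketch below is one illustrative way to do this in Python; the item lists, buffer items, and function name are placeholders rather than anything prescribed by the article.

    # Interleave predictor and criterion items with unrelated buffer items so that
    # answers to one focal construct are less available, and less diagnostic, when
    # items for the other construct are answered.
    import random

    def disperse_items(predictor_items, criterion_items, buffer_items, seed=2012):
        rng = random.Random(seed)
        focal = list(predictor_items) + list(criterion_items)
        buffers = list(buffer_items)
        rng.shuffle(focal)
        rng.shuffle(buffers)

        questionnaire = []
        for item in focal:
            questionnaire.append(item)
            if buffers:
                questionnaire.append(buffers.pop())  # place a buffer item after each focal item
        questionnaire.extend(buffers)                # any leftover buffer items go at the end
        return questionnaire

Varying the seed across respondents randomizes the order for each person, which also helps rule out explanations based on a single fixed item order.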

Repetitiveness
According to Petty and Cacioppo's (1986) Elaboration Likelihood Model, excessive repetition of a message decreases the motivation to centrally process it. This suggests that the repetitiveness of the items on a questionnaire may decrease a respondent's motivation to maintain the cognitive effort required to provide optimal answers and increase the desire to satisfice by responding in a nondifferentiated manner or stylistically. The obvious solution is to maintain the respondent's motivation to provide accurate answers by minimizing the repetitiveness of the items or making the questions seem less repetitive by reversing some items (i.e., polar opposites but not negations), or varying the scale format.

Lengthy scales
Several researchers have noted that a seemingly unending stream of questions may cause respondents to become fatigued or irritated (Clancy and Wachsler 1971; Schuman and Presser 1981). This may decrease a respondent's motivation to maintain the cognitive effort required to provide optimal answers and increase the desire to satisfice (e.g., by mechanically acquiescing to items). This may also result in poorer comprehension due to careless reading of items, less thorough retrieval, less careful judgment and mapping of judgments onto response categories, and can lead to responding in a stylistic or nondifferentiated manner. This can be mitigated by increasing the motivation to provide accurate answers by shortening the length of the survey, simplifying the questions, making the questions seem less repetitive by reversing some items (i.e., polar opposites but not negations), or changing the format.

Forced participation
Compelling respondents to participate in the survey (e.g., to fulfill a course requirement or a request by top management) can increase psychological reactance (Brehm 1966) and the desire to rebel. This may subsequently decrease a respondent's motivation to exert the cognitive effort required to generate optimal answers, or decrease his/her motivation to faithfully report those answers. Consequently, the respondent may be less likely to cooperate by: carefully attending to the questions and instructions; thoroughly retrieving relevant information from memory and filling in gaps in what is recalled; and conscientiously mapping judgments onto response categories. Perhaps the most effective way to avoid this problem would be to solicit participation by promising rewards rather than by threatening punishment. Beyond this, the tendency to rebel might also be diminished by treating participants in a respectful manner, showing that you value their time (i.e., by not wasting it), and expressing appreciation for their participation.

Presence of an interviewer
The mere presence of an interviewer may motivate respondents to edit their answers to make them more socially desirable to avoid any social consequences of expressing their true judgments. Indeed, Bowling's (2005) review of the empirical literature demonstrates that social desirability bias and yea-saying bias are higher, and willingness to disclose sensitive information is lower, for interviews (face-to-face or telephone) than for self-administered questionnaires (paper and pencil or computer-assisted). Perhaps the most obvious solution to this potential problem would be to avoid the use of an interviewer by utilizing a self-administered method of data collection (e.g., traditional paper and pencil or computer-assisted questionnaire). However, in those cases where this is infeasible, or undesirable, one could partially diffuse this issue by assuring respondents in the cover story or instructions that there are no right or wrong answers, that people have many different opinions about the issues addressed in the questionnaire, that their responses will only be used for research purposes, and that their individual responses will not be revealed to anyone else.

Source of survey is disliked
If the interviewer, experimenter, or sponsor of the survey is disliked by respondents it may decrease the respondents' desire to cooperate, which may decrease their motivation to exert the cognitive effort required to generate optimal answers or faithfully report those answers. Consequently, respondents may be less likely to expend the cognitive effort necessary to carefully attend to the questions and instructions, retrieve relevant information from memory, fill in gaps in what is recalled, and select the appropriate response categories. The remedy for this depends upon the source that is disliked by respondents. If the dislike is for an interviewer or experimenter, steps could be taken to establish rapport with the respondents. If the dislike relates to the sponsor of the survey, an attempt could be made to disguise the source. In addition, it is always important to treat participants in a respectful manner, show that you value their time (i.e., by not wasting it), and express appreciation for their participation.

Contexts that arouse suspicions
Several researchers (Baumgartner and Steenkamp 2001; Schmitt 1994) have noted that when respondents are suspicious about how the data will be used, they may be motivated to conceal their true opinions by editing their responses. They might do this by using the middle scale category regardless of their true feelings, or perhaps by responding to items carelessly, randomly, or nonpurposefully. These suspicions may be mitigated by explaining (to the extent possible) why the information is being requested, how the information will be used, and how the information will be kept secure. Beyond this, if possible, it would also be beneficial to assure participants that their responses will be used only for research purposes, will be aggregated with the
Table 4
Factors that increase method bias by decreasing the difficulty of satisficing.

Condition that causes method bias: Common scale attributes (e.g., the same scale types, scale points, and anchor labels).
Mechanism: May heighten the perceived similarity and redundancy of items and encourage respondents to be less thorough in item comprehension, memory retrieval, and judgment; also makes it easier to edit answers for consistency, which decreases the difficulty of satisficing.
Potential remedies: Explain to respondents that although some questions may seem similar, each is unique in important ways, and encourage them to read each item carefully. Vary the scale types and anchor labels and/or reverse the wording of some of the items to disrupt undesirable response patterns.

Condition that causes method bias: Grouping related items together.
Mechanism: May heighten the perceived similarity and redundancy of items and encourage respondents to be less thorough in item comprehension, memory retrieval, and judgment; also makes it easier to edit answers for consistency, which decreases the difficulty of satisficing, and makes it easier to use previously recalled information and prior answers to respond to the current question.
Potential remedies: Disperse similar items throughout the questionnaire, separated by unrelated buffer items.

Condition that causes method bias: The availability of answers to previous questions (physically or in memory).
Mechanism: May make it easier to (a) use previously recalled information and answers to respond to the current question (i.e., judgment referral) and (b) provide answers that are consistent with each other or with an implicit theory.
Potential remedies: Memory availability can be diminished by introducing a temporal separation between the measurement of the predictor and criterion variables, and the diagnosticity of the previous answers (as a cue to how to respond) can be diminished by introducing a psychological separation. Physical availability can be diminished by restricting access to previous answers.
Measurement conditions that make the consequences of a response salient

When the social or professional consequences of a respondent's answers are potentially serious, and the measurement conditions make these consequences salient (see Paulhus 1984; Steenkamp, De Jong, and Baumgartner 2010), respondents may be motivated to edit their answers in order to provide a socially acceptable response, rather than saying what they really think. This tendency to respond in a socially desirable manner may be diminished by guaranteeing anonymity, telling respondents in the cover story or instructions that there are no right or wrong answers, and assuring respondents that people have different opinions about the issues addressed in the questionnaire.
Remedies for factors that decrease the difficulty of satisficing

The factors above decrease the likelihood that a respondent will answer accurately by: decreasing the motivation to respond accurately, undermining the respondent's capabilities, or making the task of responding accurately more difficult. In this section we shift our focus to discuss characteristics of items that encourage satisficing by making the task of generating alternative answers easier, rather than by making the task of answering accurately harder (Table 4).
Common scale attributes

It seems plausible that some forms of satisficing would be easier to implement if the measures share a common scale type, have the same number of scale points and common anchor labels, and do not have any reverse-worded items. These common characteristics make it easier to edit answers for consistency, which decreases the difficulty of satisficing. These characteristics also heighten the perceived similarity and redundancy of items, which may encourage respondents to be less thorough in item comprehension, memory retrieval, and judgment. These detrimental tendencies can be diminished by varying the scale types and anchor labels and/or reversing the wording of some of the items to balance the positively and negatively worded items (Baumgartner and Steenkamp 2001; Weijters and Baumgartner 2012). However, the latter is only a good idea if it can be done without altering the content validity or conceptual meaning of the scale, and if the reverse-worded items are not confusing to respondents.
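To make the reverse-scoring side of this remedy concrete, the short sketch below shows how reverse-worded items might be recoded before items are averaged into a scale score. It is only a minimal illustration under assumed conditions: the item names, the 7-point response format, and the use of pandas are assumptions introduced here and are not taken from the article.

    import pandas as pd

    # Hypothetical responses to a 7-point Likert scale; item names are illustrative only.
    df = pd.DataFrame({
        "sat_1": [6, 7, 5, 4],
        "sat_2_rev": [2, 1, 3, 4],   # reverse-worded (polar opposite) item
        "sat_3": [7, 6, 5, 4],
    })

    SCALE_MIN, SCALE_MAX = 1, 7
    REVERSED_ITEMS = ["sat_2_rev"]

    # Recode reverse-worded items so that a high score means the same thing on
    # every item, then average the items into a scale score.
    df[REVERSED_ITEMS] = (SCALE_MIN + SCALE_MAX) - df[REVERSED_ITEMS]
    df["sat_scale"] = df[["sat_1", "sat_2_rev", "sat_3"]].mean(axis=1)

    print(df)

The same recoding logic applies whatever software is used; the caveat in the text still holds, namely that reverse wording only helps if it does not change the conceptual meaning of the items or confuse respondents.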
Grouping related items together

It is not uncommon for researchers to group items measuring the same or similar constructs together on a questionnaire. Although this has the benefit of diminishing the cognitive demands of the task, it may also heighten perceptions of the similarity and redundancy of the items, and consequently encourage respondents to be less thorough in item comprehension, memory retrieval, and judgment. Grouping similar items together also makes it easier to use previously recalled information and prior answers to respond to the current question and to edit answers for consistency, which decreases the difficulty of satisficing. The obvious solution to this problem is to disperse similar items throughout the questionnaire, separated by unrelated buffer items (see Weijters, Schillewaert, and Geuens 2008).
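As a concrete sketch of this dispersal idea, the code below interleaves items from different constructs and drops an unrelated buffer item in after each pass, so that no two items from the same construct appear back to back. The construct names, item labels, and buffer items are illustrative assumptions, not material from the article.

    import random
    from itertools import zip_longest

    # Hypothetical item pools for three constructs, plus unrelated buffer items.
    constructs = {
        "job_satisfaction": ["sat_1", "sat_2", "sat_3"],
        "commitment":       ["com_1", "com_2", "com_3"],
        "turnover_intent":  ["int_1", "int_2", "int_3"],
    }
    buffers = ["buf_1", "buf_2", "buf_3"]

    random.seed(42)  # fixed seed only so the example is reproducible
    pools = [random.sample(items, len(items)) for items in constructs.values()]

    questionnaire_order = []
    for row in zip_longest(*pools):                     # one item from each construct per pass
        questionnaire_order.extend(item for item in row if item is not None)
        if buffers:
            questionnaire_order.append(buffers.pop(0))  # separate passes with a buffer item

    print(questionnaire_order)

In practice the order would typically be randomized per respondent by the survey software; the point of the sketch is simply that related items end up separated by unrelated material.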
Availability of answers to previous questions

It is easier for respondents to satisfice by providing answers that are consistent with each other or with an implicit theory if the answers to previous questions are readily available, either physically or in memory, at the time of answering a later question (cf. Feldman and Lynch 1988). Answers are likely to be physically available in a self-administered paper and pencil questionnaire and may be (but need not be) for online questionnaires. This can be remedied by using computer-presented questionnaires that prevent subjects from scrolling backwards to consult previous answers. In addition, as noted above, availability is also greater when questions are grouped together in close proximity by construct on the questionnaire. This can be avoided by separating items on the questionnaire to decrease the availability of previous answers. More generally, whenever the predictor and criterion variables are measured at the same point in time by means of a single questionnaire, the likelihood that the answers to previous questions will be available is greater. In these instances, availability might be diminished by introducing a temporal separation between the measurement of the predictor and criterion variables, and/or the diagnosticity of the previous answers (as a cue to how to respond to subsequent questions) might be diminished by introducing a psychological separation.
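The following minimal sketch illustrates the "no scrolling backwards" idea in the simplest possible way: items are presented one at a time, earlier answers are cleared from the screen, and no back option is offered. The item wording, the console-based presentation, and the 7-point response format are assumptions made here for illustration; an actual study would normally rely on the paging and back-button settings of a web survey platform rather than a script like this.

    import os
    import random

    # Hypothetical items: a predictor item, a buffer item, and a criterion item.
    items = [
        ("pred_1", "My supervisor gives me clear feedback."),
        ("buf_1",  "I follow local sports news."),
        ("crit_1", "I am satisfied with my job overall."),
    ]

    random.shuffle(items)          # vary item order across respondents
    answers = {}

    for code, wording in items:
        # Clear the screen so previous answers are no longer physically available.
        os.system("cls" if os.name == "nt" else "clear")
        print(wording)
        answers[code] = input("1 = strongly disagree ... 7 = strongly agree: ")
        # No 'back' command is offered, so respondents cannot consult earlier answers.

    print("Responses recorded:", answers)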
Conclusion

The purpose of this article has been to discuss the causes of method biases, the mechanisms through which they produce their biasing effects, and how to minimize or prevent these effects by implementing appropriate procedural controls. More specifically, we identified a series of factors that may cause method biases by undermining the capabilities of the respondent, making the task of responding accurately more difficult, decreasing the motivation to respond accurately, and making it easier for respondents to satisfice. In addition, we tried to advance our understanding of the mechanisms through which these factors produce their biasing effects by discussing how they influence a respondent's desire to provide optimal versus satisfactory answers to the questions, and how this subsequently affects question comprehension, memory retrieval and inference processes, the mapping of judgments onto response categories, and the editing of responses. Finally, based on this understanding of the mechanisms involved, we tried to propose procedural remedies that would counterbalance or offset each of these specific effects. For example, to increase the likelihood that respondents are able to answer accurately it is important to: align the capabilities of respondents with the difficulty of the task; select respondents who have the necessary experience thinking about the issues of interest; avoid referring to vague concepts; and use clear and concise language. It is also important to enhance the motivation of respondents to answer accurately, perhaps by: providing an explanation of why the questions are important and have useful consequences for the respondent, organization, and so forth; explaining why their answers are important; assuring respondents of the confidentiality of their answers; encouraging respondents to carefully weigh the alternatives before responding; and minimizing the length and repetitiveness of the questionnaire to the extent possible. It also may be a good idea to make it more difficult for respondents to satisfice by varying the scale types and anchor labels when appropriate, and by introducing a temporal, psychological, or spatial separation between items measuring key constructs when possible. Our hope is that this discussion will help researchers anticipate when method bias is likely to be a problem and provide ideas about how to avoid it through the careful design of a study.

A final cautionary note

That being said, it is important to note that all research involves inevitable tradeoffs and it is impossible to design a study that completely rules out all possibility of method bias. This implies that the procedural remedies discussed in this research should be viewed as a much needed complement to, but not a substitute for, the statistical remedies that have already been developed (Bagozzi 1984; Baumgartner and Steenkamp 2001; Podsakoff et al. 2003; Podsakoff, MacKenzie, and Podsakoff 2012; Weijters, Schillewaert, and Geuens 2008; Williams, Hartman, and Cavazotte 2010). This also implies that some method bias may be present even in a well-designed study. Consequently, the researcher's goal should be to thoughtfully assess the research setting, try to identify the most likely causes of method bias, and take concrete steps to mitigate these problems in order to reduce the plausibility of method bias as a rival explanation for the relationships observed in a study. Indeed, as we have said before (Podsakoff et al. 2003, p. 899),

"The key point to remember is that the procedural and statistical remedies selected should be tailored to fit the specific research question at hand. There is no single best method for handling the problem of common method variance because it depends on what the sources of method variance are in the study and the feasibility of the remedies that are available."
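The statistical remedies cited above are developed in the referenced papers rather than here, but a small simulated sketch can illustrate the general logic of one simple member of that family: partialling a marker variable (one that should be theoretically unrelated to the substantive constructs) out of an observed correlation. The simulated data, the effect sizes, and the partial-correlation approach itself are assumptions introduced purely for illustration and are not taken from the article; the cited CFA-based marker and method-factor techniques are considerably more rigorous.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    method = rng.normal(size=n)                             # shared method influence (unobserved)
    x = 0.6 * rng.normal(size=n) + 0.4 * method             # predictor measure
    y = 0.3 * x + 0.4 * method + 0.6 * rng.normal(size=n)   # criterion measure
    marker = 0.6 * method + 0.8 * rng.normal(size=n)        # marker item: picks up method variance only

    def partial_corr(a, b, c):
        """Correlation between a and b after removing the linear effect of c."""
        resid_a = a - np.polyval(np.polyfit(c, a, 1), c)
        resid_b = b - np.polyval(np.polyfit(c, b, 1), c)
        return np.corrcoef(resid_a, resid_b)[0, 1]

    print("observed r(x, y):              ", round(float(np.corrcoef(x, y)[0, 1]), 3))
    print("r(x, y) partialling the marker:", round(float(partial_corr(x, y, marker)), 3))

In this simulation the method-inflated correlation shrinks toward the underlying structural relationship once the marker is partialled out, which is the intuition behind the marker-variable and method-factor approaches cited above.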
In addition, we would also like to make it clear that nothing we have said here should be construed as a general indictment of survey research. Indeed, many of the sources of method bias identified in this research are also present in experimental investigations of mediating effects models, and in research based on archival datasets gathered through single-source surveys (e.g., including official government statistics). Moreover, we believe that survey research is an essential complement to experimental research for two reasons. First, survey research is needed to examine the extent to which causal relationships observed in experimental studies hold over variations in persons, settings, treatment manipulations, and outcome variables. This is important because there are some phenomena that consistently show up in a lab setting that do not generalize. Second, there are some important phenomena that can only be studied in field settings because: they cannot be effectively or ethically manipulated in a lab setting; the real-world setting to which one wishes to generalize is complex and cannot be adequately replicated in a lab setting; some subject populations may be unwilling to participate in a lab study; and/or some important outcome variables (e.g., sales, profitability, loyalty, repeat purchase, satisfaction based on product usage, etc.) are difficult to capture in lab settings.

Therefore, our objective is not to discourage the use of survey research methods in marketing, but rather to encourage researchers to think carefully about how to control for method biases in the design of their studies. Moreover, we caution researchers not to throw the baby out with the bathwater by rejecting survey research simply because method bias potentially provides a rival explanation for the relationships observed. The goal of this paper has been to suggest some things researchers can do to make this explanation less plausible.

References

Arndt, Johan and Edgar Crane (1975), Response Bias, Yea-Saying, and the Double Negative, Journal of Marketing Research, 12 (May), 218-20.
Arnold, Mark J. and Kristy E. Reynolds (2009), Affect and Retail Shopping Behavior: Understanding the Role of Mood Regulation and Regulatory Focus, Journal of Retailing, 85 (3), 308-20.
Babakus, Emin, Ugur Yavas and Nicholas J. Ashill (2009), The Role of Customer Orientation as a Moderator of the Job Demand-Burnout-Performance Relationship: A Surface-Level Trait Perspective, Journal of Retailing, 85 (4), 480-92.
Bagozzi, Richard P. (1980), Causal Models in Marketing, New York: John Wiley.
Bagozzi, Richard P. (1984), A Prospectus for Theory Construction in Marketing, Journal of Marketing, 48 (1), 11-29.
Baumgartner, Hans and Jan-Benedict E.M. Steenkamp (2001), Response Styles in Marketing Research: A Cross-National Investigation, Journal of Marketing Research, 38 (2), 143-56.
Bollen, Kenneth A. (1989), Structural Equations with Latent Variables, New York, NY: Wiley.
Bowling, Ann (2005), Mode of Questionnaire Administration Can Have Serious Effects on Data Quality, Journal of Public Health, 27 (3), 281-91.
Bradburn, Norman M., Seymour Sudman and Brian Wansink (2004), Asking Questions: The Definitive Guide to Questionnaire Design for Market Research, Political Polls, and Social and Health Questionnaires, San Francisco, CA: Jossey-Bass.
Brannick, Michael T., David Chan, James M. Conway, Charles E. Lance and Paul E. Spector (2010), What Is Method Variance and How Can We Cope with It? A Panel Discussion, Organizational Research Methods, 13 (3), 407-20.
Brehm, John W. (1966), A Theory of Psychological Reactance, New York, NY: Academic Press.
Buckley, M. Ronald, James A. Cote and S. Mark Comstock (1990), Measurement Errors in the Behavioral Sciences: The Case of Personality/Attitude Research, Educational and Psychological Measurement, 50 (September), 447-74.
Cacioppo, John T. and Richard E. Petty (1982), The Need for Cognition, Journal of Personality and Social Psychology, 42 (1), 116-31.
Campbell, Donald T. and Donald W. Fiske (1959), Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix, Psychological Bulletin, 56 (2), 81-105.
Cannell, Charles F., Peter V. Miller and Lois F. Oksenberg (1981), Research on Interviewing Techniques, in Sociological Methodology, Vol. 11, Leinhardt Samuel, ed. San Francisco, CA: Jossey-Bass, 389-437.
Chaiken, Shelley, Akiva Liberman and Alice H. Eagly (1989), Heuristic and Systematic Processing Within and Beyond the Persuasion Context, in Unintended Thought: Limits of Awareness, Intention, and Control, Uleman James S. and Bargh John A., eds. New York, NY: Guilford, 212-52.
Clancy, Kevin J. and Robert A. Wachsler (1971), Positional Effects in Shared-Cost Surveys, Public Opinion Quarterly, 35, 258-65.
Couch, Arthur and Kenneth Keniston (1960), Yeasayers and Naysayers: Agreeing Response Set as a Personality Variable, Journal of Abnormal and Social Psychology, 60 (2), 151-72.
Costa, Paul T. and Robert R. McCrae (1992), NEO PI-R Professional Manual, Odessa, FL: Psychological Assessment Resources, Inc.
Cote, James A. and Ronald Buckley (1987), Estimating Trait, Method, and Error Variance: Generalizing Across 70 Construct Validation Studies, Journal of Marketing Research, 24 (3), 315-8.
Cote, James A. and Ronald Buckley (1988), Measurement Error and Theory Testing in Consumer Research: An Illustration of the Importance of Construct Validation, Journal of Consumer Research, 14 (4), 579-82.
Dickinson, John R. and Eric Kirzner (1985), Questionnaire Item Omission as a Function of Within-Group Question Position, Journal of Business Research, 13 (1), 71-5.
Doty, D. Harold and William H. Glick (1998), Common Methods Bias: Does Common Methods Variance Really Bias Results?, Organizational Research Methods, 1, 374-406.
Elliott, Lois L. (1961), Effects of Item Construction and Respondent Aptitude on Response Acquiescence, Educational and Psychological Measurement, 21 (2), 405-15.
Ericsson, K. Anders and Herbert A. Simon (1980), Verbal Reports as Data, Psychological Review, 87 (3), 215-57.
Feldman, Jack M. and John G. Lynch (1988), Self-Generated Validity and Other Effects of Measurement on Belief, Attitude, Intention, and Behavior, Journal of Applied Psychology, 73 (3), 421-35.
Ferber, Robert (1966), Item Nonresponse in a Consumer Survey, Public Opinion Quarterly, 30 (3), 399-415.
Fiske, Donald W. (1982), Convergent-Discriminant Validation in Measurements and Research Strategies, in Forms of Validity in Research, Brinberg David and Kidder Louise H., eds. San Francisco, CA: Jossey-Bass, 77-92.
Fiske, Susan T. and Donald R. Kinder (1981), Involvement, Expertise, and Schema Use: Evidence from Political Cognition, in Personality, Cognition and Social Interaction, Cantor Nancy and Kihlstrom John, eds. Hillsdale, NJ: Erlbaum, 171-90.
Francis, Joe D. and Lawrence Busch (1975), What We Don't Know About "I Don't Knows", Public Opinion Quarterly, 39 (2), 207-18.
Grace, Debra and Scott Weaven (2011), An Empirical Analysis of Franchisee Value-in-Use, Investment Risk and Relational Satisfaction, Journal of Retailing, 87 (3), 366-80.
Greenleaf, Eric A. (1992), Improving Rating Scale Measures by Detecting and Correcting Bias Components in Some Response Styles, Journal of Marketing Research, 29 (May), 176-88.
Hamilton, David L. (1968), Personality Attributes Associated with Extreme Response Style, Psychological Bulletin, 69 (March), 192-203.
Hansen, Robert A. (1980), A Self-Perception Interpretation of the Effect of Monetary and Nonmonetary Incentives on Mail Survey Respondent Behavior, Journal of Marketing Research, 17 (1), 77-83.
Harris, Michael M. and Amy Bladen (1994), Wording Effects in the Measurement of Role Conflict and Role Ambiguity: A Multitrait-Multimethod Analysis, Journal of Management, 20 (4), 887-901.
Herzog, A. Regula and Jerald G. Bachman (1981), Effects of Questionnaire Length on Response Quality, Public Opinion Quarterly, 45 (4), 549-59.
Jackson, Douglas N. (1959), Cognitive Energy Level, Acquiescence, and Authoritarianism, Journal of Social Psychology, 49 (1), 65-9.
Knowles, Eric S. and Christopher A. Condon (1999), Why People Say Yes: A Dual-Process Theory of Acquiescence, Journal of Personality and Social Psychology, 77 (2), 379-86.
Kraut, Allen I., Alan D. Wolfson and Alan Rothenberg (1975), Some Effects of Position on Opinion Survey Items, Journal of Applied Psychology, 60 (6), 774-6.
Krosnick, Jon A. (1991), Response Strategies for Coping with the Cognitive Demands of Attitude Measures in Surveys, Applied Cognitive Psychology, 5 (3), 213-36.
Krosnick, Jon A. (1999), Survey Research, Annual Review of Psychology, 50, 537-67.
Krosnick, Jon A. and Duane F. Alwin (1987), An Evaluation of a Cognitive Theory of Response-Order Effects in Survey Measurement, Public Opinion Quarterly, 51 (2), 201-19.
Krosnick, Jon A. and Duane F. Alwin (1988), A Test of the Form-Resistant Correlation Hypothesis: Ratings, Rankings, and the Measurement of Values, Public Opinion Quarterly, 52 (4), 526-38.
Krosnick, Jon A. and Michael A. Milburn (1990), Psychological Determinants of Political Opinionation, Social Cognition, 8 (1), 49-72.
Lance, Charles E., Lisa E. Baranik, Abby R. Lau and Elizabeth A. Scharlau (2009), If It Ain't Trait It Must Be Method: (Mis)Application of the Multitrait-Multimethod Design in Organizational Research, in Statistical and Methodological Myths and Urban Legends: Doctrine, Verity, and Fable in the Organizational and Social Sciences, Lance Charles E. and Vandenberg Robert J., eds. New York: Routledge, 337-60.
Lance, Charles E., Bryan Dawson, David Birkelbach and Brian J. Hoffman (2010), Method Effects, Measurement Error, and Substantive Conclusions, Organizational Research Methods, 13 (3), 407-20.
Le, Huy, Frank L. Schmidt and Dan J. Putka (2009), The Multifaceted Nature of Measurement Artifacts and Its Implications for Estimating Construct-Level Relationships, Organizational Research Methods, 12 (1), 165-200.
Lord, Robert G., John F. Binning, Michael C. Rush and Jay C. Thomas (1978), The Effect of Performance Cues and Leader Behavior on Questionnaire Ratings of Leadership Behavior, Organizational Behavior and Human Decision Processes, 21 (1), 27-39.
MacKenzie, Scott B., Philip M. Podsakoff and Nathan P. Podsakoff (2011), Construct Measurement and Validity Assessment in Behavioral Research: Integrating New and Existing Techniques, MIS Quarterly, 35 (2), 293-334.
Mathews, C.O. (1927), The Effect of Position of Printed Response Words Upon Children's Answers to Questions in Two-Response Types of Tests, Journal of Educational Psychology, 18 (7), 445-57.
McClendon, McKee J. (1986), Response-Order Effects for Dichotomous Questions, Social Science Quarterly, 67 (1), 205-11.
McClendon, McKee J. (1991), Acquiescence and Recency Response-Order Effects in Interview Surveys, Sociological Methods & Research, 20 (1), 60-103.
McDaniel, Stephen W. and C.P. Rao (1980), The Effect of Monetary Inducement on Mailed Questionnaire Response Quality, Journal of Marketing Research, 17 (2), 265-8.
McGuire, William J. (1969), The Nature of Attitudes and Attitude Change, in The Handbook of Social Psychology, Lindzey Gardner and Aronson Eliot, eds. Reading, MA: Addison-Wesley, 136-314.
Messick, Samuel (1991), Psychology and Methodology of Response Styles, in Improving the Inquiry in Social Science: A Volume in Honor of Lee J. Cronbach, Snow Richard E. and Wiley David E., eds. Hillsdale, NJ: Lawrence Erlbaum, 161-200.
Nisbett, Richard E. and Timothy D. Wilson (1977), Telling More than We Can Know: Verbal Reports on Mental Processes, Psychological Review, 84 (3), 231-59.
Orne, Martin T. (1962), On the Social Psychology of the Psychological Experiment: With Particular Reference to Demand Characteristics and Their Implications, American Psychologist, 17 (11), 776-83.
Paulhus, Delbert L. (1984), Two-Component Models of Socially Desirable Responding, Journal of Personality and Social Psychology, 46 (3), 598-609.
Petty, Richard E. and John T. Cacioppo (1986), Communication and Persuasion: Central and Peripheral Routes to Attitude Change, New York, NY: Springer-Verlag.
Podsakoff, Philip M., Scott B. MacKenzie, Jeong-Yeon Lee and Nathan P. Podsakoff (2003), Common Method Biases in Behavioral Research: A Critical Review of the Literature and Recommended Remedies, Journal of Applied Psychology, 88 (5), 879-903.
Podsakoff, Philip M., Scott B. MacKenzie and Nathan P. Podsakoff (2012), Sources of Method Bias in Social Science Research and Recommendations on How to Control It, Annual Review of Psychology, 63, 539-69.
Rapoport, Ronald B. (1982), Sex Differences in Attitude Expression: A Generational Explanation, Public Opinion Quarterly, 46 (1), 86-96.
Schmitt, Neal (1994), Method Bias: The Importance of Theory and Measurement, Journal of Organizational Behavior, 15 (5), 393-8.
Schröder, Mathis (2011), Concepts and Topics, in Retrospective Data Collection in the Survey of Health, Ageing and Retirement in Europe: SHARELIFE Methodology, Schröder Mathis, ed. Mannheim, Germany: Mannheim Research Institute for the Economics of Ageing, 1-19.
Schuman, Howard and Stanley Presser (1981), Questions and Answers in Attitude Surveys, New York: Academic Press.
Schwarz, Norbert, Hans-Juergen Hippler and Elisabeth Noelle-Neumann (1992), A Cognitive Model of Response-Order Effects in Survey Measurement, in Context Effects in Social and Psychological Research, Schwarz Norbert and Sudman Seymour, eds. New York: Springer-Verlag.
Siemsen, Enno, Aleda Roth and Pedro Oliveira (2010), Common Method Bias in Regression Models with Linear, Quadratic, and Interaction Effects, Organizational Research Methods, 13 (3), 456-76.
Spralls, Samuel A. III, Shelby D. Hunt and James B. Wilcox (2011), Extranet Use and Building Relationship Capital in Interfirm Distribution Networks: The Role of Extranet Capability, Journal of Retailing, 87 (1), 59-74.
Staw, Barry M. (1975), Attribution of the Causes of Performance: A General Alternative Interpretation of Cross-Sectional Research on Organizations, Organizational Behavior and Human Decision Processes, 13 (3), 414-32.
Steenkamp, Jan-Benedict E.M., Martijn G. De Jong and Hans Baumgartner (2010), Socially Desirable Response Tendencies in Survey Research, Journal of Marketing Research, 47 (2), 199-214.
Sudman, Seymour and Norman M. Bradburn (1982), Asking Questions: A Practical Guide to Questionnaire Design, San Francisco, CA: Jossey-Bass.
Sudman, Seymour, Norman M. Bradburn and Norbert Schwarz (1996), Thinking About Answers: The Application of Cognitive Processes to Survey Methodology, San Francisco, CA: Jossey-Bass.
Tourangeau, Roger, Lance J. Rips and Kenneth A. Rasinski (2000), The Psychology of Survey Response, Cambridge, UK: Cambridge University Press.
Trott, D. Merilee and Douglas N. Jackson (1967), An Experimental Analysis of Acquiescence, Journal of Experimental Research in Personality, 2 (4), 278-88.
Viswanathan, Madhu (2005), Measurement Error and Research Design, Thousand Oaks, CA: Sage Publications.
Weijters, Bert and Hans Baumgartner (2012), Misresponse to Reversed and Negated Items in Surveys: A Review, Journal of Marketing Research, 49, http://dx.doi.org/10.1509/jmr.11.0368, ahead of print.
Weijters, Bert, Maggie Geuens and Niels Schillewaert (2009), The Proximity Effect: The Role of Inter-Item Distance on Reverse-Item Bias, International Journal of Marketing Research, 26 (1), 2-12.
Weijters, Bert, Niels Schillewaert and Maggie Geuens (2008), Assessing Response Styles Across Modes of Data Collection, Journal of the Academy of Marketing Science, 36 (3), 409-22.
Williams, Larry J., James A. Cote and M. Ronald Buckley (1989), Lack of Method Variance in Self-Reported Affect and Perceptions at Work: Reality or Artifact?, Journal of Applied Psychology, 74 (3), 462-8.
Williams, Larry J., Mark B. Gavin and Margaret L. Williams (1996), Measurement and Nonmeasurement Processes with Negative Affectivity and Employee Attitudes, Journal of Applied Psychology, 81 (1), 88-101.
Williams, Larry J., Nathan Hartman and Flavia Cavazotte (2010), Method Variance and Marker Variables: A Review and Comprehensive CFA Marker Technique, Organizational Research Methods, 13 (3), 477-514.
Wotruba, Thomas R. (1966), Monetary Inducements and Mail Questionnaire Response, Journal of Marketing Research, 3 (4), 398-400.