
International Journal of Market Research, Vol. 50, Issue 1

Web surveys versus other survey modes


A meta-analysis comparing response rates
Katja Lozar Manfreda
University of Ljubljana

Michael Bosnjak
Free University of Bozen-Bolzano

Jernej Berzelak
University of Ljubljana

Iris Haas
University of Mannheim

Vasja Vehovar
University of Ljubljana

One question that arises when discussing the usefulness of web-based surveys is whether they achieve the same response rates as other modes of collecting survey data. A common perception exists that, in general, web survey response rates are considerably lower. However, such unsystematic anecdotal evidence could be misleading and does not provide any useful quantitative estimate. Meta-analytic procedures synthesising controlled experimental mode comparisons could give accurate answers but, to the best of the authors' knowledge, such research syntheses have so far not been conducted. To overcome this gap, the authors have conducted a meta-analysis of 45 published and unpublished experimental comparisons between web and other survey modes. On average, web surveys yield an 11% lower response rate compared to other modes (the 95% confidence interval is confined by 15% and 6% to the disadvantage of the web mode). This response rate difference to the disadvantage of the web mode is systematically influenced by the sample recruitment base (a smaller difference for panel members as compared to one-time respondents), the solicitation mode chosen for web surveys (a greater difference for postal mail solicitation compared to email) and the number of contacts (the more contacts, the larger the difference in response rates between modes). No significant influence on response rate differences can be revealed for the type of mode web surveys are compared to, the type of target population, the type of sponsorship, whether or not incentives were offered, and the year the studies were conducted. Practical implications are discussed.

Received (in revised form): 21 December 2006

© 2008 The Market Research Society


Introduction
Web surveys are often discussed as a supplement, and sometimes as an alternative, to traditional survey modes, where response rates tend to decline (e.g. de Leeuw & de Heer 2002; Roster et al. 2004; Evans & Mathur 2005). However, web surveys also face the problem of non-response (Couper 2000; Vehovar et al. 2002). For example, over-surveying internet users may negatively affect their willingness to participate. The activity of direct marketers with their unsolicited email practices (spam) may also have a negative influence on responses to web surveys. In addition, the methodology of web surveys is probably still not sufficiently developed to take full advantage of the possibilities available, although extensive research efforts are being made in this direction (e.g. Frick et al. 1999; Tuten et al. 1999/2000; Dillman 2000).

In order for web surveys to become an established supplement, or even an alternative, to traditional survey modes, data collection methodologists should show that the data obtained by this mode have the same or even higher quality than data from already established modes. While there are several indicators of survey data quality, here we limit the discussion to response rates as an indicator of non-response error (e.g. Groves 1989).

In the survey and marketing research industry, a common perception exists that response rates for web surveys are lower than in traditional survey modes. However, this perception is often speculative, theoretical or based on limited evidence. Most anecdotal reviews provide ranges of response rates for some web surveys and other survey modes without pointing to experimental studies allowing for stronger conclusions (e.g. MacElroy 2000; Knapp & Heidingsfelder 2001; Schonlau et al. 2002, pp. 20, 95; Braithwaite et al. 2003; Pineau & Slotwiner 2004, p. 3). Some other authors have reviewed several experimental studies; however, their reviews were limited to vote-counting methods (e.g. McNeish 2001; Tuten et al. 2002; Truell 2003), or they also included studies not using comparable samples (e.g. McNeish 2001).

Several individual experimental studies comparing the response rates of web surveys with another (or several other) survey mode(s) have already been reported in the literature. However, no studies are available that systematically synthesise these results. Individual experimental studies are often limited by being specific to a certain target population, survey topic or implementation procedure. More seriously, some primary studies suggest contradictory conclusions: while some authors report substantially higher response rates for web surveys compared to another mode (e.g. Wygant & Lindorf 1999; Cobanoglu et al. 2001), other authors report the opposite (e.g. Vehovar et al. 2001; Fricker et al. 2003).

A need thus exists for a more powerful meta-analytic approach that quantitatively synthesises the available studies. Such an approach can show, at an aggregate level, whether response rates for web surveys actually differ from those of other survey modes and, if so, provide a quantitative estimate of the difference. In addition, it can help practitioners decide in which situations a web survey mode is expected to yield a lower, equivalent or even better response rate than some other survey mode. Given such evidence, an informed decision about the appropriate survey mode, balancing response rates and costs, can be made.

The goal of this study is thus to explore, with the aid of a systematic meta-analytic approach, whether response rates in web surveys are actually lower than response rates for other survey modes. If systematic differences are revealed and quantified, the moderators of the magnitude of these differences will be investigated.

Literature overview: response rate differences between web surveys and other survey modes
In the literature we find theoretical discussions unanimously expecting lower response rates for web surveys compared to other survey modes. The reasons provided can essentially be grouped into two broad categories. First, authors have put forward reasons supporting the notion of web surveys being inferior in terms of response rates in general, regardless of the mode to which they are compared. Second, web survey response rates are discussed in the light of a specific comparison mode. Both avenues, providing reasons for expected response rate differences to the disadvantage of the web relative to other survey modes, will now briefly be sketched.

In the debate on why a web survey would be expected to give lower response rates than alternative survey modes in general, security and privacy concerns associated with the internet are frequently stressed (e.g. Vehovar et al. 2001; Sax et al. 2003). Respondents tend to be anxious about their data being transferred via the internet and may consequently be reluctant to participate in web surveys. Another reason found in the literature is the limited possibility of employing methods and procedures to increase response rates in web-based surveys, and the lack of new techniques taking advantage of the web mode (Kwak & Radler 1999; Bosnjak & Tuten 2003; Kaplowitz et al. 2004; Tuten et al. 2004).

For instance, Bosnjak and Tuten (2003) show that prepaid monetary incentives, one of the most effective techniques in traditional contexts because they invoke the reciprocity norm that leads people to comply with survey participation requests (Groves et al. 1992; Dillman 2000), apparently do not work for web surveys if the incentive is transferred electronically (e.g. via PayPal). Monetary tokens of appreciation must be tangible to be effective in web survey contexts (Birnholtz et al. 2004), a restriction that substantially pushes up the costs of conducting surveys via the internet and thus prevents most marketing research firms from employing this strategy. In essence, the measures undertaken to decrease non-response rates in web surveys are probably still not sufficiently developed, despite a decade of intensive use of this mode in the survey and marketing research industry. Research efforts are being made in this direction; however, they cannot easily improve on the state-of-the-art measures that have proved effective for other survey modes over (at least) the last five decades.

Limited web literacy among certain segments of internet users, especially a lack of knowledge of how to access and adequately fill out a web-based survey, is also mentioned in the literature (e.g. Dillman 2000; Fraze et al. 2002; Miller et al. 2002; Grigorian et al. 2004). Similarly, some individuals may not use a computer very often and may therefore be less likely to respond to a web survey (Wygant & Lindorf 1999). Both limited web literacy and low-frequency use of the internet are associated with another reason mentioned in the literature, namely the increased burden of responding through non-traditional methods (Bason 2000; Lozar Manfreda et al. 2001; Vehovar et al. 2001). Last, but not least, technical limitations associated with the web mode are listed, such as software incompatibilities, misrepresentation of the visual materials used (e.g. scales and other visual design elements), and long or irregular loading times (Miller et al. 2002; Knapp & Kirk 2003; Hayslett & Wildemuth 2004).

Turning to the discussion of web survey response rates in the light of a specific comparison mode, we limit our discussion to the two survey modes with which web surveys are most often compared: web vs mail surveys and web vs telephone surveys. In comparison to mail surveys, web surveys may yield lower response rates for the following basic reasons. While a paper-based questionnaire is likely to remain on a respondent's desk and act as a continuous reminder, this is not the case with web questionnaires, especially those with an email invitation. Overlooking the invitation to participate in a survey is more likely for web surveys with an email invitation than for traditional mail surveys (e.g. Crawford et al. 2001).

Further, email invitations are more likely to be perceived as spam and as less legitimate (e.g. due to the ease of falsifying the identity of researchers on the web), which ultimately translates into lower response rates (e.g. Tuten 1997; Jones & Pitt 1999). The researcher's investment in sending a letter by post may heighten its perceived importance and legitimacy, resulting in higher response rates than with emailed invitations.

The reasons for lower response to web surveys in comparison to telephone surveys may be attributed to the impersonal, self-administered nature of the web mode (Vehovar et al. 2001). Potential respondents may find it much harder to decline participation when requested to do so by telephone. Personal requests on the phone might be harder to ignore and deflect than mail or electronic messages. Moreover, answering a web survey requires much more effort from the respondent than simply answering an interviewer's questions immediately over the telephone (Fricker et al. 2003).

The rest of this paper moves beyond these theoretical and speculative arguments and synthesises the empirical evidence on the expected differences in response rates for web surveys in comparison to other survey modes.

Research questions and hypotheses


Two main research questions are addressed. First, are response rates for web surveys actually lower than for other survey modes? Second, what is the impact of moderators influencing the magnitude of such potential differences?

Accordingly, the first research hypothesis is:

H1: Response rates for web surveys are lower than response rates for other survey modes.

To address H1, the focus will be on the average response rate difference between web and other survey modes.

The second research question refers to those moderators possibly influencing the magnitude of response rate differences between web and compared survey modes. Specifically, are there situations where a web mode would nevertheless perform better than other survey modes? Does the mode to which web surveys are compared systematically influence the response rate differences? Are panel members inclined to respond regardless of the mode, compared to subjects who are requested to participate only once? May we expect higher response rates for web surveys from highly educated, computer-savvy respondents (e.g. students or professionals compared to the general population)? Or do certain implementation procedures, such as the type of sponsorship, the solicitation mode, incentives and the number of contacts, have a differential impact on the response rate differences of interest? Taken together, the second set of hypotheses is more general in nature due to the relatively unexplored effects of different moderators on response rate differences.

H2: Moderators that vary the response rate differences between compared modes are: (a) the type of mode to which web surveys are compared; (b) whether or not subjects are from a panel; (c) the type of target population; (d) the type of research sponsor; (e) the year of study; and (f)–(h) the implementation procedures used (mode of survey invitation, incentives, number of contacts).

Method
Response rate differences between web and other survey modes are studied through meta-analytic techniques. This section briefly sketches the methodology, the eligibility criteria and search strategy used, the coding of primary studies and the statistical procedures employed.

Background and overview of methods


The term 'meta-analysis' was coined by Gene Glass in the mid-1970s (Glass 1976; Smith & Glass 1977; Glass et al. 1981) and encompasses a variety of methods and techniques for quantitatively synthesising research findings, namely effect size estimates (Cooper & Hedges 1994; Hunter & Schmidt 2004). It can thus be described as a set of quantitative methods and procedures for synthesising research results to assess the true value of an experimental effect or of an association between variables. The brief overview of the meta-analytic procedure employed in the current research is based on the Hedges and Olkin (1985) meta-analysis framework (see also Cooper & Hedges 1994; Lipsey & Wilson 2001). In our case, meta-analytic techniques are used to study potential differences in response rates between web and other modes of data collection.

The procedure starts with a comprehensive collection of eligible papers, reports and presentations using computer databases and bibliographical references.

These individual studies, or primary studies, are then used to extract the information needed to derive the effect size of interest. For instance, response and non-response counts for different survey modes are extracted to compute response rate differences (the effect size measure in our case). Further, the primary studies' characteristics assumed to influence the effect size measure of interest (i.e. moderators such as the type of sample, or whether or not incentives were employed) are also coded. Sometimes primary studies carry more than one comparison of interest, resulting in a larger number of effect sizes than studies.

Effect size measures of primary studies are then aggregated to estimate the true effect in the population of studies under certain distributional assumptions (see, for example, Hedges and Vevea (1998) for an overview and discussion). In the simplest class of models, the so-called fixed-effects models, it is assumed that the only variation in primary studies' effect sizes is due to (subject-level) sampling error alone. Fixed-effects models are applicable if one is interested in estimating the mean effect size for a given set of available studies. Strictly speaking, fixed-effects models allow inferences only for the collection of studies included in the meta-analysis and say nothing about other studies that may be done later, could have been done earlier, or may have already been done but are not included among the observed studies. However, in most research contexts a different inference goal is pursued, namely to make inferences that embody an explicit generalisation beyond the observed studies (i.e. about the parameters of a population of studies larger than the set of observed studies). Random-effects analysis procedures, representing the second class of models, are designed to facilitate such inferences by assuming essentially random differences between studies. These models account for the fact that, in addition to sampling error, there is true random variation in effect sizes between studies. In random-effects models, the random component of effect size variation is calculated and incorporated into the summary statistics. Because our inference goal is to make inferences going beyond the studies included, our analysis is based on the random-effects model.

In all modern meta-analytic procedures, the estimated true effect size is a weighted central tendency function of the primary studies' effect sizes, accounting for subject-level sampling error and the additional between-studies random variance component. Generally speaking, the larger the sample in primary studies, the more precise the effect size estimates. And the more precise these estimates are, the more they are weighted when aggregated to obtain an accurate estimate of the effect in the population of studies.

To determine whether the effect sizes in primary studies to be aggregated are actually from the same population of studies, a homogeneity test, the so-called Q-test (see, for example, Lipsey & Wilson 2001, pp. 115ff), is performed. If the homogeneity assumption must be rejected, moderator analyses with coded study descriptors are conducted to estimate the influence of these factors on the effect size distribution (i.e. the heterogeneity of effect sizes). For instance, one might find that the moderator variable 'type of survey sponsor' is systematically related to the response rate differences. In such a case, effect sizes will prove heterogeneous between the moderator categories (academic sponsor, commercial sponsor etc.), as reflected in a significant QB test (Q for between categories), and ideally homogeneous within these categories.

Following this brief introduction, we now describe the methods employed in more detail, starting with the criteria used to decide which studies to include in the meta-analysis. We then turn to a description of the literature search strategy, the coding procedure and the statistical methods used under a random-effects distributional assumption.
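As a rough illustration of the pooling and homogeneity steps just described (not the authors' actual MetaWin computations), the following Python sketch combines a set of hypothetical rate differences under a random-effects model, using a moment estimator of the between-study variance of the DerSimonian–Laird type; the exact estimator recommended by Hedges and Olkin (1985) and implemented in MetaWin may differ in detail.

```python
import numpy as np

def random_effects_pool(es, var):
    """Pool effect sizes under a random-effects model.

    es  -- array of primary-study effect sizes (e.g. rate differences)
    var -- array of their subject-level sampling variances
    """
    es, var = np.asarray(es, float), np.asarray(var, float)
    w_fixed = 1.0 / var                          # fixed-effects (inverse variance) weights
    mean_fixed = np.sum(w_fixed * es) / np.sum(w_fixed)

    # Homogeneity statistic Q (compared against chi-square with k - 1 df)
    q = np.sum(w_fixed * (es - mean_fixed) ** 2)
    k = len(es)

    # Moment estimate of the between-study variance tau^2 (truncated at zero)
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - (k - 1)) / c)

    # Random-effects weights incorporate tau^2 in addition to sampling error
    w_random = 1.0 / (var + tau2)
    mean_random = np.sum(w_random * es) / np.sum(w_random)
    se = np.sqrt(1.0 / np.sum(w_random))
    ci = (mean_random - 1.96 * se, mean_random + 1.96 * se)
    return mean_random, ci, q, tau2

# Hypothetical rate differences and variances for three comparisons
print(random_effects_pool([-0.15, -0.05, 0.02], [0.001, 0.002, 0.0015]))
```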

Eligibility criteria and literature search


In general, we sought to maximise internal validity by isolating the impact of the survey mode on the response rate difference of interest from other causes. Accordingly, only those studies meeting the following criteria were included.

1. One of the survey modes used should be a web-based survey (i.e. a survey where a questionnaire on the web was used to gather responses from respondents).
2. The web-based survey should be compared to data from one or more other survey modes (e.g. email survey, mail survey, telephone survey, face-to-face survey, fax survey).
3. Data on response rates from the web and the other survey mode(s) should be available.
4. A split-sample experimental design must have been employed, with subjects from the same population being randomly assigned to different modes.
5. Subjects should have remained in the mode they were randomly assigned to. In other words, studies where subjects were permitted to switch modes were not eligible for inclusion; or, for those studies where subjects were assigned to another mode in later phases of the survey process, only the results up to this change were taken into account.

6. The implementation of the compared modes should be as similar as possible, with the only difference being the mode used for answering the survey questionnaire. For example, comparisons where unequal incentives were used were excluded.

The last three criteria in particular (the random assignment of subjects to modes, the retention within this mode, and comparable implementation procedures) are crucial for isolating the impact of the survey mode from other factors.

Primary studies of interest were identified through a comprehensive literature search. The sources for collecting cases were:

- a search through bibliography entries on the WebSM site at http://www.websm.org (a website dedicated to the methodology of web surveys, whose bibliography database includes more than 2000 entries; Lozar Manfreda & Vehovar 2006)
- a search using keywords¹ in online literature databases (ScienceDirect at http://www.sciencedirect.com, ISI Web of Knowledge at http://isiwebofknowledge.com, Directory of Open Access Journals at http://www.doaj.org/, EBSCOhost at http://search.ebscohost.com/, Emerald at http://www.emeraldinsight.com/, Ingenta Select at http://www.ingentaselect.com/, LookSmart's FindArticles at http://articles.findarticles.com, The Internet Public Library at http://www.ipl.org/div/serials/, Kluwer Online Journals at http://journals.kluweronline.com/, ProQuest at http://www.umi.com/proquest)
- a review of papers in relevant journals in the survey methodology field for the 1995–2005 period
- a call for papers in online discussion lists relevant to survey methodologists (Elmar, German Online Research discussion list, SRMSNET list, AoIR)
- a call for papers on the WebSM site at http://www.websm.org
- a search of the references of collected papers (the references of each bibliographical unit obtained using the above means were checked in order to find additional relevant studies).

¹ Very general keywords were used in order not to miss any study using a web survey. Thus 'web survey', 'internet survey', 'online survey', 'web-based survey', 'internet-based survey' and 'electronic survey' were all used. The authors of this paper then selected the mode-comparison studies from the listed hits by checking the papers' abstracts.


Coding
To calculate our effect size measure, namely response rate differences, raw frequencies (i.e. the number of invited and eligible subjects and the number of respondents per mode) were used. The effective initial frequencies were most often calculated as the initial sample size minus undeliverable and non-eligible units. In some cases, insufficient data were provided to use this definition of the response rate. In these cases we simply used the authors' definition of response rates as given in the paper. Since the definition of response rates was the same for both compared modes, such an approach was nevertheless considered adequate.

As possible moderator variables, i.e. those expected to influence the magnitude of observed response rate differences, the following categorical information was coded:

1. type of mode to which the respective web survey was compared: mail, email, telephone, fax, and other (e.g. IVR, touch-tone)
2. sample recruitment strategy: panel vs one-time recruitment for the study reported
3. type of target population: students, employees/members of an organisation, general population, and other (e.g. customers, business respondents in an institutional survey)
4. sponsorship: academic, government (local, state), commercial
5.–7. implementation procedures for the web mode: (5) contact mode for the web survey main contact² (mail, email, other); (6) incentives (used or not used); and (7) number of contacts (from 1 to 5, including pre-notification, main contact and follow-ups).

In addition to these seven categorical variables, (8) the year the study was conducted was coded as a continuous variable.

It should be stressed that, except for the fifth moderator mentioned above (contact mode for the web survey main contact), all other moderators apply both to the web and the corresponding comparison mode. While the respective moderator value is identical within one comparison (this does not hold for the fifth moderator, as noted above), variability on the moderator variable(s) between the comparisons included may affect response rate differences. For instance, the use vs non-use of incentives might portray a different picture of whether (and how large) the differences in response rates are between web-based and other surveys.

² If different modes were used for different contacts (e.g. email for pre-notification, mail for the main contact with an invitation to the survey, email for follow-ups), the mode for the main contact was coded.
The coding of the papers was carried out twice, by two of the authors of this paper, based on an initial coding scheme that was formatively improved during the coding process. In cases where the coding authors initially disagreed on the assigned code(s), agreement was reached through discussion. No measures of coding reliability were therefore computed, since only relatively straightforward factual information about the primary studies was coded and disagreements were resolved immediately. When needed, the authors of the primary papers were contacted to obtain additional information.
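To make the effect size coding concrete, the sketch below computes the rate difference and its subject-level sampling variance from raw counts of the kind listed in Table 1; the variance formula assumed here is the standard one for a difference between two independent proportions, and the illustrative counts are those of case 9 (Crawford et al. 2001).

```python
def rate_difference(n_web, r_web, n_other, r_other):
    """Rate difference (web minus other mode) and its sampling variance.

    n_* -- number of eligible units contacted per mode
    r_* -- number of respondents per mode
    """
    p_web, p_other = r_web / n_web, r_other / n_other
    rd = p_web - p_other
    # Variance of a difference between two independent proportions
    var = p_web * (1 - p_web) / n_web + p_other * (1 - p_other) / n_other
    return rd, var

# Case 9 in Table 1 (Crawford et al. 2001): web 2205/3500 vs mail 1820/3500
rd, var = rate_difference(3500, 2205, 3500, 1820)
print(round(rd, 3), round(var, 6))   # about 0.11 in favour of the web mode
```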

Statistical analysis procedure


Because the statistical methods used are extensively documented in meta-analysis textbooks (e.g. Lipsey & Wilson 2001) and in the technical documentation of the meta-analytic software package used (MetaWin Version 2; Rosenberg et al. 2000), we only briefly outline the four steps followed. First, the mean response rate difference across all studies was computed by averaging all individual effect sizes, weighted by an inverse variance component encompassing: (1) subject-level sampling error variance; and (2) an estimate of between-study variance, as recommended by Hedges and Olkin (1985). In the second step, it was determined whether the estimated population effect size is statistically different from zero by computing the 95% confidence interval around it. Then a homogeneity analysis was performed to assess whether the effect sizes are from the same population of studies. All these analyses were conducted under the random-effects distributional assumption. In view of our inference goal of generalising beyond the studies included, and since the random-effects model is less prone to Type I errors than the fixed-effects approach (Hedges & Vevea 1998), the random-effects distributional assumption was deemed appropriate. In the final step, separate moderator analyses were performed for seven categorical variables and one continuous variable.
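A minimal sketch of the final step, the categorical moderator analysis: effect sizes are pooled within each category of a moderator, and the between-categories statistic QB is obtained as total minus within-category heterogeneity. This is an illustration under the stated assumptions (inverse-variance weights including a between-study variance component), not the MetaWin routine actually used; the moderator labels and numbers are hypothetical.

```python
import numpy as np
from collections import defaultdict

def q_between(es, var, groups, tau2=0.0):
    """Between-categories homogeneity statistic QB for one categorical moderator.

    es, var -- effect sizes and their sampling variances
    groups  -- category label per effect size (e.g. 'mail', 'email')
    tau2    -- between-study variance from the random-effects model
    """
    es, var = np.asarray(es, float), np.asarray(var, float)
    w = 1.0 / (var + tau2)

    def weighted_q(e, wt):
        m = np.sum(wt * e) / np.sum(wt)
        return np.sum(wt * (e - m) ** 2)

    q_total = weighted_q(es, w)
    by_group = defaultdict(list)
    for i, g in enumerate(groups):
        by_group[g].append(i)
    q_within = sum(weighted_q(es[idx], w[idx]) for idx in by_group.values())
    return q_total - q_within          # df = number of categories - 1

# Hypothetical data: solicitation mode as the moderator
print(q_between([-0.18, -0.12, -0.04, -0.06],
                [0.002, 0.003, 0.002, 0.0025],
                ['mail', 'mail', 'email', 'email']))
```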


Results

Primary studies identified


Following the search strategy and eligibility criteria described above, we identified 24 papers reporting comparisons of response rates between web and alternative survey modes using split-sample experimental designs. Some papers contained more than one comparison of interest, resulting in 45 pairs of survey mode comparisons (cases). Across these cases, the web mode was compared to the following survey modes: mail (27 cases), email (8 cases), fax (3 cases), telephone (5 cases), and IVR and touch-tone data entry (1 case each). Table 1 contains a summary of the experimental comparisons included in the meta-analysis, with the corresponding counts and response rates for the different modes.
Table 1 Summary of the 24 papers and 45 cases included in the meta-analysis

Case | Reference | Web mode compared to | Web mode: response rate % (eligible units, responses) | Other mode: response rate % (eligible units, responses)
1 | Bason (2000) | telephone | 15.50 (742, 115) | 23.98 (674, 161)
2 | Bason (2000) | mail | 15.50 (742, 115) | 27.76 (735, 204)
3 | Bason (2000) | IVR | 15.50 (742, 115) | 17.39 (736, 128)
4 | Bates (2001) | mail | 55.70 (1571, 875) | 44.23 (1569, 694)
5 | Chatt & Dennis (2003) | telephone | 82.13 (3627, 2979) | 62.89 (477, 300)
6 | Chisholm (1998) | email | 24.00 (300, 72) | 30.00 (300, 90)
7 | Cobanoglu et al. (2001) | mail | 44.21 (95, 42) | 26.26 (99, 26)
8 | Cobanoglu et al. (2001) | fax | 44.21 (95, 42) | 17.00 (100, 17)
9 | Crawford et al. (2001) | mail | 63.00 (3500, 2205) | 52.00 (3500, 1820)
10 | Elder & Incalcatera (2000) | mail | 37.39 (690, 258) | 54.26 (693, 376)
11 | Fraze et al. (2002) | mail | 43.16 (95, 41) | 60.00 (95, 57)
12 | Fraze et al. (2002) | email | 43.16 (95, 41) | 27.37 (95, 26)
13 | Fricker et al. (2003) | telephone | 51.61 (1058, 546) | 97.43 (544, 530)
14 | Grigorian et al. (2004) | telephone | 61.88 (1941, 1201) | 66.82 (1941, 1297)
15 | Grigorian et al. (2004) | mail | 61.88 (1941, 1201) | 78.00 (27,982, 21,826)
16 | Hayslett & Wildemuth (2004) | mail | 28.00 (100, 28) | 51.00 (100, 51)
17 | Hayslett & Wildemuth (2004) | mail | 39.00 (100, 39) | 51.00 (100, 51)
18 | Jones & Pitt (1999) | mail | 18.50 (200, 37) | 72.00 (100, 72)
19 | Jones & Pitt (1999) | email | 18.50 (200, 37) | 34.00 (200, 68)
20 | Kaplowitz et al. (2004) | mail | 20.70 (4440, 919) | 31.50 (2594, 817)
21 | Kaplowitz et al. (2004) | mail | 25.40 (4351, 1105) | 31.50 (2594, 817)
22 | Kaplowitz et al. (2004) | mail | 29.70 (4327, 1285) | 31.50 (2594, 817)
23 | Kaplowitz et al. (2004) | mail | 28.60 (4178, 1195) | 31.50 (2594, 817)
24 | Kerwin et al. (2004) | mail | 37.60 (359, 135) | 27.69 (195, 54)
25 | Knapp & Kirk (2003) | mail | 15.88 (359, 57) | 48.47 (359, 174)
26 | Knapp & Kirk (2003) | touch-tone | 15.88 (359, 57) | 33.71 (359, 121)
27 | Kwak & Radler (1999) | mail | 27.36 (987, 270) | 41.92 (990, 415)
28 | Lesser & Newton (2001) | mail | 18.87 (159, 30) | 59.38 (389, 231)
29 | Lesser & Newton (2001) | mail | 21.89 (233, 51) | 59.38 (389, 231)
30 | Lesser & Newton (2001) | email | 18.87 (159, 30) | 39.26 (163, 64)
31 | Lesser & Newton (2001) | email | 21.89 (233, 51) | 39.26 (163, 64)
32 | Lesser & Newton (2001) | email | 18.87 (159, 30) | 52.98 (151, 80)
33 | Lesser & Newton (2001) | email | 21.89 (233, 51) | 52.98 (151, 80)
34 | Lozar Manfreda et al. (2001) | mail | 77.00 (200, 154) | 89.00 (200, 178)
35 | Miller et al. (2002) | mail | 14.30 (2805, 401) | 37.00 (2811, 1040)
36 | Miller et al. (2002) | mail | 12.86 (2900, 373) | 38.01 (2897, 1101)
37 | Pötschke (2004) | mail | 37.11 (380, 141) | 50.75 (402, 204)
38 | Sax et al. (2003) | mail | 11.13 (737, 82) | 10.28 (1478, 152)
39 | Vehovar et al. (2001) | telephone | 26.00 (300, 78) | 51.94 (747, 388)
40 | Vehovar et al. (2001) | mail | 26.00 (300, 78) | 39.19 (222, 87)
41 | Vehovar et al. (2001) | fax | 26.00 (300, 78) | 31.58 (76, 24)
42 | Weible & Wallace (1998) | mail | 34.44 (151, 52) | 35.71 (196, 70)
43 | Weible & Wallace (1998) | fax | 34.44 (151, 52) | 30.86 (162, 50)
44 | Weible & Wallace (1998) | email | 34.44 (151, 52) | 29.81 (161, 48)
45 | Wygant & Lindorf (1999) | mail | 49.53 (1270, 629) | 31.56 (1299, 410)

Weighted mean effect size estimate for response rate differences and homogeneity analysis
Figure 1 summarises the differences in response rates between web surveys and other modes on the effect size metric, namely the rate difference (RD) measure, ranging from –1 (a 100% difference in favour of other modes) to +1 (a 100% difference in favour of web surveys). Each row in Figure 1 represents one effect size per comparison, along with its respective 95% confidence interval. The effect size distribution in Figure 1 suggests that most cases report web surveys as being inferior to other modes: in 34 out of 45 cases the rate differences are negative. The sampling-error-weighted mean effect size estimate, computed across all 45 cases under a random-effects assumption, amounts to –0.11 (95% CI = –0.15/–0.06; random-effects pooled variance estimate = 0.02) and thus favours other survey modes over the web mode.

Figure 1 Distribution of effect sizes (response rate differences, RD) and their 95% confidence intervals (based on 45 comparisons between web survey and other survey modes; study numbers correspond to those reported in Table 1)


In other words, the results indicate that web surveys yield a response rate that is about 11% lower, on average, than other modes, with a 95% confidence interval ranging from a 15% to a 6% lower response rate. This finding corroborates H1, stating that web surveys are associated with lower response rates compared to other survey modes.

A homogeneity analysis revealed a non-significant Q-score of 59.89 (df = 44, p = 0.06), suggesting homogeneity of the effect size distribution under the random-effects assumption. However, because the Q-test is only marginally non-significant, and because its power tends to be low in the circumstances of our study (a relatively small number of cases synthesised), we nevertheless investigate whether moderators influence response rate differences. Before doing so, two questions regarding the validity of the findings are addressed. First, is the mean response rate difference estimate substantially influenced by publication bias? Second, how robust is the result if dependencies among a subset of the 45 cases, i.e. those reported within the same paper, are removed?

Publication bias and sensitivity analysis


To determine the degree of bias possibly introduced by the selective publication of studies, the computational approach proposed by Rosenthal (1979) and the graphical method proposed by Wang and Bushman (1998) were employed. The method proposed by Rosenthal (1979) computes the number of non-significant, unpublished or missing studies that would need to be added to a meta-analysis in order to change its result from significance to non-significance. This fail-safe N, computed according to Rosenthal (1979), amounts to 321.4 in our case, meaning that at least 321 non-significant experiments would be needed to invalidate our findings. Because this number is large compared to the number of cases included (45), the observed result can be regarded fairly confidently as a reliable estimate of the true effect. In addition, plotting the quantiles of the effect size distribution against the quantiles of the normal distribution, as suggested by Wang and Bushman (1998), does not give rise to concerns regarding possible publication bias (see Figure 2). The cases do not deviate substantially from linearity, nor are any suspicious gaps visible in the plot. As outlined by Wang and Bushman (1998), these two characteristics, which are absent in our case, would suggest the presence of publication bias in the data.
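For readers who want to reproduce this kind of check, the following sketch computes Rosenthal's fail-safe N from the one-tailed Z-scores of the primary comparisons, using the conventional one-tailed alpha = 0.05 criterion (z = 1.645); the Z-scores shown are hypothetical, not those of the 45 cases.

```python
def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal's fail-safe N: the number of null (Z = 0) studies that would
    have to be added before the Stouffer-combined one-tailed significance
    drops to the alpha level."""
    k = len(z_scores)
    z_sum = sum(z_scores)
    # Solve (sum Z)^2 / (k + N) = z_alpha^2 for N
    return (z_sum ** 2) / (z_alpha ** 2) - k

# Hypothetical Z-scores for five comparisons
print(fail_safe_n([2.1, 1.4, 2.8, 0.6, 1.9]))
```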

Figure 2 Normal quantile plot to detect any publication bias (based on 45 comparisons between web survey and other survey modes; study numbers correspond to those reported in Table 1)


All previous analyses were based on 45 effect sizes; some of them were reported in the same paper and may therefore share unique characteristics (see also Table 1), biasing the overall results. To explore the robustness of the results for independent effect sizes, compared to the previously reported dependent ones, we averaged dependent cases into one single response rate difference estimate per paper and calculated the mean response rate difference for the remaining 24 paper-level comparisons. The resulting average response rate difference amounts to –0.09 and is therefore slightly smaller in magnitude than the estimate based on 45 cases (–0.11). As can be expected in view of the smaller number of effect sizes included, the confidence interval around the mean response rate difference estimate becomes larger (95% CI = –0.16/–0.02; random-effects pooled variance estimate = 0.03). However, the overall result is still in favour of other survey modes compared to the web mode and can therefore be regarded as fairly robust.
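The paper-level sensitivity analysis described above reduces to averaging dependent effect sizes within papers before pooling; a minimal sketch with hypothetical paper identifiers and rate differences:

```python
from collections import defaultdict

def paper_level_means(cases):
    """Average dependent effect sizes within papers so that each paper
    contributes a single independent effect size.

    cases -- iterable of (paper_id, effect_size) pairs
    """
    grouped = defaultdict(list)
    for paper, es in cases:
        grouped[paper].append(es)
    return {paper: sum(v) / len(v) for paper, v in grouped.items()}

# Hypothetical: three comparisons from two papers
print(paper_level_means([("Bason 2000", -0.08), ("Bason 2000", -0.12),
                         ("Bates 2001", 0.11)]))
```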

Moderator analyses
In Table 2 the results of seven separate analyses are presented, investigating the influence of categorical moderators on the response rate differences between web surveys and other modes. As indicated in Table 2, significant differences could be ascertained for three of the seven categorical moderators. Specifically, whether the sample consisted of panel members or respondents recruited for one single study led to significant response rate differences (see the row 'Sample recruitment strategy' in Table 2). For panel members, the average response rate difference amounted to 9% to the disadvantage of the web mode. For one-time recruited subjects, this difference grew to 28%. A second influential moderator is the mode of solicitation: if initially requested by postal mail to participate, web surveys lead to a 15% lower response rate; if requested by email, this figure shrinks to an average 5% difference. The third moderator significantly influencing response rate differences is the number of contacts used. Because of the low number of cases, we collapsed this moderator into two categories (category 1: one or two contacts; category 2: three, four or five contacts; see Table 2). The results suggest that, as the number of contacts increases, the difference between web and other survey modes gets larger, namely from about 5% to the disadvantage of the web mode (for one to two contacts) to 16% on average (for three to five contacts).


Table 2 Summary of seven categorical moderator analyses predicting the response rate differences between web and other survey modes

Moderator variable | Categories (and number of cases) | Mean response difference estimate | 95% CI | QB test (Q for between categories)
Type of mode compared to | Mail (27) | –0.12 | –0.17/–0.05 | QB = 4.52, df = 3, p = 0.21
 | Email (8) | –0.13 | –0.27/0.00 |
 | Telephone (5) | –0.13 | –0.32/0.06 |
 | Fax (3) | 0.08 | –0.32/0.48 |
 | Other (2)* | | |
Sample recruitment strategy | Panel/pre-recruited list (40) | –0.09 | –0.14/–0.05 | QB = 7.18, df = 2, p = 0.01
 | One-time recruitment (4) | –0.28 | –0.49/–0.07 |
 | Other (1)* | | |
Target population | Students (13) | –0.06 | –0.14/0.02 | QB = 3.12, df = 2, p = 0.21
 | Employees/members of associations (20) | –0.12 | –0.19/–0.06 |
 | General population (4) | –0.19 | –0.40/0.03 |
 | Other (8)* | | |
Type of sponsorship | Academic (36) | –0.12 | –0.17/–0.07 | QB = 1.68, df = 2, p = 0.43
 | Governmental (6) | –0.08 | –0.24/0.07 |
 | Commercial (3) | –0.01 | –0.39/0.36 |
Solicitation mode | Mail (17) | –0.15 | –0.21/–0.09 | QB = 6.69, df = 1, p = 0.01
 | Email (25) | –0.05 | –0.10/0.00 |
 | Other (3)* | | |
Incentive | Yes (3) | –0.17 | –0.55/0.21 | QB = 0.57, df = 1, p = 0.45
 | No (42) | –0.10 | –0.15/–0.05 |
Number of contacts | One–two (23) | –0.05 | –0.11/–0.01 | QB = 7.56, df = 1, p = 0.01
 | Three–five (22) | –0.16 | –0.23/–0.10 |

* Other categories dropped from the homogeneity analysis.

No significant systematic influence on the differences in response rates between web and other survey modes was observed for: (1) the type of mode to which the web surveys were compared; (2) characteristics of the target population; (3) the type of sponsorship; or (4) whether or not incentives were offered (see Table 2). However, some of these non-significant results could be due to the low number of cases in certain categories, which attenuates statistical power. With a larger number of published studies, especially for the moderator 'type of target population', statistical significance may be reached. The data so far available point in the direction that response rate differences appear lower for student samples and larger for employees, members of professional associations, and the general public.

Last but not least, we performed a meta-regression for the only continuous moderator in our study (the year the study was conducted) and found no significant influence on response rate differences between the web and other survey modes (B = 0.01, SE(B) = 0.01, p = 0.41; Q regression = 0.67, df = 1, p = 0.42).
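The meta-regression reported here can be approximated by a weighted least-squares regression of the effect sizes on the continuous moderator, with weights incorporating the between-study variance component; the sketch below is a generic illustration with made-up data, not the MetaWin computation used in the study.

```python
import numpy as np

def meta_regression(es, var, x, tau2=0.0):
    """Weighted least-squares meta-regression of effect sizes on one
    continuous moderator (e.g. year of study)."""
    es, x = np.asarray(es, float), np.asarray(x, float)
    w = 1.0 / (np.asarray(var, float) + tau2)
    x = x - x.mean()                       # centre the moderator for stability
    X = np.column_stack([np.ones_like(x), x])
    # Weighted normal equations: (X'WX) b = X'W y
    xtwx = X.T @ (w[:, None] * X)
    b = np.linalg.solve(xtwx, X.T @ (w * es))
    se = np.sqrt(np.diag(np.linalg.inv(xtwx)))  # SEs of intercept and slope
    return b, se

# Hypothetical rate differences by year of study
b, se = meta_regression([-0.20, -0.15, -0.10, -0.05],
                        [0.002, 0.002, 0.003, 0.002],
                        [1999, 2000, 2002, 2004])
print(b, se)
```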

Summary and conclusions


This research synthesis, based on 45 comparisons of web survey response rates with those of other modes, showed that on average web surveys yield an 11% lower response rate than other modes. Despite the moderate number of experimental mode comparisons available to date, this result does not appear to be due to publication bias or to systematic dependencies among sets of cases reported within the same paper.

The estimated response rate difference could aid methodologists and practitioners in a number of ways. First, because non-response is, in most cases, demonstrably higher for web surveys than for other modes, non-response bias appears to be an even larger concern for web surveys. If there are reasons to believe that this non-response difference could substantially bias the conclusions to be drawn, an informed decision about the appropriate mode can be made. However, a researcher should take into account not just the difference in response rates but also the absolute response rate. For example, a 60% response rate achieved with a web survey may be very acceptable (although this is often a judgement based on a particular individual case), even if it is lower than a 75% response rate using another mode.

Second, besides non-response error, the precision of estimated parameters will in most cases be lower for web surveys than for other modes (keeping everything else, including the initial sample size, constant). Therefore, the initial number of subjects needs to be higher to achieve the same precision. However, this may not necessarily be a problem, since in web surveys larger samples can be used for the same cost.

Last but not least, the results indicate that under comparable conditions (e.g. when comparing web to other survey modes given the same incentives and the same number of contacts) the web yields lower response rates than most other modes. However, one might argue that the web requires different methods and procedures to reduce non-response levels and could thereby generate data of the same or even better quality.

Exploring such mode-congruent methods and procedures resulting in high-quality survey data should be an issue for further research. In addition, since it is usually cheaper to conduct a web survey, several additional measures for reducing non-response can be used in the web mode, but not in some other mode, within the same available budget. In this paper we did not take the costs of the compared modes into account, so these comparisons are in some respects not fair to the web mode. In web surveys, which are usually cheaper, more funds could be invested in procedures for increasing response rates; such an equal-cost basis for comparison might significantly change the difference in response rates.

Besides the average response rate difference, this research synthesis revealed the influence of moderators. The difference is smaller for panel members than for one-time respondents, most probably because pre-recruited subjects have the technical resources, skills and experience to participate in web surveys and regard this mode as less burdensome. From an applied perspective, mode differences in terms of response rates should be a concern for those doing one-time surveys, but much less so for those conducting online access panel-based research.

Further, the solicitation mode appears to play some role, in the sense that the differences get larger for postal mail solicitation compared to email contacts. One reason might be that it is much easier to complete a web survey when requested by email than to go through the burdensome process of switching from a mailed request to the internet. Given that most comparisons were web vs mail surveys, respondents seem to prefer completing the survey in the same mode in which they were contacted.

A third, and in our view surprising, moderator was the number of contacts: the more contacts, the larger the difference in response rates between web and other modes. In other words, the effectiveness of additional contacts is curbed for web surveys, an alarming result not only for practitioners. There might be various reasons for this effect that could be explored in further detail by survey methodologists, such as the way repeated requests are perceived in different modes, the way research subjects feel obliged to comply with requests conditional upon the mode, and others. It may be that respondents in web surveys classify non-response reminders (especially if they are sent by email) as intrusive, and perceive the survey request as something similar to spam. Multiple email reminders may reach an early saturation point, resulting in resistant, non-compliant behaviour (Kittleson 1997). This may be different in conventional survey modes, where multiple contacts are actually the general rule for improving response rates (Dillman 2000).

In these traditional modes, the researcher's investment in multiple contacts stresses the importance and legitimacy of the study and therefore positively influences survey participation.

Besides the three influential moderators, other characteristics apparently do not affect response rate differences: the type of mode to which web surveys are compared, the type of target population, the type of sponsorship, whether or not incentives were offered, and the year of study. These non-significant findings are, in the authors' view, at least as valuable as the significant ones summarised above, because they indicate to researchers and practitioners which aspects are not expected to give rise to concern in multi-mode survey contexts. However, due to the very low number of cases for several moderators, these results may change in the future when more primary studies on the same issue become available. For instance, future studies may show that response rate differences are lower for student samples and larger for employees, members of professional associations and the general public. Such questions may be addressed in cumulative meta-analyses, defined as the procedure of performing new meta-analyses at one or more future time points (see, for example, Mullen et al. 2001). In view of the fact that web surveys have undergone various changes in the last decade, primarily influenced by changes in technology (e.g. more sophisticated design options for conducting surveys on the web) and at the societal level (e.g. broader segments of society have adopted the internet), it is to be expected that a cumulative meta-analysis approach will yield important information on the sufficiency and stability of the results obtained over time.

At this point it should again be stressed that the discussion in this paper is limited to non-response rates as an indicator of non-response error. The authors are aware that non-response does not necessarily lead to non-response error, which is a function of the percentage of the sample not responding to the survey and the differences in the statistics between respondents and non-respondents (Groves 1989, p. 134). Non-response error occurs only if non-respondents, had they responded, would have provided different answers to the survey questions than those who responded to the survey.

(e.g. Groves 2006; Keeter et al. 2000). Therefore, further research on the issue should examine not just the quantitative differences in response rates of web and other survey modes (as in this paper), but also the qualitative ones (i.e. how similar or different respondents from the compared survey modes are in terms of key variables and in other aspects of data quality for example, item non-response, consistency of answers, richness of responses to open-ended questions, speed of answering). If in practice it can be shown that, on a particular issue, responses from the compared modes are similar, the problem of lower response rates in web surveys would not be as critical. This is particularly true if we take into account the smaller amount of resources usually needed to carry out web surveys.

References
Bason, J.J. (2000) Comparing results from telephone, mail, internet and interactive voice recognition surveys of drug and alcohol use among University of Georgia students. Paper presented at the American Association for Public Opinion Research 55th Annual Conference, Portland, USA, 17–20 May 2000.
Bates, N. (2001) Internet versus mail as data collection methodology from a high coverage population. Proceedings of the Annual Meeting of the American Statistical Association, 5–9 August 2001.
Birnholtz, J.P., Horn, D.B., Finholt, T.A. & Bae, S.J. (2004) The effects of cash, electronic, and paper gift certificates as respondent incentives for a web-based survey of a technologically sophisticated sample. Social Science Computer Review, 22, 3, pp. 377–384.
Bosnjak, M. & Tuten, T.L. (2003) Prepaid and promised incentives in web surveys: an experiment. Social Science Computer Review, 21, 2, pp. 208–217.
Braithwaite, D., Emery, J., de Lusignan, S. & Sutton, S. (2003) Using the internet to conduct surveys of health professionals: a valid alternative? Family Practice, 20, 5, pp. 545–551.
Chatt, C. & Dennis, J.M. (2003) Data collection mode effects controlling for sample origins in a panel survey: telephone versus internet. Paper presented at the 2003 Annual Meeting of the Midwest Chapter of the American Association for Public Opinion Research, Chicago, USA.
Chisholm, J. (1998) Using the internet to measure and increase customer satisfaction and loyalty. White paper by CustomerSat.com.
Cobanoglu, C., Warde, B. & Moreo, P.J. (2001) A comparison of mail, fax, and web-based survey methods. International Journal of Market Research, 43, 4, pp. 441–452.
Cooper, H. & Hedges, L.V. (1994) The Handbook of Research Synthesis. New York: Russell Sage Foundation.
Couper, M.P. (2000) Web surveys: a review of issues and approaches. Public Opinion Quarterly, 64, 4, pp. 464–494.
Crawford, S., McCabe, S., Couper, M. & Boyd, C. (2001) From mail to web: improving response rates and data collection efficiencies. Paper presented at the International Conference on Improving Surveys, Copenhagen, Denmark.
de Leeuw, E. & de Heer, W. (2002) Trends in household survey nonresponse: a longitudinal and international comparison. In: R.M. Groves, D.A. Dillman, J.L. Eltinge & R.J.A. Little (eds) Survey Nonresponse. New York: Wiley, pp. 41–54.
Dillman, D.A. (2000) Mail and Internet Surveys: The Tailored Design Method. New York: Wiley.
Elder, A. & Incalcatera, T. (2000) Pushing the envelope: moving a major syndicated study to the web. Paper presented at the Net Effects 3 Conference, Dublin, Ireland.
Evans, J.R. & Mathur, A. (2005) The value of online surveys. Internet Research, 15, 2, pp. 195–219.
Fraze, S., Hardin, K., Brashears, T., Smith, J.H. & Lockaby, J. (2002) The effects of delivery mode upon survey response rate and perceived attitudes of Texas agriscience teachers. Paper presented at the National Agricultural Education Research Conference, Las Vegas, USA.
Frick, A., Bächtiger, M.T. & Reips, U.-D. (1999) Financial incentives, personal information and drop-out rate in online studies. In: U.-D. Reips, B. Batinic, W. Bandilla, M. Bosnjak, L. Gräf, K. Moser & A.A. Werner (eds) Current Internet Science: Trends, Techniques, Results. Aktuelle Online Forschung: Trends, Techniken, Ergebnisse. Zurich: Online Press. Retrieved from http://dgof.de/tband99/.
Fricker, S., Galešić, M., Tourangeau, R. & Yan, T. (2003) An experimental comparison of web and telephone surveys. Working paper.
Glass, G.V. (1976) Primary, secondary, and meta-analysis of research. Educational Researcher, 5, 10, pp. 3–8.
Glass, G.V., McGaw, B. & Smith, M.L. (1981) Meta-Analysis in Social Research. London: Sage.
Grigorian, K.H., Sederstrom, S. & Hoffer, T.B. (2004) Web of intrigue? Evaluating effects on response rates of between web SAQ, CATI, and mail SAQ options in a national panel survey. Paper presented at the American Association for Public Opinion Research 59th Annual Conference, Phoenix, USA.
Groves, R.M. (1989) Survey Errors and Survey Costs. New York: Wiley.
Groves, R.M. (2006) Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70, 5, pp. 646–675.
Groves, R.M., Cialdini, R.B. & Couper, M.P. (1992) Understanding the decision to participate in a survey. Public Opinion Quarterly, 56, 4, pp. 475–495.
Hayslett, M.M. & Wildemuth, B.M. (2004) Pixels or pencils? The relative effectiveness of web-based versus paper surveys. Library & Information Science Research, 26, 1, pp. 73–93.
Hedges, L.V. & Olkin, I. (1985) Statistical Methods for Meta-Analysis. San Diego: Academic Press.
Hedges, L.V. & Vevea, J.L. (1998) Fixed- and random-effects models in meta-analysis. Psychological Methods, 3, 4, pp. 486–504.
Hunter, J.E. & Schmidt, F.L. (2004) Methods of Meta-Analysis: Correcting Error and Bias in Research Findings. Newbury Park: Sage Publications.
Jones, R. & Pitt, N. (1999) Health surveys in the workplace: comparison of postal, email and world wide web methods. Occupational Medicine, 49, 8, pp. 556–558.
Kaplowitz, M.D., Hadlock, T.D. & Levine, R. (2004) A comparison of web and mail survey response rates. Public Opinion Quarterly, 68, 1, pp. 94–101.
Keeter, S., Miller, K., Kohut, A., Groves, R.M. & Presser, S. (2000) Consequences of reducing nonresponse in a national telephone survey. Public Opinion Quarterly, 64, 2, pp. 125–148.
Kerwin, J., Brick, P.D., Levin, K., O'Brien, J., Cantor, D., Wang, A., Campbell, S. & Shipp, S. (2004) Web, mail, and mixed-mode data collection in a survey of advanced technology program applicants. Paper presented at the 2004 Joint Statistical Meetings, Toronto, Canada.
Kittleson, M.J. (1997) Determining effective follow-up of email surveys. American Journal of Health Behavior, 21, 3, pp. 193–196.
Knapp, F. & Heidingsfelder, M. (2001) Drop-out analysis: effects of the survey design. In: U.-D. Reips & M. Bosnjak (eds) Dimensions of Internet Science. Lengerich: Pabst Science Publishers, pp. 221–230.
Knapp, H. & Kirk, S.A. (2003) Using pencil and paper, internet and touch-tone phones for self-administered surveys: does methodology matter? Computers in Human Behavior, 19, 1, pp. 117–134.
Kwak, N. & Radler, B.T. (1999) A comparison between mail and web surveys: response pattern, respondent profile, data quality, and construct association. Paper presented at the annual meeting of the Midwest Association of Public Opinion Research, Chicago, USA.
Lesser, V.M. & Newton, L. (2001) Mail, email and web surveys: a cost and response rate comparison in a study of undergraduate research activity. Paper presented at the American Association for Public Opinion Research 56th Annual Conference, Montreal, Canada.
Lipsey, M.W. & Wilson, D.B. (2001) Practical Meta-Analysis. Thousand Oaks: Sage Publications.
Lozar Manfreda, K. & Vehovar, V. (2006) Internet surveys. In: J. Hox, E. de Leeuw & D.A. Dillman (eds) The International Handbook of Survey Methodology. New Jersey: Lawrence Erlbaum Associates.
Lozar Manfreda, K., Vehovar, V. & Batagelj, Z. (2001) Web versus mail questionnaire for an institutional survey. Paper presented at the 2nd ASC International Conference on Survey Research Methods: The Challenge of the Internet, UK.
MacElroy, B. (2000) Variables influencing dropout rates in web-based surveys. Quirk's Marketing Research Review, July.
McNeish, J. (2001) Using the internet for data collection: just because we can, should we? Paper presented at the 2001 AAPOR Annual Conference, Montreal, Canada, 17–20 May.
Miller, T.I., Miller Kobayashi, M., Caldwell, E., Thurston, S. & Collett, B. (2002) Citizen surveys on the web: general population surveys of community opinion. Social Science Computer Review, 20, 2, pp. 124–136.
Mullen, B., Muellerleile, P. & Bryant, B. (2001) Cumulative meta-analysis: a consideration of indicators of sufficiency and stability. Personality and Social Psychology Bulletin, 27, 11, pp. 1450–1462.
Pineau, V. & Slotwiner, D. (2004) Probability samples vs volunteer respondents in internet research: defining potential effects on data and decision-making in marketing applications. Retrieved from Knowledge Networks (www.knowledgenetworks.com).
Pötschke, M. (2004) Paper and pencil or online? Methodological experiences from an employee survey. Paper presented at the German Online Research Conference (GOR) 2004, Duisburg, Germany.
Rosenberg, M.S., Adams, D.C. & Gurevitch, J. (2000) MetaWin: Statistical Software for Meta-Analysis, Version 2.0. Sunderland: Sinauer Associates.
Rosenthal, R. (1979) The file drawer problem and tolerance for null results. Psychological Bulletin, 86, 3, pp. 638–641.
Roster, C.A., Rogers, R.D., Albaum, G. & Klein, J.D. (2004) A comparison of response characteristics from web and telephone surveys. International Journal of Market Research, 46, 3, pp. 359–373.
Sax, L.J., Gilmartin, S.K. & Bryant, A.N. (2003) Assessing response rates and nonresponse bias in web and paper surveys. Research in Higher Education, 44, 4, pp. 409–432.
Schonlau, M., Elliot, M.N. & Fricker, R.D. (2002) Conducting Research Surveys via E-mail and the Web. Santa Monica: RAND.
Smith, M.L. & Glass, G.V. (1977) Meta-analysis of psychotherapy outcome studies. American Psychologist, 32, 9, pp. 752–760.
Truell, A.D. (2003) Use of internet tools for survey research. Information Technology, Learning and Performance Journal, 21, 1, pp. 31–37.
Tuten, T.L. (1997) Getting a Foot in the Electronic Door: Understanding Why People Read or Delete Electronic Mail (Rep. No. 97/08). Mannheim: Zentrum für Umfragen, Methoden und Analysen.
Tuten, T.L., Bosnjak, M. & Bandilla, W. (1999/2000) Banner-advertised web surveys. Marketing Research, 11, 4, pp. 16–21.
Tuten, T.L., Galešić, M. & Bosnjak, M. (2004) Effects of immediate versus delayed notification of prize draw results on response behavior in web surveys: an experiment. Social Science Computer Review, 22, 3, pp. 377–384.
Tuten, T.L., Urban, D.J. & Bosnjak, M. (2002) Internet surveys and data quality: a review. In: B. Batinic, U.-D. Reips & M. Bosnjak (eds) Online Social Sciences. Seattle, WA: Hogrefe & Huber Publishers, pp. 7–26.
Vehovar, V., Batagelj, Z., Lozar Manfreda, K. & Zaletel, M. (2002) Nonresponse in web surveys. In: R.M. Groves, D.A. Dillman, J.L. Eltinge & R.J.A. Little (eds) Survey Nonresponse. New York: Wiley, pp. 229–242.
Vehovar, V., Lozar Manfreda, K. & Batagelj, Z. (2001) Sensitivity of e-commerce measurement to the survey instrument. International Journal of Electronic Commerce, 6, 1, pp. 31–52.
Wang, M.C. & Bushman, B.J. (1998) Using the normal quantile plot to explore meta-analytic data sets. Psychological Methods, 3, 1, pp. 46–54.
Weible, R. & Wallace, J. (1998) Cyber research: the impact of the internet on data collection. Marketing Research, 10, 3, pp. 19–24.
Wygant, S. & Lindorf, R. (1999) Surveying collegiate net surfers. Quirk's Marketing Research Review, July.


About the authors


Katja Lozar Manfreda, Ph.D., is an Assistant Professor of Statistics and Social Informatics at the Faculty of Social Sciences, University of Ljubljana (Slovenia). Her current research interests include survey methodology, new technologies in social science data collection and web survey methodology.

Michael Bosnjak, Ph.D., is an Associate Professor in the School of Economics and Management at the Free University of Bozen-Bolzano (South Tyrol, Italy). His research interests include research synthesis methods, marketing research methods, internet-based data collection methodology, and consumer psychology with a special emphasis on individual differences and self-concept-related/symbolic consumption.

Jernej Berzelak is a researcher at the Faculty of Social Sciences, University of Ljubljana. His research activities focus primarily on methodological issues of internet-based surveys and on other new technologies in social science data collection.

Iris Haas, Dipl.-Psych., is a postgraduate research assistant at the University of Mannheim, Department of Psychology II. Her special interests are research methods in psychology and marketing research.

Vasja Vehovar, Ph.D., is a Professor of Statistics at the Faculty of Social Sciences, University of Ljubljana. The main areas of his research interest include the problem of survey non-response, web survey methodology and social aspects of the internet and other ICTs. He is the principal investigator of the internationally recognised WebSM (Web Survey Methodology) portal project.

Address correspondence to: Katja Lozar Manfreda, Faculty of Social Sciences, Kardeljeva ploščad 5, 1000 Ljubljana, Slovenia. Email: katja.lozar@fdv.uni-lj.si
