Abstract
Two studies investigated the validity of computer self-efficacy and computer anxiety scales
when administered to an Internet sample. In the first study, it was found that existing mea-
sures of computer self-efficacy and anxiety, originally developed through paper-and-pencil
methods with more traditional samples, were not adequately equivalent when administered to
a sample that was recruited and tested via the Internet. In the second study, the existing
measures were adapted, and new items were developed to create new measures of computer
self-efficacy and anxiety. The relationship of these new measures to computer and Internet use
behaviors provided evidence for validity. Confidence and aversion were related to computer
and Internet use, suggesting that these new measures are adequate for capturing confidence
in, and aversion towards, computers when administered to an Internet sample.
© 2003 Elsevier Ltd. All rights reserved.
Keywords: Self-efficacy; Computer anxiety; Computer attitudes; Internet; Computers; Sampling
1. Introduction
As the Internet has become an increasingly useful tool for everyday life, so has it
become a useful tool for psychological research. A search of the psychological
literature yields numerous and varied studies utilizing the Internet for data
collection (e.g. Buchanan & Smith, 1999; Malone & Bero, 2000; Welch & Krantz,
1996). As with any methodology, data collection by computer and by the Internet
has advantages as well as challenges that must be addressed (Reips, 2000). One
doi:10.1016/S0747-5632(03)00049-9
2 F.G. Barbeite, E.M. Weiss / Computers in Human Behavior 20 (2004) 1–15
benefit is that Internet sampling can provide researchers with the opportunity to
obtain a sample more diverse than the traditional college student sample, in terms
of both demographics and geography. Turnaround time for survey administration
and data collection is faster. Data quality is also better: respondents can be
reminded to return to an item they missed, and manual data entry from a
paper-based survey is unnecessary. Ultimately, the advantages of Internet
sampling make it a much more cost-effective procedure (in terms of time and money)
than other sampling methods. Moreover, as the technology for using computers
and the Internet as a means of data collection becomes easier to use, it becomes
even more accessible.
There are also challenges that must be addressed when using the Internet. As
mentioned before, an Internet sample is more diverse than a sample of college
students, but compared with other sampling techniques, an Internet sample is still
limited to those who have access to a computer, those who have access to the
Internet, and those who are willing to complete a survey on the Internet. Another
challenge is the experimenter’s inability to control the environmental conditions in
which Internet participants complete experiments. For example, participants may
be completing online surveys or experiments while watching TV, eating, taking care
of children, or speaking with others. Such uncontrolled conditions can inflate
random error variance, threatening the validity of statistical conclusions
(Cook & Campbell, 1979). Thus, collecting data via the Internet has its own set of
challenges that make it different from more traditional methods and samples of
data collection.
Numerous measures of computer anxiety, computer attitudes, and computer self-efficacy
have been developed through paper-and-pencil methods (e.g. Bannon, Marshall, &
Fluegal, 1985; Cambre & Cook, 1985; Dambrot, Watkins-Malek, Silling, Marshall, &
Garver, 1985; Heinssen, Glass, & Knight, 1987; Kay, 1993; Loyd & Gressard, 1984a;
Marcoulides, 1989; Murphy, Coover, & Owen, 1989; Nickell & Pinto, 1986; Richter,
Naumann, & Groeben, 2000), yet surprisingly little research exists regarding the
generalizability of such measures for administration via computer and the Internet.
2. Study 1
2.1. Method
2.1.1. Participants
Participants were members of an online standing research panel (Stanton & Weiss,
2002). A random sample of 612 members of the research panel population (entire
available population at time of study was 4100) received an e-mail that described the
study and provided a link to where the survey could be completed. Of those invited
to participate, 413 respondents provided usable survey data; 226 of these responses
were used for the current study. The sample consisted of 66 males and 159 females
(one did not report). Ages ranged from 19 to 74 (M=38.16, S.D.=13.19).
There were 179 US residents and 44 from other countries (three did not report).
Participants held a wide variety of occupations. The most frequently reported were
administrative/support (11), education/training (16), engineering/design (8), health
or safety (12), managerial (9), research (9), retail/wholesale (10), and technology
(13; e.g. Web design, computer networking). In this sample, 74% had at least some
college education. Most participants worked at least part time (n=133).
Personal information regarding computer and Internet use was also recorded.
Two hundred and twenty participants had been using the computer for at least a
year, and 219 had been using the Internet for at least a year. Computer and Internet
experience was also recorded using an activity checklist adapted from the Graphics,
Visualization, and Usability survey (2002). Participants had engaged in an average
of 10.58 (S.D.=3.34) out of 14 computer tasks and an average of 10.13 (S.D.=3.14)
out of 15 online activities.
2.1.3. Analysis
Consistent with the recommended method for testing measurement equivalence
(King & Miles, 1995), the analysis consisted of three steps. First, exploratory factor
analysis was conducted to determine whether the number of factors for each scale
measured in the current sample was consistent with those found in previous studies.
Consistency in number of factors would indicate a constant conceptual domain.
Factor analysis was conducted using principal axis factoring and direct oblimin
rotation. Number of factors was determined by parallel analysis and scree plot evi-
dence, two criteria shown to be most useful in determining the number of factors
(Zwick & Velicer, 1986). An updated method for parallel analysis was used as
recommended by Glorfeld (1995) and demonstrated by O’Connor (2000), which
reduces the tendency of the original method to overextract the number of factors.
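As an illustration of this step, Horn's parallel analysis with the percentile refinement recommended by Glorfeld (1995) can be sketched as follows. This is a minimal sketch using simulated data, not the study's code; the function and variable names are our own.

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Parallel analysis with Glorfeld's (1995) refinement: retain only
    those factors whose observed eigenvalues exceed the chosen percentile
    (rather than the mean) of eigenvalues from random data of the same
    shape, which reduces the tendency to overextract factors."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, largest first
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Eigenvalues from many random data sets of the same dimensions
    rand_eigs = np.empty((n_iter, p))
    for i in range(n_iter):
        rand = rng.standard_normal((n, p))
        rand_eigs[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    threshold = np.percentile(rand_eigs, percentile, axis=0)
    # Count leading observed eigenvalues that beat the random threshold
    keep = 0
    for obs, thr in zip(obs_eigs, threshold):
        if obs > thr:
            keep += 1
        else:
            break
    return keep
```

Applied to data simulated with a known two-factor structure, the sketch recovers two factors, mirroring the logic of the procedure described above.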
Second, confirmatory factor analysis (CFA) was conducted in order to test the
consistency of factor loadings among items for each scale. Consistency in the factor
loadings would indicate constant calibration; that is, the scale measures the same
construct to the same degree. CFA was conducted in LISREL 8.52 (Jöreskog &
Sörbom, 1998) using maximum likelihood estimation. Factor loading misspecification was
measured using chi-square as well as the Comparative Fit Index (CFI) and the root mean
square error of approximation (RMSEA).
Table 1
Means, S.D., and alpha reliabilities for past studies and the current study

Study                       Sample                                 M      S.D.   Alpha
Loyd and Gressard (1984b)   354 high school and college students   31.88  6.30   0.87
Loyd and Loyd (1985)        114 elementary school teachers         32.10  6.10   0.90
Current study               226 online panel members               39.61  6.20   0.86
2.2. Results
Results of the EFA and CFA analyses are shown in Table 2. Only the CSE showed
consistency in the number of factors retained between the current sample and
previous samples. None of the five scales showed consistency in factor loadings.
Together, these results indicated that the five scales did not have measurement
equivalence between administration methods. Therefore, it would not be meaningful
to test mean differences between the current data and data previously collected
through paper-and-pencil methods. However, a comparison of means from the current
study and those from previous studies (Table 1) at least indicated a consistent
trend. On the CACAS and CARS, the current sample appeared less anxious
towards computers. On the CSE and CCCAS, the current sample appeared
more confident with computers.
The lack of measurement equivalence and the apparent differences in means
between the current study and previous studies could have been due to a
difference in the nature of the sample, a difference between administration
methods, or both. Although the current study cannot determine the cause, the results
nevertheless indicate that these instruments are inappropriate for a sample collected
through the Internet. For this reason, it was important to develop a new measure
of computer anxiety and self-efficacy that would be appropriate for an Internet
sample.
Table 2
Number of factors retained in current sample and previous samples, and fit indices for factor loadings
3. Study 2
The goal of this study was to adapt the five scales to better distinguish between
high and low computer anxiety and self-efficacy in an online sample. Each of the 87
items, with their means and standard deviations from the Study 1 sample, was
examined. Given the current prevalence and acceptance of computers, many of the
items seemed out-of-date. Other items seemed to tap commonly held knowledge
about computers rather than a subjective rating of feelings towards, or confidence
in, using the computer. Some items also had very high (or low) means with small
standard deviations, indicating that the item might not discriminate between high
and low levels of anxiety or self-efficacy among an Internet sample. Therefore, in
addition to removing items that seemed out-of-date or common knowledge, the
remaining items needed to be carefully selected to ensure that they could adequately
discriminate among an Internet sample. With a new pool of items, a new factor
structure could then be explored and validated with a new Internet sample.
Items with a mean rating equal to or greater than the midpoint were retained; six
items were removed by this criterion. In total, 10 items were removed and 77 remained.
3.2. Method
3.2.1. Participants
Participants were members of the same online research panel (Stanton & Weiss,
2002). None of the participants were involved in Study 1. Eight hundred members of
the research panel population (entire available population at time of study was 4100)
received an e-mail that described the study and provided a link to where the survey
could be completed. Of those invited to participate, 476 respondents provided usable
survey data; 227 of these responses were used for the current study. The sample
consisted of 41 males and 186 females. Ages ranged from 18 to 72 (M=35.44,
S.D.=10.91). There were 192 US residents and 35 from other countries. The most
frequently reported occupations were accounting/financial (8), administrative/support
(27), education/training (14), retail/wholesale (16), and technology (9; e.g. Web
design, computer networking). Computer and Internet use in this sample were
comparable to those in Study 1.
3.2.2. Procedure
Participants could enter a raffle for an online gift certificate by submitting their
panel number. When the survey was completed, data were sent electronically to a
local database.
3.3. Results
Exploratory factor analysis was again used to explore the factor structure of the
items measured. Parallel analysis and scree plot evidence both indicated the same
four-factor solution. The four factors were identical to those previously identified.
Coefficient α reliability for each factor, as well as the factor loading, mean, and
standard deviation for each item, are shown in Table 3.
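For reference, coefficient α for a scale can be computed directly from an item-score matrix. The following is a generic sketch with simulated data and our own names, not the study's code:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```

Perfectly parallel items yield α = 1, and adding independent noise to each item lowers α, which is the sense in which the coefficient indexes internal consistency.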
The validity of the new self-efficacy and anxiety instruments was tested by
analyzing their relationships with indicators of use of, and comfort with, both
computers and the Internet. Table 4 shows the intercorrelations among the four
instruments as well as their correlations with these indicators. All four scales
related strongly, and in the expected direction, with years using the computer,
comfort and experience using the computer, and experience using the Internet.
Computer self-efficacy for advanced activities was significantly related to all
indicators. Table 5 shows results from simultaneous regression analyses in which
the self-efficacy and anxiety instruments were used to predict comfort and use.
Computer self-efficacy for advanced
Table 3
Scale and item statistics for the new computer anxiety and self-efficacy scales
Table 4
Correlations among computer self-efficacy and anxiety scales (α reliabilities in
parentheses on the diagonal) and computer use measures (N=227)

                                        1        2        3        4
1. CSE—beginner/general                 (0.83)
2. CSE—advanced                         0.32***  (0.85)
3. Anxiety—use                          0.49***  0.32***  (0.90)
4. Anxiety—computer-related activity    0.24***  0.10     0.21***  (0.76)
Years using computer                    0.34***  0.18***  0.28***  0.17**
Comfort with computers                  0.41***  0.42***  0.48***  0.15*
Total computer experience               0.55***  0.48***  0.36***  0.24***
Years using Internet                    0.30***  0.25***  0.27***  0.08
Time spent on Internet at home          0.12     0.21**   0.07***  0.09
Comfort with the Internet               0.31***  0.35***  0.39***  0.08
Total Internet experience               0.51***  0.49***  0.32     0.18**

* P < 0.05. ** P < 0.01. *** P < 0.001.
Table 5
Computer self-efficacy and anxiety predicting computer and Internet comfort/use (N=227)

R²    β

* P < 0.05. ** P < 0.01. *** P < 0.001.
activities and anxiety related to computer use were significant predictors of comfort
with computers and the Internet. That these scales predicted comfort with the
Internet is interesting as there is no reference to the Internet in the items themselves.
Computer self-efficacy for advanced activities was again significantly related to most
indicators. Computer self-efficacy for general/beginner activities and anxiety for
computer-related activities were not consistent predictors.
The computer self-efficacy measures were generally better predictors than the
computer anxiety measures. However, it should be noted that anxiety is not
hypothesized to be directly related to behavior/performance. Thus, the current lack
of significant findings need not indicate that these scales are not useful. Anxiety is
hypothesized, rather, to be significantly related to specific self-efficacy. A regression analysis
was therefore conducted in which the anxiety measures predicted each self-efficacy
measure. For computer self-efficacy for general/beginner activities, anxiety of
computer use (β = 0.46, P < 0.001) and anxiety of computer-related activities
(β = 0.014, P = 0.018) both accounted for a significant portion of variance
(R² = 0.26, P < 0.001). For computer self-efficacy for advanced activities, anxiety
of computer use (β = 0.31, P < 0.001), but not anxiety of computer-related
activities (β = 0.34, P > 0.60), accounted for a significant portion of variance
(R² = 0.10, P < 0.001).
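A simultaneous regression of this kind, reporting standardized weights (β) and R², can be sketched as follows. The data and names here are simulated stand-ins, not the study's:

```python
import numpy as np

def standardized_regression(X, y):
    """Fit y on the columns of X by ordinary least squares after
    z-scoring everything, returning standardized betas and R^2."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    betas, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    residual = yz - Xz @ betas
    r2 = 1 - (residual @ residual) / (yz @ yz)
    return betas, r2
```

With z-scored variables the intercept is zero, so the OLS slopes are the standardized weights of the kind reported in Table 5, and R² is the proportion of criterion variance the predictors jointly account for.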
Together, these results provide preliminary evidence for the value of some of the
instruments developed in this study. Of the four scales, self-efficacy for advanced
activities and anxiety of computer use warrant the most attention in future
research.
4. Discussion
Care must be taken to address the challenges that come with using the Internet for
data collection. One such challenge is ensuring the measurement equivalence of an
instrument developed using more traditional methods when administered through
the Internet. The current study tested the measurement equivalence of two
important sets of measures: computer self-efficacy and computer anxiety. The
instruments tested showed little equivalence when used with an Internet sample,
and four new scales were adapted.
Although the four scales showed good factor structure and reliability, some
performed better than others in predicting use of, and comfort with, computers and
the Internet. In general, the self-efficacy scales predicted several use and comfort
variables. These results are not surprising, as theoretical and empirical evidence
suggests that self-efficacy for specific activities is a powerful construct for predicting
behavior. However, self-efficacy for advanced activities was the most consistent
predictor of use and comfort. Self-efficacy for general/beginner activities may not
have been as strong a predictor because the activities referred to in the items are
such simple tasks that little consideration is given to one's confidence in
accomplishing them. For example, "I feel confident making selections from an
on-screen menu" relates to an activity that is ubiquitous in everyday life even when
one is not actually using a personal computer (e.g. using an ATM). Such items are
therefore not indicative of the more difficult computer tasks that pose a greater
challenge to computer users. Especially for a sample that participated through the
Internet, such tasks may already be very familiar.
Anxiety of computer-related activities was the poorest predictor of the four scales.
Perhaps the items for this scale refer to activities that are too far removed from
actual computer use for it to be a good measure of computer anxiety. For
example, the item "Visiting a computer store" describes an activity too far removed
from actually using a computer to have any influence on beliefs about being able to
use a computer. Anxiety of using computers was a predictor of comfort using
computers and the Internet; however, it did not predict computer use or computer
experience.
This may suggest that people feel uncomfortable using the computer yet use it
anyway. If people are anxious towards computers but continue to use them, this
raises the question of whether performance on computer-based activities suffers as
a result. Alternatively, because anxiety is not hypothesized to relate directly to
behavior/performance, this relationship should not be expected. Instead, anxiety is
expected to be related to computer self-efficacy; accordingly, anxiety of computer
use predicted both self-efficacy measures in this study.
Limitations and remaining questions of this study provide direction for future
research. Computer self-efficacy for advanced activities and anxiety of computer use
seem to be the most promising scales for future use in online research. In general,
further evidence for the validity of the scales is needed. An important step to take
next is to test whether these scales are significant predictors of performance on
computer-based activities. These scales will be most useful if they are able to ade-
quately account for the effects of self-efficacy and anxiety on performance, and serve
as statistical controls for such effects. As in Study 1, the new scales must be tested
for their generalizability to other samples and methods. The current study used an
online standing research panel. Other methods exist for collecting data on the
Internet such as snowball emailing and announcements on newsgroups, forums and
popular Websites. The measures must be tested with these other methods.
Although further work is warranted, the current study provides promising initial
evidence for the usefulness of at least two of the four scales developed. Anxiety and
specific self-efficacy are important constructs in determining behavior. In the current
study, anxiety of computer use was a significant predictor of self-efficacy for
advanced activities and was significantly correlated with several indicators of
computer and Internet use. In turn, self-efficacy for advanced activities was a
significant predictor of computer and Internet use. This is consistent with
Bandura's (1997) self-efficacy framework. Both measures can provide online
researchers with a way of
measuring and understanding the effects of these individual characteristics on other
computer-related activities. Because the scales consist of only four items each, they
can be easily incorporated into the method of any experiment. Accumulating such
information can provide a better understanding of how to make use of the Internet
as a research tool.
References
Bandura, A. (1986). Social foundations of thought and action: a social cognitive theory. Englewood Cliffs,
NJ: Prentice-Hall.
Bandura, A. (1997). Self-efficacy: the exercise of control. New York: Freeman.
Bannon, S. H., Marshall, J. C., & Fluegal, S. (1985). Cognitive and affective computer attitude scales: a
validity study. Educational and Psychological Measurement, 45, 679–681.
Buchanan, T., & Smith, J. L. (1999). Using the Internet for Psychological research: personality testing on
the World Wide Web. British Journal of Psychology, 90, 125–144.
Cambre, M. A., & Cook, D. L. (1985). Computer anxiety: definition, measurement and correlates. Journal
of Educational Computing Research, 1(1), 37–54.
Chen, G., Gully, S. M., Whiteman, J., & Kilcullen, R. N. (2000). Examination of relationships among
trait-like individual differences, state-like individual differences, and learning performance. Journal of
Applied Psychology, 85(6), 835–847.
Chua, S. L., Chen, D., & Wong, A. F. L. (1999). Computer anxiety and its correlates: a meta-analysis.
Computers in Human Behavior, 15, 609–623.
Compeau, D. R., & Higgins, C. A. (1995). Computer self-efficacy: development of a measure and initial
test. MIS Quarterly, 19, 189–211.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: design and analysis issues. Boston, MA:
Houghton Mifflin.
Dambrot, F. H., Watkins-Malek, M. A., Silling, S. M., Marshall, R. S., & Garver, J. (1985). Correlates of
sex differences in attitudes toward and involvement with computers. Journal of Vocational Behavior, 27,
71–86.
Dyck, J. L., Gee, N. R., & Smither, J. A. (1998). The changing construct of computer anxiety for younger
and older adults. Computers in Human Behavior, 14(1), 61–77.
Gist, M. E., Schwoerer, C. E., & Rosen, B. (1989). Effects of alternative training methods on self-efficacy
and performance in computer software training. Journal of Applied Psychology, 74(6), 884–891.
Glorfeld, L. W. (1995). An improvement on Horn’s parallel analysis methodology for selecting the correct
number of factors to retain. Educational & Psychological Measurement, 55, 377–393.
Graphics, Visualization, and Usability Center. (2002). 10th WWW User Survey. Available: http://
www.gvu.gatech.edu/gvu/user_surveys/survey-1998-10/.
Harrington, K. V., McElroy, J. C., & Morrow, P. C. (1990). Computer anxiety and computer-based
training: a laboratory experiment. Journal of Educational Computing Research, 6, 343–358.
Harrison, A. W., & Rainer Jr., K. (1992). An examination of the factor structures and concurrent valid-
ities for the computer attitude scale, the computer anxiety rating scale, and the computer self-efficacy
scale. Educational and Psychological Measurement, 52, 735–745.
Heinssen, R. K. Jr., Glass, C. R., & Knight, L. A. (1987). Assessing computer anxiety: development and
validation of the computer anxiety rating scale. Computers in Human Behavior, 3, 49–59.
Hu, L., & Bentler, P. M. (1998). Fit indices in covariance structure modeling: sensitivity to under-
parameterized model misspecification. Psychological Methods, 3(4), 424–453.
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conven-
tional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55.
Jackson, L. M., Voller, L., & Stuurman, J. (1985). Effects of attitude and task complexity on micro-
computer text-editing. Journal of Computer-Based Instruction, 12, 111–115.
Jöreskog, K. G., & Sörbom, D. (1998). LISREL 8.20 and PRELIS 2.20 for Windows. Chicago, IL:
Scientific Software International.
Kanfer, R., & Heggestad, E. D. (1997). Motivational traits and skills: a person-centered approach to work
motivation. Research in Organizational Behavior, 19, 1–56.
Kay, R. H. (1993). An exploration of theoretical and practical foundations for assessing attitudes toward
computers: the computer attitude measure (CAM). Computers in Human Behavior, 9, 371–386.
King, W. C. Jr., & Miles, E. W. (1995). A quasi-experimental assessment of the effect of computerizing
noncognitive paper-and-pencil measurements: a test of measurement equivalence. Journal of Applied
Psychology, 80(6), 643–651.
Labouvie, E. W. (1980). Identity versus equivalence of psychological measures and constructs. In
L. W. Poon (Ed.), Aging in the 1980s (pp. 493–502). Washington, D.C: American Psychological
Association.
Loyd, B. H., & Gressard, C. (1984a). Reliability and factorial validity of computer attitude scales.
Educational and Psychological Measurement, 44, 501–505.
Loyd, B. H., & Gressard, C. (1984b). The effects of sex, age, and computer experience on computer
attitudes. AEDS Journal, 18(2), 67–77.
Loyd, B. H., & Loyd, D. E. (1985). The reliability and validity of an instrument for the assessment of
computer attitudes. Educational and Psychological Measurement, 45, 903–908.
Malone, R. E., & Bero, L. A. (2000). Cigars, youth, and the internet link. American Journal of Public
Health, 90(5), 790–792.
Marcoulides, G. A. (1988). The relationship between computer anxiety and computer achievement.
Journal of Educational Computing Research, 4, 151–158.
Marcoulides, G. A. (1989). Measuring computer anxiety: the computer anxiety scale. Educational and
Psychological Measurement, 49, 733–739.
Murphy, C. A., Coover, D., & Owen, S. V. (1989). Development and validation of the computer self-
efficacy scale. Educational and Psychological Measurement, 49, 893–899.
Nickell, G. S., & Pinto, J. N. (1986). The computer attitude scale. Computers in Human Behavior, 2, 301–
306.
O’Connor, B. P. (2000). SPSS and SAS programs for determining the number of components using
parallel analysis and Velicer’s MAP test. Behavior Research Methods, Instruments & Computers, 32(3),
396–402.
Reips, U. (2000). The web experiment method: advantages, disadvantages, and solutions. In
M. H. Birnbaum (Ed.), Psychological Experiments on the Internet (pp. 89–117). San Diego, CA:
Academic Press.
Richter, T., Naumann, J., & Groeben, N. (2000). Attitudes toward the computer: construct validation of
an instrument with scales differentiated by content. Computers in Human Behavior, 16, 473–491.
Riordan, C. M., & Vandenberg, R. J. (1994). A central question in cross-cultural research: do employees
of different cultures interpret work-related measures in an equivalent manner? Journal of Management,
20(3), 643–671.
Stanton, J.M., & Weiss, E.M., (2002). Studyresponse.com: your online behavioral science research source.
Retrieved 13 June, 2002, Available: http://www.studyresponse.com.
Torkzadeh, G., & Angulo, I. E. (1992). The concept and correlates of computer anxiety. Behavior and
Information Technology, 11, 99–108.
Webster, J., & Martocchio, J. J. (1992). Microcomputer playfulness: development of a measure with
workplace implications. MIS Quarterly, 16(2), 201–226.
Weil, M. M., & Rosen, L. D. (1995). The psychological impact of technology from a global perspective:
a study of technological sophistication and technophobia in university students from twenty three
countries. Computers in Human Behavior, 11(1), 95–133.
Welch, N., & Krantz, J. H. (1996). The World-Wide Web as a medium for psychoacoustical demonstra-
tions and experiments: experience and results. Behavior Research Methods, Instruments, & Computers,
28, 192–196.
Wood, R., & Bandura, A. (1989). Impact of conceptions of ability on self-regulatory mechanisms and
complex decision making. Journal of Personality and Social Psychology, 56(3), 407–415.
Zwick, W. R., & Velicer, W. F. (1986). Comparison of five rules for determining the number of compo-
nents to retain. Psychological Bulletin, 99(3), 432–442.