
JOURNAL OF RESEARCH IN SCIENCE TEACHING

Research Article

Development and Validation of an Instrument to Assess Student Attitudes Toward Science Across Grades 5 Through 10

Ryan Summers¹ and Fouad Abd-El-Khalick²
¹Department of Teaching and Learning, University of North Dakota, Grand Forks, North Dakota
²School of Education, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina

Received 24 May 2016; Accepted 24 June 2017

Abstract: The aim of the present study is to enable future studies into students' attitudes toward science,
and related constructs, by developing and validating an instrument suitable for cross-sectional designs.
Following a thorough review of the literature, it was determined that many extant instruments included design
aspects that appeared to be limited in some way. The BRAINS (Behaviors, Related Attitudes, and Intentions
toward Science) Survey was designed to address core criticisms that have been leveled against many existing
instruments. BRAINS was rooted in a theoretical framework drawn from the theories of reasoned action and
planned behavior (TRAPB). Initial development involved review by an expert panel, adaptation for online
delivery, and a pilot on this platform. To establish the psychometric validity of the 59-item instrument, it was
administered to a representative, random sample of 1,291 Illinois students in grades 5 through 10.
Confirmatory factor analysis and subsequent refinement yielded a 30-item instrument with five factors and a
good statistical fit, including an RMSEA of 0.04 and a CFI of 0.95. The five factors, or constructs, of the final
instrument model reflect the underlying TRAPB framework: attitudes toward science, behavioral beliefs
about science, intentions to engage in science, normative beliefs, and control beliefs. © 2017 Wiley
Periodicals, Inc. J Res Sci Teach 9999:XXXX, 2017
Keywords: attitudes; validity/reliability; student beliefs; values

Rising above the Gathering Storm (National Research Council [NRC], 2007) emphasized
that the quality of life in the United States is largely dependent on the continued production of
knowledge and innovation in science and technology. The National Science Board (NSB, 2001)
voiced the same message, stating that "advances in science and engineering . . . determine
economic growth, quality of life, and the health and security for our nation" (p. 7). The sciences
are directly connected to the viability and sustainability of all crucial foundations of prosperous
nations. For modern societies, now heavily reliant on science and technology, to continue to
flourish, they need citizens who are literate in, and able to engage with, these domains. To meet
current and future demands in STEM, an educated, innovative, and motivated workforce is the

Contract grant sponsor: University of Illinois College of Education; Contract grant number: Hardie Dissertation
Award; Contract grant sponsor: University of Illinois Survey Research Lab; Contract grant number: Robert Ferber
Dissertation Award; Contract grant sponsor: University of Illinois Graduate College; Contract grant number:
Dissertation Completion Fellowship.
Correspondence to: R. Summers; E-mail: rgsummers@gmail.com, fouad@unc.edu
DOI 10.1002/tea.21416
Published online in Wiley Online Library (wileyonlinelibrary.com).



2 SUMMERS AND ABD-EL-KHALICK

most important resource (NRC, 2007). This stance was emboldened by the reports Prepare and
Inspire and Engage to Excel (President's Council of Advisors on Science and Technology, 2010,
2012), which call for the preparation of 1 million middle and high school teachers in the STEM
fields as well as the production of an additional 1 million graduates from STEM disciplines.
Running contrary to the ambitions outlined above is the consensus among national
leaders, policy-makers, and educators that the number of students studying science and pursuing
science-related fields is dwindling (George, 2006). This perspective is undoubtedly informed by
evidence suggesting that the majority of students fail to engage with science, technology,
engineering, and mathematics (STEM) at the post-secondary level (e.g., United States
Department of Labor, 2007), along with reports that interest among young people in pursuing
scientific careers is declining (e.g., Schreiner & Sjøberg, 2004). Of the 1.8 million American
high school students from the class of 2013 who provided responses to questions on the ACT (2014)
interest inventory, more than 150,000 indicated they had an inherent interest in STEM but did
not have plans to major in a STEM field in college or pursue a STEM-related career. Sustained
reports about the inability to attract an adequate number of women and persons from minority
backgrounds to STEM-related careers add to these concerns (Ashby, 2006).
The present situation has continued to spur questions about the factors, such as school science
education, that influence precollege students' decisions to pursue future careers in STEM. Such
questions have attracted investigators to students' attitudes toward science, with the underlying
hypothesis that attitudes help to steer school performance and career choice (e.g., Wyer, 2003).
Gibson and Chase (2002), for example, asked middle school students about their interest in taking
another science course in school, as well as about their interest in becoming scientists when they
become adults. Researchers have also examined the relationship between precollege students'
attitudes and their dispositions, interest, and/or intentions to pursue additional studies in science
(e.g., Caleon & Subramaniam, 2008; Farenga & Joyce, 1998) and/or scientific or science-related
careers (e.g., Archer, Dewitt, & Osborne, 2015; Berk et al., 2014), both in the near and the distant
future. Despite these studies, and a great many others spanning more than 40 years, our
understanding of students' attitudes and eventual behaviors related to science remains cloudy.
The Case for Cross-Sectional Study
Cross-sectional studies allow for a high number of schools to be involved, across a large
geographical area, and with the potential for including students from more diverse backgrounds
(e.g., ethnicity, socioeconomic status [SES], funding of school attended, etc.). These features are
critically important given that many extant studies into students' attitudes toward science, though
essential to the development of the field, rely on relatively small-scale data collection efforts (e.g.,
Hamerick & Harty, 1987). Cross-sectional studies involve comparing current individuals at
different stages on the variable of interest, such as age or grade level (Gall, Borg, & Gall, 1996),
and generally focus on a single point in time (Vogt & Johnson, 2011). Advantages of cross-sectional
designs include the mitigation of the notable complications associated with longitudinal studies
(e.g., cost and attrition), and the comparatively shorter turnaround for having results available.
Rationale for a New Instrument. The aim of the present study is to enable future studies into
students' attitudes toward science, and related constructs, by developing and validating an
instrument suitable for cross-sectional designs. A thorough review of the literature revealed that
while a number of robust instruments aimed at assessing precollege students' attitudes toward
science have been developed (see Osborne, Simon, & Collins, 2003; Osborne, Simon, & Tytler,
2009), none of the extant instruments were adequate for this undertaking. Many of the existing
instruments appeared unfit for cross-sectional investigations largely because of their design and

Journal of Research in Science Teaching


CROSS-SECTIONAL INSTRUMENT DEVELOPMENT 3

intended use with a target audience in a narrow age band. Several instruments were designed to
target a single grade level (e.g., Children's Science Curiosity Scale [Harty & Beall, 1984]) or a
specific age of students (e.g., Relevance of Science Education [Sjøberg & Schreiner, 2005]). Other
instruments target a grade-level range, such as middle or high school (e.g., Heikkinen, 1973;
Skinner & Barcikowski, 1973). One solution might be to revisit and re-evaluate an extant
instrument, a potential option illustrated by Owen et al. (2007) with the Simpson-Troost Attitude
Questionnaire (Simpson & Troost, 1982). However, this approach does not afford the ability to
easily address any ingrained problems with the existing instrument. Instead, the alternative was
selected: to develop a new instrument to assess student attitudes toward science, one that is
grounded in a robust theoretical framework, validated through appropriate methods, and that
demonstrates reliability according to the standards of modern psychometric evaluation.
Review of Literature
Concerns with instruments purporting to measure students' attitudes toward science
are longstanding (e.g., Gardner, 1975; Munby, 1983; Schibeci, 1984; Shrigley & Koballa,
1992), well-documented (Blalock et al., 2008), and persistent (Potvin & Hasni, 2014). Of
all the characteristics to consider when designing a new instrument, the age level of the
target student audience is of critical importance when planning for a cross-sectional
investigation. Hillman, Zeeman, Tilburg, and List (2016) outline the broad appeal of making
an instrument available for elementary, middle, and high school students, and note the
attractiveness of being able to assess students from a broad age range for research
purposes. However, as noted in earlier discussion and as illustrated in Table 1, extant
instruments that accommodate a range of grade levels are geared toward students in the
low to the middle grades, or the middle to the high school grade levels. Further, only two
instruments included in Table 1 are designed to collect responses from participants in
elementary, middle, and high school: the Wareing Attitude Toward Science Protocol
(Wareing, 1982) and the My Attitudes toward Science (MATS) instrument (Hillman et al.,
2016); both, however, fall short in other regards that will be explored in the sections that follow.
Beyond considerations of accessibility and applicability, general concerns about the
psychometric properties of instruments, including validity, are well documented and have
highlighted issues with instruments aiming to measure students' attitudes toward science and
related constructs. Ramsden (1998) offered a summary of the weaknesses in extant measures as
indicated by the research literature, which include:

(1) A lack of standardization in the wide range of instruments reported as a means of
measuring attitudes makes comparisons between studies problematic.
(2) Poor design of instruments used to gather data and of individual response items within
instruments.
(3) Failure to formulate the research with reference to theory on the construction of data
collection tools.
(4) Failure to address matters of reliability and validity appropriately.

Although much of the blame for the problems with capturing and interpreting student
attitudes has been placed on inadequate instrumentation (e.g., Gardner, 1975; Munby, 1979; Pearl,
1974), researchers have done little to improve extant instruments and advance measurement
practices in the field. This trend persists, with recent additions to the field proving
incompatible with specific research designs (e.g., cross-sectional), or inadequate in one or more of
the historically weak areas listed above. Blalock et al. (2008) documented the tendency for



Table 1
Summary overview of select attitude instruments

Attitudes to school science and science instrument (Bennett & Hogarth, 2009)
  Focus(a): attitudes toward science, school science, the scientific enterprise, and science as leisure
  Audience: middle and secondary students (ages 11, 14, and 16)
  Reference to theory(b): Yes
  Format: 25 items (classified as Level 1, or L1, responses) on a 3-point scale (agree/neutral/disagree). L1 responses to disposition statements are followed with a Level 2 (L2) list to probe for explanations. Students select as many L2 responses as they feel apply.
  Reliability: reported 85% agreement between free and fixed responses
  Sample questions: "I would like a job involving science. I agree because . . ." (L1); "(a) I enjoy science at school" (L2); "(b) . . . they are generally well paid" (L2); "(c) . . . science makes the world a better place to live in" (L2); "(d) there are good jobs you can do with science" (L2); "(x) . . . another reason – please say what" (L2)

Test of Science Related Attitudes (TOSRA) (Fraser, 1978)
  Focus(a): perceived utility of science, attitudes toward science as a school subject, pursuing science, and science as leisure
  Audience: middle and secondary students (grades 7–12)
  Reference to theory(b): Yes
  Format: 70 items on a 5-point Likert scale
  Reliability: α 0.78; test-retest 0.82
  Sample questions: "Scientific studies are doing more harm than good." "Science lessons bore me." "I dislike reading newspaper articles about science."

Attitude Toward Science in School Assessment (ATSSA) (Germann, 1988)
  Focus(a): attitudes toward science as a school subject
  Audience: middle school students (grades 7 and 8)
  Reference to theory(b): Yes
  Format: 14 items on a 5-point Likert scale
  Reliability: α 0.94
  Sample questions: "Science is fun." "I would like to learn more about science." "Science makes me feel uncomfortable, restless, irritable, and impatient."

Science Opinion Survey (SOS) (Gibson & Chase, 2002)
  Focus(a): attitudes toward school science and scientists
  Audience: middle school students (grades 6–8)
  Reference to theory(b): No
  Format: 30 items on a 5-point Likert scale
  Reliability: not reported
  Sample questions: "I would like to be a scientist when I leave school." "Science lessons are fun."

Attitudes toward STEM (Guzey et al., 2014)
  Focus(a): attitudes toward STEM, STEM integration
  Audience: elementary and middle school students (grades 4–6)
  Reference to theory(b): No
  Format: 28 items on a 5-point scale
  Reliability: α 0.77–0.87
  Sample questions: "I would like to have a job that involves science, mathematics, engineering, or technology." "To learn engineering, I have to be good at science and mathematics."

Views about Science Survey (VASS) (Halloun, 1997, 2001)
  Focus(a): science self-concept, attitudes toward science learning, nature of scientific knowledge, and perceived usefulness of science
  Audience: middle through undergraduate students (grades 8–16)
  Reference to theory(b): Yes
  Format: 30 items on a 5-point scale toward one of two statements. Five overlapping versions available for different branches of science.
  Reliability: assessed indirectly
  Sample questions: "I study physics: (a) to satisfy course requirements (b) to learn useful knowledge. 1. Mostly (a), rarely (b); 2. More (a) than (b); 3. Equally (a) & (b); 4. More (b) than (a); 5. Mostly (b), rarely (a)"

Children's Science Curiosity Scale (CSCS) (Harty & Beall, 1984)
  Focus(a): attitudes toward science-related activities and doing science
  Audience: elementary school students (grade 5)
  Reference to theory(b): Yes
  Format: 30 items on a 5-point Likert scale using emoticons
  Reliability: α 0.83
  Sample questions: "I like to watch television programs about science." "It is boring to read about different kinds of animals."

My Attitudes Toward Science (MATS) (Hillman et al., 2016)
  Focus(a): attitudes toward school science, perceived usefulness of science, pursuing science, and perception of scientists
  Audience: elementary through high school students (grades 3–12)
  Reference to theory(b): Yes
  Format: 40 items on a 5-point Likert scale using emoticons
  Reliability: α 0.54–0.87
  Sample questions: "I feel upset when someone talks to me about being in a science class." "The things scientists discover through their work do not affect other people in my life."

Science Attitude Scale (SAS) (Misiti et al., 1991)
  Focus(a): attitudes toward school science learning and science-related activities
  Audience: elementary and middle school students (grades 5–8)
  Reference to theory(b): Yes
  Format: 23 items on a 5-point Likert scale
  Reliability: α 0.92–0.96
  Sample questions: "I would not think of discussing science with my friends outside of class." "I hate keeping records of experiments in a lab notebook." "Learning science facts is a drag."

Science Attitude Inventory II (SAI II) (Moore & Hill Foy, 1997)
  Focus(a): attitudes toward science and toward scientists, perceived usefulness of science
  Audience: secondary school students
  Reference to theory(b): Yes
  Format: 40 items scored on a 5-point Likert scale
  Reliability: α 0.78; split-half reliability 0.81
  Sample questions: "Every citizen should understand science." "Scientists do not have enough time for their families or for fun."

Instrument to assess children's attitudes toward science (Pell & Jarvis, 2001)
  Focus(a): attitude toward school and science, perceived usefulness of science
  Audience: elementary and middle school students (grades 1–6)
  Reference to theory(b): No
  Format: 43 items scored on a 5-point Likert scale using smiley-face emoticons. Only positively worded items were included.
  Reliability: α 0.65–0.81
  Sample questions: "How do you feel about . . . Doing science experiments." "Watching the teacher do an experiment."

Changes in Attitude About the Relevance of Science (CARS) (Siegel & Ranney, 2003)
  Focus(a): attitudes toward school science, perceived usefulness of science
  Audience: middle and high school students
  Reference to theory(b): Yes
  Format: 20 items on a 5-point Likert scale. Three versions of the instrument available; 8 questions overlap.
  Reliability: α 0.80 for each test; total reliability 0.91
  Sample questions: "Learning science will have an effect on the way I vote in elections." "My parents encourage me to continue with science."

Simpson-Troost Attitude Questionnaire Revised (STAQ-R) (Owen et al., 2007)
  Focus(a): attitudes toward school science, perceived attitudes of family, and perceived attitudes of peers
  Audience: middle school students (grades 6–8)
  Reference to theory(b): Yes
  Format: 22 items scored on a 5-point Likert scale in the revised version
  Reliability: α 0.85
  Sample questions: "I enjoy science courses." "Most of my friends do well in science." "My mother likes science."

Relevance of Science Education (ROSE) student questionnaire (Sjøberg & Schreiner, 2005)
  Focus(a): attitudes toward science and specific topics and activities; science self-concept, perceived usefulness of science
  Audience: students 15 years of age
  Reference to theory(b): Yes
  Format: 245 items with varying scales (Likert, agree/disagree, interested/not interested, often/never)
  Reliability: not reported
  Sample questions: "I would like to learn about . . . Stars, planets, and the universe." "Science and technology are important for society."

Students' Motivation Toward Science Learning Questionnaire (SMTSL) (Tuan et al., 2005)
  Focus(a): attitudes toward science, science self-concept, science learning; perceived usefulness of science
  Audience: grade 11 students
  Reference to theory(b): Yes
  Format: 35 items scored on a 5-point Likert scale
  Reliability: α ranged 0.70–0.89 for each scale; total reliability 0.89
  Sample questions: "No matter how much effort I put in, I cannot learn science." "I am willing to participate in this science course because the content is exciting and changeable."

Wareing Attitudes Toward Science Protocol (WASP) (Wareing, 1982)
  Focus(a): attitudes toward school science, science self-concept, perceived usefulness of science
  Audience: elementary, middle, and high school students (grades 4–12)
  Reference to theory(b): No
  Format: 42 items scored on a 5-point Likert scale
  Reliability: α 0.91–0.94
  Sample questions: "I am sick of the hows and whys in science." "Students are like robots in science classes."

Modified Attitudes Toward Science Inventory (mATSI) (Weinburgh & Steele, 2000)
  Focus(a): attitudes toward science, anxiety toward science, perceived usefulness of science, science self-concept, perception of science teacher
  Audience: grade 5 students
  Reference to theory(b): Yes
  Format: 25 items scored on a 5-point Likert scale
  Reliability: α 0.70
  Sample questions: "I feel at ease in science class." "No matter how hard I try, I cannot understand science." "Science is something I enjoy very much."

(a) These identifiers do not necessarily reflect specific constructs or categories noted by the author(s). These terms were assigned thematically, based on the focus of multiple items.
(b) This column indicates whether a theory was referenced in the design of the instrument. A reference to theory, nonetheless, does not reflect the extent of that reference (ranging from operational definitions of key terms to a full model), nor does it comment on the viability of the reference made by the author(s).



researchers to haphazardly design their own measures for various pursuits, which undoubtedly
contributes to the persistence of many problems. The following sections expand on the
weaknesses described by Ramsden and illustrate these concerns as they relate to the instruments
presented in Table 1.

Lack of Standardization in Attitude Measures


Researchers (e.g., Aiken & Aiken, 1969; Osborne et al., 2003) have expressed concern over
the absence of a clear definition for the construct of attitudes toward science. Blosser (1984) noted
that attitudes toward science can be used to reference scientific attitudes and interests, as well as
attitudes toward scientists, scientific careers, methods of teaching science, science curriculum, or
the subject of science in the classroom. While attitudes toward science has largely been
distinguished from some of these other constructs (scientific attitudes, for example, which refer
to the particular approaches for solving problems, assessing ideas and information, and/or making
decisions [Germann, 1988]), the construct remains somewhat nebulous and may be defined or
framed in different ways depending on the purpose and perspective of the researcher(s) involved.
Unsurprisingly, several of these meanings are reflected in various measures of attitudes toward
science. This variation in ascribed meaning and associated measures has been noted in reviews of
the field (e.g., Haladyna & Shaughnessy, 1982; Schibeci, 1984). The concern with such
widespread meanings is that the applicability and comparability of research outcomes can be
restricted.

Poor Design of Instruments


Critiques of extant instruments have raised issues with the item creation and/or selection
process. In general, a cursory review of extant instruments reveals that numerous items are poorly
worded for the selected response format (i.e., a Likert scale). As examples, compound or double-
barreled items ("Science makes me feel uncomfortable, restless, irritable, and impatient"
[Germann, 1988]) and items that incorporate confusing terms (e.g., "Students are like robots in
science classes" [Wareing, 1982]) are unreliable. An additional item-level consideration that can
limit accessibility by respondents is the presence of advanced science content or discipline-
specific terminology. To explain, most students could honestly respond to general prompts, such as
"I enjoy science courses" (Owen et al., 2007). Occasionally, as Table 1 illustrates, extant
instruments include items, such as the prompt "I study physics . . ." in the Views about Science
Survey, that may be foreign to young students. In these cases students might not know enough to
respond to prompts about certain content, like physics, let alone articulate how the laws of
physics portray the real world (Halloun, 1997, 2001). If an instrument did ask questions about
advanced topics, care would need to be taken to ensure that respondents had the prerequisite
knowledge to respond appropriately or, if not, a way to report their uncertainty.
Failure to Reference Theory in Instrument Construction
Critiques of existing instruments have drawn attention to the necessity of clear conceptualization
and a robust, well-articulated theoretical framework (Messick, 1989). Pearl (1974) warned
that the validity of a given measure may be highly suspect without a corresponding definition,
explanation, or conceptualization. The Wareing Attitudes toward Science Protocol (WASP)
embodies this concern (Wareing, 1982), boasting very high reliability estimates (0.91–0.94), an
uncommon feature as noted below, but presenting this statistic in the absence of any
degree of theoretical framework. Even more recent instruments omit these details
(e.g., Gibson & Chase, 2002; Guzey, Harwell, & Moore, 2014). As a minimum, Vaske (2008)

contends that survey methodologies must conceptualize and operationalize the involved variables,
which requires identifying the meaning of the concepts and specifying how the variables will be
measured. The Attitudes toward STEM instrument developed by Guzey et al. (2014) exemplifies
these concerns, with no references to operational definitions nor any connections to proposed
measurement goals. With a final factor structure that is not logically consistent (i.e., attitudes
toward science and engineering as one sub-scale and attitudes toward mathematics in another),
and without some degree of theoretical framing, it would be difficult to use such an instrument
with confidence.
Concerns of Consistency, Reliability, and Validity
Researchers (e.g., Krynowsky, 1988; Munby, 1983; Pearl, 1974; Ramsden, 1998) have been
very critical of some extant measures of student attitudes and interests in science for lacking sound
evidence of validity and reliability. Munby (1979) criticized the validity and credibility of
instruments seeking to quantify affective outcomes of science education, claiming that existing
instruments "do little to enlist our confidence in their use" (p. 273). Gardner (1975) identified
internal consistency and uni-dimensionality as key statistical criteria for instrument development.
Lovelace and Brickman (2013) elaborate on the importance of internal consistency, and explain
that providing an estimate for each instrument sub-scale increases confidence that items, on their
respective scale, measure the same underlying construct. In their review, Osborne et al. (2009)
note that efforts to establish instrument validity and reliability have been poor in multiple cases.
Modern attitude instruments have continued this trend by demonstrating sub-standard
reliability (e.g., Hillman et al., 2016). In fact, few instruments purporting to measure students'
attitudes toward science were found to demonstrate exceptional internal consistency, reliability,
and/or external validity in the comprehensive review conducted by Blalock et al. (2008). Given
that many of the instruments that are still the basis for current research were developed in the
1970s and 1980s (e.g., Fraser, 1978; Germann, 1988; Moore & Sutman, 1970; Simpson & Troost,
1982), the aforementioned concerns carry even greater weight, and further stress the need to
update psychometric tools in the field.
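The internal-consistency criterion discussed in this section is typically operationalized as Cronbach's alpha, the coefficient reported for most instruments in Table 1. The computation is simple enough to sketch directly; the following is a minimal illustration in Python, in which the function name and the score data are hypothetical rather than drawn from any instrument reviewed here:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for one sub-scale.

    items: one list per item (column), each holding every
    respondent's score on that item.
    """
    k = len(items)                           # number of items
    n = len(items[0])                        # number of respondents
    item_var_sum = sum(pvariance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    total_var = pvariance(totals)            # variance of scale totals
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Three hypothetical 5-point Likert items, four respondents each.
scores = [[1, 2, 3, 4], [2, 3, 4, 5], [1, 3, 2, 4]]
print(round(cronbach_alpha(scores), 2))  # prints 0.95
```

Reporting such an estimate per sub-scale, as Lovelace and Brickman (2013) advise, guards against a single pooled coefficient masking a weak construct.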
Instrument Development
To address the pitfalls of extant instruments summarized by Ramsden (1998), Kind, Jones,
and Barmby (2007) put forward the following guidelines for the construction of attitude measures:

(1) Clear descriptions need to be put forward for the constructs that one wishes to measure.
(2) Reliability of the measure needs to be demonstrated by confirming the internal
consistency of the construct (e.g., by use of Cronbach's alpha) and by confirming
unidimensionality (e.g., by using factor analysis).
(3) Validity needs to be demonstrated by the use of more than one method, including the use
of psychometric techniques.
(4) Care needs to be taken when separate constructs are combined to form one scale, with
justification that these constructs are closely related.

The following section addresses the first guideline relating to the constructs, and eventually
the items, considered for inclusion in the instrument. The second and third guidelines are
discussed in later sections, alongside instrument validation and data analysis, and considerations
related to the final guideline are reviewed in the final discussion.
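Guidelines (2) and (3) are typically satisfied with standard psychometric statistics; the model fit of a confirmatory factor analysis, for instance, is commonly summarized with indices such as the RMSEA. As a rough illustration of the conventional RMSEA point estimate, the following sketch uses hypothetical chi-square and degrees-of-freedom figures, not values from this study:

```python
import math

def rmsea(chi2, df, n):
    """RMSEA point estimate from a CFA chi-square statistic.

    chi2: model chi-square; df: model degrees of freedom; n: sample size.
    Values near or below 0.05 are conventionally read as close fit.
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical model: chi-square 600 on 395 degrees of freedom,
# for a sample of 1,291 respondents (the study's sample size).
print(round(rmsea(600.0, 395, 1291), 3))
```

A chi-square at or below its degrees of freedom yields an RMSEA of zero, which is why the max(…, 0) guard appears in the formula.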




The Construct of Attitudes Toward Science


According to Simpson, Koballa, Oliver, and Crawley (1994), attitude entails affective,
cognitive, and behavioral components. Many researchers, initially, seemed to have related
attitude in this sense to preference. Bem (1970) wrote of the preferential attribute of
attitudes, that they represent our "likes and dislikes" (p. 14). Koballa and Crawley (1985)
further explored this quality and connected it to science by suggesting that attitudes
toward science refer to "whether a person likes or dislikes science, or has a positive or
negative feeling about science" (p. 223). However, as Baars (1986) narrates, developments
in psychology led to a redefinition of the attitude construct. Research
paradigms in social and educational psychology that had long influenced the study of
students' attitudes toward science shifted from behaviorism to a more cognitive orientation
(Richardson, 1996). This change in theoretical perspective, Koballa and Glynn (2007)
explain, divided the construct of attitude away from cognition. Attitudes, shifting to align
more with affect, were consequently less of a concern to researchers, and were instead replaced by
the construct of beliefs, which were thought to explain the actions, or behaviors, of
learners. Students' attitudes toward science remained a consideration, but Koballa (1988a)
emphasized that the primary goal of measuring students' attitudes toward science is to
predict student behaviors. Toward making such predictions, researchers and educators have
been drawn to social psychological models (Crawley & Koballa, 1994).
As a result of the aforementioned transition in the field, definitions of attitudes toward science
became intimately connected with observable outcomes, notably behavior. Ramsden (1998)
revisits Shaw and Wright's (1968) definition of attitude, highlighting the inclusion of a behavioral
component:

Attitude is best viewed as a set of affective reactions toward the attitude object, derived from
concepts of beliefs that the individual has concerning the object, and predisposing the
individual to behave in a certain manner toward the object (p. 13).

In later years, Icek Ajzen and Martin Fishbein would help to better situate the attitude
construct by clarifying the somewhat ambiguous causal chain that Shaw and Wright
alluded to in their definition. Instead, Fishbein and Ajzen (1975) described attitude as a
learned predisposition to respond in a consistently favorable or unfavorable manner toward
a person, place, thing or event (i.e., the attitude object). Compartmentalizing attitude, for
Ajzen and Fishbein, was necessary in order to make the transition from definition to
theory as will become apparent in the next section. Still, researchers who have continued
to focus heavily on attitudes have amalgamated past definitions with modern perspectives
that reflect the contributions of Ajzen and Fishbein. This is well illustrated by Oppenheim
(1992) who offers the following:

Attitudes . . . [are] . . . a state of readiness or predisposition to respond in a certain manner
when confronted with certain stimuli . . . attitudes are reinforced by beliefs (cognitive
component), often attract strong feelings (emotional component) which may lead to
particular behavioral intents (action-tendency component) (pp. 74–75).

By championing student behavior as the outcome variable, and compartmentalizing attitude
as a contributing factor, the discussion now turns to how these factors are related and what other
factors are involved.




The Theories of Reasoned Action and Planned Behavior


Researchers and educators were attracted to the work of Ajzen and Fishbein, who claimed that
affective, cognitive, and behavioral aspects of attitude interact in a causal and unidirectional
manner. The theory of reasoned action, proposed by Fishbein and Ajzen (1975), offers a unifying
and systematic conceptual framework, which can be used to explore a range of human behaviors.
Fishbein and Ajzen (2010) explain that people's attitudes, subjective norms, and perceptions of
control follow reasonably and consistently from their beliefs, no matter how the beliefs were
formed, and that in this way they influence intentions and behavior. The theory "was born largely
out of frustration with traditional attitude–behavior research, much of which found weak
correlations between attitude measures and performance of volitional behaviors" (Hale,
Householder, & Greene, 2002, p. 259). The theory of reasoned action, and the complementing
theory of planned behavior discussed below, have been used successfully to predict and explain a wide
range of behaviors and intentions with individuals of varied ages and backgrounds. Such inquiries
have examined adolescent and young adult drinking habits, substance use, HIV/STD-prevention
behaviors and use of contraceptives, and use of sunscreen (Albarracin, Johnson, Fishbein, &
Muellerleile, 2001; Bandawe & Foster, 1996; Morrison, Spencer, & Gillmore, 1998; Steen, Peay,
& Owen, 1998; Trafimow, 1996). Subsequently, many published studies report on effective
behavior change interventions developed through an understanding of related constructs as
identified by the theories of reasoned action and planned behavior (e.g., Hardeman et al., 2002;
Head & Noar, 2014; Jemmott & Jemmott, 2000; Weber, Martin, & Corrigan, 2007).
Butler (1999) contended that the theory of reasoned action was a natural fit in science
education because many of the desired student outcomes, such as deciding to take a high-level
science course or pursuing a science-related career, represent specific behaviors. According to
the theory of reasoned action, a person's intention to perform a given behavior, rather than their
attitude toward the behavior, is more closely linked to the actual behavioral performance (Fishbein
& Ajzen, 1975). For that reason, this theory focuses on the distinction between attitudes toward
some object (e.g., person, place, thing, or event) and attitudes toward some specific action to be
performed on that object (Osborne et al., 2003). In this context, students' attitude toward doing
science is thought to be more predictive of their behavior than their overall attitude toward
science. Shrigley et al. (1988) suggested, in their review of the literature, that this relationship
became apparent from inconsistencies among early studies between reported attitudes and
subsequent behaviors. Osborne et al. (2003) articulate that preferences resulting from
attitudes will not necessarily be related to the behaviors a student ultimately exhibits:

[B]ehavior may be influenced by the fact that attitudes other than the ones under
consideration may be more strongly held; motivation to behave in another way may be
stronger than the motivation associated with the expressed attitude; or, alternatively, the
anticipated consequences of a specific behavior may modify that behavior so that it is
inconsistent with the attitude held (Osborne et al., 2003, p. 1054).

As an example, consider that a student may have a positive attitude toward science, but that
student may avoid publicly demonstrating that preference around peers whom he/she perceives
might look down on him/her for it. In this case, the student likely holds a positive attitude, but
he/she might be quite reluctant to engage in certain science-related endeavors for fear of being
judged or shunned by friends. Even if the student in this example did not have a positive
attitude, he or she might be compelled to perform the behavior in question given a high
motivation to comply (e.g., the behavior was important for future success) or the perception that some
greater advantage could result from engagement (e.g., the behavior improves the likelihood
12 SUMMERS AND ABD-EL-KHALICK

of winning a scholarship). In review, the theory suggests that an individual's behavior is determined by
their intention, and that intention is a joint product of the attitude toward the behavior and the subjective
norm (i.e., beliefs about how other people would regard their performance of the behavior). The
relative importance of the individual's attitude toward performing the behavior (including
outcome evaluations) and of their personal beliefs, which include their normative beliefs, is
weighed in the expectancy-value theorem. Put simply, the more favorable the attitude and the
subjective norm, and the greater the perceived control, the stronger the person's intention to
perform the behavior in question.
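The expectancy-value relations just described are commonly written as follows (a standard textbook rendering of Fishbein and Ajzen's model, not the authors' own notation; the weights are estimated empirically):

```latex
BI \approx w_1 A_B + w_2 SN, \qquad
A_B = \sum_i b_i e_i, \qquad
SN = \sum_j n_j m_j
```

where $BI$ is behavioral intention, $A_B$ is the attitude toward the behavior (behavioral beliefs $b_i$ weighted by outcome evaluations $e_i$), and $SN$ is the subjective norm (normative beliefs $n_j$ weighted by motivation to comply $m_j$).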
Ajzen (1985) extended the theory of reasoned action by introducing the theory of planned
behavior in an effort to address concerns of limited applicability (see Liska, 1984). This extension
takes into consideration that internal factors, such as a person's skills or abilities, as well as external
factors, such as the cooperation of others or a lack of resources, may influence an individual's behavior.
Another notable contribution of this theory is the concept of perceived behavioral
control, which acknowledges that a person may believe they do not have full control over their own
behavior. As a result, this theory introduces more variables that can influence students' intention to
perform a given behavior independent of their attitude toward that behavior (Crawley & Koballa,
1994). The role of internal factors has been explored further in more recent entries, such as the
overlap of self-efficacy onto the perceived behavioral control construct (Fishbein & Ajzen, 2010).
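In the extended model, a third determinant enters the intention expression (again a standard rendering under the assumptions above, not the authors' notation):

```latex
BI \approx w_1 A_B + w_2 SN + w_3 PBC, \qquad
PBC = \sum_k c_k p_k
```

where $PBC$ is perceived behavioral control, formed from control beliefs $c_k$ weighted by the perceived power $p_k$ of each facilitating or inhibiting factor.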
Overall, Ajzen's theory of planned behavior offers a framework to predict and understand
science-related behaviors, and allows for the construction of instruments to measure the variables
that guide science-related behavior. Figure 1 outlines the elements of the theories of reasoned action and planned
behavior (TRAPB) and the associated causal model (adapted from Ajzen & Fishbein,
2005). Ajzen and Fishbein (2005, 2010) identified a host of background factors that impact
behavioral, normative, and control beliefs. These factors range from the individual (personality,
intelligence, experience, etc.) to the social (education, gender, income, culture, etc.), and include
available information (knowledge, media, etc.). Crawley and Coe (1990), Koballa (1988b), and
Oliver and Simpson (1988) all contend that support from peers and positive attitudes toward
enrolling in a course are strong determinants of students' decisions to pursue science courses. The
convergent findings of these studies suggest that the theory has at least partial validity. Some
researchers question aspects of this model, perhaps rightfully so considering the limited number
of empirical investigations published in the literature, but it has a committed following and work
has been conducted in recent years to further develop the model (e.g., Crawley & Koballa, 1994).

Figure 1. Schematic representation of the reasoned action model (Adapted from Fishbein & Ajzen, 2010, p. 22).


At present, it remains at the forefront of competing models for shaping attitude research in
science education (Osborne et al., 2003).
Other Theoretical Perspectives
There is a growing body of work relating to students' attitudes toward science that is grounded
in the theoretical construct of identity, which provides an analytic lens for the construction of
explanatory hypotheses for students' choices (Osborne et al., 2009). The first example, drawing
from the works of Etienne Wenger-Trayner (Lave & Wenger, 1991; Wenger, 1998), frames learning as
taking place through everyday social interactions within communities of practice, such as those
found at school, home, or work. These situated learning experiences, whereby participants interact
and learn together, shape an individual's identity. Within a situated learning framework,
Aschbacher, Li, and Roth (2010) discuss how students' science identity is informed by their lived
experiences and social interactions at home, in school, and in the larger world. It is based on how
students view themselves and believe others view them as they participate in science-related
endeavors. The authors note that students' science identity likely changes and evolves over time, as
they are likely to participate in multiple social communities where they must negotiate their
identities back and forth along the rules and values set up by these communities (Furman &
Calabrese Barton, 2006; Lave & Wenger, 1991).
Nieswandt (2005) highlights that the roles of internal factors (e.g., confidence), as well as
several external factors (e.g., resources, cooperation of others), have been sorely overlooked in
the large body of research rooted in Fishbein and Ajzen's TRAPB. Nieswandt makes the case that
these omissions are very relevant to science learning, especially with regard to students' self-
concept and motivation. It is important to recognize that multiple early explanations of students'
attitudes toward science reference internal factors, including identity (Gardner, 1975) or self-
concept, and claim a profound influence on students' decisions to pursue science-related careers
(Mayberry, 1998). For example, Shrigley et al. (1988) advocated the inclusion of self-
perception1 as a component of the attitude construct. Additionally, Bloom (1976) predicted that
an attitude complex, including affective variables and subject-related self-concept, would account
for up to 25% of variability in students' achievement scores. Based on this prediction, Speering
and Rennie (1996) proposed a model for attitudes toward science incorporating students'
perceptions of past performance in science, expected future performance in science, perceived
usefulness of science, and enjoyment of science.
Regarding past concerns about the absence of identity as a variable, Fishbein and Ajzen
(2010) acknowledge that self-identity can influence different constructs in the TRAPB. In the past,
identity had been addressed only as a potential mediating variable in specific circumstances (e.g.,
Terry & Hogg, 1996), such as an influence on a person's perceived norm as affected by their
affiliation with a specific group. More recent studies, the authors note, add self-identity to the
TRAPB model as having a direct influence on intentions. Effective measures of self-identity have
helped to explain additional variance in the prediction of intentions (e.g., Rise, Sheeran, & Skalle,
2006), but in many cases these contributions have been small. Moreover, Fishbein and Ajzen
continue to portray self-identity as a difficult-to-assess dimension. Because some aspects of
identity and group identification overlap with the three basic antecedents of intentions as depicted
in the TRAPB (i.e., attitudes, perceived norms, and perceived control), it may be difficult to
disentangle the contribution of identity to a student's intention, or subsequent behavior, from
constructs that have been empirically shown to be more dominant.
Previous Use of the TRAPB in Science Education Research. The majority of science
education researchers employing the TRAPB (e.g., Crawley & Black, 1992; Crawley & Coe, 1990;
Crawley & Koballa, 1992) have attempted to understand students' decisions to engage with
science by focusing on factors that are believed to contribute to their intention to pursue
elective courses in science. This is illustrated by early research efforts with the TRAPB that
attempted to gauge students' intentions based on the relative strength of the determinants.
Koballa (1988b) examined eighth-grade female students' intentions to enroll in at least one
elective high school physical science course. Using multiple regression analyses on behavioral
intention, Koballa concluded that attitude toward the behavior carried more weight than
subjective norm. Crawley and Coe (1990) furthered this line of research by exploring whether
eighth-grade students would take science in ninth grade if it were considered an elective
course. As a result of this study, the authors concluded that the relative contributions of the attitude
and subjective norm components to the prediction of intention to enroll in a science course in
ninth grade vary depending on students' ability and individual characteristics (i.e., gender and
ethnicity). Crawley and Koballa (1992) expanded on this avenue of research by examining
determinants that influenced a sample of tenth-grade students' decisions to enroll in an elective
high school chemistry course. In this study, a sub-sample of students was asked to list the
advantages and disadvantages of enrolling in chemistry, persons who would disapprove of
chemistry enrollment, and factors that facilitate or inhibit enrolling in chemistry. These tasks,
respectively, represent behavioral, normative, and control beliefs, which are key components of
the TRAPB model. Following analysis, the student responses collected were used as an empirical
basis for a questionnaire, which was then administered to the sample.
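The determinant-weight comparisons reported in these studies amount to regressing a measured intention score on attitude and subjective-norm scores and comparing the coefficients. A minimal sketch with simulated data (variable names and weights are invented for illustration; this is not the original analyses or data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated standardized determinant scores: attitude toward the behavior (A)
# and subjective norm (SN). Purely illustrative, not real survey data.
A = rng.normal(size=n)
SN = rng.normal(size=n)

# Simulated behavioral intention, with attitude weighted more heavily than
# subjective norm (mirroring the pattern Koballa, 1988b, reported).
BI = 0.6 * A + 0.3 * SN + rng.normal(scale=0.5, size=n)

# Ordinary least squares; design-matrix columns: intercept, A, SN.
X = np.column_stack([np.ones(n), A, SN])
beta, *_ = np.linalg.lstsq(X, BI, rcond=None)

# beta[1] and beta[2] recover the relative weights of the two determinants.
print(beta)
```

Because the predictors here are standardized, comparing beta[1] with beta[2] directly mirrors the relative-weight comparisons described above.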

TRAPB as an Underlying Theoretical Framework


As illustrated by the preceding review, some researchers have employed models based on the
theory of reasoned action (Ajzen & Fishbein, 1980) to explore students' decisions to engage with
science. The development of the Behaviors, Related Attitudes, and Intentions toward Science
(BRAINS) Survey was guided by the most recent revision of the TRAPB (Ajzen & Fishbein,
2005). The major elements of the TRAPB are defined in Table 2. Note that the development of the
BRAINS, together with the TRAPB elements included, drew from the development of a similar
instrument designed for use outside the United States (Abd-El-Khalick, Summers, Said, Wang, &
Culbertson, 2015).
Figure 2 presents the TRAPB elements associated with the design process of the instrument in
this study. It should be noted that behaviors and actual behavioral controls (Figure 2, shaded
boxes) do not lend themselves to measurement through self-report paper-and-pencil instruments
(compared, for instance, to direct observation). These two TRAPB elements, thus, were not
addressed in the development of the BRAINS. The BRAINS items were carefully designed to
align with the TRAPB elements and model, as well as incorporate known determinants of student
attitudes and behavioral intentions. Table 2 defines the BRAINS constructs and dimensions
(Figure 2, un-shaded boxes), which were mapped onto major elements of the TRAPB by drawing
on Fishbein and Ajzen's (2005, 2010) model. These dimensions and constructs were selected and
defined based on our and other researchers' systematic reviews of the empirical and conceptual/
theoretical research literature on student attitudes toward science (e.g., Osborne et al., 2003,
2009).
Table 2
BRAINS domains and constructs as related to elements of the theories of reasoned action and planned behavior (TRAPB)^a

Intention. TRAPB definition (from Ajzen & Fishbein, 2005, p. 193): antecedent of actual engagement with the target behavior. Related BRAINS domain or construct: intention to pursue, or interest in pursuing, science. Related BRAINS sub-domains or sub-constructs^b: additional or future studies in science; a career in science. Illustrative BRAINS items^b: "I will study science if I get into a university"; "I will become a scientist in the future."

Attitude toward the behavior. TRAPB definition: a learned disposition to respond in a consistently favorable or unfavorable manner toward an attitude object [in this case, science].^c Related BRAINS domain or construct: attitude toward different facets of science as it relates to student lives. Sub-domains or sub-constructs: attitude toward science; attitude toward school science; attitude toward science as leisure. Illustrative items: "I do not like science"; "I really enjoy science lessons."

Behavioral beliefs. TRAPB definition: beliefs about the likely consequences of a behavior . . . outcome expectancies . . . or costs and benefits . . . these beliefs and their associated evaluations are assumed to produce an overall positive or negative evaluation or attitude toward performing the behavior in question. Related BRAINS domain or construct: beliefs about the consequences associated with engagement with science, and beliefs about the benefits associated with science. Sub-domains or sub-constructs: beliefs about consequences associated with becoming a scientist; beliefs about consequences associated with science learning; beliefs about the relevance and utility of science (i) at the societal level and (ii) at the personal level. Illustrative items: "Scientists do not have enough time for fun"; "I look forward to science activities in class"; "We live in a better world because of science"; "Learning science is not important for my future success."

Control beliefs and perceived behavioral control. TRAPB definition: beliefs concerning the presence or absence of factors that make performance of a behavior easier or more difficult . . . lead to the perception that one has or does not have the capacity to carry out the behavior, referred to . . . as self-efficacy and personal agency . . . or perceived behavioral control. Related BRAINS domain or construct: perceived self-efficacy and personal agency toward science learning. Sub-domains or sub-constructs: perceived ability toward learning science; perceived efficacy of effort toward learning science. Illustrative items: "Science is easy for me"; "I cannot understand science even if I try hard."

Normative beliefs and subjective norm. TRAPB definition: beliefs that deal with the likely approval or disapproval of a behavior by friends, family members . . . and, in their totality . . . lead to perceived social pressure or subjective norm to engage or not engage in the behavior. Related BRAINS domain or construct: perceived approval or disapproval toward engagement with science. Sub-domains or sub-constructs: perceived approval or disapproval by family members; perceived approval or disapproval by friends. Illustrative items: "My family encourages me to have a science-related career"; "My friends do well in science."

Adapted from Abd-El-Khalick et al. (2015).
a Note that the two TRAPB components actual behavioral controls and behavior (see Figure 1), which are not measurable through self-report paper-and-pencil instruments, were not planned for in the BRAINS design or present study.
b Domains and items that survived into the finalized instrument.
c From Fishbein and Ajzen (1975, p. 6).

Figure 2. Underlying TRAPB model with construct mapping. Source: Adapted from Abd-El-Khalick et al. (2015).

It is important to explicate that we were keenly aware of concerns related to, and potential
shortcomings of, the TRAPB, particularly as they relate to assessing core constructs by survey.
First, the BRAINS, like many of its predecessors and in the context of this study, relies on collecting
information from participants about their attitudes, perceptions, beliefs, and intentions via self-
report. The reliance on self-report assumes that respondents have some shared understanding of
the prompts and associated constructs being assessed (Fishbein & Ajzen, 2010). Ajzen (2015)
further stipulates that, even with a perfect measure, the core constructs, or beliefs, assessed for any
one individual may still yield an incomplete understanding of their trajectory and/or eventual
behaviors, stating:

[T]he [TRAPB] makes no assumptions about the objectivity or veridicality of behavioral,
normative and control beliefs. These beliefs may rely on invalid or selective information;
they may be irrational, reflecting unconscious biases, paranoid tendencies, wishful thinking
or other self-serving motives; and they may fail to correspond to reality in many other ways.
(p. 3)

In sum, all the TRAPB contends is that individuals' intentions and behaviors are consistent
with their beliefs, and only from this view of internal consistency is behavior considered to be
"reasoned" (Ajzen & Dasgupta, 2015, p. 120).
A second concern is that the anticipated predictive ability associated with assessing
students' attitudes and behavioral intentions is likely to diminish as the perceived or real
temporal gap widens between the assessment and actual performance of the target behavior
(e.g., asking a 7th grader if she would elect to take a science course in grade 8 versus
asking her whether she would major in science when admitted to college). The latter
concern is certainly relevant to investigations utilizing cross-sectional designs, with the
ages of participants spanning many years apart, compared to studies which evaluated
students' behavioral intentions on short-term bases, with data collection spanning
approximately 1–4 years before the anticipated performance of the target behavior (e.g.,
Crawley & Black, 1992). Nonetheless, the TRAPB serve to connect a wide range of
possible facets and factors that typify and/or impact student attitudes and behavioral
intentions in relation to a target domain (Crawley & Koballa, 1994). This potential is
underscored by the common factors from extant instruments, including science
achievement, science self-concept (including a particular focus on sex and gender),
perceptions of the utility of science, perceptions of the expectations of parents/guardians
and peers in relation to science, and dispositions toward pursuing additional studies in
science or careers in scientific fields (Andre, Whigham, Chambers, & Hendrickson,
1999; Catsambis, 1995; DeBacker & Nelson, 2000; Gardner, 1975; George, 2000, 2006;
George & Kaplan, 1998; Hasan, 1985; Keeves, 1975; Kotte, 1992; Shrigley, Koballa, &
Simpson, 1988; Simpson & Oliver, 1985; Simpson & Troost, 1982). Thus, the TRAPB
were selected to link a number of constructs and domains that have been, to various
extents and in various combinations, invoked in past instruments and research on student
attitudes toward science.
Selection and Development of the Item Pool
Championing the TRAPB allowed for the identification of a set of domains and constructs
that would support the development of the new instrument; this was followed by a thorough and
systematic empirical analysis of a dozen widely used extant science attitude instruments. Several
extant forced-choice instruments measuring constructs related to attitudes toward science have
been widely used (Moore & Hill Foy, 1997; Moore & Sutman, 1970; Wareing, 1982, 1990;
Weinburgh & Steele, 2000). For the development of this new instrument existing measures were
selected for review based on a history of repeated use or modification, evidence of an exemplary
characteristic or performance (Blalock et al., 2008), and/or evidence of continued use (e.g.,
numerous translations; Navarro, Gonzalez, & Gonzalez-Pose, 2016). The instruments
analyzed during this process included the Attitudes toward Science Protocol (Wareing, 1982),
Attitude toward Science in School Assessment (Germann, 1988), Attitudes Toward Science
Inventory-Modified (Weinburgh & Steele, 2000), Changes in Attitude about the Relevance of
Science (Siegel & Ranney, 2003), Science Attitude Inventory: Modified (Nagy, 1978), Science
Attitude Inventory: Revised (Moore & Hill Foy, 1997), Science Attitude Scale (Misiti, Shrigley,
& Hanson, 1991), Science Opinion Survey (Gibson & Chase, 2002), SimpsonTroost Attitude
Questionnaire: Revised (Owen et al., 2007), Students Motivation toward Science Learning
Questionnaire (Tuan, Chin, & Shieh, 2005), Test of Science Related Attitudes (Fraser, 1978), and
Views about Science Survey (Halloun, 1997, 2001). During this analysis, the core TRAPB
constructs were used to guide the identification of items from extant instruments suitable for an
initial pool. These potential items, in terms of content, addressed various dimensions of the
theoretical constructs. Clusters of items identified for each of the constructs were reviewed. Items
that were redundant, poorly worded (e.g., double negative statements), or otherwise problematic
(e.g., double-barreled) were removed from the pool. Throughout this process, language, length,
and level of item abstraction were evaluated in an effort to mitigate common concerns with
surveying younger students, particularly those ages 8–11 years, and to support their ability to produce
reliable responses to survey questions (Borgers, Hox, & Sikkel, 2004; Borgers, Leeuw, & Hox,
2000). The analysis reinforced the selection of the domains and constructs, and enabled the
adoption, in several cases with revision, of a number of existing items that were aligned with
the envisioned instrument. A total of 62 items were advanced from extant instruments, 16 of which
were modified. Twelve additional items were developed by the research team to ensure that
all of the TRAPB domains and constructs were addressed in the BRAINS. Together, these 74
Likert items, ordered with a 5-point response scale (strongly disagree, disagree, not sure, agree,
and strongly agree), formed the initial item pool.2
next stages of development were meant to select the most appropriate among these items and
reduce the overall length of the instrument.
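As a concrete illustration of how a 5-point scale of this kind is typically coded, the sketch below assigns integer scores and reverse-scores negatively worded items (e.g., "I do not like science"). The item names are hypothetical, and this is a generic scoring convention rather than the BRAINS scoring procedure itself:

```python
# Hypothetical illustration of 5-point Likert coding with reverse scoring;
# item names and data are invented, not drawn from the BRAINS dataset.

SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "not sure": 3,
    "agree": 4,
    "strongly agree": 5,
}

# Negatively worded items are reverse-scored so that higher values always
# indicate a more favorable attitude: 1 <-> 5, 2 <-> 4, 3 stays 3.
REVERSED_ITEMS = {"i_do_not_like_science"}

def score_response(item: str, response: str) -> int:
    raw = SCALE[response.lower()]
    return 6 - raw if item in REVERSED_ITEMS else raw

# Example: "strongly agree" on a negatively worded item scores as 1.
print(score_response("i_do_not_like_science", "strongly agree"))  # 1
print(score_response("i_really_enjoy_science_lessons", "agree"))  # 4
```

Reverse scoring keeps scale totals comparable across positively and negatively worded items, so higher sums consistently indicate more favorable attitudes.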


Instrument Validation
Face and Content Validity
Validating the BRAINS Survey occurred in multiple phases. First, an expert review panel
helped establish the face validity of an initial pool of items, which comprised items derived from
several extant attitude-toward-science instruments, as well as items developed by the authors
(Summers, 2012). Panel members were carefully selected to cover expertise with research on
precollege students' attitudes toward science, science teaching and learning, and science
education research. The panel membership included five college faculty members in science
education or the sciences from the United States, two experts in science education
research, and a researcher who is considered an authority in the domain of attitudes research in
science education. Panel members were asked to provide feedback on the theoretical framework
underlying the new instrument, the match of each item in the pool with its respective construct or
domain, the wording of each item, and the appropriateness of the language for use with a range of
students (e.g., elementary, middle, and high school students). Panel members also were asked to
suggest revisions for an item if they identified issues with its wording, and were encouraged to
suggest additional items where they believed these were necessary.
Feedback received from the expert panel was compiled, reviewed by the design team, and
found to primarily pertain to individual items. This information was used to reduce the number of
content items, and further align the instrument with the theoretical framework and outlined
measurement goals. As a result of the feedback, of the 74 original items submitted for review, 37
items (50%) remained unchanged, 21 items (28%) were modified, 16 items (22%) were deleted,
and 10 new items were added. The recommended revisions, along with further consolidation of
items addressing similar constructs or domains and a final internal review, resulted in a 59-item
version of the BRAINS Survey. In addition to the content items, the BRAINS
instrument also included a coversheet with several items intended to collect biographical
information and provide additional insight into the perspectives of students.
Adaptation for Online Administration. The instrument, consent information, instructions,
and the content items were uploaded onto the Qualtrics® digital platform to enable online
administration. During this process every effort was made to ensure similarity, especially
concerning response format, between the online and pencil-paper versions. Borgers et al. (2000)
note that digital delivery systems can make surveys more attractive, reduce the number of skipped
questions, and help participants feel more at ease. Being considerate of younger participants, a
couple of unique features, available only in the digital version, were added to make the instrument
more accessible. The first feature was to restrict the number of items per page to three, making it
easier for participants to focus on the items presented to them at any one time. The second feature
was to upload audio files onto the survey for students to use if needed. Mindful that reading ability
might vary among students, and could consequently limit participants ability to access the survey,
individual students were given the ability to listen to the survey, or portions thereof, at their
computer station (see Scott, 1997). At the click of a button (a play icon initiating the specific
audio file), students could have items, or other written portions (e.g., informed consent passages),
read to them in a neutral tone.
Online Administration Pilot. The online instrument was piloted during the fall of 2013 with
multiple class sections of 3rd and 7th grade students. The purpose of this pilot was primarily to
determine the ease with which participants could complete, and teachers could implement, the
BRAINS online survey in preparation for a larger data collection effort. Students (N = 151) were

purposefully sampled from two schools: 3rd grade students (n = 45) from a public STEM magnet
primary school and 7th grade students (n = 106) from a public middle school, both near a large
Midwestern university. The 3rd grade sample included two class sections, and the 7th grade
sample included five class sections. Students, on average, were able to complete the online survey,
including the demographic and 59 content items, in 25–35 min.
Following their completion of the survey, a subsample of students (ranging from 2 to 4) from
each class section was conveniently selected and asked about their experience with the BRAINS
and the digital delivery. Questions focused on students' understanding of items and whether they
experienced any difficulty responding to the questions posed. Using students' free responses is
suggested by Osborne et al. (2003), Oppenheim (1992), and Bennet (2001) as another method of
establishing validity. This step was intended to support and build on the efforts of the expert panel
by collecting free responses from survey participants as advocated by Osborne et al. (2009). A
total of 25 students were asked to comment on the survey as a whole, and to provide more in-depth
feedback on a selected subset of individual items. Given the participants' age range, it would have
been burdensome to ask each student, especially the younger ones, to comment on all 59 items.
Thus, students were asked to explain how they interpreted a subset of 20 items, identify terms or
items that were hard to understand in this subset, and suggest ways to revise the latter terms or
items. Analyses of the interview data indicated that, overall, the survey items were accessible and
understandable to participant students. However, some younger students reported that the question
intended to capture information related to participants' ethnicity, "Which of the following best
describes me," was unclear. Teachers present at the time of the pilot recorded student questions that
arose and comments voiced, and submitted the information to the researcher. The most common
of these questions, aside from the aforementioned issues with the background questions, concerned
specific vocabulary included in some survey items (e.g., motivation, respect, pursue, influence,
and science concepts).
Following the students' completion of the online pilot, teachers were asked a set of informal
questions about the instrument and the ease of administrating the instrument online. Obtaining
feedback from teachers who know the students well further adds to the validity of a measure
(Bennet, 2001), and also helped to improve the survey process. These questions focused on the
practical concerns of online survey administration, such as the time required to get all students
logged on to computers and directed to the correct website. One teacher, who admittedly was not
computer savvy, did require some assistance to make the survey available to students. The teacher
explained that she was unable to create a link to allow students easy access from their computer
stations. To help alleviate the issues raised both by students and teachers, an administration
support guide was developed for teachers. This guide contained a section on survey administration
and another on trouble-shooting possible issues. The guide also provided a standardized set of
answers for known questions about the survey, as well as acceptable definitions for any vocabulary
terms that were identified as potentially problematic by students during the pilot.
Administration Procedures
The BRAINS Survey was made available online, using the Qualtrics platform, to allow for
data collection in a number of schools across a large geographic area. The lead researcher, in
collaboration with classroom teachers and administrators, assigned a time range for each class
section to complete the survey. All participant students completed the survey under the
supervision of their classroom teacher during one allotted 50-min class period. A standard
protocol for administering the survey (introducing the study, securing informed consent, giving
instructions to complete the survey) was provided to participating teachers in the form of a

Journal of Research in Science Teaching


CROSS-SECTIONAL INSTRUMENT DEVELOPMENT 21

guidebook. Additionally, students were presented with the requisite consent information online
before they could access the survey.
Illinois Educational Context
The largest population center in Illinois comprises Chicago, the third largest city in the
United States, and the surrounding metro area, which includes several counties in the northern part
of the state. The rest of Illinois is much more rural when compared to the densely populated and
industrialized Chicago area, characterized by small towns and small to medium cities. Exceptions
to this characterization exist in northern (e.g., Rockford), central (e.g., Champaign-Urbana), and
southern Illinois (e.g., East Saint Louis). The predominantly rural character of the state influences
a number of school-related attributes such as racial composition and level of income. Overall, students
attending public schools during the 2011–2012 academic year in Illinois were 0.3% American
Indian/Alaska Native, 4.2% Asian, 18.2% Black/African American, 0.1% Native Hawaiian/Other
Pacific Islander, 23.6% Hispanic/Latino, and 50.7% White, with 2.9% having reported two or
more races (Illinois State Board of Education [ISBE], 2012). Ninety percent of Illinois students
attend public schools, and 49% are identified as low-income (Illinois State Board of Education
[ISBE], 2012).
Illinois has demonstrated a commitment to the Next Generation Science Standards (NGSS
Lead States, 2013) by serving as one of the lead states during the development of the standards
(Illinois Lead State Summary, 2011). The state moved forward with the adoption of NGSS, which
will be fully implemented by the 2016–2017 school year. The current Illinois Learning Standards
were adopted in 1997. These standards are organized by four levels: Early elementary, late
elementary, middle/junior high school, and late high school. Under these standards, districts in
Illinois were allowed to choose their own science curriculum as long as it follows the state
standards. Standardized assessments in science were administered to students in grades 4 and 7
using the Illinois Standards Achievement Test (ISAT) and in grade 11 using the Prairie State
Achievement Examination (PSAE). Because students were not tested in science until grade 4, it is
likely that they receive little by way of formal science instruction in preceding grades. Another
important benchmark with respect to students' science instruction occurs in grade 11. Illinois
students are tested in science on the PSAE in grade 11, but it is possible for students not to be
enrolled in a science course at that time. To graduate from an Illinois public high school students
must complete a minimum of 2 years of science coursework with no specific course requirements.
Sample
Participant students were selected by generating a representative sample of class sections in
grades 5 through 10 across Illinois. To achieve this selection, Illinois was divided into six
geographic sampling regions, by county. Schools from within each region were identified using
the database of public school entities maintained by ISBE. Schools were then randomly selected
with the goal of recruiting two schools from each grade level, with six levels total (i.e., 5 through
10), from each region. Selected schools were contacted and asked if a single section of students
from the specified grade level were willing to participate in the study under the supervision of their
teacher. In the event multiple sections for the requested grade existed at a particular school, school
administrators and teacher(s) were allowed to determine which section would participate in the
study. In the event that a school declined the invitation to participate, or was unresponsive, another
school was randomly selected from the population without replacement (i.e., unresponsive and
declining schools were removed from the selection pool). The target sample for this study was 72
class sections in total. In summary, the target sample was designed to include 12 schools per grade
level and a total of 12 schools per sampling region. Data collection was completed in fall of 2014.
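The region-stratified selection described above can be sketched as follows. This is only an illustration: the region labels, school names, pool sizes, and the `draw_schools` helper are hypothetical placeholders, not drawn from the study's actual sampling frame.

```python
import random

def draw_schools(pool, n_needed, declined=None):
    """Randomly draw schools from a region's pool, first removing any that
    declined or were unresponsive so they cannot be drawn again."""
    declined = set(declined or [])
    candidates = [s for s in pool if s not in declined]
    return random.sample(candidates, n_needed)

# Hypothetical region pools (names and sizes are invented for the sketch).
region_pools = {f"region_{r}": [f"school_{r}_{i}" for i in range(40)]
                for r in range(1, 7)}

random.seed(42)  # reproducibility for the sketch only
target = {region: draw_schools(pool, 12)
          for region, pool in region_pools.items()}

total_sections = sum(len(v) for v in target.values())
print(total_sections)  # 6 regions x 12 schools = 72 target class sections
```

In the study, a declining school would be passed to `declined` on the next draw, which is what sampling "without replacement" amounts to here.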
Table 3
Representative sample of Illinois students (N = 1,291)

                      Sections          Students          Male              Female            Not Reported
                      n      %a         n      %a         n      %b         n      %b         n      %b
Grade Level
  5                   13     19.1       286    22.2       133    46.5       150    52.4       3      1.1
  6                   11     16.3       215    16.7       101    47.0       113    52.6       1      0.4
  7                   10     14.7       162    12.5       84     51.9       78     48.1       0      0
  8                   12     17.6       243    18.8       132    54.3       109    44.9       2      0.8
  9                   12     17.6       254    19.7       111    43.7       112    44.1       31     12.2
  10                  10     14.7       131    10.1       62     47.3       68     51.9       1      0.8
  Total               68     100.0      1291   100.0      623    48.4       630    48.7       38     2.9
Geographical Region
  1                   10     14.7       235    18.2       117    49.8       118    50.2       0      0
  2                   10     14.7       152    11.8       70     46.1       80     52.6       2      1.3
  3                   13     19.1       273    21.2       123    45.1       119    43.6       31     11.3
  4                   11     16.2       191    14.8       82     42.9       108    56.6       1      0.5
  5                   14     20.6       246    19.0       123    50.0       121    49.2       2      0.8
  6                   10     14.7       194    15.0       108    55.7       84     43.3       2      1.0
  Total               68     100.0      1291   100.0      623    48.3       630    48.8       38     2.9

a Percent of grand total.
b Percent of corresponding grade or region total.



The BRAINS was completed by 1,291 students, from 68 class sections (94.4% of the target
sample), representing 68 unique schools. Distribution of the sample by grade, including class
sections and students, and by geographical region, are presented in Table 3. For the purposes of
instrument development it is important to note that the majority of respondents identified as White
(n = 1,021; 79.1%), and relatively few reported any use of a language other than English in their
home (n = 196; 15.2%).
Data Analysis
Face and content validity established through an expert panel alone are not sufficient
(DeVellis, 2003), and need to be accompanied by psychometric or objective evidence (Munby,
1979). Potvin and Hasni (2014) note that this evidence should demonstrate that an instrument is
unidimensional and internally consistent. To begin this process, confirmatory factor analysis
(CFA) was conducted based on the theoretical factors established in the instrument design process.
Analyses were done using Mplus and the maximum likelihood robust (MLR) estimator option
(Muthén & Muthén, 1998–2015). It should be noted that MLR is appropriate for dealing with data
that may not be normally distributed (Rosseel, 2010). The analyses indicated that many items did
load onto factors that resembled the theoretical structure; however, the overall fit of the 59-item
model was poor. Refinement of the theoretical model proceeded stepwise, based on the results
from the analysis, by systematically identifying and culling ill-fitting items. The item deletion
process, necessary to reach minimum fit statistics for the instrument as a whole, is detailed in the
following section. This is followed by a discussion of individual items, which extends to
performance on specific factors and conceptual adherence to the TRAPB framework.
The first step in achieving the minimum information criterion was to cull items that loaded
onto incorrect factors. Next, there were cases where an item needed to be removed from an item-
pair (e.g., exclude either item 6 or 34). After these items were removed the model fit improved
greatly, but still did not achieve the minimum threshold. For these selections, item complexity and
content were taken into consideration. To illustrate, consider the following item-pair: "I consider
my family's advice about my future career" and "My parents influence my thinking about my
education." Both items loaded on the same construct, normative beliefs, a construct intended to
tap into the role important individuals might play in shaping future behaviors, but the latter was
ultimately excluded. The decision in this case was based on the inclusivity of "family" rather than
"parents" and the specificity of "advice" over the more ambiguous term "influence." In total, four items
were excluded in this manner, and the decision to retain items was based on their precision in
relation to the theoretical constructs and the clarity of the prompt. A review of the remaining items
identified those that loaded on multiple factors, which entailed deletions of 13 additional items.
The deletion of these items was based on modification indices, values that show the improvement
in model fit if a particular coefficient were to become unconstrained (Gatignon, 2010), by allowing
the item to correlate with another factor, or otherwise be removed. Allowing items to correlate
with multiple factors complicates an instrument's model and violates the conditions of scale
unidimensionality (Gardner, 1995), so items with large modification indices may be considered
"bad" items. (Note that a modification index of 10 was used as a cutoff to justify decisions to cull
items [Muthén & Muthén, 1998–2015].) Beginning with items possessing the largest modification
indices, indicative of the poorest items, deletions were made in a stepwise fashion until the
minimum acceptable statistical fit was achieved.
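A minimal sketch of this stepwise culling logic is given below, assuming a precomputed table of modification indices. In actual CFA refinement the model is re-estimated and the indices recomputed after every deletion; the item names and index values here are invented purely for illustration.

```python
def cull_by_modification_index(items, mod_indices, cutoff=10.0):
    """Drop the item with the largest modification index above the cutoff,
    one at a time, until no remaining item exceeds the cutoff.

    NOTE: real CFA refinement re-runs the model and re-computes the indices
    after each deletion; this sketch reuses a fixed table for simplicity."""
    kept = list(items)
    while True:
        over = {i: mod_indices.get(i, 0.0) for i in kept
                if mod_indices.get(i, 0.0) > cutoff}
        if not over:
            return kept
        kept.remove(max(over, key=over.get))  # cull the worst offender

# Invented indices, not the study's actual values.
mi = {"item_a": 25.3, "item_b": 4.1, "item_c": 14.8, "item_d": 2.0}
print(cull_by_modification_index(["item_a", "item_b", "item_c", "item_d"], mi))
# item_a (MI = 25.3) goes first, then item_c (14.8); item_b and item_d remain
```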
Item-Level Considerations
A final review of individual item statistics and behavior revealed that two additional items
loaded onto multiple factors and were, therefore, deleted. Similarly two other items were

troublesome and were preferentially deleted based on a large modification index (>10) and poor
performance. Five additional items were deleted for loading poorly, with standardized loadings
less than the 0.32 cutoff on their respective factors (Tabachnick & Fidell, 2001). Appendix 1
(Supplementary Materials) lists all items included in the final instrument; each loaded onto the
construct predicted by the underlying theoretical framework, the TRAPB.
Final Instrument
The final 30-item instrument, containing five factors or sub-scales, has a good fit with a Root
Mean Square Error of Approximation (RMSEA) of 0.04, a Standardized Root Mean Square
Residual (SRMSR) of 0.04, a CFI of 0.95, and a non-normed index of 0.95. (It should be noted that
the ideal values for the information criteria used are as follows: RMSEA should be less than 0.07,
SRMSR should be less than 0.07, CFI should be greater than 0.9, and the non-normed index should
be greater than 0.9 [Hu & Bentler, 1999].) Illinois students' responses to the BRAINS revealed that
the items comprising the instrument clustered around factors or sub-scales that reflected the core
constructs of the TRAPB. The five factors, named after the theoretical constructs along with their
respective range of item loadings, include: attitude toward science (0.54–0.92), intention to pursue
or engage in science (0.56–0.84), behavioral beliefs (0.41–0.77), control beliefs (0.58–0.82), and
normative beliefs (0.39–0.80). The items included in the final instrument, organized by construct
and factor loading, are included in Table 4. Figure 3, read from left to right, illustrates the
covariances between factors, as well as the individual item loadings and residuals. Note that the
unequal item residual values support the use of the MLR estimator in Mplus CFA computations.
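The cutoffs quoted above can be collected into a small check. The function name is our own, and the thresholds are simply the Hu and Bentler (1999) values as restated in the text:

```python
def fit_is_acceptable(rmsea, srmr, cfi, nnfi):
    """Return True when all four indices clear the thresholds cited in the
    text: RMSEA < 0.07, SRMR < 0.07, CFI > 0.9, non-normed index > 0.9."""
    return rmsea < 0.07 and srmr < 0.07 and cfi > 0.9 and nnfi > 0.9

# Indices reported for the final 30-item BRAINS model.
print(fit_is_acceptable(rmsea=0.04, srmr=0.04, cfi=0.95, nnfi=0.95))  # True
```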
It is important to note that the BRAINS demonstrated high fidelity to the TRAPB theoretical
framework that served as its foundation. This is evident in the grouping of all items included in the
final version of the instrument into the same sub-scales predicted during the instrument
development process. It is equally important to highlight that all of the TRAPB-inspired sub-scales
predicted during instrument development were identified and retained during CFA. To provide a
measure of how each sub-scale performed, as a group of items, CFA-based scale reliabilities were
computed for each of the five factors (Table 5). Scale reliability, also referred to as construct
reliability, was estimated based on the CFA results (Dillon & Goldstein, 1984; Jöreskog, 1971)
and reported instead of Cronbach's alpha due to its increased dependability (Raykov, 2001). Note
that scale reliability is evaluated similarly to Cronbach's alpha, with values greater than 0.6
considered acceptable, and values between 0.7 and 0.9 considered good.
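As a sketch, construct reliability can be computed from standardized loadings with the usual composite-reliability formula, assuming uncorrelated residuals. Applying it to the rounded attitude loadings from Table 4 approximately reproduces the 0.91 reported in Table 5 (approximate only, because the published loadings carry two decimals):

```python
def construct_reliability(loadings):
    """Composite (construct) reliability from standardized loadings,
    assuming uncorrelated residuals:
        rho = (sum l)^2 / ((sum l)^2 + sum(1 - l^2))."""
    total = sum(loadings)
    residual = sum(1.0 - l * l for l in loadings)
    return total * total / (total * total + residual)

# Rounded attitude sub-scale loadings from Table 4.
attitude = [0.92, 0.84, 0.84, 0.78, 0.76, 0.54]
print(round(construct_reliability(attitude), 2))  # 0.91
```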
Conclusions and Directions for Future Research
The development of the BRAINS Survey systematically addressed concerns that have been
highlighted as having compromised the validity and/or reliability of many of the existing attitudes
toward science instruments (Blalock et al., 2008; Osborne et al., 2003, 2009). Methodological
considerations during the development and validation of the BRAINS ranged from anchoring the
development of the instrument in a robust theoretical model, to using multiple sources to establish
the instrument's face and content validity, to a validation process including a sizable sample, to using
robust statistical measures to explore the instrument's underlying psychometric properties and
structure. The results indicate that the five sub-scales underlying the instrument produce a robust
model with a good fit, and good reliability and validity measures. Further, the factor structure
obtained resembled the theoretical factors from the TRAPB framework, with the individual items that
persisted into the final instrument all loading on their predicted factors. Such empirical alignment
of the instrument's factor structure and item loadings with its theoretical foundation, the
TRAPB, which guided the development and evaluation of this measure, plays an important role,
as Potvin and Hasni (2014) noted, in ascertaining the measure's construct validity.

Table 4
Standardized loadings for the statewide sample of Illinois student responses from grades 5 through 10
(N = 1,291)

Item                                                                          Loading

Attitude toward science
  24. I really like science [a]                                               0.92
  15. I really enjoy science lessons [b]                                      0.84
  30. I do not like science                                                   0.84
  1. I enjoy science                                                          0.78
  7. Science is one of the most interesting school subjects [c]               0.76
  23. I would like to do science experiments at home [c]                      0.54
Intention
  20. I would enjoy working in a science-related career                       0.84
  16. I will continue studying science after I leave school                   0.80
  28. I will take additional science courses in the future [d]                0.80
  4. I will study science if I get into a university                          0.74
  13. I will become a scientist in the future                                 0.67
  11. I will not pursue a science-related career in the future                0.56
Behavioral beliefs
  27. Science will help me understand the world around me [e]                 0.77
  26. Knowledge of science helps me protect the environment [e]               0.72
  21. Knowing science can help me make better choices about my health [e]     0.70
  19. We live in a better world because of science                            0.64
  3. Most people should understand science because it affects their lives [f] 0.55
  8. Teachers encourage me to understand concepts in science classes          0.47
  2. Scientists are highly respected                                          0.45
  29. People with science-related careers have a normal family life [a]       0.44
  25. Scientists usually like to go to work even when they have a day off [a] 0.41
Control beliefs
  18. I am confident that I can understand science                            0.82
  10. Science is easy for me                                                  0.77
  14. I can understand difficult science concepts                             0.73
  5. I am sure I can do well on science tests [g]                             0.71
  12. I cannot understand science even if I try hard                          0.69
  6. I usually give up when I do not understand a science concept             0.58
Normative beliefs
  22. My family encourages me to have a science-related career                0.80
  17. My family encourages my interest in science [h]                         0.75
  9. Members of my family work in scientific careers [h]                      0.39

Item source: [a] From Owen et al. (2007). [b] Modified from Fraser (1978). [c] From Fraser (1978).
[d] Modified from Gibson and Chase (2002). [e] Modified from Siegel and Ranney (2003).
[f] Modified from Moore and Hill Foy (1997). [g] From Tuan, Chin, and Shieh (2005).
[h] From Wareing (1982, 1990). The remaining items were generated internally (Summers, 2012).


Figure 3. Standardized factor covariances, item loadings, and residuals from confirmatory factor analysis.

The finalized BRAINS instrument was the product of a systematic and painstaking process,
which was detailed in the methods and results sections. Still, as with the development of any
paper-and-pencil instrument, methodological choices often have to be made, which means that
certain issues and questions remain. These possible concerns need to be addressed or
revisited in later work. To begin, we want to address questions that might arise from the fact that
the finalized instrument included 30 items, down from the original 59 items. In describing the
instrument development process, we noted that we intentionally added a large enough pool of
items to cover all TRAPB dimensions, which would increase the likelihood of ending up with the
minimum number of items needed to identify factors during subsequent refinements of the
instrument (commonly three items per factor; cf. Anderson & Gerbing, 1984). In general,
instruments are designed with a mindset that some items will be culled for various reasons and it is
common for the length of an instrument to change during development (e.g., Romine, Sadler,
Presley, & Klosterman, 2014). It is important to highlight that exploratory factor analysis was not
used to identify clusters of similarly performing items, but rather the final factor structure of the
instrument was the product of refinement through CFA. Because the a priori model was both
confirmed and bolstered by key indicators, we elected to disseminate the instrument without using
the finalized 30-item BRAINS to collect additional large-scale data (see Jackson, Gillaspy, &
Purc-Stephenson, 2009).

Table 5
Scale reliabilities estimated from CFA results

Sub-Scale Reliability Number of Items


Attitude toward science 0.91 6
Intention 0.88 6
Behavioral beliefs 0.82 9
Control beliefs 0.87 6
Normative beliefs 0.70 3

Reflecting on the process leading to the final instrument we want to recognize that alternative
approaches, such as item response theory (IRT) and Rasch modeling, could have been used to
inform the development or refinement process. CFA, which was the core approach we elected to
use, is part of a larger family of methods known as structural equation modeling (SEM) and plays
an essential role in measurement model validation (Brown, 2006). The decision to rely on factor
analysis was rooted in the philosophical position of our design team, and supported by literature
recounting the similarity of finalized instrument models when the same set of data was refined
through different methods, e.g., CFA, IRT, and/or Rasch modeling (e.g., Sharkness & DeAngelo,
2011). Research literature in science education continues to advance factor analysis as a common
and acceptable method of initial survey development (e.g., Deniz, Donnelly, & Yilmaz, 2008), and
IRT is often applied after several administrations to further fine-tune an instrument (e.g., Romine,
Walter, Bosse, & Todd, 2017). Certainly, IRT could be used during the development process;
however, many of the immediate advantages of using IRT were addressed in other ways in the
present study, such as the use of the MLR estimator to adjust for non-normal data and procurement
of a large sample size (n > 1,000) suitable for factor analysis (Rosseel, 2010; Sebille et al., 2010).
Following the lead of other survey researchers in the field, it may be desirable, and even necessary,
to reexamine the BRAINS after additional administrations to possibly hone the instrument length
or inquire about issues arising when the instrument is applied in different contexts.
Looking toward future applications of the BRAINS instrument, it is essential to revisit the
practical value of understanding the causal sequence implied by the TRAPB, as noted by Osborne
et al. (2003), because this may help determine what salient beliefs students hold about science and
how they impact student decision-making. A significant area for future research would be to
investigate key causal relationships, such as those suggested by the TRAPB framework underlying
the BRAINS, to ensure that further research into students' attitudes toward science is guided by
theory that is commensurate with the ultimate aim of fostering involvement in science. In other
words, the connection between students' intentions to pursue science in their futures, the factors
that shape their intentions, and the consistency with which these intentions predict student
behavior need to be examined in detail and, more importantly, empirically ascertained. The design
of the BRAINS instrument supports the latter line of inquiry by being amenable to multivariate,
multilevel modeling to examine relationships among multiple outcomes at different levels of
analysis (Templin, 2014). BRAINS allows for the simultaneous analysis of its five sub-scales in a
meaningful way given its robust underlying framework, and can be used to explore how factors
(e.g., intention) differ across individuals. An ultimate aim of this research would be to provide
insight regarding the consistency with which students' intentions actually predict behavior, and
also serve as the basis of criterion validity, or more specifically predictive validity, for the BRAINS
instrument (see Cronbach & Meehl, 1955). Findings from this proposed research may also afford a
deeper understanding of the conditions that best support the intention-behavior link with respect

to making decisions related to pursuing additional science studies and potentially science careers
in the future.
Notes
1. It is worth highlighting that studies dealing with identity employ an array of related terms
(e.g., self-esteem, -efficacy, -concept, -image, -perception, etc.). Undoubtedly, inconsistent and
poorly articulated uses of these terms only compound the research into the already ill-defined
construct of attitudes toward science.
2. Note that the 5-point Likert scale is commonly used with items corresponding to TRAPB
constructs (Montaño & Kasprzyk, 2015).
References
Abd-El-Khalick, F., Summers, R., Said, Z., Wang, S., & Culbertson, M. (2015). Development and large-
scale validation of an instrument to assess Arabic-speaking students' attitudes toward science. International
Journal of Science Education, 37(16), 2637–2663.
ACT. (2014). The condition of STEM 2013. Retrieved from http://www.act.org/
Aiken, L. R., & Aiken, D. R. (1969). Recent research on attitudes concerning science. Science
Education, 53, 295–305.
Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. In J. Kuhl, & J. Beckmann
(Eds.), Action control: From cognition to behavior (pp. 11–39). Berlin, Heidelberg: Springer.
Ajzen, I. (2015). The theory of planned behaviour is alive and well, and not ready to retire:
A commentary on Sniehotta, Presseau, and Araujo-Soares. Health Psychology Review, 9(2), 131–137.
Ajzen, I., & Dasgupta, N. (2015). Explicit and implicit beliefs, attitudes, and intentions. In P. Haggard,
& B. Eitam (Eds.), The sense of agency. New York, NY: Oxford University Press.
Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behavior.
Englewood Cliffs, NJ: Prentice-Hall.
Ajzen, I., & Fishbein, M. (2005). The influence of attitudes on behavior. In D. Albarracín, B. T. Johnson,
& M. P. Zanna (Eds.), The handbook of attitudes (pp. 173–221). Mahwah, NJ: Erlbaum.
Albarracín, D., Johnson, B. T., Fishbein, M., & Muellerleile, P. A. (2001). Theories of reasoned action
and planned behavior as models of condom use: A meta-analysis. Psychological Bulletin, 127(1), 142–161.
Anderson, J. C., & Gerbing, D. W. (1984). The effects of sampling error on convergence, improper
solutions and goodness-of-fit indices for maximum likelihood confirmatory factor analysis. Psychometrika,
49, 155–173.
Andre, T., Whigham, M., Chambers, S., & Hendrickson, A. (1999). Competence beliefs, positive affect,
and gender stereotypes of elementary students and their parents about science versus other school subjects.
Journal of Research in Science Teaching, 36, 719–747.
Archer, L., DeWitt, J., & Osborne, J. (2015). Is science for us? Black students' and parents' views of
science and science careers. Science Education, 99(2), 199–237.
Aschbacher, P. R., Li, E., & Roth, E. J. (2010). Is science me? High school students' identities,
participation and aspirations in science, engineering, and medicine. Journal of Research in Science Teaching,
47(5), 564–582.
Ashby, C. M. (2006). Higher education: Science, technology, engineering, and mathematics trends and
the role of federal programs. Washington, DC: U.S. Government Accountability Office (Education
Resources Information Center Document ED 491614).
Baars, B. J. (1986). The cognitive revolution in psychology. New York, NY: The Guilford Press.
Bandawe, C. R., & Foster, D. (1996). AIDS-related beliefs, attitudes and intentions among Malawian
students in three secondary schools. AIDS Care, 8(2), 223–232.
Bem, D. J. (1970). Beliefs, attitudes, and human affairs. Belmont, CA: Brooks/Cole.
Bennet, J. (2001). Science with attitude: The perennial problem of pupils' responses to science. School
Science Review, 82(300), 59–70.


Bennett, J., & Hogarth, S. (2009). Would you want to talk to a scientist at a party? High school students'
attitudes to school science and to science. International Journal of Science Education, 31(14), 1975–1998.
Berk, L. J., Muret-Wagstaff, S. L., Goyal, R., Joyal, J. A., Gordon, J. A., Faux, R., & Oriol, N. E. (2014).
Inspiring careers in STEM and healthcare fields through medical simulation embedded in high school science
education. Advances in Physiology Education, 38(3), 210–215.
Blalock, C. L., Lichtenstein, M. J., Owen, S., Pruski, L., Marshall, C., & Topperwein, M. (2008). In
pursuit of validity: A comprehensive review of science attitude instruments. International Journal of Science
Education, 30, 961–977.
Bloom, B. S. (1976). Human characteristics and school learning. New York, NY: McGraw-Hill.
Blosser, P. E. (1984). Attitude research in science education. ERIC clearinghouse of science,
mathematics and environmental education information bulletin. Columbus, OH: Ohio State University.
Borgers, N., De Leeuw, E., & Hox, J. (2000). Children as respondents in survey research: Cognitive
development and response quality. Bulletin de Méthodologie Sociologique, 66(1), 60–75.
Borgers, N., Sikkel, D., & Hox, J. (2004). Response effects in surveys on children and adolescents: The
effect of number of response options, negative wording, and neutral mid-point. Quality & Quantity, 38(1),
17–33.
Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York, NY: Guilford.
Butler, M. B. (1999). Factors associated with students' intentions to engage in science learning
activities. Journal of Research in Science Teaching, 36, 455–473.
Caleon, I. S., & Subramaniam, R. (2008). Attitudes towards science of intellectually gifted and
mainstream upper primary students in Singapore. Journal of Research in Science Teaching, 45(8), 940–954.
Catsambis, S. (1995). Gender, race, ethnicity, and science education in the middle grades. Journal of
Research in Science Teaching, 32, 243–257.
Crawley, F. E., & Black, C. B. (1992). Causal modeling of secondary science students' intentions to
enroll in physics. Journal of Research in Science Teaching, 29, 585–599.
Crawley, F. E., & Coe, A. E. (1990). Determinants of middle school students' intention to enroll in a
high school science course: An application of the theory of reasoned action. Journal of Research in Science
Teaching, 27, 461–476.
Crawley, F. E., & Koballa, T. R., Jr. (1992). Hispanic-American students' attitudes toward enrolling in
high school chemistry: A study of planned behavior and belief-based change. Hispanic Journal of Behavioral
Sciences, 14, 469–486.
Crawley, F. E., & Koballa, T. R. (1994). Attitude research in science education: Contemporary models
and methods. Science Education, 78, 35–56.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological
Bulletin, 52, 281–302.
DeBacker, T. K., & Nelson, R. M. (2000). Motivation to learn science: Differences related to gender,
class type, and ability. Journal of Educational Research, 93(4), 245–255.
Deniz, H., Donnelly, L. A., & Yilmaz, I. (2008). Exploring the factors related to acceptance of
evolutionary theory among Turkish preservice biology teachers: Toward a more informative conceptual
ecology for biological evolution. Journal of Research in Science Teaching, 45, 420–443.
DeVellis, R. F. (2003). Scale development: Theory and application (2nd ed.). Thousand Oaks, CA: Sage.
Dillon, W. R., & Goldstein, M. (1984). Multivariate analysis: Methods and applications. New York, NY: Wiley.
Farenga, S. J., & Joyce, B. A. (1998). Science-related attitudes and science course selection: A study of
high-ability boys and girls. Roeper Review, 20(4), 247–251.
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior: An introduction to theory and
research. Reading, MA: Addison-Wesley.
Fishbein, M., & Ajzen, I. (2010). Predicting and changing behavior: The reasoned action approach. New
York, NY: Taylor & Francis.
Fraser, B. J. (1978). Development of a test of science-related attitudes. Science Education, 62(4),
509–515.
Furman, M., & Calabrese Barton, A. (2006). Capturing urban student voices in the creation of a science
mini-documentary. Journal of Research in Science Teaching, 43(7), 667–694.


Gall, M. D., Borg, W. R., & Gall, J. P. (1996). Educational research: An introduction. White Plains, NY:
Longman.
Gardner, P. L. (1975). Attitude to science: A review. Studies in Science Education, 2(1), 141.
Gatignon, H. (2010). Statistical analysis of management data: Confirmatory factor analysis.
(pp. 59122). New York, NY: Springer.
George, R. (2000). Measuring change in students attitudes toward science over time: An application of
latent variable growth modeling. Journal of Science Education and Technology, 9, 213225.
George, R. (2006). A cross-domain analysis of change in students attitudes toward science and attitudes
about the utility of science. International Journal of Science Education, 28, 571-589.
George, R., & Kaplan, D. (1998). A structural model of parent and teacher influences on science
attitudes of eighth graders: Evidence from NELS: 88. Science Education, 82, 93109.
Germann, P. J. (1988). Development of the attitude toward science in school assessment and its use to
investigate the relationship between science achievement and attitude toward science in school. Journal of
Research in Science Teaching, 25, 689–703.
Gibson, H. L., & Chase, C. (2002). Longitudinal impact of an inquiry-based science program on middle
school students' attitudes toward science. Science Education, 86(5), 693–705.
Guzey, S. S., Harwell, M., & Moore, T. (2014). Development of an instrument to assess attitudes toward
science, technology, engineering, and mathematics (STEM). School Science and Mathematics, 114(6),
271–279.
Haladyna, T., & Shaughnessy, J. (1982). Attitudes toward science: A quantitative synthesis. Science
Education, 66, 547–563.
Hale, J. L., Householder, B. J., & Greene, K. L. (2002). The theory of reasoned action. In J. P. Dillard, &
L. Shen (Eds.), The persuasion handbook: Developments in theory and practice (pp. 259–286). London, UK:
Sage.
Halloun, I. (1997). Views about science and physics achievement: The VASS story. In E. F. Redish, &
J. S. Rigden (Eds.), The changing role of physics departments in modern universities. American Institute of
Physics Proceedings (pp. 605–613). College Park, MD: American Institute of Physics Press.
Halloun, I. (2001). Student views about science: A comparative survey. Beirut, Lebanon: Educational
Research Center, Lebanese University.
Hamrick, L., & Harty, H. (1987). Influence of resequencing general science content on the science
achievement, attitudes toward science, and interest in science of sixth grade students. Journal of Research in
Science Teaching, 24(1), 15–25.
Hardeman, W., Johnston, M., Johnston, D. W., Bonetti, D., Wareham, N. J., & Kinmonth, A. L. (2002).
Application of the theory of planned behaviour in behaviour change interventions: A systematic review.
Psychology & Health, 17, 123–158. https://doi.org/10.1080/08870440290013644a
Harty, H., & Beall, D. (1984). Toward the development of a children's science curiosity measure. Journal
of Research in Science Teaching, 21(4), 425–436.
Hasan, O. E. (1985). An investigation into factors affecting attitudes toward science of secondary school
students in Jordan. Science Education, 69(1), 1–18.
Head, K. J., & Noar, S. M. (2014). Facilitating progress in health behaviour theory development and
modification: The reasoned action approach as a case study. Health Psychology Review, 8, 34–52.
https://doi.org/10.1080/17437199.2013.778165
Heikkinen, H. W. (1973). A study of factors influencing student attitudes toward the study of high school
chemistry (Doctoral dissertation). University of Maryland, College Park, MD.
Hillman, S. J., Zeeman, S. I., Tilburg, C. E., & List, H. E. (2016). My attitudes toward science (MATS):
The development of a multidimensional instrument measuring students' science attitudes. Learning
Environments Research, 19(2), 203–219.
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis:
Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1–55.
Illinois Lead State Summary. (2011). Next generation science standards. Retrieved from http://www.
nextgenscience.org/Illinois
Illinois State Board of Education. (2012). A profile of Illinois public schools: Selections from the school
report card files. Retrieved from http://www.isbe.net/assessment/pdfs/report_card/2012/pub-school-
profile12.pdf
Jackson, D. L., Gillaspy, J. A., Jr., & Purc-Stephenson, R. (2009). Reporting practices in confirmatory
factor analysis: An overview and some recommendations. Psychological Methods, 14(1), 6–23.
Jemmott, J. B., III, & Jemmott, L. S. (2000). HIV risk reduction behavioral interventions with
heterosexual adolescents. AIDS, 14, S40–S52.
Jöreskog, K. G. (1971). Statistical analysis of sets of congeneric tests. Psychometrika, 36(2), 109–133.
Keeves, J. P. (1975). The home, the school, and achievement in mathematics and science. Science
Education, 59, 439–460.
Kind, P., Jones, K., & Barmby, P. (2007). Developing attitudes towards science measures. International
Journal of Science Education, 29(7), 871–893.
Koballa, T. R., Jr. (1988a). Attitude and related concepts in science education. Science Education, 72(2),
115–126.
Koballa, T. R., Jr. (1988b). The determinants of female junior high school students' intentions to enroll
in elective physical science courses in high school: Testing the applicability of the theory of reasoned action.
Journal of Research in Science Teaching, 25, 479–492.
Koballa, T. R., Jr., & Crawley, F. E. (1985). The influence of attitude on science teaching and learning.
School Science and Mathematics, 85, 222–232.
Koballa, T. R., Jr., & Glynn, S. M. (2007). Attitudinal and motivational constructs in science learning.
In S. K. Abell, & N. G. Lederman (Eds.), Handbook of research on science education (pp. 75–102). Mahwah, NJ: Lawrence Erlbaum.
Kotte, D. (1992). Gender differences in science achievement in 10 countries. Frankfurt, Germany: Peter
Lang.
Krynowsky, R. A. (1988). Problems in assessing student attitude in science education: A partial solution.
Science Education, 72(5), 575–584.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. New York, NY:
Cambridge University Press.
Liska, A. E. (1984). A critical examination of the causal structure of the Fishbein/Ajzen attitude-
behavior model. Social Psychology Quarterly, 47, 61–74.
Lovelace, M., & Brickman, P. (2013). Best practices for measuring students' attitudes toward learning
science. CBE-Life Sciences Education, 12(4), 606–617.
Mayberry, M. (1998). Reproductive and resistant pedagogies: The comparative roles of collaborative
learning and feminist pedagogy in science education. Journal of Research in Science Teaching, 35, 443–459.
Messick, S. (1989). Meaning and values in test validation: The science and ethics of assessment.
Educational Researcher, 18(2), 5–11.
Misiti, F. L., Shrigley, R. L., & Hanson, L. (1991). Science attitude scale for middle school students.
Science Education, 75(5), 525–540.
Montaño, D. E., & Kasprzyk, D. (2015). Theory of reasoned action, theory of planned behavior, and the
integrated behavioral model. In K. Glanz, B. K. Rimer, & K. Viswanath (Eds.), Health behavior: Theory,
research and practice (4th ed.). San Francisco, CA: Jossey-Bass.
Moore, R. W., & Hill Foy, R. L. (1997). The scientific attitude inventory: A revision (SAI II). Journal of
Research in Science Teaching, 34, 327–336.
Moore, R. W., & Sutman, F. X. (1970). The development, field test and validation of an inventory of
scientific attitudes. Journal of Research in Science Teaching, 7, 85–94.
Morrison, D. M., Spencer, M. S., & Gillmore, M. R. (1998). Beliefs about substance use among pregnant
and parenting adolescents. Journal of Research on Adolescence, 8(1), 69–95.
Munby, H. (1979). An investigation into the measurement of attitudes in science education. Kingston,
ON: Faculty of Education, Queen's University.
Munby, H. (1983). Thirty studies involving the scientific attitude inventory: What confidence can we
have in this instrument? Journal of Research in Science Teaching, 20(2), 141–162.
Muthén, L. K., & Muthén, B. O. (1998–2015). Mplus user's guide (7th ed.). Los Angeles, CA: Muthén
& Muthén.
Nagy, P. (1978). Subtest formation by cluster analysis of the Scientific Attitude Inventory. Journal of
Research in Science Teaching, 15, 355–360.
National Research Council. (2007). Rising above the gathering storm: Energizing and employing
America for a brighter economic future. Washington, D.C.: The National Academies Press.
National Science Board. (2001). Toward a more effective role for the U.S. government in international
science and engineering. Arlington, VA: National Science Foundation.
Navarro, M. F. C., Gonzalez, C., & Gonzalez-Pose, P. (2016). Attitudes toward science: Measurement
and psychometric properties of the Test of Science-Related Attitudes for its use in Spanish-speaking
classrooms. International Journal of Science Education, 38(9), 1459–1482.
NGSS Lead States. (2013). Next generation science standards: For states, by states. Washington, D.C.: The
National Academies Press.
Nieswandt, M. (2005). Attitudes toward science: A review of the field. In S. Alsop (Ed.), Beyond
Cartesian dualism (pp. 41–52). Dordrecht, Netherlands: Springer.
Oliver, J. S., & Simpson, R. D. (1988). Influences of attitude toward science, achievement motivation,
and science self-concept on achievement in science: A longitudinal study. Science Education, 72, 143–155.
Oppenheim, A. N. (1992). Questionnaire design, interviewing and attitude measurement. London, UK:
Pinter.
Osborne, J., Simon, S., & Collins, S. (2003). Attitudes towards science: A review of the literature and its
implications. International Journal of Science Education, 25, 1049–1079.
Osborne, J., Simon, S., & Tytler, R. (2009). Attitudes towards science: An update. Paper presented at the
annual meeting of the American Educational Research Association, San Diego, CA.
Owen, S. V., Toepperwein, M. A., Marshall, C. E., Lichtenstein, M. J., Blalock, C. L., Liu, Y., . . . Grimes,
K. (2007). Finding pearls: Psychometric reevaluation of the Simpson-Troost attitude questionnaire (STAQ).
Science Education, 92, 1076–1095.
Pearl, R. E. (1974). The present status of science attitude measurement: History, theory, and availability
of measurement instruments. School Science and Mathematics, 73, 375–381.
Pell, T., & Jarvis, T. (2001). Developing attitude to science scales for use with children of ages from five
to eleven years. International Journal of Science Education, 23, 847–862.
Potvin, P., & Hasni, A. (2014). Interest, motivation and attitude towards science and technology at K-12
levels: A systematic review of 12 years of educational research. Studies in Science Education, 50(1), 85–129.
President's Council of Advisors on Science and Technology. (2010). Prepare and Inspire: K-12
Education in Science, Technology, Engineering, and Math (STEM) for America's Future: Executive Report.
Executive Office of the President, President's Council of Advisors on Science and Technology.
President's Council of Advisors on Science and Technology. (2012). Engage to Excel: Producing One
Million Additional College Graduates with Degrees in Science, Technology, Engineering, and Mathematics.
Executive Office of the President, President's Council of Advisors on Science and Technology, STEM
Undergraduate Working Group.
Ramsden, J. M. (1998). Mission impossible?: Can anything be done about attitudes to science?
International Journal of Science Education, 20, 125–137.
Raykov, T. (2001). Bias of coefficient alpha for fixed congeneric measures with correlated errors.
Applied Psychological Measurement, 25(1), 69–76.
Richardson, V. (1996). The role of attitudes and beliefs in learning to teach. In J. Sikula (Ed.), Handbook
of research on teacher education (pp. 102–119). New York, NY: Macmillan.
Rise, J., Sheeran, P., & Skalle, S. (2006). The role of self-identity in the theory of planned behavior: A
meta-analysis. Unpublished manuscript, Norwegian Institute for Alcohol and Drug Abuse, Oslo.
Romine, W., Sadler, T., Presley, M., & Klosterman, M. (2014). Student interest in technology and
science (SITS) survey: Development, validation, and use of a new instrument. International Journal of
Science & Mathematics Education, 12(2), 261–283.
Romine, W. L., Walter, E. M., Bosse, E., & Todd, A. N. (2017). Understanding patterns of evolution
acceptance—A new implementation of the Measure of Acceptance of the Theory of Evolution (MATE) with
Midwestern university students. Journal of Research in Science Teaching, 54, 642–671.
https://doi.org/10.1002/tea.21380
Rosseel, Y. (2010). Mplus estimators: MLM and MLR [PowerPoint slides]. Retrieved from http://users.
ugent.be/~yrosseel/lavaan/utrecht2010.pdf.
Schibeci, R. A. (1984). Attitudes to science: An update. Studies in Science Education, 11, 26–59.
Schreiner, C., & Sjøberg, S. (2004). Sowing the seeds of ROSE: Background, rationale, questionnaire
development and data collection for ROSE (the Relevance of Science Education)—A comparative study of
students' views of science and science education. Oslo, Norway: University of Oslo, Department of Teacher
Education and School Development.
Scott, J. (1997). Children as respondents: Methods for improving data quality. In L. Lyberg, P. Biemer,
M. Collins, E. de Leeuw, C. Dippo, N. Schwarz, & D. Trewin (Eds.), Survey measurement and process quality
(pp. 331–350). New York, NY: Wiley.
Sebille, V., Hardouin, J. B., Le Neel, T., Kubis, G., Boyer, F., Guillemin, F., & Falissard, B. (2010).
Methodological issues regarding power of classical test theory (CTT) and item response theory (IRT)-based
approaches for the comparison of patient-reported outcomes in two groups of patients: A simulation study.
BMC Medical Research Methodology, 10(1), 24.
Sharkness, J., & DeAngelo, L. (2011). Measuring student involvement: A comparison of classical test
theory and item response theory in the construction of scales from student surveys. Research in Higher
Education, 52(5), 480–507.
Shaw, M. E., & Wright, J. M. (1968). Scales for the measurement of attitude. New York, NY: McGraw-Hill.
Shrigley, R. L., & Koballa, T. R., Jr. (1992). A decade of attitude research based on Hovland's Learning
Theory Model. Science Education, 76, 17–42.
Shrigley, R. L., Koballa, T. R., & Simpson, R. D. (1988). Defining attitude for science educators. Journal
of Research in Science Teaching, 25(8), 659–678.
Siegel, M. A., & Ranney, M. A. (2003). Developing the Changes in Attitude about the Relevance of
Science (CARS) questionnaire and assessing two high school science classes. Journal of Research in Science
Teaching, 40, 757–775.
Simpson, R. D., & Oliver, J. S. (1985). Attitude toward science and achievement motivation profiles of
male and female science students in grades six through ten. Science Education, 69(4), 511–525.
Simpson, R. D., & Troost, K. M. (1982). Influences on commitment to learning of science among
adolescent students. Science Education, 66, 763–781.
Simpson, R. D., Koballa, T. R. Jr., Oliver, J. S., & Crawley, F. E. (1994). Research on the affective
dimension of science learning. In D. L. Gabel (Ed.), Handbook of research on science teaching and learning
(pp. 211–234). New York, NY: Macmillan.
Sjøberg, S., & Schreiner, C. (2005). How do learners in different cultures relate to science and
technology? Asia-Pacific Forum on Science Learning and Teaching, 6(2), 1.
Skinner, R., & Barcikowski, R. S. (1973). Measuring specific interests in biological, physical and earth
sciences in intermediate grade levels. Journal of Research in Science Teaching, 10(2), 153–158.
Speering, W., & Rennie, L. (1996). Students' perceptions about science: The impact of transition from
primary to secondary school. Research in Science Education, 26(3), 283–298.
Steen, D. M., Peay, M. Y., & Owen, N. (1998). Predicting Australian adolescents' intentions to minimize
sun exposure. Psychology and Health, 13(1), 111–119.
Summers, R. (2012). Development and validation of an instrument to assess precollege Arabic speaking
students' attitudes toward science (Unpublished master's thesis). University of Illinois at Urbana-
Champaign, Urbana, Illinois.
Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics (4th ed.). Boston, MA: Allyn &
Bacon.
Templin, J. (2014). Applied multilevel models for cross-sectional data [PowerPoint slides]. Retrieved
10 April 2017 from http://jonathantemplin.com/
Terry, D. J., & Hogg, M. A. (1996). Group norms and the attitude-behavior relationship: A role for group
identification. Personality and Social Psychology Bulletin, 22, 776–793.
Trafimow, D. (1996). The importance of attitudes in the prediction of college students' intentions to
drink. Journal of Applied Social Psychology, 26(24), 2167–2188.
Tuan, H. L., Chin, C. C., & Shieh, S. H. (2005). The development of a questionnaire to measure students'
motivation towards science learning. International Journal of Science Education, 27(6), 639–654.
United States Department of Labor. (2007). The STEM workforce challenge. Washington, D.C.:
Author.
Vaske, J. J. (2008). Survey research and analysis: Applications in parks, recreation and human
dimensions. State College, PA: Venture Publishing, Inc.
Vogt, W. P., & Johnson, R. B. (2011). Dictionary of statistics and methodology: A non-technical guide
for the social sciences (4th ed.). Thousand Oaks, CA: SAGE Publications, Inc.
Wareing, C. (1982). Developing the WASP: Wareing attitude toward science protocol. Journal of
Research in Science Teaching, 19(8), 639–645.
Wareing, C. (1990). A survey of antecedents of attitudes toward science. Journal of Research in Science
Teaching, 27(4), 371–386.
Weber, K., Martin, M. M., & Corrigan, M. (2007). Real donors, real consent: Testing the theory of
reasoned action on organ donor consent. Journal of Applied Social Psychology, 37(10), 2435–2450.
Weinburgh, M. H., & Steele, D. (2000). The modified attitudes toward science inventory: Developing an
instrument to be used with fifth grade urban students. Journal of Women and Minorities in Science and
Engineering, 6(1), 87–94.
Wenger, E. (1998). Communities of practice: Learning as a social system. Systems Thinker, 9(5), 2–3.
Wyer, M. (2003). Intending to stay: Images of scientists, attitudes toward women and gender as
influences on persistence among science and engineering majors. Journal of Women and Minorities in
Science and Engineering, 9, 1–16.

Supporting Information
Additional Supporting Information may be found in the online version of this article.