Student was only present for one day during the intervention phase.
Psychology in the Schools DOI: 10.1002/pits
350 Poncy et al.
convincing with four instances of negative DPR phase trends. There are several reasons why we
caution readers against overinterpreting these trend data, however. There were few data points used
to calculate each trend (only three or four for each intervention phase, and even fewer when one
considers that students were absent on certain days). Also, the item sets were constricted, containing
only 12 problems each. Although constricting sets may have been conducive to the demonstration
of large and immediate intervention effects, these rapid increases in DCM can lead to a gross
underestimation of the treatment effect when determining student learning rates. For example, the Set
C baseline phase concludes with the students completing approximately 23 DCM. On the first day
of intervention they increased to 33 DCM. Unfortunately, within-phase statistical slope calculations exclude this
immediate increase of 10 DCM and measure only the additional 9-DCM increase that occurred over
the next 3 DPR sessions. Thus, such calculations fail to measure the true impact of DPR.
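To make this point concrete, the calculation can be sketched in a few lines of Python. The 23-DCM baseline endpoint and the jump to 33 DCM come from the text; the four within-phase scores below are hypothetical values chosen only to reproduce the reported additional 9-DCM rise.

```python
# Illustrative only: baseline endpoint (23 DCM) and the jump to 33 DCM are
# reported in the text; the four within-phase scores are hypothetical values
# chosen to match the additional 9-DCM rise (33 -> 42).
baseline_end = 23
dpr_sessions = [33, 36, 39, 42]

# Within-phase least-squares slope: what a statistical trend line measures.
n = len(dpr_sessions)
x_mean = (n - 1) / 2
y_mean = sum(dpr_sessions) / n
slope = sum((x - x_mean) * (y - y_mean)
            for x, y in enumerate(dpr_sessions)) / sum(
                (x - x_mean) ** 2 for x in range(n))

level_change = dpr_sessions[0] - baseline_end           # the immediate 10-DCM jump
within_phase_gain = dpr_sessions[-1] - dpr_sessions[0]  # the 9-DCM rise the slope reflects

print(slope)              # 3.0 DCM per session; the trend never "sees" the jump
print(level_change)       # 10
print(within_phase_gain)  # 9
```

Whatever hypothetical session values are substituted, the within-phase slope term can never recover the baseline-to-intervention level change, which is exactly the underestimation described above.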
DPR includes several components that may have caused, or interacted with each other to
cause, the increases in fact fluency. Previous research on explicit timing suggests that the detect
phase and the post-CCC sprint with self-graphing may have enhanced fluency (Codding et al.,
2007; Van Houten et al., 1975). Additionally, the CCC procedure may have caused the increases in
fluency (Skinner et al., 1989). Finally, each of these procedures has been shown to enhance rates of
accurate, academic responding, which in turn has been shown to enhance math fluency (Skinner et al., 1997).
Therefore, researchers should conduct component analysis studies to identify the component(s) that
caused the increases in fluency.
Although many of the aforementioned intervention components have been well researched, DPR
is unique in that it incorporates the detect pretest. Although many have indicated the need to remedy
skill deficits with evidence-based interventions, much less attention has been paid to intervention
efficiency (Bramlett, Cates, Savina, & Lauinger, 2010; Skinner, 2008, 2010). By incorporating
the metronome-paced group assessment (detect phase), DPR provides an efficient procedure for
identifying specific math-fact problems in need of remediation. This procedure allows each student
to apply the self-managed CCC intervention to only those problems in need of remediation, a strategy
that has been shown to be effective in enhancing learning rates (Cates et al., 2003; Joseph & Nist,
2006; Nist & Joseph, 2008). In the current study, students may have benefited from this attempt at
streamlining instructional time, with DPR producing an average increase of 5.5 DCM per week in
response to 12 min of instruction per day. Regardless, future researchers should conduct studies to
determine if procedures similar to the detect phase enhance instructional efficiency (e.g., learning
per instructional minute).
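As a rough illustration of that efficiency metric, the reported figures can be converted into learning per instructional minute. The five-school-day week below is our assumption, not a figure from the study.

```python
# Hedged sketch: converts the reported gains into learning per instructional
# minute. The 5-day school week is an assumption, not a figure from the study.
gain_per_week = 5.5        # DCM gained per week (reported)
minutes_per_day = 12       # instructional minutes per day (reported)
school_days_per_week = 5   # assumption

minutes_per_week = minutes_per_day * school_days_per_week   # 60
learning_rate = gain_per_week / minutes_per_week            # DCM gained per minute

print(minutes_per_week)         # 60
print(round(learning_rate, 3))  # 0.092
```

Under these assumptions, DPR yielded roughly 0.09 DCM of fluency growth per instructional minute, the kind of unit that would let researchers compare interventions on efficiency rather than effectiveness alone.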
The data from the current study suggest that DPR is an efficient method for increasing student
DCM performance. Future researchers should determine if DPR enhances learning rates (amount
learned given a stable unit of instructional time) compared with other interventions, such as explicit
timing and the taped-problems intervention (e.g., Codding et al., 2007; McCallum, Skinner, Turner,
& Saecker, 2006). Component analysis studies are needed to assess increases in learning rates
caused by different treatments and/or treatment component combinations. These studies may allow
educators to reduce or eliminate time spent on time-consuming components that do not enhance,
or cause only small increases in, automaticity and/or to allot more time to components that cause larger
increases in learning rates. Such studies may help us refine DPR and other interventions to
maximize learning rates, which may allow students to return to general education activities, as
opposed to remedial activities, more quickly (Skinner, 2008).
A related limitation that should be addressed is treatment integrity. In the current study, procedural
integrity data were collected on the experimenters' behavior. The intervention was designed,
however, for efficient and simultaneous application across the group of students, allowing each
student to efficiently identify specific idiosyncratic target problems in need of remediation during
the detect phase and to apply self-instruction procedures to remedy these deficits (e.g., CCC, self-graphing).
Remedying Math-Fact Deficits with DPR 351
The primary experimenter and teacher moved around the room observing students and prompting
them when they made procedural errors; however, because students were simultaneously working on
different problems, the experimenter and teacher could not and did not collect treatment integrity data on each student's
self-assessment, self-instruction, and self-evaluation behavior. Future researchers should attempt to
collect data on student behaviors to ensure that students are carrying out procedures as planned. These
integrity data will be critical for researchers attempting to identify effective components, as the
degree to which each component contributes to skill development cannot be assessed unless researchers
know that the component was administered with integrity.
To address external validity limitations associated with the current study, researchers should
conduct similar studies across grade levels, students (e.g., students with disabilities), and math facts
(e.g., addition, subtraction, and division). Because fact fluency may enhance learning of advanced
skills and reduce math anxiety (Cates & Rhymer, 2003; NMAP, 2008), longitudinal studies are
needed to evaluate the effects of DPR on students' advanced math achievement, their disposition
toward math (e.g., math-esteem), and the probability of students choosing to do more math or engage
in more advanced math (e.g., take advanced math electives).
CONCLUSION
Poncy and colleagues (2006) conducted a pilot study and found evidence that DPR may have
enhanced fluency. In the current study, the application of a multiple-probe design to control for threats
to internal validity extends this research, and the results provide stronger evidence that DPR procedures
caused increases in fluency, with the average student increasing her/his DCM score by 63% in
a 2-week period, using a little more than 2 hours of instructional time. Next, researchers should
attempt to enhance the external validity of DPR and conduct studies designed to identify which
components are effective in causing the increases in math-fact fluency. By identifying components
that cause increases in fluency and assessing the time required for each component, researchers may
be able to adapt DPR to allow for greater increases in math fluency while requiring less instructional
time (e.g., eliminating time-consuming components that account for little or no learning, applying
more time to components that cause the most rapid increases in learning rates). Additionally, to
help prevent basic skill deficits from hindering future learning, researchers should compare DPR
with other efficient and sustainable interventions so that educators can select interventions that
remedy skill deficits as rapidly as possible (Skinner, 2008; Skinner, Belfiore, & Watson, 1995/2002).
APPENDIX
Set A: 5 × 9 = 45, 3 × 9 = 27, 6 × 6 = 36, 3 × 4 = 12, 7 × 9 = 63, 2 × 2 = 4, 6 × 3 = 18, 7 × 2 = 14, 5 × 4 = 20, 4 × 8 = 32, 8 × 8 = 64, 7 × 5 = 35
Set B: 4 × 2 = 8, 2 × 6 = 12, 4 × 6 = 24, 9 × 2 = 18, 6 × 8 = 48, 7 × 4 = 28, 5 × 5 = 25, 9 × 9 = 81, 3 × 2 = 6, 8 × 5 = 40, 8 × 7 = 56, 7 × 3 = 21
Set C: 6 × 9 = 54, 7 × 6 = 42, 7 × 7 = 49, 3 × 5 = 15, 4 × 4 = 16, 8 × 3 = 24, 8 × 2 = 16, 3 × 3 = 9, 8 × 9 = 72, 5 × 6 = 30, 9 × 4 = 36, 2 × 5 = 10
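As a hypothetical illustration of how the detect phase narrows practice to unmastered facts, Set A can be represented as follows. The student responses and the scoring function are invented for illustration; they are not the study's actual materials or procedure.

```python
# Hypothetical sketch of detect-phase targeting. The problem set is Set A
# from the Appendix; the student's answers are invented for illustration
# (None means no response before the metronome-paced interval ended).
set_a = {
    "5x9": 45, "3x9": 27, "6x6": 36, "3x4": 12, "7x9": 63, "2x2": 4,
    "6x3": 18, "7x2": 14, "5x4": 20, "4x8": 32, "8x8": 64, "7x5": 35,
}
answers = {"5x9": 45, "3x9": 21, "7x9": None}  # remaining facts unanswered

def targets_for_ccc(item_set, answers):
    """Facts answered incorrectly, too slowly, or not at all become this
    student's individual cover-copy-compare (CCC) practice targets."""
    return [fact for fact, product in item_set.items()
            if answers.get(fact) != product]

print(len(targets_for_ccc(set_a, answers)))  # 11 of the 12 facts need practice
```

The point of the sketch is the selection step: each student practices only the facts he or she missed, which is the efficiency argument made in the discussion above.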
REFERENCES
Bliss, S. L., Skinner, C. H., McCallum, E., Saecker, L. B., Rowland, E., & Brown, K. S. (in press). Does a brief post-treatment
assessment enhance learning? An investigation of taped problems on multiplication fluency. Journal of Behavioral
Education.
Bramlett, R., Cates, G. L., Savina, E., & Lauinger, B. (2010). Assessing effectiveness and efficiency of academic interventions
in school psychology journals: 1995–2005. Psychology in the Schools, 47, 114–125.
Cates, G. L., & Rhymer, K. N. (2003). Examining the relationship between mathematics anxiety and mathematics performance:
An instructional hierarchy perspective. Journal of Behavioral Education, 12, 23–34.
Cates, G. L., Skinner, C. H., Watson, S. T., Meadows, T. J., Weaver, A., & Jackson, B. (2003). Instructional effectiveness and
instructional efficiency as considerations for data-based decision making: An evaluation of interspersing procedures.
School Psychology Review, 32, 601–617.
Codding, R. S., Shiyko, M., Russo, M., Birch, S., Fanning, E., & Jaspen, D. (2007). Comparing mathematics interventions:
Does initial level of fluency predict intervention effectiveness? Journal of School Psychology, 45, 603–617.
Cuvo, A. J. (1979). Multiple-baseline design in instructional research: Pitfalls of measurement and procedural advantages.
American Journal of Mental Deficiency, 84, 219–228.
Gagne, R. M. (1982). Some issues in psychology of mathematics instruction. Journal of Research in Mathematics Education,
14, 7–18.
Hasselbring, T. S., Goin, L. I., & Bransford, J. D. (1987). Developing automaticity. Teaching Exceptional Children, 1,
30–33.
Horner, R. D., & Baer, D. M. (1978). Multiple-probe technique: A variation of the multiple baseline. Journal of Applied
Behavior Analysis, 11, 189–196.
Howell, K. W., Fox, S. L., & Morehead, M. K. (1993). Curriculum-based evaluation teaching and decision making (2nd ed.).
Columbus, OH: Charles E. Merrill.
Joseph, L. M., & Nist, L. M. (2006). Comparing the effects of unknown-known ratios on word reading learning versus
learning rates. Journal of Behavioral Education, 15, 69–79.
McCallum, E., Skinner, C. H., Turner, H., & Saecker, L. (2006). The taped-problems intervention: Increasing multiplication
fact fluency using a low-tech, class-wide, time-delay intervention. School Psychology Review, 35, 419–434.
National Mathematics Advisory Panel. (2008). Foundations for success: The final report of the National Mathematics Advisory
Panel. Washington, DC: U.S. Department of Education.
Nist, L., & Joseph, L. M. (2008). Effectiveness and efficiency of flashcard drill instructional methods on urban first graders'
word recognition, acquisition, maintenance, and generalization. School Psychology Review, 27, 294–308.
Ownby, R. L., Wallbrown, F., D'Arti, A., & Armstrong, B. (1985). Patterns of referrals for school psychological services:
Replication of the referral problems and category system. Special Services in the School, 1(4), 53–66.
Pellegrino, J. W., & Goldman, S. R. (1987). Information processing and elementary mathematics. Journal of Learning
Disabilities, 20, 23–32, 57.
Poncy, B. C., Skinner, C. H., & O'Mara, T. (2006). Detect, practice, and repair (DPR): The effects of a class-wide intervention
on elementary students' math-fact fluency. Journal of Evidence Based Practices for Schools, 7, 47–68.
Rhymer, K. N., Henington, C., Skinner, C. H., & Looby, E. J. (1999). The effects of explicit timing on mathematics
performance in Caucasian and African-American second-grade students. School Psychology Quarterly, 14, 397–407.
Rhymer, K. N., Skinner, C. H., Henington, C., D'Reaux, R. A., & Sims, S. (1998). Effects of explicit timing on mathematics
problem completion rates in African-American third-grade students. Journal of Applied Behavior Analysis, 31, 673–677.
Shapiro, E. S. (2004). Academic skills problems: Direct assessment and intervention (3rd ed.). New York: The Guilford
Press.
Shinn, M. R. (Ed.). (1989). Curriculum-based measurement: Assessing special children. New York: The Guilford Press.
Skinner, C. H. (2002). An empirical analysis of interspersal research: Evidence, implications and applications of the discrete
task completion hypothesis. Journal of School Psychology, 40, 347–368.
Skinner, C. H. (2008). Theoretical and applied implications of precisely measuring learning rates. School Psychology Review,
37, 309–314.
Skinner, C. H. (2010). Applied comparative effectiveness researchers must measure learning rates: A commentary on
efficiency articles. Psychology in the Schools, 47, 166–172.
Skinner, C. H., Belfiore, P. J., Mace, H. W., Williams, S., & Johns, G. A. (1997). Altering response topography to increase
response efficiency and learning rates. School Psychology Quarterly, 12, 54–64.
Skinner, C. H., Belfiore, P. J., & Watson, T. S. (1995/2002). Assessing the relative effects of interventions in students with
mild disabilities: Assessing instructional time. Journal of Psychoeducational Assessment, 20, 345–356. (Reprinted from
Assessment in Rehabilitation and Exceptionality, 2, 207–220, 1995).
Skinner, C. H., McLaughlin, T. F., & Logan, P. (1997). Cover, copy, and compare: A self-managed academic intervention
effective across skills, students, and settings. Journal of Behavioral Education, 7, 295–306.
Skinner, C. H., Pappas, D. N., & Davis, K. A. (2005). Enhancing academic engagement: Providing opportunities for
responding and influencing students to choose to respond. Psychology in the Schools, 42, 389–403.
Skinner, C. H., & Skinner, A. L. (2007). Establishing an evidence base for a classroom management procedure with a series
of studies: Evaluating the color wheel. Journal of Evidence-Based Practices for Schools, 8, 88–101.
Skinner, C. H., Turco, T. L., Beatty, K. L., & Rasavage, C. (1989). Cover, copy, and compare: An intervention for increasing
multiplication performance. School Psychology Review, 18, 212–220.
Van Houten, R., Hill, S., & Parsons, M. (1975). An analysis of a performance feedback system: The effects of timing and
feedback, public posting, and praise upon academic performance and peer interaction. Journal of Applied Behavior
Analysis, 8, 449–457.
Van Houten, R., & Thompson, C. (1976). The effect of explicit timing on math performance. Journal of Applied Behavior
Analysis, 9, 227–230.
Copyright of Psychology in the Schools is the property of John Wiley & Sons, Inc. / Education and its content
may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express
written permission. However, users may print, download, or email articles for individual use.
Report Information from ProQuest
October 10 2013 22:31
Effective Programs in Elementary Mathematics: A Best-Evidence Synthesis
Author: Slavin, Robert E; Lake, Cynthia
Publication info: Review of Educational Research 78.3 (Sep 2008): 427-515.
Abstract: This article reviews research on the achievement outcomes of three types of approaches to improving
elementary mathematics: mathematics curricula, computer-assisted instruction (CAI), and instructional process
programs. Study inclusion requirements included use of a randomized or matched control group, a study
duration of at least 12 weeks, and achievement measures not inherent to the experimental treatment. Eighty-
seven studies met these criteria, of which 36 used random assignment to treatments. There was limited
evidence supporting differential effects of various mathematics textbooks. Effects of CAI were moderate. The
strongest positive effects were found for instructional process approaches such as forms of cooperative
learning, classroom management and motivation programs, and supplemental tutoring programs. The review
concludes that programs designed to change daily teaching practices appear to have more promise than those
that deal primarily with curriculum or technology alone. [PUBLICATION ABSTRACT]
Keywords: best-evidence syntheses, elementary mathematics, experiments, meta-analysis, reviews of
research.
The mathematics achievement of American children is improving but still has a long way to go. On the National
Assessment of Educational Progress (NAEP, 2005), the math scores of fourth graders steadily improved from
1990 to 2005, increasing from 12% proficient or above to 35%. Among eighth graders, the percentage of
students scoring proficient or better rose from 15% in 1990 to 20% in 2005. These trends stand in sharp
contrast to the trend in reading, which changed only slightly between 1992 and 2005.
However, although mathematics performance has grown substantially for all subgroups, the achievement gap
between African American and Hispanic students and their White counterparts remains wide. In 2005, 47% of
White fourth graders scored at or above proficient on the NAEP, but only 13% of African American students and
19% of Hispanic students scored this well.
Furthermore, the United States remains behind other developed nations in international comparisons of
mathematics achievement. For example, U.S. 15-year-olds ranked 28th on the Organization for Economic
Cooperation and Development's Program for International Student Assessment study of mathematics
achievement, significantly behind countries such as Finland, Canada, the Czech Republic, and France.
Although we can celebrate the growth of America's children in mathematics, we cannot be complacent. The
achievement gap between children of different ethnicities, and between U.S. children and those in other
countries, gives us no justification for relaxing our focus on improving mathematics for all children. Under No
Child Left Behind, schools can meet their adequate yearly progress goals only if all subgroups meet state
standards (or show adequate growth) in all subjects tested. Nationally, thousands of schools are under
increasing sanctions because the school, or one or more subgroups, is not making sufficient progress in math.
For this reason, educators are particularly interested in implementing programs and practices that have been
shown to improve the achievement of all children.
One way to reduce mathematics achievement gaps, and to improve achievement overall, is to provide low-
performing schools training and materials known to be markedly more effective than typical programs. No Child
Left Behind, for example, emphasizes the use of research-proven programs to help schools meet their
adequate yearly progress goals. Yet, for such a strategy to be effective, it is essential to know what specific
programs are most likely to improve mathematics achievement. The purpose of this review is to summarize and
interpret the evidence on elementary mathematics programs in hopes of informing policies and practices
designed to reduce achievement gaps and improve the mathematics achievement of all children.
Reviews of Effective Programs in Elementary Mathematics
The No Child Left Behind Act strongly emphasizes encouraging schools to use federal funding (such as Title I)
on programs with strong evidence of effectiveness from "scientifically-based research." No Child Left Behind
defines scientifically based research as research that uses experimental methods to compare groups using
programs to control groups, preferably with random assignment to conditions. Yet, in mathematics, what
programs meet this standard? There has never been a published review of scientific research on all types of
effective programs. There have been meta-analyses on outcomes of particular approaches to mathematics
education, such as use of educational technology (e.g., H. J. Becker, 1992; Chambers, 2003; Kulik, 2003; R.
Murphy et al., 2002), calculators (e.g., Ellington, 2003), and math approaches for at-risk children (e.g., Baker,
Gersten, & Lee, 2002; Kroesbergen & Van Luit, 2003). There have been reviews of the degree to which various
math programs correspond to current conceptions of curriculum, such as those carried out by Project 2061
evaluating middle school math textbooks (American Association for the Advancement of Science, 2000). The
What Works Clearinghouse (2006) is doing a review of research on effects of alternative math textbooks, but
this has not appeared as of this writing. However, there are no comprehensive reviews of research on all of the
programs and practices available to educators.
In 2002, the National Research Council (NRC) convened a blue-ribbon panel to review evaluation data on the
effectiveness of mathematics curriculum materials, focusing in particular on innovative programs supported by
the National Science Foundation (NSF) but also looking at evaluations of non-NSF materials (Confrey, 2006;
NRC, 2004). The NRC panel assembled research evaluating elementary and secondary math programs and
ultimately agreed on 63 quasi-experimental studies covering all grade levels, kindergarten through 12, that they
considered to meet minimum standards of quality.
The authors of the NRC (2004) report carefully considered the evidence across the 63 studies and decided that
they did not warrant any firm conclusions. Using a vote-count procedure, they reported that among 46 studies of
innovative programs that had been supported by NSF, 59% found significantly positive effects, 6% significant
negative effects, and 35% no differences. Most of these studies involved elementary and secondary programs
of the University of Chicago School Mathematics Project. For commercial programs, the corresponding
percentages were 29%, 13%, and 59%. Based on this, the report tentatively suggested that NSF-funded
programs had better outcomes. Other than this very general finding, the report was silent about the evidence on
particular programs. In addition to concerns about methodological limitations, the report maintained that it is not
enough to show differences in student outcomes; curricula, the authors argued, should be reviewed for content
by math educators and mathematicians to be sure they correspond to current conceptions of what math content
should be. None of the studies combined this kind of curriculum review with rigorous evaluation methods, so the
NRC chose not to describe the outcomes it found in the 63 evaluations that met its minimum standards (see
Confrey, 2006).
Focus of the Present Review
This review examines research on all types of math programs that are available to elementary educators today.
The intention is to place all types of programs on a common scale. In this way, we hope to provide educators
with meaningful, unbiased information that they can use to select programs and practices most likely to make a
difference with their students. In addition, the review is intended to look broadly for factors that might underlie
effective practices across programs and program types and to inform an overarching theory of effective
instruction in elementary mathematics.
The review examines three general categories of math approaches. One is mathematics curricula, in which the
main focus of the reform is on introduction of alternative textbooks. Some of the programs in this category were
developed with extensive funding by the NSF, which began in the 1990s. Such programs provide professional
development to teachers, and many include innovative instructional methods. However, the primary theory of
action behind this set of reforms is that higher level objectives, including a focus on developing critical
mathematics concepts and problem-solving skills and pedagogical aids such as the use of manipulatives and
improved sequencing of objectives, and other features of textbooks will improve student outcomes. Outcomes
of such programs have not been comprehensively reviewed previously, although they are currently being
reviewed by the What Works Clearinghouse (2006).
A second major category of programs is computer-assisted instruction (CAI), which uses technology to enhance
student achievement in mathematics. CAI programs are almost always supplementary, meaning that students
experience a full textbook-based program in mathematics and then go to a computer lab or a classroom-based
computer to receive additional instruction. CAI programs diagnose students' levels of performance and then
provide exercises tailored to students' individual needs. Their theory of action depends substantially on this
individualization and on the computer's ability to continuously assess students' progress and accommodate their
needs. This is the one category of program that has been extensively reviewed in the past, most recently by
Kulik (2003), R. Murphy et al. (2002), and Chambers (2003).
A third category is instructional process programs. This set of interventions is highly diverse, but what
characterizes its approaches is a focus on teachers' instructional practices and classroom management
strategies rather than on curriculum or technology. With two exceptions, studies of instructional process
strategies hold curriculum constant. Instructional process programs introduce variations in within-class grouping
arrangements (as in cooperative learning or tutoring) and in the amounts and uses of instructional time. Their
theories of action emphasize enhancing teachers' abilities to motivate children, to engage their thinking
processes, to improve classroom management, and to accommodate instruction to students' needs. Their
hallmark is extensive professional development, usually incorporating follow-up and continuing interactions
among the teachers themselves.
The three approaches to mathematics reform can be summarized as follows:
* change the curriculum,
* supplement the curriculum with CAI, or
* change classroom practices.
The categorization of programs in this review relates to a long-standing debate in research on technology by
Kozma (1994) and Clark (2001). Clark argued that research on technology must hold curriculum constant in
order to identify the unique contributions of the technology. Kozma replied that technology and curriculum were
so intertwined that it was not meaningful to separate them in analysis. As a practical matter, content, media, and
instructional processes are treated in different ways in the research discussed here. The mathematics curricula
vary textbooks but otherwise do not make important changes in media or instructional methods. The CAI
studies invariably consider technology and curricula together; none do as Clark suggested. Most of the
instructional process studies vary only the teaching methods and professional development, holding curriculum
constant, but a few (Team-Assisted Individualization [TAI] Math, Project CHILD, Direct Instruction, and Project
SEED) combine curricula, processes, and (in the case of Project CHILD) media.
The categorization of the programs was intended to facilitate understanding and contribute to theory, not to
restrict the review. No studies were excluded due to lack of fit with one of the categories. This review examines
research on dozens of individual programs to shed light on the broad types of mathematics reforms most likely
to enhance the mathematics achievement of elementary school children.
Review Method
This article reviews studies of elementary mathematics programs in an attempt to apply consistent, well-justified
standards of evidence to draw conclusions about effective elementary mathematics programs. The review
applies a technique called "best-evidence synthesis" (Slavin, 1986, 2007), which seeks to apply consistent,
well-justified standards to identify unbiased, meaningful quantitative information from experimental studies, and
then discusses each qualifying study, computing effect sizes but also describing the context, design, and
findings of each. Best-evidence synthesis closely resembles meta-analysis (Cooper, 1998; Lipsey & Wilson,
2001), but it requires more extensive discussion of key studies instead of primarily pooling results across many
studies. In reviewing educational programs, this distinction is particularly important, as there are typically few
studies of any particular program, so understanding the nature and quality of the contribution made by each
study is essential. The review procedures, described below, are similar to those applied by the What Works
Clearinghouse (2005, 2006). (See Slavin, 2007, for a detailed description and justification for the review
procedures used here and in syntheses by Cheung & Slavin, 2005, and by Slavin, Lake, & Groff, 2007.)
The purpose of this review is to examine the quantitative evidence on elementary mathematics programs to
discover how much of a scientific basis there is for competing claims about the effects of various programs. Our
intention is to inform practitioners, policy makers, and researchers about the current state of the evidence on
this topic and to identify gaps in the knowledge base that are in need of further scientific investigation.
Limitations of the Review
This article is a quantitative synthesis of achievement outcomes of alternative mathematics approaches. It does
not report on qualitative or descriptive evidence, attitudes, or other nonachievement outcomes. These are
excluded not because they are unimportant but because space limitations do not allow for a full treatment of all
of the information available on each program. Each report cited, and many that were not included (listed in
Appendix A), contain much valuable information, such as descriptions of settings, nonquantitative and
nonachievement outcomes, and the story of what happened in each study. The present article extracts from
these rich sources just the information on experimental-control differences on quantitative achievement
measures in order to contribute to an understanding of the likely achievement effects of using each of the
programs discussed. Studies are included or excluded and are referred to as being high or low in quality solely
based on their contributions to an unbiased, well-justified quantitative estimate of the strength of the evidence
supporting each program. For a deeper understanding of all of the findings of each study, please see the
original reports.
Literature Search Procedures
A broad literature search was carried out in an attempt to locate every study that could possibly meet the
inclusion requirements. This included obtaining all of the elementary studies cited by the NRC (2004) and by
other reviews of mathematics programs, including technology programs that teach math (e.g., Chambers, 2003;
Kulik, 2003; R. Murphy et al., 2002). Electronic searches were made of educational databases (JSTOR, ERIC,
EBSCO, PsycINFO, and Dissertation Abstracts), Web-based repositories (Google, Yahoo, Google Scholar),
and math education publishers' Web sites. Citations of studies appearing in the studies found in the first wave
were also followed up.
Effect Sizes
10 October 2013 Page 4 of 33 ProQuest
In general, effect sizes were computed as the difference between experimental and control individual student
posttests, after adjustment for pretests and other covariates, divided by the unadjusted control group standard
deviation. If the control group standard deviation was not available, a pooled standard deviation was used.
Procedures described by Lipsey and Wilson (2001) and by Sedlmeier and Gigerenzer (1989) were used to
estimate effect sizes when unadjusted standard deviations were not available, as when the only standard
deviation presented was already adjusted for covariates or when only gain score standard deviations were
available. School- or classroom-level standard deviations were adjusted to approximate individual-level
standard deviations, as aggregated standard deviations tend to be much smaller than individual standard
deviations. If pretest and posttest means and standard deviations were presented but adjusted means were not,
effect sizes for pretests were subtracted from effect sizes for posttests.
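The effect size rules described above can be sketched in code. This is an illustrative sketch only, not the authors' actual computation; all function and variable names are invented for illustration.

```python
import math

def pooled_sd(sd_e: float, n_e: int, sd_c: float, n_c: int) -> float:
    """Pooled standard deviation, the fallback when the unadjusted
    control-group SD alone is not reported."""
    return math.sqrt(((n_e - 1) * sd_e**2 + (n_c - 1) * sd_c**2) / (n_e + n_c - 2))

def effect_size(adj_mean_e: float, adj_mean_c: float, sd: float) -> float:
    """Covariate-adjusted posttest difference divided by an unadjusted SD
    (the control-group SD where available, else a pooled SD)."""
    return (adj_mean_e - adj_mean_c) / sd

def pre_post_effect_size(pre_e, pre_c, post_e, post_c, sd_pre, sd_post):
    """When adjusted means are unavailable: pretest effect size is
    subtracted from posttest effect size, as described above."""
    return effect_size(post_e, post_c, sd_post) - effect_size(pre_e, pre_c, sd_pre)
```

For example, a study reporting adjusted posttest means of 75 (experimental) versus 70 (control) with a control-group SD of 10 would yield an effect size of +0.50.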
Criteria for Inclusion
Criteria for inclusion of studies in this review were as follows:
1. The studies involved elementary (K-5) children, plus sixth graders if they were in elementary schools.
2. The studies compared children taught in classes using a given mathematics program to those in control
classes using an alternative program or standard methods.
3. Studies could have taken place in any country, but the report had to be available in English.
4. Random assignment or matching with appropriate adjustments for any pretest differences (e.g., ANCOVA)
had to be used. Studies without control groups, such as pre-post comparisons and comparisons to "expected"
gains, were excluded. Studies with pretest differences of more than 50% of a standard deviation were excluded,
because even with ANCOVAs, large pretest differences cannot be adequately controlled for, as underlying
distributions may be fundamentally different. (See the Methodological Issues section, later in the article, for a
discussion of randomized and matched designs.)
5. The dependent measures included quantitative measures of mathematics performance, such as standardized
mathematics measures. Experimenter-made measures were accepted if they were described as comprehensive
measures of mathematics, which would be fair to the control groups, but measures of math objectives inherent
to the program (but unlikely to be emphasized in control groups) were excluded. For example, a study of CAI by
Van Dusen and Worthen (1994) found no differences on a standardized test (effect size [ES] = +0.01) but a
substantial difference on a test made by the software developer (ES = +0.35). The software-specific measure
was excluded, as it probably focused on objectives and formats practiced in the CAI group but not in the control
group. (See the Methodological Issues section, later in the article, for a discussion of this issue.)
6. A minimum treatment duration of 12 weeks was required. This requirement is intended to focus the review on
practical programs intended for use for the whole year rather than on brief investigations. Brief studies may not
allow programs intended to be used over the whole year to show their full effect. On the other hand, brief
studies often advantage experimental groups that focus on a particular set of objectives during a limited time
period while control groups spread that topic over a longer period. For example, a 30-day experiment by
Cramer, Post, and delMas (2002) evaluated a fractions curriculum that is part of the Rational Number Project.
Control teachers using standard basals were asked to delay their fractions instruction to January to match the
exclusive focus of the experimental group on fractions, but it seems unlikely that their instruction was as
intensively focused on fractions, the only skill assessed.
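The six inclusion criteria above amount to a screening checklist. The sketch below encodes them as a simple filter; the `Study` fields are hypothetical stand-ins invented for illustration and do not come from the review itself.

```python
from dataclasses import dataclass

@dataclass
class Study:
    max_grade: int                 # highest grade studied (K = 0)
    has_control_group: bool        # criterion 2
    available_in_english: bool     # criterion 3
    randomized_or_matched: bool    # criterion 4
    pretest_gap_sd: float          # pretest difference, in control-SD units
    measure_fair_to_control: bool  # criterion 5
    duration_weeks: int            # criterion 6

def meets_inclusion_criteria(s: Study) -> bool:
    return (s.max_grade <= 6                 # elementary K-5, or 6th graders in elementary schools
            and s.has_control_group          # no pre-post-only or "expected gains" designs
            and s.available_in_english
            and s.randomized_or_matched      # random assignment or matching with adjustment
            and s.pretest_gap_sd <= 0.5      # pretest gaps over 50% of an SD excluded
            and s.measure_fair_to_control    # no treatment-inherent measures
            and s.duration_weeks >= 12)      # minimum 12-week duration
```

A study meeting every criterion passes; for instance, a 24-week matched K-5 study with a 0.8-SD pretest gap would be excluded under criterion 4.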
Appendix A lists studies that were considered but excluded according to these criteria, as well as the reasons
for exclusion. Appendix B lists abbreviations used throughout this review.
Methodological Issues in Studies of Elementary Mathematics Programs
The three types of mathematics programs reviewed here (mathematics curricula, CAI programs, and
instructional process programs) suffer from different characteristic methodological problems (see Slavin, 2007).
Across most of the evaluations, lack of random assignment is a serious problem. Matched designs are used in
most studies that met the inclusion criteria, and matching leaves studies open to selection bias. That is, schools
or teachers usually choose to implement a given experimental program and are compared to schools or
teachers who did not choose the program. The fact of this self-selection means that no matter how well
experimental groups and control groups are matched on other factors, the experimental group is likely to be
more receptive to innovation, more concerned about math, have greater resources for reform, or otherwise have
advantages that cannot be controlled for statistically. Alternatively, it is possible that schools that would choose
a given program might have been dissatisfied with results in the past and might therefore be less effective than
comparable schools. Either way, matching reduces internal validity by allowing for the possibility that outcomes
are influenced by whatever (unmeasured) factors led the school or teacher to choose the program. It affects
external validity in limiting the generalization of findings to schools or teachers who similarly chose to use the
program.
Garden-variety selection bias is bad enough in experimental design, but many of the studies suffered from
design features that add to concerns about selection bias. In particular, many of the curriculum evaluations used
a post hoc design, in which a group of schools using a given program, perhaps for many years, is compared
after the fact to schools that matched the experimental program at pretest or that matched on other variables,
such as poverty or reading measures. The problem is that only the "survivors" are included in the study.
Schools that bought the materials, received the training, but abandoned the program before the study took
place are not in the final sample, which is therefore limited to more capable schools. As one example of this,
Waite (2000), in an evaluation of Everyday Mathematics, described how 17 schools in a Texas city originally
received materials and training. Only 7 were still implementing it at the end of the year, and 6 of these agreed to
be in the evaluation. We are not told why the other schools dropped out, but it is possible that the staff members
of the remaining 6 schools may have been more capable or motivated than those at schools that dropped the
program. The comparison group in the same city was likely composed of fhe full range of more and less
capable school staff members, and they presumably had the same opportunity to implement Everyday
Mathematics but chose not to do so. Other post hoc studies, especially those with multiyear implementations,
must also have had some number of dropouts but typically did not report how many schools there were at first
and how many dropped out. There are many reasons schools may have dropped out, but it seems likely that
any school staff able to implement any innovative program for several years is more capable, more reform
oriented, or better led than those unable to do so or (even worse) than those that abandoned the program
because it was not working. As an analog, imagine an evaluation of a diet regimen that studied only people who
kept up the diet for a year. There are many reasons a person might abandon a diet, but chief among them is
that it is not working, so looking only at the nondropouts would bias such a study.
Worst of all, post hoc studies usually report outcome data selected from many potential experimental and
comparison groups and may therefore report on especially successful schools using the program or matched
schools that happen to have made particularly small gains, making an experimental group look better by
comparison. The fact that researchers in post hoc studies often have pretest and posttest data readily available
on hundreds of potential matches, and may deliberately or inadvertently select the schools that show the
program to best effect, means that readers must take results from after-the-fact comparisons with a grain of salt.
Finally, because post hoc studies can be very easy and inexpensive to do, and are usually contracted for by
publishers rather than supported by research grants or conducted for dissertations, such studies are likely to be
particularly subject to the "file drawer" problem. That is, post hoc studies that fail to find expected positive
effects are likely to be quietly abandoned, whereas studies supported by grants or produced for dissertations
will almost always result in a report of some kind. The file drawer problem has been extensively described in
research on meta-analyses and other quantitative syntheses (see, e.g., Cooper, 1998), and it is a problem in all
research reviews, but it is much more of a problem with post hoc studies.
Despite all of these concerns, post hoc studies were reluctantly included in this review for one reason: Without
them, there would be no evidence at all concerning most of the commercial textbook series used by the vast
majority of elementary schools. As long as the experimental and control groups were well matched at pretest on
achievement and demographic variables, and met other inclusion requirements, we decided to include them,
but readers should be very cautious in interpreting their findings. Prospective studies, in which experimental and
control groups were designated in advance and outcomes are likely to be reported whatever they turn out to be,
are always to be preferred to post hoc studies, other factors being equal.
Another methodological limitation of almost all of the studies in this review is analysis of data at the individual-
student level. The treatments are invariably implemented at the school or classroom levels, and student scores
within schools and classrooms cannot be considered independent. In clustered settings, individual-level
analysis does not introduce bias, but it does greatly overstate statistical significance, and in studies involving a
small number of schools or classes it can cause treatment effects to be confounded with school or classroom
effects. In an extreme form, a study comparing, say, one school or teacher using Program A and one using
Program B may have plenty of statistical power at the student level, but treatment effects cannot be separated
from characteristics of the schools or teachers.
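The degree to which individual-level analysis overstates significance in clustered data can be quantified with the standard design effect (Kish): each cluster of correlated students contributes far fewer independent observations than its head count suggests. The sketch below is illustrative; the intraclass correlation (ICC) value is an assumption, not a figure from the review.

```python
def design_effect(cluster_size: float, icc: float) -> float:
    """Kish design effect: the factor by which a naive individual-level
    analysis overstates the effective sample size."""
    return 1 + (cluster_size - 1) * icc

def effective_n(n_students: int, cluster_size: float, icc: float) -> float:
    """Number of effectively independent observations in clustered data."""
    return n_students / design_effect(cluster_size, icc)

# Illustration (assumed ICC of 0.20): 1,000 students taught in classes of 25
# behave, for significance-testing purposes, like only ~172 independent cases.
```

This is why a study with "plenty of statistical power at the student level" can still be unable to separate treatment effects from school or classroom effects.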
Several studies did randomly assign groups of students to treatments but nevertheless analyzed at the
individual-student level. The random assignment in such studies is beneficial because it essentially eliminates
selection bias. However, analysis at the student level, rather than at the level of random assignment, still
confounds treatment effects and school or classroom effects, as noted earlier. We call such studies randomized
quasi-experiments and consider them more methodologically rigorous, all other things being equal, than
matched studies, but less so than randomized studies in which analysis is at the level of random assignment.
Some of the qualifying studies, especially of instructional process programs, were quite small, involving a
handful of schools or classes. Beyond the problem of confounding, small studies often allow the developers or
experimenters to be closely involved in implementation, creating far more faithful and high-quality
implementations than would be likely in more realistic circumstances. Unfortunately, many of the studies that
used random assignment to treatments were very small, often with just one teacher or class per treatment. Also,
the file drawer problem is heightened with small studies, which are likely to be published or otherwise reported
only if their results are positive (see Cooper, 1998).
Another methodological problem inherent to research on alternative mathematics curricula relates to outcome
measures. In a recent criticism of the What Works Clearinghouse, Schoenfeld (2006) expressed concern that
because most studies of mathematics curricula use standardized tests or state accountability tests focused
more on traditional skills than on concepts and problem solving, there is a serious risk of "false negative" errors,
which is to say that studies might miss true and meaningful effects on unmeasured outcomes characteristic of
innovative curricula (also see Confrey, 2006, for more on this point). This is indeed a serious problem, and there
is no solution to it. Measuring content taught only in the experimental group risks false positive errors, just as
use of standardized tests risks false negatives. In the present review, only outcome measures that assess
content likely to have been covered by all groups are considered; measures inherent to the treatment are
excluded. However, many curriculum studies include outcomes for subscales, such as computation, concepts
and applications, and problem solving, and these outcomes are separately reported in this review. Therefore, if
an innovative curriculum produces, for example, better outcomes on problem solving but no differences on
computation, that might be taken as an indication that it is succeeding at least in its area of emphasis.
A total of 87 studies met the inclusion criteria. Tables 1, 2, and 3 list all the qualifying studies. Within sections
on each program, studies that used random assignment (if any) are listed first, then randomized quasi-
experiments, then prospective matched studies, and finally post hoc matched studies. Within these categories,
studies with larger sample sizes are listed first.
This article discusses conclusions drawn from the qualifying studies, but study-by-study discussions are withheld
so that the article will fit within the space requirements of the Review of Educational Research. These
descriptions appear in a longer version of this article, which can be found at
http://www.bestevidence.org/_images/word_docs/Eff%20progs%20ES%20math%20Version%201.2%20for%20BEE%2002%2009%2007.doc.
Perhaps the most common approach to reform in mathematics involves adoption of reform-oriented textbooks,
along with appropriate professional development. Programs that have been evaluated fall into three categories.
One is programs developed under funding from the NSF that emphasize a constructivist philosophy, with a
strong emphasis on problem solving, manipulatives, and concept development and a relative de-emphasis on
algorithms. At the opposite extreme is Saxon Math, a back-to-basics curriculum that emphasizes building
students' confidence and skill in computation and word problems. Finally, there are traditional commercial
textbook programs.
The reform-oriented programs supported by NSF, especially Everyday Mathematics, have been remarkably
successful in making the transition to widespread commercial application. Sconiers, Isaacs, Higgins, McBride,
and Kelso (2003) estimated that, in 1999, 10% of all schools were using one of three programs that had been
developed under NSF funding and then commercially published. That number is surely higher as of this writing.
Yet, experimental-control evaluations of these and other curricula that meet the most minimal standards of
methodological quality are very few. Only five studies of the NSF programs met the inclusion standards, and all
but one of these was a post hoc matched comparison.
This section reviews the evidence on mathematics curricula. Overall, 13 studies met the inclusion criteria, of
which only 2 used random assignment. Table 1 summarizes the methodological characteristics and outcomes
of these studies. Descriptions of each study can be seen at
http://www.bestevidence.org/_images/word_docs/Eff%20progs%20ES%20math%20Version%201.2%20for%20BEE%2002%2009%2007.doc.
With a few exceptions, the studies that compared alternative mathematics curricula are of marginal
methodological quality. Ten of the 13 qualifying studies used post hoc matched designs in which control
schools, classes, or students were matched with experimental groups after outcomes were known. Even though
such studies are likely to overstate program outcomes, the outcomes reported in these studies are modest. The
median effect size was only +0.10. The enormous ARC study (Sconiers et al., 2003) found an average effect
size of only +0.10 for the three most widely used of the NSF-supported mathematics curricula, taken together.
Riordan and Noyce (2001), in a post hoc study of Everyday Mathematics, did find substantial positive effects
(ES = +0.34) in comparison to controls for schools that had used the program for 4 to 6 years, but effects for
schools that used the program for 2 to 3 years were much smaller (ES = +0.15). This finding may suggest that
schools need to implement this program for 4 to 6 years to see a meaningful benefit, but the difference in
outcomes may just be a selection artifact, due to the fact that schools that were not succeeding may have
dropped the program before the 4th year. The evidence for the impacts of all of the curricula on standardized
tests is thin. The median effect size across five studies of the NSF-supported curricula is only +0.12, very
similar to the findings of the ARC study.
The reform-oriented math curricula may have positive effects on outcomes not assessed by standardized tests,
as suggested by Schoenfeld (2006) and Confrey (2006). However, the results on standardized and state
accountability measures do not suggest differentially strong impacts on outcomes such as problem solving or
concepts and applications that one might expect, as these are the focus of the NSF curricula and other reform
curricula.
Evidence supporting Saxon Math, the very traditional, algorithmically focused curriculum that is the polar
opposite of the NSF-supported models, was lacking. The one methodologically adequate study evaluating the
program, by Resendez and Azin (2005), found no differences on Georgia state tests between elementary
students who experienced Saxon Math and those who used other texts.
A review of research on middle and high school math programs by Slavin et al. (2007) using methods
essentially identical to those used in the present review also found very limited impacts of alternative textbooks.
Across 38 qualifying studies, the median effect size was only +0.05. The effect sizes were also +0.05 in 24
studies of NSF-supported curricula and were +0.12 in 11 studies of Saxon Math.
More research is needed on all of these programs, but the evidence to date suggests a surprising conclusion:
despite all the heated debates about the content of mathematics, there is limited high-quality evidence
supporting differential effects of different math curricula.
Computer-Assisted Instruction
A long-standing approach to improving the mathematics performance of elementary students is computer-
assisted instruction, or CAI. Over the years, CAI strategies have evolved from limited drill-and-practice
programs to sophisticated integrated learning systems that combine computerized placement and instruction.
Typically, CAI materials have been used as supplements to classroom instruction and are often used only a few
times a week. Some of the studies of CAI in math have involved only 30 minutes per week. What CAI primarily
adds is the ability to identify children's strengths and weaknesses and then give them self-instructional exercises
designed to fill in gaps. In a hierarchical subject like mathematics, especially computation, this may be of
particular importance.
A closely related strategy, computer-managed learning systems, is also reviewed in this section as a separate
subcategory.
As noted earlier, CAI is one of the few categories of elementary mathematics interventions that has been
reviewed extensively. Most recently, for example, Kulik (2003) reviewed research on the uses of CAI in reading
and math and concluded that studies supported the effectiveness of CAI for math but not for reading. R. Murphy
et al. (2002) concluded that CAI was effective in both subjects, but with much larger effects in math than in
reading. A recent large, randomized evaluation of various CAI programs found no effects on student
achievement in reading or math (Dynarski et al., 2007).
Table 2 summarizes qualifying research on several approaches to CAI in elementary mathematics. Many of
these involved earlier versions of CAI that no longer exist, but it is still useful to be aware of the earlier evidence,
as many of the highest quality studies were done in the 1980s and early 1990s. Overall, 38 studies of CAI met
the inclusion criteria, and 15 of these used randomized or randomized quasi-experimental designs. In all cases,
control groups used nontechnology approaches, such as traditional textbooks. For descriptions of each study,
see http://www.bestevidence.org/_images/word_docs/Eff%20progs%20ES%20math%20Version%201.2%20for%20BEE%2002%2009%2007.doc.
In sheer numbers of studies, CAI is the most extensively studied of all approaches to elementary math reform.
Most studies of CAI find positive effects, especially on measures of math computations. Across all studies from
which effect sizes could be computed, these effects are meaningful in light of the fact that CAI is a supplemental
approach, rarely occupying more than three 30-minute sessions weekly (and often alternating with CAI reading
instruction). The median effect size was +0.19. This is larger than the median found for the curriculum studies
(+0.10), and it is based on many more studies (38 vs. 13) and on many more randomized and randomized
quasi-experimental studies (15 vs. 2). However, it is important to note that most of these studies are quite old,
and they usually evaluated programs that are no longer commercially available.
In the Slavin et al. (2007) review of middle and high school math programs, the median effect size across 36
qualifying studies of CAI was +0.17, nearly identical to the estimate for elementary CAI studies.
Although outcomes of studies of CAI are highly variable, most studies do find positive effects, and none
significantly favored a control group. Although the largest number of studies has involved Jostens, there is not
enough high-quality evidence on particular CAI approaches to recommend any one over another, at least based
on student outcomes on standardized tests.
In studies that break down their results by subscales, outcomes are usually stronger for computation than for
concepts or problem solving. This is not surprising, as CAI is primarily used as a supplement to help children
with computation skills. Because of the hierarchical nature of math computation, CAI has a special advantage in
this area because of its ability to assess students and provide them with individualized practice on skills that
they have the prerequisites to learn but have not yet learned.
Instructional Process Strategies
Many researchers and reformers have sought to improve children's mathematics achievement by giving
teachers extensive professional development on the use of instructional process strategies, such as cooperative
learning, classroom management, and motivation strategies (see Hill, 2004). Curriculum reforms and CAI also
typically include professional development, of course, but the strategies reviewed in this section are primarily
characterized by a focus on changing what teachers do with the curriculum they have, not changing the
curriculum.
The programs in this section are highly diverse, so they are further divided into seven categories:
1. cooperative learning,
2. cooperative/individualized programs,
3. direct instruction,
4. mastery learning,
5. professional development focused on math content,
6. professional development focused on classroom management and motivation, and
7. supplemental programs.
A total of 36 studies evaluated instructional process programs. Table 3 summarizes characteristics and
outcomes of these studies. For discussions of each study, see
http://www.bestevidence.org/_images/word_docs/Eff%20progs%20ES%20math%20Version%201.2%20for%20BEE%2002%2009%2007.doc.
Research on instructional process strategies tends to be of much higher quality than research on mathematics
curricula or CAI. Out of 36 studies, 19 used randomized or randomized quasi-experimental designs. Many had
small samples and/or short durations, and in some cases there were confounds between treatments and
teachers, but even so, these are relatively high-quality studies, most of which were published in peer-reviewed
journals. The median effect size for randomized studies was +0.33, and the median was also +0.33 for all
studies taken together.
The research on instructional process strategies identified several methods with strong positive outcomes on
student achievement. In particular, the evidence supports various forms of cooperative learning. The median
effect size across 9 studies of cooperative learning was +0.29 for elementary programs (and +0.38 for middle
and high school programs; see Slavin et al., 2007). Particularly positive effects were found for Classwide Peer
Tutoring (CWPT) and Peer-Assisted Learning Strategies (PALS), which are pair learning methods, and Student
Teams-Achievement Divisions (STAD) and TAI Math, which use groups of four. Project CHILD, which also uses
cooperative learning, was successfully evaluated in 1 study.
Two programs that focus on classroom management and motivation also had strong evidence of effectiveness
in large studies. These are the Missouri Mathematics Project, with three large randomized and randomized
quasi-experiments with positive outcomes, and Consistency Management & Cooperative Discipline (CMCD).
Positive effects were also seen for Dynamic Pedagogy and Cognitively Guided Instruction (CGI), which focus on
helping teachers understand math content and pedagogy. Four small studies supported direct instruction
models, Connecting Math Concepts, and User-Friendly Direct Instruction.
Programs that supplemented traditional classroom instruction also had strong positive effects. These include
small-group tutoring for struggling first graders and Project SEED, which provides a second math period
focused on high-level math concepts.
The research on these instructional process strategies suggests that the key to improving math achievement
outcomes is changing the way teachers and students interact in the classroom. It is important to be clear that
the well-supported programs are not ones that just provide generic professional development or professional
development focusing on mathematics content knowledge. What characterizes the successfully evaluated
programs in this section is a focus on how teachers use instructional process strategies, such as using time
effectively, keeping children productively engaged, giving children opportunities and incentives to help each
other learn, and motivating students to be interested in learning mathematics.
Overall Patterns of Outcomes
Across all categories, 87 studies met the inclusion criteria for this review, of which 36 used randomized or
randomized quasi-experimental designs: 13 studies (2 randomized) of mathematics curricula, 38 (11
randomized, 4 randomized quasi-experimental) of CAI, and 36 (9 randomized, 10 randomized quasi-
experimental) of instructional process programs. The median effect size was +0.22. The effect size for
randomized and randomized quasi-experimental studies (n = 36) was +0.29, and for fully randomized studies (n
= 22) it was +0.28, indicating that randomized studies generally produced effects similar to those of matched
quasi-experimental studies. Recall that the matched studies had to meet stringent methodological standards, so
the similarity between randomized and matched outcomes reinforces the observation made by Glazerman,
Levy, and Myers (2002) and Torgerson (2006) that high-quality studies with well-matched control groups
produce outcomes similar to those of randomized experiments.
Overall effect sizes differed, however, by type of program. Median effect sizes for all qualifying studies were
+0.10 for mathematics curricula, +0.19 for CAI programs, and +0.33 for instructional process programs. Effect
sizes were above the overall median (+0.22) in 15% of studies of mathematics curricula, 37% of CAI studies,
and 72% of instructional process programs. The difference in effect sizes between the instructional process and
other programs is statistically significant (χ² = 15.71, p < .001).
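The reported chi-square can be roughly checked against the percentages above. The counts below are inferred from those percentages (15% of 13, 37% of 38, and 72% of 36 studies above the overall median of +0.22), so they are approximations that may be off by one; a standard Pearson chi-square on this reconstructed 2 × 3 table lands close to the reported 15.71.

```python
# Studies above vs. at-or-below the overall median effect size, by program
# type (curricula, CAI, instructional process). Counts inferred from the
# reported percentages, not taken directly from the review's tables.
above = [2, 14, 26]
below = [11, 24, 10]

def pearson_chi2(row1, row2):
    """Pearson chi-square statistic for a 2 x k contingency table."""
    n = sum(row1) + sum(row2)
    t1, t2 = sum(row1), sum(row2)
    chi2 = 0.0
    for o1, o2 in zip(row1, row2):
        col = o1 + o2
        e1, e2 = t1 * col / n, t2 * col / n   # expected counts
        chi2 += (o1 - e1) ** 2 / e1 + (o2 - e2) ** 2 / e2
    return chi2

print(round(pearson_chi2(above, below), 2))  # 15.89, close to the reported 15.71
```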
With only a few exceptions, effects were similar for disadvantaged and middle-class students and for students of
different ethnic backgrounds. Effects were also generally similar on all subscales of math tests, except that CAI
and TAI Math generally had stronger effects on measures of computations than on measures of concepts and
applications.
Summarizing Evidence of Effectiveness for Current Programs
In several recent reviews of research on outcomes of various educational programs, reviewers have
summarized program outcomes using a variety of standards. This is not as straightforward a procedure as
might be imagined, as several factors must be balanced (Slavin, 2008). These include the number of studies,
the average effect sizes, and the methodological quality of studies.
The problem is that the number of studies of any given program is likely to be small, so simply averaging effect
sizes (as in meta-analyses) is likely to overemphasize small, biased, or otherwise flawed studies with large
effect sizes. For example, in the present review, there are several very small matched studies with effect sizes
in excess of +1.00, and these outcomes cannot be allowed to overbalance large and randomized studies with
more modest effects. Emphasizing numbers of studies can similarly favor programs with many small, matched
studies, which may collectively be biased toward positive findings by file drawer effects. The difference in
findings for CAI programs between small numbers of randomized experiments and large numbers of matched
experiments shows the danger of emphasizing numbers of studies without considering quality. Finally,
emphasizing methodological factors alone risks eliminating most studies or emphasizing studies that may be
randomized but are very small, confounding teacher and treatment effects, or may be brief, artificial, or
otherwise not useful for judging the likely practical impact of a treatment.
In this review, we applied a procedure for characterizing the strength of the evidence favoring each program
that attempts to balance methodological, replication, and effect size factors. Following the What Works
Clearinghouse (2006), the Comprehensive School Reform Quality Center (2005), and Borman, Hewes,
Overman, and Brown (2003), but placing a greater emphasis on methodological quality, we categorized
programs as follows (see Slavin, 2007):
* Strong evidence of effectiveness: at least one large randomized or randomized quasi-experimental study, or
two smaller studies, with a median effect size of at least +0.20. A large study is defined as one in which at least
10 classes or schools, or 250 students, were assigned to treatments.
* Moderate evidence of effectiveness - at least one randomized or randomized quasi-experimental study, or a
total of two large or four small qualifying matched studies, with a median effect size of at least +0.20.
* Limited evidence of effectiveness - at least one qualifying study of any design with a statistically significant
effect size of at least +0.10.
* Insufficient evidence of effectiveness - one or more qualifying study of any design with nonsignificant
outcomes and a median effect size less than +0.10.
N No qualifying studies.
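Read as a decision procedure, the categories above can be sketched in code. This is our own simplification, not the review's actual coding instrument; the `Study` fields and the branching order are assumptions distilled from the definitions above, and some qualifying-study details are omitted:

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class Study:
    randomized: bool     # randomized or randomized quasi-experimental design (assumed field)
    large: bool          # >= 10 classes/schools or >= 250 students assigned (assumed field)
    effect_size: float
    significant: bool    # statistically significant outcome

def evidence_category(studies: list[Study]) -> str:
    """Simplified sketch of the review's categorization; not the authors' tool."""
    if not studies:
        return "No qualifying studies"
    med = median(s.effect_size for s in studies)
    randomized = [s for s in studies if s.randomized]
    if med >= 0.20:
        # Strong: one large randomized study, or two smaller ones.
        if any(s.large for s in randomized) or len(randomized) >= 2:
            return "Strong evidence of effectiveness"
        # Moderate: one randomized study, or two large / four small matched studies.
        if randomized or sum(s.large for s in studies) >= 2 or len(studies) >= 4:
            return "Moderate evidence of effectiveness"
    # Limited: any qualifying study with a significant ES of at least +0.10.
    if any(s.significant and s.effect_size >= 0.10 for s in studies):
        return "Limited evidence of effectiveness"
    return "Insufficient evidence of effectiveness"

# One large randomized study with ES = +0.33 meets the "strong" bar.
print(evidence_category([Study(True, True, 0.33, True)]))
```

The point of the sketch is only that the categories trade off three factors at once: design quality (randomized vs. matched), scale and replication (large vs. several small studies), and the median effect size threshold.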
Table 4 summarizes currently available programs falling into each of these categories (programs are listed in
alphabetical order within each category). Note that programs that are not currently available, primarily the older
CAI programs, do not appear in the table, as it is intended to represent the range of options from which today's
educators might choose.
In line with the previous discussions, the programs represented in each category are strikingly different. In the
Strong Evidence category appear five instructional process programs, four of which are cooperative learning
programs: Classwide Peer Tutoring, PALS, STAD, and TAI Math. The fifth program is a classroom management
and motivation model, the Missouri Mathematics Project.
The Moderate Evidence category is also dominated by instructional process programs, including two
supplemental designs, small-group tutoring, and Project SEED, as well as CGI, which focuses on training
teachers in mathematical concepts, and CMCD, which focuses on school and classroom management and
motivation. Connecting Math Concepts, an instructional process program tied to a specific curriculum, also
appears in this category. The only current CAI program with this level of evidence is Classworks.
The Limited Evidence category includes five math curricula: Everyday Mathematics, Excel Math, Growing With
Mathematics, Houghton Mifflin Mathematics, and Knowing Mathematics. Dynamic Pedagogy, Project CHILD,
and Mastery Learning - instructional process programs - are also in this category, along with Lightspan and
Accelerated Math. Four programs, listed under the Insufficient Evidence category, had only one or two studies,
which failed to find significant differences. The final category, No Qualifying Studies, lists 48 programs.
Discussion
The research reviewed in this article evaluates a broad range of strategies for improving mathematics
achievement. Across all topics, the most important conclusion is that there are fewer high-quality studies than
one would wish for. Although a total of 87 studies across all programs qualified for inclusion, there were small
numbers of studies on each particular program. There were 36 studies, 19 of which involved instructional
process strategies, that randomly assigned schools, teachers, or students to treatments, but many of these
tended to be small and therefore to confound treatment effects with school and teacher effects. There were
several large-scale, multiyear studies, especially of mathematics curricula, but these tended to be post hoc
matched quasi-experiments, which can introduce serious selection bias. Clearly, more randomized evaluations
of programs used on a significant scale over a year or more are needed.
This being said, there were several interesting patterns in the research on elementary mathematics programs.
One surprising observation is the lack of evidence that it matters very much which textbook schools choose
(median ES = +0.10 across 13 studies). Quality research is particularly lacking in this area, but the mostly
matched post hoc studies that do exist find modest differences between programs. NSF-funded curricula such
as Everyday Mathematics, Investigations, and Math Trailblazers might have been expected to at least show
significant evidence of effectiveness for outcomes such as problem solving or concepts and applications, but
the quasi-experimental studies that qualified for this review find little evidence of strong effects even in these
areas. The large national study of these programs by Sconiers et al. (2003) found effect sizes of only +0.10 for
all outcomes, and the median effect size for 5 studies of NSF-funded programs was +0.12.
It is possible that the state assessments used in the Sconiers et al. (2003) study and other studies may have
failed to detect some of the more sophisticated skills taught in NSF-funded programs but not other programs, a
concern expressed by Schoenfeld (2006) in his criticism of the What Works Clearinghouse. However, in light of
the small effects seen on outcomes such as problem solving, probability and statistics, geometry, and algebra, it
seems unlikely that misalignment between the NSF-sponsored curricula and the state tests accounts for the
modest outcomes.
Studies of CAI found a median effect size (ES = +0.19) higher than that found for mathematics curricula, and
there were many more high-quality studies of CAI. A number of studies showed substantial positive effects of
using CAI strategies, especially for computation, across many types of programs. However, the highest quality
studies, including the few randomized experiments, mostly found no significant differences.
CAI effects in math, although modest in median effect size, are important in light of the fact that in most studies
CAI was used for only about 30 minutes three times a week or less. The conclusion that CAI is effective in math
is in accord with the findings of a recent review of research on technology applications by Kulik (2003), who
found positive effects of CAI in math but not in reading.
The most striking conclusion from the review, however, is the evidence supporting various instructional process
strategies. Twenty randomized experiments and randomized quasi-experiments found impressive effects
(median ES = +0.33) for programs that target teachers' instructional behaviors rather than math content alone.
Several categories of programs were particularly supported by high-quality research. Cooperative learning
methods, in which students work in pairs or small teams and are rewarded based on the learning of all team
members, were found to be effective in 9 well-designed studies, 8 of which used random assignment, with a
median effect size of +0.29. These included studies of Classwide Peer Tutoring, PALS, and STAD. Team
Accelerated Instruction, which combines cooperative learning and individualization, also had strong evidence of
effectiveness. Another well-supported approach included programs that focus on improving teachers' skills in
classroom management, motivation, and effective use of time, in particular the Missouri Mathematics Project
and CMCD. Studies supported programs focusing on helping teachers introduce mathematics concepts
effectively, such as CGI, Dynamic Pedagogy, and Connecting Math Concepts.
Supplementing classroom instruction with well-targeted supplementary instruction is another strategy with
strong evidence of effectiveness. In particular, small-group tutoring for first graders struggling in math and
Project SEED, which provides an additional period of instruction from professional mathematicians, have strong
evidence.
The debate about mathematics reform has focused primarily on curriculum, not on professional development or
instruction (see, e.g., American Association for the Advancement of Science, 2000; NRC, 2004). Yet this review
suggests that in terms of outcomes on traditional measures, such as standardized tests and state accountability
assessments, curriculum differences appear to be less consequential than instructional differences are. This is
not to say that curriculum is unimportant. There is no point in teaching the wrong mathematics. The research on
the NSF-supported curricula is at least comforting in showing that reform-oriented curricula are no less effective
than traditional curricula on traditional measures, and they may be somewhat more effective, so their
contribution to nontraditional outcomes does not detract from traditional ones. The movement led by the
National Council of Teachers of Mathematics to focus math instruction more on problem solving and concepts
may account for the gains over time on NAEP, which itself focuses substantially on these domains.
Also, it is important to note that the three types of approaches to mathematics instruction reviewed here do not
conflict with each other and may have additive effects if used together. For example, schools might use an NSF-
supported curriculum such as Everyday Mathematics with well-structured cooperative learning and
supplemental CAI, and the effects may be greater than those of any of these programs by themselves.
However, the findings of this review suggest that educators and researchers might do well to focus more on
how mathematics is taught, rather than expecting that choosing one or another textbook by itself will move their
students forward.
As noted earlier, the most important problem in mathematics education is the gap in performance between
middle- and lower-class students and between White and Asian American students and African American,
Hispanic, and Native American students. The studies summarized in this review took place in widely diverse
settings, and several of them reported outcomes separately for various subgroups. Overall, there is no clear
pattern of differential effects for students of different social class or ethnic background. Programs found to be
effective with any subgroup tend to be effective with all groups. Rather than expecting to find programs with
different effects on students in the same schools and classrooms, the information on effective mathematics
programs might better be used to address the achievement gap by providing research-proven programs to
schools serving many disadvantaged and minority students. Federal Reading First and Comprehensive School
Reform programs were intended to provide special funding to help high-poverty, low-achieving schools adopt
proven programs. A similar strategy in mathematics could help schools with many students struggling in math to
implement innovative programs with strong evidence of effectiveness, as long as the schools agree to
participate in the full professional development process used in successful studies and to implement all aspects
of the program with quality and integrity.
The mathematics performance of America's students does not justify complacency. In particular, schools
serving many students at risk need more effective programs. This article provides a starting place in
determining which programs have the strongest evidence bases today. Hopefully, higher quality evaluations of a
broader range of programs will appear in the coming years. What is important is that we use what we know now
at the same time that we work to improve our knowledge base in the future so that our children receive the most
effective mathematics instruction we can give them.
References
Abram, S. L. (1984). The effect of computer assisted instruction on first grade phonics and mathematics
achievement computation. Unpublished doctoral dissertation, Northern Arizona University.
Ai, X. (2002). District Mathematics Plan evaluation: 2001-2002 evaluation report. Los Angeles: Program
Evaluation and Research Branch, Los Angeles Unified School District.
Al-Halal, A. (2001). The effects of individualistic learning and cooperative learning strategies on elementary
students' mathematics achievement and use of social skills. Unpublished doctoral dissertation, Ohio University.
Alifrangis, C. M. (1991). An integrated learning system in an elementary school: Implementation, attitudes, and
results. Journal of Computing in Childhood Education, 2(3), 51-66.
Allinder, R. M., &Oats, R. G. (1997). Effects of acceptability on teachers' implementation of curriculum-based
measurement and student achievement in mathematics computation. Remedial and Special Education, 18(2),
113-120.
American Association for the Advancement of Science. (2000). Middle school mathematics textbooks: A
benchmark-based evaluation. Washington, DC: Author.
Anderson, L., Scott, C., &Hutlock, N. (1976, April). The effects of a mastery learning program on selected
cognitive, affective and ecological variables in Grades 1 through 6. Paper presented at the annual meeting of
the American Educational Research Association, San Francisco, CA.
Anelli, C. (1977). Computer-assisted instruction and reading achievement of urban third and fourth graders.
Unpublished doctoral dissertation, Rutgers University.
Armour-Thomas, E., Walker, E., Dixon-Roman, E. J., Mejia, B., &Gordon, E. J. (2006, April). An evaluation of
dynamic pedagogy on third-grade math achievement of ethnic minority children. Paper presented at the annual
meeting of the American Educational Research Association, San Francisco, CA.
Atkeison-Cherry, N. K. (2004). A comparative study of mathematics learning of third-grade children using a
scripted curriculum and third-grade children using a nonscripted curriculum. Unpublished doctoral dissertation,
Union University.
Atkins, J. (2005). The association between the use of Accelerated Math and students' math achievement.
Unpublished doctoral dissertation, East Tennessee State University.
Austin Independent School District. (2001). Austin collaborative for mathematics education, 1999-2000
evaluation. Unpublished manuscript.
Axelrod, S., McGregor, G., Sherman, J., &Hamlet, C. (1987). Effects of video games as reinforcers for
computerized addition performance. Journal of Special Education Technology, 9(1), 1-8.
Baker, S., Gersten, R., &Lee, D. (2002). A synthesis of empirical research on teaching mathematics to low-
achieving students. Elementary School Journal, 103(1), 51-69.
Bass, G., Ries, R., &Sharpe, W. (1986). Teaching basic skills through microcomputer assisted instruction.
Journal of Educational Computing Research, 2(2), 207-219.
Beck Evaluation &Testing Associates. (2006). Progress in mathematics 2006: Grade 1 pre-post field test
evaluation. Pleasantville, NY: Author.
Becker, H. J. (1992). Computer-based integrated learning systems in the elementary and middle grades: A
critical review and synthesis of evaluation reports. Journal of Educational Computing Research, 8(1), 1-41.
Becker, H. J. (1994). Mindless or mindful use of integrated learning systems. International Journal of
Educational Research, 21(1), 65-79.
Becker, W. C., &Gersten, R. (1982). A follow-up of Follow Through: The later effects of the direct instruction
model on children in the fifth and sixth grades. American Educational Research Journal, 19(1), 75-92.
Bedell, J. P. (1998). Effects of reading and mathematics software formats on elementary students'
achievement. Unpublished doctoral dissertation, University of Miami.
Beirne-Smith, M. (1991). Peer tutoring in arithmetic for children with learning disabilities. Exceptional Children,
57, 330-337.
Bereiter, C., &Kurland, M. (1981-1982). A constructive look at Follow-Through results. Interchange, 12, 1-22.
Birch, J. H. (2002). The effects of the Delaware Challenge Grant Program on the standardized reading and
mathematics test scores of second and third grade students in the Caesar Rodney School District. Unpublished
doctoral dissertation, Wilmington College.
Biscoe, B., &Harris, B. (2004). Growing With Mathematics: National evaluation study summary report. DeSoto,
TX: Wright Group.
Bolser, S., &Gilman, D. (2003). Saxon Math, Southeast Fountain Elementary School: Effective or ineffective?
(ERIC Document Reproduction Service No. ED474537).
Borman, G. D., Hewes, G. M., Overman, L. T., &Brown, S. (2003). Comprehensive school reform and
achievement: A meta-analysis. Review of Educational Research, 73(2), 125-230.
Borton, W. M. (1988). The effects of computer managed mastery learning on mathematics test scores in the
elementary school. Journal of Computer-Based Instruction, 15(3), 95-98.
Bosfield, G. (2004). A comparison of traditional mathematical learning and cooperative mathematical learning.
Unpublished master's thesis, California State University, Dominguez Hills.
Boys, C.J. (2003). Mastery orientation through task-focused goals: Effects on achievement and motivation.
Unpublished master's thesis, University of Minnesota.
Brandt, W. C., &Hutchinson, C. (2005). Romulus Community Schools comprehensive school reform evaluation.
Naperville, IL: Learning Point Associates.
Brem, S. K. (2003). AM users outperform controls when exposure and quality of interaction are high: A two-year
study of the effects of Accelerated Math and Math Renaissance performance in a Title I elementary school
[tech. rep.]. Tempe: Arizona State University.
Brent, G., &DiObilda, N. (1993). Effects of curriculum alignment versus direct instruction on urban children.
Journal of Educational Research, 86(6), 333-338.
Briars, D., &Resnick, L. (2000). Standards, assessments - and what else? The essential elements of standards-
based school improvement. Los Angeles: National Center for Research on Evaluation, Standards, and Student
Testing, UCLA.
Briars, D. J. (2004, July). The Pittsburgh story: Successes and challenges in implementing standards-based
mathematics programs. Paper presented at the meeting of the UCSMP Everyday Mathematics Leadership
Institute, Lisle, IL.
Brown, F., &Boshamer, C. C. (2000). Using computer assisted instruction to teach mathematics: A study.
Journal of the National Alliance of Black School Educators, 4(1), 62-71.
Brush, T. (1997). The effects on student achievement and attitudes when using integrated learning systems with
cooperative pairs. Educational Technology Research and Development, 45(1), 51-64.
Bryant, R. R. (1981). Effects of team-assisted individualization on the attitudes and achievement of third, fourth,
and fifth grade students of mathematics. Unpublished doctoral dissertation, University of Maryland.
Burke, A. J. (1980). Students' potential for learning contrasted under tutorial and group approaches to
instruction. Unpublished doctoral dissertation, University of Chicago.
Burkhouse, B., Loftus, M., Sadowski, B., &Buzad, K. (2003). Thinking Mathematics as professional
development: Teacher perceptions and student achievement. Scranton, PA: Marywood University.
(ERIC Document Reproduction Service No. ED482911)
Burton, S. K. (2005). The effects of ability grouping on academic gains of rural elementary school students.
Unpublished doctoral dissertation, Union University.
Butzin, S. M. (2001). Using instructional technology in transformed learning environments: An evaluation of
Project CHILD. Journal of Research on Computing in Education, 33, 367-373.
Cabezon, E. (1984). The effects of marked changes in student achievement pattern on the students, their
teachers, and their parents: The Chilean case. Unpublished doctoral dissertation, University of Chicago.
Calvery, R., Bell, D., &Wheeler, G. (1993, November). A comparison of selected second and third graders' math
achievement: Saxon vs. Holt. Paper presented at the Annual Meeting of the Mid-South Educational Research
Association, New Orleans, LA.
Campbell, P. F., Rowan, T. E., &Cheng, Y. (1995, April). Project IMPACT: Mathematics achievement in
predominantly minority elementary classrooms attempting reform. Paper presented at the Annual Meeting of the
American Educational Research Association, San Francisco, CA.
Cardelle-Elawar, M. (1990). Effects of feedback tailored to bilingual students' mathematics needs on verbal
problem solving. Elementary School Journal, 91, 165-175.
Cardelle-Elawar, M. (1992). Effects of teaching metacognitive skills to students with low mathematics ability.
Teaching and Teacher Education, 8, 109-121.
Cardelle-Elawar, M. (1995). Effects of metacognitive instruction on low achievers in mathematics problems.
Teaching and Teacher Education, 11, 81-95.
Carpenter, T. P., Fennema, E., Peterson, P. L., Chiang, C. P., &Loef, M. (1989). Using knowledge of children's
mathematics thinking in classroom teaching: An experimental study. American Educational Research Journal, 26,
499-531.
Carrier, C., Post, T., &Heck, W. (1985). Using microcomputers with fourth-grade students to reinforce arithmetic
skills. Journal for Research in Mathematics Education, 16(1), 45-51.
Carroll, W. (1995). Report on the field test of fifth grade Everyday Mathematics. Chicago: University of Chicago.
Carroll, W. (1998). Geometric knowledge of middle school students in a reform-based mathematics curriculum.
School Science and Mathematics, 98, 188-197.
Carroll, W. M. (1993). Mathematical knowledge of kindergarten and first-grade students in Everyday
Mathematics [UCSMP report]. Unpublished manuscript.
Carroll, W. M. (1994-1995). Third grade Everyday Mathematics students' performance on the 1993 and 1994
Illinois state mathematics test. Unpublished manuscript.
Carroll, W. M. (1996a). A follow-up to the fifth-grade field test of Everyday Mathematics: Geometry, and mental
and written computation. Chicago: University of Chicago School Mathematics Project.
Carroll, W. M. (1996b). Use of invented algorithms by second graders in a reform mathematics curriculum.
Journal of Mathematical Behavior, 15, 137-150.
Carroll, W. M. (1996c). Mental computation of students in a reform-based mathematics curriculum. School
Science and Mathematics, 96, 305-311.
Carroll, W. M. (1997). Mental and written computation: Abilities of students in a reform-based mathematics
curriculum. Mathematics Educator, 2(1), 18-32.
Carroll, W. M. (2000). Invented computational procedures of students in a standards-based curriculum. Journal
of Mathematical Behavior, 18, 111-121.
Carroll, W. M. (2001a). A longitudinal study of children in the Everyday Mathematics curriculum. Unpublished
manuscript.
Carroll, W. M. (2001b). Students in a standards-based curriculum: Performance on the 1999 Illinois State
Achievement Test. Illinois Mathematics Teacher, 52(1), 3-7.
Carroll, W. M., &Fuson, K. C. (1998). Multidigit computation skills of second and third graders in Everyday
Mathematics: A follow-up to the longitudinal study. Unpublished manuscript.
Carroll, W. M., &Isaacs, A. (2003). Achievement of students using the University of Chicago School
Mathematics Project's Everyday Mathematics. In S. L. Senk &D. R. Thompson (Eds.), Standards-based school
mathematics curricula: What are they? What do students learn? (pp. 79-108). Mahwah, NJ: Lawrence Erlbaum.
Carroll, W. M., &Porter, D. (1994). A field test of fourth grade Everyday Mathematics: Summary report. Chicago:
University of Chicago School Mathematics Project, Elementary Component.
Carter, A., Beissinger, J., Cirulis, ?., Gartzman, M., Kelso, C., &Wagreich, P. (2003). Student learning and
achievement with math. In S. L. Senk &D. R. Thompson (Eds.), Standards-based school mathematics curricula:
What are they? What do students learn? (pp. 45-78). Mahwah, NJ: Lawrence Erlbaum.
Chambers, E. A. (2003). Efficacy of educational technology in elementary and secondary classrooms: A meta-
analysis of the research literature from 1992-2002. Unpublished doctoral dissertation, Southern Illinois
University at Carbondale.
Chan, L., Cole, P., &Cahill, R. (1988). Effect of cognitive entry behavior, mastery level, and information about
criterion on third graders' mastery of number concepts. Journal for Research in Mathematics Education, 19,
439-448.
Chang, K. E., Sung, Y. T., &Lin, S. F. (2006). Computer-assisted learning for mathematical problem solving.
Computers and Education, 46(2), 140-151.
Chase, A., Johnston, K., Delameter, B., Moore, D., &Golding, J. (2000). Evaluation of the Math Steps
curriculum: Final report. Boston: Houghton Mifflin.
Cheung, A., &Slavin, R. (2005). A synthesis of research on language of reading instruction for English language
learners. Review of Educational Research, 75, 247-284.
Chiang, A. (1978). Demonstration of the use of computer-assisted instruction with handicapped children: Final
report. (Rep. No. 446-AM-66076A). Arlington, VA: RMC Research Corp. (ERIC Document Reproduction Service
No. ED116913)
Clariana, R. B. (1994). The effects of an integrated learning system on third graders' mathematics and reading
achievement. San Diego, CA: Jostens Learning Corporation. (ERIC Document Reproduction Service No.
ED409181)
Clariana, R. B. (1996). Differential achievement gains for mathematics computation, concepts, and applications
with an integrated learning system. Journal of Computers in Mathematics and Science Teaching, 15(3), 203-215.
Clark, R. E. (Ed.). (2001). Learning from media: Arguments, analysis, and evidence. Greenwich, CT:
Information Age.
Clarke, B., &Shinn, M. (2004). A preliminary investigation into the identification and development of early
mathematics curriculum-based measurement. School Psychology Review, 33, 234-248.
Cobb, P., Wood, T., Yackel, E., Nicholls, J., Wheatley, G., Trigatti, B., et al. (1991). Assessment of a problem-
centered second-grade mathematics project. Journal for Research in Mathematics Education, 22(1), 3-29.
Cognition and Technology Group at Vanderbilt. (1992). The Jasper series as an example of anchored
instruction: Theory, program description, and assessment data. Educational Psychologist, 27(3), 291-315.
Comprehensive School Reform Quality Center. (2005). CSRQ Center Report on Elementary School
Comprehensive School Reform Models. Washington, DC: American Institutes for Research.
Confrey, J. (2006). Comparing and contrasting the National Research Council report on evaluating curricular
effectiveness with the What Works Clearinghouse approach. Educational Evaluation and Policy Analysis, 28(3),
195-213.
Cooper, H. (1998). Synthesizing research (3rd ed.). Thousand Oaks, CA: Sage.
Cooperative Mathematics Project. (1996). Findings from the evaluation of the Number Power Mathematics
Program: Final report to the National Science Foundation. Oakland, CA: Developmental Studies Center.
Cox, B. F. (1986). Instructional management system: How it affects achievement, self-concept, and attitudes
toward math of fifth-grade students. Unpublished doctoral dissertation, Oklahoma State University.
Craig, J., &Cairo, L. (2005). Assessing the relationship between questioning and understanding to improve
learning and thinking (QUILT) and student achievement in mathematics: A pilot study. Charleston, WV:
Appalachia Educational Lab.
Cramer, K. A., Post, T. R., &delMas, R. C. (2002). Initial fraction learning by fourth and fifth grade students: A
comparison of the effects of using commercial curricula with the effects of using the Rational Number Project
curriculum. Journal for Research in Mathematics Education, 33(2), 111-144.
Crawford, D. B., &Snider, V. E. (2000). Effective mathematics instruction: The importance of curriculum.
Education and Treatment of Children, 23(2), 122-142.
Crenshaw, H. M. (1982). A comparison of two computer assisted instruction management systems in remedial
math. Unpublished doctoral dissertation, Texas A&M University.
Dahn, V. L. (1992). The effect of integrated learning systems on mathematics and reading achievement and
student attitudes in selected Salt Lake City, Utah, elementary schools. Unpublished doctoral dissertation,
Brigham Young University.
De Russe, J. T. (1999). The effect of staff development training and the use of cooperative learning methods on
the academic achievement of third through sixth grade Mexican-American students. Unpublished doctoral
dissertation, Texas A&M University.
Dev, P. C., Doyle, B. A., &Valenta, B. (2002). Labels needn't stick: "At-risk" first graders rescued with
appropriate intervention. Journal of Education for Students Placed at Risk, 7(3), 327-332.
Dilworth, R., &Warren, L. (1980). An independent investigation of Real Math: The field-testing and learner-
verification studies. San Diego, CA: Center for the Improvement of Mathematics Education.
Dobbins, E. R. (1993). Math computer assisted instruction with remedial students and students with mild
learning/behavior disabilities. Unpublished doctoral dissertation, University of Alabama.
Donnelly, L. (2004, April). Year two results: Evaluation of the implementation and effectiveness of
SuccessMaker during 2002-2003. Paper presented at the annual meeting of the American Educational
Research Association, San Diego, California.
Drueck, J. V., Fuson, K. C., Carroll, W. M., &Bell, M. S. (1995, April). Performance of U.S. first graders in a
reform math curriculum compared to Japanese, Chinese, and traditionally taught U.S. students. Paper
presented at the annual meeting of the American Education Research Association, San Francisco, CA.
DuPaul, G. J., Ervin, R. A., Hook, C. L., &McGoey, K. E. (1998). Peer tutoring for children with attention deficit
hyperactivity disorder: Effects on classroom behavior and academic performance. Journal of Applied Behavior
Analysis, 31, 579-592.
Dynarski, M., Agodini, R., Heaviside, S., Novak, T., Carey, N., Campuzano, L., et al. (2007). Effectiveness of
reading and mathematics software products: Findings from the first student cohort. Washington, DC: Institute of
Education Sciences.
Earnheart, M. (1989). A study of the effectiveness of mastery learning in a low socioeconomic setting.
Unpublished doctoral dissertation, University of Mississippi.
Ebmeier, H., &Good, T. (1979). The effects of instructing teachers about good teaching on the mathematics
achievement of fourth grade students. American Educational Research Journal, 16(1), 1-16.
EDSTAR. (2004). Large-scale evaluation of student achievement in districts using Houghton Mifflin
Mathematics: Phase two. Raleigh, NC: Author.
Ellington, A. J. (2003). A meta-analysis of the effects of calculators on students' achievement and attitude levels
in precollege mathematics classes. Journal for Research in Mathematics Education, 34(5), 433-463.
Emihovich, C., &Miller, G. (1988). Effects of Logo and CAI on Black first graders' achievement, reflectivity, and
self-esteem. Elementary School Journal, 88(5), 472-487.
Estep, S. G., McInerney, W. D., Vockell, E., &Kosmoski, G. (1999-2000). An investigation of the relationship
between integrated learning systems and academic achievement. Journal of Educational Technology Systems,
28(1), 5-19.
Fahsl, A. J. (2001). An investigation of the effects of exposure to Saxon math textbooks, socioeconomic status
and gender on math achievement scores. Unpublished doctoral dissertation, Oklahoma State University.
Fantuzzo, J. W., Davis, G. Y., &Ginsburg, M. D. (1995). Effects of parent involvement in isolation or in
combination with peer tutoring on student self-concept and mathematics achievement. Journal of Educational
Psychology, 87(2), 272-281.
Fantuzzo, J. W., King, J. A., &Heller, L. R. (1992). Effects of reciprocal peer tutoring on mathematics and school
adjustment: A component analysis. Journal of Educational Psychology, 84(3), 331-339.
Fantuzzo, J. W., Polite, K., &Grayson, N. (1990). An evaluation of reciprocal peer tutoring across elementary
school settings. Journal of School Psychology, 28, 309-323.
Faykus, S. P. (1993). Impact of an integrated learning system: Analysis of student achievement through
traditional and curriculum-based procedures. Unpublished doctoral dissertation, Arizona State University.
Fennema, E., Carpenter, T. P., Franke, M. L., Levi, L., Jacobs, V. R., &Empson, S. B. (1996). A longitudinal
study of learning to use children's thinking in mathematics instruction. Journal for Research in Mathematics
Education, 27(4), 403-434.
Fischer, F. (1990). A part-part-whole curriculum for teaching number to kindergarten. Journal for Research in
Mathematics Education, 21, 207-215.
Fletcher, J. D., Hawley, D. E., &Piele, P. K. (1990). Costs, effects, and utility of microcomputer-assisted
instruction. American Educational Research Journal, 27(4), 783-806.
Florida Tax Watch. (2005). Florida TaxWatch's comparative evaluation of Project CHILD: Phase IV, 2005.
Retrieved from http:www.floridataxwatch.org/projchild/ projchild4.html
Flowers, J. (1998). A study of proportional reasoning as it relates to the development of multiplication concepts.
Unpublished doctoral dissertation, University of Michigan.
Foley, M. M. (1994). A comparison of computer-assisted instruction with teachermanaged instructional
practices. Unpublished doctoral dissertation, Columbia University Teachers College.
Follmer, R. (2001). Reading, mathematics, and problem solving: The effects of direct instruction in the
development of fourth grade students ' strategic reading and problem-solving approaches to text-based, non-
routine mathematics problems. Unpublished doctoral dissertation, Widener University.
10 October 2013 Page 19 of 33 ProQuest
Freiberg, H. J., Connell, M. L., & Lorentz, J. (2001). Effects of consistency management on student mathematics achievement in seven Chapter 1 elementary schools. Journal of Education for Students Placed at Risk, 6(3), 249-270.
Freiberg, H. J., Huzinec, C., & Borders, K. (2006). Classroom management and mathematics student achievement: A study of fourteen inner-city elementary schools. Unpublished manuscript.
Freiberg, H. J., Prokosch, N., Treiser, E. S., & Stein, T. (1990). Turning around five at-risk elementary schools. School Effectiveness and School Improvement, 1(1), 5-25.
Freiberg, H. J., Stein, T. A., & Huang, S. (1995). Effects of classroom management intervention on student achievement in inner-city elementary schools. Educational Research and Evaluation, 1(1), 36-66.
Fuchs, L., Compton, D., Fuchs, D., Paulsen, K., Bryant, J., & Hamlett, C. (2005). The prevention, identification, and cognitive determinants of math difficulty. Journal of Educational Psychology, 97(3), 493-513.
Fuchs, L., Fuchs, D., Finelli, R., Courey, S., & Hamlett, C. (2004). Expanding schema-based transfer instruction to help third graders solve real-life mathematical problems. American Educational Research Journal, 41(2), 419-445.
Fuchs, L., Fuchs, D., Finelli, R., Courey, S., Hamlett, C., Sones, E., et al. (2006). Teaching third graders about real-life mathematical problem solving: A randomized control study. Elementary School Journal, 106(4), 293-311.
Fuchs, L., Fuchs, D., & Hamlett, C. (1989). Effects of alternative goal structures within curriculum-based measurement. Exceptional Children, 55(5), 429-438.
Fuchs, L., Fuchs, D., Hamlett, C., & Stecker, P. (1991). Effects of curriculum-based measurement and consultation on teacher planning and student achievement in mathematics operations. American Educational Research Journal, 28(3), 617-641.
Fuchs, L., Fuchs, D., & Karns, K. (2001). Enhancing kindergarteners' mathematical development: Effects of peer-assisted learning strategies. Elementary School Journal, 101(5), 495-510.
Fuchs, L., Fuchs, D., Karns, K., Hamlett, C., & Katzaroff, M. (1999). Mathematics performance assessment in the classroom: Effects on teacher planning and student problem solving. American Educational Research Journal, 36(3), 609-646.
Fuchs, L., Fuchs, D., Karns, K., Hamlett, C., Katzaroff, M., & Dutka, S. (1997). Effects of task-focused goals on low-achieving students with and without learning disabilities. American Educational Research Journal, 34(3), 513-543.
Fuchs, L., Fuchs, D., Phillips, N., Hamlett, C., & Karns, K. (1995). Acquisition and transfer effects of classwide peer-assisted learning strategies in mathematics for students with varying learning histories. School Psychology Review, 24(4), 604-620.
Fuchs, L., Fuchs, D., & Prentice, K. (2004). Responsiveness to mathematical problem-solving instruction: Comparing students at risk of mathematics disability with and without risk of reading disability. Journal of Learning Disabilities, 37(4), 293-306.
Fuchs, L., Fuchs, D., Prentice, K., Burch, M., Hamlett, C., Owen, R., Hosp, M., et al. (2003). Explicitly teaching for transfer: Effects on third-grade students' mathematical problem solving. Journal of Educational Psychology, 95(2), 293-305.
Fuchs, L., Fuchs, D., Prentice, K., Burch, M., Hamlett, C., Owen, R., & Schroeter, K. (2003). Enhancing third-grade students' mathematical problem solving with self-regulated learning strategies. Journal of Educational Psychology, 95(2), 306-315.
Fuchs, L., Fuchs, D., Prentice, K., Hamlett, C., Finelli, R., & Courey, S. (2004). Enhancing mathematical problem solving among third-grade students with schema-based instruction. Journal of Educational Psychology, 96(4), 635-647.
Fuchs, L., Fuchs, D., Yazdian, L., & Powell, S. (2002). Enhancing first-grade children's mathematical development with peer-assisted learning strategies. School Psychology Review, 31(4), 569-583.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., & Appleton, A. C. (2002). Explicitly teaching for transfer: Effects on the mathematical problem-solving performance of students with mathematical disabilities. Learning Disabilities Research and Practice, 17(2), 90-106.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Phillips, N. B., & Bentz, J. (1994). Classwide curriculum-based measurement: Helping general educators meet the challenge of student diversity. Exceptional Children, 60(6), 518-537.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Phillips, N. B., Karns, K., & Dutka, S. (1997). Enhancing students' helping behavior during peer-mediated instruction with conceptual mathematical explanations. Elementary School Journal, 97(3), 223-249.
Fueyo, V., & Bushell, D. (1998). Using number line procedures and peer tutoring to improve the mathematics computation of low-performing first graders. Journal of Applied Behavior Analysis, 31(3), 417-430.
Fuson, K., Carroll, W., & Drueck, J. (2000). Achievement results for second and third graders using the standards-based curriculum Everyday Mathematics. Journal for Research in Mathematics Education, 31(3), 277-295.
Fuson, K. C., & Carroll, W. M. (n.d.-a). Performance of U.S. fifth graders in a reform math curriculum compared to Japanese, Chinese, and traditionally taught U.S. students. Unpublished manuscript.
Fuson, K. C., & Carroll, W. M. (n.d.-b). Summary of comparison of Everyday Math (EM) and McMillan (MC): Evanston student performance on whole-class tests in Grades 1, 2, 3, and 4. Unpublished manuscript.
Gabbert, B., Johnson, D. W., & Johnson, R. T. (1986). Cooperative learning, group-to-individual transfer, process gain, and the acquisition of cognitive reasoning strategies. Journal of Psychology, 120(3), 265-278.
Gallagher, P. A. (1991). Training the parents of Chapter 1 math students in the use of mastery learning strategies to help increase math achievement. Unpublished doctoral dissertation, University of Pittsburgh.
Gatti, G. (2004a). ARC study. In Pearson Education (Ed.), Investigations in Number, Data, and Space: Validation and research. Upper Saddle River, NJ: Pearson.
Gatti, G. (2004b). Scott Foresman-Addison Wesley Math national effect size study. (Available from Pearson Education, K-12 School Group, 1 Lake Street, Upper Saddle River, NJ 07458)
Giancola, S. (2000). Evaluation results of the Delaware Challenge Grant Project Lead Education Agency: Capital School District. Newark: University of Delaware.
Gilbert-Macmillan, K. (1983). Mathematical problem solving in cooperative small groups and whole class instruction. Unpublished doctoral dissertation, Stanford University.
Gill, B. (1995). Project CHILD middle school follow-up evaluation: Final report. Jacksonville, FL: Daniel Memorial Institute.
Gilman, D., & Brantley, T. (1988). The effects of computer-assisted instruction on achievement, problem-solving skills, computer skills, and attitude: A study of an experimental program at Marrs Elementary School, Mount Vernon, Indiana. Indianapolis: Indiana State University. (ERIC Document Reproduction Service No. ED302232)
Ginsburg, A., Leinwand, S., Anstrom, T., & Pollok, E. (2005). What the United States can learn from Singapore's world-class mathematics system (and what Singapore can learn from the United States): An exploratory study. Washington, DC: American Institutes for Research.
Ginsburg-Block, M. (1998). The effects of reciprocal peer problem-solving on the mathematics achievement, academic motivation and self-concept of "at risk" urban elementary students. Unpublished doctoral dissertation, University of Pennsylvania.
Ginsburg-Block, M., & Fantuzzo, J. W. (1997). Reciprocal peer tutoring: An analysis of "teacher" and "student" interactions as a function of training and experience. School Psychology Quarterly, 12, 134-149.
Ginsburg-Block, M. D., & Fantuzzo, J. W. (1998). An evaluation of the relative effectiveness of NCTM standards-based interventions for low-achieving urban elementary students. Journal of Educational Psychology, 90, 560-569.
Glassman, P. (1989). A study of cooperative learning in mathematics, writing, and reading in the intermediate grades: A focus upon achievement, attitudes, and self-esteem by gender, race, and ability group. Unpublished doctoral dissertation, Hofstra University.
Glazerman, S., Levy, D. M., & Myers, D. (2002). Nonexperimental replications of social experiments: A systematic review. Washington, DC: Mathematica.
Goldberg, L. F. (1989). Implementing cooperative learning within six elementary school learning disability classrooms to improve math achievement and social skills. Unpublished doctoral dissertation, Nova University.
Good, K., Bickel, R., & Howley, C. (2006). Saxon Elementary Math program effectiveness study. Charleston, WV: Edvantia.
Good, T. L., & Grouws, D. A. (1979). The Missouri Mathematics Effectiveness Project: An experimental study in fourth-grade classrooms. Journal of Educational Psychology, 71(3), 355-362.
Goodrow, A. (1998). Children's construction of number sense in traditional, constructivist, and mixed classrooms. Unpublished doctoral dissertation, Tufts University.
Great Source. (2006). Every Day Counts Grades K-6: Research base and program effectiveness [computer manual]. Retrieved from http://www.greatsource.com/GreatSource/pdf/EveryDayCountsResearch206.pdf
Greenwood, C. R., Delquadri, J. C., & Hall, R. V. (1989). Longitudinal effects of classwide peer tutoring. Journal of Educational Psychology, 81(3), 371-383.
Greenwood, C. R., Dinwiddie, G., Terry, B., Wade, L., Stanley, S., Thibadeau, S., et al. (1984). Teacher- versus peer-mediated instruction: An ecobehavioral analysis of achievement outcomes. Journal of Applied Behavior Analysis, 17, 521-538.
Greenwood, C. R., Terry, B., Utley, C. A., & Montagna, D. (1993). Achievement, placement, and services: Middle school benefits of ClassWide Peer Tutoring used at the elementary school. School Psychology Review, 22(3), 497-516.
Griffin, S. A., Case, R., & Siegler, R. S. (1994). Rightstart: Providing the central conceptual prerequisites for first formal learning of arithmetic to students at risk for school failure. In K. McGilly (Ed.), Classroom lessons: Integrating cognitive theory and classroom practice (pp. 25-49). Cambridge, MA: MIT Press.
Grossen, B., & Ewing, S. (1994). Raising mathematics problem-solving performance: Do the NCTM teaching standards help? Final report. Effective School Practices, 13(2), 79-91.
Gwaltney, L. (2000). Year three final report: The Lightspan Partnership, Inc. Achieve Now Project: Unified School District 259, Wichita Public Schools. Wichita, KS: Allied Educational Research and Development Services.
Hallmark, B. W. (1994). The effects of group composition on elementary students' achievement, self-concept, and interpersonal perceptions in cooperative learning groups. Unpublished doctoral dissertation, University of Connecticut.
Hansen, E., & Greene, K. (n.d.). A recipe for math. What's cooking in the classroom: Saxon or traditional? Retrieved March 1, 2006, from http://www.secondaryenglish.com/recipeformath.html
Hativa, N. (1988). Computer-based drill and practice in arithmetic: Widening the gap between high- and low-achieving students. American Educational Research Journal, 25(3), 366-397.
Haynie, T. R. (1989). The effects of computer-assisted instruction on the mathematics achievement of selected groups of elementary school students. Unpublished doctoral dissertation, George Washington University.
Heller, L. R., & Fantuzzo, J. W. (1993). Reciprocal peer tutoring and parent partnership: Does parent involvement make a difference? School Psychology Review, 22(3), 517-535.
Hess, R. D., & McGarvey, L. J. (1987). School-relevant effects of educational use of microcomputers in kindergarten classrooms and homes. Journal of Educational Computing Research, 3(3), 269-287.
Hickey, D. T., Moore, A. L., & Pellegrino, J. W. (2001). The motivational and academic consequences of elementary mathematics environments: Do constructivist innovations and reforms make a difference? American Educational Research Journal, 38(3), 611-652.
Hiebert, J., & Wearne, D. (1993). Instructional tasks, classroom discourse, and students' learning in second-grade arithmetic. American Educational Research Journal, 30(2), 393-425.
Hill, H. (2004). Professional development standards and practices in elementary school math. Elementary School Journal, 104(3), 215-232.
Hohn, R., & Frey, B. (2002). Heuristic training and performance in elementary mathematical problem solving. Journal of Educational Research, 95(6), 374-380.
Holmes, C. T., & Brown, C. L. (2003). A controlled evaluation of a total school improvement process, School Renaissance. Paper presented at the National Renaissance Conference, Nashville, TN.
Hooper, S. (1992). Effects of peer interaction during computer-based mathematics instruction. Journal of Educational Research, 85(3), 180-189.
Hotard, S., & Cortez, M. (1983). Computer-assisted instruction as an enhancer of remediation. Lafayette: University of Southwestern Louisiana.
Houghton Mifflin. (n.d.). Knowing Mathematics field test results for Lincoln Public Schools: Fall 2002-Spring 2003. Boston: Author.
Hunter, C. T. L. (1994). A study of the effect of instructional method on the reading and mathematics achievement of Chapter 1 students in rural Georgia. Unpublished doctoral dissertation, South Carolina State University.
Interactive. (2000). Documenting the effects of Lightspan Achieve Now! in the Adams County School District 50: Year 3 report. Huntington, NY: Lightspan.
Interactive. (2001). Documenting the effects of Lightspan Achieve Now! in the Hempstead Union Free School District: Year 2 report. Huntington, NY: Lightspan.
Interactive. (2003, August). An analysis of CompassLearning student achievement outcomes in Pocatello, Idaho, 2002-03. (Available from CompassLearning, 9920 Pacific Heights Blvd., San Diego, CA 92121)
Interactive & Metis Associates. (2002). A multi-year analysis of the outcomes of Lightspan Achievement Now in the Cleveland Municipal School District. Huntington, NY: Lightspan.
Isbell, S. K. (1993). Impact on learning of computer-assisted instruction when aligned with classroom curriculum in second-grade mathematics and fourth-grade reading. Unpublished doctoral dissertation, Baylor University.
Jamison, M. B. (2000). The effects of linking classroom instruction in mathematics to a prescriptive delivery of lessons on an integrated learning system. Unpublished doctoral dissertation, Texas Tech University.
Jitendra, A. K., Griffin, C. C., McGoey, K., Gardill, M. C., Bhat, P., & Riley, T. (1998). Effects of mathematical word problem solving by students at risk or with mild disabilities. Journal of Educational Research, 91(6), 345-355.
Jitendra, A. K., & Hoff, K. (1996). The effects of schema-based instruction on the mathematical word-problem-solving performance of students with learning disabilities. Journal of Learning Disabilities, 29(4), 422-431.
Johnson, D. W., Johnson, R. T., & Scott, L. (1978). The effects of cooperative and individualized instruction on student attitudes and achievement. Journal of Social Psychology, 104, 207-216.
Johnson, J., Yanyo, L., & Hall, M. (2002). Evaluation of student math performance in California school districts using Houghton Mifflin Mathematics. Raleigh, NC: EDSTAR.
Johnson, L. C. (1985, December). The effects of the groups of four cooperative learning model on student problem-solving achievement in mathematics. Unpublished doctoral dissertation, University of Houston.
Johnson-Scott, P. L. (2006). The impact of Accelerated Math on student achievement. Unpublished doctoral dissertation, Mississippi State University.
Karper, J., & Melnick, S. A. (1993). The effectiveness of team accelerated instruction on high achievers in mathematics. Journal of Instructional Psychology, 20(1), 49-54.
Kastre, N. J. (1995). A comparison of computer-assisted instruction and traditional practice for mastery of basic multiplication facts. Unpublished doctoral dissertation, Arizona State University.
Kelso, C. R., Booton, B., & Majumdar, D. (2003). Washington State Math Trailblazers student achievement report. Chicago: University of Illinois at Chicago.
Kersh, M. (1970). A strategy for mastery learning in fifth grade arithmetic. Unpublished doctoral dissertation, University of Chicago.
King, F. J., & Butzin, S. (1992). An evaluation of Project CHILD. Florida Technology in Education Quarterly, 4(4), 45-63.
Kirk, V. C. (2003). Investigation of the impact of integrated learning system use on mathematics achievement of elementary students. Unpublished doctoral dissertation, East Tennessee State University.
Kopecky, C. L. (2005). A case study of the Math Matters professional development program in one elementary school. Unpublished doctoral dissertation, University of Southern California.
Kosciolek, S. (2003). Instructional factors related to mathematics achievement: Evaluation of a mathematics intervention. Unpublished doctoral dissertation, University of Minnesota.
Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7-19.
Kroesbergen, E. H., & Van Luit, J. E. H. (2003). Mathematical interventions for children with special education needs. Remedial and Special Education, 24, 97-114.
Kromhout, O. M. (1993). Evaluation report: Project CHILD, 1992-1993. Tallahassee, FL.
Kromhout, O. M., & Butzin, S. M. (1993). Integrating computers into the elementary school curriculum: An evaluation of nine Project CHILD model schools. Journal of Research on Computing in Education, 26(1), 55-70.
Kulik, J. A. (2003). Effects of using instructional technology in elementary and secondary schools: What controlled evaluation studies say (SRI Project No. P10446.001). Arlington, VA: SRI International.
Laub, C. (1995). Computer-integrated learning system and elementary student achievement in mathematics: An evaluation study. Unpublished doctoral dissertation, Temple University.
Laub, C., & Wildasin, R. (1998). Student achievement in mathematics and the use of computer-based instruction in the Hempfield School District. Landisville, PA: Hempfield School District.
Leffler, K. (2001). Fifth-grade students in North Carolina show remarkable math gains. Summary of Independent Math Research. Madison, WI: Renaissance Learning.
Leiker, V. (1993). The relationship between an integrated learning system, reading and mathematics achievement, higher order thinking skills and certain demographic variables: A study conducted in two school districts. Unpublished doctoral dissertation, Baylor University.
Levy, M. H. (1985). An evaluation of computer assisted instruction upon the achievement of fifth grade students as measured by standardized tests. Unpublished doctoral dissertation, University of Bridgeport.
Lin, A., Podell, D. M., & Tournaki-Rein, N. (1994). CAI and the development of automaticity in mathematics skills in students with and without mild mental handicaps. Computers in the Schools, 11, 43-58.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.
Long, V. M. (1991). Effects of mastery learning on mathematics achievement and attitudes. Unpublished doctoral dissertation, University of Missouri-Columbia.
Lucker, G. W., Rosenfield, D., Sikes, J., & Aronson, E. (1976). Performance in the interdependent classroom: A field study. American Educational Research Journal, 13, 115-123.
Lykens, J. S. (2003). An evaluation of a standards-based mathematics program (Math Trailblazers) in Delaware's Caesar Rodney School District. Unpublished doctoral dissertation, Wilmington College.
Mac Iver, M., Kemper, E., & Stringfield, S. (2003). The Baltimore Curriculum Project: Final report of the four-year evaluation study. Baltimore: Johns Hopkins University, Center for Social Organization of Schools.
Madden, N. A., & Slavin, R. E. (1983). Cooperative learning and social acceptance of mainstreamed academically handicapped students. Journal of Special Education, 17(2), 171-182.
Madden, N. A., Slavin, R. E., & Simons, K. (1997). MathWings: Early indicators of effectiveness (Rep. No. 17). Baltimore: Johns Hopkins University, Center for Research on the Education of Students Placed at Risk.
Madden, N. A., Slavin, R. E., & Simons, K. (1999). MathWings: Effects on student mathematics performance. Baltimore: Johns Hopkins University, Center for Research on the Education of Students Placed at Risk.
Mahoney, S. J. (1990). A study of the effects of EXCEL Math on mathematics achievement of second- and fourth-grade students. Unpublished doctoral dissertation, Northern Arizona University.
Mann, D., Shakeshaft, C., Becker, J., & Kottkamp, R. (1999). West Virginia story: Achievement gains from a statewide comprehensive instructional technology program. Beverly Hills, CA: Milken Family Foundation with the West Virginia Department of Education, Charleston.
Manuel, S. Q. (1987). The relationship between supplemental computer assisted mathematics instruction and student achievement. Unpublished doctoral dissertation, University of Nebraska.
Martin, J. M. (1986). The effects of a cooperative, competitive, or combination goal structure on the mathematics performance of Black children from extended family backgrounds. Unpublished doctoral dissertation, Howard University.
Mason, D. A., & Good, T. L. (1993). Effects of two-group and whole-class teaching on regrouped elementary students' mathematics achievement. American Educational Research Journal, 30(2), 328-360.
Math Learning Center. (2003). Bridges in the classroom: Teacher feedback, student data, and current research. Salem, OR: Author.
Mathematics Evaluation Committee. (1997). Chicago mathematics evaluation report: Year two. Pennington, NJ: Hopewell Valley Regional School District.
Mayo, J. C. (1995). A quasi-experimental study comparing the achievement of primary grade students using Math Their Way with primary grade students using traditional instructional methods. Unpublished doctoral dissertation, University of South Carolina.
McCabe, K. J. (2001). Mathematics in our schools: An effort to improve mathematics literacy. Unpublished master's thesis, California State University, Long Beach.
McCormick, K. (2005). Examining the relationship between a standards-based elementary mathematics curriculum and issues of equity. Unpublished doctoral dissertation, Indiana University.
McDermott, P. A., & Watkins, M. W. (1983). Computerized vs. conventional remedial instruction for learning-disabled pupils. Journal of Special Education, 17(1), 81-88.
McKernan, M. M. (1992). The effects of "Mathematics Their Way" and Chicago Math Project on mathematical applications and story problem strategies of second graders. Unpublished doctoral dissertation, Drake University.
McWhirt, R., Mentavlos, M., Rose-Baele, J. S., & Donnelly, L. (2003). Evaluation of the implementation and effectiveness of SuccessMaker. Charleston, SC: Charleston County School District.
Mehrens, W. A., & Phillips, S. E. (1986). Detecting impacts of curricular differences in achievement test data. Journal of Educational Measurement, 23(3), 185-196.
Mercer, C. D., & Miller, S. P. (1992). Teaching students with learning problems in math to acquire, understand, and apply basic math facts. Remedial and Special Education, 13(3), 19-35.
Merrell, M. L. B. (1996). The achievement and self-concept of elementary grade students who have received Direct Instruction. Unpublished doctoral dissertation, University of Southern Mississippi.
Mevarech, Z. R. (1985). The effects of cooperative mastery learning strategies on mathematics achievement. Journal of Educational Research, 78(3), 372-377.
Mevarech, Z. R. (1991). Learning mathematics in different mastery environments. Journal of Educational Research, 84(4), 225-231.
Mevarech, Z. R., & Rich, Y. (1985). Effects of computer-assisted mathematics instruction on disadvantaged pupils' cognitive and affective development. Journal of Educational Research, 79(1), 5-11.
Meyer, L. (1984). Long-term academic effects of Direct Instruction Project Follow Through. Elementary School Journal, 84(4), 380-394.
Miller, H. (1997). Quantitative analyses of student outcome measures. International Journal of Educational Research, 27(2), 119-136.
Mills, S. C. (1997). Implementing integrated learning systems in elementary classrooms. Unpublished doctoral dissertation, University of Oklahoma.
Mintz, K. S. (2000). A comparison of computerized and traditional instruction in elementary mathematics. Unpublished doctoral dissertation, University of Alabama.
Mokros, J., Berle-Carmen, M., Rubin, A., & O'Neil, K. (1996, April). Learning operations: Invented strategies that work. Paper presented at the annual meeting of the American Educational Research Association, New York, NY.
Mokros, J., Berle-Carmen, M., Rubin, A., & Wright, T. (1994). Full year pilot Grades 3 and 4: Investigations in numbers, data, and space. Cambridge, MA: Technical Education Research Centers.
Monger, C. T. (1989). Effects of a mastery learning strategy on elementary and middle school mathematics students' achievement and subject related affect. Unpublished doctoral dissertation, University of Tulsa.
Moody, E. C. (1994). Implementation and integration of a computer-based integrated learning system in an elementary school. Unpublished doctoral dissertation, Florida State University.
Morgan, J. C. (1994). Individual accountability in cooperative learning groups: Its impact on achievement and attitude with grade three students. Unpublished master's thesis, University of Manitoba.
Moskowitz, J. M., Malvin, J. H., Schaeffer, G. A., & Schaps, E. (1983). Evaluation of a cooperative learning strategy. American Educational Research Journal, 20(4), 687-696.
Moss, J., & Case, R. (1999). Developing children's understanding of the rational numbers: A new model and an experimental curriculum. Journal for Research in Mathematics Education, 30(2), 122-147.
Murphy, L. A. (1998). Learning and affective issues among higher- and lower-achieving third-graders in math reform classrooms: Perspectives of children, parents, and teachers. Unpublished doctoral dissertation, Northwestern University.
Murphy, R., Penuel, W., Means, B., Korbak, C., Whaley, A., & Allen, J. (2002). E-DESK: A review of recent evidence on discrete educational software (SRI International Report). Menlo Park, CA: SRI International.
National Assessment of Educational Progress. (2005). The nation's report card. Washington, DC: National Center for Education Statistics.
National Research Council. (2004). On evaluating curricular effectiveness. Washington, DC: National Academies Press.
Nattiv, A. (1994). Helping behaviors and math achievement gain of students using cooperative learning. Elementary School Journal, 94(3), 285-297.
New Century Education Corporation. (2003). New Century Integrated Instructional System, evidence of effectiveness: Documented improvement results from client schools and districts. Piscataway, NJ: Author.
Nguyen, K. (1994). The 1993-1994 Saxon mathematics evaluation report. Oklahoma City: Oklahoma City Public Schools.
Nguyen, K., & Elam, P. (1993). The 1992-1993 Saxon mathematics evaluation report. Oklahoma City: Oklahoma City Public Schools.
Opuni, K. A. (2006). The effectiveness of the Consistency Management & Cooperative Discipline (CMCD) model as a student empowerment and achievement enhancer: The experiences of two K-12 inner-city school systems. Paper presented at the 4th Annual Hawaii International Conference of Education, Honolulu, Hawaii.
Orabuchi, I. I. (1992). Effects of using interactive CAI on primary grade students' higher-order thinking skills: Inferences, generalizations, and math problem solving. Unpublished doctoral dissertation, Texas Woman's University.
Orr, C. (1991). Evaluating restructured elementary classes: Project CHILD summative evaluation. Paper presented at the Southeast Evaluation Association, Tallahassee, FL.
Patterson, D. (2005). The effects of Classworks in the classroom. Stephenville, TX: Tarleton State University.
Pellegrino, J. W., Hickey, D., Heath, A., Rewey, K., & Vye, N. J. (1992). Assessing the outcomes of an innovative instructional program: The 1990-1991 implementation of the "Adventures of Jasper Woodbury." Nashville, TN: Vanderbilt University, Learning Technology Center.
Perkins, S. A. (1987). The relationship between computer-assisted instruction in reading and mathematics achievement and selected student variables. Unpublished doctoral dissertation, University of Michigan.
Peterson, P. L., Janicki, T. C., & Swing, S. R. (1981). Ability x Treatment interaction effects on children's learning in large-group and small-group approaches. American Educational Research Journal, 18(4), 453-473.
Phillips, C. K. (2001). The effects of an integrated computer program on math and reading improvement in grades three through five. Unpublished doctoral dissertation, University of Tennessee.
Pigott, H. E., Fantuzzo, J. W., & Clement, P. W. (1986). The effects of reciprocal peer tutoring and group contingencies on the academic performance of elementary school children. Journal of Applied Behavior Analysis, 19, 93-98.
Podell, D. M., Tournaki-Rein, N., & Lin, A. (1992). Automatization of mathematics skills via computer-assisted instruction among students with mild mental handicaps. Education and Training in Mental Retardation, 27, 200-206.
Pratton, J., & Hales, L. W. (1986). The effects of active participation on student learning. Journal of Educational Research, 79(4), 210-215.
Quinn, D. W., & Quinn, N. W. (2001a). Evaluation series: Preliminary study: PLATO elementary math software, Fairview Elementary, Dayton, Ohio. Bloomington, MN: PLATO Learning.
Quinn, D. W., & Quinn, N. W. (2001b). Evaluation series: Grades 1-8, Apache Junction Unified School District 43. Bloomington, MN: PLATO Learning.
Rader, J. B. (1996). The effect of curriculum alignment between mathematics instruction and CAI mathematics on student performance and attitudes, staff attitudes, and alignment. Unpublished doctoral dissertation, United States International University.
Ragosta, M. (1983). Computer-assisted instruction and compensatory education: A longitudinal analysis. Machine-Mediated Learning, 1(1), 97-127.
Resendez, M., & Azin, M. (2005). The relationship between using Saxon Elementary and Middle School Math and student performance on Georgia statewide assessments. Jackson, WY: PRES Associates.
Resendez, M., & Manley, M. A. (2005). A study on the effectiveness of the 2004 Scott Foresman-Addison Wesley elementary math program. Jackson, WY: PRES Associates.
Resendez, M., & Sridharan, S. (2005). A study on the effectiveness of the 2004 Scott Foresman-Addison Wesley elementary math program: Technical report. Jackson, WY: PRES Associates.
Resendez, M., & Sridharan, S. (2006). A study of the effectiveness of the 2005 Scott Foresman-Addison Wesley elementary math program. Jackson, WY: PRES Associates.
Resendez, M., Sridharan, S., & Azin, M. (2006). The relationship between using Saxon Elementary School Math and student performance on the Texas statewide assessments. Jackson, WY: PRES Associates.
Riordan, J., & Noyce, P. (2001). The impact of two standards-based mathematics curricula on student achievement in Massachusetts. Journal for Research in Mathematics Education, 32(4), 368-398.
Ross, L. G. (2003). The effects of a standards-based mathematics curriculum on fourth and fifth grade achievement in two Midwest cities. Unpublished doctoral dissertation, University of Iowa.
Ross, S. M., & Nunnery, J. A. (2005). The effect of School Renaissance on student achievement in two Mississippi school districts. Memphis, TN: Center for Research in Educational Policy, University of Memphis.
Roy, J. W. (1993). An investigation of the efficacy of computer-assisted mathematics, reading, and language arts instruction. Unpublished doctoral dissertation, Baylor University.
10 October 2013 Page 27 of 33 ProQuest
Ruffin, M. R., Taylor, M., &Butts, L. W. (1991). Report of the 1989-1990 Barrett Math Program (Report No. 12,
Vol. 25). Atlanta, GA: Atlanta Public Schools, Department of Research and Evaluation. (ERIC Document
Reproduction Service No. 365508)
Rust, A. L. (1999). A study of the benefits of math manipulatives versus standard curriculum in the
comprehension of mathematical concepts. Knoxville, TN: Johnson Bible College. (ERIC Document
Reproduction Service No. 436395)
Salvo, L. C. (2005). Effects of an experimental curriculum on third graders' knowledge of multiplication facts.
Unpublished doctoral dissertation, George Mason University.
Schmidt, S. (1991). Technology for the 21st century: The effects of an integrated distributive computer network
system on student achievement. Unpublished doctoral dissertation, University of La Veme, CA.
Schoenfeld, A. H. (2006). What doesn't work: The challenge and failure of the What Works Clearinghouse to
conduct meaningful reviews of studies of mathematics curricula. Educational Researcher, 35(2), 13-21.
Schreiber, F., Lomas, C, &Mys, D. (1988). Evaluation of student reading and mathematics: WICAT computer
managed instructional program, Salina Elementary School. Dearborn, MI: Office of Research and Evaluation,
Public Schools.
Sconiers, S., Isaacs, A., Higgins, T., McBride, J., &Kelso, C. (2003). The Arc Center Tri-State Student
Achievement Study. Lexington, MA: COMAP.
Sedlmeier, P., &Gigerenzer, G. (1989). Do studies of statistical power have an effect on the power of studies?
Psychological Bulletin, 105(2), 309-316.
Shanoski, L. A. (1986). An analysis of the effectiveness of using microcomputer assisted mathematics
instruction with primary and intermediate level students. Unpublished doctoral dissertation, Indiana University of
Pennsylvania.
Shaughnessy, J., &Davis, A. (1998). Springfield research study: Impact of Opening Eyes and Visual
Mathematics curricula. Portland, OR: Northwest Regional Educational Laboratory.
Shawkey, J. (1989). A comparison of two first grade mathematics programs: Math Their Way and Explorations.
Unpublished doctoral dissertation, University of the Pacific.
Sheffield, S. (2004). Evaluation of Houghton Mifflin MathP 2005 in the Chicopee Public School District: Second
year results.
Sheffield, S. (2005). Evaluation of Houghton Mifflin Math 2005 in the Chicopee Public School District: First
year results.
Sherwood, R. D. (1991). The development and preliminary evaluation of anchored instruction environments for
developing mathematical and scientific thinking. Paper presented at the meeting of the National Association for
Research in Science Teaching, Lake Geneva, WI. (ERIC Document Reproduction Service No. ED335221)
Shiah, R. L., Mastropieri, M. A., Scruggs, T. E., &FuIk, B. J. (1994-1995). The effects of computer-assisted
instruction on the mathematical problem-solving of students with learning disabilities. Exceptionality, 5(3), 131-
161.
Simpson, N. (2001). Scott Foresman California Mathematics validation study pretest-posttest results (Report
No. VM-17-3005-CA). (Available from Pearson Scott Foresman, 1415 L Street, Suite 800, Sacramento, CA
95814)
Sinkis, D. M. (1993). A comparison of Chapter One student achievement with and without computer-assisted
instruction. Unpublished doctoral dissertation, University of Massachusetts.
Slavin, R. E. (1986). Best-evidence synthesis: An alternative to meta-analytic and traditional reviews.
Educational Researcher, 75(9), 5-11.
Slavin, R. E. (2008). What works? Issues in synthesizing educational program evaluations. Educational
Researcher, 37(1), 5-14.
Slavin, R. E., &Karweit, N. L. (1985). Effects of whole class, ability grouped, and individualized instruction on
10 October 2013 Page 28 of 33 ProQuest
mathematics achievement. American Educational Research Journal, 22(3), 351-367.
Slavin, R. E., Lake, C, &Groff, C. (2007). Effective programs for middle and high school mathematics: A best-
evidence synthesis. Baltimore: Johns Hopkins University, Center for Data-Driven Reform in Education.
Available at www.bestevidence.org
Slavin, R. E., Leavey, M., &Madden, N. A. (1984). Combining cooperative learning and individualized
instruction: Effects on student mathematics achievement, attitudes, and behaviors. Elementary School Journal,
84(A), 409-422.
Slavin, R. E., Madden, N. A., &Leavey, M. (1984a). Effects of cooperative learning and individualized instruction
on mainstreamed students. Exceptional Children, 50(5), 434-443.
Slavin, R. E., Madden, N. A., &Leavey, M. (1984b). Effects of team assisted individualization on the
mathematics achievement of academically handicapped and nonhandicapped students. Journal of Educational
Psychology, 76(5), 813-819.
Sloan, H. A. (1993). Direct Instruction in fourth and fifth grade classrooms. Unpublished master's thesis, Purdue
University.
Snider, V. E., &Crawford, D. B. (1996). Action research: Implementing Connecting Math Concepts. Effective
School Practices, 75(2), 17-26.
Snow, M. ( 1 993). The effects of computer-assisted instruction and focused tutorial services on the academic
achievement of marginal learners. Unpublished doctoral dissertation, University of Miami.
Spencer, T. M. (1999). The systemic impact of integrated learning systems on mathematics achievement.
Unpublished doctoral dissertation, Eastern Michigan University.
Spicuzza, R., Ysseldyke, J., Lemkuil, A., Kosciolek, S., Boys, C, &Teelucksingh, E. (2001). Effects of
curriculum-based monitoring on classroom instruction and math achievement. Journal of School Psychology,
39(6), 541-542.
SRA/McGraw-Hill. (2003). Everyday Mathematics student achievement studies (Vol. 4). Chicago: Author.
Stallings, J. (1985). A study of implementation of Madeline Hunter's model and its effects on students. Journal
of Educational Research, 78(6), 325-337.
Stallings, J., &Krasavage, E. M. (1986). Program implementation and student achievement in a four-year
Madeline Hunter Follow-Through Project. Elementary School Journal, 87(2), 117-138.
Stallings, P., Robbins, P., Presbrey, L., &Scott, J. (1986). Effects of instruction based on the Madeline Hunter
model on students' achievement: Findings from a followthrough project. Elementary School Journal, 86(5), 571-
587.
Stecker, P., &Fuchs, L. (2000). Expecting superior achievement using curriculumbased measurement: The
importance of individual progress monitoring. Learning Disabilities Research and Practice, 15(3), 128-134.
Stevens, J. W. (1991). Impact of an integrated learning system on third-, fourth-, and fifth-grade mathematics
achievement. Unpublished doctoral dissertation, Baylor University.
Stevens, R. J., &Slavin, R. E. (1995). Effects of a cooperative learning approach in reading and writing on
handicapped and nonhandicapped students' achievement, attitudes, and metacognition in reading and writing.
Elementary School Journal, 95, 241-262.
Stone, T. (1996). The academic impact of classroom computer usage upon middleclass primary grade level
elementary school children. Unpublished doctoral dissertation, Widener University.
Sullivan, F. L. (1989). The effects of an integrated learning system on selected fourthand fifth-grade students.
Unpublished doctoral dissertation, Baylor University.
Suppes, P., Fletcher, J. D., Zanotti, M., Lorton, P. V., Jr., &Searle, B. W. (1973). Evaluation of computer-
assisted instruction in elementary mathematics for hearing-impaired students. Stanford, CA: Institute for
Mathematical Studies in the Social Sciences, Stanford University.
Suyanto, W. (1998). The effects of student teams-achievement divisions on mathematics achievement in
10 October 2013 Page 29 of 33 ProQuest
Yogyakarta rural primary schools. Unpublished doctoral dissertation, University of Houston.
Swing, S., &Peterson, P. (1982). The relationship of student ability and small group interaction to student
achievement. American Educational Research Journal, 19, 259-274.
Tarver, S. G, &Jung, J. A. (1995). A comparison of mathematics achievement and mathematics attitudes of first
and second graders instructed with either a discoverylearning mathematics curriculum or a direct instruction
curriculum. Effective School Practices, 14(1), 49-57.
Taylor, L. (1999). An integrated learning system and its effect on examination performance in mathematics.
Computers and Education, 32(2), 95-107.
Teelucksingh, E., Ysseldyke, J., Spicuzza, R., &Ginsburg-Block, M. (2001). Enhancing the learning of English
language learners: Consultation and a curriculum based monitoring system. Minneapolis: University of
Minnesota, National Center on Educational Outcomes.
Tieso, C. (2005). The effects of grouping practices and curricular adjustments on achievement. Journal for the
Education ofthe Gifted, 29(1), 60-89.
Tingey, B., &Simon, C. (2001). Relationship study for SuccessMaker levels and SAT9 in Hueneme 3
Elementary District, school year 2000-2001, with growth analysis (Pearson Education Technologies). Pt.
Hueneme, CA.
Tingey, B. &Thrall, A. (2000). Duval County Public Schools evaluation report for 1999-2000 (Pearson Education
Technologies). Duval County, FL.
Torgerson, C. J. (2006). Publication bias: The Achilles' heel of systematic reviews? British Journal of
Educational Studies, 54(1), 89-102.
Trautman, T., &Howe, Q. (2004). Computer-aided instruction in mathematics: Improving performance in an
inner city elementary school serving mainly English language learners. Oklahoma City, OK: American Education
Corporation.
Trautman, T. S., &Klemp, R. (2004). Computer aided instruction and academic achievement: A study ofthe
AnyWhere Learning System in a district wide implementation. Oklahoma City, OK: American Education
Corporation.
Tsuei, M. (2005, June). Effects of a Web-based curriculum-based measurement system on math achievement:
A dynamic classwide growth model. Paper presented at the National Educational Computing Conference,
Philadelphia, PA.
Turner, L. G. (1985). An evaluation of the effects of paired learning in a mathematics computer-assisted-
instruction program. Unpublished doctoral dissertation, Arizona State University.
Tuscher, L. (1998). A three-year longitudinal study assessing the impact of the SuccessMaker program on
student achievement and student and teacher attitudes in the Methacton School District elementary schools.
Lehigh, PA: Lehigh University.
Underwood, J., Cavendish, S., Dowling, S., Fogelman, K., &Lawson, T. (1996). Are integrated learning systems
effective learning support tools? Computers Education, 26(1-3), 33-40.
Van Dusen, L. M., &Worthen, B. R. (1994). The impact of integrated learning system implementation on student
outcomes: Implications for research and evaluation. International Journal of Educational Research, 27(1), 13-
24.
Vaughan, W. (2002). Effects of cooperative learning on achievement and attitude among students of color.
Journal of Educational Research, 95(6), 359-364.
Villasenor, A., &Kepner, H. S. (1993). Arithmetic from a problem-solving perspective: An urban implementation.
Journal for Research in Mathematics Education, 24(1), 62-69.
Vogel, J. J., Greenwood-Ericksen, A., Cannon-Bowers, J., &Bowers, CA. (2006). Using virtual reality with and
without gaming attributes for academic achievement. Journal of Research on Technology in Education, 39(1),
105-118.
10 October 2013 Page 30 of 33 ProQuest
Vreeland, M., Vail, J., Bradley, L., Buetow, C, Cipriano, K., Green, C, et al. (1994). Accelerating cognitive
growth: The Edison School Math Project. Effective School Practices, 13(2), 64-69.
Waite, R. E. (2000). A study on the effects of Everyday Mathematics on student achievement of third-, fourth-,
and fifth- grade students in a large north Texas urban school district. Unpublished doctoral dissertation,
University of North Texas.
Watkins, M. W. (1986). Microcomputer-based math instruction with first-grade students. Computers in Human
Behavior, 2, 71-75.
Webster, A. H. (1990). The relationship of computer assisted instruction to mathematics achievement, student
cognitive styles, and student and teacher attitudes. Unpublished doctoral dissertation, Delta State University,
Cleveland, MS.
Webster, W. (1995). The evaluation of Project SEED, 1991-94: Detroit Public Schools. Detroit, MI: Detroit Public
Schools.
Webster, W., &Chadbourn, R. (1996). Evaluation of Project SEED through 1995. Dallas, TX: Dallas Public
Schools.
Webster, W. J. (1998). The national evaluation of Project SEED in five school districts: 1997-1998. Dallas, TX:
Webster Consulting.
Webster, W. J., &Chadbourn, R. A. (1989). Final report: The longitudinal effects of SEED instruction on
mathematics achievement and attitudes. Dallas, TX: Dallas ISD.
Webster, W. J., &Chadbourn, R. A. (1992). The evaluation of Project SEED: 1990-91. Dallas, TX: Dallas ISD.
Webster, W. J., &Dryden, M. (1998). The evaluation of Project SEED: 1997-1998: Detroit Public Schools.
Detroit, MI: Detroit Public Schools.
Weidinger, D. (2006). The effects of classwide peer tutoring on the acquisition of kindergarten reading and math
skills. Unpublished doctoral dissertation, University of Kansas.
Wellington, J. (1994). Evaluating a mathematics program for adoption: Connecting Math Concepts. Effective
School Practices, 13(2), 70-75.
Wenglinsky, H. (1998). Does it compute? The relationship between educational technology and student
achievement in mathematics. Princeton, NJ: Educational Testing Service Policy Information Center.
What Works Clearinghouse. (2005). Middle school math curricula. Washington, DC: U.S. Department of
Education. Retrieved July 6, 2006, from http://www
.whatworks.ed.gov/Topic.asp?tid=03&Returnpage="86"fault.asp
What Works Clearinghouse. (2006). Elementary school math. Washington, DC: U.S. Department of Education.
Retrieved September 21, 2006, from http://w-wc.org/Topic.asp?tid=04&Returnpage="87"fault.asp
Whitaker, J. C. (2005). Impact of an integrated learning system on reading and mathematics achievement.
Unpublished doctoral dissertation, Tennessee State University.
White, M. S. (1996). The effects of TIPS Math activities on fourth grade student achievement in mathematics
and on primary caregivers' and teachers' opinions about primary caregivers' involvement in schoolwork.
Unpublished master's thesis, University of Maryland.
Wildasin, R. L. (1994). A report on ILS implementation and student achievement in mathematics during the
1993-94 school year. Landisville, PA: Hempfield School District.
Williams, D. (2005). The impact of cooperative learning in comparison to traditional instruction on the
understanding of multiplication in third grade students. Unpublished doctoral dissertation, Capella University.
Wilson, C. L., &Sindelar, P. T. (1991). Direct Instruction in math word problems: Students with learning
disabilities. Exceptional Children, 57(6), 512-519.
Wodarz, N. (1994). The effects of computer usage on elementary students' attitudes, motivation and
achievement in mathematics. Unpublished doctoral dissertation, Northern Arizona University.
Woodward, J., &Baxter, J. (1997). The effects of an innovative approach to mathematics on academically low-
10 October 2013 Page 31 of 33 ProQuest
achieving students in inclusive settings. Exceptional Children, 63(3), 373-389.
Xin, J. F. (1999). Computer-assisted cooperative learning in integrated classrooms for students with and without
disabilities. Information Technology in Childhood Education, 1(1), 61-78.
Yager, S., Johnson, R. T., Johnson D. W., &Snider, B. (1986). The impact of group processing on achievement
in cooperative learning groups. Journal of Social Psychology, 126(3), 389-397.
Ysseldyke, J., Betts, J., Thill, T., &Hannigan, E. (2004). Use of an instructional management system to improve
mathematics skills for students in Title I programs. Preventing School Failure, 48(4), 10-14.
Ysseldyke, J., Spicuzza, R., Kosciolek, S., &Boys, C. (2003). Effects of a learning information system on
mathematics achievement and classroom structure. Journal of Educational Research, 96(3), 163-173.
Ysseldyke, J., Spicuzza, R., Kosciolek, S., Teelucksingh, E., Boys, C, &Lemkuil, A. (2003). Using a curriculum-
based instructional management system to enhance math achievement in urban schools. Journal of Education
for Students Placed at Risk, 8(20), 247-265.
Ysseldyke, J., &Tardrew, S. (2002). Differentiating math instruction: A large-scale study of Accelerated Math
[final rep.]. Wisconsin Rapids, WI: Renaissance Learning.
Ysseldyke, J., &Tardrew, S. (2005). Use of a progress monitoring system to enable teachers to differentiate
math instruction. Minneapolis: University of Minnesota.
Ysseldyke, J., Tardrew, S., Betts, J., Thill, T., &Hannigan, E. (2004). Use of an instructional management
system to enhance math instruction of gifted and talented students. Journal of Education for the Gifted, 27(4),
293-310.
Ysseldyke, J. E., &Bolt, D. (2006). Effect of technology-enhanced progress monitoring on math achievement.
Minneapolis: University of Minnesota.
Zollman, A., Oldham, B., &Wyrick, J. (1989). Effects of computer-assisted instruction on reading and
mathematics achievement of Chapter 1 students. Lexington, KY: Fayette County Public Schools.
Zuber, R. L. (1992). Cooperative learning by fifth-grade students: The effects of scripted and unscripted
techniques. Unpublished doctoral dissertation, Rutgers University.
Authors
ROBERT E. SLAVIN is director of the Center for Research and Reform in Education at Johns Hopkins
University, director of the Institute for Effective Education at the University of York, and cofounder and chairman
of the Success for All Foundation; 200 W. Towsontown Boulevard, Baltimore, MD 21204; rslavin@jhu.edu.
CYNTHIA LAKE is a research scientist for the Center for Research and Reform in Education at Johns Hopkins
University, CDDRE, 200 W. Towsontown Boulevard, Baltimore, MD 21204; clake5@jhu.edu. Her research
interests include comprehensive school reform, school improvement, and educational research methods and
statistics. She is currently a data analyst of meta-analyses of education for the Best Evidence Encyclopedia.
Subject: Mathematics education; Elementary schools; No Child Left Behind Act 2001-US; Academic achievement
Publication title: Review of Educational Research
Volume: 78
Issue: 3
Pages: 427-515
Publication date: Sep 2008
Publisher: American Educational Research Association
Place of publication: Washington, United States
ISSN: 0034-6543
Language of publication: English
Copyright: American Educational Research Association, Sep 2008
Response to Intervention
in Elementary-Middle Math
National Association of Elementary School Principals
BEST PRACTICES
FOR BETTER SCHOOLS
Student Assessment
About NAESP
The mission of the National Association
of Elementary School Principals (NAESP)
is to lead in the advocacy and support for
elementary and middle-level principals and
other education leaders in their commitment
for all children.
Gail Connelly, Executive Director
Michael Schooley, Deputy Executive Director
National Association of
Elementary School Principals
1615 Duke Street
Alexandria, Virginia 22314
800-386-2377 or 703-684-3345
www.naesp.org
NAESP members receive access to this white
paper as a member benefit. Learn more at
www.naesp.org/content/membership-form
About BEST PRACTICES FOR BETTER SCHOOLS
Best Practices for Better Schools, an online
publications series developed by the National
Association of Elementary School Principals,
is intended to strengthen the effectiveness
of elementary and middle-level principals
by providing information and insight about
research-based practices and by offering
guidance for implementing them in schools.
This series of publications is intended to
inform discussion, strategies, and implementation, not to imply endorsement of any
specific approach by NAESP.
Student Assessment
Response to Intervention in Elementary-Middle Math
About This White Paper
The content of this issue of Best Practices for
Better Schools is excerpted with permission
from Doing What Works (DWW), a website
sponsored by the U.S. Department of
Education. The goal of DWW is to create an
online library of resources to help principals
and other educators implement research-
based instructional practice. DWW is led
by the Department's Office of Planning,
Evaluation & Policy Development (OPEPD),
which relies on the Institute of Education
Sciences (and occasionally other entities that
adhere to standards similar to those of IES)
to evaluate and recommend practices that
are supported by rigorous research. Much
of the DWW content is based on information from the IES What Works Clearinghouse (WWC), which evaluates research on
practices and interventions to let the
education community know what is likely
to work.
NAESP was the only national education
association awarded a grant to widely
disseminate highlights of best-practice
content from the DWW website. Readers are
encouraged to visit the website to view all of
the resources related to this best practice and
to share this online resource with colleagues,
teachers, and other educators. No additional
permission is required.
NAESP cares about the environment. This
white paper is available from NAESP as an
online document only. NAESP members
and other readers are encouraged to share
this document with colleagues.
Deborah Bongiorno, Editor
Donna Sicklesmith-Anderson, Designer
Published 2011
Response to Intervention in
Elementary-Middle Math
PRINCIPALS KNOW that schools must help
all students develop the foundational skills
they need to succeed in class and to meet the
literacy and mathematics demands of work
and college. Response to Intervention (RtI) is
a comprehensive early detection, prevention,
and support system approach to help
students before they fall behind. This white
paper outlines implementation of an RtI
framework along with these four
recommended practices:
Implement screening and monitoring;
Focus on foundations of arithmetic;
Provide intentional teaching;
Build a systemwide framework.
Summaries of these practices follow.
Screen all students for potential math
difficulties and monitor their progress.
The first step in RtI is universal student
screening so schools can systematically
identify those at risk for math difficulties.
Accurate identification of at-risk students
requires efficient, reliable, and valid
measures of the most important math
objectives. Because no one assessment is
perfectly reliable, schools are encouraged to
use multiple screening measures, including
state assessment results. Once students
needing intervention have been identified,
regular progress monitoring is essential to
determine if supplemental instruction is
meeting their learning needs or if regrouping
is necessary.
ACTIONS
The RtI team evaluates screening measures
using reliability, efficiency, and validity
criteria.
It is important for the district- or school-
level RtI team to evaluate potential screening
measures. The team should select efficient,
reasonably reliable measures that
demonstrate predictive validity. Most
importantly, screening instruments should
cover critical instructional objectives for
each grade. In grades four through eight,
state assessment results can be used in
combination with a screening instrument to
increase the accuracy of decisions about who
is at risk.
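To make the idea of combining a screener with state assessment results concrete, here is a minimal sketch of such a decision rule. It is not an official RtI tool: the function name, the cut scores, and the 25% "borderline" band are all hypothetical placeholders that a real team would replace with values from the instrument's technical manual and local norms.

```python
# Illustrative sketch: flagging students for intervention review by
# combining a screening score with a state assessment result.
# Cut scores (20 and 40) and the borderline band are hypothetical.

def flag_at_risk(screening_score, state_score=None,
                 screening_cut=20, state_cut=40):
    """Return True if a student should receive further intervention review.

    A student is flagged when the screener is below its cut score, or when
    a borderline screener is corroborated by a low state assessment result
    (available in grades four through eight).
    """
    if screening_score < screening_cut:
        return True
    if state_score is not None:
        # Borderline screener plus a low state score also triggers a flag.
        borderline = screening_score < screening_cut * 1.25
        return borderline and state_score < state_cut
    return False

students = {"A": (15, 55), "B": (23, 35), "C": (30, 70)}
flagged = [name for name, (scr, st) in students.items()
           if flag_at_risk(scr, st)]
print(flagged)  # ['A', 'B']: A fails the screener; B is borderline plus low state score
```

Using two measures this way reflects the text's point that no single assessment is perfectly reliable: one low score prompts a second look rather than a final placement decision.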
Implement twice-a-year screening.
Screen students in the beginning and middle
of the year to ensure that those at risk are
identified and receive intervention services
in a timely fashion. Using the same grade-
level screening tools across the district
facilitates analysis of patterns of results
across schools.
Monitor student progress regularly.
Use grade-appropriate general outcome
measures to monitor the progress of students
receiving Tier two and Tier three
interventions and borderline students
receiving Tier one instruction at least
monthly, and use the data to regroup
students when necessary. Curriculum-
embedded assessments can be used in
interventions as often as daily or as
infrequently as every other week to
determine whether students are learning
from the intervention.
WHAT PRINCIPALS SAY
Principals can see how these actions are
implemented in schools by viewing these
web-based interviews with teachers and
specialists:
Universal Screening in Math
Functions Progress Monitoring
Monitoring Student Progress
Data Team Meeting: Grade Five Math
Review
The Power of Data
TOOLS
The following tools and templates are
designed to help principals and teachers
implement this best practice in their school.
Each tool is a downloadable document that
principals can adapt to serve their particular
needs.
RtI Data Analysis Teaming Process Script:
Guidelines for conducting a data team
meeting.
Screening and Intervention Record Forms:
Screening forms used by RtI data teams to
record student performance, goals, and skills.
Screening Tools Chart: Chart from the
National Center on Response to Intervention
identifies screening tools by content area and
rates each based on reliability and accuracy.
Screening for Mathematics Difficulties in
K-3 Students: Report examining the
effectiveness and key components of early
screening measures.
Graphing Progress How-To Packet:
Directions for creating and analyzing student
progress graphs.
Data Team Assessment and Planning
Worksheet: Worksheet to assess the status of
the data team process.
Problem Analysis and Intervention Design
Worksheets: Excerpts from training modules
to guide teams in data-driven instruction
and learning.
The Foundations of Arithmetic: Focus
Intervention on Whole and Rational
Numbers, Word Problems, and Fact
Fluency
In grades K through five, math interventions
should focus intensely on in-depth treatment
of whole numbers and operations, while
grades four through eight should address
rational numbers as well as advanced topics
in whole number arithmetic, such as long
division.
Interventions on solving word problems
should include instruction that helps
students identify common underlying
structures of various problem types. Students
can learn to use these structures to
categorize problems and determine
appropriate solutions for each problem type.
Intervention teachers at all grade levels
should devote about ten minutes of each
session to building fluent retrieval of basic
arithmetic facts. Primary-grade students can
learn efficient counting-on strategies to
improve math fact retrieval, and
intermediate and middle-grade students
should learn to use their knowledge of
properties (e.g., the commutative,
associative, and distributive laws) to derive
facts in their heads.
WHAT PRINCIPALS SAY
Principals can see how these actions are
implemented in schools by viewing these
web-based interviews with teachers and
specialists:
Math Content for Struggling Students
Word Problems
Reteaching Place Value in Tier Two
Content for Tiers Two and Three
Teaching Word Problem Structures
TOOLS
The following tools and templates are
designed to help principals and teachers
implement these best practices in their
school. Each tool is a downloadable
document that principals can adapt to serve
their particular needs.
K-8 Essential Mathematics Concepts and
Skills: Excerpt from the Iowa Core
Curriculum reviewing essential skills.
K-7 Essential Mathematics Standards
Assessment Form: Assessment form to
record and track student progress on math
proficiency.
National Mathematics Advisory Panel: Core
Principles of Instruction: Fact sheet
summarizing the Panel's core findings.
National Mathematics Advisory Panel: K-8
Benchmarks: Recommended standards
based on international benchmarks.
RtI in Math for Elementary and Middle
Schools: A PowerPoint providing an
overview of math instruction
recommendations.
ACTIONS
Focus kindergarten through fifth-grade
interventions on whole numbers.
For kindergarten through grade five
students, Tier two and Tier three
interventions should focus almost exclusively
on properties of whole numbers and
operations. Some older students struggling
with whole numbers and operations would
also benefit from in-depth coverage of these
topics.
Focus fourth- through eighth-grade
interventions on rational numbers.
Tier two and Tier three interventions in
grades four through eight should focus on
in-depth coverage of rational numbers and
advanced topics in whole number
arithmetic, such as long division.
Ensure in-depth coverage of math topics.
Districts should select math curriculum
materials that offer an in-depth focus on the
essential topics recommended by the
National Math Panel. Tier two and Tier three
interventions should follow the same
in-depth treatment of these topics.
Interventions on solving word problems should
include instruction that helps students identify
common underlying structures.
Teach students about the structure of various
problem types, how to use these structures to
categorize problems, and how to determine
appropriate solutions for each problem type.
Teach students to transfer known solution
methods from familiar to unfamiliar
problems.
Interventions at all grade levels should devote
about ten minutes each session to building
fluent retrieval of basic arithmetic facts.
Extensive practice facilitates automatic
retrieval. In kindergarten through second
grade, explicitly teach students efficient
counting-on strategies to improve their
retrieval of mathematics facts. Students in
grades two through eight should learn how to use properties such as the commutative, associative, and distributive laws to derive facts in their heads.
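The counting-on and derived-facts strategies described above can be spelled out step by step in a short sketch. The function names and the choice of the times-five "anchor" fact are illustrative assumptions; the point is only to make each strategy's reasoning explicit.

```python
# Illustrative sketch of two fact-fluency strategies named in the text.

def count_on(a, b):
    """Counting-on: start from the larger addend and count up by ones."""
    start, steps = (a, b) if a >= b else (b, a)
    total = start
    for _ in range(steps):
        total += 1
    return total

def derive_product(a, b, anchor=5):
    """Derive a x b from a known anchor fact using the distributive law:
    a x b = (a x anchor) + (a x (b - anchor)), e.g. 7 x 6 = 7x5 + 7x1."""
    return a * anchor + a * (b - anchor)

print(count_on(3, 8))        # 11: count on three from eight
print(derive_product(7, 6))  # 42: 7x5 plus 7x1
```

The commutative law is what justifies starting from the larger addend in counting-on, and the distributive law is what lets a student who knows the fives facts derive neighboring facts instead of memorizing each one in isolation.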
Intentional Teaching: Provide Explicit
Instruction and Incorporate Visual
Representations and Motivational
Strategies
Intervention instruction should be explicit
and systematic, incorporating models of
proficient problem solving, verbalization of
thought processes, guided practice,
corrective feedback, and frequent cumulative
review. Instructional materials should
include examples of easy and difficult
problems. Students need guided practice
with scaffolding, including opportunities to
communicate their problem-solving
strategies. Motivation is key for students
struggling with math, so it is important to
praise effort and engagement to encourage
persistence.
Intervention materials should provide
students opportunities to work with visual
representations of math concepts, and
interventionists should be proficient in their
use. Tools such as number lines and strip
diagrams have many different uses in
explanation of mathematical concepts and
problem solving. When introducing new
concepts to students, it is helpful to begin
with concrete experiences, follow up with
visual representation, and then move to
abstract concepts.
ACTIONS
Tier two and Tier three math instruction should
provide clear explanations with think-alouds.
Explicit teaching begins with the teacher's
demonstration and modeling of proficient
problem solving with think-alouds.
Explicit teaching includes guided practice with
scaffolding of the required problem-solving
steps.
Students need many opportunities to work
as a group and communicate their problem-
solving strategies for easy and difficult
problems. During guided practice, the
teacher provides support to enable students
to correctly solve problems, gradually
withdrawing support as students master
problem-solving steps.
Guided practice should include immediate
corrective feedback.
Extensive guided practice during Tier two
and Tier three instruction allows teachers to
provide students with immediate feedback
regarding the accuracy and appropriateness
of their problem-solving strategies. Feedback
should identify accurate understandings and
guide students to the correct responses when
they have made mistakes.
Use visual representations to explain math
concepts.
Intervention materials should provide
students opportunities to practice math
concepts with visual representations such as
number lines, arrays, and strip diagrams.
Interventionists should be proficient in the
use of visual representations.
Praise student effort and engagement.
Include motivational strategies in Tier two
and Tier three interventions, reinforcing
students' effort and attention to lessons and
rewarding accomplishments. Help students
persist by setting goals and charting
progress.
WHAT PRINCIPALS SAY
Principals can see how these actions are
implemented in schools by viewing these
web-based interviews with teachers and
specialists:
Explicit Instruction
Visual Representations
Concrete to Abstract Sequence
Organizing for Differentiation in the Core
Classroom
Pacing Instruction in Tier Three
Explicit Teaching in the Fifth-Grade Math
Core
TOOLS
The following tools and templates are
designed to help principals and teachers
implement these best practices in their
school. Each tool is a downloadable
document that principals can adapt to serve
their particular needs.
Concrete-Representational-Abstract (CRA)
Instructional Approach Summary Report: A
strategy for teaching fractions that includes
multiple opportunities for practice at each
level.
Solving Algebra and Other Story Problems
With Simple Diagrams: A strategy for
solving problems with visualization
methods.
Teaching Students Math Problem-Solving
Through Graphic Representations: Examples
of visual representation techniques.
Establish a systemwide framework for
RtI to support the three recommended
practices.
Implementation encompasses the
groundwork and support needed to put the
recommended practices into action. RtI
begins with a systemwide framework that
includes universal screening and progress
monitoring; content focused on whole
numbers in grades kindergarten through
fifth and rational numbers in grades four
through eight, including word problem
structure and daily fact fluency practice; and
systematic, explicit teaching incorporating
visualizations. Districts and schools need
leadership and guidance at all levels to
support implementation of a multi-tiered
system. State-level teams can inform policy
decisions and provide guidance on
assessments, instructional resources, and
funding allocation. Some states have
provided high-level support for RtI
implementation through special training or
technical assistance centers that are charged
with working with local districts and
schools. In those cases, district-level teams
can access professional development and
coaching in RtI implementation. Schools will
need to provide extensive training and
ongoing support to staff to ensure fidelity
and sustain an RtI framework.
ACTIONS
Build a comprehensive framework that
addresses reading and mathematics.
The components of an RtI framework are
consistent whether a district is
focusing on reading or math or addressing
both subjects at the same time. Components
include: universal screening, ongoing
progress monitoring, quality core instruction
for all students, and a system of tiered
interventions to address students' skill needs.
When schools take stock of what they
already have in place, it is likely that some
components of an RtI system are already
part of the instructional program; for
example, a system of formative assessments
for monitoring progress. Districts and
schools will take different paths to building
out an RtI framework depending on what is
already working well and what components
they need to develop. Implementation will
involve cooperation among school personnel
at many levels, so it is a good idea to create a
leadership team that includes representatives
from special education, services to English
learners, subject matter and assessment
specialists, and classroom teachers. Full
implementation of an RtI framework usually
requires several years.
Establish core mathematics instructional
programs focused on foundational skills.
Once schools have established a core
curriculum aligned with state and district
standards, they can choose intervention
programs that are compatible with the core
program and then provide intensive
small-group or individualized instruction.
Alignment with the core program is not as
critical as ensuring that tiered instruction is
systematic, explicit, and focused on
high-priority mathematics skills.
Create leadership teams in districts and schools
to facilitate implementation of RtI
components.
Leadership teams are responsible for building
the infrastructure to support an effective RtI
framework. District teams are able to provide
guidance to schools in evaluating and
selecting core programs and assessment
measures, managing data collection and
reporting, and coordinating professional
development opportunities and instructional
resources across schools. Building-level
teams facilitate direct implementation of the
key components by coordinating staff and
resources, scheduling and other logistics, and
guiding teacher teams in using data and
providing interventions at the students' level
of need.
Provide professional development and
instructional supports to sustain high-quality
implementation.
Staff members will require adequate training
and support to successfully implement
recommended practices. Districts can work
with regional and state networks to bring
external opportunities for training and
technical assistance to schools. At the
building level, coaches and specialists can
provide staff development, ongoing
classroom support, and collaborative
experiences to advance teachers' skills in
implementing RtI components.
WHAT PRINCIPALS SAY
Principals can see how these actions are
implemented in schools by viewing these
web-based interviews with teachers and
specialists:
How RtI Changes Special Education
The Phases of RtI Implementation
Partnering General and Special Education
State Leadership: Building an RtI System
Setting the Stage for RtI Implementation
Lessons From Iowa About RtI
RtI Training for School Districts
Principal's Role in Instructional Decision
Making
Charting the Path
Powerful RtI Training Experiences
TOOLS
The following tools and templates are
designed to help principals and teachers
implement these best practices in their
school. Each tool is a downloadable
document that principals can adapt to serve
their particular needs.
RtI Self-Assessment Tool for Elementary
Schools: Tool that assesses ten readiness
categories for implementing an RtI
framework.
Scaling-Up Instruction and Technical
Assistance Briefs: Framework for developing
capacity to make statewide use of evidence-
based practice.
Funding Considerations for Implementing
RtI: Guide from the Pennsylvania
Department of Education with potential
funding sources for specific RtI components.
Professional Development Continuum:
Training continuum for staff on RtI
components, from beginning to advanced
level.
District Implementation Tracking Plan: A
resource from Oregon to prepare districts as
they implement RtI.
RtI Implementation Self-Report: A form for
reporting the status of implementation across
ten effectiveness indicators.
RtI Comprehensive Evaluation Tool: Tool
from Colorado to evaluate RtI components
and continuing needs.
Progress Monitoring Training Plan: A plan
for training staff on the principles of
progress-monitoring and implementation.
RtI Parent Guides: Guide and a brochure
describing the RtI framework to parents.
Conclusion
Good math instruction is built on screening
and monitoring of students, teaching the
foundations of arithmetic, and practicing
intentional teaching. These three practices
are, in turn, supported by the Response to
Intervention (RtI) approach, which is
dedicated to helping students before they fall
behind. To ensure that math instruction gets
the attention it deserves, principals have
access to solutions through NAESP and the
DWW website. These solutions are based on
solid research and the best thinking
currently available in the field. As more
white papers in NAESP's Best Practices for
Better Schools series are developed,
principals will continue to find help in
addressing critical issues in elementary and
middle school education.
SITE PROFILES
Cornell Elementary School (Iowa): RtI has
been evolving at Cornell Elementary over
the last 15 years.
Durham Elementary School (Oregon):
Durham Elementary's key to success is its
strong building-level leadership team.
Tri-Community Elementary School
(Pennsylvania): Tri-Community's RtI
framework has helped foster school
turnaround.
John Wash Elementary School (California):
John Wash Elementary's framework involves
a three-tiered pyramid of instruction and
intervention.
Related Links
The Access Center: Resources to enhance
access to general education for students
with disabilities, including these reports:
Concrete-Representational-Abstract
(CRA) Instructional Approach
Direct or Explicit Instruction and
Mathematics
Strategy/Implicit Instruction and
Mathematics
Center on Instruction (COI): Funded by the
U.S. Department of Education, this research
hub offers free resources, including:
Curriculum-Based Measurement in
Mathematics: An Evidence-Based
Formative Assessment Procedure (PDF)
Early Mathematics Assessment (Power-
Point)
Pre-Algebra and Algebra Instruction and
Assessments (PowerPoint)
Center on Instruction, continued
Principles of Effective Assessment for
Screening and Progress Monitoring
(PowerPoint)
Progress Monitoring for Elementary
Mathematics
Screening for Mathematics Difficulties in
K-3 Students (PDF)
Council for Exceptional Children (CEC):
Instructional Support Teams Help Sustain
Responsive Instruction Frameworks
IDEA Partnership: Response to Intervention
Collection: Library of tools, partnership
briefs and dialogue guides.
Institute of Education Sciences (U.S.
Department of Education): Organizing
Instruction and Study to Improve Student
Learning (PDF)
Iowa Heartland Area Education Agency
(AEA): IDM: How to Get Started and
Instructional Decision Making (IDM)
The IRIS Center: A national center devoted
to providing free, online interactive training
enhancements for educators working with
students with disabilities.
Math Forum: Mathematics and Motivation –
An Annotated Bibliography
MathVids: Instructional strategies, teaching
plans and videos for educators working with
struggling math students.
National Association of State Directors of
Special Education (NASDSE): RtI Project
National Council of Teachers of
Mathematics: The Council's Illuminations
library
provides hundreds of online activities and
lessons for educators.
National Center on Response to Intervention
(NCRtI): Housed at the American Institutes
for Research, the Center provides technical
assistance to states and districts for building
capacity for RtI implementation.
National Center on Student Progress
Monitoring: Math Web Resources Library:
Downloadable articles, PowerPoints, FAQs
and links to additional resources on student
monitoring.
National Research Center on Learning
Disabilities (NRCLD): Effective Mathematics
Instruction
Oregon Response to Intervention Project
(OrRTI): How the EBIS/RTI Process Works
in Elementary Schools (PDF)
Pennsylvania Training and Technical
Assistance Network (PaTTAN): Materials to
build capacity of education agencies serving
special education students.
RTI Action Network: An organization
devoted to guiding educators to RtI
implementation.
TeachingLD: Teaching How-To's (Math) and
Teaching Students Math Problem-Solving
Through Graphic Representations (PDF)
U.S. Department of Education (USDE):
Implementing RTI Using Title I, Title III,
and CEIS Funds: Key Issues for
Decision-Makers, and Response to
Intervention Framework in Mathematics (PDF)
Research and Results
MASTERING MATH FACTS
The third stage of learning math facts: Developing automaticity
Donald B. Crawford, Ph.D. Otter Creek Institute
"Everyone knows what automaticity is. It is the immediate, obligatory way we
apprehend the world around us. It is the fluent, effortless manner in which we perform
skilled behaviors. It is the popping into mind of familiar knowledge at the moment we
need it" (Logan, 1991a, p. 347).
Fluency in computation and knowledge of math facts are part of the NCTM Math
Standards. As the explanation of the Number and Operations Standard states, "Knowing
basic number combinations - the single-digit addition and multiplication pairs and their
counterparts for subtraction and division - is essential. Equally essential is computational
fluency - having and using efficient and accurate methods for computing" (NCTM, 2003,
p. 32).
Later within the explanation of this standard, it is stated that, "By the end of
grade 2, students should know the basic addition and subtraction combinations, should be
fluent in adding two-digit numbers, and should have methods for subtracting two-digit
numbers." At the grades 3-5 level, "as students develop the basic number combinations for
multiplication and division, they should also develop reliable algorithms to solve
arithmetic problems efficiently and accurately. These methods should be applied to larger
numbers and practiced for fluency" (NCTM, 2003, p. 35).
For children to be fluent in computation and to be able to do math mentally
requires that they become fluent in the basic arithmetic facts. Researchers have long
maintained there are three general stages of learning math facts and different types of
instructional activities to effect learning in those stages (Ando & Ikeda, 1971; Ashlock,
1971; Bezuk & Cegelka, 1995; Carnine & Stein, 1981; Garnett, 1992; Garnett &
Fleischner, 1983). Researchers such as Garnett (1992) have carefully documented the
developmental sequence of procedures that children use to obtain the answers to math
facts. Children begin with a first stage of counting, or procedural knowledge of math
facts. A second stage consists of developing ways to remember math facts by relating
them to known facts. Then the third and final stage is the declarative knowledge of the
facts: simply "knowing" the facts, or direct retrieval. Another line of research has established that
most children move from using procedures to figure out math facts in the first grade to
the adult model of retrieval by fifth grade (Koshmider & Ashcraft, 1991). The switch
from predominantly procedural to predominantly declarative knowledge or direct
retrieval of math facts normally begins at around the 2nd to 3rd grade level (Ashcraft, 1984).
If these three levels of knowing math facts are attainable, what does that imply
about how to teach math facts? Different teaching and practice procedures apply to each
of these stages. Meyers & Thorton (1977) suggested activities specifically geared toward
different levels, including activities for "overlearning" intended to promote rapid recall of
a set or cluster of facts. As Ashcraft (1982) points out there is much more evidence of
direct retrieval of facts when you study adults or children in 4th grade or above. There is
much research on the efficacy of deriving facts from relationships developed in studies
on addition and subtraction with primary age children (Carpenter & Moser, 1984;
Garnett, 1992; Steinberg, 1985; Thorton, 1978). However, it is not always made clear that this
research focuses on soon-to-be-discarded methods of answering math facts.
Some of the advice on how to teach math facts is unclear about which stage is
being addressed. The term "learning" has been variously applied to all three stages, and
the term "memorization" has been used to apply to both the second and third stages
(Stein, Silbert & Carnine, 1997). What is likely is that teaching strategies that address
earlier stages of learning math facts may be counterproductive when attempting to
develop automaticity. This paper will examine research on math fact learning as it
applies to each of the three stages, with an emphasis on the third stage, the development
of automaticity.
First stage: Figuring out math facts
The first stage has been characterized as "understanding," "conceptual," or
"procedural." This is the stage where the child must count, do successive addition, or use
some other strategy to figure out the answer to a fact (Garnett, 1992). Specifically in
view is "the child's ability to associate the written number sentence with a physical
referent" (Ashlock, 1971, p. 359).
There is evidence that problems with learning mathematics may have their origins
in failure to completely master counting skills in kindergarten and first grade (Geary,
Bow-Thomas, & Yao, 1992). At the beginning stage arithmetic facts are problems to be
solved (Gersten & Chard, 1999). If students cannot solve basic fact problems given
plenty of time, then they simply do not understand the process, and certainly are not
ready to begin memorization (Kozinski & Gast, 1993). Students must understand the
process well enough to solve any fact problem given them before beginning
memorization procedures. "To ensure continued success and progress in mathematics,
students should be taught conceptual understanding prior to memorization of the facts"
(Miller, Mercer, & Dillon, 1992, p. 108). Importantly, the conceptual understanding, or
procedural knowledge of counting should be developed prior to memorization, rather
than in place of it. "Prior to teaching for automaticity it is best to develop the
conceptual understanding of these math facts as procedural knowledge" (Bezuk &
Cegelka, 1995, p. 365).
Second stage: Strategies for remembering math facts
The second stage has been characterized as "relating" or as "strategies for
remembering." This can include pairs of facts related by the commutative property, e.g.,
5 + 3 = 3 + 5 = 8. This can also include families of facts such as 7 + 4 = 11, 4 + 7 = 11,
11 - 4 = 7, and 11 - 7 = 4. Garnett characterizes such strategies as more mature than
counting procedures, indicating, "Another mature strategy is linking one problem to a
related problem (e.g., for 5 + 6, thinking 5 + 5 = 10, so 5 + 6 = 11)" (Garnett, 1992, p.
212). The goal of the second stage is simple accuracy rather than developing fluency or
automaticity. Studies looking at use of strategies for remembering seldom use timed
tests, and never have rigorous expectations of highly fluent performance. As long as
students are accurate, a remembering strategy is considered successful.
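The fact-family relationship described above is easy to make concrete in code. The sketch below is purely illustrative (the function name and output format are this example's own, not drawn from any cited curriculum): one known addition pair yields its commutative partner and the two related subtraction facts.

```python
def fact_family(a, b):
    """Return the four related facts generated by one addition pair.

    Hypothetical illustration of a 'fact family': one known sum
    yields its commutative partner and two subtraction facts.
    """
    total = a + b
    return [
        f"{a} + {b} = {total}",
        f"{b} + {a} = {total}",   # commutative property
        f"{total} - {a} = {b}",   # related subtraction facts
        f"{total} - {b} = {a}",
    ]

# The family built from 7 + 4, matching the example in the text:
print(fact_family(7, 4))
# ['7 + 4 = 11', '4 + 7 = 11', '11 - 7 = 4', '11 - 4 = 7']
```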
Carpenter and Moser (1984) reported on a longitudinal study of children's
evolving methods of solving simple addition and subtraction problems in first through 3rd
grade. They identified sub-stages within counting as well as variations on recall
strategies. The focus of their study was on naturalistically developed strategies that
children had made up to solve problems they were presented, rather than evaluating a
teaching sequence for its efficiency. For example, they noted that children were not
consistent in using the most efficient strategy, even when they sometimes used that
method. Carpenter and Moser did caution that, "It has not been clearly established that
instruction should reflect the natural development of concepts and skills" (1984, p. 200).
Garnett (1992) identified similar stages and sub-stages in examining children with
learning disabilities.
Successful students often hit early on a strategy for remembering simple facts,
whereas less successful students lack such a strategy and may simply guess (Thorton, 1978;
Carnine & Stein, 1981). Research has demonstrated that teaching the facts to students,
especially those who don't develop their own strategies, in some logical order that
emphasizes the relationships makes them easier to remember. Many studies have been
reported in which various rules and relationships are taught to help children derive the
answers to math facts (Carnine & Stein, 1981; Rightsel & Thorton, 1985; Steinberg,
1985; Thorton, 1978; Thorton & Smith, 1988; Van Houten, 1993).
An example of such a relationship rule is the "doubles plus 1" rule. This rule
applies to facts such as 8 + 9, which is the same as 8 + 8 plus 1: because 8 + 8 =
16 and 16 + 1 = 17, it follows that 8 + 9 = 17 (Thorton, 1978). Most authors suggest that
learning the facts in terms of relationships is an intermediate stage between the stage of
conceptual development that involves using a procedure and the stage of practicing to
develop automatic recall (Steinberg, 1985; Stein et al., 1997; Suydam,
1984; Thorton, 1978). Research has indicated that "helping them [students] to develop
thinking strategies is an important step between the development of concepts with
manipulative materials and pictures and the mastery of the facts with drill-and-practice
activities" [emphasis added] (Suydam, 1984, p. 15). Additional practice activities are
needed (after teaching thinking strategies) in order to develop automatic, direct retrieval
of facts.
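The "doubles plus 1" derivation discussed above can be written out step by step. This is a hypothetical sketch of the thinking strategy only; the function and its structure are this example's invention, not materials from Thorton's study.

```python
def doubles_plus_one(a, b):
    """Derive a near-doubles sum, e.g. 8 + 9, from the known double 8 + 8.

    Hypothetical sketch of the 'doubles plus 1' thinking strategy:
    it applies only when the two addends differ by exactly 1.
    """
    lo, hi = min(a, b), max(a, b)
    if hi != lo + 1:
        raise ValueError("strategy applies only to near-doubles")
    known_double = lo + lo      # the already-memorized doubles fact
    return known_double + 1     # 'doubles plus 1'

print(doubles_plus_one(8, 9))   # 17, because 8 + 8 = 16 and 16 + 1 = 17
```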
Thorton suggested that "curriculum and classroom efforts should focus more
carefully on the development of strategy prior to drill on basic facts" (1978, p. 226).
Thorton's (1978) research showed that using relationships such as doubles and doubles
plus 1, and other tricks to remember facts as part of the teaching process resulted in
more facts being learned after eight weeks than in classes where such aids to memory
were not taught. Similarly, Carnine and Stein (1981) found that students instructed with
a strategy for remembering facts learned a set of 24 facts with higher accuracy than
students who were presented the facts to memorize with no aids to remembering (84% vs.
59%).
Steinberg (1985) studied the effect of teaching noncounting, "derived facts"
strategies, in which "the child uses a small set of known number facts to find or derive the
solution to unknown number facts" (1985, p. 337). The subjects were second graders
and the focus was on addition and subtraction facts. The children changed strategies
from counting to using the derived facts strategies and did write the answers to more of
the fact problems within the 2 seconds allotted per problem than before such instruction.
Steinberg did not find that teaching derived facts strategies necessarily led to automaticity
of number facts.
Thorton and Smith (1988) found that after teaching strategies and relationship
activities to first graders, the students correctly answered more facts than a control group.
In the timed tests rates were about 20 problems per minute in the experimental group and
about 10 per minute in the traditional group. Although rates of responding did not
indicate automaticity, approximately 55% of the experimental group, compared to 12% in
the control group, reported that rather than using counting strategies, they had mostly
memorized a set of 9 target subtraction facts.
Van Houten (1993) specifically taught rules to children (although the rules were
unlike those normally used by children, or discussed in the literature) and found that the
children learned a set of seven facts more rapidly when there was a rule to remember
them with compared to a random mix of seven facts with no relationships. However, as
in the other studies, Van Houten did not measure development of automaticity through
some kind of timing criterion. Instead, accuracy was the only measure of learning.
Isaacs & Carroll (1999) listed a potential instructional sequence of relationships to
be explored for addition and subtraction facts:
1. Basic concepts of addition; direct modeling and counting all for addition
2. The 0 and 1 addition facts; counting on; adding 2
3. Doubles (6 + 6, 8 + 8, etc.)
4. Complements of 10 (9 + 1, 8 + 2, etc.)
5. Basic concepts of subtraction; direct modeling for subtraction
6. Easy subtraction facts ( -0, -1, and -2 facts); counting back to subtract
7. Harder addition facts; derived-fact strategies for addition (near doubles, over-10
facts)
8. Counting up to subtract
9. Harder subtraction facts; derived-fact strategies for subtraction (using addition
facts, over-10 facts) (Isaacs & Carroll, 1999, pp. 511-512).
Baroody noted that "mastery of the facts would include discovering, labeling,
and internalizing relationships. Meaningful instruction (the teaching of thinking
strategies) would probably contribute more directly to this process than drill approach
alone" (1985, p. 95). Certainly the research is clear that learning rules and relationships
in the second stage helps students learn to give the correct answer to math facts. And
with the variety of rules that have been researched and found to be effective, the exact
nature of the rule children use is apparently immaterial. It appears that any rule or
strategy that allows the child to remember the correct answer ensures that practice will be
"perfect practice" and learning will be optimized. Performance may deteriorate if
practice is not highly accurate, because if the student is allowed to give incorrect
answers, those will then compete with the correct answer in memory (Goldman &
Pellegrino, 1986). Learning facts in this second stage develops accurate but not
necessarily automatic responding, which would be measured by rapid rates of
responding.
Third stage: Developing automaticity with math facts
The third stage of learning math facts has been called mastery, or overlearning, or
the development of automaticity. In this stage children develop the capacity to simply
recall the answers to facts without resorting to anything other than direct retrieval of the
answer. Ashlock (1971) indicated that children must have "immediate recall of the
basic facts" so they can use them with facility in computation: "If not, he may find it
difficult to develop skill in computation for he will frequently be diverted into tangential
procedures" (1971, p. 363).
When automaticity is developed, one of its most notable traits is speed of
processing. "Proficient levels of performance go beyond the accuracy (quality) of an
acquired skill to encompass sufficient speed (quantity) of performance. It is this sort of
proficiency with basic facts, rather than accuracy per se, which is so notably lacking in
many learning disabled children's computation performance" (Garnett & Fleischner, 1983,
p. 224).
The development of automaticity has been studied extensively in the psychology
literature. Research by psychologists on math facts and other variations on automatic fact
retrieval has been extensive, probably because "Children's acquisition of skill at mental
addition is a paradigm case of automaticity" (Logan & Klapp, 1991a, p. 180).
While research continues to refine the details of the models of the underlying
processes, there has for some time been a clear consensus that adults retrieve the
answers to math facts directly from memory. Psychologists argue that the process
underlying automaticity is memory retrieval: "According to these theories, performance is
automatic when it is based on direct-access, single-step retrieval of solutions from
memory rather than some algorithmic computation" (Logan & Klapp, 1991a, p. 179).
When facts have been well practiced, they are remembered quickly and
automatically, which frees up other mental processes to use the facts in more complex
problems (Ashcraft, 1992; Campbell, 1987b; Logan, 1991a). A variety of careful
psychological experiments have demonstrated that adults no longer use rules,
relationships, or procedures to derive the answers to math facts; they simply remember
or retrieve them (Ashcraft, 1985; Campbell, 1987a; Campbell, 1987b; Campbell &
Graham, 1985; Graham, 1987; Logan, 1988; Logan & Klapp, 1991a; Logan & Klapp,
1991b; McCloskey, Harley, & Sokol, 1991).
For example, Logan and Klapp (1991a) did experiments with "alphabet
arithmetic," where each letter corresponded to a number (A=1, B=2, C=3, etc.). In these
studies college students initially experienced the same need to count to figure out sums
that elementary children did. In the initial-counting stage the time to answer took longer
the higher the addends (requiring more time to count up). However, once these alphabet
facts became automatic, they took the same amount of time regardless of the addends,
and they took less time than they could possibly have counted. In addition, they found
that when students were presented with new items they had not previously had, they
reverted to counting and the time to answer went up dramatically and was again related to
the size of the addends. When the students were presented with a mix of items, some
of which required counting and others which they had already learned, they were still able
to answer the ones they had previously learned roughly as fast as they had before.
Response times were able to clearly distinguish between the processes used on facts the
students were figuring out vs. the immediate retrieval process for facts that had become
automatic.
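The alphabet-arithmetic task itself is easy to sketch. The code below is a hypothetical reconstruction of the task format rather than Logan and Klapp's actual materials: with letters standing for numbers, a sum such as C + 4 is initially answered by counting on through the alphabet.

```python
import string

def alphabet_sum(letter, addend):
    """Answer an alphabet-arithmetic problem such as C + 4 by counting on.

    Hypothetical reconstruction of the Logan & Klapp (1991a) task
    format; letters map to numbers (A=1, B=2, C=3, ...), so C + 4
    is answered by counting on four letters from C: D, E, F, G.
    """
    pos = string.ascii_uppercase.index(letter)   # A -> 0, B -> 1, ...
    return string.ascii_uppercase[pos + addend]  # count on through the alphabet

print(alphabet_sum("C", 4))  # G
```

With practice, the experiments suggest, such problems stop being solved by this counting loop and are instead retrieved directly from memory.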
Does the same change in processes apply to children as they learn facts? Another
large set of experiments has demonstrated that as children mature they transition from
procedural (counting) methods of solving problems to the pattern shown by adults of
direct retrieval (Ashcraft, 1982; Ashcraft, 1984; Ashcraft, 1992; Campbell, 1987;
Campbell & Graham, 1985; Geary & Brown, 1991; Graham, 1987; Koshmider &
Ashcraft, 1991; Logan, 1991b; Pelegrino & Goldman, 1987). There is evidence that
adults and children use strategies and rules for certain facts (Isaacs & Carroll, 1999).
However, this phenomenon appears to be limited to the +0, +1, x0, and x1 facts and therefore does not indicate adult use of such strategies and rules for any of the rest of the facts (Ashcraft, 1985).
Interestingly, most of the research celebrating strategies and rules relates to
addition and subtraction facts. Once children are expected to become fluent in recalling multiplication facts, such strategies are inadequate to the task, and memorization seems to
be the only way to accomplish the goal of mastery. Campbell and Graham's (1985) study
of error patterns in multiplication facts by children demonstrated that as children matured
errors became limited to products related correctly to one or the other of the factors, such
as 4 x 7 = 24 or 6 x 3 = 15 (see also Campbell, 1987b; Graham, 1987). They concluded that "the acquisition of simple multiplication skill is well described as a process of associative bonding between problems and candidate answers ... determined by the relative strengths of correct and competing associations ... not a consequence ... of the execution of reconstructive arithmetic procedures" (Campbell & Graham, 1985, p. 359).
In other words, we remember answers based on practice with the facts rather than using
some procedure to derive the answers. And what causes difficulty in learning facts is that when we mistakenly bring up the answers to other, similar math fact problems, we sometimes fail to discard these errors (Campbell, 1987b).
Graham (1987) reported on a study where students who learned multiplication
facts in a mixed order had little or no difference in response times between smaller and
larger fact problems. Graham's recommendation was that facts be arranged in sets in which commonly confused problems are not taught together. In addition, he suggested that "it may be beneficial to give the more difficult problems (e.g. 3 x 8, 4 x 7, 6 x 9, and 6 x 7) a head start. This could be done by placing them in the first set and requiring a strict performance criterion before moving on to new sets" (Graham, 1987, p. 139).
Ashcraft has studied the development of mental arithmetic extensively (Ashcraft,
1982; Ashcraft, 1984; Ashcraft, 1992; Koshmider & Ashcraft, 1991). These studies have
examined response times and error rates over a range of ages. The picture is clear. In young children who are still counting, response times are consistent with counting rates.
Ashcraft found that young children took longer to answer facts, long enough to count the answers. He also found that they took longer in proportion to the sum of the addends, an indication that they were counting. After children learn the facts, the
picture changes. Once direct retrieval is the dominant mode of answering, response times decrease from over 3 seconds down to less than one second on average. Ashcraft notes
that other researchers have demonstrated that while young children use a variety of
approaches to come up with the answer (as they develop knowledge of the math facts),
eventually these are all replaced by direct retrieval in normally achieving 5th and 6th grade children.
Ashcraft and Christy (1995) summarized the model of math fact performance
from the perspective of cognitive psychology: "the whole-number arithmetic facts are eventually stored in long-term memory in a network-like structure ... at a particular level of strength, with strength values varying as a function of an individual's practice and experience ... because these variables determine strength in memory" (Ashcraft & Christy, 1995, p. 398). Interestingly enough, Ashcraft and Christy's (1995) examination of
math texts led to the discovery that, in both addition and multiplication, smaller facts (2s-5s) occur about twice as frequently as larger facts (6s-9s). This may in part explain why larger facts are less well learned and have slightly slower response times.
Logan's (1991b) studies of the transition to direct retrieval show that learners start
the process of figuring out the answer and use retrieval if the answer is remembered
before the process is completed. For a time there is a mixture of some answers being
retrieved from memory while others are being figured out. Once items are learned to the
automatic level, the answer occurs to the learner before they have time to count. Geary
and Brown (1991) found the same sort of evolving mix of strategies among gifted,
normal, and math-disabled children in the third and fourth grades.
Why is automaticity with math facts important?
Psychologists have long argued that higher-level aspects of skills require that lower-level skills be developed to automaticity. Turn-of-the-century psychologists eloquently captured this relationship in the phrase, "Automaticity is not genius, but it is the hands and feet of genius" (Bryan & Harter, 1899, as cited in Bloom, 1986).
Automaticity occurs when tasks are learned so well that performance is fast, effortless,
and not easily susceptible to distraction (Logan, 1985).
An essential component of automaticity with math facts is that the answer must
come by means of direct retrieval, rather than following a procedure. This is tantamount
to the common observation that students who count on their fingers have not mastered
the facts. Why is counting on your fingers inadequate? Why isn't procedural knowledge or counting a satisfactory end point? "Although correct answers can be obtained using procedural knowledge, these procedures are effortful and slow, and they appear to interfere with learning and understanding higher-order concepts" (Hasselbring, Goin, & Bransford, 1988, p. 2). The notion is that the mental effort involved in
figuring out facts tends to disrupt thinking about the problems in which the facts are
being used.
The argument about this information-processing dilemma was developed partly by analogy to reading, where difficulty with the process of simply decoding the words disrupts comprehension of the message. Gersten and Chard made the analogy between reading and math explicit: "Researchers explored the devastating effects of the lack of automaticity in several ways. Essentially they argued that the human mind has a limited capacity to process information, and if too much energy goes into figuring out what 9 plus 8 equals, little is left over to understand the concepts underlying multi-digit subtraction, long division, or complex multiplication" (1999, p. 21).
Practice is required to develop automaticity with math facts. "The importance of drill on components [such as math facts] is that the drilled material may become sufficiently over-learned to free up cognitive resources and attention. These cognitive resources may then be allocated to other aspects of performance, such as more complex operations like carrying and borrowing, and to self-monitoring and control" (Goldman & Pellegrino, 1986, p. 134). This suggests that even mental strategies or mnemonic tricks
for remembering facts must be replaced with simple, immediate, direct retrieval, else the
strategy for remembering itself will interfere with the attention needed on the more
complex operations. For automaticity with math facts to have any value, the answers must be recalled without effort or conscious attention, because we want the student's conscious attention directed elsewhere.
For students to be able to recall facts quickly in more complex computational
problems, research tells us the students must know their math facts at an acceptable level
of automaticity. "Therefore, teachers ... must be prepared to supplement by providing more practice, as well as by establishing rate criteria that students must achieve" (Stein et al., 1997, p. 93). The next question is: what rate criteria are appropriate?
How fast is fast enough to be automatic?
Some educational researchers consider facts to be automatic when a response
comes in two or three seconds (Isaacs & Carroll, 1999; Rightsel & Thorton, 1985;
Thorton & Smith, 1988). However, performance is, by definition, not automatic at rates
that purposely allow enough time for students to use efficient strategies or rules for
some facts (Isaacs & Carroll, 1999, p. 513).
Most of the psychological studies have looked at automatic response time as
measured in milliseconds and found that automatic (direct retrieval) response times are
usually in the ranges of 400 to 900 milliseconds (less than one second) from presentation
of a visual stimulus to a keyboard or oral response (Ashcraft, 1982; Ashcraft, Fierman &
Bartolotta, 1984; Campbell, 1987a; Campbell, 1987b; Geary & Brown, 1991; Logan,
1988). Similarly, Hasselbring and colleagues considered students to have automatized math facts when response times were down to around 1 second from presentation of a stimulus until a response was made (Hasselbring et al., 1987). If, however, students are shown the fact and asked to read it aloud, then a second has already passed, in which case no
delay should be expected after reading the fact. "We consider mastery of a basic fact as the ability of students to respond immediately to the fact question" (Stein et al., 1997, p. 87).
In most school situations students are tested on one-minute timings. Expectations
of automaticity vary somewhat. Translating a one-second response time directly into writing answers for one minute would produce 60 answers per minute. Sixty problems per minute is exactly in the middle of the range of 40 to 80 problems per minute shown by adult teachers in this author's workshops. However, some children, especially in the
primary grades, cannot write that quickly. "In establishing mastery rate levels for individuals, it is important to consider the learner's characteristics (e.g., age, academic skill, motor ability). For most students a rate of 40 to 60 correct digits per minute [25 to 35 problems per minute] with two or fewer errors is appropriate" (Mercer & Miller, 1992, p. 23).
Howell and Nolet (2000) recommend an expectation of 40 correct facts per
minute, with a modification for students who write at less than 100 digits per minute.
The student's digits per minute are taken as a percentage of 100, and that percentage is multiplied by 40 to give the expected number of problems per minute; for example, a child who can write only 75 digits per minute would have an expectation of 75% of 40, or 30 facts per minute.
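Howell and Nolet's adjustment reduces to a single proportion. A minimal sketch in Python (the function name, and capping the goal at the full 40 facts for fast writers, are my own assumptions):

```python
def expected_facts_per_minute(digits_per_minute, base_goal=40):
    """Scale the base goal of 40 facts per minute by the student's
    writing speed, with 100 digits per minute as the full-speed
    baseline (after Howell & Nolet, 2000)."""
    fraction = min(digits_per_minute / 100, 1.0)  # capping at 100% is assumed
    return base_goal * fraction

print(expected_facts_per_minute(75))  # 75% of 40 = 30.0 facts per minute
```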
One goal is to develop enough math fact skill in isolation, so that math fact speed
will continue to improve as a result of using facts in more complex problems. Miller and
Heward discussed the research finding that "students who are able to compute basic math facts at a rate of 30 to 40 problems correct per minute (or about 70 to 80 digits correct per minute) continue to accelerate their rates as tasks in the math curriculum become more complex ... [however] ... students whose correct rates were lower than 30 per minute showed progressively decelerating trends when more complex skills were introduced. The minimum correct rate for basic facts should be set at 30 to 40 problems per minute, since this rate has been shown to be an indicator of success with more complex tasks" (1992, p. 100). If a range of 30 to 40 problems per minute is recommended, the
higher end of 40 would be more likely to continue to accelerate than the lower end at 30,
making 40 a better goal.
Another recommendation was that the criterion be set "at a rate [in digits per minute] that is about 2/3 of the rate at which the student is able to write digits" (Stein et al., 1997, p. 87). For example, a student who could write 100 digits per minute would be
expected to write 67 digits per minute, which translates to between 30 and 40 problems
per minute.
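Stein et al.'s two-thirds rule can be sketched the same way (the function name and the rounding are illustrative choices of mine):

```python
def stein_digit_criterion(writing_rate_digits_per_min):
    """Mastery criterion in digits per minute: about 2/3 of the rate
    at which the student can write digits (after Stein et al., 1997)."""
    return round(writing_rate_digits_per_min * 2 / 3)

print(stein_digit_criterion(100))  # 67 digits per minute
```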
In summary, if measured individually, a response delay of up to 1 second would
be automatic. In writing, the range of 30 to 40 problems per minute seems to be the minimum, although a good case can be made that setting an expectation of 40 would be more prudent. Students who can write quickly enough can be expected to develop their fluency up to about 60 per minute. Sadly, many school districts have expectations
as low as 16 to 20 problems per minute (from standards of 50 problems in 3 minutes or
100 problems in five minutes). Children can count answers on their fingers at the rate of
20 per minute. Such low expectations pass children who have only developed
procedural knowledge of how to figure out the facts, rather than the direct recall of
automaticity.
What type of practice effectively leads to automaticity?
Some children, despite a great deal of drill and practice on math facts, fail to
develop fluency or automaticity with the facts. Hasselbring and Goin (1988) reported that researchers who examined the effects of computer-delivered drill-and-practice programs have generally found that such practice seldom leads to automaticity, especially among children with learning disabilities.
Ashlock explained the reason: "the major result of practice is increased rate and accuracy in doing the task that is actually practiced. For example, the child who practices counting on his fingers to get missing sums usually learns to count on his fingers more quickly and accurately. The child who uses skip counting or repeated addition to find a product learns to use repeated addition more skillfully, but he continues to add. Practice does not necessarily lead to more mathematically mature ways of finding the missing number or to immediate recall as such ... If the process to be reinforced is recalling, then it is important that the child feel secure to state what he recalls, even if he will later check his answer" (1971, p. 363).
Hasselbring et al. stated that "From our research we have concluded that if a student is using procedural knowledge (i.e., counting strategies) to solve basic math facts, typical computer-based drill and practice activities do not produce a developmental shift whereby the student retrieves the answers from memory" (1988, p. 4). This author experienced the same failure of typical practice procedures to develop fluency in his own students with learning disabilities: months, and sometimes multiple years, of practice resulted in students who were barely more fluent than they were at the start.
Hasselbring and Goin noted that the minimal increases in fluency seen in such
students
can be attributed to students becoming more efficient at counting and not
due to the development of automatic recall of facts from memory....We
found that if a learner with a mild disability has not established the ability
to retrieve an answer from memory before engaging in a drill-and-practice
activity then time spent in drill-and-practice is essentially wasted. On the
other hand, if a student can retrieve a fact from memory, even slowly, then
use of drill-and-practice will quickly lead to the fluent recall of that
fact....the key to making computer-based drill an activity that will lead to
fluency is through additional instruction that will establish information in
long-term memory....Once acquisition occurs (i.e., information is stored in
long term memory), then drill-and-practice can be used effectively to
make the retrieval of this information fluent and automatic (1988, p.
203).
For practice to lead to automaticity, students must be recalling the facts, rather than
deriving them.
Other researchers have found the same phenomenon, although they have used different language. If students can recall answers to fact problems, rather than derive them from a procedure, those answers have been learned; continued practice could therefore be called overlearning. For example, "Drill and practice software is most effective in the overlearning phase of learning, i.e., effective drill and practice helps the student develop fast and efficient retrieval processes" (Goldman & Pellegrino, 1986).
Unless students are recalling the answers, and recalling them correctly, while they are practicing, the practice is not valuable. Teaching students some variety of memory aid would therefore seem necessary: if students cannot recall a fact directly or instantly, but can recall the trick, they can still get the right answer and then continue to practice remembering the right answer.
However, there may be another way to achieve the same result. Garnett, while encouraging student use of a variety of relationship strategies in the second stage, hinted at the answer when she suggested that teachers should "press for speed (direct retrieval)" with a few facts at a time (1992, p. 213).
How efficient is practice when a small set of facts is practiced (small enough to
remember easily)? Logan and Klapp (1991a) found that college students required less
than 15 minutes of practice to develop automaticity on a small set of six alphabet
arithmetic facts. "The experimental results were predicted by theories that assume that memory is the process underlying automaticity. According to those theories, performance is automatic when it is based on single-step, direct-access retrieval of a solution from memory; in principle, that can occur after a single exposure" (Logan & Klapp, 1991a, p. 180).
This suggests that if facts were learned in small sets, which could easily be
remembered after a couple of presentations, the process of developing automaticity with
math facts could proceed with relatively little pain and considerably less drill than is
usually associated with learning all the facts. "The conclusion that automatization depends on the number of presentations of individual items rather than the total amount of practice has interesting implications. It suggests that automaticity can be attained very quickly if there is not much to be learned. Even if there is much to be learned, parts of it can be automatized quickly if they are trained in isolation" (Logan & Klapp, 1991a, p. 193).
Logan and Klapp's studies show that "The crucial variable [in developing automaticity] is the number of trials per item, which reflects the opportunity to have memorized the items" (1991a, p. 187). So if children are asked to memorize only a couple of facts at a time, they could develop automaticity with those facts fairly quickly.
Apparently extended practice is not necessary to produce automaticity. Twenty minutes
of rote memorization produced the same result as 12 sessions of practice on the task!
(Logan, 1991a, p. 355).
Has this in fact been demonstrated with children? As noted earlier, few researchers focus on the third stage, the development of automaticity, and even fewer studies instruct on anything less than 7 to 10 facts at a time. However, in cases where small sets of facts have been used, the results have been uniformly successful,
even with students who previously had been unable to learn math facts. Cooke and
colleagues also found, in practicing math facts to automaticity, evidence suggesting that "greater fluency can be achieved when the instructional load is limited to only a few new facts interspersed with a review of other fluent facts" (1993, p. 222). Stein et al. indicate that a set of new facts should consist of no more than four facts (1997, p. 87).
Hasselbring et al. found that,
Our research suggests that it is best to work on developing
declarative knowledge by focusing on a very small set of new target facts
at any one time, no more than two facts and their reversals. Instruction
on this target set continues until the student can retrieve the answers to the
facts consistently and without using counting strategies.
We begin to move children away from the use of counting
strategies by using controlled response times. A "controlled response
time" is the amount of time allowed to retrieve and provide the answer to a
fact. We normally begin with a controlled response time of 3 seconds or
less and work down to a controlled response time around 1.25 seconds.
We believe that the use of controlled response times may be the
most critical step to developing automatization. It forces the student to
abandon the use of counting strategies and to retrieve answers rapidly
from the declarative knowledge network.
If the controlled response time elapses before the child can
respond, the student is given the answer and presented with the fact again.
This continues until the child gives the correct answer within the
controlled response time (1988, p. 4) [emphasis added].
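The controlled-response-time loop Hasselbring et al. describe can be sketched as follows. The `respond` callable standing in for the student, and the cap on retries, are my own framing, not part of their procedure:

```python
def controlled_response_drill(fact, correct_answer, respond,
                              limit_s=3.0, max_rounds=20):
    """Present `fact` until the student answers correctly within the
    controlled response time; on a slow or wrong answer the fact would
    be modeled and presented again (after Hasselbring et al., 1988).
    `respond` returns (answer, latency_in_seconds) for one attempt."""
    for attempt in range(1, max_rounds + 1):
        answer, latency = respond(fact)
        if answer == correct_answer and latency <= limit_s:
            return attempt  # answered within the window
        # otherwise: give the answer, then re-present the fact
    return None  # never met the criterion

# Simulated student: always correct, but faster on each presentation.
latencies = iter([4.0, 2.0])
print(controlled_response_drill("6 x 7", 42, lambda f: (42, next(latencies))))  # 2
```

The first simulated attempt (4.0 s) exceeds the 3-second window, so the fact is presented again; the second attempt succeeds.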
Giving the answer to the student, rather than allowing the student to derive it, changes the nature of the task. Instead of simply finding the answer, the student is checking to see whether he or she remembers the answer. If not, the student is reminded of the answer and then gets more opportunities to practice remembering the fact's answer. Interestingly, this finding, that it is important to allow the student only a short amount of time before providing the answer, has been studied extensively as a procedure psychologists call constant time delay.
Several studies have found that using a constant time delay procedure alone for
teaching math facts is quite effective (Bezuk & Cegelka, 1995). This near-errorless
technique means that if the student does not respond within the time allowed, a
controlling prompt (typically a teacher modeling the correct response) is provided. The
student then repeats the task and the correct answer (Koscinski & Gast, 1993b). The
results have shown that this type of near-errorless practice has been effective whether
delivered by computer (Koscinski & Gast, 1993a) or by teachers using flashcards
(Koscinski & Gast, 1993b).
Two aspects of the constant time delay procedure are instructionally critical. One is that the time allowed is short, on the order of 3 or 4 seconds, ensuring that students are using memory retrieval rather than reconstructive procedures; in other words, they are remembering rather than figuring out the answers. The second is that a student who fails to remember is immediately told the answer and asked to repeat it, so it becomes clear that the point of the task is to remember the answers rather than to continue deriving them over and over.
These aspects of constant time delay teaching procedures are the same ones that
were found to be effective by Hasselbring and colleagues. Practicing on a small set of
facts that are recalled from memory until those few facts are answered very quickly
contrasts sharply with the typical practice of timed tests on all 100 facts at a time. In
addition, the requirement that these small sets be practiced until answers come easily, in a matter of one or two seconds, is also unusual. Yet research indicates that developing very strong associations between each small set of facts and their answers (as shown by quick, correct answers) before moving on to learn more facts will improve success in learning.
Campbell's research found that most difficulty in learning math facts resulted from interference from correct answers to allied problems, problems where one of the factors was the same (Campbell, 1987a; Campbell, 1987b; Campbell & Graham, 1985). He asked, "How might interference in arithmetic fact learning be minimized? ... If strong correct associations are established for problems and answers encountered early in the learning sequence (i.e. a high criterion for speed and accuracy), those problems should be less susceptible to retroactive interference when other problems and answers are introduced later. Furthermore, the effects of proactive interference on later problems should also be reduced" (Campbell, 1987b, pp. 119-120).
Because confusion among possible answers is the key problem in learning math
facts, the solution is to establish a mastery-learning paradigm, where small sets of facts
are learned to high levels of mastery before adding any more facts to be learned. Researchers who have looked at the development of automaticity have found support for the mastery-learning paradigm. The mechanics of achieving this gradual
mastery-learning paradigm have been outlined by two sets of authors. Hasselbring et al.
found that,
As stated, the key to making drill and practice an activity that will lead to
automaticity in learning handicapped children is additional instruction for establishing a
declarative knowledge network. Several instructional principles may be applied in
establishing this network:
1. Determine the learner's level of automaticity.
2. Build on existing declarative knowledge.
3. Instruct on a small set of target facts.
4. Use controlled response times.
5. Intersperse automatized with targeted nonautomatized facts during instruction.
(1988, p. 4).
The teacher cannot focus on a small set of facts and intersperse them with facts
that are already automatic until after an assessment determines which facts are already
automatic. Facts that are answered without hesitation after the student reads them aloud
would be considered automatic. Facts that are read to the student should be answered
within a second.
Silbert, Carnine, and Stein recommended a similar make-up for a set of facts for flashcard instruction: "Before beginning instruction, the teacher makes a pile of flash cards. This pile includes 15 cards: 12 should have the facts the student knew instantly on the tests, and 3 would be facts the student did not respond to correctly on the test" (1990, p. 127).
These authors also recommended procedures for flashcard practice similar to the constant time delay teaching methods: "If the student responds correctly but takes longer than 2 seconds or so, the teacher places the card back two or three cards from the front of the pile. Likewise, if the student responds incorrectly, the teacher tells the student the correct answer and then places the card two or three cards back in the pile. Cards placed two or three back from the front will receive intensive review." The teacher would continue placing the card two or three places back in the pile until the student responds acceptably (within 2 seconds) four times in a row (Silbert et al., 1990, p. 130).
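One step of this flashcard routine can be sketched as a list operation. The passage does not say what happens to a card answered acceptably; sending it to the back of the pile is my assumption, and the four-in-a-row exit criterion is omitted from this sketch:

```python
from collections import deque

def process_card(pile, correct, latency_s, setback=3, limit_s=2.0):
    """One flashcard presentation (after Silbert et al., 1990): a fast,
    correct answer retires the front card to the back of the pile
    (assumed handling); a slow or wrong answer re-inserts it `setback`
    cards from the front so it receives intensive review."""
    card = pile.popleft()
    if correct and latency_s <= limit_s:
        pile.append(card)           # acceptable response: card drifts back
    else:
        pile.insert(setback, card)  # intensive review, "two or three back"
    return pile

pile = deque(["3x8", "4x7", "6x9", "6x7", "7x8"])
process_card(pile, correct=False, latency_s=5.0)
print(list(pile))  # ['4x7', '6x9', '6x7', '3x8', '7x8']
```

The missed card ("3x8") resurfaces after three other cards, giving the intensive review the passage describes.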
Hasselbring et al. described how their computer program Fast Facts provided a similar, though slightly more sophisticated, presentation order during practice: "Finally, our research suggests that it is best to work on developing a declarative knowledge network by interspersing the target facts with other already automatized facts in a prespecified, expanding order. Each time the target fact is presented, another automatized fact is added as a spacer so that the amount of time between presentations of the target fact is expanded. This expanding presentation model requires the student to retrieve the correct answers over longer and longer periods" (1988, p. 5).
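The expanding presentation model can be sketched as a sequence generator. The exact schedule (one more spacer after each target presentation, spacers drawn cyclically) is my own reading of "expanding order":

```python
from itertools import cycle

def expanding_presentation(target, automatized, presentations=4):
    """Build a practice sequence in which each presentation of the target
    fact is followed by one more automatized 'spacer' fact than the last,
    so retrieval intervals grow (after Hasselbring et al., 1988)."""
    spacers = cycle(automatized)
    sequence = []
    for i in range(presentations):
        sequence.append(target)
        sequence.extend(next(spacers) for _ in range(i + 1))
    return sequence

print(expanding_presentation("6x7", ["2x2", "5x5"], presentations=3))
```

Each time "6x7" reappears, one more review fact separates it from its last appearance, forcing retrieval over a longer delay.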
Unfortunately, the research-based math facts drill-and-practice program Fast
Facts developed by Hasselbring and colleagues was not developed into a commercial
product. Nor were their recommendations incorporated into the design of existing
commercial math facts practice programs, which seldom provide either controlled
response times or practice on small sets. What practical alternatives to cumbersome
flashcards or ineffective computer drill-and-practice programs are available to classroom
teachers who want to use the research recommendations to develop automaticity with
math facts?
How can teachers implement class-wide effective facts practice?
Stein et al. tackled the issue of how teachers could implement an effective facts
practice program consistent with the research. As an alternative to total individualization,
they suggest developing a sequence of learning facts through which all students would
progress. Stein et al. describe the worksheets that students would master one at a time.
Each worksheet would be divided into two parts. The top half of the
worksheets should provide practice on new facts including facts from the
currently introduced set and from the two preceding sets. More
specifically, each of the facts from the new set would appear four times.
Each of the facts from the set introduced just earlier would appear three
times, and each of the facts from the set that preceded that one would
appear twice ... The bottom half of the worksheet should include 30
problems. Each of the facts from the currently introduced set would
appear twice. The remaining facts would be taken from previously
introduced sets. (1997, p. 88).
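The worksheet layout Stein et al. describe can be sketched as a generator. The shuffling, the random draw of review items, and the assumption of four-fact sets are my own additions:

```python
import random

def build_worksheet(new_set, prev_set, older_set, review_pool, seed=0):
    """Assemble one practice worksheet (after Stein et al., 1997).
    Top half: each new fact 4 times, each fact from the preceding set
    3 times, each fact from the set before that twice. Bottom half:
    30 problems, with each new fact twice and the remainder drawn from
    previously introduced facts."""
    rng = random.Random(seed)
    top = new_set * 4 + prev_set * 3 + older_set * 2
    rng.shuffle(top)
    bottom = new_set * 2
    bottom += [rng.choice(review_pool) for _ in range(30 - len(bottom))]
    rng.shuffle(bottom)
    return top, bottom

top, bottom = build_worksheet(["6x7", "7x6", "6x8", "8x6"],
                              ["6x9", "9x6", "6x6", "7x7"],
                              ["4x7", "7x4", "4x8", "8x4"],
                              ["2x2", "3x3", "5x5", "9x9"])
print(len(top), len(bottom))  # 36 30
```

With four-fact sets, the top half works out to 36 problems (16 new, 12 from the preceding set, 8 from the set before that) and the bottom half to the specified 30.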
The daily routine consists of student pairs, one of which does the practicing while
the other follows along with an answer key. Stein et al. specify:
"The teacher has each student practice the top half of the worksheet twice. Each practice session is timed; the student practices by saying complete statements (e.g., 4 + 2 = 6), rather than just answers ... If the student makes an error, the tutor corrects by saying the correct statement and having the student repeat the statement. The teacher allows students a minute and a half when practicing the top part and a minute when practicing the bottom half" (1997, p. 90).
After each student practices, the teacher conducts a timed one-minute test of the
30 problems on the bottom half. Students who correctly answered at least 28 of the 30
items, within the minute allowed on the bottom half test, have passed the worksheet and
are then given the next worksheet to work on (Stein et al., 1997). A specific
performance criterion for fact mastery is critical to ensure mastery before moving on to
additional material. These criteria, however, appear to be lower than the slowest
definition of automaticity in the research literature. Teachers and children could benefit
from a higher rate of mastery before progressing, in the range of 40 to 60 problems per minute, if the students can write that quickly.
Stein et al. also delineate the organizational requirements for such a program to be
effective and efficient: "A program to facilitate basic fact memorization should have the
following components:
1. a specific performance criterion for introducing new facts
2. intensive practice on newly introduced facts
3. systematic practice on previously introduced facts
4. adequate allotted time
5. a record-keeping system
6. a motivation system" (1997, p. 87).
The worksheets and the practice procedures ensure the first three points.
Adequate allotted time would be on the order of 10 to 15 minutes per day: three minutes of practice for each student of the practicing pair, plus the one-minute test for everyone and transition time. The authors caution that memorizing basic facts "may require months and months of practice" (Stein et al., 1997, p. 92). A record-keeping
system could simply record the number of tries at each worksheet and which worksheets
had been passed. A motivation system that gives certificates and various forms of
recognition along the way will help students maintain the effort needed to master all the
facts in an operation.
Such a program of gradual mastery of small sets of facts at a time is fundamentally different from the typical kind of facts practice. Because children are learning only a small set of new facts, it does not take many repetitions to commit them to memory; the learning occurs during the practice time. Similarly, because the timed tests cover only the facts already brought to mastery, children are quite successful. Because they see success in small increments after only a couple of days' practice, students remain motivated and encouraged.
Contrast this with the common situation where children are given timed tests over
all 100 facts in an operation, many of which are not in long-term memory (still counting).
Students are attempting to become automatic on the whole set at the same time, which
requires brute force to master. Because children's efforts are not focused on a small set
to memorize, students often just become increasingly anxious and frustrated by their lack
of progress. Such tests do not teach students anything other than to remind them that
they are unsuccessful at math facts. Even systematically taking timed tests daily on the
set of 100 facts is ineffective as a teaching tool and is quite punishing to the learner:
"Earlier special education researchers attempted to increase automaticity with math facts
by systematic drill and practice. . . . But this brute force approach made mathematics
unpleasant, perhaps even punitive, for many" (Gersten & Chard, 1999, p. 21).
These inappropriate timed tests led to an editorial by Marilyn Burns in which
she offered her opinion that "[t]eachers who use timed tests believe that the tests help
children learn basic facts. . . . Timed tests do not help children learn" (Burns, 1995, pp. 408-
409). Clearly, timed tests only establish whether or not children have learned; they do
not teach. However, if children are learning facts in small sets and are being taught their
facts gradually, then timed tests will demonstrate this progress. Under such
circumstances, if children are privy to graphs showing their progress, they find it quite
motivating (Miller, 1983).
Given the most common form of timed tests, it is not surprising that Isaacs and
Carroll argue that "an overreliance on timed tests is more harmful than beneficial
(Burns, 1995); this fact has sometimes been misinterpreted as meaning that they should
never be used. On the contrary, if we wish to assess fact proficiency, time is important.
Timed tests also serve the important purpose of communicating to students and parents
that basic-fact proficiency is an explicit goal of the mathematics program. However,
daily, or even weekly or monthly, timed tests are unnecessary" (1999, p. 512).
In contrast, when children are successfully learning the facts through the use of a
properly designed program, they are happy to take tests daily to see if they've improved.
Results from classroom studies in which time trials have been evaluated show that
"[a]ccuracy does not suffer, but usually improves, and students enjoy being
timed. . . . When asked which method they preferred, 26 of the 34 students indicated they
liked time trials better than the untimed work period" (Miller & Heward, 1992, pp. 101-
102).
In summary, the proper kind of practice can enable students to develop their skill
with math facts beyond strategies for remembering facts into automaticity, or direct
retrieval of math fact answers. Automaticity is achieved when students can answer math
facts with no hesitation, or no more than a one-second delay, which translates into 40 to 60
problems per minute. What is required for students to develop automaticity is a
particular kind of practice focused on small sets of facts, practiced under limited response
times, where the focus is on remembering the answer quickly rather than figuring it out.
The introduction of additional new facts should be withheld until students can
demonstrate automaticity with all previously introduced facts. Under these
circumstances students are successful and enjoy graphing their progress on regular timed
tests. Using an efficient method for bringing math facts to automaticity has the added
value of freeing up more class time to spend on higher-level mathematical thinking.
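The automaticity criterion summarized above (no more than a one-second delay, which translates into roughly 60 facts per minute, with the 40-to-60 range allowing for handwriting speed) amounts to a simple rate check. The function below is an illustrative sketch; the 40-per-minute default is one of the document's thresholds, not a universal standard.

```python
# Worked arithmetic for the automaticity criterion: one second per fact
# implies about 60 facts per minute; writing speed lowers the practical
# criterion toward 40 per minute.

def is_automatic(correct, minutes, criterion_per_min=40):
    """True if the observed rate meets the (assumed) automaticity criterion."""
    return correct / minutes >= criterion_per_min

print(60 / 1.0)             # one fact per second -> 60 facts per minute
print(is_automatic(45, 1))  # True: 45 per minute clears a 40-per-minute bar
print(is_automatic(52, 2))  # False: 26 per minute is still "figuring it out"
```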
REFERENCES
Ando, M., & Ikeda, H. (1971). Learning multiplication facts: More than a drill.
Arithmetic Teacher, 18, 365-368.
Ashcraft, M. H. (1982). The development of mental arithmetic: A chronometric
approach. Developmental Review, 2, 213-236.
Ashcraft, M. H., & Christy, K. S. (1995). The frequency of arithmetic facts in elementary
texts: Addition and multiplication in grades 1-6. Journal for Research in Mathematics
Education, 25(5), 396-421.
Ashcraft, M. H., Fierman, B. A., & Bartolotta, R. (1984). The production and verification
tasks in mental addition: An empirical comparison. Developmental Review, 4, 157-170.
Ashcraft, M. H. (1985). Is it farfetched that some of us remember our arithmetic facts?
Journal for Research in Mathematics Education, 16 (2), 99-105.
Ashcraft, M. H. (1992). Cognitive arithmetic: A review of data and theory. Cognition,
44, 75-106.
Ashlock, R. B. (1971). Teaching the basic facts: Three classes of activities. The
Arithmetic Teacher, 18.
Baroody, A. J. (1985). Mastery of basic number combinations: Internalization of
relationships or facts? Journal for Research in Mathematics Education, 16(2), 83-98.
Bezuk, N. S. & Cegelka, P. T. (1995). Effective mathematics instruction for all students.
In P. T. Cegelka & Berdine, W. H. (Eds.) Effective instruction for students with learning
difficulties. (pp. 345-384). Needham Heights, MA: Allyn and Bacon.
Bloom, B. S. (1986). Automaticity: "The hands and feet of genius." Educational
Leadership, 43(5), 70-77.
Burns, M. (1995). In my opinion: Timed tests. Teaching Children Mathematics, 1, 408-
409.
Campbell, J. I. D. (1987a). Network interference and mental multiplication. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 13 (1), 109-123.
Campbell, J. I. D. (1987b). The role of associative interference in learning and retrieving
arithmetic facts. In J. A. Sloboda & D. Rogers (Eds.) Cognitive process in mathematics:
Keele cognition seminars, Vol. 1. (pp. 107-122). New York: Clarendon Press/Oxford
University Press.
Campbell, J. I. D. & Graham, D. J. (1985). Mental multiplication skill: Structure,
process, and acquisition. Canadian Journal of Psychology. Special Issue: Skill. 39(2),
367-386.
Carnine, D. W. & Stein, M. (1981). Organizational strategies and practice procedures for
teaching basic facts. Journal for Research in Mathematics Education, 12(1), 65-69.
Carpenter, T. P. & Moser, J. M. (1984). The acquisition of addition and subtraction
concepts in grade one through three. Journal for Research in Mathematics Education,
15(3), 179-202.
Cooke, N. L., Guzaukas, R., Pressley, J. S., Kerr, K. (1993) Effects of using a ratio of
new items to review items during drill and practice: Three experiments. Education and
Treatment of Children, 16(3), 213-234.
Garnett, K. (1992). Developing fluency with basic number facts: Intervention for
students with learning disabilities. Learning Disabilities Research and Practice; 7 (4)
210-216.
Garnett, K. & Fleischner, J. E. (1983). Automatization and basic fact performance of
normal and learning disabled children. Learning Disability Quarterly, 6(2), 223-230.
Geary, D. C. & Brown, S. C. (1991). Cognitive addition: Strategy choice and speed-of-
processing differences in gifted, normal, and mathematically disabled children.
Developmental Psychology, 27(3), 398-406.
Geary, D. C., Bow-Thomas, C. C., & Yao, Y. (1992). Counting knowledge and skill in
cognitive addition: A comparison of normal and mathematically disabled children.
Journal of Experimental Child Psychology, 54(3), 372-391.
Gersten, R. & Chard, D. (1999). Number sense: Rethinking arithmetic instruction for
students with mathematical disabilities. The Journal of Special Education, 33(1), 18-28.
Goldman, S. R. & Pellegrino, J. (1986). Microcomputer: Effective drill and practice.
Academic Therapy, 22(2). 133-140.
Graham, D. J. (1987). An associative retrieval model of arithmetic memory: how
children learn to multiply. In J. A. Sloboda & D. Rogers (Eds.) Cognitive process in
mathematics: Keele cognition seminars, Vol. 1. (pp. 123-141). New York: Clarendon
Press/Oxford University Press.
Hasselbring, T. S. & Goin, L. T. (1988) Use of computers. Chapter Ten. In G. A.
Robinson (Ed.), Best Practices in Mental Disabilities Vol. 2. (Eric Document
Reproduction Service No: ED 304 838).
Hasselbring, T. S., Goin, L. T., & Bransford, J. D. (1987). Effective Math Instruction:
Developing Automaticity. Teaching Exceptional Children, 19(3) 30-33.
Hasselbring, T. S., Goin, L. T., & Bransford, J. D. (1988). Developing math automaticity
in learning handicapped children: The role of computerized drill and practice. Focus on
Exceptional Children, 20(6), 1-7.
Howell, K. W., & Nolet, V. (2000). Curriculum-based evaluation: Teaching and
decision making. (3rd Ed.) Belmont, CA: Wadsworth/Thomson Learning.
Isaacs, A. C. & Carroll, W. M. (1999). Strategies for basic-facts instruction. Teaching
Children Mathematics, 5(9), 508-515.
Koshmider, J. W., & Ashcraft, M. H. (1991). The development of children's mental
multiplication skills. Journal of Experimental Child Psychology, 51, 53-89.
Koscinski, S. T. & Gast, D. L. (1993a). Computer-assisted instruction with constant time
delay to teach multiplication facts to students with learning disabilities. Learning
Disabilities Research & Practice, 8(3), 157-168.
Koscinski, S. T. & Gast, D. L. (1993b). Use of constant time delay in teaching
multiplication facts to students with learning disabilities. Journal of Learning
Disabilities, 26(8) 533-544.
Logan, G. D. (1985). Skill and automaticity: Relations, implications, and future
directions. Canadian Journal of Psychology. Special Issue: Skill. 39(2), 367-386.
Logan, G. D. (1988). Toward an instance theory of automatization. Psychological
Review, 95(4), 492-527.
Logan, G. D. (1991a). Automaticity and memory. In W. E. Hockley & S. Lewandowsky
(Eds.) Relating theory and data: Essays on human memory in honor of Bennet B.
Murdock (pp. 347-366). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Logan, G. D. (1991b). The transition from algorithm to retrieval in memory-based
theories of automaticity. Memory & Cognition, 19(2), 151-158.
Logan, G. D. & Klapp, S. T. (1991a). Automatizing alphabet arithmetic I: Is extended
practice necessary to produce automaticity? Journal of Experimental Psychology:
Learning, Memory & Cognition, 17(2), 179-195.
Logan, G. D. & Klapp, S. T. (1991b). Automatizing alphabet arithmetic II: Are there
practice effects after automaticity is achieved? Journal of Experimental Psychology:
Learning, Memory & Cognition, 17(2), 179-195.
McCloskey, M., Harley, W., & Sokol, S. M. (1991). Models of arithmetic fact
retrieval: An evaluation in light of findings from normal and brain-damaged subjects.
Journal of Experimental Psychology: Learning, Memory, and Cognition, 17(3), 377-397.
Mercer, C. D. & Miller, S. P. (1992). Teaching students with learning problems in math
to acquire, understand, and apply basic math facts. Remedial and Special Education,
13(3) 19-35.
Miller, A. D. & Heward, W. L. (1992). Do your students really know their math facts?
Using time trials to build fluency. Intervention in School and Clinic, 28(2) 98-104.
Miller, G. (1983). Graphs can motivate children to master basic facts. Arithmetic
Teacher, 31(2), 38-39.
Miller, S. P., Mercer, C. D. & Dillon, A. S. (1992). CSA: Acquiring and retaining math
skills. Intervention in School and Clinic, 28 (2) 105-110.
Myers, A. C., & Thornton, C. A. (1977). The learning disabled child: Learning the basic
facts. Arithmetic Teacher, 25(3), 46-50.
National Council of Teachers of Mathematics (2000). Number and operations standard
in Chapter 3 Standards in Standards and principles for school mathematics. Retrieved
June 5, 2003 from http://standards.nctm.org/document/chapter3/numb.htm
Peters, E., Lloyd, J., Hasselbring, T., Goin, L., Bransford, J., Stein, M. (1987). Effective
mathematics instruction. Teaching Exceptional Children, 19(3), 30-35.
Pellegrino, J. W. & Goldman, S. R. (1987). Information processing and elementary
mathematics. Journal of Learning Disabilities, 20(1) 23-32, 57.
Rightsel, P. S., & Thornton, C. A. (1985). 72 addition facts can be mastered by mid-grade
1. Arithmetic Teacher, 33(3), 8-10.
Silbert, J., Carnine, D., & Stein, M. (1990). Direct instruction mathematics (2nd ed.).
Columbus, OH: Merrill Publishing Co.
Steinberg, R. M. (1985). Instruction on derived facts strategies in addition and
subtraction. Journal for Research in Mathematics Education, 16(5), 337-355.
Stein, M., Silbert, J., & Carnine, D. (1997). Designing effective mathematics
instruction: A direct instruction approach (3rd ed.). Upper Saddle River, NJ: Prentice-
Hall, Inc.
Suydam, M. N. (1984). Research report: Learning the basic facts. Arithmetic Teacher,
32(1), 15.
Thornton, C. A. (1978). Emphasizing thinking strategies in basic fact instruction. Journal
for Research in Mathematics Education, 9, 214-227.
Thornton, C. A., & Smith, P. J. (1988). Action research: Strategies for learning subtraction
facts. Arithmetic Teacher, 35(8), 8-12.
Van Houten, R. (1993). Rote vs. Rules: A comparison of two teaching and correction
strategies for teaching basic subtraction facts. Education and Treatment of Children, 16
(2) 147-159.
Results of field tests of Mastering Math Facts 1
Donald B. Crawford, Ph.D. Otter Creek Institute
2000-2001: Kendall Elementary School, Kendall, Washington; phone (360) 383-2055;
Steve Merz, Principal; e-mail: SMerz@mtbaker.wednet.edu
All groups: Testing
1. Took the Stanford Diagnostic Math Test Computation Subtest (25 minutes, multiple choice)
in September and again in May, using the answer form provided.
2. Chose one operation to study and took three math fact timings (two minutes in length) of all
students in September and three timings again in May. The score was the average number of
problems correctly answered in two minutes.
Three groups were set up.
The control group stayed as they were: they used whichever methods of teaching/learning math
facts they had been using before the study. The second group was the standard Mastering Math
Facts group. This group followed MMF standard directions and procedures as written. They
took the time to do MMF daily (either practice or the two-minute progress-monitoring sheet). They
began all students in the chosen operation. They gave the two-minute timing in the chosen
operation once per week. They collected that data on the student graph (writing down the
number correct in two minutes as well as graphing it). A third group was designated the New
and Improved Mastering Math Facts group. They followed standard Mastering Math Facts
procedures with some modifications: they cut the time for timings in half, to allow for higher
rates per minute; they kept raising goals with no upper limit; and they didn't move students on
until they had passed three times instead of once.
Findings of improvement on Math Facts
A comparison among the three groups was possible only among 2nd-grade students,
where there were three classes, one in each condition, all of whom studied addition facts. No other
grade offered such a clear comparison. The results suggest that MMF with its standard set of
directions performs best. It is notable that the control group made almost no progress in math
facts relative to either of the groups using the MMF materials.
[Figure: 2nd Grade Facts by Group. Addition facts correct in 2 minutes (scale 0-35), Fall vs.
Spring, for the MMF, New & Improved, and Control groups.]
Because there was no significant difference between the Standard MMF group and the
New and Improved MMF group in the school as a whole, those results were collapsed into one
group.
[Figure: Facts Mastery by Group (Across Grades). Facts correct (scale 0-60), Fall vs. Spring,
for the MMF and Control groups; plotted values 48, 29, 24, and 13.]
As can be seen from the graph, students did improve in math fact fluency, and students using
Mastering Math Facts improved significantly more (approximately twice the improvement of
students in the control condition). Students who used MMF also ended the year significantly
better at facts than the control-group children.
Effects on Math Achievement
Perhaps more interesting are the effects upon achievement as measured by scores on the
Stanford Diagnostic Math Test computation subtest.
[Figure: SDMT percentile scores by group (across grades), Fall vs. Spring, for the MMF and
Control groups (scale 0-50); plotted values 39, 29, 30, and 28. ANOVA interaction analysis:
F = 4.45, p < .04.]
Scores reported are the national percentile scores with the different norms appropriate to the time
of testing. Students in the control group did not significantly change their ranking from fall to
spring against the expected improvement as reflected in the spring norms. Students in the MMF
group, however, showed a significant rise in their national ranking in math achievement during
the year.
1998-99: Emerson Elementary School, Everett, Washington; phone (425) 356-4560;
Martha Adams, Principal; e-mail: martha_adams@everett.wednet.edu
This school did not wish to employ a control group. Several teachers across all grades
implemented Mastering Math Facts with varying levels of fidelity to program guidelines. Data
were collected from math timings, scores representing the number of facts correctly answered in
writing during a 2-minute test. The data from all students who were working on a given
operation were included without regard to grade level. Data were collected from the beginning
of practice on that operation, the middle month of practice and the end of practice for each
student.
The data clearly show improvement in fluency of math facts during the course of working
through the program of Mastering Math Facts.
[Figure: Box plot of addition facts correct in 2 minutes (median, 25%-75%, and 10%-90%
ranges; scale 0-75) for the first, middle, and last month of Mastering Math Facts use.]
The sample size for the addition operation was 115 students. The mean number of facts correct
at the outset was 34 facts in two minutes (with a standard deviation of 9.6). The middle month
mean was up to 46 facts in two minutes (with a standard deviation of 14.3). The last month
mean was 52 (with a standard deviation of 17.1).
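The gains reported above can be summarized as a standardized gain (mean change divided by the initial standard deviation). This statistic and the function below are my illustration; the report itself gives only means and standard deviations.

```python
# Standardized gain: how many baseline standard deviations the mean moved.
# Applied here to the addition data reported above (n = 115).

def standardized_gain(pre_mean, post_mean, pre_sd):
    """Mean change expressed in units of the pre-test standard deviation."""
    return (post_mean - pre_mean) / pre_sd

# First month: mean 34, SD 9.6; last month: mean 52.
print(round(standardized_gain(34, 52, 9.6), 2))  # 1.88
```

A gain of almost two baseline standard deviations is, by most conventions, a very large within-group change, though without a control group it cannot be attributed to the program alone.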
[Figure: Box plot of multiplication facts correct in 2 minutes (median, 25%-75%, and 10%-90%
ranges; scale 0-75) for the first, middle, and last month of Mastering Math Facts use.]
The sample size for multiplication was 61 students. The mean number of facts correct at
the outset was 34 facts in two minutes (with a standard deviation of 12.7). The middle month
mean was up to 52 facts in two minutes (with a standard deviation of 13.4). The last month
mean was 62 (with a standard deviation of 13.7).
[Figure: Box plot of division facts correct in 2 minutes (median, 25%-75%, and 10%-90%
ranges; scale 0-75) for the first, middle, and last month of Mastering Math Facts use.]
The sample size for division was 50 students. The mean number of facts correct at the outset
was 41 facts in two minutes (with a standard deviation of 12.2). The middle month mean was up
to 56 facts in two minutes (with a standard deviation of 15.4). The last month mean was 66
(with a standard deviation of 14.9).
Saxon Math Updated May 2013 Page 1
WWC Intervention Report U.S. DEPARTMENT OF EDUCATION
What Works Clearinghouse
and Math Expressions. The average effect size across the curricula
and both grades (first and second) was not large enough to be considered substantively important according to WWC
criteria (an effect size of at least 0.25). Based on the one statistically significant finding, the WWC characterizes this
study as having statistically significant positive effects.
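For context, the WWC improvement index used throughout this report converts an effect size (Hedges' g) into the expected percentile-point gain of an average comparison-group student, via the standard normal CDF: index = (Phi(g) - 0.5) * 100. The sketch below is mine; see the WWC Procedures Handbook for the official definition.

```python
# Convert a WWC effect size to an improvement index (percentile points),
# using the standard normal CDF built from math.erf.
from math import erf, sqrt

def improvement_index(effect_size):
    """Percentile-point gain for an average comparison student."""
    phi = 0.5 * (1 + erf(effect_size / sqrt(2)))  # standard normal CDF
    return (phi - 0.5) * 100

print(round(improvement_index(0.25), 1))   # 9.9: the "substantively important" threshold
print(round(improvement_index(0.075), 1))  # 3.0: roughly the +3 reported in Table A1
```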
Resendez and Manley (2005) reported significant effects of the Saxon Math program on school-level math achievement
in grades 2, 4, and 5, but reported no significant effects in grades 1 and 3. These findings control for schools'
baseline math achievement levels. Due to the lack of student-level data, the student-level effect size and improvement
index could not be calculated for this study. Based on WWC calculations, the average effect across grades
1-5 is not statistically significant. Therefore, this study is characterized as having indeterminate effects.
Thus, for the mathematics achievement domain, one study showed statistically significant positive effects and one
study showed indeterminate effects. This results in a rating of potentially positive effects, with a medium to large
extent of evidence.
Table 3. Rating of effectiveness and extent of evidence for the mathematics achievement domain
Rating of effectiveness: Potentially positive effects (evidence of a positive effect with no overriding
contrary evidence). Criteria met: In the two studies that reported findings, the estimated impact of the
intervention on outcomes in the mathematics achievement domain was positive and statistically significant
in one study and indeterminate in one study.
Extent of evidence: Medium to large. Criteria met: Two studies that included more than 8,060 students in
452 schools reported evidence of effectiveness in the mathematics achievement domain.
References
Study that meets WWC evidence standards without reservations
Agodini, R., Harris, B., Thomas, M., Murphy, R., & Gallagher, L. (2010). Achievement effects of four early elementary
school math curricula: Findings for first and second graders (NCEE 2011-4001). Washington, DC: National Center
for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Retrieved from http://ies.ed.gov/ncee/pubs/20114001/pdf/20114001.pdf
Additional sources:
Agodini, R., & Harris, B. (2010). An experimental evaluation of four elementary school math curricula. Journal
of Research on Educational Effectiveness, 3(3), 199-253.
Agodini, R., Harris, B., Atkins-Burnett, S., Heaviside, S., Novak, T., & Murphy, R. (2009). Achievement effects
of four early elementary school math curricula: Findings from first graders in 39 schools (NCEE 2009-
4052). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of
Education Sciences, U.S. Department of Education.
Agodini, R., Harris, B., Atkins-Burnett, S., Heaviside, S., Novak, T., Murphy, R., & Institute of Education
Sciences (ED), National Center for Education Evaluation and Regional Assistance. (2009). Achievement
effects of four early elementary school math curricula: Findings from first graders in 39 schools
(NCEE 2009-4053). Executive summary. Washington, DC: National Center for Education Evaluation and
Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Study that meets WWC evidence standards with reservations
Resendez, M., & Manley, M. A. (2005). The relationship between using Saxon Elementary and Middle School Math
and student performance on Georgia statewide assessments. Orlando, FL: Harcourt Achieve.
Studies that do not meet WWC evidence standards
Bhatt, R., & Koedel, C. (2012). Large-scale evaluations of curricular effectiveness: The case of elementary
mathematics in Indiana. Educational Evaluation and Policy Analysis, 34(4), 391-412. The study does not meet
WWC evidence standards because it uses a quasi-experimental design in which the analytic intervention and
comparison groups are not shown to be equivalent.
Calvery, R., Bell, D., & Wheeler, G. (1993, November). A comparison of selected second and third graders'
math achievement: Saxon vs. Holt. Paper presented at the meeting of the Mid-South Educational Research
Association, New Orleans, LA. The study does not meet WWC evidence standards because it uses a quasi-
experimental design in which the analytic intervention and comparison groups are not shown to be equivalent.
Cummins-Colburn, B. J. L. (2007). Differences between state-adopted textbooks and student outcomes on the
Texas Assessment of Knowledge and Skills examination. Dissertation Abstracts International, 68(06A), 168
2299. The study does not meet WWC evidence standards because it uses a quasi-experimental design in
which the analytic intervention and comparison groups are not shown to be equivalent.
Educational Research Institute of America. (2009). A longitudinal analysis of state mathematics scores for Indiana
schools using Saxon Math No. 362. Bloomington, IN: Author. The study does not meet WWC evidence stan-
dards because it uses a quasi-experimental design in which the analytic intervention and comparison groups
are not shown to be equivalent.
Good, K., Bickel, R., & Howley, C. (2006). Saxon Elementary Math program effectiveness study. Charlestown, WV:
Edvantia. The study does not meet WWC evidence standards because it uses a quasi-experimental design in
which the analytic intervention and comparison groups are not shown to be equivalent.
Hansen, E., & Greene, K. (2000). A recipe for math. What's cooking in the classroom: Saxon or traditional? The
study does not meet WWC evidence standards because it uses a quasi-experimental design in which the
analytic intervention and comparison groups are not shown to be equivalent.
Hook, W., Bishop, W., & Hook, J. (2007). A quality math curriculum in support of effective teaching for elementary
schools. Educational Studies in Mathematics, 65(2), 125-148. The study does not meet WWC evidence
standards because it uses a quasi-experimental design in which the analytic intervention and comparison
groups are not shown to be equivalent.
Nguyen, K., Elam, P., & Weeter, R. (1993). The 1992-93 Saxon Mathematics program evaluation report. Oklahoma
City: Oklahoma City Public Schools. The study does not meet WWC evidence standards because the measures
of effectiveness cannot be attributed solely to the intervention: the intervention was not implemented
as designed.
Resendez, M., & Azin, M. (2007). The relationship between using Saxon Elementary and Middle School Math and
student performance on California statewide assessments. Jackson, WY: PRES Associates. The study does
not meet WWC evidence standards because it uses a quasi-experimental design in which the analytic inter-
vention and comparison groups are not shown to be equivalent.
Additional source:
Resendez, M., & Azin, M. (2007). Saxon Math and California English Learners math performance: Research
brief. Jackson, WY: PRES Associates.
Resendez, M., & Azin, M. (2008). The relationship between using Saxon Math at the elementary and middle school
levels and student performance on the North Carolina statewide assessment: Final report. Jackson, WY: PRES
Associates. The study does not meet WWC evidence standards because it uses a quasi-experimental design
in which the analytic intervention and comparison groups are not shown to be equivalent.
Resendez, M., Sridharan, S., & Azin, M. (2006). The relationship between using Saxon Elementary School Math and
student performance on Texas statewide assessments. Jackson, WY: PRES Associates. The study does not
meet WWC evidence standards because it uses a quasi-experimental design in which the analytic intervention
and comparison groups are not shown to be equivalent.
Roan, C. (2012). A comparison of elementary mathematics achievement in everyday math and Saxon Math schools
in Illinois (Doctoral dissertation). Available from ProQuest Dissertations and Theses. (UMI No. 3507509) The
study does not meet WWC evidence standards because it uses a quasi-experimental design in which the
analytic intervention and comparison groups are not shown to be equivalent.
Studies that are ineligible for review using the Elementary School Mathematics Evidence Review Protocol
Bell, G. (2011). The effects of Saxon Math instruction on middle school students' mathematics achievement (Doctoral
dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3488140) The study is
ineligible for review because it does not use a sample within the age or grade range specified in the protocol.
Christofori, P. (2005). The effect of direct instruction math curriculum on higher-order problem solving (Unpublished
doctoral dissertation). University of South Florida, Sarasota. The study is ineligible for review because it does
not use a comparison group design or a single-case design.
Doe, C. (2006). Marvelous math products. MultiMedia & Internet@Schools, 13(3), 30-33. The study is ineligible for
review because it does not examine the effectiveness of an intervention.
Fahsl, A. J. (2001). An investigation of the effects of exposure to Saxon Math textbooks, socioeconomic status and
gender on math achievement scores. Dissertation Abstracts International, 62(08), 2681A. The study is
ineligible for review because it does not use a comparison group design or a single-case design.
Educational Research Institute of America. (2009). A longitudinal analysis of state mathematics scores for Florida
schools using Saxon Math No. 365. The study is ineligible for review because it does not use a comparison
group design or a single-case design.
Educational Research Institute of America. (2009). A longitudinal analysis of state mathematics scores for Oklahoma
schools using Saxon Math No. 363. Bloomington, IN: Author. The study is ineligible for review because it does
not use a comparison group design or a single-case design.
Harcourt Achieve, Inc. (2005). Case study research summaries of Saxon Math. Retrieved from http://saxonpublishers.
hmhco.com/en/resources/result_c.htm?ca=Research%3a+Effcacy&SRC1=4 The study is ineligible for review
because it does not use a comparison group design or a single-case design.
Klein, D. (2000). High achievement in mathematics: Lessons from three Los Angeles elementary schools. Washing-
ton, DC: Brookings Institution Press. The study is ineligible for review because it does not use a comparison
group design or a single-case design.
Plato, J. (1998). An evaluation of Saxon Math at Blessed Sacrament School. Retrieved from http://lrs.ed.uiuc.edu/
students/plato1/Final.html The study is ineligible for review because it does not use a comparison group
design or a single-case design.
Slavin, R. E., & Lake, C. (2007). Effective programs in elementary mathematics: A best-evidence synthesis. The Best
Evidence Encyclopedia, 1(2). Retrieved from http://www.bestevidence.org/word/elem_math_Feb_9_2007.pdf
The study is ineligible for review because it is a secondary analysis of the effectiveness of an intervention, such
as a meta-analysis or research literature review.
Additional source:
Slavin, R. E., & Lake, C. (2009). Effective programs for elementary mathematics: A best evidence synthesis. Educator's
summary. Retrieved from http://www.bestevidence.org/word/elem_math_Mar_11_2009_sum.pdf
Viadero, D. (2009). Study gives edge to 2 math programs. Education Week, 28(23), 113. The study is ineligible for
review because it does not examine the effectiveness of an intervention.
Vinogradova, E., King, C., & Rhoades, T. (2008, April). Success for all students: What works? Best practices in
Maryland public schools. Paper presented at the annual meeting of the American Sociological Association, Boston,
MA. The study is ineligible for review because it does not examine the effectiveness of an intervention.
Appendix A.1: Research details for Agodini et al. (2010)
Agodini, R., Harris, B., Thomas, M., Murphy, R., & Gallagher, L. (2010). Achievement effects of four
early elementary school math curricula: Findings for rst and second graders (NCEE 2011-4001).
Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute
of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/
pubs/20114001/pdf/20114001.pdf
Table A1. Summary of findings: Meets WWC evidence standards without reservations
Outcome domain: Mathematics achievement. Sample size: 110 schools/8,060 students.
Average improvement index: +3 percentile points. Statistically significant: Yes.
Setting
The study took place in elementary schools in 12 districts across 10 states, including Connecticut,
Florida, Kentucky, Minnesota, Mississippi, Missouri, Nevada, New York, South Carolina,
and Texas. Of the 12 districts, three were in urban areas, five were in suburban areas, and
four were in rural areas.
Study sample
Following district and school recruitment and collection of consent from all teachers in the
participating grades, 111 participating schools were randomly assigned to one of four curricula:
(a) Investigations in Number, Data, and Space, (b) Math Expressions, (c) Saxon Math, and
(d) Scott Foresman-Addison Wesley Elementary Mathematics. Each curriculum was implemented
for one school year.
Investigations in Number, Data, and Space