
Math Fact Strategies Research Project


Annie Boso
Marygrove College
July 16, 2011

Abstract
An action research project was conducted in order to determine effective math fact strategies for
first graders. The traditional way of teaching math facts included using timed tests and
flashcards, with most students counting on their fingers or a number line. Six new research-
based strategies were taught and analyzed to decide which methods work best. The project took
place at Calcutta Elementary School in Calcutta, Ohio with nineteen students. Research,
including data collection and analysis, occurred between January and July, 2011. Data sources
included previous math grades, student journals, teacher observations with anecdotal notes, math
fact pre- and post-tests, a parent survey, a teacher interview, focus groups, and Ohio Achievement
Test scores compared to third and fourth grade math averages. Overall, the action research
results showed the effectiveness of the new math fact strategies discussed, especially in
comparison with the former way of teaching math facts.
Introduction
This action research project will occur in a first grade class at Calcutta Elementary, a
small school of fewer than four hundred students in rural Ohio. The school district is in a low-
income area in Appalachia; however, most of the students are from middle-class homes. I have
taught first grade at Calcutta for almost two years. Prior to this position I was a substitute
teacher for one year, and I taught preschool for one year before that. There will be nineteen
students involved, all Caucasian, divided into ten girls and nine boys. I have a mixture of
academically high achieving students, as well as a few students who struggle in every subject,
with many students falling somewhere in between those two extremes. I typically have strong
parental support, which will be important in this study as I ask parents to complete a survey. The
support of my colleagues will also be needed when I conduct a teacher interview, ask for math
grades, and seek their advice.
One of the common struggles among elementary teachers is how to teach math facts in a
way students will remember them. There is a debate over whether it is better to have
students memorize math facts or to develop an overall understanding of how to solve math
problems through math fact strategies. I currently use the traditional methods of flashcard games
and timed tests to encourage memorization of facts. However, after taking a professional
development math course, I have learned it is better to take a more holistic approach.
Furthermore, students have displayed dislike and even stress when presented with the
more traditional methods of learning math facts. For this reason, it is my goal to conduct a study
of mathematics in hopes of discovering how to best develop automaticity of math facts in my
first graders in a meaningful way.
Data collection will begin with a pretest to determine how students currently solve math
problems, as well as their speed and accuracy. Various strategies will then be taught over the
course of several weeks, with students tested on each strategy and journaling their
thoughts about each one. Other data sources will include teacher interviews, parent surveys,
observation notes, math test scores, student focus groups, and a post-test. After the data has been
collected, I will analyze and interpret it in order to determine how the strategies impacted student
learning. My underlying assumptions about this project include:
1. Certain math strategies such as doubles and adding nine are more effective than doing
drill and timed tests alone.
2. First graders can implement these strategies when taught correctly.
3. Teaching math fact strategies will help students develop automaticity of math facts.
4. Automaticity of math facts is important.
5. Students who have developed automaticity of math facts perform better on higher level
math problems.
Area-of-focus Statement
The purpose of this study is to investigate the effects of various strategies on student
mastery of first grade mathematics. This statement identifies the purpose of the study, as I am
seeking different strategies to improve my students' understanding and automaticity of basic
math facts in new ways (Mills, 2007, p. 44).
Research Questions
1. What math fact strategies work best?
2. How do math fact strategies impact student learning?
3. Why is automaticity of math facts important for students?
Mayner states, "Developing the research questions triggers everything" (Video, Strategies in
Action: Advice for ways to avoid common mistakes). For this reason, I developed questions to
guide my research throughout the project.
Variables
There will be several variables, meaning anything that will not remain constant, throughout the
research project (Video, Expert Commentary: How should I consider variables in my research?).
First, the math strategies taught will vary from my current way of teaching math facts. There
will be other daily changes to consider, such as my enthusiasm or students' attitudes toward the
subject. The time of day the topic is studied will most likely change slightly from day to day.
Other variables possibly affecting my class on any given day include snow days, holidays,
absences, and interruptions from parents, administrators, or programs.
It is necessary here to also define a few key phrases in the area-of-focus statement and
research questions. "Math facts" refers to addition and subtraction facts zero through ten. "Math
fact strategies" means methods such as "counting on" or "doubles," which will be discussed in
the project. The phrase "automaticity of math facts" means being able to recall facts rapidly
without having to count or use other strategies to figure out the given problem.
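To make these definitions concrete, here is an illustration of my own (using two of the strategies studied in this project) of how a strategy rewrites an unfamiliar fact as a familiar one:

    Adding nine:       6 + 9  ->  take one less than 6 (five), make it a teen  ->  15
    Doubles plus one:  7 + 8  ->  think 7 + 7 = 14, then count up one          ->  15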
Review of Literature
To help answer my research questions, I examined current literature on the topic of
teaching math facts, including journal articles, books, research papers, and websites. I
discovered a debate among educators between math fact strategies, the newer trend, and the
more traditional approach of memorizing math facts. Many of the strategies I
read about were familiar to me; however, I did learn about some new strategies and programs
claiming to increase students' mathematics skills. After reviewing the literature, I noticed a theme
declaring students develop automaticity of math facts best when taught strategies. I chose six
recurring strategies, or promising practices, from the literature I studied to implement with my
first graders. The following are the resources I will use in my research project:
Brennan, Julie. (March 2006). What about memorizing math facts? Aren't they important?
Living Math! Retrieved from
http://www.livingmath.net/MemorizingMathFacts/tabid/306/Default.aspx
This webpage differs slightly in that it is written by a mother whose child
struggled with learning math facts. The author asks why it is important to have math facts
memorized (or have automaticity of math facts). She also discusses several ways to develop
declarative knowledge through card and board games. In the end, the author agrees automaticity
of facts is vital for students to develop a higher level of understanding of math throughout life.
Burns, Matthew. (2005). Using incremental rehearsal to increase fluency of single-digit
multiplication facts with children identified as learning disabled in mathematics
computation. Education and Treatment of Children, 28(3), 237-249.
This research article describes the effects of Incremental Rehearsal (IR) on the development
of math fact fluency in students identified with math disabilities. While this paper focuses on
multiplication facts, it has valuable information regarding automaticity of math facts. The paper
explains a study in detail and is, therefore, deeper than other articles. It is interesting as it
describes how students' math scores increased due to repetitious drill and practice of facts. This
article allows me to see my problem through someone else's lens as it discusses ways to
increase automaticity for students who struggle with learning disabilities (Adapted from Mills,
2011, pp. 44-45).
Caron, Thomas. (July/August 2007). Learning multiplication. The Clearing House, 80(6),
278-282.
The author makes the point that to develop automaticity students must have meaningful
experiences with the facts, as stated: "A recent summary of the relationships between brain
research and educational practice emphasizes the vast importance of understanding developed
from rich and meaningful connections as opposed to the relative uselessness of superficial rote
memory" (Caine et al. 2004). He provides these experiences through daily worksheets where
students see how many facts they can solve in one minute, with the answers given at the top of
the page. A good quote about automaticity is: "Robert Gagne (1983) emphasized that the
processes of computation that underlie all problem solving must be not just learned, not just
mastered, but automatized. Developing automaticity frees up cognitive capacity for problem
solving."
Crawford, Donald. (n.d.) The third stage of learning math facts: developing
automaticity. R and D Instructional Solutions. Retrieved from
http://rocketmath.net/uploads/Math_Facts_research.pdf
This research article is about how students develop automaticity of math facts through
various stages. The paper points out several unique ideas as it discusses each stage individually,
while still focusing on the third stage, automaticity. A great quote from the abstract is:
"Learning math facts proceeds through three stages: 1) procedural knowledge of figuring out
facts; 2) strategies for remembering facts based on relationships; 3) automaticity in math facts"
(Crawford, n.d., p. 1). This quote shows the breakdown of how to teach math facts. Like other
articles, it states that math facts are learned most effectively in sets of two to four.
Furthermore, I think part of my current problem in teaching students math facts lies in this
statement: "teaching strategies that address earlier stages of learning math facts may be
counterproductive when attempting to develop automaticity" (Crawford, n.d., p. 3). The author
explains how learning the various strategies is an important intermediate step for students
between having the procedural knowledge of counting up and reaching the final rapid recall stage. A
quote for my project is: "Thornton's (1978) research showed that using relationships such as
doubles and doubles plus 1, and other tricks to remember facts as part of the teaching process
resulted in more facts being learned after eight weeks than in classes where such aids to memory
were not taught" (Crawford, n.d., p. 7). Page 15 begins a section directly answering my third
research question, "Why is automaticity of math facts important for students?"
Hanlon, Bill. (2004). Facts practice add by strategy. Hanlon Math. Retrieved from
http://hanlonmath.com/pdfFiles/340AddfactsR.pdf
While this is not an article, it is a very useful page for my project. This file created by
Bill Hanlon goes through the main strategies I will be studying during my research. For
example, there are several pages dedicated to doubles facts, others that focus on sums to ten, and
so on. This source will be useful in my project as I will be able to refer to it when creating
assessments for my students to study the effectiveness of various strategies.
Hanlon, Bill. (2004). Strategies for learning the math facts. Hanlon Math. Retrieved from
http://hanlonmath.com/pdfFiles/244StrategiesforFactsBH.pdf
This article starts with a quote that will help answer one of my research questions:
"The more sophisticated mental operations in mathematics of analysis, synthesis, and evaluation
are impossible without rapid and accurate recall of specific knowledge" (Hanlon, 2004, p. 2).
While this paper simply goes through the various strategies of Hanlon Math, it does argue that
young children have difficulty starting with the memorization of math facts and should instead begin
by learning these methods. This article will be beneficial to me as it explains several of the
strategies I will implement in my project. It also explains strategies for subtracting, multiplying
and dividing.
Kaechele, Michael. (2009, July 26). Is rote memorization a 21st century skill? [Web log
post]. Retrieved from
http://concretekax.blogspot.com/2009/07/is-rote-memorization-20th-century-skill.html
This website is a blog that contemplates the need for students to memorize information,
such as math facts. The author points out that when he began teaching a middle school math
class his students struggled to remember their basic multiplication facts. To remedy this issue,
he began having his students review and recite facts. The rest of the site is a discussion,
primarily between teachers, on whether or not memorization is the best way to teach math facts.
This blog will be useful for comparing various teachers' feelings in my research. I like this
source because it offers variety compared with the other journal articles I already have; as Korzym
states, "you still need to make sure you've reviewed various articles from different sources"
(Video, Expert Commentary: How do I conduct a Literature Review?).
Lehner, Patty. (2008). What is the relationship between fluency and automaticity
through systematic teaching with technology (FASTT Math) and improved student
computational skills? Retrieved from
http://www.vbschools.com/accountability/action_research/PatriciaLehner.pdf
This article discusses the importance of automaticity so students can build upon their
basic facts with higher computation skills. It points out the struggle teachers face to help
students develop automaticity, and how students think they know their math facts when they
really only know strategies to solve them. Research is shared showing the correlation between
knowing basic math facts and success in the workplace. The article then studies the effects of
computer software, FASTT Math, in relation to developing automaticity. This study found that
"lacking automaticity of math facts not only impedes higher level mathematical understandings,
but even the development of everyday life skills" (Lehner, 2008, p. 4).
Math Facts Automaticity. (2009). Renaissance Learning. Retrieved from
http://doc.renlearn.com/KMNet/R004344828GJF314.pdf
This is another article that discusses the importance of automaticity of math facts. It
begins by explaining the problem with math and what can be done about it. According to this
paper, the issue is a lack of foundation in math facts. The article discusses how math facts that
are not automatic consume more working memory, making it more difficult to solve complex
problems. There are interesting graphs showing how few students practice math facts
multiple times a week and how students who have math facts mastered rank higher in
mathematics. The author states the importance of timed practices when developing automaticity
of facts. The rest of the paper is then dedicated to a description of how a certain program
increases automaticity among students.
Math Fluency. (2011). Scholastic. Retrieved from
http://www2.scholastic.com/browse/article.jsp?id=324
This article is an excerpt from a research paper that discusses the need for fluency of
math facts in order to attain higher order math skills (Math Fluency, 2011). It is "up-to-date,"
as suggested by Mills (2007, p. 37), since it has a copyright date of 2011. This article contains
in-depth explanations of how having math facts memorized uses less working
memory in one's brain. This article, like many others, compares learning math facts to learning
how to read. Students need to know how to solve addition problems, just as they need to know
how to sound out letters to make words. To be successful, however, students must develop a
declarative knowledge of automatically knowing their math facts, the same as students need to
automatically recognize sight words.
Mental Math Strategies. (2005). Saskatoon Public Schools Online Learning Centre.
Retrieved from http://olc.spsd.sk.ca/de/math1-3/p-mentalmath.html#master
This website shares many of the common math fact strategies, including doubles, near
doubles, commutative property, adding ten, and more. It is a quick and easy source for finding
some of the most recommended strategies. A good quote from this website is: "It is recommended that
students learn patterns and strategies for as many facts as possible so that they strengthen their
understanding of the relationships between numbers and the patterns in mathematics. Then they
begin to memorize" (Mental Math Strategies, 2005).
R & D Instructional Solutions. (2009). Teacher Directions. Rocket Math. Retrieved from
http://www.rocketmath.net/uploads/Rocket_Math_Teacher_Directions.pdf
Rocket Math is a program that focuses on teaching math facts for automaticity. While
this product more or less encourages memorization, it has countless reports of being a fast,
effective way to help students learn their math facts. The program teaches four math facts a day,
which students practice through a drill with a partner before taking a test. If students pass, they
move on to the next four facts the next day. If students do not pass, they practice those facts for
homework and try again the next day, moving on once they pass.
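As a rough illustration of the pass/advance rule just described, here is a small sketch of my own in Python (the fact sets are hypothetical; this is not part of the Rocket Math materials):

    # A student moves to the next set of four facts only after passing a test
    # on the current set; otherwise the set is practiced as homework and retested.
    fact_sets = [
        ["0+1", "1+0", "0+2", "2+0"],
        ["1+1", "1+2", "2+1", "2+2"],
        ["1+3", "3+1", "2+3", "3+2"],
    ]

    def next_set(current, passed):
        if passed and current + 1 < len(fact_sets):
            return current + 1
        return current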
Woodward, John. (2006). Developing automaticity in multiplication facts: integrating
strategy instruction with timed practice drills. Learning Disability Quarterly, 29, 269-
289. Retrieved from http://www2.ups.edu/faculty/woodward/LDQfall06.pdf
Woodward's paper describes how automaticity of math facts can be developed
through the combination of various strategies and timed drills. The author states that even
recent research placing an emphasis on problem solving still acknowledges the
importance of automaticity of math facts. This information supports my original ideas for my
project. The paper then explains the process used to collect data on the main topic,
including trying various strategies and timed practice drills. The final conclusions of
Woodward's research showed an increase in students' mastery of math facts with only timed
tests or only strategies, but even more so when the two were combined.
Woodward, John. (n.d.). Fact mastery and more! Overview for Teaching Facts. Retrieved
from http://www2.ups.edu/faculty/woodward/facts%20overview.pdf
This article explains why students need to know math facts, and then it provides
worksheets of various strategies to help students reach this goal. The author states that, without
rapid knowledge of math facts, it is difficult to solve higher-level math problems. He further
states that learning the math strategies themselves reinforces mental computation skills that
students lack. It is suggested that math facts are often taught too quickly, and it would be better
if facts were taught a few at a time. The strategies are the common Plus Zero/One, Doubles,
Sums to Ten, and so on; however, they are presented in a fashion similar to Rocket Math, where
students must perfect each section before moving on to the next.
Yermish, Aimee. (2003). Why memorize math facts? Hoagies Gifted Education Page.
Retrieved from http://www.hoagiesgifted.org/why_memorize.htm
This article is primarily about how even students who are gifted need to be fluent in
their math facts. The author explains the importance of automaticity by saying, "When
it takes you too long and uses too much working memory to do the basic skills, you can't keep
track of the higher skills you're supposed to be learning in your current grade." Furthermore, in a
more casual tone, she states, "It's going to be very difficult to get to graduate-level mathematics
if you can't hack calculus because you couldn't hack algebra because you couldn't hack middle-
school math because you couldn't hack arithmetic" (Yermish, 2003).
Proposed Intervention or Innovation
In hopes of improving students' retention of basic math facts, six strategies will
be implemented. Instead of the current methods of teaching math facts (flashcard games and
timed tests), strategies shown to be effective through research will be used. The strategies
utilized in this project will be adding zero and one, adding two, doubles, doubles plus one, sums
to ten, and adding nine. These strategies will be applied in order to "address the
teaching/learning issue identified" (Mills, 2007, p. 45).

Data Collection Matrix (Appendix A)
Research Question 1: What math fact strategies work best?
   Data sources: previous math grades; student journals; teacher observations with anecdotal notes.

Research Question 2: How do math fact strategies impact student learning?
   Data sources: math fact pre-test; parent survey; math fact post-test.

Research Question 3: Why is automaticity of math facts important for students?
   Data sources: teacher interview; focus groups (a group that knows math facts and a group that does not) solving higher level math; 3rd and 4th grade teachers' math assessment grades/OAT scores in math.

Previous Math Grades
Data will be collected from students' previous math fact grades on timed and untimed
assessments. Throughout the year, students have taken daily timed tests as well as weekly
untimed tests on whichever math fact family we are working on. For example, one week students
are tested on +7 facts and the next week on -7 facts. I will analyze this data by looking at students'
averages and comparing their previous grades with new ones from similar assessments in this
study. Grades will be interpreted by how they change due to the different teaching strategies.
Currently, students typically solve math facts by counting up or down on fingers or a number
line, or by rote memorization. After the math fact strategies are taught, however, students' grades
will, hopefully, improve. Analyzing and interpreting the quantitative data of previous math grades
will allow me to determine which math strategies are more effective: counting on combined with
drill and practice, or the new research-based strategies.
Student Journals
My matrix consists of quantitative and qualitative data collection tools to bring balance to
the research. Using a student journal as a data source will provide qualitative data when much of
my other data collection is quantitative. It will also allow me to discover which
math fact strategies students use and prefer. After learning a new strategy, students will write a
description of how to use the strategy to solve math facts as well as their own thoughts and
feelings toward the strategy. For example, after learning and implementing the adding nines
strategy, I would expect students to be able to write in their journals that the math problem 6+9
can be solved by thinking of one less than six (five) and making it a teen, giving the sum of
fifteen. Beyond explaining the strategy, however, I want to know whether or not students find
the strategy effective, useful, or easy to use. For this reason, they will be asked to share their
thoughts on each strategy. These ideas will help answer the research questions "What math fact
strategies work best?" and "How do math fact strategies impact student learning?" as they will
identify how students use the strategies and which ones they feel work best. Student journals are
an important aspect of my research because they will allow me to collect data not only from
teachers' or parents' points of view, but from the students' as well. Having students' views in my
research will make the data more accurate and complete.
I will analyze student journals by identifying themes among students' writings. Mills
(2007) suggests developing a coding system to determine patterns that emerge, such as "events
that keep repeating themselves, or key phrases that participants use to describe their feelings" (p.
123). As I read students' journals, I will make notes on index cards to analyze common themes
among students' thoughts.
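To make the coding step concrete, here is a minimal sketch in Python (the journal entries and code words are hypothetical examples of my own):

    from collections import Counter

    entries = [
        "its super easy cause i can think it in my head",
        "it is hard because some are hard to remember",
        "its easy as pie",
    ]
    codes = ["easy", "hard", "memorize"]

    theme_counts = Counter()
    for entry in entries:
        for code in codes:
            if code in entry:        # count each code at most once per entry
                theme_counts[code] += 1

    print(theme_counts)              # Counter({'easy': 2, 'hard': 1})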
One way I could interpret the data from student journals is by extending the analysis
(Mills, 2007). After reviewing students' thoughts, I will ask questions about the study of math
facts, "noting implications that might be drawn without actually drawing them" (Mills, 2007, p.
135). For example, once I interpret the data, more questions may arise, such as, "Do students
prefer learning math fact strategies or memorizing math facts, and why?"
Teacher Observations with Anecdotal Notes
Observations will be made throughout the data collection process by taking anecdotal
notes. The attached form will be used to keep track of how students are doing with learning the
math fact strategies, individually and as a group. By observing students while they learn the new
ideas, I will be able to determine which strategies work best. These notes will then be analyzed
by recording any common themes among students' reactions, understandings, and applications of
the material. This qualitative data collection source will balance the other, more numeric
sources. The data will be interpreted as it may lead to more questions for the students about why
they solved a problem a certain way, made a positive or negative comment, or displayed a
specific attitude.
Math Fact Pretest
Data collection for this research project will begin with a math fact pretest. Teacher-
made tests are "one of the most common quantitative data collection techniques" (Mills, 2007,
p. 73). This pretest will assess students' current understanding of various math strategies, which
will help answer the research question regarding the way strategies impact student learning of
math facts. The sets of math problems on the test are grouped according to strategies that will be
researched throughout the project, including adding zero and counting on by one, counting on by
two, doubles, doubles plus one, sums to ten, and adding nine. In my research, these strategies
will be compared with students' current addition methods.
This assessment will be given in small groups for various reasons. First, students will
time themselves, so after learning the various strategies I can compare their times to see if there
is improvement from the pretest to the similar post-test. Since I do not have access to twenty
timers, I will only be able to work with a few students at one time. Furthermore, students will be
observed as they complete the test, noting whether or not they use certain strategies, and using
small groups will allow me to carefully monitor each student. To prepare, students will do a few
practice rounds of timing themselves and completing math problems before beginning the test.
Using a pretest will give me clear results of whether or not students know how to use various
math fact strategies effectively, and as Munafo states, "qualitative data was helpful, but the
quantitative is probably the most beneficial" (Video, Strategies in Action: Data collection tools).
Parent Survey
A parent survey will be conducted during the research process in order to gain a different
perspective on how math fact strategies impact student learning. In the past two years, several
parents have commented on their dislike of the way math facts are currently assessed in my
classroom. This complaint stems from their children feeling anxious about timed tests and
moving on to a new set of facts before mastering the previous set. After sending home a
note informing parents of the new strategies being taught, and after giving students time to learn
the strategies, I will ask parents to complete a survey. Their answers will show me how parents
view the methods as well as any changes they have noted in their child's skill or attitude toward
math facts. The surveys will be analyzed by tallying answers to the yes/no questions and
identifying common themes in any written comments. The data will then be interpreted by
consulting critical colleagues, and it will be taken into consideration when planning how to teach
math facts in following years.
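A minimal sketch of the tallying step, assuming each returned survey is stored as a dictionary of question-to-answer pairs (the data shown are hypothetical):

    from collections import Counter

    surveys = [
        {"practices_at_home": "yes", "new_strategies_effective": "yes"},
        {"practices_at_home": "yes", "new_strategies_effective": "no"},
        {"practices_at_home": "no",  "new_strategies_effective": "yes"},
    ]

    for question in ["practices_at_home", "new_strategies_effective"]:
        tally = Counter(s.get(question) for s in surveys)
        print(question, dict(tally))  # e.g., practices_at_home {'yes': 2, 'no': 1}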
Math Fact Parent Survey (Appendix B)
Please complete the survey. Your name and your child's name will not be included in the
sharing of the results. Thank you!

Do you feel the previous way of using flashcards and timed tests
to teach math facts was effective?                                    YES   NO

Does your child practice math facts at home?                          YES   NO

Do you feel the new math fact strategies are effective?               YES   NO

Did your child like learning math facts by using flashcards
and timed tests?                                                      YES   NO

Does your child like learning math facts by using
the new math fact strategies?                                         YES   NO

Would you recommend using the old or new methods
for teaching math facts?                                              OLD   NEW

Have you seen a change in your child's attitude toward learning
math facts since starting the new strategies?                         YES   NO
(If yes, positive or negative?)                                       POSITIVE   NEGATIVE

Have you seen a change in your child's grade in math facts
since starting the new strategies?                                    YES   NO
(If yes, positive or negative?)                                       POSITIVE   NEGATIVE

Other comments? _____________________________________________________
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________

Math Fact Post-test
Students will complete a math fact post-test at the conclusion of the unit in order to
provide quantitative data for my research. The assessment is the same as the pre-test and will
show whether students have improved their time solving math facts and whether or not they
implement the taught strategies. While the post-test primarily answers the research question
"How do math fact strategies impact student learning?" it also addresses "What math fact
strategies work best?" as it will show how students' speed and accuracy have changed since the
math fact pre-test. The pre- and post-tests will work together to give an overall view of student
understanding.
I will analyze the post-test in a variety of ways. First, I will record each student's times
in order to determine an average speed for each student to complete ten math facts. I will
compare this data with the times from the pre-test to see how students have improved. The
percentage of math facts correct will also be averaged for each strategy to show whether or not
students have increased their understanding of math fact strategies.
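A minimal sketch of this averaging, assuming each student's time on a ten-problem section is recorded in seconds (the values shown are hypothetical):

    times_sec = [71, 58, 92, 66, 85]       # one section, five students shown
    avg = sum(times_sec) / len(times_sec)
    minutes, seconds = divmod(round(avg), 60)
    print(f"average time: {minutes}:{seconds:02d} per ten facts")   # 1:14

    correct = [10, 9, 10, 8, 10]           # facts correct out of ten
    pct = 100 * sum(correct) / (10 * len(correct))
    print(f"average accuracy: {pct:.0f}%")                          # 94%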
I will interpret post-test results by turning to theory (Mills, 2007). The results of
students' post-tests will be considered in light of theories about effective math fact strategies. By
studying, teaching, and implementing these strategies, I will be able to compare students' post-test
scores to the results found in my theory resources. Once I interpret the post-test data, I will be
able to change the way I teach math facts to accommodate the strategies that work best, which
Korzym says is "the most empowering part of research" (Video, Expert Commentary: How do
I analyze and interpret my data?).
Math Pre/Post Test (Appendix C)

Directions: Time yourself as you solve these problems. Stop the timer when you have
solved them all, then answer the questions about them.

Start the timer and begin.
2+1= 1+3= 0+1= 7+1= 5+0=
0+3= 1+4= 7+0= 9+1= 1+8=
Stop the timer.
Time it took to solve the problems: __________
How did you solve these problems? (Circle)
1. Did you count on your fingers? Yes No
2. Did you use a number line? Yes No
3. Did you use a different strategy? Yes No

Start the timer and begin.
2 + 5 = 3 + 2 = 6 + 2 = 2 + 4 = 1 + 2 =
10 + 2 = 3 + 3 = 2 + 8 = 7 + 2 = 9 + 2 =
Stop the timer.
Time it took to solve the problems: __________
How did you solve these problems? (Circle)
1. Did you count on your fingers? Yes No
2. Did you use a number line? Yes No
3. Did you use a different strategy? Yes No

Start the timer and begin.
5 + 5 = 2 + 2 = 1 + 1 = 4 + 4 = 0 + 0 =
10 + 10 = 3 + 3 = 8 + 8 = 7 + 7 = 6 + 6 =
Stop the timer.
Time it took to solve the problems: __________
How did you solve these problems? (Circle)
1. Did you count on your fingers? Yes No
2. Did you use a number line? Yes No
3. Did you use a different strategy? Yes No

Start the timer and begin.
2 + 3 = 4 + 5 = 5 + 6 = 7 + 8 = 6 + 7 =
4 + 3 = 8 + 9 = 1 + 2 = 3 + 2 = 10 + 9 =
Stop the timer.
Time it took to solve the problems: __________
How did you solve these problems? (Circle)
1. Did you count on your fingers? Yes No
2. Did you use a number line? Yes No
3. Did you use a different strategy? Yes No

Start the timer and begin.
2 + 8 = 3 + 7 = 1 + 9 = 6 + 4 = 0 + 10 =
5 + 5 = 10 + 0 = 7 + 3 = 8 + 2 = 4 + 6 =
Stop the timer.
Time it took to solve the problems: __________

How did you solve these problems? (Circle)
1. Did you count on your fingers? Yes No
2. Did you use a number line? Yes No
3. Did you use a different strategy? Yes No

Start the timer and begin.
4 + 9 = 9 + 2 = 1 + 9 = 9 + 9 = 9 + 5 =
10 + 9 = 9 + 3 = 9 + 8 = 7 + 9 = 6 + 9 =
Stop the timer.
Time it took to solve the problems: __________
How did you solve these problems? (Circle)
1. Did you count on your fingers? Yes No
2. Did you use a number line? Yes No
3. Did you use a different strategy? Yes No

Teacher Interview
Another qualitative data collection instrument I will utilize is a teacher interview.
Including an interview will balance the qualitative items with the quantitative in my research.
Questioning other teachers will help me answer the research question "Why is automaticity of
math facts important for students?" The interview will focus on how math facts are taught, why
they are taught in that fashion, and their importance. The answers to these questions will give me
insight into commonly taught strategies, which also addresses my first research question, as well
as the reasons behind using them. Since my interview will answer multiple research questions, it
will help complete the data. I will interview the eight first grade teachers in my district, since my
research focuses on first grade mathematics, but I also want to interview all the teachers in my
building to gain a broader perspective on how my school teaches math facts across the grade levels.
I will analyze the teacher interviews by coding the transcripts. After reading through the
data several times, I will be able to identify recurring themes throughout my interviews. Once I
have determined common themes, I will go back through the transcripts again and make note of
those ideas, similar to the example in Mills (2007) on pages 127-129. For example, I may find
many teachers use memorization strategies through drill and timed tests. In that scenario, drill,
memorization, and timed tests could be common themes to look for in the interview transcripts.
I will interpret the teacher interviews by "seeking the advice of critical friends" (Mills,
2007, p. 136). Once I have identified the themes of common strategies and purposes from the
interviews, I can then seek the advice of other colleagues to help me interpret what this
information means. It will be important for me, at this point, to seek the advice of colleagues who
did not complete the interview, to limit any bias. On the other hand, it would be beneficial to
report back to the teachers who did complete the interview to share the results of my study with
them. Interpreting the teacher interviews will help me determine which math fact strategies
work best and why learning them to the point of automatic recall is important, which I will then
be able to implement in my own classroom.
Focus Groups
In order to help answer the question "Why is automaticity of math facts important for
students?" I will use focus groups. This activity will provide me with both qualitative and
quantitative data for my research. Groups will be based on students who have mastered their
math facts and students who still rely on counting on their fingers or a number line to solve math
facts. Results from the math fact pretest will determine which students will be in which group.
Each group will then be asked to solve higher level math problems that involve math facts, for
example, questions that involve algebra concepts or word problems. The results of the focus
groups will be analyzed by how each group performed overall on the problems. Completing this
activity will provide data that shows whether or not students perform better on higher level
mathematics if they have first developed automaticity of math facts. The data will be interpreted
as students in both groups will continue to be encouraged to develop automaticity of math facts
and use those skills to solve higher level math problems.
Problems for Focus Groups (Appendix D)
1. What number should go in the blank? 6+5 = _____+2 _____+4 = 2+2
3+_____ = 11+1 2+8 = 4+_____ 7+_____ = 5+2
2. Read and solve each word problem.

Two birds sat in a nest. Eight more birds joined them. How many birds were there in all?

Ben ate five apples. Justin ate five apples too. How many apples did the boys eat all together?

One cat had seven stripes. Another cat had nine stripes. How many stripes did the cats have in all?

Olivia had one doll. Alania had six dolls. How many dolls did the girls have all together?

There were zero cookies on a plate. I put four cookies on it. How many cookies are on the plate in all?

Third and Fourth Grade Math Grades/OAT Scores
The other data collection source used to answer my third research question will be a
combination of third and fourth grade math grades and math scores on the Ohio Achievement
Test. This quantitative data will show whether there is a correspondence between lacking
automaticity of math facts and struggling with higher level math problems. To collect this data,
I will ask the third and fourth grade teachers in my building for a copy of their math grades and
test scores. The grades will be analyzed by looking at math fact grades in comparison to overall
math grades. Similarly, test scores will be analyzed by reviewing results from test questions that
involve math facts and higher level thinking. I will interpret the data by relating it to theory
stating a correlation between students who correctly complete higher level math problems and
those who have automaticity of math facts.
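A minimal sketch, with hypothetical grades, of checking for such a correlation (Pearson's r computed directly):

    import math

    fact_grades    = [95, 60, 88, 72, 100, 55]   # math fact grades (hypothetical)
    overall_grades = [92, 65, 85, 70, 97, 62]    # overall math grades (hypothetical)

    n = len(fact_grades)
    mx = sum(fact_grades) / n
    my = sum(overall_grades) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(fact_grades, overall_grades))
    sx = math.sqrt(sum((x - mx) ** 2 for x in fact_grades))
    sy = math.sqrt(sum((y - my) ** 2 for y in overall_grades))
    print(f"r = {cov / (sx * sy):.2f}")   # r near 1.0 suggests a strong positive link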
Data Analysis
Question One
To answer the question "What math fact strategies work best?" I collected and analyzed
quantitative and qualitative data from three sources: students' previous math grades, student
journals, and teacher observations with anecdotal notes. These sources show me that the new
research-based strategies are more effective than my previous way of teaching math facts, and
overall preferred by students. A total of nineteen students participated in this study.
Previous Math Grades
By comparing students' previous grades in math facts to their math fact grades when
using the research-based strategies, I am able to decide which methods work best. The following
chart contrasts students' average math fact grades over six weeks of the previous methods of
strictly flashcards and timed tests with six weeks of the new strategies.
Student Math Grades Chart
Student    Previous Average (Former methods)    New Average (New methods)
1                         88                                 96
2                         53                                 72
3                        100                                100
4                         68                                 86
5                        100                                100
6                         64                                 68
7                         98                                100
8                         72                                 83
9                        100                                100
10                        92                                 91
11                        98                                 98
12                        93                                 94
13                        53                                 74
14                        88                                 93
15                        98                                 99
16                        87                                 81
17                        73                                 83
18                        79                                 83
19                        52                                 79

This source contributes to the research because it illustrates student growth from the
previous way of teaching math facts to the new strategies. Seventeen of nineteen students either
stayed the same or improved in their average math grade. The highest percentages of growth
were Student Four's average, which went from 68% to 86%, and Student Nineteen's average,
which went from 52% to 79%. Students Ten and Sixteen are the only students whose average
math grades decreased. With all but two students improving their math grades, the results show
the new research-based strategies are more effective than the former methods of using only
flashcards and timed tests.
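The improvement count can be verified directly from the chart's numbers; a minimal sketch in Python:

    previous = [88, 53, 100, 68, 100, 64, 98, 72, 100, 92,
                98, 93, 53, 88, 98, 87, 73, 79, 52]
    new      = [96, 72, 100, 86, 100, 68, 100, 83, 100, 91,
                98, 94, 74, 93, 99, 81, 83, 83, 79]

    improved = sum(1 for p, n in zip(previous, new) if n >= p)
    print(f"{improved} of {len(previous)} stayed the same or improved")   # 17 of 19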
Student Journals
Student journals provide interesting qualitative data for determining which math fact
strategies work best. Students kept weekly journals on the new math strategies, filling in three
main areas for each strategy: how to use the strategy, how they felt about it and why, and any
other comments about it.

For the Adding Zero and One strategy, students demonstrated their understanding by
explaining, "When it is plus zero you just look at the other number. When it is plus one you just
count up one." All nineteen students agreed this strategy was easy, with thoughts similar to,
"It's super easy cause I can think it in my head super fast" or "I just look at it and know it in my
head." Other comments included more explanations of why they liked it, how easy it was,
examples, and even one remark that it was too easy. The Adding Two strategy also proved well
liked and easy to use, with students sharing thoughts such as, "It's easy because you just count
up two" and "I love it! It's cool." Students explained this strategy by echoing a poem they
learned, "If you see a plus two that's what you do, go to the other number and count up two!" or
they simply stated, "You count up two in your head." Students shared a variety of ways to do the
Doubles strategy, including, "Memorize the doubles," "Sing the Doubles Rap," and "Just count
up the same number." With this strategy, students disagreed, saying, "It is easy because you just
know it" or "It is hard because some are hard to remember." Other comments, however, were, "I
really do like it," "Memorize," and "It's easy as pie." For the Doubles Plus One strategy, all but
one of the nineteen students said, "This strategy is hard." The one student who disagreed
explained, "It is easy because you just do the double and count up one." Although the other
students admittedly found this strategy more challenging, they were still able to describe how to
do it, saying, "First, go to the lower number, do the double, and then count up one more." One
student, in the section for other comments, gave the example, "Because if I have 7+8 you just
say ok 7+7=14 so if I just count up one it will be 15." When asked how to do the Sums to Ten
strategy, students wrote, "Memorize" and "Learn which ones equal ten." They said they liked
these facts because "They are easy" and "All you have to do is remember which ones are ten."
For the final strategy, Adding Nine, students correctly explained the process, writing, "Go to the
other number, count down one and make it a teen." Students were split in their reactions to this
strategy, saying, "It was hard and easy" and "It was kind of in the middle." In other comments,
one student stated, "People think it is easy, but it is not."

Student journal entries contribute to the research as they show students struggled most
with the Doubles Plus One and Adding Nine strategies, while finding the other strategies
enjoyable and easy to use. The strategies that work best are the ones students prefer to use and
can apply, rather than strategies that prove to be challenging, confusing, and disliked.
Teacher Observations with Anecdotal Notes
The final source used to determine which math fact strategies work best is teacher
observations with anecdotal notes. Throughout the six weeks of teaching the new strategies, I
wrote down notes on students' reactions, understandings, and applications of the strategies.
These detailed observations contribute to the research because they identify students' thoughts
on the strategies on a daily basis. They also show how students did on timed tests throughout
the week and on a non-timed test at the end of the week.

In the first week, when learning the Adding Zero and One strategy, students made
comments such as, "This is too easy," "This is kindergarten stuff," and "I can do it without
thinking about it!" Students then proved their understanding by applying their knowledge on
timed tests where, by the third day of taking the test, all nineteen students earned 100% at least
once. On the non-timed test, sixteen of nineteen students received 100%, and the three who did
not missed only one or two problems. Adding Two was another strategy students were
comfortable with, still commenting on its ease, though not as often as with the first strategy. On
the timed tests, only four of nineteen students did not reach 100%, but their tests showed they
understood the concept and simply did not finish in time. On both the timed tests and the non-
timed test, every student improved in Adding Two with the new methods compared with the
beginning of the year, when we covered it using the old methods. The Doubles strategy was
liked more by students once I introduced "The Doubles Rap," a song to help remember the math
facts. Twelve of nineteen students earned 100% on the timed tests, and the seven who did not
struggled on facts above 5+5. On the non-timed test, eleven students had 100%, four earned
97%, two had 91%, and one student received 75%. Students' reactions were more negative with
the Doubles Plus One strategy. Since it is a more complex strategy, I had to explain it and give
examples several times before they understood how to use it. By the end of the week, students
said they were still confused; however, most of them were applying it correctly. The timed test
results were considerably lower this week, with only five students earning 100%. The non-timed
test was similar, with five 100%s, five As at either 94% or 97%, three Bs at 88% or 91%, two
81%s, one 63%, and one 13%. The student who had only 13% added up one, skipping the "do
the double" step, showing a complete lack of understanding. These lower results show the
difficulty of this strategy. The Sums to Ten strategy was easier for the students, who seemed
comfortable using it by the end of the week but frequently forgot 3+7 and 4+6. Sixteen of
nineteen students received 100% on their timed tests, demonstrating their understanding. On the
non-timed test, thirteen students earned 100%, four others had As with 94% or 97%, one student
had 91%, and the same student who earned 13% on Doubles Plus One had 41% on this test.
Although the other students did well, this particular student showed a lack of understanding by
writing random numbers for the answers. Students thought the final strategy, Adding Nine, was
difficult at first, but showed some improvement by the end of the week. The timed tests for this
week had lower grades overall than any other strategy, with two students earning 95%, one 90%,
one 75%, four 70%s, one 55%, one 50%, and one 45%. Students did somewhat better on the
non-timed test, which indicates an understanding of the strategy but a lack of speed when using
it. Eleven of nineteen students had 100%, two had 97%, one earned 94%, one received 91%,
two had 84%, one had 78%, and one had 75%.

Overall, these observations of student reactions, understandings, and applications show
students enjoy and understand the new strategies but struggle with Doubles Plus One and
Adding Nine.
Each of these three sources, previous math grades, journals, and teacher observations,
helps me understand what math fact strategies work best. Both the qualitative and quantitative
data contribute by revealing an improvement in student knowledge of math facts when using the
new methods. When examining which strategies work best, one is also considering the impact
the strategies have upon student understanding.
Question Two
I was able to answer the question "How do math fact strategies impact student learning?"
by collecting and analyzing data from math fact pre-tests, parent surveys, and math fact post-
tests. These three sources show me how the research-based strategies can positively impact
student understanding of math facts.
Pre-test
The first source used to answer this question was a math fact pre-test. The pre-test
contributes data by showing how students solved math facts before being taught the new
research-based strategies. For each section, students could have counted on their fingers, used a
number line, or used a different strategy, meaning they knew a way to solve the problem other
than counting on fingers or using a number line, or they had it memorized. Students could use
any combination of these methods. For example, they might have counted on their fingers for a
few facts in a section while having the others memorized. The pre-test also records how long
students took to complete each strategy section, which will be compared to their times on the
post-test.

Students' best results were with the strategy Plus Zero and One. The data show eighteen
of nineteen students used a strategy other than counting on their fingers or a number line, three
of nineteen counted on their fingers, and three of nineteen counted up using a number line.
Students averaged one minute and eleven seconds on this section. Next, sixteen of nineteen
students used a different strategy to solve Plus Two math facts, eleven of nineteen counted on
fingers, and eight of nineteen used number lines, averaging two minutes and six seconds for the
ten problems. Students demonstrated previous knowledge of Doubles, as nineteen of nineteen
students used a different strategy on at least one of the Doubles facts, eight of nineteen counted
on their fingers, and six of nineteen used a number line, while averaging one minute and forty-
two seconds. On the Doubles Plus One section, students started relying more on their fingers
and number lines, as only thirteen of nineteen used a different strategy, fourteen of nineteen
counted on their fingers, and seven of nineteen used number lines. In this section, students'
struggles also show in their times, which averaged two minutes and fifty seconds. When
students reached the Sums to Ten section, while they did comment, "A lot of these are ten!" they
still relied heavily on their fingers and number lines, with fifteen of nineteen using a different
strategy, fifteen of nineteen counting on fingers, and six of nineteen using a number line,
completing the section in an average of one minute and forty-three seconds. The last section,
Plus Nine, proved to be the most challenging for students. Even though twelve of nineteen
students used a different strategy, none of them used the Plus Nine strategy; rather, they used
strategies they knew, such as Plus One for 9+1. Fifteen of nineteen students counted on their
fingers, and seven of nineteen used number lines, averaging the slowest time, two minutes and
fifty-six seconds, to answer ten math facts. From this data, I know which strategies students
knew best, Plus Zero and One as well as Doubles, and which they knew least, Doubles Plus One
and Plus Nine.
Parent Survey
Another source I used to determine how math fact strategies impact student learning was
the parent survey. Of nineteen parent surveys sent home, ten were returned, although the
numbers do not always add up to ten because some parents circled both responses or left a
question blank. The parent surveys contributed because they gave insight into which math fact
strategies were preferred by students and parents at home and whether parents saw a change in
their child's math grades or attitude, and they allowed me to see any miscommunications
between home and school with regard to the strategies. When asked, "Do you feel the previous
way of using flashcards and timed tests to teach math facts was effective?" six parents said yes,
and two parents said no. To the question, "Does your child practice math facts at home?" nine
parents answered yes and one said no. Seven parents said they feel the new math fact strategies
are effective, while two said they are not. Parents responded to the question, "Did your child
like learning math facts by using flashcards and timed tests?" with six saying yes and four
replying no. An encouraging seven out of ten parents said yes to the question, "Does your child
like learning math facts by using the new math fact strategies?" while only one said no. Two
parents said they would recommend using the old methods for teaching math facts, while seven
parents preferred the new strategies. When asked if they had seen a change in their child's
attitude toward learning math facts since starting the new strategies, one parent responded no,
while seven parents said yes, with five indicating a positive change and one saying the change
was negative. The next question was, "Have you seen a change in your child's grade in math
facts since starting the new strategies?" to which three parents said no and six said yes, with four
marking it as a positive change and two saying it was negative.
Besides the circle-your-response section of the survey, there was also an area for other
comments, which four of the ten returned surveys included. One parent remarked, "I think the
new strategies will stick with them throughout life. I can still remember strategies like that from
being in school. I feel that the same thing can happen from your new strategies." Another parent
wanted to clarify some of their responses, saying, "The only reason I said no about the old ways
being effective is because my child has anxiety about the timed tests. He worries about it so
much that I think he gets so caught up in how much time is left that he hurries through and
doesn't do as well as he could." Two parents left negative comments, although both
were misinformed in some way. These comments were the result of miscommunication, and
both students discussed actually showed improvement at school when using the new strategies.
The first parent reported a "negative change in child's grade because he dropped a letter grade
on recent report card in math." When I looked at the student's math grades, however, the drop
was in other areas of math, not in math facts. In math facts, he went from 88% with the old
methods to 93% using the new strategies. The other parent commented, "The new strategies
seem to confuse him. I believe he is misunderstanding how the new strategies work, and I can't
figure out how he's trying to explain them to me." This student's grade in math facts actually
went from 73% with the old methods of timed tests and flashcards to 83% with the new
strategies. In class, he seemed to do very well using the strategies, with the exception of Doubles
Plus One and Adding Nine, the two most challenging strategies to learn. These results showed
that parents found a combination of the old methods and new strategies to be the most effective
and preferred, while leaning more toward the new ways.
Post-test
The final source used in this section, the math fact post-test, correlates with the pre-test.
The post-test contributes data because it shows the methods students used to solve math facts
after learning the new research-based strategies, and how long students took to complete each
strategy section, in comparison with the pre-test. The following chart shows the pre- and
post-test data on how students solved the math facts in each strategy section.


Pre/Post Test Graph
[Graph not reproduced; the results it displayed are summarized in the following paragraphs.]

Overall, the results show students used their fingers or a number line less to solve each section on the post-test than they did on the pre-test. Furthermore, all nineteen students used a different strategy on every section, except in Adding Nine, where two students used only their fingers or a number line to solve all ten equations. These results indicate that students understand math facts better when using the new strategies than they did with the former methods. On Adding Zero and One, only one student used their fingers, no students used a number line, and all nineteen used a different strategy. Students lowered their average time in this section from one minute and eleven seconds to thirty-seven seconds. Next, nineteen of nineteen students used a different strategy to solve Plus Two math facts, nine of nineteen counted on their fingers, and only one of nineteen used a number line to solve the problems. Students lowered their average time from two minutes and six seconds to one minute and five seconds.
Again, nineteen of nineteen students used a different strategy to solve Doubles, seven of nineteen counted on their fingers, and three of nineteen used a number line, while averaging one minute and thirteen seconds, twenty-nine seconds faster than their previous time. Although the numbers did decrease, Doubles Plus One remained the most challenging section for students, as twelve of nineteen still counted on their fingers and four of nineteen used number lines. However, all nineteen students were able to use a different strategy for at least one of the ten problems, and students improved their time from two minutes and fifty seconds down to one minute and forty-four seconds. Students seemed to forget about the Sums to Ten strategy, because ten of nineteen still counted on their fingers and three used a number line, although nineteen of nineteen were able to use a different strategy as well. Students completed this section in an average of one minute and eleven seconds, down from their previous one minute and forty-three seconds. The last section, Adding Nine, ended up being the most improved area for students. The number of students using their fingers dropped from fifteen of nineteen to only eight of nineteen. Likewise, students using a number line decreased from seven of nineteen to two of nineteen. On the pre-test, students averaged their slowest time in this section, two minutes and fifty-six seconds, to answer the ten math facts; however, they were able to lower that time to one minute and thirty-two seconds on the post-test. When comparing the pre- and post-tests, I am able to determine that students understand the Adding Zero and One strategy best, still struggle somewhat with the Doubles Plus One strategy, and improved the most when learning the Adding Nine strategy. The data showing the reduction in how long it took students to solve the ten problems in each section contributes by showing that the more students understand math facts, the faster they can solve them.
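As an illustration of the reasoning that replaces counting, three of the strategies can be sketched in a few lines of Python. The rules shown are the common formulations for these strategy names, and the function names are only my own labels, not part of the classroom materials:

def adding_nine(n):
    # Adding Nine: add ten, then take one away (9 + n = 10 + n - 1).
    return (10 + n) - 1

def doubles_plus_one(n):
    # Doubles Plus One: for n + (n + 1), double the smaller number,
    # then add one (2n + 1).
    return 2 * n + 1

def sums_to_ten(n):
    # Sums to Ten: recall the partner that completes ten (n + ? = 10).
    return 10 - n

# Each shortcut gives the same answer as counting up:
assert adding_nine(6) == 9 + 6        # 15
assert doubles_plus_one(7) == 7 + 8   # 15
assert sums_to_ten(3) + 3 == 10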
By collecting the qualitative and quantitative data from the math fact pre- and post-tests and the parent surveys, I understand the positive influence math fact strategies have on student learning. After considering the impact math fact strategies have on student understanding, a new question emerges: why do students need to know math facts with rapid recall?
Question Three
The research question, "Why is automaticity of math facts important for students?" was answered through teacher interviews, student focus groups, and math grades of third and fourth graders along with their Ohio Achievement Test (OAT) scores. While these three sources were the most difficult to collect and analyze, the results clearly show why it is beneficial for students to know their math facts automatically.
Teacher Interviews
Four teachers were interviewed to determine how various teachers teach math facts and why it is important for students to develop automaticity of math facts. When asked, "How do you teach math facts to your students?" three of four teachers mentioned using timed tests and flashcards, although they added that these tools are used only as reinforcement, not as formal assessments. Three teachers also said they use computer programs or games to teach math facts. Three of four teachers identified math strategies, such as Adding Nine, Doubles, and Sums to Ten, as other teaching methods. These responses show me that most teachers use a combination of strategies to teach math facts. The second question was related to the first, asking why they teach math facts the ways they stated. Three of four teachers discussed using a variety of strategies in order to reach all types of learners, while one educator said, "Strategies seem to help my students learn and understand their facts more than rote memorization." Question three deals directly with automaticity of math facts by asking why rapid recall of math facts is important for
students. Two teachers discussed the value of automaticity because quickly adding math facts is a skill needed in everyday life. Another teacher talked about how the basic skills must be solid for students to be successful at higher-level math, even in two- and three-digit addition. This teacher added, "Students seem to understand the regrouping process, but make mistakes on basic addition and/or subtraction facts." The surprising response came from the veteran third grade teacher, who said, "I don't necessarily think it's important to have rapid recall, but it is a means for them to learn their facts." This answer shocked me because I thought this teacher would be more insistent about automaticity, as she teaches even higher-level math than I do. When asked how their students typically solve math facts, teachers had a variety of responses. Three of four said students count on their fingers, two teachers mentioned number lines, and two teachers said students use Touch Math. Other answers included using the math fact strategies, manipulatives, and memory. The next question asked whether they encourage rote memorization of math facts or understanding of the math fact strategies, and why. Two teachers said they focus on students learning the strategies because students need to learn how to apply math concepts. One teacher said she uses both: she teaches the strategies, but when the class has a hard time remembering a certain math fact, they work on memorizing it. The other teacher said he does not focus on any one strategy, but introduces them all and then encourages memorization. When asked whether or not their students struggle with rapid recall of math facts, two teachers said no, the majority of their students do well, while the other two teachers said yes, they struggle. I was not surprised that one of the teachers who said her students struggle was the same teacher who said she did not consider rapid recall of facts important. If a teacher does not encourage students to develop automaticity of math facts, they will most likely struggle to recall their facts rapidly. The final question was whether or not their students
enjoyed learning and solving math facts, to which all four teachers answered yes, all discussing the ways they make math facts fun for students. One teacher added that students who are good at their math facts enjoy them, but the ones who struggle with remembering them become frustrated trying to solve them. The responses to my interview questions contribute because they demonstrate the emphasis teachers place on learning math facts, how they teach them, and how students respond to those teaching methods. Overall, teachers discussed the benefits of students developing automaticity of math facts through a combination of teaching strategies.
Focus Groups
To further answer the question about the importance of automaticity of math facts, students in focus groups solved higher-level math problems that included basic math facts. Based on students' math fact strategy pre-test scores, five students were placed in a low group and seven students were placed in a high group. The students were given the same problems, but their responses differed greatly. On the first section of problems, which involved algebra concepts, only one of the five students in the low group understood and could solve the problems. The other students were confused and did not want to try. They used number lines, counted on their fingers, or guessed a number to fit in the blank. Students said, "They are hard because there were a whole bunch of numbers," and, "There were a lot I didn't know." In the high group, however, six of seven students understood the problems and solved them correctly. This set of students solved the problems much faster than the lower group, with less confusion overall. Only two students in this group relied on their fingers to help them answer the problems; the rest used the new math strategies or already had the facts memorized. Students in the high group said, "Some were easy, but some were a little harder because I had to think in my head," and, "They were easy, you just use all the strategies." These results show that when it comes to higher-level
math, the students who already have automaticity of math facts are quicker to understand new concepts. This data also indicates that students who lack automaticity struggle, not only because the concept is new, but because they do not have a strong foundation of math facts to support them.
The other section of math problems the focus groups completed consisted of word problems with basic math facts. Because word problems were a concept we had already covered, students knew how to solve them, with five of five students in the low group and seven of seven in the high group understanding the problems. Students said, "They are easier to figure out," "They are easier because you just write the numbers they tell you," and, "They are easy because you get to read the stories and then try to use the strategies." When making observations, however, I did notice slight differences between the groups. While the low group solved the problems quickly and correctly, they all still counted on their fingers or used a number line for at least one of the problems, whereas the high group had the math facts memorized or used the strategies. Since the low group relied more on the older methods, they took longer to solve the problems. The high group, having automaticity of most of the math facts, completed the problems faster. This data contributes because it shows that when students have automaticity of math facts, they can solve higher-level problems involving math facts faster than those who rely on other tools to help them count.
Third and Fourth Grade Math Grades and Test Scores
The final data used to determine the importance of automaticity of math facts are third and fourth graders' average math grades and their scores on the Ohio Achievement Test (OAT). When looking at third and fourth grade OAT results in mathematics, there is a definite correlation between doing well in Number Sense and Operations (math facts) and doing well
overall. The data in the following graph contributes because it indicates that students who did better on the Number Sense and Operations portion had higher overall percentages in mathematics.
OAT Results Comparison Graph

As shown in the graph, students who did well on the Number Sense and Operations section of the test also did better overall, while students who struggled with math facts were less proficient on the test as a whole. It should be noted that this pattern did not hold for every section of the test. For example, in the Geometry and Spatial Sense portion, Third Grade C had the highest percentage at 96%, Third Grade B had 93%, and Third Grade A had 90%. This contrast suggests it is math facts in particular that affect one's overall success in mathematics.
To determine if this relationship also holds in the classroom on a daily basis, I examined third and fourth graders' math grades. Nineteen of forty-six students earned A's in math facts, while eighteen of forty-six earned A's in their overall math grades. Only three students of forty-six received A's in math facts but not in their overall math grades. Similarly, two
students of forty-six earned A's in overall math, but not in math facts. This data contributes because it indicates that students who do well in math facts, or who have developed automaticity, typically do well in math overall.
Furthermore, when I asked teachers for their grades, I also asked them whether they saw a correlation between students' grades and test scores based on whether or not students knew their math facts automatically. The teachers agreed: students who have automaticity of math facts typically have higher math grades and test scores than students who struggle with math facts or count on their fingers or a number line. These results contribute by indicating that students with automaticity of math facts are more likely to have higher math grades and test scores.
The data from all three sources directed at question three share the idea that automaticity of math facts is important in order for students to be able to do higher-level math problems without the added complication of having to count up using their fingers or a number line. The qualitative and quantitative data collected and analyzed from these nine sources answer the three posed research questions; however, the data is of no use if it is left at this stage. The questions must be asked: Now what? What is to be done with this information concerning the effects of various math fact strategies on student mastery?
Action Plan
This action research project has taught me many things about teaching math facts, which I will not only use to improve my instruction but will also share with other educators. Overall, I have learned that the best way to teach math facts is a combination of the research-based strategies with timed tests and flashcards as reinforcement. While it is important for students to learn how to count up using their fingers, manipulatives, or a number line, this research shows that students who develop automaticity of math facts through other strategies benefit more in the long term. For this reason, when teaching math facts I will focus on teaching the strategies while using timed tests and flashcards as supporting activities. From this data, I have learned that when teaching the math strategies it would be beneficial to spread the more challenging ones, such as Doubles Plus One and Adding Nine, over the course of at least two weeks instead of one, although every classroom will be different and may need more or less time on a certain strategy. Of the nine data sources, I felt students' math grades and the pre- and post-tests were the most useful quantitative data, while teacher observations and teacher interviews were the most beneficial qualitative data. I also found the parent survey to be a valuable resource and a tool I should use more often, even in other content areas. I will share this project with other educators by submitting it to educational journals and explaining the data to my colleagues so they too can understand the effects of various strategies on student mastery of first grade mathematics. In the end, the goal of this project, determining the effectiveness of various math fact strategies, was accomplished.
References
Brennan, Julie. (March 2006). What about memorizing math facts? Arent they
important? Living Math! Retrieved from
http://www.livingmath.net/MemorizingMathFacts/tabid/306/Default.aspx
Burns, Matthew. (2005). Using incremental rehearsal to increase fluency of single-digit
multiplication facts with children identified as learning disabled in mathematics
computation. Education and Treatment of Children, 28(3), 237-249.
Caron, Thomas. (July/August 2007). Learning multiplication. The Clearing House, 80(6),
278-282.
Crawford, Donald. (n.d.) The third stage of learning math facts: developing
automaticity. R and D Instructional Solutions. Retrieved from
http://rocketmath.net/uploads/Math_Facts_research.pdf
Hanlon, Bill. (2004). Facts practice add by strategy. Hanlon Math. Retrieved from
http://hanlonmath.com/pdfFiles/340AddfactsR.pdf
Hanlon, Bill. (2004). Strategies for learning the math facts. Hanlon Math. Retrieved from
http://hanlonmath.com/pdfFiles/244StrategiesforFactsBH.pdf
Kaechele, Michael. (2009, July 26). Is rote memorization a 21st century skill? [Web log
post]. Retrieved from http://concretekax.blogspot.com/2009/07/is-rote-memorization-
20th-century-skill.html
Lehner, Patty. (2008). What is the relationship between fluency and automaticity
through systematic teaching with technology (FASTT Math) and improved student
computational skills? Retrieved from
http://www.vbschools.com/accountability/action_research/PatriciaLehner.pdf
Math Facts Automaticity. (2009). Renaissance Learning. Retrieved from
http://doc.renlearn.com/KMNet/R004344828GJF314.pdf
Math Fluency. (2011). Scholastic. Retrieved from
http://www2.scholastic.com/browse/article.jsp?id=324
Mental Math Strategies. (2005). Saskatoon Public Schools Online Learning Centre.
Retrieved from http://olc.spsd.sk.ca/de/math1-3/p-mentalmath.html#master
Mills, Geoffrey E. (2007). Action Research: A Guide for the Teacher Researcher (3rd
ed.). Upper Saddle River, NJ: Merrill/Prentice Hall.
R & D Instructional Solutions. (2009). Teacher Directions. Rocket Math. Retrieved from
http://www.rocketmath.net/uploads/Rocket_Math_Teacher_Directions.pdf
Teachscape, Inc. (Producer). (2007). Advice for ways to avoid common mistakes
[Streaming Video]. San Francisco, CA: Producer.
Teachscape, Inc. (Producer). (2007). Data Collection Tools. [Streaming Video].
San Francisco, CA: Producer.
Teachscape, Inc. (Producer). (2007). How do I analyze and interpret my data?
[Streaming Video]. San Francisco, CA: Producer.
Teachscape, Inc. (Producer). (2007). How do I conduct a literature review? [Streaming
Video]. San Francisco, CA: Producer.
Teachscape, Inc. (Producer). (2007). How should I consider variables in my research?
[Streaming Video]. San Francisco, CA: Producer.
Woodward, John. (2006). Developing automaticity in multiplication facts: integrating
strategy instruction with timed practice drills. Learning Disability Quarterly, 29, 269-
289. Retrieved from http://www2.ups.edu/faculty/woodward/LDQfall06.pdf
Woodward, John. (n.d.). Fact mastery and more! Overview for Teaching Facts. Retrieved
from http://www2.ups.edu/faculty/woodward/facts%20overview.pdf
Yermish, Aimee. (2003). Why memorize math facts? Hoagies Gifted Education Page.
Retrieved from http://www.hoagiesgifted.org/why_memorize.htm
Appendices
Data Collection Matrix (Appendix A)
Research Questions and Their Data Sources (1, 2, 3)

1. What math fact strategies work best?
(1) Previous math grades; (2) Student journals; (3) Teacher observations with anecdotal notes

2. How do math fact strategies impact student learning?
(1) Math fact pre-test; (2) Parent survey; (3) Math fact post-test

3. Why is automaticity of math facts important for students?
(1) Teacher interviews; (2) Focus groups (a group that knows math facts and a group that does not) solving higher-level math; (3) 3rd and 4th grade teachers' math assessment grades/OAT scores in math

Math Fact Parent Survey (Appendix B)
Please complete the survey. Your name and your child's name will not be included in the sharing of the results. Thank you!

Do you feel the previous way of using flashcards and timed tests to teach math facts was effective?  YES  NO

Does your child practice math facts at home?  YES  NO

Do you feel the new math fact strategies are effective?  YES  NO

Did your child like learning math facts by using flashcards and timed tests?  YES  NO

Does your child like learning math facts by using the new math fact strategies?  YES  NO

Would you recommend using the old or new methods for teaching math facts?  OLD  NEW

Have you seen a change in your child's attitude towards learning math facts since starting the new strategies?  YES  NO
(If yes, positive or negative?)  POSITIVE  NEGATIVE

Have you seen a change in your child's grade in math facts since starting the new strategies?  YES  NO
(If yes, positive or negative?)  POSITIVE  NEGATIVE

Other comments? _____________________________________________________
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________

Math Pre/Post Test (Appendix C)

Directions: Time yourself as you solve these problems. Stop the timer when you have
solved them all, then answer the questions about them.

Start the timer and begin.
2+1= 1+3= 0+1= 7+1= 5+0=
0+3= 1+4= 7+0= 9+1= 1+8=
Stop the timer.
Time it took to solve the problems: __________
How did you solve these problems? (Circle)
1. Did you count on your fingers? Yes No
2. Did you use a number line? Yes No
3. Did you use a different strategy? Yes No

Start the timer and begin.
2 + 5 = 3 + 2 = 6 + 2 = 2 + 4 = 1 + 2 =
10 + 2 = 3 + 3 = 2 + 8 = 7 + 2 = 9 + 2 =
Stop the timer.
Time it took to solve the problems: __________
How did you solve these problems? (Circle)
1. Did you count on your fingers? Yes No
2. Did you use a number line? Yes No
3. Did you use a different strategy? Yes No

Start the timer and begin.
5 + 5 = 2 + 2 = 1 + 1 = 4 + 4 = 0 + 0 =
10 + 10 = 3 + 3 = 8 + 8 = 7 + 7 = 6 + 6 =
Stop the timer.
Time it took to solve the problems: __________
How did you solve these problems? (Circle)
1. Did you count on your fingers? Yes No
2. Did you use a number line? Yes No
3. Did you use a different strategy? Yes No

Start the timer and begin.
2 + 3 = 4 + 5 = 5 + 6 = 7 + 8 = 6 + 7 =
4 + 3 = 8 + 9 = 1 + 2 = 3 + 2 = 10 + 9 =
Stop the timer.
Time it took to solve the problems: __________
How did you solve these problems? (Circle)
1. Did you count on your fingers? Yes No
2. Did you use a number line? Yes No
3. Did you use a different strategy? Yes No

Start the timer and begin.
2 + 8 = 3 + 7 = 1 + 9 = 6 + 4 = 0 + 10 =
5 + 5 = 10 + 0 = 7 + 3 = 8 + 2 = 4 + 6 =
Stop the timer.
Time it took to solve the problems: __________
How did you solve these problems? (Circle)
1. Did you count on your fingers? Yes No
2. Did you use a number line? Yes No
3. Did you use a different strategy? Yes No

Start the timer and begin.
4 + 9 = 9 + 2 = 1 + 9 = 9 + 9 = 9 + 5 =
10 + 9 = 9 + 3 = 9 + 8 = 7 + 9 = 6 + 9 =
Stop the timer.
Time it took to solve the problems: __________
How did you solve these problems? (Circle)
1. Did you count on your fingers? Yes No
2. Did you use a number line? Yes No
3. Did you use a different strategy? Yes No

Problems for Focus Groups (Appendix D)
1. What number should go in the blank?
6+5 = _____+2     _____+4 = 2+2     3+_____ = 11+1     2+8 = 4+_____     7+_____ = 5+2

2. Read and solve each word problem.

Two birds sat in a nest. Eight more birds joined them. How many birds were there in all?

Ben ate five apples. Justin ate five apples too. How many apples did the boys eat all together?

One cat had seven stripes. Another cat had nine stripes. How many stripes did the cats have in all?

Olivia had one doll. Alania had six dolls. How many dolls did the girls have all together?

There were zero cookies on a plate. I put four cookies on it. How many cookies are on the plate in all?
Student Math Fact Strategy Journal (Appendix E)
This week we learned the ____________________ math fact strategy.
You use this strategy by _______________________________________________
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________.
This strategy is __________________ because _____________________________
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________.
Other thoughts about the strategy: _______________________________________
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________.

Teacher Observation Form (Appendix F)

Section 1 - Plus 0/1
Student reactions:______________________________________________________
____________________________________________________________________
____________________________________________________________________
Student understandings:_________________________________________________
____________________________________________________________________
____________________________________________________________________
Student applications: ___________________________________________________
____________________________________________________________________
____________________________________________________________________
Section 2 - Plus 2
Student reactions:______________________________________________________
____________________________________________________________________
____________________________________________________________________
Student understandings:_________________________________________________
____________________________________________________________________
____________________________________________________________________
Student applications: ___________________________________________________
____________________________________________________________________
____________________________________________________________________
Section 3 - Doubles
Student reactions:______________________________________________________
____________________________________________________________________
____________________________________________________________________
Student understandings:_________________________________________________
____________________________________________________________________
____________________________________________________________________
Student applications: ___________________________________________________
____________________________________________________________________
____________________________________________________________________
Section 4 - Doubles Plus 1
Student reactions:______________________________________________________
____________________________________________________________________
____________________________________________________________________
Student understandings:_________________________________________________
____________________________________________________________________
____________________________________________________________________
Student applications: ___________________________________________________
____________________________________________________________________
____________________________________________________________________
Section 5 - Sums to 10
Student reactions:______________________________________________________
____________________________________________________________________
____________________________________________________________________
Student understandings:_________________________________________________
____________________________________________________________________
____________________________________________________________________
Student applications: ___________________________________________________
____________________________________________________________________
____________________________________________________________________
Section 6 - Adding 9
Student reactions:______________________________________________________
____________________________________________________________________
____________________________________________________________________
Student understandings:_________________________________________________
____________________________________________________________________
____________________________________________________________________
Student applications: ___________________________________________________
___________________________________________________________________
___________________________________________________________________

Teacher Pre/Post-test Observations (Appendix G)

Section 1 - Plus 0/1 - Did the student:
Count on fingers? Use a number line? Use a different strategy?
Comments:____________________________________________________________
____________________________________________________________________
____________________________________________________________________
Section 2 - Plus 2 - Did the student:
Count on fingers? Use a number line? Use a different strategy?
Comments:____________________________________________________________
____________________________________________________________________
____________________________________________________________________
Section 3 - Doubles - Did the student:
Count on fingers? Use a number line? Use a different strategy?
Comments:____________________________________________________________
____________________________________________________________________
____________________________________________________________________
Section 4 - Doubles Plus 1 - Did the student:
Count on fingers? Use a number line? Use a different strategy?
Comments:____________________________________________________________
____________________________________________________________________
____________________________________________________________________
Section 5 - Sums to 10 - Did the student:
Count on fingers? Use a number line? Use a different strategy?
Comments:____________________________________________________________
____________________________________________________________________
____________________________________________________________________
Section 6 - Adding 9 - Did the student:
Count on fingers? Use a number line? Use a different strategy?
Comments:____________________________________________________________
____________________________________________________________________
____________________________________________________________________

Teacher Interview (Appendix H)
1. How do you teach math facts to your students? (e.g., timed tests, flashcards, strategies such as doubles or the nines trick, computer games)

2. Why do you teach math facts that way?

3. Why is it important for students to be able to rapidly recall math facts?

4. What strategies do your students use to solve math facts? (e.g., counting on fingers, using a number line)

5. Do you encourage rote memorization through drills, or do you focus on math strategies (e.g., doubles, sums to ten)? Why?

6. Overall, do your students struggle with rapid recall of math facts?
7. Overall, do your students enjoy learning/solving math facts?


Psychology in the Schools, Vol. 47(4), 2010. © 2010 Wiley Periodicals, Inc.
Published online in Wiley InterScience (www.interscience.wiley.com) DOI: 10.1002/pits.20474

AN INVESTIGATION OF DETECT, PRACTICE, AND REPAIR TO REMEDY MATH-FACT DEFICITS IN A GROUP OF THIRD-GRADE STUDENTS

BRIAN C. PONCY
Oklahoma State University
CHRISTOPHER H. SKINNER
The University of Tennessee
PHILIP K. AXTELL
The University of Tennessee and Cherokee Health Systems

A multiple-probe-across-problem-sets (tasks) design was used to evaluate the effects of Detect, Practice, and Repair (DPR) on multiplication-fact fluency development in seven third-grade students nominated by their teacher as needing remediation. DPR is a multicomponent intervention and begins with a group-administered, metronome-paced assessment used to identify specific facts in need of repair. Next, Cover, Copy, and Compare (CCC) procedures are used to enhance automaticity with those specific facts. Lastly, students complete a 1-min speed drill and self-graph their fluency performance. Results showed large level and trend increases in fact fluency after DPR was applied across all three sets of multiplication problems. Discussion focuses on the importance of developing effective and efficient basic-skill-remediation procedures and directions for future research. © 2010 Wiley Periodicals, Inc.
As the majority of students referred for school psychological services are experiencing academic problems, school psychologists who are involved with remedying students' problems can assist by developing and evaluating procedures designed to prevent and remedy academic skills deficits (Ownby, Wallbrown, D'Arti, & Armstrong, 1985; Shapiro, 2004). A recent report from the National Mathematics Advisory Panel (NMAP, 2008) indicated that American students are failing to demonstrate a variety of key mathematics skills, especially when compared to students from other countries. NMAP has identified a variety of areas (e.g., curriculum, instruction, and assessment) that need to be addressed, including the recognition of "the mutually reinforcing benefits of conceptual understanding, procedural fluency, and automatic (i.e., quick and effortless) recall of facts" (p. xiv). This underscores the importance of effective and efficient remediation procedures designed to increase students' mastery of basic math facts (simple addition, subtraction, multiplication, and division).

Developing automaticity with basic math facts may enhance students' ability to learn, develop, and/or apply advanced mathematics skills and concepts (Shapiro, 2004). For example, students who expend too much of their available cognitive resources (e.g., attention, working memory) solving basic facts may have insufficient cognitive resources to learn and apply skills associated with math reasoning and problem-solving tasks (e.g., Gagne, 1982; Hasselbring, Goin, & Bransford, 1987; Pellegrino & Goldman, 1987). Also, those students who are accurate but slow when responding to basic facts may be less likely to choose to engage in mathematics, which may cause them to fall further behind in their skill development and limit their ability to maintain, apply, and generalize math skills (Skinner, 2002; Skinner, Pappas, & Davis, 2005). These findings support the development and validation of sustainable, efficient procedures designed to enhance math-fact automaticity.
Correspondence to: Brian C. Poncy, Oklahoma State University, 420 Willard Hall, Stillwater, OK, 74078. E-mail: brian.poncy@okstate.edu

Using scientifically supported strategies and procedures, Poncy, Skinner, and O'Mara (2006) developed Detect, Practice, and Repair (DPR) to enhance basic fact automaticity and learning rates
by having students practice only those facts in need of remediation. DPR is composed of three phases. During the detect phase, students are paced through fact problems using 1.5-s intervals. After the completion of the detect phase (i.e., the metronome-paced pretest designed to identify idiosyncratic facts that are not automatic), students identify the first several problems they did not complete, and these problems are targeted in the practice phase. During the practice phase, students apply Cover, Copy, and Compare (CCC) procedures to these identified problems (Skinner, Belfiore, Mace, Williams, & Johns, 1997; Skinner, Turco, Beatty, & Rasavage, 1989). Students complete the DPR sequence during the repair phase, which consists of a fluency drill (1-min sprint) and a feedback component during which students self-graph their performance on the sprint.

A substantive research base supports the use of DPR components to enhance learning. The primary instructional component of DPR is CCC, which has been shown to enhance math-fact fluency across skills and students (see Skinner, McLaughlin, & Logan, 1997). To maximize the effectiveness of CCC, the detect phase is used to identify specific problems that are not mastered (i.e., not completed in 1.5 s). This is important as researchers have shown that spending instructional time working on material already mastered can retard learning rates of unmastered spelling and reading words (Cates et al., 2003; Joseph & Nist, 2006; Nist & Joseph, 2008). The group administration of the detect phase results in each student having a unique set of problems based on her/his individual response patterns (i.e., only the specific problems that the individual student did not respond to within 1.5 s), which are addressed in the practice phase. Thus, each student focuses the time allotted to CCC only on the specific facts in need of automaticity enhancement. The detect and sprint phases provide opportunities to respond under timed conditions. Researchers investigating explicit timing have found that such procedures can enhance math fluency (Codding et al., 2007; Rhymer, Henington, Skinner, & Looby, 1999; Rhymer, Skinner, Henington, D'Reaux, & Sims, 1998; Van Houten & Thompson, 1976). The repair phase (sprint) following CCC provides students with an opportunity to independently respond (without prompts) immediately following CCC, which may enhance learning (Bliss et al., in press). Self-graphing performance on these sprints may also contribute to fluency building (Van Houten, Hill, & Parsons, 1975). Finally, all three steps are designed to occasion high rates of active responding, which have been shown to enhance math-fact fluency (Skinner et al., 1997).

The combination of strategies and procedures embedded in DPR was designed to provide educators with an efficient method to individualize instructional objectives for each student in a group setting. Although the study by Poncy and colleagues (2006) suggested that DPR is an efficient procedure for identifying and remedying math-fact deficits, the baseline-intervention (i.e., A-B) design prevented researchers from drawing conclusions about the cause-effect relationship between DPR and increases in math-fact fluency. The primary purpose of the current study was to extend the research on DPR by using a multiple-probe design to investigate the effect of DPR on a group of 7 third-grade students' multiplication-fact fluency. All participants were nominated by the teacher, who indicated that they were having difficulty responding quickly and accurately (automatically) to multiplication facts. Specifically, we sought to evaluate our hypotheses that the implementation of DPR would result in an increase of the level and trend of digits correct per minute (DCM) scores for the group, as well as for the individual students, and that DCM levels would be maintained after the removal of DPR.
METHOD
Participants and Setting
Participants included seven 8- to 10-year-old, third-grade students (five girls, two boys), who attended an elementary school in the Southeastern United States. Six (86%) students were African-American, and one (14%) was Latino. None of the students were receiving special education services to address math deficits. Prior to the study, each student had received instruction with basic multiplication facts from a basal curriculum. The teacher described the curriculum as balanced and reported that basic facts were taught and incorporated across a variety of contexts focusing on problem solving. In addition, the teacher had students practice multiplication facts grouped by number (e.g., twos, threes, nines). Originally, eight participants were nominated by the teacher, who was asked to indicate all students in her class who were struggling to complete multiplication facts automatically. Baseline assessments supported the nominations, with DCM averages ranging from 10.9 to 26.2 DCM, well below the mastery criterion of 40 DCM (Howell, Fox, & Morehead, 1993) on basic multiplication-fact problems. One student's data were dropped from the analysis due to excessive absences (i.e., missing more than 4 days of data collection). Of the remaining seven students, all but one were absent fewer than 2 days of the study. The other student (Student 5) was absent for 4 days. Procedures were run in the students' general education classroom during the first 15 minutes of the scheduled math period (11:00-11:15 AM).
Materials
Assessment Packet. Baseline and intervention-assessment data were collected via experimenter-constructed probes. The assessment probes were constructed by dividing 36 multiplication facts into three mutually exclusive problem sets containing 12 problems each (see Appendix). Problems with a zero or one in them were deleted from the pool. For each problem set, six different assessment packets were constructed, each containing four pages. Each page contained the 12 problems specific to the probe set under investigation. Problems were arranged so that the 12 problems never occurred in the same order within each probe packet. These assessments occurred at the beginning of the math period prior to the implementation of DPR. When the students received DPR, assessments were administered after they finished DPR.
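The appendix with the actual problem sets is not reproduced here, but the size of the pool is suggestive: 36 facts with all zero-and-one problems removed matches the number of unordered pairs of the factors 2 through 9 (28 mixed pairs plus 8 doubles). A minimal Python sketch of such a construction, under that assumption (the paper does not state the factor range explicitly):

from itertools import combinations_with_replacement
import random

# Assumption: the 36-fact pool is the set of unordered factor pairs 2-9.
facts = list(combinations_with_replacement(range(2, 10), 2))
assert len(facts) == 36

random.seed(0)  # arbitrary seed, for a repeatable split
random.shuffle(facts)
set_a, set_b, set_c = facts[:12], facts[12:24], facts[24:]  # three sets of 12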
Intervention Packet. When completing the intervention, students would record responses on a corresponding packet of three sheets. The first page was a detect sheet that contained all 12 problems for each set listed four times, resulting in a sheet containing 48 problems (i.e., 4 groups of 12). Six alternate detect sheets were constructed for each set that listed the problems in different sequences. This initial detect sheet was used with the metronome-paced detect phase of DPR. The second sheet was a CCC sheet that contained 36 boxes with 6 rows and 6 columns. The first row of boxes labeled the column where the problem and answer was to be written, and numbered the remaining five columns where the problems would be practiced. The five boxes in the first column under the "problem and answer" label were reserved for the problems identified during the detect phase of DPR. The remaining 25 boxes were blank for students to write their responses when using the CCC procedures. The third sheet was a sprint sheet, which was similar to the detect sheets (a sheet of 48 problems that contained 4 sets of 12 problems of the targeted problem set). These sheets were used by students to complete as many problems as possible in 1 min (i.e., a math sprint). Although the assessment probes, detect sheets, and math sprint sheets contained the same problems, the problems were presented in different orders. Intervention packets were kept in a plain folder with the student's name on it, and a graph was stapled on the inside of the folder. The student would use the graph to record the number of problems completed during the math sprints of the repair phase.

Other Materials. Other materials included three poster boards containing the multiplication problems and answers for each of the three problem sets, an intervention integrity checklist, a stopwatch, and a metronome.
Experimental Design, Dependent Measures, and Analysis Procedures
A multiple-probe-across-problem-sets design was used to evaluate the effect of DPR on student DCM scores (Cuvo, 1979; Horner & Baer, 1978). The intervention was implemented in a staggered fashion across sets, and maintenance data were collected periodically after DPR procedures were removed from each problem set. The study took place over a 16-day period. During collection of the dependent variable (i.e., DCM), students had 1 min to complete as many problems as possible. A digit was scored as correct when the appropriate number was written in the proper column when the student was presented with a basic multiplication-fact problem (Shinn, 1989). These correct digits were totaled to obtain DCM. On days when all three problem sets were assessed, the sets were sequenced in a counterbalanced order across days.

For each assessment session, DCM scores were averaged across students and plotted on a time-series graph, which was used to make phase-change decisions. Visual analysis of differences in level and trend was used to determine when phase changes were needed. To avoid making decisions based on unrepresentative data, if two or more students were absent during a session the data were excluded. This resulted only in the exclusion of data from the first day of baseline. Results were interpreted using visual analysis of class-average data, and, for individuals, each student's within-phase average and trends were compared across phases. Trends were calculated using ordinary least-squares regression.
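The trend statistic is simple to compute; a minimal Python sketch (the function name and example scores are mine, for illustration):

def ols_slope(dcm_scores):
    # Ordinary least-squares slope of DCM over session number (1, 2, ...),
    # i.e., the estimated DCM gained per session within a phase.
    n = len(dcm_scores)
    xs = range(1, n + 1)
    mean_x = (n + 1) / 2
    mean_y = sum(dcm_scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, dcm_scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# A flat baseline phase versus a rising intervention phase:
print(ols_slope([21, 20, 22, 21]))  # 0.2 (relatively flat)
print(ols_slope([29, 33, 38, 43]))  # 4.7 (clear upward trend)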
Assessment Procedures
Students were seated at an assigned desk and given an assessment probe packet face down. The group was instructed that they would have 1 min to complete as many problems as possible. They were told to attempt each problem and, if they were unable to complete it, to put an "x" over the problem and go to the next. Students were told to turn the packet over and begin. After 1 min, they were asked to stop and put the assessment packet in their folder. On days when all three problem sets were assessed, the students would take out the next problem set and the assessment procedures were repeated until each set had been assessed. Assessments were collected and scored later. During baseline and the collection of the final maintenance data point, the collection of DCM data was done at the beginning of math class. During the intervention phase, the collection of DCM data was done following DPR procedures.
DPR Intervention Procedures
DPR consists of three phases: (1) detect, (2) practice (CCC), and (3) repair (a math sprint with self-graphing). The experimenter began the DPR intervention by passing out the students' folders and having the students take out their packets and place them on their desks, face down. During the detect phase, a metronome was placed on a table at the front of the room and students were instructed to try to write their answer before the click of the metronome. The metronome was turned on and the experimenter told the students to turn over their packets and begin. When the metronome clicked, students attempted to write the answer before the next click, and so forth, until the group was instructed to stop. This phase required approximately 72 seconds when setting the metronome at 40 beats per min, the same rate used in the pilot study by Poncy and coworkers (2006). Next, students examined their detect sheets, circled the first five problems they did not complete, and applied CCC only to those problems.
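The 72-second figure follows directly from the sheet size and the metronome rate; a quick check (a sketch, not the authors' code):

problems_on_detect_sheet = 48  # 4 groups of the 12-problem set
beats_per_min = 40             # metronome rate reported above
print(problems_on_detect_sheet / beats_per_min * 60)  # 72.0 seconds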
During the practice phase, students wrote the first five uncompleted problems and answers in the corresponding "problem and answer" boxes in the left column of the CCC sheet. If students were unsure of the correct answer, they were instructed to reference a poster board at the front of the room that displayed the problems and answers for the targeted problem set being instructed. This poster board was turned around after the detect phase to prevent students from practicing inaccurate responses during CCC. After the problems and answers were written, the students read the printed problem and answer, covered the problem and answer, wrote the problem and answer, and then checked the accuracy of the response by comparing it to the model. When errors were made, students put a line through the incorrect response and copied the correct problem and answer. When these steps were completed, the students would move to the next box until they had completed five correct written responses for each target problem. This drill resulted in the students using CCC procedures to correctly write each of the five target problems and answers five times each. Because the group was given a fixed amount of time (i.e., 5 min) to finish the CCC sheets in their packet, in some instances students failed to complete the 25 possible correct problems and answers. If students did finish before the 5-min period ended, they were instructed to study (i.e., verbally rehearse) the completed problems on the CCC sheet.
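The bookkeeping of the practice phase can be sketched as follows; the five target facts listed are invented for illustration:

# Five target facts x five correct written responses each = the 25
# practice boxes on the CCC sheet (fewer are filled if the 5-min
# limit runs out first).
targets = [(6, 7), (8, 4), (9, 6), (7, 7), (8, 6)]  # invented facts
practice_writes = [(a, b, a * b) for (a, b) in targets for _ in range(5)]
assert len(practice_writes) == 25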
The next, and final, step of DPR was the repair phase. During this phase, students turned to the sprint sheet in their intervention packets and were instructed to complete as many problems as possible in 1 min. Students were told to work fast, but that it was important to complete problems accurately. After 1 min, students were instructed to stop, count the number of problems completed, and document that number on the graph stapled to the inside of their folder. Finally, the students shut their folders and handed them to the front for the researcher to collect. Students spent approximately 12 min per day completing DPR. Following the DPR intervention, the assessment procedures were implemented and DCM data were collected (i.e., assessments were conducted to collect dependent variable data).
DPR Training Procedures
The day after the fourth baseline data point was collected (i.e., the 5th day of the study), the primary experimenter used description, demonstration, modeling, practice, and feedback to teach students the DPR intervention. Students first practiced the detect phase by writing letters in conjunction with the metronome clicks. Specifically, students were given a piece of paper with eight rows of six boxes and were instructed to write the letters a-z, placing one letter in each box at the sound of each metronome click. This exercise was done twice. Next, the experimenter trained students to carry out CCC procedures with letters from the alphabet (e.g., C + F = R). Each student independently worked using a practice CCC sheet. For both the detect and the practice (i.e., CCC) phases, the experimenter described and demonstrated procedures and answered questions. While students practiced the procedures, the experimenter moved around the room and provided labeled praise for the accurate application of the detect and practice procedures. This session required approximately 20 min. Immediately after the training, the experimenter passed out the intervention packets to the students, and the initial DPR intervention session began.

After the students had completed the detect and practice phases, as well as the math sprint, they were instructed to open their intervention folders, locate the graph stapled on the inside, and raise their hands. After all of the students had located the graph, they were told to count the number of problems that were completed during the math sprint and to mark the graph to represent their performance. The experimenter modeled how this would be done and checked that each student correctly accomplished this step as the student folders were picked up. The training for self-graphing was completed during the initial day of intervention and took approximately 5 min.
Procedural Integrity and Inter-Scorer Agreement
A second observer was in the classroom during 4 of the 11 intervention sessions (36%) and collected data on the integrity of the intervention and assessment procedures. These data showed that the experimenter correctly implemented 100% of the steps when implementing the intervention and collecting assessment data. To calculate inter-scorer agreement (IA), 50 of the 244 administered probes (20%) were scored by an independent scorer. Average IA on DCM was 98.5% (range 84%-100%). IA was calculated by dividing the number of agreements on digits correct by the number of possible agreements and multiplying by 100.
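As a sketch of that computation (the counts below are invented so as to reproduce the reported average):

def inter_scorer_agreement(agreements, possible_agreements):
    # IA = (agreements on digits correct / possible agreements) x 100.
    return agreements / possible_agreements * 100

print(inter_scorer_agreement(197, 200))  # 98.5, the reported average IA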
No formal procedures were used to measure student adherence to procedures. The experimenters
were walking around the room during the DPR sessions, however, and provided students with
prompts, redirects, and/or feedback if procedures were being violated.
RESULTS
Figure 1 displays the group's average DCM data across sessions and phases. Baseline data for Set A show no clear trend, whereas the data for Sets B and C have an upward trend that flattens after the third day, allowing for the evaluation of intervention effects (Cuvo, 1979; Horner & Baer, 1978). Upon the introduction of DPR, data from Sets A and C show an immediate increase, with differences of 8 DCM for Set A and 10 DCM for Set C from the final baseline data point. For Set B, the initial data point failed to achieve a similar level change; however, by the 2nd day of the treatment, the group had increased their mean fluency performance by approximately 8 DCM. The absence of similar increases across concurrent baseline phases suggests that spillover, history, and/or testing effects did not account for the sharp increases in DCM after DPR was applied. Maintenance phase data show a slight decrease for Problem Set A, followed by a slight increase, whereas Sets B and C show a slight decrease. Across all three sets, however, these data suggest that students maintained most of their increases in computation fluency.

Table 1 displays the DCM data for each student and the group, using phase averages for baseline, intervention, and maintenance. On average, students increased from 20.9 DCM in the baseline phase to 33.3 and 32.9 DCM during the intervention and maintenance phases, respectively. The 11 intervention sessions were conducted over 11 days and were estimated to take approximately 12 min per day. This resulted in a total duration of roughly 130 min of instructional time, with students increasing an average of 5.7 DCM for every hour of instruction and a 1.1 DCM per day increase. An analysis of individual data showed that all seven students made gains, with increases from baseline to intervention phases ranging from 5.1 to 18.9 DCM and growth from baseline to maintenance phases ranging from 6.3 to 17.8 DCM. Both the visual analysis of DCM data across problem sets and a comparison of phase means suggest that DPR enhanced DCM scores for the group.
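The quoted learning-rate figures follow from the group phase means in Table 1 and the estimated instructional time; a quick check (a sketch):

baseline_mean, intervention_mean = 20.9, 33.3  # group phase means (Table 1)
gain = intervention_mean - baseline_mean       # 12.4 DCM
total_minutes = 130                            # 11 sessions x ~12 min, as estimated
print(round(gain / (total_minutes / 60), 1))   # 5.7 DCM per hour of instruction
print(round(gain / 11, 1))                     # 1.1 DCM per day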
To investigate differences in trend, least-squares regression slopes were calculated for both baseline and intervention phases for each problem set. Table 2 displays these data for each of the individuals and the group. The slope of the group's baseline data for Sets A, B, and C estimated an increase of 0.3, 0.4, and 0.5 DCM per day and supports the interpretation of the visual analysis that baseline growth rates were relatively flat. In comparison, the slope of the group's intervention phase data for Sets A, B, and C estimated increases of 1.5, 4.9, and 4.8 DCM per day, respectively. A comparison of baseline and intervention slope estimates for individual students also supports that DPR increased growth rates when compared to baseline. There were 20 opportunities to compare baseline and intervention phase slopes, as Student 3's slope for Set C could not be calculated due to excessive absences during this phase. Slope calculations were higher for the intervention phase across 16 of the 20 problem sets.

DISCUSSION
Although Poncy and colleagues (2006) found increases in math-fact fluency after DPR had been applied, the A-B design did not control for any threats to internal validity (Skinner & Skinner, 2007).
FIGURE 1. Class average digits correct per minute across problem sets and phases.
We extended this research by employing a multiple-probe design to control for threats to internal validity, and found clearer evidence that DPR procedures caused increases (i.e., immediate increases in target sets without concurrent increases in untargeted sets) in fluency that were maintained.
Table 1
Average Phase Digits Correct Per Minute Data for Each Student and the Group

Student      Baseline (BL)   Intervention (Int)   Maintenance (M)   BL-Int Growth   BL-M Growth
1            20.2            30.7                 28.3              10.5            8.1
2            26.4            42.2                 36.7              15.8            10.3
3            21.3            33.1                 28.7              11.8            7.4
4            11.9            20.0                 21.0              8.1             9.1
5            15.7            20.8                 22.0              5.1             6.3
6            26.2            45.1                 41.7              18.9            15.5
7            24.2            36.4                 42.0              12.2            17.8
Group Mean   20.9            33.3                 32.9              12.4            12.0
These findings suggest that DPR may be an effective math-fact fluency remediation procedure when applied to small groups.

Limitations and Directions for Research
Visual analysis of Figure 1 and comparisons of phase means provide convincing evidence of the effectiveness of DPR. The trend data (see Table 2), although largely supportive, appear less convincing, with four instances of negative DPR phase trends.
Table 2
Baseline and DPR Phase Slope Calculations for Each Student and the Group

Student   Problem Set   Baseline Slope   Intervention Slope   Difference
1         A             0.2              1.7                  1.5
          B             0.9              8.0                  7.1
          C             0.5              1.0                  1.5
2         A             3.3              8.8                  5.5
          B             1.1              7.6                  6.5
          C             1.7              0.5                  1.2
3         A             0.5              6.0                  6.5
          B             0.5              4.7                  4.2
          C             0.1              -*                   -
4         A             0.6              3.2                  2.6
          B             0.9              0.4                  0.5
          C             0.6              7.0                  6.4
5         A             1.0              2.0                  3.0
          B             0.8              1.1                  1.9
          C             0.2              6.0                  5.8
6         A             0.4              2.0                  1.6
          B             0.2              7.9                  7.7
          C             0.7              3.0                  3.7
7         A             0.4              0.7                  1.1
          B             0.6              3.1                  2.5
          C             1.1              4.5                  3.4
Group     A             0.3              1.5                  1.2
          B             0.4              4.9                  4.5
          C             0.5              4.8                  4.3

* Student was only present for one day during the intervention phase.
There are several reasons, however, why we caution readers against overinterpreting these trend
data. Few data points were used to calculate each trend (only three or four per intervention phase,
and even fewer when one considers that students were absent on certain days). Also, the item sets
were constricted, containing only 12 problems each. Although constricting sets may have been
conducive to the demonstration of large and immediate intervention effects, these rapid increases
in DCM can lead to a gross underestimation of the treatment effect when determining student
learning rates. For example, the Set C baseline phase concludes with the students completing
approximately 23 DCM. On the first day of intervention they increased to 33 DCM. Unfortunately,
within-phase statistical slope calculations exclude this immediate increase of 10 DCM and measure
only the additional 9-DCM increase that occurred over the next 3 DPR sessions. Thus, such
calculations fail to capture the full impact of DPR.
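The Set C example can be made concrete. In the hypothetical series below (chosen to approximate the values reported above), the 10-DCM jump at intervention onset contributes nothing to the within-phase slope, which reflects only the roughly 3-DCM-per-session growth that follows; least_squares_slope is the helper defined in the earlier sketch.

# Illustration (hypothetical series approximating the Set C example):
# the within-phase slope ignores the level shift at intervention onset.
last_baseline_dcm = 23.0
intervention_dcm = [33.0, 36.0, 39.0, 42.0]  # +9 DCM over the 3 sessions after onset
days = [1, 2, 3, 4]

slope = least_squares_slope(days, intervention_dcm)    # 3.0 DCM/day
level_shift = intervention_dcm[0] - last_baseline_dcm  # 10.0 DCM, invisible to the slope
total_gain = intervention_dcm[-1] - last_baseline_dcm  # 19.0 DCM actual improvement
print(slope, level_shift, total_gain)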
DPR includes several components that may have caused, or interacted with each other to
cause, the increases in fact fluency. Previous research on explicit timing suggests that the detect
phase and the post-CCC sprint with self-graphing may have enhanced fluency (Codding et al.,
2007; Van Houten et al., 1975). Additionally, the CCC procedure may have caused the increases in
fluency (Skinner et al., 1989). Finally, each of these procedures has been shown to enhance rates of
accurate academic responding, which has been shown to enhance math fluency (Skinner et al., 1997).
Therefore, researchers should conduct component-analysis studies to identify the component(s) that
caused the increases in fluency.
Although many of the aforementioned intervention components have been well researched, DPR
is unique in that it incorporates the detect pretest. Although many have indicated the need to remedy
skill deficits with evidence-based interventions, much less attention has been paid to intervention
efficiency (Bramlett, Cates, Savina, & Lauinger, 2010; Skinner, 2008, 2010). By incorporating
the metronome-paced group assessment (detect phase), DPR provides an efficient procedure for
identifying specific math-fact problems in need of remediation. This procedure allows each student
to apply the self-managed CCC intervention to only those problems in need of remediation, a strategy
that has been shown to be effective in enhancing learning rates (Cates et al., 2003; Joseph & Nist,
2006; Nist & Joseph, 2008). In the current study, students may have benefited from this attempt at
streamlining instructional time, with DPR resulting in an average increase of 5.5 DCM per week in
response to 12 min of instruction per day. Regardless, future researchers should conduct studies to
determine whether procedures similar to the detect phase enhance instructional efficiency (e.g., learning
per instructional minute).
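As a rough illustration of the efficiency metric suggested here, the snippet below converts the reported figures into learning per instructional minute; the assumption of five DPR sessions per week is ours, not stated in the study.

# Back-of-the-envelope learning-rate calculation (assumption: 5 sessions/week).
dcm_gain_per_week = 5.5   # reported average weekly DCM increase
minutes_per_session = 12  # reported daily instructional time
sessions_per_week = 5     # assumed, not reported

minutes_per_week = minutes_per_session * sessions_per_week  # 60 min
learning_rate = dcm_gain_per_week / minutes_per_week        # ~0.09 DCM per minute
print(f"~{learning_rate:.3f} DCM gained per instructional minute")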
The data from the current study suggest that DPR is an efficient method to increase student
DCM performance. Future researchers should determine whether DPR enhances learning rates (amount
learned given a stable unit of instructional time) compared to other interventions, such as explicit
timing and the taped-problems intervention (e.g., Codding et al., 2007; McCallum, Skinner, Turner,
& Saecker, 2006). Component-analysis studies are needed to assess increases in learning rates
caused by different treatments and/or treatment-component combinations. Such studies may allow
educators to reduce or eliminate time spent on time-consuming components that do little or nothing
to enhance automaticity, and to allot more time to components that cause larger increases in learning
rates. They may also help refine DPR and other interventions to maximize learning rates, which could
allow students to return to general education activities, as opposed to remedial activities, more
quickly (Skinner, 2008).
A related limitation that should be addressed is treatment integrity. In the current study, procedural
integrity data were collected on the experimenter's behavior. The intervention was designed,
however, for efficient and simultaneous application across the group of students, allowing each
student to identify specific idiosyncratic target problems in need of remediation during
the detect phase and to apply self-instruction procedures to remedy these deficits (e.g., CCC, self-graphing).
The primary experimenter and teacher moved around the room observing students and prompting
them when they made procedural errors; however, because students were simultaneously working on
different problems, they could not and did not collect treatment integrity data on each student's
self-assessment, self-instruction, and self-evaluation behavior. Future researchers should attempt to
collect data on student behaviors to ensure that they are carrying out procedures as planned. These
integrity data will be critical for researchers attempting to identify effective components, as the
degree to which each component contributes to skill development cannot be assessed unless researchers
know that the component was administered with integrity.
To address external validity limitations associated with the current study, researchers should
conduct similar studies across grade levels, students (e.g., students with disabilities), and math facts
(e.g., addition, subtraction, and division). Because fact fluency may enhance learning of advanced
skills and reduce math anxiety (Cates & Rhymer, 2003; NMAP, 2008), longitudinal studies are
needed to evaluate the effects of DPR on students' advanced math achievement, their disposition
toward math (e.g., math-esteem), and the probability of students choosing to do more math or engage
in more advanced math (e.g., take advanced math electives).
CONCLUSION
Poncy and colleagues (2006) conducted a pilot study and found evidence that DPR may have
enhanced fluency. In the current study, the application of a multiple-probe design to control for threats
to internal validity extends this research, and the results provide stronger evidence that DPR procedures
caused increases in fluency, with the average student increasing her/his DCM score by 63% in
a 2-week period using a little more than 2 hours of instructional time. Next, researchers should
attempt to enhance the external validity of DPR and conduct studies designed to identify which
components are effective in causing the increases in math-fact fluency. By identifying components
that cause increases in fluency and assessing the time required for each component, researchers may
be able to adapt DPR to allow for greater increases in math fluency while requiring less instructional
time (e.g., eliminating time-consuming components that account for little or no learning, applying
more time to components that cause the most rapid increases in learning rates). Additionally, to
help prevent basic skill deficits from hindering future learning, researchers should compare DPR
with other efficient and sustainable interventions so that educators can select interventions that
remedy skill deficits as rapidly as possible (Skinner, 2008; Skinner, Belfiore, & Watson, 1995/
2002).
APPENDIX
Set A: 5 × 9 = 45, 3 × 9 = 27, 6 × 6 = 36, 3 × 4 = 12, 7 × 9 = 63, 2 × 2 = 4, 6 × 3 = 18, 7 × 2 = 14, 5 × 4 = 20, 4 × 8 = 32, 8 × 8 = 64, 7 × 5 = 35
Set B: 4 × 2 = 8, 2 × 6 = 12, 4 × 6 = 24, 9 × 2 = 18, 6 × 8 = 48, 7 × 4 = 28, 5 × 5 = 25, 9 × 9 = 81, 3 × 2 = 6, 8 × 5 = 40, 8 × 7 = 56, 7 × 3 = 21
Set C: 6 × 9 = 54, 7 × 6 = 42, 7 × 7 = 49, 3 × 5 = 15, 4 × 4 = 16, 8 × 3 = 24, 8 × 2 = 16, 3 × 3 = 9, 8 × 9 = 72, 5 × 6 = 30, 9 × 4 = 36, 2 × 5 = 10
REFERENCES
Bliss, S. L., Skinner, C. H., McCallum, E., Saecker, L. B., Rowland, E., & Brown, K. S. (in press). Does a brief post-treatment assessment enhance learning? An investigation of taped problems on multiplication fluency. Journal of Behavioral Education.
Bramlett, R., Cates, G. L., Savina, E., & Lauinger, B. (2010). Assessing effectiveness and efficiency of academic interventions in school psychology journals: 1995–2005. Psychology in the Schools, 47, 114–125.
Cates, G. L., & Rhymer, K. N. (2003). Examining the relationship between mathematics anxiety and mathematics performance: An instructional hierarchy perspective. Journal of Behavioral Education, 12, 23–34.
Cates, G. L., Skinner, C. H., Watson, S. T., Meadows, T. J., Weaver, A., & Jackson, B. (2003). Instructional effectiveness and instructional efficiency as considerations for data-based decision making: An evaluation of interspersing procedures. School Psychology Review, 32, 601–617.
Codding, R. S., Shiyko, M., Russo, M., Birch, S., Fanning, E., & Jaspen, D. (2007). Comparing mathematics interventions: Does initial level of fluency predict intervention effectiveness? Journal of School Psychology, 45, 603–617.
Cuvo, A. J. (1979). Multiple-baseline design in instructional research: Pitfalls of measurement and procedural advantages. American Journal of Mental Deficiency, 84, 219–228.
Gagne, R. M. (1982). Some issues in psychology of mathematics instruction. Journal of Research in Mathematics Education, 14, 7–18.
Hasselbring, T. S., Goin, L. I., & Bransford, J. D. (1987). Developing automaticity. Teaching Exceptional Children, 1, 30–33.
Horner, R. D., & Baer, D. M. (1978). Multiple-probe technique: A variation of the multiple baseline. Journal of Applied Behavior Analysis, 11, 189–196.
Howell, K. W., Fox, S. L., & Morehead, M. K. (1993). Curriculum-based evaluation: Teaching and decision making (2nd ed.). Columbus, OH: Charles E. Merrill.
Joseph, L. M., & Nist, L. M. (2006). Comparing the effects of unknown-known ratios on word reading learning versus learning rates. Journal of Behavioral Education, 15, 69–79.
McCallum, E., Skinner, C. H., Turner, H., & Saecker, L. (2006). The taped-problems intervention: Increasing multiplication fact fluency using a low-tech, class-wide, time-delay intervention. School Psychology Review, 35, 419–434.
National Mathematics Advisory Panel. (2008). Foundations for success: The final report of the National Mathematics Advisory Panel. Washington, DC: U.S. Department of Education.
Nist, L., & Joseph, L. M. (2008). Effectiveness and efficiency of flashcard drill instructional methods on urban first graders' word recognition, acquisition, maintenance, and generalization. School Psychology Review, 27, 294–308.
Ownby, R. L., Wallbrown, F., D'Arti, A., & Armstrong, B. (1985). Patterns of referrals for school psychological services: Replication of the referral problems and category system. Special Services in the Schools, 1(4), 53–66.
Pellegrino, J. W., & Goldman, S. R. (1987). Information processing and elementary mathematics. Journal of Learning Disabilities, 20, 23–32, 57.
Poncy, B. C., Skinner, C. H., & O'Mara, T. (2006). Detect, practice, and repair (DPR): The effects of a class-wide intervention on elementary students' math-fact fluency. Journal of Evidence Based Practices for Schools, 7, 47–68.
Rhymer, K. N., Henington, C., Skinner, C. H., & Looby, E. J. (1999). The effects of explicit timing on mathematics performance in Caucasian and African-American second-grade students. School Psychology Quarterly, 14, 397–407.
Rhymer, K. N., Skinner, C. H., Henington, C., D'Reaux, R. A., & Sims, S. (1998). Effects of explicit timing on mathematics problem completion rates in African-American third-grade students. Journal of Applied Behavior Analysis, 31, 673–677.
Shapiro, E. S. (2004). Academic skills problems: Direct assessment and intervention (3rd ed.). New York: The Guilford Press.
Shinn, M. R. (Ed.). (1989). Curriculum-based measurement: Assessing special children. New York: The Guilford Press.
Skinner, C. H. (2002). An empirical analysis of interspersal research: Evidence, implications and applications of the discrete task completion hypothesis. Journal of School Psychology, 40, 347–368.
Skinner, C. H. (2008). Theoretical and applied implications of precisely measuring learning rates. School Psychology Review, 37, 309–314.
Skinner, C. H. (2010). Applied comparative effectiveness researchers must measure learning rates: A commentary on efficiency articles. Psychology in the Schools, 47, 166–172.
Skinner, C. H., Belfiore, P. J., Mace, H. W., Williams, S., & Johns, G. A. (1997). Altering response topography to increase response efficiency and learning rates. School Psychology Quarterly, 12, 54–64.
Skinner, C. H., Belfiore, P. J., & Watson, T. S. (1995/2002). Assessing the relative effects of interventions in students with mild disabilities: Assessing instructional time. Journal of Psychoeducational Assessment, 20, 345–356. (Reprinted from Assessment in Rehabilitation and Exceptionality, 2, 207–220, 1995)
Skinner, C. H., McLaughlin, T. F., & Logan, P. (1997). Cover, copy, and compare: A self-managed academic intervention effective across skills, students, and settings. Journal of Behavioral Education, 7, 295–306.
Skinner, C. H., Pappas, D. N., & Davis, K. A. (2005). Enhancing academic engagement: Providing opportunities for responding and influencing students to choose to respond. Psychology in the Schools, 42, 389–403.
Skinner, C. H., & Skinner, A. L. (2007). Establishing an evidence base for a classroom management procedure with a series of studies: Evaluating the color wheel. Journal of Evidence-Based Practices for Schools, 8, 88–101.
Skinner, C. H., Turco, T. L., Beatty, K. L., & Rasavage, C. (1989). Cover, copy, and compare: An intervention for increasing multiplication performance. School Psychology Review, 18, 212–220.
Van Houten, R., Hill, S., & Parsons, M. (1975). An analysis of a performance feedback system: The effects of timing and feedback, public posting, and praise upon academic performance and peer interaction. Journal of Applied Behavior Analysis, 8, 449–457.
Van Houten, R., & Thompson, C. (1976). The effect of explicit timing on math performance. Journal of Applied Behavior Analysis, 9, 227–230.
Effective Programs in Elementary Mathematics: A Best-Evidence Synthesis
Author: Slavin, Robert E; Lake, Cynthia
Publication info: Review of Educational Research 78.3 (Sep 2008): 427–515.
Abstract: This article reviews research on the achievement outcomes of three types of approaches to improving
elementary mathematics: mathematics curricula, computer-assisted instruction (CAI), and instructional process
programs. Study inclusion requirements included use of a randomized or matched control group, a study
duration of at least 12 weeks, and achievement measures not inherent to the experimental treatment. Eighty-
seven studies met these criteria, of which 36 used random assignment to treatments. There was limited
evidence supporting differential effects of various mathematics textbooks. Effects of CAI were moderate. The
strongest positive effects were found for instructional process approaches such as forms of cooperative
learning, classroom management and motivation programs, and supplemental tutoring programs. The review
concludes that programs designed to change daily teaching practices appear to have more promise than those
that deal primarily with curriculum or technology alone.
Keywords: best-evidence syntheses, elementary mathematics, experiments, meta-analysis, reviews of
research.
The mathematics achievement of American children is improving but still has a long way to go. On the National
Assessment of Educational Progress (NAEP, 2005), the math scores of fourth graders steadily improved from
1990 to 2005, increasing from 12% proficient or above to 35%. Among eighth graders, the percentage of
students scoring proficient or better gained from 15% in 1990 to 20% in 2005. These trends are much in
contrast to the trend in reading, which changed only slightly between 1992 and 2005.
However, although mathematics performance has grown substantially for all subgroups, the achievement gap
between African American and Hispanic students and their White counterparts remains wide. In 2005, 47% of
White fourth graders scored at or above proficient on the NAEP, but only 13% of African American students and
19% of Hispanic students scored this well.
Furthermore, the United States remains behind other developed nations in international comparisons of
mathematics achievement. For example, U.S. 15-year-olds ranked 28th on the Organization for Economic
Cooperation and Development's Program for International Student Assessment study of mathematics
achievement, significantly behind countries such as Finland, Canada, the Czech Republic, and France.
Although we can celebrate the growth of America's children in mathematics, we cannot be complacent. The
achievement gap between children of different ethnicities, and between U.S. children and those in other
countries, gives us no justification for relaxing our focus on improving mathematics for all children. Under No
Child Left Behind, schools can meet their adequate yearly progress goals only if all subgroups meet state
standards (or show adequate growth) in all subjects tested. Nationally, thousands of schools are under
increasing sanctions because the school, or one or more subgroups, is not making sufficient progress in math.
For this reason, educators are particularly interested in implementing programs and practices that have been
shown to improve the achievement of all children.
One way to reduce mathematics achievement gaps, and to improve achievement overall, is to provide low-
performing schools training and materials known to be markedly more effective than typical programs. No Child
Left Behind, for example, emphasizes the use of research-proven programs to help schools meet their
adequate yearly progress goals. Yet, for such a strategy to be effective, it is essential to know what specific
programs are most likely to improve mathematics achievement. The purpose of this review is to summarize and
interpret the evidence on elementary mathematics programs in hopes of informing policies and practices
designed to reduce achievement gaps and improve the mathematics achievement of all children.
Reviews of Effective Programs in Elementary Mathematics
The No Child Left Behind Act strongly emphasizes encouraging schools to use federal funding (such as Title I)
on programs with strong evidence of effectiveness from "scientifically-based research." No Child Left Behind
defines scientifically based research as research that uses experimental methods to compare groups using
programs to control groups, preferably with random assignment to conditions. Yet, in mathematics, what
programs meet this standard? There has never been a published review of scientific research on all types of
effective programs. There have been meta-analyses on outcomes of particular approaches to mathematics
education, such as use of educational technology (e.g., H. J. Becker, 1992; Chambers, 2003; Kulik, 2003; R.
Murphy et al., 2002), calculators (e.g., Ellington, 2003), and math approaches for at-risk children (e.g., Baker,
Gersten, & Lee, 2002; Kroesbergen & Van Luit, 2003). There have been reviews of the degree to which various
math programs correspond to current conceptions of curriculum, such as those carried out by Project 2061
evaluating middle school math textbooks (American Association for the Advancement of Science, 2000). The
What Works Clearinghouse (2006) is doing a review of research on effects of alternative math textbooks, but
this has not appeared as of this writing. However, there are no comprehensive reviews of research on all of the
programs and practices available to educators.
In 2002, the National Research Council (NRC) convened a blue-ribbon panel to review evaluation data on the
effectiveness of mathematics curriculum materials, focusing in particular on innovative programs supported by
the National Science Foundation (NSF) but also looking at evaluations of non-NSF materials (Confrey, 2006;
NRC, 2004). The NRC panel assembled research evaluating elementary and secondary math programs and
ultimately agreed on 63 quasi-experimental studies covering all grade levels, kindergarten through 12, that they
considered to meet minimum standards of quality.
The authors of the NRC (2004) report carefully considered the evidence across the 63 studies and decided that
they did not warrant any firm conclusions. Using a vote-count procedure, they reported that among 46 studies of
innovative programs that had been supported by NSF, 59% found significantly positive effects, 6% significant
negative effects, and 35% no differences. Most of these studies involved elementary and secondary programs
of the University of Chicago School Mathematics Project. For commercial programs, the corresponding
percentages were 29%, 13%, and 59%. Based on this, the report tentatively suggested that NSF-funded
programs had better outcomes. Other than this very general finding, the report was silent about the evidence on
particular programs. In addition to concerns about methodological limitations, the report maintained that it is not
enough to show differences in student outcomes; curricula, the authors argued, should be reviewed for content
by math educators and mathematicians to be sure they correspond to current conceptions of what math content
should be. None of the studies combined this kind of curriculum review with rigorous evaluation methods, so the
NRC chose not to describe the outcomes it found in the 63 evaluations that met its minimum standards (see
Confrey, 2006).
Focus of the Present Review
This review examines research on all types of math programs that are available to elementary educators today.
The intention is to place all types of programs on a common scale. In this way, we hope to provide educators
with meaningful, unbiased information that they can use to select programs and practices most likely to make a
difference with their students. In addition, the review is intended to look broadly for factors that might underlie
effective practices across programs and program types and to inform an overarching theory of effective
instruction in elementary mathematics.
The review examines three general categories of math approaches. One is mathematics curricula, in which the
main focus of the reform is on introduction of alternative textbooks. Some of the programs in this category were
developed with extensive funding by the NSF, which began in the 1990s. Such programs provide professional
development to teachers, and many include innovative instructional methods. However, the primary theory of
action behind this set of reforms is that higher level objectives, including a focus on developing critical
mathematics concepts and problem-solving skills and pedagogical aids such as the use of manipulatives and
improved sequencing of objectives, and other features of textbooks will improve student outcomes. Outcomes
of such programs have not been comprehensively reviewed previously, although they are currently being
reviewed by the What Works Clearinghouse (2006).
A second major category of programs is computer-assisted instruction (CAI), which uses technology to enhance
student achievement in mathematics. CAI programs are almost always supplementary, meaning that students
experience a full textbook-based program in mathematics and then go to a computer lab or a classroom-based
computer to receive additional instruction. CAI programs diagnose students' levels of performance and then
provide exercises tailored to students' individual needs. Their theory of action depends substantially on this
individualization and on the computer's ability to continuously assess students' progress and accommodate their
needs. This is the one category of program that has been extensively reviewed in the past, most recently by
Kulik (2003), R. Murphy et al. (2002), and Chambers (2003).
A third category is instructional process programs. This set of interventions is highly diverse, but what
characterizes its approaches is a focus on teachers' instructional practices and classroom management
strategies rather than on curriculum or technology. With two exceptions, studies of instructional process
strategies hold curriculum constant. Instructional process programs introduce variations in within-class grouping
arrangements (as in cooperative learning or tutoring) and in the amounts and uses of instructional time. Their
theories of action emphasize enhancing teachers' abilities to motivate children, to engage their thinking
processes, to improve classroom management, and to accommodate instruction to students' needs. Their
hallmark is extensive professional development, usually incorporating follow-up and continuing interactions
among the teachers themselves.
The three approaches to mathematics reform can be summarized as follows:
* change the curriculum,
* supplement the curriculum with CAI, or
* change classroom practices.
The categorization of programs in this review relates to a long-standing debate in research on technology by
Kozma (1994) and Clark (2001). Clark argued that research on technology must hold curriculum constant in
order to identify the unique contributions of the technology. Kozma replied that technology and curriculum were
so intertwined that it was not meaningful to separate them in analysis. As a practical matter, content, media, and
instructional processes are treated in different ways in the research discussed here. The mathematics curricula
vary textbooks but otherwise do not make important changes in media or instructional methods. The CAI
studies invariably consider technology and curricula together; none do as Clark suggested. Most of the
instructional process studies vary only the teaching methods and professional development, holding curriculum
constant, but a few (Team-Assisted Individualization [TAI] Math, Project CHILD, Direct Instruction, and Project
SEED) combine curricula, processes, and (in the case of Project CHILD) media.
The categorization of the programs was intended to facilitate understanding and contribute to theory, not to
restrict the review. No studies were excluded due to lack of fit with one of the categories. This review examines
research on dozens of individual programs to shed light on the broad types of mathematics reforms most likely
to enhance the mathematics achievement of elementary school children.
Review Method
This article reviews studies of elementary mathematics programs in an attempt to apply consistent, well-justified
standards of evidence to draw conclusions about effective elementary mathematics programs. The review
applies a technique called "best-evidence synthesis" (Slavin, 1986, 2007), which seeks to apply consistent,
well-justified standards to identify unbiased, meaningful quantitative information from experimental studies, and
then discusses each qualifying study, computing effect sizes but also describing the context, design, and
findings of each. Best-evidence synthesis closely resembles meta-analysis (Cooper, 1998; Lipsey & Wilson,
2001), but it requires more extensive discussion of key studies instead of primarily pooling results across many
studies. In reviewing educational programs, this distinction is particularly important, as there are typically few
studies of any particular program, so understanding the nature and quality of the contribution made by each
study is essential. The review procedures, described below, are similar to those applied by the What Works
Clearinghouse (2005, 2006). (See Slavin, 2007, for a detailed description and justification for the review
procedures used here and in syntheses by Cheung & Slavin, 2005, and by Slavin, Lake, & Groff, 2007.)
The purpose of this review is to examine the quantitative evidence on elementary mathematics programs to
discover how much of a scientific basis there is for competing claims about the effects of various programs. Our
intention is to inform practitioners, policy makers, and researchers about the current state of the evidence on
this topic and to identify gaps in the knowledge base that are in need of further scientific investigation.
Limitations of the Review
This article is a quantitative synthesis of achievement outcomes of alternative mathematics approaches. It does
not report on qualitative or descriptive evidence, attitudes, or other nonachievement outcomes. These are
excluded not because they are unimportant but because space limitations do not allow for a full treatment of all
of the information available on each program. Each report cited, and many that were not included (listed in
Appendix A), contain much valuable information, such as descriptions of settings, nonquantitative and
nonachievement outcomes, and the story of what happened in each study. The present article extracts from
these rich sources just the information on experimental control differences on quantitative achievement
measures in order to contribute to an understanding of the likely achievement effects of using each of the
programs discussed. Studies are included or excluded and are referred to as being high or low in quality solely
based on their contributions to an unbiased, well-justified quantitative estimate of the strength of the evidence
supporting each program. For a deeper understanding of all of the findings of each study, please see the
original reports.
Literature Search Procedures
A broad literature search was carried out in an attempt to locate every study that could possibly meet the
inclusion requirements. This included obtaining all of the elementary studies cited by the NRC (2004) and by
other reviews of mathematics programs, including technology programs that teach math (e.g., Chambers, 2003;
Kulik, 2003; R. Murphy et al., 2002). Electronic searches were made of educational databases (JSTOR, ERIC,
EBSCO, PsycINFO, and Dissertation Abstracts), Web-based repositories (Google, Yahoo, Google Scholar),
and math education publishers' Web sites. Citations of studies appearing in the studies found in the first wave
were also followed up.
Effect Sizes
In general, effect sizes were computed as the difference between experimental and control individual student
posttests, after adjustment for pretests and other covariates, divided by the unadjusted control group standard
deviation. If the control group standard deviation was not available, a pooled standard deviation was used.
Procedures described by Lipsey and Wilson (2001) and by Sedlmeier and Gigerenzer (1989) were used to
estimate effect sizes when unadjusted standard deviations were not available, as when the only standard
deviation presented was already adjusted for covariates or when only gain score standard deviations were
available. School- or classroom-level standard deviations were adjusted to approximate individual-level
standard deviations, as aggregated standard deviations tend to be much smaller than individual standard
deviations. If pretest and posttest means and standard deviations were presented but adjusted means were not,
effect sizes for pretests were subtracted from effect sizes for posttests.
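To make the computation rule explicit, here is a minimal sketch of the primary effect-size formula described above (adjusted posttest difference divided by the unadjusted control-group standard deviation, with a pooled-SD fallback and the pretest correction); the numbers are hypothetical, and this is an illustration rather than the review's actual code.

import math

def pooled_sd(sd_e, n_e, sd_c, n_c):
    """Pooled SD across experimental and control groups (fallback when the
    unadjusted control-group SD is not reported)."""
    return math.sqrt(((n_e - 1) * sd_e**2 + (n_c - 1) * sd_c**2) / (n_e + n_c - 2))

def effect_size(exp_mean, ctl_mean, sd):
    """Mean difference between experimental and control groups divided by
    the chosen standard deviation."""
    return (exp_mean - ctl_mean) / sd

# Hypothetical study: covariate-adjusted posttest means, unadjusted control SD.
es_post = effect_size(58.0, 52.0, sd=12.0)    # 0.50

# Fallback when the control SD is unavailable: pool across groups.
sd_fallback = pooled_sd(11.0, 120, 12.0, 115)

# If only unadjusted pre/post means are available, subtract the pretest ES.
es_pre = effect_size(50.0, 49.0, sd=11.5)     # ~0.09
print(f"ES = {es_post - es_pre:.2f}")         # ~0.41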
Criteria for Inclusion
Criteria for inclusion of studies in this review were as follows:
1. The studies involved elementary (K-5) children, plus sixth graders if they were in elementary schools.
2. The studies compared children taught in classes using a given mathematics program to those in control
classes using an alternative program or standard methods.
3. Studies could have taken place in any country, but the report had to be available in English.
4. Random assignment or matching with appropriate adjustments for any pretest differences (e.g., ANCOVA)
had to be used. Studies without control groups, such as pre-post comparisons and comparisons to "expected"
gains, were excluded. Studies with pretest differences of more than 50% of a standard deviation were excluded,
because even with ANCOVAs, large pretest differences cannot be adequately controlled for, as underlying
distributions may be fundamentally different. (See the Methodological Issues section, later in the article, for a
discussion of randomized and matched designs.)
5. The dependent measures included quantitative measures of mathematics performance, such as standardized
mathematics measures. Experimenter-made measures were accepted if they were described as comprehensive
measures of mathematics, which would be fair to the control groups, but measures of math objectives inherent
to the program (but unlikely to be emphasized in control groups) were excluded. For example, a study of CAI by
Van Dusen and Worthen (1994) found no differences on a standardized test (effect size [ES] = +0.01) but a
substantial difference on a test made by the software developer (ES = +0.35). The software-specific measure
was excluded, as it probably focused on objectives and formats practiced in the CAI group but not in the control
group. (See the Methodological Issues section, later in the article, for a discussion of this issue.)
6. A minimum treatment duration of 12 weeks was required. This requirement is intended to focus the review on
practical programs intended for use for the whole year rather than on brief investigations. Brief studies may not
allow programs intended to be used over the whole year to show their full effect. On the other hand, brief
studies often advantage experimental groups that focus on a particular set of objectives during a limited time
period while control groups spread that topic over a longer period. For example, a 30-day experiment by
Cramer, Post, and delMas (2002) evaluated a fractions curriculum that is part of the Rational Number Project.
Control teachers using standard basals were asked to delay their fractions instruction to January to match the
exclusive focus of the experimental group on fractions, but it seems unlikely that the control classes were
equally focused on fractions, the only skill assessed.
Appendix A lists studies that were considered but excluded according to these criteria, as well as the reasons
for exclusion. Appendix B lists abbreviations used throughout this review.
Methodological Issues in Studies of Elementary Mathematics Programs
The three types of mathematics programs reviewed here, mathematics curricula, CAI programs, and
instructional process programs, suffer from different characteristic methodological problems (see Slavin, 2007).
Across most of the evaluations, lack of random assignment is a serious problem. Matched designs are used in
most studies that met the inclusion criteria, and matching leaves studies open to selection bias. That is, schools
or teachers usually choose to implement a given experimental program and are compared to schools or
teachers who did not choose the program. The fact of this self-selection means that no matter how well
experimental groups and control groups are matched on other factors, the experimental group is likely to be
more receptive to innovation, more concerned about math, have greater resources for reform, or otherwise have
advantages that cannot be controlled for statistically. Alternatively, it is possible that schools that would choose
a given program might have been dissatisfied with results in the past and might therefore be less effective than
comparable schools. Either way, matching reduces internal validity by allowing for the possibility that outcomes
are influenced by whatever (unmeasured) factors that led the school or teacher to choose the program. It affects
external validity in limiting the generalization of findings to schools or teachers who similarly chose to use the
program.
Garden-variety selection bias is bad enough in experimental design, but many of the studies suffered from
design features that add to concerns about selection bias. In particular, many of the curriculum evaluations used
a post hoc design, in which a group of schools using a given program, perhaps for many years, is compared
after the fact to schools that matched the experimental program at pretest or that matched on other variables,
such as poverty or reading measures. The problem is that only the "survivors" are included in the study.
Schools that bought the materials, received the training, but abandoned the program before the study took
place are not in the final sample, which is therefore limited to more capable schools. As one example of this,
Waite (2000), in an evaluation of Everyday Mathematics, described how 17 schools in a Texas city originally
received materials and training. Only 7 were still implementing it at the end of the year, and 6 of these agreed to
be in the evaluation. We are not told why the other schools dropped out, but it is possible that the staff members
of the remaining 6 schools may have been more capable or motivated than those at schools that dropped the
program. The comparison group in the same city was likely composed of the full range of more and less
capable school staff members, and they presumably had the same opportunity to implement Everyday
Mathematics but chose not to do so. Other post hoc studies, especially those with multiyear implementations,
must also have had some number of dropouts but typically did not report how many schools there were at first
and how many dropped out. There are many reasons schools may have dropped out, but it seems likely that
any school staff able to implement any innovative program for several years is more capable, more reform
oriented, or better led than those unable to do so or (even worse) than those that abandoned the program
because it was not working. As an analog, imagine an evaluation of a diet regimen that studied only people who
kept up the diet for a year. There are many reasons a person might abandon a diet, but chief among them is
that it is not working, so looking only at the nondropouts would bias such a study.
Worst of all, post hoc studies usually report outcome data selected from many potential experimental and
comparison groups and may therefore report on especially successful schools using the program or matched
schools that happen to have made particularly small gains, making an experimental group look better by
comparison. The fact that researchers in post hoc studies often have pretest and posttest data readily available
on hundreds of potential matches, and may deliberately or inadvertently select the schools that show the
program to best effect, means that readers must take results from after-the-fact comparisons with a grain of salt.
Finally, because post hoc studies can be very easy and inexpensive to do, and are usually contracted for by
publishers rather than supported by research grants or conducted for dissertations, such studies are likely to be
particularly subject to the "file drawer" problem. That is, post hoc studies that fail to find expected positive
effects are likely to be quietly abandoned, whereas studies supported by grants or produced for dissertations
will almost always result in a report of some kind. The file drawer problem has been extensively described in
research on meta-analyses and other quantitative syntheses (see, e.g., Cooper, 1998), and it is a problem in all
research reviews, but it is much more of a problem with post hoc studies.
Despite all of these concerns, post hoc studies were reluctantly included in this review for one reason: Without
them, there would be no evidence at all concerning most of the commercial textbook series used by the vast
majority of elementary schools. As long as the experimental and control groups were well matched at pretest on
achievement and demographic variables, and met other inclusion requirements, we decided to include them,
but readers should be very cautious in interpreting their findings. Prospective studies, in which experimental and
control groups were designated in advance and outcomes are likely to be reported whatever they turn out to be,
are always to be preferred to post hoc studies, other factors being equal.
Another methodological limitation of almost all of the studies in this review is analysis of data at the individual-
student level. The treatments are invariably implemented at the school or classroom levels, and student scores
within schools and classrooms cannot be considered independent. In clustered settings, individual-level
analysis does not introduce bias, but it does greatly overstate statistical significance, and in studies involving a
small number of schools or classes it can cause treatment effects to be confounded with school or classroom
effects. In an extreme form, a study comparing, say, one school or teacher using Program A and one using
Program B may have plenty of statistical power at the student level, but treatment effects cannot be separated
from characteristics of the schools or teachers.
Several studies did randomly assign groups of students to treatments but nevertheless analyzed at the
individual-student level. The random assignment in such studies is beneficial because it essentially eliminates
selection bias. However, analysis at the student level, rather than at the level of random assignment, still
confounds treatment effects and school or classroom effects, as noted earlier. We call such studies randomized
quasi-experiments and consider them more methodologically rigorous, all other things being equal, than
matched studies, but less so than randomized studies in which analysis is at the level of random assignment.
Some of the qualifying studies, especially of instructional process programs, were quite small, involving a
handful of schools or classes. Beyond the problem of confounding, small studies often allow the developers or
experimenters to be closely involved in implementation, creating far more faithful and high-quality
implementations than would be likely in more realistic circumstances. Unfortunately, many of the studies that
used random assignment to treatments were very small, often with just one teacher or class per treatment. Also,
the file drawer problem is heightened with small studies, which are likely to be published or otherwise reported
only if their results are positive (see Cooper, 1998).
Another methodological problem inherent to research on alternative mathematics curricula relates to outcome
measures. In a recent criticism of the What Works Clearinghouse, Schoenfeld (2006) expressed concern that
because most studies of mathematics curricula use standardized tests or state accountability tests focused
more on traditional skills than on concepts and problem solving, there is a serious risk of "false negative" errors,
which is to say that studies might miss true and meaningful effects on unmeasured outcomes characteristic of
innovative curricula (also see Confrey, 2006, for more on this point). This is indeed a serious problem, and there
is no solution to it. Measuring content taught only in the experimental group risks false positive errors, just as
use of standardized tests risks false negatives. In the present review, only outcome measures that assess
content likely to have been covered by all groups are considered; measures inherent to the treatment are
excluded. However, many curriculum studies include outcomes for subscales, such as computation, concepts
and applications, and problem solving, and these outcomes are separately reported in this review. Therefore, if
an innovative curriculum produces, for example, better outcomes on problem solving but no differences on
computation, that might be taken as an indication that it is succeeding at least in its area of emphasis.
A total of 87 studies met the inclusion criteria. Tables 1, 2, and 3 list all the qualifying studies. Within sections
on each program, studies that used random assignment (if any) are listed first, then randomized quasi-
experiments, then prospective matched studies, and finally post hoc matched studies. Within these categories,
studies with larger sample sizes are listed first.
This article discusses conclusions drawn from the qualifying studies, but study-by-study discussions are withheld
so that the article will fit within the space requirements of the Review of Educational Research. These
descriptions appear in a longer version of this article, which can be found at
http://www.bestevidence.org/_images/word_docs/Eff%20progs%20ES%20math%20Version%201.2%20for%20BEE%2002%2009%2007.doc.
Mathematics Curricula
Perhaps the most common approach to reform in mathematics involves adoption of reform-oriented textbooks,
along with appropriate professional development. Programs that have been evaluated fall into three categories.
One is programs developed under funding from the NSF that emphasize a constructivist philosophy, with a
strong emphasis on problem solving, manipulatives, and concept development and a relative de-emphasis on
algorithms. At the opposite extreme is Saxon Math, a back-to-basics curriculum that emphasizes building
students' confidence and skill in computation and word problems. Finally, there are traditional commercial
textbook programs.
The reform-oriented programs supported by NSF, especially Everyday Mathematics, have been remarkably
successful in making the transition to widespread commercial application. Sconiers, Isaacs, Higgins, McBride,
and Kelso (2003) estimated that, in 1999, 10% of all schools were using one of three programs that had been
developed under NSF funding and then commercially published. That number is surely higher as of this writing.
Yet, experimental control evaluations of these and other curricula that meet the most minimal standards of
methodological quality are very few. Only five studies of the NSF programs met the inclusion standards, and all
but one of these was a post hoc matched comparison.
This section reviews the evidence on mathematics curricula. Overall, 13 studies met the inclusion criteria, of
which only 2 used random assignment. Table 1 summarizes the methodological characteristics and outcomes
of these studies. Descriptions of each study can be seen at
http://www.bestevidence.org/_images/word_docs/Eff%20progs%20ES%20math%20Version%201.2%20for%20BEE%2002%2009%2007.doc.
With a few exceptions, the studies that compared alternative mathematics curricula are of marginal
methodological quality. Ten of the 13 qualifying studies used post hoc matched designs in which control
schools, classes, or students were matched with experimental groups after outcomes were known. Even though
such studies are likely to overstate program outcomes, the outcomes reported in these studies are modest. The
median effect size was only +0.10. The enormous ARC study (Sconiers et al., 2003) found an average effect
size of only +0.10 for the three most widely used of the NSF-supported mathematics curricula, taken together.
Riordan and Noyce (2001), in a post hoc study of Everyday Mathematics, did find substantial positive effects
(ES = +0.34) in comparison to controls for schools that had used the program for 4 to 6 years, but effects for
schools that used the program for 2 to 3 years were much smaller (ES = +0.15). This finding may suggest that
schools need to implement this program for 4 to 6 years to see a meaningful benefit, but the difference in
outcomes may just be a selection artifact, due to the fact that schools that were not succeeding may have
dropped the program before the 4th year. The evidence for the impacts of all of the curricula on standardized
tests is thin. The median effect size across five studies of the NSF-supported curricula is only +0.12, very
similar to the findings of the ARC study.
The reform-oriented math curricula may have positive effects on outcomes not assessed by standardized tests,
as suggested by Schoenfeld (2006) and Confrey (2006). However, the results on standardized and state
accountability measures do not suggest differentially strong impacts on outcomes such as problem solving or
concepts and applications that one might expect, as these are the focus of the NSF curricula and other reform
curricula.
Evidence supporting Saxon Math, the very traditional, algorithmically focused curriculum that is the polar
opposite of the NSF-supported models, was lacking. The one methodologically adequate study evaluating the
program, by Resendez and Azin (2005), found no differences on Georgia state tests between elementary
students who experienced Saxon Math and those who used other texts.
A review of research on middle and high school math programs by Slavin et al. (2007) using methods
essentially identical to those used in the present review also found very limited impacts of alternative textbooks.
Across 38 qualifying studies, the median effect size was only +0.05. The effect sizes were also +0.05 in 24
studies of NSF-supported curricula and were +0.12 in 11 studies of Saxon Math.
More research is needed on all of these programs, but the evidence to date suggests a surprising conclusion
that despite all the heated debates about the content of mathematics, there is limited high-quality evidence
supporting differential effects of different math curricula.
Computer-Assisted Instruction
A long-standing approach to improving the mathematics performance of elementary students is computer-
assisted instruction, or CAI. Over the years, CAI strategies have evolved from limited drill-and-practice
programs to sophisticated integrated learning systems that combine computerized placement and instruction.
Typically, CAI materials have been used as supplements to classroom instruction and are often used only a few
times a week. Some of the studies of CAI in math have involved only 30 minutes per week. What CAI primarily
adds is the ability to identify children's strengths and weaknesses and then give them self-instructional exercises
designed to fill in gaps. In a hierarchical subject like mathematics, especially computation, this may be of
particular importance.
A closely related strategy, computer-managed learning systems, is also reviewed in this section as a separate
subcategory.
As noted earlier, CAI is one of the few categories of elementary mathematics interventions that has been
reviewed extensively. Most recently, for example, Kulik (2003) reviewed research on the uses of CAI in reading
and math and concluded that studies supported the effectiveness of CAI for math but not for reading. R. Murphy
et al. (2002) concluded that CAI was effective in both subjects, but with much larger effects in math than in
reading. A recent large, randomized evaluation of various CAI programs found no effects on student
achievement in reading or math (Dynarski et al., 2007).
Table 2 summarizes qualifying research on several approaches to CAI in elementary mathematics. Many of
these involved earlier versions of CAI that no longer exist, but it is still useful to be aware of the earlier evidence,
as many of the highest quality studies were done in the 1980s and early 1990s. Overall, 38 studies of CAI met
the inclusion criteria, and 15 of these used randomized or randomized quasi-experimental designs. In all cases,
control groups used nontechnology approaches, such as traditional textbooks. For descriptions of each study,
see http://www.bestevidence.org/_images/word_docs/Eff%20progs%20ES%20math%20Version%201.2%20for%20BEE%2002%2009%2007.doc.
In sheer numbers of studies, CAI is the most extensively studied of all approaches to elementary math reform.
Most studies of CAI find positive effects, especially on measures of math computations. Across all studies from
which effect sizes could be computed, these effects are meaningful in light of the fact that CAI is a supplemental
approach, rarely occupying more than three 30-minute sessions weekly (and often alternating with CAI reading
instruction). The median effect size was +0.19. This is larger than the median found for the curriculum studies
(+0.10), and it is based on many more studies (38 vs. 13) and on many more randomized and randomized
quasi-experimental studies (15 vs. 2). However, it is important to note that most of these studies are quite old,
and they usually evaluated programs that are no longer commercially available.
In the Slavin et al. (2007) review of middle and high school math programs, the median effect size across 36
qualifying studies of CAI was +0.17, nearly identical to the estimate for elementary CAI studies.
Although outcomes of studies of CAI are highly variable, most studies do find positive effects, and none
significantly favored a control group. Although the largest number of studies has involved Jostens, there is not
enough high-quality evidence on particular CAI approaches to recommend any one over another, at least based
on student outcomes on standardized tests.
In studies that break down their results by subscales, outcomes are usually stronger for computation than for
concepts or problem solving. This is not surprising, as CAI is primarily used as a supplement to help children
with computation skills. Because of the hierarchical nature of math computation, CAI has a special advantage in
this area because of its ability to assess students and provide them with individualized practice on skills that
they have the prerequisites to learn but have not yet learned.
Instructional Process Strategies
Many researchers and reformers have sought to improve children's mathematics achievement by giving
teachers extensive professional development on the use of instructional process strategies, such as cooperative
learning, classroom management, and motivation strategies (see Hill, 2004). Curriculum reforms and CAI also
typically include professional development, of course, but the strategies reviewed in this section are primarily
characterized by a focus on changing what teachers do with the curriculum they have, not changing the
curriculum.
The programs in this section are highly diverse, so they are further divided into seven categories:
1. cooperative learning,
2. cooperative/individualized programs,
3. direct instruction,
4. mastery learning,
5. professional development focused on math content,
6. professional development focused on classroom management and motivation, and
7. supplemental programs.
A total of 36 studies evaluated instructional process programs. Table 3 summarizes characteristics and
outcomes of these studies. For discussions of each study, see
http://www.bestevidence.org/_images/word_docs/Eff%20progs%20ES%20math%20Version%201.2%20for%20BEE%2002%2009%2007.doc.
Research on instructional process strategies tends to be of much higher quality than research on mathematics
curricula or CAI. Out of 36 studies, 19 used randomized or randomized quasi-experimental designs. Many had
small samples and/or short durations, and in some cases there were confounds between treatments and
teachers, but even so, these are relatively high-quality studies, most of which were published in peer-reviewed
journals. The median effect size for randomized studies was +0.33, and the median was also +0.33 for all
studies taken together.
The research on instructional process strategies identified several methods with strong positive outcomes on
student achievement. In particular, the evidence supports various forms of cooperative learning. The median
effect size across 9 studies of cooperative learning was +0.29 for elementary programs (and +0.38 for middle
and high school programs; see Slavin et al., 2007). Particularly positive effects were found for Classwide Peer
Tutoring (CWPT) and Peer-Assisted Learning Strategies (PALS), which are pair learning methods, and Student Teams-Achievement Divisions (STAD) and TAI Math, which use groups of four. Project CHILD, which also uses
cooperative learning, was successfully evaluated in 1 study.
Two programs that focus on classroom management and motivation also had strong evidence of effectiveness
in large studies. These are the Missouri Mathematics Project, with three large randomized and randomized
quasi-experiments with positive outcomes, and Consistency Management & Cooperative Discipline (CMCD).
Positive effects were also seen for Dynamic Pedagogy and Cognitively Guided Instruction (CGI), which focus on
helping teachers understand math content and pedagogy. Four small studies supported direct instruction
models, Connecting Math Concepts, and User-Friendly Direct Instruction.
Programs that supplemented traditional classroom instruction also had strong positive effects. These include
small-group tutoring for struggling first graders and Project SEED, which provides a second math period
focused on high-level math concepts.
The research on these instructional process strategies suggests that the key to improving math achievement
outcomes is changing the way teachers and students interact in the classroom. It is important to be clear that
the well-supported programs are not ones that just provide generic professional development or professional
development focusing on mathematics content knowledge. What characterizes the successfully evaluated
programs in this section is a focus on how teachers use instructional process strategies, such as using time
effectively, keeping children productively engaged, giving children opportunities and incentives to help each
other learn, and motivating students to be interested in learning mathematics.
Overall Patterns of Outcomes
Across all categories, 87 studies met the inclusion criteria for this review, of which 36 used randomized or
randomized quasi-experimental designs: 13 studies (2 randomized) of mathematics curricula, 38 (11 randomized, 4 randomized quasi-experimental) of CAI, and 36 (9 randomized, 10 randomized quasi-
experimental) of instructional process programs. The median effect size was +0.22. The effect size for
randomized and randomized quasi-experimental studies (n = 36) was +0.29, and for fully randomized studies (n
= 22) it was +0.28, indicating that randomized studies generally produced effects similar to those of matched
quasi-experimental studies. Recall that the matched studies had to meet stringent methodological standards, so
the similarity between randomized and matched outcomes reinforces the observation made by Glazerman,
Levy, and Myers (2002) and Torgerson (2006) that high-quality studies with well-matched control groups
produce outcomes similar to those of randomized experiments.
Overall effect sizes differed, however, by type of program. Median effect sizes for all qualifying studies were
+0.10 for mathematics curricula, +0.19 for CAI programs, and +0.33 for instructional process programs. Effect
sizes were above the overall median (+0.22) in 15% of studies of mathematics curricula, 37% of CAI studies,
and 72% of instructional process programs. The difference in effect sizes between the instructional process and
other programs is statistically significant (χ² = 15.71, p < .001).
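The statistic just reported appears to be a chi-square test on the counts of studies above versus at or below the overall median in each program category. A sketch that reconstructs it from the percentages given above; the cell counts are inferred from those percentages (15% of 13, 37% of 38, 72% of 36), so the result is approximate rather than exact:

```python
# Approximate reconstruction of the reported chi-square test. Cell
# counts are inferred from the percentages in the text, not given there.
from scipy.stats import chi2_contingency

#          above median  at/below median
table = [[2, 11],    # mathematics curricula (13 studies, ~15% above +0.22)
         [14, 24],   # CAI programs (38 studies, ~37% above)
         [26, 10]]   # instructional process programs (36 studies, ~72% above)

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")  # chi2 ~ 15.9, close to the reported 15.71
```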
With only a few exceptions, effects were similar for disadvantaged and middle-class students and for students of
different ethnic backgrounds. Effects were also generally similar on all subscales of math tests, except that CAI
and TAI Math generally had stronger effects on measures of computation than on measures of concepts and
applications.
Summarizing Evidence of Effectiveness for Current Programs
In several recent reviews of research on outcomes of various educational programs, reviewers have
summarized program outcomes using a variety of standards. This is not as straightforward a procedure as
might be imagined, as several factors must be balanced (Slavin, 2008). These include the number of studies,
the average effect sizes, and the methodological quality of studies.
The problem is that the number of studies of any given program is likely to be small, so simply averaging effect
sizes (as in meta-analyses) is likely to overemphasize small, biased, or otherwise flawed studies with large
effect sizes. For example, in the present review, there are several very small matched studies with effect sizes
in excess of +1.00, and these outcomes cannot be allowed to overbalance large and randomized studies with
more modest effects. Emphasizing numbers of studies can similarly favor programs with many small, matched
studies, which may collectively be biased toward positive findings by file drawer effects. The difference in
findings for CAI programs between small numbers of randomized experiments and large numbers of matched
experiments shows the danger of emphasizing numbers of studies without considering quality. Finally,
emphasizing methodological factors alone risks eliminating most studies or emphasizing studies that may be
randomized but are very small, confounding teacher and treatment effects, or may be brief, artificial, or
otherwise not useful for judging the likely practical impact of a treatment.
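The concern about small outlier studies can be illustrated directly: the median, used throughout this review, barely moves when one tiny study with an effect size above +1.00 joins a set of modest findings, whereas a simple mean more than doubles. The effect sizes below are invented:

```python
# Median vs. mean under one small outlier study. Values are invented.
from statistics import mean, median

effect_sizes = [0.05, 0.10, 0.12, 0.15, 0.18]  # five modest studies
print(round(mean(effect_sizes), 3), round(median(effect_sizes), 3))  # 0.12 0.12

effect_sizes.append(1.10)  # add one very small study with ES > +1.00
print(round(mean(effect_sizes), 3), round(median(effect_sizes), 3))  # 0.283 0.135
```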
In this review, we applied a procedure for characterizing the strength of the evidence favoring each program
that attempts to balance methodological, replication, and effect size factors. Following the What Works
Clearinghouse (2006), the Comprehensive School Reform Quality Center (2005), and Borman, Hewes,
Overman, and Brown (2003), but placing a greater emphasis on methodological quality, we categorized
programs as follows (see Slavin, 2007; the criteria are sketched as decision rules after the list):
* Strong evidence of effectiveness: at least one large randomized or randomized quasi-experimental study, or
two smaller studies, with a median effect size of at least +0.20. A large study is defined as one in which at least
10 classes or schools, or 250 students, were assigned to treatments.
* Moderate evidence of effectiveness: at least one randomized or randomized quasi-experimental study, or a total of two large or four small qualifying matched studies, with a median effect size of at least +0.20.
* Limited evidence of effectiveness: at least one qualifying study of any design with a statistically significant effect size of at least +0.10.
* Insufficient evidence of effectiveness: one or more qualifying studies of any design with nonsignificant outcomes and a median effect size of less than +0.10.
* No qualifying studies.
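Read procedurally, the categories above reduce to a short set of decision rules. The sketch below encodes one reading of them; the study-record format is invented, and an ambiguity in the criteria (whether the "two smaller studies" in the Strong category must themselves be randomized) is resolved by assumption:

```python
# One reading of the rating rules above as a decision procedure.
# The study-record fields ("es", "randomized", ...) are invented.
from statistics import median

def is_large(study):
    # Large study: at least 10 classes/schools or 250 students assigned.
    return study["classes"] >= 10 or study["students"] >= 250

def rate(studies):
    if not studies:
        return "No qualifying studies"
    med = median(s["es"] for s in studies)
    randomized = [s for s in studies if s["randomized"]]
    matched = [s for s in studies if not s["randomized"]]
    # Strong: one large randomized study, or two smaller (assumed
    # randomized) studies, with a median ES of at least +0.20.
    if med >= 0.20 and (any(is_large(s) for s in randomized) or len(randomized) >= 2):
        return "Strong evidence of effectiveness"
    # Moderate: any randomized study, or matched studies adding up to
    # two large / four small equivalents, with median ES >= +0.20.
    if med >= 0.20 and (randomized or sum(2 if is_large(s) else 1 for s in matched) >= 4):
        return "Moderate evidence of effectiveness"
    # Limited: any qualifying study with a significant ES >= +0.10.
    if any(s["significant"] and s["es"] >= 0.10 for s in studies):
        return "Limited evidence of effectiveness"
    return "Insufficient evidence of effectiveness"

example = [{"es": 0.25, "randomized": True, "classes": 12,
            "students": 300, "significant": True}]
print(rate(example))  # -> Strong evidence of effectiveness
```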
Table 4 summarizes currently available programs falling into each of these categories (programs are listed in
alphabetical order within each category). Note that programs that are not currently available, primarily the older
CAI programs, do not appear in the table, as it is intended to represent the range of options from which today's
educators might choose.
In line with the previous discussions, the programs represented in each category are strikingly different. In the
Strong Evidence category appear five instructional process programs, four of which are cooperative learning
programs: Classwide Peer Tutoring, PALS, STAD, and TAI Math. The fifth program is a classroom management
and motivation model, the Missouri Mathematics Project.
The Moderate Evidence category is also dominated by instructional process programs, including two supplemental approaches (small-group tutoring and Project SEED), as well as CGI, which focuses on training teachers in mathematical concepts, and CMCD, which focuses on school and classroom management and
motivation. Connecting Math Concepts, an instructional process program tied to a specific curriculum, also
appears in this category. The only current CAI program with this level of evidence is Classworks.
The Limited Evidence category includes five math curricula: Everyday Mathematics, Excel Math, Growing With
Mathematics, Houghton Mifflin Mathematics, and Knowing Mathematics. Dynamic Pedagogy, Project CHILD,
and Mastery Learning - instructional process programs - are also in this category, along with Lightspan and
Accelerated Math. Four programs, listed under the Insufficient Evidence category, had only one or two studies,
which failed to find significant differences. The final category, No Qualifying Studies, lists 48 programs.
Discussion
The research reviewed in this article evaluates a broad range of strategies for improving mathematics
achievement. Across all topics, the most important conclusion is that there are fewer high-quality studies than
one would wish for. Although a total of 87 studies across all programs qualified for inclusion, there were small
numbers of studies on each particular program. There were 36 studies that randomly assigned schools, teachers, or students to treatments, 19 of which involved instructional process strategies, but many of these tended to be small and therefore to confound treatment effects with school and teacher effects. There were
several large-scale, multiyear studies, especially of mathematics curricula, but these tended to be post hoc
matched quasi-experiments, which can introduce serious selection bias. Clearly, more randomized evaluations
of programs used on a significant scale over a year or more are needed.
This being said, there were several interesting patterns in the research on elementary mathematics programs.
One surprising observation is the lack of evidence that it matters very much which textbook schools choose
(median ES = +0.10 across 13 studies). Quality research is particularly lacking in this area, but the mostly
matched post hoc studies that do exist find modest differences between programs. NSF-funded curricula such
as Everyday Mathematics, Investigations, and Math Trailblazers might have been expected to at least show
significant evidence of effectiveness for outcomes such as problem solving or concepts and applications, but
the quasi-experimental studies that qualified for this review find little evidence of strong effects even in these
areas. The large national study of these programs by Sconiers et al. (2003) found effect sizes of only +0.10 for
all outcomes, and the median effect size for 5 studies of NSF-funded programs was +0.12.
It is possible that the state assessments used in the Sconiers et al. (2003) study and other studies may have
failed to detect some of the more sophisticated skills taught in NSF-funded programs but not other programs, a
concern expressed by Schoenfeld (2006) in his criticism of the What Works Clearinghouse. However, in light of
the small effects seen on outcomes such as problem solving, probability and statistics, geometry, and algebra, it
seems unlikely that misalignment between the NSF-sponsored curricula and the state tests accounts for the
modest outcomes.
Studies of CAI found a median effect size (ES = +0.19) higher than that found for mathematics curricula, and
there were many more high-quality studies of CAI. A number of studies showed substantial positive effects of
using CAI strategies, especially for computation, across many types of programs. However, the highest quality
studies, including the few randomized experiments, mostly found no significant differences.
CAI effects in math, although modest in median effect size, are important in light of the fact that in most studies
CAI was used for only about 30 minutes three times a week or less. The conclusion that CAI is effective in math
is in accord with the findings of a recent review of research on technology applications by Kulik (2003), who
found positive effects of CAI in math but not in reading.
The most striking conclusion from the review, however, is the evidence supporting various instructional process
strategies. Twenty randomized experiments and randomized quasi-experiments found impressive effects
(median ES = +0.33) for programs that target teachers' instructional behaviors rather than math content alone.
Several categories of programs were particularly supported by high-quality research. Cooperative learning
methods, in which students work in pairs or small teams and are rewarded based on the learning of all team
members, were found to be effective in 9 well-designed studies, 8 of which used random assignment, with a
median effect size of +0.29. These included studies of Classwide Peer Tutoring, PALS, and STAD. Team
Accelerated Instruction, which combines cooperative learning and individualization, also had strong evidence of
effectiveness. Another well-supported approach included programs that focus on improving teachers' skills in
classroom management, motivation, and effective use of time, in particular the Missouri Mathematics Project
and CMCD. Studies supported programs focusing on helping teachers introduce mathematics concepts
effectively, such as CGI, Dynamic Pedagogy, and Connecting Math Concepts.
Supplementing classroom instruction with well-targeted supplementary instruction is another strategy with
strong evidence of effectiveness. In particular, small-group tutoring for first graders struggling in math and
Project SEED, which provides an additional period of instruction from professional mathematicians, have strong
evidence.
The debate about mathematics reform has focused primarily on curriculum, not on professional development or
instruction (see, e.g., American Association for the Advancement of Science, 2000; NRC, 2004). Yet this review
suggests that in terms of outcomes on traditional measures, such as standardized tests and state accountability
assessments, curriculum differences appear to be less consequential than instructional differences are. This is
not to say that curriculum is unimportant. There is no point in teaching the wrong mathematics. The research on
the NSF-supported curricula is at least comforting in showing that reform-oriented curricula are no less effective
than traditional curricula on traditional measures, and they may be somewhat more effective, so their
contribution to nontraditional outcomes does not detract from traditional ones. The movement led by the
National Council of Teachers of Mathematics to focus math instruction more on problem solving and concepts
may account for the gains over time on NAEP, which itself focuses substantially on these domains.
Also, it is important to note that the three types of approaches to mathematics instruction reviewed here do not
conflict with each other and may have additive effects if used together. For example, schools might use an NSF-
supported curriculum such as Everyday Mathematics with well-structured cooperative learning and
supplemental CAI, and the effects may be greater than those of any of these programs by themselves.
However, the findings of this review suggest that educators and researchers might do well to focus more on
how mathematics is taught, rather than expecting that choosing one or another textbook by itself will move their
students forward.
As noted earlier, the most important problem in mathematics education is the gap in performance between
middle- and lower-class students and between White and Asian American students and African American,
Hispanic, and Native American students. The studies summarized in this review took place in widely diverse
settings, and several of them reported outcomes separately for various subgroups. Overall, there is no clear
pattern of differential effects for students of different social class or ethnic background. Programs found to be
effective with any subgroup tend to be effective with all groups. Rather than expecting to find programs with
different effects on students in the same schools and classrooms, the information on effective mathematics
programs might better be used to address the achievement gap by providing research-proven programs to
schools serving many disadvantaged and minority students. Federal Reading First and Comprehensive School
Reform programs were intended to provide special funding to help high-poverty, low-achieving schools adopt
proven programs. A similar strategy in mathematics could help schools with many students struggling in math to
implement innovative programs with strong evidence of effectiveness, as long as the schools agree to
participate in the full professional development process used in successful studies and to implement all aspects
of the program with quality and integrity.
The mathematics performance of America's students does not justify complacency. In particular, schools
serving many students at risk need more effective programs. This article provides a starting place in
determining which programs have the strongest evidence bases today. Hopefully, higher quality evaluations of a
broader range of programs will appear in the coming years. What is important is that we use what we know now
at the same time that we work to improve our knowledge base in the future so that our children receive the most
effective mathematics instruction we can give them.
References
Abram, S. L. (1984). The effect of computer assisted instruction on first grade phonics and mathematics
achievement computation. Unpublished doctoral dissertation, Northern Arizona University.
Ai, X. (2002). District Mathematics Plan evaluation: 2001-2002 evaluation report. Los Angeles: Program
Evaluation and Research Branch, Los Angeles Unified School District.
Al-Halal, A. (2001). The effects of individualistic learning and cooperative learning strategies on elementary
students' mathematics achievement and use of social skills. Unpublished doctoral dissertation, Ohio University.
Alifrangis, C. M. (1991). An integrated learning system in an elementary school: Implementation, attitudes, and
results. Journal of Computing in Childhood Education, 2(3), 51-66.
Allinder, R. M., &Oats, R. G. (1997). Effects of acceptability on teachers' implementation of curriculum-based
measurement and student achievement in mathematics computation. Remedial and Special Education, 18(2),
113-120.
American Association for the Advancement of Science. (2000). Middle school mathematics textbooks: A
benchmark-based evaluation. Washington, DC: Author.
Anderson, L., Scott, C, &Hutlock, N. (1976, April). The effects of a mastery learning program on selected
cognitive, affective and ecological variables in Grades 1 through 6. Paper presented at the annual meeting of
the American Educational Research Association, San Francisco, CA.
Anelli, C. (1977). Computer-assisted instruction and reading achievement of urban third and fourth graders.
Unpublished doctoral dissertation, Rutgers University.
Armour-Thomas, E., Walker, E., Dixon-Roman, E. J., Mejia, B., &Gordon, E. J. (2006, April). An evaluation of
dynamic pedagogy on third-grade math achievement of ethnic minority children. Paper presented at the annual
meeting of the American Educational Research Association, San Francisco, CA.
Atkeison-Cherry, N. K. (2004). A comparative study of mathematics learning of thirdgrade children using a
scripted curriculum and third-grade children using a nonscripted curriculum. Unpublished doctoral dissertation,
Union University.
Atkins, J. (2005). The association between the use of Accelerated Math and students' math achievement.
Unpublished doctoral dissertation, East Tennessee State University.
Austin Independent School District. (2001). Austin collaborative for mathematics education, 1999-2000
evaluation. Unpublished manuscript.
Axelrod, S., McGregor, G., Sherman, J., &Hamlet, C. (1987). Effects of video games as reinforcers for
computerized addition performance. Journal of Special Education Technology, 9(1), 1-8.
Baker, S., Gersten, R., & Lee, D. (2002). A synthesis of empirical research on teaching mathematics to low-achieving students. Elementary School Journal, 103(1), 51-69.
Bass, G., Ries, R., &Sharpe, W. (1986). Teaching basic skills though microcomputer assisted instruction.
Journal of Educational Computing Research, 2(2), 207-219.
Beck Evaluation &Testing Associates. (2006). Progress in mathematics 2006: Grade 1 pre-post field test
evaluation. Pleasantville, NY: Author.
Becker, H. J. (1992). Computer-based integrated learning systems in the elementary and middle grades: A
critical review and synthesis of evaluation reports. Journal of Educational Computing Research, 8(1), 1-41.
Becker, H. J. (1994). Mindless or mindful use of integrating learning systems. International Journal of
Educational Research, 21(1), 65-79.
Becker, W. C, &Gersten, R. (1982). A follow-up of Follow Through: The later effects of the direct instruction
model on children in the fifth and sixth grades. American Educational Research Journal, 19(1), 75-92.
Bedell, J. P. (1998). Effects of reading and mathematics software formats on elementary students'
achievement. Unpublished doctoral dissertation, University of Miami.
Beirne-Smith, M. (1991). Peer tutoring in arithmetic for children with learning disabilities. Exceptional Children,
57, 330-337.
Bereiter, C, &Kurland, M. (1981-1982). A constructive look at Follow-Through results. Interchange, 12, 1-22.
Birch, J. H. (2002). The effects of the Delaware Challenge Grant Program on the standardized reading and
mathematics test scores of second and third grade students in the Caesar Rodney School District. Unpublished
doctoral dissertation, Wilmington College.
Biscoe, B., &Harris, B. (2004). Growing With Mathematics: National evaluation study summary report. DeSoto,
TX: Wright Group.
Bolser, S., & Gilman, D. (2003). Saxon Math, Southeast Fountain Elementary School: Effective or ineffective?
(ERIC Document Reproduction Service No. ED474537).
Borman, G. D., Hewes, G. M., Overman, L. T., &Brown, S. (2003). Comprehensive school reform and
achievement: A meta-analysis. Review of Educational Research, 73(2), 125-230.
Borton, W. M. (1988). The effects of computer managed mastery learning on mathematics test scores in the
elementary school. Journal of Computer-Based Instruction, 75(3), 95-98.
Bosfield, G. (2004). A comparison of traditional mathematical learning and cooperative mathematical learning.
Unpublished master's thesis, California State University, Dominguez Hills.
Boys, C.J. (2003). Mastery orientation through task-focused goals: Effects on achievement and motivation.
Unpublished master's thesis, University of Minnesota.
Brandt, W. C., &Hutchinson, C. (2005). Romulus Community Schools comprehensive school reform evaluation.
Naperville, IL: Learning Point Associates.
Brem, S. K. (2003). AM users outperform controls when exposure and quality of interaction are high: A two-year
study of the effects of Accelerated Math and Math Renaissance performance in a Title I elementary school
[tech. rep.] . Tempe: Arizona State University.
Brent, G., &DiObilda, N. (1993). Effects of curriculum alignment versus direct instruction on urban children.
Journal of Educational Research, 86(6), 333-338.
Briars, D., &Resnick, L. (2000). Standards, assessments - and what else? The essential elements of standards-
based school improvement. Los Angeles: National Center for Research on Evaluation, Standards, and Student
Testing, UCLA.
Briars, D. J. (2004, July). The Pittsburgh story: Successes and challenges in implementing standards-based
mathematics programs. Paper presented at the meeting of the UCSMP Everyday Mathematics Leadership
Institute, Lisle, IL.
Brown, F., &Boshamer, C. C. (2000). Using computer assisted instruction to teach mathematics: A study.
Journal of the National Alliance of Black School Educators, 4(1), 62-71.
Brush, T. (1997). The effects on student achievement and attitudes when using integrated learning systems with
cooperative pairs. Educational Research and Development, 45(1), 51-64.
Bryant, R. R. (1981). Effects of team-assisted individualization on the attitudes and achievement of third, fourth,
and fifth grade students of mathematics. Unpublished doctoral dissertation, University of Maryland.
Burke, A. J. (1980). Students' potential for learning contrasted under tutorial and group approaches to
instruction. Unpublished doctoral dissertation, University of Chicago.
Burkhouse, B., Loftus, M., Sadowski, B., &Buzad, K. (2003). Thinking Mathematics as professional
development: Teacher perceptions and student achievement. Scranton, PA: Marywood University.
(ERIC Document Reproduction Service No. ED482911)
Burton, S. K. (2005). The effects of ability grouping on academic gains of rural elementary school students.
Unpublished doctoral dissertation, Union University.
Butzin, S. M. (2001). Using instructional technology in transformed learning environments: An evaluation of
Project CHILD. Journal of Research on Computing in Education, 33, 367-373.
Cabezon, E. (1984). The effects of marked changes in student achievement pattern on the students, their
teachers, and their parents: The Chilean case. Unpublished doctoral dissertation, University of Chicago.
Calvery, R., Bell, D., &Wheeler, G. (1993, November). A comparison of selected second and third graders' math
achievement: Saxon vs. Holt. Paper presented at the Annual Meeting of the Mid-South Educational Research
Association, New Orleans, LA.
Campbell, P. F., Rowan, T. E., &Cheng, Y. (1995, April). Project IMPACT: Mathematics achievement in
predominantly minority elementary classrooms attempting reform. Paper presented at the Annual Meeting of the
American Educational Research Association, San Francisco, CA.
Cardelle-Elawar, M. (1990). Effects of feedback tailored to bilingual students' mathematics needs on verbal
problem solving. Elementary School Journal, 91, 165-175.
Cardelle-Elawar, M. (1992). Effects of teaching metacognitive skills to students with low mathematics ability.
Teaching and Teacher Education, 8, 109-121.
Cardelle-Elawar, M. (1995). Effects of metacognitive instruction on low achievers in mathematics problems.
Teaching and Teacher Education, 11, 81-95.
Carpenter, T. P., Fennema, E., Peterson, P. L., Chiang, C. P., &Loef, M. (1989). Using knowledge of children's
mathematics thinking in classroom teaching: An experimental study. American Education Research Journal, 26,
499-531.
Carrier, C, Post, T., &Heck, W. (1985). Using microcomputers with fourth-grade students to reinforce arithmetic
skills. Journal for Research in Mathematics Education, 16(l), 45-51.
Carroll, W. (1995). Report on the field test of fifth grade Everyday Mathematics. Chicago: University of Chicago.
Carroll, W. (1998). Geometric knowledge of middle school students in a reform-based mathematics curriculum.
School Science and Mathematics, 98, 188-197.
Carroll, W. M. (1993). Mathematical knowledge of kindergarten and first-grade students in Everyday
Mathematics [UCSMP report]. Unpublished manuscript.
Carroll, W. M. (1994-1995). Third grade Everyday Mathematics students' performance on the 1993 and 1994
Illinois state mathematics test. Unpublished manuscript.
Carroll, W. M. (1996a). A follow-up to the fifth-grade field test of Everyday Mathematics: Geometry, and mental
and written computation. Chicago: University of Chicago School Mathematics Project.
Carroll, W. M. (1996b). Use of invented algorithms by second graders in a reform mathematics curriculum.
Journal of Mathematical Behavior, 15, 137-150.
Carroll, W. M. (1996c). Mental computation of students in a reform-based mathematics curriculum. School
Science and Mathematics, 96, 305-311.
Carroll, W. M. (1997). Mental and written computation: Abilities of students in a reform-based mathematics
curriculum. Mathematics Educator, 2(1), 18-32.
Carroll, W. M. (2000). Invented computational procedures of students in a standardsbased curriculum. Journal
of Mathematical Behavior, 18, 111-121.
Carroll, W. M. (2001a). A longitudinal study of children in the Everyday Mathematics curriculum. Unpublished
manuscript.
Carroll, W. M. (2001b). Students in a standards-based curriculum: Performance on the 1999 Illinois State
Achievement Test. Illinois Mathematics Teacher, 52(1), 3-7.
Carroll, W. M., &Fuson, K. C. (1998). Multidigit computation skills of second and third graders in Everyday
Mathematics: A follow-up to the longitudinal study. Unpublished manuscript.
Carroll, W. M., &Isaacs, A. (2003). Achievement of students using the University of Chicago School
Mathematics Project's Everyday Mathematics. In S. L. Senk &D. R. Thompson (Eds.), Standards-based school
mathematics curricula: What are they? What do students learn? (pp. 79-108). Mahwah, NJ: Lawrence Erlbaum.
Carroll, W. M., & Porter, D. (1994). A field test of fourth grade Everyday Mathematics: Summary report. Chicago:
University of Chicago School Mathematics Project, Elementary Component.
Carter, A., Beissinger, J., Cirulis, ?., Gartzman, M., Kelso, C, &Wagreich, P. (2003). Student learning and
achievement with math. In S. L. Senk &D. R. Thompson (Eds.), Standards-based school mathematics curricula:
What are they? What do students learn? (pp. 45-78). Mahwah, NJ: Lawrence Erlbaum.
Chambers, E. A. (2003). Efficacy of educational technology in elementary and secondary classrooms: A meta-
analysis of the research literature from 1992-2002. Unpublished doctoral dissertation, Southern Illinois
University at Carbondale.
Chan, L., Cole, P., &Cahill, R. (1988). Effect of cognitive entry behavior, mastery level, and information about
criterion on third graders' mastery of number concepts. Journal for Research in Mathematics Education, 19,
439-448.
Chang, K. E., Sung, Y. T., &Lin, S. F. (2006). Computer-assisted learning for mathematical problem solving.
Computers and Education, 46(2), 140-151.
Chase, A., Johnston, K., Delameter, B., Moore, D., &Golding, J. (2000). Evaluation of the Math Steps
curriculum: Final report. Boston: Houghton Mifflin.
Cheung, A., &Slavin, R. (2005). A synthesis of research on language of reading instruction for English language
learners. Review of Education Research, 75, 247-284.
Chiang, A. (1978). Demonstration of the use of computer-assisted instruction with handicapped children: Final report (Rep. No. 446-AM-66076A). Arlington, VA: RMC Research Corp. (ERIC Document Reproduction Service No. ED116913)
Clariana, R. B. (1994). The effects of an integrated learning system on third graders' mathematics and reading
achievement. San Diego, CA: Jostens Learning Corporation. (ERIC Document Reproduction Service No.
ED409181)
Clariana, R. B. (1996). Differential achievement gains for mathematics computation, concepts, and applications
with an integrated learning system. Journal of Computers in Mathematics and Science, 15(3), 203-215.
Clark, R. E. (Ed.). (2001). Learning from media: Arguments, analysis, and evidence. Greenwich, CT:
Information Age.
Clarke, B., &Shinn, M. (2004). A preliminary investigation into the identification and development of early
mathematics curriculum-based measurement. School Psychology Review, 33, 234-248.
Cobb, P., Wood, T., Yackel, E., Nicholls, J., Wheatley, G., Trigatti, B., et al. (1991). Assessment of a problem-
centered second-grade mathematics project. Journal for Research in Mathematics Education, 22(1), 3-29.
Cognition and Technology Group at Vanderbilt. (1992). The Jasper series as an example of anchored
instruction: Theory, program description, and assessment data. Educational Psychologist, 27(3), 291-315.
Comprehensive School Reform Quality Center. (2005). CSRQ Center Report on Elementary School
Comprehensive School Reform Models. Washington, DC: American Institutes for Research.
Confrey, J. (2006). Comparing and contrasting the National Research Council report on evaluating curricular
effectiveness with the What Works Clearinghouse approach. Educational Evaluation and Policy Analysis, 28(3),
195-213.
Cooper, H. (1998). Synthesizing research (3rd ed.). Thousand Oaks, CA: Sage.
Cooperative Mathematics Project. (1996). Findings from the evaluation of the Number Power Mathematics
Program: Final report to the National Science Foundation. Oakland, CA: Developmental Studies Center.
Cox, B. F. (1986). Instructional management system: How it affects achievement, selfconcept, and attitudes
toward math of fifth-grade students. Unpublished doctoral dissertation, Oklahoma State University.
Craig, J., &Cairo, L. (2005). Assessing the relationship between questioning and understanding to improve
learning and thinking (QUILT) and student achievement in mathematics: A pilot study. Charleston, WV:
Appalachia Educational Lab.
Cramer, K. A., Post, T. R., &delMas, R. C. (2002). Initial fraction learning by fourth and fifth grade students: A
comparison of the effects of using commercial curricula with the effects of using the Rational Number Project
curriculum. Journal for Research in Mathematics Education, 33(2), 111-144.
Crawford, D. B., &Snider, V. E. (2000). Effective mathematics instruction: The importance of curriculum.
Education and Treatment of Children, 23(2), 122-142.
Crenshaw, H. M. (1982). A comparison of two computer assisted instruction management systems in remedial
math. Unpublished doctoral dissertation, Texas A&M University.
Dahn, V. L. (1992). The effect of integrated learning systems on mathematics and reading achievement and
student attitudes in selected Salt Lake City, Utah, elementary schools. Unpublished doctoral dissertation,
Brigham Young University.
De Russe, J. T. (1999). The effect of staff development training and the use of cooperative learning methods on
the academic achievement of third through sixth grade MexicanAmerican students. Unpublished doctoral
dissertation, Texas A&M University.
Dev, P. C, Doyle, B. A., &Valenta, B. (2002). Labels needn't stick: "At-risk" first graders rescued with
appropriate intervention. Journal of Education for Students Placed at Risk, 7(3), 327-332.
Dilworth, R., &Warren, L. (1980). An independent investigation of Real Math: The field-testing and learner-
verification studies. San Diego, CA: Center for the Improvement of Mathematics Education.
Dobbins, E. R. (1993). Math computer assisted instruction with remedial students and students with mild
learning/behavior disabilities. Unpublished doctoral dissertation, University of Alabama.
Donnelly, L. (2004, April). Year two results: Evaluation of the implementation and effectiveness of
SuccessMaker during 2002-2003. Paper presented at the annual meeting of the American Educational
Research Association, San Diego, California.
Drueck, J. V., Fuson, K. C, Carroll, W. M., &Bell, M. S. (1995, April). Performance of U.S. first graders in a
reform math curriculum compared to Japanese, Chinese, and traditionally taught U.S. students. Paper
presented at the annual meeting of the American Education Research Association, San Francisco, CA.
DuPaul, G. J., Ervin, R. A., Hook, C. L., &McGoey, K. E. (1998). Peer tutoring for children with attention deficit
hyperactivity disorder: Effects on classroom behavior and academic performance. Journal of Applied Behavior
Analysis, 31, 579-592.
Dynarski, M., Agodini, R., Heaviside, S., Novak, T., Carey, N., Campuzano, L., et al. (2007). Effectiveness of
reading and mathematics software products: Findings from the first student cohort. Washington, DC: Institute of
Education Sciences.
Earnheart, M. (1989). A study of the effectiveness of mastery learning in a low socioeconomic setting.
Unpublished doctoral dissertation, University of Mississippi.
Ebmeier, H., &Good, T. (1979). The effects of instructing teachers about good teaching on the mathematics
achievement of fourth grade students. American Educational Research Journal, 16(1), 1-16.
EDSTAR. (2004). Large-scale evaluation of student achievement in districts using Houghton Mifflin
Mathematics: Phase two. Raleigh, NC: Author.
Ellington, A. J. (2003). A meta-analysis of the effects of calculators on students' achievement and attitude levels
in precollege mathematics classes. Journal for Research in Mathematics Education, 34(5), 433-463.
Emihovich, C, &Miller, G. (1988). Effects of Logo and CAI on Black first graders' achievement, reflectivity, and
self-esteem. Elementary School Journal, 88(5), 472-487.
Estep, S. G., Mclnerney, W. D., Vockell, E., &Kosmoski, G. (1999-2000). An investigation of the relationship
between integrated learning systems and academic achievement. Journal of Educational Technology Systems,
28(1), 5-19.
Fahsl, A. J. (2001). An investigation of the effects of exposure to Saxon math textbooks, socioeconomic status
and gender on math achievement scores. Unpublished doctoral dissertation, Oklahoma State University.
Fantuzzo, J. W., Davis, G. Y., &Ginsburg, M. D. (1995). Effects of parent involvement in isolation or in
combination with peer tutoring on student self-concept and mathematics achievement. Journal of Educational
Psychology, 87(2), 272-281.
Fantuzzo, J. W., King, J. A., &Heller, L. R. (1992). Effects of reciprocal peer tutoring on mathematics and school
adjustment: A component analysis. Journal of Educational Psychology, 84(3), 331-339.
Fantuzzo, J. W., Polite, K., &Grayson, N. (1990). An evaluation of reciprocal peer tutoring across elementary
school settings. Journal of School Psychology, 28, 309-323.
Faykus, S. P. (1993). Impact of an integrated learning system: Analysis of student achievement through
traditional and curriculum-based procedures. Unpublished doctoral dissertation, Arizona State University.
Fennema, E., Carpenter, T. P., Franke, M. L., Levi, L., Jacobs, V. R., &Empson, S. B. (1996). A longitudinal
study of learning to use children's thinking in mathematics instruction. Journal for Research in Mathematics
Education, 27(4), 403-434.
Fischer, F. (1990). A part-part-whole curriculum for teaching number to kindergarten. Journal for Research in
Mathematics Education, 21, 207-215.
Fletcher, J. D., Hawley, D. E., &Piele, P. K. (1990). Costs, effects, and utility of microcomputer-assisted
instruction. American Educational Research Journal, 27(4), 783-806.
Florida Tax Watch. (2005). Florida TaxWatch's comparative evaluation of Project CHILD: Phase IV, 2005.
Retrieved from http://www.floridataxwatch.org/projchild/projchild4.html
Flowers, J. (1998). A study of proportional reasoning as it relates to the development of multiplication concepts.
Unpublished doctoral dissertation, University of Michigan.
Foley, M. M. (1994). A comparison of computer-assisted instruction with teacher-managed instructional
practices. Unpublished doctoral dissertation, Columbia University Teachers College.
Follmer, R. (2001). Reading, mathematics, and problem solving: The effects of direct instruction in the
development of fourth grade students' strategic reading and problem-solving approaches to text-based, non-
routine mathematics problems. Unpublished doctoral dissertation, Widener University.
Freiberg, H. J., Connell, M. L., &Lorentz, J. (2001). Effects of consistency management on student mathematics
achievement in seven Chapter 1 elementary schools. Journal of Education for Students Placed at Risk, 6(3),
249-270.
Freiberg, H. J., Huzinec, C, &Borders, K. (2006). Classroom management and mathematics student
achievement: A study of fourteen inner-city elementary schools. Unpublished manuscript.
Freiberg, H. J., Prokosch, N., Treiser, E. S., & Stein, T. (1990). Turning around five at-risk elementary schools. School Effectiveness and School Improvement, 1(1), 5-25.
Freiberg, H. J., Stein, T. A., &Huang, S. (1995). Effects of classroom management intervention on student
achievement in inner-city elementary schools. Educational Research and Evaluation, 1(1), 36-66.
Fuchs, L., Compton, D., Fuchs, D., Paulsen, K., Bryant, J., &Hamlett, C. (2005). The prevention, identification,
and cognitive determinants of math difficulty. Journal of Educational Psychology, 97(3), 493-513.
Fuchs, L., Fuchs, D., Finelli, R., Courey, S., & Hamlett, C. (2004). Expanding schema-based transfer instruction to help third graders solve real-life mathematical problems. American Educational Research Journal, 41(2), 419-445.
Fuchs, L., Fuchs, D., Finelli, R., Courey, S., Hamlett, C, Sones, E., et al. (2006). Teaching third graders about
real-life mathematical problem solving: A randomized control study. Elementary School Journal, 106(4), 293-
311.
Fuchs, L., Fuchs, D., &Hamlett, C. (1989). Effects of alternative goal structures within curriculum-based
measurement. Exceptional Children, 55(5), 429-438.
Fuchs, L., Fuchs, D., Hamlett, C, &Stecker, P. (1991). Effects of curriculum-based measurement and
consultation on teacher planning and student achievement in mathematics operations. American Educational
Research Journal, 28(3), 617-641.
Fuchs, L., Fuchs, D., & Karns, K. (2001). Enhancing kindergarteners' mathematical development: Effects of peer-assisted learning strategies. Elementary School Journal, 101(5), 495-510.
Fuchs, L., Fuchs, D., Karns, K., Hamlett, C, &Katzaroff, M. (1999). Mathematics performance assessment in the
classroom: Effects on teacher planning and student problem solving. American Educational Research Journal,
36(3), 609-646.
Fuchs, L., Fuchs, D., Karns, K., Hamlett, C, Katzaroff, M., &Dutka, S. (1997). Effects of task- focused goals on
low-achieving students with and without learning disabilities. American Educational Research Journal, 34(3),
513-543.
Fuchs, L., Fuchs, D., Phillips, N., Hamlett, C., & Karns, K. (1995). Acquisition and transfer effects of classwide
peer-assisted learning strategies in mathematics for students with varying learning histories. School Psychology
Review, 24(4), 604-620.
Fuchs, L., Fuchs, D., &Prentice, K. (2004). Responsiveness to mathematical problemsolving instruction:
Comparing students at risk of mathematics disability with and without risk of reading disability. Journal of
Learning Disabilities, 37(4), 293-306.
Fuchs, L., Fuchs, D., Prentice, K., Burch, M., Hamlett, C, Owen, R., Hosp, M., et al. (2003). Explicitly teaching
for transfer: Effects on third-grade students' mathematical problem solving. Journal of Educational Psychology,
95(2), 293-305.
Fuchs, L., Fuchs, D., Prentice, K., Burch, M., Hamlett, C, Owen, R., &Schroeter, K. (2003). Enhancing third-
grade students' mathematical problem solving with selfregulated learning strategies. Journal of Educational
Psychology, 95(2), 306-315.
Fuchs, L., Fuchs, D., Prentice, K., Hamlett, C., Finelli, R., & Courey, S. (2004). Enhancing mathematical problem solving among third-grade students with schema-based instruction. Journal of Educational Psychology, 96(4), 635-647.
Fuchs, L., Fuchs, D., Yazdian, L., & Powell, S. (2002). Enhancing first-grade children's mathematical
development with peer-assisted learning strategies. School Psychology Review, 31(4), 569-583.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., &Appleton, A. C. (2002). Explicitly teaching for transfer: Effects on the
mathematical problem-solving performance of students with mathematical disabilities. Learning Disabilities
Research and Practice, 17(2), 90-106.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Phillips, N. B., &Bentz (1994). Classwide curriculum-based
measurement: Helping general educators meet the challenge of student diversity. Exceptional Children, 60(6),
518-537.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Phillips, N. B., Karns, K., &Dutka, S. (1997). Enhancing students' helping
behavior during peer-mediated instruction with conceptual mathematical explanations. Elementary School
Journal, 97(3), 223-249.
Fueyo, V., &Bushell, D. (1998). Using number line procedures and peer tutoring to improve the mathematics
computation of low-performing first graders. Journal of Applied Behavior Analysis, 31(3), 417-430.
Fuson, K., Carroll, W., &Drueck, J. (2000). Achievement results for second and third graders using the
standards-based curriculum Everyday Mathematics. Journal for Research in Mathematics Education, 31(3),
277-295.
Fuson, K. C, &Carroll, W. M. (n.d.-a). Performance of U.S. fifth graders in a reform math curriculum compared
to Japanese, Chinese, and traditionally taught U.S. students. Unpublished manuscript.
Fuson, K. C, &Carroll, W. M. (n.d.-b). Summary of comparison of Everyday Math (EM) and McMillan (MC):
Evanston student performance on whole-class tests in Grades 1, 2, 3, and 4. Unpublished manuscript.
Gabbert, B., Johnson, D. W., & Johnson, R. T. (1986). Cooperative learning, group-to-individual transfer, process gain, and the acquisition of cognitive reasoning strategies. Journal of Psychology, 120(3), 265-278.
Gallagher, P. A. (1991). Training the parents of Chapter 1 math students in the use of mastery learning
strategies to help increase math achievement. Unpublished doctoral dissertation, University of Pittsburgh.
Gatti, G. (2004a). ARC study. In Pearson Education (Ed.). Investigations in Number, Data, and Space:
Validation and research. Upper Saddle River, NJ: Pearson.
Gatti, G. (2004b). Scott Foresman-Addison Wesley Math national effect size study. (Available from Pearson
Education, K-12 School Group, 1 Lake Street, Upper Saddle River, NJ 07458)
Giancola, S. (2000). Evaluation results of the Delaware Challenge Grant Project Lead Education Agency: Capital
School District. Newark: University of Delaware.
Gilbert-Macmillan, K. (1983). Mathematical problem solving in cooperative small groups and whole class
instruction. Unpublished doctoral dissertation, Stanford University.
Gill, B. (1995). Project CHILD middle school follow-up evaluation: Final report. Jacksonville, FL: Daniel Memorial Institute.
Gilman, D., &Brantley, T. (1988). The effects of computer-assisted instruction on achievement, problem-solving
skills, computer skills, and attitude: A study of an experimental program at Marrs Elementary School, Mount
Vernon, Indiana. Indianapolis: Indiana State University. (ERIC Document Reproduction Service No. ED302232)
Ginsburg, A., Leinwand, S., Anstrom, T., &Pollok, E. (2005). What the United States can learn from Singapore's
world-class mathematics system (and what Singapore can learn from the United States): An exploratory study.
Washington, DC: American Institutes for Research.
Ginsburg-Block, M. (1998). The effects of reciprocal peer problem-solving on the mathematics achievement,
academic motivation and self-concept of "at risk" urban elementary students. Unpublished doctoral dissertation,
University of Pennsylvania.
Ginsburg-Block, M., &Fantuzzo, J. W. (1997). Reciprocal peer tutoring: An analysis of "teacher" and "student"
interactions as a function of training and experience. School Psychology Quarterly, 12, 134-149.
Ginsburg-Block, M. D., &Fantuzzo, J. W. (1998). An evaluation of the relative effectiveness of NCTM standards-
based interventions for low-achieving urban elementary students. Journal of Educational Psychology, 90, 560-
569.
Glassman, P. (1989). A study of cooperative learning in mathematics, writing, and reading in the intermediate grades: A focus upon achievement, attitudes, and self-esteem by gender, race, and ability group. Unpublished
doctoral dissertation, Hofstra University.
Glazerman, S., Levy, D. M., &Myers, D. (2002). Nonexperimental replications of social experiments: A
systematic review. Washington, DC: Mathematica.
Goldberg, L. F. (1989). Implementing cooperative learning within six elementary school learning disability
classrooms to improve math achievement and social skills. Unpublished doctoral dissertation, Nova University.
Good, K., Bickel, R., &Howley, C. (2006). Saxon Elementary Math program effectiveness study. Charleston,
WV: Edvantia.
Good, T. L., & Grouws, D. A. (1979). The Missouri Mathematics Effectiveness Project: An experimental study in fourth-grade classrooms. Journal of Educational Psychology, 71(3), 355-362.
Goodrow, A. (1998). Children's construction of number sense in traditional, constructivist, and mixed classrooms. Unpublished doctoral dissertation, Tufts University.
Great Source. (2006). Every Day Counts Grades K-6: Research base and program effectiveness [computer
manual]. Retrieved from http://www.greatsource.com/GreatSource/pdf/EveryDayCountsResearch206.pdf
Greenwood, C. R., Delquadri, J. C, &Hall, R. V. (1989). Longitudinal effects of classwide peer tutoring. Journal
of Educational Psychology, 81(3), 371-383.
Greenwood, C. R., Dinwiddie, G., Terry, B., Wade, L., Stanley, S., Thibadeau, S., et al. (1984). Teacher-
versus-peer mediated instruction: An ecobehavioral analysis of achievement outcomes. Journal of Applied
Behavior Analysis, 17, 521-538.
Greenwood, C. R., Terry, B., Utley, C. A., &Montagna, D. (1993). Achievement, placement, and services:
Middle school benefits of ClassWide Peer Tutoring used at the elementary school. School Psychology Review,
22(3), 497-516.
Griffin, S. A., Case, R., &Siegler, R. S. (1994). Rightstart: Providing the central conceptual prerequisites for first
formal learning of arithmetic to students at risk for school failure. In K. McGilly (Ed.), Classroom lessons:
Integrating cognitive theory and classroom practice (pp. 25-49). Cambridge, MA: MIT Press.
Grossen, B., &Ewing, S. (1994). Raising mathematics problem-solving performance: Do the NCTM teaching
standards help? Final report. Effective School Practices, 13(2), 79-91.
Gwaltney, L. (2000). Year three final report of the Lightspan Partnership, Inc. Achieve Now Project: Unified School
District 259, Wichita Public Schools. Wichita, KS: Allied Educational Research and Development Services.
Hallmark, B. W. (1994). The effects of group composition on elementary students' achievement, self-concept,
and interpersonal perceptions in cooperative learning groups. Unpublished doctoral dissertation, University of
Connecticut.
Hansen, E., &Greene, K. (n.d.). A recipe for math. What's cooking in the classroom: Saxon or traditional?
Retrieved March 1, 2006, from http://www.secondaryenglish.com/recipeformath.html
Hativa, N. (1988). Computer-based drill and practice in arithmetic: Widening the gap between high- and low-
achieving students. American Educational Research Journal, 25(3), 366-397.
Haynie, T. R. (1989). The effects of computer-assisted instruction on the mathematics achievement of selected
groups of elementary school students. Unpublished doctoral dissertation, George Washington University.
Heller, L. R., &Fantuzzo, J. W. (1993). Reciprocal peer tutoring and parent partnership: Does parent
involvement make a difference? School Psychology Review, 22(3), 517-535.
Hess, R. D., &McGarvey, L. J. (1987). School-relevant effects of educational use of microcomputers in
kindergarten classrooms and homes. Journal of Educational Computing Research, 3(3), 269-287.
Hickey, D. T., Moore, A. L., &Pellegrino, J. W. (2001). The motivational and academic consequences of
elementary mathematics environments: Do constructivist innovations and reforms make a difference? American
Educational Research Journal, 38(3), 611-652.
Hiebert, J., &Wearne, D. (1993). Instructional tasks, classroom discourse, and students' learning in second-
grade arithmetic. American Educational Research Journal, 30(2), 393-425.
Hill, H. (2004). Professional development standards and practices in elementary school math. Elementary
School Journal, 104(3), 215-232.
Hohn, R., &Frey, B. (2002). Heuristic training and performance in elementary mathematical problem solving.
Journal of Educational Research, 95(6), 374-380.
Holmes, C. T., &Brown, C. L. (2003). A controlled evaluation of a total school improvement process, School
Renaissance. Paper presented at the National Renaissance Conference, Nashville, TN.
Hooper, S. (1992). Effects of peer interaction during computer-based mathematics instruction. Journal of Educational Research, 85(3), 180-189.
Hotard, S., &Cortez, M. (1983). Computer-assisted instruction as an enhancer of remediation. Lafayette:
University of Southwestern Louisiana.
Houghton Mifflin, (n.d.). Knowing Mathematics field test results for Lincoln Public Schools: Fall 2002-Spring
2003. Boston: Author.
Hunter, C. T. L. (1994). A study of the effect of instructional method on the reading and mathematics
achievement of Chapter 1 students in rural Georgia. Unpublished doctoral dissertation, South Carolina State
University.
Interactive. (2000). Documenting the effects of Lightspan Achieve Now! in the Adams County School District 50:
Year 3 report. Huntington, NY: Lightspan.
Interactive. (2001). Documenting the effects of Lightspan Achieve Now! in the Hempstead Union Free School
District: Year 2 report. Huntington, NY: Lightspan.
Interactive. (2003, August). An analysis of CompassLearning student achievement outcomes in Pocatello, Idaho, 2002-03. (Available from CompassLearning, 9920 Pacific Heights Blvd., San Diego, CA 92121)
Interactive &Metis Associates. (2002). A multi-year analysis of the outcomes of Lightspan Achievement Now in
the Cleveland Municipal School District. Huntington, NY: Lightspan.
Isbell, S. K. (1993). Impact on learning of computer-assisted instruction when aligned with classroom curriculum
in second-grade mathematics and fourth-grade reading. Unpublished doctoral dissertation, Baylor University.
Jamison, M. B. (2000). The effects of linking classroom instruction in mathematics to a prescriptive delivery of
lessons on an integrated learning system. Unpublished doctoral dissertation, Texas Tech University.
Jitendra, A. K., Griffin, C. C, McGoey, K., Gardill, M. C, Bhat, P., &Riley, T. (1998). Effects of mathematical word
problem solving by students at risk or with mild disabilities. Journal of Educational Research, 91(6), 345-355.
Jitendra, A. K., &Hoff, K. (1996). The effects of schema-based instruction on the mathematical word-problem-
solving performance of students with learning disabilities. Journal of Learning Disabilities, 29(4), 422-431.
Johnson, D. W., Johnson, R. T., &Scott, L. (1978). The effects of cooperative and individualized instruction on
student attitudes and achievement. Journal of Social Psychology, 104, 207-216.
Johnson, J., Yanyo, L., &Hall, M. (2002). Evaluation of student math performance in California school districts
using Houghton Mifflin Mathematics. Raleigh, NC: EDSTAR.
Johnson, L. C. (1985, December). The effects of the groups of four cooperative learning model on student problem-solving achievement in mathematics. Unpublished doctoral dissertation, University of Houston.
Johnson-Scott, P. L. (2006). The impact of Accelerated Math on student achievement. Unpublished doctoral
dissertation, Mississippi State University.
Karper, J., &Melnick, S. A. (1993). The effectiveness of team accelerated instruction on high achievers in
mathematics. Journal of Instructional Psychology, 20(1), 49-54.
Kastre, N. J. (1995). A comparison of computer-assisted instruction and traditional practice for mastery of basic
multiplication facts. Unpublished doctoral dissertation, Arizona State University.
Kelso, C. R., Booton, B., &Majumdar, D. (2003). Washington State Math Trailblazers student achievement
report. Chicago: University of Illinois at Chicago.
Kersh, M. (1970). A strategy for mastery learning in fifth grade arithmetic. Unpublished doctoral dissertation,
University of Chicago.
King, F. J., & Butzin, S. (1992). An evaluation of Project CHILD. Florida Technology in Education Quarterly, 4(4),
45-63.
Kirk, V. C. (2003). Investigation of the impact of integrated learning system use on mathematics achievement of
elementary students. Unpublished doctoral dissertation, East Tennessee State University.
Kopecky, C. L. (2005). A case study of the Math Matters professional development program in one elementary
school. Unpublished doctoral dissertation, University of Southern California.
Kosciolek, S. (2003). Instructional factors related to mathematics achievement: Evaluation of a mathematics
intervention. Unpublished doctoral dissertation, University of Minnesota.
Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research
and Development, 42(2), 7-19.
Kroesbergen, E. H., &Van Luit, J. E. H. (2003). Mathematical interventions for children with special education
needs. Remedial and Special Education, 24, 97-114.
Kromhout, O. M. (1993). Evaluation report Project CHILD, 1992-1993. Tallahassee, FL.
Kromhout, O. M., &Butzin, S. M. (1993). Integrating computers into the elementary school curriculum: An
evaluation of nine Project CHILD model schools. Journal of Research of Computing in Education, 26(1), 55-70.
Kulik, J. A. (2003). Effects of using instructional technology in elementary and secondary schools: What
controlled evaluation studies say (SRI Project No. P10446.001). Arlington, VA: SRI International.
Laub, C. (1995). Computer-integrated learning system and elementary student achievement in mathematics: An
evaluation study. Unpublished Doctoral Dissertation, Temple University.
Laub, C, &Wildasin, R. (1998). Student achievement in mathematics and the use of computer-based instruction
in the Hempfield School District. Landsville, PA: Hempfield School District.
Leffler, K. (2001). Fifth-grade students in North Carolina show remarkable math gains. Summary of Independent
Math Research. Madison, WI: Renaissance Learning.
Leiker, V. (1993). The relationship between an integrated learning system, reading and mathematics
achievement, higher order thinking skills and certain demographic variables: A study conducted in two school
districts. Unpublished doctoral dissertation, Baylor University.
Levy, M. H. (1985). An evaluation of computer assisted instruction upon the achievement of fifth grade students
as measured by standardized tests. Unpublished doctoral dissertation, University of Bridgeport.
Lin, A., Podell, D. M., &Tournaki-Rein, N. (1994). CAI and the development of automaticity in mathematics skills
in students with and without mild mental handicaps. Computers in the Schools, 11, 43-58.
Lipsey, M. W., &Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.
Long, V. M. (1991). Effects of mastery learning on mathematics achievement and attitudes. Unpublished
doctoral dissertation, University of Missouri-Columbia.
Lucker, G. W., Rosenfield, D., Sikes, J., &Aronson, E. (1976). Performance in the interdependent classroom: A
field study. American Educational Research Journal, 13, 115-123.
Lykens, J. S. (2003). An evaluation of a standards-based mathematics program (Math Trailblazers) in
Delaware's Caesar Rodney School District. Unpublished doctoral dissertation, Wilmington College.
Mac Iver, M., Kemper, E., &Stringfield, S. (2003). The Baltimore Curriculum Project: Final report of the four-year
evaluation study. Baltimore: Johns Hopkins University, Center for Social Organization of Schools.
Madden, N. A., &Slavin, R. E. (1983). Cooperative learning and social acceptance of mainstreamed
academically handicapped students. Journal of Special Education, 77(2), 171-182.
Madden, N. A., Slavin, R. E., &Simons, K. (1997). MathWings: Early indicators of effectiveness (Rep. No. 17).
10 October 2013 Page 24 of 33 ProQuest
Baltimore: Johns Hopkins University, Center for Research on the Education of Students Placed at Risk.
Madden, N. A., Slavin, R. E., &Simons, K. (1999). MathWings: Effects on student mathematics performance.
Baltimore: Johns Hopkins University Center for Research on the Education of Students Placed at Risk.
Mahoney, S. J. (1990). A study of the effects of EXCEL Math on mathematics achievement of second- and
fourth-grade students. Unpublished doctoral dissertation, Northern Arizona University.
Mann, D., Shakeshaft, C, Becker, J., &Kottkamp, R. (1999). West Virginia story: Achievement gains from a
statewide comprehensive instructional technology program. Beverly Hills, CA: Milken Family Foundation with
the West Virginia Department of Education, Charleston.
Manuel, S. Q. (1987). The relationship between supplemental computer assisted mathematics instruction and
student achievement. Unpublished doctoral dissertation, University of Nebraska.
Martin, J. M. (1986). The effects of a cooperative, competitive, or combination goal structure on the
mathematics performance of Black children from extended family backgrounds. Unpublished doctoral
dissertation, Howard University.
Mason, D. A., &Good, T. L. (1993). Effects of two-group and whole-class teaching on regrouped elementary
students' mathematics achievement. American Educational Research Journal, 30(2), 328-360.
Math Learning Center. (2003). Bridges in the classroom: Teacher feedback, student data, &current research.
Salem, OR: Author.
Mathematics Evaluation Committee. (1997). Chicago mathematics evaluation report: Year two. Pennington, NJ:
Hopewell Valley Regional School District.
Mayo, J. C. (1995). A quasi-experimental study comparing the achievement of primary grade students using
Math Their Way with primary grade students using traditional instructional methods. Unpublished doctoral
dissertation, University of South Carolina.
McCabe, K. J. (2001). Mathematics in our schools: An effort to improve mathematics literacy. Unpublished
master's thesis, California State University, Long Beach.
McCormick, K. (2005). Examining the relationship between a standards-based elementary mathematics
curriculum and issues of equity. Unpublished doctoral dissertation, Indiana University.
McDermott, P. A., &Watkins, M. W. (1983). Computerized vs. conventional remedial instruction for learning-
disabled pupils. Journal of Special Education, 77(1), 81-88.
McKernan, M. M. (1992). The effects of "Mathematics Their Way" and Chicago Math Project on mathematical
applications and story problem strategies of second graders. Unpublished doctoral dissertation, Drake
University.
McWhirt, R., Mentavlos, M., Rose-Baele, J. S., &Donnelly, L. (2003). Evaluation of the implementation and
effectiveness of SuccessMaker. Charleston, SC: Charleston County School District.
Mehrens, W. A., &Phillips, S. E. (1986). Detecting impacts of curricular differences in achievement test data.
Journal of Educational Measurement, 23(3), 185-196.
Mercer, C. D., &Miller, S. P. (1992). Teaching students with learning problems in math to acquire, understand,
and apply basic math facts. Remedial and Special Education, 13(3), 19-35.
Merrell, M. L. B. (1996). The achievement and self-concept of elementary grade students who have received
Direct Instruction. Unpublished doctoral dissertation, University of Southern Mississippi.
Mevarech, Z. R. (1985). The effects of cooperative mastery learning strategies on mathematics achievement.
Journal of Educational Research, 78(3), 372-377.
Mevarech, Z. R. (1991). Learning mathematics in different mastery environments. Journal of Education
Research, 84(A), 225-231.
Mevarech, Z. R., &Rich, Y. (1985). Effects of computer-assisted mathematics instruction on disadvantaged
pupils' cognitive and affective development. Journal of Educational Research, 79(1), 5-11.
Meyer, L. (1984). Long-term academic effects of Direct Instruction Project Follow Through. Elementary School
10 October 2013 Page 25 of 33 ProQuest
Journal, 84(A), 380-394.
Miller, H. (1997). Quantitative analyses of student outcome measures. International Journal of Education
Research, 27(2), 119-136.
Mills, S. C. (1997). Implementing integrated learning systems in elementary classrooms. Unpublished doctoral
dissertation, University of Oklahoma.
Mintz, K. S. (2000). A comparison of computerized and traditional instruction in elementary mathematics.
Unpublished doctoral dissertation, University of Alabama.
Mokros, J., Berle-Carmen, M., Rubin, A., &O'Neil, K. (1996, April). Learning operations: Invented strategies that
work. Paper presented at the annual meeting of the American Educational Research Association, New York,
NY.
Mokros, J., Berle-Carmen, M., Rubin, A., &Wright, T. (1994). Full year pilot Grades 3 and 4: Investigations in
numbers, data, and space. Cambridge, MA: Technical Education Research Centers.
Monger, C. T. (1989). Effects of a mastery learning strategy on elementary and middle school mathematics
students' achievement and subject related affect. Unpublished doctoral dissertation, University of Tulsa.
Moody, E. C. (1994). Implementation and integration of a computer-based integrated learning system in an
elementary school. Unpublished doctoral dissertation, Florida State University.
Morgan, J. C. (1994). Individual accountability in cooperative learning groups: Its impact on achievement and
attitude with grade three students. Unpublished master's thesis, University of Manitoba.
Moskowitz, J. M., Malvin, J. H., Schaeffer, G. A., &Schaps, E. (1983). Evaluation of a cooperative learning
strategy. American Educational Research Journal, 20(A), 687-696.
Moss, J., &Case, R. (1999). Developing children's understanding of the rational numbers: A new model and an
experimental curriculum. Journal for Research in Mathematics Education, 30(2), 122-147.
Murphy, L. A. (1998). Learning and affective issues among higher- and lowerachieving third-graders in math
reform classrooms: Perspectives of children, parents, and teachers. Unpublished doctoral dissertation,
Northwestern University.
Murphy, R., Penuel, W., Means, B., Korbak, C, Whaley, A., &Allen, J. (2002). E-DESK: A review of recent
evidence on discrete educational software (SRI International Report). Menlo Park, CA: SRI International.
National Assessment of Educational Progress. (2005). The nation's report card. Washington, DC: National
Center for Education Statistics.
National Research Council. (2004). On evaluating curricular effectiveness. Washington, DC: National
Academies Press.
Nattiv, A. (1994). Helping behaviors and math achievement gain of students using cooperative learning.
Elementary School Journal, 94(3), 285-297.
New Century Education Corporation. (2003). New Century Integrated Instructional System, evidence of
effectiveness: Documented improvement results from client schools and districts. Piscataway, NJ: Author.
Nguyen, K. (1994). The 1993-1994 Saxon mathematics evaluation report. Oklahoma City: Oklahoma City Public
Schools.
Nguyen, K., &Elam, P. (1993). The 1992-1993 Saxon mathematics evaluation report. Oklahoma City: Oklahoma
City Public Schools.
Opuni, K. A. (2006). The effectiveness of the Consistency Management &Cooperative Discipline"' (CMCD)
model as a student empowerment and achievement enhancer: The experiences of two K-12 inner-city school
systems. Paper presented at the 4th Annual Hawaii International Conference of Education, Honolulu, Hawaii.
Orabuchi, I. I. (1992). Effects of using interactive CAI on primary grade students' higher-order thinking skills:
Inferences, generalizations, and math problem solving. Unpublished doctoral dissertation, Texas Woman's
University.
Orr, C. (1991). Evaluating restructured elementary classes: Project CHILD summative evaluation. Paper
10 October 2013 Page 26 of 33 ProQuest
presented at the Southeast Evaluation Association, Tallahassee, FL.
Patterson, D. (2005). The effects of Classworks in the classroom. Stephenville, TX: Tarleton State University.
Pellegrino, J. W., Hickey, D., Heath, A., Rewey, K., &Vye, N. J. (1992). Assessing the outcomes of an
innovative instructional program: The 1990-1991 implementation of the "Adventures of Jasper Woodbury."
Nashville, TN: Vanderbilt University, Learning Technology Center.
Perkins, S. A. (1987). The relationship between computer-assisted instruction in reading and mathematics
achievement and selected student variables. Unpublished doctoral dissertation, University of Michigan.
Peterson, P. L., Janicki, T. C, &Swing, S. R. (1981). Ability x Treatment interaction effects on children's learning
in large-group and small-group approaches. American Educational Research Journal, 18(A), 453-473.
Phillips, C. K. (2001). The effects of an integrated computer program on math and reading improvement in
grades three through five. Unpublished doctoral dissertation, University of Tennessee.
Pigott, H. E., Fantuzzo, J. W., &Clement, P. W. (1986). The effects of reciprocal peer tutoring and group
contingencies on the academic performance of elementary school children. Journal of Applied Behavior
Analysis, 19, 93-98.
Podell, D. M., Tournaki-Rein, N., &Lin, A. (1992). Automatization of mathematics skills via computer-assisted
instruction among students with mild mental handicaps. Education and Training in Mental Retardation, 27, 200-
206.
Pratton, J., &Hales, L. W. (1986). The effects of active participation on student learning. Journal of Educational
Research, 79(A), 210-215.
Quinn, D. W., &Quinn, N. W. (2001a). Evaluation series: Preliminary study: PLATO elementary math software,
Fairview Elementary, Dayton, Ohio. Bloomington, MN: PLATO Learning.
Quinn, D. W., &Quinn, N. W. (2001b). Evaluation series: Grades 1-8, Apache Junction Unified School District
43. Bloomington, MN: PLATO Learning.
Rader, J. B. (1996). The effect of curriculum alignment between mathematics instruction and CAI mathematics
on student performance and attitudes, staff attitudes, and alignment. Unpublished doctoral dissertation, United
States International University.
Ragosta, M. (1983). Computer-assisted instruction and compensatory education: A longitudinal analysis.
Machine-Mediated Learning, 1(1), 97-127.
Resendez, M., &Azin, M. (2005). The relationship between using Saxon Elementary and Middle School Math
and student performance on Georgia statewide assessments. Jackson, WY: PRES Associates.
Resendez, M., &Manley, M. A. (2005). A study on the effectiveness of the 2004 Scott Foresman-Addison
Wesley elementary math program. Jackson, WY: PRES Associates.
Resendez, M.. &Sridharan, S. (2005). A study on the effectiveness of the 2004 Scott Foresman-Addison
Wesley elementary math program: Technical report. Jackson, WY: PRES Associates.
Resendez, M., &Sridharan, S. (2006). A study of the effectiveness of the 2005 Scott Foresman-Addison Wesley
elementary math program. Jackson, WY: PRES Associates.
Resendez, M., Sridharan, S., &Azin, M. (2006). The relationship between using Saxon Elementary School Math
and student performance on the Texas statewide assessments. Jackson, WY: PRES Associates.
Riordan, J., &Noyce, P. (2001). The impact of two standards-based mathematics curricula on student
achievement in Massachusetts. Journal for Research in Mathematics Education, 32(A), 368-398.
Ross, L. G. (2003). The effects of a standards-based mathematics curriculum on fourth and fifth grade
achievement in two Midwest cities. Unpublished doctoral dissertation, University of Iowa.
Ross, S. M., &Nunnery, J. A. (2005). The effect of School Renaissance on student achievement in two
Mississippi school districts. Memphis, TN: Center for Research in Educational Policy, University of Memphis.
Roy, J. W. (1993). An investigation of the efficacy of computer-assisted mathematics, reading, and language
arts instruction. Unpublished doctoral dissertation, Baylor University.
10 October 2013 Page 27 of 33 ProQuest
Ruffin, M. R., Taylor, M., &Butts, L. W. (1991). Report of the 1989-1990 Barrett Math Program (Report No. 12,
Vol. 25). Atlanta, GA: Atlanta Public Schools, Department of Research and Evaluation. (ERIC Document
Reproduction Service No. 365508)
Rust, A. L. (1999). A study of the benefits of math manipulatives versus standard curriculum in the
comprehension of mathematical concepts. Knoxville, TN: Johnson Bible College. (ERIC Document
Reproduction Service No. 436395)
Salvo, L. C. (2005). Effects of an experimental curriculum on third graders' knowledge of multiplication facts.
Unpublished doctoral dissertation, George Mason University.
Schmidt, S. (1991). Technology for the 21st century: The effects of an integrated distributive computer network
system on student achievement. Unpublished doctoral dissertation, University of La Veme, CA.
Schoenfeld, A. H. (2006). What doesn't work: The challenge and failure of the What Works Clearinghouse to
conduct meaningful reviews of studies of mathematics curricula. Educational Researcher, 35(2), 13-21.
Schreiber, F., Lomas, C, &Mys, D. (1988). Evaluation of student reading and mathematics: WICAT computer
managed instructional program, Salina Elementary School. Dearborn, MI: Office of Research and Evaluation,
Public Schools.
Sconiers, S., Isaacs, A., Higgins, T., McBride, J., &Kelso, C. (2003). The Arc Center Tri-State Student
Achievement Study. Lexington, MA: COMAP.
Sedlmeier, P., &Gigerenzer, G. (1989). Do studies of statistical power have an effect on the power of studies?
Psychological Bulletin, 105(2), 309-316.
Shanoski, L. A. (1986). An analysis of the effectiveness of using microcomputer assisted mathematics
instruction with primary and intermediate level students. Unpublished doctoral dissertation, Indiana University of
Pennsylvania.
Shaughnessy, J., &Davis, A. (1998). Springfield research study: Impact of Opening Eyes and Visual
Mathematics curricula. Portland, OR: Northwest Regional Educational Laboratory.
Shawkey, J. (1989). A comparison of two first grade mathematics programs: Math Their Way and Explorations.
Unpublished doctoral dissertation, University of the Pacific.
Sheffield, S. (2004). Evaluation of Houghton Mifflin MathP 2005 in the Chicopee Public School District: Second
year results.
Sheffield, S. (2005). Evaluation of Houghton Mifflin Math 2005 in the Chicopee Public School District: First
year results.
Sherwood, R. D. (1991). The development and preliminary evaluation of anchored instruction environments for
developing mathematical and scientific thinking. Paper presented at the meeting of the National Association for
Research in Science Teaching, Lake Geneva, WI. (ERIC Document Reproduction Service No. ED335221)
Shiah, R. L., Mastropieri, M. A., Scruggs, T. E., &FuIk, B. J. (1994-1995). The effects of computer-assisted
instruction on the mathematical problem-solving of students with learning disabilities. Exceptionality, 5(3), 131-
161.
Simpson, N. (2001). Scott Foresman California Mathematics validation study pretest-posttest results (Report
No. VM-17-3005-CA). (Available from Pearson Scott Foresman, 1415 L Street, Suite 800, Sacramento, CA
95814)
Sinkis, D. M. (1993). A comparison of Chapter One student achievement with and without computer-assisted
instruction. Unpublished doctoral dissertation, University of Massachusetts.
Slavin, R. E. (1986). Best-evidence synthesis: An alternative to meta-analytic and traditional reviews.
Educational Researcher, 75(9), 5-11.
Slavin, R. E. (2008). What works? Issues in synthesizing educational program evaluations. Educational
Researcher, 37(1), 5-14.
Slavin, R. E., &Karweit, N. L. (1985). Effects of whole class, ability grouped, and individualized instruction on
10 October 2013 Page 28 of 33 ProQuest
mathematics achievement. American Educational Research Journal, 22(3), 351-367.
Slavin, R. E., Lake, C, &Groff, C. (2007). Effective programs for middle and high school mathematics: A best-
evidence synthesis. Baltimore: Johns Hopkins University, Center for Data-Driven Reform in Education.
Available at www.bestevidence.org
Slavin, R. E., Leavey, M., &Madden, N. A. (1984). Combining cooperative learning and individualized
instruction: Effects on student mathematics achievement, attitudes, and behaviors. Elementary School Journal,
84(A), 409-422.
Slavin, R. E., Madden, N. A., &Leavey, M. (1984a). Effects of cooperative learning and individualized instruction
on mainstreamed students. Exceptional Children, 50(5), 434-443.
Slavin, R. E., Madden, N. A., &Leavey, M. (1984b). Effects of team assisted individualization on the
mathematics achievement of academically handicapped and nonhandicapped students. Journal of Educational
Psychology, 76(5), 813-819.
Sloan, H. A. (1993). Direct Instruction in fourth and fifth grade classrooms. Unpublished master's thesis, Purdue
University.
Snider, V. E., &Crawford, D. B. (1996). Action research: Implementing Connecting Math Concepts. Effective
School Practices, 75(2), 17-26.
Snow, M. ( 1 993). The effects of computer-assisted instruction and focused tutorial services on the academic
achievement of marginal learners. Unpublished doctoral dissertation, University of Miami.
Spencer, T. M. (1999). The systemic impact of integrated learning systems on mathematics achievement.
Unpublished doctoral dissertation, Eastern Michigan University.
Spicuzza, R., Ysseldyke, J., Lemkuil, A., Kosciolek, S., Boys, C, &Teelucksingh, E. (2001). Effects of
curriculum-based monitoring on classroom instruction and math achievement. Journal of School Psychology,
39(6), 541-542.
SRA/McGraw-Hill. (2003). Everyday Mathematics student achievement studies (Vol. 4). Chicago: Author.
Stallings, J. (1985). A study of implementation of Madeline Hunter's model and its effects on students. Journal
of Educational Research, 78(6), 325-337.
Stallings, J., &Krasavage, E. M. (1986). Program implementation and student achievement in a four-year
Madeline Hunter Follow-Through Project. Elementary School Journal, 87(2), 117-138.
Stallings, P., Robbins, P., Presbrey, L., &Scott, J. (1986). Effects of instruction based on the Madeline Hunter
model on students' achievement: Findings from a followthrough project. Elementary School Journal, 86(5), 571-
587.
Stecker, P., &Fuchs, L. (2000). Expecting superior achievement using curriculumbased measurement: The
importance of individual progress monitoring. Learning Disabilities Research and Practice, 15(3), 128-134.
Stevens, J. W. (1991). Impact of an integrated learning system on third-, fourth-, and fifth-grade mathematics
achievement. Unpublished doctoral dissertation, Baylor University.
Stevens, R. J., &Slavin, R. E. (1995). Effects of a cooperative learning approach in reading and writing on
handicapped and nonhandicapped students' achievement, attitudes, and metacognition in reading and writing.
Elementary School Journal, 95, 241-262.
Stone, T. (1996). The academic impact of classroom computer usage upon middleclass primary grade level
elementary school children. Unpublished doctoral dissertation, Widener University.
Sullivan, F. L. (1989). The effects of an integrated learning system on selected fourthand fifth-grade students.
Unpublished doctoral dissertation, Baylor University.
Suppes, P., Fletcher, J. D., Zanotti, M., Lorton, P. V., Jr., &Searle, B. W. (1973). Evaluation of computer-
assisted instruction in elementary mathematics for hearing-impaired students. Stanford, CA: Institute for
Mathematical Studies in the Social Sciences, Stanford University.
Suyanto, W. (1998). The effects of student teams-achievement divisions on mathematics achievement in
10 October 2013 Page 29 of 33 ProQuest
Yogyakarta rural primary schools. Unpublished doctoral dissertation, University of Houston.
Swing, S., &Peterson, P. (1982). The relationship of student ability and small group interaction to student
achievement. American Educational Research Journal, 19, 259-274.
Tarver, S. G, &Jung, J. A. (1995). A comparison of mathematics achievement and mathematics attitudes of first
and second graders instructed with either a discoverylearning mathematics curriculum or a direct instruction
curriculum. Effective School Practices, 14(1), 49-57.
Taylor, L. (1999). An integrated learning system and its effect on examination performance in mathematics.
Computers and Education, 32(2), 95-107.
Teelucksingh, E., Ysseldyke, J., Spicuzza, R., &Ginsburg-Block, M. (2001). Enhancing the learning of English
language learners: Consultation and a curriculum based monitoring system. Minneapolis: University of
Minnesota, National Center on Educational Outcomes.
Tieso, C. (2005). The effects of grouping practices and curricular adjustments on achievement. Journal for the
Education ofthe Gifted, 29(1), 60-89.
Tingey, B., &Simon, C. (2001). Relationship study for SuccessMaker levels and SAT9 in Hueneme 3
Elementary District, school year 2000-2001, with growth analysis (Pearson Education Technologies). Pt.
Hueneme, CA.
Tingey, B. &Thrall, A. (2000). Duval County Public Schools evaluation report for 1999-2000 (Pearson Education
Technologies). Duval County, FL.
Torgerson, C. J. (2006). Publication bias: The Achilles' heel of systematic reviews? British Journal of
Educational Studies, 54(1), 89-102.
Trautman, T., &Howe, Q. (2004). Computer-aided instruction in mathematics: Improving performance in an
inner city elementary school serving mainly English language learners. Oklahoma City, OK: American Education
Corporation.
Trautman, T. S., &Klemp, R. (2004). Computer aided instruction and academic achievement: A study ofthe
AnyWhere Learning System in a district wide implementation. Oklahoma City, OK: American Education
Corporation.
Tsuei, M. (2005, June). Effects of a Web-based curriculum-based measurement system on math achievement:
A dynamic classwide growth model. Paper presented at the National Educational Computing Conference,
Philadelphia, PA.
Turner, L. G. (1985). An evaluation of the effects of paired learning in a mathematics computer-assisted-
instruction program. Unpublished doctoral dissertation, Arizona State University.
Tuscher, L. (1998). A three-year longitudinal study assessing the impact of the SuccessMaker program on
student achievement and student and teacher attitudes in the Methacton School District elementary schools.
Lehigh, PA: Lehigh University.
Underwood, J., Cavendish, S., Dowling, S., Fogelman, K., &Lawson, T. (1996). Are integrated learning systems
effective learning support tools? Computers Education, 26(1-3), 33-40.
Van Dusen, L. M., &Worthen, B. R. (1994). The impact of integrated learning system implementation on student
outcomes: Implications for research and evaluation. International Journal of Educational Research, 27(1), 13-
24.
Vaughan, W. (2002). Effects of cooperative learning on achievement and attitude among students of color.
Journal of Educational Research, 95(6), 359-364.
Villasenor, A., &Kepner, H. S. (1993). Arithmetic from a problem-solving perspective: An urban implementation.
Journal for Research in Mathematics Education, 24(1), 62-69.
Vogel, J. J., Greenwood-Ericksen, A., Cannon-Bowers, J., &Bowers, CA. (2006). Using virtual reality with and
without gaming attributes for academic achievement. Journal of Research on Technology in Education, 39(1),
105-118.
10 October 2013 Page 30 of 33 ProQuest
Vreeland, M., Vail, J., Bradley, L., Buetow, C, Cipriano, K., Green, C, et al. (1994). Accelerating cognitive
growth: The Edison School Math Project. Effective School Practices, 13(2), 64-69.
Waite, R. E. (2000). A study on the effects of Everyday Mathematics on student achievement of third-, fourth-,
and fifth- grade students in a large north Texas urban school district. Unpublished doctoral dissertation,
University of North Texas.
Watkins, M. W. (1986). Microcomputer-based math instruction with first-grade students. Computers in Human
Behavior, 2, 71-75.
Webster, A. H. (1990). The relationship of computer assisted instruction to mathematics achievement, student
cognitive styles, and student and teacher attitudes. Unpublished doctoral dissertation, Delta State University,
Cleveland, MS.
Webster, W. (1995). The evaluation of Project SEED, 1991-94: Detroit Public Schools. Detroit, MI: Detroit Public
Schools.
Webster, W., &Chadbourn, R. (1996). Evaluation of Project SEED through 1995. Dallas, TX: Dallas Public
Schools.
Webster, W. J. (1998). The national evaluation of Project SEED in five school districts: 1997-1998. Dallas, TX:
Webster Consulting.
Webster, W. J., &Chadbourn, R. A. (1989). Final report: The longitudinal effects of SEED instruction on
mathematics achievement and attitudes. Dallas, TX: Dallas ISD.
Webster, W. J., &Chadbourn, R. A. (1992). The evaluation of Project SEED: 1990-91. Dallas, TX: Dallas ISD.
Webster, W. J., &Dryden, M. (1998). The evaluation of Project SEED: 1997-1998: Detroit Public Schools.
Detroit, MI: Detroit Public Schools.
Weidinger, D. (2006). The effects of classwide peer tutoring on the acquisition of kindergarten reading and math
skills. Unpublished doctoral dissertation, University of Kansas.
Wellington, J. (1994). Evaluating a mathematics program for adoption: Connecting Math Concepts. Effective
School Practices, 13(2), 70-75.
Wenglinsky, H. (1998). Does it compute? The relationship between educational technology and student
achievement in mathematics. Princeton, NJ: Educational Testing Service Policy Information Center.
What Works Clearinghouse. (2005). Middle school math curricula. Washington, DC: U.S. Department of
Education. Retrieved July 6, 2006, from http://www
.whatworks.ed.gov/Topic.asp?tid=03&Returnpage="86"fault.asp
What Works Clearinghouse. (2006). Elementary school math. Washington, DC: U.S. Department of Education.
Retrieved September 21, 2006, from http://w-wc.org/Topic.asp?tid=04&Returnpage="87"fault.asp
Whitaker, J. C. (2005). Impact of an integrated learning system on reading and mathematics achievement.
Unpublished doctoral dissertation, Tennessee State University.
White, M. S. (1996). The effects of TIPS Math activities on fourth grade student achievement in mathematics
and on primary caregivers' and teachers' opinions about primary caregivers' involvement in schoolwork.
Unpublished master's thesis, University of Maryland.
Wildasin, R. L. (1994). A report on ILS implementation and student achievement in mathematics during the
1993-94 school year. Landisville, PA: Hempfield School District.
Williams, D. (2005). The impact of cooperative learning in comparison to traditional instruction on the
understanding of multiplication in third grade students. Unpublished doctoral dissertation, Capella University.
Wilson, C. L., &Sindelar, P. T. (1991). Direct Instruction in math word problems: Students with learning
disabilities. Exceptional Children, 57(6), 512-519.
Wodarz, N. (1994). The effects of computer usage on elementary students' attitudes, motivation and
achievement in mathematics. Unpublished doctoral dissertation, Northern Arizona University.
Woodward, J., &Baxter, J. (1997). The effects of an innovative approach to mathematics on academically low-
10 October 2013 Page 31 of 33 ProQuest
achieving students in inclusive settings. Exceptional Children, 63(3), 373-389.
Xin, J. F. (1999). Computer-assisted cooperative learning in integrated classrooms for students with and without
disabilities. Information Technology in Childhood Education, 1(1), 61-78.
Yager, S., Johnson, R. T., Johnson D. W., &Snider, B. (1986). The impact of group processing on achievement
in cooperative learning groups. Journal of Social Psychology, 126(3), 389-397.
Ysseldyke, J., Betts, J., Thill, T., &Hannigan, E. (2004). Use of an instructional management system to improve
mathematics skills for students in Title I programs. Preventing School Failure, 48(4), 10-14.
Ysseldyke, J., Spicuzza, R., Kosciolek, S., &Boys, C. (2003). Effects of a learning information system on
mathematics achievement and classroom structure. Journal of Educational Research, 96(3), 163-173.
Ysseldyke, J., Spicuzza, R., Kosciolek, S., Teelucksingh, E., Boys, C, &Lemkuil, A. (2003). Using a curriculum-
based instructional management system to enhance math achievement in urban schools. Journal of Education
for Students Placed at Risk, 8(20), 247-265.
Ysseldyke, J., &Tardrew, S. (2002). Differentiating math instruction: A large-scale study of Accelerated Math
[final rep.]. Wisconsin Rapids, WI: Renaissance Learning.
Ysseldyke, J., &Tardrew, S. (2005). Use of a progress monitoring system to enable teachers to differentiate
math instruction. Minneapolis: University of Minnesota.
Ysseldyke, J., Tardrew, S., Betts, J., Thill, T., &Hannigan, E. (2004). Use of an instructional management
system to enhance math instruction of gifted and talented students. Journal of Education for the Gifted, 27(4),
293-310.
Ysseldyke, J. E., &Bolt, D. (2006). Effect of technology-enhanced progress monitoring on math achievement.
Minneapolis: University of Minnesota.
Zollman, A., Oldham, B., &Wyrick, J. (1989). Effects of computer-assisted instruction on reading and
mathematics achievement of Chapter 1 students. Lexington, KY: Fayette County Public Schools.
Zuber, R. L. (1992). Cooperative learning by fifth-grade students: The effects of scripted and unscripted
techniques. Unpublished doctoral dissertation, Rutgers University.
Authors
Robert E. Slavin and Cynthia Lake, Johns Hopkins University
ROBERT E. SLAVIN is director of the Center for Research and Reform in Education at Johns Hopkins
University, director of the Institute for Effective Education at the University of York, and cofounder and chairman
of the Success for All Foundation; 200 W. Towsontown Boulevard, Baltimore, MD 21204; rslavin@jhu.edu.
CYNTHIA LAKE is a research scientist for the Center for Research and Reform in Education at Johns Hopkins
University, CDDRE, 200 W. Towsontown Boulevard, Baltimore, MD 21204; clake5@jhu.edu. Her research
interests include comprehensive school reform, school improvement, and educational research methods and
statistics. She is currently a data analyst for meta-analyses of education programs for the Best Evidence Encyclopedia.
Response to Intervention
in Elementary-Middle Math
National Association of Elementary School Principals
BEST PRACTICES
FOR BETTER SCHOOLS
About NAESP
The mission of the National Association
of Elementary School Principals (NAESP)
is to lead in the advocacy and support for
elementary and middle-level principals and
other education leaders in their commitment
to all children.
Gail Connelly, Executive Director
Michael Schooley, Deputy Executive Director
National Association of
Elementary School Principals
1615 Duke Street
Alexandria, Virginia 22314
800-386-2377 or 703-684-3345
www.naesp.org
NAESP members receive access to this white
paper as a member benefit. Learn more at
www.naesp.org/content/membership-form
About BEST PRACTICES FOR BETTER SCHOOLS
Best Practices for Better Schools, an online
publications series developed by the National
Association of Elementary School Principals,
is intended to strengthen the effectiveness
of elementary and middle-level principals
by providing information and insight about
research-based practices and by offering
guidance for implementing them in schools.
This series of publications is intended to
inform discussion, strategies, and implementation,
not to imply endorsement of any
specific approach by NAESP.
Student Assessment
Response to Intervention in Elementary-Middle Math
About This White Paper
The content of this issue of Best Practices for
Better Schools is excerpted with permission
from Doing What Works (DWW), a website
sponsored by the U.S. Department of
Education. The goal of DWW is to create an
online library of resources to help principals
and other educators implement research-
based instructional practice. DWW is led
by the Department's Office of Planning,
Evaluation & Policy Development (OPEPD),
which relies on the Institute of Education
Sciences (and occasionally other entities that
adhere to standards similar to those of IES)
to evaluate and recommend practices that
are supported by rigorous research. Much
of the DWW content is based on information
from the IES What Works Clearinghouse
(WWC), which evaluates research on
practices and interventions to let the
education community know what is likely
to work.
NAESP was the only national education
association awarded a grant to widely
disseminate highlights of best-practice
content from the DWW website. Readers are
encouraged to visit the website to view all of
the resources related to this best practice and
to share this online resource with colleagues,
teachers, and other educators. No additional
permission is required.
NAESP cares about the environment. This
white paper is available from NAESP as an
online document only. NAESP members
and other readers are encouraged to share
this document with colleagues.
Deborah Bongiorno, Editor
Donna Sicklesmith-Anderson, Designer
Published 2011
Response to Intervention in
Elementary-Middle Math
PRINCIPALS KNOW that schools must help
all students develop the foundational skills
they need to succeed in class and to meet the
literacy and mathematics demands of work
and college. Response to Intervention (RtI) is
a comprehensive early detection, prevention,
and support system approach to help
students before they fall behind. This white
paper outlines implementation of an RtI
framework along with these four
recommended practices:
Implement screening and monitoring;
Focus on foundations of arithmetic;
Provide intentional teaching;
Build a systemwide framework.
Summaries of these practices follow.
Screen all students for potential math
difficulties and monitor their progress.
The first step in RtI is universal student
screening so schools can systematically
identify those at risk for math difficulties.
Accurate identification of at-risk students
requires efficient, reliable, and valid
measures of the most important math
objectives. Because no one assessment is
perfectly reliable, schools are encouraged to
use multiple screening measures, including
state assessment results. Once students
needing intervention have been identified,
regular progress monitoring is essential to
determine if supplemental instruction is
meeting their learning needs or if regrouping
is necessary.
ACTIONS
The RtI team evaluates screening measures
using reliability, efficiency, and validity
criteria.
It is important for the district- or school-
level RtI team to evaluate potential screening
measures. The team should select efficient,
reasonably reliable measures that
demonstrate predictive validity. Most
importantly, screening instruments should
cover critical instructional objectives for
each grade. In grades four through eight,
state assessment results can be used in
combination with a screening instrument to
increase the accuracy of decisions about who
is at risk.
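For illustration, with hypothetical cut points not drawn from the DWW materials: a team might flag a fourth grader who scores below the 25th percentile on the fall screening measure or who scored below proficient on the previous spring's state assessment, then confirm risk status with a second measure before assigning intervention.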
Implement twice-a-year screening.
Screen students in the beginning and middle
of the year to ensure that those at risk are
identified and receive intervention services
in a timely fashion. Using the same grade-
level screening tools across the district
facilitates analysis of patterns of results
across schools.
Monitor student progress regularly.
Use grade-appropriate general outcome
measures to monitor the progress of students
receiving Tier two and Tier three
interventions and borderline students
receiving Tier one instruction at least
monthly, and use the data to regroup
students when necessary. Curriculum-
embedded assessments can be used in
interventions as often as daily or as
infrequently as every other week to
determine whether students are learning
from the intervention.
WHAT PRINCIPALS SAY
Principals can see how these actions are
implemented in schools by viewing these
web-based interviews with teachers and
specialists:
Universal Screening in Math
Functions Progress Monitoring
Monitoring Student Progress
Data Team Meeting: Grade Five Math
Review
The Power of Data
TOOLS
The following tools and templates are
designed to help principals and teachers
implement this best practice in their school.
Each tool is a downloadable document that
principals can adapt to serve their particular
needs.
RtI Data Analysis Teaming Process Script:
Guidelines for conducting a data team
meeting.
Screening and Intervention Record Forms:
Screening forms used by RtI data teams to
record student performance, goals and skills.
Screening Tools Chart: Chart from the
National Center on Response to Intervention
identifies screening tools by content area and
rates each based on reliability and accuracy.
Screening for Mathematics Difficulties in
K-3 Students: Report examining the
effectiveness and key components of early
screening measures.
Graphing Progress How-To Packet:
Directions for creating and analyzing student
progress graphs.
Data Team Assessment and Planning
Worksheet: Worksheet to assess the status of
the data team process.
Problem Analysis and Intervention Design
Worksheets: Excerpts from training modules
to guide teams in data-driven instruction
and learning.
The Foundations of Arithmetic: Focus
Intervention on Whole and Rational
Numbers, Word Problems, and Fact
Fluency
In grades K through five, math interventions
should focus intensely on in-depth treatment
of whole numbers and operations, while
grades four through eight should address
rational numbers as well as advanced topics
in whole number arithmetic, such as long
division.
Interventions on solving word problems
should include instruction that helps
students identify common underlying
structures of various problem types. Students
can learn to use these structures to
categorize problems and determine
appropriate solutions for each problem type.
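To illustrate, using problem-type labels common in the schema-instruction literature rather than taken from this paper: a compare problem such as "Jon has 12 marbles and Maria has 5; how many more does Jon have?" has the structure larger minus smaller equals difference, so 12 - 5 = 7. A change problem such as "Maria had 8 stickers, gave some away, and has 5 left" has the structure start minus change equals end, pointing the student to solve 8 - ? = 5.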
Intervention teachers at all grade levels
should devote about ten minutes of each
session to building fluent retrieval of basic
arithmetic facts. Primary-grade students can
learn efficient counting-on strategies to
improve math fact retrieval, and
intermediate and middle-grade students
should learn to use their knowledge of
properties (e.g., the commutative,
associative, and distributive laws) to derive
facts in their heads.
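A counting-on example for the primary grades: to solve 8 + 3, a student starts from the larger addend, 8, and counts up three ("nine, ten, eleven") instead of counting both addends from one.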
WHAT PRINCIPALS SAY
Principals can see how these actions are
implemented in schools by viewing these
web-based interviews with teachers and
specialists:
Math Content for Struggling Students
Word Problems
Reteaching Place Value in Tier Two
Content for Tiers Two and Three
Teaching Word Problem Structures
TOOLS
The following tools and templates are
designed to help principals and teachers
implement these best practices in their
school. Each tool is a downloadable
document that principals can adapt to serve
their particular needs.
K-8 Essential Mathematics Concepts and
Skills: Excerpt from the Iowa Core
Curriculum reviewing essential skills.
K-7 Essential Mathematics Standards
Assessment Form: Assessment form to
record and track student progress on math
proficiency.
National Mathematics Advisory Panel: Core
Principles of Instruction: Fact sheet
summarizing the Panel's core findings.
National Mathematics Advisory Panel: K-8
Benchmarks: Recommended standards
based on international benchmarks.
RtI in Math for Elementary and Middle
Schools: A PowerPoint providing an
overview of math instruction
recommendations.
ACTIONS
Focus kindergarten through fifth-grade
interventions on whole numbers.
For kindergarten through grade five
students, Tier two and Tier three
interventions should focus almost exclusively
on properties of whole numbers and
operations. Some older students struggling
with whole numbers and operations would
also benefit from in-depth coverage of these
topics.
Focus fourth- through eighth-grade
interventions on rational numbers.
Tier two and Tier three interventions in
grades four through eight should focus on
in-depth coverage of rational numbers and
advanced topics in whole number
arithmetic, such as long division.
Ensure in-depth coverage of math topics.
Districts should select math curriculum
materials that offer an in-depth focus on the
essential topics recommended by the
National Math Panel. Tier two and Tier three
interventions should follow the same
in-depth treatment of these topics.
Interventions on solving word problems should
include instruction that helps students identify
common underlying structures.
Teach students about the structure of various
problem types, how to use these structures to
categorize problems, and how to determine
appropriate solutions for each problem type.
Teach students to transfer known solution
methods from familiar to unfamiliar
problems.
Interventions at all grade levels should devote
about ten minutes each session to building
fluent retrieval of basic arithmetic facts.
Extensive practice facilitates automatic
retrieval. In kindergarten through second
grade, explicitly teach students efficient
counting-on strategies to improve their
retrieval of mathematics facts. Students in
grades two through eight should learn how
to use properties such as the commutative,
associative, and distributive laws to derive
facts in their heads.
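A derived-fact example for the intermediate grades: a student who knows 7 x 5 = 35 can derive 7 x 6 as 7 x 5 + 7 = 42 using the distributive law, and a student who knows 3 x 8 = 24 gets 8 x 3 immediately from the commutative law.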
Intentional Teaching: Provide Explicit
Instruction and Incorporate Visual
Representations and Motivational
Strategies
Intervention instruction should be explicit
and systematic, incorporating models of
proficient problem solving, verbalization of
thought processes, guided practice,
corrective feedback, and frequent cumulative
review. Instructional materials should
include examples of easy and difficult
problems. Students need guided practice
with scaffolding, including opportunities to
communicate their problem-solving
strategies. Motivation is key for students
struggling with math, so it is important to
praise effort and engagement to encourage
persistence.
Intervention materials should provide
students opportunities to work with visual
representations of math concepts, and
interventionists should be proficient in their
use. Tools such as number lines and strip
diagrams have many different uses in
explanation of mathematical concepts and
problem solving. When introducing new
concepts to students, it is helpful to begin
with concrete experiences, follow up with
visual representation, and then move to
abstract concepts.
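As an illustration of that sequence for 3 x 4: students first build three groups of four counters (concrete), then draw a three-by-four array (visual representation), and finally write and solve 3 x 4 = 12 (abstract).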
ACTIONS
Tier two and Tier three math instruction should
provide clear explanations with think-alouds.
Explicit teaching begins with the teacher's
demonstration and modeling of proficient
problem solving with think-alouds.
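A sample think-aloud, invented here for illustration: "I need 26 + 7. I will add 4 first to reach 30, and then add the 3 that is left to get 33." Saying each decision aloud makes the strategy visible to students.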
Explicit teaching includes guided practice with
scaffolding of the required problem-solving
steps.
Students need many opportunities to work
as a group and communicate their problem-
solving strategies for easy and difficult
problems. During guided practice, the
teacher provides support to enable students
to correctly solve problems, gradually
withdrawing support as students master
problem-solving steps.
Guided practice should include immediate
corrective feedback.
Extensive guided practice during Tier two
and Tier three instruction allows teachers to
provide students with immediate feedback
regarding the accuracy and appropriateness
of their problem-solving strategies. Feedback
should identify accurate understandings and
guide students to the correct responses when
they have made mistakes.
Use visual representations to explain math
concepts.
Intervention materials should provide
students opportunities to practice math
concepts with visual representations such as
number lines, arrays, and strip diagrams.
Interventionists should be proficient in the
use of visual representations.
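For example, a strip diagram for "Jon has 12 marbles; 5 are red and the rest are blue" shows one bar representing 12 partitioned into a section of 5 and an unknown section, making visible that the missing part is 12 - 5 = 7.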
Praise student effort and engagement.
Include motivational strategies in Tier two
and Tier three interventions, reinforcing
students' effort and attention to lessons and
rewarding accomplishments. Help students
persist by setting goals and charting
progress.
WHAT PRINCIPALS SAY
Principals can see how these actions are
implemented in schools by viewing these
web-based interviews with teachers and
specialists:
Explicit Instruction
Visual Representations
Concrete to Abstract Sequence
Organizing for Differentiation in the Core
Classroom
Pacing Instruction in Tier Three
Explicit Teaching in the Fifth-Grade Math
Core
TOOLS
The following tools and templates are
designed to help principals and teachers
implement these best practices in their
school. Each tool is a downloadable
document that principals can adapt to serve
their particular needs.
Concrete-Representational-Abstract (CRA)
Instructional Approach Summary Report: A
strategy for teaching fractions that includes
multiple opportunities for practice at each
level.
Solving Algebra and Other Story Problems
With Simple Diagrams: A strategy for
solving problems with visualization
methods.
Teaching Students Math Problem-Solving
Through Graphic Representations: Examples
of visual representation techniques.
Establish a systemwide framework for
RtI to support the three recommended
practices.
Implementation encompasses the
groundwork and support needed to put the
recommended practices into action. RtI
begins with a systemwide framework that
includes universal screening and progress
monitoring; content focused on whole
numbers in grades kindergarten through
fifth and rational numbers in grades four
through eight, including word problem
structure and daily fact fluency practice; and
systematic, explicit teaching incorporating
visualizations. Districts and schools need
leadership and guidance at all levels to
support implementation of a multi-tiered
system. State-level teams can inform policy
decisions and provide guidance on
assessments, instructional resources, and
funding allocation. Some states have
provided high-level support for RtI
implementation through special training or
technical assistance centers that are charged
with working with local districts and
schools. In those cases, district-level teams
can access professional development and
coaching in RtI implementation. Schools will
need to provide extensive training and
ongoing support to staff to ensure fidelity
and sustain an RtI framework.
ACTIONS
Build a comprehensive framework that
addresses reading and mathematics.
The components of an RtI framework are
consistent whether a district is focusing on
reading or math or is addressing
both subjects at the same time. Components
include: universal screening, ongoing
progress monitoring, quality core instruction
for all students, and a system of tiered
interventions to address students' skill needs.
When schools take stock of what they
already have in place, it is likely that some
components of an RtI system are already
part of the instructional program; for
example, a system of formative assessments
for monitoring progress. Districts and
schools will take different paths to building
out an RtI framework depending on what is
already working well and what components
they need to develop. Implementation will
involve cooperation among school personnel
at many levels, so it is a good idea to create a
leadership team that includes representatives
from special education, services to English
learners, subject matter and assessment
specialists, and classroom teachers. Full
implementation of an RtI framework usually
requires several years.
Establish core mathematics instructional
programs focused on foundational skills.
Once schools have established a core
curriculum aligned with state and district
standards, they can choose intervention
programs that are compatible with the core
program and then provide intensive
small-group or individualized instruction.
Alignment with the core program is not as
critical as ensuring that tiered instruction is
systematic and explicit and focuses on
high-priority mathematics skills.
Create leadership teams in districts and schools
to facilitate implementation of RtI
components.
Leadership teams are responsible for building
the infrastructure to support an effective RtI
framework. District teams are able to provide
guidance to schools in evaluating and
selecting core programs and assessment
measures, managing data collection and
reporting, and coordinating professional
development opportunities and instructional
resources across schools. Building-level
teams facilitate direct implementation of the
key components by coordinating staff and
resources, scheduling and other logistics, and
guiding teacher teams in using data and
providing interventions at the students' level
of need.
Provide professional development and
instructional supports to sustain high-quality
implementation.
Staff members will require adequate training
and support to successfully implement
recommended practices. Districts can work
with regional and state networks to bring
external opportunities for training and
technical assistance to schools. At the
building level, coaches and specialists can
provide staff development, ongoing
classroom support, and collaborative
experiences to advance teachers' skills in
implementing RtI components.
WHAT PRINCIPALS SAY
Principals can see how these actions are
implemented in schools by viewing these
web-based interviews with teachers and
specialists:
How RtI Changes Special Education
The Phases of RtI Implementation
Partnering General and Special Education
State Leadership: Building an RtI System
Setting the Stage for RtI Implementation
Lessons From Iowa About RtI
RtI Training for School Districts
Principal's Role in Instructional Decision
Making
Charting the Path
Powerful RtI Training Experiences
TOOLS
The following tools and templates are
designed to help principals and teachers
implement these best practices in their
school. Each tool is a downloadable
document that principals can adapt to serve
their particular needs.
RtI Self-Assessment Tool for Elementary
Schools: Tool that assesses ten readiness
categories for implementing an RtI
framework.
Scaling-Up Instruction and Technical
Assistance Briefs: Framework for developing
capacity to make statewide use of evidence-
based practice.
Funding Considerations for Implementing
RtI: Guide from the Pennsylvania
Department of Education with potential
funding sources for specific RtI components.
Professional Development Continuum:
Training continuum for staff on RtI
components, from beginning to advanced
level.
District Implementation Tracking Plan: A
resource from Oregon to prepare districts as
they implement RtI.
RtI Implementation Self-Report: A form for
reporting the status of implementation across
ten effectiveness indicators.
RtI Comprehensive Evaluation Tool: Tool
from Colorado to evaluate RtI components
and continuing needs.
Progress Monitoring Training Plan: A plan
for training staff on the principles of
progress monitoring and implementation.
RtI Parent Guides: Guide and a brochure
describing the RtI framework to parents.
Conclusion
Good math instruction is built on screening
and monitoring of students, teaching the
foundations of arithmetic, and practicing
intentional teaching. These three practices
are, in turn, supported by the Response to
Intervention (RtI) approach, which is
dedicated to helping students before they fall
behind. To ensure that math instruction gets
the attention it deserves, principals have
access to solutions through NAESP and the
DWW website. These solutions are based on
solid research and the best thinking
currently available in the field. As more
white papers in NAESP's Best Practices for
Better Schools series are developed,
principals will continue to find help in
addressing critical issues in elementary and
middle school education.
SITE PROFILES
Cornell Elementary School (Iowa): RtI has
been evolving at Cornell Elementary over
the last 15 years.
Durham Elementary School (Oregon):
Durham Elementary's key to success is its
strong building-level leadership team.
Tri-Community Elementary School
(Pennsylvania): Tri-Community's RtI
framework has helped foster school
turnaround.
John Wash Elementary School (California):
John Wash Elementary's framework involves
a three-tiered pyramid of instruction and
intervention.
Related Links
The Access Center: Resources to enhance
students with disabilities' access to general
education, including these reports:
Concrete-Representational-Abstract
(CRA) Instructional Approach
Direct or Explicit Instruction and
Mathematics
Strategy/Implicit Instruction and
Mathematics
Center on Instruction (COI): Funded by the
U.S. Department of Education, this research
hub offers free resources, including:
Curriculum-Based Measurement in
Mathematics: An Evidence-Based
Formative Assessment Procedure (PDF)
Early Mathematics Assessment (Power-
Point)
Pre-Algebra and Algebra Instruction and
Assessments (PowerPoint)
Principles of Effective Assessment for
Screening and Progress Monitoring
(PowerPoint)
Progress Monitoring for Elementary
Mathematics
Screening for Mathematics Difficulties in
K-3 Students (PDF)
Council for Exceptional Children (CEC):
Instructional Support Teams Help Sustain
Responsive Instruction Frameworks
IDEA Partnership: Response to Intervention
Collection: Library of tools, partnership
briefs and dialogue guides.
Institute of Education Sciences (U.S.
Department of Education): Organizing
Instruction and Study to Improve Student
Learning (PDF)
Iowa Heartland Area Education Agency
(AEA): IDM: How to Get Started and
Instructional Decision Making (IDM)
The IRIS Center: A national center devoted
to providing free, online interactive training
enhancements for educators working with
students with disabilities.
Math Forum: Mathematics and Motivation,
An Annotated Bibliography
MathVids: Instructional strategies, teaching
plans and videos for educators working with
struggling math students.
National Association of State Directors of
Special Education (NASDSE): RtI Project
National Council of Teachers of Mathemat-
ics: The Council's Illuminations library
provides hundreds of online activities and
lessons for educators.
National Center on Response to Intervention
(NCRtI): Housed at the American Institutes
for Research, the Center provides technical
assistance to states and districts for building
capacity for RtI implementation.
National Center on Student Progress
Monitoring: Math Web Resources Library:
Downloadable articles, PowerPoints, FAQs
and links to additional resources on student
monitoring.
National Research Center on Learning
Disabilities (NRCLD): Effective Mathematics
Instruction
Oregon Response to Intervention Project
(OrRTI): How the EBIS/RTI Process Works
in Elementary Schools (PDF)
Pennsylvania Training and Technical
Assistance Network (PaTTAN): Materials to
build capacity of education agencies serving
special education students.
RTI Action Network: An organization
devoted to guiding educators to RtI
implementation.
TeachingLD: Teaching How Tos (Math) and
Teaching Students Math Problem-Solving
Through Graphic Representations (PDF)
U.S. Department of Education (USDE):
Implementing RTI Using Title I, Title III,
and CEIS Funds: Key Issues for Decision-
Makers and Response to Intervention
Framework in Mathematics (PDF)
Research and Results
MASTERING MATH FACTS
The third stage of learning math facts: Developing automaticity
Donald B. Crawford, Ph.D. Otter Creek Institute
Everyone knows what automaticity is. It is the immediate, obligatory way we
apprehend the world around us. It is the fluent, effortless manner in which we perform
skilled behaviors. It is the popping into mind of familiar knowledge at the moment we
need it (Logan, 1991a, p. 347).
Fluency in computation and knowledge of math facts are part of the NCTM Math
Standards. As the explanation of the Number and Operations Standard states, Knowing
basic number combinations (the single-digit addition and multiplication pairs and their
counterparts for subtraction and division) is essential. Equally essential is computational
fluency: having and using efficient and accurate methods for computing (NCTM, 2003,
p. 32).
Later within the explanation of this standard, it is stated that, By the end of
grade 2, students should know the basic addition and subtraction combinations, should be
fluent in adding two-digit numbers, and should have methods for subtracting two-digit
numbers. At the grades 3-5 level, as students develop the basic number combinations for
multiplication and division, they should also develop reliable algorithms to solve
arithmetic problems efficiently and accurately. These methods should be applied to larger
numbers and practiced for fluency (NCTM, 2003, p. 35).
For children to be fluent in computation and able to do math mentally,
they must become fluent in the basic arithmetic facts. Researchers have long
maintained there are three general stages of learning math facts and different types of
instructional activities to effect learning in those stages (Ando & Ikeda, 1971; Ashlock,
1971; Bezuk & Cegelka, 1995; Carnine & Stein, 1981; Garnett, 1992; Garnett &
Fleischner, 1983). Researchers such as Garnett (1992) have carefully documented the
developmental sequence of procedures that children use to obtain the answers to math
facts. Children begin with a first stage of counting, or procedural knowledge of math
facts. A second stage consists of developing ways to remember math facts by relating
them to known facts. Then the third and final stage is the declarative knowledge stage of
knowing the facts, or direct retrieval. Another line of research has established that
most children move from using procedures to figure out math facts in the first grade to
the adult model of retrieval by fifth grade (Koshmider & Ashcraft, 1991). The switch
from predominantly procedural to predominantly declarative knowledge or direct
retrieval of math facts normally begins at around the 2nd to 3rd grade level (Ashcraft, 1984).
If these three levels of knowing math facts are attainable, what does that imply
about how to teach math facts? Different teaching and practice procedures apply to each
of these stages. Meyers & Thorton (1977) suggested activities specifically geared toward
different levels, including activities for overlearning intended to promote rapid recall of
a set or cluster of facts. As Ashcraft (1982) points out, there is much more evidence of
direct retrieval of facts when you study adults or children in 4th grade or above. There is
much research on the efficacy of deriving facts from relationships developed in studies
on addition and subtraction with primary age children (Carpenter & Moser, 1984;
Garnett, 1992; Steinberg, 1985; Thorton, 1978). However, it is not always clear that this
research focuses on soon-to-be-discarded methods of answering math facts.
Some of the advice on how to teach math facts is unclear about which stage is
being addressed. The term learning has been variously applied to all three stages, and
the term memorization has been used to apply to both the second and third stages
(Stein, Silbert & Carnine, 1997). What is likely is that teaching strategies that address
earlier stages of learning math facts may be counterproductive when attempting to
develop automaticity. This paper will examine research on math fact learning as it
applies to each of the three stages, with an emphasis on the third stage, the development
of automaticity.
First stage: Figuring out math facts
The first stage has been characterized as understanding, or conceptual or
procedural. This is the stage where the child must count or do successive addition, or
some other strategy to figure out the answer to a fact (Garnett, 1992). Specifically in
view is the child's ability to associate the written number sentence with a physical
referent (Ashlock, 1971, p. 359).
There is evidence that problems with learning mathematics may have their origins
in failure to completely master counting skills in kindergarten and first grade (Geary,
Bow-Thomas, & Yao, 1992). At the beginning stage arithmetic facts are problems to be
solved (Gersten & Chard, 1999). If students cannot solve basic fact problems given
plenty of time, then they simply do not understand the process, and certainly are not
ready to begin memorization (Koscinski & Gast, 1993). Students must understand the
process well enough to solve any fact problem given them before beginning
memorization procedures. To ensure continued success and progress in mathematics,
students should be taught conceptual understanding prior to memorization of the facts
(Miller, Mercer, & Dillon, 1992, p. 108). Importantly, the conceptual understanding, or
procedural knowledge of counting should be developed prior to memorization, rather
than in place of it. Prior to teaching for automaticity it is best to develop the
conceptual understanding of these math facts as procedural knowledge (Bezuk &
Cegelka, 1995, p. 365).
Second stage: Strategies for remembering math facts
The second stage has been characterized as relating or as strategies for
remembering. This can include pairs of facts related by the commutative property, e.g.,
5 + 3 = 3 + 5 = 8. This can also include families of facts such as 7 + 4 = 11, 4 + 7 = 11,
11 - 4 = 7, and 11 - 7 = 4. Garnett characterizes such strategies as more mature than
counting procedures indicating, Another mature strategy is linking one problem to a
related problem (e.g., for 5 + 6, thinking 5 + 5 =10, so 5 + 6 = 11) (Garnett, 1992, p.
212). The goal of the second stage is simple accuracy rather than developing fluency or
automaticity. Studies looking at use of strategies for remembering seldom use timed
tests, and never have rigorous expectations of highly fluent performance. As long as
students are accurate, a remembering strategy is considered successful.
Carpenter and Moser (1984) reported on a longitudinal study of children's
evolving methods of solving simple addition and subtraction problems in first through 3rd
grade. They identified sub-stages within counting as well as variations on recall
strategies. The focus of their study was on naturalistically developed strategies that
children had made up to solve problems they were presented, rather than evaluating a
teaching sequence for its efficiency. For example, they noted that children were not
consistent in using the most efficient strategy, even when they sometimes used that
method. Carpenter and Moser did caution that, It has not been clearly established that
instruction should reflect the natural development of concepts and skills (1984, p. 200).
Garnett (1992) identified similar stages and sub-stages in examining children with
learning disabilities.
Successful students often hit upon a strategy for remembering simple facts early on,
whereas less successful students lack such a strategy and may simply guess (Thorton, 1978;
Carnine & Stein, 1981). Research has demonstrated that teaching the facts to students,
especially those who don't develop their own strategies, in some logical order that
emphasizes the relationships, makes them easier to remember. Many studies have been
reported in which various rules and relationships are taught to help children derive the
answers to math facts (Carnine & Stein, 1981; Rightsel & Thorton, 1985; Steinberg,
1985; Thorton, 1978; Thorton & Smith, 1988; Van Houten, 1993).
An example of such a relationship rule is the doubles plus 1 rule. This rule
applies to facts such as 8 + 9, which is the same as 8 + 8 plus 1; because 8 + 8 =
16 and 16 + 1 = 17, it follows that 8 + 9 = 17 (Thorton, 1978). Most authors suggest that
learning the facts in terms of relationships is an intermediate stage between the stage of
conceptual development, which involves using a procedure, and the stage of practicing to
develop automatic recall (Steinberg, 1985; Stein et al., 1997; Suydam,
1984; Thorton, 1978). Research has indicated that helping them [students] to develop
thinking strategies is an important step between the development of concepts with
manipulative materials and pictures and the mastery of the facts with drill-and-practice
activities [emphasis added] (Suydam, 1984, p. 15). Additional practice activities are
needed (after teaching thinking strategies) in order to develop automatic, direct retrieval
of facts.
Thorton suggested that curriculum and classroom efforts should focus more
carefully on the development of strategy prior to drill on basic facts (1978, p. 226).
Thorton's (1978) research showed that using relationships such as doubles and doubles
plus 1, and other tricks to remember facts as part of the teaching process resulted in
more facts being learned after eight weeks than in classes where such aids to memory
were not taught. Similarly, Carnine and Stein (1981) found that students instructed with
a strategy for remembering facts learned a set of 24 facts with higher accuracy than
students who were presented the facts to memorize with no aids to remembering (84% vs.
59%).
Steinberg (1985) studied the effect of teaching noncounting, derived facts
strategies in which the child uses a small set of known number facts to find or derive the
solution to unknown number facts (1985, p. 337). The subjects were second graders
and the focus was on addition and subtraction facts. The children changed strategies
from counting to using the derived facts strategies and did write the answers to more of
the fact problems within the 2 seconds allotted per problem than before such instruction.
Steinberg did not find that teaching derived facts strategies necessarily led to automaticity
of number facts.
Thorton and Smith (1988) found that after teaching strategies and relationship
activities to first graders, the students correctly answered more facts than a control group.
In the timed tests, rates were about 20 problems per minute in the experimental group and
about 10 per minute in the traditional group. Although rates of responding did not
indicate automaticity, approximately 55% of the experimental group, compared to 12% in
the control group, reported that rather than using counting strategies, they had mostly
memorized a set of 9 target subtraction facts.
Van Houten (1993) specifically taught rules to children (although the rules were
unlike those normally used by children, or discussed in the literature) and found that the
children learned a set of seven facts more rapidly when there was a rule to remember
them with compared to a random mix of seven facts with no relationships. However, as
in the other studies, Van Houten did not measure development of automaticity through
some kind of timing criteria. Instead accuracy was the only measure of learning.
Isaacs & Carroll (1999) listed a potential instructional sequence of relationships to
be explored for addition and subtraction facts:
1. Basic concepts of addition; direct modeling and counting all for addition
2. The 0 and 1 addition facts; counting on; adding 2
3. Doubles (6 + 6, 8 + 8, etc.)
4. Complements of 10 (9 + 1, 8 + 2, etc.)
5. Basic concepts of subtraction; direct modeling for subtraction
6. Easy subtraction facts (-0, -1, and -2 facts); counting back to subtract
7. Harder addition facts; derived-fact strategies for addition (near doubles, over-10
facts)
8. Counting up to subtract
9. Harder subtraction facts; derived-fact strategies for subtraction (using addition
facts, over-10 facts) (Isaacs & Carroll, 1999, p. 511-512).
Baroody noted that, mastery of the facts would include discovering, labeling,
and internalizing relationships. Meaningful instruction (the teaching of thinking
strategies) would probably contribute more directly to this process than drill approach
alone (1985, p. 95). Certainly the research is clear that learning rules and relationships
in the second stage helps students learn to give the correct answer to math facts. And
with the variety of rules that have been researched and found to be effective, the exact
nature of the rule children use is apparently immaterial. It appears that any rule or
strategy that allows the child to remember the correct answer ensures that practice will be
perfect practice and learning will be optimized. Performance may deteriorate if
practice is not highly accurate, because if the student is allowed to give incorrect
answers, those will then compete with the correct answer in memory (Goldman &
Pellegrino, 1986). Learning facts in this second stage develops accurate but not
necessarily automatic responding, which would be measured by rapid rates of
responding.
Third stage: Developing automaticity with math facts
The third stage of learning math facts has been called mastery, or overlearning, or
the development of automaticity. In this stage children develop the capacity to simply
recall the answers to facts without resorting to anything other than direct retrieval of the
answer. Ashlock (1971) indicated that children must have immediate recall of the
basic facts so they can use them with facility in computation. If not, he may find it
difficult to develop skill in computation for he will frequently be diverted into tangential
procedures (1971, p. 363).
When automaticity is developed, one of its most notable traits is speed of
processing. Proficient levels of performance go beyond the accuracy (quality) of an
acquired skill to encompass sufficient speed (quantity) of performance. It is this sort of
proficiency with basic facts, rather than accuracy per se, which is so notably lacking in
many learning disabled childrens computation performance (Garnett & Fleischner, 1983,
p. 224).
The development of automaticity has been studied extensively in the psychology
literature. Research by psychologists on math facts and other variations on automatic fact
retrieval has been extensive, probably because Children's acquisition of skill at mental
addition is a paradigm case of automaticity (Logan & Klapp, 1991a, p. 180).
While research continues to refine the details of the models of the underlying
processes, there has for some time been a clear consensus that adults retrieve the
answers to math facts directly from memory. Psychologists argue that the process
underlying automaticity is memory retrieval: According to these theories, performance is
automatic when it is based on direct-access, single-step retrieval of solutions from
memory rather than some algorithmic computation (Logan & Klapp, 1991a, p. 179).
When facts have been well practiced, they are remembered quickly and
automatically, which frees up other mental processes to use the facts in more complex
problems (Ashcraft, 1992; Campbell, 1987b; Logan, 1991a). A variety of careful
psychological experiments have demonstrated that adults no longer use rules,
relationships, or procedures to derive the answers to math facts; they simply remember
or retrieve them (Ashcraft, 1985; Campbell, 1987a; Campbell, 1987b; Campbell &
Graham, 1985; Graham, 1987; Logan, 1988; Logan & Klapp, 1991a; Logan & Klapp,
1991b; McCloskey, Harley, & Sokol, 1991).
For example, Logan and Klapp (1991a) did experiments with alphabet
arithmetic where each letter corresponded to a number (A=1, B=2, C=3, etc.). In these
studies college students initially experienced the same need to count to figure out sums
that elementary children did. In the initial-counting stage the time to answer took longer
the higher the addends (requiring more time to count up). However, once these alphabet
facts became automatic, they took the same amount of time regardless of the addends,
and they answered in less time than they could possibly have counted. In addition, they found
that when students were presented with new items they had not previously seen, they
reverted to counting, and the time to answer went up dramatically and was again related to
the size of the addends. When the students were presented with a mix of items, some of
which required counting and others which they had already learned, they were still able
to answer the ones they had previously learned roughly as fast as they had before.
Response times were able to clearly distinguish between the processes used on facts the
students were figuring out vs. the immediate retrieval process for facts that had become
automatic.
Does the same change in processes apply to children as they learn facts? Another
large set of experiments has demonstrated that as children mature they transition from
procedural (counting) methods of solving problems to the pattern shown by adults of
direct retrieval. (Ashcraft, 1982; Ashcraft, 1984; Ashcraft, 1992; Campbell, 1987;
Campbell & Graham, 1985; Geary & Brown, 1991; Graham, 1987; Koshmider &
Ashcraft, 1991; Logan, 1991b; Pellegrino & Goldman, 1987). There is evidence that
adults and children use strategies and rules for certain facts (Isaacs & Carroll, 1999).
However, this phenomenon appears to be limited to facts adding + 0 or + 1 or
multiplying x 0 or x 1, and therefore does not indicate adult use of such strategies and
rules for any of the rest of the facts (Ashcraft, 1985).
Interestingly, most of the research celebrating strategies and rules relates to
addition and subtraction facts. Once children are expected to become fluent in recalling
multiplication facts, such strategies are inadequate to the task, and memorization seems to
be the only way to accomplish the goal of mastery. Campbell and Graham's (1985) study
of error patterns in multiplication facts by children demonstrated that as children matured
errors became limited to products related correctly to one or the other of the factors, such
as 4 X 7 = 24 or 6 X 3 = 15 (see also Campbell, 1987b; Graham, 1987). They
concluded that the acquisition of simple multiplication skill is well described as a
process of associative bonding between problems and candidate answers....determined by
the relative strengths of correct and competing associations ... not a consequence ... of the
execution of reconstructive arithmetic procedures (Campbell & Graham, 1985, p. 359).
In other words, we remember answers based on practice with the facts rather than using
some procedure to derive the answers. And what causes difficulty in learning facts is that
when we mistakenly bring up the answers to other, similar math fact problems, we
sometimes fail to correctly discard these errors (Campbell, 1987b).
Graham (1987) reported on a study where students who learned multiplication
facts in a mixed order had little or no difference in response times between smaller and
larger fact problems. Graham's recommendation was that facts be arranged in sets where
commonly confused problems are not taught together. In addition he suggested, it may
be beneficial to give the more difficult problems (e.g. 3 x 8, 4 x 7, 6 x 9, and 6 x 7) a head
start. This could be done by placing them in the first set and requiring a strict
performance criterion before moving on to new sets (Graham, 1987, p. 139).
Ashcraft has studied the development of mental arithmetic extensively (Ashcraft,
1982; Ashcraft, 1984; Ashcraft, 1992; Koshmider & Ashcraft, 1991). These studies have
examined response times and error rates over a range of ages. The picture is clear. In
young children who are still counting, response rates are consistent with counting rates.
Ashcraft found that young children took longer to answer facts: long enough to count
the answers. He also found that they took longer in proportion to the sum of the
addends, an indication that they were counting. After children learn the facts, the
picture changes. Once direct retrieval is the dominant mode of answering, response rates
decrease from over 3 seconds down to less than one second on average. Ashcraft notes
that other researchers have demonstrated that while young children use a variety of
approaches to come up with the answer (as they develop knowledge of the math facts),
eventually these are all replaced by direct retrieval in 5th and 6th grade in normally achieving children.
Ashcraft and Christy (1995) summarized the model of math fact performance
from the perspective of cognitive psychology. ...the whole-number arithmetic facts are
eventually stored in long-term memory in a network-like structure .... at a particular level
of strength, with strength values varying as a function of an individual's practice and
experience....because these variables determine strength in memory (Ashcraft & Christy,
1995, p. 398). Interestingly enough, Ashcraft and Christy's (1995) examination of
math texts led to the discovery that, in both addition and multiplication, smaller facts (2s-
5s) occur about twice as frequently as larger facts (6s-9s). This may in part explain why
larger facts are less well learned and have slightly slower response times.
Logan's (1991b) studies of the transition to direct retrieval show that learners start
the process of figuring out the answer and use retrieval if the answer is remembered
before the process is completed. For a time there is a mixture of some answers being
retrieved from memory while others are being figured out. Once items are learned to the
automatic level, the answer occurs to the learner before they have time to count. Geary
and Brown (1991) found the same sort of evolving mix of strategies among gifted,
normal, and math-disabled children in the third and fourth grades.
Why is automaticity with math facts important?
Psychologists have long argued that higher-level aspects of skills require that
lower level skills be developed to automaticity. Turn-of-the-century psychologists
eloquently captured this relationship in the phrase, Automaticity is not genius, but it is
the hands and feet of genius. (Bryan & Harter, 1899; as cited in Bloom, 1986).
Automaticity occurs when tasks are learned so well that performance is fast, effortless,
and not easily susceptible to distraction (Logan, 1985).
An essential component of automaticity with math facts is that the answer must
come by means of direct retrieval, rather than following a procedure. This is tantamount
to the common observation that students who count on their fingers have not mastered
the facts. Why is counting on your fingers inadequate? Why isn't procedural
knowledge or counting a satisfactory end point? Although correct answers can be
obtained using procedural knowledge, these procedures are effortful and slow, and they
appear to interfere with learning and understanding higher-order concepts (Hasselbring,
Goin and Bransford, 1988, p. 2). The notion is that the mental effort involved in
figuring out facts tends to disrupt thinking about the problems in which the facts are
being used.
Some of the argument of this information-processing dilemma was developed by
analogy to reading, where difficulty with the process of simply decoding the words has
the effect of disrupting comprehension of the message. Gersten and Chard illuminated
the analogy between reading and math rather explicitly. Researchers explored the
devastating effects of the lack of automaticity in several ways. Essentially they argued
that the human mind has a limited capacity to process information, and if too much
energy goes into figuring out what 9 plus 8 equals, little is left over to understand the
concepts underlying multi-digit subtraction, long division, or complex multiplication
(1999, p. 21).
Practice is required to develop automaticity with math facts. The importance of
drill on components [such as math facts] is that the drilled material may become
sufficiently over-learned to free up cognitive resources and attention. These cognitive
resources may then be allocated to other aspects of performance, such as more complex
operations like carrying and borrowing, and to self-monitoring and control (Goldman &
Pellegrino, 1986, p. 134). This suggests that even mental strategies or mnemonic tricks
for remembering facts must be replaced with simple, immediate, direct retrieval, or else the
strategy for remembering itself will interfere with the attention needed on the more
complex operations. For automaticity with math facts to have any value, the answers
must be recalled with no effort and little conscious attention, because we want the
conscious attention of the student directed elsewhere.
For students to be able to recall facts quickly in more complex computational
problems, research tells us the students must know their math facts at an acceptable level
of automaticity. Therefore, teachers...must be prepared to supplement by providing
more practice, as well as by establishing rate criteria that students must achieve. (Stein et
al., 1997, p. 93). The next question is what rate criteria are appropriate?
How fast is fast enough to be automatic?
Some educational researchers consider facts to be automatic when a response
comes in two or three seconds (Isaacs & Carroll, 1999; Rightsel & Thorton, 1985;
Thorton & Smith, 1988). However, performance is, by definition, not automatic at rates
that purposely allow enough time for students to use efficient strategies or rules for
some facts (Isaacs & Carroll, 1999, p. 513).
Most of the psychological studies have looked at automatic response time as
measured in milliseconds and found that automatic (direct retrieval) response times are
usually in the ranges of 400 to 900 milliseconds (less than one second) from presentation
of a visual stimulus to a keyboard or oral response (Ashcraft, 1982; Ashcraft, Fierman &
Bartolotta, 1984; Campbell, 1987a; Campbell, 1987b; Geary & Brown, 1991; Logan,
1988). Similarly, Hasselbring and colleagues felt students had automatized math facts
when response times were down to around 1 second from presentation of a stimulus
until a response was made (Hasselbring et al., 1987). If, however, students are shown
the fact and asked to read it aloud, then a second has already passed, in which case no
delay should be expected after reading the fact. We consider mastery of a basic fact as
the ability of students to respond immediately to the fact question (Stein et al., 1997, p.
87).
In most school situations students are tested on one-minute timings. Expectations
of automaticity vary somewhat. Translating a one-second response time directly into
writing answers for one minute would produce 60 answers per minute. Sixty problems
per minute is exactly in the middle of the range of 40 to 80 problems per minute shown
by adult teachers in this author's workshops. However, some children, especially in the
primary grades, cannot write that quickly. In establishing mastery rate levels for
individuals, it is important to consider the learner's characteristics (e.g., age, academic
skill, motor ability). For most students a rate of 40 to 60 correct digits per minute [25 to
35 problems per minute] with two or fewer errors is appropriate (Mercer & Miller, 1992,
p. 23).
Howell and Nolet (2000) recommend an expectation of 40 correct facts per
minute, with a modification for students who write at less than 100 digits per minute.
The number of digits per minute is a percentage of 100 and that percentage is multiplied
by 40 problems to give the expected number of problems per minute; for example, a child
who can only write 75 digits per minute would have an expectation of 75% of 40 or 30
facts per minute.
One goal is to develop enough math fact skill in isolation, so that math fact speed
will continue to improve as a result of using facts in more complex problems. Miller and
Heward discussed the research finding that students who are able to compute basic math
facts at a rate of 30 to 40 problems correct per minute (or about 70 to 80 digits correct per
minute) continue to accelerate their rates as tasks in the math curriculum become more
complex....[however]...students whose correct rates were lower than 30 per minute
showed progressively decelerating trends when more complex skills were introduced.
The minimum correct rate for basic facts should be set at 30 to 40 problems per minute,
since this rate has been shown to be an indicator of success with more complex tasks
(1992, p. 100). If a range from 30 to 40 problems per minute is recommended, the
higher end of 40 would be more likely to continue to accelerate than the lower end at 30,
making 40 a better goal.
Another recommendation was that the criterion be set at a rate [in digits per
minute] that is about 2/3 of the rate at which the student is able to write digits (Stein et
al., 1997, p. 87). For example a student who could write 100 digits per minute would be
expected to write 67 digits per minute, which translates to between 30 and 40 problems
per minute.
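Both writing-speed adjustments reduce to simple arithmetic. A minimal sketch follows (the function names are illustrative, and capping the writing rate at 100 digits per minute in the Howell and Nolet rule is an assumption, since their adjustment is described only for students who write fewer than 100 digits per minute):

```python
def howell_nolet_expectation(digits_per_minute, base_rate=40):
    # Writing speed as a percentage of 100 digits per minute,
    # multiplied by the 40-facts-per-minute expectation.
    return base_rate * min(digits_per_minute, 100) / 100

def stein_criterion_digits(digits_per_minute):
    # Criterion digits per minute: about 2/3 of the digit-writing rate.
    return round(digits_per_minute * 2 / 3)

print(howell_nolet_expectation(75))  # 30.0 facts per minute
print(stein_criterion_digits(100))   # 67 digits per minute
```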
In summary, if measured individually, a response delay of up to 1 second would
be automatic. In writing, a rate of 30 to 40 problems per minute seems to be the minimum, although a good
case can be made that setting an expectation of 40 would be more prudent. However,
students who can write that quickly can be expected to develop their fluency up to about
60 problems per minute. Sadly, many school districts have expectations
as low as 16 to 20 problems per minute (from standards of 50 problems in three minutes or
100 problems in five minutes). Children can count answers on their fingers at the rate of
20 per minute. Such low expectations pass children who have only developed
procedural knowledge of how to figure out the facts, rather than the direct recall of
automaticity.
What type of practice effectively leads to automaticity?
Some children, despite a great deal of drill and practice on math facts, fail to
develop fluency or automaticity with the facts. Hasselbring and Goin (1988) reported
that researchers who examined the effects of computer-delivered drill-and-practice
programs have generally reported that computer-based drill-and-practice seldom leads to
automaticity, especially amongst children with learning disabilities.
Ashlock explained the reason was that, the major result of practice is increased
rate and accuracy in doing the task that is actually practiced. For example, the child who
practices counting on his fingers to get missing sums usually learns to count on his
fingers more quickly and accurately. The child who uses skip counting or repeated
addition to find a product learns to use repeated addition more skillfully, but he
continues to add. Practice does not necessarily lead to more mathematically mature
ways of finding the missing number or to immediate recall as such....If the process to be
reinforced is recalling, then it is important that the child feel secure to state what he
recalls, even if he will later check his answer...(1971, p. 363)
Hasselbring et al. stated that, From our research we have concluded that if a
student is using procedural knowledge (i.e., counting strategies) to solve basic math facts,
typical computer-based drill and practice activities do not produce a developmental shift
whereby the student retrieves the answers from memory (1988, p. 4). This author
experienced the same failure of typical practice procedures to develop fluency in his own
students with learning disabilities. Months and sometimes multiple years of practice
resulted in students who were barely more fluent than they were at the start.
Hasselbring and Goin noted that the minimal increases in fluency seen in such
students
can be attributed to students becoming more efficient at counting and not
due to the development of automatic recall of facts from memory....We
found that if a learner with a mild disability has not established the ability
to retrieve an answer from memory before engaging in a drill-and-practice
activity then time spent in drill-and-practice is essentially wasted. On the
other hand, if a student can retrieve a fact from memory, even slowly, then
use of drill-and-practice will quickly lead to the fluent recall of that
fact....the key to making computer-based drill an activity that will lead to
fluency is through additional instruction that will establish information in
long-term memory....Once acquisition occurs (i.e., information is stored in
long term memory), then drill-and-practice can be used effectively to
make the retrieval of this information fluent and automatic (1988, p.
203).
For practice to lead to automaticity, students must be recalling the facts, rather than
deriving them.
Other researchers have found the same phenomenon, although they have used
different language. If students can recall answers to fact problems, rather than derive
them from a procedure, those answers have been learned. Therefore continued practice
could be called overlearning. For example, Drill and practice software is most
effective in the overlearning phase of learning; i.e., effective drill and practice helps the
student develop fast and efficient retrieval processes (Goldman & Pellegrino, 1986).
If students are not recalling the answers, and recalling them accurately,
while they are practicing, the practice is not valuable. So teaching students some variety
of memory aid would seem to be necessary. If the students can't recall a fact directly or
instantly, recalling the trick lets them get the right answer, and then they continue to
practice remembering the right answer.
However, there may be another way to achieve the same result. Garnett, while
encouraging student use of a variety of relationship strategies in the second stage, hinted
at the answer when she suggested that teachers should press for speed (direct retrieval)
with a few facts at a time (1992, p. 213).
How efficient is practice when a small set of facts is practiced (small enough to
remember easily)? Logan and Klapp (1991a) found that college students required less
than 15 minutes of practice to develop automaticity on a small set of six alphabet
arithmetic facts. The experimental results were predicted by theories that assume that
memory is the process underlying automaticity. According to those theories,
performance is automatic when it is based on single-step, direct-access retrieval of a
solution from memory; in principle, that can occur after a single exposure (Logan &
Klapp, 1991a, p. 180).
This suggests that if facts were learned in small sets, which could easily be
remembered after a couple of presentations, the process of developing automaticity with
math facts could proceed with relatively little pain and considerably less drill than is
usually associated with learning all the facts. The conclusion that automatization
depends on the number of presentations of individual items rather than the total amount
of practice has interesting implications. It suggests that automaticity can be attained very
quickly if there is not much to be learned. Even if there is much to be learned, parts of it
can be automatized quickly if they are trained in isolation (Logan and Klapp, 1991a, p.
193).
Logan and Klapp's studies show that, The crucial variable [in developing
automaticity] is the number of trials per item, which reflects the opportunity to have
memorized the items (1991a, p. 187). So if children are only asked to memorize a
couple of facts at a time, they could develop automaticity fairly quickly with those facts.
Apparently extended practice is not necessary to produce automaticity. Twenty minutes
of rote memorization produced the same result as 12 sessions of practice on the task!
(Logan, 1991a, p. 355).
Has this in fact been demonstrated with children? As has been noted earlier, few
researchers focus on the third stage, the development of automaticity. And there are even
fewer studies that instruct on anything less than 7 to 10 facts at a time. However, in cases
where small sets of facts have been used, the results have been uniformly successful,
even with students who previously had been unable to learn math facts. Cooke and
colleagues also found evidence favoring practicing math facts to automaticity, suggesting that
greater fluency can be achieved when the instructional load is limited to only a few new
facts interspersed with a review of other fluent facts (1993, p. 222). Stein et al. indicate
that a set of new facts should consist of no more than four facts (1997, p. 87).
Hasselbring et al. found that,
Our research suggests that it is best to work on developing
declarative knowledge by focusing on a very small set of new target facts
at any one time: no more than two facts and their reversals. Instruction
on this target set continues until the student can retrieve the answers to the
facts consistently and without using counting strategies.
We begin to move children away from the use of counting
strategies by using controlled response times. A controlled response
time is the amount of time allowed to retrieve and provide the answer to a
fact. We normally begin with a controlled response time of 3 seconds or
less and work down to a controlled response time around 1.25 seconds.
We believe that the use of controlled response times may be the
most critical step to developing automatization. It forces the student to
abandon the use of counting strategies and to retrieve answers rapidly
from the declarative knowledge network.
If the controlled response time elapses before the child can
respond, the student is given the answer and presented with the fact again.
This continues until the child gives the correct answer within the
controlled response time (1988, p. 4) [emphasis added].
Giving the answer to the student, rather than allowing them to derive the answer
changes the nature of the task. Instead of simply finding the answer the student is
involved in checking to see if he or she remembers the answer. If not, the student is
reminded of the answer and then gets more opportunities to practice remembering the
fact's answer. Interestingly, this finding (that it is important to allow the student only a
short amount of time before providing the answer) has been studied extensively as a
procedure psychologists call constant time delay.
Several studies have found that using a constant time delay procedure alone for
teaching math facts is quite effective (Bezuk & Cegelka, 1995). This near-errorless
technique means that if the student does not respond within the time allowed, a
controlling prompt (typically a teacher modeling the correct response) is provided. The
student then repeats the task and the correct answer (Koscinski & Gast, 1993b). The
results have shown that this type of near-errorless practice has been effective whether
delivered by computer (Koscinski & Gast, 1993a) or by teachers using flashcards
(Koscinski & Gast, 1993b).
Two of the aspects of the constant time delay procedures are instructionally
critical. One is that the time allowed is on the order of 3 or 4 seconds, ensuring that
students are using memory retrieval rather than reconstructive procedures; in other words,
they are remembering rather than figuring out the answers. Second, if the student fails to
remember, he or she is immediately told the answer and asked to repeat it. So it
becomes clear that the point of the task is to remember the answers rather than continue
to derive them over and over.
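The logic of such a drill is easy to picture in code. The sketch below is only an approximation of the constant time delay procedure, with assumed names and a console prompt standing in for the teacher; a real implementation would interrupt the student when the response window elapses, whereas this sketch simply checks the elapsed time after the response is entered:

```python
import time

def constant_time_delay_drill(facts, window=3.0, passes_needed=4):
    # Drill a very small set, e.g., two facts and their reversals.
    streaks = dict.fromkeys(facts, 0)
    queue = list(facts)
    while queue:
        a, b = queue.pop(0)
        start = time.monotonic()
        reply = input(f"{a} + {b} = ")
        elapsed = time.monotonic() - start
        if reply.strip() == str(a + b) and elapsed <= window:
            streaks[(a, b)] += 1
        else:
            # Controlling prompt: model the correct response and reset.
            print(f"{a} + {b} = {a + b}")
            streaks[(a, b)] = 0
        if streaks[(a, b)] < passes_needed:
            # Re-present the fact after at most two other items.
            queue.insert(min(2, len(queue)), (a, b))

constant_time_delay_drill([(2, 7), (7, 2)])
```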
These aspects of constant time delay teaching procedures are the same ones that
were found to be effective by Hasselbring and colleagues. Practicing on a small set of
facts that are recalled from memory until those few facts are answered very quickly
contrasts sharply with the typical practice of timed tests on all 100 facts at a time. In
addition, the requirement that these small sets are practiced until answers come easily, in a
matter of one or two seconds, is also unusual. Yet, research indicates that developing
very strong associations between each small set of facts and their answers (as shown by
quick, correct answers), before moving on to learn more facts will improve success in
learning.
Campbell's research found that most difficulty in learning math facts resulted
from interference from correct answers to allied problems, where one of the factors
was the same (Campbell, 1987a; Campbell, 1987b; Campbell & Graham, 1985). He
asked the question, How might interference in arithmetic fact learning be
minimized? If strong correct associations are established for problems and answers
encountered early in the learning sequence (i.e. a high criterion for speed and accuracy),
those problems should be less susceptible to retroactive interference when other problems
and answers are introduced later. Furthermore, the effects of proactive interference on
later problems should also be reduced (Campbell, 1987b, p. 119-120).
Because confusion among possible answers is the key problem in learning math
facts, the solution is to establish a mastery-learning paradigm, where small sets of facts
are learned to high levels of mastery, before adding any more facts to be learned.
Researchers have found support for the mastery-learning paradigm, when they have
looked at the development of automaticity. The mechanics of achieving this gradual
mastery-learning paradigm have been outlined by two sets of authors. Hasselbring et al.
found that,
As stated, the key to making drill and practice an activity that will lead to
automaticity in learning handicapped children is additional instruction for establishing a
declarative knowledge network. Several instructional principles may be applied in
establishing this network:
1. Determine the learner's level of automaticity.
2. Build on existing declarative knowledge.
3. Instruct on a small set of target facts.
4. Use controlled response times.
5. Intersperse automatized with targeted nonautomatized facts during instruction.
(1988, p. 4).
The teacher cannot focus on a small set of facts and intersperse them with facts
that are already automatic until after an assessment determines which facts are already
automatic. Facts that are answered without hesitation after the student reads them aloud
would be considered automatic. Facts that are read to the student should be answered
within a second.
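That assessment step can be sketched as follows (the one-second threshold comes from the discussion above; the data shape and names are assumptions for illustration):

```python
def automatic_facts(response_times, threshold=1.0):
    # Facts answered within about one second after being read aloud
    # are treated as already automatic.
    return {fact for fact, secs in response_times.items() if secs <= threshold}

known = automatic_facts({"3+4": 0.8, "8+7": 2.6, "6+6": 0.9})
# known -> {'3+4', '6+6'}; '8+7' belongs in the small target set
```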
Silbert, Carnine, and Stein recommended a similar make-up for a set of facts for
flashcard instruction. Before beginning instruction, the teacher makes a pile of flash
cards. This pile includes 15 cards: 12 should have the facts the student knew instantly on
the tests, and 3 would be facts the student did not respond to correctly on the test. (1990,
p. 127).
These authors also recommended procedures for flashcard practice similar to the
constant time delay teaching methods. If the student responds correctly but takes
longer than 2 seconds or so, the teacher places the card back two or three cards from the
front of the pile. Likewise, if the student responds incorrectly, the teacher tells the
student the correct answer and then places the card two or three cards back in the pile.
Cards placed two or three back from the front will receive intensive review. The teacher
would continue placing the card two or three places back in the pile until the student
responds acceptably (within 2 seconds) four times in a row (Silbert et al., 1990, p. 130).
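The pile construction and card placement rules translate directly into a short sketch (the function names and data shapes are mine, not Silbert et al.'s):

```python
import random

def build_pile(known_facts, missed_facts):
    # 15 cards: 12 answered instantly on the test, 3 answered incorrectly.
    pile = list(known_facts[:12]) + list(missed_facts[:3])
    random.shuffle(pile)
    return pile

def place_card(pile, card, acceptable, streaks, needed=4):
    # An acceptable response (correct, within about 2 seconds) extends
    # the card's streak; four in a row retires the card.  Otherwise the
    # card goes two or three cards back from the front for intensive review.
    if acceptable:
        streaks[card] = streaks.get(card, 0) + 1
        if streaks[card] < needed:
            pile.append(card)
    else:
        streaks[card] = 0
        pile.insert(min(2, len(pile)), card)
```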
Hasselbring et al. described how their computer program Fast Facts provided a
similar, though slightly more sophisticated presentation order during practice. Finally,
our research suggests that it is best to work on developing a declarative knowledge
network by interspersing the target facts with other already automatized facts in a
prespecified, expanding order. Each time the target fact is presented, another
automatized fact is added as a spacer so that the amount of time between presentations
of the target fact is expanded. This expanding presentation model requires the student to
retrieve the correct answers over longer and longer periods (1988, p. 5).
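The expanding presentation order itself is simple to generate. The following sketch (names assumed; this is not Hasselbring et al.'s program) inserts one more automatized spacer fact between successive presentations of the target:

```python
from itertools import cycle, islice

def expanding_order(target, automatized, presentations=4):
    # Pattern: T s T s s T s s s ..., with spacers drawn from the
    # student's already-automatized facts.
    spacers = cycle(automatized)
    sequence = []
    for gap in range(1, presentations + 1):
        sequence.append(target)
        sequence.extend(islice(spacers, gap))
    return sequence

print(expanding_order("3+8", ["1+1", "2+2", "5+0"]))
```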
Unfortunately, the research-based math facts drill-and-practice program Fast
Facts developed by Hasselbring and colleagues was not developed into a commercial
product. Nor were their recommendations incorporated into the design of existing
commercial math facts practice programs, which seldom provide either controlled
response times or practice on small sets. What practical alternatives to cumbersome
flashcards or ineffective computer drill-and-practice programs are available to classroom
teachers who want to use the research recommendations to develop automaticity with
math facts?
How can teachers implement class-wide effective facts practice?
Stein et al. tackled the issue of how teachers could implement an effective facts
practice program consistent with the research. As an alternative to total individualization,
they suggest developing a sequence of learning facts through which all students would
progress. Stein et al. describe the worksheets that students would master one at a time.
Each worksheet would be divided into two parts. The top half of the
worksheets should provide practice on new facts including facts from the
currently introduced set and from the two preceding sets. More
specifically, each of the facts from the new set would appear four times.
Each of the facts from the set introduced just earlier would appear three
times, and each of the facts from the set that preceded that one would
appear twice....The bottom half of the worksheet should include 30
problems. Each of the facts from the currently introduced set would
appear twice. The remaining facts would be taken from previously
introduced sets. (1997, p. 88).
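A generator for such a worksheet might look like the following sketch (function and parameter names are assumed; it presumes fact sets of up to four facts, per the recommendation above, and a non-empty pool of previously introduced review facts):

```python
import random

def build_worksheet(new_set, prev_set, older_set, review_pool):
    # Top half: each new fact 4 times, the preceding set 3 times, and
    # the set before that 2 times.
    top = list(new_set) * 4 + list(prev_set) * 3 + list(older_set) * 2
    random.shuffle(top)
    # Bottom half: 30 problems, each new fact twice, the remainder
    # drawn from previously introduced sets.
    bottom = list(new_set) * 2
    bottom += random.choices(list(review_pool), k=30 - len(bottom))
    random.shuffle(bottom)
    return top, bottom
```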
The daily routine consists of student pairs, one of which does the practicing while
the other follows along with an answer key. Stein et al. specify:
The teacher has each student practice the top half of the worksheet twice. Each
practice session is timed; the student practices by saying complete statements (e.g., 4 + 2
= 6), rather than just answers....If the student makes an error, the tutor corrects by saying
the correct statement and having the student repeat the statement. The teacher allows
students a minute and a half when practicing the top part and a minute when practicing
the bottom half. (1997, p. 90).
After each student practices, the teacher conducts a timed one-minute test of the
30 problems on the bottom half. Students who correctly answered at least 28 of the 30
items, within the minute allowed on the bottom half test, have passed the worksheet and
are then given the next worksheet to work on (Stein et al., 1997). A specific
performance criterion for fact mastery is critical to ensure mastery before moving on to
additional material. These criteria, however, appear to be lower than the slowest
definition of automaticity in the research literature. Teachers and children could benefit
from a higher rate of mastery before progressing, in the range of 40 to 60 problems per
minute, if the students can write that quickly.
Stein et al. also delineate the organizational requirements for such a program to be
effective and efficient: A program to facilitate basic fact memorization should have the
following components:
1. a specific performance criterion for introducing new facts.
2. intensive practice on newly introduced facts
3. systematic practice on previously introduced facts
4. adequate allotted time
5. a record-keeping system
6. a motivation system (1997, p. 87).
The worksheets and the practice procedures ensure the first three points.
Adequate allotted time would be on the order of 10 to 15 minutes per day for each
student of the practicing pair to get their three minutes of practice, the one-minute test for
everyone, and transition times. The authors caution that memorizing basic facts may
require months and months of practice (Stein et al., 1997, p. 92). A record-keeping
system could simply record the number of tries at each worksheet and which worksheets
had been passed. A motivation system that gives certificates and various forms of
recognition along the way will help students maintain the effort needed to master all the
facts in an operation.
Such a program of gradual mastery of small sets of facts at a time is
fundamentally different from the typical kind of facts practice. Because children are
learning only a small set of new facts, it does not take many repetitions to commit them to
memory. The learning is occurring during the practice time. Similarly, because the
timed tests are only over the facts already brought to mastery, children are quite
successful. Because they see success in small increments after only a couple of days'
practice, students remain motivated and encouraged.
Contrast this with the common situation where children are given timed tests over
all 100 facts in an operation, many of which are not in long-term memory (still counting).
Students are attempting to become automatic on the whole set at the same time, which
requires brute force to master. Because children's efforts are not focused on a small set
to memorize, students often just become increasingly anxious and frustrated by their lack
of progress. Such tests do not teach students anything other than to remind them that
they are unsuccessful at math facts. Even systematically taking timed tests daily on the
set of 100 facts is ineffective as a teaching tool and is quite punishing to the learner.
Earlier special education researchers attempted to increase automaticity with math facts
by systematic drill and practice...But this brute force approach made mathematics
unpleasant, perhaps even punitive, for many (Gersten & Chard, 1999, p. 21).
These inappropriately designed timed tests led to an editorial by Marilyn Burns in which
she offered her opinion that, Teachers who use timed tests believe that the tests help
children learn basic facts....Timed tests do not help children learn (Burns, 1995, p. 408-
409). Clearly timed tests only establish whether or not children have learned; they do
not teach. However, if children are learning facts in small sets, and are being taught their
facts gradually, then timed tests will demonstrate this progress. Under such
circumstances, if children are privy to graphs showing their progress, they find it quite
motivating (Miller, 1983).
Given the most common form of timed tests, it is not surprising that Isaacs and
Carroll argue that while an over-reliance on timed tests is more harmful than beneficial
(Burns, 1995), this fact has sometimes been misinterpreted as meaning that they should
never be used. On the contrary, if we wish to assess fact proficiency, time is important.
Timed tests also serve the important purpose of communicating to students and parents
that basic-fact proficiency is an explicit goal of the mathematics program. However,
daily, or even weekly or monthly, timed tests are unnecessary (1999, p. 512).
In contrast, when children are successfully learning the facts, through the use of a
properly designed program, they are happy to take tests daily, to see if they've improved.
Results from classroom studies in which time trials have been evaluated
show: Accuracy does not suffer, but usually improves, and students enjoy being
timed...When asked which method they preferred, 26 of the 34 students indicated they
liked time trials better than the untimed work period (Miller & Heward, 1992, p. 101-
102).
In summary, the proper kind of practice can enable students to develop their skill
with math facts beyond strategies for remembering facts into automaticity, or direct
retrieval of math fact answers. Automaticity is achieved when students can answer math
facts with no hesitation, or no more than a one-second delay, which translates into 40 to 60
problems per minute. What is required for students to develop automaticity is a
particular kind of practice focused on small sets of facts, practiced under limited response
times, where the focus is on remembering the answer quickly rather than figuring it out.
The introduction of additional new facts should be withheld until students can
demonstrate automaticity with all previously introduced facts. Under these
circumstances students are successful and enjoy graphing their progress on regular timed
tests. Using an efficient method for bringing math facts to automaticity has the added
value of freeing up more class time to spend in higher level mathematical thinking.
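As a concrete check on the arithmetic above, the short sketch below (my illustration, not part of the original text; the function names and example numbers are invented) converts a timed-test score into a per-minute rate and compares it against the 40 to 60 problems per minute range given for automaticity.

```python
# Illustrative sketch only: the 40-60 problems/minute range comes from the
# summary above; names and example numbers are hypothetical.

def facts_per_minute(problems_correct: int, minutes: float) -> float:
    """Convert a timed-test score into a per-minute rate."""
    return problems_correct / minutes

def meets_automaticity(rate: float, threshold: float = 40.0) -> bool:
    """Apply the lower bound of the 40-60 problems/minute criterion."""
    return rate >= threshold

# Example: 52 correct answers on a two-minute timing is 26 per minute,
# well below the automaticity range, so this student is still figuring facts out.
rate = facts_per_minute(52, 2.0)
print(rate, meets_automaticity(rate))  # 26.0 False
```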
REFERENCES
Ando, M. & Ikeda, H. (1971). Learning multiplication facts: more than a drill.
Arithmetic Teacher, 18, 365-368.
Ashcraft, M. H. (1982). The development of mental arithmetic: A chronometric
approach. Developmental Review, 2, 213-236.
Ashcraft, M. H. & Christy, K. S. (1995). The frequency of arithmetic facts in elementary
texts: Addition and multiplication in grades 1-6. Journal for Research in Mathematics
Education, 25(5), 396-421.
Ashcraft, M. H., Fierman, B. A., & Bartolotta, R. (1984). The production and verification
tasks in mental addition: An empirical comparison. Developmental Review, 4, 157-170.
Ashcraft, M. H. (1985). Is it farfetched that some of us remember our arithmetic facts?
Journal for Research in Mathematics Education, 16 (2), 99-105.
Ashcraft, M. H. (1992). Cognitive arithmetic: A review of data and theory. Cognition,
44, 75-106.
Ashlock, R. B. (1971). Teaching the basic facts: Three classes of activities. The
Arithmetic Teacher, 18.
Baroody, A. J. (1985). Mastery of basic number combinations: internalization of
relationships or facts? Journal of Research in Mathematics Education, 16(2) 83-98.
Bezuk, N. S. & Cegelka, P. T. (1995). Effective mathematics instruction for all students.
In P. T. Cegelka & Berdine, W. H. (Eds.) Effective instruction for students with learning
difficulties. (pp. 345-384). Needham Heights, MA: Allyn and Bacon.
Bloom, B. S. (1986). Automaticity: "The hands and feet of genius." Educational
Leadership, 43(5), 70-77.
Burns, M. (1995). In my opinion: Timed tests. Teaching Children Mathematics, 1,
408-409.
Campbell, J. I. D. (1987a). Network interference and mental multiplication. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 13 (1), 109-123.
Campbell, J. I. D. (1987b). The role of associative interference in learning and retrieving
arithmetic facts. In J. A. Sloboda & D. Rogers (Eds.) Cognitive process in mathematics:
Keele cognition seminars, Vol. 1. (pp. 107-122). New York: Clarendon Press/Oxford
University Press.
Campbell, J. I. D. & Graham, D. J. (1985). Mental multiplication skill: Structure,
process, and acquisition. Canadian Journal of Psychology. Special Issue: Skill. 39(2),
367-386.
Carnine, D. W. & Stein, M. (1981). Organizational strategies and practice procedures for
teaching basic facts. Journal for Research in Mathematics Education, 12(1), 65-69.
Carpenter, T. P. & Moser, J. M. (1984). The acquisition of addition and subtraction
concepts in grade one through three. Journal for Research in Mathematics Education,
15(3), 179-202.
Cooke, N. L., Guzaukas, R., Pressley, J. S., Kerr, K. (1993) Effects of using a ratio of
new items to review items during drill and practice: Three experiments. Education and
Treatment of Children, 16(3), 213-234.
Garnett, K. (1992). Developing fluency with basic number facts: Intervention for
students with learning disabilities. Learning Disabilities Research and Practice; 7 (4)
210-216.
Garnett, K. & Fleischner, J. E. (1983). Automatization and basic fact performance of
normal and learning disabled children. Learning Disability Quarterly, 6(2), 223-230.
Geary, D. C. & Brown, S. C. (1991). Cognitive addition: Strategy choice and speed-of-
processing differences in gifted, normal, and mathematically disabled children.
Developmental Psychology, 27(3), 398-406.
Geary, D. C., Bow-Thomas, C. C., & Yao, Y. (1992). Counting knowledge and skill in
cognitive addition: A comparison of normal and mathematically disabled children.
Journal of Experimental Child Psychology, 54(3), 372-391.
Gersten, R. & Chard, D. (1999). Number sense: Rethinking arithmetic instruction for
students with mathematical disabilities. The Journal of Special Education, 33(1), 18-28.
Goldman, S. R. & Pellegrino, J. (1986). Microcomputer: Effective drill and practice.
Academic Therapy, 22(2). 133-140.
Graham, D. J. (1987). An associative retrieval model of arithmetic memory: how
children learn to multiply. In J. A. Sloboda & D. Rogers (Eds.) Cognitive process in
mathematics: Keele cognition seminars, Vol. 1. (pp. 123-141). New York: Clarendon
Press/Oxford University Press.
Hasselbring, T. S. & Goin, L. T. (1988) Use of computers. Chapter Ten. In G. A.
Robinson (Ed.), Best Practices in Mental Disabilities Vol. 2. (Eric Document
Reproduction Service No: ED 304 838).
Hasselbring, T. S., Goin, L. T., & Bransford, J. D. (1987). Effective Math Instruction:
Developing Automaticity. Teaching Exceptional Children, 19(3) 30-33.
Hasselbring, T. S., Goin, L. T., & Bransford, J. D. (1988). Developing math automaticity
in learning handicapped children: The role of computerized drill and practice. Focus on
Exceptional Children, 20(6), 1-7.
Howell, K. W., & Nolet, V. (2000). Curriculum-based evaluation: Teaching and
decision making. (3rd Ed.) Belmont, CA: Wadsworth/Thomson Learning.
Isaacs, A. C. & Carroll, W. M. (1999). Strategies for basic-facts instruction. Teaching
Children Mathematics, 5(9), 508-515.
Koshmider, J. W. & Ashcraft, M. H. (1991). The development of children's mental
multiplication skills. Journal of Experimental Child Psychology, 51, 53-89.
Koscinski, S. T. & Gast, D. L. (1993a). Computer-assisted instruction with constant time
delay to teach multiplication facts to students with learning disabilities. Learning
Disabilities Research & Practice, 8(3), 157-168.
Koscinski, S. T. & Gast, D. L. (1993b). Use of constant time delay in teaching
multiplication facts to students with learning disabilities. Journal of Learning
Disabilities, 26(8) 533-544.
Logan, G. D. (1985). Skill and automaticity: Relations, implications, and future
directions. Canadian Journal of Psychology. Special Issue: Skill. 39(2), 367-386.
Logan, G. D. (1988). Toward an instance theory of automatization. Psychological
Review, 95(4), 492-527.
Logan, G. D. (1991a). Automaticity and memory. In W. E. Hockley & S. Lewandowsky
(Eds.) Relating theory and data: Essays on human memory in honor of Bennet B.
Murdock (pp. 347-366). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Logan, G. D. (1991b). The transition from algorithm to retrieval in memory-based
theories of automaticity. Memory & Cognition, 19(2), 151-158.
Logan, G. D. & Klapp, S. T. (1991a). Automatizing alphabet arithmetic I: Is extended
practice necessary to produce automaticity? Journal of Experimental Psychology:
Learning, Memory & Cognition, 17(2), 179-195.
Logan, G. D. & Klapp, S. T. (1991b). Automatizing alphabet arithmetic II: Are there
practice effects after automaticity is achieved? Journal of Experimental Psychology:
Learning, Memory & Cognition, 17(2), 179-195.
McCloskey, M., Harley, W., & Sokol, S. M. (1991). Models of arithmetic fact
retrieval: An evaluation in light of findings from normal and brain-damaged subjects.
Journal of Experimental Psychology: Learning, Memory, and Cognition, 17(3), 377-397.
Mercer, C. D. & Miller, S. P. (1992). Teaching students with learning problems in math
to acquire, understand, and apply basic math facts. Remedial and Special Education,
13(3) 19-35.
Miller, A. D. & Heward, W. L. (1992). Do your students really know their math facts?
Using time trials to build fluency. Intervention in School and Clinic, 28(2) 98-104.
Miller, G. (1983). Graphs can motivate children to master basic facts. Arithmetic
Teacher, 31(2), 38-39.
Miller, S. P., Mercer, C. D. & Dillon, A. S. (1992). CSA: Acquiring and retaining math
skills. Intervention in School and Clinic, 28 (2) 105-110.
Myers, A. C. & Thornton, C. A. (1977). The learning disabled child: learning the basic
facts. Arithmetic Teacher, 25(3), 46-50.
National Council of Teachers of Mathematics (2000). Number and operations standard.
In Chapter 3, Principles and standards for school mathematics. Retrieved
June 5, 2003 from http://standards.nctm.org/document/chapter3/numb.htm
Peters, E., Lloyd, J., Hasselbring, T., Goin, L., Bransford, J., Stein, M. (1987). Effective
mathematics instruction. Teaching Exceptional Children, 19(3), 30-35.
Pellegrino, J. W. & Goldman, S. R. (1987). Information processing and elementary
mathematics. Journal of Learning Disabilities, 20(1) 23-32, 57.
Rightsel, P. S. & Thornton, C. A. (1985). 72 addition facts can be mastered by mid-grade
1. Arithmetic Teacher, 33(3), 8-10.
Silbert, J., Carnine, D. & Stein, M. (1990). Direct Instruction Mathematics (2nd Ed.).
Columbus, OH: Merrill Publishing Co.
Steinberg, R. M. (1985). Instruction on derived facts strategies in addition and
subtraction. Journal for Research in Mathematics Education, 16(5), 337-355.
Stein, M., Silbert, J., & Carnine, D. (1997) Designing Effective Mathematics
Instruction: a direct instruction approach (3rd Ed). Upper Saddle River, NJ: Prentice-
Hall, Inc.
Suydam, M. N. (1984). Research report: Learning the basic facts. Arithmetic Teacher,
32(1), 15.
Thornton, C. A. (1978). Emphasizing thinking strategies in basic fact instruction. Journal
for Research in Mathematics Education, 9, 214-227.
Thornton, C. A. & Smith, P. J. (1988). Action research: Strategies for learning subtraction
facts. Arithmetic Teacher, 35(8), 8-12.
Van Houten, R. (1993). Rote vs. Rules: A comparison of two teaching and correction
strategies for teaching basic subtraction facts. Education and Treatment of Children, 16
(2) 147-159.
Results of field tests of Mastering Math Facts
Donald B. Crawford, Ph.D., Otter Creek Institute
2000-2001: Kendall Elementary School, Kendall, Washington; phone (360) 383-2055;
Steve Merz, Principal; e-mail: SMerz@mtbaker.wednet.edu
All groups: Testing
1. Took the Stanford Diagnostic Math Test Computation Subtest: 25 minutes, multiple choice
in September and again in May. Use the answer form provided.
2. Chose one operation to study and took 3 math fact timings (two minutes in length) of all
students in September and 3 timings again in May. Score was the average number of
problems correctly answered in two minutes.
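A minimal sketch of the scoring rule just described, assuming the score is simply the mean of the three two-minute timings (the variable names and sample counts are invented for illustration):

```python
# Hypothetical illustration of the testing protocol above: three two-minute
# timings per student, scored as the average number of problems correct.

def timing_score(correct_counts: list[int]) -> float:
    """Average number of problems answered correctly across the timings."""
    return sum(correct_counts) / len(correct_counts)

september = timing_score([31, 35, 33])  # three September timings (made up)
may = timing_score([47, 51, 49])        # three May timings (made up)
print(september, may, may - september)  # 33.0 49.0 16.0
```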
Three groups were set up.
The control group stayed as they were. They used whichever methods of teaching/learning math
facts they were using before the study. The second group was the standard Mastering Math
Facts group. This group followed MMF standard directions and procedures as written. They
took the time to do MMF daily (either practice or a two-minute progress monitoring sheet). They
began all students in the chosen operation. They gave the two-minute timing in the chosen
operation once per week. They collected that data on the student graph (writing down the
number correct in two minutes as well as graphing it). A third group was designated the New
and Improved Mastering Math Facts group. They followed standard Mastering Math Facts
procedures with some modifications. They cut the time in half for timings, to allow for a higher
number per minute. They kept raising goals with no upper limit. They didn't move students on
until they had passed three times instead of once.
Findings of improvement on Math Facts
A comparison among the three groups was possible only among 2nd grade students,
where there were 3 classes, one in each condition, all of whom studied addition facts. No other
grade offered such a clear comparison. The results suggest that MMF with its standard set of
directions performs best. It is notable that the control group made almost no progress in math
facts relative to either of the groups using the MMF materials.
[Figure: 2nd Grade Facts by Group. Addition facts correct in 2 minutes (scale 0-35), fall vs. spring, for the MMF, New & Improved, and Control groups.]
Because there was no significant difference between the Standard MMF group and the
New and Improved MMF group in the school as a whole, those results were collapsed into one
group.
Results of field tests of Mastering Math Facts 2
Donald B. Crawford, Ph.D. Otter Creek Institute
[Figure: Facts Mastery by Group (Across Grades). Facts correct in 2 minutes (scale 0-60), fall vs. spring, for the combined MMF group vs. the control group; plotted values 48, 29, 24, and 13.]
As can be seen from the graph, students did improve in math fact fluency, and students using
Mastering Math Facts improved significantly more (approximately twice the improvement of
students in the control condition). Students who used MMF also ended the year significantly
better at facts than the control group children.
Effects on Math Achievement
Perhaps more interesting are the effects upon achievement as measured by scores on the
Stanford Diagnostic Math Test computation subtest.
[Figure: SDMT Percentile Scores by Group (Across Grades), fall vs. spring, MMF vs. control (scale 0-50); ANOVA interaction analysis F = 4.45, p < .04.]
Scores reported are the national percentile scores with the different norms appropriate to the time
of testing. Students in the control group did not significantly change their ranking from fall to
spring against the expected improvement as reflected in the spring norms. Students in the MMF
group however did show a significant rise in their national ranking in math achievement during
the year.
1998-99: Emerson Elementary School, Everett, Washington; phone (425) 356-4560;
Martha Adams, Principal; e-mail: martha_adams@everett.wednet.edu
This school did not wish to employ a control group. Several teachers across all grades
implemented Mastering Math Facts with varying levels of fidelity to program guidelines. Data
were collected from math timings, scores representing the number of facts correctly answered in
writing during a 2-minute test. The data from all students who were working on a given
operation were included without regard to grade level. Data were collected from the beginning
of practice on that operation, the middle month of practice and the end of practice for each
student.
The data clearly show improvement in fluency of math facts during the course of working
through the program of Mastering Math Facts.
[Figure: Addition facts correct in 2 minutes during use of Mastering Math Facts; box plot for the first, middle, and last month of practice showing the median, 25%-75% range, and 10%-90% range (scale 0-75).]
The sample size for the addition operation was 115 students. The mean number of facts correct
at the outset was 34 facts in two minutes (with a standard deviation of 9.6). The middle month
mean was up to 46 facts in two minutes (with a standard deviation of 14.3). The last month
mean was 52 (with a standard deviation of 17.1).
[Figure: Multiplication facts correct in 2 minutes during use of Mastering Math Facts; box plot for the first, middle, and last month of practice showing the median, 25%-75% range, and 10%-90% range (scale 0-75).]
The sample size for multiplication was 61 students. The mean number of facts correct at
the outset was 34 facts in two minutes (with a standard deviation of 12.7). The middle month
mean was up to 52 facts in two minutes (with a standard deviation of 13.4). The last month
mean was 62 (with a standard deviation of 13.7).
[Figure: Division facts correct in 2 minutes during use of Mastering Math Facts; box plot for the first, middle, and last month of practice showing the median, 25%-75% range, and 10%-90% range (scale 0-75).]
The sample size for division was 50 students. The mean number of facts correct at the outset
was 41 facts in two minutes (with a standard deviation of 12.2). The middle month mean was up
to 56 facts in two minutes (with a standard deviation of 15.4). The last month mean was 66
(with a standard deviation of 14.9).
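Pulling the three summaries together, the sketch below (mine, not part of the report; the tuples simply restate the means given above) computes each operation's overall gain and final per-minute rate:

```python
# Assumed values copied from the three field-test summaries above: mean facts
# correct in two minutes at the first, middle, and last month of practice.

means = {
    "addition": (34, 46, 52),
    "multiplication": (34, 52, 62),
    "division": (41, 56, 66),
}

for op, (first, middle, last) in means.items():
    gain = last - first
    print(f"{op}: gain {gain} facts/2 min.; final rate {last / 2:.0f} per minute")
# addition: gain 18 facts/2 min.; final rate 26 per minute
# multiplication: gain 28 facts/2 min.; final rate 31 per minute
# division: gain 25 facts/2 min.; final rate 33 per minute
```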
WWC Intervention Report, U.S. Department of Education
What Works Clearinghouse
Elementary School Mathematics, Updated May 2013

Saxon Math

Report Contents: Overview (p. 1); Program Information (p. 2); Research Summary (p. 3); Effectiveness Summary (p. 4); References (p. 5); Research Details for Each Study (p. 8); Outcome Measures for Each Domain (p. 13); Findings Included in the Rating for Each Outcome Domain (p. 14); Supplemental Findings for Each Outcome Domain (p. 16); Endnotes (p. 18); Rating Criteria (p. 19); Glossary of Terms (p. 20)
Program Description
Saxon Math, published by Houghton Mifflin Harcourt, is a core curriculum for students in grades K-5. A distinguishing feature of the curriculum is its use of an incremental approach for instruction and assessment. This approach limits the amount of new math content delivered to students each day and allows time for daily practice. New concepts are introduced gradually and integrated with previously introduced content so that concepts are developed, reviewed, and practiced over time rather than being taught during discrete periods of time, such as in chapters or units.
Instruction is built around math conversations that engage students in learning, as well as continuous practice with hands-on activities, manipulatives, and paper-pencil methods. The program includes frequent, cumulative assessments used to direct targeted remediation and support to struggling students. Starting in grade 3, the focus shifts from teacher-directed instruction to a more student-directed, independent learning approach, though math conversations continue to be used to introduce new concepts.
Research[2]
The What Works Clearinghouse (WWC) identified two studies of Saxon Math that both fall within the scope of the Elementary School Mathematics topic area and meet WWC evidence standards. One study meets standards without reservations, and the other study meets WWC evidence standards with reservations. Together, these studies included more than 8,060 students in grades 1-5 from 452 schools in 11 states.[3]
The WWC considers the extent of evidence for Saxon Math on the math performance of elementary school students to be medium to large for the mathematics achievement domain, the only outcome domain examined for studies reviewed under the Elementary School Mathematics topic area.
Effectiveness
Saxon Math was found to have potentially positive effects on mathematics achievement for elementary school students.
Table 1. Summary of findings[4]
Outcome domain: Mathematics achievement. Rating of effectiveness: Potentially positive effects. Improvement index (percentile points): average +3, range -2 to +7. Number of studies: 2. Number of students: > 8,060.[5] Extent of evidence: Medium to large.
Program Information
Background
Saxon Math is distributed by Saxon Publishers, an imprint of Houghton Mifflin Harcourt Supplemental Publishers. Address: Specialized Curriculum Group, 9205 Southpark Center Loop, Orlando, FL 32819. Email: greatservice@hmhpub.com. Website: http://www.hmheducation.com/saxonmathk5/index.php. Telephone: (800) 289-4490. Fax: (800) 289-3994.
Program details
Saxon Math uses an incremental and integrated approach to instruction that includes three strategies: (a) fact-fluency practice that promotes recall when working with math operations and fractions, (b) mental math exercises intended to build number sense and problem-solving strategies, and (c) practice solving challenging, non-routine story problems in which problem-solving strategies are emphasized.
The curriculum's main classroom activities draw on these strategies. The first classroom activity is a daily whole-group activity that provides an opportunity for students to review previously covered material, focusing on number sense, math life-skills, and problem solving. A second activity engages students in conversations that help them grasp new mathematical ideas introduced that day. Students then practice both the newly acquired skills and previously learned concepts during daily written practice sessions. Students complete a similar set of written practice problems at home with adult support. Beginning in first grade, the curriculum incorporates a third activity that allows students to practice basic math facts. This component aims to improve recall of facts and enable students to solve more complex problems.
Students complete written, cumulative assessments after every five lessons. The results of these assessments provide teachers with data for instructional decision making and provide feedback for students and parents.
Cost
For Saxon's Primary Math curricula (available for grades K-4), each set of teacher's materials costs between $225.75 and $299.95, and student kits cost between $761.65 and $884.55 for 24 students and between $889.42 and $1,086.00 for 32 students. For the Saxon Math Intermediate 3-5 curricula (available for grades 3-5), the teacher's manual costs $238.30 and the student edition costs $68.65 per student.[6] Other available materials include posters, manipulatives, and guides for adapting the Saxon Math curriculum for special education students.
Research Summary
The WWC identified 26 studies that investigated the effects of Saxon Math on the math performance of elementary school students.
The WWC reviewed 14 of those studies against group design evidence standards. One study (Agodini, Harris, Thomas, Murphy, & Gallagher, 2010) is a randomized controlled trial that meets WWC evidence standards without reservations, and one study (Resendez & Manley, 2005) is a quasi-experimental design that meets WWC evidence standards with reservations. Those two studies are summarized in this report. Twelve studies do not meet WWC evidence standards.
The remaining 12 studies do not meet WWC eligibility screens for review in this topic area. Citations for all 26 studies are in the References section, which begins on p. 5.
Table 2. Scope of reviewed research[7]
Grade: 1, 2, 3, 4, 5. Delivery method: Whole class. Program type: Curriculum.
Summary of study meeting WWC evidence standards without reservations
Agodini et al. (2010) presented results for 110 elementary schools that had been randomly assigned to one of four conditions: Investigations in Number, Data, and Space (28 schools), Math Expressions (27 schools), Saxon Math (26 schools), and Scott Foresman-Addison Wesley Elementary Mathematics (29 schools). The analysis included 4,716 first-grade students and 3,344 second-grade students who were evenly divided among the four conditions. The study authors compared average spring math achievement of students in each condition after one school year of program implementation. Student outcomes were measured by the Early Childhood Longitudinal Study-Kindergarten (ECLS-K) math assessment.
Summary of study meeting WWC evidence standards with reservations
Resendez and Manley (2005) conducted a study of available school-level test results that included 170 intervention schools and 172 comparison schools in Georgia. Comparison schools were matched to intervention schools based on student demographics. The intervention schools used the Saxon Math program recommended for each grade level in grades 1-8 between 2000 and 2005. The comparison schools used a variety of other curricula. About three-fifths of comparison schools used traditional basal math curricula; one-third of the schools used a mix of basal, investigative, and other approaches; and 5% used an investigative approach to teaching math. This intervention report presents the study's findings for grades 1-5.
Effectiveness Summary
The WWC review of Saxon Math for the Elementary School Mathematics topic area includes student outcomes in one domain: mathematics achievement. The two studies of Saxon Math that meet WWC evidence standards reported findings in this domain. The findings below present the authors' estimates and WWC-calculated estimates of the size and statistical significance of the effects of Saxon Math on the mathematics achievement of elementary school students. For a more detailed description of the rating of effectiveness and extent of evidence criteria, see the WWC Rating Criteria on p. 19.
Summary of effectiveness for the mathematics achievement domain
Two studies reported findings in the mathematics achievement domain.
Agodini et al. (2010) reported, and the WWC confirmed, statistically significant positive effects of the Saxon Math program on the ECLS-K math assessment when compared to Scott Foresman-Addison Wesley Elementary Mathematics in grade 2. The study reports no significant effects of Saxon Math on the ECLS-K math assessment when compared to Investigations in Number, Data, and Space and Math Expressions. The average effect size across the curricula and both grades (first and second) was not large enough to be considered substantively important according to WWC criteria (an effect size of at least 0.25). Based on the one statistically significant finding, the WWC characterizes this study as having statistically significant positive effects.
Resendez and Manley (2005) reported significant effects of the Saxon Math program on school-level math achievement in grades 2, 4, and 5, but reported no significant effects in grades 1 and 3. These findings control for schools' baseline math achievement levels. Due to the lack of student-level data, the student-level effect size and improvement index could not be calculated for this study. Based on WWC calculations, the average effect across grades 1-5 is not statistically significant. Therefore, this study is characterized as having indeterminate effects.
Thus, for the mathematics achievement domain, one study showed statistically significant positive effects and one study showed indeterminate effects. This results in a rating of potentially positive effects, with a medium to large extent of evidence.
Table 3. Rating of effectiveness and extent of evidence for the mathematics achievement domain
Rating of effectiveness: Potentially positive effects (evidence of a positive effect with no overriding contrary evidence). Criteria met: In the two studies that reported findings, the estimated impact of the intervention on outcomes in the mathematics achievement domain was positive and statistically significant in one study and indeterminate in one study.
Extent of evidence: Medium to large. Criteria met: Two studies that included more than 8,060 students in 452 schools reported evidence of effectiveness in the mathematics achievement domain.
References
Study that meets WWC evidence standards without reservations
Agodini, R., Harris, B., Thomas, M., Murphy, R., & Gallagher, L. (2010). Achievement effects of four early elementary school math curricula: Findings for first and second graders (NCEE 2011-4001). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/pubs/20114001/pdf/20114001.pdf
Additional sources:
Agodini, R., & Harris, B. (2010). An experimental evaluation of four elementary school math curricula. Journal of Research on Educational Effectiveness, 3(3), 199-253.
Agodini, R., Harris, B., Atkins-Burnett, S., Heaviside, S., Novak, T., & Murphy, R. (2009). Achievement effects of four early elementary school math curricula: Findings from first graders in 39 schools (NCEE 2009-4052). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Agodini, R., Harris, B., Atkins-Burnett, S., Heaviside, S., Novak, T., Murphy, R., & Institute of Education Sciences (ED), National Center for Education Evaluation and Regional Assistance. (2009). Achievement effects of four early elementary school math curricula: Findings from first graders in 39 schools (NCEE 2009-4053). Executive summary. Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Study that meets WWC evidence standards with reservations
Resendez, M., & Manley, M. A. (2005). The relationship between using Saxon Elementary and Middle School Math and student performance on Georgia statewide assessments. Orlando, FL: Harcourt Achieve.
Studies that do not meet WWC evidence standards
Bhatt, R., & Koedel, C. (2012). Large-scale evaluations of curricular effectiveness: The case of elementary mathematics in Indiana. Educational Evaluation and Policy Analysis, 34(4), 391-412. The study does not meet WWC evidence standards because it uses a quasi-experimental design in which the analytic intervention and comparison groups are not shown to be equivalent.
Calvery, R., Bell, D., & Wheeler, G. (1993, November). A comparison of selected second and third graders' math achievement: Saxon vs. Holt. Paper presented at the meeting of the Mid-South Educational Research Association, New Orleans, LA. The study does not meet WWC evidence standards because it uses a quasi-experimental design in which the analytic intervention and comparison groups are not shown to be equivalent.
Cummins-Colburn, B. J. L. (2007). Differences between state-adopted textbooks and student outcomes on the Texas Assessment of Knowledge and Skills examination. Dissertation Abstracts International, 68(06A), 168-2299. The study does not meet WWC evidence standards because it uses a quasi-experimental design in which the analytic intervention and comparison groups are not shown to be equivalent.
Educational Research Institute of America. (2009). A longitudinal analysis of state mathematics scores for Indiana schools using Saxon Math (No. 362). Bloomington, IN: Author. The study does not meet WWC evidence standards because it uses a quasi-experimental design in which the analytic intervention and comparison groups are not shown to be equivalent.
Good, K., Bickel, R., & Howley, C. (2006). Saxon Elementary Math program effectiveness study. Charlestown, WV: Edvantia. The study does not meet WWC evidence standards because it uses a quasi-experimental design in which the analytic intervention and comparison groups are not shown to be equivalent.
Hansen, E., & Greene, K. (2000). A recipe for math. What's cooking in the classroom: Saxon or traditional? The study does not meet WWC evidence standards because it uses a quasi-experimental design in which the analytic intervention and comparison groups are not shown to be equivalent.
Hook, W., Bishop, W., & Hook, J. (2007). A quality math curriculum in support of effective teaching for elementary schools. Educational Studies in Mathematics, 65(2), 125-148. The study does not meet WWC evidence standards because it uses a quasi-experimental design in which the analytic intervention and comparison groups are not shown to be equivalent.
Nguyen, K., Elam, P., & Weeter, R. (1993). The 1992-93 Saxon Mathematics program evaluation report. Oklahoma City: Oklahoma City Public Schools. The study does not meet WWC evidence standards because the measures of effectiveness cannot be attributed solely to the intervention; the intervention was not implemented as designed.
Resendez, M., & Azin, M. (2007). The relationship between using Saxon Elementary and Middle School Math and student performance on California statewide assessments. Jackson, WY: PRES Associates. The study does not meet WWC evidence standards because it uses a quasi-experimental design in which the analytic intervention and comparison groups are not shown to be equivalent.
Additional source:
Resendez, M., & Azin, M. (2007). Saxon Math and California English Learners' math performance: Research brief. Jackson, WY: PRES Associates.
Resendez, M., & Azin, M. (2008). The relationship between using Saxon Math at the elementary and middle school levels and student performance on the North Carolina statewide assessment: Final report. Jackson, WY: PRES Associates. The study does not meet WWC evidence standards because it uses a quasi-experimental design in which the analytic intervention and comparison groups are not shown to be equivalent.
Resendez, M., Sridharan, S., & Azin, M. (2006). The relationship between using Saxon Elementary School Math and student performance on Texas statewide assessments. Jackson, WY: PRES Associates. The study does not meet WWC evidence standards because it uses a quasi-experimental design in which the analytic intervention and comparison groups are not shown to be equivalent.
Roan, C. (2012). A comparison of elementary mathematics achievement in Everyday Math and Saxon Math schools in Illinois (Doctoral dissertation). Available from ProQuest Dissertations and Theses. (UMI No. 3507509) The study does not meet WWC evidence standards because it uses a quasi-experimental design in which the analytic intervention and comparison groups are not shown to be equivalent.
Studies that are ineligible for review using the Elementary School Mathematics Evidence Review Protocol
Bell, G. (2011). The effects of Saxon Math instruction on middle school students' mathematics achievement (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3488140) The study is ineligible for review because it does not use a sample within the age or grade range specified in the protocol.
Christofori, P. (2005). The effect of direct instruction math curriculum on higher-order problem solving (Unpublished doctoral dissertation). University of South Florida, Sarasota. The study is ineligible for review because it does not use a comparison group design or a single-case design.
Doe, C. (2006). Marvelous math products. MultiMedia & Internet@Schools, 13(3), 30-33. The study is ineligible for review because it does not examine the effectiveness of an intervention.
Fahsl, A. J. (2001). An investigation of the effects of exposure to Saxon Math textbooks, socioeconomic status and gender on math achievement scores. Dissertation Abstracts International, 62(08), 2681A. The study is ineligible for review because it does not use a comparison group design or a single-case design.
Educational Research Institute of America. (2009). A longitudinal analysis of state mathematics scores for Florida schools using Saxon Math (No. 365). The study is ineligible for review because it does not use a comparison group design or a single-case design.
Educational Research Institute of America. (2009). A longitudinal analysis of state mathematics scores for Oklahoma schools using Saxon Math (No. 363). Bloomington, IN: Author. The study is ineligible for review because it does not use a comparison group design or a single-case design.
Harcourt Achieve, Inc. (2005). Case study research summaries of Saxon Math. Retrieved from http://saxonpublishers.hmhco.com/en/resources/result_c.htm?ca=Research%3a+Effcacy&SRC1=4 The study is ineligible for review because it does not use a comparison group design or a single-case design.
Klein, D. (2000). High achievement in mathematics: Lessons from three Los Angeles elementary schools. Washington, DC: Brookings Institution Press. The study is ineligible for review because it does not use a comparison group design or a single-case design.
Plato, J. (1998). An evaluation of Saxon Math at Blessed Sacrament School. Retrieved from http://lrs.ed.uiuc.edu/students/plato1/Final.html The study is ineligible for review because it does not use a comparison group design or a single-case design.
Slavin, R. E., & Lake, C. (2007). Effective programs in elementary mathematics: A best-evidence synthesis. The Best Evidence Encyclopedia, 1(2). Retrieved from http://www.bestevidence.org/word/elem_math_Feb_9_2007.pdf The study is ineligible for review because it is a secondary analysis of the effectiveness of an intervention, such as a meta-analysis or research literature review.
Additional source:
Slavin, R. E., & Lake, C. (2009). Effective programs for elementary mathematics: A best evidence synthesis. Educator's summary. Retrieved from http://www.bestevidence.org/word/elem_math_Mar_11_2009_sum.pdf
Viadero, D. (2009). Study gives edge to 2 math programs. Education Week, 28(23), 1, 13. The study is ineligible for review because it does not examine the effectiveness of an intervention.
Vinogradova, E., King, C., & Rhoades, T. (2008, April). Success for all students: What works? Best practices in Maryland public schools. Paper presented at the annual meeting of the American Sociological Association, Boston, MA. The study is ineligible for review because it does not examine the effectiveness of an intervention.
Appendix A.1: Research details for Agodini et al. (2010)
Agodini, R., Harris, B., Thomas, M., Murphy, R., & Gallagher, L. (2010). Achievement effects of four early elementary school math curricula: Findings for first and second graders (NCEE 2011-4001). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/pubs/20114001/pdf/20114001.pdf
Table A1. Summary of findings: Meets WWC evidence standards without reservations
Outcome domain: Mathematics achievement. Sample size: 110 schools/8,060 students. Average improvement index: +3 percentile points. Statistically significant: Yes.
Setting
The study took place in elementary schools in 12 districts across 10 states, including Connecticut, Florida, Kentucky, Minnesota, Mississippi, Missouri, Nevada, New York, South Carolina, and Texas. Of the 12 districts, three were in urban areas, five were in suburban areas, and four were in rural areas.
Study sample
Following district and school recruitment and collection of consent from all teachers in the participating grades, 111 participating schools were randomly assigned to one of four curricula: (a) Investigations in Number, Data, and Space, (b) Math Expressions, (c) Saxon Math, and (d) Scott Foresman-Addison Wesley Mathematics. Blocked random assignment of the schools was conducted separately within each district. In each district, participating schools were grouped together into blocks of four to seven schools based on characteristics such as Title I eligibility, free or reduced-price lunch eligibility status, grade enrollment size, math proficiency, and proportion of White and Hispanic students. Two districts had an additional blocking variable (magnet school status in one district and year-round school schedule in another district). One district required that all schools that fed into the same middle school receive the same condition. Schools in each block were randomly assigned among the four curricula. On average, 11 students were randomly sampled from each participating classroom for assessment. One school with three teachers and 32 students assigned to Math Expressions withdrew from the study and did not permit follow-up data collection.
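A generic sketch of blocked random assignment of the kind described above; this is my reconstruction for illustration only, not the study's procedure or code, and the block composition is invented:

```python
# Illustrative sketch: within each district, schools are grouped into blocks
# of four to seven similar schools, and the schools in each block are then
# randomly spread across the four curricula.
import random

CURRICULA = ["Investigations", "Math Expressions", "Saxon Math",
             "Scott Foresman-Addison Wesley"]

def assign_block(schools: list[str], rng: random.Random) -> dict[str, str]:
    """Randomly assign one block's schools across the four curricula,
    cycling through the curricula when a block has more than four schools."""
    shuffled = schools[:]
    rng.shuffle(shuffled)
    return {school: CURRICULA[i % 4] for i, school in enumerate(shuffled)}

rng = random.Random(2010)
block = ["School A", "School B", "School C", "School D", "School E"]
print(assign_block(block, rng))  # hypothetical block of five schools
```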
The analysis sample included a total of 110 schools, 461 first-grade classrooms, 4,716 first graders, 328 second-grade classrooms, and 3,344 second graders. In the first grade sample, on average, 27 schools, 116 classrooms, and 1,180 students were assigned to each condition. In the second grade sample, on average, 18 schools, 82 classrooms, and 835 students were assigned to each condition.
Seventy-six percent of the schools in the study were eligible for Title I funding. Approximately half of the students in the sample were eligible for free or reduced-price lunch. Among students in the sample, 39% were White, 32% were non-Hispanic Black, 26% were Hispanic, 2% were Asian, and 1% were American Indian or Alaskan Native.
Intervention group
Students used Saxon Math as their core math curriculum. Study authors reported that about six out of seven teachers self-reported completing at least 80% of the curriculum.
Comparison group
The study included three comparison groups: (a) Investigations in Number, Data, and Space, (b) Math Expressions, and (c) Scott Foresman-Addison Wesley Elementary Mathematics. Each curriculum was implemented by comparison teachers for one school year.
Investigations in Number, Data, and Space is published by Pearson Scott Foresman. It uses a student-centered approach that encourages reasoning and understanding and draws on constructivist learning theory. The lessons build on students' existing knowledge and focus on understanding math concepts rather than simply learning computational methods. The curriculum is organized in nine thematic units, each lasting 5-5.5 weeks. Study authors reported that about four out of five teachers self-reported completing at least 80% of the curriculum.
Math Expressions is published by Houghton Mifflin Harcourt and uses a blend of student-centered and teacher-directed instructional approaches. Students using the curriculum question and discuss mathematics and are explicitly taught problem-solving strategies. There is an emphasis on using multiple specified objects, drawings, and language to represent concepts, and on learning through the use of real-world situations. Students are expected to explain and justify their solutions. Study authors reported that about nine out of 10 teachers self-reported completing at least 80% of the curriculum.
Scott Foresman-Addison Wesley Mathematics is published by Pearson Scott Foresman and is a curriculum that combines teacher-directed instruction with a variety of differentiated materials and instructional strategies. Teachers select the materials that seem most appropriate for their students. The curriculum is based on a consistent daily lesson structure, which includes direct instruction, hands-on exploration, the use of questioning, and practice of new skills. Study authors reported that about nine out of 10 teachers self-reported completing at least 80% of the curriculum.
Outcomes and measurement
Mathematics achievement was measured using the mathematics assessment developed for the ECLS-K class of 1998-99. The assessment is individually administered, nationally normed, and adaptive. The assessment meets accepted standards of validity and reliability. Scale scores from an item response theory (IRT) model were used in the analysis. The test was administered in the fall of the implementation year (within 4 weeks of the first day of classes) to assess students' baseline math achievement. The test was also administered in the spring (that is, from 1 to 6 weeks before the end of the school year) of program implementation. For a more detailed description of the outcome measure, see Appendix B.
Support for implementation
Teachers in all four groups were provided training by the curriculum publisher. Teachers assigned to Saxon Math were provided 1 day of initial training in the summer before the school year began. One follow-up training session, tailored to meet each district's needs, was offered during the school year.
Teachers assigned to Investigations in Number, Data, and Space (comparison group 1) were provided 1 day of initial training in the summer before the school year began. Follow-up sessions were typically 3-4 hours long and held after school.
Teachers assigned to Math Expressions (comparison group 2) were provided 2 days of initial training in the summer before the school year began. Two follow-up trainings were offered during the school year. Follow-up sessions typically consisted of classroom observations followed by short feedback sessions with teachers.
Teachers assigned to Scott Foresman-Addison Wesley Elementary Mathematics (comparison group 3) received 1 day of initial training in the summer before the school year began. Follow-up training was offered about every 4-6 weeks throughout the school year. Follow-up sessions were typically 3-4 hours long and held after school.
Appendix A.2: Research details for Resendez and Manley (2005)
Resendez, M., & Manley, M. A. (2005). The relationship between using Saxon Elementary and Middle School Math and student performance on Georgia statewide assessments. Orlando, FL: Harcourt Achieve.
Table A2. Summary of findings: Meets WWC evidence standards with reservations
Outcome domain: Mathematics achievement. Sample size: 342 schools. Average improvement index: na (not applicable). Statistically significant: No.
Setting
The schools included in the study were distributed across the state of Georgia and represented a mixture of rural, urban, and suburban communities.
Study sample
Using information provided by the Georgia Department of Education, the study authors identified Georgia schools that used the Saxon Math curricula between 2000 and 2005, as well as schools that did not use Saxon Math but had similar student demographics to those who did. The study sample included students in grades 1-8 in 170 intervention schools and 172 comparison schools. This intervention report focuses only on findings for grades 1-5, because grades 6-8 are outside of the scope of this review.[8] Data for the intervention group came from 85 schools for first grade, 85 schools for second grade, 83 schools for third grade, 79 schools for fourth grade, and 79 schools for fifth grade. Data for the comparison group came from 144 schools for first grade, 144 schools for second grade, 135 schools for third grade, 131 schools for fourth grade, and 129 schools for fifth grade. The authors reported no significant differences in baseline math performance between the Saxon and non-Saxon schools.
Intervention group
The Saxon Math curricula were used as a core curriculum in the intervention schools. These schools used the version of the Saxon Math program that was appropriate for each grade level. Participating schools had used the program for an average of three years.
Comparison group
Comparison group schools were selected from among all Georgia schools that did not implement Saxon Math based on propensity score matching methods. Schools were matched based on their percentages of students who were female, African American, White, Hispanic, Native American, limited English proficient, educationally disadvantaged, migrant, disabled, gifted, and having left school during the prior year. The comparison group schools used a mixture of non-Saxon curricula. Sixty-two percent of the schools in the comparison group used basal math curricula with chapter-based approaches to teaching math. Five percent of the schools used curricula with an investigative approach. The remaining 33% of the schools used curricula that were a mix of basal, investigative, and computer-based approaches. No additional information was provided by the authors about the specific components of the basal, investigative, or computer-based approaches.
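For illustration, a bare-bones propensity score matching sketch along the lines described above: fit a model predicting Saxon use from school-level percentages, then pair each Saxon school with the closest-propensity non-Saxon school. The data, feature set, and one-to-one nearest-neighbor rule are my assumptions; the study's actual matching procedure may have differed.

```python
# Hypothetical sketch of propensity score matching; the data and feature
# names are placeholders, not the study's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(20, 4))  # e.g., % female, % African American, ...
y = np.array([1] * 10 + [0] * 10)      # 1 = Saxon school, 0 = comparison pool

# Propensity = predicted probability of being a Saxon school.
scores = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

treated = np.flatnonzero(y == 1)
control = np.flatnonzero(y == 0)
matches = {int(t): int(control[np.argmin(np.abs(scores[control] - scores[t]))])
           for t in treated}
print(matches)  # Saxon school index -> nearest-propensity comparison school
```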
Outcomes and measurement
Study authors measured outcomes using Georgia's Criterion-Referenced Competency Test (CRCT), which assesses competency in number sense and numeration, geometry and measurement, patterns and relations/algebra, statistics and probability, computation and estimation, and problem solving. The authors note that per state policy, only school-level data could be released. Fourth-grade students were tested in each school year from 1999-2000 to 2004-05. First-grade, second-grade, third-grade, and fifth-grade students were tested in the spring of school years 2001-02, 2003-04, and 2004-05. All posttest scores are from spring 2005. For a more detailed description of this outcome measure, see Appendix B.
Support for implementation
The intervention and comparison schools in the study were all using their curricula as part of business-as-usual operations and did not receive additional implementation support as a part of the study. Therefore, teachers received the training and implementation support normally provided with their school's curriculum. The study does not provide additional details on implementation support that schools may have received from curricula developers or other parties.
Appendix B: Outcome measures for the mathematics achievement domain
Early Childhood Longitudinal Study-Kindergarten (ECLS-K) Math Assessment: This assessment was developed for the ECLS-K class of 1998-99. The ECLS-K is a nationally normed adaptive test. The assessment measures understanding and skills in five content areas: (a) number sense, properties, and operations; (b) measurement; (c) geometry and spatial sense; (d) data analysis, statistics, and probability; and (e) patterns, algebra, and functions. On the first-grade test, approximately three-quarters of the items focused on number sense, properties, and operations, with the remaining items predominantly drawn from the areas of data analysis, statistics, and probability; and patterns, algebra, and functions. An ECLS-K math assessment for the second grade did not exist, so the study authors worked with the developer of the ECLS-K, Educational Testing Service, to select appropriate items from existing ECLS-K math assessments (including the K-1, third-, and fifth-grade instruments). Half of the items in the second-grade test were related to number sense, properties, and operations, with the other half covering measurement; geometry and spatial sense; and patterns, algebra, and functions (as cited in Agodini et al., 2010).
Georgia's Criterion-Referenced Competency Test (CRCT), Mathematics:[9] As cited in Resendez and Manley (2005), the CRCT is a criterion-referenced test linked to Georgia's Quality Core Curriculum Goals. According to the Georgia Department of Education, the CRCT is a multiple-choice test that is valid and reliable for Georgia's public school students.[10] The CRCT math scores range from 150 to 450, with scores below 300 not meeting standards and scores above 350 exceeding standards. The criteria for meeting the standards vary by objective and grade level. The test includes subscales that cover six objectives: (a) numbers and number sense; (b) geometry and measurement; (c) patterns, relationships, and algebra; (d) statistics and probability; (e) computation and estimation; and (f) problem solving. The cut points are set by the state and take into account the difficulty of each specific objective.
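A tiny sketch of the score bands just described (the function name is mine; the 300 and 350 cut points come from the text above):

```python
# Illustrative only: classify a CRCT math scale score (150-450) into the
# bands described above.

def crct_band(score: int) -> str:
    if score < 300:
        return "does not meet standards"
    if score <= 350:
        return "meets standards"
    return "exceeds standards"

print([crct_band(s) for s in (280, 320, 360)])
# ['does not meet standards', 'meets standards', 'exceeds standards']
```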
Appendix C: Findings included in the rating for the mathematics achievement domain
Each row reports the sample size, the intervention and comparison group means (standard deviations), and the WWC calculations: mean difference, effect size, improvement index, and p-value.
Agodini et al., 2010 (a):
- ECLS-K Grade 1 (vs. Investigations in Number, Data, and Space): 54 schools/2,235 students; 45.05 (7.32) vs. 44.51 (8.04); mean difference 0.54; effect size 0.07; improvement index +3; p = 0.15
- ECLS-K Grade 1 (vs. Math Expressions): 52 schools/2,320 students; 44.36 (7.32) vs. 44.74 (8.52); mean difference -0.38; effect size -0.05; improvement index -2; p = 0.31
- ECLS-K Grade 1 (vs. Scott Foresman-Addison Wesley): 55 schools/2,377 students; 44.94 (7.32) vs. 44.43 (8.15); mean difference 0.51; effect size 0.07; improvement index +3; p = 0.16
- ECLS-K Grade 2 (vs. Investigations in Number, Data, and Space): 36 schools/1,711 students; 71.25 (16.16) vs. 69.85 (15.75); mean difference 1.40; effect size 0.09; improvement index +3; p = 0.09
- ECLS-K Grade 2 (vs. Math Expressions): 35 schools/1,721 students; 72.24 (16.16) vs. 71.38 (16.70); mean difference 0.86; effect size 0.05; improvement index +2; p = 0.28
- ECLS-K Grade 2 (vs. Scott Foresman-Addison Wesley): 36 schools/1,706 students; 73.06 (16.16) vs. 70.31 (15.74); mean difference 2.75; effect size 0.17; improvement index +7; p = 0.00
- Domain average for mathematics achievement (Agodini et al., 2010): effect size 0.07; improvement index +3; statistically significant
Resendez & Manley, 2005 (b):
- CRCT Grade 1: 229 schools/nr students; 86.26 (6.60) vs. 85.20 (6.80); mean difference 1.06; effect size na; improvement index na; p = 0.19
- CRCT Grade 2: 229 schools/nr students; 88.31 (6.39) vs. 86.86 (7.35); mean difference 1.45; effect size na; improvement index na; p = 0.00
- CRCT Grade 3: 218 schools/nr students; 86.94 (6.50) vs. 85.93 (7.15); mean difference 1.01; effect size na; improvement index na; p = 0.12
- CRCT Grade 4: 210 schools/nr students; 73.92 (8.51) vs. 71.39 (11.83); mean difference 2.53; effect size na; improvement index na; p = 0.00
- CRCT Grade 5: 208 schools/nr students; 82.46 (6.94) vs. 81.66 (8.93); mean difference 0.80; effect size na; improvement index na; p = 0.00
- Domain average for mathematics achievement (Resendez & Manley, 2005): na
- Domain average for mathematics achievement across all studies: effect size 0.07; improvement index +3; statistical significance na
Table Notes: For mean difference, effect size, and improvement index values reported in the table, a positive number favors the intervention group and a negative number favors the comparison group. The effect size is a standardized measure of the effect of an intervention on student outcomes, representing the average change expected for all students who are given the intervention (measured in standard deviations of the outcome measure). The improvement index is an alternate presentation of the effect size, reflecting the change in an average student's percentile rank that can be expected if the student is given the intervention. The WWC-computed average effect size is a simple average rounded to two decimal places; the average improvement index is calculated from the average effect size. The statistical significance of each study's domain average was determined by the WWC. nr = not reported by the authors. na = not applicable. ECLS-K = Early Childhood Longitudinal Study-Kindergarten. CRCT = Criterion-Referenced Competency Test.
(a) For Agodini et al. (2010), the unit of assignment is the school. The p-values presented here were reported in the original study. The intervention group mean is the unadjusted comparison mean plus the program coefficients from the hierarchical linear modeling (HLM) analysis. The comparison group mean is the unadjusted comparison group mean. A correction for multiple comparisons was needed but did not affect the statistical significance of the findings. This study is characterized as having a statistically significant positive effect because the effect for at least one measure within the domain is positive and statistically significant, and no effects are negative and statistically significant, accounting for multiple comparisons.
(b) For Resendez & Manley (2005), no corrections for clustering or multiple comparisons were needed. The p-values presented here were reported in the original study. The original study reported only means for CRCT subtests. The value reported here is the mean across those subtests as reported by the author to the WWC. The means presented here adjust for differences in the groups at pretest. For subtest results, see Appendix D. Standard deviations are measured at the school level and were provided by the author to the WWC. Because student-level standard deviations were not available for this study, the student-level effect sizes and improvement indices could not be computed and the magnitude of the effect size was not considered for rating purposes. For further details, please see the WWC Procedures and Standards Handbook, Appendix B.
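As a sanity check on the table above, the improvement index can be reproduced from the effect size under the usual normal-curve reading of the WWC definition (the change in an average student's percentile rank). The exact rule is in the WWC Procedures and Standards Handbook; the formula below is my paraphrase of it, not a quotation.

```python
# Assumed formula: improvement index = 100 * (Phi(effect size) - 0.5), where
# Phi is the standard normal CDF. See the WWC Procedures and Standards
# Handbook for the authoritative definition.
from statistics import NormalDist

def improvement_index(effect_size: float) -> float:
    return 100 * (NormalDist().cdf(effect_size) - 0.5)

# Spot checks against the Agodini et al. (2010) rows in Appendix C:
print(round(improvement_index(0.07)))   # 3  -> matches the +3 entries
print(round(improvement_index(-0.05)))  # -2 -> matches the -2 entry
print(round(improvement_index(0.17)))   # 7  -> matches the +7 entry
```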
Appendix D: Summary of supplemental findings for the mathematics achievement domain
Each row reports the sample size (schools), the intervention and comparison group means (standard deviations), and the mean difference and p-value; effect sizes and improvement indices are not applicable (na) for this study.
Resendez & Manley, 2005 (a):
Grade 1 (229 schools):
- CRCT: Numbers and number sense: 89.53 (6.31) vs. 88.52 (7.00); mean difference 1.01; p = 0.16
- CRCT: Geometry and measurement: 90.34 (5.83) vs. 90.29 (5.70); mean difference 0.05; p = 0.94
- CRCT: Patterns, relations, and algebra: 87.88 (6.99) vs. 86.28 (6.61); mean difference 1.60; p = 0.02
- CRCT: Computation and estimation: 78.93 (9.54) vs. 77.43 (10.10); mean difference 1.50; p = 0.14
- CRCT: Problem solving: 84.64 (7.30) vs. 83.49 (8.39); mean difference 1.15; p = 0.17
Grade 2 (229 schools):
- CRCT: Numbers and number sense: 88.57 (6.80) vs. 86.62 (8.38); mean difference 1.95; p = 0.02
- CRCT: Geometry and measurement: 91.46 (6.18) vs. 92.36 (5.41); mean difference -0.90; p = 0.13
- CRCT: Patterns, relations, and algebra: 87.05 (7.43) vs. 83.58 (9.63); mean difference 3.47; p = 0.00
- CRCT: Computation and estimation: 86.93 (7.13) vs. 85.83 (7.82); mean difference 1.10; p = 0.15
- CRCT: Problem solving: 87.54 (7.48) vs. 85.93 (8.28); mean difference 1.61; p = 0.04
Grade 3 (218 schools):
- CRCT: Numbers and number sense: 89.74 (6.29) vs. 88.24 (7.02); mean difference 1.50; p = 0.03
- CRCT: Geometry and measurement: 93.60 (4.50) vs. 92.24 (6.22); mean difference 1.36; p = 0.03
- CRCT: Patterns, relations, and algebra: 86.26 (6.67) vs. 85.90 (7.12); mean difference 0.36; p = 0.59
- CRCT: Statistics and probability: 87.13 (7.21) vs. 85.83 (7.98); mean difference 1.30; p = 0.09
- CRCT: Computation and estimation: 86.81 (7.80) vs. 85.71 (8.02); mean difference 1.10; p = 0.19
- CRCT: Problem solving: 78.11 (10.12) vs. 77.64 (10.69); mean difference 0.47; p = 0.63
Grade 4 (210 schools):
- CRCT: Numbers and number sense: 71.47 (10.32) vs. 70.85 (14.39); mean difference 0.62; p = 0.65
- CRCT: Geometry and measurement: 79.22 (8.93) vs. 78.16 (11.13); mean difference 1.06; p = 0.33
- CRCT: Patterns, relations, and algebra: 69.76 (8.77) vs. 67.70 (11.07); mean difference 2.06; p = 0.06
- CRCT: Statistics and probability: 82.15 (9.05) vs. 80.17 (10.82); mean difference 1.98; p = 0.07
- CRCT: Computation and estimation: 73.12 (10.30) vs. 67.65 (14.75); mean difference 5.47; p = 0.00
- CRCT: Problem solving: 67.81 (9.87) vs. 63.83 (14.44); mean difference 3.98; p = 0.00
Grade 5 (208 schools):
- CRCT: Numbers and number sense: 79.74 (9.55) vs. 77.31 (11.51); mean difference 2.43; p = 0.03
- CRCT: Geometry and measurement: 80.77 (9.01) vs. 81.54 (9.88); mean difference -0.77; p = 0.44
- CRCT: Patterns, relations, and algebra: 76.16 (9.37) vs. 74.56 (12.11); mean difference 1.60; p = 0.15
- CRCT: Statistics and probability: 79.82 (7.71) vs. 81.52 (9.65); mean difference -1.70; p = 0.07
- CRCT: Computation and estimation: 88.74 (6.56) vs. 86.62 (8.55); mean difference 2.12; p = 0.01
- CRCT: Problem solving: 89.55 (6.85) vs. 88.43 (7.34); mean difference 1.12; p = 0.14
Table Notes: The supplemental findings presented in this table are additional findings from the studies in this report that do not factor into the determination of the intervention rating. For mean difference, effect size, and improvement index values reported in the table, a positive number favors the intervention group and a negative number favors the comparison group. The effect size is a standardized measure of the effect of an intervention on student outcomes, representing the average change expected for all students who are given the intervention (measured in standard deviations of the outcome measure). The improvement index is an alternate presentation of the effect size, reflecting the change in an average student's percentile rank that can be expected if the student is given the intervention. na = not applicable. CRCT = Criterion-Referenced Competency Test.
a. For Resendez & Manley (2005), no corrections for clustering or multiple comparisons were needed. The p-values presented here were reported in the original study. The means presented here adjust for differences in the groups at pretest. Standard deviations are measured at the school level and were provided by the author to the WWC. Because student-level standard deviations were not available for this study, the student-level effect sizes and improvement indices could not be computed, and the magnitude of the effect size was not considered for rating purposes. For further details, see the WWC Procedures and Standards Handbook, Appendix B.
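The note above is why the effect size and improvement index columns read "na": the WWC's student-level effect size requires student-level standard deviations. For illustration only, the sketch below shows the standard Hedges' g computation with hypothetical student-level inputs (none of these numbers come from the study):

```python
import math

def hedges_g(mean_i, mean_c, sd_i, sd_c, n_i, n_c):
    """Standardized mean difference (Hedges' g) with a small-sample correction.
    Illustrative sketch; it needs student-level standard deviations, which
    Resendez & Manley (2005) did not report."""
    pooled_sd = math.sqrt(((n_i - 1) * sd_i**2 + (n_c - 1) * sd_c**2) / (n_i + n_c - 2))
    correction = 1 - 3 / (4 * (n_i + n_c) - 9)  # Hedges small-sample correction
    return correction * (mean_i - mean_c) / pooled_sd

# Hypothetical student-level inputs (placeholders only):
print(round(hedges_g(86.26, 85.20, 8.0, 8.5, 1500, 1500), 3))  # ~0.128
```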
Endnotes
1. The descriptive information for this program was obtained from a publicly available source: the program's website (http://www.hmheducation.com/saxonmathk5/index.php, downloaded June 2010). The WWC requests developers review the program description sections for accuracy from their perspective. The program description was provided to the developer in February 2012, and the WWC incorporated feedback from the developer. Following internal review, the program description was provided again to the developer in January 2013, and the WWC incorporated additional feedback from the developer. Further verification of the accuracy of the descriptive information for this program is beyond the scope of this review. The literature search reflects documents publicly available by December 2012.
2. The previous report was released in September 2010. This report has been updated to include reviews of six studies released since that report. Of the additional studies, one was within the scope of the Elementary School Mathematics review protocol and meets WWC evidence standards. The remaining five studies do not meet either WWC eligibility screens or evidence standards. A complete list and disposition of all studies reviewed are provided in the references. The studies in this report were reviewed using the Evidence Standards from the WWC Procedures and Standards Handbook (version 2.1), along with those described in the Elementary School Mathematics review protocol (version 2.0). When intervention reports are updated, all studies are re-reviewed under the current WWC standards. In this report, a study that met standards with reservations (Good, Bickel, & Howley, 2006) in the September 2010 report was re-reviewed, and it does not meet standards under version 2.1 of the WWC Evidence Standards; the intervention and comparison groups in that study were not shown to be equivalent at baseline. The evidence presented in this report is based on available research. Findings and conclusions may change as new research becomes available.
3. Absence of conflict of interest: One of the studies summarized in this intervention report, Agodini et al. (2010), was prepared by staff of one of the WWC contractors. Because the principal investigator for the WWC review of Elementary School Mathematics is also a staff member of that contractor and an author of this study, the study was rated by staff members from a different organization. The report was then reviewed by the principal investigator, a WWC Quality Assurance reviewer, and an external peer reviewer.
4. For criteria used in the determination of the rating of effectiveness and extent of evidence, see the WWC Rating Criteria on p. 19. These improvement index numbers show the average and range of student-level improvement indices for all findings in Agodini et al. (2010); it was not possible to calculate improvement indices for Resendez and Manley (2005) because student-level data were not provided.
5. One study, Resendez and Manley (2005), reported school sample size but did not report student sample size.
6. Both the primary and intermediate math curricula are available for grades 3 and 4.
7. Grade, delivery method, and program type refer to the studies that meet WWC evidence standards without or with reservations.
8. Results from grades 6–8 are being reviewed as part of the WWC Middle School Math review.
9. The original CRCT scores shown in the report are by objective. Upon request from the WWC, the authors calculated the mean overall score across all objectives, controlling for pretest, for each grade.
10. Georgia Department of Education. (n.d.). Criterion-referenced competency tests. Retrieved from http://www.doe.k12.ga.us/Curriculum-Instruction-and-Assessment/Assessment/Pages/CRCT.aspx
Recommended Citation
U.S. Department of Education, Institute of Education Sciences, What Works Clearinghouse. (2013, May).
Elementary School Mathematics intervention report: Saxon Math. Retrieved from http://whatworks.ed.gov
WWC Rating Criteria

Criteria used to determine the rating of a study

Meets WWC evidence standards without reservations: A study that provides strong evidence for an intervention's effectiveness, such as a well-implemented RCT.

Meets WWC evidence standards with reservations: A study that provides weaker evidence for an intervention's effectiveness, such as a QED or an RCT with high attrition that has established equivalence of the analytic samples.
Criteria used to determine the rating of effectiveness for an intervention

Positive effects: Two or more studies show statistically significant positive effects, at least one of which met WWC evidence standards for a strong design, AND no studies show statistically significant or substantively important negative effects.

Potentially positive effects: At least one study shows a statistically significant or substantively important positive effect, AND no studies show a statistically significant or substantively important negative effect, AND fewer or the same number of studies show indeterminate effects than show statistically significant or substantively important positive effects.

Mixed effects: At least one study shows a statistically significant or substantively important positive effect AND at least one study shows a statistically significant or substantively important negative effect, but no more such studies than the number showing a statistically significant or substantively important positive effect; OR at least one study shows a statistically significant or substantively important effect AND more studies show an indeterminate effect than show a statistically significant or substantively important effect.

Potentially negative effects: One study shows a statistically significant or substantively important negative effect and no studies show a statistically significant or substantively important positive effect; OR two or more studies show statistically significant or substantively important negative effects, at least one study shows a statistically significant or substantively important positive effect, and more studies show statistically significant or substantively important negative effects than show statistically significant or substantively important positive effects.

Negative effects: Two or more studies show statistically significant negative effects, at least one of which met WWC evidence standards for a strong design, AND no studies show statistically significant or substantively important positive effects.

No discernible effects: None of the studies shows a statistically significant or substantively important effect, either positive or negative.
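Because these six ratings form a decision rule over study-level findings, the logic can be summarized in code. The sketch below is a simplified reading of the criteria above: it compresses some handbook nuances, and the input format is invented for illustration. Each finding is "positive", "negative", or "indeterminate", paired with a flag for whether the study met standards for a strong design.

```python
def rating_of_effectiveness(findings):
    """Simplified sketch of the WWC rating rules above.
    findings: list of (effect, strong_design) pairs, where effect is
    'positive', 'negative', or 'indeterminate'; 'positive'/'negative' mean
    statistically significant or substantively important effects."""
    pos = [strong for effect, strong in findings if effect == "positive"]
    neg = [strong for effect, strong in findings if effect == "negative"]
    ind = [strong for effect, strong in findings if effect == "indeterminate"]

    if len(pos) >= 2 and any(pos) and not neg:
        return "Positive effects"
    if len(neg) >= 2 and any(neg) and not pos:
        return "Negative effects"
    if pos and not neg and len(ind) <= len(pos):
        return "Potentially positive effects"
    if (len(neg) == 1 and not pos) or (pos and len(neg) > len(pos)):
        return "Potentially negative effects"
    if not pos and not neg:
        return "No discernible effects"
    return "Mixed effects"  # remaining mixtures of significant and indeterminate effects

print(rating_of_effectiveness([("positive", True), ("positive", False)]))  # Positive effects
```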
Criteria used to determine the extent of evidence for an intervention

Medium to large: The domain includes more than one study, AND the domain includes more than one school, AND the domain findings are based on a total sample size of at least 350 students, OR, assuming 25 students in a class, a total of at least 14 classrooms across studies.

Small: The domain includes only one study, OR the domain includes only one school, OR the domain findings are based on a total sample size of fewer than 350 students, AND, assuming 25 students in a class, a total of fewer than 14 classrooms across studies.
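The extent-of-evidence rule is similarly mechanical. A minimal sketch, assuming the classroom count is estimated at 25 students per class when it is not reported directly:

```python
def extent_of_evidence(num_studies, num_schools, total_students, classrooms=None):
    """Simplified sketch of the extent-of-evidence criteria above.
    If the classroom count is unknown, it is estimated at 25 students per class."""
    if classrooms is None:
        classrooms = total_students / 25
    if (num_studies > 1 and num_schools > 1
            and (total_students >= 350 or classrooms >= 14)):
        return "Medium to large"
    return "Small"

# Hypothetical domain: two studies, 12 schools, 1,600 students in total
print(extent_of_evidence(num_studies=2, num_schools=12, total_students=1600))  # Medium to large
```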
Glossary of Terms

Attrition: Attrition occurs when an outcome variable is not available for all participants initially assigned to the intervention and comparison groups. The WWC considers the total attrition rate and the difference in attrition rates across groups within a study.
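To make the two quantities in this entry concrete, a minimal sketch (the sample counts are hypothetical):

```python
def attrition_rates(randomized_i, analyzed_i, randomized_c, analyzed_c):
    """Total and differential attrition, as described in the entry above.
    The thresholds the WWC applies to these rates are given in its handbook."""
    rate_i = 1 - analyzed_i / randomized_i
    rate_c = 1 - analyzed_c / randomized_c
    total = 1 - (analyzed_i + analyzed_c) / (randomized_i + randomized_c)
    differential = abs(rate_i - rate_c)
    return total, differential

total, differential = attrition_rates(200, 170, 200, 180)  # hypothetical counts
print(round(total, 3), round(differential, 3))  # 0.125 0.05
```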
Clustering adjustment: If intervention assignment is made at a cluster level and the analysis is conducted at the student level, the WWC will adjust the statistical significance to account for this mismatch, if necessary.
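One common way to implement such an adjustment is to deflate the student-level t statistic by the square root of the design effect. The sketch below shows that generic correction; the WWC's exact formula and its default intraclass correlation values are specified in its handbook, so treat this as an approximation:

```python
import math

def cluster_adjusted_t(t_statistic, n_students, n_clusters, icc):
    """Generic design-effect correction for cluster-level assignment analyzed
    at the student level. Approximation; see the WWC handbook for its formula."""
    avg_cluster_size = n_students / n_clusters
    design_effect = 1 + (avg_cluster_size - 1) * icc
    return t_statistic / math.sqrt(design_effect)

# Hypothetical example: 1,000 students in 40 schools, intraclass correlation 0.20
print(round(cluster_adjusted_t(2.5, 1000, 40, icc=0.20), 2))  # 1.04
```

A nominally significant student-level result can therefore become nonsignificant once the clustered assignment is taken into account.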
Confounding factor: A confounding factor is a component of a study that is completely aligned with one of the study conditions, making it impossible to separate how much of the observed effect was due to the intervention and how much was due to the factor.

Design: The design of a study is the method by which intervention and comparison groups were assigned.

Domain: A domain is a group of closely related outcomes.

Effect size: The effect size is a measure of the magnitude of an effect. The WWC uses a standardized measure to facilitate comparisons across studies and outcomes.

Eligibility: A study is eligible for review and inclusion in this report if it falls within the scope of the review protocol and uses either an experimental or matched comparison group design.

Equivalence: A demonstration that the analysis sample groups are similar on observed characteristics defined in the review area protocol.

Extent of evidence: An indication of how much evidence supports the findings. The criteria for the extent of evidence levels are given in the WWC Rating Criteria on p. 19.

Improvement index: Along a percentile distribution of students, the improvement index represents the gain or loss of the average student due to the intervention. As the average student starts at the 50th percentile, the measure ranges from -50 to +50.
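The improvement index follows from the effect size via the normal distribution: an average comparison-group student at the 50th percentile is mapped to the percentile implied by the effect size. A minimal sketch of that conversion:

```python
from statistics import NormalDist

def improvement_index(effect_size):
    """Expected change in percentile rank for an average comparison-group
    student given the intervention; ranges from -50 to +50."""
    return (NormalDist().cdf(effect_size) - 0.5) * 100

print(round(improvement_index(0.25), 1))   # +9.9 percentile points
print(round(improvement_index(-0.25), 1))  # -9.9 percentile points
```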
Multiple comparison adjustment: When a study includes multiple outcomes or comparison groups, the WWC will adjust the statistical significance to account for the multiple comparisons, if necessary.
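One widely used correction of this kind is the Benjamini–Hochberg step-up procedure (the specific procedure the WWC applies is described in its handbook). A minimal sketch, applied for illustration to the Grade 2 subtest p-values from Appendix D:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: flags which p-values remain
    significant after correcting for multiple comparisons. Illustrative sketch."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    max_rank = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            max_rank = rank
    significant = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= max_rank:
            significant[i] = True
    return significant

# Grade 2 subtest p-values from Appendix D, in table order:
print(benjamini_hochberg([0.02, 0.13, 0.00, 0.15, 0.04]))
# [True, False, True, False, False]
```

Under this correction, only the numbers-and-number-sense and patterns-relations-and-algebra contrasts would remain significant among the Grade 2 subtests.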
Quasi-experimental design (QED): A quasi-experimental design (QED) is a research design in which subjects are assigned to intervention and comparison groups through a process that is not random.

Randomized controlled trial (RCT): A randomized controlled trial (RCT) is an experiment in which investigators randomly assign eligible participants into intervention and comparison groups.

Rating of effectiveness: The WWC rates the effects of an intervention in each domain based on the quality of the research design and the magnitude, statistical significance, and consistency in findings. The criteria for the ratings of effectiveness are given in the WWC Rating Criteria on p. 19.

Single-case design: A research approach in which an outcome variable is measured repeatedly within and across different conditions that are defined by the presence or absence of an intervention.

Standard deviation: The standard deviation of a measure shows how much variation exists across observations in the sample. A low standard deviation indicates that the observations in the sample tend to be very close to the mean; a high standard deviation indicates that the observations in the sample tend to be spread out over a large range of values.

Statistical significance: Statistical significance is the probability that the difference between groups is a result of chance rather than a real difference between the groups. The WWC labels a finding statistically significant if the likelihood that the difference is due to chance is less than 5% (p < 0.05).

Substantively important: A substantively important finding is one that has an effect size of 0.25 or greater, regardless of statistical significance.

Please see the WWC Procedures and Standards Handbook (version 2.1) for additional details.