December 5, 2010
increase focus on the collection and use of student achievement data as a means of improving
school performance. The amount of student data being collected and tracked, the investments in
technological infrastructure to store and report data, and the number of professional development
offerings focused on improving teachers' use of these data are clear indicators of this trend
(Bambrick-Santoyo, 2008; Borja, 2006; Burch, 2010; Hoff, 2006; Honawar, 2006; Zehr, 2006).
Burch (2010) cites a survey of large urban school districts indicating that 82% of respondents
had invested in interim assessment technology, which corresponds with the 70% of
superintendents administering interim assessments and 10% planning to start, as reported by Lynn
Olson (2005).
The tests students are taking and the data that they provide land on the desks of
teachers, principals, curriculum directors, and district administrators with an almost implied
sense that "if we test and have the data, we will improve." What is written about interim
assessment is quite speculative in nature, amounting to conclusions that many of these systems
consume precious instructional time and resources, are unlikely to lead to student learning, and
may actually harm students through diminished motivation to learn and decreased self-efficacy
(Shepard, 2008).
Interim assessment vendors have linked their products to formative assessment by claiming they can
serve formative purposes despite little empirical evidence demonstrating the assessments are actually used
in a formative way. With the growing popularity of these assessment programs, it has become critical to
Use of Interim Assessments 3
evaluate the ways the large amount of data is actually being used in practice by teachers. Assessment
systems claim to diagnose gaps in student learning, evaluate instructional approaches and curricula, and
provide useful information to improve classroom instruction and increase student learning. To
substantiate these claims, advocates rely on extensive research showing how formative assessment
practices can substantially impact student learning (e.g., Marzano, 2006). Few studies, however, have
been conducted to examine whether interim assessment systems are being used for formative purposes or
what their effects on student learning might be.
This action research project is intended for teachers, instructional coaches, and administrators to
reflect upon and implement best practices for using interim assessment data in education.
Research Questions
This paper investigates the following research questions:
How are teachers and administrators across the state of Colorado using interim assessments in
their schools?
What are best practices for using interim assessment programs in education?
Definition
Interim assessments claim both summative and formative purposes, diagnosing student
strengths and weaknesses and evaluating the success of instruction. When applied skillfully,
interim assessments can provide formative information to improve teaching and learning. Many
educators are keenly interested in leveraging the power of both formative and summative
purposes to boost student performance on state tests used for accountability, to analyze
instructional approaches and curricula, and to provide information for improving classroom
instruction. Whether such uses are warranted depends in part on how interim assessments are
classified. Marian Perie, Scott Marion, and Brian Gong (2009) with the National Center for the
Improvement of Educational Assessment, Inc. spelled out a clear definition of interim
assessments as those whose results can be meaningfully aggregated and reported "across students,
occasions, or concepts."
Perie et al. (2009) provide a framework for evaluating these popular assessments, which
are being marketed to states and districts as "benchmark," "diagnostic," "formative," and/or
"predictive," with potential for improving student performance and meeting requirements set
forth in NCLB.
Such assessments are also marketed as supporting the types of school improvement required under NCLB. Emily Lai of the University of Iowa states
that the characteristics of frequency, brevity, and the ability to identify possible learning disabilities make this
form of assessment useful and user-friendly to educators. Lai (2009) claims, "The
theory of action underlying interim assessment makes intuitive sense." Additionally,
formative assessment has significant empirical data supporting its use (Black & Wiliam, 1998).
The literature on formative assessment points to interim assessment's potential for
identifying content and skills that students have not mastered and for targeting instruction to
students with particular learning difficulties who may profit from more intensive instruction (Bloom, Hastings,
& Madaus, 1971; Cowie & Bell, 1999; Popham, 2006). Black and Wiliam (1998) drew upon
this research to examine the effects of formative assessment practices on students' cognitive and
affective outcomes, summarizing their findings from approximately 250 empirical studies.
In fact, the research behind formative assessment is so vast that Nichols, Meyers, and Burling
(2009) go as far as to claim that, in technical discussions of the interpretation and use of
assessment scores, reference to an assessment as "formative" is shorthand for the formative use
of that assessment's results.
Many advocates and vendors are invoking the term "formative" to convey the
effectiveness of their products (Popham, 2006). With this label, proponents draw on the
extensive research done by Black and Wiliam (1998) and others for empirical evidence because
they believe interim assessments might have the same potential for making a difference in
student learning. To the extent that interim assessments inform instruction rather than simply
predict performance on summative tests, the comparison to formative assessment appears valid. Many educators,
however, may not fully appreciate the differences between the types of assessment and end up
with a product that does not fit their purpose (Lai, 2009).
The importance of assessment systems is going "beyond simply providing data" to
providing educators with strategies for the interpretation and use of data in modifying classroom
instruction. Vital to a successful interim assessment system is knowing its intended purpose in
advance. Michael Scriven first drew the distinction between formative and summative evaluation,
although he was writing about program evaluation rather than evaluation of students. He points
out that formative evaluation goes hand-in-hand with implementation: information from
evaluation is used to modify and improve a program while it is still malleable. On the
other hand, summative evaluation occurs after the program has concluded, primarily for the
purpose of assessing program success and for making decisions regarding the program's
continuation. The distinction was later extended to student assessment,
beginning with Bloom, Hastings, and Madaus (1971), whose handbook provided explicit
guidance for complementing summative evaluation with formative student assessments. Scriven
(1996) has also emphasized that this dichotomy is misleading; what distinguishes the two is their
use rather than their timing. Black and Wiliam (1998) have argued that "formative-ness" is not a quality of the
assessment instrument itself, but rather describes the way it is used (Perie et al., 2009).
Predictive and evaluative uses of the data, including the evaluation of instructional
methods or curricula, are more summative in nature and fall outside of the realm of the studies
conducted by Black and others (Popham, 2006).
Nichols et al. (2009) argue that for assessments to be accurately labeled "formative,"
they must be accompanied by demonstrated data analyses whose results directly lead to increased
student achievement. They contend that interim assessments have been marketed as "formative,"
promising to increase student achievement without actually offering any evidence that specific
products lead to increased achievement (Linn, 2007; McMillan, 2007; Nichols, Meyers, &
Burling, 2009; Perie et al., 2009; Popham, 2006; Shepard, 2008; Shepard, 2009; Wiliam &
Thompson, 2008).
A large body of research supports formative assessments, but using interim assessments to impact learning has very little research behind it. The
connection to interim assessments is purely speculative, as the use and purpose of interim
assessments vary greatly. Many of these interim assessments do not even appear to qualify under
the definition of formative assessment laid out by Black and Wiliam (1998) and Popham (2006).
Assessment vendors stand to reap the benefits from this surge in the use of interim
assessments. As Olson (2005) reports, one market research firm estimated that by the year 2006
sales of interim assessment systems had reached over $320 million annually. Applying
institutional theory, Patricia Burch (2010) from the University of Southern California looked at
the complex dynamic behind the highly profitable interim assessments being sold by private
vendors. She notes how early institutional theory in education exists outside of the private sector
and points to the importance of closely watching the growing trend and the relationship between
the public and private sectors. With the marketing of these assessments as "formative," a clear
definition of the term has become essential. Notably, the Council of Chief State School Officers (CCSSO) established a division
known as FAST (Formative Assessments for Students and Teachers) with its first focus being to
establish this clear definition. Popham (2006) summarizes this definition.
Critics point to many elements of the formative assessment described by Black
and Wiliam that are missing from the interim assessments being implemented. For example, they point to the
need for formative assessment to involve student participation in the process, to provide
feedback to students, and to use high-quality instruments. Other components of successful
formative assessment are clear learning targets (Brookhart, 2008; Sadler, 1989; Wiliam &
Thompson, 2008); learning progressions with next steps for students (Wilson & Draney, 2004);
assessment tasks that are curriculum-embedded, designed to reveal students' thinking
processes, and instructionally meaningful (Shepard, 2006); and the timely, almost immediate,
availability of results (Popham, 2006). Specifically, Sadler (1989) envisioned a situation in which both
students and teachers are engaged in the process of interpreting assessment results with respect to
their own learning goals and their progress toward those goals.
Often referring to interim assessments as "a mile wide and an inch deep," critics contend the
results are too superficial to provide reliable estimates of sub-skills (Linn, 2007) or specific
diagnostic information (Shepard, 2008); are too removed from classroom instruction to relate to
desired learning goals (Pellegrino & Goldman, 2008; Shepard, 2008); are too limited in item
format to elicit evidence of students' thinking processes (Shepard, 2008); and generally are not
properly suited to provide useful information to impact instruction. Another critique is that the
narrow focus of the tests has teachers focusing on the "bubble" students at the expense of the
rest of the class. To be worthwhile, interim assessments must inform instruction and be feasible and worth the money and time schools invest in them. Abrams (2007)
claims that interim assessments must be accompanied by information regarding how teachers can
adjust their instruction to help remediate student deficiencies; otherwise they will amount to little
more than "early warning summative" assessments. Shepard (2008) adds that interim
assessments should be tied to instructional units, be consistent with curriculum sequencing, and provide information that is not available
from classroom assessment alone.
Multiple assessment types should complement one another, with each customized to
serve its purpose and linked to the same content standards (Pellegrino & Goldman, 2008).
This applies both to large-scale assessments used for summative purposes and classroom assessments used for formative
purposes (Perie et al., 2009; Perie, Marion, Gong, & Wurtzel, 2007; Pellegrino & Goldman,
2008).
In attempting to address multiple purposes, Pellegrino and Goldman (2008) argue that
compromises will be unavoidable. Changes that bring the interim assessment closer in line with
classroom instruction will provide relevant results for improving teaching and learning but may
diminish the relationship between performance on the interim assessment and performance on
the state accountability test.
In hopes of discovering best practices for how interim assessments can be used to improve
student learning, Lai (2009) from the University of Iowa offered the first empirical evidence
regarding the way assessments were actually being used and the first known instruments for
measuring teachers' interim assessment practices. Her research location, Iowa, provided a
unique context because of two state provisions for satisfying accountability provisions of NCLB. First, all public schools in the state are required to
administer at least one district-wide assessment (other than the ITBS) to students at any grade
level to track student progress in reading, math, and science. This provision is known as the
"multiple measures" requirement, and its purpose is to facilitate comparisons between Iowa
students and other students in the nation. Second, all districts are required to administer a
diagnostic reading assessment to all students in grades K-3 at least twice a year so that parents
can be kept informed of their children's progress.
Her results indicated that score reports did not provide enough detailed information about
student performance to enable teachers to give students high-quality feedback regarding strengths and
areas for improvement. Thus, combination assessments appear to lack many of the features of
effective formative assessment. Lai's study also points to several factors associated with significant learning gains.
While school administrators try to use one instrument to satisfy several purposes, Lai's
(2009) survey results suggest this strategy may not be successful, as the characteristics that make
an assessment an appropriate tool for comparing Iowa students to their peers in other states do
not necessarily facilitate formative use of assessment results. This appears to support the
following argument:
Assessments used for external accountability are far removed from classroom instruction
on a number of dimensions: 1) the content represented on the tests is not a complete match to
content emphasized during instruction; 2) the timing of the assessment is distal from the
instructional cycle, occurring after relevant instruction has ended, which does not allow results to
feed back into instructional improvement; 3) the purpose of the assessment is to enable
inferences about the effectiveness of state and district educational systems and to allocate
recognition of accomplishments, and the primary intended users are policy-makers and administrators rather than
teachers and students; and 4) the locus of control regarding assessment selection and
administration is centralized (e.g., residing with the state or district) rather than decentralized
(e.g., residing with the classroom teacher). In many ways, interim assessments constitute a
middle ground between these external tests and classroom assessment.
Excitement is growing over the potential behind interim assessments, but the key factor in their ability to increase
student achievement lies in their instructional use. Many vendors and proponents of interim
assessments rely on studies related to formative assessment; however, their actual use in a
formative manner remains largely unexamined.
Section 3: Methodology
Context of Study
This study began as an action research project focused on improving the use of assessment data at a
local hybrid online charter school. As the 3rd-5th grade teacher of 17 mixed-age students, I noticed that the
first few weeks of school were largely devoted to testing. While I desperately wanted data to
provide information on the students' knowledge base, I also had four weeks of teaching (with these
interruptions) during which my information about the students was limited to my own classroom assessment. In
fact, it was two weeks after the exams were administered before I received any results.
When the assessment results finally arrived in my email, they simply contained a proficiency level and
percentage for each individual student by subject as well as aggregated data. I identified student ability
groups quickly, but the report failed to provide me with useful information for adjusting my instruction.
When I had previously taught 4th grade in another local school district, I had received item-by-item
analysis organized by individual student and learning objective. The computerized reports provided me
with in-depth item analysis, including distractors, which provided information as to why students may have
selected the incorrect answer. I was able to adapt instruction to better suit the needs of my classroom
based upon common misunderstandings that emerged from analysis of the assessment data. Meetings
with my school instructional coach helped assure that my data inferences were correct, and she offered
guidance on teaching strategies focused on the specific needs of my students. While analyzing and
discussing the data required a lot of time in meetings and working at home, it played a direct role in my
instructional planning.
My new classroom consisted of 17 students almost evenly divided among third, fourth, and fifth
graders. Their ability levels varied from complete illiteracy to advanced proficiency at each grade level,
and this wide range made it critical to be as efficient as possible in instruction. If the type of data I had
previously experienced had been available, I could have identified specific objectives and targeted small
group instruction.
Having experienced a data-rich environment, I knew the role interim assessment programs could play
in driving instruction within the classroom. I wondered how others used interim assessments within their
own classrooms and schools.
Preliminary Interviews
I began my research by interviewing five teachers in Denver-area schools to investigate the processes
and tools used by other teachers in analyzing interim assessment results. The interviews provided the
opportunity for these teachers to reflect upon and explain their assessment practices and experiences
in their own terms. "The interview process not only provides a record of participants' views and
perspectives but also symbolically recognizes the legitimacy of their experience" (Stringer, 2007).
Participating teachers were selected from five different schools across four districts in the Denver
area in order to better represent varied approaches to the use of various interim assessment programs.
Teaching experience ranged from four to 26 years with a mean of 14 years. This variety of
teachers was selected in order to create greater transferability of the study for use in a wide range of
settings. Keeping personal experiences and possible bias in mind, I formulated a series of grand-tour-type
questions to guide the interviews. The following questions were purposefully created to provide focus for
the interview and help bracket my personal experience and potential bias (Stringer, 2007).
2. What is the availability of data, how is it accessed, and what information is provided?
3. What functions of the data system do you find most and least helpful?
5. What professional development/training have you had relating to use of this data?
7. What challenges do you face in using the data provided from interim assessments?
8. What are your overall views of using data provided by interim assessments?
I met with interview participants in local coffee shops and classrooms after school, and we briefly
discussed the purpose of my research and the interview. In the practice of informed consent, participants
read and signed a consent form and were asked if they had any questions. In addition to taking notes,
I recorded each interview using a portable digital audio device, later importing the recordings into audio
editing software.
While our discussions all focused around the above guiding questions, most of them were answered
without prompting from an actual question. Throughout the interviews, I asked follow-up questions beyond
the guiding questions. During this process, I noted general themes and sorted the information into
categories of test format, access to data, reporting functions, analysis process, use of information,
professional development, and alignment with curriculum. Each statement was then coded with a specific
unit of meaning and categorized by general topic such as test format, access to data, data analysis,
purposes, and uses of interim assessment data. The coded units of meaning directly related to the purpose
and use of assessment were then sorted into predictive, evaluative, and summative categories following
the principles set forth by Perie et al. (2009) and discussed within the review of literature.
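The tallying step of this coding process can be sketched in a few lines. The statement tuples and category names below are illustrative stand-ins, not the study's actual codebook or data.

```python
# Sketch of the coding tally: each coded statement carries a topic category
# and a purpose category (predictive / evaluative / summative, after
# Perie et al., 2009). All entries here are hypothetical examples.
from collections import Counter

# (statement_id, topic_category, purpose_category)
coded_statements = [
    (1, "access to data", "evaluative"),
    (2, "use of information", "predictive"),
    (3, "use of information", "summative"),
    (4, "test format", "evaluative"),
]

# Count how often each topic and each purpose category appears.
topic_counts = Counter(topic for _, topic, _ in coded_statements)
purpose_counts = Counter(purpose for _, _, purpose in coded_statements)

print(topic_counts["use of information"])  # 2
print(purpose_counts["evaluative"])        # 2
```

The same two-pass structure (assign units of meaning, then aggregate by category) scales to any number of interview transcripts.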
Survey Construction
A mining of the literature on formative assessment identified three factors likely to facilitate the
formative use of interim assessment data:
1. qualities of the instruments themselves (item formats, alignment of tasks with classroom
instruction)
2. aspects of use (communication of learning targets and quality criteria, provision of effective
feedback)
These factors were used to construct survey instruments designed to measure how schools and teachers are
using interim assessment data.
A survey was then constructed based upon these factors combined with themes from the preliminary
interview data analysis. The survey consisted primarily of Likert-scale items, with participants indicating
the degree to which they agreed with a series of statements. Other questions asked about the frequency of
specific data practices.
A draft survey was then administered to the original five interview participants, who were told to ask
questions for clarification and make annotations next to survey items for further discussion if needed.
Changes were made to the survey to clarify questions or provide more targeted questioning.
Survey Administration
In October 2010, email invitations to complete the online survey were sent to the assessment
coordinators of each of Colorado's 186 school districts via email addresses obtained through the Colorado
Department of Education (CDE). A brief email explained the purpose of my study with a link to the
survey (Appendix ). In order to maximize the response rate, the email invitation was sent on a Monday
night with the intention of the assessment coordinators opening their email Tuesday morning.
All survey data were collected via a Google Form and maintained within a Google Documents
spreadsheet, later downloaded to Microsoft Excel for analysis utilizing pivot tables and graphs.
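The pivot-table computation behind the role-by-response percentages can be sketched as follows. The rows, role names, and response labels are illustrative, not the actual survey export.

```python
# Sketch of the pivot-table analysis: given survey rows of (role, response),
# compute each role-by-response cell as a percentage of all respondents.
# The data below are hypothetical examples, not the study's results.
from collections import Counter

responses = [
    ("Teacher", "Most of the time"),
    ("Teacher", "Some of the time"),
    ("Assessment Coordinator", "Some of the time"),
    ("Administrator", "Most of the time"),
]

# Count respondents in each (role, response) cell.
cell_counts = Counter(responses)
total = len(responses)

# Express each cell as a percentage of all respondents, as in the figures.
percentages = {cell: round(100 * n / total, 2) for cell, n in cell_counts.items()}

print(percentages[("Teacher", "Most of the time")])  # 25.0
```

A spreadsheet pivot table performs the same count-then-normalize operation; the script simply makes the computation explicit.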
Figure 4. Reviewing Data with Parents

Twenty-five respondents reported they used the data to review with parents their students' areas of strength and
weakness. Reviewing and communicating results with the parents arguably can be considered a more
summative use of the data. Interestingly, 80% of respondents indicated they used interim assessment data to evaluate
how well the students have learned the material taught to date. It would seem logical that, in order to
use interim assessment data in evaluating teacher pacing, the test would need to align with the classroom
curriculum.
Several interviewees mentioned this alignment as being inconsistent. As a fourth grade teacher told me, "I don't
understand what is being tested… because it doesn't align with what we are doing in class." This
statement was confirmed by a teacher in a different district. She replied, "The MAP test shows mastery
level of skills, but it is not aligned with our Core Knowledge curriculum. I simply hope the students are
able to transfer their knowledge." This lack of alignment with daily instruction could play a critical role
in how the data can be used. When the assessments do not align with the curriculum, it is difficult
for teachers to use the data in this way, even though most respondents claim they do.
Questions might arise whether or not teachers are permitted to make modifications to the pace or
sequence of the curriculum. One interviewee remarked, "You have to stay within the curriculum,
but you can make some adjustments to better fit your class." (See Figure 6.)

Figure 6. Modify Pace and Curriculum
                          Every time   Most of the time   Some of the time   Rarely/Never
Teacher                   3.23%        6.45%              12.90%             9.68%
Assessment Coordinator    6.45%        9.68%              16.13%             6.45%
Administrator             0.00%        19.35%             9.68%              0.00%

Groups appear to be a major focus for using interim assessment data. Seventy-four percent of
respondents indicated agreement to grouping students within the classroom by ability level for
instruction, with only five disagreeing and none strongly.
"When you get the data, all I'm thinking about is how many kids are at each level so I can group them by
ability," noted one teacher. Also noteworthy, only one person indicated she did not use the data for
grouping.
When setting goals, however, teachers appear to focus more on the individual. In Figure 8, a
majority of 65% of respondents indicated they used the data to set individual student goals most
or every time. This is in comparison to only 44% falling into the most or every categories in regards
to group goals. In fact, only 6% (2) of respondents indicated they practiced group goal-setting every
time.

Figure 8. Individualize Student Learning Goals (respondent counts by role across the response
categories Every time, Most of the time, Some of the time, Rarely/Never, and Not Applicable)

Attention to students near proficiency thresholds was mentioned in an interview with a teacher
from a local charter school. He continued, "We look at which kids can be pushed up to the next
level." These "bubble students" appeared as a theme throughout my teacher interviews, as the
practice was mentioned by four of six interviewees. Teachers are setting goals more for
individual students and focusing goals more specifically on students on the threshold between proficiency
levels.
Student Involvement
As Sadler (1989) noted, a key benefit of both students and teachers working together to set learning
goals and track progress toward those goals is student involvement. The survey probed student
involvement through items on students tracking their own progress, reviewing results with individual
students, reviewing the test as a class, and related practices.
The survey indicated widespread student participation, with 45% of respondents indicating they went
over results, including strengths and weaknesses, with individual students. An almost equal number
(42%) indicated using this practice some of the time. The remaining 12% were evenly split between
the remaining response options.
Figure 10. Motivate Students

The data reveal that using the results to motivate students is far less common, with few respondents
replying "most of the time." The majority reported "Rarely/Never" to this question. Ten replied some of
the time. This would arguably be a formative use of the data, but it does not appear to be done
often.
The formative purposes of assessment must be kept in mind in order to increase student achievement. As my data revealed, many schools and teachers are
applying formative principles in their practice. The data indicate that teachers look to use assessment
data for grouping of students, but they are not applying the same principles at the individual level
with the same frequency. While goals and feedback appear to be focused at a more individual
level, the instructional components seem to operate at the group level. The use of data for individual students
would arguably be more summative and predictive in nature; however, the research data indicate
teachers are practicing the more formative purposes for group instruction. Hence there is a gap between
what the data are intended to do and what the data are actually being used for. Data and research both
indicate that using interim assessments at the individual student level can provide feedback and motivate
students to learn.
Section 5: Conclusion
By comparing the literature review with the findings of this action research, we can
reflect upon our practices and shift our use of data toward a more formative approach
consistent with the research on formative assessment. This has implications
for the education of our students and the way school districts are investing in these assessment
programs. It also has implications for professional development in school systems. The results
of this research can be used to drive professional development by informing teachers of both best
and common practices, and the focus of professional development workshops should shift accordingly.
The limitations of this study include a small sample of 31 total surveys. The respondents
were split almost evenly into thirds among teachers, assessment coordinators, and administrators.
Future studies should include more teachers, as they are the ones in the classroom using the
data.
References
Bambrick-Santoyo, P. (2008). Data in the driver‘s seat. Educational Leadership, 65 (4), 43-46.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education:
Bloom, B.S., Hastings, J.T., & Madaus, G.F. (1971). Handbook on formative and summative
Cowie, B., & Bell, B. (1999). A model of formative assessment in science education.
Lai, E. R. (2009). Interim assessment use in Iowa elementary schools. Theses and
Linn, R. L., (2007). Benchmark assessment: Promise or peril? Paper presented at the Annual
Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2004). Classroom Instruction that Works:
Nichols, P.D., Meyers, J.L., & Burling, K.S. (2009). A framework for evaluating and planning
Olson, L. (2005). Benchmark assessments offer regular checkups on student achievement. Education Week, 24 (14),
1-19.
integrating assessment into classroom teaching and learning. The future of assessment:
Shaping teaching and learning (pp. 7-52). New York: Lawrence Erlbaum Associates.
(3), 86-87.
Perie, M., Marion, S., & Gong, B. (2009). Moving toward a comprehensive assessment system:
(5), 38.
Shepard, L.A. (2008). Formative classroom assessment: Caveat emptor. The future of
assessment: Shaping teaching and learning (pp. 279-303). New York: Lawrence
Erlbaum Associates.
Shepard, L.A. (2010). What the marketplace has brought us: Item-by-item teaching with little
Wiliam, D., & Thompson, M. (2008). Integrating assessment into learning: What will it take to
make it work? In C.A. Dwyer (Ed.), The future of assessment: Shaping teaching and
Wilson, M. & Draney, K. (2004). Some links between large-scale and classroom assessments:
The case of the BEAR assessment system. In M. Wilson (Ed.), Towards coherence
Chicago Press.
You are being asked to be in a research study. This form provides you with information about the study. A member of the research
team will describe this study to you and answer all of your questions. Please read the information below and ask questions about
anything you don’t understand before deciding whether or not to take part.
I am conducting a research study to investigate: How are data from interim (sometimes referred to as benchmark or periodic) assessments used?
You are being asked to be in this research because you work within a school which utilizes interim assessments in the Denver metropolitan area.
If you join the study, your survey and interview responses will be collected by the investigator/researcher, John Bunker, upon completion.
All comments will be de-identified after the investigator/researcher reviews them. Only individuals who consent to this study
will be interviewed and have surveys collected. The only extra time on your part will be the completion of the survey either online or by pen and
paper which takes 5-10 minutes. Any interviews will take a maximum of 45 minutes and be scheduled at your convenience outside of your work
schedule.
As part of the data for my study, interviews will consist of questions related to your personal experience using data from interim
assessments given periodically at your school. I will correspond with you outside your work day at your convenience to set up and conduct the
interview.
Your participation will be confidential and pseudonyms will be used in all documentation, analyses, and written findings. I am not
associated with any school, district, state, or federal agencies and participation will not have any influence on performance evaluations or
licensing.
Discomforts you may experience while in this study include social embarrassment if your identity is revealed and you do not like how your
comments are analyzed. As the researcher, I will be the only individual reviewing and analyzing any of the individual comments. Participant
names will NOT be used in reporting the results in any publications and will be de-identified prior to data review. The researcher will not let
anybody know who is participating in the study, but there is a small risk that your identity may be recognized through a de-identified comment.
Additionally, the de-identified data will be stored in a Google Document with access limited to the researcher.
This study may include risks that are unknown at this time.
This study is designed for the researcher to learn more about how data from interim assessments is used to impact student learning. This
data will be reported in professional educational journals and research-based presentations with the intention to inform best practices. This study
may not benefit you personally, but hopefully will benefit future students’ learning through the use of interim assessments.
Is my participation voluntary?
Taking part in this study is voluntary. You have the right to choose not to take part in this study. If you choose to take part, you have the
right to stop at any time. If you refuse or decide to withdraw later, you will not lose any benefits or rights to which you are entitled.
The researcher carrying out this study is John Bunker. You may ask any questions you have now. If you have questions later, you may call
John Bunker at 720-989-1501.
You may have questions about your rights as someone in this study. You can call John Bunker with questions. You can also call the Human
Subject Research Committee (HSRC). You can call them at 303-556-4060.
Both the records that identify you and the consent form signed by you may be looked at by others. They are:
Regulatory officials from the institution where the research is being conducted who want to make sure the research is safe
The results from the research may be shared at meetings or in published articles. Your name will be kept private when information is presented.
I have read this paper about the study or it was read to me. I understand the possible risks and benefits of this study. I know that being in this
study is voluntary. I choose to be in this study. I will get a copy of this consent form.
Signature: Date:
Print Name:
Print Name:
Investigator: Date:
Comment
- Scantron-style test with spaces in the test booklet for constructed response items (approximately 5 constructed response items per 25-item test, with the rest being 4-option multiple choice)
- District created
- "If the kids aren't used to a format of a test, it is tricky for them. Sometimes I think the students have the knowledge but are thrown off by the format of the question."
- "If I take a test, I practice the test. That's why there are prep guides - so you know what to expect."
- Access to data for all students taught regardless of subject area (Example: I have a student for reading, but he has my teammate for math. I receive his math data in addition to his reading.)
- Access to test copy with individual items for 3 weeks - "It would be nice to be able to reference the individual items anytime to see their style and content. It could help me figure out why students missed the items they did."
- "The last time I logged on looking for item analysis, it wasn't up."
- School secretary distributes general data showing proficiency level of individual students by subject and standard about 1 week following the test
- Standards linked to individual test items - "The standards are not realistic to what I teach. It's hard to figure out what test question types are going to cover what standard."
- "This report shows each individual item number, which standard it correlates to, the correct response, and individual student responses when incorrect."
- "It would be nice to have a report that identified why the student might have gotten it incorrect. I have to look at each individual test item and try to figure out why they might have selected B instead of C."
- "There is just so much information here to mine through, but the system is pretty user-friendly."
- Look for gaps in achievement by identifying standards with higher numbers of students below proficient under each standard
- "I look for common wrong answers on frequently missed items and notice trends in the type of questions students miss."
- Review the test immediately following, whole test, item by item - "It's really boring for the kids, but I think it's important."
- "The problem is the amount of data. It takes a lot of time to analyze it all, and then we have to keep up with the pace of the curriculum. I don't get a chance to re-teach because we have to get through the curriculum."
- Teacher work day immediately following test for grading of constructed response questions
- Teacher Portal Expert - received 8 hours of training on using the system last spring
- Had 3 1-hour training sessions for all teachers last year, "but that wasn't helpful at all."
- "I'm not sure if the test covers higher order thinking skills, but I'm not sure if that is the idea behind the test."
- "It should be used to guide instruction, but I'm not sure what it actually does."
- "I don't understand what is being tested. Is it a pre-test or a post-test? Because it doesn't align with what we are doing in class. It would be nice if we knew what was actually being tested."
- "Writer's workshop doesn't cover some of the important skills they are being tested on, such as punctuation and grammar."
- "The test material doesn't seem to be aligned with our classroom pacing."
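The item-analysis practices the teachers describe above (flagging standards where many students are below proficient, and looking for common wrong answers on frequently missed items) amount to a simple tabulation over item-level results. The sketch below illustrates the idea; the record layout and the sample rows are hypothetical, not the district's actual report format:

```python
from collections import Counter

# Hypothetical item-level records: (student, item, standard, response, correct answer).
# A real interim-assessment export would have its own layout.
responses = [
    ("s1", 1, "RE.1", "B", "C"),
    ("s2", 1, "RE.1", "B", "C"),
    ("s3", 1, "RE.1", "C", "C"),
    ("s1", 2, "RE.2", "A", "A"),
    ("s2", 2, "RE.2", "D", "A"),
    ("s3", 2, "RE.2", "D", "A"),
]

# Percent correct per standard: low values flag possible achievement gaps.
by_standard = {}
for _, _, std, resp, key in responses:
    right, total = by_standard.get(std, (0, 0))
    by_standard[std] = (right + (resp == key), total + 1)
for std, (right, total) in sorted(by_standard.items()):
    print(f"{std}: {right}/{total} correct")

# Most common wrong answer per item: a shared distractor suggests a
# specific misconception rather than random guessing.
wrong = {}
for _, item, _, resp, key in responses:
    if resp != key:
        wrong.setdefault(item, Counter())[resp] += 1
for item, counts in sorted(wrong.items()):
    resp, n = counts.most_common(1)[0]
    print(f"item {item}: most common wrong answer {resp} ({n} students)")
```

This is the kind of report the teachers wished the system produced directly: rather than reading each item to guess "why they might have selected B instead of C," a distractor count surfaces the items where most of the wrong answers agree.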
Appendix C. Survey
[Survey instrument; the portions recoverable from the original formatting are reproduced below.]

Subject area(s) taught: Reading / Math / Writing / Science / Social Studies / Other: ______

Use interim assessment data to: (Strongly Disagree / Disagree / Neutral / Agree / Strongly Agree)
- Evaluate how well the student has learned the material taught to date
- Predict students' performance on CSAP
- Determine whether one pedagogical approach is more effective in teaching material than another
- Provide specific feedback on gaps in a particular student's knowledge
- Determine whether students are on track to succeed on CSAP
- Provide corrective feedback to help students succeed on CSAP

In what other ways does your school/district use results from interim assessments?

Please rate how often you use the following practices relating to interim assessment data: [practice list illegible in source]

Approximately how many days does it take to receive results after completing the assessment?