Annotated Bibliography
May 2, 2011
Introduction
• Retention - University
• Directed Self-Placement/ Self-efficacy
• Accuplacer and FYC placement standardized exams
• WPA as authority & who makes decisions about writing programs?
o Who gets to make decisions about these things (placement)
• Machine-scored placement exams?
Annotated Bibliography
(often used for placement purposes, such as the SAT and ACCUPLACER)
“assure that the lowest form of assessment provides the appearance of
thoroughness and that the greatest economy will prevail” (142). However,
these types of tests prioritize placement over process and construct—and
the student products resulting from these types of high stakes “throw away”
tests don’t generate any useful data for research purposes or student
reflection and learning. Condon notes that in systematizing reductive forms
of assessment, universities inscribe values of writing which are underscored
by “the least interesting and the least useful potential product of an
assessment: the score, the ranking, the placement” (142). In fact, Condon
implicates the economics of placement in the increased use of machine-
scored essay tests, despite the fact that machine scoring is an even more
reductive form of an already reductive assessment. In the second half of the
article, Condon describes the process used at Washington State University to
move away from “typical” writing placement essay prompts—ones which
asked students to respond to a reading or argue a position—to a generative
type of prompt which calls upon writers to share their individual experiences
as learners and writers. WSU’s FYC placement prompts are specifically
generative since they ask writers to reflect upon two of six institutional
learning goals, which are intended to be incorporated into the general
education program and into the courses offered by each department. WSU’s
prompt asks students for two essays, one sample where students discuss
and analyze influential courses and/or teachers and the other sample where
students identify and reflect on learning experiences outside the classroom
that they feel will help them achieve the learning goals. Condon describes
various factors that make these types of prompts beneficial, including the
more obvious fact that they are reflective in nature. Additional advantages
of a generative prompt include two important distinctions: first, universities
“are of necessity” moving in the direction of specifying clear learning
outcomes and should base assessments on them, and second, these tests
provide a “robust set of data” for researchers to study in terms of learning,
writing, and reflection as well as information about the learning styles and
interests of test takers (145). Condon concludes that moving toward a more
generative type of assessment is in the institution’s and writing program’s
“enlightened self-interest” (153).
Corso begins this chapter with a telling narrative about her history with
writing program administration and student placement. Worth mentioning is
that Corso’s institution had, at one time, placed students by using a holistic
http://cshe.berkeley.edu/publications/publications.php?id=265 on 29 April 2011.
The authors of this study sought to examine the relative uses of high school
grades and standardized admissions tests for predicting students’ long-term
performance in post-secondary school. Using the University of California’s
student database, the researchers sampled 80,000 first-time freshmen and
used multi-level data modeling to analyze the data for indicators of long-
term success (defined as graduation and cumulative four-year GPA).
Despite many inconsistencies with secondary school grading techniques and
weights, the researchers determined the high school record to have strong
“superiority” over standardized tests in predicting long-term success in
college. Though this study does not directly correlate with first-year
Gere, A., Aull, L., Green, T., & Porter, A. (2010). Assessing the validity of directed
In this article, Gere, Aull, Green, and Porter examine an established directed
self-placement (DSP) program at the University of Michigan, applying
Messick’s (1989/1995) definition of validity, which focuses on interpretations
and actions stemming from assessment results and, thus, the meaning(s)
placed upon assessment by the institution or writing program. The study
design was meant to determine “the degree to which the implementation of
DSP at the University of Michigan between 2003 and 2008 led students…to
take writing courses that were appropriate for them” (155). The authors’
argument highlights the gap in DSP research which reveals that while several
articles “touch on issues of reliability,” no research exists which examines
the “validity of DSP in systematic terms, [where validity should] link
evidence with social and personal consequences and values” (156). Thus,
the researchers use various data sources to examine the validity of DSP at U
of M, including student scores and decisions, DSP online questions,
interviews and surveys. In order to examine the empirical data and
theoretical positions about DSP at U of M, Gere et al. use the six aspects of
validity as defined by Messick to inform the interpretation of results from
the data: 1) the extent to which the content of the DSP questions aligns with
the FYC writing construct at U of M, 2) the extent to which DSP questions
and students’ responses to them are theoretically grounded, 3) the extent to
which the scoring of DSP surveys aligns with the construct of FYC writing at U
of M, 4) the extent to which DSP scores generalize across time, student
populations, and the construct of FYC at the U of M, 5) the extent to which
scores on other assessment measures (such as course grades) correlate
with scores on the DSP questions, and 6) the implicit values in and
consequences of interpreting and using DSP scores. The authors found that
through using Messick’s definition of validity, a number of weaknesses in the
DSP program at U of M were clarified and/or revealed. In response to each of
the research questions, framed around one of six aspects of validity
described above, the researchers found that there were several
“disconnects” between the DSP and the first-year writing courses that
students chose to take (170). While these findings show that DSP and
students’ needs and desires did not correlate in terms of validity, the authors
highlight that any DSP program must be examined within the local context
of its college or university. A second implication of this study is that DSP had
perhaps been “under-conceptualized” at the U of M, including the way it had
Lewiecki-Wilson, C., Sommers, J., & Tassoni, J.P. (2000). Rhetoric and the writer’s profile: (6), 1-18.
The authors, citing Hagedorn (2005), first identify four categories of student
retention: institution, system, academic discipline, and by course. For this
study, they focus on retention in a specific academic department of concern
by comparing survey results of students enrolled in spring 2008 with the
enrollment statuses of students enrolled for fall 2008. The authors
determined that social connectedness and faculty approachability were
dependent variables that differed significantly between students who
returned to the university in fall 2008 and those who did not. From these
higher-education/2009/academic-advising-highly-important-students on 21 April 2011.
• Noel-Levitz web site - They are a consulting firm. In their “About Us”
section, they "strategically align" with publishing companies
Bedford/St. Martin’s, Prentice Hall, and Cengage Learning.
• Some of their materials on retention - They use rhetoric of data-driven
approaches.
• A search of the Noel-Levitz site for Accuplacer yielded zero results.
• At their conference on student recruitment and enrollment, the keynote
speakers come from recognized institutions, including LaGuardia
Community College (home of Darren Cambridge & his electronic