
Running head: PROGRAM EVALUATION PLAN

Program Evaluation Plan


Liutova Marina
Walden University

Dr. Michael Burke


EIDT-6130-1 Program Evaluation
June 21, 2015



The Program Evaluation Plan presented in this paper contains the following five sections:
(1) Program Analysis
(2) Evaluation Model Table
(3) Evaluative Criteria (revised)
(4) Data Collection Design and Sampling Strategy
(5) Reporting Strategy Table
Each of the sections can be reached through a hyperlink (click on the name of the section of
choice to jump to it).


The program is the BA in Linguistics and Cross-Cultural Communication offered by the
Institute for Tourism and Hospitality, the Moscow campus of the Russian State University of
Tourism and Service. It is a biannually accredited, full-time, 240-credit-hour, four-year program
that includes 44 courses and prepares students for careers in TEFL (Teaching English as a
Foreign Language) and translation/interpretation.

Overall, the program has a 21-year history during which it has been regularly updated and
modified to meet the requirements of constantly changing state educational standards. In 2011,
in line with the new Federal State Educational Standards and Russia's participation in the
Bologna Process, the program was completely transformed to embrace a competency-based
approach, with the success of the program ultimately measured by the ability of the graduates to
demonstrate 12 general cultural and 44 professional/program-specific (language-application-
and professional-ethics-related) competencies as specified and prescribed by the Core Training
Program for 035700.62 Linguistics issued by the Ministry of Education and Science of the
Russian Federation on May 20, 2010 (ministerial order No. 541) (Ministry of Education and
Science of the Russian Federation, May 20, 2010; the resource is published in Russian).
According to the latest accreditation documentation, the program as developed by the
Russian State University of Tourism and Service is fully in compliance with the
Ministry-issued Core Training Program (CTP). Its stated goals repeat those of the CTP and are
intended to ensure graduates' functionality in the following areas: (1) professional industrial
practice, (2) professional methodological research, (3) professional evaluation, and (4)
organizational and managerial activities. The objectives correspond to the CTP mandatory
competencies for BAs in Linguistics and Cross-Cultural Communication. The objectives for
individual courses are intended to be based on and to contribute to the development of the
mandatory competencies.


However, the biannual external accreditation has focused on auditing the documentation and
collecting quantitative data.
The courses are organized into 4 distinct categories / cycles: (1) humanitarian,
sociological and economic cycle: (a) federal mandatory, (b) national-regional (university
mandatory), and (c) electives; (2) mathematics and natural sciences cycle: (a) federal
mandatory and (b) electives; (3) profession-oriented cycle (general linguistics): (a) federal
mandatory, (b) national-regional (university mandatory), and (c) electives; and (4)
specialization-oriented cycle (interpretation and cross-cultural communication): (a) university
mandatory and (b) electives. The program theory broadly states that the competencies
developed by students in each of the cycles cumulatively ensure that, on graduation, the
students will be ready for professional activities associated with the four areas identified by the
CTP for BAs in Linguistics and Cross-Cultural Communication.
Although the program has a history of enrollment competition, high student satisfaction
(approx. 80%) and student retention rates (over 90%), graduates engaging in Master's programs in
Russia and abroad (approx. 35%), and moderate rates of professional employment on graduation
(approx. 60%) (Department of Linguistics, 2013), for the past four years the program has been
steadily losing its visibility and prominence in the eyes of the university administration, partly
losing its funding in 2011 and having program enrollment terminated in 2013.
Brief interviews with the program deliverers and the program leader reveal that, in
2011, the program was changed only on a pro forma basis, with neither instructional strategies
nor instructional materials revised or adapted. The failure to conduct in-depth changes is largely
attributed by the program leader to the faculty losing motivation and enthusiasm as a result of
the cuts in funding. Moreover, due to a lack of monitoring of the program activities by the
university quality control committee and a lack of student awareness of the program's
documented goals and objectives, the faculty have been content with their classroom practices
and have not felt any need for more than a perfunctory change.


The stakeholders of the evaluation can be identified among all five stakeholder
groups described in the literature on evaluation (Fitzpatrick et al., 2011), namely: (1)
policymakers, (2) administrators and managers, (3) program deliverers, (4) primary consumers,
and (5) secondary consumers; however, the key stakeholders for an internal evaluation, those
who initiated the evaluation, belong to three groups only: (2) administrators and managers
(university academic affairs department, additional professional education department, and
finance department), (3) program deliverers (the faculty and the program leader), and (4)
primary consumers (currently enrolled students). The data concerning the interests and concerns
of the three key stakeholder groups were collected through brief personal communications with
those representatives of the groups who were willing to share their opinions.
The interests and concerns voiced by the key stakeholders (pedagogical viability of
instructional strategies; quality of instructional materials; relevance of syllabi; relevance of
curriculum; the program's sequence quality; the program's promotion of professional reflection and
life-long learning; the program's competitiveness; employability on graduation; balance between
received and experiential knowledge; meeting the Bologna Process requirements; and
compliance with the Federal State Educational Standards) are all deemed by the evaluator to be
related to the issue of whether or not the program as it is actually delivered meets the objectives
stated in the program's documents, on whose basis it was accredited in 2013.
There are a number of non-stakeholder-interest-related contextual factors to consider in
planning and implementing the evaluation. Among the most prominent factors are the
hierarchy-oriented organizational culture and a lack of organizational evaluation capacity,
which may both undermine the reliability of the input from the stakeholders, severely limit
access to them as active participants, and negatively affect the credibility and the utilization of
evaluation results, as well as raise such major ethical issues as those related to (1) Human
Rights and Respect (P3 in the Program Evaluation Standards, as cited in Fitzpatrick et al., 2011),
in terms of assuring confidentiality while pressure to violate confidentiality is likely when
results/findings are presented, and (2) Concerns for Consequences and Influence (U8 in the
Program Evaluation Standards, as cited in Fitzpatrick et al., 2011), in terms of evaluation results
being misinterpreted or misused, for example, used to punish someone, suppressed, or ignored
by the stakeholder (p. 80).

Evaluation Model Table

EXPERTISE- AND CONSUMER-ORIENTED APPROACHES

Advantages:
- Being guided by standards consistent across a particular field, such studies produce results that allow comparison of the merit and worth of different programs or products and thus assist policymakers and individual consumers in final decision making (Fitzpatrick et al., 2011).
- High objectivity, as the evaluator is the major, often the only, decision maker in the study because he or she does not have other important, direct audiences to serve (Fitzpatrick et al., 2011, p. 144).
- More appropriate when there is a need for independent judgment, when specialized information is needed that only experts can provide and, more importantly in the case of education, when program indicators are standardized rather than particular to a program (Zukoski & Luluquisen, 2002, p. 3).
- Produce a relatively comprehensive appraisal of the product or program (Fitzpatrick et al., 2011, p. 148).

Disadvantages (both approaches):
- The evaluation objectives and the evaluation questions are determined externally (Fitzpatrick et al., 2011; Stufflebeam, 2002).
- The approaches attempt to mold and define conditions to fit preconceived models of how things should be done (Patton, 2002, p. 431).
- Highly standards-oriented; the contextual variables are not taken into account.

Disadvantages (expertise-oriented):
- There can be very limited resources when it comes to hiring someone to perform the evaluation; similarly, if there is an expert assigned to your program, you (the client) are now on their schedule (Morris, May 2015).
- There can be a higher cost associated with hiring an outside person: anyone who is considered an expert in their field would generally expect to be paid as one (Morris, May 2015).
- The presumed expertise of the experts is a potential weakness (Fitzpatrick et al., 2011, p. 142).
- Too often the team contains only content experts but may lack experts in the evaluation process itself (Fitzpatrick et al., 2011, p. 142).
- High potential for the experts' personal bias (Fitzpatrick et al., 2011).

PROGRAM-ORIENTED EVALUATION APPROACHES

Objectives-oriented:
Advantages:
- Probably the greatest strength and appeal of the objectives-oriented approach lies in its simplicity: it is easily understood, is easy to follow and implement, and produces information that program directors generally agree is relevant to their mission (Fitzpatrick et al., 2011, p. 166).
Disadvantages:
- The focus on objectives can cause evaluators to ignore other important outcomes of the program, both beneficial and detrimental, and, if the evaluation draws final conclusions, the judgment of the program may be seriously incomplete (Fitzpatrick et al., 2011, p. 166).
- Neglects program description and the need to gain an understanding of the context in which the program operates and the effect of this context on program success or failure (Fitzpatrick et al., 2011, p. 166).
- Evaluators using this approach may neglect their role in considering the value of the objectives themselves (Fitzpatrick et al., 2011, p. 166).
- Choices are involved in deciding which objectives to evaluate and how to interpret success or failure in each (Fitzpatrick et al., 2011, p. 166).

Logic models and Theory-Based:
Advantages:
- Help the evaluator learn more about the program and shed light on what to evaluate and the appropriate means for doing so (Fitzpatrick et al., 2011, p. 166).
- Allow the evaluators to gain a better understanding of the values and concerns of the stakeholders regarding the program and the evaluation; these evaluators have focused on the program to identify and formulate the questions the evaluation will address, the timing of data collection, and the appropriate methods to be used (Fitzpatrick et al., 2011, p. 167).
Disadvantages:
- May ignore unintended program actions, links, outputs, or outcomes that merit attention; further, the evaluators' desire to test the theory as a whole may prompt them to neglect values or information needs of stakeholders (Fitzpatrick et al., 2011, p. 167).
- Criticized for oversimplifying the complexity of program delivery and context; oversimplification often leads citizens and policymakers alike to fail to understand how difficult, and costly, it is for programs or schools to achieve stated goals (Fitzpatrick et al., 2011, p. 167).

Goal-Free:
Advantages:
- Reduces the bias that occurs from knowing program goals and, thus, increases objectivity in judging the program as a whole (Fitzpatrick et al., 2011, p. 168).
- Focuses on actual outcomes rather than intended program outcomes (Fitzpatrick et al., 2011, p. 168).
- Increases the likelihood that unanticipated side effects will be noted (Fitzpatrick et al., 2011, p. 168).
Disadvantages:
- Not a stand-alone approach but, rather, a useful supplement to the other program-oriented approaches.

DECISION-ORIENTED EVALUATION APPROACHES

CIPP:
Advantages:
- Encourages managers and evaluators to think of evaluation as cyclical, rather than project based (Fitzpatrick et al., 2011, p. 178).
Disadvantages:
- Although the current model encourages participation from many stakeholders, the focus is typically on managers (Fitzpatrick et al., 2011, p. 178).

Utilization-Focused:
Advantages:
- Intensive primary stakeholder involvement to achieve the intended use (Fitzpatrick et al., 2011, p. 180).
- An active-reactive-adaptive approach to evaluation (Patton, 2002, p. 432).
Disadvantages:
- The Achilles heel of UFE is staffing or turnover of the evaluation's primary intended users (Fitzpatrick et al., 2011, p. 181; also see Patton, 2002).

Both approaches:
Advantages:
- Contextualized and situational.
- Primary-stakeholder-oriented.
- Focus on improving programs through improving decisions (Fitzpatrick et al., 2011, p. 177).
Disadvantages:
- Tend to neglect stakeholders with less power; social equity and equality are not values directly addressed by decision-oriented models (Fitzpatrick et al., 2011, p. 185).
- The focus on managers and their information needs could restrict information that evaluators seek, the types of data they collect, and the dissemination of the results (Fitzpatrick et al., 2011, p. 185).
- A potential weakness of these approaches is the evaluator's occasional inability to respond to questions or issues that may be significant, even critical, but that clash with or at least do not match the concerns and questions of the decision maker who is the primary audience for the study; in addition, programs that lack decisive leadership are not likely to benefit from this approach to evaluation (Fitzpatrick et al., 2011, p. 185).
- These approaches assume that the important decisions and the information to make them can be clearly identified in advance and that the decisions, the program, and its context will remain reasonably stable while the evaluation is being conducted (Fitzpatrick et al., 2011, p. 185).

PARTICIPANT-ORIENTED EVALUATION APPROACHES

All approaches:
Advantages:
- Strengths of the participative approaches include their potential to increase understanding and use of evaluation by stakeholders and to increase evaluators' understanding of programs and organizations, and, in so doing, to provide more valid and useful information; stakeholder participation can also lead to organizational learning (Fitzpatrick et al., 2011, p. 228). Over time, successful participatory evaluation seeks to transform schools into learning organizations, building their research capacity to go it alone without an outside evaluation collaborator (King, 2005, p. 98).
- Highlight the potential value of including stakeholders in the process of conducting the study for improving the validity of the study and its use (Fitzpatrick et al., 2011, p. 223).
Disadvantages:
- Weaknesses include concerns about bias and, therefore, less acceptance of the study by external audiences; greater time and cost to involve stakeholders; potentially weaker results if those conducting the study lack necessary skills; and the implication that skills and expertise in evaluation can be readily and quickly gained by any stakeholder (Fitzpatrick et al., 2011, p. 228).
- The two broad categories of drawbacks are (a) the feasibility, or manageability, of implementing a successful participative study; and (b) the credibility of the results to those who do not participate in the process (Fitzpatrick et al., 2011, p. 224).
- Demands on the evaluator's skills: people skills to work with others, including stakeholders new to evaluation (Fitzpatrick et al., 2011, p. 225).
- A potential for bias: it is difficult to judge one's own work objectively (Fitzpatrick et al., 2011, p. 225); often stakeholders are more concerned with providing evidence of success to funding sources and want the evaluation to show that success (Fitzpatrick et al., 2011, p. 214).
- Low or insufficient competence of stakeholders to perform the tasks that some approaches call for (Fitzpatrick et al., 2011, p. 226).
- Not easy to implement, or not appropriate, in all organizational or national cultures: more than any other approach to program evaluation, participant-oriented approaches add a political element inasmuch as they foster and facilitate the activism of recipients of program services (Fitzpatrick et al., 2011, p. 226).
- An ongoing need for creating and sustaining stakeholder motivation to participate: the importance of people's wanting to participate; the power of political context; the necessity of continuing support; and the need for common meaning between distinct worlds of practice (King, 2005, p. 98).
- Do not yield information that is generalizable.

Robert Stake's Responsive Approach:
Advantages:
- Flexible, changing methods and approaches; adapting to new knowledge as the evaluation proceeds; using an iterative, open-ended model (Fitzpatrick et al., 2011, p. 193).
- Recognition of multiple realities and the value of pluralism (Fitzpatrick et al., 2011, p. 193).
- Local knowledge, local theories, and the particulars of an individual program, its nuances and sensitivities, are more important to convey than testing any big theory or generalizing to other settings (Fitzpatrick et al., 2011, p. 193).
- Evaluations should strive to be holistic, to convey the full complexity of a program, not to reduce or simplify (Fitzpatrick et al., 2011, p. 194).

Naturalistic and Fourth-Generation Evaluation:
Advantages:
- Based on the constructivist paradigm for evaluation (p. 197); the new criteria include credibility, transferability, dependability, confirmability, and authenticity (Fitzpatrick et al., 2011, p. 197).
- Aims to raise stakeholders' awareness of issues, to educate them to the views of other stakeholders, and to help them move to action (p. 197), with the evaluator acting as a negotiator to help stakeholders reach consensus on their diverse views and decide on next steps or priorities (Fitzpatrick et al., 2011, p. 198).
- The major role of evaluation is one of responsibility to an audience's requirements for information in ways that take into account the different value perspectives of its members (Fitzpatrick et al., 2011, p. 198).
- Studies the program activity in situ, or as it occurs naturally, without constraining, manipulating, or controlling it; the dominant perspective is that of the informants (Fitzpatrick et al., 2011, p. 198).

Practical Participatory Evaluation (P-PE):
Advantages:
- Focuses on adaptation to context and capacity building in organizations (Fitzpatrick et al., 2011); intended to increase use as the stakeholders begin to understand the evaluation, make decisions about it, and ultimately be excited about the results (Fitzpatrick et al., 2011, p. 224).
- Aims at in-depth involvement of a smaller group of primary stakeholders at all phases of the program evaluation (Fitzpatrick et al., 2011).

Developmental Evaluation:
Advantages:
- Developmental evaluation does not evaluate a particular thing; it uses evaluative modes of thinking and techniques to help in the constant, ongoing, changing development process and growth of an organization (Fitzpatrick et al., 2011, p. 207).
Disadvantages:
- Not suited for either summative or formative decisions (Fitzpatrick et al., 2011).
- There are no clear goals and no established timeframe; the evaluator isn't working to develop an evaluation report to provide to an external funder at some given point in time; instead, the team is constantly tinkering to deal with changes: changes in what they know, in what participants need, and in the context of the community (Fitzpatrick et al., 2011, p. 208).

Empowerment Evaluation:
Advantages:
- The designated stakeholder group or groups to be empowered have control; the evaluator serves as a coach or guide, but stakeholders may overrule the evaluator; the stakeholders make decisions, with guidance from the evaluator, at all phases; a key issue is that the stakeholders are taking the lead, and the evaluator is only facilitating (Fitzpatrick et al., 2011, p. 210).
- Encourages stakeholder participation and emphasizes the role of evaluators [sic] in building organizations' internal evaluation capacity and mainstreaming evaluation (Fitzpatrick et al., 2011, p. 215).
Disadvantages:
- Conceptual ambiguity (Fitzpatrick et al., 2011, p. 212).
- Too concerned with values and advocacy and not sufficiently with reason (p. 213); more of an ideology, a set of beliefs, than an evaluation approach (Fitzpatrick et al., 2011, p. 213).

Deliberative Democratic Evaluation:
Advantages:
- Recognition that all stakeholder groups do not have equal power or equal experiences; to make the evaluation process fully democratic, the evaluator needs to work to ensure that those groups who traditionally have less power are able to participate in the process; the democratic principles of social equity and equality are addressed by giving priority to inclusion and sacrificing in-depth participation (Fitzpatrick et al., 2011, p. 217).
- Inclusion of all legitimate, relevant interests; using dialogue to determine the real interests of each stakeholder group; using deliberation to guide stakeholders in a discussion of the merits of different options and for the evaluator to draw conclusions (Fitzpatrick et al., 2011, p. 217).
- The evaluator does not share critical decisions with the stakeholders; the evaluator leads the deliberative process but controls for stakeholder bias and other sources of bias; the deliberative process is not intended to help stakeholders reach consensus but, rather, to inform the evaluation, to help the evaluator learn of the reactions and views of different stakeholder groups and thus to reach appropriate conclusions about merit or worth (Fitzpatrick et al., 2011, p. 218). I see this as an advantage of DDE as compared with the other participatory approaches.
Disadvantages:
- From the participatory viewpoint, the disadvantage is that the evaluator does not share critical decisions with the stakeholders (Fitzpatrick et al., 2011, p. 218).


Explain your choice of model for your program evaluation:


Context of the program to be evaluated: The program is the BA in Linguistics and Cross-Cultural
Communication offered by the Institute for Tourism and Hospitality, the Moscow campus of the
Russian State University of Tourism and Service. In 2011, in line with the new Federal State
Educational Standards and Russia's participation in the Bologna Process, the program was completely
transformed to embrace the now-mandatory, country-wide competency-based approach. The
interests and concerns voiced by the key stakeholders are all deemed by the evaluator to be related
to the issue of whether or not the program as it is actually delivered meets the objectives stated in
the program's documents, on whose basis it was accredited in 2013. There are a number of
non-stakeholder-interest-related contextual factors to consider in planning and implementing the
evaluation. Among the most prominent factors are the hierarchy-oriented organizational culture
and a lack of organizational evaluation capacity, which may both undermine the reliability of the
input from the stakeholders, severely limit access to them as active participants, and negatively affect
the credibility and the utilization of evaluation results, as well as raise such major ethical issues as
those related to (1) Human Rights and Respect (P3 in the Program Evaluation Standards, as cited in
Fitzpatrick et al., 2011), in terms of assuring confidentiality while pressure to violate confidentiality
is likely when results/findings are presented, and (2) Concerns for Consequences and Influence
(U8 in the Program Evaluation Standards, as cited in Fitzpatrick et al., 2011), in terms of evaluation
results being misinterpreted or misused, for example, used to punish someone, suppressed, or
ignored by the stakeholder (p. 80).
Models to be used: For this particular program evaluation, it seems valuable to combine all three
program-oriented approaches (objectives-oriented, theory-based, and goal-free) with the
deliberative democratic approach. The value of the theory-based approach lies in the fact that, as
interviews with the program deliverers reveal, very few of those immediately involved in
implementing the program have a sound understanding of the rationale behind the content and the
instructional strategies prescribed by the recently introduced Federal State Educational Standards;
this way, the evaluation can serve an educative function. The theory-based approach will also allow
the evaluator to study program implementation, focusing on whether key elements of the program
theory are, in fact, delivered as planned (Fitzpatrick et al., 2011, p. 164). The next step will
be to use the objectives-oriented evaluation approach to study the outcomes and judge whether the
program achieves its stated objectives even if it is not implemented as planned. In order to provide
a comprehensive view of the program outcomes (that is, to identify those outcomes, both positive
and negative, that the program does not target but that can be directly attributed to it), it appears
necessary to use the goal-free evaluation approach. The blend of all three program-oriented
approaches to evaluation will ensure that the program evaluation addresses the concerns of the
major program stakeholders (the administration and management, the program deliverers, and the
primary consumers).
Although it is not advisable, for both validity- and results-utilization-related reasons, to
ignore the value of the stakeholders' active participation at all stages of the program evaluation,
several contextual factors reduce the potential for turning the program evaluation into a truly
participatory effort: stakeholder unavailability, organizational culture, and such constraints as time
and budget. Therefore, input from the stakeholders will be sought in line with the deliberative
democratic approach during the planning stage of the program evaluation (for example, to develop
the program theory), the stakeholders will be used as sources of information at the data-collection
stage, and, finally, input from the stakeholders will be sought as part of the program evaluation's
communication plan.


With the organization demonstrating low evaluation capacity, with the stakeholder
concerns about the program identified and discussed, and with the focus of the program
evaluation negotiated to set boundaries for the present program evaluation and to select the
evaluation approaches to be used, the evaluator thought it plausible to rely on the evaluation
models selected for this program evaluation to propose evaluation questions to be further either
discarded, modified, or replaced in the course of discussions with the stakeholder groups whose
concerns were relatively unanimously judged to be top-priority (Fitzpatrick et al, 2011).
The concerns of the stakeholders, as revealed early in the planning stage, as well as the
contextual constraints, helped to identify the most appropriate evaluation models to be used: the
three program-oriented evaluation approaches (objectives-oriented, theory-based, and goal-free)
supported by, but not guided by, the deliberative democratic approach.
The theory-based approach dictates developing the program theory and
assessing the extent of adherence to that theory during program implementation before any
evaluation questions are defined (Fitzpatrick et al., 2011). The program theory, created with
input from the program deliverers and available current research in education, has alerted the
evaluator and the program deliverers to several aspects of the program that, according to the
program deliverers, could be critical for achieving the program objectives but were likely to be
implemented inconsistently across the curriculum. These aspects were considered by the parties
involved to be the most important to track and evaluate in the program implementation. As it is
not feasible to track the whole program implementation and make judgments about its full
compliance with the stated program theory (currently, there are no first- or second-year students,
as program enrollment was terminated in 2013, and the time available does not allow for a
full-scale implementation evaluation based on performance observations), it was deemed
reasonable to focus on the potentially failing aspects. The questions formulated emphasize the
relationship between the program activities and the short-term outcomes and are closely related
to the focus of the objectives-oriented approach:
1. To what extent do the instructional strategies actually used correspond to those stated in
the program theory? (Judgments to be based on peer review, with compliance with the
Core Training Program's requirements, expressed as a percentage, used as the criterion.)
2. Do the course modules cover all of the competencies stated as short-term outcomes?
(Judgments to be based on peer review of the instructional materials and interviews with
individual modules' instructors, with the criterion being the module content's scope and
focus compliance with the Core Training Program's requirements.)
3. To what extent do the instructional materials support the instructional strategies stated in
the program theory? (Judgments to be based on structured peer review of the lesson
plans and the instructional materials, current research, and faculty interviews, with the
faculty's theory- and experience-driven judgment of correspondence used as the
criterion.)
4. Given the instructional strategies and the instructional materials, do the students feel
able to demonstrate the competencies stated as the program's short-term outcomes?
(Judgments to be based on student surveys and interviews, with the students' perceived
readiness-by-competency self-ratings used as the criterion.)
5. To what extent do the potential employers feel the graduates are able to demonstrate the
competencies stated as the program's short-term outcomes? (Judgments to be based on
the external application experts' reports on the students' performance during the
internship program and in-depth interviews with the external application experts and the
external academic experts involved in the State Final Certification, with the external
program experts' opinions used as the criterion.)
The major questions that goal-free evaluation can answer within the program evaluation
are:


6. How far can the students' achievement on graduation be attributed to the program they
attended? (Judgments to be based on the attendance rates, participation rates, and student
performance during the State Final Certification, assessed against the state-mandated
rubric as the criteria.)
7. What unanticipated outcomes does the program have? (Judgments to be based on
student interviews and the students' families' opinions, with the students' and the
parents'/guardians' opinions as the primary criteria.)
It is not deemed feasible to study mid-term or long-term outcomes as, at present, there
are no graduates of the program: the first enrollees in the program, which was completely
overhauled in 2011, are to complete it in June 2015. The data collected from the existing
alumni or their employers were unanimously judged to be irrelevant for the present study, as
the students who enrolled before 2011 were not trained towards the objectives of the
currently offered program.
For contextual political reasons, the evaluation is not to deal with the faculty's and
academic/student services' performance (this is highly likely to result in the program deliverers'
anxiety, resistance, alienation, and serious bias), nor is it politically acceptable to make
judgments about the quality of the program management. Although these are important issues
to address, it would be premature to study them in a program evaluation which is not
decision-oriented and which takes place in an organization totally new to effective internal evaluation.
Some of the sub-questions will become irrelevant as the evaluation proceeds; however,
they are to be initially included as a guideline to better define the possible scope of the
evaluation and to demonstrate to the stakeholders what questions an evaluation based on the
selected evaluation models could answer.
With the program evaluation initiated by the program deliverers, the criteria for the
program's success/failure on the points stated as the evaluation questions have been discussed
with this stakeholder group and the external academic and application experts. Although, at
first, it was voted relevant to use the existing standards for BAs as standards for the evaluation
questions, a review of the standards carried out by four of the program deliverers and the
evaluator did not produce any clear results: the standards as they are formulated do not lend
themselves to any objective evaluation for lack of clear numerical directions or clear rubrics to
assess a program's success or failure in achieving the objectives as prescribed by the Federal
State Educational Standards. Therefore, the program deliverers and the evaluator thought it
necessary to formulate criteria and standards for the program evaluation. The criteria as
negotiated by the program deliverers, the primary consumers, and the evaluator are indicated in
brackets following each of the evaluation questions above. The standards agreed upon are as
follows: the answers of the interviewees, reflection writers, and personal judgments are to be
interpreted and transferred onto a Strongly Disagree / Disagree / Agree / Strongly Agree
scale, with Agree being the acceptable standard; an attendance rate of 85% is believed to be
a contextual sign of success; and 30% As, 40% Bs, 25% Cs, and 5% Fs at the
State Final Certification are taken as the hallmarks of the program's success.
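
To illustrate how these negotiated standards could be applied once the data are in, the following sketch checks a set of interpreted ratings, an attendance figure, and a grade distribution against the thresholds stated above. It is only an illustration: the function names, the sample data, the use of the median as the aggregate rating, and the 5-percentage-point tolerance around the grade benchmark are assumptions, not part of the agreed plan.

# Hypothetical helper for applying the negotiated standards; the thresholds come from
# the plan, everything else (names, sample data, aggregation choices) is illustrative.
AGREEMENT_SCALE = ["Strongly Disagree", "Disagree", "Agree", "Strongly Agree"]
ACCEPTABLE_LEVEL = AGREEMENT_SCALE.index("Agree")   # "Agree" is the acceptable standard
ATTENDANCE_STANDARD = 0.85                           # an 85% attendance rate signals success
GRADE_BENCHMARK = {"A": 0.30, "B": 0.40, "C": 0.25, "F": 0.05}

def ratings_meet_standard(ratings):
    """True if the median interpreted rating is at least 'Agree' (median is an assumption)."""
    levels = sorted(AGREEMENT_SCALE.index(r) for r in ratings)
    return levels[len(levels) // 2] >= ACCEPTABLE_LEVEL

def grades_meet_benchmark(grade_counts, tolerance=0.05):
    """True if each grade's observed share is within `tolerance` of the benchmark share."""
    total = sum(grade_counts.values())
    return all(abs(grade_counts.get(g, 0) / total - share) <= tolerance
               for g, share in GRADE_BENCHMARK.items())

if __name__ == "__main__":
    survey_answers = ["Agree", "Strongly Agree", "Agree", "Disagree", "Agree"]  # made-up answers
    print("Survey standard met:", ratings_meet_standard(survey_answers))
    print("Attendance standard met:", 0.88 >= ATTENDANCE_STANDARD)
    print("Grade benchmark met:", grades_meet_benchmark({"A": 5, "B": 6, "C": 3, "F": 1}))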


The evaluation questions as negotiated with the stakeholders are descriptive and the
results sought are not intended for generalization. Therefore, the most appropriate design for the
study was judged by the stakeholders and the evaluator to be that of a case study, with its focus
on in-depth understanding of the program and collecting data in many different ways. Case
studies generally focus on collecting qualitative data through observations, interviews and the
study of existing documents; however, case studies leave a lot of room for quantitative data
obtained through surveys and a statistical analysis of existing data. In this program study we
will use the following data collection methods: (1) the program's documents and records, (2) surveys,
(3) interviews, and (4) focus groups.
The use of existing data
Existing data are a cost-effective source of valid and reliable information for a program
evaluation if they are consistently and accurately collected, organized, and stored so as to be
accessible and usable within the restrictions imposed by organizational policies and such
ethical principles as respect for people.
In our case, program documents and records provide data related to the program theory
(the state-mandated Core Training Program and the Federal State Educational Standards) and
program implementation, e.g. the curriculum, the syllabi, lesson plans, instructional materials,
student attendance rates, student participation rates, samples of the students' work assembled in
students' portfolios, student drop-out rates, grade point averages, external application experts'
reports, external academic experts' reports, and the students' performance against the
state-mandated rubric during the State Final Certification.


The samples of students' work will be reviewed and rated against the state-mandated
rubric by two members of the faculty who are external to the program but are certified linguists. No
sampling is to be used, considering the small number of 4th-year students (15) and
the relatively small number of written assignments included in each student's portfolio (three
per student, with an average volume of 10 pages per assignment).
The lesson plans and the instructional materials will be peer-reviewed for compliance
with the program theory. Research literature on peer review of lesson plans and instructional
materials suggests involving two raters. Each rater works with the lesson plans and the
instructional materials independently; the results are then compared and cross-checked.
The peer reviews will be structured, which means that a standard rating form will be
developed and the peer reviewers will be pre-trained to use it.
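
One way the two raters' independent results could then be compared and cross-checked is with a simple agreement statistic. The sketch below is an illustrative assumption rather than a procedure specified in the plan: it computes percent agreement and Cohen's kappa for two raters scoring the same lesson plans on a hypothetical compliant / partial / non-compliant rating form.

from collections import Counter

# Illustrative cross-check of two raters' structured review scores.
# Only the two-rater, independent-then-compare procedure comes from the plan;
# the rating categories and sample data below are made up.
def percent_agreement(rater_a, rater_b):
    """Share of items on which the two raters gave the same rating."""
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Agreement corrected for chance (Cohen's kappa)."""
    n = len(rater_a)
    observed = percent_agreement(rater_a, rater_b)
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in set(rater_a) | set(rater_b)) / (n * n)
    return (observed - expected) / (1 - expected)

if __name__ == "__main__":
    rater_1 = ["compliant", "partial", "compliant", "non-compliant", "partial", "compliant"]
    rater_2 = ["compliant", "partial", "partial", "non-compliant", "partial", "compliant"]
    print("Percent agreement:", round(percent_agreement(rater_1, rater_2), 2))
    print("Cohen's kappa:", round(cohens_kappa(rater_1, rater_2), 2))

A low kappa would prompt the raters to reconcile their readings of the rating form before the reviews are used to answer the evaluation questions.
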
A structured or unstructured observation could yield valuable information on the actual
in-class program implementation; however, considering the limited time and the timing of the
evaluation, in-class peer-observations are not possible. Nor are they possible considering the
current general negative attitude of the faculty towards being observed and observing others.
The existing data on in-class peer observation cannot be considered reliable. The
program deliverers themselves voice doubts as to the reliability of their reports.
Another source of existing data that cannot be used, though it is rich and has been
accumulated for twenty years, is the data collected prior to 2014. Much debate has been held on
the issue. On reflection, the program deliverers finally agreed that the information was not valid,
for it was collected from students and faculty who worked towards a totally different
educational standard; it was not the purpose of this study to compare the currently offered
program with its predecessors, nor would it be reasonable to do so, as the evaluation results
cannot reverse the state-initiated educational reform even if a comparative study clearly
demonstrated the superiority of the previous State Educational Standards over the new
Federal State Educational Standards.


Although it was tempting to use the rich information as a baseline, it was ultimately
decided to dismiss it and to rely on the information that is specific to the program as redirected
towards different educational goals and implemented in accordance with a different educational
approach (the competency-based approach) as far back as 2011.
Nor was it seen as valuable to consider the existing accreditation reports, for neither the
administration and management nor the program deliverers viewed the information as reliable.
A brief overview of the organization's external politics, as well as discussions with those who
prepared the documents for the external accreditors' review, served to prove the point.
Overall, though certain existing data can be used effectively for the program evaluation,
a lot of original data have to be collected.
The methods selected for original data collection are face-to-face surveys (chosen for
their cost-effectiveness and familiarity of format), interviews, and focus groups.
Surveys
Surveys are used in this program evaluation to collect information pertaining to
program implementation (that is, the instructional strategies and materials actually used in class),
to identify unanticipated outcomes of the program, to collect the students' opinions concerning
their perceived readiness to compete and function as professionals in the competency-oriented
job market, and to measure the students' attitude towards the program as a whole.

We intend to use surveys to reach the students and the faculty.
The ethical issues involved include the need to assure confidentiality and,
particularly in the case of the third-year students, anonymity, as well as the need to obtain
informed consent from the students, their family members, and the members of the faculty.
Much work has been done to reduce evaluation anxiety among the program
deliverers and the students. Now that the university's top administration has made it clear
that the decision to terminate the program is irrevocable and that the faculty's contracts are
extended until July 2016, when the program wraps up, the faculty members have reconsidered
their initial resistance and obviously biased stance in favor of serving those who stay by
initiating a thorough internal evaluation system. The faculty seem to be shaking off their
personal fears and to be gradually becoming more willing to experiment and test.
Interviews
In order to answer the evaluation questions and to ensure that triangulation is possible,
interviews were seen as useful in constructing an alternative data bank.

For student interviews, the most appropriate format seems to be a semi-structured
interview, partly developed from the results yielded by the surveys.

The student surveys and the student interviews are to be held sequentially, with
the surveys preceding the interviews. During the interviews, the interviewer is to pay particular
attention to probing on the issues on which a particular survey participant either did not choose
to provide any answer or provided consistently level answers (all positive, all negative, or with
an evident central-tendency bias). These steps are seen as necessary to reduce respondent bias
and anxiety, considering the fact that this is the first program evaluation the students have
participated in. The interviewers must show a lot of tact to assure the students that their
survey answers are not being doubted but that the students' in-depth opinions are sought
because they are considered highly valuable.

No sampling strategy is to be used for student interviews, as the number of students
is small (only 15).
For the interviews targeting other sources of data, such as the faculty and the external
academic and application experts, other interview formats were selected: an unstructured
face-to-face interview with the faculty members and the program leader, and an unstructured
telephone or face-to-face interview with the external program experts.
The choice between the telephone and the face-to-face format is to be made by each expert herself.


The unstructured interview format can help identify and reduce respondent bias,
especially in the case of the faculty. Moreover, an unstructured interview can help reveal
unanticipated areas of interest and the interviewees' particular concerns regarding the program.

The preparation for the interviews, like that for the surveys, has such ethical
implications as ensuring the confidentiality and anonymity of the interviewees.

The timing of the interviews with the faculty is critical to the accuracy of the data
collected. It is considered more appropriate to hold the interviews after the students have taken
the State Final Certification but before the certification results are announced.

The timing of the interviews with the external program experts is more flexible;
however, to avoid the selective-memory effect, it is believed that the external academic expert
should be interviewed within two days after the State Final Certification procedure, and the
external application expert should be interviewed within a week after she has completed the
reports on each student's performance.

No sampling is to be used, as there are only 6 faculty members and only 2 external
experts to interview.
Focus Groups
Although effective for obtaining qualitative data, focus groups cannot be used for
collecting information from the students or the faculty, as one of the requirements of the
method is that the members of the focus group should have had no prior relationship with each
other. This requirement can hardly be met in the case of the students or the faculty; however, it
is ideally met in the case of their family members, who can provide useful information
concerning the unintended outcomes of the program. Using the small-group format and a
trained facilitator can help create an environment conducive to collecting accurate information
in a safe, comfortable, and equitable fashion. With the focus-group members' informed consent,
the focus group sessions will be audio-recorded for further transcription and interpretation. The
most appropriate timing for the sessions is seen as the annually held meeting between the
administration and the parents (or guardians). As there are only 15 students graduating this
year, it is expected that the meeting will be attended by 12 to 16 family members; therefore, two
focus groups can be formed. To avoid contamination bias, the two focus group sessions will be
held simultaneously in different rooms, with the participants randomly assigned to a
group through the drawing of lots (see the illustrative sketch below).

The sessions are to be facilitated by trained evaluation personnel, with particular
attention paid to the issues of consistency of observation, moderation, and reporting.
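
The random assignment by drawing of lots mentioned above could be reproduced with a few lines of code once the list of attending family members is known. The sketch below is purely illustrative (the participant names, the seed, and the helper name are assumptions); it shuffles the attendees and deals them into the two simultaneous groups.

import random

# Illustrative sketch: splitting attending family members into two focus groups at random,
# mimicking a drawing of lots. Names and group count are assumptions.
def assign_focus_groups(participants, n_groups=2, seed=None):
    """Shuffle the participants and deal them into groups in turn."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    groups = [[] for _ in range(n_groups)]
    for i, person in enumerate(shuffled):
        groups[i % n_groups].append(person)
    return groups

if __name__ == "__main__":
    attendees = ["Family member %d" % i for i in range(1, 15)]   # e.g. 14 attendees
    group_a, group_b = assign_focus_groups(attendees, seed=2015)
    print(len(group_a), "in group A;", len(group_b), "in group B")
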
Evaluation team's expertise
Preparing and conducting the surveys and the interviews, as well as preparing for and
facilitating the focus group sessions, raises the question of the evaluation expertise of the
evaluation team. Considering the low evaluation capacity in the organization, and the time and
budget constraints, it was viewed as necessary and expedient to involve skilled external
evaluation personnel who would have little financial interest but be willing to contribute on
such grounds as professional interest and professional development.
Graduate students currently enrolled in the MS in Sociology, as well as the faculty of
the Sociology Department of the University, welcomed the opportunity to assist in creating
surveys, formulating interview questions for the semi-structured interviews, conducting the
surveys and interviews, facilitating the focus group sessions, as well as interpreting the
information obtained through the surveys and interviews.
The graduate students and the Sociology Department faculty are now familiarizing themselves
with the program to be evaluated and are reviewing the program's documents and records to
collect numerical data, such as student participation rates, student drop-out rates, and grade
point averages.
This program evaluation focuses on the 4th-year students and their family members, as the
evaluation targets the whole program rather than the separate courses it consists of. The 3rd-year
students will participate in the survey only; these data can be used in a further program evaluation as
pre-test data, with post-test data obtained when the students approach graduation in June 2016.
This way, the present study can serve the informational needs of further program evaluations and
give the current 3rd-year students their first taste of an evaluation procedure.

If you have any questions or suggestions concerning the data collection methods to be
used in the present program evaluation, please contact the lead evaluator:
Liutova Marina Vladimirovna
Tel: 89036837409
E-mail: lyutovamv@mail.ru.

Reporting Strategy Table

Stakeholder: Program Administration and Management
Reporting strategy:
- E-mailed scheduled and unscheduled written interim reports with follow-up personal communication via telephone;
- Preliminary final report;
- Oral final report;
- Written final report (full version).
Reasoning: In line with the organizational reporting procedure, the more formal format is chosen for reporting to this stakeholder group.
Implications: Although this stakeholder group has little personal involvement in the present program evaluation, it is important for both political and practical reasons (e.g. allocating resources to the program evaluation: rooms, budget, printing services, audio-recording equipment, etc.) to inform them of the development of the evaluation project. The support of this stakeholder group is necessary for the further institutionalization of the internal evaluation system. The program administration and management may be invited to personally attend some of the evaluation team meetings and, if deemed acceptable by the program deliverers, to participate in personal group discussions. Special care should be taken when negative messages are delivered.
Stakeholder involvement in data interpretation and reporting:
- Degree of involvement: minimal.
- Nature of involvement: approval of the interim and final reports.

Stakeholder: Program deliverers (the program leader and the faculty)
Reporting strategy:
- Verbal presentations (PowerPoint, flip charts);
- Group discussions (not during the summer leave);
- Short written communications through e-mail (during the summer leave and onward);
- E-mailed scheduled and unscheduled written interim reports;
- Preliminary final report;
- Oral final report;
- E-mailed written final report (full version).
Reasoning: To maintain the program deliverers' motivation, highly personalized and, when possible, interactive communication is imperative. It is also important to keep up the communication even during the program deliverers' summer leave.
Implications: Special care should be taken when negative messages are delivered and when professionally sensitive issues are discussed. During group discussions, skillful facilitation is important to ensure equity and fairness. It is also necessary to identify the program deliverers' possible biases and reduce their impact on the results of the program evaluation.
Stakeholder involvement in data interpretation and reporting:
- Degree of involvement: moderate to heavy (moderate during the summer leave, heavy during on-the-job time).
- Nature of involvement: active participation in the data interpretation; approval of the interim reports; revision of the preliminary final report; approval of the written final report.

Stakeholder: Primary Consumers (4th-year students, 3rd-year students)
Reporting strategy:
- E-mailed scheduled (or unscheduled) written interim report on the data collected from the students and their family members;
- Written final report (abridged version) published on the program's page of the university's website.
Reasoning: With the summer holiday approaching and with the graduates leaving the program, any mode of face-to-face communication becomes unfeasible. However, to express appreciation of the primary consumers' input, it is important to provide feedback to them in the most unobtrusive manner.
Implications: Special care should be taken when the program's weaknesses are reported. Confidentiality and anonymity are to be guaranteed at each communication point. The terminology in the reports is to be adapted to the stakeholders' level of understanding.
Stakeholder involvement in data interpretation and reporting:
- Degree of involvement: moderate for both groups.
- Nature of involvement: moderate participation (through clarification of ambiguities) in interpreting the data obtained from the students and their family members; approval of the interim report on the data collected from the students and their family members.

Stakeholder: Secondary Consumers (4th-year and 3rd-year students' family members)
Reporting strategy:
- Written final report (abridged version) published on the program's page of the university's website.
Reasoning: With this stakeholder group being largely inaccessible, any mode of face-to-face follow-up communication becomes unfeasible. However, to express appreciation of the secondary consumers' input, it is important to provide feedback to them in the most unobtrusive manner.
Implications: Special care should be taken when the program's weaknesses are reported. The terminology in the reports is to be adapted to the stakeholders' level of understanding.
Stakeholder involvement in data interpretation and reporting:
- Degree of involvement: none.

Stakeholder: Program Evaluation Project Manager
Reporting strategy:
- Personal discussions;
- Short written communications via e-mail;
- Scheduled and unscheduled interim reports;
- Preliminary final report;
- Written final report (full version).
Reasoning: Ongoing communication via all the venues available is critical to the evaluation project's success.
Implications: The lead evaluator's ongoing reporting is critical for the project manager to track progress and, if necessary, take timely steps to obtain additional resources, reallocate resources, reschedule tasks, and report progress to the university authorities. Communication with the project manager should primarily focus on project management issues.
Stakeholder involvement in data interpretation and reporting:
- Degree of involvement: none in data interpretation (the role of the project manager is that of a supporter); heavy involvement in disseminating results.

Values, Standards, and Criteria to be Used in Interpreting the Data to Ensure Openness and Credibility:

Respect for people and democracy will be the primary values that the evaluation team will adhere to. The lead evaluator will engage in
extensive and intensive communication with the stakeholders to ensure that these values are shared and unanimously given priority.

In an attempt to ensure that the program evaluation is objective, credible, and fair, it is important that evaluation communications are
accurate and detailed in order to allow the intended audience to understand and interpret the data being reported (Campbell, June 17, 2015, para. 1).
Communication is also instrumental in learning the stakeholders' expectations, in preparing them for the negative messages that will certainly
have to be communicated to the appropriate stakeholders when the data are being or have been interpreted, and in ensuring open and
comprehensive reporting: understanding your audience and stakeholders is key to reporting (Bledsoe, n.d.).

As it is important that all legitimate positions be reported in a balanced way (Fitzpatrick et al., 2011, p. 457), the three recommendations for
objective data interpretation and reporting as formulated by House (2001) come to the fore: inclusion of all relevant stakeholder perspectives,
values, and interests in the study; extensive dialogue between the evaluator and stakeholders; and extensive deliberation to reach valid
conclusions in the study (original emphasis).

The credibility of the interpretation is also ensured by the evaluation team (except the lead evaluator) being external to the program and
having knowledge of and experience in data collection and data interpretation. As the team has no personal interest in the program and is largely
driven by professional interest rather than financial gain, its participation in the program evaluation lends a lot of credibility to the results in the
eyes of the program administration and management, the program deliverers, and the program's consumers.

Triangulation will be used to ensure the reliability of the data collected.

The focus on the evaluation questions and the criteria negotiated and agreed upon by the program deliverers, the program administration and
management, and the evaluation team will also help guarantee that the results are seen as valid and reliable by the focal stakeholder group (the
program deliverers).

The most accessible stakeholder group, the program deliverers, will be actively involved in interpreting the data collected. They will also
be requested to review the preliminary final report for approval or suggestions for amendments (Fitzpatrick et al., 2011). Minority reports will be
resorted to if stakeholders voice disagreement with the judgments, conclusions, or recommendations.


Potential Ethical Issues at the Reporting Stage:

The following are seen as the ethical issues most likely to arise during the reporting stage:
- Situations that can potentially jeopardize respondent confidentiality (Ritter & Sue, 2007, p. 13): to guard against such situations, it is
important to report the findings in a manner that makes the disclosure of the respondents' identities irrelevant, for example, by producing
results for groups (Ritter & Sue, 2007).
- Barriers to report[ing] data that presents the host organization unfavorably (Ritter & Sue, 2007, p. 13): ongoing communication with the
stakeholders (especially the program administration and managers, who are the least involved stakeholder group) on the ethics of
evaluation and the issues of corporate social responsibility can help avoid this situation.

References

Bledsoe, K. L. (n.d.). Data analysis case study [Multimedia]. Retrieved from
http://mym.cdn.laureatemedia.com/2dett4d/Walden/EIDT/6130/07/mm/DataAnalysisCaseStudy/index.html
Campbell, A. (June 17, 2015). RE: Alexandra Campbell Initial Post [Discussion post]. Retrieved from
https://class.waldenu.edu/webapps/discussionboard/do/message?action=list_messages
&forum_id=_3139516_1&nav=discussion_board&conf_id=_1483203_1&course_id=_
8626050_1&message_id=_48358856_1#msg__48358856_1Id.
Department of Linguistics. (2013). Otchet po samoobsledovaniiu kafedry: 2008-2013
[Departmental self-revision report: 2008-2013]. Institute for Tourism and Hospitality,
Russian State University of Tourism and Service: Internal documentation. (Resource
in Russian).
Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative
approaches and practical guidelines (4th ed.). Upper Saddle River, NJ: Pearson
Education.
King, J.A. (2005). Involving practitioners in evaluation studies: How viable is collaborative
evaluation in schools? In J.B.Cousins, and L.Earl (Eds.), Participatory evaluation in
education: Studies in evaluation use and organizational learning. London,
Washington D.C.: Falmer Press (member of the Taylor & Francis Group).
Ministry of Education and Science of the Russian Federation. (May 20, 2010). Federalnyi
gosudarstvennyi obrazovatelnyi standart vysshego professionalnogo obrazovaniia po
napravleniiu podgotovki 035700.62 Lingvistika [Federal state educational standard for
the core training program 035700.62 Linguistics]. Retrieved from
http://www.edu.ru/db-mon/mo/Data/d_10/prm541-1.pdf. (Resource in Russian)


Morris, G. (May, 2015). RE: Question for Gayle [Discussion post]. Retrieved from
https://class.waldenu.edu/webapps/discussionboard/do/message?action=list_messages
&forum_id=_3139512_1&nav=discussion_board&conf_id=_1483203_1&course_id=_
8626050_1&message_id=_48358843_1#msg__48358843_1Id.
Patton, M. Q. (2002). Utilization-focused evaluation. In D. L. Stufflebeam, G. F. Madaus, &
D.Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human
services evaluation (2nd ed.). New York, Boston, Dordrecht, London, Moscow: Kluwer
Academic Publishers.
Ritter, L., & Sue, V. (2007). Introduction to using online surveys. New Directions for
Evaluation, 115, 5-14.
Stufflebeam, D. L. (2002). Foundational models for 21st century program evaluation. In D. L.
Stufflebeam, G. F. Madaus, & D. Kellaghan (Eds.), Evaluation models: Viewpoints on
educational and human services evaluation (2nd ed.). New York, Boston, Dordrecht,
London, Moscow: Kluwer Academic Publishers.
Zukoski, A., & Luluquisen, M. (2002). Participatory evaluation: What is it? Why do it? What
are the challenges? Community-Based Public Health: Policy & Practice, 5(April,
2002). Retrieved from https://depts.washington.edu/ccph/pdf_files/Evaluation.pdf.
