
Program Evaluation

Approaches
Why do we need a model to evaluate?
So that evaluation can be conducted in a more complete and meaningful manner.
A MODEL is a conceptual picture that shows the interrelationships between the various elements involved in an activity.
Several models may be grouped under one evaluation approach.
The categorization of approaches depends on the author.
Zhang et al. (2011) state that there are 26 evaluation approaches categorized into 5 groups: pseudo-evaluations, quasi-evaluation studies, improvement- and accountability-oriented evaluation, social agenda and advocacy, and eclectic evaluation.
Stufflebeam (2001) suggests 22 evaluation approaches grouped into 4 main categories: pseudo-evaluation, questions- and methods-oriented, improvement or accountability, and social agenda or advocacy.
Approaches to Program Evaluation
Objectives-oriented evaluation approaches (or goal-based)
Management-oriented evaluation approaches (or decision-making)
Consumer-oriented evaluation approaches
Expertise-oriented evaluation approaches
Participant-oriented evaluation approaches (or naturalistic)
Program Evaluation Approaches
Objectives-oriented approaches: the focus is on specifying goals and objectives and determining the extent to which they have been attained.
E.g., Tyler believed that evaluation of a curriculum involves the activities used to determine whether the curriculum has met its stated objectives.
Management-oriented approaches: the focus is on providing meaningful and useful information to managerial decision makers.
Participant-oriented approaches: the involvement of participants (stakeholders in that which is evaluated) is central in determining the values, criteria, needs, data and conclusions for the evaluation.
Also known as the naturalistic/qualitative approach. Proponents believe that, in evaluating a program, evaluators should focus more on producing a well-rounded program by using qualitative methods and the full involvement of the evaluators at the actual site of the program.
Program Evaluation Approaches
Consumer-oriented approaches: the focus is on developing evaluative information on products and accountability, for use by consumers in choosing among competing products, services, etc.
Expertise-oriented approaches: these depend primarily on the direct application of professional expertise to judge the quality of whatever endeavour is evaluated.
OBJECTIVES-ORIENTED EVALUATION APPROACH
Its features: the purposes of some activity are specified, and then evaluation focuses on the extent to which those purposes are achieved.
In education, the activity could be as short as a one-day classroom lesson, or as complex as the whole schooling enterprise.
In health and human services, it is often a service or intervention.
In business, it could be as simple as a one-day meeting or as complex as a corporation's 5-year strategic plan.
Many people have contributed to the development and refinement of the objectives-oriented approach to evaluation since its inception in the 1930s, such as:
1. The Tylerian Evaluation Approach - Ralph Tyler (late 1930s)
2. Metfessel and Michael's Evaluation Paradigm
3. Provus's Discrepancy Evaluation Model
4. Logic Models
5. Kirkpatrick's 4-Level Model
How has the objectives-oriented evaluation approach been used?
The objectives-oriented approach has dominated the thinking and development of evaluation since the 1930s, both in the US and elsewhere.
Its straightforward procedure of letting the achievement of objectives determine success or failure, and justify improvement, maintenance or termination of program activities, has proved an attractive prototype.
Cronbach, who worked with Tyler on the Eight-Year Study, developed an approach to using objectives and associated measurement techniques for purposes of course and curriculum development.
Tyler's statement on program evaluation:
"A comprehensive evaluation of the outcomes of an educational program requires clear definitions of the desired patterns of behaviour and of other possible outcomes, both positive and negative. It then requires the selection or development of test situations that evoke such behaviour from the students, and it necessitates the use of relevant and important criteria for appraising the students' reactions in these test situations. Finally, the reporting of these appraisals should be done in ..."
Strengths and Limitations

????????
Major Concepts and Theories
1. The objectives-oriented evaluation approach used by Tyler was designed to determine the extent to which program objectives were attained. Tyler used discrepancies between what was expected and what was observed to provide suggestions for any program deficiencies.
2. Before a program can be evaluated using this approach, it may be necessary to fully evaluate the goals/objectives of the program.
3. Provus believed that evaluation goes through 5 stages: definition, installation, process, product and cost-benefit analysis.
GOAL-FREE EVALUATION APPROACH
Developed by Michael Scriven in 1972.
History of development:
Scriven found that, in traditional evaluations, goals are not dependable because they:
1) Can contradict each other
2) Are too broad, trying to account for all circumstances, and are thereby useless
3) Create unnecessary noise for the evaluation
4) Contaminate, or bias, the evaluator
5) Lead evaluators to avoid looking for unintended effects
GOAL-FREE EVALUATION APPROACH
KSSR objectives - ??; SBA (PBS) objectives - ??
The objectives of NEAS are:
To reduce the focus on public examinations
To strengthen SBA
To improve students' learning
To create a holistic and everlasting assessment
To develop better human capital

The main objectives of SBA are:
To get the overall picture of an individual's potential
To monitor individuals' development and help to increase their potential
To make meaningful reporting on individual learning
Major Characteristics
The evaluator actively avoids information regarding program goals.
The evaluator does not have preconceived goals, in order not to narrow the focus.
The evaluator has minimal contact with the staff or members of the program.
Without information regarding goals, the evaluator is more likely to see unanticipated effects of the program.
Not based upon predetermined goals
Free of side effects
Lacking summative evaluations
Different from the true needs of the program
Goal-free evaluation is a results-focused evaluation. The evaluation identifies all results of the program, whether anticipated or unanticipated (Kahan, 2008, p. 28).
If not goals, what?
Use the needs of the consumer, funding agency, and program director
Evaluate achievement, rather than testing goals
Use hypotheses, based on experience, to start the evaluation
What is the purpose of this model?
If you are interested in knowing the unintended as well as the intended outcomes of your project, the goal-free evaluation model may be an appropriate way of capturing the results or outcomes of your work (Scriven, 1972).
This evaluation model intentionally seeks to be blind to the objectives or goals of project stakeholders. Scriven noticed that the common focus of evaluators on the achievement of pre-determined objectives sometimes missed the positive unintended outcomes that resulted. The organizing principle in this type of evaluation is the effects, not the goals.
Two types of information are necessary in order to
conduct this type of evaluation:
1) The evaluator needs to identify all the effects or
outcomes that resulted from the project.
2) The evaluator must construct a profile of the needs of
the target population.
If an effect has a positive impact on one or more of those
needs, that part of the program that yielded that effect
should be positively evaluated.
Goal-free evaluation is candid and direct, but it can be uncomfortable.
When to use?
When stakeholders want:
Information about program outcomes, both intended and unintended
A critique not focused on the program goals
When evaluators:
Have no knowledge of program goals, whether intentionally or unintentionally
Want to identify the effects of a program from data collection, observations and interviews
Philosophical Perspective
Benefits:
1) To attract funders, program-generated goals are often grandiose and irrelevant to the project.
2) If the needs shift during the project, the program can shift as well.
Needs and Nature
Evaluation Needs
1) Access to program/project participants
2) Access to all data
3) Time
Evaluation Nature
It tends to be qualitative in nature, meaning this evaluation is one of discovery. By interviewing participants in unstructured interviews, the evaluator begins to identify outcomes. Several interviews may be necessary. Observation might also be necessary.
Critical Elements and Client Issues
Critical Elements
- The evaluator has to have good knowledge of the subject of the evaluation.
- The evaluator has to be competent.
- The evaluator has to be free of bias.
Client Issues
The evaluator will be an intrusive presence, in the sense that the evaluator must be free to interact with the program participants.
This evaluation takes time and will be more expensive.
Evaluation products typically include a standard evaluation report that may be both qualitative and quantitative. The client should have material that will serve promotional purposes as a consequence of this evaluation.
Implementation Techniques
Asking questions and devising ways to find answers
Determining what the evaluation will do and how it will benefit stakeholders
Interviewing, and interviewing again
MANAGEMENT-ORIENTED EVALUATION APPROACH
Management-Oriented Evaluation Approach
It is meant to serve decision-makers.
Its rationale: evaluative information is an essential part of good decision making, and the evaluator can be most effective by serving administrators, managers, policy makers, boards, practitioners, and others who need good evaluative information.
Developers of this approach have relied on a systems approach to evaluation, in which decisions are made about inputs, processes and outputs, much like logic models and program theory.
The decision-maker is the audience to whom a management-oriented evaluation is directed, and the decision-maker's concerns, informational needs, and criteria for effectiveness guide the direction of the study.
Management-Oriented Evaluation Approach

a) The CIPP Evaluation Model by Stufflebeam (1971)
b) The UCLA Evaluation Model by Alkin (1969)
c) The utilization-focused evaluation approach by Patton (1986)
How has the approach been used?
This approach has guided program managers through program planning, operation and review.
It is used for both decision making and accountability:
Context (C): decision making - guidance for the choice of objectives; accountability - a record of objectives, along with records of needs, opportunities and problems
Input (I): decision making - guidance for the choice of inputs; accountability - a record of the chosen strategy and design
Process (P): decision making - guidance for implementation; accountability - a record of the actual process
Product (P): decision making - guidance for termination/continuation/modification; accountability - a record of attainments and recycling decisions
Strengths and Limitations

?????????
Major Concepts and Theories
1. The major concept of this approach is that the evaluation should inform decision-makers about the inputs, processes and outputs of the program. It considers the decision-makers' concerns, informational needs and criteria for effectiveness when developing the evaluation.
2. Stufflebeam's CIPP model incorporates 4 separate evaluations (Context, Input, Process, Product) into one framework to better serve managers and decision makers. Each of these evaluations collects data to serve different decisions (e.g. a context evaluation serves planning decisions).
3. In the CIPP model, a context evaluation helps define objectives for the program under evaluation.
4. An input evaluation provides information on what resources are available, what alternative strategies for the program should be considered, and which plan has the best potential for meeting the identified needs.
Major Concepts and Theories
5. A process evaluation determines how well a program is being implemented, what barriers to its success exist, and what revisions are needed.
6. A product evaluation is used to provide information on what program results were obtained, how well needs were reduced, and what should be done once the program has ended.
7. Alkin's UCLA model is similar to the CIPP model in that it provides decision-makers with information on the context, inputs, implementation, processes and products of the program.
The UCLA Evaluation Model
By Alkin in 1969
5 types of evaluation:
i) Systems assessment - to provide information about the state of the system (similar to context evaluation in CIPP)
ii) Program planning - to help in the selection of particular programs likely to be effective in meeting specific educational needs (similar to input evaluation)
iii) Program implementation
iv) Program improvement - to provide information about how a program is functioning (similar to process evaluation)
v) Program certification - to provide information about the value of the program and its potential for use elsewhere (similar to product evaluation)
The utilization-focused evaluation by Patton
Developed by Michael Quinn Patton in the 1980s.
Evaluation is based on utility, or actual use.
Or: evaluation is based on clients' requirements.
Or: it focuses on "intended use by intended users", meaning who is going to use the evaluation results and how they are going to use them.
Intended users = specific groups from the list of potential stakeholders
Intended use = the information needed by the intended users
The utilization-focused evaluation by Patton
Patton believed that, in order to produce program evaluation studies that are actually used, decision-makers have to work together with the evaluators concerning:
- research design
- data collection methods
- data analysis
- dissemination of data
The utilization-focused evaluation by Patton

The 1st step in the evaluation is to identify and organize relevant decision makers and information users.
The use of evaluation findings requires that decision makers determine what information is needed by various people and arrange for that information to be collected and provided to them.
CIPP vs Patton
Both models focus on how the results from the evaluation procedures can be used for decision-making.
Patton believes that evaluators have to play their role in working together with the intended users in making judgments and decisions, without neglecting the evaluation's accuracy.
Evaluators act as facilitators in helping intended users to decide on the most suitable evaluation purposes (formative/summative/developmental), methods (quantitative/qualitative/mixed), research designs (naturalistic/experimental), or foci (process/outcomes/impacts/costs or cost-benefit).
CIPP vs Patton
That is why Patton rejects the label of a decision-making approach: the evaluation serves not only decision-makers but also other purposes intended by the users.
He prefers "user-focused evaluation".
So, the difference lies in the PURPOSE OF EVALUATION.
CIPP: to look for attainment of the goals and to improve the program by looking at its overall impact.
Patton: to meet the needs of various audiences.
CONSUMER-ORIENTED EVALUATION APPROACH
Proponents: Scriven, Komoski
Purpose of evaluation:
To provide information about products to aid decisions about
purchases.
Characteristics:
They use criterion checklists to analyze products; product testing; informing consumers.
It has lists of criteria for evaluating educational products and activities.
It has formative and summative roles of evaluation.
Bias control.
Checklists for evaluation by Patterson: Instructional Materials Review Form
Strengths and Limitations
Strengths:
Emphasis on consumer information needs
Influence on product developers
Concern with cost-effectiveness
Availability of checklists
Limitations:
Cost and lack of sponsorship
May suppress creativity/innovation
Not open to debate or cross-examination
Major Concepts and Theories

1. The increase in federal funding allocated for educational products and materials since the 1960s spurred the development of this approach. It provides potential consumers with information regarding a variety of product factors. This information is provided to help consumers become more knowledgeable about the products they purchase.
2. The most widely used data collection methods are stringent evaluation criteria and checklists, which provide consumers with defensible results on a wide variety of product factors.
EXPERTISE-ORIENTED EVALUATION APPROACH
Broadly used by both national and regional accreditation agencies.
The oldest and most widely used approach; it depends primarily on professional expertise to judge an institution, program, product or activity.
E.g.: the worth of a drug-prevention curriculum would be assessed by curriculum/subject-matter experts, who would observe the curriculum in action and examine its content and underlying learning theory.
E.g.: the quality of a hospital is assessed by experts in medicine, health services and hospital administration, who look at its special programs, its operating facilities, its pharmacy, etc.
Normally, a team of experts who complement each other is much more likely to produce a sound evaluation.
Proponents: Eisner; accreditation groups.
Purpose of evaluation: providing professional judgments of quality, e.g.:
doctoral examinations administered by a committee
proposal review panels
professional reviews conducted by professional
accreditation bodies
reviews of staff performance for decisions concerning
promotion
peer reviews of articles submitted to refereed professional
journals
4 TYPES OF EXPERTISE-ORIENTED EVALUATION APPROACHES
1. Formal review systems: accreditation
2. Informal review systems: graduate student committees
3. Ad hoc panel reviews: blue-ribbon panels
4. Ad hoc individual reviews: the Buros Institute of Mental Measurements
Strengths and Limitations
Strengths:
1. Broad coverage
2. Capitalizes on human judgment
3. Easy to implement
Limitations:
1. Open to conflict of interest
2. Superficial look at context
3. Overuse of intuition
4. Reliance on the qualifications of the experts
Conclusion
The focus of each approach:
Objectives-oriented: specifying goals and determining the extent to which they have been attained.
Management-oriented: identifying the informational needs of managerial decision makers.
Consumer-oriented: program consumers' ratings of program features and quality.
Expertise-oriented: judgements made by professional experts about programs.
All these share a common characteristic:
Their primary focus is NOT on serving the needs of those who participate in the program, who are, in fact, the raison d'être for the program.
PARTICIPANT-ORIENTED EVALUATION APPROACH
Naturalistic evaluation approach
It uses a qualitative methodology, so anything related to knowledge claims, strategies or methods used (data collection, data analysis and interpretation) follows the qualitative tradition.
Being natural:
- Evaluators act naturally, like most people who evaluate things, by observing and reacting
- More freedom is given to evaluators when choosing the methodology
Naturalistic evaluation approach
Also known as a client-centred study.
Evaluators have to work hand in hand with clients and act as a counsellor/advisor to the clients in improving the program.
It rejects objectivism and moves towards accepting open-ended and emergent study designs, producing a narrative description of the study in order to create a rich set of information about the program.
Naturalistic evaluation approach

"An evaluation will not be useful if the evaluator does not know the interests and language of the audiences" (Stake, 1973, p. 4)
The focus is on:
Evaluators work to portray the multiple needs, values and perspectives of program stakeholders so that they are able to make judgments about the value of the program.
In general, these evaluations often depend on inductive reasoning, use multiple data sources, do not follow a standard plan, and describe multiple realities.
Stake's Countenance Framework
Illuminative Evaluation
Responsive Evaluation
Naturalistic Evaluation
Strengths and Limitations

???????
Questions to be discussed
1. What are the major strengths of the objectives-oriented approach?
2. What are the major weaknesses of the objectives-oriented approach?
3. What are the major strengths of the management-oriented approach?
4. What are the major weaknesses of the management-oriented approach?
5. What are the major strengths of the participant-oriented approach?
