
EVALUATION

Sasser, Pulley, & Ritter, with subsequent modifications

Text: Phillips, Handbook of Training Evaluation and Measurement Methods


CEOs want to know what they're getting. Our challenge: to design evaluations that are simple and
inexpensive while effectively communicating the return on investment.
Evaluation vs. Research (Leonard Nadler)
-- What learning occurred (if any)? vs. Why did learning occur?
-- Learning objectives vs. Hypotheses
-- Actual situation vs. Experimental
-- To improve a program vs. Varied
-- Immediate vs. Long-term

In evaluating programs we're talking about evidence -- not proof. Build a case based on circumstantial
evidence. Consider: do you really need the information you're looking for? What communicates? What
creates a vivid picture?
How can you set up systems for continual monitoring?
How do you decide where to invest money for evaluation?
1. Identify the problem
2. Define the evaluation criteria
3. Define the research design
4. Consider intended / unintended consequences
5. Consider ethical issues (confidentiality and misuse of data)

Why evaluate?
Proving (summative / judge the program)
-- it works
-- it makes a difference
Improving (formative / improve the program)
-- instructor development
-- materials, equipment, facilities
-- revision of content
-- varying strategies
Learning
-- advance organizer
-- reinforce learning
-- focus on learner outcomes
Linking -- training to rest of organization (politics)
-- part of management/strategic direction
-- assure transfer
-- open lines of communication
Kirkpatrick's Levels of Evaluation
Level One = reactions (reaction sheet)
Level Two = learning (knowledge and performance tests, observation, self-reports, simulations, work
sample analyses)
Level Three = behavior on the job (self, peer, and supervisor reports, case studies, surveys, site visits,
observation, work-sample analyses)
-- productivity measures: output, loans approved, # incentive bonuses
-- quality measures: scrap level, error, waste, rework, customer complaints
-- performance / behavior: absenteeism, tardiness, requests for transfer, turnover
-- safety / regulatory: injuries, litigation, OSHA reports


Level Four = results (organizational audits, performance analyses, records analyses, document
reviews, cost/benefit comparison)

Shortcomings: the definition of HRD covers training only, and the model is entirely outcome oriented --
it perpetuates trial-and-error learning.
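
As a small planning aid (not part of the source notes), the four levels and the instruments listed for each can be kept in a simple lookup table; the sketch below is one hypothetical way to organize them.

```python
# Hypothetical planning aid: hold Kirkpatrick's levels and the instruments
# listed above in a simple lookup table for evaluation planning.
KIRKPATRICK_METHODS: dict[int, list[str]] = {
    1: ["reaction sheet"],
    2: ["knowledge and performance tests", "observation", "self-reports",
        "simulations", "work-sample analyses"],
    3: ["self, peer, and supervisor reports", "case studies", "surveys",
        "site visits", "observation", "work-sample analyses"],
    4: ["organizational audits", "performance analyses", "records analyses",
        "document reviews", "cost/benefit comparison"],
}


def candidate_methods(level: int) -> list[str]:
    """Return the candidate data-collection methods for a Kirkpatrick level."""
    return KIRKPATRICK_METHODS.get(level, [])


print(candidate_methods(3))  # planning Level 3 (behavior on the job) data collection
```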
Six-Stage Model for Evaluating HRD (Brinkerhoff)
1. Goal setting -- What's the need?
2. Program design -- What will work?
3. Program implementation -- Is it working?
4. Immediate outcomes -- Did they learn it?
5. Usage outcomes -- Do they use it?
6. Impacts and worth -- Did it make a difference?

Strengths of the model:
identifies flawed operations, not just flawed courses
requires articulation of assumptions about why and how each HRD activity is supposed to work
emphasizes formative evaluation by incorporating it at each stage


Clark & Merrill -- Instructional Systems Design
Analysis: needs assessment; task analysis
Design: learning objectives; assessment
Development: delivery strategies; writing content (information / practice)
Evaluation: try out / revise
Implementation: provide to students
(Version used by Campbell & Eyler separates Analysis & Design.)

Nadler -- Critical Events
1. Identify needs of the organization
2. Specify job performance
3. Identify learner needs
4. Determine objectives
5. Build curriculum
6. Select instructional strategies
7. Obtain instructional resources
8. Conduct training
Evaluation and feedback throughout

Phillips -- Results-Oriented HRD
1. Conduct a needs analysis and develop tentative objectives
2. Identify purposes of evaluation
3. Establish baseline data
4. Select evaluation method/design
5. Determine evaluation strategy
6. Finalize program objectives
7. Estimate program costs/benefits
8. Prepare and present proposal
9. Design evaluation instruments
10. Determine and develop program content
11. Design or select training & development methods
12. Test program and make revisions
13. Implement program
14. Collect data at proper stages
15. Analyze and interpret data
16. Make program adjustments
17. Calculate return on investment
18. Communicate program results


Knowles -- Andragogy
Set climate
Establish structure for mutual planning
Diagnose needs (and interests) for learning
-- develop competency model
-- assess performance
-- assess needs
Formulate directions (objectives) for learning
Design a pattern of learning experiences
-- organizing principles
-- learning design models
Manage the learning experiences
-- techniques
-- materials and devices
Evaluate results & rediagnose needs
Simplified version

Eyler's Work Tasks by Phase -- the original version was designed to be used post-course; I've adapted it
to include the potential for use in pre-course development of an evaluation plan.
1. Course review/analysis

Why evaluate?
Proving (summative / judge the program)
-- it works
-- it makes a difference
Improving (formative / improve the program)
-- instructor development
-- materials, equipment, facilities
-- revision of content
-- varying strategies
Learning
-- advance organizer
-- reinforce learning
-- focus on learner outcomes
Linking -- training to rest of organization (politics)
-- part of management/strategic direction
-- assure transfer
-- open lines of communication

Worth evaluating?
Ethical issues (Wooten)
-- misrepresentation & collusion
-- misuse of data
-- manipulation & coercion
-- value & goal conflict
-- technical ineptness
Evaluability (Russ-Eft) -- CLAM:
-- Does it have clear, measurable objectives?
-- Does it have a logical plan to reach them?
-- Are the activities logical?
-- What's the management plan and plan for data use?


2. Strategy definition

Strategic Considerations (Eyler)
-- need to know
-- resources
-- credibility
-- politics
-- intrusiveness
-- instructional reinforcement

Identify data that's worth getting and that will tell the story clearly and vividly.
The measurements the client first used to identify the problems (presenting symptoms) will often
be the measures used in tracking operational results (Robinson).
Looking for evidence -- not proof.
Intended / unintended results (Eyler)
Design -- collection & analysis plan


Research designs: benefits and drawbacks

Pre-experimental
-- One-shot
   Benefits: may be the only choice; the group may have no prior skill.
   Drawbacks: totally uncontrolled -- no way to tell what influenced results.
-- Single group, pretest & post-test
   Benefits: better than one-shot; can compare before & after skills.
   Drawbacks: effect of a pre-test; effect of external factors; mortality threat.
-- Single group, time series
   Benefits: multiple measures beforehand allow comparison of initial results and measurement of
   long-term effects.
   Drawbacks: does not control for effects of testing or selection.

Quasi-experimental
-- Control group
      O1 --> X --> O2
      O1 --------> O2
   Benefits: eliminates much of the time and selection threat to validity; acceptable when groups are
   very similar on selection criteria; can compare groups over time (see the sketch below).
-- Institutional cycle design
      X  O
      O  X  O
      X  O
      O  X  O
   (Used to compare groups over time; is training having an effect? Build this into normal
   institutional practice.)
   Controls for: pre-test effect, time/history, selection.

True experimental (random selection -- takes all control variables into account)
-- Three groups, random, selected group pre- & post-test (first 3 lines), or Solomon 4-Square Design
   (all 4 lines, if you're concerned that the test itself may alter the outcome)
      R  O1  X  O2
      R  O1      O2
      R       X  O2
      R           O
   Benefits: random selection; pre & post testing on selected groups; group B controls for time &
   mortality threats; random selection controls for the selection threat; group C controls for pretest
   interaction.
-- Post-test only, control group
   Benefits: random selection; post-test only reduces the pretest effect (also the time & expense of
   the 3-group design).
   Drawbacks: the amount of change can't be accurately attributed to the program.

Threats to validity: time/history, testing effects, selection, mortality
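
As an illustration of how the pretest/post-test control-group comparison above might be analyzed, the sketch below estimates the training effect as the trained group's gain minus the control group's gain. It is hypothetical: the scores and the difference-in-gains calculation are illustrative, not prescribed by the text.

```python
# Illustrative only: estimating a training effect from a pretest/post-test
# control-group design (O1 --> X --> O2 vs. O1 --------> O2).
# All scores below are hypothetical.
from statistics import mean


def gain(pre: list[float], post: list[float]) -> float:
    """Average post-test score minus average pre-test score for one group."""
    return mean(post) - mean(pre)


# Trained group (received X) and a similar control group (did not)
trained_pre, trained_post = [62, 58, 70, 65], [81, 77, 88, 84]
control_pre, control_post = [61, 60, 68, 66], [66, 64, 71, 70]

# The difference in gains is the evidence (not proof) that the program,
# rather than time/history or testing effects, produced the change.
effect = gain(trained_pre, trained_post) - gain(control_pre, control_post)
print(f"Estimated training effect: {effect:.1f} points")  # 14.8 points
```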


3. Instrument development

Fitting Instrumentation to the Purpose
[Matrix: methods (Questionnaire, Interview, Observation, Extant Data) mapped against purposes
(Needs Assessment, Implementation, Level 1, Level 2, Level 3, Level 4); the original grid marks which
methods fit which purpose.]

Make or buy your instruments.


Validity:
Face -- does it appear to measure what it proposes to measure?
Content -- does it measure the content of what it's designed to measure?
Construct -- does it represent a meaningful entity?
Concurrent -- one measure correlates with another
Predictive -- does it accurately predict later behavior/attributes? (e.g., SAT)
Potential Threats to Validity
Role Bias -- people's perception of why you're asking
Response Set -- predisposition to be agreeable / disagreeable
Central Tendency -- on written questionnaires, tending to answer in the middle
Social Desirability -- people tell you what they think you want to hear
Practice / Testing -- the test teaches and therefore distorts results
Halo Effect -- generalizing a reaction to a topic or trainer
Overload -- the more you ask, the lower the quality of response
Vocabulary
Trust
Using Extant Data
sales data
performance data
production records
incident reports
promotional time table
calls to hot line (ethics, orders)
sick leave
annual report

newsletters
financial statement
memo
absenteeism
exit interview data
customer complaints
training materials
work products
applications (e.g., Baldrige award)

Content Analysis:
Count (# of incidents) vs. classify (qualitative judgment) -- see the sketch below.
Establishing a baseline
-- use a management panel, if necessary, to estimate one
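
A small illustrative sketch of the count vs. classify distinction: counting incident frequencies directly, then classifying the same incidents into broader themes before counting. The incident records and theme labels are hypothetical.

```python
# Hypothetical example of content analysis on extant data.
from collections import Counter

incidents = [
    "customer complaint", "rework", "customer complaint",
    "safety injury", "rework", "customer complaint",
]

# Count: simple frequency of each incident type (quantitative).
print(Counter(incidents))
# Counter({'customer complaint': 3, 'rework': 2, 'safety injury': 1})

# Classify: a qualitative judgment mapping each incident to a broader theme,
# then counting the themes.
themes = {
    "customer complaint": "quality",
    "rework": "quality",
    "safety injury": "safety/regulatory",
}
print(Counter(themes[i] for i in incidents))
# Counter({'quality': 5, 'safety/regulatory': 1})
```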
4. Implementation review
review implementation plan
identify potential problems in collection and analysis
develop solutions
finalize design


5. Data collection
notify participants of upcoming data collection activities
coordinate data collection (distribution & return of questionnaires, site visits for interviews,
observations, etc.)

6A. Data analysis

ROI = program benefits / program costs (a short sketch of this arithmetic appears below)

Training investment analysis (Hassett)
1. Determine the information your organization needs
2. Use the simplest and least-expensive method possible for finding the information you need
3. Perform the analysis as quickly as possible
4. Publish and circulate the results

Sources of return on investment (Hassett)
-- increased income (productivity, sales)
-- decreased losses (errors, waste)
-- retention (client and employee)
-- intangibles
If costs aren't already assigned to these, use a management panel to assign them.

Use the ROI ratio only when data is clear and definitive; arrive at a single number.
Cost / Benefit: Benefits - Costs = $ Saved
Less technical; easier to couch in qualified terms; provides evidence, not proof.

Typical program costs:
Analysis = 20%; Design & development = 70%; Evaluation = 10%
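
A minimal sketch of the ROI ratio and the cost/benefit difference above, assuming hypothetical dollar figures split roughly per the typical cost percentages; none of the numbers or names below come from the text.

```python
# Minimal sketch of the ROI ratio and cost/benefit difference described above.
# All dollar figures are hypothetical placeholders.

def roi(program_benefits: float, program_costs: float) -> float:
    """ROI expressed as benefits divided by costs (a single number)."""
    return program_benefits / program_costs


def net_savings(program_benefits: float, program_costs: float) -> float:
    """Cost/benefit comparison: benefits minus costs = dollars saved."""
    return program_benefits - program_costs


# Hypothetical program, with costs split roughly per the typical percentages
# above (analysis ~20%, design & development ~70%, evaluation ~10%).
costs = {"analysis": 20_000, "design_development": 70_000, "evaluation": 10_000}
total_costs = sum(costs.values())   # 100,000
benefits = 140_000                  # e.g., reduced errors plus retention gains

print(f"ROI         = {roi(benefits, total_costs):.2f}")             # 1.40
print(f"Net savings = ${net_savings(benefits, total_costs):,.0f}")   # $40,000
```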


6B. Presentation plan

1) Identify the audience & 2) what they need/want to know
-- Stockholders: ROI; graphic & short verbal descriptions
-- Top management: ROI; success; next steps
-- Managers: Level 2, 3, 4 results related to their group
-- Participants: Level 2, 3, 4 results related to their group; any personal information
-- HRD: everything

Michael Quinn Patton, How to Use Qualitative Methods in Evaluation


Purpose of evaluation = relaying information to decision-makers so that they feel
prepared to act.
Goal of evaluation = to shape information which decision-makers can use.
Decision-making is a process; it seldom occurs at a single point in time. Therefore
persuading decision-makers involves continual communication.
Most decision-makers rely on numbers but numbers do not convey meaning,
attitude, or morale:
o Numbers convey a sense of precision and accuracy even if the
measurements which yielded the numbers are relatively unreliable, invalid,
and meaningless. The point, however, is not to be anti-numbers. The
point is to be pro-meaningfulness. Thus by knowing the strengths and
weaknesses of both quantitative and qualitative data, the evaluator can help
stakeholders focus on really important questions rather than, as sometimes
happens, focusing primarily on how to generate numbers.
Technique: Evaluation Advisory Board (composed of decision-makers)
o "I would like to know __________________ about ______________ program."
o Prioritize. Use as the basis of evaluation outputs.
3) Structure information & communication
Timely -- before decisions are made
Targeted
Media Selection -- fits the organization's norms
Unbiased
Context Appropriate
Respected Sources
4) Present information
5) Analyze and follow up on reactions
6) Repeat cycle from #1


Responsive Evaluation

Identify decision-makers
-- Who is in a decision-making role?
-- Why is this training program important to them?
-- What is their stake in it?
-- What values, biases, or experiences might influence their judgment about the training program?

Identify information needs of decision-makers
-- What questions do they have about the training program?
-- How are they going to use the information they receive?
-- What decisions might be forthcoming?

Systematically collect both quantitative and qualitative data
-- What kind of information will speak to decision-makers?

Translate data into meaningful information
-- What are the information needs of each group of stakeholders?
-- What data do you need to collect to create a meaningful story about the program?
-- How can you frame the information so that it matters to them?

Involve and inform decision-makers on a continuous basis
-- How can you involve decision-makers without taking up too much of their time?

Need to be able to (Eyler):
-- continuously monitor what's being done in a reasonably easy way
-- decide when & where to make a major investment in evaluation
-- align the evaluation function with the goals of the organization

Gordon (Training, August 1991, p. 19)
Kirkpatrick:
-- soft skills -- may not want to go beyond Level 2
-- everything doesn't work for everyone (Level 4)
-- tampering with personality if you try to force it (Level 3)
Robinson:
-- tie results (meeting a specific business problem) to training through needs analysis
Gilbert:
-- evaluation is only meaningful if it's conducted within the broader context of a coherent
performance improvement system
Accounting for HRD

ROI = program benefits / program costs
Use only when data is clear and definitive; arrive at a single number.
Cost / Benefit: Benefits - Costs = $ Saved
Less technical; easier to couch in qualified terms; provides evidence, not proof.

Profit Centers

Analyzing Costs
-- Analysis costs (< 20%)
-- Development costs (70%)
-- Delivery costs
-- Evaluation costs (< 10%)

Credibility and Acceptance of Negative Information (Easterby-Smith)

[Figure: amount of attitude change in the recipient of information (vertical axis) plotted against
degree of dissonance of the information provided (horizontal axis), with separate curves for a
high-credibility source and a low-credibility source.]

How do you alter values and beliefs? Create dissonance.

