
TRAINING EVALUATION

TRAINING EVALUATION
Refers to the process of collecting the outcomes needed to determine if training is effective.

EVALUATION DESIGN
Refers to the collection of information that will be used to determine the effectiveness of the training program.

TRAINING EFFECTIVENESS
Refers to the benefits that the company and the trainees receive from training.

TRAINING OUTCOMES OR CRITERIA


Refer to measures that the trainer and the company use to evaluate training programs.

CATEGORIES OF TRAINING EVALUATION


FORMATIVE EVALUATION
Definition: Evaluation conducted to improve the training process.
Objectives: Helps ensure that the training program is well-organized and runs smoothly, and that trainees learn and are satisfied with the training program.
Type of data collected: Qualitative
Methods of data collection: Questionnaires, interviews, or focus groups

SUMMATIVE EVALUATION
Definition: Evaluation conducted to determine the extent to which trainees have changed as a result of participating in the training program.
Objectives: To identify if the trainees have acquired the KSAs indicated in the training objectives; to measure the monetary benefits that the company receives from the program.
Type of data collected: Quantitative
Methods of data collection: Tests, ratings of behavior, objective measures of performance

REASONS FOR CONDUCTING TRAINING EVALUATION

To identify the program's strengths and weaknesses.
To assess whether the content, organization, and administration of the program contribute to learning and to the use of training content on the job.
To identify which trainees benefited most or least from the program.

REASONS FOR CONDUCTING TRAINING EVALUATION

To gather data to assist in marketing training programs.
To determine the financial benefits and costs of the programs.
To compare the costs and benefits of training versus non-training investments.
To compare the costs and benefits of different training programs to choose the best program.

OVERVIEW OF EVALUATION PROCESS


Conduct a Needs Analysis
Develop Measurable Learning Outcomes
Develop Outcome Measures
Choose an Evaluation Strategy
Plan and Execute the Evaluation

OUTCOMES USED IN EVALUATION OF TRAINING PROGRAMS


Kirkpatrick's Four-Level Framework of Evaluation Criteria

Level 1 - Reactions: Trainee satisfaction
Level 2 - Learning: Acquisition of knowledge, skills, attitudes, behavior
Level 3 - Behavior: Improvement of behavior on the job
Level 4 - Results: Business results achieved by trainees

OUTCOMES USED IN EVALUATION OF TRAINING PROGRAMS


Cognitive Outcomes
Skill-Based Outcomes
Affective Outcomes
Results
Return on Investment

OUTCOMES USED IN EVALUATION OF TRAINING PROGRAMS


COGNITIVE OUTCOMES
Purpose: To determine the degree to which trainees are familiar with the principles, facts, techniques, and procedures emphasized in training.
Examples: Safety rules, electrical principles, steps in appraisal
How measured: Pencil-and-paper tests, work samples
What is measured: Acquisition of knowledge

SKILL-BASED OUTCOMES
Purpose: To assess the level of technical or motor skills and behavior.
Examples: Jigsaw puzzle, listening skills, coaching skills, airplane landings
How measured: Observation, work samples, ratings
What is measured: Behavior, skills

OUTCOMES USED IN EVALUATION OF TRAINING PROGRAMS

AFFECTIVE OUTCOMES
Purpose: To identify trainees' reactions to the program and evaluate the instructor/trainer
Examples: Satisfaction with training, beliefs regarding other cultures
How measured: Interviews, focus groups, attitude surveys
What is measured: Motivation, reaction to program, attitudes

RESULTS
Purpose: To determine the training program's payoff for the company
Examples: Absenteeism, accidents, patents
How measured: Observation, data from the company information system or performance records
What is measured: Company payoff

RETURN ON INVESTMENT (ROI)
Purpose: To compare the costs and benefits of training
Examples: Dollars/pesos
How measured: Identification and comparison of the program's costs and benefits
What is measured: Economic value of training
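The ROI outcome is simple arithmetic: net program benefits divided by program costs. A minimal sketch with hypothetical figures (the cost and benefit numbers below are invented for illustration, not taken from the text):

```python
# Hypothetical figures: costs and benefits of a training program,
# both already expressed in currency (dollars/pesos).
program_costs = 100_000     # design, delivery, materials, trainees' time
program_benefits = 140_000  # e.g., value of reduced accidents and absenteeism

# ROI as a percentage: net benefit relative to what the program cost.
roi_percent = (program_benefits - program_costs) / program_costs * 100

print(f"ROI = {roi_percent:.1f}%")  # ROI = 40.0%
```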

DETERMINING WHETHER OUTCOMES ARE GOOD


Good training outcomes need to be:
Relevant

Reliable
Discriminate
Practical

DETERMINING WHETHER OUTCOMES ARE GOOD


RELEVANCE
Criterion relevance: the extent to which training outcomes are related to the learned capabilities emphasized in the training program.
Criterion contamination: the extent to which training outcomes measure inappropriate capabilities or are affected by extraneous conditions.
Criterion deficiency: the failure to measure training outcomes that were emphasized in the training objectives.

DETERMINING WHETHER OUTCOMES ARE GOOD


Criterion deficiency, relevance, and contamination:

[Diagram: two overlapping circles, "Outcomes Identified by Needs Assessment and Included in Training Objectives" and "Outcomes Measured in Evaluation." The overlap is relevance (outcomes related to training objectives); outcomes measured but not tied to the objectives are contamination; objective outcomes left unmeasured are deficiency.]

DETERMINING WHETHER OUTCOMES ARE GOOD


RELIABILITY: the degree to which outcomes can be measured consistently over time.
DISCRIMINATION: the degree to which trainees' performances on the outcome actually reflect true differences in performance.
PRACTICALITY: the ease with which the outcome measures can be collected.
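Reliability as defined above is often checked by giving the same measure twice and correlating the two sets of scores (test-retest). A minimal sketch using hypothetical trainee scores and only the Python standard library:

```python
import statistics

# Hypothetical scores of six trainees on the same test, given two weeks apart.
test1 = [70, 85, 60, 90, 75, 80]
test2 = [72, 83, 62, 91, 74, 78]

def pearson_r(x, y):
    """Pearson correlation: a common test-retest reliability estimate."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(test1, test2)
print(f"Test-retest reliability r = {r:.2f}")  # values near 1.0 indicate a consistent measure
```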

EVALUATION DESIGNS
Determines the confidence that can be placed in the results; that is, how sure a company can be that the training is responsible for changes in evaluation outcomes or that training failed to influence the outcomes.

EVALUATION DESIGNS
Threats to Validity: Alternative Explanation for Evaluation Results
Threats to validity refer to factors that lead one to question either:
The believability of the study results (internal validity), or
The extent to which the evaluation results are generalizable to other groups of trainees and situations (external validity)

EVALUATION DESIGNS
Threats to Validity

Threats to internal validity: Company, persons, outcome measures
Threats to external validity: Reaction to pretest, reaction to evaluation, interaction of selection and training, interaction of methods

EVALUATION DESIGNS
METHODS TO CONTROL FOR THREATS TO VALIDITY

PRETESTS AND POSTTESTS
Pretraining measures establish a baseline before training; posttraining measures capture outcomes after the training.

COMPARISON GROUPS
A group of employees who participate in the evaluation but do not attend the training program. Using comparison groups helps rule out the possibility that changes found in the outcome measures are due to factors other than training.

EVALUATION DESIGNS
METHODS TO CONTROL FOR THREATS TO VALIDITY

RANDOM ASSIGNMENT
Assigning trainees to the training or comparison group on the basis of chance. Helps ensure that the groups are similar in individual differences.

EVALUATION DESIGNS
TYPES OF EVALUATION DESIGNS

POSTTEST ONLY
Only posttraining outcomes are collected. Can be strengthened by adding a comparison group. Appropriate when trainees can be expected to have similar levels of knowledge, behavior, or results prior to training.

EVALUATION DESIGNS
TYPES OF EVALUATION DESIGNS

PRETEST/POSTTEST
Both pretraining and posttraining outcomes are collected; there is no comparison group. Used when a company wants to evaluate a training program but is uncomfortable with excluding certain employees, or intends to train only a small group of employees.

EVALUATION DESIGNS
TYPES OF EVALUATION DESIGNS

PRETEST/POSTTEST WITH COMPARISON GROUP
Pretraining and posttraining outcomes are collected from both groups. If improvement is greater for the training group than for the comparison group, this provides evidence that training is responsible for the change. Controls for most threats to validity.
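The logic of the pretest/posttest comparison-group design can be shown as simple arithmetic: compare the gain of the trained group with the gain of the comparison group. A minimal sketch with hypothetical mean scores:

```python
# Hypothetical mean knowledge scores (0-100 scale) before and after training.
training_pre, training_post = 62.0, 81.0      # group that attended training
comparison_pre, comparison_post = 61.0, 66.0  # group that did not

training_gain = training_post - training_pre        # 19.0
comparison_gain = comparison_post - comparison_pre  # 5.0

# The comparison group's gain reflects factors other than training
# (e.g., on-the-job experience); subtracting it isolates the
# improvement attributable to the training program itself.
training_effect = training_gain - comparison_gain

print(f"Improvement attributable to training: {training_effect:.1f} points")  # 14.0 points
```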

EVALUATION DESIGNS
TYPES OF EVALUATION DESIGNS

TIME SERIES
Training outcomes are collected at periodic intervals both before and after training. Can be improved by using reversal (a time period in which participants no longer receive the training intervention). Allows analysis of the stability of training outcomes over time. Frequently used to evaluate training programs that focus on improving readily observable outcomes.

EVALUATION DESIGNS
TYPES OF EVALUATION DESIGNS

SOLOMON FOUR-GROUP DESIGN
Combines the pretest/posttest comparison group design and the posttest-only comparison group design. One training group and one comparison group are measured on the outcomes both before and after training; another training group and comparison group are measured only after training. Controls for most threats to internal and external validity.

EVALUATION DESIGNS
CONSIDERATIONS IN CHOOSING AN EVALUATION DESIGN

Factors That Influence the Type of Evaluation Design

Change potential: Can the program be modified?
Importance: Does ineffective training affect customer service, product development, or relationships between employees?
Scale: How many trainees are involved?
Purpose of training: Is training conducted for learning, results, or both?
Organization culture: Is demonstrating results part of company norms and expectations?
Expertise: Can a complex study be analyzed?
Cost: Is evaluation too expensive?
Time frame: When do we need the information?
