TRAINING EVALUATION
Refers to the process of collecting the outcomes needed to determine whether the training program is effective.
EVALUATION DESIGN
Refers to the collection of information that will be used to determine the effectiveness of the training program.
TRAINING EFFECTIVENESS
Refers to the benefits that the company and the trainees receive from training.
TRAINING OUTCOMES (CRITERIA)
Refers to measures that the trainer and the company use to evaluate training programs.
FORMATIVE EVALUATION
Helps ensure that the training program is well organized and runs smoothly, and that trainees learn and are satisfied with the training program.
SUMMATIVE EVALUATION
Evaluation conducted to determine the extent to which trainees have changed as a result of participating in the training program.
To identify whether trainees have acquired the knowledge, skills, and abilities (KSAs) indicated in the training objectives, and to measure the monetary benefits the company receives from the program. Summative evaluation usually involves collecting quantitative data.
COGNITIVE
What is measured: Acquisition of knowledge; the degree to which trainees are familiar with the principles, facts, techniques, and procedures emphasized in training.
How measured: Pencil-and-paper tests, work samples.
Examples: Safety rules, electrical principles, steps in a performance appraisal.
SKILL-BASED
What is measured: Behavior and skills; the level of technical or motor skills.
How measured: Observation, work samples, ratings.
Examples: Jigsaw use, listening skills, coaching skills, airplane landings.
RESULTS
What is measured: The training program's payoff for the company.
Examples: Absenteeism, accidents, patents.
RETURN ON INVESTMENT (ROI)
What is measured: The economic value of training.
How measured: Identification and comparison of the program's costs and benefits.
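The comparison of costs and benefits can be illustrated with a short calculation. The sketch below uses hypothetical figures and the standard training ROI formula, ROI = (benefits - costs) / costs, expressed as a percentage.

```python
# Sketch of a training ROI calculation (hypothetical figures).
# ROI = (total benefits - total costs) / total costs, as a percentage.

def training_roi(total_benefits: float, total_costs: float) -> float:
    """Return the training program's ROI as a percentage of its costs."""
    if total_costs <= 0:
        raise ValueError("total_costs must be positive")
    return (total_benefits - total_costs) / total_costs * 100

# Hypothetical program: $50,000 in costs (design, delivery, trainee time)
# against $80,000 in benefits (e.g., fewer accidents, lower absenteeism).
costs = 50_000.0
benefits = 80_000.0
print(f"ROI: {training_roi(benefits, costs):.0f}%")  # ROI: 60%
```

A positive ROI means the program returned more than it cost; the hard part in practice is assigning credible dollar values to the benefits, not the arithmetic.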
CRITERIA FOR GOOD TRAINING OUTCOMES
Relevance: Outcomes should be related to the learned capabilities emphasized in the training program. Criterion contamination occurs when the outcome measure is affected by factors extraneous to training; criterion deficiency occurs when the measure fails to capture outcomes emphasized in the training objectives.
Reliability: The degree to which outcomes can be measured consistently over time.
Discrimination: The degree to which trainees' performance on the outcome reflects true differences in performance.
Practicality: The ease with which the outcome measures can be collected.
EVALUATION DESIGNS
Determine the confidence that can be placed in the results: that is, how sure a company can be that training is responsible for changes in evaluation outcomes, or that training failed to influence them.
Threats to Validity: Alternative Explanations for Evaluation Results
Threats to validity refer to factors that lead one to question either the believability of the study results (internal validity) or the extent to which the evaluation results are generalizable to other groups of trainees and situations (external validity).
Threats to Internal Validity: Company, Persons, Outcome Measures
Threats to External Validity: Reaction to pretest, Reaction to evaluation, Interaction of selection and training, Interaction of methods
METHODS TO CONTROL FOR THREATS TO VALIDITY
PRETESTS AND POSTTESTS
Pretraining measures establish a baseline before training; posttraining measures are outcomes collected after training.
COMPARISON GROUPS
A group of employees who participate in the evaluation but do not attend the training program. Using comparison groups helps rule out the possibility that changes found in the outcome measures are due to factors other than training.
RANDOM ASSIGNMENT
Assigning employees to the training or comparison group on the basis of chance. Helps ensure that the groups are similar in individual differences.
TYPES OF EVALUATION DESIGNS
POSTTEST ONLY
Only posttraining outcomes are collected. Can be strengthened by adding a comparison group. Appropriate when trainees can be expected to have similar levels of knowledge, behavior, or results outcomes prior to training.
PRETEST/POSTTEST
Both pretraining and posttraining outcomes are collected, but there is no comparison group. Used when a company wants to evaluate a training program but is uncomfortable excluding certain employees, or intends to train only a small group of employees.
PRETEST/POSTTEST WITH COMPARISON GROUP
Pretraining and posttraining outcomes are collected from both groups. If improvement is greater for the training group than for the comparison group, training is likely responsible for the change. Controls for most threats to validity.
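The logic of comparing improvement across the two groups can be sketched numerically. The scores below are hypothetical: the design credits training only with the improvement beyond what the comparison group shows (a difference-in-differences).

```python
# Pretest/posttest with comparison group, using hypothetical mean test scores.
# Training is credited only with the extra improvement the trained group
# shows over the untrained comparison group.

def training_effect(train_pre: float, train_post: float,
                    comp_pre: float, comp_post: float) -> float:
    """Difference-in-differences estimate of the training effect."""
    training_gain = train_post - train_pre
    comparison_gain = comp_post - comp_pre  # change due to non-training factors
    return training_gain - comparison_gain

# Hypothetical means: both groups improve, but the trained group improves more.
effect = training_effect(train_pre=60, train_post=80,
                         comp_pre=58, comp_post=63)
print(effect)  # 15: a 20-point gain minus the 5-point gain seen without training
```

If the comparison group had improved just as much as the training group, the estimated effect would be zero, which is exactly how this design rules out non-training explanations such as history or maturation.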
TIME SERIES
Training outcomes are collected at periodic intervals both before and after training. Can be improved by using a reversal (a period in which participants no longer receive the training intervention). Allows analysis of the stability of training outcomes over time. Frequently used to evaluate training programs that focus on improving readily observable outcomes, such as accident rates or absenteeism.
SOLOMON FOUR-GROUP DESIGN
Combines the pretest/posttest comparison group design and the posttest-only comparison group design. One training group and one comparison group are measured on the outcomes both before and after training; another training group and comparison group are measured only after training. Controls for most threats to internal and external validity.
CONSIDERATIONS IN CHOOSING AN EVALUATION DESIGN
Factors That Influence the Type of Evaluation Design
Change potential: Can the program be modified?
Importance: Does ineffective training affect customer service, product development, or relationships between employees?
Scale: How many trainees are involved?
Purpose of training: Is training conducted for learning, results, or both?
Organization culture: Is demonstrating results part of company norms and expectations?
Expertise: Can a complex study be analyzed?
Cost: Is evaluation too expensive?
Time frame: When is the information needed?