Purposes and Dimension Classification
by Sung Heum Lee, PhD and James A. Pershing, PhD
The evaluation scheme that many corporate training programs use is Kirkpatrick's four levels of evaluation: reaction, learning, behavior, and results (Kirkpatrick, 1994). However, surveys of the evaluation of corporate training programs show limited application of the levels other than at the reaction level (Alliger & Janak, 1989; American Society for Training and Development, 1996; Brinkerhoff, 1989; Dixon, 1990; Industry Report, 1996; Parker, 1986; Plant & Ryan, 1994). Training participants' reaction is the most commonly used criterion for determining the effectiveness of corporate training programs. Most corporate trainers evaluate their training programs by using a simple
end-of-course reaction form, often referred to as a "happy sheet" (Plant & Ryan, 1994), a "smile or whoopie sheet" (Robinson & Robinson, 1989), an "end-of-event questionnaire" (Bramley, 1996), or a "reactionnaire" (Newby, 1992). Based on a recent survey of corporate training programs using Kirkpatrick's four levels of evaluation (American Society for Training and Development, 1996), only 4.3% of the organizations surveyed measured results, 13.7% measured behavior change, and 27.9% measured learning, while 88.9% reported using participant reactionnaires. These findings indicate that the majority of organizations evaluate the reactions and opinions of their training participants immediately upon completion of training programs.
Reaction Evaluation of Training Programs

Purposes

The main purpose of reaction evaluation is to enhance the quality of training programs, which in turn leads to improved performance. The ultimate objective is to make training programs more efficient and effective for organizational performance improvement. Reaction evaluations are a type of formative evaluation when the results are used for program modification and the redesign of content, course materials, and presentations (Antheil & Casper, 1986; Robinson & Robinson, 1989). Generally, they collect information that
is specific enough to help make revisions and improvements in the training program. Reaction evaluations provide program designers with insights about participants' degree of satisfaction with a program's design and implementation. This is crucial information. If participants are not satisfied with the training experience, they may not use what they have learned and will probably advise others not to attend the training program.

Reaction evaluations can also be summative in nature, used to determine the value, effectiveness, or efficiency of a training program (Smith & Brandenburg, 1991) and to make decisions concerning program continuation, termination, expansion, and improvement (Grove & Ostroff, 1991). Summative evaluations provide program decisionmakers and potential customers with judgments about a program's worth or merit (Worthen, Sanders, & Fitzpatrick, 1997). Consequently, in such cases, the goal of reaction evaluation is to determine the value, merit, or impact of the training activity.

In some organizations the primary purpose of reaction evaluation focuses on the course instructor or facilitator (Phillips, 1996). However, the responsibility of an evaluator of training programs is to evaluate a training program, not instructors or facilitators (Cangelosi, 1991). Instructors are far more likely to accept and make constructive feedback about what they do rather than who they are. Therefore, the reaction evaluation of a training program should relate to an instructor's instructional strategies, not to the instructor alone.

The evaluation of training programs can thus play either a formative purpose, to improve the program, or a summative purpose, to decide whether a program should be continued or terminated. Both formative and summative evaluations are essential because decisions are necessary during the developmental stages of a training program to improve it and, when it has been stabilized, to judge its final worth or determine its future. Although these two distinctive roles call for different uses of the evaluation results, there are few differences in how trainers collect and analyze the data. The main difference is not in the information, but in how, when, and by whom it is used (Beer & Bloomer, 1986). Timing and use determine whether an evaluation is formative or summative. Figure 1 summarizes the basic differences between formative and summative evaluation.

Figure 1. Basic Differences Between Formative and Summative Evaluation.

Purpose
• Formative: To determine the program's worth, value, merit, effectiveness, or quality
• Summative: To determine the program's worth, value, merit, effectiveness, or quality

Use
• Formative: To improve the training program and correct errors
• Summative: To make decisions about a program's future or adoption

Focus
• Formative: Program process
• Summative: Program impact

Tools
• Formative: Tests, questionnaires, interviews, observations
• Summative: Post-tests, questionnaires, interviews, observations

Time
• Formative: During the training program
• Summative: After the training program

Audience
• Formative: Program designer or team
• Summative: Stakeholders or potential consumers

User
• Formative: Primarily internal evaluators, supported by external evaluators
• Summative: External evaluators, supported by internal evaluators in unique cases

Major Characteristics
• Formative: Diagnostic for program modification, revision, or redesign; timing and control for program improvement
• Summative: Judgment for program continuation, termination, expansion, or adoption; convincing information for decisionmaking

In writing about level 1 evaluation, Kirkpatrick (1994) presents a few sample reaction forms but does not suggest guidelines for selecting reaction dimensions. He indicates that ideal reaction evaluations provide the maximum amount of information and require the minimum amount of time to complete. Reaction questionnaires should be designed to supply valid and reliable information to program evaluators. Identifying and selecting reaction evaluation dimensions that are valid and applicable presents a formidable problem for corporate training evaluation practitioners (Mattoon, 1996). This calls for the careful selection of dimensions for reaction evaluations and for the rigorous design and development of reactionnaires. Guidelines for reaction dimensions can help practitioners design useful reaction evaluations for program modification and improvement.

Dimensions to Evaluate

Any aspect of a training program can be evaluated: the instruction and the impact of the training program. The evaluation of a training program is, of course, a judgment about the quality, value, effectiveness, or efficiency of the program by trainees, trainers, or decisionmakers (Payne, 1994; Keller, 1996). There are a number of different dimensions for training reaction evaluations in the literature; however, no single source presents a comprehensive set of dimensions.
Robinson and Robinson (1989) indicate that reaction evaluations should include some questions that are specific to the particular program being evaluated, allowing for more precise information about a program's content and process. The areas of feedback used on reactionnaires should be directly tied to the nature and scope of the training program and the purposes of the evaluation. Some reaction forms might be very simple, while others might be detailed and require a considerable amount of time to complete.

Phillips (1996) enumerates the most common dimensions of reaction evaluations as being program content, program materials, program coordinator/facilitator, program relevance to the job/work area, facilities/accommodations, planned improvements, and overall evaluation. Sanderson (1995) advocates dimensions such as the participants' opinion of the precourse briefing, administrative details, the communication medium, content, duration, level of difficulty, methods, effectiveness of the instructor(s), program value, and the learner's belief as to the overall effectiveness of the event. Other areas incorporate logistical concerns, such as the size and comfort of the room, and tests or other performance measures (Wart, Cayer, & Cook, 1993). Reactionnaires frequently close with an overall evaluation and recommendation for program improvement (Basarab & Root, 1992). Forsyth et al. (1995) suggest some guidelines for selecting dimensions of reaction evaluation. They also suggest that the reaction evaluation of a training program should not only focus on the program itself, but ask more general questions about whether the training participants feel that they will be able to transfer what they have learned to the work environment and whether the organization is ready to support new skills.

Based on the results of an extensive literature review on reaction evaluations, comprehensive dimensions for reaction evaluations can be summarized as follows:
• Program objective(s)/content
• Program materials
• Delivery methods/technologies
• Instructor/facilitator
• Instructional activities
• Program time/length
• Training environment
• Planned action/transfer expectation
• Logistics/administration
• Overall evaluation
• Recommendations for program improvement

Program Objective(s)/Content.

The selection of training objective(s)/content depends on the purposes of the training program and is largely a judgment procedure (Tracey, 1992). Designing a training program starts with these factors: program designers select procedures, techniques, resources, and methods that are relevant to the training objectives. The most important concept associated with program content is that of a performance objective. A performance objective is a detailed description of what trainees will be able to do when they complete a training program. The content of a training program should be identified with recognition of some significant variables, such as the objectives, the level of difficulty, and relevance to the job or to intended changes.

Typically, reactionnaires inquire about participants' reactions to and interest in the usefulness of the program content. The appropriateness, level, and timeliness of the content presented can be judged by the participants' reactions, as can whether the content was organized into manageable amounts and whether the sequence was from simple to complex and from concrete to abstract. In addition, developers should make every effort to avoid unnecessary duplication of content, gaps in content, and any conflicts in concepts and terminologies used. During the design and development stages of a training program, discrepancies are bound to occur, and reaction evaluations can identify these weaknesses, helping to improve future programs.

Sample reaction questions for objective(s)/content dimensions are as follows:
• Did the program content meet the stated objectives?
• Were the program topics effectively sequenced?
• Was the program content up to date?
• Was the course content at an appropriate level of difficulty?
• Was the course content practical?

Program Materials.

Program materials are the objects the trainer and instructor use in the training environment. Instructional materials include published and unpublished print materials, such as textbooks, laboratory manuals, class handouts, and individualized instruction packages, as well as technological, manipulable, and participatory materials. The purpose of evaluating the training materials is to determine their effectiveness, efficiency, ease of use, adaptability, quality, and value (Hellebrandt & Russell, 1993; Schouborg, 1993). During the design and development stages of training program materials, discrepancies are bound to occur. The results of material evaluation can be used to revise the training materials and to make the materials as effective as possible (Dick & Carey, 1996).

The reaction questions should consider how well training materials (tutorial guides, manuals, handouts, or textbooks) performed for participants. Answers can verify the consistency of the materials with the program objectives. Considerations include how well the training materials matched the real world of the trainee and whether the training materials were presented in a way that was both interesting and stimulating (Forsyth, Jolliffe, & Stevens, 1995). Reaction evaluation of program materials should also include gathering data regarding the relevance of reading materials, written assignments, and the quality of any performance tests or examinations.

Sample questions for program materials are as follows:
• Were the materials consistent with the training objectives?
• Were the program materials of high quality?
• Was the level of difficulty of the materials appropriate?
• Was the content of the handouts easy to understand?
Delivery Methods/Technologies.

The instructional designer determines method/technology options to achieve the objectives of a training program. The designer can choose from delivery methods such as lecture, discussion, demonstration, case study, role play, simulation, game, drill and practice, laboratory, discovery, problemsolving, programmed instruction, self-instruction, tutorial, cooperative learning group, and technology-based instruction (Davies, 1981; Seels & Glasgow, 1990).

Developers should consider several factors in selecting delivery methods/technologies that will help trainees reach objectives. They must identify trainee characteristics, training objectives, the training situation, and instructional constraints before selecting methods or technologies. Choices of delivery methods/technologies are based on selection criteria such as whether the delivery methods are appropriate for the trainee, the training objectives, and the training situation. Determining whether the delivery methods will help trainees reach the stated objectives is an important issue in selecting appropriate delivery methods/technologies for different types of objectives (Dean, 1994). After using delivery methods such as audio, visual, or multimedia for a training program, the evaluator can evaluate the appropriateness and helpfulness of the delivery methods in helping learners understand the content of a training program.

Sample questions for the evaluation dimension of delivery methods/technologies are as follows:
• Were the audio learning aids helpful?
• Were the presentation technologies used in class effective?
• Were the visual aids helpful?

Instructor/Facilitator.

As a manager of the training situation, the instructor/facilitator is one of the key components of an effective training program. Training action begins with this person. The instructors must possess the required technical and pedagogical knowledge, skills, and attitude and be successful in using the strategies, materials, and equipment selected for a training program (Tracey, 1992). In this sense, instructional staff variables can be one of the more important factors in attempts to account for variance in program outcomes and to distinguish a program's success.

Performance standards for instructors are the backbone of instructor excellence. Based on the performance standards for instructors (Powers, 1992), there are 60 standards covering preparation, content and sequencing, training aids, platform skills, questioning techniques, gaining participation, participant evaluation, and course evaluation. Several of these standards can be assessed using reactionnaires. Evaluation questions revolve around the instructor's ability to interact with the learners and his or her ability to deliver the training content in a meaningful way. Consideration should be given to whether the instructor encouraged active participation through the use of examples and illustrations, explained concepts, and enthusiastically answered questions (Forsyth, Jolliffe, & Stevens, 1995). Reaction questions should also cover how the instructors interpreted and used the training materials and whether they presented materials in a way that was stimulating, interesting, and helpful.

Sample questions about the instructor/facilitator dimension for reaction evaluation are as follows:
• Did the instructor present content clearly?
• Was the instructor responsive to participants' questions?
• Was the instructor well-prepared?

Instructional Activities.

An instructional activity is a set of structured experiences designed to help trainees achieve one or more training objectives. Various instructional activities can take place in a classroom, ranging from listening to the instructor, to group-based activities, to multimedia-mediated instruction. Classroom instruction has two distinct attributes: the teaching of groups of trainees and the physical separation of the classroom from the workplace (Yelon, 1992; Heinich et al., 1999). Group teaching distinguishes classroom instruction from individualized instruction; physical separation from the workplace distinguishes classroom instruction from on-the-job training.

The selection of instructional activities for a training program has significant implications for course management strategies, particularly for the use of class time (Dick & Carey, 1996). Another important consideration is the degree of trainee involvement in the training activity. To be effective in using instructional activities to enhance job performance, designers adhere to many instructional principles derived from learning and instructional theories (Yelon, 1992). The designers of training programs strive to be effective in creating each element of classroom instruction, as well as each aspect of the total instructional activity, to ensure content understanding and performance change.

To evaluate the appropriateness and helpfulness of instructional activities, the evaluation might ask questions such as the following:
• Were the course exercises relevant to the program objectives?
• Were the group discussions helpful to participants in exchanging ideas with each other?
• Was the homework helpful in understanding the course content?
Program Time/Length.

Time-on-task and the efficient use of time are important in planning a training session. Too little time or too much time can negatively affect training effectiveness. Using this dimension, the evaluators of a training program can assess the length of sessions and/or the entire training program and use the results for schedule changes and considerations of overall program length. To improve future training programs, developers can use program time/length as an evaluation dimension.

Sample questions for the program time/length dimension are as follows:
• Was the amount of time in the program sufficient?
• Was the length of the program appropriate for the program objective(s)?
• Was there enough time for practice of course content?

Training Environment.

Environmental psychologists recognize the environment as a persistent and powerful influence on human learning and behavior. Training participants' perceptions of classroom environments can have a significant influence on both cognitive and affective learning outcomes (Haertel & Walberg, 1988; McVey, 1996). The facilities of the learning environment include the furnishings, the conditions (such as heating, lighting, noise, acoustics, and room temperature), the kinds of activities in which people are engaged, patterns of work, and the location of the place(s) where learning occurs (Tessmer & Harris, 1992). The place could be a classroom, computer lab, office, living room, or car.

When evaluators are considering questions in this area, they need to ask specific questions regarding learning space, seating arrangements, accessibility, and overall flexibility in terms of training event demands. Questions would also focus on understanding and awareness of ergonomics as applied to the logistics and physical adequacy of the training environment (Faerman & Ban, 1993; Peterson & Bickman, 1988).

Sample questions for the training environment dimension are as follows:
• Was the training environment appropriate for the learning?
• Were the environmental conditions (comfort, room temperature, noise, visibility) conducive to learning?
• Did the arrangements (food, sleeping accommodation, study facility) meet your needs?
• Was there enough workspace for class activities?

Planned Action/Transfer Expectation.

Positive transfer is highly contingent on factors in the trainee's work environment. Broad and Newstrom (1992) report that there is a positive relationship between favorable organizational climate and management support of training and the participants' ability to apply classroom learning to the work environment. Measuring participants' perceptions regarding the likelihood of their being able to transfer training content to the work environment may therefore be particularly important (Baldwin & Ford, 1988). If participants have to report to their managers about their training experiences and their intended transfer actions, it may be more likely that they will implement what they learn (Sanderson, 1995).

To help the participants implement the results of the program on the job, the program evaluator could ask participants about their plans and expectations for applying the content of the program when they return to their jobs. To find and remove the barriers for planned action and transfer of training content, reaction evaluation should include questions on planned actions and anticipated organizational barriers.

Sample questions regarding the planned action/transfer expectation dimension for reactionnaires are as follows:
• Was the training content relevant to your job?
• Do you expect the organization to support your use of the skills learned in this program?
• What factors will encourage job transfer of the training content?
• What factors will inhibit job transfer of the training content?

Logistics/Administration.

The logistics and administrative sides of program planning are important. The quality of large training programs depends on how well the objectives and content of the program are marketed, how well pretraining enrollments are executed, and how well the program is managed once underway. An understanding of logistical and administrative support undergirds the effective team-building effort that is necessary in conducting successful programs.

To ensure quality programs for performance improvement, reaction evaluations can include questions about operations, arrangements, travel, social and special events such as registration procedures, and extracurricular activities associated with the program. These questions function as a type of administrative audit that assesses administrative aspects such as personnel practices, division of duties and responsibilities, and program procedures and policies (Miringoff, 1980).

Sample questions for this dimension of reaction evaluations are as follows:
• Was the scheduling for this course efficiently administered?
• Was the process of registration for this course easy?
• Was the assistance with extracurricular activities helpful?
Overall Evaluation.

This dimension of reaction evaluation is used to measure the participants' overall reactions about the usefulness of the course content, the effectiveness of the instructor, facilitator, or coordinator, the adequacy of the learning environment, and planned action/expectation for job transfer. Reaction evaluation, recognized as "customer satisfaction," is being used in the best-practice companies to make training more effective in meeting customer requirements (American Society for Training and Development, 1996).

Sample questions of this dimension for reaction evaluation are as follows:
• Was the overall instructional environment conducive to learning?
• Was there enough time to cover the program content?
• Did the training program meet your intended needs?
• Would you recommend this training program to others?

Recommendations for Program Improvement.

Each participant's reactions, attitudes, or feelings about a specific training program are complex. They are the results of many factors, some being transitory in nature, such as the training content and methods, the instructor, other trainees, the training context, and the trainee's perceived success at achieving some of the goals of training (Patrick, 1992). Participant reactions may vary throughout the training program. Therefore, evaluators must decide not only what aspects of these reactions are of interest, but also when they should be assessed.

It is a common mistake for a training department to create one reaction evaluation for all its training programs. At least a portion of each evaluation should be specific to the program it is designed to evaluate (Robinson & Robinson, 1989). When the evaluator wants more spontaneous feedback about participants' attitudes toward the training program, it is best to use a series of open-ended questions that allow the participants to express their own thoughts without being forced into a set of choices (Keller, 1996). This approach can produce very helpful information for program improvement and decisionmaking for future training, but it takes longer to summarize the results.

Sample questions for recommendations for program improvement are as follows:
• What would you suggest to improve the training program?
• Please make any comments for changes that would improve the program.

Conclusion

A total of 11 dimensions and their purposes are summarized in Figure 2. A reaction evaluation will, as specified in the purpose of reaction evaluation, be linked with the improvement of the training program and provide an open forum for the participants to share their opinions. The idea of selecting dimensions for the reaction evaluation of training programs can also be applied to evaluating other interventions for improving human performance. With appropriate dimensions, reaction evaluation can be a more useful and valuable tool in the evaluation of training programs and performance improvement programs in general.

Figure 2. Dimensions of Reaction Evaluation.

Program Objectives/Content: To evaluate the program objectives with participants' expectations and the appropriateness, level, and timeliness of the program content.
Program Materials: To determine the effectiveness, value, and usefulness of written materials and other aids, and the quality of materials for the training program.
Delivery Methods/Technologies: To judge the appropriateness and effectiveness of delivery methods, including media/technologies.
Instructor/Facilitator: To rate the ability, preparation, and effectiveness of the trainer or facilitator in leading the program.
Instructional Activities: To evaluate the appropriateness and helpfulness of in- and/or out-of-class activities.
Program Time/Length: To assess the length of sessions and/or the entire training program for schedule change and considerations of program length.
Training Environment: To evaluate the adequacy of the physical training environment, including classroom, dining room, lodging, training location, and leisure facilities.
Planned Action/Transfer Expectation: To evaluate the participants' plans/expectations and anticipated barriers for applying the content of the training program on the job.
Logistics/Administration: To evaluate the smoothness and effectiveness of the scheduling, registration, and other logistical and administrative matters.
Overall Evaluation: To determine overall participant satisfaction and feelings about the training program.
Recommendations for Program Improvement: To receive suggestions/recommendations for improving similar or future training programs.

References

Alliger, G.M., & Janak, E.A. (1989). "Kirkpatrick's levels of training criteria: Thirty years later." Personnel Psychology, 42(2), 331-342.

American Society for Training and Development. (1996). The 1996 American Society for Training and Development report on trends that affect corporate learning & performance (2nd ed.). Alexandria, VA: Author.
"Sumrnative evaluation in training and development. New York: HarperCollins.). (1989). Evaluating training and educational programs: A review of the literature. & Stevens. & Bickman. New York: Macmillan. New York: McGraw-Hill.H..T.." New Directions for Program Evaluation. Newby. (1994). Haertel. MA: Kluwer Academic. Reading. "Comprehensive evaluation model: A tool for the evaluation of nontraditional educational programs. & Newstrom. & Ford. 53-65. (1992).S.J. Instructional media and technologies for learning (6th ed. "Evaluation. Davies. Brooks. 45-61. Transfer of training: Action-packed strategies to ensure high payoff from training investments. NJ: Prentice-Hall.). TX: Gulf. & Casper. San technique. (1990).N. I. (1999). SEPTEMBER 1999 . Instructional McGraw-Hill.The 1996 American Society for Training and Development report on trends that affect corporate learning &.C.. J. (1988). The ASTD training and development handbook (4th ed." New Directions for Program Evaluation. Bramley.. D. J. G. A.S. M. Miringoff.. Developing human resources.W.L. Hellebrandt. --.A. (1991).. "Confirmative evaluation of instructional materials and learners." In R..M..H. Keller. Evaluating diversity training: 17 readyto-use tools. S. Training: Research and practice. & Bloomer. A.K.A. The training evaluation process: A practical approach to evaluating corporate training programs. 55-64. 129-137. San Diego: Pfeiffer. Evaluating training effectiveness: Benchmarking your training activity against best practice (2nd ed. Evaluating courses: Practical strategies for teachers. J. S. 5-20. TX: Air Force Material Command. & Smaldino. (1994)." Educational Evaluation and Policy Analysis. Management in human service organizations. 41(1). "The relationship between trainee responses on participant reaction forms and post-test scores. Wexley & J.. (1981). The systematic instruction (4th ed. (1992). Kirkpatrick. K. Training. 83-92." Journal of Industrial Teacher Education.D. London: McGraw-Hill. (1997). "Trainee satisfaction and training impact: Issues in training evaluation. 44. Hinrichs (Eds. Designing educational project and program evaluations: A practical overview based on research and experience. D.K. MA: Addison-Wesley. D. (1993). A. D. "Industry report: Who's learning what?" (1996). 5-185-5-220. Upper Saddle River. 22-27. B. N. 63-105. Phillips. W. J.M.). London: Kogan Page. "Transfer of training: A review and directions for future research.E. "Ergonomics and the learning environment. Brinkerhoff. VA: Author." Performance &Instruction. I. I. (1994). Dick. (1996). Faerman. Basarab. Baldwin. "Using evaluation to transform training. DC: The Bureau of National Affairs. 1045-1104.). (1996). Jonassen [Ed." Public Productivity &. J.)." New Directions for Program Evaluation. MA: Kluwer Academic. G. J. McVey.Management Review. (1988). New York: Simon & Schuster Macmillan. "Assessing social-psychological classroom environments in program evaluation. & Walberg. Designing instruction for adult learners.L.D. (1980). (1986). handbook. 29-55. 335-345.R.J. classroom instruction. T.). "Program evaluation. (1992). design of Parker. & Carey. Russell. (1995). Malabar. 40. (1988). FL: Krieger." Innovative Higher Education. Jolliffe. (1986). San Francisco: Berrett-Koehler.. Broad. 23(2). Molenda. Antheil. 33(10). (1996). G.A. Boston.L.D. Forsyth. L. Evaluating training programs: The four levels. 299-314. Beer." Human Resources Development Quarterly. 11(1). R. L. J.J. RO. J.. Washington. P. 38 Performance Improvement. 8(4). C. Patrick. (1996). 
Payne." In K. Accountability in human resource management: Techniques for evaluating the human resource function and measuring its bottom-line contribution.L.). (1993). M. (1996).). D. Training evaluation Diego: Pfeiffer. 1(2).L. J..G. J. (1986). J. Alexandria. New York: Dean. Grove.J. (1992). (1996). 16(3).K.. & Root. 294-312.F. "Program personnel: The missing ingredient in describing the program environment. Mattoon. D. Boston. M. H. lecturers and trainers. 40. & Russell. "Levels of evaluation. (1992). v. & Ostroff. Cangelosi." Personnel Psychology.C." In D.. Craig (Ed. Evaluating New York: Longman. Peterson. London: Academic. C. Handbook of training evaluation and mea- Dixon. Houston. Handbook of research for educational communications and technology.performance (2nd ed. Heinich. & Ban. ---. 32(6). (1991).
Smith Research Center. (812) 855-8545. B. (1995). New York: Longman. associations the American instruction/training. Handbook of human performance technology: A comprehensive guide for analyzing and solving performance problems in organizations.pn? • Check Out the ISPI Bulletin Boards at WWW.surement methods: Proven models and methods for evaluating any HRD program (3rd ed. Worthen." In H.). (1993). J. Sanderson. Sanders. & Ryan. Sung Heum Lee. He has presented such as the Association Society for Training & Technology. J. Houston. New York: AMACOM.. B." In H. Pershing. computer-based several topics in professional Educational Performance Communications Improvement. training evaluation. Schouborg. (1992).R. Amherst. N. training evaluation. TX: Gulf. New York: Pergamon. He may be reached at the Office of Education and Training Resources. needs analysis. or fax: James A. Analyzing the instructional setting: Environmental analysis. & Sanders. Training for impact: How to link training to business needs and measure the results. (1990). 27-30. Designing training and development systems (3rd ed. & Brandenburg. is an Associate Professor in the Department of Instructional Systems Technology and Director of Education and Training Resources at Indiana University He teaches courses and conducts research in the areas of performance technology. Wart.e. B. Yelon. (1997). Pershing was selected to replace Martha Dean as the new editor of Performance Improvement. Seels.R.edu. Smith.R (1987). 113-14. RJ..D.E. "Classroom instruction. (1993).). Columbus.D. 4(2). 2805 EastTenth Street. San Francisco: Jossey-Bass.lspl. Educational evaluation: Alternative approaches and practical guidelines." Performance Improvement Quarterly. OH: Merrill. G. & Cook. He holds a PhD in Instructional Systems Technology from Indiana University. "Who is evaluating training?" Journal of European Industrial Training. Truelove (Ed.). IN 47405-1006. J. Exercises in instructional design. the Worthen.J.J. or fax: (812) 339-8792. M. W. This article was accepted for publication before James A. B. New York: Longman. M.R (1990).V. Got a Performance Qu.R. Stolovitch & E.org Performance Improvement. Robinson.). Oxford: Blackwell. Bloomington. the International Society for Development. & Robinson. for and and theory of instructional and performance technology. Wright Education Building #2230. "Objectives and evaluation. 383-411. D. Plant.). J. Keeps (Eds. He may be reached at Wendell W. Haertel (Eds. B. San Francisco: Jossey-Bass. D. Flex: A flexible tool for continuously improving your evaluation of training effectiveness.). email: pershin@indianaedu. & Harris. 18(5). Handbook of training and development (znd ed. Worthen. London: Kogan Page.L. Walberg & G. San Francisco: Jossey-Bass. is a Research Associate with Education and Training Resources at Indiana University. 42-47. Tessmer. Handbook of training and development for the public sector: A comprehensive resource. Volume 38 • Number 8 39 • .. Powers.I§. (1992). IN 47408.e. M. PhD. RA. 35-58. MA: HRD... 201 North Rose Avenue. D. email suhlee@indiana..G. Tracey. San Francisco: Jossey-Bass. His research focuses on the fields of performance analysis. & Fitzpatrick. Z. Program evaluation: Alternative approaches and practical guidelines (2nd ed. & Glasgow. Bloomington. Cayer. "Program evaluation. PhD. G. (1992). "Summative evaluation." In S. (1989). J. S. Room 101. and the International Federation of Training and Development Organisations over the last three years..). S.R (1992). 
The international encyclopedia of educational evaluation. (1991). Instructor excellence: Mastering delivery of training.L. (1994). and the business impact of training and development.