
kirkpatrick's four levels of training evaluation in detail

This grid illustrates the Kirkpatrick structure in detail, and particularly the modern-day interpretation of the Kirkpatrick learning evaluation model, its usage, implications, and examples of tools and methods. It is the same format as the grid above, but with more detail and explanation; for each level it covers the evaluation description and characteristics, examples of evaluation tools and methods, and relevance and practicability:
1. Reaction

evaluation description and characteristics:

Reaction evaluation is how the delegates felt, and their personal reactions to the training or learning experience, for example:
- Did the trainees like and enjoy the training?
- Did they consider the training relevant?
- Was it a good use of their time?
- Did they like the venue, the style, timing, domestics, etc?
- Level of participation.
- Ease and comfort of experience.
- Level of effort required to make the most of the learning.
- Perceived practicability and potential for applying the learning.

examples of evaluation tools and methods:
- Typically 'happy sheets'.
- Feedback forms based on subjective personal reaction to the training experience.
- Verbal reaction which can be noted and analysed.
- Post-training surveys or questionnaires.
- Online evaluation or grading by delegates.
- Subsequent verbal or written reports given by delegates to managers back at their jobs.

relevance and practicability:
- Can be done immediately the training ends.
- Very easy to obtain reaction feedback.
- Feedback is not expensive to gather or to analyse for groups.
- Important to know that people were not upset or disappointed.
- Important that people give a positive impression when relating their experience to others who might be deciding whether to experience the same.

2. Learning

evaluation description and characteristics:

Learning evaluation is the measurement of the increase in knowledge or intellectual capability from before to after the learning experience:
- Did the trainees learn what was intended to be taught?
- Did the trainee experience what was intended for them to experience?
- What is the extent of advancement or change in the trainees after the training, in the direction or area that was intended?

examples of evaluation tools and methods:
- Typically assessments or tests before and after the training (a simple worked example of comparing group scores follows this level).
- Interview or observation can be used before and after, although this is time-consuming and can be inconsistent.
- Methods of assessment need to be closely related to the aims of the learning.
- Measurement and analysis is possible and easy on a group scale.
- Reliable, clear scoring and measurements need to be established, so as to limit the risk of inconsistent assessment.
- Hard-copy, electronic, online or interview-style assessments are all possible.

relevance and practicability:
- Relatively simple to set up, but more investment and thought required than reaction evaluation.
- Highly relevant and clear-cut for certain training, such as quantifiable or technical skills.
- Less easy for more complex learning, such as attitudinal development, which is famously difficult to assess.
- Cost escalates if systems are poorly designed, which increases the work required to measure and analyse.
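To make the before-and-after measurement for level 2 concrete, here is a minimal sketch of how a group's pre- and post-training test scores might be compared. The scores, the 100-point scale and the use of a simple average gain are assumptions for illustration only, not a prescribed part of the Kirkpatrick model.

    # Minimal sketch (Python): comparing hypothetical pre- and post-training
    # test scores for a small group of trainees. All figures are assumed.
    from statistics import mean

    pre_scores = [52, 61, 48, 70, 55]    # assumed scores before training (out of 100)
    post_scores = [68, 74, 63, 82, 71]   # assumed scores after training (out of 100)

    # Per-trainee improvement, then group averages.
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]

    print(f"Average score before training: {mean(pre_scores):.1f}")   # 57.2
    print(f"Average score after training:  {mean(post_scores):.1f}")  # 71.6
    print(f"Average gain per trainee:      {mean(gains):.1f} points") # 14.4

In practice, as the grid notes, the scoring scheme must be reliable and clearly defined before averages like these mean anything.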

3. Behaviour

evaluation description and characteristics:

Behaviour evaluation is the extent to which the trainees applied the learning and changed their behaviour, and this can be immediately and several months after the training, depending on the situation:
- Did the trainees put their learning into effect when back on the job?
- Were the relevant skills and knowledge used?
- Was there noticeable and measurable change in the activity and performance of the trainees when back in their roles?
- Was the change in behaviour and new level of knowledge sustained?
- Would the trainee be able to transfer their learning to another person?
- Is the trainee aware of their change in behaviour, knowledge, skill level?

examples of evaluation tools and methods:
- Observation and interview over time are required to assess change, relevance of change, and sustainability of change.
- Arbitrary snapshot assessments are not reliable because people change in different ways at different times.
- Assessments need to be subtle and ongoing, and then transferred to a suitable analysis tool.
- Assessments need to be designed to reduce subjective judgement of the observer or interviewer, which is a variable factor that can affect the reliability and consistency of measurements.
- The opinion of the trainee, which is a relevant indicator, is also subjective and unreliable, and so needs to be measured in a consistent, defined way.
- 360-degree feedback is a useful method and need not be used before training, because respondents can make a judgement as to change after training, and this can be analysed for groups of respondents and trainees.
- Assessments can be designed around relevant performance scenarios, and specific key performance indicators or criteria.
- Online and electronic assessments are more difficult to incorporate - assessments tend to be more successful when integrated within existing management and coaching protocols.
- Self-assessment can be useful, using carefully designed criteria and measurements.

relevance and practicability:
- Measurement of behaviour change is less easy to quantify and interpret than reaction and learning evaluation.
- Simple quick-response systems are unlikely to be adequate.
- Cooperation and skill of observers, typically line managers, are important factors, and difficult to control.
- Management and analysis of ongoing subtle assessments are difficult, and virtually impossible without a well-designed system from the beginning.
- Evaluation of implementation and application is an extremely important assessment - there is little point in a good reaction and a good increase in capability if nothing changes back in the job, therefore evaluation in this area is vital, albeit challenging.
- Behaviour change evaluation is possible given good support and involvement from line managers or trainees, so it is helpful to involve them from the start, and to identify benefits for them, which links to the level 4 evaluation below.

4. Results

evaluation description and characteristics:

Results evaluation is the effect on the business or environment resulting from the improved performance of the trainee - it is the acid test. Measures would typically be business or organisational key performance indicators, such as:
- Volumes, values, percentages, timescales, return on investment, and other quantifiable aspects of organisational performance (a simple return-on-investment illustration follows this level).
- For instance: numbers of complaints, staff turnover, attrition, failures, wastage, non-compliance, quality ratings, achievement of standards and accreditations, growth, retention, etc.

examples of evaluation tools and methods:
- It is possible that many of these measures are already in place via normal management systems and reporting.
- The challenge is to identify which of them relate to the trainee's input and influence, and how.
- Therefore it is important to identify and agree accountability and relevance with the trainee at the start of the training, so they understand what is to be measured.
- This process overlays normal good management practice - it simply needs linking to the training input.
- Failure to link to training input type and timing will greatly reduce the ease by which results can be attributed to the training.
- For senior people particularly, annual appraisals and ongoing agreement of key business objectives are integral to measuring business results derived from training.

relevance and practicability:
- Individually, results evaluation is not particularly difficult; across an entire organisation it becomes very much more challenging, not least because of the reliance on line management, and the frequency and scale of changing structures, responsibilities and roles, which complicates the process of attributing clear accountability.
- Also, external factors greatly affect organisational and business performance, which cloud the true cause of good or poor results.
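Return on investment is one of the more readily quantifiable level 4 measures. The sketch below shows one simple way it might be calculated; the cost and benefit figures, and the basic (benefit - cost) / cost formula, are assumptions for illustration rather than a prescribed part of the model.

    # Minimal sketch (Python): a simple training return-on-investment figure.
    # All monetary values are hypothetical.
    def training_roi(total_cost, monetary_benefit):
        """ROI as a percentage: (benefit - cost) / cost * 100."""
        return (monetary_benefit - total_cost) / total_cost * 100.0

    cost = 12000.0      # assumed: delivery, materials, trainee time
    benefit = 18000.0   # assumed: e.g. reduced wastage and fewer complaints
    print(f"Training ROI: {training_roi(cost, benefit):.1f}%")  # 50.0%

The hard part, as the grid stresses, is attributing the benefit figure to the training rather than to external factors, which is why accountability and measures should be agreed before the training starts.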
