The Vocational Aspect of Education, Volume 46, No. 2, 1994

Strategies for the Assessment of Competence

ANTHONY WATSON
University of Technology, Sydney, Australia

ABSTRACT The paper examines three different strategies for the assessment
of competence. The strategies are different because they base their
judgements about competence on different forms and amounts of evidence
and they adopt different procedures for obtaining this evidence. Each
strategy is illustrated by means of an analysis of an assessment programme
in operation. The programmes are drawn from an investigation of
competency-based training programmes in TAFE colleges, industry and
various training institutions carried out in Australia and Great Britain during
1992. The analysis brings out the strengths and weaknesses of each strategy
as well as the particular problems and difficulties associated with the
assessment of competence and the procedures that must be followed to
ensure that principles of sound assessment are adhered to.

Introduction
The agenda for the reform of the Australian Vocational Education and
Training System is now well established. The National Training Board has
been established to foster the development of competency-based
standards for all occupations and classifications covered by industrial
awards and agreements and to endorse them within a national framework.
National working parties have been set up to develop competency-based
training guidelines. In 1991, agreement was reached on a National
Framework for the Recognition of Training (VEETAC Working Party on the
Recognition of Training, 1991). Australian TAFE authorities are committed
to the introduction of competency-based training systems for all
recognised courses (Kinsman, 1992, p. 25).
There are, however, a number of issues which remain largely
unresolved about the nature and implementation of competency-based
vocational education and training. One of the most important of these

arises from the general lack of agreement concerning appropriate
strategies for the assessment of competence. Another is concern about
appropriate methods of monitoring student/trainee progress and
recording competence achieved.
It is generally accepted that the success or failure of a
competency-based system will depend ultimately on the quality of the
assessment methods employed and on the methods of monitoring student
progress and recording competence achieved. On these matters, though,
the official Australian sources have not been particularly forthcoming.
The VEETAC Working Party on the Recognition of Training says little
more than that assessment methods should be appropriate: "Courses
submitted for accreditation shall include assessment methods and
demonstrate their appropriateness to the course" (1991, Appendix 2). The
VEETAC Working Party on the Implementation of Competency-Based
Training goes further and provides 19 principles of assessment which
must be complied with by providers of competency-based training. These
include well accepted principles of assessment such as the need for
practices that are valid and reliable as well as principles that are
particularly important for competency-based assessment. These include
the need for assessment to be undertaken as an holistic process which
integrates skills, knowledge and their practical applications and which, as
far as practicable, covers both workplace and off-the-job vocational
education and training (VEETAC Working Party Report, 1992, pp. 14-19).
It is likely, however, that what programme developers and teachers
require is more specific guidance on ways in which these principles might
be applied and some examples of competency-based assessment in
operation. They require an understanding also of the problems and pitfalls
associated with competency-based assessment.
Consequently, the aim of this paper is to examine three strategies for
the assessment of competence and to illustrate these through an analysis
of assessment programmes in operation. The programmes are drawn from
an investigation of competency-based training (CBT) programmes in
operation in TAFE colleges, industry and various training institutions in
Australia and Great Britain carried out by the author during 1992. The
analysis brings out the strengths and weaknesses of each strategy as well
as some of the problems associated with competency-based assessment
and the procedures that must be adopted if the sought-after principles of
sound assessment are to be applied.

The Assessment of Competence


The assessment of competence is generally viewed as a process of
obtaining evidence on performance by one or a number of means and
making judgements on the basis of that evidence about an individual's
competence to meet certain prescribed standards.

This is the view expressed by the National Council for Vocational
Qualifications (NCVQ) in Great Britain. According to this body,
assessment of competence can be thought of as "the process of collecting
evidence and making judgements on whether or not performance criteria
have been met" (NCVQ, 1991b, p. 21). This appears to be the accepted
view also underlying most official CBT documents in Australia as
evidenced by the following definition:
Assessment is the process of collecting evidence and making
judgements on the nature and extent of progress towards performance
requirements set out in a standard, or a learning outcome, and, at the
appropriate point, making the judgement as to whether competency has
been achieved. (VEETAC, 1993, p. 13)
If this view of assessment in relation to competence is accepted, however,
three fundamental questions follow which must be answered before a
valid and reliable assessment programme can be developed.
1. What forms or types of evidence should be sought?
2. How much evidence is required?
3. Where should the evidence be provided? (Or who has the responsibility
for making the judgements?)
Answers to these questions will determine the shape and size of the
assessment programme.
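To make this definition concrete, the following is a minimal sketch in Python, with entirely hypothetical names, since none of the official documents prescribes an implementation. It models assessment as the collection of evidence against prescribed performance criteria; the three questions surface as the form of evidence (its source), the amount required and the judgement recorded on each item.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str          # where it was gathered, e.g. "practical test"
    criterion_met: bool  # the assessor's judgement on this item

@dataclass
class PerformanceCriterion:
    description: str
    evidence: list[Evidence] = field(default_factory=list)

    def is_met(self, minimum_items: int = 1) -> bool:
        # "How much evidence is required?" is a policy decision,
        # represented here by minimum_items.
        return sum(e.criterion_met for e in self.evidence) >= minimum_items

def judge_competent(criteria: list[PerformanceCriterion]) -> bool:
    # 'Competent' only when every prescribed criterion is met.
    return all(c.is_met() for c in criteria)
```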
The assessment strategies outlined in this paper take different
approaches on these questions. They are different because they base
their judgements about competence on different forms and amounts of
evidence and they adopt different procedures for obtaining this evidence.
They are different also because they are based on different views of
competent performance and they face different practical problems in
obtaining the evidence required.

Strategy No. 1: assessment based on samples of performance


One major strategy for the assessment of competence in common use is
based on what might be thought of as samples of performance. In other
words, the major evidence of competence is derived from samples of
performance on specially arranged assessment events such as practical
tests, exercises and simulations. These are generally supported by what
the NCVQ refers to as "supplementary evidence" from written and oral
questions and multiple-choice tests (NCVQ, 1991b, p. 22). The practical
assessments are designed to measure the technical or performance
aspects of competence. The written and oral tests are usually designed to
measure underpinning knowledge and understanding.
This approach to assessment requires that criteria are worked out
for each assessment event on which judgements about competence are

based. Assessments based on these criteria are generally made on a
pass/fail basis. Students are generally assessed individually when they are
ready and judged as 'competent' or 'not competent'. Complex records of
progressive achievement are required as individual retries may be
allowed on assessment events until competence is achieved. This
approach is generally found in formal college or off-job training settings
and is often carried out on behalf of industry. It was employed in some form
in all TAFE and further education (FE) colleges in the investigation which
had competency-based training programmes in operation.
The assessment strategy employed in the Hairdressing Certificate
programme at Box Hill College of TAFE, Victoria, exemplifies this
approach. Assessment in this programme is based on the observation of
samples of job performance carried out on specially-designed practical
tests or tasks which involve the basic operations of hairdressing (cutting,
styling, waving, colouring and basin service). These are supported by
supplementary evidence from theory tests which assess underlying
knowledge. The assessments are generally based on individual elements
of competence such as 'shampoo and condition hair'. Theory tests are
generated and marked by a computer program (an 80% pass mark is
required). Students are allowed three tries before being 'locked out' for
further instruction. Practical tests employ checklists of performance
criteria which are also generated by the computer program. They usually
follow the theory test and are checked off by the teacher. When all
requirements of an element are satisfactorily completed, students are
recorded as 'competent' for that element. In addition there are so-called
'retention tests' or assignments at regular intervals and a final assessment
which attempt to draw together and integrate a number of elements.
Records are computer-managed and records of progress are available to
students. These are based on a management chart which outlines the
requirements for the course.
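By way of illustration, the following is a minimal sketch in Python of the theory-test retry and 'lock-out' rule just described. The 80% pass mark and the limit of three tries come from the programme as described; the function and its names are invented.

```python
PASS_MARK = 0.80   # 80% required on the computer-marked theory test
MAX_TRIES = 3      # three tries before the student is 'locked out'

def theory_test_status(tries_so_far: int, latest_score: float) -> str:
    """Status of a student after one computer-marked attempt."""
    if latest_score >= PASS_MARK:
        return "passed"          # may proceed to the practical checklist
    if tries_so_far + 1 >= MAX_TRIES:
        return "locked_out"      # referred for further instruction
    return "retry_allowed"

# Example: a second failed attempt still leaves one try in hand.
print(theory_test_status(tries_so_far=1, latest_score=0.75))  # retry_allowed
```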
This approach to the assessment of competence is typical of most
college-based or off-job programmes. Validity depends on the extent to
which assessment events do in fact assess the relevant elements of
competence to the standards prescribed. It also depends on the quality
and range of facilities at the college and the capacity of the college to
simulate real workplace conditions and events. Reliability depends on the
usual factors which affect criterion-referenced assessment such as the
quality of the assessment items, objectivity in marking, adequacy of
marking guides and checklists.
There are, however, certain problems peculiar to this approach.
These must be overcome if effective judgements about competence and
effective records of competence achieved are to be provided.
The first is a tendency to fragment or atomise the assessment of
competence and to assess many separate elements at the expense of more
holistic assessment. At Box Hill this problem is tackled through the
inclusion of the so-called 'retention tests' which draw together a number
of elements into an integrated task such as 'a full basin service'. A similar
strategy was observed at Tea Tree Gully College, South Australia, in a
course on business studies. The course in question employs a so-called
'Mini Business Project' which extends over several weeks and draws
together elements such as business planning, budgeting, marketing,
pricing and producing a product and maintaining accurate financial
records.
A second problem arises from the complex and time-consuming
recording of progress which is required to keep track of competence
achieved and not achieved, number of successful and unsuccessful tries
and retries for all students and all elements of competence. (Not to
mention the impact on the reliability of the assessment which may result
from retries on the same tests.) This problem is particularly taxing for
teachers in those colleges where these records are maintained manually.
Box Hill College, like most colleges which employ some form of self-paced
learning, tackles this problem by means of a computer-based management
system which maintains a progressive record of progress for all students,
and which is available to students and teachers at all times. In the
experience of this author, some system of computer-managed learning
and assessment appears to be the only efficient and cost-effective method
of providing adequate records of progress whether a college employs
self-paced learning or not.
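The paper does not describe any college's actual software, but the kind of progressive record such a computer-managed system maintains might be sketched as follows (all names and structure are assumed):

```python
from collections import defaultdict

class ProgressRecord:
    """Progressive record of tries and competence, per student and element."""
    def __init__(self):
        self._records = defaultdict(
            lambda: defaultdict(lambda: {"tries": 0, "competent": False}))

    def log_attempt(self, student_id: str, element: str, passed: bool) -> None:
        entry = self._records[student_id][element]
        entry["tries"] += 1
        entry["competent"] = entry["competent"] or passed

    def report(self, student_id: str) -> dict:
        # Available to students and teachers at any time.
        return {element: dict(entry)
                for element, entry in self._records[student_id].items()}

record = ProgressRecord()
record.log_attempt("s001", "shampoo and condition hair", passed=False)
record.log_attempt("s001", "shampoo and condition hair", passed=True)
print(record.report("s001"))
# {'shampoo and condition hair': {'tries': 2, 'competent': True}}
```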
The third and most fundamental problem, however, arises from the
difficulty experienced by most off-job assessment systems in making valid
judgements about workplace competence on the evidence of one
successful performance of a job or operation in a college setting. Jobs that
look the same on the surface in a college setting may vary enormously in
the cut and thrust of the workplace. The recent National Assessment
Research Forum Report makes this point quite strongly when it says that
decisions about an individual's attainment of units of competency
should be largely derived from evidence of performance in the
workplace and should be made by workplace assessors. While
additional evidence about the individual's underpinning knowledge
may be gathered from other sources, especially from the provider of any
off-job training, the primary and substantive evidence of competency is
that drawn from observing, over a period of time, the individual's
gradual acquisition of skills and knowledge. (1993, p. 20)
To overcome this problem, colleges may have to establish cooperative
arrangements for integrated work-based assessment like the so-called
collaborative training partnerships which have been established between
education and industry in Great Britain.


Strategy No. 2: assessment based on natural observation in the workplace

A second major strategy for the assessment of competence observed in
operation is one based on what the NCVQ calls "performance evidence
from natural observation in the workplace" (NCVQ, 1991b, p. 22). In other
words, the major source of evidence on which judgements about
competence are made is the observation of natural or routine
performance in the workplace. This approach is typified by multiple
observations of natural or routine work performance on all elements of
competence. It requires hierarchies of assessors to ensure quality control
and complex tracking and recording of competence achieved. The
approach is most often found, of course, in industrial on-job training
settings and may be supported by off-job components. It was observed in
operation in industrial settings as diverse as Australian Paper
Manufacturers, Queensland and Boots the Chemist in Great Britain.
The assessment strategy employed in the training programme in
Electrical Fitting and Instrument Control at ICI Botany, New South Wales,
exemplifies this approach. Assessment of the on-job component of this
programme is based on the observation of work performance. It is carried
out by supervising tradespersons each time a trainee does work in one of
the competency areas as part of the 'normal daily work routine'. Records
of each experience are maintained using a system of computer-scanned
record or story cards. The record cards carry details which identify the
element of competence involved, the major unit of competence, related
units, the degree of complexity of the task, the criticality of the task, the
time taken and the degree of involvement of the trainee (Dutneal & Said, 1992).
Record cards are verified by supervising tradespersons and a craft
development officer who has a mentoring responsibility for the trainee.
Because it is realised that the quality of work experience varies, learning
values in point form are assigned to each experience based on the criteria
listed above and these are allocated to the appropriate area of
competence. When a preset number of points is exceeded, that is, when a
competence area has been practised successfully many times, the trainee
will be judged as 'most likely competent in that area'. (One-off
assessments are also available for auditing and appeals and for the
recognition of prior learning.)
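A sketch of this points-based logic might look as follows. The factors (complexity, criticality, time taken and trainee involvement) are those listed in the text; the weighting formula and threshold are invented, as the paper does not publish ICI's actual values.

```python
from dataclasses import dataclass

@dataclass
class WorkExperience:
    competence_area: str
    complexity: int     # e.g. 1 (routine) to 3 (complex)
    criticality: int    # e.g. 1 (low) to 3 (safety-critical)
    hours: float        # time taken
    involvement: float  # 0.0 (observed only) to 1.0 (performed unaided)

    def learning_value(self) -> float:
        # Invented formula: richer, more independent experience earns more.
        return (self.complexity + self.criticality) * self.hours * self.involvement

POINTS_THRESHOLD = 50.0  # an illustrative 'preset number of points'

def area_status(experiences: list[WorkExperience], area: str) -> str:
    points = sum(e.learning_value() for e in experiences
                 if e.competence_area == area)
    return "most likely competent" if points > POINTS_THRESHOLD else "in progress"
```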
This example represents a thoughtful approach to the assessment of
workplace competence. The system appears to have face validity as the
performance being assessed is real workplace performance under real
conditions and is related to the units and elements of competence set out
in the statement of standards. It is holistic assessment in that it is based
on real work which generally requires the integration of skills, knowledge
and attitudes. It takes account of the problem that some instances of job
performance may be far more complex, critical or time-consuming than
others and avoids the criticism which is often levelled at off-job
assessment that judgements about competence are not adequate if based
on one instance of performance.
However, the extent to which such a system is reliable, fair to all
trainees and generally practicable and cost-effective will depend on a
number of factors. These include:
1. adequate quality control to ensure consistency across assessors;
2. the quality and range of workplace experience and training that can be
provided;
3. sensible decisions about the number of observations of performance
required to make a reliable judgement about competence. (The
combination of circumstances in range statements and performance
criteria can lead to a multiplication of assessment events, as the sketch
below illustrates.)
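A small worked example, with invented figures, of how range statements multiply assessment events if every criterion must be observed under every combination of circumstances:

```python
from itertools import product

performance_criteria = 4          # invented
range_statement = {               # invented circumstances
    "equipment": ["type A", "type B", "type C"],
    "conditions": ["routine", "emergency"],
    "shift": ["day", "night"],
}

combinations = list(product(*range_statement.values()))
events = performance_criteria * len(combinations)
print(f"{len(combinations)} circumstance combinations x "
      f"{performance_criteria} criteria = {events} assessment events")
# 12 circumstance combinations x 4 criteria = 48 assessment events
```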
The programme developers at ICI are well aware of these requirements.
They have instituted a three-tier system of quality control and they are
experimenting to find the optimum number of observations for each
element of performance. Other programmes will need to take similar
precautions if they are to adopt this approach to the assessment of
workplace competence.
A more fundamental problem for workplace assessment, though, is
the one raised by the National Assessment Research Forum Report. That
is,
Does the workplace assessor or supervisor simply accept the
competency decisions made by the formal institutional training provider
involved?... or is the workplace assessor or supervisor expected to
assess achievement of each Unit of Competency? (1993, p. 20)
In other words, does the formal institutional provider retain a significant
responsibility for the assessment of certain aspects of competence
(underpinning knowledge, understanding, etc.) or is the ultimate and total
responsibility for judgements about competence to be shifted to the
workplace? These questions concerning the relative roles of industry and
institutional providers of training such as TAFE and FE require resolution
if successful integration of assessment and training is to be achieved.

Strategy No. 3: assessment based on evidence from prior achievements


A third major strategy for the assessment of competence is one based on
what the NCVQ calls "evidence from prior achievements" (NCVQ, 1991b,
p. 22). In other words, judgements about competence are based chiefly on
the recognising, assessing and accrediting of prior achievements and
learning through devices that document this achievement, such as a
portfolio of evidence. The portfolio may be supported, of course, by
evidence from supplementary sources such as interviews, oral and written
questions and simulations.
This approach is more likely to be used to assess higher levels of
competence where natural observation of performance in the workplace
may not be a viable option. It is also more likely to be found in training
agencies prepared to offer alternative routes to accreditation. The strategy
is quite common in Great Britain for the assessment of higher level
competency standards such as those in management training and training
programmes for assessors and verifiers.
The assessment strategy employed in the training programme for
managers offered by Thames Training, a semi-commercial training and
development centre of the University of Greenwich, exemplifies this
approach. This programme views management competence as the ability
to perform management functions in the workplace together with the
skills, knowledge and understanding that underpin such performance. The
main source of evidence of this competence is provided in a portfolio of
evidence. The primary function of the portfolio is to organise prior
learning experiences and achievements into a manageable form ready for
either 'action planning' or 'accreditation'. The framework for organising
the evidence is based on the Kolb Experiential Learning Cycle. The
process of accrediting competence is seen as a process of personal and
professional learning development (turning experience into learning).
The process involves a series of stages. These include a start-up
workshop to audit current competence, collection of direct and indirect
evidence of competence, development of the portfolio where competence
is documented, submission of the portfolio to an accredited assessor and
final assessment interview for the accreditation of competence. The
process may be supplemented by oral and written tests of underpinning
knowledge and understanding and sometimes by simulations.
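The staged route might be sketched as an ordered checklist; the stage names paraphrase the text, while the class and its behaviour are hypothetical.

```python
STAGES = [
    "start-up workshop (audit of current competence)",
    "collection of direct and indirect evidence",
    "development of the portfolio",
    "submission to an accredited assessor",
    "final assessment interview",
]

class PortfolioRoute:
    def __init__(self):
        self.completed = 0  # number of stages completed so far

    def complete_stage(self) -> str:
        if self.completed < len(STAGES):
            self.completed += 1
        if self.completed == len(STAGES):
            return "ready for accreditation of competence"
        return f"next: {STAGES[self.completed]}"

route = PortfolioRoute()
print(route.complete_stage())  # next: collection of direct and indirect evidence
```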
This approach to the assessment of competence results from a
desire to offer an alternative and more accessible route to accreditation
which is more closely related to real workplace performance than
traditional tertiary courses with their emphasis on time-serving, academic
reading and traditional examinations. It provides an interesting alternative
method of making judgements about workplace competence on the basis
of evidence from past achievements rather than on the basis of traditional
sources such as direct observation. (It is assumed that the direct
observation of day-to-day management functions, such as the settlement
of an industrial dispute or the drafting of a management plan, is not a
viable option.)
The strategy employed by Thames Training would appear to have
face validity in that every step in the process and all decisions made are
related to the units and elements of competence in the relevant set of
standards. To ensure that the required standards are reached, however,
steps would have to be taken to ensure that the process of evidence
collection and documentation was not simply a file-keeping exercise.
Appropriate academic standards and levels of intellectual rigour would
need to be maintained.
To enhance reliability, candidates would have to be trained in
documentation and reflection techniques and assessors would need to be
experts in the facilitation and assessment of these techniques. The
reliability of judgements would be enhanced further, of course, by the
inclusion of evidence from supplementary sources such as interviews, oral
and written questions and simulations.

Conclusion
It is worth restating that the assessment of competence is all about
obtaining evidence on which valid and reliable judgements about
competence might be made. Mitchell reminds us that "competence is the
ability to perform to the standards expected in employment" and that,
therefore, "performance evidence must be the prime candidate for
consideration, with assessment in the ongoing course of work the one that
is most likely to offer the highest validity" (1989, p. 61). When this
evidence is insufficient or unavailable, however, supplementary evidence
must be found through samples of performance observed elsewhere and
the assessment of an individual's knowledge or related cognitive
performance. It may well be, as Jack et al. (1993, p. 22) conclude, that
"evidence of workbased performance and of relevant, underpinning
knowledge and understanding are both necessary in order to demonstrate
full competence capable of being sustained across a range of contexts".
The strategies examined in this paper provide alternative
approaches to obtaining this evidence, given different types and levels of
competence being assessed and different practical circumstances. They
also serve to highlight some of the difficulties and problems associated
with the assessment of competence and bring into focus some of the
procedures that must be followed to ensure that the principles of sound
assessment are adhered to.

Acknowledgements
For the purposes of this paper, the author made use of materials and
unpublished papers generously made available by the following people:
Mr Ralph Dutneal and Mr Roger Said of ICI, NSW, Australia; Mr Ray Seary
and Mr Terry Walker, TAFE consultants to ICI; Mr Leon Hockaday and Ms
Kerri Ferguson of Box Hill College, Victoria, Australia; Ms Polly Carter, of
Thames Training, University of Greenwich, United Kingdom.


Correspondence
Professor Anthony Watson, School of Adult Vocational Education,
University of Technology Sydney, PO Box 123, Broadway, NSW 2007,
Australia.

References
Burke, J.W. (Ed.) (1989) Competency Based Education and Training. London: Falmer
Press.
Dutneal, R. & Said, R. (1992) Competency based training: an ICI case study,
unpublished paper, ICI Botany, NSW.
Gealy, N., Johnson, C., Miller, C. & Mitchell, L. (1991) Designing assessment
systems for national certification, in E. Fennell (Ed.) Development of
Assessable Standards for National Certification, pp. 35-51. Great Britain:
Employment Department.
Jack, M., Goodman, H. & Newbery, G. (1993) Assessment of learning outcomes: the
BTEC experience, Competence and Assessment, Issue 21,
pp. 18-22.
Kinsman, M. (1992) Competency based education and TAFE, in Higher Education
and the Competency Movement: implications for tertiary education and the
professions. Canberra: Australian National University.
Mansfield, R. (1991) Deriving standards of competence, in E. Fennell (Ed.)
Development of Assessable Standards for National Certification, pp. 12-24.
Great Britain: Employment Department.
Masters, G.N. & McCurry, D. (1990) Competency-based Assessment in the
Professions, National Office of Overseas Skills Recognition, Research Paper
No. 2. Canberra: AGPS.
Mitchell, L. (1989) The definition of standards and their assessment, in J.W. Burke
(Ed.) Competency Based Education and Training, pp. 54-64. London: Falmer
Press.
National Assessment Research Forum Report (1993) Competency-based Assessment.
Canberra: Department of Employment, Education & Training.
National Council for Vocational Qualifications (1991a) Criteria for National
Vocational Qualifications. Great Britain: Employment Department.
National Council for Vocational Qualifications (1991b) Guide to National Vocational
Qualifications. Great Britain: Employment Department.
National Training Board (1992) National Competency Standards Policy and
Guidelines, 2nd edn. Canberra: National Training Board.
VEETAC Working Party on the Recognition of Training (1991) National
Framework for the Recognition of Training. Canberra: AGPS.
VEETAC Working Party on the Implementation of Competency-based Training
(1992) Assessment of Performance under Competency-based Training. Canberra:
AGPS.

VEETAC (1993) Framework for the Implementation of a Competency-based
Vocational Education and Training System. Canberra: AGPS.
Whiteley, J. (1991) Assessment Models appropriate for Competency-based Training
and their Relationship to Teaching and Learning Approaches. Queensland:
Research and Development Division, Office of Vocational Education, Training
and Employment Commission.
