
EMILY GAN HUI FANG

Tutorial 1

Characteristics and Principles of Assessment

1. Validity
- Defined as the extent to which an assessment accurately measures what it is
intended to measure (the accuracy of the assessment).
- Validity ensures that assessment tasks and associated criteria effectively measure
student attainment of the intended learning outcomes at the appropriate level.
- In the classroom, if an assessment is intended to measure achievement and ability in a
particular subject area but instead measures completely unrelated concepts,
the assessment is not valid.
- Eg. types of validity, with definitions and examples/non-examples:

Content validity: the extent to which the content of the test matches the
instructional objectives. Non-example: a semester or quarter exam that only
includes content covered during the last six weeks is not a valid measure of the
course's overall objectives -- it has very low content validity.

Criterion validity: the extent to which scores on the test agree with (concurrent
validity) or predict (predictive validity) an external criterion. Example: if the
end-of-year math tests in 4th grade correlate highly with the statewide math
tests, they have high concurrent validity (see the correlation sketch below).

Construct validity: the extent to which an assessment corresponds to other
variables, as predicted by some rationale or theory. Example: if you can correctly
hypothesize that ESOL students will perform differently on a reading test than
English-speaking students (because of theory), the assessment may have
construct validity.

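To make the criterion-validity entry concrete, here is a minimal sketch of the correlation check it describes, assuming two hypothetical paired score lists (a classroom end-of-year test and a statewide test); all names and numbers are invented for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired lists of scores."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    ss_x = sum((x - mean_x) ** 2 for x in xs)
    ss_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(ss_x * ss_y)

# Hypothetical paired scores for the same 4th-grade students.
classroom_test = [72, 85, 90, 60, 78, 95, 66, 88]
statewide_test = [70, 82, 93, 58, 75, 97, 64, 85]

r = pearson_r(classroom_test, statewide_test)
print(f"concurrent validity (Pearson r) = {r:.2f}")  # r near 1 = strong agreement
```

A coefficient near 1 would support concurrent validity; a coefficient near 0 would suggest the two tests measure different things.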
2. Reliability
- The degree to which an assessment tool produces stable and consistent results.
- Indicates the consistency or stability of test performance, and is one of the most
important considerations when selecting tests and other assessment tools.

- A test must be constructed so that examiners can administer it with minimal
error and can interpret students' performance with confidence.
- Eg. if you create a quiz to measure students' ability to solve quadratic equations,
you should be able to assume that a student who gets one item correct will
also get other, similar items correct (see the sketch after this list).
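As a rough illustration of that internal-consistency idea, here is a minimal sketch of Cronbach's alpha, one common reliability coefficient; the four-item quadratic-equations quiz and the student scores below are invented for illustration:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a quiz, given one list of scores per item."""
    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)                                      # number of items
    totals = [sum(student) for student in zip(*items)]  # each student's total score
    item_variance_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_variance_sum / variance(totals))

# Hypothetical scores: 4 quadratic-equation items, 5 students (1 = correct, 0 = wrong).
items = [
    [1, 1, 0, 1, 0],  # item 1
    [1, 1, 0, 1, 1],  # item 2
    [1, 0, 0, 1, 0],  # item 3
    [1, 1, 1, 1, 0],  # item 4
]
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")  # ~0.75 for this data
```

If students who solve one quadratic item tend to solve the others too, alpha rises toward 1, which is the pattern a reliable quiz should show.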

3. Objectivity

- A scientific test is objective when no external conditions, whether basic or
marginal, affect the testing process or its evaluation.
- An objective test measures without reference to outside influences.
- Eg. test items should be free from ambiguity: a given item should mean the
same thing to every student as the test maker intends. Sentences with dual
meanings and items with more than one correct answer should not be included,
as they make the test subjective.

4. Practicability

- Refers to the need to ensure that assessment requirements are appropriate to
the intended learning outcomes of a programme, that in their operation they
do not distort the learning/training process, and that they do not make
unreasonable demands on the time and resources available to the learner,
teacher/trainer and/or assessor.
- Eg. a test of language proficiency that takes a student five hours to complete is
impractical: it consumes more time (and money) than necessary to accomplish its
objective. A test that requires individual one-on-one proctoring is impractical for a
group of several hundred test-takers and only a handful of examiners. A test that
takes a student a few minutes to complete but an examiner several hours to
evaluate is impractical for most classroom situations.

5. Interpretability

- Test interpretation encompasses all the ways that meaning is assigned to the
scores. Proper interpretation requires knowledge about the test, which can be
obtained by studying its manual and other materials along with current research
literature with respect to its use; no one should undertake the interpretation of
scores on any test without such study. In any test interpretation, the following
considerations should be taken into account.

A. Consider Reliability: Reliability is important because it is a prerequisite to validity
and because the degree to which a score may vary due to measurement error is an
important factor in its interpretation.

B. Consider Validity: Proper test interpretation requires knowledge of the validity
evidence available for the intended use of the test. Its validity for other uses is not
relevant; indeed, use of a measurement for a purpose for which it was not designed
may constitute misuse. The nature of the validity evidence required for a test depends
upon its use.

C. Scores, Norms, and Related Technical Features: The result of scoring a test or
subtest is usually a number called a raw score, which by itself is not interpretable.
Additional steps are needed to translate the number into either a verbal
description (e.g., pass or fail) or a derived score (e.g., a standard score). Less than
full understanding of these procedures is likely to produce errors in interpretation and
ultimately in counselling or other uses.
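As a small illustration of moving from a raw score to a derived score and a verbal description, here is a minimal sketch of the usual z-score / standard-score conversion; the norm mean, standard deviation, scale, and pass cut-off are all assumed values for illustration:

```python
def to_standard_score(raw, norm_mean, norm_sd, scale_mean=100.0, scale_sd=15.0):
    """Convert a raw score into a derived standard score via a z-score."""
    z = (raw - norm_mean) / norm_sd   # position relative to the norm group
    return scale_mean + scale_sd * z  # rescale, e.g., to a mean-100, SD-15 scale

# Hypothetical norms: raw scores in the norm group have mean 40 and SD 8.
raw = 52
standard = to_standard_score(raw, norm_mean=40, norm_sd=8)
print(f"raw score {raw} -> standard score {standard:.1f}")  # 122.5

# A verbal description needs a stated criterion, here an assumed cut-off of 45.
print("verbal description:", "pass" if raw >= 45 else "fail")
```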

D. Administration and Scoring Variation: Stated criteria for score interpretation assume
standard procedures for administering and scoring the test. Departures from standard
conditions and procedures modify and often invalidate these criteria.
