ANALYSIS
DIANNA MAY C. MACAPULAY
A. CRITERIA OF A GOOD TEST
1. Relevance
- students are able to perform the task
2. Representativity
- represents a real situation
3. Authenticity
- the situation and the interaction are meaningful and
representative in the world of the individual user
4. Balance
- each relevant topic/ability receives an equal amount of
attention
5. Validity
- the test effectively measures what it is intended to measure
Sub-classifications of validity:
A. Concurrent validity
- if the scores it gives correlate highly with a recognized external
criterion which measures the same area of knowledge or ability
B. Construct validity
- if scores can be shown to reflect a theory about the nature of a
construct or its relation to other constructs
C. Content Validity
- if the items or tasks of which it is made up constitute a
representative sample of items or tasks for the area of knowledge
or ability to be tested (often related to the syllabus or course)
D. Convergent Validity
- there is a high correlation between scores achieved in it and
those achieved in a different test measuring the same construct
E. Criterion-related validity
- if a relationship can be demonstrated between test scores
and some external criterion which is believed to be a
measure of the same ability
- Information on it is also used in determining how well a test
predicts future behavior
F. Discriminant validity
- if the correlation it has with tests of a different trait is lower
than correlation with tests of the same trait, irrespective of
testing method
G. Face Validity
- the extent to which a test appears to candidates to be an
acceptable measure of the ability it is intended to measure
H. Predictive Validity
- indication of how well a test predicts future performance in a
relevant skill
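Several of these validity types (concurrent, convergent, criterion-related, predictive) are established empirically by correlating test scores with another measure. A minimal sketch in Python, with made-up score lists, of the Pearson correlation typically used for this:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: scores on the new test vs. an established external criterion
new_test  = [55, 60, 72, 80, 90]
criterion = [50, 58, 70, 78, 88]
print(round(pearson_r(new_test, criterion), 3))
```

A high coefficient (close to 1) between the new test and a recognized external criterion would support concurrent validity; the same calculation against later performance data would bear on predictive validity.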
FACTORS THAT INFLUENCE VALIDITY
1. Appropriateness of test items
2. Directions
3. Reading vocabulary and sentence structure
4. Difficulty of items
5. Construction of test items
6. Length of the test
7. Arrangement of items
8. Patterns of answers
6. Reliability
- consistency and stability with which a test measures
performance
VARIABLES:
1. Specificity
- questions should not be open to different interpretations
2. Differentiation
- the test discriminates between good and poor students
3. Difficulty
- the test has an adequate level of difficulty
4. Length
- the test contains enough items; in multiple-choice tests at
least 40 items are required
5. Time
- students should have sufficient time to perform a test/ task
6. Item construction
- a well-constructed question is better than a poorly constructed one
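Two of these variables, differentiation and difficulty, are routinely quantified per item. A minimal sketch, assuming dichotomously (0/1) scored responses: the difficulty index is the proportion of correct answers, and a simple discrimination index compares the upper and lower scorer groups (the 27% cut-off used here is a common convention):

```python
def item_difficulty(responses):
    """Proportion of students answering the item correctly."""
    return sum(responses) / len(responses)

def discrimination_index(responses, totals, frac=0.27):
    """Upper-group minus lower-group proportion correct for one item.
    responses: 0/1 per student for this item; totals: total test scores."""
    n = max(1, round(len(totals) * frac))
    order = sorted(range(len(totals)), key=lambda i: totals[i])
    lower, upper = order[:n], order[-n:]
    p_upper = sum(responses[i] for i in upper) / n
    p_lower = sum(responses[i] for i in lower) / n
    return p_upper - p_lower

# Hypothetical data: one item's responses for 10 students, plus their total scores
item   = [1, 1, 1, 0, 1, 0, 1, 0, 0, 1]
totals = [38, 35, 30, 12, 28, 10, 33, 15, 11, 36]
print(item_difficulty(item))             # 0.6
print(discrimination_index(item, totals))
```

An item that high scorers answer correctly and low scorers miss (index near 1) discriminates well between good and poor students; an index near 0 or below flags a problematic item.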
Possible reasons for the inconsistency
of an individual’s score in a test
- Scorer's inconsistency
- Limited sampling of behaviour
- Changes in the individual himself
Factors affecting the reliability of a test
- Objectivity
- Difficulty of the test
- Length of the test
- Adequacy
- Testing condition
- Test administration procedures
B. IMPORTANCE OF QUANTITATIVE ANALYSIS
1. Purpose
- quantitative analysis is meant to give some idea about the
reliability of the test
- statistical data can make problematic items more visible
2. Getting started
- during the development phase of the test development
process, the test is administered to a sample group of at least
20 representative end-users, and the results are analyzed
using a statistical program
- Usually the help of a statistician is necessary
- When this has been done, the descriptive statistics, the
correlation, and the item reliability analyses can be checked
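One common item-reliability statistic computed at this stage is Cronbach's alpha (equivalent to KR-20 for 0/1-scored items). A minimal sketch over a small hypothetical response matrix, not the output of any particular statistical package:

```python
def cronbach_alpha(matrix):
    """Cronbach's alpha for a students x items matrix of item scores."""
    n_items = len(matrix[0])
    totals = [sum(row) for row in matrix]

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [variance([row[j] for row in matrix]) for j in range(n_items)]
    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / variance(totals))

# Hypothetical responses: 5 students x 4 items, scored 0/1
scores = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(round(cronbach_alpha(scores), 3))
```

Values closer to 1 indicate that the items measure performance consistently; in practice the sample would be the group of at least 20 end-users mentioned above, not 5.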
C. COMMON STATISTICAL FORMULAS
1. Descriptive statistics
- intended to offer a general idea about the test scores
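A minimal sketch of such descriptive statistics, using Python's standard statistics module on a hypothetical list of test scores:

```python
import statistics

# Hypothetical total scores for a small sample group
scores = [55, 60, 62, 68, 70, 72, 75, 80, 85, 90]

print("mean:  ", statistics.mean(scores))
print("median:", statistics.median(scores))
print("stdev: ", round(statistics.stdev(scores), 2))
print("range: ", max(scores) - min(scores))
```

Together these give the general idea of the score distribution that the notes describe: where scores center, and how widely they spread.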
CONCEPTS: