
lOMoARcPSD|2532716

Summary Psychological testing, Robert M. Kaplan

Test theory (Rijksuniversiteit Groningen)

Distributing prohibited | Downloaded by Alyanna de Leon (alyaaannaaaa@gmail.com)


Test Theory
Chapter 1

- Test: measurement device or technique used to quantify behavior or aid in the


understanding and prediction of behavior.
-> measures only a sample of behavior, and error is always associated with a
sampling process.
- Item: a specific stimulus to which a person responds overtly; this response can be
scored or evaluated
-> specific questions or problems that make up a test.
- Psychological test (educational test): set of items that are designed to measure
characteristics of human beings that pertain to behaviour
-> Overt behaviour; individual’s observable activity.
-> Covert behaviour; takes place within an individual and cannot be directly
observed.
- Scales: used by psychologists to deal with problems of interpretation; relate raw
scores on test items to some defined theoretical or empirical distribution.
- Types of tests:
-> individual test; can be given to only one person at a time.
-> group test; can be administered to more than one person at a time by a single
administrator
=> Test administrator; person giving the test.
Tests can also be categorized according to the type of behaviour they measure:
-> Ability tests; contain items that can be scored in terms of speed, accuracy, or both.
=> different types of ability:
1. Achievement: previous learning
2. Aptitude: potential for learning or acquiring a specific skill
3. Intelligence: person’s general potential to solve problems, adapt to changing
circumstances, think abstractly, and profit from experience.
=> these three concepts are encompassed by the term human ability.
-> personality tests; related to overt and covert dispositions of the individual.
=> different types of personality tests:
1. Structured personality tests: provide a statement, usually of the “self-report”
variety, and require the subject to choose between two or more alternative
responses as true/false.
2. Projective personality test: unstructured, either the stimulus or required response
are ambiguous.
- Psychological testing: all possible uses, applications, and underlying concepts of
psychological and educational tests. The main use of these tests though, is to
evaluate individual differences or variations among individuals.
- Principles of psychological testing: basic concepts and fundamental ideas that
underlie all psychological and educational tests
- Reliability: accuracy, dependability, consistency or repeatability of the test results
- Validity: meaning and usefulness of test results
- Interview: method of gathering information through verbal interaction, such as
direct questions.

- Test batteries: two or more tests used in conjunction; already quite common in China by the Han Dynasty
- Charles Darwin & individual differences:
-> most basic concept underlying psychological and educational testing pertains to
individual differences
-> According to Darwin’s theory higher forms of life evolved partially because of
differences among individual forms of life within species.
-> Sir Francis Galton, began applying Darwin’s theories to the study of human beings.
Galton initiated a search for knowledge concerning human individual differences,
which is now one of the most important domains of scientific psychology.
- Psychological testing (experimental psychology and psychological measurement):
developed from at least two lines of inquiry; one based on the work of Darwin,
Galton, and Cattell on the measurement of individual differences, and the other
based on the work of the German psychophysicists Herbart, Weber, Fechner, and
Wundt.
-> Binet developed the first major general intelligence test. Binet’s early effort
launched the first systematic attempt to evaluate individual differences in human
intelligence.
- Evolution of intelligence and standardized achievement tests:
-> The history and evolution of Binet's intelligence test are instructive. Binet was aware of
the importance of a standardization sample.
-> representative sample; one that comprises individuals similar to those for whom
the test is to be used.
-> mental age concept was one of the most important contributions of the revised
1908 Binet-Simon Scale.
=> In 1916, L.M. Terman of Stanford University revised the Binet test for use in the
United States.
- World War I: The war created a demand for large-scale group testing because
relatively few trained personnel could evaluate the huge influx of military recruits ->
World War I fueled the widespread development of group tests
- Achievement tests: Among the most important developments following World War
I was the development of standardized achievement tests.
-> Standardized achievement tests caught on quickly because of the relative ease of
administration and scoring and the lack of subjectivity or favoritism that can occur in
essay or other written tests.
=> by 1930, it was widely held that the objectivity and reliability of these new
standardized tests made them superior to essay tests.
- Wechsler’s test: produced various scores, among these was the performance IQ.
Wechsler’s inclusion of a nonverbal scale thus helped overcome some of the practical
and theoretical weaknesses of the Binet test.
- Personality test: 1920-1940:
-> traits; relatively enduring dispositions that distinguish one individual from another.
-> the earliest personality tests were structured paper-and-pencil group tests
=> motivation underlying the development of the first personality test was the need
to screen military recruits.
-> introduction of the Woodworth test was enthusiastically followed by the creation
of a variety of structured personality tests, all of which assumed that a subject’s
response could be taken at face value.

-> during the brief but dramatic rise and fall of the first structured personality tests,
interest in projective tests began to grow.
-> Unlike the early structured personality tests, interest in the projective Rorschach
inkblot test grew slowly.
=> Adding to the momentum for the acceptance and use of projective tests was the
development of the Thematic Apperception Test (TAT) by Henry Murray and Christina
Morgan in 1935.
TAT purported to measure human needs and thus to ascertain individual differences
in motivation.
- Emergence of new approaches to personality testing:
-> 1943, the Minnesota Multiphasic Personality Inventory (MMPI) began a new era for structured
personality tests
=> factor analysis; is a method of finding the minimum number of dimensions, called
factors, to account for a large number of variables.
-> 1940, J.P. Guilford made the first serious attempt to use factor analytic techniques
in the development of a structured personality test.
- Period of rapid changes in the status of testing:
-> 1949, formal university training standards had been developed and accepted, and
clinical psychology was born
=> one of the major functions of the applied psychologist was providing psychological
testing
-> in the late 1940s and early 1950s, testing was the major function of the clinical
psychologist.
=> depending on one’s perspective, the government’s efforts to stimulate the
development of applied aspects of psychology, especially clinical psychology, were
extremely successful.
-> unable to engage independently in the practice of psychotherapy, some
psychologists felt like technicians serving the medical profession. The highly talented
group of post-World War II psychologists quickly began to reject this secondary role.
=> these attacks intensified and multiplied so fast that many psychologists jettisoned
all ties to the traditional tests developed during the first half of the 20th century.
- Current environment:
-> Because important areas of psychology such as neuropsychology, health psychology,
forensic psychology, and child psychology make extensive use of psychological
tests, psychological testing has again grown in status and use.
-> psychological testing remains one of the most important yet controversial topics in
psychology
-> testing is indeed one of the essential elements of psychology.

Chapter 2

- Tests: devices used to translate observations into numbers


- Why statistics: scientific study requires systematic observations and an estimation of
the extent to which observations could have been influenced by chance alone.
Statistical methods serve two important purposes in the quest for scientific
understanding:
1. Statistics are used for purposes of description

2. We can use statistics to make inferences, which are logical deductions about
events that cannot be observed directly.
=> first comes the detective work of gathering and displaying clues, or what the
statistician John Tukey calls exploratory data analysis. Then comes a period of
confirmatory data analysis, when the clues are evaluated against rigid statistical
rules.
- Descriptive statistics: methods used to provide a concise description of a collection
of quantitative information.
- Inferential statistics: methods used to make inferences from observations of a small
group of people, known as a sample, to a larger group of individuals, known as a
population.
- Measurement: the application of rules for assigning numbers to objects. The rules
are the specific procedures used to transform qualities or attributes into numbers.
The basic feature of these types of systems is the scale of measurement.
- Properties of scales: three important properties make scales of measurement
different from one another; magnitude, equal interval, and an absolute 0.
- Magnitude: property of moreness. Scale has the property of magnitude if we can say
that a particular instance of the attribute represents more, less, or equal amounts of
the given quantity than does another instance.
- Equal intervals: scale has the property of equal intervals if the difference between
two points at any place on the scale has the same meaning as the difference between
two other points that differ by the same number of scale units.
-> when a scale has the property of equal intervals, the relationship between the
measured units and some outcome can be described by a straight line or a linear
equation in the form Y=a+bX. This equation shows that an increase in equal units on
a given scale reflects equal increases in the meaningful correlates of units.
- Absolute 0: obtained when nothing of the property being measured exists. For many
psychological qualities, it is extremely difficult, if not impossible, to define an
absolute 0 point.
- Nominal scales: really not scales at all; their only purpose is to name objects. They are
used when the information is qualitative rather than quantitative.
- Ordinal scale: scale with the property of magnitude but not equal intervals or an
absolute 0. This scale allows you to rank individuals or objects but not to say anything
about the meaning of the differences between the ranks. For most problems in
psychology, the precision to measure the exact difference between intervals does
not exist. So, most often one must use ordinal scales of measurement.
- Interval scale: scale has the properties of magnitude and equal intervals but not
absolute 0.
- Ratio scale: scale that has all three properties; magnitude, equal intervals, and an
absolute 0.
- Level of measurement: important because it defines which mathematical operations
we can apply to numerical data.
- Distribution of scores: summarizes the scores for a group of individuals.
- Frequency distribution: displays scores on a variable or a measure to reflect how
frequently each value was obtained. With a frequency distribution, one defines all
the possible scores and determines how many people obtained each of those scores.
For most distributions of test scores, the frequency distribution is bell-shaped.

-> whenever you draw a frequency distribution or a frequency polygon, you must
decide on the width of the class interval
- Percentile ranks: replace simple ranks when we want to adjust for the number of
scores in a group. Answers the question; what percent of the scores fall below a
particular score (X)? To calculate a percentile rank, you need only follow these simple
steps:
1. Determine how many cases fall below the score of interest
2. Determine how many cases are in the group
3. Divide the number of cases below the score of interest by the total number of
cases in the group, and multiply the result by 100
=> P = (B/N) × 100 = percentile rank of X
=> this means that you form a ratio of the number of cases below the score of interest
to the total number of scores, and multiply by 100.
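The three steps can be sketched in Python (a minimal illustration; the function name `percentile_rank` and the sample scores are mine, not from the text):

```python
def percentile_rank(scores, x):
    """Percent of scores in the group that fall below x: Pr = (B/N) * 100."""
    below = sum(1 for s in scores if s < x)  # step 1: cases below the score of interest
    n = len(scores)                          # step 2: cases in the group
    return below / n * 100                   # step 3: ratio times 100

scores = [2, 4, 7, 8, 9, 10, 10, 11, 12, 15]
print(percentile_rank(scores, 10))  # 5 of the 10 scores fall below 10 -> 50.0
```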
- Percentiles: specific scores or points within a distribution. Percentiles divide the total
frequency for a set of observations into hundredths. Instead of indicating what
percentage of scores fall below a particular score, as percentile ranks do, percentiles
indicate the particular score, below which a defined percentage of scores fall.
=> percentile and percentile rank are similar.
=> when reporting percentiles and percentile ranks, you must carefully specify the
population you are working with. Remember that a percentile rank is a measure of
relative performance.
- Variable: score that can have different values
- Mean: arithmetic average score in a distribution.
- Standard deviation: approximation of the average deviation around the mean.
-> knowing only the mean of a group of scores does not give much information.
=> square root of the average squared deviation around the mean. Although the
standard deviation is not an average deviation, it gives a useful approximation of how
much a typical score is above or below the average score.
=> σ = √(Σ(X − X̄)²/N)
=> knowing the standard deviation of a normally distributed batch of data allows us
to make precise statements about the distribution
=> most often we use the standard deviation for a sample to estimate the standard
deviation for a population. When we talk about a sample, we replace the Greek σ with
the Roman letter S.
- Variance: the average squared deviation around the mean: σ² = Σ(X − X̄)²/N
-> Variance is a useful statistic commonly used in data analysis, but it expresses
variability in squared deviations around the mean rather than in deviations around
the mean, which is why the standard deviation is reported more often.
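The mean, variance, and standard deviation formulas can be computed directly; a minimal sketch with made-up scores:

```python
import math

scores = [5, 7, 8, 10, 10, 12]
n = len(scores)

mean = sum(scores) / n                               # arithmetic average
variance = sum((x - mean) ** 2 for x in scores) / n  # average squared deviation
sd = math.sqrt(variance)                             # square root of the variance

print(mean, variance, sd)
```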
- Z-score: transforms data into standardized units that are easier to interpret.
Difference between a score and the mean, divided by the standard deviation
-> Z = (X − X̄)/S
=> problem with means and standard deviations is that they do not convey enough
information for us to make meaningful assessments or accurate interpretations of
data. Other metrics are designed for more exact interpretations.
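The Z-score transformation Z = (X − X̄)/S can be sketched as follows (hypothetical scores; after the transformation the scores have mean 0 and SD 1):

```python
import math

def z_scores(scores):
    """Standardize scores: Z = (X - mean) / SD."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / n)
    return [(x - mean) / sd for x in scores]

print(z_scores([2, 4, 6, 8, 10]))
```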
- Standard Normal Distribution: central to statistics and psychological testing.
-> On most occasions we refer to units on the X axis of the normal distribution in Z
score units. Any variable transformed into Z score units takes on special properties

1. Z scores have a mean of 0 and a standard deviation of 1


-> The numbers under the curve are the proportions of cases that we would expect
to observe in each area.
- Percentile ranks: percentage of scores that fall below the observed z-score.
- McCall’s T: transforms raw scores to give them more intuitive meaning; proposed in 1939.
-> originally intended to develop a system to derive equal units on mental quantities
-> standard deviation set at 10.
=> exactly the same as standard scores (Z-score), except that the mean in McCall’s T
is 50 rather than 0 and the standard deviation is 10 rather than 1.
-> Z score can be transformed to a T score by applying the linear transformation:
T=10Z+50
=> distinction between standardization and normalization: McCall’s T and the other
methods described standardize scores by applying a linear transformation. These
transformations do not change the shape of the distribution. If a distribution of
scores is skewed before the transformation is applied, it will also be skewed after the
transformation has been used. -> transformations standardize but do not normalize.
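The linear transformation T = 10Z + 50 can be sketched on top of the Z-score formula (illustrative data, not from the text):

```python
import math

def t_scores(scores):
    """McCall's T: mean 50 and SD 10, via the linear transformation T = 10Z + 50."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / n)
    return [10 * (x - mean) / sd + 50 for x in scores]

print(t_scores([2, 4, 6, 8, 10]))  # mean of the result is 50, SD is 10
```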
- Quartiles: divide the frequency distribution into equal fourths.
1. Quartile is 25th percentile
2. Quartile is median, or 50th percentile
3. Quartile is 75th percentile
-> interquartile range: interval of scores bounded by the 25th and 75th percentiles;
the range of scores that represents the middle 50% of the distribution.
=> Deciles: similar to quartiles except that they use points that mark 10% rather than
25% intervals.
- Stanine system: converts any set of scores into a transformed scale, which ranges
from 1 to 9. Actually the term stanine comes from “standard nine”. The scale is
standardized to have a mean of 5 and a standard deviation of approximately 2.
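One common way to assign stanines by formula is round(2Z + 5), clipped to the 1-9 range; this shortcut is my addition (the exact definition uses fixed percentage bands of the normal curve), but it matches the mean of 5 and SD of about 2:

```python
import math

def stanines(scores):
    """Approximate stanines: round(2Z + 5), clipped to 1-9.
    Approximation only; exact stanines use fixed normal-curve percentage bands."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / n)
    return [min(9, max(1, round(2 * (x - mean) / sd + 5))) for x in scores]

print(stanines([2, 4, 6, 8, 10]))  # -> [2, 4, 5, 6, 8]
```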
- Norms: performance by defined groups on particular tests
-> based on the distribution of scores obtained by some defined sample of individuals
-> used to give information about performance relative to what has been observed in
a standardization sample.
-> Are obtained by administering the test to a sample of people and obtaining the
distribution of scores for that group.
=> Age-related norms: certain tests have different normative groups for particular
age groups. The most common example is the growth chart used by pediatricians.
- Tracking: tendency to stay at about the same level relative to one’s peers -> chart
used to determine the expected course of, for example, a child’s growth.
- Norm-referenced test: compares each person with a norm.
-> purpose of establishing norms for a test is to determine how a test taker compares
with others
=> beyond ranking people according to performance, tests can play an important
role in identifying problems and suggesting new directions for individualized
programs of instruction.
- Criterion-referenced test: specific types of skills, tasks, or knowledge that the test
taker can demonstrate
-> results of the test would not be used to make comparisons between the child and
other members of his class.

-> emphasizes the diagnostic use of tests, using them to identify problems that can
be remedied.

Chapter 4

- Error: there will always be some inaccuracy in our measurements


- Reliable: tests that are relatively free of measurement error
- Spearman: developed reliability assessment
-> 1733 Abraham De Moivre introduced the basic notion of sampling error
-> 1896 Karl Pearson developed the product moment correlation
=> Reliability theory puts these two concepts together in the context of
measurement.
- Classical test score theory: assumes that each person has a true score that would be
obtained if there were no errors in measurement. However, because measuring
instruments are imperfect, the score observed for each person almost always differs
from the person’s true ability or characteristic.
-> major assumption: errors of measurement are random. Although systematic errors
are acknowledged in most measurement problems, they are less likely than other
errors to force an investigator to make the wrong conclusions.
-> not get the same score on each measurement, but get a distribution of scores.
=> center of the distribution should represent the true score and dispersion around
the mean of the distribution should display the distribution of sampling errors.
=> assumes that the true score for an individual will not change with repeated
applications of the same test. Because of random error, however, repeated
applications of the same test can produce different scores.
=> standard error of measurement: the standard deviation of the distribution of
errors for each person tells us about the magnitude of measurement error.
-> tells us on the average, how much a score varies from the true score.
=> a reason for turning away from classical test theory: it requires that exactly the
same test items be administered to each person.
- Domain sampling model: central concept in classical test theory. Considers the
problems created by using a limited number of items to represent a larger and more
complicated construct.
-> this model conceptualizes reliability as the ratio of the variance of the observed
score on the shorter test to the variance of the long-run true score. The
measurement error considered in the domain sampling model is the error introduced by
using a sample of items rather than the entire domain. As the sample gets larger, it
represents the domain more and more accurately. As a result, the greater the
number of items, the higher the reliability.
-> reliability can be estimated from the correlation of the observed test score with
the true score.
=> because true scores are not available, our only alternative is to estimate what
they would be.
- Item response theory (IRT): computer is used to focus on the range of item difficulty
that helps assess an individual’s ability level.
-> difficulties: method requires a bank of items that have been systematically

evaluated for level of difficulty. Considerable effort must go into test development,
and complex computer software is required
- Most reliability coefficients are correlations.
- Sources of error: observed score may differ from a true score for many reasons;
-situational factors
-items on the tests might not be representative of the domain
-Test reliability is usually estimated in one of three ways:
1.test-retest method
2.parallel forms
3.internal consistency
- Test-retest reliability: estimates are used to evaluate the error associated with
administering a test at two different times. This type of analysis is of value only
when we measure traits or characteristics that do not change over time
-> Tests that measure some constantly changing characteristic are not appropriate
for test-retest evaluation. -> applies only to measures of stable traits.
=> Test-retest reliability is relatively easy to evaluate: Just administer the same test
on two well-specified occasions and then find the correlation between scores from
the two administrations.
=> sometimes poor test-retest correlations do not mean that a test is unreliable,
instead they suggest that the characteristic under study has changed. One of the
problems with classical test theory is that it assumes that behavioural dispositions
are constant over time
- Carryover effect: occurs when the first testing session influences scores from the
second session
-> the test-retest correlation usually overestimates the true reliability.
Carryover effects are of concern only when the changes over time are random; if the
changes are systematic, carryover effects do not harm the reliability.
-> types of carryover effects:
1. Practice effects, some skills improve with practice -> can also affect tests of manual
dexterity; experience taking the test can improve dexterity skills
=> because of these problems, the time interval between testing sessions must be
selected and evaluated carefully.
- Parallel forms reliability: compares two equivalent forms of a test that measure the
same attribute. The two forms use different items; however, the rules used to select
items of a particular difficulty level are the same
-> sometimes referred to as equivalent forms reliability or parallel forms
-> reliability analysis to determine the error variance that is attributable to the
selection of one particular set of items.
=> when both forms of the test are given on the same day, the only sources of
variation are random error and the difference between the forms of the test
=> sometimes two forms of the test are given at different times, then error
associated with time sampling is also included in the estimate of reliability.
- Split-half reliability: a test is given and divided into halves that are scored separately.
The results of one half of the test are then compared with the results of the other. If
the test is long, the best method is to divide the items randomly into halves.
-> if the items get progressively more difficult, one is better advised to use the odd-
even system, whereby one subscore is obtained for the odd-numbered items in the test

and another for the even-numbered items.


-> to estimate the reliability of the test, you could find the correlation between the
two halves.
=> the correlation between the two halves of the test would be a reasonable
estimate of the reliability of half the test. To correct for half-length you can apply the
spearman brown formula, which allows you to estimate what the correlation
between the two halves would have been if each half had been the length of the
whole test: corrected r = 2r/(1 + r) -> using the Spearman-Brown formula increases the
estimate of reliability
=> When not to use Spearman-Brown: when the two halves of a test have unequal
variances, Cronbach’s coefficient alpha can be used. This general reliability
coefficient provides the lowest estimate of reliability; if alpha is high, this is acceptable
because the lower boundary of reliability is still high, and the reliability will not drop
below alpha. A low alpha level, on the other hand, gives less information.
=> when the variances for the two halves of the test are equal, the Spearman-Brown
coefficient and coefficient alpha give the same results. Under other specific
circumstances, both procedures may underestimate the true reliability.
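The Spearman-Brown correction can be sketched as follows (the half-test correlation of .70 is an illustrative value, not from the text):

```python
def spearman_brown(half_r):
    """Estimate full-test reliability from the correlation between two half-tests:
    corrected r = 2r / (1 + r)."""
    return 2 * half_r / (1 + half_r)

print(spearman_brown(0.70))  # a half-test correlation of .70 corrects to about .82
```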
- KR20 Formula: addition to split-half technique, method for estimating the internal
consistency of a test.
-> formula for calculating the reliability of a test in which the items are dichotomous,
scored 0 or 1 (Kuder-Richardson 20): KR20 = r = [N/(N − 1)] · [1 − Σpq/s²], where p is
the proportion of people passing each item, q = 1 − p, and s² is the total-score variance.
-> decisions about how to split tests into halves cause many potential problems for
split-half reliability. The two halves may have different variances. The split-half method
also requires that each half be scored separately, possibly creating additional work.
=> KR20 requires finding the proportion of people who got each item correct; however,
there are many tests for which there is no right or wrong answer, such as personality
and attitude scales.
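Using the standard KR20 form, a minimal sketch with hypothetical dichotomous data (rows are persons, columns are items scored 0/1; the data are made up for illustration):

```python
def kr20(item_matrix):
    """KR20 = [N/(N-1)] * [1 - sum(p*q)/s^2], where N is the number of items,
    p the proportion passing each item, q = 1 - p, and s^2 the variance of
    total scores."""
    n_items = len(item_matrix[0])
    n_people = len(item_matrix)
    totals = [sum(row) for row in item_matrix]
    mean_t = sum(totals) / n_people
    s2 = sum((t - mean_t) ** 2 for t in totals) / n_people  # total-score variance
    pq = 0.0
    for j in range(n_items):
        p = sum(row[j] for row in item_matrix) / n_people   # proportion correct
        pq += p * (1 - p)
    return (n_items / (n_items - 1)) * (1 - pq / s2)

data = [  # hypothetical 0/1 responses, 5 people x 4 items
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(kr20(data))  # -> 0.8 for these data
```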
- Coefficient Alpha: Cronbach. Formula that estimates the internal consistency of tests
in which the items are not scored as 1 or 0: r = α = [N/(N − 1)] · [(s² − Σs_i²)/s²]
-> More general reliability coefficient than KR20 because s_i² can describe the
variance of items whether or not they are in a right-wrong format. Thus coefficient
alpha is the most general method of finding estimates of reliability through internal
consistency
=> all of the measure of internal consistency evaluate the extent to which the
different items on a test measure the same ability or trait.
=> when the items do not measure the same characteristic, the test will not be
internally consistent
=> factor analysis is a popular method for dealing with the situation in which a test
apparently measures several different characteristics. Can help a test constructor
build a test that has submeasures for several different traits.
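Coefficient alpha can be computed the same way for items that are not scored 0/1; a sketch with hypothetical 1-5 ratings (made-up data):

```python
def cronbach_alpha(item_matrix):
    """alpha = [N/(N-1)] * [(s^2 - sum(s_i^2)) / s^2], with s_i^2 the variance
    of item i and s^2 the variance of the total score."""
    n_items = len(item_matrix[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(row) for row in item_matrix]
    s2 = var(totals)                                     # total-score variance
    sum_item_var = sum(var([row[j] for row in item_matrix])
                       for j in range(n_items))          # sum of item variances
    return (n_items / (n_items - 1)) * ((s2 - sum_item_var) / s2)

data = [  # hypothetical 1-5 ratings, 4 people x 3 items
    [4, 5, 4],
    [3, 4, 3],
    [2, 2, 3],
    [5, 5, 5],
]
print(cronbach_alpha(data))
```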
- Reliability of difference score: created by subtracting one test score from another.
This might be the difference between performance at two points in time, or the
difference between measures of two different abilities. Whenever comparisons
between two different attributes are being made, one must make the comparison in
Z (standardized) units.
-> create a host of problems that make them more difficult to work with than single
scores.

-> reliability of a difference score is expected to be lower than the reliability of either
score on which it is based. If two tests measure exactly the same trait, then the score
representing the difference between them is expected to have a reliability of 0
=> because of poor reliabilities, difference scores cannot be depended on for
interpreting patterns
=> although reliability estimates are often interpreted for individuals, estimates of
reliability are usually based on observations of populations, not observations of the
same individual. One distinction is between reliability and information.
1. Low reliability implies that comparing gain scores in a population may be
problematic.
2. Low information suggests that we cannot trust gain-score information about a
particular person.
3. Low reliability of a change score for a population does not necessarily mean that
gains for individual people are not meaningful.
- Reliability in behavioural observation studies: many sources of error, because
psychologists cannot always monitor behaviour continuously, they often take
samples of behaviour at certain time intervals. Under these circumstances, sampling
error must be considered. -> sources of error introduced by time sampling are similar
to those with sampling items from a large domain.
-> behavioural observation systems are frequently unreliable because of
discrepancies between true scores and the scores recorded by the observer.
=> problem of error associated with different observers presents unique difficulties.
To assess these problems, one needs to estimate the reliability of the observers.
These reliability estimates have various names; interrater, interscorer, interobserver
or interjudge reliability. All of the terms consider the consistency among different
judges who are evaluating the same behaviour. There are different ways to do this:
1. Record the percentage of times that two or more observers agree -> not the best
method; percentage does not consider the level of agreement that would be
expected by chance alone, and percentages should not be mathematically
manipulated.
2. Kappa statistic: the best method for assessing the level of agreement among several
observers. Introduced by Cohen as a measure of agreement between two judges who
each rate a set of objects using nominal scales.
-> indicates the actual agreement as a proportion of the potential agreement
following correction for chance agreement
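Cohen's kappa for two judges can be sketched as κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance (the ratings below are hypothetical):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_o - p_e) / (1 - p_e): agreement between two judges,
    corrected for the agreement expected by chance."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # chance agreement: product of the judges' marginal proportions, summed
    p_e = sum(freq_a[c] / n * freq_b[c] / n for c in freq_a)
    return (p_o - p_e) / (1 - p_e)

a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
b = ["yes", "no", "no", "no", "yes", "no", "yes", "yes"]
print(cohens_kappa(a, b))  # 6/8 observed vs .5 chance agreement -> 0.5
```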
- Time sampling: same test given at different points in time may produce different
scores, even if given to the same test takers.
- Item sampling: same construct or attribute may be assessed using a wide pool of
items.
- Internal consistency: the intercorrelations among items within the same test.
- Standard errors of measurement: the larger the standard error of measurement, the
less certain we can be about the accuracy with which an attribute is measured.
Conversely, a small standard error of measurement tells us that an individual score is
probably close to the measured value.
-> researchers use standard errors of measurement to create confidence intervals
around specific observed scores
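The standard error of measurement and the confidence interval built from it can be sketched as SEM = S·√(1 − r), CI = X ± z·SEM (the SD of 15 and reliability of .89 below are illustrative values, not from the text):

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SEM = S * sqrt(1 - r)."""
    return sd * math.sqrt(1 - reliability)

def confidence_interval(observed, sd, reliability, z=1.96):
    """Approximate 95% confidence interval around an observed score."""
    e = z * sem(sd, reliability)
    return observed - e, observed + e

# hypothetical: an IQ-style test with SD 15 and reliability .89
print(sem(15, 0.89))
print(confidence_interval(110, 15, 0.89))
```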

- Reliability: reliability estimates in the range of .70 to .80 are good enough for most
purposes in basic research.
-> in clinical settings, high reliability is extremely important. When tests are used to
make important decisions about someone’s future, evaluators must be certain to
minimize any error in classifications.
-> most useful index of reliability for the interpretation of individual scores is the
standard error of measurement. This index is used to create an interval around an
observed score.
=> two common approaches are to increase the length of the test and to throw out
items that run down the reliability. Another procedure is to estimate what the true
correlation would have been if the test did not have measurement error.
 the larger the sample, the more likely that the test will represent the true
characteristic. In the domain sampling model, the reliability of a test increases as the
number of items increases.-> adding new items can be costly and can make a test so
long that few people would be able to sit through it.
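The two procedures above have standard formulas not spelled out in these notes: the Spearman-Brown prophecy formula for lengthening a test, and the correction for attenuation for estimating the error-free correlation. A sketch with made-up values:

```python
import math

def spearman_brown(reliability, length_factor):
    # predicted reliability when the test is made length_factor times longer
    return length_factor * reliability / (1 + (length_factor - 1) * reliability)

def correction_for_attenuation(r_xy, r_xx, r_yy):
    # estimated true correlation if neither measure had measurement error
    return r_xy / math.sqrt(r_xx * r_yy)

print(round(spearman_brown(.70, 2), 3))                     # doubling the test -> 0.824
print(round(correction_for_attenuation(.60, .70, .80), 3))  # .60 / sqrt(.56) -> 0.802
```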
 reliability of a test depends on the extent to which all of the items measure one
common characteristic. To ensure that the items measure the same thing, 2
approaches are suggested:
1. Factor analysis; tests are most reliable if they are unidimensional, means that one
factor should account for considerably more of the variance than any other factor.
Items that do not load on this factor might be best omitted.
2. Discriminability analysis; examine the correlation between each item and the total
score for the test.

Chapter 5

- Validity: agreement between a test score or a measure and the quality it is believed
to measure.
-> evidence for inferences, 3 types:
1. Construct-related
2. Criterion-related
3. Content-related
- Face validity: the mere appearance that a measure has validity. A test has face validity if
the items seem to be reasonably related to the perceived purpose of the test.
-> is not really validity at all because it does not offer evidence to support
conclusions drawn from test scores.
-> important because the appearances can help motivate test takers because they
can see that the test is relevant.
- Content-related evidence for validity: test or measure considers the adequacy or
representation of the conceptual domain the test is designed to cover
-> been of greatest concern in educational testing.
=> factors could include: characteristics of the items and the sampling of the items.
-> Only type of evidence besides face validity that is logical rather than statistical.
-> in looking for content validity evidence, attempt to determine whether test has
been constructed adequately, content of the items must be carefully evaluated.
=> 2 new concepts that are relevant to content validity evidence:
1. Construct underrepresentation: failure to capture important components of a
construct
2. Construct-irrelevant variance: when scores are influenced by factors irrelevant to the
construct.
-> test scores reflect many factors besides what the test supposedly measures
- Criterion validity evidence: tells how well a test corresponds with a particular
criterion. Such evidence is provided by high correlations between a test and a
well-defined criterion measure. A criterion is the standard against which the test is
compared.
-> reason for gathering criterion validity evidence is that the test or measure is to
serve as a stand-in for the measure we are really interested in.
- Predictive validity evidence:
-> for example: SAT Reasoning Test, serves as predictive validity evidence as a college
admissions test if it accurately forecasts how well high school students will do in their
college studies. SAT is predictor variable, grade point average is the criterion.
- Concurrent related evidence: comes from assessments of the simultaneous
relationship between the test and the criterion. Here the measures and criterion
measures are taken at the same time because the test is designed to explain why the
person is now having difficulty in school.
- Validity Coefficient: correlation of the relationship between a test and a criterion.
Tells the extent to which the test is valid for making statements about the criterion.
-> no hard rules about how large a validity coefficient must be to be meaningful.
=> coefficient is statistically significant if the chances of obtaining its value by chance
alone are quite small.
- Issues of concern when interpreting validity coefficients:
1. Be aware that the conditions of a validity study are never exactly reproduced.
2. Criterion-related validity studies mean nothing at all unless the criterion is valid
and reliable; for applied research, the criterion should relate specifically to the use of
the test.
3. Validity study might have been done on a population that does not represent the
group to which inferences will be made.
4. Sometimes a proper validity study cannot be done because there are too few people
to study. Such a study can be quite misleading; one cannot depend on a correlation
obtained from a small sample, particularly for multiple correlation and multiple
regression. The smaller the sample, the more likely that chance variation in the data
will affect the correlation. A good validity study will present some evidence for cross
validation.
5. Never confuse the criterion with the predictor.
6. A variable has a restricted range if all scores for that variable fall very close
together. The problem this creates is that correlation depends on variability.
Correlation requires that there be variability in both the predictor and the criterion.
7. Criterion related validity evidence obtained in one situation may not be
generalized to other similar situations. Must prove that the results obtained in a
validity study are not specific to the original situation, there are many reasons why
results may not be generalized.
-> generalizability: evidence that the findings obtained in one situation can be
generalized to other situations.
8. Predictive relationships may not be the same for all demographic groups. Separate
validity studies for different groups may be necessary.

-> although criterion-related validity evidence is common in psychological and
educational research, it simply does not apply in some instances. By definition, the
criterion must be the most accurate measure of the phenomenon if it is to serve as
the gold standard.
- Construct validity evidence: assembling evidence about what a test means. Done by
showing the relationship between a test and other tests and measures.
-> construct is defined as something built by mental synthesis.
=> important set of logical considerations for establishing evidence, Campbell & Fiske
1959, distinguished between 2 types of evidence essential for a meaningful test:
1. Convergent evidence; when a measure correlates well with other tests believed to
measure the same construct
-> shows that measure of the same construct converge, or narrow in, on the same
thing.
=> obtained in 2 ways: show that a test measures the same things as other tests used
for the same purpose, and demonstrate specific relationships that we can expect if
the test is really doing its job.
-> no single variable can serve as the criterion
-> also evaluated via specific hypotheses based on certain theoretical notions about the
construct.
2. Discriminant evidence; test measures something unique-> a test should have low
correlations with measures of unrelated constructs, or evidence for what the test
does not measure.
-> indicates that the measure does not represent a construct other than the one for
which it was devised.
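A toy illustration of this logic; the scale names and correlations below are invented, not from the text:

```python
# Correlations between a hypothetical new anxiety scale and other measures.
correlations = {
    "established_anxiety_inventory": 0.68,  # same construct -> convergent evidence
    "clinician_anxiety_rating": 0.55,       # same construct -> convergent evidence
    "reading_speed_test": 0.08,             # unrelated construct -> discriminant evidence
    "vocabulary_test": -0.04,               # unrelated construct -> discriminant evidence
}

convergent = [r for name, r in correlations.items() if "anxiety" in name]
discriminant = [abs(r) for name, r in correlations.items() if "anxiety" not in name]

# Convergent correlations should clearly exceed the discriminant ones.
print(min(convergent) > max(discriminant))  # -> True
```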
- Criterion-referenced tests: Validity studies for the criterion-referenced tests would
compare scores on the test to scores on other measures that are believed to be
related to the test.
- Relationship between reliability and validity: attempting to define the validity of a
test will be futile if the test is not reliable. Theoretically, a test should not correlate
more highly with any other variable than it correlates with itself
-> reliability and validity are related concepts.
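This relationship implies a well-known ceiling: a validity coefficient cannot exceed the square root of the product of the two reliabilities. A sketch with made-up reliabilities:

```python
import math

def max_validity(test_reliability, criterion_reliability):
    # theoretical upper bound on the validity coefficient
    return math.sqrt(test_reliability * criterion_reliability)

# a test with reliability .81 and a criterion with reliability .64:
print(round(max_validity(.81, .64), 2))  # -> 0.72
```

So an unreliable test cannot be highly valid, whatever it is correlated with.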

Chapter 6

- Guidelines for writing test items, DeVellis:


1. Define clearly what you want to measure
2. Generate an item pool
3. Avoid exceptionally long items
4. Keep the level of reading difficulty appropriate for those who will complete the
scale
5. Avoid “double-barreled” items that convey two or more ideas at the same time
6. Consider mixing positively and negatively worded items
=> when writing items, you need to be sensitive to ethnic and cultural differences
- Dichotomous format: 2 alternatives for each item. Usually a point is given for the
selection of one of the alternatives. The most common example of this format is the
true-false examination
-> advantages of true-false items include their obvious simplicity, ease of
administration, and quick scoring; they require absolute judgment.
-> disadvantages: to be reliable, a true-false test must include many items
- Polytomous format: resembles the dichotomous format except that each item has
more than two alternatives
-> multiple-choice tests are easy to score, and the probability of obtaining a correct
response by chance is lower than it is for true-false. A major advantage of this
format is that it takes little time for test takers to respond to a particular item
because they do not have to write. Thus, the test can cover a large amount of
information in a relatively short time.
-> distracters; incorrect choices.
=> psychometric theory suggests that adding more distracters should increase the
reliability of the items. However, in practice, adding distracters may not actually
increase the reliability because it is difficult to find good ones. -> well-chosen
distracters are an essential ingredient of good items.
=> this result suggests that three alternative multiple-choice items may be better
than five alternative items because they retain the psychometric value but take less
time to develop and administer.
=> if distracters are too easy, then a poorly prepared test taker has a high chance of
guessing the correct answer.
=> if each test item has four choices, test takers would be expected to get 25% of
the total number of items correct by guessing alone.
=> when the correction for guessing is applied, the expected score is 0.
-> guessing threshold: describes the chances that a low-ability test taker will obtain
each score.
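The correction for guessing mentioned above is conventionally computed as R − W/(k − 1), where R is the number right, W the number wrong, and k the number of choices per item:

```python
def corrected_score(n_right, n_wrong, n_choices):
    # classical correction for guessing: R - W / (k - 1)
    return n_right - n_wrong / (n_choices - 1)

# Blind guessing on 100 four-choice items yields about 25 right and 75 wrong,
# so the corrected (expected) score is 0:
print(corrected_score(25, 75, 4))  # -> 0.0
```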
- Likert format: requires that a respondent indicate the degree of agreement with a
particular attitudinal question. Popular format for attitude and personality scales.
-> scoring requires that any negatively worded items be reverse scored; the
responses are then summed
-> especially popular in measurements of attitude.
-> scales require assessment of item discriminability.
=> strongly disagree, disagree, neutral, agree, strongly agree
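Likert scoring as described, with reverse scoring of negatively worded items, can be sketched as follows; the item positions and responses are made up:

```python
def score_likert(responses, reversed_items, n_points=5):
    # reverse-score negatively worded items, then sum all responses
    total = 0
    for i, response in enumerate(responses):
        if i in reversed_items:
            response = n_points + 1 - response  # e.g. 1 <-> 5, 2 <-> 4
        total += response
    return total

# five items answered 5,1,4,2,5; items 1 and 3 (0-indexed) are negatively worded
print(score_likert([5, 1, 4, 2, 5], reversed_items={1, 3}))  # -> 23
```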
- Category format: Technique that is similar to the Likert format but that uses an even
greater number of choices.
-> often a 10-point rating, though it can have either more or fewer categories.
-> people will change ratings depending on context; when given a group of objects to
rate, subjects have a tendency to spread their responses evenly across the 10
categories.
=> increasing the number of choices beyond nine can reduce reliability because
responses may be more likely to include an element of randomness when there are
so many alternatives that respondents cannot clearly discriminate between the fine-
grained choices.
-> visual analogue scale, related to categorical scales. Respondent is given a 100-
millimeter line and asked to place a mark between two well-defined endpoints. Not
used often for multi-item scales, because scoring is time-consuming
- Adjective checklist: common personality measurement. With this method, a subject
receives a long list of adjectives and indicates whether each one is characteristic of
himself. Can be used for describing either oneself or someone else.


-> requires subjects either to endorse such adjectives or not, thus allowing only two
choices for each item.
=> Q-sort: similar technique, increases the number of categories. Can be used to
describe one-self or to provide ratings of others. A subject is given statements and
asked to sort them into nine piles.
- Item analysis: general term for a set of methods used to evaluate test items, is one
of the most important aspects of test construction. Basic methods involve
assessment of item difficulty and item discriminability.
-> Item difficulty; defined by the proportion of people who get a particular item correct,
for a test that measures achievement or ability. Optimal difficulty level for items is
usually about halfway between 100% of the respondents getting the item correct and
the level of success expected by chance alone. -> Tests should have a variety of
difficulty levels because a good test discriminates at many levels. -> one must also
consider human factors.
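The "halfway between 100% correct and chance" rule can be made concrete:

```python
def optimal_item_difficulty(chance_level):
    # halfway between everyone correct (1.0) and success expected by chance
    return (1.0 + chance_level) / 2

print(optimal_item_difficulty(0.25))  # four-choice items -> 0.625
print(optimal_item_difficulty(0.50))  # true-false items -> 0.75
```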
- Discriminability: another way to examine the relationship between performance on
particular items and performance on the whole test. Determines whether the people
who have done well on particular items have also done well on the whole test.
=> extreme group method: compares people who have done well with those who
have done poorly on a test. -> the difference between these proportions is called the
discrimination index.
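The discrimination index from the extreme group method is simply the difference between the two pass proportions; the numbers below are invented:

```python
def discrimination_index(top_passed, top_n, bottom_passed, bottom_n):
    # proportion of high scorers passing the item minus proportion of low scorers
    return top_passed / top_n - bottom_passed / bottom_n

# 9 of 10 top scorers but only 3 of 10 bottom scorers answered the item correctly
print(round(discrimination_index(9, 10, 3, 10), 2))  # -> 0.6
```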
=> point biserial method: another way to examine the discriminability of items is to
find the correlation between performance on the items and performance on the total
test. -> using it on tests with only a few items is problematic because performance on
the item contributes to the total test score.  point biserial correlation between an
item and the total test score is evaluated in much the same way as the extreme
group discriminability index. If the value is negative or low, then the item should be
eliminated from the test.
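Because item scores are 0/1, the point biserial reduces to an ordinary Pearson correlation between the item column and the total scores; a self-contained sketch with invented data:

```python
def pearson_r(x, y):
    # ordinary Pearson correlation; with a 0/1 x this is the point biserial
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

item_scores = [1, 1, 0, 1, 0, 0]          # right/wrong on one item
total_scores = [48, 45, 30, 41, 28, 33]   # each person's total test score
print(round(pearson_r(item_scores, total_scores), 2))  # -> 0.94
```

A high value like this would mean the item separates high and low scorers well; a low or negative value would flag the item for elimination.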
- Item characteristic curve: a valuable way to learn about items, is to graph their
characteristics. The relationship between performance on the item and performance
on the test gives some information about how well the item is tapping the
information we want.
-> to draw the item characteristic curve, we need to define discrete categories of
test performance. Once you have arrived at these categories, you need to determine
what proportion of the people within each category got each item correct. Once you
have this series of breakdowns, you can create a plot of the proportions of correct
responses to an item by total test scores.
=> ranges in which the curve changes suggest that the item is sensitive, while flat
ranges suggest areas of low sensitivity.
=> in summary, item analysis breaks the general rule that increasing the number of
items makes a test more reliable. When bad items are eliminated, the effects of
chance responding can be eliminated and the test can become more efficient,
reliable and valid.
- Item response theory: newer approaches to testing based on item analysis, consider
the chances of getting particular items right or wrong, make extensive use of item
analysis. Each item on a test has its own item characteristic curve that describes the
probability of getting a particular item right or wrong given the ability level of each
test taker.


-> technical advantages: builds on traditional models of item analysis and can provide
information on item functioning, the value of specific items and the reliability of a
scale. Perhaps the most important message for the test taker is that his score is no
longer defined by the total number of items correct, but instead by the level of
difficulty of items that he can answer correctly.
=> various approaches to the construction of tests using IRT: all of these approaches
grade items in relation to the probability that those who do well or poorly on the
exam will have different levels of performance. They can easily be adapted for
computer administration.
=> peaked conventional; most conventional tests have the majority of their items at
or near an average level of difficulty
=> rectangular conventional; requires that test items be selected to create a wide
range in level of difficulty.
=> there are specialized IRT applications for specific problems such as the
measurement of self-efficacy
 classical test theory; old approach; a score derived from the sum of an individual’s
responses to various items, which are sampled from a larger domain that represents
a specific trait or ability.
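One common way to write an item characteristic curve in IRT is the two-parameter logistic model; this is a sketch (the parameter values are arbitrary), not the specific model the notes describe:

```python
import math

def p_correct(theta, a, b):
    # probability of answering the item correctly, given ability theta,
    # item discrimination a, and item difficulty b (2PL model)
    return 1 / (1 + math.exp(-a * (theta - b)))

# when ability equals the item's difficulty, the probability is .5
print(p_correct(0.0, a=1.0, b=0.0))  # -> 0.5
# higher ability gives a higher chance of success on the same item
print(p_correct(2.0, 1.0, 0.0) > p_correct(-2.0, 1.0, 0.0))  # -> True
```

This shows the key IRT idea above: a score is defined by the difficulty of items a person can answer, not by a simple count of correct responses.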
- Linking uncommon measures: one challenge in test applications is how to determine
linkages between two different measures.
-> often these linkages are achieved through statistical formulas.
- Items for Criterion-Referenced tests:
-> criterion-referenced test; compares performance with some clearly defined
criterion for learning. This approach is popular in individualized instruction programs,
where the test would be used to determine whether a given learning objective had
been achieved.
=> Involves clearly specifying the objectives by writing clear and precise statements
about what the learning program is attempting to achieve.
=> To evaluate the items in the test, one should give the test to two groups of
students: one that has been exposed to the learning unit and one that has not.
-> traditional use of tests requires that we determine how well someone has done on
a test by comparing the person’s performance to that of others

Chapter 7

- Relationship between examiner and Test taker: both the behaviour of the examiner
and his relationship to the test taker can affect test scores. For younger children, a
familiar examiner may make a difference.
-> people tend to disclose more information in a self-report format than they do to
an interviewer. In addition, people report more symptoms and health problems in a
mailed questionnaire than they do in a face-to-face interview.
-> In most testing situations, examiners should be aware that their rapport with test
takers can influence the result. They should also keep in mind that rapport might be
influenced by subtle processes such as the level of performance expected by the
examiner.
- The race of the tester:
-> because of concern about bias, the effects of the tester’s race have generated
considerable attention


-> reason why so few studies show effects of the examiner’s race on the results of IQ
tests is that the procedures for properly administering an IQ test are so specific.
Anyone who gives the test should do so according to a strict procedure. -> deviation
from this procedure might produce differences in performance associated with the
race of the examiner.
=> race of the examiner affects test scores in some situations (Sattler). Examiner
effects tend to increase when examiners are given more discretion about the use of
the tests. Also there may be some biases in the way items are presented. Race effects
in test administration may be relatively small, but effort must be made to reduce all
potential bias.
- Language of test taker: new standards concern testing individuals with different
linguistic backgrounds. The standards emphasize that some tests are inappropriate for
people whose knowledge of the language is questionable. Translating tests is
difficult, and it cannot be assumed that the validity and reliability of the translation
are comparable.
- Training of test administrators: Different assessment procedures require different
levels of training. Many behavioural assessment procedures require training and
evaluation but not a formal degree or diploma.
- Expectancy effects: the magnitude of the effects is small, approximately a 1-point
difference on a 20-point scale.
-> expectancies shape our judgment in many important ways; for example, faculty in
research-oriented universities carry important responsibilities to apply for grant funding.
-> Two aspects of the expectancy effect relate to the use of standardized test
1. nonverbal communication between the experimenter and the subject
2. Determining whether expectancy influences test scores requires careful studies on
the particular tests that are being used.
=> a variety of interpersonal and cognitive process variables have been shown to
affect our judgment of others. Even something as subtle as a physician’s voice can
affect the way patients rate their doctors. These biases may also affect test scoring.
- Effects of reinforcing responses: Inconsistent use of feedback can damage the
reliability and validity of test scores. Several studies have shown that reward can
significantly affect test performance.
-> girls decreased in speed when given reinforcement, and boys increased in speed
only when given verbal praise.
-> Some of the most potent effects of reinforcement arise in attitudinal studies.
-> reinforcement and feedback guide the examinee toward a preferred response.
Another way to demonstrate the potency of reinforcement involves misguiding the
subject.
-> the potency of reinforcement requires that test administrators exert strict control
over the use of feedback.
-> Testing also requires standardized conditions because situational variables can
affect test scores.
- Computer assisted test administration: interactive testing involves the presentation
of test items on a computer terminal or personal computer and the automatic
recording of test responses. The computer can also be programmed to instruct the
test taker and to provide instruction when parts of the testing procedure are not
clear.


-> some advantages that computer offers:


1. Excellence of standardization
2. Individually tailored sequential administration
3. Precision of timing responses
4. Release of human testers for other duties
5. Patience
6. Control of bias.
=> computer offers many advantages in test administration, scoring and
interpretation.
=> Computer assisted test administration does not necessarily depend on a
structured order of test items
=> Computers are objective and cost-effective, furthermore they allow more
experimental control than do other methods of administration
=> Computer assisted method also prevent test takers from looking ahead at other
sections of the test or going back to sections already completed.
- Mode of administration: should be constant within any evaluation of patients.
-> for studies of psychiatric disability, mode of asking the questions makes a
difference. In other instances, the mode of administration is less important.
- Subject variables: motivation and anxiety can greatly affect test scores.
-> Test anxiety appears to have three components: worry, emotionality, and lack of
self-confidence.
-> it may seem obvious that illness affects test scores.
=> behavioural assessment methodology: many assessment procedures involve the
observation of behaviour. These samples require the performance of tasks in
environments similar to the actual work setting.
-> improving methods of behavioural observation has long been recognized as
important in clinical psychology. Many new problems, though, have accompanied the
increasing use of behavioral observation methods. Some of the problems include
reactivity, drift and expectancies.
- Reactivity: the tendency for observers' reliability and accuracy to increase when they
know they are being checked; studies have shown that reliability and accuracy are
highest in this case.
-> many studies have demonstrated that accuracy and interrater agreement decrease
when observers believe their work is not being checked. => to deal with this problem,
some experimenters resort to covert operations.
- Drift: the tendency of observers to drift away from the strict rules they followed in
training and to adopt idiosyncratic definitions of behaviour when they leave the
training sessions.
-> contrast effect; most common problem, tendency to rate the same behaviour
differently when observations are repeated in the same context. Standards may also
shift, resulting in biased ratings of behaviour.
=> observer drift and contrast effects suggest that observers should be periodically
retrained.
- Expectancies: more consistently cause bias in behavioral observation when
observers receive reinforcement for recording a particular behaviour than when they
do not.
-> impact of expectancy is subtle.
- Deception: the detection of lying and honesty has become a major industry.


- Statistical Control of rating errors:


-> halo effect: the tendency to ascribe positive attributes independently of the
observed behaviour. -> Some psychologists have argued that this effect can be
controlled through partial correlation in which the correlation between two variables
is found while variability in a third variable is controlled.
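The first-order partial correlation referred to here has a standard formula; a sketch with invented correlations, where x and y are two rated attributes and z is the general impression being controlled:

```python
import math

def partial_r(r_xy, r_xz, r_yz):
    # correlation between x and y with z statistically held constant
    return (r_xy - r_xz * r_yz) / (
        math.sqrt(1 - r_xz ** 2) * math.sqrt(1 - r_yz ** 2))

# two ratings correlate .60, but both correlate .70 with a general impression:
print(round(partial_r(0.60, 0.70, 0.70), 2))  # -> 0.22
```

Partialling out the shared general impression shrinks the apparent association, which is the logic behind controlling halo statistically.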

Chapter 9

- Problem of defining intelligence:


-> T.R. Taylor; identified 3 independent research traditions that have been employed
to study the nature of human intelligence:
1. Psychometric; examines the elemental structure of a test
2. Information processing; examine the processes that underlie how we learn and
solve problems
3. Cognitive approaches; focuses on how humans adapt to real-world demands
=> there is a correlation between socioeconomic background and scores on all
standardized intelligence tests
=> intelligence tests were initially developed to eliminate subjectivity in the
evaluation of children’s ability.
- Binet’s principles of test construction: Binet defined intelligence as the capacity:
1. To find and maintain a definite direction or purpose
2. To make necessary adaptations to achieve that purpose
3. To engage in self-criticism so that necessary adjustments in strategy can be made.
=> In developing tasks to measure judgment, attention, and reasoning, Binet used
trial and error as well as experimentation and hypothesis-testing procedures. He was
guided by 2 major concepts that to this day underlie not only the Binet scale but also
major modern theories of intelligence: age differentiation and general mental ability
- Age differentiation: simple fact that one can differentiate older children from
younger children by the former’s greater capabilities.
-> Binet searched for tasks that could be completed by between 66% and 75% of the
children of a particular age group and also by a smaller proportion of younger
children but a larger proportion of older ones. Thus Binet eventually assembled a set
of tasks that an increasing proportion of children could complete as a function of
increases in age.
=> With the principle of age differentiation, one could determine the equivalent age
capabilities of a child independent of his chronological age. Equivalent age capability
was eventually called mental age
- General mental ability: Binet was also guided in his selection of tasks by this decision
to measure only the total product of the various separate and distinct elements of
intelligence, that is, general mental ability.
- Spearman’s model of general mental ability: intelligence consists of one general
factor (g) plus a large number or specific factors
-> psychometric g: spearman’s notion of general mental ability
-> positive manifold: according to Spearman resulted from the fact that all tests, no
matter how diverse, are influenced by g.
-> factor analysis: statistical technique developed by Spearman to support the notion
of g. => method for reducing a set of variables or scores to a smaller number of
hypothetical variables called factors. => through factor analysis, one can determine
how much variance a set of tests or scores has in common. This common variance
represents the g factor.
- Implications of general mental intelligence (g): The concept of general intelligence
implies that a person’s intelligence can best be represented by a single score, g, that
presumably reflects the shared variance underlying performance on a diverse set of
tests.
-> however, if the set of tasks is large and broad enough, the role of any given task
can be reduced to a minimum.
- The gf-gc theory of intelligence: there are 2 basic types of intelligence: fluid and
crystallized.
-> fluid intelligence: those abilities that allow us to reason, think, and acquire new
knowledge.
-> Crystallized intelligence: represents the knowledge and understanding that we
have acquired.
- Early Binet scales:
 1905 Binet-Simon scale: individual intelligence test consisting of 30 items
presented in an increasing order of difficulty.
-> 3 levels of intellectual deficiency were designated by terms no longer in use today
because of the derogatory connotations they have acquired.
idiot; the most severe form of intellectual impairment
imbecile; moderate levels of impairment
moron; mildest level of impairment
=> The collection of 30 tasks of increasing difficulty in the Binet-Simon scale provided
the first major measure of human intelligence. Binet has solved 2 major problems of
test construction: he determined exactly what he wanted to measure, and he
developed items for this purpose. He fell short in several other areas.
 1908 scale: retained the principle of age differentiation. Was an age scale, which
means items were grouped according to age level rather than simply one set of items
of increasing difficulty, as in the 1905 scale.
-> when items are grouped according to age level, comparing a child’s performance
on different kinds of tasks is difficult, if not impossible, unless items are exquisitely
balanced as in the fifth edition.
-> Binet had done little to meet one persistent criticism: the scale produces only one
score, almost exclusively related to verbal, language, and reading ability. Binet made
little effort to diversify the range of abilities tapped.
=> main improvement in the 1908 scale was the introduction of concept of mental
age.
=> to summarize the 1908 scale introduced 2 major concepts: the scale format and
the concept of mental age.
 1916, terman’s revision: increased the size of the standardization sample.
Unfortunately, the entire standardization sample of the 1916 revision consisted
exclusively of white native Californian children.
- Intelligence quotient (IQ): uses the subject’s mental age in conjunction with his
chronological age to obtain a ratio score (Stern, 1912). This ratio score presumably
reflected the subject’s rate of mental development.
-> 1916 scale provided the first significant application of the now outdated IQ
concept.
-> in 1916, people believed that mental age ceased to improve after 16 years, so 16
was used as the maximum chronological age.
- 1937 scale: by adding new tasks, developers increased the maximum possible mental
age to 22 years, 10 months.
-> standardization sample was markedly improved; it came from
11 U.S. states representing a variety of regions.
=> perhaps the most important improvement in the 1937 version was the inclusion of
an alternate equivalent form. Forms L and M were designed to be equivalent in terms
of both difficulty and content. With 2 forms, the psychometric properties of the scale
could be readily examined.
=> Problems: reliability coefficients were higher for older subjects than for younger
ones. Reliability figures also varied as a function of IQ level, with higher reliabilities in
the lower IQ ranges and poorer ones in the higher ranges. Scores are most unstable
for young children in high IQ ranges. -> this differential variability in IQ scores as a
function of age created the single most important problem in the 1937 scale.
- 1960 Stanford-Binet revision and deviation IQ: instructions for scoring and test
administration were improved, and IQ tables were extended from age 16 to 18.
Perhaps most important, the problem of differential variation in IQs was solved by
the deviation IQ concepts.
-> as used in the Stanford-Binet scale, the deviation IQ was simply a standard score
with a mean of 100 and a standard deviation of 16
-> scores could be interpreted in terms of standard deviations and percentiles with
the assurance that IQ scores for every age group correspond to the same percentile.
=> the 1960 revision did not include a new normative sample or restandardization; however, by 1972 a new standardization group consisting of a representative sample of 2100 children had been obtained for use with the 1960 revision.
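The deviation IQ can be sketched as a plain standard-score transformation. The age-group mean and SD below are invented illustration values; the mean-100 / SD-16 convention and the "same percentile at every age" property come from the text, and the percentile calculation assumes a normal distribution:

```python
import math

def deviation_iq(raw, age_mean, age_sd, mean=100, sd=16):
    """Standard score with mean 100 and SD 16 (Stanford-Binet convention)."""
    z = (raw - age_mean) / age_sd
    return mean + sd * z

def percentile(iq, mean=100, sd=16):
    """Percentile rank of an IQ under a normal-distribution assumption."""
    z = (iq - mean) / sd
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

# One SD above the age-group mean is IQ 116 for every age group,
# and therefore the same percentile (about the 84th) at every age.
iq = deviation_iq(raw=60, age_mean=50, age_sd=10)
print(iq, round(percentile(iq)))  # -> 116.0 84
```

This is what solved the 1937 scale's differential-variability problem: the transformation forces every age group onto the same distribution, so a given IQ always corresponds to the same percentile.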
- Modern Binet scale: The model for the latest editions of the Binet is far more
elaborate than the Spearman model that best characterizes the original version of
the scale.
->They are based on a hierarchical model. At the top of the hierarchy is g (general intelligence), which reflects the common variability of all tasks.
->At the next level are 3 group factors:
1. Crystallized abilities; reflect learning
2. Fluid-analytic abilities; basic capabilities that a person uses to acquire crystallized
abilities
3. Short-term memory; one’s memory during short intervals
=> the model of the modern Binet represents an attempt to place an evaluation of g
in the context of a multidimensional model of intelligence from which one can
evaluate specific abilities. Intelligence could best be conceptualized as comprising
independent factors, or primary mental abilities.
• Characteristics of the 1986 revision: To continue to provide a measure of general mental ability, the authors of the 1986 revision decided to retain the wide variety of content and task characteristics of earlier versions.
-> in place of the age scale, items with the same content were placed together into any one of 15 separate tests to create point scales.
-> the more modern 2003 5th edition provided a more standardized hierarchical
model with 5 factors.


=> placing together items of similar content in a point scale permits the calculation of
specific scores for each of the 15 tests. Thus, in addition to an overall score that
presumably reflects g, one can obtain scores related to each specific content area.
• Characteristics of the 2003 5th edition: represents an elegant integration of the age scale and point scale formats. First, the nonverbal and verbal scales are equally weighted.
-> the purpose of the routing tests is to estimate the examinee’s level of ability. The
remaining eight subtests are arranged in an age scale format. This means that tasks
of differing content are grouped together on the basis of difficulty.
=> The difference with the 2003 5th edition is that because of the equal weighting of
verbal and nonverbal items, it is possible to summarize an examinee’s score on all
items of similar content. As a result, the 5th edition retains the advantage of the point
scale by allowing examiners to summarize scores within any given content area while
also using a mixture of tasks to maintain an examinee’s interest.
• Psychometric properties of the 2003 5th edition: a major goal of the fifth edition is to tap the extremes in intelligence, the major historical strength of the Binet that had been essentially lost in the fourth edition. The age range touted by the fifth edition spans from 2 to 85+ years of age.
-> the range of possible scores runs from a low of 40 to a high of 160, reestablishing the Binet as one of the most appropriate tests for evaluating extremes in intelligence.
=> the reliability is quite good, test-retest coefficients are likewise excellent.

Chapter 10

- Wechsler intelligence scales:


-> Wechsler emphasized that factors other than intellectual ability are involved in
intelligent behaviour.
-> although Wechsler’s test did not directly measure nonintellective factors, it took these factors into careful account in its underlying theory.
-> Wechsler capitalized on the inappropriateness of the 1937 Binet scale as a
measure of the intelligence of adults.
=> Wechsler correctly noted that the Binet scale’s emphasis on speed, with timed tasks scattered throughout the scale, tended to unduly handicap older adults.
Furthermore, mental age norms clearly did not apply to adults. Finally, Wechsler
criticized the then-existing Binet scale because it did not consider that intellectual
performance could deteriorate as a person grew older.
- Point and Performance scale concepts: most critical differences:
1. Wechsler’s use of the point scale concept rather than an age scale and
2. Wechsler’s inclusion of a performance scale.
• Point scale concept: Recall that from 1908 to 1972, the Binet scale grouped items by age level. Each age level included a group of tasks that could be passed by two thirds to three fourths of the individuals at that age level.
-> In a point scale, credits or points are assigned to each item. An individual receives a specific amount of credit for each item passed. The point scale offers an inherent advantage: it makes it easy to group items of a particular content together.
-> by arranging items according to content and assigning a specific number of points
to each item, Wechsler constructed an intelligence test that yielded not only a total
overall score but also scores for each content area.


• Performance scale concept: Wechsler included an entire scale that provided a measure of nonverbal intelligence: a performance scale. Thus, in addition to measuring intelligence in adults and yielding separate scores, Wechsler’s approach offered a third major advantage over the early Binet scales.
-> The Wechsler scale included 2 separate scales. The verbal scale provided a measure of verbal intelligence, the performance scale a measure of nonverbal intelligence.
=> the concept of a performance scale was far from new. However, Wechsler’s new scale was the first to offer the possibility of directly comparing an individual’s verbal and nonverbal intelligence: both verbal and performance scales were standardized on the same sample, and the results of both scales were expressed in comparable units.
=> a performance scale attempts to overcome biases caused by language, culture, and education. However, performance tasks tend to require a longer interval of sustained effort, concentration, and attention than most verbal tasks.
- Wechsler-Bellevue intelligence scale: Wechsler’s first effort to measure adult intelligence, the Wechsler-Bellevue scale, was poorly standardized.
- Scales, subtests, and indexes of the WAIS-III: Intelligence is the aggregate or global
capacity of the individual to act purposefully, to think rationally and to deal
effectively with his environment. Wechsler believed that intelligence comprised
specific elements that one could individually define and measure; however, these
elements were interrelated, that is, not entirely independent.
-> Wechsler’s definition implies that intelligence comprises several specific
interrelated functions or elements and that general intelligence results from the
interplay of these elements.
=> Wechsler tried to measure separate abilities, which Binet had avoided in adopting
the concept of general mental ability
=> In the WAIS-III and the latest Wechsler scales, Wechsler’s basic approach is maintained: there are individual subtests, each of which is related to a basic underlying skill or ability.
- Verbal subtests of the WAIS-III: 7 types:
1. Vocabulary
2. Similarities
3. Arithmetic
4. Digit span
5. Information
6. Comprehension
7. Letter number sequencing
• vocabulary subtest: the ability to define words is not only one of the best single measures of intelligence but also the most stable. Vocabulary tests appear on nearly every individual test that involves verbal intelligence. The relative stability of the vocabulary scale is one of its most important features.
Because the vocabulary subtest provides a relatively stable estimate of general verbal intelligence, one can use it to evaluate baseline or premorbid intelligence.
• similarities subtest: consists of some 15 paired items of increasing difficulty. The subject must identify the similarity between the items in each pair.
Some items definitely require the subject to think abstractly. This subtest measures
the subject’s ability to see the similarity between apparently dissimilar objects or
things. The character of a person’s thought processes can be seen in many cases.
• arithmetic subtest: contains approximately 15 relatively simple problems. You must be able to retain the figures in memory while manipulating them.
Generally, however, concentration, motivation, and memory are the main factors
underlying performance.
• Digit span subtest: requires the subject to repeat digits, given at the rate of one per second, forward and backward. In terms of intellective factors, the digit span subtest measures short-term auditory memory.
• information subtest: involves both intellective and nonintellective components,
including several abilities to comprehend instructions, follow directions, and provide
a response. Although purportedly a measure of the subject’s range of knowledge,
nonintellective factors such as curiosity and interest in the acquisition of knowledge
tend to influence test scores.
• Comprehension subtest: 3 types of questions. 1) asks the subject what should be
done in a given situation. 2) asks the subject to provide a logical explanation for some
rule or phenomenon. 3) asks the subject to define proverbs
• letter-number sequencing: is supplementary. It is not required to obtain a verbal
IQ score, but it may be used as a supplement for additional information about the
person’s intellectual functioning. It is made up of 7 items in which the individual is
asked to reorder lists of numbers and letters.
- Raw scores, scaled scores, and VIQ: each subtest produces a raw score and has a different maximum total.
-> 2 sets of norms have been derived for this conversion: age-adjusted and reference-
group norms
=> age-adjusted norms: created by preparing a normalized cumulative frequency
distribution of raw scores for each age group
=> reference-group norms: based on the performance of participants in the
standardization sample between ages 20 and 34.
-> to obtain the VIQ, the age-corrected scaled scores of the verbal subtests are summed.
- Performance subtests: 7 types:
1. Picture completion
2. Digit symbol-coding
3. Block design
4. Matrix reasoning
5. Picture arrangement
6. Object assembly
7. Symbol search
• picture completion subtest: the subject is shown a picture in which some
important detail is missing. Asked to tell which part is missing, the subject can obtain
credit by pointing to the missing part. As in other WAIS-III performance subtests,
picture completion is timed.
• digit symbol-coding: requires the subject to copy symbols. The subtest measures
such factors as ability to learn an unfamiliar task, visual-motor dexterity, degree of
persistence, and speed of performance.
• block design: The subject must arrange the blocks to reproduce increasingly difficult designs.
• Matrix reasoning: the subject is presented with nonverbal, figural stimuli. The task is to identify a pattern or relationship between the stimuli. In addition to its role in measuring fluid intelligence, this subtest is a good measure of information processing and abstract reasoning skill.
• picture arrangement subtest: requires the subject to notice relevant details. The subject must also be able to plan adequately and notice cause-and-effect relationships.
• object assembly subtest: consists of puzzles that the subject is asked to put together as quickly as possible.
• symbol search subtest: was added in recognition of the role of speed of information processing in intelligence.
- Performance IQs: as with the verbal subtests, the raw scores for each of the five performance subtests are converted to scaled scores. The performance IQ is obtained
by summing the age corrected scaled scores on the performance subtests and
comparing this score with the standardization sample.
- FIQs: the full-scale IQ is obtained by summing the age-corrected scaled scores of the verbal subtests with those of the performance subtests and comparing the subject to the standardization sample.
- Index scores: 4 types:
1. Verbal comprehension; as a measure of acquired knowledge and verbal reasoning,
the verbal comprehension index might best be thought of as a measure of
crystallized intelligence
2. Perceptual organization; the perceptual organization index, consisting of picture
completion, block design, and matrix reasoning is believed to be a measure of fluid
intelligence
3. Working memory; the notion of working memory is perhaps one of the most
important innovations on the WAIS-III
4. Processing speed; the processing speed index attempts to measure how quickly
your mind works.
-> The WAIS-III provides a rich source of data that often furnishes significant cues for
diagnosing various conditions.
- Verbal-performance IQ comparisons: In providing a measure of nonverbal intelligence in conjunction with a VIQ, the WAIS-III offers an extremely useful opportunity not offered by the early Binet scales.
-> the WAIS-III PIQ aids in the interpretation of the VIQ
-> language, cultural, or educational factors might account for the differences in the
two IQs
-> when considering any test result, we must take ethnic background and multiple
sources of information into account.
- Pattern analysis: one evaluates relatively large differences between subtest scaled
scores. Wechsler reasoned that different types of emotional problems might have
differential effects on the subtest and cause unique score patterns
-> at best, such analysis should be used to generate hypotheses.
- Standardization: The WAIS-III standardization sample consisted of 2450 adults
divided into 13 age groups from 16-17 through 85-89.
- Reliability: The impressive reliability coefficients for the WAIS-III attest to the internal
and temporal reliability of the verbal, performance, and full-scale IQs
-> The standard error of measurement (SEM): is the standard deviation of the distribution of error scores. According to classical test theory, an error score is the difference between the score actually obtained by giving the test and the score that would be obtained if the measuring instrument were perfect.
-> SEM = SD√(1 − r)
=> in practice the SEM can be used to form a confidence interval within which an
individual’s true score is likely to fall. More specifically, we can determine the
probability that an individual’s true score will fall within a certain range a given
percentage of the time.
-> we can see that the smaller SEM for the verbal and full scale IQs means that we
can have considerably more confidence that an obtained score represents an
individual’s true score than we can for the PIQ.
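The SEM formula above and its use for confidence intervals can be sketched as follows; the SD of 15, reliability of .95, and obtained score of 110 are invented illustration values:

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1 - reliability)

def confidence_interval(score, sd, reliability, z=1.96):
    """Range in which the true score falls roughly 95% of the time."""
    e = z * sem(sd, reliability)
    return score - e, score + e

print(round(sem(15, 0.95), 2))  # -> 3.35
lo, hi = confidence_interval(110, 15, 0.95)
print(round(lo, 1), round(hi, 1))  # -> 103.4 116.6
```

The formula makes the text's point directly: the higher the reliability r, the smaller the SEM, and therefore the narrower the interval around an obtained score.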
-> test-retest reliability coefficients for the various subtests have tended to vary widely.
-> Unstable subtests would produce unstable patterns, thus limiting the potential validity of pattern interpretation.
- Validity: Validity of the WAIS-III rests primarily on its correlations with the earlier WAIS-R and with the earlier children’s version where the ages overlap.
- Evaluation of the Wechsler adult scales:
-> as with all modern tests, including the modern Binet, the reliability of the individual subtests is lower, which makes analysis of subtest patterns dubious for the purpose of making decisions about individuals.
-> As with the modern Binet, it incorporates modern multidimensional theories of
intelligence in its measures of fluid intelligence, an important concept. According to
one theory, there are at least 7 distinct and independent intelligences: linguistic, bodily-kinesthetic, spatial, logical-mathematical, musical, interpersonal, and intrapersonal.
The classical view of intelligence reflected in the WAIS-III leaves little room for such
ideas.
- WISC-IV: measures intelligence from ages 6 through 16 years, 11 months.
-> latest version of this scale to measure global intelligence and, in an attempt to
mirror advances in the understanding of intellectual properties, the WISC-IV
measures indexes of the specific cognitive abilities, processing speed, and working
memory.
-> The WISC-IV contains 15 subtests, 10 of which were retained from the earlier WISC-III and 5 entirely new ones. Three subtests used in earlier versions were entirely deleted.
-> Within the WISC-IV verbal comprehension index are the comprehension, similarities, and vocabulary subtests. Within the WISC-IV perceptual reasoning index are the block design, picture concepts, and matrix reasoning subtests.
=> The processing speed index comprises a coding subtest and a symbol search
subtest that consists of paired groups of symbols.
=> The working memory index consists of digit span, letter-number sequencing, and a
supplemental arithmetic subtest.
=> WISC-IV had updated its theoretical underpinnings. In addition to the important
concept of fluid reasoning, there is an emphasis on the modern cognitive psychology
concepts of working memory and processing speed.
=> Through the use of teaching items that are not scored, it is possible to reduce
errors attributed to a child’s lack of understanding.
• Item bias: One of the most important innovations in the WISC-IV is the use of
empirical data to identify item biases.
-> the major reason that empirical approaches are needed is that expert judgments
simply do not eliminate differential performance as a function of gender, race,
ethnicity, and socioeconomic status.
-> The WISC-IV has not eliminated item bias, but the use of empirical techniques is a major step in the right direction.
• Standardization of the WISC-IV: the sample was stratified using age, gender, race,
geographic region, parental occupation, and geographical location as variables.
• Raw scores and scaled scores: The mean scaled score is set at 10 and the standard deviation at 3. Scaled scores are summed for the four indexes and the FSIQ.
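The scaled-score idea can be sketched with a linear transformation; real Wechsler norms come from lookup tables built on the standardization sample, so the norm means, SDs, raw scores, and subtest choices below are purely illustrative:

```python
def scaled_score(raw, norm_mean, norm_sd):
    """Map a raw score onto the scaled-score metric (mean 10, SD 3)."""
    z = (raw - norm_mean) / norm_sd
    return max(1, min(19, round(10 + 3 * z)))  # scaled scores run 1-19

subtests = {
    "block_design":  scaled_score(42, norm_mean=35, norm_sd=7),
    "coding":        scaled_score(48, norm_mean=55, norm_sd=10),
    "symbol_search": scaled_score(31, norm_mean=28, norm_sd=4),
}
index_sum = sum(subtests.values())  # sums like these feed the indexes and FSIQ
print(list(subtests.values()), index_sum)  # -> [13, 8, 12] 33
```

Putting every subtest on the same mean-10 / SD-3 metric is what makes the subtest scores directly comparable and summable.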
• interpretation: The basic approach involves evaluating each of the four major
indexes to examine for deficits in any given area and evaluate the validity of the FSIQ
• Reliability of the WISC-IV: the procedure for calculating reliability coefficients for
the WISC-IV was analogous to that used for the WISC-III and WAIS-III.
-> As usual, test-retest reliability coefficients run a bit below those obtained using the
split-half method. As with the Binet scales and other tests, reliability coefficients are
poorest for the youngest groups at the highest levels of ability.
• WISC-IV validity: the manual presents several lines of evidence of the test’s
validity, involving theoretical considerations, the test’s internal structure, a variety of
intercorrelational studies, factor analytic studies, and evidence based on WISC-IV’s
relationship with a host of other measures.
- WPPSI-III: In 1967, Wechsler published a scale for children 4 to 6 years of age, the WPPSI. In its revised version, the WPPSI-R paralleled the WAIS-III and WISC-III in format, method of determining reliabilities, and subtests.
->Only 2 unique subtests are included:
1. Animal pegs; an optional test that is timed and requires the child to place a colored cylinder into an appropriate hole in front of the picture of an animal.
2. Sentences; an optional test of immediate recall in which the child is asked to
repeat sentences presented orally by the examiner.
-> The latest version, the WPPSI-III, was substantially revised. It can now be used for
children as young as 2 years, 6 months, and has a special set of subtests specifically
for the age range of 2 to 3 years, 11 months.
=> 7 new subtests were added to enhance the measurement of fluid reasoning, processing speed, and receptive and expressive vocabulary.
=> in addition to the extended age range and a special set of core tests for the
youngest children, the WPPSI-III includes updated norms stratified as in the WISC-IV
and based on the October 2000 census. New subtests were added along with two additional composites: the processing speed quotient (PSQ) and the general language composite (GLC).
-> the WPPSI-III has age-specific starting points for each of the various subtests.
-> As with the Binet, instructions were modified to make them easier for young
children, and materials were revised to make them more interesting.
Chapter 13

- Personality characteristics: nonintellective aspects of human behaviour, typically distinguished from mental abilities
-> these characteristics are of vital concern in clinical and counseling settings
- Structured personality tests: attempt to evaluate personality traits, personality types, personality states, and other aspects of personality, such as self-concept.
-> structured refers to the general procedure in which the subject is asked to respond to a written statement
- Projective personality test: the stimulus is ambiguous and the subject has few
guidelines about what type of response is required.
- Deductive approach: comprises the logical-content and the theoretical approach
-> use reason and deductive logic to determine the meaning of a test response.
=> Logical-content strategy: test designer tries to logically deduce the type of content
that should measure the characteristic to be assessed.
-> The principal distinguishing characteristic of this strategy is that it assumes that the test item describes the subject’s personality and behaviour
=> Theoretical strategy: items must be consistent with the theory. Attempts to create
a homogeneous scale and, toward this end, may use statistical procedures such as
item analysis.
- Empirical approach: comprises the criterion-group and the factor analysis method.
-> rely on data collection and statistical analyses to determine the meaning of a test
response or the nature of personality and psychopathology. Attempts to use
experimental research to determine empirically the meaning of a test response.
=> Criterion-group strategy: Test constructors select and administer a group of items
to all the people in this criterion group as well as to a control group that represents
the general population.
->The actual content or face validity of an item in the criterion-group strategy is of little importance. Instead, the approach attempts to determine which items discriminate between the criterion and control groups.
->The next step is to cross-validate the scale by checking how well it distinguishes an independent criterion sample.
-> After a scale has been constructed and cross-validated, the third step in the
criterion approach is to conduct additional research to ascertain empirically what it
means when subjects endorse a large number of items on a particular scale.
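The criterion-group (empirical keying) idea can be sketched as follows. The endorsement rates and the 20-point gap threshold are invented for illustration; a real scale would then be cross-validated on an independent sample, as the text describes:

```python
# Proportion of each group answering "true" to each item (invented data).
criterion = {"item_1": 0.80, "item_2": 0.55, "item_3": 0.30, "item_4": 0.90}
control   = {"item_1": 0.35, "item_2": 0.50, "item_3": 0.28, "item_4": 0.45}

def discriminating_items(crit, ctrl, min_gap=0.20):
    """Keep items whose endorsement rates differ by at least min_gap;
    item content (face validity) plays no role in the selection."""
    return sorted(item for item in crit
                  if abs(crit[item] - ctrl[item]) >= min_gap)

print(discriminating_items(criterion, control))  # -> ['item_1', 'item_4']
```

Note that items 2 and 3 are dropped purely because they fail to separate the groups, regardless of what they appear to measure — the defining feature of this strategy.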
=> Factor-analytic strategy: boils down or reduces data to a small number of descriptive units or dimensions. Factor analysts begin with an empirical database consisting of the intercorrelations of a large number of items or tests.
- Logical content strategy:
-> Woodworth personal data sheet: yielded a single score, providing a global
measure of functioning. Only those recruits who reported many symptoms received
an interview. In this way, the military could concentrate its efforts on the most likely
candidates for rejection.
-> Early Multidimensional Logical Content scales: The Bell Adjustment Inventory attempted to evaluate the subject’s adjustment in a variety of areas such as home life, social life, and emotional functioning.
-> Mooney Problem checklist: contains a list of problems that recurred in clinical case
history data and in the written statements of problems submitted by approximately 4000 high-school students
=> criticisms of the logical content approach: In assuming that one can interpret test items at face value, the logical content strategy also assumes that the subject takes a normal approach to the test, complies with the instructions, reads each item, and answers as honestly as possible.
Even if subjects can provide accurate self-evaluation, they still may not interpret the
test item in the same way as the test constructor or test user, which is also an
implicit assumption of the logical content strategy.
- Criterion group strategy: ->
- Minnesota multiphasic personality inventory (MMPI): is a true-false self-report questionnaire. The validity scales provide information about the person’s approach to testing, such as whether an attempt was made to fake good or to fake bad.
The clinical scales were designed to identify psychological disorders
The content scales consist of groups of items that are empirically related to a specific
content area.
-> purpose: Like Woodworth’s, the purpose of the MMPI and MMPI-2 is to assist in distinguishing normal from abnormal groups. Specifically, the test was designed to aid in the diagnosis or assessment of the major psychiatric or psychological disorders.
-> Original development of the scales: The criterion group used to develop the
original MMPI consisted of psychiatric inpatients at the University of Minnesota
Hospital. -> the use of this control group was perhaps the original MMPI’s greatest
source of criticism.
=> masculinity femininity scale (MF): contained items differentially endorsed by men
and women
=> social introversion scale (Si): measures introversion and extraversion
=> Lie scale (L): designed to detect individuals who attempted to present themselves
in an overly favorable way.
=> K scale: attempts to locate those items that distinguished normal from abnormal
groups when both groups produced a normal test pattern.
=> infrequency scale (F): designed to detect individuals who attempt to fake bad,
consists of those items endorsed by less than 10% of the control group, high F scores
bring the validity of the whole profile into question.
-> Initial interpretations: To deal with multiple-scale evaluations, clinicians made use
of pattern analysis, which the test authors had originally suggested.
-> Meehl’s extension of the empirical approach: The validity of the MMPI was extended by finding homogeneous profile patterns and determining the characteristics of individuals who show these patterns.
Because the original idea of the contrasted-group method was extended by the use of criterion groups, we use the term criterion-group strategy rather than contrasted-group strategy to describe the MMPI and related tests.
Along with an empirical approach, Meehl and others began to advocate a change in
the names of the clinical scales.
The validity scales retained their original names. At this point, MMPI patterns could
have a numerical code.
-> Restandardization MMPI-2: the purpose of the revision was to update and expand
the norms; revise items that were out of date, awkward, sexist, or problematic; and
broaden the item pool to extend the range of constructs that one could evaluate.
Interpretation of the clinical scales remained the same because not more than four
items were dropped from any scale, and the scales were renormed and scores
transformed to uniform T scores.
In developing new norms, the MMPI project committee selected 2900 subjects from 7 geographic areas of the United States.
A major feature of the MMPI-2 is the inclusion of additional validity scales.
The VRIN attempts to evaluate random responding.
The TRIN attempts to measure acquiescence, the tendency to agree or mark true
regardless of content.
-> psychometric properties: the MMPI and MMPI-2 are extremely reliable. Perhaps as
a result of item overlap, intercorrelations among the clinical scales are extremely
high. Because of these high intercorrelations among the scales and the results of
factor analytic studies, the validity of pattern analysis had often been questioned.
Another problem with MMPI and MMPI-2 is the imbalance in the way items are
keyed.
This overwhelming evidence supporting the covariation between demographic factors and the meaning of MMPI and MMPI-2 scores clearly shows that two identical profile patterns can have quite different meanings, depending on the demographic characteristics of each subject. Of course, not all MMPI studies report positive results, but the vast majority attest to its utility and versatility.
-> Current status: The restandardization of the MMPI has eliminated the most serious drawback of the original version, the inadequate control group. The MMPI and MMPI-2 are without question the leading personality tests of the 21st century.
- California psychological inventory 3rd edition: In contrast to the MMPI and MMPI-2,
the CPI attempts to evaluate personality in normally adjusted individuals and thus
finds more use in counseling settings.
-> the test does more than share items with the MMPI, like the MMPI, the CPI shows
considerable intercorrelation among its scales.
Also like the MMPI, true-false scale keying is often extremely unbalanced. Reliability
coefficients are similar to those reported for the MMPI.
=> the CPI is commonly used in research settings to examine everything from
typologies of sexual offenders to career choices. The advantage of the CPI is that it
can be used with normal subjects. The MMPI and MMPI-2 generally do not apply to
normal subjects.
- Factor analytic strategy: ->
- Guilford’s pioneer efforts: One usual strategy in validating a new test is to correlate
the scores on the new test with the scores on other tests that purport to measure
the same entity.
Guilford’s approach was related to this procedure. However, instead of comparing
one test at a time to a series of other tests, Guilford and his associates determined
the interrelationship (intercorrelation) of a wide variety of tests and then factor
analyzed the results in an effort to find the main dimensions underlying all
personality tests.
->Today the Guilford-Zimmerman Temperament survey primarily serves only
historical interest.
- Cattell’s Contribution: Cattell began with all the adjectives applicable to human
beings so he could empirically determine and measure the essence of personality.
-> source traits: the 16 basic dimensions to which Cattell reduced personality -> the Sixteen Personality Factor Questionnaire (16PF)
=> unlike the MMPI and CPI, the 16PF contains no item overlap, and keying is balanced among the various alternative responses.
-> to extend the test to the assessment of clinical populations, items related to psychological disorders have been factor analyzed, resulting in 12 new factors in addition to the 16 needed to measure normal personalities.
-> neither clinicians nor researchers have found the 16PF to be as useful as the MMPI. The claims of the 16PF to have identified the basic source traits of the personality are simply not true.
-> Factor analysis is one of many ways of constructing tests. It will identify only those
traits about which questions are asked, however, and it has no more claim to
uniqueness than any other method.
- Problems with the Factor Analytic Strategy: One major criticism of factor analytic approaches centers on the subjective nature of naming factors. To understand this problem, one must understand that each score on any given set of tests or variables can be broken down into three components:
1. Common variance; the amount of variance a particular variable holds in common with other variables. It results from the overlap of what two or more variables are measuring
2. Unique variance; factors uniquely measured by the variable, some construct
measured only by the variable in question
3. Error variance; variance attributable to error
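The three components above can be expressed numerically. A minimal sketch with hypothetical factor loadings and reliability (all numbers are illustrative, not from the text): the common variance is the sum of squared loadings (the communality), error variance is one minus reliability, and what remains is unique variance.

```python
# Decomposing a test's standardized score variance (total = 1.0) into the
# three components described above, using hypothetical values.

def decompose_variance(loadings, reliability):
    """loadings: correlations of the test with each common factor;
    reliability: proportion of variance that is systematic (not error)."""
    common = sum(l ** 2 for l in loadings)  # communality (h^2)
    error = 1.0 - reliability               # error variance
    unique = reliability - common           # specific (unique) variance
    return common, unique, error

# Hypothetical test: loads .60 and .40 on two common factors, reliability .90
common, unique, error = decompose_variance([0.60, 0.40], 0.90)
print(round(common, 2), round(unique, 2), round(error, 2))  # 0.52 0.38 0.1
```

The three parts always sum back to the total variance of 1.0, which is a quick sanity check on any such decomposition.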
=> important factors may be overlooked when the data are categorized solely on the
basis of blind groupings by computers.
- Theoretical strategy: items are selected to measure the variables or constructs
specified by a major theory of personality. Predictions are made about the nature of
the scale; if the predictions hold up, then the scale is supported.
 Edwards Personal Preference Schedule (EPPS): the theoretical basis for the EPPS is the
need system proposed by Murray, probably the most influential theory in
personality-test construction to date.
Edwards attempted to rate each of his items on social desirability.
Edwards included a consistency scale with 15 pairs of statements repeated in
identical form.
-> ipsative scores: present results in relative terms rather than an absolute totals.
Thus, 2 individuals with identical relative, or ipsative, scores may differ markedly in
the absolute strength of a particular need. Ipsative scores compare the individual
against himself and produce data that reflect the relative strength of each need for
that person; each person thus provides his own frame of reference
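A small sketch of the ipsative idea (the need names and ratings are hypothetical): subtracting each person's own mean turns absolute scores into relative ones, so two people with very different absolute need strengths can show identical ipsative profiles.

```python
# Ipsative scoring: express each need relative to the person's own mean,
# so the person serves as his own frame of reference.

def ipsative(scores):
    mean = sum(scores.values()) / len(scores)
    return {need: s - mean for need, s in scores.items()}

person_a = {"achievement": 9, "affiliation": 6, "autonomy": 3}
person_b = {"achievement": 6, "affiliation": 3, "autonomy": 0}  # weaker overall

print(ipsative(person_a))  # {'achievement': 3.0, 'affiliation': 0.0, 'autonomy': -3.0}
print(ipsative(person_b))  # identical relative profile despite lower absolute scores
```

This is exactly why two individuals with identical ipsative scores may differ markedly in the absolute strength of a particular need.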
-> The EPPS has several interesting features, including its forced-choice method,
which requires subjects to select one of two items.
 Personality Research Form (PRF) and Jackson Personality Inventory (JPI): Like the
EPPS, the PRF and JPI were based on Murray’s theory of needs.
-> strict definitional standards and statistical procedures were used in conjunction
with the theoretical approach.


=> PRF: is intended for research purposes


=> JPI: intended for use on normal individuals to assess various aspects of personality
including interpersonal, cognitive, and value orientations.
-> items for the PRF and JPI are balanced in true-false keying.
 Self concept: what you believe to be true about yourself will strongly affect your
behaviour.
-> several adjective checklists have been developed to evaluate self-concept.
-> to evaluate self-concept, Rogers uses the Q-sort technique, in which a person
receives a set of cards with appropriate self-statements.
=> Rogers’s theory predicts that large discrepancies between the real and ideal selves
reflect poor adjustment and low self-esteem.
- Combination strategies:
 positive personality measurement and the NEO Personality Inventory-Revised
(NEO-PI-R): research suggests that it may be advantageous to evaluate individuals’
positive characteristics in an attempt to understand the resources that an individual
is endowed with and how this endowment affects behaviour and well-being.
 NEO Personality Inventory-Revised (NEO-PI-R): the developers of this test used both factor
analysis and theory in item development and scale construction. Quite ambitious, the
NEO-PI-R attempts to provide a multipurpose inventory for predicting interests,
health and illness behaviour, psychological well-being, and characteristic coping
styles.
-> NEO-PI-R identified 3 broad domains:
1. Neuroticism; defined primarily by anxiety and depression. 6 facets of this domain
are anxiety, hostility, depression, self-consciousness, impulsiveness, vulnerability
2. Extroversion; degree of sociability or withdrawal a person tends to exhibit. 6 facets
are warmth, gregariousness, assertiveness, activity, excitement seeking, positive
emotions.
3. Openness; refers to breadth of experience to which a person is amenable. 6 facets
are fantasy, aesthetics, feelings, actions, ideas, values.
=> NEO-PI-R: has supported what is perhaps becoming one of the most accepted
notions in personality and personality assessment -> 5 factor model of personality
1. Extroversion; degree to which a person is sociable, leader-like, and assertive as
opposed to withdrawn, quiet, reserved.
2. Neuroticism; degree to which a person is anxious and insecure as opposed to calm
and self-confident
3. Conscientiousness; degree to which a person is persevering, responsible, and
organized as opposed to lazy, irresponsible, and impulsive.
4. Agreeableness; degree to which a person is warm and cooperative as opposed to
unpleasant and disagreeable
5. Openness to experience; degree to which a person is imaginative and curious as
opposed to concrete-minded and narrow thinking.
-> conscientiousness has been found to be valid as a positive predictor of
performance in all occupations studied.
-> neuroticism was also negatively correlated with effective styles of coping with
stressful situations.
-> trait openness correlated significantly with crystallized intelligence, and the traits
of openness, agreeableness, and extraversion were found to be beneficial in
predicting success in specific job settings.
- Frequently used measures of positive personality traits:
 Rosenberg self-esteem scale: measures global feelings of self-worth using 10
simple and straightforward statements that examinees rate on a 4-point Likert scale.
The Rosenberg scale was created for use with adult populations.
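Scoring a scale of this type is simple arithmetic: reverse-key the negatively worded items and sum. A hedged sketch (the particular set of reverse-keyed items below is purely illustrative, not the official Rosenberg key):

```python
# Generic Likert scoring: 10 items on a 1-4 scale, with negatively worded
# items reverse-keyed before summing. Item numbering starts at 1.

def likert_total(responses, reversed_items, scale_min=1, scale_max=4):
    total = 0
    for i, r in enumerate(responses, start=1):
        if i in reversed_items:
            r = scale_max + scale_min - r  # e.g. 1 <-> 4, 2 <-> 3
        total += r
    return total

responses = [4, 2, 3, 4, 1, 2, 3, 1, 2, 4]  # hypothetical answers
score = likert_total(responses, reversed_items={2, 5, 6, 8, 9})
print(score)  # 35, within the possible 10-40 range
```

Reverse-keying matters because without it, agreement with a negative statement ("At times I think I am no good at all") would raise rather than lower the self-esteem total.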
 General self-efficacy scale: the GSE was developed to measure an individual’s
belief in his ability to organize resources and manage situations, to persist in the face
of barriers, and to recover from setbacks.
 Ego resiliency scale: this measure of ego resiliency or emotional intelligence was
developed by Block and Kremen (1996). The ego resiliency scale consists of 14 items,
each answered using a 4-point Likert scale to rate statements such as “I am regarded
as a very energetic person,” “I get over my anger at someone reasonably quickly,” and
“Most of the people I meet are likeable.”
 Dispositional resilience scale: developed by Bartone, Wright, Ingraham, and Ursano
(1989) to measure “hardiness”, which is defined as the ability to view stressful
situations as meaningful, changeable, and challenging.
 Hope scale: Snyder (1991) proposed a cognitive model that characterizes hope as
goal-driven energy in combination with the capacity to construct systems to meet
goals.
-> The hope scale measures the components of this cognitive model
 Life orientation test-revised (LOT-R): consists of 10 items developed to assess
individual differences in generalized optimism versus pessimism
 Satisfaction with life scale (SWLS): the 5-item SWLS was developed as a multi-item scale
for the overall assessment of life satisfaction as a cognitive judgmental process,
rather than for the measurement of specific satisfaction domains.
-> the SWLS has been found to be positively correlated with healthy psychological
and social functioning and negatively associated with measures of psychological
distress.
 positive and negative affect schedule (PANAS): the instrument has 2 scales, one
for positive affect and one for negative affect
 Coping inventory for stressful situations (CISS): understanding individuals’ styles
of coping is key to understanding components of their personality
 Core self-evaluations: the construct is not simply a descriptive system but is
explanatory of the dispositional source of life satisfaction and performance.
-> In light of all evidence to date, the core self-evaluation construct remains a better
predictor of job performance than do individual traits and should give direction to
further evaluation.

Hoofdstuk 14

- Projective hypothesis: proposes that when people attempt to understand an
ambiguous or vague stimulus, their interpretation of that stimulus reflects their
needs, feelings, experiences, prior conditioning, thought processes, and so forth.
-> although what the subject finally sees in a stimulus is assumed to be a reflection of
personal qualities or characteristics, some responses may be more revealing than
others.


-> examiners can never draw absolute, definite conclusions from any single response
to an ambiguous stimulus. They can only hypothesize what a test response means.
-> problem with all projective tests is that many factors can influence one’s response
to them
->the validity of projective tests has been questioned.
- Rorschach inkblot test:
 stimuli, administration, interpretation: Rorschach is an individual test. In the
administration procedure, each of the 10 cards is presented to the subject with
minimum structure.
-> examiner must not give any cues that might reveal the nature of the expected
response.
-> the lack of clear structure or direction with regard to demands and expectations is
a primary feature of all projective tests. The idea is to provide as much ambiguity as
possible so that the subject’s responses reflect only the subject.
=> administered twice:
1. Free associations: examiner presents the cards one at a time
2. Inquiry: examiner shows the cards again and scores the subject’s responses.
-> in scoring for location, the examiner must determine where the subject’s
perception is located on the inkblot.
=> having ascertained the location of a response, the examiner must then determine
what it was about the inkblot that led the subject to see that particular percept ->
determinant.
-> identifying the determinant is the most difficult aspect of Rorschach
administration
=> Rorschach scoring is obviously difficult and complex. Use of the Rorschach
requires advanced graduate training
=> Confabulatory responses: the subject overgeneralizes from a part to a whole.
 Psychometric properties:
-> unreliable scoring: results from both meta-analysis and application of Kuder-
Richardson techniques reveal a higher level of Rorschach reliability than has generally
been attributed to the test.
-> Problem of “R”: in evaluating the Rorschach, keep in mind that there is no
universally accepted method of administration.
=>Like administration, Rorschach scoring procedures are not adequately
standardized. Without standardized scoring, determining the frequency, consistency,
and meaning of a particular Rorschach response is extremely difficult
=>One result of unstandardized Rorschach administration and scoring procedures is
that reliability investigations have produced varied and inconsistent results.
=>Whether the problem is lack of standardization or poorly controlled experiments,
there continues to be disagreement regarding the scientific status of the Rorschach.
- Alternative inkblot test: the Holtzman: In this test, the subject is permitted to give
only one response per card. Administration and scoring procedures are standardized
and carefully described.
-> there are several factors that contributed to the relative unpopularity of the
Holtzman, the most significant of which may have been Holtzman’s refusal to
exaggerate claims of the test’s greatness and his strict adherence to scientifically


founded evidence of its utility. The main difficulty with the Holtzman as a
psychometric instrument is its validity.
- Thematic apperception test (TAT): used more than any other projective test. Based
on Murray’s theory of needs, whereas the Rorschach is basically atheoretical.
 Stimuli, administration, interpretation: TAT is more structured and less ambiguous
than the Rorschach. TAT stimuli consist of pictures that depict a variety of scenes.
-> standardization of the administration and especially the scoring procedures of the
TAT is about as poor as, if not worse than, that of the Rorschach.
-> As with the Rorschach and almost all other individually administered tests, the
examiner records the subject’s responses verbatim. The examiner also records the
reaction time.
-> There are by far more interpretive and scoring systems for the TAT than for the
Rorschach.
-> Almost all methods of TAT interpretation take into account the hero, needs, press,
themes and outcomes. The hero is the character in each picture with whom the
subject seems to identify.
-> of particular importance are the motives and needs of the hero.
=> in TAT interpretation, press refers to the environmental forces that interfere with
or facilitate satisfaction of the various needs. Again, factors such as frequency,
intensity, and duration are used to judge the relative importance of these factors.
The frequency of various themes and outcomes also indicates their importance.
=> Although a story reflects the storyteller, many other factors may influence the
story. Therefore, all TAT experts agree that a complete interview and a case history
must accompany any attempt to interpret the TAT
 psychometric properties: Subjectivity affects not only the interpretation of the
TAT, but also analysis of the TAT literature
- Alternative apperception procedures: Family of man photo-essay collection;
provides a balance of positive and negative stories and a variety of action and energy
levels for the main character. In comparison, the TAT elicits predominantly negative
and low energy stories.
-> The children’s apperception test: was created to meet the special needs of
children ages 3 through 10
- Nonpictorial projective procedures: words or phrases sometimes provide the
stimulus, as in the word association test and incomplete sentence tasks.
 word association test: the psychologist says a word and you say the first word that
comes to mind. The use of word association tests dates back to Galton and was first
used on a clinical basis by Jung and Kent and Rosanoff.
 sentence completion tasks: these tasks provide a stem that the subject is asked to
complete.
-> as with all projective techniques, the individual’s response is believed to reflect
that person’s needs, conflicts, values, and thought processes.
-> many incomplete sentence tasks have scoring procedures.
-> sentence completion procedures are used widely in clinical as well as in research
settings.
 figure drawing tests: another set of projective tests uses expressive techniques, in
which the subject is asked to create something, usually a drawing.


-> projective drawing tests are scored on several dimensions, including absolute size,
omissions, and disproportions.

Hoofdstuk 15

- Cognitive behavioural assessment procedures:


 the rationale for cognitive behavioural assessment: one major function of
traditional psychological tests is to ascertain the possible underlying causes of
disordered behaviours.
-> In cognitive behavioural assessment, by contrast, the behaviours, thought
processes, or physiological responses that define a disordered condition are
considered the real problem.
-> This is not to say that cognitive behavioural assessment denies, ignores, or negates
the causes of psychological disorders.
-> cognitive behavioural assessment often includes an evaluation of the internal and
external factors that lead to and maintain disordered behaviour as well as an
evaluation of the behaviour itself.
-> cognitive behavioural assessment is more direct than traditional psychological
tests.
-> In cognitive behavioural assessment, psychologists analyze preceding and
subsequent factors and focus on a direct change in overt behaviour, thoughts, or
physiological processes.
 Procedures based on operant conditioning:
-> In cognitive behavioural assessment based on operant conditioning, one must first
identify the critical response or responses involved in the disorder.
-> after attempting to increase or decrease the behaviour psychologists observe the
effect of the intervention on the behaviour in question relative to the baseline
=> in each case, one first calculates a baseline. Then one implements an intervention.
Finally, one observes the effects of this intervention on the baseline
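The baseline -> intervention -> observation sequence reduces to comparing frequency counts before and after intervening. A minimal sketch with made-up daily counts of a target behaviour:

```python
# Single-case operant design: compare the behaviour's frequency during the
# baseline phase with its frequency during the intervention phase.

def mean(xs):
    return sum(xs) / len(xs)

baseline = [12, 14, 11, 13, 12]     # hypothetical counts per day, pre-intervention
intervention = [9, 7, 6, 5, 4]      # counts after the intervention begins

change = mean(intervention) - mean(baseline)
print(f"baseline mean {mean(baseline):.1f}, "
      f"intervention mean {mean(intervention):.1f}, change {change:+.1f}")
```

A negative change for an unwanted behaviour (or a positive one for a desired behaviour) relative to baseline is the evidence that the intervention had an effect.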
 self-report techniques: one attempt to deal with the problems inherent in
observation is the self-report technique. The typical self-report is a list of statements
about particular situations
-> cognitive behavioural procedures focus on situations that lead to particular
response patterns; that is, situations are the primary determinant of behaviour.
Traditional self-report procedures focus on relatively enduring internal characteristics
of the individual that lead to particular response patterns.
=> The fear survey schedule: Items are typically related to situations that involve fear
and avoidance behaviours. Subjects rate each item according to the degree to which
they experience that particular fear.
=> assertiveness: the ABSS can help determine whether you can be assertive if
necessary, situations in which you might have difficulty being assertive, and your
personal attitude towards assertiveness.
=> Self-report battery: primary scales request general information. Secondary scales
yield information about the need for specific techniques, tertiary scales yield
information about specific problems.
=> evaluation of self-report procedures: all of the problems associated with face
validity – subject capacity and willingness to be truthful, response bias, poor


reliability, poor validity, lack of norms – usually plague cognitive behavioural self-
report techniques.
 Kanfer and Saslow’s functional approach: Kanfer and Saslow are among the most
important pioneers in the field of cognitive behavioural assessment. These authors
propose what they call a functional approach to assessment.
-> the functional approach adheres to the assumptions of the learning approach; that
is, it assumes that both normal and disordered behaviours develop according to the
same laws and differ only in extremes.
-> Behavioural deficits are classes of behaviours described as problematic because
they fail to occur with sufficient frequency, with adequate intensity, in appropriate
form, or under socially expected conditions
-> besides isolating behavioural excesses and deficits, a functional analysis involves
other procedures, including clarifying the problem and making suggestions for
treatment.
 dysfunctional attitude scale: a major pillar of cognitive behavioural assessment
that focuses primarily on thinking patterns rather than overt behaviour is A.T. Beck’s
cognitive model of psychopathology -> model is based on schemas, which are
cognitive frameworks or organizing principles of thought.
-> Beck’s theory holds that dysfunctional schemas predispose an individual to
develop pathological behaviours.
-> to evaluate negative schemas, Beck and colleagues have developed the
Dysfunctional Attitude Scale.
 irrational beliefs test: According to the cognitive viewpoint, human behaviour is
often determined by beliefs and expectations rather than reality.
-> in view of the influence of beliefs and expectations, several cognitive behavioural
tests have been developed to measure them.
 cognitive functional analysis: what people say to themselves also influences
behaviour. -> interestingly, positive and negative self-statements do not function in
the same way.
-> treatment generally involves identifying and then eliminating negative self-
statements rather than increasing positive self-statements
-> The premise underlying a cognitive functional analysis is that what a person says to
himself plays a critical role in behaviour. The cognitive functional analyst is thus
interested in internal dialogue such as self-appraisals and expectations.
-> If thoughts influence overt behaviour, then modifying one’s thoughts can lead to
modifications in one’s actions. In other words, to the extent that thoughts play a role
in eliciting or maintaining one’s actions, modification of the thoughts underlying the
actions should lead to behavioural changes.
-> parallel to Meichenbaum’s technique of cognitive functional analysis are
self-monitoring procedures and devices that allow a person to observe and record his
own behaviour.
-> In the simplest case, an individual must record the frequency of a particular
behaviour, that is, to monitor it so that he becomes aware of the behaviour.
-> some self monitoring procedures are quite sophisticated.
-> these self monitoring assessment tools are limited only by the imagination of the
practitioner.
- Psychophysiological procedures: physiological data are used to draw inferences
about the psychological state of the individual. Haynes stated, a fundamental tenet
of psychophysiological assessment is that social, behavioural, cognitive, and
emotional phenomena are often a function of, and are often reflected in,
physiological processes.
 physiological variables with treatment implications: this type of assessment
represents a quantum leap from traditional procedures, which depend on voluntary
responses from subjects. In addition, as with other methods of behavioural
assessment, psychophysiological assessment has direct implications for treatment.
 evaluation of psychophysiological techniques: Support for psychophysiological
assessment has come from investigations that have revealed a systematic
covariation between measurable physiological processes and cognitive processes.
-> one of the most serious problems in psychophysiological assessment concerns
artifacts.
- Computers and psychological testing: for testing, one can use computers in 2 basic
ways:
1. To administer, score, and even interpret traditional tests
2. To create new tasks and perhaps measure abilities that traditional procedures
cannot tap.
 computer assisted interview: The computer assisted interview has been used for
everything from comprehensive behavioral psychological assessment and diagnostics
to special topic evaluations
-> much of the research has indicated that the validity of computer-administered
interviews is equal to or better than that of paper and pencil forms.
-> computer interview success may exceed the interview accomplishments of some
clinicians because the computer programs ask more standardized questions, covering
areas that a clinician may neglect.
=> more important, computer administered interviews appear to reduce the
likelihood of clients skewing their responses in an attempt to appear socially
acceptable to the clinician.
=> in fact, people are sometimes more frank and candid in response to a computer
than to a skilled therapist.
 computer administered tests: a small number of psychological tests that measure
negative affect tend to produce different results than their paper and pencil forms.
-> computer administration is generally less time-consuming for both the individual
being tested and the test administrator, more cost-effective, better accepted by test
takers who are adults or children, and often more accurate.
-> the finer subtleties of human communication cannot be read by a computer
program. Body language that may suggest hesitation or distortion of the truth to a
human would not be captured by a computer. The use of subtle techniques by the
clinician also cannot be emulated by a computer.
 computer diagnosis, scoring, and reporting of results: the effectiveness of
computer scoring, computer-generated diagnoses, and narrative reports has been a
subject of controversy since their inception. Projective tests have also been
successfully scored by computer.
-> in the field of educational testing, sophisticated computer programs have been
designed to score essays.
-> the next generation of computerized essay scoring is the intelligent essay assessor
(IEA), which is used to evaluate essays according to content.


-> E-rater analyzes the structure of the essay in the same fashion as PEG and
measures the content of the essay as does the IEA
=> because e-rater uses a natural language processing technique, detailed
information about this approach to scoring may aid a test taker who is unskilled as a
writer in producing an essay that is scored high.
=> the proficiency of computers to score essays consistently has barely risen to a
standard that is rather low and especially problematic when considering the
importance of some high stakes tests.
 internet usage for psychological testing: problems concerning the inability to
standardize testing conditions when participants are testing at different locations,
using different types of computers, and with different amounts of distraction in their
environments may be compensated for by the massive sample sizes available to web-
based laboratories. Davis has shown that results gathered from web-based
participants in different environments did not covary.
 computerization of cognitive behavioural assessment:
7 major applications of computers to cognitive behavioural assessment:
1. Collecting self-report data
2. Coding observational data
3. Directly recording behaviour
4. Training
5. Organizing and synthesizing behavioural assessment data
6. Analyzing behavioural assessment data
7. Supporting decision making
-> once computerized, such questionnaires can be easily administered and
immediately scored by the computer, thus saving valuable professional time.
-> as with other computer-based psychological assessments, computer-based
cognitive behavioural evaluations appear to produce results equivalent to their paper
and pencil counterparts.
-> the use of computers in cognitive behavioural therapy substantially improves the
quality and effectiveness of traditional techniques.
-> computer programs can also generate schedules of reinforcement including fixed
ratio, variable ratio, fixed interval, and variable interval schedules.
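As an illustration of such schedule generation (a hedged sketch, not any particular package's API): ratio schedules reinforce after a count of responses, with fixed schedules using exactly every n-th response and variable schedules using a random count that averages around a mean.

```python
# Generating reinforcement decisions for two of the schedule types named above.
import random

def fixed_ratio(n, responses):
    """Reinforce every n-th response (FR-n)."""
    return [i % n == 0 for i in range(1, responses + 1)]

def variable_ratio(mean_n, responses, rng=None):
    """Reinforce after a random number of responses averaging roughly mean_n (VR)."""
    rng = rng or random.Random(0)
    out, until_next = [], rng.randint(1, 2 * mean_n - 1)
    for _ in range(responses):
        until_next -= 1
        out.append(until_next == 0)
        if until_next == 0:
            until_next = rng.randint(1, 2 * mean_n - 1)
    return out

print(fixed_ratio(3, 9))  # every third response reinforced
```

Interval schedules (fixed and variable) would follow the same pattern with elapsed time replacing the response count.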
 test possible only by computer: virtual reality technology is ideal for safely and
sufficiently exposing clients to the objects of their phobias while evaluating
physiological responses.
-> studies that have measured physiological responses have found that controlled
exposure to virtual environments can desensitize individuals to the object of their
fear.
-> because of the relative safety of facing one’s fear in a virtual environment, because
virtual reality reduces the possibility of embarrassment to clients, and because virtual
reality requires less time and money to conduct than does in vivo desensitization, it is
seen as ideal for the treatment and measurement of phobic responses.
 computer adaptive testing: the selection of only items necessary for the
evaluation of the test taker limits the number of items needed for an exact evaluation.
Computer-adaptive tests have been found to be more precise and efficient than
fixed-item tests.
-> One benefit that is enjoyed by test takers is the decrease in time needed for test
taking. In addition, efficiency is increased as scores are immediately recorded and
made available. Expenses are reduced because no highly trained test administrators
are needed because the computer acts as administrator. Finally, computer-adaptive
tests are self-paced and provide accurate scores for test takers whose abilities range
from gifted to impaired.
-> however, there are limitations to computer-adaptive tests.
-> regardless of the drawbacks, computer-adaptive testing is highly effective and
frequently used.
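A deliberately simplified sketch of the adaptive idea (real computer-adaptive tests use item response theory to select items and estimate ability; the fixed step-size rule here is only illustrative): present the unused item whose difficulty is closest to the current ability estimate, then raise the estimate after a correct answer and lower it after an incorrect one.

```python
# Toy adaptive item selection: item difficulties and abilities share one scale.

def adaptive_test(item_difficulties, answer_fn, n_items=5, step=0.5):
    ability, remaining = 0.0, list(item_difficulties)
    for _ in range(n_items):
        # pick the unused item closest in difficulty to the current estimate
        item = min(remaining, key=lambda d: abs(d - ability))
        remaining.remove(item)
        ability += step if answer_fn(item) else -step
    return ability

# Simulated examinee who answers correctly whenever item difficulty <= 1.0
pool = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
print(adaptive_test(pool, lambda d: d <= 1.0))  # 0.5
```

Because each answer steers the next item choice, the examinee is never asked many items far above or below his level, which is why adaptive tests need fewer items than fixed-item tests.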
 psychophysical and signal detection procedures: in these procedures, a signal is
presented and the subject is required to report whether he saw it.
-> Saccuzzo and colleagues suggest that one can use psychophysical methods to
evaluate psychological disorders and perhaps detect them in their early stages.
=> after the information is registered, it theoretically enters a brief perceptual
memory system, which Neisser calls iconic storage. The person does not become
consciously aware of the stimulus until it is transferred to the higher brain centers,
where it is compared with previous learning. The rate of this transfer of information
is the speed of information processing.
=> the speed of information processing can be determined by finding this minimum
interval. Furthermore, one can often use this information for diagnostic purposes
=> Psychophysical and signal-detection procedures have also been applied to ability
testing.
=> the signal-detection approach, as yet in its infancy, offers many advantages over
other procedures. Scoring can be simplified, administration can be easily
standardized, and the effects of the examiner can be minimized.
=> with standard software, independent investigators can readily verify results from
psychophysical and signal detection procedures.
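The standard sensitivity index d' from signal-detection theory quantifies how well a subject separates signal-present from signal-absent trials; it is computed from the hit rate and the false-alarm rate (the rates below are hypothetical):

```python
# d' = z(hit rate) - z(false-alarm rate), using the inverse normal CDF.
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Subject detected the signal on 84% of signal trials but also reported a
# signal on 16% of noise trials:
print(round(d_prime(0.84, 0.16), 2))  # about 1.99
```

A d' of 0 means the subject cannot distinguish signal from noise at all; larger values mean better discrimination, independent of the subject's response bias.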

Hoofdstuk 17

- Clinical neuropsychology: the scientific discipline that studies the relationship
between behavior and brain functioning in the realms of cognitive, motor, sensory,
and emotional functioning.
-> the activities of neuropsychologists include the identification, description,
multivariate quantification, and treatment of diseases of the brain and spinal cord.
-> neuropsychology is a field of study that actively attempts to relate brain
dysfunction and damage to observable and measurable behavioral functioning.
-> neuropsychology is able to detect problems that are often missed even with the
latest neuroimaging devices
=> clinical neuropsychology has developed rapidly over the last few decades
=> neuropsychologists are quite specialized.
=> neuropsychologists focus mainly on brain dysfunction, but some are actively
developing interventions for those who suffer brain injuries or related problems.
Neuropsychologists also study how cognitive processes are affected by mental illness.
- Developmental neuropsychology: testing is typically done as part of a complex
evaluation.
-> neuropsychological assessment of children presents unique challenges.


-> neuropsychological tests for children differ widely:
1. One category of measures tests general development and adaptive functioning.
2. Another group of measures estimates attention and executive function.
-> 4 different factors of mental processing and related them to specific anatomical
regions in the brain:
1. Focus execute: child’s ability to scan information and respond in a meaningful way
2. Sustain: child’s capacity to pay close attention for a defined interval of time.
3. Encode: related to information storage, recall, and mental manipulation.
4. Shift: the ability to be flexible.
=> dyslexia: specific reading disorder characterized by difficulties in decoding single
words.
-> federal law now requires that children with specific disabilities receive
individualized instructional programs and special attention
-> neuropsychological evaluations have been used for a variety of other purposes.
-> another applications of neuropsychological testing is to determine the seriousness
of concussions among athletes
=> other clinical neuropsychologists have been busy identifying the cognitive
consequences of early brain lesions.
=> developmental neuropsychology is a difficult field because it requires several
levels of assessment:
1. Application of formal tests to determine the nature of the problem
2. Assessment of the environment
3. & 4. Formulation of treatment plans, which involve a prediction of the short and
long term consequences of the brain problem and the chances that intervention will
make a difference.
5. Availability of resources
6. Development of a realistic treatment plan that considers the information gained in
step 1-5.
7. Evaluating progress made in the course of clinical care.

Hoofdstuk 17

- Anxiety: an emotional state marked by worry, apprehension and tension.


-> an observable reaction caused by exposure to stressful situations
- State anxiety: an emotional reaction that varies from one situation to another.
- Trait anxiety: personality characteristic
- State-trait anxiety inventory (STAI): provides 2 separate scores: one for state anxiety
(A-state) and another for trait anxiety (A-trait)
-> useful because behaviour is influenced by both situations and personality traits.
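The two-score idea behind inventories like the STAI can be sketched as a small scoring routine. Everything concrete here is hypothetical: the 1-4 response range, the item values, and which item positions are reverse-keyed are invented for illustration and are not the published STAI key.

```python
# Hypothetical scoring sketch for a two-scale inventory such as the STAI.
# Item responses, the 1-4 range, and the reverse-keyed positions are invented
# for illustration; they are NOT the published STAI scoring key.

def score_scale(responses, reverse_items, min_r=1, max_r=4):
    """Sum 1-4 item responses, reverse-keying the listed item positions."""
    total = 0
    for i, r in enumerate(responses):
        if i in reverse_items:
            r = (min_r + max_r) - r  # e.g. 1 -> 4, 4 -> 1
        total += r
    return total

# Separate response vectors yield separate A-state and A-trait scores.
a_state = score_scale([2, 4, 1, 3], reverse_items={2})   # item 2 reverse-keyed
a_trait = score_scale([1, 2, 2, 4], reverse_items=set())
print(a_state, a_trait)  # -> 13 9
```

The point of the sketch is only that one instrument can return two independent scores, one tied to the situation (A-state) and one to the person (A-trait).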
- Measures of anxiety: anxiety can be viewed as a drive, or motivational state, that
becomes manifest in two different types of responding: task-relevant and task-irrelevant
-> task relevant responses: directed towards accomplishing the task at hand
-> task irrelevant responses: begin to respond in a way that interferes with their
performance.
 test anxiety questionnaire (TAQ): the theory distinguishes between 2 different drives,
or motivational states, that operate in test-taking situations. One is the learned task
drive, which is the motivation to emit responses relevant to the task at hand. The
other is the learned anxiety drive, made up of task-relevant responses and task-
irrelevant responses
-> reliability of the TAQ is high
 test anxiety scale (TAS): one early criticism of the TAQ was that it dealt with state
anxiety rather than trait anxiety
-> the focus on the test anxiety problem shifts from the situation in the TAQ to the
person in the TAS. Although the 2 measures are quite similar and are indeed highly
correlated, one measure assesses anxiety associated with situations, whereas the
other determines which people are highly test anxious.
-> the TAS does make meaningful distinctions among people; results also suggest that
school performance may be associated with personality characteristics other than
intelligence.
-> the TAS shows that a combination of high trait anxiety and high environmental
anxiety produces the most test anxiety.
-> many physical effects are associated with test anxiety.
 other measures of test anxiety: test anxiety has 2 components: emotionality and
worry.
-> emotionality, the physical response to test-taking situations, is associated with
conditions such as accelerated heart rate and muscle stiffness.
-> worry: is the mental preoccupation with failing and with the personal
consequences of doing poorly.
-> this theory proposes that the emotionality component is a state, or situational,
aspect.
-> achievement anxiety test (AAT): an 18-item scale that gives scores on two different
components of anxiety: facilitating and debilitating
=> debilitating anxiety: the anxiety that all of the other scales attempt to measure, or
the extent to which anxiety interferes with performance on tests -> is harmful.
=> facilitating anxiety: a state that can motivate performance; it gets one worried
enough to study hard -> is helpful.
- Measures of coping: several measures have been developed to assess the ways in
which people cope with stress.
-> problem-focused strategies involve cognitive and behavioral attempts to change
the course of the stressor; these are active methods of coping. Emotion-focused
strategies do not attempt to alter the stressor but instead focus on ways of dealing
with the emotional responses to stress.
- Ecological momentary assessment (EMA): one of the advantages of EMA is that the
information is collected in the subject's natural environment.
- Quality of life assessment: among the many definitions of health, we find two
common themes:
1. Everyone agrees that premature mortality is undesirable, so one aspect of health is
the avoidance of death
2. Quality of life is important. In other words, disease and disability are of concern
because they affect either life expectancy or life quality.
-> many major as well as minor diseases are evaluated in terms of the degree to
which they affect life quality and life expectancy. => nearly all major clinical trials in
medicine use quality of life assessment measures.

- Health related quality of life: many of the measures examine the effect of disease or
disability on performance of social role, ability to interact in the community, and
physical functioning.
-> there are two major approaches to quality of life assessment: psychometric and
decision theory
=> psychometric: attempts to provide separate measures for the many different
dimensions of quality of life.
=> decision theory: attempts to weight the different dimensions of health in order to
provide a single expression of health status.
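The decision-theory idea of collapsing the dimensions into one number can be illustrated with a toy weighted index. The dimension names, scores, and weights below are invented for this sketch; real instruments derive their weights from preference studies rather than assigning them by hand.

```python
# Sketch of a decision-theory style health index: weight each quality-of-life
# dimension and collapse them into a single 0-1 expression of health status.
# Dimension names, scores, and weights are illustrative, not from any
# published instrument.

def health_index(scores, weights):
    """Weighted average of 0-1 dimension scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[d] * weights[d] for d in weights)

weights = {"mobility": 0.4, "social_role": 0.3, "symptoms": 0.3}
scores  = {"mobility": 0.8, "social_role": 1.0, "symptoms": 0.5}
print(round(health_index(scores, weights), 2))  # -> 0.77
```

Contrast this with the psychometric approach, which would report the three dimension scores separately instead of the single 0.77.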

Hoofdstuk 18

- Personnel psychology from the employee’s perspective: fitting people to jobs:
 Myers-Briggs type indicator: theoretically constructed test based on Carl Jung's
theory of psychological types.
-> Jung argued that although we must strive for balance in the four modes, each
person tends to emphasize one way of experiencing the world over the others.
-> the purpose of the Myers-Briggs test is to determine where people fall on the
introversion-extroversion dimension and on which of the four modes they rely most.
 tests for use in industry: Wonderlic personnel test (WPT): a quick and stable
paper-and-pencil intelligence test with extensive norms.
-> to measure potential ability for specific vocations, one can choose from several
fine tests. The differential aptitude test (DAT) is especially useful in assessing clerical
competence.
- Measuring characteristics of the work setting:
 the social ecology approach: one of the most important areas in social ecology is
the study of behavioural settings.
-> study of behavioural settings reveals a great deal about the social rules of the
environment.
-> behavioural settings are truly self-regulating ecologies. When a component of the
system is missing, the activities in the program are changed to correct the imbalance.
=> to avoid social condemnation, people must act according to the rules for a given
behavioral setting.
=> the study of behavioural settings also involves examining the relationship
between work satisfaction and the requirements of the job.
=> to understand the relationship between work setting and satisfaction, one must
consider many aspects of the ecology, including the workload, coping strategies, and
the duration of work overload. One must also make a precise description of the work
environment.
 classifying environments: some of the earliest work in the field of environmental
psychology involved building elaborate systems to classify the characteristics of
environments that had been shown to affect individual or group behaviour.
-> characteristics of the people in one’s environment affect one’s behaviour.
-> Lemke & Moos have expanded this work by creating a multiphase environmental
assessment procedure (MEAP)
=> behavioural settings and social environments are coming to be recognized as
important factors in job and personal satisfaction.

- Job analysis:
-> checklists: used by job analysts to describe the activities and working conditions
usually associated with a job title.
-> there are methodological problems with checklists because it is sometimes difficult
to determine if unchecked items were intentionally omitted or if the form was left
incomplete.
-> in contrast to Moos’s environment scales, checklists do not predict well whether
someone will like a particular job environment.
=> critical incidents: observable behaviour that differentiate successful from
unsuccessful employees.
=> observation: another method for learning about the nature of the job. Information
gathered through observational methods can sometimes be biased because people
change their behaviour when they know they are being watched.
=> interviews: also be used to find out about a job
=> questionnaires: commonly used to find out about job situations.
==> job content can be described by both job-oriented and work-oriented tasks. The
system includes 3 categories:
1. Worker requirements; skills, knowledge, abilities
2. Experience requirements; training, licensure.
3. Job requirements; work activities, work context, characteristics of the
organization.
- Measuring the person-situation interaction: the interactionists support their position
by reporting the proportion of variance in behaviour explained by person, by
situation, and by the interaction between person and situation.
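The variance-partition argument can be illustrated with a toy two-way decomposition (one observation per person-situation cell, so the leftover term plays the role of the interaction). The numbers are invented; only the arithmetic of splitting total variance into person, situation, and interaction shares is the point.

```python
# Toy person x situation variance decomposition. With one observation per
# cell, the residual sum of squares is the person-by-situation interaction.
# Data values are invented for illustration.

data = [  # rows = persons, columns = situations
    [4.0, 8.0],
    [2.0, 7.0],
    [6.0, 9.0],
]
n_p, n_s = len(data), len(data[0])
grand = sum(sum(row) for row in data) / (n_p * n_s)
p_means = [sum(row) / n_s for row in data]
s_means = [sum(data[i][j] for i in range(n_p)) / n_p for j in range(n_s)]

ss_person = n_s * sum((m - grand) ** 2 for m in p_means)
ss_situation = n_p * sum((m - grand) ** 2 for m in s_means)
ss_total = sum((data[i][j] - grand) ** 2
               for i in range(n_p) for j in range(n_s))
ss_interaction = ss_total - ss_person - ss_situation

for name, ss in [("person", ss_person), ("situation", ss_situation),
                 ("interaction", ss_interaction)]:
    print(name, round(ss / ss_total, 2))
# -> person 0.26 / situation 0.71 / interaction 0.03
```

In this toy data most of the variance is situational; the interactionist claim is that in real behavioural data the interaction share is often substantial.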
-> the interaction position explains only some of the people some of the time
-> to help predict more of the people more of the time, Bem and Funder introduced
the template-matching technique, a system that takes advantage of people's ability
to predict their own behaviour in particular situations. The system attempts to match
personality to a specific template of behaviour.
-> the probability that a particular person will behave in a particular way in a
situation is a function of the match between his characteristics and a template.
-> the concept of different strokes for different folks seems like a good way to
structure one’s search for the right job, the right apartment, or the right
psychotherapist. However, this approach has some problems:
1. There are an enormous number of combinations of persons and situations
2. Research has not yet supported specific examples of matching traits and
situations.
=> the fit between persons and situations may depend on how we ask the question.
=> in general, career satisfaction depends on an appropriate match between person
and job.
