Tips
Have organized notes (rather than a stack of all the handouts I've posted) with the most important information.
Try to answer all questions (even if you only end up with time for a partial answer).
Make sure to submit when I tell you time is up (answers time-stamped after that point won't receive credit anyway).
Make sure to carefully read each question and answer all parts (sometimes more than one thing may be asked within a single question).
Continuum of Knowledge
Types of Studies
Research Hierarchy
Pre-experimental designs
Quasi-experimental designs
Experimental designs (includes RCTs)
Appraising Literature
Things to consider when looking at an empirical article:
Design
Implications for internal validity
Variables and how they were measured
Reliability and validity
Sampling
Implications for external validity/generalizability
Significance of findings
Statistical AND practical
Contribution to the field
Variables
What is a Variable?
A variable is a measurable concept that can "vary" or take on different values. Almost anything that can be measured can be a variable.
A variable is the characteristic or property that is being studied, and it can take on different values.
A variable must have at least two values.
A variable is a measurable concept.
Independent Variables
The presumed/proposed causal variable; the X variable; the experimental variable.
For example: In a study evaluating the effectiveness of an intervention on reducing levels of depression, the intervention is the independent variable, since it is the one that is presumed to cause changes in level of depression.
In studies where you only have one group, and that group is being compared to itself at different points in time, time is the independent variable.
For example: In a study to see if mindfulness techniques might help teens in juvenile detention decrease the number of incidents of disruptive behavior, if the techniques are taught starting in December and disruptive behavior incidents are totaled for January, February, and March, time is the independent variable.
Dependent Variables
The effect variable; the variable that is dependent on the independent variable; the outcome variable; the Y variable. This is typically the thing that is being influenced by or changed by the independent variable.
For example: In a study evaluating the effectiveness of an intervention on reducing levels of depression, the level of depression is the dependent variable, since it is the one that is presumed to be changed by the intervention (the outcome being measured).
5 common quantitative data collection techniques
1. Known Scales and Instruments
2. Subject-Supplied Data (Interviews and Questionnaires/Surveys) designed by the researcher
3. Observational Techniques
4. Known Data or Archival Data
5. Physiological/Biological Measurement Methods
Whether you are conducting research, or reading research conducted by someone else, it is important to understand how things were measured, and what the implications of the measures are.
Reliability & Validity
You need to understand:
Conceptual aspects of each type
How to test for each type
How to interpret results of tests for each type
Reliability
(concerned with measurement error)
Consistency of a measure
We want to know that changes we see are actual changes, not just a reflection of our measure's inconsistency.
Several types of consistency are of interest to us:
Stability (test-retest reliability)
Test-retest; Pearson's correlation .8+
Inter-rater reliability/inter-observer reliability
Raters observe & rate the same thing; Kappa .8+
Internal consistency
Use Cronbach's alpha to test correlations between scale items; cutoffs vary but want at least .7+ (though .8+ is more widely accepted)
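The reliability statistics named above can be computed directly. The following is a minimal Python sketch (a language choice of mine, not from the notes) of test-retest stability via Pearson's correlation and internal consistency via Cronbach's alpha; all scores are invented for illustration, and the .8/.7 cutoffs are the rules of thumb stated above.

```python
from statistics import mean, variance

def pearson_r(x, y):
    # Pearson product-moment correlation between two score lists
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def cronbach_alpha(rows):
    # rows: one list of item scores per respondent
    k = len(rows[0])
    items = list(zip(*rows))                        # one column per scale item
    item_var = sum(variance(col) for col in items)  # sum of item sample variances
    total_var = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical test-retest scores for five respondents (invented data)
time1 = [10, 12, 15, 11, 14]
time2 = [11, 13, 15, 10, 14]
r = pearson_r(time1, time2)     # want .8+ for stability

# Hypothetical 3-item scale, five respondents (invented data)
scale = [[3, 3, 4], [4, 4, 4], [2, 3, 2], [5, 4, 5], [1, 2, 1]]
alpha = cronbach_alpha(scale)   # want .7+ (ideally .8+)
```

For these invented scores both statistics clear the .8 rule of thumb, so the measure would be judged stable and internally consistent.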
Validity
Is a measure actually measuring what it claims to?
In other words, is it accurate?
Item Validities
Face validity:
On its face, does this measure seem right?
Can consult literature, peers, etc. to make sure the way we are measuring something makes sense. Could also pilot with target population.
Content validity:
Are we going to be able to capture the full depth of our concept using our measure? Is it too cursory?
Can consult experts in the field to assess content validity. Experts should be able to judge the completeness of a measure and the appropriateness of its individual items.
Criterion validities
Concurrent validity:
Does this measure agree (or disagree, if that's what we want) with other known measures?
Compare scores on your measure to scores obtained from the same sample on another valid measure.
Looking for scores to have a Pearson's correlation coefficient with a value (or absolute value, if applicable) of .8 or higher.
Predictive validity:
How well can the measure predict future outcomes?
Must wait and do follow-up with the same sample to see.
Known groups validity:
Groups known to differ are compared.
Should find different scores for persons who have and do not have the characteristic being measured.
Construct validity
Discriminant Validity
Performance on the measure is compared to performance on a measure of a different but related construct.
A valid measure should be able to discriminate a related but different characteristic; scores should diverge.
Example: Comparing scores on a PTSD scale to scores on an anxiety scale to see if it can differentiate between PTSD and anxiety. PTSD and anxiety can be related, but they are not the same thing.
Convergent Validity
Performance on the measure is compared to performance on another valid measure of the same construct.
A valid measure should perform comparably to another valid measure of the same construct; scores should converge.
Example: Comparing scores on one depression scale to scores on another valid scale for depression to see if they agree with each other.
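Convergent and discriminant validity use the same statistic (a correlation) but opposite expectations, which a short Python sketch can make concrete. All three score lists below are invented, and pairing a depression measure with an anxiety measure follows the example in the notes.

```python
from statistics import mean

def corr(x, y):
    # Pearson correlation between two score lists
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Invented scores for the same six respondents
depression_a = [20, 35, 15, 40, 25, 30]   # the measure being validated
depression_b = [22, 33, 14, 41, 27, 28]   # another valid depression scale
anxiety      = [30, 25, 20, 28, 35, 22]   # related but different construct

convergent = corr(depression_a, depression_b)  # should be high: scores converge
discriminant = corr(depression_a, anxiety)     # should be lower: scores diverge
```

With these invented data the convergent correlation is high while the correlation with the anxiety scale is much weaker, which is the pattern a valid depression measure should show.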
Population vs. Sample
What's the difference?
Population
The overall group of individuals that someone reading or conducting research may want to use in generalizing results
All individuals who are members of that particular group
The researcher defines the group (we can define the boundaries of whom to include in the population of interest)
Sample
The sample is a selection of individuals from the population of interest.
These are the people who will actually be recruited for your study.
Probability Sampling
Simple random (SRS)
Systematic
Stratified
Researcher chooses subgroups, then samples from within each subgroup to ensure representation from those subgroups in the sample.
Area/cluster multi-stage
Researcher randomly samples from subgroups that are randomly chosen.
Probability is being relied upon at multiple stages.
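The three simplest probability techniques above can be sketched in a few lines of Python (my choice of language; the sampling frame and subgroup split are invented for illustration):

```python
import random

random.seed(42)                   # fixed seed for a reproducible illustration
population = list(range(1, 101))  # hypothetical sampling frame of 100 IDs

# Simple random sample (SRS): every element has an equal chance of selection
srs = random.sample(population, 10)

# Systematic sample: every k-th element after a random start
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# Stratified sample: researcher chooses subgroups, then samples within each
strata = {"first_half": population[:50], "second_half": population[50:]}
stratified = [person for group in strata.values()
              for person in random.sample(group, 5)]
```

Note how stratification guarantees representation (exactly five from each subgroup) while still relying on a random process within each stratum.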
Non-Probability Sampling
Availability (Convenience)
Purposive
Researchers hand-pick a sample based on some qualities/set of criteria.
Quota
Non-probability version of stratified sampling.
Researcher wants certain proportions of specified subgroups represented in sample.
Snowball
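Quota sampling's contrast with stratified sampling is easy to show in code: subgroup proportions are enforced, but selection within each subgroup is whoever is available, not random. A minimal Python sketch with an invented volunteer stream:

```python
# Hypothetical intake stream: volunteers arrive in order, tagged by subgroup
volunteers = [("Ana", "female"), ("Ben", "male"), ("Cara", "female"),
              ("Dan", "male"), ("Eve", "female"), ("Finn", "male")]

quotas = {"female": 2, "male": 2}  # desired counts of specified subgroups
sample = []
for name, group in volunteers:
    if quotas.get(group, 0) > 0:   # take whoever is available until quota fills
        sample.append(name)
        quotas[group] -= 1
```

The final sample meets the quotas, but because early arrivals are always chosen, systematic bias is not addressed the way probability sampling addresses it.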
Generalizability, Causal Inference, & Interpretation of Results
Generalizability
External validity
Can you draw conclusions about anyone outside of your study?
Sampling & recruitment
Probability sampling techniques are those that utilize some type of random process or procedure to gather a sample where every element on a sampling frame had an equal chance of being selected (want to address systematic bias).
When we use probability sampling techniques, we can reasonably generalize our results from our sample to the entire sampling frame.
Pre-Experimental Designs
These designs are used for purposes other than trying to test a causal relationship; exploratory studies.
Quasi-Experimental Designs
Time Series Design
O O O X O O O
Multiple Time Series Design
O O O X O O O
O O O   O O O
Non-Equivalent Comparison Group Design
O X O
O    O
Non-Equivalent Group Treatment Comparison Design
O X1 O
O X2 O
BUT remember, these designs could also look like this: O O O O X O O O O, OR like this: O X O O with a comparison row O  O O, OR like some other variation of these designs. Don't let that throw you off.
Experimental Designs
Hint: When you see random assignment (R) you are looking at an experimental design.
These designs can have the same variations we saw with the quasi-experimental designs.
Threats to Internal Validity that CAN (potentially) be ruled out (reduced or prevented) by design
History
Instrumentation
Testing
Maturation
Regression
Selection
Interactions (related to the selection threat causing groups to experience other threats systematically differently from one another)
Ruling Out Threats by Design
Adding Random Assignment
O X O   →   R O X O
O    O   →   R O    O
Threats ruled out:
History, Instrumentation, Maturation & Testing are all still ruled out due to the comparison group.
Selection - Probability is relied on to make the groups comparable (there should not be any systematic differences between the groups that could provide an alternate explanation for any observed differences).
Interactions w/Selection - The groups should not experience the other threats differently from one another.
Regression to the mean - Extremes should be distributed between the two groups (or both groups will be equally extreme).
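The R in the notation above is just a procedure: let chance, not any characteristic of the participants, decide who lands in each group. A minimal Python sketch with invented participant IDs:

```python
import random

random.seed(7)  # fixed seed for a reproducible illustration only
participants = [f"P{i}" for i in range(1, 21)]  # 20 hypothetical recruits

# Random assignment (R): shuffle the full roster, then split it,
# so group membership is determined by chance rather than by any
# systematic difference between the groups
random.shuffle(participants)
treatment = participants[:10]   # the R O X O row
control = participants[10:]     # the R O   O row
```

Because each recruit is equally likely to land in either group, selection (and the interaction threats it drives) should not systematically differ between the rows.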