
Midterm Exam Review Session

What you've learned so far:

How to identify and define:
Types of research/purposes of research/hierarchies of evidence
Ethical standards for research
Independent variables (IVs)
Dependent variables (DVs)
Values
Common data collection techniques
Reliability and validity of measures - concepts, testing for, interpreting results of tests for, & applicability for known scales and other types of measures
Populations of interest, sampling techniques & external validity/generalizability
Research designs
Threats to internal validity

Tips
Have organized notes (rather than a stack of all the handouts I've posted) with the most important information
Try to answer all questions (even if you only end up with time for a partial answer)
Make sure to submit when I tell you time is up (answers time-stamped after that point won't receive credit anyway)
Make sure to carefully read each question and answer all parts (sometimes more than one thing may be asked within a single question)

Continuum of Knowledge

Types of Studies

Main Purposes of Research/Types of Studies
Exploratory
Investigate something not much is known about
Descriptive
Tries to illustrate characteristics of a population
Explanatory
Attempts to explain a phenomenon
Evaluation
Tests effectiveness of an intervention, agency, program, or policy

Research Hierarchy
Pre-experimental designs
Quasi-experimental designs
Experimental designs (includes RCTs)

Meta-analyses, systematic reviews

Appraising Literature
Things to consider when looking at an empirical article:
Design
Implications for internal validity
Variables and how they were measured
Reliability and validity
Sampling
Implications for external validity/generalizability
Significance of findings
Statistical AND practical
Contribution to the field

Ethical Guidelines for Social Science Research
Informed consent
Voluntary
Do no harm
Protection of vulnerable populations
Confidentiality/anonymity
Honest reporting of results

Variables
What is a Variable?
A variable is a measurable concept that can "vary" or take on different values. Almost anything that can be measured can be a variable.
A variable is the characteristic or property that is being studied, and it can take on different values.
A variable must have at least two values.
A variable is a measurable concept.

Independent Variables
The presumed/proposed causal variable; the X variable; the experimental variable.
For example: In a study evaluating the effectiveness of an intervention on reducing levels of depression, the intervention is the independent variable, since it is the one that is presumed to cause changes in level of depression.

In studies where there is no attempt to demonstrate causality, the independent variable is also the grouping variable, the one that defines the groups that are being compared in a study.
For example: In a study to see if men or women differ in their depressive symptoms, the independent variable is gender, since it is the variable that defines the groups being compared (men and women).

In studies where you only have one group, and that group is being compared to itself at different points in time, Time is the independent variable.
For example: In a study to see if mindfulness techniques might help teens in juvenile detention decrease the number of incidents of disruptive behavior, if the techniques are taught starting in December and disruptive behavior incidents are totaled for January, February, and March, time is the independent variable.

Dependent Variables
The effect variable; the variable that is dependent on the independent variable; the outcome variable; the Y variable. This is typically the thing that is being influenced by or changed by the independent variable.
For example: In a study evaluating the effectiveness of an intervention on reducing levels of depression, the level of depression is the dependent variable, since it is the one that is presumed to be changed by the intervention (the outcome being measured).

In studies where there is no causation being explored, it is the thing that the groups are being compared on.
For example: In a study to see if men or women differ in their depressive symptoms, the dependent variable is depressive symptoms, since it is the variable on which the groups (men and women) are being compared (it is the difference they are interested in exploring).

5 common quantitative data collection techniques
1. Known Scales and Instruments
2. Subject Supplied Data (Interviews and Questionnaires/Surveys) designed by the researcher
3. Observational Techniques
4. Known Data or Archival Data
5. Physiological/Biological

Measurement Methods
Whether you are conducting research, or reading research conducted by someone else, it is important to understand how things were measured, and what the implications of the measures are.

Note: The ideas of reliability and validity apply to all types of measures, not only to known scales and instruments.

Validity & Reliability Conceptualized
(Fig. 8.3 in 2016 edition of textbook, p. 152)

Reliability & Validity
You need to understand:
Conceptual aspects of each type
How to test for each type
How to interpret results of tests for each type

If you are struggling with this content I recommend referring to the textbook and Unit 6 handout (which includes examples from the textbook).

Reliability
(concerned with measurement error)
Consistency of a measure
Want to know changes we see are actual changes, not just a reflection of our measure's inconsistency
Several types of consistency are of interest to us:
Stability (test-retest reliability)
Test-retest; Pearson's correlation .8+
Inter-rater reliability/inter-observer reliability
Raters observe & rate same thing; Kappa .8+
Internal consistency
Use Cronbach's alpha to test correlations between scale items; cutoffs vary but want at least .7+ (though .8+ is more widely accepted)
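The two most common coefficients above can be computed in a few lines. A minimal sketch in Python with made-up scores (the data and scale structure are hypothetical, chosen so both coefficients come out perfect):

```python
import numpy as np

# Hypothetical item scores: rows = respondents, columns = scale items.
items = np.array([
    [1, 1, 1],
    [2, 2, 2],
    [3, 3, 3],
    [4, 4, 4],
    [5, 5, 5],
], dtype=float)

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

alpha = cronbach_alpha(items)

# Test-retest (stability): correlate the same measure given at two time points.
time1 = np.array([10, 12, 14, 16], dtype=float)
time2 = np.array([11, 13, 15, 17], dtype=float)
r = np.corrcoef(time1, time2)[0, 1]

print(f"alpha = {alpha:.2f}, test-retest r = {r:.2f}")  # alpha = 1.00, test-retest r = 1.00
```

In real data you would then compare alpha against the .7/.8 cutoffs and r against the .8+ benchmark noted above.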

Validity
Is a measure actually measuring what it claims to? In other words, is it accurate?

Item validity (Face & Content)
Does our measure make sense?
Criterion validity (Predictive, Concurrent, Known Groups)
Comparing our measure to external criteria (in the form of another validated measure)
Construct validity (Discriminant, Convergent)
Does your measure concur with theoretical relationships between variables?

Item Validities
Face validity:
On its face, does this measure seem right?
Can consult literature, peers, etc. to make sure the way we are measuring something makes sense. Could also pilot with target population.
Content validity:
Are we going to be able to capture the full depth of our concept using our measure? Is it too cursory?
Can consult experts in the field to assess content validity. Experts should be able to judge the completeness of a measure and the appropriateness of its individual items.

Criterion validities
Concurrent validity:
Does this measure agree (or disagree, if that's what we want) with other known measures?
Compare scores on your measure to scores obtained from the same sample on another valid measure.
Looking for scores to have a Pearson's correlation coefficient with a value (or absolute value, if applicable) of .8 or higher.
Predictive validity:
How well can the measure predict future outcomes?
Must wait and do follow up with the same sample to see.
Known groups validity:
Groups known to differ are compared.
Should find different scores for persons who have and do not have the characteristic being measured.

Construct validity
Discriminant Validity
Performance on the measure is compared to performance on a measure of a different but related construct
A valid measure should be able to discriminate a related but different characteristic; scores should diverge
Example: Comparing scores on a PTSD scale to scores on an anxiety scale to see if it can differentiate between PTSD and anxiety. PTSD and anxiety can be related, but they are not the same thing.
Convergent Validity
Performance on the measure is compared to performance on another valid measure of the same construct
A valid measure should perform comparably to another valid measure of the same construct; scores should converge
Example: Comparing scores on one depression scale to scores on another valid scale for depression to see if they agree with each other.
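Both construct validities come down to comparing correlations: scores should converge with a same-construct measure and diverge from a different-construct measure. A minimal sketch in Python (all scores and scale names are made up for illustration):

```python
import numpy as np

# Hypothetical scores from the same respondents on three scales.
depression_a = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # our new depression scale
depression_b = np.array([1.1, 2.0, 2.9, 4.2, 5.0, 5.8])   # established depression scale
other_construct = np.array([3.0, 1.0, 4.0, 2.0, 6.0, 3.0])  # related but different construct

# Convergent validity: two measures of the same construct should correlate highly.
r_convergent = np.corrcoef(depression_a, depression_b)[0, 1]

# Discriminant validity: correlation with a different construct should be clearly lower.
r_discriminant = np.corrcoef(depression_a, other_construct)[0, 1]

print(f"convergent r = {r_convergent:.2f}, discriminant r = {r_discriminant:.2f}")
```

With these made-up scores the convergent correlation is near 1 while the discriminant correlation is around .4, which is the pattern (converge vs. diverge) the slide describes.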

Population vs. Sample
What's the difference?
Population
The overall group of individuals that someone reading or conducting research may want to use in generalizing results
All individuals who are members of that particular group
The researcher defines the group (we can define the boundaries of whom to include in the population of interest)
Sample
The sample is a selection of individuals from the population of interest.
These are the people who will actually be recruited for your study.

Probability Sampling
Simple random (SRS)
Systematic
Stratified
Researcher chooses subgroups, then samples from within each subgroup to ensure representation from those subgroups in the sample.
Area/cluster multi-stage
Researcher randomly samples from subgroups that are randomly chosen.
Probability is being relied upon at multiple stages.
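Stratified sampling, as described above, is just simple random sampling repeated within each chosen subgroup. A minimal sketch in Python (the sampling frame, stratum names, and sizes are hypothetical):

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# Hypothetical sampling frame, already divided into strata by class year.
frame = {
    "freshman":  [f"F{i}" for i in range(100)],
    "sophomore": [f"S{i}" for i in range(60)],
    "junior":    [f"J{i}" for i in range(40)],
}

def stratified_sample(strata, fraction):
    """Draw the same fraction from each stratum (proportionate stratified sampling)."""
    sample = []
    for name, members in strata.items():
        n = round(len(members) * fraction)
        sample.extend(random.sample(members, n))  # SRS within each stratum
    return sample

sample = stratified_sample(frame, 0.10)
print(len(sample))  # 10 + 6 + 4 = 20
```

Because each stratum is sampled separately, every subgroup is guaranteed representation in proportion to its size, which an unrestricted SRS of 20 people would not guarantee.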

Non-Probability Sampling
Availability (Convenience)
Purposive
Researchers hand pick a sample based on some qualities/set of criteria
Quota
Non-probability version of stratified sampling. Researcher wants certain proportions of specified subgroups represented in sample.
Snowball

Generalizability, Causal Inference, & Interpretation of Results
Generalizability
External validity
Can you draw conclusions about anyone outside of your study?
Sampling & recruitment
Probability sampling techniques are those that utilize some type of random process or procedure to gather a sample where every element on a sampling frame had an equal chance of being selected (want to address systematic bias)
When we use probability sampling techniques, we can reasonably generalize our results from our sample to the entire sampling frame.

Introduction to Research Design

Design Diagram Symbols
X = Usually signifies exposure of a group to an intervention. Can also be the presence of a characteristic or having had a particular experience (grouping variable). X has to do w/ your independent variable.
O = Point/s of observation, or when measurement is happening
R = Random assignment
M = Matching
Reading diagrams:
X's and O's on the same line are applied to the same group of people.
Read left to right to indicate temporal order of events.
X's and O's vertical to one another occur simultaneously.

Pre-Experimental Designs
These designs are used for purposes other than trying to test a causal relationship; exploratory studies.

Survey Research/Case Study Design
X O

One Group Pre-Test Post-Test Design
O X O

Static Group Comparison
X O
------
  O

9
Quasi-Experimental Designs
Time Series Design
O O O X O O O
Multiple Time Series Design
O O O X O O O
O O O   O O O
Non-Equivalent Comparison Group Design
O X O
O   O
Non-Equivalent Group Treatment Comparison Design
O X1 O
O X2 O
BUT remember, these designs could also look like this: O O O O X O O O O, OR like this: O X O O, OR like some other variation of these designs. Don't let that throw you off.

Experimental Designs
Hint: When you see random assignment* (R) you are looking at an experimental design

Classic Experimental Design
R O X O
R O   O

Post-Test Only Experimental Design
R X O
R   O

Treatment Comparison Experimental Design
R O X1 O
R O X2 O

These designs can have the same variations we saw with the quasi-experimental designs.

*remember the difference between random sampling/random selection and random assignment
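The distinction the footnote flags can be made concrete: random sampling/selection decides who gets into the study, while random assignment decides which arm each already-recruited participant lands in. A minimal sketch of the assignment step (participant IDs are hypothetical):

```python
import random

random.seed(7)  # fixed seed so the example is reproducible

# An already-recruited sample; assignment happens after recruitment.
participants = [f"P{i}" for i in range(20)]

shuffled = participants[:]
random.shuffle(shuffled)      # probability, not the researcher, decides placement
treatment = shuffled[:10]     # the "R O X O" row of the classic design
control = shuffled[10:]       # the "R O   O" row

print(len(treatment), len(control))  # 10 10
```

Because chance alone splits the groups, any pre-existing differences among participants should be spread roughly evenly across both arms, which is what lets the experimental designs rule out the selection threat discussed later.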

What is internal validity?

Internal validity helps you to determine:
Was it X that caused Y, or was it something else?
Internal validity is the degree to which your design can eliminate other explanations of the relationship between X and Y.
Threats to internal validity are variables and circumstances that could be seen as alternate explanations of the relationship between X and Y.

Threats to Internal Validity that CAN (potentially) be ruled out (reduced or prevented) by design
History
Instrumentation
Testing
Maturation
Regression
Selection
Interactions (related to selection threat causing groups to experience other threats systematically differently from one another)

Ruling Out Threats by Design
Adding a Comparison Group
O X O      O X O
           O   O
Threats ruled out:
History. Both groups would have experienced the outside event.
Instrumentation. Changes in measurement would affect both groups.
Maturation. Both groups experience the same passage of time.
Testing. Both groups take the measure the same # of times.

Ruling Out Threats by Design
Adding a Comparison Group
O X O      O X O
           O   O
Threats that are not ruled out:
Selection - your groups might not be comparable (might be systematically different from each other)
Interactions with selection - selection is not ruled out, so you can't rule out interactions with selection either; the groups might interact w/the other threats differently from one another.
Regression to the mean - extremes might not be evenly distributed between the groups

11
Ruling Out Threats by Design
Adding Random Assignment
O X O      R O X O
O   O      R O   O
Threats ruled out:
History, Instrumentation, Maturation & Testing are all still ruled out due to the comparison group
Selection - Probability is relied on to make the groups comparable (there should not be any systematic differences between the groups that could provide an alternate explanation for any observed differences).
Interactions w/Selection - The groups should not experience the other threats differently from one another.
Regression to the mean - Extremes should be distributed between the two groups (or both groups will be equally extreme).

Threats to Internal Validity that CAN'T be ruled out by design
There are several threats to internal validity that cannot be ruled out by design; that is, even the strongest design might still succumb to these threats.
There are things a researcher can do to minimize some of these, but design cannot "rule them out":
Experimental group attrition/differential attrition
Diffusion/imitation of treatment
Resentful demoralization
Compensatory equalization
