
# Summary of Effect Sizes and their Links to Inferential Statistics

R. Michael Furr, Department of Psychology, Wake Forest University. March 2008.

1. Definitions of effect sizes
   1.1. Effect sizes expressing the degree of association between two variables (r)
   1.2. Effect sizes expressing the degree of difference between means (d, g)
   1.3. Effect sizes expressing the proportion of variance explained (R², η², ω²)
   1.4. Effect sizes for proportions
2. Transforming between effect sizes
   2.1. Computing r from d and g
   2.2. Computing d from r and g
   2.3. Computing g from r and d
3. Computing inferential statistics from effect sizes
   3.1. Computing χ² from r
   3.2. Computing t from r, d, and g
   3.3. Computing F with two means
   3.4. Computing F with more than two means
4. Computing effect sizes from inferential statistics
   4.1. Computing r from χ², t, and F
   4.2. Computing d from t
   4.3. Computing g from t
   4.4. Computing η² and ω²

Much of this is based on:

Rosenthal, R. (1994). Parametric measures of effect size. In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 231-244). New York: Russell Sage Foundation.

Rosenthal, R., Rosnow, R. L., & Rubin, D. B. (2000). Contrasts and effect sizes in behavioral research: A correlational approach. Cambridge, UK: Cambridge University Press.

Please feel free to contact me if any of these formulae seem incorrect; it is possible that typographical errors have been made. Mike Furr, furrrm@wfu.edu


## 1. EFFECT SIZES: DEFINITIONS

## 1.1 Degree of association between two variables (correlational effect sizes)

$$r = r_{pb} = \frac{\sum (X - \bar{X})(Y - \bar{Y})}{\sqrt{\sum (X - \bar{X})^2}\sqrt{\sum (Y - \bar{Y})^2}} = \frac{\sum z_X z_Y}{n} = \frac{\sum (X - \bar{X})(Y - \bar{Y})/n}{s_X s_Y} = \frac{s_{XY}}{s_X s_Y} = \frac{\sigma_{XY}}{\sigma_X \sigma_Y}$$

NOTE: Some of these formulas use n in the denominator of the correlation, but they are sometimes written with n − 1 rather than n. This difference does not matter as long as either n or n − 1 is used in all parts of the equation (i.e., in the standard deviations, z-scores, covariance, etc.). When r is a point-biserial correlation (rpb), it is based on a dichotomous grouping variable (X) and a continuous variable (Y). In this case, the equation can also be written as:

$$r_{pb} = \sqrt{p_1 p_2}\,\frac{\bar{Y}_1 - \bar{Y}_2}{\sigma_Y}$$

where p1 and p2 are the proportions of the total sample in each group, $\bar{Y}_1 - \bar{Y}_2$ is the difference between the group means on the continuous variable (Y), and $\sigma_Y$ is the standard deviation of variable Y. Note that this is Equation 5 in McGrath and Meyer (2006), and note that the direction of the correlation depends on which group is considered Group 1 and which is considered Group 2.
McGrath, R. E., & Meyer, G. J. (2006). When effect sizes disagree: The case of r and d. Psychological Methods, 11, 386-401.
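As a numerical sanity check, the equivalences above can be confirmed on made-up data (all values below are hypothetical illustrations, not from any study). The sketch computes r with n used consistently in every denominator, and then verifies that the point-biserial form reproduces the ordinary Pearson r computed on a 0/1 group code:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def sd_n(xs):
    # standard deviation with n (not n - 1) in the denominator
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def pearson_r(X, Y):
    # r = covariance / (sd_X * sd_Y), with n used consistently throughout
    n, mx, my = len(X), mean(X), mean(Y)
    cov = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / n
    return cov / (sd_n(X) * sd_n(Y))

# --- two equivalent forms of r on hypothetical data ---
X = [2.0, 4.0, 6.0, 8.0, 10.0]
Y = [1.0, 3.0, 2.0, 5.0, 4.0]
n = len(X)
r_cov = pearson_r(X, Y)
r_z = sum((x - mean(X)) / sd_n(X) * (y - mean(Y)) / sd_n(Y)
          for x, y in zip(X, Y)) / n          # mean of z-score products

# --- point-biserial form: sqrt(p1*p2) * (M1 - M2) / sd_Y ---
group = [1, 1, 1, 0, 0, 0, 0]                 # dichotomous grouping variable
Yc = [4.0, 5.0, 6.0, 2.0, 3.0, 2.0, 1.0]      # continuous variable
y1 = [y for g, y in zip(group, Yc) if g == 1]
y2 = [y for g, y in zip(group, Yc) if g == 0]
p1, p2 = len(y1) / len(Yc), len(y2) / len(Yc)
r_pb = math.sqrt(p1 * p2) * (mean(y1) - mean(y2)) / sd_n(Yc)
r_01 = pearson_r([float(g) for g in group], Yc)   # Pearson r on 0/1 codes
```

Both identities hold exactly, not just approximately, as long as n (or n − 1) is used consistently in every part of the computation.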

## Z transformation of a correlation

$$z_r = \tfrac{1}{2}\log_e\!\left(\frac{1 + r}{1 - r}\right) = \tfrac{1}{2}\left[\log_e(1 + r) - \log_e(1 - r)\right]$$

where log<sub>e</sub> is the natural log function (LN on some calculators). To transform back from the z_r metric to the r metric:

$$r = \frac{e^{2 z_r} - 1}{e^{2 z_r} + 1}$$

where e is the exponential function (e^x on some calculators).
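A quick round-trip check of the two formulas (the starting value of r is arbitrary):

```python
import math

r = 0.42                                      # hypothetical correlation
z_r = 0.5 * math.log((1 + r) / (1 - r))       # Fisher's z transform
r_back = (math.exp(2 * z_r) - 1) / (math.exp(2 * z_r) + 1)   # inverse transform
```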

## Effect size for the difference between correlations

$$\text{Cohen's } q = z_{r_1} - z_{r_2}$$

## 1.2 Degree of difference between means (d, g)

## 1.2.1 For comparing means from two groups:

$$\text{Cohen's } d = \frac{\bar{X}_1 - \bar{X}_2}{\sigma_{pooled}} \qquad \text{Hedges's } g = \frac{\bar{X}_1 - \bar{X}_2}{s_{pooled}} \qquad \text{Glass's } \Delta = \frac{\bar{X}_1 - \bar{X}_2}{s_{control\ group}}$$

where

$$\sigma_{pooled} = \sqrt{\frac{n_1 \sigma_1^2 + n_2 \sigma_2^2}{n_1 + n_2}} \qquad \text{and} \qquad s_{pooled} = \sqrt{\frac{(n_1 - 1) s_1^2 + (n_2 - 1) s_2^2}{n_1 + n_2 - 2}}$$

and

$$\sigma_{pooled} = s_{pooled}\sqrt{\frac{n_1 + n_2 - 2}{n_1 + n_2}} = s_{pooled}\sqrt{\frac{N - 2}{N}} \qquad \text{and} \qquad s_{pooled} = \sigma_{pooled}\sqrt{\frac{n_1 + n_2}{n_1 + n_2 - 2}} = \sigma_{pooled}\sqrt{\frac{N}{N - 2}}$$

You may also see $\sigma_{pooled}$ referred to as $\sigma_{within}$, and $s_{pooled}$ referred to as $s_{within}$ or as $\sqrt{MS_{within}}$.
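The relations between σ_pooled and s_pooled (and hence between d and g) can be checked numerically; the two groups below are invented purely for illustration:

```python
import math

grp1 = [5.0, 7.0, 6.0, 8.0]      # hypothetical group 1 scores
grp2 = [3.0, 4.0, 5.0, 4.0]      # hypothetical group 2 scores
n1, n2 = len(grp1), len(grp2)
N = n1 + n2
m1, m2 = sum(grp1) / n1, sum(grp2) / n2
ss1 = sum((x - m1) ** 2 for x in grp1)   # sum of squares, group 1
ss2 = sum((x - m2) ** 2 for x in grp2)   # sum of squares, group 2

sigma_pooled = math.sqrt((ss1 + ss2) / N)        # n-based pooled SD (for d)
s_pooled = math.sqrt((ss1 + ss2) / (N - 2))      # (n-1)-based pooled SD (for g)

d = (m1 - m2) / sigma_pooled                     # Cohen's d
g = (m1 - m2) / s_pooled                         # Hedges's g
```

Because s_pooled is always at least as large as σ_pooled, g is always a bit smaller in magnitude than d from the same data.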

1.2.2 The logic of d and g can be applied when comparing one mean to a population mean (e.g., a one-sample t-test), such that:

$$d = \frac{\bar{X} - \mu}{\sigma_X} \qquad g = \frac{\bar{X} - \mu}{s_X}$$

where μ is the null-hypothesis population mean.

1.2.3 The logic of d and g can be applied when comparing two correlated means (e.g., a repeated-measures t-test or paired-samples t-test):

$$d = \frac{\bar{D}}{\sigma_D} \qquad g = \frac{\bar{D}}{s_D}$$

where $\bar{D}$ is the mean difference score and $\sigma_D$ and $s_D$ are standard deviations of the difference scores.


## 1.3 Variance accounted for: R², eta squared (η²), and omega squared (ω²)

$$R^2 = \eta^2 = \frac{SS_{EXPLAINED}}{SS_{TOTAL}} = \frac{SS_{BETWEEN}}{SS_{TOTAL}}$$

For a specific effect (i.e., in a study with multiple IVs/predictors):

$$R^2_{EFFECT} = \eta^2_{EFFECT} = \frac{SS_{EFFECT}}{SS_{TOTAL}} \qquad \omega^2_{EFFECT} = \frac{\sigma^2_{EFFECT}}{\sigma^2_{TOTAL}}$$

## 1.4 Effect sizes for proportions

$$\text{Cohen's } g = p - .50 \quad \text{where } p \text{ estimates a population proportion}$$

$$d = p_1 - p_2 \quad \text{where } p_1 \text{ and } p_2 \text{ are estimates of the population proportions}$$

$$\text{Cohen's } h = \arcsin\sqrt{p_1} - \arcsin\sqrt{p_2}$$

$$\text{Probit } d = Z_{p_1} - Z_{p_2} \quad \text{where } Z_{p_1} \text{ and } Z_{p_2} \text{ are standard normal deviate transformations of the estimated population proportions}$$

$$\text{Logit } d = \log_e\!\left(\frac{p_1}{1 - p_1}\right) - \log_e\!\left(\frac{p_2}{1 - p_2}\right)$$
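For concreteness, here are the proportion-based effect sizes evaluated for a pair of hypothetical proportions, using the arcsine and logit forms as written above:

```python
import math

p1, p2 = 0.60, 0.45                     # hypothetical sample proportions
cohens_g = p1 - 0.50                    # Cohen's g for the single proportion p1
d = p1 - p2                             # simple difference between proportions
h = math.asin(math.sqrt(p1)) - math.asin(math.sqrt(p2))   # arcsine difference
logit_d = math.log(p1 / (1 - p1)) - math.log(p2 / (1 - p2))
```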

## 2. TRANSFORMING BETWEEN EFFECT SIZES

## 2.1 Computing r

2.1.1 Computing r from Cohen's d

For one group and for two correlated means (i.e., repeated measures or paired samples):

$$r = \frac{d}{\sqrt{d^2 + 1}}$$

For two independent groups:

$$r = \frac{d}{\sqrt{d^2 + \dfrac{1}{p_1 p_2}}}$$

where p1 is the proportion of participants who are in Group 1 and p2 is the proportion in Group 2. Note, for equal sample sizes (p1 = p2 = .50), this simplifies to:

$$r = \frac{d}{\sqrt{d^2 + 4}}$$

2.1.2 Computing r from Hedges's g

For one group and for two correlated means (i.e., repeated measures or paired samples):

$$r = \frac{g}{\sqrt{g^2 + \dfrac{df}{N}}} = \frac{g}{\sqrt{g^2 + \dfrac{N - 1}{N}}}$$

For two independent groups:

$$r = \frac{g}{\sqrt{g^2 + \dfrac{df}{N p_1 p_2}}} = \frac{g\sqrt{n_1 n_2}}{\sqrt{g^2 n_1 n_2 + (n_1 + n_2)\,df}}$$

where p1 is the proportion of participants who are in Group 1 and p2 is the proportion in Group 2. Note, for equal sample sizes (p1 = p2 = .50), this simplifies to:

$$r = \frac{g}{\sqrt{g^2 + \dfrac{4\,df}{N}}} = \frac{g}{\sqrt{g^2 + \dfrac{4(N - 2)}{N}}}$$


## 2.2 Computing d

2.2.1 Computing d from r

For one group and for two correlated means (i.e., repeated measures or paired samples):

$$d = \frac{r}{\sqrt{1 - r^2}}$$

For two independent groups:

$$d = \frac{r}{\sqrt{p_1 p_2 (1 - r^2)}}$$

where p1 is the proportion of participants who are in Group 1 and p2 is the proportion in Group 2. Note, for equal sample sizes (p1 = p2 = .50), this simplifies to:

$$d = \frac{2r}{\sqrt{1 - r^2}}$$
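The independent-groups transforms in 2.1.1 and 2.2.1 are exact inverses of one another, which is easy to confirm numerically (the values of d, p1, and p2 are arbitrary):

```python
import math

def r_from_d(d, p1, p2):
    # two independent groups: r = d / sqrt(d^2 + 1/(p1*p2))
    return d / math.sqrt(d ** 2 + 1 / (p1 * p2))

def d_from_r(r, p1, p2):
    # two independent groups: d = r / sqrt(p1*p2*(1 - r^2))
    return r / math.sqrt(p1 * p2 * (1 - r ** 2))

d0 = 0.8                     # hypothetical Cohen's d
p1, p2 = 0.3, 0.7            # unequal group proportions
r = r_from_d(d0, p1, p2)
d_back = d_from_r(r, p1, p2)

# equal-n special case collapses to r = d / sqrt(d^2 + 4)
r_equal = r_from_d(d0, 0.5, 0.5)
```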

2.2.2 Computing d from Hedges's g

For one group and for two correlated means (i.e., repeated measures or paired samples):

$$d = g\sqrt{\frac{N}{df}} = g\sqrt{\frac{N}{N - 1}}$$

For two independent groups (regardless of the relative sample sizes):

$$d = g\sqrt{\frac{N}{df}} = g\sqrt{\frac{N}{N - 2}}$$
## 2.3 Computing g

2.3.1 Computing g from r

For one group and for two correlated means (i.e., repeated measures or paired samples):

$$g = \frac{r}{\sqrt{1 - r^2}}\sqrt{\frac{df}{N}} = \frac{r}{\sqrt{1 - r^2}}\sqrt{\frac{N - 1}{N}}$$

For two independent groups:

$$g = \frac{r}{\sqrt{p_1 p_2 (1 - r^2)}}\sqrt{\frac{df}{N}} = \frac{r}{\sqrt{p_1 p_2 (1 - r^2)}}\sqrt{\frac{N - 2}{N}}$$

where p1 is the proportion of participants who are in Group 1 and p2 is the proportion in Group 2. Note, for equal sample sizes (p1 = p2 = .50), this simplifies to:

$$g = \frac{2r}{\sqrt{1 - r^2}}\sqrt{\frac{df}{N}} = \frac{2r}{\sqrt{1 - r^2}}\sqrt{\frac{N - 2}{N}}$$


2.3.2 Computing g from Cohen's d

For one group and for two correlated means (i.e., repeated measures or paired samples):

$$g = d\sqrt{\frac{df}{N}} = d\sqrt{\frac{N - 1}{N}}$$

For two independent groups (regardless of the relative sample sizes):

$$g = d\sqrt{\frac{df}{N}} = d\sqrt{\frac{N - 2}{N}}$$
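Since d and g differ only in which pooled SD they use, converting back and forth is just a rescaling by √(N/df); a quick round-trip check (N is arbitrary):

```python
import math

N = 40                           # hypothetical total sample size (two groups)
df = N - 2
d0 = 0.5                         # hypothetical Cohen's d
g = d0 * math.sqrt(df / N)       # g from d (Section 2.3.2)
d_back = g * math.sqrt(N / df)   # d from g (Section 2.2.2)
```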

## 2.4 Transforming between eta squared (η²) and omega squared (ω²)

$$\eta^2 \text{ for an effect} = \frac{df_{effect} + nk\left(\dfrac{\omega^2}{1 - \omega^2}\right)}{df_{effect} + nk\left(\dfrac{\omega^2}{1 - \omega^2}\right) + df_{error}}$$

where n is the number of individuals per group, and k is the number of groups for the effect.

$$\omega^2 \text{ for an effect} = \frac{\left(\dfrac{\eta^2}{1 - \eta^2}\right)df_{error} - df_{effect}}{\left(\dfrac{\eta^2}{1 - \eta^2}\right)df_{error} - df_{effect} + nk}$$
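The two formulas above invert each other exactly, which a numerical check with made-up design values confirms:

```python
n, k = 10, 3                          # hypothetical: n per group, k groups
df_effect = k - 1
df_error = n * k - k                  # one-way ANOVA error df
omega2 = 0.10                         # hypothetical omega squared

# eta^2 from omega^2
num = df_effect + n * k * omega2 / (1 - omega2)
eta2 = num / (num + df_error)

# omega^2 back from eta^2
y = (eta2 / (1 - eta2)) * df_error - df_effect
omega2_back = y / (y + n * k)
```

Note that η² comes out larger than ω² for the same design, reflecting ω²'s adjustment for sampling error.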


## 3. COMPUTING SIGNIFICANCE TESTS FROM EFFECT SIZES

Recall: Inferential test statistic = Effect size × Size of study

## 3.1 For a 2 × 2 contingency table

$$\chi^2(1) = Z^2 = r^2 N$$
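The χ²(1) = r²N identity is the familiar phi-coefficient relation for a 2 × 2 table; it can be checked directly against the usual observed-vs-expected χ² computation (cell counts below are invented):

```python
import math

a, b, c, d = 20, 10, 15, 25          # hypothetical 2x2 cell counts
N = a + b + c + d

# phi coefficient: the correlation r between the two dichotomies
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
chi2_from_r = phi ** 2 * N

# chi-square computed the long way from observed and expected counts
rows = [a + b, c + d]
cols = [a + c, b + d]
obs = [a, b, c, d]
exp = [rows[0] * cols[0] / N, rows[0] * cols[1] / N,
       rows[1] * cols[0] / N, rows[1] * cols[1] / N]
chi2_direct = sum((o - e) ** 2 / e for o, e in zip(obs, exp))
```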
## 3.2 For a t-test

3.2.1 t from r (appropriate for any kind of t-test):

$$t = \frac{r}{\sqrt{1 - r^2}}\sqrt{df}$$

3.2.2 t from Cohen's d

3.2.2.1 For a one-sample t-test or correlated-means t-test:

$$t = d\sqrt{df} = d\sqrt{N - 1} \qquad \text{where } d = \frac{\bar{X} - \mu}{\sigma_X} \text{ or } d = \frac{\bar{D}}{\sigma_D}$$

3.2.2.2 For an independent-groups t-test:

$$t = d\sqrt{p_1 p_2\,df} = d\sqrt{p_1 p_2 (N - 2)}$$

where p1 is the proportion of participants who are in Group 1 and p2 is the proportion in Group 2 (i.e., $p_1 p_2 = n_1 n_2 / N^2$). Note, for equal sample sizes (p1 = p2 = .50), this simplifies to:

$$t = \frac{d\sqrt{df}}{2} = \frac{d\sqrt{N - 2}}{2}$$

3.2.3 t from Hedges's g

3.2.3.1 For a one-sample t-test or correlated-means t-test:

$$t = g\sqrt{N} \qquad \text{where } g = \frac{\bar{X} - \mu}{s_X} \text{ or } g = \frac{\bar{D}}{s_D}$$
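Both one-sample conversions (t = d√(N − 1) and t = g√N) can be verified against the textbook t = (X̄ − μ)/(s/√N) on invented data; note that d uses the n-based SD while g (and t) use the (n − 1)-based SD:

```python
import math

X = [5.0, 6.0, 4.5, 7.0, 5.5, 6.5]   # hypothetical scores
mu = 5.0                             # assumed null-hypothesis population mean
N = len(X)
m = sum(X) / N
ss = sum((x - m) ** 2 for x in X)
sigma = math.sqrt(ss / N)            # n-based SD (used by d)
s = math.sqrt(ss / (N - 1))          # (n-1)-based SD (used by g and t)

d = (m - mu) / sigma
g = (m - mu) / s
t_from_d = d * math.sqrt(N - 1)
t_from_g = g * math.sqrt(N)
t_direct = (m - mu) / (s / math.sqrt(N))   # textbook one-sample t
```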

3.2.3.2 For an independent-groups t-test:

$$t = g\sqrt{\frac{n_1 n_2}{n_1 + n_2}} = g\sqrt{p_1 p_2 N}$$

where p1 is the proportion of participants who are in Group 1 and p2 is the proportion in Group 2. Note, for equal sample sizes (n1 = n2 = n and p1 = p2 = .50), this simplifies to:

$$t = \frac{g\sqrt{N}}{2} = g\sqrt{\frac{n}{2}}$$

## 3.3 For an ANOVA (F test)

3.3.1 For an ANOVA with df<sub>NUMERATOR</sub> = 1 (two independent groups):

$$F = \frac{r^2}{1 - r^2}(df_{error})$$

$$F = d^2 (df_{error}\,p_1 p_2) \qquad \left(\text{for an equal-}n\text{ study, } F = d^2\,\frac{df_{error}}{4}\right)$$

$$F = g^2\left(\frac{n_1 n_2}{n_1 + n_2}\right) = g^2 (nk\,p_1 p_2) = g^2 (N p_1 p_2) \text{ for a one-way ANOVA} \qquad \left(\text{for an equal-}n\text{ study, } F = g^2\,\frac{nk}{4} = g^2\,\frac{N}{4}\right)$$

$$F = \frac{\eta^2}{1 - \eta^2}(df_{error})$$

$$F = \frac{\omega^2}{1 - \omega^2}(nk) + 1 = \frac{\omega^2}{1 - \omega^2}(N) + 1 \text{ for a one-way ANOVA}$$

3.3.2 For an ANOVA with df<sub>NUMERATOR</sub> > 1 (more than two independent groups):

$$F = \frac{\eta^2}{1 - \eta^2}\left(\frac{df_{error}}{df_{means}}\right)$$

$$F = \frac{\omega^2}{1 - \omega^2}\left(\frac{nk}{k - 1}\right) + 1$$
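The η²-based formula can be checked against F computed from mean squares on a small hypothetical one-way layout (k = 3 groups of n = 3, scores invented):

```python
groups = [[3.0, 4.0, 5.0], [6.0, 7.0, 5.0], [9.0, 8.0, 10.0]]
k = len(groups)
n = len(groups[0])
N = n * k
grand = sum(sum(g) for g in groups) / N

ss_between = sum(n * (sum(g) / n - grand) ** 2 for g in groups)
ss_within = sum(sum((x - sum(g) / n) ** 2 for x in g) for g in groups)
ss_total = ss_between + ss_within

F = (ss_between / (k - 1)) / (ss_within / (N - k))   # F from mean squares
eta2 = ss_between / ss_total
F_from_eta2 = (eta2 / (1 - eta2)) * (N - k) / (k - 1)
```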

## 4. COMPUTING EFFECT SIZES FROM SIGNIFICANCE TESTS

## 4.1 Computing r

4.1.1 r from a χ² test for a 2 × 2 contingency table:

$$r = \phi = r_{pb} = \sqrt{\frac{\chi^2(1)}{n}} = \frac{Z}{\sqrt{n}}$$

4.1.2 r from a t-test (any kind):

$$r = \sqrt{\frac{t^2}{t^2 + df}}$$

4.1.3 r from an F test with numerator df = 1:

$$r = \sqrt{\frac{F}{F + df_{error}}}$$
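Because an F test with numerator df = 1 satisfies F(1, df) = t², the r-from-t and r-from-F formulas necessarily agree (apart from sign, which must be taken from t); a direct check with arbitrary values:

```python
import math

t, df = 2.5, 30                          # hypothetical t statistic and its df
r_from_t = t / math.sqrt(t ** 2 + df)    # 4.1.2, sign carried by t
F = t ** 2                               # equivalent F with numerator df = 1
r_from_F = math.sqrt(F / (F + df))       # 4.1.3
```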

## 4.2 Computing d

4.2.1 d from a one-sample or correlated-means t-test:

$$d = \frac{t}{\sqrt{df}} = \frac{t}{\sqrt{N - 1}}$$

4.2.2 d from an independent-groups t-test:

$$d = \frac{t}{\sqrt{p_1 p_2\,df_{error}}} = \frac{t}{\sqrt{p_1 p_2 (N - 2)}}$$

where $p_1 p_2 = n_1 n_2 / N^2$. For equal sample sizes, this simplifies to:

$$d = \frac{2t}{\sqrt{df}}$$

4.2.3 d from an F test based on numerator df = 1:

$$d = \sqrt{\frac{F}{p_1 p_2\,df_{error}}} = \sqrt{\frac{F}{p_1 p_2 (N - 2)}}$$

which for equal sample sizes simplifies to:

$$d = 2\sqrt{\frac{F}{df}}$$

## 4.3 Computing g

4.3.1 g from a one-sample or correlated-means t-test:

$$g = \frac{t}{\sqrt{N}}$$

4.3.2 g from an independent-groups t-test:

$$g = t\sqrt{\frac{n_1 + n_2}{n_1 n_2}} = \frac{t}{\sqrt{p_1 p_2 N}}$$

which for equal sample sizes simplifies to:

$$g = \frac{2t}{\sqrt{N}}$$

4.3.3 g from an F test based on numerator df = 1:

$$g = \sqrt{\frac{F}{p_1 p_2 N}}$$

which for equal sample sizes simplifies to:

$$g = 2\sqrt{\frac{F}{N}}$$

## 4.4 Computing η² and ω²

$$\eta^2 \text{ for an effect} = \frac{F_{effect}(df_{effect})}{F_{effect}(df_{effect}) + df_{error}} = \frac{\dfrac{F_{effect}(df_{effect})}{df_{error}}}{\dfrac{F_{effect}(df_{effect})}{df_{error}} + 1}$$

$$\omega^2 \text{ for an effect} = \frac{(F_{effect} - 1)(df_{effect})}{(F_{effect} - 1)(df_{effect}) + nk}$$

For F tests with numerator df = 1, these simplify to:

$$\eta^2 = \frac{F_{effect}}{F_{effect} + df_{error}} \qquad \omega^2 = \frac{F_{effect} - 1}{(F_{effect} - 1) + nk}$$
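Finally, the Section 4.4 formulas can be tied back to Section 3: compute η² and ω² from a hypothetical F test, then recover F from η² as a consistency check (F and the design values are assumed for illustration):

```python
F = 4.5                              # hypothetical F statistic
k, n = 4, 8                          # assumed: k groups, n per group
df_effect = k - 1
df_error = n * k - k
nk = n * k                           # total N

eta2 = F * df_effect / (F * df_effect + df_error)
omega2 = (F - 1) * df_effect / ((F - 1) * df_effect + nk)

# consistency check: Section 3.3.2 should recover F from eta^2
F_back = (eta2 / (1 - eta2)) * df_error / df_effect
```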