
Chapter 1 Linear Regression with 1 Predictor

Statistical Model
$Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i \qquad i = 1, \ldots, n$

where:

$Y_i$ is the (random) response for the ith case

$\beta_0, \beta_1$ are parameters

$X_i$ is a known constant, the value of the predictor variable for the ith case

$\varepsilon_i$ is a random error term, such that:

$E\{\varepsilon_i\} = 0 \qquad \sigma^2\{\varepsilon_i\} = \sigma^2 \qquad \sigma\{\varepsilon_i, \varepsilon_j\} = 0 \;\; \forall i, j \; (i \neq j)$

The last point states that the random errors are independent (uncorrelated), with mean 0, and variance $\sigma^2$. This also implies that:

$E\{Y_i\} = \beta_0 + \beta_1 X_i \qquad \sigma^2\{Y_i\} = \sigma^2 \qquad \sigma\{Y_i, Y_j\} = 0$

Thus, $\beta_0$ represents the mean response when $X = 0$ (assuming that is a reasonable level of $X$), and is referred to as the Y-intercept. Also, $\beta_1$ represents the change in the mean response as $X$ increases by 1 unit, and is called the slope.

Least Squares Estimation of Model Parameters


In practice, the parameters $\beta_0$ and $\beta_1$ are unknown and must be estimated. One widely used criterion is to minimize the error sum of squares:

$Q = \sum_{i=1}^{n} \varepsilon_i^2 = \sum_{i=1}^{n} \left(Y_i - (\beta_0 + \beta_1 X_i)\right)^2 \qquad \text{where } \varepsilon_i = Y_i - (\beta_0 + \beta_1 X_i)$

This is done by calculus, by taking the partial derivatives of $Q$ with respect to $\beta_0$ and $\beta_1$ and setting each equation to 0. The values of $\beta_0$ and $\beta_1$ that set these equations to 0 are the least squares estimates and are labeled $b_0$ and $b_1$.
First, take the partial derivatives of $Q$ with respect to $\beta_0$ and $\beta_1$:

$\frac{\partial Q}{\partial \beta_0} = -2\sum_{i=1}^{n}\left(Y_i - (\beta_0 + \beta_1 X_i)\right) \qquad (1)$

$\frac{\partial Q}{\partial \beta_1} = -2\sum_{i=1}^{n}\left(Y_i - (\beta_0 + \beta_1 X_i)\right)X_i \qquad (2)$

Next, set these 2 equations to 0, replacing $\beta_0$ and $\beta_1$ with $b_0$ and $b_1$ since these are the values that minimize the error sum of squares:

$-2\sum_{i=1}^{n}(Y_i - b_0 - b_1 X_i) = 0 \;\;\Rightarrow\;\; \sum_{i=1}^{n} Y_i = n b_0 + b_1\sum_{i=1}^{n} X_i \qquad (1a)$

$-2\sum_{i=1}^{n}(Y_i - b_0 - b_1 X_i)X_i = 0 \;\;\Rightarrow\;\; \sum_{i=1}^{n} X_i Y_i = b_0\sum_{i=1}^{n} X_i + b_1\sum_{i=1}^{n} X_i^2 \qquad (2a)$

These two equations are referred to as the normal equations (although note that we have said nothing yet about normally distributed data).
Solving these two equations yields:
$b_1 = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{n}(X_i - \bar{X})^2} = \sum_{i=1}^{n}\frac{(X_i - \bar{X})}{\sum_{j=1}^{n}(X_j - \bar{X})^2}\,Y_i = \sum_{i=1}^{n} k_i Y_i$

$b_0 = \bar{Y} - b_1\bar{X} = \sum_{i=1}^{n} l_i Y_i$

where $k_i$ and $l_i$ are constants, and $Y_i$ is a random variable with mean and variance given above:

$k_i = \frac{X_i - \bar{X}}{\sum_{j=1}^{n}(X_j - \bar{X})^2} \qquad\qquad l_i = \frac{1}{n} - \bar{X}k_i = \frac{1}{n} - \frac{\bar{X}(X_i - \bar{X})}{\sum_{j=1}^{n}(X_j - \bar{X})^2}$

The fitted regression line, also known as the prediction equation, is:

$\hat{Y} = b_0 + b_1 X$

The fitted values for the individual observations are obtained by plugging the corresponding level of the predictor variable ($X_i$) into the fitted equation. The residuals are the vertical distances between the observed values ($Y_i$) and their fitted values ($\hat{Y}_i$), and are denoted as $e_i$:

$\hat{Y}_i = b_0 + b_1 X_i \qquad\qquad e_i = Y_i - \hat{Y}_i$

Properties of the fitted regression line

- $\sum_{i=1}^{n} e_i = 0$ : The residuals sum to 0
- $\sum_{i=1}^{n} X_i e_i = 0$ : The sum of the weighted (by $X$) residuals is 0
- $\sum_{i=1}^{n} \hat{Y}_i e_i = 0$ : The sum of the weighted (by $\hat{Y}$) residuals is 0
- The regression line goes through the point $(\bar{X}, \bar{Y})$

These can be derived via their definitions and the normal equations.
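
As a quick numerical check of these properties, the following sketch in R (consistent with the S-Plus/R listing later in these notes) fits a line by the least squares formulas above and evaluates each property. The x and y vectors are made-up values used purely for illustration.

# hypothetical illustrative data (not from the text)
x <- c(1, 2, 4, 5, 7)
y <- c(2.1, 3.9, 6.2, 8.1, 10.9)

b1 <- sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)   # slope estimate
b0 <- mean(y) - b1 * mean(x)                                      # intercept estimate
e  <- y - (b0 + b1 * x)                                           # residuals

sum(e)                       # residuals sum to (numerically) 0
sum(x * e)                   # X-weighted residuals sum to 0
sum((b0 + b1 * x) * e)       # fitted-value-weighted residuals sum to 0
b0 + b1 * mean(x) - mean(y)  # 0: the line passes through (Xbar, Ybar)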

Estimation of the Error Variance


Note that for a random variable, its variance is the expected value of the squared deviation from the mean. That is, for a random variable $W$, with mean $\mu_W$, its variance is:

$\sigma^2\{W\} = E\{(W - \mu_W)^2\}$

For the simple linear regression model, the errors have mean 0, and variance $\sigma^2$. This means that for the actual observed values $Y_i$, their mean and variance are as follows:

$E\{Y_i\} = \beta_0 + \beta_1 X_i \qquad\qquad \sigma^2\{Y_i\} = E\{(Y_i - (\beta_0 + \beta_1 X_i))^2\}$

First, we replace the unknown mean $\beta_0 + \beta_1 X_i$ with its fitted value $\hat{Y}_i = b_0 + b_1 X_i$, then we take the average squared distance from the observed values to their fitted values. We divide the sum of squared errors by $n-2$ to obtain an unbiased estimate of $\sigma^2$ (recall how you computed a sample variance when sampling from a single population).

$s^2 = \frac{\sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2}{n-2} = \frac{\sum_{i=1}^{n} e_i^2}{n-2}$

Common notation is to label the numerator as the error sum of squares (SSE):

$SSE = \sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2 = \sum_{i=1}^{n} e_i^2$

Also, the estimated variance is referred to as the error (or residual) mean square (MSE):

$MSE = s^2 = \frac{SSE}{n-2}$

To obtain an estimate of the standard deviation (which is in the units of the data), we take the square root of the error mean square: $s = \sqrt{MSE}$.
A shortcut formula for the error sum of squares, which can cause problems due to round-off errors, is:

$SSE = \sum_{i=1}^{n}(Y_i - \bar{Y})^2 - b_1\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})$

Some notation makes life easier when writing out elements of the regression model:

$SS_{XX} = \sum_{i=1}^{n}(X_i - \bar{X})^2 = \sum_{i=1}^{n} X_i^2 - \frac{\left(\sum_{i=1}^{n} X_i\right)^2}{n}$

$SS_{XY} = \sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y}) = \sum_{i=1}^{n} X_i Y_i - \frac{\left(\sum_{i=1}^{n} X_i\right)\left(\sum_{i=1}^{n} Y_i\right)}{n}$

$SS_{YY} = \sum_{i=1}^{n}(Y_i - \bar{Y})^2 = \sum_{i=1}^{n} Y_i^2 - \frac{\left(\sum_{i=1}^{n} Y_i\right)^2}{n}$

Note that we will be able to obtain most all of the simple linear regression analysis from
these quantities, the sample means, and the sample size.

$b_1 = \frac{SS_{XY}}{SS_{XX}} \qquad\qquad SSE = SS_{YY} - \frac{(SS_{XY})^2}{SS_{XX}}$
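
The following R sketch (a minimal illustration, reusing the hypothetical x and y vectors from the earlier sketch) computes these summary quantities and recovers b1, b0, SSE, and MSE from them.

x <- c(1, 2, 4, 5, 7)              # hypothetical data, as before
y <- c(2.1, 3.9, 6.2, 8.1, 10.9)
n    <- length(x)
SSxx <- sum(x^2) - sum(x)^2 / n
SSxy <- sum(x * y) - sum(x) * sum(y) / n
SSyy <- sum(y^2) - sum(y)^2 / n
b1   <- SSxy / SSxx                # slope from the summary quantities
b0   <- mean(y) - b1 * mean(x)     # intercept from the sample means
SSE  <- SSyy - SSxy^2 / SSxx
MSE  <- SSE / (n - 2)
c(b1 = b1, b0 = b0, SSE = SSE, MSE = MSE)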

Normal Error Regression Model


If we add further that the random errors follow a normal distribution, then the response variable also has a normal distribution, with mean and variance given above. The notation we will use for the errors and the data is:

$\varepsilon_i \sim N(0, \sigma^2) \qquad\qquad Y_i \sim N(\beta_0 + \beta_1 X_i, \; \sigma^2)$

The density function for the ith observation is:

$f_i = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left[-\frac{1}{2}\left(\frac{Y_i - \beta_0 - \beta_1 X_i}{\sigma}\right)^2\right]$

The likelihood function is the product of the individual density functions (due to the independence assumption on the random errors):

$L(\beta_0, \beta_1, \sigma^2) = \prod_{i=1}^{n}\frac{1}{(2\pi\sigma^2)^{1/2}}\exp\left[-\frac{1}{2\sigma^2}(Y_i - \beta_0 - \beta_1 X_i)^2\right] = \frac{1}{(2\pi\sigma^2)^{n/2}}\exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(Y_i - \beta_0 - \beta_1 X_i)^2\right]$

The values of $\beta_0, \beta_1, \sigma^2$ that maximize the likelihood function are referred to as maximum likelihood estimators. The MLEs are denoted as $\hat{\beta}_0, \hat{\beta}_1, \hat{\sigma}^2$. Note that the natural logarithm of the likelihood is maximized by the same values of $\beta_0, \beta_1, \sigma^2$ that maximize the likelihood function, and it's easier to work with the log likelihood function:

$\log_e L = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(Y_i - \beta_0 - \beta_1 X_i)^2$

Taking partial derivatives with respect to $\beta_0, \beta_1, \sigma^2$ yields:

$\frac{\partial \log L}{\partial \beta_0} = \frac{1}{\sigma^2}\sum_{i=1}^{n}(Y_i - \beta_0 - \beta_1 X_i) \qquad (4)$

$\frac{\partial \log L}{\partial \beta_1} = \frac{1}{\sigma^2}\sum_{i=1}^{n}(Y_i - \beta_0 - \beta_1 X_i)X_i \qquad (5)$

$\frac{\partial \log L}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2(\sigma^2)^2}\sum_{i=1}^{n}(Y_i - \beta_0 - \beta_1 X_i)^2 \qquad (6)$

Setting these three equations to 0, and placing hats on the parameters to denote the maximum likelihood estimators, we get the following three equations:

$\sum_{i=1}^{n} Y_i = n\hat{\beta}_0 + \hat{\beta}_1\sum_{i=1}^{n} X_i \qquad (4a)$

$\sum_{i=1}^{n} X_i Y_i = \hat{\beta}_0\sum_{i=1}^{n} X_i + \hat{\beta}_1\sum_{i=1}^{n} X_i^2 \qquad (5a)$

$\hat{\sigma}^2 = \frac{\sum_{i=1}^{n}(Y_i - \hat{\beta}_0 - \hat{\beta}_1 X_i)^2}{n} \qquad (6a)$

From equations 4a and 5a, we see that the maximum likelihood estimators are the same as the least squares estimators (these are the normal equations). However, from equation 6a, we obtain the maximum likelihood estimator for the error variance as:

$\hat{\sigma}^2 = \frac{\sum_{i=1}^{n}(Y_i - \hat{\beta}_0 - \hat{\beta}_1 X_i)^2}{n} = \frac{\sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2}{n}$

This estimator is biased downward. We will use the unbiased estimator $s^2 = MSE$ throughout this course to estimate the error variance.
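
To see the difference numerically, here is a brief R sketch (again using the made-up data from the earlier sketches) comparing the maximum likelihood estimate of the error variance with the unbiased MSE; the two differ only through the divisor n versus n-2.

x <- c(1, 2, 4, 5, 7)              # hypothetical data
y <- c(2.1, 3.9, 6.2, 8.1, 10.9)
fit <- lm(y ~ x)
e   <- resid(fit)
n   <- length(y)
sigma2_mle <- sum(e^2) / n         # maximum likelihood estimator, biased downward
s2_mse     <- sum(e^2) / (n - 2)   # unbiased estimator (MSE)
c(MLE = sigma2_mle, MSE = s2_mse)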

Example LSD Concentration and Math Scores


A pharmacodynamic study was conducted at Yale in the 1960s to determine the
relationship between LSD concentration and math scores in a group of volunteers. The
independent (predictor) variable was the mean tissue concentration of LSD in a group of
5 volunteers, and the dependent (response) variable was the mean math score among the
volunteers. There were n=7 observations, collected at different time points throughout the
experiment.
Source: Wagner, J.G., Aghajanian, G.K., and Bing, O.H. (1968), "Correlation of Performance Test Scores with Tissue Concentration of Lysergic Acid Diethylamide in Human Subjects," Clinical Pharmacology and Therapeutics, 9:635-638.
The following EXCEL spreadsheet gives the data and pertinent calculations.
Time (i)  Score (Y)    Conc (X)   Y-Ybar      X-Xbar      (Y-Ybar)**2   (X-Xbar)**2   (X-Xbar)(Y-Ybar)   Yhat       e         e**2
1         78.93        1.17       28.84286    -3.162857   831.9104082   10.0036653    -91.22583673       78.5828    0.3472    0.1205
2         58.20        2.97       8.112857    -1.362857   65.81845102   1.85737959    -11.05666531       62.36576   -4.1658   17.354
3         67.47        3.26       17.38286    -1.072857   302.1637224   1.15102245    -18.64932245       59.75301   7.717     59.552
4         37.47        4.69       -12.61714   0.357143    159.1922939   0.12755102    -4.506122449       46.86948   -9.3995   88.35
5         45.65        5.83       -4.437143   1.497143    19.68823673   2.24143673    -6.643036735       36.59868   9.0513    81.926
6         32.92        6.00       -17.16714   1.667143    294.7107939   2.77936531    -28.62007959       35.06708   -2.1471   4.6099
7         29.97        6.41       -20.11714   2.077143    404.6994367   4.31452245    -41.78617959       31.37319   -1.4032   1.969
Sum       350.61       30.33                              2078.183343   22.4749429    -202.4872429                  1E-14     253.88
Mean      50.08714286  4.3328571

b1 = -9.009466     b0 = 89.123874     MSE = 50.776266

The fitted equation is:

$\hat{Y} = 89.12 - 9.01X$

and the estimated error variance is $s^2 = MSE = 50.78$, with corresponding standard deviation $s = 7.13$.


A plot of the data and the fitted equation, obtained from EXCEL, is given below.

[Figure: Math Score vs LSD Concentration — scatterplot of Math Score (Y) against LSD Concentration (X) with the fitted regression line]

Output from various software packages is given below. Rules for standard errors and tests
are given in the next chapter. We will mainly use SAS, EXCEL, and SPSS throughout the
semester.

1) EXCEL (Using Built-in Data Analysis Package)


Data Cells

Time (i)   Score (Y)   Conc (X)
1          78.93       1.17
2          58.2        2.97
3          67.47       3.26
4          37.47       4.69
5          45.65       5.83
6          32.92       6
7          29.97       6.41

Regression Coefficients

             Coefficients   Standard Error   t Stat     P-value    Lower 95%   Upper 95%
Intercept    89.12387       7.047547         12.64608   5.49E-05   71.00761    107.2401
Conc (X)     -9.00947       1.503076         -5.99402   0.001854   -12.8732    -5.14569

Fitted Values and Residuals

Observation   Predicted Score (Y)   Residuals
1             78.5828               0.347202
2             62.36576              -4.16576
3             59.75301              7.716987
4             46.86948              -9.39948
5             36.59868              9.051315
6             35.06708              -2.14708
7             31.37319              -1.40319

2) SAS (Using PROC REG)


Program (Bottom portion generates graphics quality plot for WORD)
options nodate nonumber ps=55 ls=76;
title 'Pharmacodynamic Study';
title2 'Y=Math Score   X=Tissue LSD Concentration';
data lsd;
input score conc;
cards;
78.93 1.17
58.20 2.97
67.47 3.26
37.47 4.69
45.65 5.83
32.92 6.00
29.97 6.41
;
run;
proc reg;
model score=conc / p r;
run;
symbol1 c=black i=rl v=dot;
proc gplot;
plot score*conc=1 / frame;
run;
quit;

Program Output (Some output suppressed)


Pharmacodynamic Study
Y=Math Score   X=Tissue LSD Concentration

The REG Procedure
Model: MODEL1
Dependent Variable: score

Parameter Estimates

Variable     DF   Parameter Estimate   Standard Error   t Value   Pr > |t|
Intercept    1    89.12387             7.04755          12.65     <.0001
conc         1    -9.00947             1.50308          -5.99     0.0019

Output Statistics

        Dep Var    Predicted   Std Error                 Std Error   Student
Obs     score      Value       Mean Predict   Residual   Residual    Residual
1       78.9300    78.5828     5.4639         0.3472     4.574       0.0759
2       58.2000    62.3658     3.3838         -4.1658    6.271       -0.664
3       67.4700    59.7530     3.1391         7.7170     6.397       1.206
4       37.4700    46.8695     2.7463         -9.3995    6.575       -1.430
5       45.6500    36.5987     3.5097         9.0513     6.201       1.460
6       32.9200    35.0671     3.6787         -2.1471    6.103       -0.352
7       29.9700    31.3732     4.1233         -1.4032    5.812       -0.241

Plot (Including Regression Line)

3) SPSS (Spreadsheet/Menu Driven Package)

Output (Regression Coefficients Portion)

Coefficients(a)

                    Unstandardized Coefficients   Standardized Coefficients
Model               B          Std. Error         Beta                        t         Sig.
1   (Constant)      89.124     7.048                                          12.646    .000
    Conc (X)        -9.009     1.503              -.937                       -5.994    .002

a. Dependent Variable: Score (Y)

Plot of Data and Regression Line

Math Score vs LSD Concentration (SPSS)

[Figure: SPSS scatterplot of Score (Y) versus Conc (X) with the fitted linear regression line; Score (Y) = 89.12 + -9.01 * conc, R-Square = 0.88]

4) STATVIEW (Spreadsheet/Menu Driven Package from SAS)


Output (Regression Coefficients Portion)

Regression Coefficients
Score (Y) vs. Conc (X)

             Coefficient   Std. Error   Std. Coeff.   t-Value   P-Value
Intercept    89.124        7.048        89.124        12.646    <.0001
Conc (X)     -9.009        1.503        -.937         -5.994    .0019

Graphic output

[Figure: Regression Plot of Score (Y) versus Conc (X); Y = 89.124 - 9.009 * X; R^2 = .878]

5) S-Plus (Also available in R)


Program Commands

x <- c(1.17, 2.97, 3.26, 4.69, 5.83, 6.00, 6.41)


y <- c(78.93, 58.20, 67.47, 37.47, 45.65, 32.92, 29.97)
plot (x,y)
fit <- lm(y ~ x)
abline (fit)
summary (fit)

Program Output
Residuals:
      1       2       3       4       5       6       7
 0.3472  -4.166   7.717  -9.399   9.051  -2.147  -1.403

Coefficients:
               Value   Std. Error   t value   Pr(>|t|)
(Intercept)  89.1239       7.0475   12.6461     0.0001
          x  -9.0095       1.5031   -5.9940     0.0019

Residual standard error: 7.126 on 5 degrees of freedom

Graphics Output

[Figure: S-Plus scatterplot of y (math score) versus x (LSD concentration) with the fitted regression line added by abline(fit)]

6) STATA
Output (Regression Coefficients Portion)
score    Coef.        Std. Err.   t       P>|t|   [95% Conf. Interval]
conc     -9.009467    1.503077    -5.99   0.002   -12.87325    -5.145686
_cons    89.12388     7.047547    12.65   0.000   71.00758     107.2402

Graphics Output

[Figure: STATA plot titled "Math Scores vs LSD Concentration" showing score and fitted values versus conc]

Chapter 2 Inferences in Regression Analysis

Rules Concerning Linear Functions of Random Variables (P. 1318)


Let $Y_1, \ldots, Y_n$ be $n$ random variables. Consider the linear function $\sum_{i=1}^{n} a_i Y_i$, where the coefficients $a_1, \ldots, a_n$ are constants. Then we have:

$E\left\{\sum_{i=1}^{n} a_i Y_i\right\} = \sum_{i=1}^{n} a_i E\{Y_i\}$

$\sigma^2\left\{\sum_{i=1}^{n} a_i Y_i\right\} = \sum_{i=1}^{n}\sum_{j=1}^{n} a_i a_j \sigma\{Y_i, Y_j\}$

When $Y_1, \ldots, Y_n$ are independent (as in the model in Chapter 1), the variance of the linear combination simplifies to:

$\sigma^2\left\{\sum_{i=1}^{n} a_i Y_i\right\} = \sum_{i=1}^{n} a_i^2 \sigma^2\{Y_i\}$

When $Y_1, \ldots, Y_n$ are independent, the covariance of two linear functions $\sum_{i=1}^{n} a_i Y_i$ and $\sum_{i=1}^{n} c_i Y_i$ can be written as:

$\sigma\left\{\sum_{i=1}^{n} a_i Y_i, \; \sum_{i=1}^{n} c_i Y_i\right\} = \sum_{i=1}^{n} a_i c_i \sigma^2\{Y_i\}$

We will use these rules to obtain the distributions of the estimators $b_0$, $b_1$, and $\hat{Y} = b_0 + b_1 X$.

Inferences Concerning $\beta_1$

Recall that the least squares estimate of the slope parameter, $b_1$, is a linear function of the observed responses $Y_1, \ldots, Y_n$:

$b_1 = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{n}(X_i - \bar{X})^2} = \sum_{i=1}^{n} k_i Y_i \qquad\qquad k_i = \frac{X_i - \bar{X}}{\sum_{j=1}^{n}(X_j - \bar{X})^2}$

Note that $E\{Y_i\} = \beta_0 + \beta_1 X_i$, so that the expected value of $b_1$ is:

$E\{b_1\} = \sum_{i=1}^{n} k_i E\{Y_i\} = \sum_{i=1}^{n}\frac{(X_i - \bar{X})}{\sum_{j=1}^{n}(X_j - \bar{X})^2}(\beta_0 + \beta_1 X_i) = \frac{\beta_0\sum_{i=1}^{n}(X_i - \bar{X}) + \beta_1\sum_{i=1}^{n}(X_i - \bar{X})X_i}{\sum_{j=1}^{n}(X_j - \bar{X})^2}$

Note that $\sum_{i=1}^{n}(X_i - \bar{X}) = 0$ (why?), so that the first term in the numerator is 0, and we can add $-\beta_1\bar{X}\sum_{i=1}^{n}(X_i - \bar{X}) = 0$ to the last term to get:

$E\{b_1\} = \frac{\beta_1\left[\sum_{i=1}^{n}(X_i - \bar{X})X_i - \bar{X}\sum_{i=1}^{n}(X_i - \bar{X})\right]}{\sum_{j=1}^{n}(X_j - \bar{X})^2} = \frac{\beta_1\sum_{i=1}^{n}(X_i - \bar{X})^2}{\sum_{j=1}^{n}(X_j - \bar{X})^2} = \beta_1$

Thus, $b_1$ is an unbiased estimator of the parameter $\beta_1$.

To obtain the variance of $b_1$, recall that $\sigma^2\{Y_i\} = \sigma^2$. Thus:

$\sigma^2\{b_1\} = \sum_{i=1}^{n} k_i^2 \sigma^2\{Y_i\} = \sigma^2\sum_{i=1}^{n}\left[\frac{X_i - \bar{X}}{\sum_{j=1}^{n}(X_j - \bar{X})^2}\right]^2 = \sigma^2\frac{\sum_{i=1}^{n}(X_i - \bar{X})^2}{\left[\sum_{j=1}^{n}(X_j - \bar{X})^2\right]^2} = \frac{\sigma^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}$

Note that the variance of $b_1$ decreases when we have larger sample sizes (as long as the added $X$ levels are not placed at the sample mean $\bar{X}$). Since $\sigma^2$ is unknown in practice, and must be estimated from the data, we obtain the estimated variance of the estimator $b_1$ by replacing the unknown $\sigma^2$ with its unbiased estimate $s^2 = MSE$:

$s^2\{b_1\} = \frac{s^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2} = \frac{MSE}{\sum_{i=1}^{n}(X_i - \bar{X})^2}$

with estimated standard error:

$s\{b_1\} = \sqrt{\frac{MSE}{\sum_{i=1}^{n}(X_i - \bar{X})^2}}$

Further, the sampling distribution of $b_1$ is normal, that is:

$b_1 \sim N\left(\beta_1, \; \frac{\sigma^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\right)$

since under the current model, $b_1$ is a linear function of independent, normal random variables $Y_1, \ldots, Y_n$.

Making use of theory from mathematical statistics, we obtain the following result that allows us to make inferences concerning $\beta_1$:

$\frac{b_1 - \beta_1}{s\{b_1\}} \sim t(n-2)$

where $t(n-2)$ represents Student's t-distribution with $n-2$ degrees of freedom.

Confidence Interval for $\beta_1$

As a result of the fact that $\frac{b_1 - \beta_1}{s\{b_1\}} \sim t(n-2)$, we obtain the following probability statement:

$P\left\{t(\alpha/2; n-2) \le \frac{b_1 - \beta_1}{s\{b_1\}} \le t(1-\alpha/2; n-2)\right\} = 1 - \alpha$

where $t(\alpha/2; n-2)$ is the $(\alpha/2)100$th percentile of the t-distribution with $n-2$ degrees of freedom. Note that since the t-distribution is symmetric around 0, we have that $t(\alpha/2; n-2) = -t(1-\alpha/2; n-2)$. Traditionally, we obtain the table values corresponding to $t(1-\alpha/2; n-2)$, which is the value that leaves an upper tail area of $\alpha/2$. The following algebra results in obtaining a $(1-\alpha)100\%$ confidence interval for $\beta_1$:

$P\left\{-t(1-\alpha/2; n-2) \le \frac{b_1 - \beta_1}{s\{b_1\}} \le t(1-\alpha/2; n-2)\right\} = 1 - \alpha$

$P\left\{-t(1-\alpha/2; n-2)\,s\{b_1\} \le b_1 - \beta_1 \le t(1-\alpha/2; n-2)\,s\{b_1\}\right\} = 1 - \alpha$

$P\left\{b_1 - t(1-\alpha/2; n-2)\,s\{b_1\} \le \beta_1 \le b_1 + t(1-\alpha/2; n-2)\,s\{b_1\}\right\} = 1 - \alpha$

This leads to the following rule for a $(1-\alpha)100\%$ confidence interval for $\beta_1$:

$b_1 \pm t(1-\alpha/2; n-2)\,s\{b_1\}$

Some statistical software packages print this out automatically (e.g. EXCEL and SPSS). Other packages simply print out estimates and standard errors only (e.g. SAS).
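
As an illustration with the LSD data, the following R sketch computes the 95% confidence interval for the slope both from the formula above and with the built-in confint() function; the result should agree with the interval (-12.87, -5.15) shown in the EXCEL and STATA output above.

x <- c(1.17, 2.97, 3.26, 4.69, 5.83, 6.00, 6.41)
y <- c(78.93, 58.20, 67.47, 37.47, 45.65, 32.92, 29.97)
fit  <- lm(y ~ x)
n    <- length(y)
b1   <- coef(fit)["x"]
MSE  <- sum(resid(fit)^2) / (n - 2)
s_b1 <- sqrt(MSE / sum((x - mean(x))^2))        # estimated standard error of b1
tval <- qt(0.975, df = n - 2)                   # t(1 - alpha/2; n-2) with alpha = 0.05
c(lower = b1 - tval * s_b1, upper = b1 + tval * s_b1)
confint(fit, "x", level = 0.95)                 # same interval from the built-in function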

Tests Concerning $\beta_1$

We can also make use of the fact that $\frac{b_1 - \beta_1}{s\{b_1\}} \sim t(n-2)$ to test hypotheses concerning the slope parameter. As with means and proportions (and differences of means and proportions), we can conduct one-sided and two-sided tests, depending on whether a priori a specific directional belief is held regarding the slope. More often than not (but not necessarily), the null value for $\beta_1$ is 0 (the mean of Y is independent of X) and the alternative is that $\beta_1$ is positive (1-sided), negative (1-sided), or different from 0 (2-sided). The alternative hypothesis must be selected before observing the data.

2-sided tests

Null Hypothesis: $H_0: \beta_1 = \beta_{10}$
Alternative (Research) Hypothesis: $H_A: \beta_1 \neq \beta_{10}$
Test Statistic: $t^* = \frac{b_1 - \beta_{10}}{s\{b_1\}}$
Decision Rule: Conclude $H_A$ if $|t^*| \ge t(1-\alpha/2; n-2)$, otherwise conclude $H_0$
P-value: $2P(t(n-2) \ge |t^*|)$

All statistical software packages (to my knowledge) will print out the test statistic and P-value corresponding to a 2-sided test with $\beta_{10} = 0$.

1-sided tests (Upper Tail)

Null Hypothesis: $H_0: \beta_1 \le \beta_{10}$
Alternative (Research) Hypothesis: $H_A: \beta_1 > \beta_{10}$
Test Statistic: $t^* = \frac{b_1 - \beta_{10}}{s\{b_1\}}$
Decision Rule: Conclude $H_A$ if $t^* \ge t(1-\alpha; n-2)$, otherwise conclude $H_0$
P-value: $P(t(n-2) \ge t^*)$

A test for positive association between Y and X ($H_A: \beta_1 > 0$) can be obtained from standard statistical software by first checking that $b_1$ (and thus $t^*$) is positive, and cutting the printed P-value in half.

1-sided tests (Lower Tail)

Null Hypothesis: $H_0: \beta_1 \ge \beta_{10}$
Alternative (Research) Hypothesis: $H_A: \beta_1 < \beta_{10}$
Test Statistic: $t^* = \frac{b_1 - \beta_{10}}{s\{b_1\}}$
Decision Rule: Conclude $H_A$ if $t^* \le -t(1-\alpha; n-2)$, otherwise conclude $H_0$
P-value: $P(t(n-2) \le t^*)$

A test for negative association between Y and X ($H_A: \beta_1 < 0$) can be obtained from standard statistical software by first checking that $b_1$ (and thus $t^*$) is negative, and cutting the printed P-value in half.
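
For the LSD data, a test for negative association ($H_A: \beta_1 < 0$) could be carried out in R roughly as follows; the two-sided P-value of 0.0019 reported by the packages above is halved for the one-sided alternative. This is a minimal sketch, not a full analysis.

x <- c(1.17, 2.97, 3.26, 4.69, 5.83, 6.00, 6.41)
y <- c(78.93, 58.20, 67.47, 37.47, 45.65, 32.92, 29.97)
fit   <- lm(y ~ x)
tstar <- summary(fit)$coefficients["x", "t value"]   # t* = b1 / s{b1} (null value 0)
df    <- fit$df.residual                             # n - 2 = 5
pt(tstar, df)             # one-sided (lower-tail) P-value for HA: beta1 < 0
2 * pt(-abs(tstar), df)   # two-sided P-value, matching the printed 0.0019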

Inferences Concerning $\beta_0$

Recall that the least squares estimate of the intercept parameter, $b_0$, is a linear function of the observed responses $Y_1, \ldots, Y_n$:

$b_0 = \bar{Y} - b_1\bar{X} = \sum_{i=1}^{n}\left[\frac{1}{n} - \frac{(X_i - \bar{X})\bar{X}}{\sum_{j=1}^{n}(X_j - \bar{X})^2}\right]Y_i = \sum_{i=1}^{n} l_i Y_i$

Recalling that $E\{Y_i\} = \beta_0 + \beta_1 X_i$:

$E\{b_0\} = \sum_{i=1}^{n}\left[\frac{1}{n} - \frac{(X_i - \bar{X})\bar{X}}{\sum_{j=1}^{n}(X_j - \bar{X})^2}\right](\beta_0 + \beta_1 X_i)$

$= \beta_0\sum_{i=1}^{n}\left[\frac{1}{n} - \frac{(X_i - \bar{X})\bar{X}}{\sum_{j=1}^{n}(X_j - \bar{X})^2}\right] + \beta_1\sum_{i=1}^{n}\left[\frac{1}{n} - \frac{(X_i - \bar{X})\bar{X}}{\sum_{j=1}^{n}(X_j - \bar{X})^2}\right]X_i$

$= \beta_0(1 - 0) + \beta_1\left[\bar{X} - \bar{X}\frac{\sum_{i=1}^{n}(X_i - \bar{X})X_i}{\sum_{j=1}^{n}(X_j - \bar{X})^2}\right] = \beta_0 + \beta_1\left(\bar{X} - \bar{X}(1)\right) = \beta_0$

Thus, $b_0$ is an unbiased estimator of the parameter $\beta_0$. Below, we obtain the variance of the estimator $b_0$.

$\sigma^2\{b_0\} = \sum_{i=1}^{n} l_i^2 \sigma^2\{Y_i\} = \sigma^2\sum_{i=1}^{n}\left[\frac{1}{n} - \frac{\bar{X}(X_i - \bar{X})}{\sum_{j=1}^{n}(X_j - \bar{X})^2}\right]^2$

$= \sigma^2\sum_{i=1}^{n}\left[\frac{1}{n^2} - \frac{2\bar{X}(X_i - \bar{X})}{n\sum_{j=1}^{n}(X_j - \bar{X})^2} + \frac{\bar{X}^2(X_i - \bar{X})^2}{\left[\sum_{j=1}^{n}(X_j - \bar{X})^2\right]^2}\right] = \sigma^2\left[\frac{1}{n} - 0 + \frac{\bar{X}^2}{\sum_{j=1}^{n}(X_j - \bar{X})^2}\right]$

$= \sigma^2\left[\frac{1}{n} + \frac{\bar{X}^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\right]$

Note that the variance will decrease as the sample size increases, as long as the $X$ values are not all placed at the mean. Further, the sampling distribution of $b_0$ is normal under the assumptions of the model. The estimated standard error of $b_0$ is obtained by replacing $\sigma^2$ with its unbiased estimate $s^2 = MSE$ and taking the square root of the variance:

$s\{b_0\} = s\sqrt{\frac{1}{n} + \frac{\bar{X}^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}} = \sqrt{MSE\left[\frac{1}{n} + \frac{\bar{X}^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\right]}$

Note that $\frac{b_0 - \beta_0}{s\{b_0\}} \sim t(n-2)$, allowing for inferences concerning the intercept parameter $\beta_0$ when it is meaningful, namely when $X = 0$ is within the range of the observed data.

Confidence Interval for $\beta_0$:

$b_0 \pm t(1-\alpha/2; n-2)\,s\{b_0\}$

It is also useful to obtain the covariance of $b_0$ and $b_1$, as they are only independent under very rare circumstances:

$\sigma\{b_0, b_1\} = \sigma\left\{\sum_{i=1}^{n} l_i Y_i, \; \sum_{i=1}^{n} k_i Y_i\right\} = \sum_{i=1}^{n} l_i k_i \sigma^2\{Y_i\} = \sigma^2\sum_{i=1}^{n}\left[\frac{1}{n} - \frac{\bar{X}(X_i - \bar{X})}{\sum_{j=1}^{n}(X_j - \bar{X})^2}\right]\frac{(X_i - \bar{X})}{\sum_{j=1}^{n}(X_j - \bar{X})^2}$

$= \sigma^2\left[\frac{\sum_{i=1}^{n}(X_i - \bar{X})}{n\sum_{j=1}^{n}(X_j - \bar{X})^2} - \frac{\bar{X}\sum_{i=1}^{n}(X_i - \bar{X})^2}{\left[\sum_{j=1}^{n}(X_j - \bar{X})^2\right]^2}\right] = \sigma^2\left[0 - \frac{\bar{X}}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\right] = \frac{-\bar{X}\sigma^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}$

In practice, $\bar{X}$ is usually positive, so that the intercept and slope estimators are usually negatively correlated. We will use this result shortly.

Considerations on Making Inferences Concerning $\beta_0$ and $\beta_1$

Normality of Error Terms

If the data are approximately normal, simulation results have shown that using the t-distribution will provide approximately correct significance levels and confidence coefficients for tests and confidence intervals, respectively. Even if the distribution of the errors (and thus Y) is far from normal, in large samples the sampling distributions of $b_0$ and $b_1$ are approximately normal as a result of central limit theorems. This is sometimes referred to as asymptotic normality.

Interpretations of Confidence Coefficients and Error Probabilities

Since X levels are treated as fixed constants, these refer to the case where we repeat the experiment many times at the current set of X levels in this data set. In this sense, it's easier to interpret these terms in controlled experiments where the experimenter has set the levels of X (such as time and temperature in a laboratory setting) as opposed to observational studies, where nature determines the X levels, and we may not be able to reproduce the same conditions repeatedly. This will be covered later.

Spacing of X Levels

The variances of $b_0$ and $b_1$ (for given $n$ and $\sigma^2$) decrease as the X levels are more spread out, since their variances are inversely related to $\sum_{i=1}^{n}(X_i - \bar{X})^2$. However, there are reasons to choose a diverse range of X levels for assessing model fit. This is covered in Chapter 4.

Power of Tests

The power of a statistical test refers to the probability that we reject the null hypothesis. Note that when the null hypothesis is true, the power is simply the probability of a Type I error ($\alpha$). When the null hypothesis is false, the power is the probability that we correctly reject the null hypothesis, which is 1 minus the probability of a Type II error (power $= 1 - \beta$, where $\beta$ is the probability of a Type II error: failing to reject the null hypothesis when the alternative hypothesis is true). The following procedure can be used to obtain the power of the test concerning the slope parameter with a 2-sided alternative.

1) Write out the null and alternative hypotheses: $H_0: \beta_1 = \beta_{10}$   $H_A: \beta_1 \neq \beta_{10}$

2) Obtain the noncentrality measure, the standardized distance between the true value of $\beta_1$ and the value under the null hypothesis ($\beta_{10}$):

$\delta = \frac{|\beta_1 - \beta_{10}|}{\sigma\{b_1\}}$

3) Choose the probability of a Type I error ($\alpha = 0.05$ or $\alpha = 0.01$)

4) Determine the degrees of freedom for error: df = n-2

5) Refer to Table B.5 (pages 1346-7), identifying $\alpha$ (page), $\delta$ (row) and error degrees of freedom (column). The table provides the power of the test under these parameter values.

Note that the power increases within each table as the noncentrality measure increases for a given degrees of freedom, and as the degrees of freedom increase for a given noncentrality measure.
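
Where software is available, the table lookup can be replaced by a direct computation from the noncentral t-distribution. Below is a minimal R sketch of this calculation; the X levels, error standard deviation, and true slope are made-up values chosen only to illustrate the steps, and the formula for $\sigma\{b_1\}$ is the one derived earlier in this chapter.

# hypothetical design and parameter values (for illustration only)
x      <- c(1, 2, 3, 4, 5, 6, 7)     # planned X levels
sigma  <- 2                          # assumed error standard deviation
beta1  <- 1.5                        # true slope under the alternative
beta10 <- 0                          # null value
alpha  <- 0.05

n      <- length(x)
sd_b1  <- sigma / sqrt(sum((x - mean(x))^2))   # sigma{b1}
delta  <- abs(beta1 - beta10) / sd_b1          # noncentrality measure
tcrit  <- qt(1 - alpha/2, df = n - 2)

# power of the 2-sided t-test: P(|t*| >= tcrit) under the noncentral t with ncp = delta
power <- 1 - pt(tcrit, df = n - 2, ncp = delta) + pt(-tcrit, df = n - 2, ncp = delta)
power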

Confidence Interval for $E\{Y_h\} = \beta_0 + \beta_1 X_h$

When we wish to estimate the mean at a hypothetical X value (within the range of observed X values), we can use the fitted equation at that value of $X = X_h$ as a point estimate, but we have to include the uncertainty in the regression estimators to construct a confidence interval for the mean.

Parameter: $E\{Y_h\} = \beta_0 + \beta_1 X_h$

Estimator: $\hat{Y}_h = b_0 + b_1 X_h$

We can obtain the variance of the estimator (as a function of $X = X_h$) as follows:

$\sigma^2\{\hat{Y}_h\} = \sigma^2\{b_0 + b_1 X_h\} = \sigma^2\{b_0\} + X_h^2\sigma^2\{b_1\} + 2X_h\sigma\{b_0, b_1\}$

$= \sigma^2\left[\frac{1}{n} + \frac{\bar{X}^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\right] + X_h^2\frac{\sigma^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2} - 2X_h\frac{\bar{X}\sigma^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2} = \sigma^2\left[\frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\right]$

Estimated standard error of the estimator:

$s\{\hat{Y}_h\} = \sqrt{MSE\left[\frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\right]}$

Then $\frac{\hat{Y}_h - E\{Y_h\}}{s\{\hat{Y}_h\}} \sim t(n-2)$, which can be used to construct confidence intervals for the mean response at specific X levels, and tests concerning the mean (tests are rarely conducted).

$(1-\alpha)100\%$ Confidence Interval for $E\{Y_h\}$:

$\hat{Y}_h \pm t(1-\alpha/2; n-2)\,s\{\hat{Y}_h\}$
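
In R, this interval is available through predict() with interval = "confidence"; the sketch below, for the LSD data at a hypothetical level Xh = 4, also computes it from the formula above for comparison.

x <- c(1.17, 2.97, 3.26, 4.69, 5.83, 6.00, 6.41)
y <- c(78.93, 58.20, 67.47, 37.47, 45.65, 32.92, 29.97)
fit <- lm(y ~ x)
n   <- length(y); xh <- 4                      # xh is a hypothetical X level
MSE <- sum(resid(fit)^2) / (n - 2)
yhat <- sum(coef(fit) * c(1, xh))              # b0 + b1*Xh
se_mean <- sqrt(MSE * (1/n + (xh - mean(x))^2 / sum((x - mean(x))^2)))
yhat + c(-1, 1) * qt(0.975, n - 2) * se_mean   # 95% CI for E{Yh} from the formula
predict(fit, newdata = data.frame(x = xh), interval = "confidence", level = 0.95)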
Predicting a Future Observation When X is Known

If $\beta_0, \beta_1, \sigma$ were known, we'd know that the distribution of responses when $X = X_h$ is normal with mean $\beta_0 + \beta_1 X_h$ and standard deviation $\sigma$. Thus, making use of the normal distribution (and equivalently, the empirical rule) we know that if we took a sample item from this distribution, it is very likely that the value falls within 2 standard deviations of the mean. That is, we would know that the probability that the sampled item lies within the range $(\beta_0 + \beta_1 X_h - 2\sigma, \; \beta_0 + \beta_1 X_h + 2\sigma)$ is approximately 0.95.

In practice, we don't know the mean $\beta_0 + \beta_1 X_h$ or the standard deviation $\sigma$. However, we just constructed a $(1-\alpha)100\%$ confidence interval for $E\{Y_h\}$, and we have an estimate of $\sigma$ ($s$). Intuitively, we can approximately use the logic of the previous paragraph (with the estimate of $\sigma$) across the range of believable values for the mean. Then our prediction interval spans the lower tail of the normal curve centered at the lower bound for the mean to the upper tail of the normal curve centered at the upper bound for the mean. See Figure 2.5 on page 64 of the text book.

The prediction error for the new observation is the difference between the observed value and its predicted value: $Y_h - \hat{Y}_h$. Since the data are assumed to be independent, the new (future) value is independent of its predicted value, since it wasn't used in the regression analysis. The variance of the prediction error can be obtained as follows:

$\sigma^2\{pred\} = \sigma^2\{Y_h - \hat{Y}_h\} = \sigma^2\{Y_h\} + \sigma^2\{\hat{Y}_h\} = \sigma^2 + \sigma^2\left[\frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\right] = \sigma^2\left[1 + \frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\right]$

and an unbiased estimator is:

$s^2\{pred\} = MSE\left[1 + \frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\right]$

$(1-\alpha)100\%$ Prediction Interval for a New Observation When $X = X_h$:

$\hat{Y}_h \pm t(1-\alpha/2; n-2)\sqrt{MSE\left[1 + \frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\right]}$

It is a simple extension to obtain a prediction interval for the mean of $m$ new observations when $X = X_h$. The sample mean of $m$ new observations has variance $\sigma^2/m$, and we get the following variance for the error in predicting that mean:

$s^2\{predmean\} = MSE\left[\frac{1}{m} + \frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\right]$

and the obvious adjustment to the prediction interval for a single observation.

$(1-\alpha)100\%$ Prediction Interval for the Mean of $m$ New Observations When $X = X_h$:

$\hat{Y}_h \pm t(1-\alpha/2; n-2)\sqrt{MSE\left[\frac{1}{m} + \frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\right]}$
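
A prediction interval for a single new observation at Xh is available directly from predict() with interval = "prediction"; the interval for the mean of m new observations is not built in, but follows from the formula above. A minimal R sketch, continuing with the LSD data and hypothetical values Xh = 4 and m = 3:

x <- c(1.17, 2.97, 3.26, 4.69, 5.83, 6.00, 6.41)
y <- c(78.93, 58.20, 67.47, 37.47, 45.65, 32.92, 29.97)
fit <- lm(y ~ x); n <- length(y); xh <- 4; m <- 3   # xh and m are hypothetical
MSE  <- sum(resid(fit)^2) / (n - 2)
yhat <- sum(coef(fit) * c(1, xh))
se_pred <- sqrt(MSE * (1 + 1/n + (xh - mean(x))^2 / sum((x - mean(x))^2)))
yhat + c(-1, 1) * qt(0.975, n - 2) * se_pred        # PI for one new observation
predict(fit, newdata = data.frame(x = xh), interval = "prediction", level = 0.95)
se_pm <- sqrt(MSE * (1/m + 1/n + (xh - mean(x))^2 / sum((x - mean(x))^2)))
yhat + c(-1, 1) * qt(0.975, n - 2) * se_pm          # PI for the mean of m new observations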

Confidence Band for the Entire Regression Line (Working-Hotelling Method)

$\hat{Y}_h \pm W s\{\hat{Y}_h\} \qquad\qquad W^2 = 2F(1-\alpha; 2, n-2)$
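
A rough R sketch of the Working-Hotelling band limits over a grid of X values for the LSD data (a sketch only, not a polished plotting routine):

x <- c(1.17, 2.97, 3.26, 4.69, 5.83, 6.00, 6.41)
y <- c(78.93, 58.20, 67.47, 37.47, 45.65, 32.92, 29.97)
fit <- lm(y ~ x); n <- length(y)
MSE <- sum(resid(fit)^2) / (n - 2)
W   <- sqrt(2 * qf(0.95, 2, n - 2))              # Working-Hotelling multiplier
xg  <- seq(min(x), max(x), length.out = 50)      # grid of X levels
yh  <- coef(fit)[1] + coef(fit)[2] * xg
se  <- sqrt(MSE * (1/n + (xg - mean(x))^2 / sum((x - mean(x))^2)))
band <- cbind(lower = yh - W * se, upper = yh + W * se)
head(band)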
Analysis of Variance Approach to Regression

Consider the total deviations of the observed responses from the mean: $Y_i - \bar{Y}$. When these terms are all squared and summed up, this is referred to as the total sum of squares (SSTO):

$SSTO = \sum_{i=1}^{n}(Y_i - \bar{Y})^2$

The more spread out the observed data are, the larger SSTO will be.

Now consider the deviation of the observed responses from their fitted values based on the regression model: $Y_i - \hat{Y}_i = Y_i - (b_0 + b_1 X_i) = e_i$. When these terms are squared and summed up, this is referred to as the error sum of squares (SSE). We've already encountered this quantity and used it to estimate the error variance.

$SSE = \sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2$

When the observed responses fall close to the regression line, SSE will be small. When the data are not near the line, SSE will be large.

Finally, there is a third quantity, representing the deviations of the fitted values from the mean. When these deviations are squared and summed up, this is referred to as the regression sum of squares (SSR):

$SSR = \sum_{i=1}^{n}(\hat{Y}_i - \bar{Y})^2$

The error and regression sums of squares sum to the total sum of squares: $SSTO = SSR + SSE$, which can be seen as follows:

$Y_i - \bar{Y} = Y_i - \bar{Y} + \hat{Y}_i - \hat{Y}_i = (Y_i - \hat{Y}_i) + (\hat{Y}_i - \bar{Y})$

$(Y_i - \bar{Y})^2 = \left[(Y_i - \hat{Y}_i) + (\hat{Y}_i - \bar{Y})\right]^2 = (Y_i - \hat{Y}_i)^2 + (\hat{Y}_i - \bar{Y})^2 + 2(Y_i - \hat{Y}_i)(\hat{Y}_i - \bar{Y})$

$SSTO = \sum_{i=1}^{n}(Y_i - \bar{Y})^2 = \sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2 + \sum_{i=1}^{n}(\hat{Y}_i - \bar{Y})^2 + 2\sum_{i=1}^{n}(Y_i - \hat{Y}_i)(\hat{Y}_i - \bar{Y})$

$= \sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2 + \sum_{i=1}^{n}(\hat{Y}_i - \bar{Y})^2 + 2\sum_{i=1}^{n} e_i(b_0 + b_1 X_i - \bar{Y})$

$= \sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2 + \sum_{i=1}^{n}(\hat{Y}_i - \bar{Y})^2 + 2\left[b_0\sum_{i=1}^{n} e_i + b_1\sum_{i=1}^{n} e_i X_i - \bar{Y}\sum_{i=1}^{n} e_i\right] = SSE + SSR$

The last term was 0 since $\sum_{i=1}^{n} e_i = 0$ and $\sum_{i=1}^{n} e_i X_i = 0$.

Each sum of squares has degrees of freedom associated with it. The total degrees of freedom is $df_T = n-1$. The error degrees of freedom is $df_E = n-2$. The regression degrees of freedom is $df_R = 1$. Note that the error and regression degrees of freedom sum to the total degrees of freedom: $n-1 = 1 + (n-2)$.

Mean squares are the sums of squares divided by their degrees of freedom:

$MSR = \frac{SSR}{1} \qquad\qquad MSE = \frac{SSE}{n-2}$

Note that MSE was our estimate of the error variance, and that we don't compute a total mean square. It can be shown that the expected values of the mean squares are:

$E\{MSE\} = \sigma^2 \qquad\qquad E\{MSR\} = \sigma^2 + \beta_1^2\sum_{i=1}^{n}(X_i - \bar{X})^2$

Note that these expected mean squares are the same if and only if $\beta_1 = 0$.
The Analysis of Variance is reported in tabular form:

Source        df     SS      MS                F
Regression    1      SSR     MSR = SSR/1       F = MSR/MSE
Error         n-2    SSE     MSE = SSE/(n-2)
C Total       n-1    SSTO

F Test of $\beta_1 = 0$ versus $\beta_1 \neq 0$

As a result of Cochran's Theorem (stated on page 76 of the text book), we have a test of whether the dependent variable Y is linearly related to the predictor variable X. This is a very specific case of the t-test described previously. Its full utility will be seen when we consider multiple predictors. The test proceeds as follows:

Null hypothesis: $H_0: \beta_1 = 0$

Alternative (Research) Hypothesis: $H_A: \beta_1 \neq 0$

Test Statistic: $TS: F^* = \frac{MSR}{MSE}$

Rejection Region: $RR: F^* \ge F(1-\alpha; 1, n-2)$

P-value: $P\{F(1, n-2) \ge F^*\}$

Critical values of the F-distribution (indexed by numerator and denominator degrees of freedom) are given in Table B.4, pages 1340-1345.

Note that this is a very specific version of the t-test regarding the slope parameter, specifically a 2-sided test of whether the slope is 0. Mathematically, the tests are identical:

$t^* = \frac{b_1 - 0}{s\{b_1\}} = \frac{\dfrac{\sum(X_i - \bar{X})(Y_i - \bar{Y})}{\sum(X_i - \bar{X})^2}}{\sqrt{\dfrac{MSE}{\sum(X_i - \bar{X})^2}}}$

Note that:

$MSR = \frac{SSR}{1} = \sum(\hat{Y}_i - \bar{Y})^2 = \sum(b_0 + b_1 X_i - \bar{Y})^2$

Substituting $b_0 = \bar{Y} - b_1\bar{X}$ and simplifying:

$MSR = \sum\left(\bar{Y} - b_1\bar{X} + b_1 X_i - \bar{Y}\right)^2 = b_1^2\sum(X_i - \bar{X})^2 = \frac{\left[\sum(X_i - \bar{X})(Y_i - \bar{Y})\right]^2}{\sum(X_i - \bar{X})^2}$

Thus:

$F^* = \frac{MSR}{MSE} = \frac{b_1^2\sum(X_i - \bar{X})^2}{MSE} = \left[\frac{b_1}{\sqrt{MSE/\sum(X_i - \bar{X})^2}}\right]^2 = (t^*)^2$

Further, the critical values are equivalent: $\left(t(1-\alpha/2; n-2)\right)^2 = F(1-\alpha; 1, n-2)$; check this from the two tables. Thus, the tests are equivalent.
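
For the LSD data, the ANOVA table and the F-test can be obtained in R as sketched below; the squared t statistic from the slope test should reproduce the F statistic.

x <- c(1.17, 2.97, 3.26, 4.69, 5.83, 6.00, 6.41)
y <- c(78.93, 58.20, 67.47, 37.47, 45.65, 32.92, 29.97)
fit <- lm(y ~ x)
anova(fit)                                               # Source, df, SS, MS, F, P-value
tstar <- summary(fit)$coefficients["x", "t value"]
c(t_squared = tstar^2, F = anova(fit)["x", "F value"])   # equal up to rounding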

General Linear Test Approach

This is a very general method of testing hypotheses concerning regression models. We first consider the simple linear regression model, and testing whether Y is linearly associated with X. We wish to test $H_0: \beta_1 = 0$ vs $H_A: \beta_1 \neq 0$.

Full Model

This is the model specified under the alternative hypothesis, also referred to as the unrestricted model. Under simple linear regression with normal errors, we have:

$Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i$

Using least squares (and maximum likelihood) to estimate the model parameters ($\hat{Y}_i = b_0 + b_1 X_i$), we obtain the error sum of squares for the full model:

$SSE(F) = \sum\left(Y_i - (b_0 + b_1 X_i)\right)^2 = \sum(Y_i - \hat{Y}_i)^2 = SSE$

Reduced Model

This is the model specified by the null hypothesis, also referred to as the restricted model. Under simple linear regression with normal errors, we have:

$Y_i = \beta_0 + 0\cdot X_i + \varepsilon_i = \beta_0 + \varepsilon_i$

Using least squares (and maximum likelihood) to estimate the model parameter, we obtain $\bar{Y}$ as the estimate of $\beta_0$, and have $b_0 = \bar{Y}$ as the fitted value for each observation. We then get the following error sum of squares under the reduced model:

$SSE(R) = \sum(Y_i - b_0)^2 = \sum(Y_i - \bar{Y})^2 = SSTO$

Test Statistic

The error sum of squares for the full model will always be less than or equal to the error sum of squares for the reduced model, by definition of least squares. The test statistic will be:

$F^* = \frac{\dfrac{SSE(R) - SSE(F)}{df_R - df_F}}{\dfrac{SSE(F)}{df_F}}$

where $df_R, df_F$ are the error degrees of freedom for the reduced and full models. We will use this method throughout the course.

For the simple linear regression model, we obtain the following quantities:

$SSE(F) = SSE \qquad df_F = n-2 \qquad\qquad SSE(R) = SSTO \qquad df_R = n-1$

Thus the F-Statistic for the General Linear Test can be written:

$F^* = \frac{\dfrac{SSE(R) - SSE(F)}{df_R - df_F}}{\dfrac{SSE(F)}{df_F}} = \frac{\dfrac{SSTO - SSE}{(n-1) - (n-2)}}{\dfrac{SSE}{n-2}} = \frac{\dfrac{SSR}{1}}{\dfrac{SSE}{n-2}} = \frac{MSR}{MSE}$

Thus, for this particular null hypothesis, the general linear test is equivalent to the F-test.
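
In R, the general linear test corresponds to comparing the reduced and full models with anova(); a minimal sketch for the LSD data:

x <- c(1.17, 2.97, 3.26, 4.69, 5.83, 6.00, 6.41)
y <- c(78.93, 58.20, 67.47, 37.47, 45.65, 32.92, 29.97)
full    <- lm(y ~ x)     # full (unrestricted) model
reduced <- lm(y ~ 1)     # reduced model: mean only (beta1 = 0)
anova(reduced, full)     # F* = [(SSE(R) - SSE(F)) / (dfR - dfF)] / [SSE(F) / dfF]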

Descriptive Measures of Association

Along with the slope, Y-intercept, and error variance, several other measures are often reported.

Coefficient of Determination ($r^2$)

The coefficient of determination measures the proportion of the variation in Y that is explained by the regression on X. It is computed as the regression sum of squares divided by the total (corrected) sum of squares. Values near 0 imply that the regression model has done little to explain variation in Y, while values near 1 imply that the model has explained a large portion of the variation in Y. If all the data fall exactly on the fitted line, $r^2 = 1$. The coefficient of determination will lie between 0 and 1.

$r^2 = \frac{SSR}{SSTO} = 1 - \frac{SSE}{SSTO} \qquad\qquad 0 \le r^2 \le 1$

Coefficient of Correlation ($r$)

The coefficient of correlation is a measure of the strength of the linear association between Y and X. It will always be the same sign as the slope estimate ($b_1$), but it has several advantages:

- In some applications, we cannot identify a clear dependent and independent variable; we just wish to determine how two variables vary together in a population (people's heights and weights, closing stock prices of two firms, etc.). Unlike the slope estimate, the coefficient of correlation does not depend on which variable is labeled as Y, and which is labeled as X.
- The slope estimate depends on the units of X and Y, while the correlation coefficient does not.
- The slope estimate has no bound on its range of potential values. The correlation coefficient is bounded by -1 and +1, with higher values (in absolute value) implying stronger linear association (it is not useful in measuring nonlinear association which may exist, however).

$r = \text{sgn}(b_1)\sqrt{r^2} = \frac{\sum(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum(X_i - \bar{X})^2\sum(Y_i - \bar{Y})^2}} = b_1\frac{s_X}{s_Y} \qquad\qquad -1 \le r \le 1$

where $\text{sgn}(b_1)$ is the sign (positive or negative) of $b_1$, and $s_X, s_Y$ are the sample standard deviations of X and Y, respectively.
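
For the LSD data, both quantities can be read off the fitted model in R; the R-square of about 0.878 matches the value shown in the STATVIEW and SPSS plots above.

x <- c(1.17, 2.97, 3.26, 4.69, 5.83, 6.00, 6.41)
y <- c(78.93, 58.20, 67.47, 37.47, 45.65, 32.92, 29.97)
fit <- lm(y ~ x)
summary(fit)$r.squared                                  # coefficient of determination r^2
cor(x, y)                                               # coefficient of correlation r
sign(coef(fit)["x"]) * sqrt(summary(fit)$r.squared)     # sgn(b1) * sqrt(r^2) equals cor(x, y)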

Issues in Applying Regression Analysis

- When using regression to predict the future, the assumption is that the conditions are the same in the future as they are now. Clearly any future predictions of economic variables such as tourism made prior to September 11, 2001 would not be valid. Often when we predict in the future, we must also predict X, as well as Y, especially when we aren't controlling the levels of X. Prediction intervals using methods described previously will be too narrow (that is, they will overstate confidence levels).
- Inferences should be made only within the range of X values used in the regression analysis. We have no means of knowing whether a linear association continues outside the range observed. That is, we should not extrapolate outside the range of X levels observed in the experiment.
- Even if we determine that X and Y are associated based on the t-test and/or F-test, we cannot conclude that changes in X cause changes in Y. Finding an association is only one step in demonstrating a causal relationship.
- When multiple tests and/or confidence intervals are being made, we must adjust our confidence levels. This is covered in Chapter 4.
- When $X_i$ is a random variable, and not being controlled, all methods described thus far hold, as long as the $X_i$ are independent, and their probability distribution does not depend on $\beta_0, \beta_1, \sigma^2$.
