
# Spring semester 2011

Econometrics I
Instructor: Jan Hanousek
TA: Pavla Nikolovova
Midterm - suggested solutions
1. Heteroskedasticity means that the error terms of different observations have different variances, i.e.

$$Var(\varepsilon_i) = \sigma_i^2 \neq \sigma_j^2 = Var(\varepsilon_j) \quad \text{for } i \neq j.$$

This causes the OLS estimator to be inefficient and the standard errors (and thus the statistical inference) to be incorrect.

An endogeneity problem occurs whenever a RHS variable in the regression is correlated with the error term, i.e.

$$Cov(X, \varepsilon) \neq 0.$$

When there is endogeneity in the model, the parameters are not estimated consistently.

The instrumental variable estimator is designed to remedy the endogeneity problem. To use it, we need to find an instrumental variable (or variables) $Z$ such that

$$Cov(Z, \varepsilon) = 0.$$

Then only the variation of $X$ that can be explained by $Z$ (and which is thus not correlated with $\varepsilon$) is isolated (by regressing $X$ on $Z$) and used in the original regression to find a consistent estimate of its parameters.

The exact procedure is as follows. In the first stage, we regress $X$ on $Z$:

$$X = Z\gamma + \nu,$$

which gives us

$$\hat{\gamma} = (Z'Z)^{-1}Z'X,$$

and so the predicted $\hat{X}$ is

$$\hat{X} = Z\hat{\gamma} = Z(Z'Z)^{-1}Z'X.$$

We use this predicted $\hat{X}$ in the second stage, which imposes the moment condition

$$\hat{X}'\hat{u} = 0,$$

and so we get

$$\hat{X}'(y - X\hat{\beta}) = 0$$
$$\hat{X}'y - \hat{X}'X\hat{\beta} = 0$$
$$\hat{X}'X\hat{\beta} = \hat{X}'y$$
$$\hat{\beta} = (\hat{X}'X)^{-1}\hat{X}'y.$$

If we plug in for $\hat{X}$, we obtain

$$\hat{\beta} = \left((Z(Z'Z)^{-1}Z'X)'X\right)^{-1}(Z(Z'Z)^{-1}Z'X)'y = (X'Z(Z'Z)^{-1}Z'X)^{-1}X'Z(Z'Z)^{-1}Z'y.$$
2. We know that in the model

$$y = \alpha + \beta x + \varepsilon,$$

the general formulas for the coefficients are

$$\hat{\alpha} = \bar{y} - \hat{\beta}\bar{x}, \qquad \hat{\beta} = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^n (x_i - \bar{x})^2}.$$

Let us denote by

$$y^* = \alpha^* + \beta^* x^* + \varepsilon^*$$

the scaled model.
(a) We have $y^* = 0.8y$ and $x^* = 0.8x$, which implies $\bar{y}^* = 0.8\bar{y}$ and $\bar{x}^* = 0.8\bar{x}$. Hence,

$$\hat{\beta}^* = \frac{\sum_{i=1}^n (x_i^* - \bar{x}^*)(y_i^* - \bar{y}^*)}{\sum_{i=1}^n (x_i^* - \bar{x}^*)^2} = \frac{\sum_{i=1}^n (0.8x_i - 0.8\bar{x})(0.8y_i - 0.8\bar{y})}{\sum_{i=1}^n (0.8x_i - 0.8\bar{x})^2} = \frac{0.8 \cdot 0.8 \sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{0.8^2 \sum_{i=1}^n (x_i - \bar{x})^2} = \hat{\beta} = 0.7$$

and

$$\hat{\alpha}^* = \bar{y}^* - \hat{\beta}^*\bar{x}^* = 0.8\bar{y} - \hat{\beta} \cdot 0.8\bar{x} = 0.8(\bar{y} - \hat{\beta}\bar{x}) = 0.8\hat{\alpha} = 0.8 \cdot 1 = 0.8.$$

The slope coefficient will remain unchanged and the intercept will be scaled by 0.8, i.e.

$$\hat{\alpha}^* = 0.8, \qquad \hat{\beta}^* = 0.7.$$
(b) We have $y^* = 0.8y$ and $x^* = x$, which implies $\bar{y}^* = 0.8\bar{y}$ and $\bar{x}^* = \bar{x}$. Hence,

$$\hat{\beta}^* = \frac{\sum_{i=1}^n (x_i^* - \bar{x}^*)(y_i^* - \bar{y}^*)}{\sum_{i=1}^n (x_i^* - \bar{x}^*)^2} = \frac{\sum_{i=1}^n (x_i - \bar{x})(0.8y_i - 0.8\bar{y})}{\sum_{i=1}^n (x_i - \bar{x})^2} = 0.8\,\frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^n (x_i - \bar{x})^2} = 0.8\hat{\beta}$$

and

$$\hat{\alpha}^* = \bar{y}^* - \hat{\beta}^*\bar{x}^* = 0.8\bar{y} - 0.8\hat{\beta}\bar{x} = 0.8(\bar{y} - \hat{\beta}\bar{x}) = 0.8\hat{\alpha} = 0.8 \cdot 1 = 0.8.$$

Both the slope coefficient and the intercept will be scaled by 0.8, i.e.

$$\hat{\alpha}^* = 0.8, \qquad \hat{\beta}^* = 0.56.$$
(c) We have $y^* = y$ and $x^* = 0.8x$, which implies $\bar{y}^* = \bar{y}$ and $\bar{x}^* = 0.8\bar{x}$. Hence,

$$\hat{\beta}^* = \frac{\sum_{i=1}^n (x_i^* - \bar{x}^*)(y_i^* - \bar{y}^*)}{\sum_{i=1}^n (x_i^* - \bar{x}^*)^2} = \frac{\sum_{i=1}^n (0.8x_i - 0.8\bar{x})(y_i - \bar{y})}{\sum_{i=1}^n (0.8x_i - 0.8\bar{x})^2} = \frac{0.8\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{0.8^2 \sum_{i=1}^n (x_i - \bar{x})^2} = \frac{1}{0.8}\hat{\beta} = \frac{1}{0.8} \cdot 0.7 = 0.875$$

and

$$\hat{\alpha}^* = \bar{y}^* - \hat{\beta}^*\bar{x}^* = \bar{y} - \frac{1}{0.8}\hat{\beta} \cdot 0.8\bar{x} = \bar{y} - \hat{\beta}\bar{x} = \hat{\alpha} = 1.$$

The slope coefficient will be scaled by $\frac{1}{0.8}$ and the intercept will remain unchanged, i.e.

$$\hat{\alpha}^* = 1, \qquad \hat{\beta}^* = 0.875.$$
(d) We should know that the t-statistics of all coefficients on unscaled and on scaled data have to be the same, because scaling the explanatory variables (as well as any other linear transformation of the matrix of explanatory variables) cannot affect the fit of the model or the significance of the coefficients. We can prove this formally. We know that

$$s.e.(\hat{\beta}) = \sqrt{\frac{\hat{\sigma}^2}{\sum_{i=1}^n (x_i - \bar{x})^2}}$$

and

$$s.e.(\hat{\beta}^*) = \sqrt{\frac{\hat{\sigma}^2}{\sum_{i=1}^n (x_i^* - \bar{x}^*)^2}} = \sqrt{\frac{\hat{\sigma}^2}{\sum_{i=1}^n (0.8x_i - 0.8\bar{x})^2}} = \sqrt{\frac{\hat{\sigma}^2}{0.8^2 \sum_{i=1}^n (x_i - \bar{x})^2}} = \frac{1}{0.8}\, s.e.(\hat{\beta}).$$

Then,

$$t(\hat{\beta}^*) = \frac{\hat{\beta}^*}{s.e.(\hat{\beta}^*)} = \frac{\frac{1}{0.8}\hat{\beta}}{\frac{1}{0.8}\, s.e.(\hat{\beta})} = t(\hat{\beta}),$$

and so the t-statistic of the slope coefficient is indeed the same as the one calculated on unscaled data.
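All four results can be checked numerically. The sketch below (simulated data; the true coefficients 1 and 0.7 are assumptions chosen to match the exercise) re-runs simple OLS on scaled copies of the same sample and verifies the scaling rules and the invariance of the t-statistic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.7 * x + rng.normal(scale=0.5, size=n)

def ols(x, y):
    """Return (intercept, slope, se_slope, t_slope) of y on a constant and x."""
    n = len(x)
    b = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
    a = y.mean() - b * x.mean()
    e = y - a - b * x
    s2 = (e ** 2).sum() / (n - 2)            # residual variance estimate
    se_b = np.sqrt(s2 / ((x - x.mean()) ** 2).sum())
    return a, b, se_b, b / se_b

a0, b0, se0, t0 = ols(x, y)              # unscaled data
aa, ba, _, ta = ols(0.8 * x, 0.8 * y)    # case (a): both y and x scaled
ab, bb, _, tb = ols(x, 0.8 * y)          # case (b): only y scaled
ac, bc, _, tc = ols(0.8 * x, y)          # case (c): only x scaled
```

The algebraic relations hold exactly (up to floating-point error), regardless of the simulated sample.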
3. (a) Let us define a dummy variable $D$ such that

$$D_t = \begin{cases} 0 & \text{if } t = 1970, \ldots, 1990 \\ 1 & \text{if } t = 1991, \ldots, 2009 \end{cases}$$

and consider the model

$$Y_t = \beta_1 + \delta_1 D_t + (\beta_2 + \delta_2 D_t)X_t + \beta_3 W_t + u_t = \beta_1 + \delta_1 D_t + \beta_2 X_t + \delta_2 D_t X_t + \beta_3 W_t + u_t.$$
In the period 1970–1990, the model looks like

$$Y_t = \beta_1 + \beta_2 X_t + \beta_3 W_t + u_t$$

and in the period 1991–2009 like

$$Y_t = (\beta_1 + \delta_1) + (\beta_2 + \delta_2)X_t + \beta_3 W_t + u_t,$$

which shows that in this model we indeed have different intercepts and different slopes on $X_t$ for the two periods.

Testing this hypothesis means testing

$$H_0: \; \delta_1 = 0, \; \delta_2 = 0$$

using an F-test with the restricted model

$$Y_t = \beta_1 + \beta_2 X_t + \beta_3 W_t + u_t$$

and the unrestricted model

$$Y_t = \beta_1 + \delta_1 D_t + \beta_2 X_t + \delta_2 D_t X_t + \beta_3 W_t + u_t$$

with 2 restrictions, 5 parameters and $2009 - 1970 + 1 = 40$ observations:

$$F = \frac{(SSE_R - SSE_U)/2}{SSE_U/(40-5)} = \frac{(SSE_R - SSE_U)/2}{SSE_U/35} \sim F_{2,35}.$$
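A minimal simulation of this test (the break magnitudes, error variance and data-generating process are illustrative assumptions, not the exam's data) could look as follows:

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1970, 2010)          # 40 annual observations
T = len(years)
D = (years >= 1991).astype(float)      # dummy: 1 from 1991 onwards

X = rng.normal(size=T)
W = rng.normal(size=T)
# Simulate a structural break: intercept and X-slope both shift after 1990
y = 1.0 + 0.5 * D + (1.0 + 0.8 * D) * X + 0.3 * W + rng.normal(scale=0.3, size=T)

def sse(M, y):
    """Sum of squared OLS residuals of y regressed on the columns of M."""
    b = np.linalg.lstsq(M, y, rcond=None)[0]
    e = y - M @ b
    return (e ** 2).sum()

ones = np.ones(T)
sse_r = sse(np.column_stack([ones, X, W]), y)             # restricted: no break
sse_u = sse(np.column_stack([ones, D, X, D * X, W]), y)   # unrestricted: 5 params
F = ((sse_r - sse_u) / 2) / (sse_u / (T - 5))             # ~ F(2, 35) under H0
```

Since the data are simulated with a genuine break, the statistic lands far above the F(2, 35) critical value and the no-break hypothesis is rejected.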
(b) To test whether the residuals associated with different periods have different variances, we would use the Goldfeld–Quandt test. This test requires ordering the data by (suspected) variance, splitting them into two parts with $T_1$ and $T_2$ observations respectively, and comparing the variances for these two parts (here the two time periods). The test compares the variances of the errors from both parts and tests the hypothesis

$$H_0: \; \sigma_1^2 = \sigma_2^2 \quad \text{versus} \quad H_A: \; \sigma_1^2 < \sigma_2^2.$$

After running the regression on both parts of the data and saving the residuals, we compute the test statistic

$$GQ = \frac{SSE_2/(T_2 - k)}{SSE_1/(T_1 - k)} \sim F_{T_2 - k,\, T_1 - k}.$$

In our case, $T_1 = 21$ and $T_2 = 19$ if we assume that the second period is the one with the higher variance of the error term, and $k = 3$ if we assume that the question refers to the original model and not to its modification from part (a). If $GQ$ exceeds the critical value, the $H_0$ hypothesis of the two variances being equal is rejected.
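A sketch of the Goldfeld–Quandt computation on simulated data (the error standard deviations 0.5 and 2.0 are assumptions chosen so that $H_0$ is false, and the coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
T1, T2, k = 21, 19, 3                  # period sizes and parameters per regression

def sse(M, y):
    """Sum of squared OLS residuals of y regressed on the columns of M."""
    b = np.linalg.lstsq(M, y, rcond=None)[0]
    e = y - M @ b
    return (e ** 2).sum()

def make_period(T, sigma):
    """Simulate one period of data with the given error standard deviation."""
    X = rng.normal(size=T)
    W = rng.normal(size=T)
    y = 1.0 + 1.0 * X + 0.5 * W + rng.normal(scale=sigma, size=T)
    return np.column_stack([np.ones(T), X, W]), y

M1, y1 = make_period(T1, 0.5)          # low-variance period (1970-1990)
M2, y2 = make_period(T2, 2.0)          # high-variance period (1991-2009)

# GQ = [SSE2/(T2-k)] / [SSE1/(T1-k)] ~ F(T2-k, T1-k) under H0
GQ = (sse(M2, y2) / (T2 - k)) / (sse(M1, y1) / (T1 - k))
```

With the variances chosen above, the ratio is far above 1 and $H_0$ would be rejected against any reasonable F(16, 18) critical value.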
4. (a) From the very low p-values, we see that all four explanatory variables used (Size sq feet, Bathrooms, Nice ratings and SwimingPool) are significant at the 1% and therefore also at the 5% level. The intercept is not significant (not even at the 10% level).

To test that each coefficient is significantly different from 0, we need to form the t-statistic

$$t(\hat{\beta}) = \frac{\hat{\beta}}{s.e.(\hat{\beta})} \sim t_{6805}$$

for each coefficient and compare it with the required critical value. Here, the t-statistics are already computed and presented in the output table, and the p-values represent the levels of significance. Therefore, our previous answer is sufficient to claim that each coefficient is significantly different from 0, so in fact there is no need to construct any special test.
(b) The model seems to describe the data well. The variables used are statistically highly significant, and this holds despite the fact that some of them might be correlated (e.g. size and number of bathrooms) and that the number of observations is not excessive, both of which would rather work against high significance. The coefficients are of reasonable magnitude (and so the variables are also economically significant) and their signs are as expected (see part (c)). The $R^2$ is high, indicating that there is not much unexplained variation left, even though the model is rather parsimonious. This can also be explained by the fact that all houses are in the same location, meaning that there is no variation in location that would induce variation in price.
(c) The model tells us that the price of a house should depend on its size, its number of bathrooms, some subjective beauty rating and the presence or absence of a swimming pool. All coefficients are positive, meaning that bigger and nicer houses with more bathrooms and with a swimming pool should have a higher price, which is logical. More precisely, an additional bathroom adds \$10 043, an increase in the subjective rating adds \$10 042 and the presence of a swimming pool adds \$25 862, ceteris paribus.
5. First, let us test the restrictions separately.

(a) We can express

$$\beta_2 = 1 - \beta_3 - \beta_4$$

and plug it into the unrestricted model:

$$y = \beta_1 + (1 - \beta_3 - \beta_4)X_2 + \beta_3 X_3 + \beta_4 X_4 + \beta_5 X_5 + \beta_6 X_6 + \varepsilon$$
$$y - X_2 = \beta_1 + \beta_3(X_3 - X_2) + \beta_4(X_4 - X_2) + \beta_5 X_5 + \beta_6 X_6 + \varepsilon.$$
We have here $J = 1$ restriction, $n$ observations and $k = 6$ parameters. Hence, to test the restriction, we should run the unrestricted model

$$y = \beta_1 + \beta_2 X_2 + \beta_3 X_3 + \beta_4 X_4 + \beta_5 X_5 + \beta_6 X_6 + \varepsilon$$

and the restricted model

$$y - X_2 = \beta_1 + \beta_3(X_3 - X_2) + \beta_4(X_4 - X_2) + \beta_5 X_5 + \beta_6 X_6 + \varepsilon,$$

save the SSE in both cases and test

$$F = \frac{(SSE_R - SSE_U)/J}{SSE_U/(n-k)} = \frac{(SSE_R - SSE_U)/1}{SSE_U/(n-6)} \sim F_{1,\,n-6}.$$

Note that since we have only one restriction, we could also use the fact that

$$\sqrt{F} \sim t_{n-6}$$

and, if $n$ is large, also

$$\sqrt{F} \sim N(0,1).$$
(b) When we plug these restrictions into the original model, we obtain

$$y = \beta_1 + \beta_2 X_2 + \beta_3 X_3 + \beta_5(X_5 - X_4) + \varepsilon.$$

We would apply the same method as in (a), with the only difference that now we have $J = 2$ restrictions and so

$$F = \frac{(SSE_R - SSE_U)/J}{SSE_U/(n-k)} = \frac{(SSE_R - SSE_U)/2}{SSE_U/(n-6)} \sim F_{2,\,n-6}.$$
Second, let us test the restrictions together. The restricted model now becomes

$$y - X_2 = \beta_1 + \beta_3(X_3 - X_2) + \beta_5(X_5 + X_2 - X_4) + \varepsilon$$

and $J = 3$, so we have

$$F = \frac{(SSE_R - SSE_U)/J}{SSE_U/(n-k)} = \frac{(SSE_R - SSE_U)/3}{SSE_U/(n-6)} \sim F_{3,\,n-6}.$$
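The joint test can be sketched as follows. The data are simulated under $H_0$, with assumed coefficient values ($\beta_3 = 0.5$, $\beta_5 = 0.4$, hence $\beta_4 = -0.4$, $\beta_2 = 0.9$, $\beta_6 = 0$) chosen so that all three restrictions hold, so the statistic should typically stay below the F(3, n−6) critical value:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
X2, X3, X4, X5, X6 = (rng.normal(size=n) for _ in range(5))

# Assumed coefficients satisfying H0: b2+b3+b4 = 1, b4 = -b5, b6 = 0
y = 1.0 + 0.9 * X2 + 0.5 * X3 - 0.4 * X4 + 0.4 * X5 + 0.0 * X6 \
    + rng.normal(size=n)

def sse(M, y):
    """Sum of squared OLS residuals of y regressed on the columns of M."""
    b = np.linalg.lstsq(M, y, rcond=None)[0]
    e = y - M @ b
    return (e ** 2).sum()

ones = np.ones(n)
# Unrestricted model: y on a constant and X2..X6 (k = 6 parameters)
sse_u = sse(np.column_stack([ones, X2, X3, X4, X5, X6]), y)
# Restricted model (all three restrictions substituted in):
# y - X2 = b1 + b3 (X3 - X2) + b5 (X5 + X2 - X4) + eps
sse_r = sse(np.column_stack([ones, X3 - X2, X5 + X2 - X4]), y - X2)

J, k = 3, 6
F = ((sse_r - sse_u) / J) / (sse_u / (n - k))   # ~ F(3, n-6) under H0
```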
Third, let us construct the LM test for the case of all three restrictions being tested jointly (the separate cases are analogous), i.e.

$$H_0: \; \beta_2 + \beta_3 + \beta_4 = 1, \quad \beta_4 = -\beta_5, \quad \beta_6 = 0.$$

Let us denote

$$g(X, \beta) = \beta_1 + \beta_2 X_2 + \beta_3 X_3 + \beta_4 X_4 + \beta_5 X_5 + \beta_6 X_6.$$
We know that the LM test then consists of the following steps:

(a) Run the restricted model

$$y - X_2 = \beta_1 + \beta_3(X_3 - X_2) + \beta_5(X_5 + X_2 - X_4) + \varepsilon$$

and save the residuals $e^R_i$.

(b) Find $\frac{\partial g}{\partial \beta}$:

$$\frac{\partial g}{\partial \beta_1} = 1, \quad \frac{\partial g}{\partial \beta_2} = X_2, \quad \frac{\partial g}{\partial \beta_3} = X_3, \quad \frac{\partial g}{\partial \beta_4} = X_4, \quad \frac{\partial g}{\partial \beta_5} = X_5, \quad \frac{\partial g}{\partial \beta_6} = X_6.$$

(c) Evaluate $\frac{\partial g}{\partial \beta}$ under $H_0$, which gives exactly the same expressions (there is no change of $\frac{\partial g}{\partial \beta}$ under $H_0$, since the model is linear in the parameters).
(d) Regress $e^R_i$ on $\frac{\partial g}{\partial \beta}\big|_{H_0}$:

$$e^R_i = \alpha_1 + \alpha_2 X_2 + \alpha_3 X_3 + \alpha_4 X_4 + \alpha_5 X_5 + \alpha_6 X_6 + \nu_i,$$

save the $R^2$ of this auxiliary regression and construct the LM statistic

$$LM = NR^2 \sim \chi^2_J.$$

The number of restrictions is $J = 3$, and $N$, the number of observations in the auxiliary regression, is the same as in the original regression, i.e. $N = n$. We thus have

$$LM = nR^2 \sim \chi^2_3.$$
(e) If $LM$ exceeds the $\chi^2_3$ critical value, we reject $H_0$.
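The five LM steps can be sketched numerically. Here the data are simulated with $\beta_6 = 0.5$, i.e. deliberately violating one of the restrictions, so the statistic should exceed the $\chi^2_3$ 5% critical value of about 7.81 (all coefficient values are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
X2, X3, X4, X5, X6 = (rng.normal(size=n) for _ in range(5))

# b2+b3+b4 = 1 and b4 = -b5 hold, but b6 = 0.5 violates b6 = 0
y = 1.0 + 0.9 * X2 + 0.5 * X3 - 0.4 * X4 + 0.4 * X5 + 0.5 * X6 \
    + rng.normal(scale=0.5, size=n)

def residuals(M, y):
    """OLS residuals of y regressed on the columns of M."""
    b = np.linalg.lstsq(M, y, rcond=None)[0]
    return y - M @ b

ones = np.ones(n)
# Step (a): run the restricted model and save its residuals
e_r = residuals(np.column_stack([ones, X3 - X2, X5 + X2 - X4]), y - X2)

# Steps (b)-(d): regress e_r on the derivatives dg/dbeta = (1, X2, ..., X6)
G = np.column_stack([ones, X2, X3, X4, X5, X6])
e_aux = residuals(G, e_r)
r2 = 1 - (e_aux ** 2).sum() / ((e_r - e_r.mean()) ** 2).sum()
LM = n * r2                              # ~ chi2(3) under H0
```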
6. (a) The presented model suffers from heteroskedasticity, which means that if we estimated it by OLS, the estimator would not be asymptotically efficient. However, even with heteroskedasticity, if the other classical assumptions hold (mainly the exogeneity of the RHS variables), the parameters can still be estimated consistently, which will be the basis of the method of estimation we are going to propose.

This method is in fact the Feasible Generalized Least Squares procedure, for which we need to estimate the variance–covariance matrix of the residuals, $\Omega$. Once we have the estimate $\hat{\Omega}$, we can find its Choleski decomposition

$$\hat{\Omega}^{-1} = D'D.$$

Then we multiply the model $y_t = a + x_{0t}\beta + \varepsilon_t$ by $D$. By this multiplication, the error term of the model becomes homoskedastic and so the OLS estimator of the transformed model will be asymptotically efficient (as we know from the lecture).

The problem thus reduces to finding $\hat{\Omega}$. From the setup, we know that

$$\Omega = \begin{pmatrix} \exp(x_{01}\delta) & 0 & \cdots & 0 \\ 0 & \exp(x_{02}\delta) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \exp(x_{0T}\delta) \end{pmatrix},$$
which means that

$$D = \begin{pmatrix} \frac{1}{\sqrt{\exp(x_{01}\hat{\delta})}} & 0 & \cdots & 0 \\ 0 & \frac{1}{\sqrt{\exp(x_{02}\hat{\delta})}} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{\sqrt{\exp(x_{0T}\hat{\delta})}} \end{pmatrix}.$$
Hence, we have to estimate the parameter $\delta$. For this we can use the fact that $\hat{\varepsilon}_t^2$ is, up to a scaling constant, an estimator of $Var(\varepsilon_t)$, and that, for the reasons explained above, we can estimate $\varepsilon_t$ consistently.

Overall, our estimation should follow these steps:

- Run the model $y_t = a + x_{0t}\beta + \varepsilon_t$ and save the residuals $\hat{\varepsilon}_t$.
- Run the regression
$$\ln(\hat{\varepsilon}_t^2) = b + x_{0t}\delta + \nu_t$$
and get the estimate $\hat{\delta}$.
- Form the matrix $D$ as defined above.
- Transform the data of the original model by multiplying them by $D$.
- Run the regression on the transformed model. Given the very simple form of $D$, we can see that the transformed model will be

$$y^*_t = a\, i^*_t + x^*_{0t}\beta + \varepsilon^*_t, \qquad (1)$$

where

$$y^*_t = \frac{y_t}{\sqrt{\exp(x_{0t}\hat{\delta})}}, \qquad i^*_t = \frac{1}{\sqrt{\exp(x_{0t}\hat{\delta})}}, \qquad x^*_{0t} = \frac{x_{0t}}{\sqrt{\exp(x_{0t}\hat{\delta})}}.$$
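The FGLS steps above can be sketched as follows on simulated data (the true parameter values $a = 1$, $\beta = 2$, $\delta = 0.8$ and a scalar regressor are assumptions for the simulation):

```python
import numpy as np

rng = np.random.default_rng(6)
T = 2000
x0 = rng.normal(size=T)

a_true, beta_true, delta_true = 1.0, 2.0, 0.8
# Multiplicative heteroskedasticity: Var(eps_t) = exp(x0_t * delta)
eps = rng.normal(size=T) * np.sqrt(np.exp(x0 * delta_true))
y = a_true + beta_true * x0 + eps

M = np.column_stack([np.ones(T), x0])

# Step 1: OLS on the original model; save the residuals
b_ols = np.linalg.lstsq(M, y, rcond=None)[0]
e = y - M @ b_ols

# Step 2: regress ln(e^2) on a constant and x0 to estimate delta
g = np.linalg.lstsq(M, np.log(e ** 2), rcond=None)[0]
delta_hat = g[1]

# Steps 3-5: weight every observation by 1/sqrt(exp(x0_t * delta_hat)),
# which is exactly the multiplication by D, then re-run OLS
w = 1.0 / np.sqrt(np.exp(x0 * delta_hat))
b_fgls = np.linalg.lstsq(M * w[:, None], y * w, rcond=None)[0]
```

Note that the intercept of the log-variance regression absorbs the scaling constant mentioned above, so only the slope $\hat{\delta}$ is needed to form the weights.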
(b) One method of testing the restriction $\beta = 0$ (in other words, testing the significance of the regression) would be to do so within the transformed model (1), which is homoskedastic and where thus all classical assumptions hold. We would run this model and save its $R^2$. It is the unrestricted model with $K_2 + 1$ parameters to be estimated. The corresponding restricted model is given by the $K_2$ restrictions $\beta = 0$ and thus includes only the intercept. Therefore its $R^2_R = 0$, and so the F-statistic is

$$F = \frac{(R^2 - 0)/K_2}{(1 - R^2)/(T - (K_2 + 1))} \sim F_{K_2,\, T-(K_2+1)}.$$

Another method would be to simply estimate the original model while allowing for heteroskedastic errors by computing the Huber–White robust variance–covariance matrix and using it in the usual F-test of the validity of the restrictions.
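The second method relies on a heteroskedasticity-robust variance–covariance estimate. A minimal sketch of the White (HC0) sandwich formula on simulated heteroskedastic data (the data-generating process is an assumption), contrasted with the conventional OLS standard errors that are invalid here:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 1000
x0 = rng.normal(size=T)
eps = rng.normal(size=T) * np.sqrt(np.exp(0.8 * x0))   # heteroskedastic errors
y = 1.0 + 2.0 * x0 + eps

M = np.column_stack([np.ones(T), x0])
b = np.linalg.lstsq(M, y, rcond=None)[0]
e = y - M @ b

# White (HC0) sandwich: (M'M)^-1 M' diag(e^2) M (M'M)^-1
bread = np.linalg.inv(M.T @ M)
meat = M.T @ (M * (e ** 2)[:, None])
V_white = bread @ meat @ bread
se_white = np.sqrt(np.diag(V_white))

# Conventional OLS standard errors (assume homoskedasticity)
s2 = (e ** 2).sum() / (T - 2)
se_ols = np.sqrt(np.diag(s2 * bread))
```

With the error variance increasing in $x_0$, the robust standard error of the slope exceeds the conventional one, which is exactly why robust inference matters here.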