
Università di Pavia

Vector AutoRegression Model


Eduardo Rossi

VAR
Vector autoregressions (VARs) were introduced into empirical
economics by C. Sims (1980), who demonstrated that VARs provide a
flexible and tractable framework for analyzing economic time series.
Identification issue: since these models don't dichotomize variables
into endogenous and exogenous, the exclusion restrictions used to
identify traditional simultaneous equations models make little sense.
A Vector Autoregression model (VAR) of order p is written as:

y_t = c + \Phi_1 y_{t-1} + \ldots + \Phi_p y_{t-p} + \varepsilon_t

y_t : (N \times 1), \qquad \Phi_i : (N \times N) \;\forall i, \qquad \varepsilon_t : (N \times 1)

E(\varepsilon_t) = 0, \qquad E(\varepsilon_t \varepsilon_\tau') = \Omega \text{ if } t = \tau, \quad 0 \text{ if } t \neq \tau

with \Omega a positive definite matrix.


© Eduardo Rossi, Time Series Econometrics 11

VAR(p)

A VAR is a vector generalization of a scalar autoregression.


The VAR is a system in which each variable is regressed on a
constant and p of its own lags as well as on p lags of each of the other
variables in the VAR.
[I_N - \Phi_1 L - \ldots - \Phi_p L^p] \, y_t = c + \varepsilon_t

\Phi(L) \, y_t = c + \varepsilon_t

with

\Phi(L) = [I_N - \Phi_1 L - \ldots - \Phi_p L^p]

\Phi(L) an (N \times N) matrix polynomial in L.


VAR(p) - Stationarity

The element (i, j) of \Phi(L) is a scalar polynomial in L:

\delta_{ij} - \phi_{ij}^{(1)} L - \phi_{ij}^{(2)} L^2 - \ldots - \phi_{ij}^{(p)} L^p

where \delta_{ij} = 1 if i = j and \delta_{ij} = 0 if i \neq j.

Stationarity: A vector process is said to be covariance stationary if its
first and second moments, E[y_t] and E[y_t y_{t-j}'] respectively, are
independent of the date t.


VAR(1)

y_t = c + \Phi_1 y_{t-1} + \varepsilon_t

First equation:

y_{1t} = c_1 + \phi_{11}^{(1)} y_{1t-1} + \phi_{12}^{(1)} y_{2t-1} + \ldots + \phi_{1N}^{(1)} y_{Nt-1} + \varepsilon_{1t}

Substituting recursively:

y_t = c + \Phi_1 [c + \Phi_1 y_{t-2} + \varepsilon_{t-1}] + \varepsilon_t
    = c + \Phi_1 c + \Phi_1^2 y_{t-2} + \varepsilon_t + \Phi_1 \varepsilon_{t-1}
\ldots
y_t = c + \Phi_1 c + \ldots + \Phi_1^{k-1} c + \Phi_1^k y_{t-k} + \varepsilon_t + \Phi_1 \varepsilon_{t-1} + \ldots + \Phi_1^{k-1} \varepsilon_{t-k+1}

Taking expectations:

E[y_t] = \sum_{j=0}^{k-1} \Phi_1^j c + \Phi_1^k E[y_{t-k}]

The value of this sum depends on the behavior of \Phi_1^j as j increases.



Stability of VAR(1)
Let \lambda_1, \lambda_2, \ldots, \lambda_N be the eigenvalues of \Phi_1, the solutions to the
characteristic equation

|\Phi_1 - \lambda I_N| = 0

then, if the eigenvalues are all distinct,

\Phi_1 = Q M Q^{-1}

\Phi_1^k = Q M^k Q^{-1}

M^k = diag(\lambda_1^k, \lambda_2^k, \ldots, \lambda_N^k)

If |\lambda_i| < 1, i = 1, \ldots, N, then \Phi_1^k \to 0 as k \to \infty.
If |\lambda_i| \geq 1 for some i, then one or more elements of M^k do not vanish,
and may tend to \infty.
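The eigenvalue condition above is easy to verify numerically. The following NumPy sketch uses a made-up (2 x 2) coefficient matrix `Phi1` (not from the lecture notes) and checks both the condition and its consequence that \Phi_1^k vanishes:

```python
import numpy as np

# Stability check for a VAR(1): all eigenvalues of Phi_1 must lie
# strictly inside the unit circle. Phi_1 below is an illustrative example.
Phi1 = np.array([[0.5, 0.1],
                 [0.2, 0.4]])

eigenvalues = np.linalg.eigvals(Phi1)   # roots of |Phi_1 - lambda I| = 0
is_stable = bool(np.all(np.abs(eigenvalues) < 1))
print(is_stable)  # True: the eigenvalues are 0.6 and 0.3

# Consequence of stability: Phi_1^k -> 0 as k grows
print(np.max(np.abs(np.linalg.matrix_power(Phi1, 50))))
```

For this `Phi1` the characteristic polynomial is \lambda^2 - 0.9\lambda + 0.18, with roots 0.6 and 0.3, so the process is stable.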

VAR(p) Companion form

VAR(p):

y_t = \Phi_1 y_{t-1} + \ldots + \Phi_p y_{t-p} + \varepsilon_t

as a VAR(1) (Companion Form):

\xi_t = F \xi_{t-1} + v_t

\xi_t = [y_t', y_{t-1}', \ldots, y_{t-p+1}']' \qquad (Np \times 1)

VAR(p) Companion form

F = | \Phi_1   \Phi_2   \ldots   \Phi_{p-1}   \Phi_p |
    | I_N      0        \ldots   0            0      |
    | 0        I_N      \ldots   0            0      |
    | \vdots   \vdots            \vdots       \vdots |
    | 0        0        \ldots   I_N          0      |        (Np \times Np)

v_t = [\varepsilon_t', 0', \ldots, 0']' \qquad (Np \times 1)

E[v_t v_\tau'] = Q \text{ if } t = \tau, \quad 0 \text{ if } t \neq \tau

Q = | \Omega   0   \ldots   0 |
    | 0        0   \ldots   0 |
    | \vdots       \ddots     |
    | 0        0   \ldots   0 |        (Np \times Np)
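The block structure of F can be assembled mechanically. The helper below (name and coefficients are illustrative, not from the notes) stacks the \Phi_i matrices into F and checks stability of a hypothetical VAR(2) through the eigenvalues of F:

```python
import numpy as np

def companion(Phis):
    """Stack VAR(p) coefficient matrices into the (Np x Np) companion matrix F."""
    p = len(Phis)
    N = Phis[0].shape[0]
    F = np.zeros((N * p, N * p))
    F[:N, :] = np.hstack(Phis)          # top block row: [Phi_1 ... Phi_p]
    F[N:, :-N] = np.eye(N * (p - 1))    # identity blocks below the top row
    return F

# hypothetical VAR(2) coefficients
Phi1 = np.array([[0.5, 0.1], [0.0, 0.3]])
Phi2 = np.array([[0.2, 0.0], [0.1, 0.1]])
F = companion([Phi1, Phi2])
print(F.shape)                                       # (4, 4)
print(bool(np.all(np.abs(np.linalg.eigvals(F)) < 1)))  # True: stable VAR(2)
```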

Stability of VAR(p)

y_t = \Phi_1 y_{t-1} + \ldots + \Phi_p y_{t-p} + \varepsilon_t, \qquad t = 1, \ldots, T

If the process y_t has finite variance and an autocovariance sequence
that converges to zero at an exponential rate, then \xi_t must share
these properties. This is ensured by having the Np eigenvalues of F
lie inside the unit circle.
The determinant defining the characteristic equation is

|F - \lambda I_{Np}| = (-1)^{Np} |\lambda^p I_N - \lambda^{p-1} \Phi_1 - \ldots - \Phi_p| = 0

Stability of VAR(p)

The required condition is that the roots of the equation

|\lambda^p I_N - \lambda^{p-1} \Phi_1 - \ldots - \Phi_p| = 0

a polynomial of order Np, must lie inside the unit circle.
The stability condition can also be expressed as: the roots of

|\Phi(z)| = 0

lie outside the unit circle, where \Phi(z) is an (N \times N) matrix
polynomial in the lag operator of order p.


Stability of VAR(p)

When p = 1, requiring the roots of

|I_N - \Phi_1 z| = 0

to lie outside the unit circle, i.e. |z| > 1, implies that the eigenvalues of \Phi_1
lie inside the unit circle. Note that the eigenvalues, the roots of
|\Phi_1 - \lambda I_N| = 0, are the reciprocals of the roots of |I_N - \Phi_1 z| = 0.


Stability of VAR(p)
Three conditions are necessary for stationarity of the VAR(p) model:
- Absence of mean shifts;
- The vectors {\varepsilon_t} are identically distributed, \forall t;
- Stability condition on F.
If the process is covariance stationary we can take the expectations of
both sides of

y_t = c + \Phi_1 y_{t-1} + \ldots + \Phi_p y_{t-p} + \varepsilon_t

obtaining

\mu = c + \Phi_1 \mu + \ldots + \Phi_p \mu

\mu = [I_N - \Phi_1 - \ldots - \Phi_p]^{-1} c = \Phi(1)^{-1} c

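The unconditional mean formula can be checked directly. The coefficients below are made up for illustration; the sanity check verifies that \mu satisfies the fixed-point equation \mu = c + \Phi_1 \mu + \Phi_2 \mu:

```python
import numpy as np

# Unconditional mean of a stationary VAR(p): mu = [I - Phi_1 - ... - Phi_p]^{-1} c
N = 2
c = np.array([1.0, 0.5])
Phi1 = np.array([[0.5, 0.1], [0.2, 0.4]])
Phi2 = np.array([[0.1, 0.0], [0.0, 0.1]])

mu = np.linalg.solve(np.eye(N) - Phi1 - Phi2, c)

# mu must satisfy mu = c + Phi_1 mu + Phi_2 mu
print(np.allclose(mu, c + Phi1 @ mu + Phi2 @ mu))  # True
```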

Vector MA()

If the VAR(p) is stationary then it has a VMA(\infty) representation:

y_t = \mu + \varepsilon_t + \Psi_1 \varepsilon_{t-1} + \Psi_2 \varepsilon_{t-2} + \ldots = \mu + \Psi(L) \varepsilon_t

y_{t-j} is a linear function of \varepsilon_{t-j}, \varepsilon_{t-j-1}, \ldots, each of which is
uncorrelated with \varepsilon_{t+1} for j = 0, 1, \ldots.
It follows that \varepsilon_{t+1} is uncorrelated with y_{t-j} for any j \geq 0.
The linear forecast of y_{t+1} based on y_t, y_{t-1}, \ldots is given by

\hat{y}_{t+1|t} = \mu + \Phi_1 (y_t - \mu) + \Phi_2 (y_{t-1} - \mu) + \ldots + \Phi_p (y_{t-p+1} - \mu)

Forecasting with VAR

\varepsilon_{t+1} can be interpreted as the fundamental innovation in y_{t+1}, that
is, the error in forecasting y_{t+1} on the basis of a linear function of a
constant and y_t, y_{t-1}, \ldots.
A forecast of y_{t+s} on the basis of y_t, y_{t-1}, \ldots will take the form

\hat{y}_{t+s|t} = \mu + F_{11}^{(s)} (y_t - \mu) + F_{12}^{(s)} (y_{t-1} - \mu) + \ldots + F_{1p}^{(s)} (y_{t-p+1} - \mu)

where F_{1j}^{(s)} denotes the (1, j) block of F^s.
The moving average matrices \Psi_j can be calculated as:

\Psi(L) = [\Phi(L)]^{-1}

\Psi(L) \Phi(L) = I_N

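The s-step forecast via powers of the companion matrix can be sketched as follows. Everything here (function name, coefficients, history) is illustrative; the check confirms that for s = 1 the companion formula reduces to \mu + \Phi_1 (y_t - \mu) + \Phi_2 (y_{t-1} - \mu):

```python
import numpy as np

def forecast(mu, Phis, y_hist, s):
    """s-step-ahead linear forecast of a VAR(p) via the companion matrix F.
    y_hist = [y_t, y_{t-1}, ..., y_{t-p+1}], most recent observation first."""
    p = len(Phis)
    N = Phis[0].shape[0]
    F = np.zeros((N * p, N * p))
    F[:N, :] = np.hstack(Phis)
    F[N:, :-N] = np.eye(N * (p - 1))
    xi = np.concatenate([yy - mu for yy in y_hist])  # stacked deviations from mu
    Fs = np.linalg.matrix_power(F, s)
    return mu + Fs[:N, :] @ xi                       # first block row of F^s

mu = np.array([2.0, 1.0])
Phi1 = np.array([[0.5, 0.1], [0.2, 0.4]])
Phi2 = np.array([[0.1, 0.0], [0.0, 0.1]])
y_hist = [np.array([2.5, 1.2]), np.array([1.8, 0.9])]

f1 = forecast(mu, [Phi1, Phi2], y_hist, 1)
direct = mu + Phi1 @ (y_hist[0] - mu) + Phi2 @ (y_hist[1] - mu)
print(np.allclose(f1, direct))  # True
```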

VMA coefficient matrices

[I_N + \Psi_1 L + \Psi_2 L^2 + \ldots][I_N - \Phi_1 L - \Phi_2 L^2 - \ldots - \Phi_p L^p] = I_N

Setting the coefficient on L^1 equal to the zero matrix,

\Psi_1 - \Phi_1 = 0

on L^2:

\Psi_2 = \Psi_1 \Phi_1 + \Phi_2

In general, for L^s:

\Psi_s = \Phi_1 \Psi_{s-1} + \ldots + \Phi_p \Psi_{s-p}, \qquad s = 1, 2, \ldots

\Psi_0 = I_N, \qquad \Psi_s = 0 \text{ for } s < 0

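The recursion for the \Psi_s matrices translates directly into code. This sketch (function name and coefficients are illustrative) verifies the VAR(1) special case \Psi_s = \Phi_1^s:

```python
import numpy as np

def vma_coefficients(Phis, s_max):
    """Psi_s = Phi_1 Psi_{s-1} + ... + Phi_p Psi_{s-p}; Psi_0 = I, Psi_s = 0 for s < 0."""
    p = len(Phis)
    N = Phis[0].shape[0]
    Psi = [np.eye(N)]
    for s in range(1, s_max + 1):
        acc = np.zeros((N, N))
        for j in range(1, p + 1):
            if s - j >= 0:                    # Psi_{s-j} = 0 when s - j < 0
                acc += Phis[j - 1] @ Psi[s - j]
        Psi.append(acc)
    return Psi

Phi1 = np.array([[0.5, 0.1], [0.2, 0.4]])
Psi = vma_coefficients([Phi1], 3)
# For a VAR(1): Psi_s = Phi_1^s
print(np.allclose(Psi[3], np.linalg.matrix_power(Phi1, 3)))  # True
```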

VMA coefficient matrices

The innovation in the MA(\infty) representation is \varepsilon_t, the fundamental
innovation for y.
There are alternative MA representations based on VWN processes
other than \varepsilon_t.
Let H be a nonsingular (N \times N) matrix and define

u_t \equiv H \varepsilon_t

so that u_t is VWN. Then

y_t = \mu + H^{-1} H \varepsilon_t + \Psi_1 H^{-1} H \varepsilon_{t-1} + \ldots
    = \mu + J_0 u_t + J_1 u_{t-1} + J_2 u_{t-2} + \ldots

J_s \equiv \Psi_s H^{-1}


VMA coefficient matrices

For example, H can be any matrix that diagonalizes \Omega, the variance-covariance
matrix of \varepsilon_t:

H \Omega H' = D

with D diagonal; the elements of u_t are then uncorrelated with one another.
It is always possible to write a stationary VAR(p) process as a
convergent infinite MA of a VWN whose elements are mutually
uncorrelated.
To obtain the MA representation for the fundamental innovations, we
must impose \Psi_0 = I_N (while J_0 is not the identity matrix).

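One concrete choice of H (among many that diagonalize \Omega) is the inverse of the Cholesky factor: with \Omega = P P' and H = P^{-1}, we get H \Omega H' = I, so the transformed innovations u_t = H \varepsilon_t are mutually uncorrelated. The \Omega below is a made-up example:

```python
import numpy as np

# H = P^{-1}, where Omega = P P' (P lower triangular), diagonalizes Omega.
Omega = np.array([[1.0, 0.4],
                  [0.4, 2.0]])
P = np.linalg.cholesky(Omega)
H = np.linalg.inv(P)

D = H @ Omega @ H.T
print(np.allclose(D, np.eye(2)))  # True: H Omega H' = I, off-diagonals vanish
```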

Assumptions implicit in a VAR


For a covariance stationary process, the parameters c, \Phi_1, \ldots, \Phi_p
could be defined as the coefficients of the projection of y_t on
1, y_{t-1}, y_{t-2}, \ldots, y_{t-p}.
\varepsilon_t is uncorrelated with y_{t-1}, \ldots, y_{t-p} by the definition of
\Phi_1, \ldots, \Phi_p.
The parameters of a VAR can be estimated consistently with N
OLS regressions.
\varepsilon_t defined by this projection is uncorrelated with
y_{t-p-1}, y_{t-p-2}, \ldots
The assumption that y_t \sim VAR(p) is basically the assumption that
p lags are sufficient to summarize the dynamic correlations
between elements of y.


Vector MA(q) Process

y_t = \mu + \varepsilon_t + \Theta_1 \varepsilon_{t-1} + \ldots + \Theta_q \varepsilon_{t-q}

\varepsilon_t \sim VWN, \Theta_j an (N \times N) matrix of MA coefficients, j = 1, 2, \ldots, q.

E(y_t) = \mu

\Gamma_0 = E[(y_t - \mu)(y_t - \mu)']
        = E[\varepsilon_t \varepsilon_t'] + \Theta_1 E[\varepsilon_{t-1} \varepsilon_{t-1}'] \Theta_1' + \ldots + \Theta_q E[\varepsilon_{t-q} \varepsilon_{t-q}'] \Theta_q'
        = \Omega + \Theta_1 \Omega \Theta_1' + \ldots + \Theta_q \Omega \Theta_q'

Vector MA(q) Process

For j = 1, \ldots, q:

\Gamma_j = E[(\varepsilon_t + \Theta_1 \varepsilon_{t-1} + \ldots + \Theta_q \varepsilon_{t-q})(\varepsilon_{t-j} + \Theta_1 \varepsilon_{t-j-1} + \ldots + \Theta_q \varepsilon_{t-j-q})']
        = \Theta_j \Omega + \Theta_{j+1} \Omega \Theta_1' + \Theta_{j+2} \Omega \Theta_2' + \ldots + \Theta_q \Omega \Theta_{q-j}'

\Gamma_j = 0 \text{ for } |j| > q, \qquad \Gamma_{-j} = \Gamma_j'

with \Theta_0 = I_N. Any VMA(q) process is covariance stationary.

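The autocovariance formulas for a VMA(q) can be coded as one sum with \Theta_0 = I. The function and coefficients below are illustrative; the checks verify \Gamma_0 = \Omega + \Theta_1 \Omega \Theta_1' and \Gamma_1 = \Theta_1 \Omega for q = 1:

```python
import numpy as np

def vma_autocov(Thetas, Omega, j):
    """Gamma_j of a VMA(q); Thetas = [Theta_1, ..., Theta_q], Theta_0 = I."""
    q = len(Thetas)
    Th = [np.eye(Omega.shape[0])] + list(Thetas)
    if abs(j) > q:
        return np.zeros_like(Omega)          # Gamma_j = 0 for |j| > q
    if j < 0:
        return vma_autocov(Thetas, Omega, -j).T  # Gamma_{-j} = Gamma_j'
    # Gamma_j = sum_{k=0}^{q-j} Theta_{j+k} Omega Theta_k'
    return sum(Th[j + k] @ Omega @ Th[k].T for k in range(q - j + 1))

Theta1 = np.array([[0.6, 0.0], [0.2, 0.3]])
Omega = np.eye(2)
G0 = vma_autocov([Theta1], Omega, 0)
G1 = vma_autocov([Theta1], Omega, 1)
print(np.allclose(G0, np.eye(2) + Theta1 @ Theta1.T))  # True
print(np.allclose(G1, Theta1))                         # True: Gamma_1 = Theta_1 Omega
```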

VAR(p) Autocovariances
Given:

\Gamma_j = E[(y_t - \mu)(y_{t-j} - \mu)']

\Gamma_{-j} = E[(y_t - \mu)(y_{t+j} - \mu)']

then in general \Gamma_j \neq \Gamma_{-j}. For example,

\{\Gamma_j\}_{1,2} = cov(y_{1t}, y_{2t-j})
\{\Gamma_j\}_{2,1} = cov(y_{2t}, y_{1t-j})
\{\Gamma_{-j}\}_{1,2} = cov(y_{1t}, y_{2t+j})

However,

\Gamma_j' = \{ E[(y_{t+j} - \mu)(y_{(t+j)-j} - \mu)'] \}' = \{ E[(y_{t+j} - \mu)(y_t - \mu)'] \}'
         = E[(y_t - \mu)(y_{t+j} - \mu)'] = \Gamma_{-j}

VAR(p) Autocovariances
Companion form:

\xi_t = F \xi_{t-1} + v_t

\Sigma = E[\xi_t \xi_t'] = E\{ [(y_t - \mu)', (y_{t-1} - \mu)', \ldots, (y_{t-p+1} - \mu)']' [(y_t - \mu)', \ldots, (y_{t-p+1} - \mu)'] \}

\Sigma = | \Gamma_0        \Gamma_1        \ldots   \Gamma_{p-1} |
         | \Gamma_1'       \Gamma_0        \ldots   \Gamma_{p-2} |
         | \vdots          \vdots                   \vdots       |
         | \Gamma_{p-1}'   \Gamma_{p-2}'   \ldots   \Gamma_0     |        (Np \times Np)

VAR(p) Autocovariances
\Sigma = E[\xi_t \xi_t'] = E[(F \xi_{t-1} + v_t)(F \xi_{t-1} + v_t)'] = F E(\xi_{t-1} \xi_{t-1}') F' + E(v_t v_t')

where F E(\xi_{t-1} v_t') = 0. Hence

\Sigma = F \Sigma F' + Q

vec(\Sigma) = vec(F \Sigma F') + vec(Q) = (F \otimes F) vec(\Sigma) + vec(Q)

vec(\Sigma) = [I_{(Np)^2} - (F \otimes F)]^{-1} vec(Q)

The eigenvalues of (F \otimes F) are all of the form \lambda_i \lambda_j, where \lambda_i and \lambda_j
are eigenvalues of F. Since |\lambda_i| < 1, \forall i, it follows that all eigenvalues
of (F \otimes F) are inside the unit circle, so |I_{(Np)^2} - (F \otimes F)| \neq 0.
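The vec formula gives a direct way to solve the discrete Lyapunov equation \Sigma = F \Sigma F' + Q numerically. This sketch uses a made-up VAR(1), so F = \Phi_1 and Q = \Omega; `order="F"` makes `flatten`/`reshape` match the column-stacking vec operator:

```python
import numpy as np

# Solve Sigma = F Sigma F' + Q via vec(Sigma) = [I - F kron F]^{-1} vec(Q).
F = np.array([[0.5, 0.1],
              [0.2, 0.4]])
Q = np.eye(2)

n = F.shape[0]
vecSigma = np.linalg.solve(np.eye(n * n) - np.kron(F, F),
                           Q.flatten(order="F"))       # column-stacked vec(Q)
Sigma = vecSigma.reshape((n, n), order="F")

# verify the fixed-point equation
print(np.allclose(Sigma, F @ Sigma @ F.T + Q))  # True
```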

VAR(p) Autocovariances

The j-th autocovariance of \xi_t is

E[\xi_t \xi_{t-j}'] = F E[\xi_{t-1} \xi_{t-j}'] + E[v_t \xi_{t-j}']

\Sigma_j = F \Sigma_{j-1}

\Sigma_j = F^j \Sigma, \qquad j = 1, 2, \ldots

The j-th autocovariance \Gamma_j of y_t is given by the first N rows and N
columns of \Sigma_j:

\Gamma_j = \Phi_1 \Gamma_{j-1} + \ldots + \Phi_p \Gamma_{j-p}


Maximum Likelihood Estimation

y_t = c + \Phi_1 y_{t-1} + \ldots + \Phi_p y_{t-p} + \varepsilon_t, \qquad \varepsilon_t \sim i.i.d.\, N(0, \Omega) \qquad (1)

The sample consists of (T + p) observations. Conditioning on the first p observations, we
estimate using the last T observations.
Conditional likelihood:

f_{Y_T, Y_{T-1}, \ldots, Y_1 | Y_0, \ldots, Y_{-p+1}}(y_T, \ldots, y_1 | y_0, \ldots, y_{-p+1}; \theta)


Maximum Likelihood Estimation

\theta = (c', vec(\Phi_1)', vec(\Phi_2)', \ldots, vec(\Phi_p)', vech(\Omega)')'

y_t | y_{t-1}, \ldots, y_{-p+1} \sim N(c + \Phi_1 y_{t-1} + \ldots + \Phi_p y_{t-p}, \; \Omega)

x_t = (1, y_{t-1}', y_{t-2}', \ldots, y_{t-p}')' \qquad ((Np + 1) \times 1)

\Pi' = [c, \Phi_1, \ldots, \Phi_p] \qquad (N \times (Np + 1))

E(y_t | y_{t-1}, \ldots, y_{-p+1}) = \Pi' x_t

Maximum Likelihood Estimation

f_{Y_t | Y_{t-1}, \ldots, Y_{-p+1}}(y_t | y_{t-1}, \ldots, y_{-p+1}; \theta)
    = (2\pi)^{-N/2} \, |\Omega^{-1}|^{1/2} \exp\Big[ -\frac{1}{2} (y_t - \Pi' x_t)' \Omega^{-1} (y_t - \Pi' x_t) \Big]

The joint density, conditional on y_0, \ldots, y_{-p+1}, factors as

f_{Y_t, \ldots, Y_1 | Y_0, \ldots, Y_{-p+1}}(y_t, \ldots, y_1 | y_0, \ldots, y_{-p+1}; \theta)
    = f_{Y_t | Y_{t-1}, \ldots, Y_{-p+1}}(\cdot) \cdot f_{Y_{t-1}, \ldots, Y_1 | Y_0, \ldots, Y_{-p+1}}(\cdot)

Maximum Likelihood Estimation

Recursively, the likelihood function for the full sample, conditioning
on y_0, \ldots, y_{-p+1}, is the product of the individual conditional densities:

f_{Y_T, \ldots, Y_1 | Y_0, \ldots, Y_{-p+1}} = \prod_{t=1}^{T} f_{Y_t | Y_{t-1}, \ldots, Y_{-p+1}}(y_t | y_{t-1}, \ldots, y_{-p+1}; \theta)

The log-likelihood function:

\mathcal{L}(\theta) = -\frac{TN}{2} \log(2\pi) + \frac{T}{2} \log|\Omega^{-1}| - \frac{1}{2} \sum_{t=1}^{T} (y_t - \Pi' x_t)' \Omega^{-1} (y_t - \Pi' x_t)

Maximum Likelihood Estimation


The ML estimate of :
" T
#" T
#1
X
X

b
=
yt xt
xt xt=1
t

t=1

b is:
The j -th row of
" T
#" T
#1
X
X
b = uj
uj
yt xt
xt xt
t=1

This is the estimated coefficient vector from an OLS regression of yjt


on xt . ML estimates are found by an OLS regression of yjt on a
constant and p lags of all the variables in the system.
The ML estimate of is given by:
T
X
1
b =
b
tb
t

T t=1
c
Eduardo Rossi

Time Series Econometrics 11

29
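Since conditional ML for a Gaussian VAR reduces to equation-by-equation OLS, the estimator is easy to sketch. The simulation below uses made-up VAR(1) coefficients and \Omega = I (none of these numbers come from the lecture notes); with a long sample the OLS estimates land close to the truth:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 2, 5000
c = np.array([1.0, 0.5])
Phi1 = np.array([[0.5, 0.1],
                 [0.2, 0.4]])

# simulate a stationary VAR(1) with eps_t ~ N(0, I)
y = np.zeros((T + 1, N))
for t in range(1, T + 1):
    y[t] = c + Phi1 @ y[t - 1] + rng.standard_normal(N)

X = np.column_stack([np.ones(T), y[:-1]])        # rows: x_t' = (1, y_{t-1}')
Y = y[1:]
Pi_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T  # row j: OLS coefficients of eq. j

resid = Y - X @ Pi_hat.T
Omega_hat = resid.T @ resid / T                  # ML estimate of Omega
print(np.round(Pi_hat, 2))
```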

Maximum Likelihood Estimation


Asymptotic distribution of \hat{\Pi}:
The ML estimates \hat{\Pi} and \hat{\Omega} will give consistent estimates of the
population parameters even if the true innovations are non-Gaussian, provided

\Phi(L) y_t = \varepsilon_t, \qquad \varepsilon_t \sim i.i.d.\,(0, \Omega)

E[\varepsilon_{it} \varepsilon_{jt} \varepsilon_{lt} \varepsilon_{mt}] < \infty \quad \forall\, i, j, l, m

and the roots of

|I_N - \Phi_1 z - \ldots - \Phi_p z^p| = 0

lie outside the unit circle.
Let K \equiv Np + 1 and let x_t' be the (1 \times K) vector

x_t' = [1, y_{t-1}', \ldots, y_{t-p}']

\hat{\pi}_T = vec(\hat{\Pi}_T)

Maximum Likelihood Estimation

\hat{\pi}_T = [\hat{\pi}_{1,T}', \ldots, \hat{\pi}_{N,T}']'

\hat{\pi}_{i,T} = \Big( \sum_t x_t x_t' \Big)^{-1} \Big( \sum_t x_t y_{it} \Big)

\hat{\Omega}_T = \frac{1}{T} \sum_t \hat{\varepsilon}_t \hat{\varepsilon}_t'

\hat{\varepsilon}_{it} = y_{it} - x_t' \hat{\pi}_{i,T}

Maximum Likelihood Estimation


Then,

\frac{1}{T} \sum_t x_t x_t' \xrightarrow{p} Q = E(x_t x_t')

\hat{\pi}_T \xrightarrow{p} \pi, \qquad \hat{\Omega}_T \xrightarrow{p} \Omega

\sqrt{T}\,(\hat{\pi}_T - \pi) \xrightarrow{d} N(0, \; \Omega \otimes Q^{-1})

\sqrt{T}\,(\hat{\pi}_{i,T} - \pi_i) \xrightarrow{d} N(0, \; \sigma_i^2 Q^{-1})

where \sigma_i^2 = E(\varepsilon_{it}^2) and

\hat{\sigma}_i^2 = \frac{1}{T} \sum_t \hat{\varepsilon}_{it}^2 \xrightarrow{p} \sigma_i^2

Maximum Likelihood Estimation

\hat{\pi}_i \approx N\Big( \pi_i, \; \hat{\sigma}_i^2 \Big( \sum_t x_t x_t' \Big)^{-1} \Big)

OLS t and F statistics applied to the coefficients of any single
equation in the VAR are asymptotically valid. A more general
hypothesis

R \pi = d

can be tested using a generalization of the Wald form of the OLS \chi^2
test:

\sqrt{T}\,(R \hat{\pi}_T - d) \xrightarrow{d} N\big( 0, \; R\,(\Omega \otimes Q^{-1})\,R' \big)
33

Maximum Likelihood Estimation

\chi^2(m) = T\,(R \hat{\pi}_T - d)' \Big[ R \big( \hat{\Omega}_T \otimes \hat{Q}_T^{-1} \big) R' \Big]^{-1} (R \hat{\pi}_T - d)
          = (R \hat{\pi}_T - d)' \Big[ R \big( \hat{\Omega}_T \otimes (T \hat{Q}_T)^{-1} \big) R' \Big]^{-1} (R \hat{\pi}_T - d)
          = (R \hat{\pi}_T - d)' \Big[ R \big( \hat{\Omega}_T \otimes \big( \sum_t x_t x_t' \big)^{-1} \big) R' \Big]^{-1} (R \hat{\pi}_T - d)
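The last form of the statistic, using \hat{\Omega}_T \otimes (\sum_t x_t x_t')^{-1} as the estimated covariance of \hat{\pi}, can be sketched directly. All the numbers below are toy values (N = 2 equations, K = 3 regressors, so \pi stacks 6 coefficients equation by equation); the restriction tests whether the third coefficient of equation 1 is zero:

```python
import numpy as np

def wald_stat(pi_hat, d, R, Omega_hat, XtX):
    """chi2 statistic for H0: R pi = d, with Var(pi_hat) ~= Omega_hat kron (X'X)^{-1}."""
    V = np.kron(Omega_hat, np.linalg.inv(XtX))
    diff = R @ pi_hat - d
    return float(diff @ np.linalg.solve(R @ V @ R.T, diff))

pi_hat = np.array([1.0, 0.5, 0.1, 0.5, 0.2, 0.4])  # stacked equation coefficients
Omega_hat = np.eye(2)
XtX = 500.0 * np.eye(3)
R = np.zeros((1, 6)); R[0, 2] = 1.0                # H0: third coefficient of eq. 1 is 0
stat = wald_stat(pi_hat, np.zeros(1), R, Omega_hat, XtX)
print(round(stat, 6))  # 5.0 = 0.1^2 / (1/500); compare with a chi2(1) critical value
```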
