
Decentralised economies II

Martin Ellison

Motivation

In the last lecture we described how to simulate a decentralised economy using a log-linear approximation to the first order conditions of maximising behaviour. Such a procedure is acceptable as long as the underlying structure of the economy is close to log-linear. However, if the model is not log-linear then approximation errors are likely to be significant. For example, log-linearisation of models with time-varying risk premia is likely to cancel out variance effects. A method based on second order log-linearisation (such as Michel Juillard's DYNARE at http://www.cepremap.cnrs.fr/dynare/) would go some way to alleviating these concerns. Rather than proceeding to such second-order approximations, our preferred route in this lecture is to eliminate approximation error and describe a technique which leads to an exact solution of the model. The technique is known as the Parameterised Expectations Algorithm (PEA). Whilst most applications of the PEA are macroeconomic in nature, the technique is more general and can be applied to any model, for example models of nonlinear pricing in microeconomics.

Key reading

The Parameterised Expectations Algorithm was first presented in den Haan, W.J. and A. Marcet, 1990, "Solving the Stochastic Growth Model by Parameterized Expectations", Journal of Business and Economic Statistics 8, 31-34. The accuracy test - equally applicable to testing the accuracy of any simulation method - is published as den Haan, W.J. and A. Marcet, 1994, "Accuracy in Simulations", Review of Economic Studies 61, 3-17.

Other reading

The asymptotic learning convergence results on which the PEA is based are taken from Marcet, A. and T.J. Sargent, 1989, "Convergence of Least-Squares Learning Mechanisms in Self-Referential Linear Stochastic Models", Journal of Economic Theory 48, 337-368. More advanced PEA techniques such as grids are discussed in Christiano, L.J. and J.D.M. Fisher, 1997, "Algorithms for Solving Dynamic Models with Occasionally Binding Constraints", Working Paper 9711, Federal Reserve Bank of Cleveland. A special issue of the Journal of Business and Economic Statistics in 1991 compared alternative solution techniques.

Parameterised expectations algorithm

This section sets out the basic approach of the PEA in a general setting. For more details on the intuition see den Haan and Marcet (1990). For those of you who prefer concrete examples, the next section gives a fully worked out example.
Most economic models are based around certain key Euler equations. These Euler equations link a function of current dated variables to a conditional expectation of a function of future dated variables. For instance, let x_t denote a vector of endogenous variables at time t, z_t a vector of state variables and u_t a vector of random disturbances. Then an economic model will be characterised by an Euler equation of the form

f(x_t) = E_t h(x_{t+1}, u_{t+1}, z_t)

where E_t denotes the expectations operator conditional on information at time t. For instance, x might be consumption, f(·) the marginal utility of consumption and h(·) = f(·)R_{t+1}, where R is the interest rate. Notice that for plausible assumptions about the utility function and the production function, f(·) and h(·) are likely to be highly nonlinear. For example, as risk aversion increases the marginal utility function becomes increasingly nonlinear, and if the variance of shocks is large then the interest rate term will reflect a large risk premium. The result is that the Euler equation will contain large and important nonlinearities which will be both hard to solve and poorly approximated via quadratic approximations.
The PEA focuses on the Euler equation. Notice that we can re-write this Euler equation as

x_t = f^{-1}(E_t h(x_{t+1}, u_{t+1}, z_t))

Given that we know the function f^{-1}, if we knew what E_t h(x_{t+1}, u_{t+1}, z_t) was then we could solve for x_t. In other words, if we knew the value of x_{t+1} then we could solve backwards for x_t. By contrast, the PEA solves forward for the solution - that is, it uses x_{t-1} to solve for x_t. It does this by noting that although E_t h(x_{t+1}, u_{t+1}, z_t) is a function of x_{t+1}, it must by definition be a function of information available at time t, when the consumer is choosing x_t. Therefore it must be possible to calculate x_t recursively - that is, by solving the model forward over time. More specifically, the PEA assumes that

E_t h(x_{t+1}, u_{t+1}, z_t) = m(δ; Z_t)

where Z_t is a subset of the state variables (potentially all of them). In other words, the PEA assumes that the expectational term is a function of variables known at time t (the predetermined variables and the current realisations of the shocks). From this it is obvious where the name Parameterised Expectations comes from - the approach assumes that it is possible to model any expectational term as a polynomial in the information set. Of course an important decision is the form of m(·), and here den Haan and Marcet assume that it is a polynomial in the variables Z_t. The success of the PEA depends on how well the polynomial m(·) approximates the expectational term E_t h(·). This depends on two factors: (a) the degree of the polynomial m(·) and (b) the validity of the parameter vector δ. By increasing the degree of the polynomial m(·) it is possible to get ever more accurate approximations to E_t h(·). However, the key thing is to arrive at estimates of δ which ensure that E_t h(·) is well approximated by m(·). In the next section we explain how the PEA arrives at the final estimates of δ.
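As a concrete illustration of the parameterisation step, the sketch below evaluates a first order polynomial m(δ; Z) as an exponentiated linear function of the logs of the state variables. It is a minimal Python sketch with purely illustrative names and coefficients, not code from the lecture.

```python
import numpy as np

def m(delta, Z):
    """Evaluate a first order parameterised expectation.

    delta[0] is the constant; delta[1:] are the exponents on the state
    variables Z, so m = exp(d0) * Z1^d1 * Z2^d2 * ...
    """
    d = np.asarray(delta, dtype=float)
    Z = np.asarray(Z, dtype=float)
    return np.exp(d[0] + np.log(Z) @ d[1:])

# with constant 0 and a unit exponent, m simply returns the state variable
value = m([0.0, 1.0], [2.0])
```

Raising the order of the polynomial amounts to adding squared and cross terms of ln Z to the exponent; the success of the approximation then rests on finding the right δ, as described next.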

An example

Consider the standard stochastic growth model where a representative agent seeks to

max_{{c_t}} E_0 Σ_{t=0}^∞ β^t c_t^{1-γ} / (1-γ)

subject to

c_t + k_t - μ k_{t-1} = A_t k_{t-1}^α

where A_t denotes a random productivity shock and

ln A_{t+1} = ρ ln A_t + ε_{t+1}
and β is the discount factor, μ is one minus the depreciation rate and γ is the coefficient of relative risk aversion. The first order condition for this model is

MU_{c,t} = β E_t [R_{t+1} MU_{c,t+1}]

where MU_{c,t} is the marginal utility of consumption at time t, or

c_t^{-γ} = β E_t [c_{t+1}^{-γ} (α A_{t+1} k_t^{α-1} + μ)]

The PEA approach is then to replace the expectation term on the right hand side with a polynomial in the state variables. In this simple model the state variables are the capital stock inherited from the last period and the current productivity shock. Thus we re-write the Euler equation as

c_t^{-γ} = β m(k_{t-1}, A_t; δ)
where m(·) denotes the polynomial that the PEA uses to substitute for the conditional expectation in the Euler equation. In what follows we assume that

m(k_{t-1}, A_t; δ) = δ_1 k_{t-1}^{δ_2} A_t^{δ_3}

that is, m(·) is a first order polynomial in the state variables. However, we could use any order of polynomial, i.e. we could use instead

m(k_{t-1}, A_t; δ) = δ_1 k_{t-1}^{δ_2} A_t^{δ_3} (k_{t-1} A_t)^{δ_4} (k_{t-1}^2)^{δ_5} (A_t^2)^{δ_6}

Notice that we have not yet specified anything about the coefficients of the polynomial, the δs. For the moment we shall just set them equal to some arbitrary starting values. Given these starting values, the PEA then iterates to solve for the correct values, as we shall explain below. If we go back to the first order polynomial we now have

c_t^{-γ} = β δ_1 k_{t-1}^{δ_2} A_t^{δ_3}

Therefore if we make an assumption about the initial capital stock (k_0), and if we take a random draw from the computer for the first period productivity term (A_1), then for our given values of δ_1, δ_2 and δ_3 we can calculate first period consumption c_1. If we then look at the budget constraint we see we have all the information required to calculate k_1. We can therefore calculate values for all relevant variables in the first period.

However, we have now solved for k_1 and y_1, and so if we take a random draw from the computer for the value of A_2, the productivity shock in period 2, then we can use the Euler equation for the second period

c_2^{-γ} = β E_2 [c_3^{-γ} (α A_3 k_2^{α-1} + μ)]

to solve for c_2. Once again, the trick is to use the PEA to write this Euler equation as

c_2^{-γ} = β δ_1 k_1^{δ_2} A_2^{δ_3}

as we have values for k1 and A2 we can once again (for given s) solve
for c2 . Given c2 we can from the budget constraint then calculate k2 . By
repeating this process for all time periods we can therefore arrive at a time
series for consumption, capital and output for any arbitrary number of periods. However, as can be seen the resulting {ct }; {yt } and {kt } series depend
on the coecients in the PEA polynomial, that is the s. Dierent choices
of these s will produce dierent sequences for consumption, output and
capital. While we can always solve for {ct }; {yt } and {kt } for given s the
6

resulting equilibrium sequences need not satisfy the Euler equation. That
is agents expectations may not generate actions which are consistent with
these expectations. The solution of the PEA approach is to arrive at the
correct s - that is those expectations which produce behaviour which is
consistent with those expectations.
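The forward recursion just described can be sketched in a few lines of Python. Everything here is illustrative: the calibration and the δs are stand-in numbers chosen for the sketch, not the UK-calibrated values reported later in the text.

```python
import numpy as np

alpha, beta, mu, gamma = 0.4436, 0.99, 0.975, 5.0   # illustrative calibration
d1, d2, d3 = 1.79, -1.92, -1.67                     # arbitrary starting deltas

rng = np.random.default_rng(0)
T = 200
ln_A = np.zeros(T)
for t in range(1, T):                               # AR(1) productivity shock
    ln_A[t] = 0.95 * ln_A[t - 1] + 0.009 * rng.standard_normal()
A = np.exp(ln_A)

c, k = np.empty(T), np.empty(T)
k_lag = 95.0                                        # assumed initial capital k0
for t in range(T):
    m = d1 * k_lag**d2 * A[t]**d3                   # parameterised expectation
    c[t] = (beta * m) ** (-1.0 / gamma)             # from c_t^(-gamma) = beta*m
    k[t] = A[t] * k_lag**alpha + mu * k_lag - c[t]  # budget constraint
    k_lag = k[t]
```

Given the δs, the whole {c_t} and {k_t} path follows from k_0 and the simulated shocks; no equation solving is needed inside the loop, which is what makes the forward recursion so cheap.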
What criterion should we use in choosing the correct parameters? If the PEA approach is to be correct then it must be the case that E_t[c_{t+1}^{-γ} (α A_{t+1} k_t^{α-1} + μ)] is approximately equal to δ_1 k_{t-1}^{δ_2} A_t^{δ_3}, or

E_t[c_{t+1}^{-γ} (α A_{t+1} k_t^{α-1} + μ)] - δ_1 k_{t-1}^{δ_2} A_t^{δ_3} = v_t ≈ 0

The closer v_t is to zero, the better the approximation being used by the PEA. This immediately suggests one criterion to be used in choosing the δs for the parameterised expectations term - minimising Σ v_t^2. But this is of course exactly what a regression would do. In other words, if you regress the realised value c_{t+1}^{-γ} (α A_{t+1} k_t^{α-1} + μ) on δ_1 k_{t-1}^{δ_2} A_t^{δ_3}, then the resulting estimates of the δs will minimise Σ v_t^2. Obviously this regression is nonlinear, so it needs to be estimated using nonlinear least squares.

Notice a circularity in what we have outlined so far. In order to calculate {c_t}, {y_t} and {k_t} we needed to assume values for δ_1, δ_2 and δ_3. Different choices of δ_1, δ_2 and δ_3 will lead to different sequences of {c_t}, {y_t} and {k_t}. Having made an initial choice for δ_1, δ_2 and δ_3, we then construct a sequence for {c_t}, {y_t} and {k_t} and run a nonlinear regression[1] to estimate the δ_1, δ_2 and δ_3 that minimise Σ v_t^2. The δs from the nonlinear regression will not necessarily be the same as the δs assumed in the construction of {c_t}, {y_t} and {k_t}. That is, the formula used to form expectations of the future may not produce outcomes which are consistent with these forecasts. In this case agents need to change their forecasting equation. The point of the PEA procedure is to continue until the δs from the nonlinear regression are the same as the δs assumed in constructing {c_t}, {y_t} and {k_t} - in other words, until a fixed point has been reached. This fixed point is the solution to the model, as at this point the forecasting function used by consumers produces consumption demands which lead to outcomes for future consumption which prove the original forecasting equations correct.

[1] Like all nonlinear regressions, a certain amount of art is required in choosing sensible starting values and nudging the program to a solution.
As an example of the end product of the PEA approach, consider the above stochastic growth model calibrated on UK data. In this case we arrive at estimates for the coefficients of δ_1 = 1.7931, δ_2 = -1.9249 and δ_3 = -1.6701. Given that β = 0.99 and γ = 5, that means that the solution to our model

c_t^{-γ} = β δ_1 k_{t-1}^{δ_2} A_t^{δ_3}

is that

c_t = exp [-(1/5)(ln 0.99 + ln 1.7931 - 1.9249 ln k_{t-1} - 1.6701 ln A_t)]

therefore the solution is a simple expression relating current consumption to the current state variables.
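The decision rule can be evaluated directly. A minimal Python sketch follows; the function name is mine, while the default arguments are the coefficients quoted in the text.

```python
import math

def uk_consumption(k_lag, A, beta=0.99, gamma=5.0,
                   d1=1.7931, d2=-1.9249, d3=-1.6701):
    """Decision rule c_t = exp(-(1/gamma) * (ln beta + ln d1
    + d2 * ln k_{t-1} + d3 * ln A_t))."""
    return math.exp(-(1.0 / gamma) * (math.log(beta) + math.log(d1)
                                      + d2 * math.log(k_lag)
                                      + d3 * math.log(A)))
```

Since δ_2 and δ_3 are negative and enter with a minus sign inside the exponential, consumption is increasing in both inherited capital and current productivity, as economic intuition suggests.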
What is the intuition behind the fixed point which underlies the PEA? Probably the best way to rationalise the PEA approach is in terms of learning. Imagine a world where agents know their first order conditions but do not know how to form the expectations that the Euler equations entail. That is, they have to learn how the economy works. One simple way of forming an expectation would be to try and forecast the expectational terms using a forecasting rule involving only the state variables. At first the consumer has very few observations with which to construct a forecasting rule, so essentially makes a guess as to the coefficients in the forecasting equation. However, as more data become available, they can estimate a more precise forecasting rule and need to rely less on intuition. At first the consumer will make mistakes. They do not as yet know how the economy operates and will form expectations on the basis of an incorrect forecast of the forward-looking term in the Euler equation. In other words, their forecast of c_{t+1}^{-γ} (α A_{t+1} k_t^{α-1} + μ) will not equal its true conditional expectation E_t[c_{t+1}^{-γ} (α A_{t+1} k_t^{α-1} + μ)]. Therefore agents' expectations will be inconsistent with actual outcomes. However, under certain convergence assumptions on the economy and beliefs, it is possible to show (see the work of Marcet and Sargent) that eventually agents' expectations and the equilibrium of the economy will coincide, so that c_{t+1}^{-γ} (α A_{t+1} k_t^{α-1} + μ) will, in expectation, equal the prediction made by the consumers' forecasting equation. That is, the actions which consumers take on the basis of their expectations lead to economic realisations which confirm that those expectations are correct.
The PEA essentially mimics this learning algorithm, and the solution of the PEA is the fixed point of the learning process. In other words, when agents use δ_1 k_{t-1}^{δ_2} A_t^{δ_3} to form their expectations, this leads to consumption and capital choices at time t which ensure that

E_t[c_{t+1}^{-γ} (α A_{t+1} k_t^{α-1} + μ)] - δ_1 k_{t-1}^{δ_2} A_t^{δ_3} = 0,

i.e. expectations are correct.

We can therefore write down the following plan for how to use the PEA for this simple growth model.

STEP 1: Simulate a sequence for the productivity shock, {A_t}.

STEP 2: Use the guess of δ_1, δ_2 and δ_3, the sequence {A_t} and the PEA expression to solve for {c_t}.

STEP 3: Use the results from STEP 2 and the budget constraint to solve for {k_t} and {y_t}.

STEP 4: Run a nonlinear regression of c_{t+1}^{-γ} (α A_{t+1} k_t^{α-1} + μ) on δ_1 k_{t-1}^{δ_2} A_t^{δ_3} and record the new estimates of δ_1, δ_2 and δ_3.

STEP 5: If the new estimates of δ_1, δ_2 and δ_3 equal those of STEP 2 then STOP, model solution found.

STEP 6: If the new estimates of δ_1, δ_2 and δ_3 differ from those of STEP 2, then repeat STEPS 2-5 using the new δ_1, δ_2 and δ_3.

6 Some practical issues

6.1 Will the model always converge to a fixed point?

Marcet and Marshall (1994) have written in detail on this issue. In theory the answer is that for most economic models we are interested in, the model will converge. Only in the case of models with multiple equilibria or increasing returns will we find problems of convergence. However, in practice obtaining convergence may not be a trivial process. For instance, if the user chooses a first order polynomial but this is a poor approximation, then the model may not converge. In this case it is worth experimenting with increasing the order of the polynomial and/or trying alternative combinations of the state variables.

6.2 What about non-stationary variables?

A crucial part of the solution algorithm is the nonlinear least squares regression. Notice that the data used in this regression are a function of the previous estimates of the δs. Because of this, normal super-convergence results concerning nonstationary variables do not hold, and in the case of nonstationary variables you will not get convergence on a fixed point. In our case the data change with every set of parameter estimates, so there is no reason for the parameters to converge. Therefore, where variables are nonstationary, it is important to detrend all the variables to induce stationarity. This involves replacing each trending variable x_t with its detrended value x*_t. For instance, assume that ln A_{t+1} = g + ln A_t + ε_{t+1}. In this case all the variables share a common trend A_t^{1/(1-α)}, and we therefore need to transform everything by dividing through by this trend, writing x*_t = x_t / A_t^{1/(1-α)}. Thus the utility function becomes a function of c*_t rather than c_t, and the budget constraint changes from

c_t + k_t - μ k_{t-1} = A_t k_{t-1}^α

to

c*_t + k*_t - μ k*_{t-1} e^{-(g+ε_t)/(1-α)} = (k*_{t-1})^α e^{-α(g+ε_t)/(1-α)}

by dividing both sides through by A_t^{1/(1-α)}. Having transformed the model to stationarity in this way, we then simply derive the first order conditions for the transformed model and solve using the PEA as in the previous example.

6.3 What convergence criterion should I use on the δs?

This partly depends on the problem, but I have found 0.00001 to be fairly reliable.

6.4 When I construct {c_t}, {k_t} and {y_t} to run the nonlinear regression, how many observations should I use?

The convergence results to do with the learning intuition are all asymptotic. In other words, it takes a long time to hit the fixed point. As a result you may need to run simulations/regressions with lots of data. In my experience this involves using between 40,000 and 100,000 observations for each simulation/regression.

7 Testing the accuracy of solutions

When we arrive at a fixed point of the PEA we have found a set of coefficients such that

E_t[c_{t+1}^{-γ} (α A_{t+1} k_t^{α-1} + μ)] - δ_1 k_{t-1}^{δ_2} A_t^{δ_3} = v_t ≈ 0

but is this a good solution? In other words, just because we have found a fixed point which minimises a metric of squared residuals, this does not mean that we have an accurate solution to the model. We merely have a solution.

In order to gauge the accuracy of the solution, den Haan and Marcet (1994) propose a very simple test. This test can be used to gauge the accuracy of any solution technique and is not dependent on the use of the PEA. The solution must satisfy the Euler equation

c_t^{-γ} = β E_t[c_{t+1}^{-γ} (α A_{t+1} k_t^{α-1} + μ)]

We can therefore define an expectational error using the realised value of the term inside the expectation

w_{t+1} = c_t^{-γ} - β c_{t+1}^{-γ} (α A_{t+1} k_t^{α-1} + μ)

According to Rational Expectations this error term w_{t+1} should be unpredictable at time t. That is, agents do not make predictable forecasting mistakes. Therefore, to assess the accuracy of the model solution, all one needs to do is regress w_{t+1} on a variety of variables dated at time t or earlier. If these variables have no predictive power then the solution is accurate. If the variables predict the expectational error then the solution is inaccurate, and the order of the polynomial needs to be increased and the process repeated.
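The regression and test statistic can be sketched as follows. The Python below uses placeholder data for the error and the states (in practice they would come from the simulated solution); the statistic mirrors the heteroskedasticity-robust form used in the accuracy code later in these notes.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2000
w = rng.standard_normal(T)                 # stand-in for the error w_{t+1}
k = 95.0 + rng.standard_normal(T)          # stand-in capital series
A = np.exp(0.01 * rng.standard_normal(T))  # stand-in productivity series

# instruments dated t or earlier: a constant plus one lag of each state
X = np.column_stack([np.ones(T - 1), k[:-1], A[:-1]])
y = w[1:]

b, *_ = np.linalg.lstsq(X, y, rcond=None)  # regress w_{t+1} on instruments
e = y - X @ b
S = X.T @ (X * e[:, None] ** 2)            # X' diag(e^2) X
stat = b @ (X.T @ X) @ np.linalg.inv(S) @ (X.T @ X) @ b
# under an accurate solution, stat is approximately chi-squared with
# X.shape[1] degrees of freedom; values far in either tail signal trouble
```

In practice the statistic is computed across many independent simulations, and the fractions of draws falling in the upper and lower tails of the chi-squared distribution are compared with their nominal sizes.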

7.1 What variables should I use to try and predict w_{t+1}?

The obvious choices are lags of the state variables. In the case of our model the obvious candidates would be {k_{t-1}, k_{t-2}, k_{t-3}, k_{t-4}, A_t, A_{t-1}, A_{t-2}, A_{t-3}, A_{t-4}}.

7.2 How do I assess the ability of these variables at predicting w_{t+1}?

The details are given in den Haan and Marcet, but essentially if the model solution is accurate then the test statistic for whether these variables have any predictive power follows a chi-squared distribution, with degrees of freedom equal to the number of regressors. Therefore you examine the distribution of the test statistic at its upper and lower tails across repeated simulations and see if it accords with a chi-squared distribution.

7.3 How should I use the accuracy test?

The accuracy test is incredibly powerful at distinguishing between alternative models. For instance, comparing the stylised facts of the business cycle from solutions produced by quadratic approximations and from the PEA shows relatively small differences. However, the PEA solution comfortably passes the accuracy test whereas the quadratic approximation fails. Passing the accuracy test is therefore a very demanding criterion.

8 Example computer code

8.1 PEA.PRG

The main program used for applying the den Haan and Marcet solution technique. The program searches for the rational expectations solution by first simulating the model and then updating the parameterised expectation until convergence.
CLS;
PRINT;
PRINT "PEA.PRG";
PRINT "This program replicates the solution developed by";
PRINT "den Haan and Marcet for the Stochastic growth economy";
PRINT;
PRINT "created by Martin Ellison 22.VIII.1997";
PRINT;
PRINT;
PRINT;
PRINT "Initialising variables ...";;
#include params.prg;
PRINT "done. Press any key to continue";WAIT;
PRINT;
PRINT "STEP 1 : simulate a sequence for the productivity shock ...";;
#include shock.prg;
PRINT "done.";
PRINT;
PRINT "Example values of productivity term (t=100:111)";
PRINT theta[100:111];
PRINT "Press any key to continue";WAIT;
PRINT;
/* begin the PEA loop. continue until the updated delta estimates (nd)
are close to the previous estimates (od) */
DO UNTIL maxc(abs(nd-od))<0.000001;
od=nd;
/* set the random number generator seed - not essential */
RNDSEED 1;
PRINT "STEPS 2 & 3: use guess of deltas and productivity term";
PRINT "to solve for consumption and capital.";
PRINT;
PRINT "Current deltas:";
PRINT nd;

PRINT "Press any key to begin simulating ... ";;WAIT;
z=streams(nd);
PRINT "done.";
PRINT;
PRINT "Example values of consumption (t=100:111)";
PRINT z[100:111,1];
PRINT;
PRINT "Example values of capital stock (t=100:111)";
PRINT z[100:111,2];
PRINT;
ng=onl2;
nl2=0;
PRINT "STEP 4 : run non-linear regression";
PRINT "Press any key to begin regressing.";WAIT;
PRINT;
PRINT "Convergence of deltas:";
PRINT;
PRINT nd;
DO UNTIL maxc(abs(nl2-nl1))<0.000001;
nl1=ng;
nl2=choose(nl1);
PRINT nl2;
ng=nl2;
ENDO;
PRINT "Converged";
PRINT;
PRINT "Previous estimate of deltas";
PRINT nd;
PRINT "New estimate of deltas";

PRINT nl2;
PRINT;
PRINT "Do you want to test the (A)ccuracy of this solution";
PRINT "or (C) continue iterating?";
PRINT;
ans=cons;
IF ans $== "a" OR ans $== "A";
fd=nl2;
#include accuracy.prg;
PRINT "Press any key to continue";WAIT;
PRINT;
ENDIF;
onl2=ng;
nd=nl2;
ENDO;
z=streams(nd); /* calculate final c and k */
PRINT "Final solution:";
PRINT nd;
PRINT;
PRINT "Do you want to test the accuracy of the final solution?";
ans=cons;
IF ans $== "y" OR ans $== "Y";
fd=nd;
#include accuracy.prg;
ENDIF;
end;
PROC streams(d);
/* calculates simulated time series of c and k given parameterisation d */
n=1;
DO UNTIL n>T;
/* define lagged capital lkn */
if n<2;
lkn=k0;
else;
lkn=k[n-1];
endif;
/* calculate the parameterised expectation:
c comes from the first order condition,
k from the resource constraint */
@ replace the following with the first order conditions of the model @
lw[n]=ln(d[1,1])+d[2,1]*ln(lkn)+d[3,1]*theta[n]+d[4,1]*(theta[n])^2;
lc[n]=(-1/u)*(ln(beta)+lw[n]);
c[n]=exp(lc[n]);
k[n]=(exp(theta[n]))*(lkn^a)+m*lkn-exp(lc[n]);
@ @
n=n+1;
ENDO;
/* return the simulated c and k series */
z=c~k;
RETP(z);
ENDP;
PROC choose(gam);
local tx;
/* fits a weighted (Gauss-Newton) regression to the expectations stream */
n=1;
DO UNTIL n>T-1;
@ replace the following with the outcome of the expectation @
phi[n]=(c[n+1]^(-u))*(a*exp(theta[n+1])*(z[n,2]^(a-1))+m);
@ @
/* construct the weighting function */
IF n<2; /* initial conditions */
f[1,n]=ln(k0);
lkn=k0;
ELSE; /* lagged values */
lkn=z[n-1,2];
f[1,n]=ln(z[n-1,2]);
ENDIF;
f[2,n]=theta[n];
f[3,n]=theta[n]^2;
mlt=(1/gam[1])|f[.,n];
/* calculate approximation to the expectation */
lw[n]=ln(gam[1,1])+gam[2,1]*ln(lkn)+gam[3]*theta[n]+gam[4]*theta[n]^2;
/* dependent and independent variables of the linearised regression */
dep[n]=phi[n]-exp(lw[n])+(exp(lw[n])*mlt)'*gam;
indep[n,.]=(exp(lw[n])*mlt)';
n=n+1;
ENDO;
/* perform the regression using the standard formula b=(X'X)^(-1)(X'Y) */
/* the first tx observations are discarded */
tx=101;
inden=invpd(indep[tx:t-2,.]'*indep[tx:t-2,.])*(indep[tx:t-2,.]'*dep[tx:t-2,.]);
/* return the new estimate */
RETP(inden);
ENDP;

8.2 PARAMS.PRG

Initialises all variables and calibrated parameters needed by PEA.PRG.

@ @
@ PARAMS.PRG @
@ This program initialises all the parameters used @
@ in PEA.PRG to values calibrated from UK data @
@@
@ created by Martin Ellison 22.VIII.1997 @
@@
@ @
/* calibrated parameters */
a=0.4436; /* steady-state capital share of income */
beta=0.99; /* discount factor */
u=5; /* risk aversion */
m=0.975; /* one minus the depreciation rate */
T=4000; /* number of time periods of interest */
k0=95; /* steady-state capital stock (guess) */
l=1; /* step length for updating deltas */
iter=1; n=1; /* counting variables */
/* initial guess of deltas in parameterised expectation.
change if algorithm does not converge */

delta=ones(4,1);
delta[1,1]=1.1703263;
delta[2,1]=-1.8305086;
delta[3,1]=-1.6100586;
delta[4,1]=-0.26105643;
/* initialise delta vectors */
OD=ones(4,1);
ND=delta;
OG=ones(4,1);
NG=delta;
gam=ones(4,1);
onl2=delta;
/* Variables and parameters used in the procedure to calculate
the streams of consumption and capital */
theta=zeros(t,1); /* productivity term */
lw=ones(T,1)*0.5; /* approximation to the Euler eqn */
lk=ones(T,1)*k0;lkn=0; /* lagged capital stock */
k=(lk); /* capital stock */
lc=ones(T,1)*0.93; /* logarithm of consumption */
c=exp(lc); /* consumption */
z=c~k; /* z = [ c | k ] contains simulations */
/* Variables and parameters used in calculating the optimal
gamma via Non-Linear Least Squares Regression */
phi=ones(T-1,1); /* outcome of expectation */
f=ones(3,T); /* used in construction of mlt */
mlt=ones(4,1); /* weighting factor */
dep=zeros(T-1,1); /* dependent variable Y */
num=zeros(4,1); /* numerator = X'Y */
den=zeros(4,4); /* denominator = (X'X) */
inden=zeros(4,4); /* inverse of denominator */
indep=zeros(T-1,4); /* independent variable X */
nl1=10;nl2=0;

8.3 SHOCK.PRG

Generates a random draw of the productivity term.


@ @
@ SHOCK.PRG @
@ This program generates a sequence of length t for @
@ the productivity term with variance v and AR(1) @
@ persistence of rho @
@@
@ created by Martin Ellison 22.VIII.1997 @
@@
@ @
v=0.009253;
rho=0.95;
epsilom=v*rndn(T,1); /* calculate the productivity shocks */
theta[1,1]=0; /* initial condition of productivity term */
/* generate productivity term */
count=2;
DO UNTIL count>t;
theta[count,1]=rho*theta[count-1,1]+epsilom[count,1];
count=count+1;
ENDO;


8.4 ACCURACY.PRG

Tests the accuracy of the parameterisation of the expectation according to the den Haan and Marcet (1994) Review of Economic Studies test statistic. This requires making repeated simulations of the model economy and testing whether t-dated or lagged values of the state variables have predictive power for the rational expectations error. If not, the solution passes the accuracy test.
PRINT;
PRINT "ACCURACY.PRG";
PRINT "This program tests the accuracy of the solution";
PRINT "for the deltas contained in nd";
PRINT;
PRINT "created by Martin Ellison 22.VIII.1997";
PRINT;
PRINT;
PRINT;
draw=1;lcount=0;ucount=0; /* initialise variables */
ndraw=10; /* number of draws */
t=500; /* length of each draw */
PRINT "Proposed solution is";
PRINT fd;
PRINT;
PRINT "Press any key to begin accuracy test.";WAIT;
PRINT;
DO UNTIL draw>ndraw;
/* make a draw of the productivity term for t periods */
#include shock.prg;
/* use guess of deltas and productivity term to solve for
consumption and capital */

loop=1;
z=tstreams(fd);
/* calculate outcome of the expectation */
phi=tchoose;
/* define matrix of regressors. here uses three lags of the
capital and productivity term */
ah=ONES(1,t-103)|k[103:t-1]'|theta[103:t-1]'|k[102:t-2]';
ah=ah|theta[102:t-2]'|k[101:t-3]'|theta[101:t-3]';
/* calculate the rational expectations error */
res=beta*phi[103:t-1]-c[103:t-1]^(-u);
/* calculate test statistic.
see den Haan and Marcet, Review of Economic Studies 1994 */
ihh=invpd(ah*ah');
ahat=ihh*ah*res;
mdhm=ahat'*(ah*ah')*invpd(ah*(ah'.*(res^2)))*(ah*ah')*ahat;
kkkk=rows(ah);
FORMAT 1,3;
PRINT "Test statistic is :" mdhm "P-value" cdfchinc(mdhm,kkkk,0);
@ dmdhm[1,draw]=mdhm; dmdhm[2,draw]=cdfchinc(mdhm,kkkk,0); @
/* update counts if in upper or lower tail */
IF cdfchinc(mdhm,kkkk,0)<0.05;
lcount=lcount+1;
ENDIF;
IF cdfchinc(mdhm,kkkk,0)>0.95;
ucount=ucount+1;
ENDIF;
FORMAT 2,2;
PRINT "t=" draw;;

PRINT "lower tail %" lcount/draw "upper tail %" ucount/draw;
PRINT;
draw=draw+1;
ENDO;
PRINT "Final accuracy results";
PRINT;
FORMAT 16,8;
PRINT "Delta tested:";
PRINT fd;
PRINT;
FORMAT 3,2;
PRINT "Number of draws" draw;
PRINT "lower tail %" lcount/draw "upper tail %" ucount/draw;
PRINT;
FORMAT 16,8;
PROC tchoose;
/* calculate actual outcome of expectation */
phi=ones(t,1);
n=1;
DO UNTIL n>T-1;
@ replace the following with the outcome of the expectation @
phi[n]=(c[n+1]^(-u))*(a*exp(theta[n+1])*(z[n,2]^(a-1))+m);
@ @
n=n+1;
ENDO;
RETP(phi);
ENDP;
PROC tstreams(D);
/* calculate sequence of c and k given delta */
n=1;
z=ones(t,2);k=ones(t,1);lw=ones(t,1);
lc=ones(t,1);c=ones(t,1);
DO UNTIL n>T;
if n<2;lkn=k0;theta[1]=epsilom[1];
else;lkn=k[n-1];
endif;
lk[n]=lkn;
@ replace the following with the first order conditions of the
model @
lw[n]=ln(d[1,1])+d[2,1]*ln(lkn)+d[3,1]*theta[n]+d[4,1]*(theta[n])^2;
lc[n]=(-1/u)*(ln(beta)+lw[n]);
k[n]=(exp(theta[n]))*(lkn^a)+m*lkn-exp(lc[n]);
@ @
n=n+1;
ENDO;
c=exp(lc);
z=c~k;
RETP(z);
ENDP;

8.5 SIMS.PRG

Simulates the model economy. This program can easily be adapted to give any stylised facts desired. At present it calculates cross correlations and volatilities by averaging across 50 simulations of 130 periods each. Data are Hodrick-Prescott filtered, but this can be omitted if desired. It is also simple to use this program to give impulse response functions by defining epsilon in the first simulation to follow a process of 0, 0, 0, 0, 0, σ, 0, 0, 0, 0, 0, 0, ...
CLS;
PRINT;
PRINT "SIMS.PRG";
PRINT "This program simulates the stochastic growth economy";
PRINT;
PRINT "created by Martin Ellison 22.VIII.1997";
PRINT;
PRINT;
PRINT;
PRINT "Press any key to begin simulations";WAIT;
/* define calibrated parameters */
beta=.99;
alpha=.4436;
mu=.975;
sigma=.009253;
rho=.95;
tau=5;
/* parameters of the parameterised expectation */
nd = {1.1703263,-1.8305086,-1.6100586,-0.26105643};
ks = 95.499;
/* simulate the model. this program produces 50 simulations,
each of 130 periods in length */
output file=results.out reset;
epsilon = sigma*rndn(131,50);

theta = zeros(131,50);
theta[1,.] = epsilon[1,.];
k = zeros(131,50);
c = zeros(130,50);
y = zeros(130,50);
invest = zeros(130,50);
k[1,.] = ks*ones(1,50);
ii = 1;
do while ii<=130;
@ replace the following with the first order conditions of the
model @
theta[ii+1,.] = rho*theta[ii,.]+epsilon[ii+1,.];
c[ii,.] = ln(nd[1])*ones(1,50)+nd[2]*ln(k[ii,.])+nd[3]*theta[ii,.];
c[ii,.] = c[ii,.]+nd[4]*theta[ii,.].^2;
c[ii,.] = exp((-1/tau)*(ln(beta)+c[ii,.]));
y[ii,.] = (k[ii,.].^alpha).*exp(theta[ii,.]);
invest[ii,.] = y[ii,.]-c[ii,.];
k[ii+1,.] = invest[ii,.]+mu*k[ii,.];
@ @
ii = ii + 1;
endo;
/* take logarithms of the series in preparation for HP filtering */
lc = ln(c);
ly = ln(y);
li = ln(invest);
/* Hodrick-Prescott filter the generated series */
n=rows(c);
vv=lc~li~ly;

lamda=1600;
format /rdn 10,4;
aa=3*eye(n);
i=1;
DO UNTIL i>(n-2);
aa[i,i+1]=-4;
aa[i,i+2]=1;
i=i+ 1;
ENDO;
aa=aa+aa';
aa[1,1]=1;
aa[2,1]=-2;
aa[1,2]=-2;
aa[2,2]=5;
aa[n-1,n-1]=5;
aa[n-1,n]=-2;
aa[n,n-1]=-2;
aa[n,n]=1;
screen off;
W=eye(n)+lamda*aa;
IW=invpd(W);
trend=IW*vv;
cp=vv[.,1:50]-trend[.,1:50];
ip=vv[.,51:100]-trend[.,51:100];
yp=vv[.,101:150]-trend[.,101:150];
screen on;
S = zeros(50,20);
S[.,1] = stdc(cp);
S[.,2] = stdc(ip);

S[.,3] = stdc(yp);
j = 1;
do while j <= 50;
aa = zeros(5,26);
aa[1:2,1:2] = corrx(cp[2:130,j]~cp[1:129,j]);
aa[1:2,3:4] = corrx(ip[2:130,j]~ip[1:129,j]);
aa[1:2,5:6] = corrx(yp[2:130,j]~yp[1:129,j]);
aa[.,7:11]=corrx(cp[4:130,j]~yp[4:130,j]~yp[3:129,j]~yp[2:128,j]~yp[1:127,j]);
aa[.,12:16] = corrx(ip[4:130,j]~yp[4:130,j]~yp[3:129,j]
~yp[2:128,j]~yp[1:127,j]);
aa[1:4,17:20] = corrx(yp[4:130,j]~cp[3:129,j]~cp[2:128,j]~cp[1:127,j]);
aa[1:4,21:24] = corrx(yp[4:130,j]~ip[3:129,j]~ip[2:128,j]~ip[1:127,j]);
S[j,4] = aa[1,2];
S[j,5] = aa[1,4];
S[j,6] = aa[1,6];
S[j,7:13] = (rev(aa[1,18:20]))~aa[1,8:11];
S[j,14:20] = (rev(aa[1,22:24]))~aa[1,13:16];
j = j+1;
endo;
/* print results */
R = zeros(20,2);
R = meanc(S)~stdc(S);
PRINT;
PRINT "SIMS.PRG";
PRINT "Simulation of the Stochastic growth economy";
PRINT;
PRINT "created by Martin Ellison 22.VIII.1997";
PRINT;
PRINT;

PRINT;
PRINT "StDev(Consumption)" 100*R[1,.];
PRINT "StDev(Investment)" 100*R[2,.];
PRINT "StDev(Output)" 100*R[3,.];
PRINT "Rho(Consumption)" R[4,.];
PRINT "Rho(Investment)" R[5,.];
PRINT "Rho(Output)" R[6,.];
PRINT "Corr(C,Y)" R[7:13,1]|R[7:13,2];
PRINT "Corr(I,Y)" R[14:20,1]|R[14:20,2];
PRINT;
output off;

9 Example printouts

9.1 PEA.GSS

PEA.PRG
This program replicates the solution developed by
den Haan and Marcet for the Stochastic growth economy
created by Martin Ellison 22.VIII.1997

Initialising variables ... done. Press any key to continue


STEP 1 : simulate a sequence for the productivity shock ...done.
Example values of productivity term (t=100:111)
-0.034809919 -0.027160032 -0.030133135 -0.018852850
-0.025530091 -0.010038216 -0.013528491 0.0067613673
0.0026377245 0.0099787851 -9.2593289e-005 0.0099178908
Press any key to continue

STEPS 2 & 3: use guess of deltas and productivity term to solve for consumption and capital.
Current deltas:
1.1703263 -1.8305086 -1.6100586 -0.26105643
Press any key to begin simulating ... done.
Example values of consumption (t=100:111)
8.2850372 8.3188592 8.3252443 8.3691233
8.3661706 8.4218103 8.4279932 8.4983040
8.5042629 8.5410426 8.5305513 8.5740324
Example values of capital stock (t=100:111)
361.74087 363.44014 365.08506 366.86624
368.54229 370.41288 372.21977 374.29027
376.28590 378.36701 380.28957 382.33571
STEP 4 : run non-linear regression
Press any key to begin regressing.
Convergence of deltas:
1.1703263 -1.8305086 -1.6100586 -0.26105643
1.0265682 -1.8145534 -1.5758951 -0.14321741
1.0323259 -1.8141734 -1.5751578 -0.14134240
1.0323314 -1.8141768 -1.5751638 -0.14135557
1.0323314 -1.8141768 -1.5751638 -0.14135559
Converged
Previous estimate of deltas
1.1703263 -1.8305086 -1.6100586 -0.26105643
New estimate of deltas
1.0323314 -1.8141768 -1.5751638 -0.14135559
Do you want to test the (A)ccuracy of this solution
or (C) continue iterating?
c
STEPS 2 & 3: use guess of deltas and productivity term to solve for consumption and capital.
Current deltas:
1.0323314 -1.8141768 -1.5751638 -0.14135559
Press any key to begin simulating ...

9.2 ACCURACY.GSS

ACCURACY.PRG
This program tests the accuracy of the solution
for the deltas contained in nd
created by Martin Ellison 22.VIII.1997

Proposed solution is
1.0323314 -1.8141768 -1.5751638 -0.14135559
Press any key to begin accuracy test.
Test statistic is : 5.39 P-value 0.388
t=1.0 lower tail % 0.00 upper tail % 0.00
Test statistic is : 8.78 P-value 0.731
t=2.0 lower tail % 0.00 upper tail % 0.00
Test statistic is : 6.19 P-value 0.483
t=3.0 lower tail % 0.00 upper tail % 0.00
Test statistic is : 24.2 P-value 0.999
t=4.0 lower tail % 0.00 upper tail % 0.25
Test statistic is : 4.64 P-value 0.297
t=5.0 lower tail % 0.00 upper tail % 0.20
Test statistic is : 12.5 P-value 0.916
t=6.0 lower tail % 0.00 upper tail % 0.17
Test statistic is : 4.26 P-value 0.251
t=7.0 lower tail % 0.00 upper tail % 0.14
Test statistic is : 2.70 P-value 0.0887
t=8.0 lower tail % 0.00 upper tail % 0.13
Test statistic is : 3.77 P-value 0.194
t=9.0 lower tail % 0.00 upper tail % 0.11
Test statistic is : 3.28 P-value 0.142
t=10. lower tail % 0.00 upper tail % 0.10
Final accuracy results
Delta tested:
1.0323314 -1.8141768 -1.5751638 -0.14135559
Number of draws 11.
lower tail % 0.00 upper tail % 0.091
Press any key to continue

9.3 SIMS.GSS

SIMS.PRG
Simulation of the Stochastic growth economy
created by Martin Ellison 22.VIII.1997

StDev(Consumption)
0.3846
0.0459
StDev(Investment)
2.8964
0.3203
StDev(Output)
1.1796
0.1324
Rho(Consumption)
0.7182
0.0524
Rho(Investment)
0.6990
0.0533
Rho(Output)
0.7022
0.0529
Corr(C,Y)
0.1696 0.3882 0.6618 0.9885 0.7359 0.5129 0.3246
0.0994 0.0849 0.0501 0.0034 0.0513 0.0910 0.1105
Corr(I,Y)
0.2601 0.4625 0.7096 0.9986 0.6886 0.4272 0.2161
0.1063 0.0900 0.0529 0.0005 0.0526 0.0886 0.1037
10 Speeding up the PEA
The paper by Christiano and Fisher contains numerous ways of speeding up the PEA approach. Here we focus on only one such trick. The main computational cost of the method we have outlined above is the need to run repeatedly a nonlinear least squares regression involving more than 40000 observations. We shall call this the time series approach to PEA. Essentially what we are doing is estimating the parameterised expectations polynomial by using lots of data. However, that is not necessary, as all we need to do is solve the model so that

$$E_t\left[c_{t+1}^{-\gamma}\left(\alpha A_{t+1}k_t^{\alpha-1}+1-d\right)\right]=\delta_1 k_{t-1}^{\delta_2}A_t^{\delta_3}$$

Solving this equation for the δs does not necessarily require lots of observations. This is the motivation behind the grid approach to using the PEA technique. Rather than generate long time series and solve the model that way, the grid approach chooses certain values for the state variables and then chooses the δs so that the PEA is a good approximation to the expectations term at these values. If the grid values are chosen well (i.e. values which occur frequently in the model) then these should give good answers to the model.
To understand this approach more fully, go back to our simple stochastic growth model. In this case we have two state variables - the capital stock and the productivity term. The point of using a grid is to select a vector of potential values for the capital stock and the productivity term and then solve for a fixed point using all combinations of these chosen capital and productivity terms. For instance, assume that we choose three capital values (a low, medium and high capital stock) and three values of the productivity term (a low, medium and high value). Then we have 9 potential combinations of capital and the productivity term (all the way from high capital, low productivity to high productivity/low capital). For each of these grid points we can calculate $c_t$ and $k_t$ just as we did in the time series approach. However, by focusing only on 9 grid points we only have to calculate $c_t$ and $k_t$ 9 times - not the 40000 times we did in the time series approach. The next thing we have to do is calculate $E_t[c_{t+1}^{-\gamma}(\alpha A_{t+1}k_t^{\alpha-1}+1-d)]$ for each of these 9 grid points. We already have $k_t$ calculated, and by using the PEA formula we can calculate $c_{t+1}$, or at least a guess of it. It only needs a drawing of $A_{t+1}$ and we can calculate $c_{t+1}^{-\gamma}(\alpha A_{t+1}k_t^{\alpha-1}+1-d)$. However, we are interested in the expectation of this term, so we have to make many different drawings of $A_{t+1}$ (in practice I use 500) and average across all of them to arrive at $E_t[c_{t+1}^{-\gamma}(\alpha A_{t+1}k_t^{\alpha-1}+1-d)]$. At this point we can perform the same nonlinear regression and iterative procedure to arrive at the fixed point for the δs. However, the major advantage is that the regression is now performed over only 9 observations rather than 40000. In other words, there is an enormous time saving from the grid approach.
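One iteration of this grid procedure can be sketched as follows. This is a minimal illustration, not the lecture's own code: the calibration (alpha, beta, gamma, depreciation, AR(1) productivity) and the function names are assumptions, and a log-linear parameterisation of the conditional expectation, exp(d0 + d1 ln k + d2 ln A), is assumed, which makes the updating regression linear in logs.

```python
import numpy as np

# Illustrative parameters (assumptions, not the lecture's calibration)
alpha, beta, gamma, dep, rho, sigma = 0.33, 0.95, 1.0, 0.10, 0.90, 0.01

def pea(k, A, d):
    """Parameterised expectation: exp(d0 + d1*ln k + d2*ln A)."""
    return np.exp(d[0] + d[1] * np.log(k) + d[2] * np.log(A))

def grid_pea_step(d, n_draws=500, seed=0):
    """One iteration of the grid approach: a 3x3 grid replaces a long simulation."""
    rng = np.random.default_rng(seed)
    kss = (alpha / (1.0 / beta - 1.0 + dep)) ** (1.0 / (1.0 - alpha))
    k_grid = kss * np.array([0.8, 1.0, 1.2])            # low/medium/high capital
    A_grid = np.exp(np.array([-1.0, 0.0, 1.0]) * sigma / np.sqrt(1 - rho**2))
    X, Y = [], []
    for k in k_grid:
        for A in A_grid:
            # Same steps as the time-series PEA, but only at this grid point
            c = (beta * pea(k, A, d)) ** (-1.0 / gamma)  # Euler eqn with guess d
            k_next = A * k**alpha + (1.0 - dep) * k - c
            eps = rng.normal(0.0, sigma, n_draws)
            A_next = A**rho * np.exp(eps)                # 500 draws of A_{t+1}
            c_next = (beta * pea(k_next, A_next, d)) ** (-1.0 / gamma)
            m = c_next**(-gamma) * (alpha * A_next * k_next**(alpha - 1) + 1 - dep)
            X.append([1.0, np.log(k), np.log(A)])
            Y.append(np.log(m.mean()))                   # Monte Carlo expectation
    # The regression is linear in logs here, so OLS replaces the NLS step
    d_new, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)
    return d_new
```

Iterating d = grid_pea_step(d), possibly with damping, until the deltas stop changing delivers the fixed point; each iteration evaluates the model at only 9 points.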
10.1 What grid points should I use?
Christiano and Fisher provide a detailed discussion of this point. The most obvious choice for the productivity shock is to choose its mean and also plus and minus one standard deviation. For each of these values we then calculate the steady-state capital stock attached to each shock. This gives us a lower and upper bound on capital (the worst and best case scenarios) and a mean level. If all we want is three gridpoints for capital, this is enough. However, the more gridpoints used the more accurate the solution, so we would probably want some additional capital values to be considered in the grid. In choosing these values we also want to put a heavier weight on capital stocks close to the mean. These values are likely to occur more frequently, so it is important that the grid method solves well for values in this range. Therefore a range of capital values is chosen between the lower and upper bounds calculated earlier, and by means of a cosine function these values are clumped together around the mean value.
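One way to implement this clumping is to push equally spaced probabilities through the inverse CDF of a raised-cosine density centred on the mean capital stock. This is an illustrative construction rather than the exact mapping used by Christiano and Fisher.

```python
import numpy as np

def cosine_grid(lo, hi, n):
    """Grid on [lo, hi] whose points clump around the midpoint.

    Takes n equally spaced probabilities and inverts (numerically) the CDF
    of a raised-cosine density f(x) ~ 1 + cos(pi*(x - mid)/half), which
    peaks at the midpoint of the interval.
    """
    mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
    x = np.linspace(lo, hi, 2001)              # fine grid for numeric inversion
    pdf = 1.0 + np.cos(np.pi * (x - mid) / half)
    cdf = np.cumsum(pdf)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])  # normalise to [0, 1]
    probs = (np.arange(n) + 0.5) / n           # equally spaced probabilities
    return np.interp(probs, cdf, x)

# Nine capital values between illustrative bounds, denser near the mean
k = cosine_grid(2.0, 4.0, 9)
```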
11 Hints and tips
1. The non-linear regression is highly sensitive if the independent variables, i.e. the state variables in the parameterised expectation, have different orders of magnitude. To avoid this problem, divide the state variables by their steady-state values so that each variable has an order of magnitude of approximately one.
2. Obtaining an accurate non-linear regression can be difficult if the initial PEA simulation is poor, so it is best to set the accuracy tolerance of the non-linear regression loop relatively high, say to 0.001. This can then be decreased as the PEA continues and the simulation improves.
3. Finding satisfactory starting values can be problematic. The starting values should be such that the initial simulation has some economic plausibility. The simplest solution is to use the starting values from a similar model, e.g. use flexible price model parameters for the starting values of an inflexible price model. If this is not possible then try setting $\delta_1$ to the long run value of the parameterised expectation. In simple macromodels this is usually $\lambda$, the marginal utility of income. In simple macromodels the $\delta$ on the capital stock, divided by its steady-state value as suggested above, should be set initially at -1. This is very important. By setting it to -1 we are introducing some basic economic intuition into the model. If the capital stock is above its steady-state value then the choice of -1 ensures that the parameterised expectation will fall and consequently consumption will rise. This is what we would expect the representative consumer to do, and it leads to simulations roughly in line with economic theory. The PEA approach will tend to converge much faster from this type of simulation.
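A quick numerical check of why the -1 helps. The functional form and numbers here are illustrative assumptions: the parameterised expectation is taken to enter the Euler equation as c^(-gamma) = beta * exp(d1 + d2*ln(k/k*)), so with d2 = -1 capital above steady state lowers the expectation and raises consumption.

```python
import numpy as np

beta, gamma, d1, d2 = 0.95, 1.0, 0.0, -1.0   # d2 = -1 on scaled capital

def consumption(k_over_kss):
    """Consumption implied by c^(-gamma) = beta * exp(d1 + d2*ln(k/k*))."""
    expectation = np.exp(d1 + d2 * np.log(k_over_kss))
    return (beta * expectation) ** (-1.0 / gamma)

# Capital below, at, and above its steady-state value
c_low, c_ss, c_high = consumption(0.9), consumption(1.0), consumption(1.1)
```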
4. When choosing which state variables to use when parameterising an expectation, it is preferable to use a Specific to General approach. This has the advantage that computational time is initially less. In addition, the algorithm tends to converge at greater speed without complicating quadratic and cubic terms which may be insignificant. The most parsimonious linear approximation may be the best starting point. It can always be tested using the accuracy test and higher order terms incorporated.
5. When adding extra state variable terms to the set of explanatory variables, it is advisable to use some economic intuition. For example, real shocks would tend to be crossed with real variables and nominal shocks with nominal variables. Again, the accuracy test will show whether these are desirable or necessary.
6. When de-bugging it is useful to set the variance of all shocks to zero and start off the economy from its steady state. Then look at the first couple of simulations. If the economy is started from its steady state and no shocks are applied, then it would be expected that the economy will still be at its steady state in the next period. The capital stock in period 2 should then be almost identical to that in period 1. This provides a useful check on the programming of the model. If the steady state is not maintained then there is an error in the original mathematics, programming or steady state calculations. It is usually easy to proceed to PEA when the model is properly programmed. In my experience most of the problems relate to human error rather than to the algorithm. Do not worry, though, if this procedure does not give exactly the same capital stock value as before. It may be that the way the model has been set up via the parameterised expectation means that the steady-state equilibrium is initially unstable. If for some reason the initial steady-state calculations are infinitesimally away from the steady state, then the economy may move away from the steady state even if there are no shocks occurring. This is not a cause for concern; in general the PEA will correct this and update the parameterised expectation in such a way that the final equilibrium is stable.
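This check is easy to automate. The sketch below assumes log utility with full depreciation, where the policy function k' = alpha*beta*A*k^alpha is known in closed form (an assumption made purely so the example is self-contained); with the shock variance set to zero, capital should sit at its steady state in every period.

```python
import numpy as np

alpha, beta = 0.33, 0.95

def simulate(k0, periods, shock_sd, seed=0):
    """Simulate k' = alpha*beta*A*k^alpha with log-normal productivity shocks."""
    rng = np.random.default_rng(seed)
    k, path = k0, []
    for _ in range(periods):
        A = np.exp(rng.normal(0.0, shock_sd))  # A = 1 when shock_sd = 0
        k = alpha * beta * A * k**alpha
        path.append(k)
    return np.array(path)

# Steady state of this model: k* = (alpha*beta)^(1/(1-alpha))
kss = (alpha * beta) ** (1.0 / (1.0 - alpha))
path = simulate(kss, 10, shock_sd=0.0)         # debugging run: zero variance
```

If path drifts away from kss, the error is in the model equations, the programming or the steady-state calculations, exactly as described above.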
7. In connection with the above, it has been suggested that the model can be solved by first solving for a very small variance of any shocks, so that the model is almost deterministic. The values of the parameterised expectation can then be used as starting values as the variance is increased.
8. It is useful to set the number of simulations to a low value when first estimating the model. This saves time whilst debugging. A rough parameterisation can be obtained with a low number of simulations and then used as starting values for the full estimation.
9. To enable results to be replicated, it is always advisable to set the random number generator seed to a specific value. This ensures that the random draw of any shocks is identical from estimation to estimation and helps the PEA algorithm to converge. It creates no problems for the algorithm because the parameterised values still have to pass the accuracy test. The point is that the parameterisation will inevitably be slightly different for different draws of the random shocks. Setting the random number generator seed eliminates this unnecessary variation.
10. If there is more than one expectation in the model to parameterise, then initially hold one of the parameterisations constant whilst optimising the other. Then hold that one constant and estimate the first, and so on. Only after a number of such iterations is it worth attempting to estimate both parameterisations simultaneously. This ensures that the simulations remain sensible and the algorithm converges.
11. Stylised facts can obviously be constructed from inaccurate solutions that have not passed the accuracy test. Whilst these facts cannot be interpreted rigorously, they do give some idea as to whether the model is working correctly. For example, is consumption smoother than output, as expected in most of these types of model?
12. Problems tend to arise if the expectation to be parameterised also contains known values of the state variables. In this case it is preferable to remove the known variable and parameterise the rest. For example, rather than parameterise $\psi_t = E_t[mc_t k_{t-1}^{\alpha}]$, where $mc_t$ is unknown but $k_{t-1}$ is already known, parameterise $\psi_t^0 = \psi_t k_{t-1}^{-\alpha} = E_t[mc_t]$. This avoids some multicollinearity problems.