Martin Ellison
Motivation
Key reading
Other reading
This section sets out the basic approach of the PEA in a general setting.
For more details on the intuition see den Haan and Marcet (1990). For those
of you who prefer concrete examples, the next section gives a fully worked-out
example.
Most economic models are based around certain key Euler equations.
These Euler equations link a function of current dated variables to the
expectation of a function of future variables.
An example
The representative agent maximises

  max E_0 Σ_{t=0}^{∞} β^t c_t^{1-γ}/(1-γ)

subject to

  c_t + k_t − μ k_{t-1} = A_t k_{t-1}^{α}
  ln A_{t+1} = ρ ln A_t + ε_{t+1}
where β is the discount factor, μ is one less the depreciation rate, and γ is the
coefficient of relative risk aversion. The first order condition for this model
is
  MU_{ct} = β E_t[ R_{t+1} MU_{ct+1} ]
where MU_{ct} is the marginal utility of consumption at time t, or

  c_t^{-γ} = β E_t[ c_{t+1}^{-γ} ( α A_{t+1} k_t^{α-1} + μ ) ]
The PEA approach is then to replace the expectation term on the right
hand side with a polynomial in the state variables. In this simple model the
state variables are the capital stock inherited from the last period and the
current productivity shock. Thus we re-write the Euler equation as
  c_t^{-γ} = β m(k_{t-1}, A_t; δ)
where m(·) denotes the polynomial that the PEA approach uses to substitute for the conditional expectation in the Euler equation. In what follows
we assume that

  m(k_{t-1}, A_t; δ) = δ_1 k_{t-1}^{δ_2} A_t^{δ_3}
that is, m(·) is a first-order polynomial in the state variables. However, we could use a polynomial of any order, i.e. we could use instead

  m(k_{t-1}, A_t; δ) = δ_1 k_{t-1}^{δ_2} A_t^{δ_3} (k_{t-1} A_t)^{δ_4} (k_{t-1}^{2})^{δ_5} (A_t^{2})^{δ_6}
Notice that we have not yet specified anything about the coefficients of
the polynomial, the δs. For the moment we shall just set them equal to some
arbitrary starting values. Given these starting values the PEA then iterates
to solve for the correct values, as we shall explain below.
If we go back to the first-order polynomial we now have

  c_t^{-γ} = β δ_1 k_{t-1}^{δ_2} A_t^{δ_3}

In period 2, for example, the Euler equation

  c_2^{-γ} = β E_2[ c_3^{-γ} ( α A_3 k_2^{α-1} + μ ) ]

must be solved for c_2. Once again, the trick is to use the PEA to write this Euler
equation as

  c_2^{-γ} = β δ_1 k_1^{δ_2} A_2^{δ_3}
Since we have values for k_1 and A_2, we can once again (for given δs) solve
for c_2. Given c_2 we can then calculate k_2 from the budget constraint. By
repeating this process for all time periods we arrive at time
series for consumption, capital and output for any arbitrary number of periods. However, as can be seen, the resulting {c_t}, {y_t} and {k_t} series depend
on the coefficients in the PEA polynomial, that is the δs. Different choices
of these δs will produce different sequences for consumption, output and
capital. While we can always solve for {c_t}, {y_t} and {k_t} for given δs, the
resulting equilibrium sequences need not satisfy the Euler equation. That
is, agents' expectations may not generate actions which are consistent with
these expectations. The solution of the PEA approach is to arrive at the
correct δs, that is, those expectations which produce behaviour that is
consistent with those expectations.
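This simulation step, for a fixed set of δs, can be sketched in a few lines. The following is a Python sketch (the programs later in these notes are in GAUSS; Python is used here purely for illustration): β and γ follow the text, while α, μ, ρ and the shock standard deviation are illustrative assumptions, not the calibration used later.

```python
import math, random

# Sketch of solving for {c_t}, {k_t}, {y_t} given a fixed set of deltas.
# beta and gamma follow the text; alpha, mu, rho and sigma are illustrative.
beta, gamma = 0.99, 5.0
alpha, mu, rho, sigma = 0.33, 0.975, 0.95, 0.01

def simulate(delta, T, k0=1.0, seed=0):
    """delta = (d1, d2, d3): invert c_t^(-gamma) = beta*d1*k^d2*A^d3 each
    period and roll the budget constraint k_t = y_t + mu*k_{t-1} - c_t."""
    d1, d2, d3 = delta
    rng = random.Random(seed)
    A, k = 1.0, k0
    cs, ks, ys = [], [], []
    for _ in range(T):
        A = math.exp(rho * math.log(A) + rng.gauss(0.0, sigma))
        m = d1 * k ** d2 * A ** d3            # parameterised expectation
        c = (beta * m) ** (-1.0 / gamma)      # from the Euler equation
        y = A * k ** alpha
        k = y + mu * k - c                    # budget constraint
        cs.append(c); ks.append(k); ys.append(y)
    return cs, ks, ys
```

Different δs generate different {c_t} and {k_t} paths, which is exactly why the assumed coefficients must then be checked against the Euler equation.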
What criteria should we use in choosing the correct parameters? If the
PEA approach is to be correct then it must be the case that E_t[ c_{t+1}^{-γ} ( α A_{t+1} k_t^{α-1} + μ ) ] is approximately equal to δ_1 k_{t-1}^{δ_2} A_t^{δ_3}, or

  E_t[ c_{t+1}^{-γ} ( α A_{t+1} k_t^{α-1} + μ ) ] − δ_1 k_{t-1}^{δ_2} A_t^{δ_3} = v_t ≈ 0
The closer v_t is to zero, the better the approximation being used by the PEA.
This immediately suggests one criterion for choosing the δs in
the parameterised expectations term: minimising Σ v_t². But this
is of course exactly what a regression would do. In other words, if you
regress c_{t+1}^{-γ}( α A_{t+1} k_t^{α-1} + μ ) on δ_1 k_{t-1}^{δ_2} A_t^{δ_3}, then the resulting estimates of the
δs will minimise Σ v_t². Obviously this regression is nonlinear, so it needs to be
estimated by nonlinear least squares. In practice, therefore, we take starting values
for the δs, solve for {c_t}, {y_t} and {k_t}, and run a nonlinear regression¹ to estimate the δ_1, δ_2
and δ_3 that minimise Σ v_t². The δs from the nonlinear regression will not
necessarily be the same as the δs assumed in the construction of {c_t}, {y_t}
and {k_t}. That is, the formula used to form expectations of the future may
not produce outcomes which are consistent with these forecasts. In this
case agents need to change their forecasting equation. The point of the
PEA procedure is to continue until the δs from the nonlinear regression are
the same as the δs assumed in constructing {c_t}, {y_t} and {k_t}, in other
words until a fixed point has been reached. This fixed point is the solution to the
model, as at this point the forecasting function used by consumers produces
consumption demands which lead to outcomes for future consumption which
prove the original forecasting equations correct.

¹ Like all nonlinear regressions, a certain amount of art is required in choosing sensible starting values.
As an example of the end product of the PEA approach, consider the
above stochastic growth model calibrated on UK data. In this case we arrive
at estimates for the coefficients of δ_1 = 1.7931, δ_2 = −1.9249 and δ_3 =
−1.6701. Given that β = 0.99 and γ = 5, this means that the solution to
our model

  c_t^{-γ} = β δ_1 k_{t-1}^{δ_2} A_t^{δ_3}

is

  c_t = exp[ −(1/5)( ln 0.99 + ln 1.7931 − 1.9249 ln k_{t-1} − 1.6701 ln A_t ) ]

Therefore the solution is a simple expression relating current consumption
to the current state variables.
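As a quick check of this closed-form expression, the reported coefficients can be plugged in directly. A small Python sketch, using only β, γ and the δs quoted above:

```python
import math

# The closed-form decision rule quoted above, with the reported coefficients.
beta, gamma = 0.99, 5.0
d1, d2, d3 = 1.7931, -1.9249, -1.6701

def consumption(k_lag, A):
    """c_t = exp[-(1/gamma)(ln beta + ln d1 + d2*ln k_{t-1} + d3*ln A_t)]."""
    return math.exp(-(math.log(beta) + math.log(d1)
                      + d2 * math.log(k_lag) + d3 * math.log(A)) / gamma)
```

Because δ_2 and δ_3 are negative and enter through the −1/γ exponent, consumption is increasing in both inherited capital and current productivity, as economic intuition requires.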
What is the intuition behind the fixed point which underlies the PEA? Probably the best way to rationalise the PEA approach is in terms of learning.
Imagine a world where agents know their first-order conditions but do not
know how to form the expectations that the Euler equations entail. That is,
they have to learn how the economy works. One simple way of forming an
expectation would be to try and forecast the expectational terms using a
forecasting rule involving only the state variables. At first the consumer has
very few observations with which to construct a forecasting rule, so essentially
makes a guess as to the coefficients in the forecasting equation. However, as
more data become available they can estimate a more precise forecasting
rule and need to rely less on intuition. At first the consumer will make mistakes. They do not as yet know how the economy operates and will form
expectations on the basis of an incorrect forecast of the forward-looking term
in the Euler equation. In other words, their forecast will not be equal to
E_t[ c_{t+1}^{-γ} ( α A_{t+1} k_t^{α-1} + μ ) ], and realised values of
c_{t+1}^{-γ}( α A_{t+1} k_t^{α-1} + μ ) will systematically differ from what was
expected. Therefore agents' expectations will be revised as the data accumulate.
We can therefore write down the following plan for how to use the PEA:

STEP 1: Simulate a sequence for the productivity shock {A_t}.

STEP 2: Use the guess of δ_1, δ_2 and δ_3, {A_t} and the PEA expression to solve
for {c_t}.

STEP 3: Use the results from STEP 2 and the budget constraint to solve for {k_t}
and {y_t}.

STEP 4: Run a nonlinear regression of c_{t+1}^{-γ}( α A_{t+1} k_t^{α-1} + μ ) on δ_1 k_{t-1}^{δ_2} A_t^{δ_3}.

STEP 5: If the estimated δs differ from those used in STEP 2, update the guess
and return to STEP 2; continue until the δs converge.
6 Practical issues

6.1 Will the algorithm converge?
Marcet and Marshall (1994) have written in detail on this issue. In theory,
for most economic models we are interested in, the algorithm will
converge. Only in the case of models with multiple equilibria or increasing
returns will we find problems of convergence. However, in practice obtaining convergence may not be a trivial process. For instance, if the user chooses
a first-order polynomial but this is a poor approximation, then the algorithm
may not converge. In this case it is worth experimenting with increasing the
order of the polynomial and/or trying alternative combinations of the state
variables.
6.2 Nonstationary variables
A crucial part of the solution algorithm is the nonlinear least squares regression. Notice that the data used in this regression are a function of the previous
estimates of the δs. Because of this, the usual super-convergence results concerning nonstationary
variables do not hold, and in the case of nonstationary variables you will not get convergence on a fixed point. In our case the data
change with every set of parameter estimates, so there is no reason for the parameters to converge. Therefore, where variables are nonstationary,
it is important to detrend all the variables to induce stationarity. This involves replacing each variable x_t which contains a trend with its detrended
value x̂_t. For instance, assume that ln A_{t+1} = g + ρ ln A_t + ε_{t+1}. In this case all
the variables share a common trend A_t^{1/(1−α)}, and we therefore need
to transform everything by dividing through by this trend, working for example with
ĉ_t = c_t / A_t^{1/(1−α)} and similarly for capital and output. Having transformed
the model to stationarity in this way, we then simply derive the first-order
conditions for this transformed model and solve using the PEA as in the previous
example.
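The common-trend claim can be verified numerically. In this sketch α and the drift g are illustrative, the trend is deterministic purely for clarity, and capital is assumed to move with the trend A_t^{1/(1−α)}; detrended output then comes out constant.

```python
import math

# Numerical check of the common-trend claim.  alpha and the drift g are
# illustrative; the trend is deterministic here purely for clarity.
alpha, g = 0.33, 0.005
s = 2.0                                    # capital/trend ratio (assumed)

detrended = []
lnA = 0.0
for t in range(200):
    trend = math.exp(lnA / (1.0 - alpha))  # common trend A_t^(1/(1-alpha))
    k = s * trend                          # capital moving with the trend
    y = math.exp(lnA) * k ** alpha         # y_t = A_t * k^alpha
    detrended.append(y / trend)            # detrended output
    lnA += g                               # ln A_{t+1} = g + ln A_t (rho = 1)
# detrended output is constant at s**alpha even though y_t itself grows
```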
6.3 What convergence criterion should be used?

This partly depends on the problem, but I have found 0.00001 to be fairly
reliable.
6.4 How many observations are needed?

The convergence results associated with the learning intuition are all asymptotic.
In other words, it can take a long time to hit the fixed point. As a result you
may need to run simulations/regressions with lots of data. In my experience
this involves using between 40000 and 100000 observations for
each simulation/regression.
7 Testing the accuracy of the solution

When we arrive at a fixed point of the PEA we have found a set of coefficients
such that

  E_t[ c_{t+1}^{-γ} ( α A_{t+1} k_t^{α-1} + μ ) ] − δ_1 k_{t-1}^{δ_2} A_t^{δ_3} = v_t ≈ 0
but is this a good solution? In other words, just because we have found a
fixed point which minimises a metric of squared residuals, this does not mean
that we have an accurate solution to the model. We merely have a solution.
In order to gauge the accuracy of the solution, den Haan and Marcet
(1994) propose a very simple test. This test can be used to gauge the accuracy
of any solution technique and is not dependent on the use of the PEA. The Euler equation

  c_t^{-γ} = β E_t[ c_{t+1}^{-γ} ( α A_{t+1} k_t^{α-1} + μ ) ]

implies an expectational error

  w_{t+1} = c_t^{-γ} − β c_{t+1}^{-γ}( α A_{t+1} k_t^{α-1} + μ )
According to rational expectations, this error term w_{t+1} should be unpredictable at time t. That is, agents do not make predictable forecasting
mistakes. Therefore, to assess the accuracy of the model solution, all one needs
to do is regress w_{t+1} on a variety of variables dated at time t or earlier. If
these variables have no predictive power then the solution is accurate. If the
variables do predict the expectational error then the solution is inaccurate, and
the order of the polynomial needs to be increased and the process repeated.
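A sketch of the test on synthetic data follows; the instrument and error series here are artificial, whereas in practice w_{t+1} comes from the model simulation and the instruments are lagged state variables.

```python
import math, random

# Sketch of the den Haan-Marcet accuracy test: regress the realised Euler
# error w_{t+1} on time-t instruments and form
#   (X'w)' [X' diag(w^2) X]^(-1) (X'w),
# which is asymptotically chi-square with K (number of instruments) degrees
# of freedom when the errors are unpredictable.  Data here are synthetic.

def solve(M, b):
    """Gaussian elimination for a small K x K system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

def dhm_stat(w, X):
    """w: list of errors; X: list of instrument vectors of equal length."""
    K = len(X[0])
    a = [sum(x[i] * e for x, e in zip(X, w)) for i in range(K)]
    V = [[sum(x[i] * x[j] * e * e for x, e in zip(X, w)) for j in range(K)]
         for i in range(K)]
    return sum(ai * vi for ai, vi in zip(a, solve(V, a)))

rng = random.Random(0)
N = 500
z = [rng.gauss(0, 1) for _ in range(N)]                 # a time-t instrument
X = [[1.0, z[t], z[t] ** 2] for t in range(N)]
good = [rng.gauss(0, 1) for _ in range(N)]              # unpredictable errors
bad = [0.5 * z[t] + rng.gauss(0, 1) for t in range(N)]  # predictable errors
# dhm_stat(good, X) should look like a chi-square(3) draw (small);
# dhm_stat(bad, X) should lie far out in the right tail.
```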
7.1 What variables should be used in the test?

The obvious choices are lags of the state variables. In the case of our model the
obvious candidates would be {k_{t−1}, k_{t−2}, k_{t−3}, k_{t−4}, A_t, A_{t−1}, A_{t−2}, A_{t−3}, A_{t−4}}.
7.2 How is the test statistic distributed?

The details are given in den Haan and Marcet, but essentially, if the model
solution is accurate then the test statistic for whether these variables have
any predictive power is asymptotically distributed χ² with degrees of freedom
equal to the number of regressors used.
7.3
8 The programs

8.1 PEA.PRG
The main program used for applying the den Haan and Marcet solution technique. The program searches for the rational expectations solution by first
simulating and then updating the parameterised expectation until convergence.
CLS;
PRINT "";
PRINT "PEA.PRG";
PRINT "This program replicates the solution developed by";
PRINT "den Haan and Marcet for the Stochastic growth economy";
PRINT "";
PRINT "created by Martin Ellison 22.VIII.1997";
PRINT "";
PRINT;
PRINT "Initialising variables ...";;
#include params.prg;
PRINT "done. Press any key to continue";WAIT;
PRINT;
PRINT "STEP 1 : simulate a sequence for the productivity shock ...";;
#include shock.prg;
PRINT "done.";
PRINT;
PRINT "Example values of productivity term (t=100:111)";
PRINT theta[100:111];
PRINT "Press any key to continue";WAIT;
PRINT;
/* begin the PEA loop. continue until the updated delta estimates (nd)
are close to the previous estimates (od) */
DO UNTIL maxc(abs(nd-od))<0.000001;
od=nd;
/* set the random number generator seed - not essential */
RNDSEED 1;
PRINT "STEPS 2 & 3: use guess of deltas and productivity term";
PRINT "to solve for consumption and capital.";
PRINT;
PRINT "Current deltas:";
PRINT nd;
PRINT nl2;
PRINT;
PRINT "Do you want to test the (A)ccuracy of this solution";
PRINT "or (C) continue iterating?";
PRINT;
ans=cons;
IF ans $== "a" OR ans $== "A";
fd=nl2;
#include accuracy.prg;
PRINT "Press any key to continue";WAIT;
PRINT;
ENDIF;
onl2=ng;
ND=nl2;
ENDO;
z=streams(nd); /* calculate final c and k */
PRINT "Final solution:";
PRINT nd;
PRINT;
PRINT "Do you want to test the accuracy of the final solution?";
ans=cons;
IF ans $== "y" OR ans $== "Y";
fd=nd;
#include accuracy.prg;
ENDIF;
end;
PROC streams(D);
n=1;
DO UNTIL n>T-1;
@ replace the following with the outcome of the expectation - @
phi[n]=(c[n+1]^(-u))*(a*exp(theta[n+1])*(z[n,2]^(a-1))+m);
@ @
/* construct the weighting function */
IF n<2; /* initial conditions */
f[1,n]=ln(k0);
lkn=k0;
ELSE; /* lagged values */
lkn=z[n-1,2];
f[1,n]=ln(z[n-1,2]);
ENDIF;
f[2,n]=theta[n];
f[3,n]=theta[n]^2;
mlt=(1/gam[1])|f[.,n];
/* calculate approximation to the expectation */
lw[n]=ln(gam[1,1])+gam[2,1]*ln(lkn)+gam[3]*theta[n]
+gam[4]*theta[n]^2;
/* dependent and independent variables */
dep[n]=phi[n]-exp(lw[n])+gam'*(exp(lw[n])*mlt);
indep[n,.]=(exp(lw[n])*mlt)';
n=n+1;
ENDO;
/* perform the regression using the standard formula b=(X'X)^(-1)(X'Y) */
/* the first tx observations are discarded */
tx=101;
inden=invpd(indep[tx:t-2,.]'indep[tx:t-2,.])*(indep[tx:t-2,.]'dep[tx:t-2,.]);
/* return the new estimate */
RETP(inden);
ENDP;
8.2
PARAMS.PRG
delta=ones(4,1);
delta[1,1]=1.1703263;
delta[2,1]=-1.8305086;
delta[3,1]=-1.6100586;
delta[4,1]=-0.26105643;
/* initialise delta vectors */
OD=ones(4,1);
ND=delta;
OG=ones(4,1);
NG=delta;
gam=ones(4,1);
onl2=delta;
/* Variables and parameters used in the procedure to calculate
the
streams of consumption and capital */
theta=zeros(t,1); /* productivity term */
lw=ones(T,1)*0.5; /* log of the parameterised expectation */
lk=ones(T,1)*k0;lkn=0; /* lagged capital stock */
k=(lk); /* capital stock */
lc=ones(T,1)*0.93; /* logarithm of consumption */
c=exp(lc); /* consumption */
z=c~k; /* z = [ c | k ] contains simulations */
8.3
SHOCK.PRG
8.4
ACCURACY.PRG
loop=1;
z=tstreams(fd);
/* calculate outcome of the expectation */
phi=tchoose;
/* define matrix of regressors. here uses three lags of the capital
and productivity term */
ah=ONES(1,T-103)|k[103:T-1]'|theta[103:T-1]'|k[102:t-2]';
ah=ah|theta[102:t-2]'|k[101:t-3]'|theta[101:t-3]';
/* calculate the rational expectations error */
res=beta*phi[103:t-1]-c[103:t-1]^(-u);
/* calculate test statistic.
see den Haan and Marcet, Review of Economic Studies 1994 */
ihh=invpd(Ah*Ah');
ahat=ihh*Ah*res;
mdhm=ahat'*(Ah*Ah')*invpd(Ah*(Ah.*(res^2)')')*(Ah*Ah')*ahat;
kkkk=rows(Ah);
format 1,3;
print "Test statistic is :" mdhm "P-value" cdfchinc(mdhm,kkkk,0);
@dmdhm[1,draw]=mdhm; dmdhm[2,draw]=cdfchinc(mdhm,kkkk,0);@
/* update counts if in upper or lower tail */
IF cdfchinc(mdhm,kkkk,0)<0.05;
lcount=lcount+1;
ENDIF;
IF cdfchinc(mdhm,kkkk,0)>0.95;
ucount=ucount+1;
ENDIF;
FORMAT 2,2;
PRINT "t=" draw;;
ENDP;
PROC tstreams(D);
/* calculate sequence of c and k given delta */
n=1;
z=ones(t,2);k=ones(t,1);lw=ones(t,1);
lc=ones(t,1);c=ones(t,1);
DO UNTIL n>T;
if n<2;lkn=k0;theta[1]=epsilon[1];
else;lkn=k[n-1];
endif;
lk[n]=lkn;
@ replace the following with the first order conditions of the
model @
lw[n]=ln(d[1,1])+d[2,1]*ln(lkn)+d[3,1]*theta[n]+d[4,1]*(theta[n])^2;
lc[n]=(-1/u)*(ln(beta)+lw[n]);
k[n]=(exp(theta[n]))*(lkn^a)+m*lkn-exp(lc[n]);
@ -@
n=n+1;
ENDO;
c=exp(lc);
z=c~k;
RETP(z);
ENDP;
8.5
SIMS.PRG
Simulates the model economy. This program can be easily adapted to give
any stylised facts desired. At present it calculates cross correlations and
volatilities by averaging across 50 simulations of 130 periods each. Data are
logged and Hodrick-Prescott filtered before the statistics are calculated.
theta = zeros(131,50);
theta[1,.] = epsilon[1,.];
k = zeros(131,50);
c = zeros(130,50);
y = zeros(130,50);
invest = zeros(130,50);
k[1,.] = ks*ones(1,50);
ii = 1;
do while ii<=130;
@ replace the following with the first order conditions of the
model @
theta[ii+1,.] = rho*theta[ii,.]+epsilon[ii+1,.];
c[ii,.] = ln(nd[1])*ones(1,50)+nd[2]*ln(k[ii,.])+nd[3]*theta[ii,.];
c[ii,.] = c[ii,.]+nd[4]*theta[ii,.].^2;
c[ii,.] = exp((-1/tau)*(ln(beta)+c[ii,.]));
y[ii,.] = (k[ii,.].^alpha).*exp(theta[ii,.]);
invest[ii,.] = y[ii,.]-c[ii,.];
k[ii+1,.] = invest[ii,.]+mu*k[ii,.];
@ @
ii = ii + 1;
endo;
/* take logarithms of the series in preparation for HP filtering
*/
lc = ln(c);
ly = ln(y);
li = ln(invest);
/* Hodrick-Prescott filter the generated series */
n=rows(c);
vv=lc~li~ly;
lamda=1600;
format /rdn 10,4;
aa=3*eye(n);
i=1;
DO UNTIL i>(n-2);
aa[i,i+1]=-4;
aa[i,i+2]=1;
i=i+ 1;
ENDO;
aa=aa+aa';
aa[1,1]=1;
aa[2,1]=-2;
aa[1,2]=-2;
aa[2,2]=5;
aa[n-1,n-1]=5;
aa[n-1,n]=-2;
aa[n,n-1]=-2;
aa[n,n]=1;
screen off;
W=eye(n)+lamda*aa;
IW=invpd(W);
trend=IW*vv;
cp=vv[.,1:50]-trend[.,1:50];
ip=vv[.,51:100]-trend[.,51:100];
yp=vv[.,101:150]-trend[.,101:150];
screen on;
S = zeros(50,20);
S[.,1] = stdc(cp);
S[.,2] = stdc(ip);
S[.,3] = stdc(yp);
j = 1;
do while j <= 50;
aa = zeros(5,26);
aa[1:2,1:2] = corrx(cp[2:130,j]~cp[1:129,j]);
aa[1:2,3:4] = corrx(ip[2:130,j]~ip[1:129,j]);
aa[1:2,5:6] = corrx(yp[2:130,j]~yp[1:129,j]);
aa[.,7:11]=corrx(cp[4:130,j]~yp[4:130,j]~yp[3:129,j]~yp[2:128,j]~yp[1:127,j]);
aa[.,12:16] = corrx(ip[4:130,j]~yp[4:130,j]~yp[3:129,j]
~yp[2:128,j]~yp[1:127,j]);
aa[1:4,17:20] = corrx(yp[4:130,j]~cp[3:129,j]~cp[2:128,j]~cp[1:127,j]);
aa[1:4,21:24] = corrx(yp[4:130,j]~ip[3:129,j]~ip[2:128,j]~ip[1:127,j]);
S[j,4] = aa[1,2];
S[j,5] = aa[1,4];
S[j,6] = aa[1,6];
S[j,7:13] = (rev(aa[1,18:20]))~aa[1,8:11];
S[j,14:20] = (rev(aa[1,22:24]))~aa[1,13:16];
j = j+1;
endo;
/* print results */
R = zeros(20,2);
R = meanc(S)~stdc(S);
PRINT "";
PRINT "SIMS.PRG";
PRINT "Simulation of the Stochastic growth economy";
PRINT "";
PRINT "created by Martin Ellison 22.VIII.1997";
PRINT "";
PRINT "";
PRINT;
PRINT "StDev(Consumption)" 100*R[1,.];
PRINT "StDev(Investment)" 100*R[2,.];
PRINT "StDev(Output)" 100*R[3,.];
PRINT "Rho(Consumption)" R[4,.];
PRINT "Rho(Investment)" R[5,.];
PRINT "Rho(Output)" R[6,.];
PRINT "Corr(C,Y)" R[7:13,1]'|R[7:13,2]';
PRINT "Corr(I,Y)" R[14:20,1]'|R[14:20,2]';
PRINT "";
output off;
9 Example printouts

9.1 PEA.GSS
PEA.PRG
This program replicates the solution developed by
den Haan and Marcet for the Stochastic growth economy
created by Martin Ellison 22.VIII.1997
9.2
ACCURACY.GSS
ACCURACY.PRG
This program tests the accuracy of the solution
for the deltas contained in nd
created by Martin Ellison 22.VIII.1997
Proposed solution is
1.0323314 -1.8141768 -1.5751638 -0.14135559
Press any key to begin accuracy test.
Test statistic is : 5.39 P-value 0.388
t=1.0 lower tail % 0.00 upper tail % 0.00
Test statistic is : 8.78 P-value 0.731
t=2.0 lower tail % 0.00 upper tail % 0.00
Test statistic is : 6.19 P-value 0.483
t=3.0 lower tail % 0.00 upper tail % 0.00
Test statistic is : 24.2 P-value 0.999
t=4.0 lower tail % 0.00 upper tail % 0.25
Test statistic is : 4.64 P-value 0.297
t=5.0 lower tail % 0.00 upper tail % 0.20
Test statistic is : 12.5 P-value 0.916
t=6.0 lower tail % 0.00 upper tail % 0.17
9.3
SIMS.GSS
SIMS.PRG
Simulation of the Stochastic growth economy
created by Martin Ellison 22.VIII.1997
StDev(Consumption)   0.3846   0.0459
StDev(Investment)    2.8964   0.3203
StDev(Output)        1.1796   0.1324
Rho(Consumption)     0.7182   0.0524
Rho(Investment)      0.6990   0.0533
Rho(Output)          0.7022   0.0529
Corr(C,Y)
0.1696 0.3882 0.6618 0.9885 0.7359 0.5129 0.3246
0.0994 0.0849 0.0501 0.0034 0.0513 0.0910 0.1105
Corr(I,Y)
0.2601 0.4625 0.7096 0.9986 0.6886 0.4272 0.2161
0.1063 0.0900 0.0529 0.0005 0.0526 0.0886 0.1037

(first column/row: mean across the 50 simulations; second: standard deviation)
10 The grid approach
Solving this equation for the δs does not necessarily require lots of observations. This is the motivation behind the grid approach to using the
PEA technique. Rather than generate long time series and solve the model
that way, the grid approach chooses certain values for the state variables and
then chooses the δs so that the PEA is a good approximation to the expectations
term at these values. If the grid values are chosen well (i.e. values which
occur frequently in the model) then this should give a good solution to the
model.
To understand this approach more fully, go back to our simple stochastic
growth model. In this case we have two state variables: the capital stock
and the productivity term. The point of using a grid is to select a vector
of potential values for the capital stock and the productivity term and then
solve for a fixed point using all combinations of these chosen capital and
productivity terms. For instance, assume that we choose three capital values
(a low, medium and high capital stock) and three values of the productivity
term (a low, medium and high value). Then we have 9 potential combinations
of capital and the productivity term (all the way from high capital/low
productivity to high productivity/low capital). For each of these grid points
we can calculate c_t and k_t just as we did in the time series approach. However,
by focusing on only 9 grid points we only have to calculate c_t and k_t 9 times
- not the 40000 times we did in the time series approach. The next thing
we have to do is calculate E_t[ c_{t+1}^{-γ} ( α A_{t+1} k_t^{α-1} + μ ) ] at each of these 9 grid
points. For any given realisation of A_{t+1} we can calculate
c_{t+1}^{-γ}( α A_{t+1} k_t^{α-1} + μ ); however, we are interested in
the conditional expectation of this term, which requires weighting together its value
across the possible realisations of the shock. Given these expectations, we use the same nonlinear
regression and iterative procedure to arrive at the fixed point for the δs.
However, the major advantage is that the regression is now performed over
only 9 observations rather than 40000. In other words, there is an enormous
time saving produced via the grid approach.
10.1 How should the grid points be chosen?

Christiano and Fisher provide a detailed discussion of this point. The most
obvious choice for the productivity shock is to take its mean and also plus
and minus one standard deviation. We then calculate
the steady-state capital stock attached to each of these shocks. This gives
us a lower and an upper bound on capital (the worst and best case scenarios)
and a mean level. If all we want is three gridpoints for capital this is enough.
However, the more gridpoints used the more accurate the solution, so
we would probably want some additional capital values to be considered in
the grid. In choosing these values we also want to put a heavier weight
on capital stocks close to the mean, since these values are likely to occur more
frequently and it is important that the grid method solves well for values
in this range. Therefore a range of capital values is chosen between the
lower and upper bounds calculated earlier, and by means of a cosine function
these values are clumped together around the mean value.
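The bounds described above follow from the deterministic steady-state version of the Euler equation, 1 = β(α A k^{α−1} + μ). A Python sketch (β follows the text; α, μ and the standard deviation of ln A are illustrative assumptions):

```python
import math

# Steady-state capital bounds for the grid, from the deterministic Euler
# equation 1 = beta*(alpha*A*k^(alpha-1) + mu).  beta follows the text;
# alpha, mu and the s.d. of ln A are illustrative.
beta, alpha, mu = 0.99, 0.33, 0.975
sd_lnA = 0.03                                  # assumed s.d. of ln A

def k_star(A):
    """Deterministic steady-state capital for productivity level A."""
    return (alpha * beta * A / (1.0 - beta * mu)) ** (1.0 / (1.0 - alpha))

A_grid = [math.exp(-sd_lnA), 1.0, math.exp(sd_lnA)]   # low, mean, high
k_grid = [k_star(A) for A in A_grid]                  # bounds and mean level
grid = [(k, A) for k in k_grid for A in A_grid]       # the 9 (k, A) points
```

Extra capital points between k_grid[0] and k_grid[2], clustered towards the middle as the text suggests, can then be appended before solving on the grid.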
11 Practical tips
7. A difficult model can often be solved by first solving it for a very small variance of any shocks, so that the
model is almost deterministic. The values of the parameterised expectation
can then be used as starting values as the variance is increased.
8. It is useful to set the number of simulations to a low value when first
estimating the model. This saves time whilst debugging. A rough parameterisation can be obtained with a low number of simulations and this can then be
used as starting values for the full estimation.
9. To enable results to be replicated it is always advisable to set the
random number generator seed to a specific value. This ensures that the
random draw of any shocks is identical from estimation to estimation and
helps the PEA algorithm to converge. It creates no problems for the algorithm because the parameterised values still have to pass the accuracy test.
The point is that the parameterisation will inevitably be slightly different for
different draws of the random shocks. Setting the random number generator
seed eliminates this unnecessary variation.
10. If there is more than one expectation in the model to parameterise,
then initially hold one of the parameterisations constant whilst optimising
the other; then hold the second constant and re-estimate the first, and so
on. Only after a number of iterations such as these is it worth attempting
to estimate both parameterisations simultaneously. This ensures that the
simulations remain sensible and the algorithm converges.
11. Stylised facts can obviously be constructed from inaccurate solutions
that have not passed the accuracy test. Whilst these facts cannot be interpreted rigorously they do give some idea as to whether the model is working
correctly. For example, is consumption smoother than output as expected in
most of these types of model?
12. Problems tend to arise if the expectation to be parameterised also
contains known values of the state variables. In this case it is preferable to
remove the known variable and parameterise the rest. For example, rather
than parameterise E_t[ mc_t k_{t-1} ], where mc_t is unknown but k_{t-1} is already
known at time t, it is better to parameterise E_t[ mc_t ] on its own and multiply
by k_{t-1}; including the known variable inside the parameterisation invites
collinearity problems.