
Periodic Unobserved Component Time Series Models:

estimation and forecasting with applications


Siem Jan Koopman and Marius Ooms
Free University Amsterdam, Department of Econometrics
De Boelelaan 1105, NL-1081 HV Amsterdam
mooms@feweb.vu.nl
February 13, 2002
Abstract
Periodic time series analysis refers to the modelling approach where important time
series properties depend on the period of the year. The standard approach to time series
modelling is to treat a time series as a stochastic process with seasonal fluctuations. In
a periodic analysis seasonal variations are modelled using separate yearly time series for
each season, which do not possess seasonal dynamics by construction.
If the seasonal subseries are unrelated, periodic analysis simply implies a repeated
analysis for each season. If the seasonal subseries are related, periodic analysis requires a
truly multivariate time series approach for the seasonal subseries.
This paper explores the periodic analysis in the context of unobserved components time
series models which decompose a time series into components of interest including trend,
seasonal and irregular. We compare five approaches: standard nonperiodic structural
time series modelling, periodic univariate unobserved components modelling of seasonal
subseries, homogeneous multivariate unobserved components modelling, common trend
modelling and seemingly unrelated unobserved components time series modelling.
We confine ourselves to cases where estimation, diagnostic checking and forecasting
can be carried out easily and interactively using existing user-friendly software packages.
We illustrate the methodology using three quarterly time series of energy demand
(electricity, gas and coal) in the UK. We demonstrate that a periodic analysis is relatively
straightforward and that it can be a viable alternative to the more parsimonious standard
approach.
Keywords: Seasonality, Structural Time Series, Periodicity, Common Trends
JEL code: C22
Preliminary version
Please do not quote without permission of authors.
1 Introduction
The main problem in modelling seasonal time series is to estimate the recurring but persistently
changing pattern within the years in an efficient way for forecasting. Structural time series models
provide a convenient statistical tool to solve this problem. In particular we focus on the basic
structural time series model which decomposes a time series into trend, seasonal and irregular
components. For the problem at hand, this approach suits three aims: firstly, it decomposes the
observed series into unobserved stochastic processes which provide (after estimation) a better
understanding of the dynamic characteristics of the series; secondly, it provides an effective
basis for seasonal adjustment; thirdly, it generates optimal forecasts straightforwardly using
the Kalman filter.
The inclusion of a seasonal component in a structural time series model allows for the sea-
sonal variation within the year. An alternative approach is to consider a set of yearly time
series, in which each series is associated with a particular season, simultaneously. The impli-
cation of this so-called periodic approach is that an intrinsically univariate analysis becomes
a multivariate analysis which seems a drawback at first sight. However, in this paper we will
show that a periodic analysis can still rely on merely univariate techniques. In the context
of autoregressive moving average models (ARMA) and dynamic econometric models, extensive
studies using the periodic approach are carried out by Osborn and Smith (1989), Osborn (1991)
and in the excellent monograph by Franses (1996). The consequences of a periodic approach for
seasonal long-memory or fractional ARMA models have been explored by Ooms and Franses
(2001). Ooms and Franses (1997) introduced diagnostics for the evaluation of model based
seasonal adjustment in a periodic analysis of nonstationary processes and found that periodic
extensions can improve the adequacy of standard seasonal unobserved component models.
This paper examines the possibilities of a periodic analysis of seasonal time series within the
class of unobserved components time series models for nonstationary processes. The periodic
approach is compared with the use of standard univariate models which include a seasonal
component. We therefore consider two variants of modelling: firstly, univariate with the time
index measured in seasons; secondly, multivariate or periodic with the time index measured in
years. The main purpose of this paper is to implement periodic unobserved components (PUC)
models using state space methods and to validate the empirical results of estimating various
models and forecasting energy series based on these models. Proietti (2001) also explores a
periodic analysis in the context of unobserved components models with a particular emphasis
on signal extraction that is achieved by various restrictions on variance matrices in order to
obtain an adequate decomposition of the time series. In this paper we partly build on this work
but we explore different periodic models and emphasise ready-to-use estimation and forecasting
methods for those models.
We argue that periodic models are well-suited to deal with specific seasonal features whereas
other seasonal models may be more appropriate for other types of seasonal variations. Therefore
we do not regard periodic approaches to seasonality within the class of unobserved components
time series models as competitors to standard approaches. We approach them as possible
alternatives, generating useful extra information about the time series under consideration.
The estimation of components and the forecasting of the series require first the estimation of
parameters associated with the unobserved components such as trend, seasonal and irregular.
All the empirical work in this paper is done using the STAMP program of Koopman, Harvey,
Doornik, and Shephard (2000). We estimate a number of periodic models and analyse them
using various diagnostic test statistics. The forecast performance of the various models is also
investigated empirically. The paper is organised as follows. Section 2 introduces the notation
for the nonperiodic unobserved component models. Section 3 discusses four periodic unobserved
component models. Section 4 presents the statistical treatment of periodic unobserved
component models, where we discuss estimation of the models in state space form, diagnostic
checking and forecasting. Section 5 discusses empirical results for the energy series and section
6 concludes.
2 Unobserved components time series models
The univariate structural time series model that is particularly suitable for many economic
data sets is given by
    y_t = μ_t + γ_t + ε_t,   ε_t ~ NID(0, σ²_ε),   t = 1, ..., n,      (1)
where μ_t, γ_t and ε_t represent trend, seasonal and irregular components respectively. The trend
and seasonal components are modelled by linear dynamic stochastic processes which depend on
disturbances. The components are formulated flexibly and they are allowed to change over time
rather than being deterministic. The disturbances driving the components are independent of
each other. The definitions of the components are given below, but a full explanation of the
underlying rationale can be found in Harvey (1989, chapter 2). The effectiveness of structural
time series models compared to ARIMA type models, especially when messy features in time
series are present, is shown in Harvey, Koopman, and Penzer (1998).
The trend component can be defined as

    μ_t = μ_{t−1} + β_{t−1} + η_t,   η_t ~ NID(0, σ²_η),
    β_t = β_{t−1} + ζ_t,             ζ_t ~ NID(0, σ²_ζ),      (2)

where the level and slope disturbances, η_t and ζ_t, are mutually uncorrelated. When σ²_ζ is zero,
we have a random walk plus drift, and when σ²_η is zero as well, a deterministic linear trend is
obtained. A relatively smooth trend, related to a cubic spline, results when a zero value of σ²_η
is coupled with a positive σ²_ζ; Young (1984) calls this model an integrated random walk.
To take account of the seasonal variation in the time series a seasonal component is included.
A deterministic seasonal component should have the property that it sums to zero over the
previous year to ensure that it cannot be confounded with the trend. Flexibility of the seasonal
component is achieved when it is allowed to change over time. This can be established
by adding a disturbance term to the sum of the seasonal effects over the past year. This is
the dummy variable form of the seasonal component. Alternatively, a deterministic seasonal
pattern is obtained by a set of sine and cosine functions. Allowing these to be time-varying
leads to the trigonometric form of the seasonal component γ_t, which is given by

    γ_t = Σ_{j=1}^{[s/2]} γ⁺_{j,t},   where
    [ γ⁺_{j,t+1} ; γ⁻_{j,t+1} ] = [ cos λ_j   sin λ_j ; −sin λ_j   cos λ_j ] [ γ⁺_{j,t} ; γ⁻_{j,t} ] + [ ω⁺_{j,t} ; ω⁻_{j,t} ],      (3)

with λ_j = 2πj/s as the j-th seasonal frequency and

    ( ω⁺_{j,t} , ω⁻_{j,t} )′ ~ NID( 0, σ²_ω I₂ ),   j = 1, ..., [s/2].
Note that for s even [s/2] = s/2, while for s odd, [s/2] = (s − 1)/2. For s even, the process
γ⁻_{j,t}, with j = s/2, can be dropped. The state space representation is straightforward and the
initial conditions are γ⁺_{j,1} ~ N(0, κ) and γ⁻_{j,1} ~ N(0, κ), with κ large, for j = 1, ..., [s/2]. We have assumed
that the variance σ²_ω is the same for all trigonometric terms. However, we can impose different
variances for the terms associated with different frequencies; in the quarterly case (s = 4) we
can estimate two different σ²_ω's rather than just one. We could also consider dropping a pair
of sine-cosine terms at some frequency which appear not to be strongly present in the seasonal
process. The trigonometric seasonal process evolves smoothly over time; it can be shown that
the sum of the seasonals over the past year follows an MA(s − 2) process rather than white noise
as in the case of the dummy seasonal form. We therefore will take the trigonometric specification
for the seasonal component in our analyses. More details on the trigonometric specification for
the seasonal process can be found in Harvey (1989, page 56).
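To make the decomposition concrete, the following small simulation sketch (in Python with numpy; it is not part of the original analysis and the variance values are purely illustrative) generates a quarterly series from the basic structural model (1) with the trend (2) and the trigonometric seasonal (3) for s = 4.

    import numpy as np

    rng = np.random.default_rng(0)
    n, s = 108, 4                                  # quarterly data, as in the application below
    sd_eps, sd_eta, sd_zeta, sd_omega = 0.03, 0.015, 0.001, 0.01   # illustrative standard deviations

    mu, beta = 5.0, 0.01                           # starting level and slope
    g1p, g1m, g2p = 0.2, 0.1, -0.1                 # trigonometric seasonal states at frequencies pi/2 and pi
    lam1 = 2.0 * np.pi / s                         # lambda_1 = pi/2

    y = np.empty(n)
    for t in range(n):
        gamma = g1p + g2p                          # seasonal component: sum of the gamma+ terms, eq. (3)
        y[t] = mu + gamma + sd_eps * rng.standard_normal()     # measurement equation (1)
        mu = mu + beta + sd_eta * rng.standard_normal()        # level recursion, eq. (2)
        beta = beta + sd_zeta * rng.standard_normal()          # slope recursion, eq. (2)
        g1p, g1m = (np.cos(lam1) * g1p + np.sin(lam1) * g1m + sd_omega * rng.standard_normal(),
                    -np.sin(lam1) * g1p + np.cos(lam1) * g1m + sd_omega * rng.standard_normal())
        g2p = -g2p + sd_omega * rng.standard_normal()          # at frequency pi the sine term drops out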
3 Periodic unobserved components time series models
In a periodic analysis the time series are re-ordered in s yearly time series which we denote by
y_{j,p}; this represents observation y_t with time index t = (j − 1)s + p for year j = 1, ..., J and season
p = 1, ..., s, where J is the number of years; see Tiao and Grupe (1980). When considering a
yearly time series associated with a particular season, the seasonal variation is not present and
therefore we only include trend and irregular components in the models. The stack of the s
yearly time series is represented by

    y_j = ( y_{j,1}, ..., y_{j,s} )′,   j = 1, ..., J.
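The re-ordering t = (j − 1)s + p amounts to a simple reshape of the data. A minimal sketch (Python/numpy, with a toy series) of how the seasonal subseries and the stacked vectors y_j can be obtained:

    import numpy as np

    def to_subseries(y, s=4):
        # re-order y_t, t = (j-1)*s + p, into a J x s array: row j-1 is the stacked
        # vector y_j = (y_{j,1}, ..., y_{j,s})' and column p-1 is the subseries for season p
        y = np.asarray(y)
        J = len(y) // s                      # number of complete years
        return y[:J * s].reshape(J, s)

    y = np.arange(1, 13)                     # three years of quarterly data, t = 1, ..., 12
    Y = to_subseries(y, s=4)
    print(Y[:, 0])                           # yearly subseries for season p = 1: 1, 5, 9
    print(Y[1])                              # stacked vector y_2 = (5, 6, 7, 8)'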
3.1 Independent periodic models
A simple periodic approach is to consider independent local linear trend models for the s yearly
time series, that is
    y_{j,p} = μ_{j,p} + ε_{j,p},
    μ_{j+1,p} = μ_{j,p} + β_{j,p} + η_{j,p},   p = 1, ..., s,
    β_{j+1,p} = β_{j,p} + ζ_{j,p},             j = 1, ..., J,      (4)

where the disturbances ε_{j,p}, η_{j,p} and ζ_{j,p} are serially and mutually uncorrelated over both indices.
Effectively, s separate models are introduced which can be represented as a multivariate model by

    y_j = μ_j + ε_j,             ε_j ~ N(0, Σ_ε),
    μ_{j+1} = μ_j + β_j + η_j,    η_j ~ N(0, Σ_η),
    β_{j+1} = β_j + ζ_j,          ζ_j ~ N(0, Σ_ζ),   j = 1, ..., J,      (5)

where the variance matrices Σ_ε, Σ_η and Σ_ζ are restricted to be diagonal. Generally, x_j refers
to the stack of the seasonal values x_{j,1}, ..., x_{j,s}, that is

    x_j = ( x_{j,1}, ..., x_{j,s} )′,

where x can refer to the variables y, μ, β, ε, η and ζ. The dimension of the state vector is 2s.
A simple example of an independent periodic model is the well known seasonal random
walk, extended with periodically varying variances, where Σ_ε = Σ_ζ = 0 and where the diagonal
elements of Σ_η give the innovation variance for each season of the year. A combination of
random walk behaviour for three subseries with one white noise subseries is also a possible
independent periodic model. In the general case, the number of unknown variance parameters
for this model adds up to 3s parameters, which is 12 in the quarterly case. The variance
parameters can be estimated separately for each subseries.
3.2 Homogeneous periodic models
Multivariate homogeneous structural time series models are discussed in Harvey (1989, section
8.3). For the case of the local linear trend model these are obtained by replacing the diagonal
variance matrix restrictions for model (5) with the homogeneity restrictions

    Σ_ε = q_ε Σ,   Σ_η = q_η Σ,   Σ_ζ = q_ζ Σ,      (6)

for some common variance matrix Σ. The dimension of the state vector still equals 2s. The
variance matrices of the disturbances of the different components are therefore equal up to the
scalar factors q_ε, q_η and q_ζ. The variance ratios of trend, slope and irregular are the same for
each linear combination of the seasonal subseries and the correlation function at yearly lags does
not depend on the season. This aspect of the model is nonperiodic. Periodicity is confined to
innovation variances and to correlations at other nonyearly lags. The number of unknown
covariance parameters is 2 + s(s + 1)/2. However, not all these parameters have to be estimated
simultaneously.
The estimation of homogeneous models can be based on the profile loglikelihood function
in which the variance matrix Σ is concentrated out of the likelihood together with one of the
scalars q_ε, q_η and q_ζ. The consequence is that only two parameters need to be estimated by
directly maximising the profile loglikelihood function using numerical methods. Further details
of estimation are given in the next section.
Homogeneous models are used for forecasting multiple time series for the purpose of inventory
control of ranges of products. However, such models have not yet been used in the context
of periodic time series. In this paper we investigate the effectiveness of homogeneous models in
the context of periodic time series.
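A minimal sketch of the homogeneity restriction (6) (Python/numpy; the matrix Σ and the scalars are illustrative assumptions, not estimates from the paper): all three variance matrices are scalar multiples of one common matrix, so only 2 + s(s + 1)/2 parameters are free once one scalar is normalised.

    import numpy as np

    s = 4
    L = np.tril(np.random.default_rng(1).standard_normal((s, s)))
    Sigma = L @ L.T                                  # common s x s variance matrix: s(s+1)/2 free elements
    q_eps, q_eta, q_zeta = 1.0, 0.8, 0.03            # illustrative scalars; q_eps normalised to 1
    Sigma_eps, Sigma_eta, Sigma_zeta = q_eps * Sigma, q_eta * Sigma, q_zeta * Sigma
    n_free = 2 + s * (s + 1) // 2                    # 12 free covariance parameters in the quarterly case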
3.3 Seemingly unrelated periodic equations
Model (5) without restrictions on the variance matrices can be regarded as a set of seemingly
unrelated time series equations or, more appropriately, seemingly unrelated periodic structural
time series equations (SUPSE). This can be compared with the set of seemingly unrelated
regression equations (SURE) introduced by Zellner (1962). The number of unknown covariance
parameters to be estimated for the local linear trend SUPSE model increases substantially to
3s(s + 1)/2, that is 30 for quarterly time series. All parameters are identified theoretically
but in practice the estimation of parameters may be difficult and numerical convergence to the
maximum loglikelihood can be slow when s is large.
3.4 Periodic common trends
A final set of variants we will consider is model (5) with rank restrictions on the variance
matrices. It is shown by Harvey and Koopman (1997) that rank restrictions on variance matrices
Σ_η and Σ_ζ lead to the notion of common trends which can be interpreted as stochastic cointegration
for a vector of time series. For periodic unobserved component models, common trends
can occur as common stochastic slopes, common stochastic levels, or as combinations of both.
Suppose Σ_ζ = 0, so there are no stochastic slopes, but Σ_η ≠ 0. In that case the notion of
common stochastic levels indicates the existence of linear combinations of seasonal subseries
without stochastic levels, whereas the constituent subseries separately do contain stochastic
levels. A simple interpretable example of a periodic common trend model then obtains when
Σ_η has rank one. All seasonal subseries share one common trend, which can be interpreted
as the overall trend, and the factor loadings for this common trend can then be interpreted as
long run seasonal factors.
These notions may be of interest to the investigator, but for the present purpose we consider
these models merely as a more parsimonious representation of SUPSE models.
The number of unknown variance parameters in this case is given by

    r_η ( 1 + s − (r_η + 1)/2 ) + r_ζ ( 1 + s − (r_ζ + 1)/2 ) + s(s + 1)/2,

where r_η and r_ζ are the ranks of Σ_η and Σ_ζ. Each reduced rank matrix is identified by r
variances and sr − r(r + 1)/2 parameters in the matrix of normalised factor loadings. Section 5
provides several empirical examples. All these parameters
need to be estimated simultaneously. Simple asymptotic tests for the rank based on likelihood
ratio tests are not available yet. Lagrange multiplier tests have been developed in detail. For
a recent overview, see Harvey (2001) and the references therein.
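A sketch of the reduced-rank structure and of the parameter count (Python/numpy; the loadings and the variance below are illustrative assumptions). The helper reproduces the counts reported in Table 6 below: 18 parameters for r_η = r_ζ = 1 and 21 for r_η = 2, r_ζ = 1.

    import numpy as np

    s = 4
    A_eta = np.array([[1.0], [1.9], [0.9], [0.6]])   # illustrative s x r_eta loadings, first element normalised
    D_eta = np.array([[0.02]])                       # illustrative r_eta x r_eta positive diagonal matrix
    Sigma_eta = A_eta @ D_eta @ A_eta.T              # variance matrix of rank r_eta = 1

    def n_free(r, s):
        # r variances plus s*r - r(r+1)/2 free elements in the normalised loading matrix
        return r * (1 + s - (r + 1) / 2)

    print(n_free(1, s) + n_free(1, s) + s * (s + 1) / 2)   # r_eta = r_zeta = 1 case: 18.0
    print(n_free(2, s) + n_free(1, s) + s * (s + 1) / 2)   # r_eta = 2, r_zeta = 1 case: 21.0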
4 Statistical treatment
The state space form provides a unified representation of a wide range of linear Gaussian
time series models including the structural time series model; see, for example, Harvey (1993,
Chapter 4), Kitagawa and Gersch (1996) and Durbin and Koopman (2001). The Gaussian
state space form consists of a transition equation and a measurement equation; we formulate
it as

    α_{t+1} = T_t α_t + H_t ε_t,   α_1 ~ N(a, P),   t = 1, ..., n,      (7)
    y_t = Z_t α_t + G_t ε_t,        ε_t ~ NID(0, I),                     (8)

where NID(μ, Ψ) indicates an independent sequence of normally distributed random vectors
with mean μ and variance matrix Ψ, and, similarly, N(μ, Ψ) indicates a normally distributed
variable. The N observations at time t are placed in the vector y_t and the N × n data matrix is
given by (y_1, ..., y_n). The structural time series model has a univariate measurement equation:
N = 1. The periodic model is based on a multivariate measurement equation: N = s. In
the following we use a single time index t which may refer to a time scale measured either in
quarters or in years.
The m × 1 state vector α_t contains unobserved stochastic processes and unknown fixed
effects. The state equation (7) has a Markovian structure which is an effective way to describe
the serial correlation structure of the time series y_t. The initial state vector is assumed to be
random with mean a and variance matrix P, but some elements of the state can be diffuse which
means that they have mean zero and variance κ, where κ is large. The measurement equation (8)
relates the observation vector y_t to the state vector α_t through the signal Z_t α_t and the vector of
disturbances ε_t. The deterministic matrices T_t, Z_t, H_t and G_t are referred to as system matrices
and they usually are sparse selection matrices. When the system matrices are constant over
time, we drop the time-indices to obtain the matrices T, Z, H and G. The resulting state space
form is referred to as time-invariant.
4.1 Diffuse Kalman filter and log likelihood evaluation
The Kalman filter is a recursive algorithm for the evaluation of moments of the normal distribution
of the state vector α_{t+1} conditional on the data set Y_t = {y_1, ..., y_t}, that is

    a_{t+1} = E(α_{t+1} | Y_t),   P_{t+1} = cov(α_{t+1} | Y_t),

for t = 1, ..., n; see Anderson and Moore (1979, page 36) and Harvey (1989, page 104). The
Kalman filter is given by
    v_t = y_t − Z_t a_t,
    F_t = Z_t P_t Z_t′ + G_t G_t′,
    K_t = ( T_t P_t Z_t′ + H_t G_t′ ) F_t⁻¹,
    a_{t+1} = T_t a_t + K_t v_t,
    P_{t+1} = T_t P_t T_t′ + H_t H_t′ − K_t F_t K_t′,      (9)

for t = 1, ..., n, and where a_1 = a and P_1 = P. Here v_t is the innovation and F_t is its variance. K_t
is the Kalman gain: the derivative of the forecast function for the state with respect to the
current innovation. The variance matrix P is given by

    P = P_∗ + κ P_∞,

where κ is large; for example, κ = 10⁷. The matrix P_∗ contains the variances and covariances
between the stationary elements of the state vector (zeroes elsewhere) and P_∞ is a diagonal
matrix with unity for nonstationary and deterministic elements of the state and zero elsewhere.
The number of diffuse elements, equal to the rank of P_∞, is given by d.
The Kalman filter is used to compute the Gaussian log-likelihood function via the prediction
error decomposition for models in state space form; see Schweppe (1965), Jones (1980) and
Harvey (1989, section 3.4). The log-likelihood function is given by

    ℓ = log p(y_1, ..., y_n; ψ) = Σ_{t=1}^{n} log p(y_t | y_1, ..., y_{t−1}; ψ)
      = −(nN/2) log(2π) − (1/2) Σ_{t=1}^{n} ( log |F_t| + v_t′ F_t⁻¹ v_t ),      (10)

where ψ is the vector of parameters for a specific statistical model represented in state space
form (9). The innovations v_t and their variances F_t are computed by the Kalman filter for
a given vector ψ. Note that the summation in (10) is from 1 to n, but usually the first d
summations will be approximately zero as F_t⁻¹ will be very small for t = 1, ..., d. For more
details on diffuse initialisation, see Koopman (1997).
When observations y_t for t = τ, ..., τ∗ − 1 are missing, the vector v_t and the matrix K_t of
the Kalman filter are set to zero for these values, that is v_t = 0 and K_t = 0, and the Kalman
updates become

    a_{t+1} = T_t a_t,   P_{t+1} = T_t P_t T_t′ + H_t H_t′,   t = τ, ..., τ∗ − 1.      (11)
This simple treatment of missing observations is one of the attractions of the state space
methods for time series analysis. The same principle can be used to generate forecasts and
forecast root mean squared errors as we discuss below.
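The recursions (9)-(11) and the log-likelihood (10) can be sketched compactly as follows (Python/numpy; this is an illustrative implementation under the assumption of time-invariant system matrices, not the STAMP or SsfPack code, and missing observations are coded as NaN).

    import numpy as np

    def kalman_loglik(y, T, Z, H, G, a1, P1, d=0):
        # Kalman filter (9) with log-likelihood (10); rows of y containing NaN are
        # treated as missing, which gives the prediction-only step (11)
        n, N = y.shape
        a, P = a1.copy(), P1.copy()
        loglik = 0.0
        for t in range(n):
            if np.isnan(y[t]).any():                 # missing observation: v_t = 0, K_t = 0
                a, P = T @ a, T @ P @ T.T + H @ H.T
                continue
            v = y[t] - Z @ a                         # innovation
            F = Z @ P @ Z.T + G @ G.T                # innovation variance
            K = (T @ P @ Z.T + H @ G.T) @ np.linalg.inv(F)
            a = T @ a + K @ v
            P = T @ P @ T.T + H @ H.T - K @ F @ K.T
            if t >= d:                               # the first d (diffuse) terms are left out
                sign, logdet = np.linalg.slogdet(F)
                loglik -= 0.5 * (N * np.log(2 * np.pi) + logdet + v @ np.linalg.solve(F, v))
        return loglik

The diffuse initial variance can be supplied as P1 = P_star + kappa * P_inf with a large kappa, as described above.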
4.2 State space formulation of seasonal UC models
The unobserved components time series model (1) for quarterly series (s = 4) with trend,
seasonal and irregular components requires a state vector of five elements which is given by
α_t = ( μ_t, β_t, γ⁺_{1,t}, γ⁻_{1,t}, γ⁺_{2,t} )′. The model specifications for trend and seasonal are given by (2)
and (3), respectively. The seasonal component γ_t is the sum of two trigonometric variables γ⁺_{1,t}
and γ⁺_{2,t} associated with the seasonal frequencies π/2 and π, respectively. The variable γ⁻_{2,t} is
not considered since sin π = 0 and it follows from (3) that γ⁺_{2,t+1} = −γ⁺_{2,t} + ω⁺_{2,t}. The state space
formulation of the full model relies on time-invariant system matrices which are given by

    T = [ 1  1   0  0   0 ;
          0  1   0  0   0 ;
          0  0   0  1   0 ;
          0  0  −1  0   0 ;
          0  0   0  0  −1 ],      (12)

    H = [ 0  σ_η  0    0    0    0   ;
          0  0    σ_ζ  0    0    0   ;
          0  0    0    σ_ω  0    0   ;
          0  0    0    0    σ_ω  0   ;
          0  0    0    0    0    σ_ω ],      (13)

    Z = [ 1  0  1  0  1 ],   G = [ σ_ε  0  0  0  0  0 ].      (14)

The variances of the disturbances are unknown and need to be estimated. They are σ²_ε, σ²_η, σ²_ζ
and σ²_ω, which we transform to logs so that we can estimate them without constraints. These
unknown parameters are collected in the 4 × 1 vector ψ. The state vector has dimension 2 + s − 1.
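As a sketch, the system matrices (12)-(14) can be written down directly in Python/numpy (the standard deviations below are illustrative placeholders, not estimates); the result can be passed to a filter such as the kalman_loglik sketch given in section 4.1 above.

    import numpy as np

    sd_eps, sd_eta, sd_zeta, sd_omega = 0.03, 0.015, 0.001, 0.01   # illustrative values

    T = np.array([[1, 1,  0, 0,  0],
                  [0, 1,  0, 0,  0],
                  [0, 0,  0, 1,  0],
                  [0, 0, -1, 0,  0],
                  [0, 0,  0, 0, -1]], dtype=float)
    H = np.hstack([np.zeros((5, 1)), np.diag([sd_eta, sd_zeta, sd_omega, sd_omega, sd_omega])])
    Z = np.array([[1.0, 0.0, 1.0, 0.0, 1.0]])
    G = np.hstack([np.array([[sd_eps]]), np.zeros((1, 5))])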
4.3 State space formulation of periodic UC models
A periodic unobserved components time series model is represented by a multivariate state
space model with N = s. In the case of the periodic local linear trend model the state vector
is of size 2s, that is 8 for quarterly data, and is given by α_j = ( μ_{1,j}, β_{1,j}, ..., μ_{s,j}, β_{s,j} )′ for
j = 1, ..., J. The number of observations J for the periodic model will be approximately n/s
where n is the number of observations for the structural time series model. The system matrices
for the state space representation (7)-(8) are given by

    T = I_s ⊗ [ 1  1 ;
                0  1 ],   Z = I_s ⊗ [ 1  0 ],      (15)

where I_s is the s × s identity matrix and ⊗ denotes the Kronecker matrix product. The
dimension of the state vector therefore equals 2s in all periodic models under consideration in
this paper.
The specification of H and G varies depending on the type of periodic model. For example,
for the homogeneous model we have

    H = [ 0,  A ⊗ D ],   G = [ √q_ε A,  0 ],

where A is the s × s matrix obtained from the decomposition Σ = AA′ and D is the 2 × 2
diagonal matrix diag(√q_η, √q_ζ). Note that the zero matrices 0 in H and G are of dimension
2s × s and s × 2s, respectively.
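In numpy the Kronecker structure of (15) is immediate; a short sketch for quarterly data:

    import numpy as np

    s = 4
    T = np.kron(np.eye(s), np.array([[1.0, 1.0], [0.0, 1.0]]))   # 2s x 2s transition matrix of (15)
    Z = np.kron(np.eye(s), np.array([[1.0, 0.0]]))               # s x 2s measurement matrix of (15)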
The Kalman filter requires more computations when dealing with multivariate models compared
to univariate ones. Therefore we may expect that a periodic analysis is computationally more
involved than a standard analysis. Note however that the number of (vector) observations
in a periodic analysis is smaller than in a univariate analysis by a factor of approximately
1/s. Also the state vector will usually be larger for periodic models. However, the multivariate
measurement equation can be handled by univariate Kalman filtering methods as set out in
Koopman and Durbin (2000). This can lead to computational savings for periodic models and
therefore it is expected that the computational cost for a run of the Kalman filter for both
methods is comparable. Optimisation of the likelihood is of course much more time consuming
as more parameters have to be estimated nonlinearly; see also the discussion in section 3 above. In
applications it often turns out that some variance parameters are nearly zero and therefore
some stochastic components drop out of the model.
4.4 Diagnostic checking
The assumptions underlying a Gaussian model are that the disturbance vector ε_t is normally
distributed and serially independent with unity variance matrix. Under these assumptions the
standardised one-step prediction errors in the univariate structural time series model, defined
by

    e_t = v_t / √F_t,   t = 1, ..., n,      (16)

are also normally distributed and serially independent with unit variance. In the periodic
models we consider the separate seasonal subseries of standardised prediction errors, where the
index refers to years, j, instead of quarters:

    e_{p,j} = v_{p,j} / √F_{pp,j},   p = 1, ..., s,   j = 1, ..., J.      (17)
Other, truly multivariate diagnostics can be computed for the periodic models, see Harvey
(1989, 8.4.2), but we do not consider those in the current paper.
We can check that these properties hold by means of the following diagnostic tests, where
it must be understood that t refers to quarters in the structural time series model and t refers
to years for the periodic models.
Normality
The first four moments of the standardised prediction errors are given by

    m_1 = (1/n) Σ_{t=1}^{n} e_t,   m_q = (1/n) Σ_{t=1}^{n} (e_t − m_1)^q,   q = 2, 3, 4.
Skewness and kurtosis are denoted by S and K, respectively, and are defined as

    S = m_3 / √(m_2³),   K = m_4 / m_2²,

and it can be shown that when the model assumptions are valid they are asymptotically
normally distributed as

    S ~ N(0, 6/n),   K ~ N(3, 24/n);

see Bowman and Shenton (1975). Standard statistical tests can be used to check whether
the observed values of S and K are consistent with their asymptotic densities. They can
also be combined as

    N = n { S²/6 + (K − 3)²/24 },

which asymptotically has a χ² distribution with 2 degrees of freedom under the null hypothesis
that the normality assumption is valid. In practice, rejection of the normality
assumption is often due to a small number of extreme observations. In our application
below we treat outlying observations using simple outlier models. A different approach
would be to treat the extreme observations as missing, which would be more appropriate
in the context of sporadic measurement errors.
Heteroskedasticity
A simple test for heteroskedasticity is obtained by comparing the sums of squares of two
exclusive subsets of the sample. For example, the statistic

    H(h) = Σ_{t=n−h+1}^{n} e_t²  /  Σ_{t=1}^{h} e_t²,

where e_t is defined in (17), is F_{h,h}-distributed for some preset positive integer h, under
the null hypothesis of homoskedasticity.
Serial correlation
When the model holds, the standardised forecast errors are serially uncorrelated. Therefore,
the correlogram of the one-step prediction errors should reveal no serial correlation.
A standard portmanteau test statistic for serial correlation is based on the Box-Ljung
statistic; see Ljung and Box (1978). This is given by

    Q(p) = n(n + 2) Σ_{j=1}^{p} c_j² / (n − j),

for some preset positive integer p, where c_j is the j-th correlogram value

    c_j = ( 1 / (n m_2) ) Σ_{t=j+1}^{n} (e_t − m_1)(e_{t−j} − m_1).

This test is asymptotically χ² distributed with p degrees of freedom.
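The three diagnostics can be computed directly from the standardised prediction errors; a sketch in Python/numpy follows (as an assumption on the two exclusive subsets, the split used for H(h) takes the first and last h errors).

    import numpy as np

    def diagnostics(e, h, p):
        # normality N, heteroskedasticity H(h) and Box-Ljung Q(p) from standardised errors e
        e = np.asarray(e, dtype=float)
        n = len(e)
        m1 = e.mean()
        m2, m3, m4 = [((e - m1) ** q).mean() for q in (2, 3, 4)]
        S, K = m3 / np.sqrt(m2 ** 3), m4 / m2 ** 2
        N_stat = n * (S ** 2 / 6.0 + (K - 3.0) ** 2 / 24.0)
        H_stat = np.sum(e[n - h:] ** 2) / np.sum(e[:h] ** 2)
        c = np.array([np.sum((e[j:] - m1) * (e[:n - j] - m1)) / (n * m2) for j in range(1, p + 1)])
        Q_stat = n * (n + 2) * np.sum(c ** 2 / (n - np.arange(1, p + 1)))
        return N_stat, H_stat, Q_stat

    e = np.random.default_rng(2).standard_normal(100)
    print(diagnostics(e, h=33, p=6))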
4.5 Forecasting
Out-of-sample predictions, together with their mean squared errors, can be generated by the
Kalman filter by extending the data set y_1, ..., y_n with a set of missing values. When y_{n+k} is
missing, the Kalman filter step reduces to

    a_{n+k+1} = T_{n+k} a_{n+k},   P_{n+k+1} = T_{n+k} P_{n+k} T_{n+k}′ + H_{n+k} H_{n+k}′,

which are the state space forecasting equations for k = 1, ..., K where K is the forecast horizon;
see also the treatment of missing observations in the previous section. The multi-step forecast
of y_{n+k} is simply given by

    ŷ_{n+k} = Z_{n+k} a_{n+k},   Var( ŷ_{n+k} ) = Z_{n+k} P_{n+k} Z_{n+k}′,   k = 1, ..., K.
A sequence of missing values at the end of the sample will therefore produce a set of multi-step
forecasts and corresponding forecast error covariances.
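A sketch of this forecasting step (Python/numpy; it assumes that a and P are the filtered state mean and variance at time n + 1, for instance as returned by an extended version of the Kalman filter sketch in section 4.1, and that the system matrices are time-invariant):

    import numpy as np

    def forecast(a, P, T, Z, H, K_steps):
        # multi-step forecasts y_hat_{n+k} = Z a_{n+k} and their variances Z P_{n+k} Z'
        forecasts, variances = [], []
        for _ in range(K_steps):
            forecasts.append(Z @ a)
            variances.append(Z @ P @ Z.T)
            a, P = T @ a, T @ P @ T.T + H @ H.T      # prediction step, as for missing observations
        return np.array(forecasts), np.array(variances)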
5 An empirical illustration: Energy series
5.1 Data
[Figure 1: Energy series and decomposition. Panels show, for electricity, gas and coal over 1960-1986: observations and trend, seasonal, and irregular components.]
In the empirical analysis we consider three quarterly time series of energy consumption (in
millions of useful therms) of coal, gas and electricity in the UK economy between 1960 and 1986.
The n = 108 observations for each series are transformed to logs. The decomposition of the
logged energy series into trend, seasonal and irregular components is graphically represented
in Figure 1.
In Figure 2 the actual time series of electricity, gas and coal are presented as yearly series
for each quarter. The number of observations for each series equals the number of available
years, that is J = 27. The yearly time series graphs for each quarter are particularly useful for
identifying specific irregularities in the time series. For example, the outlier in quarter 1 (Q1)
of the electricity series, the outlier in Q3 of the gas series and the break in Q4 of the gas series
become immediately apparent.
[Figure 2: Energy yearly series for each quarter. Panels show the Q1-Q4 subseries for electricity, gas and coal, 1960-1986.]
5.2 Structural time series models
The decompositions presented in Figure 1 are based on the structural time series model (1) with
trigonometric specification for the seasonal component, with the unknown parameters estimated
by maximum likelihood using STAMP. The model diagnostics for these estimated models were
not entirely satisfactory, but after the inclusion of some interventions, which take account of
outliers (exceptional innovations in the irregular component, denoted out) and breaks in the
trends (exceptional innovations in the level component, denoted lvl), the estimated models
produced satisfactory diagnostics. The details of the estimated models and their diagnostics
are presented in Table 1.
5.3 Periodic models
In the analysis we have considered four different periodic models: (i) independent, (ii) homogeneous,
(iii) seemingly unrelated and (iv) common trend equations. The estimation results
of these models are presented in Tables 2 to 5. The diagonal elements of the estimated variance
matrices are reported in the tables together with the associated loglikelihood values of the estimated
models. Analysing the periodic series simultaneously turns out to be straightforward.
Fewer interventions are required, which leads to a simpler modelling procedure for time
series in practice. For example, in the case of the univariate structural time series model for
the gas series four interventions are required, while for the periodic models only one intervention
is required.
The estimated variance matrices of the homogeneous model are given by

    Σ̂_ε^elec = 10⁻⁴ [  8.88   .525   .505   .356 ;
                        4.77   9.28   .566   .022 ;
                        4.33   4.96   8.28   .537 ;
                        3.92   .250   5.71   13.7 ],

    Σ̂_ε^gas  = 10⁻³ [  2.97   .085   .244   .451 ;
                         .274   3.46   .313   .320 ;
                        1.22   1.69   8.42   .558 ;
                        2.50   1.92   5.21   10.35 ],

    Σ̂_ε^coal = 10⁻³ [ 11.1    .237   .101   .304 ;
                        2.24   8.03   .022   .049 ;
                        1.58   .299   22.2   .722 ;
                        3.72   .513   12.5   13.5 ],

with correlations given by the upper-triangular elements (not to be multiplied by 10⁻⁴ or 10⁻³).
The estimation details of the SUPSE model, additional to the results reported in Table 4,
are rather extensive and therefore omitted here.
The details of the estimated common trends model are as follows. The rank restrictions
apply to the variance matrices of the level disturbances η_t and the slope disturbances ζ_t, that
is r(Σ_η) = r_η and r(Σ_ζ) = r_ζ. We therefore can decompose these matrices by Σ_η = A_η D_η A_η′,
where A_η is an s × r_η lower unity triangular matrix and D_η is an r_η × r_η strictly positive diagonal
matrix. The same decomposition applies to Σ_ζ = A_ζ D_ζ A_ζ′, where A_ζ is an s × r_ζ matrix and
D_ζ is an r_ζ × r_ζ diagonal matrix.
We have considered two cases. Firstly, the variance matrices of the level and slope disturbances
are restricted to have rank r_η = r_ζ = 1. This implies that the error terms of the level
and slope are common to all seasons. In other words, the yearly trend components are the same
for all seasons apart from a periodically varying scaling factor. This restriction is a special case
of the model considered by Proietti (2001). The other case under investigation is the common
trend model with restrictions r_η = 2 and r_ζ = 1. This model allows the seasons to have more
different dynamic features. Other restrictions may also be considered and we plan to explore
them in future research.
The estimated matrices for the three energy series in the case of r_η = r_ζ = 1 are given by

    A_η^elec = ( 1, 1.92, .942, .619 )′,  D_η^elec = .02446,   A_ζ^elec = ( 1, .654, .844, 1.26 )′,  D_ζ^elec = .02144,
    A_η^gas  = ( 1, .805, 1.85, .506 )′,  D_η^gas  = .0330,    A_ζ^gas  = ( 1, .355, .437, .936 )′,  D_ζ^gas  = .0179,
    A_η^coal = ( 1, .550, .225, .559 )′,  D_η^coal = .0890,    A_ζ^coal = ( 1, .898, 1.51, .340 )′,  D_ζ^coal = .01175.
For all three series, the estimated models are adequate according to the standard diagnostic
test statistics.
The estimated matrices for the three energy series in the case of r_η = 2 and r_ζ = 1 are given by

    A_η^elec = [ 1  0 ;  0  1 ;  0  .01 ;  0  .60 ],           D_η^elec = diag(0, .0301),     A_ζ^elec = ( 1, .88, .84, .87 )′,  D_ζ^elec = .022,
    A_η^gas  = [ 1  0 ;  3.60  1 ;  1.33  .01 ;  1.42  .60 ],  D_η^gas  = diag(.015, .006),   A_ζ^gas  = ( 1, .45, .20, .85 )′,  D_ζ^gas  = .023,
    A_η^coal = [ 1  0 ;  .55  1 ;  .22  1.5 ;  .56  .48 ],     D_η^coal = diag(.089, .0026),  A_ζ^coal = ( 1, .92, 1.6, .33 )′,  D_ζ^coal = .011.
A particular conclusion from these estimation results is that in the case of the electricity and gas
series, the dynamic properties of the trend for the seasons Q2 and Q4 are different from the
properties implied by the common trend.
To compare the loglikelihood values of the various estimated models we need to take account
of the different numbers of parameters which are estimated, together with the different numbers
of elements in the state vector. The well-known Akaike information criterion (AIC) is used
as a measure that takes account of this. The loglikelihood values and the corresponding
AIC values are presented in Table 6. Within the class of periodic models, the common trends
model is best for all series. Note again that traditional likelihood ratio testing of common
trend models against seemingly unrelated periodic models using test statistics with asymptotic
chi squared distributions is not feasible.
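As a worked example of the criterion: for the structural time series model for electricity the loglikelihood is 301.21 with 9 state elements and 4 covariance parameters, so 301.21 − (9 + 4) = 288.21, the value reported in Table 6, which corresponds to −AIC/2 under the definition given in the footnote of that table.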
Since these results are from the first empirical study in which periodic and non-periodic
approaches within the class of UC models are compared, we feel that more research is required
to draw final conclusions about the various approaches and models. However, we find that
appropriate structural time series models do provide appropriate means for modelling seasonal
time series. Further, it is straightforward to estimate a range of periodic unobserved component
models and it has been easier to model data irregularities such as outliers in the periodic models.
5.4 Forecasting
Multi-step out-of-sample forecasts and corresponding root mean squared errors (RMSEs) for
the years 1987-1989 (12 quarters) based on the various estimated models are reported in Tables
7, 8 and 9 for the three energy series electricity, gas and coal, respectively.
It is noted that the RMSEs are functions of the parameter estimates. A small value of the
RMSE indicates that the model was able to give a precise prediction of future values based on
the information within the sample. If the true data generating process is nonperiodic, we expect
smaller RMSEs for the structural time series model than for the periodic models. The
structural model is updated quarterly and the parameter estimates are based on minimising
squared one-quarter ahead prediction errors, whereas the periodic models are updated yearly
and estimation is essentially based on minimising squared one-year ahead forecast errors.
For electricity and coal, the structural time series model produced the smallest values of
the RMSE for most horizons. For gas the periodic models produced the smaller RMSEs for all
horizons. These comparisons do not account for the different dimensions of the parameter and
state vectors.
6 Conclusion
In this paper we have investigated a number of periodic unobserved components time series
models and we have compared them with the standard univariate structural time series model.
The empirical results were based on series of energy consumption in the UK. We have not found
convincing empirical evidence that the periodic models produced better results in this study.
However, we have found that an interesting periodic analysis using ready-to-use software is a
viable extension to the standard approach. More empirical research is required to reach more
definitive conclusions.
In order to compare our results with periodic ARMA models, extensions with extra periodic
AR components and other extensions with larger state vectors are to be investigated. Another
topic for future research is the quarterly updating of the multivariate periodic unobserved
component models. This could lead to better forecasting performance compared with the yearly
updating employed in the current paper. It would also enable an interesting comparison of
actual out-of-sample forecast error performance of nonperiodic and periodic unobserved component
time series models. These extensions would require extended software, e.g. as implemented
in SsfPack by Koopman, Shephard, and Doornik (1999), and will not be as ready-to-use in the
near future as the methods discussed in this paper.
References
Anderson, B. D. O. and J. B. Moore (1979). Optimal Filtering. Englewood Cliffs: Prentice-Hall.
Doornik, J. A. (1998). Object-Oriented Matrix Programming using Ox 2.0. London: Timberlake Consultants Press.
Durbin, J. and S. J. Koopman (2001). Time Series Analysis by State Space Methods. Oxford: Oxford University Press.
Franses, P. H. (1996). Periodicity and Stochastic Trends in Economic Time Series. Oxford, UK: Oxford University Press.
Harvey, A. C. (1989). Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge: Cambridge University Press.
Harvey, A. C. (1993). Time Series Models (2nd ed.). Hemel Hempstead: Harvester Wheatsheaf.
Harvey, A. C. (2001). A unified approach to testing for stationarity and unit roots. Discussion paper, Faculty of Economics and Politics, University of Cambridge, UK.
Harvey, A. C. and S. J. Koopman (1997). Multivariate structural time series models. In C. Heij, H. Schumacher, B. Hanzon, and C. Praagman (Eds.), Systematic dynamics in economic and financial models, pp. 269-298. Chichester: John Wiley and Sons.
Harvey, A. C., S. J. Koopman, and J. Penzer (1998). Messy time series. In T. B. Fomby and R. C. Hill (Eds.), Advances in Econometrics, volume 13, pp. 103-143. New York: JAI Press.
Jones, R. H. (1980). Maximum likelihood fitting of ARIMA models to time series with missing observations. Technometrics 22, 389-395.
Kitagawa, G. and W. Gersch (1996). Smoothness Priors Analysis of Time Series. New York: Springer Verlag.
Koopman, S. J. (1997). Exact initial Kalman filtering and smoothing for non-stationary time series models. J. American Statistical Association 92, 1630-1638.
Koopman, S. J. and J. Durbin (2000). Fast filtering and smoothing for multivariate state space models. J. Time Series Analysis 21, 281-296.
Koopman, S. J., A. C. Harvey, J. A. Doornik, and N. Shephard (2000). Stamp 6.0: Structural Time Series Analyser, Modeller and Predictor. London: Timberlake Consultants.
Koopman, S. J., N. Shephard, and J. A. Doornik (1999). Statistical algorithms for models in state space form using SsfPack 2.2. Econometrics Journal 2, 113-166. http://www.ssfpack.com/.
Ljung, G. M. and G. E. P. Box (1978). On a measure of lack of fit in time series models. Biometrika 66, 67-72.
Ooms, M. and P. H. Franses (1997). On periodic correlations between estimated seasonal and nonseasonal components in German and U.S. unemployment. Journal of Business and Economic Statistics 15, 470-481.
Ooms, M. and P. H. Franses (2001). A seasonal periodic long memory model for monthly river flows. Environmental Modelling & Software 16, 559-569.
Osborn, D. R. (1991). The implications of periodically varying coefficients for seasonal time-series processes. Journal of Econometrics 48, 373-384.
Osborn, D. R. and J. P. Smith (1989). The performance of periodic autoregressive models in forecasting seasonal UK consumption. Journal of Business and Economic Statistics 7, 117-127.
Proietti, T. (2001). Seasonal specific structural models. Mimeo, Department of Statistics, University of Udine, Italy.
Schweppe, F. (1965). Evaluation of likelihood functions for Gaussian signals. IEEE Transactions on Information Theory 11, 61-70.
Tiao, G. C. and M. R. Grupe (1980). Hidden periodic autoregressive-moving average models in time series data. Biometrika 67, 365-373.
Young, P. C. (1984). Recursive Estimation and Time Series Analysis. New York: Springer-Verlag.
Zellner, A. (1962). An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias. Journal of the American Statistical Association 57, 346-368.
Table 1: Estimation results of structural time series models for quarterly data, 1960-1986

  Model basic       Electric              Gas                   Coal
  estimates         σ²         q          σ²         q          σ²         q
  σ²_ε              .000889    1          .001617    1          .016454    1
  σ²_η              .000211    .237       0          0          .000749    .0455
  σ²_ζ              1.71×10⁻⁶  .002       7.48×10⁻⁶  .0046      2.75×10⁻⁶  .0002
  σ²_ω              .000141    .159       .000841    .5201      0          0
  log-likelihood    283.761               229.293               188.015
  N                 70.547                58.329                6.699
  H(34)             .430                  2.999                 1.511
  Q(6)              14.964                9.188                 2.642

  Model interv      Electric              Gas                   Coal
  estimates         σ²         q          σ²         q          σ²         q
  σ²_ε              0          0          .000104    .1897      .012370    1
  σ²_η              .000302    1          .000248    .4514      .000169    .0137
  σ²_ζ              7.10×10⁻⁷  .0024      5.40×10⁻⁶  .0098      5.46×10⁻⁶  .0004
  σ²_ω              .000109    .363       .000549    1          0          0
  interventions     type       |t|-value  type       |t|-value  type       |t|-value
  1                 out 67.2   3.425      out 70.3   7.571      out 75.1   3.395
  2                 out 68.4   3.329      out 70.4   6.352      out 84.3   3.868
  3                                       out 74.1   9.146      out 84.4   4.157
  4                                       out 74.2   3.661
  5                                       lvl 75.1   4.907
  log-likelihood    301.210               251.799               198.046
  N                 4.488                 4.397                 2.271
  H(34)             .778                  3.046                 .635
  Q(6)              8.140                 8.516                 2.324

Parameter estimates are reported together with their signal-to-noise ratios (q); the loglikelihood value of the estimated model is reported together with the diagnostic test statistics discussed in section 4.4.
Table 2: Estimation results of periodic independent models, 4 seasonal subseries of yearly data, 1960-1986

                    Electric              Gas                   Coal
  estimates Q1      σ²         q          σ²         q          σ²         q
  σ²_ε              .000563    .447       .001813    1          .005957    .4983
  σ²_η              .001260    1          0          0          .011954    1
  σ²_ζ              .000123    .0972      .000492    .2716      0          0
  N                 .04                   1.16                  .81
  Q(6)              12.1                  6.48                  2.3

  estimates Q2      σ²         q          σ²         q          σ²         q
  σ²_ε              .000129    .0879      .001371    .3827      .009230    1
  σ²_η              .001464    1          .003581    1          4.34×10⁻⁷  .0
  σ²_ζ              8.47×10⁻⁵  .0578      0          0          .000442    .0479
  N                 3.22                  3.91                  1.53
  Q(6)              3.69                  4.14                  7.84

  estimates Q3      σ²         q          σ²         q          σ²         q
  σ²_ε              .000541    1          0          0          .028554    1
  σ²_η              .000183    .3388      .003161    1          0          0
  σ²_ζ              .000199    .3675      .000431    .1364      .000297    .0104
  N                 2.04                  .31                   .625
  Q(6)              2.86                  .821                  8.36

  estimates Q4      σ²         q          σ²         q          σ²         q
  σ²_ε              0          0          .000430    .1067      .017988    1
  σ²_η              .003000    1          .004031    1          .001943    .1080
  σ²_ζ              9.86×10⁻⁵  .0329      4.48×10⁻⁵  .0111      0          0
  N                 3.05                  3.79                  .532
  Q(6)              14.5                  6.96                  2.05

  interventions     type       |t|-value  type       |t|-value
  1                 out 72.1   5.589      out 70.3   9.751
  2                 out 68.2   3.624      lvl 71.4   8.085
  3                 out 75.3   3.910
  4                 out 68.4   2.528
  log-likelihood    282.558               252.926               174.762

Parameter estimates are reported together with their signal-to-noise ratios (q); the loglikelihood value of the estimated model is reported together with the diagnostic test statistics discussed in section 4.4.
Table 3: Estimation results of periodic homogeneous models, 4 seasonal subseries of yearly data, 1960-1986

                    Electric              Gas                   Coal
  estimates Q1      σ²         q          σ²         q          σ²         q
  σ²_ε              .000888    .7380      .000604    .2031      .011067    1
  σ²_η              .001203    1          .002973    1          .005738    .5185
  σ²_ζ              3.26×10⁻⁵  .0271      0          0          4.31×10⁻⁵  .0039
  N                 .268                  10.4                  1.77
  Q(6)              11.4                  8.32                  4.16

  estimates Q2      σ²         q          σ²         q          σ²         q
  σ²_ε              .000928    .7380      .000703    .2031      .008032    1
  σ²_η              .001258    1          .003461    1          .004165    .5185
  σ²_ζ              3.41×10⁻⁵  .0271      0          0          3.13×10⁻⁵  .0039
  N                 .268                  6.23                  2.31
  Q(6)              3.52                  3.45                  7.75

  estimates Q3      σ²         q          σ²         q          σ²         q
  σ²_ε              .000828    .7380      .001710    .2031      .022231    1
  σ²_η              .001121    1          .008418    1          .011527    .5185
  σ²_ζ              3.04×10⁻⁵  .0271      0          0          8.66×10⁻⁵  .0039
  N                 6.78                  6.77                  .406
  Q(6)              2.18                  2.68                  9.41

  estimates Q4      σ²         q          σ²         q          σ²         q
  σ²_ε              .001366    .7380      .002102    .2031      .013485    1
  σ²_η              .001850    1          .010351    1          .006992    .5185
  σ²_ζ              5.01×10⁻⁵  .0271      0          0          5.25×10⁻⁵  .0039
  N                 12.7                  15.1                  4.09
  Q(6)              7.90                  7.02                  3.27

  interventions     type       |t|-value  type       |t|-value
  1                 out 74.1   6.424      out 70.3   3.766
  2                                       lvl 71.4   3.094
  log-likelihood    293.29                242.39                183.83

Parameter estimates are reported together with their signal-to-noise ratios (q); the loglikelihood value of the estimated model is reported together with the diagnostic test statistics discussed in section 4.4.
Table 4: Estimation results of seemingly unrelated periodic models, SUPSE, 4 seasonal subseries of yearly data, 1960-1986

                    Electric              Gas                   Coal
  estimates Q1      σ²         q          σ²         q          σ²         q
  σ²_ε              .000541    .4399      .001374    2.443      .008329    1
  σ²_η              .001232    1          .000562    1          .007889    .9471
  σ²_ζ              9.85×10⁻⁵  .0800      .000466    .8293      .000133    .0160
  N                 1.97                  .192                  .933
  Q(6)              15.5                  7.43                  3.30

  estimates Q2      σ²         q          σ²         q          σ²         q
  σ²_ε              .000329    .1672      .001180    .3708      .008584    1
  σ²_η              .001967    1          .003181    1          .002347    .2734
  σ²_ζ              5.82×10⁻⁵  .0296      .000100    .0315      .000110    .0128
  N                 5.09                  8.25                  3.75
  Q(6)              4.00                  5.15                  10.0

  estimates Q3      σ²         q          σ²         q          σ²         q
  σ²_ε              .000778    1.035      .000262    .1264      .026177    1
  σ²_η              .000752    1          .002075    1          .000360    .0138
  σ²_ζ              7.16×10⁻⁵  .0953      .000717    .3455      .000320    .0122
  N                 .152                  .402                  1.43
  Q(6)              2.53                  1.96                  12.6

  estimates Q4      σ²         q          σ²         q          σ²         q
  σ²_ε              .000336    .1135      .001445    1.017      .015653    1
  σ²_η              .002960    1          .001421    1          .002444    .1561
  σ²_ζ              .000113    .0381      .003070    .2160      1.55×10⁻⁵  .0010
  N                 11.1                  1.17                  5.66
  Q(6)              10.1                  5.87                  3.20

  interventions     type       |t|-value  type       |t|-value
  1                 out 74.1   7.271      out 70.3   10.87
  2                                       lvl 71.4   11.06
  log-likelihood    305.874               264.320               190.410

Parameter estimates are reported together with their signal-to-noise ratios (q); the loglikelihood value of the estimated model is reported together with the diagnostic test statistics discussed in section 4.4.
Table 5: Estimation results of periodic common trend models with r_η = 2 and r_ζ = 1, 4 seasonal subseries of yearly data, 1960-1986

                    Electric              Gas                   Coal
  estimates Q1      σ²         q          σ²         q          σ²         q
  σ²_ε              .000902    1          .001522    6.946      .008313    1
  σ²_η              0          0          .000219    1          .007923    .9530
  σ²_ζ              .000486    .5385      .000539    2.462      .000127    .0153
  N                 4.10                  .178                  .944
  Q(6)              19.6                  5.47                  3.30

  estimates Q2      σ²         q          σ²         q          σ²         q
  σ²_ε              .000650    1          .001400    .4869      .008581    1
  σ²_η              .000904    1.391      .002875    1          .002361    .2751
  σ²_ζ              .000373    .5738      .000107    .0373      .000108    .0125
  N                 3.32                  7.14                  3.73
  Q(6)              5.39                  4.95                  10.0

  estimates Q3      σ²         q          σ²         q          σ²         q
  σ²_ε              .000961    1          4.64×10⁻⁵  .0118      .026142    1
  σ²_η              6.96×10⁻⁷  .0007      .003932    1          .000389    .0149
  σ²_ζ              .000347    .3608      2.19×10⁻⁵  .0056      .000320    .0122
  N                 .351                  1.71                  1.43
  Q(6)              2.88                  2.22                  12.6

  estimates Q4      σ²         q          σ²         q          σ²         q
  σ²_ε              .000473    1          .002049    4.613      .015653    1
  σ²_η              .002483    5.249      .004442    1          .002456    .1569
  σ²_ζ              .000370    .7833      .003865    .8700      1.43×10⁻⁵  .0009
  N                 10.8                  2.70                  5.65
  Q(6)              11.0                  7.75                  3.19

  interventions     type       |t|-value  type       |t|-value
  1                 out 74.1   7.303      out 70.3   9.324
  2                                       lvl 71.4   11.30
  log-likelihood    304.736               263.140               190.408

Parameter estimates are reported together with their signal-to-noise ratios (q); the loglikelihood value of the estimated model is reported together with the diagnostic test statistics discussed in section 4.4.
Table 6: Comparison between various models

  models             # state (E/G/C)  # par   loglikelihood                −½AIC
                                              Elec     Gas      Coal       Elec     Gas      Coal
  stsm               9/5/6            4       301.21   251.80   198.05     288.21*  241.80*  189.05*
  periodic:
  indep              12/10/8          12      282.56   252.93   174.76     258.56   230.93   154.76
  homog              9/10/8           12      293.29   242.39   183.83     272.29   220.39   163.83
  SUPSE              9/10/8           30      305.87   264.32   190.41     266.87   224.32   152.41
  r_η = r_ζ = 1      9/10/8           18      303.81   262.58   190.40     276.81   234.14   164.40
  r_η = 2, r_ζ = 1   9/10/8           21      304.74   263.14   190.41     274.74   232.14   161.41

AIC indicates the Akaike information criterion: AIC = −2 loglikelihood + 2 (number of states (including interventions) and unknown covariance parameters). An asterisk indicates the smallest AIC among the different models.
Table 7: Multi-step out-of-sample Electricity forecasts based on various time series models
period str mod indep per homog per SUPSE common trend
k F RMSE F RMSE F RMSE F RMSE F RMSE
87.1 1 6.52 .0434* 6.51 .0548 6.51 .0560 6.52 .0487 6.54 .0507
87.2 2 6.31 .0450* 6.32 .0466 6.30 .0573 6.31 .0531 6.32 .0544
87.3 3 6.26 .0477 6.27 .0436* 6.25 .0541 6.25 .0459 6.27 .0469
87.4 4 6.44 .0484* 6.43 .0560 6.43 .0695 6.43 .0634 6.44 .0645
88.1 5 6.57 .0673* 6.55 .0780 6.54 .0724 6.56 .0701 6.60 .0764
88.2 6 6.35 .0689* 6.36 .0705 6.34 .0740 6.35 .0757 6.38 .0807
88.3 7 6.30 .0716 6.34 .0629 6.30 .0699 6.30 .0624* 6.34 .0674
88.4 8 6.48 .0725* 6.47 .0921 6.46 .0898 6.47 .0932 6.50 .0967
89.1 9 6.62 .0882* 6.59 .1026 6.58 .0889 6.60 .0925 6.66 .1101
89.2 10 6.40 .0901* 6.41 .0940 6.39 .0909 6.39 .0968 6.44 .1116
89.3 11 6.35 .0929 6.41 .0865 6.36 .0858 6.36 .0803* 6.41 .0950
89.4 12 6.53 .0940* 6.50 .1219 6.50 .1103 6.51 .1214 6.56 .1304
Actual forecast of y_{n+k} (F) with model based root mean squared error (RMSE) for the years 1987-1989. An asterisk indicates the smallest RMSE among competitive models.
Table 8: Multi-step out-of-sample Gas forecasts based on various time series models
period str mod indep per homog per SUPSE common trend
k F RMSE F RMSE F RMSE F RMSE F RMSE
87.1 1 7.16 .0786 7.12 .0717 7.14 .0647 7.15 .0644* 7.13 .0647
87.2 2 6.49 .0786 6.46 .0790 6.47 .0703* 6.47 .0728 6.47 .0733
87.3 3 5.94 .0802 5.97 .0676 5.88 .1097 5.99 .0658 5.90 .0649*
87.4 4 6.75 .0804 6.72 .0734 6.73 .1217 6.71 .0705* 6.71 .0718
88.1 5 7.25 .1181 7.18 .0992 7.21 .0872* 7.21 .0934 7.19 .0931
88.2 6 6.57 .1184 6.52 .1017 6.53 .0941* 6.53 .0982 6.53 .0965
88.3 7 6.02 .1210 6.09 .1112 5.93 .1468 6.13 .1104 5.96 .0936*
88.4 8 6.83 .1219 6.77 .1045 6.79 .1628 6.75 .0962 6.76 .0948*
89.1 9 7.33 .1551 7.25 .1345 7.29 .1058* 7.28 .1291 7.26 .1294
89.2 10 6.66 .1560 6.58 .1213 6.59 .1142* 6.58 .1222 6.59 .1190
89.3 11 6.11 .1597 6.21 .1558 5.99 .1781 6.28 .1594 6.01 .1173*
89.4 12 6.92 .1615 6.82 .1322 6.84 .1975 6.79 .1257 6.81 .1243*
Actual forecast of y_{n+k} (F) with model based root mean squared error (RMSE) for the years 1987-1989. An asterisk indicates the smallest RMSE among models.
Table 9: Multi-step out-of-sample Coal forecasts based on various time series models
period str mod indep per homog per SUPSE common trend
k F RMSE F RMSE F RMSE F RMSE F RMSE
87.1 1 4.21 .1269* 4.10 .1522 4.11 .1562 4.18 .1459 4.18 .1459
87.2 2 3.82 .1299 3.80 .1342 3.73 .1330 3.77 .1191 3.77 .1191*
87.3 3 3.52 .1333* 3.55 .2120 3.59 .2213 3.60 .1911 3.60 .1911
87.4 4 4.16 .1371* 3.91 .1617 3.95 .1724 3.94 .1470 3.94 .1470
88.1 5 4.19 .1425* 4.04 .1920 4.06 .1813 4.17 .1788 4.17 .1787
88.2 6 3.80 .1472 3.78 .1596 3.68 .1544 3.76 .1364 3.76 .1363*
88.3 7 3.50 .1524* 3.55 .2321 3.55 .2569 3.61 .2077 3.61 .2077
88.4 8 4.14 .1579 3.85 .1699 3.89 .2001 3.90 .1574* 3.90 .1575
89.1 9 4.17 .1651* 3.98 .2270 4.01 .2064 4.17 .2110 4.16 .2107
89.2 10 3.78 .1715 3.77 .1923 3.64 .1758 3.74 .1557 3.74 .1555*
89.3 11 3.48 .1783* 3.55 .2577 3.52 .2925 3.63 .2300 3.63 .2301
89.4 12 4.12 .1854 3.79 .1782 3.83 .2278 3.86 .1683 3.85 .1683*
Actual forecast of y_{n+k} (F) with model based root mean squared error (RMSE) for the years 1987-1989. An asterisk indicates the smallest RMSE among models.