y_t = \mu_t + \gamma_t + \varepsilon_t, \qquad t = 1, \ldots, n, \qquad (1)

where $\mu_t$, $\gamma_t$ and $\varepsilon_t$ represent trend, seasonal and irregular components respectively. The trend
and seasonal components are modelled by linear dynamic stochastic processes which depend on
disturbances. The components are formulated flexibly and they are allowed to change over time
rather than being deterministic. The disturbances driving the components are independent of
each other. The definitions of the components are given below, but a full explanation of the
underlying rationale can be found in Harvey (1989, chapter 2). The effectiveness of structural
time series models compared to ARIMA type models, especially when messy features in time
series are present, is shown in Harvey, Koopman, and Penzer (1998).
The trend component can be defined as

\mu_t = \mu_{t-1} + \beta_{t-1} + \eta_t, \qquad \eta_t \sim NID(0, \sigma^2_\eta),
\beta_t = \beta_{t-1} + \zeta_t, \qquad \zeta_t \sim NID(0, \sigma^2_\zeta), \qquad (2)
where the level and slope disturbances, $\eta_t$ and $\zeta_t$, are mutually uncorrelated. When $\sigma^2_\zeta$ is zero,
we have a random walk plus drift, and when $\sigma^2_\eta$ is also zero the trend reduces to a deterministic
linear trend. The trigonometric seasonal component is defined as

\gamma_t = \sum_{j=1}^{[s/2]} \gamma^{+}_{j,t},

where

\begin{pmatrix} \gamma^{+}_{j,t+1} \\ \gamma^{-}_{j,t+1} \end{pmatrix}
= \begin{pmatrix} \cos \lambda_j & \sin \lambda_j \\ -\sin \lambda_j & \cos \lambda_j \end{pmatrix}
\begin{pmatrix} \gamma^{+}_{j,t} \\ \gamma^{-}_{j,t} \end{pmatrix}
+ \begin{pmatrix} \omega^{+}_{j,t} \\ \omega^{-}_{j,t} \end{pmatrix}, \qquad (3)

with $\lambda_j = 2\pi j/s$ as the j-th seasonal frequency and

\begin{pmatrix} \omega^{+}_{j,t} \\ \omega^{-}_{j,t} \end{pmatrix} \sim NID\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \sigma^2_\omega I_2 \right), \qquad j = 1, \ldots, [s/2].
Note that for $s$ even $[s/2] = s/2$, while for $s$ odd, $[s/2] = (s-1)/2$. For $s$ even, the process
$\gamma^{-}_{j,t}$, with $j = s/2$, can be dropped. The state space representation is straightforward and the
initial conditions are $\gamma^{+}_{j,1} \sim N(0, \kappa)$ and $\gamma^{-}_{j,1} \sim N(0, \kappa)$, for $j = 1, \ldots, [s/2]$. We have assumed
that the variance $\sigma^2_\omega$ is the same for all trigonometric terms. However, we can impose different
variances for the terms associated with different frequencies; in the quarterly case ($s = 4$) we
can estimate two different variances $\sigma^2_{\omega,1}$ and $\sigma^2_{\omega,2}$.
In the periodic analysis the observations for year $j$ are collected in the $s \times 1$ vector

y_j = (y_{j,1}, \ldots, y_{j,s})', \qquad j = 1, \ldots, J.
3.1 Independent periodic models

A simple periodic approach is to consider independent local linear trend models for the s yearly
time series, that is

y_{j,p} = \mu_{j,p} + \varepsilon_{j,p},
\mu_{j+1,p} = \mu_{j,p} + \beta_{j,p} + \eta_{j,p}, \qquad p = 1, \ldots, s,
\beta_{j+1,p} = \beta_{j,p} + \zeta_{j,p}, \qquad j = 1, \ldots, J, \qquad (4)

where the disturbances $\varepsilon_{j,p}$, $\eta_{j,p}$ and $\zeta_{j,p}$ are serially and mutually uncorrelated over both indices.
Effectively, s separate models are introduced which can be represented as a multivariate model
by

y_j = \mu_j + \varepsilon_j, \qquad \varepsilon_j \sim N(0, \Sigma_\varepsilon),
\mu_{j+1} = \mu_j + \beta_j + \eta_j, \qquad \eta_j \sim N(0, \Sigma_\eta),
\beta_{j+1} = \beta_j + \zeta_j, \qquad \zeta_j \sim N(0, \Sigma_\zeta), \qquad j = 1, \ldots, J, \qquad (5)
where the variance matrices $\Sigma_\varepsilon$, $\Sigma_\eta$ and $\Sigma_\zeta$ are diagonal. The subscript $j$ refers
to the stack of the seasonal values $x_{j,1}, \ldots, x_{j,s}$, that is

x_j = (x_{j,1}, \ldots, x_{j,s})',

where $x$ can refer to the variables $\mu$, $\beta$, $\varepsilon$, $\eta$ and $\zeta$. The dimension of the state vector is $2s$.
A simple example of an independent periodic model is the well known seasonal random
walk, extended with periodically varying variances, where $\Sigma_\varepsilon = \Sigma_\zeta = 0$ and the diagonal
elements of $\Sigma_\eta$ give the innovations variance for each season of the year. A combination of
random walk behaviour for three subseries with one white noise subseries is also a possible
independent periodic model. In the general case, the number of unknown variance parameters
for this model adds up to 3s parameters which is 12 in the quarterly case. The variance
parameters can be estimated separately for each subseries.
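The seasonal random walk with periodically varying variances can be simulated directly from its defining recursion. In the sketch below the per-season standard deviations are hypothetical values chosen for illustration:

```python
import numpy as np

def simulate_seasonal_rw(J, sigmas, seed=0):
    """Seasonal random walk over years j = 1, ..., J with season-specific
    innovation standard deviations sigmas[p], p = 1, ..., s."""
    rng = np.random.default_rng(seed)
    s = len(sigmas)
    y = np.zeros((J, s))                 # row j holds the s seasonal values of year j
    for j in range(1, J):
        # each subseries follows its own random walk with its own variance
        y[j] = y[j - 1] + np.asarray(sigmas) * rng.standard_normal(s)
    return y

y = simulate_seasonal_rw(27, sigmas=[0.1, 0.2, 0.1, 0.3])  # J = 27 years, s = 4
```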
3.2 Homogeneous periodic models

Multivariate homogeneous structural time series models are discussed in Harvey (1989, section
8.3). For the case of the local linear trend model these are obtained by replacing the diagonal
variance matrix restrictions for model (5) with the homogeneity restrictions

\Sigma_\varepsilon = q_\varepsilon \Sigma, \qquad \Sigma_\eta = q_\eta \Sigma, \qquad \Sigma_\zeta = q_\zeta \Sigma. \qquad (6)

The dimension of the state vector still equals 2s. The variance matrices of the disturbances of
the different components are therefore equal up to the scalar factors $q_\varepsilon$, $q_\eta$ and $q_\zeta$.
Rank restrictions on the variance matrices $\Sigma_\eta$ and $\Sigma_\zeta$ lead to the notion of common
trends, which can be interpreted as stochastic cointegration for a vector of time series. For
periodic unobserved component models, common trends can occur as common stochastic slopes,
common stochastic levels, or as combinations of both.
Suppose the variance matrices $\Sigma_\eta$ and $\Sigma_\zeta$ are restricted to have reduced ranks. The number
of variance parameters then adds up to

r_\eta \left(1 + s - \tfrac{1}{2}(r_\eta + 1)\right) + r_\zeta \left(1 + s - \tfrac{1}{2}(r_\zeta + 1)\right) + \tfrac{1}{2} s(s + 1),

where $r_\eta$ and $r_\zeta$ denote the ranks of $\Sigma_\eta$ and $\Sigma_\zeta$, respectively.
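A quick way to verify this count is to code the formula directly. In the quarterly case the full-rank choice $r_\eta = r_\zeta = s = 4$ should reduce to three unrestricted $4 \times 4$ symmetric variance matrices, i.e. $3 \times 10 = 30$ free parameters:

```python
def n_free_params(s, r_eta, r_zeta):
    """Free variance parameters: two reduced-rank s x s matrices of ranks
    r_eta and r_zeta plus one full s x s symmetric matrix."""
    # r*(1 + s - (r+1)/2) equals s*r - r*(r-1)/2, the parameter count of a
    # rank-r s x s covariance matrix
    rank_part = lambda r: r * (1 + s - (r + 1) / 2)
    return rank_part(r_eta) + rank_part(r_zeta) + s * (s + 1) / 2

# full-rank case: three unrestricted 4 x 4 variance matrices
assert n_free_params(4, 4, 4) == 30.0
```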
\alpha_{t+1} = T_t \alpha_t + H_t \varepsilon_t, \qquad \alpha_1 \sim N(a, P), \qquad t = 1, \ldots, n, \qquad (7)
y_t = Z_t \alpha_t + G_t \varepsilon_t, \qquad \varepsilon_t \sim NID(0, I), \qquad (8)

where $NID(\mu, \Sigma)$ indicates an independent sequence of normally distributed random vectors
with mean $\mu$ and variance matrix $\Sigma$, and, similarly, $N(\mu, \Sigma)$ indicates a normally distributed
variable. The N observations at time $t$ are placed in the vector $y_t$ and the $N \times n$ data matrix is
given by $(y_1, \ldots, y_n)$. The structural time series model has a univariate measurement equation:
N = 1. The periodic model is based on a multivariate measurement equation: N = s. In
the following we use a single time index t which may refer to a time scale measured either in
quarters or in years.
The $m \times 1$ state vector $\alpha_t$ contains unobserved stochastic processes and unknown fixed
effects. The state equation (7) has a Markovian structure which is an effective way to describe
the serial correlation structure of the time series $y_t$. The initial state vector is assumed to be
random with mean a and variance matrix P but some elements of the state can be diffuse, which
means that they have mean zero and variance $\kappa$ where $\kappa$ is large. The measurement equation (8)
relates the observation vector $y_t$ to the state vector $\alpha_t$ through the signal $Z_t \alpha_t$ and the vector of
disturbances $\varepsilon_t$. The deterministic matrices $T_t$, $Z_t$, $H_t$ and $G_t$ are referred to as system matrices
and they usually are sparse selection matrices. When the system matrices are constant over
time, we drop the time-indices to obtain the matrices T, Z, H and G. The resulting state space
form is referred to as time-invariant.
4.1 Diffuse Kalman filter and log likelihood evaluation

The Kalman filter is a recursive algorithm for the evaluation of moments of the normal distri-
bution of the state vector $\alpha_{t+1}$ conditional on the data set $Y_t = \{y_1, \ldots, y_t\}$, that is

a_{t+1} = E(\alpha_{t+1} \mid Y_t), \qquad P_{t+1} = \mathrm{cov}(\alpha_{t+1} \mid Y_t),
for t = 1, . . . , n; see Anderson and Moore (1979, page 36) and Harvey (1989, page 104). The
Kalman filter is given by

v_t = y_t - Z_t a_t,
F_t = Z_t P_t Z_t' + G_t G_t',
K_t = (T_t P_t Z_t' + H_t G_t') F_t^{-1},
a_{t+1} = T_t a_t + K_t v_t,
P_{t+1} = T_t P_t T_t' + H_t H_t' - K_t F_t K_t', \qquad (9)

for $t = 1, \ldots, n$, and where $a_1 = a$ and $P_1 = P$. The vector $v_t$ is the innovation and $F_t$ is its
variance. $K_t$ is the Kalman gain: the derivative of the forecast function for the state with
respect to the current innovation. The variance matrix P is given by

P = P_* + \kappa P_\infty,

where $\kappa$ is large; for example, $\kappa = 10^7$. The matrix $P_\infty$ is a diagonal
matrix with unity for nonstationary and deterministic elements of the state and zero elsewhere.
The number of diffuse elements, equal to the rank of $P_\infty$, is given by d.
The Kalman filter is used to compute the Gaussian log-likelihood function via the prediction
error decomposition for models in state space form; see Schweppe (1965), Jones (1980) and
Harvey (1989, section 3.4). The log-likelihood function is given by

\ell = \log p(y_1, \ldots, y_n; \psi) = \sum_{t=1}^{n} \log p(y_t \mid y_1, \ldots, y_{t-1}; \psi)
    = -\frac{nN}{2} \log(2\pi) - \frac{1}{2} \sum_{t=1}^{n} \left( \log |F_t| + v_t' F_t^{-1} v_t \right), \qquad (10)

where $\psi$ is the vector of parameters for a specific statistical model represented in state space
form (9). The innovations $v_t$ and their variances $F_t$ are computed by the Kalman filter for
a given vector $\psi$. Note that the summation in (10) is from 1 to n, but usually the first d
summations will be approximately zero as $F_t^{-1}$ will be very small for $t = 1, \ldots, d$. For more
details on diffuse initialisation, see Koopman (1997).
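The recursion (9) and the likelihood (10) translate almost line-by-line into code. The sketch below handles a time-invariant model only, and approximates the diffuse part of P with a large $\kappa$ rather than the exact diffuse initialisation of Koopman (1997); it is an illustration, not a production implementation:

```python
import numpy as np

def kalman_loglik(y, T, Z, H, G, a1, P1):
    """Kalman filter (9) with prediction-error log-likelihood (10),
    time-invariant system matrices."""
    n, N = y.shape
    a, P = a1.astype(float).copy(), P1.astype(float).copy()
    loglik = -0.5 * n * N * np.log(2.0 * np.pi)
    for t in range(n):
        v = y[t] - Z @ a                          # innovation v_t
        F = Z @ P @ Z.T + G @ G.T                 # innovation variance F_t
        Finv = np.linalg.inv(F)
        K = (T @ P @ Z.T + H @ G.T) @ Finv        # Kalman gain K_t
        loglik -= 0.5 * (np.log(np.linalg.det(F)) + v @ Finv @ v)
        a = T @ a + K @ v                         # a_{t+1}
        P = T @ P @ T.T + H @ H.T - K @ F @ K.T   # P_{t+1}
    return loglik

# local level example: columns of H and G select level and irregular disturbances
rng = np.random.default_rng(1)
y = rng.standard_normal((50, 1)).cumsum(axis=0)
ll = kalman_loglik(y, T=np.eye(1), Z=np.eye(1),
                   H=np.array([[0.5, 0.0]]), G=np.array([[0.0, 1.0]]),
                   a1=np.zeros(1), P1=np.eye(1) * 1e7)   # kappa = 1e7 diffuse prior
```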
When observations $y_t$ for $t = \tau, \ldots, \tau^* - 1$ are missing, the Kalman filter reduces to

a_{t+1} = T_t a_t, \qquad P_{t+1} = T_t P_t T_t' + H_t H_t', \qquad t = \tau, \ldots, \tau^* - 1. \qquad (11)

This simple treatment of missing observations is one of the attractions of the state space
methods for time series analysis. The same principle can be used to generate forecasts and
forecast root mean squared errors as we discuss below.
4.2 State space formulation of seasonal UC models

The unobserved components time series model (1) for quarterly series (s = 4) with trend,
seasonal and irregular components requires a state vector of five elements which is given by
$\alpha_t = (\mu_t, \beta_t, \gamma^{+}_{1,t}, \gamma^{-}_{1,t}, \gamma^{+}_{2,t})'$. The model specifications for trend and seasonal are given by (2)
and (3), respectively. The seasonal component $\gamma_t$ is the sum of two trigonometric variables
$\gamma^{+}_{1,t}$ and $\gamma^{+}_{2,t}$ associated with the seasonal frequencies $\lambda_1 = \tfrac{1}{2}\pi$ and $\lambda_2 = \pi$, respectively. The variable $\gamma^{-}_{2,t}$ is
not considered since $\sin \pi = 0$ and it follows from (3) that $\gamma^{+}_{2,t+1} = -\gamma^{+}_{2,t} + \omega^{+}_{2,t}$. The state space
formulation of the full model relies on time-invariant system matrices which are given by
T = \begin{pmatrix}
1 & 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & -1
\end{pmatrix}, \qquad (12)

H = \begin{pmatrix}
0 & \sigma_\eta & 0 & 0 & 0 & 0 \\
0 & 0 & \sigma_\zeta & 0 & 0 & 0 \\
0 & 0 & 0 & \sigma_\omega & 0 & 0 \\
0 & 0 & 0 & 0 & \sigma_\omega & 0 \\
0 & 0 & 0 & 0 & 0 & \sigma_\omega
\end{pmatrix}, \qquad (13)

Z = \begin{pmatrix} 1 & 0 & 1 & 0 & 1 \end{pmatrix}, \qquad
G = \begin{pmatrix} \sigma_\varepsilon & 0 & 0 & 0 & 0 & 0 \end{pmatrix}. \qquad (14)
The variances of the disturbances are unknown and need to be estimated. They are $\sigma^2_\varepsilon$, $\sigma^2_\eta$,
$\sigma^2_\zeta$ and $\sigma^2_\omega$, which we transform to logs so that we can estimate them without constraints. These
unknown parameters are collected in the $4 \times 1$ vector $\psi$. The state vector has dimension $2 + s - 1$.
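These system matrices can be assembled directly in code. The layout of H and G below (one column per disturbance, with the standard deviations on the block diagonal) is one natural reading of (12)-(14) as reconstructed above, not the paper's verbatim implementation:

```python
import numpy as np

def quarterly_uc_system(sig_eps, sig_eta, sig_zeta, sig_omega):
    """System matrices for the quarterly UC model;
    state = (mu, beta, gamma1+, gamma1-, gamma2+)."""
    T = np.array([[1., 1., 0., 0., 0.],
                  [0., 1., 0., 0., 0.],
                  [0., 0., 0., 1., 0.],    # rotation at lambda_1 = pi/2:
                  [0., 0., -1., 0., 0.],   # cos = 0, sin = 1
                  [0., 0., 0., 0., -1.]])  # lambda_2 = pi: cos = -1
    Z = np.array([[1., 0., 1., 0., 1.]])   # signal = mu + gamma1+ + gamma2+
    H = np.hstack([np.zeros((5, 1)),       # first disturbance is the irregular
                   np.diag([sig_eta, sig_zeta, sig_omega, sig_omega, sig_omega])])
    G = np.array([[sig_eps, 0., 0., 0., 0., 0.]])
    return T, Z, H, G

T, Z, H, G = quarterly_uc_system(0.1, 0.2, 0.3, 0.4)
```

A useful check is that the seasonal block of T has period four: applying it four times returns the identity.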
4.3 State space formulation of periodic UC models

A periodic unobserved components time series model is represented by a multivariate state
space model with N = s. In the case of the periodic local linear trend model the state vector
is of size 2s, that is 8 for quarterly data, and is given by $\alpha_j = (\mu_{1,j}, \beta_{1,j}, \ldots, \mu_{s,j}, \beta_{s,j})'$ for
$j = 1, \ldots, J$. The number of observations J for the periodic model will be approximately n/s
where n is the number of observations for the structural time series model. The system matrices
for the state space representation (7)-(8) are given by

T = I_s \otimes \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad
Z = I_s \otimes \begin{pmatrix} 1 & 0 \end{pmatrix}, \qquad (15)

where $I_s$ is the $s \times s$ identity matrix and $\otimes$ denotes the Kronecker matrix product. The
dimension of the state vector therefore equals 2s in all periodic models under consideration in
this paper.
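In code the Kronecker structure of (15) is one line per matrix:

```python
import numpy as np

s = 4  # quarterly data
# one local linear trend block per season
T = np.kron(np.eye(s), np.array([[1., 1.], [0., 1.]]))
# each seasonal subseries observes its own level
Z = np.kron(np.eye(s), np.array([[1., 0.]]))
```

With the state ordered as $(\mu_1, \beta_1, \ldots, \mu_s, \beta_s)'$, multiplying Z into a state vector picks out the s level elements.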
The specifications for H and G vary depending on the type of periodic model. For example,
for the homogeneous model we have

H = [0, A] \otimes D, \qquad G = [A, 0, 0, 0, 0],

where A is the $s \times s$ matrix obtained from the decomposition $\Sigma = AA'$ and D is the $2 \times 2$
diagonal matrix $\mathrm{diag}(\sqrt{q_\eta}, \sqrt{q_\zeta})$.

4.4 Diagnostics

When the model is correctly specified, the standardised prediction errors

e_t = v_t / \sqrt{F_t}, \qquad t = 1, \ldots, n, \qquad (16)
are also normally distributed and serially independent with unit variance. In the periodic
models we consider the separate seasonal subseries of standardised prediction errors, where the
index t refers to years, j, instead of quarters:

e_{p,j} = v_{p,j} / \sqrt{F_{pp,j}}, \qquad p = 1, \ldots, s, \qquad j = 1, \ldots, J. \qquad (17)
Other genuinely multivariate diagnostics can be computed for the periodic models; see Harvey
(1989, section 8.4.2), but we do not consider those in the current paper.
We can check that these properties hold by means of the following diagnostic tests, where
it must be understood that t refers to quarters in the structural time series model and t refers
to years for the periodic models.
Normality

The first four moments of the standardised prediction errors are given by

m_1 = \frac{1}{n} \sum_{t=1}^{n} e_t, \qquad
m_q = \frac{1}{n} \sum_{t=1}^{n} (e_t - m_1)^q, \qquad q = 2, 3, 4.
Skewness and kurtosis are denoted by S and K, respectively, and are defined as

S = \frac{m_3}{\sqrt{m_2^3}}, \qquad K = \frac{m_4}{m_2^2},

and it can be shown that when the model assumptions are valid they are asymptotically
normally distributed as

S \sim N\left(0, \tfrac{6}{n}\right), \qquad K \sim N\left(3, \tfrac{24}{n}\right);

see Bowman and Shenton (1975). Standard statistical tests can be used to check whether
the observed values of S and K are consistent with their asymptotic densities. They can
also be combined as

N = n \left\{ \frac{S^2}{6} + \frac{(K-3)^2}{24} \right\},

which asymptotically has a $\chi^2$ distribution with 2 degrees of freedom under the null hy-
pothesis that the normality assumption is valid. In practice, rejection of the normality
assumption is often due to a small number of extreme observations. In our application
below we treat outlying observations using simple outlier models. A different approach
would be to treat the extreme observations as missing, which would be more appropriate
in the context of sporadic measurement errors.
Heteroskedasticity

A simple test for heteroskedasticity is obtained by comparing the sums of squares of two
exclusive subsets of the sample. For example, the statistic

H(h) = \frac{\sum_{t=n-h+1}^{n} e_t^2}{\sum_{t=1}^{h} e_t^2},

where $e_t$ is defined in (17), is $F_{h,h}$-distributed for some preset positive integer h, under
the null hypothesis of homoskedasticity.
Serial correlation

When the model holds, the standardised forecast errors are serially uncorrelated. There-
fore, the correlogram of the one-step prediction errors should reveal no serial correlation.
A standard portmanteau test statistic for serial correlation is based on the Box-Ljung
statistic; see Ljung and Box (1978). This is given by

Q(p) = n(n+2) \sum_{j=1}^{p} \frac{c_j^2}{n-j},

for some preset positive integer p where $c_j$ is the j-th correlogram value

c_j = \frac{1}{n m_2} \sum_{t=j+1}^{n} (e_t - m_1)(e_{t-j} - m_1).

This test is asymptotically $\chi^2$ distributed with p degrees of freedom.
4.5 Forecasting

Out-of-sample predictions, together with their mean squared errors, can be generated by the
Kalman filter by extending the data set $y_1, \ldots, y_n$ with a set of missing values. When $y_{n+k}$ is
missing, the Kalman filter step reduces to

a_{n+k+1} = T_{n+k} a_{n+k}, \qquad P_{n+k+1} = T_{n+k} P_{n+k} T_{n+k}' + H_{n+k} H_{n+k}',

which are the state space forecasting equations for $k = 1, \ldots, K$ where K is the forecast horizon;
see also the treatment of missing observations in the previous section. The multi-step forecast
of $y_{n+k}$ is simply given by

\hat{y}_{n+k} = Z_{n+k} a_{n+k}, \qquad \mathrm{Var}(\hat{y}_{n+k}) = Z_{n+k} P_{n+k} Z_{n+k}', \qquad k = 1, \ldots, K.

A sequence of missing values at the end of the sample will therefore produce a set of multi-step
forecasts and corresponding forecast error covariances.
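The forecasting equations amount to running the filter with the updating terms switched off. The random-walk example below (with hypothetical values for the filtered state) shows the forecast staying at the last estimated level while its variance grows linearly with the horizon:

```python
import numpy as np

def forecast(a, P, T, Z, H, K_steps):
    """Multi-step forecasts from a filtered state, treating future
    observations as missing (time-invariant system matrices)."""
    a, P = a.copy(), P.copy()
    out = []
    for _ in range(K_steps):
        out.append((Z @ a, Z @ P @ Z.T))   # forecast of y and its variance
        a = T @ a                          # state forecasting equations
        P = T @ P @ T.T + H @ H.T
    return out

# random walk: T = Z = H = 1, filtered level 2.0 with variance 1.0
f = forecast(np.array([2.0]), np.array([[1.0]]),
             np.eye(1), np.eye(1), np.eye(1), K_steps=3)
```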
5 An empirical illustration: Energy series
5.1 Data
Figure 1: Energy series and decomposition
[Figure: for each of electricity, gas and coal, three panels plot (i) the observations and trend, (ii) the seasonal component and (iii) the irregular component over the sample period.]
In the empirical analysis we consider three quarterly time series of energy consumption (in
millions of useful therms) of coal, gas and electricity in the UK economy between 1960 and 1986.
The n = 108 observations for each series are transformed to logs. The decomposition of the
logged energy series into trend, seasonal and irregular components is graphically represented
in Figure 1.
In Figure 2 the actual time series of electricity, gas and coal are presented as yearly series
for each quarter. The number of observations for each series equals the number of available
years, that is J = 27. The yearly time series graphs for each quarter are particularly useful for
identifying specific irregularities in the time series. For example, the outlier in quarter 1 (Q1)
of the electricity series, the outlier in Q3 of the gas series and the break in Q4 of the gas series
become immediately apparent.
Figure 2: Energy yearly series for each quarter
[Figure: yearly subseries for quarters Q1-Q4 of the electricity, gas and coal series.]
5.2 Structural time series models

The decompositions presented in Figure 1 are based on the structural time series model (1) with
trigonometric specification for the seasonal component, with the unknown parameters estimated
by maximum likelihood using STAMP. The model diagnostics for these estimated models were
not entirely satisfactory, but after the inclusion of some interventions, which take account of
outliers (exceptional innovations in the irregular component, denoted out) and breaks in the
trends (exceptional innovations in the level component, denoted lvl), the estimated models
produced satisfactory diagnostics. The details of the estimated models and their diagnostics
are presented in Table 1.
5.3 Periodic models

In the analysis we have considered the four different periodic models: (i) independent, (ii) ho-
mogeneous, (iii) seemingly unrelated and (iv) common trend equations. The estimation results
of these models are presented in Tables 2 to 5. The diagonal elements of the estimated variance
matrices are reported in the tables together with the associated loglikelihood values of the es-
timated models. Analysing the periodic series simultaneously turns out to be straightforward.
Fewer interventions are required, which leads to a simpler procedure for modelling time
series in practice. For example, in the case of the univariate structural time series model for
the gas series four interventions are required while for the periodic models only one intervention
is required.
The estimated variance matrices of the homogeneous model are given by

\hat{\Sigma}_{elec} = 10^{-4} \begin{pmatrix}
8.88 & .525 & .505 & .356 \\
4.77 & 9.28 & .566 & .022 \\
4.33 & 4.96 & 8.28 & .537 \\
3.92 & .250 & 5.71 & 13.7
\end{pmatrix}, \qquad
\hat{\Sigma}_{gas} = 10^{-3} \begin{pmatrix}
2.97 & .085 & .244 & .451 \\
.274 & 3.46 & .313 & .320 \\
1.22 & 1.69 & 8.42 & .558 \\
2.50 & 1.92 & 5.21 & 10.35
\end{pmatrix},

\hat{\Sigma}_{coal} = 10^{-3} \begin{pmatrix}
11.1 & .237 & .101 & .304 \\
2.24 & 8.03 & .022 & .049 \\
1.58 & .299 & 22.2 & .722 \\
3.72 & .513 & 12.5 & 13.5
\end{pmatrix},

with correlations given by the upper-triangular elements (not to be multiplied by $10^{-4}$ or $10^{-3}$).
The estimation details of the SUPSE model, additional to the results reported in Table 4,
are rather extensive and therefore omitted here.
The details of the estimated common trends model are as follows. The rank restrictions
apply to the variance matrices of the level disturbances $\eta_t$ and the slope disturbances $\zeta_t$, that
is $r(\Sigma_\eta) = r_\eta$ and $r(\Sigma_\zeta) = r_\zeta$. The reduced-rank matrices are decomposed as
$\Sigma_\eta = A_\eta D_\eta A_\eta'$, where $A_\eta$ is a $s \times r_\eta$ matrix and $D_\eta$ is a $r_\eta \times r_\eta$ diagonal matrix,
and $\Sigma_\zeta = A_\zeta D_\zeta A_\zeta'$, where $A_\zeta$ is a $s \times r_\zeta$ matrix and $D_\zeta$ is a $r_\zeta \times r_\zeta$ diagonal matrix.
We have considered two cases: firstly $r_\eta = r_\zeta = 1$, and secondly $r_\eta = 2$ and $r_\zeta = 1$. The
estimated matrices for the case $r_\eta = r_\zeta = 1$ are given by
\hat{A}_{\eta,elec} = \begin{pmatrix} 1 \\ 1.92 \\ .942 \\ .619 \end{pmatrix}, \; \hat{D}_{\eta,elec} = .02446, \qquad
\hat{A}_{\zeta,elec} = \begin{pmatrix} 1 \\ .654 \\ .844 \\ 1.26 \end{pmatrix}, \; \hat{D}_{\zeta,elec} = .02144,

\hat{A}_{\eta,gas} = \begin{pmatrix} 1 \\ .805 \\ 1.85 \\ .506 \end{pmatrix}, \; \hat{D}_{\eta,gas} = .0330, \qquad
\hat{A}_{\zeta,gas} = \begin{pmatrix} 1 \\ .355 \\ .437 \\ .936 \end{pmatrix}, \; \hat{D}_{\zeta,gas} = .0179,

\hat{A}_{\eta,coal} = \begin{pmatrix} 1 \\ .550 \\ .225 \\ .559 \end{pmatrix}, \; \hat{D}_{\eta,coal} = .0890, \qquad
\hat{A}_{\zeta,coal} = \begin{pmatrix} 1 \\ .898 \\ 1.51 \\ .340 \end{pmatrix}, \; \hat{D}_{\zeta,coal} = .01175.
For all three series, the estimated models are adequate according to the standard diagnostic
test statistics.
The estimated matrices for the three energy series in the case of $r_\eta = 2$ and $r_\zeta = 1$ are given
by
\hat{A}_{\eta,elec} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & .01 \\ 0 & .60 \end{pmatrix}, \; \hat{D}_{\eta,elec} = \mathrm{diag}(0, .0301), \qquad
\hat{A}_{\zeta,elec} = \begin{pmatrix} 1 \\ .88 \\ .84 \\ .87 \end{pmatrix}, \; \hat{D}_{\zeta,elec} = .022,

\hat{A}_{\eta,gas} = \begin{pmatrix} 1 & 0 \\ 3.60 & 1 \\ 1.33 & .01 \\ 1.42 & .60 \end{pmatrix}, \; \hat{D}_{\eta,gas} = \mathrm{diag}(.015, .006), \qquad
\hat{A}_{\zeta,gas} = \begin{pmatrix} 1 \\ .45 \\ .20 \\ .85 \end{pmatrix}, \; \hat{D}_{\zeta,gas} = .023,

\hat{A}_{\eta,coal} = \begin{pmatrix} 1 & 0 \\ .55 & 1 \\ .22 & 1.5 \\ .56 & .48 \end{pmatrix}, \; \hat{D}_{\eta,coal} = \mathrm{diag}(.089, .0026), \qquad
\hat{A}_{\zeta,coal} = \begin{pmatrix} 1 \\ .92 \\ 1.6 \\ .33 \end{pmatrix}, \; \hat{D}_{\zeta,coal} = .011.
A particular conclusion from these estimation results is that in the case of the electricity and gas
series, the dynamic properties of the trend for the seasons Q2 and Q4 are different from the
properties implied by the common trend.
To compare the loglikelihood values of the various estimated models we need to take account
of the different numbers of parameters which are estimated, together with the different numbers
of elements in the state vector. The well-known Akaike information criterion (AIC) is used
as a measure that takes account of this. The loglikelihood values and the corresponding
AIC values are presented in Table 6. Within the class of periodic models, the common trends
model is best for all series. Note again that traditional likelihood ratio testing of common
trend models against the seemingly unrelated periodic model using test statistics with asymptotic
chi-squared distributions is not feasible.
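The paper does not spell out its exact AIC definition; one conventional state-space variant penalises both the estimated parameters and the (diffuse) state elements and normalises by the sample size. The function below assumes that definition and is only an illustrative sketch:

```python
def aic(loglik, n_params, n_state, n_obs):
    """An assumed state-space AIC: -2*loglik plus a penalty for the
    number of estimated parameters and state elements, per observation."""
    return (-2.0 * loglik + 2.0 * (n_params + n_state)) / n_obs

# lower is better: compare two hypothetical fitted models on the same data
assert aic(-250.0, 4, 5, 108) < aic(-260.0, 4, 5, 108)
```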
Since these results are from the first empirical study in which periodic and non-periodic
approaches within the class of UC models are compared, we feel that more research is required
to make final conclusions about the various approaches and models. However, we find that
appropriate structural time series models do provide appropriate means for modelling seasonal
time series. Further, it is straightforward to estimate a range of periodic unobserved component
models, and it has been easier to model data irregularities such as outliers in the periodic models.
5.4 Forecasting

Multi-step out-of-sample forecasts and corresponding root mean squared errors (RMSEs) for
the years 1987-1989 (12 quarters) based on the various estimated models are reported in Tables
7, 8 and 9 for the three energy series electricity, gas and coal, respectively.
It is noted that the RMSEs are functions of the parameter estimates. A small value for the
RMSE indicates that the model was able to give a precise prediction of future values based on
the information within the sample. If the true data generating process is nonperiodic, we expect
smaller RMSEs for the structural time series model than for the periodic models. The
structural model is updated quarterly and the parameter estimates are based on minimising
squared one-quarter ahead prediction errors, whereas the periodic models are updated yearly
and estimation is essentially based on minimising squared one-year ahead forecast errors.
For electricity and coal, the structural time series model produced the smallest values of
the RMSE for most horizons. For gas the periodic models produced the smaller RMSEs for all
horizons. These comparisons do not account for the different dimensions of the parameter and
state vectors.
6 Conclusion

In this paper we have investigated a number of periodic unobserved components time series
models and we have compared them with the standard univariate structural time series model.
The empirical results were based on series of energy consumption in the UK. We have not found
convincing empirical evidence that the periodic models produced better results in this study.
However, we have found that an interesting periodic analysis using ready-to-use software is a
viable extension to the standard approach. More empirical research is required to make more
definitive conclusions.
In order to compare our results with periodic ARMA models, extensions with extra periodic
AR components and other extensions with larger state vectors are to be investigated. Another
topic for future research is the quarterly updating of the multivariate periodic unobserved
component models. This could lead to better forecasting performance compared with the yearly
updating employed in the current paper. It would also enable an interesting comparison of
actual out-of-sample forecast error performance of nonperiodic and periodic unobserved compo-
nent time series models. These extensions would require extended software, e.g. implemented
in SsfPack by Koopman, Shephard, and Doornik (1999), and will not be as ready-to-use in the
near future as the methods discussed in this paper.
References

Anderson, B. D. O. and J. B. Moore (1979). Optimal Filtering. Englewood Cliffs: Prentice-Hall.
Doornik, J. A. (1998). Object-Oriented Matrix Programming using Ox 2.0. London: Timberlake Consultants Press.
Durbin, J. and S. J. Koopman (2001). Time Series Analysis by State Space Methods. Oxford: Oxford University Press.
Franses, P. H. (1996). Periodicity and Stochastic Trends in Economic Time Series. Oxford, UK: Oxford University Press.
Harvey, A. C. (1989). Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge: Cambridge University Press.
Harvey, A. C. (1993). Time Series Models (2nd ed.). Hemel Hempstead: Harvester Wheatsheaf.
Harvey, A. C. (2001). A unified approach to testing for stationarity and unit roots. Discussion paper, Faculty of Economics and Politics, University of Cambridge, UK.
Harvey, A. C. and S. J. Koopman (1997). Multivariate structural time series models. In C. Heij, H. Schumacher, B. Hanzon, and C. Praagman (Eds.), Systematic dynamics in economic and financial models, pp. 269-98. Chichester: John Wiley and Sons.
Harvey, A. C., S. J. Koopman, and J. Penzer (1998). Messy time series. In T. B. Fomby and R. C. Hill (Eds.), Advances in Econometrics, volume 13, pp. 103-43. New York: JAI Press.
Jones, R. H. (1980). Maximum likelihood fitting of ARIMA models to time series with missing observations. Technometrics 22, 389-95.
Kitagawa, G. and W. Gersch (1996). Smoothness Priors Analysis of Time Series. New York: Springer Verlag.
Koopman, S. J. (1997). Exact initial Kalman filtering and smoothing for non-stationary time series models. J. American Statistical Association 92, 1630-8.
Koopman, S. J. and J. Durbin (2000). Fast filtering and smoothing for multivariate state space models. J. Time Series Analysis 21, 281-96.
Koopman, S. J., A. C. Harvey, J. A. Doornik, and N. Shephard (2000). Stamp 6.0: Structural Time Series Analyser, Modeller and Predictor. London: Timberlake Consultants.
Koopman, S. J., N. Shephard, and J. A. Doornik (1999). Statistical algorithms for models in state space form using SsfPack 2.2. Econometrics Journal 2, 113-66. http://www.ssfpack.com/.
Ljung, G. M. and G. E. P. Box (1978). On a measure of lack of fit in time series models. Biometrika 65, 297-303.
Ooms, M. and P. H. Franses (1997). On periodic correlations between estimated seasonal and nonseasonal components in German and U.S. unemployment. Journal of Business and Economic Statistics 15, 470-481.
Ooms, M. and P. H. Franses (2001). A seasonal periodic long memory model for monthly river flows. Environmental Modelling & Software 16, 559-569.
Osborn, D. R. (1991). The implications of periodically varying coefficients for seasonal time-series processes. Journal of Econometrics 48, 373-384.
Osborn, D. R. and J. P. Smith (1989). The performance of periodic autoregressive models in forecasting seasonal UK consumption. Journal of Business and Economic Statistics 7, 117-127.
Proietti, T. (2001). Seasonal specific structural models. Mimeo, Department of Statistics, University of Udine, Italy.
Schweppe, F. (1965). Evaluation of likelihood functions for Gaussian signals. IEEE Transactions on Information Theory 11, 61-70.
Tiao, G. C. and M. R. Grupe (1980). Hidden periodic autoregressive-moving average models in time series data. Biometrika 67, 365-373.
Young, P. C. (1984). Recursive Estimation and Time Series Analysis. New York: Springer-Verlag.
Zellner, A. (1962). An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias. Journal of the American Statistical Association 57, 346-368.
Table 1: Estimation results of structural time series models for quarterly data, 1960-1986

Model basic                Electric               Gas                    Coal
estimates                  estimate     ratio     estimate     ratio     estimate     ratio
σ²_η (level)               1.71×10^-6   .002      7.48×10^-6   .0046     2.75×10^-6   .0002
σ²_ζ (slope)               7.10×10^-7   .0024     5.40×10^-6   .0098     5.46×10^-6   .0004
σ²_ε (irregular)           .001260      1         0            0         .011954      1
σ²_ω (seasonal)            8.47×10^-5   .0578     0            0         .000442      .0479
N                          3.22                   3.91                   1.53
Q(6)                       3.69                   4.14                   7.84
estimates Q3
                           .000541      1         0            0         .028554      1
                           9.86×10^-5   .0329     4.48×10^-5   .0111     0            0
N                          3.05                   3.79                   .532
Q(6)                       14.5                   6.96                   2.05
interventions              type   date   |t|      type   date   |t|
1                          out    72.1   5.589    out    70.3   9.751
2                          out    68.2   3.624    lvl    71.4   8.085
3                          out    75.3   3.910
4                          out    68.4   2.528
log-likelihood             282.558                252.926                174.762

Parameter estimates are reported together with their signal-to-noise ratios; the loglikelihood value of the esti-
mated model is reported together with the diagnostic test statistics discussed in section 4.4.
Table 3: Estimation results of periodic homogeneous models, 4 seasonal subseries of yearly
data, 1960-1986

                 Electric               Gas                    Coal
                 estimate     ratio     estimate     ratio     estimate     ratio
estimates Q1     3.26×10^-5   .0271     0            0         4.31×10^-5   .0039
N                .268                   10.4                   1.77
Q(6)             11.4                   8.32                   4.16
estimates Q2     3.41×10^-5   .0271     0            0         3.13×10^-5   .0039
N                .268                   6.23                   2.31
Q(6)             3.52                   3.45                   7.75
estimates Q3     3.04×10^-5   .0271     0            0         8.66×10^-5   .0039
N                6.78                   6.77                   .406
Q(6)             2.18                   2.68                   9.41
estimates Q4     5.01×10^-5   .0271     0            0         5.25×10^-5   .0039
N                12.7                   15.1                   4.09
Q(6)             7.90                   7.02                   3.27
interventions    type   date   |t|      type   date   |t|
1                out    74.1   6.424    out    70.3   3.766
2                                       lvl    71.4   3.094
log-likelihood   293.29                 242.39                 183.83

Parameter estimates are reported together with their signal-to-noise ratios; the loglikelihood value of the esti-
mated model is reported together with the diagnostic test statistics discussed in section 4.4.
Table 4: Estimation results of seemingly unrelated periodic models, SUPSE, 4 seasonal subseries
of yearly data, 1960-1986

                 Electric               Gas                    Coal
                 estimate     ratio     estimate     ratio     estimate     ratio
estimates Q1     9.85×10^-5   .0800     .000466      .8293     .000133      .0160
N                1.97                   .192                   .933
Q(6)             15.5                   7.43                   3.30
estimates Q2     5.82×10^-5   .0296     .000100      .0315     .000110      .0128
N                5.09                   8.25                   3.75
Q(6)             4.00                   5.15                   10.0
estimates Q3     7.16×10^-5   .0953     .000717      .3455     .000320      .0122
N                .152                   .402                   1.43
Q(6)             2.53                   1.96                   12.6
estimates Q4
Table 5: Estimation results of periodic common trends models with $r_\eta = 2$ and $r_\zeta = 1$, 4 seasonal
subseries of yearly data, 1960-1986

                 Electric               Gas                    Coal
                 estimate     ratio     estimate     ratio     estimate     ratio
estimates Q1     .000961      1         4.64×10^-5   0.0118    .026142      1
                 6.96×10^-7   .0007     .003932      1         .000389      .0149