
STATA TIME-SERIES

REFERENCE MANUAL
RELEASE 14

A Stata Press Publication


StataCorp LP
College Station, Texas

Copyright © 1985–2015 StataCorp LP
All rights reserved
Version 14

Published by Stata Press, 4905 Lakeway Drive, College Station, Texas 77845
Typeset in TeX
ISBN-10: 1-59718-169-2
ISBN-13: 978-1-59718-169-3
This manual is protected by copyright. All rights are reserved. No part of this manual may be reproduced, stored
in a retrieval system, or transcribed, in any form or by any means (electronic, mechanical, photocopy, recording, or
otherwise) without the prior written permission of StataCorp LP unless permitted subject to the terms and conditions
of a license granted to you by StataCorp LP to use the software and documentation. No license, express or implied,
by estoppel or otherwise, to any intellectual property rights is granted by this document.
StataCorp provides this manual "as is" without warranty of any kind, either expressed or implied, including, but
not limited to, the implied warranties of merchantability and fitness for a particular purpose. StataCorp may make
improvements and/or changes in the product(s) and the program(s) described in this manual at any time and without
notice.
The software described in this manual is furnished under a license agreement or nondisclosure agreement. The software
may be copied only in accordance with the terms of the agreement. It is against the law to copy the software onto
DVD, CD, disk, diskette, tape, or any other medium for any purpose other than backup or archival purposes.
The automobile dataset appearing on the accompanying media is Copyright © 1979 by Consumers Union of U.S.,
Inc., Yonkers, NY 10703-1057 and is reproduced by permission from CONSUMER REPORTS, April 1979.
Stata, Stata Press, Mata, and NetCourse are registered trademarks of StataCorp LP.

Stata and Stata Press are registered trademarks with the World Intellectual Property Organization of the United Nations.
NetCourseNow is a trademark of StataCorp LP.
Other brand and product names are registered trademarks or trademarks of their respective companies.
For copyright information about the software, type help copyright within Stata.

The suggested citation for this software is


StataCorp. 2015. Stata: Release 14. Statistical Software. College Station, TX: StataCorp LP.

Contents

intro                            Introduction to time-series manual
time series                      Introduction to time-series commands

arch                             Autoregressive conditional heteroskedasticity (ARCH) family of estimators
arch postestimation              Postestimation tools for arch
arfima                           Autoregressive fractionally integrated moving-average models
arfima postestimation            Postestimation tools for arfima
arima                            ARIMA, ARMAX, and other dynamic regression models
arima postestimation             Postestimation tools for arima
corrgram                         Tabulate and graph autocorrelations
cumsp                            Cumulative spectral distribution
dfactor                          Dynamic-factor models
dfactor postestimation           Postestimation tools for dfactor
dfgls                            DF-GLS unit-root test
dfuller                          Augmented Dickey–Fuller unit-root test

estat acplot                     Plot parametric autocorrelation and autocovariance functions
estat aroots                     Check the stability condition of ARIMA estimates
estat sbknown                    Test for a structural break with a known break date
estat sbsingle                   Test for a structural break with an unknown break date

fcast compute                    Compute dynamic forecasts after var, svar, or vec
fcast graph                      Graph forecasts after fcast compute
forecast                         Econometric model forecasting
forecast adjust                  Adjust a variable by add factoring, replacing, etc.
forecast clear                   Clear current model from memory
forecast coefvector              Specify an equation via a coefficient vector
forecast create                  Create a new forecast model
forecast describe                Describe features of the forecast model
forecast drop                    Drop forecast variables
forecast estimates               Add estimation results to a forecast model
forecast exogenous               Declare exogenous variables
forecast identity                Add an identity to a forecast model
forecast list                    List forecast commands composing current model
forecast query                   Check whether a forecast model has been started
forecast solve                   Obtain static and dynamic forecasts

irf                              Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
irf add                          Add results from an IRF file to the active IRF file
irf cgraph                       Combined graphs of IRFs, dynamic-multiplier functions, and FEVDs
irf create                       Obtain IRFs, dynamic-multiplier functions, and FEVDs
irf ctable                       Combined tables of IRFs, dynamic-multiplier functions, and FEVDs
irf describe                     Describe an IRF file
irf drop                         Drop IRF results from the active IRF file
irf graph                        Graphs of IRFs, dynamic-multiplier functions, and FEVDs
irf ograph                       Overlaid graphs of IRFs, dynamic-multiplier functions, and FEVDs
irf rename                       Rename an IRF result in an IRF file
irf set                          Set the active IRF file
irf table                        Tables of IRFs, dynamic-multiplier functions, and FEVDs

mgarch                           Multivariate GARCH models
mgarch ccc                       Constant conditional correlation multivariate GARCH models
mgarch ccc postestimation        Postestimation tools for mgarch ccc
mgarch dcc                       Dynamic conditional correlation multivariate GARCH models
mgarch dcc postestimation        Postestimation tools for mgarch dcc
mgarch dvech                     Diagonal vech multivariate GARCH models
mgarch dvech postestimation      Postestimation tools for mgarch dvech
mgarch vcc                       Varying conditional correlation multivariate GARCH models
mgarch vcc postestimation        Postestimation tools for mgarch vcc
mswitch                          Markov-switching regression models
mswitch postestimation           Postestimation tools for mswitch

newey                            Regression with Newey–West standard errors
newey postestimation             Postestimation tools for newey

pergram                          Periodogram
pperron                          Phillips–Perron unit-root test
prais                            Prais–Winsten and Cochrane–Orcutt regression
prais postestimation             Postestimation tools for prais
psdensity                        Parametric spectral density estimation after arima, arfima, and ucm

rolling                          Rolling-window and recursive estimation

sspace                           State-space models
sspace postestimation            Postestimation tools for sspace

tsappend                         Add observations to a time-series dataset
tsfill                           Fill in gaps in time variable
tsfilter                         Filter a time series, keeping only selected periodicities
tsfilter bk                      Baxter–King time-series filter
tsfilter bw                      Butterworth time-series filter
tsfilter cf                      Christiano–Fitzgerald time-series filter
tsfilter hp                      Hodrick–Prescott time-series filter
tsline                           Plot time-series data
tsreport                         Report time-series aspects of a dataset or estimation sample
tsrevar                          Time-series operator programming command
tsset                            Declare data to be time-series data
tssmooth                         Smooth and forecast univariate time-series data
tssmooth dexponential            Double-exponential smoothing
tssmooth exponential             Single-exponential smoothing
tssmooth hwinters                Holt–Winters nonseasonal smoothing
tssmooth ma                      Moving-average filter
tssmooth nl                      Nonlinear filter
tssmooth shwinters               Holt–Winters seasonal smoothing

ucm                              Unobserved-components model
ucm postestimation               Postestimation tools for ucm

var intro                        Introduction to vector autoregressive models
var                              Vector autoregressive models
var postestimation               Postestimation tools for var
var svar                         Structural vector autoregressive models
var svar postestimation          Postestimation tools for svar
varbasic                         Fit a simple VAR and graph IRFs or FEVDs
varbasic postestimation          Postestimation tools for varbasic
vargranger                       Perform pairwise Granger causality tests after var or svar
varlmar                          Perform LM test for residual autocorrelation after var or svar
varnorm                          Test for normally distributed disturbances after var or svar
varsoc                           Obtain lag-order selection statistics for VARs and VECMs
varstable                        Check the stability condition of VAR or SVAR estimates
varwle                           Obtain Wald lag-exclusion statistics after var or svar
vec intro                        Introduction to vector error-correction models
vec                              Vector error-correction models
vec postestimation               Postestimation tools for vec
veclmar                          Perform LM test for residual autocorrelation after vec
vecnorm                          Test for normally distributed disturbances after vec
vecrank                          Estimate the cointegrating rank of a VECM
vecstable                        Check the stability condition of VECM estimates

wntestb                          Bartlett's periodogram-based test for white noise
wntestq                          Portmanteau (Q) test for white noise
xcorr                            Cross-correlogram for bivariate time series

Glossary

Subject and author index

Cross-referencing the documentation


When reading this manual, you will find references to other Stata manuals. For example,
[U] 26 Overview of Stata estimation commands
[R] regress
[D] reshape

The first example is a reference to chapter 26, Overview of Stata estimation commands, in the User's
Guide; the second is a reference to the regress entry in the Base Reference Manual; and the third
is a reference to the reshape entry in the Data Management Reference Manual.
All the manuals in the Stata Documentation have a shorthand notation:
[GSM]     Getting Started with Stata for Mac
[GSU]     Getting Started with Stata for Unix
[GSW]     Getting Started with Stata for Windows
[U]       Stata User's Guide
[R]       Stata Base Reference Manual
[BAYES]   Stata Bayesian Analysis Reference Manual
[D]       Stata Data Management Reference Manual
[FN]      Stata Functions Reference Manual
[G]       Stata Graphics Reference Manual
[IRT]     Stata Item Response Theory Reference Manual
[XT]      Stata Longitudinal-Data/Panel-Data Reference Manual
[ME]      Stata Multilevel Mixed-Effects Reference Manual
[MI]      Stata Multiple-Imputation Reference Manual
[MV]      Stata Multivariate Statistics Reference Manual
[PSS]     Stata Power and Sample-Size Reference Manual
[P]       Stata Programming Reference Manual
[SEM]     Stata Structural Equation Modeling Reference Manual
[SVY]     Stata Survey Data Reference Manual
[ST]      Stata Survival Analysis Reference Manual
[TS]      Stata Time-Series Reference Manual
[TE]      Stata Treatment-Effects Reference Manual: Potential Outcomes/Counterfactual Outcomes
[I]       Stata Glossary and Index

[M]       Mata Reference Manual

Title
intro – Introduction to time-series manual

Description     Also see

Description
This manual documents Stata's time-series commands and is referred to as [TS] in cross-references.
After this entry, [TS] time series provides an overview of the ts commands. The other parts of
this manual are arranged alphabetically. If you are new to Stata's time-series features, we recommend
that you read the following sections first:
[TS] time series     Introduction to time-series commands
[TS] tsset           Declare a dataset to be time-series data
Stata is continually being updated, and Stata users are always writing new commands. To ensure
that you have the latest features, you should install the most recent official update; see [R] update.

Also see
[U] 1.3 What's new

[R] intro – Introduction to base reference manual

Title
time series – Introduction to time-series commands

Description     Remarks and examples     References     Also see

Description
The Time-Series Reference Manual organizes the commands alphabetically, making it easy to find
individual command entries if you know the name of the command. This overview organizes and
presents the commands conceptually, that is, according to the similarities in the functions that they
perform. The table below lists the manual entries that you should see for additional information.
Data management tools and time-series operators.
These commands help you prepare your data for further analysis.
Univariate time series.
These commands are grouped together because they are either estimators or filters designed for
univariate time series or preestimation or postestimation commands that are conceptually related
to one or more univariate time-series estimators.
Multivariate time series.
These commands are similarly grouped together because they are either estimators designed for
use with multivariate time series or preestimation or postestimation commands conceptually related
to one or more multivariate time-series estimators.
Forecasting models.
These commands work as a group to provide the tools you need to create models by combining
estimation results, identities, and other objects and to solve those models to obtain forecasts.

Within these broad categories, similar commands have been grouped together.

Data management tools and time-series operators


  [TS] tsset                       Declare data to be time-series data
  [TS] tsfill                      Fill in gaps in time variable
  [TS] tsappend                    Add observations to a time-series dataset
  [TS] tsreport                    Report time-series aspects of a dataset or estimation sample
  [TS] tsrevar                     Time-series operator programming command
  [TS] rolling                     Rolling-window and recursive estimation
  [D] datetime business calendars  User-definable business calendars

Univariate time series


Estimators
  [TS] arfima                      Autoregressive fractionally integrated moving-average models
  [TS] arfima postestimation       Postestimation tools for arfima
  [TS] arima                       ARIMA, ARMAX, and other dynamic regression models
  [TS] arima postestimation        Postestimation tools for arima
  [TS] arch                        Autoregressive conditional heteroskedasticity (ARCH) family of estimators
  [TS] arch postestimation         Postestimation tools for arch
  [TS] mswitch                     Markov-switching regression models
  [TS] mswitch postestimation      Postestimation tools for mswitch
  [TS] newey                       Regression with Newey–West standard errors
  [TS] newey postestimation        Postestimation tools for newey
  [TS] prais                       Prais–Winsten and Cochrane–Orcutt regression
  [TS] prais postestimation        Postestimation tools for prais
  [TS] ucm                         Unobserved-components model
  [TS] ucm postestimation          Postestimation tools for ucm

Time-series smoothers and filters
  [TS] tsfilter bk                 Baxter–King time-series filter
  [TS] tsfilter bw                 Butterworth time-series filter
  [TS] tsfilter cf                 Christiano–Fitzgerald time-series filter
  [TS] tsfilter hp                 Hodrick–Prescott time-series filter
  [TS] tssmooth ma                 Moving-average filter
  [TS] tssmooth dexponential       Double-exponential smoothing
  [TS] tssmooth exponential        Single-exponential smoothing
  [TS] tssmooth hwinters           Holt–Winters nonseasonal smoothing
  [TS] tssmooth shwinters          Holt–Winters seasonal smoothing
  [TS] tssmooth nl                 Nonlinear filter

Diagnostic tools
  [TS] corrgram                    Tabulate and graph autocorrelations
  [TS] xcorr                       Cross-correlogram for bivariate time series
  [TS] cumsp                       Cumulative spectral distribution
  [TS] pergram                     Periodogram
  [TS] psdensity                   Parametric spectral density estimation
  [TS] estat acplot                Plot parametric autocorrelation and autocovariance functions
  [TS] estat aroots                Check the stability condition of ARIMA estimates
  [TS] dfgls                       DF-GLS unit-root test
  [TS] dfuller                     Augmented Dickey–Fuller unit-root test
  [TS] pperron                     Phillips–Perron unit-root test
  [TS] estat sbknown               Test for a structural break with a known break date
  [TS] estat sbsingle              Test for a structural break with an unknown break date
  [R] regress postestimation time series   Postestimation tools for regress with time series
  [TS] mswitch postestimation      Postestimation tools for mswitch
  [TS] wntestb                     Bartlett's periodogram-based test for white noise
  [TS] wntestq                     Portmanteau (Q) test for white noise
Multivariate time series

Estimators
  [TS] dfactor                     Dynamic-factor models
  [TS] dfactor postestimation      Postestimation tools for dfactor
  [TS] mgarch ccc                  Constant conditional correlation multivariate GARCH models
  [TS] mgarch ccc postestimation   Postestimation tools for mgarch ccc
  [TS] mgarch dcc                  Dynamic conditional correlation multivariate GARCH models
  [TS] mgarch dcc postestimation   Postestimation tools for mgarch dcc
  [TS] mgarch dvech                Diagonal vech multivariate GARCH models
  [TS] mgarch dvech postestimation Postestimation tools for mgarch dvech
  [TS] mgarch vcc                  Varying conditional correlation multivariate GARCH models
  [TS] mgarch vcc postestimation   Postestimation tools for mgarch vcc
  [TS] sspace                      State-space models
  [TS] sspace postestimation       Postestimation tools for sspace
  [TS] var                         Vector autoregressive models
  [TS] var postestimation          Postestimation tools for var
  [TS] var svar                    Structural vector autoregressive models
  [TS] var svar postestimation     Postestimation tools for svar
  [TS] varbasic                    Fit a simple VAR and graph IRFs or FEVDs
  [TS] varbasic postestimation     Postestimation tools for varbasic
  [TS] vec                         Vector error-correction models
  [TS] vec postestimation          Postestimation tools for vec

Diagnostic tools
  [TS] varlmar                     Perform LM test for residual autocorrelation
  [TS] varnorm                     Test for normally distributed disturbances
  [TS] varsoc                      Obtain lag-order selection statistics for VARs and VECMs
  [TS] varstable                   Check the stability condition of VAR or SVAR estimates
  [TS] varwle                      Obtain Wald lag-exclusion statistics
  [TS] veclmar                     Perform LM test for residual autocorrelation
  [TS] vecnorm                     Test for normally distributed disturbances
  [TS] vecrank                     Estimate the cointegrating rank of a VECM
  [TS] vecstable                   Check the stability condition of VECM estimates

Forecasting, inference, and interpretation
  [TS] irf create                  Obtain IRFs, dynamic-multiplier functions, and FEVDs
  [TS] fcast compute               Compute dynamic forecasts after var, svar, or vec
  [TS] vargranger                  Perform pairwise Granger causality tests
Graphs and tables
  [TS] corrgram                    Tabulate and graph autocorrelations
  [TS] xcorr                       Cross-correlogram for bivariate time series
  [TS] pergram                     Periodogram
  [TS] irf graph                   Graphs of IRFs, dynamic-multiplier functions, and FEVDs
  [TS] irf cgraph                  Combined graphs of IRFs, dynamic-multiplier functions, and FEVDs
  [TS] irf ograph                  Overlaid graphs of IRFs, dynamic-multiplier functions, and FEVDs
  [TS] irf table                   Tables of IRFs, dynamic-multiplier functions, and FEVDs
  [TS] irf ctable                  Combined tables of IRFs, dynamic-multiplier functions, and FEVDs
  [TS] fcast graph                 Graph forecasts after fcast compute
  [TS] tsline                      Plot time-series data
  [TS] varstable                   Check the stability condition of VAR or SVAR estimates
  [TS] vecstable                   Check the stability condition of VECM estimates
  [TS] wntestb                     Bartlett's periodogram-based test for white noise

Results management tools
  [TS] irf add                     Add results from an IRF file to the active IRF file
  [TS] irf describe                Describe an IRF file
  [TS] irf drop                    Drop IRF results from the active IRF file
  [TS] irf rename                  Rename an IRF result in an IRF file
  [TS] irf set                     Set the active IRF file


Forecasting models
  [TS] forecast                    Econometric model forecasting
  [TS] forecast adjust             Adjust a variable by add factoring, replacing, etc.
  [TS] forecast clear              Clear current model from memory
  [TS] forecast coefvector         Specify an equation via a coefficient vector
  [TS] forecast create             Create a new forecast model
  [TS] forecast describe           Describe features of the forecast model
  [TS] forecast drop               Drop forecast variables
  [TS] forecast estimates          Add estimation results to a forecast model
  [TS] forecast exogenous          Declare exogenous variables
  [TS] forecast identity           Add an identity to a forecast model
  [TS] forecast list               List forecast commands composing current model
  [TS] forecast query              Check whether a forecast model has been started
  [TS] forecast solve              Obtain static and dynamic forecasts

Remarks and examples


Remarks are presented under the following headings:
Data management tools and time-series operators
Univariate time series
Estimators
Time-series smoothers and filters
Diagnostic tools
Multivariate time series
Estimators
Diagnostic tools
Forecasting models
Additional resources


Data management tools and time-series operators


Because time-series estimators are, by definition, a function of the temporal ordering of the
observations in the estimation sample, Stata's time-series commands require the data to be sorted and
indexed by time, using the tsset command, before they can be used. tsset is simply a way for you
to tell Stata which variable in your dataset represents time; tsset then sorts and indexes the data
appropriately for use with the time-series commands. Once your dataset has been tsset, you can
use Stata's time-series operators in data manipulation or programming using that dataset and when
specifying the syntax for most time-series commands. Stata has time-series operators for representing
the lags, leads, differences, and seasonal differences of a variable. The time-series operators are
documented in [TS] tsset.
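For illustration, a minimal sketch of declaring a quarterly series and applying the operators (the
variable names t and y here are hypothetical):

. clear
. set obs 100
. generate t = tq(1990q1) + _n - 1
. format t %tq
. tsset t
. generate y = rnormal()
. generate lagy = L.y       // lag:  value of y at t-1
. generate dy   = D.y       // first difference: y - L.y
. generate s4y  = S4.y      // seasonal difference: y - L4.y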
You can also define a business-day calendar so that Statas time-series operators respect the structure
of missing observations in your data. The most common example is having Monday come after Friday
in market data. [D] datetime business calendars provides a discussion and examples.
tsset can also be used to declare that your dataset contains cross-sectional time-series data, often
referred to as panel data. When you use tsset to declare your dataset to contain panel data, you
specify a variable that identifies the panels and a variable that identifies the time periods. Once your
dataset has been tsset as panel data, the time-series operators work appropriately for the data.
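For example, with a hypothetical panel identifier id and time variable year, you would type

. tsset id year

after which L.y, D.y, and the other operators respect the panel structure.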
tsfill, which is documented in [TS] tsfill, can be used after tsset to fill in missing times with
missing observations. tsset will report any gaps in your data, and tsreport will provide more
details about the gaps. tsappend adds observations to a time-series dataset by using the information
set by tsset. This function can be particularly useful when you wish to predict out of sample after
fitting a model with a time-series estimator. tsrevar is a programmer's command that provides a
way to use varlists that contain time-series operators with commands that do not otherwise support
time-series operators.
rolling performs rolling regressions, recursive regressions, and reverse recursive regressions.
Any command that stores results in e() or r() can be used with rolling.
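For instance, a minimal sketch of a 24-period rolling regression of a hypothetical y on x:

. rolling _b, window(24) saving(betas, replace): regress y x

This saves the rolling coefficient estimates in the file betas.dta.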

Univariate time series


Estimators
The seven univariate time-series estimators currently available in Stata are arfima, arima, arch,
mswitch, newey, prais, and ucm. newey and prais are really just extensions to ordinary linear
regression. When you fit a linear regression on time-series data via ordinary least squares (OLS), if
the disturbances are autocorrelated, the parameter estimates are usually consistent, but the estimated
standard errors tend to be underestimated. Several estimators have been developed to deal with this
problem. One strategy is to use OLS for estimating the regression parameters and use a different
estimator for the variances, one that is consistent in the presence of autocorrelated disturbances, such
as the Newey–West estimator implemented in newey. Another strategy is to model the dynamics of
the disturbances. The estimators found in prais, arima, arch, arfima, and ucm are based on such
a strategy.
prais implements two such estimators: the Prais–Winsten and the Cochrane–Orcutt generalized
least-squares (GLS) estimators. These estimators are GLS estimators, but they are fairly restrictive
in that they permit only first-order autocorrelation in the disturbances. Although they have certain
pedagogical and historical value, they are somewhat obsolete. Faster computers with more memory
have made it possible to implement full information maximum likelihood (FIML) estimators, such
as Stata's arima command. These estimators permit much greater flexibility when modeling the
disturbances and are more efficient estimators.
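For instance, with hypothetical variables y and x on tsset data, you might type

. newey y x, lag(4)
. prais y x

The first command reports OLS coefficients with Newey–West standard errors allowing autocorrelation
up to lag 4; the second fits the Prais–Winsten GLS estimator (add the corc option for Cochrane–Orcutt).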


arima provides the means to fit linear models with autoregressive moving-average (ARMA)
disturbances, or in the absence of linear predictors, autoregressive integrated moving-average (ARIMA)
models. This means that, whether you think that your data are best represented as a distributed-lag
model, a transfer-function model, or a stochastic difference equation, or you simply wish to apply
a Box–Jenkins filter to your data, the model can be fit using arima. arch, a conditional maximum
likelihood estimator, has similar modeling capabilities for the mean of the time series but can also model
autoregressive conditional heteroskedasticity in the disturbances with a wide variety of specifications
for the variance equation.
arfima estimates the parameters of autoregressive fractionally integrated moving-average (ARFIMA)
models, which handle higher degrees of dependence than ARIMA models. ARFIMA models allow the
autocorrelations to decay at the slower hyperbolic rate, whereas ARIMA models handle processes
whose autocorrelations decay at an exponential rate.
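For instance, with a hypothetical series y, you might type

. arima y, arima(1,1,1)
. arch y, arch(1) garch(1)
. arfima y, ar(1) ma(1)

to fit an ARIMA(1,1,1) model, a GARCH(1,1) model, and an ARFIMA model with one AR and one MA
term, respectively.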
Markov-switching models are used for series that are believed to transition over a finite set
of unobserved states, allowing the process to evolve differently in each state. The transitions occur
according to a Markov process. mswitch estimates the state-dependent parameters of Markov-switching
dynamic regression models and Markov-switching autoregression models.
Unobserved-components models (UCMs) decompose a time series into trend, seasonal, cyclical,
and idiosyncratic components and allow for exogenous variables. ucm estimates the parameters of
UCMs by maximum likelihood. UCMs can also model the stationary cyclical component using the
stochastic-cycle parameterization that has an intuitive frequency-domain interpretation.
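For instance, minimal sketches with a hypothetical series y are

. mswitch dr y, states(2)
. ucm y, seasonal(4)

which fit a two-state Markov-switching dynamic regression and a UCM with a stochastic quarterly
seasonal component.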
Time-series smoothers and filters
In addition to the estimators mentioned above, Stata also provides time-series filters and smoothers.
The BaxterKing and ChristianoFitzgerald band-pass filters and the Butterworth and HodrickPrescott
high-pass filters are implemented in tsfilter; see [TS] tsfilter for an overview.
Also included are a simple, uniformly weighted, moving-average filter with unit weights; a
weighted moving-average filter in which you can specify the weights; single- and double-exponential
smoothers; HoltWinters seasonal and nonseasonal smoothers; and a nonlinear smoother. Most of
these smoothers were originally developed as ad hoc procedures and are used for reducing the noise in
a time series (smoothing) or forecasting. Although they have limited application for signal extraction,
these smoothers have all been found to be optimal for some underlying modern time-series models;
see [TS] tssmooth.
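For instance, with a hypothetical tsset series y, you might extract a Hodrick–Prescott business-cycle
component or smooth the series with a centered moving average (the new variable names are arbitrary):

. tsfilter hp cycle_y = y, trend(trend_y)
. tssmooth ma sm_y = y, window(2 1 2)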
Diagnostic tools
Stata's time-series commands also include several preestimation and postestimation diagnostic and
interpretation commands. corrgram estimates the autocorrelation function and partial autocorrelation
function of a univariate time series, as well as Q statistics. These functions and statistics are often used
to determine the appropriate model specification before fitting ARIMA models. corrgram can also be
used with wntestb and wntestq to examine the residuals after fitting a model for evidence of model
misspecification. Stata's time-series commands also include the commands pergram and cumsp,
which provide the log-standardized periodogram and the cumulative-sample spectral distribution,
respectively, for time-series analysts who prefer to estimate in the frequency domain rather than the
time domain.
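For instance, with a hypothetical series y:

. corrgram y, lags(20)
. pergram y
. cumsp y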
psdensity computes the spectral density implied by the parameters estimated by arfima, arima,
or ucm. The estimated spectral density shows the relative importance of components at different
frequencies. estat acplot computes the autocorrelation and autocovariance functions implied by
the parameters estimated by arima. These functions provide a measure of the dependence structure
in the time domain.
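A minimal sketch after fitting a hypothetical ARMA(1,1) model:

. arima y, ar(1) ma(1)
. psdensity pden omega
. estat acplot

Here pden and omega are hypothetical names for the new variables holding the implied spectral
density and the frequencies at which it is evaluated.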


xcorr estimates the cross-correlogram for bivariate time series and can similarly be used for both
preestimation and postestimation. For example, the cross-correlogram can be used before fitting a
transfer-function model to produce initial estimates of the IRF. This estimate can then be used to
determine the optimal lag length of the input series to include in the model specification. It can
also be used as a postestimation tool after fitting a transfer function. The cross-correlogram between
the residual from a transfer-function model and the prewhitened input series of the model can be
examined for evidence of model misspecification.
When you fit ARMA or ARIMA models, the dependent variable being modeled must be covariance
stationary (ARMA models), or the order of integration must be known (ARIMA models). Stata has three
commands that can test for the presence of a unit root in a time-series variable: dfuller performs
the augmented Dickey–Fuller test, pperron performs the Phillips–Perron test, and dfgls performs
a modified Dickey–Fuller test. arfima can also be used to investigate the order of integration. After
estimation, you can use estat aroots to check the stationarity of an ARMA process.
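For instance, to test a hypothetical series y for a unit root:

. dfuller y, lags(4) trend
. pperron y
. dfgls y, maxlag(8)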
After using mswitch to fit a Markov-switching model, two postestimation commands help interpret
the results. estat transition reports the transition probabilities and the corresponding standard
errors in a tabular form. estat duration computes the expected duration of being in a given state
and displays the results in a table; see [TS] mswitch postestimation.
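A minimal sketch with a hypothetical series y:

. mswitch dr y, states(2)
. estat transition
. estat duration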
After fitting a model with regress or ivregress, estat sbknown and estat sbsingle test
for structural breaks. estat sbknown tests for breaks at known break dates, and estat sbsingle
tests for a break at an unknown break date; see [TS] estat sbknown and [TS] estat sbsingle.
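For instance, after a hypothetical regression of y on x with quarterly tsset data:

. regress y x
. estat sbknown, break(tq(2008q3))
. estat sbsingle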
The remaining diagnostic tools for univariate time series are for use after fitting a linear model via
OLS with Stata's regress command. They are documented collectively in [R] regress postestimation
time series. They include estat dwatson, estat durbinalt, estat bgodfrey, and estat
archlm. estat dwatson computes the Durbin–Watson d statistic to test for the presence of first-order
autocorrelation in the OLS residuals. estat durbinalt likewise tests for the presence of
autocorrelation in the residuals. By comparison, however, Durbin's alternative test is more general
and easier to use than the Durbin–Watson test. With estat durbinalt, you can test for higher
orders of autocorrelation, the assumption that the covariates in the model are strictly exogenous is
relaxed, and there is no need to consult tables to compute rejection regions, as you must with the
Durbin–Watson test. estat bgodfrey computes the Breusch–Godfrey test for autocorrelation in the
residuals, and although the computations are different, the test in estat bgodfrey is asymptotically
equivalent to the test in estat durbinalt. Finally, estat archlm performs Engle's LM test for the
presence of autoregressive conditional heteroskedasticity.
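For instance, after a hypothetical regression on tsset data:

. regress y x
. estat dwatson
. estat durbinalt, lags(1/4)
. estat bgodfrey, lags(1/4)
. estat archlm, lags(1/4)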

Multivariate time series


Estimators
Stata provides commands for fitting the most widely applied multivariate time-series models. var
and svar fit vector autoregressive and structural vector autoregressive models to stationary data. vec
fits cointegrating vector error-correction models. dfactor fits dynamic-factor models. mgarch ccc,
mgarch dcc, mgarch dvech, and mgarch vcc fit multivariate GARCH models. sspace fits state-space
models. Many linear time-series models, including vector autoregressive moving-average (VARMA)
models and structural time-series models, can be cast as state-space models and fit by sspace.
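For instance, minimal sketches with hypothetical series y1 and y2 are

. var y1 y2, lags(1/2)
. vec y1 y2, rank(1)
. mgarch ccc (y1 y2), arch(1) garch(1)

which fit a two-lag VAR, a VECM with one cointegrating equation, and a constant conditional
correlation GARCH(1,1) model.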


Diagnostic tools
Before fitting a multivariate time-series model, you must specify the number of lags of the dependent
variable to include. varsoc produces statistics for determining the order of a VAR or VECM.
Several postestimation commands perform the most common specification analysis on a previously
fitted VAR or SVAR. You can use varlmar to check for serial correlation in the residuals, varnorm
to test the null hypothesis that the disturbances come from a multivariate normal distribution, and
varstable to see if the fitted VAR or SVAR is stable. Two common types of inference about VAR
models are whether one variable Granger-causes another and whether a set of lags can be excluded
from the model. vargranger reports Wald tests of Granger causation, and varwle reports Wald lag
exclusion tests.
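For instance, a minimal sketch with hypothetical series y1 and y2:

. varsoc y1 y2, maxlag(8)
. var y1 y2, lags(1/2)
. varlmar
. varnorm
. varstable, graph
. vargranger
. varwle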
Similarly, several postestimation commands perform the most common specification analysis on a
previously fitted VECM. You can use veclmar to check for serial correlation in the residuals, vecnorm
to test the null hypothesis that the disturbances come from a multivariate normal distribution, and
vecstable to analyze the stability of the previously fitted VECM.
VARs and VECMs are often fit to produce baseline forecasts. fcast produces dynamic forecasts
from previously fitted VARs and VECMs.
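For instance, after the hypothetical VAR above (the f_ prefix is arbitrary):

. fcast compute f_, step(8)
. fcast graph f_y1 f_y2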

Many researchers fit VARs, SVARs, and VECMs because they want to analyze how unexpected
shocks affect the dynamic paths of the variables. Stata has a suite of irf commands for estimating
IRFs and interpreting, presenting, and managing these estimates; see [TS] irf.
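A minimal sketch after a hypothetical VAR of y1 and y2 (the IRF filename myirfs and result name
order1 are arbitrary):

. irf create order1, set(myirfs, replace) step(10)
. irf graph oirf, impulse(y1) response(y2)
. irf table fevd, impulse(y1) response(y2)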

Forecasting models
Stata provides a set of commands for obtaining forecasts by solving models, collections of equations
that jointly determine the outcomes of one or more variables. You use Stata estimation commands such
as regress, reg3, var, and vec to fit stochastic equations and store the results using estimates
store. Then you create a forecast model using forecast create and use commands, including
forecast estimates and forecast identity, to build models consisting of estimation results,
nonstochastic relationships (identities), and other model features. Models can be as simple as a single
linear regression for which you want to obtain dynamic forecasts, or they can be complicated systems
consisting of dozens of estimation results and identities representing a complete macroeconometric
model.
The forecast solve command allows you to obtain both static and dynamic forecasts.
Confidence intervals for forecasts can be obtained via stochastic simulation incorporating both
parameter uncertainty and additive random shocks. By using forecast adjust, you can incorporate
outside information and specify different paths for some of the models variables to obtain forecasts
under alternative scenarios.
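For instance, a minimal sketch for a model with a single hypothetical equation (the names mymodel,
eq_y, and the f_ prefix are arbitrary):

. regress y x L.y
. estimates store eq_y
. forecast create mymodel, replace
. forecast estimates eq_y
. forecast solve, prefix(f_) begin(tq(2012q1))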

Additional resources
In addition to the manual, there are several other resources available for learning about Stata's
time-series features.

On the web: You can watch the Time series in Stata playlist on our YouTube Channel. You can
read our posts about time-series analysis at Not Elsewhere Classified: The Stata Blog. You
can search for blog posts from within Stata by typing
. search time series blog

You can also connect with other Stata users interested in discussing time series and other Stata
topics on Statalist, the Stata forum. Visit http://www.statalist.org/.


Training: Stata's NetCourse 461 provides a comprehensive introduction to univariate time-series


analysis. See http://www.stata.com/netcourse/nc461.html for more information. We also offer a
public training course, Time-Series Analysis Using Stata.
Books: We recommend the Stata Press book Introduction to Time Series Using Stata by Sean
Becketti. See http://www.stata-press.com/books/introduction-to-time-series-using-stata for full
comments from our technical group.

References
Baum, C. F. 2005. Stata: The language of choice for time-series analysis? Stata Journal 5: 46–63.
Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.
Box-Steffensmeier, J. M., J. R. Freeman, M. P. Hitt, and J. C. W. Pevehouse. 2014. Time Series Analysis for the
Social Sciences. New York: Cambridge University Press.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis. 2nd ed. New York: Springer.
———. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Pickup, M. 2015. Introduction to Time Series Analysis. Thousand Oaks, CA: Sage.
Pisati, M. 2001. sg162: Tools for spatial data analysis. Stata Technical Bulletin 60: 21–37. Reprinted in Stata Technical
Bulletin Reprints, vol. 10, pp. 277–298. College Station, TX: Stata Press.
Stock, J. H., and M. W. Watson. 2001. Vector autoregressions. Journal of Economic Perspectives 15: 101–115.

Also see
[U] 1.3 What's new

[R] intro – Introduction to base reference manual

Title
arch – Autoregressive conditional heteroskedasticity (ARCH) family of estimators
Description     Quick start     Menu     Syntax     Options
Remarks and examples     Stored results     Methods and formulas     References     Also see

Description
arch fits regression models in which the volatility of a series varies through time. Usually, periods
of high and low volatility are grouped together. ARCH models estimate future volatility as a function of
prior volatility. To accomplish this, arch fits models of autoregressive conditional heteroskedasticity
(ARCH) by using conditional maximum likelihood. In addition to ARCH terms, models may include
multiplicative heteroskedasticity. Gaussian (normal), Student's t, and generalized error distributions
are supported.
Concerning the regression equation itself, models may also contain ARCH-in-mean and ARMA
terms.

Quick start
ARCH model of y with first- and second-order ARCH components and regressor x using tsset data

arch y x, arch(1,2)
Add a second-order GARCH component
arch y x, arch(1,2) garch(2)
Add an autoregressive component of order 2 and a moving-average component of order 3
arch y x, arch(1,2) garch(2) ar(2) ma(3)
As above, but with the conditional variance included in the mean equation
arch y x, arch(1,2) garch(2) ar(2) ma(3) archm
EGARCH model of order 2 for y with an autoregressive component of order 1

arch y, earch(2) egarch(2) ar(1)


Menu

ARCH/GARCH
    Statistics > Time series > ARCH/GARCH > ARCH and GARCH models

EARCH/EGARCH
    Statistics > Time series > ARCH/GARCH > Nelson's EGARCH model

ABARCH/ATARCH/SDGARCH
    Statistics > Time series > ARCH/GARCH > Threshold ARCH model

ARCH/TARCH/GARCH
    Statistics > Time series > ARCH/GARCH > GJR form of threshold ARCH model

ARCH/SAARCH/GARCH
    Statistics > Time series > ARCH/GARCH > Simple asymmetric ARCH model

PARCH/PGARCH
    Statistics > Time series > ARCH/GARCH > Power ARCH model

NARCH/GARCH
    Statistics > Time series > ARCH/GARCH > Nonlinear ARCH model

NARCHK/GARCH
    Statistics > Time series > ARCH/GARCH > Nonlinear ARCH model with one shift

APARCH/PGARCH
    Statistics > Time series > ARCH/GARCH > Asymmetric power ARCH model

NPARCH/PGARCH
    Statistics > Time series > ARCH/GARCH > Nonlinear power ARCH model


Syntax

    arch depvar [indepvars] [if] [in] [weight] [, options]

options                       Description
------------------------------------------------------------------------------------------
Model
  noconstant                  suppress constant term
  arch(numlist)               ARCH terms
  garch(numlist)              GARCH terms
  saarch(numlist)             simple asymmetric ARCH terms
  tarch(numlist)              threshold ARCH terms
  aarch(numlist)              asymmetric ARCH terms
  narch(numlist)              nonlinear ARCH terms
  narchk(numlist)             nonlinear ARCH terms with single shift
  abarch(numlist)             absolute value ARCH terms
  atarch(numlist)             absolute threshold ARCH terms
  sdgarch(numlist)            lags of σ_t
  earch(numlist)              news terms in Nelson's (1991) EGARCH model
  egarch(numlist)             lags of ln(σ_t^2)
  parch(numlist)              power ARCH terms
  tparch(numlist)             threshold power ARCH terms
  aparch(numlist)             asymmetric power ARCH terms
  nparch(numlist)             nonlinear power ARCH terms
  nparchk(numlist)            nonlinear power ARCH terms with single shift
  pgarch(numlist)             power GARCH terms
  constraints(constraints)    apply specified linear constraints
  collinear                   keep collinear variables

Model 2
  archm                       include ARCH-in-mean term in the mean-equation specification
  archmlags(numlist)          include specified lags of conditional variance in mean equation
  archmexp(exp)               apply transformation in exp to any ARCH-in-mean terms
  arima(#p,#d,#q)             specify ARIMA(p, d, q) model for dependent variable
  ar(numlist)                 autoregressive terms of the structural model disturbance
  ma(numlist)                 moving-average terms of the structural model disturbances

Model 3
  distribution(dist [#])      use dist distribution for errors (may be gaussian, normal, t,
                                or ged; default is gaussian)
  het(varlist)                include varlist in the specification of the conditional variance
  savespace                   conserve memory during estimation

Priming
  arch0(xb)                   compute priming values on the basis of the expected unconditional
                                variance; the default
  arch0(xb0)                  compute priming values on the basis of the estimated variance of
                                the residuals from OLS
  arch0(xbwt)                 compute priming values on the basis of the weighted sum of squares
                                from OLS residuals
  arch0(xb0wt)                compute priming values on the basis of the weighted sum of squares
                                from OLS residuals, with more weight at earlier times
  arch0(zero)                 set priming values of ARCH terms to zero
  arch0(#)                    set priming values of ARCH terms to #
  arma0(zero)                 set all priming values of ARMA terms to zero; the default
  arma0(p)                    begin estimation after observation p, where p is the maximum AR
                                lag in model
  arma0(q)                    begin estimation after observation q, where q is the maximum MA
                                lag in model
  arma0(pq)                   begin estimation after observation (p + q)
  arma0(#)                    set priming values of ARMA terms to #
  condobs(#)                  set conditioning observations at the start of the sample to #

SE/Robust
  vce(vcetype)                vcetype may be opg, robust, or oim

Reporting
  level(#)                    set confidence level; default is level(95)
  detail                      report list of gaps in time series
  nocnsreport                 do not display constraints
  display_options             control columns and column formats, row spacing, and line width

Maximization
  maximize_options            control the maximization process; seldom used

  coeflegend                  display legend instead of statistics
------------------------------------------------------------------------------------------

You must tsset your data before using arch; see [TS] tsset.
depvar and varlist may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, fp, rolling, statsby, and xi are allowed; see [U] 11.1.10 Prefix commands.
iweights are allowed; see [U] 11.1.6 weight.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.

To fit an ARCH(#m) model with Gaussian errors, type

    . arch depvar ..., arch(1/#m)

To fit a GARCH(#m, #k) model assuming that the errors follow Student's t distribution with 7 degrees
of freedom, type

    . arch depvar ..., arch(1/#m) garch(1/#k) distribution(t 7)

You can also fit many other models.


Details of syntax
The basic model arch fits is

    y_t = x_t β + ε_t
    Var(ε_t) = σ_t^2 = γ_0 + A(σ, ε) + B(σ, ε)^2                                    (1)

The y_t equation may optionally include ARCH-in-mean and ARMA terms:

    y_t = x_t β + Σ_i ψ_i g(σ_{t-i}^2) + ARMA(p, q) + ε_t

If no options are specified, A() = B() = 0, and the model collapses to linear regression. The
following options add to A() (α, γ, and κ represent parameters to be estimated):

Option       Terms added to A()
arch()       A() = A() + α_{1,1} ε_{t-1}^2 + α_{1,2} ε_{t-2}^2 + ...
garch()      A() = A() + α_{2,1} σ_{t-1}^2 + α_{2,2} σ_{t-2}^2 + ...
saarch()     A() = A() + α_{3,1} ε_{t-1} + α_{3,2} ε_{t-2} + ...
tarch()      A() = A() + α_{4,1} ε_{t-1}^2 (ε_{t-1} > 0) + α_{4,2} ε_{t-2}^2 (ε_{t-2} > 0) + ...
aarch()      A() = A() + α_{5,1} (|ε_{t-1}| + γ_{5,1} ε_{t-1})^2 + α_{5,2} (|ε_{t-2}| + γ_{5,2} ε_{t-2})^2 + ...
narch()      A() = A() + α_{6,1} (ε_{t-1} - κ_{6,1})^2 + α_{6,2} (ε_{t-2} - κ_{6,2})^2 + ...
narchk()     A() = A() + α_{7,1} (ε_{t-1} - κ_7)^2 + α_{7,2} (ε_{t-2} - κ_7)^2 + ...

The following options add to B():

Option       Terms added to B()
abarch()     B() = B() + α_{8,1} |ε_{t-1}| + α_{8,2} |ε_{t-2}| + ...
atarch()     B() = B() + α_{9,1} |ε_{t-1}| (ε_{t-1} > 0) + α_{9,2} |ε_{t-2}| (ε_{t-2} > 0) + ...
sdgarch()    B() = B() + α_{10,1} σ_{t-1} + α_{10,2} σ_{t-2} + ...

Each option requires a numlist argument (see [U] 11.1.8 numlist), which determines the lagged
terms included. arch(1) specifies α_{1,1} ε_{t-1}^2, arch(2) specifies α_{1,2} ε_{t-2}^2, arch(1,2) specifies
α_{1,1} ε_{t-1}^2 + α_{1,2} ε_{t-2}^2, arch(1/3) specifies α_{1,1} ε_{t-1}^2 + α_{1,2} ε_{t-2}^2 + α_{1,3} ε_{t-3}^2, etc.

If the earch() or egarch() option is specified, the basic model fit is

    y_t = x_t β + Σ_i ψ_i g(σ_{t-i}^2) + ARMA(p, q) + ε_t                           (2)
    ln Var(ε_t) = ln σ_t^2 = γ_0 + C(ln σ, z) + A(σ, ε) + B(σ, ε)^2

where z_t = ε_t / σ_t. A() and B() are given as above, but A() and B() now add to ln σ_t^2 rather than
σ_t^2. (The options corresponding to A() and B() are rarely specified here.) C() is given by

Option       Terms added to C()
earch()      C() = C() + α_{11,1} z_{t-1} + γ_{11,1} (|z_{t-1}| - √(2/π))
                       + α_{11,2} z_{t-2} + γ_{11,2} (|z_{t-2}| - √(2/π)) + ...
egarch()     C() = C() + α_{12,1} ln σ_{t-1}^2 + α_{12,2} ln σ_{t-2}^2 + ...

Instead, if the parch(), tparch(), aparch(), nparch(), nparchk(), or pgarch() options are
specified, the basic model fit is

    y_t = x_t β + Σ_i ψ_i g(σ_{t-i}^2) + ARMA(p, q) + ε_t                           (3)
    {Var(ε_t)}^{φ/2} = σ_t^φ = γ_0 + D(σ, ε) + A(σ, ε) + B(σ, ε)^2

where φ is a parameter to be estimated. A() and B() are given as above, but A() and B() now add
to σ_t^φ. (The options corresponding to A() and B() are rarely specified here.) D() is given by

Option       Terms added to D()
parch()      D() = D() + α_{13,1} ε_{t-1}^φ + α_{13,2} ε_{t-2}^φ + ...
tparch()     D() = D() + α_{14,1} ε_{t-1}^φ (ε_{t-1} > 0) + α_{14,2} ε_{t-2}^φ (ε_{t-2} > 0) + ...
aparch()     D() = D() + α_{15,1} (|ε_{t-1}| + γ_{15,1} ε_{t-1})^φ + α_{15,2} (|ε_{t-2}| + γ_{15,2} ε_{t-2})^φ + ...
nparch()     D() = D() + α_{16,1} |ε_{t-1} - κ_{16,1}|^φ + α_{16,2} |ε_{t-2} - κ_{16,2}|^φ + ...
nparchk()    D() = D() + α_{17,1} |ε_{t-1} - κ_17|^φ + α_{17,2} |ε_{t-2} - κ_17|^φ + ...
pgarch()     D() = D() + α_{18,1} σ_{t-1}^φ + α_{18,2} σ_{t-2}^φ + ...

Common models

Common term                                                            Options to specify
ARCH (Engle 1982)                                                      arch()
GARCH (Bollerslev 1986)                                                arch() garch()
ARCH-in-mean (Engle, Lilien, and Robins 1987)                          archm arch() [garch()]
GARCH with ARMA terms                                                  arch() garch() ar() ma()
EGARCH (Nelson 1991)                                                   earch() egarch()
TARCH, threshold ARCH (Zakoian 1994)                                   abarch() atarch() sdgarch()
GJR, form of threshold ARCH (Glosten, Jagannathan, and Runkle 1993)    arch() tarch() [garch()]
SAARCH, simple asymmetric ARCH (Engle 1990)                            arch() saarch() [garch()]
PARCH, power ARCH (Higgins and Bera 1992)                              parch() [pgarch()]
NARCH, nonlinear ARCH                                                  narch() [garch()]
NARCHK, nonlinear ARCH with one shift                                  narchk() [garch()]
A-PARCH, asymmetric power ARCH (Ding, Granger, and Engle 1993)         aparch() [pgarch()]
NPARCH, nonlinear power ARCH                                           nparch() [pgarch()]


In all cases, you type

    . arch depvar [indepvars], options

where options are chosen from the table above. Each option requires that you specify as its argument
a numlist that specifies the lags to be included. For most ARCH models, that value will be 1. For
instance, to fit the classic first-order GARCH model on cpi, you would type

    . arch cpi, arch(1) garch(1)

If you wanted to fit a first-order GARCH model of cpi on wage, you would type

    . arch cpi wage, arch(1) garch(1)

If, for any of the options, you want first- and second-order terms, specify optionname(1/2). Specifying
garch(1) arch(1/2) would fit a GARCH model with first- and second-order ARCH terms. If you
specified arch(2), only the lag 2 term would be included.


Reading arch output


The regression table reported by arch when using the normal distribution for the errors will appear
as

  op.depvar        Coef.   Std. Err.      z   P>|z|   [95% Conf. Interval]
  -------------------------------------------------------------------------
  depvar
        x1             #         ...
        x2
         L1.           #         ...
         L2.           #         ...
       _cons         ...
  -------------------------------------------------------------------------
  ARCHM
      sigma2         ...
  -------------------------------------------------------------------------
  ARMA
          ar
         L1.         ...
          ma
         L1.         ...
  -------------------------------------------------------------------------
  HET
        z1             #         ...
        z2
         L1.           #         ...
         L2.           #         ...
  -------------------------------------------------------------------------
  ARCH
        arch
         L1.         ...
       garch
         L1.         ...
      aparch
         L1.         ...
        etc.
       _cons         ...
  -------------------------------------------------------------------------
  POWER
       power         ...
  -------------------------------------------------------------------------
Dividing lines separate equations.


The first one, two, or three equations report the mean model:

    y_t = x_t β + Σ_i ψ_i g(σ_{t-i}^2) + ARMA(p, q) + ε_t

The first equation reports β, and the equation will be named [depvar]; if you fit a model on d.cpi,
the first equation would be named [cpi]. In Stata, the coefficient on x1 in the above example could
be referred to as [depvar]_b[x1]. The coefficient on the lag 2 value of x2 would be referred to
as [depvar]_b[L2.x2]. Such notation would be used, for instance, in a later test command; see
[R] test.


The [ARCHM] equation reports the ψ coefficients if your model includes ARCH-in-mean terms;
see options discussed under the Model 2 tab below. Most ARCH-in-mean models include only a
contemporaneous variance term, so the term Σ_i ψ_i g(σ_{t-i}^2) becomes ψ σ_t^2. The coefficient ψ will
be [ARCHM]_b[sigma2]. If your model includes lags of σ_t^2, the additional coefficients will be
[ARCHM]_b[L1.sigma2], and so on. If you specify a transformation g() (option archmexp()),
the coefficients will be [ARCHM]_b[sigma2ex], [ARCHM]_b[L1.sigma2ex], and so on. sigma2ex
refers to g(σ_t^2), the transformed value of the conditional variance.
The [ARMA] equation reports the ARMA coefficients if your model includes them; see options discussed
under the Model 2 tab below. This equation includes one or two variables named ar and ma. In
later test statements, you could refer to the coefficient on the first lag of the autoregressive term
by typing [ARMA]_b[L1.ar] or simply [ARMA]_b[L.ar] (the L operator is assumed to be lag 1 if
you do not specify otherwise). The second lag on the moving-average term, if there were one, could
be referred to by typing [ARMA]_b[L2.ma].
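For instance, assuming the fitted model includes an AR(1) term, you could type

    . test [ARMA]_b[L.ar] = 0

to test that the first autoregressive coefficient is zero.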
The next one, two, or three equations report the variance model.
The [HET] equation reports the multiplicative heteroskedasticity if the model includes it. When
you fit such a model, you specify the variables (and their lags), determining the multiplicative
heteroskedasticity; after estimation, their coefficients are simply [HET] b[op.varname].
The [ARCH] equation reports the ARCH, GARCH, etc., terms by referring to variables arch, garch, and so on. For instance, if you specified arch(1) garch(1) when you fit the model, the conditional variance is given by σ²_t = γ0 + γ1,1 ε²_{t-1} + γ2,1 σ²_{t-1}. The coefficients would be named [ARCH]_b[_cons] (γ0), [ARCH]_b[L.arch] (γ1,1), and [ARCH]_b[L.garch] (γ2,1).
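Because these equation and coefficient names are accepted wherever coefficients may be referenced, derived quantities can be computed directly. For instance, a short sketch of displaying the implied volatility persistence (the sum of the ARCH and GARCH coefficients) after fitting the arch(1) garch(1) model just described is

. * persistence implied by the fitted GARCH(1,1) variance equation
. display [ARCH]_b[L.arch] + [ARCH]_b[L.garch]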
The [POWER] equation appears only if you are fitting a variance model in the form of (3) above; the estimated φ is the coefficient [POWER]_b[power].
Also, if you use the distribution() option and specify either Student's t or the generalized error distribution but do not specify the degree-of-freedom or shape parameter, then you will see two additional rows in the table. The final row contains the estimated degree-of-freedom or shape parameter. Immediately preceding the final row is a transformed version of the parameter that arch used during estimation to ensure that the degree-of-freedom parameter is greater than two or that the shape parameter is positive.
The naming convention for estimated ARCH, GARCH, etc., parameters is as follows (definitions for parameters α_i, γ_i, and κ_i can be found in the tables for A(), B(), C(), and D() above):



Option        1st parameter                  2nd parameter                   Common parameter
---------------------------------------------------------------------------------------------
arch()        α1  = [ARCH]_b[arch]
garch()       α2  = [ARCH]_b[garch]
saarch()      α3  = [ARCH]_b[saarch]
tarch()       α4  = [ARCH]_b[tarch]
aarch()       α5  = [ARCH]_b[aarch]           γ5  = [ARCH]_b[aarch_e]
narch()       α6  = [ARCH]_b[narch]           κ6  = [ARCH]_b[narch_k]
narchk()      α7  = [ARCH]_b[narch]           κ7  = [ARCH]_b[narch_k]

abarch()      α8  = [ARCH]_b[abarch]
atarch()      α9  = [ARCH]_b[atarch]
sdgarch()     α10 = [ARCH]_b[sdgarch]

earch()       α11 = [ARCH]_b[earch]           γ11 = [ARCH]_b[earch_a]
egarch()      α12 = [ARCH]_b[egarch]

parch()       α13 = [ARCH]_b[parch]                                           φ = [POWER]_b[power]
tparch()      α14 = [ARCH]_b[tparch]                                          φ = [POWER]_b[power]
aparch()      α15 = [ARCH]_b[aparch]          γ15 = [ARCH]_b[aparch_e]        φ = [POWER]_b[power]
nparch()      α16 = [ARCH]_b[nparch]          κ16 = [ARCH]_b[nparch_k]        φ = [POWER]_b[power]
nparchk()     α17 = [ARCH]_b[nparch]          κ17 = [ARCH]_b[nparch_k]        φ = [POWER]_b[power]
pgarch()      α18 = [ARCH]_b[pgarch]                                          φ = [POWER]_b[power]

Options


Model

noconstant; see [R] estimation options.


arch(numlist) specifies the ARCH terms (lags of ε²_t).
Specify arch(1) to include first-order terms, arch(1/2) to specify first- and second-order terms,
arch(1/3) to specify first-, second-, and third-order terms, etc. Terms may be omitted. Specify
arch(1/3 5) to specify terms with lags 1, 2, 3, and 5. All the options work this way.
arch() may not be specified with aarch(), narch(), narchk(), nparchk(), or nparch(), as
this would result in collinear terms.
garch(numlist) specifies the GARCH terms (lags of σ²_t).
saarch(numlist) specifies the simple asymmetric ARCH terms. Adding these terms is one way to
make the standard ARCH and GARCH models respond asymmetrically to positive and negative
innovations. Specifying saarch() with arch() and garch() corresponds to the SAARCH model
of Engle (1990).
saarch() may not be specified with narch(), narchk(), nparchk(), or nparch(), as this
would result in collinear terms.
tarch(numlist) specifies the threshold ARCH terms. Adding these is another way to make the
standard ARCH and GARCH models respond asymmetrically to positive and negative innovations.
Specifying tarch() with arch() and garch() corresponds to one form of the GJR model (Glosten,
Jagannathan, and Runkle 1993).
tarch() may not be specified with tparch() or aarch(), as this would result in collinear terms.
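For example, a minimal sketch of fitting this form of the GJR model, using a hypothetical dependent variable y, is

. arch y, arch(1) tarch(1) garch(1)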
aarch(numlist) specifies the lags of the two-parameter term α_i(|ε_t| + γ_i ε_t)². This term provides the
same underlying form of asymmetry as including arch() and tarch(), but it is expressed in a
different way.
aarch() may not be specified with arch() or tarch(), as this would result in collinear terms.


narch(numlist) specifies the lags of the two-parameter term α_i(ε_t - κ_i)². This term allows the minimum conditional variance to occur at a value of lagged innovations other than zero. For any term specified at lag L, the minimum contribution to conditional variance of that lag occurs when ε²_{t-L} = κ_L, that is, when the squared innovations at that lag are equal to the estimated constant κ_L.
narch() may not be specified with arch(), saarch(), narchk(), nparchk(), or nparch(),
as this would result in collinear terms.
narchk(numlist) specifies the lags of the two-parameter term α_i(ε_t - κ)²; this is a variation of narch() with κ held constant for all lags.
narchk() may not be specified with arch(), saarch(), narch(), nparchk(), or nparch(),
as this would result in collinear terms.
abarch(numlist) specifies lags of the term |ε_t|.
atarch(numlist) specifies lags of |ε_t|(ε_t > 0), where (ε_t > 0) represents the indicator function returning 1 when true and 0 when false. Like the TARCH terms, these ATARCH terms allow the effect of unanticipated innovations to be asymmetric about zero.
sdgarch(numlist) specifies lags of σ_t. Combining atarch(), abarch(), and sdgarch() produces the model by Zakoian (1994) that the author called the TARCH model. The acronym TARCH, however, refers to any model using thresholding to obtain asymmetry.
earch(numlist) specifies lags of the two-parameter term α z_t + γ(|z_t| - √(2/π)). These terms represent the influence of news (lagged innovations) in Nelson's (1991) EGARCH model. For these terms, z_t = ε_t/σ_t, and arch assumes z_t ~ N(0, 1). Nelson derived the general form of an EGARCH model for any assumed distribution and performed estimation assuming a generalized error distribution (GED). See Hamilton (1994) for a derivation where z_t is assumed normal. The z_t terms can be parameterized in either of these two equivalent ways. arch uses Nelson's original parameterization; see Hamilton (1994) for an equivalent alternative.
egarch(numlist) specifies lags of ln(σ²_t).

For the following options, the model is parameterized in terms of h(ε_t) and σ^φ_t. One φ is estimated, even when more than one option is specified.
parch(numlist) specifies lags of |ε_t|^φ. parch() combined with pgarch() corresponds to the class of nonlinear models of conditional variance suggested by Higgins and Bera (1992).
tparch(numlist) specifies lags of (ε_t > 0)|ε_t|^φ, where (ε_t > 0) represents the indicator function returning 1 when true and 0 when false. As with tarch(), tparch() specifies terms that allow for a differential impact of good (positive innovations) and bad (negative innovations) news for lags specified by numlist.
tparch() may not be specified with tarch(), as this would result in collinear terms.
aparch(numlist) specifies lags of the two-parameter term α(|ε_t| + γ ε_t)^φ. This asymmetric power
ARCH model, A-PARCH, was proposed by Ding, Granger, and Engle (1993) and corresponds to
a BoxCox function in the lagged innovations. The authors fit the original A-PARCH model on
more than 16,000 daily observations of the Standard and Poors 500, and for good reason. As the
number of parameters and the flexibility of the specification increase, more data are required to
estimate the parameters of the conditional heteroskedasticity. See Ding, Granger, and Engle (1993)
for a discussion of how seven popular ARCH models nest within the A-PARCH model.
When γ goes to 1, the full term goes to zero for many observations and can then be numerically unstable.


nparch(numlist) specifies lags of the two-parameter term α|ε_t - κ_i|^φ.


nparch() may not be specified with arch(), saarch(), narch(), narchk(), or nparchk(),
as this would result in collinear terms.
nparchk(numlist) specifies lags of the two-parameter term α|ε_t - κ|^φ; this is a variation of nparch() with κ held constant for all lags. This is the direct analog of narchk(), except for the power of φ. nparchk() corresponds to an extended form of the model of Higgins and Bera (1992) as presented by Bollerslev, Engle, and Nelson (1994). nparchk() would typically be combined with the pgarch() option.
nparchk() may not be specified with arch(), saarch(), narch(), narchk(), or nparch(),
as this would result in collinear terms.

pgarch(numlist) specifies lags of σ^φ_t.


constraints(constraints), collinear; see [R] estimation options.

Model 2

archm specifies that an ARCH-in-mean term be included in the specification of the mean equation. This
term allows the expected value of depvar to depend on the conditional variance. ARCH-in-mean is
most commonly used in evaluating financial time series when a theory supports a tradeoff between
asset risk and return. By default, no ARCH-in-mean terms are included in the model.
archm specifies that the contemporaneous expected conditional variance be included in the mean
equation. For example, typing
. arch y x, archm arch(1)

specifies the model

    y_t  = β0 + β1 x_t + ψ σ²_t + ε_t
    σ²_t = γ0 + γ ε²_{t-1}
archmlags(numlist) is an expansion of archm that includes lags of the conditional variance σ²_t in the mean equation. To specify a contemporaneous and once-lagged variance, specify either archm archmlags(1) or archmlags(0/1), as illustrated below.
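For example, assuming hypothetical variables y and x, either of the following commands includes both the contemporaneous and the once-lagged conditional variance in the mean equation:

. arch y x, archm archmlags(1) arch(1) garch(1)
. arch y x, archmlags(0/1) arch(1) garch(1)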
archmexp(exp) applies the transformation in exp to any ARCH-in-mean terms in the model. The
expression should contain an X wherever a value of the conditional variance is to enter the expression.
This option can be used to produce the commonly used ARCH-in-mean of the conditional standard
deviation. With the example from archm, typing
. arch y x, archm arch(1) archmexp(sqrt(X))

specifies the mean equation y_t = β0 + β1 x_t + ψ σ_t + ε_t. Alternatively, typing

. arch y x, archm arch(1) archmexp(1/sqrt(X))

specifies y_t = β0 + β1 x_t + ψ/σ_t + ε_t.
arima(#p,#d,#q) is an alternative, shorthand notation for specifying autoregressive models in the dependent variable. The dependent variable and any independent variables are differenced #d times, 1 through #p lags of autocorrelations are included, and 1 through #q lags of moving averages are included. For example, the specification
. arch y, arima(2,1,3)

is equivalent to
. arch D.y, ar(1/2) ma(1/3)


The former is easier to write for classic ARIMA models of the mean equation, but it is not nearly
as expressive as the latter. If gaps in the AR or MA lags are to be modeled, or if different operators
are to be applied to independent variables, the latter syntax is required.
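For instance, a seasonal specification with a gap in the AR lags, using a hypothetical series y, can be requested only with the longhand syntax:

. arch D.y, ar(1 4) ma(1) arch(1) garch(1)

arima() cannot express the skipped lags 2 and 3.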
ar(numlist) specifies the autoregressive terms of the structural model disturbance to be included in
the model. For example, ar(1/3) specifies that lags 1, 2, and 3 of the structural disturbance be
included in the model. ar(1,4) specifies that lags 1 and 4 be included, possibly to account for
quarterly effects.
If the model does not contain regressors, these terms can also be considered autoregressive terms
for the dependent variable; see [TS] arima.
ma(numlist) specifies the moving-average terms to be included in the model. These are the terms for
the lagged innovations or white-noise disturbances.

Model 3

 
distribution(dist [#]) specifies the distribution to assume for the error term. dist may be gaussian, normal, t, or ged. gaussian and normal are synonyms, and # cannot be specified with them.
If distribution(t) is specified, arch assumes that the errors follow Student's t distribution, and the degree-of-freedom parameter is estimated along with the other parameters of the model. If distribution(t #) is specified, then arch uses Student's t distribution with # degrees of freedom. # must be greater than 2.
If distribution(ged) is specified, arch assumes that the errors have a generalized error distribution, and the shape parameter is estimated along with the other parameters of the model. If distribution(ged #) is specified, then arch uses the generalized error distribution with shape parameter #. # must be positive. The generalized error distribution is identical to the normal distribution when the shape parameter equals 2.
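For example, a sketch of fitting a GARCH(1,1) model of a hypothetical series y with t-distributed errors, first estimating the degree-of-freedom parameter and then fixing it at 5, is

. arch D.y, arch(1) garch(1) distribution(t)
. arch D.y, arch(1) garch(1) distribution(t 5)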
het(varlist) specifies that varlist be included in the specification of the conditional variance. varlist
may contain time-series operators. This varlist enters the variance specification collectively as
multiplicative heteroskedasticity; see Judge et al. (1985). If het() is not specified, the model will
not contain multiplicative heteroskedasticity.
Assume that the conditional variance depends on variables x and w and has an ARCH(1) component.
We request this specification by using the het(x w) arch(1) options, and this corresponds to the
conditional-variance model

    σ²_t = exp(λ0 + λ1 x_t + λ2 w_t) + α ε²_{t-1}

Multiplicative heteroskedasticity enters differently with an EGARCH model because the variance is already specified in logs. For the het(x w) earch(1) egarch(1) options, the variance model is

    ln(σ²_t) = λ0 + λ1 x_t + λ2 w_t + α z_{t-1} + γ(|z_{t-1}| - √(2/π)) + δ ln(σ²_{t-1})

savespace conserves memory by retaining only those variables required for estimation. The original
dataset is restored after estimation. This option is rarely used and should be specified only if
there is insufficient memory to fit a model without the option. arch requires considerably more
temporary storage during estimation than most estimation commands in Stata.


Priming

arch0(cond_method) is a rarely used option that specifies how to compute the conditioning (presample or priming) values for σ²_t and ε²_t. In the presample period, it is assumed that σ²_t = ε²_t and that this value is constant. If arch0() is not specified, the priming values are computed as the expected unconditional variance given the current estimates of the β coefficients and any ARMA parameters.
arch0(xb), the default, specifies that the priming values are the expected unconditional variance of the model, which is Σ_{t=1}^{T} ε̂²_t / T, where ε̂_t is computed from the mean equation and any ARMA terms.
arch0(xb0) specifies that the priming values are the estimated variance of the residuals from an OLS estimate of the mean equation.
arch0(xbwt) specifies that the priming values are the weighted sum of the ε̂²_t from the current conditional mean equation (and ARMA terms) that places more weight on estimates of ε²_t at the beginning of the sample.
arch0(xb0wt) specifies that the priming values are the weighted sum of the ε̂²_t from an OLS estimate of the mean equation (and ARMA terms) that places more weight on estimates of ε²_t at the beginning of the sample.
arch0(zero) specifies that the priming values are 0. Unlike the priming values for ARIMA models, 0 is generally not a consistent estimate of the presample conditional variance or squared innovations.
arch0(#) specifies that σ²_t = ε²_t = # for any specified nonnegative #. Thus arch0(0) is equivalent to arch0(zero).
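For example, a sketch of priming the recursions with the OLS residual variance rather than the default, using a hypothetical series y, is

. arch D.y, arch(1) garch(1) arch0(xb0)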
arma0(cond_method) is a rarely used option that specifies how the ε_t values are initialized at the beginning of the sample for the ARMA component, if the model has one. This option has an effect only when AR or MA terms are included in the model (the ar(), ma(), or arima() options specified).
arma0(zero), the default, specifies that all priming values of ε_t be taken as 0. This fits the model over the entire requested sample and takes ε_t as its expected value of 0 for all lags required by the ARMA terms; see Judge et al. (1985).
arma0(p), arma0(q), and arma0(pq) specify that estimation begin after priming the recursions for a certain number of observations. p specifies that estimation begin after the pth observation in the sample, where p is the maximum AR lag in the model; q specifies that estimation begin after the qth observation in the sample, where q is the maximum MA lag in the model; and pq specifies that estimation begin after the (p + q)th observation in the sample.
During the priming period, the recursions necessary to generate predicted disturbances are performed, but results are used only to initialize preestimation values of ε_t. To understand the definition of preestimation, say that you fit a model in 10/100. If the model is specified with ar(1,2), preestimation refers to observations 10 and 11.
The ARCH terms σ²_t and ε²_t are also updated over these observations. Any required lags of ε_t before the priming period are taken to be their expected value of 0, and ε²_t and σ²_t take the values specified in arch0().
arma0(#) specifies that the presample values of ε_t are to be taken as # for all lags required by the ARMA terms. Thus arma0(0) is equivalent to arma0(zero).
condobs(#) is a rarely used option that specifies a fixed number of conditioning observations at the start of the sample. Over these priming observations, the recursions necessary to generate predicted disturbances are performed, but only to initialize preestimation values of ε_t, ε²_t, and σ²_t.


Any required lags of ε_t before the initialization period are taken to be their expected value of 0 (or the value specified in arma0()), and required values of ε²_t and σ²_t assume the values specified by arch0(). condobs() can be used if conditioning observations are desired for the lags in the ARCH terms of the model. If arma() is also specified, the maximum number of conditioning observations required by arma() and condobs(#) is used.
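For example, a sketch of setting aside the first two observations of a hypothetical series y as conditioning observations is

. arch D.y, arch(2) garch(1) condobs(2)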

SE/Robust

vce(vcetype) specifies the type of standard error reported, which includes types that are robust to
some kinds of misspecification (robust) and that are derived from asymptotic theory (oim, opg);
see [R] vce option.
For ARCH models, the robust or quasi-maximum likelihood estimates (QMLE) of variance are robust to symmetric nonnormality in the disturbances. The robust variance estimates generally are not robust to functional misspecification of the mean equation; see Bollerslev and Wooldridge (1992).
The robust variance estimates computed by arch are based on the full Huber/White/sandwich
formulation, as discussed in [P] robust. Many other software packages report robust estimates
that set some terms to their expectations of zero (Bollerslev and Wooldridge 1992), which saves
them from calculating second derivatives of the log-likelihood function.

Reporting

level(#); see [R] estimation options.


detail specifies that a detailed list of any gaps in the series be reported, including gaps due to
missing observations or missing data for the dependent variable or independent variables.
nocnsreport; see [R] estimation options.
display_options: noci, nopvalues, vsquish, cformat(%fmt), pformat(%fmt), sformat(%fmt), and nolstretch; see [R] estimation options.

Maximization

 
maximize_options: difficult, technique(algorithm_spec), iterate(#), [no]log, trace, gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#), gtolerance(#), nrtolerance(#), nonrtolerance, and from(init_specs); see [R] maximize for all options except gtolerance(), and see below for information on gtolerance().
These options are often more important for ARCH models than for other maximum likelihood models because of convergence problems associated with ARCH models; ARCH model likelihoods are notoriously difficult to maximize.
Setting technique() to something other than the default or BHHH changes the vcetype to vce(oim).
The following options are all related to maximization and are either particularly important in fitting
ARCH models or not available for most other estimators.
gtolerance(#) specifies the tolerance for the gradient relative to the coefficients. When |g_i × b_i| < gtolerance() for all parameters b_i and the corresponding elements of the gradient g_i, the gradient tolerance criterion is met. The default gradient tolerance for arch is gtolerance(.05).
gtolerance(999) may be specified to disable the gradient criterion. If the optimizer becomes
stuck with repeated (backed up) messages, the gradient probably still contains substantial
values, but an uphill direction cannot be found for the likelihood. With this option, results can
often be obtained, but whether the global maximum likelihood has been found is unclear.


When the maximization is not going well, it is also possible to set the maximum number of
iterations (see [R] maximize) to the point where the optimizer appears to be stuck and to inspect
the estimation results at that point.
from(init specs) specifies the initial values of the coefficients. ARCH models may be sensitive
to initial values and may have coefficient values that correspond to local maximums. The
default starting values are obtained via a series of regressions, producing results that, on the basis of asymptotic theory, are consistent for the β and ARMA parameters and generally reasonable for the rest. Nevertheless, these values may not always be feasible in that the
likelihood function cannot be evaluated at the initial values arch first chooses. In such cases,
the estimation is restarted with ARCH and ARMA parameters initialized to zero. It is possible,
but unlikely, that even these values will be infeasible and that you will have to supply initial
values yourself.
The standard syntax for from() accepts a matrix, a list of values, or coefficient name value
pairs; see [R] maximize. arch also allows the following:
from(archb0) sets the starting value for all the ARCH/GARCH/... parameters in the conditional-variance equation to 0.
from(armab0) sets the starting value for all ARMA parameters in the model to 0.
from(archb0 armab0) sets the starting value for all ARCH/GARCH/. . . and ARMA parameters
to 0.
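For example, if the default starting values prove infeasible for some model (a hypothetical situation), you could restart the maximization with the ARCH and ARMA parameters initialized to zero by typing

. arch D.y, ar(1) arch(1) garch(1) from(archb0 armab0)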
The following option is available with arch but is not shown in the dialog box:
coeflegend; see [R] estimation options.

Remarks and examples


The volatility of a series is not constant through time; periods of relatively low volatility and periods
of relatively high volatility tend to be grouped together. This is a commonly observed characteristic
of economic time series and is even more pronounced in many frequently sampled financial series.
ARCH models seek to estimate this time-dependent volatility as a function of observed prior volatility.
Sometimes the model of volatility is of more interest than the model of the conditional mean. As
implemented in arch, the volatility model may also include regressors to account for a structural
component in the volatility, usually referred to as multiplicative heteroskedasticity.
ARCH models were introduced by Engle (1982) in a study of inflation rates, and there has since
been a barrage of proposed parametric and nonparametric specifications of autoregressive conditional
heteroskedasticity. Overviews of the literature can be found in Bollerslev, Engle, and Nelson (1994) and
Bollerslev, Chou, and Kroner (1992). Introductions to basic ARCH models appear in many general
econometrics texts, including Davidson and MacKinnon (1993), Greene (2012), Kmenta (1997), Stock
and Watson (2015), and Wooldridge (2016). Harvey (1989) and Enders (2004) provide introductions
to ARCH in the larger context of econometric time-series modeling, and Hamilton (1994) gives
considerably more detail in the same context. Becketti (2013, chap. 8) provides a simple introduction to ARCH modeling with an emphasis on how to use Stata's arch command.

arch fits models of autoregressive conditional heteroskedasticity (ARCH, GARCH, etc.) using conditional maximum likelihood. By conditional, we mean that the likelihood is computed based on an assumed or estimated set of priming values for the squared innovations ε²_t and variances σ²_t prior to the estimation sample; see Hamilton (1994) or Bollerslev (1986). Sometimes more conditioning is done on the first a, g, or a + g observations in the sample, where a is the maximum ARCH term lag and g is the maximum GARCH term lag (or the maximum lags from the other ARCH family terms).


The original ARCH model proposed by Engle (1982) modeled the variance of a regression model's disturbances as a linear function of lagged values of the squared regression disturbances. We can write an ARCH(m) model as

    y_t  = x_t β + ε_t                                              (conditional mean)
    σ²_t = γ0 + γ1 ε²_{t-1} + γ2 ε²_{t-2} + ... + γm ε²_{t-m}       (conditional variance)

where

    ε²_t is the squared residual (or innovation)
    γ_i are the ARCH parameters

The ARCH model has a specification for both the conditional mean and the conditional variance, and the variance is a function of the size of prior unanticipated innovations ε²_t. This model was generalized by Bollerslev (1986) to include lagged values of the conditional variance, yielding the GARCH model. The GARCH(m, k) model is written as

    y_t  = x_t β + ε_t
    σ²_t = γ0 + γ1 ε²_{t-1} + γ2 ε²_{t-2} + ... + γm ε²_{t-m} + δ1 σ²_{t-1} + δ2 σ²_{t-2} + ... + δk σ²_{t-k}

where

    γ_i are the ARCH parameters
    δ_i are the GARCH parameters
In his pioneering work, Engle (1982) assumed that the error term, ε_t, followed a Gaussian (normal) distribution: ε_t ~ N(0, σ²_t). However, as Mandelbrot (1963) and many others have noted, the distribution of stock returns appears to be leptokurtotic, meaning that extreme stock returns are more frequent than would be expected if the returns were normally distributed. Researchers have therefore assumed other distributions that can have fatter tails than the normal distribution; arch allows you to fit models assuming the errors follow Student's t distribution or the generalized error distribution. The t distribution has fatter tails than the normal distribution; as the degree-of-freedom parameter approaches infinity, the t distribution converges to the normal distribution. The generalized error distribution's tails are fatter than the normal distribution's when the shape parameter is less than two and are thinner than the normal distribution's when the shape parameter is greater than two.
The GARCH model of conditional variance can be considered an ARMA process in the squared innovations, although not in the variances as the equations might seem to suggest; see Hamilton (1994). Specifically, the standard GARCH model implies that the squared innovations result from

    ε²_t = γ0 + (γ1 + δ1)ε²_{t-1} + (γ2 + δ2)ε²_{t-2} + ... + (γk + δk)ε²_{t-k} + w_t - δ1 w_{t-1} - δ2 w_{t-2} - ... - δk w_{t-k}

where

    w_t = ε²_t - σ²_t
    w_t is a white-noise process that is fundamental for ε²_t

One of the primary benefits of the GARCH specification is its parsimony in identifying the conditional
variance. As with ARIMA models, the ARMA specification in GARCH allows the conditional variance
to be modeled with fewer parameters than with an ARCH specification alone. Empirically, many series
with a conditionally heteroskedastic disturbance have been adequately modeled with a GARCH(1,1)
specification.


An ARMA process in the disturbances can easily be added to the mean equation. For example, the mean equation can be written with an ARMA(1, 1) disturbance as

    y_t = x_t β + ρ(y_{t-1} - x_{t-1} β) + θ ε_{t-1} + ε_t

with an obvious generalization to ARMA(p, q) by adding terms; see [TS] arima for more discussion of this specification. This change affects only the conditional-variance specification in that ε²_t now results from a different specification of the conditional mean.
Much of the literature on ARCH models focuses on alternative specifications of the variance equation.
arch allows many of these specifications to be requested using the saarch() through pgarch()
options, which imply that one or more terms may be changed or added to the specification of the
variance equation.
These alternative specifications also address asymmetry. Both the ARCH and GARCH specifications
imply a symmetric impact of innovations. Whether an innovation ε_t is positive or negative makes no difference to the expected variance σ²_t in the ensuing periods; only the size of the innovation matters: good news and bad news have the same effect.
and negative innovations should vary in their impact. For risk-averse investors, a large unanticipated
drop in the market is more likely to lead to higher volatility than a large unanticipated increase (see
Black [1976], Nelson [1991]). saarch(), tarch(), aarch(), abarch(), earch(), aparch(), and
tparch() allow various specifications of asymmetric effects.
narch(), narchk(), nparch(), and nparchk() imply an asymmetric impact of a specific form.
All the models considered so far have a minimum conditional variance when the lagged innovations
are all zero. No news is good news when it comes to keeping the conditional variance small.
narch(), narchk(), nparch(), and nparchk() also have a symmetric response to innovations,
but they are not centered at zero. The entire news-response function (response to innovations) is
shifted horizontally so that minimum variance lies at some specific positive or negative value for prior
innovations.
ARCH-in-mean models allow the conditional variance of the series to influence the conditional mean. This is particularly convenient for modeling the risk-return relationship in financial series; the riskier an investment, with all else equal, the lower its price and the higher its expected return. ARCH-in-mean models modify the specification of the conditional mean equation to be

    y_t = x_t β + ψ σ²_t + ε_t                                      (ARCH-in-mean)

Although this linear form in the current conditional variance has dominated the literature, arch allows the conditional variance to enter the mean equation through a nonlinear transformation g() and for this transformed term to be included contemporaneously or lagged.

    y_t = x_t β + ψ0 g(σ²_t) + ψ1 g(σ²_{t-1}) + ψ2 g(σ²_{t-2}) + ... + ε_t

Square root is the most commonly used g() transformation because researchers want to include a
linear term for the conditional standard deviation, but any transform g() is allowed.

Example 1: ARCH model


Consider a simple model of the U.S. Wholesale Price Index (WPI) (Enders 2004, 87–93), which we also consider in [TS] arima. The data are quarterly over the period 1960q1 through 1990q4. In [TS] arima, we fit a model of the continuously compounded rate of change in the WPI, ln(WPI_t) - ln(WPI_{t-1}). The graph of the differenced series (see [TS] arima) clearly shows periods
of high volatility and other periods of relative tranquility. This makes the series a good candidate for
ARCH modeling. Indeed, price indices have been a common target of ARCH models. Engle (1982)
presented the original ARCH formulation in an analysis of U.K. inflation rates.


First, we fit a constant-only model by OLS and test ARCH effects by using Engle's Lagrange multiplier test (estat archlm).
. use http://www.stata-press.com/data/r14/wpi1
. regress D.ln_wpi
      Source |       SS           df       MS      Number of obs   =       123
-------------+----------------------------------   F(0, 122)       =      0.00
       Model |           0         0           .   Prob > F        =         .
    Residual |   .02521709       122  .000206697   R-squared       =    0.0000
-------------+----------------------------------   Adj R-squared   =    0.0000
       Total |   .02521709       122  .000206697   Root MSE        =    .01438

------------------------------------------------------------------------------
    D.ln_wpi |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       _cons |   .0108215   .0012963     8.35   0.000     .0082553    .0133878
------------------------------------------------------------------------------

. estat archlm, lags(1)


LM test for autoregressive conditional heteroskedasticity (ARCH)
---------------------------------------------------------------------------
    lags(p)  |          chi2               df                 Prob > chi2
-------------+-------------------------------------------------------------
       1     |         8.366                1                   0.0038
---------------------------------------------------------------------------
         H0: no ARCH effects      vs.      H1: ARCH(p) disturbance

Because the LM test shows a p-value of 0.0038, which is well below 0.05, we reject the null hypothesis of no ARCH(1) effects. Thus we can further estimate the ARCH(1) parameter by specifying arch(1). See [R] regress postestimation time series for more information on Engle's LM test.
The first-order generalized ARCH model (GARCH, Bollerslev 1986) is the most commonly used
specification for the conditional variance in empirical work and is typically written GARCH(1, 1). We
can estimate a GARCH(1, 1) process for the log-differenced series by typing
. arch D.ln_wpi, arch(1) garch(1)
(setting optimization to BHHH)
Iteration 0:   log likelihood =  355.23458
Iteration 1:   log likelihood =  365.64586
 (output omitted )
Iteration 10:  log likelihood =  373.23397

ARCH family regression
Sample: 1960q2 - 1990q4                         Number of obs     =        123
Distribution: Gaussian                          Wald chi2(.)      =          .
Log likelihood =  373.234                       Prob > chi2       =          .

------------------------------------------------------------------------------
             |                 OPG
    D.ln_wpi |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_wpi       |
       _cons |   .0061167   .0010616     5.76   0.000     .0040361    .0081974
-------------+----------------------------------------------------------------
ARCH         |
        arch |
         L1. |   .4364123   .2437428     1.79   0.073    -.0413147    .9141394
             |
       garch |
         L1. |   .4544606   .1866606     2.43   0.015     .0886127    .8203086
             |
       _cons |   .0000269   .0000122     2.20   0.028     2.97e-06    .0000508
------------------------------------------------------------------------------


We have estimated the ARCH(1) parameter to be 0.436 and the GARCH(1) parameter to be 0.454, so our fitted GARCH(1, 1) model is

    y_t  = 0.0061 + ε_t
    σ²_t = 0.436 ε²_{t-1} + 0.454 σ²_{t-1}

where y_t = ln(wpi_t) - ln(wpi_{t-1}).


The model Wald test and probability are both reported as missing (.). By convention, Stata reports
the model test for the mean equation. Here and fairly often for ARCH models, the mean equation
consists only of a constant, and there is nothing to test.

Example 2: ARCH model with ARMA process


We can retain the GARCH(1, 1) specification for the conditional variance and model the mean as
an ARMA process with AR(1) and MA(1) terms as well as a fourth-lag MA term to control for quarterly
seasonal effects by typing



. arch D.ln_wpi, ar(1) ma(1 4) arch(1) garch(1)
(setting optimization to BHHH)
Iteration 0:   log likelihood =   380.9997
Iteration 1:   log likelihood =  388.57823
Iteration 2:   log likelihood =  391.34143
Iteration 3:   log likelihood =  396.36991
Iteration 4:   log likelihood =  398.01098
(switching optimization to BFGS)
Iteration 5:   log likelihood =  398.23668
BFGS stepping has contracted, resetting BFGS Hessian (0)
Iteration 6:   log likelihood =  399.21497
Iteration 7:   log likelihood =  399.21537  (backed up)
 (output omitted )
(switching optimization to BHHH)
Iteration 15:  log likelihood =  399.51441
Iteration 16:  log likelihood =  399.51443
Iteration 17:  log likelihood =  399.51443

ARCH family regression -- ARMA disturbances
Sample: 1960q2 - 1990q4                         Number of obs     =        123
Distribution: Gaussian                          Wald chi2(3)      =     153.56
Log likelihood = 399.5144                       Prob > chi2       =     0.0000

------------------------------------------------------------------------------
             |                 OPG
    D.ln_wpi |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_wpi       |
       _cons |   .0069541   .0039517     1.76   0.078     -.000791    .0146992
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .7922674   .1072225     7.39   0.000     .5821153     1.00242
             |
          ma |
         L1. |   -.341774   .1499943    -2.28   0.023    -.6357574   -.0477905
         L4. |   .2451724   .1251131     1.96   0.050    -.0000447    .4903896
-------------+----------------------------------------------------------------
ARCH         |
        arch |
         L1. |   .2040449   .1244991     1.64   0.101    -.0399688    .4480587
             |
       garch |
         L1. |   .6949687   .1892176     3.67   0.000     .3241091    1.065828
             |
       _cons |   .0000119   .0000104     1.14   0.253    -8.52e-06    .0000324
------------------------------------------------------------------------------

To clarify exactly what we have estimated, we could write our model as

    y_t  = 0.007 + 0.792 (y_{t-1} - 0.007) - 0.342 ε_{t-1} + 0.245 ε_{t-4} + ε_t
    σ²_t = 0.204 ε²_{t-1} + 0.695 σ²_{t-1}

where y_t = ln(wpi_t) - ln(wpi_{t-1}).


The ARCH(1) coefficient, 0.204, is not significantly different from zero, but the ARCH(1) and
GARCH(1) coefficients are significant collectively. If you doubt this, you can check with test.


. test [ARCH]L1.arch [ARCH]L1.garch

 ( 1)  [ARCH]L.arch = 0
 ( 2)  [ARCH]L.garch = 0

           chi2(  2) =   84.92
         Prob > chi2 =   0.0000

(For comparison, we fit the model over the same sample used in example 1 of [TS] arima; Enders
fits this GARCH model but over a slightly different sample.)

Technical note
The rather ugly iteration log on the previous result is typical, as difficulty in converging is common
in ARCH models. This is actually a fairly well-behaved likelihood for an ARCH model. The switching
optimization to . . . messages are standard messages from the default optimization method for arch.
The backed up messages are typical of BFGS stepping as the BFGS Hessian is often overoptimistic,
particularly during early iterations. These messages are nothing to be concerned about.
Nevertheless, watch out for the messages BFGS stepping has contracted, resetting BFGS Hessian
and backed up, which can flag problems that may result in an iteration log that goes on and on.
Stata will never report convergence and will never report final results. The question is, when do you
give up and press Break, and if you do, what then?
If the BFGS stepping has contracted message occurs repeatedly (more than, say, five times), it often indicates that convergence will never be achieved. Literally, it means that the BFGS algorithm was stuck and reset its Hessian and took a steepest-descent step.
The backed up message, if it occurs repeatedly, also indicates problems, but only if the likelihood
value is simultaneously not changing. If the message occurs repeatedly but the likelihood value is
changing, as it did above, all is going well; it is just going slowly.
If you have convergence problems, you can specify options to assist the current maximization
method or try a different method. Or, your model specification and data may simply lead to a likelihood
that is not concave in the allowable region and thus cannot be maximized.
If you see the backed up message with no change in the likelihood, you can reset the gradient
tolerance to a larger value. Specifying the gtolerance(999) option disables gradient checking,
allowing convergence to be declared more easily. This does not guarantee that convergence will be
declared, and even if it is, the global maximum likelihood may not have been found.
You can also try to specify initial values.
Finally, you can try a different maximization method; see options discussed under the Maximization
tab above.
ARCH models are notorious for having convergence difficulties. Unlike in most estimators in Stata,
it is common for convergence to require many steps or even to fail. This is particularly true of the
explicitly nonlinear terms such as aarch(), narch(), aparch(), or archm (ARCH-in-mean), and of
any model with several lags in the ARCH terms. There is not always a solution. You can try other
maximization methods or different starting values, but if your data do not support your assumed ARCH
structure, convergence simply may not be possible.
ARCH models can be susceptible to irrelevant regressors or unnecessary lags, whether in the
specification of the conditional mean or in the conditional variance. In these situations, arch will
often continue to iterate, making little to no improvement in the likelihood. We view this conservative
approach as better than declaring convergence prematurely when the likelihood has not been fully


maximized. arch is estimating the conditional form of second sample moments, often with flexible
functions, and that is asking much of the data.

Technical note
if exp and in range are interpreted differently with commands accepting time-series operators.
The time-series operators are resolved before the conditions are tested, which may lead to some
confusion. Note the results of the following list commands:
. use http://www.stata-press.com/data/r14/archxmpl
. list t y l.y in 5/10
     +-------------------------+
     |      t      y      L.y  |
     |-------------------------|
  5. | 1961q1   30.8     30.7  |
  6. | 1961q2   30.5     30.8  |
  7. | 1961q3   30.5     30.5  |
  8. | 1961q4   30.6     30.5  |
  9. | 1962q1   30.7     30.6  |
     |-------------------------|
 10. | 1962q2   30.6     30.7  |
     +-------------------------+

. keep in 5/10
(118 observations deleted)
. list t y l.y

     +-------------------------+
     |      t      y      L.y  |
     |-------------------------|
  1. | 1961q1   30.8        .  |
  2. | 1961q2   30.5     30.8  |
  3. | 1961q3   30.5     30.5  |
  4. | 1961q4   30.6     30.5  |
  5. | 1962q1   30.7     30.6  |
     |-------------------------|
  6. | 1962q2   30.6     30.7  |
     +-------------------------+

We have one more lagged observation for y in the first case: l.y was resolved before the in
restriction was applied. In the second case, the dataset no longer contains the value of y to compute
the first lag. This means that
. use http://www.stata-press.com/data/r14/archxmpl, clear
. arch y l.x if twithin(1962q2, 1990q3), arch(1)

is not the same as


. keep if twithin(1962q2, 1990q3)
. arch y l.x, arch(1)


Example 3: Asymmetric effectsEGARCH model


Continuing with the WPI data, we might be concerned that the economy as a whole responds
differently to unanticipated increases in wholesale prices than it does to unanticipated decreases.
Perhaps unanticipated increases lead to cash flow issues that affect inventories and lead to more
volatility. We can see if the data support this supposition by specifying an ARCH model that allows an
asymmetric effect of news (innovations or unanticipated changes). One of the most popular such
models is EGARCH (Nelson 1991). The full first-order EGARCH model for the WPI can be specified
as follows:
. use http://www.stata-press.com/data/r14/wpi1, clear
. arch D.ln_wpi, ar(1) ma(1 4) earch(1) egarch(1)
(setting optimization to BHHH)
Iteration 0:   log likelihood =   227.5251
Iteration 1:   log likelihood =  381.68426
 (output omitted )
Iteration 23:  log likelihood =  405.31453

ARCH family regression -- ARMA disturbances
Sample: 1960q2 - 1990q4                         Number of obs     =        123
Distribution: Gaussian                          Wald chi2(3)      =     156.02
Log likelihood = 405.3145                       Prob > chi2       =     0.0000

------------------------------------------------------------------------------
             |                 OPG
    D.ln_wpi |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_wpi       |
       _cons |   .0087342   .0034004     2.57   0.010     .0020695    .0153989
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .7692139   .0968393     7.94   0.000     .5794124    .9590154
             |
          ma |
         L1. |  -.3554623   .1265721    -2.81   0.005    -.6035391   -.1073855
         L4. |   .2414626   .0863834     2.80   0.005     .0721543    .4107709
-------------+----------------------------------------------------------------
ARCH         |
       earch |
         L1. |   .4063939     .11635     3.49   0.000     .1783521    .6344358
             |
     earch_a |
         L1. |   .2467327   .1233357     2.00   0.045     .0049993    .4884662
             |
      egarch |
         L1. |   .8417332   .0704074    11.96   0.000     .7037372    .9797291
             |
       _cons |  -1.488366   .6604354    -2.25   0.024    -2.782795   -.1939363
------------------------------------------------------------------------------

Our result for the variance is

    ln(σ²_t) = -1.49 + .406 z_{t-1} + .247 ( |z_{t-1}| - √(2/π) ) + .842 ln(σ²_{t-1})

where z_t = ε_t/σ_t, which is distributed as N(0, 1).
This is a strong indication for a leverage effect. The positive L1.earch coefficient implies that
positive innovations (unanticipated price increases) are more destabilizing than negative innovations.
The effect appears strong (0.406) and is substantially larger than the symmetric effect (0.247). In fact,
the relative scales of the two coefficients imply that the positive leverage completely dominates the
symmetric effect.


This can readily be seen if we plot what is often referred to as the news-response or news-impact function. This curve shows the resulting conditional variance as a function of unanticipated news, in the form of innovations; that is, it shows the conditional variance σ²_t as a function of ε_t. Thus we must evaluate σ²_t for various values of ε_t (say, -4 to 4) and then graph the result.
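A minimal sketch of this computation, using the coefficient estimates above and holding the lagged conditional variance at 1 (an assumption, made so that the ln(σ²_{t-1}) term drops out and z_{t-1} equals ε_{t-1}), is

. * sketch: news-impact curve implied by the EGARCH fit above; lagged variance held at 1
. twoway function y = exp(-1.488366 + .4063939*x + .2467327*(abs(x) - sqrt(2/_pi))), range(-4 4)

The pronounced asymmetry about zero is readily visible in the resulting curve.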

Example 4: Asymmetric power ARCH model


As an example of a frequently sampled, long-run series, consider the daily closing indices of the
Dow Jones Industrial Average, variable dowclose. To avoid the first half of the century, when the
New York Stock Exchange was open for Saturday trading, only data after 1jan1953 are used. The
compound return of the series is used as the dependent variable and is graphed below.

[Graph omitted: DOW, compound return on DJIA, plotted daily from 01jan1950 to 01jan1990]

We formed this difference by referring to D.ln dow, but only after playing a trick. The series is
daily, and each observation represents the Dow closing index for the day. Our data included a time
variable recorded as a daily date. We wanted, however, to model the log differences in the series,
and we wanted the span from Friday to Monday to appear as a single-period difference. That is, the
day before Monday is Friday. Because our dataset was tsset with date, the span from Friday to
Monday was 3 days. The solution was to create a second variable that sequentially numbered the
observations. By tsseting the data with this new variable, we obtained the desired differences.
. use http://www.stata-press.com/data/r14/dow1, clear
. generate t = _n
. tsset t


Now our data look like this:


. generate dayofwk = dow(date)
. list date dayofwk t ln_dow D.ln_dow in 1/8

       +----------------------------------------------------+
       |      date   dayofwk      t     ln_dow     D.ln_dow |
       |----------------------------------------------------|
    1. | 02jan1953         5      1   5.677096            . |
    2. | 05jan1953         1      2   5.682899     .0058026 |
    3. | 06jan1953         2      3   5.677439    -.0054603 |
    4. | 07jan1953         3      4   5.672636    -.0048032 |
    5. | 08jan1953         4      5   5.671259    -.0013762 |
       |----------------------------------------------------|
    6. | 09jan1953         5      6   5.661223    -.0100365 |
    7. | 12jan1953         1      7   5.653191    -.0080323 |
    8. | 13jan1953         2      8   5.659134     .0059433 |
       +----------------------------------------------------+

. list date dayofwk t ln_dow D.ln_dow in -8/l

       +----------------------------------------------------+
       |      date   dayofwk      t     ln_dow     D.ln_dow |
       |----------------------------------------------------|
 9334. | 08feb1990         4   9334   7.880188     .0016198 |
 9335. | 09feb1990         5   9335   7.881635     .0014472 |
 9336. | 12feb1990         1   9336   7.870601     -.011034 |
 9337. | 13feb1990         2   9337   7.872665     .0020638 |
 9338. | 14feb1990         3   9338   7.872577    -.0000877 |
       |----------------------------------------------------|
 9339. | 15feb1990         4   9339    7.88213      .009553 |
 9340. | 16feb1990         5   9340   7.876863    -.0052676 |
 9341. | 20feb1990         2   9341   7.862054    -.0148082 |
       +----------------------------------------------------+

The difference operator D spans weekends because the specified time variable, t, is not a true date and has a difference of 1 for all observations. We must leave this contrived time variable in place during estimation, or arch will be convinced that our dataset has gaps. If we were using calendar dates, we would indeed have gaps.
Ding, Granger, and Engle (1993) fit an A-PARCH model of daily returns of the Standard and Poor's 500 (S&P 500) for 3jan1928 to 30aug1991. We will fit the same model for the Dow data shown above. The model includes an AR(1) term as well as the A-PARCH specification of conditional variance.



. arch D.ln_dow, ar(1) aparch(1) pgarch(1)
(setting optimization to BHHH)
Iteration 0:   log likelihood =  31139.547
Iteration 1:   log likelihood =  31350.751
 (output omitted )
Iteration 68:  log likelihood =  32273.555  (backed up)
Iteration 69:  log likelihood =  32273.555

ARCH family regression -- AR disturbances
Sample: 2 - 9341                                Number of obs     =      9,340
Distribution: Gaussian                          Wald chi2(1)      =     175.46
Log likelihood = 32273.56                       Prob > chi2       =     0.0000

------------------------------------------------------------------------------
             |                 OPG
    D.ln_dow |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_dow       |
       _cons |   .0001786   .0000875     2.04   0.041     7.15e-06      .00035
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .1410944   .0106519    13.25   0.000     .1202171    .1619716
-------------+----------------------------------------------------------------
ARCH         |
      aparch |
         L1. |   .0626323   .0034307    18.26   0.000     .0559082    .0693564
             |
    aparch_e |
         L1. |  -.3645093   .0378485    -9.63   0.000    -.4386909   -.2903277
             |
      pgarch |
         L1. |   .9299015   .0030998   299.99   0.000      .923826     .935977
             |
       _cons |   7.19e-06   2.53e-06     2.84   0.004     2.23e-06    .0000121
-------------+----------------------------------------------------------------
POWER        |
       power |   1.585187   .0629186    25.19   0.000     1.461869    1.708505
------------------------------------------------------------------------------

In the iteration log, the final iteration reports the message backed up. For most estimators,
ending on a backed up message would be a cause for great concern, but not with arch or, for that
matter, arima, as long as you do not specify the gtolerance() option. arch and arima, by default,
monitor the gradient and declare convergence only if, in addition to everything else, the gradient is
small enough.
The fitted model demonstrates substantial asymmetry, with the large negative L1.aparch e
coefficient indicating that the market responds with much more volatility to unexpected drops in
returns (bad news) than it does to increases in returns (good news).

Example 5: ARCH model with nonnormal errors


Stock returns tend to be leptokurtotic, meaning that large returns (either positive or negative) occur
more frequently than one would expect if returns were in fact normally distributed. Here we refit the
previous A-PARCH model assuming the errors follow the generalized error distribution, and we let
arch estimate the shape parameter of the distribution.



. use http://www.stata-press.com/data/r14/dow1, clear
. arch D.ln_dow, ar(1) aparch(1) pgarch(1) distribution(ged)
(setting optimization to BHHH)
Iteration 0:   log likelihood =  31139.547
Iteration 1:   log likelihood =   31348.13
 (output omitted )
Iteration 74:  log likelihood =  32486.461

ARCH family regression -- AR disturbances
Sample: 2 - 9341                                Number of obs     =      9,340
Distribution: GED                               Wald chi2(1)      =     178.22
Log likelihood = 32486.46                       Prob > chi2       =     0.0000

------------------------------------------------------------------------------
             |                 OPG
    D.ln_dow |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_dow       |
       _cons |   .0002735    .000078     3.51   0.000     .0001207    .0004264
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .1337479   .0100187    13.35   0.000     .1141116    .1533842
-------------+----------------------------------------------------------------
ARCH         |
      aparch |
         L1. |   .0641772   .0049401    12.99   0.000     .0544949    .0738595
             |
    aparch_e |
         L1. |   -.405225   .0573059    -7.07   0.000    -.5175426   -.2929074
             |
      pgarch |
         L1. |   .9341739   .0045668   204.56   0.000     .9252231    .9431247
             |
       _cons |   .0000216   .0000117     1.84   0.066    -1.39e-06    .0000446
-------------+----------------------------------------------------------------
POWER        |
       power |    1.32524   .1030699    12.86   0.000     1.123227    1.527253
-------------+----------------------------------------------------------------
    /lnshape |   .3527019   .0094819    37.20   0.000     .3341177     .371286
-------------+----------------------------------------------------------------
       shape |   1.422907   .0134919                      1.396707    1.449598
------------------------------------------------------------------------------

The ARMA and ARCH coefficients are similar to those we obtained when we assumed normally
distributed errors, though we do note that the power term is now closer to 1. The estimated shape
parameter for the generalized error distribution is shown at the bottom of the output. Here the shape
parameter is 1.42; because it is less than 2, the distribution of the errors has tails that are fatter than
they would be if the errors were normally distributed.

Example 6: ARCH model with constraints


Engle's (1982) original model, which sparked the interest in ARCH, provides an example requiring
constraints. Most current ARCH specifications use GARCH terms to provide flexible dynamic properties
without estimating an excessive number of parameters. The original model was limited to ARCH
terms, and to help cope with the collinearity of the terms, a declining lag structure was imposed in
the parameters. The conditional variance equation was specified as

    σ²_t = γ0 + γ (0.4 ε²_{t-1} + 0.3 ε²_{t-2} + 0.2 ε²_{t-3} + 0.1 ε²_{t-4})
         = γ0 + 0.4γ ε²_{t-1} + 0.3γ ε²_{t-2} + 0.2γ ε²_{t-3} + 0.1γ ε²_{t-4}


From the earlier arch output, we know how the coefficients will be named. In Stata, the formula is

    σ²_t = [ARCH]_b[_cons] + 0.4 [ARCH]_b[L1.arch] ε²_{t-1} + 0.3 [ARCH]_b[L2.arch] ε²_{t-2}
           + 0.2 [ARCH]_b[L3.arch] ε²_{t-3} + 0.1 [ARCH]_b[L4.arch] ε²_{t-4}
We could specify these linear constraints many ways, but the following seems fairly intuitive; see
[R] constraint for syntax.
. use http://www.stata-press.com/data/r14/wpi1, clear
. constraint 1 (3/4)*[ARCH]l1.arch = [ARCH]l2.arch
. constraint 2 (2/4)*[ARCH]l1.arch = [ARCH]l3.arch
. constraint 3 (1/4)*[ARCH]l1.arch = [ARCH]l4.arch

The original model was fit on U.K. inflation; we will again use the WPI data and retain our earlier specification of the mean equation, which differs from Engle's U.K. inflation model. With our
constraints, we type
. arch D.ln_wpi, ar(1) ma(1 4) arch(1/4) constraints(1/3)
(setting optimization to BHHH)
Iteration 0:   log likelihood =  396.80198
Iteration 1:   log likelihood =  399.07809
 (output omitted )
Iteration 9:   log likelihood =  399.46243

ARCH family regression -- ARMA disturbances
Sample: 1960q2 - 1990q4                         Number of obs     =        123
Distribution: Gaussian                          Wald chi2(3)      =     123.32
Log likelihood = 399.4624                       Prob > chi2       =     0.0000
 ( 1)  .75*[ARCH]L.arch - [ARCH]L2.arch = 0
 ( 2)  .5*[ARCH]L.arch - [ARCH]L3.arch = 0
 ( 3)  .25*[ARCH]L.arch - [ARCH]L4.arch = 0

------------------------------------------------------------------------------
             |                 OPG
    D.ln_wpi |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_wpi       |
       _cons |   .0077204   .0034531     2.24   0.025     .0009525    .0144883
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .7388168   .1126811     6.56   0.000     .5179659    .9596676
             |
          ma |
         L1. |  -.2559691   .1442861    -1.77   0.076    -.5387646    .0268264
         L4. |   .2528923   .1140185     2.22   0.027       .02942    .4763645
-------------+----------------------------------------------------------------
ARCH         |
        arch |
         L1. |   .2180138   .0737787     2.95   0.003     .0734101    .3626174
         L2. |   .1635103    .055334     2.95   0.003     .0550576    .2719631
         L3. |   .1090069   .0368894     2.95   0.003     .0367051    .1813087
         L4. |   .0545034   .0184447     2.95   0.003     .0183525    .0906544
             |
       _cons |   .0000483   7.66e-06     6.30   0.000     .0000333    .0000633
------------------------------------------------------------------------------

The L1.arch, L2.arch, L3.arch, and L4.arch coefficients have the constrained relative sizes.


Stored results
arch stores the following in e():
Scalars
    e(N)               number of observations
    e(N_gaps)          number of gaps
    e(condobs)         number of conditioning observations
    e(k)               number of parameters
    e(k_eq)            number of equations in e(b)
    e(k_eq_model)      number of equations in overall model test
    e(k_dv)            number of dependent variables
    e(k_aux)           number of auxiliary parameters
    e(df_m)            model degrees of freedom
    e(ll)              log likelihood
    e(chi2)            χ²
    e(p)               significance
    e(archi)           σ²_0 = ε²_0, priming values
    e(archany)         1 if model contains ARCH terms, 0 otherwise
    e(tdf)             degrees of freedom for Student's t distribution
    e(shape)           shape parameter for generalized error distribution
    e(tmin)            minimum time
    e(tmax)            maximum time
    e(power)           φ for power ARCH terms
    e(rank)            rank of e(V)
    e(ic)              number of iterations
    e(rc)              return code
    e(converged)       1 if converged, 0 otherwise

Macros
    e(cmd)             arch
    e(cmdline)         command as typed
    e(depvar)          name of dependent variable
    e(covariates)      list of covariates
    e(eqnames)         names of equations
    e(wtype)           weight type
    e(wexp)            weight expression
    e(title)           title in estimation output
    e(tmins)           formatted minimum time
    e(tmaxs)           formatted maximum time
    e(dist)            distribution for error term: gaussian, t, or ged
    e(mhet)            1 if multiplicative heteroskedasticity
    e(dfopt)           yes if degrees of freedom for t distribution or shape parameter for
                           GED distribution was estimated; no otherwise
    e(chi2type)        Wald; type of model χ² test
    e(vce)             vcetype specified in vce()
    e(vcetype)         title used to label Std. Err.
    e(ma)              lags for moving-average terms
    e(ar)              lags for autoregressive terms
    e(arch)            lags for ARCH terms
    e(archm)           ARCH-in-mean lags
    e(archmexp)        ARCH-in-mean exp
    e(earch)           lags for EARCH terms
    e(egarch)          lags for EGARCH terms
    e(aarch)           lags for AARCH terms
    e(narch)           lags for NARCH terms
    e(aparch)          lags for A-PARCH terms
    e(nparch)          lags for NPARCH terms
    e(saarch)          lags for SAARCH terms
    e(parch)           lags for PARCH terms
    e(tparch)          lags for TPARCH terms
    e(abarch)          lags for ABARCH terms
    e(tarch)           lags for TARCH terms
    e(atarch)          lags for ATARCH terms
    e(sdgarch)         lags for SDGARCH terms
    e(pgarch)          lags for PGARCH terms
    e(garch)           lags for GARCH terms
    e(opt)             type of optimization
    e(ml_method)       type of ml method
    e(user)            name of likelihood-evaluator program
    e(technique)       maximization technique
    e(tech)            maximization technique, including number of iterations
    e(tech_steps)      number of iterations performed before switching techniques
    e(properties)      b V
    e(estat_cmd)       program used to implement estat
    e(predict)         program used to implement predict
    e(marginsok)       predictions allowed by margins
    e(marginsnotok)    predictions disallowed by margins

Matrices
    e(b)               coefficient vector
    e(Cns)             constraints matrix
    e(ilog)            iteration log (up to 20 iterations)
    e(gradient)        gradient vector
    e(V)               variance-covariance matrix of the estimators
    e(V_modelbased)    model-based variance

Functions
    e(sample)          marks estimation sample

Methods and formulas


The mean equation for the model fit by arch and with ARMA terms can be written as

    y_t = x_t β + Σ_i ψ_i g(σ²_{t-i})
          + Σ_{j=1}^{p} ρ_j [ y_{t-j} - x_{t-j} β - Σ_i ψ_i g(σ²_{t-j-i}) ]
          + Σ_{k=1}^{q} θ_k ε_{t-k} + ε_t                           (conditional mean)

where

    β are the regression parameters,
    ψ are the ARCH-in-mean parameters,
    ρ are the autoregression parameters,
    θ are the moving-average parameters, and
    g() is a general function; see the archmexp() option.

Any of the parameters in this full specification of the conditional mean may be zero. For example, the model need not have moving-average parameters (θ = 0) or ARCH-in-mean parameters (ψ = 0).
The variance equation will be one of the following:

    σ²_t     = γ0 + A(σ,ε) + B(σ,ε)²                        (1)
    ln(σ²_t) = γ0 + C(ln σ, z) + A(σ,ε) + B(σ,ε)²           (2)
    σ^φ_t    = γ0 + D(σ,ε) + A(σ,ε) + B(σ,ε)²               (3)

where A(σ,ε), B(σ,ε), C(ln σ, z), and D(σ,ε) are linear sums of the appropriate ARCH terms; see Details of syntax for more information. Equation (1) is used if no EGARCH or power ARCH terms are included in the model, (2) if EGARCH terms are included, and (3) if any power ARCH terms are included; see Details of syntax.
Methods and formulas are presented under the following headings:
Priming values
Likelihood from prediction error decomposition
Missing data

Priming values
The above model is recursive with potentially long memory. It is necessary to assume preestimation
sample values for t , 2t , and t2 to begin the recursions, and the remaining computations are therefore
conditioned on these priming values, which can be controlled using the arch0() and arma0()
options. See options discussed under the Priming tab above.
The arch0(xb0wt) and arch0(xbwt) options compute a weighted sum of estimated disturbances
with more weight on the early observations. With either of these options,

    \sigma^2_{t_0 - i} = \epsilon^2_{t_0 - i} = (1 - 0.7) \sum_{t=0}^{T-1} 0.7^{\,T-t-1}\, \epsilon^2_{T-t}

where t_0 is the first observation for which the likelihood is computed; see options discussed under the
Priming tab above. The ε_t^2 are all computed from the conditional mean equation. If arch0(xb0wt)
is specified, β, ψ_i, ρ_j, and θ_k are taken from initial regression estimates and held constant during
optimization. If arch0(xbwt) is specified, the current estimates of β, ψ_i, ρ_j, and θ_k are used to
compute ε_t^2 on every iteration. If any ψ_i is in the mean equation (ARCH-in-mean is specified), the
estimates of ε_t^2 from the initial regression estimates are not consistent.
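For example, using the WPI data from the examples earlier in this entry, the priming method can be selected explicitly (the option choice here is purely illustrative):

. use http://www.stata-press.com/data/r14/wpi1
. arch D.ln_wpi, arch(1) garch(1) arch0(xbwt)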

Likelihood from prediction error decomposition


The likelihood function for ARCH has a particularly simple form. Given priming (or conditioning)
values of ε_t, ε_t^2, and σ_t^2, the mean equation above can be solved recursively for every t (prediction error
decomposition). Likewise, the conditional variance can be computed recursively for each observation
by using the variance equation. Using these predicted errors, their associated variances, and the
assumption that ε_t ~ N(0, σ_t^2), we find that the log likelihood for each observation t is

    \ln L_t = -\frac{1}{2} \left\{ \ln(2\pi\sigma_t^2) + \frac{\epsilon_t^2}{\sigma_t^2} \right\}
If we assume that ε_t ~ t(df), then as given in Hamilton (1994, 662),

    \ln L_t = \ln\Gamma\!\left(\frac{df+1}{2}\right) - \ln\Gamma\!\left(\frac{df}{2}\right)
              - \frac{1}{2}\left[ \ln\left\{(df-2)\pi\sigma_t^2\right\}
              + (df+1)\ln\left\{1 + \frac{\epsilon_t^2}{(df-2)\sigma_t^2}\right\} \right]

The likelihood is not defined for df ≤ 2, so instead of estimating df directly, we estimate m = ln(df−2).
Then df = exp(m) + 2 > 2 for any m.


Following Bollerslev, Engle, and Nelson (1994, 2978), the log likelihood for the tth observation,
assuming ε_t ~ GED(s), is

    \ln L_t = \ln s - \ln\lambda - \frac{s+1}{s}\ln 2 - \ln\Gamma(s^{-1})
              - \frac{1}{2}\left|\frac{\epsilon_t}{\lambda\sigma_t}\right|^{s}

where

    \lambda = \left\{ \frac{\Gamma(s^{-1})}{2^{2/s}\,\Gamma(3s^{-1})} \right\}^{1/2}

To enforce the restriction that s > 0, we estimate r = ln s.
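The alternative error distributions are requested with arch's distribution() option; a minimal sketch (the model specification is arbitrary):

. arch D.ln_wpi, arch(1) garch(1) distribution(t)
. arch D.ln_wpi, arch(1) garch(1) distribution(ged)

The first command estimates df through m = ln(df − 2); the second estimates the GED shape parameter s through r = ln s.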


This command supports the Huber/White/sandwich estimator of the variance using vce(robust).
See [P] robust, particularly Maximum likelihood estimators and Methods and formulas.

Missing data
ARCH allows missing data or missing observations but does not attempt to condition on the
surrounding data. If a dynamic component cannot be computed (ε_t, ε_t^2, and/or σ_t^2), its priming
value is substituted. If a covariate, the dependent variable, or the entire observation is missing, the
observation does not enter the likelihood, and its dynamic components are set to their priming values
for that observation. This is acceptable only asymptotically and should not be used with a great deal
of missing data.

Robert Fry Engle (1942– ) was born in Syracuse, New York. He earned degrees in physics
and economics at Williams College and Cornell and then worked at MIT and the University of
California, San Diego, before moving to New York University Stern School of Business in 2000.
He was awarded the 2003 Nobel Prize in Economics for research on autoregressive conditional
heteroskedasticity and is a leading expert in time-series analysis, especially the analysis of
financial markets.

References
Adkins, L. C., and R. C. Hill. 2011. Using Stata for Principles of Econometrics. 4th ed. Hoboken, NJ: Wiley.
Baum, C. F. 2000. sts15: Tests for stationarity of a time series. Stata Technical Bulletin 57: 36–39. Reprinted in Stata Technical Bulletin Reprints, vol. 10, pp. 356–360. College Station, TX: Stata Press.
Baum, C. F., and R. I. Sperling. 2000. sts15.1: Tests for stationarity of a time series: Update. Stata Technical Bulletin 58: 35–36. Reprinted in Stata Technical Bulletin Reprints, vol. 10, pp. 360–362. College Station, TX: Stata Press.
Baum, C. F., and V. L. Wiggins. 2000. sts16: Tests for long memory in a time series. Stata Technical Bulletin 57: 39–44. Reprinted in Stata Technical Bulletin Reprints, vol. 10, pp. 362–368. College Station, TX: Stata Press.
Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.
Berndt, E. K., B. H. Hall, R. E. Hall, and J. A. Hausman. 1974. Estimation and inference in nonlinear structural models. Annals of Economic and Social Measurement 3/4: 653–665.
Black, F. 1976. Studies of stock price volatility changes. Proceedings of the American Statistical Association, Business and Economics Statistics 177–181.
Bollerslev, T. 1986. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics 31: 307–327.
Bollerslev, T., R. Y. Chou, and K. F. Kroner. 1992. ARCH modeling in finance. Journal of Econometrics 52: 5–59.
Bollerslev, T., R. F. Engle, and D. B. Nelson. 1994. ARCH models. In Vol. 4 of Handbook of Econometrics, ed. R. F. Engle and D. L. McFadden. Amsterdam: Elsevier.
Bollerslev, T., and J. M. Wooldridge. 1992. Quasi-maximum likelihood estimation and inference in dynamic models with time-varying covariances. Econometric Reviews 11: 143–172.
Davidson, R., and J. G. MacKinnon. 1993. Estimation and Inference in Econometrics. New York: Oxford University Press.
Diebold, F. X. 2003. The ET Interview: Professor Robert F. Engle. Econometric Theory 19: 1159–1193.
Ding, Z., C. W. J. Granger, and R. F. Engle. 1993. A long memory property of stock market returns and a new model. Journal of Empirical Finance 1: 83–106.
Enders, W. 2004. Applied Econometric Time Series. 2nd ed. New York: Wiley.
Engle, R. F. 1982. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica 50: 987–1007.
———. 1990. Discussion: Stock volatility and the crash of '87. Review of Financial Studies 3: 103–106.
Engle, R. F., D. M. Lilien, and R. P. Robins. 1987. Estimating time varying risk premia in the term structure: The ARCH-M model. Econometrica 55: 391–407.
Glosten, L. R., R. Jagannathan, and D. E. Runkle. 1993. On the relation between the expected value and the volatility of the nominal excess return on stocks. Journal of Finance 48: 1779–1801.
Greene, W. H. 2012. Econometric Analysis. 7th ed. Upper Saddle River, NJ: Prentice Hall.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Harvey, A. C. 1989. Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge: Cambridge University Press.
———. 1990. The Econometric Analysis of Time Series. 2nd ed. Cambridge, MA: MIT Press.
Higgins, M. L., and A. K. Bera. 1992. A class of nonlinear ARCH models. International Economic Review 33: 137–158.
Hill, R. C., W. E. Griffiths, and G. C. Lim. 2011. Principles of Econometrics. 4th ed. Hoboken, NJ: Wiley.
Judge, G. G., W. E. Griffiths, R. C. Hill, H. Lütkepohl, and T.-C. Lee. 1985. The Theory and Practice of Econometrics. 2nd ed. New York: Wiley.
Kmenta, J. 1997. Elements of Econometrics. 2nd ed. Ann Arbor: University of Michigan Press.
Mandelbrot, B. B. 1963. The variation of certain speculative prices. Journal of Business 36: 394–419.
Nelson, D. B. 1991. Conditional heteroskedasticity in asset returns: A new approach. Econometrica 59: 347–370.
Pickup, M. 2015. Introduction to Time Series Analysis. Thousand Oaks, CA: Sage.
Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. 2007. Numerical Recipes: The Art of Scientific Computing. 3rd ed. New York: Cambridge University Press.
Stock, J. H., and M. W. Watson. 2015. Introduction to Econometrics. Updated 3rd ed. Hoboken, NJ: Pearson.
Wooldridge, J. M. 2016. Introductory Econometrics: A Modern Approach. 6th ed. Boston: Cengage.
Zakoian, J. M. 1994. Threshold heteroskedastic models. Journal of Economic Dynamics and Control 18: 931–955.

Also see
[TS] arch postestimation – Postestimation tools for arch
[TS] tsset – Declare data to be time-series data
[TS] arima – ARIMA, ARMAX, and other dynamic regression models
[TS] mgarch – Multivariate GARCH models
[R] regress – Linear regression
[U] 20 Estimation and postestimation commands

Title

arch postestimation – Postestimation tools for arch

Postestimation commands    predict    margins    Remarks and examples    Also see
Postestimation commands
The following postestimation commands are available after arch:
Command            Description
--------------------------------------------------------------------------------
estat ic           Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
estat summarize    summary statistics for the estimation sample
estat vce          variance–covariance matrix of the estimators (VCE)
estimates          cataloging estimation results
forecast           dynamic forecasts and simulations
lincom             point estimates, standard errors, testing, and inference for linear
                     combinations of coefficients
lrtest             likelihood-ratio test
margins            marginal means, predictive margins, marginal effects, and average
                     marginal effects
marginsplot        graph the results from margins (profile plots, interaction plots, etc.)
nlcom              point estimates, standard errors, testing, and inference for nonlinear
                     combinations of coefficients
predict            predictions, residuals, influence statistics, and other diagnostic measures
predictnl          point estimates, standard errors, testing, and inference for generalized
                     predictions
test               Wald tests of simple and composite linear hypotheses
testnl             Wald tests of nonlinear hypotheses
--------------------------------------------------------------------------------

predict
Description for predict
predict creates a new variable containing predictions such as expected values and residuals. All
predictions are available as static one-step-ahead predictions or as dynamic multistep predictions, and
you can control when dynamic predictions begin.

Menu for predict

Statistics > Postestimation

Syntax for predict

predict [type] newvar [if] [in] [, statistic options]

statistic       Description
--------------------------------------------------------------------------------
Main
  xb            predicted values for the mean equation (the differenced series); the default
  y             predicted values for the mean equation in y (the undifferenced series)
  variance      predicted values for the conditional variance
  het           predicted values of the variance, considering only the multiplicative
                  heteroskedasticity
  residuals     residuals or predicted innovations
  yresiduals    residuals or predicted innovations in y (the undifferenced series)
--------------------------------------------------------------------------------
These statistics are available both in and out of sample; type predict ... if e(sample) ...
if wanted only for the estimation sample.

options                                Description
--------------------------------------------------------------------------------
Options
  dynamic(time_constant)               how to handle the lags of y_t
  at(varname_ε|#_ε varname_σ²|#_σ²)    make static predictions
  t0(time_constant)                    set starting point for the recursions to time_constant
  structural                           calculate considering the structural component only
--------------------------------------------------------------------------------
time_constant is a # or a time literal, such as td(1jan1995) or tq(1995q1), etc.; see
Conveniently typing SIF values in [D] datetime.

Options for predict


Six statistics can be computed by using predict after arch: the predictions of the mean equation
(option xb, the default), the undifferenced predictions of the mean equation (option y), the predictions
of the conditional variance (option variance), the predictions of the multiplicative heteroskedasticity
component of variance (option het), the predictions of residuals or innovations (option residuals),
and the predictions of residuals or innovations in terms of y (option yresiduals). Given the dynamic
nature of ARCH models and because the dependent variable might be differenced, there are other
ways of computing each statistic. We can use all the data on the dependent variable available right
up to the time of each prediction (the default, which is often called a one-step prediction), or we
can use the data up to a particular time, after which the predicted value of the dependent variable
is used recursively to make later predictions (option dynamic()). Either way, we can consider or
ignore the ARMA disturbance component, which is considered by default and is ignored if you specify
the structural option. We might also be interested in predictions at certain fixed points where we
specify the prior values of ε_t and σ_t^2 (option at()).

Main

xb, the default, calculates the predictions from the mean equation. If D.depvar is the dependent
variable, these predictions are of D.depvar and not of depvar itself.

y specifies that predictions of depvar are to be made even if the model was specified for, say,
D.depvar.

variance calculates predictions of the conditional variance, σ̂_t^2.

het calculates predictions of the multiplicative heteroskedasticity component of variance.

residuals calculates the residuals. If no other options are specified, these are the predicted innovations
ε_t; that is, they include any ARMA component. If the structural option is specified, these are
the residuals from the mean equation, ignoring any ARMA terms; see structural below. The
residuals are always from the estimated equation, which may have a differenced dependent variable;
if depvar is differenced, they are not the residuals of the undifferenced depvar.

yresiduals calculates the residuals for depvar, even if the model was specified for, say, D.depvar. As
with residuals, the yresiduals are computed from the model, including any ARMA component.
If the structural option is specified, any ARMA component is ignored and yresiduals are the
residuals from the structural equation; see structural below.
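A brief sketch of typical usage after fitting an arch model (the new variable names are arbitrary):

. predict xbhat, xb
. predict h, variance
. predict eps, residuals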

Options

dynamic(time constant) specifies how lags of yt in the model are to be handled. If dynamic()
is not specified, actual values are used everywhere lagged values of yt appear in the model to
produce one-step-ahead forecasts.
dynamic(time constant) produces dynamic (also known as recursive) forecasts. time constant
specifies when the forecast is to switch from one step ahead to dynamic. In dynamic forecasts,
references to yt evaluate to the prediction of yt for all periods at or after time constant; they
evaluate to the actual value of yt for all prior periods.
dynamic(10), for example, would calculate predictions in which any reference to y_t with t < 10
evaluates to the actual value of y_t and any reference to y_t with t ≥ 10 evaluates to the prediction
of y_t. This means that one-step-ahead predictions are calculated for t < 10 and dynamic
predictions are calculated thereafter. Depending on the lag structure of the model, the dynamic
predictions might still refer to some actual values of y_t.
You may also specify dynamic(.) to have predict automatically switch from one-step-ahead to
dynamic predictions at p + q , where p is the maximum AR lag and q is the maximum MA lag.
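For example, with quarterly data, the following would produce one-step-ahead predictions of the undifferenced series before 1990q1 and dynamic predictions from then on (the variable name is arbitrary):

. predict yhat, y dynamic(tq(1990q1))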


at(varname_ε|#_ε varname_σ²|#_σ²) makes static predictions. at() and dynamic() may not be
specified together.

Specifying at() allows static evaluation of results for a given set of disturbances. This is useful,
for instance, in generating the news response function. at() specifies two sets of values to be
used for ε_t and σ_t^2, the dynamic components in the model. These specified values are treated as
given. Also, any lagged values of depvar in the model are obtained from the real values of the
dependent variable. All computations are based on actual data and the given values.

at() requires that you specify two arguments, which can be either a variable name or a number.
The first argument supplies the values to be used for ε_t; the second supplies the values to be used
for σ_t^2. If σ_t^2 plays no role in your model, the second argument may be specified as . to indicate
missing.
t0(time constant) specifies the starting point for the recursions to compute the predicted statistics;
disturbances are assumed to be 0 for t < t0(). The default is to set t0() to the minimum t
observed in the estimation sample, meaning that observations before that are assumed to have
disturbances of 0.
t0() is irrelevant if structural is specified because then all observations are assumed to have
disturbances of 0.
t0(5), for example, would begin recursions at t = 5. If your data were quarterly, you might
instead type t0(tq(1961q2)) to obtain the same result.
Any ARMA component in the mean equation or GARCH term in the conditional-variance equation
makes arch recursive and dependent on the starting point of the predictions. This includes
one-step-ahead predictions.
structural makes the calculation considering the structural component only, ignoring any ARMA
terms, and producing the steady-state equilibrium predictions.


margins
Description for margins
margins estimates margins of response for expected values.

Menu for margins

Statistics > Postestimation

Syntax for margins

margins [marginlist] [, options]
margins [marginlist], predict(statistic ...) [predict(statistic ...) ...] [options]

statistic     Description
--------------------------------------------------------------------------------
xb            predicted values for the mean equation (the differenced series); the default
y             predicted values for the mean equation in y (the undifferenced series)
variance      predicted values for the conditional variance
het           predicted values of the variance, considering only the multiplicative
                heteroskedasticity
residuals     not allowed with margins
yresiduals    not allowed with margins
--------------------------------------------------------------------------------

Statistics not allowed with margins are functions of stochastic quantities other than e(b).
For the full syntax, see [R] margins.
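A minimal sketch: after fitting an arch model, the average predicted conditional variance over the estimation sample can be obtained with

. margins, predict(variance)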

Remarks and examples


Example 1

Continuing with our EGARCH model example (example 3) in [TS] arch, we can see that predict,
at() calculates σ_t^2 given a set of specified innovations (ε_t, ε_{t−1}, ...) and prior conditional variances
(σ_{t−1}^2, σ_{t−2}^2, ...). The syntax is

. predict newvar, variance at(epsilon sigma2)

epsilon and sigma2 are either variables or numbers. Using sigma2 is a little tricky because you specify
values of σ_t^2, which predict is supposed to predict. predict does not simply copy variable sigma2
into newvar but uses the lagged values contained in sigma2 to produce the predicted value of σ_t^2. It
does this for all t, and those results are saved in newvar. (If you are interested in dynamic predictions
of σ_t^2, see Options for predict.)

We will generate predictions for σ_t^2, assuming that the lagged values of σ_t^2 are 1, and we will
vary ε_t from −4 to 4. First, we will create variable et containing ε_t, and then we will create and
graph the predictions:


. generate et = (_n-64)/15
. predict sigma2, variance at(et 1)
. line sigma2 et in 2/l, m(i) c(l) title(News response function)

[Graph: news response function, plotting the one-step conditional variance sigma2 against et]

The positive asymmetry does indeed dominate the shape of the news response function. In fact, the
response is a monotonically increasing function of news. The form of the response function shows
that, for our simple model, only positive, unanticipated price increases have the destabilizing effect
that we observe as larger conditional variances.

Example 2

Continuing with our ARCH model with constraints example (example 6) in [TS] arch, using lincom
we can recover the α parameter from the original specification.

. lincom [ARCH]l1.arch/.4
 ( 1)  2.5*[ARCH]L.arch = 0

    D.ln_wpi |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
    ---------+----------------------------------------------------------------
         (1) |   .5450344   .1844468     2.95   0.003     .1835253    .9065436

Any arch parameter could be used to produce an identical estimate.

Also see
[TS] arch – Autoregressive conditional heteroskedasticity (ARCH) family of estimators
[U] 20 Estimation and postestimation commands

Title

arfima – Autoregressive fractionally integrated moving-average models

Description     Quick start     Menu     Syntax
Options     Remarks and examples     Stored results     Methods and formulas
References     Also see

Description
arfima estimates the parameters of autoregressive fractionally integrated moving-average (ARFIMA)
models.
Long-memory processes are stationary processes whose autocorrelation functions decay more
slowly than short-memory processes. The ARFIMA model provides a parsimonious parameterization of
long-memory processes that nests the autoregressive moving-average (ARMA) model, which is widely
used for short-memory processes. By allowing for fractional degrees of integration, the ARFIMA model
also generalizes the autoregressive integrated moving-average (ARIMA) model with integer degrees of
integration. See [TS] arima for ARMA and ARIMA parameter estimation.

Quick start
Autoregressive fractionally integrated moving-average model for y with regressor x using tsset data
arfima y x
Add autoregressive components of orders 1 and 2 and a moving-average component of order 4
arfima y x, ar(1 2) ma(4)
ARMA model for y with autoregressive components of orders 1 and 2
arfima y, ar(1 2) smemory

Menu

Statistics > Time series > ARFIMA models

Syntax

arfima depvar [indepvars] [if] [in] [, options]

options                  Description
--------------------------------------------------------------------------------
Model
  noconstant             suppress constant term
  ar(numlist)            autoregressive terms
  ma(numlist)            moving-average terms
  smemory                estimate short-memory model without fractional integration
  mle                    maximum likelihood estimates; the default
  mpl                    maximum modified-profile-likelihood estimates
  constraints(numlist)   apply specified linear constraints
  collinear              do not drop collinear variables
SE/Robust
  vce(vcetype)           vcetype may be oim or robust
Reporting
  level(#)               set confidence level; default is level(95)
  nocnsreport            do not display constraints
  display_options        control columns and column formats, row spacing, line width,
                           display of omitted variables and base and empty cells, and
                           factor-variable labeling
Maximization
  maximize_options       control the maximization process; seldom used
  coeflegend             display legend instead of statistics
--------------------------------------------------------------------------------
You must tsset your data before using arfima; see [TS] tsset.
indepvars may contain factor variables; see [U] 11.4.3 Factor variables.
depvar and indepvars may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, fp, rolling, and statsby are allowed; see [U] 11.1.10 Prefix commands.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.

Options


Model

noconstant; see [R] estimation options.


ar(numlist) specifies the autoregressive (AR) terms to be included in the model. An AR(p), p ≥ 1,
specification would be ar(1/p). This model includes all lags from 1 to p, but not all lags need
to be included. For example, the specification ar(1 p) would specify an AR(p) with only lags 1
and p included, setting all the other AR lag parameters to 0.

ma(numlist) specifies the moving-average terms to be included in the model. These are the terms for
the lagged innovations (white-noise disturbances). ma(1/q), q ≥ 1, specifies an MA(q) model, but
like the ar() option, not all lags need to be included.
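For instance, a sketch of a gapped-lag specification (the orders are arbitrary):

. arfima y, ar(1 4) ma(2)

This fits a model in which the AR coefficients at lags 2 and 3 and the MA coefficient at lag 1 are constrained to 0.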


smemory causes arfima to fit a short-memory model with d = 0. This option causes arfima to
estimate the parameters of an ARMA model by a method that is asymptotically equivalent to that
produced by arima; see [TS] arima.
mle causes arfima to estimate the parameters by maximum likelihood. This method is the default.
mpl causes arfima to estimate the parameters by maximum modified profile likelihood (MPL). The
MPL estimator of the fractional-difference parameter has less small-sample bias than the maximum
likelihood estimator when there are covariates in the model. mpl may only be specified when there
is a constant term or indepvars in the model, and it may not be combined with the mle option.
constraints(numlist), collinear; see [R] estimation options.

SE/Robust

vce(vcetype) specifies the type of standard error reported, which includes types that are robust to
some kinds of misspecification (robust) and that are derived from asymptotic theory (oim); see
[R] vce option.
Options vce(robust) and mpl may not be combined.

Reporting

level(#), nocnsreport; see [R] estimation options.


display_options: noci, nopvalues, noomitted, vsquish, noemptycells, baselevels,
allbaselevels, nofvlabel, fvwrap(#), fvwrapon(style), cformat(%fmt), pformat(%fmt),
sformat(%fmt), and nolstretch; see [R] estimation options.

Maximization

maximize_options: difficult, technique(algorithm_spec), iterate(#), [no]log, trace,
gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#),
nrtolerance(#), nonrtolerance, and from(init_specs); see [R] maximize for all options.
Some special points for arfima's maximize_options are listed below.

technique(algorithm_spec) sets the optimization algorithm. The default algorithm is BFGS, and
BHHH is not allowed. See [R] maximize for a description of the available optimization algorithms.
You can specify multiple optimization methods. For example, technique(bfgs 10 nr) requests
that the optimizer perform 10 BFGS iterations and then switch to Newton–Raphson until convergence.

iterate(#) sets the maximum number of iterations. When the maximization is not going well,
set the maximum number of iterations to the point where the optimizer appears to be stuck and
inspect the estimation results at that point.

from(matname) allows you to specify starting values for the model parameters in a row vector.
We recommend that you use the iterate(0) option, retrieve the initial estimates from e(b),
and modify these elements.
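A minimal sketch of that workflow (the coefficient to modify and its value are hypothetical; inspect e(b) with matrix list to see the names in your model):

. arfima y, iterate(0)
. matrix b0 = e(b)
. matrix b0[1, colnumb(b0, "ARFIMA:d")] = .25
. arfima y, from(b0)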
The following option is available with arfima but is not shown in the dialog box:
coeflegend; see [R] estimation options.


Remarks and examples


Long-memory processes are stationary processes whose autocorrelation functions decay more
slowly than short-memory processes. Because the autocorrelations die out so slowly, long-memory
processes display a type of long-run dependence. The autoregressive fractionally integrated moving-average
(ARFIMA) model provides a parsimonious parameterization of long-memory processes. This
parameterization nests the autoregressive moving-average (ARMA) model, which is widely used for
short-memory processes.

The ARFIMA model also generalizes the autoregressive integrated moving-average (ARIMA) model
with integer degrees of integration. ARFIMA models provide a solution for the tendency to overdifference
stationary series that exhibit long-run dependence. In the ARIMA approach, a nonstationary time series
is differenced d times until the differenced series is stationary, where d is an integer. Such series
are said to be integrated of order d, denoted I(d), with no differencing, I(0), being the option for
stationary series. Many series exhibit too much dependence to be I(0) but are not I(1), and ARFIMA
models are designed to represent these series.

The ARFIMA model allows for a continuum of fractional differences, −0.5 < d < 0.5. The
generalization to fractional differences allows the ARFIMA model to handle processes that are neither
I(0) nor I(1), to test for overdifferencing, and to model long-run effects that only die out at long
horizons.

Technical note

An ARIMA model for the series y_t is given by

    \phi(L)(1 - L)^d y_t = \theta(L)\epsilon_t        (1)

where \phi(L) = (1 - \phi_1 L - \phi_2 L^2 - \cdots - \phi_p L^p) is the autoregressive (AR) polynomial in the lag
operator L; L y_t = y_{t-1}; \theta(L) = (1 + \theta_1 L + \theta_2 L^2 + \cdots + \theta_q L^q) is the moving-average (MA) lag
polynomial; ε_t is the independent and identically distributed innovation term; and d is the integer
number of differences required to make the y_t stationary. An ARFIMA model is also specified by (1)
with the generalization that −0.5 < d < 0.5. Series with d ≥ 0.5 are handled by differencing and
subsequent ARFIMA modeling.

Because long-memory processes are stationary, one might be tempted to approximate the processes
with many terms in an ARMA model. But these approximate models are difficult to fit and to interpret
because ARMA models with many terms are difficult to estimate and the ARMA parameterization has
an inherent short-run nature. In contrast, the ARFIMA model has the d parameter for the long-run
dependence and ARMA parameters for short-run dependence. Using different parameters for different
types of dependence facilitates estimation and interpretation, as discussed by Sowell (1992a).

Technical note

An ARFIMA model specifies a fractionally integrated ARMA process. Formally, the ARFIMA model
specifies that

    y_t = (1 - L)^{-d} \{\phi(L)\}^{-1} \theta(L)\epsilon_t

The short-run ARMA process \phi(L)^{-1}\theta(L)\epsilon_t captures the short-run effects, and the long-run effects
are captured by fractionally integrating the short-run ARMA process.

Essentially, the fractional-integration parameter d captures the long-run effects, and the ARMA
parameters capture the short-run effects. Having separate parameters for short-run and long-run
effects makes the ARFIMA model more flexible and easier to interpret than the ARMA model. After
estimating the ARFIMA parameters, the short-run effects are obtained by setting d = 0, whereas the
long-run effects use the estimated value for d. The short-run effects describe the behavior of the
fractionally differenced process (1 − L)^d y_t, whereas the long-run effects describe the behavior of the
fractionally integrated y_t.

ARFIMA models have been useful in fields as diverse as hydrology and economics. Long-memory
processes were first introduced in hydrology by Hurst (1951). Hosking (1981), in hydrology, and
Granger and Joyeux (1980), in economics, independently discovered the ARFIMA representation of
long-memory processes. Beran (1994), Baillie (1996), and Palma (2007) provide good introductions
to long-memory processes and ARFIMA models.

Example 1: Mount Campito tree ring data


Baillie (1996) discusses a time series of measurements of the widths of the annual rings of a
Mount Campito Bristlecone pine. The series contains measurements on rings formed in the tree from
3436 BC to 1969 AD. Essentially, larger widths were good years for the tree and narrower widths
were harsh years.
We begin by plotting the time series.
. use http://www.stata-press.com/data/r14/campito
(Campito Mnt. tree ring data from 3435BC to 1969AD)

. tsline width, xlabel(-3435(500)1969) ysize(2)

[Graph: time-series plot of tree ring width (in 0.01 mm) against year, 3435 BC to 1969 AD]

Good years and bad years seem to run together, causing the appearance of local trends. The local
trends are evidence of dependence, but they are not as pronounced as those in a nonstationary series.


We plot the autocorrelations for another view:

. ac width, ysize(2)

[Graph: autocorrelations of width through lag 40, with Bartlett's formula for MA(q) 95% confidence bands]

The autocorrelations start below 1 but decay very slowly.


Granger and Joyeux (1980) show that the autocorrelations from an ARMA model decay exponentially,
whereas the autocorrelations from an ARFIMA process decay at the much slower hyperbolic rate. Box,
Jenkins, and Reinsel (2008) define short-memory processes as those whose autocorrelations decay
exponentially fast and long-memory processes as those whose autocorrelations decay at the hyperbolic
rate. The above plot of autocorrelations looks closer to hyperbolic than exponential.
Together, the above plots make us suspect that the series was generated by a long-memory process.
We see evidence that the series is stationary but that the autocorrelations die out much slower than a
short-memory process would predict.


Given that we believe the data were generated by a stationary process, we begin by fitting an
ARMA model. We use this short-memory model first because a comparison of the results
highlights the advantages of using an ARFIMA model for a long-memory process.
. arima width, ar(1/2) ma(1) technique(bhhh 4 nr)
(setting optimization to BHHH)
Iteration 0:   log likelihood = -18934.593
Iteration 1:   log likelihood = -18914.337
Iteration 2:   log likelihood = -18913.407
Iteration 3:   log likelihood = -18913.24
(switching optimization to Newton-Raphson)
Iteration 4:   log likelihood = -18913.214
Iteration 5:   log likelihood = -18913.208
Iteration 6:   log likelihood = -18913.208

ARIMA regression
Sample: -3435 - 1969                            Number of obs     =       5405
                                                Wald chi2(3)      =  133686.46
Log likelihood = -18913.21                      Prob > chi2       =     0.0000

                             OIM
       width        Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------------------------------------------------------------------
width
       _cons     42.45055    1.02142    41.56   0.000     40.44861     44.4525
------------------------------------------------------------------------------
ARMA
          ar
         L1.     1.264367   .0253199    49.94   0.000     1.214741    1.313993
         L2.    -.2848827   .0227534   -12.52   0.000    -.3294785    -.240287
          ma
         L1.    -.8066007   .0189699   -42.52   0.000    -.8437811   -.7694204
------------------------------------------------------------------------------
      /sigma     8.005814   .0770004   103.97   0.000     7.854896    8.156732
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.

The estimated coefficients seem high in magnitude. We use estat aroots to investigate further.

. estat aroots

Eigenvalue stability condition

    Eigenvalue    Modulus
     .9709661     .970966
     .2934013     .293401

All the eigenvalues lie inside the unit circle.
AR parameters satisfy stability condition.

Eigenvalue stability condition

    Eigenvalue    Modulus
     .8066007     .806601

All the eigenvalues lie inside the unit circle.
MA parameters satisfy invertibility condition.

The roots of the AR polynomial are 0.971 and 0.293, and the root of the MA polynomial is 0.807;
all of these are less than one in magnitude, indicating that the series is stationary and invertible


but has a high level of persistence. See Hamilton (1994, 59) and [TS] estat aroots for details about
computing and interpreting the roots of the polynomials from the estimated ARIMA coefficients.
Below we estimate the parameters of an ARFIMA model with only the fractional difference parameter
and a constant.
. arfima width
Iteration 0:   log likelihood = -18918.219
Iteration 1:   log likelihood = -18916.84
Iteration 2:   log likelihood = -18908.508
Iteration 3:   log likelihood = -18908.508  (backed up)
Iteration 4:   log likelihood = -18907.29
Iteration 5:   log likelihood = -18907.286
Iteration 6:   log likelihood = -18907.279
Iteration 7:   log likelihood = -18907.279
Refining estimates:
Iteration 0:   log likelihood = -18907.279
Iteration 1:   log likelihood = -18907.279

ARFIMA regression
Sample: -3435 - 1969                            Number of obs     =      5,405
                                                Wald chi2(1)      =    1864.43
Log likelihood = -18907.279                     Prob > chi2       =     0.0000

                             OIM
       width        Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------------------------------------------------------------------
width
       _cons     44.01432   9.174318     4.80   0.000     26.03299    61.99565
------------------------------------------------------------------------------
ARFIMA
           d     .4468888   .0103497    43.18   0.000     .4266038    .4671737
------------------------------------------------------------------------------
     /sigma2     63.92927   1.229754    51.99   0.000       61.519    66.33955
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.

The estimate of d is large and statistically significant. The relative parsimony of the ARFIMA model
is illustrated by the fact that the estimates of the standard deviation of the idiosyncratic errors are
about the same in the 5-parameter ARMA model and the 3-parameter ARFIMA model.


Let's add an AR parameter to the above ARFIMA model:


. arfima width, ar(1)
Iteration 0:   log likelihood = -18910.997
Iteration 1:   log likelihood = -18910.949
Iteration 2:   log likelihood = -18908.158
Iteration 3:   log likelihood = -18907.248
Iteration 4:   log likelihood = -18907.233
Iteration 5:   log likelihood = -18907.233  (backed up)
Iteration 6:   log likelihood = -18907.233  (backed up)
Refining estimates:
Iteration 0:   log likelihood = -18907.233
Iteration 1:   log likelihood = -18907.233

ARFIMA regression
Sample: -3435 - 1969                            Number of obs     =      5,405
                                                Wald chi2(2)      =    1875.34
Log likelihood = -18907.233                     Prob > chi2       =     0.0000

                             OIM
       width        Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------------------------------------------------------------------
width
       _cons     43.98774   8.685171     5.06   0.000     26.96512    61.01036
------------------------------------------------------------------------------
ARFIMA
          ar
         L1.     .0063323   .0209987     0.30   0.763    -.0348244     .047489
           d     .4432471   .0158937    27.89   0.000      .412096    .4743981
------------------------------------------------------------------------------
     /sigma2     63.92915   1.229755    51.99   0.000     61.51887    66.33942
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.

That the estimated AR term is tiny and statistically insignificant indicates that the d parameter has
accounted for all the dependence in the series.

As mentioned above, there is a sense in which the main advantages of an ARFIMA model over an
ARMA model for long-memory processes are the relative parsimony of the ARFIMA parameterization
and the ability of the ARFIMA parameterization to separate out the long-run effects from the short-run
effects. If the true process was generated from an ARFIMA model, an ARMA model with many terms
can approximate the process, but the terms make estimation difficult and the lack of separate long-run
and short-run parameters complicates interpretation.
This example highlights the relative parsimony of the ARFIMA model. In the examples below, we
illustrate the advantages of having separate parameters for long-run and short-run effects.

Technical note
You may be wondering what long-run effects can be produced by a model for stationary processes.
Because the autocorrelations of a long-memory process die out so slowly, the spectral density becomes
infinite as the frequency goes to 0 and the impulse–response functions die out at a much slower rate.

The spectral density of a process describes the relative contributions of random components at
different frequencies to the variance of the process, with the low-frequency components corresponding
to long-run effects. See [TS] psdensity for an introduction to estimating and interpreting spectral
densities implied by the estimated parameters of parametric models.


Granger and Joyeux (1980) motivate ARFIMA models by noting that their implied spectral densities
are finite except at frequency 0 with 0 < d < 0.5, whereas stationary ARMA models have finite spectral
densities at all frequencies. Granger and Joyeux (1980) argue that the ability of ARFIMA models to
capture this long-range dependence, which cannot be captured by stationary ARMA models, is an
important advantage of ARFIMA models over ARMA models when modeling long-memory processes.

Impulse–response functions are the coefficients on the infinite-order MA representation of a process,
and they describe how a shock feeds through the dynamic system. If the process is stationary, the
coefficients decay to 0 and they sum to a finite constant. As expected, the coefficients from an ARFIMA
model die out at a slower rate than those from an ARMA model. Because the ARMA terms model
the short-run effects and the d parameter models the long-run effects, an ARFIMA model specifies
both a short-run impulse–response function and a long-run impulse–response function. When an
ARMA model is used to approximate a long-memory model, the ARMA impulse–response-function
coefficients confound the two effects.

Example 2
In this example, we model the log of the monthly levels of carbon dioxide above Mauna Loa,
Hawaii. To remove the seasonality, we model the twelfth seasonal difference of the log of the series.
This example illustrates that the ARFIMA model parameterizes long-run and short-run effects, whereas
the ARMA model confounds the two effects. (Sowell [1992a] discusses this point in greater depth.)
We begin by fitting the series to an ARMA model with an AR(1) term and an MA(2).
. use http://www.stata-press.com/data/r14/mloa, clear
. arima S12.log, ar(1) ma(2)
(setting optimization to BHHH)
Iteration 0:   log likelihood = 2000.9262
Iteration 1:   log likelihood = 2001.5484
Iteration 2:   log likelihood = 2001.5637
Iteration 3:   log likelihood = 2001.5641
Iteration 4:   log likelihood = 2001.5641

ARIMA regression
Sample: 1960m1 - 1990m12                        Number of obs     =        372
                                                Wald chi2(2)      =     500.41
Log likelihood = 2001.564                       Prob > chi2       =     0.0000

                             OPG
     S12.log        Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------------------------------------------------------------------
log
       _cons     .0036754   .0002475    14.85   0.000     .0031903    .0041605
------------------------------------------------------------------------------
ARMA
          ar
         L1.     .7354346   .0357715    20.56   0.000     .6653237    .8055456
          ma
         L2.     .1353086   .0513156     2.64   0.008     .0347319    .2358853
------------------------------------------------------------------------------
      /sigma     .0011129   .0000401    27.77   0.000     .0010344    .0011914
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.


All the parameters are statistically significant, and they indicate a high degree of dependence.
Below we nest the previously fit ARMA model into an ARFIMA model.
. arfima S12.log, ar(1) ma(2)
Iteration 0:   log likelihood = 2006.0757
Iteration 1:   log likelihood = 2006.0774
Iteration 2:   log likelihood = 2006.0775
Iteration 3:   log likelihood = 2006.0804
Iteration 4:   log likelihood = 2006.0805
Refining estimates:
Iteration 0:   log likelihood = 2006.0805  (backed up)
Iteration 1:   log likelihood = 2006.0805  (backed up)

ARFIMA regression
Sample: 1960m1 - 1990m12                        Number of obs     =        372
                                                Wald chi2(3)      =     248.88
Log likelihood = 2006.0805                      Prob > chi2       =     0.0000

                             OIM
     S12.log        Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------------------------------------------------------------------
S12.log
       _cons      .003616   .0012968     2.79   0.005     .0010743    .0061578
------------------------------------------------------------------------------
ARFIMA
          ar
         L1.     .2160894   .1015547     2.13   0.033     .0170458     .415133
          ma
         L2.     .1633916   .0516905     3.16   0.002     .0620801    .2647031
           d     .4042573   .0805417     5.02   0.000     .2463985     .562116
------------------------------------------------------------------------------
     /sigma2     1.20e-06   8.84e-08    13.63   0.000     1.03e-06    1.38e-06
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.

All the parameters are statistically significant at the 5% level. That the confidence interval for the
fractional-difference parameter d includes numbers greater than 0.5 is evidence that the series may be
nonstationary. Alternatively, we proceed as if the series is stationary, and the wide confidence interval
for d reflects the difficulty of fitting a complicated dynamic model with only 372 observations.
With the above caveat, we can now proceed to compare the interpretations of the ARMA and ARFIMA
estimates. We compare these estimates in terms of their implied spectral densities. The spectral density
of a stationary time series describes the relative importance of components at different frequencies.
See [TS] psdensity for an introduction to spectral densities.
Below we quietly refit the ARMA model and use psdensity to estimate the parametric spectral
density implied by the ARMA parameter estimates.
. quietly arima S12.log, ar(1) ma(2)
. psdensity d_arma omega1

The psdensity command above put the estimated ARMA spectral density into the new variable
d arma at the frequencies stored in the new variable omega1.
Below we quietly refit the ARFIMA model and use psdensity to estimate the long-run parametric
spectral density and then the short-run parametric spectral density implied by the ARFIMA parameter
estimates. The long-run estimates use the estimated d, and the short-run estimates set d to 0 (as is


implied by specifying the smemory option). The long-run estimates describe the fractionally integrated
series, and the short-run estimates describe the fractionally differenced series.
. quietly arfima S12.log, ar(1) ma(2)
. psdensity d_arfima omega2
. psdensity ds_arfima omega3, smemory

Now that we have the ARMA estimates, the long-run ARFIMA estimates, and the short-run ARFIMA
estimates, we graph them below.

. line d_arma d_arfima omega1, name(lmem) nodraw


. line d_arma ds_arfima omega1, name(smem) nodraw
. graph combine lmem smem, cols(1) xcommon

[Graph: combined plot; the top panel compares the ARMA spectral density with the ARFIMA long-memory spectral density, and the bottom panel compares the ARMA spectral density with the ARFIMA short-memory spectral density, each plotted against frequency]

The top graph contains a plot of the spectral densities implied by the ARMA parameter estimates
and by the long-run ARFIMA parameter estimates. As discussed by Granger and Joyeux (1980), the
two models imply different spectral densities for frequencies close to 0 when d > 0. When d > 0,
the spectral density implied by the ARFIMA estimates diverges to infinity, whereas the spectral density
implied by the ARMA estimates remains finite at frequency 0 for stable ARMA processes. This difference
reflects the ability of ARFIMA models to capture long-run effects that ARMA models only capture as
the parameters approach those of an unstable model.
The bottom graph contains a plot of the spectral densities implied by the ARMA parameter estimates
and by the short-run ARFIMA parameter estimates, which are the ARMA parameters for the fractionally
differenced process. Comparing the two plots illustrates the ability of the short-run ARFIMA parameters
to capture both low-frequency and high-frequency components in the fractionally differenced series. In
contrast, the ARMA parameters captured only low-frequency components in the fractionally integrated
series.
Comparing the ARFIMA and ARMA spectral densities in the two graphs illustrates that the additional
fractional-difference parameter allows the ARFIMA model to identify both long-run and short-run
effects, which the ARMA model confounds.


Technical note

As noted above, the spectral density of an ARFIMA process with d > 0 diverges to infinity as
the frequency goes to 0. In contrast, the spectral density of an ARFIMA process with d < 0 is 0 at
frequency 0.

The autocorrelation function of an ARFIMA process with d < 0 also decays at the slower hyperbolic
rate. ARFIMA processes with d < 0 are sometimes called antipersistent because all the autocorrelations
for lags greater than 0 are negative.

Hosking (1981), Baillie (1996), and others refer to ARFIMA processes with d < 0 as intermediate-memory
processes and ARFIMA processes with d > 0 as long-memory processes. Box, Jenkins, and
Reinsel (2008, 429) define long-memory processes as those with the slower hyperbolic rate of decay,
which includes ARFIMA processes with d < 0. We follow Box, Jenkins, and Reinsel (2008) and thus
call ARFIMA processes for −0.5 < d < 0 and 0 < d < 0.5 long-memory processes.

Sowell (1992a) uses the properties of ARFIMA processes with d < 0 to derive tests for whether a
series was generated by an I(1) process or an I(d) process with d < 1.

Example 3
In this example, we use arfima to test whether a series is nonstationary. More specifically, we
test whether the series was generated by an I(1) process by testing whether the first difference of
the series is overdifferenced.
We have monthly data on the log of the number of reported cases of mumps in New York City
between January 1928 and December 1972. We believe that the series is stationary, after accounting
for the monthly seasonal effects. We use an ARFIMA model for differenced series to test the null
hypothesis of nonstationarity. We use the confidence interval for the d parameter from an ARFIMA
model for the first difference of the log of the series to perform the test. If the right-hand end of the
95% CI is less than 0, we conclude that the differenced series was overdifferenced, which implies
that the original series was not nonstationary.
More formally, if y_t is I(1), then Δy_t = y_t − y_{t−1} must be I(0). If Δy_t is I(d) with d < 0,
then Δy_t is overdifferenced and y_t is I(d) with d < 1.
We use seasonal indicators to account for the seasonal effects. In the output below, we specify the
mpl option to use the MPL estimator that is less biased in the presence of covariates.
arfima computes the maximum likelihood estimates (MLE) for the parameters of this stationary
and invertible Gaussian process. Alternatively, the maximum MPL estimates may be computed. See
Methods and formulas for a description of these two estimation techniques, but suffice it to say
that the MLE estimates for d are biased in the presence of exogenous variables, even the constant
term, for small samples. The MPL estimator reduces this bias; see Hauser (1999) and Doornik and
Ooms (2004).


. use http://www.stata-press.com/data/r14/mumps2, clear


(Hipel and Mcleod (1994), http://robjhyndman.com/tsdldata/epi/mumps.dat)
. arfima D.log i.month, ma(1 2) mpl
Iteration 0:   log modified profile likelihood = 53.766763
Iteration 1:   log modified profile likelihood = 54.388641
Iteration 2:   log modified profile likelihood = 54.934726  (backed up)
Iteration 3:   log modified profile likelihood = 54.937524  (backed up)
Iteration 4:   log modified profile likelihood = 55.002186
Iteration 5:   log modified profile likelihood = 55.20462
Iteration 6:   log modified profile likelihood = 55.205939
Iteration 7:   log modified profile likelihood = 55.205949
Iteration 8:   log modified profile likelihood = 55.205949
Refining estimates:
Iteration 0:   log modified profile likelihood = 55.205949
Iteration 1:   log modified profile likelihood = 55.205949

ARFIMA regression
Sample: 1928m2 - 1972m6                         Number of obs     =        533
                                                Wald chi2(14)     =    1360.28
Log modified profile likelihood = 55.205949     Prob > chi2       =     0.0000

                             OIM
       D.log        Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------------------------------------------------------------------
D.log
       month
   February     -.220719   .0428112    -5.16   0.000    -.3046275   -.1368105
      March     .0314683   .0424718     0.74   0.459    -.0517749    .1147115
      April    -.2800296   .0460084    -6.09   0.000    -.3702043   -.1898548
        May    -.3703179   .0449932    -8.23   0.000    -.4585029   -.2821329
       June    -.4722035   .0446764   -10.57   0.000    -.5597676   -.3846394
       July    -.9613239   .0448375   -21.44   0.000    -1.049204    -.873444
     August    -1.063042   .0449272   -23.66   0.000    -1.151098   -.9749868
  September    -.7577301   .0452529   -16.74   0.000    -.8464242    -.669036
    October    -.3024251   .0462887    -6.53   0.000    -.3931494   -.2117009
   November    -.0115317   .0426911    -0.27   0.787    -.0952046    .0721413
   December     .0247135   .0430401     0.57   0.566    -.0596435    .1090705

      _cons     .3656807   .0303215    12.06   0.000     .3062517    .4251096
------------------------------------------------------------------------------
ARFIMA
          ma
         L1.      .258056   .0684414     3.77   0.000     .1239133    .3921986
         L2.     .1972011   .0506439     3.89   0.000     .0979409    .2964612
           d    -.2329426    .067336    -3.46   0.001    -.3649187   -.1009664
------------------------------------------------------------------------------

We interpret the fact that the estimated 95% CI is strictly less than 0 to mean that the differenced
series is overdifferenced, which implies that the original series is stationary.


Stored results
arfima stores the following in e():
Scalars
  e(N)               number of observations
  e(k)               number of parameters
  e(k_eq)            number of equations in e(b)
  e(k_dv)            number of dependent variables
  e(k_aux)           number of auxiliary parameters
  e(df_m)            model degrees of freedom
  e(ll)              log likelihood
  e(chi2)            χ²
  e(p)               significance
  e(s2)              idiosyncratic error variance estimate, if e(method) = mpl
  e(tmin)            minimum time
  e(tmax)            maximum time
  e(ar_max)          maximum AR lag
  e(ma_max)          maximum MA lag
  e(rank)            rank of e(V)
  e(ic)              number of iterations
  e(rc)              return code
  e(converged)       1 if converged, 0 otherwise
  e(constant)        0 if noconstant, 1 otherwise

Macros
  e(cmd)             arfima
  e(cmdline)         command as typed
  e(depvar)          name of dependent variable
  e(covariates)      list of covariates
  e(method)          mle or mpl
  e(eqnames)         names of equations
  e(title)           title in estimation output
  e(tmins)           formatted minimum time
  e(tmaxs)           formatted maximum time
  e(chi2type)        Wald; type of model χ² test
  e(vce)             vcetype specified in vce()
  e(vcetype)         title used to label Std. Err.
  e(ma)              lags for MA terms
  e(ar)              lags for AR terms
  e(technique)       maximization technique
  e(tech_steps)      number of iterations performed before switching techniques
  e(properties)      b V
  e(estat_cmd)       program used to implement estat
  e(predict)         program used to implement predict
  e(marginsok)       predictions allowed by margins
  e(marginsnotok)    predictions disallowed by margins
  e(asbalanced)      factor variables fvset as asbalanced
  e(asobserved)      factor variables fvset as asobserved

Matrices
  e(b)               coefficient vector
  e(Cns)             constraints matrix
  e(ilog)            iteration log (up to 20 iterations)
  e(gradient)        gradient vector
  e(V)               variance–covariance matrix of the estimators
  e(V_modelbased)    model-based variance

Functions
  e(sample)          marks estimation sample


Methods and formulas


Methods and formulas are presented under the following headings:
Introduction
The likelihood function
The autocovariance function
The profile likelihood
The MPL

Introduction

We model an observed second-order stationary time series y_t, t = 1, ..., T, using the
ARFIMA(p, d, q) model defined as

    \phi(L^p)(1 - L)^d (y_t - x_t\beta) = \theta(L^q)\epsilon_t

where

    \phi(L^p) = 1 - \phi_1 L - \phi_2 L^2 - \cdots - \phi_p L^p
    \theta(L^q) = 1 + \theta_1 L + \theta_2 L^2 + \cdots + \theta_q L^q
    (1 - L)^d = \sum_{j=0}^{\infty} (-1)^j \frac{\Gamma(d + 1)}{\Gamma(j + 1)\,\Gamma(d - j + 1)} L^j

and the lag operator is defined as L^j y_t = y_{t-j}, t = 1, ..., T and j = 1, ..., t − 1; ε_t ~ N(0, σ_ε^2);
Γ() is the gamma function; and −0.5 < d < 0.5, d ≠ 0. The row vector x_t contains the exogenous
variables specified as indepvars in the arfima syntax.

The process is stationary and invertible for −0.5 < d < 0.5; the roots of the AR polynomial, \phi(z) =
1 - \phi_1 z - \phi_2 z^2 - \cdots - \phi_p z^p = 0, and the MA polynomial, \theta(z) = 1 + \theta_1 z + \theta_2 z^2 + \cdots + \theta_q z^q = 0,
lie outside the unit circle, and there are no common roots. When 0 < d < 0.5, the process has
long memory in that the autocovariance function, γ_h, decays to 0 at a hyperbolic rate, such that
\sum_{h=-\infty}^{\infty} |\gamma_h| = \infty. When −0.5 < d < 0, the process also has long memory in that the autocovariance
function, γ_h, decays to 0 at a hyperbolic rate such that \sum_{h=-\infty}^{\infty} |\gamma_h| < \infty. (As discussed in the
text, some authors refer to ARFIMA processes with −0.5 < d < 0 as having intermediate memory,
but we follow Box, Jenkins, and Reinsel [2008] and refer to them as long-memory processes.)

Granger and Joyeux (1980), Hosking (1981), Sowell (1992b), Sowell (1992a), Baillie (1996), and
Palma (2007) provide overviews of long-memory processes, fractional integration, and introductions
to ARFIMA models.

The likelihood function

Estimation of the ARFIMA parameters \phi, \theta, d, and \sigma_\epsilon^2 is done by the method of maximum
likelihood. The log Gaussian likelihood of y given parameter estimates \hat\eta = (\hat\phi', \hat\theta', \hat\beta', \hat{d}, \hat\sigma_\epsilon^2)' is

    \ell(y|\hat\eta) = -\frac{1}{2}\left\{ T\log(2\pi) + \log|\hat{V}|
                        + (y - X\hat\beta)'\hat{V}^{-1}(y - X\hat\beta) \right\}        (2)

where the covariance matrix V has a Toeplitz structure

    V = \begin{pmatrix}
          \gamma_0     & \gamma_1     & \gamma_2     & \cdots & \gamma_{T-1} \\
          \gamma_1     & \gamma_0     & \gamma_1     & \cdots & \gamma_{T-2} \\
          \vdots       & \vdots       & \vdots       & \ddots & \vdots       \\
          \gamma_{T-1} & \gamma_{T-2} & \gamma_{T-3} & \cdots & \gamma_0
        \end{pmatrix}

Var(y_t) = γ_0, Cov(y_t, y_{t−h}) = γ_h (for h = 1, ..., t − 1), and t = 1, ..., T (Sowell 1992b).

We use the Durbin–Levinson algorithm (Palma 2007; Golub and Van Loan 2013) to factor and
invert V. Using only the vector of autocovariances γ, the Durbin–Levinson algorithm will compute
\hat\epsilon = \hat{D}^{-0.5}\hat{L}^{-1}(y - X\hat\beta), where L is lower triangular, V = LDL', and D = Diag(ν),
ν_t = Var(y_t). The algorithm performs these computations without generating the T × T matrix L^{-1}.

During optimization, we restrict the fractional-integration parameter to (−0.5, 0.5) using a logistic
transform, d = \log\{(x + 0.5)/(0.5 - x)\}, so that the range of d encompasses the real line. During
the Refining estimates step, the fractional-integration parameter is transformed back to the restricted
space, where we obtain its standard error from the observed information matrix.
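To make the recursion concrete, the following is a minimal Mata sketch of Durbin–Levinson whitening. It is not arfima's internal code, and the function name and interface are hypothetical; given a zero-mean data vector and its autocovariances, it returns the standardized one-step prediction errors without ever forming the T × T matrix L^{-1}:

mata:
// Whiten z (T x 1) given autocovariances g (T x 1, with g[1] = gamma_0),
// returning e = D^(-1/2) L^(-1) z from the factorization V = L D L'.
real colvector dl_whiten(real colvector z, real colvector g)
{
    real scalar    T, n, k, phinn, v, zhat
    real colvector phi, phiold, e

    T = rows(z)
    v = g[1]                              // v_0 = gamma_0
    e = J(T, 1, .)
    e[1] = z[1]/sqrt(v)
    phi = J(T, 1, 0)
    for (n = 1; n <= T-1; n++) {
        // partial autocorrelation phi_{n,n}
        phinn = g[n+1]
        for (k = 1; k <= n-1; k++) phinn = phinn - phi[k]*g[n-k+1]
        phinn = phinn/v
        // update the prediction coefficients phi_{n,k}
        phiold = phi
        for (k = 1; k <= n-1; k++) phi[k] = phiold[k] - phinn*phiold[n-k]
        phi[n] = phinn
        v = v*(1 - phinn^2)               // one-step prediction error variance v_n
        // standardized prediction error for observation n+1
        zhat = 0
        for (k = 1; k <= n; k++) zhat = zhat + phi[k]*z[n+1-k]
        e[n+1] = (z[n+1] - zhat)/sqrt(v)
    }
    return(e)
}
end

The whole computation uses O(T^2) operations and O(T) storage, which is the point of using the recursion rather than inverting V directly.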

The autocovariance function

Computation of the autocovariances γ_h is given by Sowell (1992b) with numerical enhancements
by Doornik and Ooms (2003) and is reviewed by Palma (2007, sec. 3.2.4). We reproduce it here.
The autocovariance of an ARFIMA(0, d, 0) process is

    \gamma_h = \sigma_\epsilon^2\, \frac{\Gamma(1 - 2d)}{\Gamma(1 - d)\Gamma(d)}\,
               \frac{\Gamma(h + d)}{\Gamma(1 + h - d)}

where h = 0, 1, .... For ARFIMA(p, d, q), we have

    \gamma_h = \sigma_\epsilon^2 \sum_{i=-q}^{q} \sum_{j=1}^{p}
               \psi(i)\,\xi_j\, C(d, p + i - h, \rho_j)        (3)

where ρ_j denotes the jth (assumed distinct) inverse root of the AR polynomial,

    \psi(i) = \sum_{k=\max(0,i)}^{\min(q,\,q+i)} \theta_k \theta_{k-i}

    \xi_j = \left[ \rho_j \prod_{i=1}^{p} (1 - \rho_i \rho_j)
                   \prod_{m \ne j} (\rho_j - \rho_m) \right]^{-1}

and

    C(d, h, \rho) = \frac{\gamma_h}{\sigma_\epsilon^2}
                    \left\{ \rho^{2p} F(d + h, 1, 1 - d + h, \rho)
                            + F(d - h, 1, 1 - d - h, \rho) - 1 \right\}

F() is the hypergeometric series (Gradshteyn and Ryzhik 2007)

    F(a, b, c, x) = 1 + \frac{ab}{c \cdot 1} x
                      + \frac{a(a+1)b(b+1)}{c(c+1) \cdot 1 \cdot 2} x^2
                      + \frac{a(a+1)(a+2)b(b+1)(b+2)}{c(c+1)(c+2) \cdot 1 \cdot 2 \cdot 3} x^3 + \cdots

The series recursions are evaluated backward, as Doornik and Ooms (2003) emphasize. Doornik and
Ooms (2003) also provide other computational enhancements, such as not dividing by ρ_j in (3).
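For the ARFIMA(0, d, 0) case, the gamma-function ratio above can be evaluated stably in logs. The following is a minimal Mata sketch for 0 < d < 0.5, where every gamma argument is positive (the function name is hypothetical, and this is not arfima's internal code):

mata:
// Autocovariances gamma_0, ..., gamma_hmax of an ARFIMA(0,d,0) process,
// computed via lngamma() to avoid overflow; assumes 0 < d < 0.5.
real colvector arfima0_acov(real scalar d, real scalar sigma2, real scalar hmax)
{
    real colvector g
    real scalar    h

    g = J(hmax+1, 1, .)
    for (h = 0; h <= hmax; h++) {
        g[h+1] = sigma2*exp(lngamma(1-2*d) + lngamma(h+d) -
                            lngamma(1-d) - lngamma(d) - lngamma(1+h-d))
    }
    return(g)
}
end

At h = 0 this reduces to γ_0 = σ_ε^2 Γ(1 − 2d)/Γ(1 − d)^2, which matches the formula above.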


The profile likelihood

Doornik and Ooms (2003) show that the parameters \sigma_\epsilon^2 and \beta can be concentrated out of the
likelihood. Using (2), the MLE for \sigma_\epsilon^2 is

    \hat\sigma_\epsilon^2 = \frac{1}{T}(y - X\hat\beta)'\hat{R}^{-1}(y - X\hat\beta)        (4)

where R = \frac{1}{\sigma_\epsilon^2} V and

    \hat\beta = (X'\hat{R}^{-1}X)^{-1} X'\hat{R}^{-1} y        (5)

is the weighted least-squares estimate for \beta. Substituting (4) into (2) results in the profile likelihood

    \ell_p(y|\hat\eta_r) = -\frac{T}{2}\left\{ 1 + \log(2\pi)
                           + \frac{1}{T}\log|\hat{R}| + \log\hat\sigma_\epsilon^2 \right\}

We compute the MLEs using the profile likelihood for the reduced parameter set \eta_r = (\phi', \theta', d).
Equations (4) and (5) provide MLEs for \sigma_\epsilon^2 and \beta to create the full parameter vector \eta =
(\phi', \theta', \beta', d, \sigma_\epsilon^2). We follow with the Refining estimates step, optimizing on the log likelihood
(2). The refining step does not change the estimates; it produces the coefficient variance–covariance
matrix from the observed information matrix.

Using this profile likelihood prevents the use of the BHHH optimization method because there are
no observation-level scores.

The MPL

The small-sample MLE for d can be biased when there are exogenous variables in the model. The MPL reduces this bias (Hauser 1999; Doornik and Ooms 2004). The mpl option will direct arfima to use this optimization criterion. The MPL is expressed as

    \ell_m(y|\hat{\eta}_r) = -\frac{T}{2}\{1 + \log(2\pi)\} - \frac{1}{2}\Bigl(1 - \frac{1}{T}\Bigr)\log|\hat{R}| - \frac{T-k-2}{2}\log\hat{\sigma}^2 - \frac{1}{2}\log|X'\hat{R}^{-1}X|

where k = rank(X) (An and Bloomfield 1993).

There is no MPL estimator for \sigma^2, and you will notice its absence from the coefficient table. However, the unbiased estimate assuming ARFIMA(0, 0, 0),

    \tilde{\sigma}^2 = \frac{(y - X\hat{\beta})'\hat{R}^{-1}(y - X\hat{\beta})}{T - k}

is stored in e() for postestimation computation of the forecast and residual root mean squared errors.
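
For example, a minimal sketch (the variable names y and x1 are hypothetical) of requesting this criterion:

    . arfima y x1, ar(1) ma(1) mpl

Because there is no MPL estimator for \sigma^2, postestimation commands that require the full likelihood machinery (estat ic, margins, marginsplot, nlcom, and predictnl) are not available after arfima, mpl.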

References
An, S., and P. Bloomfield. 1993. Cox and Reid's modification in regression models with correlated errors. Technical report, Department of Statistics, North Carolina State University, Raleigh, NC.
Baillie, R. T. 1996. Long memory processes and fractional integration in econometrics. Journal of Econometrics 73: 5–59.


Beran, J. 1994. Statistics for Long-Memory Processes. Boca Raton: Chapman & Hall/CRC.
Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 2008. Time Series Analysis: Forecasting and Control. 4th ed. Hoboken, NJ: Wiley.
Doornik, J. A., and M. Ooms. 2003. Computational aspects of maximum likelihood estimation of autoregressive fractionally integrated moving average models. Computational Statistics & Data Analysis 42: 333–348.
Doornik, J. A., and M. Ooms. 2004. Inference and forecasting for ARFIMA models with an application to US and UK inflation. Studies in Nonlinear Dynamics & Econometrics 8: 1–23.
Golub, G. H., and C. F. Van Loan. 2013. Matrix Computations. 4th ed. Baltimore: Johns Hopkins University Press.
Gradshteyn, I. S., and I. M. Ryzhik. 2007. Table of Integrals, Series, and Products. 7th ed. San Diego: Elsevier.
Granger, C. W. J., and R. Joyeux. 1980. An introduction to long-memory time series models and fractional differencing. Journal of Time Series Analysis 1: 15–29.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Hauser, M. A. 1999. Maximum likelihood estimators for ARMA and ARFIMA models: a Monte Carlo study. Journal of Statistical Planning and Inference 80: 229–255.
Hosking, J. R. M. 1981. Fractional differencing. Biometrika 68: 165–176.
Hurst, H. E. 1951. Long-term storage capacity of reservoirs. Transactions of the American Society of Civil Engineers 116: 770–779.
Palma, W. 2007. Long-Memory Time Series: Theory and Methods. Hoboken, NJ: Wiley.
Sowell, F. 1992a. Modeling long-run behavior with the fractional ARIMA model. Journal of Monetary Economics 29: 277–302.
Sowell, F. 1992b. Maximum likelihood estimation of stationary univariate fractionally integrated time series models. Journal of Econometrics 53: 165–188.

Also see
[TS] arfima postestimation – Postestimation tools for arfima
[TS] tsset – Declare data to be time-series data
[TS] arima – ARIMA, ARMAX, and other dynamic regression models
[TS] sspace – State-space models
[U] 20 Estimation and postestimation commands

Title
arfima postestimation – Postestimation tools for arfima

Postestimation commands    predict    margins    Remarks and examples
Methods and formulas    References    Also see

Postestimation commands
The following postestimation commands are of special interest after arfima:

Command            Description
estat acplot       estimate autocorrelations and autocovariances
irf                create and analyze IRFs
psdensity          estimate the spectral density

The following standard postestimation commands are also available:

Command            Description
contrast           contrasts and ANOVA-style joint tests of estimates
estat ic           Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
estat summarize    summary statistics for the estimation sample
estat vce          variance–covariance matrix of the estimators (VCE)
estimates          cataloging estimation results
forecast           dynamic forecasts and simulations
lincom             point estimates, standard errors, testing, and inference for linear
                   combinations of coefficients
lrtest             likelihood-ratio test
margins            marginal means, predictive margins, marginal effects, and average
                   marginal effects
marginsplot        graph the results from margins (profile plots, interaction plots, etc.)
nlcom              point estimates, standard errors, testing, and inference for nonlinear
                   combinations of coefficients
predict            predictions, residuals, influence statistics, and other diagnostic measures
predictnl          point estimates, standard errors, testing, and inference for generalized
                   predictions
pwcompare          pairwise comparisons of estimates
test               Wald tests of simple and composite linear hypotheses
testnl             Wald tests of nonlinear hypotheses

estat ic, margins, marginsplot, nlcom, and predictnl are not appropriate after arfima, mpl.


predict

Description for predict
predict creates a new variable containing predictions such as expected values, fractionally differenced series, and innovations. All predictions are available as static one-step-ahead predictions, and the dependent variable is also available as a dynamic multistep prediction.

Menu for predict
Statistics > Postestimation

Syntax for predict

    predict [type] newvar [if] [in] [, statistic options]

statistic          Description
Main
xb                 predicted values; the default
residuals          predicted innovations
rstandard          standardized innovations
fdifference        fractionally differenced series

These statistics are available both in and out of sample; type predict ... if e(sample) ... if wanted only for the estimation sample.

options                  Description
Options
rmse([type] newvar)      put the estimated root mean squared error of the predicted statistic
                         in a new variable; only permitted with options xb and residuals
dynamic(datetime)        forecast the time series starting at datetime; only permitted with
                         option xb

datetime is a # or a time literal, such as td(1jan1995) or tq(1995q1); see [D] datetime.

Options for predict

Main
xb, the default, calculates the predictions for the level of depvar.
residuals calculates the predicted innovations.
rstandard calculates the standardized innovations.
fdifference calculates the fractionally differenced predictions of depvar.

Options
rmse([type] newvar) puts the root mean squared errors of the predicted statistics into the specified new variables. The root mean squared errors measure the variances due to the disturbances but do not account for estimation error. rmse() is only permitted with the xb and residuals options.
dynamic(datetime) specifies when predict starts producing dynamic forecasts. The specified datetime must be in the scale of the time variable specified in tsset, and the datetime must be inside a sample for which observations on the dependent variables are available. For example, dynamic(tq(2008q4)) causes dynamic predictions to begin in the fourth quarter of 2008, assuming that your time variable is quarterly; see [D] datetime. If the model contains exogenous variables, they must be present for the whole predicted sample. dynamic() may only be specified with xb.
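
A minimal sketch (the new variable names are arbitrary) combining these options after an arfima fit:

    . predict yhat, xb rmse(yhat_rmse)       // one-step predictions and their RMSEs
    . predict eps, residuals                 // predicted innovations
    . predict fdy, fdifference               // fractionally differenced depvar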

margins

Description for margins
margins estimates margins of response for expected values.

Menu for margins
Statistics > Postestimation

Syntax for margins

    margins [marginlist] [, options]
    margins [marginlist], predict(statistic ...) [options]

statistic          Description
xb                 predicted values; the default
residuals          not allowed with margins
rstandard          not allowed with margins
fdifference        not allowed with margins

Statistics not allowed with margins are functions of stochastic quantities other than e(b).
For the full syntax, see [R] margins.
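
For instance, a minimal sketch of the one statistic margins supports after arfima:

    . margins, predict(xb)     // average predicted value of depvar over the estimation sample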

Remarks and examples

Remarks are presented under the following headings:
    Forecasting after ARFIMA
    IRF results for ARFIMA


Forecasting after ARFIMA

We assume that you have already read [TS] arfima. In this section, we illustrate some of the features of predict after fitting an ARFIMA model using arfima.

Example 1
We have monthly data on the one-year Treasury bill secondary market rate imported from the Federal Reserve Economic Data (FRED) database using freduse; see Drukker (2006) and the Stata YouTube video Using freduse to download time-series data from the Federal Reserve for an introduction to freduse. Below we fit an ARFIMA model with two autoregressive terms and one moving-average term to the data.
. use http://www.stata-press.com/data/r14/tb1yr
(FRED, 1-year treasury bill; secondary market rate, monthly 1959-2001)
. arfima tb1yr, ar(1/2) ma(1)
Iteration 0:   log likelihood = -235.31856
Iteration 1:   log likelihood = -235.26104  (backed up)
Iteration 2:   log likelihood = -235.25974  (backed up)
Iteration 3:   log likelihood = -235.2544   (backed up)
Iteration 4:   log likelihood = -235.13353
Iteration 5:   log likelihood = -235.13063
Iteration 6:   log likelihood = -235.12108
Iteration 7:   log likelihood = -235.11917
Iteration 8:   log likelihood = -235.11869
Iteration 9:   log likelihood = -235.11868
Refining estimates:
Iteration 0:   log likelihood = -235.11868
Iteration 1:   log likelihood = -235.11868
ARFIMA regression
Sample: 1959m7 - 2001m8                         Number of obs     =        506
                                                Wald chi2(4)      =    1864.15
Log likelihood = -235.11868                     Prob > chi2       =     0.0000

                             OIM
       tb1yr       Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

tb1yr
       _cons    5.496709   2.920357     1.88   0.060    -.2270864     11.2205

ARFIMA
          ar
         L1.    .2326107   .1136655     2.05   0.041     .0098304    .4553911
         L2.    .3885212   .0835665     4.65   0.000     .2247337    .5523086

          ma
         L1.    .7755848   .0669562    11.58   0.000     .6443531    .9068166

           d    .4606489   .0646542     7.12   0.000      .333929    .5873688

     /sigma2    .1466495    .009232    15.88   0.000     .1285551    .1647439

Note: The test of the variance against zero is one sided, and the two-sided
confidence interval is truncated at zero.

All the parameters are statistically significant at the 5% level, and they indicate a high degree of
dependence in the series. In fact, the confidence interval for the fractional-difference parameter d
indicates that the series may be nonstationary. We will proceed as if the series is stationary and
suppose that it is fractionally integrated of order 0.46.


We begin our postestimation analysis by predicting the series in sample:


. predict ptb
(option xb assumed)

We continue by using the estimated fractional-difference parameter to fractionally difference the


original series and by plotting the original series, the predicted series, and the fractionally differenced
series. See [TS] arfima for a definition of the fractional-difference operator.
. predict fdtb, fdifference

. twoway tsline tb1yr ptb fdtb, legend(cols(1))

(figure omitted: time-series plot, 1960m1-2000m1, of the 1-Year Treasury Bill: Secondary Market Rate, the xb prediction, and tb1yr fractionally differenced)

The above graph shows that the in-sample predictions appear to track the original series well and
that the fractionally differenced series looks much more like a stationary series than does the original.

Example 2
In this example, we use the above estimates to produce a dynamic forecast and a confidence
interval for the forecast for the one-year treasury bill rate and plot them.
We begin by extending the dataset and using predict to put the dynamic forecast in the new
ftb variable and the root mean squared error of the forecast in the new rtb variable. (As discussed
in Methods and formulas, the root mean squared error of the forecast accounts for the idiosyncratic
error but not for the estimation error.)
. tsappend, add(12)
. predict ftb, xb dynamic(tm(2001m9)) rmse(rtb)

Now we compute a 90% confidence interval around the dynamic forecast and plot the original
series, the in-sample forecast, the dynamic forecast, and the confidence interval of the dynamic
forecast.
. scalar z = invnormal(0.95)
. generate lb = ftb - z*rtb if month>=tm(2001m9)
(506 missing values generated)
. generate ub = ftb + z*rtb if month>=tm(2001m9)
(506 missing values generated)


. twoway tsline tb1yr ftb if month>tm(1998m12) ||
>        tsrline lb ub if month>=tm(2001m9),
>        legend(cols(1) label(3 "90% prediction interval"))

(figure omitted: time-series plot, 1999m1-2002m1, of the 1-Year Treasury Bill: Secondary Market Rate, the xb prediction with dynamic(tm(2001m9)), and the 90% prediction interval)

IRF results for ARFIMA


We assume that you have already read [TS] irf and [TS] irf create. In this section, we illustrate how to calculate the impulse-response function (IRF) of an ARFIMA model.

Example 3
Here we use the estimates obtained in example 1 to calculate the IRF of the ARFIMA model; see
[TS] irf and [TS] irf create for more details about IRFs.

. irf create arfima, step(50) set(myirf)
(file myirf.irf created)
(file myirf.irf now active)
(file myirf.irf updated)
. irf graph irf

(figure omitted: impulse-response function (irf) for irfname arfima, impulse tb1yr, and response tb1yr, over steps 0-50 with 95% CI; graphs by irfname, impulse variable, and response variable)

The figure shows that a shock to tb1yr causes an initial spike in tb1yr, after which the impact
of the shock starts decaying slowly. This behavior is characteristic of long-memory processes.

Methods and formulas

Denote \gamma_h, h = 1, \dots, t, to be the autocovariance function of the ARFIMA(p, d, q) process for two observations, y_t and y_{t-h}, h time periods apart. The covariance matrix V of the process of length T has a Toeplitz structure of

    V = \begin{pmatrix}
        \gamma_0     & \gamma_1     & \gamma_2     & \dots  & \gamma_{T-1} \\
        \gamma_1     & \gamma_0     & \gamma_1     & \dots  & \gamma_{T-2} \\
        \vdots       & \vdots       & \vdots       & \ddots & \vdots       \\
        \gamma_{T-1} & \gamma_{T-2} & \gamma_{T-3} & \dots  & \gamma_0
        \end{pmatrix}

where the process variance is \gamma_0 = Var(y_t). We factor V = LDL', where L is lower triangular and D = Diag(\nu_t). The structure of L^{-1} is of importance:

    L^{-1} = \begin{pmatrix}
        1                & 0                & 0                & \dots  & 0              & 0 \\
        -\tau_{1,1}      & 1                & 0                & \dots  & 0              & 0 \\
        -\tau_{2,2}      & -\tau_{2,1}      & 1                & \dots  & 0              & 0 \\
        \vdots           & \vdots           & \vdots           & \ddots & \vdots         & \vdots \\
        -\tau_{T-1,T-1}  & -\tau_{T-1,T-2}  & -\tau_{T-1,T-3}  & \dots  & -\tau_{T-1,1}  & 1
        \end{pmatrix}

Let z_t = y_t - x_t\beta. The best linear predictor of z_{t+1} based on z_1, z_2, \dots, z_t is \hat{z}_{t+1} = \sum_{k=1}^{t} \tau_{t,k} z_{t-k+1}. Define \tau_t = (\tau_{t,t}, \tau_{t,t-1}, \dots, \tau_{t,1}) to be the tth row of L^{-1} up to, but not including, the diagonal. Then \tau_t = V_t^{-1}\gamma_t, where V_t is the t \times t upper left submatrix of V and \gamma_t = (\gamma_1, \gamma_2, \dots, \gamma_t)'. Hence, the best linear predictor of the innovations is computed as \hat{\epsilon} = L^{-1}z, and the one-step predictions are \hat{y} = \hat{\epsilon} + X\hat{\beta}. In practice, the computation is

    \hat{y} = \hat{L}^{-1}(y - X\hat{\beta}) + X\hat{\beta}

where \hat{L} and \hat{V} are computed from the maximum likelihood estimates. We use the Durbin–Levinson algorithm (Palma 2007; Golub and Van Loan 2013) to factor \hat{V}, invert \hat{L}, and scale y - X\hat{\beta} using only the vector of estimated autocovariances \hat{\gamma}.

The prediction error variances of the one-step predictions are computed recursively in the Durbin–Levinson algorithm. They are the \nu_t elements in the diagonal matrix D computed from the Cholesky factorization of V. The recursive formula is \nu_0 = \gamma_0, and \nu_t = \nu_{t-1}(1 - \tau_{t,t}^2).

Forecasting is carried out as described by Beran (1994, sec. 8.7), \hat{z}_{T+k} = \tilde{\gamma}_k' \hat{V}^{-1} \hat{z}, where \tilde{\gamma}_k' = (\hat{\gamma}_{T+k-1}, \hat{\gamma}_{T+k-2}, \dots, \hat{\gamma}_k). The forecast mean squared error is computed as MSE(\hat{z}_{T+k}) = \hat{\gamma}_0 - \tilde{\gamma}_k' \hat{V}^{-1} \tilde{\gamma}_k. Computation of \hat{V}^{-1}\tilde{\gamma}_k is carried out efficiently using algorithm 4.7.2 of Golub and Van Loan (2013).
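
The following is a minimal Mata sketch (not arfima's internal code; ghat and z are assumed inputs, and a dense Cholesky solve stands in for the specialized Golub–Van Loan algorithm) of the k-step forecast and its mean squared error:

mata:
// k-step forecast zhat_{T+k} = gk' V^{-1} z and MSE = gamma_0 - gk' V^{-1} gk,
// given ghat = (gamma_0, ..., gamma_{T+k-1})' and z = y - X*bhat (length T)
real rowvector arfima_fcast(real colvector ghat, real colvector z, real scalar k)
{
    real scalar    T, i, j
    real matrix    V
    real colvector gk, Vz, Vg

    T = rows(z)
    V = J(T, T, .)
    for (i = 1; i <= T; i++)
        for (j = 1; j <= T; j++) V[i, j] = ghat[abs(i - j) + 1]   // Toeplitz V
    gk = ghat[(T + k)::(k + 1)]          // (gamma_{T+k-1}, ..., gamma_k)'
    Vz = cholsolve(V, z)                 // V^{-1} z
    Vg = cholsolve(V, gk)                // V^{-1} gk
    return((gk' * Vz, ghat[1] - gk' * Vg))   // (forecast, forecast MSE)
}
end

This sketch costs O(T^3); the point of algorithm 4.7.2 cited above is to exploit the Toeplitz structure and avoid forming V at all.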

References
Beran, J. 1994. Statistics for Long-Memory Processes. Boca Raton: Chapman & Hall/CRC.
Drukker, D. M. 2006. Importing Federal Reserve economic data. Stata Journal 6: 384–386.
Golub, G. H., and C. F. Van Loan. 2013. Matrix Computations. 4th ed. Baltimore: Johns Hopkins University Press.
Palma, W. 2007. Long-Memory Time Series: Theory and Methods. Hoboken, NJ: Wiley.

Also see
[TS] arfima – Autoregressive fractionally integrated moving-average models
[TS] estat acplot – Plot parametric autocorrelation and autocovariance functions
[TS] irf – Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] psdensity – Parametric spectral density estimation after arima, arfima, and ucm
[U] 20 Estimation and postestimation commands

Title
arima – ARIMA, ARMAX, and other dynamic regression models

Description    Quick start    Menu    Syntax
Options    Remarks and examples    Stored results    Methods and formulas
References    Also see

Description
arima fits univariate models for a time series, where the disturbances are allowed to follow a linear autoregressive moving-average (ARMA) specification. When independent variables are included in the specification, such models are often called ARMAX models; and when independent variables are not specified, they reduce to Box–Jenkins autoregressive integrated moving-average (ARIMA) models in the dependent variable.

Quick start
AR(1) model using tsset data
    arima y, ar(1)
MA(1) model
    arima y, ma(1)
ARMA(2,1) model
    arima y, ar(1/2) ma(1)
Same as above
    arima y, arima(2,0,1)
As above, and take first difference of y and restrict estimation to years 1990 to 2010
    arima D.y if tin(1990,2010), ar(1/2) ma(1)
Same as above
    arima y if tin(1990,2010), arima(2,1,1)
Multiplicative SARIMA model with quarterly data
    arima y, arima(2,1,1) sarima(2,1,0,4)
ARMAX model with covariates x1 and x2, an AR(1) process, and robust standard errors
    arima y x1 x2, ar(1) vce(robust)

Menu
Statistics > Time series > ARIMA and ARMAX models

Syntax
Basic syntax for a regression model with ARMA disturbances

    arima depvar [indepvars] [, ar(numlist) ma(numlist)]

Basic syntax for an ARIMA(p, d, q) model

    arima depvar [, arima(#p,#d,#q)]

Basic syntax for a multiplicative seasonal ARIMA(p, d, q) × (P, D, Q)_s model

    arima depvar [, arima(#p,#d,#q) sarima(#P,#D,#Q,#s)]

Full syntax

    arima depvar [indepvars] [if] [in] [weight] [, options]

options                      Description
Model
  noconstant                 suppress constant term
  arima(#p,#d,#q)            specify ARIMA(p, d, q) model for dependent variable
  ar(numlist)                autoregressive terms of the structural model disturbance
  ma(numlist)                moving-average terms of the structural model disturbance
  constraints(constraints)   apply specified linear constraints
  collinear                  keep collinear variables

Model 2
  sarima(#P,#D,#Q,#s)        specify period-#s multiplicative seasonal ARIMA term
  mar(numlist, #s)           multiplicative seasonal autoregressive term; may be repeated
  mma(numlist, #s)           multiplicative seasonal moving-average term; may be repeated

Model 3
  condition                  use conditional MLE instead of full MLE
  savespace                  conserve memory during estimation
  diffuse                    use diffuse prior for starting Kalman filter recursions
  p0(# | matname)            use alternate prior for starting Kalman recursions; seldom used
  state0(# | matname)        use alternate state vector for starting Kalman filter recursions

SE/Robust
  vce(vcetype)               vcetype may be opg, robust, or oim

Reporting
  level(#)                   set confidence level; default is level(95)
  detail                     report list of gaps in time series
  nocnsreport                do not display constraints
  display_options            control columns and column formats, row spacing, and line width

Maximization
  maximize_options           control the maximization process; seldom used

  coeflegend                 display legend instead of statistics

You must tsset your data before using arima; see [TS] tsset.
depvar and indepvars may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, fp, rolling, statsby, and xi are allowed; see [U] 11.1.10 Prefix commands.
iweights are allowed; see [U] 11.1.6 weight.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.

Options

Model
noconstant; see [R] estimation options.
arima(#p,#d,#q) is an alternative, shorthand notation for specifying models with ARMA disturbances. The dependent variable and any independent variables are differenced #d times, and 1 through #p lags of autocorrelations and 1 through #q lags of moving averages are included in the model. For example, the specification

    . arima D.y, ar(1/2) ma(1/3)

is equivalent to

    . arima y, arima(2,1,3)

The latter is easier to write for simple ARMAX and ARIMA models, but if gaps in the AR or MA lags are to be modeled, or if different operators are to be applied to independent variables, the first syntax is required.
ar(numlist) specifies the autoregressive terms of the structural model disturbance to be included in
the model. For example, ar(1/3) specifies that lags of 1, 2, and 3 of the structural disturbance
be included in the model; ar(1 4) specifies that lags 1 and 4 be included, perhaps to account for
additive quarterly effects.
If the model does not contain regressors, these terms can also be considered autoregressive terms
for the dependent variable.
ma(numlist) specifies the moving-average terms to be included in the model. These are the terms for
the lagged innovations (white-noise disturbances).
constraints(constraints), collinear; see [R] estimation options.
If constraints are placed between structural model parameters and ARMA terms, the first few
iterations may attempt steps into nonstationary areas. This process can be ignored if the final
solution is well within the bounds of stationary solutions.

Model 2
sarima(#P,#D,#Q,#s) is an alternative, shorthand notation for specifying the multiplicative seasonal components of models with ARMA disturbances. The dependent variable and any independent variables are lag-#s seasonally differenced #D times, and 1 through #P seasonal lags of autoregressive terms and 1 through #Q seasonal lags of moving-average terms are included in the model. For example, the specification

    . arima DS12.y, ar(1/2) ma(1/3) mar(1/2,12) mma(1/2,12)

is equivalent to

    . arima y, arima(2,1,3) sarima(2,1,2,12)

mar(numlist, #s) specifies the lag-#s multiplicative seasonal autoregressive terms. For example, mar(1/2,12) requests that the first two lag-12 multiplicative seasonal autoregressive terms be included in the model.
mma(numlist, #s) specifies the lag-#s multiplicative seasonal moving-average terms. For example, mma(1 3,12) requests that the first and third (but not the second) lag-12 multiplicative seasonal moving-average terms be included in the model.

Model 3
condition specifies that conditional, rather than full, maximum likelihood estimates be produced. The presample values for ε_t and μ_t are taken to be their expected value of zero, and the estimate of the variance of ε_t is taken to be constant over the entire sample; see Hamilton (1994, 132). This estimation method is not appropriate for nonstationary series but may be preferable for long series or for models that have one or more long AR or MA lags. diffuse, p0(), and state0() have no meaning for models fit from the conditional likelihood and may not be specified with condition.
If the series is long and stationary and the underlying data-generating process does not have a long memory, estimates will be similar, whether estimated by unconditional maximum likelihood (the default), conditional maximum likelihood (condition), or maximum likelihood from a diffuse prior (diffuse).
In small samples, however, results of conditional and unconditional maximum likelihood may differ substantially; see Ansley and Newbold (1980). Whereas the default unconditional maximum likelihood estimates make the most use of sample information when all the assumptions of the model are met, Harvey (1989) and Ansley and Kohn (1985) argue for diffuse priors often, particularly in ARIMA models corresponding to an underlying structural model.
The condition or diffuse options may also be preferred when the model contains one or more long AR or MA lags; this avoids inverting potentially large matrices (see diffuse below).
When condition is specified, estimation is performed by the arch command (see [TS] arch), and more control of the estimation process can be obtained using arch directly.
condition cannot be specified if the model contains any multiplicative seasonal terms.
savespace specifies that memory use be conserved by retaining only those variables required for
estimation. The original dataset is restored after estimation. This option is rarely used and should
be used only if there is not enough space to fit a model without the option. However, arima
requires considerably more temporary storage during estimation than most estimation commands
in Stata.
diffuse specifies that a diffuse prior (see Harvey 1989 or 1993) be used as a starting point for the
Kalman filter recursions. Using diffuse, nonstationary models may be fit with arima (see the
p0() option below; diffuse is equivalent to specifying p0(1e9)).
By default, arima uses the unconditional expected value of the state vector t (see Methods and
formulas) and the mean squared error (MSE) of the state vector to initialize the filter. When the
process is stationary, this corresponds to the expected value and expected variance of a random draw
from the state vector and produces unconditional maximum likelihood estimates of the parameters.
When the process is not stationary, however, this default is not appropriate, and the unconditional
MSE cannot be computed. For a nonstationary process, another starting point must be used for the
recursions.
In the absence of nonsample or presample information, diffuse may be specified to start the
recursions from a state vector of zero and a state MSE matrix corresponding to an effectively


infinite variance on this initial state. This method amounts to an uninformative and improper prior
that is updated to a proper MSE as data from the sample become available; see Harvey (1989).
Nonstationary models may also correspond to models with infinite variance given a particular
specification. This and other problems with nonstationary series make convergence difficult and
sometimes impossible.
diffuse can also be useful if a model contains one or more long AR or MA lags. Computation of the unconditional MSE of the state vector (see Methods and formulas) requires construction and inversion of a square matrix that is of dimension {max(p, q + 1)}², where p and q are the maximum AR and MA lags, respectively. If q = 27, for example, we would require a 784 × 784 matrix. Estimation with diffuse does not require this matrix.
For large samples, there is little difference between using the default starting point and the diffuse
starting point. Unless the series has a long memory, the initial conditions affect the likelihood of
only the first few observations.
p0(# | matname) is a rarely specified option that can be used for nonstationary series or when an
alternate prior for starting the Kalman recursions is desired (see diffuse above for a discussion
of the default starting point and Methods and formulas for background).
matname specifies a matrix to be used as the MSE of the state vector for starting the Kalman filter
recursions P1|0 . Instead, one number, #, may be supplied, and the MSE of the initial state vector
P1|0 will have this number on its diagonal and all off-diagonal values set to zero.
This option may be used with nonstationary series to specify a larger or smaller diagonal for P1|0
than that supplied by diffuse. It may also be used with state0() when you believe that you
have a better prior for the initial state vector and its MSE.

state0(# | matname) is a rarely used option that specifies an alternate initial state vector, 1|0 (see
Methods and formulas), for starting the Kalman filter recursions. If # is specified, all elements of
the vector are taken to be #. The default initial state vector is state0(0).

SE/Robust

vce(vcetype) specifies the type of standard error reported, which includes types that are robust to
some kinds of misspecification (robust) and that are derived from asymptotic theory (oim, opg);
see [R] vce option.
For state-space models in general and ARMAX and ARIMA models in particular, the robust or quasi-maximum likelihood estimates (QMLEs) of variance are robust to symmetric nonnormality in the disturbances, including, as a special case, heteroskedasticity. The robust variance estimates are not generally robust to functional misspecification of the structural or ARMA components of the model; see Hamilton (1994, 389) for a brief discussion.

Reporting

level(#); see [R] estimation options.


detail specifies that a detailed list of any gaps in the series be reported, including gaps due to
missing observations or missing data for the dependent variable or independent variables.
nocnsreport; see [R] estimation options.
display_options: noci, nopvalues, vsquish, cformat(%fmt), pformat(%fmt), sformat(%fmt), and nolstretch; see [R] estimation options.


Maximization
maximize_options: difficult, technique(algorithm_spec), iterate(#), [no]log, trace, gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#), nrtolerance(#), gtolerance(#), nonrtolerance, and from(init_specs); see [R] maximize for all options except gtolerance(), and see below for information on gtolerance().
These options are sometimes more important for ARIMA models than most maximum likelihood models because of potential convergence problems with ARIMA models, particularly if the specified model and the sample data imply a nonstationary model.
Several alternate optimization methods, such as Berndt–Hall–Hall–Hausman (BHHH) and Broyden–Fletcher–Goldfarb–Shanno (BFGS), are provided for ARIMA models. Although ARIMA models are not as difficult to optimize as ARCH models, their likelihoods are nevertheless generally not quadratic and often pose optimization difficulties; this is particularly true if a model is nonstationary or nearly nonstationary. Because each method approaches optimization differently, some problems can be successfully optimized by an alternate method when one method fails.
Setting technique() to something other than the default or BHHH changes the vcetype to vce(oim).
The following options are all related to maximization and are either particularly important in fitting ARIMA models or not available for most other estimators.
technique(algorithm_spec) specifies the optimization technique to use to maximize the likelihood function.
    technique(bhhh) specifies the Berndt–Hall–Hall–Hausman (BHHH) algorithm.
    technique(dfp) specifies the Davidon–Fletcher–Powell (DFP) algorithm.
    technique(bfgs) specifies the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm.
    technique(nr) specifies Stata's modified Newton–Raphson (NR) algorithm.
    You can specify multiple optimization methods. For example, technique(bhhh 10 nr 20) requests that the optimizer perform 10 BHHH iterations, switch to Newton–Raphson for 20 iterations, switch back to BHHH for 10 more iterations, and so on.
    The default for arima is technique(bhhh 5 bfgs 10).
gtolerance(#) specifies the tolerance for the gradient relative to the coefficients. When |g_i b_i| ≤ gtolerance() for all parameters b_i and the corresponding elements of the gradient g_i, the gradient tolerance criterion is met. The default gradient tolerance for arima is gtolerance(.05).
    gtolerance(999) may be specified to disable the gradient criterion. If the optimizer becomes stuck with repeated "(backed up)" messages, the gradient probably still contains substantial values, but an uphill direction cannot be found for the likelihood. With this option, results can often be obtained, but whether the global maximum likelihood has been found is unclear.
    When the maximization is not going well, it is also possible to set the maximum number of iterations (see [R] maximize) to the point where the optimizer appears to be stuck and to inspect the estimation results at that point.
from(init_specs) allows you to set the starting values of the model coefficients; see [R] maximize for a general discussion and syntax options.
    The standard syntax for from() accepts a matrix, a list of values, or coefficient_name value pairs; see [R] maximize. arima also accepts from(armab0), which sets the starting value for all ARMA parameters in the model to zero prior to optimization.
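
For instance, a minimal sketch (the variable name y is hypothetical) combining several of these maximization options:

    . arima y, arima(2,1,1) from(armab0) technique(bhhh 10 nr 20) iterate(100)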

ARIMA models may be sensitive to initial conditions and may have coefficient values that correspond to local maximums. The default starting values for arima are generally good, particularly in large samples for stationary series.

The following option is available with arima but is not shown in the dialog box:
coeflegend; see [R] estimation options.

Remarks and examples

Remarks are presented under the following headings:
    Introduction
    ARIMA models
    Multiplicative seasonal ARIMA models
    ARMAX models
    Dynamic forecasting
    Video example

Introduction
arima fits both standard ARIMA models that are autoregressive in the dependent variable and structural models with ARMA disturbances. Good introductions to the former models can be found in Box, Jenkins, and Reinsel (2008); Hamilton (1994); Harvey (1993); Newton (1988); Diggle (1990); and many others. The latter models are developed fully in Hamilton (1994) and Harvey (1989), both of which provide extensive treatment of the Kalman filter (Kalman 1960) and the state-space form used by arima to fit the models. Becketti (2013) discusses ARIMA models and Stata's arima command, and he devotes an entire chapter explaining how the principles of ARIMA models are applied to real datasets in practice.
Consider a first-order autoregressive moving-average process. Then arima estimates all the parameters in the model

    y_t = x_t\beta + \mu_t                                            structural equation
    \mu_t = \rho\mu_{t-1} + \theta\epsilon_{t-1} + \epsilon_t         disturbance, ARMA(1,1)

where

    \rho is the first-order autocorrelation parameter
    \theta is the first-order moving-average parameter
    \epsilon_t \sim i.i.d. N(0, \sigma^2), meaning that \epsilon_t is a white-noise disturbance

You can combine the two equations and write a general ARMA(p, q) in the disturbances process as

    y_t = x_t\beta + \rho_1(y_{t-1} - x_{t-1}\beta) + \rho_2(y_{t-2} - x_{t-2}\beta) + \dots + \rho_p(y_{t-p} - x_{t-p}\beta)
          + \theta_1\epsilon_{t-1} + \theta_2\epsilon_{t-2} + \dots + \theta_q\epsilon_{t-q} + \epsilon_t

It is also common to write the general form of the ARMA model more succinctly using lag operator notation as

    \rho(L^p)(y_t - x_t\beta) = \theta(L^q)\epsilon_t        ARMA(p, q)

where

    \rho(L^p) = 1 - \rho_1 L - \rho_2 L^2 - \dots - \rho_p L^p
    \theta(L^q) = 1 + \theta_1 L + \theta_2 L^2 + \dots + \theta_q L^q

and L^j y_t = y_{t-j}.
Multiplicative seasonal ARMAX and ARIMA models can also be fit.


For stationary series, full or unconditional maximum likelihood estimates are obtained via the
Kalman filter. For nonstationary series, if some prior information is available, you can specify initial
values for the filter by using state0() and p0() as suggested by Hamilton (1994) or assume an
uninformative prior by using the diffuse option as suggested by Harvey (1989).
Missing data are allowed and are handled using the Kalman filter and methods suggested by Harvey
(1989 and 1993); see Methods and formulas.

ARIMA models
Pure ARIMA models without a structural component do not have regressors and are often written as autoregressions in the dependent variable, rather than autoregressions in the disturbances from a structural equation. For example, an ARMA(1,1) model can be written as

    y_t = \alpha + \rho y_{t-1} + \theta\epsilon_{t-1} + \epsilon_t        (1a)

Other than a scale factor for the constant term \alpha, these models are equivalent to the ARMA in the disturbances formulation estimated by arima, though the latter are more flexible and allow a wider class of models.
To see this effect, replace x_t\beta in the structural equation above with a constant term \beta_0 so that

    y_t = \beta_0 + \mu_t
        = \beta_0 + \rho\mu_{t-1} + \theta\epsilon_{t-1} + \epsilon_t
        = \beta_0 + \rho(y_{t-1} - \beta_0) + \theta\epsilon_{t-1} + \epsilon_t
        = (1 - \rho)\beta_0 + \rho y_{t-1} + \theta\epsilon_{t-1} + \epsilon_t        (1b)

Equations (1a) and (1b) are equivalent, with \alpha = (1 - \rho)\beta_0, so whether we consider an ARIMA model as autoregressive in the dependent variable or disturbances is immaterial. Our illustration can easily be extended from the ARMA(1,1) case to the general ARIMA(p, d, q) case.


Example 1: ARIMA model

Enders (2004, 87–93) considers an ARIMA model of the U.S. Wholesale Price Index (WPI) using quarterly data over the period 1960q1 through 1990q4. The simplest ARIMA model that includes differencing and both autoregressive and moving-average components is the ARIMA(1,1,1) specification. We can fit this model with arima by typing
. use http://www.stata-press.com/data/r14/wpi1
. arima wpi, arima(1,1,1)
(setting optimization to BHHH)
Iteration 0:   log likelihood = -139.80133
Iteration 1:   log likelihood = -135.6278
Iteration 2:   log likelihood = -135.41838
Iteration 3:   log likelihood = -135.36691
Iteration 4:   log likelihood = -135.35892
(switching optimization to BFGS)
Iteration 5:   log likelihood = -135.35471
Iteration 6:   log likelihood = -135.35135
Iteration 7:   log likelihood = -135.35132
Iteration 8:   log likelihood = -135.35131
ARIMA regression
Sample: 1960q2 - 1990q4                         Number of obs     =        123
                                                Wald chi2(2)      =     310.64
Log likelihood = -135.3513                      Prob > chi2       =     0.0000

                             OPG
       D.wpi       Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

wpi
       _cons    .7498197   .3340968     2.24   0.025     .0950019    1.404637

ARMA
          ar
         L1.    .8742288   .0545435    16.03   0.000     .7673256     .981132

          ma
         L1.   -.4120458   .1000284    -4.12   0.000    -.6080979    -.2159938

      /sigma    .7250436   .0368065    19.70   0.000     .6529042    .7971829

Note: The test of the variance against zero is one sided, and the two-sided
confidence interval is truncated at zero.

Examining the estimation results, we see that the AR(1) coefficient is 0.874, the MA(1) coefficient is −0.412, and both are highly significant. The estimated standard deviation of the white-noise disturbance ε is 0.725.
This model also could have been fit by typing
. arima D.wpi, ar(1) ma(1)

The D. placed in front of the dependent variable wpi is the Stata time-series operator for differencing.
Thus we would be modeling the first difference in WPI from the second quarter of 1960 through
the fourth quarter of 1990 because the first observation is lost because of differencing. This second
syntax allows a richer choice of models.


Example 2: ARIMA model with additive seasonal effects

After examining first-differences of WPI, Enders chose a model of differences in the natural logarithms to stabilize the variance in the differenced series. The raw data and first-difference of the logarithms are graphed below.

(figure omitted: two panels plotted against quarterly date, 1960q1-1990q1: the US Wholesale Price Index and the US Wholesale Price Index, difference of logs)

On the basis of the autocorrelations, partial autocorrelations (see graphs below), and the results of preliminary estimations, Enders identified an ARMA model in the log-differenced series.

. ac D.ln_wpi, ylabels(-.4(.2).6)
. pac D.ln_wpi, ylabels(-.4(.2).6)

(figures omitted: autocorrelations of D.ln_wpi with Bartlett's formula for MA(q) 95% confidence bands, and partial autocorrelations of D.ln_wpi with 95% confidence bands [se = 1/sqrt(n)], both over 40 lags)

In addition to an autoregressive term and an MA(1) term, an MA(4) term is included to account for a remaining quarterly effect. Thus the model to be fit is

    \Delta\ln(wpi_t) = \beta_0 + \rho_1\{\Delta\ln(wpi_{t-1}) - \beta_0\} + \theta_1\epsilon_{t-1} + \theta_4\epsilon_{t-4} + \epsilon_t


We can fit this model with arima and Stata's standard difference operator:
. arima D.ln_wpi, ar(1) ma(1 4)
(setting optimization to BHHH)
Iteration 0:   log likelihood =  382.67447
Iteration 1:   log likelihood =  384.80754
Iteration 2:   log likelihood =  384.84749
Iteration 3:   log likelihood =  385.39213
Iteration 4:   log likelihood =  385.40983
(switching optimization to BFGS)
Iteration 5:   log likelihood =   385.9021
Iteration 6:   log likelihood =  385.95646
Iteration 7:   log likelihood =  386.02979
Iteration 8:   log likelihood =  386.03326
Iteration 9:   log likelihood =  386.03354
Iteration 10:  log likelihood =  386.03357
ARIMA regression
Sample: 1960q2 - 1990q4                         Number of obs     =        123
                                                Wald chi2(3)      =     333.60
Log likelihood = 386.0336                       Prob > chi2       =     0.0000

                             OPG
    D.ln_wpi       Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

ln_wpi
       _cons    .0110493   .0048349     2.29   0.022     .0015731    .0205255

ARMA
          ar
         L1.    .7806991   .0944946     8.26   0.000     .5954931     .965905

          ma
         L1.   -.3990039   .1258753    -3.17   0.002    -.6457149    -.1522928
         L4.    .3090813   .1200945     2.57   0.010     .0737003    .5444622

      /sigma    .0104394   .0004702    22.20   0.000     .0095178    .0113609

Note: The test of the variance against zero is one sided, and the two-sided
confidence interval is truncated at zero.

In this final specification, the log-differenced series is still highly autocorrelated at a level of 0.781, though innovations have a negative impact in the ensuing quarter (−0.399) and a positive seasonal impact of 0.309 in the following year.

Technical note
In one way, the results differ from most of Stata's estimation commands: the standard error of the coefficients is reported as OPG Std. Err. The default standard errors and covariance matrix for arima estimates are derived from the outer product of gradients (OPG). This is one of three asymptotically equivalent methods of estimating the covariance matrix of the coefficients (only two of which are usually tractable to derive). Discussions and derivations of all three estimates can be found in Davidson and MacKinnon (1993), Greene (2012), and Hamilton (1994). Bollerslev, Engle, and Nelson (1994) suggest that the OPG estimates are more numerically stable in time-series regressions when the likelihood and its derivatives depend on recursive computations, which is certainly the case for the Kalman filter. To date, we have found no numerical instabilities in either estimate of the covariance matrix, subject to the stability and convergence of the overall model.


Most of Stata's estimation commands provide covariance estimates derived from the Hessian of the likelihood function. These alternate estimates can also be obtained from arima by specifying the vce(oim) option.

Multiplicative seasonal ARIMA models

Many time series exhibit a periodic seasonal component, and a seasonal ARIMA model, often abbreviated SARIMA, can then be used. For example, monthly sales data for air conditioners have a strong seasonal component, with sales high in the summer months and low in the winter months.
In the previous example, we accounted for quarterly effects by fitting the model

    (1 - \rho_1 L)\{\Delta\ln(wpi_t) - \beta_0\} = (1 + \theta_1 L + \theta_4 L^4)\epsilon_t

This is an additive seasonal ARIMA model, in the sense that the first- and fourth-order MA terms work additively: (1 + \theta_1 L + \theta_4 L^4).
Another way to handle the quarterly effect would be to fit a multiplicative seasonal ARIMA model. A multiplicative SARIMA model of order (1,1,1) × (0,0,1)_4 for the \Delta\ln(wpi_t) series is

    (1 - \rho_1 L)\{\Delta\ln(wpi_t) - \beta_0\} = (1 + \theta_1 L)(1 + \theta_{4,1} L^4)\epsilon_t

or, upon expanding terms,

    \Delta\ln(wpi_t) = \beta_0 + \rho_1\{\Delta\ln(wpi_{t-1}) - \beta_0\} + \theta_1\epsilon_{t-1} + \theta_{4,1}\epsilon_{t-4} + \theta_1\theta_{4,1}\epsilon_{t-5} + \epsilon_t        (2)

In the notation (1,1,1) × (0,0,1)_4, the (1,1,1) means that there is one nonseasonal autoregressive term (1 - \rho_1 L) and one nonseasonal moving-average term (1 + \theta_1 L) and that the time series is first-differenced one time. The (0,0,1)_4 indicates that there is no lag-4 seasonal autoregressive term, that there is one lag-4 seasonal moving-average term (1 + \theta_{4,1} L^4), and that the series is seasonally differenced zero times. This is known as a multiplicative SARIMA model because the nonseasonal and seasonal factors work multiplicatively: (1 + \theta_1 L)(1 + \theta_{4,1} L^4). Multiplying the terms imposes nonlinear constraints on the parameters of the fifth-order lagged values; arima imposes these constraints automatically.
To further clarify the notation, consider a (2,1,1) × (1,1,2)_4 multiplicative SARIMA model:

    (1 - \rho_1 L - \rho_2 L^2)(1 - \rho_{4,1} L^4)\Delta\Delta_4 z_t = (1 + \theta_1 L)(1 + \theta_{4,1} L^4 + \theta_{4,2} L^8)\epsilon_t        (3)

where \Delta denotes the difference operator \Delta y_t = y_t - y_{t-1} and \Delta_s denotes the lag-s seasonal difference operator \Delta_s y_t = y_t - y_{t-s}. Expanding (3), we have

    \tilde{z}_t = \rho_1\tilde{z}_{t-1} + \rho_2\tilde{z}_{t-2} + \rho_{4,1}\tilde{z}_{t-4} - \rho_1\rho_{4,1}\tilde{z}_{t-5} - \rho_2\rho_{4,1}\tilde{z}_{t-6}
                  + \theta_1\epsilon_{t-1} + \theta_{4,1}\epsilon_{t-4} + \theta_1\theta_{4,1}\epsilon_{t-5} + \theta_{4,2}\epsilon_{t-8} + \theta_1\theta_{4,2}\epsilon_{t-9} + \epsilon_t

where

    \tilde{z}_t = \Delta\Delta_4 z_t = \Delta(z_t - z_{t-4}) = z_t - z_{t-1} - (z_{t-4} - z_{t-5})

and z_t = y_t - x_t\beta if regressors are included in the model, z_t = y_t - \beta_0 if just a constant term is included, and z_t = y_t otherwise.
More generally, a (p, d, q) × (P, D, Q)_s multiplicative SARIMA model is

    \rho(L^p)\rho_s(L^P)\Delta^d\Delta_s^D z_t = \theta(L^q)\theta_s(L^Q)\epsilon_t

where

    \rho_s(L^P) = (1 - \rho_{s,1} L^s - \rho_{s,2} L^{2s} - \dots - \rho_{s,P} L^{Ps})
    \theta_s(L^Q) = (1 + \theta_{s,1} L^s + \theta_{s,2} L^{2s} + \dots + \theta_{s,Q} L^{Qs})

\rho(L^p) and \theta(L^q) were defined previously, \Delta^d means apply the \Delta operator d times, and similarly for \Delta_s^D. Typically, d and D will be 0 or 1; and p, q, P, and Q will seldom be more than 2 or 3. s will typically be 4 for quarterly data and 12 for monthly data. In fact, the model can be extended to include both monthly and quarterly seasonal factors, as we explain below.
If a plot of the data suggests that the seasonal effect is proportional to the mean of the series, then the seasonal effect is probably multiplicative and a multiplicative SARIMA model may be appropriate. Box, Jenkins, and Reinsel (2008, sec. 9.3.1) suggest starting with a multiplicative SARIMA model with any data that exhibit seasonal patterns and then exploring nonmultiplicative SARIMA models if the multiplicative models do not fit the data well. On the other hand, Chatfield (2004, 14) suggests that taking the logarithm of the series will make the seasonal effect additive, in which case an additive SARIMA model as fit in the previous example would be appropriate. In short, the analyst should probably try both additive and multiplicative SARIMA models to see which provides better fits and forecasts.
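
As a concrete illustration of that comparison, here is a minimal sketch (the quarterly variable y is hypothetical) of fitting an additive seasonal model and its multiplicative counterpart to the same series:

    . arima D.y, ar(1) ma(1 4)                   // additive: 1 + theta_1 L + theta_4 L^4
    . arima y, arima(1,1,1) sarima(0,0,1,4)      // multiplicative: (1 + theta_1 L)(1 + theta_{4,1} L^4)

Information criteria from estat ic, or out-of-sample forecast comparisons, can then guide the choice between the two specifications.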
Unless diffuse is used, arima must create square matrices of dimension {max(p, q + 1)}², where p and q are the maximum AR and MA lags, respectively; and the inclusion of long seasonal terms can make this dimension rather large. For example, with monthly data, you might fit a (0,1,1) × (0,1,2)_{12} SARIMA model. The maximum MA lag is 2 × 12 + 1 = 25, requiring a matrix with 26² = 676 rows and columns.

Example 3: Multiplicative SARIMA model

One of the most common multiplicative SARIMA specifications is the (0,1,1) × (0,1,1)_{12} "airline" model of Box, Jenkins, and Reinsel (2008, sec. 9.2). The dataset airline.dta contains monthly international airline passenger data from January 1949 through December 1960. After first- and seasonally differencing the data, we do not suspect the presence of a trend component, so we use the noconstant option with arima:

. use http://www.stata-press.com/data/r14/air2
(TIMESLAB: Airline passengers)
. generate lnair = ln(air)
. arima lnair, arima(0,1,1) sarima(0,1,1,12) noconstant
(setting optimization to BHHH)
Iteration 0:   log likelihood =   223.8437
Iteration 1:   log likelihood =  239.80405
(output omitted )
Iteration 8:   log likelihood =  244.69651
ARIMA regression
Sample: 14 - 144                                Number of obs     =        131
                                                Wald chi2(2)      =      84.53
Log likelihood = 244.6965                       Prob > chi2       =     0.0000

                             OPG
  DS12.lnair       Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

ARMA
          ma
         L1.   -.4018324   .0730307    -5.50   0.000    -.5449698    -.2586949

ARMA12
          ma
         L1.   -.5569342   .0963129    -5.78   0.000     -.745704    -.3681644

      /sigma    .0367167   .0020132    18.24   0.000     .0327708    .0406625

Note: The test of the variance against zero is one sided, and the two-sided
confidence interval is truncated at zero.

Thus our model of the monthly number of international airline passengers is

    \Delta\Delta_{12}\ln(air_t) = -0.402\epsilon_{t-1} - 0.557\epsilon_{t-12} + 0.224\epsilon_{t-13} + \epsilon_t,    \hat{\sigma} = 0.037

In (2), for example, the coefficient on \epsilon_{t-13} is the product of the coefficients on the \epsilon_{t-1} and \epsilon_{t-12} terms (0.224 ≈ −0.402 × −0.557). arima labeled the dependent variable DS12.lnair to indicate that it has applied the difference operator \Delta and the lag-12 seasonal difference operator \Delta_{12} to lnair; see [U] 11.4.4 Time-series varlists for more information.
We could have fit this model by typing
. arima DS12.lnair, ma(1) mma(1, 12) noconstant

For simple multiplicative models, using the sarima() option is easier, though this second syntax
allows us to incorporate more complicated seasonal terms.
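
One can also verify the multiplicative constraint numerically. A minimal sketch follows; the equation and coefficient names used inside _b[] are assumptions, so confirm them with arima, coeflegend before relying on them:

    . arima lnair, arima(0,1,1) sarima(0,1,1,12) noconstant
    . arima, coeflegend                              // look up the stored coefficient names
    . display _b[ARMA:L1.ma] * _b[ARMA12:L1.ma]      // approximately 0.224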

The mar() and mma() options can be repeated, allowing us to control for multiple seasonal patterns. For example, we may have monthly sales data that exhibit a quarterly pattern as businesses purchase our product at the beginning of calendar quarters when new funds are budgeted, and our product is purchased more frequently in a few months of the year than in most others, even after we control for quarterly fluctuations. Thus we might choose to fit the model

    (1 - \rho_1 L)(1 - \rho_{4,1} L^4)(1 - \rho_{12,1} L^{12})(\Delta\Delta_4\Delta_{12} sales_t - \beta_0) = (1 + \theta_1 L)(1 + \theta_{4,1} L^4)(1 + \theta_{12,1} L^{12})\epsilon_t

Although this model looks rather complicated, estimating it using arima is straightforward:

. arima DS4S12.sales, ar(1) mar(1, 4) mar(1, 12) ma(1) mma(1, 4) mma(1, 12)

If we instead wanted to include two lags in the lag-4 seasonal AR term and the first and third (but not the second) term in the lag-12 seasonal MA term, we would type

. arima DS4S12.sales, ar(1) mar(1 2, 4) mar(1, 12) ma(1) mma(1, 4) mma(1 3, 12)

However, models with multiple seasonal terms can be difficult to fit. Usually, one seasonal factor with just one or two AR or MA terms is adequate.

ARMAX models
Thus far all our examples have been pure ARIMA models in which the dependent variable was modeled solely as a function of its past values and disturbances. Also, arima can fit ARMAX models, which model the dependent variable in terms of a linear combination of independent variables, as well as an ARMA disturbance process. The prais command (see [TS] prais), for example, allows you to control for only AR(1) disturbances, whereas arima allows you to control for a much richer dynamic error structure. arima allows for both nonseasonal and seasonal ARMA components in the disturbances.

Example 4: ARMAX model

For a simple example of a model including covariates, we can estimate an update of Friedman and Meiselman's (1963) equation representing the quantity theory of money. They postulate a straightforward relationship between personal-consumption expenditures (consump) and the money supply as measured by M2 (m2),

    consump_t = \beta_0 + \beta_1 m2_t + \mu_t

Friedman and Meiselman fit the model over a period ending in 1956; we will refit the model over the period 1959q1 through 1981q4. We restrict our attention to the period prior to 1982 because the Federal Reserve manipulated the money supply extensively in the later 1980s to control inflation, and the relationship between consumption and the money supply becomes much more complex during the later part of the decade.
To demonstrate arima, we will include both an autoregressive term and a moving-average term for the disturbances in the model; the original estimates included neither. Thus we model the disturbance of the structural equation as

    \mu_t = \rho\mu_{t-1} + \theta\epsilon_{t-1} + \epsilon_t

As per the original authors, the relationship is estimated on seasonally adjusted data, so there is no need to include seasonal effects explicitly. Obtaining seasonally unadjusted data and simultaneously modeling the structural and seasonal effects might be preferable.
We will restrict the estimation to the desired sample by using the tin() function in an if expression; see [FN] Selecting time-span functions. By leaving the first argument of tin() blank, we are including all available data through the second date (1981q4). We fit the model by typing
we are including all available data through the second date (1981q4). We fit the model by typing



. use http://www.stata-press.com/data/r14/friedman2, clear
. arima consump m2 if tin(, 1981q4), ar(1) ma(1)
(setting optimization to BHHH)
Iteration 0:
log likelihood = -344.67575
Iteration 1:
log likelihood = -341.57248
(output omitted )
Iteration 10: log likelihood = -340.50774
ARIMA regression
Sample: 1959q1 - 1981q4
Number of obs
Wald chi2(3)
Log likelihood = -340.5077
Prob > chi2

consump

Coef.

OPG
Std. Err.

P>|z|

=
=
=

92
4394.80
0.0000

[95% Conf. Interval]

consump
m2
_cons

1.122029
-36.09872

.0363563
56.56703

30.86
-0.64

0.000
0.523

1.050772
-146.9681

1.193286
74.77062

ar
L1.

.9348486

.0411323

22.73

0.000

.8542308

1.015467

ma
L1.

.3090592

.0885883

3.49

0.000

.1354293

.4826891

/sigma

9.655308

.5635157

17.13

0.000

8.550837

10.75978

ARMA

Note: The test of the variance against zero is one sided, and the two-sided
confidence interval is truncated at zero.

We find a relatively small money velocity with respect to consumption (1.122) over this period,
although consumption is only one facet of the income velocity. We also note a very large first-order
autocorrelation in the disturbances, as well as a statistically significant first-order moving average.


We might be concerned that our specification has led to disturbances that are heteroskedastic or non-Gaussian. We refit the model by using the vce(robust) option.
. arima consump m2 if tin(, 1981q4), ar(1) ma(1) vce(robust)
(setting optimization to BHHH)
Iteration 0:   log pseudolikelihood = -344.67575
Iteration 1:   log pseudolikelihood = -341.57248
(output omitted )
Iteration 10:  log pseudolikelihood = -340.50774
ARIMA regression
Sample: 1959q1 - 1981q4                         Number of obs     =         92
                                                Wald chi2(3)      =    1176.26
Log pseudolikelihood = -340.5077                Prob > chi2       =     0.0000

                          Semirobust
     consump       Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

consump
          m2    1.122029   .0433302    25.89   0.000     1.037103    1.206954
       _cons   -36.09872   28.10478    -1.28   0.199    -91.18308    18.98564

ARMA
          ar
         L1.    .9348486   .0493428    18.95   0.000     .8381385    1.031559

          ma
         L1.    .3090592   .1605359     1.93   0.054    -.0055854    .6237038

      /sigma    9.655308   1.082639     8.92   0.000     7.533375    11.77724

Note: The test of the variance against zero is one sided, and the two-sided
confidence interval is truncated at zero.

We note a substantial increase in the estimated standard errors, and our once clearly significant
moving-average term is now only marginally significant.

Dynamic forecasting
Another feature of the arima command is the ability to use predict afterward to make dynamic forecasts. Suppose that we wish to fit the regression model

    y_t = \beta_0 + \beta_1 x_t + \rho y_{t-1} + \epsilon_t

by using a sample of data from t = 1, \dots, T and make forecasts beginning at time f.
If we use regress or prais to fit the model, then we can use predict to make one-step-ahead forecasts. That is, predict will compute

    \hat{y}_f = \hat{\beta}_0 + \hat{\beta}_1 x_f + \hat{\rho} y_{f-1}

Most importantly, here predict will use the actual value of y at period f − 1 in computing the forecast for time f. Thus, if we use regress or prais, we cannot make forecasts for any periods beyond f = T + 1 unless we have observed values for y for those periods.
If we instead fit our model with arima, then predict can produce dynamic forecasts by using the Kalman filter. If we use the dynamic(f) option, then for period f predict will compute

    \hat{y}_f = \hat{\beta}_0 + \hat{\beta}_1 x_f + \hat{\rho} y_{f-1}

by using the observed value of y_{f-1} just as predict after regress or prais. However, for period f + 1, predict newvar, dynamic(f) will compute

    \hat{y}_{f+1} = \hat{\beta}_0 + \hat{\beta}_1 x_{f+1} + \hat{\rho}\hat{y}_f

using the predicted value of y_f instead of the observed value. Similarly, the period f + 2 forecast will be

    \hat{y}_{f+2} = \hat{\beta}_0 + \hat{\beta}_1 x_{f+2} + \hat{\rho}\hat{y}_{f+1}

Of course, because our model includes the regressor x_t, we can make forecasts only through periods for which we have observations on x_t. However, for pure ARIMA models, we can compute dynamic forecasts as far beyond the final period of our dataset as desired.
For more information on predict after arima, see [TS] arima postestimation.
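
For instance, a minimal sketch (the forecast start date is arbitrary) reusing the dataset from example 4:

    . use http://www.stata-press.com/data/r14/friedman2, clear
    . arima consump m2 if tin(, 1981q4), ar(1) ma(1)
    . predict chat, y dynamic(tq(1978q1))     // dynamic forecasts of consump from 1978q1 on

Before 1978q1, chat contains one-step-ahead predictions; from 1978q1 on, it contains dynamic multistep forecasts that feed earlier predicted values of consump back into the computation.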

Video example
Time series, part 5: Introduction to ARMA/ARIMA models

Stored results
arima stores the following in e():

Scalars
    e(N)               number of observations
    e(N_gaps)          number of gaps
    e(k)               number of parameters
    e(k_eq)            number of equations in e(b)
    e(k_eq_model)      number of equations in overall model test
    e(k_dv)            number of dependent variables
    e(k1)              number of variables in first equation
    e(df_m)            model degrees of freedom
    e(ll)              log likelihood
    e(sigma)           sigma
    e(chi2)            \chi^2
    e(p)               significance
    e(tmin)            minimum time
    e(tmax)            maximum time
    e(ar_max)          maximum AR lag
    e(ma_max)          maximum MA lag
    e(rank)            rank of e(V)
    e(ic)              number of iterations
    e(rc)              return code
    e(converged)       1 if converged, 0 otherwise

Macros
    e(cmd)             arima
    e(cmdline)         command as typed
    e(depvar)          name of dependent variable
    e(covariates)      list of covariates
    e(eqnames)         names of equations
    e(wtype)           weight type
    e(wexp)            weight expression
    e(title)           title in estimation output
    e(tmins)           formatted minimum time
    e(tmaxs)           formatted maximum time
    e(chi2type)        Wald; type of model \chi^2 test
    e(vce)             vcetype specified in vce()
    e(vcetype)         title used to label Std. Err.
    e(ma)              lags for moving-average terms
    e(ar)              lags for autoregressive terms
    e(mari)            multiplicative AR terms and lag i=1... (# seasonal AR terms)
    e(mmai)            multiplicative MA terms and lag i=1... (# seasonal MA terms)
    e(seasons)         seasonal lags in model
    e(unsta)           unstationary or blank
    e(opt)             type of optimization
    e(ml_method)       type of ml method
    e(user)            name of likelihood-evaluator program
    e(technique)       maximization technique
    e(tech_steps)      number of iterations performed before switching techniques
    e(properties)      b V
    e(estat_cmd)       program used to implement estat
    e(predict)         program used to implement predict
    e(marginsok)       predictions allowed by margins
    e(marginsnotok)    predictions disallowed by margins

Matrices
    e(b)               coefficient vector
    e(Cns)             constraints matrix
    e(ilog)            iteration log (up to 20 iterations)
    e(gradient)        gradient vector
    e(V)               variance–covariance matrix of the estimators
    e(V_modelbased)    model-based variance

Functions
    e(sample)          marks estimation sample
Methods and formulas


Estimation is by maximum likelihood using the Kalman filter via the prediction error decomposition;
see Hamilton (1994), Gourieroux and Monfort (1997), or, in particular, Harvey (1989). Any of these
sources will serve as excellent background for the fitting of these models with the state-space form;
each source also provides considerable detail on the method outlined below.
Methods and formulas are presented under the following headings:
ARIMA model
Kalman filter equations
Kalman filter or state-space representation of the ARIMA model
Kalman filter recursions
Kalman filter initial conditions
Likelihood from prediction error decomposition
Missing data


ARIMA model
The model to be fit is

y_t = x_t \beta + \mu_t

\mu_t = \sum_{i=1}^{p} \rho_i \mu_{t-i} + \sum_{j=1}^{q} \theta_j \epsilon_{t-j} + \epsilon_t

which can be written as the single equation

y_t = x_t \beta + \sum_{i=1}^{p} \rho_i (y_{t-i} - x_{t-i}\beta) + \sum_{j=1}^{q} \theta_j \epsilon_{t-j} + \epsilon_t

Some of the \rho's and \theta's may be constrained to zero or, for multiplicative seasonal models, to the products
of other parameters.
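For example, with hypothetical variables y and x, the single-equation form above with p = 2 and
q = 1 would be fit by

. arima y x, ar(1/2) ma(1)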

Kalman filter equations


We will roughly follow Hamilton's (1994) notation and write the Kalman filter as

\xi_t = F \xi_{t-1} + v_t    (state equation)

y_t = A' x_t + H' \xi_t + w_t    (observation equation)

and

\begin{pmatrix} v_t \\ w_t \end{pmatrix} \sim N\left( 0, \begin{pmatrix} Q & 0 \\ 0 & R \end{pmatrix} \right)

We maintain the standard Kalman filter matrix and vector notation, although for univariate models
y_t, w_t, and R are scalars.

Kalman filter or state-space representation of the ARIMA model


A univariate ARIMA model can be cast in state-space form by defining the Kalman filter matrices
as follows (see Hamilton [1994], or Gourieroux and Monfort [1997], for details):

F = \begin{pmatrix}
      \rho_1 & \rho_2 & \dots & \rho_{p-1} & \rho_p \\
      1      & 0      & \dots & 0          & 0      \\
      0      & 1      & \dots & 0          & 0      \\
      \vdots & \vdots & \ddots & \vdots    & \vdots \\
      0      & 0      & \dots & 1          & 0
    \end{pmatrix}
\qquad
v_t = \begin{pmatrix} \epsilon_{t-1} \\ 0 \\ \vdots \\ 0 \end{pmatrix}

A' = \beta
\qquad
H' = [\, 1 \;\; \theta_1 \;\; \theta_2 \;\; \dots \;\; \theta_q \,]
\qquad
w_t = 0

The Kalman filter representation does not require the moving-average terms to be invertible.

Kalman filter recursions


To demonstrate how missing data are handled, the updating recursions for the Kalman filter will
be left in two steps. Writing the updating equations as one step using the gain matrix K is common.
We will provide the updating equations with little justification; see the sources listed above for details.

As a linear combination of a vector of random variables, the state \xi_t can be updated to its expected
value on the basis of the prior state as

\xi_{t|t-1} = F \xi_{t-1} + v_{t-1}    (4)

This state is a quadratic form that has the covariance matrix

P_{t|t-1} = F P_{t-1} F' + Q    (5)

The estimator of y_t is

\widehat{y}_{t|t-1} = x_t \beta + H' \xi_{t|t-1}

which implies an innovation or prediction error

\widehat{\iota}_t = y_t - \widehat{y}_{t|t-1}

This value or vector has mean squared error (MSE)

M_t = H' P_{t|t-1} H + R

Now the expected value of \xi_t conditional on a realization of y_t is

\xi_t = \xi_{t|t-1} + P_{t|t-1} H M_t^{-1} \widehat{\iota}_t    (6)

with MSE

P_t = P_{t|t-1} - P_{t|t-1} H M_t^{-1} H' P_{t|t-1}    (7)

This expression gives the full set of Kalman filter recursions.
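The recursions can be traced numerically. Below is a minimal Mata sketch of one pass through
(4)-(7); all inputs are small hypothetical values, not quantities produced by arima, and the
covariate term x_t*beta is omitted for simplicity.

mata:
F = (0.5, 0.2 \ 1, 0)              // hypothetical state-transition matrix
Q = (1, 0 \ 0, 0)                  // state-disturbance covariance
H = (1 \ 0.3)                      // observation loadings
R = 0                              // observation-disturbance variance (scalar model)
xi = (0 \ 0)                       // previous state, xi_{t-1}
P  = I(2)                          // previous state MSE, P_{t-1}
y  = 1.5                           // observed y_t

xi_pred = F*xi                     // (4): E(v) = 0, so the disturbance drops out
P_pred  = F*P*F' + Q               // (5)
iota    = y - (H'*xi_pred)         // innovation (prediction error)
M       = H'*P_pred*H + R          // MSE of the innovation
xi_new  = xi_pred + P_pred*H*(iota/M)          // (6): updated state
P_new   = P_pred - P_pred*H*(1/M)*H'*P_pred    // (7): updated state MSE
xi_new, P_new
end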


Kalman filter initial conditions


When the series is stationary, conditional on x_t \beta, the initial conditions for the filter can be
considered a random draw from the stationary distribution of the state equation. The initial values of
the state and the state MSE are the expected values from this stationary distribution. For an ARIMA
model, these can be written as

\xi_{1|0} = 0

and

\text{vec}(P_{1|0}) = [\, I_{r^2} - F \otimes F \,]^{-1} \text{vec}(Q)

where vec() is an operator representing the column matrix resulting from stacking each successive
column of the target matrix.

If the series is not stationary, the initial state conditions do not constitute a random draw from a
stationary distribution, and some other values must be chosen. Hamilton (1994) suggests that they be
chosen based on prior expectations, whereas Harvey suggests a diffuse and improper prior having a
state vector of 0 and an infinite variance. This method corresponds to P_{1|0} with diagonal elements of
\infty. Stata allows either approach to be taken for nonstationary series: initial priors may be specified
with state0() and p0(), and a diffuse prior may be specified with diffuse.
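As a sketch, the stationary initial state MSE can be computed in Mata from the vec() formula
above, here using the same hypothetical F and Q as in the previous sketch (r = 2 states):

mata:
F = (0.5, 0.2 \ 1, 0)
Q = (1, 0 \ 0, 0)
r = rows(F)
// invert the vec relationship: stack Q, solve, then unstack into r x r
P10 = colshape((luinv(I(r^2) - F#F)*vec(Q))', r)'
P10
end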

Likelihood from prediction error decomposition


Given the outputs from the Kalman filter recursions and assuming that the state and observation
vectors are Gaussian, the likelihood for the state-space model follows directly from the resulting
multivariate normal in the predicted innovations. The log likelihood for observation t is

\ln L_t = -\frac{1}{2} \left\{ \ln(2\pi) + \ln(|M_t|) + \widehat{\iota}_t' M_t^{-1} \widehat{\iota}_t \right\}

This command supports the Huber/White/sandwich estimator of the variance using vce(robust).
See [P] robust, particularly Maximum likelihood estimators and Methods and formulas.
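Continuing the sketch, the observation-level log likelihood follows directly from an innovation and
its MSE; the scalar values below are hypothetical:

mata:
iota  = 0.8                        // hypothetical innovation
M     = 1.3                        // hypothetical MSE of the innovation
lnL_t = -0.5*(ln(2*pi()) + ln(M) + iota^2/M)
lnL_t
end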

Missing data
Missing data, whether a missing dependent variable yt , one or more missing covariates xt , or
completely missing observations, are handled by continuing the state-updating equations without any
contribution from the data; see Harvey (1989 and 1993). That is, (4) and (5) are iterated for every
missing observation, whereas (6) and (7) are ignored. Thus, for observations with missing data,
\xi_t = \xi_{t|t-1} and P_t = P_{t|t-1}. Without any information from the sample, this effectively assumes
that the prediction error for the missing observations is 0. Other methods of handling missing data
on the basis of the EM algorithm have been suggested, for example, by Shumway (1984, 1988).



George Edward Pelham Box (1919–2013) was born in Kent, England, and earned degrees
in statistics at the University of London. After work in the chemical industry, he taught and
researched at Princeton and the University of Wisconsin. His many major contributions to statistics
include papers and books in Bayesian inference, robustness (a term he introduced to statistics),
modeling strategy, experimental design and response surfaces, time-series analysis, distribution
theory, transformations, and nonlinear estimation.

Gwilym Meirion Jenkins (1933–1982) was a British mathematician and statistician who spent
his career in industry and academia, working for extended periods at Imperial College London
and the University of Lancaster before running his own company. His interests were centered on
time series, and he collaborated with G. E. P. Box on what are often called Box–Jenkins models.
The last years of Jenkins's life were marked by a slowly losing battle against Hodgkin's disease.

References
Ansley, C. F., and R. J. Kohn. 1985. Estimation, filtering, and smoothing in state space models with incompletely
specified initial conditions. Annals of Statistics 13: 1286–1316.
Ansley, C. F., and P. Newbold. 1980. Finite sample properties of estimators for autoregressive moving average models.
Journal of Econometrics 13: 159–183.
Baum, C. F. 2000. sts15: Tests for stationarity of a time series. Stata Technical Bulletin 57: 36–39. Reprinted in
Stata Technical Bulletin Reprints, vol. 10, pp. 356–360. College Station, TX: Stata Press.
Baum, C. F., and T. Rõõm. 2001. sts18: A test for long-range dependence in a time series. Stata Technical Bulletin
60: 37–39. Reprinted in Stata Technical Bulletin Reprints, vol. 10, pp. 370–373. College Station, TX: Stata Press.
Baum, C. F., and R. I. Sperling. 2000. sts15.1: Tests for stationarity of a time series: Update. Stata Technical Bulletin
58: 35–36. Reprinted in Stata Technical Bulletin Reprints, vol. 10, pp. 360–362. College Station, TX: Stata Press.
Baum, C. F., and V. L. Wiggins. 2000. sts16: Tests for long memory in a time series. Stata Technical Bulletin 57:
39–44. Reprinted in Stata Technical Bulletin Reprints, vol. 10, pp. 362–368. College Station, TX: Stata Press.
Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.
Berndt, E. K., B. H. Hall, R. E. Hall, and J. A. Hausman. 1974. Estimation and inference in nonlinear structural
models. Annals of Economic and Social Measurement 3/4: 653–665.
Bollerslev, T., R. F. Engle, and D. B. Nelson. 1994. ARCH models. In Vol. 4 of Handbook of Econometrics, ed.
R. F. Engle and D. L. McFadden. Amsterdam: Elsevier.
Box, G. E. P. 1983. Obituary: G. M. Jenkins, 1933–1982. Journal of the Royal Statistical Society, Series A 146:
205–206.
Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 2008. Time Series Analysis: Forecasting and Control. 4th ed.
Hoboken, NJ: Wiley.
Box-Steffensmeier, J. M., J. R. Freeman, M. P. Hitt, and J. C. W. Pevehouse. 2014. Time Series Analysis for the
Social Sciences. New York: Cambridge University Press.
Chatfield, C. 2004. The Analysis of Time Series: An Introduction. 6th ed. Boca Raton, FL: Chapman & Hall/CRC.
David, J. S. 1999. sts14: Bivariate Granger causality test. Stata Technical Bulletin 51: 40–41. Reprinted in Stata
Technical Bulletin Reprints, vol. 9, pp. 350–351. College Station, TX: Stata Press.
Davidson, R., and J. G. MacKinnon. 1993. Estimation and Inference in Econometrics. New York: Oxford University
Press.
DeGroot, M. H. 1987. A conversation with George Box. Statistical Science 2: 239–258.
Diggle, P. J. 1990. Time Series: A Biostatistical Introduction. Oxford: Oxford University Press.
Enders, W. 2004. Applied Econometric Time Series. 2nd ed. New York: Wiley.
Friedman, M., and D. Meiselman. 1963. The relative stability of monetary velocity and the investment multiplier in
the United States, 1897–1958. In Stabilization Policies, Commission on Money and Credit, 123–126. Englewood
Cliffs, NJ: Prentice Hall.


Gourieroux, C. S., and A. Monfort. 1997. Time Series and Dynamic Models. Trans. ed. G. M. Gallo. Cambridge:
Cambridge University Press.
Greene, W. H. 2012. Econometric Analysis. 7th ed. Upper Saddle River, NJ: Prentice Hall.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Harvey, A. C. 1989. Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge: Cambridge
University Press.
Harvey, A. C. 1993. Time Series Models. 2nd ed. Cambridge, MA: MIT Press.
Hipel, K. W., and A. I. McLeod. 1994. Time Series Modelling of Water Resources and Environmental Systems.
Amsterdam: Elsevier.
Holan, S. H., R. Lund, and G. Davis. 2010. The ARMA alphabet soup: A tour of ARMA model variants. Statistics
Surveys 4: 232–274.
Kalman, R. E. 1960. A new approach to linear filtering and prediction problems. Transactions of the ASME, Journal
of Basic Engineering, Series D 82: 35–45.
McDowell, A. W. 2002. From the help desk: Transfer functions. Stata Journal 2: 71–85.
McDowell, A. W. 2004. From the help desk: Polynomial distributed lag models. Stata Journal 4: 180–189.
Newton, H. J. 1988. TIMESLAB: A Time Series Analysis Laboratory. Belmont, CA: Wadsworth.
Pickup, M. 2015. Introduction to Time Series Analysis. Thousand Oaks, CA: Sage.
Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. 2007. Numerical Recipes: The Art of Scientific
Computing. 3rd ed. New York: Cambridge University Press.
Sanchez, G. 2012. Comparing predictions after arima with manual computations. The Stata Blog: Not Elsewhere
Classified. http://blog.stata.com/2012/02/16/comparing-predictions-after-arima-with-manual-computations/.
Shumway, R. H. 1984. Some applications of the EM algorithm to analyzing incomplete time series data. In Time
Series Analysis of Irregularly Observed Data, ed. E. Parzen, 290–324. New York: Springer.
Shumway, R. H. 1988. Applied Statistical Time Series Analysis. Upper Saddle River, NJ: Prentice Hall.
Wang, Q., and N. Wu. 2012. Menu-driven X-12-ARIMA seasonal adjustment in Stata. Stata Journal 12: 214–241.

Also see
[TS] arima postestimation – Postestimation tools for arima
[TS] tsset – Declare data to be time-series data
[TS] arch – Autoregressive conditional heteroskedasticity (ARCH) family of estimators
[TS] dfactor – Dynamic-factor models
[TS] forecast – Econometric model forecasting
[TS] mgarch – Multivariate GARCH models
[TS] mswitch – Markov-switching regression models
[TS] prais – Prais–Winsten and Cochrane–Orcutt regression
[TS] sspace – State-space models
[TS] ucm – Unobserved-components model
[R] regress – Linear regression
[U] 20 Estimation and postestimation commands

Title
arima postestimation – Postestimation tools for arima

Postestimation commands      predict      margins
Remarks and examples         Reference    Also see
Postestimation commands
The following postestimation commands are of special interest after arima:
Command         Description

estat acplot    estimate autocorrelations and autocovariances
estat aroots    check stability condition of estimates
irf             create and analyze IRFs
psdensity       estimate the spectral density

The following standard postestimation commands are also available:

Command           Description

estat ic          Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
estat summarize   summary statistics for the estimation sample
estat vce         variance-covariance matrix of the estimators (VCE)
estimates         cataloging estimation results
forecast          dynamic forecasts and simulations
lincom            point estimates, standard errors, testing, and inference for linear
                  combinations of coefficients
lrtest            likelihood-ratio test
margins           marginal means, predictive margins, marginal effects, and average
                  marginal effects
marginsplot       graph the results from margins (profile plots, interaction plots, etc.)
nlcom             point estimates, standard errors, testing, and inference for nonlinear
                  combinations of coefficients
predict           predictions, residuals, influence statistics, and other diagnostic measures
predictnl         point estimates, standard errors, testing, and inference for generalized
                  predictions
test              Wald tests of simple and composite linear hypotheses
testnl            Wald tests of nonlinear hypotheses


predict
Description for predict
predict creates a new variable containing predictions such as expected values and mean squared
errors. All predictions are available as static one-step-ahead predictions or as dynamic multistep
predictions, and you can control when dynamic predictions begin.

Menu for predict


Statistics > Postestimation

Syntax for predict

    predict [type] newvar [if] [in] [, statistic options]

statistic     Description

Main
  xb          predicted values for mean equation (the differenced series); the default
  stdp        standard error of the linear prediction
  y           predicted values for the mean equation in y (the undifferenced series)
  mse         mean squared error of the predicted values
  residuals   residuals or predicted innovations
  yresiduals  residuals or predicted innovations in y, reversing any time-series operators

These statistics are available both in and out of sample; type predict ... if e(sample) ... if wanted
only for the estimation sample.
Predictions are not available for conditional ARIMA models fit to panel data.

options                   Description

Options
  dynamic(time_constant)  how to handle the lags of y_t
  t0(time_constant)       set starting point for the recursions to time_constant
  structural              calculate considering the structural component only

time_constant is a # or a time literal, such as td(1jan1995) or tq(1995q1); see
Conveniently typing SIF values in [D] datetime.

Options for predict


Six statistics can be computed using predict after arima: the predictions from the model (the
default also given by xb), the standard error of the linear prediction (stdp), the predictions after
reversing any time-series operators applied to the dependent variable (y), the MSE of xb (mse), the
predictions of residuals or innovations (residual), and the predicted residuals or innovations in terms
of y (yresiduals). Given the dynamic nature of the ARMA component and because the dependent
variable might be differenced, there are other ways of computing each. We can use all the data on
the dependent variable that is available right up to the time of each prediction (the default, which is


often called a one-step prediction), or we can use the data up to a particular time, after which the
predicted value of the dependent variable is used recursively to make later predictions (dynamic()).
Either way, we can consider or ignore the ARMA disturbance component (the component is considered
by default and is ignored if you specify structural).
All calculations can be made in or out of sample.

Main

xb, the default, calculates the predictions from the model. If D.depvar is the dependent variable,
these predictions are of D.depvar and not of depvar itself.
stdp calculates the standard error of the linear prediction xb. stdp does not include the variation
arising from the disturbance equation; use mse to calculate standard errors and confidence bands
around the predicted values.
y specifies that predictions of depvar be made, even if the model was specified in terms of, say,
D.depvar.
mse calculates the MSE of the predictions.
residuals calculates the residuals. If no other options are specified, these are the predicted innovations
\epsilon_t; that is, they include the ARMA component. If structural is specified, these are the residuals
\mu_t from the structural equation; see structural below.
yresiduals calculates the residuals in terms of depvar, even if the model was specified in terms of,
say, D.depvar. As with residuals, the yresiduals are computed from the model, including any
ARMA component. If structural is specified, any ARMA component is ignored, and yresiduals
are the residuals from the structural equation; see structural below.
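For instance, after fitting a model with arima, these statistics might be requested as follows (the
new variable names are arbitrary):

. predict xbhat, xb                      // predictions from the model; the default
. predict se_xb, stdp                    // standard error of the linear prediction
. predict yhat, y                        // predictions in the metric of depvar
. predict msehat, mse                    // MSE of the predictions
. predict ehat, residuals                // predicted innovations, ARMA component included
. predict muhat, yresiduals structural   // structural residuals in terms of depvar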

Options

dynamic(time constant) specifies how lags of yt in the model are to be handled. If dynamic() is
not specified, actual values are used everywhere that lagged values of yt appear in the model to
produce one-step-ahead forecasts.
dynamic(time constant) produces dynamic (also known as recursive) forecasts. time constant
specifies when the forecast is to switch from one step ahead to dynamic. In dynamic forecasts,
references to yt evaluate to the prediction of yt for all periods at or after time constant; they
evaluate to the actual value of yt for all prior periods.
For example, dynamic(10) would calculate predictions in which any reference to y_t with t < 10
evaluates to the actual value of y_t and any reference to y_t with t >= 10 evaluates to the prediction of
y_t. This means that one-step-ahead predictions are calculated for t < 10 and dynamic predictions
thereafter. Depending on the lag structure of the model, the dynamic predictions might still refer to
some actual values of y_t.
You may also specify dynamic(.) to have predict automatically switch from one-step-ahead to
dynamic predictions at p + q , where p is the maximum AR lag and q is the maximum MA lag.
t0(time constant) specifies the starting point for the recursions to compute the predicted statistics;
disturbances are assumed to be 0 for t < t0(). The default is to set t0() to the minimum t
observed in the estimation sample, meaning that observations before that are assumed to have
disturbances of 0.
t0() is irrelevant if structural is specified because then all observations are assumed to have
disturbances of 0.
t0(5) would begin recursions at t = 5. If the data were quarterly, you might instead type
t0(tq(1961q2)) to obtain the same result.


The ARMA component of ARIMA models is recursive and depends on the starting point of the
predictions. This includes one-step-ahead predictions.
structural specifies that the calculation be made considering the structural component only, ignoring
the ARMA terms, producing the steady-state equilibrium predictions.

margins
Description for margins
margins estimates margins of response for expected values.

Menu for margins


Statistics > Postestimation

Syntax for margins

    margins [marginlist] [, options]
    margins [marginlist], predict(statistic ...) [predict(statistic ...) ...] [options]

statistic     Description

  xb          predicted values for mean equation (the differenced series); the default
  y           predicted values for the mean equation in y (the undifferenced series)
  stdp        not allowed with margins
  mse         not allowed with margins
  residuals   not allowed with margins
  yresiduals  not allowed with margins

Statistics not allowed with margins are functions of stochastic quantities other than e(b).
For the full syntax, see [R] margins.

Remarks and examples


Remarks are presented under the following headings:
Forecasting after ARIMA
IRF results for ARIMA

Forecasting after ARIMA


We assume that you have already read [TS] arima. In this section, we illustrate some of the features
of predict after fitting ARIMA, ARMAX, and other dynamic models by using arima. In example 2
of [TS] arima, we fit the model

\Delta \ln(wpi_t) = \beta_0 + \rho_1 \{ \Delta \ln(wpi_{t-1}) - \beta_0 \} + \theta_1 \epsilon_{t-1} + \theta_4 \epsilon_{t-4} + \epsilon_t

by typing
. use http://www.stata-press.com/data/r14/wpi1
. arima D.ln_wpi, ar(1) ma(1 4)
(output omitted )

If we use the command


. predict xb, xb

then Stata computes xb_t as

xb_t = \widehat{\beta}_0 + \widehat{\rho}_1 \{ \Delta \ln(wpi_{t-1}) - \widehat{\beta}_0 \} + \widehat{\theta}_1 \widehat{\epsilon}_{t-1} + \widehat{\theta}_4 \widehat{\epsilon}_{t-4}

where

\widehat{\epsilon}_{t-j} = \begin{cases} \Delta \ln(wpi_{t-j}) - xb_{t-j} & t - j > 0 \\ 0 & \text{otherwise} \end{cases}

meaning that predict newvar, xb calculates predictions by using the metric of the dependent variable.
In this example, the dependent variable represented changes in ln(wpit ), and so the predictions are
likewise for changes in that variable.
If we instead use
. predict y, y

Stata computes y_t as y_t = xb_t + \ln(wpi_{t-1}) so that y_t represents the predicted levels of \ln(wpi_t). In
general, predict newvar, y will reverse any time-series operators applied to the dependent variable
during estimation.
If we want to ignore the ARMA error components when making predictions, we use the structural
option,
. predict xbs, xb structural

which generates xbs_t = \widehat{\beta}_0 because there are no regressors in this model, and
. predict ys, y structural

generates ys_t = \widehat{\beta}_0 + \ln(wpi_{t-1}).

Example 1: Dynamic forecasts


An attractive feature of the arima command is the ability to make dynamic forecasts. In example 4
of [TS] arima, we fit the model

consump_t = \beta_0 + \beta_1 m2_t + \mu_t

\mu_t = \rho \mu_{t-1} + \theta \epsilon_{t-1} + \epsilon_t
First, we refit the model by using data up through the first quarter of 1978, and then we will evaluate
the one-step-ahead and dynamic forecasts.
. use http://www.stata-press.com/data/r14/friedman2
. keep if time<=tq(1981q4)
(67 observations deleted)
. arima consump m2 if tin(, 1978q1), ar(1) ma(1)
(output omitted )


To make one-step-ahead forecasts, we type


. predict chat, y
(52 missing values generated)

(Because our dependent variable contained no time-series operators, we could have instead used
predict chat, xb and accomplished the same thing.) We will also make dynamic forecasts,
switching from observed values of consump to forecasted values at the first quarter of 1978:
. predict chatdy, dynamic(tq(1978q1)) y
(52 missing values generated)

The following graph compares the forecasted values to the observed values for the first few years
following the estimation sample:

[Figure: Personal consumption, in billions of dollars, 1977q1 through 1982q1, comparing
observed values, one-step-ahead forecasts, and the dynamic forecast beginning in 1978q1]

The one-step-ahead forecasts never deviate far from the observed values, though over time the
dynamic forecasts have larger errors. To understand why that is the case, rewrite the model as

consump_t = \beta_0 + \beta_1 m2_t + \rho \mu_{t-1} + \theta \epsilon_{t-1} + \epsilon_t
          = \beta_0 + \beta_1 m2_t + \rho ( consump_{t-1} - \beta_0 - \beta_1 m2_{t-1} ) + \theta \epsilon_{t-1} + \epsilon_t

This form shows that the forecasted value of consumption at time t depends on the value of consumption
at time t - 1. When making the one-step-ahead forecast for period t, we know the actual value of
consumption at time t - 1. On the other hand, with the dynamic(tq(1978q1)) option, the forecasted
value of consumption for period 1978q1 is based on the observed value of consumption in period
1977q4, but the forecast for 1978q2 is based on the forecast value for 1978q1, the forecast for 1978q3
is based on the forecast value for 1978q2, and so on. Thus, with dynamic forecasts, prior forecast
errors accumulate over time. The following graph illustrates this effect.


[Figure: Forecast error (forecast minus actual), 1978q1 through 1982q1, for the
one-step-ahead and dynamic (1978q1) forecasts]

IRF results for ARIMA


We assume that you have already read [TS] irf and [TS] irf create. In this section, we illustrate
how to calculate the impulse–response function (IRF) of an ARIMA model.

Example 2
Consider a model of the quarterly U.S. money supply, as measured by M1, from Enders (2004).
Enders (2004, 9397) discusses why seasonal shopping patterns cause seasonal effects in M1. The
variable lnm1 contains data on the natural log of the money supply. We fit seasonal and nonseasonal
ARIMA models and compare the IRFs calculated from both models.
We fit the following nonseasonal ARIMA model

\Delta \Delta_4 lnm1_t = \rho_1 (\Delta \Delta_4 lnm1_{t-1}) + \rho_4 (\Delta \Delta_4 lnm1_{t-4}) + \epsilon_t


The code below fits the above model and saves a set of IRF results to a file called myirf.irf.



. use http://www.stata-press.com/data/r14/m1nsa, clear
(U.S. money supply (M1) from Enders (2004), 95-99.)
. arima DS4.lnm1, ar(1 4) noconstant nolog
ARIMA regression
Sample: 1961q2 - 2008q2                         Number of obs     =        189
                                                Wald chi2(2)      =      78.34
Log likelihood = 579.3036                       Prob > chi2       =     0.0000

                             OPG
    DS4.lnm1       Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

ARMA
          ar
         L1.    .3551862   .0503011     7.06   0.000     .2565979    .4537745
         L4.   -.3275808   .0594953    -5.51   0.000    -.4441895    -.210972

      /sigma    .0112678   .0004882    23.08   0.000     .0103109    .0122246

Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.
. irf create nonseasonal, set(myirf) step(30)
(file myirf.irf created)
(file myirf.irf now active)
(file myirf.irf updated)

We fit the following seasonal ARIMA model

(1 - \rho_1 L)(1 - \rho_{4,1} L^4) \Delta \Delta_4 lnm1_t = \epsilon_t

The code below fits this seasonal ARIMA model and saves a set of IRF results to the active IRF
file, which is myirf.irf.
. arima DS4.lnm1, ar(1) mar(1,4) noconstant nolog
ARIMA regression
Sample: 1961q2 - 2008q2                         Number of obs     =        189
                                                Wald chi2(2)      =     119.78
Log likelihood = 588.6689                       Prob > chi2       =     0.0000

                             OPG
    DS4.lnm1       Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

ARMA
          ar
         L1.     .489277   .0538033     9.09   0.000     .3838245    .5947296

ARMA4
          ar
         L1.   -.4688653   .0601248    -7.80   0.000    -.5867076   -.3510229

      /sigma    .0107075   .0004747    22.56   0.000     .0097771    .0116379

Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.
. irf create seasonal, step(30)
(file myirf.irf updated)

We now have two sets of IRF results in the file myirf.irf. We can graph both IRF functions side
by side by calling irf graph.


. irf graph irf

[Figure: impulse-response functions (irf) with 95% CIs over steps 0-30, one panel for
(nonseasonal, DS4.lnm1, DS4.lnm1) and one for (seasonal, DS4.lnm1, DS4.lnm1);
graphs by irfname, impulse variable, and response variable]

The trajectories of the IRF functions are similar: each figure shows that a shock to lnm1 causes a
temporary oscillation in lnm1 that dies out after about 15 time periods. This behavior is characteristic
of short-memory processes.

See [TS] psdensity for an introduction to estimating spectral densities using the parameters estimated
by arima.

Reference
Enders, W. 2004. Applied Econometric Time Series. 2nd ed. New York: Wiley.

Also see
[TS] arima – ARIMA, ARMAX, and other dynamic regression models
[TS] estat acplot – Plot parametric autocorrelation and autocovariance functions
[TS] estat aroots – Check the stability condition of ARIMA estimates
[TS] irf – Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] psdensity – Parametric spectral density estimation after arima, arfima, and ucm
[U] 20 Estimation and postestimation commands

Title
corrgram – Tabulate and graph autocorrelations

Description            Quick start              Menu
Syntax                 Options for corrgram     Options for ac and pac
Remarks and examples   Stored results           Methods and formulas
Acknowledgment         References               Also see

Description
corrgram produces a table of the autocorrelations, partial autocorrelations, and portmanteau (Q)
statistics. It also displays a character-based plot of the autocorrelations and partial autocorrelations.
See [TS] wntestq for more information on the Q statistic.
ac produces a correlogram (a graph of autocorrelations) with pointwise confidence intervals that
is based on Bartlett's formula for MA(q) processes.
pac produces a partial correlogram (a graph of partial autocorrelations) with confidence intervals
calculated using a standard error of 1/\sqrt{n}. The residual variances for each lag may optionally be
included on the graph.

Quick start
Produce correlogram for y using tsset data
corrgram y
As above, but limit the number of computed autocorrelations to 10
corrgram y, lags(10)
Plot the autocorrelation function for y
ac y
As above, and generate newv to hold the autocorrelations
ac y, generate(newv)
Plot partial autocorrelation function for y and include standardized residual variances in the graph
pac y, srv

Menu
corrgram
Statistics > Time series > Graphs > Autocorrelations & partial autocorrelations

ac
Statistics > Time series > Graphs > Correlogram (ac)

pac
Statistics > Time series > Graphs > Partial correlogram (pac)

Syntax
Autocorrelations, partial autocorrelations, and portmanteau (Q) statistics

    corrgram varname [if] [in] [, corrgram_options]

Graph autocorrelations with confidence intervals

    ac varname [if] [in] [, ac_options]

Graph partial autocorrelations with confidence intervals

    pac varname [if] [in] [, pac_options]
corrgram_options        Description

Main
  lags(#)               calculate # autocorrelations
  noplot                suppress character-based plots
  yw                    calculate partial autocorrelations by using Yule–Walker equations

ac_options              Description

Main
  lags(#)               calculate # autocorrelations
  generate(newvar)      generate a variable to hold the autocorrelations
  level(#)              set confidence level; default is level(95)
  fft                   calculate autocorrelation by using Fourier transforms

Plot
  line_options          change look of dropped lines
  marker_options        change look of markers (color, size, etc.)
  marker_label_options  add marker labels; change look or position

CI plot
  ciopts(area_options)  affect rendition of the confidence bands

Add plots
  addplot(plot)         add other plots to the generated graph

Y axis, X axis, Titles, Legend, Overall
  twoway_options        any options other than by() documented in [G-3] twoway_options


pac_options                Description

Main
  lags(#)                  calculate # partial autocorrelations
  generate(newvar)         generate a variable to hold the partial autocorrelations
  yw                       calculate partial autocorrelations by using Yule–Walker equations
  level(#)                 set confidence level; default is level(95)

Plot
  line_options             change look of dropped lines
  marker_options           change look of markers (color, size, etc.)
  marker_label_options     add marker labels; change look or position

CI plot
  ciopts(area_options)     affect rendition of the confidence bands

SRV plot
  srv                      include standardized residual variances in graph
  srvopts(marker_options)  affect rendition of the plotted standardized residual variances (SRVs)

Add plots
  addplot(plot)            add other plots to the generated graph

Y axis, X axis, Titles, Legend, Overall
  twoway_options           any options other than by() documented in [G-3] twoway_options

You must tsset your data before using corrgram, ac, or pac; see [TS] tsset. Also, the time series
must be dense (nonmissing and no gaps in the time variable) in the sample if you specify the fft option.
varname may contain time-series operators; see [U] 11.4.4 Time-series varlists.

Options for corrgram




Main

lags(#) specifies the number of autocorrelations to calculate. The default is to use min(⌊n/2⌋ − 2, 40),
where ⌊n/2⌋ is the greatest integer less than or equal to n/2.
noplot prevents the character-based plots from being in the listed table of autocorrelations and partial
autocorrelations.
yw specifies that the partial autocorrelations be calculated using the YuleWalker equations instead
of using the default regression-based technique. yw cannot be used if srv is used.

Options for ac and pac




Main

lags(#) specifies the number of autocorrelations to calculate. The default is to use min(⌊n/2⌋ − 2, 40),
where ⌊n/2⌋ is the greatest integer less than or equal to n/2.
generate(newvar) specifies a new variable to contain the autocorrelation (ac command) or partial
autocorrelation (pac command) values. This option is required if the nograph option is used.


nograph (implied when using generate() in the dialog box) prevents ac and pac from constructing
a graph. This option requires the generate() option.
yw (pac only) specifies that the partial autocorrelations be calculated using the YuleWalker equations
instead of using the default regression-based technique. yw cannot be used if srv is used.
level(#) specifies the confidence level, as a percentage, for the confidence bands in the ac or pac
graph. The default is level(95) or as set by set level; see [R] level.
fft (ac only) specifies that the autocorrelations be calculated using two Fourier transforms. This
technique can be faster than simply iterating over the requested number of lags.

Plot

line options, marker options, and marker label options affect the rendition of the plotted autocorrelations (with ac) or partial autocorrelations (with pac).
line options specify the look of the dropped lines, including pattern, width, and color; see
[G-3] line options.
marker options specify the look of markers. This look includes the marker symbol, the marker
size, and its color and outline; see [G-3] marker options.
marker label options specify if and how the markers are to be labeled; see
[G-3] marker label options.

CI plot

ciopts(area options) affects the rendition of the confidence bands; see [G-3] area options.

SRV plot

srv (pac only) specifies that the standardized residual variances be plotted with the partial autocorrelations. srv cannot be used if yw is used.
srvopts(marker options) (pac only) affects the rendition of the plotted standardized residual
variances; see [G-3] marker options. This option implies the srv option.

Add plots

addplot(plot) adds specified plots to the generated graph; see [G-3] addplot option.

Y axis, X axis, Titles, Legend, Overall

twoway options are any of the options documented in [G-3] twoway options, excluding by(). These
include options for titling the graph (see [G-3] title options) and for saving the graph to disk (see
[G-3] saving option).

Remarks and examples


Remarks are presented under the following headings:
Basic examples
Video example


Basic examples
corrgram tabulates autocorrelations, partial autocorrelations, and portmanteau (Q) statistics and
plots the autocorrelations and partial autocorrelations. The Q statistics are the same as those produced
by [TS] wntestq. ac produces graphs of the autocorrelations, and pac produces graphs of the partial
autocorrelations. See Becketti (2013) for additional examples of how these commands are used in
practice.

Example 1
Here we use the international airline passengers dataset (Box, Jenkins, and Reinsel 2008, Series G).
This dataset has 144 observations on the monthly number of international airline passengers from
1949 through 1960. We can list the autocorrelations and partial autocorrelations by using corrgram.
. use http://www.stata-press.com/data/r14/air2
(TIMESLAB: Airline passengers)
. corrgram air, lags(20)
                                              -1       0       1 -1       0       1
 LAG       AC       PAC        Q     Prob>Q  [Autocorrelation]  [Partial Autocor]
                                             (character-based plots not reproduced)
   1     0.9480   0.9589    132.14   0.0000
   2     0.8756  -0.3298    245.65   0.0000
   3     0.8067   0.2018    342.67   0.0000
   4     0.7526   0.1450    427.74   0.0000
   5     0.7138   0.2585     504.8   0.0000
   6     0.6817  -0.0269     575.6   0.0000
   7     0.6629   0.2043    643.04   0.0000
   8     0.6556   0.1561    709.48   0.0000
   9     0.6709   0.5686    779.59   0.0000
  10     0.7027   0.2926    857.07   0.0000
  11     0.7432   0.8402    944.39   0.0000
  12     0.7604   0.6127    1036.5   0.0000
  13     0.7127  -0.6660      1118   0.0000
  14     0.6463  -0.3846    1185.6   0.0000
  15     0.5859   0.0787    1241.5   0.0000
  16     0.5380  -0.0266      1289   0.0000
  17     0.4997  -0.0581    1330.4   0.0000
  18     0.4687  -0.0435      1367   0.0000
  19     0.4499   0.2773    1401.1   0.0000
  20     0.4416  -0.0405    1434.1   0.0000


We can use ac to produce a graph of the autocorrelations.
. ac air, lags(20)

[Figure: autocorrelations of air, lags 0-20, with Bartlett's formula for MA(q) 95%
confidence bands; y axis: Autocorrelations of air (-1.00 to 1.00); x axis: Lag]

The data probably have a trend component as well as a seasonal component. First-differencing
will mitigate the effects of the trend, and seasonal differencing will help control for seasonality. To
accomplish this goal, we can use Stata's time-series operators. Here we graph the partial autocorrelations
after controlling for trends and seasonality. We also use srv to include the standardized residual
variances.

. pac DS12.air, lags(20) srv

[Figure: partial autocorrelations of DS12.air, lags 0-20, with 95% confidence bands
(se = 1/sqrt(n)) and standardized variances; x axis: Lag]

See [U] 11.4.4 Time-series varlists for more information about time-series operators.


Video example
Time series, part 4: Correlograms and partial correlograms

Stored results
corrgram stores the following in r():

Scalars
  r(lags)   number of lags
  r(ac#)    AC for lag #
  r(pac#)   PAC for lag #
  r(q#)     Q for lag #

Matrices
  r(AC)     vector of autocorrelations
  r(PAC)    vector of partial autocorrelations
  r(Q)      vector of Q statistics

Methods and formulas


Box, Jenkins, and Reinsel (2008, sec. 2.1.4); Newton (1988); Chatfield (2004); and Hamilton (1994)
provide excellent descriptions of correlograms. Newton (1988) also discusses the calculation of the
various quantities.
The autocovariance function for a time series x_1, x_2, \dots, x_n is defined for |v| < n as

\widehat{R}(v) = \frac{1}{n} \sum_{i=1}^{n-|v|} (x_i - \bar{x})(x_{i+v} - \bar{x})

where \bar{x} is the sample mean, and the autocorrelation function is then defined as

\widehat{\rho}_v = \frac{\widehat{R}(v)}{\widehat{R}(0)}

The variance of \widehat{\rho}_v is given by Bartlett's formula for MA(q) processes. From Brockwell and
Davis (2002, 94), we have

\text{Var}(\widehat{\rho}_v) =
  \begin{cases}
    1/n & v = 1 \\
    \frac{1}{n} \left\{ 1 + 2 \sum_{i=1}^{v-1} \widehat{\rho}^2(i) \right\} & v > 1
  \end{cases}
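As a check on these formulas, here is a minimal Mata sketch; the series x is hypothetical, and
the first three autocorrelations and their Bartlett variances are computed directly:

mata:
x = (2, 4, 3, 5, 6, 4, 7, 5)'
n = rows(x)
xbar = mean(x)
R0 = ((x :- xbar)'*(x :- xbar))/n             // R(0)
rho = J(3, 1, .)
for (v = 1; v <= 3; v++) {
    Rv = ((x[1::n-v] :- xbar)'*(x[v+1::n] :- xbar))/n   // R(v)
    rho[v] = Rv/R0
}
Varrho = J(3, 1, 1/n)                         // Bartlett's formula, v = 1 case
for (v = 2; v <= 3; v++) {
    Varrho[v] = (1 + 2*sum(rho[1::v-1]:^2))/n // v > 1 case
}
rho, Varrho
end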

The partial autocorrelation at lag v measures the correlation between x_t and x_{t+v} after the effects
of x_{t+1}, \dots, x_{t+v-1} have been removed. By default, corrgram and pac use a regression-based
method to estimate it. We run an OLS regression of x_t on x_{t-1}, \dots, x_{t-v} and a constant term. The
estimated coefficient on x_{t-v} is our estimate of the vth partial autocorrelation. The residual variance
is the estimated variance of that regression, which we then standardize by dividing by \widehat{R}(0).
If the yw option is specified, corrgram and pac use the Yule–Walker equations to estimate the
partial autocorrelations. Per Enders (2010, 66–67), let \phi_{vv} denote the vth partial autocorrelation
coefficient. We then have

\widehat{\phi}_{11} = \widehat{\rho}_1

and for v > 1

\widehat{\phi}_{vv} = \frac{ \widehat{\rho}_v - \sum_{j=1}^{v-1} \widehat{\phi}_{v-1,j}\, \widehat{\rho}_{v-j} }{ 1 - \sum_{j=1}^{v-1} \widehat{\phi}_{v-1,j}\, \widehat{\rho}_j }

and

\widehat{\phi}_{vj} = \widehat{\phi}_{v-1,j} - \widehat{\phi}_{vv} \widehat{\phi}_{v-1,v-j} \qquad j = 1, 2, \dots, v-1

Unlike the regression-based method, the Yule–Walker equations-based method ensures that the first
sample partial autocorrelation equals the first sample autocorrelation coefficient, as must be true in the
population; see Greene (2008, 725).

McCullough (1998) discusses other methods of estimating \phi_{vv}; he finds that relative to other
methods, such as linear regression, the Yule–Walker equations-based method performs poorly, in part
because it is susceptible to numerical error. Box, Jenkins, and Reinsel (2008, 69) also caution against
using the Yule–Walker equations-based method, especially with data that are nearly nonstationary.
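A minimal Mata sketch of the Yule–Walker recursion above, using hypothetical sample
autocorrelations rho_1, rho_2, rho_3:

mata:
rho = (0.8, 0.5, 0.3)'                 // hypothetical rho_1, rho_2, rho_3
V = rows(rho)
phi = J(V, V, 0)
phi[1,1] = rho[1]                      // phi_11 = rho_1
for (v = 2; v <= V; v++) {
    num = rho[v] - phi[v-1, 1::v-1]*rho[v-1::1]
    den = 1 - phi[v-1, 1::v-1]*rho[1::v-1]
    phi[v,v] = num/den
    for (j = 1; j <= v-1; j++) {
        phi[v,j] = phi[v-1,j] - phi[v,v]*phi[v-1,v-j]
    }
}
diagonal(phi)'                         // phi_11, phi_22, ..., the partial autocorrelations
end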

Acknowledgment
The ac and pac commands are based on the ac and pac commands written by Sean Becketti (1992),
a past editor of the Stata Technical Bulletin and author of the Stata Press book Introduction to Time
Series Using Stata.

References
Becketti, S. 1992. sts1: Autocorrelation and partial autocorrelation graphs. Stata Technical Bulletin 5: 27–28. Reprinted
in Stata Technical Bulletin Reprints, vol. 1, pp. 221–223. College Station, TX: Stata Press.
Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.
Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 2008. Time Series Analysis: Forecasting and Control. 4th ed.
Hoboken, NJ: Wiley.
Brockwell, P. J., and R. A. Davis. 2002. Introduction to Time Series and Forecasting. 2nd ed. New York: Springer.
Chatfield, C. 2004. The Analysis of Time Series: An Introduction. 6th ed. Boca Raton, FL: Chapman & Hall/CRC.
Enders, W. 2010. Applied Econometric Time Series. 3rd ed. New York: Wiley.
Greene, W. H. 2008. Econometric Analysis. 6th ed. Upper Saddle River, NJ: Prentice Hall.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
McCullough, B. D. 1998. Algorithm choice for (partial) autocorrelation functions. Journal of Economic and Social
Measurement 24: 265–278.
Newton, H. J. 1988. TIMESLAB: A Time Series Analysis Laboratory. Belmont, CA: Wadsworth.

Also see
[TS] tsset – Declare data to be time-series data
[TS] pergram – Periodogram
[TS] wntestq – Portmanteau (Q) test for white noise

Title
cumsp – Cumulative spectral distribution

Description            Quick start            Menu                   Syntax
Options                Remarks and examples   Methods and formulas   References
Also see

Description
cumsp plots the cumulative sample spectral-distribution function evaluated at the natural frequencies
for a (dense) time series.

Quick start
Plot cumulative sample spectral-distribution function for y using tsset data
cumsp y
As above, and create newv containing the cumulative distribution estimates
cumsp y, generate(newv)

Menu
Statistics > Time series > Graphs > Cumulative spectral distribution

Syntax

    cumsp varname [if] [in] [, options]

options                 Description

Main
  generate(newvar)      create newvar holding distribution values

Plot
  cline_options         affect rendition of the plotted points connected by lines
  marker_options        change look of markers (color, size, etc.)
  marker_label_options  add marker labels; change look or position

Add plots
  addplot(plot)         add other plots to the generated graph

Y axis, X axis, Titles, Legend, Overall
  twoway_options        any options other than by() documented in [G-3] twoway_options

You must tsset your data before using cumsp; see [TS] tsset. Also, the time series must be dense
(nonmissing with no gaps in the time variable) in the sample specified.
varname may contain time-series operators; see [U] 11.4.4 Time-series varlists.

Options


Main

generate(newvar) specifies a new variable to contain the estimated cumulative spectral-distribution
values.

Plot

cline options affect the rendition of the plotted points connected by lines; see [G-3] cline options.
marker options specify the look of markers. This look includes the marker symbol, the marker size,
and its color and outline; see [G-3] marker options.
marker label options specify if and how the markers are to be labeled; see [G-3] marker label options.

Add plots

addplot(plot) provides a way to add other plots to the generated graph; see [G-3] addplot option.

Y axis, X axis, Titles, Legend, Overall

twoway options are any of the options documented in [G-3] twoway options, excluding by(). These
include options for titling the graph (see [G-3] title options) and for saving the graph to disk (see
[G-3] saving option).


Remarks and examples


Example 1
Here we use the international airline passengers dataset (Box, Jenkins, and Reinsel 2008, Series G).
This dataset has 144 observations on the monthly number of international airline passengers from
1949 through 1960. In the cumulative sample spectral distribution function for these data, we also
request a vertical line at frequency 1/12. Because the data are monthly, there will be a pronounced
jump in the cumulative sample spectral-distribution plot at the 1/12 value if there is an annual cycle
in the data.
. use http://www.stata-press.com/data/r14/air2
(TIMESLAB: Airline passengers)
. cumsp air, xline(.083333333)

[Figure: Sample spectral distribution function for air (Airline Passengers, 1949-1960);
y axis: Cumulative spectral distribution (0.00 to 1.00); x axis: Frequency (0.00 to 0.50);
vertical line at frequency 1/12; points evaluated at the natural frequencies]

The cumulative sample spectral-distribution function clearly illustrates the annual cycle.

Methods and formulas


A time series of interest is decomposed into a unique set of sinusoids of various frequencies and
amplitudes.

A plot of the sinusoidal amplitudes versus the frequencies for the sinusoidal decomposition of a
time series gives us the spectral density of the time series. If we calculate the sinusoidal amplitudes
for a discrete set of natural frequencies (1/n, 2/n, \dots, q/n), we obtain the periodogram.

Let x(1), \dots, x(n) be a time series, and let \omega_k = (k - 1)/n denote the natural frequencies for
k = 1, \dots, \lfloor n/2 \rfloor + 1, where \lfloor \cdot \rfloor indicates the greatest integer function. Define

C_k^2 = \frac{1}{n^2} \left| \sum_{t=1}^{n} x(t) e^{-2\pi i (t-1) \omega_k} \right|^2

A plot of nC_k^2 versus \omega_k is then called the periodogram.

The sample spectral density may then be defined as \widehat{f}(\omega_k) = nC_k^2.

If we let \widehat{f}(\omega_1), \dots, \widehat{f}(\omega_Q) be the sample spectral density function of the time series evaluated
at the frequencies \omega_j = (j - 1)/Q for j = 1, \dots, Q and we let q = \lfloor Q/2 \rfloor + 1, then

\widehat{F}(\omega_k) = \frac{ \sum_{j=1}^{k} \widehat{f}(\omega_j) }{ \sum_{j=1}^{q} \widehat{f}(\omega_j) }

is the sample spectral-distribution function of the time series.
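A minimal Mata sketch of these definitions, for a hypothetical series x whose length is a power
of 2 (a requirement of Mata's fft() function):

mata:
x = (2, 4, 3, 5, 6, 4, 7, 5)
n = cols(x)
d = fft(x)                           // sum_t x(t) e^{-2 pi i (t-1)(k-1)/n}
Ck2 = (abs(d):^2)/(n^2)              // C_k^2 at frequencies (k-1)/n
f = n*Ck2[|1 \ floor(n/2)+1|]        // sample spectral density at natural frequencies
Fhat = runningsum(f)/sum(f)          // sample spectral-distribution function
Fhat
end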

References
Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 2008. Time Series Analysis: Forecasting and Control. 4th ed.
Hoboken, NJ: Wiley.
Newton, H. J. 1988. TIMESLAB: A Time Series Analysis Laboratory. Belmont, CA: Wadsworth.

Also see
[TS] tsset – Declare data to be time-series data
[TS] corrgram – Tabulate and graph autocorrelations
[TS] pergram – Periodogram

Title
dfactor – Dynamic-factor models

Description            Quick start            Menu                   Syntax
Options                Remarks and examples   Stored results         Methods and formulas
References             Also see

Description
dfactor estimates the parameters of dynamic-factor models by maximum likelihood. Dynamic-
factor models are flexible models for multivariate time series in which unobserved factors have a
vector autoregressive structure, exogenous covariates are permitted in both the equations for the latent
factors and the equations for observable dependent variables, and the disturbances in the equations
for the dependent variables may be autocorrelated.

Quick start
Dynamic-factor model with y1 and y2 a function of x and an unobserved factor that follows a
third-order autoregressive process using tsset data
dfactor (y1 y2=x) (f=, ar(1/3))
As above, but with equations for the observed variables following an autoregressive process of order 1
dfactor (y1 y2=x, ar(1)) (f=, ar(1/3))
As above, but with an unstructured covariance matrix for the errors of y1 and y2
dfactor (y1 y2=x, ar(1) covstructure(unstructured)) (f=, ar(1/3))

Menu
Statistics > Multivariate time series > Dynamic-factor models

Syntax

    dfactor obs_eq [fac_eq] [if] [in] [, options]

obs_eq specifies the equation for the observed dependent variables, and it has the form

    (depvars = [exog_d] [, sopts])

fac_eq specifies the equation for the unobserved factors, and it has the form

    (facvars = [exog_f] [, sopts])

depvars are the observed dependent variables. exog_d are the exogenous variables that enter into
the equations for the observed dependent variables. (All factors are automatically entered into the
equations for the observed dependent variables.) facvars are the names for the unobserved factors
in the model. You may specify the names of existing variables in facvars, but dfactor treats
them only as names and takes no notice that they are also variables. exog_f are the exogenous
variables that enter into the equations for the factors.
options                     Description

Model
  constraints(constraints)  apply specified linear constraints

SE/Robust
  vce(vcetype)              vcetype may be oim or robust

Reporting
  level(#)                  set confidence level; default is level(95)
  nocnsreport               do not display constraints
  display_options           control columns and column formats, row spacing, display of
                            omitted variables and base and empty cells, and
                            factor-variable labeling

Maximization
  maximize_options          control the maximization process; seldom used
  from(matname)             specify initial values for the maximization process; seldom used

Advanced
  method(method)            specify the method for calculating the log likelihood; seldom used

  coeflegend                display legend instead of statistics

sopts                        Description

Model
  noconstant                 suppress constant term from the equation; allowed only in obs_eq
  ar(numlist)                autoregressive terms
  arstructure(arstructure)   structure of autoregressive coefficient matrices
  covstructure(covstructure) covariance structure

arstructure                  Description

  diagonal                   diagonal matrix; the default
  ltriangular                lower triangular matrix
  general                    general matrix

covstructure                 Description

  identity                   identity matrix
  dscalar                    diagonal scalar matrix
  diagonal                   diagonal matrix
  unstructured               symmetric, positive-definite matrix

method                       Description

  hybrid                     use the stationary Kalman filter and the De Jong diffuse Kalman
                             filter; the default
  dejong                     use the stationary De Jong method and the De Jong diffuse Kalman
                             filter

You must tsset your data before using dfactor; see [TS] tsset.
exog d and exog f may contain factor variables; see [U] 11.4.3 Factor variables.
depvars, exog d, and exog f may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, fp, rolling, and statsby are allowed; see [U] 11.1.10 Prefix commands.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.

Options


Model

constraints(constraints) apply linear constraints. Some specifications require linear constraints for
parameter identification.
noconstant suppresses the constant term.
ar(numlist) specifies the vector autoregressive lag structure in the equation. By default, no lags are
included in either the observable or the factor equations.
arstructure(diagonal|ltriangular|general) specifies the structure of the matrices in the vector
autoregressive lag structure.


arstructure(diagonal) specifies the matrices to be diagonal, with separate parameters for each
lag, but no cross-equation autocorrelations. arstructure(diagonal) is the default for both
the observable and the factor equations.
arstructure(ltriangular) specifies the matrices to be lower triangular, parameterizing a
recursive, or Wold causal, structure.
arstructure(general) specifies the matrices to be general matrices, with separate parameters for
each possible autocorrelation and cross-correlation.
covstructure(identity | dscalar | diagonal | unstructured) specifies the covariance structure
of the errors.
covstructure(identity) specifies a covariance matrix equal to an identity matrix, and it is the
default for the errors in the factor equations.
covstructure(dscalar) specifies a covariance matrix equal to \sigma^2 times an identity matrix.
covstructure(diagonal) specifies a diagonal covariance matrix, and it is the default for the
errors in the observable variables.
covstructure(unstructured) specifies a symmetric, positive-definite covariance matrix with
parameters for all variances and covariances.

SE/Robust

vce(vcetype) specifies the estimator for the variancecovariance matrix of the estimator.
vce(oim), the default, causes dfactor to use the observed information matrix estimator.
vce(robust) causes dfactor to use the Huber/White/sandwich estimator.

Reporting

level(#); see [R] estimation options.


nocnsreport; see [R] estimation options.
display options: noci, nopvalues, noomitted, vsquish, noemptycells, baselevels,
allbaselevels, nofvlabel, fvwrap(#), fvwrapon(style), cformat(% fmt), pformat(% fmt),
and sformat(% fmt); see [R] estimation options.

Maximization

 
maximize_options: difficult, technique(algorithm_spec), iterate(#), [no]log, trace,
gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#),
nrtolerance(#), and from(matname); see [R] maximize for all options except from(), and
see below for information on from(). These options are seldom used.
from(matname) specifies initial values for the maximization process. from(b0) causes dfactor
to begin the maximization algorithm with the values in b0. b0 must be a row vector; the number
of columns must equal the number of parameters in the model; and the values in b0 must be
in the same order as the parameters in e(b). This option is seldom used.

Advanced

method(method) specifies how to compute the log likelihood. dfactor writes the model in state-
space form and uses sspace to estimate the parameters; see [TS] sspace. method() offers two
methods for dealing with some of the technical aspects of the state-space likelihood. This option
is seldom used.


method(hybrid), the default, uses the Kalman filter with model-based initial values when the
model is stationary and uses the De Jong (1988, 1991) diffuse Kalman filter when the model
is nonstationary.
method(dejong) uses the De Jong (1988) method for estimating the initial values for the Kalman
filter when the model is stationary and uses the De Jong (1988, 1991) diffuse Kalman filter
when the model is nonstationary.
The following option is available with dfactor but is not shown in the dialog box:
coeflegend; see [R] estimation options.

Remarks and examples


Remarks are presented under the following headings:
An introduction to dynamic-factor models
Some examples

An introduction to dynamic-factor models


dfactor estimates the parameters of dynamic-factor models by maximum likelihood (ML). Dynamic-
factor models represent a vector of k endogenous variables as linear functions of n_f < k unobserved
factors and some exogenous covariates. The unobserved factors and the disturbances in the equations
for the observed variables may follow vector autoregressive structures.
Dynamic-factor models have been developed and applied in macroeconomics; see Geweke (1977),
Sargent and Sims (1977), Stock and Watson (1989, 1991), and Watson and Engle (1983).
Dynamic-factor models are very flexible; in a sense, they are too flexible. Constraints must be
imposed to identify the parameters of dynamic-factor and static-factor models. The parameters in the
default specifications in dfactor are identified, but other specifications require additional restrictions.
The factors are identified only up to a sign, which means that the coefficients on the unobserved factors
can flip signs and still produce the same predictions and the same log likelihood. The flexibility of
the model sometimes produces convergence problems.
dfactor is designed to handle cases in which the number of modeled endogenous variables, k ,
is small. The ML estimator is implemented by writing the model in state-space form and by using
the Kalman filter to derive and implement the log likelihood. As k grows, the number of parameters
quickly exceeds the number that can be estimated.


A dynamic-factor model has the form

y_t = P f_t + Q x_t + u_t
f_t = R w_t + A_1 f_{t-1} + A_2 f_{t-2} + \dots + A_p f_{t-p} + \nu_t
u_t = C_1 u_{t-1} + C_2 u_{t-2} + \dots + C_q u_{t-q} + \epsilon_t

where the definitions are given in the following table:

Item         Dimension    Definition
y_t          k x 1        vector of dependent variables
P            k x n_f      matrix of parameters
f_t          n_f x 1      vector of unobservable factors
Q            k x n_x      matrix of parameters
x_t          n_x x 1      vector of exogenous variables
u_t          k x 1        vector of disturbances
R            n_f x n_w    matrix of parameters
w_t          n_w x 1      vector of exogenous variables
A_i          n_f x n_f    matrix of autocorrelation parameters for i in {1, 2, ..., p}
\nu_t        n_f x 1      vector of disturbances
C_i          k x k        matrix of autocorrelation parameters for i in {1, 2, ..., q}
\epsilon_t   k x 1        vector of disturbances
By selecting different numbers of factors and lags, the dynamic-factor model encompasses the six
models in the table below:

  Model                                                        $n_f$    $p$    $q$
  Dynamic factors with vector autoregressive errors  (DFAR)    > 0     > 0    > 0
  Dynamic factors                                    (DF)      > 0     > 0    = 0
  Static factors with vector autoregressive errors   (SFAR)    > 0     = 0    > 0
  Static factors                                     (SF)      > 0     = 0    = 0
  Vector autoregressive errors                       (VAR)     = 0     = 0    > 0
  Seemingly unrelated regression                     (SUR)     = 0     = 0    = 0
In addition to the time-series models, dfactor can estimate the parameters of SF models and SUR
models. dfactor can place equality constraints on the disturbance covariances, which sureg and
var do not allow.
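To make the mapping from the table above to dfactor syntax concrete, the following sketch shows how two of the six models are requested; the variable names y1, y2, x1, and x2 are hypothetical placeholders.

. * DF model: one factor with two AR lags and no autocorrelated disturbances
. dfactor (D.(y1 y2) = , noconstant) (f = , ar(1/2))
. * SUR model: no factors and no AR terms, only exogenous covariates
. dfactor (y1 y2 = x1 x2)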

Some examples
Example 1: Dynamic-factor model
Stock and Watson (1989, 1991) wrote a simple macroeconomic model as a DF model, estimated the
parameters by ML, and extracted an economic indicator. In this example, we estimate the parameters
of a DF model. In [TS] dfactor postestimation, we extend this example and extract an economic
indicator for the differenced series.
We have data on an industrial-production index, ipman; real disposable income, income; an
aggregate weekly hours index, hours; and aggregate unemployment, unemp. We believe that these
variables are first-difference stationary. We model their first-differences as linear functions of an
unobserved factor that follows a second-order autoregressive process.



. use http://www.stata-press.com/data/r14/dfex
(St. Louis Fed (FRED) macro data)
. dfactor (D.(ipman income hours unemp) = , noconstant) (f = , ar(1/2))
searching for initial values ....................
(setting technique to bhhh)
Iteration 0:   log likelihood = -675.19823
Iteration 1:   log likelihood = -666.74344
(output omitted )
Refining estimates:
Iteration 0:   log likelihood = -662.09507
Iteration 1:   log likelihood = -662.09507
Dynamic-factor model
Sample: 1972m2 - 2008m11                          Number of obs     =       442
                                                  Wald chi2(6)      =    751.95
Log likelihood = -662.09507                       Prob > chi2       =    0.0000

                                  OIM
                     Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

f
          f
        L1.       .2651932   .0568663     4.66   0.000     .1537372    .3766491
        L2.       .4820398   .0624635     7.72   0.000     .3596136     .604466

D.ipman
          f       .3502249   .0287389    12.19   0.000     .2938976    .4065522

D.income
          f       .0746338   .0217319     3.43   0.001     .0320401    .1172276

D.hours
          f       .2177469   .0186769    11.66   0.000     .1811407     .254353

D.unemp
          f      -.0676016   .0071022    -9.52   0.000    -.0815217   -.0536816

 var(De.ipman)    .1383158   .0167086     8.28   0.000     .1055675    .1710641
var(De.income)    .2773808   .0188302    14.73   0.000     .2404743    .3142873
 var(De.hours)    .0911446   .0080847    11.27   0.000     .0752988    .1069903
 var(De.unemp)    .0237232   .0017932    13.23   0.000     .0202086    .0272378

Note: Tests of variances against zero are one sided, and the two-sided
confidence intervals are truncated at zero.

For a discussion of the atypical iteration log, see example 1 in [TS] sspace.
The header in the output describes the estimation sample, reports the log-likelihood function at the
maximum, and gives the results of a Wald test against the null hypothesis that the coefficients on the
independent variables, the factors, and the autoregressive components are all zero. In this example,
the null hypothesis that all parameters except for the variance parameters are zero is rejected at all
conventional levels.
The results in the estimation table indicate that the unobserved factor is quite persistent and that
it is a significant predictor for each of the observed variables.


dfactor writes the DF model as a state-space model and uses the same methods as sspace to
estimate the parameters. Example 5 in [TS] sspace writes the model considered here in state-space
form and uses sspace to estimate the parameters.

Technical note
The signs of the coefficients on the unobserved factors are not identified. They are not identified
because we can multiply the unobserved factors and the coefficients on the unobserved factors by
negative one without changing the log likelihood or any of the model predictions.
Altering either the starting values for the maximization process, the maximization technique()
used, or the platform on which the command is run can cause the signs of the estimated coefficients
on the unobserved factors to change.
Changes in the signs of the estimated coefficients on the unobserved factors do not alter the
implications of the model or the model predictions.
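As a quick illustration (a sketch, not required usage), refitting the model from the previous example with a different maximization technique may flip the signs of the factor loadings while leaving the log likelihood and the model predictions unchanged:

. dfactor (D.(ipman income hours unemp) = , noconstant) (f = , ar(1/2)),
>     technique(nr)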

Example 2: Dynamic-factor model with covariates


Here we extend the previous example by allowing the errors in the equations for the observables to
be autocorrelated. This extension yields a constrained VAR model with an unobserved autocorrelated
factor.
We estimate the parameters by typing



. dfactor (D.(ipman income hours unemp) = , noconstant ar(1)) (f = , ar(1/2))
searching for initial values ..............
(setting technique to bhhh)
Iteration 0:   log likelihood = -659.68789
Iteration 1:   log likelihood = -631.6043
(output omitted )
Refining estimates:
Iteration 0:   log likelihood = -610.28846
Iteration 1:   log likelihood = -610.28846
Dynamic-factor model
Sample: 1972m2 - 2008m11                          Number of obs     =       442
                                                  Wald chi2(10)     =    990.91
Log likelihood = -610.28846                       Prob > chi2       =    0.0000

                                  OIM
                     Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

f
          f
        L1.       .4058457   .0906183     4.48   0.000     .2282371    .5834544
        L2.       .3663499   .0849584     4.31   0.000     .1998344    .5328654

De.ipman
    e.ipman
        LD.      -.2772149    .068808    -4.03   0.000    -.4120761   -.1423538

De.income
   e.income
        LD.      -.2213824   .0470578    -4.70   0.000    -.3136141   -.1291508

De.hours
    e.hours
        LD.      -.3969317   .0504256    -7.87   0.000     -.495764   -.2980994

De.unemp
    e.unemp
        LD.      -.1736835   .0532071    -3.26   0.001    -.2779675   -.0693995

D.ipman
          f       .3214972    .027982    11.49   0.000     .2666535    .3763408

D.income
          f       .0760412   .0173844     4.37   0.000     .0419684     .110114

D.hours
          f       .1933165   .0172969    11.18   0.000     .1594151    .2272179

D.unemp
          f      -.0711994   .0066553   -10.70   0.000    -.0842435   -.0581553

 var(De.ipman)    .1387909   .0154558     8.98   0.000     .1084981    .1690837
var(De.income)    .2636239   .0179043    14.72   0.000     .2285322    .2987157
 var(De.hours)    .0822919   .0071096    11.57   0.000     .0683574    .0962265
 var(De.unemp)    .0218056   .0016658    13.09   0.000     .0185407    .0250704

Note: Tests of variances against zero are one sided, and the two-sided
confidence intervals are truncated at zero.


The autoregressive (AR) terms are displayed in error notation. e.varname stands for the error in
the equation for varname. The estimate of the pth AR term from y1 on y2 is reported as Lpe.y1 in
equation e.y2. In the above output, the estimated first-order AR term of D.ipman on D.ipman is
-0.277 and is labeled as LDe.ipman in equation De.ipman.

The previous two examples illustrate how to use dfactor to estimate the parameters of DF models.
Although the previous example indicates that the more general DFAR model fits the data well, we use
these data to illustrate how to estimate the parameters of more restrictive models.

Example 3: A VAR with constrained error variance


In this example, we use dfactor to estimate the parameters of a SUR model with constraints on the
error-covariance matrix. The model is also a constrained VAR with constraints on the error-covariance
matrix, because we include the lags of two dependent variables as exogenous variables to model the
dynamic structure of the data. Previous exploratory work suggested that we should drop the lag of
D.unemp from the model.



. constraint 1 [cov(De.unemp,De.income)]_cons = 0
. dfactor (D.(ipman income unemp) = LD.(ipman income), noconstant
> covstructure(unstructured)), constraints(1)
searching for initial values ............
(setting technique to bhhh)
Iteration 0:   log likelihood = -569.34353
Iteration 1:   log likelihood = -548.7669
(output omitted )
Refining estimates:
Iteration 0:   log likelihood = -535.12973
Iteration 1:   log likelihood = -535.12973
Dynamic-factor model
Sample: 1972m3 - 2008m11                          Number of obs     =       441
                                                  Wald chi2(6)      =     88.32
Log likelihood = -535.12973                       Prob > chi2       =    0.0000
 ( 1)  [cov(De.income,De.unemp)]_cons = 0

                                  OIM
                     Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

D.ipman
      ipman
        LD.        .206276   .0471654     4.37   0.000     .1138335    .2987185
     income
        LD.       .1867384   .0512139     3.65   0.000      .086361    .2871158

D.income
      ipman
        LD.       .1043733   .0434048     2.40   0.016     .0193015    .1894451
     income
        LD.      -.1957893   .0471305    -4.15   0.000    -.2881634   -.1034153

D.unemp
      ipman
        LD.      -.0865823   .0140747    -6.15   0.000    -.1141681   -.0589964
     income
        LD.      -.0200749   .0152828    -1.31   0.189    -.0500285    .0098788

 var(De.ipman)    .3243902   .0218533    14.84   0.000     .2815584    .3672219
cov(De.ipman,
    De.income)    .0445794    .013696     3.25   0.001     .0177358     .071423
cov(De.ipman,
     De.unemp)   -.0298076   .0047755    -6.24   0.000    -.0391674   -.0204478
var(De.income)    .2747234   .0185008    14.85   0.000     .2384624    .3109844
cov(De.income,
     De.unemp)           0  (constrained)
 var(De.unemp)    .0288866   .0019453    14.85   0.000     .0250738    .0326994

Note: Tests of variances against zero are one sided, and the two-sided
confidence intervals are truncated at zero.

The output indicates that the model fits well, except that the lag of first-differenced income is not
a significant predictor of first-differenced unemployment.


Technical note
The previous example shows how to use dfactor to estimate the parameters of a SUR model
with constraints on the error-covariance matrix. Neither sureg nor var allows for constraints on the
error-covariance matrix. Without the constraints on the error-covariance matrix and including the lag
of D.unemp,
. dfactor (D.(ipman income unemp) = LD.(ipman income unemp),
> noconstant covstructure(unstructured))
(output omitted )
. var D.(ipman income unemp), lags(1) noconstant
(output omitted )

and
. sureg (D.ipman LD.(ipman income unemp), noconstant)
>
(D.income LD.(ipman income unemp), noconstant)
>
(D.unemp LD.(ipman income unemp), noconstant)
(output omitted )

produce the same estimates after allowing for small numerical differences.
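One way to check the equivalence (a minimal sketch under the setup above) is to save the coefficient vectors and compare them; the coefficient orderings differ across commands, but the corresponding entries agree up to small numerical differences.

. dfactor (D.(ipman income unemp) = LD.(ipman income unemp),
>     noconstant covstructure(unstructured))
. matrix bdf = e(b)
. var D.(ipman income unemp), lags(1) noconstant
. matrix bvar = e(b)
. matrix list bdf
. matrix list bvar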

Example 4: A lower-triangular VAR with constrained error variance


The previous example estimated the parameters of a constrained VAR model with a constraint on
the error-covariance matrix. This example makes two refinements on the previous one: we use an
unconditional estimator instead of a conditional estimator, and we constrain the AR parameters to
have a lower triangular structure. (See the next technical note for a discussion of conditional and
unconditional estimators.) The results are



. constraint 1 [cov(De.unemp,De.income)]_cons = 0
. dfactor (D.(ipman income unemp) = , ar(1) arstructure(ltriangular) noconstant
> covstructure(unstructured)), constraints(1)
searching for initial values ............
(setting technique to bhhh)
Iteration 0:   log likelihood = -543.89917
Iteration 1:   log likelihood = -541.47792
(output omitted )
Refining estimates:
Iteration 0:   log likelihood = -540.36159
Iteration 1:   log likelihood = -540.36159
Dynamic-factor model
Sample: 1972m2 - 2008m11                          Number of obs     =       442
                                                  Wald chi2(6)      =     75.48
Log likelihood = -540.36159                       Prob > chi2       =    0.0000
 ( 1)  [cov(De.income,De.unemp)]_cons = 0

                                  OIM
                     Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

De.ipman
    e.ipman
        LD.       .2297308   .0473147     4.86   0.000     .1369957    .3224659

De.income
    e.ipman
        LD.       .1075441   .0433357     2.48   0.013     .0226077    .1924805
   e.income
        LD.      -.2209485    .047116    -4.69   0.000    -.3132943   -.1286028

De.unemp
    e.ipman
        LD.      -.0975759   .0151301    -6.45   0.000    -.1272304   -.0679215
   e.income
        LD.      -.0000467   .0147848    -0.00   0.997    -.0290244    .0289309
    e.unemp
        LD.      -.0795348   .0482213    -1.65   0.099    -.1740469    .0149773

 var(De.ipman)    .3335286   .0224282    14.87   0.000     .2895702     .377487
cov(De.ipman,
    De.income)    .0457804   .0139123     3.29   0.001     .0185127    .0730481
cov(De.ipman,
     De.unemp)   -.0329438   .0051423    -6.41   0.000    -.0430226    -.022865
var(De.income)    .2743375   .0184657    14.86   0.000     .2381454    .3105296
cov(De.income,
     De.unemp)           0  (constrained)
 var(De.unemp)    .0292088     .00199    14.68   0.000     .0253083    .0331092

Note: Tests of variances against zero are one sided, and the two-sided
confidence intervals are truncated at zero.

The estimated AR terms of D.income and D.unemp on D.unemp are -0.000047 and -0.079535,
and they are not significant at the 1% or 5% levels. The estimated AR term of D.ipman on D.income
is 0.107544 and is significant at the 5% level but not at the 1% level.


Technical note
We obtained the unconditional estimator in example 4 by specifying the ar() option instead of
including the lags of the endogenous variables as exogenous variables, as we did in example 3. The
unconditional estimator has an additional observation and is more efficient. This change is analogous
to estimating an AR coefficient by arima instead of using regress on the lagged endogenous variable.
For example, to obtain the unconditional estimator in a univariate model, typing
. arima D.ipman, ar(1) noconstant technique(nr)
(output omitted )

will produce the same estimated AR coefficient as


. dfactor (D.ipman, ar(1) noconstant)
(output omitted )

We obtain the conditional estimator by typing either


. regress D.ipman LD.ipman, noconstant
(output omitted )

or
. dfactor (D.ipman = LD.ipman, noconstant)
(output omitted )

Example 5: A static factor model


In this example, we fit regional unemployment data to an SF model. We have data on the
unemployment levels for the four regions in the U.S. census: west for the West, south for the
South, ne for the Northeast, and midwest for the Midwest. We treat the variables as first-difference
stationary and model the first-differences of these variables. Using dfactor yields



. use http://www.stata-press.com/data/r14/urate
(Monthly unemployment rates in US Census regions)
. dfactor (D.(west south ne midwest) = , noconstant ) (z = )
searching for initial values .............
(setting technique to bhhh)
Iteration 0:   log likelihood = 872.71993
Iteration 1:   log likelihood = 873.04786
(output omitted )
Refining estimates:
Iteration 0:   log likelihood = 873.0755
Iteration 1:   log likelihood = 873.0755
Dynamic-factor model
Sample: 1990m2 - 2008m12                          Number of obs     =       227
                                                  Wald chi2(4)      =    342.56
Log likelihood = 873.0755                         Prob > chi2       =    0.0000

                                  OIM
                     Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

D.west
          z       .0978324   .0065644    14.90   0.000     .0849664    .1106983

D.south
          z       .0859494   .0061762    13.92   0.000     .0738442    .0980546

D.ne
          z       .0918607   .0072814    12.62   0.000     .0775893     .106132

D.midwest
          z       .0861102   .0074652    11.53   0.000     .0714787    .1007417

  var(De.west)    .0036887   .0005834     6.32   0.000     .0025453    .0048322
 var(De.south)    .0038902   .0005228     7.44   0.000     .0028656    .0049149
    var(De.ne)    .0064074   .0007558     8.48   0.000     .0049261    .0078887
var(De.midw~t)    .0074749   .0008271     9.04   0.000     .0058538     .009096

Note: Tests of variances against zero are one sided, and the two-sided
confidence intervals are truncated at zero.

The estimates indicate that we could reasonably suppose that the unobserved factor has the same
effect on the changes in unemployment in all four regions. The output below shows that we cannot
reject the null hypothesis that these coefficients are the same.
. test [D.west]z = [D.south]z = [D.ne]z = [D.midwest]z
 ( 1)  [D.west]z - [D.south]z = 0
 ( 2)  [D.west]z - [D.ne]z = 0
 ( 3)  [D.west]z - [D.midwest]z = 0
           chi2(  3) =     3.58
         Prob > chi2 =   0.3109

Example 6: A static factor with constraints


In this example, we impose the constraint that the unobserved factor has the same impact on
changes in unemployment in all four regions. This constraint was suggested by the results of the
previous example. The previous example did not allow for any dynamics in the variables, a problem
we alleviate by allowing the disturbances in the equation for each observable to follow an AR(1)
process.



. constraint 2 [D.west]z = [D.south]z
. constraint 3 [D.west]z = [D.ne]z
. constraint 4 [D.west]z = [D.midwest]z
. dfactor (D.(west south ne midwest) = , noconstant ar(1)) (z = ),
> constraints(2/4)
searching for initial values .............
(setting technique to bhhh)
Iteration 0:   log likelihood = 827.97004
Iteration 1:   log likelihood = 874.74471
(output omitted )
Refining estimates:
Iteration 0:   log likelihood = 880.97488
Iteration 1:   log likelihood = 880.97488
Dynamic-factor model
Sample: 1990m2 - 2008m12                          Number of obs     =       227
                                                  Wald chi2(5)      =    363.34
Log likelihood = 880.97488                        Prob > chi2       =    0.0000
 ( 1)  [D.west]z - [D.south]z = 0
 ( 2)  [D.west]z - [D.ne]z = 0
 ( 3)  [D.west]z - [D.midwest]z = 0

                                  OIM
                     Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

De.west
     e.west
        LD.       .1297198   .0992663     1.31   0.191    -.0648386    .3242781

De.south
    e.south
        LD.      -.2829014   .0909205    -3.11   0.002    -.4611023   -.1047004

De.ne
       e.ne
        LD.       .2866958   .0847851     3.38   0.001       .12052    .4528715

De.midwest
  e.midwest
        LD.       .0049427   .0782188     0.06   0.950    -.1483634    .1582488

D.west
          z       .0904724   .0049326    18.34   0.000     .0808047    .1001401

D.south
          z       .0904724   .0049326    18.34   0.000     .0808047    .1001401

D.ne
          z       .0904724   .0049326    18.34   0.000     .0808047    .1001401

D.midwest
          z       .0904724   .0049326    18.34   0.000     .0808047    .1001401

  var(De.west)    .0038959   .0005111     7.62   0.000     .0028941    .0048977
 var(De.south)    .0035518   .0005097     6.97   0.000     .0025528    .0045507
    var(De.ne)    .0058173   .0006983     8.33   0.000     .0044488    .0071859
var(De.midw~t)    .0075444   .0008268     9.12   0.000     .0059239     .009165

Note: Tests of variances against zero are one sided, and the two-sided
confidence intervals are truncated at zero.


The results indicate that the model might not fit well. Two of the four AR coefficients are statistically
insignificant, while the two significant coefficients have opposite signs and sum to about zero. We
suspect that a DF model might fit these data better than an SF model with autocorrelated disturbances.

Stored results
dfactor stores the following in e():
Scalars
  e(N)                   number of observations
  e(k)                   number of parameters
  e(k_aux)               number of auxiliary parameters
  e(k_eq)                number of equations in e(b)
  e(k_eq_model)          number of equations in overall model test
  e(k_dv)                number of dependent variables
  e(k_obser)             number of observation equations
  e(k_factor)            number of factors specified
  e(o_ar_max)            number of AR terms for the disturbances
  e(f_ar_max)            number of AR terms for the factors
  e(df_m)                model degrees of freedom
  e(ll)                  log likelihood
  e(chi2)                chi-squared
  e(p)                   significance
  e(tmin)                minimum time in sample
  e(tmax)                maximum time in sample
  e(stationary)          1 if the estimated parameters indicate a stationary model, 0 otherwise
  e(rank)                rank of VCE
  e(ic)                  number of iterations
  e(rc)                  return code
  e(converged)           1 if converged, 0 otherwise

Macros
  e(cmd)                 dfactor
  e(cmdline)             command as typed
  e(depvar)              unoperated names of dependent variables in observation equations
  e(obser_deps)          names of dependent variables in observation equations
  e(covariates)          list of covariates
  e(indeps)              independent variables
  e(factor_deps)         names of unobserved factors in model
  e(tvar)                variable denoting time within groups
  e(eqnames)             names of equations
  e(model)               type of dynamic-factor model specified
  e(title)               title in estimation output
  e(tmins)               formatted minimum time
  e(tmaxs)               formatted maximum time
  e(o_ar)                list of AR terms for disturbances
  e(f_ar)                list of AR terms for factors
  e(observ_cov)          structure of observation-error covariance matrix
  e(factor_cov)          structure of factor-error covariance matrix
  e(chi2type)            Wald; type of model chi-squared test
  e(vce)                 vcetype specified in vce()
  e(vcetype)             title used to label Std. Err.
  e(opt)                 type of optimization
  e(method)              likelihood method
  e(initial_values)      type of initial values
  e(technique)           maximization technique
  e(tech_steps)          iterations taken in maximization technique(s)
  e(datasignature)       the checksum
  e(datasignaturevars)   variables used in calculation of checksum
  e(properties)          b V
  e(estat_cmd)           program used to implement estat
  e(predict)             program used to implement predict
  e(marginsok)           predictions allowed by margins
  e(marginsnotok)        predictions disallowed by margins
  e(asbalanced)          factor variables fvset as asbalanced
  e(asobserved)          factor variables fvset as asobserved

Matrices
  e(b)                   coefficient vector
  e(Cns)                 constraints matrix
  e(ilog)                iteration log (up to 20 iterations)
  e(gradient)            gradient vector
  e(V)                   variance-covariance matrix of the estimators
  e(V_modelbased)        model-based variance

Functions
  e(sample)              marks estimation sample

Methods and formulas


dfactor writes the specified model as a state-space model and uses sspace to estimate the
parameters by maximum likelihood. See Lütkepohl (2005, 619–621) for how to write the DF model
in state-space form. See [TS] sspace for the technical details.

References
De Jong, P. 1988. The likelihood for a state space model. Biometrika 75: 165–169.
De Jong, P. 1991. The diffuse Kalman filter. Annals of Statistics 19: 1073–1083.
Geweke, J. 1977. The dynamic factor analysis of economic time series models. In Latent Variables in Socioeconomic Models, ed. D. J. Aigner and A. S. Goldberger, 365–383. Amsterdam: North-Holland.
Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Sargent, T. J., and C. A. Sims. 1977. Business cycle modeling without pretending to have too much a priori economic theory. In New Methods in Business Cycle Research: Proceedings from a Conference, ed. C. A. Sims, 45–109. Minneapolis: Federal Reserve Bank of Minneapolis.
Stock, J. H., and M. W. Watson. 1989. New indexes of coincident and leading economic indicators. In NBER Macroeconomics Annual 1989, ed. O. J. Blanchard and S. Fischer, vol. 4, 351–394. Cambridge, MA: MIT Press.
Stock, J. H., and M. W. Watson. 1991. A probability model of the coincident economic indicators. In Leading Economic Indicators: New Approaches and Forecasting Records, ed. K. Lahiri and G. H. Moore, 63–89. Cambridge: Cambridge University Press.
Watson, M. W., and R. F. Engle. 1983. Alternative algorithms for the estimation of dynamic factor, MIMIC and varying coefficient regression models. Journal of Econometrics 23: 385–400.

Also see
[TS] dfactor postestimation Postestimation tools for dfactor
[TS] arima ARIMA, ARMAX, and other dynamic regression models
[TS] sspace State-space models
[TS] tsset Declare data to be time-series data
[TS] var Vector autoregressive models
[R] regress Linear regression
[R] sureg Zellner's seemingly unrelated regression
[U] 20 Estimation and postestimation commands

Title
dfactor postestimation Postestimation tools for dfactor
Postestimation commands    predict    Remarks and examples    Methods and formulas    Also see

Postestimation commands
The following standard postestimation commands are available after dfactor:
  Command            Description
  estat ic           Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
  estat summarize    summary statistics for the estimation sample
  estat vce          variance-covariance matrix of the estimators (VCE)
  estimates          cataloging estimation results
  forecast           dynamic forecasts and simulations
  lincom             point estimates, standard errors, testing, and inference for linear
                     combinations of coefficients
  lrtest             likelihood-ratio test
  nlcom              point estimates, standard errors, testing, and inference for nonlinear
                     combinations of coefficients
  predict            predictions, residuals, influence statistics, and other diagnostic measures
  predictnl          point estimates, standard errors, testing, and inference for generalized
                     predictions
  test               Wald tests of simple and composite linear hypotheses
  testnl             Wald tests of nonlinear hypotheses


predict
Description for predict
predict creates a new variable containing predictions such as expected values, unobserved
factors, autocorrelated disturbances, and innovations. The root mean squared error is available for
all predictions. All predictions are also available as static one-step-ahead predictions or as dynamic
multistep predictions, and you can control when dynamic predictions begin.

Menu for predict

Statistics > Postestimation

Syntax for predict

        predict [type] {stub* | newvarlist} [if] [in] [, statistic options]

  statistic       Description
  Main
    y             dependent variable, which is xbf + residuals
    xb            linear predictions using the observable independent variables
    xbf           linear predictions using the observable independent variables plus the
                  factor contributions
    factors       unobserved factor variables
    residuals     autocorrelated disturbances
    innovations   innovations, the observed dependent variable minus the predicted y

  These statistics are available both in and out of sample; type predict ... if e(sample) ...
  if wanted only for the estimation sample.

  options                       Description
  Options
    equation(eqnames)           specify name(s) of equation(s) for which predictions are to
                                be made
    rmse(stub* | newvarlist)    put estimated root mean squared errors of predicted objects
                                in new variables
    dynamic(time_constant)      begin dynamic forecast at specified time
  Advanced
    smethod(method)             method for predicting unobserved states

  method     Description
  onestep    predict using past information
  smooth     predict using all sample information
  filter     predict using past and contemporaneous information


Options for predict

The mathematical notation used in this section is defined in Description of [TS] dfactor.

Main

y, xb, xbf, factors, residuals, and innovations specify the statistic to be predicted.
    y, the default, predicts the dependent variables. The predictions include the contributions of the
    unobserved factors, the linear predictions using the observable independent variables, and any
    autocorrelation: $\hat{P}\hat{f}_t + \hat{Q}x_t + \hat{u}_t$.
    xb calculates the linear prediction using the observable independent variables: $\hat{Q}x_t$.
    xbf calculates the contributions of the unobserved factors plus the linear prediction using the
    observable independent variables: $\hat{P}\hat{f}_t + \hat{Q}x_t$.
    factors estimates the unobserved factors: $\hat{f}_t = \hat{R}w_t + \hat{A}_1\hat{f}_{t-1} + \hat{A}_2\hat{f}_{t-2} + \dots + \hat{A}_p\hat{f}_{t-p}$.
    residuals calculates the autocorrelated residuals: $\hat{u}_t = \hat{C}_1\hat{u}_{t-1} + \hat{C}_2\hat{u}_{t-2} + \dots + \hat{C}_q\hat{u}_{t-q}$.
    innovations calculates the innovations: $\hat{\epsilon}_t = y_t - \hat{P}\hat{f}_t - \hat{Q}x_t - \hat{u}_t$.
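As a quick check of these definitions (a sketch; the variable names dep, eps, and check are hypothetical), the innovations for one equation can be reproduced from the y prediction:

. predict dep, y equation(D.ipman)
. predict eps, innovations equation(D.ipman)
. generate double check = D.ipman - dep
. * check and eps agree up to rounding, because the innovations are the
. * observed dependent variable minus the predicted y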


Options

equation(eqnames) specifies the equation(s) for which the predictions are to be calculated.
You specify equation names, such as equation(income consumption) or equation(factor1
factor2), to identify the equations. For the factors statistic, you must specify names of equations
for factors; for all other statistics, you must specify names of equations for observable variables.
If you do not specify equation() and do not specify stub*, the results are the same as if you
had specified the name of the first equation for the predicted statistic.
equation() may not be specified with stub*.
rmse(stub* | newvarlist) puts the root mean squared errors of the predicted objects into the specified
new variables. The root mean squared errors measure the variances due to the disturbances but do
not account for estimation error.
dynamic(time constant) specifies when predict starts producing dynamic forecasts. The specified
time constant must be in the scale of the time variable specified in tsset, and the time constant
must be inside a sample for which observations on the dependent variables are available. For
example, dynamic(tq(2008q4)) causes dynamic predictions to begin in the fourth quarter of
2008, assuming that your time variable is quarterly; see [D] datetime. If the model contains
exogenous variables, they must be present for the whole predicted sample. dynamic() may not
be specified with xb, xbf, innovations, smethod(filter), or smethod(smooth).

Advanced

smethod(method) specifies the method used to predict the unobserved states in the model. smethod()
may not be specified with xb.
smethod(onestep), the default, causes predict to use previous information on the dependent
variables. The Kalman filter is performed on previous periods, but only the one-step predictions
are made for the current period.
smethod(smooth) causes predict to estimate the states at each time period using all the sample
data by the Kalman smoother.


smethod(filter) causes predict to estimate the states at each time period using previous
and contemporaneous data by the Kalman filter. The Kalman filter is performed on previous
periods and the current period. smethod(filter) may be specified only with factors and
residuals.

Remarks and examples


We assume that you have already read [TS] dfactor. In this entry, we illustrate some of the features
of predict after using dfactor.
dfactor writes the specified model as a state-space model and estimates the parameters by
maximum likelihood. The unobserved factors and the residuals are states in the state-space form of
the model, and they are estimated by the Kalman filter or the Kalman smoother. The smethod()
option controls how these states are estimated.
The Kalman filter or Kalman smoother is run over the specified sample. Changing the sample can
alter the predicted value for a given observation, because the Kalman filter and Kalman smoother are
recursive algorithms.
After estimating the parameters of a dynamic-factor model, there are many quantities of potential
interest. Here we will discuss several of these statistics and illustrate how to use predict to compute
them.
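For instance, after fitting a model with an unobserved factor f, the three state-prediction methods can be placed side by side (a sketch; the variable names f1, f2, and f3 are hypothetical):

. predict f1 if e(sample), factors smethod(onestep)
. predict f2 if e(sample), factors smethod(filter)
. predict f3 if e(sample), factors smethod(smooth)
. summarize f1 f2 f3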

Example 1: One-step, out-of-sample forecasts


Let's begin by estimating the parameters of the dynamic-factor model considered in example 2 in
[TS] dfactor.



. use http://www.stata-press.com/data/r14/dfex
(St. Louis Fed (FRED) macro data)
. dfactor (D.(ipman income hours unemp) = , noconstant ar(1)) (f = , ar(1/2))
(output omitted )

While several of the six statistics computed by predict might be of interest, we will look only at
a few of these statistics for D.ipman. We begin by obtaining one-step predictions in the estimation
sample and a six-month dynamic forecast for D.ipman. The graph of the in-sample predictions
indicates that our model accounts only for a small fraction of the variability in D.ipman.
. tsappend, add(6)
. predict Dipman_f, dynamic(tm(2008m12)) equation(D.ipman)
(option y assumed; fitted values)

. tsline D.ipman Dipman_f if month<=tm(2008m11), lcolor(gs13) xtitle("")
> legend(rows(2))

(figure omitted: time-series plot, 1970m1 to 2010m1, of Dipman and the y prediction, Dipman, dynamic(tm(2008m12)))

Graphing the last year of the sample and the six-month out-of-sample forecast yields

. tsline D.ipman Dipman_f if month>=tm(2008m1), xtitle("") legend(rows(2))

(figure omitted: time-series plot, 2008m1 to 2009m4, of Dipman and the y prediction, Dipman, dynamic(tm(2008m12)))

Example 2: Estimating an unobserved factor


Another common task is to estimate an unobserved factor. We can estimate the unobserved factor
at each time period by using only previous information (the smethod(onestep) option), previous
and contemporaneous information (the smethod(filter) option), or all the sample information (the
smethod(smooth) option). We are interested in the one-step predictive power of the unobserved
factor, so we use the default, smethod(onestep).

. predict fac if e(sample), factor


. tsline D.ipman fac, lcolor(gs10) xtitle("") legend(rows(2))

(figure omitted: time-series plot, 1970m1 to 2010m1, of Dipman and factors, f, onestep)

Methods and formulas


dfactor estimates the parameters by writing the model in state-space form and using sspace.
Analogously, predict after dfactor uses the methods described in [TS] sspace postestimation. The
unobserved factors and the residuals are states in the state-space form of the model.
See Methods and formulas of [TS] sspace postestimation for how predictions are made after
estimating the parameters of a state-space model.

Also see
[TS] dfactor Dynamic-factor models
[TS] sspace State-space models
[TS] sspace postestimation Postestimation tools for sspace
[U] 20 Estimation and postestimation commands

Title
dfgls DF-GLS unit-root test
Description    Quick start    Menu    Syntax    Options    Remarks and examples    Stored results    Methods and formulas    Acknowledgments    References    Also see

Description
dfgls performs a modified Dickey–Fuller t test for a unit root in which the series has been
transformed by a generalized least-squares regression.

Quick start
Modified Dickey–Fuller unit-root test of GLS-transformed series y1 using tsset data
dfgls y1
As above, for series y2 that has no linear time trend
dfgls y2, notrend
As above, but with at most 2 lags
dfgls y2, notrend maxlag(2)

Menu
Statistics > Time series > Tests > DF-GLS test for a unit root

Syntax
dfgls varname [if] [in] [, options]

  options       Description
  Main
    maxlag(#)   use # as the highest lag order for Dickey–Fuller GLS regressions
    notrend     series is stationary around a mean instead of around a linear time trend
    ers         present interpolated critical values from Elliott, Rothenberg, and Stock (1996)

You must tsset your data before using dfgls; see [TS] tsset.
varname may contain time-series operators; see [U] 11.4.4 Time-series varlists.

Options

Main

maxlag(#) sets the value of k, the highest lag order for the first-differenced, detrended variable
    in the Dickey–Fuller regression. By default, dfgls sets k according to the method proposed by
    Schwert (1989); that is, dfgls sets $k_{\max} = \mathrm{floor}[12\{(T+1)/100\}^{0.25}]$.
notrend specifies that the alternative hypothesis be that the series is stationary around a mean instead
    of around a linear time trend. By default, a trend is included.
ers specifies that dfgls should present interpolated critical values from tables presented by Elliott,
    Rothenberg, and Stock (1996), which they obtained from simulations. See Critical values under
    Methods and formulas for details.
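To see what the default amounts to (a minimal sketch, assuming ln_inv from the example below is the series of interest and T is taken as the number of nonmissing observations):

. quietly count if !missing(ln_inv)
. display floor(12*((r(N)+1)/100)^0.25)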

Remarks and examples

dfgls tests for a unit root in a time series. It performs the modified Dickey–Fuller t test (known
as the DF-GLS test) proposed by Elliott, Rothenberg, and Stock (1996). Essentially, the test is an
augmented Dickey–Fuller test, similar to the test performed by Stata's dfuller command, except
that the time series is transformed via a generalized least squares (GLS) regression before performing
the test. Elliott, Rothenberg, and Stock and later studies have shown that this test has significantly
greater power than the previous versions of the augmented Dickey–Fuller test.

dfgls performs the DF-GLS test for the series of models that include 1 to k lags of the
first-differenced, detrended variable, where k can be set by the user or by the method described in
Schwert (1989). Stock and Watson (2015, 651–655) provide an excellent discussion of the approach.

As discussed in [TS] dfuller, the augmented Dickey–Fuller test involves fitting a regression of the form

$$\Delta y_t = \alpha + \beta y_{t-1} + \delta t + \zeta_1 \Delta y_{t-1} + \zeta_2 \Delta y_{t-2} + \dots + \zeta_k \Delta y_{t-k} + \epsilon_t$$

and then testing the null hypothesis $H_0\colon \beta = 0$. The DF-GLS test is performed analogously but on
GLS-detrended data. The null hypothesis of the test is that $y_t$ is a random walk, possibly with drift.
There are two possible alternative hypotheses: $y_t$ is stationary about a linear time trend or $y_t$ is
stationary with a possibly nonzero mean but with no linear time trend. The default is to use the
former. To specify the latter alternative, use the notrend option.


Example 1
Here we use the German macroeconomic dataset and test whether the natural log of investment
exhibits a unit root. We use the default options with dfgls.
. use http://www.stata-press.com/data/r14/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. dfgls ln_inv
DF-GLS for ln_inv                                     Number of obs =    80
Maxlag = 11 chosen by Schwert criterion

               DF-GLS tau     1% Critical    5% Critical    10% Critical
  [lags]     Test Statistic      Value          Value           Value
    11           -2.925          -3.610         -2.763          -2.489
    10           -2.671          -3.610         -2.798          -2.523
     9           -2.766          -3.610         -2.832          -2.555
     8           -3.259          -3.610         -2.865          -2.587
     7           -3.536          -3.610         -2.898          -2.617
     6           -3.115          -3.610         -2.929          -2.646
     5           -3.054          -3.610         -2.958          -2.674
     4           -3.016          -3.610         -2.986          -2.699
     3           -2.071          -3.610         -3.012          -2.723
     2           -1.675          -3.610         -3.035          -2.744
     1           -1.752          -3.610         -3.055          -2.762

Opt Lag (Ng-Perron seq t) =  7 with RMSE  .0388771
Min SC   = -6.169137 at lag  4 with RMSE  .0398949
Min MAIC = -6.136371 at lag  1 with RMSE  .0440319

The null hypothesis of a unit root is not rejected for lags 1–3, it is rejected at the 10% level for
lags 9–10, and it is rejected at the 5% level for lags 4–8 and 11. For comparison, we also test for
a unit root in the log of investment by using dfuller with two different lag specifications. We need to
use the trend option with dfuller because it is not included by default.


. dfuller ln_inv, lag(4) trend
Augmented Dickey-Fuller test for unit root            Number of obs =    87

                   Interpolated Dickey-Fuller
           Test      1% Critical   5% Critical   10% Critical
         Statistic      Value         Value          Value
  Z(t)    -3.133        -4.069        -3.463         -3.158

MacKinnon approximate p-value for Z(t) = 0.0987

. dfuller ln_inv, lag(7) trend
Augmented Dickey-Fuller test for unit root            Number of obs =    84

                   Interpolated Dickey-Fuller
           Test      1% Critical   5% Critical   10% Critical
         Statistic      Value         Value          Value
  Z(t)    -3.994        -4.075        -3.466         -3.160

MacKinnon approximate p-value for Z(t) = 0.0090

The critical values and the test statistic produced by dfuller with 4 lags do not support rejecting
the null hypothesis, although the MacKinnon approximate p-value is less than 0.1. With 7 lags, the
critical values and the test statistic reject the null hypothesis at the 5% level, and the MacKinnon
approximate p-value is less than 0.01.
That the dfuller results are not as strong as those produced by dfgls is not surprising because
the DF-GLS test with a trend has been shown to be more powerful than the standard augmented
Dickey–Fuller test.

Stored results
If maxlag(0) is specified, dfgls stores the following in r():

Scalars
  r(rmse0)     RMSE
  r(dft0)      DF-GLS statistic

Otherwise, dfgls stores the following in r():

Scalars
  r(maxlag)    highest lag order k
  r(N)         number of observations
  r(sclag)     lag chosen by Schwarz criterion
  r(maiclag)   lag chosen by modified AIC method
  r(optlag)    lag chosen by sequential-t method

Matrices
  r(results)   k, MAIC, SIC, RMSE, and DF-GLS statistics

Methods and formulas

dfgls tests for a unit root. There are two possible alternative hypotheses: $y_t$ is stationary around
a linear trend or $y_t$ is stationary with no linear time trend. Under the first alternative hypothesis, the
DF-GLS test is performed by first estimating the intercept and trend via GLS. The GLS estimation is
performed by generating the new variables, $\widetilde{y}_t$, $x_t$, and $z_t$, where

$$\widetilde{y}_1 = y_1, \qquad \widetilde{y}_t = y_t - \alpha^* y_{t-1}, \quad t = 2, \dots, T$$
$$x_1 = 1, \qquad x_t = 1 - \alpha^*, \quad t = 2, \dots, T$$
$$z_1 = 1, \qquad z_t = t - \alpha^*(t-1)$$

and $\alpha^* = 1 - (13.5/T)$. An OLS regression is then estimated for the equation

$$\widetilde{y}_t = \delta_0 x_t + \delta_1 z_t + \epsilon_t$$

The OLS estimators $\hat{\delta}_0$ and $\hat{\delta}_1$ are then used to remove the trend from $y_t$; that is, we generate

$$y_t^* = y_t - (\hat{\delta}_0 + \hat{\delta}_1 t)$$

Finally, we perform an augmented Dickey–Fuller test on the transformed variable by fitting the OLS regression

$$\Delta y_t^* = \alpha + \beta y_{t-1}^* + \sum_{j=1}^{k} \zeta_j \Delta y_{t-j}^* + \epsilon_t \tag{1}$$

and then test the null hypothesis $H_0\colon \beta = 0$ by using tabulated critical values.

To perform the DF-GLS test under the second alternative hypothesis, we proceed as before but
define $\alpha^* = 1 - (7/T)$, eliminate $z$ from the GLS regression, compute $y^* = y_t - \hat{\delta}_0$, fit the augmented
Dickey–Fuller regression by using the newly transformed variable, and perform a test of the null
hypothesis that $\beta = 0$ by using the tabulated critical values.

dfgls reports the DF-GLS statistic and its critical values obtained from the regression in (1) for
$k \in \{1, 2, \dots, k_{\max}\}$. By default, dfgls sets $k_{\max} = \mathrm{floor}[12\{(T+1)/100\}^{0.25}]$ as proposed by
Schwert (1989), although you can override this choice with another value. The sample size available
with $k_{\max}$ lags is used in all the regressions. Because there are $k_{\max}$ lags of the first-differenced
series, $k_{\max} + 1$ observations are lost, leaving $T - k_{\max}$ observations. dfgls requires that the sample
of $T + 1$ observations on $y_t = (y_0, y_1, \dots, y_T)$ have no gaps.
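The detrending step can be reproduced by hand (a minimal sketch under the trend alternative; y is a hypothetical tsset series with observations t = 1, ..., T, no gaps, and _n indexing time):

. quietly count if !missing(y)
. scalar astar = 1 - 13.5/r(N)
. generate double ytilde = cond(_n==1, y, y - astar*L.y)
. generate double x = cond(_n==1, 1, 1 - astar)
. generate double z = cond(_n==1, 1, _n - astar*(_n-1))
. regress ytilde x z, noconstant
. generate double ystar = y - (_b[x] + _b[z]*_n)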
dfgls reports the results of three different methods for choosing which value of k to use. These
are method 1, the Ng–Perron sequential t; method 2, the minimum Schwarz information criterion
(SIC); and method 3, the Ng–Perron modified Akaike information criterion (MAIC). Although the SIC
has a long history in time-series modeling, the Ng–Perron sequential t was developed by Ng and
Perron (1995), and the MAIC was developed by Ng and Perron (2000).

The SIC can be calculated using either the log likelihood or the sum-of-squared errors from a
regression; dfgls uses the latter definition. Specifically, for each k,

$$\mathrm{SIC} = \ln(\widehat{\mathrm{rmse}}^2) + (k+1)\,\frac{\ln(T - k_{\max})}{T - k_{\max}}$$

where

$$\widehat{\mathrm{rmse}}^2 = \frac{1}{T - k_{\max}} \sum_{t=k_{\max}+1}^{T} \hat{e}_t^{\,2}$$

dfgls reports the value of the smallest SIC and the k that produced it.

Ng and Perron (1995) derived a sequential-t algorithm for choosing k:

  i. Set $n = 0$ and run the regression in method 2 with all $k_{\max} - n$ lags. If the coefficient on
     the $k_{\max}$th lag is significantly different from zero at level $\alpha$, choose $k = k_{\max}$. Otherwise,
     continue to ii.
  ii. If $n < k_{\max}$, set $n = n + 1$ and continue to iii. Otherwise, set $k = 0$ and stop.
  iii. Run the regression in method 2 with $k_{\max} - n$ lags. If the coefficient on the $(k_{\max} - n)$th
     lag is significantly different from zero at level $\alpha$, choose $k = k_{\max} - n$. Otherwise, return
     to ii.

Per Ng and Perron (1995), dfgls uses $\alpha = 10\%$. dfgls reports the k selected by this sequential-t
algorithm and the $\widehat{\mathrm{rmse}}$ from the regression.

Method 3 is based on choosing k to minimize the MAIC. The MAIC is calculated as

$$\mathrm{MAIC}(k) = \ln(\widehat{\mathrm{rmse}}^2) + \frac{2\{\tau(k) + k\}}{T - k_{\max}}$$

where

$$\tau(k) = \frac{1}{\widehat{\mathrm{rmse}}^2}\,\hat{\beta}_0^{\,2} \sum_{t=k_{\max}+1}^{T} \widetilde{y}_t^{\,2}$$

and $\widetilde{y}$ was defined previously.


Critical values

By default, dfgls uses the 5% and 10% critical values computed from the response surface
analysis of Cheung and Lai (1995). Because Cheung and Lai (1995) did not present results for the
1% case, the 1% critical values are always interpolated from the critical values presented by ERS.

ERS presented critical values, obtained from simulations, for the DF-GLS test with a linear trend
and showed that the critical values for the mean-only DF-GLS test were the same as those for the ADF
test. If dfgls is run with the ers option, dfgls will present interpolated critical values from these
tables. The method of interpolation is standard. For the trend case, below 50 observations and above
200 there is no interpolation; the values for 50 and ∞ are reported from the tables. For a value $N$
that lies between two values in the table, say, $N_1$ and $N_2$, with corresponding critical values $CV_1$
and $CV_2$, the critical value

$$CV = CV_1 + \frac{N - N_1}{N_2 - N_1}\,(CV_2 - CV_1)$$

is presented. The same method is used for the mean-only case, except that interpolation is possible
for values between 50 and 500.
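A small numeric sketch of the interpolation rule (the sample sizes and critical values below are hypothetical placeholders, not entries from the ERS tables):

. scalar N1 = 50
. scalar N2 = 100
. scalar CV1 = -3.77
. scalar CV2 = -3.58
. display CV1 + (87 - N1)/(N2 - N1)*(CV2 - CV1)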

154

dfgls DF-GLS unit-root test

Acknowledgments
We thank Christopher F. Baum of the Department of Economics at Boston College and author of
the Stata Press books An Introduction to Modern Econometrics Using Stata and An Introduction to
Stata Programming and Richard Sperling for a previous version of dfgls.

References
Cheung, Y.-W., and K. S. Lai. 1995. Lag order and critical values of a modified Dickey–Fuller test. Oxford Bulletin of Economics and Statistics 57: 411–419.
Dickey, D. A., and W. A. Fuller. 1979. Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association 74: 427–431.
Elliott, G. R., T. J. Rothenberg, and J. H. Stock. 1996. Efficient tests for an autoregressive unit root. Econometrica 64: 813–836.
Ng, S., and P. Perron. 1995. Unit root tests in ARMA models with data-dependent methods for the selection of the truncation lag. Journal of the American Statistical Association 90: 268–281.
Ng, S., and P. Perron. 2000. Lag length selection and the construction of unit root tests with good size and power. Econometrica 69: 1519–1554.
Schwert, G. W. 1989. Tests for unit roots: A Monte Carlo investigation. Journal of Business and Economic Statistics 7: 147–159.
Stock, J. H., and M. W. Watson. 2015. Introduction to Econometrics. Updated 3rd ed. Hoboken, NJ: Pearson.

Also see
[TS] dfuller Augmented Dickey–Fuller unit-root test
[TS] pperron Phillips–Perron unit-root test
[TS] tsset Declare data to be time-series data
[XT] xtunitroot Panel-data unit-root tests

Title
dfuller Augmented Dickey–Fuller unit-root test

Description    Quick start    Menu    Syntax    Options    Remarks and examples    Stored results    Methods and formulas    References    Also see

Description
dfuller performs the augmented Dickey–Fuller test that a variable follows a unit-root process.
The null hypothesis is that the variable contains a unit root, and the alternative is that the variable
was generated by a stationary process. You may optionally exclude the constant, include a trend term,
and include lagged values of the difference of the variable in the regression.

Quick start
Augmented Dickey–Fuller test for presence of a unit root in y using tsset data
    dfuller y
As above, but with a trend term
    dfuller y, trend
Augmented Dickey–Fuller test for presence of a unit root in y with a drift term
    dfuller y, drift
As above, but include 3 lagged differences and display the regression table
    dfuller y, drift lags(3) regress

Menu
Statistics > Time series > Tests > Augmented Dickey-Fuller unit-root test

Syntax
dfuller varname [if] [in] [, options]

  options        Description
  Main
    noconstant   suppress constant term in regression
    trend        include trend term in regression
    drift        include drift term in regression
    regress      display regression table
    lags(#)      include # lagged differences

You must tsset your data before using dfuller; see [TS] tsset.
varname may contain time-series operators; see [U] 11.4.4 Time-series varlists.

Options


Main

noconstant suppresses the constant term (intercept) in the model and indicates that the process
under the null hypothesis is a random walk without drift. noconstant cannot be used with the
trend or drift option.
trend specifies that a trend term be included in the associated regression and that the process under
the null hypothesis is a random walk, perhaps with drift. This option may not be used with the
noconstant or drift option.
drift indicates that the process under the null hypothesis is a random walk with nonzero drift. This
option may not be used with the noconstant or trend option.
regress specifies that the associated regression table appear in the output. By default, the regression
table is not produced.
lags(#) specifies the number of lagged difference terms to include in the covariate list.

Remarks and examples

Dickey and Fuller (1979) developed a procedure for testing whether a variable has a unit root or,
equivalently, that the variable follows a random walk. Hamilton (1994, 528–529) describes the four
different cases to which the augmented Dickey–Fuller test can be applied. The null hypothesis is
always that the variable has a unit root. They differ in whether the null hypothesis includes a drift
term and whether the regression used to obtain the test statistic includes a constant term and time
trend. Becketti (2013, chap. 9) provides additional examples showing how to conduct these tests.

The true model is assumed to be

$$y_t = \alpha + y_{t-1} + u_t$$

where $u_t$ is an independent and identically distributed zero-mean error term. In cases one and two,
presumably $\alpha = 0$, which is a random walk without drift. In cases three and four, we allow for a
drift term by letting $\alpha$ be unrestricted.

The Dickey–Fuller test involves fitting the model

$$y_t = \alpha + \rho y_{t-1} + \delta t + u_t$$

by ordinary least squares (OLS), perhaps setting $\delta = 0$ or $\alpha = 0$. However, such a regression is likely
to be plagued by serial correlation. To control for that, the augmented Dickey–Fuller test instead fits
a model of the form

$$\Delta y_t = \alpha + \beta y_{t-1} + \delta t + \zeta_1 \Delta y_{t-1} + \zeta_2 \Delta y_{t-2} + \dots + \zeta_k \Delta y_{t-k} + \epsilon_t \tag{1}$$

where k is the number of lags specified in the lags() option. The noconstant option removes the
constant term $\alpha$ from this regression, and the trend option includes the time trend $\delta t$, which by
default is not included. Testing $\beta = 0$ is equivalent to testing $\rho = 1$, or, equivalently, that $y_t$ follows
a unit root process.

In the first case, the null hypothesis is that $y_t$ follows a random walk without drift, and (1) is fit
without the constant term and the time trend $\delta t$. The second case has the same null hypothesis as
the first, except that we include $\alpha$ in the regression. In both cases, the population value of $\alpha$ is zero
under the null hypothesis. In the third case, we hypothesize that $y_t$ follows a unit root with drift, so
that the population value of $\alpha$ is nonzero; we do not include the time trend in the regression. Finally,
in the fourth case, the null hypothesis is that $y_t$ follows a unit root with or without drift so that $\alpha$ is
unrestricted, and we include a time trend in the regression.
The following table summarizes the four cases.

  Case   Process under null hypothesis        Regression restrictions        dfuller option
   1     Random walk without drift            $\alpha = 0$, $\delta = 0$     noconstant
   2     Random walk without drift            $\delta = 0$                   (default)
   3     Random walk with drift               $\delta = 0$                   drift
   4     Random walk with or without drift    (none)                         trend
Except in the third case, the t-statistic used to test $H_0\colon \beta = 0$ does not have a standard distribution.
Hamilton (1994, chap. 17) derives the limiting distributions, which are different for each of the
three other cases. The critical values reported by dfuller are interpolated based on the tables in
Fuller (1996). MacKinnon (1994) shows how to approximate the p-values on the basis of a regression
surface, and dfuller also reports that p-value. In the third case, where the regression includes a
constant term and under the null hypothesis the series has a nonzero drift parameter $\alpha$, the t statistic
has the usual t distribution; dfuller reports the one-sided critical values and p-value for the test of
$H_0$ against the alternative $H_a\colon \beta < 0$, which is equivalent to $\rho < 1$.

Deciding which case to use involves a combination of theory and visual inspection of the data.
If economic theory favors a particular null hypothesis, the appropriate case can be chosen based on
that. If a graph of the data shows an upward trend over time, then case four may be preferred. If the
data do not show a trend but do have a nonzero mean, then case two would be a valid alternative.
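In command form, the four cases correspond directly to the options just described (y is a hypothetical tsset series):

. dfuller y, noconstant    // case 1: random walk without drift
. dfuller y                // case 2: default; constant included
. dfuller y, drift         // case 3: random walk with drift
. dfuller y, trend         // case 4: constant and time trend included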

Example 1
In this example, we examine the international airline passengers dataset from Box, Jenkins, and
Reinsel (2008, Series G). This dataset has 144 observations on the monthly number of international
airline passengers from 1949 through 1960. Because the data show a clear upward trend, we use the
trend option with dfuller to include a constant and time trend in the augmented Dickey–Fuller
regression.



. use http://www.stata-press.com/data/r14/air2
(TIMESLAB: Airline passengers)
. dfuller air, lags(3) trend regress
Augmented Dickey-Fuller test for unit root
Test
Statistic
Z(t)

-6.936

Number of obs

140

Interpolated Dickey-Fuller
1% Critical
5% Critical
10% Critical
Value
Value
Value
-4.027

-3.445

-3.145

MacKinnon approximate p-value for Z(t) = 0.0000


D.air

Coef.
air
L1.
LD.
L2D.
L3D.
_trend
_cons

-.5217089
.5572871
.095912
.14511
1.407534
44.49164

Std. Err.

.0752195
.0799894
.0876692
.0879922
.2098378
7.78335

-6.94
6.97
1.09
1.65
6.71
5.72

P>|t|

0.000
0.000
0.276
0.101
0.000
0.000

[95% Conf. Interval]

-.67048
.399082
-.0774825
-.0289232
.9925118
29.09753

-.3729379
.7154923
.2693065
.3191433
1.822557
59.88575

Here we can overwhelmingly reject the null hypothesis of a unit root at all common significance
levels. From the regression output, the estimated $\beta$ of $-0.522$ implies that $\rho = (1 - 0.522) = 0.478$.
Experiments with fewer or more lags in the augmented regression yield the same conclusion.

Example 2
In this example, we use the German macroeconomic dataset to determine whether the log of
consumption follows a unit root. We will again use the trend option, because consumption grows
over time.


. use http://www.stata-press.com/data/r14/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. tsset qtr
        time variable:  qtr, 1960q1 to 1982q4
                delta:  1 quarter
. dfuller ln_consump, lags(4) trend
Augmented Dickey-Fuller test for unit root            Number of obs =    87

                   Interpolated Dickey-Fuller
           Test      1% Critical   5% Critical   10% Critical
         Statistic      Value         Value          Value
  Z(t)    -1.318        -4.069        -3.463         -3.158

MacKinnon approximate p-value for Z(t) = 0.8834

As we might expect from economic theory, here we cannot reject the null hypothesis that log
consumption exhibits a unit root. Again, using different numbers of lag terms yields the same conclusion.

Stored results
dfuller stores the following in r():

Scalars
  r(N)      number of observations
  r(lags)   number of lagged differences
  r(Zt)     Dickey–Fuller test statistic
  r(p)      MacKinnon approximate p-value (if there is a constant or trend in associated regression)

Methods and formulas

In the OLS estimation of an AR(1) process with Gaussian errors,

$$y_t = \rho y_{t-1} + \epsilon_t$$

where $\epsilon_t$ are independent and identically distributed as $N(0, \sigma^2)$ and $y_0 = 0$, the OLS estimate (based
on an n-observation time series) of the autocorrelation parameter $\rho$ is given by

$$\hat{\rho}_n = \frac{\sum_{t=1}^{n} y_{t-1}\, y_t}{\sum_{t=1}^{n} y_{t-1}^2}$$

If $|\rho| < 1$, then

$$\sqrt{n}\,(\hat{\rho}_n - \rho) \to N(0,\, 1 - \rho^2)$$

If this result were valid when $\rho = 1$, the resulting distribution would have a variance of zero. When
$\rho = 1$, the OLS estimate $\hat{\rho}$ still converges in probability to one, though we need to find a suitable
nondegenerate distribution so that we can perform hypothesis tests of $H_0\colon \rho = 1$. Hamilton (1994,
chap. 17) provides a superb exposition of the requisite theory.


To compute the test statistics, we fit the augmented Dickey–Fuller regression

$$\Delta y_t = \alpha + \beta y_{t-1} + \delta t + \sum_{j=1}^{k} \zeta_j \Delta y_{t-j} + e_t$$

via OLS where, depending on the options specified, the constant term $\alpha$ or time trend $\delta t$ is omitted
and k is the number of lags specified in the lags() option. The test statistic for $H_0\colon \beta = 0$ is
$Z_t = \hat{\beta}/\hat{\sigma}_{\hat{\beta}}$, where $\hat{\sigma}_{\hat{\beta}}$ is the standard error of $\hat{\beta}$.

The critical values included in the output are linearly interpolated from the table of values that
appears in Fuller (1996), and the MacKinnon approximate p-values use the regression surface published
in MacKinnon (1994).
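Because $Z_t$ is simply the t statistic on $y_{t-1}$ from this regression, it can be replicated by hand (a sketch using the air2 data from example 1; trendvar is a hypothetical name for the time trend):

. use http://www.stata-press.com/data/r14/air2, clear
. generate trendvar = _n
. regress D.air L.air L(1/3)D.air trendvar
. * the t statistic on L.air reproduces Z(t) = -6.936 from
. * dfuller air, lags(3) trend regress, up to rounding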


David Alan Dickey (1945– ) was born in Ohio and obtained degrees in mathematics at Miami
University and a PhD in statistics at Iowa State University in 1976 as a student of Wayne Fuller.
He works at North Carolina State University and specializes in time-series analysis.

Wayne Arthur Fuller (1931– ) was born in Iowa, obtained three degrees at Iowa State University
and then served on the faculty between 1959 and 2001. He has made many distinguished
contributions to time series, measurement-error models, survey sampling, and econometrics.

References
Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.
Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 2008. Time Series Analysis: Forecasting and Control. 4th ed. Hoboken, NJ: Wiley.
Dickey, D. A., and W. A. Fuller. 1979. Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association 74: 427–431.
Fuller, W. A. 1996. Introduction to Statistical Time Series. 2nd ed. New York: Wiley.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
MacKinnon, J. G. 1994. Approximate asymptotic distribution functions for unit-root and cointegration tests. Journal of Business and Economic Statistics 12: 167–176.
Newton, H. J. 1988. TIMESLAB: A Time Series Analysis Laboratory. Belmont, CA: Wadsworth.

Also see
[TS] tsset Declare data to be time-series data
[TS] dfgls DF-GLS unit-root test
[TS] pperron Phillips–Perron unit-root test
[XT] xtunitroot Panel-data unit-root tests

Title
estat acplot Plot parametric autocorrelation and autocovariance functions
Description    Quick start    Menu for estat    Syntax    Options    Remarks and examples    Methods and formulas    References    Also see

Description
estat acplot plots the estimated autocorrelation and autocovariance functions of a stationary
process using the parameters of a previously fit parametric model.
estat acplot is available after arima and arfima; see [TS] arima and [TS] arfima.

Quick start
Autocorrelation function using estimates from arima or arfima
estat acplot
Autocovariance function using estimates from arima or arfima
estat acplot, covariance
As above, and save results in mydata.dta
estat acplot, covariance saving(mydata)

Menu for estat


Statistics > Postestimation

Syntax
estat acplot [, options]

options                      Description
--------------------------------------------------------------------------
saving(filename[, ...])      save results to filename; save variables in
                               double precision; save variables with
                               prefix stubname
level(#)                     set confidence level; default is level(95)
lags(#)                      use # autocorrelations
covariance                   calculate autocovariances; the default is to
                               calculate autocorrelations
smemory                      report short-memory ACF; only allowed after
                               arfima

CI plot
ciopts(rcap_options)         affect rendition of the confidence bands

Plot
marker_options               change look of markers (color, size, etc.)
marker_label_options         add marker labels; change look or position
cline_options                affect rendition of the plotted points

Y axis, X axis, Titles, Legend, Overall
twoway_options               any options other than by() documented in
                               [G-3] twoway_options
--------------------------------------------------------------------------

Options

saving(filename[, suboptions]) creates a Stata data file (.dta file) consisting of the autocorrelation estimates, standard errors, and confidence bounds.
    Five variables are saved: lag (lag number), ac (autocorrelation estimate), se (standard error), ci_l (lower confidence bound), and ci_u (upper confidence bound).
    double specifies that the variables be saved as doubles, meaning 8-byte reals. By default, they are saved as floats, meaning 4-byte reals.
    name(stubname) specifies that variables be saved with prefix stubname.
    replace indicates that filename be overwritten if it exists.
level(#) specifies the confidence level, as a percentage, for confidence intervals. The default is level(95) or as set by set level; see [R] level.
lags(#) specifies the number of autocorrelations to calculate. The default is to use min{floor(n/2) - 2, 40}, where floor(n/2) is the greatest integer less than or equal to n/2 and n is the number of observations.
covariance specifies the calculation of autocovariances instead of the default autocorrelations.
smemory specifies that the ARFIMA fractional integration parameter be ignored. The computed autocorrelations are for the short-memory ARMA component of the model. This option is allowed only after arfima.

CI plot

ciopts(rcap_options) affects the rendition of the confidence bands; see [G-3] rcap_options.


Plot

marker_options affect the rendition of markers drawn at the plotted points, including their shape, size, color, and outline; see [G-3] marker_options.
marker_label_options specify if and how the markers are to be labeled; see [G-3] marker_label_options.
cline_options affect whether lines connect the plotted points and the rendition of those lines; see [G-3] cline_options.

Y axis, X axis, Titles, Legend, Overall

twoway_options are any of the options documented in [G-3] twoway_options, except by(). These include options for titling the graph (see [G-3] title_options) and options for saving the graph to disk (see [G-3] saving_option).

Remarks and examples

The dependent variable evolves over time because of random shocks in the time domain representation. The autocovariances \gamma_j, j \in \{0, 1, \dots, \infty\}, of a covariance-stationary process y_t specify its variance and dependence structure, and the autocorrelations \rho_j, j \in \{1, 2, \dots, \infty\}, provide a scale-free measure of y_t's dependence structure. The autocorrelation at lag j specifies whether realizations at time t and realizations at time t - j are positively related, unrelated, or negatively related. estat acplot uses the estimated parameters of a parametric model to estimate and plot the autocorrelations and autocovariances of a stationary process.


Example 1
In example 1 of [TS] arima, we fit an ARIMA(1,1,1) model of the U.S. Wholesale Price Index (WPI) using quarterly data over the period 1960q1 through 1990q4.

. use http://www.stata-press.com/data/r14/wpi1
. arima wpi, arima(1,1,1)
(setting optimization to BHHH)
Iteration 0:   log likelihood = -139.80133
Iteration 1:   log likelihood =  -135.6278
Iteration 2:   log likelihood = -135.41838
Iteration 3:   log likelihood = -135.36691
Iteration 4:   log likelihood = -135.35892
(switching optimization to BFGS)
Iteration 5:   log likelihood = -135.35471
Iteration 6:   log likelihood = -135.35135
Iteration 7:   log likelihood = -135.35132
Iteration 8:   log likelihood = -135.35131
ARIMA regression

Sample: 1960q2 - 1990q4                         Number of obs     =        123
                                                Wald chi2(2)      =     310.64
Log likelihood = -135.3513                      Prob > chi2       =     0.0000

------------------------------------------------------------------------------
             |                 OPG
       D.wpi |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
wpi          |
       _cons |   .7498197   .3340968     2.24   0.025     .0950019    1.404637
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .8742288   .0545435    16.03   0.000     .7673256     .981132
             |
          ma |
         L1. |  -.4120458   .1000284    -4.12   0.000    -.6080979   -.2159938
-------------+----------------------------------------------------------------
      /sigma |   .7250436   .0368065    19.70   0.000     .6529042    .7971829
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.

Now we use estat acplot to estimate the autocorrelations implied by the estimated ARMA
parameters. We include lags(50) to indicate that autocorrelations be computed for 50 lags. By
default, a 95% confidence interval is provided for each autocorrelation.


. estat acplot, lags(50)

[Graph omitted: Parametric autocorrelations of D.wpi with 95% confidence intervals; y axis: Autocorrelations (.2 to .8); x axis: quarterly lag (0 to 50)]

The graph is similar to a typical autocorrelation function of an AR(1) process with a positive
coefficient. The autocorrelations of a stationary AR(1) process decay exponentially toward zero.
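
For the ARMA(1,1) fit above, the implied autocorrelations satisfy \rho_{j+1} = \widehat{\rho}\,\rho_j for j \geq 1, so successive values should shrink by a factor of about .874. This can be checked from the saved results; the filename acf below is our own choice, not part of the original example.

. estat acplot, lags(8) saving(acf, replace)
. preserve
. use acf, clear
. generate double decay = ac/ac[_n-1]    // ratio of successive autocorrelations
. list lag ac decay, noobs
. restore

Each entry of decay after the first should be close to the estimated AR coefficient, .8742288.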

Methods and formulas


The autocovariance function for ARFIMA models is described in Methods and formulas of [TS] arfima.
The autocovariance function for ARIMA models is obtained by setting the fractional difference parameter
to zero.
Box, Jenkins, and Reinsel (2008) provide excellent descriptions of the autocovariance function for
ARIMA and seasonal ARIMA models. Palma (2007) provides an excellent summary of the autocovariance
function for ARFIMA models.

References
Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 2008. Time Series Analysis: Forecasting and Control. 4th ed.
Hoboken, NJ: Wiley.
Palma, W. 2007. Long-Memory Time Series: Theory and Methods. Hoboken, NJ: Wiley.

Also see
[TS] arfima Autoregressive fractionally integrated moving-average models
[TS] arima ARIMA, ARMAX, and other dynamic regression models

Title
estat aroots Check the stability condition of ARIMA estimates
Description     Quick start     Menu for estat     Syntax
Options         Remarks and examples     Stored results     Methods and formulas
Reference       Also see

Description
estat aroots checks the eigenvalue stability condition after estimating the parameters of an
ARIMA model using arima. A graph of the eigenvalues of the companion matrices for the AR and
MA polynomials is also produced.

estat aroots is available only after arima; see [TS] arima.

Quick start
Verify that all eigenvalues of the autoregressive polynomial lie inside the unit circle after arima
estat aroots
As above, but suppress the graph
estat aroots, nograph
Label each plotted eigenvalue with its distance from the unit circle
estat aroots, dlabel

Menu for estat


Statistics > Postestimation

Syntax
estat aroots [, options]

options                    Description
--------------------------------------------------------------------------
nograph                    suppress graph of eigenvalues for the companion
                             matrices
dlabel                     label eigenvalues with the distance from the
                             unit circle
modlabel                   label eigenvalues with the modulus

Grid
nogrid                     suppress polar grid circles
pgrid([numlist][, line_options])
                           specify radii and appearance of polar grid
                             circles; see Options for details

Plot
marker_options             change look of markers (color, size, etc.)

Reference unit circle
rlopts(cline_options)      affect rendition of reference unit circle

Y axis, X axis, Titles, Legend, Overall
twoway_options             any options other than by() documented in
                             [G-3] twoway_options
--------------------------------------------------------------------------

Options
nograph specifies that no graph of the eigenvalues of the companion matrices be drawn.
dlabel labels each eigenvalue with its distance from the unit circle. dlabel cannot be specified with modlabel.
modlabel labels the eigenvalues with their moduli. modlabel cannot be specified with dlabel.

Grid

nogrid suppresses the polar grid circles.
pgrid([numlist][, line_options]) determines the radii and appearance of the polar grid circles. By default, the graph includes nine polar grid circles with radii 0.1, 0.2, ..., 0.9 that have the grid line style. The numlist specifies the radii for the polar grid circles. The line_options determine the appearance of the polar grid circles; see [G-3] line_options. Because the pgrid() option can be repeated, circles with different radii can have distinct appearances.

Plot

marker_options specify the look of markers. This look includes the marker symbol, the marker size, and its color and outline; see [G-3] marker_options.

Reference unit circle

rlopts(cline_options) affects the rendition of the reference unit circle; see [G-3] cline_options.


Y axis, X axis, Titles, Legend, Overall

twoway options are any of the options documented in [G-3] twoway options, except by(). These
include options for titling the graph (see [G-3] title options) and for saving the graph to disk (see
[G-3] saving option).

Remarks and examples

Inference after arima requires that the variable y_t be covariance stationary. The variable y_t is covariance stationary if its first two moments exist and are time invariant. More explicitly, y_t is covariance stationary if
1. E(y_t) is finite and not a function of t;
2. Var(y_t) is finite and independent of t; and
3. Cov(y_t, y_s) is a finite function of |t - s| but not of t or s alone.
The stationarity of an ARMA process depends on the autoregressive (AR) parameters. If the inverse roots of the AR polynomial all lie inside the unit circle, the process is stationary, invertible, and has an infinite-order moving-average (MA) representation. Hamilton (1994, chap. 1) shows that if the modulus of each eigenvalue of the matrix F(\rho) is strictly less than 1, the estimated ARMA is stationary; see Methods and formulas for the definition of the matrix F(\rho).
The MA part of an ARMA process can be rewritten as an infinite-order AR process provided that the MA process is invertible. Hamilton (1994, chap. 1) shows that if the modulus of each eigenvalue of the matrix F(\theta) is strictly less than 1, the estimated ARMA is invertible; see Methods and formulas for the definition of the matrix F(\theta).

Example 1
In this example, we check the stability condition of the SARIMA model that we fit in example 3
of [TS] arima. We begin by reestimating the parameters of the model.
. use http://www.stata-press.com/data/r14/air2
(TIMESLAB: Airline passengers)
. generate lnair = ln(air)


. arima lnair, arima(0,1,1) sarima(0,1,1,12) noconstant
(setting optimization to BHHH)
Iteration 0:   log likelihood =   223.8437
Iteration 1:   log likelihood =  239.80405
Iteration 2:   log likelihood =  244.10265
Iteration 3:   log likelihood =  244.65895
Iteration 4:   log likelihood =  244.68945
(switching optimization to BFGS)
Iteration 5:   log likelihood =  244.69431
Iteration 6:   log likelihood =  244.69647
Iteration 7:   log likelihood =  244.69651
Iteration 8:   log likelihood =  244.69651
ARIMA regression

Sample: 14 - 144                                Number of obs     =        131
                                                Wald chi2(2)      =      84.53
Log likelihood = 244.6965                       Prob > chi2       =     0.0000

------------------------------------------------------------------------------
             |                 OPG
 DS12.lnair  |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ARMA         |
          ma |
         L1. |  -.4018324   .0730307    -5.50   0.000    -.5449698   -.2586949
-------------+----------------------------------------------------------------
ARMA12       |
          ma |
         L1. |  -.5569342   .0963129    -5.78   0.000     -.745704   -.3681644
-------------+----------------------------------------------------------------
      /sigma |   .0367167   .0020132    18.24   0.000     .0327708    .0406625
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.

We can now use estat aroots to check the stability condition of the MA part of the model.

. estat aroots
Eigenvalue stability condition
---------------------------------------------
             Eigenvalue             | Modulus
------------------------------------+--------
   .824798  + .4761974i             | .952395
   .824798  - .4761974i             | .952395
   .9523947                         | .952395
  -.824798  + .4761974i             | .952395
  -.824798  - .4761974i             | .952395
  -.4761974 + .824798i              | .952395
  -.4761974 - .824798i              | .952395
  2.776e-16 + .9523947i             | .952395
  2.776e-16 - .9523947i             | .952395
   .4761974 + .824798i              | .952395
   .4761974 - .824798i              | .952395
  -.9523947                         | .952395
   .4018324                         | .401832
---------------------------------------------
All the eigenvalues lie inside the unit circle.
MA parameters satisfy invertibility condition.


[Graph omitted: Inverse roots of MA polynomial; x axis: Real (-.5 to .5 shown); y axis: Imaginary (-.5 to .5 shown); the eigenvalues are plotted inside the reference unit circle]

Because the modulus of each eigenvalue is strictly less than 1, the MA process is invertible and
can be represented as an infinite-order AR process.
The graph produced by estat aroots displays the eigenvalues with the real components on the x
axis and the imaginary components on the y axis. The graph indicates visually that these eigenvalues
are just inside the unit circle.

Stored results
aroots stores the following in r():

Matrices
  r(Re_ar)        real part of the eigenvalues of F(\rho)
  r(Im_ar)        imaginary part of the eigenvalues of F(\rho)
  r(Modulus_ar)   modulus of the eigenvalues of F(\rho)
  r(ar)           F(\rho), the AR companion matrix
  r(Re_ma)        real part of the eigenvalues of F(\theta)
  r(Im_ma)        imaginary part of the eigenvalues of F(\theta)
  r(Modulus_ma)   modulus of the eigenvalues of F(\theta)
  r(ma)           F(\theta), the MA companion matrix

Methods and formulas

Recall the general form of the ARMA model,

    \rho(L^p)(y_t - x_t \beta) = \theta(L^q)\epsilon_t

where

    \rho(L^p) = 1 - \rho_1 L - \rho_2 L^2 - \cdots - \rho_p L^p
    \theta(L^q) = 1 + \theta_1 L + \theta_2 L^2 + \cdots + \theta_q L^q

and L^j y_t = y_{t-j}.


estat aroots forms the companion matrix

    F(\gamma) = \begin{pmatrix}
        \gamma_1 & \gamma_2 & \cdots & \gamma_{r-1} & \gamma_r \\
        1        & 0        & \cdots & 0            & 0        \\
        0        & 1        & \cdots & 0            & 0        \\
        \vdots   & \vdots   & \ddots & \vdots       & \vdots   \\
        0        & 0        & \cdots & 1            & 0
    \end{pmatrix}

where \gamma = \rho and r = p for the AR part of ARMA, and \gamma = \theta and r = q for the MA part of ARMA. aroots obtains the eigenvalues of F by using matrix eigenvalues. The modulus of the complex eigenvalue r + ci is \sqrt{r^2 + c^2}. As shown by Hamilton (1994, chap. 1), a process is stable and invertible if the modulus of each eigenvalue of F is strictly less than 1.
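
As an illustration, the moduli reported in example 1 can be reproduced by building the MA companion matrix directly in Mata. This is our own sketch, not part of the entry: the multiplicative seasonal MA terms are expanded into a single order-13 polynomial, and the signs are arranged so that the eigenvalues of the companion matrix are the inverse roots of \theta(z).

. mata:
:     // order-13 MA polynomial of the airline model, using the
:     // coefficients from the arima output in example 1
:     g = J(1, 13, 0)
:     g[1]  = .4018324
:     g[12] = .5569342
:     g[13] = -.4018324*.5569342
:     F = g \ (I(12), J(12, 1, 0))    // 13 x 13 companion matrix
:     abs(eigenvalues(F))             // moduli; all strictly less than 1
: end

The displayed moduli match the estat aroots output: twelve values of about .952395 and one of about .401832.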

Reference
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.

Also see
[TS] arima ARIMA, ARMAX, and other dynamic regression models

Title
estat sbknown Test for a structural break with a known break date
Description     Quick start     Menu for estat     Syntax
Options         Remarks and examples     Stored results     Methods and formulas
Reference       Also see

Description
estat sbknown performs a Wald or a likelihood-ratio (LR) test of whether the coefficients in a
time-series regression vary over the periods defined by known break dates.
estat sbknown requires that the current estimation results be from regress or ivregress 2sls.

Quick start
Test for a structural break at January 1983 for current estimation results
estat sbknown, break(tm(1983m1))
As above, but for the first quarter of 1997
estat sbknown, break(tq(1997q1))
As above, but perform an LR test instead of a Wald test
estat sbknown, break(tq(1997q1)) lr
Perform a Wald test for multiple breaks at dates 1997q1 and 2005q1
estat sbknown, break(tq(1997q1) tq(2005q1))

Menu for estat


Statistics > Postestimation

Syntax
estat sbknown, break(time_constant_list) [options]

options                               Description
--------------------------------------------------------------------------
break(time_constant_list)             specify one or more break dates
breakvars([varlist][, constant])      specify variables to be included in
                                        the test; by default, all
                                        coefficients are tested
wald                                  request a Wald test; the default
lr                                    request an LR test
--------------------------------------------------------------------------
break() is required.
You must tsset your data before using estat sbknown; see [TS] tsset.

Options
break(time_constant_list) specifies a list of one or more hypothesized break dates. break() is required with at least one break date.
    time_constant_list is a list of one or more time constant elements specified using dates in Stata internal form (SIF) or human-readable form (HRF) format. If you specify the time_constant_list using HRF, you must use one of the datetime pseudofunctions; see [D] datetime.
breakvars([varlist][, constant]) specifies variables to be included in the test. By default, all the coefficients are tested.
    constant specifies that a constant be included in the list of variables to be tested. constant may be specified only if the original model was fit with a constant term.
wald requests that a Wald test be performed. This is the default.
lr requests that an LR test be performed instead of a Wald test.

Remarks and examples

estat sbknown performs a test of the null hypothesis that the coefficients do not vary over the subsamples defined by the specified known break dates. The null hypothesis of no structural break can be tested using a Wald or an LR test.
Consider the linear regression

    y_t = x_t \beta + \epsilon_t

A model with a structural break allows the coefficients to change after a break date. If b is the break date, the model is

    y_t = \begin{cases} x_t \beta + \epsilon_t & \text{if } t \le b \\ x_t(\beta + \delta) + \epsilon_t & \text{if } t > b \end{cases}

For this model, the null and alternative hypotheses are H_0: \delta = 0 and H_a: \delta \neq 0.
For a classical linear model where \epsilon_t is independent and identically distributed, this test is known as the Chow (1960) test, as sketched below.
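
For intuition, here is a minimal sketch of the same test computed by interacting the regressor with a break indicator. It is our own illustration, assuming the quarterly time variable is named t; the variable name post is ours, and the small-sample F-versus-chi-squared scaling that estat sbknown handles is ignored.

. generate byte post = (t > tq(1981q2))    // 1 after the hypothesized break
. regress fedfunds c.L.fedfunds##i.post
. testparm i.post i.post#c.L.fedfunds

Multiplying the reported F statistic by its numerator degrees of freedom gives approximately the chi-squared Wald statistic that estat sbknown reports.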


Example 1: Test for a single known break date


In usmacro.dta, we have data for the fedfunds series from the third quarter of 1954 to the fourth
quarter of 2010 from the Federal Reserve Economic Database (FRED), a macroeconomic database
provided by the Federal Reserve Bank of Saint Louis. The data are plotted below.

[Graph omitted: U.S. Federal Funds Rate; y axis: federal funds rate (0 to 20); x axis: quarterly time variable, 1950q1 to 2010q1]

We note that the 1970s and 1980s were characterized by periods of high interest rates, with the
interest rate peaking in 1981q2.
We want to model the federal funds rate as a function of its first lag, but we are concerned that
there may be a structural break after 1981q2. We fit the model parameters using regress, and then
we use estat sbknown to test for a structural break.
. use http://www.stata-press.com/data/r14/usmacro
(Federal Reserve Economic Data - St. Louis Fed)
. regress fedfunds L.fedfunds
 (output omitted )
. estat sbknown, break(tq(1981q2))
Wald test for a structural break: Known break date

Number of obs = 225

Sample:     1954q4 - 2010q4
Break date: 1981q2

Ho: No structural break

    chi2(2)     =   6.4147
    Prob > chi2 =   0.0405

Exogenous variables:            L.fedfunds
Coefficients included in test:  L.fedfunds _cons
We reject the null hypothesis of no structural break at the 5% level.

Example 2: Test for multiple known breaks


Suppose we divide the data into three subsamples for the periods 1954q4 to 1970q1, 1970q2 to
1995q1, and 1995q2 to 2010q4, specified by the break dates at 1970q1 and 1995q1. We would like
to test whether the coefficients are the same in these subsamples. We do this by specifying multiple
dates in the break() option.

. estat sbknown, break(tq(1970q1) tq(1995q1))
Wald test for a structural break: Known break date

Number of obs = 225

Sample:     1954q4 - 2010q4
Break date: 1970q1 1995q1

Ho: No structural break

    chi2(4)     =   4.6739
    Prob > chi2 =   0.3224

Exogenous variables:            L.fedfunds
Coefficients included in test:  L.fedfunds _cons

We fail to reject the null hypothesis of no structural break for the specified dates.

Stored results
estat sbknown stores the following in r():

Scalars
  r(chi2)       \chi^2 test statistic
  r(p)          level of significance
  r(df)         degrees of freedom

Macros
  r(breakdate)  list of break dates
  r(breakvars)  list of variables whose coefficients are included in the test
  r(test)       type of test

Methods and formulas

A test for a structural break with a known break date can be constructed by fitting a linear regression with an indicator variable as

    y_t = x_t \beta + (b \le t)\, x_t \delta + \epsilon_t

The null hypothesis of no structural break is H_0: \delta = 0. This can be tested by constructing a Wald statistic or an LR statistic, both with \chi^2(k) as the limiting distribution, where k is the number of parameters in the model.
A regression model with multiple breaks may be expressed as

    y_t = x_t \beta + x_t \left\{ (b_1 \le t < b_2)\delta_1 + (b_2 \le t < b_3)\delta_2 + \cdots + (b_m \le t)\delta_m \right\} + \epsilon_t

where b_1, \dots, b_m are m \ge 2 break dates. The null hypothesis of no structural break is a joint test given by H_0: \delta_1 = \cdots = \delta_m = 0.

Reference
Chow, G. C. 1960. Tests of equality between sets of coefficients in two linear regressions. Econometrica 28: 591-605.


Also see
[TS] estat sbsingle Test for a structural break with an unknown break date
[TS] tsset Declare data to be time-series data
[R] ivregress Single-equation instrumental-variables regression
[R] regress Linear regression

Title
estat sbsingle Test for a structural break with an unknown break date
Description     Quick start     Menu for estat     Syntax
Options         Remarks and examples     Stored results     Methods and formulas
References      Also see

Description
estat sbsingle performs a test of whether the coefficients in a time-series regression vary over
the periods defined by an unknown break date.
estat sbsingle requires that the current estimation results be from regress or ivregress
2sls.

Quick start
Supremum Wald test for a structural break at an unknown break date for current estimation results
using default symmetric trimming of 15%
estat sbsingle
Same as above
estat sbsingle, swald
As above, but also report average Wald test
estat sbsingle, swald awald
Supremum Wald test with symmetric trimming of 20%
estat sbsingle, trim(20)
As above, but use asymmetric trimming with a left trim of 10% and a right trim of 20%
estat sbsingle, ltrim(10) rtrim(20)

Menu for estat


Statistics > Postestimation

Syntax
estat sbsingle [, options]

options                               Description
--------------------------------------------------------------------------
breakvars([varlist][, constant])      specify variables to be included in
                                        the test; by default, all
                                        coefficients are tested
trim(#)                               specify a trimming percentage;
                                        default is trim(15)
ltrim(#_l)                            specify a left trimming percentage
rtrim(#_r)                            specify a right trimming percentage
swald                                 request a supremum Wald test; the
                                        default
awald                                 request an average Wald test
ewald                                 request an exponential Wald test
all                                   report all tests
slr                                   request a supremum likelihood-ratio
                                        (LR) test
alr                                   request an average LR test
elr                                   request an exponential LR test
generate(newvarlist)                  create newvarlist containing Wald or
                                        LR test statistics
nodots                                suppress iteration dots
--------------------------------------------------------------------------
You must tsset your data before using estat sbsingle; see [TS] tsset.

Options
breakvars([varlist][, constant]) specifies variables to be included in the test. By default, all the coefficients are tested.
    constant specifies that a constant be included in the list of variables to be tested. constant may be specified only if the original model was fit with a constant term.
trim(#) specifies an equal left and right trimming percentage as an integer. Specifying trim(#) causes the observation at the #th percentile to be treated as the first possible break date and the observation at the (100 - #)th percentile to be treated as the last possible break date. By default, the trimming percentage is set to 15 but may be set to any value between 1 and 49.
ltrim(#_l) specifies a left trimming percentage as an integer. Specifying ltrim(#_l) causes the observation at the #_lth percentile to be treated as the first possible break date. This option must be specified with rtrim(#_r) and may not be combined with trim(#). #_l must be between 1 and 99.
rtrim(#_r) specifies a right trimming percentage as an integer. Specifying rtrim(#_r) causes the observation at the (100 - #_r)th percentile to be treated as the last possible break date. This option must be specified with ltrim(#_l) and may not be combined with trim(#). #_r must be less than (100 - #_l). Specifying #_l = #_r is equivalent to specifying trim(#) with # = #_l = #_r.
swald requests that a supremum Wald test be performed. This is the default.
awald requests that an average Wald test be performed.
ewald requests that an exponential Wald test be performed.
all specifies that all tests be displayed in a table.
slr requests that a supremum LR test be performed.
alr requests that an average LR test be performed.
elr requests that an exponential LR test be performed.
generate(newvarlist) creates either one or two new variables containing the Wald statistics, LR statistics, or both that are transformed and used to calculate the requested Wald or LR tests. If you request only Wald-type tests (swald, awald, or ewald) or only LR-type tests (slr, alr, or elr), then you may specify only one varname in generate(). By default, newvar will contain Wald or LR statistics, depending on the type of test specified.
    A variable containing Wald statistics and a variable containing LR statistics are created if you specify both Wald-type and LR-type tests and specify two varnames in generate(). If you specify only one varname in generate() with both Wald-type and LR-type tests specified, then Wald statistics are returned.
    If no test is specified and generate() is specified, Wald statistics are returned.
nodots suppresses display of the iteration dots. By default, one dot character is displayed for each iteration in the range of possible break dates.

Remarks and examples


estat sbsingle constructs a test statistic for a structural break without imposing a known break
date by combining the test statistics computed for each possible break date in the sample. estat
sbsingle uses the maximum, an average, or the exponential of the average of the tests computed
at each possible break date. The test at each possible break date can be either a Wald or an LR test.
The limiting distribution of each of these test statistics is known but nonstandard. Not only is each
test statistic a function of many sample statistics, but each of these test statistics also depends on
the unknown break date, which is not identified under the null hypothesis; see Davies (1987) for a
seminal treatment.
Tests that use the maximum of the sample tests are known as supremum tests. Specifically, the
supremum Wald test uses the maximum of the sample Wald tests, and the supremum LR test uses
the maximum of the sample LR tests. The intuition behind these tests is to compare the maximum
sample test with what could be expected under the null hypothesis of no break (Quandt [1960], Kim
and Siegmund [1989], and Andrews [1993]).
Supremum tests have much less power than average tests and exponential tests. Average tests
use the average of the sample tests, and exponential tests use the natural log of the average of
the exponential of the sample tests. An average test is optimal when the alternative hypothesis is a
small change in parameter values at the structural break. An exponential test is optimal when the
alternative hypothesis is a larger structural break. See Andrews and Ploberger (1994) for details about
the properties of average and exponential tests.
All tests implemented in estat sbsingle are a function of the sample statistics computed over
a range of possible break dates. However, not all sample observations can be tested as break dates
because there are insufficient observations to estimate the parameters for dates too near the beginning or
the end of the sample. This identification problem is solved by trimming, which excludes observations
too near the beginning or the end of the sample from the set of possible break dates. Andrews (1993)
recommends a symmetric trimming of 15% when the researcher has no other information on good
trimming values.
Much research went into deriving the properties of the implemented tests, and we have cited only
a few of the many papers on the subject. See Perron (2006) for an excellent survey.


Example 1: Test for a structural break with unknown break date


In usmacro.dta, we have data for the fedfunds series from the third quarter of 1954 to the
fourth quarter of 2010 from the Federal Reserve Economic Database (FRED) provided by the Federal
Reserve Bank of Saint Louis.
Consider a model for the federal funds rate as a function of its first lag and the inflation rate
(inflation). We want to test whether coefficients changed at an unknown break date. Below, we
fit the model using regress and perform the test using estat sbsingle.
. use http://www.stata-press.com/data/r14/usmacro
(Federal Reserve Economic Data - St. Louis Fed)
. regress fedfunds L.fedfunds inflation
 (output omitted )
. estat sbsingle
  1    2    3    4    5
..................................................    50
..................................................   100
..................................................   150
....

Test for a structural break: Unknown break date

Number of obs = 222

Full sample:          1955q3 - 2010q4
Trimmed sample:       1964q1 - 2002q3
Estimated break date: 1980q4

Ho: No structural break

    Test        Statistic    p-value
    --------------------------------
    swald         14.1966     0.0440

Exogenous variables:            L.fedfunds inflation
Coefficients included in test:  L.fedfunds inflation _cons

By default, a supremum Wald test is performed. The output indicates that we reject the null hypothesis
of no structural break at the 5% level and that the estimated break date is 1980q4.
Some researchers perform more than one test. Below, we present results for the supremum Wald,
average Wald, and average LR tests.
. estat sbsingle, swald awald alr
  1    2    3    4    5
..................................................    50
..................................................   100
..................................................   150
....

Test for a structural break: Unknown break date

Number of obs = 222

Full sample:    1955q3 - 2010q4
Trimmed sample: 1964q1 - 2002q3

Ho: No structural break

    Test        Statistic    p-value
    --------------------------------
    swald         14.1966     0.0440
    awald          4.5673     0.1474
    alr            4.6319     0.1411

Exogenous variables:            L.fedfunds inflation
Coefficients included in test:  L.fedfunds inflation _cons


Only the supremum Wald test rejects the null hypothesis of no break.

Example 2: Testing for a structural break in a subset of coefficients


Below, we test the null hypothesis that there is a break in the intercept, when we assume that
there is no break in either the autoregressive coefficient or the coefficient on inflation.
. estat sbsingle, breakvars(, constant)
  1    2    3    4    5
..................................................    50
..................................................   100
..................................................   150
....

Test for a structural break: Unknown break date

Number of obs = 222

Full sample:          1955q3 - 2010q4
Trimmed sample:       1964q1 - 2002q3
Estimated break date: 2001q1

Ho: No structural break

    Test        Statistic    p-value
    --------------------------------
    swald          6.7794     0.1141

Exogenous variables:            L.fedfunds inflation
Coefficients included in test:  _cons

We fail to reject the null hypothesis of no structural break in the intercept when there is no break in
any other coefficient.

Example 3: Reviewing sample test statistics


The observation-level Wald or LR test statistics sometimes provide useful diagnostic information.
Below, we use the generate() option to store the observation-level Wald statistics in the new variable
wald, which we subsequently plot using tsline.

. estat sbsingle, breakvars(L.fedfunds) generate(wald)
(output omitted )
. tsline wald, title("Wald test statistics")

[Graph omitted: Wald test statistics plotted against date (quarters), 1950q1 to 2010q1]

We see a spike in the value of the test statistic at the estimated break date of 1980q4. The bump to the left of the spike may indicate a second break.

Example 4: Structural break test with an endogenous regressor


We can use estat sbsingle to test for a structural break in a regression with endogenous
variables. Suppose we want to estimate the New Keynesian hybrid Phillips curve, which defines
inflation as a function of the lagged value of inflation (L.inflation), the output gap (ogap), and
the expected value of inflation in t + 1 {Et (F.inflation)}, conditional on information available at
time t (Gali and Gertler 1999). See [U] 11.4.4 Time-series varlists.
Expected future inflation cannot be directly observed, so macroeconomists use instruments to
predict the one-step-ahead inflation rate. This prediction is obtained by regressing the one-step ahead
inflation rate on a set of instruments.
We can write this mathematically as

    inflation_t = \alpha + \beta\, L.inflation_t + \gamma\, ogap_t + \lambda\, E_t(F.inflation_t) + \epsilon_t

and

    F.inflation_t = z_t \delta + \nu_{t+1}

where z_t is a vector of instruments. The forecasted values given by E_t(F.inflation_t \mid z_t) = z_t \widehat{\delta} are uncorrelated with \nu_{t+1} by construction.
In this example, we fit the Phillips curve model for the period 1970q1 to 1997q4. We are interested
in testing whether expectation of future inflation changed during this period. We instrument the future
value of inflation with the first two lags of inflation, the federal funds rate, and the output gap. We
use ivregress 2sls to fit the model.


. ivregress 2sls inflation L.inflation ogap
>     (F.inflation = L(1/2).inflation L(1/2).ogap L(1/2).fedfunds)
>     if tin(1970q1,1997q4)
 (output omitted )
. estat sbsingle, breakvars(F.inflation)
  1    2    3    4    5
..................................................    50
............................

Test for a structural break: Unknown break date

Number of obs = 112

Full sample:          1970q1 - 1997q4
Trimmed sample:       1974q2 - 1993q4
Estimated break date: 1981q3

Ho: No structural break

    Test        Statistic    p-value
    --------------------------------
    swald          6.7345     0.1164

Coefficients included in test:  F.inflation

We fail to reject the null hypothesis of no structural break in the coefficient of expected future inflation.

Stored results
estat sbsingle stores the following in r():

Scalars
  r(chi2)       \chi^2 test statistic
  r(p)          level of significance
  r(df)         degrees of freedom

Macros
  r(ltrim)      start of trim date
  r(rtrim)      end of trim date
  r(breakvars)  list of variables whose coefficients are included in the test
  r(breakdate)  estimated break date; only after supremum tests
  r(test)       type of test

Methods and formulas

Each supremum test statistic is the maximum value of the test statistic that is obtained from a series of Wald or LR tests over a range of possible break dates in the sample. Let b denote a possible break date in the range [b_1, b_2] for a sample size T.
The supremum test statistic for testing the null hypothesis of no structural change in k coefficients is given by

    \text{supremum } S_T = \sup_{b_1 \le b \le b_2} S_T(b)

where S_T(b) is the Wald or LR test statistic evaluated at a potential break date b. The average and the exponential versions of the test statistic are

    \text{average } S_T = \frac{1}{b_2 - b_1 + 1} \sum_{b=b_1}^{b_2} S_T(b)

    \text{exponential } S_T = \ln\left[ \frac{1}{b_2 - b_1 + 1} \sum_{b=b_1}^{b_2} \exp\left\{ \frac{1}{2} S_T(b) \right\} \right]

respectively.
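
For intuition, the sample Wald statistics underlying the supremum test can be traced by brute force: loop over the trimmed candidate dates, interact the regressors with a break indicator, and record the Wald statistic at each date. The sketch below is our own, assuming the example 1 model and a quarterly time variable named t; the variable names post and wald are ours, and the small-sample scaling that estat sbsingle applies is ignored.

. generate double wald = .
. generate byte post = .
. levelsof t if tin(1964q1, 2002q3), local(dates)   // trimmed candidate dates
. foreach b of local dates {
        quietly replace post = (t >= `b')
        quietly regress fedfunds c.(L.fedfunds inflation)##i.post
        quietly testparm i.post i.post#c.(L.fedfunds inflation)
        quietly replace wald = r(df)*r(F) if t == `b'
  }
. summarize wald
. display "sup-Wald approximately " r(max)

The date at which wald peaks corresponds to the estimated break date reported by the supremum tests.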
The limiting distributions of the test statistics are given by

    \text{supremum } S_T \to_d \sup_{\lambda \in [\lambda_1, \lambda_2]} S(\lambda)

    \text{average } S_T \to_d \frac{1}{\lambda_2 - \lambda_1} \int_{\lambda_1}^{\lambda_2} S(\lambda)\, d\lambda

    \text{exponential } S_T \to_d \ln\left[ \frac{1}{\lambda_2 - \lambda_1} \int_{\lambda_1}^{\lambda_2} \exp\left\{ \frac{1}{2} S(\lambda) \right\} d\lambda \right]

where

    S(\lambda) = \frac{ \{B_k(\lambda) - \lambda B_k(1)\}' \{B_k(\lambda) - \lambda B_k(1)\} }{ \lambda(1 - \lambda) }

B_k(\lambda) is a vector of k-dimensional independent Brownian motions, \lambda_1 = b_1/T, \lambda_2 = b_2/T, and the distributions depend on the trimming through \lambda_2(1 - \lambda_1)/\{\lambda_1(1 - \lambda_2)\}.
Computing the p-values for the nonstandard limiting distributions is computationally complicated.
For each test, the reported p-value is computed using the method in Hansen (1997).

References
Andrews, D. W. K. 1993. Tests for parameter instability and structural change with unknown change point. Econometrica 61: 821-856.
Andrews, D. W. K., and W. Ploberger. 1994. Optimal tests when a nuisance parameter is present only under the alternative. Econometrica 62: 1383-1414.
Davies, R. B. 1987. Hypothesis testing when a nuisance parameter is present only under the alternative. Biometrika 74: 33-43.
Gali, J., and M. Gertler. 1999. Inflation dynamics: A structural econometric analysis. Journal of Monetary Economics 44: 195-222.
Hansen, B. E. 1997. Approximate asymptotic p values for structural-change tests. Journal of Business and Economic Statistics 15: 60-67.
Kim, H.-J., and D. Siegmund. 1989. The likelihood ratio test for a change-point in simple linear regression. Biometrika 76: 409-423.
Perron, P. 2006. Dealing with structural breaks. In Palgrave Handbook of Econometrics: Econometric Theory, Vol. 1, ed. T. C. Mills and K. Patterson, 278-352. Basingstoke, UK: Palgrave.
Quandt, R. E. 1960. Tests of the hypothesis that a linear regression system obeys two separate regimes. Journal of the American Statistical Association 55: 324-330.

Also see
[TS] estat sbknown Test for a structural break with a known break date
[TS] tsset Declare data to be time-series data
[R] ivregress Single-equation instrumental-variables regression
[R] regress Linear regression


Title
fcast compute Compute dynamic forecasts after var, svar, or vec
Description     Quick start     Menu     Syntax
Options         Remarks and examples     Methods and formulas     References
Also see

Description
fcast compute produces dynamic forecasts of the dependent variables in a model previously fit
by var, svar, or vec. fcast compute creates new variables and, if necessary, extends the time
frame of the dataset to contain the prediction horizon.

Quick start
Dynamic forecasts stored in f y1, f y2, and f y3 after fitting a model with var for dependent
variables y1, y2, and y3
fcast compute f
As above, but begin forecast on the first quarter of 1979
fcast compute f , dynamic(q(1979q1))
As above, but specify that 10 periods should be forecasted
fcast compute f , dynamic(q(1979q1)) step(10)

Menu
Statistics > Multivariate time series > VEC/VAR forecasts > Compute forecasts (required for graph)

Syntax
After var and svar

    fcast compute prefix [, options1]

After vec

    fcast compute prefix [, options2]

prefix is the prefix appended to the names of the dependent variables to create the names of the variables holding the dynamic forecasts.

options1                     Description
--------------------------------------------------------------------------
Main
step(#)                      set # periods to forecast; default is step(1)
dynamic(time_constant)       begin dynamic forecasts at time_constant
estimates(estname)           use previously stored results estname;
                               default is to use active results
replace                      replace existing forecast variables that have
                               the same prefix

Std. Errors
nose                         suppress asymptotic standard errors
bs                           obtain standard errors from bootstrapped
                               residuals
bsp                          obtain standard errors from parametric
                               bootstrap
bscentile                    estimate bounds by using centiles of
                               bootstrapped dataset
reps(#)                      perform # bootstrap replications; default is
                               reps(200)
nodots                       suppress the usual dot after each bootstrap
                               replication
saving(filename[, replace])  save bootstrap results as filename; use
                               replace to overwrite existing filename

Reporting
level(#)                     set confidence level; default is level(95)
--------------------------------------------------------------------------

options2                     Description
--------------------------------------------------------------------------
Main
step(#)                      set # periods to forecast; default is step(1)
dynamic(time_constant)       begin dynamic forecasts at time_constant
estimates(estname)           use previously stored results estname;
                               default is to use active results
replace                      replace existing forecast variables that have
                               the same prefix
differences                  save dynamic predictions of the
                               first-differenced variables

Std. Errors
nose                         suppress asymptotic standard errors

Reporting
level(#)                     set confidence level; default is level(95)
--------------------------------------------------------------------------
Default is to use asymptotic standard errors if no options are specified.

fcast compute can be used only after var, svar, and vec; see [TS] var, [TS] var svar, and [TS] vec.
You must tsset your data before using fcast compute; see [TS] tsset.

Options

Main

step(#) specifies the number of periods to be forecast. The default is step(1).
dynamic(time_constant) specifies the period to begin the dynamic forecasts. The default is the period after the last observation in the estimation sample. The dynamic() option accepts either a Stata date function that returns an integer or an integer that corresponds to a date using the current tsset format. dynamic() must specify a date in the range of two or more periods into the estimation sample to one period after the estimation sample.
estimates(estname) specifies that fcast compute use the estimation results stored as estname. By default, fcast compute uses the active estimation results. See [R] estimates for more information on manipulating estimation results.
replace causes fcast compute to replace the variables in memory with the specified predictions.
differences specifies that fcast compute also save dynamic predictions of the first-differenced variables. differences can be specified only with vec estimation results.

Std. Errors

nose specifies that the asymptotic standard errors of the forecasted levels, and thus the asymptotic confidence intervals for the levels, not be calculated. By default, the asymptotic standard errors and the asymptotic confidence intervals of the forecasted levels are calculated.
bs specifies that fcast compute use confidence bounds estimated by a simulation method based on bootstrapping the residuals.
bsp specifies that fcast compute use confidence bounds estimated via simulation in which the innovations are drawn from a multivariate normal distribution.
bscentile specifies that fcast compute use centiles of the bootstrapped dataset to estimate the bounds of the confidence intervals. By default, fcast compute uses the estimated standard errors and the quantiles of the standard normal distribution determined by level().
reps(#) gives the number of repetitions used in the simulations. The default is reps(200).
nodots specifies that no dots be displayed while obtaining the simulation-based standard errors. By default, for each replication, a dot is displayed.
saving(filename[, replace]) specifies the name of the file to hold the dataset that contains the bootstrap replications. The replace option overwrites any file with this name.
    replace specifies that filename be overwritten if it exists. This option is not shown in the dialog box.

Reporting

level(#) specifies the confidence level, as a percentage, for confidence intervals. The default is level(95) or as set by set level; see [U] 20.7 Specifying the width of confidence intervals.

Remarks and examples


Researchers often use VARs and VECMs to construct dynamic forecasts. fcast compute computes
dynamic forecasts of the dependent variables in a VAR or VECM previously fit by var, svar, or vec.
If you are interested in conditional, one-step-ahead predictions, use predict (see [TS] var, [TS] var
svar, and [TS] vec).
To obtain and analyze dynamic forecasts, you fit a model, use fcast compute to compute the
dynamic forecasts, and use fcast graph to graph the results.

Example 1
Typing

. use http://www.stata-press.com/data/r14/lutkepohl2
. var dln_inc dln_consump dln_inv if qtr<tq(1979q1)
. fcast compute m2_, step(8)
. fcast graph m2_dln_inc m2_dln_inv m2_dln_consump, observed

fits a VAR with two lags, computes eight-step dynamic predictions for each endogenous variable, and produces the graph

[Graph omitted: three panels titled Forecast for dln_inc, Forecast for dln_inv, and Forecast for dln_consump, each showing the forecast with its 95% CI and the observed series over 1978q3 to 1980q3]

The graph shows that the model is better at predicting changes in income and investment than in
consumption. The graph also shows how quickly the predictions from the two-lag model settle down
to their mean values.

fcast compute creates new variables in the dataset. If there are K dependent variables in the previously fitted model, fcast compute generates 4K new variables:

    K new variables that hold the forecasted levels, named by appending the specified prefix to the name of the original variable
    K estimated lower bounds for the forecast interval, named by appending the specified prefix and the suffix _LB to the name of the original variable
    K estimated upper bounds for the forecast interval, named by appending the specified prefix and the suffix _UB to the name of the original variable
    K estimated standard errors of the forecast, named by appending the specified prefix and the suffix _SE to the name of the original variable

If you specify options so that fcast compute does not calculate standard errors, the 3K variables that hold them and the bounds of the confidence intervals are not generated.
If the model previously fit is a VECM, specifying differences generates another K variables that hold the forecasts of the first differences of the dependent variables, named by appending the prefix prefixD to the name of the original variable. A quick check of the naming convention appears below.
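
As a sketch of the naming convention, after example 1's fcast compute m2_, step(8), the four variables holding the results for dln_inc can be listed directly (the _LB, _UB, and _SE suffixes follow the description above):

. describe m2_dln_inc m2_dln_inc_LB m2_dln_inc_UB m2_dln_inc_SE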

Example 2
Plots of the forecasts from different models along with the observations from a holdout sample
can provide insights to their relative forecasting performance. Continuing the previous example,

. var dln_inc dln_consump dln_inv if qtr<tq(1979q1), lags(1/6)
 (output omitted )
. fcast compute m6_, step(8)
. graph twoway line m6_dln_inv m2_dln_inv dln_inv qtr
>     if m6_dln_inv < ., legend(cols(1))

[Graph omitted: line plot of m6_dln_inv, dyn(1979q1); m2_dln_inv, dyn(1979q1); and the first difference of ln_inv over 1978q4 to 1980q4]

The model with six lags predicts changes in investment better than the two-lag model in some periods
but markedly worse in other periods.

Methods and formulas

Predictions after var and svar

A VAR with endogenous variables y_t and exogenous variables x_t can be written as

    y_t = v + A_1 y_{t-1} + \cdots + A_p y_{t-p} + B x_t + u_t

where

    t = 1, \dots, T
    y_t = (y_{1t}, \dots, y_{Kt})' is a K x 1 random vector,
    the A_i are fixed (K x K) matrices of parameters,
    x_t is an (M x 1) vector of exogenous variables,
    B is a (K x M) matrix of coefficients,
    v is a (K x 1) vector of fixed parameters, and
    u_t is assumed to be white noise; that is,
        E(u_t) = 0_K
        E(u_t u_t') = \Sigma
        E(u_t u_s') = 0_K for t \neq s

fcast compute will dynamically predict the variables in the vector y_t conditional on p initial values of the endogenous variables and any exogenous x_t. Adopting the notation from Lütkepohl (2005, 402) to fit the case at hand, the optimal h-step-ahead forecast of y_{t+h} conditional on x_t is

    \widehat{y}_t(h) = \widehat{v} + \widehat{A}_1 \widehat{y}_t(h-1) + \cdots + \widehat{A}_p \widehat{y}_t(h-p) + \widehat{B} x_t    (1)

If there are no exogenous variables, (1) becomes

    \widehat{y}_t(h) = \widehat{v} + \widehat{A}_1 \widehat{y}_t(h-1) + \cdots + \widehat{A}_p \widehat{y}_t(h-p)

When there are no exogenous variables, fcast compute can compute the asymptotic confidence bounds.
As shown by Lütkepohl (2005, 204-205), the asymptotic estimator of the covariance matrix of the prediction error is given by

    \widehat{\Sigma}_{\widehat{y}}(h) = \widehat{\Sigma}_y(h) + \frac{1}{T} \widehat{\Omega}(h)    (2)

where

    \widehat{\Sigma}_y(h) = \sum_{i=0}^{h-1} \widehat{\Phi}_i \widehat{\Sigma} \widehat{\Phi}_i'

    \widehat{\Omega}(h) = \frac{1}{T} \sum_{t=0}^{T} \left\{ \sum_{i=0}^{h-1} Z_t' (\widehat{B}')^{h-1-i} \otimes \widehat{\Phi}_i \right\} \widehat{\Sigma}_{\widehat{\beta}} \left\{ \sum_{i=0}^{h-1} \widehat{B}^{h-1-i} Z_t \otimes \widehat{\Phi}_i' \right\}

    \widehat{B} = \begin{pmatrix}
        1           & 0             & 0             & \cdots & 0                 & 0             \\
        \widehat{v} & \widehat{A}_1 & \widehat{A}_2 & \cdots & \widehat{A}_{p-1} & \widehat{A}_p \\
        0           & I_K           & 0             & \cdots & 0                 & 0             \\
        0           & 0             & I_K           &        & 0                 & 0             \\
        \vdots      & \vdots        &               & \ddots & \vdots            & \vdots        \\
        0           & 0             & 0             & \cdots & I_K               & 0
    \end{pmatrix}

    Z_t = (1, y_t', \dots, y_{t-p+1}')'

    \widehat{\Phi}_0 = I_K

    \widehat{\Phi}_i = \sum_{j=1}^{i} \widehat{\Phi}_{i-j} \widehat{A}_j \qquad i = 1, 2, \dots

    \widehat{A}_j = 0 \text{ for } j > p

\widehat{\Sigma} is the estimate of the covariance matrix of the innovations, and \widehat{\Sigma}_{\widehat{\beta}} is the estimated VCE of the coefficients in the VAR. The formula in (2) is general enough to handle the case in which constraints are placed on the coefficients in the VAR(p).
Equation (2) is made up of two terms. \widehat{\Sigma}_y(h) is the estimated mean squared error (MSE) of the forecast. \widehat{\Sigma}_y(h) estimates the error in the forecast arising from the unseen innovations. T^{-1}\widehat{\Omega}(h) estimates the error in the forecast that is due to using estimated coefficients instead of the true coefficients. As the sample size grows, uncertainty with respect to the coefficient estimates decreases, and T^{-1}\widehat{\Omega}(h) goes to zero.


If y_t is normally distributed, the bounds for the asymptotic (1 - \alpha)100% interval around the forecast for the kth component of y_t, h periods ahead, are

    \widehat{y}_{k,t}(h) \pm z_{(\alpha/2)}\, \widehat{\sigma}_k(h)    (3)

where \widehat{\sigma}_k(h) is the kth diagonal element of \widehat{\Sigma}_{\widehat{y}}(h).
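
As a check, these bounds can be recomputed by hand from the _SE variable that fcast compute saves. The sketch below assumes the m2_ forecasts from example 1 and the default 95% level; the variable name lb_check is ours, and the tolerance is loose because the forecast variables are stored in single precision.

. generate double lb_check = m2_dln_inc - invnormal(.975)*m2_dln_inc_SE
. assert abs(lb_check - m2_dln_inc_LB) < 1e-5 if !missing(m2_dln_inc_LB)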
Specifying the bs option causes the standard errors to be computed via simulation, using bootstrapped residuals. Both var and svar contain estimators for the coefficients of a VAR that are conditional on the first p observations on the endogenous variables in the data. Similarly, these algorithms are conditional on the first p observations of the endogenous variables in the data. However, the simulation-based estimates of the standard errors are also conditional on the estimated coefficients. The asymptotic standard errors are not conditional on the coefficient estimates because the second term on the right-hand side of (2) accounts for the uncertainty arising from using estimated parameters.
For a simulation with R repetitions, this method uses the following algorithm:
1. Fit the model and save the estimated coefficients.
2. Use the estimated coefficients to calculate the residuals.
3. Repeat steps 3a-3c R times.
    3a. Draw a simple random sample with replacement of size T + h from the residuals. When the tth observation is drawn, all K residuals are selected, preserving any contemporaneous correlation among the residuals.
    3b. Use the sampled residuals, p initial values of the endogenous variables, any exogenous variables, and the estimated coefficients to construct a new sample dataset.
    3c. Save the simulated endogenous variables for the h forecast periods in the bootstrapped dataset.
4. For each endogenous variable and each forecast period, the simulated standard error is the estimated standard error of the R simulated forecasts. By default, the upper and lower bounds of the (1 - \alpha)100% interval are estimated using the simulation-based estimates of the standard errors and the normality assumption, as in (3). If the bscentile option is specified, the sample centiles for the upper and lower bounds of the R simulated forecasts are used for the upper and lower bounds of the confidence intervals.
If the bsp option is specified, a parametric simulation algorithm is used. Specifically, everything is as above except that 3a is replaced by 3a(bsp) as follows:
    3a(bsp). Draw T + h observations from a multivariate normal distribution with covariance matrix \widehat{\Sigma}.
The algorithm above assumes that h forecast periods come after the original sample of T observations. If the h forecast periods lie within the original sample, smaller simulated datasets are sufficient.
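
In practice, the simulation options combine as in the following sketch, which continues example 1; the prefix b_ and the file name bsreps are our own choices.

. fcast compute b_, step(8) bs reps(500) bscentile saving(bsreps, replace)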
Dynamic forecasts after vec
Methods and formulas of [TS] vec discusses how to obtain the one-step predicted differences and
levels. fcast compute uses the previous dynamic predictions as inputs for later dynamic predictions.


Per Lütkepohl (2005, sec. 6.5), fcast compute uses

    \widehat{\Sigma}_{\widehat{y}}(h) = \frac{T}{T - d} \sum_{i=0}^{h-1} \widehat{\Phi}_i \widehat{\Sigma} \widehat{\Phi}_i'

where the \widehat{\Phi}_i are the estimated matrices of impulse-response functions, T is the number of observations in the sample, d is the number of degrees of freedom, and \widehat{\Sigma} is the estimated cross-equation variance matrix. The formulas for d and \widehat{\Sigma} are given in Methods and formulas of [TS] vec.
The estimated standard errors at step h are the square roots of the diagonal elements of \widehat{\Sigma}_{\widehat{y}}(h).
Per Lütkepohl (2005), the estimated forecast-error variance does not consider parameter uncertainty. As the sample size gets infinitely large, the importance of parameter uncertainty diminishes to zero.

References
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.

Also see
[TS] fcast graph Graph forecasts after fcast compute
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models

Title
fcast graph Graph forecasts after fcast compute
Description     Quick start     Menu     Syntax
Options         Remarks and examples     Also see
Description
fcast graph graphs dynamic forecasts of the endogenous variables from a VAR(p) or VECM that
has already been obtained from fcast compute; see [TS] fcast compute.

Quick start
Graph forecasts in f y1 after fcast compute
fcast graph f_y1
As above, and include observed values of the predicted variable
fcast graph f_y1, observed
As above, but suppress confidence bands
fcast graph f_y1, observed noci

Menu
Statistics > Multivariate time series > VEC/VAR forecasts > Graph forecasts

Syntax
fcast graph varlist [if] [in] [, options]

where varlist contains one or more forecasted variables generated by fcast compute.

options                    Description
--------------------------------------------------------------------------
Main
differences                graph forecasts of the first-differenced
                             variables (vec only)
noci                       suppress confidence bands
observed                   include observed values of the predicted
                             variables

Forecast plot
cline_options              affect rendition of the forecast lines

CI plot
ciopts(area_options)       affect rendition of the confidence bands

Observed plot
obopts(cline_options)      affect rendition of the observed values

Y axis, Time axis, Titles, Legend, Overall
twoway_options             any options other than by() documented in
                             [G-3] twoway_options
byopts(by_option)          affect appearance of the combined graph; see
                             [G-3] by_option
--------------------------------------------------------------------------

Options

Main

differences specifies that the forecasts of the first-differenced variables be graphed. This option is available only with forecasts computed by fcast compute after vec. The differences option implies noci.
noci specifies that the confidence intervals be suppressed. By default, the confidence intervals are included.
observed specifies that observed values of the predicted variables be included in the graph. By default, observed values are not graphed.

Forecast plot

cline_options affect the rendition of the plotted lines corresponding to the forecast; see [G-3] cline_options.

CI plot

ciopts(area_options) affects the rendition of the confidence bands for the forecasts; see [G-3] area_options.

Observed plot

obopts(cline_options) affects the rendition of the observed values of the predicted variables; see [G-3] cline_options. This option implies the observed option.

Y axis, Time axis, Titles, Legend, Overall

twoway_options are any of the options documented in [G-3] twoway_options, excluding by().
byopts(by_option) are documented in [G-3] by_option. These options affect the appearance of the combined graph.

Remarks and examples


fcast graph graphs dynamic forecasts created by fcast compute.

Example 1
In this example, we use a cointegrating VECM to model the state-level unemployment rates in
Missouri, Indiana, Kentucky, and Illinois, and we graph the forecasts against a 6-month holdout
sample.
. use http://www.stata-press.com/data/r14/urates
. vec missouri indiana kentucky illinois if t < tm(2003m7), trend(rconstant)
> rank(2) lags(4)
(output omitted )
. fcast compute m1_, step(6)
. fcast graph m1_missouri m1_indiana m1_kentucky m1_illinois, observed

[Graph omitted: four panels titled Forecast for missouri, Forecast for indiana, Forecast for kentucky, and Forecast for illinois, each showing the forecast with its 95% CI and the observed series over 2003m6 to 2003m12]

Because the 95% confidence bands for the predicted unemployment rates in Missouri and Indiana do
not contain all their observed values, the model does not reliably predict these unemployment rates.


Also see
[TS] fcast compute Compute dynamic forecasts after var, svar, or vec
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models

Title
forecast Econometric model forecasting
Description     Quick start     Syntax     Remarks and examples
References      Also see

Description
forecast is a suite of commands for obtaining forecasts by solving models, collections of
equations that jointly determine the outcomes of one or more variables. Equations can be stochastic
relationships fit using estimation commands such as regress, ivregress, var, or reg3; or they can
be nonstochastic relationships, called identities, that express one variable as a deterministic function
of other variables. Forecasting models may also include exogenous variables whose values are already
known or determined by factors outside the purview of the system being examined. The forecast
commands can also be used to obtain dynamic forecasts in single-equation models.
The forecast suite lets you incorporate outside information into your forecasts through the use
of add factors and similar devices, and you can specify the future path for some model variables
and obtain forecasts for other variables conditional on that path. Each set of forecast variables has its
own name prefix or suffix, so you can compare forecasts based on alternative scenarios. Confidence
intervals for forecasts can be obtained via stochastic simulation and can incorporate both parameter
uncertainty and additive error terms.
forecast works with both time-series and panel datasets. Time-series datasets may not contain
any gaps, and panel datasets must be strongly balanced.
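One quick way to verify these requirements before building a model is to check the declared time or panel structure; a hedged sketch, assuming a yearly time variable year and, for panel data, identifiers state and qdate:

. tsset year
. tsreport                 // reports any gaps in the time series
. xtset state qdate
. xtdescribe               // participation patterns show whether the panel is strongly balanced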
This manual entry provides an overview of forecasting models and several examples showing how
the forecast commands are used together. See the individual subcommands' manual entries for
detailed discussions of the various options available and specific remarks about those subcommands.

Quick start
Estimate a linear and an ARIMA regression and store their results as myreg and myarima, respectively
regress y1 x1 x2
estimates store myreg
arima y2 x3 y1, ar(1) ma(1)
estimates store myarima
Create a forecast model with the name mymodel
forecast create mymodel
Add stored estimates myreg and myarima to forecast model mymodel
forecast estimates myreg
forecast estimates myarima
Compute dynamic forecasts from 2012 through 2020 of y1 and y2 using mymodel with nonmissing
values of x1, x2, and x3 for the entire forecast horizon
forecast solve, begin(2012) end(2020)
See [TS] forecast adjust, [TS] forecast coefvector, [TS] forecast create, [TS] forecast describe,
[TS] forecast drop, [TS] forecast estimates, [TS] forecast exogenous, [TS] forecast identity,
[TS] forecast list, and [TS] forecast solve for additional Quick starts.

Syntax
forecast subcommand ... [, options]

subcommand      Description

create          create a new model
estimates       add estimation result to current model
identity        specify an identity (nonstochastic equation)
coefvector      specify an equation via a coefficient vector
exogenous       declare exogenous variables
solve           obtain one-step-ahead or dynamic forecasts
adjust          adjust a variable by add factoring, replacing, etc.
describe        describe a model
list            list all forecast commands composing current model
clear           clear current model from memory
drop            drop forecast variables
query           check whether a forecast model has been started
Remarks and examples


A forecasting model is a system of equations that jointly determine the outcomes of one or more
endogenous variables, where the term endogenous variables contrasts with exogenous variables,
whose values are not determined by the interplay of the system's equations. A model, in the context
of the forecast commands, consists of
1. zero or more stochastic equations fit using Stata estimation commands and added to the
current model using forecast estimates. These stochastic equations describe the behavior
of endogenous variables.
2. zero or more nonstochastic equations (identities) defined using forecast identity. These
equations often describe the behavior of endogenous variables that are based on accounting
identities or adding-up conditions.
3. zero or more equations stored as coefficient vectors and added to the current model using
forecast coefvector. Typically, you will fit your equations in Stata and use forecast
estimates to add them to the model. forecast coefvector is used to add equations
obtained elsewhere.
4. zero or more exogenous variables declared using forecast exogenous.
5. at least one stochastic equation or identity.
6. optional adjustments to be made to the variables of the model declared using forecast adjust.
One use of adjustments is to produce forecasts under alternative scenarios.
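In outline, a session exercising most of these pieces might look like the following sketch; all equation, variable, and model names here (yeq, mymodel, and so on) are placeholders rather than part of any example in this entry:

. regress y L.y x                             // 1. a stochastic equation
. estimates store yeq
. forecast create mymodel
. forecast estimates yeq
. forecast identity z = y + x                 // 2. an identity
. forecast exogenous x                        // 4. an exogenous variable
. forecast adjust y = y + 1 if year == 2020   // 6. an optional adjustment
. forecast solve                              // obtain forecasts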
The forecast commands are designed to be easy to use, so without further ado, we dive headfirst
into an example.


Example 1: Klein's model

Example 3 of [R] reg3 shows how to fit Klein's (1950) model of the U.S. economy using the
three-stage least-squares estimator (3SLS). Here we focus on how to make forecasts from that model
once the parameters have been estimated. In Klein's model, there are seven equations that describe
the seven endogenous variables. Three of those equations are stochastic relationships, while the rest
are identities:

c_t  = β_0 + β_1 p_t + β_2 p_{t-1} + β_3 w_t + ε_{1t}          (1)
i_t  = β_4 + β_5 p_t + β_6 p_{t-1} + β_7 k_{t-1} + ε_{2t}      (2)
wp_t = β_8 + β_9 y_t + β_{10} y_{t-1} + β_{11} yr_t + ε_{3t}   (3)
y_t  = c_t + i_t + g_t                                         (4)
p_t  = y_t - t_t - wp_t                                        (5)
k_t  = k_{t-1} + i_t                                           (6)
w_t  = wg_t + wp_t                                             (7)

The variables in the model are defined as follows:

Name   Description                          Type

c      Consumption                          endogenous
p      Private-sector profits               endogenous
wp     Private-sector wages                 endogenous
wg     Government-sector wages              exogenous
w      Total wages                          endogenous
i      Investment                           endogenous
k      Capital stock                        endogenous
y      National income                      endogenous
g      Government spending                  exogenous
t      Indirect bus. taxes + net exports    exogenous
yr     Time trend = Year - 1931             exogenous
Our model has four exogenous variables: government-sector wages (wg), government spending (g),
a time-trend variable (yr), and, for simplicity, a variable that lumps indirect business taxes and net
exports together (t). To make out-of-sample forecasts, we must populate those variables over the
entire forecast horizon before solving our model. (We use the phrases "solve our model" and "obtain
forecasts from our model" interchangeably.)
We will illustrate the entire process of fitting and forecasting our model, though our focus will be
on the latter task. See [R] reg3 for a more in-depth look at fitting models like this one. Before we
solve our model, we first estimate the parameters of the stochastic equations by loading the dataset
and calling reg3:



. use http://www.stata-press.com/data/r14/klein2
. reg3 (c p L.p w) (i p L.p L.k) (wp y L.y yr), endog(w p y) exog(t wg g)
Three-stage least-squares regression

Equation          Obs  Parms        RMSE    "R-sq"      chi2        P

c                  21      3    .9443305    0.9801    864.59   0.0000
i                  21      3    1.446736    0.8258    162.98   0.0000
wp                 21      3    .7211282    0.9863   1594.75   0.0000

                  Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

c
p
  --.          .1248904   .1081291     1.16   0.248    -.0870387    .3368194
  L1.          .1631439   .1004382     1.62   0.104    -.0337113    .3599992
w              .790081    .0379379    20.83   0.000     .715724     .8644379
_cons          16.44079   1.304549    12.60   0.000     13.88392    18.99766

i
p
  --.         -.0130791   .1618962    -0.08   0.936    -.3303898    .3042316
  L1.          .7557238   .1529331     4.94   0.000     .4559805    1.055467
k
  L1.         -.1948482   .0325307    -5.99   0.000    -.2586072   -.1310893
_cons          28.17785   6.793768     4.15   0.000     14.86231    41.49339

wp
y
  --.          .4004919   .0318134    12.59   0.000     .3381388    .462845
  L1.          .181291    .0341588     5.31   0.000     .1143411    .2482409
yr             .149674    .0279352     5.36   0.000     .094922     .2044261
_cons          1.797216   1.115854     1.61   0.107    -.3898181    3.984251

Endogenous variables:  c i wp w p y
Exogenous variables:   L.p L.k L.y yr t wg g

The output from reg3 indicates that we have a total of six endogenous variables even though our
model in fact has seven. The discrepancy stems from (6) of our model. The capital stock variable (k)
is a function of the endogenous investment variable and is therefore itself endogenous. However, k_t
does not appear in any of our model's stochastic equations, so we did not declare it in the endog()
option of reg3; from a purely estimation perspective, the contemporaneous value of the capital stock
variable is irrelevant, though it does play a role in terms of solving our model. We next store the
estimation results using estimates store:
. estimates store klein

Now we are ready to define our model using the forecast commands. We first tell Stata to
initialize a new model; we will call our model kleinmodel:
. forecast create kleinmodel
Forecast model kleinmodel started.


The name you give the model mainly controls how output from forecast commands is labeled.
More importantly, forecast create creates the internal data structures Stata uses to keep track of
your model.
The next step is to add all the equations to the model. To add the three stochastic equations we
fit using reg3, we use forecast estimates:
. forecast estimates klein
Added estimation results from reg3.
Forecast model kleinmodel now contains 3 endogenous variables.

That command tells Stata to find the estimates stored as klein and add them to our model. forecast
estimates uses those estimation results to determine that there are three endogenous variables (c, i,
and wp), and it will save the estimated parameters and other information that forecast solve will
later need to obtain predictions for those variables. forecast estimates confirmed our request by
reporting that the estimation results added were from reg3.
forecast estimates reports that our forecast model has three endogenous variables because our
reg3 command included three left-hand-side variables. The fact that we specified three additional
endogenous variables in the endog() option of reg3 so that reg3 reports a total of six endogenous
variables is irrelevant to forecast. All that matters is the number of left-hand-side variables in the
model.
We also need to specify the four identities, equations (4) through (7), that determine the other four
endogenous variables in our model. To do that, we use forecast identity:
. forecast identity y = c + i + g
Forecast model kleinmodel now contains 4 endogenous variables.
. forecast identity p = y - t - wp
Forecast model kleinmodel now contains 5 endogenous variables.
. forecast identity k = L.k + i
Forecast model kleinmodel now contains 6 endogenous variables.
. forecast identity w = wg + wp
Forecast model kleinmodel now contains 7 endogenous variables.

You specify identities similarly to how you use the generate command, except that the left-hand-side
variable is an endogenous variable in your model rather than a new variable you want to create in your
dataset. Time-series operators often come in handy when specifying identities; here we expressed
capital, a stock variable, as its previous value plus current-period investment, a flow variable. An
identity defines an endogenous variable, so each time we use forecast identity, the number of
endogenous variables in our forecast model increases by one.
Finally, we will tell Stata about the four exogenous variables. We do that with the forecast
exogenous command:
. forecast exogenous wg
Forecast model kleinmodel now contains 1 declared exogenous variable.
. forecast exogenous g
Forecast model kleinmodel now contains 2 declared exogenous variables.
. forecast exogenous t
Forecast model kleinmodel now contains 3 declared exogenous variables.
. forecast exogenous yr
Forecast model kleinmodel now contains 4 declared exogenous variables.


forecast keeps track of the exogenous variables that you declare using the forecast exogenous
command and reports the number currently in the model. When you later use forecast solve,
forecast verifies that these variables contain nonmissing data over the forecast horizon. In fact, we
could have instead typed
. forecast exogenous wg g t yr

but to avoid confusing ourselves, we prefer to issue one command for each variable in our model.
Now Stata knows everything it needs to know about the structure of our model. klein2.dta in
memory contains annual observations from 1920 to 1941. Before we make out-of-sample forecasts,
we should first see how well our model works by comparing its forecasts with actual data. There
are a couple of ways to do that. The first is to produce static forecasts. In static forecasts, actual
values of all lagged variables that appear in the model are used. Because actual values will be missing
beyond the last historical time period in the dataset, static forecasts can only forecast one period
into the future (assuming only first lags appear in the model); for that reason, they are often called
one-step-ahead forecasts. To obtain these one-step-ahead forecasts, we type
. forecast solve, prefix(s_) begin(1921) static
Computing static forecasts for model kleinmodel.
Starting period: 1921
Ending period:   1941
Forecast prefix: s_
1921: ............................................
1922: ..............................................
1923: .............................................
(output omitted )
1940: .............................................
1941: ..............................................
Forecast 7 variables spanning 21 periods.

We specified begin(1921) to request that the first year for which forecasts are produced be 1921. Our
model includes variables that are lagged one period; because our data start in 1920, 1921 is the first
year in which we can evaluate all the equations of the model. If we did not specify the begin(1921)
option, forecast solve would have started forecasting in 1941. By default, forecast solve looks
for the earliest time period in which any of the endogenous variables contains a missing value and
begins forecasting in that period. In klein2.dta, k is missing in 1941.
The header of the output confirms that we requested static forecasts for our model, and it indicates
that it will produce forecasts from 1921 through 1941, the last year in our dataset. By default,
forecast solve produces a status report in which the time period being forecast is displayed along
with a dot for each iteration the equation solver performs. The footer of the output confirms that we
forecast seven endogenous variables for 21 years.


The command we just typed will create seven new variables in our dataset, one for each endogenous
variable, containing the static forecasts. Because we specified prefix(s_), the seven new variables
will be named s_c, s_i, s_wp, s_y, s_p, s_k, and s_w. Here we graph a subset of the variables
and their forecasts:
(figure: Static Forecasts, with four panels plotting Consumption, Total Income,
Investment, and Private Wages against year over 1920-1941; solid lines denote
actual values, and dashed lines denote forecast values)
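The manual does not show the graph commands behind this figure; one way to build a similar display with tsline and graph combine is sketched below (the panel titles and graph names are our own choices):

. tsline c s_c, title(Consumption) legend(off) name(gc, replace)
. tsline y s_y, title(Total Income) legend(off) name(gy, replace)
. tsline i s_i, title(Investment) legend(off) name(gi, replace)
. tsline wp s_wp, title(Private Wages) legend(off) name(gwp, replace)
. graph combine gc gy gi gwp, title(Static Forecasts)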

Our static forecasts appear to fit the data relatively well. Had they not fit well, we would have to go
back and reexamine the specification of our model. If the static forecasts are poor, then the dynamic
forecasts that use previous periods' forecast values are unlikely to work well either. On the other
hand, even if the model produces good static forecasts, it may not produce accurate dynamic forecasts
more than one or two periods into the future.
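If you prefer a numeric summary to a visual check, you can compute the root-mean-squared error of the static forecasts by hand; a sketch for consumption:

. generate double c_sqerr = (c - s_c)^2
. summarize c_sqerr
. display "RMSE for c: " sqrt(r(mean))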
Another way to check how well a model forecasts is to produce dynamic forecasts for time periods
in which observed values are available. Here we begin dynamic forecasts in 1936, giving us six years'
data with which to compare actual and forecast values, and then graph our results:
. forecast solve, prefix(d_) begin(1936)
Computing dynamic forecasts for model kleinmodel.
Starting period: 1936
Ending period:   1941
Forecast prefix: d_
1936: ............................................
1937: ..........................................
1938: .............................................
1939: .............................................
1940: ............................................
1941: ..............................................
Forecast 7 variables spanning 6 periods.


(figure: Dynamic Forecasts, with four panels plotting Consumption, Total Income,
Investment, and Private Wages against year over 1920-1941; solid lines denote
actual values, and dashed lines denote forecast values)

Most of the in-sample forecasts look okay, though our model was unable to predict the outsized
increase in investment in 1936 and the sharp drop in 1938.

Our first example was particularly easy because all the endogenous variables appeared in levels.
However, oftentimes the endogenous variables are better modeled using mathematical transformations
such as logarithms, first differences, or percentage changes; transformations of the endogenous
variables may appear as explanatory variables in other equations. The next few examples illustrate
these complications.

Example 2: Models with transformed endogenous variables


hardware.dta contains hypothetical quarterly sales data from the Hughes Hardware Company,
a huge regional distributor of building products. Hughes Hardware has three main product lines:
dimensional lumber (dim), sheet goods such as plywood and fiberboard (sheet), and miscellaneous
hardware, including fasteners and hand tools (misc). Based on past experience, we know that
dimensional lumber sales are closely tied to the level of new home construction and that other product
lines' sales can be modeled in terms of the quantity of lumber sold. We are going to use the following
set of equations to model sales of the three product lines:

%Δdim_t = β_{10} + β_{11} ln(starts_t) + β_{12} %Δgdp_t + β_{13} unrate_t + ε_{1t}
sheet_t = β_{20} + β_{21} dim_t + β_{22} %Δgdp_t + β_{23} unrate_t + ε_{2t}
misc_t  = β_{30} + β_{31} dim_t + β_{32} %Δgdp_t + β_{33} unrate_t + ε_{3t}
Here starts_t represents the number of new homes for which construction began in quarter t, gdp_t
denotes real (inflation-adjusted) gross domestic product (GDP), and unrate_t represents the quarterly
average unemployment rate. Our equation for dim_t is written in terms of percentage changes from
quarter to quarter rather than in levels, and the percentage change in GDP appears as a regressor in
each equation rather than the level of GDP itself. In our model, these three macroeconomic factors
are exogenous, and here we will reserve the last few years' data to make forecasts; in practice, we
would need to make our own forecasts of these macroeconomic variables or else purchase a forecast.
We will approximate the percentage change variables by taking first-differences of the natural
logarithms of the respective underlying variables. In terms of estimation, this does not present any
challenges. Here we load the dataset into memory, create the necessary log-transformed variables,


and fit the three equations using regress with the data through the end of 2009. We use quietly
to suppress the output from regress to save space, and we store each set of estimation results as
we go. In Stata, we type
. use http://www.stata-press.com/data/r14/hardware, clear
(Hughes Hardware sales data)
. generate lndim = ln(dim)
. generate lngdp = ln(gdp)
. generate lnstarts = ln(starts)
. quietly regress D.lndim lnstarts D.lngdp unrate if qdate <= tq(2009q4)
. estimates store dim
. quietly regress sheet dim D.lngdp unrate if qdate <= tq(2009q4)
. estimates store sheet
. quietly regress misc dim D.lngdp unrate if qdate <= tq(2009q4)
. estimates store misc

The equations for sheet goods and miscellaneous items do not present any challenges for forecast,
so we proceed by creating a new forecast model named salesfcast and adding those two equations:
. forecast create salesfcast, replace
(Forecast model kleinmodel ended.)
Forecast model salesfcast started.
. forecast estimates sheet
Added estimation results from regress.
Forecast model salesfcast now contains 1 endogenous variable.
. forecast estimates misc
Added estimation results from regress.
Forecast model salesfcast now contains 2 endogenous variables.

The equation for dimensional lumber requires more finesse. First, because our dependent variable
contains a time-series operator, we must use the names() option of forecast estimates to specify
a valid name for the endogenous variable being added:
. forecast estimates dim, names(dlndim)
Added estimation results from regress.
Forecast model salesfcast now contains 3 endogenous variables.

We have entered the endogenous variable dlndim into our model, but it represents the left-hand-side
variable of the regression equation we just added. That is, dlndim is the first-difference of the
logarithm of dim, the sales variable we ultimately want to forecast. We can specify an identity to
reverse the first-differencing, providing us with a variable containing the logarithm of dim:
. forecast identity lndim = L.lndim + dlndim
Forecast model salesfcast now contains 4 endogenous variables.

Finally, we can specify another identity to obtain dim from lndim:


. forecast identity dim = exp(lndim)
Forecast model salesfcast now contains 5 endogenous variables.


Now we can solve the model. We will obtain dynamic forecasts starting in the first quarter of
2010, and we will use the log(off) option to suppress the iteration log:
. forecast solve, begin(tq(2010q1)) log(off)
Computing dynamic forecasts for model salesfcast.
Starting period: 2010q1
Ending period:   2012q3
Forecast prefix: f_
Forecast 5 variables spanning 11 periods.

We did not specify the prefix() or suffix() option, so by default, forecast prefixed our forecast
variables with f_. The following graph illustrates our forecasts:
(figure: Hughes Hardware Sales ($mil.), with three panels plotting Dimensional
Lumber, Sheet Goods, and Miscellany over 2008q1-2012q1; each panel shows the
forecast and actual series)

Our model performed well in 2010, but it did not forecast the pickup in sales that occurred in 2011
and 2012.

Technical note
For more information about working with log-transformed variables, see the second technical note
in [TS] forecast estimates.

The forecast commands can also be used to make forecasts for strongly balanced panel datasets.
A panel dataset is strongly balanced when all the panels have the same number of observations, and
the observations for different panels were all made at the same times. Our next example illustrates
how to produce a forecast with panel data and highlights a couple of key assumptions one must make.

Example 3: Forecasting a panel dataset


In the previous example, we mentioned that Hughes Hardware was a regional distributor of building
products. In fact, Hughes Hardware operates in five states across the southern United States: Texas,
Oklahoma, Louisiana, Arkansas, and Mississippi. The company is in the process of deciding whether
it should open additional distribution centers or move existing ones to new locations. As part of the
process, we need to make sales forecasts for each of the states the company serves.


To make our state-level forecasts, we will use essentially the same model that we did for the
company-wide forecast, though we will also include state-specific effects. The model we will use is

%Δdim_it = β_{10} + β_{11} ln(starts_it) + β_{12} rgspgrowth_it + β_{13} unrate_it + u_{1i} + ε_{1it}
sheet_it = β_{20} + β_{21} dim_it + β_{22} rgspgrowth_it + β_{23} unrate_it + u_{2i} + ε_{2it}
misc_it  = β_{30} + β_{31} dim_it + β_{32} rgspgrowth_it + β_{33} unrate_it + u_{3i} + ε_{3it}
The subscript i indexes states, and we have replaced the gdp variable that was in our previous model
with rgspgrowth, which measures the annual growth rate in real gross state product (GSP), the
state-level analogue to national GDP. The GSP data are released only annually, so we have replicated
the annual growth rate for all four quarterly observations in a given year. For example, rgspgrowth
is about 5.3 for the four observations for the state of Texas in the year 2007; in 2007, Texas's real
GSP was 5.3% higher than in 2006.
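Had we needed to build rgspgrowth ourselves, one way to replicate an annual series across quarters is sketched here, assuming a hypothetical annual file gsp_annual.dta with variables state, year, and rgspgrowth:

. use gsp_annual, clear
. expand 4                                    // one record per quarter
. sort state year
. by state year: generate int quarter = _n
. generate qdate = yq(year, quarter)
. format qdate %tq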
The state-level error terms are u_{1i}, u_{2i}, and u_{3i}. Here we will use the fixed-effects estimator and
fit the three equations via xtreg, fe, again using data only through the end of 2009 so that we
can examine how well our model forecasts. Our first task is to fit the three equations and store the
estimation results. At the same time, we will also use predict to obtain the predicted fixed-effects
terms. You will see why in just a moment. Because the regression results are not our primary concern
here, we will use quietly to suppress the output.
In Stata, we type
. use http://www.stata-press.com/data/r14/statehardware, clear
(Hughes state-level sales data)
. generate lndim = ln(dim)
. generate lnstarts = ln(starts)
. quietly xtreg D.lndim lnstarts rgspgrowth unrate if qdate <= tq(2009q4), fe
. predict dlndim_u, u
(45 missing values generated)
. estimates store dim
. quietly xtreg sheet dim rgspgrowth unrate if qdate <= tq(2009q4), fe
. predict sheet_u, u
(40 missing values generated)
. estimates store sheet
. quietly xtreg misc dim rgspgrowth unrate if qdate <= tq(2009q4), fe
. predict misc_u, u
(40 missing values generated)
. estimates store misc

Having fit the model, we are almost ready to make forecasts. First, though, we need to consider
how to handle the state-level error terms. If we simply created a forecast model, added our three
estimation results, then called forecast solve, Stata would forecast misc_it, for example, as a
function of dim_it, rgspgrowth_it, unrate_it, and the estimate of the constant term β_{30}. However,
our model implies that misc_it also depends on u_{3i} and the idiosyncratic error term ε_{3it}. We will
ignore the idiosyncratic error for now (but see the discussion of simulations in [TS] forecast solve).
By construction, u_{3i} has a mean of zero when averaged across all panels, but in general, u_{3i} is
nonzero for any individual panel. Therefore, we should include it in our forecasts.
After you fit a model with xtreg, you can predict the panel-specific error component for the
subset of observations in the estimation sample. Typically, xtreg is used in situations where the
number of observations per panel T is modest. In those cases, the estimates of the panel-specific
error components are likely to be noisy (analogous to estimating a sample mean with just a few
observations). Often asymptotic analyses of panel-data estimators assume T is fixed, and in those
cases, the estimators of the panel-specific errors are inconsistent.


However, in forecasting applications, the number of observations per panel is usually larger than
in most other panel-data applications. With enough observations, we can have more confidence in
the estimated panel-specific errors. If we are willing to assume that we have decent estimates of the
panel-specific errors and that those panel-level effects will remain constant over the forecast horizon,
then we can incorporate them into our forecasts. Because predict only provided us with estimates
of the panel-level effects for the estimation sample, we need to extend them into the forecast horizon.
An easy way to do that is to use egen to create a new set of variables:
. by state: egen dlndim_u2 = mean(dlndim_u)
. by state: egen sheet_u2 = mean(sheet_u)
. by state: egen misc_u2 = mean(misc_u)

We can use forecast adjust to incorporate these terms into our forecasts. The following commands
define our forecast model, including the estimated panel-specific terms:
. forecast create statemodel, replace
(Forecast model salesfcast ended.)
Forecast model statemodel started.
. forecast estimates dim, names(dlndim)
Added estimation results from xtreg.
Forecast model statemodel now contains 1 endogenous variable.
. forecast adjust dlndim = dlndim + dlndim_u2
Endogenous variable dlndim now has 1 adjustment.
. forecast identity lndim = L.lndim + dlndim
Forecast model statemodel now contains 2 endogenous variables.
. forecast identity dim = exp(lndim)
Forecast model statemodel now contains 3 endogenous variables.
. forecast estimates sheet
Added estimation results from xtreg.
Forecast model statemodel now contains 4 endogenous variables.
. forecast adjust sheet = sheet + sheet_u2
Endogenous variable sheet now has 1 adjustment.
. forecast estimates misc
Added estimation results from xtreg.
Forecast model statemodel now contains 5 endogenous variables.
. forecast adjust misc = misc + misc_u2
Endogenous variable misc now has 1 adjustment.

We used forecast adjust to perform our adjustment to dlndim immediately after we added those
estimation results so that we would not forget to do so and before we used identities to obtain the
actual dim variable. However, we could have specified the adjustment at any time. Regardless of
when you specify an adjustment, forecast solve performs those adjustments immediately after the
variable being adjusted is computed.


Now we can solve our model. Here we obtain dynamic forecasts beginning in the first quarter of
2010:
. forecast solve, begin(tq(2010q1))
Computing dynamic forecasts for model statemodel.
Starting period: 2010q1
Ending period:   2011q4
Number of panels: 5
Forecast prefix: f_
Solving panel 1
Solving panel 2
Solving panel 3
Solving panel 4
Solving panel 5
Forecast 5 variables spanning 8 periods for 5 panels.

Here is our state-level forecast for sheet goods:


(figure: Sales of Sheet Goods ($mil.), with five panels for MS, LA, AR, TX, and OK
plotted over 2008-2012; each panel shows the forecast and actual series)

Similar to our company-wide forecast, our state-level forecast failed to call the bottom in sales that
occurred in 2011. Because our model missed the shift in sales momentum in every one of the five
states, we would be inclined to go back and try respecifying one or more of the equations in our
model. On the other hand, if our model forecasted most of the states well but performed poorly in
just a few states, then we would first want to investigate whether any events in those states could
account for the unexpected results.

Technical note
Stata also provides the areg command for fitting a linear regression with a large dummy-variable
set; areg is designed for situations where the number of groups (panels) is fixed, while the number of
observations per panel increases with the sample size. When the goal is to create a forecast model
for panel data, you should nevertheless use xtreg rather than areg. The forecast commands
require knowledge of the panel-data settings declared using xtset as well as panel-related estimation
information saved by the other panel-data commands in order to produce forecasts with panel datasets.


In the previous example, none of our equations contained lagged dependent variables as regressors.
If an equation did contain a lagged dependent variable, then one could use a dynamic panel-data
(DPD) estimator such as xtabond, xtdpd, or xtdpdsys. DPD estimators are designed for cases
where the number of observations per panel T is small. As shown by Nickell (1981), the bias
of the standard fixed- and random-effects estimators in the presence of lagged dependent variables
is of order 1/T and is thus particularly severe when each panel has relatively few observations.
Judson and Owen (1999) perform Monte Carlo experiments to examine the relative performance of
different panel-data estimators in the presence of lagged dependent variables when used with panel
datasets having dimensions more commonly encountered in macroeconomic applications. Based on
their results, while the bias of the standard fixed-effects estimator (LSDV in their notation) is not
inconsequential even when T = 20, for T = 30, the fixed-effects estimator does work as well as most
alternatives. The only estimator that appreciably outperformed the standard fixed-effects estimator
when T = 30 is the least-squares dummy variable corrected estimator (LSDVC in their notation).
Bruno (2005) provides a Stata implementation of that estimator. Many datasets used in forecasting
situations contain even more observations per panel, so the Nickell bias is unlikely to be a major
concern.
In this manual entry, we have provided an overview of the forecast commands and provided
several examples to get you started. The command-specific entries fill in the details.

Video example
Tour of forecasting

References
Box-Steffensmeier, J. M., J. R. Freeman, M. P. Hitt, and J. C. W. Pevehouse. 2014. Time Series Analysis for the
Social Sciences. New York: Cambridge University Press.
Bruno, G. S. F. 2005. Estimation and inference in dynamic unbalanced panel-data models with a small number of
individuals. Stata Journal 5: 473-500.
Judson, R. A., and A. L. Owen. 1999. Estimating dynamic panel data models: a guide for macroeconomists. Economics
Letters 65: 9-15.
Klein, L. R. 1950. Economic Fluctuations in the United States 1921-1941. New York: Wiley.
Nickell, S. J. 1981. Biases in dynamic models with fixed effects. Econometrica 49: 1417-1426.

Also see
[TS] var Vector autoregressive models
[TS] tsset Declare data to be time-series data
[R] ivregress Single-equation instrumental-variables regression
[R] reg3 Three-stage estimation for systems of simultaneous equations
[R] regress Linear regression
[XT] xtreg Fixed-, between-, and random-effects and population-averaged linear models
[XT] xtset Declare data to be panel data

Title
forecast adjust Adjust a variable by add factoring, replacing, etc.

Description     Quick start     Syntax     Remarks and examples     Stored results     Reference     Also see

Description
forecast adjust specifies an adjustment to be applied to an endogenous variable in the model.
Adjustments are typically used to produce alternative forecast scenarios or to incorporate outside
information into a model. For example, you could use forecast adjust with a macroeconomic
model to simulate the effect of an oil price shock whereby the price of oil spikes $50 higher than
your model otherwise predicts in a given quarter.
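For instance, supposing the model contained an endogenous variable oilprice and a quarterly time variable qdate (both hypothetical here), that oil-shock scenario could be written as

. forecast adjust oilprice = oilprice + 50 if qdate == tq(2020q1)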

Quick start
Adjust the endogenous variable y in forecast to account for the variable shock in 1990
forecast adjust y = y + shock if year==1990
Adjust the endogenous variable y in forecast to account for a structural change in its mean that
occurred in year 2000
forecast adjust y = y + 400000 if year > 2000

Syntax
forecast adjust varname = exp [if] [in]

varname is the name of an endogenous variable that has been previously added to the model using
forecast estimates or forecast coefvector.
exp represents a Stata expression; see [U] 13 Functions and expressions.

Remarks and examples


When preparing a forecast, you often want to produce several different scenarios. The baseline
scenario is the default forecast that your model produces. It reflects the interplay among the equations
and exogenous variables without any outside forces acting on the model. Users of forecasts often
want answers to questions like "What happens to the economy if housing prices decline 10% more
than your baseline forecast suggests they will?" or "What happens to unemployment and interest rates
if tax rates increase?" forecast adjust lets you explore such questions by specifying alternative
paths for one or more endogenous variables in your model.

Example 1: Revisiting the Klein model


In example 1 of [TS] forecast, we produced a baseline forecast for the classic Klein (1950) model.
We noted that investment declined quite substantially in 1938. Suppose the government had a plan
such as a one-year investment tax credit that it could enact in 1939 to stimulate investment. Based
on discussions with accountants, tax experts, and business leaders, say this plan would encourage an
additional $1 billion in investment in 1939. How would this additional investment affect the economy?

To answer this question, we first refit the Klein (1950) model from [TS] forecast using the data
through 1938 and then obtain dynamic forecasts starting in 1939. We will prefix these forecast
variables with bl_ to indicate they are the baseline forecasts. In Stata, we type
. use http://www.stata-press.com/data/r14/klein2
. quietly reg3 (c p L.p w) (i p L.p L.k) (wp y L.y yr) if year < 1939,
> endog(w p y) exog(t wg g)
. estimates store klein
. forecast create kleinmodel
Forecast model kleinmodel started.
. forecast estimates klein
Added estimation results from reg3.
Forecast model kleinmodel now contains 3 endogenous variables.
. forecast identity y = c + i + g
Forecast model kleinmodel now contains 4 endogenous variables.
. forecast identity p = y - t - wp
Forecast model kleinmodel now contains 5 endogenous variables.
. forecast identity k = L.k + i
Forecast model kleinmodel now contains 6 endogenous variables.
. forecast identity w = wg + wp
Forecast model kleinmodel now contains 7 endogenous variables.
. forecast exogenous wg
Forecast model kleinmodel now contains 1 declared exogenous variable.
. forecast exogenous g
Forecast model kleinmodel now contains 2 declared exogenous variables.
. forecast exogenous t
Forecast model kleinmodel now contains 3 declared exogenous variables.
. forecast exogenous yr
Forecast model kleinmodel now contains 4 declared exogenous variables.
. forecast solve, prefix(bl_) begin(1939)
Computing dynamic forecasts for model kleinmodel.
Starting period: 1939
Ending period:   1941
Forecast prefix: bl_
1939: .......................................................................
....................................................
1940: .......................................................................
................................................
1941: .......................................................................
.................................................
Forecast 7 variables spanning 3 periods.

To model our $1 billion increase in investment in 1939, we type


. forecast adjust i = i + 1 if year == 1939
Endogenous variable i now has 1 adjustment.

While computing the forecasts for 1939, whenever forecast evaluates the equation for i, it will set
i to be higher than it would otherwise be by 1. Now we re-solve our model using the prefix alt_
to indicate this is an alternative forecast:


. forecast solve, prefix(alt_) begin(1939)
Computing dynamic forecasts for model kleinmodel.
Starting period:  1939
Ending period:    1941
Forecast prefix:  alt_
1939: .......................................................................
      ...................................................
1940: .......................................................................
      ..............................................
1941: .......................................................................
      ................................................
Forecast 7 variables spanning 3 periods.

The following graph shows how investment and total income respond to this policy shock.
(figure: Effect of $1 billion investment tax credit, with two panels, Investment
and Total Income, each in $ billion plotted against year over 1938-1941; solid
lines denote the forecast without the tax credit, and dashed lines denote the
forecast with the tax credit)

Both investment and total income would be higher not just in 1939 but also in 1940; the higher
capital stock implied by the additional investment raises total output (and hence income) even after
the tax credit expires. Let's look at these two variables in more detail:
. list year bl_i alt_i bl_y alt_y if year >= 1938, sep(0)

        year       bl_i      alt_i       bl_y      alt_y

 19.    1938       -1.9       -1.9       60.9       60.9
 20.    1939   3.757227   6.276423   75.57685   80.71709
 21.    1940   7.971523   9.501909   89.67435   94.08473
 22.    1941   16.16375   16.20362   123.0809    124.238

Although we simulated a policy that we thought would encourage $1 billion in investment,
investment in fact rises about $2.5 billion in 1939 according to our model. That is because higher
investment raises total income, which also affects private-sector profits, which beget further changes
in investment, and so on.
The investment multiplier in this example might strike you as implausibly large, but it highlights an
important attribute of forecasting models. Studying each equation's estimated coefficients in isolation
can help to unveil some specification errors, but one must also consider how those equations interact.


It is possible to construct models in which each equation appears to be well specified, but the model
nevertheless forecasts poorly or suggests unlikely behavior in response to policy shocks.
In the previous example, we applied a single adjustment to a single endogenous variable in a
single time period. However, forecast allows you to specify forecast adjust multiple times with
each endogenous variable, and many real-world policy simulations require adjustments to multiple
variables. You can also consider policies that affect variables for multiple periods.
For example, suppose we wanted to see what would happen if our investment tax credit lasted
two years instead of one. One way would be to use forecast adjust twice:
. forecast adjust i = i + 1 if year == 1939
. forecast adjust i = i + 1 if year == 1940

A second way would be to make that adjustment using one command:


. forecast adjust i = i + 1 if year == 1939 | year == 1940

To make adjustments lasting more than one or two periods, it makes more sense to create an
adjustment variable. A third way to simulate our two-year tax credit is
. generate i_adj = 0
. replace i_adj = 1 if year == 1939 | year == 1940
. forecast adjust i = i + i_adj

So far in our discussion of forecast adjust, we have always shown an endogenous variable
being adjusted by adding a number or variable to it. However, any valid expression is allowed on the
right-hand side of the equals sign. If you want to explore the effects of a policy that will increase
investment by 10% in 1939, you could type
. forecast adjust i = 1.1*i if year == 1939

If you believe investment will be -2.0 in 1939, you could type

. forecast adjust i = -2.0 if year == 1939

An alternative way to force forecasts of endogenous variables to take on prespecified values is


discussed in example 1 of [TS] forecast solve.

Stored results
forecast adjust stores the following in r():
Macros
  r(lhs)          left-hand-side (endogenous) variable
  r(rhs)          right-hand side of identity
  r(basenames)    base names of variables found on right-hand side
  r(fullnames)    full names of variables found on right-hand side
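These results can be inspected immediately after the command; a small sketch:

. forecast adjust i = i + 1 if year == 1939
. return list
. display "adjusted variable: `r(lhs)'"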

Reference
Klein, L. R. 1950. Economic Fluctuations in the United States 1921-1941. New York: Wiley.

Also see
[TS] forecast Econometric model forecasting
[TS] forecast solve Obtain static and dynamic forecasts

Title
forecast clear Clear current model from memory

Description

Syntax

Remarks and examples

Also see

Description
forecast clear removes the current forecast model from memory.

Syntax
forecast clear

Remarks and examples


For an overview of the forecast commands, see [TS] forecast. This manual entry assumes you
have already read that manual entry. forecast allows you to have only one model in memory
at a time. You use forecast clear to remove the current model from memory. Forecast models
themselves do not consume a significant amount of memory, so there is no need to clear a model from
memory unless you intend to create a new one. An alternative to forecast clear is the replace
option with forecast create.
Calling forecast clear when no forecast model exists in memory does not result in an error.
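As a sketch, the typical pattern is

. forecast create firstmodel
  (add equations, solve, and examine results)
. forecast clear
. forecast create secondmodel

where the model names are placeholders; without the forecast clear (or the replace option of forecast create), the second forecast create would be rejected.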

Also see
[TS] forecast Econometric model forecasting
[TS] forecast create Create a new forecast model


Title
forecast coefvector Specify an equation via a coefficient vector

Description     Quick start     Syntax     Options     Remarks and examples     Methods and formulas     Also see
Description
forecast coefvector adds equations that are stored as coefficient vectors to your forecast model.
Typically, equations are added using forecast estimates and forecast identity. forecast
coefvector is used in less-common situations where you have a vector of parameters that represent
a linear equation.
Most users of the forecast commands will not need to use forecast coefvector. We recommend skipping this manual entry until you are familiar with the other features of forecast.

Quick start
Incorporate coefficient vector of the endogenous equation of y to be used by forecast solve
forecast coefvector y
As above, but include the variance of the estimated parameters stored in matrix mymat
forecast coefvector y, variance(mymat)

Syntax
forecast coefvector cname [, options]

cname is a Stata matrix with one row.


options                         Description

variance(vname)                 specify parameter variance matrix
errorvariance(ename)            specify additive error variance matrix
names(namelist[, replace])      use namelist for names of left-hand-side variables
Options
variance(vname) specifies that Stata matrix vname contains the variance matrix of the estimated
parameters. This option only has an effect if you specify the simulate() option when calling
forecast solve and request sim techniques betas or residuals. See [TS] forecast solve.
errorvariance(ename) specifies that the equations being added include an additive error term with
variance ename, where ename is the name of a Stata matrix. The number of rows and columns in
ename must match the number of equations represented by coefficient vector cname. This option
only has an effect if you specify the simulate() option when calling forecast solve and
request sim techniques errors or residuals. See [TS] forecast solve.



names(namelist[, replace]) instructs forecast coefvector to use namelist as the names of the
left-hand-side variables in the coefficient vector being added. By default, forecast coefvector
uses the equation names on the column stripe of cname. You must use this option if any of the
equation names stored with cname contains time-series operators.
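For instance, if the single equation in a hypothetical coefficient vector b were labeled with the operated variable D.y, you would need to supply a legal new name yourself:

. matrix b = (0.5, 0.2)
. matrix coleq b = D.y:L.x D.y:_cons
. forecast coefvector b, names(dy)

Here dy becomes the endogenous variable holding the equation's left-hand side, and you would typically add an identity afterward to undo the differencing.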

Remarks and examples


For an overview of the forecast commands, see [TS] forecast. This manual entry assumes you
have already read that manual entry. This manual entry also assumes that you are familiar with Stata's
matrices and the concepts of row and column names that can be attached to them; see [P] matrix.
You use forecast coefvector to add endogenous variables to your model that are defined by linear
equations, where the linear equations are stored in a coefficient (parameter) vector.
Remarks are presented under the following headings:
Introduction
Simulations with coefficient vectors

Introduction
forecast coefvector can be used to add equations that you obtained elsewhere to your model.
For example, you might see the estimated coefficients for an equation in an article and want to add
that equation to your model. User-written estimators that do not implement a predict command can
also be included in forecast models via forecast coefvector. forecast coefvector can also
be useful in situations where you want to simulate time-series data, as the next example illustrates.

Example 1: A shock to an autoregressive process


Consider the following autoregressive process:

y_t = 0.9 y_{t-1} - 0.6 y_{t-2} + 0.3 y_{t-3}

Suppose y_t is initially equal to zero. How does y_t evolve in response to a one-unit shock at time
t = 5? We can use forecast coefvector to find out. First, we create a small dataset with time
variable t and set our target variable y equal to zero:
. set obs 20
number of observations (_N) was 0, now 20
. generate t = _n
. tsset t
time variable: t, 1 to 20
delta: 1 unit
. generate y = 0

Now lets think about our coefficient vector. The only tricky part is in labeling the columns. We can
represent the lagged values of yt using time-series operators; there is just one equation, corresponding
to variable y. We can use matrix coleq to apply both variable and equation names to the columns
of our matrix. In Stata, we type



. matrix y = (.9, -.6, 0.3)
. matrix coleq y = y:L.y y:L2.y y:L3.y
. matrix list y
y[1,3]
         y:    y:    y:
         L.   L2.   L3.
          y     y     y
r1       .9   -.6    .3

forecast coefvector ignores the row name of the vector being added (r1 here), so we can leave
it as is. Next we create a forecast model and add y:
. forecast create
Forecast model started.
. forecast coefvector y
Forecast model now contains 1 endogenous variable.

To shock our system at t = 5, we can use forecast adjust:


. forecast adjust y = 1 in 5
Endogenous variable y now has 1 adjustment.

Now we can solve our model. Because our y variable is filled in for the entire dataset, forecast
solve will not be able to automatically determine when forecasting should commence. We have three
lags in our process, so we will start at t = 4. To reduce the amount of output, we specify log(off):
. forecast solve, begin(4) log(off)
Computing dynamic forecasts for current model.
Starting period:  4
Ending period:    20
Forecast prefix:  f_

Forecast 1 variable spanning 17 periods.

(figure: Impulse-response function, plotting the response of y against t over
t = 1 to 20; caption: Evolution of y_t in response to a unit shock at t = 5)

The graph shows our shock causing y to jump to 1 at t = 5. At t = 6, we can see that y = 0.9, and
at t = 7, we can see that y = 0.9(0.9) - 0.6(1) = 0.21. At t = 8, all three lags are in play:
y = 0.9(0.21) - 0.6(0.9) + 0.3(1) = -0.051.


The previous example used a coefficient vector representing a single equation. However, coefficient
vectors can contain multiple equations. For example, say we read an article and saw the following
results displayed:

x_t = 0.2 + 0.3 x_{t-1} - 0.8 y_t
y_t = 0.1 + 0.7 y_{t-1} + 0.3 x_t - 0.2 x_{t-1}
We can add both equations at once to our forecast model. Again the key is in labeling the columns.
forecast coefvector understands _cons to mean a constant term, and it looks at the equation
names on the vector's columns to determine how many equations there are and to what endogenous
variables they correspond:
. matrix eqvector = (0.2, 0.3, -0.8, 0.1, 0.7, 0.3, -0.2)
. matrix coleq eqvector = x:_cons x:L.x x:y y:_cons y:L.y y:x y:L.x
. matrix list eqvector
eqvector[1,7]
          x:     x:    x:      y:     y:    y:     y:
                 L.                   L.           L.
       _cons      x     y   _cons      y     x      x
r1        .2     .3   -.8      .1     .7    .3    -.2

We could then type

. forecast coefvector eqvector

to add our coefficient vector to a model.


Just like with estimation results whose left-hand-side variables contain time-series operators, if
any of the equation names of the coefficient vector being added contains time-series operators, you
must use the names() option of forecast coefvector to specify alternative names.

Simulations with coefficient vectors


The forecast solve command provides the option simulate(sim technique, ...) to perform
stochastic simulations and obtain measures of forecast uncertainty. How forecast solve handles
coefficient vectors when performing these simulations depends on the options provided with forecast
coefvector. There are four cases to consider:
1. You specify neither variance() nor errorvariance() with forecast coefvector. You
have provided no measures of uncertainty with this coefficient vector. Therefore, forecast
solve treats it like an identity. No random errors or residuals are added to this coefficient
vectors linear combination, nor are the coefficients perturbed in any way.
2. You specify variance() but not errorvariance(). The variance() option provides
the covariance matrix of the estimated parameters in the coefficient vector. Therefore, the
coefficient vector is taken to be stochastic. If you request sim technique betas, this coefficient
vector is assumed to be distributed multivariate normal with a mean equal to the original
value of the vector and covariance matrix as specified in the variance() option, and random
draws are taken from this distribution. If you request sim technique residuals, randomly
chosen static residuals are added to this coefficient vectors linear combination. Because
you did not specify a covariance matrix for the error terms with the errorvariance()
option, sim technique errors cannot draw random errors for this coefficient vectors linear
combination, so sim technique errors has no impact on the equations.


3. You specify errorvariance() but not variance(). Because you specified a covariance
matrix for the assumed additive error term, the equations represented by this coefficient vector
are stochastic. If you request sim technique residuals, randomly chosen static residuals
are added to this coefficient vectors linear combination. If you request sim technique
errors, multivariate normal errors with mean zero and covariance matrix as specified
in the errorvariance() option are added during the simulations. However, specifying
sim technique betas does not affect the equations because there is no covariance matrix
associated with the coefficients.
4. You specify both variance() and errorvariance(). The equations represented by this
coefficient vector are stochastic, and forecast solve treats the coefficient vector just like
an estimation result. sim techniques betas, residuals, and errors all work as expected.
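As a sketch of case 2, suppose y is the coefficient vector from example 1 and V is a 3 x 3 covariance matrix for its parameters (the values below are invented for illustration); then within a forecast model, you might type

. matrix V = (0.04, 0, 0 \ 0, 0.04, 0 \ 0, 0, 0.04)
. forecast coefvector y, variance(V)
. forecast solve, begin(4) simulate(betas, statistic(stddev, prefix(sd_)) reps(100))

so that the simulations perturb the three autoregressive coefficients on each replication.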

Methods and formulas


Let β denote the 1 × k coefficient vector being added. Then the matrix specified in the variance()
option must be k × k. Row and column names for that matrix are ignored.
Let m denote the number of equations represented by β. That is, if β is stored as Stata matrix
beta and local macro m is to hold the number of equations, then in Stata parlance,
. local eqnames : coleq beta
. local eq : list uniq eqnames
. local m : list sizeof eq

Then the matrix specified in the errorvariance() option must be m × m. Row and column names
for that matrix are ignored.

Also see
[TS] forecast Econometric model forecasting
[TS] forecast solve Obtain static and dynamic forecasts
[P] matrix Introduction to matrix commands
[P] matrix rownames Name rows and columns

Title
forecast create Create a new forecast model

Description     Quick start     Syntax     Option     Remarks and examples     Also see

Description
forecast create creates a new forecast model in Stata.

Quick start
Start a forecast model called myforecast
forecast create myforecast
As above, but clear the existing model myforecast from memory if it exists
forecast create myforecast, replace

Syntax
forecast create [name] [, replace]

name is an optional name that can be given to the model. name must follow the naming conventions
described in [U] 11.3 Naming conventions.

Option
replace causes Stata to clear the existing model from memory before creating name. You may have
only one model in memory at a time. By default, forecast create issues an error message if
another model is already in memory.

Remarks and examples


For an overview of the forecast commands, see [TS] forecast. This manual entry assumes you
have already read that manual entry. The forecast create command creates a new forecast model
in Stata. You must create a model before you can add equations or solve it. You can have only one
model in memory at a time.
You may optionally specify a name for your model. That name will appear in the output produced
by the various forecast subcommands.


Example 1
Here we create a model named salesfcast:
. forecast create salesfcast
Forecast model salesfcast started.

Technical note
Warning: Do not type clear all, clear mata, or clear results after creating a forecast
model with forecast create unless you intend to remove your forecast model. Typing clear all
or clear mata eliminates the internal structures used to store your forecast model. Typing clear
results clears all estimation results from memory. If your forecast model includes estimation results
that rely on the ability to call predict, you will not be able to solve your model.

Also see
[TS] forecast Econometric model forecasting
[TS] forecast clear Clear current model from memory

Title
forecast describe Describe features of the forecast model

Description     Quick start     Syntax     Options     Remarks and examples     Stored results     Reference     Also see

Description
forecast describe displays information about the forecast model currently in memory. For
example, you can obtain information regarding all the endogenous or exogenous variables in the
model, the adjustments used for alternative scenarios, or the solution method used. Typing forecast
describe without specifying a particular aspect of the model is equivalent to typing forecast
describe for every available aspect and can result in more output than you want, particularly if you
also request a detailed description.

Quick start
Display information about the estimates in the current forecast
forecast describe estimates
Display information about coefficient vectors
forecast describe coefvector
Display endogenous variables defined by identities
forecast describe identity
Display names of declared exogenous variables
forecast describe exogenous
Display information about the solution method used
forecast describe solve
Display information about endogenous variables
forecast describe endogenous
All the above
forecast describe


Syntax
Describe the current forecast model
    forecast describe [, options]

Describe particular aspects of the current forecast model
    forecast describe aspect [, options]

aspect        Description
estimates     estimation results
coefvector    coefficient vectors
identity      identities
exogenous     declared exogenous variables
adjust        adjustments to endogenous variables
solve         forecast solution information
endogenous    all endogenous variables

options       Description
brief         provide a one-line summary
detail        provide more-detailed information

Specifying detail provides no additional information with aspects exogenous, endogenous, and solve.

Options
brief requests that forecast describe produce a one-sentence summary of the aspect specified.
For example, forecast describe exogenous, brief will tell you just the current forecast
model's name and the number of exogenous variables in the model.
detail requests a more-detailed description of the aspect specified. For example, typing forecast
describe estimates lists all the estimation results added to the model using forecast estimates,
the estimation commands used, and the number of left-hand-side variables in each estimation
result. When you specify forecast describe estimates, detail, the output includes a list
of all the left-hand-side variables entered with forecast estimates.
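For instance, a one-line summary of the identities in whatever model is in memory could be requested
as follows; the exact message depends on the model you have built:
. forecast describe identity, brief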

Remarks and examples

For an overview of the forecast commands, see [TS] forecast. This manual entry assumes you
have already read that manual entry. forecast describe displays information about the forecast
model currently in memory. You can obtain either all the information at once or information about
individual aspects of your model, where we use the word "aspect" to refer to, for example, just
the estimation results, identities, or solution information.


Example 1
In example 1 of [TS] forecast, we created and forecasted Klein's (1950) model of the U.S. economy.
Here we obtain information about all the endogenous variables in the model:
. forecast describe endogenous
Forecast model kleinmodel contains 7 endogenous variables:

       Variable   Source      # adjustments
  1.   c          estimates   0
  2.   i          estimates   0
  3.   wp         estimates   0
  4.   y          identity    0
  5.   p          identity    0
  6.   k          identity    0
  7.   w          identity    0

As we mentioned in [TS] forecast, there are seven endogenous variables in this model. Three of those
variables (c, i, and wp) were left-hand-side variables in equations we fitted and added to our forecast
model with forecast estimates. The other four variables were defined by identities added with
forecast identity. The right-hand column of the table indicates that none of our endogenous
variables contains adjustments specified using forecast adjust.
We can obtain more information about the estimated equations in our model using forecast
describe estimates:
. forecast describe estimates, detail
Forecast model kleinmodel contains 1 estimation result:

  Estimation result   Command   LHS variables
  1. klein            reg3      c
                                i
                                wp

Our model has one estimation result, klein, containing results produced by the reg3 command. If
we had not specified the detail option, forecast describe estimates would have simply stated
the number of left-hand-side variables (3) rather than listing them.


At the end of example 1 in [TS] forecast, we obtained dynamic forecasts beginning in 1936. Here
we obtain information about the solution:
. forecast describe solve
Forecast model kleinmodel has been solved:
  Forecast horizon
    Begin                           1936
    End                             1941
    Number of periods               6
  Forecast variables
    Prefix                          d_
    Number of variables             7
    Storage type                    float
    Type of forecast                Dynamic
  Solution
    Technique                       Damped Gauss-Seidel (0.200)
    Maximum iterations              500
    Tolerance for function values   1.0e-09
    Tolerance for function zero     (not applicable)

We obtain information about the forecast horizon, how the variables holding our forecasts were
created and stored, and the solution technique used. If we had used the simulate() option with
forecast solve, we would have obtained information about the types of simulations performed and
the variables used to hold the results.

Stored results
When you specify option brief, only a limited number of results are stored. In the tables
below, a superscript B indicates results that are available even after brief is specified. forecast
coefvector saves certain results only if detail is specified; these are indicated by superscript D.
Typing forecast describe without specifying an aspect does not return any results.
forecast describe estimates stores the following in r():
Scalars
  r(n_estimates)^B   number of estimation results
  r(n_lhs)           number of left-hand-side variables defined by estimation results
Macros
  r(model)^B         name of forecast model, if named
  r(lhs)             left-hand-side variables
  r(estimates)       names of estimation results

forecast describe identity stores the following in r():
Scalars
  r(n_identities)^B  number of identities
Macros
  r(model)^B         name of forecast model, if named
  r(lhs)             left-hand-side variables
  r(identities)      list of identities

forecast describe coefvector stores the following in r():
Scalars
  r(n_coefvectors)^B  number of coefficient vectors
  r(n_lhs)^B          number of left-hand-side variables defined by coefficient vectors
Macros
  r(model)^B          name of forecast model, if named
  r(lhs)              left-hand-side variables
  r(rhs)^D            right-hand-side variables
  r(names)            names of coefficient vectors
  r(Vnames)^D         names of variance matrices (. if not specified)
  r(Enames)^D         names of error variance matrices (. if not specified)

forecast describe exogenous stores the following in r():
Scalars
  r(n_exogenous)^B   number of declared exogenous variables
Macros
  r(model)^B         name of forecast model, if named
  r(exogenous)       declared exogenous variables

forecast describe endogenous stores the following in r():
Scalars
  r(n_endogenous)^B  number of endogenous variables
Macros
  r(model)^B         name of forecast model, if named
  r(varlist)         endogenous variables
  r(source_list)     sources of endogenous variables (estimates, identity, coefvector)
  r(adjust_cnt)      number of adjustments per endogenous variable

forecast describe solve stores the following in r():
Scalars
  r(periods)         number of periods forecast per panel
  r(Npanels)         number of panels forecast
  r(Nvar)            number of forecast variables
  r(damping)         damping parameter for damped Gauss-Seidel
  r(maxiter)         maximum number of iterations
  r(vtolerance)      tolerance for forecast values
  r(ztolerance)      tolerance for function zero
  r(sim_nreps)       number of simulations
Macros
  r(solved)^B        solved, if the model has been solved
  r(model)^B         name of forecast model, if named
  r(actuals)         actuals, if specified with forecast solve
  r(double)          double, if specified with forecast solve
  r(static)          static, if specified with forecast solve
  r(begin)           first period in forecast horizon
  r(end)             last period in forecast horizon
  r(technique)       solver technique
  r(sim_technique)   specified sim_technique
  r(prefix)          forecast variable prefix
  r(suffix)          forecast variable suffix
  r(sim_prefix_i)    ith simulation statistic prefix
  r(sim_suffix_i)    ith simulation statistic suffix
  r(sim_stat_i)      ith simulation statistic


forecast describe adjust stores the following in r():
Scalars
  r(n_adjustments)^B  total number of adjustments
  r(n_adjust_vars)^B  number of variables with adjustments
Macros
  r(model)^B          name of forecast model, if named
  r(varlist)          variables with adjustments
  r(adjust_cnt)       number of adjustments per endogenous variable
  r(adjust_list)      list of adjustments

Reference
Klein, L. R. 1950. Economic Fluctuations in the United States 1921–1941. New York: Wiley.

Also see
[TS] forecast Econometric model forecasting
[TS] forecast list List forecast commands composing current model

Title
forecast drop Drop forecast variables

Description     Quick start     Syntax     Options     Remarks and examples     Stored results     Also see

Description
forecast drop drops variables previously created by forecast solve.

Quick start
Remove all variables created by forecast solve from the current dataset
    forecast drop
Remove only forecast variables starting with f_
    forecast drop, prefix(f_)
Remove only forecast variables ending with _f
    forecast drop, suffix(_f)

Syntax
forecast drop [, options]

options          Description
prefix(string)   specify prefix for forecast variables
suffix(string)   specify suffix for forecast variables

You can specify prefix() or suffix() but not both.

Options
prefix(string) and suffix(string) specify either a name prefix or a name suffix that will be used to
identify forecast variables to be dropped. You may specify prefix() or suffix() but not both.
By default, forecast drop removes all forecast variables produced by the previous invocation
of forecast solve.
Suppose, however, that you previously specified the simulate() option with forecast solve
and wish to remove variables containing simulation results but retain the variables containing the
point forecasts. Then you can use the prefix() or suffix() option to identify the simulation
variables you want dropped.


Remarks and examples

For an overview of the forecast commands, see [TS] forecast. This manual entry assumes you
have already read that manual entry. forecast drop safely removes variables previously created
using forecast solve. Say you previously solved your model and created forecast variables that
were suffixed with _f. Do not type
. drop *_f
to remove those variables from the dataset. Rather, type
. forecast drop
The former command is dangerous: Suppose you were given the dataset and asked to produce the
forecast. The person who previously worked with the dataset created other variables that ended with
_f. Using drop would remove those variables as well. forecast drop removes only those variables
that were previously created by forecast solve based on the model in memory.
If you do not specify any options, forecast drop removes all the forecast variables created by
the current model, including the variables that contain the point forecasts as well as any variables
that contain simulation results specified by the simulate() option with forecast solve. Suppose
you had typed
. forecast solve, prefix(s_) simulate(betas, statistic(stddev, prefix(sd_)))
Then if you type
. forecast drop, prefix(sd_)
forecast drop will remove the variables containing the standard deviations of the forecasts and
will leave the variables containing the point forecasts (prefixed with s_) untouched.
forecast drop does not exit with an error if a variable it intends to drop does not exist in the
dataset.

Stored results
forecast drop stores the following in r():
Scalars
  r(n_dropped)   number of variables dropped

Also see
[TS] forecast Econometric model forecasting
[TS] forecast solve Obtain static and dynamic forecasts

Title
forecast estimates Add estimation results to a forecast model

Description     Quick start     Syntax     Options     Remarks and examples     References     Also see

Description
forecast estimates adds estimation results to the forecast model currently in memory. You
must first create a new model using forecast create before you can add estimation results with
forecast estimates. After estimating the parameters of an equation or set of equations, you must
use estimates store to store the estimation results in memory or use estimates save to save
them on disk before adding them to the model.

Quick start
Add estimation results stored in myestimates to the forecast model in memory
forecast estimates myestimates
As above, but specify the prediction produced by predict, pr outcome(#1)
forecast estimates myestimates, predict("pr outcome(#1)")
Add estimates from var estimation stored in memory as varest
forecast estimates varest
Also add the second estimation result saved on disk as notcurrent.ster to the forecast model
forecast estimates using notcurrent, number(2)


Syntax
Add estimation result currently in memory to model
    forecast estimates name [, options]
name is the name of a stored estimation result; see [R] estimates store.

Add estimation result currently saved on disk to model
    forecast estimates using filename [, number(#) options]
filename is an estimation results file created by estimates save; see [R] estimates save. If no file
extension is specified, .ster is assumed.

options                      Description
predict(p_options)           call predict using p_options
names(namelist[, replace])   use namelist for names of left-hand-side variables
advise                       advise whether estimation results can be dropped from memory

Options
predict(p_options) specifies the predict options to use when predicting the dependent variables.
For a single-equation estimation command, you simply specify the appropriate options to pass to
predict. If multiple options are required, enclose them in quotation marks:
. forecast estimates ..., predict("pr outcome(#1)")
For a multiple-equation estimation command, you can either specify one set of options that will
be applied to all equations or specify p_options, where p is the number of endogenous variables
being added. If multiple options are required for each equation, enclose each equation's options
in quotes:
. forecast estimates ..., predict("pr eq(#1)" "pr eq(#2)")
If you do not specify the eq() option for any of the equations, forecast automatically includes
it for you.
If you are adding results from a linear estimation command that forecast recognizes as one
whose predictions can be calculated as x_t'β, do not specify the predict() option, because this
will slow forecast's computation time substantially. Use the advise option to determine whether
forecast needs to call predict.
If you do not specify any predict options, forecast uses the default type of prediction for the
command whose results are being added.
names(namelist[, replace]) instructs forecast estimates to use namelist as the names of the
left-hand-side variables in the estimation result being added. You must use this option if any of
the left-hand-side variables contains time-series operators. By default, forecast estimates uses
the names stored in the e(depvar) macro of the results being added.
forecast estimates creates a new variable in the dataset for each element of namelist. If a
variable of the same name already exists in your dataset, forecast estimates exits with an
error unless you specify the replace option, in which case existing variables are overwritten.


advise requests that forecast estimates report a message indicating whether the estimation
results being added can be removed from memory. This option is useful if you expect your model
to contain more than 300 sets of estimation results, the maximum number that Stata allows you to
store in memory; see [R] limits. This option also provides an indication of the speed with which
the model can be solved: forecast executes much more slowly with estimation results that must
remain in memory.
number(#), for use with forecast estimates using, specifies that the #th set of estimation results
from filename be loaded. This assumes that multiple sets of estimation results have been saved
in filename. The default is number(1). See [R] estimates save for more information on saving
multiple sets of estimation results in a single file.
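As a hedged sketch of how such a file might be built in the first place (the file and variable names
are hypothetical), estimates save with its append option stores results sequentially in a single
.ster file, which number() can then index:
. regress y1 x1
. estimates save twoeqs, replace
. regress y2 x2
. estimates save twoeqs, append
. forecast estimates using twoeqs, number(2)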

Remarks and examples


For an overview of the forecast commands, see [TS] forecast. This manual entry assumes you
have already read that manual entry. forecast estimates adds stochastic equations previously fit
by Stata estimation commands to a forecast model.
Remarks are presented under the following headings:
Introduction
The advise option
Using saved estimation results
The predict option
Forecasting with ARIMA models

Introduction
After you fit an equation that will become a part of your model, you must use either estimates
store to store the estimation results in memory or estimates save to save the estimation results
to disk. Then you can use forecast estimates to add that equation to your model.
We usually refer to "equation" in the singular, but of course, you can also use a multiple-equation
estimation command to fit several equations at once and add them to the model. When we discuss
adding a stochastic equation to a model, we really mean adding a single estimation result.
In this discussion, we also need to make a distinction between making a forecast and obtaining a
prediction. We use the word "predict" to refer to the process of obtaining a fitted value for a single
equation, just as you can use the predict command to obtain fitted values, residuals, or other statistics
after fitting a model with an estimation command. We use the word "forecast" to mean finding a
solution to the complete set of equations that compose the forecast model. The iterative techniques
we use to solve the model and produce forecasts require that we be able to obtain predictions from
each of the equations in the model.

Example 1: A simple example


Here we illustrate how to add estimation results from a regression model in which none of
the left-hand-side variables contains time-series operators or mathematical transformations. We use
quietly with the estimation command because the output is not relevant here. We type



. use http://www.stata-press.com/data/r14/klein2
. quietly reg3 (c p L.p w) (i p L.p L.k) (wp y L.y yr), endog(w p y) exog(t wg g)
. estimates store klein
. forecast create kleinmodel
Forecast model kleinmodel started.
. forecast estimates klein
Added estimation results from reg3.
Forecast model kleinmodel now contains 3 endogenous variables.

forecast estimates indicated that three endogenous variables were added to the forecast model.
That is because we specified three equations in our call to reg3. As we mentioned in example 1 in
[TS] forecast, the endog() option of reg3 has no bearing on forecast. All that matters are the
three left-hand-side variables.

Technical note
When you add an estimation result to your forecast model, forecast looks at the macro e(depvar)
to determine the endogenous variables being added. If that macro is empty, forecast tries a few
other macros to account for nonstandard commands. The number of endogenous variables being added
to the model is based on the number of words found in the macro containing the dependent variables.

You can fit equations with the D. and S. first- and seasonal-difference time-series operators
adorning the left-hand-side variables, but in those cases, when you add the equations to the model,
you must use the names() option of forecast estimates. When you specify names(namelist),
forecast estimates uses namelist as the names of the newly declared endogenous variables and
ignores what is in e(depvar). Moreover, forecast does not automatically undo the operators on
left-hand-side variables. For example, you might fit a regression with D.x as the regressand and then
add it to the model using forecast estimates . . ., name(Dx). In that case, forecast will solve
the model in terms of Dx. You must add an identity to convert Dx to the corresponding level variable
x, as the next example illustrates.
Of course, you are free to use the D., S., and L. time-series operators on endogenous variables
when they appear on the right-hand sides of equations. It is only when D. or S. appears on the
left-hand side that you must use the names() option to provide alternative names for them. You
cannot add equations to models for which the L. operator appears on left-hand-side variables. You
cannot use the F. forward operator anywhere in forecast models.

Example 2: Differenced and log-transformed dependent variables
Consider the following model:

    D.logC = β10 + β11 D.logW + β12 D.logY + u1t                      (1)
    logW = β20 + β21 L.logW + β22 M + β23 logY + β24 logC + u2t       (2)

Here logY and M are exogenous variables, so we will assume they are filled in over the forecast
horizon before solving the model. Ultimately, we are interested in forecasting C and W. However,
the first equation is specified in terms of changes in the logarithm of C, and the second equation is
specified in terms of the logarithm of W.


We will refer to variables and transformations like logC, D.logC, and C as "related" variables
because they are related to one another by simple mathematical functions. Including the related
variables, we in fact have a five-equation model with two stochastic equations and three identities:

    dlogC = β10 + β11 D.logW + β12 D.logY + u1t
    logC = L.logC + dlogC
    C = exp(logC)
    logW = β20 + β21 L.logW + β22 M + β23 logY + β24 logC + u2t
    W = exp(logW)
To fit (1) and (2) in Stata and create a forecast model, we type
. use http://www.stata-press.com/data/r14/fcestimates, clear
(1978 Automobile Data)
. quietly regress D.logC D.logW D.logY
. estimates store dlogceq
. quietly regress logW L.logW M logY logC
. estimates store logweq
. forecast create cwmodel, replace
(Forecast model kleinmodel ended.)
Forecast model cwmodel started.
. forecast estimates dlogceq, names(dlogC)
Added estimation results from regress.
Forecast model cwmodel now contains 1 endogenous variable.
. forecast identity logC = L.logC + dlogC
Forecast model cwmodel now contains 2 endogenous variables.
. forecast identity C = exp(logC)
Forecast model cwmodel now contains 3 endogenous variables.
. forecast estimates logweq
Added estimation results from regress.
Forecast model cwmodel now contains 4 endogenous variables.
. forecast identity W = exp(logW)
Forecast model cwmodel now contains 5 endogenous variables.

Because the left-hand-side variable in (1) contains a time-series operator, we had to use the names()
option of forecast estimates when adding that equation's estimation results to our forecast model.
Here we named this endogenous variable dlogC. We then added the other four equations to our
model. In general, when we have a set of related variables, we prefer to specify the identities right
after we add the stochastic equation so that we do not forget about them.

Technical note
In the previous example, we undid the log-transformations by simply exponentiating the logarithmic
variable. However, that is only an approximation that does not work well in many applications.
Suppose we fit the linear regression model

    ln(y_t) = x_t'β + u_t

where u_t is a zero-mean regression error term. Then E(y_t | x_t) = exp(x_t'β) E{exp(u_t)}. Although
E(u_t) = 0, Jensen's inequality suggests that E{exp(u_t)} ≠ 1, implying that we cannot predict y_t
by simply taking the exponential of the linear prediction x_t'β.
If we assume that u_t ~ N(0, σ²), then E{exp(u_t)} = exp(σ²/2). Moreover, many estimation
commands like regress provide an estimate σ̂² of σ², so for regression models that contain a
logarithmic dependent variable, we can obtain better forecasts for the dependent variable in levels if
we approximate E{exp(u_t)} as exp(σ̂²/2). Suppose we run the regression
. regress lny x1 x2 x3
. estimates store myreg
then we could add lny and y as endogenous variables like this:
. forecast estimates myreg
. forecast identity y = exp(lny)*exp(`=e(rmse)^2 / 2')
In the second command, Stata will first evaluate the expression `=e(rmse)^2 / 2' and replace it with
its numerical value. After regress, the macro e(rmse) contains the square root of the estimate of
σ², so the exponential of this expression's value is our estimate of E{exp(u_t)}. Then forecast
will forecast y as the product of this number and exp(lny). Here we had to use a macro expression
including an equals sign to force Stata to evaluate the expression immediately and obtain the
expression's value. Identities are not associated with estimation results, so as soon as we used another
estimation command or restored some other estimation results (perhaps unknowingly by invoking
forecast solve), our reference to e(rmse) would no longer be meaningful. See [U] 18.3.8 Macro
expressions for more information on macro evaluation.
Another alternative would be to use Duan's (1983) smearing technique. Stata code for this is
provided in Cameron and Trivedi (2010).
A third alternative is to use the generalized linear model (GLM) as implemented by the glm
command with a log-link function. In a GLM framework, we would be modeling ln{E(y_t)} rather
than E{ln(y_t)}, as we would be doing with regress, but oftentimes, the two quantities are similar.
Moreover, obtaining predicted values for y_t in the GLM does not present the transformation problem
as happens with linear regression. The forecast commands contain special code to handle estimation
results obtained by using glm with the link(log) option, and you do not need to specify an identity
to obtain y as a function of lny. All you would need to do is
. glm y x1 x2 x3, link(log)
. estimates store myglm
. forecast estimates myglm

The advise option


To produce forecasts from your model, forecast must be able to obtain predictions for each
estimation result that you have added. For many of the most commonly used estimation commands such
as regress, ivregress, and var, forecast includes special code to quickly obtain these predictions.
For estimation commands that either require more involved computations to obtain predictions or
are not widely used in forecasting, forecast instead relies on the predict command to obtain
predictions.
The advise option of forecast estimates advises you as to whether forecast includes the
special code to obtain fast predictions for the command whose estimation results are being added
to the model. For example, here we use advise with forecast estimates when building the
Klein (1950) model.


Example 3: Using the advise option


. use http://www.stata-press.com/data/r14/klein2, clear
. quietly reg3 (c p L.p w) (i p L.p L.k) (wp y L.y yr), endog(w p y) exog(t wg g)
. estimates store klein
. forecast create kleinmodel, replace
(Forecast model cwmodel ended.)
Forecast model kleinmodel started.
. forecast estimates klein, advise
(These estimation results are no longer needed; you can drop them.)
Added estimation results from reg3.
Forecast model kleinmodel now contains 3 endogenous variables.

After we typed forecast estimates, Stata advised us that "[t]hese estimation results are no longer
needed; you can drop them". That means forecast includes code to obtain predictions from reg3
without having to call predict. forecast has recorded all the information it needs about the
estimation results stored in klein, and we could type
. estimates drop klein

to remove those estimates from memory.

For relatively small models, there is no need to use estimates drop to remove estimation results
from memory. However, Stata allows no more than 300 sets of estimation results to be in memory
at once, and forecast solve requires estimation results to be in memory (and not merely saved
on disk) before it can produce forecasts. For very large models in which that limit may bind, you
can use the advise option to determine which estimation results are needed to solve the model and
which can be dropped.
Suppose we had estimation results from a command for which forecast must call predict to
obtain predictions. Then instead of obtaining the note saying the estimation results were no longer
needed, we would obtain a note stating
. forecast estimates IUsePredict
(These estimation results are needed to solve the model.)

In that case, the estimation results would need to be in memory before calling forecast solve.
The advise option also provides an indication of how quickly forecasts can be produced from
the model. Models for which forecast never needs to call predict can be solved much more
quickly than models that include equations for which forecast must restore estimation results and
call predict to obtain predictions.

Using saved estimation results

Stata's estimates commands allow you to save estimation results to disk so that they are available
in subsequent Stata sessions. You can use the using option of forecast estimates to use estimation
results saved on disk without having to first call estimates use. In fact, estimates use can even
retrieve estimation results stored on a website, as the next example demonstrates.


Example 4: Adding saved estimation results

The file klein.ster contains the estimation results produced by reg3 for the three stochastic
equations of Klein's (1950) model. That file is stored on the Stata Press website in the same location
as the example datasets. Here we create a forecast model and add those results:
. use http://www.stata-press.com/data/r14/klein2
. forecast create example4, replace
(Forecast model kleinmodel ended.)
Forecast model example4 started.
. forecast estimates using http://www.stata-press.com/data/r14/klein
Added estimation results from reg3.
Forecast model example4 now contains 3 endogenous variables.

If you do not specify a file extension, forecast estimates assumes the file ends in .ster. You
are more likely to save your estimation results on your computers disk drive rather than a web server,
but in either case, this example shows that you can fit equations in one session of Stata, save the
results to disk, and then build your forecast model later.

The estimates save command allows you to save multiple estimation results to the same file and
numbers them sequentially starting at 1. You can use the number() option of forecast estimates
using to specify which set of estimation results from the specified file you wish to add to the forecast
model. If you do not specify number(), forecast estimates using uses the first set of results.
When you use forecast estimates using, forecast loads the estimation results from disk
and stores them in memory using a temporary name. Later, when you proceed to solve your model,
forecast checks to see whether those estimation results are still in memory. If not, it will attempt
to reload them from the file you had specified. You should therefore not move or rename estimation
result files between the time you add them to your model and the time you solve the model.

The predict option


As we mentioned while discussing the advise option, the forecast commands include code
to quickly obtain predictions from some of the most commonly used commands, while they use
predict to obtain predictions from other estimation commands. When you add estimation results
that require forecast to use predict, by default, forecast assumes that it can pass the option
xb on to predict to obtain the appropriate predicted values. You use the predict() option of
forecast estimates to specify the option that predict must use to obtain predicted values from
the estimates being added.
For example, suppose you used tobit to fit an equation whose dependent variable is left-censored
at zero and then stored the estimation results under the name tobitreg. When solving the model,
you want to use the predicted values of the left-truncated mean, the expected value of the dependent
variable conditional on its being greater than zero. Looking at the Syntax for predict in [R] tobit
postestimation, we see that the appropriate option we must pass to predict is e(0,.). To add this
estimation result to an existing forecast model, we would therefore type
. forecast estimates tobitreg, predict(e(0,.))

Now, whenever forecast calls predict with those estimation results, it will pass the option e(0,.)
so that we obtain the appropriate predictions. If you are adding results from a multiple-equation
estimation command with k dependent variables, then you must specify k predict options within
the predict() option, separated by spaces.


Forecasting with ARIMA models


Practitioners often use ARIMA models to forecast some of the variables in their models, and you
can certainly use estimation results produced by commands such as arima with forecast. There are
just two rules to follow when using commands that use the Kalman filter to obtain predictions. First,
do not specify the predict() option with forecast estimates. The forecast commands know
how to handle these estimators automatically. Second, as we stated earlier, the forecast commands
do not undo any time-series operators that may adorn the left-hand-side variables of estimation
results, so you must use forecast identity to specify identities to recover the underlying variables
in levels.

Example 5: An ARIMA model with first- and seasonal-differencing

wpi1.dta contains quarterly observations on the variable wpi. First, let's fit a multiplicative
seasonal ARIMA model with both first- and seasonal-difference operators applied to the dependent
variable and store the estimation results:
. use http://www.stata-press.com/data/r14/wpi1
. arima wpi, arima(1, 1, 1) sarima(1, 1, 1, 4)
(output omitted )
. estimates store arima
(For details on fitting seasonal ARIMA models, see [TS] arima.)
With the difference operators used here, when forecast calls predict, it will obtain predictions
in terms of DS4.wpi. Using the definitions of time-series operators in [TS] tsset, we have

    DS4.wpi_t = (wpi_t - wpi_{t-4}) - (wpi_{t-1} - wpi_{t-5})

so that

    wpi_t = DS4.wpi_t + wpi_{t-4} + (wpi_{t-1} - wpi_{t-5})

Because our arima results include a dependent variable with time-series operators, we must use the
name() option of forecast estimates to specify an alternative variable name. We will name ours
ds4wpi. Then we can specify an identity by using the previous equation to recover our forecasts in
terms of wpi. We type
. forecast create arimaexample, replace
(Forecast model example4 ended.)
Forecast model arimaexample started.
. forecast estimates arima, name(ds4wpi)
Added estimation results from arima.
Forecast model arimaexample now contains 1 endogenous variable.
. forecast identity wpi = ds4wpi + L4.wpi + (L.wpi - L5.wpi)
Forecast model arimaexample now contains 2 endogenous variables.



. forecast solve, begin(tq(1988q1))
Computing dynamic forecasts for model arimaexample.

  Starting period:  1988q1
  Ending period:    1990q4
  Forecast prefix:  f_

1988q1: .............
1988q2: ...............
1988q3: ...............
(output omitted )
1990q4: ............
Forecast 2 variables spanning 12 periods.

Because our entire forecast model consists of a single equation fit by arima, we can also call predict
to obtain forecasts:
. predict a_wpi, y dynamic(tq(1988q1))
(5 missing values generated)
. list t f_wpi a_wpi in -5/l

             t      f_wpi      a_wpi
  120.  1989q4   110.2182   110.2182
  121.  1990q1   111.6782   111.6782
  122.  1990q2   112.9945   112.9945
  123.  1990q3   114.3281   114.3281
  124.  1990q4   115.5142   115.5142

Looking at the last few observations in the dataset, we see that the forecasts produced by forecast
(f_wpi) match those produced by predict (a_wpi). Of course, the advantage of forecast is that
we can combine multiple sets of estimation results and obtain forecasts for an entire system of
equations.

Technical note
Do not add estimation results to your forecast model that you have stored after calling an estimation
command with the by: prefix. The stored estimation results will contain information from only the
last group on which the estimation command was executed. forecast will then use those results for
all observations in the forecast horizon regardless of the value of the group variable you specified
with by:.

References
Cameron, A. C., and P. K. Trivedi. 2010. Microeconometrics Using Stata. Rev. ed. College Station, TX: Stata Press.
Duan, N. 1983. Smearing estimate: A nonparametric retransformation method. Journal of the American Statistical
Association 78: 605–610.
Klein, L. R. 1950. Economic Fluctuations in the United States 1921–1941. New York: Wiley.

Also see
[TS] forecast Econometric model forecasting
[R] estimates Save and manipulate estimation results
[R] predict Obtain predictions, residuals, etc., after estimation


Title
forecast exogenous Declare exogenous variables

Description     Syntax     Remarks and examples     Also see

Description
forecast exogenous declares exogenous variables in the current forecast model.

Syntax
forecast exogenous varlist

Remarks and examples


For an overview of the forecast commands, see [TS] forecast. This manual entry assumes you
have already read that manual entry. forecast exogenous declares exogenous variables in your
forecast model.
Before you can solve your model, all the exogenous variables must be filled in with nonmissing
values over the entire forecast horizon. When you use forecast solve, Stata first checks your
exogenous variables and exits with an error message if any of them contains missing values for any
periods being forecast. When you assemble a large model with many variables, it is easy to forget
some variables and then have problems obtaining forecasts. forecast exogenous provides you with
a mechanism to explicitly declare the exogenous variables in your model so that you do not forget
about them.
Declaring exogenous variables with forecast exogenous is not explicitly necessary, but we
nevertheless strongly encourage doing so. Stata can check the exogenous variables before solving the
model and issue an appropriate error message if missing values are found, whereas troubleshooting
models for which forecasting failed is more difficult after the fact.

Example 1
Here we fit a simple single-equation dynamic model with two exogenous variables, x1 and x2:
. use http://www.stata-press.com/data/r14/forecastex1
. quietly regress y L.y x1 x2
. estimates store exregression
. forecast create myexample
Forecast model myexample started.
. forecast estimates exregression
Added estimation results from regress.
Forecast model myexample now contains 1 endogenous variable.
. forecast exogenous x1
Forecast model myexample now contains 1 declared exogenous variable.
. forecast exogenous x2
Forecast model myexample now contains 2 declared exogenous variables.


Instead of using forecast exogenous twice, we could have instead typed
. forecast exogenous x1 x2

Also see
[TS] forecast Econometric model forecasting


Title
forecast identity Add an identity to a forecast model

Description     Quick start     Syntax     Options     Remarks and examples     Stored results     Also see

Description
forecast identity adds an identity to the forecast model currently in memory. You must
first create a new model using forecast create before you can add an identity with forecast
identity. An identity is a nonstochastic equation that expresses an endogenous variable in the model
as a function of other variables in the model. Identities often describe the behavior of endogenous
variables that are based on accounting identities or adding-up conditions.

Quick start
Add an identity to the forecast that states that y3 is the sum of y1 and y2
forecast identity y3=y1+y2
As above, and create new variable newy before adding it to the forecast
forecast identity newy=y1+y2, generate

Syntax
forecast identity varname = exp [, options]

options    Description
generate   create new variable varname
double     store new variable as a double instead of as a float

varname is the name of an endogenous variable to be added to the forecast model.
You can only specify double if you also specify generate.

Options
generate specifies that the new variable varname be created equal to exp for all observations in the
current dataset.
double, for use in conjunction with the generate option, requests that the new variable be created
as a double instead of as a float. See [D] data types.

Remarks and examples


For an overview of the forecast commands, see [TS] forecast. This manual entry assumes you
have already read that manual entry. forecast identity specifies a nonstochastic equation that
determines the value of an endogenous variable in the model. When you type
. forecast identity varname = exp

forecast identity registers varname as an endogenous variable in your forecast model that is
equal to exp, where exp is a valid Stata expression that is typically a function of other endogenous
variables and exogenous variables in your model and perhaps lagged values of varname as well.
forecast identity was used in all the examples in [TS] forecast.

Example 1: Variables with constant growth rates


Some models contain variables that you are willing to assume will grow at a constant rate throughout
the forecast horizon. For example, say we have a model using annual data and want to assume that
our population variable pop grows at 0.75% per year. Then we can declare endogenous variable pop
by using forecast identity:
. forecast identity pop = 1.0075*L.pop

Typically, you use forecast identity to define the relationship that determines an endogenous
variable that is already in your dataset. For example, in example 1 of [TS] forecast, we used forecast
identity to define total wages as the sum of government and private-sector wages, and the total
wage variable already existed in our dataset.
The generate option of forecast identity is useful when you wish to use a transformation of
one or more endogenous variables as a right-hand-side variable in a stochastic equation that describes
another endogenous variable. For example, say you want to use regress to model variable y as
a function of the ratio of two endogenous variables, u and w, as well as other covariates. Without
the generate option of forecast identity, you would have to define the ratio variable, say
r = u/w, twice: first, you would have to use the generate command to create the variable before
fitting your regression model, and then you would have to use forecast identity to add an identity
to your forecast model to define r in terms of u and w. Assuming you have already created your
forecast model, the generate option allows you to define the ratio variable just once, before you fit
the regression equation. In this example, the ratio variable is easy enough to specify twice, but it is
very easy to forget to include identities that define regressors used in estimation results while building
large forecast models. In other cases, an endogenous variable may be a more complicated function of
other endogenous variables, so having to specify the function only once reduces the chance for error.
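A minimal hedged sketch of that workflow, assuming u and w are already endogenous variables in the
model, y is the dependent variable, and x1 is a hypothetical covariate:
. forecast identity r = u/w, generate
. regress y r x1
. estimates store yeq
. forecast estimates yeq
Here forecast identity both creates the variable r in the dataset and registers the identity, so
the subsequent regression can use r without a separate generate step.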

Stored results
forecast identity stores the following in r():
Macros
  r(lhs)        left-hand-side (endogenous) variable
  r(rhs)        right-hand side of identity
  r(basenames)  base names of variables found on right-hand side
  r(fullnames)  full names of variables found on right-hand side

Also see
[TS] forecast Econometric model forecasting

Title
forecast list List forecast commands composing current model

Description     Quick start     Syntax     Options     Remarks and examples     Reference     Also see

Description
forecast list produces a list of forecast commands that compose the current model.

Quick start
List all forecast commands that compose the current model
forecast list
Save a list of commands to replicate the current forecast model to myforecast.do
forecast list, saving(myforecast)
As above, but save the commands as myforecast.txt
forecast list, saving(myforecast.txt)

Syntax
forecast list [, options]

options                        Description
saving(filename[, replace])    save list of commands to file
notrim                         do not remove extraneous white space

Options
saving(filename[, replace]) requests that forecast list write the list of commands to disk
with filename. If no extension is specified, .do is assumed. If filename already exists, an error is
issued unless you specify replace, in which case the file is overwritten.
notrim requests that forecast list not remove any extraneous spaces and that commands be
shown exactly as they were originally entered. By default, superfluous white space is removed.

Remarks and examples

For an overview of the forecast commands, see [TS] forecast. This manual entry assumes you
have already read that manual entry. forecast list produces a list of all the forecast commands
you would need to enter to re-create the forecast model currently in memory. Unlike using a command
log, forecast list shows only the forecast-related commands, not any estimation commands
or other commands you may have issued. If you specify saving(filename), forecast list saves
the list as filename.do, which you can then edit using the Do-file Editor.

forecast creates models by accumulating estimation results, identities, and other features that you
add to the model by using various forecast subcommands. Once you add a feature to a model, it
remains a part of the model until you clear the entire model from memory. forecast list provides
a list of all the forecast commands you would need to rebuild the current model.
When building all but the smallest forecast models, you will typically write a do-file to load
your dataset, perhaps call some estimation commands, and issue a sequence of forecast commands
to build and solve your forecast model. There are times, though, when you will type a forecast
command interactively and then later want to undo the command or else wish you had not typed the
command in the first place. forecast list provides the solution.
Suppose you use forecast adjust to perform some policy simulations and then decide you want
to remove those adjustments from the model. forecast list makes this easy to do. You simply
call forecast list with the saving() option to produce a do-file that contains all the forecast
commands issued since the model was created. Then you can edit the do-file to remove the forecast
adjust command, type forecast clear, and run the do-file.
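A hedged sketch of that workflow (the do-file name rebuild is hypothetical):
. forecast list, saving(rebuild, replace)
(edit rebuild.do in the Do-file Editor to remove the unwanted forecast adjust command)
. forecast clear
. do rebuild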

Example 1: Klein's model

In example 1 of [TS] forecast, we obtained forecasts from Klein's (1950) macroeconomic model.
If we type forecast list after typing all the commands in that example, we obtain
. forecast list
forecast create kleinmodel
forecast estimates klein
forecast identity y = c + i + g
forecast identity p = y - t - wp
forecast identity k = L.k + i
forecast identity w = wg + wp
forecast exogenous wg
forecast exogenous g
forecast exogenous t
forecast exogenous yr

The forecast solve command is not included in output produced by forecast list because
solving the model does not add any features to the model.

Technical note
To prevent you from accidentally destroying the model in memory, forecast list does not add
the replace option to forecast create even if you specified replace when you originally called
forecast create.

Reference
Klein, L. R. 1950. Economic Fluctuations in the United States 1921–1941. New York: Wiley.

Also see
[TS] forecast Econometric model forecasting

Title
forecast query Check whether a forecast model has been started

Description     Syntax     Remarks and examples     Stored results     Also see

Description
forecast query issues a message indicating whether a forecast model has been started.

Syntax
forecast query

Remarks and examples


For an overview of the forecast commands, see [TS] forecast. This manual entry assumes you
have already read that manual entry. forecast query allows you to check whether a forecast model
has been started. Most users of the forecast commands will not need to use forecast query.
This command is most useful to programmers.
Suppose there is no forecast model in memory:
. forecast query
No forecast model exists.

Now we create a forecast model named fcmodel:
. forecast create fcmodel
Forecast model fcmodel started.
. forecast query
Forecast model fcmodel exists.

Stored results
forecast query stores the following in r():
Scalars
  r(found)   1 if model started; 0 otherwise
Macros
  r(name)    model name
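Given these stored results, a program can verify that a model exists before doing further work. A
minimal hedged sketch (the program name and exit code are illustrative choices, not part of the
forecast commands):
program define require_fcmodel
    quietly forecast query
    if r(found) == 0 {
        display as error "no forecast model in memory"
        exit 198
    }
end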

Also see
[TS] forecast Econometric model forecasting
[TS] forecast describe Describe features of the forecast model


Title
forecast solve Obtain static and dynamic forecasts

Description     Quick start     Syntax     Options     Remarks and examples     Stored results     Methods and formulas     References     Also see

Description
forecast solve computes static or dynamic forecasts based on the model currently in memory.
Before you can solve a model, you must first create a new model using forecast create and add
equations and variables to it using the commands summarized in [TS] forecast.

Quick start
Compute dynamic forecast after forecast create and forecast estimates
forecast solve
As above, but with forecasts starting at 1990q1 and ending at 1995q3
forecast solve, begin(q(1990q1)) end(q(1995q3))
As above, and change prefix of predicted endogenous variables to hat
forecast solve, begin(q(1990q1)) end(q(1995q3)) prefix(hat)
As above, but forecast 11 periods starting at 1990q1
forecast solve, begin(q(1990q1)) prefix(hat) periods(11)
Incorporate forecast uncertainty via simulation and store point forecasts and their standard deviations
in variables prefixed with d and sd
forecast solve, prefix(d_) ///
simulate(betas, statistic(stddev, prefix(sd_)))


Syntax
forecast solve [, {prefix(stub) | suffix(stub)} options]

options                   Description
Model
  prefix(string)          specify prefix for forecast variables
  suffix(string)          specify suffix for forecast variables
  begin(time_constant)    specify period to begin forecasting
  end(time_constant)      specify period to end forecasting
  periods(#)              specify number of periods to forecast
  double                  store forecast variables as doubles instead of as floats
  static                  produce static forecasts instead of dynamic forecasts
  actuals                 use actual values if available instead of forecasts
Simulation
  simulate(sim_technique, sim_statistic sim_options)
                          specify simulation technique and options
Reporting
  log(log_level)          specify level of logging display; log_level may be detail,
                          on, brief, or off
Solver
  vtolerance(#)           specify tolerance for forecast values
  ztolerance(#)           specify tolerance for function zero
  iterate(#)              specify maximum number of iterations
  technique(technique)    specify solution method; may be dampedgaussseidel #,
                          gaussseidel, broydenpowell, or newtonraphson

You can specify prefix() or suffix() but not both.
You can specify end() or periods() but not both.

sim_technique   Description
betas           draw multivariate-normal parameter vectors
errors          draw additive errors from multivariate normal distribution
residuals       draw additive residuals based on static forecast errors

You can specify one or two sim_techniques separated by a space, though you cannot specify both
errors and residuals.

sim_statistic is
    statistic(statistic, {prefix(string) | suffix(string)})
and may be repeated up to three times.


statistic   Description
mean        record the mean of the simulation forecasts
variance    record the variance of the simulation forecasts
stddev      record the standard deviation of the simulation forecasts

sim_options              Description
saving(filename, ...)    save results to file; save statistics in double precision; save
                         results to filename every # replications
nodots                   suppress replication dots
reps(#)                  perform # replications; default is reps(50)

Options

Model

prefix(string) and suffix(string) specify a name prefix or suffix that will be used to name the
variables holding the forecast values of the variables in the model. You may specify prefix() or
suffix() but not both. Sometimes, it is more convenient to have all forecast variables start with
the same set of characters, while other times, it is more convenient to have all forecast variables
end with the same set of characters.
If you specify prefix(f_), then the forecast values of endogenous variables x, y, and z will be
stored in new variables f_x, f_y, and f_z.
If you specify suffix(_g), then the forecast values of endogenous variables x, y, and z will be
stored in new variables x_g, y_g, and z_g.
begin(time_constant) requests that forecast begin forecasting at period time_constant. By default,
forecast determines when to begin forecasting automatically.
end(time_constant) requests that forecast end forecasting at period time_constant. By default,
forecast produces forecasts for all periods on or after begin() in the dataset.
periods(#) specifies the number of periods after begin() to forecast. By default, forecast
produces forecasts for all periods on or after begin() in the dataset.
double requests that the forecast and simulation variables be stored in double precision. The default
is to use single-precision floats. See [D] data types for more information.
static requests that static forecasts be produced. Actual values of variables are used wherever
lagged values of the endogenous variables appear in the model. By default, dynamic forecasts are
produced, which use the forecast values of variables wherever lagged values of the endogenous
variables appear in the model. Static forecasts are also called one-step-ahead forecasts.
actuals specifies how nonmissing values of endogenous variables in the forecast horizon are treated.
By default, nonmissing values are ignored, and forecasts are produced for all endogenous variables.
When you specify actuals, forecast sets the forecast values equal to the actual values if they
are nonmissing. The forecasts for the other endogenous variables are then conditional on the known
values of the endogenous variables with nonmissing data.
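As a minimal hedged sketch of such a conditional forecast (the variable y1 and the dates are
hypothetical, and the dataset is assumed to be tsset with quarterly data):
. replace y1 = 2.5 if tin(2015q1, 2015q4)
. forecast solve, begin(tq(2015q1)) actuals
The known path for y1 is treated as given, and the forecasts for the remaining endogenous variables
are conditional on it.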


Simulation

simulate(sim_technique, sim_statistic sim_options) allows you to simulate your model to obtain
measures of uncertainty surrounding the point forecasts produced by the model. Simulating a
model involves repeatedly solving the model, each time accounting for the uncertainty associated
with the error terms and the estimated coefficient vectors.
sim_technique can be betas, errors, or residuals, or you can specify both betas and one of
errors or residuals separated by a space. You cannot specify both errors and residuals.
The sim_technique controls how uncertainty is introduced into the model.
sim_statistic specifies a summary statistic to summarize the forecasts over all the simulations.
sim_statistic takes the form
    statistic(statistic, {prefix(string) | suffix(string)})
where statistic may be mean, variance, or stddev. You may specify either the prefix or the
suffix that will be used to name the variables that will contain the requested statistic. You
may specify up to three sim_statistics, allowing you to track the mean, variance, and standard
deviation of your forecasts; see the sketch following this option's description.
sim_options include saving(filename[, suboptions]), nodots, and reps(#).
saving(filename[, suboptions]) creates a Stata data file (.dta file) consisting of (for each
endogenous variable in the model) a variable containing the simulated values.
double specifies that the results for each replication be saved as doubles, meaning 8-byte reals.
By default, they are saved as floats, meaning 4-byte reals.
replace specifies that filename be overwritten if it exists.
every(#) specifies that results be written to disk every #th replication. every() should be
specified only in conjunction with saving() when the command takes a long time for each
replication. This will allow recovery of partial results should some other software crash your
computer. See [P] postfile.
nodots suppresses display of the replication dots. By default, one dot character is displayed for
each successful replication. If during a replication convergence is not achieved, forecast
solve exits with an error message.
reps(#) requests that forecast solve perform # replications; the default is reps(50).
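Putting the pieces of simulate() together, a hedged sketch that draws parameter vectors and records
two summary statistics (the prefixes and replication count are illustrative):
forecast solve, prefix(f_) ///
    simulate(betas, statistic(mean, prefix(m_)) ///
    statistic(stddev, prefix(sd_)) reps(100))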

Reporting

log(log_level) specifies the level of logging provided while solving the model. log_level may be
detail, on, brief, or off.
log(detail) provides a detailed iteration log including the current values of the convergence
criteria for each period in each panel (in the case of panel data) for which the model is being
solved.
log(on), the default, provides an iteration log showing the current panel and period for which
the model is being solved as well as a sequence of dots for each period indicating the number of
iterations.
log(brief), when used with a time-series dataset, is equivalent to log(on). When used with a
panel dataset, log(brief) produces an iteration log showing the current panel being solved but
does not show which period within the current panel is being solved.
log(off) requests that no iteration log be produced.


Solver

vtolerance(#), ztolerance(#), and iterate(#) control when the solver of the system of
equations stops. ztolerance() is ignored if either technique(dampedgaussseidel #) or
technique(gaussseidel) is specified. These options are seldom used. See [M-5] solvenl( ).
technique(technique) specifies the technique to use to solve the system of equations. technique
may be dampedgaussseidel #, gaussseidel, broydenpowell, or newtonraphson, where
0 < # < 1 specifies the amount of damping with smaller numbers indicating less damping.
The default is technique(dampedgaussseidel 0.2), which works well in most situations.
If you have convergence issues, first try continuing to use dampedgaussseidel # but with a
larger damping factor. Techniques broydenpowell and newtonraphson usually work well, but
because they require the computation of numerical derivatives, they tend to be much slower. See
[M-5] solvenl( ).
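For instance, if the default solver fails to converge, a reasonable first attempt is to increase
the damping factor and, failing that, to switch techniques (the values shown are illustrative):
. forecast solve, technique(dampedgaussseidel 0.5)
. forecast solve, technique(broydenpowell)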

Remarks and examples


For an overview of the forecast commands, see [TS] forecast. This manual entry assumes you
have already read that manual entry. The forecast solve command solves a forecast model in
Stata. Before you can solve a model, you must first create a model using forecast create, and you
must add at least one equation using forecast estimates, forecast coefvector, or forecast
identity. We covered the most commonly used options of forecast solve in the examples in
[TS] forecast.
Here we focus on two sets of options that are available with forecast solve. First, we discuss
the actuals option, which allows you to obtain forecasts conditional on prespecified values for one
or more of the endogenous variables. Then we focus on performing simulations to obtain estimates
of uncertainty around the point forecasts.
Remarks are presented under the following headings:
Performing conditional forecasts
Using simulations to measure forecast accuracy

Performing conditional forecasts


Sometimes, you already know the values of some of the endogenous variables in the forecast
horizon and would like to obtain forecasts for the remaining endogenous variables conditional on
those known values. Other times, you may not know the values but would nevertheless like to specify
a path for some endogenous variables and see how the others would evolve conditional on that path.
To accomplish these types of exercises, you can use the actuals option of forecast solve.

Example 1: Specifying alternative scenarios


gdpoil.dta contains quarterly data on the annualized growth rate of GDP and the percentage
change in the quarterly average price of oil through the end of 2007. We want to explore how GDP
would have evolved if the price of oil had risen 10% in each of the first three quarters of 2008 and
then held steady for several years. We will use a bivariate vector autoregression (VAR) to forecast the
variables gdp and oil. Results obtained from the varsoc command indicate that the Hannan–Quinn
information criterion is minimized when the VAR includes two lags. First, we fit our VAR model and
store the estimation results:



. use http://www.stata-press.com/data/r14/gdpoil
. var gdp oil, lags(1 2)
Vector autoregression
Sample:  1986q4 - 2007q4                        Number of obs     =         85
Log likelihood =  -500.0749                     AIC               =   12.00176
FPE            =   559.0724                     HQIC              =   12.11735
Det(Sigma_ml)  =   441.7362                     SBIC              =   12.28913

Equation           Parms      RMSE     R-sq      chi2     P>chi2
----------------------------------------------------------------
gdp                   5     1.88516   0.1820   18.91318   0.0008
oil                   5     11.8776   0.1140   10.93614   0.0273

------------------------------------------------------------------------------
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
gdp          |
         gdp |
         L1. |   .1498285   .1015076     1.48   0.140    -.0491227    .3487797
         L2. |   .3465238   .1022446     3.39   0.001      .146128    .5469196
             |
         oil |
         L1. |  -.0374609   .0167968    -2.23   0.026     -.070382   -.0045399
         L2. |   .0119564   .0164599     0.73   0.468    -.0203043    .0442172
             |
       _cons |   1.519983   .4288145     3.54   0.000     .6795226    2.360444
-------------+----------------------------------------------------------------
oil          |
         gdp |
         L1. |   .8102233   .6395579     1.27   0.205    -.4432871    2.063734
         L2. |   1.090244   .6442017     1.69   0.091    -.1723684    2.352856
             |
         oil |
         L1. |   .0995271   .1058295     0.94   0.347    -.1078949    .3069491
         L2. |  -.1870052    .103707    -1.80   0.071    -.3902672    .0162568
             |
       _cons |  -4.041859   2.701785    -1.50   0.135     -9.33726    1.253543
------------------------------------------------------------------------------

. estimates store var

The dataset ends in the fourth quarter of 2007, so before we can produce forecasts for 2008 and
beyond, we need to extend our dataset. We can do that using the tsappend command. Here we
extend our dataset three years:
. tsappend, add(12)


Now we can create a forecast model and obtain baseline forecasts:


. forecast create oilmodel
Forecast model oilmodel started.
. forecast estimates var
Added estimation results from var.
Forecast model oilmodel now contains 2 endogenous variables.
. forecast solve, prefix(bl_)
Computing dynamic forecasts for model oilmodel.
Starting period: 2008q1
Ending period:   2010q4
Forecast prefix: bl_
2008q1: .................
(output omitted )
2010q4: ............
Forecast 2 variables spanning 12 periods.

To see how GDP evolves if oil prices increase 10% in each of the first three quarters of 2008
and then remain flat, we need to obtain a forecast for gdp conditional on a specified path for
oil. The actuals option of forecast solve will do that for us. With the actuals option, if an
endogenous variable contains a nonmissing value for the period currently being forecast, forecast
solve will use that value as the forecast, overriding whatever value might be produced by that
variable's underlying estimation result or identity. Then the endogenous variables with missing values
will be forecast conditional on the endogenous variables that do have valid data. Here we fill in oil
with our hypothesized price path:
. replace oil = 10 if qdate == tq(2008q1)
(1 real change made)
. replace oil = 10 if qdate == tq(2008q2)
(1 real change made)
. replace oil = 10 if qdate == tq(2008q3)
(1 real change made)
. replace oil = 0 if qdate > tq(2008q3)
(9 real changes made)

Now we obtain forecasts conditional on our oil variable. We will use the prefix alt for these
forecast variables:
. forecast solve, prefix(alt_) actuals
Computing dynamic forecasts for model oilmodel.
Starting period: 2008q1
Ending period:   2010q4
Forecast prefix: alt_

2008q1: ...............
(output omitted )
2010q4: ...........
Forecast 2 variables spanning 12 periods.
Forecasts used actual values if available.


Finally, we make a variable containing the difference between our alternative and our baseline gdp
forecasts and graph it:
. generate diff_gdp = alt_gdp - bl_gdp

(figure: "Oil's Effect on GDP" -- line plot of the change in annualized GDP growth (diff_gdp)
against quarters since the shock. Note: "Assumes oil increases 10% for 3 quarters, then holds
steady")

Our model indicates GDP growth would be about 0.4% less in the second through fourth quarters of
2008 than it would otherwise be, but would be mostly unaffected thereafter if oil prices followed our
hypothetical path. The one-quarter lag in the response of GDP is due to our using a VAR model. In
our VAR model, lagged values of oil predict the current value of gdp, but the current value of oil
does not.

Technical note
The previous example allowed us to demonstrate forecast solve's actuals option, but in fact
measuring the economy's response to oil shocks is much more difficult than our simple VAR analysis
would suggest. One obvious complication is that positive and negative oil price shocks do not have
symmetric effects on the economy. In our simple model, if a 50% increase in oil prices lowers GDP
by x%, then a 50% decrease in oil prices must raise GDP by x%. However, a 50% decrease in oil
prices is perhaps more likely to portend weakness in the economy rather than an imminent growth
spurt. See, for example, Hamilton (2003) and Kilian and Vigfusson (2013).

Another way to specify alternative scenarios for your forecasts is to use the forecast adjust
command. That command is more flexible in the types of manipulations you can perform on endogenous
variables but, depending on the task at hand, may involve more effort. The actuals option of
forecast solve and the forecast adjust command are complementary. There is much overlap
in what you can achieve; in some situations, specifying the actuals option will be easier, while in
other situations, using adjustments via forecast adjust will prove to be easier.


Using simulations to measure forecast accuracy


To motivate the discussion, we will focus on the simple linear regression model. Even though
forecast can handle models with many equations with equal ease, all the issues that arise can be
illustrated with one equation. Suppose we have the following relationship between variables y and
x:
    $y_t = \alpha + \beta x_t + \epsilon_t$                                  (1)

where $\epsilon_t$ is a zero-mean error term. Say we fit (1) by ordinary least squares (OLS)
using observations $1, \ldots, T$ and obtain the point estimates $\widehat{\alpha}$ and
$\widehat{\beta}$. Assuming we have data for exogenous variable $x$ at time $T+1$, we could
forecast $y_{T+1}$ as

    $\widehat{y}_{T+1} = \widehat{\alpha} + \widehat{\beta}\, x_{T+1}$       (2)

However, there are several factors that prevent us from guaranteeing ex ante that $y_{T+1}$ will
indeed equal $\widehat{y}_{T+1}$. We must assume that (1) specifies the correct relationship
between y and x. Even if that relationship held for times 1 through T, are we sure it will hold
at time T+1? Uncertainty due to issues like that is inherent to the type of forecasting that the
forecast commands are designed for. Here we discuss two additional sources of uncertainty that
forecast solve can help you measure.
First, we estimated $\alpha$ and $\beta$ by OLS to obtain $\widehat{\alpha}$ and
$\widehat{\beta}$, but we must emphasize the word estimated. Our estimates are subject to
sampling error. When you fit a regression using regress or any other estimation command, Stata
presents not just the point estimates of the parameters but also the standard errors and
confidence intervals representing the level of uncertainty surrounding those point estimates.
Uncertainty surrounding the true values of $\alpha$ and $\beta$ means that there is some level of
uncertainty surrounding our predicted value $\widehat{y}_{T+1}$ as well.
Second, (1) states that $y_t$ depends not just on $\alpha$, $\beta$, and $x_t$ but also on an
unobserved error term $\epsilon_t$. When we make our forecast using (2), we assume that the error
term will equal its expected value of zero. Saying a random error has an expected value of zero
is clearly not the same as saying it will be zero every time. If a positive outside shock occurs
at T+1, $y_{T+1}$ will be higher than our estimate based on (2) would lead us to believe.
Fortunately, quantifying both these sources of uncertainty is straightforward using simulation.
First, we solve our model as usual, providing us with our point forecasts. To see how uncertainty
surrounding our estimated parameters affects our forecasts, we can take random draws from a
multivariate normal distribution whose mean is $(\widehat{\alpha}, \widehat{\beta})$ and whose
variance is the covariance matrix produced by regress. We then solve our model using these
randomly drawn parameters rather than the original point estimates. If we repeat the process of
drawing random parameters and solving the model many times, we can use the variance or standard
deviation across replications for each time period as a measure of uncertainty.
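A minimal Mata sketch of one such parameter draw, assuming it is run right after regress so that
e(b) and e(V) are available (the variable names are illustrative):
. mata:
: b = st_matrix("e(b)")                              // 1 x k row of point estimates
: V = st_matrix("e(V)")                              // k x k covariance matrix
: k = cols(b)
: b_sim = b + (cholesky(V) * rnormal(k, 1, 0, 1))'   // one random parameter vector
: end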
To account for uncertainty surrounding the error term, we can also use simulation. Here, at each
replication, we add a random noise term to our forecast for $y_{T+1}$, where we draw our random
errors such that they have the same characteristics as $\epsilon_t$. There are two ways we can do
that. First, all the estimation commands commonly used in forecasting provide us with an estimate
of the variance or standard deviation of the error term. For example, regress labels the
estimated standard deviation of the error term Root MSE and conveniently saves it in a macro that
forecast can access. If we are willing to assume that all the errors in the equations in our
model are normally distributed, then we can use random-normal errors drawn with means equal to
zero and variances as reported by the estimation command used to fit each equation.
Sometimes the assumption of normality is unpalatable. In those cases, an alternative is to solve the
model to obtain static forecasts and then compute the sample residuals based on the observations for
which we have nonmissing values of the endogenous variables. Then in our simulations, we randomly
choose one of the residuals observed for that equation.


At each replication, whether we draw errors based on the normal errors or from the pool of
static-forecast residuals, we add the drawn value to our estimate of $\widehat{y}_{T+1}$ to
provide a simulated value for our forecast. Then, just like when simulating parameter
uncertainty, we can use the variance or standard deviation across replications to measure
uncertainty. In fact, we can perform simulations that draw both random parameters and random
errors to account for both sources of uncertainty at once.

Example 2: Accounting for parameter uncertainty


Here we revisit our Klein (1950) model from example 1 of [TS] forecast and perform simulations
in which we account for uncertainty associated with the estimated parameters of the model. First, we
load the dataset and set up our model:
. use http://www.stata-press.com/data/r14/klein2, clear
. quietly reg3 (c p L.p w) (i p L.p L.k) (wp y L.y yr), endog(w p y)
>      exog(t wg g)
. estimates store klein
. forecast create kleinmodel, replace
(Forecast model oilmodel ended.)
Forecast model kleinmodel started.
. forecast estimates klein
Added estimation results from reg3.
Forecast model kleinmodel now contains 3 endogenous variables.
. forecast identity y = c + i + g
Forecast model kleinmodel now contains 4 endogenous variables.
. forecast identity p = y - t - wp
Forecast model kleinmodel now contains 5 endogenous variables.
. forecast identity k = L.k + i
Forecast model kleinmodel now contains 6 endogenous variables.
. forecast identity w = wg + wp
Forecast model kleinmodel now contains 7 endogenous variables.
. forecast exogenous wg
Forecast model kleinmodel now contains 1 declared exogenous variable.
. forecast exogenous g
Forecast model kleinmodel now contains 2 declared exogenous variables.
. forecast exogenous t
Forecast model kleinmodel now contains 3 declared exogenous variables.
. forecast exogenous yr
Forecast model kleinmodel now contains 4 declared exogenous variables.

Now we are ready to solve our model. We are going to begin dynamic forecasts in 1936, and we
are going to perform 100 replications. We will store the point forecasts in variables prefixed
with d_, and we will store the standard deviations of our forecasts in variables prefixed with
sd_. Because the simulations involve the use of random numbers, we must remember to set the
random-number seed if we want to be able to replicate our results; see [R] set seed. We type


. set seed 1
. forecast solve, prefix(d_) begin(1936)
> simulate(betas, statistic(stddev, prefix(sd_)) reps(100))
Computing dynamic forecasts for model kleinmodel.
Starting period: 1936
Ending period:   1941
Forecast prefix: d_
1936: ............................................
1937: ..........................................
1938: .............................................
1939: .............................................
1940: ............................................
1941: ..............................................
Performing simulations (100)
----+--- 1 ---+--- 2 ---+--- 3 ---+--- 4 ---+--- 5
..................................................    50
..................................................   100
Forecast 7 variables spanning 6 periods.

The key here is the simulate() option. We requested that forecast solve perform 100 simulations
by taking random draws for the parameters (betas), and we requested that it record the standard
deviation (stddev) of each endogenous variable in new variables that begin with sd_. Next we
compute the upper and lower bounds of a 95% prediction interval for our forecast of total income y:
. generate d_y_up = d_y + invnormal(0.975)*sd_y
(16 missing values generated)
. generate d_y_dn = d_y + invnormal(0.025)*sd_y
(16 missing values generated)

We obtained 16 missing values after each generate because the simulation summary variables only
contain nonmissing data for the periods in which forecasts were made. The point-forecast variables
that begin with d_ in this example are filled in with the corresponding actual values of the
endogenous variables for periods before the beginning of the forecast horizon; in our experience,
having both the historical data and forecasts in one set of variables simplifies many tasks. Here
we graph our forecast of total income along with the 95% prediction interval:
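A graph along these lines can be produced with tsline; this is only a sketch, assuming the
dataset's time variable is year, and the options shown are illustrative rather than necessarily
those used for the figure below:
. tsline d_y d_y_up d_y_dn if year >= 1935, title("Total Income")
>     note("95% confidence bands based on parameter uncertainty.")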


(figure: "Total Income" -- actual and forecast values of y from 1935 to 1941 with 95% confidence
bands. Notes: "Solid lines denote actual values. Dashed lines denote forecast values. 95%
confidence bands based on parameter uncertainty.")

Our next example will use the same forecast model, but we will not need the forecast variables
we just created. forecast drop makes removing those variables easy:
. forecast drop
(dropped 14 variables)

forecast drop drops all variables created by the previous invocation of forecast solve, including
both the point-forecast variables and any variables that contain simulation results. In this case,
forecast drop will remove all the variables that begin with sd_ as well as d_y, d_c, d_i, and
so on. However, we are not done yet. We created the variables d_y_dn and d_y_up ourselves, and
they were not part of the forecast model. Therefore, they are not removed by forecast drop, and
we need to do that ourselves:
. drop d_y_dn d_y_up

Example 3: Accounting for both parameter uncertainty and random errors


In the previous example, we measured uncertainty in our model stemming from the fact that our
parameters were estimated. Here we not only simulate random draws for the parameters but also add
random-normal errors to the stochastic equations. We type


. set seed 1
. forecast solve, prefix(d_) begin(1936)
> simulate(betas errors, statistic(stddev, prefix(sd_)) reps(100))
Computing dynamic forecasts for model kleinmodel.
Starting period: 1936
Ending period:   1941
Forecast prefix: d_
1936: ............................................
1937: ..........................................
1938: .............................................
1939: .............................................
1940: ............................................
1941: ..............................................
Performing simulations (100)
----+--- 1 ---+--- 2 ---+--- 3 ---+--- 4 ---+--- 5
..................................................    50
..................................................   100
Forecast 7 variables spanning 6 periods.

The only difference between this call to forecast solve and the one in the previous example is that
here we specified betas errors in the simulate() option rather than just betas. Had we wanted
to perform simulations involving the parameters and random draws from the pool of static-forecast
residuals rather than random-normal errors, we would have specified betas residuals. After we
re-create the variables containing the bounds on our prediction interval, we obtain the following graph:

(figure: "Total Income" -- actual and forecast values of y from 1935 to 1941 with 95% confidence
bands. Notes: "Solid lines denote actual values. Dashed lines denote forecast values. 95%
confidence bands based on parameter uncertainty and normally distributed errors.")

Notice that by accounting for both parameter and additive error uncertainty, our prediction interval
became much wider.


Stored results
forecast solve stores the following in r():
Scalars
    r(first_obs)       first observation in forecast horizon
    r(last_obs)        last observation in forecast horizon
                         (of first panel if forecasting panel data)
    r(Npanels)         number of panels forecast
    r(Nvar)            number of forecast variables
    r(vtolerance)      tolerance for forecast values
    r(ztolerance)      tolerance for function zero
    r(iterate)         maximum number of iterations
    r(sim_nreps)       number of simulations
    r(damping)         damping parameter for damped Gauss–Seidel
Macros
    r(prefix)          forecast variable prefix
    r(suffix)          forecast variable suffix
    r(actuals)         actuals, if specified
    r(static)          static, if specified
    r(double)          double, if specified
    r(sim_technique)   specified sim_technique
    r(logtype)         on, off, brief, or detail

Methods and formulas


Formalizing the definition of a model provided in [TS] forecast, we represent the endogenous
variables in the model as the $k \times 1$ vector $\mathbf{y}$, and we represent the exogenous
variables in the model as the $m \times 1$ vector $\mathbf{x}$. We refer to the contemporaneous
values as $\mathbf{y}_t$ and $\mathbf{x}_t$; for notational simplicity, we refer to lagged values
as $\mathbf{y}_{t-1}$ and $\mathbf{x}_{t-1}$ with the implication that further lags of the
variables can also be included with no loss of generality. We use $\boldsymbol{\theta}$ to refer
to the vector of all the estimated parameters in all the equations of the model. We use
$\mathbf{u}_t$ and $\mathbf{u}_{t-1}$ to refer to contemporaneous and lagged error terms,
respectively.
The forecast commands solve models of the form

    $y_{it} = f_i(\mathbf{y}_{-i,t}, \mathbf{y}_{t-1}, \mathbf{x}_t, \mathbf{x}_{t-1},
    \mathbf{u}_t, \mathbf{u}_{t-1}; \boldsymbol{\theta})$                              (3)

where $i = 1, \ldots, k$ and $\mathbf{y}_{-i,t}$ refers to the $(k-1) \times 1$ vector of
endogenous variables other than $y_i$ at time $t$. If equation $j$ is an identity, we take
$u_{jt} = 0$ for all $t$; for stochastic equations, the errors correspond to the usual regression
error terms. Equation (3) does not include subscripts indexing panels for notational simplicity,
but the extension is obvious. A model is solvable if $k \geq 1$; $m$ may be zero.
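As a concrete instance, the bivariate VAR(2) from example 1 fits this framework with $k = 2$ and
$m = 0$; a sketch of its two equations written in the form of (3), with $\boldsymbol{\theta}$
collecting the ten estimated coefficients, is

    $gdp_t = f_1(\cdot) = \theta_{10} + \theta_{11}\,gdp_{t-1} + \theta_{12}\,gdp_{t-2}
    + \theta_{13}\,oil_{t-1} + \theta_{14}\,oil_{t-2} + u_{1t}$
    $oil_t = f_2(\cdot) = \theta_{20} + \theta_{21}\,gdp_{t-1} + \theta_{22}\,gdp_{t-2}
    + \theta_{23}\,oil_{t-1} + \theta_{24}\,oil_{t-2} + u_{2t}$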
Endogenous variables are added to the forecast model via forecast estimates, forecast
identity, and forecast coefvector. Equations added via forecast estimates are always
stochastic, while equations added via forecast identity are always nonstochastic. Equations added
via forecast coefvector are treated as stochastic if options variance() or errorvariance()
(or both) are specified and nonstochastic if neither is specified.
Exogenous variables are declared using forecast exogenous, but the model may contain additional
exogenous variables. For example, the right-hand side of an equation may contain exogenous variables
that are not declared using forecast exogenous. Before solving the model, forecast solve
determines whether the declared exogenous variables contain missing values over the forecast horizon
and issues an informative error message if any do. Undeclared exogenous variables that contain
missing values within the forecast horizon will cause forecast solve to exit with a less-informative
error message and require the user to do more work to pinpoint the problem.


Adjustments added via forecast adjust easily fit within the framework of (3). Simply let
$f_i(\cdot)$ represent the value of $y_{it}$ obtained by first evaluating the appropriate
estimation result, coefficient vector, or identity and then performing the adjustments based on
that intermediate result. Endogenous variables may have multiple adjustments; adjustments are
made in the order in which they were specified via forecast adjust. For single-equation
estimation results and coefficient vectors as well as identities, adjustments are performed right
after the equation is evaluated. For multiple-equation estimation results and coefficient
vectors, adjustments are made after all the equations within that set of results are evaluated.
Suppose an estimation result that uses predict includes two left-hand-side variables, $y_{1t}$
and $y_{2t}$, and you have added two adjustments to $y_{1t}$ and one adjustment to $y_{2t}$. Here
forecast solve first calls predict twice to obtain candidate values for $y_{1t}$ and $y_{2t}$;
then it performs the two adjustments to $y_{1t}$, and finally it adjusts $y_{2t}$.
forecast solve offers four solution techniques: Gauss–Seidel, damped Gauss–Seidel,
Broyden–Powell, and Newton–Raphson. The Gauss–Seidel techniques are simple iterative techniques
that are often fast and typically work well, particularly when a damping factor is used.
Gauss–Seidel is simply damped Gauss–Seidel without damping (a damping factor of 0). By default,
damped Gauss–Seidel with a damping factor of 0.2 is used, representing a small amount of damping.
As Fair (1984, 250) notes, while these techniques often work well, there is no guarantee that
they will converge. Technique Newton–Raphson typically works well but is slow because it requires
the use of numerical derivatives at every iteration to obtain a Jacobian matrix. The
Broyden–Powell (Broyden 1970; Powell 1970) method is analogous to quasi-Newton methods used for
function optimization in that an updating method is used at each iteration to update an estimate
of the Jacobian matrix rather than actually recalculating it. For additional details as well as
a discussion of the convergence criteria, see [M-5] solvenl( ).
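Schematically, for damping factor $\lambda$, one damped Gauss–Seidel update of equation $i$ takes
the form below; this is our paraphrase of the iteration, not the exact implementation in
[M-5] solvenl( ). Here $y^{(m)}_{it}$ is the value after $m$ passes, and equations evaluated
earlier in the current pass already use their updated values:

    $y^{(m+1)}_{it} = (1 - \lambda)\, f_i(\cdot) + \lambda\, y^{(m)}_{it},
    \qquad 0 \le \lambda < 1$

Setting $\lambda = 0$ recovers undamped Gauss–Seidel.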
If you do not specify the begin() option, forecast solve uses the following algorithm to select
the starting time period. Suppose the time variable t runs from 1 to T. If, at time T, none of the
endogenous variables contains missing values, forecast solve exits with an error message: there
are no periods in which the endogenous variables are not known; therefore, there are no periods
where a forecast is obviously required. Otherwise, consider period T-1. If none of the endogenous
variables contains missing values in that period, then the only period to forecast is T. Otherwise, work
back through time to find the latest period in which all the endogenous variables contain nonmissing
values and then begin forecasting in the subsequent period. In the case of panel datasets, the same
algorithm is applied to each panel, and forecasts for all panels begin on the earliest period selected.
When you specify the simulate() option with sim_technique betas, forecast solve draws
random vectors from the multivariate normal distribution for each estimation result individually.
The mean and variance are based on the estimation result's e(b) and e(V) macros, respectively.
If the estimation result is from a multiple-equation estimator, the corresponding Stata command
stores in e(b) and e(V) the full parameter vector and covariance matrix for all equations so that
forecast solve's simulations will account for covariances among parameters in that estimation
result's equations. However, covariances among parameters that appear in different estimation
results are taken to be zero.
If you specify a coefficient vector using forecast coefvector and specify a variance matrix in
the variance() option, then those coefficient vectors are simulated just like the parameter vectors
from estimation results. If you do not specify the variance() option, then the coefficient vector is
assumed to be nonstochastic and therefore is not simulated.
When you specify the simulate() option with sim_technique residuals, forecast solve first
obtains static forecasts from your model for all possible periods. For each endogenous variable
defined by a stochastic equation, it then computes residuals as the forecast value minus the
actual value for all observations with nonmissing data. At each replication and for each period
in the forecast horizon, forecast solve randomly selects one element from each stochastic
equation's pool of residuals before solving the model for that replication and period. Then
whenever forecast solve evaluates a stochastic equation, it adds the chosen element to the
predicted value for that equation. Suppose an estimation result represents a multiple-equation
estimator with m equations, and suppose that there are n time periods for which sample residuals
are available. Arrange the residuals into the $n \times m$ matrix $\mathbf{R}$. Then when
forecast solve is randomly selecting residuals for this estimation result, it will choose a
random number $j$ between 1 and $n$ and select the entire $j$th row from $\mathbf{R}$. That
preserves the correlation structure among the error terms of the estimation result's equations.
If you specify a coefficient vector using forecast coefvector and specify either the variance()
option or the errorvariance() option (or both), sim_technique residuals considers the equation
represented by the coefficient vector to be stochastic and resamples residuals for that equation.
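A minimal Mata sketch of the row-resampling step just described, assuming R already holds the
n x m matrix of static-forecast residuals (the names are illustrative):
. mata:
: n = rows(R)                    // number of periods with sample residuals
: j = ceil(n * runiform(1, 1))   // random row index between 1 and n
: u = R[j, .]                    // the entire jth row: one correlated error draw
: end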
When you specify the simulate() option with sim_technique errors, forecast solve, for
each stochastic equation, replication, and period, takes a random draw from a multivariate normal
distribution with zero mean before solving the model for that replication and period. Then whenever
forecast solve evaluates a stochastic equation, it adds that random draw to the predicted value
for that equation. The variance of the distribution from which errors are drawn is based on the
estimation results for that equation. The forecast commands look in e(rmse), e(sigma), and
e(Sigma) to find the estimated variance. If you add an estimation result that does not set any of
those three macros and you request sim_technique errors, forecast solve exits with an error
message. Multiple-equation commands typically set e(Sigma) so that the randomly drawn errors
reflect the estimated error correlation structure.
If you specify a coefficient vector using forecast coefvector and specify the errorvariance()
option, sim_technique errors simulates errors for that equation. Otherwise, the equation is
treated like an identity and no errors are added.
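A minimal Mata sketch of one such zero-mean multivariate-normal draw, assuming Sigma holds the
estimated error covariance matrix (for example, copied from e(Sigma); the names are illustrative):
. mata:
: Sigma = st_matrix("e(Sigma)")                    // estimated error covariance
: m = rows(Sigma)
: u = (cholesky(Sigma) * rnormal(m, 1, 0, 1))'     // 1 x m row of correlated errors
: end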
forecast solve solves panel-data models by solving for all periods in the forecast horizon for
the first panel in the dataset, then the second panel, and so on. When you perform simulations with
panel datasets, one replication is completed for all panels in the dataset before moving to the next
replication. Simulations that include residual resampling select residuals from the pool containing
residuals for all panels; forecast solve does not restrict itself to the static-forecast residuals for a
single panel when simulating that panel.

References
Broyden, C. G. 1970. Recent developments in solving nonlinear algebraic systems. In Numerical Methods for Nonlinear
Algebraic Equations, ed. P. Rabinowitz, 61–73. London: Gordon and Breach Science Publishers.
Fair, R. C. 1984. Specification, Estimation, and Analysis of Macroeconometric Models. Cambridge, MA: Harvard
University Press.
Hamilton, J. D. 2003. What is an oil shock? Journal of Econometrics 113: 363–398.
Kilian, L., and R. J. Vigfusson. 2013. Do oil prices help forecast U.S. real GDP? The role of nonlinearities and
asymmetries. Journal of Business and Economic Statistics 31: 78–93.
Klein, L. R. 1950. Economic Fluctuations in the United States 1921–1941. New York: Wiley.
Powell, M. J. D. 1970. A hybrid method for nonlinear equations. In Numerical Methods for Nonlinear Algebraic
Equations, ed. P. Rabinowitz, 87–114. London: Gordon and Breach Science Publishers.

Also see
[TS] forecast – Econometric model forecasting
[TS] forecast adjust – Adjust a variable by add factoring, replacing, etc.
[TS] forecast drop – Drop forecast variables
[R] set seed – Specify random-number seed and state

Title
irf – Create and analyze IRFs, dynamic-multiplier functions, and FEVDs

Description     Quick start     Syntax     Remarks and examples     References     Also see

Description
irf creates and manipulates IRF files that contain estimates of the IRFs, dynamic-multiplier
functions, and forecast-error variance decompositions (FEVDs) created after estimation by var, svar,
or vec; see [TS] var, [TS] var svar, or [TS] vec.
irf creates and manipulates IRF files that contain estimates of the IRFs created after estimation
by arima or arfima; see [TS] arima or [TS] arfima.
IRFs and FEVDs are described below, and the process of analyzing them is outlined. After reading
this entry, please see [TS] irf create.

Quick start
Fit a VAR model
var y1 y2 y3
Create impulse–response function myirf and IRF file myirfs.irf
irf create myirf, set(myirfs)
Graph orthogonalized impulse–response function for dependent variables y1 and y2 given a shock
to y1
irf graph oirf, impulse(y1) response(y1 y2)
As above, but present results in a table
irf table oirf, impulse(y1) response(y1 y2)
Note: irf commands can be used after var, svar, vec, arima, or arfima; see [TS] var, [TS] var
svar, [TS] vec, [TS] arima, or [TS] arfima.
See [TS] irf add, [TS] irf cgraph, [TS] irf ctable, [TS] irf describe, [TS] irf drop, [TS] irf graph,
[TS] irf ograph, [TS] irf rename, [TS] irf set, and [TS] irf table for additional Quick starts.


Syntax
irf subcommand ... [, ...]

subcommand     Description
------------------------------------------------------------------------------
create         create IRF file containing IRFs, dynamic-multiplier functions,
                 and FEVDs
set            set the active IRF file
graph          graph results from active file
cgraph         combine graphs of IRFs, dynamic-multiplier functions, and FEVDs
ograph         graph overlaid IRFs, dynamic-multiplier functions, and FEVDs
table          create tables of IRFs, dynamic-multiplier functions, and FEVDs
                 from active file
ctable         combine tables of IRFs, dynamic-multiplier functions, and FEVDs
describe       describe contents of active file
add            add results from an IRF file to the active IRF file
drop           drop IRF results from active file
rename         rename IRF results within a file
------------------------------------------------------------------------------
IRF stands for impulse–response function; FEVD stands for forecast-error variance decomposition.
irf can be used after var, svar, vec, arima, or arfima; see [TS] var, [TS] var svar, [TS] vec,
[TS] arima, or [TS] arfima.

Remarks and examples


An IRF measures the effect of a shock to an endogenous variable on itself or on another
endogenous variable; see Lütkepohl (2005, 51–63) and Hamilton (1994, 318–323) for formal
definitions. Becketti (2013) provides an approachable, gentle introduction to IRF analysis. Of
the many types of IRFs, irf create estimates the five most important: simple IRFs, orthogonalized
IRFs, cumulative IRFs, cumulative orthogonalized IRFs, and structural IRFs.
A dynamic-multiplier function, or transfer function, measures the impact of a unit increase in an
exogenous variable on the endogenous variables over time; see Lütkepohl (2005, chap. 10) for
formal definitions. irf create estimates simple and cumulative dynamic-multiplier functions
after var.
The forecast-error variance decomposition (FEVD) measures the fraction of the forecast-error
variance of an endogenous variable that can be attributed to orthogonalized shocks to itself or
to another endogenous variable; see Lütkepohl (2005, 63–66) and Hamilton (1994, 323–324) for
formal definitions. Of the many types of FEVDs, irf create estimates the two most important:
Cholesky and structural.


To analyze IRFs and FEVDs in Stata, you first fit a model, then use irf create to estimate the
IRFs and FEVDs and save them in a file, and finally use irf graph or any of the other irf analysis
commands to examine results:
. use http://www.stata-press.com/data/r14/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4), lags(1/2) dfk
(output omitted )
. irf create order1, step(10) set(myirf1)
(file myirf1.irf created)
(file myirf1.irf now active)
(file myirf1.irf updated)
. irf graph oirf, impulse(dln_inc) response(dln_consump)

(figure: panel "order1, dln_inc, dln_consump" -- orthogonalized IRF with 95% CI plotted against
step. Note: "Graphs by irfname, impulse variable, and response variable")

Multiple sets of IRFs and FEVDs can be placed in the same file, with each set of results in a
file bearing a distinct name. The irf create command above created file myirf1.irf and put
one set of results in it, named order1. The order1 results include estimates of the simple IRFs,
orthogonalized IRFs, cumulative IRFs, cumulative orthogonalized IRFs, and Cholesky FEVDs.
IRF files are just files: they can be erased by erase, listed by dir, and copied by copy; see
[D] erase, [D] dir, and [D] copy.

Below we use the same estimated var but use a different Cholesky ordering to create a second set
of IRF results, which we will save as order2 in the same file, and then we will graph both results:



. irf create order2, step(10) order(dln_inc dln_inv dln_consump)
(file myirf1.irf updated)
. irf graph oirf, irf(order1 order2) impulse(dln_inc) response(dln_consump)

(figure: panels "order1, dln_inc, dln_consump" and "order2, dln_inc, dln_consump" --
orthogonalized IRFs with 95% CIs plotted against step. Note: "Graphs by irfname, impulse
variable, and response variable")

We have compared results for one model under two different identification schemes. We could just
as well have compared results of two different models. We now use irf table to display the results
tabularly:
. irf table oirf, irf(order1 order2) impulse(dln_inc) response(dln_consump)
Results from order1 order2

                (1)        (1)        (1)        (2)        (2)        (2)
 step          oirf      Lower      Upper      oirf      Lower      Upper
 --------------------------------------------------------------------------
 0          .004934    .003016    .006852    .005244    .003252    .007237
 1          .001309   -.000931    .003549    .001235   -.001011    .003482
 2          .003573    .001285    .005862     .00391    .001542    .006278
 3         -.000692   -.002333     .00095   -.000677   -.002347    .000993
 4          .000905   -.000541    .002351     .00094   -.000576    .002456
 5          .000328     -.0005    .001156    .000341   -.000518    .001201
 6          .000021   -.000675    .000717    .000042   -.000693    .000777
 7          .000154   -.000206    .000515    .000161   -.000218     .00054
 8          .000026   -.000248      .0003    .000027   -.000261    .000315
 9          .000026   -.000121    .000174     .00003   -.000125    .000184
 10         .000026   -.000061    .000113    .000027   -.000065     .00012

 95% lower and upper bounds reported
 (1) irfname = order1, impulse = dln_inc, and response = dln_consump
 (2) irfname = order2, impulse = dln_inc, and response = dln_consump

Both the table and the graph show that the two orthogonalized IRFs are essentially the same. In both
functions, an increase in the orthogonalized shock to dln_inc causes a short series of increases in
dln_consump that dies out after four or five periods.


References
Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.
Box-Steffensmeier, J. M., J. R. Freeman, M. P. Hitt, and J. C. W. Pevehouse. 2014. Time Series Analysis for the
Social Sciences. New York: Cambridge University Press.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis. 2nd ed. New York: Springer.
Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.

Also see
[TS] arfima – Autoregressive fractionally integrated moving-average models
[TS] arima – ARIMA, ARMAX, and other dynamic regression models
[TS] var – Vector autoregressive models
[TS] var svar – Structural vector autoregressive models
[TS] varbasic – Fit a simple VAR and graph IRFs or FEVDs
[TS] vec – Vector error-correction models
[TS] var intro – Introduction to vector autoregressive models
[TS] vec intro – Introduction to vector error-correction models

Title
irf add – Add results from an IRF file to the active IRF file

Description     Quick start     Menu     Syntax
Option          Remarks and examples     Also see

Description
irf add copies results from an existing IRF file on disk to the active IRF file, set by irf set; see
[TS] irf set.

Quick start
Copy the IRF results myirf1 from myirfs.irf to newirf in the active IRF file
irf add newirf = myirf1, using(myirfs)
As above, but copy all IRF results from myirfs.irf to the active file
irf add _all, using(myirfs)
Note: irf commands can be used after var, svar, vec, arima, or arfima; see [TS] var, [TS] var
svar, [TS] vec, [TS] arima, or [TS] arfima.

Menu
Statistics > Multivariate time series > Manage IRF results and files > Add IRF results

Syntax
irf add {_all | [newname =] oldname ...}, using(irf_filename)

Option
using(irf_filename) specifies the file from which results are to be obtained and is required. If
irf_filename is specified without an extension, .irf is assumed.

Remarks and examples


If you have not read [TS] irf, please do so.

Example 1
After fitting a VAR model, we create two separate IRF files:
. use http://www.stata-press.com/data/r14/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4), lags(1/2) dfk
(output omitted )
. irf create original, set(irf1, replace)
(file irf1.irf created)
(file irf1.irf now active)
(file irf1.irf updated)
. irf create order2, order(dln_inc dln_inv dln_consump) set(irf2, replace)
(file irf2.irf created)
(file irf2.irf now active)
(file irf2.irf updated)

We copy IRF results original to the active file giving them the name order1.
. irf add order1 = original, using(irf1)
(file irf2.irf updated)

Here we create new IRF results and save them in the new file irf3.
. irf create order3, order(dln_inc dln_consump dln_inv) set(irf3, replace)
(file irf3.irf created)
(file irf3.irf now active)
(file irf3.irf updated)

Now we copy all the IRF results in file irf2 into the active file.
. irf add _all, using(irf2)
(file irf3.irf updated)

Also see
[TS] irf – Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro – Introduction to vector autoregressive models
[TS] vec intro – Introduction to vector error-correction models

Title
irf cgraph – Combined graphs of IRFs, dynamic-multiplier functions, and FEVDs

Description     Quick start     Menu     Syntax
Options         Remarks and examples     Stored results     Also see

Description
irf cgraph makes a graph or a combined graph of IRF results. A graph is drawn for specified
combinations of named IRF results, impulse variables, response variables, and statistics. irf cgraph
combines these graphs into one image, unless separate graphs are requested.
irf cgraph operates on the active IRF file; see [TS] irf set.

Quick start
Combine graphs of an orthogonalized IRF myirf and cumulative IRF mycirf for dependent variables
y1 and y2
irf cgraph (myirf y1 y2 oirf) (mycirf y1 y2 cirf)
As above, but suppress confidence bands and add a title
irf cgraph (myirf y1 y2 oirf) (mycirf y1 y2 cirf), noci ///
      title("My Title")
Note: irf commands can be used after var, svar, vec, arima, or arfima; see [TS] var, [TS] var
svar, [TS] vec, [TS] arima, or [TS] arfima.

Menu
Statistics > Multivariate time series > IRF and FEVD analysis > Combined graphs

Syntax
irf cgraph (spec1) [(spec2) ... (specN)] [, options]

where (speck) is

    (irfname impulsevar responsevar stat [, spec_options])

irfname is the name of a set of IRF results in the active IRF file. impulsevar should be
specified as an endogenous variable for all statistics except dm and cdm; for those, specify an
exogenous variable. responsevar is an endogenous variable name. stat is one or more statistics
from the list below:

stat       Description
----------------------------------------------------------------
Main
  irf      impulse–response function
  oirf     orthogonalized impulse–response function
  dm       dynamic-multiplier function
  cirf     cumulative impulse–response function
  coirf    cumulative orthogonalized impulse–response function
  cdm      cumulative dynamic-multiplier function
  fevd     Cholesky forecast-error variance decomposition
  sirf     structural impulse–response function
  sfevd    structural forecast-error variance decomposition
----------------------------------------------------------------
Notes: 1. No statistic may appear more than once.
       2. If confidence intervals are included (the default), only two statistics may be
          included.
       3. If confidence intervals are suppressed (option noci), up to four statistics may be
          included.

options             Description
----------------------------------------------------------------
Main
  set(filename)     make filename active
Options
  combine_options   affect appearance of combined graph
Y axis, X axis, Titles, Legend, Overall
  twoway_options    any options other than by() documented in [G-3] twoway_options

  spec_options      level, steps, and rendition of plots and their CIs
  individual        graph each combination individually
----------------------------------------------------------------
spec_options appear on multiple tabs in the dialog box.
individual does not appear in the dialog box.

spec_options                Description
----------------------------------------------------------------
Main
  noci                      suppress confidence bands
Options
  level(#)                  set confidence level; default is level(95)
  lstep(#)                  use # for first step
  ustep(#)                  use # for maximum step
Plots
  plot#opts(cline_options)  affect rendition of the line plotting the # stat
CI plots
  ci#opts(area_options)     affect rendition of the confidence interval for the # stat
----------------------------------------------------------------
spec_options may be specified within a graph specification, globally, or in both. When specified
in a graph specification, the spec_options affect only the specification in which they are used.
When supplied globally, the spec_options affect all graph specifications. When supplied in both
places, options in the graph specification take precedence.

Options


Main

noci suppresses graphing the confidence interval for each statistic. noci is assumed when the model
was fit by vec because no confidence intervals were estimated.
set(filename) specifies the file to be made active; see [TS] irf set. If set() is not specified, the
active file is used.

Options

level(#) specifies the default confidence level, as a percentage, for confidence intervals, when
they are reported. The default is level(95) or as set by set level; see [U] 20.7 Specifying the
width of confidence intervals. The value of an overall level() can be overridden by the level()
inside a (speck).
lstep(#) specifies the first step, or period, to be included in the graph. lstep(0) is the default.
ustep(#), # >= 1, specifies the maximum step, or period, to be included in the graph.
combine_options affect the appearance of the combined graph; see [G-2] graph combine.

Plots

plot1opts(cline_options), ..., plot4opts(cline_options) affect the rendition of the plotted
statistics. plot1opts() affects the rendition of the first statistic; plot2opts(), the second;
and so on. cline_options are as described in [G-3] cline_options.

CI plots

ci1opts(area_options) and ci2opts(area_options) affect the rendition of the confidence intervals
for the first (ci1opts()) and second (ci2opts()) statistics. See [TS] irf graph for a description
of this option and [G-3] area_options for the suboptions that change the look of the CI.


Y axis, X axis, Titles, Legend, Overall

twoway options are any of the options documented in [G-3] twoway options, excluding by(). These
include options for titling the graph (see [G-3] title options) and for saving the graph to disk (see
[G-3] saving option).
The following option is available with irf cgraph but is not shown in the dialog box:
individual specifies that each graph be displayed individually. By default, irf cgraph combines
the subgraphs into one image.

Remarks and examples


If you have not read [TS] irf, please do so.
The relationship between irf cgraph and irf graph is syntactically and conceptually the same
as that between irf ctable and irf table; see [TS] irf ctable for a description of the syntax.
irf cgraph is much the same as using irf graph to make individual graphs and then using
graph combine to put them together. If you cannot use irf cgraph to do what you want, consider
the other approach.

Example 1
You have previously issued the commands:
. use http://www.stata-press.com/data/r14/lutkepohl2
. mat a = (., 0, 0\0,.,0\.,.,.)
. mat b = I(3)
. svar dln_inv dln_inc dln_consump, aeq(a) beq(b)
. irf create modela, set(results3) step(8)
. svar dln_inc dln_inv dln_consump, aeq(a) beq(b)
. irf create modelb, step(8)


You now type
. irf cgraph (modela dln_inc dln_consump oirf sirf)
>            (modelb dln_inc dln_consump oirf sirf)
>            (modela dln_inc dln_consump fevd sfevd, lstep(1))
>            (modelb dln_inc dln_consump fevd sfevd, lstep(1)),
>            title("Results from modela and modelb")

(figure: "Results from modela and modelb" -- four panels: "modela: dln_inc -> dln_consump" and
"modelb: dln_inc -> dln_consump" showing oirf and sirf with 95% CIs against step, and the same
two panels showing fevd and sfevd with 95% CIs against step)

Stored results
irf cgraph stores the following in r():
Scalars
    r(k)             number of specific graph commands
Macros
    r(individual)    individual, if specified
    r(save)          filename, replace from saving() option for combined graph
    r(name)          name, replace from name() option for combined graph
    r(title)         title of the combined graph
    r(save#)         filename, replace from saving() option for individual graphs
    r(name#)         name, replace from name() option for individual graphs
    r(title#)        title for the #th graph
    r(ci#)           level applied to the #th confidence interval or noci
    r(response#)     response specified in the #th command
    r(impulse#)      impulse specified in the #th command
    r(irfname#)      IRF name specified in the #th command
    r(stats#)        statistics specified in the #th command

Also see
[TS] irf – Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro – Introduction to vector autoregressive models
[TS] vec intro – Introduction to vector error-correction models

Title
irf create – Obtain IRFs, dynamic-multiplier functions, and FEVDs

Description     Quick start     Menu     Syntax     Options
Remarks and examples     Methods and formulas     References     Also see

Description
irf create estimates multiple sets of impulse–response functions (IRFs), dynamic-multiplier
functions, and forecast-error variance decompositions (FEVDs). All of these estimates and their
standard errors are known collectively as IRF results and are saved in an IRF file under a
specified filename. Once you have created a set of IRF results, you can use the other irf
commands to analyze them.

Quick start
Create impulse–response function myirf with 8 forecast periods in the active IRF file
irf create myirf
As above, and use IRF file myirfs.irf
irf create myirf, set(myirfs)
As above, but compute the IRF for 12 periods
irf create myirf, set(myirfs) step(12)
Note: irf can be used after var, svar, vec, arima, or arfima; see [TS] var, [TS] var svar,
[TS] vec, [TS] arima, or [TS] arfima.

Menu
Statistics > Multivariate time series > IRF and FEVD analysis > Obtain IRFs, dynamic-multiplier
functions, and FEVDs

Syntax
After var
    irf create irfname [, var_options]
After svar
    irf create irfname [, svar_options]
After vec
    irf create irfname [, vec_options]
After arima
    irf create irfname [, arima_options]
After arfima
    irf create irfname [, arfima_options]

irfname is any valid name that does not exceed 15 characters.

var_options                     Description
------------------------------------------------------------------------------
Main
  set(filename[, replace])      make filename active
  replace                       replace irfname if it already exists
  step(#)                       set forecast horizon to #; default is step(8)
  order(varlist)                specify Cholesky ordering of endogenous variables
  estimates(estname)            use previously stored results estname; default is
                                  to use active results
Std. errors
  nose                          do not calculate standard errors
  bs                            obtain standard errors from bootstrapped residuals
  bsp                           obtain standard errors from parametric bootstrap
  nodots                        do not display . for each bootstrap replication
  reps(#)                       use # bootstrap replications; default is reps(200)
  bsaving(filename[, replace])  save bootstrap results in filename

svar_options                    Description
------------------------------------------------------------------------------
Main
  set(filename[, replace])      make filename active
  replace                       replace irfname if it already exists
  step(#)                       set forecast horizon to #; default is step(8)
  estimates(estname)            use previously stored results estname; default is
                                  to use active results
Std. errors
  nose                          do not calculate standard errors
  bs                            obtain standard errors from bootstrapped residuals
  bsp                           obtain standard errors from parametric bootstrap
  nodots                        do not display . for each bootstrap replication
  reps(#)                       use # bootstrap replications; default is reps(200)
  bsaving(filename[, replace])  save bootstrap results in filename

vec_options                     Description
------------------------------------------------------------------------------
Main
  set(filename[, replace])      make filename active
  replace                       replace irfname if it already exists
  step(#)                       set forecast horizon to #; default is step(8)
  estimates(estname)            use previously stored results estname; default is
                                  to use active results

arima_options                   Description
------------------------------------------------------------------------------
Main
  set(filename[, replace])      make filename active
  replace                       replace irfname if it already exists
  step(#)                       set forecast horizon to #; default is step(8)
  estimates(estname)            use previously stored results estname; default is
                                  to use active results
Std. errors
  nose                          do not calculate standard errors

arfima_options                  Description
------------------------------------------------------------------------------
Main
  set(filename[, replace])      make filename active
  replace                       replace irfname if it already exists
  step(#)                       set forecast horizon to #; default is step(8)
  smemory                       calculate short-memory IRFs
  estimates(estname)            use previously stored results estname; default is
                                  to use active results
Std. errors
  nose                          do not calculate standard errors
------------------------------------------------------------------------------
The default is to use asymptotic standard errors if no options are specified.
irf create is for use after fitting a model with the var, svar, vec, arima, or arfima command;
see [TS] var, [TS] var svar, [TS] vec, [TS] arima, or [TS] arfima.
You must tsset your data before using var, svar, vec, arima, or arfima and, hence, before using
irf create; see [TS] tsset.

Options


Main

set(filename[, replace]) specifies the IRF file to be used. If set() is not specified, the active IRF
file is used; see [TS] irf set.
If set() is specified, the specified file becomes the active file, just as if you had issued an irf
set command.
replace specifies that the results saved under irfname may be replaced, if they already exist. IRF
results are saved in files, and one file may contain multiple IRF results.
step(#) specifies the step (forecast) horizon; the default is eight periods.
order(varlist) is allowed only after estimation by var; it specifies the Cholesky ordering of the
endogenous variables to be used when estimating the orthogonalized IRFs. By default, the order
in which the variables were originally specified on the var command is used.
smemory is allowed only after estimation by arfima; it specifies that the IRFs are calculated based
on a short-memory model with the fractional difference parameter d set to zero.
estimates(estname) specifies that estimation results previously estimated by var, svar, or vec,
and stored by estimates, be used. This option is rarely specified; see [R] estimates.

Std. errors

nose, bs, and bsp are alternatives that specify how (whether) standard errors are to be calculated. If
none of these options is specified, asymptotic standard errors are calculated, except in two cases:
after estimation by vec and after estimation by svar in which long-run constraints were applied.
In those two cases, the default is as if nose were specified, although in the second case, you could
specify bs or bsp. After estimation by vec, standard errors are simply not available.
nose specifies that no standard errors be calculated.
bs specifies that standard errors be calculated by bootstrapping the residuals. bs may not be
specified if there are gaps in the data.
bsp specifies that standard errors be calculated via a multivariate-normal parametric bootstrap.
bsp may not be specified if there are gaps in the data.


nodots, reps(#), and bsaving(filename[, replace]) are relevant only if bs or bsp is specified.
nodots specifies that dots not be displayed each time irf create performs a bootstrap replication.
reps(#), # > 50, specifies the number of bootstrap replications to be performed. reps(200) is
the default.


bsaving(filename[, replace]) specifies that file filename be created and that the bootstrap
replications be saved in it. New file filename is just a .dta dataset that can be loaded later
using use; see [D] use. If filename is specified without an extension, .dta is assumed.
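As a quick sketch combining these options (the result and file names are illustrative), one might
request 500 bootstrap replications, suppress the dots, and keep the replication dataset:
. irf create mybs, step(8) bs reps(500) nodots bsaving(bsreps, replace)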


Remarks and examples


If you have not read [TS] irf, please do so. An introductory example using IRFs is presented there.
irf create estimates several types of IRFs, dynamic-multiplier functions, and FEVDs. Which
estimates are saved depends on the estimation method previously used to fit the model, as summarized
in the table below:

                                            Estimation command
 Saves                              var    svar    vec    arima    arfima
 simple IRFs                         x      x       x       x        x
 orthogonalized IRFs                 x      x       x
 dynamic multipliers                 x
 cumulative IRFs                     x      x       x       x        x
 cumulative orthogonalized IRFs      x      x       x
 cumulative dynamic multipliers      x
 structural IRFs                            x
 Cholesky FEVDs                      x      x       x
 structural FEVDs                           x

Remarks are presented under the following headings:


Introductory examples
Technical aspects of IRF files
IRFs and FEVDs
IRF results for VARs
An introduction to impulse-response functions for VARs
An introduction to dynamic-multiplier functions for VARs
An introduction to forecast-error variance decompositions for VARs
IRF results for VECMs
An introduction to impulse-response functions for VECMs
An introduction to forecast-error variance decompositions for VECMs
IRF results for ARIMA and ARFIMA

Introductory examples
Example 1: After var
Below we compare bootstrap and asymptotic standard errors for a specific FEVD. We begin by
fitting a VAR(2) model to the Lütkepohl data (we use the var command). We next use the irf create
command twice, first to create results with asymptotic standard errors (saved under the name asymp)
and then to re-create the same results, this time with bootstrap standard errors (saved under the name
bs). Because bootstrapping is a random process, we set the random-number seed (set seed 123456)
before using irf create the second time; this makes our results reproducible. Finally, we compare
results by using the IRF analysis command irf ctable.



. use http://www.stata-press.com/data/r14/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lütkepohl 1993 Table E.1)
. var dln_inv dln_inc dln_consump if qtr>=tq(1961q2) & qtr<=tq(1978q4), lags(1/2)
(output omitted )
. irf create asymp, step(8) set(results1)
(file results1.irf created)
(file results1.irf now active)
(file results1.irf updated)
. set seed 123456
. irf create bs, step(8) bs reps(250) nodots
(file results1.irf updated)
. irf ctable (asymp dln_inc dln_consump fevd) (bs dln_inc dln_consump fevd),
> noci stderror

    step        (1) fevd    (1) S.E.    (2) fevd    (2) S.E.
    0           0           0           0           0
    1           .282135     .087373     .282135     .102756
    2           .278777     .083782     .278777     .098161
    3           .33855      .090006     .33855      .10586
    4           .339942     .089207     .339942     .104191
    5           .342813     .090494     .342813     .105351
    6           .343119     .090517     .343119     .105258
    7           .343079     .090499     .343079     .105266
    8           .34315      .090569     .34315      .105303

    (1) irfname = asymp, impulse = dln_inc, and response = dln_consump
    (2) irfname = bs, impulse = dln_inc, and response = dln_consump

Point estimates are, of course, the same. The bootstrap estimates of the standard errors, however,
are larger than the asymptotic estimates, which suggests that the sample size of 71 is not large
enough for the distribution of the estimator of the FEVD to be well approximated by the asymptotic
distribution. Here we would expect the bootstrap confidence interval to be more reliable than the
confidence interval that is based on the asymptotic standard error.

Technical note
The details of the bootstrap algorithms are given in Methods and formulas. These algorithms are
conditional on the first p observations, where p is the order of the fitted VAR. (In an SVAR model, p
is the order of the VAR that underlies the SVAR.) The bootstrapped estimates are conditional on the
first p observations, just as the estimators of the coefficients in VAR models are conditional on the
first p observations. With bootstrap standard errors (option bs), the p initial observations are used
together with resampled residuals to produce the bootstrap samples used for estimation. With the more
parametric bootstrap (option bsp), the p initial observations are used together with draws from a
multivariate normal distribution with variance-covariance matrix $\widehat{\Sigma}$ to generate the bootstrap samples.

Technical note
For var and svar e() results, irf uses $\widehat{\Sigma}$, the estimated variance matrix of the disturbances, in
computing the asymptotic standard errors of all the functions. The point estimates of the orthogonalized
impulse-response functions, the structural impulse-response functions, and all the variance
decompositions also depend on $\widehat{\Sigma}$. As discussed in [TS] var, var and svar use the ML estimator of
this matrix by default, but they have option dfk, which will instead use an estimator that includes a
small-sample correction. Specifying dfk when the model is fit (that is, when the var or svar command is
given) changes the estimate of $\widehat{\Sigma}$ and will change the IRF results that depend on it.

Example 2: After var with exogenous variables


After fitting a VAR, irf create computes estimates of the dynamic multipliers, which describe
the impact of a unit change in an exogenous variable on each endogenous variable. For instance,
below we estimate and report the cumulative dynamic multipliers from a model in which changes in
investment are exogenous. The results indicate that both of the cumulative dynamic multipliers are
significant.
. var dln_inc dln_consump if qtr>=tq(1961q2) & qtr<=tq(1978q4), lags(1/2)
> exog(L(0/2).dln_inv)
(output omitted )
. irf create dm, step(8)
(file results1.irf updated)
. irf table cdm, impulse(dln_inv) irf(dm)
Results from dm

    step        (1) cdm     (1) Lower   (1) Upper
    0           .032164     -.027215    .091544
    1           .096568     .003479     .189656
    2           .140107     .022897     .257317
    3           .150527     .032116     .268938
    4           .148979     .031939     .26602
    5           .151247     .033011     .269482
    6           .150267     .033202     .267331
    7           .150336     .032858     .267813
    8           .150525     .033103     .267948

    step        (2) cdm     (2) Lower   (2) Upper
    0           .058681     .012529     .104832
    1           .062723     -.005058    .130504
    2           .126167     .032497     .219837
    3           .136583     .038691     .234476
    4           .146482     .04442      .248543
    5           .146075     .045201     .24695
    6           .145542     .044988     .246096
    7           .146309     .045315     .247304
    8           .145786     .045206     .246365

    95% lower and upper bounds reported
    (1) irfname = dm, impulse = dln_inv, and response = dln_inc
    (2) irfname = dm, impulse = dln_inv, and response = dln_consump


Example 3: After vec


Although all IRFs and orthogonalized IRFs (OIRFs) from models with stationary variables will taper
off to zero, some of the IRFs and OIRFs from models with first-difference stationary variables will not.
This is the key difference between IRFs and OIRFs from systems of stationary variables fit by var or
svar and those obtained from systems of first-difference stationary variables fit by vec. When the
effect of the innovations dies out over time, the shocks are said to be transitory. In contrast, when
the effect does not taper off, shocks are said to be permanent.
In this example, we look at the OIRF from one of the VECMs fit to the unemployment-rate data
analyzed in example 2 of [TS] vec. We see that an orthogonalized shock to Indiana has a permanent
effect on the unemployment rate in Missouri:
. use http://www.stata-press.com/data/r14/urates
. vec missouri indiana kentucky illinois, trend(rconstant) rank(2) lags(4)
(output omitted )
. irf create vec1, set(vecirfs) step(50)
(file vecirfs.irf created)
(file vecirfs.irf now active)
(file vecirfs.irf updated)

Now we can use irf graph to graph the OIRF of interest:


. irf graph oirf, impulse(indiana) response(missouri)

(figure omitted: OIRF graph titled "vec1, indiana, missouri"; y axis runs from 0 to .3, x axis shows step from 0 to 50; note: Graphs by irfname, impulse variable, and response variable)

The graph shows that the estimated OIRF converges to a positive asymptote, which indicates that
an orthogonalized innovation to the unemployment rate in Indiana has a permanent effect on the
unemployment rate in Missouri.

Technical aspects of IRF files


This section is included for programmers wishing to extend the irf system.
irf create estimates a series of impulseresponse functions and their standard errors. Although
these estimates are saved in an IRF file, most users will never need to look at the contents of this
file. The IRF commands fill in, analyze, present, and manage IRF results.


IRF files are just Stata datasets that have names ending in .irf instead of .dta. The dataset in
the file has a nested panel structure.

Variable irfname contains the irfname specified by the user. Variable impulse records the name
of the endogenous variable whose innovations are the impulse. Variable response records the name
of the endogenous variable that is responding to the innovations. In a model with K endogenous
variables, there are $K^2$ combinations of impulse and response. Variable step records the periods
for which these estimates were computed.
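
Because an IRF file is an ordinary Stata dataset, this structure can be inspected directly; a quick sketch, assuming the results1.irf file created in example 1 is in the current working directory:

    . use results1.irf, clear
    . list irfname impulse response step oirf in 1/5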
Below is a catalog of the statistics that irf create estimates and the variable names under which
they are saved in the IRF file.
 Statistic                                                                   Name
 impulse-response functions                                                  irf
 orthogonalized impulse-response functions                                   oirf
 dynamic-multiplier functions                                                dm
 cumulative impulse-response functions                                       cirf
 cumulative orthogonalized impulse-response functions                        coirf
 cumulative dynamic-multiplier functions                                     cdm
 Cholesky forecast-error decomposition                                       fevd
 structural impulse-response functions                                       sirf
 structural forecast-error decomposition                                     sfevd
 standard error of the impulse-response functions                            stdirf
 standard error of the orthogonalized impulse-response functions             stdoirf
 standard error of the cumulative impulse-response functions                 stdcirf
 standard error of the cumulative orthogonalized impulse-response functions  stdcoirf
 standard error of the Cholesky forecast-error decomposition                 stdfevd
 standard error of the structural impulse-response functions                 stdsirf
 standard error of the structural forecast-error decomposition               stdsfevd


In addition to the variables, information is stored in _dta characteristics. Much of the following
information is also available in r() after irf describe, where it is often more convenient to obtain
the information. Characteristic _dta[version] contains the version number of the IRF file, which
is currently 1.1. Characteristic _dta[irfnames] contains a list of all the irfnames in the IRF file.
For each irfname, there are a series of additional characteristics:
 Name                       Contents
 _dta[irfname_model]        var, sr_var, lr_var, vec, arima, or arfima
 _dta[irfname_order]        Cholesky order used in IRF estimates
 _dta[irfname_exog]         exogenous variables, and their lags, in VAR
 _dta[irfname_exogvars]     exogenous variables in VAR
 _dta[irfname_constant]     constant or noconstant, depending on whether
                              noconstant was specified in var or svar
 _dta[irfname_lags]         lags in model
 _dta[irfname_exlags]       lags of exogenous variables in model
 _dta[irfname_tmin]         minimum value of timevar in the estimation sample
 _dta[irfname_tmax]         maximum value of timevar in the estimation sample
 _dta[irfname_timevar]      name of tsset timevar
 _dta[irfname_tsfmt]        format of timevar
 _dta[irfname_varcns]       constrained or colon-separated list of
                              constraints placed on VAR coefficients
 _dta[irfname_svarcns]      constrained or colon-separated list of
                              constraints placed on SVAR coefficients
 _dta[irfname_step]         maximum step in IRF estimates
 _dta[irfname_stderror]     asymptotic, bs, bsp, or none,
                              depending on the type of standard errors requested
 _dta[irfname_reps]         number of bootstrap replications performed
 _dta[irfname_version]      version of the IRF file that originally
                              held irfname IRF results
 _dta[irfname_rank]         number of cointegrating equations
 _dta[irfname_trend]        trend() specified in vec
 _dta[irfname_veccns]       constraints placed on VECM parameters
 _dta[irfname_sind]         normalized seasonal indicators included in vec
 _dta[irfname_d]            fractional difference parameter d in arfima
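
As a sketch of how a programmer might read one of these characteristics (again assuming the example 1 file results1.irf; the characteristic-access syntax is standard Stata macro expansion):

    . use results1.irf, clear
    . display "`_dta[asymp_model]'"

For the example 1 results, this would display var.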

IRFs and FEVDs


irf create can estimate several types of IRFs and FEVDs for VARs and VECMs. irf create can
also estimate IRFs and cumulative IRFs for ARIMA and ARFIMA models. We first discuss IRF results for
VAR and SVAR models, and then we discuss them in the context of VECMs. Because the cointegrating
VECM is an extension of the stationary VAR framework, the section that discusses the IRF results for
VECMs draws on the earlier VAR material. We conclude our discussion with IRF results for ARIMA
and ARFIMA models.


IRF results for VARs

An introduction to impulse-response functions for VARs

A pth-order vector autoregressive model (VAR) with exogenous variables is given by

$$ y_t = v + A_1 y_{t-1} + \cdots + A_p y_{t-p} + B x_t + u_t $$

where

$y_t = (y_{1t}, \ldots, y_{Kt})'$ is a $K \times 1$ random vector,
the $A_i$ are fixed $K \times K$ matrices of parameters,
$x_t$ is an $R_0 \times 1$ vector of exogenous variables,
$B$ is a $K \times R_0$ matrix of coefficients,
$v$ is a $K \times 1$ vector of fixed parameters, and
$u_t$ is assumed to be white noise; that is,
  $E(u_t) = 0$, $E(u_t u_t') = \Sigma$, and $E(u_t u_s') = 0$ for $t \neq s$
As discussed in [TS] varstable, a VAR can be rewritten in moving-average form only if it is stable.
Any exogenous variables are assumed to be covariance stationary. Because the functions of interest
in this section depend on the exogenous variables only through their effect on the estimated $A_i$, we
can simplify the notation by dropping them from the analysis. All the formulas given below still
apply, although the $A_i$ are estimated jointly with the coefficients $B$ on the exogenous variables.
Below we discuss conditions under which the IRFs and forecast-error variance decompositions have a
causal interpretation. Although estimation requires only that the exogenous variables be predetermined,
that is, that $E(x_{jt} u_{it}) = 0$ for all $i$, $j$, and $t$, assigning a causal interpretation to IRFs and FEVDs
requires that the exogenous variables be strictly exogenous, that is, that $E(x_{js} u_{it}) = 0$ for all $i$, $j$,
$s$, and $t$.
IRFs describe how the innovations to one variable affect another variable after a given number of
periods. For an example of how IRFs are interpreted, see Stock and Watson (2001). They use IRFs to
investigate the effect of surprise shocks to the Federal Funds rate on inflation and unemployment. In
another example, Christiano, Eichenbaum, and Evans (1999) use IRFs to investigate how shocks to
monetary policy affect other macroeconomic variables.

Consider a VAR without exogenous variables:

$$ y_t = v + A_1 y_{t-1} + \cdots + A_p y_{t-p} + u_t \tag{1} $$

The VAR represents the variables in $y_t$ as functions of its own lags and serially uncorrelated innovations
$u_t$. All the information about contemporaneous correlations among the K variables in $y_t$ is contained
in $\Sigma$. In fact, as discussed in [TS] var svar, a VAR can be viewed as the reduced form of a dynamic
simultaneous-equation model.

To see how the innovations affect the variables in $y_t$ after, say, $i$ periods, rewrite the model in its
moving-average form

$$ y_t = \mu + \sum_{i=0}^{\infty} \Phi_i u_{t-i} \tag{2} $$

where $\mu$ is the $K \times 1$ time-invariant mean of $y_t$, and

$$ \Phi_i = \begin{cases} I_K & \text{if } i = 0 \\ \sum_{j=1}^{i} \Phi_{i-j} A_j & \text{if } i = 1, 2, \ldots \end{cases} $$


We can rewrite a VAR in the moving-average form only if it is stable. Essentially, a VAR is stable
if the variables are covariance stationary and none of the autocorrelations are too high (the issue of
stability is discussed in greater detail in [TS] varstable).

The $\Phi_i$ are the simple IRFs. The $(j, k)$ element of $\Phi_i$ gives the effect of a one-time unit increase in
the $k$th element of $u_t$ on the $j$th element of $y_t$ after $i$ periods, holding everything else constant.
Unfortunately, these effects have no causal interpretation, which would require us to be able to answer
the question, "How does an innovation to variable $k$, holding everything else constant, affect variable $j$
after $i$ periods?" Because the $u_t$ are contemporaneously correlated, we cannot assume that everything
else is held constant. Contemporaneous correlation among the $u_t$ implies that a shock to one variable
is likely to be accompanied by shocks to some of the other variables, so it does not make sense to
shock one variable and hold everything else constant. For this reason, (2) cannot provide a causal
interpretation.
This shortcoming may be overcome by rewriting (2) in terms of mutually uncorrelated innovations.
Suppose that we had a matrix $P$ such that $\Sigma = PP'$. If we had such a $P$, then $P^{-1}\Sigma P'^{-1} = I_K$,
and

$$ E\{P^{-1}u_t (P^{-1}u_t)'\} = P^{-1} E(u_t u_t') P'^{-1} = P^{-1}\Sigma P'^{-1} = I_K $$

We can thus use $P^{-1}$ to orthogonalize the $u_t$ and rewrite (2) as

$$ y_t = \mu + \sum_{i=0}^{\infty} \Phi_i P P^{-1} u_{t-i} = \mu + \sum_{i=0}^{\infty} \Theta_i P^{-1} u_{t-i} = \mu + \sum_{i=0}^{\infty} \Theta_i w_{t-i} $$

where $\Theta_i = \Phi_i P$ and $w_t = P^{-1} u_t$. If we had such a $P$, the elements of $w_t$ would be mutually orthogonal,
and no information would be lost in the holding-everything-else-constant assumption, implying that
the $\Theta_i$ would have the causal interpretation that we seek.
Choosing a P is similar to placing identification restrictions on a system of dynamic simultaneous
equations. The simple IRFs do not identify the causal relationships that we wish to analyze. Thus we
seek at least as many identification restrictions as necessary to identify the causal IRFs.
So, where do we get such a P? Sims (1980) popularized the method of choosing P to be the
Cholesky decomposition of $\widehat{\Sigma}$. The IRFs based on this choice of P are known as the orthogonalized
IRFs. Choosing P to be the Cholesky decomposition of $\widehat{\Sigma}$ is equivalent to imposing a recursive
structure for the corresponding dynamic structural equation model. The ordering of the recursive
structure is the same as the ordering imposed in the Cholesky decomposition. Because this choice is
arbitrary, some researchers will look at the OIRFs with different orderings assumed in the Cholesky
decomposition. The order() option available with irf create facilitates this type of analysis.
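
For instance, a minimal sketch of comparing two orderings (the irfnames chol1 and chol2 and the variables y1, y2, and y3 are hypothetical; a VAR in those variables and an active IRF file are assumed):

    . irf create chol1, step(8)
    . irf create chol2, order(y2 y1 y3) step(8)

The two sets of OIRFs could then be compared with irf ctable or irf cgraph.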
The SVAR approach integrates the need to identify the causal IRFs into the model specification and
estimation process. Sufficient identification restrictions can be obtained by placing either short-run or
long-run restrictions on the model. The VAR in (1) can be rewritten as

$$ y_t - v - A_1 y_{t-1} - \cdots - A_p y_{t-p} = u_t $$

Similarly, a short-run SVAR model can be written as

$$ A(y_t - v - A_1 y_{t-1} - \cdots - A_p y_{t-p}) = A u_t = B e_t \tag{3} $$

where $A$ and $B$ are $K \times K$ nonsingular matrices of parameters to be estimated, $e_t$ is a $K \times 1$ vector
of disturbances with $e_t \sim N(0, I_K)$, and $E(e_t e_s') = 0_K$ for all $s \neq t$. Sufficient constraints must
be placed on $A$ and $B$ so that $P$ is identified. One way to see the connection is to draw out the
implications of the latter equality in (3). From (3) it can be shown that

$$ \Sigma = A^{-1}B(A^{-1}B)' $$

As discussed in [TS] var svar, the estimates $\widehat{A}$ and $\widehat{B}$ are obtained by maximizing the concentrated
log-likelihood function on the basis of the $\widehat{\Sigma}$ obtained from the underlying VAR. The short-run
SVAR approach chooses $\widehat{P} = \widehat{A}^{-1}\widehat{B}$ to identify the causal IRFs. The long-run SVAR approach works
similarly, with $\widehat{P} = \widehat{C} = \widehat{\bar{A}}^{-1}\widehat{B}$, where $\widehat{\bar{A}}^{-1}$ is the matrix of estimated long-run or accumulated
effects of the reduced-form VAR shocks.
There is one important difference between long-run and short-run SVAR models. As discussed by
Amisano and Giannini (1997, chap. 6), in the short-run model the constraints are applied directly to
the parameters in A and B. Then A and B interact with the estimated parameters of the underlying
VAR. In contrast, in a long-run model, the constraints are placed on functions of the estimated VAR
parameters. Although estimation and inference of the parameters in C is straightforward, obtaining
the asymptotic standard errors of the structural IRFs requires untenable assumptions. For this reason,
irf create does not estimate the asymptotic standard errors of the structural IRFs generated by
long-run SVAR models. However, bootstrap standard errors are still available.
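
For example, a sketch of requesting bootstrap standard errors for the structural IRFs after fitting a long-run SVAR (the irfname lr1 is hypothetical):

    . irf create lr1, step(8) bs reps(200) nodots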
An introduction to dynamic-multiplier functions for VARs

A dynamic-multiplier function measures the effect of a unit change in an exogenous variable on the
endogenous variables over time. Per Lütkepohl (2005, chap. 10), if the VAR with exogenous variables
is stable, it can be rewritten as

$$ y_t = \sum_{i=0}^{\infty} D_i x_{t-i} + \sum_{i=0}^{\infty} \Phi_i u_{t-i} $$

where the Di are the dynamic-multiplier functions. (See Methods and formulas for details.) Some
authors refer to the dynamic-multiplier functions as transfer functions because they specify how a
unit change in an exogenous variable is transferred to the endogenous variables.

Technical note
irf create computes dynamic-multiplier functions only after var. After short-run SVAR models,
the dynamic multipliers from the VAR are the same as those from the SVAR. The dynamic multipliers
for long-run SVARs have not yet been worked out.

An introduction to forecast-error variance decompositions for VARs


Another measure of the effect of the innovations in variable k on variable j is the FEVD. This
method, which is also known as innovation accounting, measures the fraction of the error in forecasting
variable j after h periods that is attributable to the orthogonalized innovations in variable k . Because
deriving the FEVD requires orthogonalizing the ut innovations, the FEVD is always predicated upon
a choice of P.


Lütkepohl (2005, sec. 2.2.2) shows that the h-step forecast error can be written as

$$ y_{t+h} - \widehat{y}_t(h) = \sum_{i=0}^{h-1} \Phi_i u_{t+h-i} \tag{4} $$

where $y_{t+h}$ is the value observed at time $t+h$ and $\widehat{y}_t(h)$ is the h-step-ahead predicted value for
$y_{t+h}$ that was made at time $t$.
Because the $u_t$ are contemporaneously correlated, their distinct contributions to the forecast error
cannot be ascertained. However, if we choose a $P$ such that $\Sigma = PP'$, as above, we can orthogonalize
the $u_t$ into $w_t = P^{-1}u_t$. We can then ascertain the relative contribution of the distinct elements of
$w_t$. Thus we can rewrite (4) as

$$ y_{t+h} - \widehat{y}_t(h) = \sum_{i=0}^{h-1} \Phi_i P P^{-1} u_{t+h-i} = \sum_{i=0}^{h-1} \Theta_i w_{t+h-i} $$

Because the forecast errors can be written in terms of the orthogonalized errors, the forecast-error
variance can be written in terms of the orthogonalized error variances. Forecast-error variance
decompositions measure the fraction of the total forecast-error variance that is attributable to each
orthogonalized shock.

Technical note
The details in this note are not critical to the discussion that follows. A forecast-error variance
decomposition is derived for a given $P$. Per Lütkepohl (2005, sec. 2.3.3), letting $\theta_{mn,i}$ be the
$(m, n)$th element of $\Theta_i$, we can express the h-step forecast error of the $j$th component of $y_t$ as

$$ y_{j,t+h} - \widehat{y}_j(h) = \sum_{i=0}^{h-1} \left( \theta_{j1,i} w_{1,t+h-i} + \cdots + \theta_{jK,i} w_{K,t+h-i} \right) = \sum_{k=1}^{K} \left( \theta_{jk,0} w_{k,t+h} + \cdots + \theta_{jk,h-1} w_{k,t+1} \right) $$

The $w_t$, which were constructed using $P$, are mutually orthogonal with unit variance. This allows
us to compute easily the mean squared error (MSE) of the forecast of variable $j$ at horizon $h$ in terms
of the contributions of the components of $w_t$. Specifically,

$$ E[\{y_{j,t+h} - y_{j,t}(h)\}^2] = \sum_{k=1}^{K} \left( \theta_{jk,0}^2 + \cdots + \theta_{jk,h-1}^2 \right) $$

The $k$th term in the sum above is interpreted as the contribution of the orthogonalized innovations
in variable $k$ to the $h$-step forecast error of variable $j$. Note that the $k$th element in the sum above
can be rewritten as

$$ \theta_{jk,0}^2 + \cdots + \theta_{jk,h-1}^2 = \sum_{i=0}^{h-1} \left( e_j' \Theta_i e_k \right)^2 $$

where $e_i$ is the $i$th column of $I_K$. Normalizing by the forecast error for variable $j$ at horizon $h$ yields

$$ \omega_{jk,h} = \frac{\sum_{i=0}^{h-1} (e_j' \Theta_i e_k)^2}{\text{MSE}\{y_{j,t}(h)\}} $$

where $\text{MSE}\{y_{j,t}(h)\} = \sum_{i=0}^{h-1} \sum_{k=1}^{K} \theta_{jk,i}^2$.

Because the FEVD depends on the choice of $P$, there are different forecast-error variance
decompositions associated with each distinct $P$. irf create can estimate the FEVD for a VAR or an
SVAR. For a VAR, $P$ is the Cholesky decomposition of $\widehat{\Sigma}$. For an SVAR, $P$ is the estimated structural
decomposition, $\widehat{P} = \widehat{A}^{-1}\widehat{B}$ for short-run models and $\widehat{P} = \widehat{C}$ for long-run SVAR models. Due to the
same complications that arose with the structural impulse-response functions, the asymptotic standard
errors of the structural FEVD are not available after long-run SVAR models, but bootstrap standard
errors are still available.

IRF results for VECMs

An introduction to impulse-response functions for VECMs
As discussed in [TS] vec intro, the VECM is a reparameterization of the VAR that is especially
useful for fitting VARs with cointegrating variables. This implies that the estimated parameters for
the corresponding VAR model can be backed out from the estimated parameters of the VECM model.
This relationship means we can use the VAR form of the cointegrating VECM to discuss the IRFs for
VECMs.
Consider a cointegrating VAR with one lag with no constant or trend,

$$ y_t = A y_{t-1} + u_t \tag{5} $$

where $y_t$ is a $K \times 1$ vector of endogenous, first-difference stationary variables among which there
are $1 \leq r < K$ cointegration equations; $A$ is a $K \times K$ matrix of parameters; and $u_t$ is a $K \times 1$ vector
of i.i.d. disturbances.

We developed intuition for the IRFs from a stationary VAR by rewriting the VAR as an infinite-order
vector moving-average (VMA) process. While the Granger representation theorem establishes
the existence of a VMA formulation of this model, because the cointegrating VAR is not stable, the
inversion is not nearly so intuitive. (See Johansen [1995, chapters 3 and 4] for more details.) For this
reason, we use (5) to develop intuition for the IRFs from a cointegrating VAR.

Suppose that $K$ is 3, that $u_1 = (1, 0, 0)'$, and that we want to analyze the time paths of the
variables in $y$ conditional on the initial values $y_0 = 0$, $A$, and the condition that there are no more
shocks to the system, that is, $0 = u_2 = u_3 = \cdots$. These assumptions and (5) imply that

$$ y_1 = u_1, \qquad y_2 = A y_1 = A u_1, \qquad y_3 = A y_2 = A^2 u_1 $$


and so on. The $i$th-row element of the first column of $A^s$ contains the effect of the unit shock to the
first variable after $s$ periods. The first column of $A^s$ contains the IRF of a unit impulse to the first
variable after $s$ periods. We could deduce the IRFs of a unit impulse to any of the other variables by
administering the unit shock to one of them instead of to the first variable. Thus we can see that the
$(i, j)$th element of $A^s$ contains the unit IRF from variable $j$ to variable $i$ after $s$ periods. By starting
with orthogonalized shocks of the form $P^{-1}u_t$, we can use the same logic to derive the OIRFs to be
$A^s P$.

For the stationary VAR, stability implies that all the eigenvalues of $A$ have moduli strictly less than
one, which in turn implies that all the elements of $A^s \rightarrow 0$ as $s \rightarrow \infty$. This implies that all the
IRFs from a stationary VAR taper off to zero as $s \rightarrow \infty$. In contrast, in a cointegrating VAR, some of
the eigenvalues of $A$ are 1, while the remaining eigenvalues have moduli strictly less than 1. This
implies that in cointegrating VARs some of the elements of $A^s$ are not going to zero as $s \rightarrow \infty$,
which in turn implies that some of the IRFs and OIRFs are not going to zero as $s \rightarrow \infty$. The fact that
the IRFs and OIRFs taper off to zero for stationary VARs but not for cointegrating VARs is one of the
key differences between the two models.
When the IRF or OIRF from the innovation in one variable to another tapers off to zero as time
goes on, the innovation to the first variable is said to have a transitory effect on the second variable.
When the IRF or OIRF does not go to zero, the effect is said to be permanent.
Note that, because some of the IRFs and OIRFs do not taper off to zero, some of the cumulative
IRFs and OIRFs diverge over time.

An introduction to forecast-error variance decompositions for VECMs

The results from An introduction to impulse-response functions for VECMs can be used to show
that the interpretation of FEVDs for a finite number of steps in cointegrating VARs is essentially the
same as in the stationary case. Because the MSE of the forecast is diverging, this interpretation is valid
only for a finite number of steps. (See [TS] vec intro and [TS] fcast compute for more information
on this point.)

IRF results for ARIMA and ARFIMA


A covariance-stationary additive ARMA(p, q) model can be written as

$$ \rho(L^p)(y_t - x_t\beta) = \theta(L^q)\epsilon_t $$

where

$$ \rho(L^p) = 1 - \rho_1 L - \rho_2 L^2 - \cdots - \rho_p L^p $$
$$ \theta(L^q) = 1 + \theta_1 L + \theta_2 L^2 + \cdots + \theta_q L^q $$

and $L^j y_t = y_{t-j}$.

We can rewrite the above model as an infinite-order moving-average process

$$ y_t = x_t\beta + \psi(L)\epsilon_t $$

where

$$ \psi(L) = \frac{\theta(L)}{\rho(L)} = 1 + \psi_1 L + \psi_2 L^2 + \cdots \tag{6} $$


This representation shows the impact of the past innovations on the current $y_t$. The $i$th coefficient
$\psi_i$ describes the response of $y_t$ to a one-time impulse in $\epsilon_{t-i}$, holding everything else constant. The $\psi_i$
coefficients are collectively referred to as the impulse-response function of the ARMA model. For a
covariance-stationary series, the $\psi_i$ coefficients decay exponentially.
A covariance-stationary multiplicative seasonal ARMA model, often abbreviated SARMA, of order
$(p, q) \times (P, Q)_s$ can be written as

$$ \rho(L^p)\rho_s(L^P)(y_t - x_t\beta) = \theta(L^q)\theta_s(L^Q)\epsilon_t $$

where

$$ \rho_s(L^P) = 1 - \rho_{s,1}L^s - \rho_{s,2}L^{2s} - \cdots - \rho_{s,P}L^{Ps} $$
$$ \theta_s(L^Q) = 1 + \theta_{s,1}L^s + \theta_{s,2}L^{2s} + \cdots + \theta_{s,Q}L^{Qs} $$

with $\rho(L^p)$ and $\theta(L^q)$ defined as above.


We can express this model as an additive ARMA model by multiplying the terms and imposing
nonlinear constraints on the multiplied coefficients. For example, consider the SARMA model given by

$$ (1 - \rho_1 L)(1 - \rho_{4,1}L^4)\, y_t = \epsilon_t $$

Expanding the above equation and solving for $y_t$ yields

$$ y_t = \rho_1 y_{t-1} + \rho_{4,1} y_{t-4} - \rho_1\rho_{4,1} y_{t-5} + \epsilon_t $$

or, in ARMA terms,

$$ y_t = \rho_1 y_{t-1} + \rho_4 y_{t-4} + \rho_5 y_{t-5} + \epsilon_t $$

subject to the constraint $\rho_5 = -\rho_1\rho_{4,1}$.
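
As a quick numeric illustration of this constraint, suppose $\rho_1 = 0.5$ and $\rho_{4,1} = 0.3$ (hypothetical values); the implied additive-ARMA coefficients are then $\rho_4 = 0.3$ and $\rho_5 = -0.5 \times 0.3 = -0.15$.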
Once we have obtained an ARMA representation of a SARMA process, we obtain the IRFs from (6).
An ARFIMA(p, d, q) model can be written as

$$ \rho(L^p)(1 - L)^d (y_t - x_t\beta) = \theta(L^q)\epsilon_t $$

with $(1-L)^d$ denoting a fractional integration operation. Solving for $y_t$, we obtain

$$ y_t = x_t\beta + (1-L)^{-d}\psi(L)\epsilon_t $$

This makes it clear that the impulse-response function for an ARFIMA model corresponds to a
fractionally differenced impulse-response function for an ARIMA model. Because of the fractional
differentiation, the $\psi_i$ coefficients decay very slowly; see Remarks and examples in [TS] arfima.

Methods and formulas

Methods and formulas are presented under the following headings:

Impulse-response function formulas for VARs
Dynamic-multiplier function formulas for VARs
Forecast-error variance decomposition formulas for VARs
Impulse-response function formulas for VECMs
Algorithms for bootstrapping the VAR IRF and FEVD standard errors
Impulse-response function formulas for ARIMA and ARFIMA


Impulse-response function formulas for VARs

The previous discussion implies that there are three different choices of $P$ that can be used to
obtain distinct $\Theta_i$. $P$ is the Cholesky decomposition of $\Sigma$ for the OIRFs. For the structural IRFs,
$P = A^{-1}B$ for short-run models, and $P = C$ for long-run models. We will distinguish between
the three by defining $\Theta_i^{o}$ to be the OIRFs, $\Theta_i^{sr}$ to be the short-run structural IRFs, and $\Theta_i^{lr}$ to be the
long-run structural IRFs.

We also define $\widehat{P}_c$ to be the Cholesky decomposition of $\widehat{\Sigma}$, $\widehat{P}_{sr} = \widehat{A}^{-1}\widehat{B}$ to be the short-run
structural decomposition, and $\widehat{P}_{lr} = \widehat{C}$ to be the long-run structural decomposition.

Given estimates of the $\widehat{A}_i$ and $\widehat{\Sigma}$ from var or svar, the estimates of the simple IRFs and the
OIRFs are, respectively,

$$ \widehat{\Phi}_i = \sum_{j=1}^{i} \widehat{\Phi}_{i-j}\widehat{A}_j $$

and

$$ \widehat{\Theta}_i^{o} = \widehat{\Phi}_i \widehat{P}_c $$

where $\widehat{A}_j = 0_K$ for $j > p$.

Given the estimates $\widehat{A}$ and $\widehat{B}$, or $\widehat{C}$, from svar, the estimates of the structural IRFs are either

$$ \widehat{\Theta}_i^{sr} = \widehat{\Phi}_i\widehat{P}_{sr} \qquad\text{or}\qquad \widehat{\Theta}_i^{lr} = \widehat{\Phi}_i\widehat{P}_{lr} $$

The estimated structural IRFs stored in an IRF file with the variable name sirf may be from
either a short-run model or a long-run model, depending on the estimation results used to create the
IRFs. As discussed in [TS] irf describe, you can easily determine whether the structural IRFs were
generated from a short-run or a long-run SVAR model using irf describe.

Following Lütkepohl (2005, sec. 3.7), estimates of the cumulative IRFs and the cumulative
orthogonalized impulse-response functions (COIRFs) at period $n$ are, respectively,

$$ \widehat{\Psi}_n = \sum_{i=0}^{n} \widehat{\Phi}_i \qquad\text{and}\qquad \widehat{\Xi}_n = \sum_{i=0}^{n} \widehat{\Theta}_i $$

The asymptotic standard errors of the different impulse-response functions are obtained by
applications of the delta method. See Lütkepohl (2005, sec. 3.7) and Amisano and Giannini (1997,
chap. 4) for the derivations. See Serfling (1980, sec. 3.3) for a discussion of the delta method. In
presenting the variance-covariance matrix estimators, we make extensive use of the vec() operator,
where vec(X) is the vector obtained by stacking the columns of X.

Lütkepohl (2005, sec. 3.7) derives the asymptotic VCEs of vec($\widehat{\Phi}_i$), vec($\widehat{\Theta}_i^{o}$), vec($\widehat{\Psi}_n$), and
vec($\widehat{\Xi}_n$). Because vec($\widehat{\Phi}_i$) is $K^2 \times 1$, the asymptotic VCE of vec($\widehat{\Phi}_i$) is $K^2 \times K^2$, and it is given by

$$ G_i \widehat{\Sigma}_{\widehat{\alpha}} G_i' $$

where

$$ G_i = \sum_{m=0}^{i-1} J(\widehat{M}')^{(i-1-m)} \otimes \widehat{\Phi}_m \qquad (G_i \text{ is } K^2 \times K^2 p) $$

$$ J = (I_K, 0_K, \ldots, 0_K) \qquad (J \text{ is } K \times Kp) $$

$$ \widehat{M} = \begin{pmatrix} \widehat{A}_1 & \widehat{A}_2 & \cdots & \widehat{A}_{p-1} & \widehat{A}_p \\ I_K & 0_K & \cdots & 0_K & 0_K \\ 0_K & I_K & \cdots & 0_K & 0_K \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0_K & 0_K & \cdots & I_K & 0_K \end{pmatrix} \qquad (\widehat{M} \text{ is } Kp \times Kp) $$

The $\widehat{A}_i$ are the estimates of the coefficients on the lagged variables in the VAR, and $\widehat{\Sigma}_{\widehat{\alpha}}$ is the VCE
matrix of $\widehat{\alpha} = \text{vec}(\widehat{A}_1, \ldots, \widehat{A}_p)$. $\widehat{\Sigma}_{\widehat{\alpha}}$ is a $K^2 p \times K^2 p$ matrix whose elements come from the VCE
of the VAR coefficient estimator. As such, this VCE is the VCE of the constrained estimator if there
are any constraints placed on the VAR coefficients.

The $K^2 \times K^2$ asymptotic VCE matrix for vec($\widehat{\Psi}_n$) after $n$ periods is given by

$$ F_n \widehat{\Sigma}_{\widehat{\alpha}} F_n' \qquad\text{where}\qquad F_n = \sum_{i=1}^{n} G_i $$

The $K^2 \times K^2$ asymptotic VCE matrix of the vectorized, orthogonalized IRFs at horizon $i$, vec($\widehat{\Theta}_i^{o}$),
is

$$ \bar{C}_i \widehat{\Sigma}_{\widehat{\alpha}} \bar{C}_i' + C_i \widehat{\Sigma}_{\widehat{\sigma}} C_i' $$

where

$$ \bar{C}_0 = 0 \qquad (\bar{C}_0 \text{ is } K^2 \times K^2 p) $$

$$ \bar{C}_i = (\widehat{P}_c' \otimes I_K)G_i, \quad i = 1, 2, \ldots \qquad (\bar{C}_i \text{ is } K^2 \times K^2 p) $$

$$ C_i = (I_K \otimes \widehat{\Phi}_i)H, \quad i = 0, 1, \ldots \qquad (C_i \text{ is } K^2 \times K(K+1)/2) $$

$$ H = L_K' \left\{ L_K N_K (\widehat{P}_c \otimes I_K) L_K' \right\}^{-1} \qquad (H \text{ is } K^2 \times K(K+1)/2) $$

$L_K$ solves $\text{vech}(F) = L_K \text{vec}(F)$ for $F$ $K \times K$ and symmetric $\qquad$ ($L_K$ is $K(K+1)/2 \times K^2$)

$K_K$ solves $K_K \text{vec}(G) = \text{vec}(G')$ for any $K \times K$ matrix $G$ $\qquad$ ($K_K$ is $K^2 \times K^2$)

$$ N_K = \frac{1}{2}(I_{K^2} + K_K) \qquad (N_K \text{ is } K^2 \times K^2) $$

$$ \widehat{\Sigma}_{\widehat{\sigma}} = 2 D_K^{+} (\widehat{\Sigma} \otimes \widehat{\Sigma}) D_K^{+\prime} \qquad (\widehat{\Sigma}_{\widehat{\sigma}} \text{ is } K(K+1)/2 \times K(K+1)/2) $$

$$ D_K^{+} = (D_K' D_K)^{-1} D_K' \qquad (D_K^{+} \text{ is } K(K+1)/2 \times K^2) $$

$D_K$ solves $D_K \text{vech}(F) = \text{vec}(F)$ for $F$ $K \times K$ and symmetric $\qquad$ ($D_K$ is $K^2 \times K(K+1)/2$)

$$ \text{vech}(X) = (x_{11}, x_{21}, \ldots, x_{K1}, x_{22}, \ldots, x_{K2}, \ldots, x_{KK})' \quad \text{for } X \; K \times K \qquad (\text{vech}(X) \text{ is } K(K+1)/2 \times 1) $$

Note that $\widehat{\Sigma}_{\widehat{\sigma}}$ is the VCE of vech($\widehat{\Sigma}$). More details about $L_K$, $K_K$, $D_K$, and vech() are available in
Lütkepohl (2005, sec. A.12). Finally, as Lütkepohl (2005, 113-114) discusses, $D_K^{+}$ is the Moore-Penrose
inverse of $D_K$.
As discussed in Amisano and Giannini (1997, chap. 6), the asymptotic standard errors of the
structural IRFs are available for short-run SVAR models but not for long-run SVAR models. Following
Amisano and Giannini (1997, chap. 5), the asymptotic $K^2 \times K^2$ VCE of the short-run structural IRFs
after $i$ periods, when a maximum of $h$ periods are estimated, is the $i, i$ block of

$$ \widehat{\Sigma}(h)_{ij} = \widetilde{G}_i \widehat{\Sigma}_{\widehat{\alpha}} \widetilde{G}_j' + \left\{ I_K \otimes (J\widehat{M}^i J') \right\} \Sigma(0) \left\{ I_K \otimes (J\widehat{M}^j J') \right\}' $$

where

$$ \widetilde{G}_0 = 0_K \qquad (\widetilde{G}_0 \text{ is } K^2 \times K^2 p) $$

$$ \widetilde{G}_i = \sum_{k=0}^{i-1} \left\{ \widehat{P}_{sr}' J (\widehat{M}')^{i-1-k} \otimes (J\widehat{M}^k J') \right\} \qquad (\widetilde{G}_i \text{ is } K^2 \times K^2 p) $$

$$ \Sigma(0) = Q_2 \widehat{\Sigma}_W Q_2' \qquad (\Sigma(0) \text{ is } K^2 \times K^2) $$

$$ \widehat{\Sigma}_W = Q_1 \widehat{\Sigma}_{AB} Q_1' \qquad (\widehat{\Sigma}_W \text{ is } K^2 \times K^2) $$

$$ Q_2 = \widehat{P}_{sr}' \otimes \widehat{P}_{sr} \qquad (Q_2 \text{ is } K^2 \times K^2) $$

$$ Q_1 = \left\{ (I_K \otimes \widehat{B}^{-1}), \; -(\widehat{P}_{sr}'^{-1} \otimes \widehat{B}^{-1}) \right\} \qquad (Q_1 \text{ is } K^2 \times 2K^2) $$

and $\widehat{\Sigma}_{AB}$ is the $2K^2 \times 2K^2$ VCE of the estimator of vec(A, B).

Dynamic-multiplier function formulas for VARs

This section provides the details of how irf create estimates the dynamic-multiplier functions
and their asymptotic standard errors.

A $p$th-order vector autoregressive model (VAR) with exogenous variables may be written as

$$ y_t = v + A_1 y_{t-1} + \cdots + A_p y_{t-p} + B_0 x_t + B_1 x_{t-1} + \cdots + B_s x_{t-s} + u_t $$

where all the notation is the same as above except that the $K \times R$ matrices $B_1, B_2, \ldots, B_s$ are
explicitly included and $s$ is the number of lags of the $R$ exogenous variables in the model.

Lütkepohl (2005) shows that the dynamic multipliers $D_i$ are consistently estimated by

$$ \widehat{D}_i = J_x \widetilde{A}_x^{\,i} \widetilde{B}_x, \qquad i \in \{0, 1, \ldots\} $$

where

$$ J_x = (I_K, 0_K, \ldots, 0_K) \qquad (J_x \text{ is } K \times (Kp + Rs)) $$

$$ \widetilde{A}_x = \begin{pmatrix} \widehat{M} & \widetilde{B} \\ \widetilde{0} & \widetilde{I} \end{pmatrix} \qquad (\widetilde{A}_x \text{ is } (Kp + Rs) \times (Kp + Rs)) $$

$$ \widetilde{B} = \begin{pmatrix} \widehat{B}_1 & \widehat{B}_2 & \cdots & \widehat{B}_s \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix} \qquad (\widetilde{B} \text{ is } Kp \times Rs) $$

$$ \widetilde{I} = \begin{pmatrix} 0_R & 0_R & \cdots & 0_R & 0_R \\ I_R & 0_R & \cdots & 0_R & 0_R \\ 0_R & I_R & \cdots & 0_R & 0_R \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0_R & 0_R & \cdots & I_R & 0_R \end{pmatrix} \qquad (\widetilde{I} \text{ is } Rs \times Rs) $$

$$ \widetilde{B}_x' = \left( \widehat{B}_0' \;\; \ddot{0}' \;\; I_0' \right) \qquad (\widetilde{B}_x' \text{ is } R \times (Kp + Rs)) $$

$$ I_0' = [\, I_R \;\; 0_R \;\; \cdots \;\; 0_R \,] \qquad (I_0' \text{ is } R \times Rs) $$

$\widehat{M}$ is the $Kp \times Kp$ matrix defined above, $\ddot{0}$ is a $K(p-1) \times R$ matrix of 0s, and $\widetilde{0}$ is an $Rs \times Kp$
matrix of 0s.

Consistent estimators of the cumulative dynamic-multiplier functions are given by

$$ \bar{D}_i = \sum_{j=0}^{i} \widehat{D}_j $$

Letting $\beta_x = \text{vec}(A_1\, A_2 \cdots A_p\, B_1\, B_2 \cdots B_s\, B_0)$ and letting $\widehat{\Sigma}_{\widehat{\beta}_x}$ be the asymptotic variance-covariance
estimator (VCE) of $\widehat{\beta}_x$, Lütkepohl shows that an asymptotic VCE of $\widehat{D}_i$ is $\widetilde{G}_i \widehat{\Sigma}_{\widehat{\beta}_x} \widetilde{G}_i'$,
where

$$ \widetilde{G}_i = \left( \sum_{j=0}^{i-1} \widetilde{B}_x' (\widetilde{A}_x')^{\,i-1-j} \otimes J_x \widetilde{A}_x^{\,j} J_x' \;,\;\; I_R \otimes J_x \widetilde{A}_x^{\,i} J_x' \right) $$

Similarly, an asymptotic VCE of $\bar{D}_i$ is

$$ \left( \sum_{j=0}^{i} \widetilde{G}_j \right) \widehat{\Sigma}_{\widehat{\beta}_x} \left( \sum_{j=0}^{i} \widetilde{G}_j \right)' $$

Forecast-error variance decomposition formulas for VARs

This section provides details of how irf create estimates the Cholesky FEVD, the structural
FEVD, and their standard errors. Beginning with the Cholesky-based forecast-error decompositions,
the fraction of the $h$-step-ahead forecast-error variance of variable $j$ that is attributable to the Cholesky
orthogonalized innovations in variable $k$ can be estimated as

$$ \widehat{\omega}_{jk,h} = \frac{\sum_{i=0}^{h-1} (e_j' \widehat{\Theta}_i e_k)^2}{\widehat{\text{MSE}}_j(h)} $$

where $\text{MSE}_j(h)$ is the $j$th diagonal element of

$$ \sum_{i=0}^{h-1} \widehat{\Phi}_i \widehat{\Sigma} \widehat{\Phi}_i' $$

(See Lütkepohl [2005, 109] for a discussion of this result.) $\widehat{\omega}_{jk,h}$ and $\text{MSE}_j(h)$ are scalars. The square
of the standard error of $\widehat{\omega}_{jk,h}$ is

$$ \bar{d}_{jk,h} \widehat{\Sigma}_{\widehat{\alpha}} \bar{d}_{jk,h}' + \ddot{d}_{jk,h} \widehat{\Sigma}_{\widehat{\sigma}} \ddot{d}_{jk,h}' $$

where

$$ \bar{d}_{jk,h} = \frac{2}{\text{MSE}_j(h)^2} \sum_{i=0}^{h-1} \left\{ \text{MSE}_j(h)\, (e_j' \widehat{\Phi}_i \widehat{P}_c e_k)(e_k' \widehat{P}_c' \otimes e_j') G_i - (e_j' \widehat{\Phi}_i \widehat{P}_c e_k)^2 \sum_{m=0}^{h-1} (e_j' \widehat{\Phi}_m \widehat{\Sigma} \otimes e_j') G_m \right\} $$

($\bar{d}_{jk,h}$ is $1 \times K^2 p$)

$$ \ddot{d}_{jk,h} = \frac{1}{\text{MSE}_j(h)^2} \sum_{i=0}^{h-1} \left\{ \text{MSE}_j(h)\, (e_j' \widehat{\Phi}_i \widehat{P}_c e_k)(e_k' \otimes e_j' \widehat{\Phi}_i) H - (e_j' \widehat{\Phi}_i \widehat{P}_c e_k)^2 \sum_{m=0}^{h-1} (e_j' \widehat{\Phi}_m \otimes e_j' \widehat{\Phi}_m) D_K \right\} $$

($\ddot{d}_{jk,h}$ is $1 \times K(K+1)/2$)

$G_0 = 0$ ($G_0$ is $K^2 \times K^2 p$), and $D_K$ is the $K^2 \times K\{(K + 1)/2\}$ duplication matrix defined previously.


For the structural forecast-error decompositions, we follow Amisano and Giannini (1997, sec. 5.2).
They define the matrix of structural forecast-error decompositions at horizon $s$, when a maximum of
$h$ periods are estimated, as

$$ \widehat{W}_s = \widehat{F}_s^{-1} \widetilde{M}_s \qquad \text{for } s = 1, \ldots, h+1 $$

$$ \widehat{F}_s = \left( \sum_{i=0}^{s-1} \widehat{\Theta}_i^{sr} \widehat{\Theta}_i^{sr\prime} \right) \odot I_K $$

$$ \widetilde{M}_s = \sum_{i=0}^{s-1} \widehat{\Theta}_i^{sr} \odot \widehat{\Theta}_i^{sr} $$

where $\odot$ is the Hadamard, or element-by-element, product.

The $K^2 \times K^2$ asymptotic VCE of vec($\widehat{W}_s$) is given by

$$ \widetilde{Z}_s \widehat{\Sigma}(h) \widetilde{Z}_s' $$

where $\widehat{\Sigma}(h)$ is as derived previously, and

$$ \widetilde{Z}_s = \left\{ \frac{\partial\, \text{vec}(\widehat{W}_s)}{\partial\, \text{vec}(\widehat{\Theta}_0^{sr})'}, \; \frac{\partial\, \text{vec}(\widehat{W}_s)}{\partial\, \text{vec}(\widehat{\Theta}_1^{sr})'}, \; \ldots, \; \frac{\partial\, \text{vec}(\widehat{W}_s)}{\partial\, \text{vec}(\widehat{\Theta}_h^{sr})'} \right\} $$

$$ \frac{\partial\, \text{vec}(\widehat{W}_s)}{\partial\, \text{vec}(\widehat{\Theta}_j^{sr})'} = 2\left\{ (I_K \otimes \widehat{F}_s^{-1}) \widetilde{D}(\widehat{\Theta}_j^{sr}) - (\widehat{W}_s' \otimes \widehat{F}_s^{-1}) \widetilde{D}(I_K) N_K (\widehat{\Theta}_j^{sr} \otimes I_K) \right\} $$

If $X$ is an $n \times n$ matrix, then $\widetilde{D}(X)$ is the $n^2 \times n^2$ matrix with vec($X$) on the diagonal and zeros
in all the off-diagonal elements, and $N_K$ is as defined previously.

Impulse-response function formulas for VECMs

We begin by providing the formulas for backing out the estimates of the $A_i$ from the $\Gamma_i$ estimated
by vec. As discussed in [TS] vec intro, the VAR in (1) can be rewritten as a VECM:

$$ \Delta y_t = v + \Pi y_{t-1} + \Gamma_1 \Delta y_{t-1} + \cdots + \Gamma_{p-1} \Delta y_{t-p+1} + \epsilon_t $$

vec estimates $\Pi$ and the $\Gamma_i$. Johansen (1995, 25) notes that

$$ \Pi = \sum_{i=1}^{p} A_i - I_K \tag{6} $$

where $I_K$ is the $K$-dimensional identity matrix, and

$$ \Gamma_i = -\sum_{j=i+1}^{p} A_j \tag{7} $$

Defining

$$ \Gamma = I_K - \sum_{i=1}^{p-1} \Gamma_i $$

and using (6) and (7) allow us to solve for the $A_i$ as

$$ A_1 = \Pi + \Gamma_1 + I_K, \qquad A_i = \Gamma_i - \Gamma_{i-1} \;\text{ for } i = \{2, \ldots, p-1\}, \qquad\text{and}\qquad A_p = -\Gamma_{p-1} $$

Using these formulas, we can back out estimates of $A_i$ from the estimates of the $\Gamma_i$ and $\Pi$ produced
by vec. Then we simply use the formulas for the IRFs and OIRFs presented in Impulse-response
function formulas for VARs.

The running sums of the IRFs and OIRFs over the steps within each impulse-response pair are the
cumulative IRFs and OIRFs.
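
As a quick check of the back-out formulas above for $p = 2$: equation (7) gives $\Gamma_1 = -A_2$, and equation (6) gives $\Pi = A_1 + A_2 - I_K$, so that $A_1 = \Pi + \Gamma_1 + I_K$ and $A_p = A_2 = -\Gamma_1$, matching the expressions above.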

Algorithms for bootstrapping the VAR IRF and FEVD standard errors
irf create offers two bootstrap algorithms for estimating the standard errors of the various IRFs
and FEVDs. Both var and svar contain estimators for the coefficients in a VAR that are conditional on
the first p observations. The two bootstrap algorithms are also conditional on the first p observations.
Specifying the bs option calculates the standard errors by bootstrapping the residuals. For a
bootstrap with R repetitions, this method uses the following algorithm:
1. Fit the model and save the estimated parameters.
2. Use the estimated coefficients to calculate the residuals.
3. Repeat steps 3a to 3d R times.
3a. Draw a simple random sample of size T with replacement from the residuals. The
random samples are drawn over the $K \times 1$ vectors of residuals. When the $t$th vector is
drawn, all K residuals are selected. This preserves the contemporaneous correlations
among the residuals.
3b. Use the p initial observations, the sampled residuals, and the estimated coefficients to
construct a new sample dataset.
3c. Fit the model and calculate the different IRFs and FEVDs.
3d. Save these estimates as observation r in the bootstrapped dataset.
4. For each IRF and FEVD, the estimated standard deviation from the R bootstrapped estimates
is the estimated standard error of that impulse-response function or forecast-error variance
decomposition.
Specifying the bsp option estimates the standard errors by a multivariate normal parametric
bootstrap. The algorithm for the multivariate normal parametric bootstrap is identical to the one
above, with the exception that 3a is replaced by 3a(bsp):
3a(bsp). Draw T pseudovariates from a multivariate normal distribution with covariance matrix
$\widehat{\Sigma}$.
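
In practice, choosing between the two algorithms amounts to specifying bs or bsp; a sketch (the irfnames are hypothetical):

    . irf create rbs, step(8) bs reps(500) nodots
    . irf create pbs, step(8) bsp reps(500) nodots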


Impulse-response function formulas for ARIMA and ARFIMA

The previous discussion showed that a SARMA process can be rewritten as an ARMA process and
that for an ARMA process, we can express $\psi(L)$ in terms of $\rho(L)$ and $\theta(L)$,

$$ \psi(L) = \frac{\theta(L)}{\rho(L)} $$

Expanding the above, we obtain

$$ \psi_0 + \psi_1 L + \psi_2 L^2 + \cdots = \frac{1 + \theta_1 L + \theta_2 L^2 + \cdots}{1 - \rho_1 L - \rho_2 L^2 - \cdots} $$

Given the estimates of the autoregressive terms $\widehat{\rho}$ and the moving-average terms $\widehat{\theta}$, the IRF is
obtained by solving the above equation for the $\psi$ weights. The $\psi_i$ are calculated using the recursion

$$ \widehat{\psi}_i = \widehat{\theta}_i + \sum_{j=1}^{p} \widehat{\rho}_j \widehat{\psi}_{i-j} $$

with $\widehat{\psi}_0 = 1$ and $\widehat{\theta}_i = 0$ for $i > \max(p, q + 1)$.
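
As a concrete instance of this recursion, for an ARMA(1,1) model it reduces to $\widehat{\psi}_1 = \widehat{\theta}_1 + \widehat{\rho}_1$ and $\widehat{\psi}_i = \widehat{\rho}_1 \widehat{\psi}_{i-1}$ for $i \geq 2$, which makes the exponential decay of the $\psi_i$ coefficients explicit.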


The asymptotic standard errors for the IRF for ARMA are calculated using the delta method;
see Serfling (1980, sec. 3.3) for a discussion of the delta method. Let $\widehat{\Sigma}$ be the estimate of the
variance-covariance matrix for $\widehat{\rho}$ and $\widehat{\theta}$, and let $\nabla\psi_i$ be a matrix of derivatives of $\psi_i$ with respect to
$\widehat{\rho}$ and $\widehat{\theta}$. Then the standard errors for $\widehat{\psi}_i$ are calculated as

$$ \nabla\psi_i \, \widehat{\Sigma} \, \nabla\psi_i' $$

The IRF for the ARFIMA(p, d, q) model is obtained by applying the fractional-differencing filter
$(1-L)^{-d}$ to $\psi(L)$. The filter is given by Hassler and Kokoszka (2010) as

$$ (1-L)^{-d} = \sum_{i=0}^{\infty} b_i L^i $$

with $b_0 = 1$ and subsequent $b_i$ calculated by the recursion

$$ \widehat{b}_i = \frac{d + i - 1}{i} \, \widehat{b}_{i-1} $$

The resulting IRF is then given by

$$ \widetilde{\psi}_i = \sum_{j=0}^{i} \widehat{\psi}_j \widehat{b}_{i-j} $$

The asymptotic standard errors for the IRF for ARFIMA are calculated using the delta method. Let
$\widehat{\Sigma}$ be the estimate of the variance-covariance matrix for $\widehat{\rho}$, $\widehat{\theta}$, and $\widehat{d}$, and let $\nabla\widetilde{\psi}_i$ be a matrix of
derivatives of $\widetilde{\psi}_i$ with respect to $\widehat{\rho}$, $\widehat{\theta}$, and $\widehat{d}$. Then the standard errors for $\widetilde{\psi}_i$ are calculated as

$$ \nabla\widetilde{\psi}_i \, \widehat{\Sigma} \, \nabla\widetilde{\psi}_i' $$


References
Amisano, G., and C. Giannini. 1997. Topics in Structural VAR Econometrics. 2nd ed. Heidelberg: Springer.
Christiano, L. J., M. Eichenbaum, and C. L. Evans. 1999. Monetary policy shocks: What have we learned and to what end? In Handbook of Macroeconomics: Volume 1A, ed. J. B. Taylor and M. Woodford. New York: Elsevier.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Hassler, U., and P. Kokoszka. 2010. Impulse responses of fractionally integrated processes with long memory. Econometric Theory 26: 1855-1861.
Johansen, S. 1995. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University Press.
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis. 2nd ed. New York: Springer.
Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Serfling, R. J. 1980. Approximation Theorems of Mathematical Statistics. New York: Wiley.
Sims, C. A. 1980. Macroeconomics and reality. Econometrica 48: 1-48.
Stock, J. H., and M. W. Watson. 2001. Vector autoregressions. Journal of Economic Perspectives 15: 101-115.

Also see
[TS] irf - Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro - Introduction to vector autoregressive models
[TS] vec intro - Introduction to vector error-correction models

Title
irf ctable - Combined tables of IRFs, dynamic-multiplier functions, and FEVDs

Description    Quick start    Menu    Syntax
Options    Remarks and examples    Stored results    Also see

Description
irf ctable makes a table or a combined table of IRF results. A table is made for specified
combinations of named IRF results, impulse variables, response variables, and statistics. irf ctable
combines these tables into one table, unless separate tables are requested.
irf ctable operates on the active IRF file; see [TS] irf set.

Quick start
Combine tables of an orthogonalized IRF myirf and cumulative IRF mycirf for dependent variables
y1 and y2
irf ctable (myirf y1 y2 oirf) (mycirf y1 y2 cirf)
As above, but suppress confidence intervals and add a title
irf ctable (myirf y1 y2 oirf) (mycirf y1 y2 cirf), noci ///
    title("My Title")

Note: irf commands can be used after var, svar, vec, arima, or arfima; see [TS] var, [TS] var
svar, [TS] vec, [TS] arima, or [TS] arfima.

Menu
Statistics > Multivariate time series > IRF and FEVD analysis > Combined tables


Syntax
irf ctable (spec1) [(spec2) ... (specN)] [, options]

where (speck) is

    (irfname impulsevar responsevar stat [, spec_options])

irfname is the name of a set of IRF results in the active IRF file. impulsevar should be specified as an
endogenous variable for all statistics except dm and cdm; for those, specify as an exogenous variable.
responsevar is an endogenous variable name. stat is one or more statistics from the list below:

 stat      Description
 irf       impulse-response function
 oirf      orthogonalized impulse-response function
 dm        dynamic-multiplier function
 cirf      cumulative impulse-response function
 coirf     cumulative orthogonalized impulse-response function
 cdm       cumulative dynamic-multiplier function
 fevd      Cholesky forecast-error variance decomposition
 sirf      structural impulse-response function
 sfevd     structural forecast-error variance decomposition

 options           Description
 set(filename)     make filename active
 noci              do not report confidence intervals
 stderror          include standard errors for each statistic
 individual        make an individual table for each combination
 title("text")     use text as overall table title
 step(#)           set common maximum step
 level(#)          set confidence level; default is level(95)

 spec_options      Description
 noci              do not report confidence intervals
 stderror          include standard errors for each statistic
 level(#)          set confidence level; default is level(95)
 ititle("text")    use text as individual subtitle for specific table

spec_options may be specified within a table specification, globally, or both. When specified in a table specification,
the spec_options affect only the specification in which they are used. When supplied globally, the spec_options
affect all table specifications. When specified in both places, options for the table specification take precedence.
ititle() does not appear in the dialog box.


Options
set(filename) specifies the file to be made active; see [TS] irf set. If set() is not specified, the
active file is used.
noci suppresses reporting of the confidence intervals for each statistic. noci is assumed when the
model was fit by vec because no confidence intervals were estimated.
stderror specifies that standard errors for each statistic also be included in the table.
individual places each block, or (speck ), in its own table. By default, irf ctable combines all
the blocks into one table.
title("text") specifies a title for the table or the set of tables.
step(#) specifies the maximum number of steps to use for all tables. By default, each table is
constructed using all steps available.
level(#) specifies the default confidence level, as a percentage, for confidence intervals, when they
are reported. The default is level(95) or as set by set level; see [U] 20.7 Specifying the
width of confidence intervals.
The following option is available with irf ctable but is not shown in the dialog box:
ititle("text") specifies an individual subtitle for a specific table. ititle() may be specified only
when the individual option is also specified.

Remarks and examples


If you have not read [TS] irf, please do so. Also see [TS] irf table for a slightly easier-to-use, but
less powerful, table command.

irf ctable creates a series of tables from IRF results. The information enclosed within each set
of parentheses,

    (irfname impulsevar responsevar stat [, spec_options])

forms a request for a specific table.

The first part, irfname impulsevar responsevar, identifies a set of IRF estimates or a set of variance
decomposition estimates. The next part, stat, specifies which statistics are to be included in the
table. The last part, spec_options, includes the noci, level(), and stderror options, and places
(or suppresses) additional columns in the table.
Each specific table displays the requested statistics corresponding to the specified combination of
irfname, impulsevar, and responsevar over the step horizon. By default, all the individual tables are
combined into one table. Also by default, all the steps, or periods, available are included in the table.
You can use the step() option to impose a common maximum for all tables.
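
For instance, a minimal sketch that combines these choices (the irfname myirf and the variables y1 and y2 are hypothetical):

    . irf ctable (myirf y1 y2 oirf fevd, stderror), individual step(4)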

Example 1
In example 1 of [TS] irf table, we fit a model using var and we saved the IRFs for two different
orderings. The commands we used were
. use http://www.stata-press.com/data/r14/lutkepohl2
. var dln_inv dln_inc dln_consump
. irf set results4
. irf create ordera, step(8)
. irf create orderb, order(dln_inc dln_inv dln_consump) step(8)


We then formed the desired table by typing


. irf table oirf fevd, impulse(dln_inc) response(dln_consump) noci std
> title("Ordera versus orderb")

Using irf ctable, we can form the equivalent table by typing


. irf ctable (ordera dln_inc dln_consump oirf fevd)
>            (orderb dln_inc dln_consump oirf fevd),
>            noci std title("Ordera versus orderb")
Ordera versus orderb

    step        (1) oirf    (1) S.E.    (1) fevd    (1) S.E.
    0           .005123     .000878     0           0
    1           .001635     .000984     .288494     .077483
    2           .002948     .000993     .294288     .073722
    3           -.000221    .000662     .322454     .075562
    4           .000811     .000586     .319227     .074063
    5           .000462     .000333     .322579     .075019
    6           .000044     .000275     .323552     .075371
    7           .000151     .000162     .323383     .075314
    8           .000091     .000114     .323499     .075386

    step        (2) oirf    (2) S.E.    (2) fevd    (2) S.E.
    0           .005461     .000925     0           0
    1           .001578     .000988     .327807     .08159
    2           .003307     .001042     .328795     .077519
    3           -.00019     .000676     .370775     .080604
    4           .000846     .000617     .366896     .079019
    5           .000491     .000349     .370399     .079941
    6           .000069     .000292     .371487     .080323
    7           .000158     .000172     .371315     .080287
    8           .000096     .000122     .371438     .080366

    (1) irfname = ordera, impulse = dln_inc, and response = dln_consump
    (2) irfname = orderb, impulse = dln_inc, and response = dln_consump

The output is displayed in one table. Because the table did not fit horizontally, it automatically
wrapped. At the bottom of the table is a list of keys that appear at the top of each column. The
results in the table above indicate that the orthogonalized IRFs do not change by much. Because the
estimated forecast-error variances do change, we might want to produce two tables that contain the
estimated forecast-error variance decompositions and their 95% confidence intervals:


. irf ctable (ordera dln_inc dln_consump fevd)
>            (orderb dln_inc dln_consump fevd), individual
Table 1

    step        (1) fevd    (1) Lower   (1) Upper
    0           0           0           0
    1           .288494     .13663      .440357
    2           .294288     .149797     .43878
    3           .322454     .174356     .470552
    4           .319227     .174066     .464389
    5           .322579     .175544     .469613
    6           .323552     .175826     .471277
    7           .323383     .17577      .470995
    8           .323499     .175744     .471253

    95% lower and upper bounds reported
    (1) irfname = ordera, impulse = dln_inc, and response = dln_consump
Table 2

    step        (2) fevd    (2) Lower   (2) Upper
    0           0           0           0
    1           .327807     .167893     .487721
    2           .328795     .17686      .48073
    3           .370775     .212794     .528757
    4           .366896     .212022     .52177
    5           .370399     .213718     .52708
    6           .371487     .214058     .528917
    7           .371315     .213956     .528674
    8           .371438     .213923     .528953

    95% lower and upper bounds reported
    (2) irfname = orderb, impulse = dln_inc, and response = dln_consump

Because we specified the individual option, the output contains two tables, one for each specific
table command. At the bottom of each table is a list of the keys used in that table and a note indicating
the level of the confidence intervals that we requested. The results from table 1 and table 2 indicate
that each estimated function is well within the confidence interval of the other, so we conclude that
the functions are not significantly different.


Stored results
irf ctable stores the following in r():

Scalars
  r(ncols)     number of columns in all tables
  r(k_umax)    number of distinct keys
  r(k)         number of specific table commands

Macros
  r(key#)      #th key
  r(tnotes)    list of keys applied to each column

Also see
[TS] irf - Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro - Introduction to vector autoregressive models
[TS] vec intro - Introduction to vector error-correction models

Title
irf describe - Describe an IRF file

Description    Quick start    Menu    Syntax
Options    Remarks and examples    Stored results    Also see

Description
irf describe describes the specification of the estimation command and the specification of the
IRF used to create the IRF results that are saved in an IRF file.

Quick start
Short summary of all IRF results in the active IRF file
irf describe
Summary of model and IRF specification for irf1 in the active IRF file
irf describe irf1
As above, but for irf1 in IRF file myirf.irf
irf describe irf1, using(myirf)
As above, and also set myirf.irf as the active IRF file
irf describe irf1, set(myirf)
Note: irf commands can be used after var, svar, vec, arima, or arfima; see [TS] var, [TS] var
svar, [TS] vec, [TS] arima, or [TS] arfima.

Menu
Statistics > Multivariate time series > Manage IRF results and files > Describe IRF file


Syntax
irf describe [irf_resultslist] [, options]

 options                Description
 set(filename)          make filename active
 using(irf_filename)    describe irf_filename without making active
 detail                 show additional details of IRF results
 variables              show underlying structure of the IRF dataset

Options
set(filename) specifies the IRF file to be described and set; see [TS] irf set. If filename is specified
without an extension, .irf is assumed.
using(irf filename) specifies the IRF file to be described. The active IRF file, if any, remains
unchanged. If irf filename is specified without an extension, .irf is assumed.
detail specifies that irf describe display detailed information about each set of IRF results.
detail is implied when irf resultslist is specified.
variables is a programmer's option; it additionally displays the output produced by the describe
command.

Remarks and examples


If you have not read [TS] irf, please do so.
irf describe specified without irf resultslist provides a short summary of the model used to
create each set of results in an IRF file. If irf resultslist is specified, then irf describe provides
details of the model specification and the IRF specification used to create each set of IRF results. If
set() or using() is not specified, the IRF results of the active IRF file are described.

Example 1
. use http://www.stata-press.com/data/r14/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lütkepohl 1993 Table E.1)
. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4), lags(1/2) dfk
(output omitted )


We create three sets of IRF results:


. irf create order1, set(myirfs, replace)
(file myirfs.irf created)
(file myirfs.irf now active)
(file myirfs.irf updated)
. irf create order2, order(dln_inc dln_inv dln_consump)
(file myirfs.irf updated)
. irf create order3, order(dln_inc dln_consump dln_inv)
(file myirfs.irf updated)
. irf describe
Contains irf results from myirfs.irf (dated 11 Nov 2014 09:22)

    irfname    model    endogenous variables and order (*)

    order1     var      dln_inv dln_inc dln_consump
    order2     var      dln_inc dln_inv dln_consump
    order3     var      dln_inc dln_consump dln_inv

    (*) order is relevant only when model is var

The output reveals the order in which we specified the variables.


. irf describe order1

irf results for order1


Estimation specification
model: var
endog: dln_inv dln_inc dln_consump
sample: quarterly data from 1960q4 to 1978q4
lags: 1 2
constant: constant
exog: none
exogvars: none
exlags: none
varcns: unconstrained
IRF specification
step: 8
order: dln_inv dln_inc dln_consump
std error: asymptotic
reps: none

Here we see a summary of the model we fit as well as the specification of the IRFs.


Stored results
irf describe stores the following in r():

Scalars
    r(N)                  number of observations in the IRF file
    r(k)                  number of variables in the IRF file
    r(width)              width of dataset in the IRF file
    r(N_max)              maximum number of observations
    r(k_max)              maximum number of variables
    r(widthmax)           maximum width of the dataset
    r(changed)            flag indicating that data have changed since last saved

Macros
    r(_version)           version of IRF results file
    r(irfnames)           names of IRF results in the IRF file
    r(irfname_model)      var, sr_var, lr_var, or vec
    r(irfname_order)      Cholesky order assumed in IRF estimates
    r(irfname_exog)       exogenous variables, and their lags, in VAR or underlying VAR
    r(irfname_exogvar)    exogenous variables in VAR or underlying VAR
    r(irfname_constant)   constant or noconstant
    r(irfname_lags)       lags in model
    r(irfname_exlags)     lags of exogenous variables in model
    r(irfname_tmin)       minimum value of timevar in the estimation sample
    r(irfname_tmax)       maximum value of timevar in the estimation sample
    r(irfname_timevar)    name of tsset timevar
    r(irfname_tsfmt)      format of timevar in the estimation sample
    r(irfname_varcns)     unconstrained or colon-separated list of constraints placed
                            on VAR coefficients
    r(irfname_svarcns)    "." or colon-separated list of constraints placed on SVAR
                            coefficients
    r(irfname_step)       maximum step in IRF estimates
    r(irfname_stderror)   asymptotic, bs, bsp, or none, depending on type of standard
                            errors specified to irf create
    r(irfname_reps)       "." or number of bootstrap replications performed
    r(irfname_version)    version of IRF file that originally held irfname IRF results
    r(irfname_rank)       "." or number of cointegrating equations
    r(irfname_trend)      "." or trend() specified in vec
    r(irfname_veccns)     "." or constraints placed on VECM parameters
    r(irfname_sind)       "." or normalized seasonal indicators included in vec
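The r() results make IRF files easy to process programmatically. As a minimal sketch
(assuming the active file contains a result set named order1),

    . irf describe
    . return list                      // list everything irf describe stored
    . display "`r(irfnames)'"          // names of all IRF results in the file
    . display "`r(order1_order)'"      // Cholesky order used for order1

return list and display are standard Stata commands; the macro names follow the table
above.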

Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models

Title
irf drop Drop IRF results from the active IRF file
Description     Quick start             Menu     Syntax
Option          Remarks and examples    Also see

Description
irf drop removes IRF results from the active IRF file.

Quick start
Drop impulse–response functions irf1 and irf2 from the active IRF file
irf drop irf1 irf2
Drop irf1 and irf2 from the IRF file myirfs.irf
irf drop irf1 irf2, set(myirfs)
Note: irf commands can be used after var, svar, vec, arima, or arfima; see [TS] var, [TS] var
svar, [TS] vec, [TS] arima, or [TS] arfima.

Menu
Statistics > Multivariate time series > Manage IRF results and files > Drop IRF results


Syntax
irf drop irf_resultslist [, set(filename)]

Option
set(filename) specifies the file to be made active; see [TS] irf set. If set() is not specified, the
active file is used.

Remarks and examples


If you have not read [TS] irf, please do so.

Example 1
. use http://www.stata-press.com/data/r14/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4), lags(1/2) dfk
(output omitted )

We create three sets of IRF results:


. irf create order1, set(myirfs, replace)
(file myirfs.irf created)
(file myirfs.irf now active)
(file myirfs.irf updated)
. irf create order2, order(dln_inc dln_inv dln_consump)
(file myirfs.irf updated)
. irf create order3, order(dln_inc dln_consump dln_inv)
(file myirfs.irf updated)

. irf describe
Contains irf results from myirfs.irf (dated 11 Nov 2014 09:22)

    irfname    model    endogenous variables and order (*)

    order1     var      dln_inv dln_inc dln_consump
    order2     var      dln_inc dln_inv dln_consump
    order3     var      dln_inc dln_consump dln_inv

    (*) order is relevant only when model is var


Now let's remove order1 and order2 from myirfs.irf.


. irf drop order1 order2
(order1 dropped)
(order2 dropped)
file myirfs.irf updated
. irf describe
Contains irf results from myirfs.irf (dated 11 Nov 2014 09:22)

    irfname    model    endogenous variables and order (*)

    order3     var      dln_inc dln_consump dln_inv

    (*) order is relevant only when model is var

order1 and order2 have been dropped.

Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models


Title
irf graph Graphs of IRFs, dynamic-multiplier functions, and FEVDs
Description     Quick start             Menu               Syntax
Options         Remarks and examples    Stored results     Also see

Description
irf graph graphs impulse–response functions (IRFs), dynamic-multiplier functions, and
forecast-error variance decompositions (FEVDs) over time.

Quick start
Graph impulse–response function for dependent variables y1 and y2 given an unexpected
shock to y1
irf graph irf, impulse(y1) response(y2)
As above, but for orthogonalized shocks
irf graph oirf, impulse(y1) response(y2)
As above, but begin the plot with the third forecast period
irf graph oirf, impulse(y1) response(y2) lstep(3)
As above, but with a separate graph for each IRF in the current IRF file
irf graph oirf, impulse(y1) response(y2) lstep(3) individual
Note: irf commands can be used after var, svar, vec, arima, or arfima; see [TS] var, [TS] var
svar, [TS] vec, [TS] arima, or [TS] arfima.

Menu
Statistics > Multivariate time series > IRF and FEVD analysis > Graphs by impulse or response


Syntax
irf graph stat [, options]

stat      Description
  irf       impulse–response function
  oirf      orthogonalized impulse–response function
  dm        dynamic-multiplier function
  cirf      cumulative impulse–response function
  coirf     cumulative orthogonalized impulse–response function
  cdm       cumulative dynamic-multiplier function
  fevd      Cholesky forecast-error variance decomposition
  sirf      structural impulse–response function
  sfevd     structural forecast-error variance decomposition

Notes: 1. No statistic may appear more than once.
       2. If confidence intervals are included (the default), only two statistics may
          be included.
       3. If confidence intervals are suppressed (option noci), up to four statistics
          may be included.

options                              Description
Main
  set(filename)                      make filename active
  irf(irfnames)                      use irfnames IRF result sets
  impulse(impulsevar)                use impulsevar as impulse variables
  response(endogvars)                use endogenous variables as response variables
  noci                               suppress confidence bands
  level(#)                           set confidence level; default is level(95)
  lstep(#)                           use # for first step
  ustep(#)                           use # for maximum step
Advanced
  individual                         graph each combination individually
  iname(namestub[, replace])         stub for naming the individual graphs
  isaving(filenamestub[, replace])   stub for saving the individual graphs to files
Plots
  plot#opts(cline_options)           affect rendition of the line plotting the # stat
CI plots
  ci#opts(area_options)              affect rendition of the confidence interval for
                                       the # stat
Y axis, X axis, Titles, Legend, Overall
  twoway_options                     any options other than by() documented in
                                       [G-3] twoway options
  byopts(by_option)                  how subgraphs are combined, labeled, etc.


Options


Main

set(filename) specifies the file to be made active; see [TS] irf set. If set() is not specified, the
active file is used.
irf(irfnames) specifies the IRF result sets to be used. If irf() is not specified, each of the results in
the active IRF file is used. (Files often contain just one set of IRF results saved under one irfname;
in that case, those results are used.)
impulse(impulsevar) and response(endogvars) specify the impulse and response variables. Usually
one of each is specified, and one graph is drawn. If multiple variables are specified, a separate
subgraph is drawn for each impulseresponse combination. If impulse() and response() are
not specified, subgraphs are drawn for all combinations of impulse and response variables.
impulsevar should be specified as an endogenous variable for all statistics except dm or cdm; for
those, specify as an exogenous variable.
noci suppresses graphing the confidence interval for each statistic. noci is assumed when the model
was fit by vec because no confidence intervals were estimated.
level(#) specifies the default confidence level, as a percentage, for confidence intervals, when they
are reported. The default is level(95) or as set by set level; see [U] 20.7 Specifying the
width of confidence intervals. Also see [TS] irf cgraph for a graph command that allows the
confidence level to vary over the graphs.
lstep(#) specifies the first step, or period, to be included in the graphs. lstep(0) is the default.
ustep(#), # ≥ 1, specifies the maximum step, or period, to be included in the graphs.

Advanced

individual specifies that each graph be displayed individually. By default, irf graph combines
the subgraphs into one image. When individual is specified, byopts() may not be specified,
but the isaving() and iname() options may be specified.


iname(namestub[, replace]) specifies that the ith individual graph be stored in memory
under the name namestubi, which must be a valid Stata name of 24 characters or fewer.
iname() may be specified only with the individual option.


isaving(filenamestub[, replace]) specifies that the ith individual graph should be saved
to disk in the current working directory under the name filenamestubi.gph. isaving()
may be specified only when the individual option is also specified.
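For example, a minimal sketch that both names and saves each individual graph (the stub
g is hypothetical):

    . irf graph oirf, impulse(dln_inc) response(dln_consump) individual
    >      iname(g, replace) isaving(g, replace)

This displays each impulse–response combination separately, names the graphs g1, g2,
. . . in memory, and saves them as g1.gph, g2.gph, . . . in the current working
directory.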

Plots

plot1opts(cline options), . . . , plot4opts(cline options) affect the rendition of the plotted statistics (the stat). plot1opts() affects the rendition of the first statistic; plot2opts(), the second;
and so on. cline options are as described in [G-3] cline options.

CI plots

ci1opts(area options) and ci2opts(area options) affect the rendition of the confidence intervals
for the first (ci1opts()) and second (ci2opts()) statistics in stat. area options are as described
in [G-3] area options.
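For example, to draw the first statistic with a dashed line and render its confidence
band in a light gray (a sketch using standard cline and area options):

    . irf graph oirf sirf, impulse(dln_inc) response(dln_consump)
    >      plot1opts(lpattern(dash)) ci1opts(color(gs14))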


Y axis, X axis, Titles, Legend, Overall

twoway options are any of the options documented in [G-3] twoway options, excluding by(). These
include options for titling the graph (see [G-3] title options) and for saving the graph to disk
(see [G-3] saving option). The saving() and name() options may not be combined with the
individual option.
byopts(by option) is as documented in [G-3] by option and may not be specified when individual
is specified. byopts() affects how the subgraphs are combined, labeled, etc.

Remarks and examples


If you have not read [TS] irf, please do so.
Also see [TS] irf cgraph, which produces combined graphs; [TS] irf ograph, which produces
overlaid graphs; and [TS] irf table, which displays results in tabular form.
irf graph produces one or more graphs and displays them arrayed into one image unless the
individual option is specified, in which case the individual graphs are displayed separately. Each
individual graph consists of all the specified stat and represents one impulseresponse combination.
Because all the specified stat appear on the same graph, putting together statistics with very
different scales is not recommended. For instance, sometimes sirf and oirf are on similar scales
while irf is on a different scale. In such cases, combining sirf and oirf on the same graph looks
fine, but combining either with irf produces an uninformative graph.
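Note 3 under Syntax permits up to four statistics once confidence bands are suppressed;
a sketch of that syntax is

    . irf graph irf oirf cirf coirf, impulse(dln_inc) response(dln_consump) noci

Whether such a combination is informative depends on the scales involved, as just
discussed.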

Example 1
Suppose that we have results generated from two different SVAR models. We want to know whether
the shapes of the structural IRFs and the structural FEVDs are similar in the two models. We are also
interested in knowing whether the structural IRFs and the structural FEVDs differ significantly from
their Cholesky counterparts.
Filling in the background, we have previously issued the commands:
. use http://www.stata-press.com/data/r14/lutkepohl2
. mat a = (., 0, 0\0,.,0\.,.,.)
. mat b = I(3)
. svar dln_inv dln_inc dln_consump, aeq(a) beq(b)
. irf create modela, set(results3) step(8)
. svar dln_inc dln_inv dln_consump, aeq(a) beq(b)
. irf create modelb, step(8)


To see whether the shapes of the structural IRFs and the structural FEVDs are similar in the two
models, we type
. irf graph oirf sirf, impulse(dln_inc) response(dln_consump)

(figure omitted: two panels, "modela, dln_inc, dln_consump" and
"modelb, dln_inc, dln_consump", showing the orthogonalized irf and structural irf with
95% CIs for oirf and sirf, plotted against step; graphs by irfname, impulse variable,
and response variable)

The graph reveals that the oirf and the sirf estimates are essentially the same for both models and
that the shapes of the functions are very similar for the two models.
To see whether the structural IRFs and the structural FEVDs differ significantly from their Cholesky
counterparts, we type
. irf graph fevd sfevd, impulse(dln_inc) response(dln_consump) lstep(1)
> legend(cols(1))

(figure omitted: two panels, "modela, dln_inc, dln_consump" and
"modelb, dln_inc, dln_consump", showing fevd (fraction of mse due to impulse) and sfevd
((structural) fraction of mse due to impulse) with 95% CIs, plotted against step;
graphs by irfname, impulse variable, and response variable)


This combined graph reveals that the shapes of these functions are also similar for the two models.
However, the graph illuminates one minor difference between them: In modela, the estimated structural
FEVD is slightly larger than the Cholesky-based estimates, whereas in modelb the Cholesky-based
estimates are slightly larger than the structural estimates. For both models, however, the structural
estimates are close to the center of the wide confidence intervals for the two estimates.

Example 2
Let's focus on the results from modela. Suppose that we were interested in examining how
dln_consump responded to impulses in its own structural innovations, structural
innovations to dln_inc, and structural innovations to dln_inv. We type
. irf graph sirf, irf(modela) response(dln_consump)

(figure omitted: three panels, "modela, dln_consump, dln_consump",
"modela, dln_inc, dln_consump", and "modela, dln_inv, dln_consump", showing the
structural irf with 95% CIs, plotted against step; graphs by irfname, impulse variable,
and response variable)

The upper-left graph shows the structural IRF of an innovation in dln_consump on
dln_consump. It indicates that the identification restrictions used in modela imply that
a positive shock to dln_consump causes an increase in dln_consump, followed by a
decrease, followed by an increase, and so on, until the effect dies out after roughly 5
periods.
The upper-right graph shows the structural IRF of an innovation in dln_inc on
dln_consump, indicating that a positive shock to dln_inc causes an increase in
dln_consump, which dies out after 4 or 5 periods.

Technical note
[TS] irf table contains a technical note warning you to be careful in naming variables when you
fit models. What is said there applies equally here.


Stored results
irf graph stores the following in r():

Scalars
    r(k)             number of graphs

Macros
    r(stats)         statlist
    r(irfname)       resultslist
    r(impulse)       impulselist
    r(response)      responselist
    r(plot#)         contents of plot#opts()
    r(ci)            level applied to confidence intervals or noci
    r(ciopts#)       contents of ci#opts()
    r(byopts)        contents of byopts()
    r(saving)        supplied saving() option
    r(name)          supplied name() option
    r(individual)    individual or blank
    r(isaving)       contents of isaving()
    r(iname)         contents of iname()
    r(subtitle#)     subtitle for individual graph #

Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models

Title
irf ograph Overlaid graphs of IRFs, dynamic-multiplier functions, and FEVDs
Description     Quick start             Menu               Syntax
Options         Remarks and examples    Stored results     Also see

Description
irf ograph displays plots of irf results on one graph (one pair of axes).
To become familiar with this command, type db irf ograph.

Quick start
Graph of an orthogonalized IRF myirf overlaid on cumulative IRF mycirf for dependent
variables y1 and y2
    irf ograph (myirf y1 y2 oirf) (mycirf y1 y2 cirf)
As above, and include confidence bands and add a title
    irf ograph (myirf y1 y2 oirf) (mycirf y1 y2 cirf), ci ///
        title("My Title")

Note: irf commands can be used after var, svar, vec, arima, or arfima; see [TS] var, [TS] var
svar, [TS] vec, [TS] arima, or [TS] arfima.

Menu
Statistics > Multivariate time series > IRF and FEVD analysis > Overlaid graph


Syntax
irf ograph (spec1) [(spec2) ... (spec15)] [, options]

where (speck) is

    (irfname impulsevar responsevar stat [, spec_options])

irfname is the name of a set of IRF results in the active IRF file or ".", which means
the first named result in the active IRF file. impulsevar should be specified as an
endogenous variable for all statistics except dm and cdm; for those, specify as an
exogenous variable. responsevar is an endogenous variable name. stat is one or more
statistics from the list below:

stat      Description
  irf       impulse–response function
  oirf      orthogonalized impulse–response function
  dm        dynamic-multiplier function
  cirf      cumulative impulse–response function
  coirf     cumulative orthogonalized impulse–response function
  cdm       cumulative dynamic-multiplier function
  fevd      Cholesky forecast-error variance decomposition
  sirf      structural impulse–response function
  sfevd     structural forecast-error variance decomposition

options                   Description
Plots
  plot_options            define the IRF plots
  set(filename)           make filename active
Options
  common_options          level and steps
Y axis, X axis, Titles, Legend, Overall
  twoway_options          any options other than by() documented in
                            [G-3] twoway options

plot_options              Description
Main
  set(filename)           make filename active
  irf(irfnames)           use irfnames IRF result sets
  impulse(impulsevar)     use impulsevar as impulse variables
  response(endogvars)     use endogenous variables as response variables
  ci                      add confidence bands to the graph

spec_options              Description
Options
  common_options          level and steps
Plot
  cline_options           affect rendition of the plotted lines
CI plot
  ciopts(area_options)    affect rendition of the confidence intervals

common_options            Description
Options
  level(#)                set confidence level; default is level(95)
  lstep(#)                use # for first step
  ustep(#)                use # for maximum step

common_options may be specified within a plot specification, globally, or in both. When
specified in a plot specification, the common_options affect only the specification in
which they are used. When supplied globally, the common_options affect all plot
specifications. When supplied in both places, options in the plot specification take
precedence.
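As a sketch of the precedence rule (result and variable names are those from example 1
below),

    . irf ograph (order1 dln_inc dln_consump oirf, level(90))
    >            (order2 dln_inc dln_consump oirf), ci level(95)

draws confidence bands for both plots: the first plot uses the 90% level from its own
specification, and the second uses the globally supplied 95% level.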

Options


Plots

plot_options define the IRF plots and are found under the Main, Plot, and CI plot tabs.

Main

set(filename) specifies the file to be made active; see [TS] irf set. If set() is not specified, the
active file is used.
irf(irfnames) specifies the IRF result sets to be used. If irf() is not specified, each of the results in
the active IRF file is used. (Files often contain just one set of IRF results saved under one irfname;
in that case, those results are used.)
impulse(varlist) and response(endogvars) specify the impulse and response variables. Usually
one of each is specified, and one graph is drawn. If multiple variables are specified, a separate
subgraph is drawn for each impulseresponse combination. If impulse() and response() are
not specified, subgraphs are drawn for all combinations of impulse and response variables.
ci adds confidence bands to the graph. The noci option may be used within a plot specification to
suppress its confidence bands when the ci option is supplied globally.

Plot

cline options affect the rendition of the plotted lines; see [G-3] cline options.


CI plot

ciopts(area options) affects the rendition of the confidence bands for the plotted statistic; see
[G-3] area options. ciopts() implies ci.

Options

level(#) specifies the confidence level, as a percentage, for confidence bands; see [U] 20.7 Specifying
the width of confidence intervals.
lstep(#) specifies the first step, or period, to be included in the graph. lstep(0) is the default.
ustep(#), # ≥ 1, specifies the maximum step, or period, to be included.

Y axis, X axis, Titles, Legend, Overall

twoway options are any of the options documented in [G-3] twoway options, excluding by(). These
include options for titling the graph (see [G-3] title options) and for saving the graph to disk (see
[G-3] saving option).

Remarks and examples


If you have not read [TS] irf, please do so.
irf ograph overlays plots of IRFs and FEVDs on one graph.

Example 1
We have previously issued the commands:
. use http://www.stata-press.com/data/r14/lutkepohl2
. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4), lags(1/2) dfk
. irf create order1, step(10) set(myirf1, new)
. irf create order2, step(10) order(dln_inc dln_inv dln_consump)


We now wish to compare the oirf for impulse dln_inc and response dln_consump for two
different Cholesky orderings:

. irf ograph (order1 dln_inc dln_consump oirf)
>            (order2 dln_inc dln_consump oirf)

(figure omitted: overlaid lines "order1: oirf of dln_inc -> dln_consump" and
"order2: oirf of dln_inc -> dln_consump", plotted against step)

Technical note
Graph options allow you to change the appearance of each plot. The following graph
contains the plots of the forecast-error variance decompositions (FEVDs) for impulse
dln_inc and each response, using the results from the first collection of results in the
active IRF file (using the "." shortcut). In the second plot, we supply the clpat(dash)
option (an abbreviation for clpattern(dash)) to give the line a dashed pattern. In the
third plot, we supply the m(o) clpat(dash_dot) recast(connected) options to get small
circles connected by a line with a dash-dot pattern; the cilines option plots the
confidence bands by using lines instead of areas. We use the title() option to add a
descriptive title to the graph and supply the ci option globally to add confidence bands
to all the plots.



. irf ograph (. dln_inc dln_inc fevd)
>            (. dln_inc dln_consump fevd, clpat(dash))
>            (. dln_inc dln_inv fevd, cilines m(o) clpat(dash_dot)
>            recast(connected))
>            , ci title("Comparison of forecast-error variance decomposition")

(figure omitted: "Comparison of forecast-error variance decomposition", overlaying fevd
of dln_inc -> dln_inc, dln_inc -> dln_consump, and dln_inc -> dln_inv, each with a 95%
CI, plotted against step)

The clpattern() option is described in [G-3] connect options, msymbol() is described in
[G-3] marker options, title() is described in [G-3] title options, and recast() is
described in [G-3] advanced options.

Stored results
irf ograph stores the following in r():

Scalars
    r(plots)         number of plot specifications
    r(ciplots)       number of plotted confidence bands

Macros
    r(irfname#)      irfname from (spec#)
    r(impulse#)      impulse from (spec#)
    r(response#)     response from (spec#)
    r(stat#)         statistics from (spec#)
    r(ci#)           level from (spec#) or noci

Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models

Title
irf rename Rename an IRF result in an IRF file
Description     Quick start             Menu               Syntax
Option          Remarks and examples    Stored results     Also see

Description
irf rename changes the name of a set of IRF results saved in the active IRF file.

Quick start
Rename impulse–response function oldirf in the current file to newirf
irf rename oldirf newirf
As above, but for IRF file myirfs.irf
irf rename oldirf newirf, set(myirfs)
Note: irf commands can be used after var, svar, vec, arima, or arfima; see [TS] var, [TS] var
svar, [TS] vec, [TS] arima, or [TS] arfima.

Menu
Statistics > Multivariate time series > Manage IRF results and files > Rename IRF results

Syntax
irf rename oldname newname [, set(filename)]

Option
set(filename) specifies the file to be made active; see [TS] irf set. If set() is not specified, the
active file is used.

Remarks and examples


If you have not read [TS] irf, please do so.

Example 1
. use http://www.stata-press.com/data/r14/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4), lags(1/2) dfk
(output omitted )


We create three sets of IRF results:


. irf create original, set(myirfs, replace)
(file myirfs.irf created)
(file myirfs.irf now active)
(file myirfs.irf updated)
. irf create order2, order(dln_inc dln_inv dln_consump)
(file myirfs.irf updated)
. irf create order3, order(dln_inc dln_consump dln_inv)
(file myirfs.irf updated)
. irf describe
Contains irf results from myirfs.irf (dated 11 Nov 2014 09:22)

    irfname     model    endogenous variables and order (*)

    original    var      dln_inv dln_inc dln_consump
    order2      var      dln_inc dln_inv dln_consump
    order3      var      dln_inc dln_consump dln_inv

    (*) order is relevant only when model is var

Now let's rename IRF result original to order1.


. irf rename original order1
(81 real changes made)
original renamed to order1
. irf describe
Contains irf results from myirfs.irf (dated 11 Nov 2014 09:22)

    irfname    model    endogenous variables and order (*)

    order1     var      dln_inv dln_inc dln_consump
    order2     var      dln_inc dln_inv dln_consump
    order3     var      dln_inc dln_consump dln_inv

    (*) order is relevant only when model is var

original has been renamed to order1.

Stored results
irf rename stores the following in r():

Macros
    r(irfnames)    irfnames after rename
    r(oldnew)      oldname newname

Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models

Title
irf set Set the active IRF file
Description     Quick start             Menu               Syntax
Options         Remarks and examples    Stored results     Also see

Description
irf set without arguments reports the identity of the active IRF file, if there is one. irf set
with a filename specifies that the file be created and set as the active file. irf set, clear specifies
that, if any IRF file is set, it be unset and that there be no active IRF file.

Quick start
Display filename of active IRF file
irf set
Set file myirfs.irf as the active file and create it if it does not exist
irf set myirfs
Set file myirfs.irf as the active file, but replace myirfs.irf if it exists
irf set myirfs, replace
Clear the active IRF file so that no files are active
irf set, clear
Note: irf commands can be used after var, svar, vec, arima, or arfima; see [TS] var, [TS] var
svar, [TS] vec, [TS] arima, or [TS] arfima.

Menu
Statistics > Multivariate time series > Manage IRF results and files > Set active IRF file


Syntax
Report identity of active file
    irf set
Set, and if necessary create, active file
    irf set irf_filename [, replace]
Clear any active IRF file
    irf set, clear
If irf_filename is specified without an extension, .irf is assumed.

Options
replace specifies that if irf filename already exists, the file is to be erased and a new, empty IRF
file is to be created in its place. If it does not already exist, a new, empty file is created.
clear unsets the active IRF file.

Remarks and examples


If you have not read [TS] irf, please do so.
irf set reports the identity of the active IRF file:
. irf set
no irf file active

irf set irf filename creates and sets an IRF file:


. irf set results1
(file results1.irf now active)

We specified the name results1, and results1.irf became the active file.
irf set irf filename can also be used to create a new file:
. use http://www.stata-press.com/data/r14/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. var dln_inc dln_consump, exog(l.dln_inv)
(output omitted )
. irf set results2
(file results2.irf created)
(file results2.irf now active)
. irf create order1
(file results2.irf updated)


Stored results
irf set stores the following in r():

Macros
    r(Orville)    name of active IRF file, if there is an active IRF

Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] irf describe Describe an IRF file
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models


Title
irf table Tables of IRFs, dynamic-multiplier functions, and FEVDs
Description     Quick start             Menu               Syntax
Options         Remarks and examples    Stored results     Also see

Description
irf table makes a table of the values of the requested statistics at each time since impulse. Each
column represents a combination of an impulse variable and a response variable for each statistic
from the named IRF results.

Quick start
Table of impulse–response function for dependent variables y1 and y2 given an unexpected
shock to y1
irf table irf, impulse(y1) response(y2)
As above, but for orthogonalized shocks
irf table oirf, impulse(y1) response(y2)
As above, but with 3 as the common maximum step horizon for all tables
irf table oirf, impulse(y1) response(y2) step(3)
As above, but with a separate table for each IRF in the active IRF file
irf table oirf, impulse(y1) response(y2) step(3) individual
Note: irf commands may be used only after var, svar, vec, arima, or arfima; see [TS] var,
[TS] var svar, [TS] vec, [TS] arima, or [TS] arfima.

Menu
Statistics > Multivariate time series > IRF and FEVD analysis > Tables by impulse or response


Syntax
irf table [stat] [, options]

stat      Description
Main
  irf       impulse–response function
  oirf      orthogonalized impulse–response function
  dm        dynamic-multiplier function
  cirf      cumulative impulse–response function
  coirf     cumulative orthogonalized impulse–response function
  cdm       cumulative dynamic-multiplier function
  fevd      Cholesky forecast-error variance decomposition
  sirf      structural impulse–response function
  sfevd     structural forecast-error variance decomposition

If stat is not specified, all statistics are included, unless option nostructural is
also specified, in which case sirf and sfevd are excluded. You may specify more than
one stat.

options                 Description
Main
  set(filename)         make filename active
  irf(irfnames)         use irfnames IRF result sets
  impulse(impulsevar)   use impulsevar as impulse variables
  response(endogvars)   use endogenous variables as response variables
  individual            make an individual table for each result set
  title("text")         use text for overall table title
Options
  level(#)              set confidence level; default is level(95)
  noci                  suppress confidence intervals
  stderror              include standard errors in the tables
  nostructural          suppress sirf and sfevd from the default list of statistics
  step(#)               use common maximum step horizon # for all tables

Options


Main

set(filename) specifies the file to be made active; see [TS] irf set. If set() is not specified, the
active file is used.
All results are obtained from one IRF file. If you have results in different files that you want in
one table, use irf add to copy results into one file; see [TS] irf add.
irf(irfnames) specifies the IRF result sets to be used. If irf() is not specified, all the results in the
active IRF file are used. (Files often contain just one set of IRF results, saved under one irfname;
in that case, those results are used. When there are multiple IRF results, you may also wish to
specify the individual option.)


impulse(impulsevar) specifies the impulse variables for which the statistics are to be reported. If
impulse() is not specified, each model variable, in turn, is used. impulsevar should be specified
as an endogenous variable for all statistics except dm or cdm; for those, specify as an exogenous
variable.
response(endogvars) specifies the response variables for which the statistics are to be reported. If
response() is not specified, each endogenous variable, in turn, is used.
individual specifies that each set of IRF results be placed in its own table, with its own title and
footer. By default, irf table places all the IRF results in one table with one title and one footer.
individual may not be combined with title().
title("text") specifies a title for the overall table.

Options

level(#) specifies the default confidence level, as a percentage, for confidence intervals, when they
are reported. The default is level(95) or as set by set level; see [U] 20.7 Specifying the
width of confidence intervals.
noci suppresses reporting of the confidence intervals for each statistic. noci is assumed when the
model was fit by vec because no confidence intervals were estimated.
stderror specifies that standard errors for each statistic also be included in the table.
nostructural specifies that stat, when not specified, exclude sirf and sfevd.
step(#) specifies the maximum step horizon for all tables. If step() is not specified, each table is
constructed using all steps available.
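For instance, a compact table limited to the first four steps, reporting standard errors
instead of confidence intervals (a sketch using the hypothetical variables y1 and y2
from the Quick start):

    . irf table oirf, impulse(y1) response(y2) noci stderror step(4)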

Remarks and examples


If you have not read [TS] irf, please do so.
Also see [TS] irf graph, which produces output in graphical form, and see [TS] irf ctable, which
also produces tabular output. irf ctable is more difficult to use but provides more control over
how tables are formed.

Example 1
We have fit a model with var, and we saved the IRFs from two different orderings. The commands
we previously used were
. use http://www.stata-press.com/data/r14/lutkepohl2
. var dln_inv dln_inc dln_consump
. irf set results4
. irf create ordera, step(8)
. irf create orderb, order(dln_inc dln_inv dln_consump) step(8)


We now wish to compare the two orderings:


. irf table oirf fevd, impulse(dln_inc) response(dln_consump) noci std
> title("Ordera versus orderb")

Ordera versus orderb

                 (1)          (1)          (1)          (1)
  step          oirf         S.E.         fevd         S.E.

  0          .005123      .000878            0            0
  1          .001635      .000984      .288494      .077483
  2          .002948      .000993      .294288      .073722
  3         -.000221      .000662      .322454      .075562
  4          .000811      .000586      .319227      .074063
  5          .000462      .000333      .322579      .075019
  6          .000044      .000275      .323552      .075371
  7          .000151      .000162      .323383      .075314
  8          .000091      .000114      .323499      .075386

                 (2)          (2)          (2)          (2)
  step          oirf         S.E.         fevd         S.E.

  0          .005461      .000925            0            0
  1          .001578      .000988      .327807       .08159
  2          .003307      .001042      .328795      .077519
  3          -.00019      .000676      .370775      .080604
  4          .000846      .000617      .366896      .079019
  5          .000491      .000349      .370399      .079941
  6          .000069      .000292      .371487      .080323
  7          .000158      .000172      .371315      .080287
  8          .000096      .000122      .371438      .080366

(1) irfname = ordera, impulse = dln_inc, and response = dln_consump
(2) irfname = orderb, impulse = dln_inc, and response = dln_consump

The output is displayed as a single table; because the table did not fit horizontally, it wrapped
automatically. At the bottom of the table is a definition of the keys that appear at the top of each
column. The results in the table above indicate that the orthogonalized IRFs do not change by much.


Example 2
Because the estimated FEVDs do change significantly, we might want to produce two tables that
contain the estimated FEVDs and their 95% confidence intervals:
. irf table fevd, impulse(dln_inc) response(dln_consump) individual
Results from ordera

                 (1)          (1)          (1)
  step          fevd        Lower        Upper

  0                0            0            0
  1          .288494       .13663      .440357
  2          .294288      .149797       .43878
  3          .322454      .174356      .470552
  4          .319227      .174066      .464389
  5          .322579      .175544      .469613
  6          .323552      .175826      .471277
  7          .323383       .17577      .470995
  8          .323499      .175744      .471253

95% lower and upper bounds reported
(1) irfname = ordera, impulse = dln_inc, and response = dln_consump

Results from orderb

                 (1)          (1)          (1)
  step          fevd        Lower        Upper

  0                0            0            0
  1          .327807      .167893      .487721
  2          .328795       .17686       .48073
  3          .370775      .212794      .528757
  4          .366896      .212022       .52177
  5          .370399      .213718       .52708
  6          .371487      .214058      .528917
  7          .371315      .213956      .528674
  8          .371438      .213923      .528953

95% lower and upper bounds reported
(1) irfname = orderb, impulse = dln_inc, and response = dln_consump

Because we specified the individual option, the output contains two tables, one for each set of
IRF results. Examining the results in the tables indicates that each of the estimated functions is well

within the confidence interval of the other, so we conclude that the functions are not significantly
different.

Technical note
Be careful in how you name variables when you fit models. Say that you fit one model
with var and used time-series operators to form one of the endogenous variables,

    . var d.ln_inv ...

and in another model, you created a new variable:

    . generate dln_inv = d.ln_inv
    . var dln_inv ...


Say that you saved IRF results from both (perhaps they differ in the number of lags).
Now you wish to use irf table to compare them. You would not be able to specify
response(d.ln_inv) or response(dln_inv) because neither variable is in both models.
Similarly, you could not specify impulse(d.ln_inv) or impulse(dln_inv) for the same
reason.
All is not lost; if impulse() is not specified, all endogenous variables are used, and
similarly if response() is not specified, so you could obtain the result you desired by
simply not specifying the options, but you will also obtain a lot more besides. If you
want to specify the impulse() or response() options, be sure to name variables
consistently.
Also, you may forget how the endogenous variables were named. If so, irf describe,
detail can provide the answer. In irf describe's output, the endogenous variables are
listed next to endog.

Stored results
If the individual option is not specified, irf table stores the following in r():

Scalars
    r(ncols)            number of columns in table
    r(k_umax)           number of distinct keys
    r(k)                number of specific table commands

Macros
    r(key#)             #th key
    r(tnotes)           list of keys applied to each column

If the individual option is specified, then for each irfname, irf table stores the
following in r():

Scalars
    r(irfname_ncols)    number of columns in table for irfname
    r(irfname_k_umax)   number of distinct keys in table for irfname
    r(irfname_k)        number of specific table commands used to create table for
                          irfname

Macros
    r(irfname_key#)     #th key for irfname table
    r(irfname_tnotes)   list of keys applied to each column in table for irfname

Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models

Title
mgarch Multivariate GARCH models

Description     Syntax     Remarks and examples     References     Also see

Description
mgarch estimates the parameters of multivariate generalized autoregressive
conditional-heteroskedasticity (MGARCH) models. MGARCH models allow both the conditional
mean and the conditional covariance to be dynamic.
The general MGARCH model is so flexible that not all the parameters can be estimated. For this
reason, there are many MGARCH models that parameterize the problem more parsimoniously.
mgarch implements four commonly used parameterizations: the diagonal vech model, the constant
conditional correlation model, the dynamic conditional correlation model, and the time-varying
conditional correlation model.

Syntax
mgarch model eq [eq ... eq] [if] [in] [, ...]

Family                                  model
Vech
  diagonal vech                         dvech
Conditional correlation
  constant conditional correlation      ccc
  dynamic conditional correlation       dcc
  varying conditional correlation       vcc

Remarks and examples


Remarks are presented under the following headings:
An introduction to MGARCH models
Diagonal vech MGARCH models
Conditional correlation MGARCH models
Constant conditional correlation MGARCH model
Dynamic conditional correlation MGARCH model
Varying conditional correlation MGARCH model
Error distributions and quasimaximum likelihood
Treatment of missing data


An introduction to MGARCH models


Multivariate GARCH models allow the conditional covariance matrix of the dependent variables to
follow a flexible dynamic structure and allow the conditional mean to follow a vector-autoregressive
(VAR) structure.
The general MGARCH model is too flexible for most problems. There are many restricted MGARCH
models in the literature because there is no parameterization that always provides an optimal trade-off
between flexibility and parsimony.
mgarch implements four commonly used parameterizations: the diagonal vech (DVECH) model,
the constant conditional correlation (CCC) model, the dynamic conditional correlation (DCC) model,
and the time-varying conditional correlation (VCC) model.
Bollerslev, Engle, and Wooldridge (1988); Bollerslev, Engle, and Nelson (1994); Bauwens,
Laurent, and Rombouts (2006); Silvennoinen and Teräsvirta (2009); and Engle (2009)
provide general introductions to MGARCH models. We provide a quick introduction
organized around the models implemented in mgarch.
We give a formal definition of the general MGARCH model to establish notation that
facilitates comparisons of the models. The general MGARCH model is given by

        y_t = C x_t + ε_t
        ε_t = H_t^(1/2) ν_t

where

        y_t is an m × 1 vector of dependent variables;
        C is an m × k matrix of parameters;
        x_t is a k × 1 vector of independent variables, which may contain lags of y_t;
        H_t^(1/2) is the Cholesky factor of the time-varying conditional covariance
            matrix H_t; and
        ν_t is an m × 1 vector of zero-mean, unit-variance, and independent and
            identically distributed innovations.

In the general MGARCH model, H_t is a matrix generalization of univariate GARCH models.
For example, in a general MGARCH model with one autoregressive conditional
heteroskedastic (ARCH) term and one GARCH term,

        vech(H_t) = s + A vech(ε_{t-1} ε'_{t-1}) + B vech(H_{t-1})            (1)

where the vech() function stacks the unique elements that lie on or below the main
diagonal in a symmetric matrix into a vector, s is a vector of parameters, and A and B
are conformable matrices of parameters. Because this model uses the vech() function to
extract and model the unique elements of H_t, it is also known as the VECH model.

Because it is a conditional covariance matrix, H_t must be positive definite.
Equation (1) can be used to show that the parameters in s, A, and B are not uniquely
identified and that further restrictions must be placed on s, A, and B to ensure that
H_t is positive definite for all t.


The various MGARCH models proposed in the literature differ in how they trade off
flexibility and parsimony in their specifications for H_t. Increased flexibility allows
a model to capture more complex H_t processes. Increased parsimony makes parameter
estimation feasible for more datasets. An important measure of the flexibility–parsimony
trade-off is how fast the number of model parameters increases with the number of time
series m, because many applied models use multiple time series.
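A quick count, which follows directly from (1), illustrates the trade-off. In the
general VECH(1,1) model, vech(H_t) has m(m+1)/2 elements, so s contributes m(m+1)/2
parameters and A and B each contribute {m(m+1)/2}², for a total of

        m(m+1)/2 + 2{m(m+1)/2}²

covariance parameters. For m = 3, that is 6 + 2(36) = 78, compared with 3m(m+1)/2 = 18
for the diagonal vech model discussed next.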

Diagonal vech MGARCH models


Bollerslev, Engle, and Wooldridge (1988) derived the diagonal vech (DVECH) model by
restricting A and B to be diagonal. Although the DVECH model is much more parsimonious
than the general model, it can only handle a few series because the number of parameters
grows quadratically with the number of series. For example, there are 3m(m+1)/2
parameters in a DVECH(1,1) model for H_t.
Despite the large number of parameters, the diagonal structure implies that each
conditional variance and each conditional covariance depends on its own past but not on
the past of the other conditional variances and covariances. Formally, in the DVECH(1,1)
model, each element of H_t is modeled by

        h_{ij,t} = s_{ij} + a_{ij} ε_{i,(t-1)} ε_{j,(t-1)} + b_{ij} h_{ij,(t-1)}

Parameter estimation can be difficult because it requires that Ht be positive definite for each
t. The requirement that Ht be positive definite for each t imposes complicated restrictions on the
off-diagonal elements.
See [TS] mgarch dvech for more details about this model.
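A minimal sketch of fitting such a model in Stata (y1 and y2 are hypothetical tsset
dependent variables):

    . mgarch dvech (y1 y2), arch(1) garch(1)

This fits a DVECH(1,1) model with one ARCH and one GARCH term for each element of H_t.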

Conditional correlation MGARCH models


Conditional correlation (CC) models use nonlinear combinations of univariate GARCH models to
represent the conditional covariances. In each of the conditional correlation models, the conditional
covariance matrix is positive definite by construction and has a simple structure, which facilitates
parameter estimation. CC models have a slower parameter growth rate than DVECH models as the
number of time series increases.
In CC models, H_t is decomposed into a matrix of conditional correlations R_t and a
diagonal matrix of conditional variances D_t:

        H_t = D_t^(1/2) R_t D_t^(1/2)                                         (2)

where each conditional variance follows a univariate GARCH process and the
parameterizations of R_t vary across models.
Equation (2) implies that

        h_{ij,t} = ρ_{ij,t} σ_{i,t} σ_{j,t}                                   (3)

where σ²_{i,t} is modeled by a univariate GARCH process. Equation (3) highlights that CC
models use nonlinear combinations of univariate GARCH models to represent the
conditional covariances and that the parameters in the model for ρ_{ij,t} describe the
extent to which the errors from equations i and j move together.


Comparing (1) and (2) shows that the number of parameters increases more slowly with the number
of time series in a CC model than in a DVECH model.
The three CC models implemented in mgarch differ in how they parameterize R_t.

Constant conditional correlation MGARCH model

Bollerslev (1990) proposed a CC MGARCH model in which the correlation matrix is time invariant.
It is for this reason that the model is known as a constant conditional correlation (CCC) MGARCH
model. Restricting Rt to a constant matrix reduces the number of parameters and simplifies the
estimation but may be too strict in many empirical applications.
See [TS] mgarch ccc for more details about this model.

Dynamic conditional correlation MGARCH model

Engle (2002) introduced a dynamic conditional correlation (DCC) MGARCH model in which
the conditional quasicorrelations R_t follow a GARCH(1,1)-like process. (As described by
Engle [2009] and Aielli [2009], the parameters in R_t are not standardized to be
correlations and are thus known as quasicorrelations.) To preserve parsimony, all the
conditional quasicorrelations are restricted to follow the same dynamics. The DCC model
is significantly more flexible than the CCC model without introducing an unestimable
number of parameters for a reasonable number of series.
See [TS] mgarch dcc for more details about this model.

Varying conditional correlation MGARCH model

Tse and Tsui (2002) derived the varying conditional correlation (VCC) MGARCH model in
which the conditional correlations at each period are a weighted sum of a time-invariant
component, a measure of recent correlations among the residuals, and last period's
conditional correlations. For parsimony, all the conditional correlations are restricted
to follow the same dynamics.
See [TS] mgarch vcc for more details about this model.
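A minimal sketch of fitting the three CC parameterizations to the same pair of
hypothetical series and comparing them by information criteria:

    . mgarch ccc (y1 y2), arch(1) garch(1)
    . estimates store ccc
    . mgarch dcc (y1 y2), arch(1) garch(1)
    . estimates store dcc
    . mgarch vcc (y1 y2), arch(1) garch(1)
    . estimates store vcc
    . estimates stats ccc dcc vcc      // compare AIC/BIC across the stored fits

estimates store and estimates stats are the standard Stata tools for this comparison.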

Error distributions and quasimaximum likelihood


By default, mgarch dvech, mgarch ccc, mgarch dcc, and mgarch vcc estimate the parameters
of MGARCH models by maximum likelihood (ML), assuming that the errors come from a multivariate
normal distribution. Both the ML estimator and the quasimaximum likelihood (QML) estimator,
which drops the normality assumption, are assumed to be consistent and normally distributed in large
samples; see Jeantheau (1998), Berkes and Horváth (2003), Comte and Lieberman (2003), Ling and
McAleer (2003), and Fiorentini and Sentana (2007). Specify vce(robust) to estimate the parameters
by QML. The QML parameter estimates are the same as the ML estimates, but the VCEs are different.
Based on low-level assumptions, Jeantheau (1998), Comte and Lieberman (2003), and Ling and
McAleer (2003) prove that some of the ML and QML estimators implemented in mgarch are consistent
and asymptotically normal. Based on higher-level assumptions, Fiorentini and Sentana (2007) prove
that all the ML and QML estimators implemented in mgarch are consistent and asymptotically normal.
The low-level assumption proofs specify the technical restrictions on the data-generating processes
more precisely than the high-level proofs, but they do not cover as many models or cases as the
high-level proofs.


It is generally accepted that there could be more low-level theoretical work done to substantiate
the claims that the ML and QML estimators are consistent and asymptotically normally distributed.
These widely applied estimators have been subjected to many Monte Carlo studies that show that the
large-sample theory performs well in finite samples.
The distribution(t) option causes the mgarch commands to estimate the parameters of the
corresponding model by ML assuming that the errors come from a multivariate Student t distribution.
The choice between the multivariate normal and the multivariate t distributions is one between
robustness and efficiency. If the disturbances come from a multivariate Student t, then the ML
estimates based on the multivariate Student t assumption will be consistent and efficient, while the
QML estimates based on the multivariate normal assumption will be consistent but not efficient. In
contrast, if the disturbances come from a well-behaved distribution that is neither multivariate Student
t nor multivariate normal, then the ML estimates based on the multivariate Student t assumption
will not be consistent, while the QML estimates based on the multivariate normal assumption will be
consistent but not efficient.
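In command form, the two estimators amount to the following (a sketch with hypothetical
variables y1 and y2):

    . mgarch ccc (y1 y2), arch(1) garch(1) distribution(t)    // ML with Student t errors
    . mgarch ccc (y1 y2), arch(1) garch(1) vce(robust)        // QML with robust VCE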
Fiorentini and Sentana (2007) compare the ML and QML estimators implemented in mgarch and
provide many useful technical results pertaining to the estimators.

Treatment of missing data


mgarch allows for gaps due to missing data. The unconditional expectations are substituted for
the dynamic components that cannot be computed because of gaps. This method of handling gaps
can only handle the case in which g/T goes to zero as T goes to infinity, where g is the number of
observations lost to gaps in the data and T is the number of nonmissing observations.

References
Aielli, G. P. 2009. Dynamic Conditional Correlations: On Properties and Estimation. Working paper, Dipartimento di
Statistica, University of Florence, Florence, Italy.
Bauwens, L., S. Laurent, and J. V. K. Rombouts. 2006. Multivariate GARCH models: A survey. Journal of Applied
Econometrics 21: 79–109.
Berkes, I., and L. Horváth. 2003. The rate of consistency of the quasi-maximum likelihood estimator. Statistics and
Probability Letters 61: 133–143.
Bollerslev, T. 1990. Modelling the coherence in short-run nominal exchange rates: A multivariate generalized ARCH
model. Review of Economics and Statistics 72: 498–505.
Bollerslev, T., R. F. Engle, and D. B. Nelson. 1994. ARCH models. In Vol. 4 of Handbook of Econometrics, ed.
R. F. Engle and D. L. McFadden. Amsterdam: Elsevier.
Bollerslev, T., R. F. Engle, and J. M. Wooldridge. 1988. A capital asset pricing model with time-varying covariances.
Journal of Political Economy 96: 116–131.
Comte, F., and O. Lieberman. 2003. Asymptotic theory for multivariate GARCH processes. Journal of Multivariate
Analysis 84: 61–84.
Engle, R. F. 2002. Dynamic conditional correlation: A simple class of multivariate generalized autoregressive
conditional heteroskedasticity models. Journal of Business & Economic Statistics 20: 339–350.
Engle, R. F. 2009. Anticipating Correlations: A New Paradigm for Risk Management. Princeton, NJ: Princeton
University Press.
Fiorentini, G., and E. Sentana. 2007. On the efficiency and consistency of likelihood estimation in multivariate
conditionally heteroskedastic dynamic regression models. Working paper 0713, CEMFI, Madrid, Spain.
ftp://ftp.cemfi.es/wp/07/0713.pdf.
Jeantheau, T. 1998. Strong consistency of estimators for multivariate ARCH models. Econometric Theory 14: 70–86.
Ling, S., and M. McAleer. 2003. Asymptotic theory for a vector ARMA-GARCH model. Econometric Theory 19:
280–310.
Silvennoinen, A., and T. Teräsvirta. 2009. Multivariate GARCH models. In Handbook of Financial Time Series, ed.
T. G. Andersen, R. A. Davis, J.-P. Kreiß, and T. Mikosch, 201–229. New York: Springer.
Tse, Y. K., and A. K. C. Tsui. 2002. A multivariate generalized autoregressive conditional heteroscedasticity model
with time-varying correlations. Journal of Business & Economic Statistics 20: 351–362.

Also see
[TS] arch Autoregressive conditional heteroskedasticity (ARCH) family of estimators
[TS] var Vector autoregressive models
[U] 20 Estimation and postestimation commands

Title
mgarch ccc Constant conditional correlation multivariate GARCH models
Description     Quick start             Menu               Syntax
Options         Remarks and examples    Stored results     Methods and formulas
References      Also see

Description
mgarch ccc estimates the parameters of constant conditional correlation (CCC) multivariate generalized autoregressive conditionally heteroskedastic (MGARCH) models in which the conditional variances
are modeled as univariate generalized autoregressive conditionally heteroskedastic (GARCH) models
and the conditional covariances are modeled as nonlinear functions of the conditional variances. The
conditional correlation parameters that weight the nonlinear combinations of the
conditional variances are constant in the CCC MGARCH model.
The CCC MGARCH model is less flexible than the dynamic conditional correlation MGARCH model
(see [TS] mgarch dcc) and varying conditional correlation MGARCH model (see [TS] mgarch vcc),
which specify GARCH-like processes for the conditional correlations. The conditional correlation
MGARCH models are more parsimonious than the diagonal vech MGARCH model (see [TS] mgarch
dvech).

Quick start
Fit constant conditional correlation multivariate GARCH with first- and second-order ARCH components
for dependent variables y1 and y2 using tsset data
mgarch ccc (y1 y2), arch(1 2)
Add regressors x1 and x2 and first-order GARCH component
mgarch ccc (y1 y2 = x1 x2), arch(1 2) garch(1)
Add z1 to the model for the conditional heteroskedasticity
mgarch ccc (y1 y2 = x1 x2), arch(1 2) garch(1) het(z1)

Menu
Statistics > Multivariate time series > Multivariate GARCH


Syntax
mgarch ccc eq [eq ... eq] [if] [in] [, options]

where each eq has the form

    (depvars = [indepvars] [, eqoptions])
options                     Description
Model
  arch(numlist)             ARCH terms for all equations
  garch(numlist)            GARCH terms for all equations
  het(varlist)              include varlist in the specification of the conditional
                              variance for all equations
  distribution(dist [#])    use dist distribution for errors [may be gaussian
                              (synonym normal) or t; default is gaussian]
  unconcentrated            perform optimization on unconcentrated log likelihood
  constraints(numlist)      apply linear constraints
SE/Robust
  vce(vcetype)              vcetype may be oim or robust
Reporting
  level(#)                  set confidence level; default is level(95)
  nocnsreport               do not display constraints
  display_options           control columns and column formats, row spacing, line
                              width, display of omitted variables and base and empty
                              cells, and factor-variable labeling
Maximization
  maximize_options          control the maximization process; seldom used
  from(matname)             initial values for the coefficients; seldom used

  coeflegend                display legend instead of statistics

eqoptions                   Description
  noconstant                suppress constant term in the mean equation
  arch(numlist)             ARCH terms
  garch(numlist)            GARCH terms
  het(varlist)              include varlist in the specification of the conditional
                              variance

You must tsset your data before using mgarch ccc; see [TS] tsset.
indepvars and varlist may contain factor variables; see [U] 11.4.3 Factor variables.
depvars, indepvars, and varlist may contain time-series operators; see
[U] 11.4.4 Time-series varlists.
by, fp, rolling, and statsby are allowed; see [U] 11.1.10 Prefix commands.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation
commands.

349

350

mgarch ccc Constant conditional correlation multivariate GARCH models

Options

Model
arch(numlist) specifies the ARCH terms for all equations in the model. By default, no ARCH terms are specified.
garch(numlist) specifies the GARCH terms for all equations in the model. By default, no GARCH terms are specified.
het(varlist) specifies that varlist be included in the specification of the conditional variance for all equations. This varlist enters the variance specification collectively as multiplicative heteroskedasticity.
distribution(dist [#]) specifies the assumed distribution for the errors. dist may be gaussian, normal, or t.
    gaussian and normal are synonyms; each causes mgarch ccc to assume that the errors come from a multivariate normal distribution. # cannot be specified with either of them.
    t causes mgarch ccc to assume that the errors follow a multivariate Student t distribution, and the degree-of-freedom parameter is estimated along with the other parameters of the model. If distribution(t #) is specified, then mgarch ccc uses a multivariate Student t distribution with # degrees of freedom. # must be greater than 2.
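For example, a minimal sketch using the hypothetical Quick start variables y1 and y2: the first call estimates the degrees of freedom along with the other parameters, and the second fixes it at 5.

    . mgarch ccc (y1 y2), arch(1) garch(1) distribution(t)
    . mgarch ccc (y1 y2), arch(1) garch(1) distribution(t 5)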
unconcentrated specifies that optimization be performed on the unconcentrated log likelihood. The
default is to start with the concentrated log likelihood.
constraints(numlist) specifies linear constraints to apply to the parameter estimates.
SE/Robust
vce(vcetype) specifies the estimator for the variance–covariance matrix of the estimator.
    vce(oim), the default, specifies to use the observed information matrix (OIM) estimator.
    vce(robust) specifies to use the Huber/White/sandwich estimator.

Reporting
level(#); see [R] estimation options.
nocnsreport; see [R] estimation options.
display_options: noci, nopvalues, noomitted, vsquish, noemptycells, baselevels, allbaselevels, nofvlabel, fvwrap(#), fvwrapon(style), cformat(%fmt), pformat(%fmt), sformat(%fmt), and nolstretch; see [R] estimation options.

Maximization
maximize_options: difficult, technique(algorithm_spec), iterate(#), [no]log, trace, gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#), nrtolerance(#), nonrtolerance, and from(matname); see [R] maximize for all options except from(), and see below for information on from(). These options are seldom used.
from(matname) specifies initial values for the coefficients. from(b0) causes mgarch ccc to begin the optimization algorithm with the values in b0. b0 must be a row vector, and the number of columns must equal the number of parameters in the model.

The following option is available with mgarch ccc but is not shown in the dialog box:
coeflegend; see [R] estimation options.
Eqoptions
noconstant suppresses the constant term in the mean equation.
arch(numlist) specifies the ARCH terms in the equation. By default, no ARCH terms are specified.
This option may not be specified with model-level arch().
garch(numlist) specifies the GARCH terms in the equation. By default, no GARCH terms are specified.
This option may not be specified with model-level garch().
het(varlist) specifies that varlist be included in the specification of the conditional variance. This
varlist enters the variance specification collectively as multiplicative heteroskedasticity. This option
may not be specified with model-level het().
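As an illustration of these equation-level options, here is a hedged sketch (y1, y2, x1, and w1 are hypothetical variables) in which each equation carries its own ARCH/GARCH specification and only the first equation's conditional variance includes w1 as multiplicative heteroskedasticity:

    . mgarch ccc (y1 = x1, arch(1/2) garch(1) het(w1)) (y2 = x1, arch(1) garch(1))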
Remarks and examples

We assume that you have already read [TS] mgarch, which provides an introduction to MGARCH models and the methods implemented in mgarch ccc.

MGARCH models are dynamic multivariate regression models in which the conditional variances and covariances of the errors follow an autoregressive-moving-average structure. The CCC MGARCH model uses a nonlinear combination of univariate GARCH models in which the cross-equation weights are time invariant to model the conditional covariance matrix of the disturbances.

As discussed in [TS] mgarch, MGARCH models differ in the parsimony and flexibility of their specifications for a time-varying conditional covariance matrix of the disturbances, denoted by H_t. In the conditional correlation family of MGARCH models, the diagonal elements of H_t are modeled as univariate GARCH models, whereas the off-diagonal elements are modeled as nonlinear functions of the diagonal terms. In the CCC MGARCH model,

    h_{ij,t} = \rho_{ij}\sqrt{h_{ii,t}\,h_{jj,t}}

where the diagonal elements h_{ii,t} and h_{jj,t} follow univariate GARCH processes and \rho_{ij} is a time-invariant weight interpreted as a conditional correlation.

In the dynamic conditional correlation (DCC) and varying conditional correlation (VCC) MGARCH models discussed in [TS] mgarch dcc and [TS] mgarch vcc, the \rho_{ij} are allowed to vary over time. Although the conditional-correlation structure provides a useful trade-off between parsimony and flexibility in the DCC MGARCH and VCC MGARCH models, the time-invariant parameterization used in the CCC MGARCH model is generally viewed as too restrictive for many applications; see Silvennoinen and Teräsvirta (2009). The baseline CCC MGARCH estimates are frequently compared with DCC MGARCH and VCC MGARCH estimates.
Technical note
Formally, the CCC MGARCH model derived by Bollerslev (1990) can be written as

    y_t = C x_t + \epsilon_t
    \epsilon_t = H_t^{1/2} \nu_t
    H_t = D_t^{1/2} R D_t^{1/2}

where

    y_t is an m × 1 vector of dependent variables;
    C is an m × k matrix of parameters;
    x_t is a k × 1 vector of independent variables, which may contain lags of y_t;
    H_t^{1/2} is the Cholesky factor of the time-varying conditional covariance matrix H_t;
    \nu_t is an m × 1 vector of normal, independent, and identically distributed innovations;
    D_t is a diagonal matrix of conditional variances,

        D_t = \begin{pmatrix} \sigma^2_{1,t} & 0 & \cdots & 0 \\ 0 & \sigma^2_{2,t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma^2_{m,t} \end{pmatrix}

    in which each \sigma^2_{i,t} evolves according to a univariate GARCH model of the form

        \sigma^2_{i,t} = s_i + \sum_{j=1}^{p_i} \alpha_j \epsilon^2_{i,t-j} + \sum_{j=1}^{q_i} \beta_j \sigma^2_{i,t-j}

    by default, or

        \sigma^2_{i,t} = \exp(\lambda_i z_{i,t}) + \sum_{j=1}^{p_i} \alpha_j \epsilon^2_{i,t-j} + \sum_{j=1}^{q_i} \beta_j \sigma^2_{i,t-j}

    when the het() option is specified, where \lambda_i is a 1 × p vector of parameters, z_{i,t} is a p × 1 vector of independent variables including a constant term, the \alpha_j are ARCH parameters, and the \beta_j are GARCH parameters; and

    R is a matrix of time-invariant unconditional correlations of the standardized residuals D_t^{-1/2}\epsilon_t,

        R = \begin{pmatrix} 1 & \rho_{12} & \cdots & \rho_{1m} \\ \rho_{12} & 1 & \cdots & \rho_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{1m} & \rho_{2m} & \cdots & 1 \end{pmatrix}

This model is known as the constant conditional correlation MGARCH model because R is time invariant.
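To make the structure concrete, here is a worked expansion for the bivariate case (m = 2), using only the definitions above:

    H_t = D_t^{1/2} R\, D_t^{1/2}
        = \begin{pmatrix} \sigma_{1,t} & 0 \\ 0 & \sigma_{2,t} \end{pmatrix}
          \begin{pmatrix} 1 & \rho_{12} \\ \rho_{12} & 1 \end{pmatrix}
          \begin{pmatrix} \sigma_{1,t} & 0 \\ 0 & \sigma_{2,t} \end{pmatrix}
        = \begin{pmatrix} \sigma^2_{1,t} & \rho_{12}\,\sigma_{1,t}\sigma_{2,t} \\ \rho_{12}\,\sigma_{1,t}\sigma_{2,t} & \sigma^2_{2,t} \end{pmatrix}

so each conditional covariance is the time-invariant correlation scaled by the product of the two conditional standard deviations, which is exactly the h_{ij,t} equation given in Remarks and examples.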
Some examples

Example 1: Model with common covariates
We have daily data on the stock returns of three car manufacturers (Toyota, Nissan, and Honda) from January 2, 2003, to December 31, 2010, in the variables toyota, nissan, and honda. We model the conditional means of the returns as a first-order vector autoregressive process and the conditional covariances as a CCC MGARCH process in which the variance of each disturbance term follows a GARCH(1,1) process. We specify the noconstant option because the returns have mean zero. The estimated constants in the variance equations are near zero in this example because of how the data are scaled.
. use http://www.stata-press.com/data/r14/stocks
(Data from Yahoo! Finance)
. mgarch ccc (toyota nissan honda = L.toyota L.nissan L.honda, noconstant),
> arch(1) garch(1)
Calculating starting values....
Optimizing concentrated log likelihood
(setting technique to bhhh)
Iteration 0:   log likelihood = 16898.994
Iteration 1:   log likelihood = 17008.914
Iteration 2:   log likelihood = 17156.946
Iteration 3:   log likelihood = 17249.527
Iteration 4:   log likelihood = 17287.251
Iteration 5:   log likelihood =   17313.5
Iteration 6:   log likelihood = 17335.087
Iteration 7:   log likelihood = 17356.534
Iteration 8:   log likelihood = 17376.051
Iteration 9:   log likelihood = 17400.035
(switching technique to nr)
Iteration 10:  log likelihood = 17423.634
Iteration 11:  log likelihood = 17440.261
Iteration 12:  log likelihood = 17446.381
Iteration 13:  log likelihood = 17447.614
Iteration 14:  log likelihood = 17447.645
Iteration 15:  log likelihood = 17447.645
Optimizing unconcentrated log likelihood
Iteration 0:   log likelihood = 17447.645
Iteration 1:   log likelihood = 17447.651
Iteration 2:   log likelihood = 17447.651
Constant conditional correlation MGARCH model
Sample: 1 - 2015                                Number of obs =  2,014
Distribution: Gaussian                          Wald chi2(9)  =   17.46
Log likelihood = 17447.65                       Prob > chi2   =  0.0420

---------------------------------------------------------------------------------
                    |     Coef.   Std. Err.      z    P>|z|   [95% Conf. Interval]
--------------------+------------------------------------------------------------
toyota              |
         toyota L1. | -.0537817   .0353211   -1.52   0.128   -.1230098    .0154463
         nissan L1. |   .026686    .024841    1.07   0.283   -.0220015    .0753734
          honda L1. | -.0043073   .0302761   -0.14   0.887   -.0636473    .0550327
--------------------+------------------------------------------------------------
ARCH_toyota         |
           arch L1. |  .0615321   .0087313    7.05   0.000    .0444191    .0786452
          garch L1. |  .9213798   .0110412   83.45   0.000    .8997395    .9430201
              _cons |  4.42e-06   1.12e-06    3.93   0.000    2.21e-06    6.62e-06
--------------------+------------------------------------------------------------
nissan              |
         toyota L1. | -.0232321   .0400563   -0.58   0.562   -.1017411    .0552769
         nissan L1. | -.0299552   .0309362   -0.97   0.333   -.0905891    .0306787
          honda L1. |  .0369229   .0360532    1.02   0.306   -.0337401    .1075859
--------------------+------------------------------------------------------------
ARCH_nissan         |
           arch L1. |  .0740294   .0119353    6.20   0.000    .0506366    .0974222
          garch L1. |  .9102548   .0142328   63.95   0.000     .882359    .9381506
              _cons |  6.36e-06   1.76e-06    3.61   0.000    2.91e-06    9.81e-06
--------------------+------------------------------------------------------------
honda               |
         toyota L1. | -.0378616    .036792   -1.03   0.303   -.1099727    .0342495
         nissan L1. |  .0551649   .0272559    2.02   0.043    .0017443    .1085855
          honda L1. | -.0431919   .0331268   -1.30   0.192   -.1081193    .0217354
--------------------+------------------------------------------------------------
ARCH_honda          |
           arch L1. |  .0433036   .0070224    6.17   0.000    .0295399    .0570674
          garch L1. |   .939117    .010131   92.70   0.000    .9192605    .9589735
              _cons |  5.02e-06   1.31e-06    3.83   0.000    2.45e-06    7.59e-06
--------------------+------------------------------------------------------------
corr(toyota,nissan) |  .6532264   .0128035   51.02   0.000     .628132    .6783208
 corr(toyota,honda) |  .7185412   .0108132   66.45   0.000    .6973477    .7397346
 corr(nissan,honda) |  .6298972   .0135336   46.54   0.000    .6033717    .6564226
---------------------------------------------------------------------------------

The iteration log has three parts: the dots from the search for initial values, the iteration log from
optimizing the concentrated log likelihood, and the iteration log from maximizing the unconcentrated
log likelihood. A detailed discussion of the optimization methods can be found in Methods and
formulas.
The header describes the estimation sample and reports a Wald test against the null hypothesis
that all the coefficients on the independent variables in the mean equations are zero. Here the null
hypothesis is rejected at the 5% level.
The output table first presents results for the mean or variance parameters used to model each
dependent variable. Subsequently, the output table presents results for the conditional correlation
parameters. For example, the conditional correlation between the standardized residuals for Toyota
and Nissan is estimated to be 0.65.
The output above indicates that we may not need all the vector autoregressive parameters, but that each of the univariate ARCH, univariate GARCH, and conditional correlation parameters is statistically significant. That the estimated conditional correlation parameters are positive and significant indicates that the returns on these stocks rise or fall together.
That the conditional correlations are time invariant is a restrictive assumption. The DCC MGARCH
model and the VCC MGARCH model nest the CCC MGARCH model. When we test the time-invariance
assumption with Wald tests on the parameters of these more general models in [TS] mgarch dcc and
[TS] mgarch vcc, we reject the null hypothesis that these conditional correlations are time invariant.
Example 2: Model with covariates that differ by equation
We improve the previous example by removing the insignificant parameters from the model. To remove these parameters, we specify the honda equation separately from the toyota and nissan equations:
. mgarch ccc (toyota nissan = , noconstant) (honda = L.nissan, noconstant),
> arch(1) garch(1)
Calculating starting values....
Optimizing concentrated log likelihood
(setting technique to bhhh)
Iteration 0:   log likelihood =  16886.88
Iteration 1:   log likelihood = 16974.779
Iteration 2:   log likelihood = 17147.893
Iteration 3:   log likelihood = 17247.473
Iteration 4:   log likelihood = 17285.549
Iteration 5:   log likelihood = 17311.153
Iteration 6:   log likelihood = 17333.588
Iteration 7:   log likelihood = 17353.717
Iteration 8:   log likelihood = 17374.895
Iteration 9:   log likelihood = 17400.669
(switching technique to nr)
Iteration 10:  log likelihood = 17425.661
Iteration 11:  log likelihood = 17436.789
Iteration 12:  log likelihood =  17439.74
Iteration 13:  log likelihood = 17439.865
Iteration 14:  log likelihood = 17439.866
Optimizing unconcentrated log likelihood
Iteration 0:   log likelihood = 17439.865
Iteration 1:   log likelihood = 17439.872
Iteration 2:   log likelihood = 17439.872
Constant conditional correlation MGARCH model
Sample: 1 - 2015                                Number of obs =  2,014
Distribution: Gaussian                          Wald chi2(1)  =    1.81
Log likelihood = 17439.87                       Prob > chi2   =  0.1781

---------------------------------------------------------------------------------
                    |     Coef.   Std. Err.      z    P>|z|   [95% Conf. Interval]
--------------------+------------------------------------------------------------
ARCH_toyota         |
           arch L1. |  .0619604   .0087942    7.05   0.000     .044724    .0791968
          garch L1. |  .9208961   .0110995   82.97   0.000    .8991414    .9426508
              _cons |  4.43e-06   1.13e-06    3.94   0.000    2.23e-06    6.64e-06
--------------------+------------------------------------------------------------
ARCH_nissan         |
           arch L1. |  .0773095    .012328    6.27   0.000    .0531471    .1014719
          garch L1. |   .906088   .0147303   61.51   0.000    .8772171    .9349589
              _cons |  6.77e-06   1.85e-06    3.66   0.000    3.14e-06    .0000104
--------------------+------------------------------------------------------------
honda               |
         nissan L1. |  .0186628   .0138575    1.35   0.178   -.0084975    .0458231
--------------------+------------------------------------------------------------
ARCH_honda          |
           arch L1. |  .0433741    .006996    6.20   0.000    .0296622    .0570861
          garch L1. |  .9391094   .0100707   93.25   0.000    .9193712    .9588476
              _cons |  5.02e-06   1.31e-06    3.83   0.000    2.45e-06    7.60e-06
--------------------+------------------------------------------------------------
corr(toyota,nissan) |   .652299   .0128271   50.85   0.000    .6271583    .6774396
 corr(toyota,honda) |  .7189531   .0108005   66.57   0.000    .6977845    .7401218
 corr(nissan,honda) |   .628435   .0135653   46.33   0.000    .6018475    .6550225
---------------------------------------------------------------------------------

It turns out that the coefficient on L1.nissan in the honda equation is now statistically insignificant.
We could further improve the model by removing L1.nissan from the model.
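A minimal sketch of that further reduction; dropping L1.nissan leaves no regressors in any mean equation:

    . mgarch ccc (toyota nissan honda = , noconstant), arch(1) garch(1)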
As expected, removing the insignificant parameters from conditional mean equations had almost
no effect on the estimated conditional variance parameters.
There is no mean equation for Toyota or Nissan. In [TS] mgarch ccc postestimation, we discuss
prediction from models without covariates.
Example 3: Model with constraints
Here we fit a bivariate CCC MGARCH model for the Toyota and Nissan shares. We believe that the shares of these car manufacturers follow the same process, so we impose the constraints that the ARCH and the GARCH coefficients are the same for the two companies.
. constraint 1 _b[ARCH_toyota:L.arch] = _b[ARCH_nissan:L.arch]
. constraint 2 _b[ARCH_toyota:L.garch] = _b[ARCH_nissan:L.garch]
. mgarch ccc (toyota nissan = , noconstant), arch(1) garch(1) constraints(1 2)
Calculating starting values....
Optimizing concentrated log likelihood
(setting technique to bhhh)
Iteration 0:   log likelihood = 10317.225
Iteration 1:   log likelihood = 10630.464
Iteration 2:   log likelihood = 10865.964
Iteration 3:   log likelihood = 11063.329
  (output omitted )
Iteration 8:   log likelihood = 11273.962
Iteration 9:   log likelihood = 11274.409
(switching technique to nr)
Iteration 10:  log likelihood = 11274.494
Iteration 11:  log likelihood = 11274.499
Iteration 12:  log likelihood = 11274.499
Optimizing unconcentrated log likelihood
Iteration 0:   log likelihood = 11274.499
Iteration 1:   log likelihood = 11274.501
Iteration 2:   log likelihood = 11274.501
Constant conditional correlation MGARCH model
Sample: 1 - 2015                                Number of obs =  2,015
Distribution: Gaussian                          Wald chi2(.)  =       .
Log likelihood =  11274.5                       Prob > chi2   =       .
 ( 1)  [ARCH_toyota]L.arch - [ARCH_nissan]L.arch = 0
 ( 2)  [ARCH_toyota]L.garch - [ARCH_nissan]L.garch = 0

---------------------------------------------------------------------------------
                    |     Coef.   Std. Err.      z    P>|z|   [95% Conf. Interval]
--------------------+------------------------------------------------------------
ARCH_toyota         |
           arch L1. |  .0742678   .0095464    7.78   0.000    .0555572    .0929785
          garch L1. |  .9131674   .0111558   81.86   0.000    .8913024    .9350323
              _cons |  3.77e-06   1.02e-06    3.71   0.000    1.78e-06    5.77e-06
--------------------+------------------------------------------------------------
ARCH_nissan         |
           arch L1. |  .0742678   .0095464    7.78   0.000    .0555572    .0929785
          garch L1. |  .9131674   .0111558   81.86   0.000    .8913024    .9350323
              _cons |  5.30e-06   1.36e-06    3.89   0.000    2.63e-06    7.97e-06
--------------------+------------------------------------------------------------
corr(toyota,nissan) |   .651389   .0128482   50.70   0.000    .6262071    .6765709
---------------------------------------------------------------------------------
We could test our constraints by fitting the unconstrained model and performing a likelihood-ratio
test. The results indicate that the restricted model is preferable.
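A sketch of that comparison (our illustration, not output reproduced from the manual): both sets of results must be stored with estimates store before lrtest can compare them.

    . mgarch ccc (toyota nissan = , noconstant), arch(1) garch(1) constraints(1 2)
    . estimates store constrained
    . mgarch ccc (toyota nissan = , noconstant), arch(1) garch(1)
    . estimates store unconstrained
    . lrtest constrained unconstrained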
Example 4: Model with a GARCH term
In this example, we have data on fictional stock returns for the Acme and Anvil corporations, and we believe that the movement of the two stocks is governed by different processes. We specify one ARCH and one GARCH term for the conditional variance equation for Acme and two ARCH terms for the conditional variance equation for Anvil. In addition, we include the lagged value of the stock return for Apex, the main subsidiary of Anvil corporation, in the variance equation of Anvil. For Acme, we have data on the changes in an index of futures prices of products related to those produced by Acme in afrelated. For Anvil, we have data on the changes in an index of futures prices of inputs used by Anvil in afinputs.
. use http://www.stata-press.com/data/r14/acmeh
. mgarch ccc (acme = afrelated, noconstant arch(1) garch(1))
> (anvil = afinputs, arch(1/2) het(L.apex))
Calculating starting values....
Optimizing concentrated log likelihood
(setting technique to bhhh)
Iteration 0:   log likelihood = -12996.245
Iteration 1:   log likelihood = -12609.982
Iteration 2:   log likelihood = -12563.103
Iteration 3:   log likelihood =  -12554.73
Iteration 4:   log likelihood = -12554.542
Iteration 5:   log likelihood = -12554.534
Iteration 6:   log likelihood = -12554.534
Iteration 7:   log likelihood = -12554.534
Optimizing unconcentrated log likelihood
Iteration 0:   log likelihood = -12554.534
Iteration 1:   log likelihood = -12554.533
Constant conditional correlation MGARCH model
Sample: 1 - 2500                                Number of obs =  2,499
Distribution: Gaussian                          Wald chi2(2)  = 2212.30
Log likelihood = -12554.53                      Prob > chi2   =  0.0000

---------------------------------------------------------------------------------
                    |     Coef.   Std. Err.      z    P>|z|   [95% Conf. Interval]
--------------------+------------------------------------------------------------
acme                |
          afrelated |  .9175148   .0651088   14.09   0.000    .7899039    1.045126
--------------------+------------------------------------------------------------
ARCH_acme           |
           arch L1. |  .0798719   .0169526    4.71   0.000    .0466455    .1130983
          garch L1. |  .7336823   .0601569   12.20   0.000    .6157768    .8515877
              _cons |  2.880836    .760206    3.79   0.000     1.39086    4.370812
--------------------+------------------------------------------------------------
anvil               |
           afinputs | -1.015561   .0226437  -44.85   0.000   -1.059942     -.97118
              _cons |  .0703606   .0211689    3.32   0.001    .0288703    .1118508
--------------------+------------------------------------------------------------
ARCH_anvil          |
           arch L1. |  .4893288   .0286012   17.11   0.000    .4332714    .5453862
           arch L2. |  .2782296   .0208172   13.37   0.000    .2374287    .3190305
           apex L1. |  1.894972   .0616293   30.75   0.000    1.774181    2.015763
              _cons |  .1034111   .0735512    1.41   0.160   -.0407466    .2475688
--------------------+------------------------------------------------------------
   corr(acme,anvil) | -.5354047   .0143275  -37.37   0.000    -.563486   -.5073234
---------------------------------------------------------------------------------

The results indicate that increases in the futures prices for related products lead to higher returns on the Acme stock, and increased input prices lead to lower returns on the Anvil stock. In the conditional variance equation for Anvil, the coefficient on L1.apex is positive and significant, which indicates that an increase in the return on the Apex stock leads to more variability in the return on the Anvil stock. That the estimated conditional correlation between the two returns is -0.54 indicates that these returns tend to move in opposite directions; in other words, an increase in the return for the Acme stock tends to be associated with a decrease in the return for the Anvil stock, and vice versa.
Stored results
mgarch ccc stores the following in e():

Scalars
  e(N)             number of observations
  e(k)             number of parameters
  e(k_aux)         number of auxiliary parameters
  e(k_extra)       number of extra estimates added to _b
  e(k_eq)          number of equations in e(b)
  e(k_dv)          number of dependent variables
  e(df_m)          model degrees of freedom
  e(ll)            log likelihood
  e(chi2)          χ2
  e(p)             significance
  e(estdf)         1 if distribution parameter was estimated, 0 otherwise
  e(usr)           user-provided distribution parameter
  e(tmin)          minimum time in sample
  e(tmax)          maximum time in sample
  e(N_gaps)        number of gaps
  e(rank)          rank of e(V)
  e(ic)            number of iterations
  e(rc)            return code
  e(converged)     1 if converged, 0 otherwise

Macros
  e(cmd)             mgarch
  e(model)           ccc
  e(cmdline)         command as typed
  e(depvar)          names of dependent variables
  e(covariates)      list of covariates
  e(dv_eqs)          dependent variables with mean equations
  e(indeps)          independent variables in each equation
  e(tvar)            time variable
  e(title)           title in estimation output
  e(chi2type)        Wald; type of model χ2 test
  e(vce)             vcetype specified in vce()
  e(vcetype)         title used to label Std. Err.
  e(tmins)           formatted minimum time
  e(tmaxs)           formatted maximum time
  e(dist)            distribution for error term: gaussian or t
  e(arch)            specified ARCH terms
  e(garch)           specified GARCH terms
  e(technique)       maximization technique
  e(properties)      b V
  e(estat_cmd)       program used to implement estat
  e(predict)         program used to implement predict
  e(marginsok)       predictions allowed by margins
  e(marginsnotok)    predictions disallowed by margins
  e(marginsdefault)  default predict() specification for margins
  e(asbalanced)      factor variables fvset as asbalanced
  e(asobserved)      factor variables fvset as asobserved

Matrices
  e(b)             coefficient vector
  e(Cns)           constraints matrix
  e(ilog)          iteration log (up to 20 iterations)
  e(gradient)      gradient vector
  e(hessian)       Hessian matrix
  e(V)             variance–covariance matrix of the estimators
  e(pinfo)         parameter information, used by predict

Functions
  e(sample)        marks estimation sample
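These results are accessed the usual way; for example (a sketch):

    . display e(N)
    . matrix list e(b)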
Methods and formulas

mgarch ccc estimates the parameters of the CCC MGARCH model by maximum likelihood. The unconcentrated log-likelihood function based on the multivariate normal distribution for observation t is

    l_t = -0.5\,m\log(2\pi) - 0.5\log\{\det(R)\} - \log\{\det(D_t^{1/2})\} - 0.5\,\widetilde{\epsilon}_t' R^{-1} \widetilde{\epsilon}_t    (1)

where \widetilde{\epsilon}_t = D_t^{-1/2}\epsilon_t is an m × 1 vector of standardized residuals, \epsilon_t = y_t - C x_t. The log-likelihood function is \sum_{t=1}^T l_t.

If we assume that \nu_t follow a multivariate t distribution with degrees of freedom (df) greater than 2, then the unconcentrated log-likelihood function for observation t is

    l_t = \log\Gamma\left(\frac{df + m}{2}\right) - \log\Gamma\left(\frac{df}{2}\right) - \frac{m}{2}\log\{(df - 2)\pi\}
          - 0.5\log\{\det(R)\} - \log\{\det(D_t^{1/2})\} - \frac{df + m}{2}\log\left(1 + \frac{\widetilde{\epsilon}_t' R^{-1} \widetilde{\epsilon}_t}{df - 2}\right)    (2)

The correlation matrix R can be concentrated out of (1) and (2) by defining the (i, j)th element of R as

    \widehat{\rho}_{ij} = \left(\sum_{t=1}^T \widetilde{\epsilon}_{it}\widetilde{\epsilon}_{jt}\right) \left(\sum_{t=1}^T \widetilde{\epsilon}_{it}^2\right)^{-\frac{1}{2}} \left(\sum_{t=1}^T \widetilde{\epsilon}_{jt}^2\right)^{-\frac{1}{2}}

mgarch ccc starts the optimization process with the concentrated log-likelihood function.

The starting values for the parameters in the mean equations and the initial residuals \widehat{\epsilon}_t are obtained by least-squares regression. The starting values for the parameters in the variance equations are obtained by a procedure proposed by Gourieroux and Monfort (1997, sec. 6.2.2). If the optimization is started with the unconcentrated log likelihood, then the initial values for the parameters in R are calculated from the standardized residuals \widetilde{\epsilon}_t.

GARCH estimators require initial values that can be plugged in for \epsilon_{t-i}\epsilon_{t-i}' and H_{t-j} when t - i < 1 and t - j < 1. mgarch ccc substitutes an estimator of the unconditional covariance of the disturbances,

    \widehat{\Sigma} = T^{-1}\sum_{t=1}^T \widehat{\epsilon}_t \widehat{\epsilon}_t'    (3)

for \epsilon_{t-i}\epsilon_{t-i}' when t - i < 1 and for H_{t-j} when t - j < 1, where \widehat{\epsilon}_t is the vector of residuals calculated using the estimated parameters.

mgarch ccc requires a sample size that at the minimum is equal to the number of parameters in the model plus twice the number of equations.

mgarch ccc uses numerical derivatives in maximizing the log-likelihood function.
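If you want the optimizer to work with the unconcentrated log likelihood from the start, the unconcentrated option documented above does exactly that; a sketch with the hypothetical Quick start variables:

    . mgarch ccc (y1 y2), arch(1) garch(1) unconcentrated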

References
Bollerslev, T. 1990. Modelling the coherence in short-run nominal exchange rates: A multivariate generalized ARCH model. Review of Economics and Statistics 72: 498–505.
Gourieroux, C. S., and A. Monfort. 1997. Time Series and Dynamic Models. Trans. ed. G. M. Gallo. Cambridge: Cambridge University Press.
Silvennoinen, A., and T. Teräsvirta. 2009. Multivariate GARCH models. In Handbook of Financial Time Series, ed. T. G. Andersen, R. A. Davis, J.-P. Kreiß, and T. Mikosch, 201–229. Berlin: Springer.

Also see
[TS] mgarch ccc postestimation – Postestimation tools for mgarch ccc
[TS] mgarch – Multivariate GARCH models
[TS] tsset – Declare data to be time-series data
[TS] arch – Autoregressive conditional heteroskedasticity (ARCH) family of estimators
[TS] var – Vector autoregressive models
[U] 20 Estimation and postestimation commands

Title
mgarch ccc postestimation – Postestimation tools for mgarch ccc

Postestimation commands   predict   margins   Remarks and examples
Methods and formulas      Also see

Postestimation commands
The following standard postestimation commands are available after mgarch ccc:

Command            Description
------------------------------------------------------------------------
contrast           contrasts and ANOVA-style joint tests of estimates
estat ic           Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
estat summarize    summary statistics for the estimation sample
estat vce          variance–covariance matrix of the estimators (VCE)
estimates          cataloging estimation results
forecast           dynamic forecasts and simulations
lincom             point estimates, standard errors, testing, and inference for
                     linear combinations of coefficients
lrtest             likelihood-ratio test
margins            marginal means, predictive margins, marginal effects, and
                     average marginal effects
marginsplot        graph the results from margins (profile plots, interaction
                     plots, etc.)
nlcom              point estimates, standard errors, testing, and inference for
                     nonlinear combinations of coefficients
predict            predictions, residuals, influence statistics, and other
                     diagnostic measures
predictnl          point estimates, standard errors, testing, and inference for
                     generalized predictions
pwcompare          pairwise comparisons of estimates
test               Wald tests of simple and composite linear hypotheses
testnl             Wald tests of nonlinear hypotheses
------------------------------------------------------------------------
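For instance, a typical hedged sequence after mgarch ccc (the equation and coefficient names are those used in the examples of [TS] mgarch ccc):

    . estat ic
    . test [ARCH_toyota]L.arch = [ARCH_nissan]L.arch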

predict

Description for predict
predict creates a new variable containing predictions such as linear predictions and conditional variances, covariances, and correlations. All predictions are available as static one-step-ahead predictions or as dynamic multistep predictions, and you can control when dynamic predictions begin.

Menu for predict
Statistics > Postestimation
Syntax for predict
    predict [type] { stub* | newvarlist } [if] [in] [, statistic options]

statistic        Description
------------------------------------------------------------------------
Main
  xb             linear prediction; the default
  residuals      residuals
  variance       conditional variances and covariances
  correlation    conditional correlations
------------------------------------------------------------------------
These statistics are available both in and out of sample; type predict ... if e(sample) ... if wanted only for the estimation sample.

options                    Description
------------------------------------------------------------------------
Options
  equation(eqnames)        names of equations for which predictions are made
  dynamic(time_constant)   begin dynamic forecast at specified time
------------------------------------------------------------------------
Options for predict

Main
xb, the default, calculates the linear predictions of the dependent variables.
residuals calculates the residuals.
variance predicts the conditional variances and conditional covariances.
correlation predicts the conditional correlations.

Options
equation(eqnames) specifies the equation for which the predictions are calculated. Use this option to predict a statistic for a particular equation. Equation names, such as equation(income), are used to identify equations.
    One equation name may be specified when predicting the dependent variable, the residuals, or the conditional variance. For example, specifying equation(income) causes predict to predict income, and specifying variance equation(income) causes predict to predict the conditional variance of income.
    Two equations may be specified when predicting a conditional variance or covariance. For example, specifying equation(income, consumption) variance causes predict to predict the conditional covariance of income and consumption.
dynamic(time_constant) specifies when predict starts producing dynamic forecasts. The specified time_constant must be in the scale of the time variable specified in tsset, and the time_constant must be inside a sample for which observations on the dependent variables are available. For example, dynamic(tq(2008q4)) causes dynamic predictions to begin in the fourth quarter of 2008, assuming that your time variable is quarterly; see [D] datetime. If the model contains exogenous variables, they must be present for the whole predicted sample. dynamic() may not be specified with residuals.
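To make these options concrete, here is a sketch using the hypothetical income and consumption equations mentioned above (tq() assumes a quarterly time variable):

    . predict inc_hat, xb equation(income)
    . predict v_ic, variance equation(income, consumption)
    . predict v*, variance dynamic(tq(2008q4))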

margins

Description for margins
margins estimates margins of response for linear predictions and conditional variances, covariances, and correlations. All predictions are available as static one-step-ahead predictions or as dynamic multistep predictions, and you can control when dynamic predictions begin.

Menu for margins
Statistics > Postestimation

Syntax for margins
    margins [marginlist] [, options]
    margins [marginlist], predict(statistic ...) [predict(statistic ...) ...] [options]

statistic       Description
------------------------------------------------------------------------
  default       linear predictions for each equation
  xb            linear prediction for a specified equation
  variance      conditional variances and covariances
  correlation   conditional correlations
  residuals     not allowed with margins
------------------------------------------------------------------------
xb defaults to the first equation.
Statistics not allowed with margins are functions of stochastic quantities other than e(b).
For the full syntax, see [R] margins.
Remarks and examples
We assume that you have already read [TS] mgarch ccc. In this entry, we use predict after mgarch ccc to make in-sample and out-of-sample forecasts.

Example 1: Dynamic forecasts
In this example, we obtain dynamic forecasts for the Toyota, Nissan, and Honda stock returns modeled in example 2 of [TS] mgarch ccc. In the output below, we reestimate the parameters of the model, use tsappend (see [TS] tsappend) to extend the data, and use predict to obtain in-sample one-step-ahead forecasts and dynamic forecasts of the conditional variances of the returns. We graph the forecasts below.
. use http://www.stata-press.com/data/r14/stocks
(Data from Yahoo! Finance)
. quietly mgarch ccc (toyota nissan = , noconstant)
> (honda = L.nissan, noconstant), arch(1) garch(1)
. tsappend, add(50)
. predict H*, variance dynamic(2016)

(figure omitted: time-series plot of the variance predictions against Date, 01jan2009 to 01jan2011, vertical axis from .001 to .003; series: Variance prediction (toyota,toyota), (nissan,nissan), and (honda,honda), all dynamic(2016), with a vertical line marking the start of the dynamic forecasts)
Recent in-sample one-step-ahead forecasts are plotted to the left of the vertical line in the above
graph, and the dynamic out-of-sample forecasts appear to the right of the vertical line. The graph
shows the tail end of the huge increase in return volatility that took place in 2008 and 2009. It also
shows that the dynamic forecasts quickly converge.
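The graph itself can be produced with tsline; a sketch, assuming predict H* created variables named as in the legend above:

    . tsline H_toyota_toyota H_nissan_nissan H_honda_honda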

Methods and formulas
All one-step predictions are obtained by substituting the parameter estimates into the model. The estimated unconditional variance matrix of the disturbances, \widehat{\Sigma}, is the initial value for the ARCH and GARCH terms. The postestimation routines recompute \widehat{\Sigma} using the prediction sample, the parameter estimates stored in e(b), and (3) in Methods and formulas of [TS] mgarch ccc.

For observations in which the residuals are missing, the estimated unconditional variance matrix of the disturbances is used in place of the outer product of the residuals.

Dynamic predictions of the dependent variables use previously predicted values beginning in the period specified by dynamic().

Dynamic variance predictions are implemented by substituting \widehat{\Sigma} for the outer product of the residuals beginning in the period specified in dynamic().

Also see
[TS] mgarch ccc – Constant conditional correlation multivariate GARCH models
[U] 20 Estimation and postestimation commands

Title
mgarch dcc – Dynamic conditional correlation multivariate GARCH models

Description     Quick start            Menu             Syntax
Options         Remarks and examples   Stored results   Methods and formulas
References      Also see

Description
mgarch dcc estimates the parameters of dynamic conditional correlation (DCC) multivariate generalized autoregressive conditionally heteroskedastic (MGARCH) models in which the conditional variances are modeled as univariate generalized autoregressive conditionally heteroskedastic (GARCH) models and the conditional covariances are modeled as nonlinear functions of the conditional variances. The conditional quasicorrelation parameters that weight the nonlinear combinations of the conditional variances follow the GARCH-like process specified in Engle (2002).

The DCC MGARCH model is about as flexible as the closely related varying conditional correlation MGARCH model (see [TS] mgarch vcc), more flexible than the constant conditional correlation MGARCH model (see [TS] mgarch ccc), and more parsimonious than the diagonal vech MGARCH model (see [TS] mgarch dvech).

Quick start
Fit dynamic conditional correlation multivariate GARCH with first- and second-order ARCH components
for dependent variables y1 and y2 using tsset data
mgarch dcc (y1 y2), arch(1 2)
Add regressors x1 and x2 and first-order GARCH component
mgarch dcc (y1 y2 = x1 x2), arch(1 2) garch(1)
Add z1 to the model for the conditional heteroskedasticity
mgarch dcc (y1 y2 = x1 x2), arch(1 2) garch(1) het(z1)

Menu
Statistics > Multivariate time series > Multivariate GARCH
Syntax
    mgarch dcc eq [eq ... eq] [if] [in] [, options]

where each eq has the form

    (depvars = [indepvars] [, eqoptions])

options                       Description
------------------------------------------------------------------------
Model
  arch(numlist)               ARCH terms for all equations
  garch(numlist)              GARCH terms for all equations
  het(varlist)                include varlist in the specification of the
                                conditional variance for all equations
  distribution(dist [#])      use dist distribution for errors [may be
                                gaussian (synonym normal) or t; default
                                is gaussian]
  constraints(numlist)        apply linear constraints

SE/Robust
  vce(vcetype)                vcetype may be oim or robust

Reporting
  level(#)                    set confidence level; default is level(95)
  nocnsreport                 do not display constraints
  display_options             control columns and column formats, row
                                spacing, line width, display of omitted
                                variables and base and empty cells, and
                                factor-variable labeling

Maximization
  maximize_options            control the maximization process; seldom used
  from(matname)               initial values for the coefficients; seldom
                                used

  coeflegend                  display legend instead of statistics
------------------------------------------------------------------------

eqoptions                     Description
------------------------------------------------------------------------
  noconstant                  suppress constant term in the mean equation
  arch(numlist)               ARCH terms
  garch(numlist)              GARCH terms
  het(varlist)                include varlist in the specification of the
                                conditional variance
------------------------------------------------------------------------
You must tsset your data before using mgarch dcc; see [TS] tsset.
indepvars and varlist may contain factor variables; see [U] 11.4.3 Factor variables.
depvars, indepvars, and varlist may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, fp, rolling, and statsby are allowed; see [U] 11.1.10 Prefix commands.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.
Options

Model
arch(numlist) specifies the ARCH terms for all equations in the model. By default, no ARCH terms are specified.
garch(numlist) specifies the GARCH terms for all equations in the model. By default, no GARCH terms are specified.
het(varlist) specifies that varlist be included in the specification of the conditional variance for all equations. This varlist enters the variance specification collectively as multiplicative heteroskedasticity.
distribution(dist [#]) specifies the assumed distribution for the errors. dist may be gaussian, normal, or t.
    gaussian and normal are synonyms; each causes mgarch dcc to assume that the errors come from a multivariate normal distribution. # may not be specified with either of them.
    t causes mgarch dcc to assume that the errors follow a multivariate Student t distribution, and the degree-of-freedom parameter is estimated along with the other parameters of the model. If distribution(t #) is specified, then mgarch dcc uses a multivariate Student t distribution with # degrees of freedom. # must be greater than 2.
constraints(numlist) specifies linear constraints to apply to the parameter estimates.
SE/Robust
vce(vcetype) specifies the estimator for the variance–covariance matrix of the estimator.
    vce(oim), the default, specifies to use the observed information matrix (OIM) estimator.
    vce(robust) specifies to use the Huber/White/sandwich estimator.

Reporting
level(#); see [R] estimation options.
nocnsreport; see [R] estimation options.
display_options: noci, nopvalues, noomitted, vsquish, noemptycells, baselevels, allbaselevels, nofvlabel, fvwrap(#), fvwrapon(style), cformat(%fmt), pformat(%fmt), sformat(%fmt), and nolstretch; see [R] estimation options.

Maximization
maximize_options: difficult, technique(algorithm_spec), iterate(#), [no]log, trace, gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#), nrtolerance(#), nonrtolerance, and from(matname); see [R] maximize for all options except from(), and see below for information on from(). These options are seldom used.
from(matname) specifies initial values for the coefficients. from(b0) causes mgarch dcc to begin the optimization algorithm with the values in b0. b0 must be a row vector, and the number of columns must equal the number of parameters in the model.

The following option is available with mgarch dcc but is not shown in the dialog box:
coeflegend; see [R] estimation options.
Eqoptions
noconstant suppresses the constant term in the mean equation.
arch(numlist) specifies the ARCH terms in the equation. By default, no ARCH terms are specified.
This option may not be specified with model-level arch().
garch(numlist) specifies the GARCH terms in the equation. By default, no GARCH terms are specified.
This option may not be specified with model-level garch().
het(varlist) specifies that varlist be included in the specification of the conditional variance. This
varlist enters the variance specification collectively as multiplicative heteroskedasticity. This option
may not be specified with model-level het().
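A hedged sketch (hypothetical variables y1, y2, x1, and w2) combining a model-level ARCH/GARCH specification with an equation-level het() term for the second equation only; because het() appears only at the equation level, it does not conflict with the model-level options:

    . mgarch dcc (y1 = x1) (y2 = x1, het(w2)), arch(1) garch(1)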

Remarks and examples

We assume that you have already read [TS] mgarch, which provides an introduction to MGARCH models and the methods implemented in mgarch dcc.

MGARCH models are dynamic multivariate regression models in which the conditional variances and covariances of the errors follow an autoregressive-moving-average structure. The DCC MGARCH model uses a nonlinear combination of univariate GARCH models with time-varying cross-equation weights to model the conditional covariance matrix of the errors.

As discussed in [TS] mgarch, MGARCH models differ in the parsimony and flexibility of their specifications for a time-varying conditional covariance matrix of the disturbances, denoted by H_t. In the conditional correlation family of MGARCH models, the diagonal elements of H_t are modeled as univariate GARCH models, whereas the off-diagonal elements are modeled as nonlinear functions of the diagonal terms. In the DCC MGARCH model,

    h_{ij,t} = \rho_{ij,t}\sqrt{h_{ii,t}\,h_{jj,t}}

where the diagonal elements h_{ii,t} and h_{jj,t} follow univariate GARCH processes and \rho_{ij,t} follows the dynamic process specified in Engle (2002) and discussed below.

Because \rho_{ij,t} varies with time, this model is known as the DCC GARCH model.

Technical note
The DCC GARCH model proposed by Engle (2002) can be written as

    y_t = C x_t + \epsilon_t
    \epsilon_t = H_t^{1/2} \nu_t
    H_t = D_t^{1/2} R_t D_t^{1/2}
    R_t = \mathrm{diag}(Q_t)^{-1/2}\, Q_t\, \mathrm{diag}(Q_t)^{-1/2}
    Q_t = (1 - \lambda_1 - \lambda_2)R + \lambda_1 \widetilde{\epsilon}_{t-1}\widetilde{\epsilon}_{t-1}' + \lambda_2 Q_{t-1}    (1)

where

    y_t is an m × 1 vector of dependent variables;
    C is an m × k matrix of parameters;
    x_t is a k × 1 vector of independent variables, which may contain lags of y_t;
    H_t^{1/2} is the Cholesky factor of the time-varying conditional covariance matrix H_t;
    \nu_t is an m × 1 vector of normal, independent, and identically distributed innovations;
    D_t is a diagonal matrix of conditional variances,

        D_t = \begin{pmatrix} \sigma^2_{1,t} & 0 & \cdots & 0 \\ 0 & \sigma^2_{2,t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma^2_{m,t} \end{pmatrix}

    in which each \sigma^2_{i,t} evolves according to a univariate GARCH model of the form

        \sigma^2_{i,t} = s_i + \sum_{j=1}^{p_i} \alpha_j \epsilon^2_{i,t-j} + \sum_{j=1}^{q_i} \beta_j \sigma^2_{i,t-j}

    by default, or

        \sigma^2_{i,t} = \exp(\lambda_i z_{i,t}) + \sum_{j=1}^{p_i} \alpha_j \epsilon^2_{i,t-j} + \sum_{j=1}^{q_i} \beta_j \sigma^2_{i,t-j}

    when the het() option is specified, where \lambda_i is a 1 × p vector of parameters, z_{i,t} is a p × 1 vector of independent variables including a constant term, the \alpha_j are ARCH parameters, and the \beta_j are GARCH parameters;

    R_t is a matrix of conditional quasicorrelations,

        R_t = \begin{pmatrix} 1 & \rho_{12,t} & \cdots & \rho_{1m,t} \\ \rho_{12,t} & 1 & \cdots & \rho_{2m,t} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{1m,t} & \rho_{2m,t} & \cdots & 1 \end{pmatrix}

    \widetilde{\epsilon}_t is an m × 1 vector of standardized residuals, D_t^{-1/2}\epsilon_t; and

    \lambda_1 and \lambda_2 are parameters that govern the dynamics of conditional quasicorrelations. \lambda_1 and \lambda_2 are nonnegative and satisfy 0 \le \lambda_1 + \lambda_2 < 1.

When Q_t is stationary, the R matrix in (1) is a weighted average of the unconditional covariance matrix of the standardized residuals \widetilde{\epsilon}_t, denoted by \overline{R}, and the unconditional mean of Q_t, denoted by \overline{Q}. Because \overline{R} \ne \overline{Q}, as shown by Aielli (2009), R is neither the unconditional correlation matrix nor the unconditional mean of Q_t. For this reason, the parameters in R are known as quasicorrelations; see Aielli (2009) and Engle (2009) for discussions.
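In scalar form (our notation), the normalization in the R_t equation means that each conditional quasicorrelation is just an off-diagonal element of Q_t scaled by the corresponding diagonal elements:

    \rho_{ij,t} = \frac{q_{ij,t}}{\sqrt{q_{ii,t}\,q_{jj,t}}}

which guarantees that R_t has a unit diagonal whatever the path of Q_t.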

Some examples

Example 1: Model with common covariates
We have daily data on the stock returns of three car manufacturers (Toyota, Nissan, and Honda) from January 2, 2003, to December 31, 2010, in the variables toyota, nissan, and honda. We model the conditional means of the returns as a first-order vector autoregressive process and the conditional covariances as a DCC MGARCH process in which the variance of each disturbance term follows a GARCH(1,1) process.

. use http://www.stata-press.com/data/r14/stocks
(Data from Yahoo! Finance)
. mgarch dcc (toyota nissan honda = L.toyota L.nissan L.honda, noconstant),
> arch(1) garch(1)
Calculating starting values....
Optimizing log likelihood
(setting technique to bhhh)
Iteration 0:   log likelihood = 16902.435
Iteration 1:   log likelihood = 17005.448
Iteration 2:   log likelihood = 17157.958
Iteration 3:   log likelihood = 17267.363
Iteration 4:   log likelihood =  17318.29
Iteration 5:   log likelihood = 17353.029
Iteration 6:   log likelihood = 17369.115
Iteration 7:   log likelihood = 17388.035
Iteration 8:   log likelihood = 17401.254
Iteration 9:   log likelihood = 17435.556
(switching technique to nr)
Iteration 10:  log likelihood = 17451.739
Iteration 11:  log likelihood = 17476.882
Iteration 12:  log likelihood = 17478.382
Iteration 13:  log likelihood = 17483.858
Iteration 14:  log likelihood = 17484.886
Iteration 15:  log likelihood =  17484.95
Iteration 16:  log likelihood =  17484.95
Refining estimates
Iteration 0:   log likelihood =  17484.95
Iteration 1:   log likelihood =  17484.95
Dynamic conditional correlation MGARCH model
Sample: 1 - 2015                                Number of obs =  2,014
Distribution: Gaussian                          Wald chi2(9)  =   19.54
Log likelihood = 17484.95                       Prob > chi2   =  0.0210

---------------------------------------------------------------------------------
                    |     Coef.   Std. Err.      z    P>|z|   [95% Conf. Interval]
--------------------+------------------------------------------------------------
toyota              |
         toyota L1. | -.0510867   .0339825   -1.50   0.133   -.1176911    .0155177
         nissan L1. |  .0297829   .0247455    1.20   0.229   -.0187173    .0782832
          honda L1. | -.0162824   .0300323   -0.54   0.588   -.0751447    .0425799
--------------------+------------------------------------------------------------
ARCH_toyota         |
           arch L1. |  .0608223   .0086687    7.02   0.000     .043832    .0778127
          garch L1. |  .9222203   .0111055   83.04   0.000    .9004539    .9439866
              _cons |  4.47e-06   1.15e-06    3.90   0.000    2.22e-06    6.72e-06
--------------------+------------------------------------------------------------
nissan              |
         toyota L1. | -.0056722   .0389348   -0.15   0.884   -.0819829    .0706386
         nissan L1. | -.0287097   .0309379   -0.93   0.353   -.0893468    .0319275
          honda L1. |   .015498   .0358802    0.43   0.666   -.0548259    .0858218
--------------------+------------------------------------------------------------
ARCH_nissan         |
           arch L1. |  .0844244   .0128192    6.59   0.000    .0592992    .1095496
          garch L1. |    .89942   .0151125   59.51   0.000       .8698      .92904
              _cons |  7.21e-06   1.93e-06    3.74   0.000    3.43e-06     .000011
--------------------+------------------------------------------------------------
honda               |
         toyota L1. | -.0272415   .0361819   -0.75   0.452   -.0981566    .0436737
         nissan L1. |  .0617491   .0271378    2.28   0.023    .0085599    .1149382
          honda L1. |  -.063507   .0332918   -1.91   0.056   -.1287578    .0017437
--------------------+------------------------------------------------------------
ARCH_honda          |
           arch L1. |  .0490134   .0073695    6.65   0.000    .0345693    .0634574
          garch L1. |  .9331125   .0103686   89.99   0.000    .9127905    .9534346
              _cons |  5.35e-06   1.35e-06    3.95   0.000    2.69e-06    8.00e-06
--------------------+------------------------------------------------------------
corr(toyota,nissan) |  .6689537   .0168019   39.81   0.000    .6360226    .7018849
 corr(toyota,honda) |  .7259623   .0140155   51.80   0.000    .6984925    .7534321
 corr(nissan,honda) |  .6335651   .0180409   35.12   0.000    .5982056    .6689247
--------------------+------------------------------------------------------------
Adjustment          |
            lambda1 |  .0315281   .0088382    3.57   0.000    .0142054    .0488507
            lambda2 |  .8704093   .0613336   14.19   0.000    .7501977    .9906209
---------------------------------------------------------------------------------

The iteration log has three parts: the dots from the search for initial values, the iteration log from optimizing the log likelihood, and the iteration log from the refining step. A detailed discussion of the optimization methods is in Methods and formulas.

The header describes the estimation sample and reports a Wald test against the null hypothesis that all the coefficients on the independent variables in the mean equations are zero. Here the null hypothesis is rejected at the 5% level.

The output table first presents results for the mean or variance parameters used to model each dependent variable. Subsequently, the output table presents results for the conditional quasicorrelations. For example, the conditional quasicorrelation between the standardized residuals for Toyota and Nissan is estimated to be 0.67. Finally, the output table presents results for the adjustment parameters \lambda_1 and \lambda_2. In the example at hand, the estimates for both \lambda_1 and \lambda_2 are statistically significant.

The DCC MGARCH model reduces to the CCC MGARCH model when \lambda_1 = \lambda_2 = 0. The output below shows that a Wald test rejects the null hypothesis that \lambda_1 = \lambda_2 = 0 at all conventional levels.

. test _b[Adjustment:lambda1] = _b[Adjustment:lambda2] = 0
 ( 1)  [Adjustment]lambda1 - [Adjustment]lambda2 = 0
 ( 2)  [Adjustment]lambda1 = 0
           chi2(  2) =  1102.27
         Prob > chi2 =   0.0000

These results indicate that the assumption of time-invariant conditional correlations maintained in the CCC MGARCH model is too restrictive for these data.

Example 2: Model with covariates that differ by equation
We improve the previous example by removing the insignificant parameters from the model. To remove these parameters, we specify the honda equation separately from the toyota and nissan equations:
. mgarch dcc (toyota nissan = , noconstant) (honda = L.nissan, noconstant),
> arch(1) garch(1)
Calculating starting values....
Optimizing log likelihood
(setting technique to bhhh)
Iteration 0:   log likelihood = 16884.502
Iteration 1:   log likelihood = 16970.755
Iteration 2:   log likelihood = 17140.318
Iteration 3:   log likelihood = 17237.807
Iteration 4:   log likelihood =  17306.12
Iteration 5:   log likelihood = 17342.533
Iteration 6:   log likelihood = 17363.511
Iteration 7:   log likelihood = 17392.501
Iteration 8:   log likelihood = 17407.242
Iteration 9:   log likelihood = 17448.702
(switching technique to nr)
Iteration 10:  log likelihood = 17472.199
Iteration 11:  log likelihood = 17475.842
Iteration 12:  log likelihood = 17476.345
Iteration 13:  log likelihood =  17476.35
Iteration 14:  log likelihood =  17476.35
Refining estimates
Iteration 0:   log likelihood =  17476.35
Iteration 1:   log likelihood =  17476.35

Dynamic conditional correlation MGARCH model
Sample: 1 - 2015                                Number of obs =  2,014
Distribution: Gaussian                          Wald chi2(1)  =    2.21
Log likelihood = 17476.35                       Prob > chi2   =  0.1374

---------------------------------------------------------------------------------
                    |     Coef.   Std. Err.      z    P>|z|   [95% Conf. Interval]
--------------------+------------------------------------------------------------
ARCH_toyota         |
           arch L1. |  .0608188   .0086675    7.02   0.000    .0438308    .0778067
          garch L1. |  .9219957   .0111066   83.01   0.000    .9002271    .9437643
              _cons |  4.49e-06   1.14e-06    3.95   0.000    2.27e-06    6.72e-06
--------------------+------------------------------------------------------------
ARCH_nissan         |
           arch L1. |  .0876161     .01302    6.73   0.000    .0620974    .1131349
          garch L1. |  .8950964   .0152908   58.54   0.000     .865127    .9250658
              _cons |  7.69e-06   1.99e-06    3.86   0.000    3.79e-06    .0000116
--------------------+------------------------------------------------------------
honda               |
         nissan L1. |   .019978   .0134488    1.49   0.137   -.0063811    .0463371
--------------------+------------------------------------------------------------
ARCH_honda          |
           arch L1. |  .0488799   .0073767    6.63   0.000    .0344218     .063338
          garch L1. |  .9330047   .0103944   89.76   0.000     .912632    .9533774
              _cons |  5.42e-06   1.36e-06    3.98   0.000    2.75e-06    8.08e-06
--------------------+------------------------------------------------------------
corr(toyota,nissan) |  .6668433   .0163209   40.86   0.000    .6348548    .6988317
 corr(toyota,honda) |  .7258101   .0137072   52.95   0.000    .6989446    .7526757
 corr(nissan,honda) |  .6313515   .0175454   35.98   0.000    .5969631    .6657399
--------------------+------------------------------------------------------------
Adjustment          |
            lambda1 |  .0324493   .0074013    4.38   0.000    .0179429    .0469556
            lambda2 |  .8574681   .0476274   18.00   0.000    .7641202     .950816
---------------------------------------------------------------------------------

It turns out that the coefficient on L1.nissan in the honda equation is now statistically insignificant.
We could further improve the model by removing L1.nissan from the model.
There is no mean equation for Toyota or Nissan. In [TS] mgarch dcc postestimation, we discuss
prediction from models without covariates.

Example 3: Model with constraints
Here we fit a bivariate DCC MGARCH model for the Toyota and Nissan shares. We believe that the shares of these car manufacturers follow the same process, so we impose the constraints that the ARCH coefficients are the same for the two companies and that the GARCH coefficients are also the same.
. constraint 1 _b[ARCH_toyota:L.arch] = _b[ARCH_nissan:L.arch]
. constraint 2 _b[ARCH_toyota:L.garch] = _b[ARCH_nissan:L.garch]
. mgarch dcc (toyota nissan = , noconstant), arch(1) garch(1) constraints(1 2)
Calculating starting values....
Optimizing log likelihood
(setting technique to bhhh)
Iteration 0:   log likelihood = 10307.609
Iteration 1:   log likelihood = 10656.153
Iteration 2:   log likelihood = 10862.137
Iteration 3:   log likelihood = 10987.457
Iteration 4:   log likelihood = 11062.347
Iteration 5:   log likelihood = 11135.207
Iteration 6:   log likelihood = 11245.619
Iteration 7:   log likelihood =  11253.56
Iteration 8:   log likelihood =     11294
Iteration 9:   log likelihood = 11296.364
(switching technique to nr)
Iteration 10:  log likelihood =  11296.76
Iteration 11:  log likelihood = 11297.087
Iteration 12:  log likelihood = 11297.091
Iteration 13:  log likelihood = 11297.091
Refining estimates
Iteration 0:   log likelihood = 11297.091
Iteration 1:   log likelihood = 11297.091

Dynamic conditional correlation MGARCH model
Sample: 1 - 2015                                Number of obs =  2,015
Distribution: Gaussian                          Wald chi2(.)  =       .
Log likelihood = 11297.09                       Prob > chi2   =       .
 ( 1)  [ARCH_toyota]L.arch - [ARCH_nissan]L.arch = 0
 ( 2)  [ARCH_toyota]L.garch - [ARCH_nissan]L.garch = 0

---------------------------------------------------------------------------------
                    |     Coef.   Std. Err.      z    P>|z|   [95% Conf. Interval]
--------------------+------------------------------------------------------------
ARCH_toyota         |
           arch L1. |   .080889   .0103227    7.84   0.000     .060657    .1011211
          garch L1. |  .9060711   .0119107   76.07   0.000    .8827267    .9294156
              _cons |  4.21e-06   1.10e-06    3.83   0.000    2.05e-06    6.36e-06
--------------------+------------------------------------------------------------
ARCH_nissan         |
           arch L1. |   .080889   .0103227    7.84   0.000     .060657    .1011211
          garch L1. |  .9060711   .0119107   76.07   0.000    .8827267    .9294156
              _cons |  5.92e-06   1.47e-06    4.03   0.000    3.04e-06    8.80e-06
--------------------+------------------------------------------------------------
corr(toyota,nissan) |  .6646283   .0187793   35.39   0.000    .6278215    .7014351
--------------------+------------------------------------------------------------
Adjustment          |
            lambda1 |  .0446559   .0123017    3.63   0.000     .020545    .0687668
            lambda2 |  .8686054   .0510884   17.00   0.000    .7684739     .968737
---------------------------------------------------------------------------------

We could test our constraints by fitting the unconstrained model and performing a likelihood-ratio
test. The resu