
Autocorrelation

SDSF, DAVV
MBA (Business Analytics)

SUBMITTED TO: Dr. Dipshikha Agarwal
SUBMITTED BY: Manokamna Kochar
What is Autocorrelation
Autocorrelation is a mathematical representation of the degree of similarity
between a given time series and a lagged version of itself over successive time
intervals. It is the same as calculating the correlation between two different
time series, except that the same time series is actually used twice: once in its
original form and once lagged one or more time periods.

Autocorrelation can also be referred to as lagged correlation or serial
correlation, as it measures the relationship between a variable's current value
and its past values.
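As a minimal sketch in Python (the sales series below is made up purely for illustration), the lag-k autocorrelation can be computed by correlating the series with a shifted copy of itself:

import numpy as np

def autocorr(x, lag=1):
    """Sample autocorrelation: correlation of a series with itself shifted by `lag`."""
    x = np.asarray(x, dtype=float)
    # Correlate the original series with its lagged version
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

# Hypothetical monthly sales with a clear carry-over pattern
sales = [10, 12, 13, 12, 14, 16, 15, 17, 19, 18, 20, 22]
print(autocorr(sales, lag=1))   # close to 1: strong positive lag-1 autocorrelation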
NATURE OF AUTOCORRELATION
● Observations of the error term are correlated with each other
● Cov(εi, εj) ≠ 0, i ≠ j
● Violation of one of the classical assumptions
● Can exist in any data in which the order of the observations has some
meaning - most frequently in time-series data
● Particular form of autocorrelation - the AR(p) process (see the
simulation sketch after this list):
● εt = ρ1εt−1 + ρ2εt−2 + . . . + ρpεt−p + ut
● ut is a classical (not autocorrelated) error term
● ρk are autocorrelation coefficients (between -1 and 1)
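A short Python sketch of the AR(1) special case (ρ = 0.8 and n = 500 are illustrative choices): each period's error is a fraction ρ of the previous error plus a fresh classical disturbance ut.

import numpy as np

rng = np.random.default_rng(42)

def simulate_ar1(rho, n, sigma=1.0):
    """Simulate an AR(1) error process: eps_t = rho * eps_{t-1} + u_t."""
    u = rng.normal(scale=sigma, size=n)   # classical (not autocorrelated) error term
    eps = np.zeros(n)
    for t in range(1, n):
        eps[t] = rho * eps[t - 1] + u[t]
    return eps

eps = simulate_ar1(rho=0.8, n=500)
# The lag-1 sample autocorrelation should be close to rho = 0.8
print(np.corrcoef(eps[:-1], eps[1:])[0, 1])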
Sources of autocorrelation
1. Carry-over of an effect, at least in part, is an important source of
autocorrelation.
2. Another source of autocorrelation is the omission of relevant variables
from the model.
3. If log or exponential terms are present in the model, so that the
linearity of the model is questionable, this also gives rise to
autocorrelation in the data.
4. The presence of measurement errors in the dependent variable may also
introduce autocorrelation into the data.
The following structures are popular for modelling
autocorrelation:

1. Autoregressive (AR) process.

2. Moving average (MA) process.

3. Joint autoregressive moving average (ARMA) process.
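For illustration, draws from all three structures can be generated with statsmodels; the coefficients below (ρ = 0.7, θ = 0.4) are arbitrary examples, and note that arma_generate_sample expects lag polynomials with the zero-lag term first and AR coefficients negated.

import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample

np.random.seed(0)

# Lag polynomials: (1 - 0.7L) for the AR side, (1 + 0.4L) for the MA side
ar = np.array([1, -0.7])   # AR(1) with rho = 0.7
ma = np.array([1, 0.4])    # MA(1) with theta = 0.4

ar_series   = arma_generate_sample(ar=ar, ma=[1], nsample=300)   # pure AR process
ma_series   = arma_generate_sample(ar=[1], ma=ma, nsample=300)   # pure MA process
arma_series = arma_generate_sample(ar=ar, ma=ma, nsample=300)    # joint ARMA process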


Application of OLS fails in case of autocorrelation
in the data and leads to serious consequences:

1. An overly optimistic view from R².

2. Narrow confidence intervals.

3. The usual t-ratio and F-ratio tests provide misleading results.

4. Predictions may have large variances.


Tests for autocorrelation:
Durbin-Watson test

Used to determine whether there is first-order serial correlation by examining
the residuals of the equation.

Assumptions (criteria for using this test)

The regression includes the intercept

If autocorrelation is present, it is of AR(1) type: εt = ρεt−1 + ut

The regression does not include a lagged dependent variable.


DURBIN-WATSON TEST FOR AUTOCORRELATION
1. Estimate the equation by OLS, save the residuals

2. Calculate the d statistic: d = Σt=2..T (et − et−1)² / Σt=1..T et²,
which is approximately 2(1 − ρ̂)

3. Determine the sample size T and the number of explanatory variables
(excluding the intercept!) k′

4. Find the upper critical value dU and the lower critical value dL for T
and k′ in statistical tables

5. Evaluate the test as one-sided or two-sided.
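A minimal end-to-end sketch of these steps in Python, using statsmodels on simulated data (the regression of y on a single explanatory variable x below is hypothetical):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Hypothetical data: y regressed on one explanatory variable x
rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 2.0 + 0.5 * x + rng.normal(size=100)

X = sm.add_constant(x)        # the regression includes the intercept
model = sm.OLS(y, X).fit()    # step 1: estimate by OLS
resid = model.resid           # ... and save the residuals

d = durbin_watson(resid)      # step 2: the d statistic, d ≈ 2(1 − ρ̂)
print(d)                      # values near 2 suggest no first-order autocorrelation
# Steps 3-4: here T = 100 and k′ = 1; look up dL and dU in statistical tables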


ONE-SIDED DURBIN-WATSON TEST
For cases when we consider only positive serial correlation as an option
Hypothesis:

H0 : ρ ≤ 0 (no positive serial correlation)

HA : ρ > 0 (positive serial correlation)

Decision rule:

if d < dL reject H0

if d > dU do not reject H0

if dL ≤ d ≤ dU inconclusive
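This decision rule is mechanical, so a small helper makes it explicit (the critical values dL = 1.44 and dU = 1.54 below are illustrative; real values must be taken from Durbin-Watson tables for the actual T and k′):

def dw_one_sided(d, dL, dU):
    """Decision rule for the one-sided Durbin-Watson test (H0: rho <= 0)."""
    if d < dL:
        return "reject H0: evidence of positive serial correlation"
    if d > dU:
        return "do not reject H0"
    return "inconclusive"          # dL <= d <= dU

print(dw_one_sided(d=1.2, dL=1.44, dU=1.54))   # d < dL, so reject H0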
TWO-SIDED DURBIN-WATSON TEST
For cases when we consider both signs of serial correlation

Hypothesis: H0 : ρ = 0 (no serial correlation)

HA : ρ ≠ 0 (serial correlation)

Decision rule:

if d < dL reject H0

if d > 4 − dL reject H0

if dU < d < 4 − dU do not reject H0

otherwise inconclusive
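The two-sided rule can be sketched the same way (again with illustrative critical values):

def dw_two_sided(d, dL, dU):
    """Decision rule for the two-sided Durbin-Watson test (H0: rho = 0)."""
    if d < dL or d > 4 - dL:
        return "reject H0: evidence of serial correlation"
    if dU < d < 4 - dU:
        return "do not reject H0"
    return "inconclusive"

print(dw_two_sided(d=2.9, dL=1.44, dU=1.54))   # 2.9 > 4 − 1.44, so reject H0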
SUMMARY
Autocorrelation does not lead to inconsistent estimates, but it invalidates
the usual inference: the estimated coefficients are still reliable, but their
reported standard errors are not.

It can be diagnosed using:

1. The Durbin-Watson test
2. Analysis of the residuals (e.g., their autocorrelation function, as
sketched below)
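For the residual-analysis route, a common sketch is to plot the autocorrelation function (ACF) of the OLS residuals; the residual series below is simulated with built-in AR(1) structure just to make the pattern visible.

import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf

# Illustrative residual series with built-in AR(1) structure (rho = 0.6)
rng = np.random.default_rng(7)
u = rng.normal(size=200)
resid = np.zeros(200)
for t in range(1, 200):
    resid[t] = 0.6 * resid[t - 1] + u[t]

plot_acf(resid, lags=20)   # bars outside the confidence bands point to autocorrelation
plt.show()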
THANK YOU
