Tuesday, June 14, 2011

CHAPTER 12: AUTOCORRELATION: WHAT HAPPENS IF THE ERROR TERMS ARE CORRELATED?

✓  The Nature of the Problem
Autocorrelation – correlation between members of a series of observations ordered in time.
Specification Bias: Excluded Variables Case. In empirical analysis, the researcher often starts with a plausible regression model that may not be the most “perfect” one. After the regression analysis, the researcher does the postmortem to find out whether the results accord with a priori expectations.
Cobweb Phenomenon. The supply of many agricultural commodities reflects the so-called cobweb phenomenon, where supply reacts to price with a lag of one time period because supply decisions take time to implement.
Lags. A regression such as

Consumption_t = β1 + β2 Income_t + β3 Consumption_{t−1} + u_t
is known as autoregression because one of the explanatory variables is the lagged value of the dependent variable.
“Manipulation” of Data. Published data are often manipulated before release – for example, quarterly figures obtained by averaging monthly observations are smoothed, and this smoothing itself can induce autocorrelation. Another source of manipulation is interpolation or extrapolation of data.
✓  OLS Estimation in the Presence of Autocorrelation
ρ (rho) is known as the coefficient of autocovariance.
The scheme

u_t = ρu_{t−1} + ε_t        −1 < ρ < 1

is known as the Markov first-order autoregressive scheme, or simply a first-order autoregressive scheme, usually denoted AR(1). The name autoregressive is appropriate because the scheme can be interpreted as the regression of u_t on itself lagged one period.
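The AR(1) scheme above can be simulated in a few lines. This is an illustrative sketch: the value ρ = 0.7, the sample size, and the seed are arbitrary choices, not values from the text.

```python
import numpy as np

# A minimal sketch of the AR(1) error scheme u_t = rho*u_{t-1} + eps_t.
# rho, n, and the seed are illustrative choices.
rng = np.random.default_rng(42)
rho = 0.7
n = 500

eps = rng.standard_normal(n)          # white-noise innovations eps_t
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]    # Markov first-order autoregressive scheme

# "Autoregressive": regress u_t on its own one-period lag to recover rho.
rho_hat = np.sum(u[1:] * u[:-1]) / np.sum(u[:-1] ** 2)
print(round(rho_hat, 2))
```

With a sample this large, the OLS regression of u_t on u_{t−1} recovers an estimate close to the true ρ, which is exactly the "regression of u_t on itself lagged one period" that gives the scheme its name.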
✓  The BLUE Estimator in the Presence of Autocorrelation

β̂₂^GLS = Σ_{t=2}^{n} (X_t − ρX_{t−1})(Y_t − ρY_{t−1}) / Σ_{t=2}^{n} (X_t − ρX_{t−1})² + C

var(β̂₂^GLS) = σ² / Σ_{t=2}^{n} (X_t − ρX_{t−1})² + D

where C and D are correction factors involving the first observation, which may be disregarded in practice.
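The GLS estimator amounts to running OLS on quasi-differenced (generalized-difference) data. The sketch below assumes ρ is known and simply drops the first observation (ignoring the C and D correction terms); the true coefficients, ρ, and the simulated data are illustrative assumptions.

```python
import numpy as np

# Sketch of the generalized-difference transformation behind the GLS
# estimator, assuming rho is known. Data and coefficients are illustrative.
rng = np.random.default_rng(0)
rho, n = 0.6, 400

x = rng.standard_normal(n)
eps = rng.standard_normal(n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]
y = 1.0 + 2.0 * x + u                      # true beta1 = 1, beta2 = 2

# Quasi-difference both sides: the transformed error u_t - rho*u_{t-1} = eps_t
# satisfies the classical assumptions, so OLS on the starred data is BLUE.
y_star = y[1:] - rho * y[:-1]
x_star = x[1:] - rho * x[:-1]

xs = x_star - x_star.mean()
beta2_gls = np.sum(xs * (y_star - y_star.mean())) / np.sum(xs ** 2)
print(round(beta2_gls, 2))
```

The slope estimate from the transformed regression is close to the true β₂ = 2; in practice ρ is unknown and must itself be estimated (e.g., from the residuals or the d statistic).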
✓  Consequences of Using OLS in the Presence of Autocorrelation
✓  OLS Estimation Allowing for Autocorrelation
To establish confidence intervals and to test hypotheses, one should use GLS and not OLS even though the estimators derived from the latter are unbiased and consistent.
✓  OLS Estimation Disregarding Autocorrelation
1.      The residual variance σ̂² = Σû_t²/(n − 2) is likely to underestimate the true σ².
2.      As a result, we are likely to overestimate R².
3.      Even if σ² is not underestimated, var(β̂₂) may underestimate var(β̂₂)_AR(1), its variance under first-order autocorrelation, even though the latter is inefficient compared to var(β̂₂)_GLS.
4.      Therefore, the usual t and F tests of significance are no longer valid, and if applied, are likely to give seriously misleading conclusions about the statistical significance of the estimated regression coefficients.
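A small Monte Carlo experiment makes the last point concrete. This is an illustrative simulation, not from the text: with a positively autocorrelated regressor and AR(1) errors, the usual t test on a true zero slope rejects far more often than its nominal 5% level.

```python
import numpy as np

# Illustrative Monte Carlo: with autocorrelated X and AR(1) errors, the
# usual OLS t test over-rejects a true null. All settings are assumptions.
rng = np.random.default_rng(9)
n, reps, rho = 100, 500, 0.8
rejections = 0
for _ in range(reps):
    eps_x = rng.standard_normal(n)
    eps_u = rng.standard_normal(n)
    x = np.zeros(n)
    u = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + eps_x[t]
        u[t] = rho * u[t - 1] + eps_u[t]
    y = 1.0 + 0.0 * x + u              # true slope is zero
    xc = x - x.mean()
    b2 = np.sum(xc * (y - y.mean())) / np.sum(xc ** 2)
    resid = (y - y.mean()) - b2 * xc
    se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum(xc ** 2))
    if abs(b2 / se) > 1.96:            # usual (invalid) t test at the 5% level
        rejections += 1
print(rejections / reps)               # well above the nominal 0.05
```

The rejection rate lands far above 0.05, which is the "seriously misleading conclusions" the text warns about.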
✓  Detecting Autocorrelation
I. Graphical Method
II. The Runs Test

Mean: E(R) = 2N₁N₂/N + 1

Variance: σ_R² = 2N₁N₂(2N₁N₂ − N) / [N²(N − 1)]
Decision Rule: Construct the 95% confidence interval E(R) ± 1.96σ_R. Do not reject the null hypothesis of randomness if R, the number of runs, lies in this interval; reject the null hypothesis if the estimated R lies outside these limits.
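The runs test can be implemented directly from the formulas above. The small residual series below is made up for illustration; in practice the input is the residuals from your OLS regression.

```python
import numpy as np

# A sketch of the runs test on residual signs; the residuals are illustrative.
def runs_test(resid):
    signs = resid >= 0
    runs = 1 + int(np.sum(signs[1:] != signs[:-1]))   # R: number of sign runs
    n1 = int(np.sum(signs))            # N1: count of nonnegative residuals
    n2 = len(signs) - n1               # N2: count of negative residuals
    n = n1 + n2
    mean_r = 2.0 * n1 * n2 / n + 1.0
    var_r = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n ** 2 * (n - 1))
    z = (runs - mean_r) / np.sqrt(var_r)
    return runs, mean_r, z

resid = np.array([2.1, 0.5, -1.2, -0.3, 1.0, 0.8, -0.7, -1.5])
runs, mean_r, z = runs_test(resid)
# Decision rule: do not reject randomness at the 5% level if |z| <= 1.96.
print(runs, round(z, 2))
```

For this toy series R = 4 with E(R) = 5, giving |z| well below 1.96, so randomness would not be rejected.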


III. Durbin-Watson d Test
Durbin-Watson d statistic

d = Σ_{t=2}^{n} (û_t − û_{t−1})² / Σ_{t=1}^{n} û_t²
Assumptions underlying d statistic:
1.      The regression model includes the intercept term.
2.      The explanatory variables, the X’s, are nonstochastic, or fixed in repeated sampling.
3.      The disturbances u_t are generated by the first-order autoregressive scheme: u_t = ρu_{t−1} + ε_t.
4.      The error term u_t is assumed to be normally distributed.
5.      The regression model does not include the lagged value(s) of the dependent variable as one of the explanatory variables.
6.      There are no missing observations in the data.
As a rule of thumb, if d is found to be 2 in an application, one may assume that there is no first-order autocorrelation, either positive or negative.
Mechanics of the Durbin-Watson Test
1)      Run the OLS regression and obtain the residuals.
2)      Compute d.
3)      For the given sample size and given number of explanatory variables, find the critical dL and dU values.
4)      Now follow the decision rules of the Durbin-Watson d test.
IV. A General Test of Autocorrelation: The Breusch-Godfrey (BG) Test
Steps:
1.      Estimate Y_t = β1 + β2X_t + u_t by OLS and obtain the residuals, û_t.
2.      Regress û_t on the original X_t and û_{t−1}, û_{t−2}, . . . , û_{t−p}, where the latter are the lagged values of the estimated residuals from step 1.
3.      If the sample size is large, Breusch and Godfrey have shown that

(n − p)R² ~ χ²_p

that is, the statistic asymptotically follows the chi-square distribution with p degrees of freedom.
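The three steps can be carried out by hand with p = 1 lag. The data below are simulated (ρ = 0.6 is an illustrative assumption), and 3.841 is the 5% chi-square critical value with 1 degree of freedom.

```python
import numpy as np

# Sketch of the Breusch-Godfrey test with p = 1 lag on simulated data.
rng = np.random.default_rng(3)
n, rho = 300, 0.6
x = rng.standard_normal(n)
eps = rng.standard_normal(n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]
y = 1.0 + 2.0 * x + u

# Step 1: estimate Y_t = b1 + b2*X_t + u_t by OLS, keep the residuals.
X = np.column_stack([np.ones(n), x])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Step 2: auxiliary regression of resid_t on X_t and resid_{t-1}.
Z = np.column_stack([np.ones(n - 1), x[1:], resid[:-1]])
e = resid[1:]
fit = Z @ np.linalg.lstsq(Z, e, rcond=None)[0]
r2 = 1.0 - np.sum((e - fit) ** 2) / np.sum((e - e.mean()) ** 2)

# Step 3: (n - p)*R^2 is asymptotically chi-square with p = 1 df.
bg = (n - 1) * r2
print(round(bg, 1), bg > 3.841)   # reject "no autocorrelation" if bg > 3.841
```

With errors this strongly autocorrelated the statistic far exceeds 3.841, so the null of no autocorrelation is rejected.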
✓  What To Do When You Find Autocorrelation: Remedial Measures
1.      Try to find out if the autocorrelation is pure autocorrelation and not the result of mis-specification of the model.
2.      If it is pure autocorrelation, one can use appropriate transformation of the original model so that in the transformed model we do not have the problem of autocorrelation.
3.      In large samples, we can use the Newey-West method to obtain standard errors of OLS estimators that are corrected for autocorrelation.
4.      In some situations we can continue to use the OLS method.
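Remedy 3 can be sketched with a hand-rolled Newey-West (HAC) covariance estimator using Bartlett kernel weights. The lag length L = 5 and the simulated data are illustrative assumptions; in applied work a library routine would normally be used.

```python
import numpy as np

# Sketch of Newey-West (HAC) standard errors with a Bartlett kernel.
# Lag length L and the simulated data are illustrative choices.
rng = np.random.default_rng(5)
n, rho, L = 400, 0.6, 5

eps_x = rng.standard_normal(n)
eps_u = rng.standard_normal(n)
x = np.zeros(n)
u = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + eps_x[t]   # autocorrelated regressor
    u[t] = rho * u[t - 1] + eps_u[t]   # AR(1) errors
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Sandwich "meat": score autocovariances with Bartlett weights 1 - l/(L+1).
g = X * resid[:, None]
S = g.T @ g
for l in range(1, L + 1):
    w = 1.0 - l / (L + 1.0)
    gamma = g[l:].T @ g[:-l]
    S += w * (gamma + gamma.T)

XtX_inv = np.linalg.inv(X.T @ X)
cov_hac = XtX_inv @ S @ XtX_inv          # sandwich covariance estimator
se_ols = np.sqrt(np.sum(resid ** 2) / (n - 2) * XtX_inv[1, 1])
se_hac = np.sqrt(cov_hac[1, 1])
print(round(se_ols, 3), round(se_hac, 3))
```

Note that the OLS point estimates are left untouched; only the standard errors are corrected, and here the HAC standard error comes out larger than the uncorrected OLS one, as expected with positive autocorrelation.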




