Monday, June 6, 2011

CHAPTER 8: MULTIPLE REGRESSION ANALYSIS: THE PROBLEM OF INFERENCE

✓  Hypothesis Testing in Multiple Regression: General Comments
1.      Testing hypotheses about an individual partial regression coefficient.
2.      Testing the overall significance of the estimated multiple regression model, that is, finding out whether all the partial slope coefficients are simultaneously equal to zero.
3.      Testing that two or more coefficients are equal to one another.
4.      Testing that the partial regression coefficients satisfy certain restrictions.
5.      Testing the stability of the estimated regression model over time or across different cross-sectional units.
6.      Testing the functional form of regression models.
✓  Testing the Overall Significance of a Multiple Regression: the F-test
Decision Rule: Given the k-variable regression model:

Yi = β1 + β2X2i + β3X3i + … + βkXki + ui
To test the hypothesis
H0: β2 = β3 = … = βk = 0

H1: Not all slope coefficients are simultaneously zero
compute

F = (ESS/df) / (RSS/df) = [ESS/(k – 1)] / [RSS/(n – k)]
If F > Fα(k – 1, n – k), reject H0; otherwise do not reject it, where Fα(k – 1, n – k) is the critical F value at the α level of significance with (k – 1) numerator df and (n – k) denominator df. Alternatively, if the p value of F is sufficiently low, one can reject H0.
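As an illustration, the sketch below computes this F test with Python's statsmodels on simulated data (the data, variable names, and coefficient values are invented for the example); the hand-computed ratio ESS/(k – 1) over RSS/(n – k) matches the F value the library reports.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data purely for illustration.
rng = np.random.default_rng(0)
n = 50
x2, x3 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.8 * x2 + 0.5 * x3 + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x2, x3]))  # intercept plus 2 slopes, so k = 3
res = sm.OLS(y, X).fit()

# statsmodels reports the overall F statistic and its p value directly.
print(res.fvalue, res.f_pvalue)

# The same F computed by hand from the sums of squares:
k = X.shape[1]
F = (res.ess / (k - 1)) / (res.ssr / (n - k))
print(F)  # matches res.fvalue
```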
✓  Testing the Overall Significance of a Multiple Regression in terms of R2
Decision Rule: An alternative but equivalent way to test the overall significance of a regression is in terms of R2.
Given the k-variable regression model
Yi = β1 + β2X2i + β3X3i + … + βkXki + ui
To test the hypothesis
H0: β2 = β3 = … = βk = 0
versus
H1: Not all slope coefficients are simultaneously zero
compute

F = [R2/(k – 1)] / [(1 – R2)/(n – k)]
If F > Fα(k – 1, n – k), reject H0; otherwise you may accept H0, where Fα(k – 1, n – k) is the critical F value at the α level of significance with (k – 1) numerator df and (n – k) denominator df. Alternatively, if the p value of the F obtained is sufficiently low, reject H0.
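Because only R2, n, and k enter the formula, this version of the test is easy to compute by hand; the helper below (the function name and the example numbers are made up) evaluates it with scipy's F distribution.

```python
from scipy.stats import f

def overall_f_from_r2(r2: float, n: int, k: int):
    """Overall F statistic and p value computed from R^2 alone."""
    F = (r2 / (k - 1)) / ((1 - r2) / (n - k))
    p = f.sf(F, k - 1, n - k)  # upper-tail p value
    return F, p

# Example: R^2 = 0.70 from a regression with n = 50 and k = 3 parameters.
print(overall_f_from_r2(0.70, 50, 3))  # a large F and a tiny p value: reject H0
```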



✓  The “Incremental” or “Marginal” Contribution of an Explanatory Variable

F = (Q2/df) / (Q4/df)
  = [(ESSnew – ESSold)/(number of new regressors)] / [RSSnew/(n – number of parameters in the new model)]

where Q2 = ESSnew – ESSold and Q4 = RSSnew.
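A minimal sketch of this incremental F test, using statsmodels' compare_f_test on a pair of nested models fitted to simulated data (all names and numbers are illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60
x2, x3 = rng.normal(size=n), rng.normal(size=n)
y = 2.0 + 0.9 * x2 + 0.4 * x3 + rng.normal(size=n)

old = sm.OLS(y, sm.add_constant(x2)).fit()                         # old model
new = sm.OLS(y, sm.add_constant(np.column_stack([x2, x3]))).fit()  # adds x3

# compare_f_test performs the nested-model F test above: the numerator is the
# change in explained variation per new regressor, the denominator RSSnew/df.
F, p, df_diff = new.compare_f_test(old)
print(F, p, df_diff)  # a low p value justifies keeping the new regressor
```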

·         When to Add a New Variable
It can be shown that the adjusted R2 will increase if the t value of the coefficient of the newly added variable is larger than 1 in absolute value; equivalently, the adjusted R2 will increase with the addition of an extra explanatory variable only if the F (= t2) value of that variable exceeds 1.
·         When to Add a Group of Variables
If adding (dropping) a group of variables to the model gives an F value greater (less) than 1, the adjusted R2 will increase (decrease); a numerical check of the single-variable rule follows below.
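A quick check of the rule on simulated data (all names and numbers are illustrative): the |t| > 1 condition and the rise in adjusted R2 always agree, since the relationship is exact.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 40
x2, x3 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.7 * x2 + 0.1 * x3 + rng.normal(size=n)  # x3 is only weakly relevant

small = sm.OLS(y, sm.add_constant(x2)).fit()
big = sm.OLS(y, sm.add_constant(np.column_stack([x2, x3]))).fit()

t_new = big.tvalues[-1]  # t value of the newly added variable x3
# The two booleans below coincide: adjusted R^2 rises exactly when |t| > 1.
print(abs(t_new) > 1, big.rsquared_adj > small.rsquared_adj)
```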
✓  Testing the Equality of Two Regression Coefficients
To test H0: β2 = β3, the usual t test applies: t = (b2 – b3)/se(b2 – b3), where se(b2 – b3) = √[var(b2) + var(b3) – 2 cov(b2, b3)], with (n – k) df; a sketch follows below.
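A minimal sketch of this t test, assuming the hypothesis H0: β2 = β3 in a two-regressor model (the data and the names x2, x3 are hypothetical); statsmodels' t_test handles the covariance term automatically.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 50
df = pd.DataFrame({"x2": rng.normal(size=n), "x3": rng.normal(size=n)})
df["y"] = 1.0 + 0.6 * df["x2"] + 0.6 * df["x3"] + rng.normal(size=n)

res = smf.ols("y ~ x2 + x3", data=df).fit()
# t = (b2 - b3)/se(b2 - b3), with se built from var(b2), var(b3), and cov(b2, b3).
print(res.t_test("x2 = x3"))  # an insignificant t: do not reject equality
```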
✓  Testing for Structural or Parameter Stability of Regression Models: The Chow Test
Assumptions of the Chow Test:
1.      u1t ~ N(0, σ²) and u2t ~ N(0, σ²), that is, both error terms are normally distributed with the same (homoscedastic) variance σ².
2.      The two error terms u1t and u2t are independently distributed.
The mechanics of the Chow test are as follows:
1.      Estimate the pooled regression using all (n1 + n2) observations, which is appropriate if there is no parameter instability, and obtain its residual sum of squares, RSS3 (the restricted RSS, RSSR), with df = (n1 + n2 – k), where k is the number of parameters estimated, 2 in the present case.
2.      Estimate the regression over the first subperiod (n1 observations) and obtain its residual sum of squares, RSS1, with df = (n1 – k).
3.      Estimate the regression over the second subperiod (n2 observations) and obtain its residual sum of squares, RSS2, with df = (n2 – k).
4.      Since the two sets of samples are deemed independent, we can add RSS1 and RSS2 to obtain what may be called the unrestricted residual sum of squares (RSSUR), that is, obtain:

RSSUR = RSS1 + RSS2   with df = (n1 + n2 – 2k)
5.      If there is no structural change (i.e., parameters are stable), then RSSR and RSSUR should not be statistically different, and the ratio F = [(RSSR – RSSUR)/k] / [RSSUR/(n1 + n2 – 2k)] follows the F distribution with (k, n1 + n2 – 2k) df.
6.      Therefore, we do not reject the null hypothesis of parameter stability if the computed F value does not exceed the critical F value from the F table at the chosen level of significance.
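The six steps are straightforward to reproduce; the sketch below runs them on simulated data with a known break point (all numbers are illustrative, and scipy supplies the F distribution).

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import f

rng = np.random.default_rng(4)
n1, n2, k = 30, 30, 2                      # two subperiods, k = 2 parameters
x = rng.normal(size=n1 + n2)
first = np.arange(n1 + n2) < n1
y = np.where(first, 1.0 + 0.5 * x, 2.0 + 1.2 * x) + rng.normal(size=n1 + n2)

def rss(yy, xx):
    """Residual sum of squares from an OLS fit with an intercept."""
    return sm.OLS(yy, sm.add_constant(xx)).fit().ssr

RSS_R = rss(y, x)            # step 1: pooled (restricted) regression
RSS1 = rss(y[:n1], x[:n1])   # step 2: first subperiod
RSS2 = rss(y[n1:], x[n1:])   # step 3: second subperiod
RSS_UR = RSS1 + RSS2         # step 4: unrestricted RSS

F = ((RSS_R - RSS_UR) / k) / (RSS_UR / (n1 + n2 - 2 * k))  # step 5
p = f.sf(F, k, n1 + n2 - 2 * k)
print(F, p)  # step 6: a small p value rejects parameter stability
```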
There are some caveats about the Chow test that must be kept in mind:
1.      The assumptions underlying the test must be fulfilled.
2.      The Chow test will tell us only if the two regressions are different, without telling us whether the difference is on account of the intercepts, or the slopes, or both.
3.      The Chow test assumes that we know the point(s) of structural break.

✓  Testing the Functional Form of Regression: Choosing Between Linear and Log-Linear Regression Models
The MacKinnon–White–Davidson (MWD) test involves the following steps:
Step 1: Estimate the linear model and obtain the estimated Y values; call them Yf.
Step 2: Estimate the log-linear model and obtain the estimated ln Y values; call them lnf.
Step 3: Obtain z1 = (ln Yf – lnf).
Step 4: Regress Y on the X’s and z1 obtained in Step 3. Reject H0 (the linear model) if the coefficient of z1 is statistically significant by the usual t test.
Step 5: Obtain z2 = (antilog of lnf – Yf).
Step 6: Regress ln Y on the logs of the X’s and z2. Reject H1 (the log-linear model) if the coefficient of z2 is statistically significant.
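A sketch of the six MWD steps with a single regressor (everything here is illustrative; the data are generated from a log-linear model, so one would expect z1 to come out significant and z2 not).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 80
x = rng.uniform(1.0, 10.0, size=n)
y = np.exp(0.5 + 0.8 * np.log(x) + rng.normal(scale=0.2, size=n))

X = sm.add_constant(x)            # regressors for the linear model
lnX = sm.add_constant(np.log(x))  # regressors for the log-linear model

y_f = sm.OLS(y, X).fit().fittedvalues             # Step 1: fitted Y values
ln_f = sm.OLS(np.log(y), lnX).fit().fittedvalues  # Step 2: fitted ln Y values

z1 = np.log(y_f) - ln_f                           # Step 3
res1 = sm.OLS(y, np.column_stack([X, z1])).fit()  # Step 4: Y on X's and z1
print(res1.tvalues[-1])  # significant z1 rejects the linear model

z2 = np.exp(ln_f) - y_f                                     # Step 5
res2 = sm.OLS(np.log(y), np.column_stack([lnX, z2])).fit()  # Step 6
print(res2.tvalues[-1])  # significant z2 rejects the log-linear model
```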



