Monday, May 30, 2011

CHAPTER 4: CLASSICAL NORMAL LINEAR REGRESSION MODEL (CNLRM)

The Probability Distribution of Disturbances ui

β̂2 = ∑kiYi,  where ki = xi/∑xi²

β̂2 = ∑ki(β1 + β2Xi + ui)

Since β̂2 is a linear function of the disturbances ui, its distribution depends on the distribution assumed for ui.
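The identity above can be checked numerically. The sketch below (using simulated data and illustrative parameter values, not values from the text) confirms that the usual OLS slope formula and the linear combination ∑kiYi with ki = xi/∑xi² give the same number:

```python
import random

random.seed(42)

# Simulated data from Y = beta1 + beta2*X + u (illustrative values)
beta1, beta2 = 2.0, 0.5
X = [float(i) for i in range(1, 21)]
Y = [beta1 + beta2 * xi + random.gauss(0, 1) for xi in X]

n = len(X)
xbar = sum(X) / n
ybar = sum(Y) / n
x = [xi - xbar for xi in X]      # deviations from the mean
Sxx = sum(xi ** 2 for xi in x)

# Usual OLS slope: sum(x_i * y_i) / sum(x_i^2) in deviation form
b2_ols = sum(xi * (yi - ybar) for xi, yi in zip(x, Y)) / Sxx

# The same estimator written as a linear function of the Y_i: sum(k_i * Y_i)
k = [xi / Sxx for xi in x]
b2_linear = sum(ki * yi for ki, yi in zip(k, Y))

assert abs(b2_ols - b2_linear) < 1e-10
```

Because the two expressions agree, β̂2 is a weighted sum of the Yi (and hence of the ui), which is what makes the normality assumption so convenient.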
Adding the normality assumption for ui to the assumptions of the classical linear regression model (CLRM), we obtain what is known as the classical normal linear regression model (CNLRM).
The Normality Assumption for ui
Mean: E(ui) = 0

Variance: E[ui − E(ui)]² = E(ui²) = σ²

cov(ui, uj) = E{[ui − E(ui)][uj − E(uj)]} = E(uiuj) = 0,  i ≠ j

ui ~ N(0, σ²)
For two normally distributed variables, zero covariance (or zero correlation) implies independence of the two variables.
Why the Normality Assumption?
1.      If there are a large number of independent and identically distributed random variables, then, with a few exceptions, the distribution of their sum tends to a normal distribution as the number of such variables increases indefinitely (the central limit theorem, CLT).
2.      A variant of the CLT states that, even if the number of variables is not very large or if these variables are not strictly independent, their sum may still be normally distributed.
3.      With the normality assumption, the probability distributions of OLS estimators can be easily derived, because one property of the normal distribution is that any linear function of normally distributed random variables is itself normally distributed.
4.      The normal distribution is a comparatively simple distribution involving only two parameters (the mean and the variance); it is very well known.
5.      Finally, if we are dealing with a small, or finite, sample size, say data of fewer than 100 observations, the normality assumption assumes a critical role. It not only helps us to derive the exact probability distributions of OLS estimators but also enables us to use the t, F, and χ² statistical tests for regression models.
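Point 1, the CLT, is easy to see by simulation. In this sketch (illustrative sample sizes, not from the text), each observation is a sum of 30 iid Uniform(0, 1) draws; the resulting distribution matches the normal benchmark closely, e.g. about 68.3% of the mass lies within one standard deviation of the mean:

```python
import math
import random
import statistics

random.seed(0)

# Each observation is the sum of 30 iid Uniform(0, 1) draws
n_terms, n_samples = 30, 20_000
sums = [sum(random.random() for _ in range(n_terms)) for _ in range(n_samples)]

# Theory: mean = n_terms * 1/2, variance = n_terms * 1/12
mean_theory = n_terms * 0.5
sd_theory = math.sqrt(n_terms / 12)

mean_emp = statistics.fmean(sums)
sd_emp = statistics.stdev(sums)

# Share of observations within one theoretical SD of the mean;
# for a normal distribution this is about 0.683
share = sum(abs(s - mean_theory) <= sd_theory for s in sums) / n_samples
```

Even though each uniform draw is far from normal, the distribution of their sum is already close to normal at 30 terms, which is the intuition behind treating ui as normal when it represents the combined effect of many omitted influences.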
Properties of OLS Estimators under the Normality Assumption
1.      They are unbiased
2.      They have minimum variance.
3.      They are consistent; that is, as the sample size increases indefinitely, the estimators converge to their true population values.
4.      β̂1 is normally distributed with

Mean: E(β̂1) = β1

Variance: var(β̂1) = (∑Xi² / n∑xi²)σ²
5.      β̂2 is normally distributed with

Mean: E(β̂2) = β2

Variance: var(β̂2) = σ² / ∑xi²
6.      (n − 2)(σ̂²/σ²) is distributed as the χ² distribution with (n − 2) df.
7.      (β̂1, β̂2) are distributed independently of σ̂².
8.      β̂1 and β̂2 have minimum variance in the entire class of unbiased estimators, whether linear or not.
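Properties 1 and 5 can be verified by a small Monte Carlo experiment. The sketch below (illustrative parameter values and a hypothetical fixed regressor, not from the text) repeatedly draws samples from Y = β1 + β2X + u with normal disturbances and checks that the average slope estimate is close to β2 (unbiasedness) and that its sampling variance is close to σ²/∑xi²:

```python
import random
import statistics

random.seed(1)

# Hypothetical population values and fixed regressor
beta1, beta2, sigma = 1.0, 2.0, 1.0
X = [float(i) for i in range(1, 11)]
n = len(X)
xbar = sum(X) / n
x = [xi - xbar for xi in X]
Sxx = sum(xi ** 2 for xi in x)

def slope_estimate():
    """Draw one sample and return the OLS slope estimate."""
    Y = [beta1 + beta2 * xi + random.gauss(0, sigma) for xi in X]
    ybar = sum(Y) / n
    return sum(xi * (yi - ybar) for xi, yi in zip(x, Y)) / Sxx

estimates = [slope_estimate() for _ in range(20_000)]

mean_b2 = statistics.fmean(estimates)    # close to beta2: unbiasedness
var_b2 = statistics.variance(estimates)  # close to sigma**2 / Sxx
```

The empirical mean and variance of the 20,000 slope estimates line up with the theoretical values E(β̂2) = β2 and var(β̂2) = σ²/∑xi² given above.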
