Monday, May 30, 2011

CHAPTER 3: TWO-VARIABLE REGRESSION MODEL: THE PROBLEM OF ESTIMATION

The Classical Linear Regression Model: The Assumptions Underlying the Method of Least Squares
Assumption 1: Linear regression model. The regression model is linear in the parameters, as in

Yi = β1 + β2Xi + ui

Linearity in the parameters is what matters here: a model such as Yi = β1 + β2Xi² + ui is still linear in the parameters even though it is nonlinear in the variable X.
Assumption 2: X values are fixed in repeated sampling. Values taken by the regressor X are considered fixed in repeated samples. More technically, X is assumed to be nonstochastic.
Assumption 3: Zero mean value of disturbance ui. Given the value of X, the mean, or expected, value of the random disturbance term ui is zero. Technically, the conditional mean value of ui is zero. Symbolically, we have

E(ui | Xi) = 0
Assumption 4: Homoscedasticity or equal variance of ui. Given the value of X, the variance of ui is the same for all observations. That is, the conditional variances of ui are identical. Symbolically, we have

var(ui | Xi) = E[ui − E(ui | Xi)]²
             = E(ui² | Xi)   because of Assumption 3
             = σ²
Homoscedasticity means equal spread or equal variance; the word comes from the Greek skedanime, which means to disperse or scatter. It means that the Y populations corresponding to the various X values have the same variance. Put simply, the variation around the regression line is the same across the X values; it neither increases nor decreases as X varies.
Heteroscedasticity, or unequal spread or variance:

var(ui | Xi) = σi²

which indicates that the variance of the Y population is no longer constant.
In short, not all Y values corresponding to the various X's will be equally reliable, reliability being judged by how closely or distantly the Y values are distributed around their means.
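To make the distinction concrete, here is a minimal NumPy sketch; the sample size, coefficients, and noise scales are all invented for illustration. It simulates one homoscedastic and one heteroscedastic disturbance series and compares the spread of u at small and large X:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
X = np.linspace(1, 10, n)

# Homoscedastic disturbances: var(u | X) = sigma^2, the same for every observation.
u_homo = rng.normal(0.0, 2.0, size=n)

# Heteroscedastic disturbances: var(u | X) = sigma_i^2, growing with X.
u_hetero = rng.normal(0.0, 0.5 * X, size=n)

# The spread of Y = beta1 + beta2*X + u around the regression line is roughly
# constant in the first case and widens with X in the second.
for label, u in [("homoscedastic", u_homo), ("heteroscedastic", u_hetero)]:
    low, high = u[X < 5.5], u[X >= 5.5]
    print(f"{label:>16}: var(u | small X) = {low.var():6.2f}, "
          f"var(u | large X) = {high.var():6.2f}")
```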
Assumption 5: No autocorrelation between the disturbances. Given any two X values, Xi and Xj (i ≠ j), the correlation between any two ui and uj (i ≠ j) is zero. Symbolically,

cov(ui, uj | Xi, Xj) = E{[ui − E(ui)] | Xi}{[uj − E(uj)] | Xj}
                     = E(ui | Xi) E(uj | Xj)   (why?)
                     = 0
Assumption 6: Zero covariance between ui and Xi, or E(uiXi) = 0. Formally,

cov(ui, Xi) = E[ui − E(ui)][Xi − E(Xi)]
            = E[ui(Xi − E(Xi))]   since E(ui) = 0
            = E(uiXi) − E(Xi)E(ui)   since E(Xi) is nonstochastic
            = E(uiXi)   since E(ui) = 0
            = 0 by assumption
We assumed that X and u have separate (and additive) influences on Y. But if X and u are correlated, it is not possible to assess their individual effects on Y. Thus, if X and u are positively correlated, X increases when u increases and decreases when u decreases. Similarly, if X and u are negatively correlated, X increases when u decreases and decreases when u increases. In either case, it is difficult to isolate the influence of X and of u on Y, as the simulation below illustrates.
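The following is a small, illustrative Monte Carlo sketch of this point; the coefficients, the 0.8 strength of the u-X dependence, and the helper ols_slope are all invented for the demonstration. When cov(u, X) > 0, the OLS slope absorbs part of u's influence and systematically overshoots the true β2:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 2000
beta1, beta2 = 1.0, 2.0                      # invented "true" parameters

def ols_slope(X, Y):
    x, y = X - X.mean(), Y - Y.mean()        # deviations from sample means
    return (x * y).sum() / (x ** 2).sum()    # beta2_hat = sum(x*y) / sum(x^2)

slopes_ok, slopes_bad = [], []
for _ in range(reps):
    X = rng.normal(0.0, 1.0, n)
    e = rng.normal(0.0, 1.0, n)
    u_ok = e                                 # cov(u, X) = 0: Assumption 6 holds
    u_bad = 0.8 * X + e                      # u rises and falls with X: violated
    slopes_ok.append(ols_slope(X, beta1 + beta2 * X + u_ok))
    slopes_bad.append(ols_slope(X, beta1 + beta2 * X + u_bad))

print("mean slope, cov(u, X) = 0:", round(np.mean(slopes_ok), 3))   # close to 2.0
print("mean slope, cov(u, X) > 0:", round(np.mean(slopes_bad), 3))  # close to 2.8
```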
Assumption 7: The number of observations n must be greater than the number of parameters to be estimated. Alternatively, the number of observations n must be greater than the number of explanatory variables.
Assumption 8: Variability in X values. The X values in a given sample must not all be the same. Technically, var(X) must be a finite positive number.
Assumption 9: The regression model is correctly specified. Alternatively, there is no specification bias or error in the model used in empirical analysis.
Assumption 10: There is no perfect multicollinearity. That is, there are no perfect linear relationships among the explanatory variables.
Precision or Standard Errors of Least-Squares Estimates
Standard error – the standard deviation of the sampling distribution of the estimator. The sampling distribution of an estimator is simply a probability or frequency distribution of the estimator, that is, a distribution of the set of values of the estimator obtained from all possible samples of the same size drawn from a given population.

var(β̂2) = σ² / ∑xi²
se(β̂2) = σ / √∑xi²

var(β̂1) = [∑Xi² / (n∑xi²)] σ²
se(β̂1) = √[∑Xi² / (n∑xi²)] σ

σ̂² = ∑ûi² / (n − 2)
∑ûi² = ∑yi² − β̂2² ∑xi²

β̂2 = ∑xiyi / ∑xi²

where xi = Xi − X̄ and yi = Yi − Ȳ denote deviations from the sample means, and σ̂² is the OLS estimator of the unknown σ².

Number of degrees of freedom – the total number of observations in the sample (= n) less the number of independent (linear) constraints or restrictions put on them. In other words, it is the number of independent observations out of the total of n observations.
Three features of the variances of β̂1 and β̂2:
1.      The variance of β̂2 is directly proportional to σ² but inversely proportional to ∑xi².
2.      The variance of β̂1 is directly proportional to σ² and ∑Xi² but inversely proportional to ∑xi² and the sample size n.
3.      Since β̂1 and β̂2 are estimators, they will not only vary from sample to sample but, in a given sample, they are likely to be dependent on each other, this dependence being measured by the covariance between them.

cov(β̂1, β̂2) = −X̄ var(β̂2)
             = −X̄ (σ² / ∑xi²)
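Here is a minimal NumPy sketch tying the formulas above together; the ten-observation income-consumption style sample is hypothetical, while everything else follows the definitions just given:

```python
import numpy as np

# Hypothetical two-variable sample (X = income, Y = consumption, say).
X = np.array([ 80., 100., 120., 140., 160., 180., 200., 220., 240., 260.])
Y = np.array([ 70.,  65.,  90.,  95., 110., 115., 120., 140., 155., 150.])
n = len(X)

x = X - X.mean()                                   # x_i = X_i - X_bar
y = Y - Y.mean()                                   # y_i = Y_i - Y_bar

beta2_hat = (x * y).sum() / (x ** 2).sum()         # slope
beta1_hat = Y.mean() - beta2_hat * X.mean()        # intercept

u_hat = Y - (beta1_hat + beta2_hat * X)            # residuals
sigma2_hat = (u_hat ** 2).sum() / (n - 2)          # sum(u_hat^2) / (n - 2)

var_b2 = sigma2_hat / (x ** 2).sum()
var_b1 = (X ** 2).sum() * sigma2_hat / (n * (x ** 2).sum())
cov_b1_b2 = -X.mean() * var_b2                     # cov(beta1_hat, beta2_hat)

print(f"beta1_hat = {beta1_hat:.4f}, se = {np.sqrt(var_b1):.4f}")
print(f"beta2_hat = {beta2_hat:.4f}, se = {np.sqrt(var_b2):.4f}")
print(f"sigma2_hat = {sigma2_hat:.4f}, cov(b1_hat, b2_hat) = {cov_b1_b2:.6f}")
```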
Properties of Least-Squares Estimators: The Gauss-Markov Theorem
Given the assumptions of the classical linear regression model, the least-squares estimators are BLUE (best linear unbiased estimators); that is, an OLS estimator such as β̂2 has the following properties:
1.      It is linear, that is, a linear function of a random variable, such as the dependent variable Y in the regression model.
2.      It is unbiased, that is, its average or expected value, E(β̂2), is equal to the true value, β2.
3.      It has minimum variance in the class of all such linear unbiased estimators; an unbiased estimator with the least variance is known as an efficient estimator.
Finite sample properties – these properties hold regardless of the sample size on which the estimators are based.
Asymptotic properties – properties that hold only if the sample size is very large.
The Coefficient of Determination r2: A Measure of “Goodness of Fit”
The coefficient of determination r2 is a summary measure that tells how well the sample regression line fits the data.
Total sum of squares (TSS) – total variation of the actual Y values about their sample mean.
Explained sum of squares (ESS) – variation of the estimated Y values about their mean.
Residual sum of squares (RSS) – residual or unexplained variation of the actual Y values about the regression line, so that TSS = ESS + RSS.
r2 measures the proportion or percentage of the total variation in Y explained by the regression model: r2 = ESS/TSS = 1 − RSS/TSS.
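A minimal sketch of this decomposition, reusing the hypothetical sample from the previous snippet: TSS splits into ESS plus RSS, and r2 is their ratio.

```python
import numpy as np

# Same hypothetical sample as above.
X = np.array([ 80., 100., 120., 140., 160., 180., 200., 220., 240., 260.])
Y = np.array([ 70.,  65.,  90.,  95., 110., 115., 120., 140., 155., 150.])

x, y = X - X.mean(), Y - Y.mean()
beta2_hat = (x * y).sum() / (x ** 2).sum()
beta1_hat = Y.mean() - beta2_hat * X.mean()
Y_hat = beta1_hat + beta2_hat * X                  # fitted values

TSS = ((Y - Y.mean()) ** 2).sum()      # actual Y about its mean
ESS = ((Y_hat - Y.mean()) ** 2).sum()  # fitted Y about its mean (which equals Y.mean())
RSS = ((Y - Y_hat) ** 2).sum()         # actual Y about the regression line

r2 = ESS / TSS                         # equivalently 1 - RSS / TSS
print(f"TSS = {TSS:.2f}, ESS = {ESS:.2f}, RSS = {RSS:.2f}, r2 = {r2:.4f}")
```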
Two Properties of r2:
1.      It is a nonnegative quantity.
2.      Its limits are 0 ≤ r2 ≤ 1.
Coefficient of correlation – is a measure of the degree of association between two variables.

r = ∑xiyi / √[(∑xi²)(∑yi²)]

or, in terms of the original observations,

r = [n∑XiYi − (∑Xi)(∑Yi)] / √{[n∑Xi² − (∑Xi)²][n∑Yi² − (∑Yi)²]}
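As a quick sanity check, here is a short sketch (same hypothetical sample as before) showing that the deviation form and the raw-observation form above return the same r:

```python
import numpy as np

X = np.array([ 80., 100., 120., 140., 160., 180., 200., 220., 240., 260.])
Y = np.array([ 70.,  65.,  90.,  95., 110., 115., 120., 140., 155., 150.])
n = len(X)

# Deviation form: r = sum(x*y) / sqrt(sum(x^2) * sum(y^2)).
x, y = X - X.mean(), Y - Y.mean()
r_dev = (x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum())

# Raw-observation ("computational") form.
num = n * (X * Y).sum() - X.sum() * Y.sum()
den = np.sqrt((n * (X ** 2).sum() - X.sum() ** 2) *
              (n * (Y ** 2).sum() - Y.sum() ** 2))
r_raw = num / den

print(r_dev, r_raw, np.isclose(r_dev, r_raw))   # identical up to rounding
```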
Seven Properties of r:
1.      It can be positive or negative, the sign depending on the sign of the term in the numerator, ∑xiyi, which measures the sample covariation of the two variables.
2.      It lies between the limits of -1 and +1; that is, -1≤r≤1.
3.      It is symmetrical in nature; that is, the coefficient of correlation between X and Y (rXY) is the same as that between Y and X (rYX).
4.      It is independent of the origin and scale; that is, if we define Xi* = aXi + c and Yi* = bYi + d, where a > 0, b > 0, and c and d are constants, then r between X* and Y* is the same as that between the original variables X and Y.
5.      If X and Y are statistically independent, the correlation coefficient between them is zero; but if r = 0, it does not mean that the two variables are independent. In other words, zero correlation does not necessarily imply independence (see the sketch after this list).
6.      It is a measure of linear association or linear dependence only; it has no meaning for describing nonlinear relations.
7.      Although it is a measure of linear association between two variables, it does not necessarily imply any cause-and-effect relationship.
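To illustrate properties 5 and 6, here is a minimal sketch with an invented sample in which Y is an exact (nonlinear) function of X, yet the sample correlation is approximately zero because X is symmetric about zero:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, 100_000)   # symmetric about zero
Y = X ** 2                          # Y is completely determined by X ...

r = np.corrcoef(X, Y)[0, 1]
print(f"r = {r:.4f}")               # ... yet r is near 0: the dependence is
                                    # nonlinear, so r cannot detect it
```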
