Gauss–Markov Theorem & Proof

Under the classical assumptions (errors with mean zero, constant variance, and no correlation across observations), the theorem says the Ordinary Least Squares (OLS) estimator is the

  • Best – Minimum variance within the class of linear unbiased estimators
  • Linear – Can be written as a linear (affine) function of the response vector $y$
  • Unbiased – Zero bias, where bias is $E(\widehat\theta) - \theta$, the estimator's expected value minus the true parameter value; unbiasedness means $E(\widehat\theta) - \theta = 0$
  • Estimator of the regression coefficient vector $\beta$
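A minimal sketch of why OLS is both linear and unbiased, assuming the model $y = X\beta + \varepsilon$ with $E[\varepsilon] = 0$ and $X$ of full column rank (notation not fixed above):

```latex
\begin{aligned}
% Linearity: \widehat\beta = A y for the fixed matrix A = (X^\top X)^{-1} X^\top.
\widehat\beta &= (X^\top X)^{-1} X^\top y \\
% Unbiasedness: substitute E[y] = X\beta.
E[\widehat\beta] &= (X^\top X)^{-1} X^\top E[y]
  = (X^\top X)^{-1} X^\top X \beta
  = \beta
\end{aligned}
```

So OLS is a fixed matrix times $y$ (linear) and has expectation equal to the true $\beta$ (unbiased); the content of the theorem is the "Best" part.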

Other linear unbiased estimators $\tilde \beta$ do exist, but none is more efficient than the Ordinary Least Squares estimator $\widehat\beta$. For any such $\tilde \beta$,

$$
\operatorname{Var}(\tilde \beta) - \operatorname{Var}(\widehat \beta) \succeq 0,
$$

i.e. the difference of the two covariance matrices is positive semidefinite.
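The standard proof sketch, assuming $y = X\beta + \varepsilon$ with $E[\varepsilon] = 0$ and $\operatorname{Var}(\varepsilon) = \sigma^2 I$ (assumptions not stated explicitly above): write any competing linear unbiased estimator as $\tilde\beta = Cy$ and decompose $C$ around the OLS weights.

```latex
\begin{aligned}
&\text{Let } \tilde\beta = Cy \text{ with } C = (X^\top X)^{-1} X^\top + D. \\
&\text{Unbiasedness for every } \beta:\quad
  E[\tilde\beta] = CX\beta = \beta \;\Longrightarrow\; DX = 0. \\
&\operatorname{Var}(\tilde\beta)
  = \sigma^2 C C^\top
  = \sigma^2\bigl[(X^\top X)^{-1} + D D^\top\bigr]
  \quad (\text{cross terms vanish since } DX = 0), \\
&\operatorname{Var}(\tilde\beta) - \operatorname{Var}(\widehat\beta)
  = \sigma^2 D D^\top \succeq 0 .
\end{aligned}
```

Since $D D^\top$ is a Gram matrix it is automatically positive semidefinite, and it is zero only when $D = 0$, i.e. only when $\tilde\beta$ is the OLS estimator itself.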