Least Squares, Regression Analysis and Statistics

The methods of least squares and regression analysis are conceptually different. However, the method of least squares is often used to generate estimators and other statistics in regression analysis.

Consider a simple example drawn from physics. A spring should obey Hooke's law, which states that the extension of a spring, y, is proportional to the force, F, applied to it. Thus

    y = f(F, k) = kF

constitutes the model, where F is the independent variable. To estimate the force constant, k, we conduct a series of n measurements with different forces, producing a set of data (F1, y1), (F2, y2), ..., (Fn, yn), where yi is a measured spring extension. Each experimental observation will contain some error. If we denote this error εi, we may specify an empirical model for our observations,

    yi = kFi + εi

There are many methods we might use to estimate the unknown parameter k. Since the n equations in the single unknown k comprise an overdetermined system, we may choose to estimate k using least squares. The sum of squares to be minimized is

    S = Σi (yi − kFi)²

The least squares estimate of the force constant, k, is given by

    k̂ = (Σi Fi yi) / (Σi Fi²)

Here it is assumed that applying the force causes the spring to expand. Having derived the force constant by least squares fitting, we can then predict the extension from Hooke's law.
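
As an illustration, here is a minimal sketch in Python with NumPy; the force and extension values are invented for the example, and k_hat is simply an illustrative variable name for the estimate:

    import numpy as np

    # Hypothetical measurements: applied forces and observed extensions.
    F = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([0.21, 0.38, 0.62, 0.79, 1.02])

    # Closed-form least squares estimate for the no-intercept model y = k*F.
    k_hat = np.sum(F * y) / np.sum(F**2)

    # The same estimate from the general linear least squares solver.
    k_lstsq, *_ = np.linalg.lstsq(F.reshape(-1, 1), y, rcond=None)

    print(k_hat, k_lstsq[0])   # the two values agree

    # Predicted extension for a new force, from the fitted Hooke's law model.
    print(k_hat * 2.5)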

In regression analysis the researcher specifies an empirical model. For example, a very common model is the straight-line model, which is used to test whether there is a linear relationship between the dependent and independent variables. If a linear relationship is found to exist, the variables are said to be correlated. However, correlation does not prove causation: both variables may be correlated with other, hidden variables, the dependent variable may "reverse" cause the independent variables, or the variables may be otherwise spuriously correlated. For example, suppose there is a correlation between deaths by drowning and the volume of ice cream sales at a particular beach. Both the number of people going swimming and the volume of ice cream sales increase as the weather gets hotter, and presumably the number of deaths by drowning rises with the number of people going swimming. Here the hot weather acts as a hidden variable driving both ice cream sales and the number of swimmers, so the correlation between drownings and ice cream sales does not mean that one causes the other.
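
A minimal sketch of such a straight-line fit, in Python/NumPy with invented data, reporting the fitted intercept and slope together with the sample correlation coefficient (which measures the strength of a linear relationship but, as noted above, says nothing about causation):

    import numpy as np

    # Invented (x, y) data for illustration.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])

    # Least squares fit of the straight-line model y = a + b*x.
    X = np.column_stack([np.ones_like(x), x])     # column of ones for the intercept
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

    # Sample correlation coefficient between x and y.
    r = np.corrcoef(x, y)[0, 1]

    print(a, b, r)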

In order to make statistical tests on the results it is necessary to make assumptions about the nature of the experimental errors. A common (but not necessary) assumption is that the errors belong to a Normal distribution. The central limit theorem supports the idea that this is a good approximation in many cases.

  • The Gauss–Markov theorem. In a linear model in which the errors have expectation zero conditional on the independent variables, are uncorrelated and have equal variances, the best linear unbiased estimator of any linear combination of the observations is its least-squares estimator. "Best" means that the least squares estimators of the parameters have minimum variance. The assumption of equal variance is valid when the errors all belong to the same distribution.
  • In a linear model, if the errors belong to a Normal distribution then the least squares estimators are also the maximum likelihood estimators; a numerical check of this equivalence is sketched below.
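
As a small check of the second point (a minimal sketch in Python/NumPy, reusing the invented Hooke's law data from above and treating the error standard deviation σ as known): for normally distributed errors the negative log-likelihood differs from S/(2σ²) only by a constant, so both criteria are minimized by the same value of k.

    import numpy as np

    F = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([0.21, 0.38, 0.62, 0.79, 1.02])
    sigma = 0.05                     # assumed known error standard deviation

    # Evaluate both criteria over a grid of candidate values of k.
    k_grid = np.linspace(0.1, 0.4, 2001)
    S = np.array([np.sum((y - k * F)**2) for k in k_grid])      # sum of squares S(k)
    nll = S / (2 * sigma**2) + len(y) * np.log(sigma * np.sqrt(2 * np.pi))  # negative log-likelihood

    # The two criteria are minimized at (numerically) the same k.
    print(k_grid[np.argmin(S)], k_grid[np.argmin(nll)])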

However, if the errors are not normally distributed, a central limit theorem often nonetheless implies that the parameter estimates will be approximately normally distributed so long as the sample is reasonably large. For this reason, as long as the error mean is independent of the independent variables, the distribution of the error term is not an important issue in regression analysis; in particular, it is not typically important whether the error term follows a normal distribution.
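
A rough Monte Carlo illustration of this point, again a hypothetical sketch in Python/NumPy: errors are drawn from a deliberately skewed (centered exponential) distribution, yet over many simulated data sets the least squares estimates of k are distributed in an approximately normal way around the true value.

    import numpy as np

    rng = np.random.default_rng(0)
    F = np.linspace(1.0, 5.0, 50)
    k_true = 0.2

    # Simulate many data sets with mean-zero but skewed errors and
    # record the least squares estimate of k for each one.
    estimates = []
    for _ in range(5000):
        errors = rng.exponential(scale=0.05, size=F.size) - 0.05
        y = k_true * F + errors
        estimates.append(np.sum(F * y) / np.sum(F**2))
    estimates = np.array(estimates)

    # The sampling distribution of the estimate is close to normal:
    # its mean is near k_true and roughly 95% of the estimates fall
    # within two standard deviations of the mean.
    print(estimates.mean(), estimates.std())
    print(np.mean(np.abs(estimates - estimates.mean()) < 2 * estimates.std()))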

In a least squares calculation with unit weights, or in linear regression, the variance of the jth parameter estimate, denoted var(β̂j), is usually estimated with

    var(β̂j) = σ² [(X^T X)^-1]jj ≈ (S / (n − m)) [(X^T X)^-1]jj

where X is the matrix of independent-variable values (the design matrix), and the true residual variance σ² is replaced by an estimate, S / (n − m), based on the minimised value of the sum of squares objective function S. The denominator, n − m, is the statistical degrees of freedom, with n observations and m fitted parameters; see effective degrees of freedom for generalizations.
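
A sketch of this variance estimate for a straight-line fit, in Python/NumPy with invented data (the variable names are illustrative, not from any particular library):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])

    # Straight-line model y = a + b*x: design matrix X, n observations, m parameters.
    X = np.column_stack([np.ones_like(x), x])
    n, m = X.shape

    # Least squares estimates of the parameters.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Minimised sum of squares S and the residual variance estimate S / (n - m).
    S = np.sum((y - X @ beta)**2)
    sigma2_hat = S / (n - m)

    # Estimated parameter variances: diagonal of sigma2_hat * (X^T X)^-1.
    var_params = sigma2_hat * np.diag(np.linalg.inv(X.T @ X))
    print(beta, var_params)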

Confidence limits can be found if the probability distribution of the parameters is known, or if an asymptotic approximation is made or assumed. Likewise, statistical tests on the residuals can be made if the probability distribution of the residuals is known or assumed. The probability distribution of any linear combination of the dependent variables can be derived if the probability distribution of the experimental errors is known or assumed. Inference is particularly straightforward if the errors are assumed to follow a normal distribution, which implies that the parameter estimates and residuals will also be normally distributed conditional on the values of the independent variables.
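
Continuing the sketch above, approximate confidence limits under the normal-error assumption can be formed from the estimated standard errors; the 1.96 multiplier below is the large-sample normal quantile (with so few observations a Student t quantile would be more appropriate):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])
    X = np.column_stack([np.ones_like(x), x])
    n, m = X.shape

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    S = np.sum((y - X @ beta)**2)
    std_err = np.sqrt(S / (n - m) * np.diag(np.linalg.inv(X.T @ X)))

    # Approximate 95% confidence limits for the intercept and slope.
    lower, upper = beta - 1.96 * std_err, beta + 1.96 * std_err
    print(list(zip(lower, upper)))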
