Cauchy Distribution - Estimation of Parameters

Because the parameters of the Cauchy distribution don't correspond to a mean and variance, attempting to estimate the parameters of the Cauchy distribution by using a sample mean and a sample variance will not succeed. For example, if n samples are taken from a Cauchy distribution, one may calculate the sample mean as:

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$$

Although the sample values xi will be concentrated about the central value x0, the sample mean will become increasingly variable as more samples are taken, because of the increased likelihood of encountering sample points with a large absolute value. In fact, the distribution of the sample mean will be equal to the distribution of the samples themselves; i.e., the sample mean of a large sample is no better (or worse) an estimator of x0 than any single observation from the sample. Similarly, calculating the sample variance will result in values that grow larger as more samples are taken.
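This instability is easy to see numerically. The following sketch (using NumPy, with an arbitrary seed; not part of the article) draws standard Cauchy samples of increasing size and compares the sample mean with the sample median: the median settles near the central value 0, while the mean keeps jumping around.

```python
import numpy as np

def mean_vs_median(sizes, seed=0):
    """Draw standard Cauchy samples (x0 = 0, gamma = 1) of each size and
    return (n, sample mean, sample median) for each size."""
    rng = np.random.default_rng(seed)
    results = []
    for n in sizes:
        sample = rng.standard_cauchy(n)
        results.append((n, float(np.mean(sample)), float(np.median(sample))))
    return results

# The mean does not converge as n grows; the median does.
for n, mean, median in mean_vs_median([100, 10_000, 1_000_000]):
    print(f"n={n:>9}: mean={mean:+10.3f}  median={median:+7.3f}")
```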

Therefore, more robust means of estimating the central value x0 and the scaling parameter γ are needed. One simple method is to take the median value of the sample as an estimator of x0 and half the sample interquartile range as an estimator of γ. Other, more precise and robust methods have been developed. For example, the truncated mean of the middle 24% of the sample order statistics produces an estimate for x0 that is more efficient than using either the sample median or the full sample mean. However, because of the fat tails of the Cauchy distribution, the efficiency of the estimator decreases if more than 24% of the sample is used.
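Both robust estimators described above can be sketched in a few lines of NumPy. This is an illustrative implementation, not code from the article; the sample sizes, seed, and true parameters (x0 = 2, γ = 3) are chosen arbitrarily for the demonstration.

```python
import numpy as np

def cauchy_robust_estimates(sample):
    """Estimate the Cauchy location x0 by the sample median and the
    scale gamma by half the sample interquartile range."""
    q1, med, q3 = np.percentile(np.asarray(sample), [25, 50, 75])
    return float(med), float((q3 - q1) / 2)

def truncated_mean_24(sample):
    """Mean of the middle 24% of the order statistics: a more efficient
    estimator of x0 than the median (or the useless full-sample mean)."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    lo = int(round(0.38 * n))   # drop the lowest 38% of order statistics ...
    hi = int(round(0.62 * n))   # ... and the highest 38%, keeping the middle 24%
    return float(np.mean(x[lo:hi]))

rng = np.random.default_rng(1)
# Shift and scale a standard Cauchy to x0 = 2, gamma = 3.
data = 2 + 3 * rng.standard_cauchy(100_000)
x0_hat, gamma_hat = cauchy_robust_estimates(data)
print(f"median: {x0_hat:.3f}, half-IQR: {gamma_hat:.3f}, "
      f"truncated mean: {truncated_mean_24(data):.3f}")
```

Half the interquartile range works as a scale estimator because the quartiles of a Cauchy distribution sit exactly at x0 ± γ.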

Maximum likelihood can also be used to estimate the parameters x0 and γ. However, this is complicated by the fact that it requires finding the roots of a high-degree polynomial, and there can be multiple roots that represent local maxima. Also, while the maximum likelihood estimator is asymptotically efficient, it is relatively inefficient for small samples. The log-likelihood function for the Cauchy distribution for sample size n is:

$$\hat\ell(x_1,\dotsc,x_n \mid x_0,\gamma) = -n \log(\gamma\pi) - \sum_{i=1}^{n} \log\left(1 + \left(\frac{x_i - x_0}{\gamma}\right)^2\right)$$

Maximizing the log-likelihood function with respect to x0 and γ produces the following system of equations:

$$\sum_{i=1}^{n} \frac{x_i - x_0}{\gamma^2 + (x_i - x_0)^2} = 0$$

$$\sum_{i=1}^{n} \frac{\gamma^2}{\gamma^2 + (x_i - x_0)^2} - \frac{n}{2} = 0$$

Note that $\sum_{i=1}^{n} \frac{\gamma^2}{\gamma^2 + (x_i - x_0)^2}$ is a monotone function in γ and that the solution γ must satisfy $\min_i |x_i - x_0| \le \gamma \le \max_i |x_i - x_0|$. Solving just for x0 requires solving a polynomial of degree 2n − 1, and solving just for γ requires solving a polynomial of degree n (first for γ, then x0). Therefore, whether solving for one parameter or for both parameters simultaneously, a numerical solution on a computer is typically required. The benefit of maximum likelihood estimation is asymptotic efficiency; estimating x0 using the sample median is only about 81% as asymptotically efficient as estimating x0 by maximum likelihood. The truncated sample mean using the middle 24% order statistics is about 88% as asymptotically efficient an estimator of x0 as the maximum likelihood estimate. When Newton's method is used to find the solution for the maximum likelihood estimate, the middle 24% order statistics can be used as an initial solution for x0.
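A numerical maximum likelihood fit along these lines can be sketched as follows. This is an illustrative implementation using SciPy's general-purpose Nelder–Mead minimizer rather than the Newton iteration the article mentions; the initial guess is built from the robust estimators above (middle-24% truncated mean for x0, half the interquartile range for γ), and the test parameters are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

def cauchy_neg_log_likelihood(params, x):
    """Negative log-likelihood of a Cauchy(x0, gamma) sample."""
    x0, gamma = params
    if gamma <= 0:
        return np.inf
    return len(x) * np.log(gamma * np.pi) + np.sum(
        np.log1p(((x - x0) / gamma) ** 2))

def cauchy_mle(x):
    """Maximize the Cauchy log-likelihood numerically, starting from
    robust initial estimates of x0 and gamma."""
    x = np.asarray(x)
    xs = np.sort(x)
    n = len(xs)
    # Middle-24% truncated mean as the initial x0, half-IQR as the initial gamma.
    x0_init = float(np.mean(xs[int(round(0.38 * n)):int(round(0.62 * n))]))
    q1, q3 = np.percentile(x, [25, 75])
    gamma_init = float((q3 - q1) / 2)
    res = minimize(cauchy_neg_log_likelihood, [x0_init, gamma_init],
                   args=(x,), method="Nelder-Mead")
    return res.x  # [x0_hat, gamma_hat]

rng = np.random.default_rng(2)
data = -1 + 0.5 * rng.standard_cauchy(10_000)
x0_hat, gamma_hat = cauchy_mle(data)
print(f"x0 = {x0_hat:.3f}, gamma = {gamma_hat:.3f}")
```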
