Parameter Estimation
For both variants of the geometric distribution, the parameter p can be estimated by equating the expected value with the sample mean. This is the method of moments, which in this case coincides with the maximum likelihood estimate of p.
Specifically, for the first variant let k1, ..., kn be a sample where ki ≥ 1 for i = 1, ..., n. Then p can be estimated as

p̂ = n / (k1 + ··· + kn) = 1 / k̄,

where k̄ is the sample mean.
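As a brief sketch, the estimator for this variant (number of trials up to and including the first success) can be computed directly; the sample values below are made-up for illustration.

```python
# MLE for the geometric distribution with support k >= 1 (number of trials).
# Equating E[X] = 1/p with the sample mean gives p_hat = n / sum(k_i).

def estimate_p_trials(sample):
    """Estimate p from a sample of trial counts (each k_i >= 1)."""
    n = len(sample)
    return n / sum(sample)

# Hypothetical sample of trial counts:
sample = [1, 3, 2, 5, 1, 4]
p_hat = estimate_p_trials(sample)  # 6 / 16 = 0.375
```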
In Bayesian inference, the Beta distribution is the conjugate prior distribution for the parameter p. If this parameter is given a Beta(α, β) prior, then the posterior distribution is

p ~ Beta(α + n, β + (k1 + ··· + kn) − n).
The posterior mean E[p | k1, ..., kn] = (α + n) / (α + β + k1 + ··· + kn) approaches the maximum likelihood estimate p̂ as α and β approach zero.
In the alternative case, let k1, ..., kn be a sample where ki ≥ 0 for i = 1, ..., n. Then p can be estimated as

p̂ = n / (n + k1 + ··· + kn) = 1 / (1 + k̄).
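The estimator for this shifted variant (number of failures before the first success) differs only in the denominator; again the sample values are made-up.

```python
# MLE for the geometric distribution with support k >= 0 (failures before
# the first success). Here E[X] = (1 - p)/p, so p_hat = n / (n + sum(k_i)).

def estimate_p_failures(sample):
    """Estimate p from a sample of failure counts (each k_i >= 0)."""
    n = len(sample)
    return n / (n + sum(sample))

# Hypothetical sample of failure counts:
sample = [0, 2, 1, 4, 0, 3]
p_hat = estimate_p_failures(sample)  # 6 / 16 = 0.375
```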
The posterior distribution of p given a Beta(α, β) prior is

p ~ Beta(α + n, β + k1 + ··· + kn).
Again, the posterior mean E[p | k1, ..., kn] = (α + n) / (α + β + n + k1 + ··· + kn) approaches the maximum likelihood estimate p̂ as α and β approach zero.
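The update for this variant can be sketched the same way, again with a made-up sample and an assumed Beta(1, 1) prior:

```python
# Beta posterior for p under the k >= 0 (failures) variant:
# Beta(alpha, beta) prior -> Beta(alpha + n, beta + sum(k_i)) posterior.

def posterior_failures(sample, alpha, beta):
    """Return the Beta posterior parameters for p given failure counts (k_i >= 0)."""
    n = len(sample)
    return alpha + n, beta + sum(sample)

a, b = posterior_failures([0, 2, 1, 4, 0, 3], alpha=1.0, beta=1.0)
posterior_mean = a / (a + b)  # (1 + 6) / (2 + 6 + 10) = 7/18
# As alpha and beta shrink toward zero, the mean tends to
# n / (n + sum(k_i)) = 0.375, the maximum likelihood estimate.
```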