
Maximum Likelihood Estimation
If $$x$$ is a continuous random variable with pdf:


 * $$f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}),$$

where $${{\theta }_{1}},$$ $${{\theta }_{2}},$$$$...,$$ $${{\theta }_{k}}$$ are $$k$$ unknown constant parameters to be estimated. Conduct an experiment and obtain $$N$$ independent observations, $${{x}_{1}},$$ $${{x}_{2}},$$$$...,$$ $${{x}_{N}},$$ which in the case of life data analysis correspond to failure times. The likelihood function (for complete data) is given by:


 * $$L({{x}_{1}},{{x}_{2}},...,{{x}_{N}}|{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})=L=\underset{i=1}{\overset{N}{\mathop \prod }}\,f({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})$$


 * $$i=1,2,...,N$$

The logarithmic likelihood function is:
 * $$\Lambda =\ln L=\underset{i=1}{\overset{N}{\mathop \sum }}\,\ln f({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})$$

The maximum likelihood estimators (MLE) of $${{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}$$ are obtained by maximizing $$L$$ or $$\Lambda .$$ Since $$\Lambda $$ is much easier to work with than $$L$$, the MLEs of $${{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}$$ are the simultaneous solutions of the $$k$$ equations:
 * $$\frac{\partial (\Lambda )}{\partial {{\theta }_{j}}}=0,j=1,2,...,k$$

Even though it is common practice to plot the MLE solutions using median ranks (points are plotted according to median ranks and the line according to the MLE solutions), this is not completely accurate. As can be seen from the equations above, the MLE method is independent of any kind of ranks. For this reason, the MLE solution often appears not to track the data on the probability plot. This is perfectly acceptable, since the two methods are independent of each other, and in no way suggests that the solution is wrong.
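The maximization described above can be carried out numerically when no closed form exists. The following is a minimal sketch (not the implementation used by any particular package): it evaluates $$\Lambda $$ directly from the data and the pdf, and maximizes it with a golden-section search over a single parameter, here using the exponential pdf as a stand-in and hypothetical failure times.

```python
import math

def log_likelihood(pdf, data, theta):
    """Lambda = sum over the observations of ln f(x_i; theta)."""
    return sum(math.log(pdf(x, theta)) for x in data)

def maximize(pdf, data, lo, hi, iters=200):
    """Golden-section search for the theta that maximizes Lambda on
    [lo, hi]; assumes Lambda is unimodal on that interval."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if log_likelihood(pdf, data, c) < log_likelihood(pdf, data, d):
            a = c
        else:
            b = d
    return (a + b) / 2

# Hypothetical complete failure times; exponential pdf f(t; lam) = lam*exp(-lam*t)
times = [10, 20, 30, 40, 50]
exp_pdf = lambda t, lam: lam * math.exp(-lam * t)
theta_hat = maximize(exp_pdf, times, 1e-4, 1.0)
# Agrees with the closed-form exponential MLE n / sum(t_i) = 1/30
```

Note that the search uses only the observations and the pdf, never any plotting ranks, which is the point made above.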

Illustrating the MLE Method Using the Exponential Distribution

 * To estimate $$\widehat{\lambda }$$ for a sample of $$n$$ units (all tested to failure), first obtain the likelihood function:


 * $$\begin{align}
L(\lambda |{{t}_{1}},{{t}_{2}},...,{{t}_{n}})= & \underset{i=1}{\overset{n}{\mathop \prod }}\,f({{t}_{i}}) \\ = & \underset{i=1}{\overset{n}{\mathop \prod }}\,\lambda {{e}^{-\lambda {{t}_{i}}}} \\ = & {{\lambda }^{n}}\cdot {{e}^{-\lambda \underset{i=1}{\overset{n}{\mathop{\sum }}}\,{{t}_{i}}}} \end{align}$$
 * Take the natural log of both sides:
 * $$\Lambda =\ln (L)=n\ln (\lambda )-\lambda \underset{i=1}{\overset{n}{\mathop \sum }}\,{{t}_{i}}$$
 * Obtain $$\tfrac{\partial \Lambda }{\partial \lambda }$$ and set it equal to zero:
 * $$\frac{\partial \Lambda }{\partial \lambda }=\frac{n}{\lambda }-\underset{i=1}{\overset{n}{\mathop \sum }}\,{{t}_{i}}=0$$
 * Solve for $$\widehat{\lambda }$$:
 * $$\hat{\lambda }=\frac{n}{\underset{i=1}{\overset{n}{\mathop{\sum }}}\,{{t}_{i}}}$$
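A quick numerical check of this closed-form result, using hypothetical complete (no suspensions) failure data:

```python
import math

# Hypothetical complete failure times, in hours
times = [10, 20, 30, 40, 50]
n = len(times)

# Closed-form exponential MLE: lambda-hat = n / sum(t_i)
lam_hat = n / sum(times)  # 5 / 150 = 1/30 failures per hour

# Sanity check: the score dLambda/dlambda = n/lambda - sum(t_i)
# should vanish at the estimate
score = n / lam_hat - sum(times)
```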

Notes About $$\widehat{\lambda }$$
Note that the value of $$\widehat{\lambda }$$ is only an estimate: if we obtained another sample from the same population and re-estimated $$\lambda $$, the new value would differ from the one previously calculated. In plain language, $$\hat{\lambda }$$ is an estimate of the true value of $$\lambda $$. How close is our estimate to the true value? To answer this question, one must first determine the distribution of the parameter, in this case $$\lambda $$. This methodology introduces a new term, the confidence bound, which allows us to specify a range for our estimate with a certain confidence level. The treatment of confidence bounds is integral to reliability engineering, and to all of statistics. (Confidence bounds are covered in Chapter 5.)

Illustrating the MLE Method Using the Normal Distribution
To obtain the MLE estimates for the mean, $$\bar{T},$$ and standard deviation, $${{\sigma }_{T}},$$ of the normal distribution, start with the pdf of the normal distribution, which is given by:


 * $$f(T)=\frac{1}{{{\sigma }_{T}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{T-\bar{T}}{{{\sigma }_{T}}} \right)}^{2}}}}$$

If $${{T}_{1}},{{T}_{2}},...,{{T}_{N}}$$ are known times-to-failure (and with no suspensions), then the likelihood function is given by:
 * $$L({{T}_{1}},{{T}_{2}},...,{{T}_{N}}|\bar{T},{{\sigma }_{T}})=L=\underset{i=1}{\overset{N}{\mathop \prod }}\,\left[ \frac{1}{{{\sigma }_{T}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{{T}_{i}}-\bar{T}}{{{\sigma }_{T}}} \right)}^{2}}}} \right]$$


 * $$L={{\left( \frac{1}{{{\sigma }_{T}}\sqrt{2\pi }} \right)}^{N}}\cdot {{e}^{-\tfrac{1}{2}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{\left( \tfrac{{{T}_{i}}-\bar{T}}{{{\sigma }_{T}}} \right)}^{2}}}}$$


 * then:


 * $$\Lambda =\ln L=-\frac{N}{2}\ln (2\pi )-N\ln {{\sigma }_{T}}-\frac{1}{2}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{\left( \frac{{{T}_{i}}-\bar{T}}{{{\sigma }_{T}}} \right)}^{2}}$$

Then taking the partial derivatives of $$\Lambda $$ with respect to each one of the parameters and setting them equal to zero yields:
 * $$\frac{\partial (\Lambda )}{\partial \bar{T}}=\frac{1}{\sigma _{T}^{2}}\underset{i=1}{\overset{N}{\mathop \sum }}\,({{T}_{i}}-\bar{T})=0$$


 * and:


 * $$\frac{\partial (\Lambda )}{\partial {{\sigma }_{T}}}=-\frac{N}{{{\sigma }_{T}}}+\frac{1}{\sigma _{T}^{3}}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{({{T}_{i}}-\bar{T})}^{2}}=0$$

Solving these two equations simultaneously yields:
 * $$\bar{T}=\frac{1}{N}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{T}_{i}}$$


 * and:


 * $$\begin{align}
\hat{\sigma }_{T}^{2}= & \frac{1}{N}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{({{T}_{i}}-\bar{T})}^{2}} \\ {{{\hat{\sigma }}}_{T}}= & \sqrt{\frac{1}{N}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{({{T}_{i}}-\bar{T})}^{2}}} \end{align}$$

It should be noted that these solutions are valid only for data with no suspensions, i.e., when all units are tested to failure. When suspensions are present (not all units are tested to failure), the methodology changes and the problem becomes much more complicated.

Illustrating with an Example of the Normal Distribution
If we had five units that failed at 10, 20, 30, 40 and 50 hours, the mean would be:


 * $$\begin{align}

\bar{T}= & \frac{1}{N}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{T}_{i}} \\ = & \frac{10+20+30+40+50}{5} \\ = & 30 \end{align}$$

The standard deviation estimate then would be:


 * $$\begin{align}

{{{\hat{\sigma }}}_{T}}= & \sqrt{\frac{1}{N}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{({{T}_{i}}-\bar{T})}^{2}}} \\ = & \sqrt{\frac{{{(10-30)}^{2}}+{{(20-30)}^{2}}+{{(30-30)}^{2}}+{{(40-30)}^{2}}+{{(50-30)}^{2}}}{5}}, \\ = & 14.1421 \end{align}$$
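The hand calculations above can be reproduced directly; a small sketch follows. Note the $$1/N$$ divisor in the standard deviation, which is what the likelihood equations yield (the MLE is biased, unlike the usual $$1/(N-1)$$ sample standard deviation).

```python
import math

times = [10, 20, 30, 40, 50]
N = len(times)

# MLE of the mean: the arithmetic average
t_bar = sum(times) / N

# MLE of the standard deviation, with the 1/N divisor from the
# likelihood equations (not the 1/(N-1) sample standard deviation)
sigma_hat = math.sqrt(sum((t - t_bar) ** 2 for t in times) / N)

# t_bar = 30.0, sigma_hat = sqrt(200) = 14.1421...
```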

A look at the likelihood function surface plot in Figure A-1 reveals that these estimates correspond to the maximum of the likelihood function.



This three-dimensional plot represents the likelihood function. As can be seen from the plot, the maximum likelihood estimates for the two parameters correspond with the peak or maximum of the likelihood function surface.

Distribution Log-Likelihood Equations
The following list provides links to the log-likelihood functions and their associated partial derivatives for most of the distributions available in Weibull++.


 * Exponential Log-Likelihood Functions and their Partials
 * Weibull Log-Likelihood Functions and their Partials
 * Mixed Weibull Log-Likelihood Functions and their Partials
 * Normal Log-Likelihood Functions and their Partials
 * Lognormal Log-Likelihood Functions and their Partials
 * Logistic Log-Likelihood Functions and their Partials
 * Loglogistic Log-Likelihood Functions and their Partials
 * Gumbel Log-Likelihood Functions and their Partials
 * Gamma Log-Likelihood Functions and their Partials