The Mixed Weibull Distribution


===The Generalized Gamma Distribution===
While not as frequently used for modeling life data as the previous distributions, the generalized gamma distribution does have the ability to mimic the attributes of other distributions, such as the Weibull or lognormal, based on the values of the distribution's parameters. While the generalized gamma distribution is not often used to model life data by itself, its ability to behave like other more commonly used life distributions is sometimes used to determine which of those life distributions should be used to model a particular set of data.
 
====Generalized Gamma Probability Density Function====
The generalized gamma distribution is a three-parameter distribution. One version of the generalized gamma distribution uses the parameters  <math>k</math>,  <math>\beta </math>, and  <math>\theta </math>. The  <math>pdf</math>  for this form of the generalized gamma distribution is given by:
 
<math>f(t)=\frac{\beta }{\Gamma (k)\cdot \theta }{{\left( \frac{t}{\theta } \right)}^{k\beta -1}}{{e}^{-{{\left( \tfrac{t}{\theta } \right)}^{\beta }}}}</math>
 
where  <math>\theta >0</math>  is a scale parameter,  <math>\beta >0</math>  and  <math>k>0</math>  are shape parameters and  <math>\Gamma (x)</math>  is the gamma function of  <math>x</math>, which is defined by:
 
<math>\Gamma (x)=\int_{0}^{\infty }{{s}^{x-1}}\cdot {{e}^{-s}}ds</math>
 
With this version of the distribution, however, convergence problems arise that severely limit its usefulness. Even with data sets containing 200 or more data points, the MLE methods may fail to converge. Further adding to the confusion is the fact that distributions with widely different values of <math>k</math>, <math>\beta </math>, and <math>\theta </math> may appear almost identical [21]. In order to overcome these difficulties, Weibull++ uses a reparameterization with parameters  <math>\mu </math> ,  <math>\sigma </math> , and  <math>\lambda </math>  [21] where:
 
<math>\begin{align}
  \mu = & \ln (\theta )+\frac{1}{\beta }\cdot \ln \left( \frac{1}{{{\lambda }^{2}}} \right) \\
  \sigma = & \frac{1}{\beta \sqrt{k}} \\
  \lambda = & \frac{1}{\sqrt{k}} 
\end{align}</math>
 
where  <math>-\infty <\mu <\infty ,\,\sigma >0,</math>  and  <math>0<\lambda .</math>
While this makes the distribution converge much more easily in computations, it does not facilitate manual manipulation of the equation. By allowing  <math>\lambda </math>  to become negative, the  <math>pdf</math>  of the reparameterized distribution is given by:
 
 
<math>f(t)=\left\{ \begin{matrix}
  \tfrac{|\lambda |}{\sigma \cdot t}\cdot \tfrac{1}{\Gamma \left( \tfrac{1}{{{\lambda }^{2}}} \right)}\cdot {{e}^{\left[ \tfrac{\lambda \cdot \tfrac{\text{ln}(t)-\mu }{\sigma }+\text{ln}\left( \tfrac{1}{{{\lambda }^{2}}} \right)-{{e}^{\lambda \cdot \tfrac{\text{ln}(t)-\mu }{\sigma }}}}{{{\lambda }^{2}}} \right]}}\text{ if }\lambda \ne 0  \\
  \tfrac{1}{t\cdot \sigma \sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}(t)-\mu }{\sigma } \right)}^{2}}}}\text{                            if }\lambda =0  \\
\end{matrix} \right.</math>
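
To make the reparameterization concrete, the following is a minimal numerical sketch (assuming NumPy and SciPy are available; scipy.stats.gengamma is used only as an independent reference implementation of the  <math>k</math>,  <math>\beta </math>,  <math>\theta </math>  form and is not part of Weibull++). For  <math>\lambda >0</math>  it converts  <math>(\mu ,\sigma ,\lambda )</math>  back to  <math>(k,\beta ,\theta )</math>  and checks that the reparameterized  <math>pdf</math>  above matches the original three-parameter form.

<pre>
import numpy as np
from scipy.stats import gengamma        # reference pdf in the (k, beta, theta) form
from scipy.special import gammaln

def gg_pdf_reparam(t, mu, sigma, lam):
    # pdf in the (mu, sigma, lambda) parameterization, valid for lambda != 0
    w = (np.log(t) - mu) / sigma
    log_f = (np.log(abs(lam)) - np.log(sigma * t) - gammaln(1.0 / lam**2)
             + (lam * w + np.log(1.0 / lam**2) - np.exp(lam * w)) / lam**2)
    return np.exp(log_f)

mu, sigma, lam = 4.2, 0.5, 0.3                       # illustrative values (lambda > 0)
k, beta = 1.0 / lam**2, lam / sigma                  # invert the reparameterization
theta = np.exp(mu + (sigma / lam) * np.log(lam**2))

t = np.array([30.0, 60.0, 120.0])
print(gg_pdf_reparam(t, mu, sigma, lam))             # the two rows should agree
print(gengamma.pdf(t, a=k, c=beta, scale=theta))
</pre>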
 
====Generalized Gamma Reliability Function====
The reliability function for the generalized gamma distribution is given by:
 
 
 
<math>R(t)=\left\{ \begin{array}{*{35}{l}}
  1-{{\Gamma }_{I}}\left( \tfrac{{{e}^{\lambda \left( \tfrac{\text{ln}(t)-\mu }{\sigma } \right)}}}{{{\lambda }^{2}}};\tfrac{1}{{{\lambda }^{2}}} \right)\text{ if }\lambda >0  \\
  1-\Phi \left( \tfrac{\text{ln}(t)-\mu }{\sigma } \right)\text{              if }\lambda =0  \\
  {{\Gamma }_{I}}\left( \tfrac{{{e}^{\lambda \left( \tfrac{\text{ln}(t)-\mu }{\sigma } \right)}}}{{{\lambda }^{2}}};\tfrac{1}{{{\lambda }^{2}}} \right)\text{      if }\lambda <0  \\
\end{array} \right.</math>
 
where:
 
 
<math>\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z}{{e}^{-\tfrac{{{x}^{2}}}{2}}}dx</math>
 
and  <math>{{\Gamma }_{I}}(k;x)</math>  is the incomplete gamma function of  <math>k</math>  and  <math>x</math> , which is given by:
 
 
<math>{{\Gamma }_{I}}(k;x)=\frac{1}{\Gamma (k)}\int_{0}^{x}{{s}^{k-1}}{{e}^{-s}}ds</math>
 
where  <math>\Gamma (x)</math>  is the gamma function of  <math>x</math> .
Note that in Weibull++ the probability plot of the generalized gamma is created on lognormal probability paper. This means that the fitted line will not be straight unless  <math>\lambda =0.</math>
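
As an illustration, the reliability function above can be evaluated numerically with the regularized lower incomplete gamma function. The following is a short sketch (assuming SciPy; scipy.special.gammainc computes  <math>{{\Gamma }_{I}}(k;x)</math>  as defined above, and the parameter values are purely illustrative):

<pre>
import numpy as np
from scipy.special import gammainc   # regularized lower incomplete gamma, Gamma_I(k; x)
from scipy.stats import norm

def gg_reliability(t, mu, sigma, lam):
    # R(t) for the generalized gamma in the (mu, sigma, lambda) parameterization
    w = (np.log(t) - mu) / sigma
    if lam == 0:
        return norm.sf(w)                 # lognormal special case: 1 - Phi(w)
    x = np.exp(lam * w) / lam**2
    g = gammainc(1.0 / lam**2, x)
    return 1.0 - g if lam > 0 else g

print(gg_reliability(50.0, mu=4.2, sigma=0.5, lam=0.3))
</pre>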
 
====Generalized Gamma Failure Rate Function====
As defined in Chapter 3, the failure rate function is given by:
 
<math>\lambda (t)=\frac{f(t)}{R(t)}</math>
 
Owing to the complexity of the equations involved, the function will not be displayed here, but the failure rate function for the generalized gamma distribution can be obtained merely by dividing the generalized gamma  <math>pdf</math>  by the generalized gamma reliability function given above.
 
====Generalized Gamma Reliable Life====
The reliable life,  <math>{{T}_{R}}</math> , of a unit for a specified reliability, starting the mission at age zero, is given by:
 
<math>{{T}_{R}}=\left\{ \begin{array}{*{35}{l}}
  {{e}^{\mu +\tfrac{\sigma }{\lambda }\ln \left[ {{\lambda }^{2}}\Gamma _{I}^{-1}\left( 1-R,\tfrac{1}{{{\lambda }^{2}}} \right) \right]}}\text{  if }\lambda >0  \\
  {{e}^{\mu +\sigma {{\Phi }^{-1}}(1-R)}}\text{                  if }\lambda =0  \\
  {{e}^{\mu +\tfrac{\sigma }{\lambda }\ln \left[ {{\lambda }^{2}}\Gamma _{I}^{-1}\left( R,\tfrac{1}{{{\lambda }^{2}}} \right) \right]}}\text{    if }\lambda <0  \\
\end{array} \right.</math>
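
The reliable life can likewise be evaluated with the inverse of the regularized incomplete gamma function. A minimal sketch, assuming SciPy (scipy.special.gammaincinv plays the role of  <math>\Gamma _{I}^{-1}</math>  above, and the parameter values are illustrative):

<pre>
import numpy as np
from scipy.special import gammaincinv   # inverse of the regularized incomplete gamma
from scipy.stats import norm

def gg_reliable_life(R, mu, sigma, lam):
    # time at which reliability equals R, generalized gamma (mu, sigma, lambda)
    if lam == 0:
        return np.exp(mu + sigma * norm.ppf(1.0 - R))    # lognormal special case
    q = 1.0 - R if lam > 0 else R
    return np.exp(mu + (sigma / lam) * np.log(lam**2 * gammaincinv(1.0 / lam**2, q)))

print(gg_reliable_life(0.90, mu=4.2, sigma=0.5, lam=0.3))   # 90% reliable life
</pre>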
 
====Characteristics of the Generalized Gamma Distribution====
As mentioned previously, the generalized gamma distribution includes other distributions as special cases based on the values of the parameters.
 
• The Weibull distribution is a special case when  <math>\lambda =1</math>  and:
 
<math>\begin{align}
  & \beta = & \frac{1}{\sigma } \\
& \eta = & {{e}^{\mu }} 
\end{align}</math>
 
• In this case, the generalized distribution has the same behavior as the Weibull for  <math>\sigma >1,</math>  <math>\sigma =1,</math>  and  <math>\sigma <1</math>  ( <math>\beta <1,</math>  <math>\beta =1,</math>  and  <math>\beta >1,</math>  respectively).
 
• The exponential distribution is a special case when  <math>\lambda =1</math>  and  <math>\sigma =1</math>.
 
• The lognormal distribution is a special case when  <math>\lambda =0</math>.
 
• The gamma distribution is a special case when  <math>\lambda =\sigma </math>.
 
By allowing  <math>\lambda </math>  to take negative values, the generalized gamma distribution can be further extended to include additional distributions as special cases. For example, the Fréchet distribution of maxima (also known as a reciprocal Weibull) is a special case when  <math>\lambda =-1</math>.
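
These special cases can be verified numerically. The following brief sketch (assuming SciPy; parameter values are illustrative) shows that setting  <math>\lambda =1</math>  (i.e.,  <math>k=1</math>) collapses the generalized gamma  <math>pdf</math>  to a Weibull  <math>pdf</math>  with  <math>\beta =1/\sigma </math>  and  <math>\eta ={{e}^{\mu }}</math>:

<pre>
import numpy as np
from scipy.stats import gengamma, weibull_min

mu, sigma = 4.2, 0.5                      # illustrative parameter values
beta, eta = 1.0 / sigma, np.exp(mu)       # Weibull parameters implied by lambda = 1

# lambda = 1  <=>  k = 1 in the (k, beta, theta) form, with theta = eta
t = np.array([30.0, 60.0, 120.0])
print(gengamma.pdf(t, a=1.0, c=beta, scale=eta))
print(weibull_min.pdf(t, c=beta, scale=eta))     # identical values expected
</pre>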
 
===Confidence Bounds===
The only method available in Weibull++ for confidence bounds for the generalized gamma distribution is the Fisher matrix, which is described next.
 
====Bounds on the Parameters====
The lower and upper bounds on the parameter  <math>\mu </math>  are estimated from:
 
<math>\begin{align}
  & {{\mu }_{U}}= & \widehat{\mu }+{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (upper bound)} \\
& {{\mu }_{L}}= & \widehat{\mu }-{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (lower bound)} 
\end{align}</math>
 
For the parameter  <math>\widehat{\sigma }</math> ,  <math>\ln (\widehat{\sigma })</math>  is treated as normally distributed, and the bounds are estimated from:
 
<math>\begin{align}
  & {{\sigma }_{U}}= & \widehat{\sigma }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })}}{\widehat{\sigma }}}}\text{ (upper bound)} \\
& {{\sigma }_{L}}= & \frac{\widehat{\sigma }}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })}}{\widehat{\sigma }}}}}\text{ (lower bound)} 
\end{align}</math>
 
 
For the parameter  <math>\lambda ,</math>  the bounds are estimated from:
 
<math>\begin{align}
  & {{\lambda }_{U}}= & \widehat{\lambda }+{{K}_{\alpha }}\sqrt{Var(\widehat{\lambda })}\text{ (upper bound)} \\
& {{\lambda }_{L}}= & \widehat{\lambda }-{{K}_{\alpha }}\sqrt{Var(\widehat{\lambda })}\text{ (lower bound)} 
\end{align}</math>
 
where  <math>{{K}_{\alpha }}</math>  is defined by:
 
<math>\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})</math>
 
If  <math>\delta </math>  is the confidence level, then  <math>\alpha =\tfrac{1-\delta }{2}</math>  for the two-sided bounds, and  <math>\alpha =1-\delta </math>  for the one-sided bounds.
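
In other words,  <math>{{K}_{\alpha }}</math>  is simply the standard normal quantile corresponding to  <math>\alpha </math>. A small sketch (assuming SciPy) for 90% two-sided bounds:

<pre>
from scipy.stats import norm

delta = 0.90                        # confidence level
alpha = (1.0 - delta) / 2.0         # two-sided bounds
K_alpha = norm.ppf(1.0 - alpha)     # K such that alpha = 1 - Phi(K)
print(K_alpha)                      # approximately 1.645
</pre>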
 
The variances and covariances of  <math>\widehat{\mu }</math>  and  <math>\widehat{\sigma }</math>  are estimated as follows:
 
 
<math>\begin{align}
  &  & \left( \begin{matrix}
  \widehat{Var}\left( \widehat{\mu } \right) & \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right) & \widehat{Cov}\left( \widehat{\mu },\widehat{\lambda } \right)  \\
  \widehat{Cov}\left( \widehat{\sigma },\widehat{\mu } \right) & \widehat{Var}\left( \widehat{\sigma } \right) & \widehat{Cov}\left( \widehat{\sigma },\widehat{\lambda } \right)  \\
  \widehat{Cov}\left( \widehat{\lambda },\widehat{\mu } \right) & \widehat{Cov}\left( \widehat{\lambda },\widehat{\sigma } \right) & \widehat{Var}\left( \widehat{\lambda } \right)  \\
\end{matrix} \right) \\
& = & \left( \begin{matrix}
  -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\mu }^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma } & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \lambda }  \\
  -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma } & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \lambda \partial \sigma }  \\
  -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \lambda } & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \lambda \partial \sigma } & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\lambda }^{2}}}  \\
\end{matrix} \right)_{\mu =\widehat{\mu },\sigma =\widehat{\sigma },\lambda =\hat{\lambda }}^{-1} 
\end{align}</math>
 
where  <math>\Lambda </math>  is the log-likelihood function of the generalized gamma distribution.
 
====Bounds on Reliability====
The upper and lower bounds on reliability are given by:
 
<math>\begin{align}
  & {{R}_{U}}= & \frac{{\hat{R}}}{\hat{R}+(1-\hat{R}){{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{R})}}{\hat{R}(1-\hat{R})}}}} \\
& {{R}_{L}}= & \frac{{\hat{R}}}{\hat{R}+(1-\hat{R}){{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{R})}}{\hat{R}(1-\hat{R})}}}} 
\end{align}</math>
 
where:
 
<math>\begin{align}
  & Var(\widehat{R})= & {{\left( \frac{\partial R}{\partial \mu } \right)}^{2}}Var(\widehat{\mu })+{{\left( \frac{\partial R}{\partial \sigma } \right)}^{2}}Var(\widehat{\sigma })+{{\left( \frac{\partial R}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda })+ \\
&  & +2\left( \frac{\partial R}{\partial \mu } \right)\left( \frac{\partial R}{\partial \sigma } \right)Cov(\widehat{\mu },\widehat{\sigma })+2\left( \frac{\partial R}{\partial \mu } \right)\left( \frac{\partial R}{\partial \lambda } \right)Cov(\widehat{\mu },\widehat{\lambda })+ \\
&  & +2\left( \frac{\partial R}{\partial \lambda } \right)\left( \frac{\partial R}{\partial \sigma } \right)Cov(\widehat{\lambda },\widehat{\sigma }) 
\end{align}</math>
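
This is the usual first-order (delta-method) error propagation,  <math>Var(\widehat{R})={{g}^{T}}\Sigma g</math>, where  <math>g</math>  is the gradient of  <math>R</math>  with respect to the parameters and  <math>\Sigma </math>  is the estimated covariance matrix from the inverse Fisher matrix. A compact sketch (assuming NumPy; the gradient and covariance values below are placeholders, not values from any particular data set):

<pre>
import numpy as np

# gradient of R with respect to (mu, sigma, lambda), evaluated at the estimates
g = np.array([-0.05, 0.02, 0.01])             # placeholder values
# covariance matrix of the estimates, from the inverse Fisher matrix
Sigma = np.array([[0.010, 0.001, 0.002],
                  [0.001, 0.004, 0.001],
                  [0.002, 0.001, 0.020]])      # placeholder values

var_R = g @ Sigma @ g                          # identical to the expanded sum above
print(var_R)
</pre>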
 
====Bounds on Time====
The bounds around time for a given percentile, or unreliability, are estimated by first solving the reliability equation with respect to time. Since  <math>T</math>  is a positive variable, the transformed variable  <math>\hat{u}=\ln (\widehat{T})</math>  is treated as normally distributed and the bounds are estimated from:
 
<math>\begin{align}
  & {{u}_{U}}= & \ln {{T}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})} \\
& {{u}_{L}}= & \ln {{T}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})} 
\end{align}</math>
 
Solving for  <math>{{T}_{U}}</math>  and  <math>{{T}_{L}}</math>  we get:
 
<math>\begin{align}
  & {{T}_{U}}= & {{e}^{{{u}_{U}}}}\text{ (upper bound)} \\
& {{T}_{L}}= & {{e}^{{{u}_{L}}}}\text{ (lower bound)} 
\end{align}</math>
 
The variance of  <math>u</math>  is estimated from:
 
<math>\begin{align}
  & Var(\widehat{u})= & {{\left( \frac{\partial u}{\partial \mu } \right)}^{2}}Var(\widehat{\mu })+{{\left( \frac{\partial u}{\partial \sigma } \right)}^{2}}Var(\widehat{\sigma })+{{\left( \frac{\partial u}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda })+ \\
&  & +2\left( \frac{\partial u}{\partial \mu } \right)\left( \frac{\partial u}{\partial \sigma } \right)Cov(\widehat{\mu },\widehat{\sigma })+2\left( \frac{\partial u}{\partial \mu } \right)\left( \frac{\partial u}{\partial \lambda } \right)Cov(\widehat{\mu },\widehat{\lambda })+ \\
&  & +2\left( \frac{\partial u}{\partial \lambda } \right)\left( \frac{\partial u}{\partial \sigma } \right)Cov(\widehat{\lambda },\widehat{\sigma }) 
\end{align}</math>
 
 
====A Generalized Gamma Distribution Example====
The following data set represents revolutions-to-failure (in millions) for 23 ball bearings in a fatigue test [21].
 
 
<math>\begin{array}{*{35}{l}}
  \text{17}\text{.88} & \text{28}\text{.92} & \text{33} & \text{41}\text{.52} & \text{42}\text{.12} & \text{45}\text{.6} & \text{48}\text{.4} & \text{51}\text{.84} & \text{51}\text{.96} & \text{54}\text{.12}  \\
  \text{55}\text{.56} & \text{67}\text{.8} & \text{68}\text{.64} & \text{68}\text{.64} & \text{68}\text{.88} & \text{84}\text{.12} & \text{93}\text{.12} & \text{98}\text{.64} & \text{105}\text{.12} & \text{105}\text{.84}  \\
  \text{127}\text{.92} & \text{128}\text{.04} & \text{173}\text{.4} & {} & {} & {} & {} & {} & {} & {}  \\
\end{array}</math>
 
When the generalized gamma distribution is fitted to this data using MLE, the following values for parameters are obtained:
 
<math>\begin{align}
  & \widehat{\mu }= & 4.23064 \\
& \widehat{\sigma }= & 0.509982 \\
& \widehat{\lambda }= & 0.307639 
\end{align}</math>
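
For comparison, the following is a rough sketch of the same fit using SciPy's generalized gamma (an assumption outside of Weibull++; the optimizer, the parameterization and the handling of the location parameter differ, so the converted estimates should be close to, but not necessarily identical to, the values above):

<pre>
import numpy as np
from scipy.stats import gengamma

data = np.array([17.88, 28.92, 33, 41.52, 42.12, 45.6, 48.4, 51.84, 51.96, 54.12,
                 55.56, 67.8, 68.64, 68.64, 68.88, 84.12, 93.12, 98.64, 105.12,
                 105.84, 127.92, 128.04, 173.4])

k, beta, loc, theta = gengamma.fit(data, floc=0)   # MLE in the (k, beta, theta) form
lam = 1.0 / np.sqrt(k)                             # convert to (mu, sigma, lambda)
sigma = 1.0 / (beta * np.sqrt(k))
mu = np.log(theta) + (1.0 / beta) * np.log(1.0 / lam**2)
print(mu, sigma, lam)
</pre>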
 
Note that for this data, the generalized gamma offers a compromise between the Weibull  <math>(\lambda =1),</math>  and the lognormal  <math>(\lambda =0)</math>  distributions. The value of  <math>\lambda </math>  indicates that the lognormal distribution is better supported by the data. A better assessment, however, can be made by looking at the confidence bounds on  <math>\lambda .</math>  For example, the 90% two-sided confidence bounds are:
 
<math>\begin{align}
  & {{\lambda }_{U}}= & 1.20736 \\
& {{\lambda }_{L}}= & -0.592087 
\end{align}</math>
 
It can then be concluded that both distributions (i.e., Weibull and lognormal) are well supported by the data, with the lognormal being the better supported of the two.
In Weibull++ the generalized gamma probability plot is created on lognormal probability paper, as shown next.
 
It is also important to note that, as with the mixed Weibull distribution, when regression analysis is performed with a generalized gamma model the choice of regression axis, i.e.  <math>RRX</math>  or  <math>RRY,</math>  is of no consequence, since non-linear regression is utilized.
 
===The Gamma Distribution===
The gamma distribution is a flexible life distribution model that may offer a good fit to some sets of failure data. It is not, however, widely used as a life distribution model for common failure mechanisms. The gamma distribution does arise naturally as the time-to-first-fail distribution for a system with standby exponentially distributed backups, and is also a good fit for the sum of independent exponential random variables. When the shape parameter is an integer, the gamma distribution is sometimes called the Erlang distribution, which is used frequently in queuing theory applications. [32]
 
====Gamma Probability Density Function====
The  <math>pdf</math>  of the gamma distribution is given by:
 
<math>f(T)=\frac{{{e}^{kz-{{e}^{z}}}}}{t\Gamma (k)}</math>
 
where:
 
<math>z=\ln (t)-\mu </math>
 
and:
 
<math>\begin{align}
  & {{e}^{\mu }}= & \text{scale parameter} \\
& k= & \text{shape parameter} 
\end{align}</math>
 
where  <math>0<t<\infty </math> ,  <math>-\infty <\mu <\infty </math>  and  <math>k>0</math> .
====The Gamma Reliability Function====
The reliability for a mission of time  <math>T</math>  for the gamma distribution is:
 
 
<math>R=1-{{\Gamma }_{1}}(k;{{e}^{z}})</math>
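
In this parameterization the gamma distribution is simply the standard two-parameter gamma with shape  <math>k</math>  and scale  <math>{{e}^{\mu }}</math>, so the reliability can be cross-checked against a standard library. A sketch assuming SciPy (parameter values are illustrative):

<pre>
import numpy as np
from scipy.special import gammainc
from scipy.stats import gamma

mu, k, t = 0.08, 50.0, 55.0                  # illustrative values
z = np.log(t) - mu

print(1.0 - gammainc(k, np.exp(z)))          # R = 1 - Gamma_1(k; e^z)
print(gamma.sf(t, a=k, scale=np.exp(mu)))    # same value from scipy.stats.gamma
</pre>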
 
 
====The Gamma Mean, Median and Mode====
The gamma mean or MTTF is:
 
 
<math>\overline{T}=k{{e}^{\mu }}</math>
 
 
The mode exists if  <math>k>1</math>  and is given by:
 
 
<math>\tilde{T}=(k-1){{e}^{\mu }}</math>
 
 
The median is:
 
<math>\widehat{T}={{e}^{\mu +\ln (\Gamma _{1}^{-1}(0.5;k))}}</math>
 
====The Gamma Standard Deviation====
The standard deviation for the gamma distribution is:
 
<math>{{\sigma }_{T}}=\sqrt{k}{{e}^{\mu }}</math>
 
 
====The Gamma Reliable Life====
The gamma reliable life is:
 
<math>{{T}_{R}}={{e}^{\mu +\ln (\Gamma _{1}^{-1}(1-R;k))}}</math>
 
====The Gamma Failure Rate Function====
The instantaneous gamma failure rate is given by:
 
<math>\lambda =\frac{{{e}^{kz-{{e}^{z}}}}}{t\Gamma (k)(1-{{\Gamma }_{1}}(k;{{e}^{z}}))}</math>
 
====Characteristics of the Gamma Distribution====
Some of the specific characteristics of the gamma distribution are the following:
 
For  <math>k>1</math> :
 
• As  <math>T\to 0</math>  or  <math>T\to \infty </math> ,  <math>f(T)\to 0.</math>
 
• <math>f(T)</math>  increases from 0 to the mode value and decreases thereafter.
 
• If  <math>k\le 2</math>  then the  <math>pdf</math>  has one inflection point at  <math>T={{e}^{\mu }}\sqrt{k-1}\left( \sqrt{k-1}+1 \right).</math>
 
• If  <math>k>2</math>  then the  <math>pdf</math>  has two inflection points at  <math>T={{e}^{\mu }}\sqrt{k-1}\left( \sqrt{k-1}\pm 1 \right).</math>
 
• For a fixed  <math>k</math> , as  <math>\mu </math>  increases, the  <math>pdf</math> starts to look more like a straight angle.
 
• As  <math>T\to \infty ,\lambda (T)\to \tfrac{1}{{{e}^{\mu }}}.</math>
 
 
For  <math>k=1</math> :
 
• Gamma becomes the exponential distribution.
 
• As  <math>T\to 0</math>  ,  <math>f(T)\to \tfrac{1}{{{e}^{\mu }}}.</math>
 
• As  <math>T\to \infty ,f(T)\to 0.</math>
 
• The  <math>pdf</math>  decreases monotonically and is convex.
 
• <math>\lambda (T)\equiv \tfrac{1}{{{e}^{\mu }}}</math> ; that is, the failure rate  <math>\lambda (T)</math>  is constant.
 
• The mode does not exist.
 
For  <math>0<k<1</math> :
 
• As  <math>T\to 0</math>  ,  <math>f(T)\to \infty .</math>
 
• As  <math>T\to \infty ,f(T)\to 0.</math>
 
• As  <math>T\to \infty ,\lambda (T)\to \tfrac{1}{{{e}^{\mu }}}.</math>
 
• The  <math>pdf</math>  decreases monotonically and is convex.
 
• As  <math>\mu </math>  increases, the  <math>pdf</math>  gets stretched out to the right and its height decreases, while maintaining its shape.
 
• As  <math>\mu </math>  decreases, the  <math>pdf</math>  shifts towards the left and its height increases.
 
• The mode does not exist.
 
====Confidence Bounds====
The only method available in Weibull++ for confidence bounds for the gamma distribution is the Fisher matrix, which is described next. The complete derivations were presented in detail (for a general function) in Chapter 5.
====Bounds on the Parameters====
The lower and upper bounds on the parameter  <math>\widehat{\mu }</math>  are estimated from:
 
<math>\begin{align}
  & {{\mu }_{U}}= & \widehat{\mu }+{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (upper bound)} \\
& {{\mu }_{L}}= & \widehat{\mu }-{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (lower bound)} 
\end{align}</math>
 
 
Since the shape parameter,  <math>\widehat{k}</math> , must be positive,  <math>\ln (\widehat{k})</math>  is treated as normally distributed and the bounds are estimated from:
 
<math>\begin{align}
  & {{k}_{U}}= & \widehat{k}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{k})}}{\widehat{k}}}}\text{ (upper bound)} \\
& {{k}_{L}}= & \frac{\widehat{k}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{k})}}{\widehat{k}}}}}\text{ (lower bound)} 
\end{align}</math>
 
where  <math>{{K}_{\alpha }}</math>  is defined by:
 
<math>\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})</math>
 
If  <math>\delta </math>  is the confidence level, then  <math>\alpha =\tfrac{1-\delta }{2}</math>  for the two-sided bounds and  <math>\alpha =1-\delta </math>  for the one-sided bounds.
 
The variances and covariances of  <math>\widehat{\mu }</math>  and  <math>\widehat{k}</math>  are estimated from the Fisher matrix, as follows:
 
<math>\left( \begin{matrix}
  \widehat{Var}\left( \widehat{\mu } \right) & \widehat{Cov}\left( \widehat{\mu },\widehat{k} \right)  \\
  \widehat{Cov}\left( \widehat{\mu },\widehat{k} \right) & \widehat{Var}\left( \widehat{k} \right)  \\
\end{matrix} \right)=\left( \begin{matrix}
  -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\mu }^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial k}  \\
  {} & {}  \\
  -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial k} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{k}^{2}}}  \\
\end{matrix} \right)_{\mu =\widehat{\mu },k=\widehat{k}}^{-1}</math>
 
 
<math>\Lambda </math>  is the log-likelihood function of the gamma distribution, described in Chapter 3 and Appendix C.
 
====Bounds on Reliability====
The reliability of the gamma distribution is:
 
<math>\widehat{R}(T;\hat{\mu },\hat{k})=1-{{\Gamma }_{1}}(\widehat{k};{{e}^{\widehat{z}}})</math>
 
where:
 
<math>\widehat{z}=\ln (t)-\widehat{\mu }</math>
 
The upper and lower bounds on reliability are:
 
<math>{{R}_{U}}=\frac{\widehat{R}}{\widehat{R}+(1-\widehat{R})\exp (\tfrac{-{{K}_{\alpha }}\sqrt{Var(\widehat{R})\text{ }}}{\widehat{R}(1-\widehat{R})})}\text{  (upper bound)}</math>
 
<math>{{R}_{L}}=\frac{\widehat{R}}{\widehat{R}+(1-\widehat{R})\exp (\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{R})\text{ }}}{\widehat{R}(1-\widehat{R})})}\text{  (lower bound)}</math>
 
where:
 
<math>Var(\widehat{R})={{(\frac{\partial R}{\partial \mu })}^{2}}Var(\widehat{\mu })+2(\frac{\partial R}{\partial \mu })(\frac{\partial R}{\partial k})Cov(\widehat{\mu },\widehat{k})+{{(\frac{\partial R}{\partial k})}^{2}}Var(\widehat{k})</math>
 
====Bounds on Time====
The bounds around time for a given gamma percentile (unreliability) are estimated by first solving the reliability equation with respect to time, as follows:
 
 
<math>\widehat{T}(\widehat{\mu },\widehat{k})={{e}^{\widehat{\mu }+\ln (\Gamma _{1}^{-1}(1-R;\widehat{k}))}}</math>


The variance of  <math>\widehat{T}</math>  is then estimated from:


<math>Var(\widehat{T})={{(\frac{\partial T}{\partial \mu })}^{2}}Var(\widehat{\mu })+2(\frac{\partial T}{\partial \mu })(\frac{\partial T}{\partial k})Cov(\widehat{\mu },\widehat{k})+{{(\frac{\partial T}{\partial k})}^{2}}Var(\widehat{k})</math>
 
 
The upper and lower bounds are then found by:
 
 
<math>\begin{align}
  & {{T}_{U}}= & \hat{T}+{{K}_{\alpha }}\sqrt{Var(\hat{T})}\text{ (Upper bound)} \\
& {{T}_{L}}= & \hat{T}-{{K}_{\alpha }}\sqrt{Var(\hat{T})}\text{ (Lower bound)} 
\end{align}</math>
 
====A Gamma Distribution Example====
Twenty-four units were reliability tested, and the following life test data were obtained:
 
 
<math>\begin{matrix}
  \text{61} & \text{50} & \text{67} & \text{49} & \text{53} & \text{62}  \\
  \text{53} & \text{61} & \text{43} & \text{65} & \text{53} & \text{56}  \\
  \text{62} & \text{56} & \text{58} & \text{55} & \text{58} & \text{48}  \\
  \text{66} & \text{44} & \text{48} & \text{58} & \text{43} & \text{40}  \\
\end{matrix}</math>
 
Fitting the gamma distribution to this data, using maximum likelihood as the analysis method, gives the following parameters:
 
<math>\begin{align}
  & \hat{\mu }= & 7.72E-02 \\
& \hat{k}= & 50.4908 
\end{align}</math>
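
A quick cross-check of these ML estimates can be made with SciPy's gamma fit (a sketch under the assumption that the location parameter is fixed at zero;  <math>\widehat{\mu }</math>  is then recovered as the logarithm of the fitted scale, and the results should be close to, though not necessarily identical to, the values above):

<pre>
import numpy as np
from scipy.stats import gamma

data = np.array([61, 50, 67, 49, 53, 62, 53, 61, 43, 65, 53, 56,
                 62, 56, 58, 55, 58, 48, 66, 44, 48, 58, 43, 40], dtype=float)

k_hat, loc, scale = gamma.fit(data, floc=0)   # MLE with the location fixed at zero
mu_hat = np.log(scale)                        # this chapter's mu is ln(scale)
print(mu_hat, k_hat)
</pre>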
 
Using rank regression on  <math>X,</math>  the estimated parameters are:
 
<math>\begin{align}
  & \hat{\mu }= & 0.2915 \\
& \hat{k}= & 41.1726 
\end{align}</math>
 
 
Using rank regression on  <math>Y,</math>  the estimated parameters are:
 
<math>\begin{align}
  & \hat{\mu }= & 0.2915 \\
& \hat{k}= & 41.1726 
\end{align}</math>
 
===The Logistic Distribution===
The logistic distribution has been used for growth models, and is used in a certain type of regression known as logistic regression. It also has applications in modeling life data. The shape of the logistic distribution and the normal distribution are very similar [27]. There are some who argue that the logistic distribution is inappropriate for modeling lifetime data because the left-hand limit of the distribution extends to negative infinity. This could conceivably result in modeling negative times-to-failure. However, provided that the distribution in question has a mean that is relatively large compared to its scale parameter, the issue of negative failure times should not present itself as a problem.
====Logistic Probability Density Function====
The logistic  <math>pdf</math>  is given by:
 
<math>\begin{matrix}
  f(T)=\tfrac{{{e}^{z}}}{\sigma {{(1+{{e}^{z}})}^{2}}}  \\
  z=\tfrac{t-\mu }{\sigma }  \\
  -\infty <T<\infty ,\ \ -\infty <\mu <\infty ,\sigma >0  \\
\end{matrix}</math>
 
where: 
 
<math>\begin{align}
  \mu = & \text{location parameter (also denoted as }\overline{T}\text{)} \\
  \sigma = & \text{scale parameter} 
\end{align}</math>
 
====The Logistic Mean, Median and Mode====
The logistic mean or MTTF is actually one of the parameters of the distribution, usually denoted as  <math>\mu </math> . Since the logistic distribution is symmetrical,
the median and the mode are always equal to the mean,  <math>\mu =\tilde{T}=\breve{T}.</math>
 
====The Logistic Standard Deviation====
The standard deviation of the logistic distribution, 
<math>{{\sigma }_{T}}</math>  , is given by:
 
<math>{{\sigma }_{T}}=\sigma \pi \frac{\sqrt{3}}{3}</math>
 
 
====The Logistic Reliability Function====
The reliability for a mission of time  <math>T</math> , starting at age 0, for the logistic distribution is determined by:
 
 
<math>R(T)=\int_{T}^{\infty }f(t)dt</math>
 
or:
 
 
<math>R(T)=\frac{1}{1+{{e}^{z}}}</math>
 
 
The unreliability function is:
 
 
<math>F=\frac{{{e}^{z}}}{1+{{e}^{z}}}</math>
 
where:
 
 
<math>z=\frac{T-\mu }{\sigma }</math>
 
====The Logistic Conditional Reliability Function====
The logistic conditional reliability function is given by:
 
<math>R(t/T)=\frac{R(T+t)}{R(T)}=\frac{1+{{e}^{\tfrac{T-\mu }{\sigma }}}}{1+{{e}^{\tfrac{t+T-\mu }{\sigma }}}}</math>
 
 
====The Logistic Reliable Life====
The logistic reliable life is given by:
 
 
<math>{{T}_{R}}=\mu +\sigma [\ln (1-R)-\ln (R)]</math>
 
====The Logistic Failure Rate Function====
The logistic failure rate function is given by:
 
<math>\lambda (T)=\frac{{{e}^{z}}}{\sigma (1+{{e}^{z}})}</math>
 
 
====Characteristics of the Logistic Distribution====
• The logistic distribution has no shape parameter. This means that the logistic  <math>pdf</math>  has only one shape, the bell shape, and this shape does not change. The shape of the logistic distribution is very similar to that of the normal distribution.
 
• The mean,  <math>\mu </math>  (also called the mean life or the  <math>MTTF</math> ), is also the location parameter of the logistic  <math>pdf</math> , as it locates the  <math>pdf</math>  along the abscissa. It can assume values of  <math>-\infty <\bar{T}<\infty </math> .
 
• As  <math>\mu </math>  decreases, the  <math>pdf</math>  is shifted to the left.
 
• As  <math>\mu </math>  increases, the  <math>pdf</math>  is shifted to the right.
 
• As  <math>\sigma </math>  decreases, the  <math>pdf</math>  gets pushed toward the mean, or it becomes narrower and taller.
 
• As  <math>\sigma </math>  increases, the  <math>pdf</math>  spreads out away from the mean, or it becomes broader and shallower.
 
• The scale parameter can assume values of  <math>0<\sigma <\infty </math>.
 
• The logistic  <math>pdf</math>  starts at  <math>T=-\infty </math>  with  <math>f(T)=0</math> . As  <math>T</math>  increases,  <math>f(T)</math>  also increases, goes through its point of inflection and reaches its maximum value at  <math>T=\bar{T}</math> . Thereafter,  <math>f(T)</math>  decreases, goes through its point of inflection and assumes a value of  <math>f(T)=0</math>  at  <math>T=+\infty </math> .
 
• For  <math>T=\pm \infty ,</math>  the  <math>pdf</math>  equals  <math>0.</math>  The maximum value of the  <math>pdf</math>  occurs at  <math>T</math> = <math>\mu </math>  and equals  <math>\tfrac{1}{4\sigma }.</math>
 
• The point of inflection of the  <math>pdf</math>  plot is the point where the second derivative of the  <math>pdf</math>  equals zero. The inflection point occurs at  <math>T=\mu +\sigma \ln (2\pm \sqrt{3})</math>  or  <math>T\approx \mu \pm \sigma 1.31696</math>.
 
• If the location parameter  <math>\mu </math>  decreases, the reliability plot is shifted to the left. If  <math>\mu </math>  increases, the reliability plot is shifted to the right.
 
• If  <math>T=\mu </math>  then  <math>R=0.5</math> , and  <math>T=\mu </math>  is the inflection point of the reliability plot. If  <math>T<\mu </math>  then  <math>R(t)</math>  is concave (concave down); if  <math>T>\mu </math>  then  <math>R(t)</math>  is convex (concave up). For  <math>T<\mu ,</math>  <math>\lambda (t)</math>  is convex (concave up); for  <math>T>\mu ,</math>  <math>\lambda (t)</math>  is concave (concave down).
 
• The main difference between the normal distribution and the logistic distribution lies in the tails and in the behavior of the failure rate function. The logistic distribution has slightly longer tails compared to the normal distribution. Also, in the upper tail of the logistic distribution, the failure rate function levels out for large  <math>t</math> , approaching  <math>\tfrac{1}{\sigma }.</math>
 
• If the location parameter  <math>\mu </math>  decreases, the failure rate plot is shifted to the left. Conversely, if  <math>\mu </math>  increases, the failure rate plot is shifted to the right.
 
• <math>\lambda (t)</math>  always increases: as  <math>T\to -\infty ,</math>  <math>\lambda (t)\to 0,</math>  and as  <math>T\to \infty ,</math>  <math>\lambda (t)\to \tfrac{1}{\sigma }.</math>  It is always the case that  <math>0\le \lambda (t)\le \tfrac{1}{\sigma }.</math>
 
• If  <math>\sigma </math>  increases, then  <math>\lambda (t)</math>  increases more slowly and smoothly. The segment of time where  <math>0<\lambda (t)<\tfrac{1}{\sigma }</math>  increases, too, whereas the region where  <math>\lambda (t)</math>  is close to  <math>0</math>  or  <math>\tfrac{1}{\sigma }</math>  gets narrower. Conversely, if  <math>\sigma </math>  decreases, then  <math>\lambda (t)</math>  increases more quickly and sharply. The segment of time where  <math>0<</math>  <math>\lambda (t)<\tfrac{1}{\sigma }</math>  decreases, too, whereas the region where  <math>\lambda (t)</math>  is close to  <math>0</math>  or  <math>\tfrac{1}{\sigma }</math>  gets broader.
 
====Weibull++ Notes on Negative Time Values====
One of the disadvantages of using the logistic distribution for reliability calculations is the fact that the logistic distribution starts at negative infinity. This can result in negative values for some of the results. Negative values for time are not accepted by most of the components of Weibull++, nor are they returned. Certain components of the application reserve negative values for suspensions, or will not return negative results. For example, the Quick Calculation Pad will return a null value (zero) if the result is negative. Only the Free-Form (Probit) data sheet can accept negative values for the random variable (x-axis values).
 
 
====Probability Paper====
The form of the logistic probability paper is based on linearizing the  <math>cdf</math> . From the unreliability function above,  <math>z</math>  can be calculated as a function of the  <math>cdf</math>  <math>F</math>  as follows:
 
<math>z=\ln (F)-\ln (1-F)</math>
 
 
or, using the definition of  <math>z</math> :
 
<math>\frac{T-\mu }{\sigma }=\ln (F)-\ln (1-F)</math>
 
Then:
 
<math>\ln (F)-\ln (1-F)=-\frac{\mu }{\sigma }+\frac{1}{\sigma }T</math>
 
 
Now let:
 
<math>y=\ln (F)-\ln (1-F)</math>
 
 
<math>x=T</math>
 
 
and:
 
<math>a=-\frac{\mu }{\sigma }</math>
 
 
<math>b=\frac{1}{\sigma }</math>
 
 
which results in the following linear equation:
 
<math>y=a+bx</math>
 
 
The logistic probability paper resulting from this linearized  <math>cdf</math>  function is shown next.
 
 
 
Since the logistic distribution is symmetrical, the area under the  <math>pdf</math>  curve from  <math>-\infty </math>  to  <math>\mu </math>  is  <math>0.5</math> , as is the area from  <math>\mu </math>  to  <math>+\infty </math> . Consequently, the value of  <math>\mu </math>  is said to be the point where  <math>R(t)=Q(t)=50%</math> .  This means that the estimate of  <math>\mu </math>  can be read from the point where the plotted line crosses the 50% unreliability line.
For  <math>z=1</math> ,  <math>\sigma =t-\mu </math>  and  <math>R(t)=\tfrac{1}{1+\exp (1)}\approx 0.2689.</math>  Therefore,  <math>\sigma </math>  can be found by subtracting  <math>\mu </math>  from the time value where the plotted probability line crosses the 73.10% unreliability (26.89% reliability) horizontal line.
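
The same linearization underlies rank regression. The following is a minimal sketch of the idea (assuming NumPy; the failure times are illustrative complete data, and Benard's approximation for the median ranks is an assumption about the plotting positions, with suspensions the ranks would need adjustment): plot  <math>y=\ln (F)-\ln (1-F)</math>  against  <math>t</math>, fit a straight line, and recover  <math>\sigma =1/b</math>  and  <math>\mu =-a/b</math>.

<pre>
import numpy as np

t = np.array([12.0, 15.0, 19.0, 21.0, 24.0, 27.0, 30.0, 33.0, 38.0, 44.0])
n = len(t)
i = np.arange(1, n + 1)
F = (i - 0.3) / (n + 0.4)           # Benard's approximation of the median ranks
y = np.log(F) - np.log(1.0 - F)     # linearized cdf

b, a = np.polyfit(t, y, 1)          # least-squares line y = a + b*t (slope first)
sigma_hat = 1.0 / b
mu_hat = -a / b
print(mu_hat, sigma_hat)
</pre>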
 
====Confidence Bounds====
In this section, we present the methods used in the application to estimate the different types of confidence bounds for logistically distributed data. The complete derivations were presented in detail (for a general function) in Chapter 5.
 
====Bounds on the Parameters====
The lower and upper bounds on the location parameter  <math>\widehat{\mu }</math>  are estimated from:
 
<math>{{\mu }_{U}}=\widehat{\mu }+{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })\text{ }}\text{ (upper bound)}</math>
 
<math>{{\mu }_{L}}=\widehat{\mu }-{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })\text{ }}\text{ (lower bound)}</math>
 
The lower and upper bounds on the scale parameter  <math>\widehat{\sigma }</math>  are estimated from:
 
<math>{{\sigma }_{U}}=\widehat{\sigma }{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })\text{ }}}{\widehat{\sigma }}}}(\text{upper bound})</math>
 
 
<math>{{\sigma }_{L}}=\widehat{\sigma }{{e}^{\tfrac{-{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })\text{ }}}{\widehat{\sigma }}}}\text{ (lower bound)}</math>
 
where  <math>{{K}_{\alpha }}</math>  is defined by:
 
<math>\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})</math>
 
 
If  <math>\delta </math>  is the confidence level, then  <math>\alpha =\tfrac{1-\delta }{2}</math>  for the two-sided bounds, and  <math>\alpha =1-\delta </math>  for the one-sided bounds.
The variances and covariances of  <math>\widehat{\mu }</math>  and  <math>\widehat{\sigma }</math>  are estimated from the Fisher matrix, as follows:
 
<math>\left( \begin{matrix}
  \widehat{Var}\left( \widehat{\mu } \right) & \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right)  \\
  \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right) & \widehat{Var}\left( \widehat{\sigma } \right)  \\
\end{matrix} \right)=\left( \begin{matrix}
  -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\mu }^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma }  \\
  {} & {}  \\
  -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma } & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }^{2}}}  \\
\end{matrix} \right)_{\mu =\widehat{\mu },\sigma =\widehat{\sigma }}^{-1}</math>
 
<math>\Lambda </math>  is the log-likelihood function of the logistic distribution, described in Chapter 3 and Appendix C.
 
====Bounds on Reliability====
The reliability of the logistic distribution is:
 
<math>\widehat{R}=\frac{1}{1+{{e}^{\widehat{z}}}}</math>
 
where:
 
<math>\widehat{z}=\frac{T-\widehat{\mu }}{\widehat{\sigma }}</math>
 
 
Here  <math>-\infty <T<\infty </math> ,  <math>-\infty <\mu <\infty </math> , and  <math>0<\sigma <\infty </math> ; therefore,  <math>z</math>  also ranges from  <math>-\infty </math>  to  <math>+\infty </math> . The bounds on  <math>z</math>  are estimated from:
 
<math>{{z}_{U}}=\widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})\text{ }}</math>
 
 
<math>{{z}_{L}}=\widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})\text{ }}\text{ }</math>
 
 
where:
 
<math>Var(\widehat{z})={{(\frac{\partial z}{\partial \mu })}^{2}}Var(\widehat{\mu })+2(\frac{\partial z}{\partial \mu })(\frac{\partial z}{\partial \sigma })Cov(\widehat{\mu },\widehat{\sigma })+{{(\frac{\partial z}{\partial \sigma })}^{2}}Var(\widehat{\sigma })</math>
 
or:
 
<math>Var(\widehat{z})=\frac{1}{{{\sigma }^{2}}}(Var(\widehat{\mu })+2\widehat{z}Cov(\widehat{\mu },\widehat{\sigma })+{{\widehat{z}}^{2}}Var(\widehat{\sigma }))</math>
 
The upper and lower bounds on reliability are:
 
<math>{{R}_{U}}=\frac{1}{1+{{e}^{{{z}_{L}}}}}\text{(upper bound)}</math>
 
<math>{{R}_{L}}=\frac{1}{1+{{e}^{{{z}_{U}}}}}\text{(lower bound)}</math>
 
====Bounds on Time====
The bounds around time for a given logistic percentile (unreliability) are estimated by first solving the reliability equation with respect to time as follows:
 
<math>\widehat{T}(\widehat{\mu },\widehat{\sigma })=\widehat{\mu }+\widehat{\sigma }z</math>
 
 
where:
 
 
<math>z=\ln (1-R)-\ln (R)</math>
 
 
 
 
<math>Var(\widehat{T})={{(\frac{\partial T}{\partial \mu })}^{2}}Var(\widehat{\mu })+2(\frac{\partial T}{\partial \mu })(\frac{\partial T}{\partial \sigma })Cov(\widehat{\mu },\widehat{\sigma })+{{(\frac{\partial T}{\partial \sigma })}^{2}}Var(\widehat{\sigma })</math>
 
 
or:
 
 
<math>Var(\widehat{T})=Var(\widehat{\mu })+2\widehat{z}Cov(\widehat{\mu },\widehat{\sigma })+{{\widehat{z}}^{2}}Var(\widehat{\sigma })</math>
 
 
The upper and lower bounds are then found by:
 
<math>{{T}_{U}}=\widehat{T}+{{K}_{\alpha }}\sqrt{Var(\widehat{T})\text{ }}(\text{upper bound})</math>
 
 
<math>{{T}_{L}}=\widehat{T}-{{K}_{\alpha }}\sqrt{Var(\widehat{T})\text{ }}(\text{lower bound})</math>
 
 
====A Logistic Distribution Example====
The lifetime of a mechanical valve is known to follow a logistic distribution. Ten units were tested for 28 months and the following months-to-failure data was collected.
 
 
<math>\overset{{}}{\mathop{\text{Table 10}\text{.1 - Times-to-Failure Data with Suspensions}}}\,</math>
 
 
<math>\begin{matrix}
  \text{Data Point Index} & \text{State F or S} & \text{State End Time}  \\
  \text{1} & \text{F} & \text{8}  \\
  \text{2} & \text{F} & \text{10}  \\
  \text{3} & \text{F} & \text{15}  \\
  \text{4} & \text{F} & \text{17}  \\
  \text{5} & \text{F} & \text{19}  \\
  \text{6} & \text{F} & \text{26}  \\
  \text{7} & \text{F} & \text{27}  \\
  \text{8} & \text{S} & \text{28}  \\
  \text{9} & \text{S} & \text{28}  \\
  \text{10} & \text{S} & \text{28}  \\
\end{matrix}</math>
 
• Determine the valve's design life if specifications call for a reliability goal of 0.90.
 
• The valve is to be used in a pumping device that requires 1 month of continuous operation. What is the probability of the pump failing due to the valve?
 
This data set can be entered into Weibull++ as follows:
 
 
The computed parameters for maximum likelihood are:
 
<math>\begin{align}
  & \widehat{\mu }= & 22.34 \\
& \hat{\sigma }= & 6.15 
\end{align}</math>
 
• The valve's design life, along with 90% two sided confidence bounds, can be obtained using the QCP as follows:
 
• The probability, along with 90% two sided confidence bounds, that the pump fails due to a valve failure during the first month is obtained as follows:
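
As a rough cross-check of the QCP results, the two quantities can also be computed directly from the formulas in this chapter using the ML point estimates above (a sketch assuming SciPy; it yields point estimates only, not the confidence bounds):

<pre>
import numpy as np
from scipy.stats import logistic

mu, sigma = 22.34, 6.15                         # ML estimates from above

# design life for a reliability goal of 0.90:  T_R = mu + sigma*[ln(1-R) - ln(R)]
R = 0.90
T_R = mu + sigma * (np.log(1 - R) - np.log(R))
print(T_R)                                      # design life in months

# probability the pump fails due to the valve within 1 month:  F(1)
print(logistic.cdf(1.0, loc=mu, scale=sigma))
</pre>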
 
===The Loglogistic Distribution===
As the name suggests, the loglogistic distribution has certain similarities to the logistic distribution. A random variable is loglogistically distributed if the logarithm of the random variable is logistically distributed. Because of this, there are many mathematical similarities between the two distributions [27]. For example, the mathematical reasoning for the construction of the probability plotting scales is very similar for these two distributions.
 
====Loglogistic Probability Density Function====
The loglogistic distribution is a two-parameter distribution with parameters  <math>\mu </math>  and  <math>\sigma </math> . The  <math>pdf</math>  for this distribution is given by:
 
<math>f(T)=\frac{{{e}^{z}}}{\sigma T{{(1+{{e}^{z}})}^{2}}}</math>
 
where:
 
<math>z=\frac{{T}'-\mu }{\sigma }</math>
 
<math>{T}'=\ln (T)</math>
 
and:
 
<math>\begin{align}
  & \mu = & \text{scale parameter} \\
& \sigma = & \text{shape parameter} 
\end{align}</math>
 
where  <math>0<t<\infty </math> ,  <math>-\infty <\mu <\infty </math>  and  <math>0<\sigma <\infty </math> .
 
====Mean, Median and Mode====
The mean of the loglogistic distribution,  <math>\overline{T}</math> , is given by:
 
<math>\overline{T}={{e}^{\mu }}\Gamma (1+\sigma )\Gamma (1-\sigma )</math>
 
 
Note that for  <math>\sigma \ge 1,</math>  <math>\overline{T}</math>  does not exist.
 
The median of the loglogistic distribution,  <math>\breve{T}</math> , is given by:
 
<math>\breve{T}={{e}^{\mu }}</math>
 
The mode of the loglogistic distribution,  <math>\tilde{T}</math> , if  <math>\sigma <1,</math>  is given by:
 
<math>\tilde{T}={{e}^{\mu +\sigma \ln \left( \tfrac{1-\sigma }{1+\sigma } \right)}}</math>
 
====The Standard Deviation====
The standard deviation of the loglogistic distribution,  <math>{{\sigma }_{T}}</math> , is given by:
 
<math>{{\sigma }_{T}}={{e}^{\mu }}\sqrt{\Gamma (1+2\sigma )\Gamma (1-2\sigma )-{{(\Gamma (1+\sigma )\Gamma (1-\sigma ))}^{2}}}</math>
 
 
Note that for  <math>\sigma \ge 0.5,</math>  the standard deviation does not exist.
 
====The Loglogistic Reliability Function====
The reliability for a mission of time  <math>T</math> , starting at age 0, for the loglogistic distribution is determined by:
 
<math>R=\frac{1}{1+{{e}^{z}}}</math>
 
where:
 
<math>z=\frac{{T}'-\mu }{\sigma }</math>
 
<math>{T}'=\ln (t)</math>
 
The unreliability function is:
 
<math>F=\frac{{{e}^{z}}}{1+{{e}^{z}}}</math>
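
These functions correspond to a standard log-logistic (Fisk) distribution with shape  <math>1/\sigma </math>  and scale  <math>{{e}^{\mu }}</math>, so they can be cross-checked against a library implementation. A sketch assuming SciPy (parameter values are illustrative):

<pre>
import numpy as np
from scipy.stats import fisk   # SciPy's name for the log-logistic distribution

mu, sigma, t = 5.0, 0.4, 120.0                       # illustrative values
z = (np.log(t) - mu) / sigma

print(1.0 / (1.0 + np.exp(z)))                       # R(t) from the formula above
print(fisk.sf(t, c=1.0 / sigma, scale=np.exp(mu)))   # same value from scipy.stats.fisk
</pre>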
 
====The Loglogistic Reliable Life====
The loglogistic reliable life is:
 
 
<math>{{T}_{R}}={{e}^{\mu +\sigma [\ln (1-R)-\ln (R)]}}</math>
 
====The Loglogistic Failure Rate Function====
The loglogistic failure rate is given by:
 
 
<math>\lambda (T)=\frac{{{e}^{z}}}{\sigma T(1+{{e}^{z}})}</math>
 
 
====Distribution Characteristics====
For  <math>\sigma >1</math> :
 
• <math>f(T)</math>  decreases monotonically and is convex. Mode and mean do not exist.
 
For  <math>\sigma =1</math> :
 
• <math>f(T)</math>  decreases monotonically and is convex. Mode and mean do not exist. As  <math>T\to 0</math> ,  <math>f(T)\to \tfrac{1}{\sigma {{e}^{\tfrac{\mu }{\sigma }}}}.</math>
 
• As  <math>T\to 0</math>  ,  <math>\lambda (T)\to \tfrac{1}{\sigma {{e}^{\tfrac{\mu }{\sigma }}}}.</math>
 
For  <math>0<\sigma <1</math> :
 
• The shape of the loglogistic distribution is very similar to that of the lognormal distribution and the Weibull distribution.
 
• The  <math>pdf</math>  starts at zero, increases to its mode, and decreases thereafter.
 
• As  <math>\mu </math>  increases, while  <math>\sigma </math>  is kept the same, the  <math>pdf</math>  gets stretched out to the right and its height decreases, while maintaining its shape.
 
• As  <math>\mu </math>  decreases, while  <math>\sigma </math>  is kept the same, the  <math>pdf</math>  gets pushed in towards the left and its height increases.
 
• <math>\lambda (T)</math>  increases till  <math>T={{e}^{\mu +\sigma \ln (\tfrac{1-\sigma }{\sigma })}}</math>  and decreases thereafter.  <math>\lambda (T)</math>  is concave at first, then becomes convex.
 
====Confidence Bounds====
The method used by the application in estimating the different types of confidence bounds for loglogistically distributed data is presented in this section. The complete derivations were presented in detail for a general function in Chapter 5.
====Bounds on the Parameters====
The lower and upper bounds on the parameter  <math>{\mu }'</math>  are estimated from:
 
 
<math>\begin{align}
  & \mu _{U}^{\prime }= & {{\widehat{\mu }}^{\prime }}+{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (upper bound)} \\
& \mu _{L}^{\prime }= & {{\widehat{\mu }}^{\prime }}-{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (lower bound)} 
\end{align}</math>
 
 
For the standard deviation,  <math>{{\widehat{\sigma }}_{{{T}'}}}</math> ,  <math>\ln ({{\widehat{\sigma }}_{{{T}'}}})</math>  is treated as normally distributed, and the bounds are estimated from:
 
<math>\begin{align}
  & {{\sigma }_{U}}= & {{\widehat{\sigma }}_{{{T}'}}}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })}}{\widehat{\sigma }}}}\text{ (upper bound)} \\
& {{\sigma }_{L}}= & \frac{{{\widehat{\sigma }}_{{{T}'}}}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })}}{{{\widehat{\sigma }}_{{{T}'}}}}}}}\text{ (lower bound)} 
\end{align}</math>
 
where  <math>{{K}_{\alpha }}</math>  is defined by:
 
<math>\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})</math>
 
 
If  <math>\delta </math>  is the confidence level, then  <math>\alpha =\tfrac{1-\delta }{2}</math>  for the two-sided bounds, and  <math>\alpha =1-\delta </math>  for the one-sided bounds.
 
The variances and covariances of  <math>\widehat{\mu }</math>  and  <math>\widehat{\sigma }</math>  are estimated as follows:
 
<math>\left( \begin{matrix}
  \widehat{Var}\left( \widehat{\mu } \right) & \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right)  \\
  \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right) & \widehat{Var}\left( \widehat{\sigma } \right)  \\
\end{matrix} \right)=\left( \begin{matrix}
  -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{(\mu )}^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma }  \\
  {} & {}  \\
  -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma } & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }^{2}}}  \\
\end{matrix} \right)_{\mu =\widehat{\mu },\sigma =\widehat{\sigma }}^{-1}</math>
 
 
where  <math>\Lambda </math>  is the log-likelihood function of the loglogistic distribution.
 
====Bounds on Reliability====
The reliability of the loglogistic distribution is:
 
<math>\widehat{R}=\frac{1}{1+\exp (\widehat{z})}</math>
 
 
where:
 
<math>\widehat{z}=\frac{{T}'-\widehat{\mu }}{\widehat{\sigma }}</math>
 
 
Here  <math>0<t<\infty </math> ,  <math>-\infty <\mu <\infty </math> , and  <math>0<\sigma <\infty </math> ; therefore,  <math>-\infty <\ln (t)<\infty </math>  and  <math>z</math>  also ranges from  <math>-\infty </math>  to  <math>+\infty </math> . The bounds on  <math>z</math>  are estimated from:
 
<math>{{z}_{U}}=\widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})}</math>
 
 
<math>{{z}_{L}}=\widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})\text{ }}\text{ }</math>
 
 
where:
 
<math>Var(\widehat{z})={{(\frac{\partial z}{\partial \mu })}^{2}}Var({{\widehat{\mu }}^{\prime }})+2(\frac{\partial z}{\partial \mu })(\frac{\partial z}{\partial \sigma })Cov(\widehat{\mu },\widehat{\sigma })+{{(\frac{\partial z}{\partial \sigma })}^{2}}Var(\widehat{\sigma })</math>
 
 
or:
 
<math>Var(\widehat{z})=\frac{1}{{{\sigma }^{2}}}(Var(\widehat{\mu })+2\widehat{z}Cov(\widehat{\mu },\widehat{\sigma })+{{\widehat{z}}^{2}}Var(\widehat{\sigma }))</math>
 
 
The upper and lower bounds on reliability are:
 
<math>{{R}_{U}}=\frac{1}{1+{{e}^{{{z}_{L}}}}}\text{(Upper bound)}</math>
 
 
<math>{{R}_{L}}=\frac{1}{1+{{e}^{{{z}_{U}}}}}\text{(Lower bound)}</math>
 
 
====Bounds on Time====
The bounds around time for a given loglogistic percentile, or unreliability, are estimated by first solving the reliability equation with respect to time, as follows:
 
<math>\widehat{T}(\widehat{\mu },\widehat{\sigma })={{e}^{\widehat{\mu }+\widehat{\sigma }z}}</math>
 
 
where:
 
<math>z=\ln (1-R)-\ln (R)</math>
 
 
or:
 
<math>\ln (T)=\widehat{\mu }+\widehat{\sigma }z</math>
 
 
Let:
 
<math>u=\ln (T)=\widehat{\mu }+\widehat{\sigma }z</math>
 
 
then:
 
<math>{{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})\text{ }}\text{ }</math>
 
 
 
 
<math>{{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})\text{ }}\text{ }</math>
 
 
where:
 
 
<math>Var(\widehat{u})={{(\frac{\partial u}{\partial \mu })}^{2}}Var(\widehat{\mu })+2(\frac{\partial u}{\partial \mu })(\frac{\partial u}{\partial \sigma })Cov(\widehat{\mu },\widehat{\sigma })+{{(\frac{\partial u}{\partial \sigma })}^{2}}Var(\widehat{\sigma })</math>
 
 
or:
 
<math>Var(\widehat{u})=Var(\widehat{\mu })+2\widehat{z}Cov(\widehat{\mu },\widehat{\sigma })+{{\widehat{z}}^{2}}Var(\widehat{\sigma })</math>
 
 
The upper and lower bounds are then found by:
 
<math>{{T}_{U}}={{e}^{{{u}_{U}}}}\text{ (upper bound)}</math>
 
 
<math>{{T}_{L}}={{e}^{{{u}_{L}}}}\text{ (lower bound)}</math>
 
 
====A LogLogistic Distribution Example====
Determine the loglogistic parameter estimates for the data given in Table 10.3.
 
<math>\overset{{}}{\mathop{\text{Table 10}\text{.3 - Test data}}}\,</math>
 
 
<math>\begin{matrix}
  \text{Data point index} & \text{Last Inspected} & \text{State End time}  \\
  \text{1} & \text{105} & \text{106}  \\
  \text{2} & \text{197} & \text{200}  \\
  \text{3} & \text{297} & \text{301}  \\
  \text{4} & \text{330} & \text{335}  \\
  \text{5} & \text{393} & \text{401}  \\
  \text{6} & \text{423} & \text{426}  \\
  \text{7} & \text{460} & \text{468}  \\
  \text{8} & \text{569} & \text{570}  \\
  \text{9} & \text{675} & \text{680}  \\
  \text{10} & \text{884} & \text{889}  \\
\end{matrix}</math>
 
 
Entering the above data using the Times-to-failure data Folio Data Type, with the My data set contains interval and/or left censored data option selected under the Times-to-failure data options, the computed parameters for maximum likelihood are calculated to be:
 
<math>\begin{align}
  & {{{\hat{\mu }}}^{\prime }}= & 5.9772 \\
& {{{\hat{\sigma }}}_{{{T}'}}}= & 0.3256 
\end{align}</math>
 
 
For rank regression on  <math>X\ \ :</math> 
 
<math>\begin{align}
  & \hat{\mu }= & 5.9281 \\
& \hat{\sigma }= & 0.3821 
\end{align}</math>
 
 
For rank regression on  <math>Y\ \ :</math> 
 
<math>\begin{align}
  & \hat{\mu }= & 5.9772 \\
& \hat{\sigma }= & 0.3256 
\end{align}</math>
 
===The Gumbel/SEV Distribution===
The Gumbel distribution is also referred to as the Smallest Extreme Value (SEV) distribution or the Smallest Extreme Value (Type I) distribution. The Gumbel distribution's  <math>pdf</math>  is skewed to the left, unlike the Weibull distribution's  <math>pdf</math> , which is skewed to the right. The Gumbel distribution is appropriate for modeling strength, which is sometimes skewed to the left (few weak units in the lower tail, most units in the upper tail of the strength population). The Gumbel distribution could also be appropriate for modeling the life of products that experience very quick wear-out after reaching a certain age. The distribution of logarithms of times can often be modeled with the Gumbel distribution (in addition to the more common lognormal distribution). [27]
====Gumbel Probability Density Function====
The  <math>pdf</math>  of the Gumbel distribution is given by:
 
<math>f(T)=\frac{1}{\sigma }{{e}^{z-{{e}^{z}}}}</math>
 
 
 
<math>f(T)\ge 0,\ \sigma >0</math>
 
where:
 
<math>z=\frac{T-\mu }{\sigma }</math>
 
and:
 
<math>\begin{align}
  & \mu = & \text{location parameter} \\
& \sigma = & \text{scale parameter} 
\end{align}</math>
 
 
====The Gumbel Mean, Median and Mode====
The Gumbel mean or MTTF is:
 
<math>\overline{T}=\mu -\sigma \gamma </math>
 
where  <math>\gamma \approx 0.5772</math>  (Euler's constant).
 
The mode of the Gumbel distribution is:
 
<math>\tilde{T}=\mu </math>
 
The median of the Gumbel distribution is:
 
<math>\widehat{T}=\mu +\sigma \ln (\ln (2))</math>
 
====The Gumbel Standard Deviation====
The standard deviation for the Gumbel distribution is given by:
 
<math>{{\sigma }_{T}}=\sigma \pi \frac{\sqrt{6}}{6}</math>
 
 
====The Gumbel Reliability Function====
The reliability for a mission of time  <math>T</math>  for the Gumbel distribution is given by:
 
<math>R(T)={{e}^{-{{e}^{z}}}}</math>
 
The unreliability function is given by:
 
<math>F(T)=1-{{e}^{-{{e}^{z}}}}</math>
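
These expressions match the smallest extreme value (left-skewed Gumbel) distribution found in standard libraries. A sketch assuming SciPy (scipy.stats.gumbel_l uses the same location/scale convention as  <math>\mu </math>  and  <math>\sigma </math>  here; the values are illustrative):

<pre>
import numpy as np
from scipy.stats import gumbel_l   # smallest extreme value (left-skewed Gumbel)

mu, sigma, t = 100.0, 12.0, 90.0                # illustrative values
z = (t - mu) / sigma

print(np.exp(-np.exp(z)))                       # R(T) = exp(-exp(z))
print(gumbel_l.sf(t, loc=mu, scale=sigma))      # same value from scipy.stats.gumbel_l
</pre>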
 
====The Gumbel Reliable Life====
The Gumbel reliable life is given by:
 
 
<math>{{T}_{R}}=\mu +\sigma [\ln (-\ln (R))]</math>
 
 
====The Gumbel Failure Rate Function====
The instantaneous Gumbel failure rate is given by:
 
<math>\lambda =\frac{{{e}^{z}}}{\sigma }</math>
 
 
====Characteristics of the Gumbel Distribution====
Some of the specific characteristics of the Gumbel distribution are the following:
 
• The shape of the Gumbel distribution is skewed to the left. The Gumbel  <math>pdf</math>  has no shape parameter. This means that the Gumbel  <math>pdf</math>  has only one shape, which does not change.
 
• The Gumbel  <math>pdf</math>  has location parameter  <math>\mu ,</math>  which is equal to the mode  <math>\tilde{T},</math>  but it differs from median and mean. This is because the Gumbel distribution is not symmetrical about its  <math>\mu </math> .
 
• As  <math>\mu </math>  decreases, the  <math>pdf</math>  is shifted to the left.
 
• As  <math>\mu </math>  increases, the  <math>pdf</math>  is shifted to the right.
 
• As  <math>\sigma </math>  increases, the  <math>pdf</math>  spreads out and becomes shallower.
 
• As  <math>\sigma </math>  decreases, the  <math>pdf</math>  becomes taller and narrower.
 
• For  <math>T=\pm \infty ,</math>  the  <math>pdf</math>  equals  <math>0.</math>  For  <math>T=\mu </math> , the  <math>pdf</math>  reaches its maximum value of  <math>\frac{1}{\sigma e}.</math>
 
• The points of inflection of the  <math>pdf</math>  graph are  <math>T=\mu \pm \sigma \ln (\tfrac{3\pm \sqrt{5}}{2})</math>  or  <math>T\approx \mu \pm \sigma 0.96242</math> .
 
• If times follow the Weibull distribution, then the logarithm of times follow a Gumbel distribution. If  <math>{{t}_{i}}</math>  follows a Weibull distribution with  <math>\beta </math>  and  <math>\eta </math>  , then the  <math>Ln({{t}_{i}})</math>  follows a Gumbel distribution with  <math>\mu =\ln (\eta )</math>  and  <math>\sigma =\tfrac{1}{\beta }</math>  [32] <math>.</math>
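
A quick Monte Carlo sketch of this last relationship (assuming NumPy and SciPy; the Weibull parameters are illustrative): the logarithms of Weibull samples should fit a Gumbel (SEV) distribution with  <math>\mu \approx \ln (\eta )</math>  and  <math>\sigma \approx 1/\beta </math>.

<pre>
import numpy as np
from scipy.stats import weibull_min, gumbel_l

beta, eta = 2.5, 1000.0                              # illustrative Weibull parameters
samples = weibull_min.rvs(c=beta, scale=eta, size=100000, random_state=1)

mu_hat, sigma_hat = gumbel_l.fit(np.log(samples))    # fit SEV to the log-times
print(mu_hat, np.log(eta))                           # should be close
print(sigma_hat, 1.0 / beta)                         # should be close
</pre>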
 
====Probability Paper====
The form of the Gumbel probability paper is based on a linearization of the  <math>cdf</math> . From the unreliability function above:
 
<math>z=\ln (-\ln (1-F))</math>
 
 
and, using the definition of  <math>z</math> :
 
<math>\frac{T-\mu }{\sigma }=\ln (-\ln (1-F))</math>
 
 
Then:
 
<math>\ln (-\ln (1-F))=-\frac{\mu }{\sigma }+\frac{1}{\sigma }T</math>
 
 
Now let:
 
<math>y=\ln (-\ln (1-F))</math>
 
 
<math>x=T</math>
 
 
and:
 
<math>\begin{align}
  & a= & -\frac{\mu }{\sigma } \\
& b= & \frac{1}{\sigma } 
\end{align}</math>
 
 
which results in the linear equation of:
 
<math>y=a+bx</math>
 
 
The Gumbel probability paper resulting from this linearized  <math>cdf</math>  function is shown next.
 
 
For  <math>z=0</math> ,  <math>T=\mu </math>  and  <math>R(t)={{e}^{-{{e}^{0}}}}\approx 0.3678</math>  (63.21% unreliability). For  <math>z=1</math> ,  <math>\sigma =T-\mu </math>  and  <math>R(t)={{e}^{-{{e}^{1}}}}\approx 0.0659.</math>  To read  <math>\mu </math>  from the plot, find the time value that corresponds to the intersection of the probability plot with the 63.21% unreliability line. To read  <math>\sigma </math>  from the plot, find the time value that corresponds to the intersection of the probability plot with the 93.40% unreliability line, then take the difference between this time value and the  <math>\mu </math>  value.
====Confidence Bounds====
This section presents the method used by the application to estimate the different types of confidence bounds for data that follow the Gumbel distribution. The complete derivations were presented in detail (for a general function) in Chapter 5. Only Fisher Matrix confidence bounds are available for the Gumbel distribution.
====Bounds on the Parameters====
The lower and upper bounds on the parameter  <math>\widehat{\mu }</math>  are estimated from:
 
<math>\begin{align}
  & {{\mu }_{U}}= & \widehat{\mu }+{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (upper bound)} \\
& {{\mu }_{L}}= & \widehat{\mu }-{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (lower bound)} 
\end{align}</math>
 
 
Since the standard deviation,  <math>\widehat{\sigma }</math> , must be positive, then  <math>\ln (\widehat{\sigma })</math>  is treated as normally distributed, and the bounds are estimated from:
 
<math>\begin{align}
  & {{\sigma }_{U}}= & \widehat{\sigma }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })}}{\widehat{\sigma }}}}\text{ (upper bound)} \\
& {{\sigma }_{L}}= & \frac{\widehat{\sigma }}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })}}{\widehat{\sigma }}}}}\text{ (lower bound)} 
\end{align}</math>
 
where  <math>{{K}_{\alpha }}</math>  is defined by:
 
<math>\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})</math>
 
 
If  <math>\delta </math>  is the confidence level, then  <math>\alpha =\tfrac{1-\delta }{2}</math>  for the two-sided bounds, and  <math>\alpha =1-\delta </math>  for the one-sided bounds.
 
The variances and covariances of  <math>\widehat{\mu }</math>  and  <math>\widehat{\sigma }</math>  are estimated from the Fisher matrix as follows:
 
<math>\left( \begin{matrix}
  \widehat{Var}\left( \widehat{\mu } \right) & \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right)  \\
  \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right) & \widehat{Var}\left( \widehat{\sigma } \right)  \\
\end{matrix} \right)=\left( \begin{matrix}
  -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\mu }^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma }  \\
  {} & {}  \\
  -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma } & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }^{2}}}  \\
\end{matrix} \right)_{\mu =\widehat{\mu },\sigma =\widehat{\sigma }}^{-1}</math>
 
 
<math>\Lambda </math>  is the log-likelihood function of the Gumbel distribution, described in Chapter 3 and Appendix C.
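The parameter-bound calculation can be sketched in a few lines of Python. The helper function below is a hypothetical illustration, and the information matrix values are assumed purely for demonstration; in practice they come from the second partial derivatives of the log-likelihood evaluated at the estimates.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

def gumbel_parameter_bounds(mu_hat, sigma_hat, fisher, confidence=0.90, two_sided=True):
    """Fisher-matrix bounds on the Gumbel parameters.

    fisher is the observed information matrix (negative Hessian of the
    log-likelihood) at (mu_hat, sigma_hat); its inverse is the estimated
    variance/covariance matrix.
    """
    cov = np.linalg.inv(fisher)
    var_mu, var_sigma = cov[0, 0], cov[1, 1]

    alpha = (1.0 - confidence) / 2.0 if two_sided else (1.0 - confidence)
    K = norm.ppf(1.0 - alpha)          # K_alpha from the standard normal

    mu_L = mu_hat - K * np.sqrt(var_mu)
    mu_U = mu_hat + K * np.sqrt(var_mu)

    # sigma must stay positive, so ln(sigma) is treated as normal.
    factor = np.exp(K * np.sqrt(var_sigma) / sigma_hat)
    sigma_L = sigma_hat / factor
    sigma_U = sigma_hat * factor
    return (mu_L, mu_U), (sigma_L, sigma_U)

# Illustrative numbers only (assumed, not from any data set in this chapter).
fisher = np.array([[2.5, 0.4], [0.4, 5.0]])
print(gumbel_parameter_bounds(9.4, 2.0, fisher, confidence=0.90))
</syntaxhighlight>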
 
====Bounds on Reliability====
The reliability of the Gumbel distribution is given by: 
 
<math>\widehat{R}(T;\hat{\mu },\hat{\sigma })={{e}^{-{{e}^{{\hat{z}}}}}}</math>
 
where:
 
<math>\widehat{z}=\frac{T-\widehat{\mu }}{\widehat{\sigma }}</math>
 
 
The bounds on  <math>z</math>  are estimated from:
 
<math>\begin{align}
  & {{z}_{U}}= & \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\
& {{z}_{L}}= & \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})} 
\end{align}</math>
 
where:
 
<math>Var(\widehat{z})={{\left( \frac{\partial z}{\partial \mu } \right)}^{2}}Var(\widehat{\mu })+{{\left( \frac{\partial z}{\partial \sigma } \right)}^{2}}Var(\widehat{\sigma })+2\left( \frac{\partial z}{\partial \mu } \right)\left( \frac{\partial z}{\partial \sigma } \right)Cov\left( \widehat{\mu },\widehat{\sigma } \right)</math>
 
or:
 
<math>Var(\widehat{z})=\frac{1}{{{\widehat{\sigma }}^{2}}}\left[ Var(\widehat{\mu })+{{\widehat{z}}^{2}}Var(\widehat{\sigma })+2\cdot \widehat{z}\cdot Cov\left( \widehat{\mu },\widehat{\sigma } \right) \right]</math>
 
 
The upper and lower bounds on reliability are:
 
<math>\begin{align}
  & {{R}_{U}}= & {{e}^{-{{e}^{{{z}_{L}}}}}}\text{ (upper bound)} \\
& {{R}_{L}}= & {{e}^{-{{e}^{{{z}_{U}}}}}}\text{ (lower bound)} 
\end{align}</math>
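A minimal sketch of the reliability-bound calculation, assuming the variance/covariance matrix of the estimates is already available; the helper and the numbers used are illustrative only.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

def gumbel_reliability_bounds(T, mu_hat, sigma_hat, cov, confidence=0.90):
    """Two-sided Fisher-matrix bounds on R(T) for the Gumbel distribution.

    cov is the 2x2 variance/covariance matrix of (mu_hat, sigma_hat).
    """
    K = norm.ppf(1.0 - (1.0 - confidence) / 2.0)
    z = (T - mu_hat) / sigma_hat

    # Var(z) from the expression above.
    var_z = (cov[0, 0] + z**2 * cov[1, 1] + 2.0 * z * cov[0, 1]) / sigma_hat**2

    z_L = z - K * np.sqrt(var_z)
    z_U = z + K * np.sqrt(var_z)

    R_hat = np.exp(-np.exp(z))
    R_U = np.exp(-np.exp(z_L))   # smaller z gives higher reliability
    R_L = np.exp(-np.exp(z_U))
    return R_L, R_hat, R_U

# Illustrative values only (assumed for demonstration).
cov = np.array([[0.40, 0.05], [0.05, 0.20]])
print(gumbel_reliability_bounds(8.0, 9.4, 2.0, cov))
</syntaxhighlight>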
 
====Bounds on Time====
The bounds around time for a given Gumbel percentile (unreliability) are estimated by first solving the reliability equation with respect to time, as follows:
 
<math>\widehat{T}(\widehat{\mu },\widehat{\sigma })=\widehat{\mu }+\widehat{\sigma }z</math>
 
 
where:
 
<math>z=\ln (-\ln (R))</math>
 
 
<math>Var(\widehat{T})={{(\frac{\partial T}{\partial \mu })}^{2}}Var(\widehat{\mu })+2(\frac{\partial T}{\partial \mu })(\frac{\partial T}{\partial \sigma })Cov(\widehat{\mu },\widehat{\sigma })+{{(\frac{\partial T}{\partial \sigma })}^{2}}Var(\widehat{\sigma })</math>
 
 
or:
 
<math>Var(\widehat{T})=Var(\widehat{\mu })+2\widehat{z}Cov(\widehat{\mu },\widehat{\sigma })+{{\widehat{z}}^{2}}Var(\widehat{\sigma })</math>
 
 
The upper and lower bounds are then found by:
 
<math>\begin{align}
  & {{T}_{U}}= & \hat{T}+{{K}_{\alpha }}\sqrt{Var(\hat{T})}\text{ (Upper bound)} \\
& {{T}_{L}}= & \hat{T}-{{K}_{\alpha }}\sqrt{Var(\hat{T})}\text{ (Lower bound)} 
\end{align}</math>
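The corresponding time-bound calculation can be sketched the same way, again assuming an available variance/covariance matrix; the values are illustrative only.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

def gumbel_time_bounds(R, mu_hat, sigma_hat, cov, confidence=0.90):
    """Two-sided Fisher-matrix bounds on the time at a given reliability R."""
    K = norm.ppf(1.0 - (1.0 - confidence) / 2.0)
    z = np.log(-np.log(R))                     # z for the requested reliability
    T_hat = mu_hat + sigma_hat * z
    var_T = cov[0, 0] + 2.0 * z * cov[0, 1] + z**2 * cov[1, 1]
    return T_hat - K * np.sqrt(var_T), T_hat, T_hat + K * np.sqrt(var_T)

# Illustrative values only (assumed).
cov = np.array([[0.40, 0.05], [0.05, 0.20]])
print(gumbel_time_bounds(0.90, 9.4, 2.0, cov))
</syntaxhighlight>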
 
 
====A Gumbel Distribution Example====
Verify using Monte Carlo simulation that if  <math>{{t}_{i}}</math>  follows a Weibull distribution with  <math>\beta </math>  and  <math>\eta </math> , then  <math>Ln({{t}_{i}})</math>  follows a Gumbel distribution with  <math>\mu =\ln (\eta )</math>  and  <math>\sigma =1/\beta .</math>
Let us assume that  <math>{{t}_{i}}</math>  follows a Weibull distribution with  <math>\beta =0.5</math>  and  <math>\eta =10000.</math>  The Monte Carlo simulation tool in Weibull++ can be used to generate a set of random numbers that follow a Weibull distribution with the specified parameters.
 
 
After obtaining the random time values  <math>{{t}_{i}}</math> , insert a new Data Sheet using the Insert Data Sheet option under the Folio menu. In this sheet enter the  <math>Ln({{t}_{i}})</math>  values using the LN function and referring to the cells in the sheet that contains the  <math>{{t}_{i}}</math>  values. Delete any negative values, if there are any, since Weibull++ expects time values to be positive. Calculate the parameters of the Gumbel distribution that fits the  <math>Ln({{t}_{i}})</math>  values.
 
Using maximum likelihood as the analysis method, the estimated parameters are:
 
<math>\begin{align}
  & \hat{\mu }= & 9.3816 \\
& \hat{\sigma }= & 1.9717 
\end{align}</math>
 
 
Since  <math>\ln (\eta )=</math>  9.2103 ( <math>\simeq 9.3816</math> ) and  <math>1/\beta =2</math>  <math>(\simeq 1.9717),</math>  this simulation verifies that  <math>Ln({{t}_{i}})</math>  follows a Gumbel distribution with  <math>\mu =\ln (\eta )</math>  and  <math>\sigma =1/\beta .</math>
Note: This example illustrates a property of the Gumbel distribution; it is not meant to be a formal proof.
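The same verification can be sketched outside Weibull++ with NumPy/SciPy. The sample size and random seed below are arbitrary assumptions; scipy.stats.gumbel_l is the minimum-type Gumbel distribution used in this chapter.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import gumbel_l

rng = np.random.default_rng(1)

beta, eta = 0.5, 10000.0
n = 10000

# Draw Weibull(beta, eta) times via inverse-cdf sampling.
u = rng.uniform(size=n)
t = eta * (-np.log(1.0 - u)) ** (1.0 / beta)

# Fit the smallest-extreme-value (Gumbel) distribution to ln(t).
mu_hat, sigma_hat = gumbel_l.fit(np.log(t))

print(f"mu_hat    = {mu_hat:.4f}  (ln(eta) = {np.log(eta):.4f})")
print(f"sigma_hat = {sigma_hat:.4f}  (1/beta  = {1.0/beta:.4f})")
</syntaxhighlight>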


Chapter 11: The Mixed Weibull Distribution




Other Distributions

Besides the Weibull, exponential, normal and lognormal, there are other distributions that are used to model reliability and life data. However, these four represent the most prominent distributions in Weibull++. In this chapter, we will discuss other distributions that are used under special circumstances: the mixed Weibull, the generalized gamma, the Gumbel, the logistic and the loglogistic distributions.

Mixed Weibull Distribution

The mixed Weibull distribution (also known as a multimodal Weibull) is used to model data that do not fall on a straight line on a Weibull probability plot. Data of this type, particularly if the data points follow an S-shape on the probability plot, may be indicative of more than one failure mode at work in the population of failure times. Field data from a given mixed population may frequently represent multiple failure modes. The necessity of determining the life regions where these failure modes occur is apparent when it is realized that the times-to-failure for each mode may follow a distinct Weibull distribution, thus requiring individual mathematical treatment. Another reason is that each failure mode may require a different design change to improve the component's reliability [19].

A decreasing failure rate is usually encountered during the early life period of components, when substandard components fail and are removed from the population. The failure rate continues to decrease until all such substandard components have failed and been removed. The Weibull distribution with [math]\displaystyle{ \beta \lt 1 }[/math] is often used to depict this life characteristic.

A second type of failure prevails when the components fail by chance alone and their failure rate is nearly constant. This can be caused by sudden, unpredictable stress applications that have a stress level above those to which the product is designed. Such failures tend to occur throughout the life of a component. The distributions most often used to describe this failure rate characteristic are the exponential distribution and the Weibull distribution with [math]\displaystyle{ \beta \approx 1 }[/math] .

A third type of failure is characterized by a failure rate that increases as operating hours are accumulated. Usually, wear has started to set in and this brings the component's performance out of specification. As age increases further, this wear-out process removes more and more components until all components fail. The normal distribution and the Weibull distribution with a [math]\displaystyle{ \beta \gt 1 }[/math] have been successfully used to model the times-to-failure distribution during the wear-out period.

Several different failure modes may occur during the various life periods. A methodology is needed to identify these failure modes and determine their failure distributions and reliabilities. This section presents a procedure whereby the proportion of units failing in each mode is determined and their contribution to the reliability of the component is quantified. From this reliability expression, the remaining major reliability functions, the probability density, the failure rate and the conditional-reliability functions are calculated to complete the reliability analysis of such mixed populations.

Background

Consider a life test of identical components. The components were placed in a test at age [math]\displaystyle{ T=0 }[/math] and were tested to failure, with their times-to-failure recorded. Further assume that the test covered the entire lifespan of the units, and different failure modes were observed over each region of life, namely early life (early failure mode), chance life (chance failure mode), and wear-out life (wear-out failure mode). Also, as items failed during the test, they were removed from the test, inspected and segregated into lots according to their failure mode. At the conclusion of the test, there will be [math]\displaystyle{ n }[/math] subpopulations of [math]\displaystyle{ {{N}_{1}},{{N}_{2}},{{N}_{3}},...,{{N}_{n}} }[/math] failed components. If the events of the test are now reconstructed, it may be theorized that at age [math]\displaystyle{ T=0 }[/math] there were actually [math]\displaystyle{ n }[/math] separate subpopulations in the test, each with a different times-to-failure distribution and failure mode, even though at [math]\displaystyle{ T=0 }[/math] the subpopulations were not physically distinguishable. The mixed Weibull methodology accomplishes this segregation based on the results of the life test.

If [math]\displaystyle{ N }[/math] identical components from a mixed population undertake a mission of [math]\displaystyle{ T }[/math] duration, starting the mission at age zero, then the number of components surviving this mission can be found from the following definition of reliability:

[math]\displaystyle{ {{R}_{1,2,...,n}}(T)=\frac{{{N}_{1,2,3,..,{{n}_{S}}}}(T)}{N} }[/math]


Then:


[math]\displaystyle{ \begin{align} {{N}_{1,2,...,{{n}_{S}}}}(T)= & N[{{R}_{1,2,...,n}}(T)] \\ \\ {{N}_{{{1}_{S}}}}(T)=& {{N}_{1}}{{R}_{1}}(T);{{N}_{{{2}_{S}}}}(T)={{N}_{2}}{{R}_{2}}(T) \\ {{N}_{{{3}_{S}}}}(T)=& {{N}_{3}}{{R}_{3}}(T);...;{{N}_{{{n}_{S}}}}={{N}_{n}}{{R}_{n}}(T) \end{align} }[/math]

The total number surviving by age [math]\displaystyle{ T }[/math] in the mixed population is the sum of the number surviving in all subpopulations or:

[math]\displaystyle{ {{N}_{1,2,...,{{n}_{S}}}}(T)={{N}_{{{1}_{S}}}}(T)+{{N}_{{{2}_{S}}}}(T)+{{N}_{{{3}_{S}}}}(T)+\cdots +{{N}_{{{n}_{S}}}}(T) }[/math]


Substituting into Eqn. (rel) yields:

[math]\displaystyle{ {{R}_{1,2,...,n}}(T)=\frac{1}{N}[{{N}_{1}}{{R}_{1}}(T)+{{N}_{2}}{{R}_{2}}(T)+{{N}_{3}}{{R}_{3}}(T)+\cdots +{{N}_{n}}{{R}_{n}}(T)] }[/math]

or:

[math]\displaystyle{ {{R}_{1,2,...,n}}(T)=\frac{{{N}_{1}}}{N}{{R}_{1}}(T)+\frac{{{N}_{2}}}{N}{{R}_{2}}(T)+\frac{{{N}_{3}}}{N}{{R}_{3}}(T)+\cdots +\frac{{{N}_{n}}}{N}{{R}_{n}}(T) }[/math]

This expression can also be derived by applying Bayes' theorem [20]: the reliability of a component drawn at random from a mixed population composed of [math]\displaystyle{ n }[/math] types of failure subpopulations is its reliability [math]\displaystyle{ {{R}_{1}}(T) }[/math] given that the component is from subpopulation 1, weighted by the probability [math]\displaystyle{ \tfrac{{{N}_{1}}}{N} }[/math] that it is from subpopulation 1, plus its reliability [math]\displaystyle{ {{R}_{2}}(T) }[/math] given that it is from subpopulation 2, weighted by [math]\displaystyle{ \tfrac{{{N}_{2}}}{N} }[/math], plus its reliability [math]\displaystyle{ {{R}_{3}}(T) }[/math] given that it is from subpopulation 3, weighted by [math]\displaystyle{ \tfrac{{{N}_{3}}}{N} }[/math], and so on, up to its reliability [math]\displaystyle{ {{R}_{n}}(T) }[/math] given that it is from subpopulation [math]\displaystyle{ n }[/math], weighted by [math]\displaystyle{ \tfrac{{{N}_{n}}}{N} }[/math], where:

[math]\displaystyle{ \underset{i=1}{\overset{n}{\mathop \sum }}\,\frac{{{N}_{i}}}{N}=1 }[/math]

This may be written mathematically as:

[math]\displaystyle{ {{R}_{1,2,...,n}}(T)=\frac{{{N}_{1}}}{N}{{R}_{1}}(T)+\frac{{{N}_{2}}}{N}{{R}_{2}}(T)+\frac{{{N}_{3}}}{N}{{R}_{3}}(T)+\cdots +\frac{{{N}_{n}}}{N}{{R}_{n}}(T) }[/math]

Other functions of reliability engineering interest are found by applying the fundamentals to Eqn. (rel1).

For example, the probability density function can be found from:

[math]\displaystyle{ \begin{align} {{f}_{1,2,...,n}}(T)= & -\frac{d}{dT}[{{R}_{1,2,...,n}}(T)] \\ {{f}_{1,2,...,n}}(T)= & \frac{{{N}_{1}}}{N}\left( -\frac{d}{dT}[{{R}_{1}}(T)] \right)+\frac{{{N}_{2}}}{N}\left( -\frac{d}{dT}[{{R}_{2}}(T)] \right) \\ & +\ \ \frac{{{N}_{3}}}{N}\left( -\frac{d}{dT}[{{R}_{3}}(T)] \right)+\cdots +\frac{{{N}_{n}}}{N}\left( -\frac{d}{dT}[{{R}_{n}}(T)] \right) \\ {{f}_{1,2,...,n}}(T)= & \frac{{{N}_{1}}}{N}{{f}_{1}}(T)+\frac{{{N}_{2}}}{N}{{f}_{2}}(T) \\ & +\ \ \frac{{{N}_{3}}}{N}{{f}_{3}}(T)+\cdots +\frac{{{N}_{n}}}{N}{{f}_{n}}(T) \end{align} }[/math]

Also, the failure rate function of a population is given by:

[math]\displaystyle{ \begin{align} {{\lambda }_{1,2,...,n}}(T)= & \frac{{{f}_{1,2,...,n}}(T)}{{{R}_{1,2,...,n}}(T)}, \\ {{\lambda }_{1,2,...,n}}(T)= & \frac{\tfrac{{{N}_{1}}}{N}{{f}_{1}}(T)+\tfrac{{{N}_{2}}}{N}{{f}_{2}}(T)+\tfrac{{{N}_{3}}}{N}{{f}_{3}}(T)+\cdots +\tfrac{{{N}_{n}}}{N}{{f}_{n}}(T)}{\tfrac{{{N}_{1}}}{N}{{R}_{1}}(T)+\tfrac{{{N}_{2}}}{N}{{R}_{2}}(T)+\tfrac{{{N}_{3}}}{N}{{R}_{3}}(T)+\cdots +\tfrac{{{N}_{n}}}{N}{{R}_{n}}(T)}. \end{align} }[/math]


The conditional reliability for a new mission of duration [math]\displaystyle{ t }[/math] , starting this mission at age [math]\displaystyle{ T }[/math] , or after having already operated a total of [math]\displaystyle{ T }[/math] hours, is given by:

[math]\displaystyle{ \begin{align} {{R}_{1,2,...,n}}(T,t)= & \frac{{{R}_{1,2,...,n}}(T+t)}{{{R}_{1,2,...,n}}(T)} \\ {{R}_{1,2,...,n}}(T,t)= & \frac{\tfrac{{{N}_{1}}}{N}{{R}_{1}}(T+t)+\tfrac{{{N}_{2}}}{N}{{R}_{2}}(T+t)+\cdots +\tfrac{{{N}_{n}}}{N}{{R}_{n}}(T+t)}{\tfrac{{{N}_{1}}}{N}{{R}_{1}}(T)+\tfrac{{{N}_{2}}}{N}{{R}_{2}}(T)+\cdots +\tfrac{{{N}_{n}}}{N}{{R}_{n}}(T)} \end{align} }[/math]

The Mixed Weibull Equations

Depending on the number of subpopulations chosen, Weibull++ uses the following equations for the reliability and probability density functions:


[math]\displaystyle{ {{R}_{1,...,S}}(T)=\underset{i=1}{\overset{S}{\mathop \sum }}\,\frac{{{N}_{i}}}{N}{{e}^{-{{\left( \tfrac{T}{{{\eta }_{i}}} \right)}^{{{\beta }_{i}}}}}} }[/math]

and:

[math]\displaystyle{ {{f}_{1,...,S}}(T)=\underset{i=1}{\overset{S}{\mathop \sum }}\,\frac{{{N}_{i}}{{\beta }_{i}}}{N{{\eta }_{i}}}{{\left( \frac{T}{{{\eta }_{i}}} \right)}^{{{\beta }_{i}}-1}}{{e}^{-{{(\tfrac{T}{{{\eta }_{i}}})}^{{{\beta }_{i}}}}}} }[/math]

where [math]\displaystyle{ S=2 }[/math] , [math]\displaystyle{ S=3 }[/math] , and [math]\displaystyle{ S=4 }[/math] for 2, 3 and 4 subpopulations respectively. Weibull++ uses a non-linear regression method or direct maximum likelihood methods to estimate the parameters.
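The following sketch evaluates these mixture equations directly for an assumed set of subpopulation parameters; the helper function and the parameter values are illustrative only, not from any data set in this chapter.

<syntaxhighlight lang="python">
import numpy as np

def mixed_weibull(T, portions, betas, etas):
    """Reliability, pdf and failure rate of a mixed Weibull population.

    portions are the subpopulation fractions N_i/N and must sum to 1.
    """
    T = np.asarray(T, dtype=float)
    R = np.zeros_like(T)
    f = np.zeros_like(T)
    for p, b, e in zip(portions, betas, etas):
        Ri = np.exp(-(T / e) ** b)
        R += p * Ri
        f += p * (b / e) * (T / e) ** (b - 1.0) * Ri
    return R, f, f / R   # failure rate lambda(T) = f(T) / R(T)

# Two assumed subpopulations for illustration.
T = np.linspace(1.0, 5000.0, 5)
R, f, lam = mixed_weibull(T, portions=[0.4, 0.6], betas=[0.8, 3.0], etas=[300.0, 2500.0])
print(np.round(R, 4))
</syntaxhighlight>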

Mixed Weibull Parameter Estimation

Regression Solution

Weibull++ utilizes a modified Levenberg-Marquardt algorithm (non-linear regression) when performing regression analysis on a mixed Weibull distribution. The procedure is rather involved and is beyond the scope of this reference. It is sufficient to say that the algorithm fits a curved line of the form:

[math]\displaystyle{ {{R}_{1,...,S}}(T)=\underset{i=1}{\overset{S}{\mathop \sum }}\,{{\rho }_{i}}\cdot {{e}^{-{{\left( \tfrac{T}{{{\eta }_{i}}} \right)}^{{{\beta }_{i}}}}}} }[/math]

where:

[math]\displaystyle{ \underset{i=1}{\overset{S}{\mathop \sum }}\,{{\rho }_{i}}=1 }[/math]

to the parameters [math]\displaystyle{ \widehat{{{\rho }_{1}}},\ \widehat{{{\beta }_{1}}},\ \widehat{{{\eta }_{1}}},\ \widehat{{{\rho }_{2}}},\ \widehat{{{\beta }_{2}}},\ \widehat{{{\eta }_{2}}},\ldots ,\ \widehat{{{\rho }_{S}}},\ \widehat{{{\beta }_{S}}},\ \widehat{{{\eta }_{S}}} }[/math], utilizing the times-to-failure and their respective plotting positions. It is important to note that in the case of regression analysis using a mixed Weibull model, the choice of regression axis, i.e. [math]\displaystyle{ RRX }[/math] or [math]\displaystyle{ RRY }[/math], is of no consequence since non-linear regression is utilized.

MLE

The same space of parameters, namely [math]\displaystyle{ \widehat{{{\rho }_{1}}},\ \widehat{{{\beta }_{1}}},\ \widehat{{{\eta }_{1}}},\ \widehat{{{\rho }_{2}}},\ \widehat{{{\beta }_{2}}},\ \widehat{{{\eta }_{2}}},\ldots ,\ \widehat{{{\rho }_{S}}},\ \widehat{{{\beta }_{S}}},\ \widehat{{{\eta }_{S}}} }[/math], is also used under the MLE method, using the likelihood function as given in Appendix C of this reference. Weibull++ uses the EM (Expectation-Maximization) algorithm for the MLE analysis. Details on the numerical procedure are beyond the scope of this reference, but a simplified illustration of likelihood maximization over this parameter space is sketched below.
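The sketch below illustrates the idea of maximizing the mixture likelihood numerically for S = 2 with complete data. It uses a general-purpose optimizer rather than the EM algorithm employed by Weibull++, and the simulated data, reparameterization and starting values are assumptions chosen only for illustration.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, t):
    """Negative log-likelihood of a 2-subpopulation mixed Weibull (complete data).

    params = [logit(rho), ln(beta1), ln(eta1), ln(beta2), ln(eta2)] so that the
    optimizer works on an unconstrained space.
    """
    rho = 1.0 / (1.0 + np.exp(-params[0]))
    b1, e1, b2, e2 = np.exp(params[1:])
    f1 = (b1 / e1) * (t / e1) ** (b1 - 1.0) * np.exp(-(t / e1) ** b1)
    f2 = (b2 / e2) * (t / e2) ** (b2 - 1.0) * np.exp(-(t / e2) ** b2)
    return -np.sum(np.log(rho * f1 + (1.0 - rho) * f2))

# Assumed sample: two Weibull subpopulations, for illustration only.
rng = np.random.default_rng(7)
t = np.concatenate([100.0 * rng.weibull(0.9, 40), 2000.0 * rng.weibull(3.0, 60)])

x0 = [0.0, 0.0, np.log(np.median(t)), 0.5, np.log(np.median(t) * 2)]
res = minimize(neg_log_likelihood, x0, args=(t,), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
rho_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
beta1, eta1, beta2, eta2 = np.exp(res.x[1:])
print(rho_hat, beta1, eta1, beta2, eta2)
</syntaxhighlight>

As with any mixture likelihood, the optimizer may settle on a local maximum, so the result depends on the starting values; the EM-based procedure in Weibull++ is designed to handle this more robustly.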

Mixed Weibull Confidence Bounds

In Weibull++, two methods are available for estimating the confidence bounds for the mixed Weibull distribution. The first method is the beta binomial, described in Chapter 5. The second method is the Fisher matrix confidence bounds. For the Fisher matrix bounds, the methodology is the same as described in Chapter 5. The variance/covariance matrix for the mixed Weibull is a [math]\displaystyle{ (3\cdot S-1)\times (3\cdot S-1) }[/math] matrix, where [math]\displaystyle{ S }[/math] is the number of subpopulations. Bounds on the parameters, reliability and time are estimated using the same transformations and methods that were used for the Weibull distribution (Chapter 6). Note, however, that in addition to the Weibull parameters, the bounds on the subpopulation portions are obtained as well. The bounds on the portions are estimated by:

[math]\displaystyle{ \begin{align} & {{\rho }_{U}}= & \frac{{\hat{\rho }}}{\hat{\rho }+(1-\hat{\rho }){{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\rho })}}{\hat{\rho }(1-\hat{\rho })}}}} \\ & & \\ & {{\rho }_{L}}= & \frac{{\hat{\rho }}}{\hat{\rho }+(1-\hat{\rho }){{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\rho })}}{\hat{\rho }(1-\hat{\rho })}}}} \end{align} }[/math]


where [math]\displaystyle{ Var(\widehat{\rho }) }[/math] is obtained from the variance/covariance matrix. When using the Fisher matrix bounds method, problems can occur at the transition points of the distribution, and in particular with the Type 1 confidence bounds (bounds on time). The problems (i.e., the departure from the expected monotonic behavior) occur when the transition region between two subpopulations becomes a "saddle" (i.e., the probability line is almost parallel to the time axis on a probability plot). In this case, the bounds on time approach infinity. This behavior is more frequently encountered with smaller sample sizes. The physical interpretation is that there is insufficient data to support any inferences in this region. This is graphically illustrated in the following figure. In this plot it can be seen that there are no data points between the last point of the first subpopulation and the first point of the second subpopulation, thus the uncertainty is high, as described by the mathematical model.


Beta binomial bounds can be used instead in these cases, especially when estimates are to be obtained close to these regions.
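The portion-bound transformation above keeps the bounds inside (0, 1). A minimal sketch, assuming the portion estimate and its variance are already available; the helper function and the numbers are illustrative only.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

def portion_bounds(rho_hat, var_rho, confidence=0.90):
    """Fisher-matrix bounds on a subpopulation portion, kept inside (0, 1)."""
    K = norm.ppf(1.0 - (1.0 - confidence) / 2.0)
    w = np.exp(K * np.sqrt(var_rho) / (rho_hat * (1.0 - rho_hat)))
    rho_U = rho_hat / (rho_hat + (1.0 - rho_hat) / w)
    rho_L = rho_hat / (rho_hat + (1.0 - rho_hat) * w)
    return rho_L, rho_U

# Illustrative values only (assumed).
print(portion_bounds(0.35, 0.004, confidence=0.90))
</syntaxhighlight>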


Using the Mixed Weibull Distribution in Weibull++

To use the mixed Weibull distribution, simply select the Mixed option under Parameters/Type, and click the Calculate icon. A window will appear asking which form of the mixed Weibull you would like to use, i.e. S = 2, 3 or 4. In other words, how many subpopulations would you like to consider?

Simply select the number of subpopulations you would like to consider and click OK. The application will automatically calculate the parameters of each subpopulation for you.

Viewing the Calculated Parameters

When using the Mixed Weibull option, the parameters given in the result area apply to different subpopulations. To view the results for a particular subpopulation, select the subpopulation, as shown next.

About the Calculated Parameters

Weibull++ uses the numbers 1, 2, 3 and 4 (or first, second, third and fourth subpopulation) to identify each subpopulation. These are just designations for each subpopulation, and they are ordered based on the value of the scale parameter, [math]\displaystyle{ \eta }[/math]. Since the equation used is additive, that is:

[math]\displaystyle{ {{R}_{1,..,S}}(T)=\underset{i=1}{\overset{S}{\mathop \sum }}\,\frac{{{N}_{i}}}{N}{{e}^{-{{\left( \tfrac{T}{{{\eta }_{i}}} \right)}^{{{\beta }_{i}}}}}} }[/math]

the order of the subpopulations which are given the designation 1, 2, 3, or 4 is of no consequence. For consistency, the application will always return the order of the results based on the magnitude of the scale parameter.

Mixed Weibull, Other Uses

Reliability Bathtub Curves

A reliability bathtub curve is nothing more than the graph of the failure rate versus time, over the life of the product. In general, the life stages of the product consist of early, chance and wear-out. Weibull++ allows you to plot this by simply selecting the failure rate plot, as shown next.

Determination of the Burn-in Period

If the failure rate goal is known, then the burn-in period can be found from the failure rate plot by drawing a horizontal line at the failure rate goal level and finding its intersection with the failure rate curve. Next, drop vertically from the intersection and read off the burn-in time from the time axis. This burn-in time helps ensure that the population will have a failure rate that is equal to or lower than the goal after the burn-in period. The same result could also be obtained using the Function Wizard and generating different failure rates based on time increments. Using these generated times and the corresponding failure rates, one can decide on the optimum burn-in time versus the corresponding desired failure rate. A numerical sketch of this calculation is shown below.
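A minimal numerical sketch of the burn-in calculation for an assumed mixed Weibull model and an assumed failure rate goal; neither the parameters, the goal nor the root-finding bracket are taken from this reference.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import brentq

def mixed_weibull_failure_rate(T, portions, betas, etas):
    """Failure rate lambda(T) of a mixed Weibull population."""
    R, f = 0.0, 0.0
    for p, b, e in zip(portions, betas, etas):
        Ri = np.exp(-(T / e) ** b)
        R += p * Ri
        f += p * (b / e) * (T / e) ** (b - 1.0) * Ri
    return f / R

# Assumed mixture with an early-failure subpopulation (illustration only).
portions, betas, etas = [0.2, 0.8], [0.6, 3.0], [150.0, 4000.0]
goal = 3.0e-4   # assumed failure rate goal, failures per hour

# Time at which the early-life failure rate has dropped to the goal;
# the bracket would normally be chosen by inspecting the failure rate plot.
burn_in = brentq(lambda T: mixed_weibull_failure_rate(T, portions, betas, etas) - goal,
                 1e-3, 2000.0)
print(f"burn-in time = {burn_in:.1f} hours")
</syntaxhighlight>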

A Mixed Weibull Example

We will illustrate mixed Weibull analysis using a Monte Carlo generated set of data. To repeat this example, generate data from a two-parameter Weibull distribution, using the Weibull++ Monte Carlo data window. The following figures illustrate the required steps, inputs and results.

• In the Monte Carlo window, enter the values and select the options shown below for subpopulation 1.

• Switch to subpopulation 2 and make the selection shown below. Click Generate.

• After the data has been generated, choose the Weibull distribution and select Mixed for the Parameters/Type. Click the Calculate icon and then choose Two (2) Population Weibull Analysis. Click OK.


The results for subpopulation 1 are shown next. (Note that your results could be different due to the randomness of the simulation.)


The results for subpopulation 2 are shown next. (Note that your results could be different due to the randomness of the simulation.)


The Weibull probability plot for this data is shown next. (Note that your results could be different due to the randomness of the simulation.)
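For readers without Weibull++, the S-shaped probability plot produced by a mixed population can be reproduced with the following sketch; the subpopulation parameters, sample sizes and plotting-position formula are assumptions chosen only to make the S-shape visible.

<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Assumed subpopulation parameters and sizes for the Monte Carlo illustration.
t = np.sort(np.concatenate([
    100.0 * rng.weibull(0.9, 50),    # subpopulation 1
    3000.0 * rng.weibull(4.0, 50),   # subpopulation 2
]))
n = len(t)

# Median-rank plotting positions and Weibull probability-paper coordinates.
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
x = np.log(t)
y = np.log(-np.log(1.0 - F))

plt.plot(x, y, "o", markersize=3)
plt.xlabel("ln(T)")
plt.ylabel("ln(-ln(1 - F))")
plt.title("Mixed population on Weibull probability axes (S-shape)")
plt.show()
</syntaxhighlight>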

The Generalized Gamma Distribution

While not as frequently used for modeling life data as the previous distributions, the generalized gamma distribution does have the ability to mimic the attributes of other distributions such as the Weibull or lognormal, based on the values of the distribution's parameters. While the generalized gamma distribution is not often used to model life data by itself , its ability to behave like other more commonly-used life distributions is sometimes used to determine which of those life distributions should be used to model a particular set of data.