The Mixed Weibull Distribution

From ReliaWiki
Revision as of 14:04, 24 June 2011 by Pantelis (talk | contribs)

New format available! This reference is now available in a new format that offers faster page load, improved display for calculations and images, more targeted search and the latest content available as a PDF. As of September 2023, this ReliaWiki page will not continue to be updated. Please update all links and bookmarks to the latest reference at help.reliasoft.com/reference/life_data_analysis

Chapter 11: The Mixed Weibull Distribution



Other Distributions

Besides the Weibull, exponential, normal and lognormal, there are other distributions that are used to model reliability and life data. However, these four represent the most prominent distributions in Weibull++. In this chapter, we will discuss other distributions that are used under special circumstances: the mixed Weibull, the generalized gamma, the Gumbel, the logistic and the loglogistic distributions.

Mixed Weibull Distribution

The mixed Weibull distribution (also known as a multimodal Weibull) is used to model data that do not fall on a straight line on a Weibull probability plot. Data of this type, particularly if the points follow an S-shape on the probability plot, may indicate that more than one failure mode is at work in the population of failure times. Field data from a given mixed population frequently represent multiple failure modes. Determining the life regions where these failure modes occur is necessary because the times-to-failure for each mode may follow a distinct Weibull distribution, thus requiring individual mathematical treatment. In addition, each failure mode may require a different design change to improve the component's reliability [19].

A decreasing failure rate is usually encountered during the early life period of components, when substandard components fail and are removed from the population; the failure rate continues to decrease until all such substandard components have failed. The Weibull distribution with [math]\displaystyle{ \beta \lt 1 }[/math] is often used to depict this life characteristic.

A second type of failure prevails when components fail by chance alone and their failure rate is nearly constant. Such failures can be caused by sudden, unpredictable stress applications at levels above those for which the product was designed, and they tend to occur throughout the life of a component. The distributions most often used to describe this failure rate characteristic are the exponential distribution and the Weibull distribution with [math]\displaystyle{ \beta \approx 1 }[/math].

A third type of failure is characterized by a failure rate that increases as operating hours are accumulated. Usually, wear has started to set in and this brings the component's performance out of specification. As age increases further, this wear-out process removes more and more components until all components fail. The normal distribution and the Weibull distribution with a [math]\displaystyle{ \beta \gt 1 }[/math] have been successfully used to model the times-to-failure distribution during the wear-out period.

Several different failure modes may occur during the various life periods. A methodology is needed to identify these failure modes and determine their failure distributions and reliabilities. This section presents a procedure whereby the proportion of units failing in each mode is determined and their contribution to the reliability of the component is quantified. From this reliability expression, the remaining major reliability functions, the probability density, the failure rate and the conditional-reliability functions are calculated to complete the reliability analysis of such mixed populations.

Background

Consider a life test of identical components. The components were placed in a test at age [math]\displaystyle{ T=0 }[/math] and were tested to failure, with their times-to-failure recorded. Further assume that the test covered the entire lifespan of the units, and different failure modes were observed over each region of life, namely early life (early failure mode), chance life (chance failure mode), and wear-out life (wear-out failure mode). Also, as items failed during the test, they were removed from the test, inspected and segregated into lots according to their failure mode. At the conclusion of the test, there will be [math]\displaystyle{ n }[/math] subpopulations of [math]\displaystyle{ {{N}_{1}},{{N}_{2}},{{N}_{3}},...,{{N}_{n}} }[/math] failed components. If the events of the test are now reconstructed, it may be theorized that at age [math]\displaystyle{ T=0 }[/math] there were actually [math]\displaystyle{ n }[/math] separate subpopulations in the test, each with a different times-to-failure distribution and failure mode, even though at [math]\displaystyle{ T=0 }[/math] the subpopulations were not physically distinguishable. The mixed Weibull methodology accomplishes this segregation based on the results of the life test.

If [math]\displaystyle{ N }[/math] identical components from a mixed population undertake a mission of [math]\displaystyle{ T }[/math] duration, starting the mission at age zero, then the number of components surviving this mission can be found from the following definition of reliability:

[math]\displaystyle{ {{R}_{1,2,...,n}}(T)=\frac{{{N}_{1,2,3,..,{{n}_{S}}}}(T)}{N} }[/math]


Then:


[math]\displaystyle{ \begin{align} {{N}_{1,2,...,{{n}_{S}}}}(T)= & N[{{R}_{1,2,...,n}}(T)] \\ \\ {{N}_{{{1}_{S}}}}(T)=& {{N}_{1}}{{R}_{1}}(T);{{N}_{{{2}_{S}}}}(T)={{N}_{2}}{{R}_{2}}(T) \\ {{N}_{{{3}_{S}}}}(T)=& {{N}_{3}}{{R}_{3}}(T);...;{{N}_{{{n}_{S}}}}={{N}_{n}}{{R}_{n}}(T) \end{align} }[/math]

The total number surviving by age [math]\displaystyle{ T }[/math] in the mixed population is the sum of the number surviving in all subpopulations or:

[math]\displaystyle{ {{N}_{1,2,...,{{n}_{S}}}}(T)={{N}_{{{1}_{S}}}}(T)+{{N}_{{{2}_{S}}}}(T)+{{N}_{{{3}_{S}}}}(T)+\cdots +{{N}_{{{n}_{S}}}}(T) }[/math]


Substituting into the definition of reliability above yields:

[math]\displaystyle{ {{R}_{1,2,...,n}}(T)=\frac{1}{N}[{{N}_{1}}{{R}_{1}}(T)+{{N}_{2}}{{R}_{2}}(T)+{{N}_{3}}{{R}_{3}}(T)+\cdots +{{N}_{n}}{{R}_{n}}(T)] }[/math]

or:

[math]\displaystyle{ {{R}_{1,2,...,n}}(T)=\frac{{{N}_{1}}}{N}{{R}_{1}}(T)+\frac{{{N}_{2}}}{N}{{R}_{2}}(T)+\frac{{{N}_{3}}}{N}{{R}_{3}}(T)+\cdots +\frac{{{N}_{n}}}{N}{{R}_{n}}(T) }[/math]

This expression can also be derived by applying the total probability theorem (Bayes' rule) [20]: the reliability of a component drawn at random from a mixed population composed of [math]\displaystyle{ n }[/math] types of failure subpopulations is the sum of its reliability [math]\displaystyle{ {{R}_{i}}(T) }[/math], given that the component is from subpopulation [math]\displaystyle{ i }[/math], weighted by [math]\displaystyle{ \tfrac{{{N}_{i}}}{N} }[/math], the probability that the component is from subpopulation [math]\displaystyle{ i }[/math], for [math]\displaystyle{ i=1,2,...,n }[/math], where:

[math]\displaystyle{ \underset{i=1}{\overset{n}{\mathop \sum }}\,\frac{{{N}_{i}}}{N}=1 }[/math]

This may be written mathematically as:

[math]\displaystyle{ {{R}_{1,2,...,n}}(T)=\frac{{{N}_{1}}}{N}{{R}_{1}}(T)+\frac{{{N}_{2}}}{N}{{R}_{2}}(T)+\frac{{{N}_{3}}}{N}{{R}_{3}}(T)+\cdots +\frac{{{N}_{n}}}{N}{{R}_{n}}(T) }[/math]

Other functions of reliability engineering interest are found by applying the fundamentals to this mixture reliability expression.

For example, the probability density function can be found from:

[math]\displaystyle{ \begin{align} {{f}_{1,2,...,n}}(T)= & -\frac{d}{dT}[{{R}_{1,2,...,n}}(T)] \\ {{f}_{1,2,...,n}}(T)= & \frac{{{N}_{1}}}{N}\left( -\frac{d}{dT}[{{R}_{1}}(T)] \right)+\frac{{{N}_{2}}}{N}\left( -\frac{d}{dT}[{{R}_{2}}(T)] \right) \\ & +\ \ \frac{{{N}_{3}}}{N}\left( -\frac{d}{dT}[{{R}_{3}}(T)] \right)+\cdots +\frac{{{N}_{n}}}{N}\left( -\frac{d}{dT}[{{R}_{n}}(T)] \right) \\ {{f}_{1,2,...,n}}(T)= & \frac{{{N}_{1}}}{N}{{f}_{1}}(T)+\frac{{{N}_{2}}}{N}{{f}_{2}}(T) \\ & +\ \ \frac{{{N}_{3}}}{N}{{f}_{3}}(T)+\cdots +\frac{{{N}_{n}}}{N}{{f}_{n}}(T) \end{align} }[/math]

Also, the failure rate function of a population is given by:

[math]\displaystyle{ \begin{align} {{\lambda }_{1,2,...,n}}(T)= & \frac{{{f}_{1,2,...,n}}(T)}{{{R}_{1,2,...,n}}(T)}, \\ {{\lambda }_{1,2,...,n}}(T)= & \frac{\tfrac{{{N}_{1}}}{N}{{f}_{1}}(T)+\tfrac{{{N}_{2}}}{N}{{f}_{2}}(T)+\tfrac{{{N}_{3}}}{N}{{f}_{3}}(T)+\cdots +\tfrac{{{N}_{n}}}{N}{{f}_{n}}(T)}{\tfrac{{{N}_{1}}}{N}{{R}_{1}}(T)+\tfrac{{{N}_{2}}}{N}{{R}_{2}}(T)+\tfrac{{{N}_{3}}}{N}{{R}_{3}}(T)+\cdots +\tfrac{{{N}_{n}}}{N}{{R}_{n}}(T)}. \end{align} }[/math]


The conditional reliability for a new mission of duration [math]\displaystyle{ t }[/math] , starting this mission at age [math]\displaystyle{ T }[/math] , or after having already operated a total of [math]\displaystyle{ T }[/math] hours, is given by:

[math]\displaystyle{ \begin{align} {{R}_{1,2,...,n}}(T,t)= & \frac{{{R}_{1,2,...,n}}(T+t)}{{{R}_{1,2,...,n}}(T)} \\ {{R}_{1,2,...,n}}(T,t)= & \frac{\tfrac{{{N}_{1}}}{N}{{R}_{1}}(T+t)+\tfrac{{{N}_{2}}}{N}{{R}_{2}}(T+t)+\cdots +\tfrac{{{N}_{n}}}{N}{{R}_{n}}(T+t)}{\tfrac{{{N}_{1}}}{N}{{R}_{1}}(T)+\tfrac{{{N}_{2}}}{N}{{R}_{2}}(T)+\cdots +\tfrac{{{N}_{n}}}{N}{{R}_{n}}(T)} \end{align} }[/math]
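As a quick numerical illustration of these mixture relations, the following sketch (with made-up subpopulation sizes and, for simplicity, exponential subpopulation reliabilities) computes the mixture reliability as a weighted sum and the conditional reliability as a ratio of mixture reliabilities:

```python
import math

# Hypothetical two-subpopulation example: the sizes and the exponential
# subpopulation reliabilities below are illustrative, not from the text.
N = [60.0, 40.0]                          # subpopulation sizes N_1, N_2
R_sub = [lambda T: math.exp(-0.01 * T),   # R_1(T)
         lambda T: math.exp(-0.001 * T)]  # R_2(T)

def mixed_R(T):
    """Mixture reliability: sum of (N_i / N) * R_i(T) (total probability)."""
    total = sum(N)
    return sum(Ni / total * Ri(T) for Ni, Ri in zip(N, R_sub))

def conditional_R(T, t):
    """Reliability for a new mission of duration t, starting at age T."""
    return mixed_R(T + t) / mixed_R(T)
```

Note that the conditional reliability of the mixture is not a simple weighted sum of the subpopulation conditional reliabilities: surviving to age [math]\displaystyle{ T }[/math] shifts the weights toward the longer-lived subpopulation.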

The Mixed Weibull Equations

Depending on the number of subpopulations chosen, Weibull++ uses the following equations for the reliability and probability density functions:


[math]\displaystyle{ {{R}_{1,...,S}}(T)=\underset{i=1}{\overset{S}{\mathop \sum }}\,\frac{{{N}_{i}}}{N}{{e}^{-{{\left( \tfrac{T}{{{\eta }_{i}}} \right)}^{{{\beta }_{i}}}}}} }[/math]

and:

[math]\displaystyle{ {{f}_{1,...,S}}(T)=\underset{i=1}{\overset{S}{\mathop \sum }}\,\frac{{{N}_{i}}{{\beta }_{i}}}{N{{\eta }_{i}}}{{\left( \frac{T}{{{\eta }_{i}}} \right)}^{{{\beta }_{i}}-1}}{{e}^{-{{(\tfrac{T}{{{\eta }_{i}}})}^{{{\beta }_{i}}}}}} }[/math]

where [math]\displaystyle{ S=2 }[/math] , [math]\displaystyle{ S=3 }[/math] , and [math]\displaystyle{ S=4 }[/math] for 2, 3 and 4 subpopulations respectively. Weibull++ uses a non-linear regression method or direct maximum likelihood methods to estimate the parameters.
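These two equations can be transcribed directly; the portions and Weibull parameters below are made-up illustrative values. A useful self-check is that the pdf equals the negative derivative of the reliability function:

```python
import math

# Illustrative values only: the portions N_i / N must sum to 1.
portions = [0.3, 0.7]
betas    = [0.8, 3.0]
etas     = [150.0, 1200.0]

def mixed_weibull_R(T):
    """R_{1,...,S}(T): weighted sum of Weibull subpopulation reliabilities."""
    return sum(p * math.exp(-(T / eta) ** beta)
               for p, beta, eta in zip(portions, betas, etas))

def mixed_weibull_pdf(T):
    """f_{1,...,S}(T): weighted sum of Weibull subpopulation pdfs."""
    return sum(p * (beta / eta) * (T / eta) ** (beta - 1.0)
               * math.exp(-(T / eta) ** beta)
               for p, beta, eta in zip(portions, betas, etas))
```

A central-difference check of [math]\displaystyle{ f=-dR/dT }[/math] at any interior time confirms the transcription.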

Mixed Weibull Parameter Estimation

Regression Solution

Weibull++ utilizes a modified Levenberg-Marquardt algorithm (non-linear regression) when performing regression analysis on a mixed Weibull distribution. The procedure is rather involved and is beyond the scope of this reference. It is sufficient to say that the algorithm fits a curved line of the form:

[math]\displaystyle{ {{R}_{1,...,S}}(T)=\underset{i=1}{\overset{S}{\mathop \sum }}\,{{\rho }_{i}}\cdot {{e}^{-{{\left( \tfrac{T}{{{\eta }_{i}}} \right)}^{{{\beta }_{i}}}}}} }[/math] where:

[math]\displaystyle{ \underset{i=1}{\overset{S}{\mathop \sum }}\,{{\rho }_{i}}=1 }[/math]

to the parameters [math]\displaystyle{ {{\widehat{\rho }}_{1}},{{\widehat{\beta }}_{1}},{{\widehat{\eta }}_{1}},{{\widehat{\rho }}_{2}},{{\widehat{\beta }}_{2}},{{\widehat{\eta }}_{2}},...,{{\widehat{\rho }}_{S}},{{\widehat{\beta }}_{S}},{{\widehat{\eta }}_{S}} }[/math], utilizing the times-to-failure and their respective plotting positions. It is important to note that in the case of regression analysis using a mixed Weibull model, the choice of regression axis, i.e. [math]\displaystyle{ RRX }[/math] or [math]\displaystyle{ RRY }[/math], is of no consequence since non-linear regression is utilized.
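The fitting procedure itself is proprietary to Weibull++, but the idea can be sketched with an off-the-shelf least-squares routine. Here scipy's curve_fit (a bounded trust-region least-squares solver) stands in for the modified Levenberg-Marquardt algorithm, and noiseless synthetic unreliability values stand in for median-rank plotting positions; all numeric values are made up:

```python
import numpy as np
from scipy.optimize import curve_fit

def mixed_unreliability(T, rho, b1, e1, b2, e2):
    # Q(T) = 1 - [rho*exp(-(T/e1)^b1) + (1-rho)*exp(-(T/e2)^b2)], S = 2
    return 1.0 - (rho * np.exp(-(T / e1) ** b1)
                  + (1.0 - rho) * np.exp(-(T / e2) ** b2))

T = np.logspace(0, 3, 40)                                # failure times
Q = mixed_unreliability(T, 0.4, 0.7, 50.0, 2.5, 500.0)   # "plotting positions"

popt, _ = curve_fit(mixed_unreliability, T, Q,
                    p0=[0.5, 1.0, 80.0, 2.0, 400.0],
                    bounds=([0.0, 0.1, 1.0, 0.1, 1.0],
                            [1.0, 10.0, 1e4, 10.0, 1e4]))
rho_hat, b1_hat, e1_hat, b2_hat, e2_hat = popt
```

Writing the second portion as [math]\displaystyle{ 1-\rho }[/math] with [math]\displaystyle{ 0\le \rho \le 1 }[/math] enforces the constraint that the portions sum to one for the two-subpopulation case.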

MLE

The same space of parameters, namely [math]\displaystyle{ {{\widehat{\rho }}_{1}},{{\widehat{\beta }}_{1}},{{\widehat{\eta }}_{1}},{{\widehat{\rho }}_{2}},{{\widehat{\beta }}_{2}},{{\widehat{\eta }}_{2}},...,{{\widehat{\rho }}_{S}},{{\widehat{\beta }}_{S}},{{\widehat{\eta }}_{S}} }[/math], is also used under the MLE method, with the likelihood function as given in Appendix C of this reference. Weibull++ uses the EM algorithm, short for expectation-maximization algorithm, for the MLE analysis. Details on the numerical procedure are beyond the scope of this reference.
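The EM iteration for a Weibull mixture can be sketched as follows (complete data with no censoring; the Weibull++ implementation handles the general case). The E-step computes each subpopulation's responsibility for each failure time; the M-step updates the portions and refits each subpopulation's Weibull parameters by weighted MLE, profiling out [math]\displaystyle{ \eta }[/math] in closed form. All numeric values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
# Synthetic failure times from two Weibull modes (illustrative only).
t = np.concatenate([50.0 * rng.weibull(0.8, 300),
                    600.0 * rng.weibull(3.0, 700)])

def wpdf(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1) * np.exp(-(t / eta) ** beta)

def em_step(t, rho, beta, eta):
    # E-step: responsibility of each mode for each failure time
    dens = np.column_stack([r * wpdf(t, b, e)
                            for r, b, e in zip(rho, beta, eta)])
    resp = dens / dens.sum(axis=1, keepdims=True)
    new_rho, new_beta, new_eta = [], [], []
    for k in range(len(rho)):
        w = resp[:, k]
        new_rho.append(w.mean())          # M-step: portion = mean responsibility
        def neg_ll(b):                    # weighted Weibull MLE, eta profiled out
            e = (np.sum(w * t ** b) / w.sum()) ** (1.0 / b)
            return -np.sum(w * np.log(np.clip(wpdf(t, b, e), 1e-300, None)))
        b = minimize_scalar(neg_ll, bounds=(0.1, 10.0), method="bounded").x
        new_beta.append(b)
        new_eta.append((np.sum(w * t ** b) / w.sum()) ** (1.0 / b))
    return new_rho, new_beta, new_eta

def log_lik(t, rho, beta, eta):
    return float(np.sum(np.log(sum(r * wpdf(t, b, e)
                                   for r, b, e in zip(rho, beta, eta)))))

rho, beta, eta = [0.5, 0.5], [1.0, 2.0], [100.0, 400.0]
for _ in range(20):
    rho, beta, eta = em_step(t, rho, beta, eta)
```

Each iteration is guaranteed not to decrease the log-likelihood, which is the usual convergence diagnostic for EM.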

Mixed Weibull Confidence Bounds

In Weibull++, two methods are available for estimating the confidence bounds for the mixed Weibull distribution. The first method is the beta binomial, described in Chapter 5. The second method is the Fisher matrix confidence bounds. For the Fisher matrix bounds, the methodology is the same as described in Chapter 5. The variance/covariance matrix for the mixed Weibull is a [math]\displaystyle{ (3\cdot S-1)\times (3\cdot S-1) }[/math] matrix, where [math]\displaystyle{ S }[/math] is the number of subpopulations. Bounds on the parameters, reliability and time are estimated using the same transformations and methods that were used for the Weibull distribution (Chapter 6). Note, however, that in addition to the Weibull parameters, the bounds on the subpopulation portions are obtained as well. The bounds on the portions are estimated by:

[math]\displaystyle{ \begin{align} & {{\rho }_{U}}= & \frac{{\hat{\rho }}}{\hat{\rho }+(1-\hat{\rho }){{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\rho })}}{\hat{\rho }(1-\hat{\rho })}}}} \\ & & \\ & {{\rho }_{L}}= & \frac{{\hat{\rho }}}{\hat{\rho }+(1-\hat{\rho }){{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\rho })}}{\hat{\rho }(1-\hat{\rho })}}}} \end{align} }[/math]
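These logit-transform bounds keep the estimated portion inside (0, 1). A direct transcription, with made-up values for the estimate, its variance and [math]\displaystyle{ {{K}_{\alpha }} }[/math] used only in the usage check:

```python
import math

def portion_bounds(rho_hat, var_rho, K_alpha):
    """Two-sided bounds on a subpopulation portion (logit transform)."""
    z = K_alpha * math.sqrt(var_rho) / (rho_hat * (1.0 - rho_hat))
    upper = rho_hat / (rho_hat + (1.0 - rho_hat) * math.exp(-z))
    lower = rho_hat / (rho_hat + (1.0 - rho_hat) * math.exp(z))
    return lower, upper
```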


where [math]\displaystyle{ Var(\widehat{\rho }) }[/math] is obtained from the variance/covariance matrix. When using the Fisher matrix bounds method, problems can occur at the transition points of the distribution, in particular on the Type 1 confidence bounds (bounds on time). The problems (i.e. the departure from the expected monotonic behavior) occur when the transition region between two subpopulations becomes a "saddle" (i.e. the probability line is almost parallel to the time axis on a probability plot). In this case, the bounds on time approach infinity. This behavior is more frequently encountered with smaller sample sizes. The physical interpretation is that there is insufficient data to support any inferences in this region. This is graphically illustrated in the following figure. In this plot it can be seen that there are no data points between the last point of the first subpopulation and the first point of the second subpopulation, thus the uncertainty is high, as described by the mathematical model.


Beta binomial bounds can be used instead in these cases, especially when estimations are to be obtained close to these regions.


Using the Mixed Weibull Distribution in Weibull++

To use the mixed Weibull distribution, simply select the Mixed option under Parameters/Type, and click the Calculate icon. A window will appear asking which form of the mixed Weibull you would like to use, i.e. S = 2, 3 or 4. In other words, how many subpopulations would you like to consider?

Simply select the number of subpopulations you would like to consider and click OK. The application will automatically calculate the parameters of each subpopulation for you.

Viewing the Calculated Parameters

When using the Mixed Weibull option, the parameters given in the result area apply to different subpopulations. To view the results for a particular subpopulation, select the subpopulation, as shown next.

About the Calculated Parameters

Weibull++ uses the numbers 1, 2, 3 and 4 (or first, second, third and fourth subpopulation) to identify each subpopulation. These are just designations for each subpopulation, ordered based on the value of the scale parameter, [math]\displaystyle{ \eta }[/math]. Since the equation used is additive:

[math]\displaystyle{ {{R}_{1,..,S}}(T)=\underset{i=1}{\overset{S}{\mathop \sum }}\,\frac{{{N}_{i}}}{N}{{e}^{-{{\left( \tfrac{T}{{{\eta }_{i}}} \right)}^{{{\beta }_{i}}}}}} }[/math]

the order of the subpopulations which are given the designation 1, 2, 3, or 4 is of no consequence. For consistency, the application will always return the order of the results based on the magnitude of the scale parameter.

Mixed Weibull, Other Uses

Reliability Bathtub Curves

A reliability bathtub curve is nothing more than the graph of the failure rate versus time, over the life of the product. In general, the life stages of the product consist of early, chance and wear-out. Weibull++ allows you to plot this by simply selecting the failure rate plot, as shown next.

Determination of the Burn-in Period

If the failure rate goal is known, then the burn-in period can be found from the failure rate plot by drawing a horizontal line at the failure rate goal level and finding its intersection with the failure rate curve. Next, drop vertically from the intersection and read the burn-in time off the time axis. This burn-in time helps ensure that the population will have a failure rate equal to or lower than the goal after the burn-in period. The same result could also be obtained using the Function Wizard to generate failure rates at different time increments. Using these generated times and the corresponding failure rates, one can decide on the optimum burn-in time versus the corresponding desired failure rate.
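Reading the intersection off the plot amounts to solving failure rate = goal numerically. The sketch below does this by bisection for a made-up mixture that includes an infant-mortality mode ([math]\displaystyle{ \beta \lt 1 }[/math]), so the failure rate is decreasing during early life:

```python
import math

# Illustrative mixture: an infant-mortality mode plus a wear-out mode.
portions, betas, etas = [0.2, 0.8], [0.5, 3.0], [100.0, 2000.0]

def R(T):
    return sum(p * math.exp(-(T / e) ** b)
               for p, b, e in zip(portions, betas, etas))

def pdf(T):
    return sum(p * (b / e) * (T / e) ** (b - 1) * math.exp(-(T / e) ** b)
               for p, b, e in zip(portions, betas, etas))

def failure_rate(T):
    return pdf(T) / R(T)

def burn_in_time(goal, lo=1e-3, hi=500.0, iters=80):
    """Bisect for the age at which the early-life failure rate meets the goal."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if failure_rate(mid) > goal:
            lo = mid           # rate still above goal: burn in longer
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a failure rate goal of 0.001 this mixture requires a burn-in of roughly 36 hours; the Function Wizard approach described above tabulates the same failure-rate-versus-time trade-off.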

A Mixed Weibull Example

We will illustrate mixed Weibull analysis using a Monte Carlo generated set of data. To repeat this example, generate data from a two-parameter Weibull distribution, using the Weibull++ Monte Carlo data window. The following figures illustrate the required steps, inputs and results.

• In the Monte Carlo window, enter the values and select the options shown below for subpopulation 1.

• Switch to subpopulation 2 and make the selection shown below. Click Generate.

• After the data has been generated, choose the Weibull distribution and select Mixed for the Parameters/Type. Click the Calculate icon and then choose Two (2) Population Weibull Analysis. Click OK.


The results for subpopulation 1 are shown next. (Note that your results could be different due to the randomness of the simulation.)


The results for subpopulation 2 are shown next. (Note that your results could be different due to the randomness of the simulation.)


The Weibull probability plot for this data is shown next. (Note that your results could be different due to the randomness of the simulation.)

The Generalized Gamma Distribution

While not as frequently used for modeling life data as the previous distributions, the generalized gamma distribution has the ability to mimic the attributes of other distributions, such as the Weibull or lognormal, based on the values of its parameters. While the generalized gamma distribution is not often used to model life data by itself, its ability to behave like other more commonly used life distributions is sometimes used to determine which of those distributions should be used to model a particular set of data.

Generalized Gamma Probability Density Function

The generalized gamma distribution is a three-parameter distribution. One version of the generalized gamma distribution uses the parameters [math]\displaystyle{ k }[/math], [math]\displaystyle{ \beta }[/math], and [math]\displaystyle{ \theta }[/math]. The [math]\displaystyle{ pdf }[/math] for this form of the generalized gamma distribution is given by:

[math]\displaystyle{ f(t)=\frac{\beta }{\Gamma (k)\cdot \theta }{{\left( \frac{t}{\theta } \right)}^{k\beta -1}}{{e}^{-{{\left( \tfrac{t}{\theta } \right)}^{\beta }}}} }[/math]

where [math]\displaystyle{ \theta \gt 0 }[/math] is a scale parameter, [math]\displaystyle{ \beta \gt 0 }[/math] and [math]\displaystyle{ k\gt 0 }[/math] are shape parameters and [math]\displaystyle{ \Gamma (x) }[/math] is the gamma function of [math]\displaystyle{ x }[/math], which is defined by:

[math]\displaystyle{ \Gamma (x)=\int_{0}^{\infty }{{s}^{x-1}}\cdot {{e}^{-s}}ds }[/math]

With this version of the distribution, however, convergence problems arise that severely limit its usefulness. Even with data sets containing 200 or more data points, the MLE methods may fail to converge. Further adding to the confusion is the fact that distributions with widely different values of [math]\displaystyle{ k }[/math], [math]\displaystyle{ \beta }[/math], and [math]\displaystyle{ \theta }[/math] may appear almost identical [21]. In order to overcome these difficulties, Weibull++ uses a reparameterization with parameters [math]\displaystyle{ \mu }[/math] , [math]\displaystyle{ \sigma }[/math] , and [math]\displaystyle{ \lambda }[/math] [21] where:

[math]\displaystyle{ \begin{align} \mu = & ln(\theta )+\frac{1}{\beta }\cdot ln\left( \frac{1}{{{\lambda }^{2}}} \right) \\ \sigma = & \frac{1}{\beta \sqrt{k}} \\ \lambda = & \frac{1}{\sqrt{k}} \end{align} }[/math]

where [math]\displaystyle{ -\infty \lt \mu \lt \infty ,\,\sigma \gt 0, }[/math] and [math]\displaystyle{ 0\lt \lambda . }[/math] While this makes the distribution converge much more easily in computations, it does not facilitate manual manipulation of the equation. By allowing [math]\displaystyle{ \lambda }[/math] to become negative, the [math]\displaystyle{ pdf }[/math] of the reparameterized distribution is given by:


[math]\displaystyle{ f(t)=\left\{ \begin{matrix} \tfrac{|\lambda |}{\sigma \cdot t}\cdot \tfrac{1}{\Gamma \left( \tfrac{1}{{{\lambda }^{2}}} \right)}\cdot {{e}^{\left[ \tfrac{\lambda \cdot \tfrac{\text{ln}(t)-\mu }{\sigma }+\text{ln}\left( \tfrac{1}{{{\lambda }^{2}}} \right)-{{e}^{\lambda \cdot \tfrac{\text{ln}(t)-\mu }{\sigma }}}}{{{\lambda }^{2}}} \right]}}\text{ if }\lambda \ne 0 \\ \tfrac{1}{t\cdot \sigma \sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}(t)-\mu }{\sigma } \right)}^{2}}}}\text{ if }\lambda =0 \\ \end{matrix} \right. }[/math]
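This piecewise pdf is straightforward to transcribe. The sketch below uses math.lgamma to keep the [math]\displaystyle{ \Gamma (1/{{\lambda }^{2}}) }[/math] factor stable for small [math]\displaystyle{ |\lambda | }[/math]; a useful check is that [math]\displaystyle{ \lambda =1 }[/math] reproduces the Weibull pdf with [math]\displaystyle{ \beta =1/\sigma }[/math] and [math]\displaystyle{ \eta ={{e}^{\mu }} }[/math]:

```python
import math

def gengamma_pdf(t, mu, sigma, lam):
    """pdf of the (mu, sigma, lambda) generalized gamma distribution."""
    z = (math.log(t) - mu) / sigma
    if lam == 0.0:                        # lognormal branch
        return math.exp(-0.5 * z * z) / (t * sigma * math.sqrt(2.0 * math.pi))
    k = 1.0 / lam ** 2                    # gamma shape parameter 1 / lambda^2
    return (abs(lam) / (sigma * t)
            * math.exp((lam * z + math.log(k) - math.exp(lam * z)) / lam ** 2
                       - math.lgamma(k)))

def weibull_pdf(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-(t / eta) ** beta)
```

For small [math]\displaystyle{ |\lambda | }[/math] the first branch approaches the lognormal branch, which is the continuity property that motivates the reparameterization.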

Generalized Gamma Reliability Function

The reliability function for the generalized gamma distribution is given by:



[math]\displaystyle{ R(t)=\left\{ \begin{array}{*{35}{l}} 1-{{\Gamma }_{I}}\left( \tfrac{{{e}^{\lambda \left( \tfrac{\text{ln}(t)-\mu }{\sigma } \right)}}}{{{\lambda }^{2}}};\tfrac{1}{{{\lambda }^{2}}} \right)\text{ if }\lambda \gt 0 \\ 1-\Phi \left( \tfrac{\text{ln}(t)-\mu }{\sigma } \right)\text{ if }\lambda =0 \\ {{\Gamma }_{I}}\left( \tfrac{{{e}^{\lambda \left( \tfrac{\text{ln}(t)-\mu }{\sigma } \right)}}}{{{\lambda }^{2}}};\tfrac{1}{{{\lambda }^{2}}} \right)\text{ if }\lambda \lt 0 \\ \end{array} \right. }[/math]

where:


[math]\displaystyle{ \Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z}{{e}^{-\tfrac{{{x}^{2}}}{2}}}dx }[/math]

and [math]\displaystyle{ {{\Gamma }_{I}}(x;k) }[/math] is the regularized incomplete gamma function with integration limit [math]\displaystyle{ x }[/math] and shape parameter [math]\displaystyle{ k }[/math] (in the argument order used in the reliability function above), which is given by:


[math]\displaystyle{ {{\Gamma }_{I}}(x;k)=\frac{1}{\Gamma (k)}\int_{0}^{x}{{s}^{k-1}}{{e}^{-s}}ds }[/math]

where [math]\displaystyle{ \Gamma (x) }[/math] is the gamma function of [math]\displaystyle{ x }[/math] . Note that in Weibull++ the probability plot of the generalized gamma is created on lognormal probability paper. This means that the fitted line will not be straight unless [math]\displaystyle{ \lambda =0. }[/math]
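The reliability function can be evaluated directly with scipy's regularized incomplete gamma function (scipy.special.gammainc takes the shape first, then the argument). As a check, [math]\displaystyle{ \lambda =1 }[/math] reduces to the Weibull reliability [math]\displaystyle{ {{e}^{-{{(t/\eta )}^{\beta }}}} }[/math] with [math]\displaystyle{ \beta =1/\sigma }[/math] and [math]\displaystyle{ \eta ={{e}^{\mu }} }[/math]:

```python
import math
from scipy.special import gammainc   # regularized lower incomplete gamma P(k, x)
from scipy.stats import norm

def gengamma_R(t, mu, sigma, lam):
    """Reliability of the (mu, sigma, lambda) generalized gamma distribution."""
    z = (math.log(t) - mu) / sigma
    if lam == 0.0:
        return 1.0 - norm.cdf(z)                       # lognormal case
    p = gammainc(1.0 / lam ** 2, math.exp(lam * z) / lam ** 2)
    return 1.0 - p if lam > 0 else p
```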

Generalized Gamma Failure Rate Function

As defined in Chapter 3, the failure rate function is given by:

[math]\displaystyle{ \lambda (t)=\frac{f(t)}{R(t)} }[/math]

Owing to the complexity of the equations involved, the function will not be displayed here, but the failure rate function for the generalized gamma distribution can be obtained simply by dividing its pdf by its reliability function, both given above.

Generalized Gamma Reliable Life

The reliable life, [math]\displaystyle{ {{T}_{R}} }[/math] , of a unit for a specified reliability, starting the mission at age zero, is given by:

[math]\displaystyle{ {{T}_{R}}=\left\{ \begin{array}{*{35}{l}} {{e}^{\mu +\tfrac{\sigma }{\lambda }\ln \left[ {{\lambda }^{2}}\Gamma _{I}^{-1}\left( 1-R,\tfrac{1}{{{\lambda }^{2}}} \right) \right]}}\text{ if }\lambda \gt 0 \\ {{e}^{\mu +\sigma {{\Phi }^{-1}}(1-R)}}\text{ if }\lambda =0 \\ {{e}^{\mu +\tfrac{\sigma }{\lambda }\ln \left[ {{\lambda }^{2}}\Gamma _{I}^{-1}\left( R,\tfrac{1}{{{\lambda }^{2}}} \right) \right]}}\text{ if }\lambda \lt 0 \\ \end{array} \right. }[/math]
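The inverse incomplete gamma function needed here is available as scipy.special.gammaincinv. A transcription, checked against the closed-form Weibull reliable life [math]\displaystyle{ {{T}_{R}}=\eta {{(-\ln R)}^{1/\beta }} }[/math] obtained when [math]\displaystyle{ \lambda =1 }[/math]:

```python
import math
from scipy.special import gammaincinv   # inverse of P(k, x) with respect to x
from scipy.stats import norm

def reliable_life(R, mu, sigma, lam):
    """Age T_R at which reliability drops to R, starting the mission at age zero."""
    if lam == 0.0:
        return math.exp(mu + sigma * norm.ppf(1.0 - R))   # lognormal quantile
    k = 1.0 / lam ** 2
    q = 1.0 - R if lam > 0 else R
    return math.exp(mu + (sigma / lam)
                    * math.log(lam ** 2 * gammaincinv(k, q)))
```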

Characteristics of the Generalized Gamma Distribution

As mentioned previously, the generalized gamma distribution includes other distributions as special cases based on the values of the parameters.

• The Weibull distribution is a special case when [math]\displaystyle{ \lambda =1 }[/math] and:

[math]\displaystyle{ \begin{align} & \beta = & \frac{1}{\sigma } \\ & \eta = & {{e}^{\mu }} \end{align} }[/math]

• In this case, the generalized distribution has the same behavior as the Weibull for [math]\displaystyle{ \sigma \gt 1, }[/math] [math]\displaystyle{ \sigma =1, }[/math] and [math]\displaystyle{ \sigma \lt 1 }[/math] ( [math]\displaystyle{ \beta \lt 1, }[/math] [math]\displaystyle{ \beta =1, }[/math] and [math]\displaystyle{ \beta \gt 1 }[/math] respectively).

• The exponential distribution is a special case when [math]\displaystyle{ \lambda =1 }[/math] and [math]\displaystyle{ \sigma =1 }[/math].

• The lognormal distribution is a special case when [math]\displaystyle{ \lambda =0 }[/math].

• The gamma distribution is a special case when [math]\displaystyle{ \lambda =\sigma }[/math].

By allowing [math]\displaystyle{ \lambda }[/math] to take negative values, the generalized gamma distribution can be further extended to include additional distributions as special cases. For example, the Fréchet distribution of maxima (also known as a reciprocal Weibull) is a special case when [math]\displaystyle{ \lambda =-1 }[/math].

Confidence Bounds

The only method available in Weibull++ for confidence bounds for the generalized gamma distribution is the Fisher matrix, which is described next.

Bounds on the Parameters

The lower and upper bounds on the parameter [math]\displaystyle{ \mu }[/math] are estimated from:

[math]\displaystyle{ \begin{align} & {{\mu }_{U}}= & \widehat{\mu }+{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (upper bound)} \\ & {{\mu }_{L}}= & \widehat{\mu }-{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (lower bound)} \end{align} }[/math]

For the parameter [math]\displaystyle{ \widehat{\sigma } }[/math] , [math]\displaystyle{ \ln (\widehat{\sigma }) }[/math] is treated as normally distributed, and the bounds are estimated from:

[math]\displaystyle{ \begin{align} & {{\sigma }_{U}}= & \widehat{\sigma }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })}}{\widehat{\sigma }}}}\text{ (upper bound)} \\ & {{\sigma }_{L}}= & \frac{\widehat{\sigma }}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })}}{\widehat{\sigma }}}}}\text{ (lower bound)} \end{align} }[/math]


For the parameter [math]\displaystyle{ \lambda , }[/math] the bounds are estimated from:

[math]\displaystyle{ \begin{align} & {{\lambda }_{U}}= & \widehat{\lambda }+{{K}_{\alpha }}\sqrt{Var(\widehat{\lambda })}\text{ (upper bound)} \\ & {{\lambda }_{L}}= & \widehat{\lambda }-{{K}_{\alpha }}\sqrt{Var(\widehat{\lambda })}\text{ (lower bound)} \end{align} }[/math]

where [math]\displaystyle{ {{K}_{\alpha }} }[/math] is defined by:

[math]\displaystyle{ \alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }}) }[/math]


If [math]\displaystyle{ \delta }[/math] is the confidence level, then [math]\displaystyle{ \alpha =\tfrac{1-\delta }{2} }[/math] for the two-sided bounds, and [math]\displaystyle{ \alpha =1-\delta }[/math] for the one-sided bounds.
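Since [math]\displaystyle{ {{K}_{\alpha }} }[/math] is simply a standard-normal critical value, it can be computed from the normal quantile function. A sketch, together with the normal-approximation bounds on [math]\displaystyle{ \mu }[/math] given above (the numeric inputs in the checks are made up):

```python
from scipy.stats import norm

def K_alpha(confidence, two_sided=True):
    """Standard-normal critical value, defined by alpha = 1 - Phi(K_alpha)."""
    alpha = (1.0 - confidence) / 2.0 if two_sided else 1.0 - confidence
    return norm.ppf(1.0 - alpha)

def mu_bounds(mu_hat, var_mu, confidence=0.90):
    """Two-sided normal-approximation bounds on mu."""
    half = K_alpha(confidence) * var_mu ** 0.5
    return mu_hat - half, mu_hat + half
```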

The variances and covariances of [math]\displaystyle{ \widehat{\mu } }[/math] and [math]\displaystyle{ \widehat{\sigma } }[/math] are estimated as follows:


[math]\displaystyle{ \begin{align} & & \left( \begin{matrix} \widehat{Var}\left( \widehat{\mu } \right) & \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right) & \widehat{Cov}\left( \widehat{\mu },\widehat{\lambda } \right) \\ \widehat{Cov}\left( \widehat{\sigma },\widehat{\mu } \right) & \widehat{Var}\left( \widehat{\sigma } \right) & \widehat{Cov}\left( \widehat{\sigma },\widehat{\lambda } \right) \\ \widehat{Cov}\left( \widehat{\lambda },\widehat{\mu } \right) & \widehat{Cov}\left( \widehat{\lambda },\widehat{\sigma } \right) & \widehat{Var}\left( \widehat{\lambda } \right) \\ \end{matrix} \right) \\ & = & \left( \begin{matrix} -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\mu }^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma } & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \lambda } \\ -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma } & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \lambda \partial \sigma } \\ -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \lambda } & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \lambda \partial \sigma } & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\lambda }^{2}}} \\ \end{matrix} \right)_{\mu =\widehat{\mu },\sigma =\widehat{\sigma },\lambda =\hat{\lambda }}^{-1} \end{align} }[/math]

where [math]\displaystyle{ \Lambda }[/math] is the log-likelihood function of the generalized gamma distribution.

Bounds on Reliability

The upper and lower bounds on reliability are given by:

[math]\displaystyle{ \begin{align} & {{R}_{U}}= & \frac{{\hat{R}}}{\hat{R}+(1-\hat{R}){{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{R})}}{\hat{R}(1-\hat{R})}}}} \\ & {{R}_{L}}= & \frac{{\hat{R}}}{\hat{R}+(1-\hat{R}){{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{R})}}{\hat{R}(1-\hat{R})}}}} \end{align} }[/math]

where:

[math]\displaystyle{ \begin{align} & Var(\widehat{R})= & {{\left( \frac{\partial R}{\partial \mu } \right)}^{2}}Var(\widehat{\mu })+{{\left( \frac{\partial R}{\partial \sigma } \right)}^{2}}Var(\widehat{\sigma })+{{\left( \frac{\partial R}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda })+ \\ & & +2\left( \frac{\partial R}{\partial \mu } \right)\left( \frac{\partial R}{\partial \sigma } \right)Cov(\widehat{\mu },\widehat{\sigma })+2\left( \frac{\partial R}{\partial \mu } \right)\left( \frac{\partial R}{\partial \lambda } \right)Cov(\widehat{\mu },\widehat{\lambda })+ \\ & & +2\left( \frac{\partial R}{\partial \lambda } \right)\left( \frac{\partial R}{\partial \sigma } \right)Cov(\widehat{\lambda },\widehat{\sigma }) \end{align} }[/math]

Bounds on Time

The bounds around time for a given percentile, or unreliability, are estimated by first solving the reliability function above with respect to time. Since [math]\displaystyle{ T }[/math] is a positive variable, the transformed variable [math]\displaystyle{ \hat{u}=\ln (\widehat{T}) }[/math] is treated as normally distributed and the bounds are estimated from:

[math]\displaystyle{ \begin{align} & {{u}_{U}}= & \ln {{T}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})} \\ & {{u}_{L}}= & \ln {{T}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})} \end{align} }[/math]

Solving for [math]\displaystyle{ {{T}_{U}} }[/math] and [math]\displaystyle{ {{T}_{L}} }[/math] we get:

[math]\displaystyle{ \begin{align} & {{T}_{U}}= & {{e}^{{{T}_{U}}}}\text{ (upper bound)} \\ & {{T}_{L}}= & {{e}^{{{T}_{L}}}}\text{ (lower bound)} \end{align} }[/math]

The variance of [math]\displaystyle{ u }[/math] is estimated from:

[math]\displaystyle{ \begin{align} & Var(\widehat{u})= & {{\left( \frac{\partial u}{\partial \mu } \right)}^{2}}Var(\widehat{\mu })+{{\left( \frac{\partial u}{\partial \sigma } \right)}^{2}}Var(\widehat{\sigma })+{{\left( \frac{\partial u}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda })+ \\ & & +2\left( \frac{\partial u}{\partial \mu } \right)\left( \frac{\partial u}{\partial \sigma } \right)Cov(\widehat{\mu },\widehat{\sigma })+2\left( \frac{\partial u}{\partial \mu } \right)\left( \frac{\partial u}{\partial \lambda } \right)Cov(\widehat{\mu },\widehat{\lambda })+ \\ & & +2\left( \frac{\partial u}{\partial \lambda } \right)\left( \frac{\partial u}{\partial \sigma } \right)Cov(\widehat{\lambda },\widehat{\sigma }) \end{align} }[/math]
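A minimal sketch of the exponentiation step above, with hypothetical values for [math]\displaystyle{ \widehat{u} }[/math] and its variance:

```python
import math

def time_bounds(u_hat, var_u, k_alpha):
    """Bounds on a time percentile via the log transform u = ln(T).

    u is treated as normally distributed, so the time bounds are
    the exponentiated normal bounds and are always positive.
    """
    half_width = k_alpha * math.sqrt(var_u)
    t_upper = math.exp(u_hat + half_width)
    t_lower = math.exp(u_hat - half_width)
    return t_lower, t_upper

# Hypothetical values for illustration: median-life estimate of 100 hours
t_lo, t_hi = time_bounds(u_hat=math.log(100.0), var_u=0.04, k_alpha=1.645)
```

Because the bounds are symmetric in log-time, their geometric mean equals the point estimate.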


A Generalized Gamma Distribution Example

The following data set represents revolutions-to-failure (in millions) for 23 ball bearings in a fatigue test [21].


[math]\displaystyle{ \begin{array}{*{35}{l}} \text{17}\text{.88} & \text{28}\text{.92} & \text{33} & \text{41}\text{.52} & \text{42}\text{.12} & \text{45}\text{.6} & \text{48}\text{.4} & \text{51}\text{.84} & \text{51}\text{.96} & \text{54}\text{.12} \\ \text{55}\text{.56} & \text{67}\text{.8} & \text{68}\text{.64} & \text{68}\text{.64} & \text{68}\text{.88} & \text{84}\text{.12} & \text{93}\text{.12} & \text{98}\text{.64} & \text{105}\text{.12} & \text{105}\text{.84} \\ \text{127}\text{.92} & \text{128}\text{.04} & \text{173}\text{.4} & {} & {} & {} & {} & {} & {} & {} \\ \end{array} }[/math]

When the generalized gamma distribution is fitted to this data using MLE, the following values for parameters are obtained:

[math]\displaystyle{ \begin{align} & \widehat{\mu }= & 4.23064 \\ & \widehat{\sigma }= & 0.509982 \\ & \widehat{\lambda }= & 0.307639 \end{align} }[/math]

Note that for this data, the generalized gamma offers a compromise between the Weibull [math]\displaystyle{ (\lambda =1) }[/math] and the lognormal [math]\displaystyle{ (\lambda =0) }[/math] distributions. The value of [math]\displaystyle{ \lambda }[/math] indicates that the lognormal distribution is better supported by the data. A better assessment, however, can be made by looking at the confidence bounds on [math]\displaystyle{ \lambda . }[/math] For example, the 90% two-sided confidence bounds are:

[math]\displaystyle{ \begin{align} & {{\lambda }_{L}}= & -0.592087 \\ & {{\lambda }_{U}}= & 1.20736 \end{align} }[/math]

It can then be concluded that both distributions (i.e., Weibull and lognormal) are well supported by the data, with the lognormal being the better supported of the two. In Weibull++, the generalized gamma probability is plotted on gamma probability paper, as shown next.

It is also important to note that, as in the case of the mixed Weibull distribution, when regression analysis is performed with a generalized gamma model, the choice of regression axis, i.e. [math]\displaystyle{ RRX }[/math] or [math]\displaystyle{ RRY, }[/math] is of no consequence, since non-linear regression is utilized.

The Gamma Distribution

The gamma distribution is a flexible life distribution model that may offer a good fit to some sets of failure data. It is not, however, widely used as a life distribution model for common failure mechanisms. The gamma distribution does arise naturally as the time-to-first-fail distribution for a system with standby exponentially distributed backups; equivalently, it describes the sum of independent exponential random variables. When the shape parameter is an integer, the gamma distribution is known as the Erlang distribution, which is used frequently in queuing theory applications. [32]

Gamma Probability Density Function

The [math]\displaystyle{ pdf }[/math] of the gamma distribution is given by:

[math]\displaystyle{ f(T)=\frac{{{e}^{kz-{{e}^{z}}}}}{t\Gamma (k)} }[/math]

where:

[math]\displaystyle{ z=\ln (t)-\mu }[/math]

and:

[math]\displaystyle{ \begin{align} & {{e}^{\mu }}= & \text{scale parameter} \\ & k= & \text{shape parameter} \end{align} }[/math]

where [math]\displaystyle{ 0\lt t\lt \infty }[/math] , [math]\displaystyle{ -\infty \lt \mu \lt \infty }[/math] and [math]\displaystyle{ k\gt 0 }[/math] .

The Gamma Reliability Function

The reliability for a mission of time [math]\displaystyle{ T }[/math] for the gamma distribution is:


[math]\displaystyle{ R=1-{{\Gamma }_{1}}(k;{{e}^{z}}) }[/math]


The Gamma Mean, Median and Mode

The gamma mean or MTTF is:


[math]\displaystyle{ \overline{T}=k{{e}^{\mu }} }[/math]


The mode exists if [math]\displaystyle{ k\gt 1 }[/math] and is given by:


[math]\displaystyle{ \tilde{T}=(k-1){{e}^{\mu }} }[/math]


The median is:

[math]\displaystyle{ \widehat{T}={{e}^{\mu +\ln (\Gamma _{1}^{-1}(0.5;k))}} }[/math]

The Gamma Standard Deviation

The standard deviation for the gamma distribution is:

[math]\displaystyle{ {{\sigma }_{T}}=\sqrt{k}{{e}^{\mu }} }[/math]


The Gamma Reliable Life

The gamma reliable life is:

[math]\displaystyle{ {{T}_{R}}={{e}^{\mu +\ln (\Gamma _{1}^{-1}(1-R;k))}} }[/math]

The Gamma Failure Rate Function

The instantaneous gamma failure rate is given by:

[math]\displaystyle{ \lambda =\frac{{{e}^{kz-{{e}^{z}}}}}{t\Gamma (k)(1-{{\Gamma }_{1}}(k;{{e}^{z}}))} }[/math]
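The gamma [math]\displaystyle{ pdf }[/math], reliability and failure rate functions above can be checked numerically. The sketch below implements the regularized lower incomplete gamma by its power series (a library routine such as scipy.special.gammainc would normally be used instead); the parameter values are hypothetical:

```python
import math

def gammainc_lower(k, x, terms=200):
    """Regularized lower incomplete gamma P(k, x) via its power series.

    Adequate for moderate x; for large x a continued-fraction
    expansion is preferred.
    """
    term = 1.0 / k
    total = term
    a = k
    for _ in range(terms):
        a += 1.0
        term *= x / a
        total += term
    return total * math.exp(-x + k * math.log(x) - math.lgamma(k))

def gamma_pdf(t, mu, k):
    # f(t) = e^(k z - e^z) / (t * Gamma(k)), with z = ln(t) - mu
    z = math.log(t) - mu
    return math.exp(k * z - math.exp(z)) / (t * math.gamma(k))

def gamma_reliability(t, mu, k):
    # R(t) = 1 - Gamma_1(k; e^z)
    z = math.log(t) - mu
    return 1.0 - gammainc_lower(k, math.exp(z))

def gamma_failure_rate(t, mu, k):
    # lambda(t) = f(t) / R(t)
    return gamma_pdf(t, mu, k) / gamma_reliability(t, mu, k)

mu, k, t = 0.0, 3.0, 2.0
r = gamma_reliability(t, mu, k)     # 1 - P(3, 2) = 5 * exp(-2) ≈ 0.6767
lam = gamma_failure_rate(t, mu, k)
```

For integer [math]\displaystyle{ k }[/math] the reliability reduces to a finite Poisson sum, which is how the value above can be checked by hand.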

Characteristics of the Gamma Distribution

Some of the specific characteristics of the gamma distribution are the following:

For [math]\displaystyle{ k\gt 1 }[/math] :

• As [math]\displaystyle{ T\to 0,\infty }[/math] , [math]\displaystyle{ f(T)\to 0. }[/math]

• [math]\displaystyle{ f(T) }[/math] increases from 0 to the mode value and decreases thereafter.

• If [math]\displaystyle{ k\le 2 }[/math] then the [math]\displaystyle{ pdf }[/math] has one inflection point at [math]\displaystyle{ T={{e}^{\mu }}\sqrt{k-1}\left( \sqrt{k-1}+1 \right). }[/math]

• If [math]\displaystyle{ k\gt 2 }[/math] then the [math]\displaystyle{ pdf }[/math] has two inflection points at [math]\displaystyle{ T={{e}^{\mu }}\sqrt{k-1}\left( \sqrt{k-1}\pm 1 \right). }[/math]

• For a fixed [math]\displaystyle{ k }[/math] , as [math]\displaystyle{ \mu }[/math] increases, the [math]\displaystyle{ pdf }[/math] gets stretched out to the right and its height decreases, while maintaining its shape (since [math]\displaystyle{ {{e}^{\mu }} }[/math] is a scale parameter).

• As [math]\displaystyle{ T\to \infty ,\lambda (T)\to \tfrac{1}{{{e}^{\mu }}}. }[/math]


For [math]\displaystyle{ k=1 }[/math] :

• Gamma becomes the exponential distribution.

• As [math]\displaystyle{ T\to 0 }[/math] , [math]\displaystyle{ f(T)\to \tfrac{1}{{{e}^{\mu }}}. }[/math]

• As [math]\displaystyle{ T\to \infty ,f(T)\to 0. }[/math]

• The [math]\displaystyle{ pdf }[/math] decreases monotonically and is convex.

• [math]\displaystyle{ \lambda (T)\equiv \tfrac{1}{{{e}^{\mu }}} }[/math] ; that is, the failure rate [math]\displaystyle{ \lambda (T) }[/math] is constant.

• The mode does not exist.

For [math]\displaystyle{ 0\lt k\lt 1 }[/math] :

• As [math]\displaystyle{ T\to 0 }[/math] , [math]\displaystyle{ f(T)\to \infty . }[/math]

• As [math]\displaystyle{ T\to \infty ,f(T)\to 0. }[/math]

• As [math]\displaystyle{ T\to \infty ,\lambda (T)\to \tfrac{1}{{{e}^{\mu }}}. }[/math]

• The [math]\displaystyle{ pdf }[/math] decreases monotonically and is convex.

• As [math]\displaystyle{ \mu }[/math] increases, the [math]\displaystyle{ pdf }[/math] gets stretched out to the right and its height decreases, while maintaining its shape.

• As [math]\displaystyle{ \mu }[/math] decreases, the [math]\displaystyle{ pdf }[/math] shifts towards the left and its height increases.

• The mode does not exist.

Confidence Bounds

The only method available in Weibull++ for confidence bounds for the gamma distribution is the Fisher matrix, which is described next. The complete derivations were presented in detail (for a general function) in Chapter 5.

Bounds on the Parameters

The lower and upper bounds on the mean, [math]\displaystyle{ \widehat{\mu } }[/math] , are estimated from:

[math]\displaystyle{ \begin{align} & {{\mu }_{U}}= & \widehat{\mu }+{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (upper bound)} \\ & {{\mu }_{L}}= & \widehat{\mu }-{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (lower bound)} \end{align} }[/math]


Since the shape parameter, [math]\displaystyle{ \widehat{k} }[/math] , must be positive, [math]\displaystyle{ \ln (\widehat{k}) }[/math] is treated as normally distributed and the bounds are estimated from:

[math]\displaystyle{ \begin{align} & {{k}_{U}}= & \widehat{k}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{k})}}{\widehat{k}}}}\text{ (upper bound)} \\ & {{k}_{L}}= & \frac{\widehat{k}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{k})}}{\widehat{k}}}}}\text{ (lower bound)} \end{align} }[/math]

where [math]\displaystyle{ {{K}_{\alpha }} }[/math] is defined by:

[math]\displaystyle{ \alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }}) }[/math]

If [math]\displaystyle{ \delta }[/math] is the confidence level, then [math]\displaystyle{ \alpha =\tfrac{1-\delta }{2} }[/math] for the two-sided bounds and [math]\displaystyle{ \alpha =1-\delta }[/math] for the one-sided bounds.
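[math]\displaystyle{ {{K}_{\alpha }} }[/math] is simply a standard normal quantile, and can be computed by inverting [math]\displaystyle{ \Phi }[/math] numerically. A sketch using bisection on the error function:

```python
import math

def k_alpha(delta, two_sided=True):
    """Standard normal quantile K_alpha for confidence level delta.

    alpha = (1 - delta)/2 for two-sided bounds, 1 - delta for
    one-sided; K_alpha solves 1 - Phi(K) = alpha, found here by
    bisection (a library inverse-CDF would normally be used).
    """
    alpha = (1.0 - delta) / 2.0 if two_sided else 1.0 - delta
    target = 1.0 - alpha  # Phi(K_alpha) = 1 - alpha
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        phi = 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0)))
        if phi < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k90 = k_alpha(0.90)   # two-sided 90% bounds
```

For two-sided 90% bounds this gives the familiar value of about 1.645.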

The variances and covariances of [math]\displaystyle{ \widehat{\mu } }[/math] and [math]\displaystyle{ \widehat{k} }[/math] are estimated from the Fisher matrix, as follows:

[math]\displaystyle{ \left( \begin{matrix} \widehat{Var}\left( \widehat{\mu } \right) & \widehat{Cov}\left( \widehat{\mu },\widehat{k} \right) \\ \widehat{Cov}\left( \widehat{\mu },\widehat{k} \right) & \widehat{Var}\left( \widehat{k} \right) \\ \end{matrix} \right)=\left( \begin{matrix} -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\mu }^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial k} \\ {} & {} \\ -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial k} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{k}^{2}}} \\ \end{matrix} \right)_{\mu =\widehat{\mu },k=\widehat{k}}^{-1} }[/math]


[math]\displaystyle{ \Lambda }[/math] is the log-likelihood function of the gamma distribution, described in Chapter 3 and Appendix C.

Bounds on Reliability

The reliability of the gamma distribution is:

[math]\displaystyle{ \widehat{R}(T;\hat{\mu },\hat{k})=1-{{\Gamma }_{1}}(\widehat{k};{{e}^{\widehat{z}}}) }[/math]

where:

[math]\displaystyle{ \widehat{z}=\ln (t)-\widehat{\mu } }[/math]

The upper and lower bounds on reliability are:

[math]\displaystyle{ {{R}_{U}}=\frac{\widehat{R}}{\widehat{R}+(1-\widehat{R})\exp (\tfrac{-{{K}_{\alpha }}\sqrt{Var(\widehat{R})\text{ }}}{\widehat{R}(1-\widehat{R})})}\text{ (upper bound)} }[/math]

[math]\displaystyle{ {{R}_{L}}=\frac{\widehat{R}}{\widehat{R}+(1-\widehat{R})\exp (\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{R})\text{ }}}{\widehat{R}(1-\widehat{R})})}\text{ (lower bound)} }[/math]

where:

[math]\displaystyle{ Var(\widehat{R})={{(\frac{\partial R}{\partial \mu })}^{2}}Var(\widehat{\mu })+2(\frac{\partial R}{\partial \mu })(\frac{\partial R}{\partial k})Cov(\widehat{\mu },\widehat{k})+{{(\frac{\partial R}{\partial k})}^{2}}Var(\widehat{k}) }[/math]

Bounds on Time

The bounds around time for a given gamma percentile (unreliability) are estimated by first solving the reliability equation with respect to time. Since [math]\displaystyle{ T }[/math] is a positive variable, the transformed variable [math]\displaystyle{ \widehat{u}=\ln (\widehat{T}) }[/math] is treated as normally distributed. From the reliable life equation:


[math]\displaystyle{ \widehat{u}(\widehat{\mu },\widehat{k})=\widehat{\mu }+\ln (\Gamma _{1}^{-1}(1-R;\widehat{k})) }[/math]


The variance of [math]\displaystyle{ \widehat{u} }[/math] is estimated from:


[math]\displaystyle{ Var(\widehat{u})={{(\frac{\partial u}{\partial \mu })}^{2}}Var(\widehat{\mu })+2(\frac{\partial u}{\partial \mu })(\frac{\partial u}{\partial k})Cov(\widehat{\mu },\widehat{k})+{{(\frac{\partial u}{\partial k})}^{2}}Var(\widehat{k}) }[/math]


The upper and lower bounds are then found by:


[math]\displaystyle{ \begin{align} & {{T}_{U}}= & {{e}^{\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}}}\text{ (upper bound)} \\ & {{T}_{L}}= & {{e}^{\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}}}\text{ (lower bound)} \end{align} }[/math]

A Gamma Distribution Example

Twenty-four units were reliability tested, and the following life test data were obtained:


[math]\displaystyle{ \begin{matrix} \text{61} & \text{50} & \text{67} & \text{49} & \text{53} & \text{62} \\ \text{53} & \text{61} & \text{43} & \text{65} & \text{53} & \text{56} \\ \text{62} & \text{56} & \text{58} & \text{55} & \text{58} & \text{48} \\ \text{66} & \text{44} & \text{48} & \text{58} & \text{43} & \text{40} \\ \end{matrix} }[/math]

Fitting the gamma distribution to this data, using maximum likelihood as the analysis method, gives the following parameters:

[math]\displaystyle{ \begin{align} & \hat{\mu }= & 7.72E-02 \\ & \hat{k}= & 50.4908 \end{align} }[/math]

Using rank regression on [math]\displaystyle{ X, }[/math] the estimated parameters are:

[math]\displaystyle{ \begin{align} & \hat{\mu }= & 0.2915 \\ & \hat{k}= & 41.1726 \end{align} }[/math]


Using rank regression on [math]\displaystyle{ Y, }[/math] the estimated parameters are:

[math]\displaystyle{ \begin{align} & \hat{\mu }= & 0.2915 \\ & \hat{k}= & 41.1726 \end{align} }[/math]
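As a quick sanity check of the MLE result, the gamma mean [math]\displaystyle{ k{{e}^{\mu }} }[/math] can be compared with the sample mean of the data above:

```python
import math

# Life test data from the example above
data = [61, 50, 67, 49, 53, 62,
        53, 61, 43, 65, 53, 56,
        62, 56, 58, 55, 58, 48,
        66, 44, 48, 58, 43, 40]

sample_mean = sum(data) / len(data)

# MLE parameters reported above; the gamma mean (MTTF) is k * e^mu
mu_hat, k_hat = 7.72e-2, 50.4908
mle_mean = k_hat * math.exp(mu_hat)
```

Both values are approximately 54.54, as expected for a complete (uncensored) sample.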

The Logistic Distribution

The logistic distribution has been used for growth models, and is used in a certain type of regression known as logistic regression. It also has applications in modeling life data. The shape of the logistic distribution is very similar to that of the normal distribution [27]. Some argue that the logistic distribution is inappropriate for modeling lifetime data because the left-hand limit of the distribution extends to negative infinity, which could conceivably result in modeling negative times-to-failure. However, provided that the distribution in question has a relatively high mean and a relatively small scale parameter, the issue of negative failure times should not present a problem.

Logistic Probability Density Function

The logistic [math]\displaystyle{ pdf }[/math] is given by:

[math]\displaystyle{ \begin{matrix} f(T)=\tfrac{{{e}^{z}}}{\sigma {{(1+{{e}^{z}})}^{2}}} \\ z=\tfrac{t-\mu }{\sigma } \\ -\infty \lt T\lt \infty ,\ \ -\infty \lt \mu \lt \infty ,\sigma \gt 0 \\ \end{matrix} }[/math]

where:

[math]\displaystyle{ \begin{align} \mu = & \text{location parameter (also denoted as }\overline{T}\text{)} \\ \sigma = & \text{scale parameter} \end{align} }[/math]

The Logistic Mean, Median and Mode

The logistic mean or MTTF is actually one of the parameters of the distribution, usually denoted as [math]\displaystyle{ \mu }[/math] . Since the logistic distribution is symmetrical, the median and the mode are always equal to the mean, [math]\displaystyle{ \mu =\tilde{T}=\breve{T}. }[/math]

The Logistic Standard Deviation

The standard deviation of the logistic distribution, [math]\displaystyle{ {{\sigma }_{T}} }[/math] , is given by:

[math]\displaystyle{ {{\sigma }_{T}}=\sigma \pi \frac{\sqrt{3}}{3} }[/math]


The Logistic Reliability Function

The reliability for a mission of time [math]\displaystyle{ T }[/math] , starting at age 0, for the logistic distribution is determined by:


[math]\displaystyle{ R(T)=\int_{T}^{\infty }f(t)dt }[/math]

or:


[math]\displaystyle{ R(T)=\frac{1}{1+{{e}^{z}}} }[/math]


The unreliability function is:


[math]\displaystyle{ F=\frac{{{e}^{z}}}{1+{{e}^{z}}} }[/math]

where:


[math]\displaystyle{ z=\frac{T-\mu }{\sigma } }[/math]

The Logistic Conditional Reliability Function

The logistic conditional reliability function is given by:

[math]\displaystyle{ R(t/T)=\frac{R(T+t)}{R(T)}=\frac{1+{{e}^{\tfrac{T-\mu }{\sigma }}}}{1+{{e}^{\tfrac{t+T-\mu }{\sigma }}}} }[/math]


The Logistic Reliable Life

The logistic reliable life is given by:


[math]\displaystyle{ {{T}_{R}}=\mu +\sigma [\ln (1-R)-\ln (R)] }[/math]

The Logistic Failure Rate Function

The logistic failure rate function is given by:

[math]\displaystyle{ \lambda (T)=\frac{{{e}^{z}}}{\sigma (1+{{e}^{z}})} }[/math]
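The logistic [math]\displaystyle{ pdf }[/math], reliability and failure rate functions given above can be verified against one another numerically; the parameter values below are hypothetical:

```python
import math

def logistic_pdf(t, mu, sigma):
    # f(T) = e^z / (sigma * (1 + e^z)^2), z = (t - mu)/sigma
    z = (t - mu) / sigma
    return math.exp(z) / (sigma * (1.0 + math.exp(z)) ** 2)

def logistic_reliability(t, mu, sigma):
    # R(T) = 1 / (1 + e^z)
    z = (t - mu) / sigma
    return 1.0 / (1.0 + math.exp(z))

def logistic_failure_rate(t, mu, sigma):
    # lambda(T) = e^z / (sigma * (1 + e^z)) = f(T)/R(T)
    z = (t - mu) / sigma
    return math.exp(z) / (sigma * (1.0 + math.exp(z)))

mu, sigma, t = 20.0, 5.0, 23.0
r = logistic_reliability(t, mu, sigma)
f = logistic_pdf(t, mu, sigma)
lam = logistic_failure_rate(t, mu, sigma)
```

The checks below confirm the symmetry properties stated later in this section: [math]\displaystyle{ R(\mu )=0.5 }[/math] and the maximum [math]\displaystyle{ pdf }[/math] value is [math]\displaystyle{ 1/(4\sigma ) }[/math].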


Characteristics of the Logistic Distribution

• The logistic distribution has no shape parameter. This means that the logistic [math]\displaystyle{ pdf }[/math] has only one shape, the bell shape, and this shape does not change. The shape of the logistic distribution is very similar to that of the normal distribution.

• The mean, [math]\displaystyle{ \mu }[/math] , or the mean life or the [math]\displaystyle{ MTTF }[/math] , is also the location parameter of the logistic [math]\displaystyle{ pdf }[/math] , as it locates the [math]\displaystyle{ pdf }[/math] along the abscissa. It can assume values of [math]\displaystyle{ -\infty \lt \bar{T}\lt \infty }[/math] .

• As [math]\displaystyle{ \mu }[/math] decreases, the [math]\displaystyle{ pdf }[/math] is shifted to the left.

• As [math]\displaystyle{ \mu }[/math] increases, the [math]\displaystyle{ pdf }[/math] is shifted to the right.

• As [math]\displaystyle{ \sigma }[/math] decreases, the [math]\displaystyle{ pdf }[/math] gets pushed toward the mean, or it becomes narrower and taller.

• As [math]\displaystyle{ \sigma }[/math] increases, the [math]\displaystyle{ pdf }[/math] spreads out away from the mean, or it becomes broader and shallower.

• The scale parameter can assume values of [math]\displaystyle{ 0\lt \sigma \lt \infty }[/math].

• The logistic [math]\displaystyle{ pdf }[/math] starts at [math]\displaystyle{ T=-\infty }[/math] with an [math]\displaystyle{ f(T)=0 }[/math] . As [math]\displaystyle{ T }[/math] increases, [math]\displaystyle{ f(T) }[/math] also increases, goes through its point of inflection and reaches its maximum value at [math]\displaystyle{ T=\bar{T} }[/math] . Thereafter, [math]\displaystyle{ f(T) }[/math] decreases, goes through its point of inflection and assumes a value of [math]\displaystyle{ f(T)=0 }[/math] at [math]\displaystyle{ T=+\infty }[/math] .

• For [math]\displaystyle{ T=\pm \infty , }[/math] the [math]\displaystyle{ pdf }[/math] equals [math]\displaystyle{ 0. }[/math] The maximum value of the [math]\displaystyle{ pdf }[/math] occurs at [math]\displaystyle{ T }[/math] = [math]\displaystyle{ \mu }[/math] and equals [math]\displaystyle{ \tfrac{1}{4\sigma }. }[/math]

• The point of inflection of the [math]\displaystyle{ pdf }[/math] plot is the point where the second derivative of the [math]\displaystyle{ pdf }[/math] equals zero. The inflection point occurs at [math]\displaystyle{ T=\mu +\sigma \ln (2\pm \sqrt{3}) }[/math] or [math]\displaystyle{ T\approx \mu \pm \sigma 1.31696 }[/math].

• If the location parameter [math]\displaystyle{ \mu }[/math] decreases, the reliability plot is shifted to the left. If [math]\displaystyle{ \mu }[/math] increases, the reliability plot is shifted to the right.

• If [math]\displaystyle{ T=\mu }[/math] then [math]\displaystyle{ R=0.5 }[/math] , which is the inflection point of the reliability plot. If [math]\displaystyle{ T\lt \mu }[/math] then [math]\displaystyle{ R(t) }[/math] is concave (concave down); if [math]\displaystyle{ T\gt \mu }[/math] then [math]\displaystyle{ R(t) }[/math] is convex (concave up). For [math]\displaystyle{ T\lt \mu , }[/math] [math]\displaystyle{ \lambda (t) }[/math] is convex (concave up); for [math]\displaystyle{ T\gt \mu , }[/math] [math]\displaystyle{ \lambda (t) }[/math] is concave (concave down).

• The main difference between the normal distribution and the logistic distribution lies in the tails and in the behavior of the failure rate function. The logistic distribution has slightly longer tails than the normal distribution. Also, in the upper tail of the logistic distribution, the failure rate function levels out for large [math]\displaystyle{ t }[/math] , approaching [math]\displaystyle{ 1/\sigma . }[/math]

• If the location parameter [math]\displaystyle{ \mu }[/math] decreases, the failure rate plot is shifted to the left; conversely, if [math]\displaystyle{ \mu }[/math] increases, the failure rate plot is shifted to the right.

• [math]\displaystyle{ \lambda (t) }[/math] always increases: [math]\displaystyle{ \lambda (t)\to 0 }[/math] for [math]\displaystyle{ T\to -\infty }[/math] and [math]\displaystyle{ \lambda (t)\to \tfrac{1}{\sigma } }[/math] for [math]\displaystyle{ T\to \infty . }[/math] It is always the case that [math]\displaystyle{ 0\le \lambda (t)\le \tfrac{1}{\sigma }. }[/math]

• If [math]\displaystyle{ \sigma }[/math] increases, then [math]\displaystyle{ \lambda (t) }[/math] increases more slowly and smoothly. The segment of time where [math]\displaystyle{ 0\lt \lambda (t)\lt \tfrac{1}{\sigma } }[/math] increases, too, whereas the region where [math]\displaystyle{ \lambda (t) }[/math] is close to [math]\displaystyle{ 0 }[/math] or [math]\displaystyle{ \tfrac{1}{\sigma } }[/math] gets narrower. Conversely, if [math]\displaystyle{ \sigma }[/math] decreases, then [math]\displaystyle{ \lambda (t) }[/math] increases more quickly and sharply. The segment of time where [math]\displaystyle{ 0\lt }[/math] [math]\displaystyle{ \lambda (t)\lt \tfrac{1}{\sigma } }[/math] decreases, too, whereas the region where [math]\displaystyle{ \lambda (t) }[/math] is close to [math]\displaystyle{ 0 }[/math] or [math]\displaystyle{ \tfrac{1}{\sigma } }[/math] gets broader.

Weibull++ Notes on Negative Time Values

One of the disadvantages of using the logistic distribution for reliability calculations is the fact that the logistic distribution starts at negative infinity, which can result in negative values for some of the results. Negative time values are not accepted in most of the components of Weibull++; certain components of the application reserve negative values for suspensions, or will not return negative results. For example, the Quick Calculation Pad will return a null value (zero) if the result is negative. Only the Free-Form (Probit) data sheet can accept negative values for the random variable (x-axis values).


Probability Paper

The form of the Logistic probability paper is based on linearizing the [math]\displaystyle{ cdf }[/math] . From Eqn. (UnR fcn), [math]\displaystyle{ z }[/math] can be calculated as a function of the [math]\displaystyle{ cdf }[/math] [math]\displaystyle{ F }[/math] as follows:

[math]\displaystyle{ z=\ln (F)-\ln (1-F) }[/math]


or using Eqn. (z func of parameters)

[math]\displaystyle{ \frac{T-\mu }{\sigma }=\ln (F)-\ln (1-F) }[/math]

Then:

[math]\displaystyle{ \ln (F)-\ln (1-F)=-\frac{\mu }{\sigma }+\frac{1}{\sigma }T }[/math]


Now let:

[math]\displaystyle{ y=\ln (F)-\ln (1-F) }[/math]


[math]\displaystyle{ x=T }[/math]


and:

[math]\displaystyle{ a=-\frac{\mu }{\sigma } }[/math]


[math]\displaystyle{ b=\frac{1}{\sigma } }[/math]


which results in the following linear equation:

[math]\displaystyle{ y=a+bx }[/math]


The logistic probability paper resulting from this linearized [math]\displaystyle{ cdf }[/math] function is shown next.


Since the logistic distribution is symmetrical, the area under the [math]\displaystyle{ pdf }[/math] curve from [math]\displaystyle{ -\infty }[/math] to [math]\displaystyle{ \mu }[/math] is [math]\displaystyle{ 0.5 }[/math] , as is the area from [math]\displaystyle{ \mu }[/math] to [math]\displaystyle{ +\infty }[/math] . Consequently, the value of [math]\displaystyle{ \mu }[/math] is said to be the point where [math]\displaystyle{ R(t)=Q(t)=50% }[/math] . This means that the estimate of [math]\displaystyle{ \mu }[/math] can be read from the point where the plotted line crosses the 50% unreliability line. For [math]\displaystyle{ z=1 }[/math] , [math]\displaystyle{ \sigma =t-\mu }[/math] and [math]\displaystyle{ R(t)=\tfrac{1}{1+\exp (1)}\approx 0.2689. }[/math] Therefore, [math]\displaystyle{ \sigma }[/math] can be found by subtracting [math]\displaystyle{ \mu }[/math] from the time value where the plotted probability line crosses the 73.10% unreliability (26.89% reliability) horizontal line.
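The linearization above can be exercised numerically: generating exact unreliability values from a known logistic model and fitting the line [math]\displaystyle{ y=a+bx }[/math] by least squares recovers the parameters. (Real data would use median rank estimates of [math]\displaystyle{ F }[/math] ; the values here are exact by construction, so the fit is exact.)

```python
import math

# Exact unreliability values from a known logistic model (mu=20, sigma=5)
mu_true, sigma_true = 20.0, 5.0
times = [10.0, 15.0, 20.0, 25.0, 30.0]
F = [1.0 / (1.0 + math.exp(-(t - mu_true) / sigma_true)) for t in times]

# Linearized coordinates: y = ln(F) - ln(1 - F), x = T
x = times
y = [math.log(p) - math.log(1.0 - p) for p in F]

# Ordinary least squares for y = a + b*x
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
     / sum((xi - xbar) ** 2 for xi in x))
a = ybar - b * xbar

# Invert the reparameterization: b = 1/sigma, a = -mu/sigma
sigma_est = 1.0 / b
mu_est = -a / b
```

This is exactly the rank regression on Y procedure, performed on the linearized coordinates of the probability paper.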

Confidence Bounds

In this section, we present the methods used in the application to estimate the different types of confidence bounds for logistically distributed data. The complete derivations were presented in detail (for a general function) in Chapter 5.

Bounds on the Parameters

The lower and upper bounds on the location parameter [math]\displaystyle{ \widehat{\mu } }[/math] are estimated from

[math]\displaystyle{ {{\mu }_{U}}=\widehat{\mu }+{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })\text{ }}\text{ (upper bound)} }[/math]

[math]\displaystyle{ {{\mu }_{L}}=\widehat{\mu }-{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })\text{ }}\text{ (lower bound)} }[/math]

The lower and upper bounds on the scale parameter [math]\displaystyle{ \widehat{\sigma } }[/math] are estimated from:

[math]\displaystyle{ {{\sigma }_{U}}=\widehat{\sigma }{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })\text{ }}}{\widehat{\sigma }}}}(\text{upper bound}) }[/math]


[math]\displaystyle{ {{\sigma }_{L}}=\widehat{\sigma }{{e}^{\tfrac{-{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })\text{ }}}{\widehat{\sigma }}}}\text{ (lower bound)} }[/math]

where [math]\displaystyle{ {{K}_{\alpha }} }[/math] is defined by:

[math]\displaystyle{ \alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }}) }[/math]


If [math]\displaystyle{ \delta }[/math] is the confidence level, then [math]\displaystyle{ \alpha =\tfrac{1-\delta }{2} }[/math] for the two-sided bounds, and [math]\displaystyle{ \alpha =1-\delta }[/math] for the one-sided bounds. The variances and covariances of [math]\displaystyle{ \widehat{\mu } }[/math] and [math]\displaystyle{ \widehat{\sigma } }[/math] are estimated from the Fisher matrix, as follows:

[math]\displaystyle{ \left( \begin{matrix} \widehat{Var}\left( \widehat{\mu } \right) & \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right) \\ \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right) & \widehat{Var}\left( \widehat{\sigma } \right) \\ \end{matrix} \right)=\left( \begin{matrix} -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\mu }^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma } \\ {} & {} \\ -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma } & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }^{2}}} \\ \end{matrix} \right)_{\mu =\widehat{\mu },\sigma =\widehat{\sigma }}^{-1} }[/math]

[math]\displaystyle{ \Lambda }[/math] is the log-likelihood function of the logistic distribution, described in Chapter 3 and Appendix C.

Bounds on Reliability

The reliability of the logistic distribution is:

[math]\displaystyle{ \widehat{R}=\frac{1}{1+{{e}^{\widehat{z}}}} }[/math]

where:

[math]\displaystyle{ \widehat{z}=\frac{T-\widehat{\mu }}{\widehat{\sigma }} }[/math]


Here [math]\displaystyle{ -\infty \lt T\lt \infty }[/math] , [math]\displaystyle{ -\infty \lt \mu \lt \infty }[/math] , [math]\displaystyle{ 0\lt \sigma \lt \infty }[/math] . Therefore, [math]\displaystyle{ z }[/math] also is changing from [math]\displaystyle{ -\infty }[/math] to [math]\displaystyle{ +\infty }[/math] . Then the bounds on [math]\displaystyle{ z }[/math] are estimated from:

[math]\displaystyle{ {{z}_{U}}=\widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})\text{ }} }[/math]


[math]\displaystyle{ {{z}_{L}}=\widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})\text{ }}\text{ } }[/math]


where:

[math]\displaystyle{ Var(\widehat{z})={{(\frac{\partial z}{\partial \mu })}^{2}}Var(\widehat{\mu })+2(\frac{\partial z}{\partial \mu })(\frac{\partial z}{\partial \sigma })Cov(\widehat{\mu },\widehat{\sigma })+{{(\frac{\partial z}{\partial \sigma })}^{2}}Var(\widehat{\sigma }) }[/math]

or:

[math]\displaystyle{ Var(\widehat{z})=\frac{1}{{{\sigma }^{2}}}(Var(\widehat{\mu })+2\widehat{z}Cov(\widehat{\mu },\widehat{\sigma })+{{\widehat{z}}^{2}}Var(\widehat{\sigma })) }[/math]
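The simplified form follows from the partial derivatives [math]\displaystyle{ \partial z/\partial \mu =-1/\sigma }[/math] and [math]\displaystyle{ \partial z/\partial \sigma =-z/\sigma . }[/math] A numeric check that the two expressions agree, using hypothetical variance and covariance estimates:

```python
# Hypothetical Fisher-matrix estimates for illustration
var_mu, var_sigma, cov = 0.25, 0.09, 0.02
mu_hat, sigma_hat, t = 20.0, 5.0, 26.0
z = (t - mu_hat) / sigma_hat

# Delta-method form with explicit partials:
# dz/dmu = -1/sigma, dz/dsigma = -z/sigma
dz_dmu = -1.0 / sigma_hat
dz_dsigma = -z / sigma_hat
var_z_partials = (dz_dmu ** 2 * var_mu
                  + 2.0 * dz_dmu * dz_dsigma * cov
                  + dz_dsigma ** 2 * var_sigma)

# Simplified form: (1/sigma^2) * (Var(mu) + 2 z Cov + z^2 Var(sigma))
var_z_simplified = (var_mu + 2.0 * z * cov + z ** 2 * var_sigma) / sigma_hat ** 2
```

The two lines compute the same quantity; the simplified form is what the application would evaluate in practice.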

The upper and lower bounds on reliability are:

[math]\displaystyle{ {{R}_{U}}=\frac{1}{1+{{e}^{{{z}_{L}}}}}\text{(upper bound)} }[/math]


[math]\displaystyle{ {{R}_{L}}=\frac{1}{1+{{e}^{{{z}_{U}}}}}\text{(lower bound)} }[/math]

Bounds on Time

The bounds around time for a given logistic percentile (unreliability) are estimated by first solving the reliability equation with respect to time as follows:

[math]\displaystyle{ \widehat{T}(\widehat{\mu },\widehat{\sigma })=\widehat{\mu }+\widehat{\sigma }z }[/math]


where:


[math]\displaystyle{ z=\ln (1-R)-\ln (R) }[/math]



[math]\displaystyle{ Var(\widehat{T})={{(\frac{\partial T}{\partial \mu })}^{2}}Var(\widehat{\mu })+2(\frac{\partial T}{\partial \mu })(\frac{\partial T}{\partial \sigma })Cov(\widehat{\mu },\widehat{\sigma })+{{(\frac{\partial T}{\partial \sigma })}^{2}}Var(\widehat{\sigma }) }[/math]


or:


[math]\displaystyle{ Var(\widehat{T})=Var(\widehat{\mu })+2\widehat{z}Cov(\widehat{\mu },\widehat{\sigma })+{{\widehat{z}}^{2}}Var(\widehat{\sigma }) }[/math]


The upper and lower bounds are then found by:

[math]\displaystyle{ {{T}_{U}}=\widehat{T}+{{K}_{\alpha }}\sqrt{Var(\widehat{T})\text{ }}(\text{upper bound}) }[/math]


[math]\displaystyle{ {{T}_{L}}=\widehat{T}-{{K}_{\alpha }}\sqrt{Var(\widehat{T})\text{ }}(\text{lower bound}) }[/math]


A Logistic Distribution Example

The lifetime of a mechanical valve is known to follow a logistic distribution. Ten units were tested for 28 months and the following months-to-failure data was collected.


Table 10.1 - Times-to-Failure Data with Suspensions


[math]\displaystyle{ \begin{matrix} \text{Data Point Index} & \text{State F or S} & \text{State End Time} \\ \text{1} & \text{F} & \text{8} \\ \text{2} & \text{F} & \text{10} \\ \text{3} & \text{F} & \text{15} \\ \text{4} & \text{F} & \text{17} \\ \text{5} & \text{F} & \text{19} \\ \text{6} & \text{F} & \text{26} \\ \text{7} & \text{F} & \text{27} \\ \text{8} & \text{S} & \text{28} \\ \text{9} & \text{S} & \text{28} \\ \text{10} & \text{S} & \text{28} \\ \end{matrix} }[/math]

• Determine the valve's design life if specifications call for a reliability goal of 0.90.

• The valve is to be used in a pumping device that requires 1 month of continuous operation. What is the probability of the pump failing due to the valve?

This data set can be entered into Weibull++ as follows:


The computed parameters for maximum likelihood are:

[math]\displaystyle{ \begin{align} & \widehat{\mu }= & 22.34 \\ & \hat{\sigma }= & 6.15 \end{align} }[/math]

• The valve's design life, along with 90% two sided confidence bounds, can be obtained using the QCP as follows:

• The probability, along with 90% two sided confidence bounds, that the pump fails due to a valve failure during the first month is obtained as follows:
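Using the reliable life and unreliability equations from the preceding sections, both questions can be answered directly from the estimated parameters (the confidence bounds additionally require the variance and covariance estimates and are not computed here):

```python
import math

# MLE parameters from the example above
mu_hat, sigma_hat = 22.34, 6.15

# Design life for R = 0.90: T_R = mu + sigma * [ln(1 - R) - ln(R)]
R_goal = 0.90
design_life = mu_hat + sigma_hat * (math.log(1.0 - R_goal) - math.log(R_goal))

# Probability the valve fails within the first month:
# F(1) = e^z / (1 + e^z), with z = (1 - mu)/sigma
z = (1.0 - mu_hat) / sigma_hat
prob_fail_1mo = math.exp(z) / (1.0 + math.exp(z))
```

The point estimates come out to a design life of about 8.8 months and roughly a 3% chance of a valve-induced pump failure in the first month.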

The Loglogistic Distribution

As may be indicated by the name, the loglogistic distribution has certain similarities to the logistic distribution. A random variable is loglogistically distributed if the logarithm of the random variable is logistically distributed. Because of this, there are many mathematical similarities between the two distributions [27]. For example, the mathematical reasoning for the construction of the probability plotting scales is very similar for these two distributions.

Loglogistic Probability Density Function

The loglogistic distribution is a two-parameter distribution with parameters [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \sigma }[/math] . The [math]\displaystyle{ pdf }[/math] for this distribution is given by:

[math]\displaystyle{ f(T)=\frac{{{e}^{z}}}{\sigma T{{(1+{{e}^{z}})}^{2}}} }[/math]

where:

[math]\displaystyle{ z=\frac{{T}'-\mu }{\sigma } }[/math]

[math]\displaystyle{ {T}'=\ln (T) }[/math]

and:

[math]\displaystyle{ \begin{align} & \mu = & \text{scale parameter} \\ & \sigma = & \text{shape parameter} \end{align} }[/math]

where [math]\displaystyle{ 0\lt t\lt \infty }[/math] , [math]\displaystyle{ -\infty \lt \mu \lt \infty }[/math] and [math]\displaystyle{ 0\lt \sigma \lt \infty }[/math] .

Mean, Median and Mode

The mean of the loglogistic distribution, [math]\displaystyle{ \overline{T} }[/math] , is given by:

[math]\displaystyle{ \overline{T}={{e}^{\mu }}\Gamma (1+\sigma )\Gamma (1-\sigma ) }[/math]


Note that for [math]\displaystyle{ \sigma \ge 1, }[/math] [math]\displaystyle{ \overline{T} }[/math] does not exist.

The median of the loglogistic distribution, [math]\displaystyle{ \breve{T} }[/math] , is given by:

[math]\displaystyle{ \breve{T}={{e}^{\mu }} }[/math]

The mode of the loglogistic distribution, [math]\displaystyle{ \tilde{T} }[/math] , if [math]\displaystyle{ \sigma \lt 1, }[/math] is given by:

[math]\displaystyle{ \tilde{T}={{e}^{\mu }}{{\left( \frac{1-\sigma }{1+\sigma } \right)}^{\sigma }} }[/math]
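These three quantities can be computed from the standard closed forms for the loglogistic distribution (a minimal Python sketch; note that the mean and mode exist only for [math]\displaystyle{ \sigma \lt 1 }[/math]):

```python
import math

def loglogistic_mean(mu, sigma):
    """e^mu * Gamma(1 + sigma) * Gamma(1 - sigma); exists only for sigma < 1."""
    return math.exp(mu) * math.gamma(1.0 + sigma) * math.gamma(1.0 - sigma)

def loglogistic_median(mu):
    """e^mu; the median does not depend on sigma."""
    return math.exp(mu)

def loglogistic_mode(mu, sigma):
    """e^mu * ((1 - sigma)/(1 + sigma))^sigma; exists only for sigma < 1."""
    return math.exp(mu) * ((1.0 - sigma) / (1.0 + sigma)) ** sigma
```

The reflection formula [math]\displaystyle{ \Gamma (1+\sigma )\Gamma (1-\sigma )=\pi \sigma /\sin (\pi \sigma ) }[/math] gives a convenient test value: for [math]\displaystyle{ \mu =0,\sigma =0.5 }[/math] the mean is [math]\displaystyle{ \pi /2 }[/math].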

The Standard Deviation

The standard deviation of the loglogistic distribution, [math]\displaystyle{ {{\sigma }_{T}} }[/math] , is given by:

[math]\displaystyle{ {{\sigma }_{T}}={{e}^{\mu }}\sqrt{\Gamma (1+2\sigma )\Gamma (1-2\sigma )-{{(\Gamma (1+\sigma )\Gamma (1-\sigma ))}^{2}}} }[/math]


Note that for [math]\displaystyle{ \sigma \ge 0.5, }[/math] the standard deviation does not exist.

The Loglogistic Reliability Function

The reliability for a mission of time [math]\displaystyle{ T }[/math] , starting at age 0, for the loglogistic distribution is determined by:

[math]\displaystyle{ R=\frac{1}{1+{{e}^{z}}} }[/math]

where:

[math]\displaystyle{ z=\frac{{T}'-\mu }{\sigma } }[/math]


[math]\displaystyle{ {T}'=\ln (t) }[/math]

The unreliability function is:

[math]\displaystyle{ F=\frac{{{e}^{z}}}{1+{{e}^{z}}} }[/math]
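The reliability and unreliability functions are simple to evaluate; a short Python sketch (function names are illustrative):

```python
import math

def loglogistic_reliability(t, mu, sigma):
    """R = 1 / (1 + e^z), with z = (ln t - mu) / sigma."""
    z = (math.log(t) - mu) / sigma
    return 1.0 / (1.0 + math.exp(z))

def loglogistic_unreliability(t, mu, sigma):
    """F = e^z / (1 + e^z) = 1 - R."""
    return 1.0 - loglogistic_reliability(t, mu, sigma)
```

By construction [math]\displaystyle{ R+F=1 }[/math], and at [math]\displaystyle{ t={{e}^{\mu }} }[/math] (the median) the reliability is exactly 0.5.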

The Loglogistic Reliable Life

The loglogistic reliable life is:


[math]\displaystyle{ {{T}_{R}}={{e}^{\mu +\sigma [\ln (1-R)-\ln (R)]}} }[/math]

The Loglogistic Failure Rate Function

The loglogistic failure rate is given by:


[math]\displaystyle{ \lambda (T)=\frac{{{e}^{z}}}{\sigma T(1+{{e}^{z}})} }[/math]


Distribution Characteristics

For [math]\displaystyle{ \sigma \gt 1 }[/math] :

• [math]\displaystyle{ f(T) }[/math] decreases monotonically and is convex. The mode and mean do not exist.

For [math]\displaystyle{ \sigma =1 }[/math] :

• [math]\displaystyle{ f(T) }[/math] decreases monotonically and is convex. The mode and mean do not exist. As [math]\displaystyle{ T\to 0 }[/math] , [math]\displaystyle{ f(T)\to \tfrac{1}{\sigma {{e}^{\tfrac{\mu }{\sigma }}}}. }[/math]

• As [math]\displaystyle{ T\to 0 }[/math] , [math]\displaystyle{ \lambda (T)\to \tfrac{1}{\sigma {{e}^{\tfrac{\mu }{\sigma }}}}. }[/math]

For [math]\displaystyle{ 0\lt \sigma \lt 1 }[/math] :

• The shape of the loglogistic distribution is very similar to that of the lognormal distribution and the Weibull distribution.

• The [math]\displaystyle{ pdf }[/math] starts at zero, increases to its mode, and decreases thereafter.

• As [math]\displaystyle{ \mu }[/math] increases, while [math]\displaystyle{ \sigma }[/math] is kept the same, the [math]\displaystyle{ pdf }[/math] gets stretched out to the right and its height decreases, while maintaining its shape.

• As [math]\displaystyle{ \mu }[/math] decreases, while [math]\displaystyle{ \sigma }[/math] is kept the same, the [math]\displaystyle{ pdf }[/math] gets pushed in towards the left and its height increases.

• [math]\displaystyle{ \lambda (T) }[/math] increases until [math]\displaystyle{ T={{e}^{\mu +\sigma \ln (\tfrac{1-\sigma }{\sigma })}} }[/math] and decreases thereafter; [math]\displaystyle{ \lambda (T) }[/math] is concave at first, then becomes convex.
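The stated location of the failure rate peak for [math]\displaystyle{ 0\lt \sigma \lt 1 }[/math] can be verified numerically by scanning [math]\displaystyle{ \lambda (T) }[/math] on a grid (a Python sketch with arbitrary parameter values):

```python
import math

def loglogistic_failure_rate(t, mu, sigma):
    """lambda(T) = e^z / (sigma * T * (1 + e^z)), z = (ln T - mu) / sigma."""
    z = (math.log(t) - mu) / sigma
    ez = math.exp(z)
    return ez / (sigma * t * (1.0 + ez))

mu, sigma = 1.0, 0.5
# Claimed maximizer: T* = e^(mu + sigma * ln((1 - sigma) / sigma))
t_star = math.exp(mu + sigma * math.log((1.0 - sigma) / sigma))

# Scan a grid around T* and locate the numerical maximum
grid = [t_star * (0.5 + 0.001 * i) for i in range(1001)]
t_max = max(grid, key=lambda t: loglogistic_failure_rate(t, mu, sigma))
```

The grid maximum lands on the claimed [math]\displaystyle{ {{T}^{*}} }[/math] to within the grid spacing.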

Confidence Bounds

The method used by the application in estimating the different types of confidence bounds for loglogistically distributed data is presented in this section. The complete derivations were presented in detail for a general function in Chapter 5.

Bounds on the Parameters

The lower and upper bounds on the mean, [math]\displaystyle{ {\mu }' }[/math] , are estimated from:


[math]\displaystyle{ \begin{align} & \mu _{U}^{\prime }= & {{\widehat{\mu }}^{\prime }}+{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (upper bound)} \\ & \mu _{L}^{\prime }= & {{\widehat{\mu }}^{\prime }}-{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (lower bound)} \end{align} }[/math]


For the standard deviation, [math]\displaystyle{ {{\widehat{\sigma }}_{{{T}'}}} }[/math] , [math]\displaystyle{ \ln ({{\widehat{\sigma }}_{{{T}'}}}) }[/math] is treated as normally distributed, and the bounds are estimated from:

[math]\displaystyle{ \begin{align} & {{\sigma }_{U}}= & {{\widehat{\sigma }}_{{{T}'}}}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })}}{{{\widehat{\sigma }}_{{{T}'}}}}}}\text{ (upper bound)} \\ & {{\sigma }_{L}}= & \frac{{{\widehat{\sigma }}_{{{T}'}}}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })}}{{{\widehat{\sigma }}_{{{T}'}}}}}}}\text{ (lower bound)} \end{align} }[/math]

where [math]\displaystyle{ {{K}_{\alpha }} }[/math] is defined by:

[math]\displaystyle{ \alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }}) }[/math]


If [math]\displaystyle{ \delta }[/math] is the confidence level, then [math]\displaystyle{ \alpha =\tfrac{1-\delta }{2} }[/math] for the two-sided bounds, and [math]\displaystyle{ \alpha =1-\delta }[/math] for the one-sided bounds.

The variances and covariances of [math]\displaystyle{ \widehat{\mu } }[/math] and [math]\displaystyle{ \widehat{\sigma } }[/math] are estimated as follows:

[math]\displaystyle{ \left( \begin{matrix} \widehat{Var}\left( \widehat{\mu } \right) & \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right) \\ \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right) & \widehat{Var}\left( \widehat{\sigma } \right) \\ \end{matrix} \right)=\left( \begin{matrix} -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{(\mu )}^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma } \\ {} & {} \\ -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma } & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }^{2}}} \\ \end{matrix} \right)_{\mu =\widehat{\mu },\sigma =\widehat{\sigma }}^{-1} }[/math]


where [math]\displaystyle{ \Lambda }[/math] is the log-likelihood function of the loglogistic distribution.
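The matrix inversion above is mechanical once the second derivatives are known. The sketch below uses hypothetical Hessian values purely for illustration; real values would come from the fitted loglogistic model:

```python
import math

# Hypothetical second derivatives of the log-likelihood at the MLE;
# real values come from the fitted loglogistic model.
d2_mu2, d2_musig, d2_sig2 = -40.0, -5.0, -60.0

# Invert the 2x2 negative Hessian [[a, b], [b, d]] analytically
a, b, d = -d2_mu2, -d2_musig, -d2_sig2
det = a * d - b * b
var_mu = d / det
var_sigma = a / det
cov_mu_sigma = -b / det

# 90% two-sided bounds on mu: alpha = 0.05, so K_alpha is about 1.645
K = 1.6449
mu_hat = 5.9772  # illustrative estimate
mu_upper = mu_hat + K * math.sqrt(var_mu)
mu_lower = mu_hat - K * math.sqrt(var_mu)
```

The inverse can be checked by multiplying back against the negative Hessian, which should reproduce the identity matrix.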

Bounds on Reliability

The reliability of the loglogistic distribution is:

[math]\displaystyle{ \widehat{R}=\frac{1}{1+\exp (\widehat{z})} }[/math]


where:

[math]\displaystyle{ \widehat{z}=\frac{{T}'-\widehat{\mu }}{\widehat{\sigma }} }[/math]


Here [math]\displaystyle{ 0\lt t\lt \infty }[/math] , [math]\displaystyle{ -\infty \lt \mu \lt \infty }[/math] and [math]\displaystyle{ 0\lt \sigma \lt \infty }[/math] ; therefore [math]\displaystyle{ -\infty \lt \ln (t)\lt \infty }[/math] , and [math]\displaystyle{ z }[/math] also ranges from [math]\displaystyle{ -\infty }[/math] to [math]\displaystyle{ +\infty }[/math] . The bounds on [math]\displaystyle{ z }[/math] are estimated from:

[math]\displaystyle{ {{z}_{U}}=\widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} }[/math]


[math]\displaystyle{ {{z}_{L}}=\widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})\text{ }}\text{ } }[/math]


where:

[math]\displaystyle{ Var(\widehat{z})={{(\frac{\partial z}{\partial \mu })}^{2}}Var(\widehat{\mu })+2(\frac{\partial z}{\partial \mu })(\frac{\partial z}{\partial \sigma })Cov(\widehat{\mu },\widehat{\sigma })+{{(\frac{\partial z}{\partial \sigma })}^{2}}Var(\widehat{\sigma }) }[/math]


or:

[math]\displaystyle{ Var(\widehat{z})=\frac{1}{{{\sigma }^{2}}}(Var(\widehat{\mu })+2\widehat{z}Cov(\widehat{\mu },\widehat{\sigma })+{{\widehat{z}}^{2}}Var(\widehat{\sigma })) }[/math]


The upper and lower bounds on reliability are:

[math]\displaystyle{ {{R}_{U}}=\frac{1}{1+{{e}^{{{z}_{L}}}}}\text{(Upper bound)} }[/math]


[math]\displaystyle{ {{R}_{L}}=\frac{1}{1+{{e}^{{{z}_{U}}}}}\text{(Lower bound)} }[/math]
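Putting the steps above together, the reliability bounds can be computed as follows (a Python sketch; the variance and covariance inputs are illustrative estimates):

```python
import math

def loglogistic_reliability_bounds(t, mu, sigma, var_mu, var_sigma, cov_ms, K=1.6449):
    """Fisher-matrix bounds on R(t); K = 1.6449 gives 90% two-sided bounds.
    var_mu, var_sigma, cov_ms come from the inverted Fisher matrix."""
    z = (math.log(t) - mu) / sigma
    var_z = (var_mu + 2.0 * z * cov_ms + z * z * var_sigma) / sigma ** 2
    z_upper = z + K * math.sqrt(var_z)
    z_lower = z - K * math.sqrt(var_z)
    r_upper = 1.0 / (1.0 + math.exp(z_lower))  # lower z bound -> upper reliability
    r_lower = 1.0 / (1.0 + math.exp(z_upper))  # upper z bound -> lower reliability
    return r_lower, r_upper
```

Note that the bound on [math]\displaystyle{ z }[/math] maps inversely to the bound on [math]\displaystyle{ R }[/math], since reliability decreases as [math]\displaystyle{ z }[/math] increases.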


Bounds on Time

The bounds around time for a given loglogistic percentile, or unreliability, are estimated by first solving the reliability equation with respect to time, as follows:

[math]\displaystyle{ \widehat{T}(\widehat{\mu },\widehat{\sigma })={{e}^{\widehat{\mu }+\widehat{\sigma }z}} }[/math]


where:

[math]\displaystyle{ z=\ln (1-R)-\ln (R) }[/math]


or:

[math]\displaystyle{ \ln (T)=\widehat{\mu }+\widehat{\sigma }z }[/math]


Let:

[math]\displaystyle{ u=\ln (T)=\widehat{\mu }+\widehat{\sigma }z }[/math]


then:

[math]\displaystyle{ {{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})\text{ }}\text{ } }[/math]



[math]\displaystyle{ {{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})\text{ }}\text{ } }[/math]


where:


[math]\displaystyle{ Var(\widehat{u})={{(\frac{\partial u}{\partial \mu })}^{2}}Var(\widehat{\mu })+2(\frac{\partial u}{\partial \mu })(\frac{\partial u}{\partial \sigma })Cov(\widehat{\mu },\widehat{\sigma })+{{(\frac{\partial u}{\partial \sigma })}^{2}}Var(\widehat{\sigma }) }[/math]


or:

[math]\displaystyle{ Var(\widehat{u})=Var(\widehat{\mu })+2\widehat{z}Cov(\widehat{\mu },\widehat{\sigma })+{{\widehat{z}}^{2}}Var(\widehat{\sigma }) }[/math]


The upper and lower bounds are then found by:

[math]\displaystyle{ {{T}_{U}}={{e}^{{{u}_{U}}}}\text{ (upper bound)} }[/math]


[math]\displaystyle{ {{T}_{L}}={{e}^{{{u}_{L}}}}\text{ (lower bound)} }[/math]
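The derivation above translates into a few lines of code (a Python sketch with illustrative variance inputs):

```python
import math

def loglogistic_time_bounds(R, mu, sigma, var_mu, var_sigma, cov_ms, K=1.6449):
    """Bounds on the time at which reliability equals R; K = 1.6449
    corresponds to 90% two-sided bounds."""
    z = math.log(1.0 - R) - math.log(R)
    u = mu + sigma * z                          # u = ln(T)
    var_u = var_mu + 2.0 * z * cov_ms + z * z * var_sigma
    u_upper = u + K * math.sqrt(var_u)
    u_lower = u - K * math.sqrt(var_u)
    return math.exp(u_lower), math.exp(u_upper)  # (T_L, T_U)
```

Working on the [math]\displaystyle{ u=\ln (T) }[/math] scale and exponentiating at the end guarantees the bounds stay positive.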


A LogLogistic Distribution Example

Determine the loglogistic parameter estimates for the data given in Table 10.3.

Table 10.3 - Test data


[math]\displaystyle{ \begin{matrix} \text{Data point index} & \text{Last Inspected} & \text{State End time} \\ \text{1} & \text{105} & \text{106} \\ \text{2} & \text{197} & \text{200} \\ \text{3} & \text{297} & \text{301} \\ \text{4} & \text{330} & \text{335} \\ \text{5} & \text{393} & \text{401} \\ \text{6} & \text{423} & \text{426} \\ \text{7} & \text{460} & \text{468} \\ \text{8} & \text{569} & \text{570} \\ \text{9} & \text{675} & \text{680} \\ \text{10} & \text{884} & \text{889} \\ \end{matrix} }[/math]


To enter the above data, use the Times-to-failure data Folio Data Type and select the My data set contains interval and/or left censored data option under Times-to-failure data options. The parameters computed using maximum likelihood are:

[math]\displaystyle{ \begin{align} & {{{\hat{\mu }}}^{\prime }}= & 5.9772 \\ & {{{\hat{\sigma }}}_{{{T}'}}}= & 0.3256 \end{align} }[/math]


For rank regression on [math]\displaystyle{ X\ \ : }[/math]

[math]\displaystyle{ \begin{align} & \hat{\mu }= & 5.9281 \\ & \hat{\sigma }= & 0.3821 \end{align} }[/math]


For rank regression on [math]\displaystyle{ Y\ \ : }[/math]

[math]\displaystyle{ \begin{align} & \hat{\mu }= & 5.9772 \\ & \hat{\sigma }= & 0.3256 \end{align} }[/math]
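The maximum likelihood result can be reproduced approximately with a short script that maximizes the interval-censored log-likelihood. This is a pure-Python sketch using a crude alternating coordinate search; Weibull++ uses its own optimizer:

```python
import math

# Interval data from Table 10.3: (last inspected, state end time)
intervals = [(105, 106), (197, 200), (297, 301), (330, 335), (393, 401),
             (423, 426), (460, 468), (569, 570), (675, 680), (884, 889)]

def logistic_cdf(x):
    return 1.0 / (1.0 + math.exp(-x))

def neg_log_likelihood(mu, sigma):
    # Each unit failed somewhere inside its interval, so it contributes
    # F(ln b) - F(ln a) under the loglogistic model.
    total = 0.0
    for a, b in intervals:
        za = (math.log(a) - mu) / sigma
        zb = (math.log(b) - mu) / sigma
        total -= math.log(logistic_cdf(zb) - logistic_cdf(za))
    return total

# Crude alternating 1-D search (step 0.005); adequate for a 2-parameter sketch
mu_hat, sigma_hat = 6.0, 0.5
for _ in range(100):
    mu_hat = min([mu_hat - 0.005, mu_hat, mu_hat + 0.005],
                 key=lambda m: neg_log_likelihood(m, sigma_hat))
    sigma_hat = min([sigma_hat - 0.005, sigma_hat, sigma_hat + 0.005],
                    key=lambda s: neg_log_likelihood(mu_hat, s))
```

The search should settle close to the maximum likelihood result quoted above (roughly 5.98 and 0.33).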

The Gumbel/SEV Distribution

The Gumbel distribution is also referred to as the Smallest Extreme Value (SEV) distribution or the Smallest Extreme Value (Type I) distribution. The Gumbel distribution's [math]\displaystyle{ pdf }[/math] is skewed to the left, unlike the Weibull distribution's [math]\displaystyle{ pdf }[/math] , which is skewed to the right. The Gumbel distribution is appropriate for modeling strength, which is sometimes skewed to the left (few weak units in the lower tail, most units in the upper tail of the strength population). The Gumbel distribution could also be appropriate for modeling the life of products that experience very quick wear-out after reaching a certain age. The distribution of logarithms of times can often be modeled with the Gumbel distribution (in addition to the more common lognormal distribution). [27]

Gumbel Probability Density Function

The [math]\displaystyle{ pdf }[/math] of the Gumbel distribution is given by:

[math]\displaystyle{ f(T)=\frac{1}{\sigma }{{e}^{z-{{e}^{z}}}} }[/math]


where [math]\displaystyle{ f(T)\ge 0 }[/math] and [math]\displaystyle{ \sigma \gt 0 }[/math] , with:

[math]\displaystyle{ z=\frac{T-\mu }{\sigma } }[/math]

and:

[math]\displaystyle{ \begin{align} & \mu = & \text{location parameter} \\ & \sigma = & \text{scale parameter} \end{align} }[/math]


The Gumbel Mean, Median and Mode

The Gumbel mean or MTTF is:

[math]\displaystyle{ \overline{T}=\mu -\sigma \gamma }[/math]

where [math]\displaystyle{ \gamma \approx 0.5772 }[/math] (Euler's constant).

The mode of the Gumbel distribution is:

[math]\displaystyle{ \tilde{T}=\mu }[/math]

The median of the Gumbel distribution is:

[math]\displaystyle{ \breve{T}=\mu +\sigma \ln (\ln (2)) }[/math]

The Gumbel Standard Deviation

The standard deviation for the Gumbel distribution is given by:

[math]\displaystyle{ {{\sigma }_{T}}=\sigma \pi \frac{\sqrt{6}}{6} }[/math]
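The Gumbel location measures above are simple closed forms; a Python sketch makes the left skew visible (mean < median < mode):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler's constant

def gumbel_mean(mu, sigma):
    """MTTF = mu - sigma * gamma."""
    return mu - sigma * EULER_GAMMA

def gumbel_mode(mu, sigma):
    """The mode equals the location parameter mu."""
    return mu

def gumbel_median(mu, sigma):
    """Median = mu + sigma * ln(ln 2)."""
    return mu + sigma * math.log(math.log(2.0))

def gumbel_std(sigma):
    """Standard deviation = sigma * pi / sqrt(6)."""
    return sigma * math.pi / math.sqrt(6.0)
```

Since [math]\displaystyle{ \ln (\ln 2)\approx -0.3665 }[/math] and [math]\displaystyle{ \gamma \approx 0.5772 }[/math], the mean sits left of the median, which sits left of the mode.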


The Gumbel Reliability Function

The reliability for a mission of time [math]\displaystyle{ T }[/math] for the Gumbel distribution is given by:

[math]\displaystyle{ R(T)={{e}^{-{{e}^{z}}}} }[/math]

The unreliability function is given by:

[math]\displaystyle{ F(T)=1-{{e}^{-{{e}^{z}}}} }[/math]

The Gumbel Reliable Life

The Gumbel reliable life is given by:


[math]\displaystyle{ {{T}_{R}}=\mu +\sigma [\ln (-\ln (R))] }[/math]


The Gumbel Failure Rate Function

The instantaneous Gumbel failure rate is given by:

[math]\displaystyle{ \lambda =\frac{{{e}^{z}}}{\sigma } }[/math]
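The reliability, reliable life, and failure rate functions above can be exercised together in a few lines (a Python sketch; function names are illustrative):

```python
import math

def gumbel_reliability(t, mu, sigma):
    """R(T) = exp(-exp(z)), with z = (T - mu) / sigma."""
    z = (t - mu) / sigma
    return math.exp(-math.exp(z))

def gumbel_reliable_life(R, mu, sigma):
    """Time at which reliability equals R: T_R = mu + sigma * ln(-ln R)."""
    return mu + sigma * math.log(-math.log(R))

def gumbel_failure_rate(t, mu, sigma):
    """lambda(T) = exp(z) / sigma; grows exponentially in T."""
    z = (t - mu) / sigma
    return math.exp(z) / sigma
```

A useful round-trip check: evaluating the reliability at the reliable life for a target [math]\displaystyle{ R }[/math] returns that same [math]\displaystyle{ R }[/math], and at [math]\displaystyle{ T=\mu }[/math] the reliability is [math]\displaystyle{ {{e}^{-1}}\approx 0.3678 }[/math].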


Characteristics of the Gumbel Distribution

Some of the specific characteristics of the Gumbel distribution are the following:

• The shape of the Gumbel distribution is skewed to the left. The Gumbel [math]\displaystyle{ pdf }[/math] has no shape parameter. This means that the Gumbel [math]\displaystyle{ pdf }[/math] has only one shape, which does not change.

• The Gumbel [math]\displaystyle{ pdf }[/math] has location parameter [math]\displaystyle{ \mu , }[/math] which is equal to the mode [math]\displaystyle{ \tilde{T}, }[/math] but differs from the median and the mean. This is because the Gumbel distribution is not symmetrical about its [math]\displaystyle{ \mu }[/math] .

• As [math]\displaystyle{ \mu }[/math] decreases, the [math]\displaystyle{ pdf }[/math] is shifted to the left.

• As [math]\displaystyle{ \mu }[/math] increases, the [math]\displaystyle{ pdf }[/math] is shifted to the right.

• As [math]\displaystyle{ \sigma }[/math] increases, the [math]\displaystyle{ pdf }[/math] spreads out and becomes shallower.

• As [math]\displaystyle{ \sigma }[/math] decreases, the [math]\displaystyle{ pdf }[/math] becomes taller and narrower.

• For [math]\displaystyle{ T=\pm \infty , }[/math] [math]\displaystyle{ pdf=0. }[/math] For [math]\displaystyle{ T=\mu }[/math] , the [math]\displaystyle{ pdf }[/math] reaches its maximum point [math]\displaystyle{ \frac{1}{\sigma e} }[/math]

• The points of inflection of the [math]\displaystyle{ pdf }[/math] graph are [math]\displaystyle{ T=\mu +\sigma \ln (\tfrac{3\pm \sqrt{5}}{2}) }[/math] , or [math]\displaystyle{ T\approx \mu \pm 0.96242\sigma }[/math] .

• If times follow the Weibull distribution, then the logarithms of the times follow a Gumbel distribution. If [math]\displaystyle{ {{t}_{i}} }[/math] follows a Weibull distribution with [math]\displaystyle{ \beta }[/math] and [math]\displaystyle{ \eta }[/math] , then [math]\displaystyle{ \ln ({{t}_{i}}) }[/math] follows a Gumbel distribution with [math]\displaystyle{ \mu =\ln (\eta ) }[/math] and [math]\displaystyle{ \sigma =\tfrac{1}{\beta } }[/math] [32].

Probability Paper

The form of the Gumbel probability paper is based on a linearization of the [math]\displaystyle{ cdf }[/math] . Starting from the unreliability function:

[math]\displaystyle{ z=\ln (-\ln (1-F)) }[/math]


and substituting [math]\displaystyle{ z=\tfrac{T-\mu }{\sigma } }[/math] :

[math]\displaystyle{ \frac{T-\mu }{\sigma }=\ln (-\ln (1-F)) }[/math]


Then:

[math]\displaystyle{ \ln (-\ln (1-F))=-\frac{\mu }{\sigma }+\frac{1}{\sigma }T }[/math]


Now let:

[math]\displaystyle{ y=\ln (-\ln (1-F)) }[/math]


[math]\displaystyle{ x=T }[/math]


and:

[math]\displaystyle{ \begin{align} & a= & -\frac{\mu }{\sigma } \\ & b= & \frac{1}{\sigma } \end{align} }[/math]


which results in the linear equation of:

[math]\displaystyle{ y=a+bx }[/math]


The Gumbel probability paper resulting from this linearized [math]\displaystyle{ cdf }[/math] function is shown next.


For [math]\displaystyle{ z=0 }[/math] , [math]\displaystyle{ T=\mu }[/math] and [math]\displaystyle{ R(t)={{e}^{-{{e}^{0}}}}\approx 0.3678 }[/math] (63.21% unreliability). For [math]\displaystyle{ z=1 }[/math] , [math]\displaystyle{ \sigma =T-\mu }[/math] and [math]\displaystyle{ R(t)={{e}^{-{{e}^{1}}}}\approx 0.0659. }[/math] To read [math]\displaystyle{ \mu }[/math] from the plot, find the time value that corresponds to the intersection of the probability plot with the 63.21% unreliability line. To read [math]\displaystyle{ \sigma }[/math] from the plot, find the time value that corresponds to the intersection of the probability plot with the 93.40% unreliability line, then take the difference between this time value and the [math]\displaystyle{ \mu }[/math] value.
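The linearization can be exercised directly: generate exact [math]\displaystyle{ (T,F) }[/math] pairs from known parameters, transform with [math]\displaystyle{ y=\ln (-\ln (1-F)) }[/math], and fit a straight line; the slope and intercept then recover [math]\displaystyle{ \sigma }[/math] and [math]\displaystyle{ \mu }[/math]. A Python sketch with arbitrary parameter values:

```python
import math

mu_true, sigma_true = 10.0, 2.0

# Exact (T, F) pairs from the Gumbel cdf F = 1 - exp(-exp(z))
ts = [mu_true + sigma_true * (-2.0 + 0.5 * i) for i in range(9)]
fs = [1.0 - math.exp(-math.exp((t - mu_true) / sigma_true)) for t in ts]

# Transform to the linearized scale: y = ln(-ln(1 - F)) = a + b*T
ys = [math.log(-math.log(1.0 - f)) for f in fs]

# Ordinary least squares for y = a + b*x
n = len(ts)
x_bar, y_bar = sum(ts) / n, sum(ys) / n
b = sum((x - x_bar) * (y - y_bar) for x, y in zip(ts, ys)) \
    / sum((x - x_bar) ** 2 for x in ts)
a = y_bar - b * x_bar

sigma_est = 1.0 / b   # since b = 1/sigma
mu_est = -a / b       # since a = -mu/sigma
```

Because the input points are exact, the fit recovers the true parameters to machine precision; with real plotted data the recovery is of course approximate.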

Confidence Bounds

This section presents the method used by the application to estimate the different types of confidence bounds for data that follow the Gumbel distribution. The complete derivations were presented in detail (for a general function) in Chapter 5. Only Fisher Matrix confidence bounds are available for the Gumbel distribution.

Bounds on the Parameters

The lower and upper bounds on the mean, [math]\displaystyle{ \widehat{\mu } }[/math] , are estimated from:

[math]\displaystyle{ \begin{align} & {{\mu }_{U}}= & \widehat{\mu }+{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (upper bound)} \\ & {{\mu }_{L}}= & \widehat{\mu }-{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (lower bound)} \end{align} }[/math]


Since the standard deviation, [math]\displaystyle{ \widehat{\sigma } }[/math] , must be positive, then [math]\displaystyle{ \ln (\widehat{\sigma }) }[/math] is treated as normally distributed, and the bounds are estimated from:

[math]\displaystyle{ \begin{align} & {{\sigma }_{U}}= & \widehat{\sigma }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })}}{\widehat{\sigma }}}}\text{ (upper bound)} \\ & {{\sigma }_{L}}= & \frac{\widehat{\sigma }}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })}}{\widehat{\sigma }}}}}\text{ (lower bound)} \end{align} }[/math]

where [math]\displaystyle{ {{K}_{\alpha }} }[/math] is defined by:

[math]\displaystyle{ \alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }}) }[/math]


If [math]\displaystyle{ \delta }[/math] is the confidence level, then [math]\displaystyle{ \alpha =\tfrac{1-\delta }{2} }[/math] for the two-sided bounds, and [math]\displaystyle{ \alpha =1-\delta }[/math] for the one-sided bounds.

The variances and covariances of [math]\displaystyle{ \widehat{\mu } }[/math] and [math]\displaystyle{ \widehat{\sigma } }[/math] are estimated from the Fisher matrix as follows:

[math]\displaystyle{ \left( \begin{matrix} \widehat{Var}\left( \widehat{\mu } \right) & \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right) \\ \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right) & \widehat{Var}\left( \widehat{\sigma } \right) \\ \end{matrix} \right)=\left( \begin{matrix} -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\mu }^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma } \\ {} & {} \\ -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma } & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }^{2}}} \\ \end{matrix} \right)_{\mu =\widehat{\mu },\sigma =\widehat{\sigma }}^{-1} }[/math]


[math]\displaystyle{ \Lambda }[/math] is the log-likelihood function of the Gumbel distribution, described in Chapter 3 and Appendix C.

Bounds on Reliability

The reliability of the Gumbel distribution is given by:

[math]\displaystyle{ \widehat{R}(T;\hat{\mu },\hat{\sigma })={{e}^{-{{e}^{{\hat{z}}}}}} }[/math]

where:

[math]\displaystyle{ \widehat{z}=\frac{t-\widehat{\mu }}{\widehat{\sigma }} }[/math]


The bounds on [math]\displaystyle{ z }[/math] are estimated from:

[math]\displaystyle{ \begin{align} & {{z}_{U}}= & \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\ & {{z}_{L}}= & \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \end{align} }[/math]

where:

[math]\displaystyle{ Var(\widehat{z})={{\left( \frac{\partial z}{\partial \mu } \right)}^{2}}Var(\widehat{\mu })+{{\left( \frac{\partial z}{\partial \sigma } \right)}^{2}}Var(\widehat{\sigma })+2\left( \frac{\partial z}{\partial \mu } \right)\left( \frac{\partial z}{\partial \sigma } \right)Cov\left( \widehat{\mu },\widehat{\sigma } \right) }[/math]

or:

[math]\displaystyle{ Var(\widehat{z})=\frac{1}{{{\widehat{\sigma }}^{2}}}\left[ Var(\widehat{\mu })+{{\widehat{z}}^{2}}Var(\widehat{\sigma })+2\cdot \widehat{z}\cdot Cov\left( \widehat{\mu },\widehat{\sigma } \right) \right] }[/math]


The upper and lower bounds on reliability are:

[math]\displaystyle{ \begin{align} & {{R}_{U}}= & {{e}^{-{{e}^{{{z}_{L}}}}}}\text{ (upper bound)} \\ & {{R}_{L}}= & {{e}^{-{{e}^{{{z}_{U}}}}}}\text{ (lower bound)} \end{align} }[/math]

Bounds on Time

The bounds around time for a given Gumbel percentile (unreliability) are estimated by first solving the reliability equation with respect to time, as follows:

[math]\displaystyle{ \widehat{T}(\widehat{\mu },\widehat{\sigma })=\widehat{\mu }+\widehat{\sigma }z }[/math]


where:

[math]\displaystyle{ z=\ln (-\ln (R)) }[/math]


[math]\displaystyle{ Var(\widehat{T})={{(\frac{\partial T}{\partial \mu })}^{2}}Var(\widehat{\mu })+2(\frac{\partial T}{\partial \mu })(\frac{\partial T}{\partial \sigma })Cov(\widehat{\mu },\widehat{\sigma })+{{(\frac{\partial T}{\partial \sigma })}^{2}}Var(\widehat{\sigma }) }[/math]


or:

[math]\displaystyle{ Var(\widehat{T})=Var(\widehat{\mu })+2\widehat{z}Cov(\widehat{\mu },\widehat{\sigma })+{{\widehat{z}}^{2}}Var(\widehat{\sigma }) }[/math]


The upper and lower bounds are then found by:

[math]\displaystyle{ \begin{align} & {{T}_{U}}= & \hat{T}+{{K}_{\alpha }}\sqrt{Var(\hat{T})}\text{ (Upper bound)} \\ & {{T}_{L}}= & \hat{T}-{{K}_{\alpha }}\sqrt{Var(\hat{T})}\text{ (Lower bound)} \end{align} }[/math]


A Gumbel Distribution Example

Verify using Monte Carlo simulation that if [math]\displaystyle{ {{t}_{i}} }[/math] follows a Weibull distribution with [math]\displaystyle{ \beta }[/math] and [math]\displaystyle{ \eta }[/math] , then [math]\displaystyle{ \ln ({{t}_{i}}) }[/math] follows a Gumbel distribution with [math]\displaystyle{ \mu =\ln (\eta ) }[/math] and [math]\displaystyle{ \sigma =1/\beta }[/math] . Let us assume that [math]\displaystyle{ {{t}_{i}} }[/math] follows a Weibull distribution with [math]\displaystyle{ \beta =0.5 }[/math] and [math]\displaystyle{ \eta =10000. }[/math] The Monte Carlo simulation tool in Weibull++ can be used to generate a set of random numbers that follow a Weibull distribution with the specified parameters.


After obtaining the random time values [math]\displaystyle{ {{t}_{i}} }[/math] , insert a new Data Sheet using the Insert Data Sheet option under the Folio menu. In this sheet enter the [math]\displaystyle{ Ln({{t}_{i}}) }[/math] values using the LN function and referring to the cells in the sheet that contains the [math]\displaystyle{ {{t}_{i}} }[/math] values. Delete any negative values, if there are any, since Weibull++ expects time values to be positive. Calculate the parameters of the Gumbel distribution that fits the [math]\displaystyle{ Ln({{t}_{i}}) }[/math] values.

Using maximum likelihood as the analysis method, the estimated parameters are:

[math]\displaystyle{ \begin{align} & \hat{\mu }= & 9.3816 \\ & \hat{\sigma }= & 1.9717 \end{align} }[/math]


Since [math]\displaystyle{ \ln (\eta )=9.2103 }[/math] is close to the estimated [math]\displaystyle{ \hat{\mu }=9.3816 }[/math] and [math]\displaystyle{ 1/\beta =2 }[/math] is close to the estimated [math]\displaystyle{ \hat{\sigma }=1.9717, }[/math] this simulation supports the claim that [math]\displaystyle{ \ln ({{t}_{i}}) }[/math] follows a Gumbel distribution with [math]\displaystyle{ \mu =\ln (\eta ) }[/math] and [math]\displaystyle{ \sigma =1/\beta . }[/math] Note: This example illustrates a property of the Gumbel distribution; it is not meant to be a formal proof.
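The same experiment can be mimicked without Weibull++ using Python's standard library. This sketch fits the Gumbel parameters by the method of moments (mean [math]\displaystyle{ =\mu -\gamma \sigma }[/math], standard deviation [math]\displaystyle{ =\sigma \pi /\sqrt{6} }[/math]) rather than by maximum likelihood, which is enough to see the [math]\displaystyle{ \mu \approx \ln (\eta ) }[/math] and [math]\displaystyle{ \sigma \approx 1/\beta }[/math] relationship:

```python
import math
import random

random.seed(1)
beta, eta = 0.5, 10000.0

# Simulate Weibull times (random.weibullvariate takes scale first, then shape)
ts = [random.weibullvariate(eta, beta) for _ in range(20000)]
logs = [math.log(t) for t in ts]

# Method-of-moments Gumbel fit:
#   mean = mu - gamma * sigma,   std = sigma * pi / sqrt(6)
n = len(logs)
mean = sum(logs) / n
std = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
sigma_hat = std * math.sqrt(6.0) / math.pi
mu_hat = mean + 0.5772156649 * sigma_hat
```

With 20,000 samples the estimates land close to [math]\displaystyle{ \ln (10000)=9.2103 }[/math] and [math]\displaystyle{ 1/\beta =2 }[/math].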