The Weibull Distribution

The Weibull distribution is one of the most widely used lifetime distributions in reliability engineering. It is a versatile distribution that can take on the characteristics of other types of distributions, based on the value of the shape parameter, $$ {\beta} $$. This chapter provides a brief background on the Weibull distribution, presents and derives most of the applicable equations, and gives examples calculated both manually and by using ReliaSoft's Weibull++.

The 2-Parameter Weibull Distribution
The 2-parameter Weibull pdf is obtained by setting $$ \gamma=0 \,\!$$, and is given by:


 * $$ f(t)={ \frac{\beta }{\eta }}\left( {\frac{t}{\eta }}\right) ^{\beta -1}e^{-\left( { \frac{t}{\eta }}\right) ^{\beta }} \,\!$$

The 1-Parameter Weibull Distribution
The 1-parameter Weibull pdf is obtained by again setting $$\gamma=0 \,\!$$ and assuming that the shape parameter is a known constant, $$\beta=C \,\!$$, or:

$$ f(t)={ \frac{C}{\eta }}\left( {\frac{t}{\eta }}\right) ^{C-1}e^{-\left( {\frac{t}{ \eta }}\right) ^{C}} \,\!$$

where the only unknown parameter is the scale parameter, $$\eta\,\!$$.

Note that in the formulation of the 1-parameter Weibull, we assume that the shape parameter $$\beta \,\!$$ is known a priori from past experience with identical or similar products. The advantage of doing this is that data sets with few or no failures can be analyzed.
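The 2- and 1-parameter forms above can be sketched numerically. The following snippet (all parameter values are hypothetical, chosen only for illustration) evaluates the pdf; the 1-parameter case simply fixes β at an assumed constant:

```python
import math

def weibull_pdf(t, beta, eta, gamma=0.0):
    """3-parameter Weibull pdf; gamma = 0 gives the 2-parameter form."""
    if t <= gamma:
        return 0.0
    z = (t - gamma) / eta
    return (beta / eta) * z ** (beta - 1.0) * math.exp(-z ** beta)

# 1-parameter form: beta is fixed from prior knowledge, only eta is unknown
BETA_ASSUMED = 2.0                      # hypothetical known shape parameter
f = weibull_pdf(100.0, BETA_ASSUMED, eta=200.0)
```

With β = 1 this reduces, as expected, to the exponential pdf $$ \frac{1}{\eta}e^{-t/\eta} $$.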

The Mean or MTTF
The mean, $$ \overline{T} \,\!$$ (also called MTTF), of the Weibull pdf is given by:


 * $$ \overline{T}=\gamma +\eta \cdot \Gamma \left( {\frac{1}{\beta }}+1\right) \,\!$$

where
 * $$ \Gamma \left( {\frac{1}{\beta }}+1\right) \,\!$$

is the gamma function evaluated at the value of


 * $$ \left( { \frac{1}{\beta }}+1\right) \,\!$$.

The gamma function is defined as:


 * $$ \Gamma (n)=\int_{0}^{\infty }e^{-x}x^{n-1}dx \,\!$$

For the 2-parameter case, this can be reduced to:


 * $$ \overline{T}=\eta \cdot \Gamma \left( {\frac{1}{\beta }}+1\right) \,\!$$

Note that some practitioners erroneously assume that $$ \eta \,\!$$ is equal to the MTTF, $$ \overline{T}\,\!$$. This is only true for the case of $$ \beta=1 \,\!$$ or



$$ \begin{align} \overline{T} &= \eta \cdot \Gamma \left( {\frac{1}{1}}+1\right) \\ &= \eta \cdot \Gamma \left( {2}\right) \\ &= \eta \cdot 1\\ &= \eta \end{align} $$
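The MTTF relationship is easy to verify numerically with Python's `math.gamma` (the parameter values below are illustrative):

```python
import math

def weibull_mean(beta, eta, gamma=0.0):
    """Mean (MTTF) of the Weibull distribution."""
    return gamma + eta * math.gamma(1.0 / beta + 1.0)

# eta equals the MTTF only when beta = 1, since Gamma(2) = 1
assert abs(weibull_mean(1.0, 500.0) - 500.0) < 1e-9
```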

The Median
The median, $$ \breve{T}$$, of the Weibull distribution is given by:


 * $$ \breve{T}=\gamma +\eta \left( \ln 2\right) ^{\frac{1}{\beta }} $$

The Mode
The mode, $$ \tilde{T} $$, is given by:


 * $$ \tilde{T}=\gamma +\eta \left( 1-\frac{1}{\beta }\right) ^{\frac{1}{\beta }} $$

The Standard Deviation
The standard deviation, $$ \sigma _{T} \,\!$$, is given by:


 * $$ \sigma _{T}=\eta \cdot \sqrt{\Gamma \left( {\frac{2}{\beta }}+1\right) -\Gamma \left( {\frac{1}{ \beta }}+1\right) ^{2}} $$
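The median, mode, and standard deviation formulas translate directly into code. A minimal sketch using Python's `math.gamma` (note that γ shifts the median and mode but drops out of the standard deviation):

```python
import math

def weibull_median(beta, eta, gamma=0.0):
    return gamma + eta * math.log(2.0) ** (1.0 / beta)

def weibull_mode(beta, eta, gamma=0.0):
    # the mode exists only for beta > 1
    return gamma + eta * (1.0 - 1.0 / beta) ** (1.0 / beta)

def weibull_std(beta, eta):
    # gamma shifts the distribution, so it does not affect the spread
    g1 = math.gamma(1.0 / beta + 1.0)
    g2 = math.gamma(2.0 / beta + 1.0)
    return eta * math.sqrt(g2 - g1 ** 2)
```

For β = 1 (the exponential case) the standard deviation reduces to η and the median to η ln 2.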

The Weibull Reliability Function
The equation for the 3-parameter Weibull cumulative distribution function, cdf, is given by:


 * $$ F(t)=1-e^{-\left( \frac{t-\gamma }{\eta }\right) ^{\beta }} $$.

This is also referred to as unreliability and designated as $$ Q(t) \,\!$$ by some authors.

Recalling that the reliability function of a distribution is simply one minus the cdf, the reliability function for the 3-parameter Weibull distribution is then given by:
 * $$ R(t)=e^{-\left( { \frac{t-\gamma }{\eta }}\right) ^{\beta }} $$

The Weibull Conditional Reliability Function
The 3-parameter Weibull conditional reliability function is given by:


 * $$ R(t|T)={ \frac{R(T+t)}{R(T)}}={\frac{e^{-\left( {\frac{T+t-\gamma }{\eta }}\right) ^{\beta }}}{e^{-\left( {\frac{T-\gamma }{\eta }}\right) ^{\beta }}}} $$

or:


 * $$ R(t|T)=e^{-\left[ \left( {\frac{T+t-\gamma }{\eta }}\right) ^{\beta }-\left( {\frac{T-\gamma }{\eta }}\right) ^{\beta }\right] } $$

This gives the reliability for a new mission of duration $$ t \,\!$$, given that the unit has already accumulated $$ T \,\!$$ hours of operation up to the start of this new mission and has been checked out to assure that it will start the next mission successfully. It is called conditional because the reliability of the new mission is calculated based on the fact that the unit or units have already accumulated $$ T \,\!$$ hours of operation successfully.
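A minimal numerical sketch of the conditional reliability calculation (the parameter values used in the check are hypothetical):

```python
import math

def weibull_rel(t, beta, eta, gamma=0.0):
    """3-parameter Weibull reliability function R(t)."""
    return math.exp(-(((t - gamma) / eta) ** beta))

def cond_rel(t, T, beta, eta, gamma=0.0):
    """Reliability for a new mission of duration t after T accumulated hours."""
    return weibull_rel(T + t, beta, eta, gamma) / weibull_rel(T, beta, eta, gamma)
```

For β = 1 the distribution is memoryless and the conditional reliability equals the new-unit reliability; for β > 1 (wear-out) it is lower.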

The Weibull Reliable Life
The reliable life, $$ T_{R} \,\!$$, of a unit for a specified reliability, R, starting the mission at age zero, is given by:


 * $$ T_{R}=\gamma +\eta \cdot \left\{ -\ln ( R ) \right\} ^{ \frac{1}{\beta }} $$

This is the life for which the unit/item will be functioning successfully with a reliability of R. If R=0.50, then $$ T_{R}=\breve{T} $$, the median life, or the life by which half of the units will survive.
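The reliable life equation can be checked numerically; as expected, R = 0.50 reproduces the median (parameter values are illustrative):

```python
import math

def reliable_life(R, beta, eta, gamma=0.0):
    """Time T_R by which the unit is still functioning with probability R."""
    return gamma + eta * (-math.log(R)) ** (1.0 / beta)
```

A higher required reliability naturally yields a shorter reliable life.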

The Weibull Failure Rate Function
The Weibull failure rate function, $$ \lambda(t) \,\!$$, is given by:


 * $$ \lambda \left( t\right) = \frac{f\left( t\right) }{R\left( t\right) }=\frac{\beta }{\eta }\left( \frac{ t-\gamma }{\eta }\right) ^{\beta -1} $$

Characteristics of the Weibull Distribution
As was mentioned previously, the Weibull distribution is widely used in reliability and life data analysis due to its versatility. Depending on the values of the parameters, the Weibull distribution can be used to model a variety of life behaviors. We will now examine how the values of the shape parameter, β, and the scale parameter, η , affect such distribution characteristics as the shape of the curve, the reliability and the failure rate. Note that in the rest of this section we will assume the most general form of the Weibull distribution, i.e. the 3-parameter form. The appropriate substitutions to obtain the other forms, such as the 2-parameter form where γ = 0, or the 1-parameter form where β = C = constant, can easily be made.

Characteristic Effects of the Shape Parameter, β
The Weibull shape parameter, β, is also known as the slope. This is because the value of β is equal to the slope of the regressed line in a probability plot. Different values of the shape parameter can have marked effects on the behavior of the distribution. In fact, some values of the shape parameter will cause the distribution equations to reduce to those of other distributions. For example, when β = 1, the pdf of the 3-parameter Weibull reduces to that of the 2-parameter exponential distribution, or:


 * $$ f(t)={\frac{1}{\eta }}e^{-{\frac{t-\gamma }{\eta }}} $$

where $$ \frac{1}{\eta }=\lambda = $$ failure rate. The parameter β is a pure number; i.e., it is dimensionless.

The Effect of Beta on the Weibull pdf

The following figure shows the effect of different values of the shape parameter, β, on the shape of the $$pdf$$. As you can see, the shape can take on a variety of forms based on the value of β.



For $$ 0<\beta \leq 1 $$:
 * As t→0 (or t→γ), f(t)→∞.
 * As t→∞, f(t)→0.
 * f(t) decreases monotonically and is convex as t increases beyond the value of γ.
 * The mode is non-existent.

For β > 1:


 * f(t) = 0 at t = 0 (or t = γ).
 * f(t) increases as $$ t\rightarrow \tilde{T} $$ (the mode) and decreases thereafter.
 * For β < 2.6 the Weibull $$pdf$$ is positively skewed (has a right tail); for 2.6 < β < 3.7 its coefficient of skewness approaches zero (no tail) and, consequently, it may approximate the normal $$pdf$$; and for β > 3.7 it is negatively skewed (left tail). The way the value of β relates to the physical behavior of the items being modeled becomes more apparent when we observe how its different values affect the reliability and failure rate functions. Note that for β = 0.999, f(0) = ∞, but for β = 1.001, f(0) = 0.

The Effect of β on the $$cdf$$ and Reliability Function

The above figure shows the effect of the value of β on the $$cdf$$, as manifested in the Weibull probability plot. It is easy to see why this parameter is sometimes referred to as the slope. Note that the models represented by the three lines all have the same value of η. The following figure shows the effects of these varied values of β on the reliability plot, which is a linear analog of the probability plot.




 * R(t) decreases sharply and monotonically for 0 < β < 1 and is convex.
 * For β = 1, R(t) decreases monotonically but less sharply than for 0 < β < 1 and is convex.
 * For β > 1, R(t) decreases as t increases. As wear-out sets in, the curve goes through an inflection point and decreases sharply.

The Effect of β on the Weibull Failure Rate

The value of β has a marked effect on the failure rate of the Weibull distribution and inferences can be drawn about a population's failure characteristics just by considering whether the value of β is less than, equal to, or greater than one.



As indicated in the figure above, populations with β < 1 exhibit a failure rate that decreases with time, populations with β = 1 have a constant failure rate (consistent with the exponential distribution) and populations with β > 1 have a failure rate that increases with time. All three life stages of the bathtub curve can be modeled with the Weibull distribution and varying values of β. The Weibull failure rate for 0 < β < 1 is unbounded at t = 0 (or t = γ). The failure rate, λ(t), decreases thereafter monotonically and is convex, approaching the value of zero as t→∞ or λ(∞) = 0. This behavior makes it suitable for representing the failure rate of units exhibiting early-type failures, for which the failure rate decreases with age. When encountering such behavior in a manufactured product, it may be indicative of problems in the production process, inadequate burn-in, substandard parts and components, or problems with packaging and shipping. For β = 1, λ(t) yields a constant value of $$ { \frac{1}{\eta }} $$ or:


 * $$ \lambda (t)=\lambda ={\frac{1}{\eta }} $$

This makes it suitable for representing the failure rate of chance-type failures and the useful life period failure rate of units.

For β > 1, λ(t) increases as t increases and becomes suitable for representing the failure rate of units exhibiting wear-out type failures. For 1 < β < 2, the λ(t) curve is concave; consequently, the failure rate increases at a decreasing rate as t increases.

For β = 2 there emerges a straight line relationship between λ(t) and t, starting at a value of λ(t) = 0 at t = γ, and increasing thereafter with a slope of $$ { \frac{2}{\eta ^{2}}} $$. Consequently, the failure rate increases at a constant rate as t increases. Furthermore, if η = 1 the slope becomes equal to 2, and when γ = 0, λ(t) becomes a straight line which passes through the origin with a slope of 2. Note that at β = 2, the Weibull distribution equations reduce to those of the Rayleigh distribution.

When β > 2, the λ(t) curve is convex, with its slope increasing as t increases. Consequently, the failure rate increases at an increasing rate as t increases, indicating wear-out life.
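These three failure rate regimes can be verified with a small numerical sketch (all parameter values are hypothetical):

```python
import math

def weibull_hazard(t, beta, eta, gamma=0.0):
    """Weibull failure rate lambda(t) = f(t)/R(t)."""
    return (beta / eta) * ((t - gamma) / eta) ** (beta - 1.0)

# beta < 1: decreasing; beta = 1: constant 1/eta; beta > 1: increasing
decreasing = [weibull_hazard(t, 0.5, 100.0) for t in (10.0, 50.0, 200.0)]
assert decreasing[0] > decreasing[1] > decreasing[2]
assert abs(weibull_hazard(50.0, 1.0, 100.0) - 1.0 / 100.0) < 1e-12
# beta = 2 (Rayleigh case): a straight line with slope 2/eta^2
assert abs(weibull_hazard(30.0, 2.0, 100.0) - (2.0 / 100.0 ** 2) * 30.0) < 1e-12
```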

Characteristic Effects of the Scale Parameter, η


A change in the scale parameter η has the same effect on the distribution as a change of the abscissa scale. Increasing the value of η while holding β constant has the effect of stretching out the pdf. Since the area under a pdf curve is a constant value of one, the "peak" of the pdf curve will also decrease with the increase of η, as indicated in the above figure.


 * If η is increased while β and γ are kept the same, the distribution gets stretched out to the right and its height decreases, while maintaining its shape and location.
 * If η is decreased while β and γ are kept the same, the distribution gets pushed in towards the left (i.e. towards its beginning or towards 0 or γ ), and its height increases.
 * η has the same units as t, such as hours, miles, cycles, actuations, etc.

Characteristic Effects of the Location Parameter, γ
The location parameter, γ, as the name implies, locates the distribution along the abscissa. Changing the value of γ has the effect of sliding the distribution and its associated function either to the right (if γ > 0) or to the left (if γ < 0).




 * When γ = 0, the distribution starts at t = 0, or at the origin.
 * If γ > 0, the distribution starts at the location γ to the right of the origin.
 * If γ < 0, the distribution starts at the location γ to the left of the origin.
 * γ provides an estimate of the earliest time-to-failure of such units.
 * The life period 0 to γ is a failure-free operating period of such units.
 * The parameter γ may assume all values and provides an estimate of the earliest time a failure may be observed. A negative γ may indicate that failures have occurred prior to the beginning of the test, namely during production, in storage, in transit, during checkout prior to the start of a mission, or prior to actual use.
 * γ has the same units as T, such as hours, miles, cycles, actuations, etc.

Estimation of the Weibull Parameters
The estimates of the parameters of the Weibull distribution can be found graphically via probability plotting paper, or analytically, either using least squares or maximum likelihood.

Probability Plotting
One method of calculating the parameters of the Weibull distribution is by using probability plotting. To better illustrate this procedure, consider the following example from Kececioglu [20].

Example 1:

Probability Plotting for the Location Parameter, γ 

The third parameter of the Weibull distribution is utilized when the data do not fall on a straight line, but fall on either a concave up or down curve. The following statements can be made regarding the value of γ:


 * Case 1: If the curve for MR versus tj is concave down and the curve for MR versus (tj − t1) is concave up, then there exists a γ such that 0 < γ < t1, or γ has a positive value.


 * Case 2: If the curves for MR versus tj and MR versus (tj − t1) are both concave up, then there exists a negative γ which will straighten out the curve of MR versus tj.


 * Case 3: If neither one of the previous two cases prevails, then either reject the Weibull as one capable of representing the data, or proceed with the multiple population (mixed Weibull) analysis. To obtain the location parameter, γ:


 * Subtract the same arbitrary value, γ, from all the times to failure and replot the data.
 * If the initial curve is concave up, subtract a negative γ from each failure time.
 * If the initial curve is concave down, subtract a positive γ from each failure time.
 * Repeat until the data plots on an acceptable straight line.
 * The value of γ is the subtracted (positive or negative) value that places the points in an acceptable straight line.

The other two parameters are then obtained using the techniques previously described. Also, it is important to note that we used the term subtract a positive or negative gamma, where subtracting a negative gamma is equivalent to adding it. Note that when adjusting for gamma, the x-axis scale for the straight line becomes (t − γ).
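The trial-and-error search for γ described above can be automated by choosing the γ that makes the probability plot straightest, i.e., that maximizes the correlation of the plotted points. The sketch below uses Benard's median rank approximation and hypothetical failure times; it illustrates the idea and is not the Weibull++ algorithm:

```python
import math

def median_ranks(n):
    """Benard's approximation to the median ranks for complete data."""
    return [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def find_gamma(times, candidates):
    """Pick the candidate gamma that best straightens the probability plot."""
    ys = [math.log(-math.log(1.0 - mr)) for mr in median_ranks(len(times))]
    return max(candidates,
               key=lambda g: corr([math.log(t - g) for t in times], ys))

times = sorted([30.0, 45.0, 70.0, 110.0, 180.0])   # hypothetical failure times
gamma_hat = find_gamma(times, [2.0 * g for g in range(15)])  # 0, 2, ..., 28
```

Each candidate must satisfy γ < t1 so that ln(t − γ) is defined for every point.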

Rank Regression on Y
Performing rank regression on Y requires that a straight line mathematically be fitted to a set of data points such that the sum of the squares of the vertical deviations from the points to the line is minimized. This is in essence the same methodology as the probability plotting method, except that we use the principle of least squares to determine the line through the points, as opposed to just eyeballing it. The first step is to bring our function into a linear form. For the two-parameter Weibull distribution, the cdf (cumulative distribution function) is:


 * $$ F(t)=1-e^{-\left( \frac{t}{\eta }\right) ^{\beta }} $$

Taking the natural logarithm of both sides of the equation yields:


 * $$\ln[ 1-F(t)] =-( \frac{t}{\eta }) ^{\beta } $$


 * $$ \ln \{ -\ln[ 1-F(t)]\} =\beta \ln ( \frac{t}{ \eta }) $$

or:


 * $$ \ln \{ -\ln[ 1-F(t)]\} =-\beta \ln (\eta )+\beta \ln (t) $$

Now let:


 * $$ y = \ln \{ -\ln[ 1-F(t)]\} $$


 * $$ a = -\beta \ln (\eta) $$

and:


 * $$ b= \beta$$

which results in the linear equation of:


 * $$y=a+bx$$

The least squares parameter estimation method (also known as regression analysis) was discussed in Parameter Estimation, and the following equations for regression on Y were derived:


 * $$ \hat{a}=\frac{\sum\limits_{i=1}^{N}y_{i}}{N}-\hat{b}\frac{ \sum\limits_{i=1}^{N}x_{i}}{N}=\bar{y}-\hat{b}\bar{x} $$

and:


 * $$ \hat{b}={\frac{\sum\limits_{i=1}^{N}x_{i}y_{i}-\frac{\sum \limits_{i=1}^{N}x_{i}\sum\limits_{i=1}^{N}y_{i}}{N}}{\sum \limits_{i=1}^{N}x_{i}^{2}-\frac{\left( \sum\limits_{i=1}^{N}x_{i}\right) ^{2}}{N}}} $$

In this case the equations for yi and xi are:


 * $$ y_{i}=\ln \left\{ -\ln [1-F(t_{i})]\right\}, $$

and:


 * xi = ln(ti).

The $$ F(t_{i}) \,\!$$ values are estimated from the median ranks.

Once $$ \hat{a} $$ and $$ \hat{b} $$ are obtained, then $$ \hat{\beta } $$ and $$ \hat{\eta } $$ can easily be obtained from previous equations.
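Putting the rank regression on Y steps together, a minimal implementation for complete data might look as follows (Benard's approximation is used for the median ranks; this is a sketch, not the Weibull++ implementation):

```python
import math

def rank_regression_y(times):
    """RRY for complete data: least squares on y with Benard median ranks."""
    times = sorted(times)
    n = len(times)
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4)))
          for i in range(1, n + 1)]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b_hat = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    a_hat = ybar - b_hat * xbar
    beta = b_hat                        # b = beta
    eta = math.exp(-a_hat / b_hat)      # from a = -beta * ln(eta)
    return beta, eta
```

If the plotted points happen to be perfectly linear, the regression recovers β and η exactly.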

The Correlation Coefficient

The correlation coefficient is defined as follows:


 * $$ \rho ={\frac{\sigma _{xy}}{\sigma _{x}\sigma _{y}}} $$

where $$ \sigma _{xy} \,\!$$ is the covariance of x and y, $$ \sigma _{x} \,\!$$ is the standard deviation of x, and $$ \sigma _{y} \,\!$$ is the standard deviation of y. The estimator of ρ is the sample correlation coefficient, $$ \hat{\rho} $$, given by:


 * $$ \hat{\rho}=\frac{\sum\limits_{i=1}^{N}(x_{i}-\overline{x})(y_{i}-\overline{y} )}{\sqrt{\sum\limits_{i=1}^{N}(x_{i}-\overline{x})^{2}\cdot \sum\limits_{i=1}^{N}(y_{i}-\overline{y})^{2}}}$$
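The sample correlation coefficient is a direct translation of its definition into code:

```python
import math

def sample_corr(xs, ys):
    """Sample correlation coefficient rho-hat."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - xbar) ** 2 for x in xs)
                    * sum((y - ybar) ** 2 for y in ys))
    return num / den
```

A value near ±1 indicates that the transformed points fall close to a straight line, i.e., a good Weibull fit.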

Example 3: 

Rank Regression on X
Performing a rank regression on X is similar to the process for rank regression on Y, with the difference being that the horizontal deviations from the points to the line are minimized rather than the vertical. Again, the first task is to bring the reliability function into a linear form. This step is exactly the same as in the regression on Y analysis and all the equations apply in this case too. The derivation from the previous analysis begins on the least squares fit part, where in this case we treat x as the dependent variable and y as the independent variable. The best-fitting straight line to the data, for regression on X (see Parameter Estimation), is the straight line:


 * $$ x= \hat{a}+\hat{b}y $$

The corresponding equations for $$ \hat{a} $$ and $$ \hat{b} $$ are:


 * $$ \hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\sum\limits_{i=1}^{N}x_{i}}{N} -\hat{b}\frac{\sum\limits_{i=1}^{N}y_{i}}{N} $$

and


 * $$ \hat{b}={\frac{\sum\limits_{i=1}^{N}x_{i}y_{i}-\frac{\sum \limits_{i=1}^{N}x_{i}\sum\limits_{i=1}^{N}y_{i}}{N}}{\sum \limits_{i=1}^{N}y_{i}^{2}-\frac{\left( \sum\limits_{i=1}^{N}y_{i}\right) ^{2}}{N}}} $$

where:


 * $$ y_{i}=\ln \left\{ -\ln [1-F(t_{i})]\right\} $$ and:

xi = ln(ti) and the F(ti) values are again obtained from the median ranks.

Once $$ \hat{a} $$ and $$ \hat{b} $$ are obtained, solve the linear equation for y, which corresponds to:


 * $$ y=-\frac{\hat{a}}{\hat{b}}+\frac{1}{\hat{b}}x $$

Solving for the parameters from the above equations, we get:


 * $$ a=-\frac{\hat{a}}{\hat{b}}=-\beta \ln (\eta )$$

and


 * $$ b=\frac{1}{\hat{b}}=\beta$$

The correlation coefficient is evaluated as before.
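A sketch of rank regression on X for complete data, mirroring the RRY pattern but minimizing horizontal deviations (again with Benard's median rank approximation; not the Weibull++ implementation):

```python
import math

def rank_regression_x(times):
    """RRX for complete data: least squares on x with Benard median ranks."""
    times = sorted(times)
    n = len(times)
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4)))
          for i in range(1, n + 1)]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b_hat = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((y - ybar) ** 2 for y in ys))   # deviations in y this time
    a_hat = xbar - b_hat * ybar
    beta = 1.0 / b_hat                  # b = 1 / b_hat
    eta = math.exp(a_hat)               # from -a_hat/b_hat = -beta * ln(eta)
    return beta, eta
```

On perfectly linear points RRX and RRY coincide; on real data they generally give slightly different estimates.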

Example 4: 

Three-Parameter Weibull Regression
When the MR versus tj points plotted on the Weibull probability paper do not fall on a satisfactory straight line but instead fall on a curve, a location parameter, γ, might exist which may straighten out these points. (Note that other shapes, particularly S shapes, might suggest the existence of more than one population. In these cases, the multiple population, mixed Weibull distribution, may be more appropriate. Chapter The Mixed Weibull Distribution presents the mixed Weibull distribution.) The goal in this case is to fit a curve, instead of a line, through the data points using nonlinear regression. The Gauss-Newton method can be used to solve for the parameters, β, η and γ, by performing a Taylor series expansion on F(ti;β,η,γ). The nonlinear model is then approximated with linear terms and ordinary least squares is employed to estimate the parameters. This procedure is iterated until a satisfactory solution is reached.

Weibull++ calculates the value of γ by utilizing an optimized Nelder-Mead algorithm, adjusts the points by this value of γ such that they fall on a straight line, and then plots both the adjusted and the original unadjusted points. To draw a curve through the original unadjusted points, if so desired, select Weibull 3P Line Unadjusted for Gamma from the Show Plot Line submenu under the Plot Options menu. The returned estimations of the parameters are the same when selecting RRX or RRY. To display the unadjusted data points and line along with the adjusted data points and line, select Show/Hide Items under the Plot Options menu and include the unadjusted data points and line as follows:





The results and the associated graph for the previous example using the three-parameter Weibull case are shown next:



Maximum Likelihood Estimation
As outlined in Chapter Parameter Estimation, maximum likelihood estimation works by developing a likelihood function based on the available data and finding the values of the parameter estimates that maximize the likelihood function. This can be achieved by using iterative methods to determine the parameter estimate values that maximize the likelihood function, but this can be rather difficult and time-consuming, particularly when dealing with the three-parameter distribution. Another method of finding the parameter estimates involves taking the partial derivatives of the likelihood function with respect to the parameters, setting the resulting equations equal to zero and solving simultaneously to determine the values of the parameter estimates. ( Note that MLE asymptotic properties do not hold when estimating γ using MLE [27].) The log-likelihood functions and associated partial derivatives used to determine maximum likelihood estimates for the Weibull distribution are covered in Appendix.
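For complete data, the 2-parameter Weibull MLE reduces to a one-dimensional root-finding problem: given β̂, the estimate η̂ has the closed form $$ \hat{\eta}=(\frac{1}{N}\sum t_{i}^{\hat{\beta}})^{1/\hat{\beta}} $$, and β̂ solves the profile score equation. A minimal bisection sketch with hypothetical data (not the Weibull++ solver):

```python
import math

def weibull_mle(times, lo=0.05, hi=50.0, tol=1e-10):
    """2-parameter Weibull MLE for complete data via bisection on the
    profile score equation g(beta) = 0."""
    n = len(times)
    mean_log = sum(math.log(t) for t in times) / n

    def g(beta):
        s = sum(t ** beta for t in times)
        s_log = sum(t ** beta * math.log(t) for t in times)
        return s_log / s - 1.0 / beta - mean_log

    # g -> -inf as beta -> 0+ and is positive for large beta,
    # so bisection brackets the unique root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    beta = 0.5 * (lo + hi)
    eta = (sum(t ** beta for t in times) / n) ** (1.0 / beta)
    return beta, eta

beta_hat, eta_hat = weibull_mle([16.0, 34.0, 53.0, 75.0, 93.0, 120.0])
```

A useful sanity check is scale invariance: multiplying all times by a constant leaves β̂ unchanged and scales η̂ by that constant.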

Example 5:

Fisher Matrix Confidence Bounds
One of the methods used by the application in estimating the different types of confidence bounds for Weibull data, the Fisher matrix method, is presented in this section. The complete derivations were presented in detail (for a general function) in chapter Confidence Bounds.

Bounds on the Parameters
One of the properties of maximum likelihood estimators is that they are asymptotically normal, meaning that for large samples they are normally distributed. Additionally, since both the shape parameter estimate, $$ \hat{\beta } $$, and the scale parameter estimate, $$ \hat{\eta }, $$ must be positive, lnβ and lnη are treated as being normally distributed as well. The lower and upper bounds on the parameters are estimated from [30]:


 * $$ \beta _{U} =\hat{\beta }\cdot e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \beta })}}{\hat{\beta }}}\text{ (upper bound)} $$


 * $$ \beta _{L} =\frac{\hat{\beta }}{e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \beta })}}{\hat{\beta }}}} \text{ (lower bound)} $$

and:


 * $$ \eta _{U} =\hat{\eta }\cdot e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \eta })}}{\hat{\eta }}}\text{ (upper bound)} $$


 * $$ \eta _{L} =\frac{\hat{\eta }}{e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \eta })}}{\hat{\eta }}}}\text{ (lower bound)} $$

where $$ K_{\alpha}$$ is defined by:


 * $$ \alpha =\frac{1}{\sqrt{2\pi }}\int_{K_{\alpha }}^{\infty }e^{-\frac{t^{2}}{2} }dt=1-\Phi (K_{\alpha }) $$

If δ is the confidence level, then $$ \alpha =\frac{1-\delta }{2} $$ for the two-sided bounds and α = 1 − δ for the one-sided bounds. The variances and covariances of $$ \hat{\beta } $$ and $$ \hat{\eta } $$ are estimated from the inverse local Fisher matrix, as follows:
 * $$ \left( \begin{array}{cc} \hat{Var}\left( \hat{\beta }\right) & \hat{Cov}\left( \hat{ \beta },\hat{\eta }\right) \\ \hat{Cov}\left( \hat{\beta },\hat{\eta }\right) & \hat{Var} \left( \hat{\eta }\right) \end{array} \right) =\left( \begin{array}{cc} -\frac{\partial ^{2}\Lambda }{\partial \beta ^{2}} & -\frac{\partial ^{2}\Lambda }{\partial \beta \partial \eta } \\ -\frac{\partial ^{2}\Lambda }{\partial \beta \partial \eta } & -\frac{ \partial ^{2}\Lambda }{\partial \eta ^{2}} \end{array} \right) _{\beta =\hat{\beta },\text{ }\eta =\hat{\eta }}^{-1} $$
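A minimal sketch of the parameter-bound calculation, assuming the variance has already been extracted from the inverse local Fisher matrix (the numeric inputs are hypothetical):

```python
import math
from statistics import NormalDist

def param_bounds(est, var, confidence=0.90, two_sided=True):
    """Fisher-matrix bounds for a positive parameter estimate (beta or eta)."""
    alpha = (1.0 - confidence) / 2.0 if two_sided else 1.0 - confidence
    k_alpha = NormalDist().inv_cdf(1.0 - alpha)   # alpha = 1 - Phi(K_alpha)
    factor = math.exp(k_alpha * math.sqrt(var) / est)
    return est / factor, est * factor             # (lower, upper)

beta_lo, beta_hi = param_bounds(est=2.0, var=0.04)  # hypothetical MLE outputs
```

Because the bounds are symmetric on the log scale, the product of the lower and upper bounds equals the square of the estimate.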

Fisher Matrix Confidence Bounds and Regression Analysis

Note that the variance and covariance of the parameters are obtained from the inverse Fisher information matrix as described in this section. The local Fisher information matrix is obtained from the second partials of the likelihood function, by substituting the solved parameter estimates into the particular functions. This method is based on maximum likelihood theory and is derived from the fact that the parameter estimates were computed using maximum likelihood estimation methods. When one uses least squares or regression analysis for the parameter estimates, this methodology is not theoretically applicable. However, if one assumes that the variance and covariance of the parameters will be similar regardless of the underlying solution method (one also assumes similar properties for both estimators), then the above methodology can also be used in regression analysis.

The Fisher matrix is one of the methodologies that Weibull++ uses for both MLE and regression analysis. Specifically, Weibull++ uses the likelihood function and computes the local Fisher information matrix based on the estimates of the parameters and the current data. This gives consistent confidence bounds regardless of the underlying method of solution, i.e. MLE or regression. In addition, Weibull++ checks this assumption and proceeds with it if it considers it to be acceptable. In some instances, Weibull++ will prompt you with an "Unable to Compute Confidence Bounds" message when using regression analysis. This is an indication that these assumptions were violated.

Bounds on Reliability
The bounds on reliability can easily be derived by first looking at the general extreme value distribution (EVD). Its reliability function is given by:


 * $$ R(t)=e^{-e^{\left( \frac{t-p_{1}}{p_{2}}\right) }} $$

By substituting $$ \ln t $$ for $$ t $$ and setting $$ p_{1}=\ln({\eta})$$ and $$ p_{2}=\frac{1}{ \beta } $$, the above equation becomes the Weibull reliability function:


 * $$ R(t)=e^{-e^{\beta \left( \ln t-\ln \eta \right) }}=e^{-e^{\ln \left( \frac{t }{\eta }\right) ^{\beta }}}=e^{-\left( \frac{t}{\eta }\right) ^{\beta }} $$

with:


 * $$ R(T)=e^{-e^{\beta \left( \ln t-\ln \eta \right) }}$$

set:


 * $$ u=\beta \left( \ln t-\ln \eta \right). $$

The reliability function now becomes:


 * $$ R(T)=e^{-e^{u}} $$

The next step is to find the upper and lower bounds on u. Using the equations derived in Chapter Confidence Bounds, the bounds on u are then estimated from [30]:


 * $$ u_{U} =\hat{u}+K_{\alpha }\sqrt{Var(\hat{u})} $$


 * $$ u_{L} =\hat{u}-K_{\alpha }\sqrt{Var(\hat{u})} $$

where:


 * $$ Var(\hat{u}) =\left( \frac{\partial u}{\partial \beta }\right) ^{2}Var( \hat{\beta })+\left( \frac{\partial u}{\partial \eta }\right) ^{2}Var( \hat{\eta }) +2\left( \frac{\partial u}{\partial \beta }\right) \left( \frac{\partial u }{\partial \eta }\right) Cov\left( \hat{\beta },\hat{\eta }\right) $$

or:


 * $$ Var(\hat{u}) =\frac{\hat{u}^{2}}{\hat{\beta }^{2}}Var(\hat{ \beta })+\frac{\hat{\beta }^{2}}{\hat{\eta }^{2}}Var(\hat{\eta }) -\left( \frac{2\hat{u}}{\hat{\eta }}\right) Cov\left( \hat{\beta }, \hat{\eta }\right). $$

The upper and lower bounds on reliability are:


 * $$ R_{U} =e^{-e^{u_{L}}}\text{ (upper bound)}$$


 * $$ R_{L} =e^{-e^{u_{U}}}\text{ (lower bound)}$$
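These reliability-bound equations can be sketched as follows for the 2-parameter case (γ = 0); the variance and covariance inputs are hypothetical placeholders for values taken from the inverse Fisher matrix:

```python
import math
from statistics import NormalDist

def reliability_bounds(t, beta, eta, var_beta, var_eta, cov, confidence=0.90):
    """Two-sided Fisher-matrix bounds on R(t), 2-parameter Weibull (gamma = 0)."""
    u = beta * (math.log(t) - math.log(eta))
    var_u = (u / beta) ** 2 * var_beta + (beta / eta) ** 2 * var_eta \
            - (2.0 * u / eta) * cov
    k = NormalDist().inv_cdf(1.0 - (1.0 - confidence) / 2.0)
    u_upper = u + k * math.sqrt(var_u)
    u_lower = u - k * math.sqrt(var_u)
    # R_U uses u_L and R_L uses u_U because R = exp(-exp(u)) decreases in u
    return math.exp(-math.exp(u_upper)), math.exp(-math.exp(u_lower))

# hypothetical MLE results and variances
r_lower, r_upper = reliability_bounds(50.0, 2.0, 100.0, 0.04, 25.0, 0.1)
```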

Other Weibull Forms

Weibull++ makes the following assumptions/substitutions when using the three-parameter or one-parameter forms:


 * For the three-parameter case, substitute $$ \ln (t-\hat{\gamma }) $$ (and by definition γ < t) in place of $$ \ln t $$. (Note that this is an approximation, since it eliminates the third parameter and assumes that $$ Var( \hat{\gamma })=0 $$.)
 * For the one-parameter, $$ Var(\hat{\beta })=0, $$ thus:


 * $$ Var(\hat{u})=\left( \frac{\partial u}{\partial \eta }\right) ^{2}Var( \hat{\eta })=\left( \frac{\hat{\beta }}{\hat{\eta }}\right) ^{2}Var(\hat{\eta }) $$

Also note that the time axis (x-axis) in the three-parameter Weibull plot in Weibull++ is not t but t − γ. This means that one must be cautious when obtaining confidence bounds from the plot. If one desires to estimate the confidence bounds on reliability for a given time t0 from the adjusted plotted line, then these bounds should be obtained for a t0 − γ entry on the time axis.

Bounds on Time
The bounds around the time estimate or reliable life estimate, for a given Weibull percentile (unreliability), are estimated by first solving the reliability equation with respect to time, as follows [24, 30]:


 * $$ \ln R =-\left( \frac{t}{\eta }\right) ^{\beta } $$


 * $$ \ln (-\ln R) =\beta \ln \left( \frac{t}{\eta }\right) $$


 * $$ \ln (-\ln R) =\beta (\ln t-\ln \eta )$$

or:


 * $$ u=\frac{1}{\beta }\ln (-\ln R)+\ln \eta $$

where u = lnt.

The upper and lower bounds on u are estimated from:


 * $$ u_{U} =\hat{u}+K_{\alpha }\sqrt{Var(\hat{u})} $$


 * $$ u_{L} =\hat{u}-K_{\alpha }\sqrt{Var(\hat{u})} $$

where:


 * $$ Var(\hat{u})=\left( \frac{\partial u}{\partial \beta }\right) ^{2}Var( \hat{\beta })+\left( \frac{\partial u}{\partial \eta }\right) ^{2}Var( \hat{\eta })+2\left( \frac{\partial u}{\partial \beta }\right) \left( \frac{\partial u}{\partial \eta }\right) Cov\left( \hat{\beta },\hat{ \eta }\right) $$

or:
 * $$ Var(\hat{u}) =\frac{1}{\hat{\beta }^{4}}\left[ \ln (-\ln R)\right] ^{2}Var(\hat{\beta })+\frac{1}{\hat{\eta }^{2}}Var(\hat{\eta })+2\left( -\frac{1}{\hat{\beta }^{2}}\right) \left( \frac{\ln (-\ln R)}{ \hat{\eta }}\right) Cov\left( \hat{\beta },\hat{\eta }\right) $$

The upper and lower bounds are then found by:


 * $$ T_{U} =e^{u_{U}}\text{ (upper bound)} $$


 * $$ T_{L} =e^{u_{L}}\text{ (lower bound)} $$
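A corresponding sketch for the bounds on time (2-parameter case; the variance and covariance inputs are hypothetical placeholders for inverse-Fisher-matrix values):

```python
import math
from statistics import NormalDist

def time_bounds(R, beta, eta, var_beta, var_eta, cov, confidence=0.90):
    """Two-sided Fisher-matrix bounds on the time at which reliability is R
    (2-parameter Weibull). Inputs are hypothetical MLE results."""
    ll = math.log(-math.log(R))
    u = ll / beta + math.log(eta)                  # u = ln(t)
    var_u = (ll ** 2 / beta ** 4) * var_beta + var_eta / eta ** 2 \
            - (2.0 / beta ** 2) * (ll / eta) * cov
    k = NormalDist().inv_cdf(1.0 - (1.0 - confidence) / 2.0)
    half = k * math.sqrt(var_u)
    return math.exp(u - half), math.exp(u + half)  # (T_L, T_U)

t_lower, t_upper = time_bounds(0.90, 2.0, 100.0, 0.04, 25.0, 0.1)
```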

Likelihood Ratio Confidence Bounds
As covered in Chapter Confidence Bounds, the likelihood confidence bounds are calculated by finding values for θ1 and θ2 that satisfy:


 * $$ -2\cdot \text{ln}\left( \frac{L(\theta _{1},\theta _{2})}{L(\hat{\theta }_{1}, \hat{\theta }_{2})}\right) =\chi _{\alpha ;1}^{2} $$

This equation can be rewritten as:


 * $$ L(\theta _{1},\theta _{2})=L(\hat{\theta }_{1},\hat{\theta } _{2})\cdot e^{\frac{-\chi _{\alpha ;1}^{2}}{2}} $$

For complete data, the likelihood function for the Weibull distribution is given by:
 * $$ L(\beta ,\eta )=\prod_{i=1}^{N}f(x_{i};\beta ,\eta )=\prod_{i=1}^{N}\frac{ \beta }{\eta }\cdot \left( \frac{x_{i}}{\eta }\right) ^{\beta -1}\cdot e^{-\left( \frac{x_{i}}{\eta }\right) ^{\beta }} $$

For a given value of α, values for β and η can be found which represent the maximum and minimum values that satisfy the above equation. These represent the confidence bounds for the parameters at a confidence level δ, where α = δ for two-sided bounds and α = 2δ − 1 for one-sided.
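One crude way to realize this numerically is to scan a (β, η) grid, keep the points whose likelihood satisfies the ratio condition, and read off the parameter extremes. The sketch below uses hypothetical data and the known 0.90 quantile of the chi-squared distribution with one degree of freedom, $$ \chi _{0.90;1}^{2}\approx 2.706 $$; Weibull++ solves this far more precisely than a grid scan:

```python
import math

def loglik(beta, eta, times):
    """Complete-data Weibull log-likelihood."""
    return sum(math.log(beta / eta) + (beta - 1.0) * math.log(t / eta)
               - (t / eta) ** beta for t in times)

def lr_beta_bounds(times, chi2=2.706):
    """Grid-scan likelihood ratio bounds on beta (90% two-sided).
    A crude illustration of the contour condition, not a production solver."""
    grid = [(b / 100.0, float(e))
            for b in range(20, 600, 2) for e in range(10, 300, 2)]
    lls = {p: loglik(p[0], p[1], times) for p in grid}
    lmax = max(lls.values())
    inside = [b for (b, e), v in lls.items() if v >= lmax - chi2 / 2.0]
    return min(inside), max(inside)

times = [16.0, 34.0, 53.0, 75.0, 93.0, 120.0]   # hypothetical complete data
beta_lo, beta_hi = lr_beta_bounds(times)
```

The same scan, read off in the η direction, gives the likelihood ratio bounds on η.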

Similarly, the bounds on time and reliability can be found by substituting the Weibull reliability equation into the likelihood function so that it is in terms of β and time or reliability, as discussed in Chapter Confidence Bounds. The likelihood ratio equation used to solve for bounds on time (Type 1) is:


 * $$ L(\beta ,t)=\prod_{i=1}^{N}\frac{\beta }{\left( \frac{t}{(-\text{ln}(R))^{ \frac{1}{\beta }}}\right) }\cdot \left( \frac{x_{i}}{\left( \frac{t}{(-\text{ ln}(R))^{\frac{1}{\beta }}}\right) }\right) ^{\beta -1}\cdot \text{exp}\left[ -\left( \frac{x_{i}}{\left( \frac{t}{(-\text{ln}(R))^{\frac{1}{\beta }}} \right) }\right) ^{\beta }\right] $$

The likelihood ratio equation used to solve for bounds on reliability (Type 2) is:


 * $$ L(\beta ,R)=\prod_{i=1}^{N}\frac{\beta }{\left( \frac{t}{(-\text{ln}(R))^{ \frac{1}{\beta }}}\right) }\cdot \left( \frac{x_{i}}{\left( \frac{t}{(-\text{ ln}(R))^{\frac{1}{\beta }}}\right) }\right) ^{\beta -1}\cdot \text{exp}\left[ -\left( \frac{x_{i}}{\left( \frac{t}{(-\text{ln}(R))^{\frac{1}{\beta }}} \right) }\right) ^{\beta }\right] $$

Bounds on Parameters
Bayesian Bounds use non-informative prior distributions for both parameters. From Chapter Confidence Bounds, we know that if the prior distributions of η and β are independent, the posterior joint distribution of η and β can be written as:


 * $$ f(\eta ,\beta |Data)= \dfrac{L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )}{\int_{0}^{\infty }\int_{0}^{\infty }L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )d\eta d\beta } $$

The marginal distribution of η is:


 * $$ f(\eta |Data)=\int_{0}^{\infty }f(\eta ,\beta |Data)d\beta =\dfrac{\int_{0}^{\infty }L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )d\eta d\beta } $$

where:
 * $$ \varphi (\beta )=\frac{1}{\beta } $$ is the non-informative prior of β.
 * $$ \varphi (\eta )=\frac{1}{\eta } $$ is the non-informative prior of η.

Using these non-informative prior distributions, $$f(\eta|Data)$$ can be rewritten as:


 * $$ f(\eta |Data)=\dfrac{\int_{0}^{\infty }L(Data|\eta ,\beta )\frac{1}{\beta } \frac{1}{\eta }d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(Data|\eta ,\beta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta } $$

The one-sided upper bound on η is:


 * $$ CL=P(\eta \leq \eta _{U})=\int_{0}^{\eta _{U}}f(\eta |Data)d\eta $$

The one-sided lower bound on η is:


 * $$ 1-CL=P(\eta \leq \eta _{L})=\int_{0}^{\eta _{L}}f(\eta |Data)d\eta $$

The two-sided bounds on η are:


 * $$ CL=P(\eta _{L}\leq \eta \leq \eta _{U})=\int_{\eta _{L}}^{\eta _{U}}f(\eta |Data)d\eta $$

The same method is used to obtain the bounds on β.
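These integrals generally have no closed form and are evaluated numerically. The following brute-force sketch computes the marginal posterior of η and its one-sided upper bound on a grid; the failure times, the 90% confidence level, and the grid ranges are illustrative assumptions.

```python
# A brute-force numerical sketch of the Bayesian bounds on eta with the
# non-informative priors 1/beta and 1/eta. The failure times and the 90%
# confidence level are illustrative assumptions.
import numpy as np

times = np.array([16.0, 34.0, 53.0, 75.0, 93.0, 120.0])  # hypothetical complete data

betas = np.linspace(0.05, 8.0, 300)
etas = np.linspace(5.0, 500.0, 600)
db, de = betas[1] - betas[0], etas[1] - etas[0]
B, E = np.meshgrid(betas, etas, indexing="ij")

# Likelihood of the complete data times the joint prior 1/(beta*eta)
logL = np.zeros_like(B)
for t in times:
    logL += np.log(B / E) + (B - 1.0) * np.log(t / E) - (t / E) ** B
integrand = np.exp(logL) / (B * E)

# Marginal posterior of eta: integrate out beta (crude rectangle rule), normalize
post_eta = integrand.sum(axis=0) * db
post_eta /= post_eta.sum() * de

# One-sided upper bound: smallest eta_U on the grid with P(eta <= eta_U) >= 0.90
cdf = np.cumsum(post_eta) * de
eta_U = etas[np.searchsorted(cdf, 0.90)]
```

The rectangle rule is deliberately crude; any quadrature routine can replace it, provided the grid covers the region where the likelihood has appreciable mass.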

Bounds on Reliability
From Chapter Confidence Bounds, we know that:


 * $$ CL=\Pr (R\leq R_{U})=\Pr (\eta \leq T\exp (-\frac{\ln (-\ln R_{U})}{\beta })) $$

From the posterior distribution of η, we have:


 * $$ CL=\dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{T\exp (-\dfrac{\ln (-\ln R_{U})}{\beta })}L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta } $$

The above equation is solved numerically for RU. The same method can be used to calculate the one-sided lower bound and two-sided bounds on reliability.
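The numerical solution can be sketched with a bisection on RU, reusing the grid-evaluated posterior. The failure times, the mission time t = 50, and the 90% confidence level are illustrative assumptions.

```python
# A numerical sketch of solving for the one-sided upper bound R_U on
# reliability by bisection. The data, the mission time t = 50, and the 90%
# confidence level are illustrative assumptions.
import numpy as np

times = np.array([16.0, 34.0, 53.0, 75.0, 93.0, 120.0])
t_mission, CL = 50.0, 0.90

betas = np.linspace(0.05, 8.0, 300)
etas = np.linspace(5.0, 500.0, 600)
B, E = np.meshgrid(betas, etas, indexing="ij")

logL = np.zeros_like(B)
for x in times:
    logL += np.log(B / E) + (B - 1.0) * np.log(x / E) - (x / E) ** B
integrand = np.exp(logL) / (B * E)   # likelihood times the 1/(beta*eta) prior
total = integrand.sum()

def cl_of(RU):
    # P(R <= RU) = P(eta <= t * (-ln RU)^(-1/beta)), evaluated on the grid
    cut = t_mission * (-np.log(RU)) ** (-1.0 / betas)
    return integrand[E <= cut[:, None]].sum() / total

# cl_of is increasing in RU, so bisect for cl_of(RU) = CL
lo, hi = 1e-9, 1.0 - 1e-9
for _ in range(60):
    mid = (lo + hi) / 2.0
    lo, hi = (mid, hi) if cl_of(mid) < CL else (lo, mid)
R_U = (lo + hi) / 2.0
```

The same bisection, with the integration region changed accordingly, yields the lower bound and the bounds on time.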

Bounds on Time
From Chapter Confidence Bounds, we know that:


 * $$ CL=\Pr (T\leq T_{U})=\Pr (\eta \leq T_{U}\exp (-\frac{\ln (-\ln R)}{\beta })) $$

From the posterior distribution of η, we have:


 * $$ CL=\dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{T_{U}\exp (-\dfrac{ \ln (-\ln R)}{\beta })}L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta } $$

The above equation is solved numerically for TU. The same method can be applied to calculate the one-sided lower bound and two-sided bounds on time.

Bayesian-Weibull Analysis
In this section, the Bayesian methods are presented for the 2-parameter Weibull distribution. Bayesian concepts were introduced in Chapter Parameter Estimation. This model considers prior knowledge on the shape parameter ( β ) of the Weibull distribution when it is chosen to be fitted to a given set of data. There are many practical applications for this model, particularly when dealing with small sample sizes where some prior knowledge of the shape parameter is available. For example, when a test is performed, there is often a good understanding of the behavior of the failure mode under investigation, primarily through historical data. At the same time, most reliability tests are performed on a limited number of samples. Under these conditions, it would be very useful to use this prior knowledge with the goal of making more accurate predictions. A common approach for such scenarios is to use the 1-parameter Weibull distribution, but this approach is too deterministic, too absolute you may say (and you would be right). The Weibull-Bayesian model in Weibull++ (which is actually a true "WeiBayes" model, unlike the 1-parameter Weibull that is commonly referred to as such) offers an alternative to the 1-parameter Weibull by including the variation and uncertainty that might have been observed in the past on the shape parameter. Applying Bayes's rule to the 2-parameter Weibull distribution and assuming the prior distributions of β and η are independent, we obtain the following posterior distribution:


 * $$ f(\beta ,\eta |Data)=\dfrac{L(\beta ,\eta )\varphi (\beta )\varphi (\eta )}{ \int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta } $$

In this model, η is assumed to follow a noninformative prior distribution with the density function $$ \varphi (\eta )=\dfrac{1}{\eta } $$. This is called the Jeffreys prior, and is obtained by performing a logarithmic transformation on η. Specifically, since η is always positive, we can assume that ln(η) follows a uniform distribution, U( − ∞, + ∞). Applying Jeffreys's rule [9], which says "in general, an approximate non-informative prior is taken proportional to the square root of Fisher's information," yields $$ \varphi (\eta )=\dfrac{1}{\eta }. $$
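The transformation step is a standard change-of-variables argument: if $$ u=\ln (\eta ) $$ is taken to follow a uniform distribution, then transforming back to η gives:


 * $$ \varphi (\eta )=\varphi (u)\left\vert \frac{du}{d\eta }\right\vert \propto \frac{1}{\eta } $$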

The prior distribution of β, denoted as $$ \varphi (\beta ) $$, can be selected from the following distributions: normal, lognormal, exponential and uniform. The procedure of performing a Weibull-Bayesian analysis is as follows:


 * Collect the times-to-failure data.
 * Specify a prior distribution for β (the prior for η is assumed to be 1/η).
 * Obtain the posterior from the above equation.

In other words, a distribution (the posterior ) is obtained, rather than a point estimate as in classical statistics (i.e., as in the parameter estimation methods described previously in this chapter). Therefore, if a point estimate needs to be reported, a point of the posterior needs to be calculated. Typical points of the posterior distribution used are the mean (expected value) or median. In Weibull++, both options are available and can be chosen from the Analysis page, under the Results As area, as shown next.



The expected value of β is obtained by:
 * $$ E(\beta )=\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }\beta \cdot f(\beta ,\eta |Data)d\beta d\eta $$

Similarly, the expected value of η is obtained by:
 * $$ E(\eta )=\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }\eta \cdot f(\beta ,\eta |Data)d\beta d\eta $$

The median points are obtained by solving the following equations for $$ \breve{\beta} $$ and $$ \breve{\eta} $$ respectively:


 * $$ \int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\breve{\beta}}f(\beta ,\eta |Data)d\beta d\eta =0.5 $$

and


 * $$ \int\nolimits_{0}^{\breve{\eta}}\int\nolimits_{0}^{\infty }f(\beta ,\eta |Data)d\beta d\eta =0.5 $$

Of course, other points of the posterior distribution can be calculated as well. For example, one may want to calculate the 10th percentile of the joint posterior distribution (w.r.t. one of the parameters). The procedure for obtaining other points of the posterior distribution is similar to the one for obtaining the median values, where instead of 0.5 the percentage of interest is given. This procedure actually provides the confidence bounds on the parameters, which in the Bayesian framework are called "credible bounds." However, since the engineering interpretation is the same, and to avoid confusion, we refer to them as confidence bounds in this reference and in Weibull++.
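These point estimates can be sketched concretely on a grid. The small data set, the lognormal prior parameters, and the grid ranges below are illustrative assumptions, not Weibull++ defaults.

```python
# Grid-based sketch of Weibull-Bayesian point estimates for beta. The failure
# times and the lognormal prior parameters are illustrative assumptions.
import numpy as np

times = np.array([28.0, 46.0, 71.0])        # hypothetical small sample
mu, sigma = np.log(1.8), 0.3                # assumed lognormal prior on beta

betas = np.linspace(0.2, 6.0, 400)
etas = np.linspace(5.0, 500.0, 500)
B, E = np.meshgrid(betas, etas, indexing="ij")

# Joint posterior (up to a constant): L(beta, eta) * phi(beta) * (1/eta)
logL = np.zeros_like(B)
for t in times:
    logL += np.log(B / E) + (B - 1.0) * np.log(t / E) - (t / E) ** B
prior_beta = np.exp(-(np.log(B) - mu) ** 2 / (2.0 * sigma ** 2)) / (B * sigma * np.sqrt(2.0 * np.pi))
post = np.exp(logL) * prior_beta / E
post /= post.sum()                          # normalize over the grid

beta_mean = (B * post).sum()                # E(beta), the posterior mean
marg_beta = post.sum(axis=1)                # marginal posterior of beta
beta_median = betas[np.searchsorted(np.cumsum(marg_beta), 0.5)]
```

With only three failures, the posterior stays close to the prior, which is exactly the behavior that motivates the Weibull-Bayesian model for small samples.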

Posterior Distributions for Functions of Parameters
As explained in Chapter Parameter Estimation, in Bayesian analysis, all the functions of the parameters are distributed. In other words, a posterior distribution is obtained for functions such as reliability and failure rate, instead of a point estimate as in classical statistics. Therefore, in order to obtain a point estimate for these functions, a point on the posterior distribution needs to be calculated. Again, the expected value (mean) or median value is used.

$$pdf$$ of the Times-to-Failure

The posterior distribution of the failure time is given by:


 * $$ f(T|Data)=\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }f(T,\beta ,\eta )f(\beta ,\eta |Data)d\eta d\beta $$

where:


 * $$ f(T,\beta ,\eta )=\dfrac{\beta }{\eta }\left( \dfrac{T}{\eta }\right) ^{\beta -1}e^{-\left( \dfrac{T}{\eta }\right) ^{\beta }} $$

For the $$pdf$$ of the times-to-failure, only the expected value is calculated and reported in Weibull++.

Reliability

In order to calculate the median value of the reliability function, we first need to obtain the posterior distribution of the reliability. Since R(T) is a function of β, the density functions of β and R(T) have the following relationship:


 * $$ \begin{align} f(R|Data,T)dR = & f(\beta |Data)d\beta \\ = & \left( \int\nolimits_{0}^{\infty }f(\beta ,\eta |Data)d\eta \right) d\beta \\ = & \dfrac{\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }d\beta \end{align} $$

The median value of the reliability is obtained by solving the following equation w.r.t. $$ \breve{R}: $$


 * $$ \int\nolimits_{0}^{\breve{R}}f(R|Data,T)dR=0.5 $$

The expected value of the reliability at time T is given by:


 * $$ R(T|Data)=\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }R(T,\beta ,\eta )f(\beta ,\eta |Data)d\eta d\beta $$

where:


 * $$ R(T,\beta ,\eta )=e^{-\left( \dfrac{T}{\eta }\right) ^{^{\beta }}} $$

Failure Rate

The failure rate at time T is given by:


 * $$ \lambda (T|Data)=\dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }\lambda (T,\beta ,\eta )L(\beta ,\eta )\varphi (\eta )\varphi (\beta )d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\eta )\varphi (\beta )d\eta d\beta } $$

where:


 * $$ \lambda (T,\beta ,\eta )=\dfrac{\beta }{\eta }\left( \dfrac{T}{\eta }\right) ^{\beta -1} $$

Note on Calculated Results
As mentioned above, in order to obtain point estimates for the parameters or functions of the parameters in Bayesian analysis, the Median or Mean values of the different posterior $$pdf$$s are calculated. It is important to note that the Median value is preferable and is the default in Weibull++. This is because the Median value always corresponds to the 50th percentile of the distribution. On the other hand, the Mean is not a fixed point on the distribution, which could cause issues, especially when comparing results across different data sets.

Bounds on Reliability
The confidence bounds calculation under the Weibull-Bayesian analysis is very similar to the Bayesian Confidence Bounds method described in the previous section, with the exception that the specified prior of β is considered instead of a non-informative prior. The Bayesian one-sided upper bound estimate for R(T) is given by:


 * $$ \int\nolimits_{0}^{R_{U}(T)}f(R|Data,t)dR=CL $$

Using the posterior distribution, the following is obtained:


 * $$ \dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{t\exp (-\dfrac{\ln (-\ln R_{U})}{\beta })}L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }=CL $$

The above equation can be solved for RU(t). The Bayesian one-sided lower bound estimate for $$ \ R(t) $$ is given by:


 * $$ \int\nolimits_{0}^{R_{L}(t)}f(R|Data,t)dR=1-CL $$

Using the posterior distribution, the following is obtained:


 * $$ \dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{t\exp (-\dfrac{\ln (-\ln R_{L})}{\beta })}L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }=1-CL $$

The above equation can be solved for RL(t). The Bayesian two-sided bounds estimate for R(t) is given by:


 * $$ \int\nolimits_{R_{L}(t)}^{R_{U}(t)}f(R|Data,t)dR=CL $$

which is equivalent to:


 * $$ \int\nolimits_{0}^{R_{U}(t)}f(R|Data,t)dR=(1+CL)/2 $$

and


 * $$ \int\nolimits_{0}^{R_{L}(t)}f(R|Data,t)dR=(1-CL)/2 $$

Using the same method for one-sided bounds, RU(t) and RL(t) can be computed.

Bounds on Time
Following the same procedure described for bounds on reliability, the bounds on time can be calculated for a given reliability. The Bayesian one-sided upper bound estimate for T(R) is given by:


 * $$ \int\nolimits_{0}^{T_{U}(R)}f(T|Data,R)dT=CL $$

Using the posterior distribution, the following is obtained:


 * $$ \dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{T_{U}\exp (-\dfrac{\ln (-\ln R)}{\beta })}L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }=CL $$

The above equation can be solved for TU(R). The Bayesian one-sided lower bound estimate for T(R) is given by:


 * $$ \int\nolimits_{0}^{T_{L}(R)}f(T|Data,R)dT=1-CL $$

or:


 * $$ \dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{T_{L}\exp (\dfrac{-\ln (-\ln R)}{\beta })}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }=CL $$

The above equation can be solved for TL(R). The Bayesian two-sided bounds estimate for T(R) is:


 * $$ \int\nolimits_{T_{L}(R)}^{T_{U}(R)}f(T|Data,R)dT=CL $$

which is equivalent to:


 * $$ \int\nolimits_{0}^{T_{U}(R)}f(T|Data,R)dT=(1+CL)/2 $$

and:


 * $$ \int\nolimits_{0}^{T_{L}(R)}f(T|Data,R)dT=(1-CL)/2 $$

Example 6: