The Weibull Distribution

The Weibull distribution is one of the most widely used lifetime distributions in reliability engineering. It is a versatile distribution that can take on the characteristics of other types of distributions, based on the value of the shape parameter, $$ {\beta} $$. This chapter provides a brief background on the Weibull distribution, derives most of the applicable equations and presents examples calculated both manually and by using ReliaSoft's Weibull++.

The 2-Parameter Weibull
The 2-parameter Weibull pdf is obtained by setting $$ \gamma=0 \,\!$$, and is given by:


 * $$ f(t)={ \frac{\beta }{\eta }}\left( {\frac{t}{\eta }}\right) ^{\beta -1}e^{-\left( { \frac{t}{\eta }}\right) ^{\beta }} \,\!$$
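As a quick numerical sanity check on the pdf above, it can be evaluated directly (a minimal sketch; the parameter values are arbitrary illustration choices):

```python
import math

def weibull_pdf(t, beta, eta):
    """2-parameter Weibull pdf: f(t) = (beta/eta) * (t/eta)^(beta-1) * exp(-(t/eta)^beta)."""
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

# Sanity check: with beta = 1 the Weibull reduces to the exponential
# distribution with mean eta, so f(t) = (1/eta) * exp(-t/eta).
assert abs(weibull_pdf(2.0, 1.0, 2.0) - 0.5 * math.exp(-1.0)) < 1e-12
```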

The 1-Parameter Weibull
The 1-parameter Weibull pdf is obtained by again setting $$\gamma=0 \,\!$$ and assuming that the shape parameter is a known constant, $$\beta=C=\text{constant} \,\!$$, or:

$$ f(t)={ \frac{C}{\eta }}\left( {\frac{t}{\eta }}\right) ^{C-1}e^{-\left( {\frac{t}{ \eta }}\right) ^{C}} \,\!$$

where the only unknown parameter is the scale parameter, $$\eta\,\!$$.

Note that in the formulation of the 1-parameter Weibull, we assume that the shape parameter $$\beta \,\!$$ is known a priori from past experience with identical or similar products. The advantage of doing this is that data sets with few or no failures can be analyzed.

Estimation of the Weibull Parameters
The estimates of the parameters of the Weibull distribution can be found graphically via probability plotting paper, or analytically, either using least squares or maximum likelihood.

Probability Plotting
One method of calculating the parameters of the Weibull distribution is by using probability plotting. To better illustrate this procedure, consider the following example from Kececioglu [20].

Example 1:

Probability Plotting for the Location Parameter, γ 

The third parameter of the Weibull distribution is utilized when the data do not fall on a straight line, but fall on either a concave up or down curve. The following statements can be made regarding the value of γ:


 * Case 1: If the curve for MR versus tj is concave down and the curve for MR versus (tj − t1) is concave up, then there exists a γ such that 0 < γ < t1, or γ has a positive value.


 * Case 2: If the curves for MR versus tj and MR versus (tj − t1) are both concave up, then there exists a negative γ which will straighten out the curve of MR versus tj.


 * Case 3: If neither one of the previous two cases prevails, then either reject the Weibull as one capable of representing the data, or proceed with the multiple population (mixed Weibull) analysis. To obtain the location parameter, γ:


 * Subtract the same arbitrary value, γ, from all the times to failure and replot the data.
 * If the initial curve is concave up, subtract a negative γ from each failure time.
 * If the initial curve is concave down, subtract a positive γ from each failure time.
 * Repeat until the data plots on an acceptable straight line.
 * The value of γ is the subtracted (positive or negative) value that places the points in an acceptable straight line.

The other two parameters are then obtained using the techniques previously described. Also, it is important to note that we used the term subtract a positive or negative gamma, where subtracting a negative gamma is equivalent to adding it. Note that when adjusting for gamma, the x-axis scale for the straight line becomes (t − γ).
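The trial-and-error procedure above can be sketched numerically: subtract a candidate γ, replot (here: recompute the probability-plot coordinates), and keep the γ that makes the points most linear. The failure times and the Benard median-rank approximation below are illustrative assumptions, not values from the text:

```python
import math

def median_ranks(n):
    # Benard's approximation for median ranks: MR_j = (j - 0.3) / (n + 0.4)
    return [(j - 0.3) / (n + 0.4) for j in range(1, n + 1)]

def straightness(times, gamma):
    """Correlation of ln(t - gamma) vs. ln(-ln(1 - MR)) -- i.e., how straight
    the gamma-adjusted points plot on Weibull probability paper."""
    xs = [math.log(t - gamma) for t in times]
    ys = [math.log(-math.log(1.0 - mr)) for mr in median_ranks(len(times))]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

times = [105.0, 130.0, 170.0, 230.0, 330.0, 500.0]  # hypothetical failure times
# Case 1: try candidate positive gammas below the first failure time.
best_gamma = max(range(0, 100, 5), key=lambda g: straightness(times, g))
```

In practice Weibull++ automates this search; the sketch only shows the idea of scoring straightness with a correlation coefficient.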

Rank Regression on Y
Performing rank regression on Y requires that a straight line be fitted mathematically to a set of data points such that the sum of the squares of the vertical deviations from the points to the line is minimized. This is in essence the same methodology as the probability plotting method, except that we use the principle of least squares to determine the line through the points, as opposed to just eyeballing it. The first step is to bring our function into a linear form. For the two-parameter Weibull distribution, the cumulative distribution function is:


 * $$ F(t)=1-e^{-\left( \frac{t}{\eta }\right) ^{\beta }} $$

Taking the natural logarithm of both sides of the equation yields:


 * $$\ln[ 1-F(t)] =-( \frac{t}{\eta }) ^{\beta } $$


 * $$ \ln \{ -\ln[ 1-F(t)]\} =\beta \ln \left( \frac{t}{\eta }\right) $$

or:


 * $$ \ln \{ -\ln[ 1-F(t)]\} =-\beta \ln (\eta )+\beta \ln (t) $$

Now let:


 * $$ y=\ln \{ -\ln[ 1-F(t)]\} $$


 * $$ a=-\beta \ln (\eta ) $$

and:


 * $$ b=\beta $$

which results in the linear equation of:


 * $$ y=a+bx $$

The least squares parameter estimation method (also known as regression analysis) was discussed in Parameter Estimation, and the following equations for regression on Y were derived:


 * $$ \hat{a}=\frac{\sum\limits_{i=1}^{N}y_{i}}{N}-\hat{b}\frac{ \sum\limits_{i=1}^{N}x_{i}}{N}=\bar{y}-\hat{b}\bar{x} $$

and:


 * $$ \hat{b}={\frac{\sum\limits_{i=1}^{N}x_{i}y_{i}-\frac{\sum \limits_{i=1}^{N}x_{i}\sum\limits_{i=1}^{N}y_{i}}{N}}{\sum \limits_{i=1}^{N}x_{i}^{2}-\frac{\left( \sum\limits_{i=1}^{N}x_{i}\right) ^{2}}{N}}} $$

In this case the equations for yi and xi are:


 * $$ y_{i}=\ln \left\{ -\ln [1-F(t_{i})]\right\}, $$

and:


 * xi = ln(ti).

The $$ F(t_{i}) $$ values are estimated from the median ranks.

Once $$ \hat{a} $$ and $$ \hat{b} $$ are obtained, then $$ \hat{\beta } $$ and $$ \hat{\eta } $$ can easily be obtained from previous equations.
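The regression-on-Y steps above can be collected into one short routine (a sketch for complete data; the failure times are illustrative and Benard's approximation is used for the median ranks):

```python
import math

def weibull_rry(times):
    """Rank regression on Y for the 2-parameter Weibull (complete data).

    times: sorted times to failure. Returns (beta_hat, eta_hat).
    Median ranks are estimated with Benard's approximation.
    """
    n = len(times)
    xs = [math.log(t) for t in times]                        # x_i = ln(t_i)
    mrs = [(j - 0.3) / (n + 0.4) for j in range(1, n + 1)]   # F(t_i) estimates
    ys = [math.log(-math.log(1.0 - mr)) for mr in mrs]       # y_i = ln{-ln[1 - F(t_i)]}
    sx, sy = sum(xs), sum(ys)
    b_hat = (sum(x * y for x, y in zip(xs, ys)) - sx * sy / n) / (
        sum(x * x for x in xs) - sx ** 2 / n)
    a_hat = sy / n - b_hat * sx / n
    beta = b_hat                     # slope b = beta
    eta = math.exp(-a_hat / b_hat)   # intercept a = -beta * ln(eta)
    return beta, eta

beta_hat, eta_hat = weibull_rry([16.0, 34.0, 53.0, 75.0, 93.0, 120.0])
```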

The Correlation Coefficient

The correlation coefficient is defined as follows:


 * $$ \rho ={\frac{\sigma _{xy}}{\sigma _{x}\sigma _{y}}} $$

where σxy is the covariance of x and y, σx is the standard deviation of x, and σy is the standard deviation of y. The estimator of ρ is the sample correlation coefficient, $$ \hat{\rho} $$, given by:


 * $$ \hat{\rho}=\frac{\sum\limits_{i=1}^{N}(x_{i}-\overline{x})(y_{i}-\overline{y} )}{\sqrt{\sum\limits_{i=1}^{N}(x_{i}-\overline{x})^{2}\cdot \sum\limits_{i=1}^{N}(y_{i}-\overline{y})^{2}}}$$
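The sample correlation coefficient is straightforward to compute directly (a minimal sketch with illustrative data):

```python
import math

def sample_corr(xs, ys):
    """Sample correlation coefficient rho_hat for paired data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Perfectly linear data gives rho_hat = 1 (up to floating-point error).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x + 1.0 for x in xs]
assert abs(sample_corr(xs, ys) - 1.0) < 1e-12
```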

Example 3: 

Rank Regression on X
Performing a rank regression on X is similar to the process for rank regression on Y, with the difference being that the horizontal deviations from the points to the line are minimized rather than the vertical. Again, the first task is to bring the reliability function into a linear form. This step is exactly the same as in the regression on Y analysis and all the equations apply in this case too. The derivation from the previous analysis begins on the least squares fit part, where in this case we treat x as the dependent variable and y as the independent variable. The best-fitting straight line to the data, for regression on X (see Parameter Estimation), is the straight line:


 * $$ x= \hat{a}+\hat{b}y $$

The corresponding equations for $$ \hat{a} $$ and $$ \hat{b} $$ are:


 * $$ \hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\sum\limits_{i=1}^{N}x_{i}}{N} -\hat{b}\frac{\sum\limits_{i=1}^{N}y_{i}}{N} $$

and


 * $$ \hat{b}={\frac{\sum\limits_{i=1}^{N}x_{i}y_{i}-\frac{\sum \limits_{i=1}^{N}x_{i}\sum\limits_{i=1}^{N}y_{i}}{N}}{\sum \limits_{i=1}^{N}y_{i}^{2}-\frac{\left( \sum\limits_{i=1}^{N}y_{i}\right) ^{2}}{N}}} $$

where:


 * $$ y_{i}=\ln \left\{ -\ln [1-F(t_{i})]\right\} $$

and:

 * xi = ln(ti)

The F(ti) values are again obtained from the median ranks.

Once $$ \hat{a} $$ and $$ \hat{b} $$ are obtained, solve the linear equation for y, which corresponds to:


 * $$ y=-\frac{\hat{a}}{\hat{b}}+\frac{1}{\hat{b}}x $$

Solving for the parameters from the above equations, we get:


 * $$ a=-\frac{\hat{a}}{\hat{b}}=-\beta \ln (\eta )$$

and


 * $$ b=\frac{1}{\hat{b}}=\beta$$

The correlation coefficient is evaluated as before.
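For comparison, a sketch of the regression-on-X variant, which fits x = â + b̂y and then recovers β = 1/b̂ and ln η = â (same illustrative data and median-rank approximation as before):

```python
import math

def weibull_rrx(times):
    """Rank regression on X for the 2-parameter Weibull (complete data).

    Fits x = a_hat + b_hat * y (horizontal deviations minimized), then
    recovers beta = 1/b_hat and ln(eta) = a_hat. Benard median ranks assumed.
    """
    n = len(times)
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(1.0 - (j - 0.3) / (n + 0.4))) for j in range(1, n + 1)]
    sx, sy = sum(xs), sum(ys)
    b_hat = (sum(x * y for x, y in zip(xs, ys)) - sx * sy / n) / (
        sum(y * y for y in ys) - sy ** 2 / n)
    a_hat = sx / n - b_hat * sy / n
    beta = 1.0 / b_hat      # b = 1/b_hat
    eta = math.exp(a_hat)   # -a_hat/b_hat = -beta*ln(eta)  =>  ln(eta) = a_hat
    return beta, eta

beta_hat, eta_hat = weibull_rrx([16.0, 34.0, 53.0, 75.0, 93.0, 120.0])
```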

Example 4: 

Three-Parameter Weibull Regression
When the MR versus tj points plotted on the Weibull probability paper do not fall on a satisfactory straight line but instead fall on a curve, then a location parameter, γ, might exist which may straighten out these points. (Note that other shapes, particularly S shapes, might suggest the existence of more than one population. In these cases, the multiple population, mixed Weibull distribution, may be more appropriate. The Mixed Weibull Distribution presents the mixed Weibull distribution.) The goal in this case is to fit a curve, instead of a line, through the data points using nonlinear regression. The Gauss-Newton method can be used to solve for the parameters, β, η and γ, by performing a Taylor series expansion on F(ti; β, η, γ). The nonlinear model is then approximated with linear terms and ordinary least squares is employed to estimate the parameters. This procedure is iterated until a satisfactory solution is reached.

Weibull++ calculates the value of γ by utilizing an optimized Nelder-Mead algorithm, adjusts the points by this value of γ such that they fall on a straight line, and then plots both the adjusted and the original unadjusted points. To draw a curve through the original unadjusted points, if so desired, select Weibull 3P Line Unadjusted for Gamma from the Show Plot Line submenu under the Plot Options menu. The returned parameter estimates are the same whether RRX or RRY is selected. To display the unadjusted data points and line along with the adjusted data points and line, select Show/Hide Items under the Plot Options menu and include the unadjusted data points and line as follows:





The results and the associated graph for the previous example using the three-parameter Weibull case are shown next:



Maximum Likelihood Estimation
As outlined in Parameter Estimation, maximum likelihood estimation works by developing a likelihood function based on the available data and finding the values of the parameter estimates that maximize the likelihood function. This can be achieved by using iterative methods to determine the parameter estimate values that maximize the likelihood function, but this can be rather difficult and time-consuming, particularly when dealing with the three-parameter distribution. Another method of finding the parameter estimates involves taking the partial derivatives of the likelihood function with respect to the parameters, setting the resulting equations equal to zero and solving simultaneously to determine the values of the parameter estimates. (Note that MLE asymptotic properties do not hold when estimating γ using MLE [27].) The log-likelihood functions and associated partial derivatives used to determine maximum likelihood estimates for the Weibull distribution are covered in Appendix D.
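For complete data, the two likelihood equations can be reduced to a single equation in β, which a simple bisection solves; a sketch with illustrative data (this is the standard reduction, not Weibull++'s internal implementation):

```python
import math

def weibull_mle(times):
    """MLE of the 2-parameter Weibull for complete data.

    Setting the partial derivatives of the log-likelihood to zero and
    eliminating eta leaves one equation in beta,
        sum(t^b ln t)/sum(t^b) - 1/b - mean(ln t) = 0,
    solved here by bisection; eta then follows in closed form.
    """
    n = len(times)
    mean_log = sum(math.log(t) for t in times) / n

    def g(b):
        s0 = sum(t ** b for t in times)
        s1 = sum(t ** b * math.log(t) for t in times)
        return s1 / s0 - 1.0 / b - mean_log

    lo, hi = 1e-3, 100.0   # g is increasing: negative at lo, positive at hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    beta = 0.5 * (lo + hi)
    eta = (sum(t ** beta for t in times) / n) ** (1.0 / beta)
    return beta, eta

beta_hat, eta_hat = weibull_mle([16.0, 34.0, 53.0, 75.0, 93.0, 120.0])
```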

Example 5:

Fisher Matrix Confidence Bounds
One of the methods used by the application in estimating the different types of confidence bounds for Weibull data, the Fisher matrix method, is presented in this section. The complete derivations were presented in detail (for a general function) in Confidence Bounds.

Bounds on the Parameters
One of the properties of maximum likelihood estimators is that they are asymptotically normal, meaning that for large samples they are normally distributed. Additionally, since both the shape parameter estimate, $$ \hat{\beta } $$, and the scale parameter estimate, $$ \hat{\eta } $$, must be positive, $$ \ln \hat{\beta} $$ and $$ \ln \hat{\eta} $$ are treated as being normally distributed as well. The lower and upper bounds on the parameters are estimated from [30]:


 * $$ \beta _{U} =\hat{\beta }\cdot e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \beta })}}{\hat{\beta }}}\text{ (upper bound)} $$


 * $$ \beta _{L} =\frac{\hat{\beta }}{e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \beta })}}{\hat{\beta }}}} \text{ (lower bound)} $$

and:


 * $$ \eta _{U} =\hat{\eta }\cdot e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \eta })}}{\hat{\eta }}}\text{ (upper bound)} $$


 * $$ \eta _{L} =\frac{\hat{\eta }}{e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \eta })}}{\hat{\eta }}}}\text{ (lower bound)} $$

where $$ K_{\alpha}$$ is defined by:


 * $$ \alpha =\frac{1}{\sqrt{2\pi }}\int_{K_{\alpha }}^{\infty }e^{-\frac{t^{2}}{2} }dt=1-\Phi (K_{\alpha }) $$

If δ is the confidence level, then $$ \alpha =\frac{1-\delta }{2} $$ for the two-sided bounds and α = 1 − δ for the one-sided bounds. The variances and covariances of $$ \hat{\beta } $$ and $$ \hat{\eta } $$ are estimated from the inverse local Fisher matrix, as follows:
 * $$ \left( \begin{array}{cc} \hat{Var}\left( \hat{\beta }\right) & \hat{Cov}\left( \hat{\beta },\hat{\eta }\right) \\ \hat{Cov}\left( \hat{\beta },\hat{\eta }\right) & \hat{Var}\left( \hat{\eta }\right) \end{array} \right) =\left( \begin{array}{cc} -\frac{\partial ^{2}\Lambda }{\partial \beta ^{2}} & -\frac{\partial ^{2}\Lambda }{\partial \beta \partial \eta } \\ -\frac{\partial ^{2}\Lambda }{\partial \beta \partial \eta } & -\frac{\partial ^{2}\Lambda }{\partial \eta ^{2}} \end{array} \right) _{\beta =\hat{\beta },\text{ }\eta =\hat{\eta }}^{-1} $$
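Given point estimates and variances, the parameter bounds above reduce to a few lines; the variance values passed in below are placeholders standing in for the inverse local Fisher matrix entries:

```python
import math
from statistics import NormalDist

def weibull_param_bounds(beta_hat, eta_hat, var_beta, var_eta, delta=0.90):
    """Two-sided Fisher-matrix bounds on beta and eta at confidence level delta.

    var_beta and var_eta would come from the inverse local Fisher matrix;
    the numbers used in the call below are placeholders for illustration.
    """
    alpha = (1.0 - delta) / 2.0
    k_a = NormalDist().inv_cdf(1.0 - alpha)   # K_alpha: alpha = 1 - Phi(K_alpha)
    beta_u = beta_hat * math.exp(k_a * math.sqrt(var_beta) / beta_hat)
    beta_l = beta_hat / math.exp(k_a * math.sqrt(var_beta) / beta_hat)
    eta_u = eta_hat * math.exp(k_a * math.sqrt(var_eta) / eta_hat)
    eta_l = eta_hat / math.exp(k_a * math.sqrt(var_eta) / eta_hat)
    return (beta_l, beta_u), (eta_l, eta_u)

(beta_l, beta_u), (eta_l, eta_u) = weibull_param_bounds(1.8, 75.0, 0.25, 90.0)
```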

Fisher Matrix Confidence Bounds and Regression Analysis

Note that the variance and covariance of the parameters are obtained from the inverse Fisher information matrix as described in this section. The local Fisher information matrix is obtained from the second partials of the likelihood function, by substituting the solved parameter estimates into the particular functions. This method is based on maximum likelihood theory and is derived from the fact that the parameter estimates were computed using maximum likelihood estimation methods. When one uses least squares or regression analysis for the parameter estimates, this methodology is theoretically then not applicable. However, if one assumes that the variance and covariance of the parameters will be similar ( One also assumes similar properties for both estimators.) regardless of the underlying solution method, then the above methodology can also be used in regression analysis.

The Fisher matrix is one of the methodologies that Weibull++ uses for both MLE and regression analysis. Specifically, Weibull++ uses the likelihood function and computes the local Fisher information matrix based on the estimates of the parameters and the current data. This gives consistent confidence bounds regardless of the underlying method of solution, (i.e., MLE or regression). In addition, Weibull++ checks this assumption and proceeds with it if it considers it to be acceptable. In some instances, Weibull++ will prompt you with an "Unable to Compute Confidence Bounds" message when using regression analysis. This is an indication that these assumptions were violated.

Bounds on Reliability
The bounds on reliability can easily be derived by first looking at the general extreme value distribution (EVD). Its reliability function is given by:


 * $$ R(t)=e^{-e^{\left( \frac{t-p_{1}}{p_{2}}\right) }} $$

By replacing t with ln t and setting $$ p_{1}=\ln({\eta})$$ and $$ p_{2}=\frac{1}{ \beta } $$, the above equation becomes the Weibull reliability function:


 * $$ R(t)=e^{-e^{\beta \left( \ln t-\ln \eta \right) }}=e^{-e^{\ln \left( \frac{t }{\eta }\right) ^{\beta }}}=e^{-\left( \frac{t}{\eta }\right) ^{\beta }} $$

with:


 * $$ R(T)=e^{-e^{\beta \left( \ln t-\ln \eta \right) }}$$

set:


 * $$ u=\beta \left( \ln t-\ln \eta \right). $$

The reliability function now becomes:


 * $$ R(T)=e^{-e^{u}} $$

The next step is to find the upper and lower bounds on u. Using the equations derived in Confidence Bounds, the bounds on u are then estimated from [30]:


 * $$ u_{U} =\hat{u}+K_{\alpha }\sqrt{Var(\hat{u})} $$


 * $$ u_{L} =\hat{u}-K_{\alpha }\sqrt{Var(\hat{u})} $$

where:


 * $$ Var(\hat{u}) =\left( \frac{\partial u}{\partial \beta }\right) ^{2}Var( \hat{\beta })+\left( \frac{\partial u}{\partial \eta }\right) ^{2}Var( \hat{\eta }) +2\left( \frac{\partial u}{\partial \beta }\right) \left( \frac{\partial u }{\partial \eta }\right) Cov\left( \hat{\beta },\hat{\eta }\right) $$

or:


 * $$ Var(\hat{u}) =\frac{\hat{u}^{2}}{\hat{\beta }^{2}}Var(\hat{ \beta })+\frac{\hat{\beta }^{2}}{\hat{\eta }^{2}}Var(\hat{\eta }) -\left( \frac{2u}{\hat{\eta }}\right) Cov\left( \hat{\beta }, \hat{\eta }\right). $$

The upper and lower bounds on reliability are:


 * $$ R_{U} =e^{-e^{u_{L}}}\text{ (upper bound)}$$


 * $$ R_{L} =e^{-e^{u_{U}}}\text{ (lower bound)}$$
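Putting the reliability-bound steps together (the variance/covariance inputs are placeholders; note that R_U is computed from u_L and R_L from u_U):

```python
import math
from statistics import NormalDist

def reliability_bounds(t, beta, eta, var_beta, var_eta, cov_be, delta=0.90):
    """Two-sided Fisher-matrix bounds on R(t) via u = beta*(ln t - ln eta).

    The variance/covariance arguments stand in for the entries of the
    inverse local Fisher matrix; the numbers below are placeholders.
    """
    u = beta * (math.log(t) - math.log(eta))
    var_u = (u / beta) ** 2 * var_beta + (beta / eta) ** 2 * var_eta \
        - (2.0 * u / eta) * cov_be
    k_a = NormalDist().inv_cdf(1.0 - (1.0 - delta) / 2.0)
    u_upper = u + k_a * math.sqrt(var_u)
    u_lower = u - k_a * math.sqrt(var_u)
    # Note the reversal: the reliability upper bound uses u_L, and vice versa.
    r_upper = math.exp(-math.exp(u_lower))
    r_lower = math.exp(-math.exp(u_upper))
    return r_lower, r_upper

r_l, r_u = reliability_bounds(50.0, 1.8, 75.0, 0.25, 90.0, 1.0)
```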

Other Weibull Forms

Weibull++ makes the following assumptions/substitutions when using the three-parameter or one-parameter forms:


 * For the three-parameter case, substitute $$ \ln (t-\hat{\gamma }) $$ (and by definition γ < t) instead of ln t. (Note that this is an approximation, since it eliminates the third parameter and assumes that $$ Var( \hat{\gamma })=0 $$.)
 * For the one-parameter, $$ Var(\hat{\beta })=0, $$ thus:


 * $$ Var(\hat{u})=\left( \frac{\partial u}{\partial \eta }\right) ^{2}Var( \hat{\eta })=\left( \frac{\hat{\beta }}{\hat{\eta }}\right) ^{2}Var(\hat{\eta }) $$

Also note that the time axis (x-axis) in the three-parameter Weibull plot in Weibull++ is not t but t − γ. This means that one must be cautious when obtaining confidence bounds from the plot. If one desires to estimate the confidence bounds on reliability for a given time t0 from the adjusted plotted line, then these bounds should be obtained for a t0 − γ entry on the time axis.

Bounds on Time
The bounds around the time estimate or reliable life estimate, for a given Weibull percentile (unreliability), are estimated by first solving the reliability equation with respect to time, as follows [24, 30]:


 * $$ \ln R =-\left( \frac{t}{\eta }\right) ^{\beta } $$


 * $$ \ln (-\ln R) =\beta \ln \left( \frac{t}{\eta }\right) $$


 * $$ \ln (-\ln R) =\beta (\ln t-\ln \eta )$$

or:


 * $$ u=\frac{1}{\beta }\ln (-\ln R)+\ln \eta $$

where u = lnt.

The upper and lower bounds on u are estimated from:


 * $$ u_{U} =\hat{u}+K_{\alpha }\sqrt{Var(\hat{u})} $$


 * $$ u_{L} =\hat{u}-K_{\alpha }\sqrt{Var(\hat{u})} $$

where:


 * $$ Var(\hat{u})=\left( \frac{\partial u}{\partial \beta }\right) ^{2}Var( \hat{\beta })+\left( \frac{\partial u}{\partial \eta }\right) ^{2}Var( \hat{\eta })+2\left( \frac{\partial u}{\partial \beta }\right) \left( \frac{\partial u}{\partial \eta }\right) Cov\left( \hat{\beta },\hat{ \eta }\right) $$

or:
 * $$ Var(\hat{u}) =\frac{1}{\hat{\beta }^{4}}\left[ \ln (-\ln R)\right] ^{2}Var(\hat{\beta })+\frac{1}{\hat{\eta }^{2}}Var(\hat{\eta })+2\left( -\frac{1}{\hat{\beta }^{2}}\right) \left( \frac{\ln (-\ln R)}{ \hat{\eta }}\right) Cov\left( \hat{\beta },\hat{\eta }\right) $$

The upper and lower bounds are then found by:


 * $$ T_{U} =e^{u_{U}}\text{ (upper bound)} $$


 * $$ T_{L} =e^{u_{L}}\text{ (lower bound)} $$
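The time bounds follow the same pattern (placeholder variance/covariance inputs again):

```python
import math
from statistics import NormalDist

def time_bounds(R, beta, eta, var_beta, var_eta, cov_be, delta=0.90):
    """Two-sided Fisher-matrix bounds on the time at reliability R,
    via u = (1/beta)*ln(-ln R) + ln(eta).

    The variance/covariance arguments are placeholders for the entries of
    the inverse local Fisher matrix.
    """
    lnlr = math.log(-math.log(R))
    u = lnlr / beta + math.log(eta)
    var_u = (lnlr ** 2 / beta ** 4) * var_beta + var_eta / eta ** 2 \
        + 2.0 * (-1.0 / beta ** 2) * (lnlr / eta) * cov_be
    k_a = NormalDist().inv_cdf(1.0 - (1.0 - delta) / 2.0)
    t_upper = math.exp(u + k_a * math.sqrt(var_u))
    t_lower = math.exp(u - k_a * math.sqrt(var_u))
    return t_lower, t_upper

t_l, t_u = time_bounds(0.9, 1.8, 75.0, 0.25, 90.0, 1.0)
```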

Likelihood Ratio Confidence Bounds
As covered in Confidence Bounds, the likelihood confidence bounds are calculated by finding values for θ1 and θ2 that satisfy:


 * $$ -2\cdot \text{ln}\left( \frac{L(\theta _{1},\theta _{2})}{L(\hat{\theta }_{1}, \hat{\theta }_{2})}\right) =\chi _{\alpha ;1}^{2} $$

This equation can be rewritten as:


 * $$ L(\theta _{1},\theta _{2})=L(\hat{\theta }_{1},\hat{\theta } _{2})\cdot e^{\frac{-\chi _{\alpha ;1}^{2}}{2}} $$

For complete data, the likelihood function for the Weibull distribution is given by:
 * $$ L(\beta ,\eta )=\prod_{i=1}^{N}f(x_{i};\beta ,\eta )=\prod_{i=1}^{N}\frac{ \beta }{\eta }\cdot \left( \frac{x_{i}}{\eta }\right) ^{\beta -1}\cdot e^{-\left( \frac{x_{i}}{\eta }\right) ^{\beta }} $$

For a given value of α, values for β and η can be found which represent the maximum and minimum values that satisfy the above equation. These represent the confidence bounds for the parameters at a confidence level δ, where α = δ for two-sided bounds and α = 2δ − 1 for one-sided.
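A brute-force sketch of the likelihood-ratio bounds on β for complete data: scan a (β, η) grid and keep the points whose likelihood stays above the contour cutoff. The data and grid ranges are illustrative; χ²(0.90; 1) ≈ 2.706 is the standard chi-squared quantile:

```python
import math

times = [16.0, 34.0, 53.0, 75.0, 93.0, 120.0]  # hypothetical complete data

def log_lik(beta, eta):
    # ln L for the 2-parameter Weibull, complete data.
    return sum(
        math.log(beta / eta) + (beta - 1.0) * math.log(t / eta) - (t / eta) ** beta
        for t in times
    )

grid_b = [0.5 + 0.02 * i for i in range(175)]   # beta grid: 0.5 .. 3.98
grid_e = [40.0 + 0.5 * j for j in range(160)]   # eta grid: 40 .. 119.5

# Coarse-grid MLE (adequate for a sketch).
b_hat, e_hat = max(((b, e) for b in grid_b for e in grid_e),
                   key=lambda p: log_lik(*p))

# Points on or inside the 90% two-sided likelihood-ratio contour satisfy
# ln L >= ln L_hat - chi2(0.90; 1)/2, with chi2(0.90; 1) ~= 2.706.
cutoff = log_lik(b_hat, e_hat) - 2.706 / 2.0
betas_in = [b for b in grid_b for e in grid_e if log_lik(b, e) >= cutoff]
beta_low, beta_high = min(betas_in), max(betas_in)
```

A real implementation would solve for the contour crossings directly rather than scanning a grid; the sketch only shows the acceptance criterion.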

Similarly, the bounds on time and reliability can be found by substituting the Weibull reliability equation into the likelihood function so that it is in terms of β and time or reliability, as discussed in Confidence Bounds. The likelihood ratio equation used to solve for bounds on time (Type 1) is:


 * $$ L(\beta ,t)=\prod_{i=1}^{N}\frac{\beta }{\left( \frac{t}{(-\text{ln}(R))^{ \frac{1}{\beta }}}\right) }\cdot \left( \frac{x_{i}}{\left( \frac{t}{(-\text{ ln}(R))^{\frac{1}{\beta }}}\right) }\right) ^{\beta -1}\cdot \text{exp}\left[ -\left( \frac{x_{i}}{\left( \frac{t}{(-\text{ln}(R))^{\frac{1}{\beta }}} \right) }\right) ^{\beta }\right] $$

The likelihood ratio equation used to solve for bounds on reliability (Type 2) is:


 * $$ L(\beta ,R)=\prod_{i=1}^{N}\frac{\beta }{\left( \frac{t}{(-\text{ln}(R))^{ \frac{1}{\beta }}}\right) }\cdot \left( \frac{x_{i}}{\left( \frac{t}{(-\text{ ln}(R))^{\frac{1}{\beta }}}\right) }\right) ^{\beta -1}\cdot \text{exp}\left[ -\left( \frac{x_{i}}{\left( \frac{t}{(-\text{ln}(R))^{\frac{1}{\beta }}} \right) }\right) ^{\beta }\right] $$

Bayesian Confidence Bounds

Bounds on Parameters
Bayesian Bounds use non-informative prior distributions for both parameters. From Confidence Bounds, we know that if the prior distribution of η and β are independent, the posterior joint distribution of η and β can be written as:


 * $$ f(\eta ,\beta |Data)= \dfrac{L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )}{\int_{0}^{\infty }\int_{0}^{\infty }L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )d\eta d\beta } $$

The marginal distribution of η is:


 * $$ f(\eta |Data) =\int_{0}^{\infty }f(\eta ,\beta |Data)d\beta = \dfrac{\int_{0}^{\infty }L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )d\eta d\beta } $$

where:

 * $$ \varphi (\beta )=\frac{1}{\beta } $$ is the non-informative prior of β.
 * $$ \varphi (\eta )=\frac{1}{\eta } $$ is the non-informative prior of η.

Using these non-informative prior distributions, $$f(\eta|Data)$$ can be rewritten as:


 * $$ f(\eta |Data)=\dfrac{\int_{0}^{\infty }L(Data|\eta ,\beta )\frac{1}{\beta } \frac{1}{\eta }d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(Data|\eta ,\beta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta } $$

The one-sided upper bound of η is:


 * $$ CL=P(\eta \leq \eta _{U})=\int_{0}^{\eta _{U}}f(\eta |Data)d\eta $$

The one-sided lower bound of η is:


 * $$ 1-CL=P(\eta \leq \eta _{L})=\int_{0}^{\eta _{L}}f(\eta |Data)d\eta $$

The two-sided bounds of η are:


 * $$ CL=P(\eta _{L}\leq \eta \leq \eta _{U})=\int_{\eta _{L}}^{\eta _{U}}f(\eta |Data)d\eta $$

The same method is used to obtain the bounds of β.
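These posterior integrals can be approximated with simple grid quadrature; a minimal sketch (the data set, grid ranges, and confidence level are illustrative assumptions):

```python
import math

times = [30.0, 55.0, 80.0]  # hypothetical complete data

def lik(beta, eta):
    # Weibull likelihood for complete data.
    return math.prod(
        (beta / eta) * (t / eta) ** (beta - 1.0) * math.exp(-((t / eta) ** beta))
        for t in times
    )

def kernel(beta, eta):
    # Posterior kernel with the non-informative priors 1/beta and 1/eta.
    return lik(beta, eta) / (beta * eta)

betas = [0.05 + 0.05 * i for i in range(160)]   # beta grid: 0.05 .. 8.0
etas = [5.0 + 1.0 * j for j in range(300)]      # eta grid: 5 .. 304
d_beta, d_eta = 0.05, 1.0

# Marginal posterior of eta: integrate the kernel over beta for each eta.
marginal = [sum(kernel(b, e) for b in betas) * d_beta for e in etas]
norm = sum(marginal) * d_eta

# One-sided upper bound: smallest eta_U with P(eta <= eta_U | Data) >= CL.
CL, cum, eta_u = 0.95, 0.0, etas[-1]
for e, m in zip(etas, marginal):
    cum += m * d_eta
    if cum / norm >= CL:
        eta_u = e
        break
```

A real implementation would use adaptive quadrature and wider grids; the point is only the structure: normalize the kernel, accumulate the marginal of η, and stop at the target CL.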

Bounds on Reliability
The one-sided upper bound on reliability is given by:


 * $$ CL=\Pr (R\leq R_{U})=\Pr (\eta \leq T\exp (-\frac{\ln (-\ln R_{U})}{\beta })) $$

From the posterior distribution of η, we have:


 * $$ CL=\dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{T\exp (-\dfrac{\ln (-\ln R_{U})}{\beta })}L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta } $$

The above equation is solved numerically for RU. The same method can be used to calculate the one-sided lower bounds and two-sided bounds on reliability.

Bounds on Time
From Confidence Bounds, we know that:


 * $$ CL=\Pr (T\leq T_{U})=\Pr (\eta \leq T_{U}\exp (-\frac{\ln (-\ln R)}{\beta })) $$

From the posterior distribution of η, we have:


 * $$ CL=\dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{T_{U}\exp (-\dfrac{ \ln (-\ln R)}{\beta })}L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta } $$

The above equation is solved numerically for TU. The same method can be applied to calculate one-sided lower bounds and two-sided bounds on time.

Bayesian-Weibull Analysis
In this section, the Bayesian methods are presented for the 2-parameter Weibull distribution. Bayesian concepts were introduced in Parameter Estimation. This model considers prior knowledge on the shape parameter, β, of the Weibull distribution when it is chosen to be fitted to a given set of data. There are many practical applications for this model, particularly when dealing with small sample sizes where some prior knowledge of the shape parameter is available. For example, when a test is performed, there is often a good understanding about the behavior of the failure mode under investigation, primarily through historical data. At the same time, most reliability tests are performed on a limited number of samples. Under these conditions, it would be very useful to use this prior knowledge with the goal of making more accurate predictions. A common approach for such scenarios is to use the 1-parameter Weibull distribution, but this approach is too deterministic, too absolute you may say (and you would be right).

The Weibull-Bayesian model in Weibull++ (which is actually a true "WeiBayes" model, unlike the 1-parameter Weibull that is commonly referred to as such) offers an alternative to the 1-parameter Weibull by including the variation and uncertainty that might have been observed in the past on the shape parameter. Applying Bayes's rule on the 2-parameter Weibull distribution and assuming the prior distributions of β and η are independent, we obtain the following posterior pdf:


 * $$ f(\beta ,\eta |Data)=\dfrac{L(\beta ,\eta )\varphi (\beta )\varphi (\eta )}{ \int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta } $$

In this model, η is assumed to follow a noninformative prior distribution with the density function $$ \varphi (\eta )=\dfrac{1}{\eta } $$. This is called the Jeffreys prior, and is obtained by performing a logarithmic transformation on η. Specifically, since η is always positive, we can assume that ln(η) follows a uniform distribution, U(−∞, +∞). Applying Jeffreys' rule [9], which says that "in general, an approximate non-informative prior is taken proportional to the square root of Fisher's information," yields $$ \varphi (\eta )=\dfrac{1}{\eta }. $$

The prior distribution of β, denoted as $$ \varphi (\beta ) $$, can be selected from the following distributions: normal, lognormal, exponential and uniform. The procedure of performing a Weibull-Bayesian analysis is as follows:


 * Collect the times-to-failure data.
 * Specify a prior distribution for β (the prior for η is assumed to be 1/ η).
 * Obtain the posterior from the above equation.

In other words, a distribution (the posterior ) is obtained, rather than a point estimate as in classical statistics (i.e., as in the parameter estimation methods described previously in this chapter). Therefore, if a point estimate needs to be reported, a point of the posterior needs to be calculated. Typical points of the posterior distribution used are the mean (expected value) or median. In Weibull++, both options are available and can be chosen from the Analysis page, under the Results As area, as shown next.



The expected value of β is obtained by:
 * $$ E(\beta )=\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }\beta \cdot f(\beta ,\eta |Data)d\beta d\eta $$

Similarly, the expected value of η is obtained by:
 * $$ E(\eta )=\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }\eta \cdot f(\beta ,\eta |Data)d\beta d\eta $$
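The double integrals for the expected values can be approximated on a grid as well; here with an illustrative lognormal prior on β (the data, grid ranges, and prior parameters μ and σ are assumptions for the sketch):

```python
import math

times = [30.0, 55.0, 80.0]  # hypothetical complete data

def lik(beta, eta):
    # Weibull likelihood for complete data.
    return math.prod(
        (beta / eta) * (t / eta) ** (beta - 1.0) * math.exp(-((t / eta) ** beta))
        for t in times
    )

def prior_beta(beta, mu=math.log(2.0), sigma=0.3):
    # Illustrative lognormal prior on beta (median 2); mu and sigma are
    # assumed values standing in for knowledge from past tests.
    return math.exp(-((math.log(beta) - mu) ** 2) / (2.0 * sigma ** 2)) / (
        beta * sigma * math.sqrt(2.0 * math.pi))

def kernel(beta, eta):
    return lik(beta, eta) * prior_beta(beta) / eta   # phi(eta) = 1/eta

betas = [0.05 + 0.05 * i for i in range(160)]   # beta grid: 0.05 .. 8.0
etas = [5.0 + 1.0 * j for j in range(300)]      # eta grid: 5 .. 304
cell = 0.05 * 1.0                               # grid cell area

norm = sum(kernel(b, e) for b in betas for e in etas) * cell
e_beta = sum(b * kernel(b, e) for b in betas for e in etas) * cell / norm
```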

The median points are obtained by solving the following equations for $$ \breve{\beta} $$ and $$ \breve{\eta} $$ respectively:


 * $$ \int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\breve{\beta}}f(\beta ,\eta |Data)d\beta d\eta =0.5 $$

and


 * $$ \int\nolimits_{0}^{\breve{\eta}}\int\nolimits_{0}^{\infty }f(\beta ,\eta |Data)d\beta d\eta =0.5 $$

Of course, other points of the posterior distribution can be calculated as well. For example, one may want to calculate the 10th percentile of the joint posterior distribution (w.r.t. one of the parameters). The procedure for obtaining other points of the posterior distribution is similar to the one for obtaining the median values, where instead of 0.5 the percentage of interest is given. This procedure actually provides the confidence bounds on the parameters, which in the Bayesian framework are called "credible bounds." However, since the engineering interpretation is the same, and to avoid confusion, we refer to them as confidence bounds in this reference and in Weibull++.

Posterior Distributions for Functions of Parameters
As explained in the Parameter Estimation chapter, in Bayesian analysis all the functions of the parameters are distributed. In other words, a posterior distribution is obtained for functions such as reliability and failure rate, instead of a point estimate as in classical statistics. Therefore, in order to obtain a point estimate for these functions, a point on the posterior distribution needs to be calculated. Again, the expected value (mean) or median value is used.

$$pdf$$ of the Times-to-Failure

The posterior distribution of the failure time is given by:


 * $$ f(T|Data)=\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }f(T,\beta ,\eta )f(\beta ,\eta |Data)d\eta d\beta $$

where:


 * $$ f(T,\beta ,\eta )=\dfrac{\beta }{\eta }\left( \dfrac{T}{\eta }\right) ^{\beta -1}e^{-\left( \dfrac{T}{\eta }\right) ^{\beta }} $$

For the $$pdf$$ of the times-to-failure, only the expected value is calculated and reported in Weibull++.

Reliability

In order to calculate the median value of the reliability function, we first need to obtain the posterior distribution of the reliability. Since R(T) is a function of β, the density functions of β and R(T) have the following relationship:


 * $$ \begin{align} f(R|Data,T)dR = & f(\beta |Data)d\beta \\ = & \left( \int\nolimits_{0}^{\infty }f(\beta ,\eta |Data)d\eta \right) d\beta \\ = & \dfrac{\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }d\beta \end{align}$$

The median value of the reliability is obtained by solving the following equation w.r.t. $$ \breve{R}: $$


 * $$ \int\nolimits_{0}^{\breve{R}}f(R|Data,T)dR=0.5 $$

The expected value of the reliability at time T is given by:


 * $$ R(T|Data)=\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }R(T,\beta ,\eta )f(\beta ,\eta |Data)d\eta d\beta $$

where:


 * $$ R(T,\beta ,\eta )=e^{-\left( \dfrac{T}{\eta }\right) ^{^{\beta }}} $$

Failure Rate

The failure rate at time T is given by:


 * $$ \lambda (T|Data)=\dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }\lambda (T,\beta ,\eta )L(\beta ,\eta )\varphi (\eta )\varphi (\beta )d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\eta )\varphi (\beta )d\eta d\beta } $$

where:


 * $$ \lambda (T,\beta ,\eta )=\dfrac{\beta }{\eta }\left( \dfrac{T}{\eta }\right) ^{\beta -1} $$

Note on Calculated Results
As mentioned above, in order to obtain point estimates for the parameters or functions of the parameters in Bayesian analysis, the median or mean values of the different posterior $$pdf$$s are calculated. It is important to note that the median value is preferable and is the default in Weibull++. This is because the median value always corresponds to the 50th percentile of the distribution. On the other hand, the mean is not a fixed point on the distribution, which could cause issues, especially when comparing results across different data sets.

Bounds on Reliability
The confidence bounds calculation under the Weibull-Bayesian analysis is very similar to the Bayesian Confidence Bounds method described in the previous section, with the exception that in the case of the Weibull-Bayesian analysis the specified prior of β is considered instead of a non-informative prior. The Bayesian one-sided upper bound estimate for R(T) is given by:


 * $$ \int\nolimits_{0}^{R_{U}(T)}f(R|Data,t)dR=CL $$

Using the posterior distribution, the following is obtained:


 * $$ \dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{t\exp (-\dfrac{\ln (-\ln R_{U})}{\beta })}L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }=CL $$

The above equation can be solved for RU(t). The Bayesian one-sided lower bound estimate for $$ R(t) $$ is given by:


 * $$ \int\nolimits_{0}^{R_{L}(t)}f(R|Data,t)dR=1-CL $$

Using the posterior distribution, the following is obtained:


 * $$ \dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{t\exp (-\dfrac{\ln (-\ln R_{L})}{\beta })}L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }=1-CL $$

The above equation can be solved for RL(t). The Bayesian two-sided bounds estimate for R(t) is given by:


 * $$ \int\nolimits_{R_{L}(t)}^{R_{U}(t)}f(R|Data,t)dR=CL $$

which is equivalent to:


 * $$ \int\nolimits_{0}^{R_{U}(t)}f(R|Data,t)dR=(1+CL)/2 $$

and


 * $$ \int\nolimits_{0}^{R_{L}(t)}f(R|Data,t)dR=(1-CL)/2 $$

Using the same method for one-sided bounds, RU(t) and RL(t) can be computed.

Bounds on Time
Following the same procedure described for bounds on reliability, the bounds on time can be calculated, given a particular reliability value. The Bayesian one-sided upper bound estimate for T(R) is given by:


 * $$ \int\nolimits_{0}^{T_{U}(R)}f(T|Data,R)dT=CL $$

Using the posterior distribution, the following is obtained:


 * $$ \dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{T_{U}\exp (-\dfrac{\ln (-\ln R)}{\beta })}L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }=CL $$

The above equation can be solved for TU(R). The Bayesian one-sided lower bound estimate for T(R) is given by:


 * $$ \int\nolimits_{0}^{T_{L}(R)}f(T|Data,R)dT=1-CL $$

or:


 * $$ \dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{T_{L}\exp (\dfrac{-\ln (-\ln R)}{\beta })}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }=CL $$

The above equation can be solved for TL(R). The Bayesian two-sided bounds estimate for T(R) is:


 * $$ \int\nolimits_{T_{L}(R)}^{T_{U}(R)}f(T|Data,R)dT=CL $$

which is equivalent to:


 * $$ \int\nolimits_{0}^{T_{U}(R)}f(T|Data,R)dT=(1+CL)/2 $$

and:


 * $$ \int\nolimits_{0}^{T_{L}(R)}f(T|Data,R)dT=(1-CL)/2 $$

Example 6: