The Exponential Distribution

The exponential distribution is a commonly used distribution in reliability engineering. Mathematically, it is a fairly simple distribution, which often leads to its use in inappropriate situations. It is, in fact, a special case of the Weibull distribution where $$\beta =1$$. The exponential distribution is used to model the behavior of units that have a constant failure rate (or units that do not degrade with time or wear out).

The 2-Parameter Exponential Distribution
The 2-parameter exponential pdf is given by:


 * $$f(t)=\lambda {{e}^{-\lambda (t-\gamma )}},\quad f(t)\ge 0,\ \lambda >0,\ t\ge \gamma $$

where $$\gamma $$ is the location parameter. Some of the characteristics of the 2-parameter exponential distribution are [19]:
 * 1) The location parameter, $$\gamma $$, if positive, shifts the beginning of the distribution by a distance of $$\gamma $$ to the right of the origin, signifying that the chance failures start to occur only after $$\gamma $$ hours of operation, and cannot occur before.
 * 2) The scale parameter is $$\tfrac{1}{\lambda }=\bar{t}-\gamma =m-\gamma $$.
 * 3) The exponential $$pdf$$ has no shape parameter, as it has only one shape.
 * 4) The distribution starts at $$t=\gamma $$ at the level of $$f(t=\gamma )=\lambda $$ and decreases thereafter exponentially and monotonically as $$t$$ increases beyond $$\gamma $$ and is convex.
 * 5) As $$t\to \infty $$, $$f(t)\to 0$$.
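
These properties can be checked numerically. The sketch below evaluates the 2-parameter exponential pdf for hypothetical values $$\lambda =0.5$$ and $$\gamma =100$$ (both chosen only for illustration) and confirms that the density starts at $$f(\gamma )=\lambda $$ and decreases monotonically thereafter.

```python
import math

LAM = 0.5      # hypothetical failure rate, failures per hour
GAMMA = 100.0  # hypothetical location parameter, hours

def exp2_pdf(t, lam=LAM, gamma=GAMMA):
    """2-parameter exponential pdf: f(t) = lam * exp(-lam * (t - gamma)), t >= gamma."""
    if t < gamma:
        return 0.0  # chance failures cannot occur before gamma hours of operation
    return lam * math.exp(-lam * (t - gamma))

# The distribution starts at t = gamma at the level f(gamma) = lam ...
assert exp2_pdf(GAMMA) == LAM
# ... and decreases exponentially and monotonically as t increases beyond gamma.
assert exp2_pdf(110.0) > exp2_pdf(120.0) > exp2_pdf(130.0)
```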

The 1-Parameter Exponential Distribution
The 1-parameter exponential $$pdf$$ is obtained by setting $$\gamma =0$$, and is given by:


 * $$f(t)=\lambda {{e}^{-\lambda t}}=\frac{1}{m}{{e}^{-\tfrac{t}{m}}},\quad t\ge 0,\ \lambda >0,\ m>0$$

where:


 * $$\lambda \,\!$$ = constant rate, in failures per unit of measurement, (e.g., failures per hour, per cycle, etc.)


 * $$\lambda =\frac{1}{m}$$
 * $$m\,\!$$ = mean time between failures, or to failure
 * $$t\,\!$$ = operating time, life, or age, in hours, cycles, miles, actuations, etc.

This distribution requires the knowledge of only one parameter, $$\lambda $$, for its application. Some of the characteristics of the 1-parameter exponential distribution are [19]:
 * The location parameter, $$\gamma \,\!$$, is zero.
 * The scale parameter is $$\tfrac{1}{\lambda }=m$$.
 * As $$\lambda $$ is decreased in value, the distribution is stretched out to the right, and as $$\lambda $$ is increased, the distribution is pushed toward the origin.
 * This distribution has no shape parameter as it has only one shape, (i.e., the exponential, and the only parameter it has is the failure rate, $$\lambda \,\!$$).
 * The distribution starts at $$t=0\,\!$$ at the level of $$f(t=0)=\lambda \,\!$$ and decreases thereafter exponentially and monotonically as $$t$$ increases, and is convex.
 * As $$t\to \infty $$, $$f(t)\to 0$$.
 * The $$pdf$$ can be thought of as a special case of the Weibull $$pdf$$ with $$\gamma =0\,\!$$ and $$\beta =1\,\!$$.
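
The last point can be verified directly: substituting $$\beta =1$$ and $$\eta =1/\lambda $$ into the standard Weibull pdf reproduces the exponential pdf. A minimal numerical check, with an illustrative value of $$\lambda $$:

```python
import math

def weibull_pdf(t, beta, eta):
    """Weibull pdf: f(t) = (beta/eta) * (t/eta)^(beta-1) * exp(-(t/eta)^beta)."""
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

def exp_pdf(t, lam):
    """1-parameter exponential pdf: f(t) = lam * exp(-lam * t)."""
    return lam * math.exp(-lam * t)

lam = 0.02  # hypothetical failure rate, failures per hour
for t in (1.0, 10.0, 50.0, 200.0):
    # With beta = 1 and eta = 1/lam, the Weibull pdf collapses to the exponential pdf.
    assert math.isclose(weibull_pdf(t, beta=1.0, eta=1.0 / lam), exp_pdf(t, lam))
```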

Probability Plotting
Estimation of the parameters for the exponential distribution via probability plotting is very similar to the process used when dealing with the Weibull distribution. Recall, however, that the appearance of the probability plotting paper and the methods by which the parameters are estimated vary from distribution to distribution, so there will be some noticeable differences. In fact, due to the nature of the exponential $$cdf$$, the exponential probability plot is the only one with a negative slope. This is because the y-axis of the exponential probability plotting paper represents the reliability, whereas the y-axis for most of the other life distributions represents the unreliability.

This is illustrated in the process of linearizing the $$cdf$$, which is necessary to construct the exponential probability plotting paper. For the two-parameter exponential distribution the cumulative distribution function is given by:


 * $$F(t)=1-{{e}^{-\lambda (t-\gamma )}}$$

Taking the natural logarithm of both sides of the above equation yields:


 * $$\ln \left[ 1-F(t) \right]=-\lambda (t-\gamma )$$

or:


 * $$\ln [1-F(t)]=\lambda \gamma -\lambda t$$

Now, let:


 * $$y=\ln [1-F(t)]$$


 * $$a=\lambda \gamma $$

and:


 * $$b=-\lambda $$

which results in the linear equation of:


 * $$y=a+bt$$

Note that with the exponential probability plotting paper, the y-axis scale is logarithmic and the x-axis scale is linear. This means that the zero value is present only on the x-axis. For $$t=0$$, $$R=1$$ and $$F(t)=0$$. So if we were to use $$F(t)$$ for the y-axis, we would have to plot the point $$(0,0)$$. However, since the y-axis is logarithmic, there is no place to plot this on the exponential paper. Also, the failure rate, $$\lambda $$, is the negative of the slope of the line, but there is an easier way to determine the value of $$\lambda $$ from the probability plot, as will be illustrated in the following example.
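
The linearization can also be verified numerically: for any choice of $$\lambda $$ and $$\gamma $$, the transformed values $$y=\ln [1-F(t)]$$ fall exactly on the line $$y=\lambda \gamma -\lambda t$$. A short check with illustrative parameters:

```python
import math

LAM, GAMMA = 0.04, 25.0  # hypothetical parameters for illustration

def exp2_cdf(t):
    """F(t) = 1 - exp(-lam * (t - gamma))"""
    return 1.0 - math.exp(-LAM * (t - GAMMA))

# y = ln[1 - F(t)] should equal a + b*t with a = lam*gamma and b = -lam.
a, b = LAM * GAMMA, -LAM
for t in (30.0, 80.0, 150.0):
    y = math.log(1.0 - exp2_cdf(t))
    assert math.isclose(y, a + b * t)
```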

Rank Regression on Y
Performing a rank regression on Y requires that a straight line be fitted to the set of available data points such that the sum of the squares of the vertical deviations from the points to the line is minimized. The least squares parameter estimation method (regression analysis) was discussed in Parameter Estimation, and the following equations for rank regression on Y (RRY) were derived:


 * $$\hat{a}=\bar{y}-\hat{b}\bar{x}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}$$

and:


 * $$\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}$$

In our case, the equations for $${{y}_{i}}$$ and $${{x}_{i}}$$ are:


 * $${{y}_{i}}=\ln [1-F({{t}_{i}})]$$

and:


 * $${{x}_{i}}={{t}_{i}}$$

and the $$F({{t}_{i}})$$ values are estimated from the median ranks. Once $$\hat{a}$$ and $$\hat{b}$$ are obtained, then $$\hat{\lambda }$$ and $$\hat{\gamma }$$ can easily be obtained from the above equations. For the one-parameter exponential, the equations for estimating $$\hat{a}$$ and $$\hat{b}$$ become:


 * $$\begin{align} \hat{a}= & 0, \\ \hat{b}= & \frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}} \end{align}$$
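
The RRY procedure for the 2-parameter case can be sketched in a few lines of code. The sketch below estimates $$F({{t}_{i}})$$ with Benard's approximation to the median ranks, $$(i-0.3)/(N+0.4)$$, which is one common choice; the function names and the self-check data are hypothetical.

```python
import math

def median_ranks(n):
    """Benard's approximation to the median ranks (a common choice)."""
    return [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]

def rry_exponential(times):
    """Rank regression on Y for the 2-parameter exponential.

    Fits y = a + b*t with y_i = ln[1 - F(t_i)], F estimated by median ranks,
    then recovers lam = -b_hat and gamma = a_hat / lam (since a = lam*gamma).
    """
    n = len(times)
    xs = sorted(times)
    ys = [math.log(1.0 - f) for f in median_ranks(n)]
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    b_hat = (sxy - sx * sy / n) / (sxx - sx * sx / n)
    a_hat = sy / n - b_hat * sx / n
    lam = -b_hat
    gamma = a_hat / lam
    return lam, gamma

# Self-check: generate times whose true F(t_i) equals the median ranks exactly,
# so RRY should recover the hypothetical parameters to machine precision.
true_lam, true_gamma = 0.01, 40.0
ts = [true_gamma - math.log(1.0 - mr) / true_lam for mr in median_ranks(10)]
lam_hat, gamma_hat = rry_exponential(ts)
assert math.isclose(lam_hat, true_lam) and math.isclose(gamma_hat, true_gamma)
```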

The Correlation Coefficient
The estimator of $$\rho $$ is the sample correlation coefficient, $$\hat{\rho }$$, given by:


 * $$\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,({{x}_{i}}-\overline{x})({{y}_{i}}-\overline{y})}{\sqrt{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{({{x}_{i}}-\overline{x})}^{2}}\cdot \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{({{y}_{i}}-\overline{y})}^{2}}}}$$

Rank Regression on X
Similar to rank regression on Y, performing a rank regression on X requires that a straight line be fitted to a set of data points such that the sum of the squares of the horizontal deviations from the points to the line is minimized.

Again the first task is to bring our exponential $$cdf$$ function into a linear form. This step is exactly the same as in the regression on Y analysis. The deviation from the previous analysis begins at the least squares fit step, since in this case we treat $$x$$ as the dependent variable and $$y$$ as the independent variable. The best-fitting straight line to the data, for regression on X (see Parameter Estimation), is the straight line:


 * $$x=\hat{a}+\hat{b}y$$

The corresponding equations for $$\hat{a}$$ and $$\hat{b}$$ are:


 * $$\hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}$$

and:


 * $$\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}$$

where:


 * $${{y}_{i}}=\ln [1-F({{t}_{i}})]$$

and:


 * $${{x}_{i}}={{t}_{i}}$$

The values of $$F({{t}_{i}})$$ are estimated from the median ranks. Once $$\hat{a}$$ and $$\hat{b}$$ are obtained, solve for the unknown $$y$$ value, which corresponds to:


 * $$y=-\frac{\hat{a}}{\hat{b}}+\frac{1}{\hat{b}}x$$

Solving for the parameters from the above equations, we get:


 * $$a=-\frac{\hat{a}}{\hat{b}}=\lambda \gamma \Rightarrow \gamma =\hat{a}$$

and:


 * $$b=\frac{1}{\hat{b}}=-\lambda \Rightarrow \lambda =-\frac{1}{\hat{b}}$$

For the one-parameter exponential case, the equations for estimating $$\hat{a}$$ and $$\hat{b}$$ become:


 * $$\begin{align} \hat{a}= & 0 \\ \hat{b}= & \frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}} \end{align}$$

The correlation coefficient is evaluated as before.

RRX Example
2-Parameter Exponential RRX Example

Using the same data set from the RRY example above and assuming a 2-parameter exponential distribution, estimate the parameters and determine the correlation coefficient estimate, $$\hat{\rho }$$, using rank regression on X.

Solution

The table constructed for the RRY analysis applies to this example also. Using the values from this table, we get:


 * $$\begin{align} \hat{b}= & \frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{t}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{t}_{i}}\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}}}{14}}{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{14}} \\ \hat{b}= & \frac{-927.4899-(630)(-13.2315)/14}{22.1148-{{(-13.2315)}^{2}}/14} \end{align}$$

or:


 * $$\hat{b}=-34.5563$$

and:


 * $$\hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{t}_{i}}}{14}-\hat{b}\frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}}}{14}$$

or:


 * $$\hat{a}=\frac{630}{14}-(-34.5563)\frac{(-13.2315)}{14}=12.3406$$

Therefore:


 * $$\hat{\lambda }=-\frac{1}{\hat{b}}=-\frac{1}{(-34.5563)}=0.0289\text{ failures/hour}$$

and:


 * $$\hat{\gamma }=\hat{a}=12.3406$$

The correlation coefficient is found to be:


 * $$\hat{\rho }=-0.9679$$
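
The arithmetic of this example can be reproduced from the summary statistics quoted above (the sums over the 14 data points):

```python
# Recomputing the worked RRX example from the summary statistics quoted above.
N = 14
sum_t = 630.0       # sum of t_i
sum_y = -13.2315    # sum of y_i
sum_ty = -927.4899  # sum of t_i * y_i
sum_y2 = 22.1148    # sum of y_i^2

b_hat = (sum_ty - sum_t * sum_y / N) / (sum_y2 - sum_y ** 2 / N)
a_hat = sum_t / N - b_hat * sum_y / N
lam_hat = -1.0 / b_hat   # failures/hour
gamma_hat = a_hat        # hours

assert round(b_hat, 4) == -34.5563
assert round(a_hat, 4) == 12.3406
assert round(lam_hat, 4) == 0.0289
```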

Note that the results of the regression on Y analysis are not necessarily the same as those of the regression on X analysis. The only time when the two regression methods yield identical results is when the data lie perfectly on a line. If this were the case, the correlation coefficient would be $$-1$$. The negative value of the correlation coefficient is due to the fact that the slope of the exponential probability plot is negative.

This example can be repeated using Weibull++, choosing two-parameter exponential and rank regression on X (RRX) methods for analysis. The estimated parameters and the correlation coefficient using Weibull++ were found to be:


 * $$\begin{array}{*{35}{l}} \hat{\lambda }= & 0.0289\text{ failures/hour} \\ \hat{\gamma }= & 12.3395\text{ hours} \\ \hat{\rho }= & -0.9679 \\ \end{array}$$



The probability plot can be obtained simply by clicking the Plot icon.



Maximum Likelihood Estimation
As outlined in Parameter Estimation, maximum likelihood estimation works by developing a likelihood function based on the available data and finding the values of the parameter estimates that maximize the likelihood function. This can be achieved by using iterative methods to determine the parameter estimate values that maximize the likelihood function. This can be rather difficult and time-consuming, particularly when dealing with the three-parameter distribution. Another method of finding the parameter estimates involves taking the partial derivatives of the likelihood equation with respect to the parameters, setting the resulting equations equal to zero, and solving simultaneously to determine the values of the parameter estimates. The log-likelihood functions and associated partial derivatives used to determine maximum likelihood estimates for the exponential distribution are covered in Appendix D.

MLE Example
MLE for the Exponential Distribution

Using the same data set from the RRY and RRX examples above and assuming a 2-parameter exponential distribution, estimate the parameters using the MLE method.

Solution

In this example, we have complete data only. The partial derivative of the log-likelihood function, $$\Lambda ,$$ is given by:


 * $$\frac{\partial \Lambda }{\partial \lambda }=\underset{i=1}{\overset{14}{\mathop \sum }}\,\left[ \frac{1}{\lambda }-\left( {{T}_{i}}-\gamma \right) \right]=0$$

Complete descriptions of the partial derivatives can be found in Appendix D. Recall that when using the MLE method for the exponential distribution, the value of $$\gamma $$ is equal to that of the first failure time. The first failure occurred at 5 hours, thus $$\gamma =5$$ hours. Substituting the values for $$T$$ and $$\gamma $$, we get:


 * $$\frac{14}{\hat{\lambda }}=560$$

or:


 * $$\hat{\lambda }=0.025\text{ failures/hour}$$
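
This calculation is short enough to verify directly: with $$\gamma $$ fixed at the first failure time, the partial-derivative equation reduces to $$\hat{\lambda }=N/\sum ({{T}_{i}}-\gamma )$$, using the same $$\sum {{t}_{i}}=630$$ as in the regression examples:

```python
# MLE sketch for the 2-parameter exponential with complete data:
# gamma_hat is taken as the first failure time, and solving
# sum(1/lam - (T_i - gamma)) = 0 gives lam_hat = N / sum(T_i - gamma).
N = 14
sum_T = 630.0    # sum of the 14 failure times from the example data set
gamma_hat = 5.0  # first failure time, hours

lam_hat = N / (sum_T - N * gamma_hat)
assert sum_T - N * gamma_hat == 560.0
assert lam_hat == 0.025  # failures/hour
```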

Using Weibull++:



The probability plot is:



Confidence Bounds
In this section, we present the methods used in the application to estimate the different types of confidence bounds for exponentially distributed data. The complete derivations were presented in detail (for a general function) in the chapter for Confidence Bounds. At this time we should point out that exact confidence bounds for the exponential distribution have been derived, and exist in a closed form, utilizing the $${{\chi }^{2}}$$ distribution. These are described in detail in Kececioglu [20]. For most exponential data analyses, Weibull++ will use the approximate confidence bounds, provided from the Fisher information matrix or the likelihood ratio, in order to stay consistent with all of the other available distributions in the application. The $${{\chi }^{2}}$$ confidence bounds for the exponential distribution are discussed in more detail in the test design chapter.

Bounds on the Parameters
For the failure rate $$\hat{\lambda }$$ the upper ($${{\lambda }_{U}}$$) and lower ($${{\lambda }_{L}}$$) bounds are estimated by [30]:


 * $$\begin{align} {{\lambda }_{U}}= & \hat{\lambda }\cdot {{e}^{\left[ \tfrac{{{K}_{\alpha }}\sqrt{Var(\hat{\lambda })}}{\hat{\lambda }} \right]}} \\ {{\lambda }_{L}}= & \frac{\hat{\lambda }}{{{e}^{\left[ \tfrac{{{K}_{\alpha }}\sqrt{Var(\hat{\lambda })}}{\hat{\lambda }} \right]}}} \end{align}$$

where $${{K}_{\alpha }}$$ is defined by:


 * $$\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})$$

If $$\delta $$ is the confidence level, then $$\alpha =\tfrac{1-\delta }{2}$$ for the two-sided bounds, and $$\alpha =1-\delta $$ for the one-sided bounds. The variance of $$\hat{\lambda },$$ $$Var(\hat{\lambda }),$$ is estimated from the Fisher matrix, as follows:


 * $$Var(\hat{\lambda })={{\left( -\frac{{{\partial }^{2}}\Lambda }{\partial {{\lambda }^{2}}} \right)}^{-1}}$$

where $$\Lambda $$ is the log-likelihood function of the exponential distribution, described in Appendix D.

Note that no true MLE solution exists for the case of the two-parameter exponential distribution. The mathematics simply break down while trying to simultaneously solve the partial derivative equations for both the $$\gamma $$ and $$\lambda $$ parameters, resulting in unrealistic conditions. The way around this conundrum involves setting $$\gamma ={{t}_{1}},$$ or the first time-to-failure, and calculating $$\lambda $$ in the regular fashion for this methodology. Weibull++ treats $$\gamma $$ as a constant when computing bounds, (i.e., $$Var(\hat{\gamma })=0$$). (See the discussion in Appendix D for more information.)
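
A sketch of these bounds for complete data: since $$\Lambda =N\ln \lambda -\lambda \sum ({{t}_{i}}-\gamma )$$, the second partial derivative gives $$Var(\hat{\lambda })={{\hat{\lambda }}^{2}}/N$$. The helper below (a hypothetical name, not a Weibull++ function) combines this with the normal quantile $${{K}_{\alpha }}$$:

```python
import math
from statistics import NormalDist

def lambda_bounds(lam_hat, n, delta=0.85, two_sided=True):
    """Fisher-matrix bounds: lam_U = lam_hat*exp(+k), lam_L = lam_hat*exp(-k),
    with k = K_alpha * sqrt(Var(lam_hat)) / lam_hat.

    For complete exponential data, Lambda = n*ln(lam) - lam*sum(t_i - gamma),
    so -d2(Lambda)/d(lam)2 = n/lam^2 and Var(lam_hat) = lam_hat^2 / n.
    """
    alpha = (1 - delta) / 2 if two_sided else 1 - delta
    k_alpha = NormalDist().inv_cdf(1 - alpha)  # from alpha = 1 - Phi(K_alpha)
    var = lam_hat ** 2 / n
    k = k_alpha * math.sqrt(var) / lam_hat
    return lam_hat * math.exp(-k), lam_hat * math.exp(k)

# Illustration with the MLE example values (lam_hat = 0.025, N = 14).
lo, hi = lambda_bounds(0.025, 14)
assert lo < 0.025 < hi
```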

Bounds on Reliability
The reliability of the two-parameter exponential distribution is:


 * $$\hat{R}(t;\hat{\lambda })={{e}^{-\hat{\lambda }(t-\hat{\gamma })}}$$

The corresponding confidence bounds are estimated from:


 * $$\begin{align} {{R}_{L}}= & {{e}^{-{{\lambda }_{U}}(t-\hat{\gamma })}} \\ {{R}_{U}}= & {{e}^{-{{\lambda }_{L}}(t-\hat{\gamma })}} \end{align}$$

These equations hold true for the 1-parameter exponential distribution, with $$\gamma =0$$.

Bounds on Time
The bounds around time for a given exponential percentile, or reliability value, are estimated by first solving the reliability equation with respect to time, or reliable life:


 * $$\hat{t}=-\frac{1}{\hat{\lambda }}\cdot \ln (R)+\hat{\gamma }$$

The corresponding confidence bounds are estimated from:


 * $$\begin{align} {{t}_{U}}= & -\frac{1}{{{\lambda }_{L}}}\cdot \ln (R)+\hat{\gamma } \\ {{t}_{L}}= & -\frac{1}{{{\lambda }_{U}}}\cdot \ln (R)+\hat{\gamma } \end{align}$$

The same equations apply for the one-parameter exponential with $$\gamma =0.$$

Bounds on Parameters
For one-parameter distributions such as the exponential, the likelihood confidence bounds are calculated by finding values for $$\theta $$ that satisfy:


 * $$-2\cdot \text{ln}\left( \frac{L(\theta )}{L(\hat{\theta })} \right)=\chi _{\alpha ;1}^{2}$$

This equation can be rewritten as:


 * $$L(\theta )=L(\hat{\theta })\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}$$

For complete data, the likelihood function for the exponential distribution is given by:


 * $$L(\lambda )=\underset{i=1}{\overset{N}{\mathop \prod }}\,f({{t}_{i}};\lambda )=\underset{i=1}{\overset{N}{\mathop \prod }}\,\lambda \cdot {{e}^{-\lambda \cdot {{t}_{i}}}}$$

where the $${{t}_{i}}$$ values represent the original time-to-failure data. For a given value of $$\alpha $$, values for $$\lambda $$ can be found which represent the maximum and minimum values that satisfy the above likelihood ratio equation. These represent the confidence bounds for the parameters at a confidence level $$\delta ,$$ where $$\alpha =\delta $$ for two-sided bounds and $$\alpha =2\delta -1$$ for one-sided.

Example: LR Bounds for Lambda
Five units are put on a reliability test and experience failures at 20, 40, 60, 100, and 150 hours. Assuming an exponential distribution, the MLE parameter estimate is calculated to be $$\hat{\lambda }=0.013514$$. Calculate the 85% two-sided confidence bounds on this parameter using the likelihood ratio method.

Solution

The first step is to calculate the likelihood function for the parameter estimates:


 * $$\begin{align} L(\hat{\lambda })= & \underset{i=1}{\overset{N}{\mathop \prod }}\,f({{x}_{i}};\hat{\lambda })=\underset{i=1}{\overset{N}{\mathop \prod }}\,\hat{\lambda }\cdot {{e}^{-\hat{\lambda }\cdot {{x}_{i}}}} \\ L(\hat{\lambda })= & \underset{i=1}{\overset{5}{\mathop \prod }}\,0.013514\cdot {{e}^{-0.013514\cdot {{x}_{i}}}} \\ L(\hat{\lambda })= & 3.03647\times {{10}^{-12}} \end{align}$$

where $${{x}_{i}}$$ are the original time-to-failure data points. We can now rearrange the likelihood ratio equation to the form:


 * $$L(\lambda )-L(\hat{\lambda })\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}=0$$

Since our specified confidence level, $$\delta $$, is 85%, we can calculate the value of the chi-squared statistic, $$\chi _{0.85;1}^{2}=2.072251.$$ We can now substitute this information into the equation:


 * $$\begin{align} L(\lambda )-L(\hat{\lambda })\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}= & 0, \\ L(\lambda )-3.03647\times {{10}^{-12}}\cdot {{e}^{\tfrac{-2.072251}{2}}}= & 0, \\ L(\lambda )-1.07742\times {{10}^{-12}}= & 0. \end{align}$$

It now remains to find the values of $$\lambda $$ which satisfy this equation. Since there is only one parameter, there are only two values of $$\lambda $$ that will satisfy the equation. These values represent the $$\delta =85%\,\!$$ two-sided confidence limits of the parameter estimate $$\hat{\lambda }$$. For our problem, the confidence limits are:


 * $${{\lambda }_{0.85}}=(0.006572,0.024172)$$
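
These limits can be reproduced by solving the likelihood ratio equation numerically. The sketch below works with the log-likelihood (equivalent to, but better conditioned than, multiplying five small densities) and brackets each root by bisection; the helper names are our own.

```python
import math

TIMES = [20, 40, 60, 100, 150]  # failure data from the example
N, TOTAL = len(TIMES), sum(TIMES)
LAM_HAT = N / TOTAL             # MLE = 5/370 = 0.013514 failures/hour
CHI2 = 2.072251                 # chi^2_{0.85;1}, quoted in the text

def log_lik(lam):
    """ln L(lam) for complete exponential data: N*ln(lam) - lam*sum(t_i)."""
    return N * math.log(lam) - lam * TOTAL

# Roots of L(lam) = L(lam_hat)*exp(-chi2/2), i.e. ln L(lam) = ln L(lam_hat) - chi2/2.
target = log_lik(LAM_HAT) - CHI2 / 2.0

def solve(lo, hi):
    """Bisection for ln L(lam) = target on a bracket containing one root."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if (log_lik(lo) - target) * (log_lik(mid) - target) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

lam_lower = solve(1e-6, LAM_HAT)  # root below the MLE
lam_upper = solve(LAM_HAT, 1.0)   # root above the MLE
assert abs(lam_lower - 0.006572) < 1e-4
assert abs(lam_upper - 0.024172) < 1e-4
```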

Bounds on Time and Reliability
In order to calculate the bounds on a time estimate for a given reliability, or on a reliability estimate for a given time, the likelihood function needs to be rewritten in terms of one parameter and time/reliability, so that the maximum and minimum values of the time can be observed as the parameter is varied. This can be accomplished by substituting a form of the exponential reliability equation into the likelihood function. The exponential reliability equation can be written as:


 * $$R={{e}^{-\lambda \cdot t}}$$

This can be rearranged to the form:


 * $$\lambda =\frac{-\text{ln}(R)}{t}$$

This equation can now be substituted into the likelihood ratio equation to produce a likelihood equation in terms of $$t$$ and $$R:$$


 * $$L(t/R)=\underset{i=1}{\overset{N}{\mathop \prod }}\,\left( \frac{-\text{ln}(R)}{t} \right)\cdot {{e}^{\left( \tfrac{\text{ln}(R)}{t} \right)\cdot {{x}_{i}}}}$$

The unknown parameter $$t/R$$ depends on what type of bounds are being determined. If one is trying to determine the bounds on time for a given reliability, then $$R$$ is a known constant and $$t$$ is the unknown parameter. Conversely, if one is trying to determine the bounds on reliability for a given time, then $$t$$ is a known constant and $$R$$ is the unknown parameter. Either way, the likelihood ratio function can be solved for the values of interest.

Example: LR Bounds on Time
For the data given above for the LR Bounds on Lambda example (five failures at 20, 40, 60, 100 and 150 hours), determine the 85% two-sided confidence bounds on the time estimate for a reliability of 90%. The ML estimate for the time at $$R(t)=90%$$ is $$\hat{t}=7.797$$.

Solution

In this example, we are trying to determine the 85% two-sided confidence bounds on the time estimate of 7.797. This is accomplished by substituting $$R=0.90$$ and $$\alpha =0.85$$ into the likelihood ratio bound equation. It now remains to find the values of $$t$$ which satisfy this equation. Since there is only one parameter, there are only two values of $$t$$ that will satisfy the equation. These values represent the $$\delta =85%$$ two-sided confidence limits of the time estimate $$\hat{t}$$. For our problem, the confidence limits are:


 * $${{\hat{t}}_{R=0.9}}=(4.359,16.033)$$
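
Because $$t=-\ln (R)/\lambda $$ is monotone in $$\lambda $$, these time bounds can also be obtained directly from the $$\lambda $$ bounds of the previous example, which makes for a quick numerical check:

```python
import math

# 85% two-sided LR bounds on lam from the previous example; a larger lam
# implies a shorter reliable life, so the bounds swap roles.
lam_lower, lam_upper = 0.006572, 0.024172
R = 0.90

t_hat = -math.log(R) * (370 / 5)     # reliable life at the MLE, lam_hat = 5/370
t_lower = -math.log(R) / lam_upper   # upper lam bound -> lower time bound
t_upper = -math.log(R) / lam_lower   # lower lam bound -> upper time bound

assert round(t_hat, 3) == 7.797
assert abs(t_lower - 4.359) < 0.01
assert abs(t_upper - 16.033) < 0.01
```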

Example: LR Bounds on Reliability
Again using the data given above for the LR Bounds on Lambda example (five failures at 20, 40, 60, 100 and 150 hours), determine the 85% two-sided confidence bounds on the reliability estimate for $$t=50$$. The ML estimate for the reliability at $$t=50$$ is $$\hat{R}=50.881%$$.

Solution

In this example, we are trying to determine the 85% two-sided confidence bounds on the reliability estimate of 50.881%. This is accomplished by substituting $$t=50$$ and $$\alpha =0.85$$ into the likelihood ratio bound equation. It now remains to find the values of $$R$$ which satisfy this equation. Since there is only one parameter, there are only two values of $$R$$ that will satisfy the equation. These values represent the $$\delta =85%$$ two-sided confidence limits of the reliability estimate $$\hat{R}$$. For our problem, the confidence limits are:


 * $${{\hat{R}}_{t=50}}=(29.861%,71.794%)$$

Bounds on Parameters
From Confidence Bounds, we know that the posterior distribution of $$\lambda $$ can be written as:


 * $$f(\lambda |Data)=\frac{L(Data|\lambda )\varphi (\lambda )}{\int_{0}^{\infty }L(Data|\lambda )\varphi (\lambda )d\lambda }$$

where $$\varphi (\lambda )=\tfrac{1}{\lambda }$$ is the non-informative prior of $$\lambda $$.

With the above prior distribution, $$f(\lambda |Data)\,\!$$ can be rewritten as:


 * $$f(\lambda |Data)=\frac{L(Data|\lambda )\tfrac{1}{\lambda }}{\int_{0}^{\infty }L(Data|\lambda )\tfrac{1}{\lambda }d\lambda }$$

The one-sided upper bound of $$\lambda $$ is:


 * $$CL=P(\lambda \le {{\lambda }_{U}})=\int_{0}^{{{\lambda }_{U}}}f(\lambda |Data)d\lambda $$

The one-sided lower bound of $$\lambda $$ is:


 * $$1-CL=P(\lambda \le {{\lambda }_{L}})=\int_{0}^{{{\lambda }_{L}}}f(\lambda |Data)d\lambda $$

The two-sided bounds of $$\lambda $$ are:


 * $$CL=P({{\lambda }_{L}}\le \lambda \le {{\lambda }_{U}})=\int_{{{\lambda }_{L}}}^{{{\lambda }_{U}}}f(\lambda |Data)d\lambda $$
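
A numerical sketch of these bounds: for complete data $$L(Data|\lambda )={{\lambda }^{N}}{{e}^{-\lambda \sum {{t}_{i}}}}$$, so with the $$1/\lambda $$ prior the posterior is proportional to $${{\lambda }^{N-1}}{{e}^{-\lambda \sum {{t}_{i}}}}$$, a gamma distribution with integer shape $$N$$ and rate $$\sum {{t}_{i}}$$, whose cdf has a closed form. The data are borrowed from the LR example purely for illustration.

```python
import math

TIMES = [20, 40, 60, 100, 150]  # illustrative data, borrowed from the LR example
N, SUM_T = len(TIMES), sum(TIMES)

def posterior_cdf(lam):
    """Gamma(N, rate SUM_T) cdf via the regularized incomplete gamma (integer N):
    P(N, x) = 1 - exp(-x) * sum_{k<N} x^k / k!, with x = lam * SUM_T."""
    x = lam * SUM_T
    return 1.0 - math.exp(-x) * sum(x ** k / math.factorial(k) for k in range(N))

def upper_bound(cl, lo=1e-9, hi=1.0):
    """Solve CL = P(lam <= lam_U), i.e. posterior_cdf(lam_U) = cl, by bisection."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if posterior_cdf(mid) < cl:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

lam_u = upper_bound(0.85)  # one-sided upper bound at CL = 85%
assert abs(posterior_cdf(lam_u) - 0.85) < 1e-9
assert lam_u > N / SUM_T   # the upper bound sits above the MLE
```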

Bounds on Time (Type 1)
The reliable life equation is:


 * $$t=\frac{-\ln R}{\lambda }$$

For the one-sided upper bound on time we have:


 * $$CL=\Pr (t\le {{t}_{U}})=\Pr \left( \frac{-\ln R}{\lambda }\le {{t}_{U}} \right)$$

The above equation can be rewritten in terms of $$\lambda $$ as:


 * $$CL=\Pr \left( \frac{-\ln R}{{{t}_{U}}}\le \lambda \right)$$

From the above posterior distribution equation, we have:


 * $$CL=\frac{\int_{\tfrac{-\ln R}{{{t}_{U}}}}^{\infty }L(Data|\lambda )\tfrac{1}{\lambda }d\lambda }{\int_{0}^{\infty }L(Data|\lambda )\tfrac{1}{\lambda }d\lambda }$$

The above equation is solved w.r.t. $${{t}_{U}}.$$ The same method is applied for one-sided lower and two-sided bounds on time.

Bounds on Reliability (Type 2)
The one-sided upper bound on reliability is given by:


 * $$CL=\Pr (R\le {{R}_{U}})=\Pr (\exp (-\lambda t)\le {{R}_{U}})$$

The above equation can be rewritten in terms of $$\lambda $$ as:


 * $$CL=\Pr \left( \frac{-\ln {{R}_{U}}}{t}\le \lambda \right)$$

From the equation for posterior distribution we have:


 * $$CL=\frac{\int_{\tfrac{-\ln {{R}_{U}}}{t}}^{\infty }L(Data|\lambda )\tfrac{1}{\lambda }d\lambda }{\int_{0}^{\infty }L(Data|\lambda )\tfrac{1}{\lambda }d\lambda }$$

The above equation is solved w.r.t. $${{R}_{U}}.$$ The same method can be used to calculate the one-sided lower and two-sided bounds on reliability.
