The Exponential Distribution

The exponential distribution is a commonly used distribution in reliability engineering. Mathematically, it is a fairly simple distribution, which often leads to its use in inappropriate situations. It is, in fact, a special case of the Weibull distribution where $$\beta =1$$. The exponential distribution is used to model the behavior of units that have a constant failure rate (or units that do not degrade with time or wear out).

The 2-Parameter Exponential Distribution
The 2-parameter exponential pdf is given by:


 * $$f(t)=\lambda {{e}^{-\lambda (t-\gamma )}},\text{ }f(t)\ge 0,\lambda >0,t\ge \gamma $$

where $$\gamma $$ is the location parameter. Some of the characteristics of the 2-parameter exponential distribution are [19]:
 * 1) The location parameter, $$\gamma $$, if positive, shifts the beginning of the distribution by a distance of $$\gamma $$ to the right of the origin, signifying that the chance failures start to occur only after $$\gamma $$ hours of operation, and cannot occur before.
 * 2) The scale parameter is $$\tfrac{1}{\lambda }=\bar{t}-\gamma =m-\gamma $$.
 * 3) The exponential $$pdf$$ has no shape parameter, as it has only one shape.
 * 4) The distribution starts at $$t=\gamma $$ at the level of $$f(t=\gamma )=\lambda $$ and decreases thereafter exponentially and monotonically as $$t$$ increases beyond $$\gamma $$ and is convex.
 * 5) As $$t\to \infty $$, $$f(t)\to 0$$.
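These characteristics are straightforward to verify numerically. The following sketch (pure Python; the function name and values are illustrative, not from any reliability package) evaluates the 2-parameter pdf and checks that it starts at the level $$f(\gamma )=\lambda $$ and is zero before $$\gamma $$:

```python
import math

def exp2_pdf(t, lam, gamma=0.0):
    """pdf of the 2-parameter exponential: f(t) = lam*exp(-lam*(t - gamma)) for t >= gamma."""
    if t < gamma:
        return 0.0  # chance failures cannot occur before gamma hours of operation
    return lam * math.exp(-lam * (t - gamma))

lam, gamma = 0.002, 100.0
f_start = exp2_pdf(gamma, lam, gamma)  # the pdf starts at the level f(gamma) = lam
```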

The 1-Parameter Exponential Distribution
The 1-parameter exponential $$pdf$$ is obtained by setting $$\gamma =0$$, and is given by:


 * $$f(t)=\lambda {{e}^{-\lambda t}}=\frac{1}{m}{{e}^{-\tfrac{1}{m}t}},\text{ }t\ge 0,\lambda >0,m>0$$

where:


 * $$\lambda $$ = constant rate, in failures per unit of measurement (e.g., failures per hour, per cycle, etc.)


 * $$\lambda =\frac{1}{m}$$,
 * $$m$$ = mean time between failures, or to failure,
 * $$t$$ = operating time, life, or age, in hours, cycles, miles, actuations, etc.

This distribution requires the knowledge of only one parameter, $$\lambda $$, for its application. Some of the characteristics of the 1-parameter exponential distribution are [19]:
 * The location parameter, $$\gamma $$, is zero.
 * The scale parameter is $$\tfrac{1}{\lambda }=m$$.
 * As $$\lambda $$ is decreased in value, the distribution is stretched out to the right, and as $$\lambda $$ is increased, the distribution is pushed toward the origin.
 * This distribution has no shape parameter, as it has only one shape (i.e., the exponential); the only parameter it has is the failure rate, $$\lambda $$.
 * The distribution starts at $$t=0$$ at the level of $$f(t=0)=\lambda $$ and decreases thereafter exponentially and monotonically as $$t$$ increases, and is convex.
 * As $$t\to \infty $$, $$f(t)\to 0$$.
 * The $$pdf$$ can be thought of as a special case of the Weibull $$pdf$$ with $$\gamma =0$$ and $$\beta =1$$.
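The Weibull special case in the last point can be checked directly. The sketch below (function names are our own) compares a Weibull $$pdf$$ with $$\beta =1$$ and $$\eta =1/\lambda $$ against the exponential $$pdf$$:

```python
import math

def weibull_pdf(t, beta, eta):
    """Weibull pdf with shape beta and scale eta (gamma = 0)."""
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-(t / eta) ** beta)

def exp_pdf(t, lam):
    return lam * math.exp(-lam * t)

# With beta = 1 and eta = m = 1/lambda, the Weibull pdf reduces to the exponential pdf.
lam = 0.5
diffs = [abs(weibull_pdf(t, 1.0, 1.0 / lam) - exp_pdf(t, lam)) for t in (0.1, 1.0, 5.0)]
```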

Probability Plotting
Estimation of the parameters for the exponential distribution via probability plotting is very similar to the process used when dealing with the Weibull distribution. Recall, however, that the appearance of the probability plotting paper and the methods by which the parameters are estimated vary from distribution to distribution, so there will be some noticeable differences. In fact, due to the nature of the exponential $$cdf$$, the exponential probability plot is the only one with a negative slope. This is because the y-axis of the exponential probability plotting paper represents the reliability, whereas the y-axis for most of the other life distributions represents the unreliability.

This is illustrated in the process of linearizing the $$cdf$$, which is necessary to construct the exponential probability plotting paper. For the two-parameter exponential distribution, the cumulative distribution function is given by:


 * $$F(t)=1-{{e}^{-\lambda (t-\gamma )}}$$

Taking the natural logarithm of both sides of the above equation yields:


 * $$\ln \left[ 1-F(t) \right]=-\lambda (t-\gamma )$$

or:


 * $$\ln [1-F(t)]=\lambda \gamma -\lambda t$$

Now, let:


 * $$y=\ln [1-F(t)]$$


 * $$a=\lambda \gamma $$

and:


 * $$b=-\lambda $$

which results in the linear equation of:


 * $$y=a+bt$$

Note that with the exponential probability plotting paper, the y-axis scale is logarithmic and the x-axis scale is linear. This means that the zero value is present only on the x-axis. For $$t=0$$, $$R=1$$ and $$F(t)=0$$. So if we were to use $$F(t)$$ for the y-axis, we would have to plot the point $$(0,0)$$. However, since the y-axis is logarithmic, there is no place to plot this on the exponential paper. Also, the failure rate, $$\lambda $$, is the negative of the slope of the line, but there is an easier way to determine the value of $$\lambda $$ from the probability plot, as will be illustrated in the following example.
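The linearization steps above can be exercised numerically. The sketch below (plain Python; the centered least-squares form used here is algebraically equivalent to the rank regression equations given later in this chapter) fits $$y=a+bt$$ to exact $$cdf$$ points and recovers $$\lambda $$ as the negative of the slope and $$\gamma $$ from the intercept $$a=\lambda \gamma $$:

```python
import math

# Exact cdf points for a known lambda and gamma, linearized via
# y = ln[1 - F(t)] = lambda*gamma - lambda*t.
lam_true, gamma_true = 0.01, 50.0
ts = [60.0, 80.0, 120.0, 200.0, 350.0]
ys = [math.log(1 - (1 - math.exp(-lam_true * (t - gamma_true)))) for t in ts]

# Ordinary least squares fit of y = a + b*t.
N = len(ts)
xbar, ybar = sum(ts) / N, sum(ys) / N
b = sum((x - xbar) * (y - ybar) for x, y in zip(ts, ys)) / sum((x - xbar) ** 2 for x in ts)
a = ybar - b * xbar

lam_hat = -b               # the failure rate is the negative of the slope
gamma_hat = a / lam_hat    # since the intercept is a = lambda*gamma
```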

Rank Regression on Y for Exponential Distribution
Performing a rank regression on Y requires that a straight line be fitted to the set of available data points such that the sum of the squares of the vertical deviations from the points to the line is minimized. The least squares parameter estimation method (regression analysis) was discussed in Parameter Estimation, and the following equations for rank regression on Y (RRY) were derived:


 * $$\hat{a}=\bar{y}-\hat{b}\bar{x}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}$$

and:


 * $$\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}$$

In our case, the equations for $${{y}_{i}}$$ and $${{x}_{i}}$$ are:


 * $${{y}_{i}}=\ln [1-F({{t}_{i}})]$$

and:


 * $${{x}_{i}}={{t}_{i}}$$

and $$F({{t}_{i}})$$ is estimated from the median ranks. Once $$\hat{a}$$ and $$\hat{b}$$ are obtained, $$\hat{\lambda }$$ and $$\hat{\gamma }$$ can easily be obtained from the above equations. For the one-parameter exponential, the equations for estimating $$\hat{a}$$ and $$\hat{b}$$ become:


 * $$\begin{align}\hat{a}= & 0, \\ \hat{b}= & \frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}} \end{align}$$
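As a sketch of how these one-parameter equations might be applied (Benard's approximation $$(i-0.3)/(N+0.4)$$ stands in for exact median ranks here, which is an assumption):

```python
import math

def rry_one_parameter(times):
    """RRY estimate of lambda for the 1-parameter exponential.

    Median ranks are approximated with Benard's formula (i - 0.3)/(N + 0.4);
    exact median ranks would normally be used instead.
    """
    N = len(times)
    xs = sorted(times)
    ys = [math.log(1.0 - (i - 0.3) / (N + 0.4)) for i in range(1, N + 1)]
    b_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return -b_hat  # a_hat = 0 and lambda_hat = -b_hat

lam_hat = rry_one_parameter([7.0, 12.0, 19.0, 29.0, 41.0, 67.0])
```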

The Correlation Coefficient
The estimator of $$\rho $$ is the sample correlation coefficient, $$\hat{\rho }$$, given by:


 * $$\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,({{x}_{i}}-\overline{x})({{y}_{i}}-\overline{y})}{\sqrt{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{({{x}_{i}}-\overline{x})}^{2}}\cdot \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{({{y}_{i}}-\overline{y})}^{2}}}}$$

Rank Regression on X for Exponential Distribution
Similar to rank regression on Y, performing a rank regression on X requires that a straight line be fitted to a set of data points such that the sum of the squares of the horizontal deviations from the points to the line is minimized.

Again the first task is to bring our exponential $$cdf$$ function into a linear form. This step is exactly the same as in regression on Y analysis. The deviation from the previous analysis begins on the least squares fit step, since in this case we treat $$x$$ as the dependent variable and $$y$$ as the independent variable. The best-fitting straight line to the data, for regression on X (see Parameter Estimation), is the straight line:


 * $$x=\hat{a}+\hat{b}y$$

The corresponding equations for $$\hat{a}$$ and $$\hat{b}$$ are:


 * $$\hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}$$

and:


 * $$\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}$$

where:


 * $${{y}_{i}}=\ln [1-F({{t}_{i}})]$$

and:


 * $${{x}_{i}}={{t}_{i}}$$

The values of $$F({{t}_{i}})$$ are estimated from the median ranks. Once $$\hat{a}$$ and $$\hat{b}$$ are obtained, solve for the unknown $$y$$ value, which corresponds to:


 * $$y=-\frac{\hat{a}}{\hat{b}}+\frac{1}{\hat{b}}x$$

Solving for the parameters from the above equations, we get:


 * $$a=-\frac{\hat{a}}{\hat{b}}=\lambda \gamma \Rightarrow \gamma =\hat{a}$$

and:


 * $$b=\frac{1}{\hat{b}}=-\lambda \Rightarrow \lambda =-\frac{1}{\hat{b}}$$

For the one-parameter exponential case, the equations for estimating $$\hat{a}$$ and $$\hat{b}$$ become:


 * $$\begin{align}\hat{a}= & 0 \\ \hat{b}= & \frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}} \end{align}$$

The correlation coefficient is evaluated as before.
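A minimal sketch of RRX for the one-parameter case, together with the sample correlation coefficient (again, Benard's median-rank approximation is an assumption, used in place of exact median ranks):

```python
import math

def rrx_one_parameter(times):
    """RRX estimate of lambda for the 1-parameter exponential, plus the sample
    correlation coefficient. Median ranks use Benard's approximation."""
    N = len(times)
    xs = sorted(times)
    ys = [math.log(1.0 - (i - 0.3) / (N + 0.4)) for i in range(1, N + 1)]
    b_hat = sum(x * y for x, y in zip(xs, ys)) / sum(y * y for y in ys)
    lam_hat = -1.0 / b_hat            # from b = 1/b_hat = -lambda
    xbar, ybar = sum(xs) / N, sum(ys) / N
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - xbar) ** 2 for x in xs) *
                    sum((y - ybar) ** 2 for y in ys))
    return lam_hat, num / den

lam_hat, rho = rrx_one_parameter([7.0, 12.0, 19.0, 29.0, 41.0, 67.0])
```

Note that $$\hat{\rho }$$ comes out negative, reflecting the negative slope of the exponential probability plot.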

Maximum Likelihood Estimation for Exponential Distribution
As outlined in Parameter Estimation, maximum likelihood estimation works by developing a likelihood function based on the available data and finding the values of the parameter estimates that maximize the likelihood function. This can be achieved by using iterative methods to determine the parameter estimate values that maximize the likelihood function. This can be rather difficult and time-consuming, particularly when dealing with the three-parameter distribution. Another method of finding the parameter estimates involves taking the partial derivatives of the likelihood equation with respect to the parameters, setting the resulting equations equal to zero, and solving simultaneously to determine the values of the parameter estimates. The log-likelihood functions and associated partial derivatives used to determine maximum likelihood estimates for the exponential distribution are covered in Appendix D.
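For complete (uncensored) data, the partial-derivative approach yields a closed form: with $$\Lambda =N\ln \lambda -\lambda \sum t_i$$, setting $$\partial \Lambda /\partial \lambda =N/\lambda -\sum t_i=0$$ gives $$\hat{\lambda }=N/\sum t_i$$. A minimal sketch, assuming complete data:

```python
def exp_mle_complete(times):
    """MLE of lambda for complete data: dLambda/dlambda = N/lambda - sum(t_i) = 0
    gives lambda_hat = N / sum(t_i). (Censored data requires the full
    log-likelihood from Appendix D instead.)"""
    return len(times) / sum(times)

lam_hat = exp_mle_complete([10.0, 20.0, 30.0, 40.0])  # N = 4, sum = 100
```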

Confidence Bounds
In this section, we present the methods used in the application to estimate the different types of confidence bounds for exponentially distributed data. The complete derivations were presented in detail (for a general function) in the chapter for Confidence Bounds. At this time we should point out that exact confidence bounds for the exponential distribution have been derived, and exist in a closed form, utilizing the $${{\chi }^{2}}$$ distribution. These are described in detail in Kececioglu [20], and are covered in the section in the test design chapter. For most exponential data analyses, Weibull++ will use the approximate confidence bounds, provided from the Fisher information matrix or the likelihood ratio, in order to stay consistent with all of the other available distributions in the application. The $${{\chi }^{2}}$$ confidence bounds for the exponential distribution are discussed in more detail in the test design chapter.

Bounds on the Parameters
For the failure rate $$\hat{\lambda }$$ the upper ($${{\lambda }_{U}}$$) and lower ($${{\lambda }_{L}}$$) bounds are estimated by [30]:


 * $$\begin{align}{{\lambda }_{U}}= & \hat{\lambda }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\hat{\lambda })}}{\hat{\lambda }}}} \\ {{\lambda }_{L}}= & \frac{\hat{\lambda }}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\hat{\lambda })}}{\hat{\lambda }}}}} \end{align}$$

where $${{K}_{\alpha }}$$ is defined by:


 * $$\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})$$

If $$\delta $$ is the confidence level, then $$\alpha =\tfrac{1-\delta }{2}$$ for the two-sided bounds, and $$\alpha =1-\delta $$ for the one-sided bounds. The variance of $$\hat{\lambda },$$ $$Var(\hat{\lambda }),$$ is estimated from the Fisher matrix, as follows:


 * $$Var(\hat{\lambda })={{\left( -\frac{{{\partial }^{2}}\Lambda }{\partial {{\lambda }^{2}}} \right)}^{-1}}$$

where $$\Lambda $$ is the log-likelihood function of the exponential distribution, described in Appendix D.

Note that no true MLE solution exists for the case of the two-parameter exponential distribution. The mathematics simply break down while trying to simultaneously solve the partial derivative equations for both the $$\gamma $$ and $$\lambda $$ parameters, resulting in unrealistic conditions. The way around this conundrum involves setting $$\gamma ={{t}_{1}}$$, the first time to failure, and calculating $$\lambda $$ in the regular fashion for this methodology. Weibull++ treats $$\gamma $$ as a constant when computing bounds (i.e., $$Var(\hat{\gamma })=0$$). (See the discussion in Appendix D for more information.)
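A sketch of the parameter bounds for the complete-data case, where $$\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\lambda }^{2}}}=-N/{{\lambda }^{2}}$$ gives $$Var(\hat{\lambda })={{\hat{\lambda }}^{2}}/N$$ (this variance expression assumes complete data; with censoring the Fisher matrix must be evaluated numerically):

```python
import math
from statistics import NormalDist

def lambda_bounds(lam_hat, N, confidence=0.90, two_sided=True):
    """Fisher-matrix bounds on lambda, assuming complete data so that
    Var(lambda_hat) = lambda_hat**2 / N."""
    alpha = (1.0 - confidence) / 2.0 if two_sided else 1.0 - confidence
    k_alpha = NormalDist().inv_cdf(1.0 - alpha)  # alpha = 1 - Phi(K_alpha)
    var = lam_hat ** 2 / N
    factor = math.exp(k_alpha * math.sqrt(var) / lam_hat)
    return lam_hat / factor, lam_hat * factor

lam_lower, lam_upper = lambda_bounds(0.04, 20, confidence=0.90)
```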

Bounds on Reliability
The reliability of the two-parameter exponential distribution is:


 * $$\hat{R}(t;\hat{\lambda })={{e}^{-\hat{\lambda }(t-\hat{\gamma })}}$$

The corresponding confidence bounds are estimated from:


 * $$\begin{align}{{R}_{L}}= & {{e}^{-{{\lambda }_{U}}(t-\hat{\gamma })}} \\ {{R}_{U}}= & {{e}^{-{{\lambda }_{L}}(t-\hat{\gamma })}} \end{align}$$

These equations hold true for the one-parameter exponential distribution, with $$\gamma =0$$.
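The reliability bounds follow directly from the bounds on $$\lambda $$, as a short sketch (illustrative values):

```python
import math

def reliability_bounds(t, lam_lower, lam_upper, gamma=0.0):
    """Bounds on R(t): the lower reliability bound uses the upper bound on the
    failure rate, and vice versa."""
    r_lower = math.exp(-lam_upper * (t - gamma))
    r_upper = math.exp(-lam_lower * (t - gamma))
    return r_lower, r_upper

r_lower, r_upper = reliability_bounds(100.0, 0.002, 0.005)
```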

Bounds on Time
The bounds around time for a given exponential percentile, or reliability value, are estimated by first solving the reliability equation with respect to time, or reliable life:


 * $$\hat{t}=-\frac{1}{\hat{\lambda }}\cdot \ln (R)+\hat{\gamma }$$

The corresponding confidence bounds are estimated from:


 * $$\begin{align}{{t}_{U}}= & -\frac{1}{{{\lambda }_{L}}}\cdot \ln (R)+\hat{\gamma } \\ {{t}_{L}}= & -\frac{1}{{{\lambda }_{U}}}\cdot \ln (R)+\hat{\gamma } \end{align}$$

The same equations apply for the one-parameter exponential with $$\gamma =0.$$
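Analogously to the reliability bounds, the time bounds invert the roles of the $$\lambda $$ bounds, as sketched below (illustrative values):

```python
import math

def time_bounds(R, lam_lower, lam_upper, gamma=0.0):
    """Bounds on the time at which reliability R is attained: the upper time
    bound comes from the lower bound on lambda, and vice versa."""
    t_upper = -math.log(R) / lam_lower + gamma
    t_lower = -math.log(R) / lam_upper + gamma
    return t_lower, t_upper

t_lower, t_upper = time_bounds(0.90, 0.002, 0.005)
```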

Bounds on Parameters
For one-parameter distributions such as the exponential, the likelihood confidence bounds are calculated by finding values for $$\theta $$ that satisfy:


 * $$-2\cdot \text{ln}\left( \frac{L(\theta )}{L(\hat{\theta })} \right)=\chi _{\alpha ;1}^{2}$$

This equation can be rewritten as:


 * $$L(\theta )=L(\hat{\theta })\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}$$

For complete data, the likelihood function for the exponential distribution is given by:


 * $$L(\lambda )=\underset{i=1}{\overset{N}{\mathop \prod }}\,f({{t}_{i}};\lambda )=\underset{i=1}{\overset{N}{\mathop \prod }}\,\lambda \cdot {{e}^{-\lambda \cdot {{t}_{i}}}}$$

where the $${{t}_{i}}$$ values represent the original time-to-failure data. For a given value of $$\alpha $$, values for $$\lambda $$ can be found which represent the maximum and minimum values that satisfy the above likelihood ratio equation. These represent the confidence bounds for the parameters at a confidence level $$\delta ,$$ where $$\alpha =\delta $$ for two-sided bounds and $$\alpha =2\delta -1$$ for one-sided.
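These maximum and minimum values of $$\lambda $$ can be found with simple root finding. The sketch below assumes complete data and obtains the 1-degree-of-freedom chi-squared quantile from the standard normal quantile, $$\chi _{\alpha ;1}^{2}={{[{{\Phi }^{-1}}((1+\alpha )/2)]}^{2}}$$ (the bisection approach is ours, not necessarily the application's method):

```python
import math
from statistics import NormalDist

def lr_bounds_lambda(times, alpha=0.90):
    """Likelihood ratio bounds on lambda for complete data."""
    N, T = len(times), sum(times)
    lam_hat = N / T                                  # MLE for complete data
    chi2 = NormalDist().inv_cdf((1.0 + alpha) / 2.0) ** 2
    loglik = lambda lam: N * math.log(lam) - lam * T
    target = loglik(lam_hat) - chi2 / 2.0            # log-likelihood at the bounds

    def solve(lo, hi):
        # bisection on g(lam) = loglik(lam) - target, which changes sign once
        g = lambda lam: loglik(lam) - target
        for _ in range(100):
            mid = (lo + hi) / 2.0
            if g(lo) * g(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2.0

    return solve(lam_hat * 1e-6, lam_hat), solve(lam_hat, lam_hat * 1e6)

lam_lower, lam_upper = lr_bounds_lambda([10.0, 20.0, 30.0, 40.0], alpha=0.90)
```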

Bounds on Time and Reliability
In order to calculate the bounds on a time estimate for a given reliability, or on a reliability estimate for a given time, the likelihood function needs to be rewritten in terms of one parameter and time/reliability, so that the maximum and minimum values of the time can be observed as the parameter is varied. This can be accomplished by substituting a form of the exponential reliability equation into the likelihood function. The exponential reliability equation can be written as:


 * $$R={{e}^{-\lambda \cdot t}}$$

This can be rearranged to the form:


 * $$\lambda =\frac{-\text{ln}(R)}{t}$$

This equation can now be substituted into the likelihood ratio equation to produce a likelihood equation in terms of $$t$$ and $$R:$$


 * $$L(t/R)=\underset{i=1}{\overset{N}{\mathop \prod }}\,\left( \frac{-\text{ln}(R)}{t} \right)\cdot {{e}^{\left( \tfrac{\text{ln}(R)}{t} \right)\cdot {{x}_{i}}}}$$

The unknown parameter $$t/R$$ depends on which type of bounds is being determined. If one is trying to determine the bounds on time for a given reliability, then $$R$$ is a known constant and $$t$$ is the unknown parameter. Conversely, if one is trying to determine the bounds on reliability for a given time, then $$t$$ is a known constant and $$R$$ is the unknown parameter. Either way, the likelihood ratio function can be solved for the values of interest.
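A sketch for the bounds-on-time case (complete data assumed; the same bisection idea as for the parameter bounds, applied to the substituted likelihood, and again our own construction rather than the application's internal routine):

```python
import math
from statistics import NormalDist

def lr_bounds_time(times, R, alpha=0.90):
    """Likelihood ratio bounds on time for a given reliability R, using the
    substitution lambda = -ln(R)/t (complete data assumed)."""
    N, T = len(times), sum(times)
    lam_hat = N / T
    chi2 = NormalDist().inv_cdf((1.0 + alpha) / 2.0) ** 2
    target = (N * math.log(lam_hat) - lam_hat * T) - chi2 / 2.0

    def loglik_t(t):
        lam = -math.log(R) / t       # substitute the reliability equation
        return N * math.log(lam) - lam * T

    def solve(lo, hi):
        g = lambda t: loglik_t(t) - target
        for _ in range(100):
            mid = (lo + hi) / 2.0
            if g(lo) * g(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2.0

    t_hat = -math.log(R) / lam_hat
    return solve(t_hat * 1e-6, t_hat), solve(t_hat, t_hat * 1e6)

t_lower, t_upper = lr_bounds_time([10.0, 20.0, 30.0, 40.0], R=0.90)
```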

Bounds on Parameters
From Confidence Bounds, we know that the posterior distribution of $$\lambda $$ can be written as:


 * $$f(\lambda |Data)=\frac{L(Data|\lambda )\varphi (\lambda )}{\int_{0}^{\infty }L(Data|\lambda )\varphi (\lambda )d\lambda }$$

where $$\varphi (\lambda )=\tfrac{1}{\lambda }$$ is the non-informative prior of $$\lambda $$.

With the above prior distribution, $$f(\lambda |Data)$$ can be rewritten as:


 * $$f(\lambda |Data)=\frac{L(Data|\lambda )\tfrac{1}{\lambda }}{\int_{0}^{\infty }L(Data|\lambda )\tfrac{1}{\lambda }d\lambda }$$

The one-sided upper bound of $$\lambda $$ is:


 * $$CL=P(\lambda \le {{\lambda }_{U}})=\int_{0}^{{{\lambda }_{U}}}f(\lambda |Data)d\lambda $$

The one-sided lower bound of $$\lambda $$ is:


 * $$1-CL=P(\lambda \le {{\lambda }_{L}})=\int_{0}^{{{\lambda }_{L}}}f(\lambda |Data)d\lambda $$

The two-sided bounds of $$\lambda $$ are:


 * $$CL=P({{\lambda }_{L}}\le \lambda \le {{\lambda }_{U}})=\int_{{{\lambda }_{L}}}^{{{\lambda }_{U}}}f(\lambda |Data)d\lambda $$
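For complete data these integrals can be evaluated directly, because the unnormalized posterior $$L(Data|\lambda )\tfrac{1}{\lambda }$$ is then proportional to $${{\lambda }^{N-1}}{{e}^{-\lambda T}}$$ with $$T=\sum {{t}_{i}}$$, a gamma density with shape $$N$$ and rate $$T$$. The sketch below (pure Python; complete data is an assumption, and this is not necessarily how Weibull++ evaluates the integrals) solves for the one-sided upper bound by bisection:

```python
import math

def reg_lower_gamma(s, x, terms=400):
    """Regularized lower incomplete gamma P(s, x) via its power series."""
    if x <= 0:
        return 0.0
    total, term = 0.0, 1.0 / s
    for k in range(1, terms + 1):
        total += term
        term *= x / (s + k)
    return total * x ** s * math.exp(-x) / math.gamma(s)

def posterior_lambda_upper(times, CL=0.90):
    """One-sided upper bound on lambda under the 1/lambda prior, complete data:
    the posterior is gamma(shape N, rate T), so solve P(N, T*lambda_U) = CL."""
    N, T = len(times), sum(times)
    hi = N / T
    while reg_lower_gamma(N, T * hi) < CL:  # expand until the bracket holds lambda_U
        hi *= 2.0
    lo = 0.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if reg_lower_gamma(N, T * mid) < CL:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

lam_upper = posterior_lambda_upper([10.0, 20.0, 30.0, 40.0], CL=0.90)
```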

Bounds on Time (Type 1)
The reliable life equation is:


 * $$t=\frac{-\ln R}{\lambda }$$

For the one-sided upper bound on time we have:


 * $$CL=\Pr (t\le {{t}_{U}})=\Pr \left( \frac{-\ln R}{\lambda }\le {{t}_{U}} \right)$$

The above equation can be rewritten in terms of $$\lambda $$ as:


 * $$CL=\Pr \left( \frac{-\ln R}{{{t}_{U}}}\le \lambda  \right)$$

From the above posterior distribution equation, we have:


 * $$CL=\frac{\int_{\tfrac{-\ln R}{{{t}_{U}}}}^{\infty }L(Data|\lambda )\tfrac{1}{\lambda }d\lambda }{\int_{0}^{\infty }L(Data|\lambda )\tfrac{1}{\lambda }d\lambda }$$

The above equation is solved with respect to $${{t}_{U}}$$. The same method is applied for one-sided lower and two-sided bounds on time.

Bounds on Reliability (Type 2)
The one-sided upper bound on reliability is given by:


 * $$CL=\Pr (R\le {{R}_{U}})=\Pr (\exp (-\lambda t)\le {{R}_{U}})$$

The above equation can be rewritten in terms of $$\lambda $$ as:


 * $$CL=\Pr \left( \frac{-\ln {{R}_{U}}}{t}\le \lambda  \right)$$

From the equation for posterior distribution we have:


 * $$CL=\frac{\int_{\tfrac{-\ln {{R}_{U}}}{t}}^{\infty }L(Data|\lambda )\tfrac{1}{\lambda }d\lambda }{\int_{0}^{\infty }L(Data|\lambda )\tfrac{1}{\lambda }d\lambda }$$

The above equation is solved with respect to $${{R}_{U}}$$. The same method can be used to calculate the one-sided lower and two-sided bounds on reliability.
