The Normal Distribution

The normal distribution, also known as the Gaussian distribution, is the most widely used general-purpose distribution. For this reason, it is included among the lifetime distributions commonly used for reliability and life data analysis. Some argue that the normal distribution is inappropriate for modeling lifetime data because its left-hand limit extends to negative infinity, which could conceivably result in modeling negative times-to-failure. However, provided that the distribution in question has a relatively high mean and a relatively small standard deviation, negative failure times should not present a problem. The normal distribution has nevertheless been shown to be useful for modeling the lifetimes of consumable items, such as printer toner cartridges.

Normal Probability Density Function
The $$pdf$$  of the normal distribution is given by:


 * $$f(t)=\frac{1}{\sigma \sqrt{2\pi }}{{e}^{-\frac{1}{2}{{\left( \frac{t-\mu }{\sigma } \right)}^{2}}}}$$

where:


 * $$\mu$$ = mean of the normal times-to-failure, also denoted as $$\bar{T}$$,


 * $$\sigma$$ = standard deviation of the times-to-failure

It is a two-parameter distribution with parameters $$\mu $$  (or  $$\bar{T}$$ ) and  $$\sigma $$  (i.e., the mean and the standard deviation, respectively).

The Normal Mean, Median and Mode
The normal mean or MTTF is actually one of the parameters of the distribution, usually denoted as $$\mu .$$  Because the normal distribution is symmetrical, the median and the mode are always equal to the mean:


 * $$\mu =\tilde{T}=\breve{T}$$

The Normal Standard Deviation
As with the mean, the standard deviation for the normal distribution is actually one of the parameters, usually denoted as $${{\sigma }_{T}}$$.

The Normal Reliability Function
The reliability for a mission of time $$t$$  for the normal distribution is determined by:


 * $$R(t)=\int_{t}^{\infty }f(x)dx=\int_{t}^{\infty }\frac{1}{{{\sigma }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{x-\mu }{{{\sigma }}} \right)}^{2}}}}dx$$

There is no closed-form solution for the normal reliability function. Solutions can be obtained via the use of standard normal tables. Since the application automatically solves for the reliability, we will not discuss manual solution methods. For interested readers, full explanations can be found in the references.
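
For illustration, here is a minimal sketch of this calculation in Python, using SciPy's survival function in place of the standard normal tables; it is not how Weibull++ computes it internally, and the parameter values are hypothetical:

```python
from scipy.stats import norm

mu, sigma = 100.0, 10.0   # hypothetical mean and standard deviation (hours)

t = 90.0
reliability = norm.sf(t, loc=mu, scale=sigma)   # R(t) = 1 - F(t)
print(f"R({t}) = {reliability:.4f}")            # ~0.8413: one sigma below the mean
```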

The Normal Conditional Reliability Function
The normal conditional reliability function is given by:


 * $$R(t|T)=\frac{R(T+t)}{R(T)}=\frac{\int_{T+t}^{\infty }\tfrac{1}{{{\sigma }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{x-\mu }{{{\sigma }}} \right)}^{2}}}}dx}{\int_{T}^{\infty }\tfrac{1}{{{\sigma }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{x-\mu }{{{\sigma }}} \right)}^{2}}}}dx}$$

Once again, the use of standard normal tables for the calculation of the normal conditional reliability is necessary, as there is no closed-form solution.
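
A short sketch of the same calculation follows, again with hypothetical parameter values; the ratio of two survival-function evaluations replaces the table lookups:

```python
from scipy.stats import norm

mu, sigma = 100.0, 10.0   # hypothetical parameters
T, t = 90.0, 15.0         # hours already accumulated, additional mission time

# R(t|T) = R(T + t) / R(T)
R_cond = norm.sf(T + t, loc=mu, scale=sigma) / norm.sf(T, loc=mu, scale=sigma)
print(f"R({t}|{T}) = {R_cond:.4f}")
```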

The Normal Reliable Life
Since there is no closed-form solution for the normal reliability function, there will also be no closed-form solution for the normal reliable life. To determine the normal reliable life, one must solve:


 * $$R(T)=\int_{T}^{\infty }\frac{1}{{{\sigma }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\mu }{{{\sigma }}} \right)}^{2}}}}dt$$

for $$T$$.
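
Numerically, the reliable life is the inverse of the survival function. A minimal sketch, with hypothetical parameter values:

```python
from scipy.stats import norm

mu, sigma = 100.0, 10.0   # hypothetical parameters
R = 0.95                  # required reliability

T_R = norm.isf(R, loc=mu, scale=sigma)   # solves R(T) = R for T
print(f"Reliable life for R = {R}: {T_R:.1f} hours")   # ~83.6 hours
```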

The Normal Failure Rate Function
The instantaneous normal failure rate is given by:


 * $$\lambda (t)=\frac{f(t)}{R(t)}=\frac{\tfrac{1}{{{\sigma }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\mu }{{{\sigma }}} \right)}^{2}}}}}{\int_{t}^{\infty }\tfrac{1}{{{\sigma }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{x-\mu }{{{\sigma }}} \right)}^{2}}}}dx}$$
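
A brief sketch of this ratio, with hypothetical parameter values, also shows the characteristic monotonically increasing failure rate of the normal distribution:

```python
from scipy.stats import norm

mu, sigma = 100.0, 10.0   # hypothetical parameters

def failure_rate(t):
    """lambda(t) = f(t) / R(t)."""
    return norm.pdf(t, loc=mu, scale=sigma) / norm.sf(t, loc=mu, scale=sigma)

for t in (80.0, 100.0, 120.0):
    print(f"lambda({t}) = {failure_rate(t):.4f} failures/hour")
```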

Characteristics of the Normal Distribution
Some of the specific characteristics of the normal distribution are the following:
 * The normal $$pdf$$  has a mean,  $$\bar{T}$$, which is equal to the median,  $$\tilde{T}$$, and also equal to the mode, $$\breve{T}$$, or  $$\bar{T}=\tilde{T}=\breve{T}$$. This is because the normal distribution is symmetrical about its mean.




 * The mean, $$\mu $$ (also called the mean life or the  $$MTTF$$ ), is also the location parameter of the normal  $$pdf$$ , as it locates the  $$pdf$$  along the abscissa. It can assume values of  $$-\infty <\mu <\infty $$.
 * The normal $$pdf$$  has no shape parameter. This means that the normal  $$pdf$$  has only one shape, the bell shape, and this shape does not change.




 * The standard deviation, $$\sigma $$, is the scale parameter of the normal  $$pdf$$.


 * As $$\sigma $$  decreases, the  $$pdf$$  gets pushed toward the mean, or it becomes narrower and taller.


 * As $$\sigma $$  increases, the  $$pdf$$  spreads out away from the mean, or it becomes broader and shallower.


 * The standard deviation can assume values of $$0<\sigma <\infty $$.


 * The greater the variability, the larger the value of $$\sigma $$, and vice versa.


 * The standard deviation is also the distance between the mean and the point of inflection of the $$pdf$$, on each side of the mean. The point of inflection is that point of the  $$pdf$$  where the slope changes its value from a decreasing to an increasing one, or where the second derivative of the  $$pdf$$  has a value of zero.


 * The normal $$pdf$$  starts at  $$t=-\infty $$  with an  $$f(t)=0$$ . As  $$t$$  increases,  $$f(t)$$  also increases, goes through its point of inflection and reaches its maximum value at  $$t=\bar{T}$$ . Thereafter,  $$f(t)$$  decreases, goes through its point of inflection, and assumes a value of  $$f(t)=0$$  at  $$t=+\infty $$.

Weibull++ Notes on Negative Time Values

One of the disadvantages of using the normal distribution for reliability calculations is that it starts at negative infinity, which can result in negative values for some of the results. Most components of Weibull++ do not accept negative time values: certain components reserve negative values for suspensions, while others will not return negative results. For example, the Quick Calculation Pad will return a null value (zero) if the result is negative. Only the Free-Form (Probit) data sheet can accept negative values for the random variable (x-axis values).

Probability Plotting
As described before, probability plotting involves plotting the failure times and associated unreliability estimates on specially constructed probability plotting paper. The form of this paper is based on a linearization of the $$cdf$$  of the specific distribution. For the normal distribution, the cumulative distribution function can be written as:


 * $$F(t)=\Phi \left( \frac{t-\mu }{\sigma } \right)$$

or:


 * $${{\Phi }^{-1}}\left[ F(t) \right]=-\frac{\mu}{\sigma}+\frac{1}{\sigma}t$$

where:


 * $$\Phi (x)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{x}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt$$

Now, let:


 * $$y={{\Phi }^{-1}}\left[ F(t) \right]$$


 * $$a=-\frac{\mu }{\sigma }$$

and:


 * $$b=\frac{1}{\sigma }$$

which results in the linear equation of:


 * $$y=a+bT$$

The normal probability paper resulting from this linearized $$cdf$$  function is shown next.



Since the normal distribution is symmetrical, the area under the $$pdf$$  curve from  $$-\infty $$  to  $$\mu $$  is  $$0.5$$, as is the area from  $$\mu $$  to  $$+\infty $$. Consequently, the value of $$\mu $$  is said to be the point where  $$R(t)=Q(t)=50%$$. This means that the estimate of $$\mu $$  can be read from the point where the plotted line crosses the 50% unreliability line.

To determine the value of $$\sigma $$  from the probability plot, it is first necessary to understand that the area under the  $$pdf$$  curve that lies between one standard deviation in either direction from the mean (or two standard deviations total) represents 68.3% of the area under the curve. This is represented graphically in the following figure.



Consequently, the interval between  $$Q(t)=84.15%$$  and  $$Q(t)=15.85%$$  represents two standard deviations, since this is an interval of 68.3% ( $$84.15-15.85=68.3$$ ), and is centered on the mean at 50%. As a result, the standard deviation can be estimated from:


 * $$\widehat{\sigma }=\frac{t(Q=84.15%)-t(Q=15.85%)}{2}$$

That is: the value of $$\widehat{\sigma }$$  is obtained by subtracting the time value where the plotted line crosses the 15.85% unreliability line from the time value where it crosses the 84.15% unreliability line, and dividing the result by two. For example, if the line crosses the 15.85% and 84.15% unreliability lines at 90 and 110 hours, respectively, then $$\widehat{\sigma }=(110-90)/2=10$$ hours. This process is illustrated in the following example.

Example 1:

Rank Regression on Y
Performing rank regression on Y requires that a straight line be fitted to a set of data points such that the sum of the squares of the vertical deviations from the points to the line is minimized.

The least squares parameter estimation method (regression analysis) was discussed in Parameter Estimation, and the following equations for regression on Y were derived:


 * $$\begin{align}\hat{a}= & \bar{y}-\hat{b}\bar{x} \\ = & \frac{\sum_{i=1}^{N}{{y}_{i}}}{N}-\hat{b}\frac{\sum_{i=1}^{N}{{x}_{i}}}{N} \end{align}$$

and:


 * $$\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}$$

In the case of the normal distribution, the equations for $${{y}_{i}}$$  and  $${{x}_{i}}$$  are:


 * $${{y}_{i}}={{\Phi }^{-1}}\left[ F({{t}_{i}}) \right]$$

and:


 * $${{x}_{i}}={{t}_{i}}$$

where the values for $$F({{t}_{i}})$$  are estimated from the median ranks. Once $$\widehat{a}$$  and  $$\widehat{b}$$  are obtained,  $$\widehat{\sigma }$$  and  $$\widehat{\mu }$$  follow from the above equations as  $$\widehat{\sigma }=\tfrac{1}{\widehat{b}}$$  and  $$\widehat{\mu }=-\tfrac{\widehat{a}}{\widehat{b}}$$.

The Correlation Coefficient
The estimator of the sample correlation coefficient, $$\hat{\rho }$$, is given by:


 * $$\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,({{x}_{i}}-\overline{x})({{y}_{i}}-\overline{y})}{\sqrt{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{({{x}_{i}}-\overline{x})}^{2}}\cdot \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{({{y}_{i}}-\overline{y})}^{2}}}}$$
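
The following sketch carries out rank regression on Y end to end on hypothetical failure times, using Benard's approximation for the median ranks (Weibull++ computes exact median ranks, so small differences are expected); it also evaluates the correlation coefficient:

```python
import numpy as np
from scipy.stats import norm

times = np.sort(np.array([85.0, 92.0, 99.0, 104.0, 112.0]))  # hypothetical data
N = len(times)

# Median rank estimates of F(t_i) via Benard's approximation (i - 0.3)/(N + 0.4)
F = (np.arange(1, N + 1) - 0.3) / (N + 0.4)

x = times
y = norm.ppf(F)                     # y_i = Phi^{-1}[F(t_i)]

b, a = np.polyfit(x, y, 1)          # least squares fit of y = a + b*x
sigma_hat = 1.0 / b                 # since b = 1/sigma
mu_hat = -a / b                     # since a = -mu/sigma

rho = np.corrcoef(x, y)[0, 1]       # sample correlation coefficient
print(f"mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}, rho = {rho:.4f}")
```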

Example 2:

Rank Regression on X
As was mentioned previously, performing a rank regression on X requires that a straight line be fitted to a set of data points such that the sum of the squares of the horizontal deviations from the points to the fitted line is minimized.

Again, the first task is to bring our function, the probability of failure function for the normal distribution, into a linear form. This step is exactly the same as in the regression on Y analysis, and all other equations apply here as they did there. The deviation from the previous analysis begins at the least squares fit step, where in this case we treat $$x$$  as the dependent variable and  $$y$$  as the independent variable. The best-fitting straight line for the data, for regression on X, is the straight line:


 * $$x=\widehat{a}+\widehat{b}y$$

The corresponding equations for $$\widehat{a}$$  and  $$\widehat{b}$$  are:


 * $$\hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}$$

and:


 * $$\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}$$

where:


 * $${{y}_{i}}={{\Phi }^{-1}}\left[ F({{t}_{i}}) \right]$$

and:


 * $${{x}_{i}}={{t}_{i}}$$

and the $$F({{t}_{i}})$$  values are estimated from the median ranks. Once $$\widehat{a}$$  and  $$\widehat{b}$$  are obtained, solve the above linear equation for the unknown value of  $$y$$  which corresponds to:


 * $$y=-\frac{\widehat{a}}{\widehat{b}}+\frac{1}{\widehat{b}}x$$

Solving for the parameters, we get:


 * $$a=-\frac{\widehat{a}}{\widehat{b}}=-\frac{\mu }{\sigma }\Rightarrow \mu =\widehat{a}$$

and:


 * $$b=\frac{1}{\widehat{b}}=\frac{1}{\sigma }\Rightarrow \sigma =\widehat{b}$$

The correlation coefficient is evaluated as before.
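
Continuing the sketch from the regression on Y section (same hypothetical data and median-rank approximation), the regression on X step simply swaps the roles of the variables in the fit:

```python
import numpy as np
from scipy.stats import norm

times = np.sort(np.array([85.0, 92.0, 99.0, 104.0, 112.0]))  # hypothetical data
N = len(times)
y = norm.ppf((np.arange(1, N + 1) - 0.3) / (N + 0.4))  # Phi^{-1} of median ranks

b, a = np.polyfit(y, times, 1)      # regression on X: fit x = a + b*y
mu_hat, sigma_hat = a, b            # mu = a-hat and sigma = b-hat directly
print(f"mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}")
```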

Example 3:

Maximum Likelihood Estimation
As outlined in Parameter Estimation, maximum likelihood estimation works by developing a likelihood function based on the available data and finding the values of the parameter estimates that maximize it. This can be achieved by using iterative methods to determine the parameter estimate values that maximize the likelihood function, which can be rather difficult and time-consuming, particularly when dealing with the three-parameter distribution. Another method of finding the parameter estimates involves taking the partial derivatives of the likelihood function with respect to the parameters, setting the resulting equations equal to zero, and solving simultaneously to determine the values of the parameter estimates. The log-likelihood functions and associated partial derivatives used to determine maximum likelihood estimates for the normal distribution are covered in Appendix D.
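
As a sketch of the complete-data case only: for the normal distribution the MLEs have closed forms (the sample mean and the biased standard deviation), which scipy.stats.norm.fit reproduces directly; the data below are hypothetical:

```python
import numpy as np
from scipy.stats import norm

times = np.array([85.0, 92.0, 99.0, 104.0, 112.0])   # hypothetical complete data

mu_hat, sigma_hat = norm.fit(times)   # MLEs: sample mean and biased (1/N) std
print(f"mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}")
print(np.isclose(sigma_hat, np.std(times, ddof=0)))  # True: the 1/N estimator
```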

Special Note About Bias

Estimators (i.e., parameter estimates) have properties such as unbiasedness, minimum variance, sufficiency, consistency, squared-error consistency, efficiency and completeness [7][5]. Numerous books and papers deal with these properties, and such coverage is beyond the scope of this reference.

However, we would like to briefly address one of these properties, unbiasedness. An estimator is said to be unbiased if the estimator $$\widehat{\theta }=d({{X}_{1}},{{X}_{2}},...,{{X}_{n}})$$  satisfies the condition  $$E\left[ \widehat{\theta } \right]=\theta $$  for all  $$\theta \in \Omega .$$

Note that $$E\left[ X \right]$$  denotes the expected value of X and is defined (for continuous distributions) by:


 * $$E\left[ X \right]=\int_{\varpi }x\cdot f(x)dx,\ \ X\in \varpi $$

It can be shown [7][5] that the MLE estimator for the mean of the normal (and lognormal) distribution does satisfy the unbiasedness criterion, or $$E\left[ \widehat{\mu } \right]=\mu $$. The same is not true for the estimate of the variance  $$\hat{\sigma }^{2}$$. The maximum likelihood estimate of the variance for the normal distribution is given by:


 * $$\hat{\sigma }^{2}=\frac{1}{N}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{({{t}_{i}}-\bar{T})}^{2}}$$

with a standard deviation of:


 * $$\hat{\sigma }=\sqrt{\frac{1}{N}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{({{t}_{i}}-\bar{T})}^{2}}}$$

These estimates, however, have been shown to be biased. It can be shown [7][5] that the unbiased estimate of the variance and standard deviation for complete data is given by:


 * $$\begin{align} {{\hat{\sigma }}^{2}}= & \left[ \frac{N}{N-1} \right]\cdot \left[ \frac{1}{N}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{({{t}_{i}}-\bar{T})}^{2}} \right]=\frac{1}{N-1}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{({{t}_{i}}-\bar{T})}^{2}} \\ \hat{\sigma }= & \sqrt{\frac{1}{N-1}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{({{t}_{i}}-\bar{T})}^{2}}} \end{align}$$

Note that for larger values of $$N$$,  $$\sqrt{\left[ N/(N-1) \right]}$$  tends to 1.
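
A quick numerical check of the bias correction on hypothetical complete data (NumPy's ddof argument selects the divisor):

```python
import numpy as np

times = np.array([85.0, 92.0, 99.0, 104.0, 112.0])   # hypothetical complete data
N = len(times)

sigma_biased = np.std(times, ddof=0)     # MLE: divides by N
sigma_unbiased = np.std(times, ddof=1)   # corrected: divides by N - 1
print(sigma_biased, sigma_unbiased)
print(np.isclose(sigma_unbiased, np.sqrt(N / (N - 1)) * sigma_biased))  # True
```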

The Use Unbiased Std on Normal Data option in the User Setup under the Calculations tab allows biasing to be considered when estimating the parameters.

When this option is selected, Weibull++ returns the unbiased standard deviation as defined above. Note that this correction applies only to complete data sets; for all other data types, Weibull++ by default returns the biased standard deviation as defined above, regardless of the selection status of this option.


Confidence Bounds
The method used by the application in estimating the different types of confidence bounds for normally distributed data is presented in this section. The complete derivations were presented in detail (for a general function) in Confidence Bounds.

Exact Confidence Bounds
There are closed-form solutions for exact confidence bounds for both the normal and lognormal distributions. However, these closed-form solutions apply only to complete data. To achieve consistent application across all possible data types, Weibull++ always uses the Fisher matrix method or the likelihood ratio method in computing confidence intervals.

Bounds on the Parameters
The lower and upper bounds on the mean, $$\widehat{\mu }$$, are estimated from:


 * $$\begin{align} {{\mu }_{U}}= & \widehat{\mu }+{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (upper bound)} \\ {{\mu }_{L}}= & \widehat{\mu }-{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (lower bound)} \end{align}$$

Since the standard deviation, $$\widehat{\sigma }$$, must be positive,  $$\ln (\widehat{\sigma })$$  is treated as normally distributed, and the bounds are estimated from:


 * $$\begin{align} {{\sigma }_{U}}= & \widehat{\sigma }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })}}{\widehat{\sigma }}}}\text{ (upper bound)} \\ {{\sigma }_{L}}= & \frac{\widehat{\sigma }}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })}}{\widehat{\sigma }}}}}\text{ (lower bound)} \end{align}$$

where $${{K}_{\alpha }}$$  is defined by:


 * $$\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})$$

If $$\delta $$  is the confidence level, then  $$\alpha =\tfrac{1-\delta }{2}$$  for the two-sided bounds and  $$\alpha =1-\delta $$  for the one-sided bounds. The variances and covariances of $$\widehat{\mu }$$  and  $$\widehat{\sigma }$$  are estimated from the Fisher matrix, as follows:


 * $$\left( \begin{matrix} \widehat{Var}\left( \widehat{\mu } \right) & \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right) \\ \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right) & \widehat{Var}\left( \widehat{\sigma } \right) \end{matrix} \right)=\left( \begin{matrix} -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\mu }^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma } \\ -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma } & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }^{2}}} \end{matrix} \right)_{\mu =\widehat{\mu },\sigma =\widehat{\sigma }}^{-1}$$

where $$\Lambda $$ is the log-likelihood function of the normal distribution, described in Parameter Estimation and Appendix D.
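
As a sketch for the complete-data case only, where inverting the Fisher matrix at the MLE gives the closed forms $$Var(\widehat{\mu })={{\widehat{\sigma }}^{2}}/N$$, $$Var(\widehat{\sigma })={{\widehat{\sigma }}^{2}}/(2N)$$ and $$Cov(\widehat{\mu },\widehat{\sigma })=0$$, the parameter bounds can be computed as follows; the data are hypothetical:

```python
import numpy as np
from scipy.stats import norm

times = np.array([85.0, 92.0, 99.0, 104.0, 112.0])   # hypothetical complete data
N = len(times)
mu_hat, sigma_hat = norm.fit(times)                  # MLEs

delta = 0.90                          # two-sided confidence level
K = norm.ppf(1 - (1 - delta) / 2)     # K_alpha, with alpha = (1 - delta)/2

var_mu = sigma_hat**2 / N             # complete-data Fisher variances
var_sigma = sigma_hat**2 / (2 * N)

mu_L = mu_hat - K * np.sqrt(var_mu)
mu_U = mu_hat + K * np.sqrt(var_mu)

factor = np.exp(K * np.sqrt(var_sigma) / sigma_hat)  # log transform keeps sigma > 0
sigma_L, sigma_U = sigma_hat / factor, sigma_hat * factor
print(mu_L, mu_U, sigma_L, sigma_U)
```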

Bounds on Reliability
The reliability of the normal distribution is:


 * $$\widehat{R}(t;\widehat{\mu },\widehat{\sigma })=\int_{t}^{\infty }\frac{1}{\widehat{\sigma }\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{x-\widehat{\mu }}{\widehat{\sigma }} \right)}^{2}}}}dx$$

Let $$\widehat{z}=\tfrac{t-\widehat{\mu }}{\widehat{\sigma }}$$; the above equation then becomes:


 * $$\hat{R}(\widehat{z})=\int_{\widehat{z}(t)}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz$$

The bounds on $$z$$  are estimated from:


 * $$\begin{align} {{z}_{U}}= & \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\ {{z}_{L}}= & \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \end{align}$$

where:


 * $$Var(\widehat{z})={{\left( \frac{\partial \widehat{z}}{\partial \mu } \right)}^{2}}Var(\widehat{\mu })+{{\left( \frac{\partial \widehat{z}}{\partial \sigma } \right)}^{2}}Var(\widehat{\sigma })+2\left( \frac{\partial \widehat{z}}{\partial \mu } \right)\left( \frac{\partial \widehat{z}}{\partial \sigma } \right)Cov\left( \widehat{\mu },\widehat{\sigma } \right)$$

or:


 * $$Var(\widehat{z})=\frac{1}{{{\widehat{\sigma }}^{2}}}\left[ Var(\widehat{\mu })+{{\widehat{z}}^{2}}Var(\widehat{\sigma })+2\cdot \widehat{z}\cdot Cov\left( \widehat{\mu },\widehat{\sigma } \right) \right]$$

The upper and lower bounds on reliability are:


 * $$\begin{align} {{R}_{U}}= & \int_{{{z}_{L}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (upper bound)} \\ {{R}_{L}}= & \int_{{{z}_{U}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (lower bound)} \end{align}$$
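
A sketch of these steps, reusing the complete-data Fisher variances from the previous sketch (all values hypothetical; note that the lower bound on $$z$$  yields the upper bound on reliability):

```python
import numpy as np
from scipy.stats import norm

mu_hat, sigma_hat, N = 98.4, 9.2, 5          # hypothetical MLEs and sample size
var_mu = sigma_hat**2 / N                    # complete-data Fisher variances
var_sigma = sigma_hat**2 / (2 * N)
cov = 0.0                                    # Cov(mu, sigma) = 0 for complete data
K = norm.ppf(1 - (1 - 0.90) / 2)             # K_alpha for 90% two-sided bounds

t = 90.0
z_hat = (t - mu_hat) / sigma_hat
var_z = (var_mu + z_hat**2 * var_sigma + 2 * z_hat * cov) / sigma_hat**2

z_L = z_hat - K * np.sqrt(var_z)
z_U = z_hat + K * np.sqrt(var_z)
R_L, R_U = norm.sf(z_U), norm.sf(z_L)        # lower z gives the upper R bound
print(R_L, norm.sf(z_hat), R_U)
```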

Bounds on Time
The bounds around time for a given normal percentile (unreliability) are estimated by first solving the reliability equation with respect to time, as follows:


 * $$\hat{T}(\widehat{\mu },\widehat{\sigma })=\widehat{\mu }+z\cdot \widehat{\sigma }$$

where:


 * $$z={{\Phi }^{-1}}\left[ F(T) \right]$$

and:


 * $$\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz$$

The next step is to calculate the variance of $$\hat{T}(\widehat{\mu },\widehat{\sigma })$$, or:


 * $$\begin{align} Var(\hat{T})= & {{\left( \frac{\partial \hat{T}}{\partial \mu } \right)}^{2}}Var(\widehat{\mu })+{{\left( \frac{\partial \hat{T}}{\partial \sigma } \right)}^{2}}Var(\widehat{\sigma }) \\ & +2\left( \frac{\partial \hat{T}}{\partial \mu } \right)\left( \frac{\partial \hat{T}}{\partial \sigma } \right)Cov\left( \widehat{\mu },\widehat{\sigma } \right) \\ Var(\hat{T})= & Var(\widehat{\mu })+{{z}^{2}}Var(\widehat{\sigma })+2\cdot z\cdot Cov\left( \widehat{\mu },\widehat{\sigma } \right) \end{align}$$

The upper and lower bounds are then found by:


 * $$\begin{align} {{T}_{U}}= & \hat{T}+{{K}_{\alpha }}\sqrt{Var(\hat{T})}\text{ (upper bound)} \\ {{T}_{L}}= & \hat{T}-{{K}_{\alpha }}\sqrt{Var(\hat{T})}\text{ (lower bound)} \end{align}$$
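
The corresponding sketch for the time bounds, with the same hypothetical variance estimates:

```python
import numpy as np
from scipy.stats import norm

mu_hat, sigma_hat, N = 98.4, 9.2, 5          # hypothetical MLEs and sample size
var_mu = sigma_hat**2 / N
var_sigma = sigma_hat**2 / (2 * N)
cov = 0.0
K = norm.ppf(1 - (1 - 0.90) / 2)

F = 0.10                                     # target unreliability
z = norm.ppf(F)                              # z = Phi^{-1}[F(T)]
T_hat = mu_hat + z * sigma_hat

var_T = var_mu + z**2 * var_sigma + 2 * z * cov
T_L = T_hat - K * np.sqrt(var_T)
T_U = T_hat + K * np.sqrt(var_T)
print(T_L, T_hat, T_U)
```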

Example 4:

Bounds on Parameters
As covered in Confidence Bounds, the likelihood confidence bounds are calculated by finding values for $${{\theta }_{1}}$$  and  $${{\theta }_{2}}$$  that satisfy:


 * $$-2\cdot \text{ln}\left( \frac{L({{\theta }_{1}},{{\theta }_{2}})}{L({{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}})} \right)=\chi _{\alpha ;1}^{2}$$

This equation can be rewritten as:


 * $$L({{\theta }_{1}},{{\theta }_{2}})=L({{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}})\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}$$

For complete data, the likelihood formula for the normal distribution is given by:


 * $$L(\mu ,\sigma )=\underset{i=1}{\overset{N}{\mathop \prod }}\,f({{t}_{i}};\mu ,\sigma )=\underset{i=1}{\overset{N}{\mathop \prod }}\,\frac{1}{\sigma \cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{{{t}_{i}}-\mu }{\sigma } \right)}^{2}}}}$$

where the $${{t}_{i}}$$  values represent the original time to failure data. For a given value of $$\alpha $$, values for  $$\mu $$  and  $$\sigma $$  can be found which represent the maximum and minimum values that satisfy the above likelihood ratio equation. These represent the confidence bounds for the parameters at a confidence level $$\delta ,$$  where  $$\alpha =\delta $$  for two-sided bounds and  $$\alpha =2\delta -1$$  for one-sided.
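
A brute-force sketch of these bounds: evaluate the log-likelihood on a grid of $$(\mu ,\sigma )$$  pairs and take the extreme values inside the chi-squared contour (a real implementation would solve for the contour extremes directly); the data and grid limits are hypothetical:

```python
import numpy as np
from scipy.stats import norm, chi2

times = np.array([85.0, 92.0, 99.0, 104.0, 112.0])   # hypothetical complete data
mu_hat, sigma_hat = norm.fit(times)
ll_hat = np.sum(norm.logpdf(times, mu_hat, sigma_hat))

delta = 0.90                                  # two-sided confidence level
cutoff = ll_hat - chi2.ppf(delta, df=1) / 2   # ln L >= ln L-hat - chi2/2

MU, SIG = np.meshgrid(np.linspace(mu_hat - 20, mu_hat + 20, 400),
                      np.linspace(sigma_hat / 4, sigma_hat * 4, 400))
LL = np.sum(norm.logpdf(times[:, None, None], MU, SIG), axis=0)
inside = LL >= cutoff                         # points on or inside the contour

print("mu bounds:   ", MU[inside].min(), MU[inside].max())
print("sigma bounds:", SIG[inside].min(), SIG[inside].max())
```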

Example 5:

Bounds on Time and Reliability
In order to calculate the bounds on a time estimate for a given reliability, or on a reliability estimate for a given time, the likelihood function needs to be rewritten in terms of one parameter and time/reliability, so that the maximum and minimum values of the time can be observed as the parameter is varied. This can be accomplished by substituting a form of the normal reliability equation into the likelihood function. The normal reliability equation can be written as:


 * $$R=1-\Phi \left( \frac{t-\mu }{\sigma } \right)$$

This can be rearranged to the form:


 * $$\mu =t-\sigma \cdot {{\Phi }^{-1}}(1-R)$$

where $${{\Phi }^{-1}}$$  is the inverse standard normal. This equation can now be substituted into the likelihood ratio equation to produce an equation in terms of $$\sigma ,$$   $$t$$  and  $$R$$:


 * $$L(\sigma ,t/R)=\underset{i=1}{\overset{N}{\mathop \prod }}\,\frac{1}{\sigma \cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{{{t}_{i}}-\left[ t-\sigma \cdot {{\Phi }^{-1}}(1-R) \right]}{\sigma } \right)}^{2}}}}$$

The unknown parameter $$t/R$$  depends on what type of bounds are being determined. If one is trying to determine the bounds on time for a given reliability, then $$R$$  is a known constant and  $$t$$  is the unknown parameter. Conversely, if one is trying to determine the bounds on reliability for a given time, then $$t$$  is a known constant and  $$R$$  is the unknown parameter. The likelihood ratio equation can be used to solve the values of interest.
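
A sketch for the bounds on time at a given reliability, applying the substitution above on a grid of $$(t,\sigma )$$  pairs; the data and grid limits are hypothetical:

```python
import numpy as np
from scipy.stats import norm, chi2

times = np.array([85.0, 92.0, 99.0, 104.0, 112.0])   # hypothetical complete data
mu_hat, sigma_hat = norm.fit(times)
cutoff = (np.sum(norm.logpdf(times, mu_hat, sigma_hat))
          - chi2.ppf(0.90, df=1) / 2)

R = 0.90
zR = norm.ppf(1 - R)                                  # Phi^{-1}(1 - R)

T, SIG = np.meshgrid(np.linspace(60, 110, 500),
                     np.linspace(sigma_hat / 4, sigma_hat * 4, 500))
MU = T - SIG * zR                                     # substitute mu = t - sigma*z_R
LL = np.sum(norm.logpdf(times[:, None, None], MU, SIG), axis=0)
inside = LL >= cutoff

print("time bounds for R = 0.90:", T[inside].min(), T[inside].max())
```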

Example 6:

Example 7:

Bounds on Parameters
From Confidence Bounds, we know that the marginal posterior distribution of $$\mu $$  can be written as:


 * $$\begin{align} f(\mu |Data)= & \int_{0}^{\infty }f(\mu ,\sigma |Data)d\sigma  \\ = & \frac{\int_{0}^{\infty }L(Data|\mu ,\sigma )\varphi (\mu )\varphi (\sigma )d\sigma }{\int_{0}^{\infty }\int_{-\infty }^{\infty }L(Data|\mu ,\sigma )\varphi (\mu )\varphi (\sigma )d\mu d\sigma } \end{align}$$

where:


 * $$\varphi (\sigma )=\tfrac{1}{\sigma }$$ is the non-informative prior of  $$\sigma $$.


 * $$\varphi (\mu )$$ is a uniform distribution from $$-\infty $$  to  $$+\infty $$, the non-informative prior of  $$\mu .$$

Using the above prior distributions, $$f(\mu |Data)$$  can be rewritten as:


 * $$f(\mu |Data)=\frac{\int_{0}^{\infty }L(Data|\mu ,\sigma )\tfrac{1}{\sigma }d\sigma }{\int_{0}^{\infty }\int_{-\infty }^{\infty }L(Data|\mu ,\sigma )\tfrac{1}{\sigma }d\mu d\sigma }$$

The one-sided upper bound of  $$\mu $$  is:


 * $$CL=P(\mu \le {{\mu }_{U}})=\int_{-\infty }^{{{\mu }_{U}}}f(\mu |Data)d\mu $$

The one-sided lower bound of $$\mu $$  is:


 * $$1-CL=P(\mu \le {{\mu }_{L}})=\int_{-\infty }^{{{\mu }_{L}}}f(\mu |Data)d\mu $$

The two-sided bounds of $$\mu $$  are:


 * $$CL=P({{\mu }_{L}}\le \mu \le {{\mu }_{U}})=\int_{{{\mu }_{L}}}^{{{\mu }_{U}}}f(\mu |Data)d\mu $$

The same method can be used to obtain the bounds of $$\sigma $$.
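
A numerical sketch of the one-sided upper bound on $$\mu $$, replacing the improper integration limits with wide finite windows (an assumption of this sketch) and solving for $${{\mu }_{U}}$$  with a root finder; the data are hypothetical:

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.optimize import brentq
from scipy.stats import norm

times = np.array([85.0, 92.0, 99.0, 104.0, 112.0])   # hypothetical complete data

def integrand(sigma, mu):
    # L(Data|mu, sigma) * (1/sigma); dblquad integrates sigma (inner) first
    return np.exp(np.sum(norm.logpdf(times, loc=mu, scale=sigma))) / sigma

mu_lo, mu_hi, s_lo, s_hi = 60.0, 140.0, 0.5, 60.0    # wide finite windows

denom, _ = dblquad(integrand, mu_lo, mu_hi, s_lo, s_hi)

def posterior_cdf(mu_U):
    num, _ = dblquad(integrand, mu_lo, mu_U, s_lo, s_hi)
    return num / denom

CL = 0.90
mu_U = brentq(lambda m: posterior_cdf(m) - CL, mu_lo + 1, mu_hi - 1)
print(f"one-sided upper bound on mu at {CL:.0%} confidence: {mu_U:.2f}")
```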

Bounds on Time (Type 1)
The reliable life for the normal distribution is:


 * $$T=\mu +\sigma {{\Phi }^{-1}}(1-R)$$

The one-sided upper bound on time is:


 * $$CL=\Pr (T\le {{T}_{U}})=\Pr (\mu +\sigma {{\Phi }^{-1}}(1-R)\le {{T}_{U}})$$

The above equation can be rewritten in terms of $$\mu $$  as:


 * $$CL=\Pr (\mu \le {{T}_{U}}-\sigma {{\Phi }^{-1}}(1-R))$$

From the posterior distribution of $$\mu$$:


 * $$CL=\frac{\int_{0}^{\infty }\int_{-\infty }^{{{T}_{U}}-\sigma {{\Phi }^{-1}}(1-R)}L(\sigma ,\mu )\tfrac{1}{\sigma }d\mu d\sigma }{\int_{0}^{\infty }\int_{-\infty }^{\infty }L(\sigma ,\mu )\tfrac{1}{\sigma }d\mu d\sigma }$$

The same method can be applied for one-sided lower bounds and two-sided bounds on time.

Bounds on Reliability (Type 2)
The one-sided upper bound on reliability is:


 * $$CL=\Pr (R\le {{R}_{U}})=\Pr (\mu \le T-\sigma {{\Phi }^{-1}}(1-{{R}_{U}}))$$

From the posterior distribution of $$\mu$$:


 * $$CL=\frac{\int_{0}^{\infty }\int_{-\infty }^{T-\sigma {{\Phi }^{-1}}(1-{{R}_{U}})}L(\sigma ,\mu )\tfrac{1}{\sigma }d\mu d\sigma }{\int_{0}^{\infty }\int_{-\infty }^{\infty }L(\sigma ,\mu )\tfrac{1}{\sigma }d\mu d\sigma }$$

The same method can be used to calculate the one-sided lower bounds and the two-sided bounds on reliability.