== Bayesian Confidence Bounds ==


A fourth method of estimating confidence bounds is based on Bayes' theorem. This type of confidence bound relies on a different school of thought in statistical analysis, where prior information is combined with sample data in order to make inferences on model parameters and their functions. An introduction to Bayesian methods is given in the [[Parameter Estimation]] chapter. Bayesian confidence bounds are derived from Bayes' rule, which states that:


::<math>f(\theta |Data)=\frac{L(Data|\theta )\varphi (\theta )}{\underset{\varsigma }{\int{\mathop{}_{}^{}}}\,L(Data|\theta )\varphi (\theta )d\theta }</math>


:where:
:#<math>f(\theta |Data)</math> is the posterior ''pdf'' of <math>\theta </math>
:#<math>\theta </math> is the parameter vector of the chosen distribution (i.e., Weibull, lognormal, etc.)
:#<math>L(\bullet )</math> is the likelihood function
:#<math>\varphi (\theta )</math> is the prior ''pdf'' of the parameter vector <math>\theta </math>
:#<math>\varsigma </math> is the range of <math>\theta </math>.


In other words, the prior knowledge is provided in the form of the prior ''pdf'' of the parameters, which in turn is combined with the sample data in order to obtain the posterior ''pdf''. Different forms of prior information exist, such as past data, expert opinion or non-informative priors (refer to the [[Parameter Estimation]] chapter). It can be seen from the above Bayes' rule formula that we are now dealing with distributions of parameters rather than single-value parameters. For example, consider a one-parameter distribution with a positive parameter <math>{{\theta }_{1}}</math>. Given a set of sample data and a prior distribution <math>\varphi ({{\theta }_{1}})</math> for <math>{{\theta }_{1}}</math>, the above Bayes' rule formula can be written as:


::<math>f({{\theta }_{1}}|Data)=\frac{L(Data|{{\theta }_{1}})\varphi ({{\theta }_{1}})}{\int_{0}^{\infty }L(Data|{{\theta }_{1}})\varphi ({{\theta }_{1}})d{{\theta }_{1}}}</math>


In other words, we now have the distribution of <math>{{\theta }_{1}}</math>, and we can make statistical inferences on this parameter, such as calculating probabilities. Specifically, the probability that <math>{{\theta }_{1}}</math> is less than or equal to a value <math>x</math>, <math>P({{\theta }_{1}}\le x)</math>, can be obtained by integrating the posterior probability density function (''pdf''), or:


::<math>P({{\theta }_{1}}\le x)=\int_{0}^{x}f({{\theta }_{1}}|Data)d{{\theta }_{1}}</math>


The above equation is the posterior ''cdf'', which essentially calculates a confidence bound on the parameter, where <math>P({{\theta }_{1}}\le x)</math> is the confidence level and <math>x</math> is the confidence bound. Substituting the posterior ''pdf'' into the above posterior ''cdf'' yields:


::<math>CL=\frac{\int_{0}^{x}L(Data|{{\theta }_{1}})\varphi ({{\theta }_{1}})d{{\theta }_{1}}}{\int_{0}^{\infty }L(Data|{{\theta }_{1}})\varphi ({{\theta }_{1}})d{{\theta }_{1}}}</math>
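To make this concrete, here is a minimal numerical sketch (in Python, using SciPy) of solving the above equation for the bound <math>x</math> at a given confidence level. The exponential likelihood, the failure times and the root-finding bracket are illustrative assumptions, not part of this template:

<pre>
# Minimal sketch: solve CL = posterior cdf at x for the bound x.
# Assumptions (illustrative only): exponential likelihood with
# complete failure-time data and a uniform (constant) prior.
import numpy as np
from scipy import integrate, optimize

times = np.array([16.0, 34.0, 53.0, 75.0, 93.0, 120.0])  # hypothetical data
n, total = len(times), times.sum()

def likelihood(theta1):
    # L(Data|theta1) for an exponential distribution, complete data
    return theta1**n * np.exp(-theta1 * total)

def posterior_cdf(x):
    # P(theta1 <= x | Data): ratio of the two integrals above
    # (the constant uniform prior cancels, so it is omitted)
    num, _ = integrate.quad(likelihood, 0.0, x)
    den, _ = integrate.quad(likelihood, 0.0, np.inf)
    return num / den

CL = 0.90
# One-sided upper bound on theta1: the x satisfying posterior_cdf(x) = CL
x_upper = optimize.brentq(lambda x: posterior_cdf(x) - CL, 1e-9, 1.0)
print(f"{CL:.0%} upper bound on theta1: {x_upper:.5f}")
</pre>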


The only question at this point is, what do we use as a prior distribution of <math>{{\theta }_{1}}</math>? For the confidence bounds calculation application, non-informative prior distributions are utilized. Non-informative prior distributions are distributions that have no population basis and play a minimal role in the posterior distribution. The idea behind the use of non-informative prior distributions is to make inferences that are not affected by external information, or when external information is not available. In the general case of calculating confidence bounds using Bayesian methods, the method should be independent of external information and should rely only on the current data. Therefore, non-informative priors are used. Specifically, the uniform distribution is used as a prior distribution for the different parameters of the selected fitted distribution. For example, if the Weibull distribution is fitted to the data, the prior distributions for beta and eta are assumed to be uniform. The above equation can be generalized for any distribution having a vector of parameters <math>\theta </math>, yielding the general equation for calculating Bayesian confidence bounds:


<br>


::<math>CL=\frac{\underset{\xi }{\int{\mathop{}_{}^{}}}\,L(Data|\theta )\varphi (\theta )d\theta }{\underset{\varsigma }{\int{\mathop{}_{}^{}}}\,L(Data|\theta )\varphi (\theta )d\theta }</math>


where:
:#<math>CL</math> is the confidence level
:#<math>\theta </math> is the parameter vector
:#<math>L(\bullet )</math> is the likelihood function
:#<math>\varphi (\theta )</math> is the prior ''pdf'' of the parameter vector <math>\theta </math>
:#<math>\varsigma </math> is the range of <math>\theta </math>
:#<math>\xi </math> is the range in which <math>\theta </math> changes from <math>\Psi (T,R)</math> to <math>\theta </math>'s maximum value, or from <math>\theta </math>'s minimum value to <math>\Psi (T,R)</math>
:#<math>\Psi (T,R)</math> is a function such that if <math>T</math> is given, then the bounds are calculated for <math>R</math>, and if <math>R</math> is given, then the bounds are calculated for <math>T</math>.


If <math>T</math> is given, then from the above equation and <math>\Psi </math>, and for a given <math>CL</math>, the bounds on <math>R</math> are calculated. If <math>R</math> is given, then from the above equation and <math>\Psi </math>, and for a given <math>CL</math>, the bounds on <math>T</math> are calculated.


=== Confidence Bounds on Time (Type 1) ===


For a given failure time distribution and a given reliability <math>R</math>, <math>T(R)</math> is a function of <math>R</math> and the distribution parameters. To illustrate the procedure for obtaining confidence bounds, the two-parameter Weibull distribution is used as an example. The bounds for other distributions can be obtained in a similar fashion. For the two-parameter Weibull distribution:


::<math>T(R)=\eta \exp (\frac{\ln (-\ln R)}{\beta })</math>
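This expression follows directly from inverting the Weibull reliability function, <math>R=\exp (-{{(\tfrac{T}{\eta })}^{\beta }})</math>:

::<math>\ln (-\ln R)=\beta \ln \left( \frac{T}{\eta } \right)\quad \Rightarrow \quad T=\eta \exp \left( \frac{\ln (-\ln R)}{\beta } \right)</math>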


For a given reliability, the Bayesian one-sided upper bound estimate for <math>T(R)</math> is:


::<math>CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(T\le {{T}_{U}})=\int_{0}^{{{T}_{U}}(R)}f(T|Data,R)dT</math>


where <math>f(T|Data,R)</math> is the posterior distribution of time <math>T</math>. Using the above equation, we have the following:


::<math>CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(T\le {{T}_{U}})=\underset{}{\overset{}{\mathop{\Pr }}}\,(\eta \exp (\frac{\ln (-\ln R)}{\beta })\le {{T}_{U}})</math>


The above equation can be rewritten in terms of <math>\eta </math> as:


::<math>CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(\eta \le {{T}_{U}}\exp (-\frac{\ln (-\ln R)}{\beta }))</math>


Applying Bayes' rule and assuming that the priors of <math>\beta </math> and <math>\eta </math> are independent, we obtain the following relationship:


::<math>CL=\frac{\int_{0}^{\infty }\int_{0}^{{{T}_{U}}\exp (-\frac{\ln (-\ln R)}{\beta })}L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }</math>


The above equation can be solved for <math>{{T}_{U}}(R)</math> (a numerical sketch follows the list below), where:


:#<math>CL</math> is the confidence level,
:#<math>\varphi (\beta )</math> is the prior ''pdf'' of the parameter <math>\beta </math>. For a non-informative prior distribution, <math>\varphi (\beta )=\tfrac{1}{\beta }.</math>
:#<math>\varphi (\eta )</math> is the prior ''pdf'' of the parameter <math>\eta </math>. For a non-informative prior distribution, <math>\varphi (\eta )=\tfrac{1}{\eta }.</math>
:#<math>L(\bullet )</math> is the likelihood function.
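The double integrals above generally require numerical evaluation. The following minimal sketch (continuing the Python/SciPy approach used earlier) solves the equation for <math>{{T}_{U}}(R)</math> by root finding. The failure times, finite integration cutoffs and root brackets are illustrative assumptions that would need to be adapted to real data:

<pre>
# Sketch: solve the double-integral CL equation for T_U(R).
# The data, cutoffs and root brackets below are assumptions
# made for illustration; they are not part of this template.
import numpy as np
from scipy import integrate, optimize

times = np.array([16.0, 34.0, 53.0, 75.0, 93.0, 120.0])  # hypothetical data

def integrand(eta, beta):
    # Weibull likelihood for complete data, times the
    # non-informative priors 1/beta and 1/eta
    z = times / eta
    like = np.prod((beta / eta) * z**(beta - 1.0) * np.exp(-z**beta))
    return like / (beta * eta)

# Finite cutoffs stand in for the infinite integration limits
BETA_MIN, BETA_MAX = 1e-2, 20.0
ETA_MIN, ETA_MAX = 1e-2, 2000.0

def posterior_cl(t_u, R):
    # Numerator integrates eta up to T_U * exp(-ln(-ln R)/beta)
    def upper(beta):
        expo = min(-np.log(-np.log(R)) / beta, 50.0)  # overflow guard
        return min(max(t_u * np.exp(expo), ETA_MIN), ETA_MAX)
    num, _ = integrate.dblquad(integrand, BETA_MIN, BETA_MAX,
                               lambda b: ETA_MIN, upper)
    den, _ = integrate.dblquad(integrand, BETA_MIN, BETA_MAX,
                               lambda b: ETA_MIN, lambda b: ETA_MAX)
    return num / den

R, CL = 0.90, 0.90
T_U = optimize.brentq(lambda t: posterior_cl(t, R) - CL, 1.0, 500.0)
print(f"T_U({R}) at {CL:.0%} confidence: {T_U:.2f}")
</pre>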


<br>


The same method can be used to get the one-sided lower bound of <math>T(R)</math> from:


::<math>CL=\frac{\int_{0}^{\infty }\int_{{{T}_{L}}\exp (\frac{-\ln (-\ln R)}{\beta })}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }</math>


The above equation can be solved to get <math>{{T}_{L}}(R)</math>. <br> The Bayesian two-sided bounds estimate for <math>T(R)</math> is:


::<math>CL=\int_{{{T}_{L}}(R)}^{{{T}_{U}}(R)}f(T|Data,R)dT</math>

which is equivalent to:

::<math>(1+CL)/2=\int_{0}^{{{T}_{U}}(R)}f(T|Data,R)dT</math>

and:

::<math>(1-CL)/2=\int_{0}^{{{T}_{L}}(R)}f(T|Data,R)dT</math>


Using the same method as for the one-sided bounds, <math>{{T}_{U}}(R)</math> and <math>{{T}_{L}}(R)</math> can be solved, as the short continuation below illustrates.
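For instance, continuing the earlier sketch (and reusing its hypothetical <code>posterior_cl</code> function, data and brackets), the two-sided bounds follow from the two equations just given:

<pre>
# Two-sided bounds, reusing posterior_cl from the sketch above:
# T_U solves the posterior cdf = (1 + CL)/2, T_L solves it = (1 - CL)/2
T_U2 = optimize.brentq(lambda t: posterior_cl(t, R) - (1 + CL) / 2, 1.0, 500.0)
T_L2 = optimize.brentq(lambda t: posterior_cl(t, R) - (1 - CL) / 2, 1.0, 500.0)
print(f"Two-sided {CL:.0%} bounds on T({R}): [{T_L2:.2f}, {T_U2:.2f}]")
</pre>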


=== Confidence Bounds on Reliability (Type 2) ===


For a given failure time distribution and a given time <math>T</math>, <math>R(T)</math> is a function of <math>T</math> and the distribution parameters. To illustrate the procedure for obtaining confidence bounds, the two-parameter Weibull distribution is used as an example. The bounds for other distributions can be obtained in a similar fashion. For the two-parameter Weibull distribution:


::<math>R=\exp (-{{(\frac{T}{\eta })}^{\beta }})</math>


The Bayesian one-sided upper bound estimate for <math>R(T)</math> is:


::<math>CL=\int_{0}^{{{R}_{U}}(T)}f(R|Data,T)dR</math>


As with the bounds on time, the following is obtained:


::<math>CL=\frac{\int_{0}^{\infty }\int_{0}^{T\exp (-\frac{\ln (-\ln {{R}_{U}})}{\beta })}L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }</math>


The above equation can be solved to get <math>{{R}_{U}}(T)</math>, as in the sketch below.
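A minimal sketch of that solution, reusing the hypothetical <code>integrand</code> function, cutoffs and SciPy imports from the Type 1 sketch above, with a mission time assumed purely for illustration:

<pre>
# Solve the CL equation for R_U at a fixed time T, reusing the
# integrand, cutoffs and imports from the Type 1 sketch above.
def posterior_cl_R(r_u, T):
    # Numerator integrates eta up to T * exp(-ln(-ln R_U)/beta)
    def upper(beta):
        expo = min(-np.log(-np.log(r_u)) / beta, 50.0)  # overflow guard
        return min(max(T * np.exp(expo), ETA_MIN), ETA_MAX)
    num, _ = integrate.dblquad(integrand, BETA_MIN, BETA_MAX,
                               lambda b: ETA_MIN, upper)
    den, _ = integrate.dblquad(integrand, BETA_MIN, BETA_MAX,
                               lambda b: ETA_MIN, lambda b: ETA_MAX)
    return num / den

T, CL = 50.0, 0.90  # hypothetical mission time and confidence level
R_U = optimize.brentq(lambda r: posterior_cl_R(r, T) - CL, 1e-6, 1 - 1e-6)
print(f"R_U({T:.0f}) at {CL:.0%} confidence: {R_U:.4f}")
</pre>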


The Bayesian one-sided lower bound estimate for <math>R(T)</math> is:

::<math>1-CL=\int_{0}^{{{R}_{L}}(T)}f(R|Data,T)dR</math>

Using the posterior distribution, the following is obtained:

::<math>CL=\frac{\int_{0}^{\infty }\int_{T\exp (-\frac{\ln (-\ln {{R}_{L}})}{\beta })}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }</math>


The above equation can be solved to get <math>{{R}_{L}}(T)</math>. <br> The Bayesian two-sided bounds estimate for <math>R(T)</math> is:


::<math>CL=\int_{{{R}_{L}}(T)}^{{{R}_{U}}(T)}f(R|Data,T)dR</math>

which is equivalent to:

::<math>\int_{0}^{{{R}_{U}}(T)}f(R|Data,T)dR=(1+CL)/2</math>

and:

::<math>\int_{0}^{{{R}_{L}}(T)}f(R|Data,T)dR=(1-CL)/2</math>


<br> Using the same method as for the one-sided bounds, <math>{{R}_{U}}(T)</math> and <math>{{R}_{L}}(T)</math> can be solved.
