
Bayesian Confidence Bounds
A fourth method of estimating confidence bounds is based on Bayes's theorem. These bounds rely on a different school of thought in statistical analysis, where prior information is combined with sample data in order to make inferences on model parameters and their functions. An introduction to Bayesian methods is given in Chapter Parameter Estimation. Bayesian confidence bounds are derived from Bayes's rule, which states that:


 * $$f(\theta |Data)=\frac{L(Data|\theta )\varphi (\theta )}{\int_{\varsigma }L(Data|\theta )\varphi (\theta )d\theta }$$


 * where:
 * $$f(\theta |Data)$$ is the $$posterior$$ $$pdf$$ of $$\theta $$
 * $$\theta $$ is the parameter vector of the chosen distribution (e.g., Weibull, lognormal, etc.)
 * $$L(\bullet )$$ is the likelihood function
 * $$\varphi (\theta )$$ is the $$prior$$ $$pdf$$ of the parameter vector $$\theta $$
 * $$\varsigma $$ is the range of $$\theta $$.

In other words, the prior knowledge is provided in the form of the prior $$pdf$$ of the parameters, which is then combined with the sample data in order to obtain the posterior $$pdf$$. Different forms of prior information exist, such as past data, expert opinion or non-informative priors (refer to Chapter Parameter Estimation). It can be seen from Eqn. (BayesRule) that we are now dealing with distributions of parameters rather than single-value parameters. For example, consider a one-parameter distribution with a positive parameter $${{\theta }_{1}}$$. Given a set of sample data and a prior distribution $$\varphi ({{\theta }_{1}})$$ for $${{\theta }_{1}}$$, Eqn. (BayesRule) can be written as:


 * $$f({{\theta }_{1}}|Data)=\frac{L(Data|{{\theta }_{1}})\varphi ({{\theta }_{1}})}{\int_{0}^{\infty }L(Data|{{\theta }_{1}})\varphi ({{\theta }_{1}})d{{\theta }_{1}}}$$

In other words, we now have the distribution of $${{\theta }_{1}}$$, and we can make statistical inferences on this parameter, such as calculating probabilities. Specifically, the probability that $${{\theta }_{1}}$$ is less than or equal to a value $$x$$, $$P({{\theta }_{1}}\le x)$$, can be obtained by integrating Eqn. (BayesEX), or:


 * $$P({{\theta }_{1}}\le x)=\int_{0}^{x}f({{\theta }_{1}}|Data)d{{\theta }_{1}}$$

Eqn. (IntBayes) essentially calculates a confidence bound on the parameter, where $$P({{\theta }_{1}}\le x)$$ is the confidence level and $$x$$ is the confidence bound. Substituting Eqn. (BayesEX) into Eqn. (IntBayes) yields:


 * $$CL=\frac{\int_{0}^{x}L(Data|{{\theta }_{1}})\varphi ({{\theta }_{1}})d{{\theta }_{1}}}{\int_{0}^{\infty }L(Data|{{\theta }_{1}})\varphi ({{\theta }_{1}})d{{\theta }_{1}}}$$
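In practice, this ratio is evaluated numerically and then solved for the bound $$x$$ at a desired confidence level. The sketch below illustrates this for an exponential distribution with rate parameter $$\lambda$$; the failure times are hypothetical, the non-informative prior $$\varphi (\lambda )=1/\lambda $$ is an assumption chosen for illustration, and the infinite upper integration limit is truncated where the posterior mass becomes negligible:

```python
import math

# Hypothetical failure-time sample (an illustrative assumption).
data = [12.0, 35.0, 61.0, 88.0, 120.0]
n, total = len(data), sum(data)

def likelihood(lam):
    # Exponential likelihood: L(Data|lambda) = lambda^n * exp(-lambda * sum(t))
    return lam ** n * math.exp(-lam * total)

def prior(lam):
    # Non-informative prior phi(lambda) = 1/lambda (assumption for illustration)
    return 1.0 / lam

def integral(f, a, b, steps=20000):
    # Trapezoidal rule; adequate for this smooth, unimodal integrand.
    h = (b - a) / steps
    s = 0.5 * (f(a) + f(b))
    for i in range(1, steps):
        s += f(a + i * h)
    return s * h

# Truncate the infinite upper limit: the posterior is negligible beyond 1.0.
lo, hi = 1e-6, 1.0
norm = integral(lambda l: likelihood(l) * prior(l), lo, hi)

def confidence_level(x):
    # CL = P(lambda <= x | Data), the ratio in Eqn. (BayesCLEX)
    return integral(lambda l: likelihood(l) * prior(l), lo, x) / norm

# Bisection: solve confidence_level(x) = 0.90 for the one-sided upper bound.
target, left, right = 0.90, lo, hi
for _ in range(60):
    mid = 0.5 * (left + right)
    if confidence_level(mid) < target:
        left = mid
    else:
        right = mid
upper_bound = 0.5 * (left + right)
print(f"90% upper bound on lambda: {upper_bound:.5f}")
```

The same pattern (numerical integration plus root finding on the confidence level) carries over to the multi-parameter cases below.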

The only question at this point is what to use as a prior distribution of $${{\theta }_{1}}$$. For the confidence bounds calculation application, non-informative prior distributions are utilized. Non-informative prior distributions have no population basis and play a minimal role in the posterior distribution. The idea behind their use is to make inferences that are not affected by external information, or that can be made when external information is not available. In the general case of calculating confidence bounds using Bayesian methods, the method should be independent of external information and should rely only on the current data. Therefore, non-informative priors are used. Specifically, the uniform distribution is used as a prior distribution for the different parameters of the selected fitted distribution. For example, if the Weibull distribution is fitted to the data, the prior distributions for beta and eta are assumed to be uniform. Eqn. (BayesCLEX) can be generalized for any distribution with a parameter vector $$\theta $$, yielding the general equation for calculating Bayesian confidence bounds:


 * $$CL=\frac{\int_{\xi }L(Data|\theta )\varphi (\theta )d\theta }{\int_{\varsigma }L(Data|\theta )\varphi (\theta )d\theta }$$


 * where:
 * $$CL$$ is confidence level
 * $$\theta $$ is the parameter vector
 * $$L(\bullet )$$ is the likelihood function
 * $$\varphi (\theta )$$ is the prior $$pdf$$ of the parameter vector $$\theta $$
 * $$\varsigma $$ is the range of $$\theta $$
 * $$\xi $$ is the range over which $$\theta $$ varies: from $$\Psi (T,R)$$ to $$\theta $$'s maximum value, or from $$\theta $$'s minimum value to $$\Psi (T,R)$$
 * $$\Psi (T,R)$$ is a function such that if $$T$$ is given, the bounds are calculated for $$R$$, and if $$R$$ is given, the bounds are calculated for $$T$$.

If $$T$$ is given, then from Eqn. (BayesCL) and $$\Psi $$, for a given $$CL$$, the bounds on $$R$$ are calculated; if $$R$$ is given, the bounds on $$T$$ are calculated in the same way.

Confidence Bounds on Time (Type 1)
For a given failure time distribution and a given reliability $$R$$, $$T(R)$$ is a function of $$R$$ and the distribution parameters. To illustrate the procedure for obtaining confidence bounds, the two-parameter Weibull distribution is used as an example; bounds for other distributions can be obtained in a similar fashion. For the two-parameter Weibull distribution:


 * $$T(R)=\eta \exp (\frac{\ln (-\ln R)}{\beta })$$
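This relationship simply inverts the Weibull reliability function. As a quick sanity check, the following sketch (with assumed illustrative parameters $$\beta =1.5$$ and $$\eta =1000$$) computes $$T(R)$$ and verifies that substituting it back into $$R(T)$$ recovers the original reliability:

```python
import math

def weibull_time_for_reliability(R, beta, eta):
    # T(R) = eta * exp(ln(-ln R) / beta): time at which reliability equals R
    return eta * math.exp(math.log(-math.log(R)) / beta)

def weibull_reliability(T, beta, eta):
    # Two-parameter Weibull reliability function: R(T) = exp(-(T/eta)^beta)
    return math.exp(-((T / eta) ** beta))

# Round trip with assumed illustrative parameters beta = 1.5, eta = 1000:
t90 = weibull_time_for_reliability(0.90, beta=1.5, eta=1000.0)
print(t90, weibull_reliability(t90, beta=1.5, eta=1000.0))
```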

For a given reliability, the Bayesian one-sided upper bound estimate for $$T(R)$$ is:


 * $$CL=\Pr (T\le {{T}_{U}})=\int_{0}^{{{T}_{U}}(R)}f(T|Data,R)dT$$

where $$f(T|Data,R)$$ is the posterior distribution of time $$T$$. Using Eqn. (T bayes), we have the following:


 * $$CL=\Pr (T\le {{T}_{U}})=\Pr (\eta \exp (\frac{\ln (-\ln R)}{\beta })\le {{T}_{U}})$$

Eqn. (cl) can be rewritten in terms of $$\eta $$ as:


 * $$CL=\Pr (\eta \le {{T}_{U}}\exp (-\frac{\ln (-\ln R)}{\beta }))$$

From Eqns. (IntBayes), (BayesCLEX) and (BayesCL), assuming that the priors of $$\beta $$ and $$\eta $$ are independent, we obtain the following relationship:


 * $$CL=\frac{\int_{0}^{\infty }\int_{0}^{{{T}_{U}}\exp (-\frac{\ln (-\ln R)}{\beta })}L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }$$

Eqn. (cl2) can be solved for $${{T}_{U}}(R)$$, where:


 * $$CL$$ is confidence level,
 * $$\varphi (\beta )$$ is the prior $$pdf$$ of the parameter $$\beta $$. For a non-informative prior distribution, $$\varphi (\beta )=\tfrac{1}{\beta }.$$
 * $$\varphi (\eta )$$ is the prior $$pdf$$ of the parameter $$\eta $$. For a non-informative prior distribution, $$\varphi (\eta )=\tfrac{1}{\eta }.$$
 * $$L(\bullet )$$ is the likelihood function.
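The double integral in Eqn. (cl2) generally has no closed form, so it is evaluated numerically and solved for $${{T}_{U}}(R)$$ by root finding. Below is a minimal Python sketch of that computation; the failure times, the grid limits used to truncate the infinite integration ranges, and the targets ($$R=0.90$$ at 90% confidence) are all illustrative assumptions:

```python
import math

# Hypothetical complete failure-time sample (an illustrative assumption).
times = [16.0, 34.0, 53.0, 75.0, 93.0, 120.0]

def log_likelihood(beta, eta):
    # Two-parameter Weibull log-likelihood for complete (uncensored) data.
    return sum(
        math.log(beta / eta) + (beta - 1.0) * math.log(t / eta) - (t / eta) ** beta
        for t in times
    )

# Uniform grids truncating the (0, infinity) integration ranges (assumed
# wide enough that the posterior mass outside them is negligible).
B = [0.2 + i * (6.0 - 0.2) / 200 for i in range(201)]
E = [5.0 + j * (400.0 - 5.0) / 200 for j in range(201)]

# Posterior weights L * (1/beta) * (1/eta), shifted by the maximum so the
# exponentials stay in floating-point range; cell area cancels in the ratio.
logpost = {(b, e): log_likelihood(b, e) - math.log(b * e) for b in B for e in E}
shift = max(logpost.values())
weight = {k: math.exp(v - shift) for k, v in logpost.items()}
norm = sum(weight.values())

def confidence_level(t_u, R):
    # CL = P(T(R) <= t_u): integrate eta up to t_u * exp(-ln(-ln R)/beta).
    num = 0.0
    for b in B:
        eta_limit = t_u * math.exp(-math.log(-math.log(R)) / b)
        num += sum(weight[(b, e)] for e in E if e <= eta_limit)
    return num / norm

# Bisection: solve confidence_level(t_u, 0.90) = 0.90 for the upper bound.
target, lo, hi = 0.90, 1.0, 500.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if confidence_level(mid, 0.90) < target:
        lo = mid
    else:
        hi = mid
t_upper = 0.5 * (lo + hi)
print(f"90% upper bound on T(R=0.90): {t_upper:.1f}")
```

A coarse grid is used here for clarity; in practice an adaptive quadrature over $$\beta $$ and $$\eta $$ gives better accuracy for the same cost.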

The same method can be used to get the one-sided lower bound of $$T(R)$$ from:


 * $$CL=\frac{\int_{0}^{\infty }\int_{{{T}_{L}}\exp (\frac{-\ln (-\ln R)}{\beta })}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }$$

Eqn. (cl5) can be solved to get $${{T}_{L}}(R)$$. The Bayesian two-sided bounds estimate for $$T(R)$$ is:


 * $$CL=\int_{{{T}_{L}}(R)}^{{{T}_{U}}(R)}f(T|Data,R)dT$$


which is equivalent to:


 * $$(1+CL)/2=\int_{0}^{{{T}_{U}}(R)}f(T|Data,R)dT$$


and:


 * $$(1-CL)/2=\int_{0}^{{{T}_{L}}(R)}f(T|Data,R)dT$$

Using the same method as for the one-sided bounds, $${{T}_{U}}(R)$$ and $${{T}_{L}}(R)$$ can be computed.

Confidence Bounds on Reliability (Type 2)
For a given failure time distribution and a given time $$T$$, $$R(T)$$ is a function of $$T$$ and the distribution parameters. To illustrate the procedure for obtaining confidence bounds, the two-parameter Weibull distribution is used as an example; bounds for other distributions can be obtained in a similar fashion. For the two-parameter Weibull distribution:


 * $$R=\exp (-{{(\frac{T}{\eta })}^{\beta }})$$

The Bayesian one-sided upper bound estimate for $$R(T)$$ is:


 * $$CL=\int_{0}^{{{R}_{U}}(T)}f(R|Data,T)dR$$

As with the bounds on time, the following is obtained:


 * $$CL=\frac{\int_{0}^{\infty }\int_{0}^{T\exp (-\frac{\ln (-\ln {{R}_{U}})}{\beta })}L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }$$

Eqn. (cl3) can be solved to get $${{R}_{U}}(T)$$.
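Solving Eqn. (cl3) for $${{R}_{U}}(T)$$ follows the same numerical pattern: evaluate the double integral on a grid and bisect on $${{R}_{U}}$$. The self-contained sketch below uses a hypothetical sample, truncated grids, and an illustrative mission time $$T=50$$ at 90% confidence (all assumptions):

```python
import math

# Hypothetical complete failure-time sample (an illustrative assumption).
times = [16.0, 34.0, 53.0, 75.0, 93.0, 120.0]

def log_likelihood(beta, eta):
    # Two-parameter Weibull log-likelihood for complete (uncensored) data.
    return sum(
        math.log(beta / eta) + (beta - 1.0) * math.log(t / eta) - (t / eta) ** beta
        for t in times
    )

# Uniform grids truncating the (0, infinity) integration ranges (assumed
# wide enough that the posterior mass outside them is negligible).
B = [0.2 + i * (6.0 - 0.2) / 200 for i in range(201)]
E = [5.0 + j * (400.0 - 5.0) / 200 for j in range(201)]

# Posterior weights L * (1/beta) * (1/eta), shifted for numerical stability;
# the grid cell area cancels in the confidence-level ratio.
logpost = {(b, e): log_likelihood(b, e) - math.log(b * e) for b in B for e in E}
shift = max(logpost.values())
weight = {k: math.exp(v - shift) for k, v in logpost.items()}
norm = sum(weight.values())

def confidence_level(r_u, T):
    # CL = P(R(T) <= r_u): integrate eta up to T * exp(-ln(-ln r_u)/beta).
    num = 0.0
    for b in B:
        eta_limit = T * math.exp(-math.log(-math.log(r_u)) / b)
        num += sum(weight[(b, e)] for e in E if e <= eta_limit)
    return num / norm

# Bisection on R: solve confidence_level(r_u, 50.0) = 0.90 for R_U(T=50).
target, left, right = 0.90, 0.001, 0.999
for _ in range(50):
    mid = 0.5 * (left + right)
    if confidence_level(mid, 50.0) < target:
        left = mid
    else:
        right = mid
r_upper = 0.5 * (left + right)
print(f"90% upper bound on R(T=50): {r_upper:.3f}")
```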

The Bayesian one-sided lower bound estimate for $$R(T)$$ is:


 * $$1-CL=\int_{0}^{{{R}_{L}}(T)}f(R|Data,T)dR$$

Using the posterior distribution, the following is obtained:


 * $$CL=\frac{\int_{0}^{\infty }\int_{T\exp (-\frac{\ln (-\ln {{R}_{L}})}{\beta })}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }$$

Eqn. (cl4) can be solved to get $${{R}_{L}}(T)$$. The Bayesian two-sided bounds estimate for $$R(T)$$ is:


 * $$CL=\int_{{{R}_{L}}(T)}^{{{R}_{U}}(T)}f(R|Data,T)dR$$

which is equivalent to:


 * $$\int_{0}^{{{R}_{U}}(T)}f(R|Data,T)dR=(1+CL)/2$$


and:


 * $$\int_{0}^{{{R}_{L}}(T)}f(R|Data,T)dR=(1-CL)/2$$

Using the same method as for the one-sided bounds, $${{R}_{U}}(T)$$ and $${{R}_{L}}(T)$$ can be computed.