Bayesian Confidence Bounds
A fourth method of estimating confidence bounds is based on the Bayes theorem. This type of confidence bound relies on a different school of thought in statistical analysis, where prior information is combined with sample data in order to make inferences on model parameters and their functions. An introduction to Bayesian methods is given in the Parameter Estimation chapter. Bayesian confidence bounds are derived from the Bayes rule, which states that:


 * $$f(\theta |Data)=\frac{L(Data|\theta )\varphi (\theta )}{\int_{\varsigma }L(Data|\theta )\varphi (\theta )d\theta }$$


 * where:
 * f(θ | Data) is the posterior pdf of θ
 * θ is the parameter vector of the chosen distribution (e.g., Weibull, lognormal, etc.)
 * $$L(\bullet )$$ is the likelihood function
 * $$\varphi (\theta )$$ is the prior pdf of the parameter vector θ
 * $$\varsigma $$ is the range of θ.

In other words, the prior knowledge is provided in the form of the prior pdf of the parameters, which in turn is combined with the sample data in order to obtain the posterior pdf. Different forms of prior information exist, such as past data, expert opinion or non-informative priors (refer to the Parameter Estimation chapter). It can be seen from the above Bayes rule formula that we are now dealing with distributions of parameters rather than single-value parameters. For example, consider a one-parameter distribution with a positive parameter θ1. Given a set of sample data and a prior distribution for θ1, $$\varphi ({{\theta }_{1}}),$$ the above Bayes rule formula can be written as:


 * $$f({{\theta }_{1}}|Data)=\frac{L(Data|{{\theta }_{1}})\varphi ({{\theta }_{1}})}{\int_{0}^{\infty }L(Data|{{\theta }_{1}})\varphi ({{\theta }_{1}})d{{\theta }_{1}}}$$

In other words, we now have the distribution of θ1, and we can make statistical inferences on this parameter, such as calculating probabilities. Specifically, the probability that θ1 is less than or equal to a value x, $$P({{\theta }_{1}}\le x),$$ can be obtained by integrating the posterior probability density function (pdf), or:


 * $$P({{\theta }_{1}}\le x)=\int_{0}^{x}f({{\theta }_{1}}|Data)d{{\theta }_{1}}$$

The above equation is the posterior cdf, which essentially calculates a confidence bound on the parameter, where $$P({{\theta }_{1}}\le x)$$ is the confidence level and x is the confidence bound. Substituting the posterior pdf into the above posterior cdf yields:


 * $$CL=\frac{\int_{0}^{x}L(Data|{{\theta }_{1}})\varphi ({{\theta }_{1}})d{{\theta }_{1}}}{\int_{0}^{\infty }L(Data|{{\theta }_{1}})\varphi ({{\theta }_{1}})d{{\theta }_{1}}}$$
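As a sketch of how this ratio can be evaluated in practice, the short example below applies it to a one-parameter exponential model with a uniform (non-informative) prior over a finite grid. The failure times, grid limits, and helper names are illustrative assumptions, not part of the method statement above.

```python
import numpy as np

def trapz(y, x):
    # Trapezoidal rule, kept local so the sketch stays self-contained.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Hypothetical complete failure times (hours); exponential model assumed.
times = np.array([35.0, 58.0, 72.0, 91.0, 120.0])
n, total = len(times), float(times.sum())

# Exponential likelihood lam^n * exp(-lam * total) on a grid; a uniform
# (non-informative) prior is a constant and cancels in the ratio.
grid = np.linspace(1e-6, 0.2, 20001)
like = grid**n * np.exp(-grid * total)

def conf_level(x):
    """CL = P(lam <= x | Data): the ratio of the two integrals above."""
    mask = grid <= x
    return trapz(like[mask], grid[mask]) / trapz(like, grid)
```

Inverting `conf_level` numerically (e.g., by bisection) then yields the one-sided upper bound on the parameter for a chosen confidence level.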

The only question at this point is, what do we use as a prior distribution of θ1? For the confidence bounds calculation application, non-informative prior distributions are utilized. Non-informative prior distributions are distributions that have no population basis and play a minimal role in the posterior distribution. The idea behind the use of non-informative prior distributions is to make inferences that are not affected by external information, or when external information is not available. In the general case of calculating confidence bounds using Bayesian methods, the method should be independent of external information and it should only rely on the current data. Therefore, non-informative priors are used. Specifically, the uniform distribution is used as a prior distribution for the different parameters of the selected fitted distribution. For example, if the Weibull distribution is fitted to the data, the prior distributions for beta and eta are assumed to be uniform. The above equation can be generalized for any distribution having a vector of parameters θ, yielding the general equation for calculating Bayesian confidence bounds:


 * $$CL=\frac{\int_{\xi }L(Data|\theta )\varphi (\theta )d\theta }{\int_{\varsigma }L(Data|\theta )\varphi (\theta )d\theta }$$

where:


 * CL is the confidence level
 * θ is the parameter vector
 * $$L(\bullet )$$ is the likelihood function
 * $$\varphi (\theta )$$ is the prior pdf of the parameter vector θ
 * $$\varsigma $$ is the range of θ
 * ξ is the range in which θ changes from Ψ(T,R) to θ's maximum value, or from θ's minimum value to Ψ(T,R)
 * Ψ(T,R) is a function such that if T is given, then the bounds are calculated for R. If R is given, then the bounds are calculated for T.

If T is given, then from the above equation and Ψ, and for a given CL, the bounds on R are calculated. If R is given, then from the above equation and Ψ, and for a given CL, the bounds on T are calculated.

Confidence Bounds on Time (Type 1)
For a given failure time distribution and a given reliability R, T(R) is a function of R and the distribution parameters. To illustrate the procedure for obtaining confidence bounds, the two-parameter Weibull distribution is used as an example. The bounds in other types of distributions can be obtained in similar fashion. For the two-parameter Weibull distribution:


 * $$T(R)=\eta \exp (\frac{\ln (-\ln R)}{\beta })$$
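As a quick numerical check (the parameter values below are made-up assumptions), T(R) is simply the inverse of the Weibull reliability function:

```python
import math

# Hypothetical Weibull parameters (assumptions for illustration only).
beta, eta = 2.0, 100.0
R = 0.90

# Time at which reliability equals R: T(R) = eta * exp(ln(-ln R) / beta)
T = eta * math.exp(math.log(-math.log(R)) / beta)

# Round trip through the Weibull reliability function recovers R.
R_back = math.exp(-(T / eta) ** beta)
```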

For a given reliability, the Bayesian one-sided upper bound estimate for T(R) is:


 * $$CL=\Pr (T\le {{T}_{U}})=\int_{0}^{{{T}_{U}}(R)}f(T|Data,R)dT$$

where f(T|Data,R) is the posterior distribution of the time T. Using the above equation, we have the following:


 * $$CL=\Pr (T\le {{T}_{U}})=\Pr (\eta \exp (\frac{\ln (-\ln R)}{\beta })\le {{T}_{U}})$$

The above equation can be rewritten in terms of η as:


 * $$CL=\Pr (\eta \le {{T}_{U}}\exp (-\frac{\ln (-\ln R)}{\beta }))$$

Applying the Bayes rule and assuming that the priors of β and η are independent, we obtain the following relationship:


 * $$CL=\frac{\int_{0}^{\infty }\int_{0}^{{{T}_{U}}\exp (-\frac{\ln (-\ln R)}{\beta })}L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }$$

The above equation can be solved for TU(R), where:


 * CL is the confidence level,
 * $$\varphi (\beta )$$ is the prior pdf of the parameter β. For the non-informative prior distribution, $$\varphi (\beta )=\tfrac{1}{\beta }.$$
 * $$\varphi (\eta )$$ is the prior pdf of the parameter η. For the non-informative prior distribution, $$\varphi (\eta )=\tfrac{1}{\eta }.$$
 * $$L(\bullet )$$ is the likelihood function.
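The double-integral ratio above can be approximated on a grid. The sketch below does this for hypothetical failure times with the 1/β and 1/η priors listed above; the data, grid ranges, and resolutions are all illustrative assumptions.

```python
import numpy as np

# Hypothetical complete Weibull failure times and reliability level.
times = np.array([16.0, 34.0, 53.0, 75.0, 93.0, 120.0])
R = 0.90

# Grids over the two parameters (assumed wide enough for the posterior mass).
betas = np.linspace(0.3, 6.0, 160)
etas = np.linspace(5.0, 400.0, 320)
B = betas[:, None, None]
E = etas[None, :, None]

# Weibull log-likelihood at every (beta, eta) grid point.
z = times[None, None, :] / E
ll = np.sum(np.log(B / E) + (B - 1.0) * np.log(z) - z**B, axis=-1)

# Integrand L * phi(beta) * phi(eta) with the 1/beta, 1/eta priors;
# shifting by the maximum log value avoids underflow in exp().
logw = ll - np.log(betas)[:, None] - np.log(etas)[None, :]
W = np.exp(logw - logw.max())
db, de = betas[1] - betas[0], etas[1] - etas[0]
denom = W.sum() * db * de

def conf_level(T_U):
    """Confidence level for a candidate upper bound T_U at reliability R."""
    # Inner integral runs over eta from 0 to T_U * exp(-ln(-ln R)/beta).
    cap = T_U * np.exp(-np.log(-np.log(R)) / betas)
    mask = etas[None, :] <= cap[:, None]
    return float((W * mask).sum() * db * de / denom)
```

Since `conf_level` increases with TU, the upper bound for a target confidence level can be found by bisection; the lower bound TL follows analogously by integrating the complementary η region.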

The same method can be used to get the one-sided lower bound of T(R) from:


 * $$CL=\frac{\int_{0}^{\infty }\int_{{{T}_{L}}\exp (\frac{-\ln (-\ln R)}{\beta })}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }$$

The above equation can be solved to get TL(R). The Bayesian two-sided bounds estimate for T(R) is:


 * $$CL=\int_{{{T}_{L}}(R)}^{{{T}_{U}}(R)}f(T|Data,R)dT$$


which is equivalent to:


 * $$(1+CL)/2=\int_{0}^{{{T}_{U}}(R)}f(T|Data,R)dT$$


and:


 * $$(1-CL)/2=\int_{0}^{{{T}_{L}}(R)}f(T|Data,R)dT$$

Using the same method as for the one-sided bounds, TU(R) and TL(R) can be obtained.

Confidence Bounds on Reliability (Type 2)
For a given failure time distribution and a given time T, R(T) is a function of T and the distribution parameters. To illustrate the procedure for obtaining confidence bounds, the two-parameter Weibull distribution is used as an example. The bounds in other types of distributions can be obtained in similar fashion. For the two-parameter Weibull distribution:


 * $$R=\exp (-{{(\frac{T}{\eta })}^{\beta }})$$

The Bayesian one-sided upper bound estimate for R(T) is:


 * $$CL=\int_{0}^{{{R}_{U}}(T)}f(R|Data,T)dR$$

As with the bounds on time, the following is obtained:


 * $$CL=\frac{\int_{0}^{\infty }\int_{0}^{T\exp (-\frac{\ln (-\ln {{R}_{U}})}{\beta })}L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }$$

The above equation can be solved to get RU(T).
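In practice, RU(T) is extracted by root-finding on the confidence-level expression. The sketch below bisects on RU using the same kind of grid approximation as for the time bounds; the failure times, mission time, target level, and grids are hypothetical assumptions.

```python
import numpy as np

# Hypothetical complete Weibull failure times, mission time, and target CL.
times = np.array([16.0, 34.0, 53.0, 75.0, 93.0, 120.0])
T = 30.0
target_cl = 0.90

# Posterior weights L * (1/beta) * (1/eta) on a grid (ranges are assumptions).
betas = np.linspace(0.3, 6.0, 160)
etas = np.linspace(5.0, 400.0, 320)
B = betas[:, None, None]
E = etas[None, :, None]
z = times[None, None, :] / E
ll = np.sum(np.log(B / E) + (B - 1.0) * np.log(z) - z**B, axis=-1)
logw = ll - np.log(betas)[:, None] - np.log(etas)[None, :]
W = np.exp(logw - logw.max())
denom = W.sum()          # uniform grid: cell areas cancel in the ratio

def cl_of_R(R_U):
    """CL = P(R <= R_U | Data, T): eta integrated up to T*exp(-ln(-ln R_U)/beta)."""
    cap = T * np.exp(-np.log(-np.log(R_U)) / betas)
    return float((W * (etas[None, :] <= cap[:, None])).sum() / denom)

# cl_of_R is increasing in R_U, so bisect for the target confidence level.
lo, hi = 1e-6, 1.0 - 1e-6
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if cl_of_R(mid) < target_cl else (lo, mid)
R_upper = 0.5 * (lo + hi)
```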

The Bayesian one-sided lower bound estimate for R(T) is:


 * $$1-CL=\int_{0}^{{{R}_{L}}(T)}f(R|Data,T)dR$$

Using the posterior distribution, the following is obtained:


 * $$CL=\frac{\int_{0}^{\infty }\int_{T\exp (-\frac{\ln (-\ln {{R}_{L}})}{\beta })}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }$$

The above equation can be solved to get RL(T). The Bayesian two-sided bounds estimate for R(T) is:


 * $$CL=\int_{{{R}_{L}}(T)}^{{{R}_{U}}(T)}f(R|Data,T)dR$$

which is equivalent to:


 * $$\int_{0}^{{{R}_{U}}(T)}f(R|Data,T)dR=(1+CL)/2$$


and:


 * $$\int_{0}^{{{R}_{L}}(T)}f(R|Data,T)dR=(1-CL)/2$$

Using the same method as for the one-sided bounds, RU(T) and RL(T) can be obtained.