Template:Fisher Matrix Confidence Bounds

From ReliaWiki
 
== Fisher Matrix Confidence Bounds  ==
 
This section presents an overview of the theory on obtaining approximate confidence bounds on suspended (multiple censored) data. The methodology used is the so-called Fisher matrix bounds (FM), described in [[Appendix D: Weibull References|Nelson [30]]] and [[Appendix D: Weibull References|Lloyd and Lipow [24]]]. These bounds are employed in most other commercial statistical applications. In general, these bounds tend to be more optimistic than the non-parametric rank-based bounds. This may be a concern, particularly when dealing with small sample sizes. Some statisticians feel that the Fisher matrix bounds are too optimistic when dealing with small sample sizes and prefer to use other techniques for calculating confidence bounds, such as the likelihood ratio bounds. <br>
 
=== Approximate Estimates of the Mean and Variance of a Function  ===
 
In utilizing FM bounds for functions, one must first determine the mean and variance of the function in question (i.e., reliability function, failure rate function, etc.). An example of the methodology and assumptions for an arbitrary function <span class="texhtml">''G''</span> is presented next.
 
'''Single Parameter Case'''
 
For simplicity, consider a one-parameter distribution represented by a general function <span class="texhtml">''G'',</span> which is a function of one parameter estimator, say <math>G(\widehat{\theta }).</math> For example, the mean of the exponential distribution is a function of the parameter <span class="texhtml">λ</span>: <span class="texhtml">''G''(λ) = 1 / λ = μ</span>. Then, in general, the expected value of <math>G\left( \widehat{\theta } \right)</math> can be found by: <br>
 
::<math>E\left( G\left( \widehat{\theta } \right) \right)=G(\theta )+O\left( \frac{1}{n} \right)</math>
 
where <span class="texhtml">''G''(θ)</span> is some function of <span class="texhtml">θ</span>, such as the reliability function, and <span class="texhtml">θ</span> is the population parameter, where <math>E\left( \widehat{\theta } \right)=\theta </math> as <math>n\to \infty </math>. The term <math>O\left( \tfrac{1}{n} \right)</math> is a function of the sample size <span class="texhtml">''n''</span> and tends to zero, as fast as <math>\tfrac{1}{n},</math> as <math>n\to \infty .</math> For example, in the case of <math>\widehat{\theta }=1/\overline{x}</math> and <span class="texhtml">''G''(''x'') = 1 / ''x''</span>, then <math>G(\widehat{\theta })=\overline{x}</math> and <math>E(G(\widehat{\theta }))=\mu +O\left( \tfrac{1}{n} \right)</math>. Thus, as <math>n\to \infty </math>, <math>E(G(\widehat{\theta }))=\mu </math>, where <span class="texhtml">μ</span> is the population mean. Using the same one-parameter distribution, the variance of the function <math>G\left( \widehat{\theta } \right)</math> can then be estimated by: <br>
 
::<math>Var\left( G\left( \widehat{\theta } \right) \right)=\left( \frac{\partial G}{\partial \widehat{\theta }} \right)_{\widehat{\theta }=\theta }^{2}Var\left( \widehat{\theta } \right)+O\left( \frac{1}{{{n}^{\tfrac{3}{2}}}} \right)</math>
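As a numeric sketch of the single-parameter case, consider an exponential fit. All values below are hypothetical: a rate estimate of 0.002 failures/hr from ''n'' = 50 complete observations, for which the MLE variance is approximately λ²/''n''.

```python
# Hypothetical example: exponential fit with lambda_hat = 0.002 failures/hr
# from n = 50 complete observations, so Var(lambda_hat) ~ lambda^2 / n.
lam = 0.002
n = 50
var_lam = lam ** 2 / n

# Delta method: G(lambda) = 1/lambda (the mean life), dG/dlambda = -1/lambda^2,
# so Var(G) ~ (dG/dlambda)^2 * Var(lambda), up to the O(n^(-3/2)) remainder.
mean_life = 1 / lam
var_mean = (1 / lam ** 2) ** 2 * var_lam

print(mean_life, var_mean ** 0.5)   # estimated mean life and its standard error
```

Here the standard error of the estimated mean life reduces to (1/λ)/√''n'', which is what the delta method reproduces.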
 
'''Two-Parameter Case'''
 
Consider a Weibull distribution with two parameters <span class="texhtml">β</span> and <span class="texhtml">η</span>. For a given value of <span class="texhtml">''t''</span>, <math>R(t)=G(\beta ,\eta )={{e}^{-{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}</math>. Repeating the previous method for the case of a two-parameter distribution, it is generally true that for a function <span class="texhtml">''G''</span>, which is a function of two parameter estimators, say <math>G\left( {{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}} \right)</math>, that:
 
::<math>E\left( G\left( {{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}} \right) \right)=G\left( {{\theta }_{1}},{{\theta }_{2}} \right)+O\left( \frac{1}{n} \right)</math>
 
<br>
 
and:
 
<br>
 
::<math>\begin{align}
  Var\left( G\left( {{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}} \right) \right)= & \left( \frac{\partial G}{\partial {{\widehat{\theta }}_{1}}} \right)_{{{\widehat{\theta }}_{1}}={{\theta }_{1}}}^{2}Var\left( {{\widehat{\theta }}_{1}} \right)+\left( \frac{\partial G}{\partial {{\widehat{\theta }}_{2}}} \right)_{{{\widehat{\theta }}_{2}}={{\theta }_{2}}}^{2}Var\left( {{\widehat{\theta }}_{2}} \right) \\ 
 & +2{{\left( \frac{\partial G}{\partial {{\widehat{\theta }}_{1}}} \right)}_{{{\widehat{\theta }}_{1}}={{\theta }_{1}}}}{{\left( \frac{\partial G}{\partial {{\widehat{\theta }}_{2}}} \right)}_{{{\widehat{\theta }}_{2}}={{\theta }_{2}}}}Cov\left( {{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}} \right) \\ 
 & +O\left( \frac{1}{{{n}^{\tfrac{3}{2}}}} \right) 
\end{align}</math>
 
<br>Note that the derivatives in the above equation are evaluated at <math>{{\widehat{\theta }}_{1}}={{\theta }_{1}}</math> and <math>{{\widehat{\theta }}_{2}}={{\theta }_{2}},</math> where <math>E\left( {{\widehat{\theta }}_{1}} \right)\simeq {{\theta }_{1}}</math> and <math>E\left( {{\widehat{\theta }}_{2}} \right)\simeq {{\theta }_{2}}.</math> <br><br>
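The two-parameter propagation formula can be evaluated directly for the Weibull reliability <math>R(t)={{e}^{-{{\left( t/\eta \right)}^{\beta }}}}</math>. Every parameter estimate, variance, and covariance below is a hypothetical placeholder, as if read off an inverted Fisher matrix:

```python
import math

# Hypothetical estimates and (co)variances, as if from a Fisher matrix:
beta, eta, t = 1.5, 1000.0, 300.0
var_beta, var_eta, cov_be = 0.04, 2500.0, -2.0

u = (t / eta) ** beta
R = math.exp(-u)                      # point estimate of R(t)

# Partial derivatives of G(beta, eta) = exp(-(t/eta)^beta),
# evaluated at the estimates:
dR_dbeta = -R * u * math.log(t / eta)
dR_deta = R * beta * u / eta

# Two-parameter variance propagation, dropping the O(n^(-3/2)) remainder:
var_R = (dR_dbeta ** 2) * var_beta + (dR_deta ** 2) * var_eta \
        + 2 * dR_dbeta * dR_deta * cov_be
print(R, var_R)
```

Note that the cross term enters with first powers of the partial derivatives, multiplied by the covariance.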
 
'''Parameter Variance and Covariance Determination'''
 
The determination of the variance and covariance of the parameters is accomplished via the use of the Fisher information matrix. For a two-parameter distribution, and using maximum likelihood estimates (MLE), the log-likelihood function for censored data is given by:
 
::<math>\begin{align}
  \ln [L]= & \Lambda =\underset{i=1}{\overset{R}{\mathop \sum }}\,\ln [f({{T}_{i}};{{\theta }_{1}},{{\theta }_{2}})] \\
  & \text{ }+\underset{j=1}{\overset{M}{\mathop \sum }}\,\ln [1-F({{S}_{j}};{{\theta }_{1}},{{\theta }_{2}})] \\
  & \text{ }+\underset{l=1}{\overset{P}{\mathop \sum }}\,\ln \left\{ F({{I}_{{{l}_{U}}}};{{\theta }_{1}},{{\theta }_{2}})-F({{I}_{{{l}_{L}}}};{{\theta }_{1}},{{\theta }_{2}}) \right\} 
\end{align}</math>
 
In the equation above, the first summation is for ''complete data'', the second summation is for ''right censored data'' and the third summation is for ''interval or left censored data''. <br>
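The three-part log-likelihood above can be coded directly. The sketch below uses a Weibull model; the data lists are hypothetical illustrations, not values from the text:

```python
import math

def weibull_pdf(t, beta, eta):
    """Weibull density f(t; beta, eta)."""
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-(t / eta) ** beta)

def weibull_cdf(t, beta, eta):
    """Weibull unreliability F(t; beta, eta)."""
    return 1.0 - math.exp(-(t / eta) ** beta)

def log_likelihood(beta, eta, failures, suspensions, intervals):
    # First sum: R complete failure times T_i
    ll = sum(math.log(weibull_pdf(t, beta, eta)) for t in failures)
    # Second sum: M right-censored (suspended) times S_j
    ll += sum(math.log(1.0 - weibull_cdf(s, beta, eta)) for s in suspensions)
    # Third sum: P interval or left-censored observations (I_lL, I_lU)
    ll += sum(math.log(weibull_cdf(hi, beta, eta) - weibull_cdf(lo, beta, eta))
              for lo, hi in intervals)
    return ll

# Hypothetical data: three failures, two suspensions, one interval observation.
ll = log_likelihood(1.5, 100.0, [30.0, 55.0, 80.0], [90.0, 90.0], [(10.0, 40.0)])
print(ll)
```

Maximizing this function over the parameters yields the MLEs whose second derivatives feed the Fisher matrix below.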
 
Then the Fisher information matrix is given by:
 
::<math>{{F}_{0}}=\left[ \begin{matrix}
  {{E}_{0}}{{\left[ -\tfrac{{{\partial }^{2}}\Lambda }{\partial \theta _{1}^{2}} \right]}_{0}} & {} & {{E}_{0}}{{\left[ -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{1}}\partial {{\theta }_{2}}} \right]}_{0}}  \\
  {} & {} & {}  \\
  {{E}_{0}}{{\left[ -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{2}}\partial {{\theta }_{1}}} \right]}_{0}} & {} & {{E}_{0}}{{\left[ -\tfrac{{{\partial }^{2}}\Lambda }{\partial \theta _{2}^{2}} \right]}_{0}}  \\
\end{matrix} \right]</math>
 
The subscript <span class="texhtml">0</span> indicates that the quantity is evaluated at <math>{{\theta }_{1}}={{\theta }_{{{1}_{0}}}}</math> and <math>{{\theta }_{2}}={{\theta }_{{{2}_{0}}}},</math> the true values of the parameters. <br>So for a sample of <span class="texhtml">''N''</span> units where <span class="texhtml">''R''</span> units have failed, <span class="texhtml">''M''</span> have been suspended, and <span class="texhtml">''P''</span> have failed within a time interval, with <span class="texhtml">''N'' = ''R'' + ''M'' + ''P'',</span> one could obtain the sample local information matrix by:
 
::<math>F=\left[ \begin{matrix}
  -\tfrac{{{\partial }^{2}}\Lambda }{\partial \theta _{1}^{2}} & {} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{1}}\partial {{\theta }_{2}}}  \\
  {} & {} & {}  \\
  -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{2}}\partial {{\theta }_{1}}} & {} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \theta _{2}^{2}}  \\
\end{matrix} \right]</math>
 
<br>Substituting the values of the estimated parameters, in this case <math>{{\widehat{\theta }}_{1}}</math> and <math>{{\widehat{\theta }}_{2}}</math>, and then inverting the matrix, one can obtain the local estimate of the covariance matrix, or:
 
<br>
 
::<math>\left[ \begin{matrix}
  \widehat{Var}\left( {{\widehat{\theta }}_{1}} \right) & {} & \widehat{Cov}\left( {{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}} \right)  \\
  {} & {} & {}  \\
  \widehat{Cov}\left( {{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}} \right) & {} & \widehat{Var}\left( {{\widehat{\theta }}_{2}} \right)  \\
\end{matrix} \right]={{\left[ \begin{matrix}
  -\tfrac{{{\partial }^{2}}\Lambda }{\partial \theta _{1}^{2}} & {} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{1}}\partial {{\theta }_{2}}}  \\
  {} & {} & {}  \\
  -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{2}}\partial {{\theta }_{1}}} & {} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \theta _{2}^{2}}  \\
\end{matrix} \right]}^{-1}}</math>
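The two steps (build the local information matrix from negative second derivatives, then invert it) can be sketched numerically. The log-likelihood used here is a toy complete-data normal model, chosen only because its parameter variances are known in closed form (Var(μ̂) = σ²/''n'', Var(σ̂) ≈ σ²/2''n''); all numbers are hypothetical:

```python
import math

# Hypothetical sufficient statistics (n, sum of x, sum of x^2):
n, sx, sxx = 20.0, 100.0, 540.0

def Lambda(mu, sigma):
    """Complete-data normal log-likelihood."""
    return (-n * math.log(sigma) - n / 2 * math.log(2 * math.pi)
            - (sxx - 2 * mu * sx + n * mu * mu) / (2 * sigma * sigma))

def hessian(f, x, y, h=1e-4):
    """Central-difference second partials of f at (x, y)."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h ** 2)
    return fxx, fxy, fyy

mu_hat = sx / n                             # MLE of mu
sig_hat = math.sqrt(sxx / n - mu_hat ** 2)  # MLE of sigma
fxx, fxy, fyy = hessian(Lambda, mu_hat, sig_hat)

# Local information matrix F = [[-fxx, -fxy], [-fxy, -fyy]]; its 2x2 inverse
# is the local covariance estimate:
det = fxx * fyy - fxy * fxy
var_mu, var_sig = -fyy / det, -fxx / det
cov_mu_sig = fxy / det
print(var_mu, var_sig, cov_mu_sig)
```

For a censored reliability model, `Lambda` would be replaced by the censored log-likelihood given earlier; the differentiation and inversion steps are unchanged.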
 
<br>Then the variance of a function, <span class="texhtml">''Var''(''G'')</span>, can be estimated using the variance equation given above. Values for the variance and covariance of the parameters are obtained from the Fisher matrix equation. Once they have been obtained, the approximate confidence bounds on the function are given as:
 
::<math>C{{B}_{R}}=E(G)\pm {{z}_{\alpha }}\sqrt{Var(G)}</math>
 
which is the estimated value plus or minus a certain number of standard deviations. We address finding <span class="texhtml">''z''<sub>α</sub></span> next.
 
<br>
 
=== Approximate Confidence Intervals on the Parameters  ===
 
In general, maximum likelihood estimates (MLE) of the parameters are asymptotically normal, meaning that for large sample sizes, the distribution of parameter estimates from repeated samples of the same population would be very close to the normal distribution. Thus if <math>\widehat{\theta }</math> is the MLE estimator for <span class="texhtml">θ</span>, in the case of a single-parameter distribution estimated from a large sample of <span class="texhtml">''n''</span> units, then:
 
::<math>z\equiv \frac{\widehat{\theta }-\theta }{\sqrt{Var\left( \widehat{\theta } \right)}}</math>
 
<br>approximately follows the standard normal distribution. That is:
 
<br>
 
::<math>P\left( x\le z \right)\to \Phi \left( z \right)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt</math>
 
<br>for large <span class="texhtml">''n''</span>. We now place confidence bounds on <span class="texhtml">θ,</span> at some confidence level <span class="texhtml">δ</span>, bounded by the two end points <span class="texhtml">''C''<sub>1</sub></span> and <span class="texhtml">''C''<sub>2</sub></span> where:
 
::<math>P\left( {{C}_{1}}<\theta <{{C}_{2}} \right)=\delta </math>
 
<br>From the above equation:
 
<br>
 
::<math>P\left( -{{K}_{\tfrac{1-\delta }{2}}}<\frac{\widehat{\theta }-\theta }{\sqrt{Var\left( \widehat{\theta } \right)}}<{{K}_{\tfrac{1-\delta }{2}}} \right)\simeq \delta </math>
 
<br>where <span class="texhtml">''K''<sub>α</sub></span> is defined by:
 
<br>
 
::<math>\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi \left( {{K}_{\alpha }} \right)</math>
 
<br>Now by simplifying the equation for the confidence level, one can obtain the approximate two-sided confidence bounds on the parameter <span class="texhtml">θ,</span> at a confidence level <span class="texhtml">δ,</span> or:
 
::<math>\left( \widehat{\theta }-{{K}_{\tfrac{1-\delta }{2}}}\cdot \sqrt{Var\left( \widehat{\theta } \right)}<\theta <\widehat{\theta }+{{K}_{\tfrac{1-\delta }{2}}}\cdot \sqrt{Var\left( \widehat{\theta } \right)} \right)</math>
 
The upper one-sided bounds are given by:
 
::<math>\theta <\widehat{\theta }+{{K}_{1-\delta }}\sqrt{Var(\widehat{\theta })}</math>
 
while the lower one-sided bounds are given by:
 
::<math>\theta >\widehat{\theta }-{{K}_{1-\delta }}\sqrt{Var(\widehat{\theta })}</math>
 
If <math>\widehat{\theta }</math> must be positive, then <math>\ln \widehat{\theta }</math> is treated as normally distributed. The two-sided approximate confidence bounds on the parameter <span class="texhtml">θ</span>, at confidence level <span class="texhtml">δ</span>, then become:
 
::<math>\begin{align}
  & {{\theta }_{U}}= & \widehat{\theta }\cdot {{e}^{\tfrac{{{K}_{\tfrac{1-\delta }{2}}}\sqrt{Var\left( \widehat{\theta } \right)}}{\widehat{\theta }}}}\text{ (Two-sided upper)} \\
& {{\theta }_{L}}= & \frac{\widehat{\theta }}{{{e}^{\tfrac{{{K}_{\tfrac{1-\delta }{2}}}\sqrt{Var\left( \widehat{\theta } \right)}}{\widehat{\theta }}}}}\text{    (Two-sided lower)} 
\end{align}</math>
 
<br>The one-sided approximate confidence bounds on the parameter <span class="texhtml">θ</span>, at confidence level <span class="texhtml">δ,</span> can be found from:
 
::<math>\begin{align}
  & {{\theta }_{U}}= & \widehat{\theta }\cdot {{e}^{\tfrac{{{K}_{1-\delta }}\sqrt{Var\left( \widehat{\theta } \right)}}{\widehat{\theta }}}}\text{ (One-sided upper)} \\
& {{\theta }_{L}}= & \frac{\widehat{\theta }}{{{e}^{\tfrac{{{K}_{1-\delta }}\sqrt{Var\left( \widehat{\theta } \right)}}{\widehat{\theta }}}}}\text{    (One-sided lower)} 
\end{align}</math>
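The log-transformed two-sided bounds above can be computed in a few lines. The numbers below are hypothetical: a positive parameter estimate of 1.8 with variance 0.09, at confidence level δ = 0.90:

```python
import math
from statistics import NormalDist

# Hypothetical estimate, variance, and confidence level:
theta, var_theta, delta = 1.8, 0.09, 0.90

# K_{(1-delta)/2} is the standard normal value with upper-tail area (1-delta)/2:
K = NormalDist().inv_cdf(1 - (1 - delta) / 2)

w = math.exp(K * math.sqrt(var_theta) / theta)   # the exponential factor
theta_U = theta * w        # two-sided upper bound
theta_L = theta / w        # two-sided lower bound, always positive
print(theta_L, theta_U)
```

Because the bounds multiply and divide by the same positive factor, the lower bound can never fall below zero, which is the point of the log transformation.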
 
<br>The same procedure can be extended to the case of a distribution with two or more parameters. Lloyd and Lipow [[Appendix D: Weibull References|[24]]] further elaborate on this procedure.
 
=== Confidence Bounds on Time (Type 1)  ===
 
Type 1 confidence bounds are confidence bounds around time for a given reliability. For example, when using the one-parameter exponential distribution, the corresponding time for a given exponential percentile (i.e., y-ordinate or unreliability, <span class="texhtml">''Q'' = 1 − ''R'')</span> is determined by solving the unreliability function for the time, <span class="texhtml">''T''</span>, or:
 
::<math>\widehat{T}(Q)=-\frac{1}{\widehat{\lambda }}\ln (1-Q)=-\frac{1}{\widehat{\lambda }}\ln (R)</math>
 
Bounds on time (Type 1) return the confidence bounds around this time value by determining the confidence intervals around <math>\widehat{\lambda }</math> and substituting these values into the above equation. The bounds on <math>\widehat{\lambda }</math> are determined using the method for the bounds on parameters, with its variance obtained from the Fisher Matrix. Note that the procedure is slightly more complicated for distributions with more than one parameter.
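As a sketch for the exponential case: first bound <math>\widehat{\lambda }</math> with the log-transformed parameter bounds, then substitute into the time equation. The estimate, variance, and target reliability below are hypothetical:

```python
import math
from statistics import NormalDist

# Hypothetical exponential estimate, its variance, confidence level,
# and target reliability:
lam, var_lam, delta, R = 0.002, 8e-8, 0.90, 0.90

K = NormalDist().inv_cdf(1 - (1 - delta) / 2)    # two-sided normal quantile
w = math.exp(K * math.sqrt(var_lam) / lam)
lam_U, lam_L = lam * w, lam / w                  # log-transformed bounds on lambda

T_hat = -math.log(R) / lam                       # time at which R(T) = 0.90
# A larger failure rate gives a shorter time, so the lambda bounds swap roles:
T_L, T_U = -math.log(R) / lam_U, -math.log(R) / lam_L
print(T_L, T_hat, T_U)
```

The swap in the last step (the upper bound on λ producing the lower bound on time) is the monotonicity argument the text alludes to.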
 
<br>
 
=== Confidence Bounds on Reliability (Type 2)  ===
 
Type 2 confidence bounds are confidence bounds around reliability. For example, when using the one-parameter exponential distribution, the reliability function is:
 
::<math>\widehat{R}(T)={{e}^{-\widehat{\lambda }\cdot T}}</math>
 
<br>Reliability bounds (Type 2) return the confidence bounds by determining the confidence intervals around <math>\widehat{\lambda }</math> and substituting these values into the above equation. The bounds on <math>\widehat{\lambda }</math> are determined using the method for the bounds on parameters, with its variance obtained from the Fisher Matrix. Once again, the procedure is more complicated for distributions with more than one parameter.
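A sketch for the exponential case, mirroring the Type 1 computation with hypothetical numbers: bound <math>\widehat{\lambda }</math> first, then substitute into the reliability function at a fixed time ''T'':

```python
import math
from statistics import NormalDist

# Hypothetical exponential estimate, its variance, confidence level,
# and mission time:
lam, var_lam, delta, T = 0.002, 8e-8, 0.90, 100.0

K = NormalDist().inv_cdf(1 - (1 - delta) / 2)    # two-sided normal quantile
w = math.exp(K * math.sqrt(var_lam) / lam)
lam_U, lam_L = lam * w, lam / w                  # log-transformed bounds on lambda

R_hat = math.exp(-lam * T)
# Reliability decreases as lambda grows, so lam_U yields the lower R bound:
R_L, R_U = math.exp(-lam_U * T), math.exp(-lam_L * T)
print(R_L, R_hat, R_U)
```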

Latest revision as of 23:32, 12 August 2012