=Fielded Systems=
The previous chapters presented analysis methods for data obtained during developmental testing. However, data from systems in the field can also be analyzed in RGA. This type of data is called fielded systems data and is analogous to warranty data. Fielded systems can be categorized into two basic types: one-time or nonrepairable systems and reusable or repairable systems. In the latter case, under continuous operation, the system is repaired, but not replaced after each failure. For example, if a water pump in a vehicle fails, the water pump is replaced and the vehicle is repaired.
Two types of analysis are presented in this chapter. The first is repairable systems analysis where the reliability of a system can be tracked and quantified based on data from multiple systems in the field. The second is fleet analysis where data from multiple systems in the field can be collected and analyzed so that reliability metrics for the fleet as a whole can be quantified.
{{repairable systems analysis rga}}
 
==Parameter Estimation==
<br>
Suppose that the number of systems under study is  <math>K</math>  and the  <math>{{q}^{th}}</math>  system is observed continuously from time  <math>{{S}_{q}}</math>  to time  <math>{{T}_{q}}</math> ,  <math>q=1,2,\ldots ,K</math> . During the period  <math>[{{S}_{q}},{{T}_{q}}]</math> , let  <math>{{N}_{q}}</math>  be the number of failures experienced by the  <math>{{q}^{th}}</math>  system and let  <math>{{X}_{i,q}}</math>  be the age of this system at the  <math>{{i}^{th}}</math>  occurrence of failure,  <math>i=1,2,\ldots ,{{N}_{q}}</math> . It is also possible that the times  <math>{{S}_{q}}</math>  and  <math>{{T}_{q}}</math>  may be observed failure times for the  <math>{{q}^{th}}</math>  system. If  <math>{{X}_{{{N}_{q}},q}}={{T}_{q}}</math>  then the data on the  <math>{{q}^{th}}</math>  system is said to be failure terminated and  <math>{{T}_{q}}</math>  is a random variable with  <math>{{N}_{q}}</math>  fixed. If  <math>{{X}_{{{N}_{q}},q}}<{{T}_{q}}</math>  then the data on the  <math>{{q}^{th}}</math>  system is said to be time terminated with  <math>{{N}_{q}}</math>  a random variable. The maximum likelihood estimates of  <math>\lambda </math>  and  <math>\beta </math>  are the values satisfying the following equations:
 
 
::<math>\begin{align}
  & \widehat{\lambda }= & \frac{\underset{q=1}{\overset{K}{\mathop{\sum }}}\,{{N}_{q}}}{\underset{q=1}{\overset{K}{\mathop{\sum }}}\,\left( T_{q}^{\widehat{\beta }}-S_{q}^{\widehat{\beta }} \right)} \\
& \widehat{\beta }= & \frac{\underset{q=1}{\overset{K}{\mathop{\sum }}}\,{{N}_{q}}}{\widehat{\lambda }\underset{q=1}{\overset{K}{\mathop{\sum }}}\,\left[ T_{q}^{\widehat{\beta }}\ln ({{T}_{q}})-S_{q}^{\widehat{\beta }}\ln ({{S}_{q}}) \right]-\underset{q=1}{\overset{K}{\mathop{\sum }}}\,\underset{i=1}{\overset{{{N}_{q}}}{\mathop{\sum }}}\,\ln ({{X}_{i,q}})} 
\end{align}</math>
 
 
where  <math>0\ln 0</math>  is defined to be 0. In general, these equations cannot be solved explicitly for  <math>\widehat{\lambda }</math>  and  <math>\widehat{\beta },</math>  but must be solved by iterative procedures. Once  <math>\widehat{\lambda }</math>  and  <math>\widehat{\beta }</math>  have been estimated, the maximum likelihood estimate of the intensity function is given by:
 
::<math>\widehat{u}(t)=\widehat{\lambda }\widehat{\beta }{{t}^{\widehat{\beta }-1}}</math>
 
If  <math>{{S}_{1}}={{S}_{2}}=\ldots ={{S}_{K}}=0</math>  and  <math>{{T}_{1}}={{T}_{2}}=\ldots ={{T}_{K}}=T</math> , then the maximum likelihood estimates  <math>\widehat{\lambda }</math>  and  <math>\widehat{\beta }</math>  are in closed form:
 
::<math>\begin{align}
  & \widehat{\lambda }= & \frac{\underset{q=1}{\overset{K}{\mathop{\sum }}}\,{{N}_{q}}}{K{{T}^{\beta }}} \\
& \widehat{\beta }= & \frac{\underset{q=1}{\overset{K}{\mathop{\sum }}}\,{{N}_{q}}}{\underset{q=1}{\overset{K}{\mathop{\sum }}}\,\underset{i=1}{\overset{{{N}_{q}}}{\mathop{\sum }}}\,\ln (\tfrac{T}{{{X}_{iq}}})} 
\end{align}</math>
 
 
The following examples illustrate these estimation procedures.
<br>
<br>
=====Example 1=====
<br>
For the data in Table 13.1, the starting time for each system is equal to  <math>0</math>  and the ending time for each system is 2000 hours. Calculate the maximum likelihood estimates  <math>\widehat{\lambda }</math>  and  <math>\widehat{\beta }</math> .
 
<br>
{| align="center" border="1"
|-
|colspan="3" style="text-align:center"|Table 13.1 - Repairable system failure data
|-
!System 1 ( <math>{{X}_{i1}}</math> )
!System 2 ( <math>{{X}_{i2}}</math> )
!System 3 ( <math>{{X}_{i3}}</math> )
|-
|1.2|| 1.4|| 0.3
|-
|55.6|| 35.0|| 32.6
|-
|72.7|| 46.8|| 33.4
|-
|111.9|| 65.9|| 241.7
|-
|121.9|| 181.1|| 396.2
|-
|303.6|| 712.6|| 444.4
|-
|326.9|| 1005.7|| 480.8
|-
|1568.4|| 1029.9 ||588.9
|-
|1913.5|| 1675.7|| 1043.9
|-
| ||1787.5|| 1136.1
|-
| ||1867.0|| 1288.1
|-
| || ||1408.1
|-
| || ||1439.4
|-
| || ||1604.8
|-
|<math>{{N}_{1}}=9</math> || <math>{{N}_{2}}=11</math> ||<math>{{N}_{3}}=14</math>
|}
 
<br>
'''Solution'''
<br>
Since the starting time for each system is equal to zero and each system has an equivalent ending time, the general equations reduce to the closed-form expressions given above. The maximum likelihood estimates of  <math>\hat{\beta }</math>  and  <math>\hat{\lambda }</math>  are then calculated as follows:
 
::<math>\begin{align}
  & \widehat{\beta }= & \frac{\underset{q=1}{\overset{K}{\mathop{\sum }}}\,{{N}_{q}}}{\underset{q=1}{\overset{K}{\mathop{\sum }}}\,\underset{i=1}{\overset{{{N}_{q}}}{\mathop{\sum }}}\,\ln (\tfrac{T}{{{X}_{iq}}})} \\
& = & 0.45300 
\end{align}</math>
 
 
::<math>\begin{align}
  & \widehat{\lambda }= & \frac{\underset{q=1}{\overset{K}{\mathop{\sum }}}\,{{N}_{q}}}{K{{T}^{\beta }}} \\
& = & 0.36224 
\end{align}</math>
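These closed-form estimates are simple enough to reproduce numerically. A minimal Python sketch using the Table 13.1 data, which should recover the 0.45300 and 0.36224 values above to rounding:

```python
import math

# Failure ages (hours) for the K = 3 systems of Table 13.1.
systems = [
    [1.2, 55.6, 72.7, 111.9, 121.9, 303.6, 326.9, 1568.4, 1913.5],
    [1.4, 35.0, 46.8, 65.9, 181.1, 712.6, 1005.7, 1029.9, 1675.7, 1787.5, 1867.0],
    [0.3, 32.6, 33.4, 241.7, 396.2, 444.4, 480.8, 588.9, 1043.9, 1136.1,
     1288.1, 1408.1, 1439.4, 1604.8],
]
T = 2000.0                         # common end time; all start times are 0
K = len(systems)
n = sum(len(s) for s in systems)   # total failures, the sum of the N_q

# Closed-form MLEs for the case S_q = 0 with a common end time T
beta_hat = n / sum(math.log(T / x) for s in systems for x in s)
lambda_hat = n / (K * T ** beta_hat)
```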
 
 
[[Image:rga13.2.png|thumb|center|300px|Instantaneous Failure Intensity vs. Time plot.]]
 
<br>
The system failure intensity function is then estimated by:
 
::<math>\widehat{u}(t)=\widehat{\lambda }\widehat{\beta }{{t}^{\widehat{\beta }-1}},\text{ }t>0</math>
 
The figure above shows a plot of  <math>\widehat{u}(t)</math>  over the period (0, 3000). Clearly, the estimated failure intensity function is most representative over the range of the data and any extrapolation should be viewed with the usual caution.
 
===Goodness-of-Fit Tests for Repairable System Analysis===
<br>
It is generally desirable to test the compatibility of a model and data by a statistical goodness-of-fit test. A parametric Cramér-von Mises goodness-of-fit test is used for the multiple system and repairable system Power Law model, as proposed by Crow in [17]. This goodness-of-fit test is appropriate whenever the start time for each system is 0 and the failure data is complete over the continuous interval  <math>[0,{{T}_{q}}]</math>  with no gaps in the data. The Chi-Squared test is a goodness-of-fit test that can be applied under more general circumstances. In addition, the Common Beta Hypothesis test also can be used to compare the intensity functions of the individual systems by comparing the  <math>{{\beta }_{q}}</math>  values of each system. Lastly, the Laplace Trend test checks for trends within the data. Due to their general application, the Common Beta Hypothesis test and the Laplace Trend test are both presented in Appendix B. The Cramér-von Mises and Chi-Squared goodness-of-fit tests are illustrated next.
<br>
<br>
====Cramér-von Mises Test====
<br>
To illustrate the application of the Cramér-von Mises statistic for multiple system data, suppose that  <math>K</math>  like systems are under study and you wish to test the hypothesis  <math>{{H}_{1}}</math>  that their failure times follow a non-homogeneous Poisson process. Suppose information is available for the  <math>{{q}^{th}}</math>  system over the interval  <math>[0,{{T}_{q}}]</math> , with successive failure times  <math>{{X}_{1q}}<{{X}_{2q}}<\ldots <{{X}_{{{N}_{q}}q}}</math> ,  <math>(q=1,2,\ldots ,\,K)</math> . The Cramér-von Mises test can be performed with the following steps:
<br>
<br>
Step 1: If  <math>{{x}_{{{N}_{q}}q}}={{T}_{q}}</math>  (failure terminated) let  <math>{{M}_{q}}={{N}_{q}}-1</math> , and if  <math>{{x}_{{{N}_{q}}q}}<{{T}_{q}}</math>  (time terminated) let  <math>{{M}_{q}}={{N}_{q}}</math> . Then:
 
::<math>M=\underset{q=1}{\overset{K}{\mathop \sum }}\,{{M}_{q}}</math>
 
Step 2: For each system, divide each successive failure time  <math>{{X}_{iq}}</math> ,  <math>i=1,2,\ldots ,{{M}_{q}}</math> , by the corresponding end time  <math>{{T}_{q}}</math> . Calculate the  <math>M</math>  values:
 
::<math>{{Y}_{iq}}=\frac{{{X}_{iq}}}{{{T}_{q}}},i=1,2,\ldots ,{{M}_{q}},\text{ }q=1,2,\ldots ,K</math>
 
 
Step 3: Next calculate  <math>\overline{\beta }</math> , the unbiased estimate of  <math>\beta </math> , from:
 
::<math>\overline{\beta }=\frac{M-1}{\underset{q=1}{\overset{K}{\mathop{\sum }}}\,\underset{i=1}{\overset{{{M}_{q}}}{\mathop{\sum }}}\,\ln \left( \tfrac{{{T}_{q}}}{{{X}_{iq}}} \right)}</math>
 
 
Step 4: Treat the  <math>{{Y}_{iq}}</math>  values as one group and order them from smallest to largest. Name these ordered values  <math>{{z}_{1}},\,{{z}_{2}},\ldots ,{{z}_{M}}</math> , such that  <math>{{z}_{1}}<{{z}_{2}}<\ldots <{{z}_{M}}</math> .
<br>
<br>
Step 5: Calculate the parametric Cramér-von Mises statistic.
 
::<math>C_{M}^{2}=\frac{1}{12M}+\underset{j=1}{\overset{M}{\mathop \sum }}\,{{(Z_{j}^{\overline{\beta }}-\frac{2j-1}{2M})}^{2}}</math>
 
 
Critical values for the Cramér-von Mises test are presented in Table B.2 of Appendix B.
<br>
<br>
Step 6: If the calculated  <math>C_{M}^{2}</math>  is less than the critical value then accept the hypothesis that the failure times for the  <math>K</math>  systems follow the non-homogeneous Poisson process with intensity function  <math>u(t)=\lambda \beta {{t}^{\beta -1}}</math> .
<br>
<br>
=====Example 2=====
<br>
For the data from Example 1, use the Cramér-von Mises test to examine the compatibility of the model at a significance level  <math>\alpha =0.10</math> .
<br>
<br>
'''Solution'''
<br>
Step 1:
 
::<math>\begin{align}
  & {{X}_{9,1}}= & 1913.5<2000,\,\ {{M}_{1}}=9 \\
& {{X}_{11,2}}= & 1867<2000,\,\ {{M}_{2}}=11 \\
& {{X}_{14,3}}= & 1604.8<2000,\,\ {{M}_{3}}=14 \\
& M= & \underset{q=1}{\overset{3}{\mathop \sum }}\,{{M}_{q}}=34 
\end{align}</math>
 
 
Step 2: Calculate  <math>{{Y}_{iq}},</math>  treat the  <math>{{Y}_{iq}}</math>  values as one group and order them from smallest to largest. Name these ordered values  <math>{{z}_{1}},\,{{z}_{2}},\ldots ,{{z}_{M}}</math> .
<br>
<br>
Step 3: Calculate  <math>\overline{\beta }=\tfrac{M-1}{\underset{q=1}{\overset{K}{\mathop{\sum }}}\,\underset{i=1}{\overset{{{M}_{q}}}{\mathop{\sum }}}\,\ln \left( \tfrac{{{T}_{q}}}{{{X}_{iq}}} \right)}=0.4397</math>
<br>
<br>
Step 4: Calculate  <math>C_{M}^{2}=\tfrac{1}{12M}+\underset{j=1}{\overset{M}{\mathop{\sum }}}\,{{(Z_{j}^{\overline{\beta }}-\tfrac{2j-1}{2M})}^{2}}=0.0611</math>
<br>
<br>
Step 5: Find the critical value (CV) from Table B.2 for  <math>M=34</math>  at a significance level  <math>\alpha =0.10</math> .  <math>CV=0.172</math> .
<br>
<br>
Step 6: Since  <math>C_{M}^{2}<CV</math> , accept the hypothesis that the failure times for the  <math>K=3</math>  repairable systems follow the non-homogeneous Poisson process with intensity function  <math>u(t)=\lambda \beta {{t}^{\beta -1}}</math> .
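The test steps above can be reproduced directly from the Table 13.1 data. A minimal Python sketch (all three systems are time terminated at 2000 hours, so every failure time enters the pooled sample):

```python
import math

# Failure ages (hours) for the K = 3 systems of Table 13.1; each system is
# observed over [0, 2000], so every data set is time terminated.
systems = [
    [1.2, 55.6, 72.7, 111.9, 121.9, 303.6, 326.9, 1568.4, 1913.5],
    [1.4, 35.0, 46.8, 65.9, 181.1, 712.6, 1005.7, 1029.9, 1675.7, 1787.5, 1867.0],
    [0.3, 32.6, 33.4, 241.7, 396.2, 444.4, 480.8, 588.9, 1043.9, 1136.1,
     1288.1, 1408.1, 1439.4, 1604.8],
]
T = [2000.0, 2000.0, 2000.0]

# Step 1: time terminated, so M_q = N_q for each system
M = sum(len(s) for s in systems)

# Step 2: normalized failure times Y_iq = X_iq / T_q
y = [x / T[q] for q, s in enumerate(systems) for x in s]

# Step 3: unbiased estimate of beta
beta_bar = (M - 1) / sum(math.log(1.0 / v) for v in y)

# Steps 4-5: pool, order, and compute the parametric Cramer-von Mises statistic
z = sorted(y)
cvm = 1.0 / (12 * M) + sum(
    (z[j] ** beta_bar - (2 * (j + 1) - 1) / (2.0 * M)) ** 2 for j in range(M)
)

# Step 6: compare with the critical value 0.172 (Table B.2, M = 34, alpha = 0.10)
accept = cvm < 0.172
```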
<br>
<br>
 
====Chi-Squared Test====
<br>
The parametric Cramér-von Mises test described above requires that the starting time,  <math>{{S}_{q}}</math> , be equal to 0 for each of the  <math>K</math>  systems. Although not as powerful as the Cramér-von Mises test, the Chi-Squared test can be applied regardless of the starting times. The expected number of failures for a system over its age  <math>(a,b)</math>  for the Chi-Squared test is estimated by  <math>\widehat{\lambda }{{b}^{\widehat{\beta }}}-\widehat{\lambda }{{a}^{\widehat{\beta }}}=\widehat{\theta }</math> , where  <math>\widehat{\lambda }</math>  and  <math>\widehat{\beta }</math>  are the maximum likelihood estimates.
The computed  <math>{{\chi }^{2}}</math>  statistic is:
 
::<math>{{\chi }^{2}}=\underset{j=1}{\overset{d}{\mathop \sum }}\,\frac{{{\left[ N(j)-\widehat{\theta }(j) \right]}^{2}}}{\widehat{\theta }(j)}</math>
 
where  <math>d</math>  is the total number of intervals. The random variable  <math>{{\chi }^{2}}</math>  is approximately Chi-Squared distributed with  <math>df=d-2</math>  degrees of freedom. There must be at least three intervals and the lengths of the intervals do not have to be equal. It is common practice to require that the expected number of failures for each interval,  <math>\widehat{\theta }(j)</math> , be at least five. If  <math>\chi _{0}^{2}>\chi _{\alpha /2,d-2}^{2}</math>  or if  <math>\chi _{0}^{2}<\chi _{1-(\alpha /2),d-2}^{2}</math> , reject the null hypothesis.
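As a sketch of this procedure, the statistic can be computed for the Example 1 data with the fitted Power Law parameters. The interval boundaries below are an illustrative choice (the test does not prescribe them), picked so that each interval's expected count is at least five; the observed counts are the pooled Table 13.1 failure ages falling in each interval:

```python
# Chi-Squared goodness-of-fit sketch for the Power Law model, using the
# Example 1 MLEs. Interval boundaries are an illustrative assumption.
lam, beta, K = 0.36224, 0.45300, 3

bounds = [0.0, 300.0, 1000.0, 2000.0]
observed = [14, 7, 13]   # pooled failure counts per interval across the 3 systems

d = len(observed)
# Expected failures per interval, summed over systems: K * lam * (b^beta - a^beta)
expected = [K * lam * (bounds[j + 1] ** beta - bounds[j] ** beta) for j in range(d)]

chi2_stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = d - 2  # compare chi2_stat against Chi-Squared critical values with df = 1
```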
 
===Confidence Bounds for Repairable Systems Analysis===
====Bounds on  <math>\beta </math>====
=====Fisher Matrix Bounds=====
The parameter  <math>\beta </math>  must be positive, thus  <math>\ln \beta </math>  is approximately treated as being normally distributed.
 
 
::<math>\frac{\ln (\widehat{\beta })-\ln (\beta )}{\sqrt{Var\left[ \ln (\widehat{\beta }) \right]}}\ \tilde{\ }\ N(0,1)</math>
 
 
::<math>C{{B}_{\beta }}=\widehat{\beta }{{e}^{\pm {{z}_{\alpha }}\sqrt{Var(\widehat{\beta })}/\widehat{\beta }}}</math>
 
 
::<math>\widehat{\beta }=\frac{\underset{q=1}{\overset{K}{\mathop{\sum }}}\,{{N}_{q}}}{\widehat{\lambda }\underset{q=1}{\overset{K}{\mathop{\sum }}}\,\left[ T_{q}^{\widehat{\beta }}\ln ({{T}_{q}})-S_{q}^{\widehat{\beta }}\ln ({{S}_{q}}) \right]-\underset{q=1}{\overset{K}{\mathop{\sum }}}\,\underset{i=1}{\overset{{{N}_{q}}}{\mathop{\sum }}}\,\ln ({{X}_{iq}})}</math>
 
 
All variances can be calculated using the Fisher Information Matrix.
<br>
<math>\Lambda </math>  is the natural log-likelihood function.
 
 
::<math>\Lambda =\underset{q=1}{\overset{K}{\mathop \sum }}\,\left[ {{N}_{q}}(\ln (\lambda )+\ln (\beta ))-\lambda (T_{q}^{\beta }-S_{q}^{\beta })+(\beta -1)\underset{i=1}{\overset{{{N}_{q}}}{\mathop \sum }}\,\ln ({{x}_{iq}}) \right]</math>
 
 
::<math>\frac{{{\partial }^{2}}\Lambda }{\partial {{\lambda }^{2}}}=-\frac{\underset{q=1}{\overset{K}{\mathop{\sum }}}\,{{N}_{q}}}{{{\lambda }^{2}}}</math>
 
 
::<math>\frac{{{\partial }^{2}}\Lambda }{\partial \lambda \partial \beta }=-\underset{q=1}{\overset{K}{\mathop \sum }}\,\left[ T_{q}^{\beta }\ln ({{T}_{q}})-S_{q}^{\beta }\ln ({{S}_{q}}) \right]</math>
 
 
::<math>\frac{{{\partial }^{2}}\Lambda }{\partial {{\beta }^{2}}}=-\frac{\underset{q=1}{\overset{K}{\mathop{\sum }}}\,{{N}_{q}}}{{{\beta }^{2}}}-\lambda \underset{q=1}{\overset{K}{\mathop \sum }}\,\left[ T_{q}^{\beta }{{(\ln ({{T}_{q}}))}^{2}}-S_{q}^{\beta }{{(\ln ({{S}_{q}}))}^{2}} \right]</math>
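The three second partials above define the Fisher information matrix, whose inverse supplies the variances and covariance used in the bounds. A sketch under the Example 1 setup (all start times zero, a common end time, and the parameter values from that example):

```python
import math

# Example 1 setup: K systems over [0, T] with the fitted Power Law MLEs
lam, beta = 0.36224, 0.45300
K, T, n = 3, 2000.0, 34

lnT = math.log(T)
Tb = T ** beta  # S_q = 0, so all S-terms drop out (0 ln 0 is defined as 0)

# Entries of the Fisher information matrix (negative second partials of Lambda)
f_ll = n / lam ** 2                              # -d2(Lambda)/d(lambda)^2
f_lb = K * Tb * lnT                              # -d2(Lambda)/d(lambda)d(beta)
f_bb = n / beta ** 2 + lam * K * Tb * lnT ** 2   # -d2(Lambda)/d(beta)^2

# Invert the 2x2 information matrix to get the variances and covariance
det = f_ll * f_bb - f_lb ** 2
var_lambda = f_bb / det
var_beta = f_ll / det
cov_lb = -f_lb / det
```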
 
=====Crow Bounds=====
Calculate the conditional maximum likelihood estimate of  <math>\tilde{\beta }</math> :
 
 
::<math>\tilde{\beta }=\frac{\underset{q=1}{\overset{K}{\mathop{\sum }}}\,{{M}_{q}}}{\underset{q=1}{\overset{K}{\mathop{\sum }}}\,\underset{i=1}{\overset{{{M}_{q}}}{\mathop{\sum }}}\,\ln \left( \tfrac{{{T}_{q}}}{{{X}_{iq}}} \right)}</math>
 
 
The Crow 2-sided  <math>(1-\alpha )\cdot 100\%</math>  confidence bounds on  <math>\beta </math>  are:
 
::<math>\begin{align}
  & {{\beta }_{L}}= & \tilde{\beta }\frac{\chi _{\tfrac{\alpha }{2},2M}^{2}}{2M} \\
& {{\beta }_{U}}= & \tilde{\beta }\frac{\chi _{1-\tfrac{\alpha }{2},2M}^{2}}{2M} 
\end{align}</math>
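These bounds only need chi-square quantiles. A sketch using the Example 2 values, with a Wilson–Hilferty approximation standing in for chi-square tables (an assumption made to stay in the Python standard library; a statistics package's quantile function would be more precise):

```python
import math
from statistics import NormalDist

def chi2_ppf(p, df):
    # Wilson-Hilferty approximation to the Chi-Squared quantile; a stdlib
    # stand-in for the tabulated critical values.
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * math.sqrt(2 / (9 * df))) ** 3

# Example 2 values: M = 34 pooled failures; conditional MLE of beta
M, beta_tilde, alpha = 34, 0.4530, 0.10

beta_L = beta_tilde * chi2_ppf(alpha / 2, 2 * M) / (2 * M)
beta_U = beta_tilde * chi2_ppf(1 - alpha / 2, 2 * M) / (2 * M)
```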
 
 
====Bounds on  <math>\lambda </math>====
=====Fisher Matrix Bounds=====
The parameter  <math>\lambda </math>  must be positive, thus  <math>\ln \lambda </math>  is approximately treated as being normally distributed. These bounds are based on:
 
 
::<math>\frac{\ln (\widehat{\lambda })-\ln (\lambda )}{\sqrt{Var\left[ \ln (\widehat{\lambda }) \right]}}\ \tilde{\ }\ N(0,1)</math>
 
<br>
The approximate confidence bounds on  <math>\lambda </math>  are given as:
 
 
::<math>C{{B}_{\lambda }}=\widehat{\lambda }{{e}^{\pm {{z}_{\alpha }}\sqrt{Var(\widehat{\lambda })}/\widehat{\lambda }}}</math>
 
 
where  <math>\widehat{\lambda }=\tfrac{n}{T_{K}^{{\hat{\beta }}}}</math> .
The variances and covariance are calculated using the Fisher Information Matrix, as shown in the bounds on  <math>\beta </math>  above.
<br>
<br>
=====Crow Bounds=====
''Time Terminated''
<br>
The confidence bounds on  <math>\lambda </math>  for time terminated data are calculated using:
 
 
::<math>\begin{align}
  & {{\lambda }_{L}}= & \frac{\chi _{\tfrac{\alpha }{2},2N}^{2}}{2\cdot \underset{q=1}{\overset{K}{\mathop{\sum }}}\,T_{q}^{\beta }} \\
& {{\lambda }_{U}}= & \frac{\chi _{1-\tfrac{\alpha }{2},2N+2}^{2}}{2\cdot \underset{q=1}{\overset{K}{\mathop{\sum }}}\,T_{q}^{\beta }} 
\end{align}</math>
 
 
 
''Failure Terminated''
<br>
The confidence bounds on  <math>\lambda </math>  for failure terminated data are calculated using:
 
 
::<math>\begin{align}
  & {{\lambda }_{L}}= & \frac{\chi _{\tfrac{\alpha }{2},2N}^{2}}{2\cdot \underset{q=1}{\overset{K}{\mathop{\sum }}}\,T_{q}^{\beta }} \\
& {{\lambda }_{U}}= & \frac{\chi _{1-\tfrac{\alpha }{2},2N}^{2}}{2\cdot \underset{q=1}{\overset{K}{\mathop{\sum }}}\,T_{q}^{\beta }} 
\end{align}</math>
 
 
====Bounds on Growth Rate====
Since the growth rate is equal to  <math>1-\beta </math> , the confidence bounds are:
 
::<math>\begin{align}
  & Gr.\text{ }Rat{{e}_{L}}= & 1-{{\beta }_{U}} \\
& Gr.\text{ }Rat{{e}_{U}}= & 1-{{\beta }_{L}} 
\end{align}</math>
 
If Fisher Matrix confidence bounds are used, then  <math>{{\beta }_{L}}</math>  and  <math>{{\beta }_{U}}</math>  are obtained from the Fisher Matrix bounds on  <math>\beta </math>  given above. If Crow bounds are used, then they are obtained from the Crow bounds on  <math>\beta </math> .
<br>
<br>
 
====Bounds on Cumulative MTBF====
=====Fisher Matrix Bounds=====
The cumulative MTBF,  <math>{{m}_{c}}(t)</math> , must be positive, thus  <math>\ln {{m}_{c}}(t)</math>  is approximately treated as being normally distributed.
 
::<math>\frac{\ln ({{\widehat{m}}_{c}}(t))-\ln ({{m}_{c}}(t))}{\sqrt{Var\left[ \ln ({{\widehat{m}}_{c}}(t)) \right]}}\ \tilde{\ }\ N(0,1)</math>
 
The approximate confidence bounds on the cumulative MTBF are then estimated from:
 
 
::<math>CB={{\widehat{m}}_{c}}(t){{e}^{\pm {{z}_{\alpha }}\sqrt{Var({{\widehat{m}}_{c}}(t))}/{{\widehat{m}}_{c}}(t)}}</math>
 
:where:
 
::<math>{{\widehat{m}}_{c}}(t)=\frac{1}{\widehat{\lambda }}{{t}^{1-\widehat{\beta }}}</math>
 
 
::<math>\begin{align}
  & Var({{\widehat{m}}_{c}}(t))= & {{\left( \frac{\partial {{m}_{c}}(t)}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial {{m}_{c}}(t)}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda }) \\
&  & +2\left( \frac{\partial {{m}_{c}}(t)}{\partial \beta } \right)\left( \frac{\partial {{m}_{c}}(t)}{\partial \lambda } \right)cov(\widehat{\beta },\widehat{\lambda })\, 
\end{align}</math>
 
The variances and covariance are calculated using the Fisher Information Matrix, as shown in the bounds on  <math>\beta </math>  above.
 
::<math>\begin{align}
  & \frac{\partial {{m}_{c}}(t)}{\partial \beta }= & -\frac{1}{\widehat{\lambda }}{{t}^{1-\widehat{\beta }}}\ln (t) \\
& \frac{\partial {{m}_{c}}(t)}{\partial \lambda }= & -\frac{1}{{{\widehat{\lambda }}^{2}}}{{t}^{1-\widehat{\beta }}} 
\end{align}</math>
 
 
=====Crow Bounds=====
To calculate the Crow confidence bounds on cumulative MTBF, first calculate the Crow cumulative failure intensity confidence bounds:
 
::<math>C{{(t)}_{L}}=\frac{\chi _{\tfrac{\alpha }{2},2N}^{2}}{2\cdot t}</math>
 
 
::<math>C{{(t)}_{U}}=\frac{\chi _{1-\tfrac{\alpha }{2},2N+2}^{2}}{2\cdot t}</math>
 
:Then
 
::<math>\begin{align}
  & {{[MTB{{F}_{c}}]}_{L}}= & \frac{1}{C{{(t)}_{U}}} \\
& {{[MTB{{F}_{c}}]}_{U}}= & \frac{1}{C{{(t)}_{L}}} 
\end{align}</math>
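A sketch of this two-step inversion, under assumed illustrative inputs (a single system with N failures observed by age t) and a stdlib Wilson–Hilferty approximation in place of chi-square tables:

```python
import math
from statistics import NormalDist

def chi2_ppf(p, df):
    # Wilson-Hilferty approximation to the Chi-Squared quantile; a stdlib
    # stand-in for the tabulated critical values.
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * math.sqrt(2 / (9 * df))) ** 3

# Illustrative (assumed) inputs: N failures observed on one system by age t
N, t, alpha = 34, 2000.0, 0.10

# Crow bounds on the cumulative failure intensity, then invert for MTBF
C_L = chi2_ppf(alpha / 2, 2 * N) / (2 * t)
C_U = chi2_ppf(1 - alpha / 2, 2 * N + 2) / (2 * t)
mtbf_c_L = 1.0 / C_U
mtbf_c_U = 1.0 / C_L
```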
 
 
====Bounds on Instantaneous MTBF====
=====Fisher Matrix Bounds=====
The instantaneous MTBF,  <math>{{m}_{i}}(t)</math> , must be positive, thus  <math>\ln {{m}_{i}}(t)</math>  is approximately treated as being normally distributed.
 
::<math>\frac{\ln ({{\widehat{m}}_{i}}(t))-\ln ({{m}_{i}}(t))}{\sqrt{Var\left[ \ln ({{\widehat{m}}_{i}}(t)) \right]}}\ \tilde{\ }\ N(0,1)</math>
 
 
The approximate confidence bounds on the instantaneous MTBF are then estimated from:
 
::<math>CB={{\widehat{m}}_{i}}(t){{e}^{\pm {{z}_{\alpha }}\sqrt{Var({{\widehat{m}}_{i}}(t))}/{{\widehat{m}}_{i}}(t)}}</math>
 
:where:
 
::<math>{{\widehat{m}}_{i}}(t)=\frac{1}{\widehat{\lambda }\widehat{\beta }{{t}^{\widehat{\beta }-1}}}</math>
 
::<math>\begin{align}
  & Var({{\widehat{m}}_{i}}(t))= & {{\left( \frac{\partial {{m}_{i}}(t)}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial {{m}_{i}}(t)}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda }) \\
&  & +2\left( \frac{\partial {{m}_{i}}(t)}{\partial \beta } \right)\left( \frac{\partial {{m}_{i}}(t)}{\partial \lambda } \right)cov(\widehat{\beta },\widehat{\lambda }) 
\end{align}</math>
 
 
The variances and covariance are calculated using the Fisher Information Matrix, as shown in the bounds on  <math>\beta </math>  above.
 
::<math>\begin{align}
  & \frac{\partial {{m}_{i}}(t)}{\partial \beta }= & -\frac{1}{\widehat{\lambda }{{\widehat{\beta }}^{2}}}{{t}^{1-\widehat{\beta }}}-\frac{1}{\widehat{\lambda }\widehat{\beta }}{{t}^{1-\widehat{\beta }}}\ln (t) \\
& \frac{\partial {{m}_{i}}(t)}{\partial \lambda }= & -\frac{1}{{{\widehat{\lambda }}^{2}}\widehat{\beta }}{{t}^{1-\widehat{\beta }}} 
\end{align}</math>
 
 
=====Crow Bounds=====
''Failure Terminated Data''
<br>
To calculate the bounds for failure terminated data, consider the following equation:
 
::<math>G(\mu |n)=\int_{0}^{\infty }\frac{{{e}^{-x}}{{x}^{n-2}}}{(n-2)!}\underset{i=0}{\overset{n-1}{\mathop \sum }}\,\frac{1}{i!}{{\left( \frac{\mu }{x} \right)}^{i}}\exp \left( -\frac{\mu }{x} \right)\,dx</math>
 
Find the values  <math>{{p}_{1}}</math>  and  <math>{{p}_{2}}</math>  by finding the solution  <math>c</math>  to  <math>G({{n}^{2}}/c|n)=\xi </math>  for  <math>\xi =\tfrac{\alpha }{2}</math>  and  <math>\xi =1-\tfrac{\alpha }{2}</math> , respectively. If using the biased parameters,  <math>\hat{\beta }</math>  and  <math>\hat{\lambda }</math> , then the upper and lower confidence bounds are:
 
::<math>\begin{align}
  & {{[MTB{{F}_{i}}]}_{L}}= & MTB{{F}_{i}}\cdot {{p}_{1}} \\
& {{[MTB{{F}_{i}}]}_{U}}= & MTB{{F}_{i}}\cdot {{p}_{2}} 
\end{align}</math>
 
where  <math>MTB{{F}_{i}}=\tfrac{1}{\hat{\lambda }\hat{\beta }{{t}^{\hat{\beta }-1}}}</math> . If using the unbiased parameters,  <math>\bar{\beta }</math>  and  <math>\bar{\lambda }</math> , then the upper and lower confidence bounds are:
 
::<math>\begin{align}
  & {{[MTB{{F}_{i}}]}_{L}}= & MTB{{F}_{i}}\cdot \left( \frac{N-2}{N} \right)\cdot {{p}_{1}} \\
& {{[MTB{{F}_{i}}]}_{U}}= & MTB{{F}_{i}}\cdot \left( \frac{N-2}{N} \right)\cdot {{p}_{2}} 
\end{align}</math>
 
where  <math>MTB{{F}_{i}}=\tfrac{1}{\hat{\lambda }\hat{\beta }{{t}^{\hat{\beta }-1}}}</math> .
<br>
<br>
''Time Terminated Data''
<br>
To calculate the bounds for time terminated data, consider the following equation where  <math>{{I}_{1}}(.)</math>  is the modified Bessel function of order one:
 
::<math>H(x|k)=\underset{j=1}{\overset{k}{\mathop \sum }}\,\frac{{{x}^{2j-1}}}{{{2}^{2j-1}}(j-1)!j!{{I}_{1}}(x)}</math>
 
Find the values  <math>{{\Pi }_{1}}</math>  and  <math>{{\Pi }_{2}}</math>  by finding the solution  <math>x</math>  to  <math>H(x|k)=\tfrac{\alpha }{2}</math>  and  <math>H(x|k)=1-\tfrac{\alpha }{2}</math>  in the cases corresponding to the lower and upper bounds, respectively. <br>
Calculate  <math>\Pi =\tfrac{{{n}^{2}}}{4{{x}^{2}}}</math>  for each case. If using the biased parameters,  <math>\hat{\beta }</math>  and  <math>\hat{\lambda }</math> , then the upper and lower confidence bounds are:
 
::<math>\begin{align}
  & {{[MTB{{F}_{i}}]}_{L}}= & MTB{{F}_{i}}\cdot {{\Pi }_{1}} \\
& {{[MTB{{F}_{i}}]}_{U}}= & MTB{{F}_{i}}\cdot {{\Pi }_{2}} 
\end{align}</math>
 
where  <math>MTB{{F}_{i}}=\tfrac{1}{\hat{\lambda }\hat{\beta }{{t}^{\hat{\beta }-1}}}</math> . If using the unbiased parameters,  <math>\bar{\beta }</math>  and  <math>\bar{\lambda }</math> , then the upper and lower confidence bounds are:
 
::<math>\begin{align}
  & {{[MTB{{F}_{i}}]}_{L}}= & MTB{{F}_{i}}\cdot \left( \frac{N-1}{N} \right)\cdot {{\Pi }_{1}} \\
& {{[MTB{{F}_{i}}]}_{U}}= & MTB{{F}_{i}}\cdot \left( \frac{N-1}{N} \right)\cdot {{\Pi }_{2}} 
\end{align}</math>
 
where  <math>MTB{{F}_{i}}=\tfrac{1}{\hat{\lambda }\hat{\beta }{{t}^{\hat{\beta }-1}}}</math> .
<br>
<br>
 
====Bounds on Cumulative Failure Intensity====
=====Fisher Matrix Bounds=====
The cumulative failure intensity,  <math>{{\lambda }_{c}}(t)</math> , must be positive, thus  <math>\ln {{\lambda }_{c}}(t)</math>  is approximately treated as being normally distributed.
 
::<math>\frac{\ln ({{\widehat{\lambda }}_{c}}(t))-\ln ({{\lambda }_{c}}(t))}{\sqrt{Var\left[ \ln ({{\widehat{\lambda }}_{c}}(t)) \right]}}\ \tilde{\ }\ N(0,1)</math>
 
The approximate confidence bounds on the cumulative failure intensity are then estimated using:
 
::<math>CB={{\widehat{\lambda }}_{c}}(t){{e}^{\pm {{z}_{\alpha }}\sqrt{Var({{\widehat{\lambda }}_{c}}(t))}/{{\widehat{\lambda }}_{c}}(t)}}</math>
 
:where:
 
::<math>{{\widehat{\lambda }}_{c}}(t)=\widehat{\lambda }{{t}^{\widehat{\beta }-1}}</math>
 
:and:
 
::<math>\begin{align}
  & Var({{\widehat{\lambda }}_{c}}(t))= & {{\left( \frac{\partial {{\lambda }_{c}}(t)}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial {{\lambda }_{c}}(t)}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda }) \\
&  & +2\left( \frac{\partial {{\lambda }_{c}}(t)}{\partial \beta } \right)\left( \frac{\partial {{\lambda }_{c}}(t)}{\partial \lambda } \right)cov(\widehat{\beta },\widehat{\lambda }) 
\end{align}</math>
 
 
The variances and covariance are calculated using the Fisher Information Matrix, as shown in the bounds on  <math>\beta </math>  above:
 
::<math>\begin{align}
  & \frac{\partial {{\lambda }_{c}}(t)}{\partial \beta }= & \widehat{\lambda }{{t}^{\widehat{\beta }-1}}\ln (t) \\
& \frac{\partial {{\lambda }_{c}}(t)}{\partial \lambda }= & {{t}^{\widehat{\beta }-1}} 
\end{align}</math>
 
<br>
=====Crow Bounds=====
The Crow cumulative failure intensity confidence bounds are given by:
 
::<math>C{{(t)}_{L}}=\frac{\chi _{\tfrac{\alpha }{2},2N}^{2}}{2\cdot t}</math>
 
 
::<math>C{{(t)}_{U}}=\frac{\chi _{1-\tfrac{\alpha }{2},2N+2}^{2}}{2\cdot t}</math>
 
 
====Bounds on Instantaneous Failure Intensity====
=====Fisher Matrix Bounds=====
The instantaneous failure intensity,  <math>{{\lambda }_{i}}(t)</math> , must be positive, thus  <math>\ln {{\lambda }_{i}}(t)</math>  is approximately treated as being normally distributed.
 
::<math>\frac{\ln ({{\widehat{\lambda }}_{i}}(t))-\ln ({{\lambda }_{i}}(t))}{\sqrt{Var\left[ \ln ({{\widehat{\lambda }}_{i}}(t)) \right]}}\sim N(0,1)</math>
 
<br>
The approximate confidence bounds on the instantaneous failure intensity are then estimated from:
 
::<math>CB={{\widehat{\lambda }}_{i}}(t){{e}^{\pm {{z}_{\alpha }}\sqrt{Var({{\widehat{\lambda }}_{i}}(t))}/{{\widehat{\lambda }}_{i}}(t)}}</math>
 
 
where  <math>{{\widehat{\lambda }}_{i}}(t)=\widehat{\lambda }\widehat{\beta }{{t}^{\widehat{\beta }-1}}</math>  and:
 
::<math>\begin{align}
  & Var({{\widehat{\lambda }}_{i}}(t))= & {{\left( \frac{\partial {{\lambda }_{i}}(t)}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial {{\lambda }_{i}}(t)}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda }) \\
&  & +2\left( \frac{\partial {{\lambda }_{i}}(t)}{\partial \beta } \right)\left( \frac{\partial {{\lambda }_{i}}(t)}{\partial \lambda } \right)cov(\widehat{\beta },\widehat{\lambda }) 
\end{align}</math>
 
<br>
The variances and covariance are calculated using the Fisher Information Matrix, as shown in the bounds on  <math>\beta </math>  above:
 
::<math>\begin{align}
  & \frac{\partial {{\lambda }_{i}}(t)}{\partial \beta }= & \hat{\lambda }{{t}^{\widehat{\beta }-1}}+\hat{\lambda }\hat{\beta }{{t}^{\widehat{\beta }-1}}\ln (t) \\
& \frac{\partial {{\lambda }_{i}}(t)}{\partial \lambda }= & \widehat{\beta }{{t}^{\widehat{\beta }-1}} 
\end{align}</math>
 
 
=====Crow Bounds=====
The Crow instantaneous failure intensity confidence bounds are given as:
 
::<math>\begin{align}
  & {{[{{\lambda }_{i}}(t)]}_{L}}= & \frac{1}{{{[MTB{{F}_{i}}]}_{U}}} \\
& {{[{{\lambda }_{i}}(t)]}_{U}}= & \frac{1}{{{[MTB{{F}_{i}}]}_{L}}} 
\end{align}</math>
 
 
====Bounds on Time Given Cumulative MTBF====
=====Fisher Matrix Bounds=====
The time,  <math>T</math> , must be positive, thus  <math>\ln T</math>  is approximately treated as being normally distributed.
 
::<math>\frac{\ln (\widehat{T})-\ln (T)}{\sqrt{Var\left[ \ln (\widehat{T}) \right]}}\ \tilde{\ }\ N(0,1)</math>
 
The confidence bounds on the time are given by:
 
::<math>CB=\widehat{T}{{e}^{\pm {{z}_{\alpha }}\sqrt{Var(\widehat{T})}/\widehat{T}}}</math>
 
:where:
 
::<math>Var(\widehat{T})={{\left( \frac{\partial T}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial T}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda })+2\left( \frac{\partial T}{\partial \beta } \right)\left( \frac{\partial T}{\partial \lambda } \right)cov(\widehat{\beta },\widehat{\lambda })</math>
 
The variances and covariance are calculated using the Fisher Information Matrix, as shown in the bounds on  <math>\beta </math>  above.
 
::<math>\widehat{T}={{(\lambda \cdot {{m}_{c}})}^{1/(1-\beta )}}</math>
 
 
::<math>\begin{align}
  & \frac{\partial T}{\partial \beta }= & \frac{{{(\lambda \cdot {{m}_{c}})}^{1/(1-\beta )}}\ln (\lambda \cdot {{m}_{c}})}{{{(1-\beta )}^{2}}} \\
& \frac{\partial T}{\partial \lambda }= & \frac{{{(\lambda \cdot {{m}_{c}})}^{1/(1-\beta )}}}{\lambda (1-\beta )} 
\end{align}</math>
 
 
=====Crow Bounds=====
Step 1: Calculate:
 
::<math>\hat{T}={{\left( \frac{{{\lambda }_{c}}(T)}{{\hat{\lambda }}} \right)}^{\tfrac{1}{\beta -1}}}</math>
 
Step 2: Estimate the number of failures:
 
::<math>N(\hat{T})=\hat{\lambda }{{\hat{T}}^{{\hat{\beta }}}}</math>
 
Step 3: Obtain the confidence bounds on time given the cumulative failure intensity by solving for  <math>{{t}_{l}}</math>  and  <math>{{t}_{u}}</math>  in the following equations:
 
::<math>\begin{align}
  & {{t}_{l}}= & \frac{\chi _{\tfrac{\alpha }{2},2N}^{2}}{2\cdot {{\lambda }_{c}}(T)} \\
& {{t}_{u}}= & \frac{\chi _{1-\tfrac{\alpha }{2},2N+2}^{2}}{2\cdot {{\lambda }_{c}}(T)} 
\end{align}</math>
 
<br>
 
====Bounds on Time Given Instantaneous MTBF====
=====Fisher Matrix Bounds=====
The time,  <math>T</math> , must be positive, thus  <math>\ln T</math>  is approximately treated as being normally distributed.
 
::<math>\frac{\ln (\widehat{T})-\ln (T)}{\sqrt{Var\left[ \ln (\widehat{T}) \right]}}\ \tilde{\ }\ N(0,1)</math>
 
The confidence bounds on the time are given by:
 
::<math>CB=\widehat{T}{{e}^{\pm {{z}_{\alpha }}\sqrt{Var(\widehat{T})}/\widehat{T}}}</math>
 
:where:
 
::<math>Var(\widehat{T})={{\left( \frac{\partial T}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial T}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda })+2\left( \frac{\partial T}{\partial \beta } \right)\left( \frac{\partial T}{\partial \lambda } \right)cov(\widehat{\beta },\widehat{\lambda })</math>
 
The variances and covariance are calculated using the Fisher Information Matrix, as shown in the bounds on  <math>\beta </math>  above.
 
 
::<math>\widehat{T}={{(\lambda \beta \cdot MTB{{F}_{i}})}^{1/(1-\beta )}}</math>
 
 
::<math>\begin{align}
  & \frac{\partial T}{\partial \beta }= & {{\left( \lambda \beta \cdot MTB{{F}_{i}} \right)}^{1/(1-\beta )}}[\frac{1}{{{(1-\beta )}^{2}}}\ln (\lambda \beta \cdot MTB{{F}_{i}})+\frac{1}{\beta (1-\beta )}] \\
& \frac{\partial T}{\partial \lambda }= & \frac{{{(\lambda \beta \cdot MTB{{F}_{i}})}^{1/(1-\beta )}}}{\lambda (1-\beta )} 
\end{align}</math>
 
<br>
=====Crow Bounds=====
Step 1: Calculate the confidence bounds on the instantaneous MTBF as presented in the Bounds on Instantaneous MTBF section above.
<br>
Step 2: Calculate the bounds on time as follows.
<br>
<br>
''Failure Terminated Data''
 
::<math>\hat{T}={{(\frac{\lambda \beta \cdot MTB{{F}_{i}}}{c})}^{1/(1-\beta )}}</math>
 
 
So the lower and upper bounds on time are:
 
 
::<math>{{\hat{T}}_{L}}={{(\frac{\lambda \beta \cdot MTB{{F}_{i}}}{{{c}_{1}}})}^{1/(1-\beta )}}</math>
 
 
::<math>{{\hat{T}}_{U}}={{(\frac{\lambda \beta \cdot MTB{{F}_{i}}}{{{c}_{2}}})}^{1/(1-\beta )}}</math>
 
 
''Time Terminated Data''
 
::<math>\hat{T}={{(\frac{\lambda \beta \cdot MTB{{F}_{i}}}{\Pi })}^{1/(1-\beta )}}</math>
 
 
So the lower and upper bounds on time are:
 
 
::<math>{{\hat{T}}_{L}}={{(\frac{\lambda \beta \cdot MTB{{F}_{i}}}{{{\Pi }_{1}}})}^{1/(1-\beta )}}</math>
 
 
::<math>{{\hat{T}}_{U}}={{(\frac{\lambda \beta \cdot MTB{{F}_{i}}}{{{\Pi }_{2}}})}^{1/(1-\beta )}}</math>
 
 
====Bounds on Time Given Cumulative Failure Intensity====
=====Fisher Matrix Bounds=====
The time,  <math>T</math> , must be positive, so  <math>\ln T</math>  is treated as approximately normally distributed.
 
::<math>\frac{\ln (\widehat{T})-\ln (T)}{\sqrt{Var\left[ \ln \widehat{T} \right]}}\sim N(0,1)</math>
 
The confidence bounds on the time are given by:
 
::<math>CB=\widehat{T}{{e}^{\pm {{z}_{\alpha }}\sqrt{Var(\widehat{T})}/\widehat{T}}}</math>
 
:where:
 
::<math>Var(\widehat{T})={{\left( \frac{\partial T}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial T}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda })+2\left( \frac{\partial T}{\partial \beta } \right)\left( \frac{\partial T}{\partial \lambda } \right)cov(\widehat{\beta },\widehat{\lambda })</math>
 
The variance calculation is the same as Eqns. (var1), (var2) and (var3):
 
::<math>\widehat{T}={{\left( \frac{{{\lambda }_{c}}(T)}{\lambda } \right)}^{1/(\beta -1)}}</math>
 
::<math>\begin{align}
  & \frac{\partial T}{\partial \beta }= & \frac{-{{\left( \tfrac{{{\lambda }_{c}}(T)}{\lambda } \right)}^{1/(\beta -1)}}\ln \left( \tfrac{{{\lambda }_{c}}(T)}{\lambda } \right)}{{{(1-\beta )}^{2}}} \\
& \frac{\partial T}{\partial \lambda }= & {{\left( \frac{{{\lambda }_{c}}(T)}{\lambda } \right)}^{1/(\beta -1)}}\frac{1}{\lambda (1-\beta )} 
\end{align}</math>
 
 
=====Crow Bounds=====
Step 1: Calculate:
 
 
::<math>\hat{T}={{\left( \frac{{{\lambda }_{c}}(T)}{{\hat{\lambda }}} \right)}^{\tfrac{1}{\beta -1}}}</math>
 
 
Step 2: Estimate the number of failures:
 
 
::<math>N(\hat{T})=\hat{\lambda }{{\hat{T}}^{{\hat{\beta }}}}</math>
 
 
Step 3: Obtain the confidence bounds on time given the cumulative failure intensity by solving for  <math>{{t}_{l}}</math>  and  <math>{{t}_{u}}</math>  in the following equations:
 
::<math>\begin{align}
  & {{t}_{l}}= & \frac{\chi _{\tfrac{\alpha }{2},2N}^{2}}{2\cdot {{\lambda }_{c}}(T)} \\
& {{t}_{u}}= & \frac{\chi _{1-\tfrac{\alpha }{2},2N+2}^{2}}{2\cdot {{\lambda }_{c}}(T)} 
\end{align}</math>
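The three steps can be sketched numerically. The parameter and intensity values below are hypothetical, and a Wilson-Hilferty approximation to the chi-squared quantile keeps the sketch dependency-free:

```python
import math
from statistics import NormalDist

# Hypothetical example values: MLEs lambda_, beta and a target
# cumulative failure intensity lambda_c.
lambda_, beta = 0.5, 0.8
lambda_c = 0.05
alpha = 0.10                                  # 90% two-sided bounds

def chi2_ppf(p, df):
    # Wilson-Hilferty approximation to the chi-squared quantile
    # (very accurate for the large degrees of freedom used here)
    z = NormalDist().inv_cdf(p)
    return df * (1.0 - 2.0 / (9.0 * df)
                 + z * math.sqrt(2.0 / (9.0 * df))) ** 3

# Step 1: T-hat = (lambda_c / lambda)^(1/(beta-1))
T_hat = (lambda_c / lambda_) ** (1.0 / (beta - 1.0))

# Step 2: estimated number of failures N(T-hat) = lambda * T-hat^beta
N = lambda_ * T_hat ** beta

# Step 3: bounds on time given the cumulative failure intensity
t_l = chi2_ppf(alpha / 2.0, 2.0 * N) / (2.0 * lambda_c)
t_u = chi2_ppf(1.0 - alpha / 2.0, 2.0 * N + 2.0) / (2.0 * lambda_c)
```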
 
 
====Bounds on Time Given Instantaneous Failure Intensity====
=====Fisher Matrix Bounds=====
These bounds are based on:
 
::<math>\frac{\ln (\widehat{T})-\ln (T)}{\sqrt{Var\left[ \ln (\widehat{T}) \right]}}\sim N(0,1)</math>
 
 
The confidence bounds on the time are given by:
 
 
::<math>CB=\widehat{T}{{e}^{\pm {{z}_{\alpha }}\sqrt{Var(\widehat{T})}/\widehat{T}}}</math>
 
:where:
 
::<math>\begin{align}
  & Var(\widehat{T})= & {{\left( \frac{\partial T}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial T}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda }) \\
&  & +2\left( \frac{\partial T}{\partial \beta } \right)\left( \frac{\partial T}{\partial \lambda } \right)cov(\widehat{\beta },\widehat{\lambda }) 
\end{align}</math>
 
The variance calculation is the same as Eqns. (var1), (var2) and (var3).
 
::<math>\widehat{T}={{\left( \frac{{{\lambda }_{i}}(T)}{\lambda \cdot \beta } \right)}^{1/(\beta -1)}}</math>
 
 
::<math>\begin{align}
  & \frac{\partial T}{\partial \beta }= & {{\left( \frac{{{\lambda }_{i}}(T)}{\lambda \cdot \beta } \right)}^{1/(\beta -1)}}[-\frac{\ln (\tfrac{{{\lambda }_{i}}(T)}{\lambda \cdot \beta })}{{{(\beta -1)}^{2}}}+\frac{1}{\beta (1-\beta )}] \\
& \frac{\partial T}{\partial \lambda }= & {{\left( \frac{{{\lambda }_{i}}(T)}{\lambda \cdot \beta } \right)}^{1/(\beta -1)}}\frac{1}{\lambda (1-\beta )} 
\end{align}</math>
 
 
=====Crow Bounds=====
Step 1: Calculate  <math>{{\lambda }_{i}}(T)=\tfrac{1}{MTB{{F}_{i}}}</math> .
<br>
Step 2: Use the equations from 13.1.7.9 to calculate the bounds on time given the instantaneous failure intensity.
<br>
<br>
====Bounds on Reliability====
=====Fisher Matrix Bounds=====
These bounds are based on:
 
::<math>\text{logit}(\widehat{R}(t))\sim N(0,1)</math>
 
 
::<math>\text{logit}(\widehat{R}(t))=\ln \left\{ \frac{\widehat{R}(t)}{1-\widehat{R}(t)} \right\}</math>
 
 
The confidence bounds on reliability are given by:
 
::<math>CB=\frac{\widehat{R}(t)}{\widehat{R}(t)+(1-\widehat{R}(t)){{e}^{\pm {{z}_{\alpha }}\sqrt{Var(\widehat{R}(t))}/\left[ \widehat{R}(t)(1-\widehat{R}(t)) \right]}}}</math>
 
 
::<math>Var(\widehat{R}(t))={{\left( \frac{\partial R}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial R}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda })+2\left( \frac{\partial R}{\partial \beta } \right)\left( \frac{\partial R}{\partial \lambda } \right)cov(\widehat{\beta },\widehat{\lambda })</math>
 
 
The variance calculation is the same as Eqns. (var1), (var2) and (var3).
 
::<math>\begin{align}
  & \frac{\partial R}{\partial \beta }= & {{e}^{-[\widehat{\lambda }{{(t+d)}^{\widehat{\beta }}}-\widehat{\lambda }{{t}^{\widehat{\beta }}}]}}[\lambda {{t}^{\widehat{\beta }}}\ln (t)-\lambda {{(t+d)}^{\widehat{\beta }}}\ln (t+d)] \\
& \frac{\partial R}{\partial \lambda }= & {{e}^{-[\widehat{\lambda }{{(t+d)}^{\widehat{\beta }}}-\widehat{\lambda }{{t}^{\widehat{\beta }}}]}}[{{t}^{\widehat{\beta }}}-{{(t+d)}^{\widehat{\beta }}}] 
\end{align}</math>
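A minimal numeric sketch of the logit-transformed bounds, with hypothetical values for the reliability estimate and its variance (the logit transform keeps the interval inside (0, 1)):

```python
import math

# Hypothetical example values -- R_hat and var_R would come from the
# point estimate and the Fisher-matrix variance calculation above.
R_hat = 0.90
var_R = 0.0004
z = 1.645                                     # two-sided 90% bounds

# CB = R / (R + (1-R) * exp(+/- z*sqrt(Var)/[R(1-R)]))
w = math.exp(z * math.sqrt(var_R) / (R_hat * (1.0 - R_hat)))
R_lower = R_hat / (R_hat + (1.0 - R_hat) * w)
R_upper = R_hat / (R_hat + (1.0 - R_hat) / w)
```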
 
 
=====Crow Bounds=====
''Failure Terminated Data''
<br>
With failure terminated data, the 100( <math>1-\alpha </math> )% confidence interval on the current reliability at time  <math>t</math>  for a specified mission time  <math>d</math>  is:
 
::<math>({{[\widehat{R}(d)]}^{\tfrac{1}{{{p}_{1}}}}},{{[\hat{R}(d)]}^{\tfrac{1}{{{p}_{2}}}}})</math>
 
:where
 
::<math>\widehat{R}(d)={{e}^{-[\widehat{\lambda }{{(t+d)}^{\widehat{\beta }}}-\widehat{\lambda }{{t}^{\widehat{\beta }}}]}}</math>
 
<math>{{p}_{1}}</math> and  <math>{{p}_{2}}</math>  can be obtained from Eqn. (ft).
<br>
<br>
''Time Terminated Data''
<br>
With time terminated data, the 100( <math>1-\alpha </math> )% confidence interval on the current reliability at time  <math>t</math>  for a specified mission time  <math>d</math>  is:
 
::<math>({{[\widehat{R}(d)]}^{\tfrac{1}{{{p}_{1}}}}},{{[\hat{R}(d)]}^{\tfrac{1}{{{p}_{2}}}}})</math>
 
:where:
 
::<math>\widehat{R}(d)={{e}^{-[\widehat{\lambda }{{(t+d)}^{\widehat{\beta }}}-\widehat{\lambda }{{t}^{\widehat{\beta }}}]}}</math>
 
<math>{{p}_{1}}</math>  and  <math>{{p}_{2}}</math>  can be obtained from Eqn. (tt).
 
====Bounds on Time Given Reliability and Mission Time====
=====Fisher Matrix Bounds=====
The time,  <math>t</math> , must be positive, so  <math>\ln t</math>  is treated as approximately normally distributed.
 
::<math>\frac{\ln (\hat{t})-\ln (t)}{\sqrt{Var\left[ \ln (\hat{t}) \right]}}\sim N(0,1)</math>
 
The confidence bounds on time are calculated by using:
 
::<math>CB=\hat{t}{{e}^{\pm {{z}_{\alpha }}\sqrt{Var(\hat{t})}/\hat{t}}}</math>
 
:where:
 
::<math>Var(\hat{t})={{\left( \frac{\partial t}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial t}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda })+2\left( \frac{\partial t}{\partial \beta } \right)\left( \frac{\partial t}{\partial \lambda } \right)cov(\widehat{\beta },\widehat{\lambda })</math>
 
<math>\hat{t}</math>  is calculated numerically from:
 
::<math>\widehat{R}(d)={{e}^{-[\widehat{\lambda }{{(\hat{t}+d)}^{\widehat{\beta }}}-\widehat{\lambda }{{{\hat{t}}}^{\widehat{\beta }}}]}}\text{ };\text{ }d\text{ = mission time}</math>
 
The variance calculations are done by:
 
::<math>\begin{align}
  & \frac{\partial t}{\partial \beta }= & \frac{{{{\hat{t}}}^{{\hat{\beta }}}}\ln (\hat{t})-{{(\hat{t}+d)}^{{\hat{\beta }}}}\ln (\hat{t}+d)}{\hat{\beta }{{(\hat{t}+d)}^{\hat{\beta }-1}}-\hat{\beta }{{{\hat{t}}}^{\hat{\beta }-1}}} \\
& \frac{\partial t}{\partial \lambda }= & \frac{{{{\hat{t}}}^{{\hat{\beta }}}}-{{(\hat{t}+d)}^{{\hat{\beta }}}}}{\hat{\lambda }\hat{\beta }{{(\hat{t}+d)}^{\hat{\beta }-1}}-\hat{\lambda }\hat{\beta }{{{\hat{t}}}^{\hat{\beta }-1}}} 
\end{align}</math>
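A sketch of the numerical solution for  <math>\hat{t}</math> , assuming hypothetical parameter values. Since the reliability equation has no closed form in  <math>\hat{t}</math> , a simple bisection search is used:

```python
import math

# Hypothetical example values: MLEs and a target mission reliability.
lambda_, beta = 0.4, 0.6
d = 40.0                 # mission time
R_target = 0.90          # required mission reliability

def mission_reliability(t):
    # R(t) = exp(-[lambda*(t+d)^beta - lambda*t^beta])
    return math.exp(-(lambda_ * (t + d) ** beta - lambda_ * t ** beta))

def solve_t(target, lo=1e-6, hi=1e8, tol=1e-8):
    # Bisection: for beta < 1 the mission reliability increases with t,
    # so the bracket can simply be halved until it is tight enough.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mission_reliability(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_hat = solve_t(R_target)
```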
 
=====Crow Bounds=====
''Failure Terminated Data''
<br>
Step 1: Calculate  <math>({{\hat{R}}_{lower}},{{\hat{R}}_{upper}})=({{R}^{\tfrac{1}{{{p}_{1}}}}},{{R}^{\tfrac{1}{{{p}_{2}}}}})</math> .
<br>
Step 2: Let  <math>R={{\hat{R}}_{lower}}</math>  and solve for  <math>{{t}_{1}}</math>  numerically using  <math>R={{e}^{-[\widehat{\lambda }{{({{{\hat{t}}}_{1}}+d)}^{\widehat{\beta }}}-\widehat{\lambda }\hat{t}_{1}^{\widehat{\beta }}]}}</math> .
<br>
Step 3: Let  <math>R={{\hat{R}}_{upper}}</math>  and solve for  <math>{{t}_{2}}</math>  numerically using  <math>R={{e}^{-[\widehat{\lambda }{{({{{\hat{t}}}_{2}}+d)}^{\widehat{\beta }}}-\widehat{\lambda }\hat{t}_{2}^{\widehat{\beta }}]}}</math> .
<br>
Step 4: If  <math>{{t}_{1}}<{{t}_{2}}</math> , then  <math>{{t}_{lower}}={{t}_{1}}</math>  and  <math>{{t}_{upper}}={{t}_{2}}</math> . If  <math>{{t}_{1}}>{{t}_{2}}</math> , then  <math>{{t}_{lower}}={{t}_{2}}</math>  and  <math>{{t}_{upper}}={{t}_{1}}</math> .
<br>
<br>
''Time Terminated Data''
<br>
Step 1: Calculate  <math>({{\hat{R}}_{lower}},{{\hat{R}}_{upper}})=({{R}^{\tfrac{1}{{{\Pi }_{1}}}}},{{R}^{\tfrac{1}{{{\Pi }_{2}}}}})</math> .
<br>
Step 2: Let  <math>R={{\hat{R}}_{lower}}</math>  and solve for  <math>{{t}_{1}}</math>  numerically using  <math>R={{e}^{-[\widehat{\lambda }{{({{{\hat{t}}}_{1}}+d)}^{\widehat{\beta }}}-\widehat{\lambda }\hat{t}_{1}^{\widehat{\beta }}]}}</math> .
<br>
Step 3: Let  <math>R={{\hat{R}}_{upper}}</math>  and solve for  <math>{{t}_{2}}</math>  numerically using  <math>R={{e}^{-[\widehat{\lambda }{{({{{\hat{t}}}_{2}}+d)}^{\widehat{\beta }}}-\widehat{\lambda }\hat{t}_{2}^{\widehat{\beta }}]}}</math> .
<br>
Step 4: If  <math>{{t}_{1}}<{{t}_{2}}</math> , then  <math>{{t}_{lower}}={{t}_{1}}</math>  and  <math>{{t}_{upper}}={{t}_{2}}</math> . If  <math>{{t}_{1}}>{{t}_{2}}</math> , then  <math>{{t}_{lower}}={{t}_{2}}</math>  and  <math>{{t}_{upper}}={{t}_{1}}</math> .
<br>
<br>
====Bounds on Mission Time Given Reliability and Time====
=====Fisher Matrix Bounds=====
The mission time,  <math>d</math> , must be positive, so  <math>\ln \left( d \right)</math>  is treated as approximately normally distributed.
 
::<math>\frac{\ln (\hat{d})-\ln (d)}{\sqrt{Var\left[ \ln (\hat{d}) \right]}}\sim N(0,1)</math>
 
 
The confidence bounds on mission time are given by using:
 
 
::<math>CB=\hat{d}{{e}^{\pm {{z}_{\alpha }}\sqrt{Var(\hat{d})}/\hat{d}}}</math>
 
 
:where:
 
 
::<math>Var(\hat{d})={{\left( \frac{\partial d}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial d}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda })+2\left( \frac{\partial d}{\partial \beta } \right)\left( \frac{\partial d}{\partial \lambda } \right)cov(\widehat{\beta },\widehat{\lambda })</math>
 
 
Calculate  <math>\hat{d}</math>  from:
 
 
::<math>\hat{d}={{\left[ {{t}^{{\hat{\beta }}}}-\frac{\ln (R)}{{\hat{\lambda }}} \right]}^{\tfrac{1}{{\hat{\beta }}}}}-t</math>
 
 
The variance calculations are done by:
 
 
::<math>\begin{align}
  & \frac{\partial d}{\partial \beta }= & \left[ \frac{{{t}^{{\hat{\beta }}}}\ln (t)}{{{(t+\hat{d})}^{{\hat{\beta }}}}}-\ln (t+\hat{d}) \right]\cdot \frac{t+\hat{d}}{{\hat{\beta }}} \\
& \frac{\partial d}{\partial \lambda }= & \frac{{{t}^{{\hat{\beta }}}}-{{(t+\hat{d})}^{{\hat{\beta }}}}}{\hat{\lambda }\hat{\beta }{{(t+\hat{d})}^{\hat{\beta }-1}}} 
\end{align}</math>
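The closed-form expression for  <math>\hat{d}</math>  can be checked with a short round-trip calculation (the parameter values below are hypothetical):

```python
import math

# Hypothetical example values: MLEs, system age t and target reliability.
lambda_, beta = 0.4, 0.6
t = 2000.0
R_target = 0.90

# d-hat = [t^beta - ln(R)/lambda]^(1/beta) - t
d_hat = (t ** beta - math.log(R_target) / lambda_) ** (1.0 / beta) - t

# Round trip: plugging d-hat back into R(t, d) recovers the target,
# since (t + d-hat)^beta = t^beta - ln(R)/lambda by construction.
R_check = math.exp(-(lambda_ * (t + d_hat) ** beta - lambda_ * t ** beta))
```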
 
 
=====Crow Bounds=====
''Failure Terminated Data''
<br>
Step 1: Calculate  <math>({{\hat{R}}_{lower}},{{\hat{R}}_{upper}})=({{R}^{\tfrac{1}{{{p}_{1}}}}},{{R}^{\tfrac{1}{{{p}_{2}}}}})</math> .
<br>
Step 2: Let  <math>R={{\hat{R}}_{lower}}</math>  and solve for  <math>{{d}_{1}}</math>  such that:
 
 
::<math>{{d}_{1}}={{\left( {{t}^{{\hat{\beta }}}}-\frac{\ln ({{R}_{lower}})}{{\hat{\lambda }}} \right)}^{\tfrac{1}{{\hat{\beta }}}}}-t</math>
 
 
Step 3: Let  <math>R={{\hat{R}}_{upper}}</math>  and solve for  <math>{{d}_{2}}</math>  such that:
 
 
::<math>{{d}_{2}}={{\left( {{t}^{{\hat{\beta }}}}-\frac{\ln ({{R}_{upper}})}{{\hat{\lambda }}} \right)}^{\tfrac{1}{{\hat{\beta }}}}}-t</math>
 
 
Step 4: If  <math>{{d}_{1}}<{{d}_{2}}</math> , then  <math>{{d}_{lower}}={{d}_{1}}</math>  and  <math>{{d}_{upper}}={{d}_{2}}</math> . If  <math>{{d}_{1}}>{{d}_{2}}</math> , then  <math>{{d}_{lower}}={{d}_{2}}</math>  and  <math>{{d}_{upper}}={{d}_{1}}</math> .
<br>
<br>
''Time Terminated Data''
<br>
Step 1: Calculate  <math>({{\hat{R}}_{lower}},{{\hat{R}}_{upper}})=({{R}^{\tfrac{1}{{{\Pi }_{1}}}}},{{R}^{\tfrac{1}{{{\Pi }_{2}}}}})</math> .
<br>
Step 2: Let  <math>R={{\hat{R}}_{lower}}</math>  and solve for  <math>{{d}_{1}}</math>  using Eqn. (CBR1).
<br>
Step 3: Let  <math>R={{\hat{R}}_{upper}}</math>  and solve for  <math>{{d}_{2}}</math>  using Eqn. (CBR2).
<br>
Step 4: If  <math>{{d}_{1}}<{{d}_{2}}</math> , then  <math>{{d}_{lower}}={{d}_{1}}</math>  and  <math>{{d}_{upper}}={{d}_{2}}</math> . If  <math>{{d}_{1}}>{{d}_{2}}</math> , then  <math>{{d}_{lower}}={{d}_{2}}</math>  and  <math>{{d}_{upper}}={{d}_{1}}</math> .
<br>
<br>
====Bounds on Cumulative Number of Failures====
=====Fisher Matrix Bounds=====
The cumulative number of failures,  <math>N(t)</math> , must be positive, so  <math>\ln \left( N(t) \right)</math>  is treated as approximately normally distributed.
 
::<math>\frac{\ln (\widehat{N}(t))-\ln (N(t))}{\sqrt{Var\left[ \ln \widehat{N}(t) \right]}}\sim N(0,1)</math>
 
 
The confidence bounds on the cumulative number of failures are given by:

::<math>CB=\widehat{N}(t){{e}^{\pm {{z}_{\alpha }}\sqrt{Var(\widehat{N}(t))}/\widehat{N}(t)}}</math>
 
 
:where:
 
::<math>\widehat{N}(t)=\widehat{\lambda }{{t}^{\widehat{\beta }}}</math>
 
<br>
::<math>\begin{align}
  & Var(\widehat{N}(t))= & {{\left( \frac{\partial N(t)}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial N(t)}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda }) \\
&  & +2\left( \frac{\partial N(t)}{\partial \beta } \right)\left( \frac{\partial N(t)}{\partial \lambda } \right)cov(\widehat{\beta },\widehat{\lambda }) 
\end{align}</math>
 
 
The variance calculation is the same as Eqns. (var1), (var2) and (var3).
 
<br>
::<math>\begin{align}
  & \frac{\partial N(t)}{\partial \beta }= & \hat{\lambda }{{t}^{\widehat{\beta }}}\ln (t) \\
& \frac{\partial N(t)}{\partial \lambda }= & {{t}^{\widehat{\beta }}} 
\end{align}</math>
 
<br>
=====Crow Bounds=====
::<math>\begin{array}{*{35}{l}}
  {{N}_{L}}(T)=\tfrac{T}{\widehat{\beta }}{{\lambda }_{i}}{{(T)}_{L}}  \\
  {{N}_{U}}(T)=\tfrac{T}{\widehat{\beta }}{{\lambda }_{i}}{{(T)}_{U}}  \\
\end{array}</math>
 
where  <math>{{\lambda }_{i}}{{(T)}_{L}}</math>  and  <math>{{\lambda }_{i}}{{(T)}_{U}}</math>  can be obtained from Eqn. (inr).
<br>
<br>
=====Example 3=====
Using the data from Example 1, calculate the mission reliability at  <math>t=2000</math>  hours for a mission time of  <math>d=40</math>  hours, along with the confidence bounds at the 90% confidence level.
<br>
''Solution''
<br>
The maximum likelihood estimates of  <math>\widehat{\lambda }</math>  and  <math>\widehat{\beta }</math>  from Example 1 are:
 
 
::<math>\begin{align}
  & \widehat{\beta }= & 0.45300 \\
& \widehat{\lambda }= & 0.36224 
\end{align}</math>
 
 
From Eq. (reliability), the mission reliability at  <math>t=2000</math>  for mission time  <math>d=40</math>  is:
 
::<math>\begin{align}
  & \widehat{R}(t)= & {{e}^{-\left[ \lambda {{\left( t+d \right)}^{\beta }}-\lambda {{t}^{\beta }} \right]}} \\
& = & 0.90292 
\end{align}</math>
 
 
At the 90% confidence level and  <math>T=2000</math>  hours, the Fisher Matrix confidence bounds for the mission reliability for mission time  <math>d=40</math>  are given by:
 
::<math>CB=\frac{\widehat{R}(t)}{\widehat{R}(t)+(1-\widehat{R}(t)){{e}^{\pm {{z}_{\alpha }}\sqrt{Var(\widehat{R}(t))}/\left[ \widehat{R}(t)(1-\widehat{R}(t)) \right]}}}</math>
 
 
::<math>\begin{align}
  & {{[\widehat{R}(t)]}_{L}}= & 0.83711 \\
& {{[\widehat{R}(t)]}_{U}}= & 0.94392 
\end{align}</math>
 
 
The Crow confidence bounds for the mission reliability are:
 
::<math>\begin{align}
  & {{[\widehat{R}(t)]}_{L}}= & {{[\widehat{R}(\tau )]}^{\tfrac{1}{{{\Pi }_{1}}}}} \\
& = & {{[0.90292]}^{\tfrac{1}{0.71440}}} \\
& = & 0.86680 \\
& {{[\widehat{R}(t)]}_{U}}= & {{[\widehat{R}(\tau )]}^{\tfrac{1}{{{\Pi }_{2}}}}} \\
& = & {{[0.90292]}^{\tfrac{1}{1.6051}}} \\
& = & 0.93836 
\end{align}</math>
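The point estimate and the Crow bounds in this example can be reproduced numerically, using the Pi values given above:

```python
import math

# MLEs from Example 1 and the conditions of Example 3
lambda_, beta = 0.36224, 0.45300
t, d = 2000.0, 40.0

# Mission reliability R(t) = exp(-[lambda*(t+d)^beta - lambda*t^beta])
R = math.exp(-(lambda_ * (t + d) ** beta - lambda_ * t ** beta))

# Crow confidence bounds, R^(1/Pi), with the Pi values from the example
pi_1, pi_2 = 0.71440, 1.6051
R_lower = R ** (1.0 / pi_1)
R_upper = R ** (1.0 / pi_2)
# R -> 0.90292, R_lower -> 0.86680, R_upper -> 0.93836 (to five digits)
```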
 
 
Figures ConfReliFish and ConfRelCrow show the Fisher Matrix and Crow confidence bounds on mission reliability for mission time  <math>d=40</math> .
 
[[Image:rga13.3.png|thumb|center|300px|Conditional Reliability vs. Time plot with Fisher Matrix confidence bounds.]]
<br>
<br>
[[Image:rga13.4.png|thumb|center|300px|Conditional Reliability vs. Time plot with Crow confidence bounds.]]
 
<br>
 
===Economical Life Model===
<br>
One consideration in reducing the cost to maintain repairable systems is to establish an overhaul policy that will minimize the total life cost of the system. However, an overhaul policy makes sense only if  <math>\beta >1</math> . It does not make sense to implement an overhaul policy if  <math>\beta <1</math>  since wearout is not present. If you assume that there is a point at which it is cheaper to overhaul a system than to continue repairs, what is the overhaul time that will minimize the total life cycle cost while considering repair cost and the cost of overhaul?
<br>
Denote  <math>{{C}_{1}}</math>  as the average repair cost (unscheduled),  <math>{{C}_{2}}</math>  as the replacement or overhaul cost and  <math>{{C}_{3}}</math>  as the average cost of scheduled maintenance. Scheduled maintenance is performed every  <math>S</math>  miles or time interval. In addition, let  <math>{{N}_{1}}</math>  be the number of failures in  <math>[0,t]</math>  and let  <math>{{N}_{2}}</math>  be the number of replacements in  <math>[0,t]</math> . Suppose that replacement or overhaul occurs at times  <math>T</math> ,  <math>2T</math> ,  <math>3T</math> , and so on. The problem is to select the optimum overhaul time  <math>T={{T}_{0}}</math>  that minimizes the long-term average system cost (unscheduled maintenance, replacement cost and scheduled maintenance). Assuming  <math>\beta >1</math> , the average system cost is minimized when the system is overhauled (or replaced) at the time  <math>{{T}_{0}}</math>  at which the instantaneous maintenance cost equals the average system cost.
The total system cost between overhaul or replacement is:
 
::<math>TSC(T)={{C}_{1}}E(N(T))+{{C}_{2}}+{{C}_{3}}\frac{T}{S}</math>
 
So the average system cost is:
 
::<math>C(T)=\frac{{{C}_{1}}E(N(T))+{{C}_{2}}+{{C}_{3}}\tfrac{T}{S}}{T}</math>
 
 
The instantaneous maintenance cost at time  <math>T</math>  is equal to:
 
::<math>IMC(T)={{C}_{1}}\lambda \beta {{T}^{\beta -1}}+\frac{{{C}_{3}}}{S}</math>
 
 
The following equation holds at optimum overhaul time  <math>{{T}_{0}}</math> :
 
 
::<math>\begin{align}
  & {{C}_{1}}\lambda \beta T_{0}^{\beta -1}+\frac{{{C}_{3}}}{S}= & \frac{{{C}_{1}}E(N({{T}_{0}}))+{{C}_{2}}+{{C}_{3}}\tfrac{{{T}_{0}}}{S}}{{{T}_{0}}} \\
 & = & \frac{{{C}_{1}}\lambda T_{0}^{\beta }+{{C}_{2}}+{{C}_{3}}\tfrac{{{T}_{0}}}{S}}{{{T}_{0}}} 
\end{align}</math>
 
 
:Therefore:
 
::<math>{{T}_{0}}={{\left[ \frac{{{C}_{2}}}{\lambda (\beta -1){{C}_{1}}} \right]}^{1/\beta }}</math>
 
 
When there is no scheduled maintenance, Eqn. (ecolm) becomes:
 
::<math>{{C}_{1}}\lambda \beta T_{0}^{\beta -1}=\frac{{{C}_{1}}\lambda T_{0}^{\beta }+{{C}_{2}}}{{{T}_{0}}}</math>
 
 
The optimum overhaul time,  <math>{{T}_{0}}</math> , is the same as in Eqn. (optimt). Therefore, with periodic maintenance scheduled every  <math>S</math>  miles, the optimum replacement or overhaul time is the same as in the model with only unscheduled maintenance and replacement or overhaul costs.
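As a numeric sketch with hypothetical cost and parameter values, the optimum overhaul time can be computed and checked against the balance condition (no scheduled maintenance,  <math>{{C}_{3}}=0</math> ):

```python
# Optimum overhaul time T0 = [C2 / (lambda*(beta-1)*C1)]^(1/beta).
# Hypothetical example values: repair cost C1, overhaul cost C2, and
# Crow-AMSAA parameters with beta > 1 (wearout), as the policy requires.
C1, C2 = 500.0, 12000.0
lambda_, beta = 0.002, 1.5

T0 = (C2 / (lambda_ * (beta - 1.0) * C1)) ** (1.0 / beta)

# At T0 the instantaneous maintenance cost equals the average system cost
imc = C1 * lambda_ * beta * T0 ** (beta - 1.0)
avg = (C1 * lambda_ * T0 ** beta + C2) / T0
```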
 
==Fleet Analysis==
<br>
Fleet analysis is similar to the repairable systems analysis described previously. The main difference is that a fleet of systems is considered and the models are applied to the fleet failures rather than to the failures of an individual system. In other words, repairable systems analysis models the number of system failures versus system time, whereas fleet analysis models the number of fleet failures versus fleet time.
<br>
 
The main motivation for fleet analysis is to enable the application of the Crow Extended model for fielded data. In many cases, reliability improvements might be necessary on systems that are already in the field. These types of reliability improvements are essentially delayed fixes (BD modes) as described in Chapter 9.
<br>
 
Recall from Chapter 9 that in order to make projections using the Crow Extended model, the  <math>\beta </math>  of the combined A and BD modes should be equal to 1. Since the failure intensity in a fielded system might be changing over time (e.g. increasing if the system wears out), this assumption might be violated. In such a scenario, the Crow Extended model cannot be used. However, if a fleet of systems is considered and the number of fleet failures versus fleet time is modeled, the failures might become random. This is because there is a mixture of systems within a fleet, new and old, and when the failures of this mixture of systems are viewed from a cumulative fleet time point of view, they may be random. Figures Repairable and Fleet illustrate this concept. Figure Repairable shows the number of failures over system age. It can be clearly seen that as the systems age, the intensity of the failures increases (wearout). The superposition system line, which brings the failures from the different systems under a single timeline, also illustrates this observation. On the other hand, if you take the same four systems and combine their failures from a fleet perspective, and consider fleet failures over cumulative fleet hours, then the failures seem to be random. Figure Fleet illustrates this concept in the System Operation plot when you consider the Cum. Time Line. In this case, the  <math>\beta </math>  of the fleet will be equal to 1 and the Crow Extended model can be used for quantifying the effects of future reliability improvements on the fleet.
<br>
<br>
[[Image:rga13.5.png|thumb|center|300px|Repairable System Operation plot.]]
<br>
<br>
[[Image:rga13.6.png|thumb|center|300px|Fleet System Operation plot.]]
===Methodology===
<br>
Figures Repairable and Fleet illustrate that the difference between repairable system data analysis and fleet analysis is the way that the dataset is treated. In fleet analysis, the time-to-failure data from each system is stacked to a cumulative timeline. For example, consider the two systems in Table 13.2.
<br>
{|style= align="center" border="1"
|-
|colspan="3" style="text-align:center"|Table 13.2 - System data
|-
!System
!Failure Times (hr)
!End Time (hr)
|-
|1|| 3, 7|| 10
|-
|2|| 4, 9, 13|| 15
|}
 
The data set is first converted to an accumulated timeline, as follows:
<br>
:• System 1 is considered first. The accumulated timeline is therefore 3 and 7 hours.
:• System 1's End Time is 10 hours. System 2's first failure is at 4 hours. This failure time is added to System 1's End Time to give an accumulated failure time of 14 hours.
:• The second failure for System 2 occurred 5 hours after the first failure. This time interval is added to the accumulated timeline to give 19 hours.
:• The third failure for System 2 occurred 4 hours after the second failure. The accumulated failure time is 19 + 4 = 23 hours.
:• System 2's end time is 15 hours, or 2 hours after the last failure. The total accumulated operating time for the fleet is 25 hours (23 + 2 = 25).
<br>
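The stacking procedure above can be sketched as a short function, using the two systems from Table 13.2:

```python
# Stack per-system failure times onto one cumulative fleet timeline.
# Each system is given as (failure_times, end_time); systems are taken
# in the order listed, as in Table 13.2.
def accumulate(systems):
    timeline, offset = [], 0.0
    for failures, end_time in systems:
        timeline.extend(offset + x for x in failures)
        offset += end_time                    # next system starts here
    return timeline, offset                   # failures, total fleet time

timeline, total = accumulate([([3, 7], 10), ([4, 9, 13], 15)])
# timeline -> [3, 7, 14, 19, 23], total -> 25
```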
In general, the accumulated operating time  <math>{{Y}_{j}}</math>  is calculated by:
 
::<math>{{Y}_{j}}={{X}_{i,q}}+\underset{l=1}{\overset{q-1}{\mathop \sum }}\,{{T}_{l}},\text{ }j=1,2,...,N</math>
 
:where:
<br>
:• <math>{{X}_{i,q}}</math>  is the  <math>{{i}^{th}}</math>  failure of the  <math>{{q}^{th}}</math>  system
:• <math>{{T}_{q}}</math>  is the end time of the  <math>{{q}^{th}}</math>  system
:• <math>K</math>  is the total number of systems
:• <math>N</math>  is the total number of failures from all systems ( <math>N=\underset{q=1}{\overset{K}{\mathop{\sum }}}\,{{N}_{q}}</math> )
<br>
As this example demonstrates, the accumulated timeline is determined based on the order of the systems. So if you consider the data in Table 13.2 by taking System 2 first, the accumulated timeline would be: 4, 9, 13, 18, 22, with an end time of 25. Therefore, the order in which the systems are considered is somewhat important. However, in the next step of the analysis the data from the accumulated timeline will be grouped into time intervals, effectively eliminating the importance of the order of the systems. Keep in mind that this will NOT always be true. This is true only when the order of the systems was random to begin with. If there is some logic/pattern in the order of the systems, then it will remain even if the cumulative timeline is converted to grouped data. For example, consider a system that wears out with age. This means that more failures will be observed as this system ages and these failures will occur more frequently. Within a fleet of such systems, there will be new and old systems in operation. If the dataset collected is considered from the newest to the oldest system, then even if the data points are grouped, the pattern of fewer failures at the beginning and more failures at later time intervals will still be present. If the objective of the analysis is to determine the difference between newer and older systems, then that order for the data will be acceptable. However, if the objective of the analysis is to determine the reliability of the fleet, then the systems should be randomly ordered.
<br>
<br>
 
===Data Analysis===
Once the accumulated timeline has been generated, it is converted into grouped data. To accomplish this, a group interval is required. The group interval length should be chosen so that it is representative of the data. Also note that the intervals do not have to be of equal length. Once the data points have been grouped, the parameters can be obtained using maximum likelihood estimation as described in the Grouped Data Analysis section of Chapter 5. The data in Table 13.2 can be grouped into 5 hr intervals. This interval length is sufficiently large to ensure that there are failures within each interval. The grouped data set is given in Table 13.3.
 
<br>
{|style= align="center" border="1"
|-
|colspan="2" style="text-align:center"|Table 13.3 - Grouped data
|-
!Failures in Interval
!Interval End Time
|-
|1|| 5
|-
|1|| 10
|-
|1|| 15
|-
|1|| 20
|-
|1|| 25
|}
 
The Crow-AMSAA model for Grouped Failure Times is applied to the data in Table 13.3, and the parameters of the model are estimated by solving the following maximum likelihood equations (Chapter 5).
 
 
::<math>\begin{matrix}
  \widehat{\lambda }=\frac{n}{T_{k}^{\widehat{\beta }}} \\
  \underset{i=1}{\overset{k}{\mathop \sum }}\,{{n}_{i}}\left[ \frac{T_{i}^{\widehat{\beta }}\ln {{T}_{i}}-T_{i-1}^{\widehat{\beta }}\ln {{T}_{i-1}}}{T_{i}^{\widehat{\beta }}-T_{i-1}^{\widehat{\beta }}}-\ln {{T}_{k}} \right]=0 \\
\end{matrix}</math>
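For the data in Table 13.3, these likelihood equations can be solved with a short stdlib-only sketch that uses bisection on the beta equation:

```python
import math

# Grouped Crow-AMSAA MLE for Table 13.3 (one failure per 5-hour interval).
# Solve the beta equation by bisection, then lambda = n / T_k^beta.
end_times = [5.0, 10.0, 15.0, 20.0, 25.0]
counts = [1, 1, 1, 1, 1]

def beta_equation(beta):
    total, t_prev = 0.0, 0.0
    for n_i, t_i in zip(counts, end_times):
        num = t_i ** beta * math.log(t_i)
        if t_prev > 0.0:                     # the T_0 = 0 term vanishes
            num -= t_prev ** beta * math.log(t_prev)
        total += n_i * (num / (t_i ** beta - t_prev ** beta)
                        - math.log(end_times[-1]))
        t_prev = t_i
    return total

lo, hi = 0.1, 5.0                            # beta_equation changes sign here
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    if beta_equation(mid) > 0.0:
        lo = mid
    else:
        hi = mid
beta_hat = 0.5 * (lo + hi)
lambda_hat = sum(counts) / end_times[-1] ** beta_hat
# Uniform failures over equal intervals give beta_hat = 1 (no growth)
```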
 
 
====Example 4====
Table 13.4 presents data for a fleet of 27 systems. A cycle is a complete history from overhaul to overhaul. The failure history for the last completed cycle for each system is recorded. This is a random sample of data from the fleet. These systems are in the order in which they were selected. Suppose the intervals to group the current data are 10000, 20000, 30000, 40000 and the final interval is defined by the termination time. Conduct the fleet analysis.
<br>
{|style= align="center" border="1"
|-
|colspan="4" style="text-align:center"|Table 13.4 - Sample fleet data
|-
!System
!Cycle Time  <math>{{T}_{j}}</math>
!Number of failures  <math>{{N}_{j}}</math>
!Failure Time  <math>{{X}_{ij}}</math>
|-
|1|| 1396|| 1|| 1396
|-
|2|| 4497|| 1|| 4497
|-
|3|| 525|| 1|| 525
|-
|4|| 1232|| 1|| 1232
|-
|5|| 227|| 1|| 227
|-
|6|| 135|| 1|| 135
|-
|7|| 19|| 1|| 19
|-
|8|| 812|| 1|| 812
|-
|9|| 2024|| 1|| 2024
|-
|10|| 943|| 2|| 316, 943
|-
|11|| 60|| 1|| 60
|-
|12|| 4234|| 2|| 4233, 4234
|-
|13|| 2527|| 2|| 1877, 2527
|-
|14|| 2105|| 2|| 2074, 2105
|-
|15|| 5079|| 1|| 5079
|-
|16|| 577|| 2|| 546, 577
|-
|17|| 4085|| 2|| 453, 4085
|-
|18|| 1023|| 1|| 1023
|-
|19|| 161|| 1|| 161
|-
|20|| 4767|| 2|| 36, 4767
|-
|21|| 6228|| 3|| 3795, 4375, 6228
|-
|22|| 68|| 1|| 68
|-
|23|| 1830|| 1|| 1830
|-
|24|| 1241|| 1|| 1241
|-
|25|| 2573|| 2|| 871, 2573
|-
|26|| 3556|| 1|| 3556
|-
|27|| 186|| 1|| 186
|-
|Total||52110|| 37||
|}
=====Solution=====
For the system data in Table 13.4, the data can be grouped into 10000, 20000, 30000, 40000 and 52110 time intervals. Table 13.5 gives the grouped data.
 
 
{|style= align="center" border="2"
|-
|colspan="2" style="text-align:center"|Table 13.5 - Grouped data
|-
!Time
!Observed Failures
|-
|10000|| 8
|-
|20000|| 16
|-
|30000|| 22
|-
|40000|| 27
|-
|52110|| 37
|}
Based on the above time intervals, the maximum likelihood estimates of  <math>\widehat{\lambda }</math>  and  <math>\widehat{\beta }</math>  for this data set are then given by:
 
 
::<math>\begin{matrix}
  \widehat{\lambda }=0.00147 \\
  \widehat{\beta }=0.93328 \\
\end{matrix}</math>
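These estimates can be reproduced by solving the grouped-data likelihood equations numerically (a stdlib-only bisection sketch):

```python
import math

# Grouped Crow-AMSAA MLE for the fleet data in Table 13.5.
end_times = [10000.0, 20000.0, 30000.0, 40000.0, 52110.0]
counts = [8, 8, 6, 5, 10]

def beta_equation(beta):
    total, t_prev = 0.0, 0.0
    for n_i, t_i in zip(counts, end_times):
        num = t_i ** beta * math.log(t_i)
        if t_prev > 0.0:                     # the T_0 = 0 term vanishes
            num -= t_prev ** beta * math.log(t_prev)
        total += n_i * (num / (t_i ** beta - t_prev ** beta)
                        - math.log(end_times[-1]))
        t_prev = t_i
    return total

lo, hi = 0.1, 5.0                            # beta_equation changes sign here
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    if beta_equation(mid) > 0.0:
        lo = mid
    else:
        hi = mid
beta_hat = 0.5 * (lo + hi)                         # approx. 0.93328
lambda_hat = sum(counts) / end_times[-1] ** beta_hat  # approx. 0.00147
```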
 
 
Figure fle shows the System Operation plot.
 
[[Image:rga13.7.png|thumb|center|300px|System Operation plot for fleet data.]]
<br>
 
===Applying the Crow Extended Model to Fleet Data===
<br>
As mentioned previously, the main motivation for fleet analysis is to apply the Crow Extended model to in-service reliability improvements. The methodology is identical to the application of the Crow Extended model for Grouped Data described in Chapter 9. Consider the fleet data in Table 13.4. In order to apply the Crow Extended model, put the  <math>N=37</math>  failure times on a cumulative time scale over  <math>(0,T)</math> , where  <math>T=52110</math> . In this example, each  <math>{{T}_{i}}</math>  corresponds to a failure time  <math>{{X}_{ij}}</math> . This is often not the case. However, in all cases the accumulated operating time  <math>{{Y}_{q}}</math>  at a failure time  <math>{{X}_{ir}}</math>  is:
 
::<math>\begin{align}
  & {{Y}_{q}}= & {{X}_{i,r}}+\underset{j=1}{\overset{r-1}{\mathop \sum }}\,{{T}_{j}},\ \ \ q=1,2,\ldots ,N \\
& N= & \underset{j=1}{\overset{K}{\mathop \sum }}\,{{N}_{j}} 
\end{align}</math>
 
 
Here,  <math>q</math>  indexes the successive order of the failures. Thus, in this example  <math>N=37,\,{{Y}_{1}}=1396,\,{{Y}_{2}}=5893,\,{{Y}_{3}}=6418,\ldots ,{{Y}_{37}}=52110</math> . See Table 13.6.
<br>
<br>
{|style= align="center" border="1"
|-
|colspan="7" style="text-align:center"|Table 13.6 - Test-find-test fleet data
|-
!<math>q</math>
!<math>{{Y}_{q}}</math>
!Mode
!
!<math>q</math>
!<math>{{Y}_{q}}</math>
!Mode
|-
|1|| 1396|| BD1|| || 20|| 26361|| BD1
|-
|2|| 5893|| BD2|| || 21|| 26392|| A
|-
|3|| 6418|| A|| || 22|| 26845|| BD8
|-
|4|| 7650|| BD3|| || 23|| 30477|| BD1
|-
|5|| 7877|| BD4|| || 24|| 31500|| A
|-
|6|| 8012|| BD2|| || 25|| 31661|| BD3
|-
|7|| 8031|| BD2|| || 26|| 31697|| BD2
|-
|8|| 8843|| BD1|| || 27|| 36428|| BD1
|-
|9|| 10867|| BD1|| || 28|| 40223|| BD1
|-
|10|| 11183|| BD5|| || 29|| 40803|| BD9
|-
|11|| 11810|| A|| || 30|| 42656|| BD1
|-
|12|| 11870|| BD1|| || 31|| 42724|| BD10
|-
|13|| 16139|| BD2|| || 32|| 44554|| BD1
|-
|14|| 16104|| BD6|| || 33|| 45795|| BD11
|-
|15|| 18178|| BD7|| || 34|| 46666|| BD12
|-
|16|| 18677|| BD2|| || 35|| 48368|| BD1
|-
|17|| 20751|| BD4|| || 36|| 51924|| BD13
|-
|18|| 20772|| BD2|| || 37|| 52110|| BD2
|-
|19|| 25815|| BD1|| || ||
|}
 
Each system failure time in Table 13.4 corresponds to a problem and a cause (failure mode). The management strategy can be to not fix the failure mode (A mode) or to fix the failure mode with a delayed corrective action (BD mode). There are  <math>{{N}_{A}}=4</math>  failures due to A failure modes. There are  <math>{{N}_{BD}}=33</math>  total failures due to  <math>M=13</math>  distinct BD failure modes. Some of the distinct BD modes had repeats of the same problem. For example, mode BD1 had 12 occurrences of the same problem. Therefore, in this example, there are 13 distinct corrective actions corresponding to 13 distinct BD failure modes.
The objective of the Crow Extended model is to estimate the impact of the 13 distinct corrective actions. The analyst chooses an average effectiveness factor (EF) based on the proposed corrective actions and historical experience. Historical industry and government data support a typical average effectiveness factor of  <math>\overline{d}=0.70</math>  for many systems. In this example, an average EF of  <math>\bar{d}=0.4</math>  was assumed in order to be conservative regarding the impact of the proposed corrective actions. Since there are no BC failure modes (i.e., no corrective actions were applied during the test), the projected failure intensity is:
 
::<math>\widehat{r}(T)=\left( \frac{{{N}_{A}}}{T}+\underset{i=1}{\overset{M}{\mathop \sum }}\,(1-{{d}_{i}})\frac{{{N}_{i}}}{T} \right)+\overline{d}h(T)</math>
 
 
The first term is estimated by:
 
::<math>{{\widehat{\lambda }}_{A}}=\frac{{{N}_{A}}}{T}=0.000077</math>
 
 
The second term is:
 
::<math>\underset{i=1}{\overset{M}{\mathop \sum }}\,(1-{{d}_{i}})\frac{{{N}_{i}}}{T}=0.00038</math>
 
 
This estimates the growth potential failure intensity:
 
::<math>\begin{align}
  & {{\widehat{\lambda }}_{GP}}(T)= & \frac{{{N}_{A}}}{T}+\underset{i=1}{\overset{M}{\mathop \sum }}\,(1-{{d}_{i}})\frac{{{N}_{i}}}{T} \\
& = & 0.00046 
\end{align}</math>
 
To estimate the last term  <math>\overline{d}h(T)</math>  of the Crow Extended model, partition the data in Table 13.6 into intervals. This partition consists of  <math>D</math>  successive intervals. The length of the  <math>{{q}^{th}}</math>  interval is  <math>{{L}_{q}},</math>  <math>\,q=1,2,\ldots ,D</math> . It is not required that the intervals be of the same length, but there should be several (e.g. at least 5) cycles per interval on average. Also, let  <math>{{S}_{1}}={{L}_{1}},</math>  <math>{{S}_{2}}={{L}_{1}}+{{L}_{2}},\ldots ,</math>  etc. be the accumulated time through the  <math>{{q}^{th}}</math>  interval. For the  <math>{{q}^{th}}</math>  interval note the number of distinct BD modes,  <math>M{{I}_{q}}</math> , appearing for the first time,  <math>q=1,2,\ldots ,D</math> . See Table 13.7.
<br>
<br>
{| align="center" border="1"
|-
|colspan="4" style="text-align:center"|Table 13.7 - Grouped data for distinct BD modes
|-
!Interval
!No. of Distinct BD Mode Failures
!Length
!Accumulated Time
|-
|1|| <math>M{{I}_{1}}</math>|| <math>{{L}_{1}}</math>|| <math>{{S}_{1}}</math>
|-
|2|| <math>M{{I}_{2}}</math>|| <math>{{L}_{2}}</math>|| <math>{{S}_{2}}</math>
|-
|.|| .|| .|| .
|-
|.|| .|| .|| .
|-
|.|| .|| .|| .
|-
|D|| <math>M{{I}_{D}}</math>|| <math>{{L}_{D}}</math>|| <math>{{S}_{D}}</math>
|}
The term  <math>\widehat{h}(T)</math>  is calculated as  <math>\widehat{h}(T)=\widehat{\lambda }\widehat{\beta }{{T}^{\widehat{\beta }-1}}</math> and the values  <math>\widehat{\lambda }</math>  and  <math>\widehat{\beta }</math>  satisfy Eqns. (cc1) and (cc2). This is the grouped data version of the Crow-AMSAA model applied only to the first occurrence of distinct BD modes.
For the data in Table 13.6 the first 4 intervals had a length of 10000 and the last interval was 12110. Therefore,  <math>D=5</math> . This choice gives an average of about 5 overhaul cycles per interval. See Table 13.8.
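The interval counts can be reproduced programmatically from the Table 13.6 event list: record the first occurrence time of each distinct BD mode, then count how many of those first occurrences fall in each interval. A sketch:

```python
# Count, per interval, the distinct BD modes appearing for the first time.
events = [  # (cumulative fleet time Y_q, failure mode) from Table 13.6
    (1396, "BD1"), (5893, "BD2"), (6418, "A"), (7650, "BD3"),
    (7877, "BD4"), (8012, "BD2"), (8031, "BD2"), (8843, "BD1"),
    (10867, "BD1"), (11183, "BD5"), (11810, "A"), (11870, "BD1"),
    (16139, "BD2"), (16104, "BD6"), (18178, "BD7"), (18677, "BD2"),
    (20751, "BD4"), (20772, "BD2"), (25815, "BD1"), (26361, "BD1"),
    (26392, "A"), (26845, "BD8"), (30477, "BD1"), (31500, "A"),
    (31661, "BD3"), (31697, "BD2"), (36428, "BD1"), (40223, "BD1"),
    (40803, "BD9"), (42656, "BD1"), (42724, "BD10"), (44554, "BD1"),
    (45795, "BD11"), (46666, "BD12"), (48368, "BD1"), (51924, "BD13"),
    (52110, "BD2"),
]
boundaries = [10000, 20000, 30000, 40000, 52110]  # accumulated times S_q

first_seen = {}  # BD mode -> time of its first occurrence
for t, mode in events:
    if mode.startswith("BD") and mode not in first_seen:
        first_seen[mode] = t

counts = [0] * len(boundaries)
for t in first_seen.values():
    counts[next(i for i, s in enumerate(boundaries) if t <= s)] += 1
print(counts)  # [4, 3, 1, 0, 5]
```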
<br>
<br>
{| align="center" border="1"
|-
|colspan="4" style="text-align:center"|Table 13.8 - Grouped data for distinct BD modes from Table 13.6
|-
!Interval
!No. of Distinct BD Mode Failures
!Length
!Accumulated Time
|-
|1|| 4|| 10000|| 10000
|-
|2|| 3|| 10000|| 20000
|-
|3|| 1|| 10000|| 30000
|-
|4|| 0|| 10000|| 40000
|-
|5|| 5|| 12110|| 52110
|-
|Total|| 13|| ||
|}
 
:Thus:
 
::<math>\begin{align}
  & \widehat{\lambda }= & 0.00330 \\
& \widehat{\beta }= & 0.76219 
\end{align}</math>
 
:This gives:
 
::<math>\begin{align}
  & \widehat{h}(T)= & \widehat{\lambda }\widehat{\beta }{{T}^{\widehat{\beta }-1}} \\
& = & 0.00019 
\end{align}</math>
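The grouped-data estimates above can be reproduced numerically. The sketch below uses the standard grouped-data Crow-AMSAA MLE equation for  <math>\widehat{\beta }</math>  (solved by simple bisection) and then computes  <math>\widehat{\lambda }</math>  and  <math>\widehat{h}(T)</math> :

```python
import math

# Grouped-data Crow-AMSAA MLE for the first occurrences of distinct BD
# modes (Table 13.8).  beta solves
#   sum_i MI_i * (S_i^b ln S_i - S_{i-1}^b ln S_{i-1}) / (S_i^b - S_{i-1}^b)
#     = N ln S_D      (the S_0 = 0 term is taken as zero),
# and then lambda = N / S_D^beta.
MI = [4, 3, 1, 0, 5]                       # distinct BD modes per interval
S = [10000, 20000, 30000, 40000, 52110]    # accumulated times S_q
N = sum(MI)                                # 13 distinct BD modes in total

def g(b):
    total, prev_pow, prev_num = 0.0, 0.0, 0.0
    for mi, s in zip(MI, S):
        s_pow = s ** b
        total += mi * (s_pow * math.log(s) - prev_num) / (s_pow - prev_pow)
        prev_pow, prev_num = s_pow, s_pow * math.log(s)
    return total - N * math.log(S[-1])

# g is decreasing in b, positive at 0.1 and negative at 3: bisect the root.
lo, hi = 0.1, 3.0
for _ in range(200):
    mid = (lo + hi) / 2.0
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
beta = (lo + hi) / 2.0
lam = N / S[-1] ** beta
h = lam * beta * S[-1] ** (beta - 1.0)
print(beta, lam, h)  # beta ~ 0.76219, lambda ~ 0.00330, h(T) ~ 0.00019
```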
 
Consequently, for  <math>\overline{d}=0.4</math>  the last term of the Crow Extended model is given by:
 
::<math>\overline{d}h(T)=0.000076</math>
 
 
The projected failure intensity is:
 
::<math>\begin{align}
  & \widehat{r}(T)= & \frac{{{N}_{A}}}{T}+\underset{i=1}{\overset{M}{\mathop \sum }}\,(1-{{d}_{i}})\frac{{{N}_{i}}}{T}+\overline{d}h(T) \\
& = & 0.000077+0.6\times (0.00063)+0.4\times (0.00019) \\
& = & 0.000533 
\end{align}</math>
 
 
This estimates that the 13 proposed corrective actions will reduce the fleet failure intensity (failures per hour of operation) from the current  <math>\widehat{r}(0)=\tfrac{{{N}_{A}}+{{N}_{BD}}}{T}=0.00071</math>  to  <math>\widehat{r}(T)=0.00053</math>. The average time between failures is correspondingly estimated to increase from 1408.38 hours to 1876.93 hours.
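The full projection can be checked with a few lines of arithmetic, using only the quantities quoted above:

```python
# Crow Extended projection for the fleet data: T = 52110 cumulative hours,
# N_A = 4 A-mode failures, N_BD = 33 BD-mode failures, average EF d = 0.4,
# and the grouped-data estimates lambda = 0.00330, beta = 0.76219.
T, N_A, N_BD, d = 52110.0, 4, 33, 0.4
lam, beta = 0.00330, 0.76219

h = lam * beta * T ** (beta - 1)       # intensity of new distinct BD modes
r_current = (N_A + N_BD) / T           # achieved intensity before fixes
r_projected = N_A / T + (1 - d) * N_BD / T + d * h

print(round(1 / r_current, 1), round(1 / r_projected, 1))  # ~1408.4 -> ~1877
```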
<br>
<br>
 
===Confidence Bounds===
For fleet data analysis using the Crow-AMSAA model, the confidence bounds are calculated using the same procedure as described in Section 5.4. For fleet data analysis using the Crow Extended model, the confidence bounds are calculated using the same procedure as described in Section 9.6.1.
<br>
<br>
 
==General Examples==
<br>
===Example 5 (fleet data)===
<br>
Eleven systems from the field were chosen for the purposes of a fleet analysis. Each system had at least one failure. All of the systems had a start time equal to zero and the last failure for each system corresponds to the end time. Group the data based on a fixed interval of 3000 hours and assume a fixed effectiveness factor equal to 0.4. Do the following:
<br>
<br>
1) Estimate the parameters of the Crow Extended model.
<br>
2) Based on the analysis, does it appear that the systems were randomly ordered?
<br>
3) After the implementation of the delayed fixes, how many failures would you expect within the next 4000 hours of fleet operation?
 
<br>
 
{| align="center" border="1"
|-
|colspan="2" style="text-align:center"|Table 13.9 - Fleet data for Example 5
|-
!System
!Times-to-Failure
|-
|1|| 1137 BD1, 1268 BD2
|-
|2|| 682 BD3, 744 A, 1336 BD1
|-
|3|| 95 BD1, 1593 BD3
|-
|4|| 1421 A
|-
|5|| 1091 A, 1574 BD2
|-
|6|| 1415 BD4
|-
|7|| 598 BD4, 1290 BD1
|-
|8|| 1556 BD5
|-
|9|| 55 BD4
|-
|10|| 730 BD1, 1124 BD3
|-
|11|| 1400 BD4, 1568 A
|}
 
====Solution to Example 5====
<br>
:1) Figure Repair1 shows the estimated Crow Extended parameters.
:2) Since the estimated parameter  <math>\beta =0.8569</math>  is close to 1, it does appear that the systems were randomly ordered. You can verify that the confidence bounds on  <math>\beta </math>  include 1 by calculating the parameter bounds in the QCP or by viewing the Beta Bounds plot. You can also check graphically whether the systems were randomly ordered by using the System Operation plot, as shown in Figure Repair2. Looking at the Cum. Time Line, the failures do not appear to exhibit a trend. Therefore, the systems can be assumed to be randomly ordered.
 
[[Image:rga13.8.png|thumb|center|300px|Estimated Crow Extended parameters.]]
<br>
<br>
[[Image:rga13.9.png|thumb|center|300px|System Operation plot.]]
<br>
 
===Example 6 (repairable system data)===
<br>
This case study is based on the data given in the article Graphical Analysis of Repair Data by Dr. Wayne Nelson [23]. The data in Table 13.10 represents repair data on an automatic transmission from a sample of 34 cars. For each car, the data set shows mileage at the time of each transmission repair, along with the latest mileage. The + indicates the latest mileage observed without failure. Car 1, for example, had a repair at 7068 miles and was observed until 26,744 miles. Do the following:
<br>
 
:1) Estimate the parameters of the Power Law model.
:2) Estimate the number of warranty claims for a 36,000 mile warranty policy for an estimated fleet of 35,000 vehicles.
 
<br>
 
{| align="center" border="1"
|-
|colspan="5" style="text-align:center"|Table 13.10 - Automatic transmission data
|-
!Car
!Mileage
!
!Car
!Mileage
|-
|1|| 7068, 26744+|| || 18|| 17955+
|-
|2|| 28, 13809+|| || 19|| 19507+
|-
|3|| 48, 1440, 29834+|| || 20|| 24177+
|-
|4|| 530, 25660+|| || 21|| 22854+
|-
|5|| 21762+|| || 22|| 17844+
|-
|6|| 14235+|| || 23|| 22637+
|-
|7|| 1388, 18228+|| || 24|| 375, 19607+
|-
|8|| 21401+|| || 25|| 19403+
|-
|9|| 21876+|| || 26|| 20997+
|-
|10|| 5094, 18228+|| || 27|| 19175+
|-
|11|| 21691+|| || 28|| 20425+
|-
|12|| 20890+|| || 29|| 22149+
|-
|13|| 22486+|| || 30|| 21144+
|-
|14|| 19321+|| || 31|| 21237+
|-
|15|| 21585+|| || 32|| 14281+
|-
|16|| 18676+|| || 33|| 8250, 21974+
|-
|17|| 23520+|| || 34|| 19250, 21888+
|}
 
====Solution to Example 6====
<br>
:1) The estimated Power Law parameters are shown in Figure Repair3.
:2) The expected number of failures at 36,000 miles can be estimated using the QCP, as shown in Figure Repair4. The model predicts that 0.3559 failures per system will occur by 36,000 miles. Therefore, for a fleet of 35,000 vehicles, the expected number of warranty claims is 0.3559 * 35,000 ≈ 12,456.
 
[[Image:rga13.10.png|thumb|center|300px|Entered transmission data and the estimated Power Law parameters.]]
 
[[Image:rga13.11.png|thumb|center|300px|Cumulative number of failures at 36,000 miles.]]
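The arithmetic in part 2 can be sketched as follows. The per-system expectation of 0.3559 failures is the QCP result read from the figure (under the fitted Power Law model it equals  <math>\widehat{\lambda }{{t}^{\widehat{\beta }}}</math>  at t = 36,000); only the scaling to the fleet is computed here:

```python
# Expected warranty claims for a 36,000-mile policy across a 35,000-vehicle
# fleet.  The per-system expected number of failures by 36,000 miles
# (0.3559) is taken from the QCP result, not recomputed here.
failures_per_system = 0.3559
fleet_size = 35000
claims = failures_per_system * fleet_size
print(claims)  # about 12,456 expected warranty claims
```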
 
<br>
 
===Example 7 (repairable system data)===
<br>
Field data have been collected for a system that begins its wearout phase at time zero. The start time for each system is equal to zero and the end time for each system is 10,000 miles. Each system is scheduled to undergo an overhaul after a certain number of miles. It has been determined that an overhaul costs four times as much as a repair. Table 13.11 presents the data. Do the following:
<br>
:1) Estimate the parameters of the Power Law model.
:2) Determine the optimum overhaul interval.
:3) If  <math>\beta <1</math> , would it be cost-effective to implement an overhaul policy?
 
<br>
 
{| align="center" border="1"
|-
|colspan="3" style="text-align:center"|Table 13.11 - Field data
|-
!System 1
!System 2
!System 3
|-
|1006.3|| 722.7|| 619.1
|-
|2261.2|| 1950.9|| 1519.1
|-
|2367|| 3259.6|| 2956.6
|-
|2615.5|| 4733.9|| 3114.8
|-
|2848.1|| 5105.1|| 3657.9
|-
|4073|| 5624.1|| 4268.9
|-
|5708.1|| 5806.3|| 6690.2
|-
|6464.1|| 5855.6|| 6803.1
|-
|6519.7|| 6325.2|| 7323.9
|-
|6799.1 ||6999.4|| 7501.4
|-
|7342.9 ||7084.4|| 7641.2
|-
|7736 ||7105.9|| 7851.6
|-
|8246.1|| 7290.9|| 8147.6
|-
| || 7614.2|| 8221.9
|-
| || 8332.1|| 9560.5
|-
| || 8368.5|| 9575.4
|-
| || 8947.9||
|-
| || 9012.3 ||
|-
| || 9135.9 ||
|-
| || 9147.5 ||
|-
| || 9601 ||
|}
 
====Solution to Example 7====
:1) Figure Repair5 shows the estimated Power Law parameters.
:2) The QCP can be used to calculate the optimum overhaul interval as shown in Figure Repair6.
:3) Since  <math>\beta <1</math> , the systems are not wearing out and it would not be cost-effective to implement an overhaul policy. An overhaul policy makes sense only if the systems are wearing out; otherwise, an overhauled unit has the same probability of failing as a unit that was not overhauled.
 
[[Image:rga13.12.png|thumb|center|300px|Entered data and the estimated Power Law parameters.]]
<br>
<br>
[[Image:rga13.13.png|thumb|center|300px|The optimum overhaul interval.]]
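The reasoning in part 3 can be illustrated with a simple cost-rate model. This is a common formulation, not necessarily the exact calculation the QCP performs: if an overhaul costs  <math>{{C}_{o}}=4{{C}_{r}}</math>  and the expected number of repairs by age  <math>t</math>  follows the Power Law  <math>\lambda {{t}^{\beta }}</math> , the average cost per mile under an overhaul-every-<math>t</math> policy is  <math>({{C}_{o}}+{{C}_{r}}\lambda {{t}^{\beta }})/t</math> . For  <math>\beta >1</math>  this has a finite minimizer; for  <math>\beta <1</math>  it decreases monotonically, so no finite overhaul interval is optimal. The  <math>\lambda </math>  value below is illustrative only:

```python
# Average cost per mile if the system is overhauled every t miles, with an
# overhaul costing four times a repair (C_r = 1, C_o = 4) and Power Law
# repairs E[N(t)] = lam * t**beta.  The lam value is illustrative, not the
# estimate obtained from the Table 13.11 data.
def cost_rate(t, lam, beta, C_r=1.0, C_o=4.0):
    return (C_o + C_r * lam * t ** beta) / t

lam = 0.001

# beta > 1 (wearout): an interior minimum exists at
#   t* = (C_o / (C_r * lam * (beta - 1))) ** (1 / beta)
t_star = (4.0 / (lam * 0.5)) ** (1 / 1.5)   # beta = 1.5  ->  t* = 400 miles
assert cost_rate(t_star, lam, 1.5) < cost_rate(2 * t_star, lam, 1.5)

# beta < 1 (no wearout): the cost rate keeps decreasing with t, so there is
# no finite optimum; overhauls are never cost-effective.
rates = [cost_rate(t, lam, 0.8) for t in (5000, 10000, 20000)]
print(rates[0] > rates[1] > rates[2])  # True
```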
 
===Example 8 (repairable system data)===
<br>
Failures and fixes of two repairable systems in the field are recorded. Both systems start from time 0. System 1 ends at time = 504 and system 2 ends at time = 541. All the BD modes are fixed at the end of the test. A fixed effectiveness factor equal to 0.6 is used. Answer the following questions:
<br>
:1) Estimate the parameters of the Crow Extended model.
:2) Calculate the projected MTBF after the delayed fixes.
:3) What is the expected number of failures at time 1,000 if no fixes are performed on the future failures?
 
====Solution to Example 8====
:1) Figure CrowExtendedRepair shows the estimated Crow Extended parameters.
:2) Figure CrowExtendedMTBF shows the projected MTBF at time = 541 (i.e. the age of the oldest system).
:3) Figure CrowExtendedNumofFailure shows the expected number of failures at time = 1,000.
 
[[Image:rga13.14.png|thumb|center|300px|Crow Extended model for repairable systems.]]
<br>
<br>
[[Image:rga13.15.png|thumb|center|300px|MTBFs from the Crow Extended model.]]
<br>
<br>
[[Image:rga13.16.png|thumb|center|300px|Cumulative number of failures at time = 1,000.]]
 
<br>

Latest revision as of 06:11, 23 August 2012