Recurrent Event Data Analysis

From ReliaWiki
{{template:LDABOOK|20|Non-Parametric Recurrent Events Data Analysis}}
== Introduction  ==
 
Recurrent Event Data Analysis (RDA) is used in various applied fields such as reliability, medicine, social sciences, economics, business and criminology.
 
Whereas life data analysis (LDA) assumes that events (failures) are independent and identically distributed (''iid''), there are many cases where events are dependent and not identically distributed (such as repairable system data), or where the analyst is interested in modeling the number of occurrences of events over time rather than the length of time prior to the first event, as in LDA.
 
Weibull++ provides both parametric and non-parametric approaches to analyze such data.
 
:*The non-parametric approach is based on the well-known Mean Cumulative Function (MCF). The Weibull++ module for this type of analysis builds upon the work of Dr. Wayne Nelson, who has written extensively on the calculation and applications of MCF [[Appendix: Weibull References|[31]]].
 
:*The parametric approach is based on the General Renewal Process (GRP) model, which is particularly useful in understanding the effects of repairs on the age of a system. Traditionally, the models commonly used for analyzing repairable systems data are the perfect renewal process (PRP), which corresponds to perfect repairs, and the nonhomogeneous Poisson process (NHPP), which corresponds to minimal repairs. However, most repair activities do not realistically result in either of these extreme situations, but rather in a complicated intermediate one (general repair or imperfect repair/maintenance), which is well treated by the GRP model.
 
== Non-Parametric Recurrent Event Data Analysis  ==
 
Non-parametric RDA provides a non-parametric graphical estimate of the mean cumulative number or cost of recurrence per unit versus age. In the reliability field, the Mean Cumulative Function (MCF) can be used to: [[Appendix: Weibull References|[31]]]
 
:*Evaluate whether the population repair (or cost) rate increases or decreases with age (this is useful for product retirement and burn-in decisions).
:*Estimate the average number or cost of repairs per unit during warranty or some time period.
:*Compare two or more sets of data from different designs, production periods, maintenance policies, environments, operating conditions, etc.
:*Predict future numbers and costs of repairs, such as the expected number of failures next month, quarter, or year.
:*Reveal unexpected information and insight.
 
=== The Mean Cumulative Function (MCF)  ===
 
In a non-parametric analysis of recurrent event data, each population unit can be described by a cumulative history function for the cumulative number of recurrences. It is a staircase function that depicts the cumulative number of recurrences of a particular event, such as repairs over time. The figure below depicts a unit's cumulative history function.
 
[[Image:Lda11.1.png|center|250px]]
 
The non-parametric model for a population of units is described as the population of cumulative history functions (curves). It is the population of all staircase functions of every unit in the population. At age <span class="texhtml">''t''</span>, the units have a distribution of their cumulative number of events. That is, a fraction of the population has accumulated 0 recurrences, another fraction has accumulated 1 recurrence, another fraction has accumulated 2 recurrences, etc. This distribution differs at different ages <span class="texhtml">''t''</span>, and has a mean <span class="texhtml">''M''(''t'')</span> called the mean cumulative function (MCF). <span class="texhtml">''M''(''t'')</span> is the pointwise average of all population cumulative history functions (see figure below).
 
[[Image:Lda11.2.png|center|250px]]
 
For the case of uncensored data, the mean cumulative function values <math>{{M}^{*}}({{t}_{i}})\ </math> at different recurrence ages <span class="texhtml">''t''<sub>''i''</sub></span> are estimated by calculating the average of the cumulative number of recurrences of events for each unit in the population at <span class="texhtml">''t''<sub>''i''</sub></span>. When the histories are censored, the following steps are applied.
 
'''1st Step - Order all ages:'''
 
Order all recurrence and censoring ages from smallest to largest. If a recurrence age for a unit is the same as its censoring (suspension) age, then the recurrence age goes first. If multiple units have a common recurrence or censoring age, then these units could be put in a certain order or be sorted randomly.
 
'''2nd Step - Calculate the number, <span class="texhtml">''r''<sub>''i''</sub></span> , of units that passed through age <span class="texhtml">''t''<sub>''i''</sub></span>&nbsp;:'''
 
::<math>\begin{align}
  & {{r}_{i}}= & {{r}_{i-1}}\quad \quad \text{if }{{t}_{i}}\text{ is a recurrence age} \\
& {{r}_{i}}= & {{r}_{i-1}}-1\text{  if }{{t}_{i}}\text{ is a censoring age} 
\end{align}</math>
 
<span class="texhtml">''N''</span> is the total number of units, and <span class="texhtml">''r''<sub>1</sub> = ''N''</span> at the first observed age, which could be a recurrence or suspension age.
 
'''3rd Step - Calculate the MCF estimate, M*(t):'''
 
For each sample recurrence age <span class="texhtml">''t''<sub>''i''</sub></span>, calculate the mean cumulative function estimate as follows
 
::<math>{{M}^{*}}({{t}_{i}})=\frac{1}{{{r}_{i}}}+{{M}^{*}}({{t}_{i-1}})</math>
 
where <math>{{M}^{*}}({{t}_{1}})=\tfrac{1}{{{r}_{1}}}</math> at the earliest observed recurrence age, <span class="texhtml">''t''<sub>1</sub></span>.
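The three steps above can be sketched in Python. This is a hypothetical illustration, not ReliaSoft code; the `mcf_estimate` name and the data layout (each unit as a list of recurrence ages plus a censoring age) are assumptions made for this example.

```python
def mcf_estimate(histories):
    """histories: list of (recurrence_ages, censoring_age) per unit.
    Returns a list of (age, MCF estimate) at each recurrence age."""
    events = []  # (age, is_recurrence)
    for recurrences, censor_age in histories:
        for t in recurrences:
            events.append((t, True))
        events.append((censor_age, False))
    # Step 1: order all ages; at ties, recurrence ages go before censoring ages
    events.sort(key=lambda e: (e[0], not e[1]))

    r = len(histories)  # Step 2: r_1 = N, the number of units at risk
    m = 0.0
    mcf = []
    for age, is_recurrence in events:
        if is_recurrence:
            m += 1.0 / r            # Step 3: M*(t_i) = 1/r_i + M*(t_{i-1})
            mcf.append((age, m))
        else:
            r -= 1                  # a censoring age reduces the risk set
    return mcf
```

For example, three units with a recurrence at 5 (censored at 10), a recurrence at 7 (censored at 12), and no recurrences (censored at 8) give MCF estimates of 1/3 at age 5 and 2/3 at age 7.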
 
<br>'''Example 1:''' {{Example: Recurrent Events Data Non-parameteric MCF Example}}
 
 
===Confidence Limits for the MCF===
Upper and lower confidence limits for  <math>M({{t}_{i}})</math>  are:
 
::<math>\begin{align}
  & {{M}_{U}}({{t}_{i}})= {{M}^{*}}({{t}_{i}}).{{e}^{\tfrac{{{K}_{\alpha }}.\sqrt{Var[{{M}^{*}}({{t}_{i}})]}}{{{M}^{*}}({{t}_{i}})}}} \\
& {{M}_{L}}({{t}_{i}})=  \frac{{{M}^{*}}({{t}_{i}})}{{{e}^{\tfrac{{{K}_{\alpha }}.\sqrt{Var[{{M}^{*}}({{t}_{i}})]}}{{{M}^{*}}({{t}_{i}})}}}} 
\end{align}</math>
 
where <math>\alpha </math> (<math>50\%<\alpha <100\%</math>) is the confidence level, <math>{{K}_{\alpha }}</math> is the <math>\alpha </math> standard normal percentile and <math>Var[{{M}^{*}}({{t}_{i}})]</math> is the variance of the MCF estimate at recurrence age <math>{{t}_{i}}</math>. The variance is calculated as follows:
 
::<math>Var[{{M}^{*}}({{t}_{i}})]=Var[{{M}^{*}}({{t}_{i-1}})]+\frac{1}{r_{i}^{2}}\left[ \underset{j\in {{R}_{i}}}{\overset{}{\mathop \sum }}\,{{\left( {{d}_{ji}}-\frac{1}{{{r}_{i}}} \right)}^{2}} \right]</math>
 
where <math>{r}_{i}</math> is the number of units that passed through age <math>{{t}_{i}}</math> (as defined in the second step above), <math>{{R}_{i}}</math> is the set of units that have not been suspended by age <math>{{t}_{i}}</math> and <math>{{d}_{ji}}</math> is defined as follows:
 
::<math>\begin{align}
  & {{d}_{ji}}= 1\text{  if the }{{j}^{\text{th }}}\text{unit had an event recurrence at age }{{t}_{i}} \\
& {{d}_{ji}}=  0\text{  if the }{{j}^{\text{th }}}\text{unit did not have an event reoccur at age }{{t}_{i}} 
\end{align}</math>
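As an illustration, the variance recursion and the confidence limits above can be written as follows. This is a sketch, not ReliaSoft code; the function names and the example value of <math>{{K}_{\alpha }}</math> are assumptions.

```python
import math

def mcf_variance_step(var_prev, r_i, d):
    """One step of the variance recursion; d[j] = 1 if unit j in the risk
    set R_i had a recurrence at age t_i, else 0."""
    return var_prev + (1.0 / r_i**2) * sum((dj - 1.0 / r_i)**2 for dj in d)

def mcf_confidence_limits(m_star, var_m, k_alpha=1.96):
    """Lognormal-based limits around the MCF estimate M*(t_i).
    Returns (lower, upper)."""
    w = math.exp(k_alpha * math.sqrt(var_m) / m_star)
    return m_star / w, m_star * w
```

Note that the limits are multiplicative, so the lower and upper limits are geometrically symmetric around <math>{{M}^{*}}({{t}_{i}})</math>.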
 
 
'''Example 2:'''
{{Example: Recurrent Events Data Non-parameteric MCF Bound Example}}
 
 
'''Example 3:'''
{{Example: Recurrent Events Data Non-parameteric Transmission Example}}
 
== Parametric Recurrent Event Data Analysis ==
 
Weibull++'s parametric RDA folio is a tool for modeling recurrent event data. It can capture the trend, estimate the rate and predict the total number of recurrences. The failure and repair data of a repairable system can be treated as one type of recurrence data. Past and current repairs may affect the future failure process. For most recurrent events, time (distance, cycles, etc.) is a key factor. With time, the recurrence rate may remain constant, increase or decrease. For other recurrent events, not only the time but also the number of events can affect the recurrence process (e.g., the debugging process in software development).

The parametric analysis approach utilizes the General Renewal Process (GRP) model [[Appendix: Weibull References|[28]]]. In this model, the repair time is assumed to be negligible, so that the processes can be viewed as point processes. The model provides a way to describe the rate of occurrence of events over time, such as in the case of data obtained from a repairable system, and is particularly useful in modeling the failure behavior of a specific system and understanding the effects of repairs on the age of that system. For example, consider a system that is repaired after a failure, where the repair does not bring the system to an as-good-as-new or an as-bad-as-old condition. In other words, the system is partially rejuvenated after the repair.

Traditionally, in as-bad-as-old repairs, also known as minimal repairs, the failure data from such a system would have been modeled using a homogeneous or non-homogeneous Poisson process (NHPP). On rare occasions, a Weibull distribution has been used in cases where the system is almost as-good-as-new after the repair, also known as a perfect renewal process (PRP). However, for the intermediate states after repair, there has not been a commercially available model, even though many models have been proposed in the literature.
In Weibull++, the GRP model provides the capability to&nbsp;model&nbsp;systems with partial renewal (general repair or imperfect repair/maintenance) and allows for a variety of predictions such as reliability, expected failures, etc.
 
=== The GRP Model  ===
 
In this model, the concept of virtual age is introduced. Let <math>{{t}_{1}},{{t}_{2}},\cdots ,{{t}_{n}}</math> represent the successive failure times and let <math>{{x}_{1}},{{x}_{2}},\cdots ,{{x}_{n}}</math> represent the times between failures (<math>{{t}_{i}}=\sum_{j=1}^{i}{{x}_{j}}</math>). Assume that after each event, actions are taken to improve the system performance. Let <span class="texhtml">''q''</span> be the action effectiveness factor. There are two GRP models:
 
Type I:
 
::<math>{{v}_{i}}={{v}_{i-1}}+q{{x}_{i}}=q{{t}_{i}}</math>
 
Type II:
 
::<math>{{v}_{i}}=q({{v}_{i-1}}+{{x}_{i}})={{q}^{i}}{{x}_{1}}+{{q}^{i-1}}{{x}_{2}}+\cdots +{{x}_{i}}</math>
 
where <span class="texhtml">''v''<sub>''i''</sub></span> is the virtual age of the system right after the <span class="texhtml">''i''</span>th repair. The Type I model assumes that the <span class="texhtml">''i''</span>th repair cannot remove the damage incurred before the <span class="texhtml">''i''</span>th failure. It can only reduce the additional age <span class="texhtml">''x''<sub>''i''</sub></span> to <span class="texhtml">''qx''<sub>''i''</sub></span>. The Type II model assumes that at the <span class="texhtml">''i''</span>th repair, the virtual age has accumulated to <span class="texhtml">''v''<sub>''i'' − 1</sub> + ''x''<sub>''i''</sub></span>. The <span class="texhtml">''i''</span>th repair then removes the cumulative damage from both current and previous failures by reducing the virtual age to <span class="texhtml">''q''(''v''<sub>''i'' − 1</sub> + ''x''<sub>''i''</sub>)</span>.
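The two recursions can be sketched directly in Python (a hypothetical helper, not Weibull++ code):

```python
def virtual_ages(x, q, model="I"):
    """Virtual age v_i right after the i-th repair, given interarrival
    times x and effectiveness factor q (sketch of the two GRP models)."""
    v = 0.0
    ages = []
    for xi in x:
        if model == "I":
            v = v + q * xi          # Type I: only the new damage is reduced
        else:
            v = q * (v + xi)        # Type II: cumulative damage is reduced
        ages.append(v)
    return ages
```

With <span class="texhtml">''q'' = 0</span> both models reduce to a perfect renewal process (virtual age resets), and with <span class="texhtml">''q'' = 1</span> both reduce to minimal repair (virtual age equals actual age).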
 
The power law function is used to model the rate of recurrence, which is:
 
::<span class="texhtml">λ(''t'') = λβ''t''<sup>β − 1</sup></span>
 
The conditional <span class="texhtml">''pdf''</span> is:
 
::<math>f({{t}_{i}}|{{t}_{i-1}})=\lambda \beta {{({{x}_{i}}+{{v}_{i-1}})}^{\beta -1}}{{e}^{-\lambda \left[ {{\left( {{x}_{i}}+{{v}_{i-1}} \right)}^{\beta }}-v_{i-1}^{\beta } \right]}}</math>
 
The MLE method is used to estimate the model parameters. The log-likelihood function is [[Appendix: Weibull References|[28]]]:
 
::<math>\begin{align}
  & \ln (L)= n(\ln \lambda +\ln \beta )-\lambda \left[ {{\left( T-{{t}_{n}}+{{v}_{n}} \right)}^{\beta }}-v_{n}^{\beta } \right] \\
  & -\lambda \underset{i=1}{\overset{n}{\mathop \sum }}\,\left[ {{\left( {{x}_{i}}+{{v}_{i-1}} \right)}^{\beta }}-v_{i-1}^{\beta } \right]+(\beta -1)\underset{i=1}{\overset{n}{\mathop \sum }}\,\ln ({{x}_{i}}+{{v}_{i-1}}) 
\end{align}</math>
 
where <span class="texhtml">''n''</span> is the total number of events during the entire observation period. <span class="texhtml">''T''</span> is the stop time of the observation. <span class="texhtml">''T'' = ''t''<sub>''n''</sub></span> if the observation stops right after the last event.
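A minimal sketch of this log-likelihood follows, using the Type I virtual age by default. The function name and signature are assumptions for this example; an optimizer would be run over this function to obtain the ML estimates.

```python
import math

def grp_log_likelihood(lam, beta, q, x, T, model="I"):
    """GRP log-likelihood for interarrival times x observed up to time T
    (sketch of the equation above)."""
    n = len(x)
    ll = n * (math.log(lam) + math.log(beta))
    v_prev = 0.0
    for xi in x:
        ll += (beta - 1) * math.log(xi + v_prev)
        ll -= lam * ((xi + v_prev) ** beta - v_prev ** beta)
        # update the virtual age after this repair
        v_prev = v_prev + q * xi if model == "I" else q * (v_prev + xi)
    t_n = sum(x)
    # survival from the last event to the end of observation T
    ll -= lam * ((T - t_n + v_prev) ** beta - v_prev ** beta)
    return ll
```

As a sanity check, with <span class="texhtml">β = 1</span> and <span class="texhtml">''q'' = 1</span> the process reduces to a homogeneous Poisson process and the log-likelihood collapses to <span class="texhtml">''n'' ln λ − λ''T''</span>.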
 
 
=== Confidence Bounds  ===
 
In general, in order to obtain the virtual age, the exact occurrence time of each event (failure) should be available (see equations for Type I and Type II models). However, the times are unknown until the corresponding events occur. For this reason, there are no closed-form expressions for total failure number and failure intensity, which are functions of failure times and virtual age. Therefore, in Weibull++, a Monte Carlo simulation is used to predict values of virtual time, failure number, MTBF and failure rate. The approximate confidence bounds obtained from simulation are provided. The uncertainty of model parameters is also considered in the bounds.
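One way such a simulation can be implemented (not necessarily how Weibull++ implements it) is to draw each interarrival time by inverse-transform sampling from the conditional reliability <math>R(x|v)={{e}^{-\lambda [{{(v+x)}^{\beta }}-{{v}^{\beta }}]}}</math>. The sketch below generates one realization of failure times up to a stop time; all names are assumptions.

```python
import math
import random

def simulate_grp(lam, beta, q, t_end, model="I", rng=random.random):
    """One Monte Carlo realization of a GRP up to time t_end: invert
    R(x|v) = exp(-lam*((v+x)**beta - v**beta)) to draw each gap."""
    t, v = 0.0, 0.0
    failure_times = []
    while True:
        u = 1.0 - rng()  # in (0, 1], so log(u) is finite
        x = (v ** beta - math.log(u) / lam) ** (1.0 / beta) - v
        t += x
        if t > t_end:
            break
        failure_times.append(t)
        v = v + q * x if model == "I" else q * (v + x)  # virtual age update
    return failure_times
```

Repeating this many times and averaging gives the simulated expected failure number, virtual age, MTBF and failure rate referred to above.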
 
==== Bounds on Cumulative Failure (Event) Numbers  ====
 
The variance of the cumulative failure number <span class="texhtml">''N''(''t'')</span> is:
 
::<math>Var[N(t)]=Var\left[ E(N(t)|\lambda ,\beta ,q) \right]+E\left[ Var(N(t)|\lambda ,\beta ,q) \right]</math>
 
The first term accounts for the uncertainty of the parameter estimation. The second term considers the uncertainty caused by the renewal process even when model parameters are fixed. However, unless <span class="texhtml">''q'' = 1</span> , <math>Var\left[ E(N(t)|\lambda ,\beta ,q) \right]</math> cannot be calculated because <span class="texhtml">''E''(''N''(''t''))</span> cannot be expressed as a closed-form function of <span class="texhtml">λ,β,</span> and <span class="texhtml">''q''</span> . In order to consider the uncertainty of the parameter estimation, <math>Var\left[ E(N(t)|\lambda ,\beta ,q) \right]</math> is approximated by:
 
::<math>Var\left[ E(N(t)|\lambda ,\beta ,q) \right]=Var[E(N({{v}_{t}})|\lambda ,\beta )]=Var[\lambda v_{t}^{\beta }]</math>
 
where <span class="texhtml">''v''<sub>''t''</sub></span> is the expected virtual age at time <span class="texhtml">''t''</span> and <math>Var[\lambda v_{t}^{\beta }]</math> is:
 
::<math>\begin{align}
  & Var[\lambda v_{t}^{\beta }]= & {{\left( \frac{\partial (\lambda v_{t}^{\beta })}{\partial \beta } \right)}^{2}}Var(\hat{\beta })+{{\left( \frac{\partial (\lambda v_{t}^{\beta })}{\partial \lambda } \right)}^{2}}Var(\hat{\lambda }) \\
&  +2\frac{\partial (\lambda v_{t}^{\beta })}{\partial \beta }\frac{\partial (\lambda v_{t}^{\beta })}{\partial \lambda }Cov(\hat{\beta },\hat{\lambda }) 
\end{align}</math>
 
By conducting this approximation, the uncertainties of <span class="texhtml">λ</span> and <span class="texhtml">β</span> are considered. The value of <span class="texhtml">''v''<sub>''t''</sub></span> and the value of the second term in the equation for the variance of the number of failures are obtained through the Monte Carlo simulation using parameters <math>\hat{\lambda },\hat{\beta },\hat{q},</math> which are the ML estimators. The same simulation is used to estimate the cumulative number of failures <math>\hat{N}(t)=E(N(t)|\hat{\lambda },\hat{\beta },\hat{q})</math>.
 
Once the variance and the expected value of <span class="texhtml">''N''(''t'')</span> have been obtained, the bounds can be calculated by assuming that&nbsp;<span class="texhtml">''N''(''t'')</span> is lognormally distributed as:
 
::<math>\frac{\ln N(t)-\ln \hat{N}(t)}{\sqrt{Var(\ln N(t))}}\sim N(0,1)</math>
 
The upper and lower bounds for a given confidence level <span class="texhtml">α</span> can be calculated by:
 
::<math>N{{(t)}_{U,L}}=\hat{N}(t){{e}^{\pm {{z}_{a}}\sqrt{Var(N(t))}/\hat{N}(t)}}</math>
 
where <span class="texhtml">''z''<sub>''a''</sub></span> is the standard normal percentile for confidence level <span class="texhtml">α</span>.
 
If <span class="texhtml">''N''(''t'')</span> is assumed to be normally distributed, the bounds can be calculated by:
 
::<math>N{{(t)}_{U}}=\hat{N}(t)+{{z}_{a}}\sqrt{Var(N(t))}</math>
 
::<math>N{{(t)}_{L}}=\hat{N}(t)-{{z}_{a}}\sqrt{Var(N(t))}</math>
 
In Weibull++, <span class="texhtml">''N''(''t'')<sub>''U''</sub></span> is set to the smaller of the two upper bounds obtained from the lognormal and normal distribution approximations, and <span class="texhtml">''N''(''t'')<sub>''L''</sub></span> is set to the larger of the two lower bounds. This combined method prevents out-of-range bound values for some small <span class="texhtml">''t''</span> values.
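The combination rule can be sketched as follows (a hypothetical helper illustrating the two approximations and their combination, not Weibull++ code):

```python
import math

def n_bounds(n_hat, var_n, z_a):
    """Combined two-sided bounds on N(t): smaller upper and larger lower
    of the lognormal- and normal-approximation bounds (sketch)."""
    s = math.sqrt(var_n)
    w = math.exp(z_a * s / n_hat)
    ln_lo, ln_up = n_hat / w, n_hat * w              # lognormal approximation
    nm_lo, nm_up = n_hat - z_a * s, n_hat + z_a * s  # normal approximation
    return max(ln_lo, nm_lo), min(ln_up, nm_up)
```

The lognormal lower bound stays positive even when the normal lower bound goes negative, which is the out-of-range case the combination guards against.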
 
==== Bounds of Cumulative Failure Intensity and MTBF  ====
 
For a given time <span class="texhtml">''t''</span> , the expected value of cumulative MTBF <span class="texhtml">''m''<sub>''c''</sub>(''t'')</span> and cumulative failure intensity <span class="texhtml">λ<sub>''c''</sub>(''t'')</span> can be calculated using the following equations:
 
::<math>{{\hat{\lambda }}_{c}}(t)=\frac{\hat{N}(t)}{t};{{\hat{m}}_{c}}(t)=\frac{t}{\hat{N}(t)}</math>
 
The bounds can be easily obtained from the corresponding bounds of <span class="texhtml">''N''(''t'').</span>
 
::<math>\begin{align}
  & {{{\hat{\lambda }}}_{c}}{{(t)}_{L}}=\frac{\hat{N}{{(t)}_{L}}}{t};\text{  }{{{\hat{\lambda }}}_{c}}{{(t)}_{U}}=\frac{\hat{N}{{(t)}_{U}}}{t} \\
  & {{{\hat{m}}}_{c}}{{(t)}_{L}}=\frac{t}{\hat{N}{{(t)}_{U}}};\text{  }{{{\hat{m}}}_{c}}{{(t)}_{U}}=\frac{t}{\hat{N}{{(t)}_{L}}} 
\end{align}</math>
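These transformations are direct; a small sketch (hypothetical helper names) makes the inversion of the MTBF bounds explicit:

```python
def cumulative_bounds(t, n_lo, n_up):
    """Bounds on cumulative failure intensity and cumulative MTBF derived
    from the bounds on N(t), per the equations above (sketch)."""
    lam_bounds = (n_lo / t, n_up / t)    # lambda_c(t): (lower, upper)
    mtbf_bounds = (t / n_up, t / n_lo)   # m_c(t): note the bounds swap
    return lam_bounds, mtbf_bounds
```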
 
==== Bounds on Instantaneous Failure Intensity and MTBF  ====
 
The instantaneous failure intensity is given by:
 
::<math>{{\lambda }_{i}}(t)=\lambda \beta v_{t}^{\beta -1}</math>
 
where <span class="texhtml">''v''<sub>''t''</sub></span> is the virtual age at time <span class="texhtml">''t''.</span> When <math>q\ne 1,</math> it is obtained from simulation. When <span class="texhtml">''q'' = 1</span>, <span class="texhtml">''v''<sub>''t''</sub> = ''t''</span> for both the Type I and Type II models.
 
The variance of instantaneous failure intensity can be calculated by:
 
::<math>\begin{align}
  & Var({{\lambda }_{i}}(t))= {{\left( \frac{\partial {{\lambda }_{i}}(t)}{\partial \beta } \right)}^{2}}Var(\hat{\beta })+{{\left( \frac{\partial {{\lambda }_{i}}(t)}{\partial \lambda } \right)}^{2}}Var(\hat{\lambda }) \\
&  +2\frac{\partial {{\lambda }_{i}}(t)}{\partial \beta }\frac{\partial {{\lambda }_{i}}(t)}{\partial \lambda }Cov(\hat{\beta },\hat{\lambda })+{{\left( \frac{\partial {{\lambda }_{i}}(t)}{\partial v(t)} \right)}^{2}}Var({{{\hat{v}}}_{t}}) 
\end{align}</math>
 
The expected value and variance of <span class="texhtml">''v''<sub>''t''</sub></span> are obtained from the Monte Carlo simulation with parameters <math>\hat{\lambda },\hat{\beta },\hat{q}.</math> Because of simulation accuracy and convergence problems in the calculation of <math>Var(\hat{\beta }),Var(\hat{\lambda })</math> and <math>Cov(\hat{\beta },\hat{\lambda }),</math> <span class="texhtml">''Var''(λ<sub>''i''</sub>(''t''))</span> can be negative at some time points. When this happens, the bounds of the instantaneous failure intensity are not provided.
 
Once the variance and the expected value of <span class="texhtml">λ<sub>''i''</sub>(''t'')</span> are obtained, the bounds can be calculated by assuming that &nbsp;<span class="texhtml">λ<sub>''i''</sub>(''t'')</span> is lognormally distributed as:
 
::<math>\frac{\ln {{\lambda }_{i}}(t)-\ln {{{\hat{\lambda }}}_{i}}(t)}{\sqrt{Var(\ln {{\lambda }_{i}}(t))}}\sim N(0,1)</math>
 
The upper and lower bounds for a given confidence level <span class="texhtml">α</span> can be calculated by:
 
::<math>{{\lambda }_{i}}{{(t)}_{U,L}}={{\hat{\lambda }}_{i}}(t){{e}^{\pm {{z}_{a}}\sqrt{Var({{\lambda }_{i}}(t))}/{{{\hat{\lambda }}}_{i}}(t)}}</math>
 
where <span class="texhtml">''z''<sub>''a''</sub></span> is the standard normal percentile for confidence level <span class="texhtml">α</span>.
 
If <span class="texhtml">λ<sub>''i''</sub>(''t'')</span> is assumed to be normally distributed, the bounds can be calculated by:
 
::<math>{{\lambda }_{i}}{{(t)}_{U}}={{\hat{\lambda }}_{i}}(t)+{{z}_{a}}\sqrt{Var({{\lambda }_{i}}(t))}</math>
 
::<math>{{\lambda }_{i}}{{(t)}_{L}}={{\hat{\lambda }}_{i}}(t)-{{z}_{a}}\sqrt{Var({{\lambda }_{i}}(t))}</math>
 
In Weibull++, <span class="texhtml">λ<sub>''i''</sub>(''t'')<sub>''U''</sub></span> is set to the smaller of the two upper bounds obtained from the above lognormal and normal distribution approximations, and <span class="texhtml">λ<sub>''i''</sub>(''t'')<sub>''L''</sub></span> is set to the larger of the two lower bounds. This combined method prevents out-of-range bound values when <span class="texhtml">''t''</span> values are small.
 
For a given time <span class="texhtml">''t''</span>, the expected value of the instantaneous MTBF <span class="texhtml">''m''<sub>''i''</sub>(''t'')</span> is:
 
::<math>{{\hat{m}}_{i}}(t)=\frac{1}{{{{\hat{\lambda }}}_{i}}(t)}\text{  }</math>
 
The upper and lower bounds can be easily obtained from the corresponding bounds of <span class="texhtml">λ<sub>''i''</sub>(''t''):</span>
 
::<math>{{\hat{m}}_{i}}{{(t)}_{U}}=\frac{1}{{{{\hat{\lambda }}}_{i}}{{(t)}_{L}}}</math>
 
<br>
 
::<math>{{\hat{m}}_{i}}{{(t)}_{L}}=\frac{1}{{{{\hat{\lambda }}}_{i}}{{(t)}_{U}}}</math>
 
==== Bounds on Conditional Reliability  ====
 
Given mission start time <span class="texhtml">''t''<sub>0</sub></span> and mission time <span class="texhtml">''T''</span> , the conditional reliability can be calculated by:
 
::<math>R(T|{{t}_{0}})=\frac{R(T+{{v}_{0}})}{R({{v}_{0}})}={{e}^{-\lambda \left[ {{({{v}_{0}}+T)}^{\beta }}-v_{0}^{\beta } \right]}}</math>
 
where <span class="texhtml">''v''<sub>0</sub></span> is the virtual age corresponding to mission start time <span class="texhtml">''t''<sub>0</sub></span>. The expected value and the variance of <span class="texhtml">''v''<sub>0</sub></span> are obtained from Monte Carlo simulation. The variance of the conditional reliability <span class="texhtml">''R''(''T'' | ''t''<sub>0</sub>)</span> is:
 
::<math>\begin{align}
  & Var(R)=  {{\left( \frac{\partial R}{\partial \beta } \right)}^{2}}Var(\hat{\beta })+{{\left( \frac{\partial R}{\partial \lambda } \right)}^{2}}Var(\hat{\lambda }) \\
&  +2\frac{\partial R}{\partial \beta }\frac{\partial R}{\partial \lambda }Cov(\hat{\beta },\hat{\lambda })+{{\left( \frac{\partial R}{\partial {{v}_{0}}} \right)}^{2}}Var({{{\hat{v}}}_{0}}) 
\end{align}</math>
 
Because of simulation accuracy and convergence problems in the calculation of <math>Var(\hat{\beta }),Var(\hat{\lambda })</math> and <math>Cov(\hat{\beta },\hat{\lambda }),</math> <span class="texhtml">''Var''(''R'')</span> can be negative at some time points. When this happens, the bounds are not provided.
 
The bounds are based on:
 
::<math>\text{logit}(\hat{R}(T))\sim N(0,1)</math>
 
::<math>\text{logit}(\hat{R}(T))=\ln \left\{ \frac{\hat{R}(T)}{1-\hat{R}(T)} \right\}</math>
 
The confidence bounds on reliability are given by:
 
::<math>R=\frac{{\hat{R}}}{\hat{R}+(1-\hat{R}){{e}^{\pm {{z}_{a}}\sqrt{Var(R)}/[\hat{R}(1-\hat{R})]}}}</math>
 
These bounds are compared with the bounds obtained from:
 
::<math>R=\hat{R}{{e}^{\pm {{z}_{a}}\sqrt{Var(R)}/\hat{R}}}</math>
 
The smaller of the two upper bounds will be the final upper bound and the larger of the two lower bounds will be the final lower bound.
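The conditional reliability and the combined bounds can be sketched as follows. This is an illustrative implementation under the stated assumptions (the function names are hypothetical, and the logit-based exponent includes the normal percentile <span class="texhtml">''z''<sub>''a''</sub></span>); it is not Weibull++ code.

```python
import math

def conditional_reliability(lam, beta, v0, T):
    """R(T | t0) given the virtual age v0 at mission start (sketch)."""
    return math.exp(-lam * ((v0 + T) ** beta - v0 ** beta))

def reliability_bounds(r_hat, var_r, z_a):
    """Combined bounds on R: larger lower and smaller upper of the
    logit-based and lognormal-based bounds (sketch)."""
    s = math.sqrt(var_r)
    e1 = math.exp(z_a * s / (r_hat * (1.0 - r_hat)))
    logit_lo = r_hat / (r_hat + (1.0 - r_hat) * e1)   # logit-based bounds
    logit_up = r_hat / (r_hat + (1.0 - r_hat) / e1)
    e2 = math.exp(z_a * s / r_hat)
    log_lo, log_up = r_hat / e2, r_hat * e2           # lognormal-based bounds
    return max(logit_lo, log_lo), min(logit_up, log_up)
```

The logit-based bounds always stay inside (0, 1), which is why combining them with the lognormal bounds keeps the final upper bound from exceeding 1.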
 
<br>'''Example 4:''' {{Example: Recurrent Events Data Parameteric Air-Condition Example}}
