Crow-AMSAA (NHPP)

Dr. Larry H. Crow [17] noted that the Duane Model could be stochastically represented as a Weibull process, allowing for statistical procedures to be used in the application of this model in reliability growth. This statistical extension became what is known as the Crow-AMSAA (NHPP) model. This method was first developed at the U.S. Army Materiel Systems Analysis Activity (AMSAA). It is frequently used on systems when usage is measured on a continuous scale. It can also be applied for the analysis of one shot items when there is high reliability and a large number of trials.

Test programs are generally conducted on a phase by phase basis. The Crow-AMSAA model is designed for tracking the reliability within a test phase and not across test phases. A development testing program may consist of several separate test phases. If corrective actions are introduced during a particular test phase, then this type of testing and the associated data are appropriate for analysis by the Crow-AMSAA model. The model analyzes the reliability growth progress within each test phase and can aid in determining the following:


 * Reliability of the configuration currently on test
 * Reliability of the configuration on test at the end of the test phase
 * Expected reliability if the test time for the phase is extended
 * Growth rate
 * Confidence intervals
 * Applicable goodness-of-fit tests

Background
The reliability growth pattern for the Crow-AMSAA model is exactly the same pattern as for the Duane postulate, that is, the cumulative number of failures is linear when plotted on ln-ln scale. Unlike the Duane postulate, the Crow-AMSAA model is statistically based. Under the Duane postulate, the failure rate is linear on ln-ln scale. However, for the Crow-AMSAA model statistical structure, the failure intensity of the underlying non-homogeneous Poisson process (NHPP) is linear when plotted on ln-ln scale.

Let $$N(t)\,\!$$ be the cumulative number of failures observed in cumulative test time $$t\,\!$$, and let $$\rho (t)\,\!$$ be the failure intensity for the Crow-AMSAA model. Under the NHPP model, $$\rho (t)\Delta t\,\!$$ is approximately the probability of a failure occurring over the interval $$[t,t+\Delta t]\,\!$$ for small $$\Delta t\,\!$$. In addition, the expected number of failures experienced over the test interval $$[0,T]\,\!$$ under the Crow-AMSAA model is given by:


 * $$E[N(T)]=\int_{0}^{T}\rho (t)dt\,\!$$

The Crow-AMSAA model assumes that $$\rho (T)\,\!$$ may be approximated by the Weibull failure rate function:


 * $$\rho (T)=\frac{\beta }{{{\eta }^{\beta }}}{{T}^{\beta -1}}\,\!$$

Therefore, if $$\lambda =\tfrac{1}{{{\eta }^{\beta }}},\,\!$$ the intensity function, $$\rho (T),\,\!$$ or the instantaneous failure intensity, $${{\lambda }_{i}}(T)\,\!$$, is defined as:


 * $${{\lambda }_{i}}(T)=\lambda \beta {{T}^{\beta -1}},\text{with }T>0,\text{ }\lambda >0\text{ and }\beta >0\,\!$$

In the special case of exponential failure times, there is no growth and the failure intensity, $$\rho (t)\,\!$$, is equal to $$\lambda \,\!$$. In this case, the expected number of failures is given by:


 * $$\begin{align}

E[N(T)]= & \int_{0}^{T}\rho (t)dt \\ = & \lambda T  \end{align}\,\!$$

In the general reliability growth case, for the cumulative number of failures to be linear when plotted on ln-ln scale, the expected number of failures must be equal to:


 * $$\begin{align}

E[N(T)]= & \int_{0}^{T}\rho (t)dt \\ = & \lambda {{T}^{\beta }} \end{align}\,\!$$

To put a statistical structure on the reliability growth process, consider again the special case of no growth. In this case the number of failures, $$N(T),\,\!$$ experienced during the testing over $$[0,T]\,\!$$ is random. The number of failures, $$N(T),\,\!$$ is said to follow a homogeneous (constant) Poisson process with mean $$\lambda T\,\!$$, and its probability mass function is given by:


 * $$\Pr [N(T)=n]=\frac{{{(\lambda T)}^{n}}{{e}^{-\lambda T}}}{n!};\text{ }n=0,1,2,\ldots \,\!$$

The Crow-AMSAA model generalizes this no growth case to allow for reliability growth due to corrective actions. This generalization keeps the Poisson distribution for the number of failures but allows for the expected number of failures, $$E[N(T)],\,\!$$ to be linear when plotted on ln-ln scale. The Crow-AMSAA model lets $$E[N(T)]=\lambda {{T}^{\beta }}\,\!$$. The probability that the number of failures, $$N(T),\,\!$$ will be equal to $$n\,\!$$ under growth is then given by the Poisson distribution:


 * $$\Pr [N(T)=n]=\frac{{{(\lambda {{T}^{\beta }})}^{n}}{{e}^{-\lambda {{T}^{\beta }}}}}{n!};\text{ }n=0,1,2,\ldots \,\!$$

This is the general growth situation, and the number of failures, $$N(T)\,\!$$, follows a non-homogeneous Poisson process. The exponential, "no growth" homogeneous Poisson process is a special case of the non-homogeneous Crow-AMSAA model. This is reflected in the Crow-AMSAA model parameter where $$\beta =1\,\!$$. The cumulative failure rate, $${{\lambda }_{c}}\,\!$$, is:


 * $$\begin{align}

{{\lambda }_{c}}=\lambda {{T}^{\beta -1}} \end{align}\,\!$$

The cumulative $$MTB{{F}_{c}}\,\!$$ is:


 * $$MTB{{F}_{c}}=\frac{1}{\lambda }{{T}^{1-\beta }}\,\!$$

As mentioned above, the local pattern for reliability growth within a test phase is the same as the growth pattern observed by Duane. The Duane $$MTB{{F}_{c}}\,\!$$ is equal to:


 * $$MTB{{F}_{c}}=b{{T}^{\alpha }}\,\!$$

And the Duane cumulative failure rate, $${{\lambda }_{c}}\,\!$$, is:


 * $${{\lambda }_{c}}=\frac{1}{b}{{T}^{-\alpha }}\,\!$$

Thus a relationship between Crow-AMSAA parameters and Duane parameters can be developed, such that:


 * $$\begin{align}

{{b}_{DUANE}}= & \frac{1}{{{\lambda }_{AMSAA}}} \\ {{\alpha }_{DUANE}}= & 1-{{\beta }_{AMSAA}} \end{align}\,\!$$

Note that these relationships are not absolute. They change according to how the parameters (slopes, intercepts, etc.) are defined when the analysis of the data is performed. For the exponential case, $$\beta =1\,\!$$, then $${{\lambda }_{i}}(T)=\lambda \,\!$$, a constant. For $$\beta >1\,\!$$, $${{\lambda }_{i}}(T)\,\!$$ is increasing. This indicates a deterioration in system reliability. For $$\beta <1\,\!$$, $${{\lambda }_{i}}(T)\,\!$$ is decreasing. This is indicative of reliability growth. Note that the model assumes a Poisson process with the Weibull intensity function, not the Weibull distribution. Therefore, statistical procedures for the Weibull distribution do not apply for this model. The parameter $$\lambda \,\!$$ is called a scale parameter because it depends upon the unit of measurement chosen for $$T\,\!$$, while $$\beta \,\!$$ is the shape parameter that characterizes the shape of the graph of the intensity function.

The total number of failures, $$N(T)\,\!$$, is a random variable with Poisson distribution. Therefore, the probability that exactly $$n\,\!$$ failures occur by time $$T\,\!$$ is:


 * $$P[N(T)=n]=\frac{{{(\lambda {{T}^{\beta }})}^{n}}{{e}^{-\lambda {{T}^{\beta }}}}}{n!}\,\!$$

The number of failures occurring in the interval from $${{T}_{1}}\,\!$$ to $${{T}_{2}}\,\!$$ is a random variable having a Poisson distribution with mean:


 * $$\theta ({{T}_{2}})-\theta ({{T}_{1}})=\lambda (T_{2}^{\beta }-T_{1}^{\beta })\,\!$$

The number of failures in any interval is statistically independent of the number of failures in any interval that does not overlap the first interval. At time $${{T}_{0}}\,\!$$, the failure intensity is $${{\lambda }_{i}}({{T}_{0}})=\lambda \beta T_{0}^{\beta -1}\,\!$$. If improvements are not made to the system after time $${{T}_{0}}\,\!$$, it is assumed that failures would continue to occur at the constant rate $${{\lambda }_{i}}({{T}_{0}})=\lambda \beta T_{0}^{\beta -1}\,\!$$. Future failures would then follow an exponential distribution with mean $$m({{T}_{0}})=\tfrac{1}{\lambda \beta T_{0}^{\beta -1}}\,\!$$. The instantaneous MTBF of the system at time $$T\,\!$$ is:


 * $$m(T)=\frac{1}{\lambda \beta {{T}^{\beta -1}}}\,\!$$

$$m(T)\,\!$$ is also called the demonstrated (or achieved) MTBF.
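
The relationships above can be illustrated numerically. The following Python sketch evaluates the failure intensity, the cumulative and instantaneous (demonstrated) MTBF, and the expected number of failures for an assumed set of parameter values; the values of $$\lambda \,\!$$, $$\beta \,\!$$ and $$T\,\!$$ are arbitrary placeholders, not estimates from any data set.

```python
# Minimal sketch of the Crow-AMSAA intensity and MTBF relationships.
# The parameter values below are arbitrary illustrations, not estimates.

lam = 0.4     # scale parameter (lambda)
beta = 0.6    # shape parameter; beta < 1 indicates reliability growth
T = 500.0     # cumulative test time

failure_intensity = lam * beta * T ** (beta - 1)    # lambda_i(T) = lambda*beta*T^(beta-1)
cumulative_failure_rate = lam * T ** (beta - 1)      # lambda_c(T) = lambda*T^(beta-1)
cumulative_mtbf = 1.0 / cumulative_failure_rate      # MTBF_c = (1/lambda)*T^(1-beta)
instantaneous_mtbf = 1.0 / failure_intensity         # m(T), the demonstrated MTBF
expected_failures = lam * T ** beta                  # E[N(T)] = lambda*T^beta

print(failure_intensity, cumulative_mtbf, instantaneous_mtbf, expected_failures)
```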

Note About Applicability
The Duane and Crow-AMSAA models are the most frequently used reliability growth models. Their relationship comes from the fact that both make use of the underlying observed linear relationship between the logarithm of cumulative MTBF and cumulative test time. However, the Duane model does not provide a capability to test whether the change in MTBF observed over time is significantly different from what might be seen due to random error between phases. The Crow-AMSAA model allows for such assessments. Also, the Crow-AMSAA allows for development of hypothesis testing procedures to determine growth presence in the data (where $$\beta <1\,\!$$ indicates that there is growth in MTBF, $$\beta =1\,\!$$ indicates a constant MTBF and $$\beta >1\,\!$$ indicates a decreasing MTBF). Additionally, the Crow-AMSAA model views the process of reliability growth as probabilistic, while the Duane model views the process as deterministic.

Failure Times Data
A description of Failure Times Data is presented on the RGA Data Types page.

Parameter Estimation for Failure Times Data
The parameters for the Crow-AMSAA (NHPP) model are estimated using maximum likelihood estimation (MLE). The probability density function (pdf) of the $${{i}^{th}}\,\!$$ event given that the $${{(i-1)}^{th}}\,\!$$ event occurred at $${{T}_{i-1}}\,\!$$ is:


 * $$f({{T}_{i}}|{{T}_{i-1}})=\frac{\beta }{\eta }{{\left( \frac{{{T}_{i}}}{\eta } \right)}^{\beta -1}}\cdot {{e}^{-\tfrac{1}{{{\eta }^{\beta }}}\left( T_{i}^{\beta }-T_{i-1}^{\beta } \right)}}\,\!$$

Letting $$\lambda =\tfrac{1}{{{\eta }^{\beta }}}\,\!$$, the likelihood function is:


 * $$L={{\lambda }^{n}}{{\beta }^{n}}{{e}^{-\lambda {{T}^{*\beta }}}}\underset{i=1}{\overset{n}{\mathop \prod }}\,T_{i}^{\beta -1}\,\!$$

where $${{T}^{*}}\,\!$$ is the termination time and is given by:


 * $${{T}^{*}}=\left\{ \begin{matrix}

{{T}_{n}}\text{ if the test is failure terminated} \\ T>{{T}_{n}}\text{ if the test is time terminated} \\ \end{matrix} \right\}\,\!$$

Taking the natural log on both sides:


 * $$\Lambda =n\ln \lambda +n\ln \beta -\lambda {{T}^{*\beta }}+(\beta -1)\underset{i=1}{\overset{n}{\mathop \sum }}\,\ln {{T}_{i}}\,\!$$

And differentiating with respect to $$\lambda \,\!$$ yields:


 * $$\frac{\partial \Lambda }{\partial \lambda }=\frac{n}{\lambda }-{{T}^{*\beta }}\,\!$$

Set equal to zero and solve for $$\lambda \,\!$$ :


 * $$\hat{\lambda }=\frac{n}{{{T}^{*\beta }}}\,\!$$

Now differentiate with respect to $$\beta \,\!$$ :


 * $$\frac{\partial \Lambda }{\partial \beta }=\frac{n}{\beta }-\lambda {{T}^{*\beta }}\ln {{T}^{*}}+\underset{i=1}{\overset{n}{\mathop \sum }}\,\ln {{T}_{i}}\,\!$$

Set equal to zero and solve for $$\beta \,\!$$ :


 * $$\hat{\beta }=\frac{n}{n\ln {{T}^{*}}-\underset{i=1}{\overset{n}{\mathop{\sum }}}\,\ln {{T}_{i}}}\,\!$$

This equation is used for both failure terminated and time terminated test data.
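
A minimal Python sketch of these point estimates is given below. It assumes a time terminated test with the failure times supplied as a plain list; the numbers shown are placeholders used only to make the sketch runnable.

```python
import math

# Hypothetical failure times (cumulative test hours) and termination time.
failure_times = [2.7, 10.3, 12.5, 30.6, 57.0, 61.9, 78.1, 86.0]
T_star = 100.0   # termination time; for a failure terminated test use failure_times[-1]

n = len(failure_times)

# beta_hat = n / (n*ln(T*) - sum(ln(T_i)))
beta_hat = n / (n * math.log(T_star) - sum(math.log(t) for t in failure_times))

# lambda_hat = n / T*^beta_hat
lambda_hat = n / T_star ** beta_hat

print(beta_hat, lambda_hat)
```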

Biasing and Unbiasing of Beta
The equation above returns the biased estimate, $$\hat{\beta }\,\!$$. The unbiased estimate, $$\bar{\beta }\,\!$$, can be calculated by using the following relationships. For time terminated data (the test ends after a specified test time):


 * $$\bar{\beta }=\frac{N-1}{N}\hat{\beta }\,\!$$

For failure terminated data (the test ends after a specified number of failures):


 * $$\bar{\beta }=\frac{N-2}{N-1}\hat{\beta }\,\!$$

By default $$\hat{\beta }\,\!$$ is returned. $$\bar{\beta }\,\!$$ can be returned by selecting the Calculate unbiased beta option on the Calculations tab of the Application Setup.
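
Continuing the sketch above, the unbiasing relationships can be expressed as a small helper function, where N is the observed number of failures:

```python
def unbiased_beta(beta_hat, N, time_terminated=True):
    """Apply the unbiasing factor to the MLE of beta."""
    if time_terminated:
        return (N - 1) / N * beta_hat      # test ended at a specified time
    return (N - 2) / (N - 1) * beta_hat    # test ended at a specified number of failures
```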

Cramér-von Mises Test
The Cramér-von Mises (CVM) goodness-of-fit test validates the hypothesis that the data follows a non-homogeneous Poisson process with a failure intensity equal to $$u(t)=\lambda \beta {{t}^{\beta -1}}\,\!$$. This test can be applied when the failure data is complete over the continuous interval $$[0,{{T}_{q}}]\,\!$$ with no gaps in the data. The CVM test applies to all data types when the failure times are known, except for Fleet data.

If the individual failure times are known, a Cramér-von Mises statistic is used to test the null hypothesis that a non-homogeneous Poisson process with the failure intensity function $$\rho \left( t \right)=\lambda \,\beta \,{{t}^{\beta -1}}\left( \lambda >0,\beta >0,t>0 \right)\,\!$$ properly describes the reliability growth of a system. The Cramér-von Mises goodness-of-fit statistic is then given by the following expression:


 * $$C_{M}^{2}=\frac{1}{12M}+\underset{i=1}{\overset{M}{\mathop \sum }}\,{{\left[ {{\left( \frac{{{T}_{i}}}{T} \right)}^{{\bar{\beta }}}}-\frac{2i-1}{2M} \right]}^{2}}\,\!$$

where:


 * $$M=\left\{ \begin{matrix}

N\text{ if the test is time terminated} \\ N-1\text{ if the test is failure terminated} \\ \end{matrix} \right\}\,\!$$
 * $${\bar{\beta }}\,\!$$ is the unbiased value of Beta.

The failure times, $${{T}_{i}}\,\!$$, must be ordered so that $${{T}_{1}}<{{T}_{2}}<\ldots <{{T}_{M}}\,\!$$. If the statistic $$C_{M}^{2}\,\!$$ is less than the critical value corresponding to $$M\,\!$$ for a chosen significance level, then the null hypothesis that the Crow-AMSAA model adequately fits the data cannot be rejected.
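
A sketch of the statistic in Python, assuming the ordered failure times, the termination time and the unbiased beta obtained above are available, is shown below. The computed value would then be compared against the tabulated critical value for the given $$M\,\!$$ and significance level.

```python
def cramer_von_mises(failure_times, T, beta_unbiased, time_terminated=True):
    """Cramer-von Mises goodness-of-fit statistic for the Crow-AMSAA (NHPP) model."""
    N = len(failure_times)
    M = N if time_terminated else N - 1
    times = sorted(failure_times)[:M]   # use the first M ordered failure times
    return 1.0 / (12 * M) + sum(
        ((t / T) ** beta_unbiased - (2 * i - 1) / (2.0 * M)) ** 2
        for i, t in enumerate(times, start=1)
    )
```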

Critical Values
The following table displays the critical values for the Cramér-von Mises goodness-of-fit test given the sample size, $$M\,\!$$, and the significance level, $$\alpha \,\!$$.

The significance level represents the probability of rejecting the hypothesis even if it's true. So, there is a risk associated with applying the goodness-of-fit test (i.e., there is a chance that the CVM will indicate that the model does not fit, when in fact it does). As the significance level is increased, the CVM test becomes more stringent. Keep in mind that the CVM test passes when the test statistic is less than the critical value. Therefore, the larger the critical value, the more room there is to work with (e.g., a CVM test with a significance level equal to 0.1 is more strict than a test with 0.01).

Confidence Bounds
The RGA software provides two methods to estimate the confidence bounds for the Crow-AMSAA (NHPP) model when applied to developmental testing data. The Fisher Matrix approach is based on the Fisher Information Matrix and is commonly employed in the reliability field. The Crow bounds were developed by Dr. Larry Crow. See the Crow-AMSAA Confidence Bounds chapter for details on how the confidence bounds are calculated.

Multiple Systems
When more than one system is placed on test during developmental testing, there are multiple data types which are available depending on the testing strategy and the format of the data. The data types that allow for the analysis of multiple systems using the Crow-AMSAA (NHPP) model are given below:


 * Multiple Systems (Known Operating Times)
 * Multiple Systems (Concurrent Operating Times)
 * Multiple Systems with Dates

Goodness-of-fit Tests
For all multiple systems data types, the Cramér-von Mises (CVM) Test is available. For Multiple Systems (Concurrent Operating Times) and Multiple Systems with Dates, two additional tests are also available: Laplace Trend Test and Common Beta Hypothesis.

Multiple Systems (Known Operating Times)
A description of Multiple Systems (Known Operating Times) is presented on the RGA Data Types page.

Consider the data in the table below for two prototypes that were placed in a reliability growth test.

Developmental Test Data for Two Identical Systems

The Failed Unit column indicates the system that failed and is meant to be informative, but it does not affect the calculations. To combine the data from both systems, the system ages are added together at the times when a failure occurred. This is seen in the Total Test Time column above. Once the single timeline is generated, then the calculations for the parameters Beta and Lambda are the same as the process presented for Failure Times Data. The results of this analysis would match the results of Failure Times - Example 1.

Multiple Systems (Concurrent Operating Times)
A description of Multiple Systems (Concurrent Operating Times) is presented on the RGA Data Types page.

Parameter Estimation for Multiple Systems (Concurrent Operating Times)
To estimate the parameters, the equivalent system must first be determined. The equivalent single system (ESS) is calculated by summing the usage across all systems when a failure occurs. Keep in mind that Multiple Systems (Concurrent Operating Times) assumes that the systems are running simultaneously and accumulate the same usage. If the systems have different end times, then the equivalent system must only account for the systems that are operating when a failure occurred. Systems with a start time greater than zero are shifted back to t = 0. This is the same as having a start time equal to zero, with the converted end time equal to the end time minus the start time. In addition, all failure times are adjusted by subtracting the start time from each value to ensure that all values occur between t = 0 and the adjusted end time. A start time greater than zero indicates that it is not known what events occurred before the start time. This may have been caused by the events during this period not being tracked and/or recorded properly.

As an example, consider two systems that have entered a reliability growth test. Both systems have a start time equal to zero and both begin the test with the same configuration. System 1 operated for 100 hours and System 2 operated for 125 hours. The failure times for each system are given below:


 * System 1: 25, 47, 80
 * System 2: 15, 62, 89, 110

To build the ESS, the total accumulated hours across both systems is taken into account when a failure occurs. Therefore, given the data for Systems 1 and 2, the ESS is comprised of the following events: 30, 50, 94, 124, 160, 178, 210.

The ESS combines the data from both systems into a single timeline. The termination time for the ESS is (100 + 125) = 225 hours. The parameter estimates for $$\hat{\beta }\,\!$$ and $$\hat{\lambda}\,\!$$ are then calculated using the ESS. This process is the same as the method for Failure Times data.
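
The construction of the equivalent single system can be sketched in a few lines of Python; the routine below reproduces the event times listed above for the two hypothetical systems (start times are assumed to be zero).

```python
def equivalent_single_system(systems):
    """Build the ESS for concurrent operating times.

    systems: list of (failure_times, end_time) tuples, one per system,
             all assumed to start at t = 0.
    """
    events = []
    for fail_times, _ in systems:
        events.extend(fail_times)
    ess = []
    for t in sorted(events):
        # Sum the usage accumulated by every system at the moment of this failure;
        # a system that has already stopped contributes only its end time.
        ess.append(sum(min(t, end) for _, end in systems))
    termination = sum(end for _, end in systems)
    return ess, termination

systems = [([25, 47, 80], 100), ([15, 62, 89, 110], 125)]
print(equivalent_single_system(systems))
# -> ([30, 50, 94, 124, 160, 178, 210], 225)
```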

Multiple Systems with Dates
An overview of the Multiple Systems with Dates data type is presented on the RGA Data Types page. While Multiple Systems with Dates requires a date for each event, including the start and end times for each system, once the equivalent single system is determined, the parameter estimation is the same as it is for Multiple Systems (Concurrent Operating Times). See Parameter Estimation for Multiple Systems (Concurrent Operating Times) for details.

Grouped Data
A description of Grouped Data is presented on the RGA Data Types page.

Parameter Estimation for Grouped Data
For analyzing grouped data, we follow the same logic described previously for the Duane model. If the $$E[N(T)]\,\!$$ equation from the Background section above is linearized:


 * $$\begin{align}

\ln [E(N(T))]=\ln \lambda +\beta \ln T \end{align}\,\!$$

According to Crow [9], the likelihood function for the grouped data case, (where $${{n}_{1}},\,\!$$ $${{n}_{2}},\,\!$$ $${{n}_{3}},\ldots ,\,\!$$ $${{n}_{k}}\,\!$$ failures are observed and $$k\,\!$$ is the number of groups), is:


 * $$\underset{i=1}{\overset{k}{\mathop \prod }}\,\underset{}{\overset{}{\mathop{\Pr }}}\,({{N}_{i}}={{n}_{i}})=\underset{i=1}{\overset{k}{\mathop \prod }}\,\frac{{{(\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta })}^{{{n}_{i}}}}\cdot {{e}^{-(\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta })}}}{{{n}_{i}}!}\,\!$$

And the MLE of $$\lambda \,\!$$ based on this relationship is:


 * $$\hat{\lambda }=\frac{n}{T_{k}^{\hat{\beta }}}\,\!$$

where $$n \,\!$$ is the total number of failures from all the groups.

The estimate of $$\beta \,\!$$ is the value $$\hat{\beta }\,\!$$ that satisfies:


 * $$\underset{i=1}{\overset{k}{\mathop \sum }}\,{{n}_{i}}\left[ \frac{T_{i}^{\hat{\beta }}\ln {{T}_{i}}-T_{i-1}^{\hat{\beta }}\ln {{T}_{i-1}}}{T_{i}^{\hat{\beta }}-T_{i-1}^{\hat{\beta }}}-\ln {{T}_{k}} \right]=0\,\!$$

See Crow-AMSAA Confidence Bounds for details on how confidence bounds for grouped data are calculated.
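
Because the $$\beta \,\!$$ equation above has no closed-form solution, a numerical root finder is typically used. The sketch below assumes SciPy is available; the group end times and failure counts are placeholders, and the root-finding bracket may need to be widened for other data sets.

```python
import math
from scipy.optimize import brentq

# Hypothetical grouped data: group end times (T[0] = 0) and failures per group.
T = [0.0, 100.0, 250.0, 500.0, 800.0]
n = [8, 10, 9, 7]

def beta_equation(beta):
    """Left-hand side of the MLE equation for beta with grouped data."""
    total = 0.0
    for i in range(1, len(T)):
        hi = T[i] ** beta * math.log(T[i])
        lo = 0.0 if T[i - 1] == 0 else T[i - 1] ** beta * math.log(T[i - 1])
        total += n[i - 1] * ((hi - lo) / (T[i] ** beta - T[i - 1] ** beta) - math.log(T[-1]))
    return total

beta_hat = brentq(beta_equation, 0.01, 5.0)   # bracket may need adjusting for other data
lambda_hat = sum(n) / T[-1] ** beta_hat        # lambda_hat = n / T_k^beta_hat
print(beta_hat, lambda_hat)
```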

Chi-Squared Test
A chi-squared goodness-of-fit test is used to test the null hypothesis that the Crow-AMSAA reliability model adequately represents a set of grouped data. This test is applied only when the data is grouped. The expected number of failures in the interval from $${{T}_{i-1}}\,\!$$ to $${{T}_{i}}\,\!$$ is approximated by:


 * $${{\hat{\theta }}_{i}}=\hat{\lambda }\left( T_{i}^{\hat{\beta }}-T_{i-1}^{\hat{\beta }} \right)\,\!$$

For each interval, $${{\hat{\theta }}_{i}}\,\!$$ shall not be less than 5 and, if necessary, adjacent intervals may have to be combined so that the expected number of failures in any combined interval is at least 5. Let the number of intervals after this recombination be $$d\,\!$$, and let the observed number of failures in the $${{i}^{th}}\,\!$$ new interval be $${{N}_{i}}\,\!$$. Finally, let the expected number of failures in the $${{i}^{th}}\,\!$$ new interval be $${{\hat{\theta }}_{i}}\,\!$$. Then the following statistic is approximately distributed as a chi-squared random variable with degrees of freedom $$d-2\,\!$$.


 * $${{\chi }^{2}}=\underset{i=1}{\overset{d}{\mathop \sum }}\,\frac{{{\left( {{N}_{i}}-{{\hat{\theta }}_{i}} \right)}^{2}}}{{{\hat{\theta }}_{i}}}\,\!$$

The null hypothesis is rejected if the $${{\chi }^{2}}\,\!$$ statistic exceeds the critical value for a chosen significance level. In this case, the hypothesis that the Crow-AMSAA model adequately fits the grouped data shall be rejected. Critical values for this statistic can be found in chi-squared distribution tables.
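
A sketch of this test in Python, assuming SciPy is available and that the intervals have already been combined so that every expected count is at least 5:

```python
from scipy.stats import chi2

def chi_squared_test(T, observed, lambda_hat, beta_hat, alpha=0.05):
    """Chi-squared goodness-of-fit statistic for grouped Crow-AMSAA data.

    T: interval boundaries [0, T_1, ..., T_d]; observed: failures per interval.
    Intervals are assumed to already satisfy the expected-count >= 5 rule.
    """
    expected = [lambda_hat * (T[i] ** beta_hat - T[i - 1] ** beta_hat)
                for i in range(1, len(T))]
    statistic = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    d = len(observed)
    critical = chi2.ppf(1 - alpha, d - 2)   # critical value with d-2 degrees of freedom
    return statistic, critical, statistic > critical   # True -> reject the model
```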

Discrete Data
The Crow-AMSAA model can be adapted for the analysis of success/failure data (also called discrete or attribute data). The following discrete data types are available:


 * Sequential
 * Grouped per Configuration
 * Mixed

Sequential data and Grouped per Configuration are very similar as the parameter estimation methodology is the same for both data types. Mixed data is a combination of Sequential Data and Grouped per Configuration and is presented in Mixed Data.

Grouped per Configuration
Suppose system development is represented by $$i\,\!$$ configurations. This corresponds to $$i-1\,\!$$ configuration changes, unless fixes are applied at the end of the test phase, in which case there would be $$i\,\!$$ configuration changes. Let $${{N}_{i}}\,\!$$ be the number of trials during configuration $$i\,\!$$ and let $${{M}_{i}}\,\!$$ be the number of failures during configuration $$i\,\!$$. Then the cumulative number of trials through configuration $$i\,\!$$, namely $${{T}_{i}}\,\!$$, is the sum of the number of trials in all configurations up to and including configuration $$i\,\!$$, or:


 * $${{T}_{i}}=\underset{q=1}{\overset{i}{\mathop \sum }}\,{{N}_{q}}\,\!$$

And the cumulative number of failures through configuration $$i\,\!$$, namely $${{K}_{i}}\,\!$$, is the sum of the failures in all configurations up to and including configuration $$i\,\!$$, or:


 * $${{K}_{i}}=\underset{q=1}{\overset{i}{\mathop \sum }}\,{{M}_{q}}\,\!$$

The expected value of $${{K}_{i}}\,\!$$ can be expressed as $$E[{{K}_{i}}]\,\!$$ and defined as the expected number of failures by the end of configuration $$i\,\!$$. Applying the learning curve property to $$E[{{K}_{i}}]\,\!$$ implies:


 * $$E\left[ {{K}_{i}} \right]=\lambda T_{i}^{\beta }\,\!$$

Denote $${{f}_{1}}\,\!$$ as the probability of failure for configuration 1 and use it to develop a generalized equation for $${{f}_{i}}\,\!$$ in terms of the $${{T}_{i}}\,\!$$ and $${{N}_{i}}\,\!$$. From the equation above, the expected number of failures by the end of configuration 1 is:


 * $$E\left[ {{K}_{1}} \right]=\lambda T_{1}^{\beta }={{f}_{1}}{{N}_{1}}\,\!$$


 * $$\therefore {{f}_{1}}=\frac{\lambda T_{1}^{\beta }}{{{N}_{1}}}\,\!$$

Applying the $$E\left[ {{K}_{i}}\right]\,\!$$ equation again and noting that the expected number of failures by the end of configuration 2 is the sum of the expected number of failures in configuration 1 and the expected number of failures in configuration 2:


 * $$\begin{align}

E\left[ {{K}_{2}} \right] = & \lambda T_{2}^{\beta } \\ = & {{f}_{1}}{{N}_{1}}+{{f}_{2}}{{N}_{2}} \\ = & \lambda T_{1}^{\beta }+{{f}_{2}}{{N}_{2}} \end{align}\,\!$$


 * $$\therefore {{f}_{2}}=\frac{\lambda T_{2}^{\beta }-\lambda T_{1}^{\beta }}{{{N}_{2}}}\,\!$$

By this method of inductive reasoning, a generalized equation for the failure probability on a configuration basis, $${{f}_{i}}\,\!$$, is obtained, such that:


 * $${{f}_{i}}=\frac{\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta }}{{{N}_{i}}}\,\!$$

In this equation, $$i\,\!$$ represents the configuration number. Thus, an equation for the reliability (probability of success) for the $${{i}^{th}}\,\!$$ configuration is obtained:


 * $$\begin{align}

{{R}_{i}}=1-{{f}_{i}} \end{align}\,\!$$
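
Given values for $$\lambda \,\!$$ and $$\beta \,\!$$, the configuration failure probabilities and reliabilities follow directly from the equations above. The short Python sketch below uses placeholder trial counts and parameter values.

```python
# Hypothetical number of trials in each configuration and assumed parameter values.
N = [10, 8, 9]
lam, beta = 0.5, 0.7

T = [0]
for Ni in N:
    T.append(T[-1] + Ni)          # cumulative trials through each configuration

f = [(lam * T[i] ** beta - lam * T[i - 1] ** beta) / N[i - 1] for i in range(1, len(T))]
R = [1 - fi for fi in f]          # reliability (probability of success) per configuration
print(f, R)
```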

Sequential Data
From the Grouped per Configuration section, the following equation is given:


 * $${{f}_{i}}=\frac{\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta }}{{{N}_{i}}}\,\!$$

For the special case where $${{N}_{i}}=1\,\!$$ for all $$i\,\!$$, the equation above becomes a smooth curve, $${{g}_{i}}\,\!$$, that represents the probability of failure for trial by trial data, or:


 * $${{g}_{i}}=\lambda \cdot {{i}^{\beta }}-\lambda \cdot {{\left( i-1 \right)}^{\beta }}\,\!$$

When $${{N}_{i}}=1\,\!$$, this is the same as Sequential Data where systems are tested on a trial-by-trial basis. The equation for the reliability for the $${{i}^{th}}\,\!$$ trial is:


 * $$\begin{align}

{{R}_{i}}=1-{{g}_{i}} \end{align}\,\!$$
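
For trial-by-trial data the same computation reduces to evaluating $${{g}_{i}}\,\!$$ directly; a brief sketch with assumed parameter values:

```python
# Assumed parameter values; i runs over the first 20 trials.
lam, beta = 0.5, 0.7
g = [lam * i ** beta - lam * (i - 1) ** beta for i in range(1, 21)]  # failure probability per trial
R = [1 - gi for gi in g]                                             # reliability per trial
```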

Parameter Estimation for Discrete Data
This section describes procedures for estimating the parameters of the Crow-AMSAA model for success/failure data which includes Sequential data and Grouped per Configuration. An example is presented illustrating these concepts. The estimation procedures provide maximum likelihood estimates (MLEs) for the model's two parameters, $$\lambda \,\!$$ and $$\beta \,\!$$. The MLEs for $$\lambda \,\!$$ and $$\beta \,\!$$ allow for point estimates for the probability of failure, given by:


 * $${{\hat{f}}_{i}}=\frac{\hat{\lambda }T_{i}^{\hat{\beta }}-\hat{\lambda }T_{i-1}^{\hat{\beta }}}{{{N}_{i}}}=\frac{\hat{\lambda }\left( T_{i}^{\hat{\beta }}-T_{i-1}^{\hat{\beta }} \right)}{{{N}_{i}}}\,\!$$

And the probability of success (reliability) for each configuration $$i\,\!$$ is equal to:


 * $${{\hat{R}}_{i}}=1-{{\hat{f}}_{i}}\,\!$$

The likelihood function is:


 * $$\underset{i=1}{\overset{k}{\mathop \prod }}\,\left( \begin{matrix}

{{N}_{i}} \\ {{M}_{i}} \\ \end{matrix} \right){{\left( \frac{\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta }}{{{N}_{i}}} \right)}^{{{M}_{i}}}}{{\left( \frac{{{N}_{i}}-\lambda T_{i}^{\beta }+\lambda T_{i-1}^{\beta }}{{{N}_{i}}} \right)}^{{{N}_{i}}-{{M}_{i}}}}\,\!$$

Taking the natural log on both sides yields:


 * $$\begin{align}

\Lambda = & \underset{i=1}{\overset{K}{\mathop \sum }}\,\left[ \ln \left( \begin{matrix}  {{N}_{i}}  \\   {{M}_{i}}  \\ \end{matrix} \right)+{{M}_{i}}\left[ \ln (\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta })-\ln {{N}_{i}} \right] \right] \\  & +\underset{i=1}{\overset{K}{\mathop \sum }}\,\left[ ({{N}_{i}}-{{M}_{i}})\left[ \ln ({{N}_{i}}-\lambda T_{i}^{\beta }+\lambda T_{i-1}^{\beta })-\ln {{N}_{i}} \right] \right] \end{align}\,\!$$

Taking the derivative with respect to $$\lambda \,\!$$ and $$\beta \,\!$$ respectively, exact MLEs for $$\lambda \,\!$$ and $$\beta \,\!$$ are values satisfying the following two equations:


 * $$\begin{align}

\underset{i=1}{\overset{K}{\mathop \sum }}\,{{H}_{i}}\times {{S}_{i}}= & 0 \\ \underset{i=1}{\overset{K}{\mathop \sum }}\,{{U}_{i}}\times {{S}_{i}}= & 0 \end{align}\,\!$$

where:


 * $$\begin{align}

{{H}_{i}}= & \left[ T_{i}^{\beta }\ln {{T}_{i}}-T_{i-1}^{\beta }\ln {{T}_{i-1}} \right] \\ {{S}_{i}}= & \frac{{{M}_{i}}}{\left[ \lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta } \right]}-\frac{{{N}_{i}}-{{M}_{i}}}{\left[ {{N}_{i}}-\lambda T_{i}^{\beta }+\lambda T_{i-1}^{\beta } \right]} \\ {{U}_{i}}= & T_{i}^{\beta }-T_{i-1}^{\beta }\, \end{align}\,\!$$
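
The two equations must be solved simultaneously for $$\lambda \,\!$$ and $$\beta \,\!$$. A numerical sketch assuming SciPy's fsolve is shown below; the success/failure data are placeholders, and the starting values may need to be adjusted for real data so that the iterates stay inside the valid region (i.e., $$\lambda (T_{i}^{\beta }-T_{i-1}^{\beta })<{{N}_{i}}\,\!$$).

```python
import math
import numpy as np
from scipy.optimize import fsolve

# Hypothetical grouped success/failure data.
N = [10, 8, 9, 12]                 # trials per configuration
M = [4, 2, 2, 1]                   # failures per configuration
T = np.cumsum([0] + N)             # cumulative trials, T[0] = 0

def equations(params):
    """Sum(H_i * S_i) = 0 and Sum(U_i * S_i) = 0, evaluated at (lambda, beta)."""
    lam, beta = params
    eq1 = eq2 = 0.0
    for i in range(1, len(T)):
        Ti, Tim1 = float(T[i]), float(T[i - 1])
        U = Ti ** beta - Tim1 ** beta
        H = Ti ** beta * math.log(Ti) - (0.0 if Tim1 == 0 else Tim1 ** beta * math.log(Tim1))
        S = M[i - 1] / (lam * U) - (N[i - 1] - M[i - 1]) / (N[i - 1] - lam * U)
        eq1 += H * S
        eq2 += U * S
    return [eq1, eq2]

lam_hat, beta_hat = fsolve(equations, [0.5, 0.8])   # starting values are placeholders
print(lam_hat, beta_hat)
```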

Mixed Data
The Mixed data type provides additional flexibility in terms of how it can handle different testing strategies. Systems can be tested in groups of trials per configuration, on an individual trial-by-trial basis, or in a mixed combination of individual trials and configurations of more than one trial. The Mixed data type allows you to enter the data so that it represents how the systems were tested within the total number of trials. For example, if you launched five (5) missiles for a given configuration and none of them failed during testing, then there would be a row within the data sheet indicating that this configuration operated successfully for these five trials. If the very next trial, the sixth, failed, then this would be a separate row within the data. This flexibility in data entry allows for a greater understanding of how the systems were tested by simply examining the data. The methodology for estimating the parameters $$\hat{\beta }\,\!$$ and $$\hat{\lambda}\,\!$$ is the same as the one presented in the Grouped Data section. With Mixed data, the average reliability and average unreliability within a given interval can also be calculated.

The average unreliability is calculated as:


 * $$\text{Average Unreliability }({{t}_{1,}}{{t}_{2}})=\frac{\lambda t_{2}^{\beta }-\lambda t_{1}^{\beta }}{{{t}_{2}}-{{t}_{1}}}\,\!$$

and the average reliability is calculated as:


 * $$\text{Average Reliability }({{t}_{1,}}{{t}_{2}})=1-\frac{\lambda t_{2}^{\beta }-\lambda t_{1}^{\beta }}{{{t}_{2}}-{{t}_{1}}}\,\!$$
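
A two-function Python sketch of these interval averages, with $$\lambda \,\!$$ and $$\beta \,\!$$ supplied as arguments:

```python
def average_unreliability(t1, t2, lam, beta):
    """Average failure probability over the trial interval (t1, t2]."""
    return (lam * t2 ** beta - lam * t1 ** beta) / (t2 - t1)

def average_reliability(t1, t2, lam, beta):
    """Average reliability over the trial interval (t1, t2]."""
    return 1 - average_unreliability(t1, t2, lam, beta)
```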

Mixed Data Confidence Bounds
Bounds on Average Failure Probability
The process to calculate the average unreliability confidence bounds for Mixed data is as follows:


 * 1) Calculate the average failure probability $$({{t}_{1}},{{t}_{2}})\,\!$$.
 * 2) There will exist a $${{t}^{*}}\,\!$$ between $${{t}_{1}}\,\!$$ and $${{t}_{2}}\,\!$$ such that the instantaneous unreliability at $${{t}^{*}}\,\!$$ equals the average unreliability $$({{t}_{1}},{{t}_{2}})\,\!$$. The confidence intervals for the instantaneous unreliability at $${{t}^{*}}\,\!$$ are the confidence intervals for the average unreliability $$({{t}_{1}},{{t}_{2}})\,\!$$.

Bounds on Average Reliability
The process to calculate the average reliability confidence bounds for Mixed data is as follows:


 * 1) Calculate confidence bounds for average unreliability $$({{t}_{1}},{{t}_{2}})\,\!$$ as described above.
 * 2) The confidence bounds for reliability are 1 minus these confidence bounds for average unreliability.