RGA Overview

New format available! This reference is now available in a new format that offers faster page load, improved display for calculations and images, more targeted search and the latest content available as a PDF. As of September 2023, this Reliawiki page will not continue to be updated. Please update all links and bookmarks to the latest reference at help.reliasoft.com/reference/reliability_growth_and_repairable_system_analysis

Why Are Reliability Growth Models Needed?

In order to effectively manage a reliability growth program and attain the reliability goals, it is imperative that valid reliability assessments of the system be available. Assessments of interest generally include estimating the current reliability of the system configuration under test and estimating the projected increase in reliability if proposed corrective actions are incorporated into the system. These and other metrics give management information on what actions to take in order to attain the reliability goals. Reliability growth assessments are made in a dynamic environment where the reliability is changing due to corrective actions. The objective of most reliability growth models is to account for this changing situation in order to estimate the current and future reliability and other metrics of interest. The choice of a particular growth model is typically based on how well it is expected to provide useful information to management and engineering.

Reliability growth can be quantified by looking at various metrics of interest, such as the increase in the MTBF, the decrease in the failure intensity or the increase in the mission success probability; these metrics are generally mathematically related and can be derived from each other. All key estimates used in reliability growth management, such as demonstrated reliability, projected reliability and estimates of the growth potential, can generally be expressed in terms of the MTBF, failure intensity or mission reliability. Changes in these values, typically as a function of test time, are collectively called reliability growth trends and are usually presented as reliability growth curves. These curves are often constructed based on certain mathematical and statistical models called reliability growth models. The ability to accurately estimate the demonstrated reliability and calculate projections to some point in the future can help determine the following:

• Whether the stated reliability requirements will be achieved
• The associated time for meeting such requirements
• The associated costs of meeting such requirements
• The correlation of reliability changes with reliability activities

In addition, demonstrated reliability and projection assessments aid in:
• Establishing warranties
• Planning for maintenance resources and logistic activities
• Life-cycle-cost analysis
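
As a simple illustration of how these metrics are related, the sketch below assumes a constant failure intensity (exponentially distributed times between failures), in which case the MTBF is the reciprocal of the failure intensity and the mission success probability follows directly. The function names and numerical values are illustrative assumptions, not part of RGA.

```python
import math

def mtbf_from_intensity(failure_intensity):
    """MTBF is the reciprocal of a constant failure intensity."""
    return 1.0 / failure_intensity

def mission_reliability(mtbf, mission_time):
    """Probability of completing a mission of the given length with no failure,
    assuming exponentially distributed times between failures."""
    return math.exp(-mission_time / mtbf)

# Illustrative values: failure intensity improves from 0.010 to 0.004 failures/hour.
for intensity in (0.010, 0.004):
    mtbf = mtbf_from_intensity(intensity)
    print(f"intensity={intensity:.3f}/hr  MTBF={mtbf:.0f} hr  "
          f"R(100 hr mission)={mission_reliability(mtbf, 100):.3f}")
```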

Reliability Growth Analysis


Reliability growth analysis is the process of collecting, modeling, analyzing and interpreting data from the reliability growth development test program (development testing). In addition, reliability growth models can be applied to data collected from the field (fielded systems). Fielded systems analysis also includes the ability to analyze data from complex repairable systems. Depending on the metric(s) of interest and the data collection method, different models can be utilized (or developed) to analyze the growth process. As an example of such a model development, consider the simple case presented in the next section.

A Simple Reliability Growth Model

For the sake of simplicity, first look at the case when you are interested in a unit that can only succeed or fail. For example, consider the case of a wine glass designed to withstand a fall of three feet onto a level cement surface.

Rgai2.2.png


The success/failure result of such a drop is determined by whether or not the glass breaks.

Furthermore, assume that:
• You will continue to drop the glass, looking at the results and then adjusting the design after each failure until you are sure that the glass will not break.

• Any redesign effort is either completely successful or it does not change the inherent reliability ( [math]\displaystyle{ R }[/math] ). In other words, the reliability is either 1 or [math]\displaystyle{ R }[/math] , [math]\displaystyle{ 0\lt R\lt 1 }[/math] .

• When testing the product, if a success is encountered on any given trial, no corrective action or redesign is implemented.

• If the trial fails, then you will redesign the product.

• When the product is redesigned, assume that the probability of fixing the product permanently before the next trial is [math]\displaystyle{ \alpha }[/math] . In other words, the glass may or may not have been fixed.

• Let [math]\displaystyle{ {{P}_{n}}(0) }[/math] and [math]\displaystyle{ {{P}_{n}}(1) }[/math] be the probabilities that the glass is unreliable and reliable, respectively, just before the [math]\displaystyle{ {{n}^{th}} }[/math] trial, and assume that the glass is in the unreliable state just before the first trial; that is, [math]\displaystyle{ {{P}_{1}}(0)=1 }[/math] .

Rgai2.3.png

Now given the above assumptions, the question of how the glass could be in the unreliable state just before trial [math]\displaystyle{ n }[/math] can be answered in two mutually exclusive ways.
The first possibility is that the glass was in the unreliable state just before trial [math]\displaystyle{ n-1 }[/math] , with probability [math]\displaystyle{ {{P}_{n-1}}(0) }[/math] , and trial [math]\displaystyle{ n-1 }[/math] was a success, with probability [math]\displaystyle{ (1-p) }[/math] , where [math]\displaystyle{ p }[/math] is the probability of failure on any trial made while in the unreliable state (so no redesign was attempted):

[math]\displaystyle{ (1-p){{P}_{n-1}}(0) }[/math]

The second possibility is that the glass was in the unreliable state just before trial [math]\displaystyle{ n-1 }[/math] , with probability [math]\displaystyle{ {{P}_{n-1}}(0) }[/math] , it failed that trial, with probability [math]\displaystyle{ p }[/math] , and the subsequent attempt to fix it was unsuccessful, with probability [math]\displaystyle{ (1-\alpha ) }[/math] :

[math]\displaystyle{ p(1-\alpha ){{P}_{n-1}}(0) }[/math]


Therefore, the sum of these two probabilities, or possible events, gives the probability of being unreliable just before trial [math]\displaystyle{ n }[/math] :

[math]\displaystyle{ {{P}_{n}}(0)=(1-p){{P}_{n-1}}(0)+p(1-\alpha ){{P}_{n-1}}(0) }[/math]
or:
[math]\displaystyle{ {{P}_{n}}(0)=(1-p\alpha ){{P}_{n-1}}(0) }[/math]

By induction, since [math]\displaystyle{ {{P}_{1}}(0)=1 }[/math] :

[math]\displaystyle{ {{P}_{n}}(0)={{(1-p\alpha )}^{n-1}} }[/math]


To determine the probability of being in the reliable state just before trial [math]\displaystyle{ n }[/math] , the above expression for [math]\displaystyle{ {{P}_{n}}(0) }[/math] is subtracted from 1, therefore:

[math]\displaystyle{ {{P}_{n}}(1)=1-{{(1-p\alpha )}^{n-1}} }[/math]
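
As a quick numerical check (not part of the original derivation), the short sketch below iterates the recursion [math]\displaystyle{ {{P}_{n}}(0)=(1-p\alpha ){{P}_{n-1}}(0) }[/math] and confirms that it matches the closed form [math]\displaystyle{ {{(1-p\alpha )}^{n-1}} }[/math] , with [math]\displaystyle{ {{P}_{n}}(1) }[/math] as its complement. The parameter values are arbitrary.

```python
p, alpha = 0.3, 0.6        # assumed failure and fix probabilities (illustrative)
P0 = 1.0                   # P_1(0): the glass starts in the unreliable state
for n in range(1, 11):
    closed_form = (1 - p * alpha) ** (n - 1)
    assert abs(P0 - closed_form) < 1e-12   # recursion agrees with the closed form
    print(f"n={n:2d}  P_n(0)={P0:.4f}  P_n(1)={1 - P0:.4f}")
    P0 *= (1 - p * alpha)  # recursion: P_{n+1}(0) = (1 - p*alpha) * P_n(0)
```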


Define the reliability [math]\displaystyle{ {{R}_{n}} }[/math] of the glass as the probability of not failing at trial [math]\displaystyle{ n }[/math] . The probability of not failing at trial [math]\displaystyle{ n }[/math] is the sum of the probability of being reliable just before trial [math]\displaystyle{ n }[/math] , [math]\displaystyle{ (1-{{(1-p\alpha )}^{n-1}}) }[/math] , and the probability of being unreliable just before trial [math]\displaystyle{ n }[/math] but not failing, [math]\displaystyle{ \left( {{(1-p\alpha )}^{n-1}}(1-p) \right) }[/math] . Thus:

[math]\displaystyle{ {{R}_{n}}=\left( 1-{{(1-p\alpha )}^{n-1}} \right)+\left( (1-p){{(1-p\alpha )}^{n-1}} \right) }[/math]
or:
[math]\displaystyle{ {{R}_{n}}=1-{{(1-p\alpha )}^{n-1}}\cdot p }[/math]


Now instead of [math]\displaystyle{ {{P}_{1}}(0)=1 }[/math] , assume that the glass has some initial reliability; that is, the probability that the glass is in the unreliable state at [math]\displaystyle{ n=1 }[/math] is [math]\displaystyle{ {{P}_{1}}(0)=\beta }[/math] . Then:

[math]\displaystyle{ {{R}_{n}}=1-\beta p{{(1-p\alpha )}^{n-1}} }[/math]

When [math]\displaystyle{ \beta \lt 1 }[/math] , the reliability at the [math]\displaystyle{ {{n}^{th}} }[/math] trial is larger than when it was certain that the device was unreliable at trial [math]\displaystyle{ n=1 }[/math] , and a trend of reliability growth can be seen in this equation. Let [math]\displaystyle{ A=\beta p }[/math] and [math]\displaystyle{ C=\ln \left( \tfrac{1}{1-p\alpha } \right)\gt 0 }[/math] ; then the equation becomes:

[math]\displaystyle{ {{R}_{n}}=1-A{{e}^{-C(n-1)}} }[/math]


This equation is now a model that can be utilized to obtain the reliability (or probability that the glass will not break) after the [math]\displaystyle{ {{n}^{th}} }[/math] trial. Additional models, their applications and methods of estimating their parameters are presented in the following chapters.
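
For readers who want to see this simple model in action, here is a minimal Monte Carlo sketch of the drop-and-fix process described above. It simulates many independent histories and compares the observed fraction of successes at each trial with the closed form [math]\displaystyle{ {{R}_{n}}=1-\beta p{{(1-p\alpha )}^{n-1}} }[/math] . The parameter values are arbitrary and the code is an illustration only, not an RGA implementation.

```python
import random

def simulate_R(p, alpha, beta, trials, histories=100_000, seed=1):
    """Estimate R_n (probability of surviving trial n) by direct simulation."""
    rng = random.Random(seed)
    survived = [0] * trials
    for _ in range(histories):
        unreliable = rng.random() < beta        # initially unreliable with probability beta
        for n in range(trials):
            failed = unreliable and (rng.random() < p)
            if failed:
                if rng.random() < alpha:        # redesign fixes the glass permanently
                    unreliable = False
            else:
                survived[n] += 1
    return [s / histories for s in survived]

p, alpha, beta = 0.3, 0.6, 1.0
for n, r_sim in enumerate(simulate_R(p, alpha, beta, trials=8), start=1):
    r_model = 1 - beta * p * (1 - p * alpha) ** (n - 1)
    print(f"n={n}  simulated={r_sim:.3f}  closed form={r_model:.3f}")
```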

Fielded Systems

When a complex system with new technology is fielded and subjected to a customer use environment, there is often considerable interest in assessing its reliability and other related performance metrics, such as availability. This interest in evaluating the system reliability based on actual customer usage failure data may be motivated by a number of factors. For example, the reliability that is generally measured during development is typically related to the system's inherent reliability capability. This inherent capability may differ from actual use experience because of different operating conditions or environment, different maintenance policies, different levels of experience of maintenance personnel, etc. Although operational tests are conducted for many systems during development, it is generally recognized that in many cases these tests may not yield complete data representative of an actual use environment. Moreover, the testing during development is typically limited by the usual cost and schedule constraints, which prevent obtaining a system's reliability profile over an extended portion of its life. Other interests in measuring the reliability of a fielded system may center on, for example, logistics and maintenance policies, quality and manufacturing issues, burn-in, wearout, mission reliability or warranties.

Most complex systems are repaired, not replaced, when they fail. For example, a complex communication system or a truck would be repaired upon failure, not thrown away and replaced by a new system. A number of books and papers in the literature have stressed that the usual non-repairable reliability analysis methodologies, such as the Weibull distribution, are not appropriate for repairable system reliability analyses and have suggested the use of nonhomogeneous Poisson process models instead. The homogeneous Poisson process, which is equivalent to the widely used Poisson distribution with exponential times between system failures, is appropriate when the system's failure intensity is not affected by the system's age. However, to realistically consider burn-in, wearout, useful life, maintenance policies, warranties, mission reliability, etc., the analyst will often require an approach that recognizes that the failure intensity of these systems may not be constant over the operating life of interest but may change with system age. A useful and generally practical extension of the homogeneous Poisson process is the nonhomogeneous Poisson process, which allows the system failure intensity to change with system age. Typically, the reliability analysis of a repairable system under customer use will involve data generated by multiple systems. Crow [17] proposed the Weibull process, or power law nonhomogeneous Poisson process, for this type of analysis and developed appropriate statistical procedures for maximum likelihood estimation, goodness-of-fit and confidence bounds.

Failure Rate and Failure Intensity

Failure rate and failure intensity are very similar terms. The term failure rate applies to the distribution of the time to a single failure, while the term failure intensity typically refers to a process, such as a reliability growth program or a repairable system. The system age when a system is first put into service is time [math]\displaystyle{ 0 }[/math] . Under the non-homogeneous Poisson process (NHPP), the first failure is governed by a distribution [math]\displaystyle{ F(x) }[/math] with failure rate [math]\displaystyle{ r(x) }[/math] . Each succeeding failure is governed by the intensity function [math]\displaystyle{ u(t) }[/math] of the process. Let [math]\displaystyle{ t }[/math] be the age of the system and let [math]\displaystyle{ \Delta t }[/math] be very small. The probability that a system of age [math]\displaystyle{ t }[/math] fails between [math]\displaystyle{ t }[/math] and [math]\displaystyle{ t+\Delta t }[/math] is given by the intensity function [math]\displaystyle{ u(t)\Delta t }[/math] . Notice that this probability is not conditioned on not having had any system failures up to time [math]\displaystyle{ t }[/math] , as would be the case for a failure rate. The failure intensity [math]\displaystyle{ u(t) }[/math] for the NHPP has the same functional form as the failure rate governing the first system failure. Therefore, [math]\displaystyle{ u(t)=r(t) }[/math] , where [math]\displaystyle{ r(t) }[/math] is the failure rate for the distribution function of the first system failure. If the first system failure follows the Weibull distribution, the failure rate is:

[math]\displaystyle{ r(x)=\lambda \beta {{x}^{\beta -1}} }[/math]

Under minimal repair, the system intensity function is:

[math]\displaystyle{ u(t)=\lambda \beta {{t}^{\beta -1}} }[/math]

This is the power law model. It can be viewed as an extension of the Weibull distribution. The Weibull distribution governs the first system failure and the power law model governs each succeeding system failure. Additional information on the power law model can also be found in Chapter 13.
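
To make the power law concrete, the sketch below evaluates the intensity [math]\displaystyle{ u(t)=\lambda \beta {{t}^{\beta -1}} }[/math] , the corresponding expected cumulative number of failures [math]\displaystyle{ \lambda {{t}^{\beta }} }[/math] , and draws one set of failure times by inverting the cumulative intensity. The parameter values are assumptions chosen for illustration; this is not RGA's implementation.

```python
import math
import random

def intensity(t, lam, beta):
    """Power law (Crow-AMSAA) intensity: u(t) = lam * beta * t**(beta - 1)."""
    return lam * beta * t ** (beta - 1)

def expected_failures(t, lam, beta):
    """Mean value function of the power law NHPP: E[N(t)] = lam * t**beta."""
    return lam * t ** beta

def sample_failure_times(lam, beta, t_end, seed=1):
    """Draw one realization of failure times on (0, t_end] by sequentially
    inverting the cumulative intensity Lambda(t) = lam * t**beta."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        # add an Exp(1) increment to the cumulative intensity and invert it
        t = ((lam * t ** beta - math.log(1.0 - rng.random())) / lam) ** (1.0 / beta)
        if t > t_end:
            return times
        times.append(t)

# beta < 1: decreasing intensity (reliability growth); beta > 1 would indicate wearout.
lam, beta = 0.5, 0.6
for t in (100, 500, 1000):
    print(f"t={t:4d}  u(t)={intensity(t, lam, beta):.4f}  E[N(t)]={expected_failures(t, lam, beta):.1f}")
print("one simulated history:", [round(x, 1) for x in sample_failure_times(lam, beta, 200)])
```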
