{{Template:bsbook|2}}

=Introduction =
This chapter presents a brief review of statistical principles and terminology.  The objective of this chapter is to introduce concepts from probability theory and statistics that will be used in later chapters. As such, this chapter is not intended to cover this subject completely, but rather to provide an overview of applicable concepts as a foundation that you can refer to when more complex concepts are introduced.   


If you are familiar with basic probability theory and life data analysis, you may wish to skip this chapter.  If you would like additional information, we encourage you to review other references on the subject.


=A Brief Introduction to Probability Theory =
==Basic Definitions==
Before considering the methodology for estimating system reliability, some basic concepts from probability theory should be reviewed.

The terms that follow are important in creating and analyzing reliability block diagrams.

#Experiment <math>(E)\,\!</math>:  An experiment is any well-defined action that may result in a number of outcomes.  For example, the rolling of dice can be considered an experiment.
#Outcome <math>(O)\,\!</math>:  An outcome is defined as any possible result of an experiment.
#Sample space <math>(S)\,\!</math>:  The sample space is defined as the set of all possible outcomes of an experiment.
#Event:  An event is a collection of outcomes.
#Union of two events <math>A\,\!</math> and <math>B\,\!</math> <math>(A\cup B)\,\!</math>:  The union of two events <math>A\,\!</math> and <math>B\,\!</math> is the set of outcomes that belong to <math>A\,\!</math> or <math>B\,\!</math> or both.
#Intersection of two events <math>A\,\!</math> and <math>B\,\!</math> <math>(A\cap B)\,\!</math>:  The intersection of two events <math>A\,\!</math> and <math>B\,\!</math> is the set of outcomes that belong to both <math>A\,\!</math> and <math>B\,\!</math>.
#Complement of event <math>A\,\!</math> ( <math>\overline{A}\,\!</math> ):  The complement of an event <math>A\,\!</math> contains all outcomes of the sample space, <math>S\,\!</math>, that do not belong to <math>A\,\!</math>.
#Null event ( <math>\varnothing\,\!</math> ):  A null event is an empty set that has no outcomes.
#Probability:  Probability is a numerical measure of the likelihood of an event relative to a set of alternative events.  For example, there is a 50% probability of observing heads relative to observing tails when flipping a coin (assuming a fair or unbiased coin).

'''Example'''

Consider an experiment that consists of the rolling of a six-sided die.  The numbers on each side of the die are the possible outcomes.  Accordingly, the sample space is <math>S=\{1,2,3,4,5,6\}\,\!</math>.

Let <math>A\,\!</math> be the event of rolling a 3, 4 or 6, <math>A=\{3,4,6\}\,\!</math>, and let <math>B\,\!</math> be the event of rolling a 2, 3 or 5, <math>B=\{2,3,5\}\,\!</math>.
#The union of <math>A\,\!</math> and <math>B\,\!</math> is: <math>A\cup B=\{2,3,4,5,6\}\,\!</math>.
#The intersection of <math>A\,\!</math> and <math>B\,\!</math> is: <math>A\cap B=\{3\}\,\!</math>.
#The complement of <math>A\,\!</math> is: <math>\overline{A}=\{1,2,5\}\,\!</math>.
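
These set operations can be reproduced directly with Python's built-in <code>set</code> type.  The following minimal sketch mirrors the die example above:

<syntaxhighlight lang="python">
# Sample space and events for the six-sided die example
S = {1, 2, 3, 4, 5, 6}
A = {3, 4, 6}
B = {2, 3, 5}

print(A | B)  # union: {2, 3, 4, 5, 6}
print(A & B)  # intersection: {3}
print(S - A)  # complement of A: {1, 2, 5}
</syntaxhighlight>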
 
==Probability Properties, Theorems and Axioms==
 
The probability of an event <math>A\,\!</math> is expressed as <math>P(A)\,\!</math> and has the following properties:
# <math>0\le P(A)\le 1\,\!</math>
# <math>P(A)=1-P(\overline{A})\,\!</math>
# <math>P(\varnothing)=0\,\!</math>
# <math>P(S)=1\,\!</math>  



In other words, when an event is certain to occur, it has a probability equal to 1; when it is impossible for the event to occur, it has a probability equal to 0.

It can also be shown that the probability of the union of two events <math>A\,\!</math> and <math>B\,\!</math> is:

::<math>P(A\cup B)=P(A)+P(B)-P(A\cap B)\,\!</math>

Similarly, the probability of the union of three events, <math>A\,\!</math>, <math>B\,\!</math> and <math>C\,\!</math> is given by:  

::<math>\begin{align}
P(A\cup B\cup C)= & P(A)+P(B)+P(C) \\
& -P(A\cap B)-P(A\cap C) \\
& -P(B\cap C)+P(A\cap B\cap C)
\end{align}\,\!</math>
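
As a quick numerical check of the two-event form, the die events defined in the earlier example can be counted exhaustively; since each of the six outcomes is equally likely, a probability is just the favorable outcome count divided by 6:

<syntaxhighlight lang="python">
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {3, 4, 6}
B = {2, 3, 5}

def P(event):
    # Probability of an event for a fair die: favorable outcomes over total
    return Fraction(len(event), len(S))

print(P(A | B))                # 5/6, counted directly
print(P(A) + P(B) - P(A & B))  # 5/6, by the union formula
</syntaxhighlight>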


===Mutually Exclusive Events===
Two events <math>A\,\!</math> and <math>B\,\!</math> are said to be mutually exclusive if it is impossible for them to occur simultaneously ( <math>A\cap B=\varnothing \,\!</math> ). In such cases, the expression for the union of these two events reduces to the following, since the probability of the intersection of these events is defined as zero.

::<math>P(A\cup B)=P(A)+P(B)\,\!</math>

===Conditional Probability===
The conditional probability of two events <math>A\,\!</math> and <math>B\,\!</math> is defined as the probability of one of the events occurring, knowing that the other event has already occurred. The expression below denotes the probability of <math>A\,\!</math> occurring given that <math>B\,\!</math> has already occurred.

::<math>P(A|B)=\frac{P(A\cap B)}{P(B)}\,\!</math>

Note that knowing that event <math>B\,\!</math> has occurred reduces the sample space.

===Independent Events===
If knowing <math>B\,\!</math> gives no information about <math>A\,\!</math>, then the events are said to be ''independent'' and the conditional probability expression reduces to:

::<math>P(A|B)=P(A)\,\!</math>

From the definition of conditional probability, <math>P(A|B)=\frac{P(A\cap B)}{P(B)}\,\!</math> can be written as:

::<math>P(A\cap B)=P(A|B)P(B)\,\!</math>

Since events <math>A\,\!</math> and <math>B\,\!</math> are independent, the expression reduces to:

::<math>P(A\cap B)=P(A)P(B)\,\!</math>

If a group of <math>n\,\!</math> events <math>{{A}_{i}}\,\!</math> are independent, then:

::<math>P\left[ \underset{i=1}{\overset{n}{\mathop \bigcap }}\,{{A}_{i}} \right]=\underset{i=1}{\overset{n}{\mathop \prod }}\,P({{A}_{i}})\,\!</math>

As an illustration, consider the outcome of a six-sided die roll.  The probability of rolling a 3 is one out of six or:  

::<math>\begin{align}
P(O=3)=1/6=0.16667
\end{align}\,\!</math>

All subsequent rolls of the die are independent events, since knowing the outcome of the first die roll gives no information as to the outcome of subsequent die rolls (unless the die is loaded).  Thus the probability of rolling a 3 on the second die roll is again:

::<math>\begin{align}
P(O=3)=1/6=0.16667
\end{align}\,\!</math>

However, if one were to ask the probability of rolling a double 3 with two dice, the result would be:

::<math>\begin{align}
0.16667\cdot 0.16667= & 0.027778 \\
= & \frac{1}{36}
\end{align}\,\!</math>
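
The product rule for independent events can be confirmed by brute-force enumeration of the 36 equally likely outcomes of two die rolls; a minimal sketch:

<syntaxhighlight lang="python">
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of two independent die rolls
outcomes = list(product(range(1, 7), repeat=2))
double_threes = [o for o in outcomes if o == (3, 3)]

print(Fraction(len(double_threes), len(outcomes)))  # 1/36, by enumeration
print(Fraction(1, 6) * Fraction(1, 6))              # 1/36, by independence
</syntaxhighlight>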
===Example 1===
Consider a system where two hinged members are holding a load in place, as shown next.


[[Image:chp3ex1.png|center|350px|System for Example 1|link=]]


The system fails if either member fails and the load is moved from its position.
#Let <math>A=\,\!</math> the event of failure of Component 1 and let <math>\overline{A}=\,\!</math> the event of non-failure of Component 1.
#Let <math>B=\,\!</math> the event of failure of Component 2 and let <math>\overline{B}=\,\!</math> the event of non-failure of Component 2.

Failure occurs if Component 1 or Component 2 or both fail. The system probability of failure (or unreliability) is:

::<math>{{P}_{f}}=P(A\cup B)=P(A)+P(B)-P(A\cap B)\,\!</math>

Assuming independence (or that the failure of either component is not influenced by the success or failure of the other component), the system probability of failure becomes the sum of the probabilities of <math>A\,\!</math> and <math>B\,\!</math> occurring minus the product of the probabilities:

::<math>{{P}_{f}}=P(A\cup B)=P(A)+P(B)-P(A)P(B)\,\!</math>

Another approach is to calculate the probability of the system not failing (i.e., the reliability of the system):

::<math>\begin{align}
P(no\text{ }failure)= & Reliability \\
= & P(\overline{A}\cap\overline{B})\\
= & P(\overline{A})P(\overline{B})
\end{align}\,\!</math>

Then the probability of system failure is simply 1 (or 100%) minus the reliability:

::<math>\begin{align}
{{P}_{f}}=1-Reliability
\end{align}\,\!</math>
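
As a numerical sketch of this example, assume hypothetical component failure probabilities of <math>P(A)=0.1</math> and <math>P(B)=0.2</math>; both routes give the same system unreliability:

<syntaxhighlight lang="python">
# Hypothetical component failure probabilities (illustration only)
p_a, p_b = 0.1, 0.2

# Route 1: inclusion-exclusion with independent events
pf_union = p_a + p_b - p_a * p_b

# Route 2: one minus the probability that neither component fails
pf_complement = 1 - (1 - p_a) * (1 - p_b)

print(pf_union)       # 0.28
print(pf_complement)  # 0.28 (up to floating-point rounding)
</syntaxhighlight>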


===Example 2===
Consider a system with a load being held in place by two rigid members, as shown next.


[[Image:chp3ex2.png|center|350px|System for Example 2.|link=]]

:• Let <math>A=\,\!</math> the event of failure of Component 1.
:• Let <math>B=\,\!</math> the event of failure of Component 2.
:• The system fails if Component 1 fails and Component 2 fails.  In other words, both components must fail for the system to fail.
The system probability of failure is defined as the intersection of events <math>A\,\!</math> and <math>B\,\!</math>:

::<math>{{P}_{f}}=P(A\cap B)\,\!</math>

'''Case 1'''

Assuming independence (i.e., either one of the members is sufficiently strong to hold the load in place), the probability of system failure becomes the product of the probabilities of <math>A\,\!</math> and <math>B\,\!</math> failing:

::<math>{{P}_{f}}=P(A\cap B)=P(A)P(B)\,\!</math>

The reliability of the system now becomes:

::<math>Reliability=1-{{P}_{f}}=1-P(A)P(B)\,\!</math>

'''Case 2'''

If independence is not assumed (e.g., when one component fails, the other one is then more likely to fail), then the simplification <math>Reliability=1-{{P}_{f}}=1-P(A)P(B)\,\!</math> is no longer applicable.  In this case, <math>{{P}_{f}}=P(A\cap B)\,\!</math> must be used.  We will examine this dependency in later sections under the subject of load sharing.
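
Using the same hypothetical failure probabilities as in Example 1, the Case 1 result shows how much lower the unreliability of this redundant configuration is, since both members must fail:

<syntaxhighlight lang="python">
# Hypothetical component failure probabilities (illustration only)
p_a, p_b = 0.1, 0.2

# Case 1: independent failures, so the system fails only if both members fail
pf = p_a * p_b
reliability = 1 - pf

print(pf)           # 0.02
print(reliability)  # 0.98
</syntaxhighlight>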



=A Brief Introduction to Continuous Life Distributions =
===Random Variables===
[[Image:chp3randomvariables.png|thumb|center|400px|]]

In general, most problems in reliability engineering deal with quantitative measures, such as the time-to-failure of a product, or qualitative measures, such as whether a product is defective or non-defective. We can then use a random variable <math>X</math> to denote these possible measures.

In the case of times-to-failure, our random variable <math>X</math> is the time-to-failure of the product and can take on an infinite number of possible values in a range from 0 to infinity (since we do not know the exact time a priori). Our product can be found failed at any time after time 0 (e.g., at 12 hours or at 100 hours and so forth), thus <math>X</math> can take on any value in this range.  In this case, our random variable <math>X</math> is said to be a continuous random variable. In this reference, we will deal almost exclusively with continuous random variables.

In judging a product to be defective or non-defective, only two outcomes are possible.  That is, <math>X</math> is a random variable that can take on one of only two values (let's say defective = 0 and non-defective = 1). In this case, the variable is said to be a discrete random variable.

===The Probability and Cumulative Density (Distribution) Functions===
The probability density function (''pdf'') and cumulative distribution function (''cdf'') are two of the most important statistical functions in reliability and are very closely related.  When these functions are known, almost any other reliability measure of interest can be derived or obtained. We will now take a closer look at these functions and how they relate to other reliability measures, such as the reliability function and failure rate.

====Designations====
From probability and statistics, given a continuous random variable <math>X</math>, we denote:
:• The probability density function, ''pdf'', as <math>f(x)</math>.
:• The cumulative distribution function, ''cdf'', as <math>F(x)</math>.

The ''pdf'' and ''cdf'' give a complete description of the probability distribution of a random variable; the figures in the next section illustrate a ''pdf'' and the ''pdf''-''cdf'' relationship.

====Definitions====
If <math>X</math> is a continuous random variable, then the probability density function, ''pdf'', of <math>X</math> is a function, <math>f(x)</math>, such that for any two numbers <math>a</math> and <math>b</math> with <math>a\le b</math>:

::<math>P(a\le X\le b)=\int_{a}^{b}f(x)dx</math>

That is, the probability that <math>X</math> takes on a value in the interval <math>[a,b]</math> is the area under the density function from <math>a</math> to <math>b</math>, as shown below.  The ''pdf'' represents the relative frequency of failure times as a function of time.

[[Image:3.3.gif|thumb|center|300px|Example of a ''pdf''.]]

The ''cumulative distribution function'', ''cdf'', is a function, <math>F(x)</math>, of a random variable <math>X</math>, and is defined for a number <math>x</math> by:

::<math>F(x)=P(X\le x)=\int_{0}^{x}f(s)ds</math>

That is, for a number <math>x</math>, <math>F(x)</math> is the probability that the observed value of <math>X</math> will be at most <math>x</math>. The ''cdf'' represents the cumulative values of the ''pdf''. That is, the value of a point on the curve of the ''cdf'' represents the area under the curve to the left of that point on the ''pdf''. In reliability, the ''cdf'' is used to measure the probability that the item in question will fail before the associated time value, <math>t</math>, and is also called unreliability.

Note that depending on the density function, denoted by <math>f(x)</math>, the limits will vary based on the region over which the distribution is defined. For example, for the life distributions considered in this reference, with the exception of the normal distribution, this range would be <math>[0,+\infty )</math>.

[[Image:chp3pdf.png|thumb|center|300px|Graphical representation of the relationship between ''pdf'' and ''cdf''.]]



====Mathematical Relationship Between the ''pdf'' and ''cdf''====
The mathematical relationship between the ''pdf'' and ''cdf'' is given by:

::<math>F(x)=\int_{0}^{x}f(s)ds</math>

where <math>s</math> is a dummy integration variable.

Conversely:

::<math>f(x)=\frac{d(F(x))}{dx}</math>

The ''cdf'' is the area under the probability density function up to a value of <math>x</math>.  The total area under the ''pdf'' is always equal to 1, or mathematically:

::<math>\int_{-\infty}^{+\infty }f(x)dx=1</math>

[[Image:3.5.gif|thumb|center|300px|Total area under a ''pdf''.]]

The well-known normal (or Gaussian) distribution is an example of a probability density function. The ''pdf'' for this distribution is given by:

::<math>f(t)=\frac{1}{\sigma \sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\mu }{\sigma } \right)}^{2}}}}</math>

where <math>\mu </math> is the mean and <math>\sigma </math> is the standard deviation; that is, the normal is a two-parameter distribution, with parameters <math>\mu </math> and <math>\sigma </math>.

Another example is the lognormal distribution, whose ''pdf'' is given by:

::<math>f(t)=\frac{1}{t\cdot {{\sigma }^{\prime }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{{t}^{\prime }}-{{\mu }^{\prime }}}{{{\sigma }^{\prime }}} \right)}^{2}}}}</math>

where <math>{{t}^{\prime }}=\ln (t)</math>, <math>{\mu }'</math> is the mean of the natural logarithms of the times-to-failure and <math>{\sigma }'</math> is the standard deviation of the natural logarithms of the times-to-failure.  Again, this is a two-parameter distribution.
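
The ''pdf''-''cdf'' relationship is easy to verify numerically.  The sketch below integrates the exponential ''pdf'' (introduced later in this chapter) and compares the area against the closed-form ''cdf''; the rate value is arbitrary:

<syntaxhighlight lang="python">
import math
from scipy import integrate

lam = 0.01  # arbitrary exponential failure rate, for illustration
f = lambda t: lam * math.exp(-lam * t)  # exponential pdf

x = 150.0
F_numeric, _ = integrate.quad(f, 0, x)  # cdf as the area under the pdf
F_closed = 1 - math.exp(-lam * x)       # closed-form exponential cdf

print(F_numeric)  # ~0.7769
print(F_closed)   # 0.7769, in agreement
</syntaxhighlight>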
 
===The Reliability Function===
The reliability function can be derived using the previous definition of the cumulative distribution function.  From our definition of the ''cdf'', the probability of an event occurring by time <math>t</math> is given by:

::<math>F(t)=\int_{0}^{t}f(s)ds</math>

Or, one could equate this event to the probability of a unit failing by time <math>t</math>.

Since this function defines the probability of failure by a certain time, we could consider this the unreliability function.  Subtracting this probability from 1 gives us the reliability function, one of the most important functions in life data analysis.  The reliability function gives the probability of success of a unit undertaking a mission of a given time duration.  The figure below illustrates this.

To show this mathematically, we first define the unreliability function, <math>Q(t)</math>, which is the probability of failure, or the probability that our time-to-failure is in the region of 0 and <math>t</math>.  This is the same as the ''cdf'', so:

::<math>Q(t)=F(t)=\int_{0}^{t}f(s)ds</math>

[[Image:3.6.gif|thumb|center|400px|Reliability as area under the ''pdf''.]]

Reliability and unreliability are the only two events being considered and they are mutually exclusive; hence, the sum of these probabilities is equal to unity.  Then:

::<math>\begin{align}
  Q(t)+R(t)= & 1 \\
  R(t)= & 1-Q(t) \\
  R(t)= & 1-\int_{0}^{t}f(s)ds \\
  R(t)= & \int_{t}^{\infty }f(s)ds 
\end{align}</math>

Conversely:

::<math>f(t)=-\frac{d(R(t))}{dt}</math>
===The Conditional Reliability Function===
Conditional reliability is the probability of successfully completing another mission following the successful completion of a previous mission.  The time of the previous mission and the time for the mission to be undertaken must be taken into account for conditional reliability calculations.  The conditional reliability function is given by:

::<math>R(T,t)=\frac{R(T+t)}{R(T)}</math>
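
For the exponential reliability function derived later in this chapter, the conditional reliability reduces to <math>R(t)</math> itself, reflecting that distribution's lack of memory; a small numerical sketch with an arbitrary rate:

<syntaxhighlight lang="python">
import math

lam = 0.01  # arbitrary exponential failure rate
R = lambda t: math.exp(-lam * t)

T, t = 100.0, 50.0
print(R(T + t) / R(T))  # conditional reliability, ~0.6065
print(R(t))             # same value: the exponential is memoryless
</syntaxhighlight>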
 
===The Failure Rate Function===
 
The failure rate function enables the determination of the number of failures occurring per unit time.  Omitting the derivation, the failure rate is mathematically given as:

::<math>\lambda (t)=\frac{f(t)}{R(t)}</math>

This gives the instantaneous failure rate, also known as the hazard function.  It is useful in characterizing the failure behavior of a product, determining maintenance crew allocation, planning for spares provisioning, etc.  The failure rate is expressed in failures per unit time.
 
===Mean Life (MTTF)===
 
The mean life function, which provides a measure of the average time of operation to failure, is given by:

::<math>\overline{T}=m=\int_{0}^{\infty }t\cdot f(t)dt</math>

This is the expected or average time-to-failure, and is denoted as the MTTF (Mean Time To Failure).

The MTTF, even though an index of reliability performance, does not give any information on the failure distribution of the product in question when dealing with most lifetime distributions.  Because vastly different distributions can have identical means, it is unwise to use the MTTF as the sole measure of the reliability of a product.
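
As a sanity check, integrating <math>t\cdot f(t)</math> numerically for the exponential ''pdf'' recovers the <math>1/\lambda </math> result derived at the end of this chapter:

<syntaxhighlight lang="python">
import math
from scipy import integrate

lam = 0.01  # arbitrary exponential failure rate
f = lambda t: lam * math.exp(-lam * t)

mttf, _ = integrate.quad(lambda t: t * f(t), 0, math.inf)
print(mttf)     # ~100.0, by numerical integration
print(1 / lam)  # 100.0, the closed-form exponential MTTF
</syntaxhighlight>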
 
===Median Life===
Median life, <math>\breve{T}</math>, is the value of the random variable that has exactly one-half of the area under the ''pdf'' to its left and one-half to its right.  The median is obtained by solving the following equation for <math>\breve{T}</math>.  (For individual data, the median is the midpoint value.)

::<math>\int_{-\infty}^{{\breve{T}}}f(t)dt=0.5</math>

===Modal Life===
The modal life (or mode), <math>\tilde{T}</math>, is the value of <math>T</math> that satisfies:

::<math>\frac{d\left[ f(t) \right]}{dt}=0</math>

For a continuous distribution, the mode is that value of <math>t</math> that corresponds to the maximum probability density (the value at which the ''pdf'' has its maximum value, or the peak of the curve).
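
For the exponential distribution, the median equation has the closed-form solution <math>\breve{T}=\ln (2)/\lambda </math>, which makes it a convenient check for a numerical root-finding sketch:

<syntaxhighlight lang="python">
import math
from scipy import optimize

lam = 0.01  # arbitrary exponential failure rate
cdf = lambda t: 1 - math.exp(-lam * t)  # area under the pdf to the left of t

# The median is the time at which the cdf equals 0.5
median = optimize.brentq(lambda t: cdf(t) - 0.5, 0, 1e6)

print(median)             # ~69.3147
print(math.log(2) / lam)  # closed form: 69.3147...
</syntaxhighlight>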
 
===Distributions===
A statistical distribution is fully described by its ''pdf''.  In the previous sections, we used the definition of the ''pdf'' to show how all other functions most commonly used in reliability engineering and life data analysis can be derived.  The reliability function, failure rate function, mean time function and median life function can be determined directly from the ''pdf'' definition, or <math>f(t)</math>.  Different distributions exist, such as the normal (Gaussian), exponential, Weibull, etc., and each has a predefined form of <math>f(t)</math> that can be found in many references.  In fact, there are certain references that are devoted exclusively to different types of statistical distributions.  These distributions were formulated by statisticians, mathematicians and engineers to mathematically model or represent certain behavior.  For example, the Weibull distribution was formulated by Waloddi Weibull and thus it bears his name.  Some distributions tend to better represent life data and are most commonly called ''lifetime distributions''.

The exponential distribution is one of the simplest and most commonly used distributions.  The ''pdf'' of the exponential distribution is mathematically defined as:

::<math>f(t)=\lambda {{e}^{-\lambda t}}</math>

In this definition, note that <math>t</math> is our random variable representing time and the Greek letter <math>\lambda </math> (lambda) represents what is commonly referred to as the parameter of the distribution.  For any distribution, the parameter or parameters of the distribution are estimated from analysis of the data.  For example, in the case of the most well-known distribution, namely the normal (or Gaussian) distribution, the ''pdf'' is given by:

::<math>f(t)=\frac{1}{\sigma \sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\mu }{\sigma } \right)}^{2}}}}</math>

where the mean, <math>\mu </math>, and the standard deviation, <math>\sigma </math>, are its parameters.  Both of these parameters are estimated from the data (i.e., the mathematical mean and standard deviation of the sample data are used to represent the parameters for the entire population).  Once these parameters have been estimated, our function, <math>f(t)</math>, is fully defined and we can obtain any value for <math>f(t)</math> given a value of <math>t</math>.

Given the mathematical representation of a distribution, we can also derive all of the functions needed for life data analysis.  These functions will depend only on the value of <math>t</math> after the values of the distribution parameters have been estimated from data.

For example, we know that the exponential distribution ''pdf'' is given by:

::<math>f(t)=\lambda {{e}^{-\lambda t}}</math>

Thus, the reliability function can be derived by:

::<math>\begin{align}
R(t)= & 1-\int_{0}^{t}\lambda {{e}^{-\lambda T}}dT \\
= & 1-\left[ 1-{{e}^{-\lambda t}} \right] \\
= & {{e}^{-\lambda t}} 
\end{align}</math>

The failure rate function is given by:

::<math>\begin{align}
\lambda (t)= & \frac{f(t)}{R(t)} \\
= & \frac{\lambda {{e}^{-\lambda t}}}{{{e}^{-\lambda t}}} \\
= & \lambda 
\end{align}</math>

The Mean Time To Failure (MTTF) is given by:

::<math>\begin{align}
\overline{T}= & \underset{0}{\overset{\infty }{\mathop \int }}\,t\cdot f(t)dt \\
= & \underset{0}{\overset{\infty }{\mathop \int }}\,t\cdot \lambda \cdot {{e}^{-\lambda t}}dt \\
= & \frac{1}{\lambda } 
\end{align}</math>

Exactly the same methodology can be applied to any distribution, given its ''pdf'', with various degrees of difficulty depending on the complexity of <math>f(t)</math>.
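
The closed-form exponential results above translate directly into code; this sketch evaluates the ''pdf'', reliability function, failure rate and MTTF for an arbitrary <math>\lambda </math>:

<syntaxhighlight lang="python">
import math

lam = 0.01  # arbitrary exponential parameter (failures per unit time)

f = lambda t: lam * math.exp(-lam * t)  # pdf
R = lambda t: math.exp(-lam * t)        # reliability function

t = 100.0
print(R(t))         # ~0.3679
print(f(t) / R(t))  # failure rate: constant at lam = 0.01
print(1 / lam)      # MTTF = 100.0
</syntaxhighlight>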
 
===Commonly Used Distributions===
There are many different lifetime distributions that can be used.  [[Appendix_B:_References | ReliaSoft [25]]] presents a thorough overview of commonly used lifetime distributions.  [[Appendix_B:_References | Leemis [17]]] and others also present a good overview of many of these distributions.
 
=A Brief Introduction to Life-Stress Relationships= <!-- THIS SECTION HEADER IS LINKED FROM: Time-Dependent_System_Reliability_(Analytical). IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). -->

In certain cases when one or more of the characteristics of the distribution change based on an outside factor, one may be interested in formulating a model that includes both the life distribution and a model that describes how a characteristic of the distribution changes.  In reliability, the most common "outside factor" is the stress applied to the component.  In system analysis, stress comes into play when dealing with units in a load sharing configuration.  When components of a system operate in a load sharing configuration, each component supports a portion of the total load for that aspect of the system.  When one or more load sharing components fail, the operating components must take on an increased portion of the load in order to compensate for the failure(s).  Therefore, the reliability of each component is dependent upon the performance of the other components in the load sharing configuration.

Traditionally in a reliability block diagram, one assumes independence, and thus an item's failure characteristics can be fully described by its failure distribution.  However, if the configuration includes load sharing redundancy, then a single failure distribution is no longer sufficient to describe an item's failure characteristics.  Instead, the item will fail differently when operating under different loads, and the load applied to the component will vary depending on the performance of the other component(s) in the configuration.  Therefore, a more complex model is needed to fully describe the failure characteristics of such blocks.  This model must describe both the effect of the load (or stress) on the life of the product and the probability of failure of the item at the specified load.  The models, theory and methodology used in Quantitative Accelerated Life Testing (QALT) data analysis can be used to obtain the desired model for this situation.  The objective of QALT analysis is to relate the applied stress to life (or a life distribution).  Identically in the load sharing case, one again wants to relate the applied stress (or load) to life.  The following figure graphically illustrates the probability density function (''pdf'') for a standard item, where only a single distribution is required.

[[Image:chp3LSR1.png|center|300px|Single ''pdf''|link=]]

The next figure represents a load sharing item by using a 3-D surface that illustrates the ''pdf'', load and time.

[[Image:Pdf_lifestress1.png|center|300px|''pdf'' and life-stress relationship.|link=]]

The following figure shows the reliability curve for a load sharing item vs. the applied load.

[[Image:chp3reliabilityvsloadsurface.png|center|400px|Reliability and life-stress relationship.|link=]]

=Formulation=
To formulate the model, a life distribution is combined with a life-stress relationship.  The distribution choice is based on the product's failure characteristics, while the life-stress relationship is based on how the stress affects the life characteristics.  The following figure graphically shows these elements of the formulation.

[[Image:chp3formulation.png|center|550px|A life distribution and a life-stress relationship.|link=]]

The next figure shows the combination of both an underlying distribution and a life-stress model by plotting a ''pdf'' against both time and stress.

[[Image:chp3formulation2.png|center|400px|''pdf'' vs. time and stress.|link=]]

The assumed underlying life distribution can be any life distribution.  The most commonly used life distributions include the Weibull, the exponential and the lognormal.  The life-stress relationship describes how a specific life characteristic changes with the application of stress.  The life characteristic can be any life measure such as the mean, median, <math>R(x)\,\!</math>, <math>F(x)\,\!</math>, etc.  It is expressed as a function of stress.  Depending on the assumed underlying life distribution, different life characteristics are considered.  Typical life characteristics for some distributions are shown in the next table.

{| border="1" align="center" style="border-collapse:collapse;" cellpadding="2"
|-
! ''Distribution''
! ''Parameters''
! ''Life Characteristic''
|-
| Weibull
| <math>\beta\,\!</math>*, <math>\eta \,\!</math>
| Scale parameter, <math>\eta \,\!</math>
|-
| Exponential
| <math>\lambda \,\!</math>
| Mean Life (<math>1/{\lambda} \,\!</math>)
|-
| Lognormal
| <math>\bar{T} \,\!</math>, <math>\sigma \,\!</math>*
| Median, <math>\breve{T} \,\!</math>
|-
|colspan="3" style="text-align:center"|*Usually assumed constant
|}

For example, when considering the Weibull distribution, the scale parameter, <math>\eta \,\!</math>, is chosen to be the life characteristic that is stress-dependent, while <math>\beta \,\!</math> is assumed to remain constant across different stress levels.  A life-stress relationship is then assigned to <math>\eta \,\!</math>.  The three life-stress models supported by BlockSim are presented next.

====Inverse Power Law Relationship====
The Inverse Power Law (IPL) model is given by:
::<math>L(V)=\frac{1}{K{{V}^{n}}}</math>

Where:
:• <math>L</math> represents a quantifiable life measure, such as mean life, characteristic life, median life, <math>B(x)</math> life, etc.
:• <math>V</math> represents the stress level.
:• <math>K</math> is one of the model parameters to be determined, <math>(K>0)</math>.
:• <math>n</math> is another model parameter to be determined.
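
A life-stress relationship is just a deterministic function of stress.  A minimal sketch of the IPL, with hypothetical parameter values:

<syntaxhighlight lang="python">
def ipl_life(V, K=2e-9, n=1.5):
    """Inverse Power Law life measure at stress level V.

    K and n are hypothetical values, for illustration only.
    """
    return 1.0 / (K * V ** n)

print(ipl_life(100.0))  # life at stress level 100
print(ipl_life(200.0))  # life decreases as stress increases
</syntaxhighlight>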


====Arrhenius Relationship====
The Arrhenius model is given by:

::<math>L(V)=C{{e}^{\tfrac{B}{V}}}</math>

Where:
:• <math>L</math> represents a quantifiable life measure, such as mean life, characteristic life, median life, <math>B(x)</math> life, etc.
:• <math>V</math> represents the stress level (in absolute units if it is temperature).
:• <math>C</math> is one of the model parameters to be determined, <math>(C>0)</math>.
:• <math>B</math> is another model parameter to be determined.


====Eyring Relationship====
The Eyring model is given by:

::<math>L(V)=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}</math>

Where:
:• <math>L</math> represents a quantifiable life measure, such as mean life, characteristic life, median life, <math>B(x)</math> life, etc.
:• <math>V</math> represents the stress level.
:• <math>A</math> is one of the model parameters to be determined.
:• <math>B</math> is another model parameter to be determined.
 
===IPL-Weibull: Combining a Life Distribution and a Life-Stress Relationship===
We will illustrate the use of the life distributions and life-stress relationships by combining the Weibull distribution and the IPL model.  The IPL-Weibull model can be derived by setting <math>\eta =L(V)</math>, yielding the following IPL-Weibull ''pdf'':

::<math>f(t,V)=\beta K{{V}^{n}}{{\left( K{{V}^{n}}t \right)}^{\beta -1}}{{e}^{-{{\left( K{{V}^{n}}t \right)}^{\beta }}}}</math>

The IPL-Weibull model yields the IPL-exponential model for <math>\beta =1</math>.

====Mean or MTTF====
The mean, <math>\overline{T}</math> (also called MTTF), of the IPL-Weibull relationship is given by:

::<math>\overline{T}=\frac{1}{K{{V}^{n}}}\cdot \Gamma \left( \frac{1}{\beta }+1 \right)</math>

where <math>\Gamma \left( \tfrac{1}{\beta }+1 \right)</math> is the gamma function evaluated at the value of <math>\left( \tfrac{1}{\beta }+1 \right)</math>.

====IPL-Weibull Reliability Function====
The IPL-Weibull reliability function is given by:

::<math>R(T,V)={{e}^{-{{\left( K{{V}^{n}}T \right)}^{\beta }}}}</math>

====Conditional Reliability Function====
The IPL-Weibull conditional reliability function at a specified stress level is given by:

::<math>R(T,t,V)=\frac{R(T+t,V)}{R(T,V)}=\frac{{{e}^{-{{\left[ K{{V}^{n}}\left( T+t \right) \right]}^{\beta }}}}}{{{e}^{-{{\left( K{{V}^{n}}T \right)}^{\beta }}}}}</math>

Or:

::<math>R(T,t,V)={{e}^{-\left[ {{\left( K{{V}^{n}}\left( T+t \right) \right)}^{\beta }}-{{\left( K{{V}^{n}}T \right)}^{\beta }} \right]}}</math>

====Reliable Life====
For the IPL-Weibull relationship, the reliable life, <math>{{T}_{R}}</math>, of a unit for a specified reliability and starting the mission at age zero is given by:

::<math>{{T}_{R}}=\frac{1}{K{{V}^{n}}}{{\left\{ -\ln \left[ R\left( {{T}_{R}},V \right) \right] \right\}}^{\tfrac{1}{\beta }}}</math>

====IPL-Weibull Failure Rate Function====
The IPL-Weibull failure rate function, <math>\lambda (T,V)</math>, is given by:

::<math>\lambda \left( T,V \right)=\frac{f\left( T,V \right)}{R\left( T,V \right)}=\beta K{{V}^{n}}{{\left( K{{V}^{n}}T \right)}^{\beta -1}}</math>
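
The IPL-Weibull functions above are straightforward to evaluate in code; this sketch computes the reliability and mean life for hypothetical values of <math>K</math>, <math>n</math> and <math>\beta </math>:

<syntaxhighlight lang="python">
import math

# Hypothetical IPL-Weibull parameters (illustration only)
K, n, beta = 2e-9, 1.5, 1.8

def reliability(T, V):
    # R(T, V) = exp(-(K * V^n * T)^beta)
    return math.exp(-((K * V ** n * T) ** beta))

def mttf(V):
    # Mean life: Gamma(1/beta + 1) / (K * V^n)
    return math.gamma(1 / beta + 1) / (K * V ** n)

print(reliability(1e5, 100.0))  # mission of 100,000 time units at stress 100
print(mttf(100.0))              # mean life at stress level 100
</syntaxhighlight>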

Latest revision as of 18:56, 15 September 2023

New format available! This reference is now available in a new format that offers faster page load, improved display for calculations and images, more targeted search and the latest content available as a PDF. As of September 2023, this Reliawiki page will not continue to be updated. Please update all links and bookmarks to the latest reference at help.reliasoft.com/reference/system_analysis

Chapter 2: Statistical Background


BlockSimbox.png

Chapter 2  
Statistical Background  

Synthesis-icon.png

Available Software:
BlockSim

Examples icon.png

More Resources:
BlockSim examples

This chapter presents a brief review of statistical principles and terminology. The objective of this chapter is to introduce concepts from probability theory and statistics that will be used in later chapters. As such, this chapter is not intended to cover this subject completely, but rather to provide an overview of applicable concepts as a foundation that you can refer to when more complex concepts are introduced.

If you are familiar with basic probability theory and life data analysis, you may wish to skip this chapter. If you would like additional information, we encourage you to review other references on the subject.

A Brief Introduction to Probability Theory

Basic Definitions

Before considering the methodology for estimating system reliability, some basic concepts from probability theory should be reviewed.

The terms that follow are important in creating and analyzing reliability block diagrams.

  1. Experiment [math]\displaystyle{ (E)\,\! }[/math] : An experiment is any well-defined action that may result in a number of outcomes. For example, the rolling of dice can be considered an experiment.
  2. Outcome [math]\displaystyle{ (O)\,\! }[/math] : An outcome is defined as any possible result of an experiment.
  3. Sample space [math]\displaystyle{ (S)\,\! }[/math] : The sample space is defined as the set of all possible outcomes of an experiment.
  4. Event: An event is a collection of outcomes.
  5. Union of two events [math]\displaystyle{ A\,\! }[/math] and [math]\displaystyle{ B\,\! }[/math] [math]\displaystyle{ (A\cup B)\,\! }[/math] : The union of two events [math]\displaystyle{ A\,\! }[/math] and [math]\displaystyle{ B\,\! }[/math] is the set of outcomes that belong to [math]\displaystyle{ A\,\! }[/math] or [math]\displaystyle{ B\,\! }[/math] or both.
  6. Intersection of two events [math]\displaystyle{ A\,\! }[/math] and [math]\displaystyle{ B\,\! }[/math] [math]\displaystyle{ (A\cap B)\,\! }[/math] : The intersection of two events [math]\displaystyle{ A\,\! }[/math] and [math]\displaystyle{ B\,\! }[/math] is the set of outcomes that belong to both [math]\displaystyle{ A\,\! }[/math] and [math]\displaystyle{ B\,\! }[/math].
  7. Complement of event A ( [math]\displaystyle{ \overline{A}\,\! }[/math] ): A complement of an event [math]\displaystyle{ A\,\! }[/math] contains all outcomes of the sample space, [math]\displaystyle{ S\,\! }[/math], that do not belong to [math]\displaystyle{ A\,\! }[/math].
  8. Null event ( [math]\displaystyle{ \varnothing\,\! }[/math] ): A null event is an empty set that has no outcomes.
  9. Probability: Probability is a numerical measure of the likelihood of an event relative to a set of alternative events. For example, there is a 50% probability of observing heads relative to observing tails when flipping a coin (assuming a fair or unbiased coin).

Example

Consider an experiment that consists of the rolling of a six-sided die. The numbers on each side of the die are the possible outcomes. Accordingly, the sample space is [math]\displaystyle{ S=\{1,2,3,4,5,6\}\,\! }[/math].

Let [math]\displaystyle{ A\,\! }[/math] be the event of rolling a 3, 4 or 6, [math]\displaystyle{ A=\{3,4,6\}\,\! }[/math], and let [math]\displaystyle{ B\,\! }[/math] be the event of rolling a 2, 3 or 5, [math]\displaystyle{ B=\{2,3,5\}\,\! }[/math].

  1. The union of [math]\displaystyle{ A\,\! }[/math] and [math]\displaystyle{ B\,\! }[/math] is: [math]\displaystyle{ A\cup B=\{2,3,4,5,6\}\,\! }[/math].
  2. The intersection of [math]\displaystyle{ A\,\! }[/math] and [math]\displaystyle{ B\,\! }[/math] is: [math]\displaystyle{ A\cap B=\{3\}\,\! }[/math].
  3. The complement of [math]\displaystyle{ A\,\! }[/math] is: [math]\displaystyle{ \overline{A}=\{1,2,5\}\,\! }[/math].

Probability Properties, Theorems and Axioms

The probability of an event [math]\displaystyle{ A\,\! }[/math] is expressed as [math]\displaystyle{ P(A)\,\! }[/math] and has the following properties:

  1. [math]\displaystyle{ 0\le P(A)\le 1\,\! }[/math]
  2. [math]\displaystyle{ P(A)=1-P(\overline{A})\,\! }[/math]
  3. [math]\displaystyle{ P(\varnothing)=0\,\! }[/math]
  4. [math]\displaystyle{ P(S)=1\,\! }[/math]

In other words, when an event is certain to occur, it has a probability equal to 1; when it is impossible for the event to occur, it has a probability equal to 0.

It can also be shown that the probability of the union of two events [math]\displaystyle{ A\,\! }[/math] and [math]\displaystyle{ B\,\! }[/math] is:

[math]\displaystyle{ P(A\cup B)=P(A)+P(B)-P(A\cap B)\ \,\! }[/math]

Similarly, the probability of the union of three events, [math]\displaystyle{ A\,\! }[/math], [math]\displaystyle{ B\,\! }[/math] and [math]\displaystyle{ C\,\! }[/math] is given by:

[math]\displaystyle{ \begin{align} P(A\cup B\cup C)= & P(A)+P(B)+P(C) \\ & -P(A\cap B)-P(A\cap C) \\ & -P(B\cap C)+P(A\cap B\cap C) \end{align}\,\! }[/math]

Mutually Exclusive Events

Two events [math]\displaystyle{ A\,\! }[/math] and [math]\displaystyle{ B\,\! }[/math] are said to be mutually exclusive if it is impossible for them to occur simultaneously ( [math]\displaystyle{ A\cap B\,\! }[/math] = [math]\displaystyle{ \varnothing\,\! }[/math] ). In such cases, the expression for the union of these two events reduces to the following, since the probability of the intersection of these events is defined as zero.

[math]\displaystyle{ P(A\cup B)=P(A)+P(B)\,\! }[/math]

Conditional Probability

The conditional probability of two events [math]\displaystyle{ A\,\! }[/math] and [math]\displaystyle{ B\,\! }[/math] is defined as the probability of one of the events occurring, knowing that the other event has already occurred. The expression below denotes the probability of [math]\displaystyle{ A\,\! }[/math] occurring given that [math]\displaystyle{ B\,\! }[/math] has already occurred.

[math]\displaystyle{ P(A|B)=\frac{P(A\cap B)}{P(B)}\ \,\! }[/math]

Note that knowing that event [math]\displaystyle{ B\,\! }[/math] has occurred reduces the sample space.

Independent Events

If knowing [math]\displaystyle{ B\,\! }[/math] gives no information about [math]\displaystyle{ A\,\! }[/math], then the events are said to be independent and the conditional probability expression reduces to:

[math]\displaystyle{ P(A|B)=P(A)\ \,\! }[/math]

From the definition of conditional probability, [math]\displaystyle{ P(A|B)=\frac{P(A\cap B)}{P(B)}\ \,\! }[/math] can be written as:

[math]\displaystyle{ P(A\cap B)=P(A|B)P(B)\ \,\! }[/math]

Since events [math]\displaystyle{ A\,\! }[/math] and [math]\displaystyle{ B\,\! }[/math] are independent, the expression reduces to:

[math]\displaystyle{ P(A\cap B)=P(A)P(B)\ \,\! }[/math]

If a group of [math]\displaystyle{ n\,\! }[/math] events [math]\displaystyle{ {{A}_{i}}\,\! }[/math] are independent, then:

[math]\displaystyle{ P\left[ \underset{i=1}{\overset{n}{\mathop \bigcap }}\,{{A}_{i}} \right]=\underset{i=1}{\overset{n}{\mathop \prod }}\,P({{A}_{i}})\ \,\! }[/math]

As an illustration, consider the outcome of a six-sided die roll. The probability of rolling a 3 is one out of six or:

[math]\displaystyle{ \begin{align} P(O=3)=1/6=0.16667 \end{align}\,\! }[/math]

All subsequent rolls of the die are independent events, since knowing the outcome of the first die roll gives no information as to the outcome of subsequent die rolls (unless the die is loaded). Thus the probability of rolling a 3 on the second die roll is again:

[math]\displaystyle{ \begin{align} P(O=3)=1/6=0.16667 \end{align}\,\! }[/math]

However, if one were to ask the probability of rolling a double 3 with two dice, the result would be:

[math]\displaystyle{ \begin{align} 0.16667\cdot 0.16667= & 0.027778 \\ = & \frac{1}{36} \end{align}\,\! }[/math]

Example 1

Consider a system where two hinged members are holding a load in place, as shown next.

System for Example 1

The system fails if either member fails and the load is moved from its position.

  1. Let [math]\displaystyle{ A=\,\! }[/math] event of failure of Component 1 and let [math]\displaystyle{ \overline{A}\,\! }[/math] [math]\displaystyle{ =\,\! }[/math] the event of not failure of Component 1.
  2. Let [math]\displaystyle{ B=\,\! }[/math] event of failure of Component 2 and let [math]\displaystyle{ \overline{B}\,\! }[/math] [math]\displaystyle{ =\,\! }[/math] the event of not failure of Component 2.

Failure occurs if Component 1 or Component 2 or both fail. The system probability of failure (or unreliability) is:

[math]\displaystyle{ {{P}_{f}}=P(A\cup B)=P(A)+P(B)-P(A\cap B)\,\! }[/math]

Assuming independence (or that the failure of either component is not influenced by the success or failure of the other component), the system probability of failure becomes the sum of the probabilities of [math]\displaystyle{ A\,\! }[/math] and [math]\displaystyle{ B\,\! }[/math] occurring minus the product of the probabilities:

[math]\displaystyle{ {{P}_{f}}=P(A\cup B)=P(A)+P(B)-P(A)P(B)\,\! }[/math]

Another approach is to calculate the probability of the system not failing (i.e., the reliability of the system):

[math]\displaystyle{ \begin{align} P(no\text{ }failure)= & Reliability \\ = & P(\overline{A}\cap\overline{B})\\ = & P(\overline{A})P(\overline{B}) \end{align}\,\! }[/math]

Then the probability of system failure is simply 1 (or 100%) minus the reliability:

[math]\displaystyle{ \begin{align} {{P}_{f}}=1-Reliability \end{align}\,\! }[/math]
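
As a numerical illustration (with hypothetical failure probabilities chosen only for this example), both approaches give the same system probability of failure:

```python
# Hypothetical component failure probabilities, for illustration only.
p_A, p_B = 0.10, 0.20

# Union formula under independence: P(A or B) = P(A) + P(B) - P(A)P(B).
p_f_union = p_A + p_B - p_A * p_B

# Complement approach: Reliability = P(not A) * P(not B), then P_f = 1 - Reliability.
reliability = (1 - p_A) * (1 - p_B)
p_f_complement = 1 - reliability

print(p_f_union, p_f_complement)  # both give 0.28
```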

Example 2

Consider a system with a load being held in place by two rigid members, as shown next.

System for Example 2.
• Let [math]\displaystyle{ A=\,\! }[/math] event of failure of Component 1.
• Let [math]\displaystyle{ B=\,\! }[/math] event of failure of Component 2.
• The system fails if Component 1 fails and Component 2 fails. In other words, both components must fail for the system to fail.

The system probability of failure is defined as the intersection of events [math]\displaystyle{ A\,\! }[/math] and [math]\displaystyle{ B\,\! }[/math] :

[math]\displaystyle{ {{P}_{f}}=P(A\cap B)\,\! }[/math]

Case 1

Assuming independence (i.e., either one of the members is sufficiently strong to hold the load in place), the probability of system failure becomes the product of the probabilities of [math]\displaystyle{ A\,\! }[/math] and [math]\displaystyle{ B\,\! }[/math] failing:

[math]\displaystyle{ {{P}_{f}}=P(A\cap B)=P(A)P(B)\,\! }[/math]

The reliability of the system now becomes:

[math]\displaystyle{ Reliability=1-{{P}_{f}}=1-P(A)P(B)\ \,\! }[/math]
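
Continuing with the same hypothetical failure probabilities used in the earlier sketch, a short calculation shows how much the redundancy in Case 1 improves the system:

```python
# Hypothetical component failure probabilities, for illustration only.
p_A, p_B = 0.10, 0.20

# Both components must fail for the system to fail, so under independence:
p_f = p_A * p_B
reliability = 1 - p_f

print(p_f, reliability)  # 0.02 and 0.98 -- compare with 0.28 for the hinged system above
```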

Case 2

If independence is not assumed (e.g., when one component fails, the other becomes more likely to fail), then the simplification given in [math]\displaystyle{ Reliability=1-{{P}_{f}}=1-P(A)P(B)\ \,\! }[/math] is no longer applicable. In this case, [math]\displaystyle{ {{P}_{f}}=P(A\cap B)\,\! }[/math] must be used. We will examine this dependency in later sections under the subject of load sharing.

A Brief Introduction to Continuous Life Distributions

Random Variables

In general, most problems in reliability engineering deal with quantitative measures, such as the time-to-failure of a component, or qualitative measures, such as whether a component is defective or non-defective. We can then use a random variable [math]\displaystyle{ X\,\! }[/math] to denote these possible measures.

In the case of times-to-failure, our random variable [math]\displaystyle{ X\,\! }[/math] is the time-to-failure of the component and can take on an infinite number of possible values in a range from 0 to infinity (since we do not know the exact time a priori). Our component can be found failed at any time after time 0 (e.g., at 12 hours or at 100 hours and so forth), thus [math]\displaystyle{ X\,\! }[/math] can take on any value in this range. In this case, our random variable [math]\displaystyle{ X\,\! }[/math] is said to be a continuous random variable. In this reference, we will deal almost exclusively with continuous random variables.

In judging a component to be defective or non-defective, only two outcomes are possible. That is, [math]\displaystyle{ X\,\! }[/math] is a random variable that can take on one of only two values (let's say defective = 0 and non-defective = 1). In this case, the variable is said to be a discrete random variable.

The Probability Density Function and the Cumulative Distribution Function

The probability density function (pdf) and cumulative distribution function (cdf) are two of the most important statistical functions in reliability and are very closely related. When these functions are known, almost any other reliability measure of interest can be derived or obtained. We will now take a closer look at these functions and how they relate to other reliability measures, such as the reliability function and failure rate.

From probability and statistics, given a continuous random variable [math]\displaystyle{ X,\,\! }[/math] we denote:

  • The probability density function, pdf, as [math]\displaystyle{ f(x)\,\! }[/math].
  • The cumulative distribution function, cdf, as [math]\displaystyle{ F(x)\,\! }[/math].

The pdf and cdf give a complete description of the probability distribution of a random variable. The following figure illustrates a pdf.

Example of a pdf.

The next figures illustrate the relationship between the pdf and the cdf.

Graphical representation of the relationship between pdf and cdf.

If [math]\displaystyle{ X\,\! }[/math] is a continuous random variable, then the pdf of [math]\displaystyle{ X\,\! }[/math] is a function, [math]\displaystyle{ f(x)\,\! }[/math], such that for any two numbers, [math]\displaystyle{ a\,\! }[/math] and [math]\displaystyle{ b\,\! }[/math] with [math]\displaystyle{ a\le b\,\! }[/math] :

[math]\displaystyle{ P(a\le X\le b)=\int_{a}^{b}f(x)dx\ \,\! }[/math]

That is, the probability that [math]\displaystyle{ X\,\! }[/math] takes on a value in the interval [math]\displaystyle{ [a,b]\,\! }[/math] is the area under the density function from [math]\displaystyle{ a\,\! }[/math] to [math]\displaystyle{ b,\,\! }[/math] as shown above. The pdf represents the relative frequency of failure times as a function of time.

The cdf is a function, [math]\displaystyle{ F(x)\,\! }[/math], of a random variable [math]\displaystyle{ X\,\! }[/math], and is defined for a number [math]\displaystyle{ x\,\! }[/math] by:

[math]\displaystyle{ F(x)=P(X\le x)=\int_{0}^{x}f(s)ds\ \,\! }[/math]

That is, for a number [math]\displaystyle{ x\,\! }[/math], [math]\displaystyle{ F(x)\,\! }[/math] is the probability that the observed value of [math]\displaystyle{ X\,\! }[/math] will be at most [math]\displaystyle{ x\,\! }[/math]. The cdf represents the cumulative values of the pdf. That is, the value of a point on the curve of the cdf represents the area under the curve to the left of that point on the pdf. In reliability, the cdf is used to measure the probability that the item in question will fail before the associated time value, [math]\displaystyle{ t\,\! }[/math], and is also called unreliability.

Note that depending on the density function, denoted by [math]\displaystyle{ f(x)\,\! }[/math], the limits of integration will vary based on the region over which the distribution is defined. For example, for the life distributions considered in this reference, with the exception of the normal distribution, this range would be [math]\displaystyle{ [0,+\infty ).\,\! }[/math]

Mathematical Relationship: pdf and cdf

The mathematical relationship between the pdf and cdf is given by:

[math]\displaystyle{ F(x)=\int_{0}^{x}f(s)ds \,\! }[/math]

where [math]\displaystyle{ s\,\! }[/math] is a dummy integration variable.

Conversely:

[math]\displaystyle{ f(x)=\frac{d(F(x))}{dx}\,\! }[/math]

The cdf is the area under the probability density function up to a value of [math]\displaystyle{ x\,\! }[/math]. The total area under the pdf is always equal to 1, or mathematically:

[math]\displaystyle{ \int_{-\infty}^{+\infty }f(x)dx=1\,\! }[/math]

Total area under a pdf.
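
These relationships are easy to verify numerically. The sketch below, assuming a Weibull lifetime distribution with hypothetical parameters, checks that the total area under the pdf is 1 and that the cdf equals the integral of the pdf:

```python
import numpy as np
from scipy import integrate, stats

# Hypothetical Weibull lifetime distribution (beta = 1.5, eta = 1000 hours).
dist = stats.weibull_min(c=1.5, scale=1000)

# The total area under the pdf is 1.
area, _ = integrate.quad(dist.pdf, 0, np.inf)
print(area)  # ~1.0

# The cdf at x equals the area under the pdf from 0 to x.
x = 800.0
area_to_x, _ = integrate.quad(dist.pdf, 0, x)
print(area_to_x, dist.cdf(x))  # both ~0.511
```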

The well-known normal (or Gaussian) distribution is one example. Its pdf is given by:

[math]\displaystyle{ f(t)=\frac{1}{\sigma \sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\mu }{\sigma } \right)}^{2}}}}\,\! }[/math]

where [math]\displaystyle{ \mu \,\! }[/math] is the mean and [math]\displaystyle{ \sigma \,\! }[/math] is the standard deviation; these are the distribution's two parameters.

Another is the lognormal distribution, whose pdf is given by:

[math]\displaystyle{ f(t)=\frac{1}{t\cdot {{\sigma }^{\prime }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{{t}^{\prime }}-{{\mu }^{\prime }}}{{{\sigma }^{\prime }}} \right)}^{2}}}}\,\! }[/math]

where [math]\displaystyle{ {\mu }'\,\! }[/math] is the mean of the natural logarithms of the times-to-failure, [math]\displaystyle{ {\sigma }'\,\! }[/math] is the standard deviation of the natural logarithms of the times-to-failure and [math]\displaystyle{ {t}'=\ln (t)\,\! }[/math]. Again, this is a 2-parameter distribution.
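
As a sanity check on these formulas (using hypothetical parameter values), the sketch below evaluates each pdf by hand and compares the result against scipy's built-in distributions:

```python
import numpy as np
from scipy import stats

# Normal pdf with hypothetical mean and standard deviation.
mu, sigma = 100.0, 15.0
t = 110.0
f_norm = np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
print(f_norm, stats.norm(mu, sigma).pdf(t))  # the two values agree

# Lognormal pdf: mu' and sigma' are the mean and std of ln(t), and t' = ln(t).
mu_p, sigma_p = 5.0, 0.5
t = 150.0
f_logn = np.exp(-0.5 * ((np.log(t) - mu_p) / sigma_p) ** 2) / (t * sigma_p * np.sqrt(2 * np.pi))
print(f_logn, stats.lognorm(s=sigma_p, scale=np.exp(mu_p)).pdf(t))  # the two values agree
```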

Reliability Function

The reliability function can be derived using the previous definition of the cumulative distribution function, [math]\displaystyle{ F(x)=\int_{0}^{x}f(s)ds \,\! }[/math]. From our definition of the cdf, the probability of an event occurring by time [math]\displaystyle{ t\,\! }[/math] is given by:

[math]\displaystyle{ F(t)=\int_{0}^{t}f(s)ds\ \,\! }[/math]

Or, one could equate this event to the probability of a unit failing by time [math]\displaystyle{ t\,\! }[/math].

Since this function defines the probability of failure by a certain time, we could consider this the unreliability function. Subtracting this probability from 1 will give us the reliability function, one of the most important functions in life data analysis. The reliability function gives the probability of success of a unit undertaking a mission of a given time duration. The following figure illustrates this.

Reliability as area under pdf.

To show this mathematically, we first define the unreliability function, [math]\displaystyle{ Q(t)\,\! }[/math], which is the probability of failure, or the probability that our time-to-failure is in the region of 0 and [math]\displaystyle{ t\,\! }[/math]. This is the same as the cdf. So from [math]\displaystyle{ F(t)=\int_{0}^{t}f(s)ds\ \,\! }[/math]:

[math]\displaystyle{ Q(t)=F(t)=\int_{0}^{t}f(s)ds\,\! }[/math]

Reliability and unreliability are the only two events being considered and they are mutually exclusive; hence, the sum of these probabilities is equal to unity.

Then:

[math]\displaystyle{ \begin{align} Q(t)+R(t)= & 1 \\ R(t)= & 1-Q(t) \\ R(t)= & 1-\int_{0}^{t}f(s)ds \\ R(t)= & \int_{t}^{\infty }f(s)ds \end{align}\,\! }[/math]

Conversely:

[math]\displaystyle{ f(t)=-\frac{d(R(t))}{dt}\,\! }[/math]
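
For example, these identities can be verified numerically for the exponential distribution. The sketch below assumes a hypothetical constant failure rate and checks both R(t) = 1 - Q(t) and f(t) = -dR(t)/dt:

```python
from scipy import stats

lam = 0.001  # hypothetical failure rate, in failures per hour
dist = stats.expon(scale=1 / lam)
t = 500.0

# R(t) = 1 - Q(t); scipy's survival function sf() is exactly 1 - cdf.
print(1 - dist.cdf(t), dist.sf(t))  # both ~0.6065

# f(t) = -dR/dt, checked with a central finite-difference derivative.
h = 1e-3
dR_dt = (dist.sf(t + h) - dist.sf(t - h)) / (2 * h)
print(dist.pdf(t), -dR_dt)  # the two values agree
```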

Conditional Reliability Function

Conditional reliability is the probability of successfully completing another mission following the successful completion of a previous mission. The time of the previous mission and the time for the mission to be undertaken must be taken into account for conditional reliability calculations. The conditional reliability function is given by:

[math]\displaystyle{ R(t|T)=\frac{R(T+t)}{R(T)}\ \,\! }[/math]
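
A minimal sketch of this calculation, assuming a Weibull life distribution with hypothetical parameters, is shown below. For any distribution other than the exponential, the conditional reliability differs from the reliability of a fresh unit for the same mission duration.

```python
from scipy import stats

# Hypothetical Weibull life distribution (beta = 1.5, eta = 1000 hours).
dist = stats.weibull_min(c=1.5, scale=1000)

def conditional_reliability(t, T):
    """R(t|T) = R(T + t) / R(T): probability of completing a new mission of
    duration t, given survival of a previous mission of duration T."""
    return dist.sf(T + t) / dist.sf(T)

print(dist.sf(200.0))                         # reliability of a fresh 200-hour mission, ~0.91
print(conditional_reliability(200.0, 500.0))  # same mission after 500 survived hours, ~0.79
```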

Failure Rate Function

The failure rate function gives the rate at which failures occur per unit time among the units that have survived to time [math]\displaystyle{ t\,\! }[/math]. Omitting the derivation, the failure rate is mathematically given as:

[math]\displaystyle{ \lambda (t)=\frac{f(t)}{R(t)}\ \,\! }[/math]

This gives the instantaneous failure rate, also known as the hazard function. It is useful in characterizing the failure behavior of a component, determining maintenance crew allocation, planning for spares provisioning, etc. The failure rate is expressed in failures per unit time.
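
The sketch below computes the hazard function for a hypothetical Weibull distribution; with a shape parameter greater than 1, the failure rate increases with time (wear-out behavior):

```python
from scipy import stats

# Hypothetical Weibull life distribution (beta = 1.5, eta = 1000 hours).
dist = stats.weibull_min(c=1.5, scale=1000)

def hazard(t):
    """Instantaneous failure rate: lambda(t) = f(t) / R(t)."""
    return dist.pdf(t) / dist.sf(t)

# Because beta > 1, the failure rate grows as the population ages.
for t in (100.0, 500.0, 1000.0):
    print(t, hazard(t))
```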

Mean Life (MTTF)

The mean life function, which provides a measure of the average time of operation to failure, is given by:

[math]\displaystyle{ \overline{T}=m=\int_{0}^{\infty }t\cdot f(t)dt\,\! }[/math]

This is the expected or average time-to-failure and is denoted as the MTTF (Mean Time To Failure). Graphically, the mean corresponds to the centroid of the area under the pdf.

The MTTF, even though an index of reliability performance, does not give any information on the failure distribution of the component in question when dealing with most lifetime distributions. Because vastly different distributions can have identical means, it is unwise to use the MTTF as the sole measure of the reliability of a component.
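
Both points can be illustrated numerically. The sketch below (again with hypothetical parameters) computes the MTTF by integration, then shows that an exponential distribution with exactly the same mean predicts a very different reliability:

```python
import numpy as np
from scipy import integrate, stats

# Hypothetical Weibull life distribution (beta = 1.5, eta = 1000 hours).
weib = stats.weibull_min(c=1.5, scale=1000)

# MTTF = integral of t * f(t) dt over [0, infinity).
mttf, _ = integrate.quad(lambda t: t * weib.pdf(t), 0, np.inf)
print(mttf, weib.mean())  # both ~902.7 hours

# An exponential distribution with the same MTTF has a very different shape,
# so identical means do not imply identical reliability behavior.
expo = stats.expon(scale=mttf)
print(weib.sf(500.0), expo.sf(500.0))  # ~0.70 vs. ~0.57 at 500 hours
```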

Median Life

Median life, [math]\displaystyle{ \breve{T}\,\! }[/math], is the value of the random variable that has exactly one-half of the area under the pdf to its left and one-half to its right. The median is obtained by solving the following equation for [math]\displaystyle{ \breve{T}\,\! }[/math]. (For individual data, the median is the midpoint value.)

[math]\displaystyle{ \int_{-\infty}^{{\breve{T}}}f(t)dt=0.5\ \,\! }[/math]

Modal Life (or Mode)

The modal life (or mode), [math]\displaystyle{ \tilde{T}\,\! }[/math], is the value of [math]\displaystyle{ T\,\! }[/math] that satisfies:

[math]\displaystyle{ \frac{d\left[ f(t) \right]}{dt}=0\ \,\! }[/math]

For a continuous distribution, the mode is that value of [math]\displaystyle{ t\,\! }[/math] that corresponds to the maximum probability density (the value at which the pdf has its maximum value, or the peak of the curve).
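
Both quantities are straightforward to obtain numerically. The sketch below, assuming a hypothetical lognormal life distribution, finds the median from the inverse cdf and the mode by maximizing the pdf:

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical lognormal life distribution (mu' = 5.0, sigma' = 0.5).
dist = stats.lognorm(s=0.5, scale=np.exp(5.0))

# Median: half the area under the pdf lies to its left.
median = dist.ppf(0.5)

# Mode: the time that maximizes the pdf (found by minimizing its negative).
result = optimize.minimize_scalar(lambda t: -dist.pdf(t),
                                  bounds=(1.0, 1000.0), method="bounded")
mode = result.x

print(median, mode)  # ~148.4 and ~115.6 (the lognormal is right-skewed, so mode < median)
```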

Lifetime Distributions

A statistical distribution is fully described by its pdf. In the previous sections, we used the definition of the pdf to show how all other functions most commonly used in reliability engineering and life data analysis can be derived. The reliability function, failure rate function, mean time function, and median life function can be determined directly from the pdf definition, or [math]\displaystyle{ f(t)\,\! }[/math]. Different distributions exist, such as the normal (Gaussian), exponential, Weibull, etc., and each has a predefined form of [math]\displaystyle{ f(t)\,\! }[/math] that can be found in many references. In fact, there are certain references that are devoted exclusively to different types of statistical distributions. These distributions were formulated by statisticians, mathematicians and engineers to mathematically model or represent certain behavior. For example, the Weibull distribution was formulated by Waloddi Weibull and thus it bears his name. Some distributions tend to better represent life data and are most commonly called "lifetime distributions".

A more detailed introduction to this topic is presented in Life Distributions.

A Brief Introduction to Life-Stress Relationships

In certain cases when one or more of the characteristics of the distribution change based on an outside factor, one may be interested in formulating a model that includes both the life distribution and a model that describes how a characteristic of the distribution changes. In reliability, the most common "outside factor" is the stress applied to the component. In system analysis, stress comes into play when dealing with units in a load sharing configuration. When components of a system operate in a load sharing configuration, each component supports a portion of the total load for that aspect of the system. When one or more load sharing components fail, the operating components must take on an increased portion of the load in order to compensate for the failure(s). Therefore, the reliability of each component is dependent upon the performance of the other components in the load sharing configuration.

Traditionally in a reliability block diagram, one assumes independence and thus an item's failure characteristics can be fully described by its failure distribution. However, if the configuration includes load sharing redundancy, then a single failure distribution is no longer sufficient to describe an item's failure characteristics. Instead, the item will fail differently when operating under different loads and the load applied to the component will vary depending on the performance of the other component(s) in the configuration. Therefore, a more complex model is needed to fully describe the failure characteristics of such blocks. This model must describe both the effect of the load (or stress) on the life of the product and the probability of failure of the item at the specified load. The models, theory and methodology of Quantitative Accelerated Life Testing (QALT) data analysis can be applied to obtain the desired model for this situation. The objective of QALT analysis is to relate the applied stress to life (or a life distribution). In the same way, in the load sharing case one wants to relate the applied stress (or load) to life. The following figure graphically illustrates the probability density function (pdf) for a standard item, where only a single distribution is required.

Single pdf

The next figure represents a load sharing item by using a 3-D surface that illustrates the pdf, load and time.

pdf and life-stress relationship.

The following figure shows the reliability curve for a load sharing item vs. the applied load.

Reliability and life-stress relationship.

To formulate the model, a life distribution is combined with a life-stress relationship. The distribution choice is based on the product's failure characteristics while the life-stress relationship is based on how the stress affects the life characteristics. The following figure graphically shows these elements of the formulation.

A life distribution and a life-stress relationship.

The next figure shows the combination of both an underlying distribution and a life-stress model by plotting a pdf against both time and stress.

pdf vs. time and stress.

The assumed underlying life distribution can be any life distribution. The most commonly used life distributions include the Weibull, the exponential and the lognormal. The life-stress relationship describes how a specific life characteristic changes with the application of stress. The life characteristic can be any life measure such as the mean, median, [math]\displaystyle{ R(x)\,\! }[/math], [math]\displaystyle{ F(x)\,\! }[/math], etc. It is expressed as a function of stress. Depending on the assumed underlying life distribution, different life characteristics are considered. Typical life characteristics for some distributions are shown in the next table.

Distribution | Parameters | Life Characteristic
Weibull | [math]\displaystyle{ \beta\,\! }[/math]*, [math]\displaystyle{ \eta \,\! }[/math] | Scale parameter, [math]\displaystyle{ \eta \,\! }[/math]
Exponential | [math]\displaystyle{ \lambda \,\! }[/math] | Mean life, [math]\displaystyle{ 1/{\lambda} \,\! }[/math]
Lognormal | [math]\displaystyle{ \bar{T} \,\! }[/math], [math]\displaystyle{ \sigma \,\! }[/math]* | Median, [math]\displaystyle{ \breve{T} \,\! }[/math]
*Usually assumed constant

For example, when considering the Weibull distribution, the scale parameter, [math]\displaystyle{ \eta \,\! }[/math], is chosen to be the life characteristic that is stress-dependent while [math]\displaystyle{ \beta \,\! }[/math] is assumed to remain constant across different stress levels. A life-stress relationship is then assigned to [math]\displaystyle{ \eta .\,\! }[/math]
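
As a concrete illustration of such a formulation, the snippet below pairs a Weibull distribution with an inverse power law life-stress relationship, one commonly used model (the choice of relationship and all parameter values here are hypothetical):

```python
from scipy import stats

# Hypothetical inverse power law parameters: eta(V) = 1 / (K * V**n).
K, n = 2e-9, 1.5

def weibull_at_load(V, beta=1.5):
    """Weibull life distribution whose scale parameter eta depends on the
    load V through the inverse power law; beta is assumed to remain
    constant across stress levels."""
    eta = 1.0 / (K * V ** n)
    return stats.weibull_min(c=beta, scale=eta)

# Reliability at 1,000 hours drops as the applied load increases.
for load in (100.0, 200.0, 400.0):
    print(load, weibull_at_load(load).sf(1000.0))
```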

For a detailed discussion of this topic, see ReliaSoft's Accelerated Life Testing Data Analysis Reference.