<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.reliawiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Harry+Guo</id>
	<title>ReliaWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.reliawiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Harry+Guo"/>
	<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php/Special:Contributions/Harry_Guo"/>
	<updated>2026-04-20T21:45:36Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.44.0</generator>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=User:Harry_Guo&amp;diff=65102</id>
		<title>User:Harry Guo</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=User:Harry_Guo&amp;diff=65102"/>
		<updated>2017-07-20T15:19:26Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Dr. Huairui Guo is an expert on Reliability and Statistical Methods. He was the Director of Theoretical Development at ReliaSoft Corporation. He received his Ph.D. in Systems and Industrial Engineering from the University of Arizona. He has published over 70 technical articles and research papers in the area of quality and reliability engineering, including SPC, ANOVA, DOE, repairable and non-repairable system reliability modeling, accelerated life and degradation testing, and warranty prediction. He has served as a referee for ten international reliability engineering related journals and five international conferences, and has been invited to give presentations and seminars for NASA, ASQ, NREL and commercial companies. He has conducted consulting projects for over 20 companies from various industries, including renewable energy, oil and gas, automobile, medical devices and semi-conductors. As the leader of the theory team, he is deeply involved in the development of Weibull++, ALTA, DOE++, RGA, BlockSim, Lambda Predict and other products from ReliaSoft. Dr. Guo was the recipient of the Stan Ofsthun Award from the Society of Reliability Engineers (SRE) in 2008 and 2010. He also received the best paper award at the Institute of Industrial Engineers annual research conference in 2007. He is a Certified Reliability Professional (CRP) and an ASQ Certified Reliability Engineer (CRE) and Certified Quality Engineer (CQE).&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=1P-Weibull_MLE_Solution_for_Multiple_Right_Censored_Data&amp;diff=64877</id>
		<title>1P-Weibull MLE Solution for Multiple Right Censored Data</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=1P-Weibull_MLE_Solution_for_Multiple_Right_Censored_Data&amp;diff=64877"/>
		<updated>2017-01-31T17:51:00Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Reference Example}}&lt;br /&gt;
&lt;br /&gt;
This example validates the calculations for a 1-parameter Weibull MLE solution with right censored data in Weibull++ standard folios. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Reference_Example_Heading1}}&lt;br /&gt;
&lt;br /&gt;
The data set in Table C.5 on page 633 in the book &#039;&#039;Statistical Methods for Reliability Data&#039;&#039; by Dr. Meeker and Dr. Escobar, John Wiley &amp;amp; Sons, 1998 is used.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Reference_Example_Heading2}}&lt;br /&gt;
&lt;br /&gt;
{| {{table}}&lt;br /&gt;
!Number in State&lt;br /&gt;
!State F or S&lt;br /&gt;
!Time to Failure&lt;br /&gt;
|-&lt;br /&gt;
| 288||S||50&lt;br /&gt;
|-&lt;br /&gt;
| 148||S||150&lt;br /&gt;
|-&lt;br /&gt;
| 1||F||230&lt;br /&gt;
|-&lt;br /&gt;
| 124||S||250&lt;br /&gt;
|-&lt;br /&gt;
| 1||F||334&lt;br /&gt;
|-&lt;br /&gt;
| 111||S||350&lt;br /&gt;
|-&lt;br /&gt;
| 1||F||423&lt;br /&gt;
|-&lt;br /&gt;
| 106||S||450&lt;br /&gt;
|-&lt;br /&gt;
| 99||S||550&lt;br /&gt;
|-&lt;br /&gt;
| 110||S||650&lt;br /&gt;
|-&lt;br /&gt;
| 114||S||750&lt;br /&gt;
|-&lt;br /&gt;
| 119||S||850&lt;br /&gt;
|-&lt;br /&gt;
| 127||S||950&lt;br /&gt;
|-&lt;br /&gt;
| 1||F||990&lt;br /&gt;
|-&lt;br /&gt;
| 1||F||1009&lt;br /&gt;
|-&lt;br /&gt;
| 123||S||1050&lt;br /&gt;
|-&lt;br /&gt;
| 93||S||1150&lt;br /&gt;
|-&lt;br /&gt;
| 47||S||1250&lt;br /&gt;
|-&lt;br /&gt;
| 41||S||1350&lt;br /&gt;
|-&lt;br /&gt;
| 27||S||1450&lt;br /&gt;
|-&lt;br /&gt;
| 1||F||1510&lt;br /&gt;
|-&lt;br /&gt;
| 11||S||1550&lt;br /&gt;
|-&lt;br /&gt;
| 6||S||1650&lt;br /&gt;
|-&lt;br /&gt;
| 1||S||1850&lt;br /&gt;
|-&lt;br /&gt;
| 2||S||2050&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{{Reference_Example_Heading3}}&lt;br /&gt;
&lt;br /&gt;
The formulas for calculating the ML estimate of &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; and the standard error of &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; are given on page 193. &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\eta}=\left (\frac{\sum^{n}_{i=1}t^{\beta}_{i}}{r} \right)^{\frac{1}{\beta}}\,\!&amp;lt;/math&amp;gt; &amp;amp;nbsp; and &amp;amp;nbsp; &amp;amp;nbsp;&amp;lt;math&amp;gt;se_{\hat{\eta}}=\frac{\hat{\eta}}{\beta}\sqrt{\frac{1}{r}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; is given, &amp;lt;math&amp;gt;t_{i}\,\!&amp;lt;/math&amp;gt; is the time for the &#039;&#039;i&#039;&#039;th observation, and &#039;&#039;r&#039;&#039; is the number of failures. Applying this equation, we get the following results:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\eta}=\left (\frac{\sum^{n}_{i=1}t^{\beta}_{i}}{r} \right)^{\frac{1}{\beta}} = 12320.33\,\!&amp;lt;/math&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; and &amp;amp;nbsp; &amp;lt;math&amp;gt;se_{\hat{\eta}}=\frac{\hat{\eta}}{\beta}\sqrt{\frac{1}{r}} = 2514.88\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
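The calculation above can be reproduced directly from the data table. The following is a minimal sketch, not ReliaSoft code; the fixed shape parameter &#039;&#039;beta&#039;&#039; = 2 is an assumption that is consistent with the reported results (the source only states that beta is given):

```python
# Sketch: 1P-Weibull MLE for eta with multiple right censored data.
# ASSUMPTION: beta = 2 (not stated in this excerpt; consistent with the
# reported eta = 12320.33 and se = 2514.88). r counts only the "F" rows.
import math

# (number in state, state, time) rows from the data table
data = [
    (288, "S", 50), (148, "S", 150), (1, "F", 230), (124, "S", 250),
    (1, "F", 334), (111, "S", 350), (1, "F", 423), (106, "S", 450),
    (99, "S", 550), (110, "S", 650), (114, "S", 750), (119, "S", 850),
    (127, "S", 950), (1, "F", 990), (1, "F", 1009), (123, "S", 1050),
    (93, "S", 1150), (47, "S", 1250), (41, "S", 1350), (27, "S", 1450),
    (1, "F", 1510), (11, "S", 1550), (6, "S", 1650), (1, "S", 1850),
    (2, "S", 2050),
]

beta = 2.0                                    # assumed given shape parameter
r = sum(n for n, s, t in data if s == "F")    # number of failures
sum_t_beta = sum(n * t**beta for n, s, t in data)

eta_hat = (sum_t_beta / r) ** (1.0 / beta)    # ML estimate of eta
se_eta = eta_hat / beta * math.sqrt(1.0 / r)  # standard error of eta
```

Note that both failure and suspension times enter the sum in the numerator, while only the failure count appears in the denominator.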
&lt;br /&gt;
&lt;br /&gt;
{{Reference_Example_Heading4}}&lt;br /&gt;
&lt;br /&gt;
The variance of eta is 6.324612E+06. The standard deviation is 2514.88.&lt;br /&gt;
&lt;br /&gt;
[[Image:1PMLE_multiple_right_censored.png|center]]&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=User:Harry_Guo&amp;diff=64219</id>
		<title>User:Harry Guo</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=User:Harry_Guo&amp;diff=64219"/>
		<updated>2016-05-31T18:01:07Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Dr. Huairui (Harry) Guo is a Sr. Tech Specialist on Reliability and Statistical Method at FCA US LLC. He was the Director of Theoretical Development at ReliaSoft Corporation. He received his Ph.D. in Systems and Industrial Engineering from the University of Arizona. He has published over 70 technical articles and research papers in the area of quality and reliability engineering, including SPC, ANOVA, DOE, repairable and non-repairable system reliability modeling, accelerated life and degradation testing, and warranty prediction. He has served as a referee for ten international reliability engineering related journals and five international conferences, and has been invited to give presentations and seminars for NASA, ASQ, NREL and commercial companies. He has conducted consulting projects for over 20 companies from various industries, including renewable energy, oil and gas, automobile, medical devices and semi-conductors. As the leader of the theory team, he is deeply involved in the development of Weibull++, ALTA, DOE++, RGA, BlockSim, Lambda Predict and other products from ReliaSoft. Dr. Guo was the recipient of the Stan Ofsthun Award from the Society of Reliability Engineers (SRE) in 2008 and 2010. He also received the best paper award at the Institute of Industrial Engineers annual research conference in 2007. He is a Certified Reliability Professional (CRP) and an ASQ Certified Reliability Engineer (CRE) and Certified Quality Engineer (CQE).&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Crow-AMSAA_(NHPP)&amp;diff=60468</id>
		<title>Crow-AMSAA (NHPP)</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Crow-AMSAA_(NHPP)&amp;diff=60468"/>
		<updated>2015-08-21T14:40:49Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Parameter Estimation for Failure Times Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:InProgress}}&lt;br /&gt;
{{template:RGA BOOK|3.2|Crow-AMSAA}}&lt;br /&gt;
Dr. Larry H. Crow [[RGA_References|[17]]] noted that the [[Duane Model]] could be stochastically represented as a Weibull process, allowing for statistical procedures to be used in the application of this model in reliability growth. This statistical extension became what is known as the Crow-AMSAA (NHPP) model. This method was first developed at the U.S. Army Materiel Systems Analysis Activity (AMSAA). It is frequently used on systems when usage is measured on a continuous scale. It can also be applied for the analysis of one shot items when there is high reliability and a large number of trials.&lt;br /&gt;
&lt;br /&gt;
Test programs are generally conducted on a phase by phase basis. The Crow-AMSAA model is designed for tracking the reliability within a test phase and not across test phases. A development testing program may consist of several separate test phases. If corrective actions are introduced during a particular test phase, then this type of testing and the associated data are appropriate for analysis by the Crow-AMSAA model. The model analyzes the reliability growth progress within each test phase and can aid in determining the following:&lt;br /&gt;
&lt;br /&gt;
*Reliability of the configuration currently on test&lt;br /&gt;
*Reliability of the configuration on test at the end of the test phase&lt;br /&gt;
*Expected reliability if the test time for the phase is extended&lt;br /&gt;
*Growth rate&lt;br /&gt;
*Confidence intervals&lt;br /&gt;
*Applicable goodness-of-fit tests&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
The reliability growth pattern for the Crow-AMSAA model is exactly the same pattern as for the [[Duane Model|Duane postulate]], that is, the cumulative number of failures is linear when plotted on ln-ln scale. Unlike the Duane postulate, the Crow-AMSAA model is statistically based. Under the Duane postulate, the failure rate is linear on ln-ln scale. However, for the Crow-AMSAA model statistical structure, the failure intensity of the underlying non-homogeneous Poisson process (NHPP) is linear when plotted on ln-ln scale.&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;N(t)\,\!&amp;lt;/math&amp;gt; be the cumulative number of failures observed in cumulative test time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, and let &amp;lt;math&amp;gt;\rho (t)\,\!&amp;lt;/math&amp;gt; be the failure intensity for the Crow-AMSAA model. Under the NHPP model, &amp;lt;math&amp;gt;\rho (t)\Delta t\,\!&amp;lt;/math&amp;gt; is approximately the probability of a failure occurring over the interval &amp;lt;math&amp;gt;[t,t+\Delta t]\,\!&amp;lt;/math&amp;gt; for small &amp;lt;math&amp;gt;\Delta t\,\!&amp;lt;/math&amp;gt;. In addition, the expected number of failures experienced over the test interval &amp;lt;math&amp;gt;[0,T]\,\!&amp;lt;/math&amp;gt; under the Crow-AMSAA model is given by:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;E[N(T)]=\int_{0}^{T}\rho (t)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Crow-AMSAA model assumes that &amp;lt;math&amp;gt;\rho (T)\,\!&amp;lt;/math&amp;gt; may be approximated by the Weibull failure rate function: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\rho (T)=\frac{\beta }{{{\eta }^{\beta }}}{{T}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, if &amp;lt;math&amp;gt;\lambda =\tfrac{1}{{{\eta }^{\beta }}},\,\!&amp;lt;/math&amp;gt; the intensity function, &amp;lt;math&amp;gt;\rho (T),\,\!&amp;lt;/math&amp;gt; or the instantaneous failure intensity, &amp;lt;math&amp;gt;{{\lambda }_{i}}(T)\,\!&amp;lt;/math&amp;gt;, is defined as: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{\lambda }_{i}}(T)=\lambda \beta {{T}^{\beta -1}},\text{with }T&amp;gt;0,\text{ }\lambda &amp;gt;0\text{ and }\beta &amp;gt;0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the special case of exponential failure times, there is no growth and the failure intensity, &amp;lt;math&amp;gt;\rho (t)\,\!&amp;lt;/math&amp;gt;, is equal to &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. In this case, the expected number of failures is given by:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   E[N(T)]=  &amp;amp; \int_{0}^{T}\rho (t)dt \\ &lt;br /&gt;
  =  &amp;amp; \lambda T  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In order for the plot to be linear when plotted on ln-ln scale under the general reliability growth case, the following must hold true where the expected number of failures is equal to:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   E[N(T)]=  &amp;amp; \int_{0}^{T}\rho (t)dt \\ &lt;br /&gt;
  =  &amp;amp; \lambda {{T}^{\beta }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To put a statistical structure on the reliability growth process, consider again the special case of no growth. In this case the number of failures, &amp;lt;math&amp;gt;N(T),\,\!&amp;lt;/math&amp;gt; experienced during the testing over &amp;lt;math&amp;gt;[0,T]\,\!&amp;lt;/math&amp;gt; is random. The number of failures, &amp;lt;math&amp;gt;N(T),\,\!&amp;lt;/math&amp;gt; is said to follow the homogeneous (constant intensity) Poisson process with mean &amp;lt;math&amp;gt;\lambda T\,\!&amp;lt;/math&amp;gt;, so the probability of observing &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; failures is given by:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\underset{}{\overset{}{\mathop{\Pr }}}\,[N(T)=n]=\frac{{{(\lambda T)}^{n}}{{e}^{-\lambda T}}}{n!};\text{ }n=0,1,2,\ldots \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Crow-AMSAA model generalizes this no growth case to allow for reliability growth due to corrective actions. This generalization keeps the Poisson distribution for the number of failures but allows for the expected number of failures, &amp;lt;math&amp;gt;E[N(T)],\,\!&amp;lt;/math&amp;gt; to be linear when plotted on ln-ln scale. The Crow-AMSAA model lets &amp;lt;math&amp;gt;E[N(T)]=\lambda {{T}^{\beta }}\,\!&amp;lt;/math&amp;gt;. The probability that the number of failures, &amp;lt;math&amp;gt;N(T),\,\!&amp;lt;/math&amp;gt; will be equal to &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; under growth is then given by the Poisson distribution:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\underset{}{\overset{}{\mathop{\Pr }}}\,[N(T)=n]=\frac{{{(\lambda {{T}^{\beta }})}^{n}}{{e}^{-\lambda {{T}^{\beta }}}}}{n!};\text{ }n=0,1,2,\ldots \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the general growth situation, and the number of failures, &amp;lt;math&amp;gt;N(T)\,\!&amp;lt;/math&amp;gt;, follows a non-homogeneous Poisson process. The exponential, &amp;quot;no growth&amp;quot; homogeneous Poisson process is a special case of the non-homogeneous Crow-AMSAA model. This is reflected in the Crow-AMSAA model parameter where &amp;lt;math&amp;gt;\beta =1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
The cumulative failure rate, &amp;lt;math&amp;gt;{{\lambda }_{c}}\,\!&amp;lt;/math&amp;gt;, is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{\lambda }_{c}}=\lambda {{T}^{\beta -1}}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The cumulative &amp;lt;math&amp;gt;MTB{{F}_{c}}\,\!&amp;lt;/math&amp;gt; is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;MTB{{F}_{c}}=\frac{1}{\lambda }{{T}^{1-\beta }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As mentioned above, the local pattern for reliability growth within a test phase is the same as the growth pattern observed by [[Duane Model|Duane]]. The Duane &amp;lt;math&amp;gt;MTB{{F}_{c}}\,\!&amp;lt;/math&amp;gt; is equal to: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;MTB{{F}_{{{c}_{DUANE}}}}=b{{T}^{\alpha }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the Duane cumulative failure rate, &amp;lt;math&amp;gt;{{\lambda }_{c}}\,\!&amp;lt;/math&amp;gt;, is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{\lambda }_{{{c}_{DUANE}}}}=\frac{1}{b}{{T}^{-\alpha }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus a relationship between Crow-AMSAA parameters and Duane parameters can be developed, such that: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   {{b}_{DUANE}}=  &amp;amp; \frac{1}{{{\lambda }_{AMSAA}}} \\ &lt;br /&gt;
  {{\alpha }_{DUANE}}=  &amp;amp; 1-{{\beta }_{AMSAA}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that these relationships are not absolute. They change according to how the parameters (slopes, intercepts, etc.) are defined when the analysis of the data is performed. For the exponential case, &amp;lt;math&amp;gt;\beta =1\,\!&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;{{\lambda }_{i}}(T)=\lambda \,\!&amp;lt;/math&amp;gt;, a constant. For &amp;lt;math&amp;gt;\beta &amp;gt;1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\lambda }_{i}}(T)\,\!&amp;lt;/math&amp;gt; is increasing. This indicates a deterioration in system reliability. For &amp;lt;math&amp;gt;\beta &amp;lt;1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\lambda }_{i}}(T)\,\!&amp;lt;/math&amp;gt; is decreasing. This is indicative of reliability growth. Note that the model assumes a Poisson process with the Weibull intensity function, not the Weibull distribution. Therefore, statistical procedures for the Weibull distribution do not apply for this model. The parameter &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is called a scale parameter because it depends upon the unit of measurement chosen for &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, while &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the shape parameter that characterizes the shape of the graph of the intensity function.&lt;br /&gt;
&lt;br /&gt;
The total number of failures, &amp;lt;math&amp;gt;N(T)\,\!&amp;lt;/math&amp;gt;, is a random variable with Poisson distribution. Therefore, the probability that exactly &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; failures occur by time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;P[N(T)=n]=\frac{{{[\theta (T)]}^{n}}{{e}^{-\theta (T)}}}{n!}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The number of failures occurring in the interval from &amp;lt;math&amp;gt;{{T}_{1}}\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;{{T}_{2}}\,\!&amp;lt;/math&amp;gt; is a random variable having a Poisson distribution with mean: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\theta ({{T}_{2}})-\theta ({{T}_{1}})=\lambda (T_{2}^{\beta }-T_{1}^{\beta })\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The number of failures in any interval is statistically independent of the number of failures in any interval that does not overlap the first interval. At time &amp;lt;math&amp;gt;{{T}_{0}}\,\!&amp;lt;/math&amp;gt;, the failure intensity is &amp;lt;math&amp;gt;{{\lambda }_{i}}({{T}_{0}})=\lambda \beta T_{0}^{\beta -1}\,\!&amp;lt;/math&amp;gt;. If improvements are not made to the system after time &amp;lt;math&amp;gt;{{T}_{0}}\,\!&amp;lt;/math&amp;gt;, it is assumed that failures would continue to occur at the constant rate &amp;lt;math&amp;gt;{{\lambda }_{i}}({{T}_{0}})=\lambda \beta T_{0}^{\beta -1}\,\!&amp;lt;/math&amp;gt;. Future failures would then follow an exponential distribution with mean &amp;lt;math&amp;gt;m({{T}_{0}})=\tfrac{1}{\lambda \beta T_{0}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;. The instantaneous MTBF of the system at time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;m(T)=\frac{1}{\lambda \beta {{T}^{\beta -1}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;m(T)\,\!&amp;lt;/math&amp;gt; is also called the demonstrated (or achieved) MTBF.&lt;br /&gt;
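The intensity and MTBF quantities defined above are simple functions of the model parameters. A minimal sketch, using made-up parameter values (lam = 0.5, beta = 0.7 are illustrative, not estimates from any data set):

```python
# Sketch: Crow-AMSAA failure intensity, instantaneous (demonstrated) MTBF,
# and cumulative MTBF at a given cumulative test time T.
# ASSUMPTION: lam and beta are made-up illustrative values.
import math

lam, beta = 0.5, 0.7
T = 100.0

intensity = lam * beta * T**(beta - 1)   # lambda_i(T)
mtbf_inst = 1.0 / intensity              # m(T), demonstrated MTBF
mtbf_cum = (1.0 / lam) * T**(1 - beta)   # cumulative MTBF
```

Note the fixed ratio implied by the formulas: the instantaneous MTBF equals the cumulative MTBF divided by beta, so with beta below 1 the demonstrated MTBF always exceeds the cumulative MTBF.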
&lt;br /&gt;
===Note About Applicability===&lt;br /&gt;
The [[Duane Model|Duane]] and Crow-AMSAA models are the most frequently used reliability growth models. Their relationship comes from the fact that both make use of the underlying observed linear relationship between the logarithm of cumulative MTBF and cumulative test time. However, the Duane model does not provide a capability to test whether the change in MTBF observed over time is significantly different from what might be seen due to random error between phases. The Crow-AMSAA model allows for such assessments. Also, the Crow-AMSAA model allows for the development of hypothesis testing procedures to determine growth presence in the data (where &amp;lt;math&amp;gt;\beta &amp;lt;1\,\!&amp;lt;/math&amp;gt; indicates that there is growth in MTBF, &amp;lt;math&amp;gt;\beta =1\,\!&amp;lt;/math&amp;gt; indicates a constant MTBF and &amp;lt;math&amp;gt;\beta &amp;gt;1\,\!&amp;lt;/math&amp;gt; indicates a decreasing MTBF). Additionally, the Crow-AMSAA model views the process of reliability growth as probabilistic, while the Duane model views the process as deterministic.&lt;br /&gt;
&lt;br /&gt;
==Failure Times Data==&lt;br /&gt;
A description of Failure Times Data is presented in the [[RGA Data Types#Failure_Times_Data|RGA Data Types]] page.&lt;br /&gt;
===Parameter Estimation for Failure Times Data=== &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM OTHER LOCATIONS IN THIS DOCUMENT AND ALSO FROM Crow Extended - Continuous Evaluation. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
The parameters for the Crow-AMSAA (NHPP) model are estimated using maximum likelihood estimation (MLE). The probability density function (&#039;&#039;pdf&#039;&#039;) of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; event given that the &amp;lt;math&amp;gt;{{(i-1)}^{th}}\,\!&amp;lt;/math&amp;gt; event occurred at &amp;lt;math&amp;gt;{{T}_{i-1}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;f({{T}_{i}}|{{T}_{i-1}})=\frac{\beta }{\eta }{{\left( \frac{{{T}_{i}}}{\eta } \right)}^{\beta -1}}\cdot {{e}^{-\tfrac{1}{{{\eta }^{\beta }}}\left( T_{i}^{\beta }-T_{i-1}^{\beta } \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Letting &amp;lt;math&amp;gt;\lambda =\tfrac{1}{{{\eta }^{\beta }}}\,\!&amp;lt;/math&amp;gt;, the likelihood function is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;L={{\lambda }^{n}}{{\beta }^{n}}{{e}^{-\lambda {{T}^{*\beta }}}}\underset{i=1}{\overset{n}{\mathop \prod }}\,T_{i}^{\beta -1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{T}^{*}}\,\!&amp;lt;/math&amp;gt; is the termination time and is given by: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{T}^{*}}=\left\{ \begin{matrix}&lt;br /&gt;
   {{T}_{n}}\text{ if the test is failure terminated}  \\&lt;br /&gt;
   T&amp;gt;{{T}_{n}}\text{ if the test is time terminated}  \\&lt;br /&gt;
\end{matrix} \right\}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Taking the natural log on both sides: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\Lambda =n\ln \lambda +n\ln \beta -\lambda {{T}^{*\beta }}+(\beta -1)\underset{i=1}{\overset{n}{\mathop \sum }}\,\ln {{T}_{i}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And differentiating with respect to &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; yields: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\frac{\partial \Lambda }{\partial \lambda }=\frac{n}{\lambda }-{{T}^{*\beta }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Set equal to zero and solve for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; : &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\hat{\lambda }=\frac{n}{{{T}^{*\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now differentiate with respect to &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; : &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\frac{\partial \Lambda }{\partial \beta }=\frac{n}{\beta }-\lambda {{T}^{*\beta }}\ln {{T}^{*}}+\underset{i=1}{\overset{n}{\mathop \sum }}\,\ln {{T}_{i}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Set equal to zero and solve for &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; : &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\hat{\beta }=\frac{n}{n\ln {{T}^{*}}-\underset{i=1}{\overset{n}{\mathop{\sum }}}\,\ln {{T}_{i}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This equation is used for both failure terminated and time terminated test data.&lt;br /&gt;
&lt;br /&gt;
====Biasing and Unbiasing of Beta==== &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM: Crow Extended - Continuous Evaluation. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
The equation above returns the biased estimate, &amp;lt;math&amp;gt;\hat{\beta }\,\!&amp;lt;/math&amp;gt;. The unbiased estimate, &amp;lt;math&amp;gt;\bar{\beta }\,\!&amp;lt;/math&amp;gt;, can be calculated by using the following relationships. For time terminated data (the test ends after a specified test time):&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\bar{\beta }=\frac{N-1}{N}\hat{\beta }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For failure terminated data (the test ends after a specified number of failures):&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\bar{\beta }=\frac{N-2}{N}\hat{\beta }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By default &amp;lt;math&amp;gt;\hat{\beta }\,\!&amp;lt;/math&amp;gt; is returned. &amp;lt;math&amp;gt;\bar{\beta }\,\!&amp;lt;/math&amp;gt; can be returned by selecting the &#039;&#039;&#039;Calculate unbiased beta&#039;&#039;&#039; option on the Calculations tab of the Application Setup.&lt;br /&gt;
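The closed-form estimators derived above are easy to apply. The following is a minimal sketch for a small, hypothetical time terminated test; the failure times and termination time are made-up values, not from the source:

```python
# Sketch: Crow-AMSAA MLE for failure times data from a time terminated test.
# ASSUMPTION: times and t_star below are made-up illustrative values.
import math

times = [10.0, 50.0, 100.0, 200.0]   # cumulative failure times
t_star = 250.0                       # termination time T* (time terminated)
n = len(times)

# beta_hat = n / (n ln T* - sum ln T_i), then lambda_hat = n / T*^beta_hat
beta_hat = n / (n * math.log(t_star) - sum(math.log(t) for t in times))
lam_hat = n / t_star**beta_hat

# Unbiased estimate for a time terminated test: (N - 1)/N * beta_hat
beta_bar = (n - 1) / n * beta_hat
```

For a failure terminated test, the termination time would instead be the last failure time and the unbiasing factor would be (N - 2)/N.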
&lt;br /&gt;
===Cramér-von Mises Test===&lt;br /&gt;
The Cramér-von Mises (CVM) goodness-of-fit test validates the hypothesis that the data follows a non-homogeneous Poisson process with a failure intensity equal to &amp;lt;math&amp;gt;u(t)=\lambda \beta {{t}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;. This test can be applied when the failure data is complete over the continuous interval &amp;lt;math&amp;gt;[0,{{T}_{q}}]\,\!&amp;lt;/math&amp;gt; with no gaps in the data. The CVM test applies to all data types when the failure times are known, except for Fleet data.&lt;br /&gt;
&lt;br /&gt;
If the individual failure times are known, a Cramér-von Mises statistic is used to test the null hypothesis that a non-homogeneous Poisson process with the failure intensity function &amp;lt;math&amp;gt;\rho \left( t \right)=\lambda \,\beta \,{{t}^{\beta -1}}\left( \lambda &amp;gt;0,\beta &amp;gt;0,t&amp;gt;0 \right)\,\!&amp;lt;/math&amp;gt; properly describes the reliability growth of a system. The Cramér-von Mises goodness-of-fit statistic is then given by the following expression:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;C_{M}^{2}=\frac{1}{12M}+\underset{i=1}{\overset{M}{\mathop \sum }}\,{{\left[ {{\left( \frac{{{T}_{i}}}{T} \right)}^{{\hat{\beta }}}}-\frac{2i-1}{2M} \right]}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;M=\left\{ \begin{matrix}&lt;br /&gt;
   N\text{ if the test is time terminated}  \\&lt;br /&gt;
   N-1\text{ if the test is failure terminated}  \\&lt;br /&gt;
\end{matrix} \right\}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The failure times, &amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt;, must be ordered so that &amp;lt;math&amp;gt;{{T}_{1}}&amp;lt;{{T}_{2}}&amp;lt;\ldots &amp;lt;{{T}_{M}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
If the statistic &amp;lt;math&amp;gt;C_{M}^{2}\,\!&amp;lt;/math&amp;gt; is less than the critical value corresponding to &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; for a chosen significance level, then the null hypothesis that the Crow-AMSAA model adequately fits the data is not rejected.&lt;br /&gt;
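The CVM statistic and decision rule can be sketched as follows, reusing the same kind of hypothetical time terminated data as above (the failure times, termination time, and chosen significance level are made-up for illustration):

```python
# Sketch: Cramer-von Mises goodness-of-fit test for the Crow-AMSAA model.
# ASSUMPTION: times, T, and alpha = 0.10 are made-up illustrative values.
# For a time terminated test, M = N.
import math

times = [10.0, 50.0, 100.0, 200.0]   # ordered failure times
T = 250.0                            # termination time (time terminated)
n = len(times)

beta_hat = n / (n * math.log(T) - sum(math.log(t) for t in times))
M = n                                # time terminated, so M = N
c_m2 = 1.0 / (12 * M) + sum(
    ((t / T) ** beta_hat - (2 * i - 1) / (2 * M)) ** 2
    for i, t in enumerate(times, start=1)
)

critical = 0.155                     # table value for M = 4, alpha = 0.10
fits = critical - c_m2 > 0           # True means fail to reject the model
```

Here the statistic falls well below the critical value, so the Crow-AMSAA model is not rejected for this hypothetical data set.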
&lt;br /&gt;
====Critical Values====&lt;br /&gt;
The following table displays the critical values for the Cramér-von Mises goodness-of-fit test given the sample size, &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt;, and the significance level, &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;6&amp;quot; style=&amp;quot;text-align:center&amp;quot;|&#039;&#039;&#039;Critical values for Cramér-von Mises test&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| ||colspan=&amp;quot;5&amp;quot; style=&amp;quot;text-align:center;&amp;quot;|&amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt; 				&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt;|| 0.20||	0.15||	0.10||	0.05||	0.01&lt;br /&gt;
|-&lt;br /&gt;
|2||	0.138||	0.149||	0.162||	0.175||	0.186&lt;br /&gt;
|-&lt;br /&gt;
|3||	0.121||	0.135||	0.154||	0.184||0.23&lt;br /&gt;
|-&lt;br /&gt;
|4||	0.121||	0.134||	0.155||	0.191||0.28&lt;br /&gt;
|-&lt;br /&gt;
|5||	0.121||	0.137||	0.160||	0.199||0.30&lt;br /&gt;
|-&lt;br /&gt;
|6||	0.123||	0.139||	0.162||	0.204||0.31&lt;br /&gt;
|-&lt;br /&gt;
|7||	0.124||	0.140||	0.165||	0.208||0.32&lt;br /&gt;
|-&lt;br /&gt;
|8||	0.124||	0.141||	0.165||	0.210||0.32&lt;br /&gt;
|-&lt;br /&gt;
|9||	0.125||	0.142||	0.167||	0.212||0.32&lt;br /&gt;
|-&lt;br /&gt;
|10||	0.125||	0.142||	0.167||	0.212||0.32&lt;br /&gt;
|-&lt;br /&gt;
|11||	0.126||	0.143||	0.169||	0.214||0.32&lt;br /&gt;
|-&lt;br /&gt;
|12||	0.126||	0.144||	0.169||	0.214||0.32&lt;br /&gt;
|-&lt;br /&gt;
|13||	0.126||	0.144||	0.169||	0.214||0.33&lt;br /&gt;
|-&lt;br /&gt;
|14||	0.126||	0.144||	0.169||	0.214||0.33&lt;br /&gt;
|-&lt;br /&gt;
|15||	0.126||	0.144||	0.169||	0.215||0.33&lt;br /&gt;
|-&lt;br /&gt;
|16||	0.127||	0.145||	0.171||	0.216|| 0.33&lt;br /&gt;
|-&lt;br /&gt;
|17||	0.127||	0.145||	0.171||	0.217||	0.33&lt;br /&gt;
|-&lt;br /&gt;
|18||	0.127||	0.146||	0.171||	0.217||	0.33&lt;br /&gt;
|-&lt;br /&gt;
|19||	0.127||	0.146||	0.171||	0.217||	0.33&lt;br /&gt;
|-&lt;br /&gt;
|20||	0.128||	0.146||	0.172||	0.217||	0.33&lt;br /&gt;
|-&lt;br /&gt;
|30||	0.128||	0.146||	0.172||	0.218||	0.33&lt;br /&gt;
|-&lt;br /&gt;
|60||	0.128||	0.147||	0.173||	0.220||	0.33&lt;br /&gt;
|-&lt;br /&gt;
|100||	0.129||	0.147||	0.173||	0.220||	0.34&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The significance level represents the probability of rejecting the hypothesis even though it is true. So, there is a risk associated with applying the goodness-of-fit test (i.e., there is a chance that the CVM test will indicate that the model does not fit when in fact it does). As the significance level is increased, the CVM test becomes more stringent. Keep in mind that the CVM test passes when the test statistic is less than the critical value; therefore, the larger the critical value, the more room there is to work with (e.g., a CVM test with a significance level of 0.1 is stricter than a test with a significance level of 0.01).&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds===&lt;br /&gt;
The RGA software provides two methods to estimate the confidence bounds for the Crow-AMSAA (NHPP) model when applied to developmental testing data. The Fisher Matrix approach is based on the Fisher information matrix and is commonly employed in the reliability field. The Crow bounds were developed by Dr. Larry Crow. See the [[Crow-AMSAA Confidence Bounds]] chapter for details on how the confidence bounds are calculated. &lt;br /&gt;
&lt;br /&gt;
===Failure Times Data Examples===&lt;br /&gt;
====Example - Parameter Estimation====&lt;br /&gt;
&lt;br /&gt;
{{:Crow-AMSAA Parameter Estimation Example}}&lt;br /&gt;
&lt;br /&gt;
{{:Crow-AMSAA_Confidence_Bounds_Example}}&lt;br /&gt;
&lt;br /&gt;
==Multiple Systems==&lt;br /&gt;
When more than one system is placed on test during developmental testing, several data types are available, depending on the testing strategy and the format of the data. The data types that allow for the analysis of multiple systems using the Crow-AMSAA (NHPP) model are given below:&lt;br /&gt;
&lt;br /&gt;
*[[Crow-AMSAA_(NHPP)#Multiple Systems (Known Operating Times)|Multiple Systems (Known Operating Times)]]&lt;br /&gt;
*[[Crow-AMSAA_(NHPP)#Multiple Systems (Concurrent Operating Times)|Multiple Systems (Concurrent Operating Times)]]&lt;br /&gt;
*[[Crow-AMSAA_(NHPP)#Multiple Systems with Dates|Multiple Systems with Dates]]&lt;br /&gt;
&lt;br /&gt;
===Goodness-of-fit Tests===&lt;br /&gt;
For all multiple systems data types, the [[Crow-AMSAA (NHPP)#Cram.C3.A9r-von_Mises_Test|Cramér-von Mises (CVM) Test]] is available. For Multiple Systems (Concurrent Operating Times) and Multiple Systems with Dates, two additional tests are also available: [[Hypothesis Tests#Laplace_Trend_Test|Laplace Trend Test]] and [[Hypothesis Tests#Common_Beta_Hypothesis_Test|Common Beta Hypothesis]].&lt;br /&gt;
&lt;br /&gt;
===Multiple Systems (Known Operating Times)===&lt;br /&gt;
&lt;br /&gt;
A description of Multiple Systems (Known Operating Times) is presented on the [[RGA Data Types#Multiple_Systems_.28Known_Operating_Times.29|RGA Data Types]] page.&lt;br /&gt;
&lt;br /&gt;
Consider the data in the table below for two prototypes that were placed in a reliability growth test.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&#039;&#039;&#039;Developmental Test Data for Two Identical Systems&#039;&#039;&#039;	&amp;lt;/center&amp;gt;	&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
!Failure Number&lt;br /&gt;
!Failed Unit&lt;br /&gt;
!Test Time Unit 1 (hr)&lt;br /&gt;
!Test Time Unit 2 (hr)&lt;br /&gt;
!Total Test Time (hr)&lt;br /&gt;
!&amp;lt;math&amp;gt;\ln{(T)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|1||	1||	1.0||	1.7||	2.7||	0.99325&lt;br /&gt;
|-&lt;br /&gt;
|2||	1||	7.3||	3.0||	10.3||	2.33214&lt;br /&gt;
|-&lt;br /&gt;
|3||	2||	8.7||	3.8||	12.5||	2.52573&lt;br /&gt;
|-&lt;br /&gt;
|4||	2||	23.3||	7.3||	30.6||	3.42100&lt;br /&gt;
|-&lt;br /&gt;
|5||	2||	46.4||	10.6||	57.0||	4.04305&lt;br /&gt;
|-&lt;br /&gt;
|6||	1||	50.1||	11.2||	61.3||	4.11578&lt;br /&gt;
|-&lt;br /&gt;
|7||	1||	57.8||	22.2||	80.0||	4.38203&lt;br /&gt;
|-&lt;br /&gt;
|8||	2||	82.1||	27.4||	109.5||	4.69592&lt;br /&gt;
|-&lt;br /&gt;
|9||	2||	86.6||	38.4||	125.0||4.82831&lt;br /&gt;
|-&lt;br /&gt;
|10||	1||	87.0||	41.6||	128.6||	4.85671&lt;br /&gt;
|-&lt;br /&gt;
|11||	2||	98.7||	45.1||	143.8||	4.96842&lt;br /&gt;
|-&lt;br /&gt;
|12||	1||	102.2||	65.7||	167.9||	5.12337&lt;br /&gt;
|-&lt;br /&gt;
|13||	1||	139.2	||90.0||229.2||	5.43459&lt;br /&gt;
|-&lt;br /&gt;
|14||	1||	166.6||	130.1||	296.7||	5.69272&lt;br /&gt;
|-&lt;br /&gt;
|15||	2||	180.8||	139.8	||320.6||5.77019&lt;br /&gt;
|-&lt;br /&gt;
|16||	1||	181.3||	146.9||	328.2||	5.79362&lt;br /&gt;
|-&lt;br /&gt;
|17||	2||	207.9||	158.3	||366.2||5.90318&lt;br /&gt;
|-&lt;br /&gt;
|18||	2||	209.8||	186.9||	396.7||	5.98318&lt;br /&gt;
|-&lt;br /&gt;
|19||	2||	226.9||	194.2||	421.1||	6.04287&lt;br /&gt;
|-&lt;br /&gt;
|20||	1||	232.2||	206.0||	438.2||	6.08268&lt;br /&gt;
|-&lt;br /&gt;
|21||	2||	267.5||	233.7||	501.2||	6.21701&lt;br /&gt;
|-&lt;br /&gt;
|22||	2||	330.1||	289.9||	620.0||	6.42972&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The Failed Unit column indicates which system failed and is provided for information only; it does not affect the calculations. To combine the data from both systems, the system ages are added together at the times when a failure occurred, as shown in the Total Test Time column above. Once the single timeline is generated, the calculations for the parameters Beta and Lambda are the same as the process presented for [[Crow-AMSAA (NHPP)#Parameter_Estimation_for_Failure_Times_Data|Failure Times Data]]. The results of this analysis would match the results of [[Crow-AMSAA (NHPP)#Failure_Times_-_Example_1|Failure Times - Example 1]].&lt;br /&gt;
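To illustrate, the combined timeline can be analyzed directly with the standard Crow-AMSAA failure times MLE, assuming the test is failure terminated at the last event (a minimal Python sketch, not the RGA implementation):

```python
from math import log

# Total test time (Unit 1 age + Unit 2 age) at each failure, from the table above
totals = [2.7, 10.3, 12.5, 30.6, 57.0, 61.3, 80.0, 109.5, 125.0, 128.6,
          143.8, 167.9, 229.2, 296.7, 320.6, 328.2, 366.2, 396.7, 421.1,
          438.2, 501.2, 620.0]

n = len(totals)
T = totals[-1]                              # failure terminated at the last event
beta = n / sum(log(T / t) for t in totals)  # Crow-AMSAA MLE; last term is ln(1) = 0
lam = n / T**beta
```

For this data the sketch gives roughly beta = 0.61 and lambda = 0.42.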
&lt;br /&gt;
===Multiple Systems (Concurrent Operating Times)===&lt;br /&gt;
A description of Multiple Systems (Concurrent Operating Times) is presented on the [[RGA Data Types#Multiple_Systems_.28Concurrent_Operating_Times.29|RGA Data Types]] page.&lt;br /&gt;
&lt;br /&gt;
====Parameter Estimation for Multiple Systems (Concurrent Operating Times)====&lt;br /&gt;
To estimate the parameters, the equivalent system must first be determined. The equivalent single system (ESS) is calculated by summing the usage across all systems when a failure occurs. Keep in mind that Multiple Systems (Concurrent Operating Times) assumes that the systems are running simultaneously and accumulate the same usage. If the systems have different end times, then the equivalent system must only account for the systems that are operating when a failure occurs. Systems with a start time greater than zero are shifted back to t = 0; this is equivalent to a start time of zero with a converted end time equal to the end time minus the start time. In addition, all failure times are adjusted by subtracting the start time from each value to ensure that all values fall between t = 0 and the adjusted end time. A start time greater than zero indicates that the events that occurred before the start time are unknown, which may be because events during this period were not tracked and/or recorded properly. &lt;br /&gt;
&lt;br /&gt;
As an example, consider two systems that have entered a reliability growth test. Both systems have a start time equal to zero and both begin the test with the same configuration. System 1 operated for 100 hours and System 2 operated for 125 hours. The failure times for each system are given below:&lt;br /&gt;
&lt;br /&gt;
*System 1: 25, 47, 80&lt;br /&gt;
*System 2: 15, 62, 89, 110&lt;br /&gt;
&lt;br /&gt;
To build the ESS, the total accumulated hours across both systems is taken into account when a failure occurs. Therefore, given the data for Systems 1 and 2, the ESS comprises the following events: 30, 50, 94, 124, 160, 178, 210.&lt;br /&gt;
&lt;br /&gt;
The ESS combines the data from both systems into a single timeline. The termination time for the ESS is (100 + 125) = 225 hours. The parameter estimates for &amp;lt;math&amp;gt;\hat{\beta }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat{\lambda}\,\!&amp;lt;/math&amp;gt; are then calculated using the ESS. This process is the same as the method for [[Crow-AMSAA (NHPP)#Parameter_Estimation_for_Failure_Times_Data|Failure Times data]].&lt;br /&gt;
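The ESS construction above can be sketched in a few lines of Python (a minimal illustration, not the RGA implementation; all start times are assumed to be zero, as in the example):

```python
def equivalent_single_system(failure_times, end_times):
    """Build the equivalent single system (ESS) timeline: at each failure,
    sum the accumulated age of every system, capped at that system's end time."""
    events = sorted(t for times in failure_times for t in times)
    ess = [sum(min(t, end) for end in end_times) for t in events]
    termination = sum(end_times)   # total accumulated usage at test end
    return ess, termination

# System 1: failures at 25, 47, 80 (ran 100 hr); System 2: 15, 62, 89, 110 (ran 125 hr)
ess, term = equivalent_single_system([[25, 47, 80], [15, 62, 89, 110]],
                                     [100, 125])
```

The capping at each system's end time is what handles the last event: when System 2 fails at 110 hours, System 1 has already stopped at 100 hours, so the ESS event is 100 + 110 = 210.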
&lt;br /&gt;
====Example - Concurrent Operating Times====&lt;br /&gt;
{{:Concurrent Operating Times - Crow-AMSAA (NHPP) Example}}&lt;br /&gt;
&lt;br /&gt;
===Multiple Systems with Dates===&lt;br /&gt;
An overview of the Multiple Systems with Dates data type is presented on the [[RGA Data Types#Multiple_Systems_with_Dates|RGA Data Types]] page. While Multiple Systems with Dates requires a date for each event, including the start and end times for each system, once the equivalent single system is determined, the parameter estimation is the same as it is for Multiple Systems (Concurrent Operating Times). See [[Crow-AMSAA_(NHPP)#Parameter_Estimation_for_Multiple_Systems_.28Concurrent_Operating_Times.29|Parameter Estimation for Multiple Systems (Concurrent Operating Times)]] for details.&lt;br /&gt;
&lt;br /&gt;
==Grouped Data== &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM: Operational Mission Profile Testing, Crow Extended, and Fleet Data Analysis. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
A description of Grouped Data is presented on the [[RGA Data Types#Grouped_Failure_Times|RGA Data Types]] page.&lt;br /&gt;
===Parameter Estimation for Grouped Data===&lt;br /&gt;
For analyzing grouped data, we follow the same logic described previously for the [[Duane Model|Duane]] model. If the &amp;lt;math&amp;gt;E[N(T)]\,\!&amp;lt;/math&amp;gt; equation from the [[Crow-AMSAA_(NHPP)#Background|Background]] section above is linearized: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\ln [E(N(T))]=\ln \lambda +\beta \ln T&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
According to Crow [[RGA_References|[9]]], the likelihood function for the grouped data case, (where &amp;lt;math&amp;gt;{{n}_{1}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{n}_{2}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{n}_{3}},\ldots ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{n}_{k}}\,\!&amp;lt;/math&amp;gt; failures are observed and &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; is the number of groups), is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\underset{i=1}{\overset{k}{\mathop \prod }}\,\underset{}{\overset{}{\mathop{\Pr }}}\,({{N}_{i}}={{n}_{i}})=\underset{i=1}{\overset{k}{\mathop \prod }}\,\frac{{{(\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta })}^{{{n}_{i}}}}\cdot {{e}^{-(\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta })}}}{{{n}_{i}}!}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the MLE of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; based on this relationship is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\hat{\lambda }=\frac{n}{T_{k}^{\hat{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;n \,\!&amp;lt;/math&amp;gt; is the total number of failures from all the groups.&lt;br /&gt;
&lt;br /&gt;
The estimate of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the value &amp;lt;math&amp;gt;\hat{\beta }\,\!&amp;lt;/math&amp;gt; that satisfies: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\underset{i=1}{\overset{k}{\mathop \sum }}\,{{n}_{i}}\left[ \frac{T_{i}^{\hat{\beta }}\ln {{T}_{i}}-T_{i-1}^{\hat{\beta }}\ln {{T}_{i-1}}}{T_{i}^{\hat{\beta }}-T_{i-1}^{\hat{\beta }}}-\ln {{T}_{k}} \right]=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
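The estimation described above can be sketched numerically: solve the beta equation by bisection, then compute lambda from the closed form (an illustrative Python sketch on hypothetical data; the root is assumed to lie in the chosen bracket):

```python
from math import log

def grouped_mle(counts, endpoints):
    """MLE for Crow-AMSAA grouped data: counts[i] failures observed in
    the interval (endpoints[i], endpoints[i+1]]; endpoints[0] may be 0."""
    Tk = endpoints[-1]

    def score(b):  # left-hand side of the beta equation above
        s = 0.0
        for n_i, t0, t1 in zip(counts, endpoints, endpoints[1:]):
            num = t1**b * log(t1) - (t0**b * log(t0) if t0 > 0 else 0.0)
            s += n_i * (num / (t1**b - t0**b) - log(Tk))
        return s

    lo, hi = 1e-6, 10.0            # assumed bracket for the root
    for _ in range(100):           # bisection
        mid = 0.5 * (lo + hi)
        if score(lo) * score(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    beta = 0.5 * (lo + hi)
    lam = sum(counts) / Tk**beta
    return beta, lam

# Hypothetical grouped data: counts proportional to interval lengths,
# which forces beta = 1 (constant failure intensity), a handy sanity check.
beta, lam = grouped_mle([10, 15, 25], [0, 100, 250, 500])
```

With counts exactly proportional to the interval lengths, the score equation is satisfied at beta = 1, and lambda = 50 / 500 = 0.1.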
See [[Crow-AMSAA Confidence Bounds#Grouped_Data|Crow-AMSAA Confidence Bounds]] for details on how confidence bounds for grouped data are calculated.&lt;br /&gt;
&lt;br /&gt;
===Chi-Squared Test===&lt;br /&gt;
A chi-squared goodness-of-fit test is used to test the null hypothesis that the Crow-AMSAA reliability model adequately represents a set of grouped data. This test is applied only when the data is grouped. The expected number of failures in the interval from &amp;lt;math&amp;gt;{{T}_{i-1}}\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is approximated by: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{\hat{\theta }}_{i}}=\hat{\lambda }\left( T_{i}^{{\hat{\beta }}}-T_{i-1}^{{\hat{\beta }}} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For each interval, &amp;lt;math&amp;gt;{{\hat{\theta }}_{i}}\,\!&amp;lt;/math&amp;gt; shall not be less than 5 and, if necessary, adjacent intervals may have to be combined so that the expected number of failures in any combined interval is at least 5. Let the number of intervals after this recombination be &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt;, and let the observed number of failures in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; new interval be &amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt;. Finally, let the expected number of failures in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; new interval be &amp;lt;math&amp;gt;{{\hat{\theta }}_{i}}\,\!&amp;lt;/math&amp;gt;. Then the following statistic is approximately distributed as a chi-squared random variable with degrees of freedom &amp;lt;math&amp;gt;d-2\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{\chi }^{2}}=\underset{i=1}{\overset{d}{\mathop \sum }}\,\frac{{{({{N}_{i}}-{{\hat{\theta }}_{i}})}^{2}}}{{{\hat{\theta }}_{i}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The null hypothesis (that the Crow-AMSAA model adequately fits the grouped data) is rejected if the &amp;lt;math&amp;gt;{{\chi }^{2}}\,\!&amp;lt;/math&amp;gt; statistic exceeds the critical value for the chosen significance level. Critical values for this statistic can be found in chi-squared distribution tables.&lt;br /&gt;
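A sketch of the pooling-and-statistic computation (illustrative only, on hypothetical data; one simple policy for a trailing interval with expected count below 5 is to fold it into the last pooled cell):

```python
def chi_squared_statistic(observed, endpoints, lam, beta):
    """Chi-squared GOF statistic for grouped Crow-AMSAA data, pooling
    adjacent intervals until each expected count is at least 5."""
    expected = [lam * (t1**beta - t0**beta)
                for t0, t1 in zip(endpoints, endpoints[1:])]
    obs_p, exp_p, o_acc, e_acc = [], [], 0.0, 0.0
    for o, e in zip(observed, expected):
        o_acc += o
        e_acc += e
        if e_acc >= 5.0:           # close a pooled cell once expected >= 5
            obs_p.append(o_acc)
            exp_p.append(e_acc)
            o_acc = e_acc = 0.0
    if e_acc > 0.0:                # leftover tail: fold into the last cell
        obs_p[-1] += o_acc
        exp_p[-1] += e_acc
    chi2 = sum((o - e) ** 2 / e for o, e in zip(obs_p, exp_p))
    dof = len(exp_p) - 2           # degrees of freedom d - 2
    return chi2, dof

# First interval has expected count 1, so it is pooled with the second.
chi2, dof = chi_squared_statistic([2, 8, 15, 25], [0, 10, 100, 250, 500],
                                  lam=0.1, beta=1.0)
```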
&lt;br /&gt;
===Grouped Data Examples===&lt;br /&gt;
====Example - Simple Grouped====&lt;br /&gt;
{{:Crow-AMSAA_Model_-_Grouped_Data_Example}}&lt;br /&gt;
&lt;br /&gt;
====Example - Helicopter System====&lt;br /&gt;
{{:Crow-AMSAA_Model_-_Helicopter_System_Example}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Examples Box|RGA Examples|&amp;lt;p&amp;gt;More grouped data examples are available! See also:&amp;lt;/p&amp;gt; &lt;br /&gt;
{{Examples Link External|http://www.reliasoft.com/rga/examples/rgex1/index.htm|Simple MTBF Determination}}&amp;lt;nowiki/&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- ==Goodness-of-Fit Tests== This section is no longer necessary--&amp;gt;&lt;br /&gt;
&amp;lt;!-- {{:Goodness-of-Fit Tests}} This section is no longer necessary--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Missing Data==&lt;br /&gt;
{{:Gap Analysis}}&lt;br /&gt;
&lt;br /&gt;
==Discrete Data==&lt;br /&gt;
&lt;br /&gt;
The Crow-AMSAA model can be adapted for the analysis of &#039;&#039;success/failure&#039;&#039; data (also called &#039;&#039;discrete&#039;&#039; or &#039;&#039;attribute&#039;&#039; data). The following discrete data types are available: &lt;br /&gt;
&lt;br /&gt;
*Sequential &lt;br /&gt;
*Grouped per Configuration &lt;br /&gt;
*Mixed&lt;br /&gt;
&lt;br /&gt;
Sequential data and Grouped per Configuration data are very similar, as the parameter estimation methodology is the same for both data types. Mixed data is a combination of Sequential and Grouped per Configuration data and is presented in [[Crow-AMSAA (NHPP)#Mixed_Data|Mixed Data]]. &lt;br /&gt;
&lt;br /&gt;
===Grouped per Configuration===&lt;br /&gt;
Suppose system development is represented by &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; configurations. This corresponds to &amp;lt;math&amp;gt;i-1\,\!&amp;lt;/math&amp;gt; configuration changes, unless fixes are applied at the end of the test phase, in which case there would be &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; configuration changes. Let &amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; be the number of trials during configuration &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; and let &amp;lt;math&amp;gt;{{M}_{i}}\,\!&amp;lt;/math&amp;gt; be the number of failures during configuration &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;. Then the cumulative number of trials through configuration &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, namely &amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt;, is the sum of the &amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, or: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{T}_{i}}=\underset{q=1}{\overset{i}{\mathop \sum }}\,{{N}_{q}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the cumulative number of failures through configuration &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, namely &amp;lt;math&amp;gt;{{K}_{i}}\,\!&amp;lt;/math&amp;gt;, is the sum of the &amp;lt;math&amp;gt;{{M}_{i}}\,\!&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, or: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{K}_{i}}=\underset{q=1}{\overset{i}{\mathop \sum }}\,{{M}_{q}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The expected value of &amp;lt;math&amp;gt;{{K}_{i}}\,\!&amp;lt;/math&amp;gt; can be expressed as &amp;lt;math&amp;gt;E[{{K}_{i}}]\,\!&amp;lt;/math&amp;gt; and defined as the expected number of failures by the end of configuration &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;. Applying the learning curve property to &amp;lt;math&amp;gt;E[{{K}_{i}}]\,\!&amp;lt;/math&amp;gt; implies: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;E\left[ {{K}_{i}} \right]=\lambda T_{i}^{\beta }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Denote &amp;lt;math&amp;gt;{{f}_{1}}\,\!&amp;lt;/math&amp;gt; as the probability of failure for configuration 1 and use it to develop a generalized equation for &amp;lt;math&amp;gt;{{f}_{i}}\,\!&amp;lt;/math&amp;gt; in terms of the &amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt;. From the equation above, the expected number of failures by the end of configuration 1 is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;E\left[ {{K}_{1}} \right]=\lambda T_{1}^{\beta }={{f}_{1}}{{N}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\therefore {{f}_{1}}=\frac{\lambda T_{1}^{\beta }}{{{N}_{1}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Applying the &amp;lt;math&amp;gt;E\left[ {{K}_{i}}\right]\,\!&amp;lt;/math&amp;gt; equation again and noting that the expected number of failures by the end of configuration 2 is the sum of the expected number of failures in configuration 1 and the expected number of failures in configuration 2: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   E\left[ {{K}_{2}} \right]  = &amp;amp; \lambda T_{2}^{\beta } \\ &lt;br /&gt;
  = &amp;amp; {{f}_{1}}{{N}_{1}}+{{f}_{2}}{{N}_{2}} \\ &lt;br /&gt;
  = &amp;amp; \lambda T_{1}^{\beta }+{{f}_{2}}{{N}_{2}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\therefore {{f}_{2}}=\frac{\lambda T_{2}^{\beta }-\lambda T_{1}^{\beta }}{{{N}_{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By this method of inductive reasoning, a generalized equation for the failure probability on a configuration basis, &amp;lt;math&amp;gt;{{f}_{i}}\,\!&amp;lt;/math&amp;gt;, is obtained, such that: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{f}_{i}}=\frac{\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta }}{{{N}_{i}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this equation, &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; represents the configuration number. Thus, an equation for the reliability (probability of success) for the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; configuration is obtained: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{i}}=1-{{f}_{i}}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
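The per-configuration failure probabilities can be computed directly from the formula above (a minimal sketch with hypothetical lambda, beta and trial counts; a useful check is that the products f_i N_i telescope, so their sum equals lambda T_k^beta, the expected total number of failures):

```python
def config_failure_probs(lam, beta, trials):
    """f_i = lam * (T_i^beta - T_{i-1}^beta) / N_i, with T_i the
    cumulative number of trials through configuration i."""
    f, T_prev, T = [], 0.0, 0.0
    for N in trials:
        T += N                                  # cumulative trials T_i
        f.append(lam * (T**beta - T_prev**beta) / N)
        T_prev = T
    return f

lam, beta, trials = 0.2, 0.8, [10, 8, 12]       # hypothetical values
f = config_failure_probs(lam, beta, trials)
R = [1.0 - fi for fi in f]                      # per-configuration reliability
expected_total = sum(fi * N for fi, N in zip(f, trials))
```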
===Sequential Data===&lt;br /&gt;
From the [[Crow-AMSAA (NHPP)#Grouped_per_Configuration|Grouped per Configuration]] section, the following equation is given: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{f}_{i}}=\frac{\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta }}{{{N}_{i}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the special case where &amp;lt;math&amp;gt;{{N}_{i}}=1\,\!&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, the equation above becomes a smooth curve, &amp;lt;math&amp;gt;{{g}_{i}}\,\!&amp;lt;/math&amp;gt;, that represents the probability of failure for trial-by-trial data, or: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{g}_{i}}=\lambda \cdot {{i}^{\beta }}-\lambda \cdot {{\left( i-1 \right)}^{\beta }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;math&amp;gt;{{N}_{i}}=1\,\!&amp;lt;/math&amp;gt;, this is the same as Sequential Data where systems are tested on a trial-by-trial basis. The equation for the reliability for the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; trial is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{i}}=1-{{g}_{i}}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Parameter Estimation for Discrete Data===&amp;lt;!-- THIS SECTION HEADER IS LINKED FROM ANOTHER SECTION IN THIS PAGE. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
This section describes procedures for estimating the parameters of the Crow-AMSAA model for success/failure data, which includes Sequential and Grouped per Configuration data. An example illustrating these concepts is presented below. The estimation procedures provide maximum likelihood estimates (MLEs) for the model&#039;s two parameters, &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;. The MLEs for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; allow for point estimates of the probability of failure, given by: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{\hat{f}}_{i}}=\frac{\hat{\lambda }T_{i}^{{\hat{\beta }}}-\hat{\lambda }T_{i-1}^{{\hat{\beta }}}}{{{N}_{i}}}=\frac{\hat{\lambda }\left( T_{i}^{{\hat{\beta }}}-T_{i-1}^{{\hat{\beta }}} \right)}{{{N}_{i}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the probability of success (reliability) for each configuration &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; is equal to: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{\hat{R}}_{i}}=1-{{\hat{f}}_{i}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The likelihood function is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\underset{i=1}{\overset{k}{\mathop \prod }}\,\left( \begin{matrix}&lt;br /&gt;
   {{N}_{i}}  \\&lt;br /&gt;
   {{M}_{i}}  \\&lt;br /&gt;
\end{matrix} \right){{\left( \frac{\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta }}{{{N}_{i}}} \right)}^{{{M}_{i}}}}{{\left( \frac{{{N}_{i}}-\lambda T_{i}^{\beta }+\lambda T_{i-1}^{\beta }}{{{N}_{i}}} \right)}^{{{N}_{i}}-{{M}_{i}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Taking the natural log on both sides yields: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \Lambda = &amp;amp; \underset{i=1}{\overset{K}{\mathop \sum }}\,\left[ \ln \left( \begin{matrix}&lt;br /&gt;
   {{N}_{i}}  \\&lt;br /&gt;
   {{M}_{i}}  \\&lt;br /&gt;
\end{matrix} \right)+{{M}_{i}}\left[ \ln (\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta })-\ln {{N}_{i}} \right] \right] \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; +\underset{i=1}{\overset{K}{\mathop \sum }}\,\left[ ({{N}_{i}}-{{M}_{i}})\left[ \ln ({{N}_{i}}-\lambda T_{i}^{\beta }+\lambda T_{i-1}^{\beta })-\ln {{N}_{i}} \right] \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Taking the derivative with respect to &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; respectively, exact MLEs for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; are values satisfying the following two equations: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{i=1}{\overset{K}{\mathop \sum }}\,{{H}_{i}}\times {{S}_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \underset{i=1}{\overset{K}{\mathop \sum }}\,{{U}_{i}}\times {{S}_{i}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   {{H}_{i}}= &amp;amp; \left[ T_{i}^{\beta }\ln {{T}_{i}}-T_{i-1}^{\beta }\ln {{T}_{i-1}} \right] \\ &lt;br /&gt;
  {{S}_{i}}= &amp;amp; \frac{{{M}_{i}}}{\left[ \lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta } \right]}-\frac{{{N}_{i}}-{{M}_{i}}}{\left[ {{N}_{i}}-\lambda T_{i}^{\beta }+\lambda T_{i-1}^{\beta } \right]} \\ &lt;br /&gt;
  {{U}_{i}}= &amp;amp; T_{i}^{\beta }-T_{i-1}^{\beta }\,  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
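The two score equations above can be sketched as functions of (lambda, beta); their simultaneous root gives the MLEs (illustrative Python only; an actual solver would iterate, e.g. with Newton's method). A useful sanity check: with a single configuration, both scores vanish exactly when lambda * T_1^beta = M_1.

```python
from math import log

def discrete_scores(lam, beta, N, M):
    """Evaluate the two score sums (sum H_i*S_i and sum U_i*S_i) for
    success/failure data; N, M give trials/failures per configuration,
    and T_i is the cumulative number of trials (T_0 = 0)."""
    sH = sU = 0.0
    t0 = 0.0
    for Ni, Mi in zip(N, M):
        t1 = t0 + Ni
        H = t1**beta * log(t1) - (t0**beta * log(t0) if t0 > 0 else 0.0)
        U = t1**beta - t0**beta
        S = Mi / (lam * U) - (Ni - Mi) / (Ni - lam * U)
        sH += H * S
        sU += U * S
        t0 = t1
    return sH, sU

# Single-configuration check: lam * T1^beta = M1 zeroes both scores.
beta = 0.7
lam = 3 / 10**beta
sH, sU = discrete_scores(lam, beta, [10], [3])
```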
===Example - Grouped per Configuration===&lt;br /&gt;
{{:Crow-AMSAA Discrete Model Example}}&lt;br /&gt;
&lt;br /&gt;
===Mixed Data===&lt;br /&gt;
The Mixed data type provides additional flexibility in terms of how it can handle different testing strategies. Systems can be tested in groups using different configurations, individually trial by trial, or in a combination of individual trials and configurations of more than one trial. The Mixed data type allows you to enter the data so that it represents how the systems were tested within the total number of trials. For example, if you launched five (5) missiles for a given configuration and none of them failed during testing, then there would be a row within the data sheet indicating that this configuration operated successfully for these five trials. If the very next trial, the sixth, failed, then this would be a separate row within the data. This flexibility in data entry allows for a greater understanding of how the systems were tested, simply by examining the data. The methodology for estimating the parameters &amp;lt;math&amp;gt;\hat{\beta }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat{\lambda}\,\!&amp;lt;/math&amp;gt; is the same as the one presented in the [[Crow-AMSAA (NHPP)#Grouped_Data|Grouped Data]] section. With Mixed data, the average reliability and average unreliability within a given interval can also be calculated.&lt;br /&gt;
&lt;br /&gt;
The average unreliability is calculated as:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\text{Average Unreliability }({{t}_{1,}}{{t}_{2}})=\frac{\lambda t_{2}^{\beta }-\lambda t_{1}^{\beta }}{{{t}_{2}}-{{t}_{1}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the average reliability is calculated as:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\text{Average Reliability }({{t}_{1,}}{{t}_{2}})=1-\frac{\lambda t_{2}^{\beta }-\lambda t_{1}^{\beta }}{{{t}_{2}}-{{t}_{1}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
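A quick sketch of these two formulas (hypothetical parameter values; note that with beta = 1 the average unreliability reduces to lambda on any interval, which makes a convenient check):

```python
def average_unreliability(lam, beta, t1, t2):
    """Average failure probability over the interval (t1, t2)."""
    return (lam * t2**beta - lam * t1**beta) / (t2 - t1)

def average_reliability(lam, beta, t1, t2):
    return 1.0 - average_unreliability(lam, beta, t1, t2)

q = average_unreliability(0.2, 1.0, 3, 11)   # beta == 1, so this is just lam
r = average_reliability(0.2, 1.0, 3, 11)
```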
====Mixed Data Confidence Bounds====&lt;br /&gt;
&#039;&#039;&#039;Bounds on Average Failure Probability&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
The process to calculate the average unreliability confidence bounds for Mixed data is as follows: &lt;br /&gt;
&lt;br /&gt;
#Calculate the average failure probability &amp;lt;math&amp;gt;({{t}_{1}},{{t}_{2}})\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
#There will exist a &amp;lt;math&amp;gt;{{t}^{*}}\,\!&amp;lt;/math&amp;gt; between &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{t}_{2}}\,\!&amp;lt;/math&amp;gt; such that the instantaneous unreliability at &amp;lt;math&amp;gt;{{t}^{*}}\,\!&amp;lt;/math&amp;gt; equals the average unreliability &amp;lt;math&amp;gt;({{t}_{1}},{{t}_{2}})\,\!&amp;lt;/math&amp;gt;. The confidence intervals for the instantaneous unreliability at &amp;lt;math&amp;gt;{{t}^{*}}\,\!&amp;lt;/math&amp;gt; are the confidence intervals for the average unreliability &amp;lt;math&amp;gt;({{t}_{1}},{{t}_{2}})\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Bounds on Average Reliability&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
The process to calculate the average reliability confidence bounds for Mixed data is as follows:&lt;br /&gt;
&lt;br /&gt;
#Calculate confidence bounds for average unreliability &amp;lt;math&amp;gt;({{t}_{1}},{{t}_{2}})\,\!&amp;lt;/math&amp;gt; as described above.&lt;br /&gt;
#The confidence bounds for reliability are 1 minus these confidence bounds for average unreliability.&lt;br /&gt;
&lt;br /&gt;
====Example - Mixed Data====&lt;br /&gt;
{{:Crow-AMSAA Discrete Model Grouped Data Example}}&lt;br /&gt;
&lt;br /&gt;
==Change of Slope==&lt;br /&gt;
{{:Change of Slope Analysis}}&lt;br /&gt;
&lt;br /&gt;
==More Examples==&lt;br /&gt;
===Determining Whether a Design Meets the MTBF Goal===&lt;br /&gt;
{{:Failure_Times_Crow-AMSAA_Example}}&lt;br /&gt;
&lt;br /&gt;
===Analyzing Mixed Data for a One-Shot System===&lt;br /&gt;
{{:Mixed_Data_-_Crow-AMSAA_Example}}&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=ReliaSoft%27s_Alternate_Ranking_Method&amp;diff=57255</id>
		<title>ReliaSoft&#039;s Alternate Ranking Method</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=ReliaSoft%27s_Alternate_Ranking_Method&amp;diff=57255"/>
		<updated>2015-02-25T21:11:44Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner Weibull Articles}}&lt;br /&gt;
&#039;&#039;This article appears in the [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|Life Data Analysis Reference book]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
In probability plotting or rank regression analysis of &#039;&#039;&#039;interval&#039;&#039;&#039; or &#039;&#039;&#039;left censored&#039;&#039;&#039; data, difficulties arise when attempting to estimate the exact time within the interval when the failure actually occurred, especially when the intervals overlap. The &#039;&#039;standard ranking method&#039;&#039; (SRM) is not applicable to interval data; thus, ReliaSoft has formulated a more sophisticated methodology that allows for more accurate probability plotting and regression analysis of data sets with interval or left censored data. This method utilizes the traditional rank regression method and iteratively improves upon the computed ranks by parametrically recomputing new ranks and the most probable failure time for interval data.&lt;br /&gt;
&lt;br /&gt;
In the traditional method for plotting or rank regression analysis of &#039;&#039;&#039;right censored&#039;&#039;&#039; data, the effect of the exact censoring time is not considered. One example of this can be seen in the [[Parameter_Estimation#Shortfalls_of_the_Rank_Adjustment_Method|parameter estimation]] chapter. The ReliaSoft ranking method can also be used to overcome this shortfall of the standard ranking method.&lt;br /&gt;
&lt;br /&gt;
The following step-by-step example illustrates the ReliaSoft ranking method (RRM), which is an iterative improvement on the standard ranking method (SRM). Although this method is illustrated by the use of the two-parameter Weibull distribution, it can be easily generalized for other models.&lt;br /&gt;
&lt;br /&gt;
Consider the following test data:&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.1- The Test Data&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| ||10&lt;br /&gt;
|-align=&amp;quot;center&amp;quot; &lt;br /&gt;
|1||Right Censored|| ||20&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Left Censored||0||30&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Exact Failure|| ||40&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| ||50&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Right Censored|| ||60&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Left Censored||0||70&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Interval Failure||20||80&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Interval Failure||10||85&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Left Censored||0||100&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Initial Parameter Estimation===&lt;br /&gt;
As a preliminary step, we need to provide a crude estimate of the Weibull parameters for this data. To begin, we will extract the exact times-to-failure (10, 40, and 50) and the midpoints of the interval failures. The midpoints are 50 (for the interval of 20 to 80) and 47.5 (for the interval of 10 to 85). Next, we group together the items that have the same failure times, as shown in Table B.2.&lt;br /&gt;
&lt;br /&gt;
Using traditional rank regression, we obtain the initial parameter estimates:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\widehat{\beta }}_{0}}= &amp;amp; 1.91367089 \\ &lt;br /&gt;
 &amp;amp; {{\widehat{\eta }}_{0}}= &amp;amp; 43.91657736  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
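For illustration, a crude fit of this kind can be sketched in Python. The sketch below treats the Table B.2 times as a complete sample and uses Benard's approximation with rank regression on X; the reference does not state exactly how the crude estimate was obtained, so both choices are assumptions and the resulting numbers will differ somewhat from the values above.&lt;br /&gt;

```python
import math

# Failure times from Table B.2, expanded by group size
times = [10, 40, 40, 47.5, 50, 50, 50]
n = len(times)  # assumption: the crude fit treats these as a complete sample

# Median ranks via Benard's approximation (also an assumption)
ranks = [(i + 1 - 0.3) / (n + 0.4) for i in range(n)]

# Rank regression on X: fit ln t = a + b * ln(-ln(1 - MR))
xs = [math.log(-math.log(1.0 - r)) for r in ranks]
ys = [math.log(t) for t in times]
xbar = sum(xs) / n
ybar = sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

beta0, eta0 = 1.0 / b, math.exp(a)  # Weibull parameters implied by the fit
print(beta0, eta0)
```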
&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.2- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
|- &lt;br /&gt;
|1||Exact Failure|| ||10&lt;br /&gt;
|- &lt;br /&gt;
|2||Exact Failure|| ||40&lt;br /&gt;
|- &lt;br /&gt;
|1||Exact Failure|| ||47.5&lt;br /&gt;
|- &lt;br /&gt;
|3||Exact Failure||  ||50&lt;br /&gt;
|} &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For all intervals, we obtain a weighted &#039;&#039;midpoint&#039;&#039; using:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   {{{\hat{t}}}_{m}}\left( \hat{\beta },\hat{\eta } \right)= &amp;amp; \frac{\int_{LI}^{TF}t\text{ }f(t;\hat{\beta },\hat{\eta })dt}{\int_{LI}^{TF}f(t;\hat{\beta },\hat{\eta })dt}, \\ &lt;br /&gt;
  = &amp;amp; \frac{\int_{LI}^{TF}t\tfrac{{\hat{\beta }}}{{\hat{\eta }}}{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{\hat{\beta }-1}}{{e}^{-{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{{\hat{\beta }}}}}}dt}{\int_{LI}^{TF}\tfrac{{\hat{\beta }}}{{\hat{\eta }}}{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{\hat{\beta }-1}}{{e}^{-{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{{\hat{\beta }}}}}}dt}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This transforms our data into the format in Table B.3.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;5&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.3- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures, Based upon the Parameters &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
!Weighted &amp;quot;Midpoint&amp;quot;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure||  ||10 ||&lt;br /&gt;
|-  align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Exact Failure||  ||40||&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| || 50||&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Interval Failure||20||80||42.837&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Interval Failure||10||85||39.169&lt;br /&gt;
|}&lt;br /&gt;
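The weighted midpoints in Table B.3 can be reproduced numerically. The sketch below evaluates the conditional-mean formula from Step 1 with plain trapezoidal integration (the grid size is an arbitrary choice):&lt;br /&gt;

```python
import math

def weibull_pdf(t, beta, eta):
    """Two-parameter Weibull probability density function."""
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

def weighted_midpoint(li, tf, beta, eta, n=20000):
    """Conditional mean of t on [LI, TF]: int t*f dt / int f dt (trapezoidal)."""
    h = (tf - li) / n
    num = den = 0.0
    for i in range(n + 1):
        t = li + i * h
        w = 0.5 if i in (0, n) else 1.0
        f = weibull_pdf(t, beta, eta)
        num += w * t * f
        den += w * f
    return num / den

beta0, eta0 = 1.91367089, 43.91657736  # initial estimates
print(round(weighted_midpoint(20, 80, beta0, eta0), 3))  # interval failure 20 to 80
print(round(weighted_midpoint(10, 85, beta0, eta0), 3))  # interval failure 10 to 85
```

The two printed values should land near the 42.837 and 39.169 shown in Table B.3.&lt;br /&gt;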
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Now we arrange the data as in Table B.4.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|Table B.4- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures, in Ascending Order.&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Time&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||10&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||39.169&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||40&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||42.837&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||50&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We now consider the left and right censored data, as in Table B.5.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;7&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.5- Computation of Increments in a Matrix Format for Computing a Revised Mean Order Number.&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!2 Left Censored &#039;&#039;t&#039;&#039; = 30&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039; = 70&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039; = 100&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039; = 20&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039; = 60&lt;br /&gt;
|- &lt;br /&gt;
|1||10||&amp;lt;math&amp;gt;2 \frac{\int_0^{10} f_0(t)dt}{F_0 (30)-F_0 (0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_0^{10} f_0 (t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_0^{10} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || 0||0&lt;br /&gt;
|- &lt;br /&gt;
|1||39.169||&amp;lt;math&amp;gt;2 \frac{\int_{10}^{30} f_0(t)dt}{F_0(30)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{10}^{39.169} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{10}^{39.169} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{20}^{39.169} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0||&amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt; ||0&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0|| &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt;|| &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0||&amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|}&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
In general, for left censored data:&lt;br /&gt;
&lt;br /&gt;
:•	The increment term for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; left censored items at time &amp;lt;math&amp;gt;={{t}_{0}},\,\!&amp;lt;/math&amp;gt; with a time-to-failure of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; when &amp;lt;math&amp;gt;{{t}_{0}}\le {{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is zero.&lt;br /&gt;
:•	When &amp;lt;math&amp;gt;{{t}_{0}}&amp;gt;{{t}_{i-1}},\,\!&amp;lt;/math&amp;gt; the contribution is:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{n}{{{F}_{0}}({{t}_{0}})-{{F}_{0}}(0)}\underset{{{t}_{i-1}}}{\overset{MIN({{t}_{i}},{{t}_{0}})}{\mathop \int }}\,{{f}_{0}}\left( t \right)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:or:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;n\frac{{{F}_{0}}(MIN({{t}_{i}},{{t}_{0}}))-{{F}_{0}}({{t}_{i-1}})}{{{F}_{0}}({{t}_{0}})-{{F}_{0}}(0)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
where &amp;lt;math&amp;gt;{{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is the time-to-failure previous to the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; time-to-failure and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of units associated with that time-to-failure (or units in the group).&lt;br /&gt;
&lt;br /&gt;
In general, for right censored data:&lt;br /&gt;
:•	The increment term for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; right censored items at time &amp;lt;math&amp;gt;={{t}_{0}},\,\!&amp;lt;/math&amp;gt; with a time-to-failure of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;, when &amp;lt;math&amp;gt;{{t}_{0}}\ge {{t}_{i}}\,\!&amp;lt;/math&amp;gt; is zero.&lt;br /&gt;
:•	When &amp;lt;math&amp;gt;{{t}_{0}}&amp;lt;{{t}_{i}},\,\!&amp;lt;/math&amp;gt; the contribution is:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{n}{{{F}_{0}}(\infty )-{{F}_{0}}({{t}_{0}})}\underset{MAX({{t}_{0}},{{t}_{i-1}})}{\overset{{{t}_{i}}}{\mathop \int }}\,{{f}_{0}}\left( t \right)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:or:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;n\frac{{{F}_{0}}({{t}_{i}})-{{F}_{0}}(MAX({{t}_{0}},{{t}_{i-1}}))}{{{F}_{0}}(\infty )-{{F}_{0}}({{t}_{0}})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
where &amp;lt;math&amp;gt;{{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is the time-to-failure previous to the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; time-to-failure and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of units associated with that time-to-failure (or units in the group).&lt;br /&gt;
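Under the initial parameter estimates, these increment formulas can be checked directly. A minimal sketch, with the Weibull CDF playing the role of &amp;lt;math&amp;gt;F_0\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import math

BETA0, ETA0 = 1.91367089, 43.91657736  # initial estimates

def F(t):
    """Weibull CDF F0 under the initial parameter estimates; F0(0) = 0."""
    return 1.0 - math.exp(-((t / ETA0) ** BETA0)) if t > 0 else 0.0

def left_increment(n, t0, ti, t_prev):
    """Increment from n items left censored at t0, for the failure at ti."""
    if t0 <= t_prev:
        return 0.0
    return n * (F(min(ti, t0)) - F(t_prev)) / (F(t0) - F(0))

def right_increment(n, t0, ti, t_prev):
    """Increment from n items right censored at t0, for the failure at ti; F0(inf) = 1."""
    if t0 >= ti:
        return 0.0
    return n * (F(ti) - F(max(t0, t_prev))) / (1.0 - F(t0))

# Two entries of Table B.6: row t = 10 against the 2 items left censored at 30,
# and row t = 39.169 against the item right censored at 20.
print(round(left_increment(2, 30, 10, 0), 6))        # ~0.299065
print(round(right_increment(1, 20, 39.169, 10), 6))  # ~0.440887
```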
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 4&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sum up the increments (horizontally in rows), as in Table B.6.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;8&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.6- Increments Solved Numerically, Along with the Sum of Each Row.&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!2 Left Censored &#039;&#039;t&#039;&#039;=30&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039;=70&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039;=100&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039;=20&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039;=60&lt;br /&gt;
!Sum of row(increment)&lt;br /&gt;
|-&lt;br /&gt;
|1||10||0.299065||0.062673||0.057673||0||0||0.419411&lt;br /&gt;
|-&lt;br /&gt;
|1||39.169||1.700935||0.542213||0.498959||0.440887||0||3.182994&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0||0.015892||0.014625||0.018113||0||0.048630&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0||0.052486||0.048299||0.059821||0||0.160606&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0||0.118151||0.108726||0.134663||0||0.361540&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 5&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute new mean order numbers (MON), as shown in Table B.7, using the increments obtained in Table B.6: each MON is the ''previous MON'' plus the ''number of items'' plus the current increment.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.7- Mean Order Numbers (MON)&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!Sum of row(increment)&lt;br /&gt;
!Mean Order Number&lt;br /&gt;
|-&lt;br /&gt;
|1||10||0.419411||1.419411&lt;br /&gt;
|-&lt;br /&gt;
|1||39.169||3.182994||5.602405&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0.048630||7.651035&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0.160606||9.811641&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0.361540||11.173181&lt;br /&gt;
|}&lt;br /&gt;
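The MON recursion in Step 5 is a simple running sum; the values in Table B.7 follow directly from the row increments of Table B.6:&lt;br /&gt;

```python
# (number of items, failure time, row increment) from Table B.6
rows = [
    (1, 10,     0.419411),
    (1, 39.169, 3.182994),
    (2, 40,     0.048630),
    (2, 42.837, 0.160606),
    (1, 50,     0.361540),
]

mons = []
prev = 0.0
for n_items, t, inc in rows:
    prev = prev + n_items + inc  # previous MON + number of items + increment
    mons.append(round(prev, 6))

print(mons)  # [1.419411, 5.602405, 7.651035, 9.811641, 11.173181]
```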
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 6&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute the median ranks based on these new MONs as shown in Table B.8.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.8- Mean Order Numbers with Their Ranks for a Sample Size of 13 Units.&lt;br /&gt;
|-&lt;br /&gt;
!Time&lt;br /&gt;
!MON&lt;br /&gt;
!Ranks&lt;br /&gt;
|-&lt;br /&gt;
|10||1.419411||0.0825889&lt;br /&gt;
|-&lt;br /&gt;
|39.169||5.602405||0.3952894&lt;br /&gt;
|-&lt;br /&gt;
|40||7.651035||0.5487781&lt;br /&gt;
|-&lt;br /&gt;
|42.837||9.811641||0.7106217&lt;br /&gt;
|-&lt;br /&gt;
|50||11.173181||0.8124983&lt;br /&gt;
|}&lt;br /&gt;
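Table B.8 uses exact median ranks for the fractional mean order numbers, which requires inverting the incomplete beta function. Benard's approximation, sketched below, comes within a few parts per thousand of those values:&lt;br /&gt;

```python
def benard_rank(mon, n):
    """Benard's approximation to the median rank for a (possibly fractional) MON."""
    return (mon - 0.3) / (n + 0.4)

N = 13  # total sample size
for mon in [1.419411, 5.602405, 7.651035, 9.811641, 11.173181]:
    print(round(benard_rank(mon, N), 7))
```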
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 7&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute new &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; using standard rank regression, based upon the data shown in Table B.9.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.9- Times and Median Ranks for the New Rank Regression&lt;br /&gt;
|-&lt;br /&gt;
!Time&lt;br /&gt;
!Ranks&lt;br /&gt;
|-&lt;br /&gt;
|10||0.0825889&lt;br /&gt;
|-&lt;br /&gt;
|39.169||0.3952894&lt;br /&gt;
|-&lt;br /&gt;
|40||0.5487781&lt;br /&gt;
|-&lt;br /&gt;
|42.837||0.7106217&lt;br /&gt;
|-&lt;br /&gt;
|50||0.8124983&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 8&#039;&#039;&#039;&lt;br /&gt;
Return to Step 1 and repeat the process until an acceptable convergence is reached on the parameters (i.e., the parameter values stabilize).&lt;br /&gt;
&lt;br /&gt;
===Results===&lt;br /&gt;
The results of the first five iterations are shown in Table B.10.&lt;br /&gt;
After convergence, rank regression on X in Weibull++ yields:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center;&amp;quot;|Table B.10-The parameters after the first five iterations&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
!&#039;&#039;Iteration&#039;&#039;&lt;br /&gt;
!&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
!&amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||1.845638||42.576422&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||1.830621 ||42.039743&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|3||1.828010 ||41.830615&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|4||1.828030 ||41.749708&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|5||1.828383 ||41.717990&lt;br /&gt;
|}&lt;br /&gt;
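Step 8's stopping rule can be made concrete with a relative-change test; the tolerance below is an arbitrary assumption. Applied to Table B.10, &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; has essentially stabilized by iteration 4 while &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; is still drifting slightly:&lt;br /&gt;

```python
def converged(old, new, tol=1e-4):
    """True when every parameter's relative change is below tol."""
    return all(abs(n - o) / abs(o) < tol for o, n in zip(old, new))

history = [  # (beta, eta) from Table B.10
    (1.845638, 42.576422),
    (1.830621, 42.039743),
    (1.828010, 41.830615),
    (1.828030, 41.749708),
    (1.828383, 41.717990),
]

flags = [converged(a, b) for a, b in zip(history, history[1:])]
print(flags)  # eta's relative change still exceeds the tolerance at every step
```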
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{\beta }}_{RRX}}=1.82890,\text{ }{{\widehat{\eta }}_{RRX}}=41.69774\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The direct MLE solution yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{\beta }}_{MLE}}=2.10432,\text{ }{{\widehat{\eta }}_{MLE}}=42.31535\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=ReliaSoft%27s_Alternate_Ranking_Method&amp;diff=57254</id>
		<title>ReliaSoft&#039;s Alternate Ranking Method</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=ReliaSoft%27s_Alternate_Ranking_Method&amp;diff=57254"/>
		<updated>2015-02-25T21:10:50Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner Weibull Articles}}&lt;br /&gt;
&#039;&#039;This article appears in the [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|Life Data Analysis Reference book]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
In probability plotting or rank regression analysis of &#039;&#039;interval&#039;&#039; or &#039;&#039;left censored&#039;&#039; data, difficulties arise when attempting to estimate the exact time within the interval at which the failure actually occurred, especially when the intervals overlap. The &#039;&#039;standard ranking method&#039;&#039; (SRM) is not applicable to interval data; thus, ReliaSoft has formulated a more sophisticated methodology that allows for more accurate probability plotting and regression analysis of data sets with interval or left censored data. This method uses traditional rank regression and iteratively improves the computed ranks by parametrically recomputing new ranks and the most probable failure time for interval data.&lt;br /&gt;
&lt;br /&gt;
In the traditional method for plotting or rank regression analysis of &#039;&#039;right censored&#039;&#039; data, the effect of the exact censoring time is not considered. One example of this can be seen in the [[Parameter_Estimation#Shortfalls_of_the_Rank_Adjustment_Method|parameter estimation]] chapter. The ReliaSoft ranking method can also be used to overcome this shortfall of the standard ranking method.&lt;br /&gt;
&lt;br /&gt;
The following step-by-step example illustrates the ReliaSoft ranking method (RRM), which is an iterative improvement on the standard ranking method (SRM). Although this method is illustrated by the use of the two-parameter Weibull distribution, it can be easily generalized for other models.&lt;br /&gt;
&lt;br /&gt;
Consider the following test data:&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.1- The Test Data&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| ||10&lt;br /&gt;
|-align=&amp;quot;center&amp;quot; &lt;br /&gt;
|1||Right Censored|| ||20&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Left Censored||0||30&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Exact Failure|| ||40&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| ||50&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Right Censored|| ||60&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Left Censored||0||70&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Interval Failure||20||80&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Interval Failure||10||85&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Left Censored||0||100&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===  Initial Parameter Estimation===&lt;br /&gt;
As a preliminary step, we need to provide a crude estimate of the Weibull parameters for this data. To begin, we will extract the exact times-to-failure (10, 40, and 50) and the midpoints of the interval failures. The midpoints are 50 (for the interval of 20 to 80) and 47.5 (for the interval of 10 to 85). Next, we group together the items that have the same failure times, as shown in Table B.2.&lt;br /&gt;
&lt;br /&gt;
Using traditional rank regression, we obtain the initial parameter estimates:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\widehat{\beta }}_{0}}= &amp;amp; 1.91367089 \\ &lt;br /&gt;
 &amp;amp; {{\widehat{\eta }}_{0}}= &amp;amp; 43.91657736  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.2- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
|- &lt;br /&gt;
|1||Exact Failure|| ||10&lt;br /&gt;
|- &lt;br /&gt;
|2||Exact Failure|| ||40&lt;br /&gt;
|- &lt;br /&gt;
|1||Exact Failure|| ||47.5&lt;br /&gt;
|- &lt;br /&gt;
|3||Exact Failure||  ||50&lt;br /&gt;
|} &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For all intervals, we obtain a weighted &#039;&#039;midpoint&#039;&#039; using:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   {{{\hat{t}}}_{m}}\left( \hat{\beta },\hat{\eta } \right)= &amp;amp; \frac{\int_{LI}^{TF}t\text{ }f(t;\hat{\beta },\hat{\eta })dt}{\int_{LI}^{TF}f(t;\hat{\beta },\hat{\eta })dt}, \\ &lt;br /&gt;
  = &amp;amp; \frac{\int_{LI}^{TF}t\tfrac{{\hat{\beta }}}{{\hat{\eta }}}{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{\hat{\beta }-1}}{{e}^{-{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{{\hat{\beta }}}}}}dt}{\int_{LI}^{TF}\tfrac{{\hat{\beta }}}{{\hat{\eta }}}{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{\hat{\beta }-1}}{{e}^{-{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{{\hat{\beta }}}}}}dt}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This transforms our data into the format in Table B.3.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;5&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.3- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures, Based upon the Parameters &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
!Weighted &amp;quot;Midpoint&amp;quot;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure||  ||10 ||&lt;br /&gt;
|-  align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Exact Failure||  ||40||&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| || 50||&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Interval Failure||20||80||42.837&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Interval Failure||10||85||39.169&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Now we arrange the data as in Table B.4.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|Table B.4- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures, in Ascending Order.&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Time&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||10&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||39.169&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||40&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||42.837&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||50&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We now consider the left and right censored data, as in Table B.5.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;7&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.5- Computation of Increments in a Matrix Format for Computing a Revised Mean Order Number.&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!2 Left Censored &#039;&#039;t&#039;&#039; = 30&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039; = 70&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039; = 100&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039; = 20&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039; = 60&lt;br /&gt;
|- &lt;br /&gt;
|1||10||&amp;lt;math&amp;gt;2 \frac{\int_0^{10} f_0(t)dt}{F_0 (30)-F_0 (0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_0^{10} f_0 (t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_0^{10} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || 0||0&lt;br /&gt;
|- &lt;br /&gt;
|1||39.169||&amp;lt;math&amp;gt;2 \frac{\int_{10}^{30} f_0(t)dt}{F_0(30)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{10}^{39.169} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{10}^{39.169} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{20}^{39.169} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0||&amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt; ||0&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0|| &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt;|| &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0||&amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|}&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
In general, for left censored data:&lt;br /&gt;
&lt;br /&gt;
:•	The increment term for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; left censored items at time &amp;lt;math&amp;gt;={{t}_{0}},\,\!&amp;lt;/math&amp;gt; with a time-to-failure of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; when &amp;lt;math&amp;gt;{{t}_{0}}\le {{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is zero.&lt;br /&gt;
:•	When &amp;lt;math&amp;gt;{{t}_{0}}&amp;gt;{{t}_{i-1}},\,\!&amp;lt;/math&amp;gt; the contribution is:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{n}{{{F}_{0}}({{t}_{0}})-{{F}_{0}}(0)}\underset{{{t}_{i-1}}}{\overset{MIN({{t}_{i}},{{t}_{0}})}{\mathop \int }}\,{{f}_{0}}\left( t \right)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:or:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;n\frac{{{F}_{0}}(MIN({{t}_{i}},{{t}_{0}}))-{{F}_{0}}({{t}_{i-1}})}{{{F}_{0}}({{t}_{0}})-{{F}_{0}}(0)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
where &amp;lt;math&amp;gt;{{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is the time-to-failure previous to the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; time-to-failure and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of units associated with that time-to-failure (or units in the group).&lt;br /&gt;
&lt;br /&gt;
In general, for right censored data:&lt;br /&gt;
:•	The increment term for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; right censored items at time &amp;lt;math&amp;gt;={{t}_{0}},\,\!&amp;lt;/math&amp;gt; with a time-to-failure of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;, when &amp;lt;math&amp;gt;{{t}_{0}}\ge {{t}_{i}}\,\!&amp;lt;/math&amp;gt; is zero.&lt;br /&gt;
:•	When &amp;lt;math&amp;gt;{{t}_{0}}&amp;lt;{{t}_{i}},\,\!&amp;lt;/math&amp;gt; the contribution is:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{n}{{{F}_{0}}(\infty )-{{F}_{0}}({{t}_{0}})}\underset{MAX({{t}_{0}},{{t}_{i-1}})}{\overset{{{t}_{i}}}{\mathop \int }}\,{{f}_{0}}\left( t \right)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:or:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;n\frac{{{F}_{0}}({{t}_{i}})-{{F}_{0}}(MAX({{t}_{0}},{{t}_{i-1}}))}{{{F}_{0}}(\infty )-{{F}_{0}}({{t}_{0}})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
where &amp;lt;math&amp;gt;{{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is the time-to-failure previous to the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; time-to-failure and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of units associated with that time-to-failure (or units in the group).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 4&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sum up the increments (horizontally in rows), as in Table B.6.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;8&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.6- Increments Solved Numerically, Along with the Sum of Each Row.&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!2 Left Censored &#039;&#039;t&#039;&#039;=30&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039;=70&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039;=100&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039;=20&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039;=60&lt;br /&gt;
!Sum of row(increment)&lt;br /&gt;
|-&lt;br /&gt;
|1||10||0.299065||0.062673||0.057673||0||0||0.419411&lt;br /&gt;
|-&lt;br /&gt;
|1||39.169||1.700935||0.542213||0.498959||0.440887||0||3.182994&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0||0.015892||0.014625||0.018113||0||0.048630&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0||0.052486||0.048299||0.059821||0||0.160606&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0||0.118151||0.108726||0.134663||0||0.361540&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 5&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute new mean order numbers (MON), as shown in Table B.7, using the increments obtained in Table B.6: each MON is the ''previous MON'' plus the ''number of items'' plus the current increment.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.7- Mean Order Numbers (MON)&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!Sum of row(increment)&lt;br /&gt;
!Mean Order Number&lt;br /&gt;
|-&lt;br /&gt;
|1||10||0.419411||1.419411&lt;br /&gt;
|-&lt;br /&gt;
|1||39.169||3.182994||5.602405&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0.048630||7.651035&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0.160606||9.811641&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0.361540||11.173181&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 6&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute the median ranks based on these new MONs as shown in Table B.8.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.8- Mean Order Numbers with Their Ranks for a Sample Size of 13 Units.&lt;br /&gt;
|-&lt;br /&gt;
!Time&lt;br /&gt;
!MON&lt;br /&gt;
!Ranks&lt;br /&gt;
|-&lt;br /&gt;
|10||1.419411||0.0825889&lt;br /&gt;
|-&lt;br /&gt;
|39.169||5.602405||0.3952894&lt;br /&gt;
|-&lt;br /&gt;
|40||7.651035||0.5487781&lt;br /&gt;
|-&lt;br /&gt;
|42.837||9.811641||0.7106217&lt;br /&gt;
|-&lt;br /&gt;
|50||11.173181||0.8124983&lt;br /&gt;
|}&lt;br /&gt;
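The ranks in Table B.8 are median ranks evaluated at non-integer order numbers for a sample size of N = 13. Exact values require inverting the incomplete beta function; Benard's approximation, MR ≈ (MON - 0.3)/(N + 0.4), is a common stand-in and reproduces the table to roughly three decimal places. A sketch of the approximation (not the exact computation behind the table):

```python
def benard_median_rank(mon, n):
    """Benard's approximation to the median rank for mean order number mon out of n."""
    return (mon - 0.3) / (n + 0.4)

N = 13
for mon, exact in [
    (1.419411, 0.0825889),
    (5.602405, 0.3952894),
    (7.651035, 0.5487781),
    (9.811641, 0.7106217),
    (11.173181, 0.8124983),
]:
    print(round(benard_median_rank(mon, N), 6), exact)  # approximation vs. table value
```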
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 7&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute new &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; using standard rank regression, based upon the data shown in Table B.9.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
!Time&lt;br /&gt;
!Ranks&lt;br /&gt;
|-&lt;br /&gt;
|10||0.0825889&lt;br /&gt;
|-&lt;br /&gt;
|39.169||0.3952894&lt;br /&gt;
|-&lt;br /&gt;
|40||0.5487781&lt;br /&gt;
|-&lt;br /&gt;
|42.837||0.7106217&lt;br /&gt;
|-&lt;br /&gt;
|50||0.8124983&lt;br /&gt;
|}&lt;br /&gt;
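Rank regression on X fits x = ln t against y = ln(-ln(1 - MR)) and recovers the Weibull parameters from the fitted line. The sketch below (plain least squares in Python, using the Table B.9 pairs) yields values close to the first-iteration row of Table B.10:

```python
import math

# (time, median rank) pairs from Table B.9
data = [
    (10.0, 0.0825889),
    (39.169, 0.3952894),
    (40.0, 0.5487781),
    (42.837, 0.7106217),
    (50.0, 0.8124983),
]

# Weibull probability-plot coordinates
xs = [math.log(t) for t, _ in data]                      # x = ln t
ys = [math.log(-math.log(1.0 - mr)) for _, mr in data]   # y = ln(-ln(1 - MR))

# Rank regression on X: regress x on y, x = a + b*y
m = len(data)
xbar, ybar = sum(xs) / m, sum(ys) / m
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((y - ybar) ** 2 for y in ys)
a = xbar - b * ybar

beta = 1.0 / b      # shape parameter: reciprocal of the fitted slope
eta = math.exp(a)   # scale parameter: exp of the intercept
print(beta, eta)    # approx. 1.846, 42.58
```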
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 8&#039;&#039;&#039;&lt;br /&gt;
Return to Step 1 and repeat the process until acceptable convergence is reached on the parameters (i.e., the parameter values stabilize).&lt;br /&gt;
&lt;br /&gt;
===Results===&lt;br /&gt;
The results of the first five iterations are shown in Table B.10.&lt;br /&gt;
Using Weibull++ with rank regression on X yields:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center;&amp;quot;|Table B.10-The parameters after the first five iterations&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
!&#039;&#039;Iteration&#039;&#039;&lt;br /&gt;
!&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
!&amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||1.845638||42.576422&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||1.830621 ||42.039743&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|3||1.828010 ||41.830615&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|4||1.828030 ||41.749708&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|5||1.828383 ||41.717990&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{\beta }}_{RRX}}=1.82890,\text{ }{{\widehat{\eta }}_{RRX}}=41.69774\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The direct MLE solution yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{\beta }}_{MLE}}=2.10432,\text{ }{{\widehat{\eta }}_{MLE}}=42.31535\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=ReliaSoft%27s_Alternate_Ranking_Method&amp;diff=57253</id>
		<title>ReliaSoft&#039;s Alternate Ranking Method</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=ReliaSoft%27s_Alternate_Ranking_Method&amp;diff=57253"/>
		<updated>2015-02-25T21:10:33Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner Weibull Articles}}&lt;br /&gt;
&#039;&#039;This article appears in the [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|Life Data Analysis Reference book]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
In probability plotting or rank regression analysis of &#039;interval&#039; or &#039;left censored&#039; data, difficulties arise when attempting to estimate the exact time within the interval at which the failure actually occurred, especially when the intervals overlap. The &#039;&#039;standard ranking method&#039;&#039; (SRM) is not applicable to interval data; ReliaSoft has therefore formulated a more sophisticated methodology that allows for more accurate probability plotting and regression analysis of data sets with interval or left censored data. This method uses traditional rank regression and iteratively improves upon the computed ranks by parametrically recomputing new ranks and the most probable failure time for interval data.&lt;br /&gt;
&lt;br /&gt;
In the traditional method for plotting or rank regression analysis of &#039;right censored&#039; data, the effect of the exact censoring time is not considered. One example of this can be seen in the [[Parameter_Estimation#Shortfalls_of_the_Rank_Adjustment_Method|parameter estimation]] chapter. The ReliaSoft ranking method can also be used to overcome this shortfall of the standard ranking method.&lt;br /&gt;
&lt;br /&gt;
The following step-by-step example illustrates the ReliaSoft ranking method (RRM), which is an iterative improvement on the standard ranking method (SRM). Although this method is illustrated by the use of the two-parameter Weibull distribution, it can be easily generalized for other models.&lt;br /&gt;
&lt;br /&gt;
Consider the following test data:&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.1- The Test Data&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| ||10&lt;br /&gt;
|-align=&amp;quot;center&amp;quot; &lt;br /&gt;
|1||Right Censored|| ||20&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Left Censored||0||30&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Exact Failure|| ||40&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| ||50&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Right Censored|| ||60&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Left Censored||0||70&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Interval Failure||20||80&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Interval Failure||10||85&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Left Censored||0||100&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===  Initial Parameter Estimation===&lt;br /&gt;
As a preliminary step, we need to provide a crude estimate of the Weibull parameters for this data. To begin, we will extract the exact times-to-failure (10, 40, and 50) and the midpoints of the interval failures. The midpoints are 50 (for the interval of 20 to 80) and 47.5 (for the interval of 10 to 85). Next, we group together the items that have the same failure times, as shown in Table B.2.&lt;br /&gt;
&lt;br /&gt;
Using traditional rank regression, we obtain the initial estimates:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\widehat{\beta }}_{0}}= &amp;amp; 1.91367089 \\ &lt;br /&gt;
 &amp;amp; {{\widehat{\eta }}_{0}}= &amp;amp; 43.91657736  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.2- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
|- &lt;br /&gt;
|1||Exact Failure|| ||10&lt;br /&gt;
|- &lt;br /&gt;
|2||Exact Failure|| ||40&lt;br /&gt;
|- &lt;br /&gt;
|1||Exact Failure|| ||47.5&lt;br /&gt;
|- &lt;br /&gt;
|3||Exact Failure||  ||50&lt;br /&gt;
|} &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For all intervals, we obtain a weighted &#039;&#039;midpoint&#039;&#039; using:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   {{{\hat{t}}}_{m}}\left( \hat{\beta },\hat{\eta } \right)= &amp;amp; \frac{\int_{LI}^{TF}t\text{ }f(t;\hat{\beta },\hat{\eta })dt}{\int_{LI}^{TF}f(t;\hat{\beta },\hat{\eta })dt}, \\ &lt;br /&gt;
  = &amp;amp; \frac{\int_{LI}^{TF}t\tfrac{{\hat{\beta }}}{{\hat{\eta }}}{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{\hat{\beta }-1}}{{e}^{-{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{{\hat{\beta }}}}}}dt}{\int_{LI}^{TF}\tfrac{{\hat{\beta }}}{{\hat{\eta }}}{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{\hat{\beta }-1}}{{e}^{-{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{{\hat{\beta }}}}}}dt}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This transforms our data into the format in Table B.3.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;5&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.3- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures, Based upon the Parameters &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
!Weighted &amp;quot;Midpoint&amp;quot;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure||  ||10 ||&lt;br /&gt;
|-  align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Exact Failure||  ||40||&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| || 50||&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Interval Failure||20||80||42.837&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Interval Failure||10||85||39.169&lt;br /&gt;
|}&lt;br /&gt;
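The weighted midpoints shown in Table B.3 come from evaluating the conditional-mean integral above with the initial estimates. The following sketch reproduces them numerically, assuming the initial estimates 1.91367089 and 43.91657736 from the preliminary rank regression; composite Simpson's rule is simply one convenient quadrature choice, not part of the method itself:

```python
import math

def weibull_pdf(t, beta, eta):
    """Two-parameter Weibull density f(t; beta, eta)."""
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

def weighted_midpoint(lo, hi, beta, eta, n=2000):
    """Conditional mean of T on [lo, hi]: int(t*f)dt / int(f)dt, via Simpson's rule."""
    h = (hi - lo) / n  # n must be even for composite Simpson
    num = den = 0.0
    for i in range(n + 1):
        t = lo + i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        f = weibull_pdf(t, beta, eta)
        num += w * t * f
        den += w * f
    return num / den

beta0, eta0 = 1.91367089, 43.91657736  # initial estimates from the rank regression above
print(weighted_midpoint(20, 80, beta0, eta0))  # approx. 42.837
print(weighted_midpoint(10, 85, beta0, eta0))  # approx. 39.169
```

The two calls correspond to the interval failures observed between 20 and 80 and between 10 and 85.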
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Now we arrange the data as in Table B.4.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|Table B.4- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures, in Ascending Order.&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Time&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||10&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||39.169&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||40&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||42.837&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||50&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We now consider the left and right censored data, as in Table B.5.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;7&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.5- Computation of Increments in a Matrix Format for Computing a Revised Mean Order Number.&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!2 Left Censored &#039;&#039;t&#039;&#039; = 30&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039; = 70&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039; = 100&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039; = 20&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039; = 60&lt;br /&gt;
|- &lt;br /&gt;
|1||10||&amp;lt;math&amp;gt;2 \frac{\int_0^{10} f_0(t)dt}{F_0(30)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_0^{10} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_0^{10} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || 0||0&lt;br /&gt;
|- &lt;br /&gt;
|1||39.169||&amp;lt;math&amp;gt;2 \frac{\int_{10}^{30} f_0(t)dt}{F_0(30)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{10}^{39.169} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{10}^{39.169} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{20}^{39.169} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0||&amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt; ||0&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0|| &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt;|| &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0||&amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|}&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
In general, for left censored data:&lt;br /&gt;
&lt;br /&gt;
:•	The increment term for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; items left censored at time &amp;lt;math&amp;gt;{{t}_{0}}\,\!&amp;lt;/math&amp;gt;, for a time-to-failure of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;, is zero when &amp;lt;math&amp;gt;{{t}_{0}}\le {{t}_{i-1}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:•	When &amp;lt;math&amp;gt;{{t}_{0}}&amp;gt;{{t}_{i-1}},\,\!&amp;lt;/math&amp;gt; the contribution is:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{n}{{{F}_{0}}({{t}_{0}})-{{F}_{0}}(0)}\underset{{{t}_{i-1}}}{\overset{MIN({{t}_{i}},{{t}_{0}})}{\mathop \int }}\,{{f}_{0}}\left( t \right)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:or:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;n\frac{{{F}_{0}}(MIN({{t}_{i}},{{t}_{0}}))-{{F}_{0}}({{t}_{i-1}})}{{{F}_{0}}({{t}_{0}})-{{F}_{0}}(0)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
where &amp;lt;math&amp;gt;{{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is the time-to-failure previous to the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; time-to-failure and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of units associated with that time-to-failure (or units in the group).&lt;br /&gt;
&lt;br /&gt;
In general, for right censored data:&lt;br /&gt;
:•	The increment term for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; items right censored at time &amp;lt;math&amp;gt;{{t}_{0}}\,\!&amp;lt;/math&amp;gt;, for a time-to-failure of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;, is zero when &amp;lt;math&amp;gt;{{t}_{0}}\ge {{t}_{i}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:•	When &amp;lt;math&amp;gt;{{t}_{0}}&amp;lt;{{t}_{i}},\,\!&amp;lt;/math&amp;gt; the contribution is:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{n}{{{F}_{0}}(\infty )-{{F}_{0}}({{t}_{0}})}\underset{MAX({{t}_{0}},{{t}_{i-1}})}{\overset{{{t}_{i}}}{\mathop \int }}\,{{f}_{0}}\left( t \right)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:or:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;n\frac{{{F}_{0}}({{t}_{i}})-{{F}_{0}}(MAX({{t}_{0}},{{t}_{i-1}}))}{{{F}_{0}}(\infty )-{{F}_{0}}({{t}_{0}})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
where &amp;lt;math&amp;gt;{{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is the time-to-failure previous to the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; time-to-failure and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of units associated with that time-to-failure (or units in the group).&lt;br /&gt;
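As a concrete check of these increment formulas, the sketch below (our own illustration, not ReliaSoft code) reproduces the first column of Table B.6, i.e., the contributions of the 2 units left censored at t = 30 toward the failures at t = 10 and t = 39.169, assuming the initial estimates 1.91367089 and 43.91657736:

```python
import math

def weibull_cdf(t, beta, eta):
    """Two-parameter Weibull CDF F0(t)."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def left_censored_increment(n, t0, t_i, t_prev, beta, eta):
    """Increment of n units left censored at t0 toward the failure at t_i.

    Zero when t0 <= t_prev; otherwise
    n * (F0(min(t_i, t0)) - F0(t_prev)) / (F0(t0) - F0(0)).
    """
    if t0 <= t_prev:
        return 0.0
    F = lambda t: weibull_cdf(t, beta, eta)
    return n * (F(min(t_i, t0)) - F(t_prev)) / (F(t0) - F(0.0))

beta0, eta0 = 1.91367089, 43.91657736
print(left_censored_increment(2, 30, 10, 0, beta0, eta0))       # approx. 0.299065
print(left_censored_increment(2, 30, 39.169, 10, beta0, eta0))  # approx. 1.700935
```

Note that the two increments sum to n = 2: the two left-censored units are fully apportioned across the failure times preceding their censoring time.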
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 4&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sum up the increments (horizontally in rows), as in Table B.6.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;8&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.6- Increments Solved Numerically, Along with the Sum of Each Row.&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!2 Left Censored &#039;&#039;t&#039;&#039;=30&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039;=70&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039;=100&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039;=20&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039;=60&lt;br /&gt;
!Sum of row (increment)&lt;br /&gt;
|-&lt;br /&gt;
|1||10||0.299065||0.062673||0.057673||0||0||0.419411&lt;br /&gt;
|-&lt;br /&gt;
|1||39.169||1.700935||0.542213||0.498959||0.440887||0||3.182994&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0||0.015892||0.014625||0.018113||0||0.048630&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0||0.052486||0.048299||0.059821||0||0.160606&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0||0.118151||0.108726||0.134663||0||0.361540&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 5&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute new mean order numbers (MON), as shown in Table B.7, using the increments obtained in Table B.6: each MON is the sum of the &#039;&#039;number of items&#039;&#039;, the &#039;&#039;previous MON&#039;&#039; and the current increment.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.7- Mean Order Numbers (MON)&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!Sum of row (increment)&lt;br /&gt;
!Mean Order Number&lt;br /&gt;
|-&lt;br /&gt;
|1||10||0.419411||1.419411&lt;br /&gt;
|-&lt;br /&gt;
|1||39.169||3.182994||5.602405&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0.048630||7.651035&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0.160606||9.811641&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0.361540||11.173181&lt;br /&gt;
|}&lt;br /&gt;
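The MON column in Table B.7 follows a simple recursion: each MON equals the previous MON plus the number of items in the row plus the row's increment. A short sketch replaying the row sums from Table B.6:

```python
# Rows of Table B.6: (number of items, failure time, sum of row increments)
rows = [
    (1, 10.0, 0.419411),
    (1, 39.169, 3.182994),
    (2, 40.0, 0.048630),
    (2, 42.837, 0.160606),
    (1, 50.0, 0.361540),
]

mon = 0.0  # mean order number before the first failure
for n_items, time, increment in rows:
    mon += n_items + increment  # MON_i = MON_{i-1} + n_i + increment_i
    print(time, round(mon, 6))  # reproduces the MON column of Table B.7
```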
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 6&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute the median ranks based on these new MONs as shown in Table B.8.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.8- Mean Order Numbers with Their Ranks for a Sample Size of 13 Units.&lt;br /&gt;
|-&lt;br /&gt;
!Time&lt;br /&gt;
!MON&lt;br /&gt;
!Ranks&lt;br /&gt;
|-&lt;br /&gt;
|10||1.419411||0.0825889&lt;br /&gt;
|-&lt;br /&gt;
|39.169||5.602405||0.3952894&lt;br /&gt;
|-&lt;br /&gt;
|40||7.651035||0.5487781&lt;br /&gt;
|-&lt;br /&gt;
|42.837||9.811641||0.7106217&lt;br /&gt;
|-&lt;br /&gt;
|50||11.173181||0.8124983&lt;br /&gt;
|}&lt;br /&gt;
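The ranks in Table B.8 are median ranks evaluated at non-integer order numbers for a sample size of N = 13. Exact values require inverting the incomplete beta function; Benard's approximation, MR ≈ (MON - 0.3)/(N + 0.4), is a common stand-in and reproduces the table to roughly three decimal places. A sketch of the approximation (not the exact computation behind the table):

```python
def benard_median_rank(mon, n):
    """Benard's approximation to the median rank for mean order number mon out of n."""
    return (mon - 0.3) / (n + 0.4)

N = 13
for mon, exact in [
    (1.419411, 0.0825889),
    (5.602405, 0.3952894),
    (7.651035, 0.5487781),
    (9.811641, 0.7106217),
    (11.173181, 0.8124983),
]:
    print(round(benard_median_rank(mon, N), 6), exact)  # approximation vs. table value
```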
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 7&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute new &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; using standard rank regression, based upon the data shown in Table B.9.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
!Time&lt;br /&gt;
!Ranks&lt;br /&gt;
|-&lt;br /&gt;
|10||0.0825889&lt;br /&gt;
|-&lt;br /&gt;
|39.169||0.3952894&lt;br /&gt;
|-&lt;br /&gt;
|40||0.5487781&lt;br /&gt;
|-&lt;br /&gt;
|42.837||0.7106217&lt;br /&gt;
|-&lt;br /&gt;
|50||0.8124983&lt;br /&gt;
|}&lt;br /&gt;
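Rank regression on X fits x = ln t against y = ln(-ln(1 - MR)) and recovers the Weibull parameters from the fitted line. The sketch below (plain least squares in Python, using the Table B.9 pairs) yields values close to the first-iteration row of Table B.10:

```python
import math

# (time, median rank) pairs from Table B.9
data = [
    (10.0, 0.0825889),
    (39.169, 0.3952894),
    (40.0, 0.5487781),
    (42.837, 0.7106217),
    (50.0, 0.8124983),
]

# Weibull probability-plot coordinates
xs = [math.log(t) for t, _ in data]                      # x = ln t
ys = [math.log(-math.log(1.0 - mr)) for _, mr in data]   # y = ln(-ln(1 - MR))

# Rank regression on X: regress x on y, x = a + b*y
m = len(data)
xbar, ybar = sum(xs) / m, sum(ys) / m
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((y - ybar) ** 2 for y in ys)
a = xbar - b * ybar

beta = 1.0 / b      # shape parameter: reciprocal of the fitted slope
eta = math.exp(a)   # scale parameter: exp of the intercept
print(beta, eta)    # approx. 1.846, 42.58
```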
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 8&#039;&#039;&#039;&lt;br /&gt;
Return to Step 1 and repeat the process until acceptable convergence is reached on the parameters (i.e., the parameter values stabilize).&lt;br /&gt;
&lt;br /&gt;
===Results===&lt;br /&gt;
The results of the first five iterations are shown in Table B.10.&lt;br /&gt;
Using Weibull++ with rank regression on X yields:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center;&amp;quot;|Table B.10-The parameters after the first five iterations&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
!&#039;&#039;Iteration&#039;&#039;&lt;br /&gt;
!&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
!&amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||1.845638||42.576422&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||1.830621 ||42.039743&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|3||1.828010 ||41.830615&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|4||1.828030 ||41.749708&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|5||1.828383 ||41.717990&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{\beta }}_{RRX}}=1.82890,\text{ }{{\widehat{\eta }}_{RRX}}=41.69774\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The direct MLE solution yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{\beta }}_{MLE}}=2.10432,\text{ }{{\widehat{\eta }}_{MLE}}=42.31535\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=ReliaSoft%27s_Alternate_Ranking_Method&amp;diff=57252</id>
		<title>ReliaSoft&#039;s Alternate Ranking Method</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=ReliaSoft%27s_Alternate_Ranking_Method&amp;diff=57252"/>
		<updated>2015-02-25T21:08:09Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner Weibull Articles}}&lt;br /&gt;
&#039;&#039;This article appears in the [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|Life Data Analysis Reference book]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
In probability plotting or rank regression analysis of interval or left censored data, difficulties arise when attempting to estimate the exact time within the interval at which the failure actually occurred, especially when the intervals overlap. The &#039;&#039;standard ranking method&#039;&#039; (SRM) is not applicable to interval data; ReliaSoft has therefore formulated a more sophisticated methodology that allows for more accurate probability plotting and regression analysis of data sets with interval or left censored data. This method uses traditional rank regression and iteratively improves upon the computed ranks by parametrically recomputing new ranks and the most probable failure time for interval data.&lt;br /&gt;
&lt;br /&gt;
In the traditional method for plotting or rank regression analysis of right censored data, the effect of the exact censoring time is not considered. One example of this can be seen in the [[Parameter_Estimation#Shortfalls_of_the_Rank_Adjustment_Method|parameter estimation]] chapter. The ReliaSoft ranking method can also be used to overcome this shortfall of the standard ranking method.&lt;br /&gt;
&lt;br /&gt;
The following step-by-step example illustrates the ReliaSoft ranking method (RRM), which is an iterative improvement on the standard ranking method (SRM). Although this method is illustrated by the use of the two-parameter Weibull distribution, it can be easily generalized for other models.&lt;br /&gt;
&lt;br /&gt;
Consider the following test data:&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.1- The Test Data&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| ||10&lt;br /&gt;
|-align=&amp;quot;center&amp;quot; &lt;br /&gt;
|1||Right Censored|| ||20&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Left Censored||0||30&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Exact Failure|| ||40&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| ||50&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Right Censored|| ||60&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Left Censored||0||70&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Interval Failure||20||80&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Interval Failure||10||85&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Left Censored||0||100&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===  Initial Parameter Estimation===&lt;br /&gt;
As a preliminary step, we need to provide a crude estimate of the Weibull parameters for this data. To begin, we will extract the exact times-to-failure (10, 40, and 50) and the midpoints of the interval failures. The midpoints are 50 (for the interval of 20 to 80) and 47.5 (for the interval of 10 to 85). Next, we group together the items that have the same failure times, as shown in Table B.2.&lt;br /&gt;
&lt;br /&gt;
Using traditional rank regression, we obtain the initial estimates:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\widehat{\beta }}_{0}}= &amp;amp; 1.91367089 \\ &lt;br /&gt;
 &amp;amp; {{\widehat{\eta }}_{0}}= &amp;amp; 43.91657736  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.2- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
|- &lt;br /&gt;
|1||Exact Failure|| ||10&lt;br /&gt;
|- &lt;br /&gt;
|2||Exact Failure|| ||40&lt;br /&gt;
|- &lt;br /&gt;
|1||Exact Failure|| ||47.5&lt;br /&gt;
|- &lt;br /&gt;
|3||Exact Failure||  ||50&lt;br /&gt;
|} &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For all intervals, we obtain a weighted &#039;&#039;midpoint&#039;&#039; using:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   {{{\hat{t}}}_{m}}\left( \hat{\beta },\hat{\eta } \right)= &amp;amp; \frac{\int_{LI}^{TF}t\text{ }f(t;\hat{\beta },\hat{\eta })dt}{\int_{LI}^{TF}f(t;\hat{\beta },\hat{\eta })dt}, \\ &lt;br /&gt;
  = &amp;amp; \frac{\int_{LI}^{TF}t\tfrac{{\hat{\beta }}}{{\hat{\eta }}}{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{\hat{\beta }-1}}{{e}^{-{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{{\hat{\beta }}}}}}dt}{\int_{LI}^{TF}\tfrac{{\hat{\beta }}}{{\hat{\eta }}}{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{\hat{\beta }-1}}{{e}^{-{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{{\hat{\beta }}}}}}dt}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This transforms our data into the format in Table B.3.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;5&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.3- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures, Based upon the Parameters &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
!Weighted &amp;quot;Midpoint&amp;quot;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure||  ||10 ||&lt;br /&gt;
|-  align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Exact Failure||  ||40||&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| || 50||&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Interval Failure||20||80||42.837&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Interval Failure||10||85||39.169&lt;br /&gt;
|}&lt;br /&gt;
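The weighted midpoints shown in Table B.3 come from evaluating the conditional-mean integral above with the initial estimates. The following sketch reproduces them numerically, assuming the initial estimates 1.91367089 and 43.91657736 from the preliminary rank regression; composite Simpson's rule is simply one convenient quadrature choice, not part of the method itself:

```python
import math

def weibull_pdf(t, beta, eta):
    """Two-parameter Weibull density f(t; beta, eta)."""
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

def weighted_midpoint(lo, hi, beta, eta, n=2000):
    """Conditional mean of T on [lo, hi]: int(t*f)dt / int(f)dt, via Simpson's rule."""
    h = (hi - lo) / n  # n must be even for composite Simpson
    num = den = 0.0
    for i in range(n + 1):
        t = lo + i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        f = weibull_pdf(t, beta, eta)
        num += w * t * f
        den += w * f
    return num / den

beta0, eta0 = 1.91367089, 43.91657736  # initial estimates from the rank regression above
print(weighted_midpoint(20, 80, beta0, eta0))  # approx. 42.837
print(weighted_midpoint(10, 85, beta0, eta0))  # approx. 39.169
```

The two calls correspond to the interval failures observed between 20 and 80 and between 10 and 85.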
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Now we arrange the data as in Table B.4.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|Table B.4- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures, in Ascending Order.&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Time&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||10&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||39.169&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||40&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||42.837&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||50&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We now consider the left and right censored data, as in Table B.5.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;7&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.5- Computation of Increments in a Matrix Format for Computing a Revised Mean Order Number.&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!2 Left Censored &#039;&#039;t&#039;&#039; = 30&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039; = 70&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039; = 100&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039; = 20&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039; = 60&lt;br /&gt;
|- &lt;br /&gt;
|1||10||&amp;lt;math&amp;gt;2 \frac{\int_0^{10} f_0(t)dt}{F_0 (30)-F_0 (0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_0^{10} f_0 (t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_0^{10} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || 0||0&lt;br /&gt;
|- &lt;br /&gt;
|1||39.169||&amp;lt;math&amp;gt;2 \frac{\int_{10}^{30} f_0(t)dt}{F_0(30)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{10}^{39.169} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{10}^{39.169} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{20}^{39.169} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0||&amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt; ||0&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0|| &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt;|| &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0||&amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|}&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
In general, for left censored data:&lt;br /&gt;
&lt;br /&gt;
:•	The increment term for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; left censored items at time &amp;lt;math&amp;gt;={{t}_{0}},\,\!&amp;lt;/math&amp;gt; with a time-to-failure of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; when &amp;lt;math&amp;gt;{{t}_{0}}\le {{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is zero.&lt;br /&gt;
:•	When &amp;lt;math&amp;gt;{{t}_{0}}&amp;gt;{{t}_{i-1}},\,\!&amp;lt;/math&amp;gt; the contribution is:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{n}{{{F}_{0}}({{t}_{0}})-{{F}_{0}}(0)}\underset{{{t}_{i-1}}}{\overset{MIN({{t}_{i}},{{t}_{0}})}{\mathop \int }}\,{{f}_{0}}\left( t \right)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:or:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;n\frac{{{F}_{0}}(MIN({{t}_{i}},{{t}_{0}}))-{{F}_{0}}({{t}_{i-1}})}{{{F}_{0}}({{t}_{0}})-{{F}_{0}}(0)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
where &amp;lt;math&amp;gt;{{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is the time-to-failure previous to the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; time-to-failure and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of units associated with that time-to-failure (or units in the group).&lt;br /&gt;
&lt;br /&gt;
In general, for right censored data:&lt;br /&gt;
:•	The increment term for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; right censored items at time &amp;lt;math&amp;gt;={{t}_{0}},\,\!&amp;lt;/math&amp;gt; with a time-to-failure of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;, when &amp;lt;math&amp;gt;{{t}_{0}}\ge {{t}_{i}}\,\!&amp;lt;/math&amp;gt; is zero.&lt;br /&gt;
:•	When &amp;lt;math&amp;gt;{{t}_{0}}&amp;lt;{{t}_{i}},\,\!&amp;lt;/math&amp;gt; the contribution is:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{n}{{{F}_{0}}(\infty )-{{F}_{0}}({{t}_{0}})}\underset{MAX({{t}_{0}},{{t}_{i-1}})}{\overset{{{t}_{i}}}{\mathop \int }}\,{{f}_{0}}\left( t \right)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:or:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;n\frac{{{F}_{0}}({{t}_{i}})-{{F}_{0}}(MAX({{t}_{0}},{{t}_{i-1}}))}{{{F}_{0}}(\infty )-{{F}_{0}}({{t}_{0}})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
where &amp;lt;math&amp;gt;{{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is the time-to-failure previous to the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; time-to-failure and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of units associated with that time-to-failure (or units in the group).&lt;br /&gt;
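The left- and right-censored increment formulas above can be sketched in code. The following Python sketch (an illustration only, not ReliaSoft's implementation) evaluates every increment for this example, taking the Weibull CDF under the initial estimates as the assumed &lt;i&gt;F&lt;/i&gt;&lt;sub&gt;0&lt;/sub&gt;; the resulting row sums agree with Table B.6 to within rounding.&lt;br /&gt;

```python
import math

# current Weibull estimates (the initial crude fit from the example)
BETA0, ETA0 = 1.91367089, 43.91657736

def F0(t):
    # Weibull CDF under the current parameter estimates; F0(0) = 0
    return 1.0 - math.exp(-((t / ETA0) ** BETA0))

# (number of items, time) for each failure row, in ascending order (Table B.4)
failures = [(1, 10.0), (1, 39.169), (2, 40.0), (2, 42.837), (1, 50.0)]
left_censored = [(2, 30.0), (1, 70.0), (1, 100.0)]  # (n, t0): failed before t0
right_censored = [(1, 20.0), (1, 60.0)]             # (n, t0): survived past t0

def left_increment(n, t0, t_prev, t_i):
    # zero when the left-censoring time is at or before the previous failure time
    if t_prev >= t0:
        return 0.0
    return n * (F0(min(t_i, t0)) - F0(t_prev)) / (F0(t0) - F0(0.0))

def right_increment(n, t0, t_prev, t_i):
    # zero when the right-censoring time is at or after this failure time
    if t0 >= t_i:
        return 0.0
    return n * (F0(t_i) - F0(max(t0, t_prev))) / (1.0 - F0(t0))

increments = []
t_prev = 0.0
for n_items, t_i in failures:
    inc = sum(left_increment(n, t0, t_prev, t_i) for n, t0 in left_censored)
    inc += sum(right_increment(n, t0, t_prev, t_i) for n, t0 in right_censored)
    increments.append(inc)
    t_prev = t_i
```

The closed-form CDF differences replace the integrals of the density, since the integral of &lt;i&gt;f&lt;/i&gt;&lt;sub&gt;0&lt;/sub&gt; over an interval is just the difference of &lt;i&gt;F&lt;/i&gt;&lt;sub&gt;0&lt;/sub&gt; at its endpoints.&lt;br /&gt;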
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 4&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sum up the increments (horizontally in rows), as in Table B.6.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;8&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.6- Increments Solved Numerically, Along with the Sum of Each Row.&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!2 Left Censored &#039;&#039;t&#039;&#039;=30&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039;=70&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039;=100&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039;=20&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039;=60&lt;br /&gt;
!Sum of row (increment)&lt;br /&gt;
|-&lt;br /&gt;
|1||10||0.299065||0.062673||0.057673||0||0||0.419411&lt;br /&gt;
|-&lt;br /&gt;
|1||39.169||1.700935||0.542213||0.498959||0.440887||0||3.182994&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0||0.015892||0.014625||0.018113||0||0.048630&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0||0.052486||0.048299||0.059821||0||0.160606&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0||0.118151||0.108726||0.134663||0||0.361540&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 5&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute new mean order numbers (MON), as shown in Table B.7, utilizing the increments obtained in Table B.6: each MON is the &#039;&#039;number of items&#039;&#039; plus the &#039;&#039;previous MON&#039;&#039; plus the current increment.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.7- Mean Order Numbers (MON)&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!Sum of row (increment)&lt;br /&gt;
!Mean Order Number&lt;br /&gt;
|-&lt;br /&gt;
|1||10||0.419411||1.419411&lt;br /&gt;
|-&lt;br /&gt;
|1||39.169||3.182994||5.602405&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0.048630||7.651035&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0.160606||9.811641&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0.361540||11.173181&lt;br /&gt;
|}&lt;br /&gt;
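The MON recursion can be carried out directly as a check; this minimal sketch reproduces the last column of Table B.7 from the row sums of Table B.6.&lt;br /&gt;

```python
# (number of items, row-sum increment) pairs from Table B.6
rows = [(1, 0.419411), (1, 3.182994), (2, 0.048630), (2, 0.160606), (1, 0.361540)]

mon = 0.0
mons = []
for n_items, increment in rows:
    mon = n_items + mon + increment  # MON_i = n_i + MON_(i-1) + increment_i
    mons.append(round(mon, 6))
```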
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 6&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute the median ranks based on these new MONs as shown in Table B.8.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.8- Mean Order Numbers with Their Ranks for a Sample Size of 13 Units.&lt;br /&gt;
|-&lt;br /&gt;
!Time&lt;br /&gt;
!MON&lt;br /&gt;
!Ranks&lt;br /&gt;
|-&lt;br /&gt;
|10||1.419411||0.0825889&lt;br /&gt;
|-&lt;br /&gt;
|39.169||5.602405||0.3952894&lt;br /&gt;
|-&lt;br /&gt;
|40||7.651035||0.5487781&lt;br /&gt;
|-&lt;br /&gt;
|42.837||9.811641||0.7106217&lt;br /&gt;
|-&lt;br /&gt;
|50||11.173181||0.8124983&lt;br /&gt;
|}&lt;br /&gt;
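The median rank for a (possibly fractional) mean order number MON out of N = 13 units is the value x solving I&lt;sub&gt;x&lt;/sub&gt;(MON, N - MON + 1) = 0.5, where I is the regularized incomplete beta function. The self-contained sketch below uses Simpson quadrature plus bisection as illustrative choices; in practice a library routine such as scipy.special.betaincinv would be preferred. It reproduces the ranks in Table B.8 to within a small numerical tolerance.&lt;br /&gt;

```python
import math

def beta_pdf(t, a, b, ln_beta):
    # density of the Beta(a, b) distribution; zero at the endpoints here (a, b > 1)
    if t == 0.0 or t == 1.0:
        return 0.0
    return math.exp((a - 1.0) * math.log(t) + (b - 1.0) * math.log(1.0 - t) - ln_beta)

def beta_cdf(x, a, b, n=4000):
    # regularized incomplete beta I_x(a, b) by composite Simpson's rule
    ln_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    h = x / n
    s = beta_pdf(0.0, a, b, ln_beta) + beta_pdf(x, a, b, ln_beta)
    s += sum((4 if k % 2 else 2) * beta_pdf(k * h, a, b, ln_beta) for k in range(1, n))
    return s * h / 3.0

def median_rank(mon, n_total):
    # solve I_x(mon, n_total - mon + 1) = 0.5 for x by bisection
    a, b = mon, n_total - mon + 1.0
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if beta_cdf(mid, a, b) >= 0.5:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

ranks = [median_rank(m, 13) for m in (1.419411, 5.602405, 7.651035, 9.811641, 11.173181)]
```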
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 7&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute new &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; using standard rank regression, based upon the data shown in Table B.9.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
!Time&lt;br /&gt;
!Ranks&lt;br /&gt;
|-&lt;br /&gt;
|10||0.0825889&lt;br /&gt;
|-&lt;br /&gt;
|39.169||0.3952894&lt;br /&gt;
|-&lt;br /&gt;
|40||0.5487781&lt;br /&gt;
|-&lt;br /&gt;
|42.837||0.7106217&lt;br /&gt;
|-&lt;br /&gt;
|50||0.8124983&lt;br /&gt;
|}&lt;br /&gt;
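The rank regression on X step can be sketched as follows: linearize the Weibull CDF (x = ln t against y = ln(-ln(1 - median rank))), regress x on y by least squares, and recover beta as the reciprocal of the slope and eta as the exponential of the intercept. This Python sketch (an illustration of the standard linearization, not necessarily Weibull++'s exact implementation) uses one point per group from Table B.8 and reproduces, to rounding, the first-iteration parameters in Table B.10.&lt;br /&gt;

```python
import math

# (time, median rank) pairs, one per group, from Table B.8
data = [(10.0, 0.0825889), (39.169, 0.3952894), (40.0, 0.5487781),
        (42.837, 0.7106217), (50.0, 0.8124983)]

# Weibull probability-plot linearization: x = ln(t), y = ln(-ln(1 - rank))
xs = [math.log(t) for t, _ in data]
ys = [math.log(-math.log(1.0 - r)) for _, r in data]

# rank regression on X: least-squares fit of x = a + b*y
m = len(data)
xbar, ybar = sum(xs) / m, sum(ys) / m
b = sum((y - ybar) * (x - xbar) for x, y in zip(xs, ys)) / sum((y - ybar) ** 2 for y in ys)
a = xbar - b * ybar

beta_hat = 1.0 / b     # Weibull shape parameter estimate
eta_hat = math.exp(a)  # Weibull scale parameter estimate
```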
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 8&#039;&#039;&#039;&lt;br /&gt;
Return to Step 1 and repeat the process until acceptable convergence is reached on the parameters (i.e., the parameter values stabilize).&lt;br /&gt;
&lt;br /&gt;
===Results===&lt;br /&gt;
The results of the first five iterations are shown in Table B.10.&lt;br /&gt;
Using Weibull++ with rank regression on X yields:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center;&amp;quot;|Table B.10- The parameters after the first five iterations&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
!&#039;&#039;Iteration&#039;&#039;&lt;br /&gt;
!&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
!&amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||1.845638||42.576422&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||1.830621 ||42.039743&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|3||1.828010 ||41.830615&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|4||1.828030 ||41.749708&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|5||1.828383 ||41.717990&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{\beta }}_{RRX}}=1.82890,\text{ }{{\widehat{\eta }}_{RRX}}=41.69774\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The direct MLE solution yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{\beta }}_{MLE}}=2.10432,\text{ }{{\widehat{\eta }}_{MLE}}=42.31535\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=ReliaSoft%27s_Alternate_Ranking_Method&amp;diff=57251</id>
		<title>ReliaSoft&#039;s Alternate Ranking Method</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=ReliaSoft%27s_Alternate_Ranking_Method&amp;diff=57251"/>
		<updated>2015-02-25T21:05:58Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner Weibull Articles}}&lt;br /&gt;
&#039;&#039;This article appears in the [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|Life Data Analysis Reference book]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
In probability plotting or rank regression analysis of interval or left censored data, difficulties arise when attempting to estimate the exact time within an interval at which the failure actually occurred, especially when the intervals overlap. The &#039;&#039;standard ranking method&#039;&#039; (SRM) is not applicable to such interval data; thus, ReliaSoft has formulated a more sophisticated methodology that allows for more accurate probability plotting and regression analysis of data sets with interval or left censored data. This method uses the traditional rank regression method and iteratively improves upon the computed ranks by parametrically recomputing new ranks and the most probable failure time for interval data.&lt;br /&gt;
&lt;br /&gt;
In the traditional method for plotting or rank regression analysis of right censored data, the effect of the exact censoring time is not considered. One example of this can be seen in the [[Parameter_Estimation#Shortfalls_of_the_Rank_Adjustment_Method|parameter estimation]] chapter. The ReliaSoft ranking method can also be used to overcome this shortfall of the standard ranking method.&lt;br /&gt;
&lt;br /&gt;
The following step-by-step example illustrates the ReliaSoft ranking method (RRM), an iterative improvement on the standard ranking method (SRM). Although the method is illustrated here using the two-parameter Weibull distribution, it can easily be generalized to other models.&lt;br /&gt;
&lt;br /&gt;
Consider the following test data:&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.1- The Test Data&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| ||10&lt;br /&gt;
|-align=&amp;quot;center&amp;quot; &lt;br /&gt;
|1||Right Censored|| ||20&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Left Censored||0||30&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Exact Failure|| ||40&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| ||50&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Right Censored|| ||60&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Left Censored||0||70&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Interval Failure||20||80&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Interval Failure||10||85&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Left Censored||0||100&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Initial Parameter Estimation===&lt;br /&gt;
As a preliminary step, we need to provide a crude estimate of the Weibull parameters for this data. To begin, we will extract the exact times-to-failure (10, 40, and 50) and the midpoints of the interval failures. The midpoints are 50 (for the interval of 20 to 80) and 47.5 (for the interval of 10 to 85). Next, we group together the items that have the same failure times, as shown in Table B.2.&lt;br /&gt;
&lt;br /&gt;
Using the traditional rank regression, we obtain the first initial estimates:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\widehat{\beta }}_{0}}= &amp;amp; 1.91367089 \\ &lt;br /&gt;
 &amp;amp; {{\widehat{\eta }}_{0}}= &amp;amp; 43.91657736  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.2- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
|- &lt;br /&gt;
|1||Exact Failure|| ||10&lt;br /&gt;
|- &lt;br /&gt;
|2||Exact Failure|| ||40&lt;br /&gt;
|- &lt;br /&gt;
|1||Exact Failure|| ||47.5&lt;br /&gt;
|- &lt;br /&gt;
|3||Exact Failure||  ||50&lt;br /&gt;
|} &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For all intervals, we obtain a weighted &#039;&#039;midpoint&#039;&#039; using:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   {{{\hat{t}}}_{m}}\left( \hat{\beta },\hat{\eta } \right)= &amp;amp; \frac{\int_{LI}^{TF}t\text{ }f(t;\hat{\beta },\hat{\eta })dt}{\int_{LI}^{TF}f(t;\hat{\beta },\hat{\eta })dt}, \\ &lt;br /&gt;
  = &amp;amp; \frac{\int_{LI}^{TF}t\tfrac{{\hat{\beta }}}{{\hat{\eta }}}{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{\hat{\beta }-1}}{{e}^{-{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{{\hat{\beta }}}}}}dt}{\int_{LI}^{TF}\tfrac{{\hat{\beta }}}{{\hat{\eta }}}{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{\hat{\beta }-1}}{{e}^{-{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{{\hat{\beta }}}}}}dt}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This transforms our data into the format in Table B.3.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;5&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.3- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures, Based upon the Parameters &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
!Weighted &amp;quot;Midpoint&amp;quot;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure||  ||10 ||&lt;br /&gt;
|-  align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Exact Failure||  ||40||&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| || 50||&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Interval Failure||20||80||42.837&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Interval Failure||10||85||39.169&lt;br /&gt;
|}&lt;br /&gt;
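The weighted midpoints in Table B.3 can be checked by evaluating the integral above numerically. The following Python sketch (simple quadrature is an illustrative choice here, not necessarily how Weibull++ evaluates the integral) reproduces the values 42.837 and 39.169 using the initial parameter estimates.&lt;br /&gt;

```python
import math

# initial Weibull estimates from the crude fit above
BETA0, ETA0 = 1.91367089, 43.91657736

def weibull_pdf(t):
    return (BETA0 / ETA0) * (t / ETA0) ** (BETA0 - 1.0) * math.exp(-((t / ETA0) ** BETA0))

def simpson(g, a, b, n=2000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3.0

def weighted_midpoint(last_inspection, failure_time):
    # conditional mean of the failure time within the inspection interval
    num = simpson(lambda t: t * weibull_pdf(t), last_inspection, failure_time)
    den = simpson(weibull_pdf, last_inspection, failure_time)
    return num / den

tm_20_80 = weighted_midpoint(20.0, 80.0)  # about 42.84, matching Table B.3
tm_10_85 = weighted_midpoint(10.0, 85.0)  # about 39.17, matching Table B.3
```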
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Now we arrange the data as in Table B.4.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|Table B.4- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures, in Ascending Order.&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Time&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||10&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||39.169&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||40&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||42.837&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||50&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We now consider the left and right censored data, as in Table B.5.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;7&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.5- Computation of Increments in a Matrix Format for Computing a Revised Mean Order Number.&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!2 Left Censored &#039;&#039;t&#039;&#039; = 30&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039; = 70&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039; = 100&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039; = 20&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039; = 60&lt;br /&gt;
|- &lt;br /&gt;
|1||10||&amp;lt;math&amp;gt;2 \frac{\int_0^{10} f_0(t)dt}{F_0 (30)-F_0 (0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_0^{10} f_0 (t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_0^{10} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || 0||0&lt;br /&gt;
|- &lt;br /&gt;
|1||39.169||&amp;lt;math&amp;gt;2 \frac{\int_{10}^{30} f_0(t)dt}{F_0(30)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{10}^{39.169} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{10}^{39.169} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{20}^{39.169} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0||&amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt; ||0&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0|| &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt;|| &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0||&amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|}&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
In general, for left censored data:&lt;br /&gt;
&lt;br /&gt;
:•	The increment term for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; left censored items at time &amp;lt;math&amp;gt;={{t}_{0}},\,\!&amp;lt;/math&amp;gt; with a time-to-failure of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; when &amp;lt;math&amp;gt;{{t}_{0}}\le {{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is zero.&lt;br /&gt;
:•	When &amp;lt;math&amp;gt;{{t}_{0}}&amp;gt;{{t}_{i-1}},\,\!&amp;lt;/math&amp;gt; the contribution is:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{n}{{{F}_{0}}({{t}_{0}})-{{F}_{0}}(0)}\underset{{{t}_{i-1}}}{\overset{MIN({{t}_{i}},{{t}_{0}})}{\mathop \int }}\,{{f}_{0}}\left( t \right)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:or:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;n\frac{{{F}_{0}}(MIN({{t}_{i}},{{t}_{0}}))-{{F}_{0}}({{t}_{i-1}})}{{{F}_{0}}({{t}_{0}})-{{F}_{0}}(0)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
where &amp;lt;math&amp;gt;{{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is the time-to-failure previous to the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; time-to-failure and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of units associated with that time-to-failure (or units in the group).&lt;br /&gt;
&lt;br /&gt;
In general, for right censored data:&lt;br /&gt;
:•	The increment term for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; right censored items at time &amp;lt;math&amp;gt;={{t}_{0}},\,\!&amp;lt;/math&amp;gt; with a time-to-failure of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;, when &amp;lt;math&amp;gt;{{t}_{0}}\ge {{t}_{i}}\,\!&amp;lt;/math&amp;gt; is zero.&lt;br /&gt;
:•	When &amp;lt;math&amp;gt;{{t}_{0}}&amp;lt;{{t}_{i}},\,\!&amp;lt;/math&amp;gt; the contribution is:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{n}{{{F}_{0}}(\infty )-{{F}_{0}}({{t}_{0}})}\underset{MAX({{t}_{0}},{{t}_{i-1}})}{\overset{{{t}_{i}}}{\mathop \int }}\,{{f}_{0}}\left( t \right)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:or:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;n\frac{{{F}_{0}}({{t}_{i}})-{{F}_{0}}(MAX({{t}_{0}},{{t}_{i-1}}))}{{{F}_{0}}(\infty )-{{F}_{0}}({{t}_{0}})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
where &amp;lt;math&amp;gt;{{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is the time-to-failure previous to the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; time-to-failure and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of units associated with that time-to-failure (or units in the group).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 4&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sum up the increments (horizontally in rows), as in Table B.6.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;8&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.6- Increments Solved Numerically, Along with the Sum of Each Row.&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!2 Left Censored &#039;&#039;t&#039;&#039;=30&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039;=70&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039;=100&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039;=20&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039;=60&lt;br /&gt;
!Sum of row (increment)&lt;br /&gt;
|-&lt;br /&gt;
|1||10||0.299065||0.062673||0.057673||0||0||0.419411&lt;br /&gt;
|-&lt;br /&gt;
|1||39.169||1.700935||0.542213||0.498959||0.440887||0||3.182994&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0||0.015892||0.014625||0.018113||0||0.048630&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0||0.052486||0.048299||0.059821||0||0.160606&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0||0.118151||0.108726||0.134663||0||0.361540&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 5&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute new mean order numbers (MON), as shown in Table B.7, utilizing the increments obtained in Table B.6: each MON is the &#039;&#039;number of items&#039;&#039; plus the &#039;&#039;previous MON&#039;&#039; plus the current increment.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.7- Mean Order Numbers (MON)&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!Sum of row (increment)&lt;br /&gt;
!Mean Order Number&lt;br /&gt;
|-&lt;br /&gt;
|1||10||0.419411||1.419411&lt;br /&gt;
|-&lt;br /&gt;
|1||39.169||3.182994||5.602405&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0.048630||7.651035&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0.160606||9.811641&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0.361540||11.173181&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 6&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute the median ranks based on these new MONs as shown in Table B.8.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.8- Mean Order Numbers with Their Ranks for a Sample Size of 13 Units.&lt;br /&gt;
|-&lt;br /&gt;
!Time&lt;br /&gt;
!MON&lt;br /&gt;
!Ranks&lt;br /&gt;
|-&lt;br /&gt;
|10||1.419411||0.0825889&lt;br /&gt;
|-&lt;br /&gt;
|39.169||5.602405||0.3952894&lt;br /&gt;
|-&lt;br /&gt;
|40||7.651035||0.5487781&lt;br /&gt;
|-&lt;br /&gt;
|42.837||9.811641||0.7106217&lt;br /&gt;
|-&lt;br /&gt;
|50||11.173181||0.8124983&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 7&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute new &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; using standard rank regression, based upon the data shown in Table B.9.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
!Time&lt;br /&gt;
!Ranks&lt;br /&gt;
|-&lt;br /&gt;
|10||0.0825889&lt;br /&gt;
|-&lt;br /&gt;
|39.169||0.3952894&lt;br /&gt;
|-&lt;br /&gt;
|40||0.5487781&lt;br /&gt;
|-&lt;br /&gt;
|42.837||0.7106217&lt;br /&gt;
|-&lt;br /&gt;
|50||0.8124983&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 8&#039;&#039;&#039;&lt;br /&gt;
Return to Step 1 and repeat the process until acceptable convergence is reached on the parameters (i.e., the parameter values stabilize).&lt;br /&gt;
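One way to make the stopping criterion in Step 8 concrete is a relative-change test on the parameters; the helper below and its tolerance are illustrative assumptions, not a documented Weibull++ setting.&lt;br /&gt;

```python
def rrm_converged(new_params, old_params, rel_tol=1e-4):
    # stop when no parameter (e.g., beta, eta) changed by more than rel_tol relatively
    return not any(abs(new - old) / abs(old) > rel_tol
                   for new, old in zip(new_params, old_params))
```

Between iterations 4 and 5 in Table B.10, for example, beta changes by about 0.02% and eta by about 0.08%, so a 0.1% tolerance would already be met while a 0.05% tolerance would not.&lt;br /&gt;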
&lt;br /&gt;
===Results===&lt;br /&gt;
The results of the first five iterations are shown in Table B.10.&lt;br /&gt;
Using Weibull++ with rank regression on X yields:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center;&amp;quot;|Table B.10- The parameters after the first five iterations&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
!&#039;&#039;Iteration&#039;&#039;&lt;br /&gt;
!&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
!&amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||1.845638||42.576422&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||1.830621 ||42.039743&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|3||1.828010 ||41.830615&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|4||1.828030 ||41.749708&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|5||1.828383 ||41.717990&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{\beta }}_{RRX}}=1.82890,\text{ }{{\widehat{\eta }}_{RRX}}=41.69774\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The direct MLE solution yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{\beta }}_{MLE}}=2.10432,\text{ }{{\widehat{\eta }}_{MLE}}=42.31535\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=ReliaSoft%27s_Alternate_Ranking_Method&amp;diff=57250</id>
		<title>ReliaSoft&#039;s Alternate Ranking Method</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=ReliaSoft%27s_Alternate_Ranking_Method&amp;diff=57250"/>
		<updated>2015-02-25T21:05:21Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner Weibull Articles}}&lt;br /&gt;
&#039;&#039;This article appears in the [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|Life Data Analysis Reference book]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
In probability plotting or rank regression analysis of interval or left censored data, difficulties arise when attempting to estimate the exact time within an interval at which the failure actually occurred, especially when the intervals overlap. The &#039;&#039;standard ranking method&#039;&#039; (SRM) is not applicable to such interval data; thus, ReliaSoft has formulated a more sophisticated methodology that allows for more accurate probability plotting and regression analysis of data sets with interval or left censored data. This method uses the traditional rank regression method and iteratively improves upon the computed ranks by parametrically recomputing new ranks and the most probable failure time for interval data.&lt;br /&gt;
&lt;br /&gt;
In the traditional method for plotting or rank regression analysis of right censored data, the effect of the exact censoring time is not considered. One example of this can be seen in the [[Parameter_Estimation#Shortfalls_of_the_Rank_Adjustment_Method|Parameter Estimation]] chapter. The ReliaSoft ranking method can also be used to overcome this shortfall of the standard ranking method.&lt;br /&gt;
&lt;br /&gt;
The following step-by-step example illustrates the ReliaSoft ranking method (RRM), which is an iterative improvement on the standard ranking method (SRM). Although this method is illustrated by the use of the two-parameter Weibull distribution, it can be easily generalized for other models.&lt;br /&gt;
&lt;br /&gt;
Consider the following test data:&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.1- The Test Data&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| ||10&lt;br /&gt;
|-align=&amp;quot;center&amp;quot; &lt;br /&gt;
|1||Right Censored|| ||20&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Left Censored||0||30&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Exact Failure|| ||40&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| ||50&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Right Censored|| ||60&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Left Censored||0||70&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Interval Failure||20||80&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Interval Failure||10||85&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Left Censored||0||100&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===  Initial Parameter Estimation===&lt;br /&gt;
As a preliminary step, we need to provide a crude estimate of the Weibull parameters for this data. To begin, we will extract the exact times-to-failure (10, 40, and 50) and the midpoints of the interval failures. The midpoints are 50 (for the interval of 20 to 80) and 47.5 (for the interval of 10 to 85). Next, we group together the items that have the same failure times, as shown in Table B.2.&lt;br /&gt;
&lt;br /&gt;
Using traditional rank regression, we obtain the initial estimates:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\widehat{\beta }}_{0}}= &amp;amp; 1.91367089 \\ &lt;br /&gt;
 &amp;amp; {{\widehat{\eta }}_{0}}= &amp;amp; 43.91657736  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.2- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
|- &lt;br /&gt;
|1||Exact Failure|| ||10&lt;br /&gt;
|- &lt;br /&gt;
|2||Exact Failure|| ||40&lt;br /&gt;
|- &lt;br /&gt;
|1||Exact Failure|| ||47.5&lt;br /&gt;
|- &lt;br /&gt;
|3||Exact Failure||  ||50&lt;br /&gt;
|} &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For all intervals, we obtain a weighted &#039;&#039;midpoint&#039;&#039; using:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   {{{\hat{t}}}_{m}}\left( \hat{\beta },\hat{\eta } \right)= &amp;amp; \frac{\int_{LI}^{TF}t\text{ }f(t;\hat{\beta },\hat{\eta })dt}{\int_{LI}^{TF}f(t;\hat{\beta },\hat{\eta })dt}, \\ &lt;br /&gt;
  = &amp;amp; \frac{\int_{LI}^{TF}t\tfrac{{\hat{\beta }}}{{\hat{\eta }}}{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{\hat{\beta }-1}}{{e}^{-{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{{\hat{\beta }}}}}}dt}{\int_{LI}^{TF}\tfrac{{\hat{\beta }}}{{\hat{\eta }}}{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{\hat{\beta }-1}}{{e}^{-{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{{\hat{\beta }}}}}}dt}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This transforms our data into the format in Table B.3.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;5&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.3- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures, Based upon the Parameters &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
!Weighted &amp;quot;Midpoint&amp;quot;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure||  ||10 ||&lt;br /&gt;
|-  align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Exact Failure||  ||40||&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| || 50||&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Interval Failure||20||80||42.837&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Interval Failure||10||85||39.169&lt;br /&gt;
|}&lt;br /&gt;
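The weighted midpoints in Table B.3 (42.837 and 39.169) can be reproduced by numerical integration of the expression above. The following is a minimal Python sketch, assuming the initial estimates 1.91367089 and 43.91657736 obtained from the rank regression on Table B.2; the function names are illustrative.&lt;br /&gt;

```python
import math

def weibull_pdf(t, beta, eta):
    """Two-parameter Weibull probability density function."""
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

def weighted_midpoint(li, tf, beta, eta, n=2000):
    """Weighted 'midpoint' of an interval failure (LI, TF):
    the conditional mean of t over the interval, via Simpson's rule."""
    h = (tf - li) / n
    num = den = 0.0
    for i in range(n + 1):
        t = li + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)  # Simpson weights
        num += w * t * weibull_pdf(t, beta, eta)
        den += w * weibull_pdf(t, beta, eta)
    return num / den

beta0, eta0 = 1.91367089, 43.91657736
print(round(weighted_midpoint(20, 80, beta0, eta0), 3))  # ~42.837
print(round(weighted_midpoint(10, 85, beta0, eta0), 3))  # ~39.169
```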
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Now we arrange the data as in Table B.4.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|Table B.4- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures, in Ascending Order.&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Time&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||10&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||39.169&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||40&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||42.837&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||50&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We now consider the left and right censored data, as in Table B.5.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;7&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.5- Computation of Increments in a Matrix Format for Computing a Revised Mean Order Number.&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!2 Left Censored &#039;&#039;t&#039;&#039; = 30&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039; = 70&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039; = 100&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039; = 20&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039; = 60&lt;br /&gt;
|- &lt;br /&gt;
|1||10||&amp;lt;math&amp;gt;2 \frac{\int_0^{10} f_0(t)dt}{F_0 (30)-F_0 (0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_0^{10} f_0 (t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_0^{10} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || 0||0&lt;br /&gt;
|- &lt;br /&gt;
|1||39.169||&amp;lt;math&amp;gt;2 \frac{\int_{10}^{30} f_0(t)dt}{F_0(30)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{10}^{39.169} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{10}^{39.169} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{20}^{39.169} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0||&amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt; ||0&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0|| &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt;|| &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0||&amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|}&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
In general, for left censored data:&lt;br /&gt;
&lt;br /&gt;
:•	The increment term for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; left censored items at time &amp;lt;math&amp;gt;={{t}_{0}},\,\!&amp;lt;/math&amp;gt; with a time-to-failure of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; when &amp;lt;math&amp;gt;{{t}_{0}}\le {{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is zero.&lt;br /&gt;
:•	When &amp;lt;math&amp;gt;{{t}_{0}}&amp;gt;{{t}_{i-1}},\,\!&amp;lt;/math&amp;gt; the contribution is:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{n}{{{F}_{0}}({{t}_{0}})-{{F}_{0}}(0)}\underset{{{t}_{i-1}}}{\overset{MIN({{t}_{i}},{{t}_{0}})}{\mathop \int }}\,{{f}_{0}}\left( t \right)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:or:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;n\frac{{{F}_{0}}(MIN({{t}_{i}},{{t}_{0}}))-{{F}_{0}}({{t}_{i-1}})}{{{F}_{0}}({{t}_{0}})-{{F}_{0}}(0)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
where &amp;lt;math&amp;gt;{{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is the time-to-failure previous to the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; time-to-failure and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of units associated with that time-to-failure (or units in the group).&lt;br /&gt;
&lt;br /&gt;
In general, for right censored data:&lt;br /&gt;
:•	The increment term for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; right censored items at time &amp;lt;math&amp;gt;={{t}_{0}},\,\!&amp;lt;/math&amp;gt; with a time-to-failure of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;, when &amp;lt;math&amp;gt;{{t}_{0}}\ge {{t}_{i}}\,\!&amp;lt;/math&amp;gt; is zero.&lt;br /&gt;
:•	When &amp;lt;math&amp;gt;{{t}_{0}}&amp;lt;{{t}_{i}},\,\!&amp;lt;/math&amp;gt; the contribution is:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{n}{{{F}_{0}}(\infty )-{{F}_{0}}({{t}_{0}})}\underset{MAX({{t}_{0}},{{t}_{i-1}})}{\overset{{{t}_{i}}}{\mathop \int }}\,{{f}_{0}}\left( t \right)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:or:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;n\frac{{{F}_{0}}({{t}_{i}})-{{F}_{0}}(MAX({{t}_{0}},{{t}_{i-1}}))}{{{F}_{0}}(\infty )-{{F}_{0}}({{t}_{0}})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
where &amp;lt;math&amp;gt;{{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is the time-to-failure previous to the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; time-to-failure and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of units associated with that time-to-failure (or units in the group).&lt;br /&gt;
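Because the two-parameter Weibull CDF has the closed form &amp;lt;math&amp;gt;F(t)=1-{{e}^{-{{(t/\eta )}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;, every increment above reduces to simple arithmetic. The following is a minimal Python sketch, assuming the initial estimates 1.91367089 and 43.91657736 from the rank regression above and the failure times of Table B.4; the function and variable names are illustrative, and the results should approximate the first-iteration values in Table B.6.&lt;br /&gt;

```python
import math

def F(t, beta=1.91367089, eta=43.91657736):
    """Two-parameter Weibull CDF; F(infinity) = 1."""
    return 1.0 if t == math.inf else 1.0 - math.exp(-((t / eta) ** beta))

# (number of items, time) from Table B.4 and the censored groups from Table B.1
failures = [(1, 10), (1, 39.169), (2, 40), (2, 42.837), (1, 50)]
left_censored = [(2, 30), (1, 70), (1, 100)]
right_censored = [(1, 20), (1, 60)]

rows = []        # row sums of the increment matrix (Table B.6)
prev = 0.0       # previous time-to-failure, t_{i-1}
for n_i, t_i in failures:
    inc = 0.0
    for n, t0 in left_censored:   # zero contribution when t0 <= t_{i-1}
        if t0 > prev:
            inc += n * (F(min(t_i, t0)) - F(prev)) / (F(t0) - F(0))
    for n, t0 in right_censored:  # zero contribution when t0 >= t_i
        if t0 < t_i:
            inc += n * (F(t_i) - F(max(t0, prev))) / (1.0 - F(t0))
    rows.append(inc)
    prev = t_i

for (n_i, t_i), inc in zip(failures, rows):
    print(n_i, t_i, round(inc, 6))
```

Note that the denominator for a right censored item is &amp;lt;math&amp;gt;{{F}_{0}}(\infty )-{{F}_{0}}({{t}_{0}})\,\!&amp;lt;/math&amp;gt;, matching the general formula above.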
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 4&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sum up the increments (horizontally in rows), as in Table B.6.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;8&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.6- Increments Solved Numerically, Along with the Sum of Each Row.&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!2 Left Censored &#039;&#039;t&#039;&#039;=30&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039;=70&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039;=100&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039;=20&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039;=60&lt;br /&gt;
!Sum of row (increment)&lt;br /&gt;
|-&lt;br /&gt;
|1||10||0.299065||0.062673||0.057673||0||0||0.419411&lt;br /&gt;
|-&lt;br /&gt;
|1||39.169||1.700935||0.542213||0.498959||0.440887||0||3.182994&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0||0.015892||0.014625||0.018113||0||0.048630&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0||0.052486||0.048299||0.059821||0||0.160606&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0||0.118151||0.108726||0.134663||0||0.361540&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 5&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute new mean order numbers (MON), as shown in Table B.7, utilizing the increments obtained in Table B.6: each MON is the &#039;&#039;number of items&#039;&#039; plus the &#039;&#039;previous MON&#039;&#039; plus the current increment.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.7- Mean Order Numbers (MON)&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!Sum of row (increment)&lt;br /&gt;
!Mean Order Number&lt;br /&gt;
|-&lt;br /&gt;
|1||10||0.419411||1.419411&lt;br /&gt;
|-&lt;br /&gt;
|1||39.169||3.182994||5.602405&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0.048630||7.651035&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0.160606||9.811641&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0.361540||11.173181&lt;br /&gt;
|}&lt;br /&gt;
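The MON column follows by simple accumulation, as a short sketch using the row sums from Table B.6 shows:&lt;br /&gt;

```python
# (number of items, row-sum increment) from Table B.6
rows = [(1, 0.419411), (1, 3.182994), (2, 0.048630), (2, 0.160606), (1, 0.361540)]

mon = []
prev = 0.0
for n, inc in rows:
    prev = prev + n + inc   # MON_i = MON_{i-1} + n_i + increment_i
    mon.append(round(prev, 6))

print(mon)  # [1.419411, 5.602405, 7.651035, 9.811641, 11.173181]
```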
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 6&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute the median ranks based on these new MONs as shown in Table B.8.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.8- Mean Order Numbers with Their Ranks for a Sample Size of 13 Units.&lt;br /&gt;
|-&lt;br /&gt;
!Time&lt;br /&gt;
!MON&lt;br /&gt;
!Ranks&lt;br /&gt;
|-&lt;br /&gt;
|10||1.419411||0.0825889&lt;br /&gt;
|-&lt;br /&gt;
|39.169||5.602405||0.3952894&lt;br /&gt;
|-&lt;br /&gt;
|40||7.651035||0.5487781&lt;br /&gt;
|-&lt;br /&gt;
|42.837||9.811641||0.7106217&lt;br /&gt;
|-&lt;br /&gt;
|50||11.173181||0.8124983&lt;br /&gt;
|}&lt;br /&gt;
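For a non-integer mean order number, the median rank generalizes the usual formula: assuming, as is standard, that the rank for order number MON in a sample of N = 13 is the median of a Beta(MON, N &amp;minus; MON + 1) distribution, the values in Table B.8 can be approximated with a stdlib-only sketch that inverts the beta CDF by bisection over a Simpson-rule integral (names are illustrative).&lt;br /&gt;

```python
import math

def beta_cdf(x, a, b, n=2000):
    """Regularized incomplete beta function I_x(a, b) via Simpson's rule."""
    if x <= 0.0:
        return 0.0
    h = x / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)  # Simpson weights
        total += w * t ** (a - 1) * (1 - t) ** (b - 1)
    norm = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return total * h / 3 / norm

def median_rank(mon, n_total):
    """Median of Beta(MON, N - MON + 1), found by bisection on the CDF."""
    a, b = mon, n_total - mon + 1
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if beta_cdf(mid, a, b) < 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

mons = [1.419411, 5.602405, 7.651035, 9.811641, 11.173181]
for m in mons:
    print(round(median_rank(m, 13), 7))
```

For integer order numbers this reduces to the familiar median rank; Benard's approximation (MON &amp;minus; 0.3)/(N + 0.4) gives similar values with far less work.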
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 7&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute new &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta ,\,\!&amp;lt;/math&amp;gt; using standard rank regression and based upon the data as shown in Table B.9.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.9- The Times with Their Ranks&lt;br /&gt;
|-&lt;br /&gt;
!Time&lt;br /&gt;
!Ranks&lt;br /&gt;
|-&lt;br /&gt;
|10||0.0825889&lt;br /&gt;
|-&lt;br /&gt;
|39.169||0.3952894&lt;br /&gt;
|-&lt;br /&gt;
|40||0.5487781&lt;br /&gt;
|-&lt;br /&gt;
|42.837||0.7106217&lt;br /&gt;
|-&lt;br /&gt;
|50||0.8124983&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 8&#039;&#039;&#039;&lt;br /&gt;
Return to Step 1 and repeat the process until an acceptable convergence is reached on the parameters (i.e., the parameter values stabilize).&lt;br /&gt;
&lt;br /&gt;
===Results===&lt;br /&gt;
The results of the first five iterations are shown in Table B.10.&lt;br /&gt;
Using Weibull++ with rank regression on X yields:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center;&amp;quot;|Table B.10-The parameters after the first five iterations&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
!&#039;&#039;Iteration&#039;&#039;&lt;br /&gt;
!&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
!&amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||1.845638||42.576422&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||1.830621 ||42.039743&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|3||1.828010 ||41.830615&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|4||1.828030 ||41.749708&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|5||1.828383 ||41.717990&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{\beta }}_{RRX}}=1.82890,\text{ }{{\widehat{\eta }}_{RRX}}=41.69774\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The direct MLE solution yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{\beta }}_{MLE}}=2.10432,\text{ }{{\widehat{\eta }}_{MLE}}=42.31535\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=ReliaSoft%27s_Alternate_Ranking_Method&amp;diff=57249</id>
		<title>ReliaSoft&#039;s Alternate Ranking Method</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=ReliaSoft%27s_Alternate_Ranking_Method&amp;diff=57249"/>
		<updated>2015-02-25T21:04:23Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner Weibull Articles}}&lt;br /&gt;
&#039;&#039;This article appears in the [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|Life Data Analysis Reference book]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
In probability plotting or rank regression analysis of interval or left censored data, difficulties arise in estimating the exact time within the interval at which the failure actually occurred, especially when the intervals overlap. Because the &#039;&#039;standard ranking method&#039;&#039; (SRM) is not applicable to interval data, ReliaSoft has formulated a more sophisticated methodology that allows for more accurate probability plotting and regression analysis of data sets with interval or left censored data. This method starts from the traditional rank regression method and iteratively improves the computed ranks by parametrically recomputing new ranks and the most probable failure time for each interval failure.&lt;br /&gt;
&lt;br /&gt;
In the traditional method for plotting or rank regression analysis of right censored data, the effect of the exact censoring time is not considered. One example of this can be seen in [[Parameter_Estimation#Shortfalls_of_the_Rank_Adjustment_Method]]. The ReliaSoft ranking method can also be used to overcome this shortfall of the standard ranking method.&lt;br /&gt;
&lt;br /&gt;
The following step-by-step example illustrates the ReliaSoft ranking method (RRM), which is an iterative improvement on the standard ranking method (SRM). Although this method is illustrated by the use of the two-parameter Weibull distribution, it can be easily generalized for other models.&lt;br /&gt;
&lt;br /&gt;
Consider the following test data:&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.1- The Test Data&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| ||10&lt;br /&gt;
|-align=&amp;quot;center&amp;quot; &lt;br /&gt;
|1||Right Censored|| ||20&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Left Censored||0||30&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Exact Failure|| ||40&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| ||50&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Right Censored|| ||60&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Left Censored||0||70&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Interval Failure||20||80&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Interval Failure||10||85&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Left Censored||0||100&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===  Initial Parameter Estimation===&lt;br /&gt;
As a preliminary step, we need to provide a crude estimate of the Weibull parameters for this data. To begin, we will extract the exact times-to-failure (10, 40, and 50) and the midpoints of the interval failures. The midpoints are 50 (for the interval of 20 to 80) and 47.5 (for the interval of 10 to 85). Next, we group together the items that have the same failure times, as shown in Table B.2.&lt;br /&gt;
&lt;br /&gt;
Using traditional rank regression, we obtain the initial estimates:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\widehat{\beta }}_{0}}= &amp;amp; 1.91367089 \\ &lt;br /&gt;
 &amp;amp; {{\widehat{\eta }}_{0}}= &amp;amp; 43.91657736  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.2- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
|- &lt;br /&gt;
|1||Exact Failure|| ||10&lt;br /&gt;
|- &lt;br /&gt;
|2||Exact Failure|| ||40&lt;br /&gt;
|- &lt;br /&gt;
|1||Exact Failure|| ||47.5&lt;br /&gt;
|- &lt;br /&gt;
|3||Exact Failure||  ||50&lt;br /&gt;
|} &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For all intervals, we obtain a weighted &#039;&#039;midpoint&#039;&#039; using:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   {{{\hat{t}}}_{m}}\left( \hat{\beta },\hat{\eta } \right)= &amp;amp; \frac{\int_{LI}^{TF}t\text{ }f(t;\hat{\beta },\hat{\eta })dt}{\int_{LI}^{TF}f(t;\hat{\beta },\hat{\eta })dt}, \\ &lt;br /&gt;
  = &amp;amp; \frac{\int_{LI}^{TF}t\tfrac{{\hat{\beta }}}{{\hat{\eta }}}{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{\hat{\beta }-1}}{{e}^{-{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{{\hat{\beta }}}}}}dt}{\int_{LI}^{TF}\tfrac{{\hat{\beta }}}{{\hat{\eta }}}{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{\hat{\beta }-1}}{{e}^{-{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{{\hat{\beta }}}}}}dt}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This transforms our data into the format in Table B.3.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;5&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.3- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures, Based upon the Parameters &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
!Weighted &amp;quot;Midpoint&amp;quot;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure||  ||10 ||&lt;br /&gt;
|-  align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Exact Failure||  ||40||&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| || 50||&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Interval Failure||20||80||42.837&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Interval Failure||10||85||39.169&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Now we arrange the data as in Table B.4.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|Table B.4- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures, in Ascending Order.&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Time&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||10&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||39.169&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||40&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||42.837&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||50&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We now consider the left and right censored data, as in Table B.5.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;7&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.5- Computation of Increments in a Matrix Format for Computing a Revised Mean Order Number.&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!2 Left Censored &#039;&#039;t&#039;&#039; = 30&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039; = 70&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039; = 100&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039; = 20&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039; = 60&lt;br /&gt;
|- &lt;br /&gt;
|1||10||&amp;lt;math&amp;gt;2 \frac{\int_0^{10} f_0(t)dt}{F_0 (30)-F_0 (0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_0^{10} f_0 (t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_0^{10} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || 0||0&lt;br /&gt;
|- &lt;br /&gt;
|1||39.169||&amp;lt;math&amp;gt;2 \frac{\int_{10}^{30} f_0(t)dt}{F_0(30)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{10}^{39.169} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{10}^{39.169} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{20}^{39.169} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0||&amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt; ||0&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0|| &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt;|| &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0||&amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|}&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
In general, for left censored data:&lt;br /&gt;
&lt;br /&gt;
:•	The increment term for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; left censored items at time &amp;lt;math&amp;gt;={{t}_{0}},\,\!&amp;lt;/math&amp;gt; with a time-to-failure of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; when &amp;lt;math&amp;gt;{{t}_{0}}\le {{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is zero.&lt;br /&gt;
:•	When &amp;lt;math&amp;gt;{{t}_{0}}&amp;gt;{{t}_{i-1}},\,\!&amp;lt;/math&amp;gt; the contribution is:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{n}{{{F}_{0}}({{t}_{0}})-{{F}_{0}}(0)}\underset{{{t}_{i-1}}}{\overset{MIN({{t}_{i}},{{t}_{0}})}{\mathop \int }}\,{{f}_{0}}\left( t \right)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:or:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;n\frac{{{F}_{0}}(MIN({{t}_{i}},{{t}_{0}}))-{{F}_{0}}({{t}_{i-1}})}{{{F}_{0}}({{t}_{0}})-{{F}_{0}}(0)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
where &amp;lt;math&amp;gt;{{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is the time-to-failure previous to the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; time-to-failure and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of units associated with that time-to-failure (or units in the group).&lt;br /&gt;
&lt;br /&gt;
In general, for right censored data:&lt;br /&gt;
:•	The increment term for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; right censored items at time &amp;lt;math&amp;gt;={{t}_{0}},\,\!&amp;lt;/math&amp;gt; with a time-to-failure of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;, when &amp;lt;math&amp;gt;{{t}_{0}}\ge {{t}_{i}}\,\!&amp;lt;/math&amp;gt; is zero.&lt;br /&gt;
:•	When &amp;lt;math&amp;gt;{{t}_{0}}&amp;lt;{{t}_{i}},\,\!&amp;lt;/math&amp;gt; the contribution is:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{n}{{{F}_{0}}(\infty )-{{F}_{0}}({{t}_{0}})}\underset{MAX({{t}_{0}},{{t}_{i-1}})}{\overset{{{t}_{i}}}{\mathop \int }}\,{{f}_{0}}\left( t \right)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:or:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;n\frac{{{F}_{0}}({{t}_{i}})-{{F}_{0}}(MAX({{t}_{0}},{{t}_{i-1}}))}{{{F}_{0}}(\infty )-{{F}_{0}}({{t}_{0}})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
where &amp;lt;math&amp;gt;{{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is the time-to-failure previous to the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; time-to-failure and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of units associated with that time-to-failure (or units in the group).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 4&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sum up the increments (horizontally in rows), as in Table B.6.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;8&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.6- Increments Solved Numerically, Along with the Sum of Each Row.&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!2 Left Censored &#039;&#039;t&#039;&#039;=30&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039;=70&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039;=100&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039;=20&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039;=60&lt;br /&gt;
!Sum of Row (Increment)&lt;br /&gt;
|-&lt;br /&gt;
|1||10||0.299065||0.062673||0.057673||0||0||0.419411&lt;br /&gt;
|-&lt;br /&gt;
|1||39.169||1.700935||0.542213||0.498959||0.440887||0||3.182994&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0||0.015892||0.014625||0.018113||0||0.048630&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0||0.052486||0.048299||0.059821||0||0.160606&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0||0.118151||0.108726||0.134663||0||0.361540&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 5&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute new mean order numbers (MON), as shown in Table B.7, utilizing the increments obtained in Table B.6: each MON is the sum of the &#039;&#039;number of items&#039;&#039;, the &#039;&#039;previous MON&#039;&#039; and the current increment.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.7- Mean Order Numbers (MON)&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!Sum of Row (Increment)&lt;br /&gt;
!Mean Order Number&lt;br /&gt;
|-&lt;br /&gt;
|1||10||0.419411||1.419411&lt;br /&gt;
|-&lt;br /&gt;
|1||39.169||3.182994||5.602405&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0.048630||7.651035&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0.160606||9.811641&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0.361540||11.173181&lt;br /&gt;
|}&lt;br /&gt;
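The MON recursion can be sketched directly; the (number of items, increment) pairs below are taken from Table B.6:

```python
# (number of items, row increment) for each failure time, from Table B.6
rows = [(1, 0.419411), (1, 3.182994), (2, 0.048630),
        (2, 0.160606), (1, 0.361540)]

mon, prev = [], 0.0
for n_items, increment in rows:
    # MON = number of items + previous MON + current increment
    prev = n_items + prev + increment
    mon.append(round(prev, 6))

print(mon)  # [1.419411, 5.602405, 7.651035, 9.811641, 11.173181]
```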
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 6&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute the median ranks based on these new MONs as shown in Table B.8.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.8- Mean Order Numbers with Their Ranks for a Sample Size of 13 Units.&lt;br /&gt;
|-&lt;br /&gt;
!Time&lt;br /&gt;
!MON&lt;br /&gt;
!Ranks&lt;br /&gt;
|-&lt;br /&gt;
|10||1.419411||0.0825889&lt;br /&gt;
|-&lt;br /&gt;
|39.169||5.602405||0.3952894&lt;br /&gt;
|-&lt;br /&gt;
|40||7.651035||0.5487781&lt;br /&gt;
|-&lt;br /&gt;
|42.837||9.811641||0.7106217&lt;br /&gt;
|-&lt;br /&gt;
|50||11.173181||0.8124983&lt;br /&gt;
|}&lt;br /&gt;
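The ranks in Table B.8 can be reproduced (approximately) as the median of a beta distribution with parameters MON and N - MON + 1, for N = 13. A pure-Python sketch using numerical integration and bisection (function names are illustrative):

```python
import math

def reg_inc_beta(a, b, x, n=4000):
    """Regularized incomplete beta I_x(a, b) via the trapezoidal rule
    (adequate here because a, b > 1, so the integrand vanishes at 0)."""
    ln_b = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    h = x / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        if 0.0 < t < 1.0:
            w = 0.5 if i in (0, n) else 1.0
            s += w * math.exp((a - 1) * math.log(t) + (b - 1) * math.log(1 - t) - ln_b)
    return s * h

def median_rank(mon, n_total):
    """Median of Beta(MON, N - MON + 1), found by bisection on the cdf."""
    a, b = mon, n_total - mon + 1.0
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if reg_inc_beta(a, b, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(median_rank(1.419411, 13), 4))  # ≈ 0.0826
```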
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 7&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute new &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; using standard rank regression, based upon the data shown in Table B.9.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.9- Times-to-Failure with Their Corresponding Ranks.&lt;br /&gt;
|-&lt;br /&gt;
!Time&lt;br /&gt;
!Ranks&lt;br /&gt;
|-&lt;br /&gt;
|10||0.0825889&lt;br /&gt;
|-&lt;br /&gt;
|39.169||0.3952894&lt;br /&gt;
|-&lt;br /&gt;
|40||0.5487781&lt;br /&gt;
|-&lt;br /&gt;
|42.837||0.7106217&lt;br /&gt;
|-&lt;br /&gt;
|50||0.8124983&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 8&#039;&#039;&#039;&lt;br /&gt;
Return to Step 1 and repeat the process until acceptable convergence is reached on the parameters (i.e., the parameter values stabilize).&lt;br /&gt;
&lt;br /&gt;
===Results===&lt;br /&gt;
The results of the first five iterations are shown in Table B.10.&lt;br /&gt;
Using Weibull++ with rank regression on X yields:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center;&amp;quot;|Table B.10- The Parameters After the First Five Iterations.&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
!&#039;&#039;Iteration&#039;&#039;&lt;br /&gt;
!&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
!&amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||1.845638||42.576422&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||1.830621 ||42.039743&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|3||1.828010 ||41.830615&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|4||1.828030 ||41.749708&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|5||1.828383 ||41.717990&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{\beta }}_{RRX}}=1.82890,\text{ }{{\widehat{\eta }}_{RRX}}=41.69774\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The direct MLE solution yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{\beta }}_{MLE}}=2.10432,\text{ }{{\widehat{\eta }}_{MLE}}=42.31535\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=ReliaSoft%27s_Alternate_Ranking_Method&amp;diff=57248</id>
		<title>ReliaSoft&#039;s Alternate Ranking Method</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=ReliaSoft%27s_Alternate_Ranking_Method&amp;diff=57248"/>
		<updated>2015-02-25T21:03:39Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner Weibull Articles}}&lt;br /&gt;
&#039;&#039;This article appears in the [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|Life Data Analysis Reference book]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
In probability plotting or rank regression analysis of interval or left censored data, difficulties arise when attempting to estimate the exact time within the interval at which the failure actually occurred, especially when the intervals overlap. In such cases, the &#039;&#039;standard ranking method&#039;&#039; (SRM) is not applicable; thus, ReliaSoft has formulated a more sophisticated methodology that allows for more accurate probability plotting and regression analysis of data sets with interval or left censored data. This method utilizes traditional rank regression and iteratively improves upon the computed ranks by parametrically recomputing new ranks and the most probable failure time for each interval observation.&lt;br /&gt;
&lt;br /&gt;
In the traditional method for plotting or rank regression analysis of right censored data, the effect of the exact censoring time is not considered. One example of this can be seen in [http://www.reliawiki.org/index.php/Parameter_Estimation#Shortfalls_of_the_Rank_Adjustment_Method Shortfalls of the Rank Adjustment Method]. The ReliaSoft ranking method can also be used to overcome this shortfall of the standard ranking method.&lt;br /&gt;
&lt;br /&gt;
The following step-by-step example illustrates the ReliaSoft ranking method (RRM), which is an iterative improvement on the standard ranking method (SRM). Although this method is illustrated by the use of the two-parameter Weibull distribution, it can be easily generalized for other models.&lt;br /&gt;
&lt;br /&gt;
Consider the following test data:&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.1- The Test Data&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| ||10&lt;br /&gt;
|-align=&amp;quot;center&amp;quot; &lt;br /&gt;
|1||Right Censored|| ||20&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Left Censored||0||30&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Exact Failure|| ||40&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| ||50&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Right Censored|| ||60&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Left Censored||0||70&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Interval Failure||20||80&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Interval Failure||10||85&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Left Censored||0||100&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===  Initial Parameter Estimation===&lt;br /&gt;
As a preliminary step, we need to provide a crude estimate of the Weibull parameters for this data. To begin, we will extract the exact times-to-failure (10, 40, and 50) and the midpoints of the interval failures. The midpoints are 50 (for the interval of 20 to 80) and 47.5 (for the interval of 10 to 85). Next, we group together the items that have the same failure times, as shown in Table B.2.&lt;br /&gt;
&lt;br /&gt;
Using traditional rank regression, we obtain the initial parameter estimates:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\widehat{\beta }}_{0}}= &amp;amp; 1.91367089 \\ &lt;br /&gt;
 &amp;amp; {{\widehat{\eta }}_{0}}= &amp;amp; 43.91657736  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.2- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
|- &lt;br /&gt;
|1||Exact Failure|| ||10&lt;br /&gt;
|- &lt;br /&gt;
|2||Exact Failure|| ||40&lt;br /&gt;
|- &lt;br /&gt;
|1||Exact Failure|| ||47.5&lt;br /&gt;
|- &lt;br /&gt;
|3||Exact Failure||  ||50&lt;br /&gt;
|} &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For all intervals, we obtain a weighted &#039;&#039;midpoint&#039;&#039; using:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   {{{\hat{t}}}_{m}}\left( \hat{\beta },\hat{\eta } \right)= &amp;amp; \frac{\int_{LI}^{TF}t\text{ }f(t;\hat{\beta },\hat{\eta })dt}{\int_{LI}^{TF}f(t;\hat{\beta },\hat{\eta })dt}, \\ &lt;br /&gt;
  = &amp;amp; \frac{\int_{LI}^{TF}t\tfrac{{\hat{\beta }}}{{\hat{\eta }}}{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{\hat{\beta }-1}}{{e}^{-{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{{\hat{\beta }}}}}}dt}{\int_{LI}^{TF}\tfrac{{\hat{\beta }}}{{\hat{\eta }}}{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{\hat{\beta }-1}}{{e}^{-{{\left( \tfrac{t}{{\hat{\eta }}} \right)}^{{\hat{\beta }}}}}}dt}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This transforms our data into the format in Table B.3.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;5&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.3- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures, Based upon the Parameters &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Type&lt;br /&gt;
!Last Inspection&lt;br /&gt;
!Time&lt;br /&gt;
!Weighted &amp;quot;Midpoint&amp;quot;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure||  ||10 ||&lt;br /&gt;
|-  align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Exact Failure||  ||40||&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Exact Failure|| || 50||&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||Interval Failure||20||80||42.837&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||Interval Failure||10||85||39.169&lt;br /&gt;
|}&lt;br /&gt;
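The weighted midpoints in Table B.3 can be checked numerically. A minimal sketch under the initial estimates above (function names are illustrative):

```python
import math

def weibull_pdf(t, beta, eta):
    """Two-parameter Weibull pdf f(t; beta, eta)."""
    return (beta / eta) * (t / eta) ** (beta - 1.0) * math.exp(-((t / eta) ** beta))

def weighted_midpoint(li, tf, beta, eta, n=20000):
    """Conditional mean failure time over (LI, TF]: the integral of t*f(t)
    divided by the integral of f(t), via the trapezoidal rule (the step h
    cancels in the ratio)."""
    h = (tf - li) / n
    num = den = 0.0
    for i in range(n + 1):
        t = li + i * h
        w = 0.5 if i in (0, n) else 1.0
        f = weibull_pdf(t, beta, eta)
        num += w * t * f
        den += w * f
    return num / den

beta0, eta0 = 1.91367089, 43.91657736
print(round(weighted_midpoint(20, 80, beta0, eta0), 3))  # ≈ 42.837
print(round(weighted_midpoint(10, 85, beta0, eta0), 3))  # ≈ 39.169
```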
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Now we arrange the data as in Table B.4.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|Table B.4- The Union of Exact Times-to-Failure with the &amp;quot;Midpoint&amp;quot; of the Interval Failures, in Ascending Order.&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
!Number of Items&lt;br /&gt;
!Time&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||10&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||39.169&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||40&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||42.837&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||50&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We now consider the left and right censored data, as in Table B.5.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;7&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.5- Computation of Increments in a Matrix Format for Computing a Revised Mean Order Number.&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!2 Left Censored &#039;&#039;t&#039;&#039; = 30&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039; = 70&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039; = 100&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039; = 20&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039; = 60&lt;br /&gt;
|- &lt;br /&gt;
|1||10||&amp;lt;math&amp;gt;2 \frac{\int_0^{10} f_0(t)dt}{F_0 (30)-F_0 (0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_0^{10} f_0 (t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_0^{10} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || 0||0&lt;br /&gt;
|- &lt;br /&gt;
|1||39.169||&amp;lt;math&amp;gt;2 \frac{\int_{10}^{30} f_0(t)dt}{F_0(30)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{10}^{39.169} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{10}^{39.169} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{20}^{39.169} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0||&amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{39.169}^{40} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt; ||0&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0|| &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt;|| &amp;lt;math&amp;gt;\frac{\int_{40}^{42.837} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0||&amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(70)-F_0(0)}\,\!&amp;lt;/math&amp;gt; ||&amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(100)-F_0(0)}\,\!&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\frac{\int_{42.837}^{50} f_0(t)dt}{F_0(\infty)-F_0(20)}\,\!&amp;lt;/math&amp;gt;||0&lt;br /&gt;
|}&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
In general, for left censored data:&lt;br /&gt;
&lt;br /&gt;
:•	The increment term for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; left censored items at time &amp;lt;math&amp;gt;={{t}_{0}},\,\!&amp;lt;/math&amp;gt; with a time-to-failure of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; when &amp;lt;math&amp;gt;{{t}_{0}}\le {{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is zero.&lt;br /&gt;
:•	When &amp;lt;math&amp;gt;{{t}_{0}}&amp;gt;{{t}_{i-1}},\,\!&amp;lt;/math&amp;gt; the contribution is:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{n}{{{F}_{0}}({{t}_{0}})-{{F}_{0}}(0)}\underset{{{t}_{i-1}}}{\overset{MIN({{t}_{i}},{{t}_{0}})}{\mathop \int }}\,{{f}_{0}}\left( t \right)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:or:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;n\frac{{{F}_{0}}(MIN({{t}_{i}},{{t}_{0}}))-{{F}_{0}}({{t}_{i-1}})}{{{F}_{0}}({{t}_{0}})-{{F}_{0}}(0)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
where &amp;lt;math&amp;gt;{{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is the time-to-failure previous to the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; time-to-failure and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of units in the group left censored at &amp;lt;math&amp;gt;{{t}_{0}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
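As a sanity check, this left-censored contribution can be sketched in pure Python, assuming the initial Weibull estimates from the example (illustrative function names):

```python
import math

def F0(t, beta=1.91367089, eta=43.91657736):
    """Weibull cdf evaluated at the current parameter estimates; F0(0) = 0."""
    return 1.0 - math.exp(-((t / eta) ** beta)) if t > 0 else 0.0

def left_censored_increment(n, t0, t_prev, t_i):
    """Contribution of n items left censored at t0 toward the failure time
    t_i, where t_prev is the preceding failure time (zero if t0 <= t_prev)."""
    if t0 <= t_prev:
        return 0.0
    return n * (F0(min(t_i, t0)) - F0(t_prev)) / (F0(t0) - F0(0))

# e.g., 1 item left censored at t = 70, toward the failure at t = 39.169
print(round(left_censored_increment(1, 70, 10, 39.169), 3))  # ≈ 0.542
```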
&lt;br /&gt;
In general, for right censored data:&lt;br /&gt;
:•	The increment term for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; right censored items at time &amp;lt;math&amp;gt;={{t}_{0}},\,\!&amp;lt;/math&amp;gt; with a time-to-failure of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;, when &amp;lt;math&amp;gt;{{t}_{0}}\ge {{t}_{i}}\,\!&amp;lt;/math&amp;gt; is zero.&lt;br /&gt;
:•	When &amp;lt;math&amp;gt;{{t}_{0}}&amp;lt;{{t}_{i}},\,\!&amp;lt;/math&amp;gt; the contribution is:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{n}{{{F}_{0}}(\infty )-{{F}_{0}}({{t}_{0}})}\underset{MAX({{t}_{0}},{{t}_{i-1}})}{\overset{{{t}_{i}}}{\mathop \int }}\,{{f}_{0}}\left( t \right)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:or:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;n\frac{{{F}_{0}}({{t}_{i}})-{{F}_{0}}(MAX({{t}_{0}},{{t}_{i-1}}))}{{{F}_{0}}(\infty )-{{F}_{0}}({{t}_{0}})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
where &amp;lt;math&amp;gt;{{t}_{i-1}}\,\!&amp;lt;/math&amp;gt; is the time-to-failure previous to the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; time-to-failure and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of units in the group right censored at &amp;lt;math&amp;gt;{{t}_{0}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
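The right-censored contribution can be sketched the same way (again assuming the initial Weibull estimates; illustrative names only):

```python
import math

def F0(t, beta=1.91367089, eta=43.91657736):
    """Weibull cdf at the current parameter estimates; note F0(inf) = 1."""
    return 1.0 - math.exp(-((t / eta) ** beta)) if t > 0 else 0.0

def right_censored_increment(n, t0, t_prev, t_i):
    """Contribution of n items right censored at t0 toward the failure time
    t_i, where t_prev is the preceding failure time (zero if t0 >= t_i)."""
    if t0 >= t_i:
        return 0.0
    return n * (F0(t_i) - F0(max(t0, t_prev))) / (1.0 - F0(t0))

# e.g., 1 item right censored at t = 20, toward the failure at t = 40
print(round(right_censored_increment(1, 20, 39.169, 40), 3))  # ≈ 0.018
```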
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 4&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sum up the increments (horizontally in rows), as in Table B.6.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;8&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.6- Increments Solved Numerically, Along with the Sum of Each Row.&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!2 Left Censored &#039;&#039;t&#039;&#039;=30&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039;=70&lt;br /&gt;
!1 Left Censored &#039;&#039;t&#039;&#039;=100&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039;=20&lt;br /&gt;
!1 Right Censored &#039;&#039;t&#039;&#039;=60&lt;br /&gt;
!Sum of Row (Increment)&lt;br /&gt;
|-&lt;br /&gt;
|1||10||0.299065||0.062673||0.057673||0||0||0.419411&lt;br /&gt;
|-&lt;br /&gt;
|1||39.169||1.700935||0.542213||0.498959||0.440887||0||3.182994&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0||0.015892||0.014625||0.018113||0||0.048630&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0||0.052486||0.048299||0.059821||0||0.160606&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0||0.118151||0.108726||0.134663||0||0.361540&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 5&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute new mean order numbers (MON), as shown in Table B.7, utilizing the increments obtained in Table B.6: each MON is the sum of the &#039;&#039;number of items&#039;&#039;, the &#039;&#039;previous MON&#039;&#039; and the current increment.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;4&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.7- Mean Order Numbers (MON)&lt;br /&gt;
|-&lt;br /&gt;
!Number of items&lt;br /&gt;
!Time of Failure&lt;br /&gt;
!Sum of Row (Increment)&lt;br /&gt;
!Mean Order Number&lt;br /&gt;
|-&lt;br /&gt;
|1||10||0.419411||1.419411&lt;br /&gt;
|-&lt;br /&gt;
|1||39.169||3.182994||5.602405&lt;br /&gt;
|-&lt;br /&gt;
|2||40||0.048630||7.651035&lt;br /&gt;
|-&lt;br /&gt;
|2||42.837||0.160606||9.811641&lt;br /&gt;
|-&lt;br /&gt;
|1||50||0.361540||11.173181&lt;br /&gt;
|}&lt;br /&gt;
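The recursion in Step 5 can be sketched directly; the (number of items, increment) pairs below are taken from Table B.6:

```python
# (number of items, row increment) for each failure time, from Table B.6
rows = [(1, 0.419411), (1, 3.182994), (2, 0.048630),
        (2, 0.160606), (1, 0.361540)]

mon, prev = [], 0.0
for n_items, increment in rows:
    # MON = number of items + previous MON + current increment
    prev = n_items + prev + increment
    mon.append(round(prev, 6))

print(mon)  # [1.419411, 5.602405, 7.651035, 9.811641, 11.173181]
```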
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 6&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute the median ranks based on these new MONs as shown in Table B.8.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.8- Mean Order Numbers with Their Ranks for a Sample Size of 13 Units.&lt;br /&gt;
|-&lt;br /&gt;
!Time&lt;br /&gt;
!MON&lt;br /&gt;
!Ranks&lt;br /&gt;
|-&lt;br /&gt;
|10||1.419411||0.0825889&lt;br /&gt;
|-&lt;br /&gt;
|39.169||5.602405||0.3952894&lt;br /&gt;
|-&lt;br /&gt;
|40||7.651035||0.5487781&lt;br /&gt;
|-&lt;br /&gt;
|42.837||9.811641||0.7106217&lt;br /&gt;
|-&lt;br /&gt;
|50||11.173181||0.8124983&lt;br /&gt;
|}&lt;br /&gt;
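The ranks in Table B.8 can be reproduced (approximately) as the median of a beta distribution with parameters MON and N - MON + 1, for N = 13. A pure-Python sketch using numerical integration and bisection (function names are illustrative):

```python
import math

def reg_inc_beta(a, b, x, n=4000):
    """Regularized incomplete beta I_x(a, b) via the trapezoidal rule
    (adequate here because a, b > 1, so the integrand vanishes at 0)."""
    ln_b = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    h = x / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        if 0.0 < t < 1.0:
            w = 0.5 if i in (0, n) else 1.0
            s += w * math.exp((a - 1) * math.log(t) + (b - 1) * math.log(1 - t) - ln_b)
    return s * h

def median_rank(mon, n_total):
    """Median of Beta(MON, N - MON + 1), found by bisection on the cdf."""
    a, b = mon, n_total - mon + 1.0
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if reg_inc_beta(a, b, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(median_rank(5.602405, 13), 4))  # ≈ 0.3953
```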
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 7&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compute new &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; using standard rank regression, based upon the data shown in Table B.9.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Table B.9- Times-to-Failure with Their Corresponding Ranks.&lt;br /&gt;
|-&lt;br /&gt;
!Time&lt;br /&gt;
!Ranks&lt;br /&gt;
|-&lt;br /&gt;
|10||0.0825889&lt;br /&gt;
|-&lt;br /&gt;
|39.169||0.3952894&lt;br /&gt;
|-&lt;br /&gt;
|40||0.5487781&lt;br /&gt;
|-&lt;br /&gt;
|42.837||0.7106217&lt;br /&gt;
|-&lt;br /&gt;
|50||0.8124983&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 8&#039;&#039;&#039;&lt;br /&gt;
Return to Step 1 and repeat the process until acceptable convergence is reached on the parameters (i.e., the parameter values stabilize).&lt;br /&gt;
&lt;br /&gt;
===Results===&lt;br /&gt;
The results of the first five iterations are shown in Table B.10.&lt;br /&gt;
Using Weibull++ with rank regression on X yields:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center;&amp;quot;|Table B.10- The Parameters After the First Five Iterations.&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
!&#039;&#039;Iteration&#039;&#039;&lt;br /&gt;
!&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
!&amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|1||1.845638||42.576422&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|2||1.830621 ||42.039743&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|3||1.828010 ||41.830615&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|4||1.828030 ||41.749708&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|5||1.828383 ||41.717990&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{\beta }}_{RRX}}=1.82890,\text{ }{{\widehat{\eta }}_{RRX}}=41.69774\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The direct MLE solution yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{\beta }}_{MLE}}=2.10432,\text{ }{{\widehat{\eta }}_{MLE}}=42.31535\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=57247</id>
		<title>Parameter Estimation</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=57247"/>
		<updated>2015-02-25T20:56:40Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Shortfalls of the Rank Adjustment Method */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|4|Parameter Estimation}}&lt;br /&gt;
The term &#039;&#039;parameter estimation&#039;&#039; refers to the process of using sample data (in reliability engineering, usually times-to-failure or success data) to estimate the parameters of the selected distribution. Several parameter estimation methods are available. This section presents an overview of the available methods used in life data analysis. More specifically, we start with the relatively simple method of Probability Plotting and continue with the more sophisticated methods of Rank Regression (or Least Squares), Maximum Likelihood Estimation and Bayesian Estimation Methods.&lt;br /&gt;
&lt;br /&gt;
=Probability Plotting=&lt;br /&gt;
The least mathematically intensive method for parameter estimation is the method of probability plotting. As the term implies, probability plotting involves a physical plot of the data on specially constructed &#039;&#039;probability plotting paper&#039;&#039;. This method is easily implemented by hand, given that one can obtain the appropriate probability plotting paper.&lt;br /&gt;
&lt;br /&gt;
The method of probability plotting takes the &#039;&#039;cdf&#039;&#039; of the distribution and attempts to linearize it by employing a specially constructed paper. The following sections illustrate the steps in this method using the 2-parameter Weibull distribution as an example. This includes:&lt;br /&gt;
&lt;br /&gt;
*Linearize the unreliability function&lt;br /&gt;
*Construct the probability plotting paper&lt;br /&gt;
*Determine the X and Y positions of the plot points&lt;br /&gt;
&lt;br /&gt;
The plot can then be used to read any particular time or reliability/unreliability value of interest.&lt;br /&gt;
&lt;br /&gt;
==Linearizing the Unreliability Function==&lt;br /&gt;
&lt;br /&gt;
In the case of the 2-parameter Weibull, the &#039;&#039;cdf&#039;&#039; (also the unreliability &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt;) is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(t)=Q(t)=1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function can then be linearized (i.e., put in the common &amp;lt;math&amp;gt;y = mx + b\,\!&amp;lt;/math&amp;gt; form) as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 Q(t)= &amp;amp;  1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}  \\&lt;br /&gt;
  \ln (1-Q(t))= &amp;amp; \ln \left[ {e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}} \right]  \\&lt;br /&gt;
  \ln (1-Q(t))=&amp;amp; -\left(\tfrac{t}{\eta}\right)^{\beta}  \\&lt;br /&gt;
  \ln ( -\ln (1-Q(t)))= &amp;amp; \beta \left(\ln \left( \frac{t}{\eta }\right)\right) \\&lt;br /&gt;
  \ln \left( \ln \left( \frac{1}{1-Q(t)}\right) \right) = &amp;amp; \beta \ln t -\beta \ln \left( \eta  \right)  \\&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then by setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x=\ln \left( t \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equation can then be rewritten as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\beta x-\beta \ln \left( \eta  \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is now a linear equation with a slope of: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
m = \beta&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and an intercept of:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=-\beta \cdot ln(\eta)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
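The linearization above is exactly what rank regression automates. A minimal sketch with a hypothetical data set (the times and the use of Benard's approximation for the ranks are illustrative, not from this article):

```python
import math

# Hypothetical data set: five exact failure times (hours); ranks via
# Benard's approximation (i - 0.3)/(N + 0.4), purely for illustration.
times = [10.0, 24.0, 38.0, 55.0, 80.0]
n = len(times)
ranks = [(i + 1 - 0.3) / (n + 0.4) for i in range(n)]

# Transform to the linearized coordinates derived above
x = [math.log(t) for t in times]
y = [math.log(math.log(1.0 / (1.0 - q))) for q in ranks]

# Ordinary least squares fit of y = m*x + b
xbar, ybar = sum(x) / n, sum(y) / n
m = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
b = ybar - m * xbar

beta = m                    # the slope is the shape parameter
eta = math.exp(-b / beta)   # from b = -beta * ln(eta)
print(beta, eta)
```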
&lt;br /&gt;
==Constructing the Paper==&lt;br /&gt;
The next task is to construct the Weibull probability plotting paper with the appropriate y and x axes. The x-axis transformation is simply logarithmic. The y-axis is a bit more complex, requiring a double log reciprocal transformation, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt; is the unreliability. &lt;br /&gt;
&lt;br /&gt;
Such papers have been created by different vendors and are called &#039;&#039;probability plotting papers&#039;&#039;. ReliaSoft&#039;s reliability engineering resource website at www.weibull.com has different plotting papers available for [http://www.weibull.com/GPaper/index.htm download]. &lt;br /&gt;
&lt;br /&gt;
[[Image:WeibullPaper2C.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
To illustrate, consider the following probability plot on a slightly different type of Weibull probability paper. &lt;br /&gt;
&lt;br /&gt;
[[Image:different_weibull_paper.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
This paper is constructed based on the mentioned y and x transformations, where the y-axis represents unreliability and the x-axis represents time. Both of these values must be known for each time-to-failure point we want to plot. &lt;br /&gt;
&lt;br /&gt;
Then, given the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; value for each point, the points can easily be put on the plot. Once the points have been placed, the best possible straight line is drawn through them. The slope of this line (some probability papers include a slope indicator to simplify this calculation) is the parameter &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;. To determine the scale parameter, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; (also called the &#039;&#039;characteristic life&#039;&#039;), one reads the time from the x-axis corresponding to &amp;lt;math&amp;gt;Q(t)=63.2\%\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Note that at:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Q(t=\eta)= &amp;amp; 1-{{e}^{-{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}} \\ &lt;br /&gt;
  = &amp;amp; 1-{{e}^{-1}} \\ &lt;br /&gt;
  = &amp;amp; 0.632 \\ &lt;br /&gt;
  = &amp;amp; 63.2%  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, if we enter the &#039;&#039;y&#039;&#039; axis at &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;, the corresponding value of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; will be equal to &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;. With this simple methodology, the parameters of the Weibull distribution can be estimated.&lt;br /&gt;
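The graphical procedure above can also be checked numerically. The following sketch uses hypothetical failure times and Benard-style approximate median-rank plotting positions (both assumptions for illustration, discussed later in this chapter), applies the y and x transformations, fits a least squares line, and reads &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; from the slope and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; from the point where &amp;lt;math&amp;gt;y=0\,\!&amp;lt;/math&amp;gt; (i.e., &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;):

```python
import math

# Hypothetical times-to-failure (hours); plotting positions from
# Benard's approximation (j - 0.3)/(N + 0.4) -- both are assumptions
# for illustration, not data from the text.
times = [10.0, 20.0, 30.0, 40.0]
N = len(times)
mr = [(j - 0.3) / (N + 0.4) for j in range(1, N + 1)]

# Linearize: x = ln(t), y = ln(ln(1/(1 - Q(t)))).
xs = [math.log(t) for t in times]
ys = [math.log(math.log(1.0 / (1.0 - q))) for q in mr]

# Least squares fit y = a + b*x; the slope b estimates beta.
xbar, ybar = sum(xs) / N, sum(ys) / N
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

beta = b
# y = 0 at Q(t) = 63.2%, so ln(eta) = -a/b.
eta = math.exp(-a / b)
print(beta, eta)
```

This is the same linearization the plotting paper performs graphically, done with arithmetic instead of a ruler.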
&lt;br /&gt;
==Determining the X and Y Position of the Plot Points==&lt;br /&gt;
The points on the plot represent our data or, more specifically, our times-to-failure data. If, for example, we tested four units that failed at 10, 20, 30 and 40 hours, then we would use these times as our &#039;&#039;x&#039;&#039; values or time values. &lt;br /&gt;
&lt;br /&gt;
Determining the appropriate &#039;&#039;y&#039;&#039; plotting positions, or the unreliability values, is a little more complex. To determine the &#039;&#039;y&#039;&#039; plotting positions, we must first determine a value indicating the corresponding unreliability for that failure. In other words, we need to obtain the cumulative percent failed for each time-to-failure. For example, the cumulative percent failed might be 25% by 10 hours, 50% by 20 hours, and so forth. This simple approach illustrates the idea, but it breaks down because the 100% point is not defined on most probability plots; thus, an alternative and more robust approach must be used. The most widely used method of determining this value is the method of obtaining the &#039;&#039;median rank&#039;&#039; for each failure, as discussed next.&lt;br /&gt;
&lt;br /&gt;
===Median Ranks ===&lt;br /&gt;
The Median Ranks method is used to obtain an estimate of the unreliability for each failure. The median rank is the value that the true probability of failure, &amp;lt;math&amp;gt;Q({{T}_{j}})\,\!&amp;lt;/math&amp;gt;, should have at the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure out of a sample of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units at the 50% confidence level. &lt;br /&gt;
&lt;br /&gt;
The rank can be found for any percentage point, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, greater than zero and less than one, by solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;. This represents the rank, or unreliability estimate, for the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure in the following equation for the cumulative binomial: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the sample size and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; the order number. &lt;br /&gt;
&lt;br /&gt;
The median rank is obtained by solving this equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  at &amp;lt;math&amp;gt;P = 0.50\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;0.50=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; and we have four failures, we would solve the median rank equation for the value of &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; four times; once for each failure with &amp;lt;math&amp;gt;j= 1, 2, 3 \text{ and }4\,\!&amp;lt;/math&amp;gt;. This result can then be used as the unreliability estimate for each failure or the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; plotting position. (See also [[The Weibull Distribution|The Weibull Distribution]]&amp;amp;nbsp;for a step-by-step example of this method.) Solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; requires the use of numerical methods.&lt;br /&gt;
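Since no closed-form solution exists, any one-dimensional root-finder will do. The sketch below (an illustrative implementation using bisection) solves the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; at &amp;lt;math&amp;gt;P=0.50\,\!&amp;lt;/math&amp;gt;:

```python
import math

def median_rank(N, j, P=0.50, tol=1e-10):
    """Solve sum_{k=j}^{N} C(N,k) Z^k (1-Z)^(N-k) = P for Z by bisection."""
    def tail(z):
        return sum(math.comb(N, k) * z**k * (1.0 - z)**(N - k)
                   for k in range(j, N + 1))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if tail(mid) < P:      # the tail probability increases with z
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Median ranks for each of four failures in a sample of N = 4:
print([round(median_rank(4, j), 4) for j in (1, 2, 3, 4)])
```

Bisection is used here for transparency; any standard root-finder converges to the same values.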
&lt;br /&gt;
===Beta and F Distributions Approach===&lt;br /&gt;
A more straightforward and easier method of estimating median ranks is by applying two transformations to the cumulative binomial equation, first to the beta distribution and then to the F distribution, resulting in [[Appendix:_Life_Data_Analysis_References|[12, 13]]]: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   MR &amp;amp; = &amp;amp; \tfrac{1}{1+\tfrac{N-j+1}{j}{{F}_{0.50;m;n}}}  \\&lt;br /&gt;
   m &amp;amp; = &amp;amp; 2(N-j+1)  \\&lt;br /&gt;
   n &amp;amp; = &amp;amp; 2j  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{F}_{0.50;m;n}}\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution at the 0.50 point, with &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; degrees of freedom, for failure &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; out of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units.&lt;br /&gt;
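Assuming SciPy is available for the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution quantiles, this transformation can be evaluated directly, as sketched below for a sample of four:

```python
from scipy.stats import f  # assumption: SciPy is installed

def median_rank_F(N, j):
    """Median rank via the F-distribution transformation."""
    m = 2 * (N - j + 1)        # numerator degrees of freedom
    n = 2 * j                  # denominator degrees of freedom
    F50 = f.ppf(0.50, m, n)    # F distribution at the 0.50 point
    return 1.0 / (1.0 + (N - j + 1) / j * F50)

print([round(median_rank_F(4, j), 4) for j in (1, 2, 3, 4)])
```

The values agree with those obtained by solving the cumulative binomial equation numerically.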
&lt;br /&gt;
=== Benard&#039;s Approximation for Median Ranks  ===&lt;br /&gt;
Another quick, and less accurate, approximation of the median ranks is also given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MR = \frac{{j - 0.3}}{{N + 0.4}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This approximation of the median ranks is also known as &#039;&#039;Benard&#039;s approximation&#039;&#039;.&lt;br /&gt;
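A minimal sketch of Benard's approximation; for &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; it agrees with the exact median ranks (0.1591, 0.3857, 0.6143, 0.8409) to about three decimal places:

```python
def benard(N, j):
    """Benard's approximation to the median rank."""
    return (j - 0.3) / (N + 0.4)

print([round(benard(4, j), 4) for j in (1, 2, 3, 4)])
```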
&lt;br /&gt;
===Kaplan-Meier===&lt;br /&gt;
The Kaplan-Meier estimator (also known as the &#039;&#039;product limit estimator&#039;&#039;) is used as an alternative to the median ranks method for calculating the estimates of the unreliability for probability plotting purposes. The equation of the estimator is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{F}({{t}_{i}})=1-\underset{j=1}{\overset{i}{\mathop \prod }}\,\frac{{{n}_{j}}-{{r}_{j}}}{{{n}_{j}}},\text{ }i=1,...,m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  m =  &amp;amp; {\text{total number of data points}} \\ &lt;br /&gt;
  n =  &amp;amp; {\text{the total number of units}} \\ &lt;br /&gt;
  {n_i} =  &amp;amp; n - \sum_{j = 0}^{i - 1}{s_j} - \sum_{j = 0}^{i - 1}{r_j}, \text{i = 1,...,m }\\ &lt;br /&gt;
  {r_j} =  &amp;amp; {\text{ number of failures in the }}{j^{th}}{\text{ data group, and}} \\ &lt;br /&gt;
  {s_j} =  &amp;amp; {\text{number of surviving units in the }}{j^{th}}{\text{ data group}} \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
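A minimal sketch of the product limit estimator, assuming the data are grouped in time order as (failures, suspensions) pairs; the group sizes below are hypothetical:

```python
def kaplan_meier(groups, n):
    """Kaplan-Meier unreliability estimates.

    groups: (r_j, s_j) = (failures, suspensions) per data group, in
    time order; n: total number of units.  Returns the estimate of
    F(t_i) for each group, per the product limit formula."""
    estimates = []
    product = 1.0
    at_risk = n                       # n_i before the first group is n
    for r, s in groups:
        product *= (at_risk - r) / at_risk
        estimates.append(1.0 - product)
        at_risk -= r + s              # remove failures and suspensions
    return estimates

# Hypothetical data: 10 units, three groups of (failures, suspensions).
print(kaplan_meier([(2, 1), (1, 0), (3, 2)], 10))
```

Note that suspensions reduce the number of units at risk for later groups without contributing a factor to the product.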
&lt;br /&gt;
== Probability Plotting Example  ==&lt;br /&gt;
This same methodology can be applied to other distributions with &#039;&#039;cdf&#039;&#039; equations that can be linearized. Different probability papers exist for each distribution, because different distributions have different &#039;&#039;cdf&#039;&#039; equations. ReliaSoft&#039;s software tools automatically create these plots for you. Special scales on these plots allow you to derive the parameter estimates directly from the plots, similar to the way &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; were obtained from the Weibull probability plot. The following example demonstrates the method again, this time using the 1-parameter exponential distribution.&lt;br /&gt;
&lt;br /&gt;
{{:Probability Plotting Example}}&lt;br /&gt;
&lt;br /&gt;
== Comments on the Probability Plotting Method ==&lt;br /&gt;
Besides the most obvious drawback of probability plotting, the amount of effort required, manual probability plotting does not always yield consistent results. Two people plotting a straight line through a set of points will not always draw this line the same way, and thus will come up with slightly different results. This method was used primarily before the widespread use of computers that could easily perform the calculations for more complicated parameter estimation methods, such as the least squares and maximum likelihood methods.&lt;br /&gt;
&lt;br /&gt;
= Least Squares (Rank Regression)  =&lt;br /&gt;
Using the idea of probability plotting, regression analysis mathematically fits the best straight line to a set of points, in an attempt to estimate the parameters. Essentially, this is a mathematically based version of the probability plotting method discussed previously. &lt;br /&gt;
&lt;br /&gt;
The method of linear least squares is used for all regression analysis performed by Weibull++, except for the cases of the 3-parameter Weibull, mixed Weibull, gamma and generalized gamma distributions, where a non-linear regression technique is employed. The terms &#039;&#039;linear regression&#039;&#039; and &#039;&#039;least squares&#039;&#039; are used synonymously in this reference. In Weibull++, the term &#039;&#039;rank regression&#039;&#039; is used instead of least squares, or linear regression, because the regression is performed on the rank values, more specifically, the median rank values (represented on the y-axis). The method of least squares requires that a straight line be fitted to a set of data points, such that the sum of the squares of the distances of the points to the fitted line is minimized. This minimization can be performed in either the vertical or horizontal direction. If the regression is on &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the horizontal deviations from the points to the line are minimized. If the regression is on &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the vertical deviations from the points to the line are minimized. This is illustrated in the following figure. &lt;br /&gt;
&lt;br /&gt;
[[Image:minimizingdistance.png|center|500px]]&lt;br /&gt;
&lt;br /&gt;
=== Rank Regression on Y  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;-values are known exactly. Then, according to the &#039;&#039;least squares principle,&#039;&#039; which minimizes the vertical distance between the data points and the straight line fitted to the data, the best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;y=\hat{a}+\hat{b}x\,\!&amp;lt;/math&amp;gt; (where the recently introduced (&amp;lt;math&amp;gt;\hat{ }\,\!&amp;lt;/math&amp;gt;) symbol indicates that this value is an estimate) such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\sum\limits_{i=1}^{N}{{{\left( \hat{a}+\hat{b}{{x}_{i}}-{{y}_{i}} \right)}^{2}}=\min \sum\limits_{i=1}^{N}{{{\left( a+b{{x}_{i}}-{{y}_{i}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and where &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat b\,\!&amp;lt;/math&amp;gt; are the &#039;&#039;least squares estimates&#039;&#039; of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}=\bar{y}-\hat{b}\bar{x}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
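These two estimator equations can be evaluated directly; the sketch below (with illustrative data) recovers the known line &amp;lt;math&amp;gt;y=1+2x\,\!&amp;lt;/math&amp;gt; exactly:

```python
def rank_regression_on_y(xs, ys):
    """Least squares estimates (a_hat, b_hat) of y = a + b*x,
    minimizing vertical deviations."""
    N = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    b_hat = (sxy - sx * sy / N) / (sxx - sx**2 / N)
    a_hat = sy / N - b_hat * sx / N   # a_hat = ybar - b_hat * xbar
    return a_hat, b_hat

# Exact data on y = 1 + 2x recovers a = 1, b = 2:
print(rank_regression_on_y([0, 1, 2, 3], [1, 3, 5, 7]))
```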
&lt;br /&gt;
=== Rank Regression on X  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the y-values are known exactly. The same least squares principle is applied, but this time, minimizing the horizontal distance between the data points and the straight line fitted to the data. The best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;x=\widehat{a}+\widehat{b}y\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(\widehat{a}+\widehat{b}{{y}_{i}}-{{x}_{i}})}^{2}}=\min \underset{i=1}{\overset{N}{\mathop \sum }}\,{{(a+b{{y}_{i}}-{{x}_{i}})}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat b\,\!&amp;lt;/math&amp;gt; are the least squares estimates of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}=\bar{x}-\hat{b}\bar{y}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
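The corresponding sketch for regression on X, which minimizes horizontal deviations; note that the sums in the denominator now run over the y values (illustrative data):

```python
def rank_regression_on_x(xs, ys):
    """Least squares estimates (a_hat, b_hat) of x = a + b*y,
    minimizing horizontal deviations."""
    N = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    syy = sum(y * y for y in ys)
    b_hat = (sxy - sx * sy / N) / (syy - sy**2 / N)
    a_hat = sx / N - b_hat * sy / N   # a_hat = xbar - b_hat * ybar
    return a_hat, b_hat

# Exact data on x = 2y recovers a = 0, b = 2:
print(rank_regression_on_x([0, 2, 4, 6], [0, 1, 2, 3]))
```

On data with scatter, regression on X and regression on Y generally give different fitted lines; they coincide only when the points fall exactly on a line.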
&lt;br /&gt;
The corresponding relations for determining the parameters of specific distributions (e.g., Weibull, exponential) are presented in the chapters covering those distributions.&lt;br /&gt;
&lt;br /&gt;
=== Correlation Coefficient  ===&lt;br /&gt;
The correlation coefficient is a measure of how well the linear regression model fits the data and is usually denoted by &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt;. In the case of life data analysis, it is a measure for the strength of the linear relation (correlation) between the median ranks and the data. The population correlation coefficient is defined as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\rho =\frac{{{\sigma }_{xy}}}{{{\sigma }_{x}}{{\sigma }_{y}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\sigma}_{xy}} = \,\!&amp;lt;/math&amp;gt; covariance of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\sigma}_{x}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{\sigma}_{y}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The estimator of &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt; is the sample correlation coefficient, &amp;lt;math&amp;gt;\hat{\rho }\,\!&amp;lt;/math&amp;gt;, given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\sqrt{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N} \right)\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The range of &amp;lt;math&amp;gt;\hat \rho \,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;-1\le \hat{\rho }\le 1\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
[[Image:correlationcoeffficient.png|center|500px]] &lt;br /&gt;
&lt;br /&gt;
The closer the value is to &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt;, the better the linear fit. Note that +1 indicates a perfect fit (the paired values (&amp;lt;math&amp;gt;{{x}_{i}},{{y}_{i}}\,\!&amp;lt;/math&amp;gt;) lie on a straight line) with a positive slope, while -1 indicates a perfect fit with a negative slope. A correlation coefficient value of zero would indicate that the data are randomly scattered and have no correlation with the regression line.&lt;br /&gt;
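The sample correlation coefficient is straightforward to compute; the sketch below (with illustrative data) returns +1 for a perfect positive fit and -1 for a perfect negative fit:

```python
import math

def sample_correlation(xs, ys):
    """Sample correlation coefficient rho-hat."""
    N = len(xs)
    sx, sy = sum(xs), sum(ys)
    num = sum(x * y for x, y in zip(xs, ys)) - sx * sy / N
    den = math.sqrt((sum(x * x for x in xs) - sx**2 / N) *
                    (sum(y * y for y in ys) - sy**2 / N))
    return num / den

print(sample_correlation([0, 1, 2, 3], [1, 3, 5, 7]))   # perfect positive fit
print(sample_correlation([0, 1, 2, 3], [7, 5, 3, 1]))   # perfect negative fit
```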
&lt;br /&gt;
===Comments on the Least Squares Method===&lt;br /&gt;
The least squares estimation method is quite good for functions that can be linearized. For these distributions, the calculations are relatively easy and straightforward, having closed-form solutions that can readily yield an answer without having to resort to numerical techniques or tables. Furthermore, this technique provides a good measure of the goodness-of-fit of the chosen distribution in the correlation coefficient. Least squares is generally best used with data sets containing complete data, that is, data consisting only of single times-to-failure with no censored or interval data. (See [[Life Data Classification]] for information about the different data types, including complete, left censored, right censored (or suspended) and interval data.) &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Least Squares/Rank Regression Equations]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Rank Methods for Censored Data=&lt;br /&gt;
All available data should be considered in the analysis of times-to-failure data. This includes the case when a particular unit in a sample has been removed from the test prior to failure. An item, or unit, which is removed from a reliability test prior to failure, or a unit which is in the field and is still operating at the time the reliability of these units is to be determined, is called a &#039;&#039;suspended item&#039;&#039;, &#039;&#039;right censored observation&#039;&#039; or &#039;&#039;right censored data point&#039;&#039;. Suspended items analysis would also be considered when: &lt;br /&gt;
&lt;br /&gt;
#We need to make an analysis of the available results before test completion. &lt;br /&gt;
#The failure modes which are occurring are different from those anticipated and such units are withdrawn from the test. &lt;br /&gt;
#We need to analyze a single mode and the actual data set comprises multiple modes. &lt;br /&gt;
#A &#039;&#039;warranty analysis&#039;&#039; is to be made of all units in the field (non-failed and failed units). The non-failed units are considered to be suspended items (or right censored).&lt;br /&gt;
&lt;br /&gt;
This section describes the rank methods that are used in both probability plotting and least squares (rank regression) to handle censored data. This includes:&lt;br /&gt;
&lt;br /&gt;
*The rank adjustment method for right censored (suspension) data.&lt;br /&gt;
*ReliaSoft&#039;s alternative ranking method for censored data, including left censored, right censored, and interval data.&lt;br /&gt;
=== Rank Adjustment Method for Right Censored Data ===&lt;br /&gt;
When using the probability plotting or least squares (rank regression) method for data sets where some of the units did not fail, or were suspended, we need to adjust their probability of failure, or unreliability. As discussed before, estimates of the unreliability for complete data are obtained using the median ranks approach. The following methodology illustrates how adjusted median ranks are computed to account for right censored data. To better illustrate the methodology, consider the following example in Kececioglu [[Appendix:_Life_Data_Analysis_References|&amp;amp;nbsp;[20]]] where five items are tested resulting in three failures and two suspensions. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Item Number &amp;lt;br&amp;gt;(Position) &lt;br /&gt;
! Failure (F) &amp;lt;br&amp;gt;or Suspension (S) &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 5,100&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,500&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 15,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 22,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 40,000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The methodology for plotting suspended items involves adjusting the rank positions and plotting the data based on new positions, determined by the location of the suspensions. If we consider these five units, the following methodology would be used: The first item must be the first failure; hence, it is assigned failure order number &amp;lt;math&amp;gt;j = 1\,\!&amp;lt;/math&amp;gt;. The actual failure order number (or position) of the second failure, &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;, is in doubt. It could either be in position 2 or in position 3. Had &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; not been withdrawn from the test at 9,500 hours, it could have operated successfully past 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 2. Alternatively, &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; could also have failed before 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 3. In this case, the failure order number for &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; will be some number between 2 and 3. To determine this number, consider the following: &lt;br /&gt;
&lt;br /&gt;
We can find the number of ways the second failure can occur in either order number 2 (position 2) or order number 3 (position 3). The possible ways are listed next. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;6&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 2 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 3&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 4 &lt;br /&gt;
| 5 &lt;br /&gt;
| 6 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be seen that &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; can occur in the second position six ways and in the third position two ways. The most probable position is the weighted average of these positions, or the &#039;&#039;mean order number&#039;&#039; (MON), given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}=MO{{N}_{2}}=\frac{(6\times 2)+(2\times 3)}{6+2}=2.25\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Using the same logic on the third failure, it can be located in position numbers 3, 4 and 5 in the possible ways listed next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 3 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 4&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 5&lt;br /&gt;
|-&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Then, the mean order number for the third failure (item 5) is: &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=\frac{(2\times 3)+(3\times 4)+(3\times 5)}{2+3+3}=4.125\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Once the mean order number for each failure has been established, we obtain the median rank positions for these failures at their mean order number. Specifically, we obtain the median rank of the order numbers 1, 2.25 and 4.125 out of a sample size of 5, as given next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Plotting Positions for the Failures (Sample Size=5)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Failure Number &lt;br /&gt;
! MON &lt;br /&gt;
! Median Rank Position(%)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1:&amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1 &lt;br /&gt;
| 13%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2:&amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 2.25 &lt;br /&gt;
| 36%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3:&amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 4.125 &lt;br /&gt;
| 71%&lt;br /&gt;
|}&lt;br /&gt;
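As a sketch of how these plotting positions can be computed, the following stdlib-only Python snippet evaluates the median rank as the 50th percentile of the Beta(MON, N − MON + 1) distribution, which also works for the fractional mean order numbers above. (This code is not from the original text; the integration routine and function names are our own.)

```python
import math

def beta_cdf(x, a, b, steps=4000):
    # Regularized incomplete beta via trapezoidal integration
    # (adequate here because both shape parameters are >= 1)
    norm = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    h = x / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * t ** (a - 1) * (1 - t) ** (b - 1)
    return total * h / norm

def median_rank(mon, n):
    # Median rank = 50th percentile of Beta(MON, N - MON + 1),
    # located by bisection on the CDF; MON may be fractional.
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if beta_cdf(mid, mon, n - mon + 1) < 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for mon in (1, 2.25, 4.125):
    print(round(100 * median_rank(mon, 5)))  # approximately 13, 36, 71
```

The three printed values reproduce the Median Rank Position column of the table above.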
&lt;br /&gt;
&lt;br /&gt;
Once the median rank values have been obtained, the probability plotting analysis is identical to that presented before. As you might have noticed, this methodology is rather laborious. Other techniques and shortcuts have been developed over the years to streamline this procedure. For more details on this method, see Kececioglu [[Appendix:_Life_Data_Analysis_References|[20]]]. Here, we will introduce one of these methods. This method calculates MON using an increment, &#039;&#039;I&#039;&#039;, which is defined by:&lt;br /&gt;
&lt;br /&gt;
:: &amp;lt;math&amp;gt;{{I}_{i}}=\frac{N+1-PMON}{1+NIBPSS}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Where&lt;br /&gt;
* &#039;&#039;N&#039;&#039; = the sample size, or total number of items in the test&lt;br /&gt;
* &#039;&#039;PMON&#039;&#039; = previous mean order number&lt;br /&gt;
* &#039;&#039;NIBPSS&#039;&#039; = the number of items beyond the present suspended set. It is the number of units (including all the failures and suspensions) at the current failure time.&lt;br /&gt;
* &#039;&#039;i&#039;&#039; = the ith failure item&lt;br /&gt;
&lt;br /&gt;
MON is given as:&lt;br /&gt;
 &lt;br /&gt;
:: &amp;lt;math&amp;gt;MO{{N}_{i}}=MO{{N}_{i-1}}+{{I}_{i}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s calculate the previous example using the method.&lt;br /&gt;
&lt;br /&gt;
For F1:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{1}}=MO{{N}_{0}}+{{I}_{1}}=\frac{5+1-0}{1+5}=1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For F2:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{2}}=MO{{N}_{1}}+{{I}_{2}}=1+\frac{5+1-1}{1+3}=2.25&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For F3:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=MO{{N}_{2}}+{{I}_{3}}=2.25+\frac{5+1-2.25}{1+1}=4.125&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The MON obtained for each failure item via this method is the same as that obtained with the first method, so the median rank values will also be the same.&lt;br /&gt;
&lt;br /&gt;
For grouped data, the increment &amp;lt;math&amp;gt;{{I}_{i}}&amp;lt;/math&amp;gt; at each failure group is multiplied by the number of failures in that group. &lt;br /&gt;
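The increment-based procedure can be sketched in a few lines of Python (a minimal illustration, not from the original text; the function name and the 'F'/'S' encoding are our own, grouped data is not handled, and the failure/suspension ordering below is the one implied by the increments used in the example):

```python
def mean_order_numbers(states):
    # Rank adjustment via the increment method:
    #   I_i = (N + 1 - PMON) / (1 + NIBPSS),  MON_i = MON_{i-1} + I_i
    # states: 'F' (failure) or 'S' (suspension), in order of increasing life.
    n = len(states)
    mon = 0.0  # MON_0 = 0
    result = []
    for idx, state in enumerate(states):
        if state == 'F':
            nibpss = n - idx  # units remaining at the current failure time
            mon += (n + 1 - mon) / (1 + nibpss)
            result.append(mon)
    return result

# The 5-unit example: failures in positions 1, 3 and 5
print(mean_order_numbers(['F', 'S', 'F', 'S', 'F']))  # [1.0, 2.25, 4.125]
```

The output matches the mean order numbers 1, 2.25 and 4.125 computed above.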
 &lt;br /&gt;
==== Shortfalls of the Rank Adjustment Method  ====&lt;br /&gt;
Even though the rank adjustment method is the most widely used method for performing analysis of suspended items, we would like to point out the following shortcoming. As you may have noticed, only the position where the failure occurred is taken into account, and not the exact time-to-suspension. For example, this methodology would yield the exact same results for the next two cases. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 1 &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of an item, hr &lt;br /&gt;
! Item number &lt;br /&gt;
! State*,&amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000 &lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,100 &lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,700&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,200 &lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,800&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,300 &lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,900&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000 &lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This shortfall is significant when the number of failures is small and the number of suspensions is large and not spread uniformly between failures, as with these data. In cases like this, it is highly recommended to use maximum likelihood estimation (MLE) to estimate the parameters instead of using least squares, because MLE does not look at ranks or plotting positions, but rather considers each unique time-to-failure or suspension. For the data given above, the results are as follows. The estimated parameters using the method just described are the same for both cases (1 and 2): &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.81}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{11,417 hr}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, the MLE results for Case 1 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{1}\text{.33}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{6,900 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the MLE results for Case 2 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.9337}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{21,348 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we can see, there is a sizable difference between the results calculated using MLE and the results using regression with the SRM. The results for both cases are identical when using the regression estimation technique with SRM, because SRM considers only the positions of the suspensions. The MLE results are quite different for the two cases, with the second case having a much larger value of &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt;, due to the higher suspension times in Case 2. This is because the maximum likelihood technique, unlike rank regression with SRM, considers the values of the suspensions when estimating the parameters. This is illustrated in the [[Parameter_Estimation#Maximum_Likelihood_Estimation_.28MLE.29|discussion of MLE]] given below.&lt;br /&gt;
&lt;br /&gt;
One alternative to improve the regression method is to use the following ReliaSoft Ranking Method (RRM) to calculate the rank. RRM does consider the effect of the censoring time.&lt;br /&gt;
&lt;br /&gt;
== ReliaSoft&#039;s Ranking Method (RRM) for Interval Censored Data==&lt;br /&gt;
When analyzing interval data, it is commonplace to assume that the actual failure time occurred at the midpoint of the interval. To be more conservative, you can use the starting point of the interval; to be more optimistic, you can use its end point. Weibull++ allows you to employ ReliaSoft&#039;s ranking method (RRM) when analyzing interval data. Using an iterative process, this ranking method is an improvement over the standard ranking method (SRM). &lt;br /&gt;
&lt;br /&gt;
When analyzing left or right censored data, RRM also considers the effect of the actual censoring time. Therefore, the resulting ranks will be more accurate than those from SRM, where only the position, and not the exact time, of the censoring is used. &lt;br /&gt;
&lt;br /&gt;
For more details on this method see [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|ReliaSoft&#039;s Ranking Method]].&lt;br /&gt;
&lt;br /&gt;
= Maximum Likelihood Estimation (MLE) = &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM OTHER WIKI PAGES. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
From a statistical point of view, the method of maximum likelihood estimation is, with some exceptions, considered to be the most robust of the parameter estimation techniques discussed here. The method presented in this section is for complete data (i.e., data consisting only of times-to-failure). The analyses for [[Parameter_Estimation#MLE_for_Right_Censored_Data|right censored (suspension) data]], and for [[Parameter_Estimation#MLE_for_Interval_and_Left_Censored_Data|interval or left censored data]], are then discussed in the following sections.&lt;br /&gt;
&lt;br /&gt;
The basic idea behind MLE is to obtain the most likely values of the parameters, for a given distribution, that will best describe the data. As an example, consider the following data (-3, 0, 4) and assume that you are trying to estimate the mean of the data. Now, if you have to choose the most likely value for the mean from -5, 1 and 10, which one would you choose? In this case, the most likely value is 1 (given your limit on choices). Similarly, under MLE, one determines the most likely values for the parameters of the assumed distribution. It is mathematically formulated as follows. &lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; unknown parameters that need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; independent observations, &amp;lt;math&amp;gt;{{x}_{1}},{{x}_{2}},\cdots ,{{x}_{R}}\,\!&amp;lt;/math&amp;gt;, which in the case of life data analysis correspond to failure times. The likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{R}})=L=\underset{i=1}{\overset{R}{\mathop \prod }}\,f({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;i = 1,2,...,R\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logarithmic likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Lambda  = \ln L =\sum_{i = 1}^R \ln f({x_i};{\theta _1},{\theta _2},...,{\theta _k})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood estimators (or parameter values) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are obtained by maximizing &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By maximizing &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt; which is much easier to work with than &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt;, the maximum likelihood estimators (MLE) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the simultaneous solutions of &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; equations such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{\partial{\Lambda}}{\partial{\theta_j}}=0, \text{ j=1,2...,k}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even though it is common practice to plot the MLE solutions using median ranks (points are plotted according to median ranks and the line according to the MLE solutions), this is not completely representative. As can be seen from the equations above, the MLE method is independent of any kind of ranks. For this reason, the MLE solution often appears not to track the data on the probability plot. This is perfectly acceptable because the two methods are independent of each other, and in no way suggests that the solution is wrong.&lt;br /&gt;
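To make the mechanics concrete, here is a minimal numerical sketch (not from the original text; the failure times are hypothetical, and the exponential distribution is chosen because its MLE has a closed form to check against). The log-likelihood is maximized by golden-section search and compared with the analytic solution:

```python
import math

times = [16, 34, 53, 75, 93, 120]  # hypothetical complete failure times

def log_likelihood(lam):
    # Lambda = sum of ln f(t; lam) for the exponential pdf f(t) = lam * exp(-lam * t)
    return sum(math.log(lam) - lam * t for t in times)

# Maximize numerically with a golden-section search over a bracket
lo, hi = 1e-6, 1.0
g = (math.sqrt(5) - 1) / 2
for _ in range(100):
    a = hi - g * (hi - lo)
    b = lo + g * (hi - lo)
    if log_likelihood(a) < log_likelihood(b):
        lo = a
    else:
        hi = b
lam_numeric = (lo + hi) / 2

# Setting dLambda/dlam = R/lam - sum(t) = 0 gives the closed form lam = R / sum(t)
lam_exact = len(times) / sum(times)
print(lam_numeric, lam_exact)  # the two estimates agree
```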
&lt;br /&gt;
=== MLE for Right Censored Data  ===&lt;br /&gt;
When performing maximum likelihood analysis on data with suspended items, the likelihood function needs to be expanded to take into account the suspended items. The overall estimation technique does not change, but another term is added to the likelihood function to account for the suspended items. Beyond that, the method of solving for the parameter estimates remains the same. For example, consider a distribution where &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039; and &#039;&#039;cdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
    &amp;amp; F(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the unknown parameters which need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; observed failures at &amp;lt;math&amp;gt;{{T}_{1}},{{T}_{2}},...,{{T}_{R}}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; observed suspensions at &amp;lt;math&amp;gt;{{S}_{1}},{{S}_{2}},...,{{S}_{M}}\,\!&amp;lt;/math&amp;gt; then the likelihood function is formulated as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R,}}{{S}_{1}},...,{{S}_{M}})= &amp;amp; \underset{i=1}{\overset{R}{\mathop \prod }}\,f({{T}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \cdot \underset{j=1}{\overset{M}{\mathop \prod }}\,[1-F({{S}_{j}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameters are solved by maximizing this equation. In most cases, no closed-form solution exists for this maximum or for the parameters. Solutions specific to each distribution utilizing MLE are presented in [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
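For the exponential distribution, this censored likelihood does maximize in closed form, which makes it a convenient sketch of why the suspension values matter (the chapter's own example uses the Weibull distribution, so these numbers are only illustrative, not a reproduction of the results above). Reusing the Case 1/Case 2 data from the earlier table:

```python
def exp_mle_lambda(failures, suspensions):
    # Maximizing L = prod f(T_i) * prod [1 - F(S_j)] for the exponential
    # distribution gives the closed form lam = R / (sum T_i + sum S_j),
    # so the suspension *values* enter the estimate directly.
    return len(failures) / (sum(failures) + sum(suspensions))

case1 = exp_mle_lambda([1000, 10000], [1100, 1200, 1300])
case2 = exp_mle_lambda([1000, 10000], [9700, 9800, 9900])
print(1 / case1, 1 / case2)  # mean life roughly 7,300 hr vs 20,200 hr
```

The two cases, identical under rank regression with SRM, give very different estimates here because the later suspensions in Case 2 contribute more accumulated operating time.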
&lt;br /&gt;
=== MLE for Interval and Left Censored Data  ===&lt;br /&gt;
The inclusion of left and interval censored data in an MLE solution for parameter estimates involves adding a term to the likelihood equation to account for the data types in question. When using interval data, it is assumed that the failures occurred in an interval; i.e., in the interval from time &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or from time 0 to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; if left censored), where &amp;lt;math&amp;gt;A &amp;lt; B\,\!&amp;lt;/math&amp;gt;. In the case of interval data, and given &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; interval observations, the likelihood function is modified by multiplying the likelihood function with an additional term as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{P}})= &amp;amp; \underset{i=1}{\overset{P}{\mathop \prod }}\,\{F({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \ \ -F({{x}_{i-1}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that if only interval data are present, this term will represent the entire likelihood function for the MLE solution. The next section gives a formulation of the complete likelihood function for all possible censoring schemes.&lt;br /&gt;
&lt;br /&gt;
=== The Complete Likelihood Function  ===&lt;br /&gt;
We have now seen that obtaining MLE parameter estimates for different types of data involves incorporating different terms in the likelihood function to account for complete data, right censored data, and left, interval censored data. After including the terms for the different types of data, the likelihood function can now be expressed in its complete form or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
    L= &amp;amp; \underset{i=1}{\mathop{\overset{R}{\mathop{\prod }}\,}}\,f({{T}_{i}};{{\theta }_{1}},...,{{\theta }_{k}})\cdot \underset{j=1}{\mathop{\overset{M}{\mathop{\prod }}\,}}\,[1-F({{S}_{j}};{{\theta }_{1}},...,{{\theta }_{k}})]  \\&lt;br /&gt;
    &amp;amp; \cdot \underset{l=1}{\mathop{\overset{P}{\mathop{\prod }}\,}}\,\left\{ F({{I}_{{{l}_{U}}}};{{\theta }_{1}},...,{{\theta }_{k}})-F({{I}_{{{l}_{L}}}};{{\theta }_{1}},...,{{\theta }_{k}}) \right\}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; L\to L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R}},{{S}_{1}},...,{{S}_{M}},{{I}_{1}},...{{I}_{P}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
*&amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the number of units with exact failures &lt;br /&gt;
*&amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is the number of suspended units &lt;br /&gt;
*&amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is the number of units with left censored or interval times-to-failure &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the parameters of the distribution &lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure&lt;br /&gt;
*&amp;lt;math&amp;gt;{{S}_{j}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; time of suspension&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{U}}}}\,\!&amp;lt;/math&amp;gt; is the ending of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{L}}}}\,\!&amp;lt;/math&amp;gt; is the beginning of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;The total number of units is &amp;lt;math&amp;gt;N = R + M + P\,\!&amp;lt;/math&amp;gt;. It should be noted that in this formulation, if either &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is zero then the product term associated with them is assumed to be one and not zero.&lt;br /&gt;
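A minimal numerical sketch of the complete likelihood follows (not from the original text; the data are hypothetical and the exponential distribution is assumed for simplicity, maximized by golden-section search):

```python
import math

# Hypothetical data of all three kinds
failures = [150, 340, 560]            # exact failure times T_i
suspensions = [300, 700]              # suspension times S_j
intervals = [(200, 400), (500, 800)]  # (I_lL, I_lU) interval failures

def log_L(lam):
    # Complete exponential log-likelihood: f terms, [1 - F] terms, and
    # [F(I_U) - F(I_L)] terms (= exp(-lam*I_L) - exp(-lam*I_U))
    ll = sum(math.log(lam) - lam * t for t in failures)
    ll += sum(-lam * s for s in suspensions)
    ll += sum(math.log(math.exp(-lam * a) - math.exp(-lam * b))
              for a, b in intervals)
    return ll

# Golden-section search for the maximizing lam
lo, hi = 1e-5, 0.05
g = (math.sqrt(5) - 1) / 2
for _ in range(100):
    a = hi - g * (hi - lo)
    b = lo + g * (hi - lo)
    if log_L(a) < log_L(b):
        lo = a
    else:
        hi = b
print((lo + hi) / 2)  # the MLE of lam
```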
&lt;br /&gt;
== Comments on the MLE Method  ==&lt;br /&gt;
The MLE method has many large sample properties that make it attractive for use. It is asymptotically consistent, which means that as the sample size gets larger, the estimates converge to the right values. It is asymptotically efficient, which means that for large samples, it produces the most precise estimates. It is asymptotically unbiased, which means that for large samples, one expects to get the right value on average. The distribution of the estimates themselves is normal, if the sample is large enough, and this is the basis for the usual [[Confidence_Bounds#Fisher_Matrix_Confidence_Bounds|Fisher Matrix Confidence Bounds]] discussed later. These are all excellent large sample properties. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, the size of the sample necessary to achieve these properties can be quite large: thirty to fifty to more than a hundred exact failure times, depending on the application. With fewer points, the methods can be badly biased. It is known, for example, that MLE estimates of the shape parameter for the Weibull distribution are badly biased for small sample sizes, and the effect can be increased depending on the amount of censoring. This bias can cause major discrepancies in analysis. There are also pathological situations when the asymptotic properties of the MLE do not apply. One of these is estimating the location parameter for the three-parameter Weibull distribution when the shape parameter has a value close to 1. These problems, too, can cause major discrepancies. &lt;br /&gt;
&lt;br /&gt;
However, MLE can handle suspensions and interval data better than rank regression, particularly when dealing with a heavily censored data set with few exact failure times or when the censoring times are unevenly distributed. It can also provide estimates with one or no observed failures, which rank regression cannot do. As a rule of thumb, our recommendation is to use rank regression techniques when the sample sizes are small and without heavy censoring (censoring is discussed in [[Life Data Classification|Life Data Classifications]]). When heavy or uneven censoring is present, when a high proportion of interval data is present and/or when the sample size is sufficient, MLE should be preferred. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Appendix:_Maximum_Likelihood_Estimation_Example|Maximum Likelihood Parameter Estimation Example]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Bayesian Parameter Estimation Methods=&lt;br /&gt;
Up to this point, we have dealt exclusively with what is commonly referred to as classical statistics. In this section, another school of thought in statistical analysis will be introduced, namely Bayesian statistics. The premise of Bayesian statistics (within the context of life data analysis) is to incorporate prior knowledge, along with a given set of current observations, in order to make statistical inferences. The prior information could come from operational or observational data, from previous comparable experiments or from engineering knowledge.  This type of analysis can be particularly useful when there is limited test data for a given design or failure mode but there is a strong prior understanding of the failure rate behavior for that design or mode. By incorporating prior information about the parameter(s), a posterior distribution for the parameter(s) can be obtained and inferences on the model parameters and their functions can be made. This section is intended to give a quick and elementary overview of Bayesian methods, focused primarily on the material necessary for understanding the Bayesian analysis methods available in Weibull++. Extensive coverage of the subject can be found in numerous books dealing with Bayesian statistics.&lt;br /&gt;
&lt;br /&gt;
===Bayes’s Rule===&lt;br /&gt;
Bayes&#039;s rule provides the framework for combining prior information with sample data. In this reference, we apply Bayes&#039;s rule for combining prior information on the assumed distribution&#039;s parameter(s) with sample data in order to make inferences based on the model. The prior knowledge about the parameter(s) is expressed in terms of a distribution, &amp;lt;math&amp;gt;\varphi (\theta ),\,\!&amp;lt;/math&amp;gt; called the &#039;&#039;prior distribution&#039;&#039;. The &#039;&#039;posterior&#039;&#039; distribution of &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; given the sample data, using Bayes&#039;s rule, provides the updated information about the parameters &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt;. This is expressed with the following posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta |Data) = \frac{L(Data|\theta )\varphi (\theta )}{\int_{\zeta}^{} L(Data|\theta )\varphi(\theta )d (\theta)}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; is a vector of the parameters of the chosen distribution&lt;br /&gt;
*&amp;lt;math&amp;gt;\zeta\,\!&amp;lt;/math&amp;gt; is the range of &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt; L(Data|\theta)\,\!&amp;lt;/math&amp;gt; is the likelihood function based on the chosen distribution and data&lt;br /&gt;
*&amp;lt;math&amp;gt;\varphi(\theta )\,\!&amp;lt;/math&amp;gt; is the prior distribution for each of the parameters&lt;br /&gt;
&lt;br /&gt;
The integral in the Bayes&#039;s rule equation is often referred to as the marginal probability, which is a constant number that can be interpreted as the probability of obtaining the sample data given a prior distribution. Generally, the integral in the Bayes&#039;s rule equation does not have a closed form solution and numerical methods are needed for its solution.&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Bayes&#039;s rule equation, there is a significant difference between classical and Bayesian statistics. First, the idea of prior information does not exist in classical statistics. All inferences in classical statistics are based on the sample data. On the other hand, in the Bayesian framework, prior information constitutes the basis of the theory. Another difference is in the overall approach of making inferences and their interpretation. For example, in Bayesian analysis, the parameters of the distribution to be fitted are the random variables. In reality, there is no distribution fitted to the data in the Bayesian case.&lt;br /&gt;
&lt;br /&gt;
For instance, consider the case where data is obtained from a reliability test. Based on prior experience with a similar product, the analyst believes that the shape parameter of the Weibull distribution has a value between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\beta }_{2}}\,\!&amp;lt;/math&amp;gt; and wants to utilize this information. This can be achieved by using Bayes&#039;s rule. At this point, the analyst is automatically forcing the Weibull distribution as a model for the data, with a shape parameter between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{\beta _2}\,\!&amp;lt;/math&amp;gt;. In this example, the range of values for the shape parameter is the prior distribution, which in this case is uniform. By applying Bayes&#039;s rule, the posterior distribution of the shape parameter will be obtained. Thus, we end up with a distribution for the parameter rather than a point estimate, as in classical statistics.&lt;br /&gt;
&lt;br /&gt;
To better illustrate the example, assume that a set of failure data was provided along with a distribution for the shape parameter (i.e., uniform prior) of the Weibull (automatically assuming that the data are Weibull distributed). Based on that, a new distribution (the posterior) for that parameter is then obtained using Bayes&#039;s rule. This posterior distribution of the parameter may or may not resemble in form the assumed prior distribution. In other words, in this example the prior distribution of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; was assumed to be uniform but the posterior is most likely not a uniform distribution.&lt;br /&gt;
&lt;br /&gt;
The question now becomes: what is the value of the shape parameter? What about the reliability and other results of interest? In order to answer these questions, we have to remember that in the Bayesian framework all of these metrics are random variables. Therefore, in order to obtain an estimate, a probability needs to be specified or we can use the expected value of the posterior distribution.&lt;br /&gt;
&lt;br /&gt;
In order to demonstrate the procedure of obtaining results from the posterior distribution, we will rewrite the Bayes&#039;s rule equation for a single parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta_1 |Data) = \frac{L(Data|\theta_1 )\varphi (\theta_1 )}{\int_{\zeta}^{} L(Data|\theta_1 )\varphi(\theta_1 )d\theta_1}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The expected value (or mean value) of the parameter &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained using the equation for the mean and the Bayes&#039;s rule equation for single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E({\theta _1}) = {m_{{\theta _1}}} = \int_{\zeta}^{}{\theta _1} \cdot f({\theta _1}|Data)d{\theta _1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative result for &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; would be the median value. Using the equation for the median and the Bayes&#039;s rule equation for a single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{\theta }_{0.5}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation for the median is solved for &amp;lt;math&amp;gt;{\theta _{0.5}}\,\!&amp;lt;/math&amp;gt;, the median value of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Similarly, any other percentile of the posterior &#039;&#039;pdf&#039;&#039; can be calculated and reported. For example, one could calculate the 90th percentile of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;’s posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{{\theta }_{0.9}}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.9\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This calculation will be used in [[Confidence Bounds]] and [[The Weibull Distribution]] for obtaining confidence bounds on the parameter(s).&lt;br /&gt;
&lt;br /&gt;
The next step will be to make inferences on the reliability. Since the parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; is a random variable described by the posterior &#039;&#039;pdf,&#039;&#039; all subsequent functions of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; are distributed random variables as well and are entirely based on the posterior &#039;&#039;pdf&#039;&#039; of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt;. Therefore, the expected value, median or other percentile values of these functions will also need to be calculated. For example, the expected reliability at time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E[R(T|Data)] = \int_{\zeta}^{} R(T)f(\theta |Data)d{\theta}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, at a given time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, there is a distribution that governs the reliability value at that time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, and by using Bayes&#039;s rule, the expected (or mean) value of the reliability is obtained. Other percentiles of this distribution can also be obtained.&lt;br /&gt;
A similar procedure is followed for other functions of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;, such as failure rate, reliable life, etc.&lt;br /&gt;
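To make the procedure concrete, the posterior summaries above can be approximated numerically. The following sketch (pure Python; the failure times, the exponential life model and the flat prior on the failure rate are assumptions chosen only for illustration) evaluates the posterior on a grid and computes the expected parameter value and the expected reliability at a mission time:

```python
import math

# Hypothetical failure times (hours) and mission time; illustration only.
times = [120.0, 260.0, 310.0, 480.0, 700.0]
T = 100.0

# Exponential likelihood L(Data|lam); flat (non-informative) prior on lam.
def likelihood(lam):
    return math.prod(lam * math.exp(-lam * t) for t in times)

# Discretize the parameter space and normalize to obtain the posterior pdf.
lams = [i * 5e-6 for i in range(1, 2001)]   # grid over (0, 0.01]
weights = [likelihood(lam) for lam in lams]
total = sum(weights)
post = [w / total for w in weights]         # posterior mass per grid point

# Expected value of the parameter: E(lam) = integral of lam * f(lam|Data)
e_lam = sum(lam * p for lam, p in zip(lams, post))

# Expected reliability at T: E[R(T|Data)] = integral of R(T; lam) * f(lam|Data)
e_rel = sum(math.exp(-lam * T) * p for lam, p in zip(lams, post))
print(e_lam, e_rel)
```

With a flat prior the exponential posterior has a closed (gamma) form, so the grid results can be checked against it; percentiles such as the median can be read from the cumulative sum of the same posterior masses.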
&lt;br /&gt;
===Prior Distributions===&lt;br /&gt;
Prior distributions play a very important role in Bayesian statistics; they are essentially the foundation of a Bayesian analysis. Two broad types of prior distributions exist, namely &#039;&#039;informative&#039;&#039; and &#039;&#039;non-informative&#039;&#039;. Non-informative prior distributions (also known as &#039;&#039;vague&#039;&#039;, &#039;&#039;flat&#039;&#039; or &#039;&#039;diffuse&#039;&#039; priors) are distributions that have no population basis and play a minimal role in the posterior distribution. They are used to make inferences that are not greatly affected by external information, or when external information is not available. The uniform distribution is frequently used as a non-informative prior.&lt;br /&gt;
&lt;br /&gt;
On the other hand, informative priors have a stronger influence on the posterior distribution. The influence of the prior distribution on the posterior is related to the sample size of the data and the form of the prior. Generally speaking, large sample sizes are required to modify strong priors, whereas weak priors are overwhelmed by even relatively small sample sizes. Informative priors are typically obtained from past data.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=57246</id>
		<title>Parameter Estimation</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=57246"/>
		<updated>2015-02-25T20:55:43Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Shortfalls of the Rank Adjustment Method */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|4|Parameter Estimation}}&lt;br /&gt;
The term &#039;&#039;parameter estimation&#039;&#039; refers to the process of using sample data (in reliability engineering, usually times-to-failure or success data) to estimate the parameters of the selected distribution. Several parameter estimation methods are available. This section presents an overview of the available methods used in life data analysis. More specifically, we start with the relatively simple method of Probability Plotting and continue with the more sophisticated methods of Rank Regression (or Least Squares), Maximum Likelihood Estimation and Bayesian Estimation Methods.&lt;br /&gt;
&lt;br /&gt;
=Probability Plotting=&lt;br /&gt;
The least mathematically intensive method for parameter estimation is the method of probability plotting. As the term implies, probability plotting involves a physical plot of the data on specially constructed &#039;&#039;probability plotting paper&#039;&#039;. This method is easily implemented by hand, given that one can obtain the appropriate probability plotting paper.&lt;br /&gt;
&lt;br /&gt;
The method of probability plotting takes the &#039;&#039;cdf&#039;&#039; of the distribution and attempts to linearize it by employing a specially constructed paper. The following sections illustrate the steps in this method using the 2-parameter Weibull distribution as an example. This includes:&lt;br /&gt;
&lt;br /&gt;
*Linearize the unreliability function&lt;br /&gt;
*Construct the probability plotting paper&lt;br /&gt;
*Determine the X and Y positions of the plot points&lt;br /&gt;
&lt;br /&gt;
The plot can then be used to read any particular time or reliability/unreliability value of interest.&lt;br /&gt;
&lt;br /&gt;
==Linearizing the Unreliability Function==&lt;br /&gt;
&lt;br /&gt;
In the case of the 2-parameter Weibull, the &#039;&#039;cdf&#039;&#039; (also the unreliability &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt;) is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(t)=Q(t)=1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function can then be linearized (i.e., put in the common &amp;lt;math&amp;gt;y = mx + b\,\!&amp;lt;/math&amp;gt; form) as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 Q(t)= &amp;amp;  1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}  \\&lt;br /&gt;
  \ln (1-Q(t))= &amp;amp; \ln \left[ {e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}} \right]  \\&lt;br /&gt;
  \ln (1-Q(t))=&amp;amp; -\left(\tfrac{t}{\eta}\right)^{\beta}  \\&lt;br /&gt;
  \ln ( -\ln (1-Q(t)))= &amp;amp; \beta \left(\ln \left( \frac{t}{\eta }\right)\right) \\&lt;br /&gt;
  \ln \left( \ln \left( \frac{1}{1-Q(t)}\right) \right) = &amp;amp; \beta \ln t -\beta \ln \left( \eta  \right)  \\&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then by setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x=\ln \left( t \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equation can then be rewritten as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\beta x-\beta \ln \left( \eta  \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is now a linear equation with a slope of: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
m = \beta&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and an intercept of:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=-\beta \cdot ln(\eta)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
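The linearization can be verified numerically. In the sketch below (Python; the parameter values and times are made up for illustration), exact unreliability values are generated from a known 2-parameter Weibull, transformed as above, and an ordinary least squares line fit recovers the slope (the shape parameter) and the intercept, from which the scale parameter follows:

```python
import math

# Assumed parameters for illustration: generate exact (t, Q) pairs from a
# known 2-parameter Weibull, then recover beta and eta from the line fit.
beta_true, eta_true = 1.5, 100.0
ts = [20.0, 50.0, 80.0, 120.0, 200.0]
Qs = [1 - math.exp(-(t / eta_true) ** beta_true) for t in ts]

# Transform to the linear form y = beta*x - beta*ln(eta)
xs = [math.log(t) for t in ts]
ys = [math.log(math.log(1 / (1 - q))) for q in Qs]

# Ordinary least squares for slope m and intercept b
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
m = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
b = ybar - m * xbar

beta_hat = m                   # the slope is the shape parameter
eta_hat = math.exp(-b / m)     # from b = -beta*ln(eta)
print(beta_hat, eta_hat)
```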
&lt;br /&gt;
==Constructing the Paper==&lt;br /&gt;
The next task is to construct the Weibull probability plotting paper with the appropriate y and x axes. The x-axis transformation is simply logarithmic. The y-axis is a bit more complex, requiring a double log reciprocal transformation, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left(\ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt; is the unreliability. &lt;br /&gt;
&lt;br /&gt;
Such papers have been created by different vendors and are called &#039;&#039;probability plotting papers&#039;&#039;. ReliaSoft&#039;s reliability engineering resource website at www.weibull.com has different plotting papers available for [http://www.weibull.com/GPaper/index.htm download]. &lt;br /&gt;
&lt;br /&gt;
[[Image:WeibullPaper2C.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
To illustrate, consider the following probability plot on a slightly different type of Weibull probability paper. &lt;br /&gt;
&lt;br /&gt;
[[Image:different_weibull_paper.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
This paper is constructed based on the mentioned y and x transformations, where the y-axis represents unreliability and the x-axis represents time. Both of these values must be known for each time-to-failure point we want to plot. &lt;br /&gt;
&lt;br /&gt;
Then, given the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; values for each point, the points can easily be put on the plot. Once the points have been placed, the best possible straight line is drawn through them. The slope of this line can then be obtained (some probability papers include a slope indicator to simplify this calculation); this slope is the shape parameter &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;. To determine the scale parameter, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; (also called the &#039;&#039;characteristic life&#039;&#039;), one reads the time from the x-axis corresponding to &amp;lt;math&amp;gt;Q(t)=63.2\%\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Note that at:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Q(t=\eta)= &amp;amp; 1-{{e}^{-{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}} \\ &lt;br /&gt;
  = &amp;amp; 1-{{e}^{-1}} \\ &lt;br /&gt;
  = &amp;amp; 0.632 \\ &lt;br /&gt;
  = &amp;amp; 63.2\%  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, if we enter the &#039;&#039;y&#039;&#039; axis at &amp;lt;math&amp;gt;Q(t)=63.2\%\,\!&amp;lt;/math&amp;gt;, the corresponding value of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; will be equal to &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;. Using this simple methodology, the parameters of the Weibull distribution can be estimated.&lt;br /&gt;
&lt;br /&gt;
==Determining the X and Y Position of the Plot Points==&lt;br /&gt;
The points on the plot represent our data or, more specifically, our times-to-failure data. If, for example, we tested four units that failed at 10, 20, 30 and 40 hours, then we would use these times as our &#039;&#039;x&#039;&#039; values or time values. &lt;br /&gt;
&lt;br /&gt;
Determining the appropriate &#039;&#039;y&#039;&#039; plotting positions, or the unreliability values, is a little more complex. To determine the &#039;&#039;y&#039;&#039; plotting positions, we must first determine a value indicating the corresponding unreliability for that failure. In other words, we need to obtain the cumulative percent failed for each time-to-failure. For example, the cumulative percent failed by 10 hours may be 25%, by 20 hours 50%, and so forth. This is a simple method illustrating the idea. The problem with this simple method is the fact that the 100% point is not defined on most probability plots; thus, an alternative and more robust approach must be used. The most widely used method of determining this value is the method of obtaining the &#039;&#039;median rank&#039;&#039; for each failure, as discussed next.&lt;br /&gt;
&lt;br /&gt;
===Median Ranks ===&lt;br /&gt;
The Median Ranks method is used to obtain an estimate of the unreliability for each failure. The median rank is the value that the true probability of failure, &amp;lt;math&amp;gt;Q({{T}_{j}})\,\!&amp;lt;/math&amp;gt;, should have at the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure out of a sample of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units at the 50% confidence level. &lt;br /&gt;
&lt;br /&gt;
The rank can be found for any percentage point, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, greater than zero and less than one, by solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;. This represents the rank, or unreliability estimate, for the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure in the following equation for the cumulative binomial: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the sample size and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; the order number. &lt;br /&gt;
&lt;br /&gt;
The median rank is obtained by solving this equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  at &amp;lt;math&amp;gt;P = 0.50\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;0.50=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; and we have four failures, we would solve the median rank equation for the value of &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  four times; once for each failure with &amp;lt;math&amp;gt;j= 1, 2, 3 \text{ and }4\,\!&amp;lt;/math&amp;gt;. This result can then be used as the unreliability estimate for each failure, or the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;  plotting position. (See also [[The Weibull Distribution|The Weibull Distribution]]&amp;amp;nbsp;for a step-by-step example of this method.) The solution of the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  requires the use of numerical methods.&lt;br /&gt;
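One such numerical solution is a simple bisection on the cumulative binomial, since the left-hand side increases monotonically in Z from 0 to 1. A minimal pure-Python sketch (N = 4 is just an example sample size):

```python
import math

def cum_binom(z, j, n):
    # P = sum_{k=j}^{n} C(n,k) z^k (1-z)^(n-k)
    return sum(math.comb(n, k) * z**k * (1 - z)**(n - k)
               for k in range(j, n + 1))

def median_rank(j, n, p=0.50, tol=1e-10):
    # Solve cum_binom(z, j, n) = p for z by bisection; the sum is
    # monotonically increasing in z, from 0 at z=0 to 1 at z=1.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cum_binom(mid, j, n) >= p:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Median ranks for the four failures out of N = 4
ranks = [median_rank(j, 4) for j in (1, 2, 3, 4)]
print(ranks)
```

For N = 4 this reproduces the tabulated median ranks of approximately 15.91%, 38.57%, 61.43% and 84.09%.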
&lt;br /&gt;
===Beta and F Distributions Approach===&lt;br /&gt;
A more straightforward and easier method of estimating median ranks is by applying two transformations to the cumulative binomial equation, first to the beta distribution and then to the F distribution, resulting in [[Appendix:_Life_Data_Analysis_References|[12, 13]]]: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   MR &amp;amp; = &amp;amp; \tfrac{1}{1+\tfrac{N-j+1}{j}{{F}_{0.50;m;n}}}  \\&lt;br /&gt;
   m &amp;amp; = &amp;amp; 2(N-j+1)  \\&lt;br /&gt;
   n &amp;amp; = &amp;amp; 2j  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{F}_{0.50;m;n}}\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution at the 0.50 point, with &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; degrees of freedom, for failure &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; out of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units.&lt;br /&gt;
&lt;br /&gt;
=== Benard&#039;s Approximation for Median Ranks  ===&lt;br /&gt;
Another quick, though less accurate, approximation of the median ranks is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MR = \frac{{j - 0.3}}{{N + 0.4}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This approximation of the median ranks is also known as &#039;&#039;Benard&#039;s approximation&#039;&#039;.&lt;br /&gt;
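As a quick sketch (the sample size N = 4 is arbitrary), Benard's approximation is a one-line computation, and its values are close to the exact median ranks for the same sample size (approximately 15.91%, 38.57%, 61.43% and 84.09%):

```python
# Benard's approximation MR = (j - 0.3) / (N + 0.4); N = 4 for illustration.
N = 4
benard = [(j - 0.3) / (N + 0.4) for j in range(1, N + 1)]
print(benard)
```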
&lt;br /&gt;
===Kaplan-Meier===&lt;br /&gt;
The Kaplan-Meier estimator (also known as the &#039;&#039;product limit estimator&#039;&#039;) is used as an alternative to the median ranks method for calculating the estimates of the unreliability for probability plotting purposes. The equation of the estimator is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{F}({{t}_{i}})=1-\underset{j=1}{\overset{i}{\mathop \prod }}\,\frac{{{n}_{j}}-{{r}_{j}}}{{{n}_{j}}},\text{ }i=1,...,m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  m =  &amp;amp; {\text{total number of data points}} \\ &lt;br /&gt;
  n =  &amp;amp; {\text{the total number of units}} \\ &lt;br /&gt;
  {n_i} =  &amp;amp; n - \sum_{j = 0}^{i - 1}{s_j} - \sum_{j = 0}^{i - 1}{r_j},\text{ }i=1,...,m \\ &lt;br /&gt;
  {r_j} =  &amp;amp; {\text{ number of failures in the }}{j^{th}}{\text{ data group, and}} \\ &lt;br /&gt;
  {s_j} =  &amp;amp; {\text{number of surviving units in the }}{j^{th}}{\text{ data group}} \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
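A minimal sketch of the product limit calculation (pure Python; the sample size and the per-group failure and suspension counts are hypothetical):

```python
# Grouped data: (r_j failures, s_j suspensions) per data group, in time order.
# The total number of units n and the counts below are assumed for illustration.
groups = [(3, 1), (2, 0), (1, 2)]
n = 10

F_hat = []       # unreliability estimate at each group's time
at_risk = n      # n_i: units still at risk entering group i
surv = 1.0
for r, s in groups:
    surv *= (at_risk - r) / at_risk   # product-limit survival update
    F_hat.append(1 - surv)            # F_hat(t_i) = 1 - product so far
    at_risk -= r + s                  # remove this group's failures and suspensions
print(F_hat)
```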
&lt;br /&gt;
== Probability Plotting Example  ==&lt;br /&gt;
This same methodology can be applied to other distributions with &#039;&#039;cdf&#039;&#039; equations that can be linearized. Different probability papers exist for each distribution, because different distributions have different &#039;&#039;cdf&#039;&#039; equations. ReliaSoft&#039;s software tools automatically create these plots for you. Special scales on these plots allow you to derive the parameter estimates directly from the plots, similar to the way &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; were obtained from the Weibull probability plot. The following example demonstrates the method again, this time using the 1-parameter exponential distribution.&lt;br /&gt;
&lt;br /&gt;
{{:Probability Plotting Example}}&lt;br /&gt;
&lt;br /&gt;
== Comments on the Probability Plotting Method ==&lt;br /&gt;
Besides the most obvious drawback to probability plotting, which is the amount of effort required, manual probability plotting is not always consistent in the results. Two people plotting a straight line through a set of points will not always draw this line the same way, and thus will come up with slightly different results. This method was used primarily before the widespread use of computers that could easily perform the calculations for more complicated parameter estimation methods, such as the least squares and maximum likelihood methods.&lt;br /&gt;
&lt;br /&gt;
= Least Squares (Rank Regression)  =&lt;br /&gt;
Using the idea of probability plotting, regression analysis mathematically fits the best straight line to a set of points, in an attempt to estimate the parameters. Essentially, this is a mathematically based version of the probability plotting method discussed previously. &lt;br /&gt;
&lt;br /&gt;
The method of linear least squares is used for all regression analysis performed by Weibull++, except for the cases of the 3-parameter Weibull, mixed Weibull, gamma and generalized gamma distributions, where a non-linear regression technique is employed. The terms &#039;&#039;linear regression&#039;&#039; and &#039;&#039;least squares&#039;&#039; are used synonymously in this reference. In Weibull++, the term &#039;&#039;rank regression&#039;&#039; is used instead of least squares, or linear regression, because the regression is performed on the rank values, more specifically, the median rank values (represented on the y-axis). The method of least squares requires that a straight line be fitted to a set of data points, such that the sum of the squares of the distances of the points to the fitted line is minimized. This minimization can be performed in either the vertical or horizontal direction. If the regression is on &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the horizontal deviations from the points to the line are minimized. If the regression is on &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the vertical deviations from the points to the line are minimized. This is illustrated in the following figure. &lt;br /&gt;
&lt;br /&gt;
[[Image:minimizingdistance.png|center|500px]]&lt;br /&gt;
&lt;br /&gt;
=== Rank Regression on Y  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;-values are known exactly. Then, according to the &#039;&#039;least squares principle,&#039;&#039; which minimizes the vertical distance between the data points and the straight line fitted to the data, the best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;y=\hat{a}+\hat{b}x\,\!&amp;lt;/math&amp;gt; (where the recently introduced (&amp;lt;math&amp;gt;\hat{ }\,\!&amp;lt;/math&amp;gt;) symbol indicates that this value is an estimate) such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\sum\limits_{i=1}^{N}{{{\left( \hat{a}+\hat{b}{{x}_{i}}-{{y}_{i}} \right)}^{2}}=\min \sum\limits_{i=1}^{N}{{{\left( a+b{{x}_{i}}-{{y}_{i}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and where &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat b\,\!&amp;lt;/math&amp;gt; are the &#039;&#039;least squares estimates&#039;&#039; of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}=\bar{y}-\hat{b}\bar{x}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
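The two estimator equations translate directly into code. A minimal sketch (Python; the data pairs are made up purely for illustration):

```python
# Rank regression on Y: evaluate b_hat and a_hat from the sum formulas.
# The (x, y) pairs below are hypothetical example data.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
N = len(xs)

sx, sy = sum(xs), sum(ys)
sxy = sum(x * y for x, y in zip(xs, ys))
sxx = sum(x * x for x in xs)

# b_hat = (sum xy - sum x * sum y / N) / (sum x^2 - (sum x)^2 / N)
b_hat = (sxy - sx * sy / N) / (sxx - sx * sx / N)
# a_hat = y_bar - b_hat * x_bar
a_hat = sy / N - b_hat * sx / N
print(a_hat, b_hat)
```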
&lt;br /&gt;
=== Rank Regression on X  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the y-values are known exactly. The same least squares principle is applied, but this time, minimizing the horizontal distance between the data points and the straight line fitted to the data. The best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;x=\widehat{a}+\widehat{b}y\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(\widehat{a}+\widehat{b}{{y}_{i}}-{{x}_{i}})}^{2}}=\min \underset{i=1}{\overset{N}{\mathop \sum }}\,{{(a+b{{y}_{i}}-{{x}_{i}})}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat b\,\!&amp;lt;/math&amp;gt; are the least squares estimates of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}=\bar{x}-\hat{b}\bar{y}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding relations for determining the parameters for specific distributions (i.e., Weibull, exponential, etc.), are presented in the chapters covering that distribution.&lt;br /&gt;
&lt;br /&gt;
=== Correlation Coefficient  ===&lt;br /&gt;
The correlation coefficient is a measure of how well the linear regression model fits the data and is usually denoted by &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt;. In the case of life data analysis, it is a measure for the strength of the linear relation (correlation) between the median ranks and the data. The population correlation coefficient is defined as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\rho =\frac{{{\sigma }_{xy}}}{{{\sigma }_{x}}{{\sigma }_{y}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\sigma}_{xy}} = \,\!&amp;lt;/math&amp;gt; covariance of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\sigma}_{x}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{\sigma}_{y}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The estimator of &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt; is the sample correlation coefficient, &amp;lt;math&amp;gt;\hat{\rho }\,\!&amp;lt;/math&amp;gt;, given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\sqrt{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N} \right)\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The range of &amp;lt;math&amp;gt;\hat \rho \,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;-1\le \hat{\rho }\le 1\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
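The sample correlation coefficient is likewise a direct computation. In this sketch (Python; the perfectly linear example data are chosen so that the result should be +1):

```python
import math

# Sample correlation coefficient rho_hat for (x, y) pairs; data are hypothetical.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # exactly linear with positive slope
N = len(xs)

# Numerator: sum xy - (sum x)(sum y)/N; denominator: product of the two
# corrected sums of squares, under a square root.
num = sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys) / N
den = math.sqrt((sum(x * x for x in xs) - sum(xs) ** 2 / N) *
                (sum(y * y for y in ys) - sum(ys) ** 2 / N))
rho_hat = num / den
print(rho_hat)
```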
&lt;br /&gt;
[[Image:correlationcoeffficient.png|center|500px]] &lt;br /&gt;
&lt;br /&gt;
The closer the value is to &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt;, the better the linear fit. Note that +1 indicates a perfect fit (the paired values (&amp;lt;math&amp;gt;{{x}_{i}},{{y}_{i}}\,\!&amp;lt;/math&amp;gt;) lie on a straight line) with a positive slope, while -1 indicates a perfect fit with a negative slope. A correlation coefficient value of zero would indicate that the data are randomly scattered and have no pattern or correlation in relation to the regression line model.&lt;br /&gt;
&lt;br /&gt;
===Comments on the Least Squares Method===&lt;br /&gt;
The least squares estimation method is quite good for functions that can be linearized. For these distributions, the calculations are relatively easy and straightforward, having closed-form solutions that can readily yield an answer without having to resort to numerical techniques or tables. Furthermore, this technique provides a good measure of the goodness-of-fit of the chosen distribution in the correlation coefficient. Least squares is generally best used with data sets containing complete data, that is, data consisting only of single times-to-failure with no censored or interval data. (See [[Life Data Classification]] for information about the different data types, including complete, left censored, right censored (or suspended) and interval data.) &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Least Squares/Rank Regression Equations]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Rank Methods for Censored Data=&lt;br /&gt;
All available data should be considered in the analysis of times-to-failure data. This includes the case when a particular unit in a sample has been removed from the test prior to failure. An item, or unit, which is removed from a reliability test prior to failure, or a unit which is in the field and is still operating at the time the reliability of these units is to be determined, is called a &#039;&#039;suspended item&#039;&#039;, &#039;&#039;right censored observation&#039;&#039; or &#039;&#039;right censored&#039;&#039; data point. Suspended items analysis would also be considered when: &lt;br /&gt;
&lt;br /&gt;
#We need to make an analysis of the available results before test completion. &lt;br /&gt;
#The failure modes that are occurring are different from those anticipated, and such units are withdrawn from the test. &lt;br /&gt;
#We need to analyze a single mode and the actual data set comprises multiple modes. &lt;br /&gt;
#A &#039;&#039;warranty analysis&#039;&#039; is to be made of all units in the field (non-failed and failed units). The non-failed units are considered to be suspended items (or right censored).&lt;br /&gt;
&lt;br /&gt;
This section describes the rank methods that are used in both probability plotting and least squares (rank regression) to handle censored data. This includes:&lt;br /&gt;
&lt;br /&gt;
*The rank adjustment method for right censored (suspension) data.&lt;br /&gt;
*ReliaSoft&#039;s alternative ranking method for censored data including left censored, right censored, and interval data.&lt;br /&gt;
=== Rank Adjustment Method for Right Censored Data ===&lt;br /&gt;
When using the probability plotting or least squares (rank regression) method for data sets where some of the units did not fail, or were suspended, we need to adjust their probability of failure, or unreliability. As discussed before, estimates of the unreliability for complete data are obtained using the median ranks approach. The following methodology illustrates how adjusted median ranks are computed to account for right censored data. To better illustrate the methodology, consider the following example in Kececioglu [[Appendix:_Life_Data_Analysis_References|&amp;amp;nbsp;[20]]] where five items are tested resulting in three failures and two suspensions. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Item Number &amp;lt;br&amp;gt;(Position) &lt;br /&gt;
! Failure (F) &amp;lt;br&amp;gt;or Suspension (S) &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 5,100&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,500&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 15,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 22,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 40,000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The methodology for plotting suspended items involves adjusting the rank positions and plotting the data based on new positions, determined by the location of the suspensions. If we consider these five units, the following methodology would be used: The first item must be the first failure; hence, it is assigned failure order number &amp;lt;math&amp;gt;j = 1\,\!&amp;lt;/math&amp;gt;. The actual failure order number (or position) of the second failure, &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; is in doubt. It could either be in position 2 or in position 3. Had &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; not been withdrawn from the test at 9,500 hours, it could have operated successfully past 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 2. Alternatively, &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; could also have failed before 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 3. In this case, the failure order number for &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; will be some number between 2 and 3. To determine this number, consider the following: &lt;br /&gt;
&lt;br /&gt;
We can find the number of ways the second failure can occur in either order number 2 (position 2) or order number 3 (position 3). The possible ways are listed next. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;6&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 2 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 3&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 4 &lt;br /&gt;
| 5 &lt;br /&gt;
| 6 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be seen that &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; can occur in the second position six ways and in the third position two ways. The most probable position is the average of these possible ways, or the &#039;&#039;mean order number&#039;&#039; (MON), given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}=MO{{N}_{2}}=\frac{(6\times 2)+(2\times 3)}{6+2}=2.25\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Using the same logic on the third failure, it can be located in position numbers 3, 4 and 5 in the possible ways listed next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 3 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 4&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 5&lt;br /&gt;
|-&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Then, the mean order number for the third failure (item 5) is: &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=\frac{(2\times 3)+(3\times 4)+(3\times 5)}{2+3+3}=4.125\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
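&lt;br /&gt;
The enumeration above can be reproduced programmatically. As a minimal sketch (the item labels and constraint encoding are illustrative, not part of the original method description): generate every ordering of the five units in which the failures keep their observed time order and each suspension appears after every failure that preceded its withdrawal time, then average the positions of each failure.&lt;br /&gt;
&lt;br /&gt;
```python
from itertools import permutations
from statistics import mean

items = ["F1", "F2", "F3", "S1", "S2"]

def valid(seq):
    # failures must keep their observed time order
    if [x for x in seq if x.startswith("F")] != ["F1", "F2", "F3"]:
        return False
    # S1 (withdrawn at 9,500 hr) could only have failed after F1 (5,100 hr);
    # S2 (withdrawn at 22,000 hr) could only have failed after F2 (15,000 hr)
    return (seq.index("S1") > seq.index("F1") and
            seq.index("S2") > seq.index("F2"))

orders = [seq for seq in permutations(items) if valid(seq)]
mon2 = mean(seq.index("F2") + 1 for seq in orders)  # position is 1-indexed
mon3 = mean(seq.index("F3") + 1 for seq in orders)
print(len(orders), mon2, mon3)  # 8 orderings; MON2 = 2.25, MON3 = 4.125
```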
&lt;br /&gt;
&amp;lt;br&amp;gt;Once the mean order number for each failure has been established, we obtain the median rank positions for these failures at their mean order number. Specifically, we obtain the median rank of the order numbers 1, 2.25 and 4.125 out of a sample size of 5, as given next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Plotting Positions for the Failures (Sample Size=5)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Failure Number &lt;br /&gt;
! MON &lt;br /&gt;
! Median Rank Position(%)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1:&amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1 &lt;br /&gt;
| 13%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2:&amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 2.25 &lt;br /&gt;
| 36%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3:&amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 4.125 &lt;br /&gt;
| 71%&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
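The fractional mean order numbers can be converted to the median rank positions shown above using Benard&#039;s approximation, MR = (MON - 0.3)/(N + 0.4). A quick check against the table:&lt;br /&gt;
&lt;br /&gt;
```python
N = 5  # sample size from the example above
# Benard's approximation to the median rank at each mean order number
benard = {mon: (mon - 0.3) / (N + 0.4) for mon in (1, 2.25, 4.125)}
for mon, mr in benard.items():
    print(f"MON {mon}: {mr:.0%}")  # 13%, 36%, 71% -- matching the table
```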
Once the median rank values have been obtained, the probability plotting analysis is identical to that presented before. As you might have noticed, this methodology is rather laborious. Other techniques and shortcuts have been developed over the years to streamline this procedure. For more details on this method, see Kececioglu [[Appendix:_Life_Data_Analysis_References|[20]]]. Here, we will introduce one of these methods. This method calculates MON using an increment, &#039;&#039;I&#039;&#039;, which is defined by:&lt;br /&gt;
&lt;br /&gt;
:: &amp;lt;math&amp;gt;{{I}_{i}}=\frac{N+1-PMON}{1+NIBPSS}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
* &#039;&#039;N&#039;&#039;= the sample size, or total number of items in the test&lt;br /&gt;
* &#039;&#039;PMON&#039;&#039; = previous mean order number&lt;br /&gt;
* &#039;&#039;NIBPSS&#039;&#039; = the number of items beyond the present suspended set; that is, the number of units (including the current failure and all later failures and suspensions) remaining on test at the current failure time.&lt;br /&gt;
* &#039;&#039;i&#039;&#039; = the ith failure item&lt;br /&gt;
&lt;br /&gt;
MON is given as:&lt;br /&gt;
 &lt;br /&gt;
:: &amp;lt;math&amp;gt;MO{{N}_{i}}=MO{{N}_{i-1}}+{{I}_{i}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s calculate the previous example using this method.&lt;br /&gt;
&lt;br /&gt;
For F1:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{1}}=MO{{N}_{0}}+{{I}_{1}}=\frac{5+1-0}{1+5}=1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For F2:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{2}}=MO{{N}_{1}}+{{I}_{2}}=1+\frac{5+1-1}{1+3}=2.25&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For F3:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=MO{{N}_{2}}+{{I}_{3}}=2.25+\frac{5+1-2.25}{1+1}=4.125&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The MON obtained for each failure via this method is the same as that obtained with the first method, so the median rank values will also be the same.&lt;br /&gt;
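&lt;br /&gt;
The increment method can be coded directly from the definitions above. The sketch below is illustrative (the function name and the &amp;quot;F&amp;quot;/&amp;quot;S&amp;quot; encoding are my own):&lt;br /&gt;
&lt;br /&gt;
```python
def mean_order_numbers(states):
    """Rank adjustment via the increment method.

    states: "F"/"S" flag for each unit, sorted by time.
    Returns the mean order number (MON) of each failure.
    """
    N = len(states)                # sample size
    mons, pmon = [], 0.0           # pmon = previous mean order number
    for i, s in enumerate(states):
        if s == "F":
            nibpss = N - i         # units still on test at this failure time
            pmon += (N + 1 - pmon) / (1 + nibpss)   # increment I_i
            mons.append(pmon)
    return mons

print(mean_order_numbers(["F", "S", "F", "S", "F"]))  # [1.0, 2.25, 4.125]
```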
&lt;br /&gt;
For grouped data, the increment &amp;lt;math&amp;gt;{{I}_{i}}&amp;lt;/math&amp;gt; at each failure group is multiplied by the number of failures in that group. &lt;br /&gt;
 &lt;br /&gt;
==== Shortfalls of the Rank Adjustment Method  ====&lt;br /&gt;
Even though the rank adjustment method is the most widely used method for performing analysis of suspended items, we would like to point out the following shortcoming. As you may have noticed, only the positions of the suspensions relative to the failures are taken into account, not the exact times-to-suspension. For example, this methodology would yield the exact same results for the next two cases. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 1 &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of an item, hr &lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of an item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000 &lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,100 &lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,700&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,200 &lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,800&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,300 &lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,900&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000 &lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This shortfall is significant when the number of failures is small and the number of suspensions is large and not spread uniformly between failures, as with these data. In cases like this, it is highly recommended to use maximum likelihood estimation (MLE) to estimate the parameters instead of least squares, because MLE does not look at ranks or plotting positions, but rather considers the exact time of each failure or suspension. For the data given above, the results are as follows. The parameters estimated using rank regression with the rank adjustment method just described are the same for both cases (1 and 2): &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.81}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{11,417 hr}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, the MLE results for Case 1 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{1}\text{.33}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{6,900 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the MLE results for Case 2 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.9337}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{21,348 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we can see, the MLE results for the two cases differ substantially from each other and from the regression results with SRM. The results for both cases are identical when using the regression estimation technique with SRM, because SRM considers only the positions of the suspensions. The MLE results, however, are quite different for the two cases, with the second case yielding a much larger value of &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; due to the higher suspension times in Case 2. This is because the maximum likelihood technique, unlike rank regression with SRM, considers the values of the suspension times when estimating the parameters. This is illustrated in the [[Parameter_Estimation#Maximum_Likelihood_Estimation_.28MLE.29|discussion of MLE]] given below.&lt;br /&gt;
&lt;br /&gt;
One alternative for improving the regression method is to use ReliaSoft&#039;s Ranking Method (RRM) to calculate the ranks; RRM does consider the effect of the censoring times.&lt;br /&gt;
&lt;br /&gt;
== ReliaSoft&#039;s Ranking Method (RRM) for Interval Censored Data==&lt;br /&gt;
When analyzing interval data, it is commonplace to assume that the actual failure time occurred at the midpoint of the interval. To be more conservative, you can use the starting point of the interval; to be most optimistic, you can use the end point. Weibull++ allows you to employ ReliaSoft&#039;s ranking method (RRM) when analyzing interval data. Using an iterative process, this ranking method is an improvement over the standard ranking method (SRM). &lt;br /&gt;
&lt;br /&gt;
When analyzing left or right censored data, RRM also considers the effect of the actual censoring time. Therefore, the resulting ranks will be more accurate than those from SRM, where only the position, not the exact time, of censoring is used. &lt;br /&gt;
&lt;br /&gt;
For more details on this method see [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|ReliaSoft&#039;s Ranking Method]].&lt;br /&gt;
&lt;br /&gt;
= Maximum Likelihood Estimation (MLE) = &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM OTHER WIKI PAGES. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
From a statistical point of view, the method of maximum likelihood estimation is, with some exceptions, considered to be the most robust of the parameter estimation techniques discussed here. The method presented in this section is for complete data (i.e., data consisting only of times-to-failure). The analyses for [[Parameter_Estimation#MLE_for_Right_Censored_Data|right censored (suspension) data]] and for [[Parameter_Estimation#MLE_for_Interval_and_Left_Censored_Data|interval or left censored data]] are then discussed in the following sections.&lt;br /&gt;
&lt;br /&gt;
The basic idea behind MLE is to obtain the most likely values of the parameters, for a given distribution, that will best describe the data. As an example, consider the following data (-3, 0, 4) and assume that you are trying to estimate the mean of the data. Now, if you have to choose the most likely value for the mean from -5, 1 and 10, which one would you choose? In this case, the most likely value is 1 (given your limit on choices). Similarly, under MLE, one determines the most likely values for the parameters of the assumed distribution. It is mathematically formulated as follows. &lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; unknown parameters that need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; independent observations, &amp;lt;math&amp;gt;{{x}_{1}},{{x}_{2}},\cdots ,{{x}_{R}}\,\!&amp;lt;/math&amp;gt;, which in the case of life data analysis correspond to failure times. The likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{R}})=L=\underset{i=1}{\overset{R}{\mathop \prod }}\,f({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;i = 1,2,...,R\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logarithmic likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Lambda  = \ln L =\sum_{i = 1}^R \ln f({x_i};{\theta _1},{\theta _2},...,{\theta _k})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood estimators (or parameter values) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are obtained by maximizing &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By maximizing &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;, which is much easier to work with than &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt;, the maximum likelihood estimators (MLE) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the simultaneous solutions of the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; equations: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{\partial{\Lambda}}{\partial{\theta_j}}=0, \text{ j=1,2...,k}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even though it is common practice to plot the MLE solutions using median ranks (points are plotted according to median ranks and the line according to the MLE solutions), this is not completely representative. As can be seen from the equations above, the MLE method is independent of any kind of ranks. For this reason, the MLE solution often appears not to track the data on the probability plot. This is perfectly acceptable because the two methods are independent of each other, and in no way suggests that the solution is wrong.&lt;br /&gt;
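&lt;br /&gt;
As a concrete sketch, consider the exponential distribution with pdf f(t; lambda) = lambda*exp(-lambda*t): the log-likelihood for complete data is n*ln(lambda) - lambda*sum(t_i), and setting its derivative to zero gives the closed-form estimator lambda-hat = n/sum(t_i). The failure times below are hypothetical, chosen only to illustrate the mechanics:&lt;br /&gt;
&lt;br /&gt;
```python
import math

times = [50.0, 120.0, 310.0, 400.0]   # hypothetical complete failure data
lam_hat = len(times) / sum(times)     # closed-form exponential MLE

def log_lik(lam):
    # logarithmic likelihood: sum of ln f(t_i; lam) over all failures
    return sum(math.log(lam) - lam * t for t in times)

# the closed-form estimate maximizes the log-likelihood over nearby values
assert all(log_lik(lam_hat) >= log_lik(lam_hat * k)
           for k in (0.5, 0.9, 1.1, 2.0))
print(lam_hat)
```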
&lt;br /&gt;
=== MLE for Right Censored Data  ===&lt;br /&gt;
When performing maximum likelihood analysis on data with suspended items, the likelihood function needs to be expanded to take into account the suspended items. The overall estimation technique does not change, but another term is added to the likelihood function to account for the suspended items. Beyond that, the method of solving for the parameter estimates remains the same. For example, consider a distribution where &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039; and &#039;&#039;cdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
    &amp;amp; F(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the unknown parameters which need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; observed failures at &amp;lt;math&amp;gt;{{T}_{1}},{{T}_{2}},...,{{T}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; observed suspensions at &amp;lt;math&amp;gt;{{S}_{1}},{{S}_{2}},...,{{S}_{M}}\,\!&amp;lt;/math&amp;gt;. The likelihood function is then formulated as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R,}}{{S}_{1}},...,{{S}_{M}})= &amp;amp; \underset{i=1}{\overset{R}{\mathop \prod }}\,f({{T}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \cdot \underset{j=1}{\overset{M}{\mathop \prod }}\,[1-F({{S}_{j}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameters are solved by maximizing this equation. In most cases, no closed-form solution exists for this maximum or for the parameters. Solutions specific to each distribution utilizing MLE are presented in [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
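&lt;br /&gt;
A hedged illustration of why the suspension terms matter, using the exponential distribution, for which the censored likelihood happens to have a closed-form maximum: maximizing n*ln(lambda) - lambda*(sum of all failure and suspension times) gives lambda-hat = (number of failures)/(total time on test). The data are Case 1 and Case 2 from the table above; note that the eta values quoted in the text come from a Weibull fit, not from this exponential sketch:&lt;br /&gt;
&lt;br /&gt;
```python
def exp_mttf(failures, suspensions):
    # exponential MLE: lam_hat = n_failures / (total time on test),
    # so the mean time to failure is the reciprocal, shown here directly
    return (sum(failures) + sum(suspensions)) / len(failures)

case1 = exp_mttf([1000, 10000], [1100, 1200, 1300])   # suspensions early
case2 = exp_mttf([1000, 10000], [9700, 9800, 9900])   # suspensions late
# identical rank positions, very different MLE results
print(case1, case2)
```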
&lt;br /&gt;
=== MLE for Interval and Left Censored Data  ===&lt;br /&gt;
The inclusion of left and interval censored data in an MLE solution for parameter estimates involves adding a term to the likelihood equation to account for the data types in question. When using interval data, it is assumed that the failures occurred in an interval; i.e., in the interval from time &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or from time 0 to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; if left censored), where &amp;lt;math&amp;gt;A &amp;lt; B\,\!&amp;lt;/math&amp;gt;. In the case of interval data, and given &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; interval observations, the likelihood function is modified by multiplying the likelihood function with an additional term as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{P}})= &amp;amp; \underset{i=1}{\overset{P}{\mathop \prod }}\,\{F({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \ \ -F({{x}_{i-1}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that if only interval data are present, this term will represent the entire likelihood function for the MLE solution. The next section gives a formulation of the complete likelihood function for all possible censoring schemes.&lt;br /&gt;
&lt;br /&gt;
=== The Complete Likelihood Function  ===&lt;br /&gt;
We have now seen that obtaining MLE parameter estimates for different types of data involves incorporating different terms in the likelihood function to account for complete data, right censored data, and left or interval censored data. After including the terms for the different types of data, the likelihood function can be expressed in its complete form: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
    L= &amp;amp; \underset{i=1}{\mathop{\overset{R}{\mathop{\prod }}\,}}\,f({{T}_{i}};{{\theta }_{1}},...,{{\theta }_{k}})\cdot \underset{j=1}{\mathop{\overset{M}{\mathop{\prod }}\,}}\,[1-F({{S}_{j}};{{\theta }_{1}},...,{{\theta }_{k}})]  \\&lt;br /&gt;
    &amp;amp; \cdot \underset{l=1}{\mathop{\overset{P}{\mathop{\prod }}\,}}\,\left\{ F({{I}_{{{l}_{U}}}};{{\theta }_{1}},...,{{\theta }_{k}})-F({{I}_{{{l}_{L}}}};{{\theta }_{1}},...,{{\theta }_{k}}) \right\}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; L\to L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R}},{{S}_{1}},...,{{S}_{M}},{{I}_{1}},...{{I}_{P}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
*&amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the number of units with exact failures &lt;br /&gt;
*&amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is the number of suspended units &lt;br /&gt;
*&amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is the number of units with left censored or interval times-to-failure &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the parameters of the distribution &lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure&lt;br /&gt;
*&amp;lt;math&amp;gt;{{S}_{j}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; time of suspension&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{U}}}}\,\!&amp;lt;/math&amp;gt; is the ending of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{L}}}}\,\!&amp;lt;/math&amp;gt; is the beginning of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;The total number of units is &amp;lt;math&amp;gt;N = R + M + P\,\!&amp;lt;/math&amp;gt;. It should be noted that in this formulation, if any of &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is zero, then the corresponding product term is taken to be one, not zero.&lt;br /&gt;
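&lt;br /&gt;
The complete log-likelihood can be sketched for the exponential distribution, with pdf f(t) = lambda*exp(-lambda*t) and cdf F(t) = 1 - exp(-lambda*t). The function below is illustrative only, with one term per data type, mirroring the three products above:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def complete_log_lik(lam, failures, suspensions, intervals):
    F = lambda t: 1.0 - math.exp(-lam * t)                      # exponential cdf
    ll  = sum(math.log(lam) - lam * t for t in failures)        # ln f(T_i)
    ll += sum(-lam * s for s in suspensions)                    # ln[1 - F(S_j)]
    ll += sum(math.log(F(u) - F(lo)) for lo, u in intervals)    # interval terms
    return ll

# empty data types contribute nothing: their product term is one, not zero
assert complete_log_lik(0.01, [], [], []) == 0.0
```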
&lt;br /&gt;
== Comments on the MLE Method  ==&lt;br /&gt;
The MLE method has many large sample properties that make it attractive for use. It is asymptotically consistent, which means that as the sample size gets larger, the estimates converge to the right values. It is asymptotically efficient, which means that for large samples, it produces the most precise estimates. It is asymptotically unbiased, which means that for large samples, one expects to get the right value on average. The distribution of the estimates themselves is normal, if the sample is large enough, and this is the basis for the usual [[Confidence_Bounds#Fisher_Matrix_Confidence_Bounds|Fisher Matrix Confidence Bounds]] discussed later. These are all excellent large sample properties. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, the size of the sample necessary to achieve these properties can be quite large: thirty to fifty to more than a hundred exact failure times, depending on the application. With fewer points, the methods can be badly biased. It is known, for example, that MLE estimates of the shape parameter for the Weibull distribution are badly biased for small sample sizes, and the effect can be increased depending on the amount of censoring. This bias can cause major discrepancies in analysis. There are also pathological situations when the asymptotic properties of the MLE do not apply. One of these is estimating the location parameter for the three-parameter Weibull distribution when the shape parameter has a value close to 1. These problems, too, can cause major discrepancies. &lt;br /&gt;
&lt;br /&gt;
However, MLE can handle suspensions and interval data better than rank regression, particularly when dealing with a heavily censored data set with few exact failure times or when the censoring times are unevenly distributed. It can also provide estimates with one or no observed failures, which rank regression cannot do. As a rule of thumb, our recommendation is to use rank regression techniques when the sample sizes are small and without heavy censoring (censoring is discussed in [[Life Data Classification|Life Data Classifications]]). When heavy or uneven censoring is present, when a high proportion of interval data is present and/or when the sample size is sufficient, MLE should be preferred. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Appendix:_Maximum_Likelihood_Estimation_Example|Maximum Likelihood Parameter Estimation Example]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Bayesian Parameter Estimation Methods=&lt;br /&gt;
Up to this point, we have dealt exclusively with what is commonly referred to as classical statistics. In this section, another school of thought in statistical analysis will be introduced, namely Bayesian statistics. The premise of Bayesian statistics (within the context of life data analysis) is to incorporate prior knowledge, along with a given set of current observations, in order to make statistical inferences. The prior information could come from operational or observational data, from previous comparable experiments or from engineering knowledge.  This type of analysis can be particularly useful when there is limited test data for a given design or failure mode but there is a strong prior understanding of the failure rate behavior for that design or mode. By incorporating prior information about the parameter(s), a posterior distribution for the parameter(s) can be obtained and inferences on the model parameters and their functions can be made. This section is intended to give a quick and elementary overview of Bayesian methods, focused primarily on the material necessary for understanding the Bayesian analysis methods available in Weibull++. Extensive coverage of the subject can be found in numerous books dealing with Bayesian statistics.&lt;br /&gt;
&lt;br /&gt;
===Bayes’s Rule===&lt;br /&gt;
Bayes’s rule provides the framework for combining prior information with sample data. In this reference, we apply Bayes’s rule for combining prior information on the assumed distribution&#039;s parameter(s) with sample data in order to make inferences based on the model. The prior knowledge about the parameter(s) is expressed in terms of a distribution &amp;lt;math&amp;gt;\varphi (\theta ),\,\!&amp;lt;/math&amp;gt; called the &#039;&#039;prior distribution&#039;&#039;. The &#039;&#039;posterior&#039;&#039; distribution of &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; given the sample data, using Bayes&#039;s rule, provides the updated information about the parameters &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt;. This is expressed with the following posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta |Data) = \frac{L(Data|\theta )\varphi (\theta )}{\int_{\zeta}^{} L(Data|\theta )\varphi(\theta )\,d\theta}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; is a vector of the parameters of the chosen distribution&lt;br /&gt;
*&amp;lt;math&amp;gt;\zeta\,\!&amp;lt;/math&amp;gt; is the range of &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt; L(Data|\theta)\,\!&amp;lt;/math&amp;gt; is the likelihood function based on the chosen distribution and data&lt;br /&gt;
*&amp;lt;math&amp;gt;\varphi(\theta )\,\!&amp;lt;/math&amp;gt; is the prior distribution for each of the parameters&lt;br /&gt;
&lt;br /&gt;
The integral in the Bayes&#039;s rule equation is often referred to as the marginal probability, which is a constant number that can be interpreted as the probability of obtaining the sample data given a prior distribution. Generally, the integral in the Bayes&#039;s rule equation does not have a closed form solution and numerical methods are needed for its solution.&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Bayes&#039;s rule equation, there is a significant difference between classical and Bayesian statistics. First, the idea of prior information does not exist in classical statistics; all inferences in classical statistics are based on the sample data. In the Bayesian framework, on the other hand, prior information constitutes the basis of the theory. Another difference is in the overall approach of making inferences and their interpretation. For example, in Bayesian analysis, the parameters of the distribution to be fitted are themselves random variables. Strictly speaking, no single distribution is fitted to the data in the Bayesian case; rather, a distribution is obtained for the parameters.&lt;br /&gt;
&lt;br /&gt;
For instance, consider the case where data is obtained from a reliability test. Based on prior experience on a similar product, the analyst believes that the shape parameter of the Weibull distribution has a value between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\beta }_{2}}\,\!&amp;lt;/math&amp;gt; and wants to utilize this information. This can be achieved by using Bayes&#039;s rule. At this point, the analyst is automatically forcing the Weibull distribution as a model for the data, with a shape parameter between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{\beta _2}\,\!&amp;lt;/math&amp;gt;. In this example, the range of values for the shape parameter is the prior distribution, which in this case is uniform. By applying Bayes&#039;s rule, the posterior distribution of the shape parameter will be obtained. Thus, we end up with a distribution for the parameter rather than an estimate of the parameter, as in classical statistics.&lt;br /&gt;
&lt;br /&gt;
To better illustrate the example, assume that a set of failure data was provided along with a distribution for the shape parameter (i.e., uniform prior) of the Weibull (automatically assuming that the data are Weibull distributed). Based on that, a new distribution (the posterior) for that parameter is then obtained using Bayes&#039;s rule. This posterior distribution of the parameter may or may not resemble in form the assumed prior distribution. In other words, in this example the prior distribution of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; was assumed to be uniform but the posterior is most likely not a uniform distribution.&lt;br /&gt;
&lt;br /&gt;
The question now becomes: what is the value of the shape parameter? What about the reliability and other results of interest? In order to answer these questions, we have to remember that in the Bayesian framework all of these metrics are random variables. Therefore, in order to obtain an estimate, a probability needs to be specified or we can use the expected value of the posterior distribution.&lt;br /&gt;
&lt;br /&gt;
In order to demonstrate the procedure of obtaining results from the posterior distribution, we will rewrite the Bayes&#039;s rule equation for a single parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta_1 |Data) = \frac{L(Data|\theta_1 )\varphi (\theta_1 )}{\int_{\zeta}^{} L(Data|\theta_1 )\varphi(\theta_1 )\,d\theta_1}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The expected value (or mean value) of the parameter &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained using the equation for the mean and the Bayes&#039;s rule equation for single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E({\theta _1}) = {m_{{\theta _1}}} = \int_{\zeta}^{}{\theta _1} \cdot f({\theta _1}|Data)d{\theta _1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative result for &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; would be the median value. Using the equation for the median and the Bayes&#039;s rule equation for a single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{\theta }_{0.5}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation for the median is solved for &amp;lt;math&amp;gt;{\theta _{0.5}}\,\!&amp;lt;/math&amp;gt;, the median value of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Similarly, any other percentile of the posterior &#039;&#039;pdf&#039;&#039; can be calculated and reported. For example, one could calculate the 90th percentile of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;’s posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{{\theta }_{0.9}}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.9\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This calculation will be used in [[Confidence Bounds]] and [[The Weibull Distribution]] for obtaining confidence bounds on the parameter(s).&lt;br /&gt;
&lt;br /&gt;
The next step will be to make inferences on the reliability. Since the parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; is a random variable described by the posterior &#039;&#039;pdf,&#039;&#039; all subsequent functions of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; are distributed random variables as well and are entirely based on the posterior &#039;&#039;pdf&#039;&#039; of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt;. Therefore, expected value, median or other percentile values will also need to be calculated. For example, the expected reliability at time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E[R(T|Data)] = \int_{\zeta}^{} R(T)f(\theta |Data)\,d\theta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, at a given time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, there is a distribution that governs the reliability value at that time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, and by using Bayes&#039;s rule, the expected (or mean) value of the reliability is obtained. Other percentiles of this distribution can also be obtained.&lt;br /&gt;
A similar procedure is followed for other functions of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;, such as failure rate, reliable life, etc.&lt;br /&gt;
&lt;br /&gt;
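To make the procedure above concrete, here is a minimal numerical sketch in Python. It assumes, purely for illustration, a Weibull model with the scale parameter eta held fixed at a known value (so the shape parameter beta is the single unknown, as in the example), a uniform prior on beta, and hypothetical failure times; all names and values below are illustrative, not taken from the text.

```python
import math

def weibull_pdf(t, beta, eta):
    # Weibull pdf: (beta/eta) * (t/eta)^(beta-1) * exp(-(t/eta)^beta)
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-(t / eta) ** beta)

def likelihood(data, beta, eta):
    # L(Data|beta): product of the pdf over the observed failure times
    prod = 1.0
    for t in data:
        prod *= weibull_pdf(t, beta, eta)
    return prod

def trapz(vals, h):
    # trapezoidal rule on an evenly spaced grid
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def posterior(data, eta, b1, b2, n=2000):
    # grid over the uniform prior's support [b1, b2]; the constant prior
    # cancels in Bayes's rule, so only the likelihood is needed
    betas = [b1 + (b2 - b1) * i / n for i in range(n + 1)]
    h = (b2 - b1) / n
    w = [likelihood(data, b, eta) for b in betas]
    marginal = trapz(w, h)                      # the normalizing integral
    return betas, [v / marginal for v in w], h

# hypothetical failure times; eta fixed at 80 for a one-parameter illustration
data = [16.0, 34.0, 53.0, 75.0, 93.0, 120.0]
betas, pdf, h = posterior(data, eta=80.0, b1=1.0, b2=3.0)

# posterior mean of beta
mean_beta = trapz([b * p for b, p in zip(betas, pdf)], h)

# posterior median: accumulate area under the posterior until it reaches 0.5
cum, median_beta = 0.0, betas[-1]
for i in range(1, len(betas)):
    cum += 0.5 * (pdf[i - 1] + pdf[i]) * h
    if cum >= 0.5:
        median_beta = betas[i]
        break

# expected reliability at T = 50: integral of R(T|beta) times the posterior
ER = trapz([math.exp(-(50.0 / 80.0) ** b) * p for b, p in zip(betas, pdf)], h)
```

The same grid can be reused to read off any other percentile of the posterior, as described above.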
===Prior Distributions===&lt;br /&gt;
Prior distributions play a very important role in Bayesian statistics. They are essentially the basis of Bayesian analysis. Different types of prior distributions exist, namely &#039;&#039;informative&#039;&#039; and &#039;&#039;non-informative&#039;&#039;. Non-informative prior distributions (a.k.a. &#039;&#039;vague&#039;&#039;, &#039;&#039;flat&#039;&#039; and &#039;&#039;diffuse&#039;&#039;) are distributions that have no population basis and play a minimal role in the posterior distribution. The idea behind the use of non-informative prior distributions is to make inferences that are not greatly affected by external information, or when external information is not available. The uniform distribution is frequently used as a non-informative prior.&lt;br /&gt;
&lt;br /&gt;
On the other hand, informative priors have a stronger influence on the posterior distribution. The influence of the prior distribution on the posterior is related to the sample size of the data and the form of the prior. Generally speaking, large sample sizes are required to modify strong priors, whereas weak priors are overwhelmed by even relatively small sample sizes. Informative priors are typically obtained from past data.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=57245</id>
		<title>Parameter Estimation</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=57245"/>
		<updated>2015-02-25T20:54:02Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Shortfalls of the Rank Adjustment Method */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|4|Parameter Estimation}}&lt;br /&gt;
The term &#039;&#039;parameter estimation&#039;&#039; refers to the process of using sample data (in reliability engineering, usually times-to-failure or success data) to estimate the parameters of the selected distribution. Several parameter estimation methods are available. This section presents an overview of the available methods used in life data analysis. More specifically, we start with the relatively simple method of Probability Plotting and continue with the more sophisticated methods of Rank Regression (or Least Squares), Maximum Likelihood Estimation and Bayesian Estimation Methods.&lt;br /&gt;
&lt;br /&gt;
=Probability Plotting=&lt;br /&gt;
The least mathematically intensive method for parameter estimation is the method of probability plotting. As the term implies, probability plotting involves a physical plot of the data on specially constructed &#039;&#039;probability plotting paper&#039;&#039;. This method is easily implemented by hand, given that one can obtain the appropriate probability plotting paper.&lt;br /&gt;
&lt;br /&gt;
The method of probability plotting takes the &#039;&#039;cdf&#039;&#039; of the distribution and attempts to linearize it by employing a specially constructed paper. The following sections illustrate the steps in this method using the 2-parameter Weibull distribution as an example. This includes:&lt;br /&gt;
&lt;br /&gt;
*Linearize the unreliability function&lt;br /&gt;
*Construct the probability plotting paper&lt;br /&gt;
*Determine the X and Y positions of the plot points&lt;br /&gt;
&lt;br /&gt;
And then using the plot to read any particular time or reliability/unreliability value of interest.&lt;br /&gt;
&lt;br /&gt;
==Linearizing the Unreliability Function==&lt;br /&gt;
&lt;br /&gt;
In the case of the 2-parameter Weibull, the &#039;&#039;cdf&#039;&#039; (also the unreliability &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt;) is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(t)=Q(t)=1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function can then be linearized (i.e., put in the common linear form &amp;lt;math&amp;gt;y = mx + b\,\!&amp;lt;/math&amp;gt;) as follows&#039;&#039;&#039;:&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 Q(t)= &amp;amp;  1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}  \\&lt;br /&gt;
  \ln (1-Q(t))= &amp;amp; \ln \left[ {e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}} \right]  \\&lt;br /&gt;
  \ln (1-Q(t))=&amp;amp; -\left(\tfrac{t}{\eta}\right)^{\beta}  \\&lt;br /&gt;
  \ln ( -\ln (1-Q(t)))= &amp;amp; \beta \left(\ln \left( \frac{t}{\eta }\right)\right) \\&lt;br /&gt;
  \ln \left( \ln \left( \frac{1}{1-Q(t)}\right) \right) = &amp;amp; \beta\ln t -\beta \ln \left( \eta  \right)  \\&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then by setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x=\ln \left( t \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equation can then be rewritten as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\beta x-\beta \ln \left( \eta  \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is now a linear equation with a slope of: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
m = \beta&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and an intercept of:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=-\beta \cdot \ln(\eta)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
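As a quick numerical check of the linearization above, the following Python sketch (with hypothetical parameter values) confirms that the transformed points fall exactly on a line with slope beta and intercept -beta*ln(eta), and that the unreliability at t = eta is about 63.2%:

```python
import math

beta, eta = 1.8, 100.0   # hypothetical Weibull parameters

residuals = []
for t in (25.0, 60.0, 140.0):
    Q = 1.0 - math.exp(-(t / eta) ** beta)   # unreliability Q(t)
    y = math.log(math.log(1.0 / (1.0 - Q)))  # transformed ordinate
    x = math.log(t)                          # transformed abscissa
    # the line y = m*x + b with slope m = beta, intercept b = -beta*ln(eta)
    residuals.append(y - (beta * x - beta * math.log(eta)))

# at t = eta the unreliability is 1 - e^(-1), about 63.2%
Q_at_eta = 1.0 - math.exp(-1.0)
```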
==Constructing the Paper==&lt;br /&gt;
The next task is to construct the Weibull probability plotting paper with the appropriate y and x axes. The x-axis transformation is simply logarithmic. The y-axis is a bit more complex, requiring a double log reciprocal transformation, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt; is the unreliability. &lt;br /&gt;
&lt;br /&gt;
Such papers have been created by different vendors and are called &#039;&#039;probability plotting papers&#039;&#039;. ReliaSoft&#039;s reliability engineering resource website at www.weibull.com has different plotting papers available for [http://www.weibull.com/GPaper/index.htm download]. &lt;br /&gt;
&lt;br /&gt;
[[Image:WeibullPaper2C.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
To illustrate, consider the following probability plot on a slightly different type of Weibull probability paper. &lt;br /&gt;
&lt;br /&gt;
[[Image:different_weibull_paper.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
This paper is constructed based on the mentioned y and x transformations, where the y-axis represents unreliability and the x-axis represents time. Both of these values must be known for each time-to-failure point we want to plot. &lt;br /&gt;
&lt;br /&gt;
Then, given the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; value for each point, the points can easily be put on the plot. Once the points have been placed on the plot, the best possible straight line is drawn through these points. Once the line has been drawn, the slope of the line can be obtained (some probability papers include a slope indicator to simplify this calculation). This is the parameter &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;, which is the value of the slope. To determine the scale parameter, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; (also called the &#039;&#039;characteristic life&#039;&#039;), one reads the time from the x-axis corresponding to &amp;lt;math&amp;gt;Q(t)=63.2\%\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Note that at:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Q(t=\eta)= &amp;amp; 1-{{e}^{-{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}} \\ &lt;br /&gt;
  = &amp;amp; 1-{{e}^{-1}} \\ &lt;br /&gt;
  = &amp;amp; 0.632 \\ &lt;br /&gt;
  = &amp;amp; 63.2\%  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, if we enter the &#039;&#039;y&#039;&#039; axis at &amp;lt;math&amp;gt;Q(t)=63.2\%\,\!&amp;lt;/math&amp;gt;, the corresponding value of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; will be equal to &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;. In this way, using this simple methodology, the parameters of the Weibull distribution can be estimated.&lt;br /&gt;
&lt;br /&gt;
==Determining the X and Y Position of the Plot Points==&lt;br /&gt;
The points on the plot represent our data or, more specifically, our times-to-failure data. If, for example, we tested four units that failed at 10, 20, 30 and 40 hours, then we would use these times as our &#039;&#039;x&#039;&#039; values or time values. &lt;br /&gt;
&lt;br /&gt;
Determining the appropriate &#039;&#039;y&#039;&#039; plotting positions, or the unreliability values, is a little more complex. To determine the &#039;&#039;y&#039;&#039; plotting positions, we must first determine a value indicating the corresponding unreliability for that failure. In other words, we need to obtain the cumulative percent failed for each time-to-failure. For example, the cumulative percent failed by 10 hours may be 25%, by 20 hours 50%, and so forth. This is a simple method illustrating the idea. The problem with this simple method is the fact that the 100% point is not defined on most probability plots; thus, an alternative and more robust approach must be used. The most widely used method of determining this value is the method of obtaining the &#039;&#039;median rank&#039;&#039; for each failure, as discussed next.&lt;br /&gt;
&lt;br /&gt;
===Median Ranks ===&lt;br /&gt;
The Median Ranks method is used to obtain an estimate of the unreliability for each failure. The median rank is the value that the true probability of failure, &amp;lt;math&amp;gt;Q({{T}_{j}})\,\!&amp;lt;/math&amp;gt;, should have at the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure out of a sample of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units at the 50% confidence level. &lt;br /&gt;
&lt;br /&gt;
The rank can be found for any percentage point, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, greater than zero and less than one, by solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;. This represents the rank, or unreliability estimate, for the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure in the following equation for the cumulative binomial: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the sample size and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; the order number. &lt;br /&gt;
&lt;br /&gt;
The median rank is obtained by solving this equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  at &amp;lt;math&amp;gt;P = 0.50\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;0.50=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; and we have four failures, we would solve the median rank equation for the value of &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; four times; once for each failure with &amp;lt;math&amp;gt;j= 1, 2, 3 \text{ and }4\,\!&amp;lt;/math&amp;gt;. This result can then be used as the unreliability estimate for each failure or the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; plotting position. (See also [[The Weibull Distribution|The Weibull Distribution]]&amp;amp;nbsp;for a step-by-step example of this method.) The solution of the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; requires the use of numerical methods.&lt;br /&gt;
&lt;br /&gt;
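One such numerical method is bisection, since the cumulative binomial sum is monotonically increasing in Z. A minimal Python sketch (the function names are illustrative) is:

```python
import math

def cum_binom(z, j, n):
    # P = sum_{k=j}^{N} C(N,k) * Z^k * (1-Z)^(N-k), increasing in z
    return sum(math.comb(n, k) * z ** k * (1.0 - z) ** (n - k)
               for k in range(j, n + 1))

def median_rank(j, n, p=0.50, tol=1e-10):
    # bisection: find z such that cum_binom(z, j, n) = p
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cum_binom(mid, j, n) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# median ranks for j = 1..4 failures out of N = 4 units
ranks = [median_rank(j, 4) for j in (1, 2, 3, 4)]
```

For the first failure out of four, this gives Z of about 0.159, i.e., roughly 15.9% unreliability for the first plotting position.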
===Beta and F Distributions Approach===&lt;br /&gt;
A more straightforward and easier method of estimating median ranks is by applying two transformations to the cumulative binomial equation, first to the beta distribution and then to the F distribution, resulting in [[Appendix:_Life_Data_Analysis_References|[12, 13]]]: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   MR &amp;amp; = &amp;amp; \tfrac{1}{1+\tfrac{N-j+1}{j}{{F}_{0.50;m;n}}}  \\&lt;br /&gt;
   m &amp;amp; = &amp;amp; 2(N-j+1)  \\&lt;br /&gt;
   n &amp;amp; = &amp;amp; 2j  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{F}_{0.50;m;n}}\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution at the 0.50 point, with &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; degrees of freedom, for failure &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; out of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units.&lt;br /&gt;
&lt;br /&gt;
=== Benard&#039;s Approximation for Median Ranks  ===&lt;br /&gt;
Another quick, and less accurate, approximation of the median ranks is also given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MR = \frac{{j - 0.3}}{{N + 0.4}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This approximation of the median ranks is also known as &#039;&#039;Benard&#039;s approximation&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
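As a sketch, Benard's approximation is a one-line computation; for a sample as small as N = 4 it already agrees with the exact median ranks to within about 0.001:

```python
def benard(j, n):
    # Benard's approximation to the median rank: MR = (j - 0.3) / (N + 0.4)
    return (j - 0.3) / (n + 0.4)

# approximate median ranks for four failures out of N = 4 units
approx = [benard(j, 4) for j in (1, 2, 3, 4)]
```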
===Kaplan-Meier===&lt;br /&gt;
The Kaplan-Meier estimator (also known as the &#039;&#039;product limit estimator&#039;&#039;) is used as an alternative to the median ranks method for calculating the estimates of the unreliability for probability plotting purposes. The equation of the estimator is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{F}({{t}_{i}})=1-\underset{j=1}{\overset{i}{\mathop \prod }}\,\frac{{{n}_{j}}-{{r}_{j}}}{{{n}_{j}}},\text{ }i=1,...,m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  m =  &amp;amp; {\text{total number of data points}} \\ &lt;br /&gt;
  n =  &amp;amp; {\text{the total number of units}} \\ &lt;br /&gt;
  {n_i} =  &amp;amp; n - \sum_{j = 0}^{i - 1}{s_j} - \sum_{j = 0}^{i - 1}{r_j}, \text{i = 1,...,m }\\ &lt;br /&gt;
  {r_j} =  &amp;amp; {\text{ number of failures in the }}{j^{th}}{\text{ data group, and}} \\ &lt;br /&gt;
  {s_j} =  &amp;amp; {\text{number of surviving units in the }}{j^{th}}{\text{ data group}} \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
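A minimal Python sketch of the estimator follows. The data layout (a list of (failures, suspensions) pairs at each ordered time) and the numbers are hypothetical:

```python
def kaplan_meier(groups, n_total):
    # groups: list of (r_j failures, s_j suspensions) at each ordered time;
    # returns the unreliability estimate F(t_i) after each group
    at_risk = n_total            # n_j: units still at risk entering group j
    surv = 1.0                   # running product of (n_j - r_j) / n_j
    F = []
    for r, s in groups:
        surv *= (at_risk - r) / at_risk
        F.append(1.0 - surv)
        at_risk -= r + s         # failures and suspensions leave the test
    return F

# hypothetical test of 10 units: 2 fail; then 1 fails and 1 is suspended;
# then 2 more fail
F = kaplan_meier([(2, 0), (1, 1), (2, 0)], 10)
```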
== Probability Plotting Example  ==&lt;br /&gt;
This same methodology can be applied to other distributions with &#039;&#039;cdf&#039;&#039; equations that can be linearized. Different probability papers exist for each distribution, because different distributions have different &#039;&#039;cdf&#039;&#039; equations. ReliaSoft&#039;s software tools automatically create these plots for you. Special scales on these plots allow you to derive the parameter estimates directly from the plots, similar to the way &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; were obtained from the Weibull probability plot. The following example demonstrates the method again, this time using the 1-parameter exponential distribution.&lt;br /&gt;
&lt;br /&gt;
{{:Probability Plotting Example}}&lt;br /&gt;
&lt;br /&gt;
== Comments on the Probability Plotting Method ==&lt;br /&gt;
Besides the most obvious drawback to probability plotting, which is the amount of effort required, manual probability plotting does not always produce consistent results. Two people plotting a straight line through a set of points will not always draw this line the same way, and thus will come up with slightly different results. This method was used primarily before the widespread use of computers that could easily perform the calculations for more complicated parameter estimation methods, such as the least squares and maximum likelihood methods.&lt;br /&gt;
&lt;br /&gt;
= Least Squares (Rank Regression)  =&lt;br /&gt;
Using the idea of probability plotting, regression analysis mathematically fits the best straight line to a set of points, in an attempt to estimate the parameters. Essentially, this is a mathematically based version of the probability plotting method discussed previously. &lt;br /&gt;
&lt;br /&gt;
The method of linear least squares is used for all regression analysis performed by Weibull++, except for the cases of the 3-parameter Weibull, mixed Weibull, gamma and generalized gamma distributions, where a non-linear regression technique is employed. The terms &#039;&#039;linear regression&#039;&#039; and &#039;&#039;least squares&#039;&#039; are used synonymously in this reference. In Weibull++, the term &#039;&#039;rank regression&#039;&#039; is used instead of least squares, or linear regression, because the regression is performed on the rank values, more specifically, the median rank values (represented on the y-axis). The method of least squares requires that a straight line be fitted to a set of data points, such that the sum of the squares of the distances of the points to the fitted line is minimized. This minimization can be performed in either the vertical or horizontal direction. If the regression is on &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the horizontal deviations from the points to the line are minimized. If the regression is on Y, then the line is fitted so that the vertical deviations from the points to the line are minimized. This is illustrated in the following figure. &lt;br /&gt;
&lt;br /&gt;
[[Image:minimizingdistance.png|center|500px]]&lt;br /&gt;
&lt;br /&gt;
=== Rank Regression on Y  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;-values are known exactly. Then, according to the &#039;&#039;least squares principle,&#039;&#039; which minimizes the vertical distance between the data points and the straight line fitted to the data, the best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;y=\hat{a}+\hat{b}x\,\!&amp;lt;/math&amp;gt; (where the recently introduced (&amp;lt;math&amp;gt;\hat{ }\,\!&amp;lt;/math&amp;gt;) symbol indicates that this value is an estimate) such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\sum\limits_{i=1}^{N}{{{\left( \hat{a}+\hat{b}{{x}_{i}}-{{y}_{i}} \right)}^{2}}=\min \sum\limits_{i=1}^{N}{{{\left( a+b{{x}_{i}}-{{y}_{i}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and where &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat b\,\!&amp;lt;/math&amp;gt; are the &#039;&#039;least squares estimates&#039;&#039; of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}=\bar{y}-\hat{b}\bar{x}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
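The two closed-form expressions above translate directly into code. In this Python sketch the data points are hypothetical and chosen to lie exactly on y = 2x + 1, so the least squares estimates recover the slope and intercept exactly:

```python
def rank_regression_y(xs, ys):
    # closed-form least squares estimates minimizing vertical deviations:
    # b = (sum(xy) - sum(x)sum(y)/N) / (sum(x^2) - (sum(x))^2/N)
    # a = ybar - b * xbar
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    b = (sxy - sx * sy / n) / (sxx - sx * sx / n)   # slope estimate
    a = sy / n - b * sx / n                          # intercept estimate
    return a, b

# hypothetical points on the line y = 2x + 1
a, b = rank_regression_y([1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0])
```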
=== Rank Regression on X  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the y-values are known exactly. The same least squares principle is applied, but this time, minimizing the horizontal distance between the data points and the straight line fitted to the data. The best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;x=\widehat{a}+\widehat{b}y\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(\widehat{a}+\widehat{b}{{y}_{i}}-{{x}_{i}})}^{2}}=\min \underset{i=1}{\overset{N}{\mathop \sum }}\,{{(a+b{{y}_{i}}-{{x}_{i}})}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat b\,\!&amp;lt;/math&amp;gt; are the least squares estimates of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}=\bar{x}-\hat{b}\bar{y}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding relations for determining the parameters of specific distributions (e.g., Weibull, exponential, etc.) are presented in the chapter covering each distribution.&lt;br /&gt;
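These estimates can be computed directly from the sums above. The following is a minimal sketch (the function name is illustrative, not from the original text):&lt;br /&gt;

```python
def fit_x_on_y(x, y):
    """Least squares fit of x = a + b*y, minimizing horizontal distances.

    Implements the closed-form estimates given above; the function
    name is illustrative, not from the original text.
    """
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    syy = sum(yi * yi for yi in y)
    # b_hat from the ratio of corrected sums; a_hat = x_bar - b_hat * y_bar
    b_hat = (sxy - sx * sy / n) / (syy - sy * sy / n)
    a_hat = sx / n - b_hat * sy / n
    return a_hat, b_hat
```

For example, for the exactly linear data y = (1, 2, 3), x = (5, 8, 11), the sketch returns a-hat = 2 and b-hat = 3.&lt;br /&gt;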
&lt;br /&gt;
=== Correlation Coefficient  ===&lt;br /&gt;
The correlation coefficient is a measure of how well the linear regression model fits the data and is usually denoted by &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt;. In the case of life data analysis, it is a measure of the strength of the linear relation (correlation) between the median ranks and the data. The population correlation coefficient is defined as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\rho =\frac{{{\sigma }_{xy}}}{{{\sigma }_{x}}{{\sigma }_{y}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\sigma}_{xy}} = \,\!&amp;lt;/math&amp;gt; covariance of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\sigma}_{x}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{\sigma}_{y}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The estimator of &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt; is the sample correlation coefficient, &amp;lt;math&amp;gt;\hat{\rho }\,\!&amp;lt;/math&amp;gt;, given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\sqrt{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N} \right)\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The range of &amp;lt;math&amp;gt;\hat \rho \,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;-1\le \hat{\rho }\le 1\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
[[Image:correlationcoeffficient.png|center|500px]] &lt;br /&gt;
&lt;br /&gt;
The closer the value is to &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt;, the better the linear fit. Note that +1 indicates a perfect fit (the paired values (&amp;lt;math&amp;gt;{{x}_{i}},{{y}_{i}}\,\!&amp;lt;/math&amp;gt;) lie on a straight line) with a positive slope, while -1 indicates a perfect fit with a negative slope. A correlation coefficient value of zero would indicate that the data are randomly scattered and have no pattern or correlation in relation to the regression line model.&lt;br /&gt;
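As a sketch, the sample correlation coefficient above can be evaluated directly (the helper name is illustrative, not from the original text):&lt;br /&gt;

```python
import math

def sample_correlation(x, y):
    """Sample correlation coefficient per the estimator above.

    Illustrative sketch; numerator is the corrected cross sum,
    denominator the geometric mean of the corrected sums of squares.
    """
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    syy = sum(yi * yi for yi in y)
    num = sxy - sx * sy / n
    den = math.sqrt((sxx - sx * sx / n) * (syy - sy * sy / n))
    return num / den
```

Data lying exactly on a line with positive slope yields +1, and with negative slope yields -1, as described above.&lt;br /&gt;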
&lt;br /&gt;
===Comments on the Least Squares Method===&lt;br /&gt;
The least squares estimation method is quite good for functions that can be linearized. For these distributions, the calculations are relatively easy and straightforward, having closed-form solutions that can readily yield an answer without having to resort to numerical techniques or tables. Furthermore, this technique provides a good measure of the goodness-of-fit of the chosen distribution in the correlation coefficient. Least squares is generally best used with data sets containing complete data, that is, data consisting only of single times-to-failure with no censored or interval data. (See [[Life Data Classification]] for information about the different data types, including complete, left censored, right censored (or suspended) and interval data.) &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Least Squares/Rank Regression Equations]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Rank Methods for Censored Data=&lt;br /&gt;
All available data should be considered in the analysis of times-to-failure data. This includes the case when a particular unit in a sample has been removed from the test prior to failure. An item, or unit, which is removed from a reliability test prior to failure, or a unit which is in the field and is still operating at the time the reliability of these units is to be determined, is called a &#039;&#039;suspended item&#039;&#039;, &#039;&#039;right censored observation&#039;&#039; or &#039;&#039;right censored data point&#039;&#039;. Suspended items analysis would also be considered when: &lt;br /&gt;
&lt;br /&gt;
#We need to make an analysis of the available results before test completion. &lt;br /&gt;
#The failure modes which are occurring are different than those anticipated and such units are withdrawn from the test. &lt;br /&gt;
#We need to analyze a single mode and the actual data set comprises multiple modes. &lt;br /&gt;
#A &#039;&#039;warranty analysis&#039;&#039; is to be made of all units in the field (non-failed and failed units). The non-failed units are considered to be suspended items (or right censored).&lt;br /&gt;
&lt;br /&gt;
This section describes the rank methods that are used in both probability plotting and least squares (rank regression) to handle censored data. This includes:&lt;br /&gt;
&lt;br /&gt;
*The rank adjustment method for right censored (suspension) data.&lt;br /&gt;
*ReliaSoft&#039;s alternative ranking method for censored data including left censored, right censored, and interval data.&lt;br /&gt;
=== Rank Adjustment Method for Right Censored Data ===&lt;br /&gt;
When using the probability plotting or least squares (rank regression) method for data sets where some of the units did not fail, or were suspended, we need to adjust their probability of failure, or unreliability. As discussed before, estimates of the unreliability for complete data are obtained using the median ranks approach. The following methodology illustrates how adjusted median ranks are computed to account for right censored data. To better illustrate the methodology, consider the following example in Kececioglu [[Appendix:_Life_Data_Analysis_References|&amp;amp;nbsp;[20]]] where five items are tested resulting in three failures and two suspensions. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Item Number &amp;lt;br&amp;gt;(Position) &lt;br /&gt;
! Failure (F) &amp;lt;br&amp;gt;or Suspension (S) &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 5,100&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,500&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 15,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 22,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 40,000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The methodology for plotting suspended items involves adjusting the rank positions and plotting the data based on new positions, determined by the location of the suspensions. If we consider these five units, the following methodology would be used: The first item must be the first failure; hence, it is assigned failure order number &amp;lt;math&amp;gt;j = 1\,\!&amp;lt;/math&amp;gt;. The actual failure order number (or position) of the second failure, &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; is in doubt. It could either be in position 2 or in position 3. Had &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; not been withdrawn from the test at 9,500 hours, it could have operated successfully past 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 2. Alternatively, &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; could also have failed before 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 3. In this case, the failure order number for &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; will be some number between 2 and 3. To determine this number, consider the following: &lt;br /&gt;
&lt;br /&gt;
We can find the number of ways the second failure can occur in either order number 2 (position 2) or order number 3 (position 3). The possible ways are listed next. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;6&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 2 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 3&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 4 &lt;br /&gt;
| 5 &lt;br /&gt;
| 6 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be seen that &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; can occur in the second position six ways and in the third position two ways. The most probable position is the average of these possible ways, or the &#039;&#039;mean order number&#039;&#039; (MON), given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}=MO{{N}_{2}}=\frac{(6\times 2)+(2\times 3)}{6+2}=2.25\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Using the same logic on the third failure, it can be located in position numbers 3, 4 and 5 in the possible ways listed next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 3 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 4&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 5&lt;br /&gt;
|-&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Then, the mean order number for the third failure (item 5) is: &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=\frac{(2\times 3)+(3\times 4)+(3\times 5)}{2+3+3}=4.125\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Once the mean order number for each failure has been established, we obtain the median rank positions for these failures at their mean order number. Specifically, we obtain the median rank of the order numbers 1, 2.25 and 4.125 out of a sample size of 5, as given next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Plotting Positions for the Failures (Sample Size=5)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Failure Number &lt;br /&gt;
! MON &lt;br /&gt;
! Median Rank Position(%)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1:&amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1 &lt;br /&gt;
| 13%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2:&amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 2.25 &lt;br /&gt;
| 36%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3:&amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 4.125 &lt;br /&gt;
| 71%&lt;br /&gt;
|}&lt;br /&gt;
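The median rank at a fractional mean order number can be approximated with Benard's formula, (MON - 0.3)/(N + 0.4), which reproduces the table above. This is a sketch; the helper name is illustrative:&lt;br /&gt;

```python
def benard_median_rank(mon, n):
    """Approximate median rank for mean order number `mon` out of `n` units.

    Uses Benard's approximation, (MON - 0.3) / (N + 0.4), standing in
    for the exact median rank (a beta-distribution quantile).
    """
    return (mon - 0.3) / (n + 0.4)
```

For MON values 1, 2.25 and 4.125 with N = 5, this gives approximately 13%, 36% and 71%, matching the plotting positions in the table.&lt;br /&gt;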
&lt;br /&gt;
&lt;br /&gt;
Once the median rank values have been obtained, the probability plotting analysis is identical to that presented before. As you might have noticed, this methodology is rather laborious. Other techniques and shortcuts have been developed over the years to streamline this procedure. For more details on this method, see Kececioglu [[Appendix:_Life_Data_Analysis_References|[20]]]. Here, we will introduce one of these methods. This method calculates MON using an increment, &#039;&#039;I&#039;&#039;, which is defined by:&lt;br /&gt;
&lt;br /&gt;
:: &amp;lt;math&amp;gt;{{I}_{i}}=\frac{N+1-PMON}{1+NIBPSS}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
* &#039;&#039;N&#039;&#039;= the sample size, or total number of items in the test&lt;br /&gt;
* &#039;&#039;PMON&#039;&#039; = previous mean order number&lt;br /&gt;
* &#039;&#039;NIBPSS&#039;&#039; = the number of items beyond the present suspended set. It is the number of units (including all the failures and suspensions) at the current failure time.&lt;br /&gt;
* &#039;&#039;i&#039;&#039; = the ith failure item&lt;br /&gt;
&lt;br /&gt;
MON is given as:&lt;br /&gt;
 &lt;br /&gt;
:: &amp;lt;math&amp;gt;MO{{N}_{i}}=MO{{N}_{i-1}}+{{I}_{i}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s recalculate the previous example using this method.&lt;br /&gt;
&lt;br /&gt;
For F1:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{1}}=MO{{N}_{0}}+{{I}_{1}}=\frac{5+1-0}{1+5}=1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For F2:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{2}}=MO{{N}_{1}}+{{I}_{2}}=1+\frac{5+1-1}{1+3}=2.25&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For F3:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=MO{{N}_{2}}+{{I}_{3}}=2.25+\frac{5+1-2.25}{1+1}=4.125&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The MON obtained for each failure item via this method is the same as that obtained from the first method, so the median rank values will also be the same.&lt;br /&gt;
&lt;br /&gt;
For grouped data, the increment &amp;lt;math&amp;gt;{{I}_{i}}&amp;lt;/math&amp;gt; at each failure group is multiplied by the number of failures in that group. &lt;br /&gt;
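The increment method above can be sketched as follows, where the input is the list of outcomes ('F' or 'S') sorted by time (the helper name is illustrative, not from the original text):&lt;br /&gt;

```python
def mean_order_numbers(states):
    """Mean order numbers for each failure in a censored sample.

    `states` is the list of 'F' (failure) / 'S' (suspension) markers
    sorted by time. NIBPSS at each failure is the number of items
    (failures and suspensions) remaining at that failure time.
    """
    n = len(states)
    mons = []
    prev_mon = 0.0
    for idx, s in enumerate(states):
        if s == 'F':
            nibpss = n - idx  # items at the current failure time
            increment = (n + 1 - prev_mon) / (1 + nibpss)
            prev_mon = prev_mon + increment  # MON_i = MON_{i-1} + I_i
            mons.append(prev_mon)
    return mons
```

For the five-item example above (F, S, F, S, F), this returns the mean order numbers 1, 2.25 and 4.125.&lt;br /&gt;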
 &lt;br /&gt;
==== Shortfalls of the Rank Adjustment Method  ====&lt;br /&gt;
Even though the rank adjustment method is the most widely used method for the analysis of suspended items, we would like to point out the following shortcoming. As you may have noticed, only the position where the failure occurred is taken into account, and not the exact time-to-suspension. For example, this methodology would yield the exact same results for the next two cases. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 1 &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of item, hr &lt;br /&gt;
! Item number &lt;br /&gt;
! State*,&amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000 &lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,100 &lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,700&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,200 &lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,800&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,300 &lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,900&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000 &lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This shortfall is significant when the number of failures is small and the number of suspensions is large and not spread uniformly between failures, as with these data. In cases like this, it is highly recommended to use maximum likelihood estimation (MLE) to estimate the parameters instead of using least squares, because MLE does not look at ranks or plotting positions, but rather considers each unique time-to-failure or suspension. For the data given above, the results are as follows. The estimated parameters using the method just described are the same for both cases (1 and 2): &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.81}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{11,417 hr}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, the MLE results for Case 1 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{1}\text{.33}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{6,900 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the MLE results for Case 2 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.9337}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{21,348 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we can see, there is a sizable difference between the results obtained using MLE and the results obtained using regression with the SRM. The results for both cases are identical when using the regression estimation technique with SRM, as SRM considers only the positions of the suspensions. The MLE results, however, are quite different for the two cases, with the second case having a much larger value of &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; due to the higher values of the suspension times in Case 2. This is because the maximum likelihood technique, unlike rank regression with SRM, considers the values of the suspensions when estimating the parameters. This is illustrated in the [[Parameter_Estimation#Maximum_Likelihood_Estimation_.28MLE.29|discussion of MLE]] given below.&lt;br /&gt;
&lt;br /&gt;
The following ReliaSoft Rank Method (RRM) can consider the effect of time for censored data.&lt;br /&gt;
&lt;br /&gt;
== ReliaSoft&#039;s Ranking Method (RRM) for Interval Censored Data==&lt;br /&gt;
When analyzing interval data, it is commonplace to assume that the actual failure time occurred at the midpoint of the interval. To be more conservative, you can use the starting point of the interval, or, to be more optimistic, the end point of the interval. Weibull++ allows you to employ ReliaSoft&#039;s ranking method (RRM) when analyzing interval data. Using an iterative process, this ranking method is an improvement over the standard ranking method (SRM). &lt;br /&gt;
&lt;br /&gt;
When analyzing left or right censored data, RRM also considers the effect of the actual censoring time. Therefore, the resulting ranks will be more accurate than those from the SRM, where only the position, and not the exact time, of the censoring is used. &lt;br /&gt;
&lt;br /&gt;
For more details on this method see [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|ReliaSoft&#039;s Ranking Method]].&lt;br /&gt;
&lt;br /&gt;
= Maximum Likelihood Estimation (MLE) = &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM OTHER WIKI PAGES. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
From a statistical point of view, the method of maximum likelihood estimation is, with some exceptions, considered to be the most robust of the parameter estimation techniques discussed here. The method presented in this section is for complete data (i.e., data consisting only of times-to-failure). The analyses for [[Parameter_Estimation#MLE_for_Right_Censored_Data|right censored (suspension) data]] and for [[Parameter_Estimation#MLE_for_Interval_and_Left_Censored_Data|interval or left censored data]] are then discussed in the following sections.&lt;br /&gt;
&lt;br /&gt;
The basic idea behind MLE is to obtain the most likely values of the parameters, for a given distribution, that will best describe the data. As an example, consider the following data (-3, 0, 4) and assume that you are trying to estimate the mean of the data. Now, if you have to choose the most likely value for the mean from -5, 1 and 10, which one would you choose? In this case, the most likely value is 1 (given your limit on choices). Similarly, under MLE, one determines the most likely values for the parameters of the assumed distribution. It is mathematically formulated as follows. &lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; unknown parameters which need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; independent observations, &amp;lt;math&amp;gt;{{x}_{1}},{{x}_{2}},\cdots ,{{x}_{R}}\,\!&amp;lt;/math&amp;gt;, which correspond in the case of life data analysis to failure times. The likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{R}})=L=\underset{i=1}{\overset{R}{\mathop \prod }}\,f({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;i = 1,2,...,R\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logarithmic likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Lambda  = \ln L =\sum_{i = 1}^R \ln f({x_i};{\theta _1},{\theta _2},...,{\theta _k})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood estimators (or parameter values) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are obtained by maximizing &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By maximizing &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;, which is much easier to work with than &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt;, the maximum likelihood estimators (MLE) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the simultaneous solutions of &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; equations such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{\partial \Lambda }{\partial {{\theta }_{j}}}=0,\text{ }j=1,2,...,k\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even though it is common practice to plot the MLE solutions using median ranks (points are plotted according to median ranks and the line according to the MLE solutions), this is not completely representative. As can be seen from the equations above, the MLE method is independent of any kind of ranks. For this reason, the MLE solution often appears not to track the data on the probability plot. This is perfectly acceptable because the two methods are independent of each other, and in no way suggests that the solution is wrong.&lt;br /&gt;
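As a concrete sketch of the procedure, consider the exponential distribution with pdf f(t) = lambda*exp(-lambda*t); setting the derivative of the log-likelihood to zero yields a closed-form estimator (names are illustrative, not from the original text):&lt;br /&gt;

```python
def exponential_mle(times):
    """MLE of the exponential rate parameter from complete data.

    Log-likelihood: Lambda = R*ln(lam) - lam*sum(t).
    Setting dLambda/dlam = R/lam - sum(t) = 0 gives lam_hat = R / sum(t).
    """
    return len(times) / sum(times)
```

For failure times (1, 2, 3), the estimator gives lam_hat = 3/6 = 0.5.&lt;br /&gt;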
&lt;br /&gt;
=== MLE for Right Censored Data  ===&lt;br /&gt;
When performing maximum likelihood analysis on data with suspended items, the likelihood function needs to be expanded to take into account the suspended items. The overall estimation technique does not change, but another term is added to the likelihood function to account for the suspended items. Beyond that, the method of solving for the parameter estimates remains the same. For example, consider a distribution where &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039; and &#039;&#039;cdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
    &amp;amp; F(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the unknown parameters to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; observed failures at &amp;lt;math&amp;gt;{{T}_{1}},{{T}_{2}},...,{{T}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; observed suspensions at &amp;lt;math&amp;gt;{{S}_{1}},{{S}_{2}},...,{{S}_{M}}\,\!&amp;lt;/math&amp;gt;. The likelihood function is then formulated as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R,}}{{S}_{1}},...,{{S}_{M}})= &amp;amp; \underset{i=1}{\overset{R}{\mathop \prod }}\,f({{T}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \cdot \underset{j=1}{\overset{M}{\mathop \prod }}\,[1-F({{S}_{j}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameters are solved by maximizing this equation. In most cases, no closed-form solution exists for this maximum or for the parameters. Solutions specific to each distribution utilizing MLE are presented in [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
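A minimal sketch of this censored likelihood (illustrative data; the exponential distribution is assumed only because its censored log-likelihood is short to write down and has a closed-form maximizer):

```python
import math

# Assumed example data: exponential model with observed failures T and
# right-censored suspensions S.  ln f(t) = ln(lam) - lam*t, and
# ln[1 - F(s)] = -lam*s.
T = [10.0, 25.0, 40.0]   # exact failure times
S = [50.0, 50.0]         # suspension (right-censoring) times

def log_likelihood(lam):
    ll = sum(math.log(lam) - lam * t for t in T)   # failure terms
    ll += sum(-lam * s for s in S)                 # suspension terms
    return ll

# For the exponential, maximizing gives lam_hat = R / (sum(T) + sum(S));
# the suspensions enter the estimate even though no failure was observed.
lam_hat = len(T) / (sum(T) + sum(S))

# Sanity check: the closed-form maximizer beats nearby values of lam.
assert log_likelihood(lam_hat) > log_likelihood(lam_hat * 1.1)
assert log_likelihood(lam_hat) > log_likelihood(lam_hat * 0.9)
print(round(lam_hat, 5))
```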
&lt;br /&gt;
=== MLE for Interval and Left Censored Data  ===&lt;br /&gt;
The inclusion of left and interval censored data in an MLE solution for parameter estimates involves adding a term to the likelihood equation to account for the data types in question. When using interval data, it is assumed that the failures occurred in an interval; i.e., in the interval from time &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or from time 0 to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; if left censored), where &amp;lt;math&amp;gt;A &amp;lt; B\,\!&amp;lt;/math&amp;gt;. In the case of interval data, and given &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; interval observations, the likelihood function is modified by multiplying the likelihood function with an additional term as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{P}})= &amp;amp; \underset{i=1}{\overset{P}{\mathop \prod }}\,\{F({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \ \ -F({{x}_{i-1}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that if only interval data are present, this term will represent the entire likelihood function for the MLE solution. The next section gives a formulation of the complete likelihood function for all possible censoring schemes.&lt;br /&gt;
&lt;br /&gt;
=== The Complete Likelihood Function  ===&lt;br /&gt;
We have now seen that obtaining MLE parameter estimates for different types of data involves incorporating different terms in the likelihood function to account for complete data, right censored data, and left and interval censored data. After including the terms for the different types of data, the likelihood function can be expressed in its complete form: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
    L= &amp;amp; \underset{i=1}{\mathop{\overset{R}{\mathop{\prod }}\,}}\,f({{T}_{i}};{{\theta }_{1}},...,{{\theta }_{k}})\cdot \underset{j=1}{\mathop{\overset{M}{\mathop{\prod }}\,}}\,[1-F({{S}_{j}};{{\theta }_{1}},...,{{\theta }_{k}})]  \\&lt;br /&gt;
    &amp;amp; \cdot \underset{l=1}{\mathop{\overset{P}{\mathop{\prod }}\,}}\,\left\{ F({{I}_{{{l}_{U}}}};{{\theta }_{1}},...,{{\theta }_{k}})-F({{I}_{{{l}_{L}}}};{{\theta }_{1}},...,{{\theta }_{k}}) \right\}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; L\to L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R}},{{S}_{1}},...,{{S}_{M}},{{I}_{1}},...{{I}_{P}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
*&amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the number of units with exact failures &lt;br /&gt;
*&amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is the number of suspended units &lt;br /&gt;
*&amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is the number of units with left censored or interval times-to-failure &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the parameters of the distribution &lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure&lt;br /&gt;
*&amp;lt;math&amp;gt;{{S}_{j}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; time of suspension&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{U}}}}\,\!&amp;lt;/math&amp;gt; is the ending of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{L}}}}\,\!&amp;lt;/math&amp;gt; is the beginning of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
&lt;br /&gt;
The total number of units is &amp;lt;math&amp;gt;N = R + M + P\,\!&amp;lt;/math&amp;gt;. It should be noted that in this formulation, if any of &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is zero, then the corresponding product term is taken to be one, not zero.&lt;br /&gt;
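As a hedged illustration of the complete likelihood, the three product terms become three sums of logarithms when working with the log-likelihood (the numbers and the exponential model below are made up for the example):

```python
import math

# Example only: complete log-likelihood for an exponential model combining
# exact failures, right-censored suspensions, and interval-censored units.
lam = 0.01                        # candidate parameter value
failures    = [30.0, 90.0]        # T_i: exact failure times
suspensions = [100.0, 100.0]      # S_j: suspension times
intervals   = [(20.0, 50.0)]      # (I_lL, I_lU): interval-censored units

def cdf(t):      # F(t) for the exponential
    return 1.0 - math.exp(-lam * t)

def log_pdf(t):  # ln f(t)
    return math.log(lam) - lam * t

logL  = sum(log_pdf(t) for t in failures)                       # exact terms
logL += sum(math.log(1.0 - cdf(s)) for s in suspensions)        # right censored
logL += sum(math.log(cdf(u) - cdf(l)) for (l, u) in intervals)  # interval terms
# An empty category simply contributes nothing (its product term is one).
print(logL)
```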
&lt;br /&gt;
== Comments on the MLE Method  ==&lt;br /&gt;
The MLE method has many large sample properties that make it attractive for use. It is asymptotically consistent, which means that as the sample size gets larger, the estimates converge to the right values. It is asymptotically efficient, which means that for large samples, it produces the most precise estimates. It is asymptotically unbiased, which means that for large samples, one expects to get the right value on average. The distribution of the estimates themselves is normal, if the sample is large enough, and this is the basis for the usual [[Confidence_Bounds#Fisher_Matrix_Confidence_Bounds|Fisher Matrix Confidence Bounds]] discussed later. These are all excellent large sample properties. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, the sample size necessary to achieve these properties can be quite large: thirty to fifty, or even more than a hundred, exact failure times, depending on the application. With fewer points, the estimates can be badly biased. It is known, for example, that MLE estimates of the shape parameter of the Weibull distribution are badly biased for small sample sizes, and the effect is exacerbated by the amount of censoring. This bias can cause major discrepancies in the analysis. There are also pathological situations in which the asymptotic properties of the MLE do not apply. One of these is estimating the location parameter of the three-parameter Weibull distribution when the shape parameter has a value close to 1. These problems, too, can cause major discrepancies. &lt;br /&gt;
&lt;br /&gt;
However, MLE can handle suspensions and interval data better than rank regression, particularly when dealing with a heavily censored data set with few exact failure times or when the censoring times are unevenly distributed. It can also provide estimates with one or no observed failures, which rank regression cannot do. As a rule of thumb, our recommendation is to use rank regression techniques when the sample sizes are small and without heavy censoring (censoring is discussed in [[Life Data Classification|Life Data Classifications]]). When heavy or uneven censoring is present, when a high proportion of interval data is present and/or when the sample size is sufficient, MLE should be preferred. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Appendix:_Maximum_Likelihood_Estimation_Example|Maximum Likelihood Parameter Estimation Example]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Bayesian Parameter Estimation Methods=&lt;br /&gt;
Up to this point, we have dealt exclusively with what is commonly referred to as classical statistics. In this section, another school of thought in statistical analysis will be introduced, namely Bayesian statistics. The premise of Bayesian statistics (within the context of life data analysis) is to incorporate prior knowledge, along with a given set of current observations, in order to make statistical inferences. The prior information could come from operational or observational data, from previous comparable experiments or from engineering knowledge.  This type of analysis can be particularly useful when there is limited test data for a given design or failure mode but there is a strong prior understanding of the failure rate behavior for that design or mode. By incorporating prior information about the parameter(s), a posterior distribution for the parameter(s) can be obtained and inferences on the model parameters and their functions can be made. This section is intended to give a quick and elementary overview of Bayesian methods, focused primarily on the material necessary for understanding the Bayesian analysis methods available in Weibull++. Extensive coverage of the subject can be found in numerous books dealing with Bayesian statistics.&lt;br /&gt;
&lt;br /&gt;
===Bayes’s Rule===&lt;br /&gt;
Bayes’s rule provides the framework for combining prior information with sample data. In this reference, we apply Bayes’s rule for combining prior information on the assumed distribution&#039;s parameter(s) with sample data in order to make inferences based on the model. The prior knowledge about the parameter(s) is expressed in terms of a distribution &amp;lt;math&amp;gt;\varphi (\theta ),\,\!&amp;lt;/math&amp;gt; called the &#039;&#039;prior distribution&#039;&#039;. The &#039;&#039;posterior&#039;&#039; distribution of &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; given the sample data, using Bayes&#039;s rule, provides the updated information about the parameter(s) &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt;. This is expressed with the following posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta |Data) = \frac{L(Data|\theta )\varphi (\theta )}{\int_{\zeta}^{} L(Data|\theta )\varphi(\theta )d\theta}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; is a vector of the parameters of the chosen distribution&lt;br /&gt;
*&amp;lt;math&amp;gt;\zeta\,\!&amp;lt;/math&amp;gt; is the range of &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt; L(Data|\theta)\,\!&amp;lt;/math&amp;gt; is the likelihood function based on the chosen distribution and data&lt;br /&gt;
*&amp;lt;math&amp;gt;\varphi(\theta )\,\!&amp;lt;/math&amp;gt; is the prior distribution for each of the parameters&lt;br /&gt;
&lt;br /&gt;
The integral in the Bayes&#039;s rule equation is often referred to as the marginal probability, which is a constant that can be interpreted as the probability of obtaining the sample data given a prior distribution. Generally, this integral does not have a closed-form solution, and numerical methods are needed to evaluate it.&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Bayes&#039;s rule equation, there is a significant difference between classical and Bayesian statistics. First, the idea of prior information does not exist in classical statistics; all inferences in classical statistics are based on the sample data. In the Bayesian framework, on the other hand, prior information constitutes the basis of the theory. Another difference is in the overall approach to making inferences and their interpretation. For example, in Bayesian analysis the parameters of the distribution to be fitted are random variables; strictly speaking, no distribution is fitted to the data in the Bayesian case.&lt;br /&gt;
&lt;br /&gt;
For instance, consider the case where data is obtained from a reliability test. Based on prior experience on a similar product, the analyst believes that the shape parameter of the Weibull distribution has a value between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\beta }_{2}}\,\!&amp;lt;/math&amp;gt; and wants to utilize this information. This can be achieved by using Bayes&#039;s rule. At this point, the analyst is automatically forcing the Weibull distribution as a model for the data, with a shape parameter between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{\beta _2}\,\!&amp;lt;/math&amp;gt;. In this example, the range of values for the shape parameter is the prior distribution, which in this case is uniform. By applying Bayes&#039;s rule, the posterior distribution of the shape parameter is obtained. Thus, we end up with a distribution for the parameter rather than a point estimate, as in classical statistics.&lt;br /&gt;
&lt;br /&gt;
To better illustrate the example, assume that a set of failure data was provided along with a distribution for the shape parameter (i.e., uniform prior) of the Weibull (automatically assuming that the data are Weibull distributed). Based on that, a new distribution (the posterior) for that parameter is then obtained using Bayes&#039;s rule. This posterior distribution of the parameter may or may not resemble in form the assumed prior distribution. In other words, in this example the prior distribution of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; was assumed to be uniform but the posterior is most likely not a uniform distribution.&lt;br /&gt;
&lt;br /&gt;
The question now becomes: what is the value of the shape parameter? What about the reliability and other results of interest? In order to answer these questions, we have to remember that in the Bayesian framework all of these metrics are random variables. Therefore, in order to obtain a point estimate, a percentile of the posterior distribution needs to be specified, or its expected value can be used.&lt;br /&gt;
&lt;br /&gt;
In order to demonstrate the procedure of obtaining results from the posterior distribution, we will rewrite the Bayes&#039;s rule equation for a single parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta_1 |Data) = \frac{L(Data|\theta_1 )\varphi (\theta_1 )}{\int_{\zeta}^{} L(Data|\theta_1 )\varphi(\theta_1 )d\theta_1}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The expected value (or mean value) of the parameter &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained using the equation for the mean and the Bayes&#039;s rule equation for single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E({\theta _1}) = {m_{{\theta _1}}} = \int_{\zeta}^{}{\theta _1} \cdot f({\theta _1}|Data)d{\theta _1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative result for &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; would be the median value. Using the equation for the median and the Bayes&#039;s rule equation for a single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty }^{{{\theta }_{0.5}}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation for the median is solved for &amp;lt;math&amp;gt;{\theta _{0.5}}\,\!&amp;lt;/math&amp;gt;, the median value of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Similarly, any other percentile of the posterior &#039;&#039;pdf&#039;&#039; can be calculated and reported. For example, one could calculate the 90th percentile of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;’s posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty }^{{{\theta }_{0.9}}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.9\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
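These posterior quantities can be sketched numerically. The example below (an assumed exponential model with made-up failure times and a uniform prior on the rate; all names and numbers are illustrative) evaluates the posterior on a grid, normalizes by the marginal probability, and extracts the mean, median and 90th percentile:

```python
import math

# Assumed model and data: posterior of an exponential rate lam given
# failure times, with a uniform prior on [0.001, 0.1].
data = [45.0, 80.0, 120.0]
lo, hi, n = 0.001, 0.1, 5000
step = (hi - lo) / n
grid = [lo + (i + 0.5) * step for i in range(n)]

def likelihood(lam):
    return math.exp(sum(math.log(lam) - lam * t for t in data))

# Unnormalized posterior = likelihood * prior (the prior is constant here).
post = [likelihood(g) for g in grid]
marginal = sum(post) * step          # the normalizing integral
post = [p / marginal for p in post]  # posterior density on the grid

# Posterior mean: E[lam] = integral of lam * f(lam | data).
mean = sum(g * p for g, p in zip(grid, post)) * step

# Posterior percentiles by accumulating the posterior cdf.
def percentile(q):
    acc = 0.0
    for g, p in zip(grid, post):
        acc += p * step
        if acc >= q:
            return g
    return hi

print(mean, percentile(0.5), percentile(0.9))
```

The posterior here is a distribution for the rate, not a single number; the mean, median or any percentile can then be reported as the estimate.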
&lt;br /&gt;
This calculation will be used in [[Confidence Bounds]] and [[The Weibull Distribution]] for obtaining confidence bounds on the parameter(s).&lt;br /&gt;
&lt;br /&gt;
The next step will be to make inferences on the reliability. Since the parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; is a random variable described by the posterior &#039;&#039;pdf,&#039;&#039; all subsequent functions of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; are distributed random variables as well and are entirely based on the posterior &#039;&#039;pdf&#039;&#039; of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt;. Therefore, expected value, median or other percentile values will also need to be calculated. For example, the expected reliability at time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E[R(T|Data)] = \int_{\zeta}^{} R(T)f(\theta |Data)d{\theta}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, at a given time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, there is a distribution that governs the reliability value at that time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, and by using Bayes&#039;s rule, the expected (or mean) value of the reliability is obtained. Other percentiles of this distribution can also be obtained.&lt;br /&gt;
A similar procedure is followed for other functions of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;, such as failure rate, reliable life, etc.&lt;br /&gt;
&lt;br /&gt;
===Prior Distributions===&lt;br /&gt;
Prior distributions play a very important role in Bayesian statistics; they are essentially the basis of the analysis. Different types of prior distributions exist, namely &#039;&#039;informative&#039;&#039; and &#039;&#039;non-informative&#039;&#039;. Non-informative prior distributions (a.k.a. &#039;&#039;vague&#039;&#039;, &#039;&#039;flat&#039;&#039; and &#039;&#039;diffuse&#039;&#039;) are distributions that have no population basis and play a minimal role in the posterior distribution. The idea behind the use of non-informative priors is to make inferences that are not greatly affected by external information, or to be used when external information is not available. The uniform distribution is frequently used as a non-informative prior.&lt;br /&gt;
&lt;br /&gt;
On the other hand, informative priors have a stronger influence on the posterior distribution. The influence of the prior on the posterior is related to the sample size of the data and the form of the prior. Generally speaking, large sample sizes are required to modify strong priors, whereas weak priors are overwhelmed by even relatively small sample sizes. Informative priors are typically obtained from past data.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=57244</id>
		<title>Parameter Estimation</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=57244"/>
		<updated>2015-02-25T20:50:40Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* ReliaSoft&amp;#039;s Ranking Method (RRM) for Interval Censored Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|4|Parameter Estimation}}&lt;br /&gt;
The term &#039;&#039;parameter estimation&#039;&#039; refers to the process of using sample data (in reliability engineering, usually times-to-failure or success data) to estimate the parameters of the selected distribution. Several parameter estimation methods are available. This section presents an overview of the available methods used in life data analysis. More specifically, we start with the relatively simple method of Probability Plotting and continue with the more sophisticated methods of Rank Regression (or Least Squares), Maximum Likelihood Estimation and Bayesian Estimation Methods.&lt;br /&gt;
&lt;br /&gt;
=Probability Plotting=&lt;br /&gt;
The least mathematically intensive method for parameter estimation is the method of probability plotting. As the term implies, probability plotting involves a physical plot of the data on specially constructed &#039;&#039;probability plotting paper&#039;&#039;. This method is easily implemented by hand, given that one can obtain the appropriate probability plotting paper.&lt;br /&gt;
&lt;br /&gt;
The method of probability plotting takes the &#039;&#039;cdf&#039;&#039; of the distribution and attempts to linearize it by employing a specially constructed paper. The following sections illustrate the steps in this method using the 2-parameter Weibull distribution as an example. This includes:&lt;br /&gt;
&lt;br /&gt;
*Linearize the unreliability function&lt;br /&gt;
*Construct the probability plotting paper&lt;br /&gt;
*Determine the X and Y positions of the plot points&lt;br /&gt;
&lt;br /&gt;
And then using the plot to read any particular time or reliability/unreliability value of interest.&lt;br /&gt;
&lt;br /&gt;
==Linearizing the Unreliability Function==&lt;br /&gt;
&lt;br /&gt;
In the case of the 2-parameter Weibull, the &#039;&#039;cdf&#039;&#039; (also the unreliability &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt;) is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(t)=Q(t)=1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function can then be linearized (i.e., put in the common form of &amp;lt;math&amp;gt;y = mx + b\,\!&amp;lt;/math&amp;gt;) as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 Q(t)= &amp;amp;  1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}  \\&lt;br /&gt;
  \ln (1-Q(t))= &amp;amp; \ln \left[ {e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}} \right]  \\&lt;br /&gt;
  \ln (1-Q(t))=&amp;amp; -\left(\tfrac{t}{\eta}\right)^{\beta}  \\&lt;br /&gt;
  \ln ( -\ln (1-Q(t)))= &amp;amp; \beta \left(\ln \left( \frac{t}{\eta }\right)\right) \\&lt;br /&gt;
  \ln \left( \ln \left( \frac{1}{1-Q(t)}\right) \right) = &amp;amp; \beta \ln t -\beta \ln \left( \eta \right)  \\&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then by setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x=\ln \left( t \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equation can then be rewritten as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\beta x-\beta \ln \left( \eta  \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is now a linear equation with a slope of: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
m = \beta&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and an intercept of:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=-\beta \cdot \ln (\eta)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
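A quick numeric check of this linearization (the parameter values and plotting times below are assumed for illustration):

```python
import math

# Check: for a Weibull with beta = 1.5, eta = 100, plotting
# y = ln(ln(1/(1 - Q(t)))) against x = ln(t) gives a straight line
# with slope beta and intercept -beta*ln(eta).
beta, eta = 1.5, 100.0

def unreliability(t):
    return 1.0 - math.exp(-((t / eta) ** beta))

points = []
for t in [10.0, 50.0, 100.0, 400.0]:
    x = math.log(t)
    y = math.log(math.log(1.0 / (1.0 - unreliability(t))))
    points.append((x, y))

# Slope and intercept from the first and last transformed points.
(x0, y0), (x1, y1) = points[0], points[-1]
slope = (y1 - y0) / (x1 - x0)
intercept = y0 - slope * x0
print(slope, intercept)  # slope recovers beta; intercept is -beta*ln(eta)
```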
&lt;br /&gt;
==Constructing the Paper==&lt;br /&gt;
The next task is to construct the Weibull probability plotting paper with the appropriate y and x axes. The x-axis transformation is simply logarithmic. The y-axis is a bit more complex, requiring a double log reciprocal transformation, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt; is the unreliability. &lt;br /&gt;
&lt;br /&gt;
Such papers have been created by different vendors and are called &#039;&#039;probability plotting papers&#039;&#039;. ReliaSoft&#039;s reliability engineering resource website at www.weibull.com has different plotting papers available for [http://www.weibull.com/GPaper/index.htm download]. &lt;br /&gt;
&lt;br /&gt;
[[Image:WeibullPaper2C.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
To illustrate, consider the following probability plot on a slightly different type of Weibull probability paper. &lt;br /&gt;
&lt;br /&gt;
[[Image:different_weibull_paper.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
This paper is constructed based on the mentioned y and x transformations, where the y-axis represents unreliability and the x-axis represents time. Both of these values must be known for each time-to-failure point we want to plot. &lt;br /&gt;
&lt;br /&gt;
Then, given the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; value for each point, the points can easily be put on the plot. Once the points have been placed, the best possible straight line is drawn through them. Once the line has been drawn, its slope can be obtained (some probability papers include a slope indicator to simplify this calculation); this slope is the parameter &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;. To determine the scale parameter, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; (also called the &#039;&#039;characteristic life&#039;&#039;), one reads the time from the x-axis corresponding to &amp;lt;math&amp;gt;Q(t)=63.2\%\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Note that at:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Q(t=\eta)= &amp;amp; 1-{{e}^{-{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}} \\ &lt;br /&gt;
  = &amp;amp; 1-{{e}^{-1}} \\ &lt;br /&gt;
  = &amp;amp; 0.632 \\ &lt;br /&gt;
  = &amp;amp; 63.2\%  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, if we enter the &#039;&#039;y&#039;&#039; axis at &amp;lt;math&amp;gt;Q(t)=63.2\%\,\!&amp;lt;/math&amp;gt;, the corresponding value of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; will be equal to &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;. Using this simple methodology, the parameters of the Weibull distribution can be estimated.&lt;br /&gt;
&lt;br /&gt;
==Determining the X and Y Position of the Plot Points==&lt;br /&gt;
The points on the plot represent our data or, more specifically, our times-to-failure data. If, for example, we tested four units that failed at 10, 20, 30 and 40 hours, then we would use these times as our &#039;&#039;x&#039;&#039; values or time values. &lt;br /&gt;
&lt;br /&gt;
Determining the appropriate &#039;&#039;y&#039;&#039; plotting positions, or the unreliability values, is a little more complex. To determine the &#039;&#039;y&#039;&#039; plotting positions, we must first determine a value indicating the corresponding unreliability for each failure. In other words, we need to obtain the cumulative percent failed for each time-to-failure. For example, the cumulative percent failed by 10 hours may be 25%, by 20 hours 50%, and so forth. The problem with this simple approach is that the 100% point is not defined on most probability plots; thus, an alternative and more robust approach must be used. The most widely used method of determining this value is the method of obtaining the &#039;&#039;median rank&#039;&#039; for each failure, as discussed next.&lt;br /&gt;
&lt;br /&gt;
===Median Ranks ===&lt;br /&gt;
The Median Ranks method is used to obtain an estimate of the unreliability for each failure. The median rank is the value that the true probability of failure, &amp;lt;math&amp;gt;Q({{T}_{j}})\,\!&amp;lt;/math&amp;gt;, should have at the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure out of a sample of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units at the 50% confidence level. &lt;br /&gt;
&lt;br /&gt;
The rank can be found for any percentage point, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, greater than zero and less than one, by solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;. This represents the rank, or unreliability estimate, for the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure in the following equation for the cumulative binomial: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the sample size and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; the order number. &lt;br /&gt;
&lt;br /&gt;
The median rank is obtained by solving this equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  at &amp;lt;math&amp;gt;P = 0.50\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;0.50=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; and we have four failures, we would solve the median rank equation for the value of &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; four times; once for each failure with &amp;lt;math&amp;gt;j= 1, 2, 3 \text{ and }4\,\!&amp;lt;/math&amp;gt;. This result can then be used as the unreliability estimate for each failure, or the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; plotting position. (See also [[The Weibull Distribution|The Weibull Distribution]]&amp;amp;nbsp;for a step-by-step example of this method.) Solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; requires the use of numerical methods.&lt;br /&gt;
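A minimal sketch of that numerical solution, using plain bisection (the choice of &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; follows the example above):

```python
import math

# Solve the cumulative binomial equation
#   0.50 = sum_{k=j}^{N} C(N, k) Z^k (1 - Z)^(N - k)
# for Z by bisection, giving the median rank of the j-th failure out of N.
def cumulative_binomial(z, j, n):
    return sum(math.comb(n, k) * z**k * (1.0 - z)**(n - k)
               for k in range(j, n + 1))

def median_rank(j, n, p=0.50):
    lo, hi = 0.0, 1.0
    for _ in range(100):  # the sum is increasing in z, so bisection works
        mid = (lo + hi) / 2.0
        if cumulative_binomial(mid, j, n) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Median ranks for four failures out of N = 4 units.
ranks = [median_rank(j, 4) for j in (1, 2, 3, 4)]
print([round(r, 4) for r in ranks])
```

For &amp;lt;math&amp;gt;j=1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j=N\,\!&amp;lt;/math&amp;gt; the equation also solves in closed form, which gives a convenient check on the solver.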
&lt;br /&gt;
===Beta and F Distributions Approach===&lt;br /&gt;
A more straightforward and easier method of estimating median ranks is by applying two transformations to the cumulative binomial equation, first to the beta distribution and then to the F distribution, resulting in [[Appendix:_Life_Data_Analysis_References|[12, 13]]]: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   MR &amp;amp; = &amp;amp; \tfrac{1}{1+\tfrac{N-j+1}{j}{{F}_{0.50;m;n}}}  \\&lt;br /&gt;
   m &amp;amp; = &amp;amp; 2(N-j+1)  \\&lt;br /&gt;
   n &amp;amp; = &amp;amp; 2j  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{F}_{0.50;m;n}}\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution at the 0.50 point, with &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; degrees of freedom, for failure &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; out of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units.&lt;br /&gt;
&lt;br /&gt;
=== Benard&#039;s Approximation for Median Ranks  ===&lt;br /&gt;
Another quick, though less accurate, approximation of the median ranks is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MR = \frac{{j - 0.3}}{{N + 0.4}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This approximation of the median ranks is also known as &#039;&#039;Benard&#039;s approximation&#039;&#039;.&lt;br /&gt;
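As a quick sanity check of the approximation (using the first failure out of &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; units, where the cumulative binomial solves in closed form):

```python
# Benard's approximation versus the exact median rank (j = 1, N = 4).
# For j = 1 the cumulative binomial gives 0.5 = 1 - (1 - Z)^N, so
# Z = 1 - 0.5**(1/N).
N, j = 4, 1
exact = 1.0 - 0.5 ** (1.0 / N)
benard = (j - 0.3) / (N + 0.4)
print(round(exact, 4), round(benard, 4))  # the two agree to about 3 decimals
```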
&lt;br /&gt;
===Kaplan-Meier===&lt;br /&gt;
The Kaplan-Meier estimator (also known as the &#039;&#039;product limit estimator&#039;&#039;) is used as an alternative to the median ranks method for calculating the estimates of the unreliability for probability plotting purposes. The equation of the estimator is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{F}({{t}_{i}})=1-\underset{j=1}{\overset{i}{\mathop \prod }}\,\frac{{{n}_{j}}-{{r}_{j}}}{{{n}_{j}}},\text{ }i=1,...,m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  m =  &amp;amp; {\text{total number of data points}} \\ &lt;br /&gt;
  n =  &amp;amp; {\text{the total number of units}} \\ &lt;br /&gt;
  {n_i} =  &amp;amp; n - \sum_{j = 0}^{i - 1}{s_j} - \sum_{j = 0}^{i - 1}{r_j},{\text{ }}i = 1,...,m \\ &lt;br /&gt;
  {r_j} =  &amp;amp; {\text{ number of failures in the }}{j^{th}}{\text{ data group, and}} \\ &lt;br /&gt;
  {s_j} =  &amp;amp; {\text{number of surviving units in the }}{j^{th}}{\text{ data group}} \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
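A minimal sketch of the product limit computation (illustrative variable names; the data groups are given as (failures, suspensions) pairs in time order):&lt;br /&gt;

```python
def kaplan_meier_unreliability(groups, n_total):
    """Kaplan-Meier (product limit) unreliability estimates.

    groups : list of (r_j, s_j) pairs in time order, where r_j is the
             number of failures and s_j the number of suspensions in
             the j-th data group.
    Returns F_hat(t_i) = 1 - prod_{j<=i} (n_j - r_j) / n_j.
    """
    surv, at_risk, F_hat = 1.0, n_total, []
    for r, s in groups:
        surv *= (at_risk - r) / at_risk
        F_hat.append(1.0 - surv)
        at_risk -= r + s  # units entering the next group
    return F_hat

# 5 units: one failure and one suspension, then one of each again,
# then a final failure
print(kaplan_meier_unreliability([(1, 1), (1, 1), (1, 0)], 5))
```
&lt;br /&gt;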
&lt;br /&gt;
== Probability Plotting Example  ==&lt;br /&gt;
This same methodology can be applied to other distributions with &#039;&#039;cdf&#039;&#039; equations that can be linearized. Different probability papers exist for each distribution, because different distributions have different &#039;&#039;cdf&#039;&#039; equations. ReliaSoft&#039;s software tools automatically create these plots for you. Special scales on these plots allow you to derive the parameter estimates directly from the plots, similar to the way &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; were obtained from the Weibull probability plot. The following example demonstrates the method again, this time using the 1-parameter exponential distribution.&lt;br /&gt;
&lt;br /&gt;
{{:Probability Plotting Example}}&lt;br /&gt;
&lt;br /&gt;
== Comments on the Probability Plotting Method ==&lt;br /&gt;
Besides the most obvious drawback to probability plotting, which is the amount of effort required, manual probability plotting does not always yield consistent results. Two people plotting a straight line through a set of points will not always draw that line the same way, and thus will come up with slightly different results. This method was used primarily before the widespread availability of computers that could easily perform the calculations for more complicated parameter estimation methods, such as the least squares and maximum likelihood methods.&lt;br /&gt;
&lt;br /&gt;
= Least Squares (Rank Regression)  =&lt;br /&gt;
Using the idea of probability plotting, regression analysis mathematically fits the best straight line to a set of points, in an attempt to estimate the parameters. Essentially, this is a mathematically based version of the probability plotting method discussed previously. &lt;br /&gt;
&lt;br /&gt;
The method of linear least squares is used for all regression analysis performed by Weibull++, except for the cases of the 3-parameter Weibull, mixed Weibull, gamma and generalized gamma distributions, where a non-linear regression technique is employed. The terms &#039;&#039;linear regression&#039;&#039; and &#039;&#039;least squares&#039;&#039; are used synonymously in this reference. In Weibull++, the term &#039;&#039;rank regression&#039;&#039; is used instead of least squares or linear regression because the regression is performed on the rank values; more specifically, on the median rank values (represented on the y-axis). The method of least squares requires that a straight line be fitted to a set of data points such that the sum of the squares of the distances of the points to the fitted line is minimized. This minimization can be performed in either the vertical or the horizontal direction. If the regression is on &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the horizontal deviations from the points to the line are minimized. If the regression is on Y, then the vertical deviations from the points to the line are minimized. This is illustrated in the following figure. &lt;br /&gt;
&lt;br /&gt;
[[Image:minimizingdistance.png|center|500px]]&lt;br /&gt;
&lt;br /&gt;
=== Rank Regression on Y  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;-values are known exactly. Then, according to the &#039;&#039;least squares principle,&#039;&#039; which minimizes the vertical distance between the data points and the straight line fitted to the data, the best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;y=\hat{a}+\hat{b}x\,\!&amp;lt;/math&amp;gt; (where the recently introduced (&amp;lt;math&amp;gt;\hat{ }\,\!&amp;lt;/math&amp;gt;) symbol indicates that this value is an estimate) such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\sum\limits_{i=1}^{N}{{{\left( \hat{a}+\hat{b}{{x}_{i}}-{{y}_{i}} \right)}^{2}}=\min \sum\limits_{i=1}^{N}{{{\left( a+b{{x}_{i}}-{{y}_{i}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and where &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat b\,\!&amp;lt;/math&amp;gt; are the &#039;&#039;least squares estimates&#039;&#039; of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}=\bar{y}-\hat{b}\bar{x}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
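The two estimates above translate directly into code; a minimal sketch in plain Python with illustrative names:&lt;br /&gt;

```python
def rank_regression_on_y(x, y):
    """Least squares estimates (a_hat, b_hat) for y = a + b*x,
    minimizing the vertical deviations."""
    N = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(xi * yi for xi, yi in zip(x, y))
    sum_x2 = sum(xi * xi for xi in x)
    b_hat = (sum_xy - sum_x * sum_y / N) / (sum_x2 - sum_x ** 2 / N)
    a_hat = sum_y / N - b_hat * sum_x / N  # a_hat = y_bar - b_hat * x_bar
    return a_hat, b_hat

# Points lying exactly on y = 1 + 2x are recovered exactly
print(rank_regression_on_y([0, 1, 2, 3], [1, 3, 5, 7]))  # (1.0, 2.0)
```
&lt;br /&gt;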
&lt;br /&gt;
=== Rank Regression on X  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the y-values are known exactly. The same least squares principle is applied, but this time, minimizing the horizontal distance between the data points and the straight line fitted to the data. The best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;x=\widehat{a}+\widehat{b}y\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(\widehat{a}+\widehat{b}{{y}_{i}}-{{x}_{i}})}^{2}}=\underset{a,b}{\mathop{\min }}\,\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(a+b{{y}_{i}}-{{x}_{i}})}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat b\,\!&amp;lt;/math&amp;gt; are the least squares estimates of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}=\bar{x}-\hat{b}\bar{y}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
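By symmetry with rank regression on Y, the same sums with the roles of x and y exchanged give the estimates; a minimal sketch with illustrative names:&lt;br /&gt;

```python
def rank_regression_on_x(x, y):
    """Least squares estimates (a_hat, b_hat) for x = a + b*y,
    minimizing the horizontal deviations."""
    N = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(xi * yi for xi, yi in zip(x, y))
    sum_y2 = sum(yi * yi for yi in y)
    b_hat = (sum_xy - sum_x * sum_y / N) / (sum_y2 - sum_y ** 2 / N)
    a_hat = sum_x / N - b_hat * sum_y / N  # a_hat = x_bar - b_hat * y_bar
    return a_hat, b_hat

# For points on y = 1 + 2x, the fitted line is x = -0.5 + 0.5*y
print(rank_regression_on_x([0, 1, 2, 3], [1, 3, 5, 7]))  # (-0.5, 0.5)
```
&lt;br /&gt;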
&lt;br /&gt;
The corresponding relations for determining the parameters for specific distributions (i.e., Weibull, exponential, etc.), are presented in the chapters covering that distribution.&lt;br /&gt;
&lt;br /&gt;
=== Correlation Coefficient  ===&lt;br /&gt;
The correlation coefficient is a measure of how well the linear regression model fits the data and is usually denoted by &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt;. In the case of life data analysis, it is a measure for the strength of the linear relation (correlation) between the median ranks and the data. The population correlation coefficient is defined as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\rho =\frac{{{\sigma }_{xy}}}{{{\sigma }_{x}}{{\sigma }_{y}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\sigma}_{xy}} = \,\!&amp;lt;/math&amp;gt; covariance of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\sigma}_{x}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{\sigma}_{y}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The estimator of &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt; is the sample correlation coefficient, &amp;lt;math&amp;gt;\hat{\rho }\,\!&amp;lt;/math&amp;gt;, given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\sqrt{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N} \right)\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The range of &amp;lt;math&amp;gt;\hat \rho \,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;-1\le \hat{\rho }\le 1\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
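The sample estimator can be sketched as follows (plain Python, illustrative names):&lt;br /&gt;

```python
def sample_correlation(x, y):
    """Sample correlation coefficient rho_hat, computed from the
    corrected sums of products and squares in the estimator above."""
    N = len(x)
    s_xy = sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y) / N
    s_xx = sum(a * a for a in x) - sum(x) ** 2 / N
    s_yy = sum(b * b for b in y) - sum(y) ** 2 / N
    return s_xy / (s_xx * s_yy) ** 0.5

print(sample_correlation([0, 1, 2, 3], [1, 3, 5, 7]))  # 1.0, perfect fit
print(sample_correlation([0, 1, 2, 3], [7, 5, 3, 1]))  # -1.0, negative slope
```
&lt;br /&gt;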
&lt;br /&gt;
[[Image:correlationcoeffficient.png|center|500px]] &lt;br /&gt;
&lt;br /&gt;
The closer the value is to &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt;, the better the linear fit. Note that +1 indicates a perfect fit (the paired values (&amp;lt;math&amp;gt;{{x}_{i}},{{y}_{i}}\,\!&amp;lt;/math&amp;gt;) lie on a straight line) with a positive slope, while -1 indicates a perfect fit with a negative slope. A correlation coefficient value of zero indicates that the data are randomly scattered and have no pattern or correlation with the regression line.&lt;br /&gt;
&lt;br /&gt;
===Comments on the Least Squares Method===&lt;br /&gt;
The least squares estimation method is quite good for functions that can be linearized. For these distributions, the calculations are relatively easy and straightforward, having closed-form solutions that can readily yield an answer without having to resort to numerical techniques or tables. Furthermore, this technique provides a good measure of the goodness-of-fit of the chosen distribution in the correlation coefficient. Least squares is generally best used with data sets containing complete data, that is, data consisting only of single times-to-failure with no censored or interval data. (See [[Life Data Classification]] for information about the different data types, including complete, left censored, right censored (or suspended) and interval data.) &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Least Squares/Rank Regression Equations]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Rank Methods for Censored Data=&lt;br /&gt;
All available data should be considered in the analysis of times-to-failure data. This includes the case when a particular unit in a sample has been removed from the test prior to failure. An item, or unit, which is removed from a reliability test prior to failure, or a unit which is in the field and is still operating at the time the reliability of these units is to be determined, is called a &#039;&#039;suspended item&#039;&#039;, a &#039;&#039;right censored observation&#039;&#039; or a &#039;&#039;right censored data point&#039;&#039;. Suspended items analysis would also be considered when: &lt;br /&gt;
&lt;br /&gt;
#We need to make an analysis of the available results before test completion. &lt;br /&gt;
#The failure modes which are occurring are different from those anticipated and such units are withdrawn from the test. &lt;br /&gt;
#We need to analyze a single mode and the actual data set comprises multiple modes. &lt;br /&gt;
#A &#039;&#039;warranty analysis&#039;&#039; is to be made of all units in the field (non-failed and failed units). The non-failed units are considered to be suspended items (or right censored).&lt;br /&gt;
&lt;br /&gt;
This section describes the rank methods that are used in both probability plotting and least squares (rank regression) to handle censored data. This includes:&lt;br /&gt;
&lt;br /&gt;
*The rank adjustment method for right censored (suspension) data.&lt;br /&gt;
*ReliaSoft&#039;s alternative ranking method for censored data including left censored, right censored, and interval data.&lt;br /&gt;
=== Rank Adjustment Method for Right Censored Data ===&lt;br /&gt;
When using the probability plotting or least squares (rank regression) method for data sets where some of the units did not fail, or were suspended, we need to adjust their probability of failure, or unreliability. As discussed before, estimates of the unreliability for complete data are obtained using the median ranks approach. The following methodology illustrates how adjusted median ranks are computed to account for right censored data. To better illustrate the methodology, consider the following example in Kececioglu [[Appendix:_Life_Data_Analysis_References|&amp;amp;nbsp;[20]]] where five items are tested resulting in three failures and two suspensions. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Item Number &amp;lt;br&amp;gt;(Position) &lt;br /&gt;
! Failure (F) &amp;lt;br&amp;gt;or Suspension (S) &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 5,100&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,500&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 15,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 22,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 40,000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The methodology for plotting suspended items involves adjusting the rank positions and plotting the data based on new positions, determined by the location of the suspensions. If we consider these five units, the following methodology would be used: The first item must be the first failure; hence, it is assigned failure order number &amp;lt;math&amp;gt;j = 1\,\!&amp;lt;/math&amp;gt;. The actual failure order number (or position) of the second failure, &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; is in doubt. It could either be in position 2 or in position 3. Had &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; not been withdrawn from the test at 9,500 hours, it could have operated successfully past 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 2. Alternatively, &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; could also have failed before 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 3. In this case, the failure order number for &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; will be some number between 2 and 3. To determine this number, consider the following: &lt;br /&gt;
&lt;br /&gt;
We can find the number of ways the second failure can occur in either order number 2 (position 2) or order number 3 (position 3). The possible ways are listed next. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;6&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 2 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 3&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 4 &lt;br /&gt;
| 5 &lt;br /&gt;
| 6 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be seen that &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; can occur in the second position six ways and in the third position two ways. The most probable position is the average of these possible ways, or the &#039;&#039;mean order number&#039;&#039; (MON), given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}=MO{{N}_{2}}=\frac{(6\times 2)+(2\times 3)}{6+2}=2.25\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Using the same logic on the third failure, it can be located in position numbers 3, 4 and 5 in the possible ways listed next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 3 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 4&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 5&lt;br /&gt;
|-&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Then, the mean order number for the third failure (item 5) is: &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=\frac{(2\times 3)+(3\times 4)+(3\times 5)}{2+3+3}=4.125\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Once the mean order number for each failure has been established, we obtain the median rank positions for these failures at their mean order number. Specifically, we obtain the median rank of the order numbers 1, 2.25 and 4.125 out of a sample size of 5, as given next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Plotting Positions for the Failures (Sample Size=5)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Failure Number &lt;br /&gt;
! MON &lt;br /&gt;
! Median Rank Position(%)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1:&amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1 &lt;br /&gt;
| 13%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2:&amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 2.25 &lt;br /&gt;
| 36%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3:&amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 4.125 &lt;br /&gt;
| 71%&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
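As a check, the median rank positions in this table can be reproduced to the nearest percent by evaluating Benard's approximation at the fractional mean order numbers (a common shortcut; the table's values come from the median rank equation):&lt;br /&gt;

```python
def benard(mon, N):
    """Benard's approximation evaluated at a (possibly fractional)
    mean order number."""
    return (mon - 0.3) / (N + 0.4)

# Mean order numbers 1, 2.25 and 4.125 out of a sample size of 5
for mon in (1, 2.25, 4.125):
    print(round(100 * benard(mon, 5)))  # 13, 36, 71
```
&lt;br /&gt;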
&lt;br /&gt;
Once the median rank values have been obtained, the probability plotting analysis is identical to that presented before. As you might have noticed, this methodology is rather laborious. Other techniques and shortcuts have been developed over the years to streamline this procedure. For more details on this method, see Kececioglu [[Appendix:_Life_Data_Analysis_References|[20]]]. Here, we will introduce one of these methods. This method calculates MON using an increment, &#039;&#039;I&#039;&#039;, which is defined by:&lt;br /&gt;
&lt;br /&gt;
:: &amp;lt;math&amp;gt;{{I}_{i}}=\frac{N+1-PMON}{1+NIBPSS}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
* &#039;&#039;N&#039;&#039; = the sample size, or total number of items in the test&lt;br /&gt;
* &#039;&#039;PMON&#039;&#039; = the previous mean order number&lt;br /&gt;
* &#039;&#039;NIBPSS&#039;&#039; = the number of items beyond the present suspended set; that is, the number of units (counting the current failure and all subsequent failures and suspensions) remaining at the current failure time&lt;br /&gt;
* &#039;&#039;i&#039;&#039; = the ith failure item&lt;br /&gt;
&lt;br /&gt;
MON is given as:&lt;br /&gt;
 &lt;br /&gt;
:: &amp;lt;math&amp;gt;MO{{N}_{i}}=MO{{N}_{i-1}}+{{I}_{i}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s calculate the previous example using this method.&lt;br /&gt;
&lt;br /&gt;
For F1:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{1}}=MO{{N}_{0}}+{{I}_{1}}=\frac{5+1-0}{1+5}=1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For F2:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{2}}=MO{{N}_{1}}+{{I}_{2}}=1+\frac{5+1-1}{1+3}=2.25&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For F3:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=MO{{N}_{2}}+{{I}_{3}}=2.25+\frac{5+1-2.25}{1+1}=4.125&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The MON obtained for each failure item via this method is the same as that from the first method, so the median rank values will also be the same.&lt;br /&gt;
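The increment method can be sketched as follows (illustrative; `states` lists the items as failures and suspensions in time order):&lt;br /&gt;

```python
def mean_order_numbers(states):
    """Mean order numbers via the increment method:
    I_i = (N + 1 - PMON) / (1 + NIBPSS).
    states : list of 'F' (failure) / 'S' (suspension) in time order.
    """
    N = len(states)
    mons, prev_mon = [], 0.0
    in_suspended_set = 0  # items up to and including the latest suspension
    for pos, state in enumerate(states, start=1):
        if state == 'S':
            in_suspended_set = pos
        else:
            nibpss = N - in_suspended_set  # items beyond the suspended set
            prev_mon += (N + 1 - prev_mon) / (1 + nibpss)
            mons.append(prev_mon)
    return mons

# The five-unit example above: F1, S1, F2, S2, F3
print(mean_order_numbers(['F', 'S', 'F', 'S', 'F']))  # [1.0, 2.25, 4.125]
```
&lt;br /&gt;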
&lt;br /&gt;
For grouped data, the increment &amp;lt;math&amp;gt;{{I}_{i}}&amp;lt;/math&amp;gt; at each failure group is multiplied by the number of failures in that group. &lt;br /&gt;
 &lt;br /&gt;
==== Shortfalls of the Rank Adjustment Method  ====&lt;br /&gt;
Even though the rank adjustment method is the most widely used technique for the analysis of suspended items, we would like to point out the following shortcoming. As you may have noticed, only the position where the failure occurred is taken into account, and not the exact time-to-suspension. For example, this methodology would yield the exact same results for the next two cases. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 1 &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of item, hr &lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000 &lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,100 &lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,700&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,200 &lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,800&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,300 &lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,900&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000 &lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This shortfall is significant when the number of failures is small and the number of suspensions is large and not spread uniformly between failures, as with these data. In cases like this, it is highly recommended to estimate the parameters using maximum likelihood estimation (MLE) instead of least squares, because MLE does not look at ranks or plotting positions, but rather considers each unique time-to-failure or suspension. For the data given above, the parameters estimated using the rank adjustment method just described are the same for both cases (1 and 2): &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.81}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{11,417 hr}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, the MLE results for Case 1 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{1}\text{.33}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{6,900 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the MLE results for Case 2 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.9337}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{21,348 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we can see, there is a sizable difference between the results calculated using MLE and the results using regression. The results for both cases are identical when using the regression estimation technique, because regression considers only the positions of the suspensions. The MLE results are quite different for the two cases, with the second case having a much larger value of &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; due to the higher suspension times in Case 2. This is because the maximum likelihood technique, unlike rank regression, considers the values of the suspensions when estimating the parameters. This is illustrated in the [[Parameter_Estimation#Maximum_Likelihood_Estimation_.28MLE.29|discussion of MLE]] given below.&lt;br /&gt;
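The sensitivity of MLE to the suspension times can be reproduced numerically. The following is a minimal sketch, assuming NumPy and SciPy are available: it maximizes the right-censored Weibull likelihood (formulated in the MLE sections below) for both cases in the table. The data come from the table above; the function names and starting values are illustrative, and this is not ReliaSoft's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_negloglik(params, failures, suspensions):
    """Negative log-likelihood of a 2-parameter Weibull with right-censored data."""
    beta, eta = params
    if beta <= 0 or eta <= 0:
        return np.inf  # keep the optimizer in the valid parameter region
    f = np.asarray(failures, dtype=float)
    s = np.asarray(suspensions, dtype=float)
    # Failures contribute log f(T_i); suspensions contribute log R(S_j) = -(S_j/eta)^beta.
    ll = np.sum(np.log(beta / eta) + (beta - 1.0) * np.log(f / eta) - (f / eta) ** beta)
    ll += np.sum(-(s / eta) ** beta)
    return -ll

def fit_weibull(failures, suspensions):
    res = minimize(weibull_negloglik, x0=[1.0, float(np.mean(failures))],
                   args=(failures, suspensions), method="Nelder-Mead")
    return res.x  # [beta_hat, eta_hat]

case1 = fit_weibull([1000.0, 10000.0], [1100.0, 1200.0, 1300.0])
case2 = fit_weibull([1000.0, 10000.0], [9700.0, 9800.0, 9900.0])
print(case1, case2)  # Case 2 yields a noticeably larger eta
```

Because the likelihood depends on the suspension values themselves, Case 2's much later suspensions pull the estimate of eta upward, in line with the comparison discussed above.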
&lt;br /&gt;
== ReliaSoft&#039;s Ranking Method (RRM) for Interval Censored Data==&lt;br /&gt;
When analyzing interval data, it is commonplace to assume that the actual failure time occurred at the midpoint of the interval. To be more conservative, you can use the starting point of the interval; to be most optimistic, you can use its end point. Weibull++ allows you to employ ReliaSoft&#039;s ranking method (RRM) when analyzing interval data. Using an iterative process, this ranking method is an improvement over the standard ranking method (SRM). &lt;br /&gt;
&lt;br /&gt;
When analyzing left or right censored data, RRM also considers the effect of the actual censoring time. Therefore, the resulting ranks will be more accurate than those of the SRM, in which only the position, not the exact time, of the censoring is used. &lt;br /&gt;
&lt;br /&gt;
For more details on this method see [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|ReliaSoft&#039;s Ranking Method]].&lt;br /&gt;
&lt;br /&gt;
= Maximum Likelihood Estimation (MLE) = &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM OTHER WIKI PAGES. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
From a statistical point of view, the method of maximum likelihood estimation is, with some exceptions, considered to be the most robust of the parameter estimation techniques discussed here. The method presented in this section is for complete data (i.e., data consisting only of times-to-failure). The analyses for [[Parameter_Estimation#MLE_for_Right_Censored_Data|right censored (suspension) data]] and for [[Parameter_Estimation#MLE_for_Interval_and_Left_Censored_Data|interval or left censored data]] are then discussed in the following sections.&lt;br /&gt;
&lt;br /&gt;
The basic idea behind MLE is to obtain the most likely values of the parameters, for a given distribution, that will best describe the data. As an example, consider the following data (-3, 0, 4) and assume that you are trying to estimate the mean of the data. Now, if you have to choose the most likely value for the mean from -5, 1 and 10, which one would you choose? In this case, the most likely value is 1 (given your limit on choices). Similarly, under MLE, one determines the most likely values for the parameters of the assumed distribution. It is mathematically formulated as follows. &lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; unknown parameters that need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; independent observations, &amp;lt;math&amp;gt;{{x}_{1}},{{x}_{2}},\cdots ,{{x}_{R}}\,\!&amp;lt;/math&amp;gt;, which in the case of life data analysis correspond to failure times. The likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{R}})=L=\underset{i=1}{\overset{R}{\mathop \prod }}\,f({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;i = 1,2,...,R\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logarithmic likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Lambda  = \ln L =\sum_{i = 1}^R \ln f({x_i};{\theta _1},{\theta _2},...,{\theta _k})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood estimators (or parameter values) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are obtained by maximizing &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By maximizing &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;, which is much easier to work with than &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt;, the maximum likelihood estimators (MLE) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are obtained as the simultaneous solutions of the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; equations: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{\partial{\Lambda}}{\partial{\theta_j}}=0, \text{ j=1,2...,k}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even though it is common practice to plot the MLE solutions using median ranks (points are plotted according to median ranks and the line according to the MLE solutions), this is not completely representative. As can be seen from the equations above, the MLE method is independent of any kind of ranks. For this reason, the MLE solution often appears not to track the data on the probability plot. This is perfectly acceptable because the two methods are independent of each other, and in no way suggests that the solution is wrong.&lt;br /&gt;
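As a concrete illustration of solving the score equations above, consider the exponential distribution, whose single score equation has a closed-form solution. This is a small sketch with made-up failure times, using SciPy's scalar minimizer to confirm the closed form; it is not tied to any particular software implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Made-up complete data (times-to-failure), purely illustrative.
times = np.array([12.0, 25.0, 31.0, 47.0, 60.0, 78.0])

# For the exponential pdf f(t) = lam * exp(-lam * t), the log-likelihood is
# Lambda(lam) = n*ln(lam) - lam*sum(t), and d(Lambda)/d(lam) = 0 gives the
# closed-form estimator lam_hat = n / sum(t).
def negloglik(lam):
    return -(len(times) * np.log(lam) - lam * times.sum())

numeric = minimize_scalar(negloglik, bounds=(1e-6, 1.0), method="bounded").x
closed_form = len(times) / times.sum()
print(numeric, closed_form)  # the two estimates agree to optimizer tolerance
```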
&lt;br /&gt;
=== MLE for Right Censored Data  ===&lt;br /&gt;
When performing maximum likelihood analysis on data with suspended items, the likelihood function needs to be expanded to take into account the suspended items. The overall estimation technique does not change, but another term is added to the likelihood function to account for the suspended items. Beyond that, the method of solving for the parameter estimates remains the same. For example, consider a distribution where &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039; and &#039;&#039;cdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
    &amp;amp; F(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the unknown parameters which need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; observed failures at &amp;lt;math&amp;gt;{{T}_{1}},{{T}_{2}},...,{{T}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; observed suspensions at &amp;lt;math&amp;gt;{{S}_{1}},{{S}_{2}},...,{{S}_{M}}\,\!&amp;lt;/math&amp;gt;. The likelihood function is then formulated as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R,}}{{S}_{1}},...,{{S}_{M}})= &amp;amp; \underset{i=1}{\overset{R}{\mathop \prod }}\,f({{T}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \cdot \underset{j=1}{\overset{M}{\mathop \prod }}\,[1-F({{S}_{j}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameters are solved by maximizing this equation. In most cases, no closed-form solution exists for this maximum or for the parameters. Solutions specific to each distribution utilizing MLE are presented in [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
&lt;br /&gt;
=== MLE for Interval and Left Censored Data  ===&lt;br /&gt;
The inclusion of left and interval censored data in an MLE solution for parameter estimates involves adding a term to the likelihood equation to account for the data types in question. When using interval data, it is assumed that the failures occurred in an interval; i.e., in the interval from time &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or from time 0 to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; if left censored), where &amp;lt;math&amp;gt;A &amp;lt; B\,\!&amp;lt;/math&amp;gt;. In the case of interval data, and given &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; interval observations, the likelihood function is modified by multiplying the likelihood function with an additional term as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{P}})= &amp;amp; \underset{i=1}{\overset{P}{\mathop \prod }}\,\{F({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \ \ -F({{x}_{i-1}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that if only interval data are present, this term will represent the entire likelihood function for the MLE solution. The next section gives a formulation of the complete likelihood function for all possible censoring schemes.&lt;br /&gt;
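The interval term can be sketched directly from the cdf. The following assumes a 2-parameter Weibull and NumPy; the interval endpoints are invented for illustration (a left-censored unit is simply an interval starting at 0).

```python
import numpy as np

def weibull_cdf(t, beta, eta):
    return 1.0 - np.exp(-(t / eta) ** beta)

# Each observation is an interval (A, B) in which the failure occurred;
# a left-censored unit is an interval with A = 0. Endpoints are made up.
intervals = [(0.0, 150.0), (100.0, 200.0), (180.0, 300.0)]

def interval_loglik(beta, eta):
    # sum of log[F(B) - F(A)] over the interval observations
    return sum(np.log(weibull_cdf(b, beta, eta) - weibull_cdf(a, beta, eta))
               for a, b in intervals)

print(interval_loglik(1.5, 200.0))
```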
&lt;br /&gt;
=== The Complete Likelihood Function  ===&lt;br /&gt;
We have now seen that obtaining MLE parameter estimates for different types of data involves incorporating different terms in the likelihood function to account for complete, right censored, and left or interval censored data. After including the terms for the different types of data, the likelihood function can be expressed in its complete form: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
    L= &amp;amp; \underset{i=1}{\mathop{\overset{R}{\mathop{\prod }}\,}}\,f({{T}_{i}};{{\theta }_{1}},...,{{\theta }_{k}})\cdot \underset{j=1}{\mathop{\overset{M}{\mathop{\prod }}\,}}\,[1-F({{S}_{j}};{{\theta }_{1}},...,{{\theta }_{k}})]  \\&lt;br /&gt;
    &amp;amp; \cdot \underset{l=1}{\mathop{\overset{P}{\mathop{\prod }}\,}}\,\left\{ F({{I}_{{{l}_{U}}}};{{\theta }_{1}},...,{{\theta }_{k}})-F({{I}_{{{l}_{L}}}};{{\theta }_{1}},...,{{\theta }_{k}}) \right\}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; L\to L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R}},{{S}_{1}},...,{{S}_{M}},{{I}_{1}},...{{I}_{P}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
*&amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the number of units with exact failures &lt;br /&gt;
*&amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is the number of suspended units &lt;br /&gt;
*&amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is the number of units with left censored or interval times-to-failure &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the parameters of the distribution &lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure&lt;br /&gt;
*&amp;lt;math&amp;gt;{{S}_{j}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; time of suspension&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{U}}}}\,\!&amp;lt;/math&amp;gt; is the ending of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{L}}}}\,\!&amp;lt;/math&amp;gt; is the beginning of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;The total number of units is &amp;lt;math&amp;gt;N = R + M + P\,\!&amp;lt;/math&amp;gt;. It should be noted that in this formulation, if &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is zero, then the product term associated with it is taken to be one, not zero.&lt;br /&gt;
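The complete likelihood can be assembled mechanically from the three product terms. Below is an illustrative NumPy sketch for the 2-parameter Weibull; note how an empty group contributes nothing to the log-likelihood (i.e., a product term of one). The function names and defaults are assumptions for this example, not any vendor's API.

```python
import numpy as np

def weibull_cdf(t, beta, eta):
    return 1.0 - np.exp(-(np.asarray(t, dtype=float) / eta) ** beta)

def weibull_logpdf(t, beta, eta):
    t = np.asarray(t, dtype=float)
    return np.log(beta / eta) + (beta - 1.0) * np.log(t / eta) - (t / eta) ** beta

def complete_loglik(beta, eta, failures=(), suspensions=(), intervals=()):
    """log L = sum log f(T_i) + sum log[1 - F(S_j)] + sum log[F(I_lU) - F(I_lL)].
    An empty group adds 0 to the log-likelihood, i.e., a product term of 1."""
    ll = 0.0
    if len(failures):
        ll += float(np.sum(weibull_logpdf(failures, beta, eta)))
    if len(suspensions):
        # log R(S_j) = -(S_j/eta)^beta
        ll += float(np.sum(-(np.asarray(suspensions, dtype=float) / eta) ** beta))
    for lo, hi in intervals:
        ll += float(np.log(weibull_cdf(hi, beta, eta) - weibull_cdf(lo, beta, eta)))
    return ll

# Invented data mixing all three observation types:
print(complete_loglik(1.2, 500.0, failures=[100.0, 200.0],
                      suspensions=[300.0], intervals=[(150.0, 250.0)]))
```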
&lt;br /&gt;
== Comments on the MLE Method  ==&lt;br /&gt;
The MLE method has many large sample properties that make it attractive for use. It is asymptotically consistent, which means that as the sample size gets larger, the estimates converge to the right values. It is asymptotically efficient, which means that for large samples, it produces the most precise estimates. It is asymptotically unbiased, which means that for large samples, one expects to get the right value on average. The distribution of the estimates themselves is normal, if the sample is large enough, and this is the basis for the usual [[Confidence_Bounds#Fisher_Matrix_Confidence_Bounds|Fisher Matrix Confidence Bounds]] discussed later. These are all excellent large sample properties. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, the size of the sample necessary to achieve these properties can be quite large: thirty to fifty, or even more than a hundred, exact failure times, depending on the application. With fewer points, the method can be badly biased. It is known, for example, that MLE estimates of the shape parameter for the Weibull distribution are badly biased for small sample sizes, and the effect can be increased depending on the amount of censoring. This bias can cause major discrepancies in analysis. There are also pathological situations in which the asymptotic properties of the MLE do not apply. One of these is estimating the location parameter for the three-parameter Weibull distribution when the shape parameter has a value close to 1. These problems, too, can cause major discrepancies. &lt;br /&gt;
&lt;br /&gt;
However, MLE can handle suspensions and interval data better than rank regression, particularly when dealing with a heavily censored data set with few exact failure times or when the censoring times are unevenly distributed. It can also provide estimates with one or no observed failures, which rank regression cannot do. As a rule of thumb, our recommendation is to use rank regression techniques when the sample sizes are small and without heavy censoring (censoring is discussed in [[Life Data Classification|Life Data Classifications]]). When heavy or uneven censoring is present, when a high proportion of interval data is present and/or when the sample size is sufficient, MLE should be preferred. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Appendix:_Maximum_Likelihood_Estimation_Example|Maximum Likelihood Parameter Estimation Example]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Bayesian Parameter Estimation Methods=&lt;br /&gt;
Up to this point, we have dealt exclusively with what is commonly referred to as classical statistics. In this section, another school of thought in statistical analysis will be introduced, namely Bayesian statistics. The premise of Bayesian statistics (within the context of life data analysis) is to incorporate prior knowledge, along with a given set of current observations, in order to make statistical inferences. The prior information could come from operational or observational data, from previous comparable experiments or from engineering knowledge.  This type of analysis can be particularly useful when there is limited test data for a given design or failure mode but there is a strong prior understanding of the failure rate behavior for that design or mode. By incorporating prior information about the parameter(s), a posterior distribution for the parameter(s) can be obtained and inferences on the model parameters and their functions can be made. This section is intended to give a quick and elementary overview of Bayesian methods, focused primarily on the material necessary for understanding the Bayesian analysis methods available in Weibull++. Extensive coverage of the subject can be found in numerous books dealing with Bayesian statistics.&lt;br /&gt;
&lt;br /&gt;
===Bayes’s Rule===&lt;br /&gt;
Bayes’s rule provides the framework for combining prior information with sample data. In this reference, we apply Bayes’s rule to combine prior information on the assumed distribution&#039;s parameter(s) with sample data in order to make inferences based on the model. The prior knowledge about the parameter(s) is expressed in terms of a distribution &amp;lt;math&amp;gt;\varphi (\theta )\,\!&amp;lt;/math&amp;gt;, called the &#039;&#039;prior distribution&#039;&#039;. The &#039;&#039;posterior&#039;&#039; distribution of &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; given the sample data, obtained using Bayes&#039;s rule, provides the updated information about the parameters &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt;. This is expressed with the following posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta |Data) = \frac{L(Data|\theta )\varphi (\theta )}{\int_{\zeta}^{} L(Data|\theta )\varphi(\theta )d\theta}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; is a vector of the parameters of the chosen distribution&lt;br /&gt;
*&amp;lt;math&amp;gt;\zeta\,\!&amp;lt;/math&amp;gt; is the range of &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt; L(Data|\theta)\,\!&amp;lt;/math&amp;gt; is the likelihood function based on the chosen distribution and data&lt;br /&gt;
*&amp;lt;math&amp;gt;\varphi(\theta )\,\!&amp;lt;/math&amp;gt; is the prior distribution for each of the parameters&lt;br /&gt;
&lt;br /&gt;
The integral in the Bayes&#039;s rule equation is often referred to as the marginal probability, which is a constant number that can be interpreted as the probability of obtaining the sample data given a prior distribution. Generally, the integral in the Bayes&#039;s rule equation does not have a closed form solution and numerical methods are needed for its solution.&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Bayes&#039;s rule equation, there is a significant difference between classical and Bayesian statistics. First, the idea of prior information does not exist in classical statistics; all inferences in classical statistics are based on the sample data. In the Bayesian framework, on the other hand, prior information constitutes the basis of the theory. Another difference is in the overall approach of making inferences and their interpretation. For example, in Bayesian analysis the parameters of the distribution to be fitted are treated as random variables. In reality, there is no single distribution fitted to the data in the Bayesian case.&lt;br /&gt;
&lt;br /&gt;
For instance, consider the case where data is obtained from a reliability test. Based on prior experience with a similar product, the analyst believes that the shape parameter of the Weibull distribution has a value between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\beta }_{2}}\,\!&amp;lt;/math&amp;gt; and wants to utilize this information. This can be achieved by using Bayes&#039;s rule. At this point, the analyst is automatically forcing the Weibull distribution as a model for the data, with a shape parameter between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{\beta _2}\,\!&amp;lt;/math&amp;gt;. In this example, the range of values for the shape parameter is the prior distribution, which in this case is uniform. By applying Bayes&#039;s rule, the posterior distribution of the shape parameter is obtained. Thus, we end up with a distribution for the parameter rather than a point estimate of the parameter, as in classical statistics.&lt;br /&gt;
&lt;br /&gt;
To better illustrate the example, assume that a set of failure data was provided along with a distribution for the shape parameter (i.e., uniform prior) of the Weibull (automatically assuming that the data are Weibull distributed). Based on that, a new distribution (the posterior) for that parameter is then obtained using Bayes&#039;s rule. This posterior distribution of the parameter may or may not resemble in form the assumed prior distribution. In other words, in this example the prior distribution of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; was assumed to be uniform but the posterior is most likely not a uniform distribution.&lt;br /&gt;
&lt;br /&gt;
The question now becomes: what is the value of the shape parameter? What about the reliability and other results of interest? In order to answer these questions, we have to remember that in the Bayesian framework all of these metrics are random variables. Therefore, in order to obtain an estimate, a probability needs to be specified or we can use the expected value of the posterior distribution.&lt;br /&gt;
&lt;br /&gt;
In order to demonstrate the procedure of obtaining results from the posterior distribution, we will rewrite the Bayes&#039;s rule equation for a single parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta_1 |Data) = \frac{L(Data|\theta_1 )\varphi (\theta_1 )}{\int_{\zeta}^{} L(Data|\theta_1 )\varphi(\theta_1 )d\theta_1}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The expected value (or mean value) of the parameter &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained using the equation for the mean and the Bayes&#039;s rule equation for single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E({\theta _1}) = {m_{{\theta _1}}} = \int_{\zeta}^{}{\theta _1} \cdot f({\theta _1}|Data)d{\theta _1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative result for &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; would be the median value. Using the equation for the median and the Bayes&#039;s rule equation for a single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{\theta }_{0.5}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation for the median is solved for &amp;lt;math&amp;gt;{\theta _{0.5}}\,\!&amp;lt;/math&amp;gt;, the median value of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Similarly, any other percentile of the posterior &#039;&#039;pdf&#039;&#039; can be calculated and reported. For example, one could calculate the 90th percentile of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;’s posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{{\theta }_{0.9}}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.9\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
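When these integrals have no closed form, the posterior summaries can be approximated numerically on a grid. The sketch below assumes a Weibull shape parameter with a uniform prior and a known scale (held fixed purely to keep the example one-dimensional); the failure times and grid limits are invented for illustration.

```python
import numpy as np

# Invented failure times; eta treated as known to keep the example one-dimensional.
failures = np.array([120.0, 180.0, 250.0, 310.0, 400.0])
eta = 300.0

# Uniform prior on the Weibull shape parameter beta over [0.5, 4.0].
beta = np.linspace(0.5, 4.0, 2000)
d = beta[1] - beta[0]

def loglik(b):
    return np.sum(np.log(b / eta) + (b - 1.0) * np.log(failures / eta)
                  - (failures / eta) ** b)

ll = np.array([loglik(b) for b in beta])
post = np.exp(ll - ll.max())       # posterior is proportional to likelihood x prior
post /= post.sum() * d             # divide by the marginal to normalize

mean = (beta * post).sum() * d                  # expected value E(beta | Data)
cdf = np.cumsum(post) * d
median = beta[np.searchsorted(cdf, 0.5)]        # median value of beta
p90 = beta[np.searchsorted(cdf, 0.9)]           # 90th percentile of the posterior
print(mean, median, p90)
```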
&lt;br /&gt;
This calculation will be used in [[Confidence Bounds]] and [[The Weibull Distribution]] for obtaining confidence bounds on the parameter(s).&lt;br /&gt;
&lt;br /&gt;
The next step will be to make inferences on the reliability. Since the parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; is a random variable described by the posterior &#039;&#039;pdf,&#039;&#039; all subsequent functions of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; are distributed random variables as well and are entirely based on the posterior &#039;&#039;pdf&#039;&#039; of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt;. Therefore, expected value, median or other percentile values will also need to be calculated. For example, the expected reliability at time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E[R(T|Data)] = \int_{\zeta}^{} R(T)f(\theta |Data)d{\theta}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, at a given time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, there is a distribution that governs the reliability value at that time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, and by using Bayes&#039;s rule, the expected (or mean) value of the reliability is obtained. Other percentiles of this distribution can also be obtained.&lt;br /&gt;
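This expected-reliability integral can be sketched with the same grid approach, again using an invented data set and a grid posterior for the shape parameter with the scale held fixed (an assumption made only to keep the example one-dimensional).

```python
import numpy as np

# Illustrative setup: grid posterior for beta with eta held fixed.
failures = np.array([120.0, 180.0, 250.0, 310.0, 400.0])
eta, T = 300.0, 200.0
beta = np.linspace(0.5, 4.0, 2000)
d = beta[1] - beta[0]

ll = np.array([np.sum(np.log(b / eta) + (b - 1.0) * np.log(failures / eta)
                      - (failures / eta) ** b) for b in beta])
post = np.exp(ll - ll.max())
post /= post.sum() * d                # normalized posterior pdf on the grid

R_T = np.exp(-(T / eta) ** beta)      # R(T | beta) evaluated across the grid
expected_R = (R_T * post).sum() * d   # E[R(T) | Data]
print(expected_R)
```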
A similar procedure is followed for other functions of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;, such as failure rate, reliable life, etc.&lt;br /&gt;
&lt;br /&gt;
===Prior Distributions===&lt;br /&gt;
Prior distributions play a very important role in Bayesian statistics; they are essentially the basis of Bayesian analysis. Different types of prior distributions exist, namely &#039;&#039;informative&#039;&#039; and &#039;&#039;non-informative&#039;&#039;. Non-informative prior distributions (a.k.a. &#039;&#039;vague&#039;&#039;, &#039;&#039;flat&#039;&#039; and &#039;&#039;diffuse&#039;&#039;) are distributions that have no population basis and play a minimal role in the posterior distribution. The idea behind the use of non-informative prior distributions is to make inferences that are not greatly affected by external information, or to proceed when external information is not available. The uniform distribution is frequently used as a non-informative prior.&lt;br /&gt;
&lt;br /&gt;
On the other hand, informative priors have a stronger influence on the posterior distribution. The influence of the prior distribution on the posterior is related to the sample size of the data and the form of the prior. Generally speaking, large sample sizes are required to modify strong priors, whereas weak priors are overwhelmed by even relatively small sample sizes. Informative priors are typically obtained from past data.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=57243</id>
		<title>Parameter Estimation</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=57243"/>
		<updated>2015-02-25T20:46:40Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Rank Methods for Censored Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|4|Parameter Estimation}}&lt;br /&gt;
The term &#039;&#039;parameter estimation&#039;&#039; refers to the process of using sample data (in reliability engineering, usually times-to-failure or success data) to estimate the parameters of the selected distribution. Several parameter estimation methods are available. This section presents an overview of the available methods used in life data analysis. More specifically, we start with the relatively simple method of Probability Plotting and continue with the more sophisticated methods of Rank Regression (or Least Squares), Maximum Likelihood Estimation and Bayesian Estimation Methods.&lt;br /&gt;
&lt;br /&gt;
=Probability Plotting=&lt;br /&gt;
The least mathematically intensive method for parameter estimation is the method of probability plotting. As the term implies, probability plotting involves a physical plot of the data on specially constructed &#039;&#039;probability plotting paper&#039;&#039;. This method is easily implemented by hand, given that one can obtain the appropriate probability plotting paper.&lt;br /&gt;
&lt;br /&gt;
The method of probability plotting takes the &#039;&#039;cdf&#039;&#039; of the distribution and attempts to linearize it by employing a specially constructed paper. The following sections illustrate the steps in this method using the 2-parameter Weibull distribution as an example. This includes:&lt;br /&gt;
&lt;br /&gt;
*Linearize the unreliability function&lt;br /&gt;
*Construct the probability plotting paper&lt;br /&gt;
*Determine the X and Y positions of the plot points&lt;br /&gt;
&lt;br /&gt;
And then using the plot to read any particular time or reliability/unreliability value of interest.&lt;br /&gt;
&lt;br /&gt;
==Linearizing the Unreliability Function==&lt;br /&gt;
&lt;br /&gt;
In the case of the 2-parameter Weibull, the &#039;&#039;cdf&#039;&#039; (also the unreliability &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt;) is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(t)=Q(t)=1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function can then be linearized (i.e., put in the common form of &amp;lt;math&amp;gt;y = mx + b\,\!&amp;lt;/math&amp;gt;) as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 Q(t)= &amp;amp;  1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}  \\&lt;br /&gt;
  \ln (1-Q(t))= &amp;amp; \ln \left[ {e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}} \right]  \\&lt;br /&gt;
  \ln (1-Q(t))=&amp;amp; -\left(\tfrac{t}{\eta}\right)^{\beta}  \\&lt;br /&gt;
  \ln ( -\ln (1-Q(t)))= &amp;amp; \beta \left(\ln \left( \frac{t}{\eta }\right)\right) \\&lt;br /&gt;
  \ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right) = &amp;amp; \beta \ln \left( t \right) -\beta \ln \left( \eta  \right)  \\&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then by setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x=\ln \left( t \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equation can then be rewritten as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\beta x-\beta \ln \left( \eta  \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is now a linear equation with a slope of: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
m = \beta&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and an intercept of:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=-\beta \cdot ln(\eta)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
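As an illustrative sketch (not ReliaSoft's implementation), the linearization above can be exercised numerically: transform hypothetical failure times and their median-rank unreliability estimates into (x, y) pairs, fit a straight line by least squares, then read the shape parameter from the slope and the scale parameter from the intercept. The failure times below are assumed values for illustration only.

```python
# Sketch: estimate beta and eta from the linearized Weibull cdf,
#   y = ln(-ln(1 - Q(t))) = beta*ln(t) - beta*ln(eta).
# Failure times and Benard median ranks are illustrative values.
import math

times = [10.0, 20.0, 30.0, 40.0]                       # hypothetical failure times
n = len(times)
mr = [(j - 0.3) / (n + 0.4) for j in range(1, n + 1)]  # Benard's approximation

xs = [math.log(t) for t in times]
ys = [math.log(-math.log(1.0 - q)) for q in mr]

# Ordinary least squares for slope m and intercept b of y = m*x + b
xbar = sum(xs) / n
ybar = sum(ys) / n
m = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
b = ybar - m * xbar

beta = m                   # slope is the shape parameter
eta = math.exp(-b / beta)  # intercept b = -beta*ln(eta)
print(beta, eta)
```

For these assumed times the fit gives a shape parameter near 1.7 and a characteristic life near 29 hours, exactly the quantities one would read off a hand-drawn probability plot.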
&lt;br /&gt;
==Constructing the Paper==&lt;br /&gt;
The next task is to construct the Weibull probability plotting paper with the appropriate y and x axes. The x-axis transformation is simply logarithmic. The y-axis is a bit more complex, requiring a double log reciprocal transformation, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt; is the unreliability. &lt;br /&gt;
&lt;br /&gt;
Such papers have been created by different vendors and are called &#039;&#039;probability plotting papers&#039;&#039;. ReliaSoft&#039;s reliability engineering resource website at www.weibull.com has different plotting papers available for [http://www.weibull.com/GPaper/index.htm download]. &lt;br /&gt;
&lt;br /&gt;
[[Image:WeibullPaper2C.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
To illustrate, consider the following probability plot on a slightly different type of Weibull probability paper. &lt;br /&gt;
&lt;br /&gt;
[[Image:different_weibull_paper.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
This paper is constructed based on the y and x transformations described above, where the y-axis represents unreliability and the x-axis represents time. Both of these values must be known for each time-to-failure point we want to plot. &lt;br /&gt;
&lt;br /&gt;
Then, given the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; value for each point, the points can easily be put on the plot. Once the points have been placed on the plot, the best possible straight line is drawn through these points. Once the line has been drawn, its slope can be obtained (some probability papers include a slope indicator to simplify this calculation); this slope is the parameter &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;. To determine the scale parameter, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; (also called the &#039;&#039;characteristic life&#039;&#039;), one reads the time from the x-axis corresponding to &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Note that at:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Q(t=\eta)= &amp;amp; 1-{{e}^{-{{\left( \tfrac{\eta }{\eta } \right)}^{\beta }}}} \\ &lt;br /&gt;
  = &amp;amp; 1-{{e}^{-1}} \\ &lt;br /&gt;
  = &amp;amp; 0.632 \\ &lt;br /&gt;
  = &amp;amp; 63.2%  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, if we enter the &#039;&#039;y&#039;&#039; axis at &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;, the corresponding value of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; will be equal to &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;. Thus, using this simple methodology, the parameters of the Weibull distribution can be estimated.&lt;br /&gt;
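A quick numerical check of this property: at t equal to eta the exponent is -1 regardless of the value of beta, so the unreliability is always 63.2%.

```python
# Q(eta) = 1 - exp(-(eta/eta)**beta) = 1 - exp(-1), independent of beta
import math

for beta in (0.5, 1.0, 3.0):
    q = 1 - math.exp(-(1.0) ** beta)   # (eta/eta)**beta == 1 for any beta
    assert abs(q - 0.6321) < 0.0005
print(round(1 - math.exp(-1), 4))
```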
&lt;br /&gt;
==Determining the X and Y Position of the Plot Points==&lt;br /&gt;
The points on the plot represent our data or, more specifically, our times-to-failure data. If, for example, we tested four units that failed at 10, 20, 30 and 40 hours, then we would use these times as our &#039;&#039;x&#039;&#039; values or time values. &lt;br /&gt;
&lt;br /&gt;
Determining the appropriate &#039;&#039;y&#039;&#039; plotting positions, or the unreliability values, is a little more complex. To determine the &#039;&#039;y&#039;&#039; plotting positions, we must first determine a value indicating the corresponding unreliability for that failure. In other words, we need to obtain the cumulative percent failed for each time-to-failure. For example, the cumulative percent failed by 10 hours may be 25%, by 20 hours 50%, and so forth. This is a simple method illustrating the idea. The problem with this simple method is the fact that the 100% point is not defined on most probability plots; thus, an alternative and more robust approach must be used. The most widely used method of determining this value is the method of obtaining the &#039;&#039;median rank&#039;&#039; for each failure, as discussed next.&lt;br /&gt;
&lt;br /&gt;
===Median Ranks ===&lt;br /&gt;
The Median Ranks method is used to obtain an estimate of the unreliability for each failure. The median rank is the value that the true probability of failure, &amp;lt;math&amp;gt;Q({{T}_{j}})\,\!&amp;lt;/math&amp;gt;, should have at the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure out of a sample of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units at the 50% confidence level. &lt;br /&gt;
&lt;br /&gt;
The rank can be found for any percentage point, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, greater than zero and less than one, by solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;. This represents the rank, or unreliability estimate, for the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure in the following equation for the cumulative binomial: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the sample size and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; the order number. &lt;br /&gt;
&lt;br /&gt;
The median rank is obtained by solving this equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  at &amp;lt;math&amp;gt;P = 0.50\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;0.50=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; and we have four failures, we would solve the median rank equation for the value of &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; four times; once for each failure with &amp;lt;math&amp;gt;j= 1, 2, 3 \text{ and }4\,\!&amp;lt;/math&amp;gt;. This result can then be used as the unreliability estimate for each failure or the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; plotting position. (See also [[The Weibull Distribution|The Weibull Distribution]]&amp;amp;nbsp;for a step-by-step example of this method.) The solution of the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; requires the use of numerical methods.&lt;br /&gt;
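One such numerical method is simple bisection, since the cumulative binomial is monotonically increasing in Z. The sketch below (with N = 4 as in the example) solves for the median rank of each of the four failures; the bisection routine is an illustration, not ReliaSoft's algorithm.

```python
# Sketch: solve the cumulative binomial equation for the median rank Z
# (P = 0.50) by bisection; N = 4 as in the example above.
import math

def cum_binom(z, n, j):
    # P = sum_{k=j}^{n} C(n,k) z^k (1-z)^(n-k), increasing in z
    return sum(math.comb(n, k) * z**k * (1 - z)**(n - k)
               for k in range(j, n + 1))

def median_rank(n, j, p=0.5, tol=1e-10):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cum_binom(mid, n, j) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Median ranks for the four failures out of N = 4 units
print([round(median_rank(4, j), 4) for j in range(1, 5)])
```

For N = 4 this yields approximately 0.1591, 0.3857, 0.6143 and 0.8409, the y plotting positions for the four failures.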
&lt;br /&gt;
===Beta and F Distributions Approach===&lt;br /&gt;
A more straightforward method of estimating median ranks is to apply two transformations to the cumulative binomial equation, first to the beta distribution and then to the F distribution, resulting in [[Appendix:_Life_Data_Analysis_References|[12, 13]]]: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   MR &amp;amp; = &amp;amp; \tfrac{1}{1+\tfrac{N-j+1}{j}{{F}_{0.50;m;n}}}  \\&lt;br /&gt;
   m &amp;amp; = &amp;amp; 2(N-j+1)  \\&lt;br /&gt;
   n &amp;amp; = &amp;amp; 2j  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{F}_{0.50;m;n}}\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution at the 0.50 point, with &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; degrees of freedom, for failure &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; out of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units.&lt;br /&gt;
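A sketch of this transformation, with N and j as illustrative values. To keep the example self-contained (no statistics library), the 0.50 quantile of the F distribution is obtained from the median of a beta variable via the standard identity that if X ~ F(m, n) then mX/(mX + n) ~ Beta(m/2, n/2); since m/2 and n/2 are integers here, the beta median can itself be found from the binomial form of the incomplete beta function.

```python
# Sketch of the beta/F transformation for median ranks.
#   X ~ F(m, n)  <=>  (m X)/(m X + n) ~ Beta(m/2, n/2)
# so F_{0.50;m;n} = n*B / (m*(1 - B)), where B is the Beta(m/2, n/2) median.
import math

def beta_median(a, b, tol=1e-12):
    # Median of Beta(a, b) for integer shapes, using
    # I_x(a, b) = P(Binomial(a+b-1, x) >= a)
    n = a + b - 1
    def cdf(x):
        return sum(math.comb(n, k) * x**k * (1 - x)**(n - k)
                   for k in range(a, n + 1))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if cdf(mid) < 0.5 else (lo, mid)
    return 0.5 * (lo + hi)

def median_rank_F(N, j):
    m, n = 2 * (N - j + 1), 2 * j
    B = beta_median(m // 2, n // 2)     # Beta(m/2, n/2) median
    F50 = n * B / (m * (1 - B))         # F distribution 0.50 quantile
    return 1.0 / (1.0 + (N - j + 1) / j * F50)

print([round(median_rank_F(4, j), 4) for j in range(1, 5)])
```

For N = 4 this reproduces the same median ranks as solving the cumulative binomial directly.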
&lt;br /&gt;
=== Benard&#039;s Approximation for Median Ranks  ===&lt;br /&gt;
Another quick, though less accurate, approximation of the median ranks is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MR = \frac{{j - 0.3}}{{N + 0.4}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This approximation of the median ranks is also known as &#039;&#039;Benard&#039;s approximation&#039;&#039;.&lt;br /&gt;
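For N = 4, Benard's approximation can be compared against the exact median ranks (the exact values below are quoted to four decimals for illustration); the two agree to within about 0.001.

```python
# Benard's approximation MR = (j - 0.3) / (N + 0.4) vs. exact median
# ranks for N = 4 (exact values quoted to four decimals).
N = 4
benard = [(j - 0.3) / (N + 0.4) for j in range(1, N + 1)]
exact = [0.1591, 0.3857, 0.6143, 0.8409]
for b, e in zip(benard, exact):
    print(round(b, 4), e)
```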
&lt;br /&gt;
===Kaplan-Meier===&lt;br /&gt;
The Kaplan-Meier estimator (also known as the &#039;&#039;product limit estimator&#039;&#039;) is used as an alternative to the median ranks method for calculating the estimates of the unreliability for probability plotting purposes. The equation of the estimator is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{F}({{t}_{i}})=1-\underset{j=1}{\overset{i}{\mathop \prod }}\,\frac{{{n}_{j}}-{{r}_{j}}}{{{n}_{j}}},\text{ }i=1,...,m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  m =  &amp;amp; {\text{total number of data points}} \\ &lt;br /&gt;
  n =  &amp;amp; {\text{the total number of units}} \\ &lt;br /&gt;
  {n_i} =  &amp;amp; n - \sum_{j = 0}^{i - 1}{s_j} - \sum_{j = 0}^{i - 1}{r_j}, \text{i = 1,...,m }\\ &lt;br /&gt;
  {r_j} =  &amp;amp; {\text{ number of failures in the }}{j^{th}}{\text{ data group, and}} \\ &lt;br /&gt;
  {s_j} =  &amp;amp; {\text{number of surviving units in the }}{j^{th}}{\text{ data group}} \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
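The product limit estimate can be sketched as a running survival product, where each data group removes its failures and suspensions from the at-risk count. The group counts below are hypothetical values for illustration.

```python
# Sketch of the Kaplan-Meier (product limit) unreliability estimate.
# r[i] failures and s[i] suspensions per data group; counts are
# illustrative.
n_total = 10
r = [2, 1, 2]       # failures in each group (hypothetical)
s = [1, 0, 2]       # suspensions in each group (hypothetical)

F_hat = []
at_risk = n_total   # n_i: units still at risk entering group i
surv = 1.0
for r_i, s_i in zip(r, s):
    surv *= (at_risk - r_i) / at_risk   # product-limit survival update
    F_hat.append(1.0 - surv)            # F(t_i) = 1 - product
    at_risk -= r_i + s_i                # remove failures and suspensions
print([round(F, 4) for F in F_hat])
```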
&lt;br /&gt;
== Probability Plotting Example  ==&lt;br /&gt;
This same methodology can be applied to other distributions with &#039;&#039;cdf&#039;&#039; equations that can be linearized. Different probability papers exist for each distribution, because different distributions have different &#039;&#039;cdf&#039;&#039; equations. ReliaSoft&#039;s software tools automatically create these plots for you. Special scales on these plots allow you to derive the parameter estimates directly from the plots, similar to the way &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; were obtained from the Weibull probability plot. The following example demonstrates the method again, this time using the 1-parameter exponential distribution.&lt;br /&gt;
&lt;br /&gt;
{{:Probability Plotting Example}}&lt;br /&gt;
&lt;br /&gt;
== Comments on the Probability Plotting Method ==&lt;br /&gt;
Besides the most obvious drawback of probability plotting, which is the amount of effort required, manual probability plotting does not always produce consistent results. Two people plotting a straight line through a set of points will not always draw this line the same way, and thus will come up with slightly different results. This method was used primarily before the widespread availability of computers that could easily perform the calculations for more complicated parameter estimation methods, such as the least squares and maximum likelihood methods.&lt;br /&gt;
&lt;br /&gt;
= Least Squares (Rank Regression)  =&lt;br /&gt;
Using the idea of probability plotting, regression analysis mathematically fits the best straight line to a set of points, in an attempt to estimate the parameters. Essentially, this is a mathematically based version of the probability plotting method discussed previously. &lt;br /&gt;
&lt;br /&gt;
The method of linear least squares is used for all regression analysis performed by Weibull++, except for the cases of the 3-parameter Weibull, mixed Weibull, gamma and generalized gamma distributions, where a non-linear regression technique is employed. The terms &#039;&#039;linear regression&#039;&#039; and &#039;&#039;least squares&#039;&#039; are used synonymously in this reference. In Weibull++, the term &#039;&#039;rank regression&#039;&#039; is used instead of least squares, or linear regression, because the regression is performed on the rank values, more specifically, the median rank values (represented on the y-axis). The method of least squares requires that a straight line be fitted to a set of data points, such that the sum of the squares of the distances of the points to the fitted line is minimized. This minimization can be performed in either the vertical or horizontal direction. If the regression is on &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the horizontal deviations from the points to the line are minimized. If the regression is on &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt;, then the vertical deviations from the points to the line are minimized. This is illustrated in the following figure. &lt;br /&gt;
&lt;br /&gt;
[[Image:minimizingdistance.png|center|500px]]&lt;br /&gt;
&lt;br /&gt;
=== Rank Regression on Y  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;-values are known exactly. Then, according to the &#039;&#039;least squares principle,&#039;&#039; which minimizes the vertical distance between the data points and the straight line fitted to the data, the best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;y=\hat{a}+\hat{b}x\,\!&amp;lt;/math&amp;gt; (where the recently introduced (&amp;lt;math&amp;gt;\hat{ }\,\!&amp;lt;/math&amp;gt;) symbol indicates that this value is an estimate) such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\sum\limits_{i=1}^{N}{{{\left( \hat{a}+\hat{b}{{x}_{i}}-{{y}_{i}} \right)}^{2}}=\min \sum\limits_{i=1}^{N}{{{\left( a+b{{x}_{i}}-{{y}_{i}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and where &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat b\,\!&amp;lt;/math&amp;gt; are the &#039;&#039;least squares estimates&#039;&#039; of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}=\bar{y}-\hat{b}\bar{x}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
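These two estimator formulas translate directly into code; the (x, y) pairs below are illustrative values only.

```python
# Rank regression on Y: least squares estimates written exactly as the
# sum formulas above. The (x, y) pairs are illustrative.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
N = len(xs)

Sx, Sy = sum(xs), sum(ys)
Sxy = sum(x * y for x, y in zip(xs, ys))
Sxx = sum(x * x for x in xs)

b_hat = (Sxy - Sx * Sy / N) / (Sxx - Sx**2 / N)   # slope estimate
a_hat = Sy / N - b_hat * Sx / N                   # intercept: ybar - b*xbar
print(round(a_hat, 4), round(b_hat, 4))
```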
&lt;br /&gt;
=== Rank Regression on X  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the y-values are known exactly. The same least squares principle is applied, but this time, minimizing the horizontal distance between the data points and the straight line fitted to the data. The best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;x=\widehat{a}+\widehat{b}y\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(\widehat{a}+\widehat{b}{{y}_{i}}-{{x}_{i}})}^{2}}=\min \underset{i=1}{\overset{N}{\mathop \sum }}\,{{(a+b{{y}_{i}}-{{x}_{i}})}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat b\,\!&amp;lt;/math&amp;gt; are the least squares estimates of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}=\bar{x}-\hat{b}\bar{y}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
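The mirror-image computation, fitting x = a + b*y by minimizing horizontal deviations, can be sketched the same way; the (x, y) pairs below are illustrative values only.

```python
# Rank regression on X: minimize the horizontal deviations and fit
# x = a + b*y, using the sum formulas above. The pairs are illustrative.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
N = len(xs)

Sx, Sy = sum(xs), sum(ys)
Sxy = sum(x * y for x, y in zip(xs, ys))
Syy = sum(y * y for y in ys)

b_hat = (Sxy - Sx * Sy / N) / (Syy - Sy**2 / N)   # slope of x on y
a_hat = Sx / N - b_hat * Sy / N                   # intercept: xbar - b*ybar
print(round(a_hat, 4), round(b_hat, 4))
```

Note that RRX and RRY generally give slightly different fitted lines, since they minimize deviations in different directions.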
&lt;br /&gt;
The corresponding relations for determining the parameters for specific distributions (i.e., Weibull, exponential, etc.), are presented in the chapters covering that distribution.&lt;br /&gt;
&lt;br /&gt;
=== Correlation Coefficient  ===&lt;br /&gt;
The correlation coefficient is a measure of how well the linear regression model fits the data and is usually denoted by &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt;. In the case of life data analysis, it is a measure for the strength of the linear relation (correlation) between the median ranks and the data. The population correlation coefficient is defined as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\rho =\frac{{{\sigma }_{xy}}}{{{\sigma }_{x}}{{\sigma }_{y}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\sigma}_{xy}} = \,\!&amp;lt;/math&amp;gt; covariance of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\sigma}_{x}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{\sigma}_{y}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The estimator of &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt; is the sample correlation coefficient, &amp;lt;math&amp;gt;\hat{\rho }\,\!&amp;lt;/math&amp;gt;, given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\sqrt{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N} \right)\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The range of &amp;lt;math&amp;gt;\hat \rho \,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;-1\le \hat{\rho }\le 1\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
[[Image:correlationcoeffficient.png|center|500px]] &lt;br /&gt;
&lt;br /&gt;
The closer the value is to &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt;, the better the linear fit. Note that +1 indicates a perfect fit (the paired values (&amp;lt;math&amp;gt;{{x}_{i}},{{y}_{i}}\,\!&amp;lt;/math&amp;gt;) lie on a straight line) with a positive slope, while -1 indicates a perfect fit with a negative slope. A correlation coefficient value of zero would indicate that the data are randomly scattered and have no pattern or correlation in relation to the regression line model.&lt;br /&gt;
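The sample correlation coefficient formula can also be computed directly; the (x, y) pairs below are illustrative, and the near-linear data give a value close to +1.

```python
# Sample correlation coefficient from the sum formula above.
# The (x, y) pairs are illustrative.
import math

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
N = len(xs)

Sx, Sy = sum(xs), sum(ys)
Sxy = sum(x * y for x, y in zip(xs, ys))
Sxx, Syy = sum(x * x for x in xs), sum(y * y for y in ys)

rho_hat = (Sxy - Sx * Sy / N) / math.sqrt(
    (Sxx - Sx**2 / N) * (Syy - Sy**2 / N))
print(round(rho_hat, 4))
```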
&lt;br /&gt;
===Comments on the Least Squares Method===&lt;br /&gt;
The least squares estimation method is quite good for functions that can be linearized. For these distributions, the calculations are relatively easy and straightforward, having closed-form solutions that can readily yield an answer without having to resort to numerical techniques or tables. Furthermore, this technique provides a good measure of the goodness-of-fit of the chosen distribution in the correlation coefficient. Least squares is generally best used with data sets containing complete data, that is, data consisting only of single times-to-failure with no censored or interval data. (See [[Life Data Classification]] for information about the different data types, including complete, left censored, right censored (or suspended) and interval data.) &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Least Squares/Rank Regression Equations]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Rank Methods for Censored Data=&lt;br /&gt;
All available data should be considered in the analysis of times-to-failure data. This includes the case when a particular unit in a sample has been removed from the test prior to failure. An item, or unit, which is removed from a reliability test prior to failure, or a unit which is in the field and is still operating at the time the reliability of these units is to be determined, is called a &#039;&#039;suspended item&#039;&#039;, &#039;&#039;right censored observation&#039;&#039; or &#039;&#039;right censored data point&#039;&#039;. Suspended items analysis would also be considered when: &lt;br /&gt;
&lt;br /&gt;
#We need to make an analysis of the available results before test completion. &lt;br /&gt;
#The failure modes which are occurring are different than those anticipated and such units are withdrawn from the test. &lt;br /&gt;
#We need to analyze a single mode and the actual data set comprises multiple modes. &lt;br /&gt;
#A &#039;&#039;warranty analysis&#039;&#039; is to be made of all units in the field (non-failed and failed units). The non-failed units are considered to be suspended items (or right censored).&lt;br /&gt;
&lt;br /&gt;
This section describes the rank methods that are used in both probability plotting and least squares (rank regression) to handle censored data. This includes:&lt;br /&gt;
&lt;br /&gt;
*The rank adjustment method for right censored (suspension) data.&lt;br /&gt;
*ReliaSoft&#039;s alternative ranking method for censored data including left censored, right censored, and interval data.&lt;br /&gt;
=== Rank Adjustment Method for Right Censored Data ===&lt;br /&gt;
When using the probability plotting or least squares (rank regression) method for data sets where some of the units did not fail, or were suspended, we need to adjust their probability of failure, or unreliability. As discussed before, estimates of the unreliability for complete data are obtained using the median ranks approach. The following methodology illustrates how adjusted median ranks are computed to account for right censored data. To better illustrate the methodology, consider the following example in Kececioglu [[Appendix:_Life_Data_Analysis_References|&amp;amp;nbsp;[20]]] where five items are tested resulting in three failures and two suspensions. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Item Number &amp;lt;br&amp;gt;(Position) &lt;br /&gt;
! Failure (F) &amp;lt;br&amp;gt;or Suspension (S) &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 5,100&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,500&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 15,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 22,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 40,000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The methodology for plotting suspended items involves adjusting the rank positions and plotting the data based on new positions, determined by the location of the suspensions. If we consider these five units, the following methodology would be used: The first item must be the first failure; hence, it is assigned failure order number &amp;lt;math&amp;gt;j = 1\,\!&amp;lt;/math&amp;gt;. The actual failure order number (or position) of the second failure, &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; is in doubt. It could either be in position 2 or in position 3. Had &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; not been withdrawn from the test at 9,500 hours, it could have operated successfully past 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 2. Alternatively, &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; could also have failed before 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 3. In this case, the failure order number for &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; will be some number between 2 and 3. To determine this number, consider the following: &lt;br /&gt;
&lt;br /&gt;
We can find the number of ways the second failure can occur in either order number 2 (position 2) or order number 3 (position 3). The possible ways are listed next. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;6&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 2 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 3&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 4 &lt;br /&gt;
| 5 &lt;br /&gt;
| 6 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be seen that &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; can occur in the second position six ways and in the third position two ways. The most probable position is the average of these possible ways, or the &#039;&#039;mean order number&#039;&#039; (MON), given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}=MO{{N}_{2}}=\frac{(6\times 2)+(2\times 3)}{6+2}=2.25\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
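The mean order number is just a count-weighted average of the candidate positions, which can be checked directly:

```python
# Mean order number for F2 in the example: 6 ways to occur in
# position 2 and 2 ways to occur in position 3.
ways = {2: 6, 3: 2}
mon = sum(pos * n for pos, n in ways.items()) / sum(ways.values())
print(mon)   # 2.25
```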
&lt;br /&gt;
&amp;lt;br&amp;gt;Using the same logic on the third failure, it can occur in position 3, 4 or 5, in the possible ways listed next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 3 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 4&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 5&lt;br /&gt;
|-&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Then, the mean order number for the third failure (item 5) is: &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=\frac{(2\times 3)+(3\times 4)+(3\times 5)}{2+3+3}=4.125\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Once the mean order number for each failure has been established, we obtain the median rank positions for these failures at their mean order number. Specifically, we obtain the median rank of the order numbers 1, 2.25 and 4.125 out of a sample size of 5, as given next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Plotting Positions for the Failures (Sample Size=5)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Failure Number &lt;br /&gt;
! MON &lt;br /&gt;
! Median Rank Position(%)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1:&amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1 &lt;br /&gt;
| 13%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2:&amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 2.25 &lt;br /&gt;
| 36%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3:&amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 4.125 &lt;br /&gt;
| 71%&lt;br /&gt;
|}&lt;br /&gt;
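The median rank positions at these fractional order numbers can be reproduced in code. The table values (13%, 36%, 71%) agree with Benard's approximation, MR ≈ (MON − 0.3)/(N + 0.4); the sketch below uses that approximation as a stand-in (the function name is ours, and the exact table values may come from the incomplete beta function rather than this approximation):

```python
def benard_median_rank(mon, n):
    """Benard's approximation to the median rank for a (possibly
    fractional) mean order number `mon` in a sample of size `n`."""
    return (mon - 0.3) / (n + 0.4)

# The three mean order numbers from the table above, sample size 5
for mon in (1, 2.25, 4.125):
    print(round(benard_median_rank(mon, 5) * 100), "%")  # 13 %, 36 %, 71 %
```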
&lt;br /&gt;
&lt;br /&gt;
Once the median rank values have been obtained, the probability plotting analysis is identical to that presented before. As you might have noticed, this methodology is rather laborious. Other techniques and shortcuts have been developed over the years to streamline this procedure. For more details on this method, see Kececioglu [[Appendix:_Life_Data_Analysis_References|[20]]]. Here, we will introduce one of these methods. This method calculates MON using an increment, &#039;&#039;I&#039;&#039;, which is defined by:&lt;br /&gt;
&lt;br /&gt;
:: &amp;lt;math&amp;gt;{{I}_{i}}=\frac{N+1-PMON}{1+NIBPSS}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
* &#039;&#039;N&#039;&#039;= the sample size, or total number of items in the test&lt;br /&gt;
* &#039;&#039;PMON&#039;&#039; = previous mean order number&lt;br /&gt;
* &#039;&#039;NIBPSS&#039;&#039; = the number of items beyond the present suspended set, that is, the number of units (including the current failure and all later failures and suspensions) still on test at the current failure time&lt;br /&gt;
* &#039;&#039;i&#039;&#039; = the ith failure item&lt;br /&gt;
&lt;br /&gt;
MON is given as:&lt;br /&gt;
 &lt;br /&gt;
:: &amp;lt;math&amp;gt;MO{{N}_{i}}=MO{{N}_{i-1}}+{{I}_{i}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s recalculate the previous example using this method.&lt;br /&gt;
&lt;br /&gt;
For F1:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{1}}=MO{{N}_{0}}+{{I}_{1}}=\frac{5+1-0}{1+5}=1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For F2:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{2}}=MO{{N}_{1}}+{{I}_{2}}=1+\frac{5+1-1}{1+3}=2.25&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For F3:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=MO{{N}_{2}}+{{I}_{3}}=2.25+\frac{5+1-2.25}{1+1}=4.125&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The MON obtained for each failure via this method is the same as that obtained from the first method, so the median rank values will also be the same.&lt;br /&gt;
&lt;br /&gt;
For grouped data, the increment &amp;lt;math&amp;gt;{{I}_{i}}&amp;lt;/math&amp;gt; at each failure group is multiplied by the number of failures in that group. &lt;br /&gt;
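The increment calculations above can be sketched in code. The following minimal illustration (not from the source; the function name is hypothetical) walks the event sequence implied by the worked increments (F1, S1, F2, S2, F3 out of a sample of 5) and accumulates the mean order numbers:

```python
def mean_order_numbers(events, n):
    """Mean order numbers (MON) for the failures in a censored sample.

    events: the "F"/"S" states in time order; n: the sample size.
    Uses the increment I_i = (N + 1 - PMON) / (1 + NIBPSS), where NIBPSS
    counts the units (current failure included) still on test.
    """
    mons = []
    pmon = 0.0
    for position, state in enumerate(events):
        if state == "F":
            nibpss = n - position              # units remaining at this failure
            pmon += (n + 1 - pmon) / (1 + nibpss)
            mons.append(pmon)
    return mons

# Event order implied by the worked increments: F1, S1, F2, S2, F3
print(mean_order_numbers(["F", "S", "F", "S", "F"], 5))  # [1.0, 2.25, 4.125]
```

For a complete (uncensored) sample the increments all equal one, so the MONs reduce to the ordinary order numbers 1, 2, 3, ...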
 &lt;br /&gt;
==== Shortfalls of the Rank Adjustment Method  ====&lt;br /&gt;
Even though the rank adjustment method is the most widely used method for performing analysis of suspended items, we would like to point out the following shortcoming. As you may have noticed, only the position where the failure occurred is taken into account, and not the exact time-to-suspension. For example, this methodology would yield the exact same results for the next two cases. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 1 &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of Item, hr &lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of Item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000 &lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,100 &lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,700&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,200 &lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,800&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,300 &lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,900&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000 &lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This shortfall is significant when the number of failures is small and the number of suspensions is large and not spread uniformly between failures, as with these data. In cases like this, it is highly recommended to use maximum likelihood estimation (MLE) to estimate the parameters instead of least squares, because MLE does not look at ranks or plotting positions, but rather considers each unique time-to-failure or suspension. For the data given above, the results are as follows. The estimated parameters using the rank adjustment method just described are the same for both cases (1 and 2): &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.81}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{11,417 hr}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, the MLE results for Case 1 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{1}\text{.33}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{6,900 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the MLE results for Case 2 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.9337}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{21,348 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we can see, there is a sizable difference between the results calculated using MLE and those calculated using regression. The results for both cases are identical when using the regression estimation technique, because regression considers only the positions of the suspensions. The MLE results, however, are quite different for the two cases, with the second case having a much larger value of &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; due to the higher suspension times in Case 2. This is because the maximum likelihood technique, unlike rank regression, considers the values of the suspensions when estimating the parameters. This is illustrated in the [[Parameter_Estimation#Maximum_Likelihood_Estimation_.28MLE.29|discussion of MLE]] given below.&lt;br /&gt;
&lt;br /&gt;
== ReliaSoft&#039;s Ranking Method (RRM) for Interval Censored Data==&lt;br /&gt;
When analyzing interval data, it is commonplace to assume that the actual failure time occurred at the midpoint of the interval. To be more conservative, you can use the starting point of the interval; to be more optimistic, you can use the end point. Weibull++ allows you to employ ReliaSoft&#039;s ranking method (RRM) when analyzing interval data. Using an iterative process, this ranking method is an improvement over the standard ranking method (SRM). For more details on this method see [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|ReliaSoft&#039;s Ranking Method]].&lt;br /&gt;
&lt;br /&gt;
= Maximum Likelihood Estimation (MLE) = &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM OTHER WIKI PAGES. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
From a statistical point of view, the method of maximum likelihood estimation is, with some exceptions, considered to be the most robust of the parameter estimation techniques discussed here. The method presented in this section is for complete data (i.e., data consisting only of times-to-failure). The analyses for [[Parameter_Estimation#MLE_for_Right_Censored_Data|right censored (suspension) data]] and for [[Parameter_Estimation#MLE_for_Interval_and_Left_Censored_Data|interval or left censored data]] are then discussed in the following sections.&lt;br /&gt;
&lt;br /&gt;
The basic idea behind MLE is to obtain the most likely values of the parameters, for a given distribution, that will best describe the data. As an example, consider the following data (-3, 0, 4) and assume that you are trying to estimate the mean of the data. Now, if you have to choose the most likely value for the mean from -5, 1 and 10, which one would you choose? In this case, the most likely value is 1 (given your limit on choices). Similarly, under MLE, one determines the most likely values for the parameters of the assumed distribution. It is mathematically formulated as follows. &lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; unknown parameters that need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; independent observations, &amp;lt;math&amp;gt;{{x}_{1}},{{x}_{2}},\cdots ,{{x}_{R}}\,\!&amp;lt;/math&amp;gt;, which in the case of life data analysis correspond to failure times. The likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{R}})=L=\underset{i=1}{\overset{R}{\mathop \prod }}\,f({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;i = 1,2,...,R\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logarithmic likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Lambda  = \ln L =\sum_{i = 1}^R \ln f({x_i};{\theta _1},{\theta _2},...,{\theta _k})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood estimators (or parameter values) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are obtained by maximizing &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By maximizing &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt; which is much easier to work with than &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt;, the maximum likelihood estimators (MLE) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the simultaneous solutions of &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; equations such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{\partial{\Lambda}}{\partial{\theta_j}}=0, \text{ j=1,2...,k}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even though it is common practice to plot the MLE solutions using median ranks (points are plotted according to median ranks and the line according to the MLE solutions), this is not completely representative. As can be seen from the equations above, the MLE method is independent of any kind of ranks. For this reason, the MLE solution often appears not to track the data on the probability plot. This is perfectly acceptable because the two methods are independent of each other, and in no way suggests that the solution is wrong.&lt;br /&gt;
&lt;br /&gt;
=== MLE for Right Censored Data  ===&lt;br /&gt;
When performing maximum likelihood analysis on data with suspended items, the likelihood function needs to be expanded to take into account the suspended items. The overall estimation technique does not change, but another term is added to the likelihood function to account for the suspended items. Beyond that, the method of solving for the parameter estimates remains the same. For example, consider a distribution where &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039; and &#039;&#039;cdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
    &amp;amp; F(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the unknown parameters which need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; observed failures at &amp;lt;math&amp;gt;{{T}_{1}},{{T}_{2}},...,{{T}_{R}}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; observed suspensions at &amp;lt;math&amp;gt;{{S}_{1}},{{S}_{2}},...,{{S}_{M}}\,\!&amp;lt;/math&amp;gt;, then the likelihood function is formulated as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R,}}{{S}_{1}},...,{{S}_{M}})= &amp;amp; \underset{i=1}{\overset{R}{\mathop \prod }}\,f({{T}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \cdot \underset{j=1}{\overset{M}{\mathop \prod }}\,[1-F({{S}_{j}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter estimates are obtained by maximizing this likelihood function. In most cases, no closed-form solution exists for this maximum or for the parameters, and numerical methods must be used. Solutions specific to each distribution utilizing MLE are presented in [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
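To see how the suspension term in the likelihood affects the estimate, consider a simpler model than the Weibull distribution used in the text: for the exponential distribution, the censored-likelihood maximum has a well-known closed form. The sketch below (an illustration of the general idea, not the Weibull computation; the function name is hypothetical) applies it to the Case 1 and Case 2 data above and, like the Weibull MLE, yields different answers for the two cases:

```python
def exp_mle_mean_life(failures, suspensions):
    """MLE of the exponential mean life m with right censored data.

    With f(t) = (1/m)exp(-t/m) and R(t) = exp(-t/m), maximizing
    L = prod f(T_i) * prod R(S_j) gives the closed form
    m_hat = (sum of all failure and suspension times) / (number of failures).
    """
    return (sum(failures) + sum(suspensions)) / len(failures)

# Case 1 vs. Case 2 from the table: same suspension positions,
# very different suspension times
print(exp_mle_mean_life([1000, 10000], [1100, 1200, 1300]))  # 7300.0
print(exp_mle_mean_life([1000, 10000], [9700, 9800, 9900]))  # 20200.0
```

Because each suspension contributes its actual time through the reliability term, the estimate responds to the suspension values, which rank adjustment ignores.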
&lt;br /&gt;
=== MLE for Interval and Left Censored Data  ===&lt;br /&gt;
The inclusion of left and interval censored data in an MLE solution for parameter estimates involves adding a term to the likelihood equation to account for the data types in question. When using interval data, it is assumed that the failures occurred in an interval; i.e., in the interval from time &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or from time 0 to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; if left censored), where &amp;lt;math&amp;gt;A &amp;lt; B\,\!&amp;lt;/math&amp;gt;. In the case of interval data, and given &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; interval observations, the likelihood function is modified by multiplying the likelihood function with an additional term as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{P}})= &amp;amp; \underset{i=1}{\overset{P}{\mathop \prod }}\,\{F({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \ \ -F({{x}_{i-1}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that if only interval data are present, this term will represent the entire likelihood function for the MLE solution. The next section gives a formulation of the complete likelihood function for all possible censoring schemes.&lt;br /&gt;
&lt;br /&gt;
=== The Complete Likelihood Function  ===&lt;br /&gt;
We have now seen that obtaining MLE parameter estimates for different types of data involves incorporating different terms in the likelihood function to account for complete data, right censored data, and left or interval censored data. After including the terms for the different types of data, the likelihood function can now be expressed in its complete form, or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
    L= &amp;amp; \underset{i=1}{\mathop{\overset{R}{\mathop{\prod }}\,}}\,f({{T}_{i}};{{\theta }_{1}},...,{{\theta }_{k}})\cdot \underset{j=1}{\mathop{\overset{M}{\mathop{\prod }}\,}}\,[1-F({{S}_{j}};{{\theta }_{1}},...,{{\theta }_{k}})]  \\&lt;br /&gt;
    &amp;amp; \cdot \underset{l=1}{\mathop{\overset{P}{\mathop{\prod }}\,}}\,\left\{ F({{I}_{{{l}_{U}}}};{{\theta }_{1}},...,{{\theta }_{k}})-F({{I}_{{{l}_{L}}}};{{\theta }_{1}},...,{{\theta }_{k}}) \right\}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; L\to L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R}},{{S}_{1}},...,{{S}_{M}},{{I}_{1}},...{{I}_{P}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
*&amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the number of units with exact failures &lt;br /&gt;
*&amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is the number of suspended units &lt;br /&gt;
*&amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is the number of units with left censored or interval times-to-failure &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the parameters of the distribution &lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure&lt;br /&gt;
*&amp;lt;math&amp;gt;{{S}_{j}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; time of suspension&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{U}}}}\,\!&amp;lt;/math&amp;gt; is the ending of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{L}}}}\,\!&amp;lt;/math&amp;gt; is the beginning of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;The total number of units is &amp;lt;math&amp;gt;N = R + M + P\,\!&amp;lt;/math&amp;gt;. It should be noted that in this formulation, if any of &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is zero, then the associated product term is taken to be one, not zero.&lt;br /&gt;
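The structure of the complete likelihood can be sketched numerically. The hypothetical helper below (an illustration, not from the source) evaluates the log-likelihood for the exponential distribution with all three data types; as noted above, an empty product simply contributes a factor of one, which in log form contributes nothing:

```python
import math

def exp_log_likelihood(m, failures=(), suspensions=(), intervals=()):
    """Exponential log-likelihood assembled like the complete likelihood
    above: exact failures contribute ln f(T_i), suspensions ln R(S_j),
    and interval observations ln[F(I_U) - F(I_L)] (left censored: I_L = 0).
    An empty product contributes a factor of one, i.e., zero in log form.
    """
    cdf = lambda t: 1.0 - math.exp(-t / m)
    ll = 0.0
    for t in failures:                    # ln f(T_i)
        ll += math.log(1.0 / m) - t / m
    for s in suspensions:                 # ln R(S_j) = ln[1 - F(S_j)]
        ll += -s / m
    for lo, hi in intervals:              # ln[F(I_U) - F(I_L)]
        ll += math.log(cdf(hi) - cdf(lo))
    return ll

print(exp_log_likelihood(1000.0, failures=[500, 1200],
                         suspensions=[2000], intervals=[(0, 300)]))
```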
&lt;br /&gt;
== Comments on the MLE Method  ==&lt;br /&gt;
The MLE method has many large sample properties that make it attractive for use. It is asymptotically consistent, which means that as the sample size gets larger, the estimates converge to the right values. It is asymptotically efficient, which means that for large samples, it produces the most precise estimates. It is asymptotically unbiased, which means that for large samples, one expects to get the right value on average. The distribution of the estimates themselves is normal, if the sample is large enough, and this is the basis for the usual [[Confidence_Bounds#Fisher_Matrix_Confidence_Bounds|Fisher Matrix Confidence Bounds]] discussed later. These are all excellent large sample properties. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, the size of the sample necessary to achieve these properties can be quite large: thirty to fifty, or even more than a hundred, exact failure times, depending on the application. With fewer points, MLE can be badly biased. It is known, for example, that MLE estimates of the shape parameter for the Weibull distribution are badly biased for small sample sizes, and the effect can be increased depending on the amount of censoring. This bias can cause major discrepancies in analysis. There are also pathological situations when the asymptotic properties of the MLE do not apply. One of these is estimating the location parameter for the three-parameter Weibull distribution when the shape parameter has a value close to 1. These problems, too, can cause major discrepancies. &lt;br /&gt;
&lt;br /&gt;
However, MLE can handle suspensions and interval data better than rank regression, particularly when dealing with a heavily censored data set with few exact failure times or when the censoring times are unevenly distributed. It can also provide estimates with one or no observed failures, which rank regression cannot do. As a rule of thumb, our recommendation is to use rank regression techniques when the sample sizes are small and without heavy censoring (censoring is discussed in [[Life Data Classification|Life Data Classifications]]). When heavy or uneven censoring is present, when a high proportion of interval data is present and/or when the sample size is sufficient, MLE should be preferred. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Appendix:_Maximum_Likelihood_Estimation_Example|Maximum Likelihood Parameter Estimation Example]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Bayesian Parameter Estimation Methods=&lt;br /&gt;
Up to this point, we have dealt exclusively with what is commonly referred to as classical statistics. In this section, another school of thought in statistical analysis will be introduced, namely Bayesian statistics. The premise of Bayesian statistics (within the context of life data analysis) is to incorporate prior knowledge, along with a given set of current observations, in order to make statistical inferences. The prior information could come from operational or observational data, from previous comparable experiments or from engineering knowledge.  This type of analysis can be particularly useful when there is limited test data for a given design or failure mode but there is a strong prior understanding of the failure rate behavior for that design or mode. By incorporating prior information about the parameter(s), a posterior distribution for the parameter(s) can be obtained and inferences on the model parameters and their functions can be made. This section is intended to give a quick and elementary overview of Bayesian methods, focused primarily on the material necessary for understanding the Bayesian analysis methods available in Weibull++. Extensive coverage of the subject can be found in numerous books dealing with Bayesian statistics.&lt;br /&gt;
&lt;br /&gt;
===Bayes’s Rule===&lt;br /&gt;
Bayes’s rule provides the framework for combining prior information with sample data. In this reference, we apply Bayes’s rule for combining prior information on the assumed distribution&#039;s parameter(s) with sample data in order to make inferences based on the model. The prior knowledge about the parameter(s) is expressed in terms of a distribution, &amp;lt;math&amp;gt;\varphi (\theta ),\,\!&amp;lt;/math&amp;gt; called the &#039;&#039;prior distribution&#039;&#039;. The &#039;&#039;posterior&#039;&#039; distribution of &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; given the sample data, obtained using Bayes&#039;s rule, provides the updated information about the parameters &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt;. This is expressed with the following posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta |Data) = \frac{L(Data|\theta )\varphi (\theta )}{\int_{\zeta}^{} L(Data|\theta )\varphi(\theta )d (\theta)}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; is a vector of the parameters of the chosen distribution&lt;br /&gt;
*&amp;lt;math&amp;gt;\zeta\,\!&amp;lt;/math&amp;gt; is the range of &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt; L(Data|\theta)\,\!&amp;lt;/math&amp;gt; is the likelihood function based on the chosen distribution and data&lt;br /&gt;
*&amp;lt;math&amp;gt;\varphi(\theta )\,\!&amp;lt;/math&amp;gt; is the prior distribution for each of the parameters&lt;br /&gt;
&lt;br /&gt;
The integral in the Bayes&#039;s rule equation is often referred to as the marginal probability, which is a constant number that can be interpreted as the probability of obtaining the sample data given a prior distribution. Generally, the integral in the Bayes&#039;s rule equation does not have a closed form solution and numerical methods are needed for its solution.&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Bayes&#039;s rule equation, there is a significant difference between classical and Bayesian statistics. First, the idea of prior information does not exist in classical statistics. All inferences in classical statistics are based on the sample data. On the other hand, in the Bayesian framework, prior information constitutes the basis of the theory. Another difference is in the overall approach of making inferences and their interpretation. For example, in Bayesian analysis, the parameters of the distribution to be fitted are the random variables. In reality, there is no distribution fitted to the data in the Bayesian case.&lt;br /&gt;
&lt;br /&gt;
For instance, consider the case where data is obtained from a reliability test. Based on prior experience with a similar product, the analyst believes that the shape parameter of the Weibull distribution has a value between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\beta }_{2}}\,\!&amp;lt;/math&amp;gt; and wants to utilize this information. This can be achieved by using Bayes&#039;s theorem. At this point, the analyst is automatically forcing the Weibull distribution as a model for the data, with a shape parameter between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{\beta _2}\,\!&amp;lt;/math&amp;gt;. In this example, the range of values for the shape parameter is the prior distribution, which in this case is uniform. By applying Bayes&#039;s rule, the posterior distribution of the shape parameter will be obtained. Thus, we end up with a distribution for the parameter rather than a point estimate of the parameter, as in classical statistics.&lt;br /&gt;
&lt;br /&gt;
To better illustrate the example, assume that a set of failure data was provided along with a distribution for the shape parameter (i.e., uniform prior) of the Weibull (automatically assuming that the data are Weibull distributed). Based on that, a new distribution (the posterior) for that parameter is then obtained using Bayes&#039;s rule. This posterior distribution of the parameter may or may not resemble in form the assumed prior distribution. In other words, in this example the prior distribution of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; was assumed to be uniform but the posterior is most likely not a uniform distribution.&lt;br /&gt;
&lt;br /&gt;
The question now becomes: what is the value of the shape parameter? What about the reliability and other results of interest? In order to answer these questions, we have to remember that in the Bayesian framework all of these metrics are random variables. Therefore, in order to obtain a point estimate, either a probability level (percentile) must be specified or the expected value of the posterior distribution can be used.&lt;br /&gt;
&lt;br /&gt;
In order to demonstrate the procedure of obtaining results from the posterior distribution, we will rewrite the Bayes&#039;s rule equation for a single parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta_1 |Data) = \frac{L(Data|\theta_1 )\varphi (\theta_1 )}{\int_{\zeta}^{} L(Data|\theta_1 )\varphi(\theta_1 )d\theta_1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The expected value (or mean value) of the parameter &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained using the equation for the mean and the Bayes&#039;s rule equation for single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E({\theta _1}) = {m_{{\theta _1}}} = \int_{\zeta}^{}{\theta _1} \cdot f({\theta _1}|Data)d{\theta _1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative result for &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; would be the median value. Using the equation for the median and the Bayes&#039;s rule equation for a single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{\theta }_{0.5}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation for the median is solved for &amp;lt;math&amp;gt;{\theta _{0.5}}\,\!&amp;lt;/math&amp;gt;, the median value of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Similarly, any other percentile of the posterior &#039;&#039;pdf&#039;&#039; can be calculated and reported. For example, one could calculate the 90th percentile of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;’s posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{{\theta }_{0.9}}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.9\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This calculation will be used in [[Confidence Bounds]] and [[The Weibull Distribution]] for obtaining confidence bounds on the parameter(s).&lt;br /&gt;
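The posterior quantities above (mean, median and other percentiles) reduce to one-dimensional integrals that are easy to evaluate numerically. The sketch below is purely illustrative, not ReliaSoft code: the failure times, the known scale parameter eta and the uniform prior bounds are all hypothetical. With a uniform prior, Bayes&#039;s rule amounts to normalizing the likelihood over the prior&#039;s support, after which the mean and percentiles are read off the grid:

```python
# Illustrative sketch (hypothetical data): posterior of a Weibull shape
# parameter with a uniform prior, via numerical integration on a grid.
import numpy as np

data = np.array([16.0, 34.0, 53.0, 75.0, 93.0, 120.0])  # hypothetical failure times
eta = 80.0            # Weibull scale, assumed known here for simplicity
b1, b2 = 1.0, 3.0     # prior belief: shape lies between beta_1 and beta_2

betas = np.linspace(b1, b2, 2001)  # grid over the prior's support

def log_likelihood(beta):
    # Weibull log-likelihood for complete (uncensored) data
    return np.sum(np.log(beta / eta) + (beta - 1) * np.log(data / eta)
                  - (data / eta) ** beta)

loglik = np.array([log_likelihood(b) for b in betas])
post = np.exp(loglik - loglik.max())   # uniform prior: posterior ∝ likelihood
post /= np.trapz(post, betas)          # normalize (Bayes's rule denominator)

mean_beta = np.trapz(betas * post, betas)          # E(beta | Data)
cdf = np.cumsum(post) * (betas[1] - betas[0])      # approximate posterior CDF
median_beta = betas[np.searchsorted(cdf, 0.5)]     # 50th percentile
p90_beta = betas[np.searchsorted(cdf, 0.9)]        # 90th percentile
```

A denser grid or adaptive quadrature would give the same answers more accurately; the point is simply that with a single parameter the posterior integrals are one-dimensional and straightforward to evaluate.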
&lt;br /&gt;
The next step will be to make inferences on the reliability. Since the parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; is a random variable described by the posterior &#039;&#039;pdf,&#039;&#039; all subsequent functions of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; are random variables as well and are entirely based on the posterior &#039;&#039;pdf&#039;&#039; of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt;. Therefore, the expected value, median or other percentile values of these functions also need to be calculated. For example, the expected reliability at time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E[R(T|Data)] = \int_{\zeta}^{} R(T)f(\theta |Data)d{\theta}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, at a given time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, there is a distribution that governs the reliability value at that time, and by using Bayes&#039;s rule, the expected (or mean) value of the reliability is obtained. Other percentiles of this distribution can also be obtained.&lt;br /&gt;
A similar procedure is followed for other functions of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;, such as failure rate, reliable life, etc.&lt;br /&gt;
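The expected reliability is the same kind of one-dimensional integral, with the Weibull reliability weighted by the posterior pdf of the shape parameter. The sketch below is illustrative only; the failure times, the assumed-known scale eta, the uniform prior bounds and the mission time T are all hypothetical:

```python
# Illustrative sketch (hypothetical data): E[R(T|Data)] as the
# posterior-weighted average of the Weibull reliability at time T.
import numpy as np

data = np.array([16.0, 34.0, 53.0, 75.0, 93.0, 120.0])  # hypothetical failure times
eta = 80.0                            # Weibull scale, assumed known
betas = np.linspace(1.0, 3.0, 2001)   # uniform prior support for the shape

# Posterior pdf of beta (uniform prior -> normalized likelihood)
loglik = np.array([np.sum(np.log(b / eta) + (b - 1) * np.log(data / eta)
                          - (data / eta) ** b) for b in betas])
post = np.exp(loglik - loglik.max())
post /= np.trapz(post, betas)

T = 50.0
R = np.exp(-(T / eta) ** betas)       # Weibull reliability at T for each beta
expected_R = np.trapz(R * post, betas)  # E[R(T|Data)]
```

The same weighting applies to any other function of the parameter (failure rate, reliable life, and so on): evaluate the function on the grid and integrate it against the posterior pdf.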
&lt;br /&gt;
===Prior Distributions===&lt;br /&gt;
Prior distributions play a very important role in Bayesian statistics, as they essentially form the basis of the analysis. Different types of prior distributions exist, namely &#039;&#039;informative&#039;&#039; and &#039;&#039;non-informative&#039;&#039;. Non-informative prior distributions (a.k.a. &#039;&#039;vague&#039;&#039;, &#039;&#039;flat&#039;&#039; and &#039;&#039;diffuse&#039;&#039; priors) are distributions that have no population basis and play a minimal role in the posterior distribution. The idea behind using non-informative prior distributions is to make inferences that are not greatly affected by external information, or to proceed when external information is not available. The uniform distribution is frequently used as a non-informative prior.&lt;br /&gt;
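One way to see what "plays a minimal role in the posterior" means is a conjugate beta-binomial sketch for a pass/fail reliability test. All numbers below are hypothetical: with a Beta(a, b) prior and s successes in n trials, the posterior is Beta(a + s, b + n - s), so its mean is (a + s)/(a + b + n). Under the uniform Beta(1, 1) prior the posterior mean essentially reproduces the observed fraction, while a strongly informative prior pulls the estimate toward its own mean until the sample size grows:

```python
# Beta-binomial sketch (hypothetical numbers): posterior mean of a
# pass/fail reliability under a uniform vs. a strongly informative prior.

def posterior_mean(a, b, successes, n):
    """Posterior mean of the success probability (conjugate beta update)."""
    return (a + successes) / (a + b + n)

observed = 0.9                       # observed fraction of passes
for n in (10, 100, 1000):
    s = int(observed * n)
    flat = posterior_mean(1.0, 1.0, s, n)      # uniform (non-informative) prior
    strong = posterior_mean(50.0, 50.0, s, n)  # informative prior centered at 0.5
    # "flat" stays close to 0.9 for every n; "strong" starts near 0.5
    # and only approaches 0.9 as the data overwhelm the prior.
```

The Beta(50, 50) prior acts like 100 pseudo-observations centered at 0.5, which is why roughly that much real data is needed before it stops dominating the estimate.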
&lt;br /&gt;
On the other hand, informative priors have a stronger influence on the posterior distribution. The influence of the prior on the posterior depends on the sample size of the data and the form of the prior. Generally speaking, large sample sizes are required to modify strong priors, whereas weak priors are overwhelmed by even relatively small sample sizes. Informative priors are typically obtained from past data.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Experiment_Design_and_Analysis_Reference&amp;diff=57078</id>
		<title>Experiment Design and Analysis Reference</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Experiment_Design_and_Analysis_Reference&amp;diff=57078"/>
		<updated>2015-02-17T22:25:17Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Allbooksindex}}&lt;br /&gt;
{| width=&amp;quot;600&amp;quot; border=&amp;quot;0&amp;quot; align=&amp;quot;center&amp;quot; cellpadding=&amp;quot;3&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|- style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;left&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; bgcolor=&amp;quot;#E5B21B&amp;quot;| &amp;lt;font color=&amp;quot;#ffffff&amp;quot; size=&amp;quot;3&amp;quot;&amp;gt;ReliaSoft&#039;s Experiment Design and Analysis Reference&amp;lt;/font&amp;gt; &lt;br /&gt;
|- style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;left&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; bgcolor=&amp;quot;#E5B21B&amp;quot; | &amp;lt;font color=&amp;quot;#ffffff&amp;quot; size=&amp;quot;4&amp;quot;&amp;gt;Chapter Index&amp;lt;/font&amp;gt; &lt;br /&gt;
|- style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;left&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &lt;br /&gt;
#[[DOE Overview]]&lt;br /&gt;
#[[Statistical Background on DOE]]&lt;br /&gt;
#[[Simple Linear Regression Analysis]]&lt;br /&gt;
#[[Multiple Linear Regression Analysis]]&lt;br /&gt;
#[[One Factor Designs]]&lt;br /&gt;
#[[General Full Factorial Designs]]&lt;br /&gt;
#[[Randomization and Blocking in DOE]]&lt;br /&gt;
#[[Two Level Factorial Experiments]]&lt;br /&gt;
#[[Highly Fractional Factorial Designs]]&lt;br /&gt;
#*[[Highly Fractional Factorial Designs|Plackett-Burman Designs]]&lt;br /&gt;
#*[[Highly_Fractional_Factorial_Designs#Taguchi.27s_Orthogonal_Arrays|Taguchi Orthogonal Arrays Designs]]&lt;br /&gt;
#[[Response Surface Methods for Optimization]]&lt;br /&gt;
#[[Design Evaluation and Power Study]]&lt;br /&gt;
#[[Optimal Custom Designs]]&lt;br /&gt;
#[[Robust Parameter Design]]&lt;br /&gt;
#[[Mixture Design]]&lt;br /&gt;
#[[Reliability DOE for Life Tests]]&lt;br /&gt;
#[[Measurement System Analysis]]&lt;br /&gt;
#Appendices &lt;br /&gt;
#*[[ANOVA Calculations in Multiple Linear Regression|Appendix A: ANOVA Calculations in Multiple Linear Regression]]&lt;br /&gt;
#*[[Use of Regression to Calculate Sum of Squares|Appendix B: Use of Regression to Calculate Sum of Squares]]&lt;br /&gt;
#*[[Plackett-Burman Designs|Appendix C: Plackett-Burman Designs]]&lt;br /&gt;
#*[[Taguchi Orthogonal Arrays|Appendix D: Taguchi&#039;s Orthogonal Arrays]]&lt;br /&gt;
#*[[Alias Relations for Taguchi Orthogonal Arrays|Appendix E: Alias Relations for Taguchi&#039;s Orthogonal Arrays]]&lt;br /&gt;
#*[[Box-Behnken Designs|Appendix F: Box-Behnken Designs]]&lt;br /&gt;
#*[[DOE Glossary|Appendix G: Glossary]]&lt;br /&gt;
#*[[DOE References|Appendix H: References]]&lt;br /&gt;
|}&lt;br /&gt;
{| width=&amp;quot;600&amp;quot; border=&amp;quot;0&amp;quot; align=&amp;quot;center&amp;quot; cellpadding=&amp;quot;3&amp;quot; cellspacing=&amp;quot;0&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; valign=&amp;quot;middle&amp;quot; bgcolor=&amp;quot;#dddddd&amp;quot; | [[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf|left|50px]]&amp;lt;p style=&amp;quot;text-align: left;&amp;quot;&amp;gt;[http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf Download this book as a print-ready *.pdf] -or-&amp;lt;br&amp;gt;[http://reliawiki.org/index.php/ReliaWiki:Books/Experiment_Design_and_Analysis_Reference_eBook Generate your own file] (may be more up-to-date)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;0&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;0&amp;quot; width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;center&amp;quot; | &lt;br /&gt;
&amp;lt;br&amp;gt; {{Allbooksindex footer|DOE++ Examples|DOE++}}&lt;br /&gt;
[[Image:DOE Examples Banner.png|link=DOE++ Examples|center|300px]] &lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=SynthesisX&amp;diff=56974</id>
		<title>SynthesisX</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=SynthesisX&amp;diff=56974"/>
		<updated>2015-02-11T23:58:12Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* 32px DOE++ 10 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:SynthesisX}} [[Image:SynthesisX buiding.png|right|360x194px]] &lt;br /&gt;
&lt;br /&gt;
Version 10 of the Synthesis Platform, Synthesis X, is currently in development with a planned release in Q1 of 2015. This document was last revised on {{Template:SynXRevDate}}. [[Image:ConstSX.gif|right|200x250px]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
This page is a working draft of the changes/modifications planned for this version. It is made publicly available to customers for input and feedback. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Comments, Questions or Feedback ===&lt;br /&gt;
&lt;br /&gt;
You can use the new &#039;&#039;&#039;[http://www.reliability-discussion.com/forumdisplay.php?f=42 Synthesis sub-forum]&#039;&#039;&#039; in the Reliability Discussion Forum for comments, questions, suggestions and/or feedback related to the planned modifications. &lt;br /&gt;
&lt;br /&gt;
Alternatively, you can send an [mailto:Synthesis@ReliaSoft.com?subject=SynthesisX%20Feedback%20from%20ReliaWiki e-mail to the Development team].&lt;br /&gt;
&lt;br /&gt;
= Platform-Wide Modifications =&lt;br /&gt;
&lt;br /&gt;
These platform modifications are incorporated into all applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Template Projects]]&#039;&#039;&#039; (now called &amp;quot;Reference Projects&amp;quot;) added.{{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**&#039;&#039;&#039;[[Global/Template Resources]]&#039;&#039;&#039; added/expanded. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}}  {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Synthesis-wide &#039;&#039;&#039;[[DFR Planner]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}}  {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**New &#039;&#039;&#039;[[DFR Resources]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**New &#039;&#039;&#039;[[Work Days Scheduler]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Expanded Alerts]]&#039;&#039;&#039;. E-mail alerts and notifications expanded and streamlined. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} &lt;br /&gt;
*&#039;&#039;&#039;[[User Profiles]]&#039;&#039;&#039; expanded. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Change in &#039;&#039;&#039;[[Global Identifiers]]&#039;&#039;&#039;. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Advanced Categorization]]&#039;&#039;&#039; and Filtering added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Global Item Filter]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Synthesis Resource Shortcuts]]&#039;&#039;&#039; (called &amp;quot;Synthesis Locator Links&amp;quot;) added. Open an app/project/folio/sheet from a single file. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Non-Auto-Save Mode]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Expanded Project Explorer]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Unified Actions]]&#039;&#039;&#039;. All Actions including test requests are unified across all products and can be managed/linked with the DFR planner, as well as through the new web-based [[Synthesis Enterprise Portal]]. {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Synthesis Work Books]]&#039;&#039;&#039;.  {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
*&#039;&#039;&#039;[[Results Dashboard]]&#039;&#039;&#039; modified and expanded. Now available in more locations (FMEA data, Synthesis Explorer) and integrated with the SEP. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
*Unified User Preferences Setup Window {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
* Modified Security Permissions &#039;&#039;&#039;[[V10 Security and Permissions]]&#039;&#039;&#039;  {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
Postponed&lt;br /&gt;
*Parts Table Reference added to desktop applications for better tie in to XFRACAS. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}}{{Font|[[Current Build Status| [NS] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
=== UI Changes Not Affecting Functionality  ===&lt;br /&gt;
&lt;br /&gt;
*UDFs for folios also changed to be a tree structure. {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*Model window interface updated to use a tree structure similar to other resources. {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*Interface improvements for Resource Manager: {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**&amp;quot;Show All,&amp;quot; &amp;quot;Show Only Unused&amp;quot; and &amp;quot;Show Only Duplicates&amp;quot; now indicate which option is selected.&lt;br /&gt;
**A status bar at the bottom of the window indicates which resource is selected, the Local/Global view option and the Selection option.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New Resources ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Metric]]&#039;&#039;&#039;. Shows calculated results value from a model or simulation result. Result values can be manually added to a stack of saved values and tracked over time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*FMEAs are now resources. This is only available in applications that use FMEAs (Xfmea/RCM++/RBI)  {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
= Application-Specific Modifications  =&lt;br /&gt;
&lt;br /&gt;
== [[Image:Weibull++Icon.png|32px]] Weibull++/ALTA 10 ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Fractional Failures Analysis]]&#039;&#039;&#039; added. Discount failures based on planned corrective actions effectiveness for what-if analysis. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Data Analysis Applications - Multiple Projects|Weibull/ALTA Multiple Projects]]&#039;&#039;&#039; added. Open multiple projects at the same time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*New &#039;&#039;&#039;[[3D-Plot Folio]]&#039;&#039;&#039; added. New, original 3D plots. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Catastrophic Degradation Analysis]]&#039;&#039;&#039; added. Direct MLE solution options for Degradation Analysis added, allowing for catastrophic degradation analysis. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;User-Defined Degradation Model&#039;&#039;&#039; was added in Weibull++. This is the same feature as the &#039;&#039;&#039;Equation Fit Solver&#039;&#039;&#039; in Weibull++. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
*&#039;&#039;&#039;[[Item Specific Calculation Options]]&#039;&#039;&#039;. Folio Calculation Options are defined at the folio level and  are independent of User Settings. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Publish analysis to the Synthesis Enterprise Portal: {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Published Folio/Data Sheets can now be associated with a [[Metric]] variable. Track and plot (and even analyze) changes in analysis metrics over time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Reliability Data Warehouse (RDW) functionality expanded and interface redesigned. {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**Link automatically to external databases through RDW (SQL, Oracle Access).&lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;.  {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
== [[Image:BlockSimIcon.png|32px]] BlockSim 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[BlockSim General Enhancements]]&#039;&#039;&#039;. General Interface and Analysis enhancements were made. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[FMRA View Expanded]]&#039;&#039;&#039;. Additional functionality and capability added to the BlockSim FMRA view. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[BlockSim Multiple Projects]]&#039;&#039;&#039; Open multiple projects at the same time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Item Specific Calculation Options]]&#039;&#039;&#039;. Diagram Calculation Options are defined at the diagram level and  are independent of User Settings. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Diagrams can now be associated with a [[Metric]] variable. Track and plot (and even analyze) changes in analysis metrics over time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Curved line type connectors added to diagrams. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
== [[Image:RENOIcon.png|32px]] RENO 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*Drag &amp;amp; Drop Mode enhanced and can now be used in tabbed view as well. &lt;br /&gt;
&lt;br /&gt;
*Option to not automatically validate equations added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*Ability to reference resources &#039;&#039;by name&#039;&#039; in the equations instead of by reference.  {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Item Specific Calculation Options]]&#039;&#039;&#039;. Diagram Calculation Options are defined at the diagram level and  are independent of User Settings. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[RENO_Multiple_Projects|RENO Multiple Projects]]&#039;&#039;&#039;. Open multiple projects at the same time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*Quick send to Weibull for result containers.  &lt;br /&gt;
&lt;br /&gt;
*Curved line type connectors added to diagrams. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
== [[Image:XfmeaIcon.png|32px]] Xfmea 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Analyses as Resources]]&#039;&#039;&#039;. FMEAs are now resources (i.e., Linked FMEAs) and can be reused. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*FMEA Smart Add. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [NS] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
* Optional color coding on scale selections added based on item target reliability.  {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
* Target Reliability allocation to cause added  {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
* Read and Push Metrics added to all items  {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[QCPN]]&#039;&#039;&#039; metric added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*DFR planner is no longer an analysis in Xfmea but is now a separate project-level &#039;&#039;utility application&#039;&#039; available to all Synthesis applications. &lt;br /&gt;
**Conversion considerations: &lt;br /&gt;
***In V9, DFR plans were simpler and could be added to any item in the system hierarchy. &lt;br /&gt;
***The greatly expanded V10 DFR planner is no longer item-based but instead project-based. &lt;br /&gt;
***Upon conversion of older files, and if more than one DFR planner is in the project, the plans will be merged into a single project-level DFR planner. &lt;br /&gt;
*System Hierarchy Filtered View &lt;br /&gt;
*Interactive FMEA Block Diagrams added.  &lt;br /&gt;
*Automatic and linked Test Plan Generation and monitoring. Enhances and replaces existing DVP&amp;amp;R functionality. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Postponed&lt;br /&gt;
*&#039;&#039;&#039;[[FMEA Import/Compare Window]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [NS] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
== [[Image:RCMIcon.png|32px]] RCM++ 10  ==&lt;br /&gt;
&lt;br /&gt;
*Platform-wide modifications [[#top|(&#039;&#039;see the list at the top of this page&#039;&#039;)]].&lt;br /&gt;
*ALL Xfmea modifications are also included in RCM++ [[#XFMEA_10|&#039;&#039;(see the list under Xfmea)&#039;&#039;]].&lt;br /&gt;
&lt;br /&gt;
== [[Image:RBIIcon.png|32px]] RBI 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}}&lt;br /&gt;
*All Xfmea improvements are also available in RBI.&lt;br /&gt;
&lt;br /&gt;
== [[Image:LpredictIcon.png|28px|28]] Lambda Predict 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*FIDES &lt;br /&gt;
**A set of FIDES Phases can now be defined in a Phase Set.&lt;br /&gt;
**Two new FIDES plots.&lt;br /&gt;
*Parts Count added.&lt;br /&gt;
**Added a parts count prediction that restricts the behaviors to those allowable in a parts count.&lt;br /&gt;
*NSWC Updated&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== [[Image:DOEIcon.png|32px]] DOE++ 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
* &#039;&#039;&#039;Mixture Design&#039;&#039;&#039; was added. &lt;br /&gt;
*[[Repeated Measurements]] added for standard designs with a single response. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Data Analysis Applications - Multiple Projects|DOE++ Multiple Projects]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Item Specific Calculation Options]]&#039;&#039;&#039;. Folio Calculation Options are defined at the folio level and are independent of User Settings.&lt;br /&gt;
&lt;br /&gt;
*The optimization plot, overlaid contour plot and dynamic overlaid contour plot dialogs now use a tree format. (Specific factors can now be held constant in the optimization plot.)&lt;br /&gt;
&lt;br /&gt;
*Ignore/Include column added to standard and robust folios. (Ignored rows are not included in any of the calculations.)&lt;br /&gt;
&lt;br /&gt;
*Dialogs added to modify all factors or all responses from one spot.&lt;br /&gt;
&lt;br /&gt;
*Central Composite factor values can be assigned based on Alpha values.&lt;br /&gt;
&lt;br /&gt;
*New and improved Surface plot.&lt;br /&gt;
**Basic plot functionality remains the same as the previous surface plot.&lt;br /&gt;
**Appearance is greatly improved (including anti-aliasing and legible text).&lt;br /&gt;
**Interface has changed substantially.&lt;br /&gt;
**Available settings have been vastly expanded.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== [[Image:RGAIcon.png|32px]] RGA 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Data Analysis Applications - Multiple Projects|RGA Multiple Projects]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} &lt;br /&gt;
&lt;br /&gt;
*Modifications/improvements to multiple systems analysis (and generating the equivalent single system).&lt;br /&gt;
&lt;br /&gt;
*Folios/data sheets can now be associated with a [[Metric]] variable. &lt;br /&gt;
&lt;br /&gt;
*Metric variable array can be used as a data source. Track and plot (and even analyze) changes in analysis metrics over time. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== [[Image:XFRACASIcon.png|32px]] XFRACAS 10  ==&lt;br /&gt;
&lt;br /&gt;
*Multiple &amp;quot;under-the-hood&amp;quot; improvements.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== [[Image:MPCIcon.png|32px]] MPC 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Synthesis API 10  ==&lt;br /&gt;
&lt;br /&gt;
*Calculate compound (analytical diagram) models via an API call. &lt;br /&gt;
*Manipulate Xfmea System Hierarchy Via API&lt;br /&gt;
&lt;br /&gt;
= New Applications Added to the Platform for Version 10  =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== SEP: [[Synthesis Enterprise Portal]] 10 {{Font|[NEW]|12|tahoma|italic|rgb(205,16,118)}}  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ReliaSoft’s Synthesis Enterprise Portal (SEP) opens up the Synthesis Platform, and your work&lt;br /&gt;
in the platform, to your whole organization. You can share your progress, results and analyses with management and colleagues using the new web-based portal, accessible from any web-enabled device. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SEP Areas&#039;&#039;&#039;&lt;br /&gt;
*Home Landing Page&lt;br /&gt;
*Projects and Project Information Page &lt;br /&gt;
**[[SEP Analysis in Project Pages]] (Based on current Project)&lt;br /&gt;
*Timeline Messaging&lt;br /&gt;
*Tasks &amp;amp; Actions&lt;br /&gt;
*Repository Management Pages&lt;br /&gt;
&lt;br /&gt;
== Markov 10 {{Font|[NEW]|12|tahoma|italic|rgb(205,16,118)}}  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Apply Markov analysis.&lt;br /&gt;
*Implemented in the BlockSim/RENO interface as a new diagram type. &lt;br /&gt;
**Introduces two new diagram types:&lt;br /&gt;
*** Discrete Markov:  For creating and analyzing single or multi-phased diagrams with discrete transition probabilities.&lt;br /&gt;
***Continuous Markov:  For creating and analyzing single or multi-phased diagrams with continuous distributions for transition probabilities.&lt;br /&gt;
&lt;br /&gt;
== The Synthesis Dashboard Designer 10 {{Font|[NEW]|12|tahoma|italic|rgb(205,16,118)}}  ==&lt;br /&gt;
&lt;br /&gt;
* Create dashboard templates that can be published and viewed from each hosting application and the Synthesis Enterprise Portal.&lt;br /&gt;
&lt;br /&gt;
* Dashboard designs are available for the following analyses:&lt;br /&gt;
** DFR Planner&lt;br /&gt;
** RDW Data&lt;br /&gt;
** BlockSim Simulation Results&lt;br /&gt;
** Synthesis Explorer&lt;br /&gt;
** Xfmea/RCM++/RBI Items&lt;br /&gt;
&lt;br /&gt;
== ReliaSoft&#039;s Course Player 10 {{Font|[NEW]|12|tahoma|italic|rgb(205,16,118)}}  ==&lt;br /&gt;
&lt;br /&gt;
A new eLearning delivery platform designed from the ground up by ReliaSoft to create and deliver ReliaSoft eCourses. Details upon release.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=SynthesisX&amp;diff=56969</id>
		<title>SynthesisX</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=SynthesisX&amp;diff=56969"/>
		<updated>2015-02-11T23:38:44Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* 32px Weibull++/ALTA 10 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:SynthesisX}} [[Image:SynthesisX buiding.png|right|360x194px]] &lt;br /&gt;
&lt;br /&gt;
Version 10 of the Synthesis Platform, Synthesis X, is currently in development with a planned release in Q1 of 2015. This document was last revised on {{Template:SynXRevDate}}. [[Image:ConstSX.gif|right|200x250px]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
This page is a working draft of the changes/modifications planned for this version. It is made publicly available to customers for input and feedback. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Comments, Questions or Feedback ===&lt;br /&gt;
&lt;br /&gt;
You can use the new &#039;&#039;&#039;[http://www.reliability-discussion.com/forumdisplay.php?f=42 Synthesis sub-forum]&#039;&#039;&#039; in the Reliability Discussion Forum for comments, questions, suggestions and/or feedback related to the planned modifications. &lt;br /&gt;
&lt;br /&gt;
Alternatively, you can send an [mailto:Synthesis@ReliaSoft.com?subject=SynthesisX%20Feedback%20from%20ReliaWiki e-mail to the Development team].&lt;br /&gt;
&lt;br /&gt;
= Platform-Wide Modifications =&lt;br /&gt;
&lt;br /&gt;
These platform modifications are incorporated into all applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Template Projects]]&#039;&#039;&#039; (now called &amp;quot;Reference Projects&amp;quot;) added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**&#039;&#039;&#039;[[Global/Template Resources]]&#039;&#039;&#039; added/expanded. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}}  {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Synthesis-wide &#039;&#039;&#039;[[DFR Planner]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}}  {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**New &#039;&#039;&#039;[[DFR Resources]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**New &#039;&#039;&#039;[[Work Days Scheduler]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Expanded Alerts]]&#039;&#039;&#039;. E-mail alerts and notifications expanded and streamlined. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} &lt;br /&gt;
*&#039;&#039;&#039;[[User Profiles]]&#039;&#039;&#039; expanded. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Change in &#039;&#039;&#039;[[Global Identifiers]]&#039;&#039;&#039;. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Advanced Categorization]]&#039;&#039;&#039; and Filtering added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Global Item Filter]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Synthesis Resource Shortcuts]]&#039;&#039;&#039; (called &amp;quot;Synthesis Locator Links&amp;quot;) added. Open an app/project/folio/sheet from a single file. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Non-Auto-Save Mode]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Expanded Project Explorer]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Unified Actions]]&#039;&#039;&#039;. All Actions including test requests are unified across all products and can be managed/linked with the DFR planner, as well as through the new web-based [[Synthesis Enterprise Portal]]. {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Synthesis Work Books]]&#039;&#039;&#039;.  {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
*&#039;&#039;&#039;[[Results Dashboard]]&#039;&#039;&#039; modified and expanded. Now available in more locations (FMEA data, Synthesis Explorer) and integrated with the SEP. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
*Unified User Preferences Setup Window {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
* Modified Security Permissions &#039;&#039;&#039;[[V10 Security and Permissions]]&#039;&#039;&#039;  {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
Postponed&lt;br /&gt;
*Parts Table Reference added to desktop applications for a better tie-in to XFRACAS. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [NS] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
=== UI Changes Not Affecting Functionality  ===&lt;br /&gt;
&lt;br /&gt;
*UDFs for folios also changed to be a tree structure. {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*Model window interface updated to use a tree structure similar to other resources. {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*Interface improvements for Resource Manager: {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**&amp;quot;Show All,&amp;quot; &amp;quot;Show Only Unused&amp;quot; and &amp;quot;Show Only Duplicates&amp;quot; now indicate which option is selected.&lt;br /&gt;
**A status bar at the bottom of the window indicates which resource is selected, the Local/Global view option and the Selection option.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New Resources ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Metric]]&#039;&#039;&#039;. Shows a calculated result value from a model or simulation. Result values can be manually added to a stack of saved values and tracked over time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*FMEAs are now resources. This is only available in applications that use FMEAs (Xfmea/RCM++/RBI). {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
= Application-Specific Modifications  =&lt;br /&gt;
&lt;br /&gt;
== [[Image:Weibull++Icon.png|32px]] Weibull++/ALTA 10 ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Fractional Failures Analysis]]&#039;&#039;&#039; added. Discount failures based on the effectiveness of planned corrective actions for what-if analysis. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Data Analysis Applications - Multiple Projects|Weibull/ALTA Multiple Projects]]&#039;&#039;&#039; added. Open multiple projects at the same time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*New &#039;&#039;&#039;[[3D-Plot Folio]]&#039;&#039;&#039; added. New, original 3D plots. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Catastrophic Degradation Analysis]]&#039;&#039;&#039; added. Direct MLE solution options for Degradation Analysis added, allowing for catastrophic degradation analysis. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;User-Defined Degradation Model&#039;&#039;&#039; added in Weibull++; it provides the same functionality as the &#039;&#039;&#039;Equation Fit Solver&#039;&#039;&#039;. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
*&#039;&#039;&#039;[[Item Specific Calculation Options]]&#039;&#039;&#039;. Folio Calculation Options are defined at the folio level and  are independent of User Settings. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Publish analyses to the Synthesis Enterprise Portal. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Published Folio/Data Sheets can now be associated with a [[Metric]] variable. Track and plot (and even analyze) changes in analysis metrics over time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Reliability Data Warehouse (RDW) functionality expanded and interface redesigned. {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**Link automatically to external databases through the RDW (SQL, Oracle, Access).&lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;.  {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
== [[Image:BlockSimIcon.png|32px]] BlockSim 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[BlockSim General Enhancements]]&#039;&#039;&#039;. General Interface and Analysis enhancements were made. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[FMRA View Expanded]]&#039;&#039;&#039;. Additional functionality and capability added to the BlockSim FMRA view. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[BlockSim Multiple Projects]]&#039;&#039;&#039; added. Open multiple projects at the same time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Item Specific Calculation Options]]&#039;&#039;&#039;. Diagram Calculation Options are defined at the diagram level and  are independent of User Settings. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Diagrams can now be associated with a [[Metric]] variable. Track and plot (and even analyze) changes in analysis metrics over time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Curved line type connectors added to diagrams. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
== [[Image:RENOIcon.png|32px]] RENO 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*Drag &amp;amp; Drop Mode enhanced and can now be used in tabbed view as well. &lt;br /&gt;
&lt;br /&gt;
*Option to not automatically validate equations added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*Ability to reference resources &#039;&#039;by name&#039;&#039; in the equations instead of by reference.  {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Item Specific Calculation Options]]&#039;&#039;&#039;. Diagram Calculation Options are defined at the diagram level and  are independent of User Settings. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[RENO_Multiple_Projects|RENO Multiple Projects]]&#039;&#039;&#039;. Open multiple projects at the same time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*Quick send to Weibull++ for result containers. &lt;br /&gt;
&lt;br /&gt;
*Curved line type connectors added to diagrams. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
== [[Image:XfmeaIcon.png|32px]] Xfmea 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Analyses as Resources]]&#039;&#039;&#039;. FMEAs are now resources (i.e., Linked FMEAs) and can be reused. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*FMEA Smart Add. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [NS] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Optional color coding on scale selections added based on item target reliability. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Target reliability allocation to causes added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Read and Push Metrics added to all items. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[QCPN]]&#039;&#039;&#039; metric added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*DFR planner is no longer an analysis in Xfmea but is now a separate project-level &#039;&#039;utility application&#039;&#039; available to all Synthesis applications. &lt;br /&gt;
**Conversion considerations: &lt;br /&gt;
***In V9, DFR plans were simpler and could be added to any item in the system hierarchy. &lt;br /&gt;
***The greatly expanded V10 DFR planner is no longer item-based but instead project-based. &lt;br /&gt;
***Upon conversion of older files, if more than one DFR plan exists in the project, the plans will be merged into a single project-level DFR planner. &lt;br /&gt;
*System Hierarchy Filtered View &lt;br /&gt;
*Interactive FMEA Block Diagrams added.  &lt;br /&gt;
*Automatic and linked Test Plan Generation and monitoring. Enhances and replaces existing DVP&amp;amp;R functionality. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Postponed&lt;br /&gt;
*&#039;&#039;&#039;[[FMEA Import/Compare Window]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [NS] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
== [[Image:RCMIcon.png|32px]] RCM++ 10  ==&lt;br /&gt;
&lt;br /&gt;
*Platform-wide modifications [[#top|(&#039;&#039;see the list at the top of this page&#039;&#039;)]].&lt;br /&gt;
*All Xfmea modifications are also included in RCM++ [[#XFMEA_10|&#039;&#039;(see the list under Xfmea)&#039;&#039;]].&lt;br /&gt;
&lt;br /&gt;
== [[Image:RBIIcon.png|32px]] RBI 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}}&lt;br /&gt;
*All Xfmea improvements are also available in RBI.&lt;br /&gt;
&lt;br /&gt;
== [[Image:LpredictIcon.png|28px|28]] Lambda Predict 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*FIDES &lt;br /&gt;
**A set of FIDES Phases can now be defined in a Phase Set.&lt;br /&gt;
**Two new FIDES plots.&lt;br /&gt;
*Parts Count added.&lt;br /&gt;
**Added a parts count prediction that restricts the behaviors to those allowable in a parts count.&lt;br /&gt;
*NSWC Updated&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== [[Image:DOEIcon.png|32px]] DOE++ 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
*[[Repeated Measurements]] added for standard designs with a single response. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Data Analysis Applications - Multiple Projects|DOE++ Multiple Projects]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Item Specific Calculation Options]]&#039;&#039;&#039;. Folio Calculation Options are defined at the folio level and are independent of User Settings.&lt;br /&gt;
&lt;br /&gt;
*The optimization plot, overlaid contour plot and dynamic overlaid contour plot dialogs now use a tree format. (Specific factors can now be held constant in the optimization plot.)&lt;br /&gt;
&lt;br /&gt;
*Ignore/Include column added to standard and robust folios. (Ignored rows are not included in any of the calculations.)&lt;br /&gt;
&lt;br /&gt;
*Dialogs added to modify all factors or all responses from one spot.&lt;br /&gt;
&lt;br /&gt;
*Central Composite factor values can be assigned based on Alpha values.&lt;br /&gt;
&lt;br /&gt;
*New and improved Surface plot.&lt;br /&gt;
**Basic plot functionality remains the same as the previous surface plot.&lt;br /&gt;
**Appearance is greatly improved (including anti-aliasing and legible text).&lt;br /&gt;
**Interface has changed substantially.&lt;br /&gt;
**Available settings have been vastly expanded.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== [[Image:RGAIcon.png|32px]] RGA 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Data Analysis Applications - Multiple Projects|RGA Multiple Projects]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} &lt;br /&gt;
&lt;br /&gt;
*Modifications/improvements to multiple systems analysis (and generating the equivalent single system).&lt;br /&gt;
&lt;br /&gt;
*Folios/data sheets can now be associated with a [[Metric]] variable. &lt;br /&gt;
&lt;br /&gt;
*Metric variable array can be used as a data source. Track and plot (and even analyze) changes in analysis metrics over time. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== [[Image:XFRACASIcon.png|32px]] XFRACAS 10  ==&lt;br /&gt;
&lt;br /&gt;
*Multiple &amp;quot;under-the-hood&amp;quot; improvements.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== [[Image:MPCIcon.png|32px]] MPC 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Synthesis API 10  ==&lt;br /&gt;
&lt;br /&gt;
*Calculate compound (analytical diagram) models via an API call. &lt;br /&gt;
*Manipulate the Xfmea System Hierarchy via the API.&lt;br /&gt;
&lt;br /&gt;
= New Applications Added to the Platform for Version 10  =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== SEP: [[Synthesis Enterprise Portal]] 10 {{Font|[NEW]|12|tahoma|italic|rgb(205,16,118)}}  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ReliaSoft’s Synthesis Enterprise Portal (SEP) opens up the Synthesis Platform, and your work&lt;br /&gt;
in the platform, to your whole organization. You can share your progress, results and analyses with management and colleagues using the new web-based portal, accessible from any web-enabled device. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SEP Areas&#039;&#039;&#039;&lt;br /&gt;
*Home Landing Page&lt;br /&gt;
*Projects and Project Information Page &lt;br /&gt;
**[[SEP Analysis in Project Pages]] (Based on current Project)&lt;br /&gt;
*Timeline Messaging&lt;br /&gt;
*Tasks &amp;amp; Actions&lt;br /&gt;
*Repository Management Pages&lt;br /&gt;
&lt;br /&gt;
== Markov 10 {{Font|[NEW]|12|tahoma|italic|rgb(205,16,118)}}  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Apply Markov analysis.&lt;br /&gt;
*Implemented in the BlockSim/RENO interface. &lt;br /&gt;
**Introduces two new diagram types:&lt;br /&gt;
***Discrete Markov: For creating and analyzing single- or multi-phase diagrams with discrete transition probabilities.&lt;br /&gt;
***Continuous Markov: For creating and analyzing single- or multi-phase diagrams with continuous distributions for transition probabilities.&lt;br /&gt;
&lt;br /&gt;
== The Synthesis Dashboard Designer 10 {{Font|[NEW]|12|tahoma|italic|rgb(205,16,118)}}  ==&lt;br /&gt;
&lt;br /&gt;
* Create dashboard templates that can be published and viewed from each hosting application and the Synthesis Enterprise Portal.&lt;br /&gt;
&lt;br /&gt;
* Dashboard designs are available for the following analyses:&lt;br /&gt;
** DFR Planner&lt;br /&gt;
** RDW Data&lt;br /&gt;
** BlockSim Simulation Results&lt;br /&gt;
** Synthesis Explorer&lt;br /&gt;
** Xfmea/RCM++/RBI Items&lt;br /&gt;
&lt;br /&gt;
== ReliaSoft&#039;s Course Player 10 {{Font|[NEW]|12|tahoma|italic|rgb(205,16,118)}}  ==&lt;br /&gt;
&lt;br /&gt;
A new eLearning delivery platform designed from the ground up by ReliaSoft to create and deliver ReliaSoft eCourses. Details upon release.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=SynthesisX&amp;diff=56968</id>
		<title>SynthesisX</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=SynthesisX&amp;diff=56968"/>
		<updated>2015-02-11T23:37:16Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* 32px Weibull++/ALTA 10 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:SynthesisX}} [[Image:SynthesisX buiding.png|right|360x194px]] &lt;br /&gt;
&lt;br /&gt;
Version 10 of the Synthesis Platform, Synthesis X, is currently in development with a planned release in Q1 of 2015. This document was last revised on {{Template:SynXRevDate}}. [[Image:ConstSX.gif|right|200x250px]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
This page is a working draft of the changes/modifications planned for this version. It is made publicly available to customers for input and feedback. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Comments, Questions or Feedback ===&lt;br /&gt;
&lt;br /&gt;
You can use the new &#039;&#039;&#039;[http://www.reliability-discussion.com/forumdisplay.php?f=42 Synthesis sub-forum]&#039;&#039;&#039; in the Reliability Discussion Forum for comments, questions, suggestions and/or feedback related to the planned modifications. &lt;br /&gt;
&lt;br /&gt;
Alternatively, you can send an [mailto:Synthesis@ReliaSoft.com?subject=SynthesisX%20Feedback%20from%20ReliaWiki e-mail to the Development team].&lt;br /&gt;
&lt;br /&gt;
= Platform-Wide Modifications =&lt;br /&gt;
&lt;br /&gt;
These platform modifications are incorporated into all applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Template Projects]]&#039;&#039;&#039; (now called &amp;quot;Reference Projects&amp;quot;) added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**&#039;&#039;&#039;[[Global/Template Resources]]&#039;&#039;&#039; added/expanded. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}}  {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Synthesis-wide &#039;&#039;&#039;[[DFR Planner]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}}  {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**New &#039;&#039;&#039;[[DFR Resources]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**New &#039;&#039;&#039;[[Work Days Scheduler]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Expanded Alerts]]&#039;&#039;&#039;. E-mail alerts and notifications expanded and streamlined. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} &lt;br /&gt;
*&#039;&#039;&#039;[[User Profiles]]&#039;&#039;&#039; expanded. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Change in &#039;&#039;&#039;[[Global Identifiers]]&#039;&#039;&#039;. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Advanced Categorization]]&#039;&#039;&#039; and Filtering added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Global Item Filter]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Synthesis Resource Shortcuts]]&#039;&#039;&#039; (called &amp;quot;Synthesis Locator Links&amp;quot;) added. Open an app/project/folio/sheet from a single file. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Non-Auto-Save Mode]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Expanded Project Explorer]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Unified Actions]]&#039;&#039;&#039;. All Actions including test requests are unified across all products and can be managed/linked with the DFR planner, as well as through the new web-based [[Synthesis Enterprise Portal]]. {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Synthesis Work Books]]&#039;&#039;&#039;.  {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
*&#039;&#039;&#039;[[Results Dashboard]]&#039;&#039;&#039; modified and expanded. Now available in more locations (FMEA data, Synthesis Explorer) and integrated with the SEP. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
*Unified User Preferences Setup Window {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
* Modified Security Permissions &#039;&#039;&#039;[[V10 Security and Permissions]]&#039;&#039;&#039;  {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
Postponed&lt;br /&gt;
*Parts Table Reference added to desktop applications for a better tie-in to XFRACAS. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [NS] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
=== UI Changes Not Affecting Functionality  ===&lt;br /&gt;
&lt;br /&gt;
*UDFs for folios also changed to be a tree structure. {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*Model window interface updated to use a tree structure similar to other resources. {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*Interface improvements for Resource Manager: {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**&amp;quot;Show All,&amp;quot; &amp;quot;Show Only Unused&amp;quot; and &amp;quot;Show Only Duplicates&amp;quot; now indicate which option is selected.&lt;br /&gt;
**A status bar at the bottom of the window indicates which resource is selected, the Local/Global view option and the Selection option.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New Resources ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Metric]]&#039;&#039;&#039;. Shows a calculated result value from a model or simulation. Result values can be manually added to a stack of saved values and tracked over time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*FMEAs are now resources. This is only available in applications that use FMEAs (Xfmea/RCM++/RBI). {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
= Application-Specific Modifications  =&lt;br /&gt;
&lt;br /&gt;
== [[Image:Weibull++Icon.png|32px]] Weibull++/ALTA 10 ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Fractional Failures Analysis]]&#039;&#039;&#039; added. Discount failures based on the effectiveness of planned corrective actions for what-if analysis. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Data Analysis Applications - Multiple Projects|Weibull/ALTA Multiple Projects]]&#039;&#039;&#039; added. Open multiple projects at the same time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*New &#039;&#039;&#039;[[3D-Plot Folio]]&#039;&#039;&#039; added. New, original 3D plots. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Catastrophic Degradation Analysis]]&#039;&#039;&#039; added. Direct MLE solution options for Degradation Analysis added, allowing for catastrophic degradation analysis. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;User-Defined Degradation Model&#039;&#039;&#039; added in Weibull++; it provides the same functionality as the &#039;&#039;&#039;Equation Fit Solver&#039;&#039;&#039;.&lt;br /&gt;
*&#039;&#039;&#039;[[Item Specific Calculation Options]]&#039;&#039;&#039;. Folio Calculation Options are defined at the folio level and  are independent of User Settings. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Publish analyses to the Synthesis Enterprise Portal. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Published Folio/Data Sheets can now be associated with a [[Metric]] variable. Track and plot (and even analyze) changes in analysis metrics over time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Reliability Data Warehouse (RDW) functionality expanded and interface redesigned. {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**Link automatically to external databases through RDW (SQL, Oracle, Access).&lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;.  {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
== [[Image:BlockSimIcon.png|32px]] BlockSim 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[BlockSim General Enhancements]]&#039;&#039;&#039;. General Interface and Analysis enhancements were made. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[FMRA View Expanded]]&#039;&#039;&#039;. Additional functionality and capability added to the BlockSim FMRA view. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[BlockSim Multiple Projects]]&#039;&#039;&#039;. Open multiple projects at the same time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Item Specific Calculation Options]]&#039;&#039;&#039;. Diagram Calculation Options are defined at the diagram level and  are independent of User Settings. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Diagrams can now be associated with a [[Metric]] variable. Track and plot (and even analyze) changes in analysis metrics over time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Curved line type connectors added to diagrams. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
== [[Image:RENOIcon.png|32px]] RENO 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*Drag &amp;amp; Drop Mode enhanced and can now be used in tabbed view as well. &lt;br /&gt;
&lt;br /&gt;
*Option to not automatically validate equations added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*Ability to reference resources &#039;&#039;by name&#039;&#039; in the equations instead of by reference.  {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Item Specific Calculation Options]]&#039;&#039;&#039;. Diagram Calculation Options are defined at the diagram level and  are independent of User Settings. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[RENO_Multiple_Projects|RENO Multiple Projects]]&#039;&#039;&#039;. Open multiple projects at the same time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*Quick send to Weibull++ for result containers. &lt;br /&gt;
&lt;br /&gt;
*Curved line type connectors added to diagrams. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
== [[Image:XfmeaIcon.png|32px]] Xfmea 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Analyses as Resources]]&#039;&#039;&#039;. FMEAs are now resources (i.e., Linked FMEAs) and can be reused. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*FMEA Smart Add. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [NS] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
* Optional color coding on scale selections added based on item target reliability.  {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
* Target Reliability allocation to causes added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
* Read and Push Metrics added to all items. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[QCPN]]&#039;&#039;&#039; metric added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*DFR planner is no longer an analysis in Xfmea but is now a separate project-level &#039;&#039;utility application&#039;&#039; available to all Synthesis applications. &lt;br /&gt;
**Conversion considerations: &lt;br /&gt;
***In V9, DFR plans were simpler and could be added to any item in the system hierarchy. &lt;br /&gt;
***The greatly expanded V10 DFR planner is no longer item-based but instead project-based. &lt;br /&gt;
***Upon conversion of older files, if more than one DFR plan exists in the project, the plans will be merged into a single project-level DFR planner. &lt;br /&gt;
*System Hierarchy Filtered View added. &lt;br /&gt;
*Interactive FMEA Block Diagrams added.  &lt;br /&gt;
*Automatic and linked Test Plan Generation and monitoring. Enhances and replaces existing DVP&amp;amp;R functionality. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Postponed&lt;br /&gt;
*&#039;&#039;&#039;[[FMEA Import/Compare Window]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [NS] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
== [[Image:RCMIcon.png|32px]] RCM++ 10  ==&lt;br /&gt;
&lt;br /&gt;
*Platform-wide modifications [[#top|(&#039;&#039;see the list at the top of this page&#039;&#039;)]].&lt;br /&gt;
*ALL Xfmea modifications are also included in RCM++ [[#XFMEA_10|&#039;&#039;(see the list under Xfmea)&#039;&#039;]].&lt;br /&gt;
&lt;br /&gt;
== [[Image:RBIIcon.png|32px]] RBI 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}}&lt;br /&gt;
*All Xfmea improvements are also available in RBI.&lt;br /&gt;
&lt;br /&gt;
== [[Image:LpredictIcon.png|28px|28]] Lambda Predict 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*FIDES &lt;br /&gt;
**A set of FIDES Phases can now be defined in a Phase Set.&lt;br /&gt;
**Two new FIDES plots.&lt;br /&gt;
*Parts Count added.&lt;br /&gt;
**Added a parts count prediction that restricts the behaviors to those allowable in a parts count.&lt;br /&gt;
*NSWC Updated&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== [[Image:DOEIcon.png|32px]] DOE++ 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
*[[Repeated Measurements]] added for standard designs with a single response. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Data Analysis Applications - Multiple Projects|DOE++ Multiple Projects]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Item Specific Calculation Options]]&#039;&#039;&#039;. Folio Calculation Options are defined at the folio level and are independent of User Settings.&lt;br /&gt;
&lt;br /&gt;
*The optimization plot, overlaid contour plot and dynamic overlaid contour plot dialogs now use a tree format. (Specific factors can now be held constant in the optimization plot.)&lt;br /&gt;
&lt;br /&gt;
*Ignore/Include column added to standard and robust folios. (Ignored rows are not included in any of the calculations.)&lt;br /&gt;
&lt;br /&gt;
*Dialogs added to modify all factors or all responses from one spot.&lt;br /&gt;
&lt;br /&gt;
*Central Composite factor values can be assigned based on Alpha values.&lt;br /&gt;
&lt;br /&gt;
*New and improved Surface plot.&lt;br /&gt;
**Basic plot functionality remains the same as the previous surface plot.&lt;br /&gt;
**Appearance is greatly improved (including anti-aliasing and legible text).&lt;br /&gt;
**Interface has changed substantially.&lt;br /&gt;
**Available settings have been vastly expanded.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== [[Image:RGAIcon.png|32px]] RGA 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Data Analysis Applications - Multiple Projects|RGA Multiple Projects]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} &lt;br /&gt;
&lt;br /&gt;
*Modifications/improvements to multiple systems analysis (and generating the equivalent single system).&lt;br /&gt;
&lt;br /&gt;
*Folios/data sheets can now be associated with a [[Metric]] variable. &lt;br /&gt;
&lt;br /&gt;
*Metric variable array can be used as a data source. Track and plot (and even analyze) changes in analysis metrics over time. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== [[Image:XFRACASIcon.png|32px]] XFRACAS 10  ==&lt;br /&gt;
&lt;br /&gt;
*Multiple &amp;quot;under-the-hood&amp;quot; improvements.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== [[Image:MPCIcon.png|32px]] MPC 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Synthesis API 10  ==&lt;br /&gt;
&lt;br /&gt;
*Calculate compound (analytical diagram) models via an API call. &lt;br /&gt;
*Manipulate the Xfmea System Hierarchy via the API.&lt;br /&gt;
&lt;br /&gt;
= New Applications Added to the Platform for Version 10  =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== SEP: [[Synthesis Enterprise Portal]] 10 {{Font|[NEW]|12|tahoma|italic|rgb(205,16,118)}}  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ReliaSoft’s Synthesis Enterprise Portal (SEP) opens up the Synthesis Platform, and your work in the platform, to your whole organization. You can share your progress, results and analyses with management and colleagues using the new web-based portal, accessible from any web-enabled device. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SEP Areas&#039;&#039;&#039;&lt;br /&gt;
*Home Landing Page&lt;br /&gt;
*Projects and Project Information Page &lt;br /&gt;
**[[SEP Analysis in Project Pages]] (Based on current Project)&lt;br /&gt;
*Timeline Messaging&lt;br /&gt;
*Tasks &amp;amp; Actions&lt;br /&gt;
*Repository Management Pages&lt;br /&gt;
&lt;br /&gt;
== Markov 10 {{Font|[NEW]|12|tahoma|italic|rgb(205,16,118)}}  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Apply Markov analysis.&lt;br /&gt;
*Implemented in the BlockSim/RENO interface as new diagram types. &lt;br /&gt;
**Introduces two new diagram types:&lt;br /&gt;
*** Discrete Markov:  For creating and analyzing single or multi-phased diagrams with discrete transition probabilities.&lt;br /&gt;
***Continuous Markov:  For creating and analyzing single or multi-phased diagrams with continuous distributions for transition probabilities.&lt;br /&gt;
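The difference between the two diagram types above comes down to how transitions are modeled: discrete Markov diagrams step a state-probability vector through a fixed transition matrix, while continuous Markov diagrams use distributions for transition rates. As a minimal illustrative sketch of the discrete case (generic Python, not ReliaSoft code; the state names and probabilities are hypothetical):

```python
# Illustrative two-state discrete Markov chain, e.g. Operational vs. Failed.
# Row i gives the transition probabilities out of state i; each row sums to 1.
P = [[0.9, 0.1],   # from Operational: stay 0.9, fail 0.1
     [0.6, 0.4]]   # from Failed: repaired 0.6, stay failed 0.4

def step(state_probs, P):
    """One discrete step: multiply the state-probability vector by P."""
    n = len(P)
    return [sum(state_probs[i] * P[i][j] for i in range(n)) for j in range(n)]

probs = [1.0, 0.0]          # start in the Operational state with certainty
for _ in range(3):          # propagate three discrete steps
    probs = step(probs, P)
# probs now holds the probability of being in each state after three steps
```

A multi-phased diagram would simply switch to a different transition matrix for each phase; the continuous case replaces the per-step matrix multiply with transition-rate distributions.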
&lt;br /&gt;
== The Synthesis Dashboard Designer 10 {{Font|[NEW]|12|tahoma|italic|rgb(205,16,118)}}  ==&lt;br /&gt;
&lt;br /&gt;
* Create dashboard templates that can be published and viewed from each hosting application and the Synthesis Enterprise Portal.&lt;br /&gt;
&lt;br /&gt;
* Dashboard designs are available for the following analyses:&lt;br /&gt;
** DFR Planner&lt;br /&gt;
** RDW Data&lt;br /&gt;
** BlockSim Simulation Results&lt;br /&gt;
** Synthesis Explorer&lt;br /&gt;
** Xfmea/RCM++/RBI Items&lt;br /&gt;
&lt;br /&gt;
== ReliaSoft&#039;s Course Player 10 {{Font|[NEW]|12|tahoma|italic|rgb(205,16,118)}}  ==&lt;br /&gt;
&lt;br /&gt;
A new eLearning delivery platform designed from the ground up by ReliaSoft to create and deliver ReliaSoft eCourses. Details upon release.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=SynthesisX&amp;diff=56967</id>
		<title>SynthesisX</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=SynthesisX&amp;diff=56967"/>
		<updated>2015-02-11T23:36:50Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* 32px Weibull++/ALTA 10 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:SynthesisX}} [[Image:SynthesisX buiding.png|right|360x194px]] &lt;br /&gt;
&lt;br /&gt;
Version 10 of the Synthesis Platform, Synthesis X, is currently in development with a planned release in Q1 of 2015. This document was last revised on {{Template:SynXRevDate}}. [[Image:ConstSX.gif|right|200x250px]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
This page is a working draft of the changes/modifications planned for this version. It is made publicly available to customers for input and feedback. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Comments, Questions or Feedback ===&lt;br /&gt;
&lt;br /&gt;
You can use the new &#039;&#039;&#039;[http://www.reliability-discussion.com/forumdisplay.php?f=42 Synthesis sub-forum]&#039;&#039;&#039; in the Reliability Discussion Forum for comments, questions, suggestions and/or feedback related to the planned modifications. &lt;br /&gt;
&lt;br /&gt;
Alternatively, you can send an [mailto:Synthesis@ReliaSoft.com?subject=SynthesisX%20Feedback%20from%20ReliaWiki e-mail to the Development team].&lt;br /&gt;
&lt;br /&gt;
= Platform-Wide Modifications =&lt;br /&gt;
&lt;br /&gt;
These platform modifications are incorporated into all applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Template Projects]]&#039;&#039;&#039; (now called &amp;quot;Reference Projects&amp;quot;) added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**&#039;&#039;&#039;[[Global/Template Resources]]&#039;&#039;&#039; added/expanded. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}}  {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Synthesis-wide &#039;&#039;&#039;[[DFR Planner]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}}  {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**New &#039;&#039;&#039;[[DFR Resources]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**New &#039;&#039;&#039;[[Work Days Scheduler]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Expanded Alerts]]&#039;&#039;&#039;. E-mail alerts and notifications expanded and streamlined. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} &lt;br /&gt;
*&#039;&#039;&#039;[[User Profiles]]&#039;&#039;&#039; expanded. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Change in &#039;&#039;&#039;[[Global Identifiers]]&#039;&#039;&#039;. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Advanced Categorization]]&#039;&#039;&#039; and Filtering added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Global Item Filter]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Synthesis Resource Shortcuts]]&#039;&#039;&#039; (called &amp;quot;Synthesis Locator Links&amp;quot;) added. Open an app/project/folio/sheet from a single file. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Non-Auto-Save Mode]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Expanded Project Explorer]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Unified Actions]]&#039;&#039;&#039;. All Actions including test requests are unified across all products and can be managed/linked with the DFR planner, as well as through the new web-based [[Synthesis Enterprise Portal]]. {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Synthesis Work Books]]&#039;&#039;&#039;.  {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
*&#039;&#039;&#039;[[Results Dashboard]]&#039;&#039;&#039; modified and expanded. Now available in more locations (FMEA data, Synthesis Explorer) and integrated with the SEP. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
*Unified User Preferences Setup Window {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[V10 Security and Permissions]]&#039;&#039;&#039;. Modified security permissions. {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
Postponed&lt;br /&gt;
*Parts Table Reference added to desktop applications for better tie-in to XFRACAS. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [NS] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
=== UI Changes Not Affecting Functionality  ===&lt;br /&gt;
&lt;br /&gt;
*UDFs for folios also changed to use a tree structure. {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*Model window interface updated to use a tree structure similar to other resources. {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*Interface improvements for Resource Manager: {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**&amp;quot;Show All,&amp;quot; &amp;quot;Show Only Unused&amp;quot; and &amp;quot;Show Only Duplicates&amp;quot; now indicate which option is selected.&lt;br /&gt;
**A status bar at the bottom of the window indicates which resource is selected, the Local/Global view option and the Selection option.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New Resources ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Metric]]&#039;&#039;&#039;. Shows calculated results value from a model or simulation result. Result values can be manually added to a stack of saved values and tracked over time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*FMEAs are now resources. This is only available in applications that use FMEAs (Xfmea/RCM++/RBI). {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
= Application-Specific Modifications  =&lt;br /&gt;
&lt;br /&gt;
== [[Image:Weibull++Icon.png|32px]] Weibull++/ALTA 10 ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Fractional Failures Analysis]]&#039;&#039;&#039; added. Discount failures based on the effectiveness of planned corrective actions for what-if analysis. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Data Analysis Applications - Multiple Projects|Weibull/ALTA Multiple Projects]]&#039;&#039;&#039; added. Open multiple projects at the same time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*New &#039;&#039;&#039;[[3D-Plot Folio]]&#039;&#039;&#039; added. New, original 3D plots. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Catastrophic Degradation Analysis]]&#039;&#039;&#039; added. Direct MLE solution options for Degradation Analysis added, allowing for catastrophic degradation analysis. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;User-Defined Degradation Model&#039;&#039;&#039; was added in Weibull++. This is the same feature as the &#039;&#039;&#039;Equation Fit Solver&#039;&#039;&#039; in Weibull++.&lt;br /&gt;
*&#039;&#039;&#039;[[Item Specific Calculation Options]]&#039;&#039;&#039;. Folio Calculation Options are defined at the folio level and  are independent of User Settings. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Publish analyses to the Synthesis Enterprise Portal. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Published Folio/Data Sheets can now be associated with a [[Metric]] variable. Track and plot (and even analyze) changes in analysis metrics over time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Reliability Data Warehouse (RDW) functionality expanded and interface redesigned. {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
**Link automatically to external databases through RDW (SQL, Oracle, Access).&lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;.  {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
== [[Image:BlockSimIcon.png|32px]] BlockSim 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[BlockSim General Enhancements]]&#039;&#039;&#039;. General Interface and Analysis enhancements were made. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[FMRA View Expanded]]&#039;&#039;&#039;. Additional functionality and capability added to the BlockSim FMRA view. {{Font|[MODIFIED]|8|tahoma|italic|rgb(128,35,205)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[BlockSim Multiple Projects]]&#039;&#039;&#039;. Open multiple projects at the same time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [FC] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Item Specific Calculation Options]]&#039;&#039;&#039;. Diagram Calculation Options are defined at the diagram level and  are independent of User Settings. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Diagrams can now be associated with a [[Metric]] variable. Track and plot (and even analyze) changes in analysis metrics over time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*Curved line type connectors added to diagrams. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
== [[Image:RENOIcon.png|32px]] RENO 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*Drag &amp;amp; Drop Mode enhanced and can now be used in tabbed view as well. &lt;br /&gt;
&lt;br /&gt;
*Option to not automatically validate equations added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*Ability to reference resources &#039;&#039;by name&#039;&#039; in the equations instead of by reference.  {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Item Specific Calculation Options]]&#039;&#039;&#039;. Diagram Calculation Options are defined at the diagram level and  are independent of User Settings. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[RENO_Multiple_Projects|RENO Multiple Projects]]&#039;&#039;&#039;. Open multiple projects at the same time. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
&lt;br /&gt;
*Quick send to Weibull++ for result containers. &lt;br /&gt;
&lt;br /&gt;
*Curved line type connectors added to diagrams. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
== [[Image:XfmeaIcon.png|32px]] Xfmea 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Analyses as Resources]]&#039;&#039;&#039;. FMEAs are now resources (i.e., Linked FMEAs) and can be reused. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*FMEA Smart Add. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [NS] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
* Optional color coding on scale selections added based on item target reliability.  {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
* Target Reliability allocation to causes added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
* Read and Push Metrics added to all items. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*&#039;&#039;&#039;[[QCPN]]&#039;&#039;&#039; metric added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [IP] ]]|8|tahoma|italic|orange}} &lt;br /&gt;
*DFR planner is no longer an analysis in Xfmea but is now a separate project-level &#039;&#039;utility application&#039;&#039; available to all Synthesis applications. &lt;br /&gt;
**Conversion considerations: &lt;br /&gt;
***In V9, DFR plans were simpler and could be added to any item in the system hierarchy. &lt;br /&gt;
***The greatly expanded V10 DFR planner is no longer item-based but instead project-based. &lt;br /&gt;
***Upon conversion of older files, if more than one DFR plan exists in the project, the plans will be merged into a single project-level DFR planner. &lt;br /&gt;
*System Hierarchy Filtered View added. &lt;br /&gt;
*Interactive FMEA Block Diagrams added.  &lt;br /&gt;
*Automatic and linked Test Plan Generation and monitoring. Enhances and replaces existing DVP&amp;amp;R functionality. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Postponed&lt;br /&gt;
*&#039;&#039;&#039;[[FMEA Import/Compare Window]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} {{Font|[[Current Build Status| [NS] ]]|8|tahoma|italic|orange}}&lt;br /&gt;
&lt;br /&gt;
== [[Image:RCMIcon.png|32px]] RCM++ 10  ==&lt;br /&gt;
&lt;br /&gt;
*Platform-wide modifications [[#top|(&#039;&#039;see the list at the top of this page&#039;&#039;)]].&lt;br /&gt;
*ALL Xfmea modifications are also included in RCM++ [[#XFMEA_10|&#039;&#039;(see the list under Xfmea)&#039;&#039;]].&lt;br /&gt;
&lt;br /&gt;
== [[Image:RBIIcon.png|32px]] RBI 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}}&lt;br /&gt;
*All Xfmea improvements are also available in RBI.&lt;br /&gt;
&lt;br /&gt;
== [[Image:LpredictIcon.png|28px|28]] Lambda Predict 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*FIDES &lt;br /&gt;
**A set of FIDES Phases can now be defined in a Phase Set.&lt;br /&gt;
**Two new FIDES plots.&lt;br /&gt;
*Parts Count added.&lt;br /&gt;
**Added a parts count prediction that restricts the behaviors to those allowable in a parts count.&lt;br /&gt;
*NSWC Updated&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== [[Image:DOEIcon.png|32px]] DOE++ 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
*[[Repeated Measurements]] added for standard designs with a single response. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Data Analysis Applications - Multiple Projects|DOE++ Multiple Projects]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Item Specific Calculation Options]]&#039;&#039;&#039;. Folio Calculation Options are defined at the folio level and are independent of User Settings.&lt;br /&gt;
&lt;br /&gt;
*The optimization plot, overlaid contour plot and dynamic overlaid contour plot dialogs now use a tree format. (Specific factors can now be held constant in the optimization plot.)&lt;br /&gt;
&lt;br /&gt;
*Ignore/Include column added to standard and robust folios. (Ignored rows are not included in any of the calculations.)&lt;br /&gt;
&lt;br /&gt;
*Dialogs added to modify all factors or all responses from one spot.&lt;br /&gt;
&lt;br /&gt;
*Central Composite factor values can be assigned based on Alpha values.&lt;br /&gt;
&lt;br /&gt;
*New and improved Surface plot.&lt;br /&gt;
**Basic plot functionality remains the same as the previous surface plot.&lt;br /&gt;
**Appearance is greatly improved (including anti-aliasing and legible text).&lt;br /&gt;
**Interface has changed substantially.&lt;br /&gt;
**Available settings have been vastly expanded.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== [[Image:RGAIcon.png|32px]] RGA 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Data Analysis Applications - Multiple Projects|RGA Multiple Projects]]&#039;&#039;&#039; added. {{Font|[NEW]|8|tahoma|italic|rgb(205,16,118)}} &lt;br /&gt;
&lt;br /&gt;
*Modifications/improvements to multiple systems analysis (and generating the equivalent single system).&lt;br /&gt;
&lt;br /&gt;
*Folios/data sheets can now be associated with a [[Metric]] variable. &lt;br /&gt;
&lt;br /&gt;
*Metric variable array can be used as a data source. Track and plot (and even analyze) changes in analysis metrics over time. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Interactive plot zoom]]&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== [[Image:XFRACASIcon.png|32px]] XFRACAS 10  ==&lt;br /&gt;
&lt;br /&gt;
*Multiple &amp;quot;under-the-hood&amp;quot; improvements.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== [[Image:MPCIcon.png|32px]] MPC 10  ==&lt;br /&gt;
&lt;br /&gt;
{{Template:SynxItemhead}} &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Synthesis API 10  ==&lt;br /&gt;
&lt;br /&gt;
*Calculate compound (analytical diagram) models via an API call. &lt;br /&gt;
*Manipulate the Xfmea system hierarchy via an API call.&lt;br /&gt;
&lt;br /&gt;
= New Applications Added to the Platform for Version 10  =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== SEP: [[Synthesis Enterprise Portal]] 10 {{Font|[NEW]|12|tahoma|italic|rgb(205,16,118)}}  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ReliaSoft’s Synthesis Enterprise Portal (SEP) opens up the Synthesis Platform, and your work&lt;br /&gt;
in the platform, to your whole organization. You can share your progress, results and analyses with management and colleagues using the new web-based portal, accessible from any web-enabled device. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SEP Areas&#039;&#039;&#039;&lt;br /&gt;
*Home Landing Page&lt;br /&gt;
*Projects and Project Information Page &lt;br /&gt;
**[[SEP Analysis in Project Pages]] (Based on current Project)&lt;br /&gt;
*Timeline Messaging&lt;br /&gt;
*Tasks &amp;amp; Actions&lt;br /&gt;
*Repository Management Pages&lt;br /&gt;
&lt;br /&gt;
== Markov 10 {{Font|[NEW]|12|tahoma|italic|rgb(205,16,118)}}  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Apply Markov analysis.&lt;br /&gt;
*Implemented in the BlockSim/RENO interface as new diagram types. &lt;br /&gt;
**Introduces two new diagram types:&lt;br /&gt;
*** Discrete Markov:  For creating and analyzing single or multi-phased diagrams with discrete transition probabilities.&lt;br /&gt;
***Continuous Markov:  For creating and analyzing single or multi-phased diagrams with continuous distributions for transition probabilities.&lt;br /&gt;
&lt;br /&gt;
== The Synthesis Dashboard Designer 10 {{Font|[NEW]|12|tahoma|italic|rgb(205,16,118)}}  ==&lt;br /&gt;
&lt;br /&gt;
* Create dashboard templates that can be published and viewed from each hosting application and the Synthesis Enterprise Portal.&lt;br /&gt;
&lt;br /&gt;
* Dashboard designs are available for the following analyses:&lt;br /&gt;
** DFR Planner&lt;br /&gt;
** RDW Data&lt;br /&gt;
** BlockSim Simulation Results&lt;br /&gt;
** Synthesis Explorer&lt;br /&gt;
** Xfmea/RCM++/RBI Items&lt;br /&gt;
&lt;br /&gt;
== ReliaSoft&#039;s Course Player 10 {{Font|[NEW]|12|tahoma|italic|rgb(205,16,118)}}  ==&lt;br /&gt;
&lt;br /&gt;
A new eLearning delivery platform designed from the ground up by ReliaSoft to create and deliver ReliaSoft eCourses. Details upon release.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56802</id>
		<title>Parameter Estimation</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56802"/>
		<updated>2014-12-03T23:05:26Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Rank Adjustment Method for Right Censored Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|4|Parameter Estimation}}&lt;br /&gt;
The term &#039;&#039;parameter estimation&#039;&#039; refers to the process of using sample data (in reliability engineering, usually times-to-failure or success data) to estimate the parameters of the selected distribution. Several parameter estimation methods are available. This section presents an overview of the available methods used in life data analysis. More specifically, we start with the relatively simple method of Probability Plotting and continue with the more sophisticated methods of Rank Regression (or Least Squares), Maximum Likelihood Estimation and Bayesian Estimation Methods.&lt;br /&gt;
&lt;br /&gt;
=Probability Plotting=&lt;br /&gt;
The least mathematically intensive method for parameter estimation is the method of probability plotting. As the term implies, probability plotting involves a physical plot of the data on specially constructed &#039;&#039;probability plotting paper&#039;&#039;. This method is easily implemented by hand, given that one can obtain the appropriate probability plotting paper.&lt;br /&gt;
&lt;br /&gt;
The method of probability plotting takes the &#039;&#039;cdf&#039;&#039; of the distribution and attempts to linearize it by employing a specially constructed paper. The following sections illustrate the steps in this method using the 2-parameter Weibull distribution as an example. This includes:&lt;br /&gt;
&lt;br /&gt;
*Linearize the unreliability function&lt;br /&gt;
*Construct the probability plotting paper&lt;br /&gt;
*Determine the X and Y positions of the plot points&lt;br /&gt;
&lt;br /&gt;
The plot can then be used to read any particular time or reliability/unreliability value of interest.&lt;br /&gt;
&lt;br /&gt;
==Linearizing the Unreliability Function==&lt;br /&gt;
&lt;br /&gt;
In the case of the 2-parameter Weibull, the &#039;&#039;cdf&#039;&#039; (also the unreliability &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt;) is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(t)=Q(t)=1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function can then be linearized (i.e., put in the common form of &amp;lt;math&amp;gt;y = mx + b\,\!&amp;lt;/math&amp;gt; format) as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 Q(t)= &amp;amp;  1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}  \\&lt;br /&gt;
  \ln (1-Q(t))= &amp;amp; \ln \left[ {e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}} \right]  \\&lt;br /&gt;
  \ln (1-Q(t))=&amp;amp; -\left(\tfrac{t}{\eta}\right)^{\beta}  \\&lt;br /&gt;
  \ln ( -\ln (1-Q(t)))= &amp;amp; \beta \left(\ln \left( \frac{t}{\eta }\right)\right) \\&lt;br /&gt;
  \ln \left( \ln \left( \frac{1}{1-Q(t)}\right) \right) = &amp;amp; \beta\ln{ t} -\beta \ln \left( \eta \right)  \\&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then by setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x=\ln \left( t \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equation can then be rewritten as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\beta x-\beta \ln \left( \eta  \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is now a linear equation with a slope of: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
m = \beta&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and an intercept of:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=-\beta \cdot ln(\eta)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
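As a quick numerical check of this linearization, the sketch below (using hypothetical parameter values &amp;lt;math&amp;gt;\beta = 1.5\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta = 1000\,\!&amp;lt;/math&amp;gt;, chosen only for illustration) verifies that points generated from the Weibull ''cdf'' fall exactly on the line with slope &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and intercept &amp;lt;math&amp;gt;-\beta \ln(\eta)\,\!&amp;lt;/math&amp;gt;:

```python
import math

# Hypothetical Weibull parameters, for illustration only
beta, eta = 1.5, 1000.0

def unreliability(t):
    """Q(t) = 1 - exp(-(t/eta)^beta)"""
    return 1.0 - math.exp(-((t / eta) ** beta))

def to_linear(t):
    """Map (t, Q(t)) to the (x, y) coordinates of the linearized form."""
    q = unreliability(t)
    x = math.log(t)
    y = math.log(math.log(1.0 / (1.0 - q)))
    return x, y

# y should equal beta*x - beta*ln(eta) for any t > 0
for t in (100.0, 500.0, 2000.0):
    x, y = to_linear(t)
    assert abs(y - (beta * x - beta * math.log(eta))) < 1e-9
```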
&lt;br /&gt;
==Constructing the Paper==&lt;br /&gt;
The next task is to construct the Weibull probability plotting paper with the appropriate y and x axes. The x-axis transformation is simply logarithmic. The y-axis is a bit more complex, requiring a double log reciprocal transformation, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left(\ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt; is the unreliability. &lt;br /&gt;
&lt;br /&gt;
Such papers have been created by different vendors and are called &#039;&#039;probability plotting papers&#039;&#039;. ReliaSoft&#039;s reliability engineering resource website at www.weibull.com has different plotting papers available for [http://www.weibull.com/GPaper/index.htm download]. &lt;br /&gt;
&lt;br /&gt;
[[Image:WeibullPaper2C.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
To illustrate, consider the following probability plot on a slightly different type of Weibull probability paper. &lt;br /&gt;
&lt;br /&gt;
[[Image:different_weibull_paper.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
This paper is constructed based on the mentioned y and x transformations, where the y-axis represents unreliability and the x-axis represents time. Both of these values must be known for each time-to-failure point we want to plot. &lt;br /&gt;
&lt;br /&gt;
Then, given the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; value for each point, the points can easily be put on the plot. Once the points have been placed on the plot, the best possible straight line is drawn through these points. Once the line has been drawn, the slope of the line can be obtained (some probability papers include a slope indicator to simplify this calculation). This is the parameter &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;, which is the value of the slope. To determine the scale parameter, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; (also called the &#039;&#039;characteristic life&#039;&#039;), one reads the time from the x-axis corresponding to &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Note that at:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Q(t=\eta)= &amp;amp; 1-{{e}^{-{{\left( \tfrac{\eta }{\eta } \right)}^{\beta }}}} \\ &lt;br /&gt;
  = &amp;amp; 1-{{e}^{-1}} \\ &lt;br /&gt;
  = &amp;amp; 0.632 \\ &lt;br /&gt;
  = &amp;amp; 63.2%  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, if we enter the &#039;&#039;y&#039;&#039; axis at &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;, the corresponding value of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; will be equal to &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;. In this way, using this simple methodology, the parameters of the Weibull distribution can be estimated.&lt;br /&gt;
&lt;br /&gt;
==Determining the X and Y Position of the Plot Points==&lt;br /&gt;
The points on the plot represent our data or, more specifically, our times-to-failure data. If, for example, we tested four units that failed at 10, 20, 30 and 40 hours, then we would use these times as our &#039;&#039;x&#039;&#039; values or time values. &lt;br /&gt;
&lt;br /&gt;
Determining the appropriate &#039;&#039;y&#039;&#039; plotting positions, or the unreliability values, is a little more complex. To determine the &#039;&#039;y&#039;&#039; plotting positions, we must first determine a value indicating the corresponding unreliability for that failure. In other words, we need to obtain the cumulative percent failed for each time-to-failure. For example, the cumulative percent failed by 10 hours may be 25%, by 20 hours 50%, and so forth. This is a simple method illustrating the idea. The problem with this simple method is the fact that the 100% point is not defined on most probability plots; thus, an alternative and more robust approach must be used. The most widely used method of determining this value is the method of obtaining the &#039;&#039;median rank&#039;&#039; for each failure, as discussed next.&lt;br /&gt;
&lt;br /&gt;
===Median Ranks ===&lt;br /&gt;
The Median Ranks method is used to obtain an estimate of the unreliability for each failure. The median rank is the value that the true probability of failure, &amp;lt;math&amp;gt;Q({{T}_{j}})\,\!&amp;lt;/math&amp;gt;, should have at the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure out of a sample of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units at the 50% confidence level. &lt;br /&gt;
&lt;br /&gt;
The rank can be found for any percentage point, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, greater than zero and less than one, by solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;. This represents the rank, or unreliability estimate, for the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure in the following equation for the cumulative binomial: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the sample size and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; the order number. &lt;br /&gt;
&lt;br /&gt;
The median rank is obtained by solving this equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  at &amp;lt;math&amp;gt;P = 0.50\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;0.50=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; and we have four failures, we would solve the median rank equation for the value of &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  four times; once for each failure with &amp;lt;math&amp;gt;j= 1, 2, 3 \text{ and }4\,\!&amp;lt;/math&amp;gt;. This result can then be used as the unreliability estimate for each failure or the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;  plotting position. (See also [[The Weibull Distribution|The Weibull Distribution]]&amp;amp;nbsp;for a step-by-step example of this method.) The solution of the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  requires the use of numerical methods.&lt;br /&gt;
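The numerical solution can be sketched as follows. This is an illustrative bisection solver (not ReliaSoft's implementation), using only the Python standard library; it exploits the fact that the cumulative binomial is monotonically increasing in &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;:

```python
from math import comb

def cumulative_binomial(z, n, j):
    """P = sum over k = j..N of C(N,k) * z^k * (1-z)^(N-k)"""
    return sum(comb(n, k) * z**k * (1 - z)**(n - k) for k in range(j, n + 1))

def median_rank(n, j, p=0.50, tol=1e-10):
    """Solve the cumulative binomial equation for Z at P = p by bisection."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        # P increases with z, so move the bracket toward P = p
        if cumulative_binomial(mid, n, j) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Median ranks for the N = 4, j = 1..4 example above
ranks = [median_rank(4, j) for j in (1, 2, 3, 4)]
```

For &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; this yields approximately 0.1591, 0.3857, 0.6143 and 0.8409 for the four failures.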
&lt;br /&gt;
===Beta and F Distributions Approach===&lt;br /&gt;
A more straightforward and easier method of estimating median ranks is by applying two transformations to the cumulative binomial equation, first to the beta distribution and then to the F distribution, resulting in [[Appendix:_Life_Data_Analysis_References|[12, 13]]]: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   MR &amp;amp; = &amp;amp; \tfrac{1}{1+\tfrac{N-j+1}{j}{{F}_{0.50;m;n}}}  \\&lt;br /&gt;
   m &amp;amp; = &amp;amp; 2(N-j+1)  \\&lt;br /&gt;
   n &amp;amp; = &amp;amp; 2j  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{F}_{0.50;m;n}}\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution at the 0.50 point, with &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; degrees of freedom, for failure &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; out of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units.&lt;br /&gt;
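Assuming SciPy is available, this formula can be evaluated directly; `scipy.stats.f.ppf` returns the percentage point (here, the 0.50 point) of the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution. A minimal sketch:

```python
from scipy.stats import f

def median_rank_f(n_units, j):
    """Median rank via the F distribution transformation shown above."""
    m = 2 * (n_units - j + 1)   # numerator degrees of freedom
    n = 2 * j                   # denominator degrees of freedom
    f_median = f.ppf(0.50, m, n)  # F distribution at the 0.50 point
    return 1.0 / (1.0 + (n_units - j + 1) / j * f_median)

# Median ranks for N = 4 units, failures j = 1..4
ranks = [median_rank_f(4, j) for j in (1, 2, 3, 4)]
```

The values agree with those obtained by solving the cumulative binomial equation numerically (0.1591, 0.3857, 0.6143, 0.8409 for &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt;).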
&lt;br /&gt;
=== Benard&#039;s Approximation for Median Ranks  ===&lt;br /&gt;
Another quick, and less accurate, approximation of the median ranks is also given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MR = \frac{{j - 0.3}}{{N + 0.4}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This approximation of the median ranks is also known as &#039;&#039;Benard&#039;s approximation&#039;&#039;.&lt;br /&gt;
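Benard's approximation is a one-line computation; for the &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; example it reproduces the exact median ranks to about three decimal places:

```python
def benard_median_rank(n, j):
    """Benard's approximation: MR ~ (j - 0.3) / (N + 0.4)"""
    return (j - 0.3) / (n + 0.4)

# Approximate median ranks for N = 4 failures
approx = [benard_median_rank(4, j) for j in (1, 2, 3, 4)]
```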
&lt;br /&gt;
===Kaplan-Meier===&lt;br /&gt;
The Kaplan-Meier estimator (also known as the &#039;&#039;product limit estimator&#039;&#039;) is used as an alternative to the median ranks method for calculating the estimates of the unreliability for probability plotting purposes. The equation of the estimator is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{F}({{t}_{i}})=1-\underset{j=1}{\overset{i}{\mathop \prod }}\,\frac{{{n}_{j}}-{{r}_{j}}}{{{n}_{j}}},\text{ }i=1,...,m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  m =  &amp;amp; {\text{total number of data points}} \\ &lt;br /&gt;
  n =  &amp;amp; {\text{the total number of units}} \\ &lt;br /&gt;
  {n_i} =  &amp;amp; n - \sum_{j = 0}^{i - 1}{s_j} - \sum_{j = 0}^{i - 1}{r_j}, \text{i = 1,...,m }\\ &lt;br /&gt;
  {r_j} =  &amp;amp; {\text{ number of failures in the }}{j^{th}}{\text{ data group, and}} \\ &lt;br /&gt;
  {s_j} =  &amp;amp; {\text{number of surviving units in the }}{j^{th}}{\text{ data group}} \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
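A minimal sketch of the estimator, with hypothetical grouped data (each group contributes &amp;lt;math&amp;gt;{r_j}\,\!&amp;lt;/math&amp;gt; failures and &amp;lt;math&amp;gt;{s_j}\,\!&amp;lt;/math&amp;gt; suspensions, in time order; the data values here are made up for illustration):

```python
def kaplan_meier(groups, n_total):
    """
    groups: list of (r_j, s_j) = (failures, suspensions) per data group,
            ordered by time. Returns the unreliability estimate F_hat
            at each group.
    """
    estimates = []
    survival = 1.0
    at_risk = n_total  # n_i: units still on test entering group i
    for r, s in groups:
        survival *= (at_risk - r) / at_risk
        estimates.append(1.0 - survival)
        at_risk -= r + s  # remove units lost to failure or suspension
    return estimates

# Example: 5 units with interleaved failures and suspensions
fhat = kaplan_meier([(1, 0), (0, 1), (1, 0), (0, 1), (1, 0)], 5)
```

Note that groups containing only suspensions leave the estimate unchanged; they affect later estimates only through the reduced number of units at risk.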
&lt;br /&gt;
== Probability Plotting Example  ==&lt;br /&gt;
This same methodology can be applied to other distributions with &#039;&#039;cdf&#039;&#039; equations that can be linearized. Different probability papers exist for each distribution, because different distributions have different &#039;&#039;cdf&#039;&#039; equations. ReliaSoft&#039;s software tools automatically create these plots for you. Special scales on these plots allow you to derive the parameter estimates directly from the plots, similar to the way &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; were obtained from the Weibull probability plot. The following example demonstrates the method again, this time using the 1-parameter exponential distribution.&lt;br /&gt;
&lt;br /&gt;
{{:Probability Plotting Example}}&lt;br /&gt;
&lt;br /&gt;
== Comments on the Probability Plotting Method ==&lt;br /&gt;
Besides its most obvious drawback, the amount of manual effort required, probability plotting does not always produce consistent results. Two people fitting a straight line through the same set of points will rarely draw it identically, and will therefore obtain slightly different parameter estimates. This method was used primarily before the widespread availability of computers that could easily perform the calculations for more sophisticated parameter estimation methods, such as the least squares and maximum likelihood methods.&lt;br /&gt;
&lt;br /&gt;
= Least Squares (Rank Regression)  =&lt;br /&gt;
Using the idea of probability plotting, regression analysis mathematically fits the best straight line to a set of points, in an attempt to estimate the parameters. Essentially, this is a mathematically based version of the probability plotting method discussed previously. &lt;br /&gt;
&lt;br /&gt;
The method of linear least squares is used for all regression analysis performed by Weibull++, except for the cases of the 3-parameter Weibull, mixed Weibull, gamma and generalized gamma distributions, where a non-linear regression technique is employed. The terms &#039;&#039;linear regression&#039;&#039; and &#039;&#039;least squares&#039;&#039; are used synonymously in this reference. In Weibull++, the term &#039;&#039;rank regression&#039;&#039; is used instead of least squares, or linear regression, because the regression is performed on the rank values, more specifically, the median rank values (represented on the y-axis). The method of least squares requires that a straight line be fitted to a set of data points, such that the sum of the squares of the distance of the points to the fitted line is minimized. This minimization can be performed in either the vertical or horizontal direction. If the regression is on &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the horizontal deviations from the points to the line are minimized. If the regression is on Y, then this means that the distance of the vertical deviations from the points to the line is minimized. This is illustrated in the following figure. &lt;br /&gt;
&lt;br /&gt;
[[Image:minimizingdistance.png|center|500px]]&lt;br /&gt;
&lt;br /&gt;
=== Rank Regression on Y  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;-values are known exactly. Then, according to the &#039;&#039;least squares principle,&#039;&#039; which minimizes the vertical distance between the data points and the straight line fitted to the data, the best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;y=\hat{a}+\hat{b}x\,\!&amp;lt;/math&amp;gt; (where the recently introduced (&amp;lt;math&amp;gt;\hat{ }\,\!&amp;lt;/math&amp;gt;) symbol indicates that this value is an estimate) such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\sum\limits_{i=1}^{N}{{{\left( \hat{a}+\hat{b}{{x}_{i}}-{{y}_{i}} \right)}^{2}}=\min \sum\limits_{i=1}^{N}{{{\left( a+b{{x}_{i}}-{{y}_{i}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and where &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat b\,\!&amp;lt;/math&amp;gt; are the &#039;&#039;least squares estimates&#039;&#039; of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}=\bar{y}-\hat{b}\bar{x}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
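The RRY estimates can be computed directly from these closed-form sums. A minimal sketch, using made-up points that lie exactly on a known line so the estimates can be verified:

```python
def rank_regression_y(xs, ys):
    """Least squares estimates (a_hat, b_hat) minimizing vertical deviations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    # b_hat = (sum(xy) - sum(x)sum(y)/N) / (sum(x^2) - (sum(x))^2/N)
    b = (sxy - sx * sy / n) / (sxx - sx**2 / n)
    # a_hat = y_bar - b_hat * x_bar
    a = sy / n - b * sx / n
    return a, b

# Points on the line y = 2x + 1 are recovered exactly
a, b = rank_regression_y([1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0])
```

Rank regression on X follows the same pattern with the roles of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; interchanged.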
&lt;br /&gt;
=== Rank Regression on X  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;-values are known exactly. The same least squares principle is applied, but this time minimizing the horizontal distance between the data points and the straight line fitted to the data. The best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;x=\widehat{a}+\widehat{b}y\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(\widehat{a}+\widehat{b}{{y}_{i}}-{{x}_{i}})}^{2}}=\min \underset{i=1}{\overset{N}{\mathop \sum }}\,{{(a+b{{y}_{i}}-{{x}_{i}})}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat b\,\!&amp;lt;/math&amp;gt; are the least squares estimates of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}=\bar{x}-\hat{b}\bar{y}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding relations for determining the parameters for specific distributions (i.e., Weibull, exponential, etc.), are presented in the chapters covering that distribution.&lt;br /&gt;
&lt;br /&gt;
=== Correlation Coefficient  ===&lt;br /&gt;
The correlation coefficient is a measure of how well the linear regression model fits the data and is usually denoted by &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt;. In the case of life data analysis, it is a measure for the strength of the linear relation (correlation) between the median ranks and the data. The population correlation coefficient is defined as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\rho =\frac{{{\sigma }_{xy}}}{{{\sigma }_{x}}{{\sigma }_{y}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\sigma}_{xy}} = \,\!&amp;lt;/math&amp;gt; covariance of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\sigma}_{x}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{\sigma}_{y}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The estimator of &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt; is the sample correlation coefficient, &amp;lt;math&amp;gt;\hat{\rho }\,\!&amp;lt;/math&amp;gt;, given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\sqrt{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N} \right)\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The range of &amp;lt;math&amp;gt;\hat \rho \,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;-1\le \hat{\rho }\le 1\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
[[Image:correlationcoeffficient.png|center|500px]] &lt;br /&gt;
&lt;br /&gt;
The closer the value is to &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt;, the better the linear fit. Note that +1 indicates a perfect fit (the paired values (&amp;lt;math&amp;gt;{{x}_{i}},{{y}_{i}}\,\!&amp;lt;/math&amp;gt;) lie on a straight line) with a positive slope, while -1 indicates a perfect fit with a negative slope. A correlation coefficient value of zero would indicate that the data are randomly scattered and have no pattern or correlation in relation to the regression line model.&lt;br /&gt;
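A minimal sketch of the sample correlation coefficient, using made-up, perfectly linear data with a positive slope (which should yield &amp;lt;math&amp;gt;\hat{\rho }=+1\,\!&amp;lt;/math&amp;gt;):

```python
from math import sqrt

def sample_correlation(xs, ys):
    """Sample correlation coefficient rho_hat from the raw sums."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    num = sxy - sx * sy / n                       # sample covariance term
    den = sqrt((sxx - sx**2 / n) * (syy - sy**2 / n))  # std dev terms
    return num / den

# Perfectly linear data (y = 2x) with positive slope
rho = sample_correlation([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```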
&lt;br /&gt;
===Comments on the Least Squares Method===&lt;br /&gt;
The least squares estimation method is quite good for functions that can be linearized. For these distributions, the calculations are relatively easy and straightforward, having closed-form solutions that can readily yield an answer without having to resort to numerical techniques or tables. Furthermore, this technique provides a good measure of the goodness-of-fit of the chosen distribution in the correlation coefficient. Least squares is generally best used with data sets containing complete data, that is, data consisting only of single times-to-failure with no censored or interval data. (See [[Life Data Classification]] for information about the different data types, including complete, left censored, right censored (or suspended) and interval data.) &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Least Squares/Rank Regression Equations]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Rank Methods for Censored Data=&lt;br /&gt;
All available data should be considered in the analysis of times-to-failure data. This includes the case when a particular unit in a sample has been removed from the test prior to failure. An item, or unit, which is removed from a reliability test prior to failure, or a unit which is in the field and is still operating at the time the reliability of these units is to be determined, is called a &#039;&#039;suspended item&#039;&#039;, a &#039;&#039;right censored observation&#039;&#039; or a &#039;&#039;right censored data point&#039;&#039;. Suspended items analysis would also be considered when: &lt;br /&gt;
&lt;br /&gt;
#We need to make an analysis of the available results before test completion. &lt;br /&gt;
#The failure modes which are occurring are different than those anticipated and such units are withdrawn from the test. &lt;br /&gt;
#We need to analyze a single mode and the actual data set comprises multiple modes. &lt;br /&gt;
#A &#039;&#039;warranty analysis&#039;&#039; is to be made of all units in the field (non-failed and failed units). The non-failed units are considered to be suspended items (or right censored).&lt;br /&gt;
&lt;br /&gt;
This section describes the rank methods that are used in both probability plotting and least squares (rank regression) to handle censored data. This includes:&lt;br /&gt;
&lt;br /&gt;
*The rank adjustment method for right censored (suspension) data.&lt;br /&gt;
*ReliaSoft&#039;s alternative ranking method for interval censored data.&lt;br /&gt;
=== Rank Adjustment Method for Right Censored Data ===&lt;br /&gt;
When using the probability plotting or least squares (rank regression) method for data sets where some of the units did not fail, or were suspended, we need to adjust their probability of failure, or unreliability. As discussed before, estimates of the unreliability for complete data are obtained using the median ranks approach. The following methodology illustrates how adjusted median ranks are computed to account for right censored data. To better illustrate the methodology, consider the following example in Kececioglu [[Appendix:_Life_Data_Analysis_References|&amp;amp;nbsp;[20]]] where five items are tested resulting in three failures and two suspensions. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Item Number &amp;lt;br&amp;gt;(Position) &lt;br /&gt;
! Failure (F) &amp;lt;br&amp;gt;or Suspension (S) &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 5,100&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,500&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 15,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 22,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 40,000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The methodology for plotting suspended items involves adjusting the rank positions and plotting the data based on new positions, determined by the location of the suspensions. If we consider these five units, the following methodology would be used: The first item must be the first failure; hence, it is assigned failure order number &amp;lt;math&amp;gt;j = 1\,\!&amp;lt;/math&amp;gt;. The actual failure order number (or position) of the second failure, &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; is in doubt. It could either be in position 2 or in position 3. Had &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; not been withdrawn from the test at 9,500 hours, it could have operated successfully past 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 2. Alternatively, &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; could also have failed before 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 3. In this case, the failure order number for &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; will be some number between 2 and 3. To determine this number, consider the following: &lt;br /&gt;
&lt;br /&gt;
We can find the number of ways the second failure can occur in either order number 2 (position 2) or order number 3 (position 3). The possible ways are listed next. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;6&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 2 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 3&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 4 &lt;br /&gt;
| 5 &lt;br /&gt;
| 6 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be seen that &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; can occur in the second position six ways and in the third position two ways. The most probable position is the weighted average of these possible positions, or the &#039;&#039;mean order number&#039;&#039; (MON), given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}=MO{{N}_{2}}=\frac{(6\times 2)+(2\times 3)}{6+2}=2.25\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Using the same logic on the third failure, it can be located in position numbers 3, 4 and 5 in the possible ways listed next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 3 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 4&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 5&lt;br /&gt;
|-&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Then, the mean order number for the third failure (item 5) is: &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=\frac{(2\times 3)+(3\times 4)+(3\times 5)}{2+3+3}=4.125\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
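The counting arguments above can be checked by brute force: enumerate every ordering of hypothetical failure times that is consistent with the observed data, then average each failure's position. The following sketch (unit labels and the `possible` test are illustrative, not part of the original method) uses the times from the example:

```python
from itertools import permutations

# Failure/suspension times from the example (hours)
events = {'F1': 5100, 'S1': 9500, 'F2': 15000, 'S2': 22000, 'F3': 40000}

def possible(order):
    """An ordering of hypothetical failure times is impossible only if a unit
    appears before a *failure* whose observed time is earlier than that unit's
    known minimum life (a suspension can fail at any time after withdrawal)."""
    for i, a in enumerate(order):
        for b in order[i + 1:]:
            if b.startswith('F') and events[b] < events[a]:
                return False
    return True

def mean_order_number(unit):
    """Average position of `unit` over all possible orderings (the MON)."""
    positions = [order.index(unit) + 1
                 for order in permutations(events) if possible(order)]
    return sum(positions) / len(positions)

print(mean_order_number('F2'))  # 2.25
print(mean_order_number('F3'))  # 4.125
```

The enumeration reproduces the tables: 6 + 2 = 8 possible orderings place the second failure at MON 2.25, and the same 8 orderings place the third failure at MON 4.125.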
&lt;br /&gt;
&amp;lt;br&amp;gt;Once the mean order number for each failure has been established, we obtain the median rank positions for these failures at their mean order number. Specifically, we obtain the median rank of the order numbers 1, 2.25 and 4.125 out of a sample size of 5, as given next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Plotting Positions for the Failures (Sample Size=5)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Failure Number &lt;br /&gt;
! MON &lt;br /&gt;
! Median Rank Position(%)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1:&amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1 &lt;br /&gt;
| 13%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2:&amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 2.25 &lt;br /&gt;
| 36%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3:&amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 4.125 &lt;br /&gt;
| 71%&lt;br /&gt;
|}&lt;br /&gt;
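The median rank positions in this table can be reproduced with Benard's approximation, MR ≈ (MON − 0.3)/(N + 0.4), which conveniently accepts the fractional order numbers produced by the rank adjustment method (exact median ranks require inverting the incomplete beta function). A minimal sketch:

```python
def median_rank(mon, n):
    """Benard's approximation to the median rank for a (possibly fractional)
    mean order number `mon` out of a sample of size `n`."""
    return (mon - 0.3) / (n + 0.4)

for mon in (1, 2.25, 4.125):
    print(f"MON {mon}: {100 * median_rank(mon, 5):.0f}%")  # 13%, 36%, 71%
```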
&lt;br /&gt;
&lt;br /&gt;
Once the median rank values have been obtained, the probability plotting analysis is identical to that presented before. As you might have noticed, this methodology is rather laborious. Other techniques and shortcuts have been developed over the years to streamline this procedure. For more details on this method, see Kececioglu [[Appendix:_Life_Data_Analysis_References|[20]]]. Here, we will introduce one of these methods. This method calculates MON using an increment, &#039;&#039;I&#039;&#039;, which is defined by:&lt;br /&gt;
&lt;br /&gt;
:: &amp;lt;math&amp;gt;{{I}_{i}}=\frac{N+1-PMON}{1+NIBPSS}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Where&lt;br /&gt;
* &#039;&#039;N&#039;&#039;= the sample size, or total number of items in the test&lt;br /&gt;
* &#039;&#039;PMON&#039;&#039; = previous mean order number&lt;br /&gt;
* &#039;&#039;NIBPSS&#039;&#039; = the number of items beyond the present suspended set; that is, the number of units (including all the failures and suspensions) remaining at the current failure time, the current failure included&lt;br /&gt;
* &#039;&#039;i&#039;&#039; = the ith failure item&lt;br /&gt;
&lt;br /&gt;
MON is given as:&lt;br /&gt;
 &lt;br /&gt;
:: &amp;lt;math&amp;gt;MO{{N}_{i}}=MO{{N}_{i-1}}+{{I}_{i}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s calculate the previous example using the method.&lt;br /&gt;
&lt;br /&gt;
For F1:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{1}}=MO{{N}_{0}}+{{I}_{1}}=\frac{5+1-0}{1+5}=1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For F2:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{2}}=MO{{N}_{1}}+{{I}_{2}}=1+\frac{5+1-1}{1+3}=2.25&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For F3:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=MO{{N}_{2}}+{{I}_{3}}=2.25+\frac{5+1-2.25}{1+1}=4.125&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The MON obtained for each failure item via this method is the same as that obtained from the first method, so the median rank values will also be the same.&lt;br /&gt;
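The increment recurrence can be sketched in a few lines. This is a minimal illustration (function and variable names are our own), applied to the five-unit example, where each failure's NIBPSS is the number of units remaining at that failure time:

```python
def mean_order_numbers(states):
    """Mean order numbers via the rank-increment method.

    states: 'F' (failure) or 'S' (suspension) for each unit, in time order.
    Returns one MON value per failure.
    """
    n = len(states)
    mons, mon = [], 0.0
    for idx, state in enumerate(states):
        if state == 'F':
            nibpss = n - idx                       # units remaining at this failure time
            mon = mon + (n + 1 - mon) / (1 + nibpss)  # MON_i = MON_{i-1} + I_i
            mons.append(mon)
    return mons

print(mean_order_numbers(['F', 'S', 'F', 'S', 'F']))  # [1.0, 2.25, 4.125]
```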
&lt;br /&gt;
For grouped data, the increment &amp;lt;math&amp;gt;{{I}_{i}}&amp;lt;/math&amp;gt; at each failure group is multiplied by the number of failures in that group. &lt;br /&gt;
 &lt;br /&gt;
==== Shortfalls of the Rank Adjustment Method  ====&lt;br /&gt;
Even though the rank adjustment method is the most widely used method for performing analysis of suspended items, we would like to point out the following shortcoming. As you may have noticed, only the position where the failure occurred is taken into account, and not the exact time-to-suspension. For example, this methodology would yield the exact same results for the next two cases. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 1 &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of Item, hr &lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of Item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000 &lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,100 &lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,700&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,200 &lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,800&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,300 &lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,900&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000 &lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - Failed, &#039;&#039;S&#039;&#039; - Suspended&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - Failed, &#039;&#039;S&#039;&#039; - Suspended&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This shortfall is significant when the number of failures is small and the number of suspensions is large and not spread uniformly between failures, as with these data. In cases like this, it is highly recommended to use maximum likelihood estimation (MLE) to estimate the parameters instead of using least squares, because MLE does not look at ranks or plotting positions, but rather considers each unique time-to-failure or suspension. For the data given above, the parameters estimated using the rank adjustment method just described are identical for both cases (1 and 2): &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.81}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{11,417 hr}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, the MLE results for Case 1 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{1}\text{.33}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{6,900 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the MLE results for Case 2 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.9337}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{21,348 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we can see, there is a sizable difference in the results of the two sets calculated using MLE and the results using regression. The results for both cases are identical when using the regression estimation technique, as regression considers only the positions of the suspensions. The MLE results are quite different for the two cases, with the second case having a much larger value of &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt;, which is due to the higher values of the suspension times in Case 2. This is because the maximum likelihood technique, unlike rank regression, considers the values of the suspensions when estimating the parameters. This is illustrated in the [[Parameter_Estimation#Maximum_Likelihood_Estimation_.28MLE.29|discussion of MLE]] given below.&lt;br /&gt;
&lt;br /&gt;
== ReliaSoft&#039;s Ranking Method (RRM) for Interval Censored Data==&lt;br /&gt;
When analyzing interval data, it is commonplace to assume that the actual failure time occurred at the midpoint of the interval. To be more conservative, you can use the starting point of the interval; to be more optimistic, you can use the end point. Weibull++ allows you to employ ReliaSoft&#039;s ranking method (RRM) when analyzing interval data. Using an iterative process, this ranking method is an improvement over the standard ranking method (SRM). For more details on this method, see [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|ReliaSoft&#039;s Ranking Method]].&lt;br /&gt;
&lt;br /&gt;
= Maximum Likelihood Estimation (MLE) = &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM OTHER WIKI PAGES. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
From a statistical point of view, the method of maximum likelihood estimation is, with some exceptions, considered to be the most robust of the parameter estimation techniques discussed here. The method presented in this section is for complete data (i.e., data consisting only of times-to-failure). The analyses for [[Parameter_Estimation#MLE_for_Right_Censored_Data|right censored (suspension) data]] and for [[Parameter_Estimation#MLE_for_Interval_and_Left_Censored_Data|interval or left censored data]] are then discussed in the following sections.&lt;br /&gt;
&lt;br /&gt;
The basic idea behind MLE is to obtain the most likely values of the parameters, for a given distribution, that will best describe the data. As an example, consider the following data (-3, 0, 4) and assume that you are trying to estimate the mean of the data. Now, if you have to choose the most likely value for the mean from -5, 1 and 10, which one would you choose? In this case, the most likely value is 1 (given your limit on choices). Similarly, under MLE, one determines the most likely values for the parameters of the assumed distribution. It is mathematically formulated as follows. &lt;br /&gt;
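The mean-choosing example can be made concrete by scoring each candidate with a log-likelihood. The sketch below assumes a normal model with an arbitrary, illustrative σ = 1 (the original example does not fix a distribution or σ):

```python
import math

data = [-3, 0, 4]

def normal_loglik(mu, data, sigma=1.0):
    """Log-likelihood of the data under a Normal(mu, sigma) model
    (sigma = 1 is an illustrative assumption)."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

# The candidate mean with the highest likelihood is the "most likely" value
best = max([-5, 1, 10], key=lambda mu: normal_loglik(mu, data))  # 1
```

As in the text, 1 beats -5 and 10 because it is closest to the bulk of the observations; MLE simply extends this comparison to a continuum of candidate parameter values.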
&lt;br /&gt;
If &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; unknown parameters which need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; independent observations, &amp;lt;math&amp;gt;{{x}_{1}},{{x}_{2}},\cdots ,{{x}_{R}}\,\!&amp;lt;/math&amp;gt;, which correspond in the case of life data analysis to failure times. The likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{R}})=L=\underset{i=1}{\overset{R}{\mathop \prod }}\,f({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;i = 1,2,...,R\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logarithmic likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Lambda  = \ln L =\sum_{i = 1}^R \ln f({x_i};{\theta _1},{\theta _2},...,{\theta _k})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood estimators (or parameter values) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are obtained by maximizing &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By maximizing &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;, which is much easier to work with than &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt;, the maximum likelihood estimators (MLE) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the simultaneous solutions of &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; equations such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{\partial{\Lambda}}{\partial{\theta_j}}=0, \text{ j=1,2...,k}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even though it is common practice to plot the MLE solutions using median ranks (points are plotted according to median ranks and the line according to the MLE solutions), this is not completely representative. As can be seen from the equations above, the MLE method is independent of any kind of ranks. For this reason, the MLE solution often appears not to track the data on the probability plot. This is perfectly acceptable because the two methods are independent of each other, and in no way suggests that the solution is wrong.&lt;br /&gt;
&lt;br /&gt;
=== MLE for Right Censored Data  ===&lt;br /&gt;
When performing maximum likelihood analysis on data with suspended items, the likelihood function needs to be expanded to take into account the suspended items. The overall estimation technique does not change, but another term is added to the likelihood function to account for the suspended items. Beyond that, the method of solving for the parameter estimates remains the same. For example, consider a distribution where &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039; and &#039;&#039;cdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
    &amp;amp; F(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the unknown parameters which need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; observed failures at &amp;lt;math&amp;gt;{{T}_{1}},{{T}_{2}},...,{{T}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; observed suspensions at &amp;lt;math&amp;gt;{{S}_{1}},{{S}_{2}},...,{{S}_{M}}\,\!&amp;lt;/math&amp;gt;, then the likelihood function is formulated as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R,}}{{S}_{1}},...,{{S}_{M}})= &amp;amp; \underset{i=1}{\overset{R}{\mathop \prod }}\,f({{T}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \cdot \underset{j=1}{\overset{M}{\mathop \prod }}\,[1-F({{S}_{j}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameters are solved by maximizing this equation. In most cases, no closed-form solution exists for this maximum or for the parameters. Solutions specific to each distribution utilizing MLE are presented in [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
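The effect of the suspension values on the MLE solution can be seen with a distribution whose censored likelihood does have a closed form. For the exponential distribution, maximizing &amp;lt;math&amp;gt;R\ln \lambda -\lambda T\,\!&amp;lt;/math&amp;gt; (where &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is the total time on test) gives &amp;lt;math&amp;gt;\hat{\lambda }=R/T\,\!&amp;lt;/math&amp;gt;. The sketch below is a deliberate simplification (the chapter's Case 1/Case 2 comparison used a Weibull fit), but it shows the same qualitative point: unlike rank positions, the MLE responds to the suspension times themselves:

```python
def exp_mle_mttf(failures, suspensions):
    """Closed-form exponential MLE with right censoring.

    The log-likelihood R*ln(lam) - lam*T (T = total time on test) is
    maximized at lam_hat = R/T, so the estimated mean life is T/R.
    """
    total_time = sum(failures) + sum(suspensions)
    return total_time / len(failures)

# Case 1 and Case 2 from the shortfall discussion: identical failure times,
# very different suspension times
case1 = exp_mle_mttf([1000, 10000], [1100, 1200, 1300])  # 7300.0 hr
case2 = exp_mle_mttf([1000, 10000], [9700, 9800, 9900])  # 20200.0 hr
```

The two cases yield very different mean-life estimates under MLE, whereas the rank adjustment method, which sees only the suspension positions, treats them identically.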
&lt;br /&gt;
=== MLE for Interval and Left Censored Data  ===&lt;br /&gt;
The inclusion of left and interval censored data in an MLE solution for parameter estimates involves adding a term to the likelihood equation to account for the data types in question. When using interval data, it is assumed that the failures occurred in an interval; i.e., in the interval from time &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or from time 0 to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; if left censored), where &amp;lt;math&amp;gt;A &amp;lt; B\,\!&amp;lt;/math&amp;gt;. In the case of interval data, and given &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; interval observations, the likelihood function is modified by multiplying the likelihood function with an additional term as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{P}})= &amp;amp; \underset{i=1}{\overset{P}{\mathop \prod }}\,\{F({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \ \ -F({{x}_{i-1}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that if only interval data are present, this term will represent the entire likelihood function for the MLE solution. The next section gives a formulation of the complete likelihood function for all possible censoring schemes.&lt;br /&gt;
&lt;br /&gt;
=== The Complete Likelihood Function  ===&lt;br /&gt;
We have now seen that obtaining MLE parameter estimates for different types of data involves incorporating different terms in the likelihood function to account for complete data, right censored data, and left and interval censored data. After including the terms for the different types of data, the likelihood function can be expressed in its complete form: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
    L= &amp;amp; \underset{i=1}{\mathop{\overset{R}{\mathop{\prod }}\,}}\,f({{T}_{i}};{{\theta }_{1}},...,{{\theta }_{k}})\cdot \underset{j=1}{\mathop{\overset{M}{\mathop{\prod }}\,}}\,[1-F({{S}_{j}};{{\theta }_{1}},...,{{\theta }_{k}})]  \\&lt;br /&gt;
    &amp;amp; \cdot \underset{l=1}{\mathop{\overset{P}{\mathop{\prod }}\,}}\,\left\{ F({{I}_{{{l}_{U}}}};{{\theta }_{1}},...,{{\theta }_{k}})-F({{I}_{{{l}_{L}}}};{{\theta }_{1}},...,{{\theta }_{k}}) \right\}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; L\to L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R}},{{S}_{1}},...,{{S}_{M}},{{I}_{1}},...{{I}_{P}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
*&amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the number of units with exact failures &lt;br /&gt;
*&amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is the number of suspended units &lt;br /&gt;
*&amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is the number of units with left censored or interval times-to-failure &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the parameters of the distribution &lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure&lt;br /&gt;
*&amp;lt;math&amp;gt;{{S}_{j}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; time of suspension&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{U}}}}\,\!&amp;lt;/math&amp;gt; is the ending of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{L}}}}\,\!&amp;lt;/math&amp;gt; is the beginning of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;The total number of units is &amp;lt;math&amp;gt;N = R + M + P\,\!&amp;lt;/math&amp;gt;. It should be noted that in this formulation, if any of &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is zero, then the product term associated with it is taken to be one, not zero.&lt;br /&gt;
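The complete likelihood can be sketched directly from its three product terms. The helper below is an illustrative implementation for the Weibull distribution (function names and the argument layout are our own, not Weibull++ API), working on the log scale so each empty data type simply contributes zero:

```python
import math

def weibull_loglik(beta, eta, failures=(), suspensions=(), intervals=()):
    """Complete Weibull log-likelihood: exact failures, right-censored
    suspensions, and interval observations given as (lower, upper) pairs.

    An absent data type contributes 0 to the sum, i.e., a product term of 1.
    """
    def cdf(t):
        return 1.0 - math.exp(-((t / eta) ** beta))

    def logpdf(t):
        return (math.log(beta / eta) + (beta - 1) * math.log(t / eta)
                - (t / eta) ** beta)

    ll = sum(logpdf(t) for t in failures)                 # exact failures
    ll += sum(-((s / eta) ** beta) for s in suspensions)  # ln[1 - F(s)]
    ll += sum(math.log(cdf(u) - cdf(l)) for l, u in intervals)
    return ll

# Example evaluation at trial parameter values (illustrative data)
ll = weibull_loglik(1.2, 5000.0, failures=[1000.0, 4000.0],
                    suspensions=[6000.0], intervals=[(2000.0, 3000.0)])
```

Maximizing this function over &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; (numerically, since no closed form exists) yields the MLE parameter estimates.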
&lt;br /&gt;
== Comments on the MLE Method  ==&lt;br /&gt;
The MLE method has many large sample properties that make it attractive for use. It is asymptotically consistent, which means that as the sample size gets larger, the estimates converge to the right values. It is asymptotically efficient, which means that for large samples, it produces the most precise estimates. It is asymptotically unbiased, which means that for large samples, one expects to get the right value on average. The distribution of the estimates themselves is normal, if the sample is large enough, and this is the basis for the usual [[Confidence_Bounds#Fisher_Matrix_Confidence_Bounds|Fisher Matrix Confidence Bounds]] discussed later. These are all excellent large sample properties. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, the size of the sample necessary to achieve these properties can be quite large: thirty to fifty, or even more than a hundred, exact failure times, depending on the application. With fewer points, the methods can be badly biased. It is known, for example, that MLE estimates of the shape parameter for the Weibull distribution are badly biased for small sample sizes, and the effect can be increased depending on the amount of censoring. This bias can cause major discrepancies in analysis. There are also pathological situations when the asymptotic properties of the MLE do not apply. One of these is estimating the location parameter for the three-parameter Weibull distribution when the shape parameter has a value close to 1. These problems, too, can cause major discrepancies. &lt;br /&gt;
&lt;br /&gt;
However, MLE can handle suspensions and interval data better than rank regression, particularly when dealing with a heavily censored data set with few exact failure times or when the censoring times are unevenly distributed. It can also provide estimates with one or no observed failures, which rank regression cannot do. As a rule of thumb, our recommendation is to use rank regression techniques when the sample sizes are small and without heavy censoring (censoring is discussed in [[Life Data Classification|Life Data Classifications]]). When heavy or uneven censoring is present, when a high proportion of interval data is present and/or when the sample size is sufficient, MLE should be preferred. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Appendix:_Maximum_Likelihood_Estimation_Example|Maximum Likelihood Parameter Estimation Example]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Bayesian Parameter Estimation Methods=&lt;br /&gt;
Up to this point, we have dealt exclusively with what is commonly referred to as classical statistics. In this section, another school of thought in statistical analysis will be introduced, namely Bayesian statistics. The premise of Bayesian statistics (within the context of life data analysis) is to incorporate prior knowledge, along with a given set of current observations, in order to make statistical inferences. The prior information could come from operational or observational data, from previous comparable experiments or from engineering knowledge.  This type of analysis can be particularly useful when there is limited test data for a given design or failure mode but there is a strong prior understanding of the failure rate behavior for that design or mode. By incorporating prior information about the parameter(s), a posterior distribution for the parameter(s) can be obtained and inferences on the model parameters and their functions can be made. This section is intended to give a quick and elementary overview of Bayesian methods, focused primarily on the material necessary for understanding the Bayesian analysis methods available in Weibull++. Extensive coverage of the subject can be found in numerous books dealing with Bayesian statistics.&lt;br /&gt;
&lt;br /&gt;
===Bayes’s Rule===&lt;br /&gt;
Bayes’s rule provides the framework for combining prior information with sample data. In this reference, we apply Bayes’s rule for combining prior information on the assumed distribution&#039;s parameter(s) with sample data in order to make inferences based on the model. The prior knowledge about the parameter(s) is expressed in terms of a distribution &amp;lt;math&amp;gt;\varphi (\theta ),\,\!&amp;lt;/math&amp;gt; called the &#039;&#039;prior distribution&#039;&#039;. The &#039;&#039;posterior&#039;&#039; distribution of &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; given the sample data, obtained using Bayes&#039;s rule, provides the updated information about the parameters &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt;. This is expressed with the following posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta |Data) = \frac{L(Data|\theta )\varphi (\theta )}{\int_{\zeta}^{} L(Data|\theta )\varphi(\theta )d (\theta)}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; is a vector of the parameters of the chosen distribution&lt;br /&gt;
*&amp;lt;math&amp;gt;\zeta\,\!&amp;lt;/math&amp;gt; is the range of &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt; L(Data|\theta)\,\!&amp;lt;/math&amp;gt; is the likelihood function based on the chosen distribution and data&lt;br /&gt;
*&amp;lt;math&amp;gt;\varphi(\theta )\,\!&amp;lt;/math&amp;gt; is the prior distribution for each of the parameters&lt;br /&gt;
&lt;br /&gt;
The integral in the Bayes&#039;s rule equation is often referred to as the marginal probability, which is a constant number that can be interpreted as the probability of obtaining the sample data given a prior distribution. Generally, the integral in the Bayes&#039;s rule equation does not have a closed form solution and numerical methods are needed for its solution.&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Bayes&#039;s rule equation, there is a significant difference between classical and Bayesian statistics. First, the idea of prior information does not exist in classical statistics. All inferences in classical statistics are based on the sample data. On the other hand, in the Bayesian framework, prior information constitutes the basis of the theory. Another difference is in the overall approach of making inferences and their interpretation. For example, in Bayesian analysis, the parameters of the distribution to be fitted are the random variables. In reality, there is no distribution fitted to the data in the Bayesian case.&lt;br /&gt;
&lt;br /&gt;
For instance, consider the case where data is obtained from a reliability test. Based on prior experience on a similar product, the analyst believes that the shape parameter of the Weibull distribution has a value between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\beta }_{2}}\,\!&amp;lt;/math&amp;gt; and wants to utilize this information. This can be achieved by using Bayes&#039;s theorem. At this point, the analyst is automatically forcing the Weibull distribution as a model for the data, with a shape parameter between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{\beta _2}\,\!&amp;lt;/math&amp;gt;. In this example, the range of values for the shape parameter is the prior distribution, which in this case is uniform. By applying Bayes&#039;s rule, the posterior distribution of the shape parameter will be obtained. Thus, we end up with a distribution for the parameter rather than a point estimate, as in classical statistics.&lt;br /&gt;
&lt;br /&gt;
To better illustrate the example, assume that a set of failure data was provided along with a distribution for the shape parameter (i.e., uniform prior) of the Weibull (automatically assuming that the data are Weibull distributed). Based on that, a new distribution (the posterior) for that parameter is then obtained using Bayes&#039;s rule. This posterior distribution of the parameter may or may not resemble in form the assumed prior distribution. In other words, in this example the prior distribution of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; was assumed to be uniform but the posterior is most likely not a uniform distribution.&lt;br /&gt;
&lt;br /&gt;
The question now becomes: what is the value of the shape parameter? What about the reliability and other results of interest? In order to answer these questions, we have to remember that in the Bayesian framework all of these metrics are random variables. Therefore, in order to obtain an estimate, a probability needs to be specified or we can use the expected value of the posterior distribution.&lt;br /&gt;
&lt;br /&gt;
In order to demonstrate the procedure of obtaining results from the posterior distribution, we will rewrite the Bayes&#039;s rule equation for a single parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta_1 |Data) = \frac{L(Data|\theta_1 )\varphi (\theta_1 )}{\int_{\zeta}^{} L(Data|\theta_1 )\varphi(\theta_1 )d (\theta_1)}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The expected value (or mean value) of the parameter &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained using the equation for the mean and the Bayes&#039;s rule equation for single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E({\theta _1}) = {m_{{\theta _1}}} = \int_{\zeta}^{}{\theta _1} \cdot f({\theta _1}|Data)d{\theta _1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative result for &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; would be the median value. Using the equation for the median and the Bayes&#039;s rule equation for a single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{\theta }_{0.5}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation for the median is solved for &amp;lt;math&amp;gt;{\theta _{0.5}}\,\!&amp;lt;/math&amp;gt;, the median value of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Similarly, any other percentile of the posterior &#039;&#039;pdf&#039;&#039; can be calculated and reported. For example, one could calculate the 90th percentile of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;’s posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{{\theta }_{0.9}}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.9\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This calculation will be used in [[Confidence Bounds]] and [[The Weibull Distribution]] for obtaining confidence bounds on the parameter(s).&lt;br /&gt;
&lt;br /&gt;
The next step will be to make inferences on the reliability. Since the parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; is a random variable described by the posterior &#039;&#039;pdf,&#039;&#039; all subsequent functions of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; are distributed random variables as well and are entirely based on the posterior &#039;&#039;pdf&#039;&#039; of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt;. Therefore, expected value, median or other percentile values will also need to be calculated. For example, the expected reliability at time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E[R(T|Data)] = \int_{\zeta}^{} R(T)f(\theta |Data)d{\theta}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, at a given time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, there is a distribution that governs the reliability value at that time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, and by using Bayes&#039;s rule, the expected (or mean) value of the reliability is obtained. Other percentiles of this distribution can also be obtained.&lt;br /&gt;
A similar procedure is followed for other functions of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;, such as failure rate, reliable life, etc.&lt;br /&gt;
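The procedure above can be sketched numerically. The following is a minimal grid-integration illustration (not Weibull++&#039;s implementation): it assumes an exponential failure model with complete data and a uniform prior on the failure rate lambda, approximates the marginal integral on a grid, and then computes the posterior mean of lambda and the expected reliability at a demonstration time. The data and prior range are hypothetical:

```python
import math

def posterior_summaries(times, lam_lo, lam_hi, t_demo, n_grid=20000):
    """Grid approximation of Bayes's rule for an exponential failure model
    with a uniform (flat) prior on lambda over [lam_lo, lam_hi].

    Returns the posterior mean of lambda and the expected reliability
    E[R(t_demo)|Data], both computed from the normalized posterior pdf.
    """
    r, total = len(times), sum(times)
    step = (lam_hi - lam_lo) / n_grid
    grid = [lam_lo + (i + 0.5) * step for i in range(n_grid)]
    # Likelihood of complete exponential data: lam^r * exp(-lam * total).
    like = [lam**r * math.exp(-lam * total) for lam in grid]
    marginal = sum(like) * step              # the normalizing constant
    post = [l / marginal for l in like]      # posterior pdf on the grid
    mean_lam = sum(lam * p for lam, p in zip(grid, post)) * step
    exp_rel = sum(math.exp(-lam * t_demo) * p
                  for lam, p in zip(grid, post)) * step
    return mean_lam, exp_rel

# Hypothetical failure times and prior range for lambda.
mean_lam, exp_rel = posterior_summaries([50.0, 120.0, 200.0],
                                        0.0001, 0.05, 100.0)
```

With a flat prior the posterior here is a (truncated) gamma distribution, so the grid results can be checked against the closed-form gamma moments; for less convenient priors only the numerical route remains, which is why the marginal integral generally requires numerical methods.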
&lt;br /&gt;
===Prior Distributions===&lt;br /&gt;
Prior distributions play a very important role in Bayesian statistics, as they essentially form the basis of the analysis. Two broad types of prior distribution exist, namely &#039;&#039;informative&#039;&#039; and &#039;&#039;non-informative&#039;&#039;. Non-informative prior distributions (also called &#039;&#039;vague&#039;&#039;, &#039;&#039;flat&#039;&#039; or &#039;&#039;diffuse&#039;&#039; priors) are distributions that have no population basis and play a minimal role in the posterior distribution. The idea behind the use of non-informative prior distributions is to make inferences that are not greatly affected by external information, or to do so when external information is not available. The uniform distribution is frequently used as a non-informative prior.&lt;br /&gt;
&lt;br /&gt;
On the other hand, informative priors have a stronger influence on the posterior distribution. The influence of the prior on the posterior is related to the sample size of the data and the form of the prior. Generally speaking, large sample sizes are required to modify strong priors, whereas weak priors are overwhelmed by even relatively small sample sizes. Informative priors are typically obtained from past data.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56801</id>
		<title>Parameter Estimation</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56801"/>
		<updated>2014-12-03T21:55:07Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Rank Adjustment Method for Right Censored Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|4|Parameter Estimation}}&lt;br /&gt;
The term &#039;&#039;parameter estimation&#039;&#039; refers to the process of using sample data (in reliability engineering, usually times-to-failure or success data) to estimate the parameters of the selected distribution. Several parameter estimation methods are available. This section presents an overview of the available methods used in life data analysis. More specifically, we start with the relatively simple method of Probability Plotting and continue with the more sophisticated methods of Rank Regression (or Least Squares), Maximum Likelihood Estimation and Bayesian Estimation Methods.&lt;br /&gt;
&lt;br /&gt;
=Probability Plotting=&lt;br /&gt;
The least mathematically intensive method for parameter estimation is the method of probability plotting. As the term implies, probability plotting involves a physical plot of the data on specially constructed &#039;&#039;probability plotting paper&#039;&#039;. This method is easily implemented by hand, given that one can obtain the appropriate probability plotting paper.&lt;br /&gt;
&lt;br /&gt;
The method of probability plotting takes the &#039;&#039;cdf&#039;&#039; of the distribution and attempts to linearize it by employing a specially constructed paper. The following sections illustrate the steps in this method using the 2-parameter Weibull distribution as an example. This includes:&lt;br /&gt;
&lt;br /&gt;
*Linearize the unreliability function&lt;br /&gt;
*Construct the probability plotting paper&lt;br /&gt;
*Determine the X and Y positions of the plot points&lt;br /&gt;
&lt;br /&gt;
The plot can then be used to read any particular time or reliability/unreliability value of interest.&lt;br /&gt;
&lt;br /&gt;
==Linearizing the Unreliability Function==&lt;br /&gt;
&lt;br /&gt;
In the case of the 2-parameter Weibull, the &#039;&#039;cdf&#039;&#039; (also the unreliability &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt;) is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(t)=Q(t)=1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function can then be linearized (i.e., put in the form &amp;lt;math&amp;gt;y = mx + b\,\!&amp;lt;/math&amp;gt;) as follows&#039;&#039;&#039;:&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 Q(t)= &amp;amp;  1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}  \\&lt;br /&gt;
  \ln (1-Q(t))= &amp;amp; \ln \left[ {e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}} \right]  \\&lt;br /&gt;
  \ln (1-Q(t))=&amp;amp; -\left(\tfrac{t}{\eta}\right)^{\beta}  \\&lt;br /&gt;
  \ln ( -\ln (1-Q(t)))= &amp;amp; \beta \left(\ln \left( \frac{t}{\eta }\right)\right) \\&lt;br /&gt;
  \ln \left( \ln \left( \frac{1}{1-Q(t)}\right) \right) = &amp;amp; \beta\ln{ t} -\beta\ln (\eta )  \\&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then by setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x=\ln \left( t \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equation can then be rewritten as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\beta x-\beta \ln \left( \eta  \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is now a linear equation with a slope of: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
m = \beta&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and an intercept of:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=-\beta \cdot ln(\eta)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
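As a quick numerical check of this linearization: sampling the Weibull cdf with assumed parameters, applying the y and x transforms and fitting a straight line recovers the shape parameter from the slope and the scale parameter from the intercept. The parameter values and sampling times below are arbitrary illustrations:

```python
import math

# Sample the 2-parameter Weibull cdf at a few times (hypothetical values).
beta_true, eta_true = 1.8, 500.0
ts = [100.0, 250.0, 500.0, 900.0, 1500.0]
qs = [1 - math.exp(-(t / eta_true) ** beta_true) for t in ts]

# Apply the linearizing transforms: x = ln(t), y = ln(ln(1/(1-Q))).
xs = [math.log(t) for t in ts]
ys = [math.log(math.log(1 / (1 - q))) for q in qs]

# Ordinary least squares line through the transformed points.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

beta_hat = slope                            # slope m = beta
eta_hat = math.exp(-intercept / beta_hat)   # intercept b = -beta * ln(eta)
```

Because the transformed cdf points lie exactly on the line y = beta*x - beta*ln(eta), the fit returns the true parameters to machine precision; with real (median-rank) plotting positions the recovery is only approximate.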
&lt;br /&gt;
==Constructing the Paper==&lt;br /&gt;
The next task is to construct the Weibull probability plotting paper with the appropriate y and x axes. The x-axis transformation is simply logarithmic. The y-axis is a bit more complex, requiring a double log reciprocal transformation, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt; is the unreliability. &lt;br /&gt;
&lt;br /&gt;
Such papers have been created by different vendors and are called &#039;&#039;probability plotting papers&#039;&#039;. ReliaSoft&#039;s reliability engineering resource website at www.weibull.com has different plotting papers available for [http://www.weibull.com/GPaper/index.htm download]. &lt;br /&gt;
&lt;br /&gt;
[[Image:WeibullPaper2C.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
To illustrate, consider the following probability plot on a slightly different type of Weibull probability paper. &lt;br /&gt;
&lt;br /&gt;
[[Image:different_weibull_paper.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
This paper is constructed based on the mentioned y and x transformations, where the y-axis represents unreliability and the x-axis represents time. Both of these values must be known for each time-to-failure point we want to plot. &lt;br /&gt;
&lt;br /&gt;
Then, given the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; value for each point, the points can easily be put on the plot. Once the points have been placed on the plot, the best possible straight line is drawn through these points. Once the line has been drawn, the slope of the line can be obtained (some probability papers include a slope indicator to simplify this calculation). This slope is the parameter &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;. To determine the scale parameter, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; (also called the &#039;&#039;characteristic life&#039;&#039;), one reads the time from the x-axis corresponding to &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Note that at:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Q(t=\eta)= &amp;amp; 1-{{e}^{-{{\left( \tfrac{\eta }{\eta } \right)}^{\beta }}}} \\ &lt;br /&gt;
  = &amp;amp; 1-{{e}^{-1}} \\ &lt;br /&gt;
  = &amp;amp; 0.632 \\ &lt;br /&gt;
  = &amp;amp; 63.2%  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, if we enter the &#039;&#039;y&#039;&#039; axis at &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;, the corresponding value of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; will be equal to &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;. Thus, using this simple methodology, the parameters of the Weibull distribution can be estimated.&lt;br /&gt;
&lt;br /&gt;
==Determining the X and Y Position of the Plot Points==&lt;br /&gt;
The points on the plot represent our data or, more specifically, our times-to-failure data. If, for example, we tested four units that failed at 10, 20, 30 and 40 hours, then we would use these times as our &#039;&#039;x&#039;&#039; values or time values. &lt;br /&gt;
&lt;br /&gt;
Determining the appropriate &#039;&#039;y&#039;&#039; plotting positions, or the unreliability values, is a little more complex. To determine the &#039;&#039;y&#039;&#039; plotting positions, we must first determine a value indicating the corresponding unreliability for that failure. In other words, we need to obtain the cumulative percent failed for each time-to-failure. For example, the cumulative percent failed by 10 hours may be 25%, by 20 hours 50%, and so forth. This is a simple method illustrating the idea. The problem with this simple method is the fact that the 100% point is not defined on most probability plots; thus, an alternative and more robust approach must be used. The most widely used method of determining this value is the method of obtaining the &#039;&#039;median rank&#039;&#039; for each failure, as discussed next.&lt;br /&gt;
&lt;br /&gt;
===Median Ranks ===&lt;br /&gt;
The Median Ranks method is used to obtain an estimate of the unreliability for each failure. The median rank is the value that the true probability of failure, &amp;lt;math&amp;gt;Q({{T}_{j}})\,\!&amp;lt;/math&amp;gt;, should have at the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure out of a sample of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units at the 50% confidence level. &lt;br /&gt;
&lt;br /&gt;
The rank can be found for any percentage point, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, greater than zero and less than one, by solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;. This represents the rank, or unreliability estimate, for the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure in the following equation for the cumulative binomial: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the sample size and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; the order number. &lt;br /&gt;
&lt;br /&gt;
The median rank is obtained by solving this equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  at &amp;lt;math&amp;gt;P = 0.50\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;0.50=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; and we have four failures, we would solve the median rank equation for the value of &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; four times; once for each failure with &amp;lt;math&amp;gt;j= 1, 2, 3 \text{ and }4\,\!&amp;lt;/math&amp;gt;. This result can then be used as the unreliability estimate for each failure or the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; plotting position. (See also [[The Weibull Distribution|The Weibull Distribution]]&amp;amp;nbsp;for a step-by-step example of this method.) The solution of the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; requires the use of numerical methods.&lt;br /&gt;
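Since the cumulative binomial equation must be solved numerically, a simple bisection sketch (not the algorithm used by any particular software) illustrates the calculation for N = 4:

```python
from math import comb

def cumulative_binomial(z, n, j):
    """P(at least j of n units fail) when each fails with probability z."""
    return sum(comb(n, k) * z**k * (1 - z)**(n - k) for k in range(j, n + 1))

def median_rank(n, j, p=0.50, tol=1e-10):
    """Solve the cumulative binomial equation for Z at P = p by bisection.

    cumulative_binomial is monotonically increasing in z, so bisection
    on [0, 1] converges to the unique root.
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cumulative_binomial(mid, n, j) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Median ranks for each of the four failures in a sample of N = 4.
ranks = [median_rank(4, j) for j in (1, 2, 3, 4)]
```

For the first failure the equation reduces to 1 - (1 - Z)^4 = 0.5, so the result can be checked against the closed form Z = 1 - 0.5^(1/4) ≈ 0.1591.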
&lt;br /&gt;
===Beta and F Distributions Approach===&lt;br /&gt;
A more straightforward method of estimating median ranks is to apply two transformations to the cumulative binomial equation, first to the beta distribution and then to the F distribution, resulting in [[Appendix:_Life_Data_Analysis_References|[12, 13]]]: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   MR &amp;amp; = &amp;amp; \tfrac{1}{1+\tfrac{N-j+1}{j}{{F}_{0.50;m;n}}}  \\&lt;br /&gt;
   m &amp;amp; = &amp;amp; 2(N-j+1)  \\&lt;br /&gt;
   n &amp;amp; = &amp;amp; 2j  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{F}_{0.50;m;n}}\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution at the 0.50 point, with &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; degrees of freedom, for failure &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; out of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units.&lt;br /&gt;
&lt;br /&gt;
=== Benard&#039;s Approximation for Median Ranks  ===&lt;br /&gt;
Another quick, though less accurate, approximation of the median ranks is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MR = \frac{{j - 0.3}}{{N + 0.4}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This approximation of the median ranks is also known as &#039;&#039;Benard&#039;s approximation&#039;&#039;.&lt;br /&gt;
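The approximation is a one-line calculation; for N = 4 it agrees with the exact median ranks to about three decimal places (the exact value for j = 1 is 1 - 0.5^(1/4) ≈ 0.1591):

```python
def benard_median_rank(n, j):
    """Benard's approximation to the median rank for failure j of n units."""
    return (j - 0.3) / (n + 0.4)

# Approximate median ranks for four failures out of N = 4 units.
approx = [benard_median_rank(4, j) for j in (1, 2, 3, 4)]
print(approx[0])  # 0.7 / 4.4 = 0.15909...
```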
&lt;br /&gt;
===Kaplan-Meier===&lt;br /&gt;
The Kaplan-Meier estimator (also known as the &#039;&#039;product limit estimator&#039;&#039;) is used as an alternative to the median ranks method for calculating the estimates of the unreliability for probability plotting purposes. The equation of the estimator is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{F}({{t}_{i}})=1-\underset{j=1}{\overset{i}{\mathop \prod }}\,\frac{{{n}_{j}}-{{r}_{j}}}{{{n}_{j}}},\text{ }i=1,...,m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  m =  &amp;amp; {\text{total number of data points}} \\ &lt;br /&gt;
  n =  &amp;amp; {\text{the total number of units}} \\ &lt;br /&gt;
  {n_i} =  &amp;amp; n - \sum_{j = 0}^{i - 1}{s_j} - \sum_{j = 0}^{i - 1}{r_j},\text{ }i = 1,...,m \\ &lt;br /&gt;
  {r_j} =  &amp;amp; {\text{ number of failures in the }}{j^{th}}{\text{ data group, and}} \\ &lt;br /&gt;
  {s_j} =  &amp;amp; {\text{number of surviving units in the }}{j^{th}}{\text{ data group}} \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
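A short sketch of the product limit calculation, assuming the data are already grouped in time order; the sample size and group counts below are hypothetical:

```python
def kaplan_meier_unreliability(n_total, groups):
    """Product limit estimate of F(t_i) for successive data groups.

    groups is a list of (failures, suspensions) pairs in time order.
    The risk set n_i shrinks by the failures and suspensions of all
    earlier groups; F_hat(t_i) = 1 - prod_{j<=i} (n_j - r_j) / n_j.
    """
    estimates = []
    at_risk = n_total
    survival = 1.0
    for r_j, s_j in groups:
        survival *= (at_risk - r_j) / at_risk
        estimates.append(1.0 - survival)
        at_risk -= r_j + s_j          # both failures and suspensions leave
    return estimates

# Hypothetical sample of 10 units: 1 failure in the first group,
# then 2 failures and 1 suspension in the second group.
fhat = kaplan_meier_unreliability(10, [(1, 0), (2, 1)])
# fhat[0] = 1 - 9/10 = 0.1;  fhat[1] = 1 - (9/10)*(7/9) = 0.3
```

Note that suspensions reduce the size of the risk set for later groups without contributing a failure factor of their own, which is how the estimator handles censored units.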
&lt;br /&gt;
== Probability Plotting Example  ==&lt;br /&gt;
This same methodology can be applied to other distributions with &#039;&#039;cdf&#039;&#039; equations that can be linearized. Different probability papers exist for each distribution, because different distributions have different &#039;&#039;cdf&#039;&#039; equations. ReliaSoft&#039;s software tools automatically create these plots for you. Special scales on these plots allow you to derive the parameter estimates directly from the plots, similar to the way &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; were obtained from the Weibull probability plot. The following example demonstrates the method again, this time using the 1-parameter exponential distribution.&lt;br /&gt;
&lt;br /&gt;
{{:Probability Plotting Example}}&lt;br /&gt;
&lt;br /&gt;
== Comments on the Probability Plotting Method ==&lt;br /&gt;
Besides the most obvious drawback to probability plotting, which is the amount of effort required, manual probability plotting is not always consistent in the results. Two people plotting a straight line through a set of points will not always draw this line the same way, and thus will come up with slightly different results. This method was used primarily before the widespread use of computers that could easily perform the calculations for more complicated parameter estimation methods, such as the least squares and maximum likelihood methods.&lt;br /&gt;
&lt;br /&gt;
= Least Squares (Rank Regression)  =&lt;br /&gt;
Using the idea of probability plotting, regression analysis mathematically fits the best straight line to a set of points, in an attempt to estimate the parameters. Essentially, this is a mathematically based version of the probability plotting method discussed previously. &lt;br /&gt;
&lt;br /&gt;
The method of linear least squares is used for all regression analysis performed by Weibull++, except for the cases of the 3-parameter Weibull, mixed Weibull, gamma and generalized gamma distributions, where a non-linear regression technique is employed. The terms &#039;&#039;linear regression&#039;&#039; and &#039;&#039;least squares&#039;&#039; are used synonymously in this reference. In Weibull++, the term &#039;&#039;rank regression&#039;&#039; is used instead of least squares, or linear regression, because the regression is performed on the rank values, more specifically, the median rank values (represented on the y-axis). The method of least squares requires that a straight line be fitted to a set of data points, such that the sum of the squares of the distances of the points to the fitted line is minimized. This minimization can be performed in either the vertical or horizontal direction. If the regression is on &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the horizontal deviations from the points to the line are minimized. If the regression is on Y, then the vertical deviations are minimized. This is illustrated in the following figure. &lt;br /&gt;
&lt;br /&gt;
[[Image:minimizingdistance.png|center|500px]]&lt;br /&gt;
&lt;br /&gt;
=== Rank Regression on Y  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;-values are known exactly. Then, according to the &#039;&#039;least squares principle,&#039;&#039; which minimizes the vertical distance between the data points and the straight line fitted to the data, the best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;y=\hat{a}+\hat{b}x\,\!&amp;lt;/math&amp;gt; (where the recently introduced (&amp;lt;math&amp;gt;\hat{ }\,\!&amp;lt;/math&amp;gt;) symbol indicates that this value is an estimate) such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\sum\limits_{i=1}^{N}{{{\left( \hat{a}+\hat{b}{{x}_{i}}-{{y}_{i}} \right)}^{2}}=\min \sum\limits_{i=1}^{N}{{{\left( a+b{{x}_{i}}-{{y}_{i}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and where &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat b\,\!&amp;lt;/math&amp;gt; are the &#039;&#039;least squares estimates&#039;&#039; of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}=\bar{y}-\hat{b}\bar{x}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Rank Regression on X  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the y-values are known exactly. The same least squares principle is applied, but this time, minimizing the horizontal distance between the data points and the straight line fitted to the data. The best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;x=\widehat{a}+\widehat{b}y\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(\widehat{a}+\widehat{b}{{y}_{i}}-{{x}_{i}})}^{2}}=\min \underset{i=1}{\overset{N}{\mathop \sum }}\,{{(a+b{{y}_{i}}-{{x}_{i}})}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat b\,\!&amp;lt;/math&amp;gt; are the least squares estimates of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}=\bar{x}-\hat{b}\bar{y}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
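As a rough numerical sketch of these estimators (the function name and sample data are illustrative, not from the original text), the sums above translate directly into code:

```python
# Sketch of the x-on-y least squares (rank regression) estimators: the
# y-values are taken as known exactly, so horizontal distances to the line
# x = a_hat + b_hat * y are minimized.

def rank_regression_x_on_y(x, y):
    """Return (a_hat, b_hat) for the fitted line x = a_hat + b_hat * y."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    syy = sum(yi * yi for yi in y)
    # b_hat from the sums-of-products formula above
    b_hat = (sxy - sx * sy / n) / (syy - sy ** 2 / n)
    # a_hat = x_bar - b_hat * y_bar
    a_hat = sx / n - b_hat * sy / n
    return a_hat, b_hat
```

For points lying exactly on x = 2 + 3y, the estimators recover a_hat = 2 and b_hat = 3.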
The corresponding relations for determining the parameters for specific distributions (i.e., Weibull, exponential, etc.) are presented in the chapters covering those distributions.&lt;br /&gt;
&lt;br /&gt;
=== Correlation Coefficient  ===&lt;br /&gt;
The correlation coefficient is a measure of how well the linear regression model fits the data and is usually denoted by &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt;. In the case of life data analysis, it is a measure for the strength of the linear relation (correlation) between the median ranks and the data. The population correlation coefficient is defined as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\rho =\frac{{{\sigma }_{xy}}}{{{\sigma }_{x}}{{\sigma }_{y}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\sigma}_{xy}} = \,\!&amp;lt;/math&amp;gt; covariance of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\sigma}_{x}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{\sigma}_{y}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The estimator of &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt; is the sample correlation coefficient, &amp;lt;math&amp;gt;\hat{\rho }\,\!&amp;lt;/math&amp;gt;, given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\sqrt{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N} \right)\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The range of &amp;lt;math&amp;gt;\hat \rho \,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;-1\le \hat{\rho }\le 1\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
[[Image:correlationcoeffficient.png|center|500px]] &lt;br /&gt;
&lt;br /&gt;
The closer the value is to &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt;, the better the linear fit. Note that +1 indicates a perfect fit (the paired values (&amp;lt;math&amp;gt;{{x}_{i}},{{y}_{i}}\,\!&amp;lt;/math&amp;gt;) lie on a straight line) with a positive slope, while -1 indicates a perfect fit with a negative slope. A correlation coefficient value of zero would indicate that the data points are randomly scattered and have no linear relation to the regression line.&lt;br /&gt;
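The sample correlation coefficient defined above can be sketched directly from the sums it is built from (an illustrative helper, not part of the original text):

```python
from math import sqrt

def sample_correlation(x, y):
    """Sample correlation coefficient rho_hat from the sums-of-products form."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    syy = sum(b * b for b in y)
    num = sxy - sx * sy / n                              # covariance term
    den = sqrt((sxx - sx ** 2 / n) * (syy - sy ** 2 / n))  # std. dev. terms
    return num / den
```

Perfectly collinear data with positive slope give +1; reversing the slope gives -1, matching the range discussed above.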
&lt;br /&gt;
===Comments on the Least Squares Method===&lt;br /&gt;
The least squares estimation method is quite good for functions that can be linearized. For these distributions, the calculations are relatively easy and straightforward, having closed-form solutions that can readily yield an answer without having to resort to numerical techniques or tables. Furthermore, this technique provides a good measure of the goodness-of-fit of the chosen distribution in the correlation coefficient. Least squares is generally best used with data sets containing complete data, that is, data consisting only of single times-to-failure with no censored or interval data. (See [[Life Data Classification]] for information about the different data types, including complete, left censored, right censored (or suspended) and interval data.) &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Least Squares/Rank Regression Equations]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Rank Methods for Censored Data=&lt;br /&gt;
All available data should be considered in the analysis of times-to-failure data. This includes the case when a particular unit in a sample has been removed from the test prior to failure. An item, or unit, which is removed from a reliability test prior to failure, or a unit which is in the field and is still operating at the time the reliability of these units is to be determined, is called a &#039;&#039;suspended item&#039;&#039;, &#039;&#039;right censored observation&#039;&#039; or &#039;&#039;right censored data point&#039;&#039;. Suspended items analysis would also be considered when: &lt;br /&gt;
&lt;br /&gt;
#We need to make an analysis of the available results before test completion. &lt;br /&gt;
#The failure modes which are occurring are different than those anticipated and such units are withdrawn from the test. &lt;br /&gt;
#We need to analyze a single mode and the actual data set comprises multiple modes. &lt;br /&gt;
#A &#039;&#039;warranty analysis&#039;&#039; is to be made of all units in the field (non-failed and failed units). The non-failed units are considered to be suspended items (or right censored).&lt;br /&gt;
&lt;br /&gt;
This section describes the rank methods that are used in both probability plotting and least squares (rank regression) to handle censored data. This includes:&lt;br /&gt;
&lt;br /&gt;
*The rank adjustment method for right censored (suspension) data.&lt;br /&gt;
*ReliaSoft&#039;s alternative ranking method for interval censored data.&lt;br /&gt;
=== Rank Adjustment Method for Right Censored Data ===&lt;br /&gt;
When using the probability plotting or least squares (rank regression) method for data sets where some of the units did not fail, or were suspended, we need to adjust their probability of failure, or unreliability. As discussed before, estimates of the unreliability for complete data are obtained using the median ranks approach. The following methodology illustrates how adjusted median ranks are computed to account for right censored data. To better illustrate the methodology, consider the following example in Kececioglu [[Appendix:_Life_Data_Analysis_References|&amp;amp;nbsp;[20]]] where five items are tested resulting in three failures and two suspensions. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Item Number &amp;lt;br&amp;gt;(Position) &lt;br /&gt;
! Failure (F) &amp;lt;br&amp;gt;or Suspension (S) &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 5,100&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,500&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 15,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 22,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 40,000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The methodology for plotting suspended items involves adjusting the rank positions and plotting the data based on new positions, determined by the location of the suspensions. If we consider these five units, the following methodology would be used: The first item must be the first failure; hence, it is assigned failure order number &amp;lt;math&amp;gt;j = 1\,\!&amp;lt;/math&amp;gt;. The actual failure order number (or position) of the second failure, &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; is in doubt. It could either be in position 2 or in position 3. Had &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; not been withdrawn from the test at 9,500 hours, it could have operated successfully past 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 2. Alternatively, &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; could also have failed before 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 3. In this case, the failure order number for &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; will be some number between 2 and 3. To determine this number, consider the following: &lt;br /&gt;
&lt;br /&gt;
We can find the number of ways the second failure can occur in either order number 2 (position 2) or order number 3 (position 3). The possible ways are listed next. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;6&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 2 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 3&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 4 &lt;br /&gt;
| 5 &lt;br /&gt;
| 6 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be seen that &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; can occur in the second position six ways and in the third position two ways. The most probable position is the average position over all of these possible ways, or the &#039;&#039;mean order number&#039;&#039; (MON), given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}=MO{{N}_{2}}=\frac{(6\times 2)+(2\times 3)}{6+2}=2.25\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Using the same logic on the third failure, it can be located in position numbers 3, 4 and 5 in the possible ways listed next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 3 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 4&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 5&lt;br /&gt;
|-&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Then the mean order number for the third failure (item 5) is: &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=\frac{(2\times 3)+(3\times 4)+(3\times 5)}{2+3+3}=4.125\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Once the mean order number for each failure has been established, we obtain the median rank positions for these failures at their mean order number. Specifically, we obtain the median rank of the order numbers 1, 2.25 and 4.125 out of a sample size of 5, as given next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Plotting Positions for the Failures (Sample Size=5)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Failure Number &lt;br /&gt;
! MON &lt;br /&gt;
! Median Rank Position(%)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1:&amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1 &lt;br /&gt;
| 13%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2:&amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 2.25 &lt;br /&gt;
| 36%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3:&amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 4.125 &lt;br /&gt;
| 71%&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the median rank values have been obtained, the probability plotting analysis is identical to that presented before. As you might have noticed, this methodology is rather laborious. Other techniques and shortcuts have been developed over the years to streamline this procedure. For more details on this method, see Kececioglu [[Appendix:_Life_Data_Analysis_References|[20]]]. Here, we will introduce one of these methods. This method calculates MON using an increment, &#039;&#039;I&#039;&#039;, which is defined by:&lt;br /&gt;
&lt;br /&gt;
:: &amp;lt;math&amp;gt;{{I}_{i}}=\frac{N+1-PMON}{1+NIBPSS}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
* &#039;&#039;N&#039;&#039;= the sample size, or total number of items in the test&lt;br /&gt;
* &#039;&#039;PMON&#039;&#039; = previous mean order number&lt;br /&gt;
* &#039;&#039;NIBPSS&#039;&#039; = the number of items beyond the present suspended set&lt;br /&gt;
* &#039;&#039;i&#039;&#039; = the ith failure item&lt;br /&gt;
&lt;br /&gt;
MON is given as:&lt;br /&gt;
 &lt;br /&gt;
:: &amp;lt;math&amp;gt;MO{{N}_{i}}=MO{{N}_{i-1}}+{{I}_{i}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s recalculate the previous example using this method.&lt;br /&gt;
&lt;br /&gt;
For F1:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{1}}=MO{{N}_{0}}+{{I}_{1}}=\frac{5+1-0}{1+5}=1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For F2:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{2}}=MO{{N}_{1}}+{{I}_{2}}=1+\frac{5+1-1}{1+3}=2.25&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For F3:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=MO{{N}_{2}}+{{I}_{3}}=2.25+\frac{5+1-2.25}{1+1}=4.125&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The MON obtained for each failure item via this method is the same as that obtained from the first method, so the median rank values will also be the same.&lt;br /&gt;
&lt;br /&gt;
For grouped data, the increment &amp;lt;math&amp;gt;{{I}_{i}}&amp;lt;/math&amp;gt; at each failure group is multiplied by the number of failures in that group. &lt;br /&gt;
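The increment method can be sketched as follows. The function names are illustrative, and the median-rank step uses Benard's approximation, which is an assumption of this sketch (it agrees with the exact median ranks in the table above to the precision shown there):

```python
def mean_order_numbers(states):
    """states: time-ordered list of 'F' (failure) / 'S' (suspension).
    Returns the mean order number of each failure using the increment
    I_i = (N + 1 - PMON) / (1 + NIBPSS)."""
    n = len(states)
    mons, prev = [], 0.0
    for pos, s in enumerate(states, start=1):
        if s == 'F':
            nibpss = n - pos + 1  # items from this position onward
            prev += (n + 1 - prev) / (1 + nibpss)
            mons.append(prev)
    return mons

def benard_median_rank(mon, n):
    # Benard's approximation to the median rank (assumption of this sketch)
    return (mon - 0.3) / (n + 0.4)
```

For the five-item example above, `mean_order_numbers(['F', 'S', 'F', 'S', 'F'])` returns [1.0, 2.25, 4.125], and the approximate median ranks round to the 13%, 36% and 71% shown in the table.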
 &lt;br /&gt;
==== Shortfalls of the Rank Adjustment Method  ====&lt;br /&gt;
Even though the rank adjustment method is the most widely used method for performing analysis of suspended items, we would like to point out the following shortcoming. As you may have noticed, only the position where the failure occurred is taken into account, and not the exact time-to-suspension. For example, this methodology would yield the exact same results for the next two cases. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 1 &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of item, hr &lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000 &lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,100 &lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,700&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,200 &lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,800&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,300 &lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,900&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000 &lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This shortfall is significant when the number of failures is small and the number of suspensions is large and not spread uniformly between failures, as with these data. In cases like this, it is highly recommended to use maximum likelihood estimation (MLE) to estimate the parameters instead of using least squares, because MLE does not look at ranks or plotting positions, but rather considers each unique time-to-failure or suspension. For the data given above, the results are as follows. The estimated parameters using the method just described are the same for both cases (1 and 2): &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.81}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{11,417 hr}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, the MLE results for Case 1 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{1}\text{.33}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{6,900 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the MLE results for Case 2 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.9337}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{21,348 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we can see, there is a sizable difference between the MLE results for the two cases and the regression results. The regression results are identical for both cases because regression considers only the positions of the suspensions. The MLE results, however, differ considerably, with Case 2 yielding a much larger value of &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; due to its higher suspension times. This is because the maximum likelihood technique, unlike rank regression, considers the values of the suspensions when estimating the parameters. This is illustrated in the [[Parameter_Estimation#Maximum_Likelihood_Estimation_.28MLE.29|discussion of MLE]] given below.&lt;br /&gt;
&lt;br /&gt;
== ReliaSoft&#039;s Ranking Method (RRM) for Interval Censored Data==&lt;br /&gt;
When analyzing interval data, it is commonplace to assume that the actual failure time occurred at the midpoint of the interval. To be more conservative, you can use the starting point of the interval, or to be more optimistic, you can use the end point. Weibull++ allows you to employ ReliaSoft&#039;s ranking method (RRM) when analyzing interval data. This ranking method uses an iterative process and is an improvement over the standard ranking method (SRM). For more details, see [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|ReliaSoft&#039;s Ranking Method]].&lt;br /&gt;
&lt;br /&gt;
= Maximum Likelihood Estimation (MLE) = &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM OTHER WIKI PAGES. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
From a statistical point of view, the method of maximum likelihood estimation is, with some exceptions, considered to be the most robust of the parameter estimation techniques discussed here. The method presented in this section is for complete data (i.e., data consisting only of times-to-failure). The analyses for [[Parameter_Estimation#MLE_for_Right_Censored_Data|right censored (suspension) data]] and for [[Parameter_Estimation#MLE_for_Interval_and_Left_Censored_Data|interval or left censored data]] are then discussed in the following sections.&lt;br /&gt;
&lt;br /&gt;
The basic idea behind MLE is to obtain the most likely values of the parameters, for a given distribution, that will best describe the data. As an example, consider the following data (-3, 0, 4) and assume that you are trying to estimate the mean of the data. Now, if you have to choose the most likely value for the mean from -5, 1 and 10, which one would you choose? In this case, the most likely value is 1 (given your limit on choices). Similarly, under MLE, one determines the most likely values for the parameters of the assumed distribution. It is mathematically formulated as follows. &lt;br /&gt;
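The thought experiment above can be run numerically. Scoring each candidate mean by the normal log-likelihood of the data (unit variance is an assumption made purely for this illustration) picks out 1 as the most likely value:

```python
from math import log, pi

data = [-3, 0, 4]
candidates = [-5, 1, 10]

def normal_loglik(mu, xs, sigma=1.0):
    """Log-likelihood of xs under a Normal(mu, sigma) model."""
    return sum(-0.5 * log(2 * pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)
               for x in xs)

# The MLE idea in miniature: keep the candidate with the highest likelihood.
best = max(candidates, key=lambda mu: normal_loglik(mu, data))
```

Here `best` is 1, matching the answer given in the text.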
&lt;br /&gt;
If &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; unknown parameters which need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; independent observations, &amp;lt;math&amp;gt;{{x}_{1}},{{x}_{2}},\cdots ,{{x}_{R}}\,\!&amp;lt;/math&amp;gt;, which correspond in the case of life data analysis to failure times. The likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{R}})=L=\underset{i=1}{\overset{R}{\mathop \prod }}\,f({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;i = 1,2,...,R\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logarithmic likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Lambda  = \ln L =\sum_{i = 1}^R \ln f({x_i};{\theta _1},{\theta _2},...,{\theta _k})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood estimators (or parameter values) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are obtained by maximizing &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By maximizing &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt; which is much easier to work with than &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt;, the maximum likelihood estimators (MLE) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the simultaneous solutions of &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; equations such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{\partial{\Lambda}}{\partial{\theta_j}}=0, \text{ j=1,2...,k}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even though it is common practice to plot the MLE solutions using median ranks (points are plotted according to median ranks and the line according to the MLE solutions), this is not completely representative. As can be seen from the equations above, the MLE method is independent of any kind of ranks. For this reason, the MLE solution often appears not to track the data on the probability plot. This is perfectly acceptable because the two methods are independent of each other, and in no way suggests that the solution is wrong.&lt;br /&gt;
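As a hedged one-parameter illustration of the recipe above (not a method named in the text): for the exponential distribution with &#039;&#039;pdf&#039;&#039; f(t) = &amp;lambda;e&lt;sup&gt;-&amp;lambda;t&lt;/sup&gt;, the single equation &amp;part;&amp;Lambda;/&amp;part;&amp;lambda; = 0 has the closed-form solution &amp;lambda;-hat = R / &amp;Sigma;t&lt;sub&gt;i&lt;/sub&gt;:

```python
# For complete exponential data, Lambda = R*ln(lam) - lam*sum(t_i), so
# dLambda/dlam = R/lam - sum(t_i) = 0 yields lam_hat = R / sum(t_i).
def exponential_mle(times):
    """Closed-form MLE of the exponential rate from complete failure times."""
    return len(times) / sum(times)
```

For failure times of 1, 2 and 3 hours, this gives a rate estimate of 0.5 per hour.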
&lt;br /&gt;
=== MLE for Right Censored Data  ===&lt;br /&gt;
When performing maximum likelihood analysis on data with suspended items, the likelihood function needs to be expanded to take into account the suspended items. The overall estimation technique does not change, but another term is added to the likelihood function to account for the suspended items. Beyond that, the method of solving for the parameter estimates remains the same. For example, consider a distribution where &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039; and &#039;&#039;cdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
    &amp;amp; F(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the unknown parameters to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; observed failures at &amp;lt;math&amp;gt;{{T}_{1}},{{T}_{2}},...,{{T}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; observed suspensions at &amp;lt;math&amp;gt;{{S}_{1}},{{S}_{2}},...,{{S}_{M}}\,\!&amp;lt;/math&amp;gt;. The likelihood function is then formulated as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R,}}{{S}_{1}},...,{{S}_{M}})= &amp;amp; \underset{i=1}{\overset{R}{\mathop \prod }}\,f({{T}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \cdot \underset{j=1}{\overset{M}{\mathop \prod }}\,[1-F({{S}_{j}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameters are solved by maximizing this equation. In most cases, no closed-form solution exists for this maximum or for the parameters. Solutions specific to each distribution utilizing MLE are presented in [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
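The maximization in this likelihood is typically done numerically. Below is a hedged sketch using Python with SciPy (not part of the original text) for the 2-parameter Weibull as an example distribution; all failure and suspension times are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: R = 6 exact failures and M = 2 right-censored units.
failures = np.array([16.0, 34.0, 53.0, 75.0, 93.0, 120.0])
suspensions = np.array([100.0, 100.0])

def neg_log_likelihood(params):
    """Negative log-likelihood for the 2-parameter Weibull with suspensions."""
    beta, eta = params
    if beta <= 0.0 or eta <= 0.0:
        return np.inf  # parameters must be positive
    # Failures contribute log f(T_i) for the Weibull pdf.
    log_f = (np.log(beta / eta) + (beta - 1.0) * np.log(failures / eta)
             - (failures / eta) ** beta).sum()
    # Suspensions contribute log[1 - F(S_j)] = -(S_j/eta)^beta.
    log_R = (-(suspensions / eta) ** beta).sum()
    return -(log_f + log_R)

res = minimize(neg_log_likelihood, x0=[1.0, np.median(failures)],
               method="Nelder-Mead")
beta_hat, eta_hat = res.x
print(beta_hat, eta_hat)
```

The estimates are the parameter values at which the minimizer converges; commercial tools such as Weibull++ use their own (more robust) solvers for the same equations.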
&lt;br /&gt;
=== MLE for Interval and Left Censored Data  ===&lt;br /&gt;
The inclusion of left and interval censored data in an MLE solution for parameter estimates involves adding a term to the likelihood equation to account for the data types in question. When using interval data, it is assumed that the failures occurred in an interval; i.e., in the interval from time &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or from time 0 to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; if left censored), where &amp;lt;math&amp;gt;A &amp;lt; B\,\!&amp;lt;/math&amp;gt;. In the case of interval data, and given &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; interval observations, the likelihood function is modified by multiplying it with an additional term as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{P}})= &amp;amp; \underset{i=1}{\overset{P}{\mathop \prod }}\,\{F({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \ \ -F({{x}_{i-1}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that if only interval data are present, this term will represent the entire likelihood function for the MLE solution. The next section gives a formulation of the complete likelihood function for all possible censoring schemes.&lt;br /&gt;
&lt;br /&gt;
=== The Complete Likelihood Function  ===&lt;br /&gt;
We have now seen that obtaining MLE parameter estimates for different types of data involves incorporating different terms in the likelihood function to account for complete data, right censored data, and left and interval censored data. After including the terms for the different types of data, the likelihood function can be expressed in its complete form: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
    L= &amp;amp; \underset{i=1}{\mathop{\overset{R}{\mathop{\prod }}\,}}\,f({{T}_{i}};{{\theta }_{1}},...,{{\theta }_{k}})\cdot \underset{j=1}{\mathop{\overset{M}{\mathop{\prod }}\,}}\,[1-F({{S}_{j}};{{\theta }_{1}},...,{{\theta }_{k}})]  \\&lt;br /&gt;
    &amp;amp; \cdot \underset{l=1}{\mathop{\overset{P}{\mathop{\prod }}\,}}\,\left\{ F({{I}_{{{l}_{U}}}};{{\theta }_{1}},...,{{\theta }_{k}})-F({{I}_{{{l}_{L}}}};{{\theta }_{1}},...,{{\theta }_{k}}) \right\}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; L\to L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R}},{{S}_{1}},...,{{S}_{M}},{{I}_{1}},...{{I}_{P}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
*&amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the number of units with exact failures &lt;br /&gt;
*&amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is the number of suspended units &lt;br /&gt;
*&amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is the number of units with left censored or interval times-to-failure &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the parameters of the distribution &lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure&lt;br /&gt;
*&amp;lt;math&amp;gt;{{S}_{j}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; time of suspension&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{U}}}}\,\!&amp;lt;/math&amp;gt; is the ending of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{L}}}}\,\!&amp;lt;/math&amp;gt; is the beginning of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
&lt;br /&gt;
The total number of units is &amp;lt;math&amp;gt;N = R + M + P\,\!&amp;lt;/math&amp;gt;. It should be noted that in this formulation, if any of &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is zero, the corresponding product term is taken to be one, not zero.&lt;br /&gt;
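The complete likelihood can be sketched numerically as follows. This is an illustrative example only (Python with SciPy, which the original text does not use): the 2-parameter Weibull is chosen arbitrarily, and all times are made up.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Hypothetical data combining all three terms of the complete likelihood.
failures = np.array([20.0, 45.0, 70.0])        # R exact failure times T_i
suspensions = np.array([80.0, 80.0])           # M right-censored times S_j
intervals = np.array([[30.0, 60.0],            # P intervals (I_lL, I_lU);
                      [0.0, 25.0]])            # a left-censored unit is an
                                               # interval starting at 0

def neg_log_L(params):
    beta, eta = params
    if beta <= 0.0 or eta <= 0.0:
        return np.inf
    dist = weibull_min(c=beta, scale=eta)
    ll = dist.logpdf(failures).sum()                     # product of f(T_i)
    ll += dist.logsf(suspensions).sum()                  # product of 1 - F(S_j)
    ll += np.log(dist.cdf(intervals[:, 1])
                 - dist.cdf(intervals[:, 0])).sum()      # F(I_lU) - F(I_lL)
    return -ll

res = minimize(neg_log_L, x0=[1.0, 50.0], method="Nelder-Mead")
beta_hat, eta_hat = res.x
```

Note how a zero count for any data type simply drops its term from the sum, consistent with the convention that an empty product equals one.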
&lt;br /&gt;
== Comments on the MLE Method  ==&lt;br /&gt;
The MLE method has many large sample properties that make it attractive for use. It is asymptotically consistent, which means that as the sample size gets larger, the estimates converge to the true values. It is asymptotically efficient, which means that for large samples, it produces the most precise estimates. It is asymptotically unbiased, which means that for large samples, one expects to obtain the true value on average. The distribution of the estimates themselves is asymptotically normal for large samples, and this is the basis for the usual [[Confidence_Bounds#Fisher_Matrix_Confidence_Bounds|Fisher Matrix Confidence Bounds]] discussed later. These are all excellent large sample properties. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, the sample size necessary to achieve these properties can be quite large: thirty to fifty, or even more than a hundred, exact failure times, depending on the application. With fewer points, the method can be badly biased. It is known, for example, that MLE estimates of the shape parameter for the Weibull distribution are badly biased for small sample sizes, and the effect is worsened by the amount of censoring. This bias can cause major discrepancies in analysis. There are also pathological situations in which the asymptotic properties of the MLE do not apply. One of these is estimating the location parameter for the three-parameter Weibull distribution when the shape parameter has a value close to 1. These problems, too, can cause major discrepancies. &lt;br /&gt;
&lt;br /&gt;
However, MLE handles suspensions and interval data better than rank regression, particularly with heavily censored data sets that contain few exact failure times or unevenly distributed censoring times. It can also provide estimates with one or even no observed failures, which rank regression cannot do. As a rule of thumb, our recommendation is to use rank regression techniques when the sample size is small and censoring is light (censoring is discussed in [[Life Data Classification|Life Data Classifications]]). When heavy or uneven censoring is present, when a high proportion of interval data is present and/or when the sample size is sufficient, MLE should be preferred. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Appendix:_Maximum_Likelihood_Estimation_Example|Maximum Likelihood Parameter Estimation Example]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Bayesian Parameter Estimation Methods=&lt;br /&gt;
Up to this point, we have dealt exclusively with what is commonly referred to as classical statistics. In this section, another school of thought in statistical analysis will be introduced, namely Bayesian statistics. The premise of Bayesian statistics (within the context of life data analysis) is to incorporate prior knowledge, along with a given set of current observations, in order to make statistical inferences. The prior information could come from operational or observational data, from previous comparable experiments or from engineering knowledge.  This type of analysis can be particularly useful when there is limited test data for a given design or failure mode but there is a strong prior understanding of the failure rate behavior for that design or mode. By incorporating prior information about the parameter(s), a posterior distribution for the parameter(s) can be obtained and inferences on the model parameters and their functions can be made. This section is intended to give a quick and elementary overview of Bayesian methods, focused primarily on the material necessary for understanding the Bayesian analysis methods available in Weibull++. Extensive coverage of the subject can be found in numerous books dealing with Bayesian statistics.&lt;br /&gt;
&lt;br /&gt;
===Bayes’s Rule===&lt;br /&gt;
Bayes’s rule provides the framework for combining prior information with sample data. In this reference, we apply Bayes’s rule to combine prior information on the assumed distribution&#039;s parameter(s) with sample data in order to make inferences based on the model. The prior knowledge about the parameter(s) is expressed in terms of a distribution &amp;lt;math&amp;gt;\varphi (\theta )\,\!&amp;lt;/math&amp;gt;, called the &#039;&#039;prior distribution&#039;&#039;. The &#039;&#039;posterior&#039;&#039; distribution of &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; given the sample data, obtained using Bayes&#039;s rule, provides the updated information about the parameters &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt;. This is expressed with the following posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta |Data) = \frac{L(Data|\theta )\varphi (\theta )}{\int_{\zeta}^{} L(Data|\theta )\varphi(\theta )d\theta}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; is a vector of the parameters of the chosen distribution&lt;br /&gt;
*&amp;lt;math&amp;gt;\zeta\,\!&amp;lt;/math&amp;gt; is the range of &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt; L(Data|\theta)\,\!&amp;lt;/math&amp;gt; is the likelihood function based on the chosen distribution and data&lt;br /&gt;
*&amp;lt;math&amp;gt;\varphi(\theta )\,\!&amp;lt;/math&amp;gt; is the prior distribution for each of the parameters&lt;br /&gt;
&lt;br /&gt;
The integral in the Bayes&#039;s rule equation is often referred to as the marginal probability, which is a constant number that can be interpreted as the probability of obtaining the sample data given a prior distribution. Generally, the integral in the Bayes&#039;s rule equation does not have a closed form solution and numerical methods are needed for its solution.&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Bayes&#039;s rule equation, there is a significant difference between classical and Bayesian statistics. First, the idea of prior information does not exist in classical statistics; all inferences there are based on the sample data alone. In the Bayesian framework, on the other hand, prior information constitutes the basis of the theory. Another difference is in the overall approach to making inferences and their interpretation. For example, in Bayesian analysis the parameters of the distribution to be fitted are themselves random variables; strictly speaking, no single distribution is fitted to the data, but rather a distribution is obtained for the parameters.&lt;br /&gt;
&lt;br /&gt;
For instance, consider the case where data is obtained from a reliability test. Based on prior experience with a similar product, the analyst believes that the shape parameter of the Weibull distribution has a value between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\beta }_{2}}\,\!&amp;lt;/math&amp;gt; and wants to utilize this information. This can be achieved by applying Bayes&#039;s rule. At this point, the analyst is automatically imposing the Weibull distribution as a model for the data, with a shape parameter between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{\beta _2}\,\!&amp;lt;/math&amp;gt;. In this example, the range of values for the shape parameter defines the prior distribution, which in this case is uniform. By applying Bayes&#039;s rule, the posterior distribution of the shape parameter is obtained. Thus, we end up with a distribution for the parameter rather than a point estimate, as in classical statistics.&lt;br /&gt;
&lt;br /&gt;
To better illustrate the example, assume that a set of failure data was provided along with a distribution for the shape parameter (i.e., uniform prior) of the Weibull (automatically assuming that the data are Weibull distributed). Based on that, a new distribution (the posterior) for that parameter is then obtained using Bayes&#039;s rule. This posterior distribution of the parameter may or may not resemble in form the assumed prior distribution. In other words, in this example the prior distribution of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; was assumed to be uniform but the posterior is most likely not a uniform distribution.&lt;br /&gt;
&lt;br /&gt;
The question now becomes: what is the value of the shape parameter? What about the reliability and other results of interest? In order to answer these questions, we have to remember that in the Bayesian framework all of these metrics are random variables. Therefore, in order to obtain an estimate, a probability needs to be specified or we can use the expected value of the posterior distribution.&lt;br /&gt;
&lt;br /&gt;
In order to demonstrate the procedure of obtaining results from the posterior distribution, we will rewrite the Bayes&#039;s rule equation for a single parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta_1 |Data) = \frac{L(Data|\theta_1 )\varphi (\theta_1 )}{\int_{\zeta}^{} L(Data|\theta_1 )\varphi(\theta_1 )d\theta_1}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The expected value (or mean value) of the parameter &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained using the equation for the mean and the Bayes&#039;s rule equation for single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E({\theta _1}) = {m_{{\theta _1}}} = \int_{\zeta}^{}{\theta _1} \cdot f({\theta _1}|Data)d{\theta _1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative result for &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; would be the median value. Using the equation for the median and the Bayes&#039;s rule equation for a single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{\theta }_{0.5}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation for the median is solved for &amp;lt;math&amp;gt;{\theta _{0.5}}\,\!&amp;lt;/math&amp;gt;, the median value of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Similarly, any other percentile of the posterior &#039;&#039;pdf&#039;&#039; can be calculated and reported. For example, one could calculate the 90th percentile of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;’s posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{{\theta }_{0.9}}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.9\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This calculation will be used in [[Confidence Bounds]] and [[The Weibull Distribution]] for obtaining confidence bounds on the parameter(s).&lt;br /&gt;
&lt;br /&gt;
The next step will be to make inferences on the reliability. Since the parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; is a random variable described by the posterior &#039;&#039;pdf,&#039;&#039; all subsequent functions of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; are distributed random variables as well and are entirely based on the posterior &#039;&#039;pdf&#039;&#039; of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt;. Therefore, the expected value, median or other percentile values of these functions will also need to be calculated. For example, the expected reliability at time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E[R(T|Data)] = \int_{\zeta}^{} R(T)f(\theta |Data)d\theta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, at a given time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, there is a distribution that governs the reliability value at that time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, and by using Bayes&#039;s rule, the expected (or mean) value of the reliability is obtained. Other percentiles of this distribution can also be obtained.&lt;br /&gt;
A similar procedure is followed for other functions of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;, such as failure rate, reliable life, etc.&lt;br /&gt;
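The whole procedure can be sketched with a simple grid-based computation. This is an illustrative example only (Python with NumPy/SciPy, which the original text does not use): the failure times are made up, the Weibull scale parameter is treated as known purely to keep the posterior one-dimensional, and a uniform prior is placed on the shape parameter.

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical complete failure times; eta treated as known for simplicity.
failures = np.array([30.0, 55.0, 80.0, 110.0])
eta = 70.0
betas = np.linspace(0.5, 5.0, 2001)   # uniform prior over [0.5, 5]
db = betas[1] - betas[0]

# Likelihood L(Data | beta) evaluated on the grid.
logL = np.array([weibull_min(c=b, scale=eta).logpdf(failures).sum()
                 for b in betas])
post = np.exp(logL - logL.max())      # flat prior, so posterior ∝ likelihood
post /= post.sum() * db               # normalize: posterior pdf of beta

mean_beta = (betas * post).sum() * db            # E(beta | Data)
cdf = np.cumsum(post) * db
median_beta = betas[np.searchsorted(cdf, 0.5)]   # 50th percentile
p90_beta = betas[np.searchsorted(cdf, 0.9)]      # 90th percentile

# Expected reliability at T: E[R(T)] = ∫ R(T|beta) f(beta|Data) d beta
T = 100.0
exp_R = (np.exp(-(T / eta) ** betas) * post).sum() * db
```

The same grid (or a numerical integrator) yields any other percentile of the posterior, and the last step generalizes to other functions of the parameter such as failure rate or reliable life.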
&lt;br /&gt;
===Prior Distributions===&lt;br /&gt;
Prior distributions play a very important role in Bayesian statistics; they are essentially the basis of Bayesian analysis. Different types of prior distributions exist, namely &#039;&#039;informative&#039;&#039; and &#039;&#039;non-informative&#039;&#039;. Non-informative prior distributions (a.k.a. &#039;&#039;vague&#039;&#039;, &#039;&#039;flat&#039;&#039; and &#039;&#039;diffuse&#039;&#039;) are distributions that have no population basis and play a minimal role in the posterior distribution. The idea behind the use of non-informative prior distributions is to make inferences that are not greatly affected by external information, or to proceed when external information is not available. The uniform distribution is frequently used as a non-informative prior.&lt;br /&gt;
&lt;br /&gt;
On the other hand, informative priors have a stronger influence on the posterior distribution. The influence of the prior distribution on the posterior is related to the sample size of the data and the form of the prior. Generally speaking, large sample sizes are required to modify strong priors, whereas weak priors are overwhelmed by even relatively small sample sizes. Informative priors are typically obtained from past data.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56800</id>
		<title>Parameter Estimation</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56800"/>
		<updated>2014-12-03T21:54:11Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Rank Adjustment Method for Right Censored Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|4|Parameter Estimation}}&lt;br /&gt;
The term &#039;&#039;parameter estimation&#039;&#039; refers to the process of using sample data (in reliability engineering, usually times-to-failure or success data) to estimate the parameters of the selected distribution. Several parameter estimation methods are available. This section presents an overview of the available methods used in life data analysis. More specifically, we start with the relatively simple method of Probability Plotting and continue with the more sophisticated methods of Rank Regression (or Least Squares), Maximum Likelihood Estimation and Bayesian Estimation Methods.&lt;br /&gt;
&lt;br /&gt;
=Probability Plotting=&lt;br /&gt;
The least mathematically intensive method for parameter estimation is the method of probability plotting. As the term implies, probability plotting involves a physical plot of the data on specially constructed &#039;&#039;probability plotting paper&#039;&#039;. This method is easily implemented by hand, given that one can obtain the appropriate probability plotting paper.&lt;br /&gt;
&lt;br /&gt;
The method of probability plotting takes the &#039;&#039;cdf&#039;&#039; of the distribution and attempts to linearize it by employing a specially constructed paper. The following sections illustrate the steps in this method using the 2-parameter Weibull distribution as an example. This includes:&lt;br /&gt;
&lt;br /&gt;
*Linearize the unreliability function&lt;br /&gt;
*Construct the probability plotting paper&lt;br /&gt;
*Determine the X and Y positions of the plot points&lt;br /&gt;
&lt;br /&gt;
And then using the plot to read any particular time or reliability/unreliability value of interest.&lt;br /&gt;
&lt;br /&gt;
==Linearizing the Unreliability Function==&lt;br /&gt;
&lt;br /&gt;
In the case of the 2-parameter Weibull, the &#039;&#039;cdf&#039;&#039; (also the unreliability &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt;) is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(t)=Q(t)=1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function can then be linearized (i.e., put in the common form of &amp;lt;math&amp;gt;y = mx + b\,\!&amp;lt;/math&amp;gt;) as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 Q(t)= &amp;amp;  1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}  \\&lt;br /&gt;
  \ln (1-Q(t))= &amp;amp; \ln \left[ {e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}} \right]  \\&lt;br /&gt;
  \ln (1-Q(t))= &amp;amp; -\left(\tfrac{t}{\eta}\right)^{\beta}  \\&lt;br /&gt;
  \ln ( -\ln (1-Q(t)))= &amp;amp; \beta \ln \left( \frac{t}{\eta }\right) \\&lt;br /&gt;
  \ln \left( \ln \left( \frac{1}{1-Q(t)}\right) \right) = &amp;amp; \beta \ln t -\beta \ln (\eta )  \\&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then by setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x=\ln \left( t \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equation can then be rewritten as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\beta x-\beta \ln \left( \eta  \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is now a linear equation with a slope of: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
m = \beta&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and an intercept of:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=-\beta \cdot \ln(\eta)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
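The same linearization can be carried out numerically instead of on paper: fit a straight line to the transformed points and recover the parameters from the slope and intercept. The sketch below (Python with NumPy, not part of the original text) uses made-up failure times and the (j - 0.3)/(N + 0.4) median-rank approximation discussed later in this chapter for the plotting positions.

```python
import numpy as np

# Hypothetical complete failure times for four units.
times = np.array([10.0, 20.0, 30.0, 40.0])
N = len(times)
j = np.arange(1, N + 1)
MR = (j - 0.3) / (N + 0.4)                 # approximate median ranks

x = np.log(times)                          # x = ln(t)
y = np.log(np.log(1.0 / (1.0 - MR)))       # y = ln(ln(1/(1 - Q)))

# Least-squares line: slope = beta, intercept = -beta * ln(eta).
beta_hat, intercept = np.polyfit(x, y, 1)
eta_hat = np.exp(-intercept / beta_hat)
```

This is exactly the rank regression (least squares) idea mentioned in the introduction, done in the probability-plot coordinates.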
&lt;br /&gt;
==Constructing the Paper==&lt;br /&gt;
The next task is to construct the Weibull probability plotting paper with the appropriate y and x axes. The x-axis transformation is simply logarithmic. The y-axis is a bit more complex, requiring a double log reciprocal transformation, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt; is the unreliability. &lt;br /&gt;
&lt;br /&gt;
Such papers have been created by different vendors and are called &#039;&#039;probability plotting papers&#039;&#039;. ReliaSoft&#039;s reliability engineering resource website at www.weibull.com has different plotting papers available for [http://www.weibull.com/GPaper/index.htm download]. &lt;br /&gt;
&lt;br /&gt;
[[Image:WeibullPaper2C.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
To illustrate, consider the following probability plot on a slightly different type of Weibull probability paper. &lt;br /&gt;
&lt;br /&gt;
[[Image:different_weibull_paper.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
This paper is constructed based on the mentioned y and x transformations, where the y-axis represents unreliability and the x-axis represents time. Both of these values must be known for each time-to-failure point we want to plot. &lt;br /&gt;
&lt;br /&gt;
Then, given the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; value for each point, the points can easily be put on the plot. Once the points have been placed on the plot, the best possible straight line is drawn through them. The slope of this line can then be obtained (some probability papers include a slope indicator to simplify this calculation). This slope is the parameter &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;. To determine the scale parameter, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; (also called the &#039;&#039;characteristic life&#039;&#039;), one reads the time from the x-axis corresponding to &amp;lt;math&amp;gt;Q(t)=63.2\%\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Note that at:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Q(t=\eta)= &amp;amp; 1-{{e}^{-{{\left( \tfrac{\eta }{\eta } \right)}^{\beta }}}} \\ &lt;br /&gt;
  = &amp;amp; 1-{{e}^{-1}} \\ &lt;br /&gt;
  = &amp;amp; 0.632 \\ &lt;br /&gt;
  = &amp;amp; 63.2\%  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, if we enter the &#039;&#039;y&#039;&#039; axis at &amp;lt;math&amp;gt;Q(t)=63.2\%\,\!&amp;lt;/math&amp;gt;, the corresponding value of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; will be equal to &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;. Using this simple methodology, the parameters of the Weibull distribution can be estimated.&lt;br /&gt;
&lt;br /&gt;
==Determining the X and Y Position of the Plot Points==&lt;br /&gt;
The points on the plot represent our data or, more specifically, our times-to-failure data. If, for example, we tested four units that failed at 10, 20, 30 and 40 hours, then we would use these times as our &#039;&#039;x&#039;&#039; values or time values. &lt;br /&gt;
&lt;br /&gt;
Determining the appropriate &#039;&#039;y&#039;&#039; plotting positions, or the unreliability values, is a little more complex. To determine the &#039;&#039;y&#039;&#039; plotting positions, we must first determine a value indicating the corresponding unreliability for that failure. In other words, we need to obtain the cumulative percent failed for each time-to-failure. For example, the cumulative percent failed by 10 hours may be 25%, by 20 hours 50%, and so forth. This is a simple method illustrating the idea. The problem with this simple method is the fact that the 100% point is not defined on most probability plots; thus, an alternative and more robust approach must be used. The most widely used method of determining this value is the method of obtaining the &#039;&#039;median rank&#039;&#039; for each failure, as discussed next.&lt;br /&gt;
&lt;br /&gt;
===Median Ranks ===&lt;br /&gt;
The Median Ranks method is used to obtain an estimate of the unreliability for each failure. The median rank is the value that the true probability of failure, &amp;lt;math&amp;gt;Q({{T}_{j}})\,\!&amp;lt;/math&amp;gt;, should have at the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure out of a sample of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units at the 50% confidence level. &lt;br /&gt;
&lt;br /&gt;
The rank can be found for any percentage point, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, greater than zero and less than one, by solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;. This represents the rank, or unreliability estimate, for the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure in the following equation for the cumulative binomial: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the sample size and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; the order number. &lt;br /&gt;
&lt;br /&gt;
The median rank is obtained by solving this equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  at &amp;lt;math&amp;gt;P = 0.50\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;0.50=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; and we have four failures, we would solve the median rank equation for the value of &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; four times, once for each failure with &amp;lt;math&amp;gt;j= 1, 2, 3 \text{ and }4\,\!&amp;lt;/math&amp;gt;. This result can then be used as the unreliability estimate for each failure, or the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; plotting position. (See [[The Weibull Distribution|The Weibull Distribution]] for a step-by-step example of this method.) The solution of the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; requires the use of numerical methods.&lt;br /&gt;
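One such numerical method is a simple root finder applied directly to the cumulative binomial equation. A sketch in Python with SciPy (neither of which is part of the original text):

```python
from math import comb
from scipy.optimize import brentq

def median_rank(j, N, P=0.5):
    """Solve the cumulative binomial equation for Z at probability P."""
    def g(Z):
        # sum_{k=j}^{N} C(N,k) Z^k (1-Z)^(N-k) - P
        return sum(comb(N, k) * Z**k * (1.0 - Z)**(N - k)
                   for k in range(j, N + 1)) - P
    return brentq(g, 1e-12, 1.0 - 1e-12)   # root in (0, 1)

# Median ranks for the N = 4, four-failure example.
ranks = [median_rank(j, 4) for j in range(1, 5)]
print([round(r, 4) for r in ranks])
```

For j = 1 and j = N the equation has closed-form solutions (1 - 0.5^(1/N) and 0.5^(1/N), respectively), which can be used to check the solver.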
&lt;br /&gt;
===Beta and F Distributions Approach===&lt;br /&gt;
A more straightforward and easier method of estimating median ranks is by applying two transformations to the cumulative binomial equation, first to the beta distribution and then to the F distribution, resulting in [[Appendix:_Life_Data_Analysis_References|[12, 13]]]: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   MR &amp;amp; = &amp;amp; \tfrac{1}{1+\tfrac{N-j+1}{j}{{F}_{0.50;m;n}}}  \\&lt;br /&gt;
   m &amp;amp; = &amp;amp; 2(N-j+1)  \\&lt;br /&gt;
   n &amp;amp; = &amp;amp; 2j  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{F}_{0.50;m;n}}\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution at the 0.50 point, with &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; degrees of freedom, for failure &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; out of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units.&lt;br /&gt;
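The F-distribution formula above is straightforward to evaluate with any statistics library. A sketch in Python with SciPy (not part of the original text):

```python
from scipy.stats import f

def median_rank_F(j, N, P=0.5):
    """Median rank for failure j of N via the F-distribution transformation."""
    m = 2 * (N - j + 1)                 # numerator degrees of freedom
    n = 2 * j                           # denominator degrees of freedom
    F = f.ppf(P, m, n)                  # F distribution at the 0.50 point
    return 1.0 / (1.0 + (N - j + 1) / j * F)

print([round(median_rank_F(j, 4), 4) for j in range(1, 5)])
```

Because the beta/F transformation of the cumulative binomial is exact, this reproduces the values obtained by solving the binomial equation numerically.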
&lt;br /&gt;
=== Benard&#039;s Approximation for Median Ranks  ===&lt;br /&gt;
A quicker, though less accurate, approximation of the median ranks is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MR = \frac{{j - 0.3}}{{N + 0.4}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This approximation of the median ranks is also known as &#039;&#039;Benard&#039;s approximation&#039;&#039;.&lt;br /&gt;
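As a quick check of the approximation, here it is for the N = 4 example used above (plain Python; the example itself is not in the original text):

```python
def benard(j, N):
    """Benard's approximation to the median rank of the j-th of N failures."""
    return (j - 0.3) / (N + 0.4)

approx = [round(benard(j, 4), 4) for j in range(1, 5)]
print(approx)  # [0.1591, 0.3864, 0.6136, 0.8409]
```

These agree with the exact median ranks to about three decimal places for this sample size.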
&lt;br /&gt;
===Kaplan-Meier===&lt;br /&gt;
The Kaplan-Meier estimator (also known as the &#039;&#039;product limit estimator&#039;&#039;) is used as an alternative to the median ranks method for calculating the estimates of the unreliability for probability plotting purposes. The equation of the estimator is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{F}({{t}_{i}})=1-\underset{j=1}{\overset{i}{\mathop \prod }}\,\frac{{{n}_{j}}-{{r}_{j}}}{{{n}_{j}}},\text{ }i=1,...,m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  m =  &amp;amp; {\text{total number of data points}} \\ &lt;br /&gt;
  n =  &amp;amp; {\text{the total number of units}} \\ &lt;br /&gt;
  {n_i} =  &amp;amp; n - \sum_{j = 0}^{i - 1}{s_j} - \sum_{j = 0}^{i - 1}{r_j}, \text{i = 1,...,m }\\ &lt;br /&gt;
  {r_j} =  &amp;amp; {\text{ number of failures in the }}{j^{th}}{\text{ data group, and}} \\ &lt;br /&gt;
  {s_j} =  &amp;amp; {\text{number of surviving units in the }}{j^{th}}{\text{ data group}} \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
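The estimator can be computed directly from its definition. In the following sketch (a hypothetical helper with our own input convention), each time-ordered data group supplies its number of failures &amp;lt;math&amp;gt;{{r}_{j}}\,\!&amp;lt;/math&amp;gt; and suspensions &amp;lt;math&amp;gt;{{s}_{j}}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
def kaplan_meier(groups, n):
    """Kaplan-Meier unreliability estimates F_hat(t_i).
    groups: time-ordered list of (failures r_j, suspensions s_j);
    n: total number of units on test."""
    estimates = []
    survival = 1.0
    at_risk = n                  # n_i: units still at risk in group i
    for r, s in groups:
        survival *= (at_risk - r) / at_risk
        estimates.append(1.0 - survival)
        at_risk -= r + s         # remove this group's failures and suspensions
    return estimates

# 5 units: 1 failure; then 1 failure + 1 suspension; then 1 failure + 1 suspension
unreliability = kaplan_meier([(1, 0), (1, 1), (1, 1)], n=5)
```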
&lt;br /&gt;
== Probability Plotting Example  ==&lt;br /&gt;
This same methodology can be applied to other distributions with &#039;&#039;cdf&#039;&#039; equations that can be linearized. Different probability papers exist for each distribution, because different distributions have different &#039;&#039;cdf&#039;&#039; equations. ReliaSoft&#039;s software tools automatically create these plots for you. Special scales on these plots allow you to derive the parameter estimates directly from the plots, similar to the way &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; were obtained from the Weibull probability plot. The following example demonstrates the method again, this time using the 1-parameter exponential distribution.&lt;br /&gt;
&lt;br /&gt;
{{:Probability Plotting Example}}&lt;br /&gt;
&lt;br /&gt;
== Comments on the Probability Plotting Method ==&lt;br /&gt;
Besides the most obvious drawback of probability plotting, namely the amount of effort required, manual probability plotting does not always produce consistent results. Two people plotting a straight line through a set of points will not always draw this line the same way, and thus will come up with slightly different results. This method was used primarily before the widespread availability of computers that could easily perform the calculations for more complicated parameter estimation methods, such as the least squares and maximum likelihood methods.&lt;br /&gt;
&lt;br /&gt;
= Least Squares (Rank Regression)  =&lt;br /&gt;
Using the idea of probability plotting, regression analysis mathematically fits the best straight line to a set of points, in an attempt to estimate the parameters. Essentially, this is a mathematically based version of the probability plotting method discussed previously. &lt;br /&gt;
&lt;br /&gt;
The method of linear least squares is used for all regression analysis performed by Weibull++, except for the cases of the 3-parameter Weibull, mixed Weibull, gamma and generalized gamma distributions, where a non-linear regression technique is employed. The terms &#039;&#039;linear regression&#039;&#039; and &#039;&#039;least squares&#039;&#039; are used synonymously in this reference. In Weibull++, the term &#039;&#039;rank regression&#039;&#039; is used instead of least squares, or linear regression, because the regression is performed on the rank values, more specifically, the median rank values (represented on the y-axis). The method of least squares requires that a straight line be fitted to a set of data points, such that the sum of the squares of the distances of the points to the fitted line is minimized. This minimization can be performed in either the vertical or horizontal direction. If the regression is on &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the horizontal deviations from the points to the line are minimized. If the regression is on &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the vertical deviations from the points to the line are minimized. This is illustrated in the following figure. &lt;br /&gt;
&lt;br /&gt;
[[Image:minimizingdistance.png|center|500px]]&lt;br /&gt;
&lt;br /&gt;
=== Rank Regression on Y  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;-values are known exactly. Then, according to the &#039;&#039;least squares principle,&#039;&#039; which minimizes the vertical distance between the data points and the straight line fitted to the data, the best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;y=\hat{a}+\hat{b}x\,\!&amp;lt;/math&amp;gt; (where the recently introduced (&amp;lt;math&amp;gt;\hat{ }\,\!&amp;lt;/math&amp;gt;) symbol indicates that this value is an estimate) such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\sum\limits_{i=1}^{N}{{{\left( \hat{a}+\hat{b}{{x}_{i}}-{{y}_{i}} \right)}^{2}}=\min \sum\limits_{i=1}^{N}{{{\left( a+b{{x}_{i}}-{{y}_{i}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and where &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat b\,\!&amp;lt;/math&amp;gt; are the &#039;&#039;least squares estimates&#039;&#039; of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}=\bar{y}-\hat{b}\bar{x}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
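These closed-form expressions translate directly into code. The following sketch (the function name is ours) computes &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat{b}\,\!&amp;lt;/math&amp;gt; for rank regression on Y:&lt;br /&gt;

```python
def rank_regression_y(x, y):
    """Least squares estimates (a_hat, b_hat) of y = a + b*x,
    minimizing the vertical deviations from the points to the line."""
    N = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    b_hat = (sxy - sx * sy / N) / (sxx - sx ** 2 / N)
    a_hat = sy / N - b_hat * sx / N    # a_hat = y_bar - b_hat * x_bar
    return a_hat, b_hat

# Points lying exactly on y = 2 + 3x are recovered exactly
a_hat, b_hat = rank_regression_y([0.0, 1.0, 2.0], [2.0, 5.0, 8.0])
```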
&lt;br /&gt;
=== Rank Regression on X  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;-values are known exactly. The same least squares principle is applied, but this time, minimizing the horizontal distance between the data points and the straight line fitted to the data. The best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;x=\widehat{a}+\widehat{b}y\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(\widehat{a}+\widehat{b}{{y}_{i}}-{{x}_{i}})}^{2}}=\min \underset{i=1}{\overset{N}{\mathop \sum }}\,{{(a+b{{y}_{i}}-{{x}_{i}})}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat b\,\!&amp;lt;/math&amp;gt; are the least squares estimates of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}=\bar{x}-\hat{b}\bar{y}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
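The same sketch with the roles of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; exchanged gives the rank regression on X estimates (again, the function name is ours):&lt;br /&gt;

```python
def rank_regression_x(x, y):
    """Least squares estimates (a_hat, b_hat) of x = a + b*y,
    minimizing the horizontal deviations from the points to the line."""
    N = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    syy = sum(yi * yi for yi in y)
    b_hat = (sxy - sx * sy / N) / (syy - sy ** 2 / N)
    a_hat = sx / N - b_hat * sy / N    # a_hat = x_bar - b_hat * y_bar
    return a_hat, b_hat

# Points lying exactly on x = 2 + 3y are recovered exactly
a_hat, b_hat = rank_regression_x([2.0, 5.0, 8.0], [0.0, 1.0, 2.0])
```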
&lt;br /&gt;
The corresponding relations for determining the parameters of specific distributions (e.g., Weibull, exponential, etc.) are presented in the chapters covering those distributions.&lt;br /&gt;
&lt;br /&gt;
=== Correlation Coefficient  ===&lt;br /&gt;
The correlation coefficient is a measure of how well the linear regression model fits the data and is usually denoted by &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt;. In the case of life data analysis, it is a measure for the strength of the linear relation (correlation) between the median ranks and the data. The population correlation coefficient is defined as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\rho =\frac{{{\sigma }_{xy}}}{{{\sigma }_{x}}{{\sigma }_{y}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\sigma}_{xy}} = \,\!&amp;lt;/math&amp;gt; covariance of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\sigma}_{x}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{\sigma}_{y}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The estimator of &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt; is the sample correlation coefficient, &amp;lt;math&amp;gt;\hat{\rho }\,\!&amp;lt;/math&amp;gt;, given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\sqrt{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N} \right)\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The range of &amp;lt;math&amp;gt;\hat \rho \,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;-1\le \hat{\rho }\le 1\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
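A sketch of the sample correlation coefficient computed from these sums (the function name is ours):&lt;br /&gt;

```python
import math

def sample_correlation(x, y):
    """Sample correlation coefficient rho_hat between paired data."""
    N = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    syy = sum(b * b for b in y)
    numerator = sxy - sx * sy / N
    denominator = math.sqrt((sxx - sx ** 2 / N) * (syy - sy ** 2 / N))
    return numerator / denominator

# A perfect positive linear relation gives rho_hat = +1
rho = sample_correlation([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```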
&lt;br /&gt;
[[Image:correlationcoeffficient.png|center|500px]] &lt;br /&gt;
&lt;br /&gt;
The closer the value is to &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt;, the better the linear fit. Note that +1 indicates a perfect fit (the paired values (&amp;lt;math&amp;gt;{{x}_{i}},{{y}_{i}}\,\!&amp;lt;/math&amp;gt;) lie on a straight line) with a positive slope, while -1 indicates a perfect fit with a negative slope. A correlation coefficient value of zero would indicate that the data are randomly scattered and have no pattern or correlation in relation to the regression line model.&lt;br /&gt;
&lt;br /&gt;
===Comments on the Least Squares Method===&lt;br /&gt;
The least squares estimation method is quite good for functions that can be linearized. For these distributions, the calculations are relatively easy and straightforward, having closed-form solutions that can readily yield an answer without having to resort to numerical techniques or tables. Furthermore, this technique provides a good measure of the goodness-of-fit of the chosen distribution in the correlation coefficient. Least squares is generally best used with data sets containing complete data, that is, data consisting only of single times-to-failure with no censored or interval data. (See [[Life Data Classification]] for information about the different data types, including complete, left censored, right censored (or suspended) and interval data.) &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Least Squares/Rank Regression Equations]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Rank Methods for Censored Data=&lt;br /&gt;
All available data should be considered in the analysis of times-to-failure data. This includes the case when a particular unit in a sample has been removed from the test prior to failure. An item, or unit, which is removed from a reliability test prior to failure, or a unit which is in the field and is still operating at the time the reliability of these units is to be determined, is called a &#039;&#039;suspended item&#039;&#039;, &#039;&#039;right censored observation&#039;&#039; or &#039;&#039;right censored data point&#039;&#039;. Suspended items analysis would also be considered when: &lt;br /&gt;
&lt;br /&gt;
#We need to make an analysis of the available results before test completion. &lt;br /&gt;
#The failure modes which are occurring are different than those anticipated and such units are withdrawn from the test. &lt;br /&gt;
#We need to analyze a single mode and the actual data set comprises multiple modes. &lt;br /&gt;
#A &#039;&#039;warranty analysis&#039;&#039; is to be made of all units in the field (non-failed and failed units). The non-failed units are considered to be suspended items (or right censored).&lt;br /&gt;
&lt;br /&gt;
This section describes the rank methods that are used in both probability plotting and least squares (rank regression) to handle censored data. This includes:&lt;br /&gt;
&lt;br /&gt;
*The rank adjustment method for right censored (suspension) data.&lt;br /&gt;
*ReliaSoft&#039;s alternative ranking method for interval censored data.&lt;br /&gt;
=== Rank Adjustment Method for Right Censored Data ===&lt;br /&gt;
When using the probability plotting or least squares (rank regression) method for data sets where some of the units did not fail, or were suspended, we need to adjust their probability of failure, or unreliability. As discussed before, estimates of the unreliability for complete data are obtained using the median ranks approach. The following methodology illustrates how adjusted median ranks are computed to account for right censored data. To better illustrate the methodology, consider the following example in Kececioglu [[Appendix:_Life_Data_Analysis_References|&amp;amp;nbsp;[20]]] where five items are tested resulting in three failures and two suspensions. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Item Number &amp;lt;br&amp;gt;(Position) &lt;br /&gt;
! Failure (F) &amp;lt;br&amp;gt;or Suspension (S) &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 5,100&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,500&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 15,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 22,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 40,000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The methodology for plotting suspended items involves adjusting the rank positions and plotting the data based on new positions, determined by the location of the suspensions. If we consider these five units, the following methodology would be used: The first item must be the first failure; hence, it is assigned failure order number &amp;lt;math&amp;gt;j = 1\,\!&amp;lt;/math&amp;gt;. The actual failure order number (or position) of the second failure, &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; is in doubt. It could either be in position 2 or in position 3. Had &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; not been withdrawn from the test at 9,500 hours, it could have operated successfully past 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 2. Alternatively, &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; could also have failed before 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 3. In this case, the failure order number for &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; will be some number between 2 and 3. To determine this number, consider the following: &lt;br /&gt;
&lt;br /&gt;
We can find the number of ways the second failure can occur in either order number 2 (position 2) or order number 3 (position 3). The possible ways are listed next. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;6&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 2 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 3&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 4 &lt;br /&gt;
| 5 &lt;br /&gt;
| 6 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be seen that &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; can occur in the second position six ways and in the third position two ways. The most probable position is the weighted average of these possible positions, or the &#039;&#039;mean order number&#039;&#039; (MON), given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}=MO{{N}_{2}}=\frac{(6\times 2)+(2\times 3)}{6+2}=2.25\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Using the same logic on the third failure, it can be located in position numbers 3, 4 and 5 in the possible ways listed next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 3 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 4&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 5&lt;br /&gt;
|-&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Then, the mean order number for the third failure, (item 5) is: &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=\frac{(2\times 3)+(3\times 4)+(3\times 5)}{2+3+3}=4.125\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Once the mean order number for each failure has been established, we obtain the median rank positions for these failures at their mean order number. Specifically, we obtain the median rank of the order numbers 1, 2.25 and 4.125 out of a sample size of 5, as given next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Plotting Positions for the Failures (Sample Size=5)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Failure Number &lt;br /&gt;
! MON &lt;br /&gt;
! Median Rank Position(%)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1:&amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1 &lt;br /&gt;
| 13%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2:&amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 2.25 &lt;br /&gt;
| 36%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3:&amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 4.125 &lt;br /&gt;
| 71%&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the median rank values have been obtained, the probability plotting analysis is identical to that presented before. As you might have noticed, this methodology is rather laborious. Other techniques and shortcuts have been developed over the years to streamline this procedure. For more details on this method, see Kececioglu [[Appendix:_Life_Data_Analysis_References|[20]]]. Here, we will introduce one of these methods. This method calculates MON using an increment, &#039;&#039;I&#039;&#039;, which is defined by:&lt;br /&gt;
&lt;br /&gt;
:: &amp;lt;math&amp;gt;{{I}_{i}}=\frac{N+1-PMON}{1+NIBPSS}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
* &amp;quot;N&amp;quot; = the sample size, or total number of items in the test&lt;br /&gt;
* &amp;quot;PMON&amp;quot; = the previous mean order number&lt;br /&gt;
* &amp;quot;NIBPSS&amp;quot; = the number of items beyond the present suspended set&lt;br /&gt;
* &amp;quot;i&amp;quot; = the &#039;&#039;i&#039;&#039;th failure item&lt;br /&gt;
&lt;br /&gt;
MON is given as:&lt;br /&gt;
 &lt;br /&gt;
:: &amp;lt;math&amp;gt;MO{{N}_{i}}=MO{{N}_{i-1}}+{{I}_{i}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s recalculate the previous example using this method.&lt;br /&gt;
&lt;br /&gt;
For F1:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{1}}=MO{{N}_{0}}+{{I}_{1}}=\frac{5+1-0}{1+5}=1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For F2:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{2}}=MO{{N}_{1}}+{{I}_{2}}=1+\frac{5+1-1}{1+3}=2.25&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For F3:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=MO{{N}_{2}}+{{I}_{3}}=2.25+\frac{5+1-2.25}{1+1}=4.125&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The MON obtained for each failure item via this method is the same as that obtained from the first method, so the median rank values will also be the same.&lt;br /&gt;
&lt;br /&gt;
For grouped data, the increment &amp;lt;math&amp;gt;{{I}_{i}}&amp;lt;/math&amp;gt; at each failure group is multiplied by the number of failures in that group. &lt;br /&gt;
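The increment method above can be sketched as follows. The counting convention for NIBPSS (here taken as the present failure plus every item after it, which reproduces the worked example) and the helper name are our own assumptions:&lt;br /&gt;

```python
def mean_order_numbers(states):
    """Mean order numbers (MON) via the increment method.
    states: time-ordered list of 'F' (failure) / 'S' (suspension)."""
    N = len(states)
    mons = []
    prev = 0.0                   # previous mean order number (PMON)
    for idx, state in enumerate(states):
        if state == 'F':
            nibpss = N - idx     # assumed: this item plus all items beyond it
            prev += (N + 1 - prev) / (1 + nibpss)
            mons.append(prev)
    return mons

# Five-item example: F1, S1, F2, S2, F3
mons = mean_order_numbers(['F', 'S', 'F', 'S', 'F'])
```

Applied to the five-item example, this reproduces the MON values 1, 2.25 and 4.125 obtained above.&lt;br /&gt;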
 &lt;br /&gt;
==== Shortfalls of the Rank Adjustment Method  ====&lt;br /&gt;
Even though the rank adjustment method is the most widely used method for the analysis of suspended items, we would like to point out the following shortcoming. As you may have noticed, only the positions of the suspensions relative to the failures are taken into account, and not the exact times-to-suspension. For example, this methodology would yield the exact same results for the next two cases. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 1 &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Item Number &lt;br /&gt;
! State*&amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of an item, hr &lt;br /&gt;
! Item number &lt;br /&gt;
! State*,&amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000 &lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,100 &lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,700&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,200 &lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,800&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,300 &lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,900&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000 &lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This shortfall is significant when the number of failures is small and the number of suspensions is large and not spread uniformly between failures, as with these data. In cases like this, it is highly recommended to use maximum likelihood estimation (MLE) to estimate the parameters instead of using least squares, because MLE does not look at ranks or plotting positions, but rather considers each unique time-to-failure or suspension. For the data given above, the results are as follows. The estimated parameters using the method just described are the same for both cases (1 and 2): &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.81}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{11,417 hr}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, the MLE results for Case 1 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{1}\text{.33}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{6,900 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the MLE results for Case 2 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.9337}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{21,348 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we can see, there is a sizable difference in the results of the two sets calculated using MLE and the results using regression. The results for both cases are identical when using the regression estimation technique, as regression considers only the positions of the suspensions. The MLE results are quite different for the two cases, with the second case having a much larger value of &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt;, which is due to the higher values of the suspension times in Case 2. This is because the maximum likelihood technique, unlike rank regression, considers the values of the suspensions when estimating the parameters. This is illustrated in the [[Parameter_Estimation#Maximum_Likelihood_Estimation_.28MLE.29|discussion of MLE]] given below.&lt;br /&gt;
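The contrast can be reproduced numerically. The sketch below is an illustrative Python implementation (using NumPy and SciPy; it is not Weibull++ code, and the function names are my own) that maximizes the right-censored Weibull log-likelihood, formulated later in this chapter, for the two data sets in the table above.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(log_params, failures, suspensions):
    """Negative Weibull log-likelihood with right-censored (suspended) items.

    Failures contribute ln f(t); suspensions contribute ln R(s) = -(s/eta)^beta.
    Parameters are optimized in log space so beta and eta stay positive.
    """
    beta, eta = np.exp(log_params)
    t = np.asarray(failures, dtype=float)
    s = np.asarray(suspensions, dtype=float)
    ll = np.sum(np.log(beta / eta) + (beta - 1.0) * np.log(t / eta) - (t / eta) ** beta)
    ll -= np.sum((s / eta) ** beta)
    return -ll

def fit_weibull_mle(failures, suspensions):
    res = minimize(neg_log_likelihood,
                   x0=np.log([1.0, float(np.mean(failures))]),
                   args=(failures, suspensions),
                   method="Nelder-Mead")
    return np.exp(res.x)  # (beta_hat, eta_hat)

# Case 1: suspensions occur shortly after the first failure
beta1, eta1 = fit_weibull_mle([1000, 10000], [1100, 1200, 1300])
# Case 2: same failures, suspensions occur just before the second failure
beta2, eta2 = fit_weibull_mle([1000, 10000], [9700, 9800, 9900])
```

With the early suspensions of Case 1 the fit lands near β ≈ 1.3 and η ≈ 6,900 hr, while the late suspensions of Case 2 pull η up toward the quoted 21,348 hr, even though rank regression returns identical results for both cases.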
&lt;br /&gt;
== ReliaSoft&#039;s Ranking Method (RRM) for Interval Censored Data==&lt;br /&gt;
When analyzing interval data, it is commonplace to assume that the actual failure time occurred at the midpoint of the interval. Alternatively, you can use the starting point of the interval to be more conservative, or the end point to be more optimistic. Weibull++ allows you to employ ReliaSoft&#039;s ranking method (RRM) when analyzing interval data. Using an iterative process, this ranking method is an improvement over the standard ranking method (SRM). For more details on this method, see [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|ReliaSoft&#039;s Ranking Method]].&lt;br /&gt;
&lt;br /&gt;
= Maximum Likelihood Estimation (MLE) = &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM OTHER WIKI PAGES. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
From a statistical point of view, the method of maximum likelihood estimation is, with some exceptions, considered to be the most robust of the parameter estimation techniques discussed here. The method presented in this section is for complete data (i.e., data consisting only of times-to-failure). The analyses for [[Parameter_Estimation#MLE_for_Right_Censored_Data|right censored (suspension) data]] and for [[Parameter_Estimation#MLE_for_Interval_and_Left_Censored_Data|interval or left censored data]] are then discussed in the following sections.&lt;br /&gt;
&lt;br /&gt;
The basic idea behind MLE is to obtain the most likely values of the parameters, for a given distribution, that will best describe the data. As an example, consider the following data (-3, 0, 4) and assume that you are trying to estimate the mean of the data. Now, if you have to choose the most likely value for the mean from -5, 1 and 10, which one would you choose? In this case, the most likely value is 1 (given your limit on choices). Similarly, under MLE, one determines the most likely values for the parameters of the assumed distribution. It is mathematically formulated as follows. &lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; unknown parameters that need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; independent observations, &amp;lt;math&amp;gt;{{x}_{1}},{{x}_{2}},\cdots ,{{x}_{R}}\,\!&amp;lt;/math&amp;gt;, which in the case of life data analysis correspond to failure times. The likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{R}})=L=\underset{i=1}{\overset{R}{\mathop \prod }}\,f({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;i = 1,2,...,R\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logarithmic likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Lambda  = \ln L =\sum_{i = 1}^R \ln f({x_i};{\theta _1},{\theta _2},...,{\theta _k})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood estimators (or parameter values) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are obtained by maximizing &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By maximizing &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;, which is much easier to work with than &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt;, the maximum likelihood estimators (MLE) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the simultaneous solutions of &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; equations such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{\partial{\Lambda}}{\partial{\theta_j}}=0, \text{ j=1,2,...,k}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even though it is common practice to plot the MLE solutions using median ranks (points are plotted according to median ranks and the line according to the MLE solutions), this is not completely representative. As can be seen from the equations above, the MLE method is independent of any kind of ranks. For this reason, the MLE solution often appears not to track the data on the probability plot. This is perfectly acceptable because the two methods are independent of each other, and in no way suggests that the solution is wrong.&lt;br /&gt;
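As a minimal illustration of the mechanics (my own example with hypothetical times, not from the reference; the exponential distribution is used because its single parameter &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt; admits a closed-form solution), setting the partial derivative of the log-likelihood to zero gives the estimator directly, and a numerical maximization of the log-likelihood reproduces it:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical complete data: R = 5 exact times-to-failure, in hours
times = np.array([120.0, 340.0, 560.0, 810.0, 1100.0])

# Log-likelihood for the exponential pdf f(t) = lambda * exp(-lambda * t):
#   Lambda(lam) = R * ln(lam) - lam * sum(t_i)
def log_likelihood(lam):
    return len(times) * np.log(lam) - lam * times.sum()

# Numerically maximize Lambda (i.e., minimize its negative) over lam > 0
res = minimize_scalar(lambda lam: -log_likelihood(lam),
                      bounds=(1e-8, 1.0), method="bounded")
lam_numeric = res.x

# Closed-form solution of d(Lambda)/d(lambda) = R/lambda - sum(t_i) = 0
lam_closed = len(times) / times.sum()
```

The two estimates agree to within the optimizer's tolerance, which is the point: for most distributions no such closed form exists and only the numerical route remains.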
&lt;br /&gt;
=== MLE for Right Censored Data  ===&lt;br /&gt;
When performing maximum likelihood analysis on data with suspended items, the likelihood function needs to be expanded to take into account the suspended items. The overall estimation technique does not change, but another term is added to the likelihood function to account for the suspended items. Beyond that, the method of solving for the parameter estimates remains the same. For example, consider a distribution where &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039; and &#039;&#039;cdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
    &amp;amp; F(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the unknown parameters which need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; observed failures at &amp;lt;math&amp;gt;{{T}_{1}},{{T}_{2}},...,{{T}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; observed suspensions at &amp;lt;math&amp;gt;{{S}_{1}},{{S}_{2}},...,{{S}_{M}}\,\!&amp;lt;/math&amp;gt;. The likelihood function is then formulated as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R,}}{{S}_{1}},...,{{S}_{M}})= &amp;amp; \underset{i=1}{\overset{R}{\mathop \prod }}\,f({{T}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \cdot \underset{j=1}{\overset{M}{\mathop \prod }}\,[1-F({{S}_{j}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter estimates are obtained by maximizing this function. In most cases, no closed-form solution exists for this maximum or for the parameters. Solutions specific to each distribution utilizing MLE are presented in [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
&lt;br /&gt;
=== MLE for Interval and Left Censored Data  ===&lt;br /&gt;
The inclusion of left and interval censored data in an MLE solution for parameter estimates involves adding a term to the likelihood equation to account for the data types in question. When using interval data, it is assumed that the failures occurred in an interval; i.e., in the interval from time &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or from time 0 to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; if left censored), where &amp;lt;math&amp;gt;A &amp;lt; B\,\!&amp;lt;/math&amp;gt;. In the case of interval data, and given &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; interval observations, the likelihood function is modified by multiplying the likelihood function with an additional term as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{P}})= &amp;amp; \underset{i=1}{\overset{P}{\mathop \prod }}\,\{F({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \ \ -F({{x}_{i-1}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that if only interval data are present, this term will represent the entire likelihood function for the MLE solution. The next section gives a formulation of the complete likelihood function for all possible censoring schemes.&lt;br /&gt;
&lt;br /&gt;
=== The Complete Likelihood Function  ===&lt;br /&gt;
We have now seen that obtaining MLE parameter estimates for different types of data involves incorporating different terms in the likelihood function to account for complete data, right censored data, and left and interval censored data. After including the terms for the different types of data, the likelihood function can now be expressed in its complete form: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
    L= &amp;amp; \underset{i=1}{\mathop{\overset{R}{\mathop{\prod }}\,}}\,f({{T}_{i}};{{\theta }_{1}},...,{{\theta }_{k}})\cdot \underset{j=1}{\mathop{\overset{M}{\mathop{\prod }}\,}}\,[1-F({{S}_{j}};{{\theta }_{1}},...,{{\theta }_{k}})]  \\&lt;br /&gt;
    &amp;amp; \cdot \underset{l=1}{\mathop{\overset{P}{\mathop{\prod }}\,}}\,\left\{ F({{I}_{{{l}_{U}}}};{{\theta }_{1}},...,{{\theta }_{k}})-F({{I}_{{{l}_{L}}}};{{\theta }_{1}},...,{{\theta }_{k}}) \right\}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; L\to L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R}},{{S}_{1}},...,{{S}_{M}},{{I}_{1}},...{{I}_{P}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
*&amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the number of units with exact failures &lt;br /&gt;
*&amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is the number of suspended units &lt;br /&gt;
*&amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is the number of units with left censored or interval times-to-failure &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the parameters of the distribution &lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure&lt;br /&gt;
*&amp;lt;math&amp;gt;{{S}_{j}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; time of suspension&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{U}}}}\,\!&amp;lt;/math&amp;gt; is the ending of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{L}}}}\,\!&amp;lt;/math&amp;gt; is the beginning of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;The total number of units is &amp;lt;math&amp;gt;N = R + M + P\,\!&amp;lt;/math&amp;gt;. It should be noted that in this formulation, if any of &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is zero, then the corresponding product term is taken to be one (an empty product), not zero.&lt;br /&gt;
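A sketch of how the complete likelihood is assembled in practice, specialized here to the 2-parameter Weibull (the function name and the data values are hypothetical illustrations, not from the reference). Each of the three groups contributes its own product term, and an empty group contributes a factor of one:

```python
import numpy as np

def weibull_cdf(t, beta, eta):
    # F(t) = 1 - exp(-(t/eta)^beta)
    return 1.0 - np.exp(-(np.asarray(t, dtype=float) / eta) ** beta)

def complete_log_likelihood(beta, eta, failures=(), suspensions=(), intervals=()):
    """Log of the complete likelihood L for a 2-parameter Weibull.

    failures    : exact times-to-failure T_i      -> ln f(T_i)
    suspensions : right-censoring times S_j       -> ln[1 - F(S_j)]
    intervals   : (I_lL, I_lU) pairs              -> ln[F(I_lU) - F(I_lL)]
    An empty group contributes 0 to the log-likelihood (its product term is 1).
    """
    ll = 0.0
    for t in failures:
        ll += np.log(beta / eta) + (beta - 1.0) * np.log(t / eta) - (t / eta) ** beta
    for s in suspensions:
        ll += -(s / eta) ** beta                       # ln R(S_j)
    for lo, hi in intervals:
        ll += np.log(weibull_cdf(hi, beta, eta) - weibull_cdf(lo, beta, eta))
    return ll

# N = R + M + P = 2 + 1 + 2 units; the (0, 250) interval is a left-censored point
ll = complete_log_likelihood(1.5, 2000.0,
                             failures=[800, 1500],
                             suspensions=[1700],
                             intervals=[(100, 400), (0, 250)])
```

Maximizing this function over (β, η) with any general-purpose optimizer then yields the MLE for a data set mixing all three censoring schemes.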
&lt;br /&gt;
== Comments on the MLE Method  ==&lt;br /&gt;
The MLE method has many large sample properties that make it attractive for use. It is asymptotically consistent, which means that as the sample size gets larger, the estimates converge to the right values. It is asymptotically efficient, which means that for large samples, it produces the most precise estimates. It is asymptotically unbiased, which means that for large samples, one expects to get the right value on average. The distribution of the estimates themselves is normal, if the sample is large enough, and this is the basis for the usual [[Confidence_Bounds#Fisher_Matrix_Confidence_Bounds|Fisher Matrix Confidence Bounds]] discussed later. These are all excellent large sample properties. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, the size of the sample necessary to achieve these properties can be quite large: thirty to fifty to more than a hundred exact failure times, depending on the application. With fewer points, the methods can be badly biased. It is known, for example, that MLE estimates of the shape parameter for the Weibull distribution are badly biased for small sample sizes, and the effect can be increased depending on the amount of censoring. This bias can cause major discrepancies in analysis. There are also pathological situations when the asymptotic properties of the MLE do not apply. One of these is estimating the location parameter for the three-parameter Weibull distribution when the shape parameter has a value close to 1. These problems, too, can cause major discrepancies. &lt;br /&gt;
&lt;br /&gt;
However, MLE can handle suspensions and interval data better than rank regression, particularly when dealing with a heavily censored data set with few exact failure times or when the censoring times are unevenly distributed. It can also provide estimates with one or no observed failures, which rank regression cannot do. As a rule of thumb, our recommendation is to use rank regression techniques when the sample sizes are small and without heavy censoring (censoring is discussed in [[Life Data Classification|Life Data Classifications]]). When heavy or uneven censoring is present, when a high proportion of interval data is present and/or when the sample size is sufficient, MLE should be preferred. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Appendix:_Maximum_Likelihood_Estimation_Example|Maximum Likelihood Parameter Estimation Example]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Bayesian Parameter Estimation Methods=&lt;br /&gt;
Up to this point, we have dealt exclusively with what is commonly referred to as classical statistics. In this section, another school of thought in statistical analysis will be introduced, namely Bayesian statistics. The premise of Bayesian statistics (within the context of life data analysis) is to incorporate prior knowledge, along with a given set of current observations, in order to make statistical inferences. The prior information could come from operational or observational data, from previous comparable experiments or from engineering knowledge.  This type of analysis can be particularly useful when there is limited test data for a given design or failure mode but there is a strong prior understanding of the failure rate behavior for that design or mode. By incorporating prior information about the parameter(s), a posterior distribution for the parameter(s) can be obtained and inferences on the model parameters and their functions can be made. This section is intended to give a quick and elementary overview of Bayesian methods, focused primarily on the material necessary for understanding the Bayesian analysis methods available in Weibull++. Extensive coverage of the subject can be found in numerous books dealing with Bayesian statistics.&lt;br /&gt;
&lt;br /&gt;
===Bayes’s Rule===&lt;br /&gt;
Bayes’s rule provides the framework for combining prior information with sample data. In this reference, we apply Bayes’s rule for combining prior information on the assumed distribution&#039;s parameter(s) with sample data in order to make inferences based on the model. The prior knowledge about the parameter(s) is expressed in terms of a distribution &amp;lt;math&amp;gt;\varphi (\theta ),\,\!&amp;lt;/math&amp;gt; called the &#039;&#039;prior distribution&#039;&#039;. The &#039;&#039;posterior&#039;&#039; distribution of &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; given the sample data, using Bayes&#039;s rule, provides the updated information about the parameters &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt;. This is expressed with the following posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta |Data) = \frac{L(Data|\theta )\varphi (\theta )}{\int_{\zeta}^{} L(Data|\theta )\varphi(\theta )d\theta }&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; is a vector of the parameters of the chosen distribution&lt;br /&gt;
*&amp;lt;math&amp;gt;\zeta\,\!&amp;lt;/math&amp;gt; is the range of &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt; L(Data|\theta)\,\!&amp;lt;/math&amp;gt; is the likelihood function based on the chosen distribution and data&lt;br /&gt;
*&amp;lt;math&amp;gt;\varphi(\theta )\,\!&amp;lt;/math&amp;gt; is the prior distribution for each of the parameters&lt;br /&gt;
&lt;br /&gt;
The integral in the Bayes&#039;s rule equation is often referred to as the marginal probability, which is a constant number that can be interpreted as the probability of obtaining the sample data given a prior distribution. Generally, the integral in the Bayes&#039;s rule equation does not have a closed form solution and numerical methods are needed for its solution.&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Bayes&#039;s rule equation, there is a significant difference between classical and Bayesian statistics. First, the idea of prior information does not exist in classical statistics. All inferences in classical statistics are based on the sample data. On the other hand, in the Bayesian framework, prior information constitutes the basis of the theory. Another difference is in the overall approach of making inferences and their interpretation. For example, in Bayesian analysis, the parameters of the distribution to be fitted are the random variables. In reality, there is no distribution fitted to the data in the Bayesian case.&lt;br /&gt;
&lt;br /&gt;
For instance, consider the case where data is obtained from a reliability test. Based on prior experience with a similar product, the analyst believes that the shape parameter of the Weibull distribution has a value between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\beta }_{2}}\,\!&amp;lt;/math&amp;gt; and wants to utilize this information. This can be achieved by using Bayes&#039;s rule. At this point, the analyst is automatically forcing the Weibull distribution as a model for the data, with a shape parameter between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{\beta _2}\,\!&amp;lt;/math&amp;gt;. In this example, the range of values for the shape parameter is the prior distribution, which in this case is uniform. By applying Bayes&#039;s rule, the posterior distribution of the shape parameter will be obtained. Thus, we end up with a distribution for the parameter rather than a point estimate of the parameter, as in classical statistics.&lt;br /&gt;
&lt;br /&gt;
To better illustrate the example, assume that a set of failure data was provided along with a distribution for the shape parameter (i.e., uniform prior) of the Weibull (automatically assuming that the data are Weibull distributed). Based on that, a new distribution (the posterior) for that parameter is then obtained using Bayes&#039;s rule. This posterior distribution of the parameter may or may not resemble in form the assumed prior distribution. In other words, in this example the prior distribution of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; was assumed to be uniform but the posterior is most likely not a uniform distribution.&lt;br /&gt;
&lt;br /&gt;
The question now becomes: what is the value of the shape parameter? What about the reliability and other results of interest? In order to answer these questions, we have to remember that in the Bayesian framework all of these metrics are random variables. Therefore, in order to obtain an estimate, a probability needs to be specified or we can use the expected value of the posterior distribution.&lt;br /&gt;
&lt;br /&gt;
In order to demonstrate the procedure of obtaining results from the posterior distribution, we will rewrite the Bayes&#039;s rule equation for a single parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta_1 |Data) = \frac{L(Data|\theta_1 )\varphi (\theta_1 )}{\int_{\zeta}^{} L(Data|\theta_1 )\varphi(\theta_1 )d\theta_1 }&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The expected value (or mean value) of the parameter &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained using the equation for the mean and the Bayes&#039;s rule equation for single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E({\theta _1}) = {m_{{\theta _1}}} = \int_{\zeta}^{}{\theta _1} \cdot f({\theta _1}|Data)d{\theta _1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative result for &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; would be the median value. Using the equation for the median and the Bayes&#039;s rule equation for a single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{\theta }_{0.5}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation for the median is solved for &amp;lt;math&amp;gt;{\theta _{0.5}}\,\!&amp;lt;/math&amp;gt;, the median value of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Similarly, any other percentile of the posterior &#039;&#039;pdf&#039;&#039; can be calculated and reported. For example, one could calculate the 90th percentile of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;’s posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{{\theta }_{0.9}}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.9\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This calculation will be used in [[Confidence Bounds]] and [[The Weibull Distribution]] for obtaining confidence bounds on the parameter(s).&lt;br /&gt;
&lt;br /&gt;
The next step will be to make inferences on the reliability. Since the parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; is a random variable described by the posterior &#039;&#039;pdf,&#039;&#039; all subsequent functions of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; are distributed random variables as well and are entirely based on the posterior &#039;&#039;pdf&#039;&#039; of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt;. Therefore, expected value, median or other percentile values will also need to be calculated. For example, the expected reliability at time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E[R(T|Data)] = \int_{\zeta}^{} R(T)f(\theta |Data)d{\theta}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, at a given time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, there is a distribution that governs the reliability value at that time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, and by using Bayes&#039;s rule, the expected (or mean) value of the reliability is obtained. Other percentiles of this distribution can also be obtained.&lt;br /&gt;
A similar procedure is followed for other functions of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;, such as failure rate, reliable life, etc.&lt;br /&gt;
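All of the posterior quantities above reduce to one-dimensional integrals when there is a single parameter, so they can be approximated on a grid. The sketch below is an illustrative construction (hypothetical failure times, an exponential model with failure rate λ, and an assumed uniform prior over a range ζ = (0, 0.02]; none of these choices come from the reference). It computes the marginal probability, the posterior mean, the median and 90th percentile, and the expected reliability at T = 500 hr:

```python
import numpy as np

# Hypothetical complete failure times, in hours
times = np.array([150.0, 320.0, 540.0, 900.0])

# Likelihood L(Data|lambda) for the exponential model: prod lam * exp(-lam*t_i)
def likelihood(lam):
    return lam ** len(times) * np.exp(-lam * times.sum())

lam = np.linspace(1e-5, 0.02, 20001)     # grid over the prior's range zeta
dx = lam[1] - lam[0]
prior = np.ones_like(lam)                # uniform (non-informative) prior

# Posterior pdf = L * prior / marginal probability (the normalizing integral)
post_unnorm = likelihood(lam) * prior
post = post_unnorm / (post_unnorm.sum() * dx)

# Expected value: E(lambda) = integral of lambda * f(lambda|Data)
lam_mean = (lam * post).sum() * dx

# Median and 90th percentile from the posterior cdf
cdf = np.cumsum(post) * dx
lam_median = lam[np.searchsorted(cdf, 0.5)]
lam_p90 = lam[np.searchsorted(cdf, 0.9)]

# Expected reliability at T: E[R(T)] = integral of exp(-lam*T) * f(lam|Data)
T = 500.0
exp_rel = (np.exp(-lam * T) * post).sum() * dx
```

With a uniform prior this posterior is (up to truncation of the grid) a gamma distribution, so the grid results can be checked against the closed-form answers; for multi-parameter models the same integrals are evaluated with multidimensional numerical methods.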
&lt;br /&gt;
===Prior Distributions===&lt;br /&gt;
Prior distributions play a very important role in Bayesian statistics. They are essentially the basis of Bayesian analysis. Different types of prior distributions exist, namely &#039;&#039;informative&#039;&#039; and &#039;&#039;non-informative&#039;&#039;. Non-informative prior distributions (a.k.a. &#039;&#039;vague&#039;&#039;, &#039;&#039;flat&#039;&#039; and &#039;&#039;diffuse&#039;&#039;) are distributions that have no population basis and play a minimal role in the posterior distribution. The idea behind the use of non-informative prior distributions is to make inferences that are not greatly affected by external information, or when external information is not available. The uniform distribution is frequently used as a non-informative prior.&lt;br /&gt;
&lt;br /&gt;
On the other hand, informative priors have a stronger influence on the posterior distribution. The influence of the prior distribution on the posterior is related to the sample size of the data and the form of the prior. Generally speaking, large sample sizes are required to modify strong priors, whereas weak priors are overwhelmed by even relatively small sample sizes. Informative priors are typically obtained from past data.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56799</id>
		<title>Parameter Estimation</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56799"/>
		<updated>2014-12-03T20:59:37Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Rank Adjustment Method for Right Censored Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|4|Parameter Estimation}}&lt;br /&gt;
The term &#039;&#039;parameter estimation&#039;&#039; refers to the process of using sample data (in reliability engineering, usually times-to-failure or success data) to estimate the parameters of the selected distribution. Several parameter estimation methods are available. This section presents an overview of the available methods used in life data analysis. More specifically, we start with the relatively simple method of Probability Plotting and continue with the more sophisticated methods of Rank Regression (or Least Squares), Maximum Likelihood Estimation and Bayesian Estimation Methods.&lt;br /&gt;
&lt;br /&gt;
=Probability Plotting=&lt;br /&gt;
The least mathematically intensive method for parameter estimation is the method of probability plotting. As the term implies, probability plotting involves a physical plot of the data on specially constructed &#039;&#039;probability plotting paper&#039;&#039;. This method is easily implemented by hand, given that one can obtain the appropriate probability plotting paper.&lt;br /&gt;
&lt;br /&gt;
The method of probability plotting takes the &#039;&#039;cdf&#039;&#039; of the distribution and attempts to linearize it by employing a specially constructed paper. The following sections illustrate the steps in this method using the 2-parameter Weibull distribution as an example. This includes:&lt;br /&gt;
&lt;br /&gt;
*Linearize the unreliability function&lt;br /&gt;
*Construct the probability plotting paper&lt;br /&gt;
*Determine the X and Y positions of the plot points&lt;br /&gt;
&lt;br /&gt;
And then using the plot to read any particular time or reliability/unreliability value of interest.&lt;br /&gt;
&lt;br /&gt;
==Linearizing the Unreliability Function==&lt;br /&gt;
&lt;br /&gt;
In the case of the 2-parameter Weibull, the &#039;&#039;cdf&#039;&#039; (also the unreliability &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt;) is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(t)=Q(t)=1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function can then be linearized (i.e., put in the common linear form &amp;lt;math&amp;gt;y = mx + b\,\!&amp;lt;/math&amp;gt;) as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 Q(t)= &amp;amp;  1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}  \\&lt;br /&gt;
  \ln (1-Q(t))= &amp;amp; \ln \left[ {e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}} \right]  \\&lt;br /&gt;
  \ln (1-Q(t))=&amp;amp; -\left(\tfrac{t}{\eta}\right)^{\beta}  \\&lt;br /&gt;
  \ln ( -\ln (1-Q(t)))= &amp;amp; \beta \left(\ln \left( \frac{t}{\eta }\right)\right) \\&lt;br /&gt;
  \ln \left( \ln \left( \frac{1}{1-Q(t)}\right) \right) = &amp;amp; \beta \ln t -\beta \ln (\eta )  \\&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then by setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x=\ln \left( t \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equation can then be rewritten as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\beta x-\beta \ln \left( \eta  \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is now a linear equation with a slope of: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
m = \beta&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and an intercept of:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=-\beta \cdot \ln(\eta)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
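As a quick numerical check of this linearization, the following sketch (the parameter values β = 1.5 and η = 100 are illustrative assumptions, not values from the text) shows that any transformed point (x, y) falls exactly on the line with slope β and intercept -β ln(η):

```python
from math import exp, log

beta, eta = 1.5, 100.0  # illustrative (assumed) Weibull parameters

def unreliability(t):
    """2-parameter Weibull cdf, Q(t) = 1 - exp(-(t/eta)^beta)."""
    return 1.0 - exp(-((t / eta) ** beta))

def transform(t):
    """Map a point (t, Q(t)) to the linearized coordinates (x, y)."""
    q = unreliability(t)
    return log(t), log(log(1.0 / (1.0 - q)))

x, y = transform(250.0)
# y should equal beta * x - beta * log(eta) for any t > 0
```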
==Constructing the Paper==&lt;br /&gt;
The next task is to construct the Weibull probability plotting paper with the appropriate y and x axes. The x-axis transformation is simply logarithmic. The y-axis is a bit more complex, requiring a double log reciprocal transformation, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left(\ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt; is the unreliability. &lt;br /&gt;
&lt;br /&gt;
Such papers have been created by different vendors and are called &#039;&#039;probability plotting papers&#039;&#039;. ReliaSoft&#039;s reliability engineering resource website at www.weibull.com has different plotting papers available for [http://www.weibull.com/GPaper/index.htm download]. &lt;br /&gt;
&lt;br /&gt;
[[Image:WeibullPaper2C.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
To illustrate, consider the following probability plot on a slightly different type of Weibull probability paper. &lt;br /&gt;
&lt;br /&gt;
[[Image:different_weibull_paper.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
This paper is constructed based on the mentioned y and x transformations, where the y-axis represents unreliability and the x-axis represents time. Both of these values must be known for each time-to-failure point we want to plot. &lt;br /&gt;
&lt;br /&gt;
Then, given the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; value for each point, the points can easily be placed on the plot. Once the points have been placed, the best possible straight line is drawn through them. The slope of this line is the parameter &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; (some probability papers include a slope indicator to simplify this calculation). To determine the scale parameter, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; (also called the &#039;&#039;characteristic life&#039;&#039;), one reads the time from the x-axis corresponding to &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Note that at:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Q(t=\eta)= &amp;amp; 1-{{e}^{-{{\left( \tfrac{\eta }{\eta } \right)}^{\beta }}}} \\ &lt;br /&gt;
  = &amp;amp; 1-{{e}^{-1}} \\ &lt;br /&gt;
  = &amp;amp; 0.632 \\ &lt;br /&gt;
  = &amp;amp; 63.2%  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, if we enter the &#039;&#039;y&#039;&#039; axis at &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;, the corresponding value of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; will be equal to &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;. Hence, using this simple methodology, the parameters of the Weibull distribution can be estimated.&lt;br /&gt;
&lt;br /&gt;
==Determining the X and Y Position of the Plot Points==&lt;br /&gt;
The points on the plot represent our data or, more specifically, our times-to-failure data. If, for example, we tested four units that failed at 10, 20, 30 and 40 hours, then we would use these times as our &#039;&#039;x&#039;&#039; values or time values. &lt;br /&gt;
&lt;br /&gt;
Determining the appropriate &#039;&#039;y&#039;&#039; plotting positions, or the unreliability values, is a little more complex. To determine the &#039;&#039;y&#039;&#039; plotting positions, we must first determine a value indicating the corresponding unreliability for that failure. In other words, we need to obtain the cumulative percent failed for each time-to-failure. For example, the cumulative percent failed by 10 hours may be 25%, by 20 hours 50%, and so forth. This is a simple method illustrating the idea. The problem with this simple method is the fact that the 100% point is not defined on most probability plots; thus, an alternative and more robust approach must be used. The most widely used method of determining this value is the method of obtaining the &#039;&#039;median rank&#039;&#039; for each failure, as discussed next.&lt;br /&gt;
&lt;br /&gt;
===Median Ranks ===&lt;br /&gt;
The Median Ranks method is used to obtain an estimate of the unreliability for each failure. The median rank is the value that the true probability of failure, &amp;lt;math&amp;gt;Q({{T}_{j}})\,\!&amp;lt;/math&amp;gt;, should have at the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure out of a sample of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units at the 50% confidence level. &lt;br /&gt;
&lt;br /&gt;
The rank can be found for any percentage point, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, greater than zero and less than one, by solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;. This represents the rank, or unreliability estimate, for the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure in the following equation for the cumulative binomial: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the sample size and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; the order number. &lt;br /&gt;
&lt;br /&gt;
The median rank is obtained by solving this equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  at &amp;lt;math&amp;gt;P = 0.50\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;0.50=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; and we have four failures, we would solve the median rank equation for the value of &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; four times; once for each failure with &amp;lt;math&amp;gt;j= 1, 2, 3 \text{ and }4\,\!&amp;lt;/math&amp;gt;. This result can then be used as the unreliability estimate for each failure or the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; plotting position. (See also [[The Weibull Distribution|The Weibull Distribution]]&amp;amp;nbsp;for a step-by-step example of this method.) Solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; requires the use of numerical methods.&lt;br /&gt;
&lt;br /&gt;
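As a sketch of such a numerical solution (a minimal illustration, not ReliaSoft's implementation), the cumulative binomial equation can be solved for Z by bisection, since its left-hand side increases monotonically in Z:

```python
from math import comb

def median_rank(N, j, P=0.50, tol=1e-10):
    """Solve the cumulative binomial equation for Z at the P point
    (P = 0.50 gives the median rank of the j-th failure out of N)."""
    def cum_binom(z):
        # P(at least j of N units fail) when each fails with probability z
        return sum(comb(N, k) * z**k * (1 - z)**(N - k) for k in range(j, N + 1))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cum_binom(mid) < P:
            lo = mid  # not enough probability mass yet; increase z
        else:
            hi = mid
    return (lo + hi) / 2

# Median ranks for four failures out of N = 4 units
ranks = [median_rank(4, j) for j in (1, 2, 3, 4)]
```

For N = 4, the first and last ranks have closed forms (1 - 0.5^(1/4) ≈ 0.1591 and 0.5^(1/4) ≈ 0.8409), which the bisection reproduces.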
===Beta and F Distributions Approach===&lt;br /&gt;
A more straightforward method of estimating median ranks is to apply two transformations to the cumulative binomial equation, first to the beta distribution and then to the F distribution, resulting in [[Appendix:_Life_Data_Analysis_References|[12, 13]]]: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   MR &amp;amp; = &amp;amp; \tfrac{1}{1+\tfrac{N-j+1}{j}{{F}_{0.50;m;n}}}  \\&lt;br /&gt;
   m &amp;amp; = &amp;amp; 2(N-j+1)  \\&lt;br /&gt;
   n &amp;amp; = &amp;amp; 2j  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{F}_{0.50;m;n}}\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution at the 0.50 point, with &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; degrees of freedom, for failure &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; out of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units.&lt;br /&gt;
&lt;br /&gt;
=== Benard&#039;s Approximation for Median Ranks  ===&lt;br /&gt;
A quicker, though less accurate, approximation of the median ranks is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MR = \frac{{j - 0.3}}{{N + 0.4}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This approximation of the median ranks is also known as &#039;&#039;Benard&#039;s approximation&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
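A sketch comparing Benard's approximation with the exact value for the first of four failures (whose median rank has the closed form 1 - 0.5^(1/4)):

```python
def benard(N, j):
    """Benard's approximation to the median rank of the j-th failure out of N."""
    return (j - 0.3) / (N + 0.4)

approx = benard(4, 1)        # 0.7 / 4.4 ≈ 0.1591
exact = 1 - 0.5 ** (1 / 4)   # exact median rank for j = 1, N = 4
error = abs(approx - exact)  # the approximation is good to a few decimal places here
```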
===Kaplan-Meier===&lt;br /&gt;
The Kaplan-Meier estimator (also known as the &#039;&#039;product limit estimator&#039;&#039;) is used as an alternative to the median ranks method for calculating the estimates of the unreliability for probability plotting purposes. The equation of the estimator is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{F}({{t}_{i}})=1-\underset{j=1}{\overset{i}{\mathop \prod }}\,\frac{{{n}_{j}}-{{r}_{j}}}{{{n}_{j}}},\text{ }i=1,...,m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  m =  &amp;amp; {\text{total number of data points}} \\ &lt;br /&gt;
  n =  &amp;amp; {\text{the total number of units}} \\ &lt;br /&gt;
  {n_i} =  &amp;amp; n - \sum_{j = 0}^{i - 1}{s_j} - \sum_{j = 0}^{i - 1}{r_j},\text{ }i = 1,...,m \\ &lt;br /&gt;
  {r_j} =  &amp;amp; {\text{ number of failures in the }}{j^{th}}{\text{ data group, and}} \\ &lt;br /&gt;
  {s_j} =  &amp;amp; {\text{number of surviving units in the }}{j^{th}}{\text{ data group}} \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
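A minimal sketch of the product limit estimator for grouped data (the data values are illustrative, and the grouping convention follows the equation above):

```python
def kaplan_meier(groups, n):
    """Kaplan-Meier unreliability estimates F_hat(t_i).

    groups: ordered list of (r_j, s_j) = (failures, suspensions) per data group
    n:      total number of units on test
    """
    at_risk = n   # n_i: units still at risk entering group i
    surv = 1.0    # running product of (n_i - r_i) / n_i
    F = []
    for r, s in groups:
        surv *= (at_risk - r) / at_risk
        F.append(1.0 - surv)
        at_risk -= r + s  # failures and suspensions leave the risk set
    return F

# e.g., 10 units: 2 failures, then 1 suspension, then 3 failures
F = kaplan_meier([(2, 0), (0, 1), (3, 0)], 10)
```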
== Probability Plotting Example  ==&lt;br /&gt;
This same methodology can be applied to other distributions with &#039;&#039;cdf&#039;&#039; equations that can be linearized. Different probability papers exist for each distribution, because different distributions have different &#039;&#039;cdf&#039;&#039; equations. ReliaSoft&#039;s software tools automatically create these plots for you. Special scales on these plots allow you to derive the parameter estimates directly from the plots, similar to the way &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; were obtained from the Weibull probability plot. The following example demonstrates the method again, this time using the 1-parameter exponential distribution.&lt;br /&gt;
&lt;br /&gt;
{{:Probability Plotting Example}}&lt;br /&gt;
&lt;br /&gt;
== Comments on the Probability Plotting Method ==&lt;br /&gt;
Besides the most obvious drawback of probability plotting, namely the amount of effort required, manual probability plotting does not always yield consistent results. Two people plotting a straight line through a set of points will not always draw this line the same way, and thus will come up with slightly different results. This method was used primarily before the widespread availability of computers that could easily perform the calculations for more complicated parameter estimation methods, such as the least squares and maximum likelihood methods.&lt;br /&gt;
&lt;br /&gt;
= Least Squares (Rank Regression)  =&lt;br /&gt;
Using the idea of probability plotting, regression analysis mathematically fits the best straight line to a set of points, in an attempt to estimate the parameters. Essentially, this is a mathematically based version of the probability plotting method discussed previously. &lt;br /&gt;
&lt;br /&gt;
The method of linear least squares is used for all regression analysis performed by Weibull++, except for the cases of the 3-parameter Weibull, mixed Weibull, gamma and generalized gamma distributions, where a non-linear regression technique is employed. The terms &#039;&#039;linear regression&#039;&#039; and &#039;&#039;least squares&#039;&#039; are used synonymously in this reference. In Weibull++, the term &#039;&#039;rank regression&#039;&#039; is used instead of least squares, or linear regression, because the regression is performed on the rank values, more specifically, the median rank values (represented on the y-axis). The method of least squares requires that a straight line be fitted to a set of data points, such that the sum of the squares of the distances of the points to the fitted line is minimized. This minimization can be performed in either the vertical or horizontal direction. If the regression is on &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the horizontal deviations from the points to the line are minimized. If the regression is on &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt;, then the vertical deviations from the points to the line are minimized. This is illustrated in the following figure. &lt;br /&gt;
&lt;br /&gt;
[[Image:minimizingdistance.png|center|500px]]&lt;br /&gt;
&lt;br /&gt;
=== Rank Regression on Y  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;-values are known exactly. Then, according to the &#039;&#039;least squares principle,&#039;&#039; which minimizes the vertical distance between the data points and the straight line fitted to the data, the best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;y=\hat{a}+\hat{b}x\,\!&amp;lt;/math&amp;gt; (where the recently introduced (&amp;lt;math&amp;gt;\hat{ }\,\!&amp;lt;/math&amp;gt;) symbol indicates that this value is an estimate) such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\sum\limits_{i=1}^{N}{{{\left( \hat{a}+\hat{b}{{x}_{i}}-{{y}_{i}} \right)}^{2}}=\min \sum\limits_{i=1}^{N}{{{\left( a+b{{x}_{i}}-{{y}_{i}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and where &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat b\,\!&amp;lt;/math&amp;gt; are the &#039;&#039;least squares estimates&#039;&#039; of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}=\bar{y}-\hat{b}\bar{x}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Rank Regression on X  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the y-values are known exactly. The same least squares principle is applied, but this time, minimizing the horizontal distance between the data points and the straight line fitted to the data. The best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;x=\widehat{a}+\widehat{b}y\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(\widehat{a}+\widehat{b}{{y}_{i}}-{{x}_{i}})}^{2}}=\underset{a,b}{\mathop{\min }}\,\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(a+b{{y}_{i}}-{{x}_{i}})}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat b\,\!&amp;lt;/math&amp;gt; are the least squares estimates of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}=\bar{x}-\hat{b}\bar{y}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding relations for determining the parameters for specific distributions (i.e., Weibull, exponential, etc.), are presented in the chapters covering that distribution.&lt;br /&gt;
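The closed-form estimates above translate directly into code; this sketch implements both directions (regression on Y and regression on X) for arbitrary data pairs:

```python
def rank_regression_y(x, y):
    """Least squares estimates (regression on Y) for y = a + b*x,
    using the closed-form equations above."""
    N = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y)) - sx * sy / N
    sxx = sum(xi * xi for xi in x) - sx * sx / N
    b = sxy / sxx
    a = sy / N - b * sx / N  # a_hat = ybar - b_hat * xbar
    return a, b

def rank_regression_x(x, y):
    """Regression on X fits x = a + b*y, i.e., the roles of x and y swap."""
    return rank_regression_y(y, x)

a_y, b_y = rank_regression_y([1, 2, 3, 4], [3, 5, 7, 9])  # data on the line y = 1 + 2x
a_x, b_x = rank_regression_x([1, 2, 3, 4], [3, 5, 7, 9])  # same data: x = -0.5 + 0.5y
```

For perfectly linear data the two directions agree; for scattered rank values they generally give slightly different parameter estimates.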
&lt;br /&gt;
=== Correlation Coefficient  ===&lt;br /&gt;
The correlation coefficient is a measure of how well the linear regression model fits the data and is usually denoted by &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt;. In the case of life data analysis, it is a measure for the strength of the linear relation (correlation) between the median ranks and the data. The population correlation coefficient is defined as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\rho =\frac{{{\sigma }_{xy}}}{{{\sigma }_{x}}{{\sigma }_{y}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\sigma}_{xy}} = \,\!&amp;lt;/math&amp;gt; covariance of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\sigma}_{x}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{\sigma}_{y}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The estimator of &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt; is the sample correlation coefficient, &amp;lt;math&amp;gt;\hat{\rho }\,\!&amp;lt;/math&amp;gt;, given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\sqrt{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N} \right)\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The range of &amp;lt;math&amp;gt;\hat \rho \,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;-1\le \hat{\rho }\le 1\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
[[Image:correlationcoeffficient.png|center|500px]] &lt;br /&gt;
&lt;br /&gt;
The closer the value is to &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt;, the better the linear fit. Note that +1 indicates a perfect fit (the paired values (&amp;lt;math&amp;gt;{{x}_{i}},{{y}_{i}}\,\!&amp;lt;/math&amp;gt;) lie on a straight line) with a positive slope, while -1 indicates a perfect fit with a negative slope. A correlation coefficient value of zero would indicate that the data are randomly scattered and have no pattern or correlation in relation to the regression line model.&lt;br /&gt;
&lt;br /&gt;
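The sample correlation coefficient is straightforward to compute from the sums in the equation above; a minimal sketch:

```python
from math import sqrt

def sample_correlation(x, y):
    """Sample correlation coefficient rho_hat from the equation above."""
    N = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y)) - sx * sy / N
    sxx = sum(a * a for a in x) - sx * sx / N
    syy = sum(b * b for b in y) - sy * sy / N
    return sxy / sqrt(sxx * syy)

rho_pos = sample_correlation([1, 2, 3, 4], [2, 4, 6, 8])  # perfect fit, positive slope
rho_neg = sample_correlation([1, 2, 3, 4], [8, 6, 4, 2])  # perfect fit, negative slope
```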
===Comments on the Least Squares Method===&lt;br /&gt;
The least squares estimation method is quite good for functions that can be linearized. For these distributions, the calculations are relatively easy and straightforward, having closed-form solutions that can readily yield an answer without having to resort to numerical techniques or tables. Furthermore, this technique provides a good measure of the goodness-of-fit of the chosen distribution in the correlation coefficient. Least squares is generally best used with data sets containing complete data, that is, data consisting only of single times-to-failure with no censored or interval data. (See [[Life Data Classification]] for information about the different data types, including complete, left censored, right censored (or suspended) and interval data.) &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Least Squares/Rank Regression Equations]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Rank Methods for Censored Data=&lt;br /&gt;
All available data should be considered in the analysis of times-to-failure data. This includes the case when a particular unit in a sample has been removed from the test prior to failure. An item, or unit, which is removed from a reliability test prior to failure, or a unit which is in the field and is still operating at the time the reliability of these units is to be determined, is called a &#039;&#039;suspended item&#039;&#039;, &#039;&#039;right censored observation&#039;&#039; or &#039;&#039;right censored data point&#039;&#039;. Suspended items analysis would also be considered when: &lt;br /&gt;
&lt;br /&gt;
#We need to make an analysis of the available results before test completion. &lt;br /&gt;
#The failure modes which are occurring are different from those anticipated and such units are withdrawn from the test. &lt;br /&gt;
#We need to analyze a single mode and the actual data set comprises multiple modes. &lt;br /&gt;
#A &#039;&#039;warranty analysis&#039;&#039; is to be made of all units in the field (non-failed and failed units). The non-failed units are considered to be suspended items (or right censored).&lt;br /&gt;
&lt;br /&gt;
This section describes the rank methods that are used in both probability plotting and least squares (rank regression) to handle censored data. This includes:&lt;br /&gt;
&lt;br /&gt;
*The rank adjustment method for right censored (suspension) data.&lt;br /&gt;
*ReliaSoft&#039;s alternative ranking method for interval censored data.&lt;br /&gt;
=== Rank Adjustment Method for Right Censored Data ===&lt;br /&gt;
When using the probability plotting or least squares (rank regression) method for data sets where some of the units did not fail, or were suspended, we need to adjust their probability of failure, or unreliability. As discussed before, estimates of the unreliability for complete data are obtained using the median ranks approach. The following methodology illustrates how adjusted median ranks are computed to account for right censored data. To better illustrate the methodology, consider the following example in Kececioglu [[Appendix:_Life_Data_Analysis_References|&amp;amp;nbsp;[20]]] where five items are tested, resulting in three failures and two suspensions. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Item Number &amp;lt;br&amp;gt;(Position) &lt;br /&gt;
! Failure (F) &amp;lt;br&amp;gt;or Suspension (S) &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 5,100&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,500&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 15,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 22,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 40,000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The methodology for plotting suspended items involves adjusting the rank positions and plotting the data based on new positions, determined by the location of the suspensions. If we consider these five units, the following methodology would be used: The first item must be the first failure; hence, it is assigned failure order number &amp;lt;math&amp;gt;j = 1\,\!&amp;lt;/math&amp;gt;. The actual failure order number (or position) of the second failure, &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; is in doubt. It could either be in position 2 or in position 3. Had &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; not been withdrawn from the test at 9,500 hours, it could have operated successfully past 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 2. Alternatively, &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; could also have failed before 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 3. In this case, the failure order number for &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; will be some number between 2 and 3. To determine this number, consider the following: &lt;br /&gt;
&lt;br /&gt;
We can find the number of ways the second failure can occur in either order number 2 (position 2) or order number 3 (position 3). The possible ways are listed next. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;6&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 2 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 3&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 4 &lt;br /&gt;
| 5 &lt;br /&gt;
| 6 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be seen that &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; can occur in the second position six ways and in the third position two ways. The most probable position is the weighted average of these possible positions, or the &#039;&#039;mean order number&#039;&#039; (MON), given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}=MO{{N}_{2}}=\frac{(6\times 2)+(2\times 3)}{6+2}=2.25\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
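The mean order number is just a weighted average over the possible positions; as a sketch, with the counts taken from the enumeration above:

```python
def mean_order_number(ways):
    """Weighted average position, where `ways` maps a possible
    position to the number of orderings that place the failure there."""
    total = sum(ways.values())
    return sum(pos * count for pos, count in ways.items()) / total

# F2 can fall in position 2 six ways and in position 3 two ways:
mon2 = mean_order_number({2: 6, 3: 2})  # (6*2 + 2*3) / (6 + 2) = 2.25
```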
&amp;lt;br&amp;gt;Using the same logic on the third failure, it can be located in position numbers 3, 4 and 5 in the possible ways listed next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 3 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 4&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 5&lt;br /&gt;
|-&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Then, the mean order number for the third failure (item 5) is: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=\frac{(2\times 3)+(3\times 4)+(3\times 5)}{2+3+3}=4.125\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Once the mean order number for each failure has been established, we obtain the median rank positions for these failures at their mean order numbers. Specifically, we obtain the median ranks of the order numbers 1, 2.25 and 4.125 out of a sample size of 5, as given next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Plotting Positions for the Failures (Sample Size=5)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Failure Number &lt;br /&gt;
! MON &lt;br /&gt;
! Median Rank Position(%)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1:&amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1 &lt;br /&gt;
| 13%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2:&amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 2.25 &lt;br /&gt;
| 36%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3:&amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 4.125 &lt;br /&gt;
| 71%&lt;br /&gt;
|}&lt;br /&gt;
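The median rank positions in the table above can be reproduced numerically. The sketch below uses Benard&#039;s approximation, (MON - 0.3)/(N + 0.4), a common stand-in for the exact beta-distribution median rank that handles non-integer mean order numbers directly (the function name is ours, for illustration):&lt;br /&gt;

```python
def benard_median_rank(mon, n):
    # Benard's approximation to the median rank; the exact value would be
    # obtained from the median of the corresponding beta distribution
    return (mon - 0.3) / (n + 0.4)

# Mean order numbers from the example above, sample size N = 5
for mon in (1, 2.25, 4.125):
    print(round(100 * benard_median_rank(mon, 5)))  # prints 13, 36, 71
```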
&lt;br /&gt;
&lt;br /&gt;
Once the median rank values have been obtained, the probability plotting analysis is identical to that presented before. As you might have noticed, this methodology is rather laborious. Other techniques and shortcuts have been developed over the years to streamline this procedure. For more details on this method, see Kececioglu [[Appendix:_Life_Data_Analysis_References|[20]]]. Here, we will introduce one of these methods. This method calculates MON using an increment, &#039;&#039;I&#039;&#039;, which is defined by:&lt;br /&gt;
&lt;br /&gt;
:: &amp;lt;math&amp;gt;{{I}_{i}}=\frac{N+1-PMON}{1+NIBPSS}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Where&lt;br /&gt;
* &amp;quot;N&amp;quot; = the sample size, or total number of items in the test&lt;br /&gt;
* &amp;quot;PMON&amp;quot; = previous mean order number&lt;br /&gt;
* &amp;quot;NIBPSS&amp;quot; = the number of items beyond the present suspended set&lt;br /&gt;
* &amp;quot;i&amp;quot; = the ith failure item&lt;br /&gt;
&lt;br /&gt;
MON is given as:&lt;br /&gt;
 &lt;br /&gt;
:: &amp;lt;math&amp;gt;MO{{N}_{i}}=MO{{N}_{i-1}}+{{I}_{i}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s recalculate the previous example using this method.&lt;br /&gt;
&lt;br /&gt;
For F1:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{1}}=MO{{N}_{0}}+{{I}_{1}}=\frac{5+1-0}{1+5}=1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For F2:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{2}}=MO{{N}_{1}}+{{I}_{2}}=1+\frac{5+1-1}{1+3}=2.25&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For F3:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=MO{{N}_{2}}+{{I}_{3}}=2.25+\frac{5+1-2.25}{1+1}=4.125&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The MON obtained for each failure via this method is the same as that from the first method, so the median rank values will also be the same.&lt;br /&gt;
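The increment method translates directly into a short routine. A minimal sketch, assuming the items are listed in chronological order with &amp;quot;F&amp;quot; marking a failure and &amp;quot;S&amp;quot; a suspension (the function name is ours):&lt;br /&gt;

```python
def mean_order_numbers(states):
    """Mean order numbers (MON) for the failures in a chronologically
    ordered list of states: 'F' = failure, 'S' = suspension."""
    n = len(states)
    mons = []
    pmon = 0.0      # previous mean order number (PMON)
    last_susp = 0   # 1-based position of the most recent suspension
    for pos, state in enumerate(states, start=1):
        if state == 'S':
            last_susp = pos
        else:
            nibpss = n - last_susp                   # items beyond the present suspended set
            pmon += (n + 1 - pmon) / (1 + nibpss)    # increment I_i
            mons.append(pmon)
    return mons

print(mean_order_numbers(['F', 'S', 'F', 'S', 'F']))  # [1.0, 2.25, 4.125]
```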
 &lt;br /&gt;
==== Shortfalls of the Rank Adjustment Method  ====&lt;br /&gt;
Even though the rank adjustment method is the most widely used method for performing analysis of suspended items, we would like to point out the following shortcoming. As you may have noticed, only the position where the failure occurred is taken into account, and not the exact time-to-suspension. For example, this methodology would yield the exact same results for the next two cases. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 1 &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of Item, hr &lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of Item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000 &lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,100 &lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,700&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,200 &lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,800&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,300 &lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,900&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000 &lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This shortfall is significant when the number of failures is small and the number of suspensions is large and not spread uniformly between failures, as with these data. In cases like this, it is highly recommended to use maximum likelihood estimation (MLE) to estimate the parameters instead of using least squares, because MLE does not look at ranks or plotting positions, but rather considers each unique time-to-failure or suspension. For the data given above, the results are as follows. The estimated parameters using the method just described are the same for both cases (1 and 2): &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.81}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{11,417 hr}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, the MLE results for Case 1 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{1}\text{.33}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{6,900 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the MLE results for Case 2 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.9337}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{21,348 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we can see, there is a sizable difference between the results calculated using MLE and those calculated using regression. The results for the two cases are identical when using the regression estimation technique, because regression considers only the positions of the suspensions. The MLE results, however, are quite different, with Case 2 having a much larger value of &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; due to its higher suspension times. This is because the maximum likelihood technique, unlike rank regression, considers the values of the suspensions when estimating the parameters. This is illustrated in the [[Parameter_Estimation#Maximum_Likelihood_Estimation_.28MLE.29|discussion of MLE]] given below.&lt;br /&gt;
&lt;br /&gt;
== ReliaSoft&#039;s Ranking Method (RRM) for Interval Censored Data==&lt;br /&gt;
When analyzing interval data, it is commonplace to assume that the actual failure time occurred at the midpoint of the interval. To be more conservative, you can use the starting point of the interval; to be more optimistic, you can use the end point. Weibull++ allows you to employ ReliaSoft&#039;s ranking method (RRM) when analyzing interval data. Using an iterative process, this ranking method is an improvement over the standard ranking method (SRM). For more details on this method see [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|ReliaSoft&#039;s Ranking Method]].&lt;br /&gt;
&lt;br /&gt;
= Maximum Likelihood Estimation (MLE) = &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM OTHER WIKI PAGES. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
From a statistical point of view, the method of maximum likelihood estimation is, with some exceptions, considered to be the most robust of the parameter estimation techniques discussed here. The method presented in this section is for complete data (i.e., data consisting only of times-to-failure). The analyses for [[Parameter_Estimation#MLE_for_Right_Censored_Data|right censored (suspension) data]] and for [[Parameter_Estimation#MLE_for_Interval_and_Left_Censored_Data|interval or left censored data]] are then discussed in the following sections.&lt;br /&gt;
&lt;br /&gt;
The basic idea behind MLE is to obtain the most likely values of the parameters, for a given distribution, that will best describe the data. As an example, consider the following data (-3, 0, 4) and assume that you are trying to estimate the mean of the data. Now, if you have to choose the most likely value for the mean from -5, 1 and 10, which one would you choose? In this case, the most likely value is 1 (given your limit on choices). Similarly, under MLE, one determines the most likely values for the parameters of the assumed distribution. It is mathematically formulated as follows. &lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; unknown parameters that need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; independent observations, &amp;lt;math&amp;gt;{{x}_{1}},{{x}_{2}},\cdots ,{{x}_{R}}\,\!&amp;lt;/math&amp;gt;, which in the case of life data analysis correspond to failure times. The likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{R}})=L=\underset{i=1}{\overset{R}{\mathop \prod }}\,f({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;i = 1,2,...,R\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logarithmic likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Lambda  = \ln L =\sum_{i = 1}^R \ln f({x_i};{\theta _1},{\theta _2},...,{\theta _k})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood estimators (or parameter values) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are obtained by maximizing &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By maximizing &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;, which is much easier to work with than &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt;, the maximum likelihood estimators (MLE) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the simultaneous solutions of &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; equations such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{\partial{\Lambda}}{\partial{\theta_j}}=0, \text{ j=1,2...,k}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even though it is common practice to plot the MLE solutions using median ranks (points are plotted according to median ranks and the line according to the MLE solutions), this is not completely representative. As can be seen from the equations above, the MLE method is independent of any kind of ranks. For this reason, the MLE solution often appears not to track the data on the probability plot. This is perfectly acceptable because the two methods are independent of each other, and in no way suggests that the solution is wrong.&lt;br /&gt;
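Returning to the small example above with data (-3, 0, 4): assuming, purely for illustration, a normal model with unit standard deviation, the candidate mean with the largest log-likelihood among -5, 1 and 10 is indeed 1:&lt;br /&gt;

```python
import math

def normal_loglik(mu, data, sigma=1.0):
    # log-likelihood of the data under a Normal(mu, sigma) model
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

data = [-3, 0, 4]
best = max([-5, 1, 10], key=lambda mu: normal_loglik(mu, data))
print(best)  # 1
```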
&lt;br /&gt;
=== MLE for Right Censored Data  ===&lt;br /&gt;
When performing maximum likelihood analysis on data with suspended items, the likelihood function needs to be expanded to take into account the suspended items. The overall estimation technique does not change, but another term is added to the likelihood function to account for the suspended items. Beyond that, the method of solving for the parameter estimates remains the same. For example, consider a distribution where &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039; and &#039;&#039;cdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
    &amp;amp; F(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the unknown parameters which need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; observed failures at &amp;lt;math&amp;gt;{{T}_{1}},{{T}_{2}},...,{{T}_{R}}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; observed suspensions at &amp;lt;math&amp;gt;{{S}_{1}},{{S}_{2}},...,{{S}_{M}}\,\!&amp;lt;/math&amp;gt; then the likelihood function is formulated as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R,}}{{S}_{1}},...,{{S}_{M}})= &amp;amp; \underset{i=1}{\overset{R}{\mathop \prod }}\,f({{T}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \cdot \underset{j=1}{\overset{M}{\mathop \prod }}\,[1-F({{S}_{j}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameters are solved by maximizing this equation. In most cases, no closed-form solution exists for this maximum or for the parameters. Solutions specific to each distribution utilizing MLE are presented in [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
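As a numerical illustration (a sketch, not ReliaSoft&#039;s implementation), the right-censored Weibull likelihood can be maximized by eliminating &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; through its closed-form expression and solving the resulting profile equation for &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; by bisection. Applied to Case 1 of the earlier comparison, it reproduces the MLE results quoted there (&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; about 1.33, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; about 6,900 hr):&lt;br /&gt;

```python
import math

def weibull_mle(failures, suspensions, lo=0.01, hi=10.0, tol=1e-9):
    """Two-parameter Weibull MLE for exact failures plus right-censored
    suspensions, via the standard profile equation for beta (a sketch)."""
    times = failures + suspensions
    r = len(failures)
    mean_lnf = sum(math.log(t) for t in failures) / r

    def g(beta):
        # derivative of the profile log-likelihood with respect to beta
        s = sum(t ** beta for t in times)
        s_ln = sum(t ** beta * math.log(t) for t in times)
        return s_ln / s - 1.0 / beta - mean_lnf

    while hi - lo > tol:                 # bisection: g increases with beta
        mid = (lo + hi) / 2.0
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    beta = (lo + hi) / 2.0
    eta = (sum(t ** beta for t in times) / r) ** (1.0 / beta)
    return beta, eta

# Case 1 from the comparison above
print(weibull_mle([1000, 10000], [1100, 1200, 1300]))
```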
&lt;br /&gt;
=== MLE for Interval and Left Censored Data  ===&lt;br /&gt;
The inclusion of left and interval censored data in an MLE solution for parameter estimates involves adding a term to the likelihood equation to account for the data types in question. When using interval data, it is assumed that the failures occurred in an interval; i.e., in the interval from time &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or from time 0 to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; if left censored), where &amp;lt;math&amp;gt;A &amp;lt; B\,\!&amp;lt;/math&amp;gt;. In the case of interval data, and given &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; interval observations, the likelihood function is modified by multiplying the likelihood function with an additional term as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{P}})= &amp;amp; \underset{i=1}{\overset{P}{\mathop \prod }}\,\{F({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \ \ -F({{x}_{i-1}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that if only interval data are present, this term will represent the entire likelihood function for the MLE solution. The next section gives a formulation of the complete likelihood function for all possible censoring schemes.&lt;br /&gt;
&lt;br /&gt;
=== The Complete Likelihood Function  ===&lt;br /&gt;
We have now seen that obtaining MLE parameter estimates for different types of data involves incorporating different terms in the likelihood function to account for complete data, right censored data, and left or interval censored data. After including the terms for the different types of data, the likelihood function can now be expressed in its complete form: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
    L= &amp;amp; \underset{i=1}{\mathop{\overset{R}{\mathop{\prod }}\,}}\,f({{T}_{i}};{{\theta }_{1}},...,{{\theta }_{k}})\cdot \underset{j=1}{\mathop{\overset{M}{\mathop{\prod }}\,}}\,[1-F({{S}_{j}};{{\theta }_{1}},...,{{\theta }_{k}})]  \\&lt;br /&gt;
    &amp;amp; \cdot \underset{l=1}{\mathop{\overset{P}{\mathop{\prod }}\,}}\,\left\{ F({{I}_{{{l}_{U}}}};{{\theta }_{1}},...,{{\theta }_{k}})-F({{I}_{{{l}_{L}}}};{{\theta }_{1}},...,{{\theta }_{k}}) \right\}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; L\to L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R}},{{S}_{1}},...,{{S}_{M}},{{I}_{1}},...{{I}_{P}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
*&amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the number of units with exact failures &lt;br /&gt;
*&amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is the number of suspended units &lt;br /&gt;
*&amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is the number of units with left censored or interval times-to-failure &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the parameters of the distribution &lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure&lt;br /&gt;
*&amp;lt;math&amp;gt;{{S}_{j}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; time of suspension&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{U}}}}\,\!&amp;lt;/math&amp;gt; is the ending of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{L}}}}\,\!&amp;lt;/math&amp;gt; is the beginning of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;The total number of units is &amp;lt;math&amp;gt;N = R + M + P\,\!&amp;lt;/math&amp;gt;. It should be noted that in this formulation, if any of &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is zero, then the product term associated with it is taken to be one, not zero.&lt;br /&gt;
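The complete likelihood can be assembled term by term in code. In this sketch the Weibull &#039;&#039;pdf&#039;&#039; and &#039;&#039;cdf&#039;&#039; serve purely as an example model; any distribution&#039;s &amp;lt;math&amp;gt;f\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; could be substituted:&lt;br /&gt;

```python
import math

def weibull_pdf(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

def weibull_cdf(t, beta, eta):
    return 1.0 - math.exp(-((t / eta) ** beta))

def complete_loglik(beta, eta, failures=(), suspensions=(), intervals=()):
    """Log of the complete likelihood: one term per exact failure, per
    right-censored suspension, and per (lower, upper) interval observation.
    Empty groups contribute nothing (i.e., a product term of one)."""
    ll = sum(math.log(weibull_pdf(t, beta, eta)) for t in failures)
    ll += sum(math.log(1.0 - weibull_cdf(s, beta, eta)) for s in suspensions)
    ll += sum(math.log(weibull_cdf(u, beta, eta) - weibull_cdf(l, beta, eta))
              for (l, u) in intervals)
    return ll
```

Maximizing this function over &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; (numerically, since no closed-form solution exists) yields the MLE for any mix of the three data types.&lt;br /&gt;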
&lt;br /&gt;
== Comments on the MLE Method  ==&lt;br /&gt;
The MLE method has many large sample properties that make it attractive for use. It is asymptotically consistent, which means that as the sample size gets larger, the estimates converge to the right values. It is asymptotically efficient, which means that for large samples, it produces the most precise estimates. It is asymptotically unbiased, which means that for large samples, one expects to get the right value on average. The distribution of the estimates themselves is normal, if the sample is large enough, and this is the basis for the usual [[Confidence_Bounds#Fisher_Matrix_Confidence_Bounds|Fisher Matrix Confidence Bounds]] discussed later. These are all excellent large sample properties. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, the size of the sample necessary to achieve these properties can be quite large: thirty to fifty, or even more than a hundred, exact failure times, depending on the application. With fewer points, the methods can be badly biased. It is known, for example, that MLE estimates of the shape parameter for the Weibull distribution are badly biased for small sample sizes, and the effect can be increased depending on the amount of censoring. This bias can cause major discrepancies in analysis. There are also pathological situations when the asymptotic properties of the MLE do not apply. One of these is estimating the location parameter for the three-parameter Weibull distribution when the shape parameter has a value close to 1. These problems, too, can cause major discrepancies. &lt;br /&gt;
&lt;br /&gt;
However, MLE can handle suspensions and interval data better than rank regression, particularly when dealing with a heavily censored data set with few exact failure times or when the censoring times are unevenly distributed. It can also provide estimates with one or no observed failures, which rank regression cannot do. As a rule of thumb, our recommendation is to use rank regression techniques when the sample sizes are small and without heavy censoring (censoring is discussed in [[Life Data Classification|Life Data Classifications]]). When heavy or uneven censoring is present, when a high proportion of interval data is present and/or when the sample size is sufficient, MLE should be preferred. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Appendix:_Maximum_Likelihood_Estimation_Example|Maximum Likelihood Parameter Estimation Example]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Bayesian Parameter Estimation Methods=&lt;br /&gt;
Up to this point, we have dealt exclusively with what is commonly referred to as classical statistics. In this section, another school of thought in statistical analysis will be introduced, namely Bayesian statistics. The premise of Bayesian statistics (within the context of life data analysis) is to incorporate prior knowledge, along with a given set of current observations, in order to make statistical inferences. The prior information could come from operational or observational data, from previous comparable experiments or from engineering knowledge.  This type of analysis can be particularly useful when there is limited test data for a given design or failure mode but there is a strong prior understanding of the failure rate behavior for that design or mode. By incorporating prior information about the parameter(s), a posterior distribution for the parameter(s) can be obtained and inferences on the model parameters and their functions can be made. This section is intended to give a quick and elementary overview of Bayesian methods, focused primarily on the material necessary for understanding the Bayesian analysis methods available in Weibull++. Extensive coverage of the subject can be found in numerous books dealing with Bayesian statistics.&lt;br /&gt;
&lt;br /&gt;
===Bayes’s Rule===&lt;br /&gt;
Bayes’s rule provides the framework for combining prior information with sample data. In this reference, we apply Bayes’s rule for combining prior information on the assumed distribution&#039;s parameter(s) with sample data in order to make inferences based on the model. The prior knowledge about the parameter(s) is expressed in terms of a distribution &amp;lt;math&amp;gt;\varphi (\theta )\,\!&amp;lt;/math&amp;gt;, called the &#039;&#039;prior distribution&#039;&#039;. The &#039;&#039;posterior&#039;&#039; distribution of &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; given the sample data, obtained using Bayes&#039;s rule, provides the updated information about the parameters &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt;. This is expressed with the following posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta |Data) = \frac{L(Data|\theta )\varphi (\theta )}{\int_{\zeta}^{} L(Data|\theta )\varphi(\theta )d (\theta)}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; is a vector of the parameters of the chosen distribution&lt;br /&gt;
*&amp;lt;math&amp;gt;\zeta\,\!&amp;lt;/math&amp;gt; is the range of &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt; L(Data|\theta)\,\!&amp;lt;/math&amp;gt; is the likelihood function based on the chosen distribution and data&lt;br /&gt;
*&amp;lt;math&amp;gt;\varphi(\theta )\,\!&amp;lt;/math&amp;gt; is the prior distribution for each of the parameters&lt;br /&gt;
&lt;br /&gt;
The integral in the Bayes&#039;s rule equation is often referred to as the marginal probability, which is a constant number that can be interpreted as the probability of obtaining the sample data given a prior distribution. Generally, the integral in the Bayes&#039;s rule equation does not have a closed form solution and numerical methods are needed for its solution.&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Bayes&#039;s rule equation, there is a significant difference between classical and Bayesian statistics. First, the idea of prior information does not exist in classical statistics. All inferences in classical statistics are based on the sample data. On the other hand, in the Bayesian framework, prior information constitutes the basis of the theory. Another difference is in the overall approach of making inferences and their interpretation. For example, in Bayesian analysis, the parameters of the distribution to be fitted are the random variables. In reality, there is no distribution fitted to the data in the Bayesian case.&lt;br /&gt;
&lt;br /&gt;
For instance, consider the case where data is obtained from a reliability test. Based on prior experience with a similar product, the analyst believes that the shape parameter of the Weibull distribution has a value between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\beta }_{2}}\,\!&amp;lt;/math&amp;gt; and wants to utilize this information. This can be achieved by using Bayes&#039;s rule. At this point, the analyst is automatically forcing the Weibull distribution as a model for the data, with a shape parameter between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{\beta _2}\,\!&amp;lt;/math&amp;gt;. In this example, the range of values for the shape parameter is the prior distribution, which in this case is uniform. By applying Bayes&#039;s rule, the posterior distribution of the shape parameter will be obtained. Thus, we end up with a distribution for the parameter rather than a point estimate, as in classical statistics.&lt;br /&gt;
&lt;br /&gt;
To better illustrate the example, assume that a set of failure data was provided along with a distribution for the shape parameter (i.e., uniform prior) of the Weibull (automatically assuming that the data are Weibull distributed). Based on that, a new distribution (the posterior) for that parameter is then obtained using Bayes&#039;s rule. This posterior distribution of the parameter may or may not resemble in form the assumed prior distribution. In other words, in this example the prior distribution of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; was assumed to be uniform but the posterior is most likely not a uniform distribution.&lt;br /&gt;
&lt;br /&gt;
The question now becomes: what is the value of the shape parameter? What about the reliability and other results of interest? In order to answer these questions, we have to remember that in the Bayesian framework all of these metrics are random variables. Therefore, in order to obtain an estimate, a probability needs to be specified or we can use the expected value of the posterior distribution.&lt;br /&gt;
&lt;br /&gt;
In order to demonstrate the procedure of obtaining results from the posterior distribution, we will rewrite the Bayes&#039;s rule equation for a single parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta |Data) = \frac{L(Data|\theta_1 )\varphi (\theta_1 )}{\int_{\zeta}^{} L(Data|\theta_1 )\varphi(\theta_1 )d (\theta)}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The expected value (or mean value) of the parameter &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained using the equation for the mean and the Bayes&#039;s rule equation for a single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E({\theta _1}) = {m_{{\theta _1}}} = \int_{\zeta}^{}{\theta _1} \cdot f({\theta _1}|Data)d{\theta _1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative result for &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; would be the median value. Using the equation for the median and the Bayes&#039;s rule equation for a single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{\theta }_{0.5}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation for the median is solved for &amp;lt;math&amp;gt;{\theta _{0.5}}\,\!&amp;lt;/math&amp;gt;, the median value of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Similarly, any other percentile of the posterior &#039;&#039;pdf&#039;&#039; can be calculated and reported. For example, one could calculate the 90th percentile of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;’s posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{{\theta }_{0.9}}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.9\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This calculation will be used in [[Confidence Bounds]] and [[The Weibull Distribution]] for obtaining confidence bounds on the parameter(s).&lt;br /&gt;
&lt;br /&gt;
The next step will be to make inferences on the reliability. Since the parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; is a random variable described by the posterior &#039;&#039;pdf,&#039;&#039; all subsequent functions of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; are distributed random variables as well and are entirely based on the posterior &#039;&#039;pdf&#039;&#039; of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt;. Therefore, expected value, median or other percentile values will also need to be calculated. For example, the expected reliability at time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E[R(T|Data)] = \int_{\zeta}^{} R(T)f(\theta |Data)d{\theta}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, at a given time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, there is a distribution that governs the reliability value at that time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, and by using Bayes&#039;s rule, the expected (or mean) value of the reliability is obtained. Other percentiles of this distribution can also be obtained.&lt;br /&gt;
A similar procedure is followed for other functions of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;, such as failure rate, reliable life, etc.&lt;br /&gt;
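These posterior summaries can be illustrated by discretizing Bayes's rule on a grid. The sketch below is a minimal numerical example, not part of the original text: the failure times, the prior range [1, 3] and the scale parameter &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; are all hypothetical, and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; is held fixed so that the shape parameter is the single unknown, matching the single-parameter form of Bayes's rule above.

```python
import math

# Hypothetical failure times; eta is assumed known for this illustration so
# that beta is the single unknown parameter.
times = [35.0, 58.0, 74.0, 91.0, 110.0]
eta = 85.0
b_lo, b_hi = 1.0, 3.0  # uniform (flat) prior range for the shape parameter

def log_likelihood(beta):
    # Weibull log-likelihood with known scale eta
    return sum(math.log(beta / eta) + (beta - 1) * math.log(t / eta)
               - (t / eta) ** beta for t in times)

n = 4000
grid = [b_lo + (b_hi - b_lo) * (i + 0.5) / n for i in range(n)]
weights = [math.exp(log_likelihood(b)) for b in grid]  # likelihood x flat prior
total = sum(weights)
post = [w / total for w in weights]  # discretized posterior pdf (sums to 1)

# expected value of the posterior
post_mean = sum(b * p for b, p in zip(grid, post))

def percentile(q):
    # smallest grid value whose cumulative posterior probability reaches q
    cum = 0.0
    for b, p in zip(grid, post):
        cum += p
        if cum >= q:
            return b
    return grid[-1]

post_median, post_p90 = percentile(0.5), percentile(0.9)

def reliability(t, beta):
    return math.exp(-(t / eta) ** beta)

# expected reliability at T = 50: integrate R(T; beta) against the posterior
exp_R50 = sum(reliability(50.0, b) * p for b, p in zip(grid, post))
```

The same weighted sum used for `exp_R50` applies to any other function of the parameter (failure rate, reliable life, etc.).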
&lt;br /&gt;
===Prior Distributions===&lt;br /&gt;
Prior distributions play a very important role in Bayesian statistics; they essentially form the basis of Bayesian analysis. Two types of prior distributions exist, namely &#039;&#039;informative&#039;&#039; and &#039;&#039;non-informative&#039;&#039;. Non-informative prior distributions (a.k.a. &#039;&#039;vague&#039;&#039;, &#039;&#039;flat&#039;&#039; or &#039;&#039;diffuse&#039;&#039; priors) are distributions that have no population basis and play a minimal role in the posterior distribution. The idea behind the use of non-informative prior distributions is to make inferences that are not greatly affected by external information, or to use when external information is not available. The uniform distribution is frequently used as a non-informative prior.&lt;br /&gt;
&lt;br /&gt;
On the other hand, informative priors have a stronger influence on the posterior distribution. The influence of the prior distribution on the posterior is related to the sample size of the data and the form of the prior. Generally speaking, large sample sizes are required to modify strong priors, whereas weak priors are overwhelmed by even relatively small sample sizes. Informative priors are typically obtained from past data.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56798</id>
		<title>Parameter Estimation</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56798"/>
		<updated>2014-12-03T20:58:33Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Rank Adjustment Method for Right Censored Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|4|Parameter Estimation}}&lt;br /&gt;
The term &#039;&#039;parameter estimation&#039;&#039; refers to the process of using sample data (in reliability engineering, usually times-to-failure or success data) to estimate the parameters of the selected distribution. Several parameter estimation methods are available. This section presents an overview of the available methods used in life data analysis. More specifically, we start with the relatively simple method of Probability Plotting and continue with the more sophisticated methods of Rank Regression (or Least Squares), Maximum Likelihood Estimation and Bayesian Estimation Methods.&lt;br /&gt;
&lt;br /&gt;
=Probability Plotting=&lt;br /&gt;
The least mathematically intensive method for parameter estimation is the method of probability plotting. As the term implies, probability plotting involves a physical plot of the data on specially constructed &#039;&#039;probability plotting paper&#039;&#039;. This method is easily implemented by hand, given that one can obtain the appropriate probability plotting paper.&lt;br /&gt;
&lt;br /&gt;
The method of probability plotting takes the &#039;&#039;cdf&#039;&#039; of the distribution and attempts to linearize it by employing a specially constructed paper. The following sections illustrate the steps in this method using the 2-parameter Weibull distribution as an example. This includes:&lt;br /&gt;
&lt;br /&gt;
*Linearize the unreliability function&lt;br /&gt;
*Construct the probability plotting paper&lt;br /&gt;
*Determine the X and Y positions of the plot points&lt;br /&gt;
&lt;br /&gt;
And then using the plot to read any particular time or reliability/unreliability value of interest.&lt;br /&gt;
&lt;br /&gt;
==Linearizing the Unreliability Function==&lt;br /&gt;
&lt;br /&gt;
In the case of the 2-parameter Weibull, the &#039;&#039;cdf&#039;&#039; (also the unreliability &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt;) is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(t)=Q(t)=1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function can then be linearized (i.e., put in the common &amp;lt;math&amp;gt;y = mx + b\,\!&amp;lt;/math&amp;gt; form) as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 Q(t)= &amp;amp;  1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}  \\&lt;br /&gt;
  \ln (1-Q(t))= &amp;amp; \ln \left[ {e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}} \right]  \\&lt;br /&gt;
  \ln (1-Q(t))=&amp;amp; -\left(\tfrac{t}{\eta}\right)^{\beta}  \\&lt;br /&gt;
  \ln ( -\ln (1-Q(t)))= &amp;amp; \beta \left(\ln \left( \frac{t}{\eta }\right)\right) \\&lt;br /&gt;
  \ln \left( \ln \left( \frac{1}{1-Q(t)}\right) \right) = &amp;amp; \beta \ln t -\beta \ln \left( \eta  \right)  \\&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then by setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x=\ln \left( t \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equation can then be rewritten as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\beta x-\beta \ln \left( \eta  \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is now a linear equation with a slope of: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
m = \beta&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and an intercept of:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=-\beta \cdot \ln (\eta)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
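The linearization can be checked numerically. In the sketch below (an illustration, not part of the original text) the parameter values and test times are arbitrary; for each time the transformed unreliability is compared against &amp;lt;math&amp;gt;\beta x-\beta \ln (\eta)\,\!&amp;lt;/math&amp;gt;, and the unreliability at &amp;lt;math&amp;gt;t=\eta\,\!&amp;lt;/math&amp;gt; is compared against &amp;lt;math&amp;gt;1-e^{-1}\,\!&amp;lt;/math&amp;gt;.

```python
import math

beta, eta = 2.0, 100.0  # assumed Weibull parameters for the check

def Q(t):
    # 2-parameter Weibull unreliability (cdf)
    return 1.0 - math.exp(-(t / eta) ** beta)

def y_of(t):
    # the linearizing transform of the unreliability
    return math.log(math.log(1.0 / (1.0 - Q(t))))

# the transform should reproduce the straight line y = beta*x - beta*ln(eta)
for t in (25.0, 100.0, 400.0):
    x = math.log(t)
    assert abs(y_of(t) - (beta * x - beta * math.log(eta))) < 1e-6

# at t = eta the unreliability is 1 - e^{-1}, about 63.2%
assert abs(Q(eta) - (1.0 - math.exp(-1.0))) < 1e-12
```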
&lt;br /&gt;
==Constructing the Paper==&lt;br /&gt;
The next task is to construct the Weibull probability plotting paper with the appropriate y and x axes. The x-axis transformation is simply logarithmic. The y-axis is a bit more complex, requiring a double log reciprocal transformation, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left(\ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt; is the unreliability. &lt;br /&gt;
&lt;br /&gt;
Such papers have been created by different vendors and are called &#039;&#039;probability plotting papers&#039;&#039;. ReliaSoft&#039;s reliability engineering resource website at www.weibull.com has different plotting papers available for [http://www.weibull.com/GPaper/index.htm download]. &lt;br /&gt;
&lt;br /&gt;
[[Image:WeibullPaper2C.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
To illustrate, consider the following probability plot on a slightly different type of Weibull probability paper. &lt;br /&gt;
&lt;br /&gt;
[[Image:different_weibull_paper.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
This paper is constructed based on the mentioned y and x transformations, where the y-axis represents unreliability and the x-axis represents time. Both of these values must be known for each time-to-failure point we want to plot. &lt;br /&gt;
&lt;br /&gt;
Then, given the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; value for each point, the points can easily be put on the plot. Once the points have been placed on the plot, the best possible straight line is drawn through these points. Once the line has been drawn, the slope of the line can be obtained (some probability papers include a slope indicator to simplify this calculation). This is the parameter &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;, which is the value of the slope. To determine the scale parameter, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; (also called the &#039;&#039;characteristic life&#039;&#039;), one reads the time from the x-axis corresponding to &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Note that at:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Q(t=\eta)= &amp;amp; 1-{{e}^{-{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}} \\ &lt;br /&gt;
  = &amp;amp; 1-{{e}^{-1}} \\ &lt;br /&gt;
  = &amp;amp; 0.632 \\ &lt;br /&gt;
  = &amp;amp; 63.2%  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, if we enter the &#039;&#039;y&#039;&#039; axis at &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;, the corresponding value of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; will be equal to &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;. Thus, using this simple methodology, the parameters of the Weibull distribution can be estimated.&lt;br /&gt;
&lt;br /&gt;
==Determining the X and Y Position of the Plot Points==&lt;br /&gt;
The points on the plot represent our data or, more specifically, our times-to-failure data. If, for example, we tested four units that failed at 10, 20, 30 and 40 hours, then we would use these times as our &#039;&#039;x&#039;&#039; values or time values. &lt;br /&gt;
&lt;br /&gt;
Determining the appropriate &#039;&#039;y&#039;&#039; plotting positions, or the unreliability values, is a little more complex. To determine the &#039;&#039;y&#039;&#039; plotting positions, we must first determine a value indicating the corresponding unreliability for that failure. In other words, we need to obtain the cumulative percent failed for each time-to-failure. For example, the cumulative percent failed by 10 hours may be 25%, by 20 hours 50%, and so forth. This is a simple method illustrating the idea. The problem with this simple method is the fact that the 100% point is not defined on most probability plots; thus, an alternative and more robust approach must be used. The most widely used method of determining this value is the method of obtaining the &#039;&#039;median rank&#039;&#039; for each failure, as discussed next.&lt;br /&gt;
&lt;br /&gt;
===Median Ranks ===&lt;br /&gt;
The Median Ranks method is used to obtain an estimate of the unreliability for each failure. The median rank is the value that the true probability of failure, &amp;lt;math&amp;gt;Q({{T}_{j}})\,\!&amp;lt;/math&amp;gt;, should have at the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure out of a sample of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units at the 50% confidence level. &lt;br /&gt;
&lt;br /&gt;
The rank can be found for any percentage point, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, greater than zero and less than one, by solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;. This represents the rank, or unreliability estimate, for the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure in the following equation for the cumulative binomial: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the sample size and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; the order number. &lt;br /&gt;
&lt;br /&gt;
The median rank is obtained by solving this equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  at &amp;lt;math&amp;gt;P = 0.50\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;0.50=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; and we have four failures, we would solve the median rank equation for the value of &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; four times; once for each failure with &amp;lt;math&amp;gt;j= 1, 2, 3 \text{ and }4\,\!&amp;lt;/math&amp;gt;. This result can then be used as the unreliability estimate for each failure or the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; plotting position. (See also [[The Weibull Distribution|The Weibull Distribution]]&amp;amp;nbsp;for a step-by-step example of this method.) Solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; requires the use of numerical methods.&lt;br /&gt;
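One such numerical method is sketched below: since the cumulative binomial sum is monotonically increasing in &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;, a simple bisection suffices. The choice of bisection is an illustration, not prescribed by the text.

```python
from math import comb

def cum_binom(z, n, j):
    # P = sum_{k=j}^{n} C(n,k) z^k (1-z)^(n-k); increasing in z
    return sum(comb(n, k) * z ** k * (1 - z) ** (n - k)
               for k in range(j, n + 1))

def median_rank(n, j, p=0.5, iters=60):
    # bisect for the z where the cumulative binomial equals p
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if cum_binom(mid, n, j) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# the four median ranks for the N = 4 example above
ranks = [median_rank(4, j) for j in (1, 2, 3, 4)]
```

For &amp;lt;math&amp;gt;j=1\,\!&amp;lt;/math&amp;gt; the equation reduces to &amp;lt;math&amp;gt;1-(1-Z)^4=0.5\,\!&amp;lt;/math&amp;gt;, so the first rank is &amp;lt;math&amp;gt;1-0.5^{1/4}\approx 0.1591\,\!&amp;lt;/math&amp;gt;, which the bisection reproduces.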
&lt;br /&gt;
===Beta and F Distributions Approach===&lt;br /&gt;
A more straightforward and easier method of estimating median ranks is by applying two transformations to the cumulative binomial equation, first to the beta distribution and then to the F distribution, resulting in [[Appendix:_Life_Data_Analysis_References|[12, 13]]]: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   MR &amp;amp; = &amp;amp; \tfrac{1}{1+\tfrac{N-j+1}{j}{{F}_{0.50;m;n}}}  \\&lt;br /&gt;
   m &amp;amp; = &amp;amp; 2(N-j+1)  \\&lt;br /&gt;
   n &amp;amp; = &amp;amp; 2j  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{F}_{0.50;m;n}}\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution at the 0.50 point, with &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; degrees of freedom, for failure &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; out of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units.&lt;br /&gt;
&lt;br /&gt;
=== Benard&#039;s Approximation for Median Ranks  ===&lt;br /&gt;
Another quick, and less accurate, approximation of the median ranks is also given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MR = \frac{{j - 0.3}}{{N + 0.4}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This approximation of the median ranks is also known as &#039;&#039;Benard&#039;s approximation&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
===Kaplan-Meier===&lt;br /&gt;
The Kaplan-Meier estimator (also known as the &#039;&#039;product limit estimator&#039;&#039;) is used as an alternative to the median ranks method for calculating the estimates of the unreliability for probability plotting purposes. The equation of the estimator is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{F}({{t}_{i}})=1-\underset{j=1}{\overset{i}{\mathop \prod }}\,\frac{{{n}_{j}}-{{r}_{j}}}{{{n}_{j}}},\text{ }i=1,...,m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  m =  &amp;amp; {\text{total number of data points}} \\ &lt;br /&gt;
  n =  &amp;amp; {\text{the total number of units}} \\ &lt;br /&gt;
  {n_i} =  &amp;amp; n - \sum_{j = 0}^{i - 1}{s_j} - \sum_{j = 0}^{i - 1}{r_j}, \text{i = 1,...,m }\\ &lt;br /&gt;
  {r_j} =  &amp;amp; {\text{ number of failures in the }}{j^{th}}{\text{ data group, and}} \\ &lt;br /&gt;
  {s_j} =  &amp;amp; {\text{number of surviving units in the }}{j^{th}}{\text{ data group}} \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
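A minimal sketch of the product limit estimator using the quantities defined above; the grouped-data layout (one pair of failures and suspensions per group, in time order) and the example values are assumptions for illustration only.

```python
def kaplan_meier(n, groups):
    # groups: list of (r_j, s_j) = (failures, suspensions) per data group,
    # in increasing time order; n is the total number of units
    F = []
    surv = 1.0
    at_risk = n  # n_i for the current group
    for r, s in groups:
        surv *= (at_risk - r) / at_risk  # product-limit survival update
        F.append(1.0 - surv)             # unreliability estimate F(t_i)
        at_risk -= r + s                 # n_i for the next group
    return F

# complete data, one failure per group: estimates step by 1/n
est = kaplan_meier(4, [(1, 0), (1, 0), (1, 0), (1, 0)])
```

With complete data and one failure per group the estimates are simply 1/4, 2/4, 3/4 and 4/4, which the function reproduces.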
&lt;br /&gt;
== Probability Plotting Example  ==&lt;br /&gt;
This same methodology can be applied to other distributions with &#039;&#039;cdf&#039;&#039; equations that can be linearized. Different probability papers exist for each distribution, because different distributions have different &#039;&#039;cdf&#039;&#039; equations. ReliaSoft&#039;s software tools automatically create these plots for you. Special scales on these plots allow you to derive the parameter estimates directly from the plots, similar to the way &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; were obtained from the Weibull probability plot. The following example demonstrates the method again, this time using the 1-parameter exponential distribution.&lt;br /&gt;
&lt;br /&gt;
{{:Probability Plotting Example}}&lt;br /&gt;
&lt;br /&gt;
== Comments on the Probability Plotting Method ==&lt;br /&gt;
Besides its most obvious drawback, the amount of effort required, manual probability plotting does not always yield consistent results. Two people plotting a straight line through a set of points will not always draw this line the same way, and thus will come up with slightly different results. This method was used primarily before the widespread use of computers that could easily perform the calculations for more complicated parameter estimation methods, such as the least squares and maximum likelihood methods.&lt;br /&gt;
&lt;br /&gt;
= Least Squares (Rank Regression)  =&lt;br /&gt;
Using the idea of probability plotting, regression analysis mathematically fits the best straight line to a set of points, in an attempt to estimate the parameters. Essentially, this is a mathematically based version of the probability plotting method discussed previously. &lt;br /&gt;
&lt;br /&gt;
The method of linear least squares is used for all regression analysis performed by Weibull++, except for the cases of the 3-parameter Weibull, mixed Weibull, gamma and generalized gamma distributions, where a non-linear regression technique is employed. The terms &#039;&#039;linear regression&#039;&#039; and &#039;&#039;least squares&#039;&#039; are used synonymously in this reference. In Weibull++, the term &#039;&#039;rank regression&#039;&#039; is used instead of least squares, or linear regression, because the regression is performed on the rank values, more specifically, the median rank values (represented on the y-axis). The method of least squares requires that a straight line be fitted to a set of data points, such that the sum of the squares of the distance of the points to the fitted line is minimized. This minimization can be performed in either the vertical or horizontal direction. If the regression is on &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the horizontal deviations from the points to the line are minimized. If the regression is on &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the vertical deviations from the points to the line are minimized. This is illustrated in the following figure. &lt;br /&gt;
&lt;br /&gt;
[[Image:minimizingdistance.png|center|500px]]&lt;br /&gt;
&lt;br /&gt;
=== Rank Regression on Y  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;-values are known exactly. Then, according to the &#039;&#039;least squares principle,&#039;&#039; which minimizes the vertical distance between the data points and the straight line fitted to the data, the best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;y=\hat{a}+\hat{b}x\,\!&amp;lt;/math&amp;gt; (where the recently introduced (&amp;lt;math&amp;gt;\hat{ }\,\!&amp;lt;/math&amp;gt;) symbol indicates that this value is an estimate) such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\sum\limits_{i=1}^{N}{{{\left( \hat{a}+\hat{b}{{x}_{i}}-{{y}_{i}} \right)}^{2}}=\min \sum\limits_{i=1}^{N}{{{\left( a+b{{x}_{i}}-{{y}_{i}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and where &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat b\,\!&amp;lt;/math&amp;gt; are the &#039;&#039;least squares estimates&#039;&#039; of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}=\bar{y}-\hat{b}\bar{x}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
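The estimator equations above can be applied directly to linearized Weibull data. In the sketch below the failure times are hypothetical; Benard's approximation from the median ranks section supplies the &#039;&#039;y&#039;&#039; plotting positions, and the slope and intercept recover the Weibull parameters as derived in the probability plotting section.

```python
import math

# hypothetical complete (uncensored) failure times, in hours
times = sorted([16.0, 34.0, 53.0, 75.0, 93.0, 120.0])
N = len(times)

# Benard's approximation for the median ranks (y plotting positions)
MR = [(j - 0.3) / (N + 0.4) for j in range(1, N + 1)]

# linearized coordinates for the 2-parameter Weibull
xs = [math.log(t) for t in times]
ys = [math.log(math.log(1.0 / (1.0 - q))) for q in MR]

# least squares (rank regression on Y) estimates per the equations above
sx, sy = sum(xs), sum(ys)
sxy = sum(x * y for x, y in zip(xs, ys))
sxx = sum(x * x for x in xs)
b_hat = (sxy - sx * sy / N) / (sxx - sx ** 2 / N)
a_hat = sy / N - b_hat * sx / N

# slope = shape parameter; intercept = -beta * ln(eta)
beta_hat = b_hat
eta_hat = math.exp(-a_hat / b_hat)
```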
&lt;br /&gt;
=== Rank Regression on X  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;-values are known exactly. The same least squares principle is applied, but this time minimizing the horizontal distance between the data points and the straight line fitted to the data. The best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;x=\widehat{a}+\widehat{b}y\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(\widehat{a}+\widehat{b}{{y}_{i}}-{{x}_{i}})}^{2}}=\min \underset{i=1}{\overset{N}{\mathop \sum }}\,{{(a+b{{y}_{i}}-{{x}_{i}})}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat b\,\!&amp;lt;/math&amp;gt; are the least squares estimates of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}=\bar{x}-\hat{b}\bar{y}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding relations for determining the parameters for specific distributions (i.e., Weibull, exponential, etc.), are presented in the chapters covering that distribution.&lt;br /&gt;
&lt;br /&gt;
=== Correlation Coefficient  ===&lt;br /&gt;
The correlation coefficient is a measure of how well the linear regression model fits the data and is usually denoted by &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt;. In the case of life data analysis, it is a measure for the strength of the linear relation (correlation) between the median ranks and the data. The population correlation coefficient is defined as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\rho =\frac{{{\sigma }_{xy}}}{{{\sigma }_{x}}{{\sigma }_{y}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\sigma}_{xy}} = \,\!&amp;lt;/math&amp;gt; covariance of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\sigma}_{x}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{\sigma}_{y}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The estimator of &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt; is the sample correlation coefficient, &amp;lt;math&amp;gt;\hat{\rho }\,\!&amp;lt;/math&amp;gt;, given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\sqrt{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N} \right)\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The range of &amp;lt;math&amp;gt;\hat \rho \,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;-1\le \hat{\rho }\le 1\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
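The sample correlation coefficient equation above can be implemented directly. In the sketch below the check data are arbitrary points lying exactly on straight lines, so the estimator should return +1 and -1 respectively.

```python
import math

def sample_rho(xs, ys):
    # sample correlation coefficient, per the estimator equation above
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    num = sxy - sx * sy / n
    den = math.sqrt((sxx - sx ** 2 / n) * (syy - sy ** 2 / n))
    return num / den

xs = [1.0, 2.0, 3.0, 4.0]
# points exactly on a line with positive slope -> rho = +1
rho_pos = sample_rho(xs, [2.0 * x + 1.0 for x in xs])
# points exactly on a line with negative slope -> rho = -1
rho_neg = sample_rho(xs, [-x for x in xs])
```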
&lt;br /&gt;
[[Image:correlationcoeffficient.png|center|500px]] &lt;br /&gt;
&lt;br /&gt;
The closer the value is to &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt;, the better the linear fit. Note that +1 indicates a perfect fit (the paired values (&amp;lt;math&amp;gt;{{x}_{i}},{{y}_{i}}\,\!&amp;lt;/math&amp;gt;) lie on a straight line) with a positive slope, while -1 indicates a perfect fit with a negative slope. A correlation coefficient value of zero would indicate that the data are randomly scattered and have no pattern or correlation in relation to the regression line model.&lt;br /&gt;
&lt;br /&gt;
===Comments on the Least Squares Method===&lt;br /&gt;
The least squares estimation method is quite good for functions that can be linearized. For these distributions, the calculations are relatively easy and straightforward, having closed-form solutions that can readily yield an answer without having to resort to numerical techniques or tables. Furthermore, this technique provides a good measure of the goodness-of-fit of the chosen distribution in the correlation coefficient. Least squares is generally best used with data sets containing complete data, that is, data consisting only of single times-to-failure with no censored or interval data. (See [[Life Data Classification]] for information about the different data types, including complete, left censored, right censored (or suspended) and interval data.) &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Least Squares/Rank Regression Equations]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Rank Methods for Censored Data=&lt;br /&gt;
All available data should be considered in the analysis of times-to-failure data. This includes the case when a particular unit in a sample has been removed from the test prior to failure. An item, or unit, which is removed from a reliability test prior to failure, or a unit which is in the field and is still operating at the time the reliability of these units is to be determined, is called a &#039;&#039;suspended item&#039;&#039;, a &#039;&#039;right censored observation&#039;&#039; or a &#039;&#039;right censored data point&#039;&#039;. Suspended items analysis would also be considered when: &lt;br /&gt;
&lt;br /&gt;
#We need to make an analysis of the available results before test completion. &lt;br /&gt;
#The failure modes which are occurring are different than those anticipated and such units are withdrawn from the test. &lt;br /&gt;
#We need to analyze a single mode and the actual data set comprises multiple modes. &lt;br /&gt;
#A &#039;&#039;warranty analysis&#039;&#039; is to be made of all units in the field (non-failed and failed units). The non-failed units are considered to be suspended items (or right censored).&lt;br /&gt;
&lt;br /&gt;
This section describes the rank methods that are used in both probability plotting and least squares (rank regression) to handle censored data. This includes:&lt;br /&gt;
&lt;br /&gt;
*The rank adjustment method for right censored (suspension) data.&lt;br /&gt;
*ReliaSoft&#039;s alternative ranking method for interval censored data.&lt;br /&gt;
=== Rank Adjustment Method for Right Censored Data ===&lt;br /&gt;
When using the probability plotting or least squares (rank regression) method for data sets where some of the units did not fail, or were suspended, we need to adjust their probability of failure, or unreliability. As discussed before, estimates of the unreliability for complete data are obtained using the median ranks approach. The following methodology illustrates how adjusted median ranks are computed to account for right censored data. To better illustrate the methodology, consider the following example in Kececioglu [[Appendix:_Life_Data_Analysis_References|&amp;amp;nbsp;[20]]] where five items are tested resulting in three failures and two suspensions. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Item Number &amp;lt;br&amp;gt;(Position) &lt;br /&gt;
! Failure (F) &amp;lt;br&amp;gt;or Suspension (S) &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 5,100&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,500&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 15,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 22,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 40,000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The methodology for plotting suspended items involves adjusting the rank positions and plotting the data based on new positions, determined by the location of the suspensions. If we consider these five units, the following methodology would be used: The first item must be the first failure; hence, it is assigned failure order number &amp;lt;math&amp;gt;j = 1\,\!&amp;lt;/math&amp;gt;. The actual failure order number (or position) of the second failure, &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; is in doubt. It could either be in position 2 or in position 3. Had &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; not been withdrawn from the test at 9,500 hours, it could have operated successfully past 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 2. Alternatively, &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; could also have failed before 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 3. In this case, the failure order number for &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; will be some number between 2 and 3. To determine this number, consider the following: &lt;br /&gt;
&lt;br /&gt;
We can find the number of ways the second failure can occur in either order number 2 (position 2) or order number 3 (position 3). The possible ways are listed next. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;6&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 2 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 3&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 4 &lt;br /&gt;
| 5 &lt;br /&gt;
| 6 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be seen that &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; can occur in the second position six ways and in the third position two ways. The most probable position is the weighted average of these possible ways, or the &#039;&#039;mean order number&#039;&#039; (MON), given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}=MO{{N}_{2}}=\frac{(6\times 2)+(2\times 3)}{6+2}=2.25\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Using the same logic on the third failure, it can be located in position numbers 3, 4 and 5 in the possible ways listed next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 3 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 4&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 5&lt;br /&gt;
|-&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Then, the mean order number for the third failure, (item 5) is: &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=\frac{(2\times 3)+(3\times 4)+(3\times 5)}{2+3+3}=4.125\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Once the mean order number for each failure has been established, we obtain the median rank positions for these failures at their mean order number. Specifically, we obtain the median rank of the order numbers 1, 2.25 and 4.125 out of a sample size of 5, as given next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Plotting Positions for the Failures (Sample Size=5)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Failure Number &lt;br /&gt;
! MON &lt;br /&gt;
! Median Rank Position(%)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1:&amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1 &lt;br /&gt;
| 13%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2:&amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 2.25 &lt;br /&gt;
| 36%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3:&amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 4.125 &lt;br /&gt;
| 71%&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the median rank values have been obtained, the probability plotting analysis is identical to that presented before. As you might have noticed, this methodology is rather laborious. Other techniques and shortcuts have been developed over the years to streamline this procedure. For more details on this method, see Kececioglu [[Appendix:_Life_Data_Analysis_References|[20]]]. Here, we will introduce one of these methods. This method calculates MON using an increment, &#039;&#039;I&#039;&#039;, which is defined by:&lt;br /&gt;
&lt;br /&gt;
:: &amp;lt;math&amp;gt;{{I}_{i}}=\frac{N+1-PMON}{1+NIBPSS}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Where&lt;br /&gt;
* N = the sample size, or total number of items in the test&lt;br /&gt;
* PMON = previous mean order number&lt;br /&gt;
* NIBPSS = the number of items beyond the present suspended set (i.e., the present failure item and all items that follow it)&lt;br /&gt;
* i = the ith failure item&lt;br /&gt;
&lt;br /&gt;
MON is given as:&lt;br /&gt;
 &lt;br /&gt;
:: &amp;lt;math&amp;gt;MO{{N}_{i}}=MO{{N}_{i-1}}+{{I}_{i}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s recalculate the previous example using this method.&lt;br /&gt;
&lt;br /&gt;
For F1:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{1}}=MO{{N}_{0}}+{{I}_{1}}=\frac{5+1-0}{1+5}=1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For F2:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{2}}=MO{{N}_{1}}+{{I}_{2}}=1+\frac{5+1-1}{1+3}=2.25&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For F3:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=MO{{N}_{2}}+{{I}_{3}}=2.25+\frac{5+1-2.25}{1+1}=4.125&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The MON obtained for each failure item via this method is the same as that obtained from the first method, so the median rank values will also be the same.&lt;br /&gt;
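The increment method lends itself to a short script. The following sketch (our own illustration, not a ReliaSoft implementation) computes the mean order numbers for the five-unit example and converts them to approximate median ranks using Benard&#039;s approximation, (MON - 0.3)/(N + 0.4), which reproduces the 13%, 36% and 71% values tabulated above.&lt;br /&gt;

```python
def mean_order_numbers(events):
    """events: 'F' (failure) or 'S' (suspension) flags, sorted by life.
    Returns the mean order number (MON) of each failure via the
    increment formula I_i = (N + 1 - PMON) / (1 + NIBPSS)."""
    n = len(events)
    mons, prev_mon = [], 0.0
    for idx, state in enumerate(events):
        if state == 'F':
            nibpss = n - idx  # present failure item and all items after it
            prev_mon += (n + 1 - prev_mon) / (1 + nibpss)
            mons.append(prev_mon)
    return mons

def benard_median_rank(mon, n):
    """Approximate median rank for a (possibly fractional) order number."""
    return (mon - 0.3) / (n + 0.4)

events = ['F', 'S', 'F', 'S', 'F']         # the five-unit example above
mons = mean_order_numbers(events)          # [1.0, 2.25, 4.125]
ranks = [benard_median_rank(m, len(events)) for m in mons]
```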
 &lt;br /&gt;
==== Shortfalls of the Rank Adjustment Method  ====&lt;br /&gt;
Even though the rank adjustment method is the most widely used method for the analysis of suspended items, it has a notable shortcoming: only the positions of the suspensions relative to the failures are taken into account, not the exact times-to-suspension. For example, this methodology would yield the exact same results for the next two cases. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 1 &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of item, hr &lt;br /&gt;
! Item number &lt;br /&gt;
! State*,&amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000 &lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,100 &lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,700&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,200 &lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,800&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,300 &lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,900&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000 &lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This shortfall is significant when the number of failures is small and the number of suspensions is large and not spread uniformly between failures, as with these data. In cases like this, it is highly recommended to use maximum likelihood estimation (MLE) to estimate the parameters instead of using least squares, because MLE does not look at ranks or plotting positions, but rather considers each unique time-to-failure or suspension. For the data given above, the results are as follows. The estimated parameters using the method just described are the same for both cases (1 and 2): &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.81}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{11,417 hr}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, the MLE results for Case 1 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{1}\text{.33}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{6,900 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the MLE results for Case 2 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.9337}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{21,348 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we can see, there is a sizable difference between the MLE results for the two cases, and between the MLE results and the regression results. The regression results are identical for both cases, because regression considers only the positions of the suspensions. The MLE results differ markedly, with the second case yielding a much larger value of &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; due to the higher suspension times in Case 2. This is because the maximum likelihood technique, unlike rank regression, considers the actual values of the suspension times when estimating the parameters. This is illustrated in the [[Parameter_Estimation#Maximum_Likelihood_Estimation_.28MLE.29|discussion of MLE]] given below.&lt;br /&gt;
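This sensitivity of MLE to the suspension times can be reproduced with a short script. For the two-parameter Weibull with right-censored data, the MLE reduces to a standard one-dimensional equation in &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; (a textbook result, not specific to any particular software); the sketch below solves it by bisection and recovers estimates close to the Case 1 and Case 2 MLE values quoted above.&lt;br /&gt;

```python
import math

def weibull_mle(failures, suspensions):
    """Two-parameter Weibull MLE with right censoring. Solves the standard
    profile-likelihood equation for beta by bisection, then recovers eta
    from eta^beta = sum(t_i^beta)/r, where the sum runs over all units and
    r is the number of failures."""
    all_t = failures + suspensions
    r = len(failures)
    mean_log_f = sum(math.log(t) for t in failures) / r

    def g(beta):  # the MLE of beta is the root of g
        s = sum(t**beta for t in all_t)
        s_log = sum(t**beta * math.log(t) for t in all_t)
        return s_log / s - 1.0 / beta - mean_log_f

    lo, hi = 0.05, 20.0          # bracket assumed to contain the root
    for _ in range(200):         # bisection to machine precision
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    eta = (sum(t**beta for t in all_t) / r) ** (1.0 / beta)
    return beta, eta

# Case 1 and Case 2 from the table above: same failures, different suspensions
b1, e1 = weibull_mle([1000, 10000], [1100, 1200, 1300])
b2, e2 = weibull_mle([1000, 10000], [9700, 9800, 9900])
```

Because the two data sets differ only in their suspension times, rank regression treats them identically, whereas this MLE sketch yields a much larger &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; for Case 2.&lt;br /&gt;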
&lt;br /&gt;
== ReliaSoft&#039;s Ranking Method (RRM) for Interval Censored Data==&lt;br /&gt;
When analyzing interval data, it is commonplace to assume that the actual failure time occurred at the midpoint of the interval. To be more conservative, you can use the starting point of the interval, or to be most optimistic, you can use the end point of the interval. Weibull++ allows you to employ ReliaSoft&#039;s ranking method (RRM) when analyzing interval data. Using an iterative process, this ranking method is an improvement over the standard ranking method (SRM). For more details on this method see [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|ReliaSoft&#039;s Ranking Method]].&lt;br /&gt;
&lt;br /&gt;
= Maximum Likelihood Estimation (MLE) = &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM OTHER WIKI PAGES. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
From a statistical point of view, the method of maximum likelihood estimation is, with some exceptions, considered to be the most robust of the parameter estimation techniques discussed here. The method presented in this section is for complete data (i.e., data consisting only of times-to-failure). The analyses for [[Parameter_Estimation#MLE_for_Right_Censored_Data|right censored (suspension) data]] and for [[Parameter_Estimation#MLE_for_Interval_and_Left_Censored_Data|interval or left censored data]] are then discussed in the following sections.&lt;br /&gt;
&lt;br /&gt;
The basic idea behind MLE is to obtain the most likely values of the parameters, for a given distribution, that will best describe the data. As an example, consider the following data (-3, 0, 4) and assume that you are trying to estimate the mean of the data. Now, if you have to choose the most likely value for the mean from -5, 1 and 10, which one would you choose? In this case, the most likely value is 1 (given your limit on choices). Similarly, under MLE, one determines the most likely values for the parameters of the assumed distribution. It is mathematically formulated as follows. &lt;br /&gt;
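The intuition in this example can be checked numerically. The sketch below assumes, purely for illustration, a normal model with unit variance; it evaluates the log-likelihood of the data (-3, 0, 4) at the three candidate means and confirms that 1 is the most likely of the three choices.&lt;br /&gt;

```python
import math

data = [-3.0, 0.0, 4.0]

def normal_loglik(mu, xs, sigma=1.0):
    """Log-likelihood of the sample xs under a Normal(mu, sigma) model."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (x - mu)**2 / (2 * sigma**2) for x in xs)

candidates = [-5.0, 1.0, 10.0]
best = max(candidates, key=lambda mu: normal_loglik(mu, data))  # best == 1.0
```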
&lt;br /&gt;
If &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; unknown parameters to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; independent observations, &amp;lt;math&amp;gt;{{x}_{1,}}{{x}_{2}},\cdots ,{{x}_{R}}\,\!&amp;lt;/math&amp;gt;, which in the case of life data analysis correspond to failure times. The likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{R}})=L=\underset{i=1}{\overset{R}{\mathop \prod }}\,f({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;i = 1,2,...,R\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logarithmic likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Lambda  = \ln L =\sum_{i = 1}^R \ln f({x_i};{\theta _1},{\theta _2},...,{\theta _k})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood estimators (or parameter values) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are obtained by maximizing &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By maximizing &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt; which is much easier to work with than &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt;, the maximum likelihood estimators (MLE) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the simultaneous solutions of &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; equations such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{\partial{\Lambda}}{\partial{\theta_j}}=0, \text{ j=1,2...,k}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even though it is common practice to plot the MLE solutions using median ranks (points are plotted according to median ranks and the line according to the MLE solutions), this is not completely representative. As can be seen from the equations above, the MLE method is independent of any kind of ranks. For this reason, the MLE solution often appears not to track the data on the probability plot. This is perfectly acceptable because the two methods are independent of each other, and in no way suggests that the solution is wrong.&lt;br /&gt;
&lt;br /&gt;
=== MLE for Right Censored Data  ===&lt;br /&gt;
When performing maximum likelihood analysis on data with suspended items, the likelihood function needs to be expanded to take into account the suspended items. The overall estimation technique does not change, but another term is added to the likelihood function to account for the suspended items. Beyond that, the method of solving for the parameter estimates remains the same. For example, consider a distribution where &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039; and &#039;&#039;cdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
    &amp;amp; F(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the unknown parameters which need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; observed failures at &amp;lt;math&amp;gt;{{T}_{1}},{{T}_{2}},...,{{T}_{R}}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; observed suspensions at &amp;lt;math&amp;gt;{{S}_{1}},{{S}_{2}},...,{{S}_{M}}\,\!&amp;lt;/math&amp;gt; then the likelihood function is formulated as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R,}}{{S}_{1}},...,{{S}_{M}})= &amp;amp; \underset{i=1}{\overset{R}{\mathop \prod }}\,f({{T}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \cdot \underset{j=1}{\overset{M}{\mathop \prod }}\,[1-F({{S}_{j}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameters are solved by maximizing this equation. In most cases, no closed-form solution exists for this maximum or for the parameters. Solutions specific to each distribution utilizing MLE are presented in [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
&lt;br /&gt;
=== MLE for Interval and Left Censored Data  ===&lt;br /&gt;
The inclusion of left and interval censored data in an MLE solution for parameter estimates involves adding a term to the likelihood equation to account for the data types in question. When using interval data, it is assumed that the failures occurred in an interval; i.e., in the interval from time &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or from time 0 to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; if left censored), where &amp;lt;math&amp;gt;A &amp;lt; B\,\!&amp;lt;/math&amp;gt;. In the case of interval data, and given &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; interval observations, the likelihood function is modified by multiplying the likelihood function with an additional term as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{P}})= &amp;amp; \underset{i=1}{\overset{P}{\mathop \prod }}\,\{F({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \ \ -F({{x}_{i-1}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that if only interval data are present, this term will represent the entire likelihood function for the MLE solution. The next section gives a formulation of the complete likelihood function for all possible censoring schemes.&lt;br /&gt;
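As a concrete sketch of this interval term, the following code (with the Weibull chosen only as an example distribution, and hypothetical interval bounds) evaluates the log-likelihood contribution F(upper) - F(lower) for a set of interval observations; a left censored point is simply an interval starting at time 0.&lt;br /&gt;

```python
import math

def weibull_cdf(t, beta, eta):
    """Weibull unreliability F(t) = 1 - exp(-(t/eta)^beta)."""
    return 1.0 - math.exp(-((t / eta) ** beta)) if t > 0 else 0.0

# Hypothetical interval observations (lower, upper) in hours;
# (0, 100) represents a left censored observation.
intervals = [(0.0, 100.0), (150.0, 300.0), (200.0, 400.0)]
beta, eta = 1.5, 250.0

# Each interval contributes log[F(upper) - F(lower)] to the log-likelihood.
interval_loglik = sum(math.log(weibull_cdf(u, beta, eta) - weibull_cdf(l, beta, eta))
                      for l, u in intervals)
```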
&lt;br /&gt;
=== The Complete Likelihood Function  ===&lt;br /&gt;
We have now seen that obtaining MLE parameter estimates for different types of data involves incorporating different terms in the likelihood function to account for complete data, right censored data, and left and interval censored data. After including the terms for the different types of data, the likelihood function can be expressed in its complete form: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
    L= &amp;amp; \underset{i=1}{\mathop{\overset{R}{\mathop{\prod }}\,}}\,f({{T}_{i}};{{\theta }_{1}},...,{{\theta }_{k}})\cdot \underset{j=1}{\mathop{\overset{M}{\mathop{\prod }}\,}}\,[1-F({{S}_{j}};{{\theta }_{1}},...,{{\theta }_{k}})]  \\&lt;br /&gt;
    &amp;amp; \cdot \underset{l=1}{\mathop{\overset{P}{\mathop{\prod }}\,}}\,\left\{ F({{I}_{{{l}_{U}}}};{{\theta }_{1}},...,{{\theta }_{k}})-F({{I}_{{{l}_{L}}}};{{\theta }_{1}},...,{{\theta }_{k}}) \right\}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; L\to L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R}},{{S}_{1}},...,{{S}_{M}},{{I}_{1}},...{{I}_{P}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
*&amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the number of units with exact failures &lt;br /&gt;
*&amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is the number of suspended units &lt;br /&gt;
*&amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is the number of units with left censored or interval times-to-failure &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the parameters of the distribution &lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure&lt;br /&gt;
*&amp;lt;math&amp;gt;{{S}_{j}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; time of suspension&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{U}}}}\,\!&amp;lt;/math&amp;gt; is the ending of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{L}}}}\,\!&amp;lt;/math&amp;gt; is the beginning of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;The total number of units is &amp;lt;math&amp;gt;N = R + M + P\,\!&amp;lt;/math&amp;gt;. It should be noted that in this formulation, if any of &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is zero, then the corresponding product term is taken to be one, not zero.&lt;br /&gt;
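The complete log-likelihood can be assembled directly from these three product terms. The sketch below uses the Weibull distribution purely as an example (all data values are hypothetical); each sum corresponds to one product in the expression above, and an empty list contributes zero, matching the convention that an absent product term equals one.&lt;br /&gt;

```python
import math

def weibull_pdf(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

def weibull_sf(t, beta, eta):
    """Reliability (survival) function, 1 - F(t)."""
    return math.exp(-((t / eta) ** beta))

def complete_loglik(beta, eta, failures, suspensions, intervals):
    """ln L over exact failures T_i, suspensions S_j and intervals (I_lL, I_lU)."""
    ll = sum(math.log(weibull_pdf(t, beta, eta)) for t in failures)
    ll += sum(math.log(weibull_sf(s, beta, eta)) for s in suspensions)
    ll += sum(math.log(weibull_sf(lo, beta, eta) - weibull_sf(hi, beta, eta))
              for lo, hi in intervals)            # = F(hi) - F(lo)
    return ll

# Hypothetical mixed data set: R=3 failures, M=2 suspensions, P=1 interval
ll = complete_loglik(1.2, 500.0,
                     failures=[200.0, 450.0, 700.0],
                     suspensions=[800.0, 1000.0],
                     intervals=[(100.0, 300.0)])
```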
&lt;br /&gt;
== Comments on the MLE Method  ==&lt;br /&gt;
The MLE method has many large sample properties that make it attractive for use. It is asymptotically consistent, which means that as the sample size gets larger, the estimates converge to the right values. It is asymptotically efficient, which means that for large samples, it produces the most precise estimates. It is asymptotically unbiased, which means that for large samples, one expects to get the right value on average. The distribution of the estimates themselves is normal, if the sample is large enough, and this is the basis for the usual [[Confidence_Bounds#Fisher_Matrix_Confidence_Bounds|Fisher Matrix Confidence Bounds]] discussed later. These are all excellent large sample properties. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, the sample size necessary to achieve these properties can be quite large: thirty to fifty, or even more than a hundred, exact failure times, depending on the application. With fewer points, the estimates can be badly biased. It is known, for example, that MLE estimates of the shape parameter for the Weibull distribution are badly biased for small sample sizes, and the amount of censoring can increase this effect. This bias can cause major discrepancies in analysis. There are also pathological situations where the asymptotic properties of the MLE do not apply. One of these is estimating the location parameter for the three-parameter Weibull distribution when the shape parameter has a value close to 1. These problems, too, can cause major discrepancies. &lt;br /&gt;
&lt;br /&gt;
However, MLE can handle suspensions and interval data better than rank regression, particularly when dealing with a heavily censored data set with few exact failure times or when the censoring times are unevenly distributed. It can also provide estimates with one or no observed failures, which rank regression cannot do. As a rule of thumb, our recommendation is to use rank regression techniques when the sample sizes are small and without heavy censoring (censoring is discussed in [[Life Data Classification|Life Data Classifications]]). When heavy or uneven censoring is present, when a high proportion of interval data is present and/or when the sample size is sufficient, MLE should be preferred. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Appendix:_Maximum_Likelihood_Estimation_Example|Maximum Likelihood Parameter Estimation Example]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Bayesian Parameter Estimation Methods=&lt;br /&gt;
Up to this point, we have dealt exclusively with what is commonly referred to as classical statistics. In this section, another school of thought in statistical analysis will be introduced, namely Bayesian statistics. The premise of Bayesian statistics (within the context of life data analysis) is to incorporate prior knowledge, along with a given set of current observations, in order to make statistical inferences. The prior information could come from operational or observational data, from previous comparable experiments or from engineering knowledge.  This type of analysis can be particularly useful when there is limited test data for a given design or failure mode but there is a strong prior understanding of the failure rate behavior for that design or mode. By incorporating prior information about the parameter(s), a posterior distribution for the parameter(s) can be obtained and inferences on the model parameters and their functions can be made. This section is intended to give a quick and elementary overview of Bayesian methods, focused primarily on the material necessary for understanding the Bayesian analysis methods available in Weibull++. Extensive coverage of the subject can be found in numerous books dealing with Bayesian statistics.&lt;br /&gt;
&lt;br /&gt;
===Bayes’s Rule===&lt;br /&gt;
Bayes’s rule provides the framework for combining prior information with sample data. In this reference, we apply Bayes’s rule for combining prior information on the assumed distribution&#039;s parameter(s) with sample data in order to make inferences based on the model. The prior knowledge about the parameter(s) is expressed in terms of a distribution &amp;lt;math&amp;gt;\varphi (\theta ),\,\!&amp;lt;/math&amp;gt; called the &#039;&#039;prior distribution&#039;&#039;. The &#039;&#039;posterior&#039;&#039; distribution of &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; given the sample data, using Bayes&#039;s rule, provides the updated information about the parameters &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt;. This is expressed with the following posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta |Data) = \frac{L(Data|\theta )\varphi (\theta )}{\int_{\zeta}^{} L(Data|\theta )\varphi(\theta )d\theta }&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; is a vector of the parameters of the chosen distribution&lt;br /&gt;
*&amp;lt;math&amp;gt;\zeta\,\!&amp;lt;/math&amp;gt; is the range of &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt; L(Data|\theta)\,\!&amp;lt;/math&amp;gt; is the likelihood function based on the chosen distribution and data&lt;br /&gt;
*&amp;lt;math&amp;gt;\varphi(\theta )\,\!&amp;lt;/math&amp;gt; is the prior distribution for each of the parameters&lt;br /&gt;
&lt;br /&gt;
The integral in the Bayes&#039;s rule equation is often referred to as the marginal probability, which is a constant number that can be interpreted as the probability of obtaining the sample data given a prior distribution. Generally, the integral in the Bayes&#039;s rule equation does not have a closed form solution and numerical methods are needed for its solution.&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Bayes&#039;s rule equation, there is a significant difference between classical and Bayesian statistics. First, the idea of prior information does not exist in classical statistics. All inferences in classical statistics are based on the sample data. On the other hand, in the Bayesian framework, prior information constitutes the basis of the theory. Another difference is in the overall approach of making inferences and their interpretation. For example, in Bayesian analysis, the parameters of the distribution to be fitted are the random variables. In reality, there is no distribution fitted to the data in the Bayesian case.&lt;br /&gt;
&lt;br /&gt;
For instance, consider the case where data is obtained from a reliability test. Based on prior experience on a similar product, the analyst believes that the shape parameter of the Weibull distribution has a value between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\beta }_{2}}\,\!&amp;lt;/math&amp;gt; and wants to utilize this information. This can be achieved by using the Bayes theorem. At this point, the analyst is automatically forcing the Weibull distribution as a model for the data and with a shape parameter between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{\beta _2}\,\!&amp;lt;/math&amp;gt;. In this example, the range of values for the shape parameter is the prior distribution, which in this case is Uniform. By applying Bayes&#039;s rule, the posterior distribution of the shape parameter will be obtained. Thus, we end up with a distribution for the parameter rather than an estimate of the parameter, as in classical statistics.&lt;br /&gt;
&lt;br /&gt;
To better illustrate the example, assume that a set of failure data was provided along with a distribution for the shape parameter (i.e., uniform prior) of the Weibull (automatically assuming that the data are Weibull distributed). Based on that, a new distribution (the posterior) for that parameter is then obtained using Bayes&#039;s rule. This posterior distribution of the parameter may or may not resemble in form the assumed prior distribution. In other words, in this example the prior distribution of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; was assumed to be uniform but the posterior is most likely not a uniform distribution.&lt;br /&gt;
&lt;br /&gt;
The question now becomes: what is the value of the shape parameter? What about the reliability and other results of interest? In order to answer these questions, we have to remember that in the Bayesian framework all of these metrics are random variables. Therefore, in order to obtain an estimate, a probability needs to be specified or we can use the expected value of the posterior distribution.&lt;br /&gt;
&lt;br /&gt;
In order to demonstrate the procedure of obtaining results from the posterior distribution, we will rewrite the Bayes&#039;s rule equation for a single parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta_1 |Data) = \frac{L(Data|\theta_1 )\varphi (\theta_1 )}{\int_{\zeta}^{} L(Data|\theta_1 )\varphi(\theta_1 )d\theta_1 }&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The expected value (or mean value) of the parameter &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained using the equation for the mean and the Bayes&#039;s rule equation for single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E({\theta _1}) = {m_{{\theta _1}}} = \int_{\zeta}^{}{\theta _1} \cdot f({\theta _1}|Data)d{\theta _1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative result for &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; would be the median value. Using the equation for the median and the Bayes&#039;s rule equation for a single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{\theta }_{0.5}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation for the median is solved for &amp;lt;math&amp;gt;{\theta _{0.5}}\,\!&amp;lt;/math&amp;gt;, the median value of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Similarly, any other percentile of the posterior &#039;&#039;pdf&#039;&#039; can be calculated and reported. For example, one could calculate the 90th percentile of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;’s posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{{\theta }_{0.9}}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.9\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This calculation will be used in [[Confidence Bounds]] and [[The Weibull Distribution]] for obtaining confidence bounds on the parameter(s).&lt;br /&gt;
&lt;br /&gt;
The next step will be to make inferences on the reliability. Since the parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; is a random variable described by the posterior &#039;&#039;pdf,&#039;&#039; all subsequent functions of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; are distributed random variables as well and are entirely based on the posterior &#039;&#039;pdf&#039;&#039; of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt;. Therefore, expected value, median or other percentile values will also need to be calculated. For example, the expected reliability at time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E[R(T|Data)] = \int_{\zeta}^{} R(T)f(\theta |Data)d{\theta}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, at a given time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, there is a distribution that governs the reliability value at that time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, and by using Bayes&#039;s rule, the expected (or mean) value of the reliability is obtained. Other percentiles of this distribution can also be obtained.&lt;br /&gt;
A similar procedure is followed for other functions of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;, such as failure rate, reliable life, etc.&lt;br /&gt;
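&lt;br /&gt;
The calculations in this section can be reproduced numerically. The following Python sketch is purely illustrative: it assumes a small hypothetical set of complete failure times, a Weibull model whose scale parameter is treated as known (so the shape parameter plays the role of the single parameter in the equations above), and a uniform prior on the shape; the posterior mean, median, 90th percentile and expected reliability are then obtained on a simple grid.&lt;br /&gt;

```python
import math

# Hypothetical complete failure times (hours); the scale parameter eta is
# assumed known, so the Weibull shape beta is the single unknown parameter.
times = [16.0, 34.0, 53.0, 75.0, 93.0]
eta = 60.0

def likelihood(beta):
    # Weibull likelihood of the data for a given shape, with eta fixed
    L = 1.0
    for t in times:
        L *= (beta / eta) * (t / eta) ** (beta - 1.0) * math.exp(-((t / eta) ** beta))
    return L

# Uniform prior: the shape is believed to lie between beta1 and beta2
beta1, beta2 = 0.5, 5.0
n = 4000
grid = [beta1 + (beta2 - beta1) * (i + 0.5) / n for i in range(n)]
w = (beta2 - beta1) / n                            # grid cell width

like = [likelihood(b) for b in grid]
marginal = sum(like) * w                           # the normalizing integral
post = [L / marginal for L in like]                # posterior pdf f(beta|Data)

mean_beta = sum(b * p for b, p in zip(grid, post)) * w   # E(beta|Data)

cdf, acc = [], 0.0                                 # posterior cdf, for percentiles
for p in post:
    acc += p * w
    cdf.append(acc)
median_beta = next(b for b, c in zip(grid, cdf) if c >= 0.5)
p90_beta = next(b for b, c in zip(grid, cdf) if c >= 0.9)

# Expected reliability at T hours: integrate R(T) against the posterior
T = 40.0
exp_rel = sum(math.exp(-((T / eta) ** b)) * p for b, p in zip(grid, post)) * w
```

Note that the result is a distribution for the shape parameter: the mean, median and 90th percentile are all read from the posterior rather than from a single point estimate.&lt;br /&gt;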
&lt;br /&gt;
===Prior Distributions===&lt;br /&gt;
Prior distributions play a very important role in Bayesian statistics. They are essentially the basis of Bayesian analysis. Different types of prior distributions exist, namely &#039;&#039;informative&#039;&#039; and &#039;&#039;non-informative&#039;&#039;. Non-informative prior distributions (a.k.a. &#039;&#039;vague&#039;&#039;, &#039;&#039;flat&#039;&#039; and &#039;&#039;diffuse&#039;&#039;) are distributions that have no population basis and play a minimal role in the posterior distribution. The idea behind the use of non-informative prior distributions is to make inferences that are not greatly affected by external information, or to proceed when external information is not available. The uniform distribution is frequently used as a non-informative prior.&lt;br /&gt;
&lt;br /&gt;
On the other hand, informative priors have a stronger influence on the posterior distribution. The influence of the prior distribution on the posterior is related to the sample size of the data and the form of the prior. Generally speaking, large sample sizes are required to modify strong priors, whereas weak priors are overwhelmed by even relatively small sample sizes. Informative priors are typically obtained from past data.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56797</id>
		<title>Parameter Estimation</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56797"/>
		<updated>2014-12-03T20:57:50Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Rank Adjustment Method for Right Censored Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|4|Parameter Estimation}}&lt;br /&gt;
The term &#039;&#039;parameter estimation&#039;&#039; refers to the process of using sample data (in reliability engineering, usually times-to-failure or success data) to estimate the parameters of the selected distribution. Several parameter estimation methods are available. This section presents an overview of the available methods used in life data analysis. More specifically, we start with the relatively simple method of Probability Plotting and continue with the more sophisticated methods of Rank Regression (or Least Squares), Maximum Likelihood Estimation and Bayesian Estimation Methods.&lt;br /&gt;
&lt;br /&gt;
=Probability Plotting=&lt;br /&gt;
The least mathematically intensive method for parameter estimation is the method of probability plotting. As the term implies, probability plotting involves a physical plot of the data on specially constructed &#039;&#039;probability plotting paper&#039;&#039;. This method is easily implemented by hand, given that one can obtain the appropriate probability plotting paper.&lt;br /&gt;
&lt;br /&gt;
The method of probability plotting takes the &#039;&#039;cdf&#039;&#039; of the distribution and attempts to linearize it by employing a specially constructed paper. The following sections illustrate the steps in this method using the 2-parameter Weibull distribution as an example. The steps are:&lt;br /&gt;
&lt;br /&gt;
*Linearizing the unreliability function&lt;br /&gt;
*Constructing the probability plotting paper&lt;br /&gt;
*Determining the X and Y positions of the plot points&lt;br /&gt;
&lt;br /&gt;
The plot can then be used to read any particular time or reliability/unreliability value of interest.&lt;br /&gt;
&lt;br /&gt;
==Linearizing the Unreliability Function==&lt;br /&gt;
&lt;br /&gt;
In the case of the 2-parameter Weibull, the &#039;&#039;cdf&#039;&#039; (also the unreliability &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt;) is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(t)=Q(t)=1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function can then be linearized (i.e., put in the common &amp;lt;math&amp;gt;y = mx + b\,\!&amp;lt;/math&amp;gt; form) as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 Q(t)= &amp;amp;  1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}  \\&lt;br /&gt;
  \ln (1-Q(t))= &amp;amp; \ln \left[ {e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}} \right]  \\&lt;br /&gt;
  \ln (1-Q(t))=&amp;amp; -\left(\tfrac{t}{\eta}\right)^{\beta}  \\&lt;br /&gt;
  \ln ( -\ln (1-Q(t)))= &amp;amp; \beta \left(\ln \left( \frac{t}{\eta }\right)\right) \\&lt;br /&gt;
  \ln \left( \ln \left( \frac{1}{1-Q(t)}\right) \right) = &amp;amp; \beta \ln t -\beta \ln (\eta )  \\&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then by setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x=\ln \left( t \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equation can then be rewritten as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\beta x-\beta \ln \left( \eta  \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is now a linear equation with a slope of: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
m = \beta&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and an intercept of:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=-\beta \cdot \ln (\eta)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
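&lt;br /&gt;
This linearization can be verified numerically. The following Python sketch (with arbitrary illustrative values for the parameters) checks that points transformed from the cdf fall exactly on the line with the slope and intercept given above.&lt;br /&gt;

```python
import math

beta, eta = 2.0, 100.0    # illustrative Weibull parameters

for t in [10.0, 50.0, 100.0, 250.0]:
    Q = 1.0 - math.exp(-((t / eta) ** beta))   # unreliability at time t
    y = math.log(math.log(1.0 / (1.0 - Q)))    # transformed y position
    x = math.log(t)                            # transformed x position
    line = beta * x - beta * math.log(eta)     # y = beta*x - beta*ln(eta)
    assert abs(y - line) < 1e-9
```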
&lt;br /&gt;
==Constructing the Paper==&lt;br /&gt;
The next task is to construct the Weibull probability plotting paper with the appropriate y and x axes. The x-axis transformation is simply logarithmic. The y-axis is a bit more complex, requiring a double log reciprocal transformation, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left(\ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt; is the unreliability. &lt;br /&gt;
&lt;br /&gt;
Such papers have been created by different vendors and are called &#039;&#039;probability plotting papers&#039;&#039;. ReliaSoft&#039;s reliability engineering resource website at www.weibull.com has different plotting papers available for [http://www.weibull.com/GPaper/index.htm download]. &lt;br /&gt;
&lt;br /&gt;
[[Image:WeibullPaper2C.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
To illustrate, consider the following probability plot on a slightly different type of Weibull probability paper. &lt;br /&gt;
&lt;br /&gt;
[[Image:different_weibull_paper.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
This paper is constructed based on the mentioned y and x transformations, where the y-axis represents unreliability and the x-axis represents time. Both of these values must be known for each time-to-failure point we want to plot. &lt;br /&gt;
&lt;br /&gt;
Then, given the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; value for each point, the points can easily be put on the plot. Once the points have been placed on the plot, the best possible straight line is drawn through these points. Once the line has been drawn, the slope of the line can be obtained (some probability papers include a slope indicator to simplify this calculation). This is the parameter &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;, which is the value of the slope. To determine the scale parameter, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; (also called the &#039;&#039;characteristic life&#039;&#039;), one reads the time from the x-axis corresponding to &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Note that at:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Q(t=\eta)= &amp;amp; 1-{{e}^{-{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}} \\ &lt;br /&gt;
  = &amp;amp; 1-{{e}^{-1}} \\ &lt;br /&gt;
  = &amp;amp; 0.632 \\ &lt;br /&gt;
  = &amp;amp; 63.2%  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, if we enter the &#039;&#039;y&#039;&#039; axis at &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;, the corresponding value of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; will be equal to &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;. Thus, using this simple methodology, the parameters of the Weibull distribution can be estimated.&lt;br /&gt;
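&lt;br /&gt;
The 63.2% property is easy to check numerically: at &amp;lt;math&amp;gt;t=\eta\,\!&amp;lt;/math&amp;gt; the exponent equals 1 regardless of the shape parameter, which is why the characteristic life can always be read at that level. A short illustrative Python sketch:&lt;br /&gt;

```python
import math

eta = 75.0    # arbitrary characteristic life
for beta in [0.5, 1.0, 2.0, 3.5]:
    # (t/eta)^beta = 1 at t = eta for every beta, so Q(eta) = 1 - e^-1
    Q_at_eta = 1.0 - math.exp(-((eta / eta) ** beta))
    assert abs(Q_at_eta - 0.632) < 0.001
```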
&lt;br /&gt;
==Determining the X and Y Position of the Plot Points==&lt;br /&gt;
The points on the plot represent our data or, more specifically, our times-to-failure data. If, for example, we tested four units that failed at 10, 20, 30 and 40 hours, then we would use these times as our &#039;&#039;x&#039;&#039; values or time values. &lt;br /&gt;
&lt;br /&gt;
Determining the appropriate &#039;&#039;y&#039;&#039; plotting positions, or the unreliability values, is a little more complex. To determine the &#039;&#039;y&#039;&#039; plotting positions, we must first determine a value indicating the corresponding unreliability for that failure. In other words, we need to obtain the cumulative percent failed for each time-to-failure. For example, the cumulative percent failed by 10 hours may be 25%, by 20 hours 50%, and so forth. This is a simple method illustrating the idea. The problem with this simple method is the fact that the 100% point is not defined on most probability plots; thus, an alternative and more robust approach must be used. The most widely used method of determining this value is the method of obtaining the &#039;&#039;median rank&#039;&#039; for each failure, as discussed next.&lt;br /&gt;
&lt;br /&gt;
===Median Ranks ===&lt;br /&gt;
The Median Ranks method is used to obtain an estimate of the unreliability for each failure. The median rank is the value that the true probability of failure, &amp;lt;math&amp;gt;Q({{T}_{j}})\,\!&amp;lt;/math&amp;gt;, should have at the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure out of a sample of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units at the 50% confidence level. &lt;br /&gt;
&lt;br /&gt;
The rank can be found for any percentage point, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, greater than zero and less than one, by solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;. This represents the rank, or unreliability estimate, for the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure in the following equation for the cumulative binomial: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the sample size and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; the order number. &lt;br /&gt;
&lt;br /&gt;
The median rank is obtained by solving this equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  at &amp;lt;math&amp;gt;P = 0.50\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;0.50=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; and we have four failures, we would solve the median rank equation for the value of &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; four times; once for each failure with &amp;lt;math&amp;gt;j= 1, 2, 3 \text{ and }4\,\!&amp;lt;/math&amp;gt;. This result can then be used as the unreliability estimate for each failure or the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; plotting position. (See also [[The Weibull Distribution|The Weibull Distribution]]&amp;amp;nbsp;for a step-by-step example of this method.) The solution of the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; requires the use of numerical methods.&lt;br /&gt;
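&lt;br /&gt;
One such numerical solution can be sketched in a few lines. The following illustrative Python code evaluates the cumulative binomial directly and finds Z by bisection (the left-hand side of the equation is increasing in Z), using N = 4 as in the example.&lt;br /&gt;

```python
from math import comb

def median_rank(N, j, P=0.50):
    """Solve P = sum_{k=j..N} C(N,k) Z^k (1-Z)^(N-k) for Z by bisection;
       the left-hand side is increasing in Z."""
    def cum_binom(Z):
        return sum(comb(N, k) * Z ** k * (1.0 - Z) ** (N - k)
                   for k in range(j, N + 1))
    lo, hi = 0.0, 1.0
    while hi - lo > 1e-12:
        mid = 0.5 * (lo + hi)
        if cum_binom(mid) < P:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Median ranks (the y plotting positions) for the four failures with N = 4
ranks = [median_rank(4, j) for j in (1, 2, 3, 4)]
```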
&lt;br /&gt;
===Beta and F Distributions Approach===&lt;br /&gt;
A more straightforward and easier method of estimating median ranks is by applying two transformations to the cumulative binomial equation, first to the beta distribution and then to the F distribution, resulting in [[Appendix:_Life_Data_Analysis_References|[12, 13]]]: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   MR &amp;amp; = &amp;amp; \tfrac{1}{1+\tfrac{N-j+1}{j}{{F}_{0.50;m;n}}}  \\&lt;br /&gt;
   m &amp;amp; = &amp;amp; 2(N-j+1)  \\&lt;br /&gt;
   n &amp;amp; = &amp;amp; 2j  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{F}_{0.50;m;n}}\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution at the 0.50 point, with &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; degrees of freedom, for failure &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; out of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units.&lt;br /&gt;
&lt;br /&gt;
=== Benard&#039;s Approximation for Median Ranks  ===&lt;br /&gt;
Another quick, and less accurate, approximation of the median ranks is also given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MR = \frac{{j - 0.3}}{{N + 0.4}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This approximation of the median ranks is also known as &#039;&#039;Benard&#039;s approximation&#039;&#039;.&lt;br /&gt;
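&lt;br /&gt;
For the N = 4 example above, Benard&#039;s approximation can be compared against the exact median ranks (the first and last of which have closed forms); the agreement is within a fraction of a percent. Illustrative Python:&lt;br /&gt;

```python
N = 4
benard = [(j - 0.3) / (N + 0.4) for j in range(1, N + 1)]

# The first and last exact median ranks have closed forms, obtained by
# solving 1-(1-Z)^N = 0.5 and Z^N = 0.5 respectively.
exact_first = 1.0 - 0.5 ** (1.0 / N)
exact_last = 0.5 ** (1.0 / N)
assert abs(benard[0] - exact_first) < 0.001
assert abs(benard[-1] - exact_last) < 0.001
```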
&lt;br /&gt;
===Kaplan-Meier===&lt;br /&gt;
The Kaplan-Meier estimator (also known as the &#039;&#039;product limit estimator&#039;&#039;) is used as an alternative to the median ranks method for calculating the estimates of the unreliability for probability plotting purposes. The equation of the estimator is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{F}({{t}_{i}})=1-\underset{j=1}{\overset{i}{\mathop \prod }}\,\frac{{{n}_{j}}-{{r}_{j}}}{{{n}_{j}}},\text{ }i=1,...,m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  m =  &amp;amp; {\text{total number of data points}} \\ &lt;br /&gt;
  n =  &amp;amp; {\text{the total number of units}} \\ &lt;br /&gt;
  {n_i} =  &amp;amp; n - \sum_{j = 0}^{i - 1}{s_j} - \sum_{j = 0}^{i - 1}{r_j}, \text{i = 1,...,m }\\ &lt;br /&gt;
  {r_j} =  &amp;amp; {\text{ number of failures in the }}{j^{th}}{\text{ data group, and}} \\ &lt;br /&gt;
  {s_j} =  &amp;amp; {\text{number of surviving units in the }}{j^{th}}{\text{ data group}} \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
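&lt;br /&gt;
A direct implementation clarifies the bookkeeping in the estimator. The following Python sketch uses hypothetical grouped data, where each data group removes some failures and some suspensions from the units still on test.&lt;br /&gt;

```python
def kaplan_meier(n, groups):
    """groups: list of (r_j, s_j) = (failures, suspensions) per data group,
       in time order, out of n total units. Returns the unreliability
       estimate Fhat(t_i) at each group."""
    Fhat = []
    at_risk = n          # n_j: units still on test entering the group
    survival = 1.0
    for r, s in groups:
        survival *= (at_risk - r) / at_risk   # product-limit update
        Fhat.append(1.0 - survival)
        at_risk -= r + s                      # remove failures and suspensions
    return Fhat

# Hypothetical data: 10 units on test; e.g., the first group has
# 2 failures and 1 suspension.
F = kaplan_meier(10, [(2, 1), (1, 0), (2, 2)])
```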
&lt;br /&gt;
== Probability Plotting Example  ==&lt;br /&gt;
This same methodology can be applied to other distributions with &#039;&#039;cdf&#039;&#039; equations that can be linearized. Different probability papers exist for each distribution, because different distributions have different &#039;&#039;cdf&#039;&#039; equations. ReliaSoft&#039;s software tools automatically create these plots for you. Special scales on these plots allow you to derive the parameter estimates directly from the plots, similar to the way &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; were obtained from the Weibull probability plot. The following example demonstrates the method again, this time using the 1-parameter exponential distribution.&lt;br /&gt;
&lt;br /&gt;
{{:Probability Plotting Example}}&lt;br /&gt;
&lt;br /&gt;
== Comments on the Probability Plotting Method ==&lt;br /&gt;
Besides the most obvious drawback to probability plotting, which is the amount of effort required, manual probability plotting does not always yield consistent results. Two people plotting a straight line through a set of points will not always draw this line the same way, and thus will come up with slightly different results. This method was used primarily before the widespread availability of computers that could easily perform the calculations for more complicated parameter estimation methods, such as the least squares and maximum likelihood methods.&lt;br /&gt;
&lt;br /&gt;
= Least Squares (Rank Regression)  =&lt;br /&gt;
Using the idea of probability plotting, regression analysis mathematically fits the best straight line to a set of points, in an attempt to estimate the parameters. Essentially, this is a mathematically based version of the probability plotting method discussed previously. &lt;br /&gt;
&lt;br /&gt;
The method of linear least squares is used for all regression analysis performed by Weibull++, except for the cases of the 3-parameter Weibull, mixed Weibull, gamma and generalized gamma distributions, where a non-linear regression technique is employed. The terms &#039;&#039;linear regression&#039;&#039; and &#039;&#039;least squares&#039;&#039; are used synonymously in this reference. In Weibull++, the term &#039;&#039;rank regression&#039;&#039; is used instead of least squares, or linear regression, because the regression is performed on the rank values, more specifically, the median rank values (represented on the y-axis). The method of least squares requires that a straight line be fitted to a set of data points such that the sum of the squares of the distances of the points to the fitted line is minimized. This minimization can be performed in either the vertical or horizontal direction. If the regression is on &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the horizontal deviations from the points to the line are minimized. If the regression is on Y, then the line is fitted so that the vertical deviations from the points to the line are minimized. This is illustrated in the following figure. &lt;br /&gt;
&lt;br /&gt;
[[Image:minimizingdistance.png|center|500px]]&lt;br /&gt;
&lt;br /&gt;
=== Rank Regression on Y  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;-values are known exactly. Then, according to the &#039;&#039;least squares principle,&#039;&#039; which minimizes the vertical distance between the data points and the straight line fitted to the data, the best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;y=\hat{a}+\hat{b}x\,\!&amp;lt;/math&amp;gt; (where the caret symbol (&amp;lt;math&amp;gt;\hat{ }\,\!&amp;lt;/math&amp;gt;) indicates that the value is an estimate) such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\sum\limits_{i=1}^{N}{{{\left( \hat{a}+\hat{b}{{x}_{i}}-{{y}_{i}} \right)}^{2}}=\underset{a,b}{\mathop{\min }}\,\sum\limits_{i=1}^{N}{{{\left( a+b{{x}_{i}}-{{y}_{i}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and where &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat b\,\!&amp;lt;/math&amp;gt; are the &#039;&#039;least squares estimates&#039;&#039; of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}=\bar{y}-\hat{b}\bar{x}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
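&lt;br /&gt;
The two expressions above translate directly into code. The following Python sketch uses illustrative data pairs; in an actual analysis, the x values would be transformed times and the y values transformed median ranks.&lt;br /&gt;

```python
def rank_regression_y(x, y):
    """Least squares regression on Y: returns (a_hat, b_hat) for y = a + b*x,
       using the closed-form expressions above."""
    N = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    b_hat = (sxy - sx * sy / N) / (sxx - sx * sx / N)
    a_hat = sy / N - b_hat * sx / N
    return a_hat, b_hat

# Illustrative points lying exactly on y = 0.5 + 2x are recovered exactly.
a, b = rank_regression_y([1.0, 2.0, 3.0, 4.0], [2.5, 4.5, 6.5, 8.5])
```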
&lt;br /&gt;
=== Rank Regression on X  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the y-values are known exactly. The same least squares principle is applied, but this time minimizing the horizontal distance between the data points and the straight line fitted to the data. The best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;x=\widehat{a}+\widehat{b}y\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(\widehat{a}+\widehat{b}{{y}_{i}}-{{x}_{i}})}^{2}}=\underset{a,b}{\mathop{\min }}\,\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(a+b{{y}_{i}}-{{x}_{i}})}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat b\,\!&amp;lt;/math&amp;gt; are the least squares estimates of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}=\bar{x}-\hat{b}\bar{y}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
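&lt;br /&gt;
As an illustration, the two sets of estimator equations can be evaluated directly. The following Python sketch implements both rank regression on Y and rank regression on X for an arbitrary set of (x, y) pairs; the data values are hypothetical and for illustration only.&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative (x, y) pairs -- in life data analysis these would be
# transformed times and transformed median ranks (hypothetical values here).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.1, 1.9, 3.2, 3.9, 5.1]

def rank_regression_on_y(x, y):
    """Least squares minimizing vertical distances: y = a + b*x."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    b = (sxy - sx * sy / n) / (sxx - sx * sx / n)
    a = sy / n - b * sx / n  # a-hat = ybar - b-hat * xbar
    return a, b

def rank_regression_on_x(x, y):
    """Least squares minimizing horizontal distances: x = a + b*y."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    syy = sum(yi * yi for yi in y)
    b = (sxy - sx * sy / n) / (syy - sy * sy / n)
    a = sx / n - b * sy / n  # a-hat = xbar - b-hat * ybar
    return a, b

a_y, b_y = rank_regression_on_y(xs, ys)
a_x, b_x = rank_regression_on_x(xs, ys)
```
Note that the two fits generally differ: RRY minimizes vertical distances while RRX minimizes horizontal ones, so the two fitted lines coincide only when the data are perfectly linear.&lt;br /&gt;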
&lt;br /&gt;
The corresponding relations for determining the parameters for specific distributions (i.e., Weibull, exponential, etc.), are presented in the chapters covering that distribution.&lt;br /&gt;
&lt;br /&gt;
=== Correlation Coefficient  ===&lt;br /&gt;
The correlation coefficient, usually denoted by &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt;, is a measure of how well the linear regression model fits the data. In the case of life data analysis, it is a measure of the strength of the linear relation (correlation) between the median ranks and the data. The population correlation coefficient is defined as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\rho =\frac{{{\sigma }_{xy}}}{{{\sigma }_{x}}{{\sigma }_{y}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\sigma}_{xy}} = \,\!&amp;lt;/math&amp;gt; covariance of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\sigma}_{x}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{\sigma}_{y}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The estimator of &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt; is the sample correlation coefficient, &amp;lt;math&amp;gt;\hat{\rho }\,\!&amp;lt;/math&amp;gt;, given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\sqrt{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N} \right)\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The range of &amp;lt;math&amp;gt;\hat \rho \,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;-1\le \hat{\rho }\le 1\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
[[Image:correlationcoeffficient.png|center|500px]] &lt;br /&gt;
&lt;br /&gt;
The closer the value is to &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt;, the better the linear fit. Note that +1 indicates a perfect fit (the paired values (&amp;lt;math&amp;gt;{{x}_{i}},{{y}_{i}}\,\!&amp;lt;/math&amp;gt;) lie on a straight line) with a positive slope, while -1 indicates a perfect fit with a negative slope. A correlation coefficient value of zero would indicate that the data are randomly scattered and have no pattern or correlation in relation to the regression line model.&lt;br /&gt;
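&lt;br /&gt;
A direct evaluation of the sample correlation coefficient, following the equation above, can be sketched in a few lines of Python (the data values are illustrative only).&lt;br /&gt;
&lt;br /&gt;
```python
import math

def sample_correlation(x, y):
    """Sample correlation coefficient rho-hat, per the equation above."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    syy = sum(b * b for b in y)
    num = sxy - sx * sy / n
    den = math.sqrt((sxx - sx * sx / n) * (syy - sy * sy / n))
    return num / den

# A perfect positive linear relation gives rho-hat = +1,
# a perfect negative one gives rho-hat = -1.
rho_pos = sample_correlation([1, 2, 3], [2, 4, 6])
rho_neg = sample_correlation([1, 2, 3], [6, 4, 2])
```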
&lt;br /&gt;
===Comments on the Least Squares Method===&lt;br /&gt;
The least squares estimation method is quite good for functions that can be linearized. For these distributions, the calculations are relatively easy and straightforward, having closed-form solutions that can readily yield an answer without having to resort to numerical techniques or tables. Furthermore, this technique provides a good measure of the goodness-of-fit of the chosen distribution in the correlation coefficient. Least squares is generally best used with data sets containing complete data, that is, data consisting only of single times-to-failure with no censored or interval data. (See [[Life Data Classification]] for information about the different data types, including complete, left censored, right censored (or suspended) and interval data.) &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Least Squares/Rank Regression Equations]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Rank Methods for Censored Data=&lt;br /&gt;
All available data should be considered in the analysis of times-to-failure data. This includes the case when a particular unit in a sample has been removed from the test prior to failure. An item, or unit, which is removed from a reliability test prior to failure, or a unit which is in the field and is still operating at the time the reliability of these units is to be determined, is called a &#039;&#039;suspended item&#039;&#039;, &#039;&#039;right censored observation&#039;&#039; or &#039;&#039;right censored data point&#039;&#039;. Suspended items analysis would also be considered when: &lt;br /&gt;
&lt;br /&gt;
#We need to make an analysis of the available results before test completion. &lt;br /&gt;
#The failure modes which are occurring are different than those anticipated and such units are withdrawn from the test. &lt;br /&gt;
#We need to analyze a single mode and the actual data set comprises multiple modes. &lt;br /&gt;
#A &#039;&#039;warranty analysis&#039;&#039; is to be made of all units in the field (non-failed and failed units). The non-failed units are considered to be suspended items (or right censored).&lt;br /&gt;
&lt;br /&gt;
This section describes the rank methods that are used in both probability plotting and least squares (rank regression) to handle censored data. This includes:&lt;br /&gt;
&lt;br /&gt;
*The rank adjustment method for right censored (suspension) data.&lt;br /&gt;
*ReliaSoft&#039;s alternative ranking method for interval censored data.&lt;br /&gt;
=== Rank Adjustment Method for Right Censored Data ===&lt;br /&gt;
When using the probability plotting or least squares (rank regression) method for data sets where some of the units did not fail, or were suspended, we need to adjust their probability of failure, or unreliability. As discussed before, estimates of the unreliability for complete data are obtained using the median ranks approach. The following methodology illustrates how adjusted median ranks are computed to account for right censored data. To better illustrate the methodology, consider the following example in Kececioglu [[Appendix:_Life_Data_Analysis_References|&amp;amp;nbsp;[20]]] where five items are tested resulting in three failures and two suspensions. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Item Number &amp;lt;br&amp;gt;(Position) &lt;br /&gt;
! Failure (F) &amp;lt;br&amp;gt;or Suspension (S) &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 5,100&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,500&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 15,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 22,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 40,000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The methodology for plotting suspended items involves adjusting the rank positions and plotting the data based on new positions, determined by the location of the suspensions. If we consider these five units, the following methodology would be used: The first item must be the first failure; hence, it is assigned failure order number &amp;lt;math&amp;gt;j = 1\,\!&amp;lt;/math&amp;gt;. The actual failure order number (or position) of the second failure, &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; is in doubt. It could either be in position 2 or in position 3. Had &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; not been withdrawn from the test at 9,500 hours, it could have operated successfully past 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 2. Alternatively, &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; could also have failed before 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 3. In this case, the failure order number for &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; will be some number between 2 and 3. To determine this number, consider the following: &lt;br /&gt;
&lt;br /&gt;
We can find the number of ways the second failure can occur in either order number 2 (position 2) or order number 3 (position 3). The possible ways are listed next. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;6&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 2 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 3&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 4 &lt;br /&gt;
| 5 &lt;br /&gt;
| 6 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be seen that &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; can occur in the second position six ways and in the third position two ways. The most probable position is the average of these possible ways, or the &#039;&#039;mean order number&#039;&#039; (MON), given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}=MO{{N}_{2}}=\frac{(6\times 2)+(2\times 3)}{6+2}=2.25\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Using the same logic on the third failure, it can be located in position numbers 3, 4 and 5 in the possible ways listed next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 3 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 4&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 5&lt;br /&gt;
|-&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Then, the mean order number for the third failure (item 5) is: &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=\frac{(2\times 3)+(3\times 4)+(3\times 5)}{2+3+3}=4.125\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Once the mean order number for each failure has been established, we obtain the median rank positions for these failures at their mean order number. Specifically, we obtain the median rank of the order numbers 1, 2.25 and 4.125 out of a sample size of 5, as given next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Plotting Positions for the Failures (Sample Size=5)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Failure Number &lt;br /&gt;
! MON &lt;br /&gt;
! Median Rank Position(%)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1:&amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1 &lt;br /&gt;
| 13%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2:&amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 2.25 &lt;br /&gt;
| 36%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3:&amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 4.125 &lt;br /&gt;
| 71%&lt;br /&gt;
|}&lt;br /&gt;
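&lt;br /&gt;
The median rank positions in the table above can be closely reproduced with Benard's approximation, &amp;lt;math&amp;gt;MR\approx (MON-0.3)/(N+0.4)\,\!&amp;lt;/math&amp;gt;, evaluated at the (possibly fractional) mean order numbers. The sketch below uses this approximation rather than the exact median ranks obtained from the incomplete beta function.&lt;br /&gt;
&lt;br /&gt;
```python
def benard_median_rank(mon, n):
    """Benard's approximation to the median rank for (mean) order number mon."""
    return (mon - 0.3) / (n + 0.4)

# Mean order numbers from the example above, sample size N = 5.
ranks = [round(100 * benard_median_rank(mon, 5)) for mon in (1, 2.25, 4.125)]
# ranks -> [13, 36, 71], matching the table
```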
&lt;br /&gt;
&lt;br /&gt;
Once the median rank values have been obtained, the probability plotting analysis is identical to that presented before. As you might have noticed, this methodology is rather laborious. Other techniques and shortcuts have been developed over the years to streamline this procedure. For more details on this method, see Kececioglu [[Appendix:_Life_Data_Analysis_References|[20]]]. Here, we will introduce one of these methods. This method calculates MON using an increment, &#039;&#039;I&#039;&#039;, which is defined by:&lt;br /&gt;
&lt;br /&gt;
:: &amp;lt;math&amp;gt;{{I}_{i}}=\frac{N+1-PMON}{1+NIBPSS}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Where&lt;br /&gt;
* N = the sample size, or total number of items in the test&lt;br /&gt;
* PMON = previous mean order number&lt;br /&gt;
* NIBPSS = the number of items beyond the present suspended set&lt;br /&gt;
* i = the ith failure item&lt;br /&gt;
&lt;br /&gt;
MON is given as:&lt;br /&gt;
 &lt;br /&gt;
::&amp;lt;math&amp;gt;{MON}_{i}={MON}_{i-1}+{{I}_{i}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s calculate the previous example using the method.&lt;br /&gt;
&lt;br /&gt;
For F1:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{1}}=MO{{N}_{0}}+{{I}_{1}}=\frac{5+1-0}{1+5}=1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For F2:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{2}}=MO{{N}_{1}}+{{I}_{2}}=1+\frac{5+1-1}{1+3}=2.25&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For F3:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=MO{{N}_{2}}+{{I}_{3}}=2.25+\frac{5+1-2.25}{1+1}=4.125&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The MON obtained for each failure item via this method is the same as that obtained from the first method, so the median rank values will also be the same.&lt;br /&gt;
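&lt;br /&gt;
As a sketch, the increment method can be automated as follows. The function assumes that NIBPSS can be computed as the number of items from the current failure onward, which reproduces the example above; items are passed in order of increasing life, marked &#039;F&#039; for a failure and &#039;S&#039; for a suspension.&lt;br /&gt;
&lt;br /&gt;
```python
def mean_order_numbers(states):
    """Mean order numbers (MON) for failures via the increment method.

    states lists the items in order of increasing life, 'F' for a failure
    and 'S' for a suspension. NIBPSS is taken as the number of items from
    the current failure onward (an assumption that matches the example).
    """
    n = len(states)
    mons = []
    pmon = 0.0  # previous mean order number, MON_0 = 0
    for pos, state in enumerate(states, start=1):
        if state == 'F':
            nibpss = n - pos + 1
            pmon += (n + 1 - pmon) / (1 + nibpss)  # increment I_i
            mons.append(pmon)
    return mons

mons = mean_order_numbers(['F', 'S', 'F', 'S', 'F'])  # [1.0, 2.25, 4.125]
```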
 &lt;br /&gt;
==== Shortfalls of the Rank Adjustment Method  ====&lt;br /&gt;
Even though the rank adjustment method is the most widely used method for the analysis of suspended items, it has a notable shortcoming. As you may have noticed, only the position where the failure occurred is taken into account, and not the exact time-to-suspension. For example, this methodology would yield the exact same results for the next two cases. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 1 &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of item, hr &lt;br /&gt;
! Item number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000 &lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,100 &lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,700&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,200 &lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,800&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,300 &lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,900&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000 &lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This shortfall is significant when the number of failures is small and the number of suspensions is large and not spread uniformly between failures, as with these data. In cases like this, it is highly recommended to use maximum likelihood estimation (MLE) to estimate the parameters instead of using least squares, because MLE does not look at ranks or plotting positions, but rather considers each unique time-to-failure or suspension. For the data given above, the results are as follows. The estimated parameters using the method just described are the same for both cases (1 and 2): &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.81}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{11,417 hr}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, the MLE results for Case 1 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{1}\text{.33}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{6,900 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the MLE results for Case 2 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.9337}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{21,348 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we can see, there is a sizable difference between the results obtained using MLE and those obtained using regression. The results for both cases are identical when using the regression estimation technique, because regression considers only the positions of the suspensions. The MLE results are quite different for the two cases, with the second case having a much larger value of &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt;, which is due to the higher values of the suspension times in Case 2. This is because the maximum likelihood technique, unlike rank regression, considers the values of the suspensions when estimating the parameters. This is illustrated in the [[Parameter_Estimation#Maximum_Likelihood_Estimation_.28MLE.29|discussion of MLE]] given below.&lt;br /&gt;
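&lt;br /&gt;
The Case 1 MLE results above can be reproduced numerically. For the two-parameter Weibull distribution, setting the partial derivatives of the log-likelihood to zero reduces to a single equation in &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;, which can be solved by bisection; &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; then follows in closed form. A minimal pure-Python sketch:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def weibull_mle(failures, suspensions):
    """Weibull MLE for failures plus right censored data (pure Python).

    Eliminating eta from the score equations leaves one equation in beta,
    solved here by bisection (g is strictly increasing in beta).
    """
    times = failures + suspensions
    r = len(failures)
    mean_ln_f = sum(math.log(t) for t in failures) / r

    def g(beta):
        s = sum(t ** beta for t in times)
        s_ln = sum(t ** beta * math.log(t) for t in times)
        return s_ln / s - 1.0 / beta - mean_ln_f

    lo, hi = 0.05, 20.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    beta = 0.5 * (lo + hi)
    eta = (sum(t ** beta for t in times) / r) ** (1.0 / beta)
    return beta, eta

# Case 1 data from the table above:
beta, eta = weibull_mle([1000.0, 10000.0], [1100.0, 1200.0, 1300.0])
# beta is about 1.33 and eta about 6,900 hr, as quoted for Case 1
```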
&lt;br /&gt;
== ReliaSoft&#039;s Ranking Method (RRM) for Interval Censored Data==&lt;br /&gt;
When analyzing interval data, it is commonplace to assume that the actual failure time occurred at the midpoint of the interval. To be more conservative, you can use the starting point of the interval or you can use the end point of the interval to be most optimistic. Weibull++ allows you to employ ReliaSoft&#039;s ranking method (RRM) when analyzing interval data. Using an iterative process, this ranking method is an improvement over the standard ranking method (SRM). For more details on this method see [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|ReliaSoft&#039;s Ranking Method]].&lt;br /&gt;
&lt;br /&gt;
= Maximum Likelihood Estimation (MLE) = &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM OTHER WIKI PAGES. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
From a statistical point of view, the maximum likelihood estimation method is, with some exceptions, considered to be the most robust of the parameter estimation techniques discussed here. The method presented in this section is for complete data (i.e., data consisting only of times-to-failure). The analyses for [[Parameter_Estimation#MLE_for_Right_Censored_Data|right censored (suspension) data]] and for [[Parameter_Estimation#MLE_for_Interval_and_Left_Censored_Data|interval or left censored data]] are discussed in the following sections.&lt;br /&gt;
&lt;br /&gt;
The basic idea behind MLE is to obtain the most likely values of the parameters, for a given distribution, that will best describe the data. As an example, consider the following data (-3, 0, 4) and assume that you are trying to estimate the mean of the data. Now, if you have to choose the most likely value for the mean from -5, 1 and 10, which one would you choose? In this case, the most likely value is 1 (given your limit on choices). Similarly, under MLE, one determines the most likely values for the parameters of the assumed distribution. It is mathematically formulated as follows. &lt;br /&gt;
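&lt;br /&gt;
The toy example can be made concrete. Assuming a normal model with a fixed standard deviation (&amp;lt;math&amp;gt;\sigma =1\,\!&amp;lt;/math&amp;gt; here; the particular value does not change the ranking of the candidates), the candidate mean with the largest likelihood for (-3, 0, 4) is indeed 1:&lt;br /&gt;
&lt;br /&gt;
```python
import math

data = [-3.0, 0.0, 4.0]

def log_likelihood(mu, data, sigma=1.0):
    """Normal log-likelihood of the data for a candidate mean mu."""
    return sum(-0.5 * math.log(2.0 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2.0 * sigma ** 2) for x in data)

# Pick the most likely mean from the limited set of choices.
candidates = [-5.0, 1.0, 10.0]
best = max(candidates, key=lambda mu: log_likelihood(mu, data))  # best == 1.0
```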
&lt;br /&gt;
If &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; unknown parameters that need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; independent observations, &amp;lt;math&amp;gt;{{x}_{1}},{{x}_{2}},\cdots ,{{x}_{R}}\,\!&amp;lt;/math&amp;gt;, which in the case of life data analysis correspond to failure times. The likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{R}})=L=\underset{i=1}{\overset{R}{\mathop \prod }}\,f({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;i = 1,2,...,R\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logarithmic likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Lambda  = \ln L =\sum_{i = 1}^R \ln f({x_i};{\theta _1},{\theta _2},...,{\theta _k})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood estimators (or parameter values) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are obtained by maximizing &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By maximizing &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;, which is much easier to work with than &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt;, the maximum likelihood estimators (MLE) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the simultaneous solutions of &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; equations such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{\partial{\Lambda}}{\partial{\theta_j}}=0, \text{ j=1,2...,k}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
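&lt;br /&gt;
For example, for the exponential distribution with &#039;&#039;pdf&#039;&#039; &amp;lt;math&amp;gt;f(x;\lambda )=\lambda {{e}^{-\lambda x}}\,\!&amp;lt;/math&amp;gt;, the single equation &amp;lt;math&amp;gt;\partial \Lambda /\partial \lambda =R/\lambda -\underset{i=1}{\overset{R}{\mathop{\sum }}}\,{{x}_{i}}=0\,\!&amp;lt;/math&amp;gt; has the closed-form solution &amp;lt;math&amp;gt;\hat{\lambda }=R/\underset{i=1}{\overset{R}{\mathop{\sum }}}\,{{x}_{i}}\,\!&amp;lt;/math&amp;gt;. A short sketch with hypothetical failure times:&lt;br /&gt;
&lt;br /&gt;
```python
def exponential_mle(times):
    """Closed-form MLE for the exponential failure rate: lambda-hat = R / sum(t)."""
    return len(times) / sum(times)

# Hypothetical complete (uncensored) failure times, in hours:
times = [120.0, 340.0, 560.0, 780.0, 1200.0]
lam = exponential_mle(times)  # failures per hour
mean_life = 1.0 / lam         # the MLE of the mean life is the sample mean
```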
&lt;br /&gt;
Even though it is common practice to plot the MLE solutions using median ranks (points are plotted according to median ranks and the line according to the MLE solutions), this is not completely representative. As can be seen from the equations above, the MLE method is independent of any kind of ranks. For this reason, the MLE solution often appears not to track the data on the probability plot. This is perfectly acceptable because the two methods are independent of each other, and in no way suggests that the solution is wrong.&lt;br /&gt;
&lt;br /&gt;
=== MLE for Right Censored Data  ===&lt;br /&gt;
When performing maximum likelihood analysis on data with suspended items, the likelihood function needs to be expanded to take into account the suspended items. The overall estimation technique does not change, but another term is added to the likelihood function to account for the suspended items. Beyond that, the method of solving for the parameter estimates remains the same. For example, consider a distribution where &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039; and &#039;&#039;cdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
    &amp;amp; F(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the unknown parameters which need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; observed failures at &amp;lt;math&amp;gt;{{T}_{1}},{{T}_{2}},...,{{T}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; observed suspensions at &amp;lt;math&amp;gt;{{S}_{1}},{{S}_{2}},...,{{S}_{M}}\,\!&amp;lt;/math&amp;gt;. The likelihood function is then formulated as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R,}}{{S}_{1}},...,{{S}_{M}})= &amp;amp; \underset{i=1}{\overset{R}{\mathop \prod }}\,f({{T}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \cdot \underset{j=1}{\overset{M}{\mathop \prod }}\,[1-F({{S}_{j}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameters are solved by maximizing this equation. In most cases, no closed-form solution exists for this maximum or for the parameters. Solutions specific to each distribution utilizing MLE are presented in [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
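As a sketch of how the right censored likelihood above is maximized in practice, the code below assumes a Weibull model and hypothetical failure and suspension times (both are assumptions for illustration); each failure contributes &amp;lt;math&amp;gt;\ln f({{T}_{i}})\,\!&amp;lt;/math&amp;gt; and each suspension contributes &amp;lt;math&amp;gt;\ln [1-F({{S}_{j}})]\,\!&amp;lt;/math&amp;gt;, i.e., the log survival function.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

failures = np.array([16.0, 34.0, 53.0, 75.0, 93.0])   # T_1..T_R (hypothetical)
suspensions = np.array([120.0, 120.0, 120.0])          # S_1..S_M (hypothetical)

def neg_log_likelihood(params):
    beta, eta = params
    if beta <= 0 or eta <= 0:
        return np.inf                                  # keep search in valid region
    ll = weibull_min.logpdf(failures, beta, scale=eta).sum()   # sum ln f(T_i)
    ll += weibull_min.logsf(suspensions, beta, scale=eta).sum()  # sum ln[1 - F(S_j)]
    return -ll

res = minimize(neg_log_likelihood, x0=[1.0, 100.0], method="Nelder-Mead")
beta_hat, eta_hat = res.x
```

There is no closed-form solution here, so a derivative-free optimizer is used; any general-purpose maximizer would serve.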
&lt;br /&gt;
=== MLE for Interval and Left Censored Data  ===&lt;br /&gt;
The inclusion of left and interval censored data in an MLE solution for parameter estimates involves adding a term to the likelihood equation to account for the data types in question. When using interval data, it is assumed that the failures occurred in an interval; i.e., in the interval from time &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or from time 0 to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; if left censored), where &amp;lt;math&amp;gt;A &amp;lt; B\,\!&amp;lt;/math&amp;gt;. In the case of interval data, and given &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; interval observations, the likelihood function is modified by multiplying the likelihood function with an additional term as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{P}})= &amp;amp; \underset{i=1}{\overset{P}{\mathop \prod }}\,\{F({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \ \ -F({{x}_{i-1}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that if only interval data are present, this term will represent the entire likelihood function for the MLE solution. The next section gives a formulation of the complete likelihood function for all possible censoring schemes.&lt;br /&gt;
&lt;br /&gt;
=== The Complete Likelihood Function  ===&lt;br /&gt;
We have now seen that obtaining MLE parameter estimates for different types of data involves incorporating different terms in the likelihood function to account for complete data, right censored data, and left and interval censored data. After including the terms for the different types of data, the likelihood function can now be expressed in its complete form or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
    L= &amp;amp; \underset{i=1}{\mathop{\overset{R}{\mathop{\prod }}\,}}\,f({{T}_{i}};{{\theta }_{1}},...,{{\theta }_{k}})\cdot \underset{j=1}{\mathop{\overset{M}{\mathop{\prod }}\,}}\,[1-F({{S}_{j}};{{\theta }_{1}},...,{{\theta }_{k}})]  \\&lt;br /&gt;
    &amp;amp; \cdot \underset{l=1}{\mathop{\overset{P}{\mathop{\prod }}\,}}\,\left\{ F({{I}_{{{l}_{U}}}};{{\theta }_{1}},...,{{\theta }_{k}})-F({{I}_{{{l}_{L}}}};{{\theta }_{1}},...,{{\theta }_{k}}) \right\}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; L\to L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R}},{{S}_{1}},...,{{S}_{M}},{{I}_{1}},...,{{I}_{P}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
*&amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the number of units with exact failures &lt;br /&gt;
*&amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is the number of suspended units &lt;br /&gt;
*&amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is the number of units with left censored or interval times-to-failure &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the parameters of the distribution &lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure&lt;br /&gt;
*&amp;lt;math&amp;gt;{{S}_{j}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; time of suspension&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{U}}}}\,\!&amp;lt;/math&amp;gt; is the ending of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{L}}}}\,\!&amp;lt;/math&amp;gt; is the beginning of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
&lt;br /&gt;
The total number of units is &amp;lt;math&amp;gt;N = R + M + P\,\!&amp;lt;/math&amp;gt;. It should be noted that in this formulation, if either &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is zero then the product term associated with it is assumed to be one and not zero.&lt;br /&gt;
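The complete log-likelihood can be sketched in code as the sum of the three groups of log terms. The Weibull model and the data values below are assumptions for illustration; note that when a data type is absent, its product term is one, which corresponds to an empty sum of log terms.

```python
import numpy as np
from scipy.stats import weibull_min

failures = np.array([25.0, 47.0, 80.0])        # exact failure times T_i (R = 3)
suspensions = np.array([100.0, 100.0])         # suspension times S_j (M = 2)
intervals = np.array([[30.0, 60.0],            # [I_lL, I_lU] pairs (P = 2)
                      [60.0, 90.0]])

def complete_log_likelihood(beta, eta):
    dist = weibull_min(beta, scale=eta)
    ll = dist.logpdf(failures).sum()                   # exact failures: ln f(T_i)
    ll += dist.logsf(suspensions).sum()                # right censored: ln[1 - F(S_j)]
    ll += np.log(dist.cdf(intervals[:, 1])
                 - dist.cdf(intervals[:, 0])).sum()    # interval: ln[F(I_lU) - F(I_lL)]
    return ll

value = complete_log_likelihood(beta=1.5, eta=80.0)
```

This function would then be handed to a numerical maximizer exactly as in the right-censored case.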
&lt;br /&gt;
== Comments on the MLE Method  ==&lt;br /&gt;
The MLE method has many large sample properties that make it attractive for use. It is asymptotically consistent, which means that as the sample size gets larger, the estimates converge to the right values. It is asymptotically efficient, which means that for large samples, it produces the most precise estimates. It is asymptotically unbiased, which means that for large samples, one expects to get the right value on average. The distribution of the estimates themselves is normal, if the sample is large enough, and this is the basis for the usual [[Confidence_Bounds#Fisher_Matrix_Confidence_Bounds|Fisher Matrix Confidence Bounds]] discussed later. These are all excellent large sample properties. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, the size of the sample necessary to achieve these properties can be quite large: thirty to fifty to more than a hundred exact failure times, depending on the application. With fewer points, the methods can be badly biased. It is known, for example, that MLE estimates of the shape parameter for the Weibull distribution are badly biased for small sample sizes, and the effect can be increased depending on the amount of censoring. This bias can cause major discrepancies in analysis. There are also pathological situations when the asymptotic properties of the MLE do not apply. One of these is estimating the location parameter for the three-parameter Weibull distribution when the shape parameter has a value close to 1. These problems, too, can cause major discrepancies. &lt;br /&gt;
&lt;br /&gt;
However, MLE can handle suspensions and interval data better than rank regression, particularly when dealing with a heavily censored data set with few exact failure times or when the censoring times are unevenly distributed. It can also provide estimates with one or no observed failures, which rank regression cannot do. As a rule of thumb, our recommendation is to use rank regression techniques when the sample sizes are small and without heavy censoring (censoring is discussed in [[Life Data Classification|Life Data Classifications]]). When heavy or uneven censoring is present, when a high proportion of interval data is present and/or when the sample size is sufficient, MLE should be preferred. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Appendix:_Maximum_Likelihood_Estimation_Example|Maximum Likelihood Parameter Estimation Example]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Bayesian Parameter Estimation Methods=&lt;br /&gt;
Up to this point, we have dealt exclusively with what is commonly referred to as classical statistics. In this section, another school of thought in statistical analysis will be introduced, namely Bayesian statistics. The premise of Bayesian statistics (within the context of life data analysis) is to incorporate prior knowledge, along with a given set of current observations, in order to make statistical inferences. The prior information could come from operational or observational data, from previous comparable experiments or from engineering knowledge.  This type of analysis can be particularly useful when there is limited test data for a given design or failure mode but there is a strong prior understanding of the failure rate behavior for that design or mode. By incorporating prior information about the parameter(s), a posterior distribution for the parameter(s) can be obtained and inferences on the model parameters and their functions can be made. This section is intended to give a quick and elementary overview of Bayesian methods, focused primarily on the material necessary for understanding the Bayesian analysis methods available in Weibull++. Extensive coverage of the subject can be found in numerous books dealing with Bayesian statistics.&lt;br /&gt;
&lt;br /&gt;
===Bayes’s Rule===&lt;br /&gt;
Bayes’s rule provides the framework for combining prior information with sample data. In this reference, we apply Bayes’s rule for combining prior information on the assumed distribution&#039;s parameter(s) with sample data in order to make inferences based on the model. The prior knowledge about the parameter(s) is expressed in terms of a distribution &amp;lt;math&amp;gt;\varphi (\theta ),\,\!&amp;lt;/math&amp;gt; called the &#039;&#039;prior distribution&#039;&#039;. The &#039;&#039;posterior&#039;&#039; distribution of &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; given the sample data, obtained using Bayes&#039;s rule, provides the updated information about the parameters &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt;. This is expressed with the following posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta |Data) = \frac{L(Data|\theta )\varphi (\theta )}{\int_{\zeta}^{} L(Data|\theta )\varphi (\theta )d\theta }&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; is a vector of the parameters of the chosen distribution&lt;br /&gt;
*&amp;lt;math&amp;gt;\zeta\,\!&amp;lt;/math&amp;gt; is the range of &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt; L(Data|\theta)\,\!&amp;lt;/math&amp;gt; is the likelihood function based on the chosen distribution and data&lt;br /&gt;
*&amp;lt;math&amp;gt;\varphi(\theta )\,\!&amp;lt;/math&amp;gt; is the prior distribution for each of the parameters&lt;br /&gt;
&lt;br /&gt;
The integral in the Bayes&#039;s rule equation is often referred to as the marginal probability, which is a constant number that can be interpreted as the probability of obtaining the sample data given a prior distribution. Generally, the integral in the Bayes&#039;s rule equation does not have a closed-form solution and numerical methods are needed for its solution.&lt;br /&gt;
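One of the simplest such numerical methods for a single parameter is evaluating Bayes&#039;s rule on a grid. The sketch below (an illustration, not from this reference) assumes exponentially distributed failure times and a uniform (non-informative) prior; all data values are hypothetical.

```python
import numpy as np

data = np.array([12.0, 25.0, 31.0, 48.0])      # hypothetical failure times
theta = np.linspace(0.001, 0.2, 2000)          # grid over the parameter range zeta
dtheta = theta[1] - theta[0]

# L(Data|theta) for an exponential model: prod lam*exp(-lam*t) = lam^n * exp(-lam*sum(t))
likelihood = theta ** len(data) * np.exp(-theta * data.sum())
prior = np.ones_like(theta)                    # uniform prior phi(theta)

numerator = likelihood * prior
marginal = numerator.sum() * dtheta            # integral L(Data|theta)*phi(theta) dtheta
posterior = numerator / marginal               # f(theta|Data)
```

The marginal is just a normalizing constant: dividing by it guarantees the posterior integrates to one over the grid.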
&lt;br /&gt;
As can be seen from the Bayes&#039;s rule equation, there are significant differences between classical and Bayesian statistics. First, the idea of prior information does not exist in classical statistics. All inferences in classical statistics are based on the sample data. On the other hand, in the Bayesian framework, prior information constitutes the basis of the theory. Another difference is in the overall approach of making inferences and their interpretation. For example, in Bayesian analysis, the parameters of the distribution to be fitted are the random variables. In reality, there is no distribution fitted to the data in the Bayesian case.&lt;br /&gt;
&lt;br /&gt;
For instance, consider the case where data is obtained from a reliability test. Based on prior experience on a similar product, the analyst believes that the shape parameter of the Weibull distribution has a value between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\beta }_{2}}\,\!&amp;lt;/math&amp;gt; and wants to utilize this information. This can be achieved by using Bayes&#039;s rule. At this point, the analyst is automatically forcing the Weibull distribution as a model for the data and with a shape parameter between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{\beta _2}\,\!&amp;lt;/math&amp;gt;. In this example, the range of values for the shape parameter is the prior distribution, which in this case is uniform. By applying Bayes&#039;s rule, the posterior distribution of the shape parameter will be obtained. Thus, we end up with a distribution for the parameter rather than an estimate of the parameter, as in classical statistics.&lt;br /&gt;
&lt;br /&gt;
To better illustrate the example, assume that a set of failure data was provided along with a distribution for the shape parameter (i.e., uniform prior) of the Weibull (automatically assuming that the data are Weibull distributed). Based on that, a new distribution (the posterior) for that parameter is then obtained using Bayes&#039;s rule. This posterior distribution of the parameter may or may not resemble in form the assumed prior distribution. In other words, in this example the prior distribution of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; was assumed to be uniform but the posterior is most likely not a uniform distribution.&lt;br /&gt;
&lt;br /&gt;
The question now becomes: what is the value of the shape parameter? What about the reliability and other results of interest? In order to answer these questions, we have to remember that in the Bayesian framework all of these metrics are random variables. Therefore, in order to obtain an estimate, a probability needs to be specified or we can use the expected value of the posterior distribution.&lt;br /&gt;
&lt;br /&gt;
In order to demonstrate the procedure of obtaining results from the posterior distribution, we will rewrite the Bayes&#039;s rule equation for a single parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta_1 |Data) = \frac{L(Data|\theta_1 )\varphi (\theta_1 )}{\int_{\zeta}^{} L(Data|\theta_1 )\varphi (\theta_1 )d\theta_1 }&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The expected value (or mean value) of the parameter &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained using the equation for the mean and the Bayes&#039;s rule equation for single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E({\theta _1}) = {m_{{\theta _1}}} = \int_{\zeta}^{}{\theta _1} \cdot f({\theta _1}|Data)d{\theta _1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative result for &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; would be the median value. Using the equation for the median and the Bayes&#039;s rule equation for a single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty }^{{{\theta }_{0.5}}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation for the median is solved for &amp;lt;math&amp;gt;{\theta _{0.5}}\,\!&amp;lt;/math&amp;gt;, the median value of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Similarly, any other percentile of the posterior &#039;&#039;pdf&#039;&#039; can be calculated and reported. For example, one could calculate the 90th percentile of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;’s posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty }^{{{\theta }_{0.9}}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.9\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This calculation will be used in [[Confidence Bounds]] and [[The Weibull Distribution]] for obtaining confidence bounds on the parameter(s).&lt;br /&gt;
&lt;br /&gt;
The next step will be to make inferences on the reliability. Since the parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; is a random variable described by the posterior &#039;&#039;pdf,&#039;&#039; all subsequent functions of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; are distributed random variables as well and are entirely based on the posterior &#039;&#039;pdf&#039;&#039; of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt;. Therefore, expected value, median or other percentile values will also need to be calculated. For example, the expected reliability at time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E[R(T|Data)] = \int_{\zeta}^{} R(T)f(\theta |Data)d{\theta}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, at a given time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, there is a distribution that governs the reliability value at that time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, and by using Bayes&#039;s rule, the expected (or mean) value of the reliability is obtained. Other percentiles of this distribution can also be obtained.&lt;br /&gt;
A similar procedure is followed for other functions of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;, such as failure rate, reliable life, etc.&lt;br /&gt;
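The mean, median, percentile, and expected reliability equations above can all be evaluated from a grid posterior in a few lines. This sketch again assumes an exponential model with a uniform prior and hypothetical data; &amp;lt;math&amp;gt;T = 20\,\!&amp;lt;/math&amp;gt; is an arbitrary mission time for illustration.

```python
import numpy as np

data = np.array([12.0, 25.0, 31.0, 48.0])      # hypothetical failure times
theta = np.linspace(0.001, 0.2, 4000)          # grid over the parameter range
dtheta = theta[1] - theta[0]

# posterior under a uniform prior (the prior cancels in the normalization)
likelihood = theta ** len(data) * np.exp(-theta * data.sum())
posterior = likelihood / (likelihood.sum() * dtheta)

mean = (theta * posterior).sum() * dtheta              # E(theta_1)
cdf = np.cumsum(posterior) * dtheta                    # running integral of the posterior
median = theta[np.searchsorted(cdf, 0.5)]              # theta_0.5: integral = 0.5
pct_90 = theta[np.searchsorted(cdf, 0.9)]              # theta_0.9: integral = 0.9

T = 20.0                                               # mission time (assumed)
expected_rel = (np.exp(-theta * T) * posterior).sum() * dtheta  # E[R(T|Data)]
```

Each result is a functional of the same posterior: a point estimate is obtained only after deciding which summary (mean, median, or another percentile) to report.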
&lt;br /&gt;
===Prior Distributions===&lt;br /&gt;
Prior distributions play a very important role in Bayesian statistics. They are essentially the foundation of Bayesian analysis. Different types of prior distributions exist, namely &#039;&#039;informative&#039;&#039; and &#039;&#039;non-informative&#039;&#039;. Non-informative prior distributions (a.k.a. &#039;&#039;vague&#039;&#039;, &#039;&#039;flat&#039;&#039; and &#039;&#039;diffuse&#039;&#039;) are distributions that have no population basis and play a minimal role in the posterior distribution. The idea behind the use of non-informative prior distributions is to make inferences that are not greatly affected by external information, or when external information is not available. The uniform distribution is frequently used as a non-informative prior.&lt;br /&gt;
&lt;br /&gt;
On the other hand, informative priors have a stronger influence on the posterior distribution. The influence of the prior distribution on the posterior is related to the sample size of the data and the form of the prior. Generally speaking, large sample sizes are required to modify strong priors, whereas weak priors are overwhelmed by even relatively small sample sizes. Informative priors are typically obtained from past data.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56796</id>
		<title>Parameter Estimation</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56796"/>
		<updated>2014-12-03T20:56:57Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Rank Adjustment Method for Right Censored Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|4|Parameter Estimation}}&lt;br /&gt;
The term &#039;&#039;parameter estimation&#039;&#039; refers to the process of using sample data (in reliability engineering, usually times-to-failure or success data) to estimate the parameters of the selected distribution. Several parameter estimation methods are available. This section presents an overview of the available methods used in life data analysis. More specifically, we start with the relatively simple method of Probability Plotting and continue with the more sophisticated methods of Rank Regression (or Least Squares), Maximum Likelihood Estimation and Bayesian Estimation Methods.&lt;br /&gt;
&lt;br /&gt;
=Probability Plotting=&lt;br /&gt;
The least mathematically intensive method for parameter estimation is the method of probability plotting. As the term implies, probability plotting involves a physical plot of the data on specially constructed &#039;&#039;probability plotting paper&#039;&#039;. This method is easily implemented by hand, given that one can obtain the appropriate probability plotting paper.&lt;br /&gt;
&lt;br /&gt;
The method of probability plotting takes the &#039;&#039;cdf&#039;&#039; of the distribution and attempts to linearize it by employing a specially constructed paper. The following sections illustrate the steps in this method using the 2-parameter Weibull distribution as an example. This includes:&lt;br /&gt;
&lt;br /&gt;
*Linearize the unreliability function&lt;br /&gt;
*Construct the probability plotting paper&lt;br /&gt;
*Determine the X and Y positions of the plot points&lt;br /&gt;
&lt;br /&gt;
The plot can then be used to read any particular time or reliability/unreliability value of interest.&lt;br /&gt;
&lt;br /&gt;
==Linearizing the Unreliability Function==&lt;br /&gt;
&lt;br /&gt;
In the case of the 2-parameter Weibull, the &#039;&#039;cdf&#039;&#039; (also the unreliability &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt;) is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(t)=Q(t)=1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function can then be linearized (i.e., put in the form &amp;lt;math&amp;gt;y=mx+b\,\!&amp;lt;/math&amp;gt;) as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 Q(t)= &amp;amp;  1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}  \\&lt;br /&gt;
  \ln (1-Q(t))= &amp;amp; \ln \left[ {e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}} \right]  \\&lt;br /&gt;
  \ln (1-Q(t))=&amp;amp; -\left(\tfrac{t}{\eta}\right)^{\beta}  \\&lt;br /&gt;
  \ln ( -\ln (1-Q(t)))= &amp;amp; \beta \left(\ln \left( \frac{t}{\eta }\right)\right) \\&lt;br /&gt;
  \ln \left( \ln \left( \frac{1}{1-Q(t)}\right) \right) = &amp;amp; \beta \ln t -\beta \ln \left( \eta  \right)  \\&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then by setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x=\ln \left( t \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equation can then be rewritten as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\beta x-\beta \ln \left( \eta  \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is now a linear equation with a slope of: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
m = \beta&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and an intercept of:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=-\beta \cdot \ln(\eta)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
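The linearization can be exercised numerically: transform the data with &amp;lt;math&amp;gt;x=\ln t\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y=\ln \ln \left( 1/(1-Q(t)) \right)\,\!&amp;lt;/math&amp;gt;, fit a straight line, and recover &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; from the slope and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; from the intercept. This is a sketch, not this reference&#039;s algorithm: the four failure times are illustrative, and Benard&#039;s approximation of the median ranks (covered later in this chapter) supplies the &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt; plotting positions.

```python
import numpy as np

times = np.array([10.0, 20.0, 30.0, 40.0])   # hypothetical failure times
N = len(times)
j = np.arange(1, N + 1)                      # failure order numbers
Q = (j - 0.3) / (N + 0.4)                    # Benard's approximation to median ranks

x = np.log(times)                            # x = ln(t)
y = np.log(np.log(1.0 / (1.0 - Q)))          # y = ln ln(1/(1-Q))

slope, intercept = np.polyfit(x, y, 1)       # least-squares line y = beta*x + b

beta_est = slope                             # slope m = beta
eta_est = np.exp(-intercept / slope)         # from b = -beta * ln(eta)
```

This is the numerical analogue of drawing the best straight line through the plotted points and reading the slope indicator and the 63.2% line.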
&lt;br /&gt;
==Constructing the Paper==&lt;br /&gt;
The next task is to construct the Weibull probability plotting paper with the appropriate y and x axes. The x-axis transformation is simply logarithmic. The y-axis is a bit more complex, requiring a double log reciprocal transformation, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt; is the unreliability. &lt;br /&gt;
&lt;br /&gt;
Such papers have been created by different vendors and are called &#039;&#039;probability plotting papers&#039;&#039;. ReliaSoft&#039;s reliability engineering resource website at www.weibull.com has different plotting papers available for [http://www.weibull.com/GPaper/index.htm download]. &lt;br /&gt;
&lt;br /&gt;
[[Image:WeibullPaper2C.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
To illustrate, consider the following probability plot on a slightly different type of Weibull probability paper. &lt;br /&gt;
&lt;br /&gt;
[[Image:different_weibull_paper.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
This paper is constructed based on the mentioned y and x transformations, where the y-axis represents unreliability and the x-axis represents time. Both of these values must be known for each time-to-failure point we want to plot. &lt;br /&gt;
&lt;br /&gt;
Then, given the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; value for each point, the points can easily be put on the plot. Once the points have been placed on the plot, the best possible straight line is drawn through these points. Once the line has been drawn, the slope of the line can be obtained (some probability papers include a slope indicator to simplify this calculation). This is the parameter &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;, which is the value of the slope. To determine the scale parameter, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; (also called the &#039;&#039;characteristic life&#039;&#039;), one reads the time from the x-axis corresponding to &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Note that at:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Q(t=\eta)= &amp;amp; 1-{{e}^{-{{\left( \tfrac{\eta }{\eta } \right)}^{\beta }}}} \\ &lt;br /&gt;
  = &amp;amp; 1-{{e}^{-1}} \\ &lt;br /&gt;
  = &amp;amp; 0.632 \\ &lt;br /&gt;
  = &amp;amp; 63.2%  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, if we enter the &#039;&#039;y&#039;&#039; axis at &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;, the corresponding value of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; will be equal to &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;. Thus, using this simple methodology, the parameters of the Weibull distribution can be estimated.&lt;br /&gt;
&lt;br /&gt;
==Determining the X and Y Position of the Plot Points==&lt;br /&gt;
The points on the plot represent our data or, more specifically, our times-to-failure data. If, for example, we tested four units that failed at 10, 20, 30 and 40 hours, then we would use these times as our &#039;&#039;x&#039;&#039; values or time values. &lt;br /&gt;
&lt;br /&gt;
Determining the appropriate &#039;&#039;y&#039;&#039; plotting positions, or the unreliability values, is a little more complex. To determine the &#039;&#039;y&#039;&#039; plotting positions, we must first determine a value indicating the corresponding unreliability for that failure. In other words, we need to obtain the cumulative percent failed for each time-to-failure. For example, the cumulative percent failed by 10 hours may be 25%, by 20 hours 50%, and so forth. This is a simple method illustrating the idea. The problem with this simple method is the fact that the 100% point is not defined on most probability plots; thus, an alternative and more robust approach must be used. The most widely used method of determining this value is the method of obtaining the &#039;&#039;median rank&#039;&#039; for each failure, as discussed next.&lt;br /&gt;
&lt;br /&gt;
===Median Ranks ===&lt;br /&gt;
The Median Ranks method is used to obtain an estimate of the unreliability for each failure. The median rank is the value that the true probability of failure, &amp;lt;math&amp;gt;Q({{T}_{j}})\,\!&amp;lt;/math&amp;gt;, should have at the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure out of a sample of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units at the 50% confidence level. &lt;br /&gt;
&lt;br /&gt;
The rank can be found for any percentage point, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, greater than zero and less than one, by solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;. This represents the rank, or unreliability estimate, for the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure in the following equation for the cumulative binomial: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the sample size and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; the order number. &lt;br /&gt;
&lt;br /&gt;
The median rank is obtained by solving this equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  at &amp;lt;math&amp;gt;P = 0.50\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;0.50=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; and we have four failures, we would solve the median rank equation for the value of &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; four times, once for each failure with &amp;lt;math&amp;gt;j= 1, 2, 3 \text{ and }4\,\!&amp;lt;/math&amp;gt;. This result can then be used as the unreliability estimate for each failure or the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; plotting position. (See also [[The Weibull Distribution|The Weibull Distribution]]&amp;amp;nbsp;for a step-by-step example of this method.) The solution of the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; requires the use of numerical methods.&lt;br /&gt;
&lt;br /&gt;
===Beta and F Distributions Approach===&lt;br /&gt;
A more straightforward and easier method of estimating median ranks is by applying two transformations to the cumulative binomial equation, first to the beta distribution and then to the F distribution, resulting in [[Appendix:_Life_Data_Analysis_References|[12, 13]]]: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   MR &amp;amp; = &amp;amp; \tfrac{1}{1+\tfrac{N-j+1}{j}{{F}_{0.50;m;n}}}  \\&lt;br /&gt;
   m &amp;amp; = &amp;amp; 2(N-j+1)  \\&lt;br /&gt;
   n &amp;amp; = &amp;amp; 2j  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{F}_{0.50;m;n}}\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution at the 0.50 point, with &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; degrees of freedom, for failure &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; out of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units.&lt;br /&gt;
&lt;br /&gt;
=== Benard&#039;s Approximation for Median Ranks  ===&lt;br /&gt;
Another quick, though less accurate, approximation of the median ranks is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MR = \frac{{j - 0.3}}{{N + 0.4}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This approximation of the median ranks is also known as &#039;&#039;Benard&#039;s approximation&#039;&#039;.&lt;br /&gt;
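The three formulations above can be compared numerically. Inverting the cumulative binomial at &amp;lt;math&amp;gt;P = 0.50\,\!&amp;lt;/math&amp;gt; is equivalent to taking the median of a beta distribution with parameters &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N-j+1\,\!&amp;lt;/math&amp;gt; (a standard identity, which is also where the F-distribution shortcut comes from). The sketch below uses &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; as in the example above.

```python
import numpy as np
from scipy.stats import beta, f

N = 4
j = np.arange(1, N + 1)                        # order numbers 1..N

# exact median ranks: median of Beta(j, N-j+1), equivalent to solving
# the cumulative binomial equation for Z at P = 0.50
exact = beta.ppf(0.5, j, N - j + 1)

# F-distribution formulation: MR = 1 / (1 + ((N-j+1)/j) * F_{0.50; m; n})
m = 2 * (N - j + 1)
n = 2 * j
f_based = 1.0 / (1.0 + (N - j + 1) / j * f.ppf(0.5, m, n))

# Benard's approximation
benard = (j - 0.3) / (N + 0.4)
```

The beta and F routes agree to numerical precision, while Benard's approximation is typically within a fraction of a percent for samples of this size.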
&lt;br /&gt;
===Kaplan-Meier===&lt;br /&gt;
The Kaplan-Meier estimator (also known as the &#039;&#039;product limit estimator&#039;&#039;) is used as an alternative to the median ranks method for calculating the estimates of the unreliability for probability plotting purposes. The equation of the estimator is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{F}({{t}_{i}})=1-\underset{j=1}{\overset{i}{\mathop \prod }}\,\frac{{{n}_{j}}-{{r}_{j}}}{{{n}_{j}}},\text{ }i=1,...,m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  m =  &amp;amp; {\text{total number of data points}} \\ &lt;br /&gt;
  n =  &amp;amp; {\text{the total number of units}} \\ &lt;br /&gt;
  {n_i} =  &amp;amp; n - \sum_{j = 0}^{i - 1}{s_j} - \sum_{j = 0}^{i - 1}{r_j}, \text{i = 1,...,m }\\ &lt;br /&gt;
  {r_j} =  &amp;amp; {\text{ number of failures in the }}{j^{th}}{\text{ data group, and}} \\ &lt;br /&gt;
  {s_j} =  &amp;amp; {\text{number of surviving units in the }}{j^{th}}{\text{ data group}} \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
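The estimator above can be sketched in a few lines of Python; the function `kaplan_meier` and its input format (a list of `(failures, suspensions)` pairs per data group, in time order) are our own illustrative choices:

```python
def kaplan_meier(groups, n_total):
    """Return unreliability estimates F(t_i) for each data group.

    groups: list of (r_j, s_j) pairs -- failures and suspensions in the
    j-th data group, ordered by time. n_total: total number of units n."""
    at_risk = n_total          # n_i for the current group
    survival = 1.0             # running product of (n_j - r_j) / n_j
    unreliability = []
    for r, s in groups:
        survival *= (at_risk - r) / at_risk
        unreliability.append(1.0 - survival)
        at_risk -= r + s       # remove this group's failures and suspensions
    return unreliability

# Example: 5 units in three groups of (failures, suspensions), in time order
F = kaplan_meier([(1, 0), (0, 1), (1, 0)], 5)
```

A group containing only suspensions leaves the unreliability estimate unchanged but reduces the number of units at risk for the following groups.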
&lt;br /&gt;
== Probability Plotting Example  ==&lt;br /&gt;
This same methodology can be applied to other distributions with &#039;&#039;cdf&#039;&#039; equations that can be linearized. Different probability papers exist for each distribution, because different distributions have different &#039;&#039;cdf&#039;&#039; equations. ReliaSoft&#039;s software tools automatically create these plots for you. Special scales on these plots allow you to derive the parameter estimates directly from the plots, similar to the way &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; were obtained from the Weibull probability plot. The following example demonstrates the method again, this time using the 1-parameter exponential distribution.&lt;br /&gt;
&lt;br /&gt;
{{:Probability Plotting Example}}&lt;br /&gt;
&lt;br /&gt;
== Comments on the Probability Plotting Method ==&lt;br /&gt;
Besides the most obvious drawback of probability plotting, namely the amount of effort required, manual probability plotting does not always yield consistent results. Two people plotting a straight line through the same set of points will not always draw the line the same way, and thus will come up with slightly different results. This method was used primarily before the widespread availability of computers that could easily perform the calculations for more complicated parameter estimation methods, such as the least squares and maximum likelihood methods.&lt;br /&gt;
&lt;br /&gt;
= Least Squares (Rank Regression)  =&lt;br /&gt;
Using the idea of probability plotting, regression analysis mathematically fits the best straight line to a set of points, in an attempt to estimate the parameters. Essentially, this is a mathematically based version of the probability plotting method discussed previously. &lt;br /&gt;
&lt;br /&gt;
The method of linear least squares is used for all regression analysis performed by Weibull++, except for the cases of the 3-parameter Weibull, mixed Weibull, gamma and generalized gamma distributions, where a non-linear regression technique is employed. The terms &#039;&#039;linear regression&#039;&#039; and &#039;&#039;least squares&#039;&#039; are used synonymously in this reference. In Weibull++, the term &#039;&#039;rank regression&#039;&#039; is used instead of &#039;&#039;least squares&#039;&#039; or &#039;&#039;linear regression&#039;&#039; because the regression is performed on the rank values; more specifically, the median rank values (represented on the y-axis). The method of least squares requires that a straight line be fitted to a set of data points such that the sum of the squares of the distances of the points to the fitted line is minimized. This minimization can be performed in either the vertical or the horizontal direction. If the regression is on &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the horizontal deviations from the points to the line are minimized. If the regression is on &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt;, then the vertical deviations from the points to the line are minimized. This is illustrated in the following figure. &lt;br /&gt;
&lt;br /&gt;
[[Image:minimizingdistance.png|center|500px]]&lt;br /&gt;
&lt;br /&gt;
=== Rank Regression on Y  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;-values are known exactly. Then, according to the &#039;&#039;least squares principle,&#039;&#039; which minimizes the vertical distance between the data points and the straight line fitted to the data, the best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;y=\hat{a}+\hat{b}x\,\!&amp;lt;/math&amp;gt; (where the recently introduced (&amp;lt;math&amp;gt;\hat{ }\,\!&amp;lt;/math&amp;gt;) symbol indicates that this value is an estimate) such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\sum\limits_{i=1}^{N}{{{\left( \hat{a}+\hat{b}{{x}_{i}}-{{y}_{i}} \right)}^{2}}=\min \sum\limits_{i=1}^{N}{{{\left( a+b{{x}_{i}}-{{y}_{i}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and where &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat b\,\!&amp;lt;/math&amp;gt; are the &#039;&#039;least squares estimates&#039;&#039; of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}=\bar{y}-\hat{b}\bar{x}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
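These closed-form estimates translate directly into code. A minimal sketch (the function name is ours), computing the sums once and then the two estimators:

```python
def rank_regression_y(x, y):
    """Least squares estimates (a_hat, b_hat) for y = a + b*x,
    minimizing the vertical deviations from the points to the line."""
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(xi * yi for xi, yi in zip(x, y))
    sum_xx = sum(xi * xi for xi in x)
    b_hat = (sum_xy - sum_x * sum_y / n) / (sum_xx - sum_x ** 2 / n)
    a_hat = sum_y / n - b_hat * sum_x / n   # a_hat = y_bar - b_hat * x_bar
    return a_hat, b_hat
```

In the rank regression setting, `x` would hold the transformed times-to-failure and `y` the transformed median rank values.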
&lt;br /&gt;
=== Rank Regression on X  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;-values are known exactly. The same least squares principle is applied, but this time minimizing the horizontal distance between the data points and the straight line fitted to the data. The best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;x=\widehat{a}+\widehat{b}y\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(\widehat{a}+\widehat{b}{{y}_{i}}-{{x}_{i}})}^{2}}=\underset{a,b}{\mathop{\min }}\,\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(a+b{{y}_{i}}-{{x}_{i}})}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat b\,\!&amp;lt;/math&amp;gt; are the least squares estimates of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}=\bar{x}-\hat{b}\bar{y}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding relations for determining the parameters for specific distributions (i.e., Weibull, exponential, etc.), are presented in the chapters covering that distribution.&lt;br /&gt;
&lt;br /&gt;
=== Correlation Coefficient  ===&lt;br /&gt;
The correlation coefficient is a measure of how well the linear regression model fits the data and is usually denoted by &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt;. In the case of life data analysis, it is a measure for the strength of the linear relation (correlation) between the median ranks and the data. The population correlation coefficient is defined as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\rho =\frac{{{\sigma }_{xy}}}{{{\sigma }_{x}}{{\sigma }_{y}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\sigma}_{xy}} = \,\!&amp;lt;/math&amp;gt; covariance of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\sigma}_{x}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{\sigma}_{y}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The estimator of &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt; is the sample correlation coefficient, &amp;lt;math&amp;gt;\hat{\rho }\,\!&amp;lt;/math&amp;gt;, given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\sqrt{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N} \right)\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The range of &amp;lt;math&amp;gt;\hat \rho \,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;-1\le \hat{\rho }\le 1\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
[[Image:correlationcoeffficient.png|center|500px]] &lt;br /&gt;
&lt;br /&gt;
The closer the value is to &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt;, the better the linear fit. Note that +1 indicates a perfect fit (the paired values (&amp;lt;math&amp;gt;{{x}_{i}},{{y}_{i}}\,\!&amp;lt;/math&amp;gt;) lie on a straight line) with a positive slope, while -1 indicates a perfect fit with a negative slope. A correlation coefficient value of zero would indicate that the data are randomly scattered and have no pattern or correlation in relation to the regression line model.&lt;br /&gt;
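The sample correlation coefficient can be computed from the same sums used in the regression estimates; a minimal sketch (function name ours):

```python
from math import sqrt

def sample_correlation(x, y):
    """Sample correlation coefficient rho_hat of the paired data (x_i, y_i)."""
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    # Numerator: sum of cross products minus the correction term
    num = sum(xi * yi for xi, yi in zip(x, y)) - sum_x * sum_y / n
    # Denominator: square root of the product of the corrected sums of squares
    den = sqrt((sum(xi * xi for xi in x) - sum_x ** 2 / n) *
               (sum(yi * yi for yi in y) - sum_y ** 2 / n))
    return num / den
```

Points lying exactly on a line with positive slope give +1; on a line with negative slope, -1.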
&lt;br /&gt;
===Comments on the Least Squares Method===&lt;br /&gt;
The least squares estimation method is quite good for functions that can be linearized. For these distributions, the calculations are relatively easy and straightforward, having closed-form solutions that can readily yield an answer without having to resort to numerical techniques or tables. Furthermore, this technique provides a good measure of the goodness-of-fit of the chosen distribution in the correlation coefficient. Least squares is generally best used with data sets containing complete data, that is, data consisting only of single times-to-failure with no censored or interval data. (See [[Life Data Classification]] for information about the different data types, including complete, left censored, right censored (or suspended) and interval data.) &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Least Squares/Rank Regression Equations]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Rank Methods for Censored Data=&lt;br /&gt;
All available data should be considered in the analysis of times-to-failure data. This includes the case when a particular unit in a sample has been removed from the test prior to failure. An item, or unit, which is removed from a reliability test prior to failure, or a unit which is in the field and is still operating at the time the reliability of these units is to be determined, is called a &#039;&#039;suspended item&#039;&#039;, &#039;&#039;right censored observation&#039;&#039; or &#039;&#039;right censored data point&#039;&#039;. Suspended items analysis would also be considered when: &lt;br /&gt;
&lt;br /&gt;
#We need to make an analysis of the available results before test completion. &lt;br /&gt;
#The failure modes which are occurring are different than those anticipated and such units are withdrawn from the test. &lt;br /&gt;
#We need to analyze a single mode and the actual data set comprises multiple modes. &lt;br /&gt;
#A &#039;&#039;warranty analysis&#039;&#039; is to be made of all units in the field (non-failed and failed units). The non-failed units are considered to be suspended items (or right censored).&lt;br /&gt;
&lt;br /&gt;
This section describes the rank methods that are used in both probability plotting and least squares (rank regression) to handle censored data. This includes:&lt;br /&gt;
&lt;br /&gt;
*The rank adjustment method for right censored (suspension) data.&lt;br /&gt;
*ReliaSoft&#039;s alternative ranking method for interval censored data.&lt;br /&gt;
=== Rank Adjustment Method for Right Censored Data ===&lt;br /&gt;
When using the probability plotting or least squares (rank regression) method for data sets where some of the units did not fail, or were suspended, we need to adjust their probability of failure, or unreliability. As discussed before, estimates of the unreliability for complete data are obtained using the median ranks approach. The following methodology illustrates how adjusted median ranks are computed to account for right censored data. To better illustrate the methodology, consider the following example in Kececioglu [[Appendix:_Life_Data_Analysis_References|&amp;amp;nbsp;[20]]] where five items are tested resulting in three failures and two suspensions. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Item Number &amp;lt;br&amp;gt;(Position) &lt;br /&gt;
! Failure (F) &amp;lt;br&amp;gt;or Suspension (S) &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 5,100&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,500&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 15,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 22,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 40,000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The methodology for plotting suspended items involves adjusting the rank positions and plotting the data based on new positions, determined by the location of the suspensions. If we consider these five units, the following methodology would be used: The first item must be the first failure; hence, it is assigned failure order number &amp;lt;math&amp;gt;j = 1\,\!&amp;lt;/math&amp;gt;. The actual failure order number (or position) of the second failure, &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; is in doubt. It could either be in position 2 or in position 3. Had &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; not been withdrawn from the test at 9,500 hours, it could have operated successfully past 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 2. Alternatively, &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; could also have failed before 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 3. In this case, the failure order number for &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; will be some number between 2 and 3. To determine this number, consider the following: &lt;br /&gt;
&lt;br /&gt;
We can find the number of ways the second failure can occur in either order number 2 (position 2) or order number 3 (position 3). The possible ways are listed next. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;6&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 2 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 3&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 4 &lt;br /&gt;
| 5 &lt;br /&gt;
| 6 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be seen that &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; can occur in the second position in six ways and in the third position in two ways. The most probable position is the average of these possible ways, or the &#039;&#039;mean order number&#039;&#039; (MON), given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}=MO{{N}_{2}}=\frac{(6\times 2)+(2\times 3)}{6+2}=2.25\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Using the same logic on the third failure, it can be located in position numbers 3, 4 and 5 in the possible ways listed next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 3 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 4&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 5&lt;br /&gt;
|-&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Then, the mean order number for the third failure (item 5) is: &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=\frac{(2\times 3)+(3\times 4)+(3\times 5)}{2+3+3}=4.125\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Once the mean order number for each failure has been established, we obtain the median rank positions for these failures at their mean order number. Specifically, we obtain the median rank of the order numbers 1, 2.25 and 4.125 out of a sample size of 5, as given next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Plotting Positions for the Failures (Sample Size=5)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Failure Number &lt;br /&gt;
! MON &lt;br /&gt;
! Median Rank Position(%)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1:&amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1 &lt;br /&gt;
| 13%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2:&amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 2.25 &lt;br /&gt;
| 36%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3:&amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 4.125 &lt;br /&gt;
| 71%&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the median rank values have been obtained, the probability plotting analysis is identical to that presented before. As you might have noticed, this methodology is rather laborious. Other techniques and shortcuts have been developed over the years to streamline this procedure. For more details on this method, see Kececioglu [[Appendix:_Life_Data_Analysis_References|[20]]]. Here, we will introduce one of these methods. This method calculates MON using an increment, &#039;&#039;I&#039;&#039;, which is defined by:&lt;br /&gt;
&lt;br /&gt;
:: &amp;lt;math&amp;gt;{{I}_{i}}=\frac{N+1-PMON}{1+NIBPSS}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
* N = the sample size, or total number of items in the test&lt;br /&gt;
* PMON = previous mean order number&lt;br /&gt;
* NIBPSS = the number of items beyond the present suspended set&lt;br /&gt;
* i = the ith failure item&lt;br /&gt;
&lt;br /&gt;
MON is given as:&lt;br /&gt;
 &lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{i}}=MO{{N}_{i-1}}+{{I}_{i}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s recalculate the previous example using this method.&lt;br /&gt;
&lt;br /&gt;
For F1:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{1}}=MO{{N}_{0}}+{{I}_{1}}=\frac{5+1-0}{1+5}=1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For F2:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{2}}=MO{{N}_{1}}+{{I}_{2}}=1+\frac{5+1-1}{1+3}=2.25&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For F3:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=MO{{N}_{2}}+{{I}_{3}}=2.25+\frac{5+1-2.25}{1+1}=4.125&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The MON obtained for each failure item via this method is the same as that obtained from the first method, so the median rank values will also be the same.&lt;br /&gt;
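The increment method is easy to automate. In the sketch below (function name ours), the item states are listed in order of increasing life, `'F'` for a failure and `'S'` for a suspension, and NIBPSS is taken as the number of items from the current failure onward, consistent with the three calculations above:

```python
def mean_order_numbers(states):
    """Compute the mean order number (MON) of each failure.

    states: sequence of 'F'/'S' flags in order of increasing life."""
    n = len(states)
    mons = []
    pmon = 0.0                       # previous mean order number
    for idx, state in enumerate(states):
        if state == 'F':
            # NIBPSS: items from this failure onward, i.e. n - idx of them
            increment = (n + 1 - pmon) / (1 + (n - idx))
            pmon += increment
            mons.append(pmon)
    return mons

# The five-item example above: F1, S1, F2, S2, F3
mons = mean_order_numbers(['F', 'S', 'F', 'S', 'F'])
```

For the example data this reproduces the mean order numbers 1, 2.25 and 4.125; with no suspensions, it reduces to the ordinary order numbers 1, 2, 3, ...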
 &lt;br /&gt;
==== Shortfalls of the Rank Adjustment Method  ====&lt;br /&gt;
Even though the rank adjustment method is the most widely used method for the analysis of suspended items, we would like to point out the following shortcoming. As you may have noticed, only the position where the failure occurred is taken into account, and not the exact time-to-suspension. For example, this methodology would yield the exact same results for the next two cases. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 1 &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of item, hr &lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000 &lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,100 &lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,700&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,200 &lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,800&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,300 &lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,900&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000 &lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This shortfall is significant when the number of failures is small and the number of suspensions is large and not spread uniformly between failures, as with these data. In cases like this, it is highly recommended to use maximum likelihood estimation (MLE) to estimate the parameters instead of using least squares, because MLE does not look at ranks or plotting positions, but rather considers each unique time-to-failure or suspension. For the data given above, the parameters estimated using the method just described are the same for both cases (1 and 2): &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.81}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{11,417 hr}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, the MLE results for Case 1 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{1}\text{.33}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{6,900 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the MLE results for Case 2 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.9337}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{21,348 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we can see, there is a sizable difference between the results calculated using MLE and the results calculated using regression. The regression results are identical for the two cases because regression considers only the rank positions of the suspensions. The MLE results, however, are quite different for the two cases, with the second case having a much larger value of &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; due to the higher values of the suspension times in Case 2. This is because the maximum likelihood technique, unlike rank regression, considers the actual values of the suspensions when estimating the parameters. This is illustrated in the [[Parameter_Estimation#Maximum_Likelihood_Estimation_.28MLE.29|discussion of MLE]] given below.&lt;br /&gt;
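The effect of the suspension values on the MLE solution can be sketched numerically. The following is a minimal sketch, not ReliaSoft's implementation: for the 2-parameter Weibull with right-censored data, the scale parameter can be profiled out of the likelihood equations, leaving a single equation in the shape parameter that is solved here by bisection. The data are Cases 1 and 2 from the table above.

```python
# Minimal sketch of censored-Weibull MLE (assumption: profile-likelihood
# form of the score equations; not ReliaSoft's actual algorithm).
import math

def weibull_mle(failures, suspensions):
    # For a fixed beta, eta_hat**beta = sum(t**beta)/r over ALL units
    # (failures and suspensions); r counts failures only. Substituting
    # eta_hat back leaves one equation g(beta) = 0 in the shape parameter.
    times = failures + suspensions
    r = len(failures)

    def g(beta):
        s1 = sum(t ** beta for t in times)
        s2 = sum((t ** beta) * math.log(t) for t in times)
        return s2 / s1 - 1.0 / beta - sum(math.log(t) for t in failures) / r

    lo, hi = 0.01, 10.0              # bracket for the shape parameter
    while hi - lo > 1e-10:           # bisection on g(beta) = 0
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    beta = 0.5 * (lo + hi)
    eta = (sum(t ** beta for t in times) / r) ** (1.0 / beta)
    return beta, eta

# Case 1 and Case 2 from the table above
case1 = weibull_mle([1000.0, 10000.0], [1100.0, 1200.0, 1300.0])
case2 = weibull_mle([1000.0, 10000.0], [9700.0, 9800.0, 9900.0])
print(case1)   # approximately beta = 1.33, eta = 6,900 hr
print(case2)   # approximately beta = 0.93, eta = 21,350 hr
```

Even though the failure times are identical in both cases, the later suspension times in Case 2 pull the scale parameter up by roughly a factor of three, exactly the behavior described above.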
&lt;br /&gt;
== ReliaSoft&#039;s Ranking Method (RRM) for Interval Censored Data==&lt;br /&gt;
When analyzing interval data, it is commonplace to assume that the actual failure time occurred at the midpoint of the interval. To be more conservative, you can use the starting point of the interval; to be more optimistic, you can use the end point. Weibull++ allows you to employ ReliaSoft&#039;s ranking method (RRM) when analyzing interval data. This ranking method uses an iterative process and is an improvement over the standard ranking method (SRM). For more details on this method, see [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|ReliaSoft&#039;s Ranking Method]].&lt;br /&gt;
&lt;br /&gt;
= Maximum Likelihood Estimation (MLE) = &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM OTHER WIKI PAGES. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
From a statistical point of view, the method of maximum likelihood estimation is, with some exceptions, considered to be the most robust of the parameter estimation techniques discussed here. The method presented in this section is for complete data (i.e., data consisting only of times-to-failure). The analyses for [[Parameter_Estimation#MLE_for_Right_Censored_Data|right censored (suspension) data]] and for [[Parameter_Estimation#MLE_for_Interval_and_Left_Censored_Data|interval or left censored data]] are then discussed in the following sections.&lt;br /&gt;
&lt;br /&gt;
The basic idea behind MLE is to obtain the most likely values of the parameters, for a given distribution, that will best describe the data. As an example, consider the following data (-3, 0, 4) and assume that you are trying to estimate the mean of the data. Now, if you have to choose the most likely value for the mean from -5, 1 and 10, which one would you choose? In this case, the most likely value is 1 (given your limit on choices). Similarly, under MLE, one determines the most likely values for the parameters of the assumed distribution. It is mathematically formulated as follows. &lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; unknown parameters which need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; independent observations, &amp;lt;math&amp;gt;{{x}_{1}},{{x}_{2}},\cdots ,{{x}_{R}}\,\!&amp;lt;/math&amp;gt;, which in the case of life data analysis correspond to failure times. The likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{R}})=L=\underset{i=1}{\overset{R}{\mathop \prod }}\,f({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;i = 1,2,...,R\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logarithmic likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Lambda  = \ln L =\sum_{i = 1}^R \ln f({x_i};{\theta _1},{\theta _2},...,{\theta _k})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood estimators (or parameter values) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are obtained by maximizing &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By maximizing &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;, which is much easier to work with than &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt;, the maximum likelihood estimators (MLE) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the simultaneous solutions of &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; equations such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{\partial{\Lambda}}{\partial{\theta_j}}=0, \text{ j=1,2...,k}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even though it is common practice to plot the MLE solutions using median ranks (points are plotted according to median ranks and the line according to the MLE solutions), this is not completely representative. As can be seen from the equations above, the MLE method is independent of any kind of ranks. For this reason, the MLE solution often appears not to track the data on the probability plot. This is perfectly acceptable because the two methods are independent of each other, and in no way suggests that the solution is wrong.&lt;br /&gt;
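As a concrete sketch of the procedure above (with hypothetical failure times, not data from this reference), consider the 1-parameter exponential distribution: setting the derivative of the log-likelihood with respect to the failure rate parameter to zero gives the closed-form estimator, which a direct check of the log-likelihood confirms is the maximizer.

```python
# Sketch: MLE for the exponential pdf f(t; lam) = lam * exp(-lam * t).
# Setting dLambda/dlam = 0 yields the closed form lam_hat = R / sum(t_i).
# The failure times below are hypothetical.
import math

times = [120.0, 85.0, 230.0, 190.0, 410.0]     # hypothetical failure times
lam_hat = len(times) / sum(times)              # analytical MLE

def loglik(lam):
    # Lambda = sum of ln f(t_i; lam) over the observed failures
    return sum(math.log(lam) - lam * t for t in times)

# numerical check: the log-likelihood at lam_hat beats nearby values
assert loglik(lam_hat) > loglik(lam_hat * 0.9)
assert loglik(lam_hat) > loglik(lam_hat * 1.1)
print(lam_hat)
```

For distributions such as the Weibull, the analogous equations have no closed-form solution and must be solved numerically, as noted later in this chapter.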
&lt;br /&gt;
=== MLE for Right Censored Data  ===&lt;br /&gt;
When performing maximum likelihood analysis on data with suspended items, the likelihood function needs to be expanded to take into account the suspended items. The overall estimation technique does not change, but another term is added to the likelihood function to account for the suspended items. Beyond that, the method of solving for the parameter estimates remains the same. For example, consider a distribution where &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039; and &#039;&#039;cdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
    &amp;amp; F(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the unknown parameters which need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; observed failures at &amp;lt;math&amp;gt;{{T}_{1}},{{T}_{2}},...,{{T}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; observed suspensions at &amp;lt;math&amp;gt;{{S}_{1}},{{S}_{2}},...,{{S}_{M}}\,\!&amp;lt;/math&amp;gt;. The likelihood function is formulated as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R,}}{{S}_{1}},...,{{S}_{M}})= &amp;amp; \underset{i=1}{\overset{R}{\mathop \prod }}\,f({{T}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \cdot \underset{j=1}{\overset{M}{\mathop \prod }}\,[1-F({{S}_{j}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameters are solved by maximizing this equation. In most cases, no closed-form solution exists for this maximum or for the parameters. Solutions specific to each distribution utilizing MLE are presented in [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
&lt;br /&gt;
=== MLE for Interval and Left Censored Data  ===&lt;br /&gt;
The inclusion of left and interval censored data in an MLE solution for parameter estimates involves adding a term to the likelihood equation to account for the data types in question. When using interval data, it is assumed that the failures occurred in an interval; i.e., in the interval from time &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or from time 0 to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; if left censored), where &amp;lt;math&amp;gt;A &amp;lt; B\,\!&amp;lt;/math&amp;gt;. In the case of interval data, and given &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; interval observations, the likelihood function is modified by multiplying the likelihood function with an additional term as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{P}})= &amp;amp; \underset{i=1}{\overset{P}{\mathop \prod }}\,\{F({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \ \ -F({{x}_{i-1}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that if only interval data are present, this term will represent the entire likelihood function for the MLE solution. The next section gives a formulation of the complete likelihood function for all possible censoring schemes.&lt;br /&gt;
&lt;br /&gt;
=== The Complete Likelihood Function  ===&lt;br /&gt;
We have now seen that obtaining MLE parameter estimates for different types of data involves incorporating different terms in the likelihood function to account for complete data, right censored data, and left or interval censored data. After including the terms for the different types of data, the likelihood function can be expressed in its complete form: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
    L= &amp;amp; \underset{i=1}{\mathop{\overset{R}{\mathop{\prod }}\,}}\,f({{T}_{i}};{{\theta }_{1}},...,{{\theta }_{k}})\cdot \underset{j=1}{\mathop{\overset{M}{\mathop{\prod }}\,}}\,[1-F({{S}_{j}};{{\theta }_{1}},...,{{\theta }_{k}})]  \\&lt;br /&gt;
    &amp;amp; \cdot \underset{l=1}{\mathop{\overset{P}{\mathop{\prod }}\,}}\,\left\{ F({{I}_{{{l}_{U}}}};{{\theta }_{1}},...,{{\theta }_{k}})-F({{I}_{{{l}_{L}}}};{{\theta }_{1}},...,{{\theta }_{k}}) \right\}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; L\to L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R}},{{S}_{1}},...,{{S}_{M}},{{I}_{1}},...{{I}_{P}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
*&amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the number of units with exact failures &lt;br /&gt;
*&amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is the number of suspended units &lt;br /&gt;
*&amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is the number of units with left censored or interval times-to-failure &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the parameters of the distribution &lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure&lt;br /&gt;
*&amp;lt;math&amp;gt;{{S}_{j}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; time of suspension&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{U}}}}\,\!&amp;lt;/math&amp;gt; is the ending of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{L}}}}\,\!&amp;lt;/math&amp;gt; is the beginning of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;The total number of units is &amp;lt;math&amp;gt;N = R + M + P\,\!&amp;lt;/math&amp;gt;. It should be noted that in this formulation, if either &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is zero then the product term associated with them is assumed to be one and not zero.&lt;br /&gt;
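The complete likelihood can be written down directly for any assumed distribution. The sketch below does so for a 2-parameter Weibull (taking logarithms for numerical stability); all data values are hypothetical. When a data type is absent, its sum is empty and contributes nothing to the log-likelihood, matching the convention that an empty product term is taken to be one.

```python
# Sketch of the complete log-likelihood for a 2-parameter Weibull, with
# exact failures T_i, suspensions S_j, and intervals (I_lL, I_lU).
# Hypothetical data; a sketch, not a production implementation.
import math

def weib_pdf(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1.0) * math.exp(-((t / eta) ** beta))

def weib_cdf(t, beta, eta):
    return 1.0 - math.exp(-((t / eta) ** beta))

def complete_loglik(beta, eta, failures, suspensions, intervals):
    # exact failures contribute ln f(T_i)
    ll = sum(math.log(weib_pdf(t, beta, eta)) for t in failures)
    # suspensions contribute ln [1 - F(S_j)]
    ll += sum(math.log(1.0 - weib_cdf(s, beta, eta)) for s in suspensions)
    # intervals contribute ln [F(I_lU) - F(I_lL)]
    ll += sum(math.log(weib_cdf(u, beta, eta) - weib_cdf(l, beta, eta))
              for l, u in intervals)
    return ll

print(complete_loglik(1.5, 400.0, [100.0, 250.0], [300.0], [(50.0, 150.0)]))
```

Maximizing this function over the parameters, numerically in general, yields the MLE for any mix of complete, right censored, and interval or left censored observations.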
&lt;br /&gt;
== Comments on the MLE Method  ==&lt;br /&gt;
The MLE method has many large sample properties that make it attractive for use. It is asymptotically consistent, which means that as the sample size gets larger, the estimates converge to the right values. It is asymptotically efficient, which means that for large samples, it produces the most precise estimates. It is asymptotically unbiased, which means that for large samples, one expects to get the right value on average. The distribution of the estimates themselves is normal, if the sample is large enough, and this is the basis for the usual [[Confidence_Bounds#Fisher_Matrix_Confidence_Bounds|Fisher Matrix Confidence Bounds]] discussed later. These are all excellent large sample properties. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, the size of the sample necessary to achieve these properties can be quite large: thirty to fifty to more than a hundred exact failure times, depending on the application. With fewer points, the methods can be badly biased. It is known, for example, that MLE estimates of the shape parameter for the Weibull distribution are badly biased for small sample sizes, and the effect can be increased depending on the amount of censoring. This bias can cause major discrepancies in analysis. There are also pathological situations when the asymptotic properties of the MLE do not apply. One of these is estimating the location parameter for the three-parameter Weibull distribution when the shape parameter has a value close to 1. These problems, too, can cause major discrepancies. &lt;br /&gt;
&lt;br /&gt;
However, MLE can handle suspensions and interval data better than rank regression, particularly when dealing with a heavily censored data set with few exact failure times or when the censoring times are unevenly distributed. It can also provide estimates with one or no observed failures, which rank regression cannot do. As a rule of thumb, our recommendation is to use rank regression techniques when the sample sizes are small and without heavy censoring (censoring is discussed in [[Life Data Classification|Life Data Classifications]]). When heavy or uneven censoring is present, when a high proportion of interval data is present and/or when the sample size is sufficient, MLE should be preferred. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Appendix:_Maximum_Likelihood_Estimation_Example|Maximum Likelihood Parameter Estimation Example]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Bayesian Parameter Estimation Methods=&lt;br /&gt;
Up to this point, we have dealt exclusively with what is commonly referred to as classical statistics. In this section, another school of thought in statistical analysis will be introduced, namely Bayesian statistics. The premise of Bayesian statistics (within the context of life data analysis) is to incorporate prior knowledge, along with a given set of current observations, in order to make statistical inferences. The prior information could come from operational or observational data, from previous comparable experiments or from engineering knowledge.  This type of analysis can be particularly useful when there is limited test data for a given design or failure mode but there is a strong prior understanding of the failure rate behavior for that design or mode. By incorporating prior information about the parameter(s), a posterior distribution for the parameter(s) can be obtained and inferences on the model parameters and their functions can be made. This section is intended to give a quick and elementary overview of Bayesian methods, focused primarily on the material necessary for understanding the Bayesian analysis methods available in Weibull++. Extensive coverage of the subject can be found in numerous books dealing with Bayesian statistics.&lt;br /&gt;
&lt;br /&gt;
===Bayes’s Rule===&lt;br /&gt;
Bayes’s rule provides the framework for combining prior information with sample data. In this reference, we apply Bayes’s rule for combining prior information on the assumed distribution&#039;s parameter(s) with sample data in order to make inferences based on the model. The prior knowledge about the parameter(s) is expressed in terms of a distribution &amp;lt;math&amp;gt;\varphi (\theta )\,\!&amp;lt;/math&amp;gt;, called the &#039;&#039;prior distribution&#039;&#039;. The &#039;&#039;posterior&#039;&#039; distribution of &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; given the sample data, using Bayes&#039;s rule, provides the updated information about the parameters &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt;. This is expressed with the following posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta |Data) = \frac{L(Data|\theta )\varphi (\theta )}{\int_{\zeta}^{} L(Data|\theta )\varphi(\theta )d\theta }&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; is a vector of the parameters of the chosen distribution&lt;br /&gt;
*&amp;lt;math&amp;gt;\zeta\,\!&amp;lt;/math&amp;gt; is the range of &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt; L(Data|\theta)\,\!&amp;lt;/math&amp;gt; is the likelihood function based on the chosen distribution and data&lt;br /&gt;
*&amp;lt;math&amp;gt;\varphi(\theta )\,\!&amp;lt;/math&amp;gt; is the prior distribution for each of the parameters&lt;br /&gt;
&lt;br /&gt;
The integral in the Bayes&#039;s rule equation is often referred to as the marginal probability, which is a constant number that can be interpreted as the probability of obtaining the sample data given a prior distribution. Generally, the integral in the Bayes&#039;s rule equation does not have a closed form solution and numerical methods are needed for its solution.&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Bayes&#039;s rule equation, there is a significant difference between classical and Bayesian statistics. First, the idea of prior information does not exist in classical statistics. All inferences in classical statistics are based on the sample data. On the other hand, in the Bayesian framework, prior information constitutes the basis of the theory. Another difference is in the overall approach of making inferences and their interpretation. For example, in Bayesian analysis, the parameters of the distribution to be fitted are the random variables. In reality, there is no distribution fitted to the data in the Bayesian case.&lt;br /&gt;
&lt;br /&gt;
For instance, consider the case where data is obtained from a reliability test. Based on prior experience on a similar product, the analyst believes that the shape parameter of the Weibull distribution has a value between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\beta }_{2}}\,\!&amp;lt;/math&amp;gt; and wants to utilize this information. This can be achieved by using Bayes&#039;s theorem. At this point, the analyst is automatically forcing the Weibull distribution as a model for the data and with a shape parameter between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{\beta _2}\,\!&amp;lt;/math&amp;gt;. In this example, the range of values for the shape parameter is the prior distribution, which in this case is uniform. By applying Bayes&#039;s rule, the posterior distribution of the shape parameter will be obtained. Thus, we end up with a distribution for the parameter rather than an estimate of the parameter, as in classical statistics.&lt;br /&gt;
&lt;br /&gt;
To better illustrate the example, assume that a set of failure data was provided along with a distribution for the shape parameter (i.e., uniform prior) of the Weibull (automatically assuming that the data are Weibull distributed). Based on that, a new distribution (the posterior) for that parameter is then obtained using Bayes&#039;s rule. This posterior distribution of the parameter may or may not resemble in form the assumed prior distribution. In other words, in this example the prior distribution of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; was assumed to be uniform but the posterior is most likely not a uniform distribution.&lt;br /&gt;
&lt;br /&gt;
The question now becomes: what is the value of the shape parameter? What about the reliability and other results of interest? In order to answer these questions, we have to remember that in the Bayesian framework all of these metrics are random variables. Therefore, in order to obtain an estimate, a probability needs to be specified or we can use the expected value of the posterior distribution.&lt;br /&gt;
&lt;br /&gt;
In order to demonstrate the procedure of obtaining results from the posterior distribution, we will rewrite the Bayes&#039;s rule equation for a single parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f({\theta _1}|Data) = \frac{L(Data|\theta_1 )\varphi (\theta_1 )}{\int_{\zeta}^{} L(Data|\theta_1 )\varphi(\theta_1 )d\theta_1 }&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The expected value (or mean value) of the parameter &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained using the equation for the mean and the Bayes&#039;s rule equation for a single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E({\theta _1}) = {m_{{\theta _1}}} = \int_{\zeta}^{}{\theta _1} \cdot f({\theta _1}|Data)d{\theta _1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative result for &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; would be the median value. Using the equation for the median and the Bayes&#039;s rule equation for a single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{\theta }_{0.5}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation for the median is solved for &amp;lt;math&amp;gt;{\theta _{0.5}}\,\!&amp;lt;/math&amp;gt;, the median value of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
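The mean and median calculations above can be carried out numerically. The sketch below uses hypothetical data and an assumed model, an exponential likelihood for the failure rate parameter with a uniform prior, evaluating the posterior on a grid; the denominator in Bayes's rule (the marginal probability) is the grid sum.

```python
# Numerical sketch (hypothetical data, assumed exponential model):
# posterior of the failure rate lam under a uniform prior on [0.001, 0.1],
# evaluated on a grid; posterior mean and median follow directly.
import math

times = [150.0, 300.0, 480.0]                  # hypothetical failure times

def likelihood(lam):
    return math.prod(lam * math.exp(-lam * t) for t in times)

n = 2000
grid = [0.001 + (0.1 - 0.001) * i / (n - 1) for i in range(n)]
prior = 1.0 / (0.1 - 0.001)                    # flat prior density
weights = [likelihood(lam) * prior for lam in grid]
marginal = sum(weights)                        # the Bayes's-rule denominator
posterior = [w / marginal for w in weights]    # sums to 1 on the grid

# expected value of the parameter under the posterior
mean = sum(lam * p for lam, p in zip(grid, posterior))

# median: smallest grid point where the cumulative posterior reaches 0.5
cum, median = 0.0, grid[-1]
for lam, p in zip(grid, posterior):
    cum += p
    if cum >= 0.5:
        median = lam
        break
print(mean, median)
```

Any other percentile is obtained the same way, by accumulating the posterior until the desired probability is reached.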
&lt;br /&gt;
Similarly, any other percentile of the posterior &#039;&#039;pdf&#039;&#039; can be calculated and reported. For example, one could calculate the 90th percentile of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;’s posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{{\theta }_{0.9}}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.9\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This calculation will be used in [[Confidence Bounds]] and [[The Weibull Distribution]] for obtaining confidence bounds on the parameter(s).&lt;br /&gt;
&lt;br /&gt;
The next step will be to make inferences on the reliability. Since the parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; is a random variable described by the posterior &#039;&#039;pdf,&#039;&#039; all subsequent functions of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; are distributed random variables as well and are entirely based on the posterior &#039;&#039;pdf&#039;&#039; of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt;. Therefore, expected value, median or other percentile values will also need to be calculated. For example, the expected reliability at time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E[R(T|Data)] = \int_{\zeta}^{} R(T)f(\theta |Data)d{\theta}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, at a given time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, there is a distribution that governs the reliability value at that time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, and by using Bayes&#039;s rule, the expected (or mean) value of the reliability is obtained. Other percentiles of this distribution can also be obtained.&lt;br /&gt;
A similar procedure is followed for other functions of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;, such as failure rate, reliable life, etc.&lt;br /&gt;
&lt;br /&gt;
===Prior Distributions===&lt;br /&gt;
Prior distributions play a very important role in Bayesian statistics. They are essentially the basis of Bayesian analysis. Different types of prior distributions exist, namely &#039;&#039;informative&#039;&#039; and &#039;&#039;non-informative&#039;&#039;. Non-informative prior distributions (a.k.a. &#039;&#039;vague&#039;&#039;, &#039;&#039;flat&#039;&#039; and &#039;&#039;diffuse&#039;&#039;) are distributions that have no population basis and play a minimal role in the posterior distribution. Non-informative prior distributions are used to make inferences that are not greatly affected by external information, or when external information is not available. The uniform distribution is frequently used as a non-informative prior.&lt;br /&gt;
&lt;br /&gt;
On the other hand, informative priors have a stronger influence on the posterior distribution. The influence of the prior distribution on the posterior is related to the sample size of the data and the form of the prior. Generally speaking, large sample sizes are required to modify strong priors, whereas weak priors are overwhelmed by even relatively small sample sizes. Informative priors are typically obtained from past data.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56795</id>
		<title>Parameter Estimation</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56795"/>
		<updated>2014-12-03T20:55:38Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Rank Adjustment Method for Right Censored Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|4|Parameter Estimation}}&lt;br /&gt;
The term &#039;&#039;parameter estimation&#039;&#039; refers to the process of using sample data (in reliability engineering, usually times-to-failure or success data) to estimate the parameters of the selected distribution. Several parameter estimation methods are available. This section presents an overview of the available methods used in life data analysis. More specifically, we start with the relatively simple method of Probability Plotting and continue with the more sophisticated methods of Rank Regression (or Least Squares), Maximum Likelihood Estimation and Bayesian Estimation Methods.&lt;br /&gt;
&lt;br /&gt;
=Probability Plotting=&lt;br /&gt;
The least mathematically intensive method for parameter estimation is the method of probability plotting. As the term implies, probability plotting involves a physical plot of the data on specially constructed &#039;&#039;probability plotting paper&#039;&#039;. This method is easily implemented by hand, given that one can obtain the appropriate probability plotting paper.&lt;br /&gt;
&lt;br /&gt;
The method of probability plotting takes the &#039;&#039;cdf&#039;&#039; of the distribution and attempts to linearize it by employing a specially constructed paper. The following sections illustrate the steps in this method using the 2-parameter Weibull distribution as an example. This includes:&lt;br /&gt;
&lt;br /&gt;
*Linearize the unreliability function&lt;br /&gt;
*Construct the probability plotting paper&lt;br /&gt;
*Determine the X and Y positions of the plot points&lt;br /&gt;
&lt;br /&gt;
And then using the plot to read any particular time or reliability/unreliability value of interest.&lt;br /&gt;
&lt;br /&gt;
==Linearizing the Unreliability Function==&lt;br /&gt;
&lt;br /&gt;
In the case of the 2-parameter Weibull, the &#039;&#039;cdf&#039;&#039; (also the unreliability &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt;) is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(t)=Q(t)=1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function can then be linearized (i.e., put in the common linear form &amp;lt;math&amp;gt;y = mx + b\,\!&amp;lt;/math&amp;gt;) as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 Q(t)= &amp;amp;  1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}  \\&lt;br /&gt;
  \ln (1-Q(t))= &amp;amp; \ln \left[ {e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}} \right]  \\&lt;br /&gt;
  \ln (1-Q(t))=&amp;amp; -\left(\tfrac{t}{\eta}\right)^{\beta}  \\&lt;br /&gt;
  \ln ( -\ln (1-Q(t)))= &amp;amp; \beta \left(\ln \left( \frac{t}{\eta }\right)\right) \\&lt;br /&gt;
   \ln \left( \ln \left( \frac{1}{1-Q(t)}\right) \right) = &amp;amp; \beta \ln t -\beta \ln (\eta )  \\&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then by setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x=\ln \left( t \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equation can then be rewritten as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\beta x-\beta \ln \left( \eta  \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is now a linear equation with a slope of: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
m = \beta&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and an intercept of:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=-\beta \cdot \ln (\eta)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
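As a quick numerical check of the derivation above (not part of the original reference), the following Python sketch applies the two transforms to points from a 2-parameter Weibull &#039;&#039;cdf&#039;&#039; with hypothetical parameters, and confirms that the slope of the transformed points is &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and the intercept is &amp;lt;math&amp;gt;-\beta \ln (\eta)\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import math

# Hypothetical Weibull parameters, chosen for illustration only.
beta, eta = 1.5, 1000.0

def weibull_unreliability(t):
    """Q(t) = 1 - exp(-(t/eta)^beta) for the 2-parameter Weibull."""
    return 1.0 - math.exp(-((t / eta) ** beta))

# Apply the linearizing transforms: x = ln(t), y = ln(ln(1/(1 - Q(t)))).
t1, t2 = 200.0, 5000.0
x1, x2 = math.log(t1), math.log(t2)
y1 = math.log(math.log(1.0 / (1.0 - weibull_unreliability(t1))))
y2 = math.log(math.log(1.0 / (1.0 - weibull_unreliability(t2))))

# The slope of the transformed points recovers beta, and the
# intercept recovers -beta * ln(eta).
slope = (y2 - y1) / (x2 - x1)
intercept = y1 - slope * x1
print(slope)                          # close to beta = 1.5
print(math.exp(-intercept / slope))   # close to eta = 1000
```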
==Constructing the Paper==&lt;br /&gt;
The next task is to construct the Weibull probability plotting paper with the appropriate y and x axes. The x-axis transformation is simply logarithmic. The y-axis is a bit more complex, requiring a double log reciprocal transformation, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt; is the unreliability. &lt;br /&gt;
&lt;br /&gt;
Such papers have been created by different vendors and are called &#039;&#039;probability plotting papers&#039;&#039;. ReliaSoft&#039;s reliability engineering resource website at www.weibull.com has different plotting papers available for [http://www.weibull.com/GPaper/index.htm download]. &lt;br /&gt;
&lt;br /&gt;
[[Image:WeibullPaper2C.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
To illustrate, consider the following probability plot on a slightly different type of Weibull probability paper. &lt;br /&gt;
&lt;br /&gt;
[[Image:different_weibull_paper.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
This paper is constructed using the y and x transformations described above, where the y-axis represents unreliability and the x-axis represents time. Both of these values must be known for each time-to-failure point we want to plot. &lt;br /&gt;
&lt;br /&gt;
Then, given the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; value for each point, the points can easily be put on the plot. Once the points have been placed on the plot, the best possible straight line is drawn through these points. Once the line has been drawn, its slope can be obtained (some probability papers include a slope indicator to simplify this calculation); this slope is the parameter &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;. To determine the scale parameter, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; (also called the &#039;&#039;characteristic life&#039;&#039;), one reads the time from the x-axis corresponding to &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Note that at:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Q(t=\eta)= &amp;amp; 1-{{e}^{-{{\left( \tfrac{\eta }{\eta } \right)}^{\beta }}}} \\ &lt;br /&gt;
  = &amp;amp; 1-{{e}^{-1}} \\ &lt;br /&gt;
  = &amp;amp; 0.632 \\ &lt;br /&gt;
  = &amp;amp; 63.2%  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, if we enter the &#039;&#039;y&#039;&#039; axis at &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;, the corresponding value of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; will be equal to &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;. In this way, the parameters of the Weibull distribution can be estimated using this simple methodology.&lt;br /&gt;
&lt;br /&gt;
==Determining the X and Y Position of the Plot Points==&lt;br /&gt;
The points on the plot represent our data or, more specifically, our times-to-failure data. If, for example, we tested four units that failed at 10, 20, 30 and 40 hours, then we would use these times as our &#039;&#039;x&#039;&#039; values or time values. &lt;br /&gt;
&lt;br /&gt;
Determining the appropriate &#039;&#039;y&#039;&#039; plotting positions, or the unreliability values, is a little more complex. To determine the &#039;&#039;y&#039;&#039; plotting positions, we must first determine a value indicating the corresponding unreliability for that failure. In other words, we need to obtain the cumulative percent failed for each time-to-failure. For example, the cumulative percent failed by 10 hours may be 25%, by 20 hours 50%, and so forth. This is a simple method illustrating the idea. The problem with this simple method is the fact that the 100% point is not defined on most probability plots; thus, an alternative and more robust approach must be used. The most widely used method of determining this value is the method of obtaining the &#039;&#039;median rank&#039;&#039; for each failure, as discussed next.&lt;br /&gt;
&lt;br /&gt;
===Median Ranks ===&lt;br /&gt;
The Median Ranks method is used to obtain an estimate of the unreliability for each failure. The median rank is the value that the true probability of failure, &amp;lt;math&amp;gt;Q({{T}_{j}})\,\!&amp;lt;/math&amp;gt;, should have at the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure out of a sample of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units at the 50% confidence level. &lt;br /&gt;
&lt;br /&gt;
The rank can be found for any percentage point, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, greater than zero and less than one, by solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;. This represents the rank, or unreliability estimate, for the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure in the following equation for the cumulative binomial: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the sample size and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; the order number. &lt;br /&gt;
&lt;br /&gt;
The median rank is obtained by solving this equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  at &amp;lt;math&amp;gt;P = 0.50\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;0.50=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; and we have four failures, we would solve the median rank equation for the value of &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  four times; once for each failure with &amp;lt;math&amp;gt;j= 1, 2, 3 \text{ and }4\,\!&amp;lt;/math&amp;gt;. This result can then be used as the unreliability estimate for each failure or the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;  plotting position. (See also [[The Weibull Distribution|The Weibull Distribution]]&amp;amp;nbsp;for a step-by-step example of this method.) The solution of the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  requires the use of numerical methods.&lt;br /&gt;
&lt;br /&gt;
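The numerical solution mentioned above is straightforward to sketch: since the cumulative binomial increases monotonically in &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;, bisection converges reliably. The Python sketch below solves for the median ranks of the four-failure example (the tolerance is an arbitrary choice):&lt;br /&gt;

```python
import math

def cumulative_binomial(z, n, j):
    """P = sum over k = j..n of C(n,k) * z^k * (1-z)^(n-k)."""
    return sum(math.comb(n, k) * z**k * (1 - z)**(n - k)
               for k in range(j, n + 1))

def median_rank(n, j, p=0.50, tol=1e-10):
    """Solve the cumulative binomial equation for Z at confidence p.
    P increases monotonically in z, so bisection converges."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if cumulative_binomial(mid, n, j) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Median ranks for each of the 4 failures out of N = 4 units.
ranks = [median_rank(4, j) for j in (1, 2, 3, 4)]
print([round(r, 4) for r in ranks])  # [0.1591, 0.3857, 0.6143, 0.8409]
```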
===Beta and F Distributions Approach===&lt;br /&gt;
A more straightforward method of estimating median ranks is by applying two transformations to the cumulative binomial equation, first to the beta distribution and then to the F distribution, resulting in [[Appendix:_Life_Data_Analysis_References|[12, 13]]]: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   MR &amp;amp; = &amp;amp; \tfrac{1}{1+\tfrac{N-j+1}{j}{{F}_{0.50;m;n}}}  \\&lt;br /&gt;
   m &amp;amp; = &amp;amp; 2(N-j+1)  \\&lt;br /&gt;
   n &amp;amp; = &amp;amp; 2j  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{F}_{0.50;m;n}}\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution at the 0.50 point, with &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; degrees of freedom, for failure &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; out of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units.&lt;br /&gt;
&lt;br /&gt;
=== Benard&#039;s Approximation for Median Ranks  ===&lt;br /&gt;
A quicker, though less accurate, approximation of the median ranks is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MR = \frac{{j - 0.3}}{{N + 0.4}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This approximation of the median ranks is also known as &#039;&#039;Benard&#039;s approximation&#039;&#039;.&lt;br /&gt;
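Because Benard&#039;s approximation is a simple closed form, it is easy to tabulate. The sketch below computes it for the same hypothetical case of four failures out of &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; units; the results are close to, but not identical with, the exact median ranks:&lt;br /&gt;

```python
def benard_median_rank(j, n):
    """Benard's approximation: MR = (j - 0.3) / (N + 0.4)."""
    return (j - 0.3) / (n + 0.4)

# Approximate median ranks for 4 failures out of N = 4 units.
approx = [round(benard_median_rank(j, 4), 4) for j in (1, 2, 3, 4)]
print(approx)  # [0.1591, 0.3864, 0.6136, 0.8409]
```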
&lt;br /&gt;
===Kaplan-Meier===&lt;br /&gt;
The Kaplan-Meier estimator (also known as the &#039;&#039;product limit estimator&#039;&#039;) is used as an alternative to the median ranks method for calculating the estimates of the unreliability for probability plotting purposes. The equation of the estimator is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{F}({{t}_{i}})=1-\underset{j=1}{\overset{i}{\mathop \prod }}\,\frac{{{n}_{j}}-{{r}_{j}}}{{{n}_{j}}},\text{ }i=1,...,m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  m =  &amp;amp; {\text{total number of data points}} \\ &lt;br /&gt;
  n =  &amp;amp; {\text{the total number of units}} \\ &lt;br /&gt;
  {n_i} =  &amp;amp; n - \sum_{j = 0}^{i - 1}{s_j} - \sum_{j = 0}^{i - 1}{r_j},\text{ }i=1,...,m \\ &lt;br /&gt;
  {r_j} =  &amp;amp; {\text{ number of failures in the }}{j^{th}}{\text{ data group, and}} \\ &lt;br /&gt;
  {s_j} =  &amp;amp; {\text{number of surviving units in the }}{j^{th}}{\text{ data group}} \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
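A minimal sketch of the product limit estimator in Python follows; the data groups here are hypothetical (pairs of failures &amp;lt;math&amp;gt;{r_j}\,\!&amp;lt;/math&amp;gt; and suspensions &amp;lt;math&amp;gt;{s_j}\,\!&amp;lt;/math&amp;gt; per group, in time order):&lt;br /&gt;

```python
def kaplan_meier_unreliability(groups):
    """Kaplan-Meier (product limit) unreliability estimates.

    groups: list of (failures r_j, suspensions s_j) per data group,
    in time order.  Returns F-hat at each group's time.
    """
    n = sum(r + s for r, s in groups)   # total number of units
    at_risk = n
    survival = 1.0
    estimates = []
    for r, s in groups:
        survival *= (at_risk - r) / at_risk
        estimates.append(1.0 - survival)
        at_risk -= r + s                # remove failed and suspended units
    return estimates

# Hypothetical data: 5 units; alternating single failures and suspensions.
est = kaplan_meier_unreliability([(1, 0), (0, 1), (1, 0), (0, 1), (1, 0)])
print([round(f, 4) for f in est])  # [0.2, 0.2, 0.4667, 0.4667, 1.0]
```

Note that the estimate reaches 1.0 at the last group because the final surviving unit fails there, a well-known property of the product limit estimator.&lt;br /&gt;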
== Probability Plotting Example  ==&lt;br /&gt;
This same methodology can be applied to other distributions with &#039;&#039;cdf&#039;&#039; equations that can be linearized. Different probability papers exist for each distribution, because different distributions have different &#039;&#039;cdf&#039;&#039; equations. ReliaSoft&#039;s software tools automatically create these plots for you. Special scales on these plots allow you to derive the parameter estimates directly from the plots, similar to the way &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; were obtained from the Weibull probability plot. The following example demonstrates the method again, this time using the 1-parameter exponential distribution.&lt;br /&gt;
&lt;br /&gt;
{{:Probability Plotting Example}}&lt;br /&gt;
&lt;br /&gt;
== Comments on the Probability Plotting Method ==&lt;br /&gt;
Besides the most obvious drawback to probability plotting, which is the amount of effort required, manual probability plotting does not always produce consistent results. Two people plotting a straight line through a set of points will not always draw the line the same way, and thus will come up with slightly different results. This method was used primarily before the widespread use of computers that could easily perform the calculations for more complicated parameter estimation methods, such as the least squares and maximum likelihood methods.&lt;br /&gt;
&lt;br /&gt;
= Least Squares (Rank Regression)  =&lt;br /&gt;
Using the idea of probability plotting, regression analysis mathematically fits the best straight line to a set of points, in an attempt to estimate the parameters. Essentially, this is a mathematically based version of the probability plotting method discussed previously. &lt;br /&gt;
&lt;br /&gt;
The method of linear least squares is used for all regression analysis performed by Weibull++, except for the cases of the 3-parameter Weibull, mixed Weibull, gamma and generalized gamma distributions, where a non-linear regression technique is employed. The terms &#039;&#039;linear regression&#039;&#039; and &#039;&#039;least squares&#039;&#039; are used synonymously in this reference. In Weibull++, the term &#039;&#039;rank regression&#039;&#039; is used instead of least squares, or linear regression, because the regression is performed on the rank values, more specifically, the median rank values (represented on the y-axis). The method of least squares requires that a straight line be fitted to a set of data points, such that the sum of the squares of the distances of the points to the fitted line is minimized. This minimization can be performed in either the vertical or horizontal direction. If the regression is on &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the horizontal deviations from the points to the line are minimized. If the regression is on &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the vertical deviations from the points to the line are minimized. This is illustrated in the following figure. &lt;br /&gt;
&lt;br /&gt;
[[Image:minimizingdistance.png|center|500px]]&lt;br /&gt;
&lt;br /&gt;
=== Rank Regression on Y  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;-values are known exactly. Then, according to the &#039;&#039;least squares principle,&#039;&#039; which minimizes the vertical distance between the data points and the straight line fitted to the data, the best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;y=\hat{a}+\hat{b}x\,\!&amp;lt;/math&amp;gt; (where the recently introduced (&amp;lt;math&amp;gt;\hat{ }\,\!&amp;lt;/math&amp;gt;) symbol indicates that this value is an estimate) such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\sum\limits_{i=1}^{N}{{{\left( \hat{a}+\hat{b}{{x}_{i}}-{{y}_{i}} \right)}^{2}}=\min \sum\limits_{i=1}^{N}{{{\left( a+b{{x}_{i}}-{{y}_{i}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and where &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat b\,\!&amp;lt;/math&amp;gt; are the &#039;&#039;least squares estimates&#039;&#039; of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}=\bar{y}-\hat{b}\bar{x}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
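The two estimator formulas above translate directly into code. The following Python sketch uses arbitrary data points chosen to lie exactly on the line &amp;lt;math&amp;gt;y=2+3x\,\!&amp;lt;/math&amp;gt;, so the fit recovers the coefficients exactly:&lt;br /&gt;

```python
def rank_regression_on_y(xs, ys):
    """Least squares regression on Y: returns (a_hat, b_hat) for
    y = a + b*x, minimizing the vertical deviations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    b_hat = (sxy - sx * sy / n) / (sxx - sx * sx / n)
    a_hat = sy / n - b_hat * sx / n   # a_hat = y-bar - b_hat * x-bar
    return a_hat, b_hat

# Points on the exact line y = 2 + 3x are recovered exactly.
a, b = rank_regression_on_y([1.0, 2.0, 3.0, 4.0], [5.0, 8.0, 11.0, 14.0])
print(a, b)  # 2.0 3.0
```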
=== Rank Regression on X  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the y-values are known exactly. The same least squares principle is applied, but this time, minimizing the horizontal distance between the data points and the straight line fitted to the data. The best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;x=\widehat{a}+\widehat{b}y\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(\widehat{a}+\widehat{b}{{y}_{i}}-{{x}_{i}})}^{2}}=\min \underset{i=1}{\overset{N}{\mathop \sum }}\,{{(a+b{{y}_{i}}-{{x}_{i}})}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat b\,\!&amp;lt;/math&amp;gt; are the least squares estimates of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}=\bar{x}-\hat{b}\bar{y}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
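Regression on X mirrors the regression-on-Y sketch with the roles of the variables swapped. A minimal Python sketch, again with arbitrary points on an exact line (here the fitted line is &amp;lt;math&amp;gt;x=-\tfrac{2}{3}+\tfrac{1}{3}y\,\!&amp;lt;/math&amp;gt;, the inverse of &amp;lt;math&amp;gt;y=2+3x\,\!&amp;lt;/math&amp;gt;):&lt;br /&gt;

```python
def rank_regression_on_x(xs, ys):
    """Least squares regression on X: returns (a_hat, b_hat) for
    x = a + b*y, minimizing the horizontal deviations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    syy = sum(y * y for y in ys)
    b_hat = (sxy - sx * sy / n) / (syy - sy * sy / n)
    a_hat = sx / n - b_hat * sy / n   # a_hat = x-bar - b_hat * y-bar
    return a_hat, b_hat

# Points on an exact line: RRX recovers x = -2/3 + (1/3) y.
a, b = rank_regression_on_x([1.0, 2.0, 3.0, 4.0], [5.0, 8.0, 11.0, 14.0])
print(a, b)
```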
The corresponding relations for determining the parameters for specific distributions (i.e., Weibull, exponential, etc.), are presented in the chapters covering that distribution.&lt;br /&gt;
&lt;br /&gt;
=== Correlation Coefficient  ===&lt;br /&gt;
The correlation coefficient is a measure of how well the linear regression model fits the data and is usually denoted by &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt;. In the case of life data analysis, it is a measure for the strength of the linear relation (correlation) between the median ranks and the data. The population correlation coefficient is defined as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\rho =\frac{{{\sigma }_{xy}}}{{{\sigma }_{x}}{{\sigma }_{y}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\sigma}_{xy}} = \,\!&amp;lt;/math&amp;gt; covariance of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\sigma}_{x}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{\sigma}_{y}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The estimator of &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt; is the sample correlation coefficient, &amp;lt;math&amp;gt;\hat{\rho }\,\!&amp;lt;/math&amp;gt;, given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\sqrt{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N} \right)\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The range of &amp;lt;math&amp;gt;\hat \rho \,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;-1\le \hat{\rho }\le 1\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
[[Image:correlationcoeffficient.png|center|500px]] &lt;br /&gt;
&lt;br /&gt;
The closer the value is to &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt;, the better the linear fit. Note that +1 indicates a perfect fit (the paired values (&amp;lt;math&amp;gt;{{x}_{i}},{{y}_{i}}\,\!&amp;lt;/math&amp;gt;) lie on a straight line) with a positive slope, while -1 indicates a perfect fit with a negative slope. A correlation coefficient value of zero would indicate that the data are randomly scattered and have no pattern or correlation in relation to the regression line model.&lt;br /&gt;
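The sample correlation coefficient formula above can be sketched directly from the raw sums; the two data sets below are arbitrary, chosen to give perfect positive and perfect negative fits:&lt;br /&gt;

```python
import math

def sample_correlation(xs, ys):
    """Sample correlation coefficient rho-hat from the raw sums."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    num = sxy - sx * sy / n
    den = math.sqrt((sxx - sx**2 / n) * (syy - sy**2 / n))
    return num / den

# A perfect positive linear fit gives +1; a perfect negative fit gives -1.
print(sample_correlation([1, 2, 3], [2, 4, 6]))  # 1.0
print(sample_correlation([1, 2, 3], [6, 4, 2]))  # -1.0
```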
&lt;br /&gt;
===Comments on the Least Squares Method===&lt;br /&gt;
The least squares estimation method is quite good for functions that can be linearized. For these distributions, the calculations are relatively easy and straightforward, having closed-form solutions that can readily yield an answer without having to resort to numerical techniques or tables. Furthermore, this technique provides a good measure of the goodness-of-fit of the chosen distribution in the correlation coefficient. Least squares is generally best used with data sets containing complete data, that is, data consisting only of single times-to-failure with no censored or interval data. (See [[Life Data Classification]] for information about the different data types, including complete, left censored, right censored (or suspended) and interval data.) &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Least Squares/Rank Regression Equations]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Rank Methods for Censored Data=&lt;br /&gt;
All available data should be considered in the analysis of times-to-failure data. This includes the case when a particular unit in a sample has been removed from the test prior to failure. An item, or unit, which is removed from a reliability test prior to failure, or a unit which is in the field and is still operating at the time the reliability of these units is to be determined, is called a &#039;&#039;suspended item&#039;&#039;, &#039;&#039;right censored observation&#039;&#039; or &#039;&#039;right censored data point&#039;&#039;. Suspended items analysis would also be considered when: &lt;br /&gt;
&lt;br /&gt;
#We need to make an analysis of the available results before test completion. &lt;br /&gt;
#The failure modes which are occurring are different than those anticipated and such units are withdrawn from the test. &lt;br /&gt;
#We need to analyze a single mode and the actual data set comprises multiple modes. &lt;br /&gt;
#A &#039;&#039;warranty analysis&#039;&#039; is to be made of all units in the field (non-failed and failed units). The non-failed units are considered to be suspended items (or right censored).&lt;br /&gt;
&lt;br /&gt;
This section describes the rank methods that are used in both probability plotting and least squares (rank regression) to handle censored data. This includes:&lt;br /&gt;
&lt;br /&gt;
*The rank adjustment method for right censored (suspension) data.&lt;br /&gt;
*ReliaSoft&#039;s alternative ranking method for interval censored data.&lt;br /&gt;
=== Rank Adjustment Method for Right Censored Data ===&lt;br /&gt;
When using the probability plotting or least squares (rank regression) method for data sets where some of the units did not fail, or were suspended, we need to adjust their probability of failure, or unreliability. As discussed before, estimates of the unreliability for complete data are obtained using the median ranks approach. The following methodology illustrates how adjusted median ranks are computed to account for right censored data. To better illustrate the methodology, consider the following example in Kececioglu [[Appendix:_Life_Data_Analysis_References|&amp;amp;nbsp;[20]]] where five items are tested resulting in three failures and two suspensions. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Item Number &amp;lt;br&amp;gt;(Position) &lt;br /&gt;
! Failure (F) &amp;lt;br&amp;gt;or Suspension (S) &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 5,100&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,500&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 15,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 22,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 40,000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The methodology for plotting suspended items involves adjusting the rank positions and plotting the data based on new positions, determined by the location of the suspensions. If we consider these five units, the following methodology would be used: The first item must be the first failure; hence, it is assigned failure order number &amp;lt;math&amp;gt;j = 1\,\!&amp;lt;/math&amp;gt;. The actual failure order number (or position) of the second failure, &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; is in doubt. It could either be in position 2 or in position 3. Had &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; not been withdrawn from the test at 9,500 hours, it could have operated successfully past 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 2. Alternatively, &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; could also have failed before 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 3. In this case, the failure order number for &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; will be some number between 2 and 3. To determine this number, consider the following: &lt;br /&gt;
&lt;br /&gt;
We can find the number of ways the second failure can occur in either order number 2 (position 2) or order number 3 (position 3). The possible ways are listed next. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;6&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 2 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 3&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 4 &lt;br /&gt;
| 5 &lt;br /&gt;
| 6 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be seen that &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; can occur in the second position six ways and in the third position two ways. The most probable position is the average of these possible positions, weighted by the number of ways, or the &#039;&#039;mean order number&#039;&#039; (MON), given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}=MO{{N}_{2}}=\frac{(6\times 2)+(2\times 3)}{6+2}=2.25\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
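The mean order number arithmetic above is just a weighted average of the candidate positions, which the following Python sketch reproduces for &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
def mean_order_number(ways):
    """Weighted average of candidate positions.

    ways: dict mapping candidate position -> number of ways the
    failure can occupy that position."""
    total = sum(ways.values())
    return sum(pos * count for pos, count in ways.items()) / total

# F2 can occur in position 2 six ways and in position 3 two ways:
print(mean_order_number({2: 6, 3: 2}))  # 2.25
```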
&amp;lt;br&amp;gt;Using the same logic on the third failure, it can be located in position numbers 3, 4 and 5 in the possible ways listed next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 3 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 4&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 5&lt;br /&gt;
|-&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Then, the mean order number for the third failure (item 5) is: &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=\frac{(2\times 3)+(3\times 4)+(3\times 5)}{2+3+3}=4.125\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Once the mean order number for each failure has been established, we obtain the median rank positions for these failures at their mean order number. Specifically, we obtain the median rank of the order numbers 1, 2.25 and 4.125 out of a sample size of 5, as given next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Plotting Positions for the Failures (Sample Size=5)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Failure Number &lt;br /&gt;
! MON &lt;br /&gt;
! Median Rank Position(%)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1:&amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1 &lt;br /&gt;
| 13%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2:&amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 2.25 &lt;br /&gt;
| 36%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3:&amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 4.125 &lt;br /&gt;
| 71%&lt;br /&gt;
|}&lt;br /&gt;
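The median rank positions in the table above can be reproduced numerically. The short sketch below uses Benard's approximation, MR ≈ (MON − 0.3)/(N + 0.4); this is an approximation to the exact median rank (which solves a cumulative binomial equation), but it matches the rounded table values:

```python
# Median rank positions from mean order numbers (MON) via Benard's
# approximation: MR ~= (MON - 0.3) / (N + 0.4). The exact median rank
# solves the cumulative binomial equation; Benard's formula is a close
# approximation that reproduces the rounded values in the table above.
N = 5  # sample size
mean_order_numbers = [1.0, 2.25, 4.125]

for mon in mean_order_numbers:
    mr = (mon - 0.3) / (N + 0.4)
    print(f"MON {mon}: median rank = {mr:.1%}")  # 13.0%, 36.1%, 70.8%
```

Rounded to whole percentages, these are the 13%, 36% and 71% plotting positions shown above.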
&lt;br /&gt;
&lt;br /&gt;
Once the median rank values have been obtained, the probability plotting analysis is identical to that presented before. As you might have noticed, this methodology is rather laborious. Other techniques and shortcuts have been developed over the years to streamline this procedure. For more details on this method, see Kececioglu [[Appendix:_Life_Data_Analysis_References|[20]]]. Here, we will introduce one of these methods. This method calculates MON using an increment, &#039;&#039;I&#039;&#039;, which is defined by:&lt;br /&gt;
&lt;br /&gt;
:: &amp;lt;math&amp;gt;{{I}_{i}}=\frac{N+1-PMON}{1+NIBPSS}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
* N = the sample size, or total number of items in the test&lt;br /&gt;
* PMON = previous mean order number&lt;br /&gt;
* NIBPSS = the number of items beyond the present suspended set&lt;br /&gt;
* i = the ith failure item&lt;br /&gt;
&lt;br /&gt;
The MON for the &#039;&#039;i&#039;&#039;th failure is then given by:&lt;br /&gt;
 &lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{i}}=MO{{N}_{i-1}}+{{I}_{i}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s recalculate the previous example using this method.&lt;br /&gt;
&lt;br /&gt;
For F1:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{1}}=MO{{N}_{0}}+{{I}_{1}}=\frac{5+1-0}{1+5}=1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For F2:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{2}}=MO{{N}_{1}}+{{I}_{2}}=1+\frac{5+1-1}{1+3}=2.25&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For F3:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=MO{{N}_{2}}+{{I}_{3}}=2.25+\frac{5+1-2.25}{1+1}=4.125&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The MON obtained for each failure via this method is the same as that obtained from the first method, so the median rank values will also be the same.&lt;br /&gt;
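The increment method can be sketched in a few lines of Python. The function below assumes the events are sorted by time, and the ordering F1 &lt; S1 &lt; F2 &lt; S2 &lt; F3 is taken from the example above; the increment is recomputed only after new suspensions appear:

```python
def mean_order_numbers(states):
    """Mean order numbers via the increment method.
    states: 'F'/'S' flags sorted by increasing time-to-event."""
    n = len(states)
    mons, prev, inc = [], 0.0, None
    new_suspensions = True  # forces computing the first increment
    for i, s in enumerate(states):
        if s == 'S':
            new_suspensions = True
            continue
        if new_suspensions:
            nibpss = n - i  # items beyond the present suspended set
            inc = (n + 1 - prev) / (1 + nibpss)  # I_i = (N+1-PMON)/(1+NIBPSS)
            new_suspensions = False
        prev += inc  # MON_i = MON_{i-1} + I_i
        mons.append(prev)
    return mons

# Example from the text: F1 < S1 < F2 < S2 < F3 out of 5 items
print(mean_order_numbers(['F', 'S', 'F', 'S', 'F']))  # [1.0, 2.25, 4.125]
```

With no suspensions at all, the function reduces to the usual order numbers 1, 2, ..., N, as expected.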
 &lt;br /&gt;
==== Shortfalls of the Rank Adjustment Method  ====&lt;br /&gt;
Even though the rank adjustment method is the most widely used method for the analysis of suspended items, we would like to point out the following shortcoming. As you may have noticed, only the position where the failure occurred is taken into account, not the exact time-to-suspension. For example, this methodology would yield exactly the same results for the next two cases. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 1 &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of Item, hr &lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of Item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000 &lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,100 &lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,700&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,200 &lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,800&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,300 &lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,900&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000 &lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This shortfall is significant when the number of failures is small and the number of suspensions is large and not spread uniformly between failures, as with these data. In cases like this, it is highly recommended to use maximum likelihood estimation (MLE) to estimate the parameters instead of least squares, because MLE does not look at ranks or plotting positions, but rather considers each unique time-to-failure or suspension. For the data given above, the estimated parameters obtained using the rank adjustment method just described are the same for both cases (1 and 2): &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.81}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{11,417 hr}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, the MLE results for Case 1 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{1}\text{.33}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{6,900 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the MLE results for Case 2 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.9337}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{21,348 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we can see, there is a sizable difference between the results obtained by MLE and those obtained by regression. The results for both cases are identical when using the regression estimation technique, because regression considers only the positions of the suspensions. The MLE results are quite different for the two cases, with the second case having a much larger value of &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; due to the higher suspension times in Case 2. This is because the maximum likelihood technique, unlike rank regression, considers the values of the suspensions when estimating the parameters. This is illustrated in the [[Parameter_Estimation#Maximum_Likelihood_Estimation_.28MLE.29|discussion of MLE]] given below.&lt;br /&gt;
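The sensitivity of MLE to the suspension times can be checked numerically. The sketch below maximizes the right-censored Weibull log-likelihood with a general-purpose optimizer from SciPy (not ReliaSoft's solver, so results may differ slightly in the last digits):

```python
import numpy as np
from scipy.optimize import minimize

def weibull_neg_loglik(params, failures, suspensions):
    # params hold log(beta), log(eta), keeping both parameters positive
    beta, eta = np.exp(params)
    t = np.asarray(failures, dtype=float)
    s = np.asarray(suspensions, dtype=float)
    # sum of ln f(t_i) over failures ...
    ll = np.sum(np.log(beta / eta) + (beta - 1) * np.log(t / eta) - (t / eta) ** beta)
    # ... plus sum of ln R(s_j) over suspensions
    ll += np.sum(-((s / eta) ** beta))
    return -ll

def fit_weibull(failures, suspensions):
    res = minimize(weibull_neg_loglik,
                   x0=[0.0, np.log(np.mean(failures))],
                   args=(failures, suspensions), method="Nelder-Mead")
    return np.exp(res.x)  # beta_hat, eta_hat

beta1, eta1 = fit_weibull([1000, 10000], [1100, 1200, 1300])  # Case 1
beta2, eta2 = fit_weibull([1000, 10000], [9700, 9800, 9900])  # Case 2
print(beta1, eta1)  # close to beta = 1.33, eta = 6,900 hr
print(beta2, eta2)  # close to beta = 0.93, eta = 21,348 hr
```

The larger suspension times in Case 2 pull the estimate of eta up substantially, exactly as described in the text.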
&lt;br /&gt;
== ReliaSoft&#039;s Ranking Method (RRM) for Interval Censored Data==&lt;br /&gt;
When analyzing interval data, it is commonplace to assume that the actual failure time occurred at the midpoint of the interval. To be more conservative, you can use the starting point of the interval; to be more optimistic, you can use its end point. Weibull++ allows you to employ ReliaSoft&#039;s ranking method (RRM) when analyzing interval data. Using an iterative process, this ranking method is an improvement over the standard ranking method (SRM). For more details on this method see [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|ReliaSoft&#039;s Ranking Method]].&lt;br /&gt;
&lt;br /&gt;
= Maximum Likelihood Estimation (MLE) = &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM OTHER WIKI PAGES. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
From a statistical point of view, the method of maximum likelihood estimation is, with some exceptions, considered to be the most robust of the parameter estimation techniques discussed here. The method presented in this section is for complete data (i.e., data consisting only of times-to-failure). The analyses for [[Parameter_Estimation#MLE_for_Right_Censored_Data|right censored (suspension) data]] and for [[Parameter_Estimation#MLE_for_Interval_and_Left_Censored_Data|interval or left censored data]] are then discussed in the following sections.&lt;br /&gt;
&lt;br /&gt;
The basic idea behind MLE is to obtain the most likely values of the parameters, for a given distribution, that will best describe the data. As an example, consider the following data (-3, 0, 4) and assume that you are trying to estimate the mean of the data. Now, if you have to choose the most likely value for the mean from -5, 1 and 10, which one would you choose? In this case, the most likely value is 1 (given your limit on choices). Similarly, under MLE, one determines the most likely values for the parameters of the assumed distribution. It is mathematically formulated as follows. &lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; unknown parameters that need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; independent observations, &amp;lt;math&amp;gt;{{x}_{1}},{{x}_{2}},\cdots ,{{x}_{R}}\,\!&amp;lt;/math&amp;gt;, which in the case of life data analysis correspond to failure times. The likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{R}})=L=\underset{i=1}{\overset{R}{\mathop \prod }}\,f({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;i = 1,2,...,R\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logarithmic likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Lambda  = \ln L =\sum_{i = 1}^R \ln f({x_i};{\theta _1},{\theta _2},...,{\theta _k})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood estimators (or parameter values) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are obtained by maximizing &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By maximizing &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;, which is much easier to work with than &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt;, the maximum likelihood estimators (MLE) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the simultaneous solutions of &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; equations such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{\partial{\Lambda}}{\partial{\theta_j}}=0, \text{ j=1,2...,k}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even though it is common practice to plot the MLE solutions using median ranks (points are plotted according to median ranks and the line according to the MLE solutions), this is not completely representative. As can be seen from the equations above, the MLE method is independent of any kind of ranks. For this reason, the MLE solution often appears not to track the data on the probability plot. This is perfectly acceptable because the two methods are independent of each other, and in no way suggests that the solution is wrong.&lt;br /&gt;
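For a distribution whose partial-derivative equations have a closed-form solution, the maximization is straightforward. A minimal sketch for the one-parameter exponential, where setting dLambda/dlam = 0 gives lam_hat = R/sum(x_i) (the data values here are made up for illustration):

```python
import math

# For the exponential pdf f(x; lam) = lam * exp(-lam * x), setting the
# derivative of the log-likelihood to zero gives the closed form
# lam_hat = R / sum(x_i).
data = [3.0, 0.5, 4.5, 2.0]
lam_hat = len(data) / sum(data)  # closed-form MLE: 4 / 10 = 0.4

def log_likelihood(lam):
    # Lambda = sum of ln f(x_i; lam)
    return sum(math.log(lam) - lam * x for x in data)

# A coarse grid search over lam confirms that the closed-form value
# maximizes the log-likelihood.
grid = [i / 1000 for i in range(1, 3000)]
best = max(grid, key=log_likelihood)
print(lam_hat, best)  # both 0.4
```

The grid search is only a sanity check; in practice the score equations (or a numerical optimizer) are used directly.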
&lt;br /&gt;
=== MLE for Right Censored Data  ===&lt;br /&gt;
When performing maximum likelihood analysis on data with suspended items, the likelihood function needs to be expanded to take into account the suspended items. The overall estimation technique does not change, but another term is added to the likelihood function to account for the suspended items. Beyond that, the method of solving for the parameter estimates remains the same. For example, consider a distribution where &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039; and &#039;&#039;cdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
    &amp;amp; F(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the unknown parameters which need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; observed failures at &amp;lt;math&amp;gt;{{T}_{1}},{{T}_{2}},...,{{T}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; observed suspensions at &amp;lt;math&amp;gt;{{S}_{1}},{{S}_{2}},...,{{S}_{M}}\,\!&amp;lt;/math&amp;gt;. The likelihood function is then formulated as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R,}}{{S}_{1}},...,{{S}_{M}})= &amp;amp; \underset{i=1}{\overset{R}{\mathop \prod }}\,f({{T}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \cdot \underset{j=1}{\overset{M}{\mathop \prod }}\,[1-F({{S}_{j}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter estimates are obtained by maximizing this function. In most cases, no closed-form solution exists for this maximum or for the parameters. Solutions for specific distributions utilizing MLE are presented in [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
&lt;br /&gt;
=== MLE for Interval and Left Censored Data  ===&lt;br /&gt;
The inclusion of left and interval censored data in an MLE solution for parameter estimates involves adding a term to the likelihood equation to account for the data types in question. When using interval data, it is assumed that the failures occurred in an interval; i.e., in the interval from time &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or from time 0 to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; if left censored), where &amp;lt;math&amp;gt;A &amp;lt; B\,\!&amp;lt;/math&amp;gt;. In the case of interval data, and given &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; interval observations, the likelihood function is modified by multiplying the likelihood function with an additional term as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{P}})= &amp;amp; \underset{i=1}{\overset{P}{\mathop \prod }}\,\{F({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \ \ -F({{x}_{i-1}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that if only interval data are present, this term will represent the entire likelihood function for the MLE solution. The next section gives a formulation of the complete likelihood function for all possible censoring schemes.&lt;br /&gt;
&lt;br /&gt;
=== The Complete Likelihood Function  ===&lt;br /&gt;
We have now seen that obtaining MLE parameter estimates for different types of data involves incorporating different terms in the likelihood function to account for complete data, right censored data, and left or interval censored data. After including the terms for the different types of data, the likelihood function can be expressed in its complete form, or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
    L= &amp;amp; \underset{i=1}{\mathop{\overset{R}{\mathop{\prod }}\,}}\,f({{T}_{i}};{{\theta }_{1}},...,{{\theta }_{k}})\cdot \underset{j=1}{\mathop{\overset{M}{\mathop{\prod }}\,}}\,[1-F({{S}_{j}};{{\theta }_{1}},...,{{\theta }_{k}})]  \\&lt;br /&gt;
    &amp;amp; \cdot \underset{l=1}{\mathop{\overset{P}{\mathop{\prod }}\,}}\,\left\{ F({{I}_{{{l}_{U}}}};{{\theta }_{1}},...,{{\theta }_{k}})-F({{I}_{{{l}_{L}}}};{{\theta }_{1}},...,{{\theta }_{k}}) \right\}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; L\to L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R}},{{S}_{1}},...,{{S}_{M}},{{I}_{1}},...{{I}_{P}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
*&amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the number of units with exact failures &lt;br /&gt;
*&amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is the number of suspended units &lt;br /&gt;
*&amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is the number of units with left censored or interval times-to-failure &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the parameters of the distribution &lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure&lt;br /&gt;
*&amp;lt;math&amp;gt;{{S}_{j}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; time of suspension&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{U}}}}\,\!&amp;lt;/math&amp;gt; is the ending of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{L}}}}\,\!&amp;lt;/math&amp;gt; is the beginning of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;The total number of units is &amp;lt;math&amp;gt;N = R + M + P\,\!&amp;lt;/math&amp;gt;. It should be noted that in this formulation, if any of &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is zero, then the corresponding product term is taken to be equal to one (not zero).&lt;br /&gt;
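The complete likelihood can be written directly in code. The Weibull form below is an illustration (the distribution and the toy data are assumptions, not taken from the text); note how an empty group contributes no terms, i.e., a product term equal to one, which is zero on the log scale:

```python
import math

def weibull_cdf(t, beta, eta):
    return 1.0 - math.exp(-((t / eta) ** beta))

def weibull_pdf(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

def complete_log_likelihood(beta, eta, failures, suspensions, intervals):
    """Log of the complete likelihood: exact failures T_i, right-censored
    suspensions S_j, and interval units (I_lL, I_lU)."""
    ll = sum(math.log(weibull_pdf(t, beta, eta)) for t in failures)
    ll += sum(math.log(1.0 - weibull_cdf(s, beta, eta)) for s in suspensions)
    ll += sum(math.log(weibull_cdf(u, beta, eta) - weibull_cdf(l, beta, eta))
              for l, u in intervals)
    return ll

# N = R + M + P = 2 + 1 + 1 units in this toy data set
print(complete_log_likelihood(1.5, 1000.0,
                              failures=[500.0, 1200.0],
                              suspensions=[800.0],
                              intervals=[(100.0, 300.0)]))
```

Maximizing this function over beta and eta (analytically or numerically) yields the MLE for any mixture of complete, suspended and interval data.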
&lt;br /&gt;
== Comments on the MLE Method  ==&lt;br /&gt;
The MLE method has many large sample properties that make it attractive for use. It is asymptotically consistent, which means that as the sample size gets larger, the estimates converge to the right values. It is asymptotically efficient, which means that for large samples, it produces the most precise estimates. It is asymptotically unbiased, which means that for large samples, one expects to get the right value on average. The distribution of the estimates themselves is normal, if the sample is large enough, and this is the basis for the usual [[Confidence_Bounds#Fisher_Matrix_Confidence_Bounds|Fisher Matrix Confidence Bounds]] discussed later. These are all excellent large sample properties. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, the size of the sample necessary to achieve these properties can be quite large: thirty to fifty to more than a hundred exact failure times, depending on the application. With fewer points, the methods can be badly biased. It is known, for example, that MLE estimates of the shape parameter for the Weibull distribution are badly biased for small sample sizes, and the effect can be increased depending on the amount of censoring. This bias can cause major discrepancies in analysis. There are also pathological situations when the asymptotic properties of the MLE do not apply. One of these is estimating the location parameter for the three-parameter Weibull distribution when the shape parameter has a value close to 1. These problems, too, can cause major discrepancies. &lt;br /&gt;
&lt;br /&gt;
However, MLE can handle suspensions and interval data better than rank regression, particularly when dealing with a heavily censored data set with few exact failure times or when the censoring times are unevenly distributed. It can also provide estimates with one or no observed failures, which rank regression cannot do. As a rule of thumb, our recommendation is to use rank regression techniques when the sample sizes are small and without heavy censoring (censoring is discussed in [[Life Data Classification|Life Data Classifications]]). When heavy or uneven censoring is present, when a high proportion of interval data is present and/or when the sample size is sufficient, MLE should be preferred. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Appendix:_Maximum_Likelihood_Estimation_Example|Maximum Likelihood Parameter Estimation Example]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Bayesian Parameter Estimation Methods=&lt;br /&gt;
Up to this point, we have dealt exclusively with what is commonly referred to as classical statistics. In this section, another school of thought in statistical analysis will be introduced, namely Bayesian statistics. The premise of Bayesian statistics (within the context of life data analysis) is to incorporate prior knowledge, along with a given set of current observations, in order to make statistical inferences. The prior information could come from operational or observational data, from previous comparable experiments or from engineering knowledge.  This type of analysis can be particularly useful when there is limited test data for a given design or failure mode but there is a strong prior understanding of the failure rate behavior for that design or mode. By incorporating prior information about the parameter(s), a posterior distribution for the parameter(s) can be obtained and inferences on the model parameters and their functions can be made. This section is intended to give a quick and elementary overview of Bayesian methods, focused primarily on the material necessary for understanding the Bayesian analysis methods available in Weibull++. Extensive coverage of the subject can be found in numerous books dealing with Bayesian statistics.&lt;br /&gt;
&lt;br /&gt;
===Bayes’s Rule===&lt;br /&gt;
Bayes&#039;s rule provides the framework for combining prior information with sample data. In this reference, we apply Bayes&#039;s rule to combine prior information on the assumed distribution&#039;s parameter(s) with sample data in order to make inferences based on the model. The prior knowledge about the parameter(s) is expressed in terms of a distribution &amp;lt;math&amp;gt;\varphi (\theta )\,\!&amp;lt;/math&amp;gt;, called the &#039;&#039;prior distribution&#039;&#039;. The &#039;&#039;posterior&#039;&#039; distribution of &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; given the sample data, obtained using Bayes&#039;s rule, provides the updated information about the parameters &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt;. This is expressed with the following posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta |Data) = \frac{L(Data|\theta )\varphi (\theta )}{\int_{\zeta}^{} L(Data|\theta )\varphi(\theta )d (\theta)}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; is a vector of the parameters of the chosen distribution&lt;br /&gt;
*&amp;lt;math&amp;gt;\zeta\,\!&amp;lt;/math&amp;gt; is the range of &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt; L(Data|\theta)\,\!&amp;lt;/math&amp;gt; is the likelihood function based on the chosen distribution and data&lt;br /&gt;
*&amp;lt;math&amp;gt;\varphi(\theta )\,\!&amp;lt;/math&amp;gt; is the prior distribution for each of the parameters&lt;br /&gt;
&lt;br /&gt;
The integral in the Bayes&#039;s rule equation is often referred to as the marginal probability, which is a constant number that can be interpreted as the probability of obtaining the sample data given a prior distribution. Generally, the integral in the Bayes&#039;s rule equation does not have a closed form solution and numerical methods are needed for its solution.&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Bayes&#039;s rule equation, there is a significant difference between classical and Bayesian statistics. First, the idea of prior information does not exist in classical statistics. All inferences in classical statistics are based on the sample data. On the other hand, in the Bayesian framework, prior information constitutes the basis of the theory. Another difference is in the overall approach of making inferences and their interpretation. For example, in Bayesian analysis, the parameters of the distribution to be fitted are the random variables. In reality, there is no distribution fitted to the data in the Bayesian case.&lt;br /&gt;
&lt;br /&gt;
For instance, consider the case where data is obtained from a reliability test. Based on prior experience with a similar product, the analyst believes that the shape parameter of the Weibull distribution has a value between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\beta }_{2}}\,\!&amp;lt;/math&amp;gt; and wants to utilize this information. This can be achieved by using Bayes&#039;s rule. At this point, the analyst is automatically forcing the Weibull distribution as a model for the data, with a shape parameter between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{\beta _2}\,\!&amp;lt;/math&amp;gt;. In this example, the range of values for the shape parameter is the prior distribution, which in this case is uniform. By applying Bayes&#039;s rule, the posterior distribution of the shape parameter will be obtained. Thus, we end up with a distribution for the parameter rather than a point estimate of the parameter, as in classical statistics.&lt;br /&gt;
&lt;br /&gt;
To better illustrate the example, assume that a set of failure data was provided along with a distribution for the shape parameter (i.e., uniform prior) of the Weibull (automatically assuming that the data are Weibull distributed). Based on that, a new distribution (the posterior) for that parameter is then obtained using Bayes&#039;s rule. This posterior distribution of the parameter may or may not resemble in form the assumed prior distribution. In other words, in this example the prior distribution of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; was assumed to be uniform but the posterior is most likely not a uniform distribution.&lt;br /&gt;
&lt;br /&gt;
The question now becomes: what is the value of the shape parameter? What about the reliability and other results of interest? In order to answer these questions, we have to remember that in the Bayesian framework all of these metrics are random variables. Therefore, in order to obtain an estimate, a probability needs to be specified or we can use the expected value of the posterior distribution.&lt;br /&gt;
&lt;br /&gt;
In order to demonstrate the procedure of obtaining results from the posterior distribution, we will rewrite the Bayes&#039;s rule equation for a single parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta_1 |Data) = \frac{L(Data|\theta_1 )\varphi (\theta_1 )}{\int_{\zeta}^{} L(Data|\theta_1 )\varphi(\theta_1 )d\theta_1}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The expected value (or mean value) of the parameter &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained using the equation for the mean and the Bayes&#039;s rule equation for single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E({\theta _1}) = {m_{{\theta _1}}} = \int_{\zeta}^{}{\theta _1} \cdot f({\theta _1}|Data)d{\theta _1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative result for &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; would be the median value. Using the equation for the median and the Bayes&#039;s rule equation for a single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty }^{{\theta }_{0.5}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation for the median is solved for &amp;lt;math&amp;gt;{\theta _{0.5}}\,\!&amp;lt;/math&amp;gt;, the median value of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Similarly, any other percentile of the posterior &#039;&#039;pdf&#039;&#039; can be calculated and reported. For example, one could calculate the 90th percentile of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;’s posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty }^{{{\theta }_{0.9}}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.9\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This calculation will be used in [[Confidence Bounds]] and [[The Weibull Distribution]] for obtaining confidence bounds on the parameter(s).&lt;br /&gt;
&lt;br /&gt;
The next step will be to make inferences on the reliability. Since the parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; is a random variable described by the posterior &#039;&#039;pdf,&#039;&#039; all subsequent functions of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; are distributed random variables as well and are entirely based on the posterior &#039;&#039;pdf&#039;&#039; of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt;. Therefore, expected value, median or other percentile values will also need to be calculated. For example, the expected reliability at time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E[R(T|Data)] = \int_{\zeta}^{} R(T)f(\theta |Data)d{\theta}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, at a given time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, there is a distribution that governs the reliability value at that time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, and by using Bayes&#039;s rule, the expected (or mean) value of the reliability is obtained. Other percentiles of this distribution can also be obtained.&lt;br /&gt;
A similar procedure is followed for other functions of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;, such as failure rate, reliable life, etc.&lt;br /&gt;
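The full chain above (posterior via Bayes&#039;s rule with a uniform prior, then mean, percentiles and expected reliability) can be sketched numerically. Everything in this sketch is an illustrative assumption: the failure times, the known scale parameter eta and the prior bounds b1, b2 are made up, and a simple grid sum stands in for the integrals.

```python
import math

# Illustrative failure times (hours) and known scale parameter; the prior on
# the Weibull shape parameter beta is uniform on [b1, b2]. All values made up.
failures = [16.0, 34.0, 53.0, 75.0, 93.0]
eta = 60.0
b1, b2 = 0.5, 3.0

def weibull_pdf(t, beta):
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

def likelihood(beta):
    # L(Data|beta): product of the pdf over the observed failure times
    p = 1.0
    for t in failures:
        p *= weibull_pdf(t, beta)
    return p

# Discretize the prior support; the uniform prior is constant, so it cancels
# when the posterior is normalized (Bayes rule on a grid).
n = 2000
w = (b2 - b1) / n
grid = [b1 + (i + 0.5) * w for i in range(n)]
post = [likelihood(b) for b in grid]
norm = sum(post) * w
post = [p / norm for p in post]

# Posterior mean of beta
mean_beta = sum(b * p for b, p in zip(grid, post)) * w

def percentile(q):
    # Invert the posterior cdf on the grid
    cum = 0.0
    for b, p in zip(grid, post):
        cum += p * w
        if cum >= q:
            return b
    return grid[-1]

median_beta = percentile(0.5)   # posterior median
beta_90 = percentile(0.9)       # 90th percentile of the posterior

# Expected reliability at time T: integrate R(T) against the posterior of beta
T = 50.0
exp_rel = sum(math.exp(-((T / eta) ** b)) * p for b, p in zip(grid, post)) * w
print(mean_beta, median_beta, beta_90, exp_rel)
```

The same grid would be reused for any other function of the parameter (failure rate, reliable life, etc.), as the text notes.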
&lt;br /&gt;
===Prior Distributions===&lt;br /&gt;
Prior distributions play a very important role in Bayesian statistics; they are essentially the foundation of a Bayesian analysis. Different types of prior distributions exist, namely &#039;&#039;informative&#039;&#039; and &#039;&#039;non-informative&#039;&#039;. Non-informative prior distributions (a.k.a. &#039;&#039;vague&#039;&#039;, &#039;&#039;flat&#039;&#039; and &#039;&#039;diffuse&#039;&#039;) are distributions that have no population basis and play a minimal role in the posterior distribution. The idea behind the use of non-informative prior distributions is to make inferences that are not greatly affected by external information, or to proceed when external information is not available. The uniform distribution is frequently used as a non-informative prior.&lt;br /&gt;
&lt;br /&gt;
On the other hand, informative priors have a stronger influence on the posterior distribution. The influence of the prior distribution on the posterior is related to the sample size of the data and the form of the prior. Generally speaking, large sample sizes are required to modify strong priors, whereas weak priors are overwhelmed by even relatively small sample sizes. Informative priors are typically obtained from past data.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56794</id>
		<title>Parameter Estimation</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Parameter_Estimation&amp;diff=56794"/>
		<updated>2014-12-03T20:54:37Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Rank Adjustment Method for Right Censored Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|4|Parameter Estimation}}&lt;br /&gt;
The term &#039;&#039;parameter estimation&#039;&#039; refers to the process of using sample data (in reliability engineering, usually times-to-failure or success data) to estimate the parameters of the selected distribution. Several parameter estimation methods are available. This section presents an overview of the available methods used in life data analysis. More specifically, we start with the relatively simple method of Probability Plotting and continue with the more sophisticated methods of Rank Regression (or Least Squares), Maximum Likelihood Estimation and Bayesian Estimation Methods.&lt;br /&gt;
&lt;br /&gt;
=Probability Plotting=&lt;br /&gt;
The least mathematically intensive method for parameter estimation is the method of probability plotting. As the term implies, probability plotting involves a physical plot of the data on specially constructed &#039;&#039;probability plotting paper&#039;&#039;. This method is easily implemented by hand, given that one can obtain the appropriate probability plotting paper.&lt;br /&gt;
&lt;br /&gt;
The method of probability plotting takes the &#039;&#039;cdf&#039;&#039; of the distribution and attempts to linearize it by employing a specially constructed paper. The following sections illustrate the steps in this method using the 2-parameter Weibull distribution as an example. This includes:&lt;br /&gt;
&lt;br /&gt;
*Linearize the unreliability function&lt;br /&gt;
*Construct the probability plotting paper&lt;br /&gt;
*Determine the X and Y positions of the plot points&lt;br /&gt;
&lt;br /&gt;
The plot can then be used to read any particular time or reliability/unreliability value of interest.&lt;br /&gt;
&lt;br /&gt;
==Linearizing the Unreliability Function==&lt;br /&gt;
&lt;br /&gt;
In the case of the 2-parameter Weibull, the &#039;&#039;cdf&#039;&#039; (also the unreliability &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt;) is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(t)=Q(t)=1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function can then be linearized (i.e., put in the common linear form &amp;lt;math&amp;gt;y = mx + b\,\!&amp;lt;/math&amp;gt;) as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 Q(t)= &amp;amp;  1-{e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}}  \\&lt;br /&gt;
  \ln (1-Q(t))= &amp;amp; \ln \left[ {e^{-\left(\tfrac{t}{\eta}\right)^{\beta}}} \right]  \\&lt;br /&gt;
  \ln (1-Q(t))=&amp;amp; -\left(\tfrac{t}{\eta}\right)^{\beta}  \\&lt;br /&gt;
  \ln ( -\ln (1-Q(t)))= &amp;amp; \beta \left(\ln \left( \frac{t}{\eta }\right)\right) \\&lt;br /&gt;
  \ln \left( \ln \left( \frac{1}{1-Q(t)}\right) \right) = &amp;amp; \beta \ln t -\beta \ln (\eta )  \\&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then by setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left( \ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x=\ln \left( t \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equation can then be rewritten as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\beta x-\beta \ln \left( \eta  \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is now a linear equation with a slope of: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
m = \beta&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and an intercept of:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=-\beta \cdot ln(\eta)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
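As a numerical check of this derivation, one can generate exact (t, Q(t)) pairs from a Weibull distribution with known (illustrative) parameters, apply the x and y transformations, and fit a straight line by least squares; the slope and intercept then recover beta and -beta ln(eta) as stated above. This is a sketch, not any particular software implementation:

```python
import math

beta_true, eta_true = 1.5, 1000.0   # illustrative Weibull parameters

# Exact unreliability values from the 2-parameter Weibull cdf
times = [200.0, 500.0, 1000.0, 2000.0, 5000.0]
Q = [1 - math.exp(-((t / eta_true) ** beta_true)) for t in times]

# Plotting transformations: x = ln t, y = ln ln(1/(1 - Q))
xs = [math.log(t) for t in times]
ys = [math.log(math.log(1.0 / (1.0 - q))) for q in Q]

# Least squares fit y = m x + b; for exact points the fit is exact
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
m = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
    (x - xbar) ** 2 for x in xs)
b = ybar - m * xbar

beta_est = m                  # slope recovers the shape parameter
eta_est = math.exp(-b / m)    # intercept b = -beta ln(eta) recovers eta
print(beta_est, eta_est)
```

With real data the points would come from median ranks rather than the exact cdf, so the fit would only approximately recover the parameters.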
&lt;br /&gt;
==Constructing the Paper==&lt;br /&gt;
The next task is to construct the Weibull probability plotting paper with the appropriate y and x axes. The x-axis transformation is simply logarithmic. The y-axis is a bit more complex, requiring a double log reciprocal transformation, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\ln \left(\ln \left( \frac{1}{1-Q(t)} \right) \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt; is the unreliability. &lt;br /&gt;
&lt;br /&gt;
Such papers have been created by different vendors and are called &#039;&#039;probability plotting papers&#039;&#039;. ReliaSoft&#039;s reliability engineering resource website at www.weibull.com has different plotting papers available for [http://www.weibull.com/GPaper/index.htm download]. &lt;br /&gt;
&lt;br /&gt;
[[Image:WeibullPaper2C.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
To illustrate, consider the following probability plot on a slightly different type of Weibull probability paper. &lt;br /&gt;
&lt;br /&gt;
[[Image:different_weibull_paper.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
This paper is constructed based on the mentioned y and x transformations, where the y-axis represents unreliability and the x-axis represents time. Both of these values must be known for each time-to-failure point we want to plot. &lt;br /&gt;
&lt;br /&gt;
Then, given the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; value for each point, the points can easily be put on the plot. Once the points have been placed on the plot, the best possible straight line is drawn through these points. Once the line has been drawn, the slope of the line can be obtained (some probability papers include a slope indicator to simplify this calculation); this slope is the estimate of the shape parameter &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;. To determine the scale parameter, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; (also called the &#039;&#039;characteristic life&#039;&#039;), one reads the time from the x-axis corresponding to &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Note that at:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Q(t=\eta)= &amp;amp; 1-{{e}^{-{{\left( \tfrac{\eta }{\eta } \right)}^{\beta }}}} \\ &lt;br /&gt;
  = &amp;amp; 1-{{e}^{-1}} \\ &lt;br /&gt;
  = &amp;amp; 0.632 \\ &lt;br /&gt;
  = &amp;amp; 63.2%  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, if we enter the &#039;&#039;y&#039;&#039; axis at &amp;lt;math&amp;gt;Q(t)=63.2%\,\!&amp;lt;/math&amp;gt;, the corresponding value of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; will be equal to &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;. Thus, using this simple methodology, the parameters of the Weibull distribution can be estimated.&lt;br /&gt;
&lt;br /&gt;
==Determining the X and Y Position of the Plot Points==&lt;br /&gt;
The points on the plot represent our data or, more specifically, our times-to-failure data. If, for example, we tested four units that failed at 10, 20, 30 and 40 hours, then we would use these times as our &#039;&#039;x&#039;&#039; values or time values. &lt;br /&gt;
&lt;br /&gt;
Determining the appropriate &#039;&#039;y&#039;&#039; plotting positions, or the unreliability values, is a little more complex. To determine the &#039;&#039;y&#039;&#039; plotting positions, we must first determine a value indicating the corresponding unreliability for that failure. In other words, we need to obtain the cumulative percent failed for each time-to-failure. For example, the cumulative percent failed by 10 hours may be 25%, by 20 hours 50%, and so forth. This is a simple method illustrating the idea. The problem with this simple method is the fact that the 100% point is not defined on most probability plots; thus, an alternative and more robust approach must be used. The most widely used method of determining this value is the method of obtaining the &#039;&#039;median rank&#039;&#039; for each failure, as discussed next.&lt;br /&gt;
&lt;br /&gt;
===Median Ranks ===&lt;br /&gt;
The Median Ranks method is used to obtain an estimate of the unreliability for each failure. The median rank is the value that the true probability of failure, &amp;lt;math&amp;gt;Q({{T}_{j}})\,\!&amp;lt;/math&amp;gt;, should have at the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure out of a sample of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units at the 50% confidence level. &lt;br /&gt;
&lt;br /&gt;
The rank can be found for any percentage point, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, greater than zero and less than one, by solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;. This represents the rank, or unreliability estimate, for the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; failure in the following equation for the cumulative binomial: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the sample size and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; the order number. &lt;br /&gt;
&lt;br /&gt;
The median rank is obtained by solving this equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  at &amp;lt;math&amp;gt;P = 0.50\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;0.50=\underset{k=j}{\overset{N}{\mathop \sum }}\,\left( \begin{matrix}&lt;br /&gt;
   N  \\&lt;br /&gt;
   k  \\&lt;br /&gt;
\end{matrix} \right){{Z}^{k}}{{\left( 1-Z \right)}^{N-k}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt; and we have four failures, we would solve the median rank equation for the value of &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; four times; once for each failure with &amp;lt;math&amp;gt;j= 1, 2, 3 \text{ and }4\,\!&amp;lt;/math&amp;gt;. This result can then be used as the unreliability estimate for each failure, or the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; plotting position. (See also [[The Weibull Distribution|The Weibull Distribution]]&amp;amp;nbsp;for a step-by-step example of this method.) Solving the cumulative binomial equation for &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; requires the use of numerical methods.&lt;br /&gt;
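Since the left-hand side of the cumulative binomial equation is monotonically increasing in Z, a simple bisection is one possible numerical method. The sketch below is illustrative, not the method any particular software uses:

```python
from math import comb

def cumulative_binomial(z, j, N):
    # Right tail of the binomial: sum_{k=j}^{N} C(N,k) z^k (1-z)^(N-k)
    return sum(comb(N, k) * z ** k * (1 - z) ** (N - k) for k in range(j, N + 1))

def median_rank(j, N, P=0.50, tol=1e-10):
    # The tail sum increases monotonically in z, so bisection on [0, 1] works
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if cumulative_binomial(mid, j, N) < P:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Median ranks for four failures out of N = 4 units
ranks = [median_rank(j, 4) for j in (1, 2, 3, 4)]
print([round(r, 4) for r in ranks])
```

Setting P to a value other than 0.50 in the same routine yields the 5% and 95% ranks used elsewhere for confidence bounds.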
&lt;br /&gt;
===Beta and F Distributions Approach===&lt;br /&gt;
A more straightforward and easier method of estimating median ranks is by applying two transformations to the cumulative binomial equation, first to the beta distribution and then to the F distribution, resulting in [[Appendix:_Life_Data_Analysis_References|[12, 13]]]: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   MR &amp;amp; = &amp;amp; \tfrac{1}{1+\tfrac{N-j+1}{j}{{F}_{0.50;m;n}}}  \\&lt;br /&gt;
   m &amp;amp; = &amp;amp; 2(N-j+1)  \\&lt;br /&gt;
   n &amp;amp; = &amp;amp; 2j  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{F}_{0.50;m;n}}\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution at the 0.50 point, with &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; degrees of freedom, for failure &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; out of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; units.&lt;br /&gt;
&lt;br /&gt;
=== Benard&#039;s Approximation for Median Ranks  ===&lt;br /&gt;
Another quick, and less accurate, approximation of the median ranks is also given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MR = \frac{{j - 0.3}}{{N + 0.4}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This approximation of the median ranks is also known as &#039;&#039;Benard&#039;s approximation&#039;&#039;.&lt;br /&gt;
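One way to see how close the approximation comes is to compare it against exact median ranks obtained by numerically solving the cumulative binomial equation. The bisection helper below is an illustrative implementation used only as the reference:

```python
from math import comb

def exact_median_rank(j, N, tol=1e-10):
    # Solve 0.50 = sum_{k=j}^{N} C(N,k) Z^k (1-Z)^(N-k) for Z by bisection
    def tail(z):
        return sum(comb(N, k) * z ** k * (1 - z) ** (N - k)
                   for k in range(j, N + 1))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if tail(mid) < 0.50 else (lo, mid)
    return (lo + hi) / 2.0

def benard(j, N):
    # The quick approximation: MR ~ (j - 0.3) / (N + 0.4)
    return (j - 0.3) / (N + 0.4)

N = 10
errors = [abs(exact_median_rank(j, N) - benard(j, N)) for j in range(1, N + 1)]
print(max(errors))   # worst-case error of the approximation for N = 10
```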
&lt;br /&gt;
===Kaplan-Meier===&lt;br /&gt;
The Kaplan-Meier estimator (also known as the &#039;&#039;product limit estimator&#039;&#039;) is used as an alternative to the median ranks method for calculating the estimates of the unreliability for probability plotting purposes. The equation of the estimator is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{F}({{t}_{i}})=1-\underset{j=1}{\overset{i}{\mathop \prod }}\,\frac{{{n}_{j}}-{{r}_{j}}}{{{n}_{j}}},\text{ }i=1,...,m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  m =  &amp;amp; {\text{total number of data points}} \\ &lt;br /&gt;
  n =  &amp;amp; {\text{the total number of units}} \\ &lt;br /&gt;
  {n_i} =  &amp;amp; n - \sum_{j = 0}^{i - 1}{s_j} - \sum_{j = 0}^{i - 1}{r_j},\text{ }i = 1,...,m \\ &lt;br /&gt;
  {r_j} =  &amp;amp; {\text{ number of failures in the }}{j^{th}}{\text{ data group, and}} \\ &lt;br /&gt;
  {s_j} =  &amp;amp; {\text{number of surviving units in the }}{j^{th}}{\text{ data group}} \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
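As a sketch, the product limit estimator can be computed directly from grouped failure and suspension counts. The times and counts below are made-up illustrative data, not values from the text:

```python
# Made-up grouped data: at each ordered time there are r_j failures and
# s_j suspensions; n_total units start the test.
times = [100.0, 200.0, 300.0, 400.0]
r = [1, 1, 2, 1]        # failures per group
s = [0, 1, 0, 1]        # suspensions per group
n_total = sum(r) + sum(s)

def kaplan_meier(times, r, s, n):
    # Product limit estimate of unreliability F(t_i) = 1 - prod (n_j - r_j)/n_j
    est = []
    at_risk = n
    surv = 1.0
    for t, rj, sj in zip(times, r, s):
        surv *= (at_risk - rj) / at_risk   # survival update for this group
        est.append((t, 1.0 - surv))
        at_risk -= rj + sj                 # units still at risk afterwards
    return est

for t, F in kaplan_meier(times, r, s, n_total):
    print(t, round(F, 4))
```

Note how the suspensions reduce the number of units at risk for later groups without contributing a failure term of their own.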
&lt;br /&gt;
== Probability Plotting Example  ==&lt;br /&gt;
This same methodology can be applied to other distributions with &#039;&#039;cdf&#039;&#039; equations that can be linearized. Different probability papers exist for each distribution, because different distributions have different &#039;&#039;cdf&#039;&#039; equations. ReliaSoft&#039;s software tools automatically create these plots for you. Special scales on these plots allow you to derive the parameter estimates directly from the plots, similar to the way &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; were obtained from the Weibull probability plot. The following example demonstrates the method again, this time using the 1-parameter exponential distribution.&lt;br /&gt;
&lt;br /&gt;
{{:Probability Plotting Example}}&lt;br /&gt;
&lt;br /&gt;
== Comments on the Probability Plotting Method ==&lt;br /&gt;
Besides the most obvious drawback to probability plotting, which is the amount of effort required, manual probability plotting is not always consistent in the results. Two people plotting a straight line through a set of points will not always draw this line the same way, and thus will come up with slightly different results. This method was used primarily before the widespread use of computers that could easily perform the calculations for more complicated parameter estimation methods, such as the least squares and maximum likelihood methods.&lt;br /&gt;
&lt;br /&gt;
= Least Squares (Rank Regression)  =&lt;br /&gt;
Using the idea of probability plotting, regression analysis mathematically fits the best straight line to a set of points, in an attempt to estimate the parameters. Essentially, this is a mathematically based version of the probability plotting method discussed previously. &lt;br /&gt;
&lt;br /&gt;
The method of linear least squares is used for all regression analysis performed by Weibull++, except for the cases of the 3-parameter Weibull, mixed Weibull, gamma and generalized gamma distributions, where a non-linear regression technique is employed. The terms &#039;&#039;linear regression&#039;&#039; and &#039;&#039;least squares&#039;&#039; are used synonymously in this reference. In Weibull++, the term &#039;&#039;rank regression&#039;&#039; is used instead of least squares, or linear regression, because the regression is performed on the rank values, more specifically, the median rank values (represented on the y-axis). The method of least squares requires that a straight line be fitted to a set of data points such that the sum of the squares of the distances from the points to the fitted line is minimized. This minimization can be performed in either the vertical or horizontal direction. If the regression is on &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, then the line is fitted so that the horizontal deviations from the points to the line are minimized. If the regression is on Y, then the line is fitted so that the vertical deviations from the points to the line are minimized. This is illustrated in the following figure. &lt;br /&gt;
&lt;br /&gt;
[[Image:minimizingdistance.png|center|500px]]&lt;br /&gt;
&lt;br /&gt;
=== Rank Regression on Y  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;-values are known exactly. Then, according to the &#039;&#039;least squares principle,&#039;&#039; which minimizes the vertical distance between the data points and the straight line fitted to the data, the best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;y=\hat{a}+\hat{b}x\,\!&amp;lt;/math&amp;gt; (where the recently introduced (&amp;lt;math&amp;gt;\hat{ }\,\!&amp;lt;/math&amp;gt;) symbol indicates that this value is an estimate) such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\sum\limits_{i=1}^{N}{{{\left( \hat{a}+\hat{b}{{x}_{i}}-{{y}_{i}} \right)}^{2}}=\min \sum\limits_{i=1}^{N}{{{\left( a+b{{x}_{i}}-{{y}_{i}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and where &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat b\,\!&amp;lt;/math&amp;gt; are the &#039;&#039;least squares estimates&#039;&#039; of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}=\bar{y}-\hat{b}\bar{x}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
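The two estimator equations translate directly into code. The data pairs below are illustrative; in a rank regression on Y they would be the transformed times and transformed median ranks:

```python
# Illustrative data pairs for a rank-regression-on-Y style fit
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.1, 8.0, 9.9]

def rry(xs, ys):
    # Closed-form least squares estimates for y = a + b x (regression on Y)
    N = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    b = (sxy - sx * sy / N) / (sxx - sx ** 2 / N)
    a = sy / N - b * sx / N      # a_hat = y_bar - b_hat * x_bar
    return a, b

a_hat, b_hat = rry(xs, ys)
print(round(a_hat, 4), round(b_hat, 4))
```

Regression on X uses the same sums with the roles of x and y swapped, as shown in the next section.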
&lt;br /&gt;
=== Rank Regression on X  ===&lt;br /&gt;
Assume that a set of data pairs &amp;lt;math&amp;gt;({{x}_{1}},{{y}_{1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;({{x}_{2}},{{y}_{2}})\,\!&amp;lt;/math&amp;gt;,..., &amp;lt;math&amp;gt;({{x}_{N}},{{y}_{N}})\,\!&amp;lt;/math&amp;gt; were obtained and plotted, and that the y-values are known exactly. The same least squares principle is applied, but this time, minimizing the horizontal distance between the data points and the straight line fitted to the data. The best fitting straight line to these data is the straight line &amp;lt;math&amp;gt;x=\widehat{a}+\widehat{b}y\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{N}{\mathop \sum }}\,{{(\widehat{a}+\widehat{b}{{y}_{i}}-{{x}_{i}})}^{2}}=\min \underset{i=1}{\overset{N}{\mathop \sum }}\,{{(a+b{{y}_{i}}-{{x}_{i}})}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat b\,\!&amp;lt;/math&amp;gt; are the least squares estimates of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of data points. These equations are minimized by estimates of &amp;lt;math&amp;gt;\widehat a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}=\bar{x}-\hat{b}\bar{y}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding relations for determining the parameters for specific distributions (i.e., Weibull, exponential, etc.), are presented in the chapters covering that distribution.&lt;br /&gt;
&lt;br /&gt;
=== Correlation Coefficient  ===&lt;br /&gt;
The correlation coefficient is a measure of how well the linear regression model fits the data and is usually denoted by &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt;. In the case of life data analysis, it is a measure for the strength of the linear relation (correlation) between the median ranks and the data. The population correlation coefficient is defined as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\rho =\frac{{{\sigma }_{xy}}}{{{\sigma }_{x}}{{\sigma }_{y}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\sigma}_{xy}} = \,\!&amp;lt;/math&amp;gt; covariance of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\sigma}_{x}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{\sigma}_{y}} = \,\!&amp;lt;/math&amp;gt; standard deviation of &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The estimator of &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt; is the sample correlation coefficient, &amp;lt;math&amp;gt;\hat{\rho }\,\!&amp;lt;/math&amp;gt;, given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\sqrt{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N} \right)\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The range of &amp;lt;math&amp;gt;\hat \rho \,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;-1\le \hat{\rho }\le 1\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
[[Image:correlationcoeffficient.png|center|500px]] &lt;br /&gt;
&lt;br /&gt;
The closer the value is to &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt;, the better the linear fit. Note that +1 indicates a perfect fit (the paired values (&amp;lt;math&amp;gt;{{x}_{i}},{{y}_{i}}\,\!&amp;lt;/math&amp;gt;) lie on a straight line) with a positive slope, while -1 indicates a perfect fit with a negative slope. A correlation coefficient value of zero would indicate that the data are randomly scattered and have no pattern or correlation in relation to the regression line model.&lt;br /&gt;
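The sample correlation coefficient estimator can be sketched as below, using illustrative, nearly collinear data pairs; the result is close to +1, indicating a good linear fit with positive slope:

```python
import math

# Illustrative, nearly collinear data pairs
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.1, 8.0, 9.9]

def sample_rho(xs, ys):
    # Sample correlation coefficient (same sums as in the rank regression fits)
    N = len(xs)
    sx, sy = sum(xs), sum(ys)
    num = sum(x * y for x, y in zip(xs, ys)) - sx * sy / N
    den_x = sum(x * x for x in xs) - sx ** 2 / N
    den_y = sum(y * y for y in ys) - sy ** 2 / N
    return num / math.sqrt(den_x * den_y)

rho = sample_rho(xs, ys)
print(round(rho, 6))
```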
&lt;br /&gt;
===Comments on the Least Squares Method===&lt;br /&gt;
The least squares estimation method is quite good for functions that can be linearized. For these distributions, the calculations are relatively easy and straightforward, having closed-form solutions that can readily yield an answer without having to resort to numerical techniques or tables. Furthermore, this technique provides a good measure of the goodness-of-fit of the chosen distribution in the correlation coefficient. Least squares is generally best used with data sets containing complete data, that is, data consisting only of single times-to-failure with no censored or interval data. (See [[Life Data Classification]] for information about the different data types, including complete, left censored, right censored (or suspended) and interval data.) &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Least Squares/Rank Regression Equations]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Rank Methods for Censored Data=&lt;br /&gt;
All available data should be considered in the analysis of times-to-failure data. This includes the case when a particular unit in a sample has been removed from the test prior to failure. An item, or unit, which is removed from a reliability test prior to failure, or a unit which is in the field and is still operating at the time the reliability of these units is to be determined, is called a &#039;&#039;suspended item&#039;&#039; or &#039;&#039;right censored observation&#039;&#039; or &#039;&#039;right censored data point&#039;&#039;. Suspended items analysis would also be considered when: &lt;br /&gt;
&lt;br /&gt;
#We need to make an analysis of the available results before test completion. &lt;br /&gt;
#The failure modes which are occurring are different than those anticipated and such units are withdrawn from the test. &lt;br /&gt;
#We need to analyze a single mode and the actual data set comprises multiple modes. &lt;br /&gt;
#A &#039;&#039;warranty analysis&#039;&#039; is to be made of all units in the field (non-failed and failed units). The non-failed units are considered to be suspended items (or right censored).&lt;br /&gt;
&lt;br /&gt;
This section describes the rank methods that are used in both probability plotting and least squares (rank regression) to handle censored data. This includes:&lt;br /&gt;
&lt;br /&gt;
*The rank adjustment method for right censored (suspension) data.&lt;br /&gt;
*ReliaSoft&#039;s alternative ranking method for interval censored data.&lt;br /&gt;
=== Rank Adjustment Method for Right Censored Data ===&lt;br /&gt;
When using the probability plotting or least squares (rank regression) method for data sets where some of the units did not fail, or were suspended, we need to adjust their probability of failure, or unreliability. As discussed before, estimates of the unreliability for complete data are obtained using the median ranks approach. The following methodology illustrates how adjusted median ranks are computed to account for right censored data. To better illustrate the methodology, consider the following example in Kececioglu [[Appendix:_Life_Data_Analysis_References|&amp;amp;nbsp;[20]]] where five items are tested resulting in three failures and two suspensions. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Item Number &amp;lt;br&amp;gt;(Position) &lt;br /&gt;
! Failure (F) &amp;lt;br&amp;gt;or Suspension (S) &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 5,100&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,500&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 15,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 22,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 40,000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The methodology for plotting suspended items involves adjusting the rank positions and plotting the data based on new positions, determined by the location of the suspensions. If we consider these five units, the following methodology would be used: The first item must be the first failure; hence, it is assigned failure order number &amp;lt;math&amp;gt;j = 1\,\!&amp;lt;/math&amp;gt;. The actual failure order number (or position) of the second failure, &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; is in doubt. It could either be in position 2 or in position 3. Had &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; not been withdrawn from the test at 9,500 hours, it could have operated successfully past 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 2. Alternatively, &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; could also have failed before 15,000 hours, thus placing &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in position 3. In this case, the failure order number for &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; will be some number between 2 and 3. To determine this number, consider the following: &lt;br /&gt;
&lt;br /&gt;
We can find the number of ways the second failure can occur in either order number 2 (position 2) or order number 3 (position 3). The possible ways are listed next. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;6&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 2 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; in Position 3&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 4 &lt;br /&gt;
| 5 &lt;br /&gt;
| 6 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be seen that &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; can occur in the second position six ways and in the third position two ways. The most probable position is the weighted average of these positions, or the &#039;&#039;mean order number&#039;&#039; (MON), given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}=MO{{N}_{2}}=\frac{(6\times 2)+(2\times 3)}{6+2}=2.25\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Using the same logic on the third failure, it can be located in position numbers 3, 4 and 5 in the possible ways listed next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;2&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 3 &lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 4&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; rowspan=&amp;quot;7&amp;quot; | OR &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; in Position 5&lt;br /&gt;
|-&lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3 &lt;br /&gt;
| 1 &lt;br /&gt;
| 2 &lt;br /&gt;
| 3&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Then, the mean order number for the third failure, (item 5) is: &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=\frac{(2\times 3)+(3\times 4)+(3\times 5)}{2+3+3}=4.125\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Once the mean order number for each failure has been established, we obtain the median rank positions for these failures at their mean order number. Specifically, we obtain the median rank of the order numbers 1, 2.25 and 4.125 out of a sample size of 5, as given next. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Plotting Positions for the Failures (Sample Size=5)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Failure Number &lt;br /&gt;
! MON &lt;br /&gt;
! Median Rank Position(%)&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1:&amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1 &lt;br /&gt;
| 13%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2:&amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 2.25 &lt;br /&gt;
| 36%&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3:&amp;lt;math&amp;gt;{{F}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 4.125 &lt;br /&gt;
| 71%&lt;br /&gt;
|}&lt;br /&gt;
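The median rank positions in the table above can be reproduced with Benard's approximation, which estimates the median rank as (MON &minus; 0.3)/(N + 0.4). This is a minimal sketch, not the exact beta-distribution median that tools like Weibull++ compute, but it agrees with the table to the rounding shown:

```python
# Benard's approximation to the median rank: MR ~ (MON - 0.3) / (N + 0.4).
# The exact median rank is the 50th percentile of a beta distribution;
# Benard's formula is a common approximation that matches the rounded
# percentages in the table above.

def median_rank(mon, n):
    return (mon - 0.3) / (n + 0.4)

mons = [1, 2.25, 4.125]            # mean order numbers from the example
ranks = [round(100 * median_rank(m, 5)) for m in mons]
print(ranks)                       # -> [13, 36, 71]
```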
&lt;br /&gt;
&lt;br /&gt;
Once the median rank values have been obtained, the probability plotting analysis is identical to that presented before. As you might have noticed, this methodology is rather laborious. Other techniques and shortcuts have been developed over the years to streamline this procedure. For more details on this method, see Kececioglu [[Appendix:_Life_Data_Analysis_References|[20]]]. Here, we will introduce one of these methods. This method calculates MON using an increment, &#039;&#039;I&#039;&#039;, which is defined by:&lt;br /&gt;
&lt;br /&gt;
:: &amp;lt;math&amp;gt;{{I}_{i}}=\frac{N+1-PMON}{1+NIBPSS}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
* N = the sample size, or total number of items in the test&lt;br /&gt;
* PMON = previous mean order number&lt;br /&gt;
* NIBPSS = the number of items beyond the present suspended set&lt;br /&gt;
* i = the ith failure item&lt;br /&gt;
&lt;br /&gt;
MON is given as:&lt;br /&gt;
 &lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{i}}=MO{{N}_{i-1}}+{{I}_{i}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Let&#039;s recalculate the previous example using this method.&lt;br /&gt;
&lt;br /&gt;
For F1:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{1}}=MO{{N}_{0}}+{{I}_{1}}=\frac{5+1-0}{1+5}=1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For F2:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{2}}=MO{{N}_{1}}+{{I}_{2}}=1+\frac{5+1-1}{1+3}=2.25&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For F3:&lt;br /&gt;
::&amp;lt;math&amp;gt;MO{{N}_{3}}=MO{{N}_{2}}+{{I}_{3}}=2.25+\frac{5+1-2.25}{1+1}=4.125&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The MON obtained for each failure item via this method is the same as that obtained from the first method, so the median rank values will also be the same. &lt;br /&gt;
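The increment rule is straightforward to automate. The sketch below encodes one reading of the rule (NIBPSS taken as the number of items from the current failure to the end of the test, with the increment recomputed only at a failure that follows one or more suspensions) and reproduces the mean order numbers 1, 2.25 and 4.125 from the example:

```python
# Rank adjustment via the increment formula:
#   I_i = (N + 1 - PMON) / (1 + NIBPSS)
# The increment is recomputed only at a failure that follows one or
# more suspensions; otherwise the previous increment is reused.

def mean_order_numbers(states):
    """states: sequence of 'F' (failure) / 'S' (suspension), in time order."""
    n = len(states)
    mons, prev_mon, inc = [], 0.0, 1.0
    recompute = True                      # the first failure always computes I_1
    for pos, state in enumerate(states, start=1):
        if state == 'S':
            recompute = True
        else:
            if recompute:
                # NIBPSS: items from the current failure to the end of the test
                inc = (n + 1 - prev_mon) / (1 + (n - pos + 1))
                recompute = False
            prev_mon += inc
            mons.append(prev_mon)
    return mons

print(mean_order_numbers(['F', 'S', 'F', 'S', 'F']))   # -> [1.0, 2.25, 4.125]
```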
==== Shortfalls of the Rank Adjustment Method  ====&lt;br /&gt;
Even though the rank adjustment method is the most widely used method for performing analysis of suspended items, we would like to point out the following shortcoming. As you may have noticed, only the position where the failure occurred is taken into account, and not the exact time-to-suspension. For example, this methodology would yield the exact same results for the next two cases. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 1 &lt;br /&gt;
! style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | Case 2&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of item, hr &lt;br /&gt;
! Item Number &lt;br /&gt;
! State*, &amp;quot;F&amp;quot; or &amp;quot;S&amp;quot; &lt;br /&gt;
! Life of item, hr&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000 &lt;br /&gt;
| 1 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,100 &lt;br /&gt;
| 2 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,700&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,200 &lt;br /&gt;
| 3 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,800&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 1,300 &lt;br /&gt;
| 4 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{S}_{3}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 9,900&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000 &lt;br /&gt;
| 5 &lt;br /&gt;
| &amp;lt;math&amp;gt;{{F}_{2}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| 10,000&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
| style=&amp;quot;text-align: center&amp;quot; colspan=&amp;quot;3&amp;quot; | * &#039;&#039;F&#039;&#039; - &#039;&#039;Failed, S&#039;&#039; - &#039;&#039;Suspended&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This shortfall is significant when the number of failures is small and the number of suspensions is large and not spread uniformly between failures, as with these data. In cases like this, it is highly recommended to use maximum likelihood estimation (MLE) to estimate the parameters instead of using least squares, because MLE does not look at ranks or plotting positions, but rather considers each unique time-to-failure or suspension. For the data given above, the results are as follows. The estimated parameters using the method just described are the same for both cases (1 and 2): &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.81}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{11,417 hr}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, the MLE results for Case 1 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{1}\text{.33}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{6,900 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the MLE results for Case 2 are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \widehat{\beta }= &amp;amp; \text{0}\text{.9337}  \\&lt;br /&gt;
   \widehat{\eta }= &amp;amp; \text{21,348 hr}  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we can see, there is a sizable difference between the results obtained using MLE and those obtained using regression. The results for both cases are identical when using the regression estimation technique, as regression considers only the positions of the suspensions. The MLE results, however, are quite different for the two cases, with the second case having a much larger value of &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; due to the higher suspension times in Case 2. This is because the maximum likelihood technique, unlike rank regression, considers the actual values of the suspension times when estimating the parameters. This is illustrated in the [[Parameter_Estimation#Maximum_Likelihood_Estimation_.28MLE.29|discussion of MLE]] given below.&lt;br /&gt;
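The sensitivity of MLE to the suspension times can be checked numerically. The sketch below maximizes the right-censored Weibull likelihood for both cases with a general-purpose optimizer (assuming NumPy and SciPy are available; the starting values and optimizer choice are illustrative, not what Weibull++ uses) and shows that Case 2 yields a much larger &eta;:

```python
import numpy as np
from scipy.optimize import minimize

def fit_weibull_mle(failures, suspensions):
    """Maximize the right-censored Weibull likelihood: failures contribute
    ln f(T_i), suspensions contribute ln[1 - F(S_j)] = -(S_j/eta)^beta."""
    t = np.asarray(failures, dtype=float)
    s = np.asarray(suspensions, dtype=float)

    def neg_log_like(p):               # optimize over log(beta), log(eta)
        beta, eta = np.exp(p)
        ll = np.sum(np.log(beta / eta) + (beta - 1) * np.log(t / eta)
                    - (t / eta) ** beta)
        ll -= np.sum((s / eta) ** beta)
        return -ll

    res = minimize(neg_log_like, x0=np.log([1.0, t.mean()]),
                   method="Nelder-Mead")
    return np.exp(res.x)               # (beta_hat, eta_hat)

beta1, eta1 = fit_weibull_mle([1000, 10000], [1100, 1200, 1300])   # Case 1
beta2, eta2 = fit_weibull_mle([1000, 10000], [9700, 9800, 9900])   # Case 2
print(eta2 > eta1)   # Case 2's later suspensions pull eta up substantially
```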
&lt;br /&gt;
== ReliaSoft&#039;s Ranking Method (RRM) for Interval Censored Data==&lt;br /&gt;
When analyzing interval data, it is commonplace to assume that the actual failure time occurred at the midpoint of the interval. Alternatively, you can use the starting point of the interval to be more conservative, or the end point of the interval to be more optimistic. Weibull++ allows you to employ ReliaSoft&#039;s ranking method (RRM) when analyzing interval data. Using an iterative process, this ranking method is an improvement over the standard ranking method (SRM). For more details on this method, see [[Appendix:_Special_Analysis_Methods#ReliaSoft_Ranking_Method|ReliaSoft&#039;s Ranking Method]].&lt;br /&gt;
&lt;br /&gt;
= Maximum Likelihood Estimation (MLE) = &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM OTHER WIKI PAGES. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
From a statistical point of view, the method of maximum likelihood estimation is, with some exceptions, considered to be the most robust of the parameter estimation techniques discussed here. The method presented in this section is for complete data (i.e., data consisting only of times-to-failure). The analyses for [[Parameter_Estimation#MLE_for_Right_Censored_Data|right censored (suspension) data]] and for [[Parameter_Estimation#MLE_for_Interval_and_Left_Censored_Data|interval or left censored data]] are then discussed in the following sections.&lt;br /&gt;
&lt;br /&gt;
The basic idea behind MLE is to obtain the most likely values of the parameters, for a given distribution, that will best describe the data. As an example, consider the following data (-3, 0, 4) and assume that you are trying to estimate the mean of the data. Now, if you have to choose the most likely value for the mean from -5, 1 and 10, which one would you choose? In this case, the most likely value is 1 (given your limit on choices). Similarly, under MLE, one determines the most likely values for the parameters of the assumed distribution. It is mathematically formulated as follows. &lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; unknown parameters which need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; independent observations, &amp;lt;math&amp;gt;{{x}_{1}},{{x}_{2}},\cdots ,{{x}_{R}}\,\!&amp;lt;/math&amp;gt;, which in the case of life data analysis correspond to failure times. The likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{R}})=L=\underset{i=1}{\overset{R}{\mathop \prod }}\,f({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;i = 1,2,...,R\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logarithmic likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Lambda  = \ln L =\sum_{i = 1}^R \ln f({x_i};{\theta _1},{\theta _2},...,{\theta _k})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood estimators (or parameter values) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are obtained by maximizing &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By maximizing &amp;lt;math&amp;gt;\Lambda\,\!&amp;lt;/math&amp;gt;, which is much easier to work with than &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt;, the maximum likelihood estimators (MLE) of &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the simultaneous solutions of &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; equations such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{\partial \Lambda }{\partial {{\theta }_{j}}}=0,\text{ }j=1,2,...,k\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
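For a concrete illustration of solving these equations, take the exponential distribution with pdf &amp;lt;math&amp;gt;f(x;\lambda )=\lambda {{e}^{-\lambda x}}\,\!&amp;lt;/math&amp;gt;. Here &amp;lt;math&amp;gt;\Lambda =R\ln \lambda -\lambda \underset{i=1}{\overset{R}{\mathop \sum }}\,{{x}_{i}}\,\!&amp;lt;/math&amp;gt;, and setting the derivative to zero gives the closed form &amp;lt;math&amp;gt;\widehat{\lambda }=R/\underset{i=1}{\overset{R}{\mathop \sum }}\,{{x}_{i}}\,\!&amp;lt;/math&amp;gt;. The data in this sketch are made up for illustration:

```python
import math

times = [20.0, 35.0, 50.0, 65.0, 80.0]    # hypothetical complete failure times

def log_likelihood(lam):
    # Lambda = sum of ln f(x_i; lam) for the exponential pdf lam*exp(-lam*x)
    return sum(math.log(lam) - lam * x for x in times)

lam_hat = len(times) / sum(times)          # closed form: R / sum(x_i)
print(lam_hat)                             # -> 0.02

# The closed-form solution should score at least as well as nearby values:
for scale in (0.5, 0.9, 1.1, 2.0):
    assert log_likelihood(lam_hat) >= log_likelihood(lam_hat * scale)
```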
&lt;br /&gt;
Even though it is common practice to plot the MLE solutions using median ranks (points are plotted according to median ranks and the line according to the MLE solutions), this is not completely representative. As can be seen from the equations above, the MLE method is independent of any kind of ranks. For this reason, the MLE solution often appears not to track the data on the probability plot. This is perfectly acceptable because the two methods are independent of each other, and in no way suggests that the solution is wrong.&lt;br /&gt;
&lt;br /&gt;
=== MLE for Right Censored Data  ===&lt;br /&gt;
When performing maximum likelihood analysis on data with suspended items, the likelihood function needs to be expanded to take into account the suspended items. The overall estimation technique does not change, but another term is added to the likelihood function to account for the suspended items. Beyond that, the method of solving for the parameter estimates remains the same. For example, consider a distribution where &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a continuous random variable with &#039;&#039;pdf&#039;&#039; and &#039;&#039;cdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
    &amp;amp; f(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
    &amp;amp; F(x;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta}_{1}},{{\theta}_{2}},...,{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the unknown parameters which need to be estimated from &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; observed failures at &amp;lt;math&amp;gt;{{T}_{1}},{{T}_{2}},...,{{T}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; observed suspensions at &amp;lt;math&amp;gt;{{S}_{1}},{{S}_{2}},...,{{S}_{M}}\,\!&amp;lt;/math&amp;gt;. The likelihood function is then formulated as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R,}}{{S}_{1}},...,{{S}_{M}})= &amp;amp; \underset{i=1}{\overset{R}{\mathop \prod }}\,f({{T}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \cdot \underset{j=1}{\overset{M}{\mathop \prod }}\,[1-F({{S}_{j}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameters are solved by maximizing this equation. In most cases, no closed-form solution exists for this maximum or for the parameters. Solutions specific to each distribution utilizing MLE are presented in [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
&lt;br /&gt;
=== MLE for Interval and Left Censored Data  ===&lt;br /&gt;
The inclusion of left and interval censored data in an MLE solution for parameter estimates involves adding a term to the likelihood equation to account for the data types in question. When using interval data, it is assumed that the failures occurred in an interval; i.e., in the interval from time &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or from time 0 to time &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; if left censored), where &amp;lt;math&amp;gt;A &amp;lt; B\,\!&amp;lt;/math&amp;gt;. In the case of interval data, and given &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; interval observations, the likelihood function is modified by multiplying the likelihood function with an additional term as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}|{{x}_{1}},{{x}_{2}},...,{{x}_{P}})= &amp;amp; \underset{i=1}{\overset{P}{\mathop \prod }}\,\{F({{x}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}) \\ &lt;br /&gt;
   &amp;amp; \ \ -F({{x}_{i-1}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that if only interval data are present, this term will represent the entire likelihood function for the MLE solution. The next section gives a formulation of the complete likelihood function for all possible censoring schemes.&lt;br /&gt;
&lt;br /&gt;
=== The Complete Likelihood Function  ===&lt;br /&gt;
We have now seen that obtaining MLE parameter estimates for different types of data involves incorporating different terms in the likelihood function to account for complete data, right censored data, and left or interval censored data. After including the terms for the different types of data, the likelihood function can now be expressed in its complete form: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
    L= &amp;amp; \underset{i=1}{\mathop{\overset{R}{\mathop{\prod }}\,}}\,f({{T}_{i}};{{\theta }_{1}},...,{{\theta }_{k}})\cdot \underset{j=1}{\mathop{\overset{M}{\mathop{\prod }}\,}}\,[1-F({{S}_{j}};{{\theta }_{1}},...,{{\theta }_{k}})]  \\&lt;br /&gt;
    &amp;amp; \cdot \underset{l=1}{\mathop{\overset{P}{\mathop{\prod }}\,}}\,\left\{ F({{I}_{{{l}_{U}}}};{{\theta }_{1}},...,{{\theta }_{k}})-F({{I}_{{{l}_{L}}}};{{\theta }_{1}},...,{{\theta }_{k}}) \right\}  \\&lt;br /&gt;
\end{array}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; L\to L({{\theta }_{1}},...,{{\theta }_{k}}|{{T}_{1}},...,{{T}_{R}},{{S}_{1}},...,{{S}_{M}},{{I}_{1}},...{{I}_{P}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
*&amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the number of units with exact failures &lt;br /&gt;
*&amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is the number of suspended units &lt;br /&gt;
*&amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is the number of units with left censored or interval times-to-failure &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\theta}_{k}}\,\!&amp;lt;/math&amp;gt; are the parameters of the distribution &lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure&lt;br /&gt;
*&amp;lt;math&amp;gt;{{S}_{j}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{j}^{th}}\,\!&amp;lt;/math&amp;gt; time of suspension&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{U}}}}\,\!&amp;lt;/math&amp;gt; is the ending of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
*&amp;lt;math&amp;gt;{{I}_{{{l}_{L}}}}\,\!&amp;lt;/math&amp;gt; is the beginning of the time interval of the &amp;lt;math&amp;gt;{{l}^{th}}\,\!&amp;lt;/math&amp;gt; group&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;The total number of units is &amp;lt;math&amp;gt;N = R + M + P\,\!&amp;lt;/math&amp;gt;. It should be noted that in this formulation, if either &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is zero then the product term associated with them is assumed to be one and not zero.&lt;br /&gt;
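As a sketch of how the three terms combine (using the Weibull distribution purely as an example; any distribution's pdf and cdf could be substituted), the complete log-likelihood can be written as:

```python
import math

def weibull_pdf(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

def weibull_cdf(t, beta, eta):
    return 1.0 - math.exp(-((t / eta) ** beta))

def complete_log_likelihood(beta, eta, failures=(), suspensions=(), intervals=()):
    """Sum the three contributions: exact failures -> ln f(T_i);
    suspensions -> ln[1 - F(S_j)]; intervals (L, U) -> ln[F(U) - F(L)].
    An empty group contributes 0 (i.e., its product term is 1)."""
    ll = 0.0
    ll += sum(math.log(weibull_pdf(t, beta, eta)) for t in failures)
    ll += sum(math.log(1.0 - weibull_cdf(s, beta, eta)) for s in suspensions)
    ll += sum(math.log(weibull_cdf(u, beta, eta) - weibull_cdf(l, beta, eta))
              for (l, u) in intervals)
    return ll

# With no data at all, every product term defaults to 1 and the
# log-likelihood is 0:
print(complete_log_likelihood(1.5, 100.0))   # -> 0.0
```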
&lt;br /&gt;
== Comments on the MLE Method  ==&lt;br /&gt;
The MLE method has many large sample properties that make it attractive for use. It is asymptotically consistent, which means that as the sample size gets larger, the estimates converge to the right values. It is asymptotically efficient, which means that for large samples, it produces the most precise estimates. It is asymptotically unbiased, which means that for large samples, one expects to get the right value on average. The distribution of the estimates themselves is normal, if the sample is large enough, and this is the basis for the usual [[Confidence_Bounds#Fisher_Matrix_Confidence_Bounds|Fisher Matrix Confidence Bounds]] discussed later. These are all excellent large sample properties. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, the size of the sample necessary to achieve these properties can be quite large: thirty to fifty to more than a hundred exact failure times, depending on the application. With fewer points, the methods can be badly biased. It is known, for example, that MLE estimates of the shape parameter for the Weibull distribution are badly biased for small sample sizes, and the effect can be increased depending on the amount of censoring. This bias can cause major discrepancies in analysis. There are also pathological situations when the asymptotic properties of the MLE do not apply. One of these is estimating the location parameter for the three-parameter Weibull distribution when the shape parameter has a value close to 1. These problems, too, can cause major discrepancies. &lt;br /&gt;
&lt;br /&gt;
However, MLE can handle suspensions and interval data better than rank regression, particularly when dealing with a heavily censored data set with few exact failure times or when the censoring times are unevenly distributed. It can also provide estimates with one or no observed failures, which rank regression cannot do. As a rule of thumb, our recommendation is to use rank regression techniques when the sample sizes are small and without heavy censoring (censoring is discussed in [[Life Data Classification|Life Data Classifications]]). When heavy or uneven censoring is present, when a high proportion of interval data is present and/or when the sample size is sufficient, MLE should be preferred. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also:&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
*[[Appendix:_Maximum_Likelihood_Estimation_Example|Maximum Likelihood Parameter Estimation Example]] &lt;br /&gt;
*[[Appendix:_Special_Analysis_Methods|Grouped Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
=Bayesian Parameter Estimation Methods=&lt;br /&gt;
Up to this point, we have dealt exclusively with what is commonly referred to as classical statistics. In this section, another school of thought in statistical analysis will be introduced, namely Bayesian statistics. The premise of Bayesian statistics (within the context of life data analysis) is to incorporate prior knowledge, along with a given set of current observations, in order to make statistical inferences. The prior information could come from operational or observational data, from previous comparable experiments or from engineering knowledge.  This type of analysis can be particularly useful when there is limited test data for a given design or failure mode but there is a strong prior understanding of the failure rate behavior for that design or mode. By incorporating prior information about the parameter(s), a posterior distribution for the parameter(s) can be obtained and inferences on the model parameters and their functions can be made. This section is intended to give a quick and elementary overview of Bayesian methods, focused primarily on the material necessary for understanding the Bayesian analysis methods available in Weibull++. Extensive coverage of the subject can be found in numerous books dealing with Bayesian statistics.&lt;br /&gt;
&lt;br /&gt;
===Bayes’s Rule===&lt;br /&gt;
Bayes’s rule provides the framework for combining prior information with sample data. In this reference, we apply Bayes’s rule for combining prior information on the assumed distribution&#039;s parameter(s) with sample data in order to make inferences based on the model. The prior knowledge about the parameter(s) is expressed in terms of a distribution, &amp;lt;math&amp;gt;\varphi (\theta ),\,\!&amp;lt;/math&amp;gt; called the &#039;&#039;prior distribution&#039;&#039;. The &#039;&#039;posterior&#039;&#039; distribution of &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; given the sample data, using Bayes&#039;s rule, provides the updated information about the parameters &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt;. This is expressed with the following posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta |Data) = \frac{L(Data|\theta )\varphi (\theta )}{\int_{\zeta}^{} L(Data|\theta )\varphi(\theta )d\theta}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; is a vector of the parameters of the chosen distribution&lt;br /&gt;
*&amp;lt;math&amp;gt;\zeta\,\!&amp;lt;/math&amp;gt; is the range of &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt; L(Data|\theta)\,\!&amp;lt;/math&amp;gt; is the likelihood function based on the chosen distribution and data&lt;br /&gt;
*&amp;lt;math&amp;gt;\varphi(\theta )\,\!&amp;lt;/math&amp;gt; is the prior distribution for each of the parameters&lt;br /&gt;
&lt;br /&gt;
The integral in the Bayes&#039;s rule equation is often referred to as the marginal probability, which is a constant number that can be interpreted as the probability of obtaining the sample data given a prior distribution. Generally, the integral in the Bayes&#039;s rule equation does not have a closed form solution and numerical methods are needed for its solution.&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Bayes&#039;s rule equation, there is a significant difference between classical and Bayesian statistics. First, the idea of prior information does not exist in classical statistics. All inferences in classical statistics are based on the sample data. On the other hand, in the Bayesian framework, prior information constitutes the basis of the theory. Another difference is in the overall approach of making inferences and their interpretation. For example, in Bayesian analysis, the parameters of the distribution to be fitted are themselves random variables. In reality, there is no distribution fitted to the data in the Bayesian case.&lt;br /&gt;
&lt;br /&gt;
For instance, consider the case where data is obtained from a reliability test. Based on prior experience on a similar product, the analyst believes that the shape parameter of the Weibull distribution has a value between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\beta }_{2}}\,\!&amp;lt;/math&amp;gt; and wants to utilize this information. This can be achieved by using the Bayes theorem. At this point, the analyst is automatically forcing the Weibull distribution as a model for the data and with a shape parameter between &amp;lt;math&amp;gt;{\beta _1}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{\beta _2}\,\!&amp;lt;/math&amp;gt;. In this example, the range of values for the shape parameter is the prior distribution, which in this case is Uniform. By applying Bayes&#039;s rule, the posterior distribution of the shape parameter will be obtained. Thus, we end up with a distribution for the parameter rather than an estimate of the parameter, as in classical statistics.&lt;br /&gt;
&lt;br /&gt;
To better illustrate the example, assume that a set of failure data was provided along with a distribution for the shape parameter (i.e., uniform prior) of the Weibull (automatically assuming that the data are Weibull distributed). Based on that, a new distribution (the posterior) for that parameter is then obtained using Bayes&#039;s rule. This posterior distribution of the parameter may or may not resemble in form the assumed prior distribution. In other words, in this example the prior distribution of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; was assumed to be uniform but the posterior is most likely not a uniform distribution.&lt;br /&gt;
&lt;br /&gt;
The question now becomes: what is the value of the shape parameter? What about the reliability and other results of interest? In order to answer these questions, we have to remember that in the Bayesian framework all of these metrics are random variables. Therefore, in order to obtain a point estimate, a percentile of the posterior distribution needs to be specified, or the expected value of the posterior distribution can be used.&lt;br /&gt;
&lt;br /&gt;
In order to demonstrate the procedure of obtaining results from the posterior distribution, we will rewrite the Bayes&#039;s rule equation for a single parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(\theta_1 |Data) = \frac{L(Data|\theta_1 )\varphi (\theta_1 )}{\int_{\zeta}^{} L(Data|\theta_1 )\varphi(\theta_1 )d\theta_1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The expected value (or mean value) of the parameter &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained using the equation for the mean and the Bayes&#039;s rule equation for single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E({\theta _1}) = {m_{{\theta _1}}} = \int_{\zeta}^{}{\theta _1} \cdot f({\theta _1}|Data)d{\theta _1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative result for &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; would be the median value. Using the equation for the median and the Bayes&#039;s rule equation for a single parameter:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{\theta }_{0.5}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation for the median is solved for &amp;lt;math&amp;gt;{\theta _{0.5}}\,\!&amp;lt;/math&amp;gt;, the median value of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Similarly, any other percentile of the posterior &#039;&#039;pdf&#039;&#039; can be calculated and reported. For example, one could calculate the 90th percentile of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;’s posterior &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty ,0}^{{{\theta }_{0.9}}}f({{\theta }_{1}}|Data)d{{\theta }_{1}}=0.9\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This calculation will be used in [[Confidence Bounds]] and [[The Weibull Distribution]] for obtaining confidence bounds on the parameter(s).&lt;br /&gt;
&lt;br /&gt;
The next step will be to make inferences on the reliability. Since the parameter &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt; is a random variable described by the posterior &#039;&#039;pdf,&#039;&#039; all subsequent functions of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; are distributed random variables as well and are entirely based on the posterior &#039;&#039;pdf&#039;&#039; of &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt;. Therefore, expected value, median or other percentile values will also need to be calculated. For example, the expected reliability at time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;E[R(T|Data)] = \int_{\varsigma}^{} R(T)f(\theta |Data)d{\theta}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, at a given time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, there is a distribution that governs the reliability value at that time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, and by using Bayes&#039;s rule, the expected (or mean) value of the reliability is obtained. Other percentiles of this distribution can also be obtained.&lt;br /&gt;
A similar procedure is followed for other functions of &amp;lt;math&amp;gt;{\theta _1}\,\!&amp;lt;/math&amp;gt;, such as failure rate, reliable life, etc.&lt;br /&gt;
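The grid-based evaluation of the posterior, its mean and its percentiles can be sketched numerically. The following is a minimal illustration, assuming a Weibull model with a known scale parameter eta, a uniform prior on the shape parameter, and made-up failure times; none of the function names or numeric values below come from Weibull++.

```python
import math

# Hedged sketch: posterior of the Weibull shape parameter (beta) via
# Bayes's rule on a grid, with a uniform prior on [b1, b2] and a known
# scale parameter eta. Failure times and parameter ranges are made up.

def weibull_log_like(beta, eta, failures):
    # Log-likelihood of complete (uncensored) failure times.
    return sum(math.log(beta / eta) + (beta - 1.0) * math.log(t / eta)
               - (t / eta) ** beta for t in failures)

def posterior_beta(failures, eta, b1, b2, npts=2001):
    # The uniform prior is constant, so it cancels in Bayes's rule: the
    # posterior is the likelihood divided by the marginal probability.
    grid = [b1 + (b2 - b1) * i / (npts - 1) for i in range(npts)]
    w = [math.exp(weibull_log_like(b, eta, failures)) for b in grid]
    dx = grid[1] - grid[0]
    z = sum(w) * dx                      # marginal probability (normalizer)
    return grid, [wi / z for wi in w]    # grid and posterior pdf values

def posterior_mean(grid, post):
    # Expected value of beta under the posterior, by numerical integration.
    dx = grid[1] - grid[0]
    return sum(b * p for b, p in zip(grid, post)) * dx

failures = [105.0, 140.0, 160.0, 185.0, 220.0]   # hypothetical data
grid, post = posterior_beta(failures, eta=180.0, b1=0.5, b2=8.0)
print(posterior_mean(grid, post))
```

Percentiles (the median, the 90th percentile, etc.) would be obtained the same way, by accumulating the posterior pdf over the grid until the desired probability is reached.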
&lt;br /&gt;
===Prior Distributions===&lt;br /&gt;
Prior distributions play a very important role in Bayesian statistics. They are essentially the basis of Bayesian analysis. Different types of prior distributions exist, namely &#039;&#039;informative&#039;&#039; and &#039;&#039;non-informative&#039;&#039;. Non-informative prior distributions (a.k.a. &#039;&#039;vague&#039;&#039;, &#039;&#039;flat&#039;&#039; and &#039;&#039;diffuse&#039;&#039;) are distributions that have no population basis and play a minimal role in the posterior distribution. The idea behind the use of non-informative prior distributions is to make inferences that are not greatly affected by external information or when external information is not available. The uniform distribution is frequently used as a non-informative prior.&lt;br /&gt;
&lt;br /&gt;
On the other hand, informative priors have a stronger influence on the posterior distribution. The influence of the prior distribution on the posterior is related to the sample size of the data and the form of the prior. Generally speaking, large sample sizes are required to modify strong priors, whereas weak priors are overwhelmed by even relatively small sample sizes. Informative priors are typically obtained from past data.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Time-Varying_Stress_Models&amp;diff=56780</id>
		<title>Time-Varying Stress Models</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Time-Varying_Stress_Models&amp;diff=56780"/>
		<updated>2014-11-24T16:10:32Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Mathematical Formulation for a Step-Stress Model */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:ALTABOOK|10}}&lt;br /&gt;
Traditionally, accelerated tests that use a time-varying stress application have been used to induce failures quickly. This is highly desirable given the pressure on industry today to shorten new product introduction time. The most basic type of time-varying stress test is a step-stress test. In step-stress accelerated testing, the test units are subjected to successively higher stress levels in predetermined stages, and thus follow a time-varying stress profile. The units usually start at a lower stress level and at a predetermined time, or failure number, the stress is increased and the test continues. The test is terminated when all units have failed, when a certain number of failures are observed or when a certain time has elapsed. Step-stress testing can substantially shorten the reliability test&#039;s duration. In addition to step-stress testing, there are many other types of time-varying stress profiles that can be used in accelerated life testing. However, it should be noted that there is more uncertainty in the results from such time-varying stress tests than from traditional constant stress tests of the same length and sample size.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When dealing with data from accelerated tests with time-varying stresses, the life-stress relationship must take into account the cumulative effect of the applied stresses. Such a model is commonly referred to as a &#039;&#039;cumulative damage&#039;&#039; or &#039;&#039;cumulative exposure&#039;&#039; model. Nelson [[Appendix_E:_References|[28]]] defines and presents the derivation and assumptions of such a model. ALTA includes the cumulative damage model for the analysis of time-varying stress data. This section presents an introduction to the model formulation and its application.&lt;br /&gt;
&lt;br /&gt;
=Model Formulation=&lt;br /&gt;
To formulate the cumulative exposure/damage model, consider a simple step-stress experiment where an electronic component was subjected to a voltage stress, starting at 2V (use stress level) and increased to 7V in stepwise increments, as shown in the next figure. The following steps, in hours, were used to apply stress to the products under test: 0 to 250, 2V; 250 to 350, 3V; 350 to 370, 4V; 370 to 380, 5V; 380 to 390, 6V; and 390 to 400, 7V.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA12.1.gif|center|550px|Step profile for a simple voltage stress test.]]&lt;br /&gt;
&lt;br /&gt;
In this example, 11 units were available for the test. All units were tested using this same stress profile. Units that failed were removed from the test and their total times on test were recorded. The following times-to-failure were observed in the test, in hours: 280, 310, 330, 352, 360, 366, 371, 374, 378, 381 and 385. The first failure in this test occurred at 280 hours when the stress was 3V. During the test, this unit experienced a period of time at 2V before failing at 3V. If the stress were 2V, one would expect the unit to fail at a time later than 280 hours, while if the unit were always at 3V, one would expect that failure time to be sooner than 280 hrs. The problem faced by the analyst in this case is to determine some equivalency between the stresses. In other words, what is the equivalent of 280 hours (with 250 hours spent at 2V and 30 hours spent at 3V) at a constant 2V stress or at a constant 3V stress?&lt;br /&gt;
&lt;br /&gt;
==Mathematical Formulation for a Step-Stress Model==&lt;br /&gt;
To mathematically formulate the model, consider the step-stress test shown in the next figure, with stresses S1, S2 and S3. Furthermore, assume that the underlying life distribution is the Weibull distribution, and also assume an inverse power law relationship between the Weibull scale parameter and the applied stress.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA12.2.png|center|300px|Step-stress profile and the corresponding life distributions.]]&lt;br /&gt;
&lt;br /&gt;
From the inverse power law relationship, the scale parameter, &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt;, of the Weibull distribution can be expressed as an inverse power function of the stress, &amp;lt;math&amp;gt;V \,\!&amp;lt;/math&amp;gt; or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\eta(V)=\frac{1}{K{{V}^{n}}} \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are model parameters.&lt;br /&gt;
The fraction of the units failing by time &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; under a constant stress &amp;lt;math&amp;gt;V = {{S}_{1}}\,\!&amp;lt;/math&amp;gt;, is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
F(t;V)=1-R(t;V)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t;V)={{e}^{-{{\left[ \tfrac{t}{\eta (V)} \right]}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;cdf&#039;&#039; for each constant stress level is: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{F}_{1}}(t;{{S}_{1}})= &amp;amp; 1-{{e}^{-{{(KS_{1}^{n}t)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{F}_{2}}(t;{{S}_{2}})= &amp;amp; 1-{{e}^{-{{(KS_{2}^{n}t)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{F}_{3}}(t;{{S}_{3}})= &amp;amp; 1-{{e}^{-{{(KS_{3}^{n}t)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
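As a quick numeric sketch of these constant-stress cdf expressions: the function below evaluates F(t; S) = 1 − exp(−(K·Sⁿ·t)^β). The values of K, n and β are placeholders chosen for illustration, not estimates from the example data.

```python
import math

# Constant-stress Weibull cdf under the inverse power law,
# F(t; S) = 1 - exp(-(K * S**n * t)**beta). The defaults for k, n and
# beta are illustrative placeholder values, not fitted results.

def step_cdf(t, s, k=1e-4, n=3.0, beta=1.5):
    return 1.0 - math.exp(-((k * s ** n * t) ** beta))

# The cdf increases with both time on test and stress level:
print(step_cdf(100.0, 2.0), step_cdf(200.0, 2.0), step_cdf(100.0, 3.0))
```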
The above equations would suffice if the units did not experience different stresses during the test, as they did in this case. To analyze the data from this step-stress test, a cumulative exposure model is needed. Such a model will relate the life distribution, in this case the Weibull distribution, of the units at one stress level to the distribution at the next stress level. In formulating this model, it is assumed that the remaining life of the test units depends only on the cumulative exposure the units have seen and that the units do not remember how such exposure was accumulated. Moreover, since the units are held at a constant stress at each step, the surviving units will fail according to the distribution at the current step, but with a starting age corresponding to the total accumulated time up to the beginning of the current step. This model can be formulated as follows:&lt;br /&gt;
&lt;br /&gt;
*Units failing during the first step have not experienced any other stresses and will fail according to the &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &#039;&#039;cdf&#039;&#039;. Units that made it to the second step will fail according to the &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &#039;&#039;cdf&#039;&#039;, but will have accumulated some equivalent age, &amp;lt;math&amp;gt;{{\varepsilon }_{1}},\,\!&amp;lt;/math&amp;gt; at this stress level (given the fact that they have spent &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; hours at &amp;lt;math&amp;gt;{{S}_{1}})\,\!&amp;lt;/math&amp;gt; or: &lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}(t;{{S}_{2}})=1-{{e}^{-{{[KS_{2}^{n}((t-{{t}_{1}})+{{\varepsilon }_{1}})]}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
In other words, the probability that the units will fail at a time, &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, while at &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; and between &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{t}_{2}}\,\!&amp;lt;/math&amp;gt; is equivalent to the probability that the units would fail after accumulating &amp;lt;math&amp;gt;(t-{{t}_{1}})\,\!&amp;lt;/math&amp;gt; plus some equivalent time, &amp;lt;math&amp;gt;{{\varepsilon }_{1}},\,\!&amp;lt;/math&amp;gt; to account for the exposure the units have seen at &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
*The equivalent time, &amp;lt;math&amp;gt;{{\varepsilon }_{1}},\,\!&amp;lt;/math&amp;gt; will be the time by which the probability of failure at &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; is equal to the probability of failure at &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; after an exposure of &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; or: &lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
	  {{F}_{1}}({{t}_{1}};{{S}_{1}})=\ &amp;amp; {{F}_{2}}({{\varepsilon }_{1}},{{S}_{2}}) \\ &lt;br /&gt;
	 1-{{e}^{-{{(KS_{1}^{n}{{t}_{1}})}^{\beta }}}}=\ &amp;amp; 1-{{e}^{-{{(KS_{2}^{n}{{\varepsilon }_{1}})}^{\beta }}}} \\ &lt;br /&gt;
	 S_{1}^{n}{{t}_{1}}=\ &amp;amp; S_{2}^{n}{{\varepsilon }_{1}} \\ &lt;br /&gt;
	 {{\varepsilon }_{1}}=\ &amp;amp; {{t}_{1}}{{\left( \frac{{{S}_{1}}}{{{S}_{2}}} \right)}^{n}}  &lt;br /&gt;
	\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
*One would repeat this for step 3 taking into account the accumulated exposure during steps 1 and 2, or in more general terms and for the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; step: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{i}}(t;{{S}_{i}})=1-{{e}^{-{{[KS_{i}^{n}((t-{{t}_{i-1}})+{{\varepsilon }_{i-1}})]}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\varepsilon }_{i-1}}=({{t}_{i-1}}-{{t}_{i-2}}+{{\varepsilon }_{i-2}}){{\left( \frac{{{S}_{i-1}}}{{{S}_{i}}} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
*Once the &#039;&#039;cdf&#039;&#039; for each step has been obtained, the &#039;&#039;pdf&#039;&#039; can then be determined utilizing: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{f}_{i}}(t,{{S}_{i}})=\frac{d}{dt}\left[ {{F}_{i}}(t,{{S}_{i}}) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
Once the model has been formulated, model parameters (i.e., &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; ) can be computed utilizing maximum likelihood estimation methods.&lt;br /&gt;
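The equivalent-time recursion above can be sketched as follows. The step end times and stress levels are those of the voltage example, while the life-stress exponent n is a placeholder value rather than a fitted parameter.

```python
# Equivalent-time recursion for a step-stress profile:
#   eps_i = (t_i - t_{i-1} + eps_{i-1}) * (S_i / S_{i+1})**n
# Step end times and stress levels follow the voltage example above;
# the exponent n = 3.0 is an arbitrary placeholder, not an MLE result.

def equivalent_times(step_ends, stresses, n):
    eps = []
    prev_end, prev_eps = 0.0, 0.0
    for i in range(len(stresses) - 1):
        duration = step_ends[i] - prev_end          # time spent at stress i
        e = (duration + prev_eps) * (stresses[i] / stresses[i + 1]) ** n
        eps.append(e)
        prev_end, prev_eps = step_ends[i], e
    return eps

# 2V to 250 hr, 3V to 350 hr, 4V to 370 hr, 5V to 380 hr, 6V to 390 hr, then 7V.
eps = equivalent_times([250.0, 350.0, 370.0, 380.0, 390.0],
                       [2.0, 3.0, 4.0, 5.0, 6.0, 7.0], n=3.0)
```

Each `eps[i]` is the starting age carried into the next step, so the surviving units fail according to the next step's distribution shifted by that equivalent time.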
&lt;br /&gt;
The previous example can be expanded for any time-varying stress. ALTA allows you to define any stress profile. For example, the stress can be a ramp stress, a monotonically increasing stress, sinusoidal, etc. This section presents a generalized formulation of the cumulative damage model, where stress can be any function of time.&lt;br /&gt;
&lt;br /&gt;
{{Example:CD-GLL_Weibull}}&lt;br /&gt;
&lt;br /&gt;
=Cumulative Damage Power Relationship=&lt;br /&gt;
This section presents a generalized formulation of the cumulative damage model where stress can be any function of time and the life-stress relationship is based on the power relationship. Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the power law relationship,  the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))={{\left( \frac{a}{x(t)} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In ALTA, the above relationship is actually presented in a format consistent with the general log-linear (GLL) relationship for the power law relationship:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))={{e}^{{{\alpha }_{0}}+{{\alpha }_{1}}\ln \left( x(t) \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, instead of displaying &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; as the calculated parameters, the following reparameterization is used:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 {{\alpha }_{0}}=\ &amp;amp; \ln ({{a}^{n}}) \\ &lt;br /&gt;
 {{\alpha }_{1}}=\ &amp;amp; -n  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
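This reparameterization is a simple change of variables and can be checked numerically; the sketch below uses arbitrary example values for a and n.

```python
import math

# Converting between the (a, n) and GLL (alpha0, alpha1) parameterizations
# of the power relationship: alpha0 = ln(a**n), alpha1 = -n.
# The numeric values used below are arbitrary examples.

def to_gll(a, n):
    return n * math.log(a), -n

def from_gll(alpha0, alpha1):
    n = -alpha1
    return math.exp(alpha0 / n), n      # a = exp(alpha0 / n)
```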
&lt;br /&gt;
==Cumulative Damage Power - Exponential==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the power law relationship, the mean life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{m(t,\,x)}=s(t,\,x)={{\left( \frac{x(t)}{a} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,\,x(t))={{e}^{-I(t,\,x)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\,x)=\int_{0}^{t}{{\left( \frac{x(u)}{a} \right)}^{n}}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\,x)=s(t,\,x){{e}^{-I(t,\,x)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest (e.g., mean life, failure rate, etc.) can be obtained utilizing the statistical properties definitions presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln (L)=\Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln [s({{T}_{i}},\,{{x}_{i}})]-\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}I({{T}_{i}},\,{{x}_{i}})-\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }I(T_{i}^{\prime },\,x_{i}^{\prime })+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },\,x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Li}^{\prime \prime },\,x_{i}^{\prime \prime })}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },\,x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Ri}^{\prime \prime },\,x_{i}^{\prime \prime })}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
 &lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
 &lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
 &lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Power - Weibull==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the power law relationship, the characteristic life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta (t,x)}=s(t,x)={{\left( \frac{x(t)}{a} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,x(t))={{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\int_{0}^{t}{{\left( \frac{x(u)}{a} \right)}^{n}}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039;  is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\beta s(t,x){{\left( I(t,x) \right)}^{\beta -1}}{{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest (e.g., mean life, failure rate, etc.) can be obtained utilizing the statistical properties definitions presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln (L)=\Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln [\beta s({{T}_{i}},{{x}_{i}}){{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta -1}}]-\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}{{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta }}-\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }{{\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)}^{\beta }}+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt;  is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cumulative Damage-Power-Weibull Example&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Using the simple step-stress data given [[Time-Varying Stress Models#Model Formulation|here]], one would define &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt;  as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 x(t)=\ &amp;amp; 2,\text{    }0&amp;lt;t\le 250 \\ &lt;br /&gt;
 =\ &amp;amp; 3,\text{    }250&amp;lt;t\le 350 \\ &lt;br /&gt;
 =\ &amp;amp; 4,\text{    }350&amp;lt;t\le 370 \\ &lt;br /&gt;
 =\ &amp;amp; 5,\text{    }370&amp;lt;t\le 380 \\ &lt;br /&gt;
 =\ &amp;amp; 6,\text{    }380&amp;lt;t\le 390 \\ &lt;br /&gt;
 =\ &amp;amp; 7,\text{    }390&amp;lt;t\le +\infty   &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
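As an illustration of how a step profile enters the cumulative damage calculation, the sketch below (not ALTA's implementation) evaluates the profile above and the power-law cumulative exposure integral segment by segment, which is exact for a piecewise-constant stress; the values of &#039;&#039;a&#039;&#039; and &#039;&#039;n&#039;&#039; in the usage note are placeholders.&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch: the step-stress profile x(t) defined above, plus the power-law
# cumulative exposure integral
#   I(t) = integral from 0 to t of (x(u)/a)^n du,
# accumulated segment by segment (exact for a piecewise-constant profile).
SEGMENTS = [(0.0, 250.0, 2.0), (250.0, 350.0, 3.0), (350.0, 370.0, 4.0),
            (370.0, 380.0, 5.0), (380.0, 390.0, 6.0),
            (390.0, float("inf"), 7.0)]

def stress(t):
    """Stress level x(t) for the step profile (t >= 0)."""
    for start, end, level in SEGMENTS:
        if start < t <= end or (start == 0.0 and t == 0.0):
            return level
    raise ValueError("time must be non-negative")

def cumulative_exposure(t, a, n):
    """I(t): each segment visited before time t contributes
    (level/a)^n times the time spent in that segment."""
    total = 0.0
    for start, end, level in SEGMENTS:
        if t <= start:
            break
        total += (level / a) ** n * (min(t, end) - start)
    return total
```
&lt;br /&gt;
For example, `cumulative_exposure(300.0, a, n)` sums a full contribution from the 2 V segment and 50 time units at 3 V.&lt;br /&gt;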
Assuming a power relation as the underlying life-stress relationship and the Weibull distribution as the underlying life distribution, one can then formulate the log-likelihood function for the above data set as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{F}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,\ln \left\{ \beta {{\left[ \frac{x({{t}_{i}})}{a} \right]}^{n}}{{\left[ \int_{0}^{{{t}_{i}}}{{\left[ \frac{x(u)}{a} \right]}^{n}}du \right]}^{\beta -1}} \right\} -\overset{F}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,\left\{ {{\left[ \int_{0}^{{{t}_{i}}}{{\left[ \frac{x(u)}{a} \right]}^{n}}du \right]}^{\beta }} \right\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; is the number of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the Weibull shape parameter.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are the IPL parameters.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; is the stress profile function.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure.&lt;br /&gt;
&lt;br /&gt;
The parameter estimates for &amp;lt;math&amp;gt;\hat{\beta }\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat{n}\,\!&amp;lt;/math&amp;gt; can be obtained by simultaneously solving &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \beta }=0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial a}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial n}=0\,\!&amp;lt;/math&amp;gt;. Using ALTA, the parameter estimates for this data set are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 \widehat{\beta }=\ &amp;amp; 2.67829 \\ &lt;br /&gt;
  \widehat{a}=\ &amp;amp; 11.72208 \\ &lt;br /&gt;
  \widehat{n}=\ &amp;amp; 3.998466  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the parameters are obtained, one can now determine the reliability for these units at any time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; and stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R\left( t,x\left( t \right) \right)={{e}^{-{{\left[ \int_{0}^{t}{{\left[ \tfrac{x(u)}{a} \right]}^{n}}du \right]}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or at a fixed stress level &amp;lt;math&amp;gt;x(t)=2V\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;t=300\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R\left( t=300,x(t)=2 \right)={{e}^{-{{\left[ \int_{0}^{t}{{\left[ \tfrac{x(u)}{a} \right]}^{n}}du \right]}^{\beta }}}}=97.5\%\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
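Under a constant stress the exposure integral collapses to &#039;&#039;I(t) = t(x/a)&lt;sup&gt;n&lt;/sup&gt;&#039;&#039;, so the reliability figure above can be checked directly from the ALTA estimates. This is a minimal numerical sketch, not ALTA output:&lt;br /&gt;
&lt;br /&gt;
```python
from math import exp

# ALTA parameter estimates from above (power-Weibull cumulative damage).
BETA, A, N = 2.67829, 11.72208, 3.998466

def reliability_const(t, x):
    """R(t) at constant stress x: I(t) = t*(x/A)^N, R = exp(-I**BETA)."""
    i_t = t * (x / A) ** N
    return exp(-i_t ** BETA)

r = reliability_const(300.0, 2.0)  # close to 0.975, i.e., the 97.5% above
```
&lt;br /&gt;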
The mean time to failure (MTTF) at any stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; can be determined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MTTF\left( x\left( t \right) \right)=\int_{0}^{\infty }t\left[ \left\{ \beta {{\left[ \frac{x\left( t \right)}{a} \right]}^{n}}{{\left[ \int_{0}^{t}{{\left[ \frac{x\left( u \right)}{a} \right]}^{n}}du \right]}^{\beta -1}} \right\}{{e}^{-{{\left[ \int_{0}^{t}{{\left[ \tfrac{x(u)}{a} \right]}^{n}}du \right]}^{\beta }}}} \right]dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or at a fixed stress level &amp;lt;math&amp;gt;x\left( t \right)=2V\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MTTF\left( x\left( t \right) \right)=1046.3\text{ hours}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
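Because a constant stress level reduces the model to an ordinary Weibull distribution with scale &#039;&#039;&amp;eta; = 1/s&#039;&#039;, the MTTF above can also be reproduced in closed form as &#039;&#039;&amp;eta;&amp;Gamma;(1 + 1/&amp;beta;)&#039;&#039; rather than by integrating the &#039;&#039;pdf&#039;&#039;. A sketch using the estimates above:&lt;br /&gt;
&lt;br /&gt;
```python
from math import gamma

# At constant stress x = 2 V the cumulative damage power-Weibull model is an
# ordinary Weibull distribution with scale eta = 1/s, where s = (x/a)^n,
# so MTTF = eta * Gamma(1 + 1/beta).
BETA, A, N = 2.67829, 11.72208, 3.998466

s = (2.0 / A) ** N                    # exposure-rate factor at x = 2 V
mttf = gamma(1.0 + 1.0 / BETA) / s    # approximately 1046 hours
```
&lt;br /&gt;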
Any other metric of interest (e.g., failure rate, conditional reliability etc.) can also be determined using the basic definitions given in [[Appendix A: Brief Statistical Background|Appendix A]] and calculated automatically with ALTA.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Power - Lognormal==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the power law relationship, the median life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\breve{T}(t,x)}=s(t,x)={{\left( \frac{x(t)}{a} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))=1-\Phi (z)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z(t,x)=\frac{\ln I(t,x)}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,{{\left( \frac{x(u)}{a} \right)}^{n}}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\frac{s(t,x)\varphi (z(t,x))}{\sigma _{T}^{\prime }I(t,x)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
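A minimal sketch of the lognormal form at a constant stress level (where &#039;&#039;I(t)&#039;&#039; again reduces to &#039;&#039;t(x/a)&lt;sup&gt;n&lt;/sup&gt;&#039;&#039;); &amp;Phi; is built from the error function, and any parameter values passed to it are illustrative, since no estimates are given for this model here:&lt;br /&gt;
&lt;br /&gt;
```python
from math import erf, log, sqrt

def std_normal_cdf(z):
    """Standard normal CDF Phi(z), expressed via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def reliability_lognormal_const(t, x, a, n, sigma):
    """R(t, x) = 1 - Phi(ln I / sigma') at constant stress x,
    with I(t) = t*(x/a)^n for the power life-stress relationship."""
    i_t = t * (x / a) ** n
    z = log(i_t) / sigma
    return 1.0 - std_normal_cdf(z)
```
&lt;br /&gt;
Note that when &#039;&#039;I(t) = 1&#039;&#039; the argument &#039;&#039;z&#039;&#039; is zero, so the reliability is exactly 0.5, which is how the median life is defined for this model.&lt;br /&gt;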
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\frac{s({{T}_{i}},{{x}_{i}})\varphi (z({{T}_{i}},{{x}_{i}}))}{\sigma _{T}^{\prime }I({{T}_{i}},{{x}_{i}})}]+\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\ln \left( 1-\Phi (z(T_{i}^{\prime },x_{i}^{\prime })) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; z_{Ri}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }} \\ &lt;br /&gt;
 &amp;amp; z_{Li}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
=Cumulative Damage Arrhenius Relationship=&lt;br /&gt;
This section presents a generalized formulation of the cumulative damage model where stress can be any function of time and the life-stress relationship is based on the Arrhenius life-stress relationship. Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the Arrhenius relationship, the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))=C{{e}^{\tfrac{b}{x(t)}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In ALTA, the above relationship is actually presented in a format consistent with the general log-linear (GLL) relationship for the Arrhenius relationship:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))={{e}^{{{\alpha }_{0}}+{{\alpha }_{1}}\tfrac{1}{x(t)}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, instead of displaying &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; as the calculated parameters, the following reparameterization is used:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 {{\alpha }_{0}}=\ &amp;amp; \ln (C) \\ &lt;br /&gt;
 {{\alpha }_{1}}=\ &amp;amp; b  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
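The reparameterization is a direct change of variables between the two forms. A two-line sketch of the mapping between the (&#039;&#039;C&#039;&#039;, &#039;&#039;b&#039;&#039;) and (&amp;alpha;&lt;sub&gt;0&lt;/sub&gt;, &amp;alpha;&lt;sub&gt;1&lt;/sub&gt;) parameterizations:&lt;br /&gt;
&lt;br /&gt;
```python
from math import exp, log

def gll_from_arrhenius(c, b):
    """(C, b) -> (alpha0, alpha1): alpha0 = ln(C), alpha1 = b."""
    return log(c), b

def arrhenius_from_gll(alpha0, alpha1):
    """Inverse map: C = exp(alpha0), b = alpha1."""
    return exp(alpha0), alpha1
```
&lt;br /&gt;
Round-tripping through both functions recovers the original (&#039;&#039;C&#039;&#039;, &#039;&#039;b&#039;&#039;) pair up to floating-point error.&lt;br /&gt;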
==Cumulative Damage Arrhenius - Exponential==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the Arrhenius relationship, the mean life is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{m(t,x)}=s(t,x)=\frac{{{e}^{\tfrac{-b}{x(t)}}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))={{e}^{-I(t,x)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{\tfrac{-b}{x(u)}}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039;  is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
f(t,x)=s(t,x){{e}^{-I(t,x)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
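For a piecewise-constant temperature profile the integral &#039;&#039;I(t)&#039;&#039; can be accumulated exactly segment by segment, just as in the power-law case. In this sketch the profile and the values of &#039;&#039;C&#039;&#039; and &#039;&#039;b&#039;&#039; are illustrative only:&lt;br /&gt;
&lt;br /&gt;
```python
from math import exp

def arrhenius_rate(x, c, b):
    """s(x) = exp(-b/x)/C, with x in absolute temperature units."""
    return exp(-b / x) / c

def reliability_step_profile(t, profile, c, b):
    """R(t) = exp(-I(t)), with I(t) summed over a piecewise-constant
    temperature profile given as [(start, end, temperature), ...]."""
    i_t = 0.0
    for start, end, temp in profile:
        if t <= start:
            break
        i_t += arrhenius_rate(temp, c, b) * (min(t, end) - start)
    return exp(-i_t)
```
&lt;br /&gt;
With a single segment this reduces to the familiar constant-rate exponential reliability exp(&amp;minus;&#039;&#039;s&#039;&#039;&amp;middot;&#039;&#039;t&#039;&#039;).&lt;br /&gt;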
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [s({{T}_{i}},{{x}_{i}})]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\left( I({{T}_{i}},{{x}_{i}}) \right) -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Arrhenius - Weibull==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the Arrhenius relationship, the characteristic life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta (t,x)}=s(t,x)=\frac{{{e}^{\tfrac{-b}{x(t)}}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,x(t))={{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{\tfrac{-b}{x(u)}}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039;  is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\beta s(t,x){{\left( I(t,x) \right)}^{\beta -1}}{{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
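At a constant stress level the &#039;&#039;pdf&#039;&#039; above should equal &amp;minus;dR/dt. The sketch below verifies this with a central finite difference, using purely illustrative parameter values (they are not estimates from any data set in this chapter):&lt;br /&gt;
&lt;br /&gt;
```python
from math import exp

def exposure_const(t, x, c, b):
    """I(t) at constant stress: t * exp(-b/x)/C."""
    return t * exp(-b / x) / c

def reliability_aw(t, x, c, b, beta):
    """Arrhenius-Weibull cumulative damage reliability: exp(-I**beta)."""
    return exp(-exposure_const(t, x, c, b) ** beta)

def pdf_aw(t, x, c, b, beta):
    """f = beta * s * I**(beta-1) * exp(-I**beta), with s = exp(-b/x)/C."""
    s = exp(-b / x) / c
    i_t = exposure_const(t, x, c, b)
    return beta * s * i_t ** (beta - 1.0) * exp(-i_t ** beta)
```
&lt;br /&gt;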
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\beta s({{T}_{i}},{{x}_{i}}){{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta -1}}]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}{{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta }} -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }{{\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Arrhenius - Lognormal==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the Arrhenius relationship, the median life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\breve{T}(t,x)}=s(t,x)=\frac{{{e}^{\tfrac{-b}{x(t)}}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))=1-\Phi (z)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z(t,x)=\frac{\ln I(t,x)}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{\tfrac{-b}{x(u)}}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\frac{s(t,x)\varphi (z(t,x))}{\sigma _{T}^{\prime }I(t,x)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\frac{s({{T}_{i}},{{x}_{i}})\varphi (z({{T}_{i}},{{x}_{i}}))}{\sigma _{T}^{\prime }I({{T}_{i}},{{x}_{i}})}]+\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\ln \left( 1-\Phi (z(T_{i}^{\prime },x_{i}^{\prime })) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; z_{Ri}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }} \\ &lt;br /&gt;
 &amp;amp; z_{Li}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
=Cumulative Damage Exponential Relationship=&lt;br /&gt;
This section presents a generalized formulation of the cumulative damage model where stress can be any function of time and the life-stress relationship is based on the exponential relationship. Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the exponential relationship, the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
L(x(t))=C{{e}^{bx(t)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
   &lt;br /&gt;
In ALTA, the above relationship is actually presented in a format consistent with the general log-linear (GLL) relationship for the exponential relationship:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))={{e}^{{{\alpha }_{0}}+{{\alpha }_{1}}x(t)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, instead of displaying &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; as the calculated parameters, the following reparameterization is used:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 {{\alpha }_{0}}=\ &amp;amp; \ln (C) \\ &lt;br /&gt;
 {{\alpha }_{1}}=\ &amp;amp; b  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Exponential - Exponential==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the exponential life-stress relationship, the mean life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{m(t,x)}=s(t,x)=\frac{{{e}^{-bx(t)}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
    &lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))={{e}^{-I(t,x)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{-bx(u)}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039;  is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
f(t,x)=s(t,x){{e}^{-I(t,x)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [s({{T}_{i}},{{x}_{i}})]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\left( I({{T}_{i}},{{x}_{i}}) \right) -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Exponential - Weibull==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the exponential life-stress relationship, the characteristic life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta (t,x)}=s(t,x)=\frac{{{e}^{-b\cdot x(t)}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,x(t))={{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{-bx(u)}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\beta s(t,x){{\left( I(t,x) \right)}^{\beta -1}}{{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\beta s({{T}_{i}},{{x}_{i}}){{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta -1}}] -\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}{{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta }}-\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }{{\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
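As a numerical illustration of the model above, the sketch below evaluates the damage integral I(t, x) by trapezoidal integration for a hypothetical two-level step-stress profile and then computes the Weibull reliability R(t). The parameter values b, C and beta and the stress profile are assumed for demonstration only; this is not ALTA's implementation.

```python
import math

# Assumed (illustrative) parameters of the exponential life-stress relationship
b, C, beta = 0.02, 500.0, 1.5

def x(t):
    """Hypothetical step-stress profile: stress 100 until t = 50, then 150."""
    return 100.0 if t < 50.0 else 150.0

def I(t, n=10000):
    """Damage integral I(t, x) = integral of exp(-b*x(u))/C from 0 to t,
    evaluated with the trapezoidal rule."""
    h = t / n
    total = 0.0
    for k in range(n + 1):
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.exp(-b * x(k * h)) / C
    return total * h

def R(t):
    """Weibull reliability under the cumulative damage model: exp(-I(t)^beta)."""
    return math.exp(-I(t) ** beta)
```

Because I(t, x) is non-decreasing in t, R(t) is non-increasing, as expected for a reliability function.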
==Cumulative Damage Exponential - Lognormal==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the exponential life-stress relationship, the median life is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\breve{T}(t,x)}=s(t,x)=\frac{{{e}^{-bx(t)}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))=1-\Phi (z)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z(t,x)=\frac{\ln I(t,x)}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\int_{0}^{t}{\frac{{{e}^{-bx(u)}}}{C}}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\frac{s(t,x)\varphi (z(t,x))}{\sigma _{T}^{\prime }I(t,x)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\frac{s({{T}_{i}},{{x}_{i}})\varphi (z({{T}_{i}},{{x}_{i}}))}{\sigma _{T}^{\prime }I({{T}_{i}},{{x}_{i}})}] \overset{S}{\mathop{\underset{i=1}{\mathop{+\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\ln \left( 1-\Phi (z(T_{i}^{\prime },x_{i}^{\prime })) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; z_{Ri}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }} \\ &lt;br /&gt;
 &amp;amp; z_{Li}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
=Cumulative Damage General Log-Linear Relationship=&lt;br /&gt;
This section presents a generalized formulation of the cumulative damage model where multiple stress types are used in the analysis and where the stresses can be any function of time.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage General Log-Linear - Exponential==&lt;br /&gt;
Given &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; time-varying stresses &amp;lt;math&amp;gt;\underline{X}=({{X}_{1}}(t),{{X}_{2}}(t)...{{X}_{n}}(t))\,\!&amp;lt;/math&amp;gt;, the life-stress relationship is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{m\left( t,\overset{\_}{\mathop{x}}\, \right)}=s(t,\overset{\_}{\mathop{x}}\,)={{e}^{-{{\alpha }_{0}}-\sum_{j=1}^{n}{{\alpha }_{j}}{{x}_{j}}(t)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\alpha }_{0}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\alpha }_{j}}\,\!&amp;lt;/math&amp;gt; are model parameters.&lt;br /&gt;
This relationship can be further modified through the use of transformations and can be reduced to the relationships discussed previously (power, Arrhenius and exponential), if so desired.&lt;br /&gt;
The exponential reliability function of the unit under multiple stresses is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,\overset{\_}{\mathop{x}}\,)={{e}^{-I(t,\overset{\_}{\mathop{x}}\,)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\overset{\_}{\mathop{x}}\,)=\int_{0}^{t}{{{e}^{-{{\alpha }_{0}}-\sum_{j=1}^{n}{{\alpha }_{j}}{{x}_{j}}(u)}}}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\overset{\_}{\mathop{x}}\,)=s(t,\overset{\_}{\mathop{x}}\,){{e}^{-I(t,\overset{\_}{\mathop{x}}\,)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [s({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}})]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\left( I({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}}) \right) -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\left( I(T_{i}^{\prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime }) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Li}^{\prime \prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime \prime })}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Ri}^{\prime \prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime \prime })}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
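The same numerical approach extends to the general log-linear case. The sketch below assumes two hypothetical time-varying stresses (a linear ramp and a constant) and illustrative values for the alpha parameters; it evaluates the damage integral and the exponential reliability. The sign and magnitude of each alpha depend on the stress transformation used, so the values here are purely for demonstration.

```python
import math

# Assumed GLL parameters (alpha_0 and alpha_j); illustrative only
a0 = 5.0
a = [0.01, 0.3]

def stresses(t):
    """Two hypothetical stress profiles: a linear ramp and a constant stress."""
    return [300.0 + 0.5 * t, 2.0]

def s(t):
    """s(t, x) = exp(-alpha_0 - sum_j alpha_j * x_j(t))."""
    return math.exp(-a0 - sum(aj * xj for aj, xj in zip(a, stresses(t))))

def I(t, n=5000):
    """Trapezoidal approximation of the damage integral of s(u) over [0, t]."""
    h = t / n
    return h * sum((0.5 if k in (0, n) else 1.0) * s(k * h) for k in range(n + 1))

def R(t):
    """Exponential reliability under the cumulative damage GLL model."""
    return math.exp(-I(t))
```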
==Cumulative Damage General Log-Linear - Weibull==&lt;br /&gt;
Given &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; time-varying stresses &amp;lt;math&amp;gt;\underline{X}=({{X}_{1}}(t),{{X}_{2}}(t)...{{X}_{n}}(t))\,\!&amp;lt;/math&amp;gt;, the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta \left( t,\overset{\_}{\mathop{x}}\, \right)}=s(t,\overset{\_}{\mathop{x}}\,)={{e}^{-{{\alpha }_{0}}-\sum_{j=1}^{n}{{\alpha }_{j}}{{x}_{j}}(t)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\alpha }_{0}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\alpha }_{j}}\,\!&amp;lt;/math&amp;gt; are model parameters.&lt;br /&gt;
&lt;br /&gt;
The Weibull reliability function of the unit under multiple stresses is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,\overset{\_}{\mathop{x}}\,)={{e}^{-{{\left( I(t,\overset{\_}{\mathop{x}}\,) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\overset{\_}{\mathop{x}}\,)=\int_{0}^{t}{{{e}^{-{{\alpha }_{0}}-\sum_{j=1}^{n}{{\alpha }_{j}}{{x}_{j}}(u)}}}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\overset{\_}{\mathop{x}}\,)=\beta s(t,\overset{\_}{\mathop{x}}\,){{\left( I(t,\overset{\_}{\mathop{x}}\,) \right)}^{\beta -1}}{{e}^{-{{\left( I(t,\overset{\_}{\mathop{x}}\,) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\beta s({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}}){{\left( I({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}}) \right)}^{\beta -1}}]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}{{\left( I({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}}) \right)}^{\beta }} -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }{{\left( I(T_{i}^{\prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime }) \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },\bar{x}_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Li}^{\prime \prime },\bar{x}_{i}^{\prime \prime }) \right)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },\bar{x}_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Ri}^{\prime \prime },\bar{x}_{i}^{\prime \prime }) \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage General Log-Linear - Lognormal==&lt;br /&gt;
Given &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; time-varying stresses &amp;lt;math&amp;gt;\underline{X}=({{X}_{1}}(t),{{X}_{2}}(t)...{{X}_{n}}(t))\,\!&amp;lt;/math&amp;gt;, the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\breve{T}(t,\bar{x})}=s(t,\bar{x})={{e}^{-{{\alpha }_{0}}-\sum_{j=1}^{n}{{\alpha }_{j}}{{x}_{j}}(t)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\alpha }_{0}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\alpha }_{j}}\,\!&amp;lt;/math&amp;gt; are model parameters.&lt;br /&gt;
&lt;br /&gt;
The lognormal reliability function of the unit under multiple stresses is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,\bar{x})=1-\Phi (z(t,\bar{x}))\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z(t,\bar{x})=\frac{\ln I(t,\bar{x})}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\bar{x})=\int_{0}^{t}{{{e}^{-{{\alpha }_{0}}-\sum_{j=1}^{n}{{\alpha }_{j}}{{x}_{j}}(u)}}}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\bar{x})=\frac{s(t,\bar{x})\varphi (z(t,\bar{x}))}{\sigma _{T}^{\prime }I(t,\bar{x})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\frac{s({{T}_{i}},{{{\bar{x}}}_{i}})\varphi (z({{T}_{i}},{{{\bar{x}}}_{i}}))}{\sigma _{T}^{\prime }I({{T}_{i}},{{{\bar{x}}}_{i}})}] \overset{S}{\mathop{\underset{i=1}{\mathop{+\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\ln \left( 1-\Phi (z(T_{i}^{\prime },\bar{x}_{i}^{\prime })) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; z_{Ri}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Ri}^{\prime \prime },\bar{x}_{i}^{\prime \prime })}{\sigma _{T}^{\prime }} \\ &lt;br /&gt;
 &amp;amp; z_{Li}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Li}^{\prime \prime },\bar{x}_{i}^{\prime \prime })}{\sigma _{T}^{\prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
=Confidence Intervals=&lt;br /&gt;
Using the same methodology as in previous sections, approximate confidence intervals can be derived and applied to all results of interest using the Fisher Matrix approach discussed in [[Appendix A: Brief Statistical Background|Appendix A]]. ALTA utilizes such intervals on all results. The formulas for such intervals are beyond the scope of this reference and are thus omitted. Interested readers can contact ReliaSoft for internal document ALTA-CBCD, detailing these derivations.&lt;br /&gt;
&lt;br /&gt;
=Notes on Trigonometric Functions=&lt;br /&gt;
Trigonometric functions are sometimes used in accelerated life tests; however, ALTA does not include them directly. A trigonometric stress function can be characterized by its frequency and magnitude, and these two quantities can then be treated as two constant stresses. The GLL model discussed in [[General Log-Linear Relationship]] can then be applied for modeling.&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Non-Parametric_Recurrent_Event_Data_Analysis&amp;diff=56777</id>
		<title>Non-Parametric Recurrent Event Data Analysis</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Non-Parametric_Recurrent_Event_Data_Analysis&amp;diff=56777"/>
		<updated>2014-11-18T17:39:44Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Confidence Limits for the MCF */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner Weibull Articles}}&lt;br /&gt;
&#039;&#039;This article appears in the [[Recurrent_Event_Data_Analysis#Non-Parametric_Recurrent_Event_Data_Analysis|Life Data Analysis Reference book]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
Non-parametric RDA provides a non-parametric graphical estimate of the mean cumulative number or cost of recurrence per unit versus age. As discussed in Nelson [[Appendix:_Life_Data_Analysis_References|[31]]], in the reliability field, the Mean Cumulative Function (MCF) can be used to: &lt;br /&gt;
&lt;br /&gt;
:*Evaluate whether the population repair (or cost) rate increases or decreases with age (this is useful for product retirement and burn-in decisions). &lt;br /&gt;
:*Estimate the average number or cost of repairs per unit during warranty or some time period. &lt;br /&gt;
:*Compare two or more sets of data from different designs, production periods, maintenance policies, environments, operating conditions, etc. &lt;br /&gt;
:*Predict future numbers and costs of repairs, such as the expected number of failures next month, quarter, or year. &lt;br /&gt;
:*Reveal unexpected information and insight.&lt;br /&gt;
&lt;br /&gt;
== The Mean Cumulative Function (MCF)  ==&lt;br /&gt;
In a non-parametric analysis of recurrent event data, each population unit can be described by a cumulative history function for the cumulative number of recurrences. It is a staircase function that depicts the cumulative number of recurrences of a particular event, such as repairs over time. The figure below depicts a unit&#039;s cumulative history function. &lt;br /&gt;
&lt;br /&gt;
[[Image:Lda11.1.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
The non-parametric model for a population of units is described as the population of cumulative history functions (curves). It is the population of all staircase functions of every unit in the population. At age t, the units have a distribution of their cumulative number of events. That is, a fraction of the population has accumulated 0 recurrences, another fraction has accumulated 1 recurrence, another fraction has accumulated 2 recurrences, etc. This distribution differs at different ages &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, and has a mean &amp;lt;math&amp;gt;M(t)\,\!&amp;lt;/math&amp;gt; called the mean cumulative function (MCF). The &amp;lt;math&amp;gt;M(t)\,\!&amp;lt;/math&amp;gt; is the point-wise average of all population cumulative history functions (see figure below). &lt;br /&gt;
&lt;br /&gt;
[[Image:Lda11.2.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
For the case of uncensored data, the mean cumulative function values &amp;lt;math&amp;gt;M({{t}_{i}})\,\!&amp;lt;/math&amp;gt; at different recurrence ages &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; are estimated by calculating the average of the cumulative number of recurrences of events for each unit in the population at &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;. When the histories are censored, the following steps are applied. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1st Step - Order all ages:&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Order all recurrence and censoring ages from smallest to largest. If a recurrence age for a unit is the same as its censoring (suspension) age, then the recurrence age goes first. If multiple units have a common recurrence or censoring age, then these units could be put in a certain order or be sorted randomly. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2nd Step - Calculate the number, &amp;lt;math&amp;gt;{{r}_{i}}\,\!&amp;lt;/math&amp;gt;, of units that passed through age &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;&amp;amp;nbsp;:&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{r}_{i}}= &amp;amp; {{r}_{i-1}}\quad \quad \text{if }{{t}_{i}}\text{ is a recurrence age} \\ &lt;br /&gt;
 &amp;amp; {{r}_{i}}= &amp;amp; {{r}_{i-1}}-1\text{   if }{{t}_{i}}\text{ is a censoring age}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the total number of units, and &amp;lt;math&amp;gt;{{r}_{1}} = N\,\!&amp;lt;/math&amp;gt; at the first observed age, which could be either a recurrence or a suspension. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3rd Step - Calculate the MCF estimate, M*(t):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
For each sample recurrence age &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;, calculate the mean cumulative function estimate as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{M}^{*}}({{t}_{i}})=\frac{1}{{{r}_{i}}}+{{M}^{*}}({{t}_{i-1}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{M}^{*}}({{t}_{1}})=\tfrac{1}{{{r}_{1}}}\,\!&amp;lt;/math&amp;gt; at the earliest observed recurrence age, &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
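The three steps above can be sketched in code. The histories below (sorted recurrence ages plus a censoring age per unit) are hypothetical, and this is only a minimal illustration of the estimator, not the implementation in Weibull++:

```python
# Hypothetical histories: (sorted recurrence ages, censoring age) per unit
units = [
    ([5.0, 12.0], 20.0),
    ([8.0], 15.0),
    ([], 18.0),
]

def mcf(units):
    """Non-parametric MCF estimate: order all ages (a recurrence sorts before
    a tied censoring), track the number at risk r_i, and add 1/r_i at each
    recurrence age."""
    events = []
    for recurrences, censoring_age in units:
        events += [(t, 0) for t in recurrences]  # kind 0 sorts first at ties
        events.append((censoring_age, 1))
    events.sort()
    r = len(units)            # r_1 = N at the first observed age
    m, estimates = 0.0, []
    for age, kind in events:
        if kind == 0:         # recurrence: M*(t_i) = M*(t_{i-1}) + 1/r_i
            m += 1.0 / r
            estimates.append((age, m))
        else:                 # censoring: one fewer unit at risk afterwards
            r -= 1
    return estimates
```

For these three units the recurrence ages are 5, 8 and 12, each with all three units still at risk, so the MCF estimates are 1/3, 2/3 and 1.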
===Confidence Limits for the MCF===&lt;br /&gt;
Upper and lower confidence limits for &amp;lt;math&amp;gt;M({{t}_{i}})\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{M}_{U}}({{t}_{i}})= {{M}^{*}}({{t}_{i}})\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var[{{M}^{*}}({{t}_{i}})]}}{{{M}^{*}}({{t}_{i}})}}} \\ &lt;br /&gt;
 &amp;amp; {{M}_{L}}({{t}_{i}})=  \frac{{{M}^{*}}({{t}_{i}})}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var[{{M}^{*}}({{t}_{i}})]}}{{{M}^{*}}({{t}_{i}})}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt; ( &amp;lt;math&amp;gt;50%&amp;lt;\alpha &amp;lt;100%\,\!&amp;lt;/math&amp;gt; ) is the confidence level, &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt; standard normal percentile and &amp;lt;math&amp;gt;Var[{{M}^{*}}({{t}_{i}})]\,\!&amp;lt;/math&amp;gt; is the variance of the MCF estimate at recurrence age &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;. The variance is calculated as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Var[{{M}^{*}}({{t}_{i}})]=Var[{{M}^{*}}({{t}_{i-1}})]+\frac{1}{r_{i}^{2}}\left[ \underset{j\in {{R}_{i}}}{\overset{}{\mathop \sum }}\,{{\left( {{d}_{ji}}-\frac{1}{{{r}_{i}}} \right)}^{2}} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{r}_{i}\,\!&amp;lt;/math&amp;gt; is defined in the equation of the survivals, &amp;lt;math&amp;gt;{{R}_{i}}\,\!&amp;lt;/math&amp;gt; is the set of the units that have not been suspended by age &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{d}_{ji}}\,\!&amp;lt;/math&amp;gt; is defined as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{d}_{ji}}= 1\text{  if the }{{j}^{\text{th }}}\text{unit had an event recurrence at age }{{t}_{i}} \\ &lt;br /&gt;
 &amp;amp; {{d}_{ji}}=  0\text{  if the }{{j}^{\text{th }}}\text{unit did not have an event reoccur at age }{{t}_{i}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If multiple events occur at the same time &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{d}_{ji}}\,\!&amp;lt;/math&amp;gt; is calculated sequentially for each event. For each event, only one &amp;lt;math&amp;gt;{{d}_{ji}}\,\!&amp;lt;/math&amp;gt; can take the value of 1. Once all the events at &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; have been calculated, the final MCF and its variance are the values for time &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;. This is illustrated in the following example.&lt;br /&gt;
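The sequential handling of tied events can be sketched in Python (a toy illustration, not ReliaSoft code; it assumes no suspensions occur before the last event, so the at-risk count stays constant):

```python
def mcf_with_ties(event_ages, n_units):
    """Each tied event at an age adds 1/r_i in turn; only the value after
    the last event at that age is reported for t_i."""
    mcf, out = 0.0, {}
    for t in sorted(event_ages):
        mcf += 1.0 / n_units   # exactly one d_ji equals 1 for each event
        out[t] = mcf           # later ties overwrite earlier partial values
    return out
```

For ages [5, 5, 8] over 4 units, the two events at age 5 are processed one after the other, and only the final value 0.5 is reported at t = 5.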
&lt;br /&gt;
==Example: Mean Cumulative Function==&lt;br /&gt;
&lt;br /&gt;
{{:Non_Parametric_RDA_MCF_Example}}&lt;br /&gt;
&lt;br /&gt;
{{:Non-Parametric_RDA_Transmission_Example}}&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Non-Parametric_Recurrent_Event_Data_Analysis&amp;diff=56776</id>
		<title>Non-Parametric Recurrent Event Data Analysis</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Non-Parametric_Recurrent_Event_Data_Analysis&amp;diff=56776"/>
		<updated>2014-11-18T17:28:52Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Confidence Limits for the MCF */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner Weibull Articles}}&lt;br /&gt;
&#039;&#039;This article appears in the [[Recurrent_Event_Data_Analysis#Non-Parametric_Recurrent_Event_Data_Analysis|Life Data Analysis Reference book]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
Non-parametric RDA provides a non-parametric graphical estimate of the mean cumulative number or cost of recurrence per unit versus age. As discussed in Nelson [[Appendix:_Life_Data_Analysis_References|[31]]], in the reliability field, the Mean Cumulative Function (MCF) can be used to: &lt;br /&gt;
&lt;br /&gt;
:*Evaluate whether the population repair (or cost) rate increases or decreases with age (this is useful for product retirement and burn-in decisions). &lt;br /&gt;
:*Estimate the average number or cost of repairs per unit during warranty or some time period. &lt;br /&gt;
:*Compare two or more sets of data from different designs, production periods, maintenance policies, environments, operating conditions, etc. &lt;br /&gt;
:*Predict future numbers and costs of repairs, such as the expected number of failures next month, quarter, or year. &lt;br /&gt;
:*Reveal unexpected information and insight.&lt;br /&gt;
&lt;br /&gt;
== The Mean Cumulative Function (MCF)  ==&lt;br /&gt;
In a non-parametric analysis of recurrent event data, each population unit can be described by a cumulative history function for the cumulative number of recurrences. It is a staircase function that depicts the cumulative number of recurrences of a particular event, such as repairs over time. The figure below depicts a unit&#039;s cumulative history function. &lt;br /&gt;
&lt;br /&gt;
[[Image:Lda11.1.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
The non-parametric model for a population of units is described as the population of cumulative history functions (curves). It is the population of all staircase functions of every unit in the population. At age t, the units have a distribution of their cumulative number of events. That is, a fraction of the population has accumulated 0 recurrences, another fraction has accumulated 1 recurrence, another fraction has accumulated 2 recurrences, etc. This distribution differs at different ages &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, and has a mean &amp;lt;math&amp;gt;M(t)\,\!&amp;lt;/math&amp;gt; called the mean cumulative function (MCF). The &amp;lt;math&amp;gt;M(t)\,\!&amp;lt;/math&amp;gt; is the point-wise average of all population cumulative history functions (see figure below). &lt;br /&gt;
&lt;br /&gt;
[[Image:Lda11.2.png|center|400px]] &lt;br /&gt;
&lt;br /&gt;
For the case of uncensored data, the mean cumulative function &amp;lt;math&amp;gt;M({{t}_{i}})\,\!&amp;lt;/math&amp;gt; values at different recurrence ages &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; are estimated by calculating the average of the cumulative number of recurrences of events for each unit in the population at &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;. When the histories are censored, the following steps are applied. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1st Step - Order all ages:&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Order all recurrence and censoring ages from smallest to largest. If a recurrence age for a unit is the same as its censoring (suspension) age, then the recurrence age goes first. If multiple units have a common recurrence or censoring age, then these units could be put in a certain order or be sorted randomly. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2nd Step - Calculate the number, &amp;lt;math&amp;gt;{{r}_{i}}\,\!&amp;lt;/math&amp;gt;, of units that passed through age &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;&amp;amp;nbsp;:&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{r}_{i}}= &amp;amp; {{r}_{i-1}}\quad \quad \text{if }{{t}_{i}}\text{ is a recurrence age} \\ &lt;br /&gt;
 &amp;amp; {{r}_{i}}= &amp;amp; {{r}_{i-1}}-1\text{   if }{{t}_{i}}\text{ is a censoring age}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the total number of units and &amp;lt;math&amp;gt;{{r}_{1}} = N\,\!&amp;lt;/math&amp;gt; at the first observed age which could be a recurrence or suspension. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3rd Step - Calculate the MCF estimate, M*(t):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
For each sample recurrence age &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;, calculate the mean cumulative function estimate as follows &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{M}^{*}}({{t}_{i}})=\frac{1}{{{r}_{i}}}+{{M}^{*}}({{t}_{i-1}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{M}^{*}}({{t}_{1}})=\tfrac{1}{{{r}_{1}}}\,\!&amp;lt;/math&amp;gt; at the earliest observed recurrence age, &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
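The three steps above can be sketched in Python (a minimal illustration, not ReliaSoft code; each unit history is assumed to be a list of recurrence ages plus one suspension age, with the tie rule from step 1 applied):

```python
def mcf_estimate(histories):
    """Non-parametric MCF estimate. histories maps a unit id to
    (list of recurrence ages, suspension age)."""
    events = []  # (age, kind): kind 0 = recurrence, 1 = suspension,
                 # so recurrences sort ahead of suspensions at tied ages (step 1)
    for recurrences, suspension in histories.values():
        events += [(t, 0) for t in recurrences]
        events.append((suspension, 1))
    events.sort()

    r = len(histories)   # step 2: r_1 = N at the first observed age
    mcf, out = 0.0, []
    for age, kind in events:
        if kind == 0:    # recurrence: r_i = r_(i-1); MCF grows by 1/r_i (step 3)
            mcf += 1.0 / r
            out.append((age, mcf))
        else:            # suspension: r_i = r_(i-1) - 1
            r -= 1
    return out
```

For two units with recurrences at {5} and {7, 9} and suspensions at 10 and 12, this returns [(5, 0.5), (7, 1.0), (9, 1.5)].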
&lt;br /&gt;
===Confidence Limits for the MCF===&lt;br /&gt;
Upper and lower confidence limits for &amp;lt;math&amp;gt;M({{t}_{i}})\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{M}_{U}}({{t}_{i}})= {{M}^{*}}({{t}_{i}})\cdot {{e}^{\tfrac{{{K}_{\alpha }}\cdot \sqrt{Var[{{M}^{*}}({{t}_{i}})]}}{{{M}^{*}}({{t}_{i}})}}} \\ &lt;br /&gt;
 &amp;amp; {{M}_{L}}({{t}_{i}})=  \frac{{{M}^{*}}({{t}_{i}})}{{{e}^{\tfrac{{{K}_{\alpha }}\cdot \sqrt{Var[{{M}^{*}}({{t}_{i}})]}}{{{M}^{*}}({{t}_{i}})}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt; ( &amp;lt;math&amp;gt;50%&amp;lt;\alpha &amp;lt;100%\,\!&amp;lt;/math&amp;gt; ) is the confidence level, &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt; standard normal percentile and &amp;lt;math&amp;gt;Var[{{M}^{*}}({{t}_{i}})]\,\!&amp;lt;/math&amp;gt; is the variance of the MCF estimate at recurrence age &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;. The variance is calculated as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Var[{{M}^{*}}({{t}_{i}})]=Var[{{M}^{*}}({{t}_{i-1}})]+\frac{1}{r_{i}^{2}}\left[ \underset{j\in {{R}_{i}}}{\overset{}{\mathop \sum }}\,{{\left( {{d}_{ji}}-\frac{1}{{{r}_{i}}} \right)}^{2}} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{r}_{i}\,\!&amp;lt;/math&amp;gt; is defined in the equation of the survivals, &amp;lt;math&amp;gt;{{R}_{i}}\,\!&amp;lt;/math&amp;gt; is the set of the units that have not been suspended by age &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{d}_{ji}}\,\!&amp;lt;/math&amp;gt; is defined as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{d}_{ji}}= 1\text{  if the }{{j}^{\text{th }}}\text{unit had an event recurrence at age }{{t}_{i}} \\ &lt;br /&gt;
 &amp;amp; {{d}_{ji}}=  0\text{  if the }{{j}^{\text{th }}}\text{unit did not have an event reoccur at age }{{t}_{i}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If multiple events occur at the same time &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{d}_{ji}}\,\!&amp;lt;/math&amp;gt; is calculated sequentially for each event. For each event, only one &amp;lt;math&amp;gt;{{d}_{ji}}\,\!&amp;lt;/math&amp;gt; can take the value of 1. Once all the events at &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; have been calculated, the final MCF and its variance are the values for time &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt;. This is illustrated in the following example.&lt;br /&gt;
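Under the simplifying assumption of one event per recurrence age (so exactly one d_ji equals 1 and the remaining r_i - 1 at-risk units each contribute a squared deviation of 1/r_i squared), the variance recursion and the confidence limits above can be sketched as follows. Taking K_alpha as the alpha standard normal percentile follows the definition in the text:

```python
import math
from statistics import NormalDist

def mcf_bounds(mcf, r, alpha=0.95):
    """mcf: the M*(t_i) values; r: r_i at each recurrence age;
    alpha: the confidence level, with K_alpha = alpha normal percentile."""
    k = NormalDist().inv_cdf(alpha)
    var, out = 0.0, []
    for m, ri in zip(mcf, r):
        # one unit has d_ji = 1; the other ri - 1 at-risk units have d_ji = 0
        var += ((1 - 1/ri)**2 + (ri - 1)/ri**2) / ri**2
        w = math.exp(k * math.sqrt(var) / m)
        out.append((m / w, m * w))   # (M_L(t_i), M_U(t_i))
    return out
```

Because the limits multiply and divide the estimate by the same factor, they are symmetric about M*(t_i) on a log scale.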
&lt;br /&gt;
==Example: Mean Cumulative Function==&lt;br /&gt;
&lt;br /&gt;
{{:Non_Parametric_RDA_MCF_Example}}&lt;br /&gt;
&lt;br /&gt;
{{:Non-Parametric_RDA_Transmission_Example}}&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=ALTA_Test_Plan_Example&amp;diff=56775</id>
		<title>ALTA Test Plan Example</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=ALTA_Test_Plan_Example&amp;diff=56775"/>
		<updated>2014-11-17T17:11:12Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner_ALTA_Examples}}&lt;br /&gt;
&#039;&#039;This example appears in the [[Accelerated_Life_Test_Plans#Test Plans for a Single Stress Type|Accelerated Life Testing Data Analysis Reference book]]&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
A reliability engineer is planning an accelerated test for a mechanical component. Torque is the only factor in the test. The purpose of the experiment is to estimate the B10 life (the time at which unreliability = 0.1) of the component. The reliability engineer wants to use a 2 Level Statistically Optimum Plan because it would require fewer test chambers than a 3 level test plan. 40 units are available for the test. The mechanical component is assumed to follow a Weibull distribution with beta = 3.5, and a power model is assumed for the life-stress relationship. The test is planned to last for 10,000 cycles. The engineer has estimated that there is a 0.06% probability that a unit will fail by 10,000 cycles at the use stress level of 60 N · m. The highest level allowed in the test is 120 N · m and a unit is estimated to fail with a probability of 99.999% at 120 N · m. The following setup shows the test plan in ALTA.&lt;br /&gt;
&lt;br /&gt;
[[Image:1testplan.png|center|650px|Test plan setup for a single stress test.]]&lt;br /&gt;
&lt;br /&gt;
The Two Level Statistically Optimum Plan is shown next.&lt;br /&gt;
&lt;br /&gt;
[[Image:tpr.png|center|650px|The Two level Statistically Optimum Plan]]&lt;br /&gt;
&lt;br /&gt;
The Two Level Statistically Optimum Plan is to test 28.24 units at 95.39 N · m and 11.76 units at 120 N · m. The variance of the test at B10 is  &amp;lt;math&amp;gt;Var({{T}_{p}}=B10)=StdDev{{({{T}_{p}}=B10)}^{2}}={{14380}^{2}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Test Plan Evaluation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In addition to assessing &amp;lt;math&amp;gt;Var({{\hat{T}}_{p}})\,\!&amp;lt;/math&amp;gt;, the test plan can also be evaluated based on three different criteria: confidence level, bounds ratio or sample size. These criteria can be assessed before conducting the recommended test to decide whether the test plan is satisfactory or whether some modifications would be beneficial. We can solve for any one of the three criteria, given the other two. &lt;br /&gt;
&lt;br /&gt;
The bounds ratio is defined as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\text{Bounds Ratio}=\frac{\text{Two Sided Upper Bound on }{{T}_{p}}}{\text{Two Sided Lower Bound on }{{T}_{p}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This ratio is analogous to the ratio of the upper to lower confidence bounds that would be calculated from the life data if the test were actually conducted.&lt;br /&gt;
&lt;br /&gt;
For this example, assume that a 90% confidence is desired and 40 units are to be used in the test. The bounds ratio is calculated as 2.946345,  as shown next.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA13.13.gif|center|300px|Evaluating the test plan using a bounds ratio criterion.]]&lt;br /&gt;
&lt;br /&gt;
If this calculated bounds ratio is unsatisfactory, we can calculate the required number of units that would meet a certain bounds ratio criterion. For example, if a bounds ratio of 2 is desired, the required sample size is calculated as 97.210033, as shown next.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA13.14.gif|center|300px|Evaluating the test plan using a sample size criterion.]]&lt;br /&gt;
&lt;br /&gt;
If the sample size is kept at 40 units and a bounds ratio of 2 is desired, the equivalent confidence level we have in the test drops to 70.8629%, as shown next.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA13.15.gif|center|300px|Evaluating the test plan using a confidence level criterion.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Completed Theoretical Review]]&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Proportional_Hazards_Model&amp;diff=55900</id>
		<title>Proportional Hazards Model</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Proportional_Hazards_Model&amp;diff=55900"/>
		<updated>2014-06-17T22:32:12Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Non-Parametric Model Formulation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Navigation box}}&lt;br /&gt;
&#039;&#039;This article also appears in the [[Multivariable_Relationships:_General_Log-Linear_and_Proportional_Hazards|Accelerated Life Testing Data Analysis Reference]] book.&#039;&#039; &amp;lt;/noinclude&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Introduced by D. R. Cox, the Proportional Hazards (PH) model was developed in order to estimate the effects of different covariates influencing the times-to-failure of a system.&lt;br /&gt;
The model has been widely used in the biomedical field, as discussed in Leemis [[Appendix_E:_References|[22]]], and recently there has been increasing interest in its application in reliability engineering. In its original form, the model is non-parametric (i.e., no assumptions are made about the nature or shape of the underlying failure distribution). In this reference, the original non-parametric formulation as well as a parametric form of the model will be considered, utilizing a Weibull life distribution. In ALTA, the proportional hazards model is included in its parametric form and can be used to analyze data with up to eight variables. The GLL-Weibull and GLL-exponential models are actually special cases of the proportional hazards model. However, when using the proportional hazards model in ALTA, no transformation on the covariates (or stresses) can be performed.&lt;br /&gt;
&lt;br /&gt;
==Non-Parametric Model Formulation==&lt;br /&gt;
According to the PH model, the failure rate of a system is affected not only by its operation time, but also by the covariates under which it operates. For example, a unit may have been tested under a combination of different accelerated stresses such as humidity, temperature, voltage, etc. It is clear then that such factors affect the failure rate of a unit.&lt;br /&gt;
&lt;br /&gt;
The instantaneous failure rate (or hazard rate) of a unit is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (t)=\frac{f(t)}{R(t)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;f(t)\,\!&amp;lt;/math&amp;gt; is the probability density function.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;R(t)\,\!&amp;lt;/math&amp;gt; is the reliability function.&lt;br /&gt;
&lt;br /&gt;
Note that when the failure rate of a unit depends not only on time but also on other covariates, the above equation must be modified so that it is a function of both time and the covariates.&lt;br /&gt;
The proportional hazards model assumes that the failure rate (hazard rate) of a unit is the product of:&lt;br /&gt;
&lt;br /&gt;
*an arbitrary and unspecified baseline failure rate, &amp;lt;math&amp;gt;{{\lambda }_{0}}(t),\,\!&amp;lt;/math&amp;gt; which is a function of time only.&lt;br /&gt;
&lt;br /&gt;
*a positive function &amp;lt;math&amp;gt;g(\underline{X},\underline{A})\,\!&amp;lt;/math&amp;gt;, independent of time, which incorporates the effects of a number of covariates such as humidity, temperature, pressure, voltage, etc.&lt;br /&gt;
&lt;br /&gt;
The failure rate of a unit is then given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (t,\underline{X})={{\lambda }_{0}}(t)\cdot g(\underline{X},\underline{A})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\underline{X}\,\!&amp;lt;/math&amp;gt; is a row vector consisting of the covariates: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underline{X}=({{x}_{1}},{{x}_{2}},...,{{x}_{m}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
*&amp;lt;math&amp;gt;\underline{A}\,\!&amp;lt;/math&amp;gt; is a column vector consisting of the unknown parameters (also called regression parameters) of the model: &lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\underline{A}={{({{a}_{1}},{{a}_{2}},...{{a}_{m}})}^{T}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\quad \quad m\,\!&amp;lt;/math&amp;gt; = number of stress related variates (time-independent).&lt;br /&gt;
&lt;br /&gt;
It can be assumed that the form of &amp;lt;math&amp;gt;g(\underline{X},\underline{A})\,\!&amp;lt;/math&amp;gt; is known and &amp;lt;math&amp;gt;{{\lambda }_{0}}(t)\,\!&amp;lt;/math&amp;gt; is unspecified. Different forms of &amp;lt;math&amp;gt;g(\underline{X},\underline{A})\,\!&amp;lt;/math&amp;gt; can be used. &lt;br /&gt;
&lt;br /&gt;
However, the exponential form is mostly used due to its simplicity and is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;g(\underline{X},\underline{A})={{e}^{{{\underline{A}}^{T}}{{\underline{X}}^{T}}}}={{e}^{\mathop{\sum}_{j=1}^{m}{{a}_{j}}{{x}_{j}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The failure rate can then be written as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (t,\underline{X})={{\lambda }_{0}}(t)\cdot {{e}^{\mathop{\sum}_{j=1}^{m}{{a}_{j}}{{x}_{j}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
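The defining property of this form can be checked numerically in Python (the baseline hazard and coefficients below are invented for illustration): the ratio of the hazards for two covariate vectors does not depend on the age t, which is what makes the hazards proportional.

```python
import math

def ph_hazard(baseline, a, x, t):
    """PH failure rate: baseline hazard times exp(sum of a_j * x_j)."""
    return baseline(t) * math.exp(sum(aj * xj for aj, xj in zip(a, x)))

lam0 = lambda t: 0.002 * t            # hypothetical baseline failure rate
a = [0.5, -0.2]                       # hypothetical regression parameters
x1, x2 = [1.0, 2.0], [0.0, 0.0]

# the hazard ratio equals exp(a . (x1 - x2)) at every age t
r10 = ph_hazard(lam0, a, x1, 10) / ph_hazard(lam0, a, x2, 10)
r50 = ph_hazard(lam0, a, x1, 50) / ph_hazard(lam0, a, x2, 50)
```

The ratio is the same at t = 10 and t = 50 because the time-dependent baseline cancels.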
&lt;br /&gt;
==Parametric Model Formulation==&lt;br /&gt;
A parametric form of the proportional hazards model can be obtained by assuming an underlying distribution. In ALTA, the Weibull and exponential distributions are available.  In this section we will consider the Weibull distribution to formulate the parametric proportional hazards model.  In other words, it is assumed that the baseline failure rate is parametric and given by the Weibull distribution. In this case, the baseline failure rate is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\lambda }_{0}}(t)=\frac{\beta }{\eta }{{\left( \frac{t}{\eta } \right)}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The PH failure rate  then becomes: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (t,\underline{X})=\frac{\beta }{\eta }{{\left( \frac{t}{\eta } \right)}^{\beta -1}}\cdot {{e}^{\mathop{\sum}_{j=1}^{m}{{a}_{j}}{{x}_{j}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is often more convenient to define an additional covariate, &amp;lt;math&amp;gt;{{x}_{0}} = 1\,\!&amp;lt;/math&amp;gt;, in order to allow the Weibull scale parameter raised to the beta (shape parameter) to be included in the vector of regression coefficients. The PH failure rate can then be written as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (t,\underline{X})=\beta \cdot {{t}^{\beta -1}}\cdot {{e}^{\mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{j}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The PH reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  R(t,\underline{X})=\ {{e}^{-\int_{0}^{t}\lambda (u)du}} =\  {{e}^{-\int_{0}^{t}\lambda (u,\underline{X})du}} =\  {{e}^{-{{t}^{\beta }}\cdot {{e}^{\mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{j}}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
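A quick Python check of this form (with illustrative values only): in the special case of no covariates, choosing a_0 = -beta * ln(eta) recovers the ordinary Weibull reliability, which shows how the scale parameter is absorbed into the regression coefficients via x_0 = 1.

```python
import math

def ph_weibull_reliability(t, beta, a, x):
    """R(t, X) = exp(-t^beta * exp(sum over j of a_j x_j)), with x_0 = 1."""
    xs = [1.0] + list(x)              # prepend the defined covariate x_0 = 1
    s = sum(aj * xj for aj, xj in zip(a, xs))
    return math.exp(-t**beta * math.exp(s))

beta, eta = 2.0, 100.0
a0 = -beta * math.log(eta)            # absorbs eta^beta into the coefficients
r = ph_weibull_reliability(50.0, beta, [a0], [])
```

With these values r agrees with the plain Weibull reliability exp(-(t/eta)^beta).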
&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; can be obtained by taking the partial derivative of the reliability function with respect to time. The PH &#039;&#039;pdf&#039;&#039; is: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  f(t,\underline{X})= &amp;amp; \lambda (t,\underline{X})\cdot R(t,\underline{X}) =\  \beta \cdot {{t}^{\beta -1}}{{e}^{\left[ \mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{j}}-{{t}^{\beta }}\cdot {{e}^{\mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{j}}}} \right]}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The total number of unknowns to solve for in this model is &amp;lt;math&amp;gt;m+2\,\!&amp;lt;/math&amp;gt; (i.e., &amp;lt;math&amp;gt;\beta ,{{a}_{0}},{{a}_{1}},...{{a}_{m}}\,\!&amp;lt;/math&amp;gt;, since the scale parameter is absorbed into &amp;lt;math&amp;gt;{{a}_{0}}\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood estimation method can be used to determine these parameters. The log-likelihood function for this case is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \ln (L)= &amp;amp; \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left( \beta \cdot T_{i}^{\beta -1}{{e}^{-T_{i}^{\beta }\cdot {{e}^{\mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{i,j}}}}}}{{e}^{\mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{i,j}}}} \right) -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }{{\left( T_{i}^{\prime } \right)}^{\beta }}{{e}^{\mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{i,j}}}}+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }= &amp;amp; {{e}^{-T_{Li}^{\prime \prime \beta }{{e}^{\mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{i,j}}}}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }= &amp;amp; {{e}^{-T_{Ri}^{\prime \prime \beta }{{e}^{\mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{i,j}}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving for the parameters that maximize the log-likelihood function will yield the parameters for the PH-Weibull model. Note that for &amp;lt;math&amp;gt;\beta =1 \,\!&amp;lt;/math&amp;gt;, the log-likelihood function becomes the log-likelihood function for the PH-exponential model, which is similar to the original form of the proportional hazards model proposed by Cox and Oakes [[Appendix_E:_References|[39]]].&lt;br /&gt;
&lt;br /&gt;
Note that the likelihood function of the GLL model is very similar to the likelihood function for the proportional hazards-Weibull model. In particular, the shape parameter of the Weibull distribution can be included in the regression coefficients as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{a}_{i,PH}}=-\beta \cdot {{a}_{i,GLL}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{a}_{i,PH}}\,\!&amp;lt;/math&amp;gt; are the parameters of the PH model.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{a}_{i,GLL}}\,\!&amp;lt;/math&amp;gt; are the parameters of the general log-linear model.&lt;br /&gt;
&lt;br /&gt;
In this case, the likelihood functions are identical. Therefore, if no transformation on the covariates is performed, the parameter values that maximize the likelihood function of the GLL model also maximize the likelihood function for the proportional hazards-Weibull (PHW) model. Note that for &amp;lt;math&amp;gt;\beta = 1\,\!&amp;lt;/math&amp;gt; (exponential life distribution), the two likelihood functions are identical, and &amp;lt;math&amp;gt;{{a}_{i,PH}}=-{{a}_{i,GLL}}.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
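This equivalence can be verified numerically (a sketch with invented coefficient values, for m = 1): a GLL-Weibull model whose life parameter follows eta = exp(a_0 + a_1 x) yields the same reliability as the PHW form when each PH coefficient equals -beta times the corresponding GLL coefficient.

```python
import math

beta, a0_gll, a1_gll = 2.0, 5.0, -0.8   # invented values for illustration
x, t = 1.5, 40.0

# GLL-Weibull: the life (scale) parameter follows the log-linear model
eta = math.exp(a0_gll + a1_gll * x)
r_gll = math.exp(-(t / eta) ** beta)

# PHW with x_0 = 1 and a_PH = -beta * a_GLL gives the same reliability
a0_ph, a1_ph = -beta * a0_gll, -beta * a1_gll
r_phw = math.exp(-t**beta * math.exp(a0_ph + a1_ph * x))
```

Both expressions reduce to exp(-t^beta * eta^(-beta)), so the two reliabilities agree to floating-point precision.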
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;noinclude&amp;gt;=Indicator Variables=&lt;br /&gt;
Another advantage of the multivariable relationships used in ALTA is that they allow for simultaneous analysis of continuous and categorical variables. Categorical variables are variables that take on discrete values such as the lot designation for products from different manufacturing lots. In this example, lot is a categorical variable, and it can be expressed in terms of indicator variables. Indicator variables only take a value of 1 or 0. For example, consider a sample of test units. A number of these units were obtained from Lot 1, others from Lot 2, and the rest from Lot 3. These three lots can be represented with the use of indicator variables, as follows:&lt;br /&gt;
&lt;br /&gt;
*Define two indicator variables, &amp;lt;math&amp;gt;{{X}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{2}}.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*For the units from Lot 1, &amp;lt;math&amp;gt;{{X}_{1}}=1,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{2}}=0.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*For the units from Lot 2, &amp;lt;math&amp;gt;{{X}_{1}}=0,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{2}}=1.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*For the units from Lot 3, &amp;lt;math&amp;gt;{{X}_{1}}=0,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{2}}=0.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Assume that an accelerated test was performed with these units, and temperature was the accelerated stress. In this case, the [[General_Log-Linear_Relationship|GLL relationship]] can be used to analyze the data. From this relationship we get:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(\underline{X})={{e}^{{{\alpha }_{0}}+{{\alpha }_{1}}{{X}_{1}}+{{\alpha }_{2}}{{X}_{2}}+{{\alpha }_{3}}{{X}_{3}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{X}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{2}}\,\!&amp;lt;/math&amp;gt; are the indicator variables, as defined above.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{X}_{3}}=\tfrac{1}{T},\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is the temperature.&lt;br /&gt;
&lt;br /&gt;
The data can now be entered in ALTA and, with the assumption of an underlying life distribution and using MLE, the parameters of this model can be obtained.&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Proportional_Hazards_Model&amp;diff=55899</id>
		<title>Proportional Hazards Model</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Proportional_Hazards_Model&amp;diff=55899"/>
		<updated>2014-06-17T22:31:16Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Non-Parametric Model Formulation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Navigation box}}&lt;br /&gt;
&#039;&#039;This article also appears in the [[Multivariable_Relationships:_General_Log-Linear_and_Proportional_Hazards|Accelerated Life Testing Data Analysis Reference]] book.&#039;&#039; &amp;lt;/noinclude&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Introduced by D. R. Cox, the Proportional Hazards (PH) model was developed in order to estimate the effects of different covariates influencing the times-to-failure of a system.&lt;br /&gt;
The model has been widely used in the biomedical field, as discussed in Leemis [[Appendix_E:_References|[22]]], and recently there has been increasing interest in its application in reliability engineering. In its original form, the model is non-parametric (i.e., no assumptions are made about the nature or shape of the underlying failure distribution). In this reference, the original non-parametric formulation as well as a parametric form of the model will be considered, utilizing a Weibull life distribution. In ALTA, the proportional hazards model is included in its parametric form and can be used to analyze data with up to eight variables. The GLL-Weibull and GLL-exponential models are actually special cases of the proportional hazards model. However, when using the proportional hazards model in ALTA, no transformation on the covariates (or stresses) can be performed.&lt;br /&gt;
&lt;br /&gt;
==Non-Parametric Model Formulation==&lt;br /&gt;
According to the PH model, the failure rate of a system is affected not only by its operation time, but also by the covariates under which it operates. For example, a unit may have been tested under a combination of different accelerated stresses such as humidity, temperature, voltage, etc. It is clear then that such factors affect the failure rate of a unit.&lt;br /&gt;
&lt;br /&gt;
The instantaneous failure rate (or hazard rate) of a unit is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (t)=\frac{f(t)}{R(t)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;f(t)\,\!&amp;lt;/math&amp;gt; is the probability density function.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;R(t)\,\!&amp;lt;/math&amp;gt; is the reliability function.&lt;br /&gt;
&lt;br /&gt;
Note that when the failure rate of a unit depends not only on time but also on other covariates, the above equation must be modified so that it is a function of both time and the covariates.&lt;br /&gt;
The proportional hazards model assumes that the failure rate (hazard rate) of a unit is the product of:&lt;br /&gt;
&lt;br /&gt;
*an arbitrary and unspecified baseline failure rate, &amp;lt;math&amp;gt;{{\lambda }_{0}}(t),\,\!&amp;lt;/math&amp;gt; which is a function of time only.&lt;br /&gt;
&lt;br /&gt;
*a positive function &amp;lt;math&amp;gt;g(\underline{X},\underline{A})\,\!&amp;lt;/math&amp;gt;, independent of time, which incorporates the effects of a number of covariates such as humidity, temperature, pressure, voltage, etc.&lt;br /&gt;
&lt;br /&gt;
The failure rate of a unit is then given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (t,\underline{X})={{\lambda }_{0}}(t)\cdot g(\underline{X},\underline{A})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\underline{X}\,\!&amp;lt;/math&amp;gt; is a row vector consisting of the covariates: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underline{X}=({{x}_{1}},{{x}_{2}},...,{{x}_{m}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
*&amp;lt;math&amp;gt;\underline{A}\,\!&amp;lt;/math&amp;gt; is a column vector consisting of the unknown parameters (also called regression parameters) of the model: &lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\underline{A}={{({{a}_{1}},{{a}_{2}},...{{a}_{m}})}^{T}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
:where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\quad \quad m\,\!&amp;lt;/math&amp;gt; = number of stress related variates (time-independent).&lt;br /&gt;
&lt;br /&gt;
It can be assumed that the form of &amp;lt;math&amp;gt;g(\underline{X},\underline{A})\,\!&amp;lt;/math&amp;gt; is known and &amp;lt;math&amp;gt;{{\lambda }_{0}}(t)\,\!&amp;lt;/math&amp;gt; is unspecified. Different forms of &amp;lt;math&amp;gt;g(\underline{X},\underline{A})\,\!&amp;lt;/math&amp;gt; can be used. &lt;br /&gt;
&lt;br /&gt;
However, the exponential form is most commonly used, due to its simplicity, and is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;g(\underline{X},\underline{A})={{e}^{{{\underline{A}}^{T}}{{\underline{X}}^{T}}}}={{e}^{\mathop{\sum}_{j=1}^{m}{{a}_{j}}{{x}_{j}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The failure rate can then be written as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (t,\underline{X})={{\lambda }_{0}}(t)\cdot {{e}^{\mathop{\sum}_{j=1}^{m}{{a}_{j}}{{x}_{j}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
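As a numerical sketch of the relationship above (not ALTA's implementation; the constant baseline hazard and the coefficient values below are arbitrary assumptions for illustration):&lt;br /&gt;

```python
import math

def ph_hazard(t, x, a, baseline):
    """Proportional hazards rate: lambda(t, X) = lambda_0(t) * exp(sum_j a_j * x_j)."""
    return baseline(t) * math.exp(sum(aj * xj for aj, xj in zip(a, x)))

# Arbitrary example: constant baseline hazard of 0.01 failures/hour and
# two covariates (e.g., transformed temperature and voltage stresses).
rate = ph_hazard(100.0, x=[0.5, 1.2], a=[0.3, -0.1], baseline=lambda t: 0.01)
```

Note that changing a covariate scales the hazard by the same factor at every time, which is exactly the proportionality assumption of the model.&lt;br /&gt;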
==Parametric Model Formulation==&lt;br /&gt;
A parametric form of the proportional hazards model can be obtained by assuming an underlying distribution. In ALTA, the Weibull and exponential distributions are available.  In this section we will consider the Weibull distribution to formulate the parametric proportional hazards model.  In other words, it is assumed that the baseline failure rate is parametric and given by the Weibull distribution. In this case, the baseline failure rate is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\lambda }_{0}}(t)=\frac{\beta }{\eta }{{\left( \frac{t}{\eta } \right)}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The PH failure rate  then becomes: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (t,\underline{X})=\frac{\beta }{\eta }{{\left( \frac{t}{\eta } \right)}^{\beta -1}}\cdot {{e}^{\mathop{\sum}_{j=1}^{m}{{a}_{j}}{{x}_{j}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is often more convenient to define an additional covariate, &amp;lt;math&amp;gt;{{x}_{0}} = 1\,\!&amp;lt;/math&amp;gt;, in order to allow the Weibull scale parameter raised to the beta (shape parameter) to be included in the vector of regression coefficients. The PH failure rate can then be written as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (t,\underline{X})=\beta \cdot {{t}^{\beta -1}}\cdot {{e}^{\mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{j}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The PH reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  R(t,\underline{X})=\ {{e}^{-\int_{0}^{t}\lambda (u,\underline{X})du}} =\  {{e}^{-{{t}^{\beta }}\cdot {{e}^{\mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{j}}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; can be obtained by taking the negative of the partial derivative of the reliability function with respect to time. The PH &#039;&#039;pdf&#039;&#039; is: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  f(t,\underline{X})= &amp;amp; \lambda (t,\underline{X})\cdot R(t,\underline{X}) =\  \beta \cdot {{t}^{\beta -1}}{{e}^{\left[ \mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{j}}-{{t}^{\beta }}\cdot {{e}^{\mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{j}}}} \right]}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
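The three PH-Weibull functions above can be checked numerically; the sketch below (with arbitrary assumed coefficients, not values from any fitted model) confirms that the &#039;&#039;pdf&#039;&#039; is the product of the hazard and the reliability:&lt;br /&gt;

```python
import math

def ph_weibull(t, x, a, beta):
    """PH-Weibull hazard, reliability, and pdf with x_0 = 1, so a[0] is the
    intercept and a[1:] pairs with the covariates in x."""
    s = a[0] + sum(aj * xj for aj, xj in zip(a[1:], x))  # sum_{j=0}^{m} a_j x_j
    hazard = beta * t ** (beta - 1) * math.exp(s)
    reliability = math.exp(-(t ** beta) * math.exp(s))
    return hazard, reliability, hazard * reliability  # pdf = hazard * reliability

h, r, f = ph_weibull(2.0, x=[0.5], a=[-3.0, 0.4], beta=1.5)
```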
The total number of unknowns to solve for in this model is &amp;lt;math&amp;gt;m+2\,\!&amp;lt;/math&amp;gt; (i.e., &amp;lt;math&amp;gt;\beta ,{{a}_{0}},{{a}_{1}},...{{a}_{m}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;{{a}_{0}}\,\!&amp;lt;/math&amp;gt; absorbs the scale parameter &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood estimation method can be used to determine these parameters. The log-likelihood function for this case is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \ln (L)= &amp;amp; \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left( \beta \cdot T_{i}^{\beta -1}{{e}^{-T_{i}^{\beta }\cdot {{e}^{\mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{i,j}}}}}}{{e}^{\mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{i,j}}}} \right) -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }{{\left( T_{i}^{\prime } \right)}^{\beta }}{{e}^{\mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{i,j}}}}+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }= &amp;amp; {{e}^{-{{\left( T_{Li}^{\prime \prime } \right)}^{\beta }}{{e}^{\mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{i,j}}}}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }= &amp;amp; {{e}^{-{{\left( T_{Ri}^{\prime \prime } \right)}^{\beta }}{{e}^{\mathop{\sum}_{j=0}^{m}{{a}_{j}}{{x}_{i,j}}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving for the parameters that maximize the log-likelihood function will yield the parameters for the PH-Weibull model. Note that for &amp;lt;math&amp;gt;\beta =1 \,\!&amp;lt;/math&amp;gt;, the log-likelihood function becomes the log-likelihood function for the PH-exponential model, which is similar to the original form of the proportional hazards model proposed by Cox and Oakes [[Appendix_E:_References|[39]]].&lt;br /&gt;
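The maximization can be illustrated with a minimal sketch for the PH-exponential case (&amp;lt;math&amp;gt;\beta =1\,\!&amp;lt;/math&amp;gt;): a plain gradient ascent on the log-likelihood for complete data with one covariate. This is not ALTA's solver; the synthetic data, step size, and iteration count are all assumptions chosen so the example converges:&lt;br /&gt;

```python
import math

def ph_exponential_mle(times, xs, steps=20000, lr=1e-3):
    """Fit a0, a1 in lambda(X) = exp(a0 + a1*x) from complete failure data by
    gradient ascent on ln L = sum_i [(a0 + a1*x_i) - t_i * exp(a0 + a1*x_i)],
    the PH-exponential (beta = 1) log-likelihood (concave in a0, a1)."""
    a0 = a1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for t, x in zip(times, xs):
            lam = math.exp(a0 + a1 * x)
            g0 += 1.0 - t * lam          # d(ln L)/d a0
            g1 += x * (1.0 - t * lam)    # d(ln L)/d a1
        a0 += lr * g0
        a1 += lr * g1
    return a0, a1

# Synthetic data chosen so the exact MLE is a0 = 0, a1 = 1:
# at x = 0 the sample mean life is 1; at x = 1 it is exp(-1).
times = [1.0] * 5 + [math.exp(-1)] * 5
xs = [0.0] * 5 + [1.0] * 5
a0_hat, a1_hat = ph_exponential_mle(times, xs)
```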
&lt;br /&gt;
Note that the likelihood function of the GLL model is very similar to the likelihood function for the proportional hazards-Weibull model. In particular, the shape parameter of the Weibull distribution can be included in the regression coefficients as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{a}_{i,PH}}=-\beta \cdot {{a}_{i,GLL}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{a}_{i,PH}}\,\!&amp;lt;/math&amp;gt; are the parameters of the PH model.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{a}_{i,GLL}}\,\!&amp;lt;/math&amp;gt; are the parameters of the general log-linear model.&lt;br /&gt;
&lt;br /&gt;
In this case, the likelihood functions are identical. Therefore, if no transformation on the covariates is performed, the parameter values that maximize the likelihood function of the GLL model also maximize the likelihood function for the proportional hazards-Weibull (PHW) model. Note that for &amp;lt;math&amp;gt;\beta = 1\,\!&amp;lt;/math&amp;gt; (exponential life distribution), the two likelihood functions are identical, and &amp;lt;math&amp;gt;{{a}_{i,PH}}=-{{a}_{i,GLL}}.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
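This parameter relationship can be verified numerically. In the sketch below, the coefficient and stress values are arbitrary assumptions used only to check that the GLL-Weibull and PH-Weibull reliability functions coincide when &amp;lt;math&amp;gt;{{a}_{i,PH}}=-\beta \cdot {{a}_{i,GLL}}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import math

def gll_weibull_reliability(t, x, a_gll, beta):
    """Weibull reliability with GLL life characteristic eta(X) = exp(sum a_j x_j), x_0 = 1."""
    eta = math.exp(a_gll[0] + sum(aj * xj for aj, xj in zip(a_gll[1:], x)))
    return math.exp(-((t / eta) ** beta))

def phw_reliability(t, x, a_ph, beta):
    """PH-Weibull reliability R = exp(-t^beta * exp(sum a_j x_j)), x_0 = 1."""
    s = a_ph[0] + sum(aj * xj for aj, xj in zip(a_ph[1:], x))
    return math.exp(-(t ** beta) * math.exp(s))

beta = 2.0
a_gll = [3.0, -0.5]                  # arbitrary GLL coefficients (intercept first)
a_ph = [-beta * a for a in a_gll]    # a_PH = -beta * a_GLL
r_gll = gll_weibull_reliability(5.0, [1.8], a_gll, beta)
r_phw = phw_reliability(5.0, [1.8], a_ph, beta)
```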
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;noinclude&amp;gt;=Indicator Variables=&lt;br /&gt;
Another advantage of the multivariable relationships used in ALTA is that they allow for simultaneous analysis of continuous and categorical variables. Categorical variables are variables that take on discrete values such as the lot designation for products from different manufacturing lots. In this example, lot is a categorical variable, and it can be expressed in terms of indicator variables. Indicator variables only take a value of 1 or 0. For example, consider a sample of test units. A number of these units were obtained from Lot 1, others from Lot 2, and the rest from Lot 3. These three lots can be represented with the use of indicator variables, as follows:&lt;br /&gt;
&lt;br /&gt;
*Define two indicator variables, &amp;lt;math&amp;gt;{{X}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{2}}.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*For the units from Lot 1, &amp;lt;math&amp;gt;{{X}_{1}}=1,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{2}}=0.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*For the units from Lot 2, &amp;lt;math&amp;gt;{{X}_{1}}=0,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{2}}=1.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*For the units from Lot 3, &amp;lt;math&amp;gt;{{X}_{1}}=0,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{2}}=0.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Assume that an accelerated test was performed with these units, and temperature was the accelerated stress. In this case, the [[General_Log-Linear_Relationship|GLL relationship]] can be used to analyze the data. From this relationship we get:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(\underline{X})={{e}^{{{\alpha }_{0}}+{{\alpha }_{1}}{{X}_{1}}+{{\alpha }_{2}}{{X}_{2}}+{{\alpha }_{3}}{{X}_{3}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{X}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{2}}\,\!&amp;lt;/math&amp;gt; are the indicator variables, as defined above.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{X}_{3}}=\tfrac{1}{T},\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is the temperature.&lt;br /&gt;
&lt;br /&gt;
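The indicator-variable scheme above can be sketched in code. Only the encoding itself comes from the text; the coefficient values below are hypothetical, standing in for parameters that would come from an MLE fit:&lt;br /&gt;

```python
import math

def lot_indicators(lot):
    """Map the three-lot categorical variable to indicator variables (X1, X2):
    Lot 1 -> (1, 0), Lot 2 -> (0, 1), Lot 3 -> (0, 0)."""
    return {1: (1, 0), 2: (0, 1), 3: (0, 0)}[lot]

def gll_life(lot, temperature, alphas):
    """GLL life characteristic L(X) = exp(a0 + a1*X1 + a2*X2 + a3*X3), with X3 = 1/T."""
    x1, x2 = lot_indicators(lot)
    a0, a1, a2, a3 = alphas
    return math.exp(a0 + a1 * x1 + a2 * x2 + a3 / temperature)

alphas = (1.0, 0.2, -0.1, 500.0)  # hypothetical coefficients, for illustration only
life_lot1 = gll_life(1, 350.0, alphas)
```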
The data can now be entered in ALTA and, with the assumption of an underlying life distribution and using MLE, the parameters of this model can be obtained.&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Comparing_Life_Data_Sets&amp;diff=53137</id>
		<title>Comparing Life Data Sets</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Comparing_Life_Data_Sets&amp;diff=53137"/>
		<updated>2014-05-05T21:54:54Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Life Comparison Tool */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:LDABOOK_SUB|Additional Reliability Analysis Tools|Comparing Life Data Sets}}&lt;br /&gt;
It is often desirable to be able to compare two sets of reliability or life data in order to determine which of the data sets has a more favorable life distribution. The data sets could be from two alternate designs, manufacturers, lots, assembly lines, etc. Many methods are available in statistical literature for doing this when the units come from a &#039;&#039;complete&#039;&#039; sample (i.e., a sample with no censoring). This process becomes a little more difficult when dealing with data sets that have censoring, or when trying to compare two data sets that have different distributions. In general, the problem boils down to that of being able to determine any statistically significant difference between the two samples of potentially censored data from two possibly different populations. This section discusses some of the methods available in Weibull++ that are applicable to censored data.&lt;br /&gt;
&lt;br /&gt;
==Simple Plotting==&lt;br /&gt;
One popular graphical method for making this determination involves plotting the data with confidence bounds and seeing whether the bounds overlap or separate at the point of interest. This can be easily done using the Overlay Plot feature in Weibull++. This approach can be effective for comparisons at a given point in time or a given reliability level, but it is difficult to assess the overall behavior of the two distributions because the confidence bounds may overlap at some points and be far apart at others.&lt;br /&gt;
&lt;br /&gt;
==Contour Plots==&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Contour Plot Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
To determine whether two data sets are significantly different and at what confidence level, one can utilize the contour plots provided in Weibull++. By overlaying two contour plots from two different data sets at the same confidence level, one can visually assess whether the data sets are significantly different at that confidence level if there is no overlap on the contours.  The disadvantage of this method is that the same distribution must be fitted to both data sets.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;{{:Contour Plot Example}}&lt;br /&gt;
&lt;br /&gt;
==Life Comparison Tool==&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Life Comparison Wizard. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
Another methodology, suggested by Gerald G. Brown and Herbert C. Rutemiller, is to estimate the probability of whether the times-to-failure of one population are better or worse than the times-to-failure of the second. The equation used to estimate this probability is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P\left[ {{t}_{2}}\ge {{t}_{1}} \right]=\int_{0}^{\infty }{{f}_{1}}(t)\cdot {{R}_{2}}(t)\cdot dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{f}_{1}}(t)\,\!&amp;lt;/math&amp;gt; is the  &#039;&#039;pdf &#039;&#039; of the first distribution and &amp;lt;math&amp;gt;{{R}_{2}}(t)\,\!&amp;lt;/math&amp;gt; is the reliability function of the second distribution. The evaluation of the superior data set is based on whether this probability is smaller or greater than 0.5. If the probability is equal to 0.5, then it is equivalent to saying that the two distributions are identical.&lt;br /&gt;
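As a numerical sketch of this comparison (simple trapezoid-rule integration for two Weibull distributions; the parameter values are arbitrary assumptions, and this is not the Weibull++ implementation):&lt;br /&gt;

```python
import math

def weibull_pdf(t, beta, eta):
    """Weibull probability density function."""
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

def weibull_reliability(t, beta, eta):
    """Weibull reliability (survival) function."""
    return math.exp(-((t / eta) ** beta))

def prob_t2_ge_t1(pdf1, rel2, upper, n=20000):
    """Trapezoid-rule estimate of P[t2 >= t1] = integral_0^inf f1(t)*R2(t) dt,
    truncated at `upper`, which must cover essentially all of f1's mass."""
    h = upper / n
    # Evaluate near t = 0 (not at 0) to avoid 0**negative when beta < 1.
    total = 0.5 * (pdf1(1e-9) * rel2(1e-9) + pdf1(upper) * rel2(upper))
    for i in range(1, n):
        t = i * h
        total += pdf1(t) * rel2(t)
    return total * h

# Identical distributions: the probability should be 0.5 (neither is superior).
p_equal = prob_t2_ge_t1(lambda t: weibull_pdf(t, 2.0, 100.0),
                        lambda t: weibull_reliability(t, 2.0, 100.0), upper=1000.0)
```

For two Weibull distributions sharing the same shape parameter &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;, the integral has the closed form &amp;lt;math&amp;gt;\eta _{2}^{\beta }/(\eta _{1}^{\beta }+\eta _{2}^{\beta })\,\!&amp;lt;/math&amp;gt;, which can serve as a sanity check on the numerical result.&lt;br /&gt;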
&lt;br /&gt;
&lt;br /&gt;
Sometimes we may need to compare the lives when one of the distributions is truncated. For example, if the random variable from the first distribution is truncated to the range [L, U], then the comparison with the truncated distribution should be used. For details, please see [[Stress-Strength Analysis]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Consider two product designs where X and Y represent the life test data from two different populations. If we only wanted to determine which component has a higher reliability at a given time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, we could simply compare the reliability estimates of both components at that time. But if we wanted to determine which product will have a longer life, we would calculate the probability that the life distribution of one product is better than that of the other. Using the equation given above, the probability that X is greater than or equal to Y can be interpreted as follows:&lt;br /&gt;
&lt;br /&gt;
:*If &amp;lt;math&amp;gt;P=0.50\,\!&amp;lt;/math&amp;gt;, then the lives of X and Y are equal.&lt;br /&gt;
:*If &amp;lt;math&amp;gt;P&amp;lt;0.50\,\!&amp;lt;/math&amp;gt; (for example, &amp;lt;math&amp;gt;P=0.10\,\!&amp;lt;/math&amp;gt;), then &amp;lt;math&amp;gt;1-P=1-0.10=0.90\,\!&amp;lt;/math&amp;gt;, i.e., Y is better than X with a 90% probability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;{{:Life Comparison Wizard}}&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Comparing_Life_Data_Sets&amp;diff=53135</id>
		<title>Comparing Life Data Sets</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Comparing_Life_Data_Sets&amp;diff=53135"/>
		<updated>2014-05-05T21:54:29Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Life Comparison Tool */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:LDABOOK_SUB|Additional Reliability Analysis Tools|Comparing Life Data Sets}}&lt;br /&gt;
It is often desirable to be able to compare two sets of reliability or life data in order to determine which of the data sets has a more favorable life distribution. The data sets could be from two alternate designs, manufacturers, lots, assembly lines, etc. Many methods are available in statistical literature for doing this when the units come from a &#039;&#039;complete&#039;&#039; sample (i.e., a sample with no censoring). This process becomes a little more difficult when dealing with data sets that have censoring, or when trying to compare two data sets that have different distributions. In general, the problem boils down to that of being able to determine any statistically significant difference between the two samples of potentially censored data from two possibly different populations. This section discusses some of the methods available in Weibull++ that are applicable to censored data.&lt;br /&gt;
&lt;br /&gt;
==Simple Plotting==&lt;br /&gt;
One popular graphical method for making this determination involves plotting the data with confidence bounds and seeing whether the bounds overlap or separate at the point of interest. This can be easily done using the Overlay Plot feature in Weibull++. This approach can be effective for comparisons at a given point in time or a given reliability level, but it is difficult to assess the overall behavior of the two distributions because the confidence bounds may overlap at some points and be far apart at others.&lt;br /&gt;
&lt;br /&gt;
==Contour Plots==&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Contour Plot Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
To determine whether two data sets are significantly different and at what confidence level, one can utilize the contour plots provided in Weibull++. By overlaying two contour plots from two different data sets at the same confidence level, one can visually assess whether the data sets are significantly different at that confidence level if there is no overlap on the contours.  The disadvantage of this method is that the same distribution must be fitted to both data sets.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;{{:Contour Plot Example}}&lt;br /&gt;
&lt;br /&gt;
==Life Comparison Tool==&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Life Comparison Wizard. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
Another methodology, suggested by Gerald G. Brown and Herbert C. Rutemiller, is to estimate the probability of whether the times-to-failure of one population are better or worse than the times-to-failure of the second. The equation used to estimate this probability is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P\left[ {{t}_{2}}\ge {{t}_{1}} \right]=\int_{0}^{\infty }{{f}_{1}}(t)\cdot {{R}_{2}}(t)\cdot dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{f}_{1}}(t)\,\!&amp;lt;/math&amp;gt; is the  &#039;&#039;pdf &#039;&#039; of the first distribution and &amp;lt;math&amp;gt;{{R}_{2}}(t)\,\!&amp;lt;/math&amp;gt; is the reliability function of the second distribution. The evaluation of the superior data set is based on whether this probability is smaller or greater than 0.5. If the probability is equal to 0.5, then it is equivalent to saying that the two distributions are identical.&lt;br /&gt;
&lt;br /&gt;
Sometimes we may need to compare the lives when one of the distributions is truncated. For example, if the random variable from the first distribution is truncated to the range [L, U], then the comparison with the truncated distribution should be used. For details, please see [[Stress-Strength Analysis]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Consider two product designs where X and Y represent the life test data from two different populations. If we only wanted to determine which component has a higher reliability at a given time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, we could simply compare the reliability estimates of both components at that time. But if we wanted to determine which product will have a longer life, we would calculate the probability that the life distribution of one product is better than that of the other. Using the equation given above, the probability that X is greater than or equal to Y can be interpreted as follows:&lt;br /&gt;
&lt;br /&gt;
:*If &amp;lt;math&amp;gt;P=0.50\,\!&amp;lt;/math&amp;gt;, then the lives of X and Y are equal.&lt;br /&gt;
:*If &amp;lt;math&amp;gt;P&amp;lt;0.50\,\!&amp;lt;/math&amp;gt; (for example, &amp;lt;math&amp;gt;P=0.10\,\!&amp;lt;/math&amp;gt;), then &amp;lt;math&amp;gt;1-P=1-0.10=0.90\,\!&amp;lt;/math&amp;gt;, i.e., Y is better than X with a 90% probability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;{{:Life Comparison Wizard}}&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Comparing_Life_Data_Sets&amp;diff=53132</id>
		<title>Comparing Life Data Sets</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Comparing_Life_Data_Sets&amp;diff=53132"/>
		<updated>2014-05-05T21:50:21Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* Life Comparison Tool */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:LDABOOK_SUB|Additional Reliability Analysis Tools|Comparing Life Data Sets}}&lt;br /&gt;
It is often desirable to be able to compare two sets of reliability or life data in order to determine which of the data sets has a more favorable life distribution. The data sets could be from two alternate designs, manufacturers, lots, assembly lines, etc. Many methods are available in statistical literature for doing this when the units come from a &#039;&#039;complete&#039;&#039; sample (i.e., a sample with no censoring). This process becomes a little more difficult when dealing with data sets that have censoring, or when trying to compare two data sets that have different distributions. In general, the problem boils down to that of being able to determine any statistically significant difference between the two samples of potentially censored data from two possibly different populations. This section discusses some of the methods available in Weibull++ that are applicable to censored data.&lt;br /&gt;
&lt;br /&gt;
==Simple Plotting==&lt;br /&gt;
One popular graphical method for making this determination involves plotting the data with confidence bounds and seeing whether the bounds overlap or separate at the point of interest. This can be easily done using the Overlay Plot feature in Weibull++. This approach can be effective for comparisons at a given point in time or a given reliability level, but it is difficult to assess the overall behavior of the two distributions because the confidence bounds may overlap at some points and be far apart at others.&lt;br /&gt;
&lt;br /&gt;
==Contour Plots==&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Contour Plot Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
To determine whether two data sets are significantly different and at what confidence level, one can utilize the contour plots provided in Weibull++. By overlaying two contour plots from two different data sets at the same confidence level, one can visually assess whether the data sets are significantly different at that confidence level if there is no overlap on the contours.  The disadvantage of this method is that the same distribution must be fitted to both data sets.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;{{:Contour Plot Example}}&lt;br /&gt;
&lt;br /&gt;
==Life Comparison Tool==&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Life Comparison Wizard. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
Another methodology, suggested by Gerald G. Brown and Herbert C. Rutemiller, is to estimate the probability of whether the times-to-failure of one population are better or worse than the times-to-failure of the second. The equation used to estimate this probability is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P\left[ {{t}_{2}}\ge {{t}_{1}} \right]=\int_{0}^{\infty }{{f}_{1}}(t)\cdot {{R}_{2}}(t)\cdot dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{f}_{1}}(t)\,\!&amp;lt;/math&amp;gt; is the  &#039;&#039;pdf &#039;&#039; of the first distribution and &amp;lt;math&amp;gt;{{R}_{2}}(t)\,\!&amp;lt;/math&amp;gt; is the reliability function of the second distribution. The evaluation of the superior data set is based on whether this probability is smaller or greater than 0.5. If the probability is equal to 0.5, then it is equivalent to saying that the two distributions are identical.&lt;br /&gt;
&lt;br /&gt;
Consider two product designs where X and Y represent the life test data from two different populations. If we only wanted to determine which component has a higher reliability at a given time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, we could simply compare the reliability estimates of both components at that time. But if we wanted to determine which product will have a longer life, we would calculate the probability that the life distribution of one product is better than that of the other. Using the equation given above, the probability that X is greater than or equal to Y can be interpreted as follows:&lt;br /&gt;
&lt;br /&gt;
:*If &amp;lt;math&amp;gt;P=0.50\,\!&amp;lt;/math&amp;gt;, then the lives of X and Y are equal.&lt;br /&gt;
:*If &amp;lt;math&amp;gt;P&amp;lt;0.50\,\!&amp;lt;/math&amp;gt; (for example, &amp;lt;math&amp;gt;P=0.10\,\!&amp;lt;/math&amp;gt;), then &amp;lt;math&amp;gt;1-P=1-0.10=0.90\,\!&amp;lt;/math&amp;gt;, i.e., Y is better than X with a 90% probability.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;{{:Life Comparison Wizard}}&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Experiment_Design_and_Analysis_Reference&amp;diff=53072</id>
		<title>Experiment Design and Analysis Reference</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Experiment_Design_and_Analysis_Reference&amp;diff=53072"/>
		<updated>2014-04-30T16:24:14Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Allbooksindex}}&lt;br /&gt;
{| width=&amp;quot;600&amp;quot; border=&amp;quot;0&amp;quot; align=&amp;quot;center&amp;quot; cellpadding=&amp;quot;3&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|- style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;left&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; bgcolor=&amp;quot;#E5B21B&amp;quot;| &amp;lt;font color=&amp;quot;#ffffff&amp;quot; size=&amp;quot;3&amp;quot;&amp;gt;ReliaSoft&#039;s Experiment Design and Analysis Reference&amp;lt;/font&amp;gt; &lt;br /&gt;
|- style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;left&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; bgcolor=&amp;quot;#E5B21B&amp;quot; | &amp;lt;font color=&amp;quot;#ffffff&amp;quot; size=&amp;quot;4&amp;quot;&amp;gt;Chapter Index&amp;lt;/font&amp;gt; &lt;br /&gt;
|- style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;left&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &lt;br /&gt;
#[[DOE Overview]]&lt;br /&gt;
#[[Statistical Background on DOE]]&lt;br /&gt;
#[[Simple Linear Regression Analysis]]&lt;br /&gt;
#[[Multiple Linear Regression Analysis]]&lt;br /&gt;
#[[One Factor Designs]]&lt;br /&gt;
#[[General Full Factorial Designs]]&lt;br /&gt;
#[[Randomization and Blocking in DOE]]&lt;br /&gt;
#[[Two Level Factorial Experiments]]&lt;br /&gt;
#[[Highly Fractional Factorial Designs]]&lt;br /&gt;
#*[[Highly Fractional Factorial Designs|Plackett-Burman Designs]]&lt;br /&gt;
#*[[Highly_Fractional_Factorial_Designs#Taguchi.27s_Orthogonal_Arrays|Taguchi Orthogonal Arrays Designs]]&lt;br /&gt;
#[[Response Surface Methods for Optimization]]&lt;br /&gt;
#[[Design Evaluation and Power Study]]&lt;br /&gt;
#[[Optimal Custom Designs]]&lt;br /&gt;
#[[Robust Parameter Design]]&lt;br /&gt;
#[[Reliability DOE for Life Tests]]&lt;br /&gt;
#[[Measurement System Analysis]]&lt;br /&gt;
#Appendices &lt;br /&gt;
#*[[ANOVA Calculations in Multiple Linear Regression|Appendix A: ANOVA Calculations in Multiple Linear Regression]]&lt;br /&gt;
#*[[Use of Regression to Calculate Sum of Squares|Appendix B: Use of Regression to Calculate Sum of Squares]]&lt;br /&gt;
#*[[Plackett-Burman Designs|Appendix C: Plackett-Burman Designs]]&lt;br /&gt;
#*[[Taguchi Orthogonal Arrays|Appendix D: Taguchi&#039;s Orthogonal Arrays]]&lt;br /&gt;
#*[[Alias Relations for Taguchi Orthogonal Arrays|Appendix E: Alias Relations for Taguchi&#039;s Orthogonal Arrays]]&lt;br /&gt;
#*[[Box-Behnken Designs|Appendix F: Box-Behnken Designs]]&lt;br /&gt;
#*[[DOE Glossary|Appendix G: Glossary]]&lt;br /&gt;
#*[[DOE References|Appendix H: References]]&lt;br /&gt;
|}&lt;br /&gt;
{| width=&amp;quot;600&amp;quot; border=&amp;quot;0&amp;quot; align=&amp;quot;center&amp;quot; cellpadding=&amp;quot;3&amp;quot; cellspacing=&amp;quot;0&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; valign=&amp;quot;middle&amp;quot; bgcolor=&amp;quot;#dddddd&amp;quot;;  | [[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf|left|50px]]&amp;lt;p style=&amp;quot;text-align: left;&amp;quot;&amp;gt;[http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf Download this book as a print-ready *.pdf] -or-&amp;lt;br&amp;gt;[http://reliawiki.org/index.php/ReliaWiki:Books/Experiment_Design_and_Analysis_Reference_eBook Generate your own file] (may be more up-to-date)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;0&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;0&amp;quot; width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;center&amp;quot; | &lt;br /&gt;
&amp;lt;br&amp;gt; {{Allbooksindex footer|DOE++ Examples|DOE++}}&lt;br /&gt;
[[Image:DOE Examples Banner.png|link=DOE++ Examples|center|300px]] &lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Experiment_Design_and_Analysis_Reference&amp;diff=53071</id>
		<title>Experiment Design and Analysis Reference</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Experiment_Design_and_Analysis_Reference&amp;diff=53071"/>
		<updated>2014-04-30T16:23:03Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Allbooksindex}}&lt;br /&gt;
{| width=&amp;quot;600&amp;quot; border=&amp;quot;0&amp;quot; align=&amp;quot;center&amp;quot; cellpadding=&amp;quot;3&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|- style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;left&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; bgcolor=&amp;quot;#E5B21B&amp;quot;| &amp;lt;font color=&amp;quot;#ffffff&amp;quot; size=&amp;quot;3&amp;quot;&amp;gt;ReliaSoft&#039;s Experiment Design and Analysis Reference&amp;lt;/font&amp;gt; &lt;br /&gt;
|- style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;left&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; bgcolor=&amp;quot;#E5B21B&amp;quot; | &amp;lt;font color=&amp;quot;#ffffff&amp;quot; size=&amp;quot;4&amp;quot;&amp;gt;Chapter Index&amp;lt;/font&amp;gt; &lt;br /&gt;
|- style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;left&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &lt;br /&gt;
#[[DOE Overview]]&lt;br /&gt;
#[[Statistical Background on DOE]]&lt;br /&gt;
#[[Simple Linear Regression Analysis]]&lt;br /&gt;
#[[Multiple Linear Regression Analysis]]&lt;br /&gt;
#[[One Factor Designs]]&lt;br /&gt;
#[[General Full Factorial Designs]]&lt;br /&gt;
#[[Randomization and Blocking in DOE]]&lt;br /&gt;
#[[Two Level Factorial Experiments]]&lt;br /&gt;
#[[Highly Fractional Factorial Designs]]&lt;br /&gt;
#*[[Highly Fractional Factorial Designs|Plackett-Burman Designs]]&lt;br /&gt;
#*[[Highly Fractional Factorial Designs|Taguchi Orthogonal Arrays Designs]]&lt;br /&gt;
#[[Response Surface Methods for Optimization]]&lt;br /&gt;
#[[Design Evaluation and Power Study]]&lt;br /&gt;
#[[Optimal Custom Designs]]&lt;br /&gt;
#[[Robust Parameter Design]]&lt;br /&gt;
#[[Reliability DOE for Life Tests]]&lt;br /&gt;
#[[Measurement System Analysis]]&lt;br /&gt;
#Appendices &lt;br /&gt;
#*[[ANOVA Calculations in Multiple Linear Regression|Appendix A: ANOVA Calculations in Multiple Linear Regression]]&lt;br /&gt;
#*[[Use of Regression to Calculate Sum of Squares|Appendix B: Use of Regression to Calculate Sum of Squares]]&lt;br /&gt;
#*[[Plackett-Burman Designs|Appendix C: Plackett-Burman Designs]]&lt;br /&gt;
#*[[Taguchi Orthogonal Arrays|Appendix D: Taguchi&#039;s Orthogonal Arrays]]&lt;br /&gt;
#*[[Alias Relations for Taguchi Orthogonal Arrays|Appendix E: Alias Relations for Taguchi&#039;s Orthogonal Arrays]]&lt;br /&gt;
#*[[Box-Behnken Designs|Appendix F: Box-Behnken Designs]]&lt;br /&gt;
#*[[DOE Glossary|Appendix G: Glossary]]&lt;br /&gt;
#*[[DOE References|Appendix H: References]]&lt;br /&gt;
|}&lt;br /&gt;
{| width=&amp;quot;600&amp;quot; border=&amp;quot;0&amp;quot; align=&amp;quot;center&amp;quot; cellpadding=&amp;quot;3&amp;quot; cellspacing=&amp;quot;0&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; valign=&amp;quot;middle&amp;quot; bgcolor=&amp;quot;#dddddd&amp;quot; | [[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf|left|50px]]&amp;lt;p style=&amp;quot;text-align: left;&amp;quot;&amp;gt;[http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf Download this book as a print-ready *.pdf] -or-&amp;lt;br&amp;gt;[http://reliawiki.org/index.php/ReliaWiki:Books/Experiment_Design_and_Analysis_Reference_eBook Generate your own file] (may be more up-to-date)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;0&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;0&amp;quot; width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;center&amp;quot; | &lt;br /&gt;
&amp;lt;br&amp;gt; {{Allbooksindex footer|DOE++ Examples|DOE++}}&lt;br /&gt;
[[Image:DOE Examples Banner.png|link=DOE++ Examples|center|300px]] &lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Experiment_Design_and_Analysis_Reference&amp;diff=53070</id>
		<title>Experiment Design and Analysis Reference</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Experiment_Design_and_Analysis_Reference&amp;diff=53070"/>
		<updated>2014-04-30T16:18:39Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Allbooksindex}}&lt;br /&gt;
{| width=&amp;quot;600&amp;quot; border=&amp;quot;0&amp;quot; align=&amp;quot;center&amp;quot; cellpadding=&amp;quot;3&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|- style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;left&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; bgcolor=&amp;quot;#E5B21B&amp;quot;| &amp;lt;font color=&amp;quot;#ffffff&amp;quot; size=&amp;quot;3&amp;quot;&amp;gt;ReliaSoft&#039;s Experiment Design and Analysis Reference&amp;lt;/font&amp;gt; &lt;br /&gt;
|- style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;left&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; bgcolor=&amp;quot;#E5B21B&amp;quot; | &amp;lt;font color=&amp;quot;#ffffff&amp;quot; size=&amp;quot;4&amp;quot;&amp;gt;Chapter Index&amp;lt;/font&amp;gt; &lt;br /&gt;
|- style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;left&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &lt;br /&gt;
#[[DOE Overview]]&lt;br /&gt;
#[[Statistical Background on DOE]]&lt;br /&gt;
#[[Simple Linear Regression Analysis]]&lt;br /&gt;
#[[Multiple Linear Regression Analysis]]&lt;br /&gt;
#[[One Factor Designs]]&lt;br /&gt;
#[[General Full Factorial Designs]]&lt;br /&gt;
#[[Randomization and Blocking in DOE]]&lt;br /&gt;
#[[Two Level Factorial Experiments]]&lt;br /&gt;
#[[Highly Fractional Factorial Designs]]&lt;br /&gt;
#*[[Highly Fractional Factorial Designs|Plackett-Burman Designs]]&lt;br /&gt;
#*[[Highly Fractional Factorial Designs|Orthogonal Arrays Designs]]&lt;br /&gt;
#[[Response Surface Methods for Optimization]]&lt;br /&gt;
#[[Design Evaluation and Power Study]]&lt;br /&gt;
#[[Optimal Custom Designs]]&lt;br /&gt;
#[[Robust Parameter Design]]&lt;br /&gt;
#[[Reliability DOE for Life Tests]]&lt;br /&gt;
#[[Measurement System Analysis]]&lt;br /&gt;
#Appendices &lt;br /&gt;
#*[[ANOVA Calculations in Multiple Linear Regression|Appendix A: ANOVA Calculations in Multiple Linear Regression]]&lt;br /&gt;
#*[[Use of Regression to Calculate Sum of Squares|Appendix B: Use of Regression to Calculate Sum of Squares]]&lt;br /&gt;
#*[[Plackett-Burman Designs|Appendix C: Plackett-Burman Designs]]&lt;br /&gt;
#*[[Taguchi Orthogonal Arrays|Appendix D: Taguchi&#039;s Orthogonal Arrays]]&lt;br /&gt;
#*[[Alias Relations for Taguchi Orthogonal Arrays|Appendix E: Alias Relations for Taguchi&#039;s Orthogonal Arrays]]&lt;br /&gt;
#*[[Box-Behnken Designs|Appendix F: Box-Behnken Designs]]&lt;br /&gt;
#*[[DOE Glossary|Appendix G: Glossary]]&lt;br /&gt;
#*[[DOE References|Appendix H: References]]&lt;br /&gt;
|}&lt;br /&gt;
{| width=&amp;quot;600&amp;quot; border=&amp;quot;0&amp;quot; align=&amp;quot;center&amp;quot; cellpadding=&amp;quot;3&amp;quot; cellspacing=&amp;quot;0&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; valign=&amp;quot;middle&amp;quot; bgcolor=&amp;quot;#dddddd&amp;quot; | [[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf|left|50px]]&amp;lt;p style=&amp;quot;text-align: left;&amp;quot;&amp;gt;[http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf Download this book as a print-ready *.pdf] -or-&amp;lt;br&amp;gt;[http://reliawiki.org/index.php/ReliaWiki:Books/Experiment_Design_and_Analysis_Reference_eBook Generate your own file] (may be more up-to-date)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;0&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;0&amp;quot; width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;center&amp;quot; | &lt;br /&gt;
&amp;lt;br&amp;gt; {{Allbooksindex footer|DOE++ Examples|DOE++}}&lt;br /&gt;
[[Image:DOE Examples Banner.png|link=DOE++ Examples|center|300px]] &lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Experiment_Design_and_Analysis_Reference&amp;diff=53069</id>
		<title>Experiment Design and Analysis Reference</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Experiment_Design_and_Analysis_Reference&amp;diff=53069"/>
		<updated>2014-04-30T16:17:04Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Allbooksindex}}&lt;br /&gt;
{| width=&amp;quot;600&amp;quot; border=&amp;quot;0&amp;quot; align=&amp;quot;center&amp;quot; cellpadding=&amp;quot;3&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|- style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;left&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; bgcolor=&amp;quot;#E5B21B&amp;quot;| &amp;lt;font color=&amp;quot;#ffffff&amp;quot; size=&amp;quot;3&amp;quot;&amp;gt;ReliaSoft&#039;s Experiment Design and Analysis Reference&amp;lt;/font&amp;gt; &lt;br /&gt;
|- style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;left&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; bgcolor=&amp;quot;#E5B21B&amp;quot; | &amp;lt;font color=&amp;quot;#ffffff&amp;quot; size=&amp;quot;4&amp;quot;&amp;gt;Chapter Index&amp;lt;/font&amp;gt; &lt;br /&gt;
|- style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;left&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &lt;br /&gt;
#[[DOE Overview]]&lt;br /&gt;
#[[Statistical Background on DOE]]&lt;br /&gt;
#[[Simple Linear Regression Analysis]]&lt;br /&gt;
#[[Multiple Linear Regression Analysis]]&lt;br /&gt;
#[[One Factor Designs]]&lt;br /&gt;
#[[General Full Factorial Designs]]&lt;br /&gt;
#[[Randomization and Blocking in DOE]]&lt;br /&gt;
#[[Two Level Factorial Experiments]]&lt;br /&gt;
#[[Highly Fractional Factorial Designs]]&lt;br /&gt;
#*[[Highly Fractional Factorial Designs|Plackett-Burman Designs]]&lt;br /&gt;
#*[[Highly Fractional Factorial Designs|Orthogonal Arrays Designs]]&lt;br /&gt;
#[[Response Surface Methods for Optimization]]&lt;br /&gt;
#[[Design Evaluation and Power Study]]&lt;br /&gt;
#[[Optimal Custom Designs]]&lt;br /&gt;
#[[Robust Parameter Design]]&lt;br /&gt;
#[[Reliability DOE for Life Tests]]&lt;br /&gt;
#[[Measurement System Analysis]]&lt;br /&gt;
#Appendices &lt;br /&gt;
#*[[ANOVA Calculations in Multiple Linear Regression|Appendix A: ANOVA Calculations in Multiple Linear Regression]]&lt;br /&gt;
#*[[Use of Regression to Calculate Sum of Squares|Appendix B: Use of Regression to Calculate Sum of Squares]]&lt;br /&gt;
#*[[Plackett-Burman Designs|Appendix C: Plackett-Burman Designs]]&lt;br /&gt;
#*[[Taguchi Orthogonal Arrays|Appendix D: Taguchi&#039;s Orthogonal Arrays]]&lt;br /&gt;
#*[[Alias Relations for Taguchi Orthogonal Arrays|Appendix E: Alias Relations for Taguchi&#039;s Orthogonal Arrays]]&lt;br /&gt;
#*[[Box-Behnken Designs|Appendix F: Box-Behnken Designs]]&lt;br /&gt;
#*[[DOE Glossary|Appendix G: Glossary]]&lt;br /&gt;
#*[[DOE References|Appendix H: References]]&lt;br /&gt;
|}&lt;br /&gt;
{| width=&amp;quot;600&amp;quot; border=&amp;quot;0&amp;quot; align=&amp;quot;center&amp;quot; cellpadding=&amp;quot;3&amp;quot; cellspacing=&amp;quot;0&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; valign=&amp;quot;middle&amp;quot; bgcolor=&amp;quot;#dddddd&amp;quot; | [[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf|left|50px]]&amp;lt;p style=&amp;quot;text-align: left;&amp;quot;&amp;gt;[http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf Download this book as a print-ready *.pdf] -or-&amp;lt;br&amp;gt;[http://reliawiki.org/index.php/ReliaWiki:Books/Experiment_Design_and_Analysis_Reference_eBook Generate your own file] (may be more up-to-date)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;0&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;0&amp;quot; width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;&amp;quot; valign=&amp;quot;middle&amp;quot; align=&amp;quot;center&amp;quot; | &lt;br /&gt;
&amp;lt;br&amp;gt; {{Allbooksindex footer|DOE++ Examples|DOE++}}&lt;br /&gt;
[[Image:DOE Examples Banner.png|link=DOE++ Examples|center|300px]] &lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=File:Effect_of_upsilon.png&amp;diff=50532</id>
		<title>File:Effect of upsilon.png</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=File:Effect_of_upsilon.png&amp;diff=50532"/>
		<updated>2014-02-10T22:15:10Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: uploaded a new version of &amp;quot;File:Effect of upsilon.png&amp;quot;:&amp;amp;#32;Reverted to version as of 22:14, 10 February 2014&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=File:Effect_of_upsilon.png&amp;diff=50531</id>
		<title>File:Effect of upsilon.png</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=File:Effect_of_upsilon.png&amp;diff=50531"/>
		<updated>2014-02-10T22:14:35Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: uploaded a new version of &amp;quot;File:Effect of upsilon.png&amp;quot;:&amp;amp;#32;Reverted to version as of 16:37, 13 March 2012&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=File:Effect_of_upsilon.png&amp;diff=50530</id>
		<title>File:Effect of upsilon.png</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=File:Effect_of_upsilon.png&amp;diff=50530"/>
		<updated>2014-02-10T22:14:02Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: uploaded a new version of &amp;quot;File:Effect of upsilon.png&amp;quot;:&amp;amp;#32;Reverted to version as of 16:37, 13 March 2012&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=File:Effect_of_lambda_on_exponential_pdf.png&amp;diff=50529</id>
		<title>File:Effect of lambda on exponential pdf.png</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=File:Effect_of_lambda_on_exponential_pdf.png&amp;diff=50529"/>
		<updated>2014-02-10T22:13:10Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: uploaded a new version of &amp;quot;File:Effect of lambda on exponential pdf.png&amp;quot;:&amp;amp;#32;Reverted to version as of 16:36, 13 March 2012&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=File:Effect_on_failure_rate_new.png&amp;diff=50429</id>
		<title>File:Effect on failure rate new.png</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=File:Effect_on_failure_rate_new.png&amp;diff=50429"/>
		<updated>2014-02-10T18:28:26Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Exponential_Distribution_Characteristics&amp;diff=50428</id>
		<title>Exponential Distribution Characteristics</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Exponential_Distribution_Characteristics&amp;diff=50428"/>
		<updated>2014-02-10T18:28:11Z</updated>

		<summary type="html">&lt;p&gt;Harry Guo: /* The Effect of lambda and gamma on the Failure Rate Function */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Navigation box}}&lt;br /&gt;
&#039;&#039;This article also appears in the [[The_Exponential_Distribution|Life Data Analysis Reference]] and [[Distributions_Used_in_Accelerated_Testing|Accelerated Life Testing Data Analysis Reference]] books.&#039;&#039; &amp;lt;/noinclude&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The primary trait of the exponential distribution is that it is used for modeling the behavior of items with a constant failure rate. It has a fairly simple mathematical form, which makes it easy to manipulate. Unfortunately, this simplicity also leads to the use of this model in situations where it is not appropriate. For example, it would not be appropriate to use the exponential distribution to model the reliability of an automobile. The constant failure rate of the exponential distribution would require the assumption that the automobile would be just as likely to experience a breakdown during the first mile as it would during the one-hundred-thousandth mile. Clearly, this is not a valid assumption. However, some inexperienced practitioners of reliability engineering and life data analysis will overlook this fact, lured by the siren-call of the exponential distribution&#039;s relatively simple mathematical models.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===The Effect of lambda and gamma on the Exponential &#039;&#039;pdf&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
[[Image:effect of lambda on exponential pdf.png|center|400px|]]&lt;br /&gt;
&lt;br /&gt;
:*The exponential &#039;&#039;pdf&#039;&#039; has no shape parameter, as it has only one shape.&lt;br /&gt;
:*The exponential &#039;&#039;pdf&#039;&#039; is always convex and is stretched to the right as &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; decreases in value.&lt;br /&gt;
:*The value of the &#039;&#039;pdf&#039;&#039; function is always equal to the value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; at &amp;lt;math&amp;gt;t=0\,\!&amp;lt;/math&amp;gt; (or &amp;lt;math&amp;gt;t=\gamma \,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
:*The location parameter, &amp;lt;math&amp;gt;\gamma \,\!&amp;lt;/math&amp;gt;, if positive, shifts the beginning of the distribution by a distance of &amp;lt;math&amp;gt;\gamma \,\!&amp;lt;/math&amp;gt; to the right of the origin, signifying that the chance failures start to occur only after &amp;lt;math&amp;gt;\gamma \,\!&amp;lt;/math&amp;gt; hours of operation, and cannot occur before this time.&lt;br /&gt;
:*The scale parameter is &amp;lt;math&amp;gt;\tfrac{1}{\lambda }=\bar{T}-\gamma =m-\gamma \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:*As &amp;lt;math&amp;gt;t\to \infty \,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;f(t)\to 0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
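The bullets above describe the 2-parameter exponential pdf, which equals λ·exp(−λ(t − γ)) for t at or after γ and 0 before it. A minimal Python sketch (the function name is illustrative, not from the reference):

```python
import math

def exponential_pdf(t, lam, gamma=0.0):
    """2-parameter exponential pdf: lam * exp(-lam * (t - gamma)) for t at or after gamma."""
    if t >= gamma:
        return lam * math.exp(-lam * (t - gamma))
    return 0.0  # chance failures cannot occur before the location parameter gamma

# The pdf always equals lam at t = gamma (t = 0 in the 1-parameter case),
# and tends to 0 as t grows.
```

For example, `exponential_pdf(0.0, 0.5)` returns 0.5, matching the bullet that the pdf equals λ at t = 0 (or t = γ).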
&lt;br /&gt;
===The Effect of lambda and gamma on the Exponential Reliability Function===&lt;br /&gt;
&lt;br /&gt;
[[Image:effect of upsilon.png|center|400px|]]&lt;br /&gt;
&lt;br /&gt;
:*The 1-parameter exponential reliability function starts at the value of 100% at &amp;lt;math&amp;gt;t=0\,\!&amp;lt;/math&amp;gt;, decreases thereafter monotonically and is convex.&lt;br /&gt;
:*The 2-parameter exponential reliability function remains at the value of 100% from &amp;lt;math&amp;gt;t=0\,\!&amp;lt;/math&amp;gt; up to &amp;lt;math&amp;gt;t=\gamma \,\!&amp;lt;/math&amp;gt;, and decreases thereafter monotonically and is convex.&lt;br /&gt;
:*As &amp;lt;math&amp;gt;t\to \infty \,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;R(t\to \infty )\to 0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:*The reliability for a mission duration of &amp;lt;math&amp;gt;t=m=\tfrac{1}{\lambda }\,\!&amp;lt;/math&amp;gt;, or of one MTTF duration, is always equal to &amp;lt;math&amp;gt;0.3679\,\!&amp;lt;/math&amp;gt; or 36.79%. This means that a mission as long as one MTTF has a relatively low reliability and is not recommended, because only 36.8% of such missions will be completed successfully. In other words, of the equipment undertaking such a mission, only 36.8% will survive it.&lt;br /&gt;
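The MTTF bullet can be checked numerically: for the 1-parameter exponential, R(t) = exp(−λt), so a mission of duration m = 1/λ gives exp(−1) ≈ 0.3679 regardless of λ. A short sketch (illustrative naming, not from the reference):

```python
import math

def exponential_reliability(t, lam, gamma=0.0):
    """2-parameter exponential reliability: exp(-lam * (t - gamma)) for t at or after gamma."""
    if t >= gamma:
        return math.exp(-lam * (t - gamma))
    return 1.0  # reliability stays at 100% before the location parameter gamma

# A mission one MTTF long (t = 1/lam, gamma = 0) always yields exp(-1), about 0.3679,
# for any value of lam:
mttf_reliabilities = [exponential_reliability(1.0 / lam, lam) for lam in (0.1, 1.0, 25.0)]
```

Every entry of `mttf_reliabilities` equals exp(−1), illustrating why a mission as long as one MTTF is not recommended.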
&lt;br /&gt;
===The Effect of lambda and gamma on the Failure Rate Function===&lt;br /&gt;
:*The 1-parameter exponential failure rate function is constant and starts at &amp;lt;math&amp;gt;t=0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:*The 2-parameter exponential failure rate function remains at the value of 0 from &amp;lt;math&amp;gt;t=0\,\!&amp;lt;/math&amp;gt; up to &amp;lt;math&amp;gt;t=\gamma \,\!&amp;lt;/math&amp;gt;, and then remains constant at the value of &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
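The two failure-rate cases above reduce to a simple step function, sketched here under the same illustrative naming:

```python
def exponential_failure_rate(t, lam, gamma=0.0):
    """Exponential failure rate: 0 before gamma, then constant at lam.
    Setting gamma = 0 recovers the 1-parameter case (constant from t = 0)."""
    return lam if t >= gamma else 0.0
```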
&lt;br /&gt;
[[Image: effect_on_failure_rate_new.png|center|600px|]]&lt;/div&gt;</summary>
		<author><name>Harry Guo</name></author>
	</entry>
</feed>