The Lognormal Distribution


==Lognormal Probability Density Function==
The lognormal distribution is a 2-parameter distribution with parameters <math>{\mu }'\,\!</math> and <math>\sigma'\,\!</math>. The ''pdf'' for this distribution is given by:

::<math>f({t}')=\frac{1}{{{\sigma' }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{{t}^{\prime }}-{\mu }'}{{{\sigma' }}} \right)}^{2}}}}\,\!</math>

where:

:<math>{t}'=\ln (t)\,\!</math>, where the <math>t\,\!</math> values are the times-to-failure

:<math>\mu'\,\!</math> = mean of the natural logarithms of the times-to-failure

:<math>\sigma'\,\!</math> = standard deviation of the natural logarithms of the times-to-failure


The lognormal ''pdf'' can be obtained by recognizing that, for equal probabilities under the normal and lognormal ''pdfs'', incremental areas should also be equal:

::<math>\begin{align}
f(t)dt=f({t}')d{t}'
\end{align}\,\!</math>


Taking the derivative of the relationship between <math>{t}'\,\!</math> and <math>{t}\,\!</math> yields:

::<math>d{t}'=\frac{dt}{t}\,\!</math>


Substitution yields:

::<math>\begin{align}
   f(t)= & \frac{f({t}')}{t} \\ 
   f(t)= & \frac{1}{t\cdot {{\sigma' }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}(t)-{\mu }'}{{{\sigma' }}} \right)}^{2}}}}   
\end{align}\,\!</math>


where:

::<math>f(t)\ge 0,t>0,-\infty <{\mu }'<\infty ,{{\sigma' }}>0\,\!</math>
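
A quick numerical check can make the parameterization concrete. The following sketch (Python with NumPy and SciPy is assumed here; the parameter values are arbitrary illustrations, not taken from this chapter) evaluates the lognormal ''pdf'' directly from the equation above and compares it with <code>scipy.stats.lognorm</code>, which uses <code>s</code> = <math>\sigma'\,\!</math> and <code>scale</code> = <math>{{e}^{{\mu }'}}\,\!</math>.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

mu_p = 3.5       # mu': mean of ln(t) -- illustrative value only
sigma_p = 0.95   # sigma': standard deviation of ln(t) -- illustrative value only

def lognormal_pdf(t, mu_p, sigma_p):
    """f(t): the normal pdf evaluated at ln(t), divided by t."""
    z = (np.log(t) - mu_p) / sigma_p
    return np.exp(-0.5 * z ** 2) / (t * sigma_p * np.sqrt(2.0 * np.pi))

t = np.array([10.0, 25.0, 50.0, 100.0])
manual = lognormal_pdf(t, mu_p, sigma_p)

# scipy parameterizes the same distribution with s = sigma' and scale = exp(mu')
reference = stats.lognorm.pdf(t, s=sigma_p, scale=np.exp(mu_p))

print(np.allclose(manual, reference))   # True
</syntaxhighlight>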


==Lognormal Distribution Functions== <!-- THIS SECTION HEADER IS LINKED FROM ANOTHER LOCATION IN THIS PAGE IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). -->
==Estimation of the Parameters==
===Probability Plotting===
As described before, probability plotting involves plotting the failure times and associated unreliability estimates on specially constructed probability plotting paper. The form of this paper is based on a linearization of the ''cdf'' of the specific distribution. For the lognormal distribution, the cumulative distribution function can be written as:


::<math>F({t}')=\Phi \left( \frac{{t}'-{\mu }'}{{{\sigma'}}} \right)\,\!</math>

or:

::<math>{{\Phi }^{-1}}\left[ F({t}') \right]=-\frac{{{\mu }'}}{{{\sigma}'}}+\frac{1}{{{\sigma }'}}\cdot {t}'\,\!</math>

where:

::<math>\Phi (x)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{x}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!</math>

Now, let:

::<math>y={{\Phi }^{-1}}\left[ F({t}') \right]\,\!</math>

::<math>a=-\frac{{{\mu }'}}{{{\sigma}'}}\,\!</math>

and:

::<math>b=\frac{1}{{{\sigma}'}}\,\!</math>

which results in the linear equation of:

::<math>\begin{align}
y=a+b{t}'
\end{align}\,\!</math>


The normal probability paper resulting from this linearized ''cdf'' function is shown next.

[[Image:BS.10 lognormal probability plot.png|center|350px| ]]

===Rank Regression on Y===
The least squares parameter estimation method, or regression analysis, was discussed in [[Parameter Estimation]], and the following equations for regression on Y were derived there and are again applicable:


::<math>\hat{a}=\bar{y}-\hat{b}\bar{x}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}\,\!</math>

and:

::<math>\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!</math>

In our case the equations for <math>{{y}_{i}}\,\!</math> and <math>x_{i}\,\!</math> are:

::<math>{{y}_{i}}={{\Phi }^{-1}}\left[ F(t_{i}^{\prime }) \right]\,\!</math>

and:

::<math>{{x}_{i}}=t_{i}^{\prime }\,\!</math>
where <math>F(t_{i}^{\prime })\,\!</math> is estimated from the median ranks. Once <math>\widehat{a}\,\!</math> and <math>\widehat{b}\,\!</math> are obtained, <math>\widehat{\sigma }'\,\!</math> and <math>\widehat{\mu }'\,\!</math> can easily be obtained from the above equations.
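
The rank regression on Y procedure is easy to script. The sketch below (Python assumed) uses hypothetical failure times and Benard's approximation for the median ranks, both assumptions of this illustration and not part of the example data in this chapter; it linearizes the data and solves for <math>\widehat{a}\,\!</math> and <math>\widehat{b}\,\!</math>, then recovers <math>{\sigma }'\,\!</math> and <math>{\mu }'\,\!</math> as described above.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

# Hypothetical complete failure times, in hours (illustration only).
t = np.sort(np.array([16.0, 34.0, 53.0, 75.0, 93.0, 120.0]))
N = len(t)

# Median ranks via Benard's approximation (assumption of this sketch; the
# text uses rank tables or the Quick Statistical Reference instead).
i = np.arange(1, N + 1)
F = (i - 0.3) / (N + 0.4)

x = np.log(t)             # x_i = t_i' = ln(t_i)
y = stats.norm.ppf(F)     # y_i = Phi^-1[ F(t_i') ]

# Least squares regression on Y:  y = a + b*x
b_hat = (np.sum(x * y) - np.sum(x) * np.sum(y) / N) / \
        (np.sum(x ** 2) - np.sum(x) ** 2 / N)
a_hat = np.mean(y) - b_hat * np.mean(x)

sigma_p = 1.0 / b_hat     # sigma' = 1/b
mu_p = -a_hat * sigma_p   # mu'    = -a * sigma'

print(mu_p, sigma_p)
</syntaxhighlight>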


{{The Correlation Coefficient Calculation}}

====RRY Example====

Assuming the data follow a lognormal distribution, estimate the parameters and the correlation coefficient, <math>\rho \,\!</math>, using rank regression on Y.


'''Solution'''

Construct a table like the one shown next.

<center><math>\overset{{}}{\mathop{\text{Least Squares Analysis}}}\,\,\!</math></center>


<center><math>\begin{matrix}
   \text{N} & {{t}_{i}} & F({{t}_{i}}) & \ln ({{t}_{i}}) & {{y}_{i}} & {{\left( \ln ({{t}_{i}}) \right)}^{2}} & y_{i}^{2} & \ln ({{t}_{i}})\cdot {{y}_{i}}  \\
   \text{7} & \text{35} & \text{0}\text{.4651} & \text{3.5553} & \text{-0.0873} & \text{12.6405} & \text{0.0076}& \text{-0.3102}  \\
   \text{8} & \text{40} & \text{0}\text{.5349} & \text{3.6889}& \text{0.0873} & \text{13.6078} & \text{0.0076} & \text{0.3219}  \\
   \text{9} & \text{50} & \text{0}\text{.6046} & \text{3.9120} & \text{0.2647} & \text{15.3039} & \text{0.0701} &\text{1.0357}  \\
   \text{10} & \text{60} & \text{0}\text{.6742} & \text{4.0943} & \text{0.4512} & \text{16.7637} & \text{0.2036}&\text{1.8474}  \\
   \text{11} & \text{70} & \text{0}\text{.7439} & \text{4.2485} & \text{0.6552} & \text{18.0497}& \text{0.4292} & \text{2.7834} \\
   \text{12} & \text{80} & \text{0}\text{.8135} & \text{4.3820} & \text{0.8908} & \text{19.2022} & \text{0.7935} & \text{3.9035}  \\
   \text{13} & \text{90} & \text{0}\text{.8830} & \text{4.4998} & \text{1.1901} & \text{20.2483}&\text{1.4163} & \text{5.3552}  \\
   \text{14} & \text{100}& \text{0}\text{.9517} & \text{4.6052} & \text{1.6619} & \text{21.2076} &\text{2.7619} & \text{7.6533}  \\
   \sum_{}^{} & \text{ } & \text{ } & \text{49.222} & \text{0} & \text{183.1531} & \text{11.3646} & \text{10.4473}  \\
\end{matrix}\,\!</math></center>


The median rank values (<math>F({{t}_{i}})\,\!</math>) can be found in rank tables or by using the Quick Statistical Reference in Weibull++.


The <math>{{y}_{i}}\,\!</math> values were obtained from the standardized normal distribution's area tables by entering the value of <math>F(z)\,\!</math> and reading the corresponding <math>z\,\!</math> value (<math>{{y}_{i}}\,\!</math>).


Given the values in the table above, calculate <math>\widehat{a}\,\!</math> and <math>\widehat{b}\,\!</math>:

::<math>\begin{align}
  & \widehat{b}= & \frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime }{{y}_{i}}-\tfrac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime }\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}}}{14}}{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime 2}-\tfrac{{{\left( \underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime } \right)}^{2}}}{14}} \\ 
  &  &  \\ 
  & \widehat{b}= & \frac{10.4473-(49.2220)(0)/14}{183.1530-{{(49.2220)}^{2}}/14}   
\end{align}\,\!</math>


or:

::<math>\widehat{b}=1.0349\,\!</math>

and:

::<math>\widehat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\widehat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,t_{i}^{\prime }}{N}\,\!</math>

or:

::<math>\widehat{a}=\frac{0}{14}-(1.0349)\frac{49.2220}{14}=-3.6386\,\!</math>

Therefore:

::<math>{\sigma'}=\frac{1}{\widehat{b}}=\frac{1}{1.0349}=0.9663\,\!</math>

and:

::<math>{\mu }'=-\widehat{a}\cdot {\sigma'}=-(-3.6386)\cdot 0.9663\,\!</math>

or:

::<math>\begin{align}
{\mu }'=3.516  
\end{align}\,\!</math>


The mean and the standard deviation of the lognormal distribution are obtained using equations in the [[The_Lognormal_Distribution#Lognormal_Distribution_Functions|Lognormal Distribution Functions]] section above:

::<math>\overline{T}=\mu ={{e}^{3.516+\tfrac{1}{2}{{0.9663}^{2}}}}=53.6707\text{ hours}\,\!</math>

and:
::<math>{\sigma}=\sqrt{({{e}^{2\cdot 3.516+{{0.9663}^{2}}}})({{e}^{{{0.9663}^{2}}}}-1)}=66.69\text{ hours}\,\!</math>
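
As a quick arithmetic check, both results can be reproduced directly from the estimated parameters (a minimal Python sketch is assumed below):

<syntaxhighlight lang="python">
import numpy as np

mu_p, sigma_p = 3.516, 0.9663   # RRY estimates from above

mean_T = np.exp(mu_p + 0.5 * sigma_p ** 2)            # ~53.67 hours
std_T = np.sqrt(np.exp(2.0 * mu_p + sigma_p ** 2)
                * (np.exp(sigma_p ** 2) - 1.0))       # ~66.69 hours

print(round(mean_T, 2), round(std_T, 2))
</syntaxhighlight>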


The correlation coefficient can be estimated as:

::<math>\widehat{\rho }=0.9754\,\!</math>

The above example can be repeated in Weibull++ using RRY.

[[Image:Lognormal Distribution Example 2 Data and Result.png|center|650px| ]]


The mean can be obtained from the QCP and both the mean and the standard deviation can be obtained from the Function Wizard.

===Rank Regression on X===
Performing a rank regression on X requires that a straight line be fitted to a set of data points such that the sum of the squares of the horizontal deviations from the points to the line is minimized.

Again, the first task is to bring our ''cdf'' function into a linear form. This step is exactly the same as in regression on Y analysis and all the equations apply in this case too. The deviation from the previous analysis begins on the least squares fit step, where in this case we treat <math>x\,\!</math> as the dependent variable and <math>y\,\!</math> as the independent variable. The best-fitting straight line to the data, for regression on X (see [[Parameter Estimation]]), is the straight line:


::<math>x=\widehat{a}+\widehat{b}y\,\!</math>

The corresponding equations for <math>\widehat{a}\,\!</math> and <math>\widehat{b}\,\!</math> are:

::<math>\hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}\,\!</math>

and:

::<math>\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!</math>

where:

::<math>{{y}_{i}}={{\Phi }^{-1}}\left[ F(t_{i}^{\prime }) \right]\,\!</math>

and:

::<math>{{x}_{i}}=t_{i}^{\prime }\,\!</math>

and the <math>F(t_{i}^{\prime })\,\!</math> is estimated from the median ranks. Once <math>\widehat{a}\,\!</math> and <math>\widehat{b}\,\!</math> are obtained, solve the linear equation for the unknown <math>y\,\!</math>, which corresponds to:

::<math>y=-\frac{\widehat{a}}{\widehat{b}}+\frac{1}{\widehat{b}}x\,\!</math>

Solving for the parameters we get:

::<math>a=-\frac{\widehat{a}}{\widehat{b}}=-\frac{{{\mu }'}}{\sigma'}\,\!</math>

and:

::<math>b=\frac{1}{\widehat{b}}=\frac{1}{\sigma'}\,\!</math>


The correlation coefficient is evaluated as before, using the equation given in the [[The_Lognormal_Distribution#Rank_Regression_on_Y|previous section]].
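
A regression on X sketch differs from the regression on Y sketch given earlier only in which variable is treated as dependent. A minimal version (Python assumed; the hypothetical data and Benard's approximation for the median ranks are assumptions of the illustration) is:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

# Hypothetical complete failure times, in hours (illustration only).
t = np.sort(np.array([16.0, 34.0, 53.0, 75.0, 93.0, 120.0]))
N = len(t)
F = (np.arange(1, N + 1) - 0.3) / (N + 0.4)   # Benard's approximation (assumption)

x = np.log(t)                                 # x_i = t_i'
y = stats.norm.ppf(F)                         # y_i = Phi^-1[ F(t_i') ]

# Least squares regression on X:  x = a + b*y
b_hat = (np.sum(x * y) - np.sum(x) * np.sum(y) / N) / \
        (np.sum(y ** 2) - np.sum(y) ** 2 / N)
a_hat = np.mean(x) - b_hat * np.mean(y)

sigma_p = b_hat       # sigma' = b
mu_p = a_hat          # mu' = (a/b)*sigma' = a

print(mu_p, sigma_p)
</syntaxhighlight>
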
'''Lognormal Distribution RRX Example'''

Using the same data set from the [[The_Lognormal_Distribution#RRY_Example|RRY example]] given above, and assuming a lognormal distribution, estimate the parameters and the correlation coefficient, <math>\rho \,\!</math>, using rank regression on X.


'''Solution'''

::<math>\begin{align}
  & \widehat{b}= & \frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime }{{y}_{i}}-\tfrac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime }\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}}}{14}}{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{14}} \\ 
  &  &  \\ 
  & \widehat{b}= & \frac{10.4473-(49.2220)(0)/14}{11.3646-{{(0)}^{2}}/14}   
\end{align}\,\!</math>


or:

::<math>\widehat{b}=0.9193\,\!</math>

and:

::<math>\hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime }}{14}-\widehat{b}\frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}}}{14}\,\!</math>

or:

::<math>\widehat{a}=\frac{49.2220}{14}-(0.9193)\frac{(0)}{14}=3.5159\,\!</math>

Therefore:

::<math>{\sigma'}=\widehat{b}=0.9193\,\!</math>

and:

::<math>{\mu }'=\frac{\widehat{a}}{\widehat{b}}{\sigma'}=\frac{3.5159}{0.9193}\cdot 0.9193=3.5159\,\!</math>

Using the equations for the mean and standard deviation, we get:


::<math>\overline{T}=\mu =51.3393\text{ hours}\,\!</math>

and:

::<math>\begin{align}
{\sigma}=59.1682\text{ hours}
\end{align}\,\!</math>

The correlation coefficient is found using the equation in the [[The Correlation Coefficient Calculation|previous section]]:

::<math>\widehat{\rho }=0.9754\,\!</math>


Note that the regression on Y analysis is not necessarily the same as the regression on X. The only time when the results of the two regression types are the same (i.e., will yield the same equation for a line) is when the data lie perfectly on a line.

Using Weibull++ with the Rank Regression on X option, the results are:

[[Image:Lognormal Distribution Example 3 Data and Result.png|center|650px| ]]


===Maximum Likelihood Estimation===

::<math>\begin{align}
   & \frac{\partial \Lambda }{\partial {\mu }'}= & \frac{1}{\sigma'^{2}}\cdot \underset{i=1}{\overset{14}{\mathop \sum }}\,\left( \ln ({{t}_{i}})-{\mu }' \right)=0 \\ 
  & \frac{\partial \Lambda }{\partial {{\sigma'}}}= & \underset{i=1}{\overset{14}{\mathop \sum }}\,\left( \frac{{{\left( \ln ({{t}_{i}})-{\mu }' \right)}^{2}}}{\sigma'^{3}}-\frac{1}{{{\sigma'}}} \right)=0   
\end{align}\,\!</math>


Substituting the values of <math>{{t}_{i}}\,\!</math> and solving the above system simultaneously, we get:

::<math>\begin{align}
   & {{{\hat{\sigma' }}}}= & 0.849 \\ 
  & {{{\hat{\mu }}}^{\prime }}= & 3.516   
\end{align}\,\!</math>

Using the equations for the mean and standard deviation in the [[The_Lognormal_Distribution#Lognormal_Distribution_Functions|Lognormal Distribution Functions]] section above, we get:

::<math>\overline{T}=\hat{\mu }=48.25\text{ hours}\,\!</math>

and:

::<math>{{\hat{\sigma }}}=49.61\text{ hours}\,\!</math>


The variance/covariance matrix is given by:

::<math>\left[ \begin{matrix}
   \widehat{Var}\left( {{{\hat{\mu }}}^{\prime }} \right) & {} & \widehat{Cov}\left( {{{\hat{\mu }}}^{\prime }},{{{\hat{\sigma' }}}} \right)=0.0000  \\
   {} & {} & {}  \\
   \widehat{Cov}\left( {{{\hat{\mu }}}^{\prime }},{{{\hat{\sigma' }}}} \right)=0.0000 & {} & \widehat{Var}\left( {{{\hat{\sigma' }}}} \right)=0.0258  \\
\end{matrix} \right]\,\!</math>
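
For complete data, the two likelihood equations above have a closed-form solution: <math>{{\widehat{\mu }}^{\prime }}\,\!</math> is the sample mean of the logarithms of the times-to-failure and <math>{{\widehat{\sigma'}}}\,\!</math> is their standard deviation computed with the 1/N form. The sketch below (Python assumed) checks this with the five failure times used in the likelihood ratio bound example later in this chapter, for which the MLE solution is quoted as <math>{{\widehat{\mu }}^{\prime }}=4.2926\,\!</math> and <math>{{\widehat{\sigma'}}}=0.32361\,\!</math>.

<syntaxhighlight lang="python">
import numpy as np

# Failure times from the likelihood ratio bound example (45, 60, 75, 90, 115 hours)
t = np.array([45.0, 60.0, 75.0, 90.0, 115.0])
ln_t = np.log(t)

mu_hat = np.mean(ln_t)              # ~4.2926
sigma_hat = np.std(ln_t, ddof=0)    # ~0.3236  (1/N form implied by the MLE equations)

print(round(mu_hat, 4), round(sigma_hat, 4))
</syntaxhighlight>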


==Confidence Bounds==

===Fisher Matrix Bounds===
====Bounds on the Parameters====
The lower and upper bounds on the mean, <math>{\mu }'\,\!</math>, are estimated from:


::<math>\begin{align}
   & \mu _{U}^{\prime }= & {{\widehat{\mu }}^{\prime }}+{{K}_{\alpha }}\sqrt{Var({{\widehat{\mu }}^{\prime }})}\text{ (upper bound),} \\ 
  & \mu _{L}^{\prime }= & {{\widehat{\mu }}^{\prime }}-{{K}_{\alpha }}\sqrt{Var({{\widehat{\mu }}^{\prime }})}\text{ (lower bound)}\text{.}   
\end{align}\,\!</math>

For the standard deviation, <math>{\widehat{\sigma}'}\,\!</math>, <math>\ln ({{\widehat{\sigma'}}})\,\!</math> is treated as normally distributed, and the bounds are estimated from:

::<math>\begin{align}
   & {{\sigma}_{U}}= & {{\widehat{\sigma'}}}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma'}}})}}{{{\widehat{\sigma'}}}}}}\text{ (upper bound),} \\ 
  & {{\sigma }_{L}}= & \frac{{{\widehat{\sigma'}}}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma' }}})}}{{{\widehat{\sigma'}}}}}}}\text{ (lower bound),}   
\end{align}\,\!</math>

where <math>{{K}_{\alpha }}\,\!</math> is defined by:

::<math>\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})\,\!</math>

If <math>\delta \,\!</math> is the confidence level, then <math>\alpha =\tfrac{1-\delta }{2}\,\!</math> for the two-sided bounds and <math>\alpha =1-\delta \,\!</math> for the one-sided bounds.


The variances and covariances of <math>{{\widehat{\mu }}^{\prime }}\,\!</math> and <math>{{\widehat{\sigma'}}}\,\!</math> are estimated as follows:

::<math>\left( \begin{matrix}
   \widehat{Var}\left( {{\widehat{\mu }}^{\prime }} \right) & \widehat{Cov}\left( {{\widehat{\mu }}^{\prime }},{{\widehat{\sigma'}}} \right)  \\
   {} & {}  \\
   \widehat{Cov}\left( {{\widehat{\mu }}^{\prime }},{{\widehat{\sigma'}}} \right) & \widehat{Var}\left( {{\widehat{\sigma'}}} \right)  \\
\end{matrix} \right)=\left( \begin{matrix}
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{{\mu }'}^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {\mu }'\partial {{\sigma'}}}  \\
   {} & {}  \\
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {\mu }'\partial {{\sigma'}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \sigma'^{2}}  \\
\end{matrix} \right)_{{\mu }'={{\widehat{\mu }}^{\prime }},{{\sigma'}}={{\widehat{\sigma'}}}}^{-1}\,\!</math>

where <math>\Lambda \,\!</math> is the log-likelihood function of the lognormal distribution.
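
A minimal numerical sketch of these parameter bounds is shown below (Python assumed). The variance of <math>{{\widehat{\mu }}^{\prime }}\,\!</math> is a placeholder value used only for illustration; <math>Var({{\widehat{\sigma'}}})=0.0258\,\!</math> and the point estimates are taken from the MLE example above.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

delta = 0.90                     # two-sided confidence level
alpha = (1.0 - delta) / 2.0
K_a = stats.norm.isf(alpha)      # K_alpha from  alpha = 1 - Phi(K_alpha)

mu_hat, sigma_hat = 3.516, 0.849   # MLE point estimates from the example above
var_mu = 0.05                      # placeholder value (hypothetical)
var_sigma = 0.0258                 # Var(sigma'-hat) from the matrix above

mu_U = mu_hat + K_a * np.sqrt(var_mu)        # upper bound on mu'
mu_L = mu_hat - K_a * np.sqrt(var_mu)        # lower bound on mu'

factor = np.exp(K_a * np.sqrt(var_sigma) / sigma_hat)
sigma_U = sigma_hat * factor                 # upper bound on sigma'
sigma_L = sigma_hat / factor                 # lower bound on sigma'

print((mu_L, mu_U), (sigma_L, sigma_U))
</syntaxhighlight>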


====Bounds on Time (Type 1)====
The bounds around time for a given lognormal percentile, or unreliability, are estimated by first solving the reliability equation with respect to time, as follows:

::<math>{t}'({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}})={{\widehat{\mu }}^{\prime }}+z\cdot {{\widehat{\sigma' }}}\,\!</math>

where:

::<math>z={{\Phi }^{-1}}\left[ F({t}') \right]\,\!</math>

and:

::<math>\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({t}')}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!</math>


The next step is to calculate the variance of <math>{t}'({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}})\,\!</math>:

::<math>\begin{align}
   & Var({{{\hat{t}}}^{\prime }})= & {{\left( \frac{\partial {t}'}{\partial {\mu }'} \right)}^{2}}Var({{\widehat{\mu }}^{\prime }})+{{\left( \frac{\partial {t}'}{\partial {{\sigma'}}} \right)}^{2}}Var({{\widehat{\sigma' }}})+2\left( \frac{\partial {t}'}{\partial {\mu }'} \right)\left( \frac{\partial {t}'}{\partial {{\sigma'}}} \right)Cov\left( {{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}} \right) \\ 
  &  &  \\ 
  & Var({{{\hat{t}}}^{\prime }})= & Var({{\widehat{\mu }}^{\prime }})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma' }}})+2\cdot \widehat{z}\cdot Cov\left( {{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}} \right)   
\end{align}\,\!</math>


The upper and lower bounds are then found by:

::<math>\begin{align}
   & t_{U}^{\prime }= & \ln {{t}_{U}}={{{\hat{t}}}^{\prime }}+{{K}_{\alpha }}\sqrt{Var({{{\hat{t}}}^{\prime }})} \\ 
  & t_{L}^{\prime }= & \ln {{t}_{L}}={{{\hat{t}}}^{\prime }}-{{K}_{\alpha }}\sqrt{Var({{{\hat{t}}}^{\prime }})}   
\end{align}\,\!</math>

Solving for <math>{{t}_{U}}\,\!</math> and <math>{{t}_{L}}\,\!</math> we get:

::<math>\begin{align}
   & {{t}_{U}}= & {{e}^{t_{U}^{\prime }}}\text{ (upper bound),} \\ 
  & {{t}_{L}}= & {{e}^{t_{L}^{\prime }}}\text{ (lower bound)}\text{.}   
\end{align}\,\!</math>
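
The corresponding calculation can be sketched as follows (Python assumed; the variance and covariance inputs are placeholder values standing in for the entries of the inverted Fisher matrix above):

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

mu_hat, sigma_hat = 3.516, 0.849            # MLE point estimates from the example above
var_mu, var_sigma, cov = 0.05, 0.0258, 0.0  # placeholder variance/covariance terms
K_a = stats.norm.isf(0.05)                  # two-sided 90% bounds

F = 0.10                                    # unreliability level of interest
z = stats.norm.ppf(F)                       # z = Phi^-1[ F(t') ]

t_prime = mu_hat + z * sigma_hat            # t' = mu' + z*sigma'
var_t_prime = var_mu + z ** 2 * var_sigma + 2.0 * z * cov

t_U = np.exp(t_prime + K_a * np.sqrt(var_t_prime))   # upper bound on time
t_L = np.exp(t_prime - K_a * np.sqrt(var_t_prime))   # lower bound on time
print(t_L, t_U)
</syntaxhighlight>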


====Bounds on Reliability (Type 2)====
The reliability of the lognormal distribution is:

::<math>\hat{R}(t;{{\hat{\mu }}^{'}},{{\hat{\sigma }}^{'}})=\int_{t'}^{\infty }{\frac{1}{{{{\hat{\sigma }}}^{'}}\sqrt{2\pi }}}{{e}^{-\frac{1}{2}{{\left( \frac{x-{{{\hat{\mu }}}^{'}}}{{{{\hat{\sigma }}}^{'}}} \right)}^{2}}}}dx\,\!</math>

where <math>t'=\ln (t)\,\!</math>. Letting <math>\hat{z}(x)=\frac{x-{{{\hat{\mu }}}^{'}}}{{{\sigma }^{'}}}\,\!</math>, the above equation becomes:

::<math>\hat{R}\left( \hat{z}(t') \right)=\int_{\hat{z}(t')}^{\infty }{\frac{1}{\sqrt{2\pi }}}{{e}^{-\frac{1}{2}{{z}^{2}}}}dz\,\!</math>

The bounds on <math>z\,\!</math> are estimated from:

::<math>\begin{align}
   & {{z}_{U}}= & \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\ 
  & {{z}_{L}}= & \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})}   
\end{align}\,\!</math>

where:

::<math>\begin{align}
   & Var(\hat{z})=\left( \frac{\partial {z}}{\partial \mu '} \right)_{\hat{\mu }'}^{2}Var\left( \hat{\mu }' \right)+\left( \frac{\partial {z}}{\partial \sigma '} \right)_{\hat{\sigma }'}^{2}Var\left( \hat{\sigma }' \right) \\ 
  & +2\left( \frac{\partial{z}}{\partial \mu '} \right)_{\hat{\mu }'}^{{}}\left( \frac{\partial {z}}{\partial \sigma '} \right)_{\hat{\sigma }'}^{{}}Cov\left( \hat{\mu }',\hat{\sigma }' \right)   
\end{align}\,\!</math>

or:

::<math>Var(\hat{z})=\frac{1}{{{{\hat{\sigma }}}^{'2}}}\left[ Var\left( \hat{\mu }' \right)+{{{\hat{z}}}^{2}}Var\left( \sigma ' \right)+2\cdot \hat{z}\cdot Cov\left( \hat{\mu }',\hat{\sigma }' \right) \right]\,\!</math>

The upper and lower bounds on reliability are:

::<math>\begin{align}
   & {{R}_{U}}= & \int_{{{z}_{L}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Upper bound)} \\ 
  & {{R}_{L}}= & \int_{{{z}_{U}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Lower bound)}   
\end{align}\,\!</math>
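
The reliability bounds follow the same pattern (Python assumed; the variance terms are again placeholders): compute <math>\widehat{z}\,\!</math> at the time of interest, propagate its variance, and convert the <math>z\,\!</math> bounds back to reliabilities with the standard normal survival function.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

mu_hat, sigma_hat = 3.516, 0.849            # MLE point estimates from the example above
var_mu, var_sigma, cov = 0.05, 0.0258, 0.0  # placeholder variance/covariance terms
K_a = stats.norm.isf(0.05)                  # two-sided 90% bounds

t = 50.0                                    # time of interest (hours)
z = (np.log(t) - mu_hat) / sigma_hat        # z-hat at t' = ln(t)

var_z = (var_mu + z ** 2 * var_sigma + 2.0 * z * cov) / sigma_hat ** 2

z_U = z + K_a * np.sqrt(var_z)
z_L = z - K_a * np.sqrt(var_z)

R_U = stats.norm.sf(z_L)    # upper reliability bound comes from the lower z bound
R_L = stats.norm.sf(z_U)    # lower reliability bound comes from the upper z bound
print(R_L, R_U)
</syntaxhighlight>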


===Likelihood Ratio Confidence Bounds===
====Bounds on Parameters====
As covered in [[Parameter Estimation]], the likelihood confidence bounds are calculated by finding values for <math>{{\theta }_{1}}\,\!</math> and <math>{{\theta }_{2}}\,\!</math> that satisfy:

::<math>-2\cdot \text{ln}\left( \frac{L({{\theta }_{1}},{{\theta }_{2}})}{L({{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}})} \right)=\chi _{\alpha ;1}^{2}\,\!</math>

This equation can be rewritten as:

::<math>L({{\theta }_{1}},{{\theta }_{2}})=L({{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}})\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}\,\!</math>

For complete data, the likelihood formula for the lognormal distribution is given by:

::<math>L({\mu }',{{\sigma' }})=\underset{i=1}{\overset{N}{\mathop \prod }}\,f({{x}_{i}};{\mu }',{{\sigma' }})=\underset{i=1}{\overset{N}{\mathop \prod }}\,\frac{1}{{{x}_{i}}\cdot {{\sigma' }}\cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}({{x}_{i}})-{\mu }'}{{{\sigma'}}} \right)}^{2}}}}\,\!</math>

where the <math>{{x}_{i}}\,\!</math> values represent the original time-to-failure data. For a given value of <math>\alpha \,\!</math>, values for <math>{\mu }'\,\!</math> and <math>{{\sigma' }}\,\!</math> can be found which represent the maximum and minimum values that satisfy the likelihood ratio equation. These represent the confidence bounds for the parameters at a confidence level <math>\delta ,\,\!</math> where <math>\alpha =\delta \,\!</math> for two-sided bounds and <math>\alpha =2\delta -1\,\!</math> for one-sided.
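
One way to carry out this search numerically is sketched below (Python assumed). It fixes <math>{{\sigma'}}\,\!</math> on a grid, solves the likelihood ratio equality for the two matching values of <math>{\mu }'\,\!</math> with a root finder, and keeps the overall extremes. The failure times and the <math>\chi _{0.75;1}^{2}\,\!</math> value are those of the example that follows; the grid range and bracketing widths are illustrative choices.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats, optimize

x = np.array([45.0, 60.0, 75.0, 90.0, 115.0])   # failure times from the example below
ln_x = np.log(x)
chi2 = 1.323303                                 # chi-squared(0.75; 1), as in the example

def loglik(mu, sig):
    """Complete-data log-likelihood of the lognormal distribution."""
    z = (ln_x - mu) / sig
    return np.sum(-np.log(x) - np.log(sig) - 0.5 * np.log(2.0 * np.pi) - 0.5 * z ** 2)

mu_mle, sig_mle = ln_x.mean(), ln_x.std(ddof=0)
cutoff = loglik(mu_mle, sig_mle) - chi2 / 2.0   # contour level of the log-likelihood

mu_lo, mu_hi = np.inf, -np.inf
for sig in np.linspace(0.20, 0.60, 401):        # scan sigma' over a generous range
    g = lambda mu: loglik(mu, sig) - cutoff
    if g(mu_mle) <= 0.0:                        # this sigma' never reaches the contour
        continue
    lo = optimize.brentq(g, mu_mle - 5.0, mu_mle)   # left intersection
    hi = optimize.brentq(g, mu_mle, mu_mle + 5.0)   # right intersection
    mu_lo, mu_hi = min(mu_lo, lo), max(mu_hi, hi)

print(mu_lo, mu_hi)   # should land near 4.11 and 4.47, as in the example table
</syntaxhighlight>
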
=====Example: LR Bounds on Parameters=====
'''Lognormal Distribution Likelihood Ratio Bound Example (Parameters)'''

Five units are put on a reliability test and experience failures at 45, 60, 75, 90, and 115 hours. Assuming a lognormal distribution, the MLE parameter estimates are calculated to be <math>{{\widehat{\mu }}^{\prime }}=4.2926\,\!</math> and <math>{{\widehat{\sigma'}}}=0.32361.\,\!</math> Calculate the two-sided 75% confidence bounds on these parameters using the likelihood ratio method.

'''Solution'''

First, calculate the value of the likelihood function at the MLE estimates:

::<math>\begin{align}
   L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma'}}})= & \underset{i=1}{\overset{5}{\mathop \prod }}\,\frac{1}{{{x}_{i}}\cdot 0.32361\cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}({{x}_{i}})-4.2926}{0.32361} \right)}^{2}}}} \\ 
   L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma'}}})= & 1.115256\times {{10}^{-10}}   
\end{align}\,\!</math>


where <math>{{x}_{i}}\,\!</math> are the original time-to-failure data points. We can now rearrange the likelihood ratio equation to the form:

::<math>L({\mu }',{{\sigma' }})-L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}})\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}=0\,\!</math>

Since our specified confidence level, <math>\delta \,\!</math>, is 75%, we can calculate the value of the chi-squared statistic, <math>\chi _{0.75;1}^{2}=1.323303.\,\!</math> We can now substitute this information into the equation:

::<math>\begin{align}
  & L({\mu }',{{\sigma'}})-1.115256\times {{10}^{-10}}\cdot {{e}^{\tfrac{-1.323303}{2}}}= & 0 \\ 
  & L({\mu }',{{\sigma'}})-5.754703\times {{10}^{-11}}= & 0   
\end{align}\,\!</math>


It now remains to find the values of <math>{\mu }'\,\!</math> and <math>{{\sigma'}}\,\!</math> which satisfy this equation. This is an iterative process that requires setting the value of <math>{{\sigma'}}\,\!</math> and finding the appropriate values of <math>{\mu }'\,\!</math>, and vice versa.

The following table gives the values of <math>{\mu }'\,\!</math> based on given values of <math>{{\sigma'}}\,\!</math>.

<center><math>\begin{matrix}
   {{\sigma }^{\prime }} & {{{\mu }'}_{1}} & {{{\mu }'}_{2}} & {{\sigma }^{\prime }} & {{{\mu }'}_{1}} & {{{\mu }'}_{2}}  \\
   0.35 & 4.1166 & 4.4687 & 0.48 & 4.2221 & 4.3632  \\
   0.36 & 4.1150 & 4.4703 & {} & {} & {}  \\
\end{matrix}\,\!</math></center>


These points are represented graphically in the following contour plot:

[[Image:WB.10 lognormal contour plot.png|center|450px| ]]

(Note that this plot is generated with degrees of freedom <math>k=1\,\!</math>, as we are only determining bounds on one parameter. The contour plots generated in Weibull++ are done with degrees of freedom <math>k=2\,\!</math>, for use in comparing both parameters simultaneously.) As can be determined from the table, the lowest calculated value for <math>{\mu }'\,\!</math> is 4.1145, while the highest is 4.4708. These represent the two-sided 75% confidence limits on this parameter. Since solutions for the equation do not exist for values of <math>{{\sigma'}}\,\!</math> below 0.24 or above 0.48, these can be considered the two-sided 75% confidence limits for this parameter. In order to obtain more accurate values for the confidence limits on <math>{{\sigma'}}\,\!</math>, we can perform the same procedure as before, but finding the two values of <math>{{\sigma '}}\,\!</math> that correspond with a given value of <math>{\mu }'.\,\!</math> Using this method, we find that the 75% confidence limits on <math>{{\sigma'}}\,\!</math> are 0.23405 and 0.48936, which are close to the initial estimates of 0.24 and 0.48.


====Bounds on Time and Reliability====
In order to calculate the bounds on a time estimate for a given reliability, or on a reliability estimate for a given time, the likelihood function needs to be rewritten in terms of one parameter and time/reliability, so that the maximum and minimum values of the time can be observed as the parameter is varied. This can be accomplished by substituting a form of the normal reliability equation into the likelihood function. The normal reliability equation can be written as:

::<math>R=1-\Phi \left( \frac{\text{ln}(t)-{\mu }'}{{{\sigma'}}} \right)\,\!</math>

This can be rearranged to the form:

::<math>{\mu }'=\text{ln}(t)-{{\sigma'}}\cdot {{\Phi }^{-1}}(1-R)\,\!</math>

where <math>{{\Phi }^{-1}}\,\!</math> is the inverse standard normal. This equation can now be substituted into the likelihood function to produce a likelihood equation in terms of <math>{{\sigma'}},\,\!</math> <math>t\,\!</math> and <math>R\,\!</math>:

::<math>L({{\sigma'}},t/R)=\underset{i=1}{\overset{N}{\mathop \prod }}\,\frac{1}{{{x}_{i}}\cdot {{\sigma'}}\cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}({{x}_{i}})-\left( \text{ln}(t)-{{\sigma'}}\cdot {{\Phi }^{-1}}(1-R) \right)}{{{\sigma'}}} \right)}^{2}}}}\,\!</math>

The unknown variable <math>t/R\,\!</math> depends on what type of bounds are being determined. If one is trying to determine the bounds on time for a given reliability, then <math>R\,\!</math> is a known constant and <math>t\,\!</math> is the unknown variable. Conversely, if one is trying to determine the bounds on reliability for a given time, then <math>t\,\!</math> is a known constant and <math>R\,\!</math> is the unknown variable. Either way, the above equation can be used to solve the likelihood ratio equation for the values of interest.
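
A numerical sketch of the bounds-on-time case (Python assumed) substitutes <math>{\mu }'=\text{ln}(t)-{{\sigma'}}\cdot {{\Phi }^{-1}}(1-R)\,\!</math> into the log-likelihood and, for each <math>{{\sigma'}}\,\!</math> on a grid, finds the two times at which the likelihood ratio equality holds. The data, <math>R=0.80\,\!</math> and the chi-squared value are those used in the examples of this chapter; the grid and bracketing choices are illustrative.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats, optimize

x = np.array([45.0, 60.0, 75.0, 90.0, 115.0])   # failure times from the examples
ln_x = np.log(x)
R = 0.80                                        # reliability of interest
chi2 = 1.323303                                 # chi-squared(0.75; 1)
zR = stats.norm.ppf(1.0 - R)                    # Phi^-1(1 - R)

def loglik(mu, sig):
    z = (ln_x - mu) / sig
    return np.sum(-np.log(x) - np.log(sig) - 0.5 * np.log(2.0 * np.pi) - 0.5 * z ** 2)

mu_mle, sig_mle = ln_x.mean(), ln_x.std(ddof=0)
cutoff = loglik(mu_mle, sig_mle) - chi2 / 2.0

def g(ln_t, sig):
    """Log-likelihood along mu' = ln(t) - sigma'*Phi^-1(1-R), minus the contour level."""
    return loglik(ln_t - sig * zR, sig) - cutoff

t_lo, t_hi = np.inf, -np.inf
for sig in np.linspace(0.20, 0.60, 401):
    peak = mu_mle + sig * zR                    # ln(t) that maximizes g for this sigma'
    if g(peak, sig) <= 0.0:
        continue
    lo = optimize.brentq(g, peak - 5.0, peak, args=(sig,))
    hi = optimize.brentq(g, peak, peak + 5.0, args=(sig,))
    t_lo, t_hi = min(t_lo, np.exp(lo)), max(t_hi, np.exp(hi))

print(t_lo, t_hi)   # should land near the 43.6 and 66.1 hour limits quoted below
</syntaxhighlight>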


=====Example: LR Bounds on Time=====
'''Lognormal Distribution Likelihood Ratio Bound Example (Time)'''

For the same data set given for the [[The_Lognormal_Distribution#Example:_LR_Bounds_on_Parameters|parameter bounds example]], determine the two-sided 75% confidence bounds on the time estimate for a reliability of 80%.  The ML estimate for the time at <math>R(t)=80%\,\!</math> is 55.718.

'''Solution'''

In this example, we are trying to determine the two-sided 75% confidence bounds on the time estimate of 55.718. This is accomplished by substituting <math>R=0.80\,\!</math> and <math>\alpha =0.75\,\!</math> into the likelihood function, and varying <math>{{\sigma' }}\,\!</math> until the maximum and minimum values of <math>t\,\!</math> are found. The following table gives the values of <math>t\,\!</math> based on given values of <math>{{\sigma' }}\,\!</math>.

<center><math>\begin{matrix}
   {{\sigma }^{\prime }} & {{t}_{1}} & {{t}_{2}} & {{\sigma }^{\prime }} & {{t}_{1}} & {{t}_{2}}  \\
   0.35 & 45.697 & 64.983 & 0.48 & 45.517 & 52.418  \\
   0.36 & 45.242 & 64.541 & {} & {} & {}  \\
\end{matrix}\,\!</math></center>


This data set is represented graphically in the following contour plot:

[[Image:WB.10 time vs sigma.png|center|450px| ]]

As can be determined from the table, the lowest calculated value for <math>t\,\!</math> is 43.634, while the highest is 66.085. These represent the two-sided 75% confidence limits on the time at which reliability is equal to 80%.


=====Example: LR Bounds on Reliability=====
'''Lognormal Distribution Likelihood Ratio Bound Example (Reliability)'''

For the same data set given above for the [[The_Lognormal_Distribution#Example:_LR_Bounds_on_Parameters|parameter bounds example]], determine the two-sided 75% confidence bounds on the reliability estimate for <math>t=65\,\!</math>.  The ML estimate for the reliability at <math>t=65\,\!</math> is 64.261%.

'''Solution'''

In this example, we are trying to determine the two-sided 75% confidence bounds on the reliability estimate of 64.261%. This is accomplished by substituting <math>t=65\,\!</math> and <math>\alpha =0.75\,\!</math> into the likelihood function, and varying <math>{{\sigma'}}\,\!</math> until the maximum and minimum values of <math>R\,\!</math> are found. The following table gives the values of <math>R\,\!</math> based on given values of <math>{{\sigma' }}\,\!</math>.

<center><math>\begin{matrix}
   {{\sigma }^{\prime }} & {{R}_{1}} & {{R}_{2}} & {{\sigma }^{\prime }} & {{R}_{1}} & {{R}_{2}}  \\
   0.35 & 43.444% & 79.979% & 0.48 & 53.956% & 65.299%  \\
   0.36 & 43.450% & 79.444% & {} & {} & {}  \\
\end{matrix}\,\!</math></center>


This data set is represented graphically in the following contour plot:

[[Image:WB.10 reliability v sigma.png|center|450px| ]]

As can be determined from the table, the lowest calculated value for <math>R\,\!</math> is 43.444%, while the highest is 81.508%. These represent the two-sided 75% confidence limits on the reliability at <math>t=65\,\!</math>.


===Bayesian Confidence Bounds===
===Bayesian Confidence Bounds===
====Bounds on Parameters====
====Bounds on Parameters====
From [[Parameter Estimation]], we know that the marginal distribution of parameter <math>{\mu }'</math> is:  
From [[Parameter Estimation]], we know that the marginal distribution of parameter <math>{\mu }'\,\!</math> is:  


::<math>\begin{align}
::<math>\begin{align}
   f({\mu }'|Data)= & \int_{0}^{\infty }f({\mu }',{{\sigma'}}|Data)d{{\sigma'}} \\  
   f({\mu }'|Data)= & \int_{0}^{\infty }f({\mu }',{{\sigma'}}|Data)d{{\sigma'}} \\  
   = & \frac{\int_{0}^{\infty }L(Data|{\mu }',{{\sigma'}})\varphi ({\mu }')\varphi ({{\sigma'}})d{{\sigma'}}}{\int_{0}^{\infty }\int_{-\infty }^{\infty }L(Data|{\mu }',{{\sigma'}})\varphi ({\mu }')\varphi ({{\sigma'}})d{\mu }'d{{\sigma'}}}   
   = & \frac{\int_{0}^{\infty }L(Data|{\mu }',{{\sigma'}})\varphi ({\mu }')\varphi ({{\sigma'}})d{{\sigma'}}}{\int_{0}^{\infty }\int_{-\infty }^{\infty }L(Data|{\mu }',{{\sigma'}})\varphi ({\mu }')\varphi ({{\sigma'}})d{\mu }'d{{\sigma'}}}   
\end{align}</math>
\end{align}\,\!</math>


where:

::<math>\varphi ({{\sigma '}})\,\!</math> is <math>\tfrac{1}{{{\sigma '}}}\,\!</math>, the non-informative prior of <math>{{\sigma '}}\,\!</math>.

::<math>\varphi ({\mu }')\,\!</math> is a uniform distribution from <math>-\infty \,\!</math> to <math>+\infty \,\!</math>, the non-informative prior of <math>{\mu }'\,\!</math>.

With the above prior distributions, <math>f({\mu }'|Data)\,\!</math> can be rewritten as:


::<math>f({\mu }'|Data)=\frac{\int_{0}^{\infty }L(Data|{\mu }',{{\sigma '}})\tfrac{1}{{{\sigma '}}}d{{\sigma '}}}{\int_{0}^{\infty }\int_{-\infty }^{\infty }L(Data|{\mu }',{{\sigma '}})\tfrac{1}{{{\sigma '}}}d{\mu }'d{{\sigma '}}}\,\!</math>


The one-sided upper bound of <math>{\mu }'\,\!</math> is:


::<math>CL=P({\mu }'\le \mu _{U}^{\prime })=\int_{-\infty }^{\mu _{U}^{\prime }}f({\mu }'|Data)d{\mu }'\,\!</math>


The one-sided lower bound of <math>{\mu }'\,\!</math> is:


::<math>1-CL=P({\mu }'\le \mu _{L}^{\prime })=\int_{-\infty }^{\mu _{L}^{\prime }}f({\mu }'|Data)d{\mu }'\,\!</math>


The two-sided bounds of <math>{\mu }'\,\!</math> are:


::<math>CL=P(\mu _{L}^{\prime }\le {\mu }'\le \mu _{U}^{\prime })=\int_{\mu _{L}^{\prime }}^{\mu _{U}^{\prime }}f({\mu }'|Data)d{\mu }'\,\!</math>


The same method can be used to obtain the bounds of <math>{{\sigma '}}\,\!</math>.


====Bounds on Time (Type 1)====
====Bounds on Time (Type 1)====
The reliable life of the lognormal distribution is:
::<math>\begin{align}
\ln T={\mu }'+{{\sigma '}}{{\Phi }^{-1}}(1-R)
\end{align}\,\!</math>


The one-sided upper bound on time is given by:


::<math>CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(\ln t\le \ln {{t}_{U}})=\underset{}{\overset{}{\mathop{\Pr }}}\,({\mu }'+{{\sigma '}}{{\Phi }^{-1}}(1-R)\le \ln {{t}_{U}})\,\!</math>


The above equation can be rewritten in terms of <math>{\mu }'\,\!</math> as:


::<math>CL=\underset{}{\overset{}{\mathop{\Pr }}}\,({\mu }'\le \ln {{t}_{U}}-{{\sigma '}}{{\Phi }^{-1}}(1-R))\,\!</math>


From the posterior distribution of <math>{\mu }'\,\!</math>, we get:


::<math>CL=\frac{\int_{0}^{\infty }\int_{-\infty }^{\ln {{t}_{U}}-{{\sigma '}}{{\Phi }^{-1}}(1-R)}L({{\sigma '}},{\mu }')\tfrac{1}{{{\sigma '}}}d{\mu }'d{{\sigma '}}}{\int_{0}^{\infty }\int_{-\infty }^{\infty }L({{\sigma '}},{\mu }')\tfrac{1}{{{\sigma '}}}d{\mu }'d{{\sigma '}}}\,\!</math>


The above equation is solved w.r.t. <math>{{t}_{U}}\,\!</math>. The same method can be applied for one-sided lower bounds and two-sided bounds on time.


====Bounds on Reliability (Type 2)====
====Bounds on Reliability (Type 2)====
The one-sided upper bound on reliability is given by:


::<math>CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(R\le {{R}_{U}})=\underset{}{\overset{}{\mathop{\Pr }}}\,({\mu }'\le \ln t-{{\sigma '}}{{\Phi }^{-1}}(1-{{R}_{U}}))\,\!</math>


From the posterior distribution of <math>{\mu }'\,\!</math>, this becomes:


::<math>CL=\frac{\int_{0}^{\infty }\int_{-\infty }^{\ln t-{{\sigma '}}{{\Phi }^{-1}}(1-{{R}_{U}})}L({{\sigma'}},{\mu }')\tfrac{1}{{{\sigma'}}}d{\mu }'d{{\sigma '}}}{\int_{0}^{\infty }\int_{-\infty }^{\infty }L({{\sigma '}},{\mu }')\tfrac{1}{{{\sigma '}}}d{\mu }'d{{\sigma '}}}\,\!</math>


The above equation is solved w.r.t. <math>{{R}_{U}}\,\!</math>. The same method is used to calculate the one-sided lower bounds and two-sided bounds on reliability.


====Example: Bayesian Bounds====
====Example: Bayesian Bounds====
Determine the two-sided 90% Bayesian confidence bounds on the lognormal parameter estimates for the data given next:

<center><math>\begin{matrix}
   \text{Data Point Index} & \text{State End Time}  \\
   \text{1} & \text{2}  \\
   \text{2} & \text{5}  \\
   \text{3} & \text{11}  \\
   \text{4} & \text{23}  \\
   \text{5} & \text{29}  \\
   \text{6} & \text{37}  \\
   \text{7} & \text{43}  \\
   \text{8} & \text{59}  \\
\end{matrix}\,\!</math></center>


'''Solution'''
The two-sided 90% Bayesian confidence bounds on the lognormal parameters are obtained using the QCP and clicking on the Calculate Bounds button in the Parameter Bounds tab as follows:


[[Image:Lognormal Distribution Example 8 QCP.png|center|650px| ]]


==Lognormal Distribution Examples==
==Lognormal Distribution Examples==
{{:Lognormal Distribution Examples}}
{{:Lognormal Distribution Examples}}
[[Category: Completed Theoretical Review]]

Revision as of 17:05, 30 June 2017


Chapter 10: The Lognormal Distribution




The lognormal distribution is commonly used to model the lives of units whose failure modes are of a fatigue-stress nature. Since this includes most, if not all, mechanical systems, the lognormal distribution can have widespread application. Consequently, the lognormal distribution is a good companion to the Weibull distribution when attempting to model these types of units. As may be surmised by the name, the lognormal distribution has certain similarities to the normal distribution. A random variable is lognormally distributed if the logarithm of the random variable is normally distributed. Because of this, there are many mathematical similarities between the two distributions. For example, the mathematical reasoning for the construction of the probability plotting scales and the bias of parameter estimators is very similar for these two distributions.

Lognormal Probability Density Function

The lognormal distribution is a 2-parameter distribution with parameters [math]\displaystyle{ {\mu }'\,\! }[/math] and [math]\displaystyle{ \sigma'\,\! }[/math]. The pdf for this distribution is given by:

[math]\displaystyle{ f({t}')=\frac{1}{{{\sigma' }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{{t}^{\prime }}-{\mu }'}{{{\sigma' }}} \right)}^{2}}}}\,\! }[/math]

where:

[math]\displaystyle{ {t}'=\ln (t)\,\! }[/math]. [math]\displaystyle{ t\,\! }[/math] values are the times-to-failure
[math]\displaystyle{ \mu'\,\! }[/math] = mean of the natural logarithms of the times-to-failure
[math]\displaystyle{ \sigma'\,\! }[/math] = standard deviation of the natural logarithms of the times-to-failure

The lognormal pdf can be obtained, realizing that for equal probabilities under the normal and lognormal pdfs, incremental areas should also be equal, or:

[math]\displaystyle{ \begin{align} f(t)dt=f({t}')d{t}' \end{align}\,\! }[/math]

Taking the derivative of the relationship between [math]\displaystyle{ {t}'\,\! }[/math] and [math]\displaystyle{ {t}\,\! }[/math] yields:

[math]\displaystyle{ d{t}'=\frac{dt}{t}\,\! }[/math]

Substitution yields:

[math]\displaystyle{ \begin{align} f(t)= & \frac{f({t}')}{t} \\ f(t)= & \frac{1}{t\cdot {{\sigma' }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}(t)-{\mu }'}{{{\sigma' }}} \right)}^{2}}}} \end{align}\,\! }[/math]

where:

[math]\displaystyle{ f(t)\ge 0,t\gt 0,-\infty \lt {\mu }'\lt \infty ,{{\sigma' }}\gt 0\,\! }[/math]
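As a quick numerical check (a minimal sketch, not part of the original reference), the pdf above can be evaluated directly from the formula and compared against scipy.stats.lognorm, whose shape parameter plays the role of the log standard deviation and whose scale equals exp(mu'). The parameter values below are illustrative only.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import lognorm

# Illustrative parameter values (not taken from any data set in this chapter)
mu_prime = 3.5      # mean of ln(t)
sigma_prime = 0.95  # standard deviation of ln(t)

t = np.array([10.0, 30.0, 60.0, 100.0])

# pdf written directly from the formula above
pdf_direct = (1.0 / (t * sigma_prime * np.sqrt(2.0 * np.pi))
              * np.exp(-0.5 * ((np.log(t) - mu_prime) / sigma_prime) ** 2))

# Same pdf via scipy: shape s = sigma', scale = exp(mu')
pdf_scipy = lognorm.pdf(t, s=sigma_prime, scale=np.exp(mu_prime))

print(pdf_direct)
print(np.allclose(pdf_direct, pdf_scipy))  # True
</syntaxhighlight>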

Lognormal Distribution Functions

The Mean or MTTF

The mean of the lognormal distribution, [math]\displaystyle{ \mu \,\! }[/math], is discussed in Kececioglu [19]:

[math]\displaystyle{ \mu ={{e}^{{\mu }'+\tfrac{1}{2}\sigma'^{2}}}\,\! }[/math]

The mean of the natural logarithms of the times-to-failure, [math]\displaystyle{ \mu'\,\! }[/math], in terms of [math]\displaystyle{ \bar{T}\,\! }[/math] and [math]\displaystyle{ {{\sigma}}\,\! }[/math] is given by:

[math]\displaystyle{ {\mu }'=\ln \left( {\bar{T}} \right)-\frac{1}{2}\ln \left( \frac{\sigma^{2}}{{{{\bar{T}}}^{2}}}+1 \right)\,\! }[/math]

The Median

The median of the lognormal distribution, [math]\displaystyle{ \breve{T}\,\! }[/math], is discussed in Kececioglu [19]:

[math]\displaystyle{ \breve{T}={{e}^{{{\mu}'}}}\,\! }[/math]

The Mode

The mode of the lognormal distribution, [math]\displaystyle{ \tilde{T}\,\! }[/math], is discussed in Kececioglu [19]:

[math]\displaystyle{ \tilde{T}={{e}^{{\mu }'-\sigma'^{2}}}\,\! }[/math]

The Standard Deviation

The standard deviation of the lognormal distribution, [math]\displaystyle{ {\sigma }_{T}\,\! }[/math], is discussed in Kececioglu [19]:

[math]\displaystyle{ {\sigma}_{T} =\sqrt{\left( {{e}^{2\mu '+\sigma {{'}^{2}}}} \right)\left( {{e}^{\sigma {{'}^{2}}}}-1 \right)}\,\! }[/math]

The standard deviation of the natural logarithms of the times-to-failure, [math]\displaystyle{ {\sigma}'\,\! }[/math], in terms of [math]\displaystyle{ \bar{T}\,\! }[/math] and [math]\displaystyle{ {\sigma}\,\! }[/math] is given by:

[math]\displaystyle{ \sigma '=\sqrt{\ln \left( \frac{{\sigma}_{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)}\,\! }[/math]
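The relationships above make it easy to move between the parameters (mu', sigma') and the mean and standard deviation of the times-to-failure. The following is a small sketch (illustrative values only) that converts in both directions and confirms that the round trip is consistent.

<syntaxhighlight lang="python">
import numpy as np

mu_prime, sigma_prime = 3.5, 0.95   # illustrative log-space parameters

# Log-space parameters -> mean and standard deviation of the times-to-failure
mean_T = np.exp(mu_prime + 0.5 * sigma_prime**2)
std_T = np.sqrt(np.exp(2 * mu_prime + sigma_prime**2) * (np.exp(sigma_prime**2) - 1))

# Mean and standard deviation of the times-to-failure -> log-space parameters
mu_back = np.log(mean_T) - 0.5 * np.log(std_T**2 / mean_T**2 + 1)
sigma_back = np.sqrt(np.log(std_T**2 / mean_T**2 + 1))

print(mean_T, std_T)        # linear-space mean and standard deviation
print(mu_back, sigma_back)  # recovers 3.5 and 0.95
</syntaxhighlight>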

The Lognormal Reliability Function

The reliability for a mission of time [math]\displaystyle{ t\,\! }[/math], starting at age 0, for the lognormal distribution is determined by:

[math]\displaystyle{ R(t)=\int_{t}^{\infty }f(x)dx\,\! }[/math]

or:

[math]\displaystyle{ {{R}({t})}=\int_{\text{ln}(t)}^{\infty }\frac{1}{{{\sigma' }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{x-{\mu }'}{{{\sigma' }}} \right)}^{2}}}}dx\,\! }[/math]

As with the normal distribution, there is no closed-form solution for the lognormal reliability function. Solutions can be obtained via the use of standard normal tables. Since the application automatically solves for the reliability we will not discuss manual solution methods. For interested readers, full explanations can be found in the references.
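Although the integral has no closed form, it equals one minus the standard normal cdf evaluated at (ln(t) − mu')/sigma', which is exactly what the standard normal tables (or any statistical library) provide. A minimal sketch with illustrative parameters:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

mu_prime, sigma_prime = 3.5, 0.95   # illustrative parameters

def lognormal_reliability(t):
    """R(t) = P(T > t) for the lognormal distribution."""
    z = (np.log(t) - mu_prime) / sigma_prime
    return norm.sf(z)               # survival function: 1 - Phi(z)

print(lognormal_reliability(50.0))
</syntaxhighlight>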

The Lognormal Conditional Reliability Function

The lognormal conditional reliability function is given by:

[math]\displaystyle{ R(t|T)=\frac{R(T+t)}{R(T)}=\frac{\int_{\text{ln}(T+t)}^{\infty }\tfrac{1}{{{\sigma' }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{x-{\mu }'}{{{\sigma' }}} \right)}^{2}}}}ds}{\int_{\text{ln}(T)}^{\infty }\tfrac{1}{{{\sigma' }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{x-{\mu }'}{{{\sigma' }}} \right)}^{2}}}}dx}\,\! }[/math]

Once again, the use of standard normal tables is necessary to solve this equation, as no closed-form solution exists.

The Lognormal Reliable Life Function

As there is no closed-form solution for the lognormal reliability equation, no closed-form solution exists for the lognormal reliable life either. In order to determine this value, one must solve the following equation for [math]\displaystyle{ t\,\! }[/math]:

[math]\displaystyle{ {{R}_{t}}=\int_{\text{ln}(t)}^{\infty }\frac{1}{{{\sigma' }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{x-{\mu }'}{{{\sigma' }}} \right)}^{2}}}}dx\,\! }[/math]

The Lognormal Failure Rate Function

The lognormal failure rate is given by:

[math]\displaystyle{ \lambda (t)=\frac{f(t)}{R(t)}=\frac{\tfrac{1}{t\cdot {{\sigma' }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{(\tfrac{{t}'-{\mu }'}{{{\sigma' }}})}^{2}}}}}{\int_{{{t}'}}^{\infty }\tfrac{1}{{{\sigma' }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{(\tfrac{x-{\mu }'}{{{\sigma' }}})}^{2}}}}dx}\,\! }[/math]

As with the reliability equations, standard normal tables will be required to solve for this function.
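Because both f(t) and R(t) are easy to evaluate numerically, the failure rate follows directly from their ratio. A short sketch, continuing with the same illustrative parameters:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

mu_prime, sigma_prime = 3.5, 0.95   # illustrative parameters

def lognormal_failure_rate(t):
    """lambda(t) = f(t) / R(t) for the lognormal distribution."""
    z = (np.log(t) - mu_prime) / sigma_prime
    pdf = norm.pdf(z) / (t * sigma_prime)   # lognormal pdf expressed via the standard normal pdf
    return pdf / norm.sf(z)

for t in (10.0, 50.0, 200.0):
    print(t, lognormal_failure_rate(t))
</syntaxhighlight>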

Characteristics of the Lognormal Distribution

WB.10 effect of sigma.png
  • The lognormal distribution is a distribution skewed to the right.
  • The pdf starts at zero, increases to its mode, and decreases thereafter.
  • The degree of skewness increases as [math]\displaystyle{ {{\sigma'}}\,\! }[/math] increases, for a given [math]\displaystyle{ \mu'\,\! }[/math]
WB.10 lognormal pdf.png
  • For the same [math]\displaystyle{ {{\sigma'}}\,\! }[/math], the pdf 's skewness increases as [math]\displaystyle{ {\mu }'\,\! }[/math] increases.
  • For [math]\displaystyle{ {{\sigma' }}\,\! }[/math] values significantly greater than 1, the pdf rises very sharply in the beginning, (i.e., for very small values of [math]\displaystyle{ T\,\! }[/math] near zero), and essentially follows the ordinate axis, peaks out early, and then decreases sharply like an exponential pdf or a Weibull pdf with [math]\displaystyle{ 0\lt \beta \lt 1\,\! }[/math].
  • The parameter, [math]\displaystyle{ {\mu }'\,\! }[/math], in terms of the logarithm of the [math]\displaystyle{ {T}'s\,\! }[/math] is also the scale parameter, and not the location parameter as in the case of the normal pdf.
  • The parameter [math]\displaystyle{ {{\sigma'}}\,\! }[/math], or the standard deviation of the [math]\displaystyle{ {T}'s\,\! }[/math] in terms of their logarithm or of their [math]\displaystyle{ {T}'\,\! }[/math], is also the shape parameter and not the scale parameter, as in the normal pdf, and assumes only positive values.

Lognormal Distribution Parameters in ReliaSoft's Software

In ReliaSoft's software, the parameters returned for the lognormal distribution are always logarithmic. That is: the parameter [math]\displaystyle{ {\mu }'\,\! }[/math] represents the mean of the natural logarithms of the times-to-failure, while [math]\displaystyle{ {{\sigma' }}\,\! }[/math] represents the standard deviation of these data point logarithms. Specifically, the returned [math]\displaystyle{ {{\sigma' }}\,\! }[/math] is the square root of the variance of the natural logarithms of the data points. Even though the application denotes these values as mean and standard deviation, the user is reminded that these are given as the parameters of the distribution, and are thus the mean and standard deviation of the natural logarithms of the data. The mean value of the times-to-failure, not used as a parameter, as well as the standard deviation can be obtained through the QCP or the Function Wizard.

Estimation of the Parameters

Probability Plotting

As described before, probability plotting involves plotting the failure times and associated unreliability estimates on specially constructed probability plotting paper. The form of this paper is based on a linearization of the cdf of the specific distribution. For the lognormal distribution, the cumulative distribution function can be written as:

[math]\displaystyle{ F({t}')=\Phi \left( \frac{{t}'-{\mu }'}{{{\sigma'}}} \right)\,\! }[/math]

or:

[math]\displaystyle{ {{\Phi }^{-1}}\left[ F({t}') \right]=-\frac{{{\mu }'}}{{{\sigma}'}}+\frac{1}{{{\sigma }'}}\cdot {t}'\,\! }[/math]

where:

[math]\displaystyle{ \Phi (x)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{x}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\! }[/math]

Now, let:

[math]\displaystyle{ y={{\Phi }^{-1}}\left[ F({t}') \right]\,\! }[/math]
[math]\displaystyle{ a=-\frac{{{\mu }'}}{{{\sigma}'}}\,\! }[/math]

and:

[math]\displaystyle{ b=\frac{1}{{{\sigma}'}}\,\! }[/math]

which results in the linear equation of:

[math]\displaystyle{ \begin{align} y=a+b{t}' \end{align}\,\! }[/math]

The normal probability paper resulting from this linearized cdf function is shown next.

BS.10 lognormal probability plot.png

The process for reading the parameter estimate values from the lognormal probability plot is very similar to the method employed for the normal distribution (see The Normal Distribution). However, since the lognormal distribution models the natural logarithms of the times-to-failure, the values of the parameter estimates must be read and calculated based on a logarithmic scale, as opposed to the linear time scale as it was done with the normal distribution. This parameter scale appears at the top of the lognormal probability plot.

The process of lognormal probability plotting is illustrated in the following example.

Plotting Example

8 units are put on a life test and tested to failure. The failures occurred at 45, 140, 260, 500, 850, 1400, 3000, and 9000 hours. Estimate the parameters for the lognormal distribution using probability plotting.

Solution

In order to plot the points for the probability plot, the appropriate unreliability estimate values must be obtained. These will be estimated through the use of median ranks, which can be obtained from statistical tables or the Quick Statistical Reference in Weibull++. The following table shows the times-to-failure and the appropriate median rank values for this example:

[math]\displaystyle{ \begin{matrix} \text{Time-to-} & \text{Median} \\ \text{Failure (hr}\text{.)} & \text{Rank ( }\!\!%\!\!\text{ )} \\ \text{ 45} & \text{ 8}\text{.30 }\!\!%\!\!\text{ } \\ \text{ 140} & \text{20}\text{.11 }\!\!%\!\!\text{ } \\ \text{ 260} & \text{32}\text{.05 }\!\!%\!\!\text{ } \\ \text{ 500} & \text{44}\text{.02 }\!\!%\!\!\text{ } \\ \text{ 850} & \text{55}\text{.98 }\!\!%\!\!\text{ } \\ \text{1400} & \text{67}\text{.95 }\!\!%\!\!\text{ } \\ \text{3000} & \text{79}\text{.89 }\!\!%\!\!\text{ } \\ \text{9000} & \text{91}\text{.70 }\!\!%\!\!\text{ } \\ \end{matrix}\,\! }[/math]
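The median rank for the i-th ordered failure out of N is the median of a beta distribution with parameters i and N − i + 1, which is consistent with the values found in statistical tables. The sketch below (not part of the original reference) recomputes the table above to within rounding.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import beta

times = [45, 140, 260, 500, 850, 1400, 3000, 9000]   # failure times from this example
N = len(times)

for i, t in enumerate(times, start=1):
    median_rank = beta.ppf(0.5, i, N - i + 1)         # median of Beta(i, N - i + 1)
    print(f"{t:6d} h   {100 * median_rank:6.2f} %")   # e.g., 45 h -> 8.30 %
</syntaxhighlight>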


These points may now be plotted on normal probability plotting paper as shown in the next figure.

WB.10 lpp2.png

Draw the best possible line through the plot points. The time values where this line intersects the 15.85% and 50% unreliability values should be projected up to the logarithmic scale, as shown in the following plot.

WB.10 lpp3.png

The natural logarithm of the time where the fitted line intersects is equivalent to [math]\displaystyle{ {\mu }'\,\! }[/math]. In this case, [math]\displaystyle{ {\mu }'=6.45\,\! }[/math]. The value for [math]\displaystyle{ {{\sigma }_{{{T}'}}}\,\! }[/math] is equal to the difference between the natural logarithms of the times where the fitted line crosses [math]\displaystyle{ Q(t)=50%\,\! }[/math] and [math]\displaystyle{ Q(t)=15.85%.\,\! }[/math] At [math]\displaystyle{ Q(t)=15.85%\,\! }[/math], ln [math]\displaystyle{ (t)=4.55\,\! }[/math]. Therefore, [math]\displaystyle{ {\sigma'}=6.45-4.55=1.9\,\! }[/math].

Rank Regression on Y

Performing a rank regression on Y requires that a straight line be fitted to a set of data points such that the sum of the squares of the vertical deviations from the points to the line is minimized.

The least squares parameter estimation method, or regression analysis, was discussed in Parameter Estimation and the following equations for regression on Y were derived, and are again applicable:

[math]\displaystyle{ \hat{a}=\bar{y}-\hat{b}\bar{x}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}\,\! }[/math]

and:

[math]\displaystyle{ \hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\! }[/math]

In our case the equations for [math]\displaystyle{ {{y}_{i}}\,\! }[/math] and [math]\displaystyle{ x_{i}\,\! }[/math] are:

[math]\displaystyle{ {{y}_{i}}={{\Phi }^{-1}}\left[ F(t_{i}^{\prime }) \right]\,\! }[/math]

and:

[math]\displaystyle{ {{x}_{i}}=t_{i}^{\prime }\,\! }[/math]

where the [math]\displaystyle{ F(t_{i}^{\prime })\,\! }[/math] is estimated from the median ranks. Once [math]\displaystyle{ \widehat{a}\,\! }[/math] and [math]\displaystyle{ \widehat{b}\,\! }[/math] are obtained, then [math]\displaystyle{ \widehat{\sigma }\,\! }[/math] and [math]\displaystyle{ \widehat{\mu }\,\! }[/math] can easily be obtained from the above equations.

The Correlation Coefficient

The estimator of [math]\displaystyle{ \rho\,\! }[/math] is the sample correlation coefficient, [math]\displaystyle{ \hat{\rho }\,\! }[/math], given by:

[math]\displaystyle{ \hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,({{x}_{i}}-\overline{x})({{y}_{i}}-\overline{y})}{\sqrt{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{({{x}_{i}}-\overline{x})}^{2}}\cdot \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{({{y}_{i}}-\overline{y})}^{2}}}}\,\! }[/math]

RRY Example

Lognormal Distribution RRY Example

14 units were reliability tested and the following life test data were obtained:

Life Test Data
Data point index Time-to-failure
1 5
2 10
3 15
4 20
5 25
6 30
7 35
8 40
9 50
10 60
11 70
12 80
13 90
14 100

Assuming the data follow a lognormal distribution, estimate the parameters and the correlation coefficient, [math]\displaystyle{ \rho \,\! }[/math], using rank regression on Y.

Solution

Construct a table like the one shown next.

[math]\displaystyle{ \overset{{}}{\mathop{\text{Least Squares Analysis}}}\,\,\! }[/math]
[math]\displaystyle{ \begin{matrix} N & t_{i} & F(t_{i}) & {t_{i}}'& y_{i} & {{t_{i}}'}^{2} & y_{i}^{2} & t_{i} y_{i} \\ \text{1} & \text{5} & \text{0}\text{.0483} & \text{1}\text{.6094}& \text{-1}\text{.6619} & \text{2}\text{.5903} & \text{2}\text{.7619} & \text{-2}\text{.6747} \\ \text{2} & \text{10} & \text{0}\text{.1170} & \text{2.3026}& \text{-1.1901} & \text{5.3019} & \text{1.4163} & \text{-2.7403} \\ \text{3} & \text{15} & \text{0}\text{.1865} & \text{2.7080}&\text{-0.8908} & \text{7.3335} & \text{0.7935} & \text{-2.4123} \\ \text{4} & \text{20} & \text{0}\text{.2561} & \text{2.9957} &\text{-0.6552} & \text{8.9744} & \text{0.4292} & \text{-1.9627} \\ \text{5} & \text{25} & \text{0}\text{.3258} & \text{3.2189}& \text{-0.4512} & \text{10.3612} & \text{0.2036} & \text{-1.4524} \\ \text{6} & \text{30} & \text{0}\text{.3954} & \text{3.4012}& \text{-0.2647} & \text{11.5681} & \text{0.0701} & \text{-0.9004} \\ \text{7} & \text{35} & \text{0}\text{.4651} & \text{3.5553} & \text{-0.0873} & \text{12.6405} & \text{-0.0076}& \text{-0.3102} \\ \text{8} & \text{40} & \text{0}\text{.5349} & \text{3.6889}& \text{0.0873} & \text{13.6078} & \text{0.0076} & \text{0.3219} \\ \text{9} & \text{50} & \text{0}\text{.6046} & \text{3.9120} & \text{0.2647} & \text{15.3039} & \text{0.0701} &\text{1.0357} \\ \text{10} & \text{60} & \text{0}\text{.6742} & \text{4.0943} & \text{0.4512} & \text{16.7637} & \text{0.2036}&\text{1.8474} \\ \text{11} & \text{70} & \text{0}\text{.7439} & \text{4.2485} & \text{0.6552} & \text{18.0497}& \text{0.4292} & \text{2.7834} \\ \text{12} & \text{80} & \text{0}\text{.8135} & \text{4.3820} & \text{0.8908} & \text{19.2022} & \text{0.7935} & \text{3.9035} \\ \text{13} & \text{90} & \text{0}\text{.8830} & \text{4.4998} & \text{1.1901} & \text{20.2483}&\text{1.4163} & \text{5.3552} \\ \text{14} & \text{100}& \text{0}\text{.9517} & \text{4.6052} & \text{1.6619} & \text{21.2076} &\text{2.7619} & \text{7.6533} \\ \sum_{}^{} & \text{ } & \text{ } & \text{49.222} & \text{0} & \text{183.1531} & \text{11.3646} & \text{10.4473} \\ \end{matrix}\,\! }[/math]

The median rank values ( [math]\displaystyle{ F({{t}_{i}})\,\! }[/math] ) can be found in rank tables or by using the Quick Statistical Reference in Weibull++ .

The [math]\displaystyle{ {{y}_{i}}\,\! }[/math] values were obtained from the standardized normal distribution's area tables by entering the value of [math]\displaystyle{ F(z)\,\! }[/math] and reading off the corresponding [math]\displaystyle{ z\,\! }[/math] value ( [math]\displaystyle{ {{y}_{i}}\,\! }[/math] ).

Given the values in the table above, calculate [math]\displaystyle{ \widehat{a}\,\! }[/math] and [math]\displaystyle{ \widehat{b}\,\! }[/math]:

[math]\displaystyle{ \begin{align} & \widehat{b}= & \frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime }{{y}_{i}}-(\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime })(\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}})/14}{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime 2}-{{(\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime })}^{2}}/14} \\ & & \\ & \widehat{b}= & \frac{10.4473-(49.2220)(0)/14}{183.1530-{{(49.2220)}^{2}}/14} \end{align}\,\! }[/math]

or:

[math]\displaystyle{ \widehat{b}=1.0349\,\! }[/math]

and:

[math]\displaystyle{ \widehat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\widehat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,t_{i}^{\prime }}{N}\,\! }[/math]

or:

[math]\displaystyle{ \widehat{a}=\frac{0}{14}-(1.0349)\frac{49.2220}{14}=-3.6386\,\! }[/math]

Therefore:

[math]\displaystyle{ {\sigma'}=\frac{1}{\widehat{b}}=\frac{1}{1.0349}=0.9663\,\! }[/math]

and:

[math]\displaystyle{ {\mu }'=-\widehat{a}\cdot {\sigma'}=-(-3.6386)\cdot 0.9663\,\! }[/math]

or:

[math]\displaystyle{ \begin{align} {\mu }'=3.516 \end{align}\,\! }[/math]

The mean and the standard deviation of the lognormal distribution are obtained using equations in the Lognormal Distribution Functions section above:

[math]\displaystyle{ \overline{T}=\mu ={{e}^{3.516+\tfrac{1}{2}{{0.9663}^{2}}}}=53.6707\text{ hours}\,\! }[/math]

and:

[math]\displaystyle{ {\sigma}=\sqrt{({{e}^{2\cdot 3.516+{{0.9663}^{2}}}})({{e}^{{{0.9663}^{2}}}}-1)}=66.69\text{ hours}\,\! }[/math]

The correlation coefficient can be estimated as:

[math]\displaystyle{ \widehat{\rho }=0.9754\,\! }[/math]

The above example can be repeated using Weibull++ , using RRY.

Lognormal Distribution Example 2 Data and Result.png

The mean can be obtained from the QCP and both the mean and the standard deviation can be obtained from the Function Wizard.
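The rank regression on Y computation above can also be reproduced end to end with a short script. The sketch below (illustrative, not the Weibull++ implementation) rebuilds the median ranks, transforms them through the inverse standard normal cdf, fits the least squares line, and should recover values close to the estimates above (mu' near 3.516, sigma' near 0.966, rho near 0.975).

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import beta, norm

times = np.array([5, 10, 15, 20, 25, 30, 35, 40, 50, 60, 70, 80, 90, 100], dtype=float)
N = len(times)

x = np.log(times)                                     # x_i = ln(t_i)
i = np.arange(1, N + 1)
ranks = beta.ppf(0.5, i, N - i + 1)                   # median ranks F(t_i)
y = norm.ppf(ranks)                                   # y_i = Phi^{-1}[F(t_i')]

# Least squares regression on Y
b_hat = (np.sum(x * y) - np.sum(x) * np.sum(y) / N) / (np.sum(x**2) - np.sum(x)**2 / N)
a_hat = np.mean(y) - b_hat * np.mean(x)

sigma_prime = 1.0 / b_hat          # ~0.966
mu_prime = -a_hat * sigma_prime    # ~3.516
rho = np.corrcoef(x, y)[0, 1]      # sample correlation coefficient, ~0.975

print(mu_prime, sigma_prime, rho)
</syntaxhighlight>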

Rank Regression on X

Performing a rank regression on X requires that a straight line be fitted to a set of data points such that the sum of the squares of the horizontal deviations from the points to the line is minimized.

Again, the first task is to bring our cdf function into a linear form. This step is exactly the same as in regression on Y analysis and all the equations apply in this case too. The deviation from the previous analysis begins on the least squares fit part, where in this case we treat [math]\displaystyle{ x\,\! }[/math] as the dependent variable and [math]\displaystyle{ y\,\! }[/math] as the independent variable. The best-fitting straight line to the data, for regression on X (see Parameter Estimation), is the straight line:

[math]\displaystyle{ x=\widehat{a}+\widehat{b}y\,\! }[/math]

The corresponding equations for [math]\displaystyle{ \widehat{a}\,\! }[/math] and [math]\displaystyle{ \widehat{b}\,\! }[/math] are:

[math]\displaystyle{ \hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}\,\! }[/math]

and:

[math]\displaystyle{ \hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\! }[/math]

where:

[math]\displaystyle{ {{y}_{i}}={{\Phi }^{-1}}\left[ F(t_{i}^{\prime }) \right]\,\! }[/math]

and:

[math]\displaystyle{ {{x}_{i}}=t_{i}^{\prime }\,\! }[/math]

and the [math]\displaystyle{ F(t_{i}^{\prime })\,\! }[/math] is estimated from the median ranks. Once [math]\displaystyle{ \widehat{a}\,\! }[/math] and [math]\displaystyle{ \widehat{b}\,\! }[/math] are obtained, solve the linear equation for the unknown [math]\displaystyle{ y\,\! }[/math], which corresponds to:

[math]\displaystyle{ y=-\frac{\widehat{a}}{\widehat{b}}+\frac{1}{\widehat{b}}x\,\! }[/math]

Solving for the parameters we get:

[math]\displaystyle{ a=-\frac{\widehat{a}}{\widehat{b}}=-\frac{{{\mu }'}}{\sigma'}\,\! }[/math]

and:

[math]\displaystyle{ b=\frac{1}{\widehat{b}}=\frac{1}{\sigma'}\,\! }[/math]

The correlation coefficient is evaluated as before, using the equation given in the previous section.

RRX Example

Lognormal Distribution RRX Example

Using the same data set from the RRY example given above, and assuming a lognormal distribution, estimate the parameters and estimate the correlation coefficient, [math]\displaystyle{ \rho \,\! }[/math], using rank regression on X.

Solution

The table constructed for the RRY example also applies to this example as well. Using the values in this table we get:

[math]\displaystyle{ \begin{align} & \hat{b}= & \frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime }{{y}_{i}}-\tfrac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime }\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}}}{14}}{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{14}} \\ & & \\ & \widehat{b}= & \frac{10.4473-(49.2220)(0)/14}{11.3646-{{(0)}^{2}}/14} \end{align}\,\! }[/math]

or:

[math]\displaystyle{ \widehat{b}=0.9193\,\! }[/math]

and:

[math]\displaystyle{ \hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime }}{14}-\widehat{b}\frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}}}{14}\,\! }[/math]

or:

[math]\displaystyle{ \widehat{a}=\frac{49.2220}{14}-(0.9193)\frac{(0)}{14}=3.5159\,\! }[/math]

Therefore:

[math]\displaystyle{ {\sigma'}=\widehat{b}=0.9193\,\! }[/math]

and:

[math]\displaystyle{ {\mu }'=\frac{\widehat{a}}{\widehat{b}}{\sigma'}=\frac{3.5159}{0.9193}\cdot 0.9193=3.5159\,\! }[/math]

Using the equations for the mean and standard deviation in the Lognormal Distribution Functions section above, we get:

[math]\displaystyle{ \overline{T}=\mu =51.3393\text{ hours}\,\! }[/math]

and:


[math]\displaystyle{ \begin{align} {\sigma}=59.1682\text{ hours}. \end{align}\,\! }[/math]

The correlation coefficient is found using the equation in the previous section:

[math]\displaystyle{ \widehat{\rho }=0.9754.\,\! }[/math]

Note that the regression on Y analysis is not necessarily the same as the regression on X. The only time when the results of the two regression types are the same (i.e., will yield the same equation for a line) is when the data lie perfectly on a line.

Using Weibull++ , with the Rank Regression on X option, the results are:

Lognormal Distribution Example 3 Data and Result.png

Maximum Likelihood Estimation

As it was outlined in Parameter Estimation, maximum likelihood estimation works by developing a likelihood function based on the available data and finding the values of the parameter estimates that maximize the likelihood function. This can be achieved by using iterative methods to determine the parameter estimate values that maximize the likelihood function. However, this can be rather difficult and time-consuming, particularly when dealing with the three-parameter distribution. Another method of finding the parameter estimates involves taking the partial derivatives of the likelihood equation with respect to the parameters, setting the resulting equations equal to zero, and solving simultaneously to determine the values of the parameter estimates. The log-likelihood functions and associated partial derivatives used to determine maximum likelihood estimates for the lognormal distribution are covered in Appendix D .

Note About Bias

See the discussion regarding bias with the normal distribution for information regarding parameter bias in the lognormal distribution.

MLE Example

Lognormal Distribution MLE Example

Using the same data set from the RRY and RRX examples given above and assuming a lognormal distribution, estimate the parameters using the MLE method.

Solution

In this example we have only complete data. Thus, the partials reduce to:

[math]\displaystyle{ \begin{align} & \frac{\partial \Lambda }{\partial {\mu }'}= & \frac{1}{\sigma'^{2}}\cdot \underset{i=1}{\overset{14}{\mathop \sum }}\,\left( \ln ({{t}_{i}})-{\mu }' \right)=0 \\ & \frac{\partial \Lambda }{\partial {{\sigma'}}}= & \underset{i=1}{\overset{14}{\mathop \sum }}\,\left( \frac{{{\left( \ln ({{t}_{i}})-{\mu }' \right)}^{2}}}{\sigma'^{3}}-\frac{1}{{{\sigma'}}} \right)=0 \end{align}\,\! }[/math]

Substituting the values of [math]\displaystyle{ {{T}_{i}}\,\! }[/math] and solving the above system simultaneously, we get:

[math]\displaystyle{ \begin{align} & {{{\hat{\sigma' }}}}= & 0.849 \\ & {{{\hat{\mu }}}^{\prime }}= & 3.516 \end{align}\,\! }[/math]

Using the equation for mean and standard deviation in the Lognormal Distribution Functions section above, we get:

[math]\displaystyle{ \overline{T}=\hat{\mu }=48.25\text{ hours}\,\! }[/math]

and:

[math]\displaystyle{ {{\hat{\sigma }}}=49.61\text{ hours}.\,\! }[/math]

The variance/covariance matrix is given by:

[math]\displaystyle{ \left[ \begin{matrix} \widehat{Var}\left( {{{\hat{\mu }}}^{\prime }} \right)=0.0515 & {} & \widehat{Cov}\left( {{{\hat{\mu }}}^{\prime }},{{{\hat{\sigma'}}}} \right)=0.0000 \\ {} & {} & {} \\ \widehat{Cov}\left( {{{\hat{\mu }}}^{\prime }},{{{\hat{\sigma' }}}} \right)=0.0000 & {} & \widehat{Var}\left( {{{\hat{\sigma' }}}} \right)=0.0258 \\ \end{matrix} \right]\,\! }[/math]
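For complete data, the two partial derivative equations have a closed-form solution: the estimate of mu' is the sample mean of the log-times and the estimate of sigma' is their population standard deviation (dividing by N, not N − 1). Under the usual asymptotic approximation, the variances of these estimates for a complete sample are sigma'^2/N and sigma'^2/(2N). The sketch below (an assumption-laden shortcut, not the general Fisher matrix computation) reproduces the estimates and the diagonal of the matrix above to within rounding.

<syntaxhighlight lang="python">
import numpy as np

times = np.array([5, 10, 15, 20, 25, 30, 35, 40, 50, 60, 70, 80, 90, 100], dtype=float)
N = len(times)
log_t = np.log(times)

mu_hat = log_t.mean()                                  # ~3.516
sigma_hat = np.sqrt(((log_t - mu_hat) ** 2).mean())    # ~0.849 (MLE divides by N)

# Mean and standard deviation of the times-to-failure
mean_T = np.exp(mu_hat + 0.5 * sigma_hat**2)                                      # ~48.25 hours
std_T = np.sqrt(np.exp(2 * mu_hat + sigma_hat**2) * (np.exp(sigma_hat**2) - 1))  # ~49.6 hours

# Complete-sample asymptotic variances of the parameter estimates
var_mu = sigma_hat**2 / N          # ~0.0515
var_sigma = sigma_hat**2 / (2 * N) # ~0.026

print(mu_hat, sigma_hat, mean_T, std_T, var_mu, var_sigma)
</syntaxhighlight>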

Confidence Bounds

The method used by the application in estimating the different types of confidence bounds for lognormally distributed data is presented in this section. Note that there are closed-form solutions for both the normal and lognormal reliability that can be obtained without the use of the Fisher information matrix. However, these closed-form solutions only apply to complete data. To achieve consistent application across all possible data types, Weibull++ always uses the Fisher matrix in computing confidence intervals. The complete derivations were presented in detail for a general function in Confidence Bounds. For a discussion on exact confidence bounds for the normal and lognormal, see The Normal Distribution.

Fisher Matrix Bounds

Bounds on the Parameters

The lower and upper bounds on the mean, [math]\displaystyle{ {\mu }'\,\! }[/math], are estimated from:

[math]\displaystyle{ \begin{align} & \mu _{U}^{\prime }= & {{\widehat{\mu }}^{\prime }}+{{K}_{\alpha }}\sqrt{Var({{\widehat{\mu }}^{\prime }})}\text{ (upper bound),} \\ & \mu _{L}^{\prime }= & {{\widehat{\mu }}^{\prime }}-{{K}_{\alpha }}\sqrt{Var({{\widehat{\mu }}^{\prime }})}\text{ (lower bound)}\text{.} \end{align}\,\! }[/math]

For the standard deviation, [math]\displaystyle{ {\widehat{\sigma}'}\,\! }[/math], [math]\displaystyle{ \ln ({{\widehat{\sigma'}}})\,\! }[/math] is treated as normally distributed, and the bounds are estimated from:

[math]\displaystyle{ \begin{align} & {{\sigma}_{U}}= & {{\widehat{\sigma'}}}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma'}}})}}{{{\widehat{\sigma'}}}}}}\text{ (upper bound),} \\ & {{\sigma }_{L}}= & \frac{{{\widehat{\sigma'}}}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma' }}})}}{{{\widehat{\sigma'}}}}}}}\text{ (lower bound),} \end{align}\,\! }[/math]

where [math]\displaystyle{ {{K}_{\alpha }}\,\! }[/math] is defined by:

[math]\displaystyle{ \alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})\,\! }[/math]

If [math]\displaystyle{ \delta \,\! }[/math] is the confidence level, then [math]\displaystyle{ \alpha =\tfrac{1-\delta }{2}\,\! }[/math] for the two-sided bounds and [math]\displaystyle{ \alpha =1-\delta \,\! }[/math] for the one-sided bounds.

The variances and covariances of [math]\displaystyle{ {{\widehat{\mu }}^{\prime }}\,\! }[/math] and [math]\displaystyle{ {{\widehat{\sigma'}}}\,\! }[/math] are estimated as follows:

[math]\displaystyle{ \left( \begin{matrix} \widehat{Var}\left( {{\widehat{\mu }}^{\prime }} \right) & \widehat{Cov}\left( {{\widehat{\mu }}^{\prime }},{{\widehat{\sigma'}}} \right) \\ \widehat{Cov}\left( {{\widehat{\mu }}^{\prime }},{{\widehat{\sigma'}}} \right) & \widehat{Var}\left( {{\widehat{\sigma'}}} \right) \\ \end{matrix} \right)=\left( \begin{matrix} -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{({\mu }')}^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {\mu }'\partial {{\sigma'}}} \\ {} & {} \\ -\tfrac{{{\partial }^{2}}\Lambda }{\partial {\mu }'\partial {{\sigma'}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \sigma'^{2}} \\ \end{matrix} \right)_{{\mu }'={{\widehat{\mu }}^{\prime }},{{\sigma'}}={{\widehat{\sigma'}}}}^{-1}\,\! }[/math]

where [math]\displaystyle{ \Lambda \,\! }[/math] is the log-likelihood function of the lognormal distribution.
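Given the parameter estimates and their variances, the bound equations above are straightforward to evaluate. The sketch below uses the estimates and variances from the MLE example earlier in this chapter as illustrative inputs, with a 90% two-sided confidence level.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

# Illustrative inputs taken from the MLE example above
mu_hat, sigma_hat = 3.516, 0.849
var_mu, var_sigma = 0.0515, 0.0258

delta = 0.90                        # two-sided confidence level
K = norm.ppf(1 - (1 - delta) / 2)   # K_alpha, with alpha = (1 - delta)/2

mu_U = mu_hat + K * np.sqrt(var_mu)
mu_L = mu_hat - K * np.sqrt(var_mu)

# Bounds on sigma' treat ln(sigma') as normally distributed
factor = np.exp(K * np.sqrt(var_sigma) / sigma_hat)
sigma_U = sigma_hat * factor
sigma_L = sigma_hat / factor

print((mu_L, mu_U), (sigma_L, sigma_U))
</syntaxhighlight>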

Bounds on Time(Type 1)

The bounds around time for a given lognormal percentile, or unreliability, are estimated by first solving the reliability equation with respect to time, as follows:

[math]\displaystyle{ {t}'({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}})={{\widehat{\mu }}^{\prime }}+z\cdot {{\widehat{\sigma' }}}\,\! }[/math]

where:

[math]\displaystyle{ z={{\Phi }^{-1}}\left[ F({t}') \right]\,\! }[/math]

and:

[math]\displaystyle{ \Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({t}')}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\! }[/math]

The next step is to calculate the variance of [math]\displaystyle{ {T}'({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma }}}):\,\! }[/math]

[math]\displaystyle{ \begin{align} & Var({{{\hat{t}}}^{\prime }})= & {{\left( \frac{\partial {t}'}{\partial {\mu }'} \right)}^{2}}Var({{\widehat{\mu }}^{\prime }})+{{\left( \frac{\partial {t}'}{\partial {{\sigma' }}} \right)}^{2}}Var({{\widehat{\sigma' }}}) \\ & & +2\left( \frac{\partial {t}'}{\partial {\mu }'} \right)\left( \frac{\partial {t}'}{\partial {{\sigma' }}} \right)Cov\left( {{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}} \right) \\ & & \\ & Var({{{\hat{t}}}^{\prime }})= & Var({{\widehat{\mu }}^{\prime }})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma' }}})+2\cdot \widehat{z}\cdot Cov\left( {{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}} \right) \end{align}\,\! }[/math]

The upper and lower bounds are then found by:

[math]\displaystyle{ \begin{align} & t_{U}^{\prime }= & \ln {{t}_{U}}={{{\hat{t}}}^{\prime }}+{{K}_{\alpha }}\sqrt{Var({{{\hat{t}}}^{\prime }})} \\ & t_{L}^{\prime }= & \ln {{t}_{L}}={{{\hat{t}}}^{\prime }}-{{K}_{\alpha }}\sqrt{Var({{{\hat{t}}}^{\prime }})} \end{align}\,\! }[/math]

Solving for [math]\displaystyle{ {{t}_{U}}\,\! }[/math] and [math]\displaystyle{ {{t}_{L}}\,\! }[/math] we get:

[math]\displaystyle{ \begin{align} & {{t}_{U}}= & {{e}^{t_{U}^{\prime }}}\text{ (upper bound),} \\ & {{t}_{L}}= & {{e}^{t_{L}^{\prime }}}\text{ (lower bound)}\text{.} \end{align}\,\! }[/math]
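The time-bound calculation can be sketched the same way. The snippet below again uses the MLE example's estimates and variance/covariance values as illustrative inputs, for an unreliability of 10% at 90% two-sided confidence.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

mu_hat, sigma_hat = 3.516, 0.849               # illustrative estimates
var_mu, var_sigma, cov = 0.0515, 0.0258, 0.0   # illustrative variance/covariance terms

F = 0.10                                       # unreliability of interest
z = norm.ppf(F)                                # z = Phi^{-1}[F(t')]
K = norm.ppf(1 - (1 - 0.90) / 2)               # 90% two-sided

t_prime = mu_hat + z * sigma_hat                # ln(t) at the given percentile
var_t = var_mu + z**2 * var_sigma + 2 * z * cov # variance of t'

t_U = np.exp(t_prime + K * np.sqrt(var_t))
t_L = np.exp(t_prime - K * np.sqrt(var_t))
print(t_L, np.exp(t_prime), t_U)
</syntaxhighlight>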

Bounds on Reliability (Type 2)

The reliability of the lognormal distribution is:

[math]\displaystyle{ \hat{R}(t;{{\hat{\mu }}^{'}},{{\hat{\sigma }}^{'}})=\int_{t'}^{\infty }{\frac{1}{{{{\hat{\sigma }}}^{'}}\sqrt{2\pi }}}{{e}^{-\frac{1}{2}{{\left( \frac{x-{{{\hat{\mu }}}^{'}}}{{{{\hat{\sigma }}}^{'}}} \right)}^{2}}}}dx\,\! }[/math]

where [math]\displaystyle{ t'=\ln (t)\,\! }[/math]. Let [math]\displaystyle{ \hat{z}(x)=\frac{x-{{{\hat{\mu }}}^{'}}}{{{\sigma }^{'}}}\,\! }[/math], the above equation then becomes:

[math]\displaystyle{ \hat{R}\left( \hat{z}(t') \right)=\int_{\hat{z}(t')}^{\infty }{\frac{1}{\sqrt{2\pi }}}{{e}^{-\frac{1}{2}{{z}^{2}}}}dz\,\! }[/math]

The bounds on [math]\displaystyle{ z\,\! }[/math] are estimated from:

[math]\displaystyle{ \begin{align} & {{z}_{U}}= & \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\ & {{z}_{L}}= & \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \end{align}\,\! }[/math]

where:

[math]\displaystyle{ \begin{align} & Var(\hat{z})=\left( \frac{\partial {z}}{\partial \mu '} \right)_{\hat{\mu }'}^{2}Var\left( \hat{\mu }' \right)+\left( \frac{\partial {z}}{\partial \sigma '} \right)_{\hat{\sigma }'}^{2}Var\left( \hat{\sigma }' \right) \\ & +2\left( \frac{\partial{z}}{\partial \mu '} \right)_{\hat{\mu }'}^{{}}\left( \frac{\partial {z}}{\partial \sigma '} \right)_{\hat{\sigma }'}^{{}}Cov\left( \hat{\mu }',\hat{\sigma }' \right) \end{align}\,\! }[/math]

or:

[math]\displaystyle{ Var(\hat{z})=\frac{1}{{{{\hat{\sigma }}}^{'2}}}\left[ Var\left( \hat{\mu }' \right)+{{{\hat{z}}}^{2}}Var\left( \sigma ' \right)+2\cdot \hat{z}\cdot Cov\left( \hat{\mu }',\hat{\sigma }' \right) \right]\,\! }[/math]

The upper and lower bounds on reliability are:

[math]\displaystyle{ \begin{align} & {{R}_{U}}= & \int_{{{z}_{L}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Upper bound)} \\ & {{R}_{L}}= & \int_{{{z}_{U}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Lower bound)} \end{align}\,\! }[/math]
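The reliability bounds follow the same pattern, with the variance propagated to z rather than to ln(t). A compact sketch with the same illustrative inputs:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

mu_hat, sigma_hat = 3.516, 0.849               # illustrative estimates
var_mu, var_sigma, cov = 0.0515, 0.0258, 0.0   # illustrative variance/covariance terms

t = 50.0                                       # mission time of interest
z = (np.log(t) - mu_hat) / sigma_hat
var_z = (var_mu + z**2 * var_sigma + 2 * z * cov) / sigma_hat**2

K = norm.ppf(1 - (1 - 0.90) / 2)               # 90% two-sided
z_U, z_L = z + K * np.sqrt(var_z), z - K * np.sqrt(var_z)

R_U, R_L = norm.sf(z_L), norm.sf(z_U)          # note that the bounds on z reverse on R
print(R_L, norm.sf(z), R_U)
</syntaxhighlight>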

Likelihood Ratio Confidence Bounds

Bounds on Parameters

As covered in Parameter Estimation, the likelihood confidence bounds are calculated by finding values for [math]\displaystyle{ {{\theta }_{1}}\,\! }[/math] and [math]\displaystyle{ {{\theta }_{2}}\,\! }[/math] that satisfy:

[math]\displaystyle{ -2\cdot \text{ln}\left( \frac{L({{\theta }_{1}},{{\theta }_{2}})}{L({{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}})} \right)=\chi _{\alpha ;1}^{2}\,\! }[/math]

This equation can be rewritten as:

[math]\displaystyle{ L({{\theta }_{1}},{{\theta }_{2}})=L({{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}})\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}\,\! }[/math]

For complete data, the likelihood formula for the normal distribution is given by:

[math]\displaystyle{ L({\mu }',{{\sigma' }})=\underset{i=1}{\overset{N}{\mathop \prod }}\,f({{x}_{i}};{\mu }',{{\sigma' }})=\underset{i=1}{\overset{N}{\mathop \prod }}\,\frac{1}{{{x}_{i}}\cdot {{\sigma' }}\cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}({{x}_{i}})-{\mu }'}{{{\sigma'}}} \right)}^{2}}}}\,\! }[/math]

where the [math]\displaystyle{ {{x}_{i}}\,\! }[/math] values represent the original time-to-failure data. For a given value of [math]\displaystyle{ \alpha \,\! }[/math], values for [math]\displaystyle{ {\mu }'\,\! }[/math] and [math]\displaystyle{ {{\sigma' }}\,\! }[/math] can be found which represent the maximum and minimum values that satisfy likelihood ratio equation. These represent the confidence bounds for the parameters at a confidence level [math]\displaystyle{ \delta ,\,\! }[/math] where [math]\displaystyle{ \alpha =\delta \,\! }[/math] for two-sided bounds and [math]\displaystyle{ \alpha =2\delta -1\,\! }[/math] for one-sided.

Example: LR Bounds on Parameters

Lognormal Distribution Likelihood Ratio Bound Example (Parameters)

Five units are put on a reliability test and experience failures at 45, 60, 75, 90, and 115 hours. Assuming a lognormal distribution, the MLE parameter estimates are calculated to be [math]\displaystyle{ {{\widehat{\mu }}^{\prime }}=4.2926\,\! }[/math] and [math]\displaystyle{ {{\widehat{\sigma'}}}=0.32361.\,\! }[/math] Calculate the two-sided 75% confidence bounds on these parameters using the likelihood ratio method.

Solution

The first step is to calculate the likelihood function for the parameter estimates:

[math]\displaystyle{ \begin{align} L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}})= & \underset{i=1}{\overset{N}{\mathop \prod }}\,f({{x}_{i}};{{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}}), \\ = & \underset{i=1}{\overset{N}{\mathop \prod }}\,\frac{1}{{{x}_{i}}\cdot {{\widehat{\sigma' }}}\cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}({{x}_{i}})-{{\widehat{\mu }}^{\prime }}}{{{\widehat{\sigma' }}}} \right)}^{2}}}} \\ L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma'}}})= & \underset{i=1}{\overset{5}{\mathop \prod }}\,\frac{1}{{{x}_{i}}\cdot 0.32361\cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}({{x}_{i}})-4.2926}{0.32361} \right)}^{2}}}} \\ L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma'}}})= & 1.115256\times {{10}^{-10}} \end{align}\,\! }[/math]

where [math]\displaystyle{ {{x}_{i}}\,\! }[/math] are the original time-to-failure data points. We can now rearrange the likelihood ratio equation to the form:

[math]\displaystyle{ L({\mu }',{{\sigma' }})-L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}})\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}=0\,\! }[/math]

Since our specified confidence level, [math]\displaystyle{ \delta \,\! }[/math], is 75%, we can calculate the value of the chi-squared statistic, [math]\displaystyle{ \chi _{0.75;1}^{2}=1.323303.\,\! }[/math] We can now substitute this information into the equation:

[math]\displaystyle{ \begin{align} & L({\mu }',{{\sigma' }})-L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}})\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}= & 0 \\ & L({\mu }',{{\sigma'}})-1.115256\times {{10}^{-10}}\cdot {{e}^{\tfrac{-1.323303}{2}}}= & 0 \\ & L({\mu }',{{\sigma'}})-5.754703\times {{10}^{-11}}= & 0 \end{align}\,\! }[/math]

It now remains to find the values of [math]\displaystyle{ {\mu }'\,\! }[/math] and [math]\displaystyle{ {{\sigma'}}\,\! }[/math] which satisfy this equation. This is an iterative process that requires setting the value of [math]\displaystyle{ {{\sigma'}}\,\! }[/math] and finding the appropriate values of [math]\displaystyle{ {\mu }'\,\! }[/math], and vice versa.

The following table gives the values of [math]\displaystyle{ {\mu }'\,\! }[/math] based on given values of [math]\displaystyle{ {{\sigma'}}\,\! }[/math].

[math]\displaystyle{ \begin{matrix} {{\sigma' }} & \mu _{1}^{\prime } & \mu _{2}^{\prime } & {{\sigma' }} & \mu _{1}^{\prime } & \mu _{2}^{\prime } \\ 0.24 & 4.2421 & 4.3432 & 0.37 & 4.1145 & 4.4708 \\ 0.25 & 4.2115 & 4.3738 & 0.38 & 4.1152 & 4.4701 \\ 0.26 & 4.1909 & 4.3944 & 0.39 & 4.1170 & 4.4683 \\ 0.27 & 4.1748 & 4.4105 & 0.40 & 4.1200 & 4.4653 \\ 0.28 & 4.1618 & 4.4235 & 0.41 & 4.1244 & 4.4609 \\ 0.29 & 4.1509 & 4.4344 & 0.42 & 4.1302 & 4.4551 \\ 0.30 & 4.1419 & 4.4434 & 0.43 & 4.1377 & 4.4476 \\ 0.31 & 4.1343 & 4.4510 & 0.44 & 4.1472 & 4.4381 \\ 0.32 & 4.1281 & 4.4572 & 0.45 & 4.1591 & 4.4262 \\ 0.33 & 4.1231 & 4.4622 & 0.46 & 4.1742 & 4.4111 \\ 0.34 & 4.1193 & 4.4660 & 0.47 & 4.1939 & 4.3914 \\ 0.35 & 4.1166 & 4.4687 & 0.48 & 4.2221 & 4.3632 \\ 0.36 & 4.1150 & 4.4703 & {} & {} & {} \\ \end{matrix}\,\! }[/math]

These points are represented graphically in the following contour plot:

WB.10 lognormal contour plot.png

(Note that this plot is generated with degrees of freedom [math]\displaystyle{ k=1\,\! }[/math], as we are only determining bounds on one parameter. The contour plots generated in Weibull++ are done with degrees of freedom [math]\displaystyle{ k=2\,\! }[/math], for use in comparing both parameters simultaneously.) As can be determined from the table the lowest calculated value for [math]\displaystyle{ {\mu }'\,\! }[/math] is 4.1145, while the highest is 4.4708. These represent the two-sided 75% confidence limits on this parameter. Since solutions for the equation do not exist for values of [math]\displaystyle{ {{\sigma' }}\,\! }[/math] below 0.24 or above 0.48, these can be considered the two-sided 75% confidence limits for this parameter. In order to obtain more accurate values for the confidence limits on [math]\displaystyle{ {{\sigma'}}\,\! }[/math], we can perform the same procedure as before, but finding the two values of [math]\displaystyle{ \sigma \,\! }[/math] that correspond with a given value of [math]\displaystyle{ {\mu }'.\,\! }[/math] Using this method, we find that the 75% confidence limits on [math]\displaystyle{ {{\sigma'}}\,\! }[/math] are 0.23405 and 0.48936, which are close to the initial estimates of 0.24 and 0.48.

Bounds on Time and Reliability

In order to calculate the bounds on a time estimate for a given reliability, or on a reliability estimate for a given time, the likelihood function needs to be rewritten in terms of one parameter and time/reliability, so that the maximum and minimum values of the time can be observed as the parameter is varied. This can be accomplished by substituting a form of the normal reliability equation into the likelihood function. The normal reliability equation can be written as:

[math]\displaystyle{ R=1-\Phi \left( \frac{\text{ln}(t)-{\mu }'}{{{\sigma'}}} \right)\,\! }[/math]

This can be rearranged to the form:

[math]\displaystyle{ {\mu }'=\text{ln}(t)-{{\sigma'}}\cdot {{\Phi }^{-1}}(1-R)\,\! }[/math]

where [math]\displaystyle{ {{\Phi }^{-1}}\,\! }[/math] is the inverse standard normal. This equation can now be substituted into likelihood function to produce a likelihood equation in terms of [math]\displaystyle{ {{\sigma'}},\,\! }[/math] [math]\displaystyle{ t\,\! }[/math] and [math]\displaystyle{ R\,\! }[/math]:

[math]\displaystyle{ L({{\sigma'}},t/R)=\underset{i=1}{\overset{N}{\mathop \prod }}\,\frac{1}{{{x}_{i}}\cdot {{\sigma'}}\cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}({{x}_{i}})-\left( \text{ln}(t)-{{\sigma'}}\cdot {{\Phi }^{-1}}(1-R) \right)}{{{\sigma'}}} \right)}^{2}}}}\,\! }[/math]

The unknown variable [math]\displaystyle{ t/R\,\! }[/math] depends on what type of bounds are being determined. If one is trying to determine the bounds on time for a given reliability, then [math]\displaystyle{ R\,\! }[/math] is a known constant and [math]\displaystyle{ t\,\! }[/math] is the unknown variable. Conversely, if one is trying to determine the bounds on reliability for a given time, then [math]\displaystyle{ t\,\! }[/math] is a known constant and [math]\displaystyle{ R\,\! }[/math] is the unknown variable. Either way, the above equation can be used to solve the likelihood ratio equation for the values of interest.

Example: LR Bounds on Time

Lognormal Distribution Likelihood Ratio Bound Example (Time)

For the same data set given for the parameter bounds example, determine the two-sided 75% confidence bounds on the time estimate for a reliability of 80%. The ML estimate for the time at [math]\displaystyle{ R(t)=80%\,\! }[/math] is 55.718.

Solution

In this example, we are trying to determine the two-sided 75% confidence bounds on the time estimate of 55.718. This is accomplished by substituting [math]\displaystyle{ R=0.80\,\! }[/math] and [math]\displaystyle{ \alpha =0.75\,\! }[/math] into the likelihood function, and varying [math]\displaystyle{ {{\sigma' }}\,\! }[/math] until the maximum and minimum values of [math]\displaystyle{ t\,\! }[/math] are found. The following table gives the values of [math]\displaystyle{ t\,\! }[/math] based on given values of [math]\displaystyle{ {{\sigma' }}\,\! }[/math].

[math]\displaystyle{ \begin{matrix} {{\sigma' }} & {{t}_{1}} & {{t}_{2}} & {{\sigma' }} & {{t}_{1}} & {{t}_{2}} \\ 0.24 & 56.832 & 62.879 & 0.37 & 44.841 & 64.031 \\ 0.25 & 54.660 & 64.287 & 0.38 & 44.494 & 63.454 \\ 0.26 & 53.093 & 65.079 & 0.39 & 44.200 & 62.809 \\ 0.27 & 51.811 & 65.576 & 0.40 & 43.963 & 62.093 \\ 0.28 & 50.711 & 65.881 & 0.41 & 43.786 & 61.304 \\ 0.29 & 49.743 & 66.041 & 0.42 & 43.674 & 60.436 \\ 0.30 & 48.881 & 66.085 & 0.43 & 43.634 & 59.481 \\ 0.31 & 48.106 & 66.028 & 0.44 & 43.681 & 58.426 \\ 0.32 & 47.408 & 65.883 & 0.45 & 43.832 & 57.252 \\ 0.33 & 46.777 & 65.657 & 0.46 & 44.124 & 55.924 \\ 0.34 & 46.208 & 65.355 & 0.47 & 44.625 & 54.373 \\ 0.35 & 45.697 & 64.983 & 0.48 & 45.517 & 52.418 \\ 0.36 & 45.242 & 64.541 & {} & {} & {} \\ \end{matrix}\,\! }[/math]

This data set is represented graphically in the following contour plot:

WB.10 time vs sigma.png

As can be determined from the table, the lowest calculated value for [math]\displaystyle{ t\,\! }[/math] is 43.634, while the highest is 66.085. These represent the two-sided 75% confidence limits on the time at which reliability is equal to 80%.

Example: LR Bounds on Reliability

Lognormal Distribution Likelihood Ratio Bound Example (Reliability)

For the same data set given above for the parameter bounds example, determine the two-sided 75% confidence bounds on the reliability estimate for [math]\displaystyle{ t=65\,\! }[/math]. The ML estimate for the reliability at [math]\displaystyle{ t=65\,\! }[/math] is 64.261%.

Solution

In this example, we are trying to determine the two-sided 75% confidence bounds on the reliability estimate of 64.261%. This is accomplished by substituting [math]\displaystyle{ t=65\,\! }[/math] and [math]\displaystyle{ \alpha =0.75\,\! }[/math] into the likelihood function, and varying [math]\displaystyle{ {{\sigma'}}\,\! }[/math] until the maximum and minimum values of [math]\displaystyle{ R\,\! }[/math] are found. The following table gives the values of [math]\displaystyle{ R\,\! }[/math] based on given values of [math]\displaystyle{ {{\sigma' }}\,\! }[/math].

[math]\displaystyle{ \begin{matrix} {{\sigma'}} & {{R}_{1}} & {{R}_{2}} & {{\sigma'}} & {{R}_{1}} & {{R}_{2}} \\ 0.24 & 61.107% & 75.910% & 0.37 & 43.573% & 78.845% \\ 0.25 & 55.906% & 78.742% & 0.38 & 43.807% & 78.180% \\ 0.26 & 55.528% & 80.131% & 0.39 & 44.147% & 77.448% \\ 0.27 & 50.067% & 80.903% & 0.40 & 44.593% & 76.646% \\ 0.28 & 48.206% & 81.319% & 0.41 & 45.146% & 75.767% \\ 0.29 & 46.779% & 81.499% & 0.42 & 45.813% & 74.802% \\ 0.30 & 45.685% & 81.508% & 0.43 & 46.604% & 73.737% \\ 0.31 & 44.857% & 81.387% & 0.44 & 47.538% & 72.551% \\ 0.32 & 44.250% & 81.159% & 0.45 & 48.645% & 71.212% \\ 0.33 & 43.827% & 80.842% & 0.46 & 49.980% & 69.661% \\ 0.34 & 43.565% & 80.446% & 0.47 & 51.652% & 67.789% \\ 0.35 & 43.444% & 79.979% & 0.48 & 53.956% & 65.299% \\ 0.36 & 43.450% & 79.444% & {} & {} & {} \\ \end{matrix}\,\! }[/math]

This data set is represented graphically in the following contour plot:

WB.10 reliability v sigma.png

As can be determined from the table, the lowest calculated value for [math]\displaystyle{ R\,\! }[/math] is 43.444%, while the highest is 81.508%. These represent the two-sided 75% confidence limits on the reliability at [math]\displaystyle{ t=65\,\! }[/math].
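The table and limits above can be approximated with a brute-force search over the constrained likelihood. The sketch below (a crude grid scan, not the iterative procedure used by the software) evaluates the likelihood for the five failure times in this example on a grid of (sigma', R) pairs, keeps the R values whose likelihood stays above L(mu'-hat, sigma'-hat) times exp(−chi²(0.75;1)/2), and should report a minimum and maximum near 43.4% and 81.5%.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm, chi2

failures = np.array([45.0, 60.0, 75.0, 90.0, 115.0])
t0 = 65.0                                    # time at which reliability bounds are sought

def log_likelihood(mu, sigma):
    z = (np.log(failures) - mu) / sigma
    return np.sum(-np.log(failures * sigma * np.sqrt(2 * np.pi)) - 0.5 * z**2)

mu_hat, sigma_hat = 4.2926, 0.32361          # MLE estimates from the parameter bounds example
threshold = log_likelihood(mu_hat, sigma_hat) - 0.5 * chi2.ppf(0.75, 1)

R_kept = []
for sigma in np.arange(0.20, 0.60, 0.001):
    for R in np.arange(0.30, 0.95, 0.001):
        mu = np.log(t0) - sigma * norm.ppf(1 - R)   # substitute the reliability equation
        if log_likelihood(mu, sigma) >= threshold:
            R_kept.append(R)

print(min(R_kept), max(R_kept))              # approximately 0.434 and 0.815
</syntaxhighlight>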

Bayesian Confidence Bounds

Bounds on Parameters

From Parameter Estimation, we know that the marginal distribution of parameter [math]\displaystyle{ {\mu }'\,\! }[/math] is:

[math]\displaystyle{ \begin{align} f({\mu }'|Data)= & \int_{0}^{\infty }f({\mu }',{{\sigma'}}|Data)d{{\sigma'}} \\ = & \frac{\int_{0}^{\infty }L(Data|{\mu }',{{\sigma'}})\varphi ({\mu }')\varphi ({{\sigma'}})d{{\sigma'}}}{\int_{0}^{\infty }\int_{-\infty }^{\infty }L(Data|{\mu }',{{\sigma'}})\varphi ({\mu }')\varphi ({{\sigma'}})d{\mu }'d{{\sigma'}}} \end{align}\,\! }[/math]

where:

[math]\displaystyle{ \varphi ({{\sigma '}})\,\! }[/math] is [math]\displaystyle{ \tfrac{1}{{{\sigma '}}}\,\! }[/math], the non-informative prior of [math]\displaystyle{ {{\sigma '}}\,\! }[/math].

[math]\displaystyle{ \varphi ({\mu }')\,\! }[/math] is a uniform distribution from [math]\displaystyle{ -\infty \,\! }[/math] to [math]\displaystyle{ +\infty \,\! }[/math], the non-informative prior of [math]\displaystyle{ {\mu }'\,\! }[/math]. With the above prior distributions, [math]\displaystyle{ f({\mu }'|Data)\,\! }[/math] can be rewritten as:

[math]\displaystyle{ f({\mu }'|Data)=\frac{\int_{0}^{\infty }L(Data|{\mu }',{{\sigma '}})\tfrac{1}{{{\sigma '}}}d{{\sigma '}}}{\int_{0}^{\infty }\int_{-\infty }^{\infty }L(Data|{\mu }',{{\sigma '}})\tfrac{1}{{{\sigma '}}}d{\mu }'d{{\sigma '}}}\,\! }[/math]

The one-sided upper bound of [math]\displaystyle{ {\mu }'\,\! }[/math] is:

[math]\displaystyle{ CL=P({\mu }'\le \mu _{U}^{\prime })=\int_{-\infty }^{\mu _{U}^{\prime }}f({\mu }'|Data)d{\mu }'\,\! }[/math]

The one-sided lower bound of [math]\displaystyle{ {\mu }'\,\! }[/math] is:

[math]\displaystyle{ 1-CL=P({\mu }'\le \mu _{L}^{\prime })=\int_{-\infty }^{\mu _{L}^{\prime }}f({\mu }'|Data)d{\mu }'\,\! }[/math]

The two-sided bounds of [math]\displaystyle{ {\mu }'\,\! }[/math] are:

[math]\displaystyle{ CL=P(\mu _{L}^{\prime }\le {\mu }'\le \mu _{U}^{\prime })=\int_{\mu _{L}^{\prime }}^{\mu _{U}^{\prime }}f({\mu }'|Data)d{\mu }'\,\! }[/math]

The same method can be used to obtain the bounds of [math]\displaystyle{ {{\sigma '}}\,\! }[/math].
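
For a complete data set, these integrals can be evaluated numerically. The following is a minimal Python sketch that computes the marginal posterior of [math]\displaystyle{ {\mu }'\,\! }[/math] on a grid using the non-informative priors above and reads off the two-sided bounds (assuming the confidence level is split equally between the two tails). The data set is the one used in the Bayesian bounds example later in this section, and the grid ranges are assumptions chosen to cover essentially all of the posterior mass.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

# Complete failure times (the data set used in the Bayesian bounds example below).
times = np.array([2., 5., 11., 23., 29., 37., 43., 59.])
log_t = np.log(times)

# Grids for the double numerical integration (assumed wide enough to capture
# essentially all of the posterior mass).
mu_grid  = np.linspace(0.0, 6.0, 601)
sig_grid = np.linspace(0.05, 5.0, 500)
d_mu, d_sig = mu_grid[1] - mu_grid[0], sig_grid[1] - sig_grid[0]
MU, SIG = np.meshgrid(mu_grid, sig_grid, indexing="ij")

# Joint posterior kernel: complete-data likelihood times the priors
# phi(mu') = 1 (uniform) and phi(sigma') = 1/sigma'.
logL = norm.logpdf(log_t[:, None, None], loc=MU, scale=SIG).sum(axis=0)
post = np.exp(logL) / SIG

# Marginal posterior of mu': integrate sigma' out, then normalize.
marg_mu = post.sum(axis=1) * d_sig
marg_mu /= marg_mu.sum() * d_mu
cdf_mu = np.cumsum(marg_mu) * d_mu

CL = 0.90
mu_L = np.interp((1.0 - CL) / 2.0, cdf_mu, mu_grid)   # two-sided lower bound
mu_U = np.interp((1.0 + CL) / 2.0, cdf_mu, mu_grid)   # two-sided upper bound
print("Two-sided %.0f%% Bayesian bounds on mu': (%.3f, %.3f)" % (100 * CL, mu_L, mu_U))
</syntaxhighlight>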

Bounds on Time (Type 1)

The reliable life of the lognormal distribution is:

[math]\displaystyle{ \begin{align} \ln T={\mu }'+{{\sigma '}}{{\Phi }^{-1}}(1-R) \end{align}\,\! }[/math]

The one-sided upper bound on time is given by:

[math]\displaystyle{ CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(\ln t\le \ln {{t}_{U}})=\underset{}{\overset{}{\mathop{\Pr }}}\,({\mu }'+{{\sigma '}}{{\Phi }^{-1}}(1-R)\le \ln {{t}_{U}})\,\! }[/math]

The above equation can be rewritten in terms of [math]\displaystyle{ {\mu }'\,\! }[/math] as:

[math]\displaystyle{ CL=\underset{}{\overset{}{\mathop{\Pr }}}\,({\mu }'\le \ln {{t}_{U}}-{{\sigma '}}{{\Phi }^{-1}}(1-R))\,\! }[/math]

From the posterior distribution of [math]\displaystyle{ {\mu }'\,\! }[/math], we get:

[math]\displaystyle{ CL=\frac{\int_{0}^{\infty }\int_{-\infty }^{\ln {{t}_{U}}-{{\sigma '}}{{\Phi }^{-1}}(1-R)}L({{\sigma '}},{\mu }')\tfrac{1}{{{\sigma '}}}d{\mu }'d{{\sigma '}}}{\int_{0}^{\infty }\int_{-\infty }^{\infty }L({{\sigma '}},{\mu }')\tfrac{1}{{{\sigma '}}}d{\mu }'d{{\sigma '}}}\,\! }[/math]

The above equation is solved for [math]\displaystyle{ {{t}_{U}}\,\! }[/math]. The same method can be applied to obtain the one-sided lower bound and the two-sided bounds on time.

Bounds on Reliability (Type 2)

The one-sided upper bound on reliability is given by:

[math]\displaystyle{ CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(R\le {{R}_{U}})=\underset{}{\overset{}{\mathop{\Pr }}}\,({\mu }'\le \ln t-{{\sigma '}}{{\Phi }^{-1}}(1-{{R}_{U}}))\,\! }[/math]

Again from the posterior distribution of [math]\displaystyle{ {\mu }'\,\! }[/math]:

[math]\displaystyle{ CL=\frac{\int_{0}^{\infty }\int_{-\infty }^{\ln t-{{\sigma '}}{{\Phi }^{-1}}(1-{{R}_{U}})}L({{\sigma'}},{\mu }')\tfrac{1}{{{\sigma'}}}d{\mu }'d{{\sigma '}}}{\int_{0}^{\infty }\int_{-\infty }^{\infty }L({{\sigma '}},{\mu }')\tfrac{1}{{{\sigma '}}}d{\mu }'d{{\sigma '}}}\,\! }[/math]

The above equation is solved for [math]\displaystyle{ {{R}_{U}}\,\! }[/math]. The same method is used to calculate the one-sided lower bound and the two-sided bounds on reliability.
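
Continuing the numerical sketch from the parameter bounds above (same data set and grid assumptions), the following Python fragment evaluates the posterior probability in the last equation for trial values of [math]\displaystyle{ {{R}_{U}}\,\! }[/math] and root-finds the one-sided upper bound. The time bound of the previous subsection is obtained in the same way, with [math]\displaystyle{ \ln {{t}_{U}}\,\! }[/math] as the unknown.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# Same data set and posterior grid assumptions as in the parameter-bounds sketch above.
times = np.array([2., 5., 11., 23., 29., 37., 43., 59.])
log_t = np.log(times)
mu_grid, sig_grid = np.linspace(0.0, 6.0, 601), np.linspace(0.05, 5.0, 500)
MU, SIG = np.meshgrid(mu_grid, sig_grid, indexing="ij")
post = np.exp(norm.logpdf(log_t[:, None, None], loc=MU, scale=SIG).sum(axis=0)) / SIG

def cl_of_RU(RU, t):
    """Posterior probability that mu' <= ln(t) - sigma' * Phi^{-1}(1 - RU)."""
    thresh = np.log(t) - sig_grid * norm.ppf(1.0 - RU)   # one threshold per sigma' value
    num = (post * (MU <= thresh[None, :])).sum()
    return num / post.sum()                              # grid steps cancel in the ratio

CL, t0 = 0.90, 45.0
# One-sided upper bound: the value of R_U at which the posterior probability equals CL.
R_U = brentq(lambda R: cl_of_RU(R, t0) - CL, 1e-4, 1.0 - 1e-4)
print("One-sided %.0f%% upper Bayesian bound on R(t=%g): %.3f" % (100 * CL, t0, R_U))
</syntaxhighlight>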

Example: Bayesian Bounds

Lognormal Distribution Bayesian Bound Example (Parameters)

Determine the two-sided 90% Bayesian confidence bounds on the lognormal parameter estimates for the data given next:

[math]\displaystyle{ \begin{matrix} \text{Data Point Index} & \text{State End Time} \\ \text{1} & \text{2} \\ \text{2} & \text{5} \\ \text{3} & \text{11} \\ \text{4} & \text{23} \\ \text{5} & \text{29} \\ \text{6} & \text{37} \\ \text{7} & \text{43} \\ \text{8} & \text{59} \\ \end{matrix}\,\! }[/math]

Solution

The data points are entered into a times-to-failure data sheet, and the lognormal distribution is selected under Distributions. The Bayesian confidence bounds method applies only to the MLE analysis method; therefore, Maximum Likelihood (MLE) is selected under Analysis Method and Use Bayesian is selected under the Confidence Bounds Method in the Analysis tab.

The two-sided 90% Bayesian confidence bounds on the lognormal parameters are obtained using the QCP and clicking on the Calculate Bounds button in the Parameter Bounds tab as follows:

Lognormal Distribution Example 8 QCP.png


Lognormal Distribution Example 8 Parameter Bounds.png

Lognormal Distribution Examples

Complete Data Example

Determine the lognormal parameter estimates for the data given in the following table.

Non-Grouped Times-to-Failure Data
Data point index State F or S State End Time
1 F 2
2 F 5
3 F 11
4 F 23
5 F 29
6 F 37
7 F 43
8 F 59

Solution

Using Weibull++, the computed parameters for maximum likelihood are:

[math]\displaystyle{ \begin{align} & {{{\hat{\mu }}}^{\prime }}= & 2.83 \\ & {\hat{\sigma '}}= & 1.10 \end{align}\,\! }[/math]

For rank regression on [math]\displaystyle{ X\,\! }[/math]:

[math]\displaystyle{ \begin{align} & {{{\hat{\mu }}}^{\prime }}= & 2.83 \\ & {{{\hat{\sigma' }}}}= & 1.24 \end{align}\,\! }[/math]

For rank regression on [math]\displaystyle{ Y:\,\! }[/math]

[math]\displaystyle{ \begin{align} & {{{\hat{\mu }}}^{\prime }}= & 2.83 \\ & {{{\hat{\sigma' }}}}= & 1.36 \end{align}\,\! }[/math]
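
For complete data, the lognormal MLEs are simply the sample mean and the n-denominator standard deviation of the natural logarithms of the failure times, so the maximum likelihood values above can be verified with a few lines of Python:

<syntaxhighlight lang="python">
import numpy as np

# The eight complete failure times from the table above.
times = np.array([2., 5., 11., 23., 29., 37., 43., 59.])
log_t = np.log(times)

mu_hat  = log_t.mean()        # MLE of mu'
sig_hat = log_t.std(ddof=0)   # MLE of sigma' (n denominator)
print("mu' = %.2f, sigma' = %.2f" % (mu_hat, sig_hat))   # prints 2.83 and 1.10
</syntaxhighlight>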

Complete Data RRX Example

From Kececioglu [20, p. 347]. 15 identical units were tested to failure, and the following table lists their failure times:

Times-to-Failure Data
[math]\displaystyle{ \begin{matrix} \text{Data Point Index} & \text{Failure Times (Hr)} \\ \text{1} & \text{62}\text{.5} \\ \text{2} & \text{91}\text{.9} \\ \text{3} & \text{100}\text{.3} \\ \text{4} & \text{117}\text{.4} \\ \text{5} & \text{141}\text{.1} \\ \text{6} & \text{146}\text{.8} \\ \text{7} & \text{172}\text{.7} \\ \text{8} & \text{192}\text{.5} \\ \text{9} & \text{201}\text{.6} \\ \text{10} & \text{235}\text{.8} \\ \text{11} & \text{249}\text{.2} \\ \text{12} & \text{297}\text{.5} \\ \text{13} & \text{318}\text{.3} \\ \text{14} & \text{410}\text{.6} \\ \text{15} & \text{550}\text{.5} \\ \end{matrix}\,\! }[/math]

Solution

Published results (using probability plotting):

[math]\displaystyle{ \begin{matrix} {{\widehat{\mu }}^{\prime }}=5.22575 \\ {{\widehat{\sigma' }}}=0.62048. \\ \end{matrix}\,\! }[/math]


Weibull++ computed parameters for rank regression on X are:

[math]\displaystyle{ \begin{matrix} {{\widehat{\mu }}^{\prime }}=5.2303 \\ {{\widehat{\sigma'}}}=0.6283. \\ \end{matrix}\,\! }[/math]


The small differences are due to precision errors introduced when fitting a line manually, whereas in Weibull++ the line is fitted mathematically.
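
The rank regression on X result can be approximated with a short script. The sketch below uses Benard's median rank approximation for the plotting positions, whereas Weibull++ uses exact median ranks, so the estimates will differ slightly from the values above.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

# The 15 complete failure times from the table above.
times = np.sort(np.array([62.5, 91.9, 100.3, 117.4, 141.1, 146.8, 172.7, 192.5,
                          201.6, 235.8, 249.2, 297.5, 318.3, 410.6, 550.5]))
n = len(times)
ranks = np.arange(1, n + 1)

# Plotting positions from Benard's median rank approximation.
median_ranks = (ranks - 0.3) / (n + 0.4)

# Rank regression on X: fit ln(t) = mu' + sigma' * Phi^{-1}(median rank),
# minimizing the horizontal residuals of the probability plot.
y = norm.ppf(median_ranks)
x = np.log(times)
sigma_hat, mu_hat = np.polyfit(y, x, 1)    # slope = sigma', intercept = mu'

print("mu' = %.4f, sigma' = %.4f" % (mu_hat, sigma_hat))   # close to 5.2303 and 0.6283
</syntaxhighlight>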

Complete Data Unbiased MLE Example

From Kececioglu [19, p. 406]. 9 identical units were tested continuously to failure, and the failure times were recorded at 30.4, 36.7, 53.3, 58.5, 74.0, 99.3, 114.3, 140.1 and 257.9 hours.

Solution

The published results were obtained using the unbiased standard deviation. Published results (using MLE):

[math]\displaystyle{ \begin{matrix} {{\widehat{\mu }}^{\prime }}=4.3553 \\ {{\widehat{\sigma' }}}=0.67677 \\ \end{matrix}\,\! }[/math]


This same data set can be entered into Weibull++ by creating a data sheet capable of handling non-grouped time-to-failure data. Since the results shown above are unbiased, the Use Unbiased Std on Normal Data option in the User Setup must be selected in order to duplicate these results. Weibull++ computed parameters for maximum likelihood are:

[math]\displaystyle{ \begin{matrix} {{\widehat{\mu }}^{\prime }}=4.3553 \\ {{\widehat{\sigma' }}}=0.6768 \\ \end{matrix}\,\! }[/math]
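
For this data set, the published unbiased [math]\displaystyle{ {{\sigma '}}\,\! }[/math] is reproduced by using the n-1 (Bessel-corrected) standard deviation of the log times in place of the n-denominator MLE, which can be checked with a few lines of Python:

<syntaxhighlight lang="python">
import numpy as np

# The nine complete failure times from the example.
times = np.array([30.4, 36.7, 53.3, 58.5, 74.0, 99.3, 114.3, 140.1, 257.9])
log_t = np.log(times)

mu_hat  = log_t.mean()          # MLE of mu'                       -> approx. 4.3553
sig_mle = log_t.std(ddof=0)     # n-denominator (biased MLE) sigma'
sig_unb = log_t.std(ddof=1)     # n-1 denominator; matches the published 0.6768 here

print("mu' = %.4f" % mu_hat)
print("sigma' (MLE)      = %.4f" % sig_mle)
print("sigma' (unbiased) = %.4f" % sig_unb)
</syntaxhighlight>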

Suspension Data Example

From Nelson [30, p. 324]. 96 locomotive controls were tested, 37 failed and 59 were suspended after running for 135,000 miles. The table below shows the failure and suspension times.

Nelson's Locomotive Data
Number in State F or S Time
1 1 F 22.5
2 1 F 37.5
3 1 F 46
4 1 F 48.5
5 1 F 51.5
6 1 F 53
7 1 F 54.5
8 1 F 57.5
9 1 F 66.5
10 1 F 68
11 1 F 69.5
12 1 F 76.5
13 1 F 77
14 1 F 78.5
15 1 F 80
16 1 F 81.5
17 1 F 82
18 1 F 83
19 1 F 84
20 1 F 91.5
21 1 F 93.5
22 1 F 102.5
23 1 F 107
24 1 F 108.5
25 1 F 112.5
26 1 F 113.5
27 1 F 116
28 1 F 117
29 1 F 118.5
30 1 F 119
31 1 F 120
32 1 F 122.5
33 1 F 123
34 1 F 127.5
35 1 F 131
36 1 F 132.5
37 1 F 134
38 59 S 135

Solution

The distribution used in the publication was the base-10 lognormal. Published results (using MLE):

[math]\displaystyle{ \begin{matrix} {{\widehat{\mu }}^{\prime }}=2.2223 \\ {{\widehat{\sigma' }}}=0.3064 \\ \end{matrix}\,\! }[/math]


Published 95% confidence limits on the parameters:

[math]\displaystyle{ \begin{matrix} {{\widehat{\mu }}^{\prime }}=\left\{ 2.1336,2.3109 \right\} \\ {{\widehat{\sigma'}}}=\left\{ 0.2365,0.3970 \right\} \\ \end{matrix}\,\! }[/math]


Published variance/covariance matrix:

[math]\displaystyle{ \left[ \begin{matrix} \widehat{Var}\left( {{{\hat{\mu }}}^{\prime }} \right)=0.0020 & {} & \widehat{Cov}({{{\hat{\mu }}}^{\prime }},{{{\hat{\sigma' }}}})=0.001 \\ {} & {} & {} \\ \widehat{Cov}({{{\hat{\mu }}}^{\prime }},{{{\hat{\sigma' }}}})=0.001 & {} & \widehat{Var}\left( {{{\hat{\sigma '}}}} \right)=0.0016 \\ \end{matrix} \right]\,\! }[/math]


To replicate the published results (since Weibull++ uses a lognormal to the base [math]\displaystyle{ e\,\! }[/math] ), take the base-10 logarithm of the data and estimate the parameters using the normal distribution and MLE.
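
The same replication can be done outside Weibull++ with a short maximum likelihood fit. The sketch below works with the base-10 logarithms of the mileages (in thousands of miles) and numerically maximizes the right-censored normal likelihood; it should closely reproduce the published estimates, which are also listed next for comparison.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Nelson's locomotive data: 37 failure mileages (thousands of miles) from the
# table above, plus 59 units suspended at 135 thousand miles.
failures = np.array([22.5, 37.5, 46, 48.5, 51.5, 53, 54.5, 57.5, 66.5, 68,
                     69.5, 76.5, 77, 78.5, 80, 81.5, 82, 83, 84, 91.5,
                     93.5, 102.5, 107, 108.5, 112.5, 113.5, 116, 117, 118.5, 119,
                     120, 122.5, 123, 127.5, 131, 132.5, 134])
n_susp, t_susp = 59, 135.0

# Base-10 lognormal: fit a normal distribution to log10(t) by MLE with right censoring.
x_f, x_s = np.log10(failures), np.log10(t_susp)

def negloglik(params):
    mu, sig = params
    if sig <= 0:
        return np.inf
    ll = norm.logpdf(x_f, mu, sig).sum()        # failure (exact) terms
    ll += n_susp * norm.logsf(x_s, mu, sig)     # right-censored (suspension) terms
    return -ll

res = minimize(negloglik, x0=[x_f.mean(), x_f.std()], method="Nelder-Mead")
print("mu' = %.4f, sigma' = %.4f" % tuple(res.x))   # approximately 2.2223 and 0.3064
</syntaxhighlight>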

  • Weibull++ computed parameters for maximum likelihood are:
[math]\displaystyle{ \begin{matrix} {{\widehat{\mu }}^{\prime }}=2.2223 \\ {{\widehat{\sigma' }}}=0.3064 \\ \end{matrix}\,\! }[/math]


  • Weibull++ computed 95% confidence limits on the parameters:
[math]\displaystyle{ \begin{matrix} {{\widehat{\mu }}^{\prime }}=\left\{ 2.1364,2.3081 \right\} \\ {{\widehat{\sigma'}}}=\left\{ 0.2395,0.3920 \right\} \\ \end{matrix}\,\! }[/math]


  • Weibull++ computed variance/covariance matrix:
[math]\displaystyle{ \left[ \begin{matrix} \widehat{Var}\left( {{{\hat{\mu }}}^{\prime }} \right)=0.0019 & {} & \widehat{Cov}({{{\hat{\mu }}}^{\prime }},{{{\hat{\sigma' }}}})=0.0009 \\ {} & {} & {} \\ \widehat{Cov}({\mu }',{{{\hat{\sigma' }}}})=0.0009 & {} & \widehat{Var}\left( {{{\hat{\sigma' }}}} \right)=0.0015 \\ \end{matrix} \right]\,\! }[/math]

Interval Data Example

Determine the lognormal parameter estimates for the data given in the table below.

Non-Grouped Data Times-to-Failure with Intervals
Data point index Last Inspected State End Time
1 30 32
2 32 35
3 35 37
4 37 40
5 42 42
6 45 45
7 50 50
8 55 55

Solution

This is a sequence of interval times-to-failure in which the intervals vary substantially in length. Using Weibull++, the parameters computed by maximum likelihood are:

[math]\displaystyle{ \begin{align} & {{{\hat{\mu }}}^{\prime }}= & 3.64 \\ & {{{\hat{\sigma' }}}}= & 0.18 \end{align}\,\! }[/math]


For rank regression on [math]\displaystyle{ X\ \,\! }[/math]:

[math]\displaystyle{ \begin{align} & {{{\hat{\mu }}}^{\prime }}= & 3.64 \\ & {{{\hat{\sigma' }}}}= & 0.17 \end{align}\,\! }[/math]


For rank regression on [math]\displaystyle{ Y\ \,\! }[/math]:

[math]\displaystyle{ \begin{align} & {{{\hat{\mu }}}^{\prime }}= & 3.64 \\ & {{{\hat{\sigma' }}}}= & 0.21 \end{align}\,\! }[/math]
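
A generic interval-censored lognormal fit can be sketched in a few lines of Python. In the sketch below, rows whose two times coincide are read as exact failures and the remaining rows as failures occurring somewhere within the inspection interval; this reading of the records and the numerical details are assumptions, so the resulting estimates need not match the Weibull++ values above exactly.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Inspection intervals (last inspected, state end time) and rows read here as
# exact failures (last inspected equal to the end time), from the table above.
intervals = np.array([[30., 32.], [32., 35.], [35., 37.], [37., 40.]])
exact = np.array([42., 45., 50., 55.])

def negloglik(params):
    mu, sig = params
    if sig <= 0:
        return np.inf
    lo, hi = np.log(intervals[:, 0]), np.log(intervals[:, 1])
    # Interval terms: probability of failing between the two inspections.
    ll = np.log(norm.cdf(hi, mu, sig) - norm.cdf(lo, mu, sig)).sum()
    # Exact-failure terms: lognormal log-pdf written through the normal pdf of ln(t).
    ll += (norm.logpdf(np.log(exact), mu, sig) - np.log(exact)).sum()
    return -ll

start = [np.log(np.r_[intervals.mean(axis=1), exact]).mean(), 0.2]
res = minimize(negloglik, x0=start, method="Nelder-Mead")
print("mu' = %.2f, sigma' = %.2f" % tuple(res.x))
</syntaxhighlight>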