Lloyd-Lipow

Lloyd and Lipow (1962) considered a situation in which a test program is conducted in $$N\,\!$$ stages. Each stage consists of a certain number of trials of an item undergoing testing, and the outcome of each trial is recorded as a success or a failure. All tests in a given stage of testing involve similar items. The results of each stage of testing are used to improve the item for further testing in the next stage. For the $${{k}^{th}}\,\!$$ group of data, taken in chronological order, there are $${{n}_{k}}\,\!$$ tests with $${{S}_{k}}\,\!$$ observed successes. The reliability growth function is then given in Lloyd and Lipow [6]:


 * $${{R}_{k}}={{R}_{\infty }}-\frac{\alpha }{k}\,\!$$

where:


 * $$R_k =\,\!$$ the actual reliability during the $$k^{th}\,\!$$ stage of testing


 * $$R_{\infty} =\,\!$$ the ultimate reliability attained if $$k\to{\infty}\,\!$$


 * $$\alpha =\,\!$$ a parameter ($$\alpha >0\,\!$$) that modifies the rate of growth

Note that, essentially, $${{R}_{k}}=\tfrac{{{S}_{k}}}{{{n}_{k}}}\,\!$$. If the data set consists of reliability data, then $${{S}_{k}}\,\!$$ is assumed to be the given observed reliability and $${{n}_{k}}\,\!$$ is set equal to 1.
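As a quick illustration of the growth function, consider the minimal Python sketch below. The parameter values are invented for illustration and do not come from any data set in this article.

```python
def lloyd_lipow_reliability(k, r_inf, alpha):
    """Reliability during the k-th stage under the Lloyd-Lipow model:
    R_k = R_inf - alpha / k."""
    return r_inf - alpha / k

# Reliability rises toward r_inf as the stage number k grows.
r_inf, alpha = 0.95, 0.30  # illustrative values
curve = [round(lloyd_lipow_reliability(k, r_inf, alpha), 4) for k in range(1, 6)]
# k = 1..5 -> [0.65, 0.8, 0.85, 0.875, 0.89]
```
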

Parameter Estimation
When analyzing reliability data in the RGA software, you have the option to enter the reliability values in percent or in decimal format. However, $${{\hat{R}}_{\infty }}\,\!$$ will always be returned in decimal format and not in percent. The estimated parameters in the RGA software are unitless.

Maximum Likelihood Estimators
For the $${{k}^{th}}\,\!$$ stage:


 * $${{L}_{k}}=const.\text{ }R_{k}^{{{S}_{k}}}{{(1-{{R}_{k}})}^{{{n}_{k}}-{{S}_{k}}}}\,\!$$

And assuming that the results are independent between stages:


 * $$L=\underset{k=1}{\overset{N}{\mathop \prod }}\,R_{k}^{{{S}_{k}}}{{(1-{{R}_{k}})}^{{{n}_{k}}-{{S}_{k}}}}\,\!$$

Then taking the natural log gives:


 * $$\Lambda =\underset{k=1}{\overset{N}{\mathop \sum }}\,{{S}_{k}}\ln \left( {{R}_{\infty }}-\frac{\alpha }{k} \right)+\underset{k=1}{\overset{N}{\mathop \sum }}\,({{n}_{k}}-{{S}_{k}})\ln \left( 1-{{R}_{\infty }}+\frac{\alpha }{k} \right)\,\!$$

Differentiating with respect to $${{R}_{\infty }}\,\!$$ and $$\alpha \,\!$$ yields:


 * $$\frac{\partial \Lambda }{\partial {{R}_{\infty }}}=\underset{k=1}{\overset{N}{\mathop \sum }}\,\frac{{{S}_{k}}}{{{R}_{\infty }}-\tfrac{\alpha }{k}}-\underset{k=1}{\overset{N}{\mathop \sum }}\,\frac{{{n}_{k}}-{{S}_{k}}}{1-{{R}_{\infty }}+\tfrac{\alpha }{k}}\,\!$$


 * $$\frac{\partial \Lambda }{\partial \alpha }=-\underset{k=1}{\overset{N}{\mathop \sum }}\,\frac{\tfrac{{{S}_{k}}}{k}}{{{R}_{\infty }}-\tfrac{\alpha }{k}}+\underset{k=1}{\overset{N}{\mathop \sum }}\,\frac{\tfrac{{{n}_{k}}-{{S}_{k}}}{k}}{1-{{R}_{\infty }}+\tfrac{\alpha }{k}}\,\!$$

Rearranging the equations and setting them equal to zero gives:


 * $$\frac{\partial \Lambda }{\partial {{R}_{\infty }}}=\underset{k=1}{\overset{N}{\mathop \sum }}\,\frac{\tfrac{{{S}_{k}}}{{{n}_{k}}}-\left( {{R}_{\infty }}-\tfrac{\alpha }{k} \right)}{\tfrac{1}{{{n}_{k}}}\left( {{R}_{\infty }}-\tfrac{\alpha }{k} \right)\left( 1-{{R}_{\infty }}+\tfrac{\alpha }{k} \right)}=0\,\!$$


 * $$\frac{\partial \Lambda }{\partial \alpha }=-\underset{k=1}{\overset{N}{\mathop \sum }}\,\frac{\tfrac{1}{k}\tfrac{{{S}_{k}}}{{{n}_{k}}}-\left( {{R}_{\infty }}-\tfrac{\alpha }{k} \right)\tfrac{1}{k}}{\tfrac{1}{{{n}_{k}}}\left( {{R}_{\infty }}-\tfrac{\alpha }{k} \right)\left( 1-{{R}_{\infty }}+\tfrac{\alpha }{k} \right)}=0\,\!$$

The resulting equations can be solved simultaneously for $$\widehat{\alpha }\,\!$$ and $${{\hat{R}}_{\infty }}\,\!$$. It should be noted that a closed form solution does not exist for either of the parameters; thus, they must be estimated numerically.
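Since the likelihood equations have no closed form solution, the estimates must be found numerically. As an illustration only (this is not the RGA software's solver), the sketch below maximizes the log-likelihood directly with a crude coarse-to-fine grid search over $$({{R}_{\infty }},\alpha )\,\!$$; the data set and function names are invented.

```python
import math

def log_likelihood(r_inf, alpha, n, s):
    """Lloyd-Lipow log-likelihood (additive constant dropped):
    Lambda = sum S_k*ln(R_inf - alpha/k) + (n_k - S_k)*ln(1 - R_inf + alpha/k)."""
    total = 0.0
    for k, (nk, sk) in enumerate(zip(n, s), start=1):
        rk = r_inf - alpha / k
        if not 0.0 < rk < 1.0:
            return float("-inf")  # outside the admissible region
        total += sk * math.log(rk) + (nk - sk) * math.log(1.0 - rk)
    return total

def mle_grid(n, s, steps=100, rounds=5):
    """Crude coarse-to-fine grid search; a stand-in for a proper
    Newton or quasi-Newton solver."""
    lo_r, hi_r = 0.01, 0.99
    lo_a, hi_a = 0.0, 1.0
    for _ in range(rounds):
        candidates = [
            (lo_r + (hi_r - lo_r) * i / steps, lo_a + (hi_a - lo_a) * j / steps)
            for i in range(steps + 1)
            for j in range(steps + 1)
        ]
        best_r, best_a = max(candidates,
                             key=lambda p: log_likelihood(p[0], p[1], n, s))
        dr = (hi_r - lo_r) / steps
        da = (hi_a - lo_a) / steps
        lo_r, hi_r = best_r - 2 * dr, best_r + 2 * dr
        lo_a, hi_a = max(best_a - 2 * da, 0.0), best_a + 2 * da
    return best_r, best_a

# Invented success/failure data: 5 stages, 20 trials each.
r_hat, a_hat = mle_grid([20, 20, 20, 20, 20], [13, 16, 17, 18, 18])
```

In practice a gradient-based routine on the two score equations converges far faster; the grid search is only meant to make the "no closed form, solve numerically" step concrete.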

Least Squares Estimators
To obtain least squares estimators for $${{R}_{\infty }}\,\!$$ and $$\alpha \,\!$$, the sum of squares, $$Q\,\!$$, of the deviations of the observed success ratio, $${{S}_{k}}/{{n}_{k}}\,\!$$, from its expected value, $${{R}_{\infty }}-\tfrac{\alpha }{k}\,\!$$, is minimized with respect to the parameters $${{R}_{\infty }}\,\!$$ and $$\alpha .\,\!$$ Therefore, $$Q\,\!$$ is expressed as:


 * $$Q=\underset{k=1}{\overset{N}{\mathop \sum }}\,{{\left( \frac{{{S}_{k}}}{{{n}_{k}}}-{{R}_{\infty }}+\frac{\alpha }{k} \right)}^{2}}\,\!$$

Taking the derivatives with respect to $${{R}_{\infty }}\,\!$$ and $$\alpha \,\!$$ and setting them equal to zero yields:


 * $$\begin{align}

\frac{\partial Q}{\partial {{R}_{\infty }}} = & -2\underset{k=1}{\overset{N}{\mathop \sum }}\,\left( \frac{{{S}_{k}}}{{{n}_{k}}}-{{R}_{\infty }}+\frac{\alpha }{k} \right)=0 \\ \frac{\partial Q}{\partial \alpha } = & 2\underset{k=1}{\overset{N}{\mathop \sum }}\,\left( \frac{{{S}_{k}}}{{{n}_{k}}}-{{R}_{\infty }}+\frac{\alpha }{k} \right)\frac{1}{k}=0 \end{align}\,\!$$

Solving the equations simultaneously, the least squares estimates of $${{R}_{\infty }}\,\!$$ and $$\alpha \,\!$$ are:


 * $${{\hat{R}}_{\infty }}=\frac{\underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{1}{{{k}^{2}}}\underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{{{S}_{k}}}{{{n}_{k}}}-\underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{1}{k}\underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{{{S}_{k}}}{k{{n}_{k}}}}{N\underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{1}{{{k}^{2}}}-{{\left( \underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{1}{k} \right)}^{2}}}\,\!$$


 * or:


 * $${{\hat{R}}_{\infty }}=\frac{\underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{1}{{{k}^{2}}}\underset{k=1}{\overset{N}{\mathop{\sum }}}\,{{R}_{k}}-\underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{1}{k}\underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{{{R}_{k}}}{k}}{N\underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{1}{{{k}^{2}}}-{{\left( \underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{1}{k} \right)}^{2}}}\,\!$$


 * and:


 * $$\hat{\alpha }=\frac{\underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{1}{k}\underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{{{S}_{k}}}{{{n}_{k}}}-N\underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{{{S}_{k}}}{k{{n}_{k}}}}{N\underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{1}{{{k}^{2}}}-{{\left( \underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{1}{k} \right)}^{2}}}\,\!$$


 * or:


 * $$\hat{\alpha }=\frac{\underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{1}{k}\underset{k=1}{\overset{N}{\mathop{\sum }}}\,{{R}_{k}}-N\underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{{{R}_{k}}}{k}}{N\underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{1}{{{k}^{2}}}-{{\left( \underset{k=1}{\overset{N}{\mathop{\sum }}}\,\tfrac{1}{k} \right)}^{2}}}\,\!$$
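Because these estimators are closed form, they translate directly into code. A minimal sketch, assuming $${{n}_{k}}\,\!$$ trials and $${{S}_{k}}\,\!$$ successes per stage (the function name is illustrative):

```python
def lloyd_lipow_lsq(n, s):
    """Closed-form least squares estimates of (R_inf, alpha).
    n[k-1] is the number of trials and s[k-1] the number of successes
    in stage k, for k = 1..N. For reliability data, pass n_k = 1 and
    s_k = observed reliability."""
    N = len(n)
    ks = range(1, N + 1)
    r = [sk / nk for nk, sk in zip(n, s)]  # observed success ratios S_k/n_k
    sum_inv_k = sum(1.0 / k for k in ks)
    sum_inv_k2 = sum(1.0 / k**2 for k in ks)
    sum_r = sum(r)
    sum_r_over_k = sum(rk / k for rk, k in zip(r, ks))
    denom = N * sum_inv_k2 - sum_inv_k**2
    r_inf = (sum_inv_k2 * sum_r - sum_inv_k * sum_r_over_k) / denom
    alpha = (sum_inv_k * sum_r - N * sum_r_over_k) / denom
    return r_inf, alpha
```

If the observed ratios lie exactly on a Lloyd-Lipow curve, the fit recovers the generating parameters exactly, which makes a convenient sanity check.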

Confidence Bounds
This section presents the methods used in the RGA software to estimate the confidence bounds under the Lloyd-Lipow model. One of the properties of maximum likelihood estimators is that they are asymptotically normal. This indicates that they are normally distributed for large samples [6, 7]. Additionally, since the parameter $$\alpha \,\!$$ must be positive, $$\ln \alpha \,\!$$ is treated as being normally distributed as well. The parameter $${{R}_{\infty }}\,\!$$ represents the ultimate reliability that would be attained if $$k\to \infty \,\!$$. $${{R}_{k}}\,\!$$ is the actual reliability during the $${{k}^{th}}\,\!$$ stage of testing. Therefore, $${{R}_{\infty }}\,\!$$ and $${{R}_{k}}\,\!$$ will be between 0 and 1. Consequently, the endpoints of the confidence intervals of the parameters $${{R}_{\infty }}\,\!$$ and $${{R}_{k}}\,\!$$ also will be between 0 and 1. To obtain the confidence interval, it is common practice to use the logit transformation.

The confidence bounds on the parameters $$\alpha \,\!$$ and $${{R}_{\infty }}\,\!$$ are given by:


 * $$C{{B}_{\alpha }}=\hat{\alpha }{{e}^{\pm {{z}_{\alpha /2}}\sqrt{Var(\hat{\alpha })}/\hat{\alpha }}}\,\!$$


 * $$C{{B}_{{{R}_{\infty }}}}=\frac{{{{\hat{R}}}_{\infty }}}{{{{\hat{R}}}_{\infty }}+(1-{{{\hat{R}}}_{\infty }}){{e}^{\pm {{z}_{\alpha /2}}\sqrt{Var({{{\hat{R}}}_{\infty }})}/\left[ {{{\hat{R}}}_{\infty }}(1-{{{\hat{R}}}_{\infty }}) \right]}}}\,\!$$

where $${{z}_{\alpha /2}}\,\!$$ represents the percentage points of the $$N(0,1)\,\!$$ distribution such that $$P\{z\ge {{z}_{\alpha /2}}\}=\alpha /2\,\!$$.

The confidence bounds on reliability are given by:


 * $$CB=\frac{{{{\hat{R}}}_{k}}}{{{{\hat{R}}}_{k}}+(1-{{{\hat{R}}}_{k}}){{e}^{\pm {{z}_{\alpha /2}}\sqrt{Var({{{\hat{R}}}_{k}})}/\left[ {{{\hat{R}}}_{k}}(1-{{{\hat{R}}}_{k}}) \right]}}}\,\!$$


 * where:


 * $$Var({{\widehat{R}}_{k}})=Var({{\widehat{R}}_{\infty }})+\frac{1}{{{k}^{2}}}\cdot Var(\widehat{\alpha })-\frac{2}{k}\cdot Cov({{\widehat{R}}_{\infty }},\widehat{\alpha })\,\!$$
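The variance combination and the logit-transformed bounds can be sketched as follows. The function names are illustrative, and the caller supplies $${{z}_{\alpha /2}}\,\!$$ (e.g., 1.645 for two-sided 90% bounds):

```python
import math

def var_rk(k, var_r_inf, var_alpha, cov):
    """Var(R_k) combined from the parameter variances and their covariance:
    Var(R_inf) + Var(alpha)/k^2 - 2*Cov(R_inf, alpha)/k."""
    return var_r_inf + var_alpha / k**2 - 2.0 * cov / k

def logit_bounds(r_hat, var_r, z):
    """Two-sided bounds on a reliability estimate via the logit
    transformation, which keeps both endpoints inside (0, 1)."""
    w = z * math.sqrt(var_r) / (r_hat * (1.0 - r_hat))
    lower = r_hat / (r_hat + (1.0 - r_hat) * math.exp(w))
    upper = r_hat / (r_hat + (1.0 - r_hat) * math.exp(-w))
    return lower, upper
```

The same `logit_bounds` form applies to $${{\hat{R}}_{\infty }}\,\!$$ and to the stage reliabilities $${{\hat{R}}_{k}}\,\!$$; only the point estimate and variance change.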

All the variances can be calculated using the Fisher Matrix:



 * $${{\begin{bmatrix} -\tfrac{{{\partial }^{2}}\Lambda }{\partial R_{\infty }^{2}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \alpha \partial {{R}_{\infty }}} \\ -\tfrac{{{\partial }^{2}}\Lambda }{\partial \alpha \partial {{R}_{\infty }}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\alpha }^{2}}} \\ \end{bmatrix}}^{-1}}=\begin{bmatrix} Var({{\widehat{R}}_{\infty }}) & Cov({{\widehat{R}}_{\infty }},\widehat{\alpha }) \\ Cov({{\widehat{R}}_{\infty }},\widehat{\alpha }) & Var(\widehat{\alpha }) \\ \end{bmatrix}\,\!$$

From the ML estimators of the Lloyd-Lipow model, taking the second partial derivatives yields:


 * $$\frac{{{\partial }^{2}}\Lambda }{\partial R_{\infty }^{2}}=-\underset{k=1}{\overset{N}{\mathop \sum }}\,\frac{{{S}_{k}}}{{{\left( {{R}_{\infty }}-\tfrac{\alpha }{k} \right)}^{2}}}-\underset{k=1}{\overset{N}{\mathop \sum }}\,\frac{{{n}_{k}}-{{S}_{k}}}{{{\left( 1-{{R}_{\infty }}+\tfrac{\alpha }{k} \right)}^{2}}}\,\!$$


 * $$\frac{{{\partial }^{2}}\Lambda }{\partial {{\alpha }^{2}}}=-\underset{k=1}{\overset{N}{\mathop \sum }}\,\frac{\tfrac{{{S}_{k}}}{{{k}^{2}}}}{{{\left( {{R}_{\infty }}-\tfrac{\alpha }{k} \right)}^{2}}}-\underset{k=1}{\overset{N}{\mathop \sum }}\,\frac{\tfrac{{{n}_{k}}-{{S}_{k}}}{{{k}^{2}}}}{{{\left( 1-{{R}_{\infty }}+\tfrac{\alpha }{k} \right)}^{2}}}\,\!$$


 * and:


 * $$\frac{{{\partial }^{2}}\Lambda }{\partial {{R}_{\infty }}\partial \alpha }=\underset{k=1}{\overset{N}{\mathop \sum }}\,\frac{\tfrac{{{S}_{k}}}{k}}{{{\left( {{R}_{\infty }}-\tfrac{\alpha }{k} \right)}^{2}}}-\underset{k=1}{\overset{N}{\mathop \sum }}\,\frac{\tfrac{{{n}_{k}}-{{S}_{k}}}{k}}{{{\left( 1-{{R}_{\infty }}+\tfrac{\alpha }{k} \right)}^{2}}}\,\!$$

The confidence bounds can be obtained by evaluating the three second partial derivatives shown above, substituting the values into the Fisher matrix and inverting it to obtain the variances and covariance, and then calculating $$Var({{\hat{R}}_{k}})\,\!$$.

As an example, you can calculate and plot the confidence bounds for the data set given above in the Least Squares example as:


 * $$\begin{align}

\frac{{{\partial }^{2}}\Lambda }{\partial R_{\infty }^{2}} = & -255.3835-937.2902=-1192.6737 \\ \frac{{{\partial }^{2}}\Lambda }{\partial {{\alpha }^{2}}} = & -24.4575-43.3930=-67.8505 \\ \frac{{{\partial }^{2}}\Lambda }{\partial {{R}_{\infty }}\partial \alpha } = & 48.6606-140.7518=-92.0912 \end{align}\,\!$$

The variances can be calculated using the Fisher Matrix:


 * $$\begin{align}

{{\left[ \begin{matrix} 1192.6737 & 92.0912 \\   92.0912 & 67.8505  \\ \end{matrix} \right]}^{-1}}= & \left[ \begin{matrix} Var({{\widehat{R}}_{\infty }}) & Cov({{\widehat{R}}_{\infty }},\widehat{\alpha }) \\ Cov({{\widehat{R}}_{\infty }},\widehat{\alpha }) & Var(\widehat{\alpha }) \\ \end{matrix} \right] \\ = & \left[ \begin{matrix} 0.00093661 & -0.00127123 \\   -0.00127123 & 0.01646371  \\ \end{matrix} \right] \end{align}\,\!$$
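Because the Fisher matrix here is 2×2, its inverse can be written analytically. The sketch below reproduces the worked inversion above (the matrix entries are the negated second partial derivatives from the example):

```python
def invert_2x2(m):
    """Analytic inverse of a symmetric 2x2 matrix [[a, b], [b, d]]."""
    (a, b), (_, d) = m
    det = a * d - b * b
    return [[d / det, -b / det], [-b / det, a / det]]

# Fisher matrix from the worked example.
fisher = [[1192.6737, 92.0912], [92.0912, 67.8505]]
cov = invert_2x2(fisher)
# cov[0][0] ~ Var(R_inf), cov[1][1] ~ Var(alpha),
# off-diagonal entries ~ Cov(R_inf, alpha)
```

The diagonal entries are the parameter variances and the off-diagonal entry is the covariance that enters $$Var({{\widehat{R}}_{k}})\,\!$$.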

The variance of $${{R}_{k}}\,\!$$ is obtained such that:


 * $$\begin{align}

Var({{\widehat{R}}_{k}})& =Var({{\widehat{R}}_{\infty }})+\frac{1}{{{k}^{2}}}\cdot Var(\widehat{\alpha })-\frac{2}{k}\cdot Cov({{\widehat{R}}_{\infty }},\widehat{\alpha })\\ Var({{\widehat{R}}_{k}})& =0.00093661+\frac{1}{{{k}^{2}}}\cdot 0.01646371+\frac{2}{k}\cdot 0.00127123 \end{align}\,\!$$

The confidence bounds on reliability can now be calculated. The associated confidence bounds at the 90% confidence level are plotted in the figure below with the predicted reliability, $${{R}_{k}}\,\!$$.