Reliability DOE for Life Tests

=Reliability DOE=

Reliability analysis is commonly thought of as an approach to model failures of existing products. The usual reliability analysis involves characterizing product failures using distributions such as the exponential, Weibull and lognormal. Based on the fitted distribution, failures are mitigated, warranty returns are predicted, or maintenance actions are planned. However, reliability analysis can also be used as a powerful tool to design robust products that operate with minimal failures, by adopting the methodology of Design for Reliability (DFR). In DFR, reliability analysis is carried out in conjunction with physics of failure and experiment design techniques. Under this approach, Design of Experiments (DOE) uses life data to "build" reliability into products, not just quantify the existing reliability. Such an approach, if properly implemented, can result in significant cost savings, especially in terms of fewer warranty returns or repair and maintenance actions. Although DOE techniques can be used both to improve product reliability and to make this reliability robust to noise factors, the discussion in this chapter is focused on reliability improvement.

Reliability DOE Analysis
Reliability DOE (R-DOE) analysis is fairly similar to the analysis of other designed experiments except that the response is the life of the product in the respective units (e.g., for an automobile component the unit of life may be miles, for a mechanical component it may be cycles, and for a pharmaceutical product it may be months or years). However, two important differences exist that make R-DOE analysis unique. The first is that life data of most products are typically well modeled by the lognormal, Weibull or exponential distribution, but usually do not follow the normal distribution. Traditional DOE techniques rest on the assumption that response values at any treatment level follow the normal distribution and, therefore, that the error terms, $$\epsilon $$, can be assumed to be normally and independently distributed. This assumption may not be valid for the response data used in most R-DOE analyses. The second is that the life data obtained may be either complete or censored; in the censored case, the standard regression techniques applicable to the response data in traditional DOEs can no longer be used. Stresses affecting the life of the product may also be investigated using R-DOE analysis. In this case, the primary purpose of the R-DOE analysis is to identify which of the investigated stresses affect the life of the product (by investigating whether a change in the level of any stress leads to a significant change in the life of the product). Once the important stresses affecting the life of the product have been identified, detailed analyses can be carried out using ReliaSoft's ALTA software. ALTA includes a number of life-stress relationships (LSRs) to model the relation between life and the stress affecting the life of the product.
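The normality issue described above, and the log transformation used later in this chapter to work around it, can be checked numerically. The sketch below is purely illustrative (the sample size and the parameter values $$\mu '=3.6$$ and $${\sigma }'=0.4$$ are made up, not taken from this chapter): lognormal life data are right-skewed, while their logarithms behave like normal samples.

```python
import numpy as np

# Illustrative only: mu' = 3.6 and sigma' = 0.4 are assumed values.
rng = np.random.default_rng(1)
T = rng.lognormal(mean=3.6, sigma=0.4, size=100_000)  # lognormal "life" data

# Raw lives are right-skewed: the mean is pulled above the median.
print(T.mean() > np.median(T))        # prints True

# The logs are symmetric and recover mu' and sigma' directly.
log_T = np.log(T)
print(round(log_T.mean(), 2), round(log_T.std(), 2))  # close to 3.6 and 0.4
```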

R-DOE Analysis of Lognormally Distributed Data
Assume that the life, $$T$$, for a certain product has been found to be lognormally distributed. The probability density function for the lognormal distribution is:


 * $$f(T)=\frac{1}{T{\sigma }'\sqrt{2\pi }}{{e}^{-\frac{1}{2}{{\left( \frac{\ln (T)-{\mu }'}{{{\sigma }'}} \right)}^{2}}}}$$

where $${\mu }'$$  represents the mean of the natural logarithm of the times-to-failure and  $${\sigma }'$$  represents the standard deviation of the natural logarithms of the times-to-failure [LDAReference]. If the analyst wants to investigate a single two level factor that may affect the life, $$T$$, then the following model may be used:


 * $${{T}_{i}}={{\mu }_{i}}+{{\xi }_{i}}$$

where:

*$${{T}_{i}}$$ represents the times-to-failure at the $$i$$th treatment level of the factor
*$${{\mu }_{i}}$$ represents the mean value of $${{T}_{i}}$$ for the $$i$$th treatment
*$${{\xi }_{i}}$$ is the random error term
*the subscript $$i$$ represents the treatment level of the factor, with $$i=1,2$$ for a two level factor

The model of Eqn. (MeansModel) is analogous to the ANOVA model, $${{Y}_{i}}={{\mu }_{i}}+{{\epsilon }_{i}}$$, used in Chapter 6 for traditional DOE analyses. Note, however, that the random error term, $${{\xi }_{i}}$$, is not normally distributed here because the response, $$T$$, is lognormally distributed. It is known that the logarithm of a lognormally distributed random variable follows the normal distribution. Therefore, if the logarithmic transformation of $$T$$, $$\ln (T)$$, is used in Eqn. (MeansModel), the model becomes identical to the ANOVA model, $${{Y}_{i}}={{\mu }_{i}}+{{\epsilon }_{i}}$$, used in Chapter 6. Thus, using the logarithmic failure times, the model can be written as:


 * $$\ln ({{T}_{i}})=\mu _{i}^{\prime }+{{\epsilon }_{i}}$$

where:

*$$\ln ({{T}_{i}})$$ represents the logarithmic times-to-failure at the $$i$$th treatment
*$$\mu _{i}^{\prime }$$ represents the mean of the natural logarithm of the times-to-failure at the $$i$$th treatment
*$${\sigma }'$$ represents the standard deviation of the natural logarithms of the times-to-failure

The random error term, $${{\epsilon }_{i}}$$, is normally distributed because the response, $$\ln ({{T}_{i}})$$, is normally distributed. Since the model of Eqn. (AnovaModel) is identical to the ANOVA model used in traditional DOE analysis, regression techniques can be applied here and the R-DOE analysis can be carried out in the same way as traditional DOE analyses. Recall from Chapter 7 that if the factor(s) affecting the response has only two levels, the notation of the regression model can be applied to the ANOVA model. Therefore, the model of Eqn. (AnovaModel) can be written using a single indicator variable, $${{x}_{1}}$$, to represent the two level factor as:


 * $$\ln ({{T}_{i}})={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\epsilon }_{i}}$$

where $${{\beta }_{0\text{ }}}$$ is the intercept term and  $${{\beta }_{1}}$$  is the effect coefficient for the investigated factor. Setting Eqns. (AnovaModel) and (RegressionNotation) equal to each other returns:


 * $$\mu _{i}^{\prime }={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}$$

The natural logarithm of the times-to-failure at any factor level, $$\mu _{i}^{\prime }$$, is referred to as the life characteristic because it represents a characteristic point of the underlying life distribution. The life characteristic used in the R-DOE analysis will change based on the underlying distribution assumed for the life data. If the analyst wants to investigate the effect of two factors (each at two levels) on the life of the product, then the life characteristic equation can be easily expanded as follows:


 * $$\mu _{i}^{\prime }={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}}$$

where $${{\beta }_{2}}$$  is the effect coefficient for the second factor and  $${{x}_{2}}$$  is the indicator variable representing the second factor. If the interaction effect is also to be investigated, then the following equation can be used:


 * $$\mu _{i}^{\prime }={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}}+{{\beta }_{12}}{{x}_{i1}}{{x}_{i2}}$$

In general the model to investigate a given number of factors can be expressed as:


 * $$\mu _{i}^{\prime }={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}}+{{\beta }_{12}}{{x}_{i1}}{{x}_{i2}}+...$$

Based on the model equations presented thus far, the analyst can easily conduct an R-DOE analysis for lognormally distributed life data using standard regression techniques. However, this is no longer true once the data also include censored observations. In the case of censored data, the analysis has to be carried out using maximum likelihood estimation (MLE) techniques.
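For complete (uncensored) data, the regression route can be sketched in a few lines. The example below is illustrative: it fits $$\ln (T)={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}$$ by ordinary least squares, using the four log failure times that appear later in Example 1 together with the coded $$\pm 1$$ factor columns of a $$2^{2}$$ design.

```python
import numpy as np

# Fit ln(T) = b0 + b1*x1 + b2*x2 by ordinary least squares (complete data only).
log_t = np.array([3.2958, 3.2189, 3.9120, 4.0073])  # log failure times (Example 1)
x1 = np.array([-1.0, 1.0, -1.0, 1.0])               # coded levels of factor A
x2 = np.array([-1.0, -1.0, 1.0, 1.0])               # coded levels of factor B

X = np.column_stack([np.ones(4), x1, x2])           # design matrix [1, x1, x2]
beta, *_ = np.linalg.lstsq(X, log_t, rcond=None)
b0, b1, b2 = beta
print(b0, b1, b2)   # approximately 3.6085, 0.0046, 0.3512
```

Because the coded columns are orthogonal, each coefficient is simply the corresponding contrast divided by the number of runs, which is why the same values reappear in the closed-form MLE solution later in this chapter.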

Maximum Likelihood Estimation for the Lognormal Distribution
The maximum likelihood estimation method can be used to estimate parameters in R-DOE analyses when censored data are present. The likelihood function is calculated for each observed time to failure, $${{t}_{i}}$$, and the parameters of the model are obtained by maximizing the log-likelihood function. The likelihood function for complete data following the lognormal distribution is given as:


 * $$\begin{align}
& {{L}_{failures}}= & \underset{i=1}{\overset{{{F}_{e}}}{\mathop \prod }}\,f({{t}_{i}},\mu _{i}^{\prime }) \\ & = & \underset{i=1}{\overset{{{F}_{e}}}{\mathop \prod }}\,\left[ \frac{1}{{{t}_{i}}{\sigma }'\sqrt{2\pi }}{{e}^{-\frac{1}{2}{{\left( \frac{\ln ({{t}_{i}})-\mu _{i}^{\prime }}{{{\sigma }'}} \right)}^{2}}}} \right] \\ & = & \underset{i=1}{\overset{{{F}_{e}}}{\mathop \prod }}\,\left[ \frac{1}{{{t}_{i}}{\sigma }'\sqrt{2\pi }}{{e}^{-\frac{1}{2}{{\left( \frac{\ln ({{t}_{i}})-({{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}}+...)}{{{\sigma }'}} \right)}^{2}}}} \right] \end{align}$$

where:

*$${{F}_{e}}$$ is the total number of observed times-to-failure
*$$\mu _{i}^{\prime }$$ is the life characteristic, substituted based on Eqn. (MeanLife)
*$${{t}_{i}}$$ is the time of the $$i$$th failure

For right censored data the likelihood function is [LDAReference]:


 * $${{L}_{suspensions}}=\underset{i=1}{\overset{{{S}_{e}}}{\mathop \prod }}\,\left[ 1-\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{\tfrac{\ln ({{t}_{i}})-\mu _{i}^{\prime }}{{{\sigma }'}}}{{e}^{-\tfrac{{{g}^{2}}}{2}}}dg \right]$$

where:

*$${{S}_{e}}$$ is the total number of observed suspensions
*$${{t}_{i}}$$ is the time of the $$i$$th suspension
*$$g$$ is the variable of integration for the standard normal density

For interval data the likelihood function is [LDAReference]:


 * $${{L}_{interval}}=\underset{i=1}{\overset{FI}{\mathop \prod }}\,\left[ \frac{1}{\sqrt{2\pi }}\int_{-\infty }^{\tfrac{\ln (t_{i}^{2})-\mu _{i}^{\prime }}{{{\sigma }'}}}{{e}^{-\tfrac{{{g}^{2}}}{2}}}dg-\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{\tfrac{\ln (t_{i}^{1})-\mu _{i}^{\prime }}{{{\sigma }'}}}{{e}^{-\tfrac{{{g}^{2}}}{2}}}dg \right]$$

where:

*$$FI$$ is the total number of interval data
*$$t_{i}^{1}$$ is the beginning time of the $$i$$th interval
*$$t_{i}^{2}$$ is the end time of the $$i$$th interval

The complete likelihood function when all types of data (complete, right censored and interval) are present is:


 * $$L({\sigma }',{{\beta }_{0}},{{\beta }_{1}}...)={{L}_{failures}}\cdot {{L}_{suspensions}}\cdot {{L}_{interval}}$$

Then the log-likelihood function is:


 * $$\Lambda ({\sigma }',{{\beta }_{0}},{{\beta }_{1}}...)=\ln (L)$$

The MLE estimates are obtained by solving for parameters $$({\sigma }',{{\beta }_{0}},{{\beta }_{1}}...)$$  so that:


 * $$\begin{align}

& \frac{\partial \Lambda }{\partial {\sigma }'}= & 0 \\ & \frac{\partial \Lambda }{\partial {{\beta }_{0}}}= & 0 \\ & \frac{\partial \Lambda }{\partial {{\beta }_{1}}}= & 0 \\ & & ...  \end{align}$$

Once the estimates are obtained, the significance of any parameter, $${{\theta }_{i}}$$, can be assessed using the likelihood ratio test.
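In practice, the system of equations above is solved numerically. The following is a minimal sketch (not DOE++'s implementation) of direct maximization of the combined log-likelihood for one two-level factor with both exact failures and right-censored suspensions; the failure and suspension times are made up purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Made-up data: four failures and two right-censored units on a coded factor x.
t_fail = np.array([30.0, 28.0, 55.0, 60.0]); x_fail = np.array([-1., -1., 1., 1.])
t_susp = np.array([70.0, 75.0]);             x_susp = np.array([1., 1.])

def neg_log_like(p):
    b0, b1, log_sigma = p
    s = np.exp(log_sigma)                  # optimize log(sigma') to keep it positive
    z_f = (np.log(t_fail) - (b0 + b1 * x_fail)) / s
    z_s = (np.log(t_susp) - (b0 + b1 * x_susp)) / s
    ll = np.sum(norm.logpdf(z_f) - np.log(t_fail * s))  # failure (pdf) terms
    ll += np.sum(norm.logsf(z_s))                       # suspension terms: ln[1 - Phi(z)]
    return -ll

res = minimize(neg_log_like, x0=[3.5, 0.2, np.log(0.1)], method="Nelder-Mead")
b0_hat, b1_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(b0_hat, b1_hat, sigma_hat)
```

The suspension terms pull the estimated life at the censored condition upward, which is exactly the information that ordinary least squares on the logged times cannot use.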

Hypothesis Tests
Hypothesis testing in R-DOE analyses is carried out using the likelihood ratio test. To test the significance of a factor, the corresponding effect coefficient(s), $${{\theta }_{i}}$$, is tested. The following statements are used:


 * $$\begin{align}

& {{H}_{0}}: & {{\theta }_{i}}=0 \\ & {{H}_{1}}: & {{\theta }_{i}}\ne 0 \end{align}$$

The statistic used for the test is the likelihood ratio, $$LR$$. The likelihood ratio for the parameter $${{\theta }_{i}}$$  is calculated as follows:


 * $$LR=-2\ln \frac{L({{{\hat{\theta }}}_{(-i)}})}{L(\hat{\theta })}$$

where:

*$$\hat{\theta }$$ is the vector of all parameter estimates obtained using MLE (i.e., $$\hat{\theta }=[{{\hat{\sigma }}^{\prime }}\text{ }{{\hat{\beta }}_{0}}\text{ }{{\hat{\beta }}_{1}}...{]}'$$)
*$${{\hat{\theta }}_{(-i)}}$$ is the vector of all parameter estimates excluding the estimate of $${{\theta }_{i}}$$
*$$L(\hat{\theta })$$ is the value of the likelihood function when all parameters are included in the model
*$$L({{\hat{\theta }}_{(-i)}})$$ is the value of the likelihood function when all parameters except $${{\theta }_{i}}$$ are included in the model

If the null hypothesis, $${{H}_{0}}$$, is true, then the ratio, $$-2\ln L({{\hat{\theta }}_{(-i)}})/L(\hat{\theta })$$, follows the Chi-Squared distribution with one degree of freedom. Therefore, $${{H}_{0}}$$ is rejected at a significance level, $$\alpha $$, if $$LR$$ is greater than the critical value $$\chi _{1,\alpha }^{2}$$.

The likelihood ratio test can also be used to test the significance of a number of parameters, $$r$$, at the same time. In this case, $$L({{\hat{\theta }}_{(-i)}})$$ represents the likelihood value for the reduced model that does not contain the $$r$$ parameters under test. The ratio $$-2\ln L({{\hat{\theta }}_{(-i)}})/L(\hat{\theta })$$ then follows the Chi-Squared distribution with $$r$$ degrees of freedom (one for each parameter dropped from the full model) if all $$r$$ parameters are insignificant. Thus, if $$LR>\chi _{r,\alpha }^{2}$$, the null hypothesis, $${{H}_{0}}$$, is rejected and it can be concluded that at least one of the $$r$$ parameters is significant.

Example 1

To illustrate the use of MLE in R-DOE analysis, consider the case where the life of a product is thought to be affected by two factors, $$A$$ and $$B$$.
The failure of the product has been found to follow the lognormal distribution. The analyst decides to run an R-DOE analysis using a single replicate of the $$2^{2}$$ design. Previous studies indicate that the interaction between $$A$$ and $$B$$ does not affect the life of the product. The design for this experiment can be set up in DOE++ as shown in Figure Ex1DesignProps. The resulting experiment design and the corresponding times-to-failure data obtained are shown in Figure Ex1Design. Note that, although the life data shown in Figure Ex1Design are complete and regression techniques are applicable, the calculations are shown using MLE. DOE++ uses MLE for all R-DOE analysis calculations.

Figure 11.1: Design properties for the experiment in Example 11.1.

Figure 11.2: The $$2^2$$ experiment design and the corresponding life data for Example 11.1.

Because the purpose of the experiment is to study two factors without considering their interaction, the applicable model for the lognormally distributed response data is:


 * $$\mu _{i}^{\prime }={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}}$$

where $$\mu _{i}^{\prime }$$ is the mean of the natural logarithm of the times-to-failure at the $$i$$th treatment combination ($$i=1,2,3,4$$), $${{\beta }_{1}}$$ is the effect coefficient for factor $$A$$ and $${{\beta }_{2}}$$ is the effect coefficient for factor $$B$$. The analysis for this case is carried out in DOE++ by dropping the interaction $$AB$$ using the Select Effects icon in the Control Panel. The following hypotheses need to be tested in this example:

 * $$\begin{align}
& {{H}_{0}}: & {{\beta }_{1}}=0 \\ & {{H}_{1}}: & {{\beta }_{1}}\ne 0 \end{align}$$

This test investigates the main effect of factor $$A$$. The statistic for this test is:


 * $$L{{R}_{A}}=-2\ln \frac{{{L}_{\tilde{\ }A}}}{L}$$

where $$L$$  represents the value of the likelihood function when all coefficients are included in the model and  $${{L}_{\tilde{\ }A}}$$  represents the value of the likelihood function when all coefficients except  $${{\beta }_{1}}$$  are included in the model.
 * $$\begin{align}
& {{H}_{0}}: & {{\beta }_{2}}=0 \\ & {{H}_{1}}: & {{\beta }_{2}}\ne 0 \end{align}$$

This test investigates the main effect of factor $$B$$. The statistic for this test is:


 * $$L{{R}_{B}}=-2\ln \frac{{{L}_{\tilde{\ }B}}}{L}$$

where $$L$$  represents the value of the likelihood function when all coefficients are included in the model and  $${{L}_{\tilde{\ }B}}$$  represents the value of the likelihood function when all coefficients except  $${{\beta }_{2}}$$  are included in the model. To calculate the test statistics, the maximum likelihood estimates of the parameters must be known. The estimates are obtained next.

MLE Estimates
Since the life data for the present experiment are complete and follow the lognormal distribution, the likelihood function can be written as:


 * $$L=\underset{i=1}{\overset{4}{\mathop \prod }}\,\left[ \frac{1}{{{t}_{i}}{\sigma }'\sqrt{2\pi }}{{e}^{-\frac{1}{2}{{\left( \frac{\ln ({{t}_{i}})-\mu _{i}^{\prime }}{{{\sigma }'}} \right)}^{2}}}} \right]$$

Substituting $$\mu _{i}^{\prime }$$  from Eqn. (MeanLifeEx1), the likelihood function is:


 * $$L=\underset{i=1}{\overset{4}{\mathop \prod }}\,\left[ \frac{1}{{{t}_{i}}{\sigma }'\sqrt{2\pi }}{{e}^{-\frac{1}{2}{{\left( \frac{\ln ({{t}_{i}})-({{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}})}{{{\sigma }'}} \right)}^{2}}}} \right]$$

Then the log-likelihood function is:


 * $$\begin{align}
& \Lambda ({\sigma }',{{\beta }_{0}},{{\beta }_{1}},{{\beta }_{2}})= & \ln (L) \\ & = & \underset{i=1}{\overset{4}{\mathop \sum }}\,\ln \left[ \frac{1}{{{t}_{i}}{\sigma }'\sqrt{2\pi }}{{e}^{-\frac{1}{2}{{\left( \frac{\ln ({{t}_{i}})-({{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}})}{{{\sigma }'}} \right)}^{2}}}} \right] \\ & = & \ln \left[ \frac{1}{{{t}_{1}}{{t}_{2}}{{t}_{3}}{{t}_{4}}{{({\sigma }')}^{4}}{{(2\pi )}^{2}}} \right]+\left[ -\frac{1}{2}\underset{i=1}{\overset{4}{\mathop \sum }}\,{{\left( \frac{\ln ({{t}_{i}})-({{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}})}{{{\sigma }'}} \right)}^{2}} \right] \\ & = & -[\ln ({{t}_{1}}{{t}_{2}}{{t}_{3}}{{t}_{4}})+4\ln ({\sigma }')+2\ln (2\pi )]+ \\ & & \left[ -\frac{1}{2}\underset{i=1}{\overset{4}{\mathop \sum }}\,{{\left( \frac{\ln ({{t}_{i}})-({{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}})}{{{\sigma }'}} \right)}^{2}} \right] \end{align}$$

To obtain the MLE estimates of the parameters, $${\sigma }',{{\beta }_{0}},{{\beta }_{1}}$$  and  $${{\beta }_{2}}$$, the log-likelihood function must be differentiated with respect to these parameters:


 * $$\begin{align}
& \frac{\partial \Lambda }{\partial {\sigma }'}= & -\frac{4}{{{\sigma }'}}+\frac{1}{{{({\sigma }')}^{3}}}\underset{i=1}{\overset{4}{\mathop \sum }}\,{{[\ln ({{t}_{i}})-({{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}})]}^{2}} \\ & \frac{\partial \Lambda }{\partial {{\beta }_{0}}}= & \frac{1}{{{({\sigma }')}^{2}}}\underset{i=1}{\overset{4}{\mathop \sum }}\,[\ln ({{t}_{i}})-({{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}})] \\ & \frac{\partial \Lambda }{\partial {{\beta }_{1}}}= & \frac{1}{{{({\sigma }')}^{2}}}\underset{i=1}{\overset{4}{\mathop \sum }}\,{{x}_{i1}}[\ln ({{t}_{i}})-({{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}})] \\ & \frac{\partial \Lambda }{\partial {{\beta }_{2}}}= & \frac{1}{{{({\sigma }')}^{2}}}\underset{i=1}{\overset{4}{\mathop \sum }}\,{{x}_{i2}}[\ln ({{t}_{i}})-({{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}})] \end{align}$$

Equating the $$\partial \Lambda /\partial {{\theta }_{i}}$$  terms to zero returns the required estimates. The coefficients $${{\hat{\beta }}_{0}}$$,  $${{\hat{\beta }}_{1}}$$  and  $${{\hat{\beta }}_{2}}$$  are obtained first as these are required to estimate  $${{\hat{\sigma }}^{\prime }}$$. Setting $$\partial \Lambda /\partial {{\beta }_{0}}=0$$ :


 * $$\underset{i=1}{\overset{4}{\mathop \sum }}\,[\ln ({{t}_{i}})-({{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}})]=0$$

Substituting the values of $${{t}_{i}}$$,  $${{x}_{i1}}$$  and  $${{x}_{i2}}$$  from Figure Ex1Design and simplifying:


 * $$\ln {{t}_{1}}+\ln {{t}_{2}}+\ln {{t}_{3}}+\ln {{t}_{4}}-4{{\beta }_{0}}=0$$

Thus:


 * $$\begin{align}

& {{{\hat{\beta }}}_{0}}= & \frac{1}{4}(\ln {{t}_{1}}+\ln {{t}_{2}}+\ln {{t}_{3}}+\ln {{t}_{4}}) \\ & = & \frac{1}{4}(3.2958+3.2189+3.912+4.0073) \\ & = & 3.6085 \end{align}$$

Setting $$\partial \Lambda /\partial {{\beta }_{1}}=0$$ :


 * $$-\ln {{t}_{1}}+\ln {{t}_{2}}-\ln {{t}_{3}}+\ln {{t}_{4}}-4{{\beta }_{1}}=0$$

Thus:


 * $$\begin{align}

& {{{\hat{\beta }}}_{1}}= & \frac{1}{4}(-\ln {{t}_{1}}+\ln {{t}_{2}}-\ln {{t}_{3}}+\ln {{t}_{4}}) \\ & = & \frac{1}{4}(-3.2958+3.2189-3.912+4.0073) \\ & = & 0.0046 \end{align}$$

Setting $$\partial \Lambda /\partial {{\beta }_{2}}=0$$ :


 * $$-\ln {{t}_{1}}-\ln {{t}_{2}}+\ln {{t}_{3}}+\ln {{t}_{4}}-4{{\beta }_{2}}=0$$

Thus:


 * $$\begin{align}

& {{{\hat{\beta }}}_{2}}= & \frac{1}{4}(-\ln {{t}_{1}}-\ln {{t}_{2}}+\ln {{t}_{3}}+\ln {{t}_{4}}) \\ & = & \frac{1}{4}(-3.2958-3.2189+3.912+4.0073) \\ & = & 0.3512 \end{align}$$

Knowing $${{\hat{\beta }}_{0}},{{\hat{\beta }}_{1}}$$  and  $${{\hat{\beta }}_{2}}$$,  $${{\hat{\sigma }}^{\prime }}$$  can now be obtained. Setting $$\partial \Lambda /\partial {\sigma }'=0$$ :


 * $$-\frac{4}{{{\sigma }'}}+\frac{1}{{{({\sigma }')}^{3}}}\underset{i=1}{\overset{4}{\mathop \sum }}\,{{[\ln ({{t}_{i}})-(3.6085+0.0046{{x}_{i1}}+0.3512{{x}_{i2}})]}^{2}}=0$$

Thus:


 * $$\begin{align}

& {{{\hat{\sigma }}}^{\prime }}= & \frac{1}{2}\sqrt{\underset{i=1}{\overset{4}{\mathop \sum }}\,{{[\ln ({{t}_{i}})-(3.6085+0.0046{{x}_{i1}}+0.3512{{x}_{i2}})]}^{2}}} \\ & = & 0.043 \end{align}$$
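The closed-form calculations above are easy to reproduce; the sketch below recomputes $${{\hat{\beta }}_{0}}$$, $${{\hat{\beta }}_{1}}$$, $${{\hat{\beta }}_{2}}$$ and $${{\hat{\sigma }}^{\prime }}$$ directly from the four log failure times and the coded factor columns of the $$2^{2}$$ design.

```python
import numpy as np

# Closed-form MLE for the complete-data 2^2 example.
log_t = np.array([3.2958, 3.2189, 3.9120, 4.0073])
x1 = np.array([-1.0, 1.0, -1.0, 1.0])   # factor A column
x2 = np.array([-1.0, -1.0, 1.0, 1.0])   # factor B column

b0 = log_t.mean()                # intercept: average of the log times
b1 = (x1 * log_t).mean()         # orthogonal design: contrast divided by 4
b2 = (x2 * log_t).mean()
resid = log_t - (b0 + b1 * x1 + b2 * x2)
sigma = 0.5 * np.sqrt(np.sum(resid**2))  # MLE divides SSE by n = 4

print(b0, b1, b2, sigma)   # approximately 3.6085, 0.0046, 0.3512, 0.043
```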

Once the estimates have been calculated, the likelihood ratio test can be carried out for the two factors.

Likelihood Ratio Test
The likelihood ratio test for factor $$A$$  is conducted by using the likelihood value corresponding to the full model and the likelihood value when  $$A$$  is not included in the model. The likelihood value corresponding to the full model (in this case $$\mu _{i}^{\prime }={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}}$$ ) is:


 * $$\begin{align}

& L= & \underset{i=1}{\overset{4}{\mathop \prod }}\,\left[ \frac{1}{{{t}_{i}}{{{\hat{\sigma }}}^{\prime }}\sqrt{2\pi }}{{e}^{-\frac{1}{2}{{\left( \frac{\ln ({{t}_{i}})-({{{\hat{\beta }}}_{0}}+{{{\hat{\beta }}}_{1}}{{x}_{i1}}+{{{\hat{\beta }}}_{2}}{{x}_{i2}})}{{{{\hat{\sigma }}}^{\prime }}} \right)}^{2}}}} \right] \\ & = & 0.000537311 \end{align}$$

The corresponding logarithmic value is $$\ln (L)=\ln (0.000537311)=-7.529$$. The likelihood value for the reduced model that does not contain factor $$A$$  (in this case  $$\mu _{i}^{\prime }={{\beta }_{0}}+{{\beta }_{2}}{{x}_{i2}}$$ ) is:


 * $$\begin{align}

& {{L}_{\tilde{\ }A}}= & \underset{i=1}{\overset{4}{\mathop \prod }}\,\left[ \frac{1}{{{t}_{i}}{{{\hat{\sigma }}}^{\prime }}\sqrt{2\pi }}{{e}^{-\frac{1}{2}{{\left( \frac{\ln ({{t}_{i}})-({{{\hat{\beta }}}_{0}}+{{{\hat{\beta }}}_{2}}{{x}_{i2}})}{{{{\hat{\sigma }}}^{\prime }}} \right)}^{2}}}} \right] \\ & = & 0.000525337 \end{align}$$

The corresponding logarithmic value is $$\ln ({{L}_{\tilde{\ }A}})=\ln (0.000525337)=-7.552$$. Therefore, the likelihood ratio to test the significance of factor $$A$$  is:


 * $$\begin{align}
& L{{R}_{A}}= & -2\ln \frac{{{L}_{\tilde{\ }A}}}{L} \\ & = & -2\ln \frac{0.000525337}{0.000537311} \\ & = & 0.0451 \end{align}$$

The $$p$$  value corresponding to  $$L{{R}_{A}}$$  is:


 * $$\begin{align}
& p\text{ }value= & 1-P(\chi _{1}^{2}<L{{R}_{A}}) \\ & = & 1-0.1682 \\ & = & 0.8318 \end{align}$$

Since $$p$$ $$value>0.1$$, $${{H}_{0}}\ \ :\ \ {{\beta }_{1}}=0$$ cannot be rejected and it can be concluded that factor $$A$$ does not affect the life of the product. The likelihood ratio to test factor $$B$$ can be calculated in a similar way as shown next:


 * $$\begin{align}
& L{{R}_{B}}= & -2\ln \frac{{{L}_{\tilde{\ }B}}}{L} \\ & = & -2\ln \frac{1.17995E-07}{0.000537311} \\ & = & 16.8475 \end{align}$$

The $$p$$  value corresponding to  $$L{{R}_{B}}$$  is:


 * $$\begin{align}

& p\text{ }value= & 1-P(\chi _{1}^{2}<L{{R}_{B}}) \\ & = & 1-0.99996 \\ & = & 0.00004  \end{align}$$

Since $$p$$   $$value<0.1$$,  $${{H}_{0}}\ \ :\ \ {{\beta }_{2}}=0$$  is rejected and it is concluded that factor  $$B$$  affects the life of the product. The previous calculation results are displayed as the Likelihood Ratio Test Table in the results obtained from DOE++ as shown in Figure Ex1LRResults.


Figure 11.3: Likelihood ratio test results from DOE++ for the experiment in Example 11.1.
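The likelihood ratio tests above can be verified numerically from the three likelihood values; the Chi-Squared survival function gives the $$p$$ values directly (small last-digit differences from the table are due to rounding).

```python
import math
from scipy.stats import chi2

# Likelihood values from the example: full model, model without A, model without B.
L_full = 0.000537311
L_no_A = 0.000525337
L_no_B = 1.17995e-07

LR_A = -2 * math.log(L_no_A / L_full)
LR_B = -2 * math.log(L_no_B / L_full)
p_A = chi2.sf(LR_A, df=1)   # survival function = 1 - CDF, one degree of freedom
p_B = chi2.sf(LR_B, df=1)

print(LR_A, p_A)   # LR_A close to 0.0451, p_A close to 0.832 (A not significant)
print(LR_B, p_B)   # LR_B close to 16.85, p_B well below 0.1 (B significant)
```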

Fisher Matrix Bounds on Parameters
In general, the MLE estimates of the parameters are asymptotically normal. This means that for large sample sizes the distribution of the estimates from the same population would be very close to the normal distribution [MeekerAndEscobar]. If $$\hat{\theta }$$  is the MLE estimate of any parameter,  $$\theta $$, then the ( $$1-\alpha $$ )% two-sided confidence bounds on the parameter are:


 * $$\hat{\theta }-{{z}_{\alpha /2}}\cdot \sqrt{Var(\hat{\theta })}<\theta <\hat{\theta }+{{z}_{\alpha /2}}\cdot \sqrt{Var(\hat{\theta })}$$

where $$Var(\hat{\theta })$$  represents the variance of  $$\hat{\theta }$$  and  $${{z}_{\alpha /2}}$$  is the critical value corresponding to a significance level of  $$\alpha /2$$  on the standard normal distribution. The variance of the parameter, $$Var(\hat{\theta })$$, is obtained using the Fisher information matrix. For $$k$$  parameters, the Fisher information matrix is obtained from the log-likelihood function  $$\Lambda $$  as follows:


 * $$F=\left[ \begin{matrix}

-\frac{{{\partial }^{2}}\Lambda }{\partial \theta _{1}^{2}} & -\frac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{1}}\partial {{\theta }_{2}}} & ... & -\frac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{1}}\partial {{\theta }_{k}}} \\ -\frac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{1}}\partial {{\theta }_{2}}} & -\frac{{{\partial }^{2}}\Lambda }{\partial \theta _{2}^{2}} & ... & -\frac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{2}}\partial {{\theta }_{k}}} \\ . & . & ... & . \\   . & . & ... & .  \\   -\frac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{1}}\partial {{\theta }_{k}}} &. & ... & -\frac{{{\partial }^{2}}\Lambda }{\partial \theta _{k}^{2}} \\ \end{matrix} \right]$$

The variance-covariance matrix is obtained by inverting the Fisher matrix $$F$$ :


 * $$\left[ \begin{matrix}

Var({{{\hat{\theta }}}_{1}}) & Cov({{{\hat{\theta }}}_{1}},{{{\hat{\theta }}}_{2}}) & ... & {} \\   Cov({{{\hat{\theta }}}_{1}},{{{\hat{\theta }}}_{2}}) & Var({{{\hat{\theta }}}_{2}}) & ... & {} \\   . & . & ... & {}  \\   . & . & ... & {}  \\   Cov({{{\hat{\theta }}}_{1}},{{{\hat{\theta }}}_{k}}) &. & ... & Var({{{\hat{\theta }}}_{k}}) \\ \end{matrix} \right]=$$


 * $${{\left[ \begin{matrix}

-\frac{{{\partial }^{2}}\Lambda }{\partial \theta _{1}^{2}} & -\frac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{1}}\partial {{\theta }_{2}}} & ... & {} \\   -\frac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{1}}\partial {{\theta }_{2}}} & -\frac{{{\partial }^{2}}\Lambda }{\partial \theta _{2}^{2}} & ... & {} \\   . & . & ... & {}  \\   . & . & ... & {}  \\   -\frac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{1}}\partial {{\theta }_{k}}} &. & ... & -\frac{{{\partial }^{2}}\Lambda }{\partial \theta _{k}^{2}} \\ \end{matrix} \right]}^{-1}}$$

Once the variance-covariance matrix is known, the variance of any parameter can be obtained from the diagonal elements of the matrix. Note that if a parameter, $$\theta $$, can take only positive values, it is assumed that $$\ln (\hat{\theta })$$ follows the normal distribution [MeekerAndEscobar]. The bounds on the parameter in this case are:


 * $$CI\text{ }on\text{ }\ln (\hat{\theta })=\ln (\hat{\theta })\pm {{z}_{\alpha /2}}\sqrt{Var(\ln (\hat{\theta }))}$$

Using $$Var[f(\hat{\theta })]={{(\partial f/\partial \theta )}^{2}}\cdot Var(\hat{\theta })$$  we get  $$Var(\ln (\hat{\theta }))={{(1/\hat{\theta })}^{2}}Var(\hat{\theta })$$. Substituting this value we have:


 * $$\begin{align}

& CI\text{ }on\text{ }\ln (\hat{\theta })= & \ln (\hat{\theta })\pm {{z}_{\alpha /2}}\sqrt{{{(1/\hat{\theta })}^{2}}Var(\hat{\theta })} \\ & = & \ln (\hat{\theta })\pm ({{z}_{\alpha /2}}/\hat{\theta })\sqrt{Var(\hat{\theta })} \\ & or\text{  }CI\text{ }on\text{ }\hat{\theta }= & \exp [\ln (\hat{\theta })\pm ({{z}_{\alpha /2}}/\hat{\theta })\sqrt{Var(\hat{\theta })}] \\ & = & \hat{\theta }\cdot \exp [\pm ({{z}_{\alpha /2}}/\hat{\theta })\sqrt{Var(\hat{\theta })}] \end{align}$$

Knowing $$Var(\hat{\theta })$$ from the variance-covariance matrix, the confidence bounds on $$\hat{\theta }$$ can then be determined.

Example 2

Continuing with Example 1, the confidence bounds on the MLE estimates of the parameters $${{\beta }_{0}}$$,  $${{\beta }_{1}}$$ ,  $${{\beta }_{2}}$$  and  $${\sigma }'$$  can now be obtained. The Fisher information matrix for the example is:


 * $$\begin{align}
& F= & \left[ \begin{matrix} -\frac{{{\partial }^{2}}\Lambda }{\partial \beta _{0}^{2}} & -\frac{{{\partial }^{2}}\Lambda }{\partial {{\beta }_{0}}\partial {{\beta }_{1}}} & -\frac{{{\partial }^{2}}\Lambda }{\partial {{\beta }_{0}}\partial {{\beta }_{2}}} & -\frac{{{\partial }^{2}}\Lambda }{\partial {{\beta }_{0}}\partial {\sigma }'} \\ {} & -\frac{{{\partial }^{2}}\Lambda }{\partial \beta _{1}^{2}} & -\frac{{{\partial }^{2}}\Lambda }{\partial {{\beta }_{1}}\partial {{\beta }_{2}}} & -\frac{{{\partial }^{2}}\Lambda }{\partial {{\beta }_{1}}\partial {\sigma }'} \\ {} & {} & -\frac{{{\partial }^{2}}\Lambda }{\partial \beta _{2}^{2}} & -\frac{{{\partial }^{2}}\Lambda }{\partial {{\beta }_{2}}\partial {\sigma }'} \\ sym. & {} & {} & -\frac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }^{\prime 2}}} \\ \end{matrix} \right] \\ & = & \left[ \begin{matrix} \tfrac{4}{{{({\sigma }')}^{2}}} & \tfrac{1}{{{({\sigma }')}^{2}}}\underset{i=1}{\overset{4}{\mathop{\sum }}}\,{{x}_{i1}} & \tfrac{1}{{{({\sigma }')}^{2}}}\underset{i=1}{\overset{4}{\mathop{\sum }}}\,{{x}_{i2}} & \tfrac{2}{{{({\sigma }')}^{3}}}\underset{i=1}{\overset{4}{\mathop{\sum }}}\,(\ln {{t}_{i}}-\mu _{i}^{\prime }) \\ {} & \tfrac{1}{{{({\sigma }')}^{2}}}\underset{i=1}{\overset{4}{\mathop{\sum }}}\,x_{i1}^{2} & \tfrac{1}{{{({\sigma }')}^{2}}}\underset{i=1}{\overset{4}{\mathop{\sum }}}\,{{x}_{i1}}{{x}_{i2}} & \tfrac{2}{{{({\sigma }')}^{3}}}\underset{i=1}{\overset{4}{\mathop{\sum }}}\,{{x}_{i1}}(\ln {{t}_{i}}-\mu _{i}^{\prime }) \\ {} & {} & \tfrac{1}{{{({\sigma }')}^{2}}}\underset{i=1}{\overset{4}{\mathop{\sum }}}\,x_{i2}^{2} & \tfrac{2}{{{({\sigma }')}^{3}}}\underset{i=1}{\overset{4}{\mathop{\sum }}}\,{{x}_{i2}}(\ln {{t}_{i}}-\mu _{i}^{\prime }) \\ sym. & {} & {} & -\tfrac{4}{{{({\sigma }')}^{2}}}+\tfrac{3}{{{({\sigma }')}^{4}}}\underset{i=1}{\overset{4}{\mathop{\sum }}}\,{{(\ln {{t}_{i}}-\mu _{i}^{\prime })}^{2}} \\ \end{matrix} \right] \\ & = & \left[ \begin{matrix} 2165.6741 & 0 & 0 & -1.1195E-11 \\ {} & 2165.6741 & 0 & -1.1195E-11 \\ {} & {} & 2165.6741 & -3.358E-11 \\ sym. & {} & {} & 4330.8227 \\ \end{matrix} \right] \end{align}$$

The variance-covariance matrix can be obtained by taking the inverse of the Fisher matrix $$F$$ :


 * $$\left[ \begin{matrix}
Var({{{\hat{\beta }}}_{0}}) & Cov({{{\hat{\beta }}}_{0}},{{{\hat{\beta }}}_{1}}) & Cov({{{\hat{\beta }}}_{0}},{{{\hat{\beta }}}_{2}}) & Cov({{{\hat{\beta }}}_{0}},{{{\hat{\sigma }}}^{\prime }}) \\ {} & Var({{{\hat{\beta }}}_{1}}) & Cov({{{\hat{\beta }}}_{1}},{{{\hat{\beta }}}_{2}}) & Cov({{{\hat{\beta }}}_{1}},{{{\hat{\sigma }}}^{\prime }}) \\ {} & {} & Var({{{\hat{\beta }}}_{2}}) & Cov({{{\hat{\beta }}}_{2}},{{{\hat{\sigma }}}^{\prime }}) \\ sym. & {} & {} & Var({{{\hat{\sigma }}}^{\prime }}) \\ \end{matrix} \right]={{F}^{-1}}$$

Inverting $$F$$  returns the following matrix:


 * $${{F}^{-1}}=\left[ \begin{matrix}

4.617E-4 & 0 & 0 & 0 \\ {} & 4.617E-4 & 0 & 0 \\ {} & {} & 4.617E-4 & 0 \\ sym. & {} & {} & 2.309E-4 \\ \end{matrix} \right]$$

Therefore, the variances of the parameter estimates are:


 * $$\begin{align}

& Var({{{\hat{\beta }}}_{0}})= & 4.617E-4 \\ & Var({{{\hat{\beta }}}_{1}})= & 4.617E-4 \\ & Var({{{\hat{\beta }}}_{2}})= & 4.617E-4 \\ & Var({{{\hat{\sigma }}}^{\prime }})= & 2.309E-4 \end{align}$$
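Because the coded columns are orthogonal and the residual identity $$\sum {{(\ln {{t}_{i}}-\mu _{i}^{\prime })}^{2}}=4{{({{\hat{\sigma }}^{\prime }})}^{2}}$$ holds at the MLE, the Fisher matrix here is essentially diagonal, and the variances above follow from inverting it. A sketch (with $${{\hat{\sigma }}^{\prime }}$$ carried to more digits than the rounded 0.043; tiny differences from the displayed matrix are rounding):

```python
import numpy as np

# Diagonal Fisher matrix at the MLE for the 2^2 design of Example 2.
sigma = 0.0429767        # MLE of sigma', more digits than the rounded 0.043
n = 4

F = np.diag([n / sigma**2,       # beta_0 entry: 4/sigma'^2
             n / sigma**2,       # beta_1 entry: sum(x1^2)/sigma'^2 = 4/sigma'^2
             n / sigma**2,       # beta_2 entry: sum(x2^2)/sigma'^2 = 4/sigma'^2
             2 * n / sigma**2])  # sigma' entry: -4/sigma'^2 + 3*(4 sigma'^2)/sigma'^4 = 8/sigma'^2
V = np.linalg.inv(F)             # variance-covariance matrix

print(np.diag(V))   # approximately [4.617e-4, 4.617e-4, 4.617e-4, 2.309e-4]
```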

Knowing the variance, the confidence bounds on the parameters can be calculated. For example, the 90% bounds ( $$\alpha =0.1$$ ) on $${{\hat{\beta }}_{2}}$$  can be calculated as shown next:


 * $$\begin{align}

& CI= & {{{\hat{\beta }}}_{2}}\pm {{z}_{\alpha /2}}\cdot \sqrt{Var({{{\hat{\beta }}}_{2}})} \\ & = & {{{\hat{\beta }}}_{2}}\pm {{z}_{0.05}}\cdot \sqrt{Var({{{\hat{\beta }}}_{2}})} \\ & = & 0.3512\pm 1.645\cdot \sqrt{4.617E-4} \\ & = & 0.3512\pm 0.0354 \\ & = & 0.3158\text{ }and\text{ }0.3866 \end{align}$$

The 90% bounds on $${\sigma }'$$  are (considering that  $${\sigma }'$$  can only take positive values):


 * $$\begin{align}

& CI= & {{{\hat{\sigma }}}^{\prime }}\cdot \exp [\pm ({{z}_{0.05}}/{{{\hat{\sigma }}}^{\prime }})\sqrt{Var({{{\hat{\sigma }}}^{\prime }})}] \\ & = & 0.043\cdot \exp [\pm (1.645/0.043)\sqrt{2.309E-4}] \\ & = & 0.024\text{ }and\text{ }0.077 \end{align}$$
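Both sets of bounds can be reproduced with a short calculation; the estimates and variances below are the MLE results from this example, and $${{z}_{0.05}}=1.645$$.

```python
import math

# 90% two-sided normal-approximation bounds (z_{0.05} = 1.645); the
# estimates and variances are the MLE results from this example.
z = 1.645

# bounds on beta_2
b2, var_b2 = 0.3512, 4.617e-4
lo_b2 = b2 - z * math.sqrt(var_b2)
hi_b2 = b2 + z * math.sqrt(var_b2)

# sigma' can only be positive, so its bounds are computed on the log scale
s, var_s = 0.043, 2.309e-4
w = math.exp(z * math.sqrt(var_s) / s)
lo_s, hi_s = s / w, s * w
```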

The standard error for the parameters can be obtained by taking the positive square root of the variance. For example, the standard error for $${{\hat{\beta }}_{1}}$$  is:


 * $$\begin{align}

& se({{{\hat{\beta }}}_{1}})= & \sqrt{Var({{{\hat{\beta }}}_{1}})} \\ & = & \sqrt{4.617E-4} \\ & = & 0.0215 \end{align}$$

The $$z$$  statistic for  $${{\hat{\beta }}_{1}}$$  is:


 * $$\begin{align}

& {{z}_{0}}= & \frac{{{{\hat{\beta }}}_{1}}}{se({{{\hat{\beta }}}_{1}})} \\ & = & \frac{0.0046}{0.0215} \\ & = & 0.21 \end{align}$$

The $$p$$  value corresponding to this statistic based on the standard normal distribution is:


 * $$\begin{align}

& p\text{ }value= & 2\cdot (1-P(Z\le |{{z}_{0}}|)) \\ & = & 2\cdot (1-0.58435) \\  & = & 0.8313  \end{align}$$
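A minimal sketch of the z statistic and two-sided p value calculations, using the standard normal CDF written in terms of the error function; small differences from the rounded values above are expected.

```python
import math

def norm_cdf(x):
    # standard normal CDF written in terms of the stdlib error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# z statistic and two-sided p value for beta_1 (values from this example)
beta1 = 0.0046
se_beta1 = math.sqrt(4.617e-4)
z0 = beta1 / se_beta1
p_value = 2.0 * (1.0 - norm_cdf(abs(z0)))
```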

The previous calculation results are displayed as MLE Information in the results obtained from DOE++, as shown in Figure Ex1MLEResults. In the figure, the Effect corresponding to each factor is simply twice the MLE estimate of the coefficient for that factor. Generally, the $$p$$  value corresponding to any coefficient in the MLE Information table should match the value obtained from the likelihood ratio test (displayed in the Likelihood Ratio Test table of Figure Ex1LRResults). If the sample size is not large enough, as in the case of the present example, a difference may be seen in the two values. In such cases, the $$p$$  value from the likelihood ratio test should be given preference. For the present example, the $$p$$  value of 0.8318 for  $${{\hat{\beta }}_{1}}$$, obtained from the likelihood ratio test, would be preferred to the  $$p$$  value of 0.8313 displayed under MLE Information. For details see [MeekerAndEscobar].

Figure 11.4: MLE information from DOE++ for Example 11.2.

R-DOE Analysis of Data Following the Weibull Distribution
The probability density function for the two parameter Weibull distribution is:


 * $$f(T)=\frac{\beta }{\eta }{{\left( \frac{T}{\eta } \right)}^{\beta -1}}\exp \left[ -{{\left( \frac{T}{\eta } \right)}^{\beta }} \right]$$

where $$\eta $$  is the scale parameter of the Weibull distribution and  $$\beta $$  is the shape parameter.[LDAReference] To distinguish the Weibull shape parameter from the effect coefficients, the shape parameter is represented as  $$Beta$$  instead of  $$\beta $$  in the remainder of this chapter. For data following the two parameter Weibull distribution, the life characteristic used in R-DOE analysis is the scale parameter, $$\eta $$ .[ALTReference] Since  $$\eta $$  represents life data that cannot take negative values, a logarithmic transformation is applied to it. The resulting model used in the R-DOE analysis for a two factor experiment, with each factor at two levels, can be written as follows:


 * $$\ln ({{\eta }_{i}})={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}}+{{\beta }_{12}}{{x}_{i1}}{{x}_{i2}}$$

where:

•	$${{\eta }_{i}}$$ is the value of the scale parameter at the $$i$$ th treatment combination of the two factors
•	$${{x}_{1}}$$ is the indicator variable representing the level of the first factor
•	$${{x}_{2}}$$ is the indicator variable representing the level of the second factor
•	$${{\beta }_{0}}$$ is the intercept term
•	$${{\beta }_{1}}$$ and $${{\beta }_{2}}$$ are the effect coefficients for the two factors
•	$${{\beta }_{12}}$$ is the effect coefficient for the interaction of the two factors

The model can be easily expanded to include other factors and their interactions. Note that when any data follows the Weibull distribution, the logarithmic transformation of the data follows the extreme-value distribution, whose probability density function is given as follows:


 * $$f(\ln (T))=\frac{1}{\sigma }\exp \left[ \frac{\ln (T)-\mu }{\sigma }-\exp \left( \frac{\ln (T)-\mu }{\sigma } \right) \right]$$

where $$T$$  follows the Weibull distribution,  $${\mu }$$  is the location parameter of the extreme-value distribution and  $${\sigma }$$  is the scale parameter of the extreme-value distribution. Eqns. (EtaEquation) and (EVD) show that for R-DOE analysis of life data that follows the Weibull distribution, the random error terms, $${{\epsilon }_{i}}$$, will follow the extreme-value distribution (and not the normal distribution). Hence, standard regression techniques are not applicable even if the data is complete, and maximum likelihood estimation has to be used.
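The link between the two distributions can be verified numerically. The sketch below uses arbitrary illustrative parameters and checks, via the change of variables $$f_{Y}(y)=f_{T}(e^{y})\cdot e^{y}$$, that $$\ln (T)$$ has the extreme-value density with $$\mu =\ln (\eta )$$ and $$\sigma =1/Beta$$.

```python
import math

# Numerical check with arbitrary illustrative parameters: if T follows a
# Weibull(eta, Beta) distribution, then Y = ln(T) follows the extreme-value
# distribution with mu = ln(eta) and sigma = 1/Beta. By the change of
# variables f_Y(y) = f_T(e^y) * e^y, the two densities must agree.
eta, Beta = 100.0, 2.0                  # hypothetical parameters
mu, sigma = math.log(eta), 1.0 / Beta

def weibull_pdf(t):
    return (Beta / eta) * (t / eta) ** (Beta - 1) * math.exp(-(t / eta) ** Beta)

def extreme_value_pdf(y):
    z = (y - mu) / sigma
    return math.exp(z - math.exp(z)) / sigma

for t in (25.0, 100.0, 250.0):
    assert abs(weibull_pdf(t) * t - extreme_value_pdf(math.log(t))) < 1e-12
```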

Maximum Likelihood Estimation for the Weibull Distribution
The likelihood function for complete data in R-DOE analysis of Weibull distributed life data is:


 * $$\begin{align}

& {{L}_{failures}}= & \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\prod }}}\,f({{t}_{i}},{{\eta }_{i}}) \\ & = & \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\prod }}}\,\left[ \frac{Beta}{{{\eta }_{i}}}{{\left( \frac{{{t}_{i}}}{{{\eta }_{i}}} \right)}^{Beta-1}}\exp \left[ -{{\left( \frac{{{t}_{i}}}{{{\eta }_{i}}} \right)}^{Beta}} \right] \right] \end{align}$$

where:

•	$${{F}_{e}}$$ is the total number of observed times-to-failure
•	$${{\eta }_{i}}$$ is the life characteristic at the $$i$$ th treatment
•	$${{t}_{i}}$$ is the time of the $$i$$ th failure

For right censored data, the likelihood function is:


 * $${{L}_{suspensions}}=\underset{i=1}{\overset{{{S}_{e}}}{\mathop{\prod }}}\,\left[ \exp \left[ -{{\left( \frac{{{t}_{i}}}{{{\eta }_{i}}} \right)}^{Beta}} \right] \right]$$

where:

•	$${{S}_{e}}$$ is the total number of observed suspensions
•	$${{t}_{i}}$$ is the time of the $$i$$ th suspension

For interval data, the likelihood function is:


 * $${{L}_{interval}}=\underset{i=1}{\overset{FI}{\mathop{\prod }}}\,\left[ \exp \left[ -{{\left( \frac{t_{i}^{1}}{{{\eta }_{i}}} \right)}^{Beta}} \right]-\exp \left[ -{{\left( \frac{t_{i}^{2}}{{{\eta }_{i}}} \right)}^{Beta}} \right] \right]$$

where:

•	$$FI$$ is the total number of interval data
•	$$t_{i}^{1}$$ is the beginning time of the $$i$$ th interval
•	$$t_{i}^{2}$$ is the end time of the $$i$$ th interval

In each of the likelihood functions, $${{\eta }_{i}}$$  is substituted based on Eqn. (EtaEquation) as:


 * $${{\eta }_{i}}=\exp ({{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}}+...)$$

The complete likelihood function when all types of data (complete, right censored and interval) are present is:


 * $$L(Beta,{{\beta }_{0}},{{\beta }_{1}}...)={{L}_{failures}}\cdot {{L}_{suspensions}}\cdot {{L}_{interval}}$$

Then the log-likelihood function is:


 * $$\Lambda (Beta,{{\beta }_{0}},{{\beta }_{1}}...)=\ln (L)$$

The MLE estimates are obtained by solving for parameters $$(Beta,{{\beta }_{0}},{{\beta }_{1}}...)$$  so that:


 * $$\begin{align}

& \frac{\partial \Lambda }{\partial Beta}= & 0 \\ & \frac{\partial \Lambda }{\partial {{\beta }_{0}}}= & 0 \\ & \frac{\partial \Lambda }{\partial {{\beta }_{1}}}= & 0 \\ & & ...  \end{align}$$

Once the estimates are obtained, the significance of any parameter, $${{\theta }_{i}}$$, can be assessed using the likelihood ratio test. Other results can also be obtained as discussed in Sections 11.MLElognormal and 11.FMlognormal.
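As a sketch of how the complete likelihood is assembled, the following evaluates the Weibull R-DOE log-likelihood for a hypothetical mix of failures, suspensions and interval data in a two-factor design. All times, factor levels and coefficients below are invented for illustration; an optimizer would maximize this function over the parameters.

```python
import math

# A sketch of the complete Weibull R-DOE log-likelihood for a two-factor
# design. All times, levels and coefficients below are hypothetical; an
# optimizer would maximize this function over (Beta, b0, b1, b2).
failures = [(120.0, -1, -1), (185.0, 1, -1)]    # (t_i, x_i1, x_i2)
suspensions = [(200.0, -1, 1)]                  # right-censored units
intervals = [(140.0, 160.0, 1, 1)]              # (t_i^1, t_i^2, x_i1, x_i2)

def log_likelihood(Beta, b0, b1, b2):
    eta = lambda x1, x2: math.exp(b0 + b1 * x1 + b2 * x2)
    L = 0.0
    for t, x1, x2 in failures:        # ln f(t_i) for exact failure times
        e = eta(x1, x2)
        L += math.log((Beta / e) * (t / e) ** (Beta - 1)) - (t / e) ** Beta
    for t, x1, x2 in suspensions:     # ln R(t_i) for suspensions
        L += -(t / eta(x1, x2)) ** Beta
    for t1, t2, x1, x2 in intervals:  # ln[R(t_i^1) - R(t_i^2)] for intervals
        e = eta(x1, x2)
        L += math.log(math.exp(-(t1 / e) ** Beta) - math.exp(-(t2 / e) ** Beta))
    return L
```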

R-DOE Analysis of Data Following the Exponential Distribution
The exponential distribution is a special case of the Weibull distribution when the shape parameter $$Beta$$  is equal to 1. Substituting $$Beta=1$$  in the probability density function of Eqn. (WeibullPdf) gives:


 * $$\begin{align}

& f(T)= & \frac{1}{\eta }\exp \left( -\frac{T}{\eta } \right) \\ & = & \lambda \exp (-\lambda T) \end{align}$$

where $$1/\eta $$  of Eqn. (WeibullPdf) has been replaced by $$\lambda $$. Parameter $$\lambda $$  is called the failure rate [LDAReference]. Hence, R-DOE analysis for exponentially distributed data can be carried out by substituting $$Beta=1$$  and replacing  $$1/\eta $$  by  $$\lambda $$  in the Weibull distribution.
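This reduction can be checked numerically; the sketch below compares the Weibull pdf with $$Beta=1$$ against the exponential pdf with $$\lambda =1/\eta $$ for an illustrative value of $$\eta $$.

```python
import math

# Check that the Weibull pdf with Beta = 1 reduces to the exponential pdf
# with failure rate lambda = 1/eta (eta below is an illustrative value).
eta = 500.0
lam = 1.0 / eta

def weibull_pdf(t, eta, Beta):
    return (Beta / eta) * (t / eta) ** (Beta - 1) * math.exp(-(t / eta) ** Beta)

def exponential_pdf(t, lam):
    return lam * math.exp(-lam * t)

for t in (10.0, 250.0, 1000.0):
    assert abs(weibull_pdf(t, eta, 1.0) - exponential_pdf(t, lam)) < 1e-15
```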

Model Diagnostics
Residual plots can be used to check if the model obtained, based on the MLE estimates, is a good fit to the data. DOE++ uses standardized residuals for R-DOE analyses. If the data follows the lognormal distribution, then standardized residuals are calculated using the following equation:


 * $$\begin{align}

& {{{\hat{e}}}_{i}}= & \frac{\ln ({{t}_{i}})-{{{\hat{\mu }}}_{i}}}{{{{\hat{\sigma }}}^{\prime }}} \\ & = & \frac{\ln ({{t}_{i}})-({{{\hat{\beta }}}_{0}}+{{{\hat{\beta }}}_{1}}{{x}_{i1}}+{{{\hat{\beta }}}_{2}}{{x}_{i2}}+...)}{{{{\hat{\sigma }}}^{\prime }}} \end{align}$$

For the probability plot, the standardized residuals are displayed on a normal probability plot. This is because under the assumed model for the lognormal distribution, the standardized residuals should follow a normal distribution with a mean of 0 and a standard deviation of 1. For data that follows the Weibull distribution, the standardized residuals are calculated as shown next:


 * $$\begin{align}

& {{{\hat{e}}}_{i}}= & \hat{B}eta[\ln ({{t}_{i}})-\ln ({{{\hat{\eta }}}_{i}})] \\ & = & \hat{B}eta[\ln ({{t}_{i}})-({{{\hat{\beta }}}_{0}}+{{{\hat{\beta }}}_{1}}{{x}_{i1}}+{{{\hat{\beta }}}_{2}}{{x}_{i2}}+...)] \end{align}$$

The probability plot, in this case, is used to check if the residuals follow the extreme-value distribution with a mean of 0. Note that in all residual plots, when an observation, $${{t}_{i}}$$, is censored the corresponding residual is also censored.
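The two residual formulas can be sketched as simple helper functions; the fitted values passed in are hypothetical, not taken from the examples in this chapter.

```python
import math

# Sketch of the two standardized residual formulas; fitted values are
# hypothetical, not taken from the examples in this chapter.
def lognormal_residual(t, mu_hat, sigma_hat):
    # should look standard normal if the lognormal model fits
    return (math.log(t) - mu_hat) / sigma_hat

def weibull_residual(t, eta_hat, Beta_hat):
    # should follow the standardized extreme-value distribution if the model fits
    return Beta_hat * (math.log(t) - math.log(eta_hat))
```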

Application Examples
Example 3

Figure 11.5: The $$2^{5-2}$$ experiment design for Example 11.3 to study factors affecting the reliability of fluorescent lights.

Figure 11.6: Results of the R-DOE analysis for the experiment in Example 11.3.

This example illustrates the use of R-DOE analysis to design reliability into products. An experiment was carried out to investigate the effect of five factors (each at two levels) on the reliability of fluorescent lights (Taguchi, 1987, p. 930). The factors, $$A$$  through  $$E$$, were studied using a  $$2^{5-2}$$  design (with the defining relations  $$D=-AC$$  and  $$E=-BC$$ ) under the assumption that all interaction effects, except  $$AB$$   $$(=DE)$$ , can be assumed to be inactive. For each treatment, two lights were tested (two replicates) with readings taken every two days. The experiment was run for 20 days and, if a light had not failed by the 20th day, it was recorded as a suspension. The experimental design and the corresponding failure times are shown in Figure Ex3FlLightsDesign. The short test duration and failure times are probably because the lights were tested under conditions that produced higher-than-normal stress. The failure of the lights was assumed to follow the lognormal distribution. The analysis results from DOE++ for this experiment are shown in Figure Ex3FlLightsResults. The results are obtained by selecting the main effects of the five factors and the interaction $$AB$$  using the Select Effects icon in the Control Panel. The results show that factors $$B$$,  $$D$$  and  $$E$$  are active at a significance level of 0.05. The MLE estimates of the effect coefficients corresponding to these factors are $$-0.2015$$,  $$0.2729$$  and  $$-0.1527$$ , respectively.
Based on these coefficients, the best settings for these effects to improve the reliability of the fluorescent lights (by maximizing the response, which in this case is the failure time) are:

•	Factor $$B$$ should be set at the lower level of $$-1$$ since its coefficient is negative
•	Factor $$D$$ should be set at the higher level of $$1$$ since its coefficient is positive
•	Factor $$E$$ should be set at the lower level of $$-1$$ since its coefficient is negative

Note that, since actual factor levels are not disclosed (presumably for proprietary reasons), predictions beyond the test conditions cannot be carried out in this case.

Example 4

Consider a product whose reliability is thought to be affected by eight potential factors: $$A$$  (temperature),  $$B$$  (humidity),  $$C$$  (load),  $$D$$  (fan-speed),  $$E$$  (voltage),  $$F$$  (material),  $$G$$  (vibration) and  $$H$$  (current). Assuming that all interaction effects are absent, a $$2^{8-4}$$ design is used to investigate the eight factors at two levels. The generators used to obtain the design are $$E=ABC$$,  $$F=BCD$$ ,  $$G=ACD$$  and  $$H=ABD$$. The design and the corresponding life data obtained are shown in Figure Ex4Design. Readings for the experiment are taken every 20 time units and the test is terminated at 200 time units. The life of the product is assumed to follow the Weibull distribution. The results from DOE++ for this experiment are shown in Figure Ex4Results. The results show that only factors $$A$$  and  $$D$$  are active at a significance level of 0.1. Assume that, in terms of actual units, the $$-1$$  level of factor  $$A$$  corresponds to a temperature of 333  $$K$$  and the  $$+1$$  level corresponds to a temperature of 383  $$K$$. Similarly, assume that the two levels of factor $$D$$  are 1000  $$rpm$$  and 2000  $$rpm$$  respectively.
From the MLE estimates of the effect coefficients it can be noted that, to improve reliability (by maximizing the response), factors $$A$$  and  $$D$$  should be set as follows:

•	Factor $$A$$ should be set at the lower level of 333 $$K$$ since its coefficient is negative
•	Factor $$D$$ should be set at the higher level of 2000 $$rpm$$ since its coefficient is positive

Figure 11.7: The $$2^{8-4}$$ design to investigate the reliability of a product for Example 11.4.

Figure 11.8: Results for the experiment in Example 11.4.

Now assume that the use conditions for the product for the significant factors, $$A$$  and  $$D$$, are a temperature of 298  $$K$$  and a fan-speed of 3000  $$rpm$$, respectively. The analysis can be taken a step further to obtain an estimate of the reliability of the product at the use conditions using ReliaSoft's ALTA software. The data is entered into ALTA as shown in Figure Ex4ALTA. ALTA allows the nature of the relationship between life and stress to be modeled. It is assumed that the relation between the life of the product and temperature follows the Arrhenius relation[ALTReference], while the relation between life and fan-speed follows the inverse power law relation[ALTReference]. Using these relations, ALTA fits the following model for the data in Figure Ex4ALTA:


 * $$\eta =\exp [-0.4322+1037.2886\frac{1}{\text{Temp}}+0.3772\cdot \ln (\text{Fan-Speed})]$$

Figure 11.9: Additional reliability analysis for Example 11.4, conducted using ReliaSoft's ALTA software.

Based on this model the B10 life of the product at the use conditions is obtained as shown next. The Weibull reliability equation is:


 * $$R(t)=\exp \left[ -{{(\frac{t}{\eta })}^{Beta}} \right]$$

Substituting the value of $$\eta $$  from Eqn. (ALTAeta) and the value of $$Beta(=3.4582)$$  as obtained from ALTA, the reliability equation becomes:


 * $$R(t)=\exp -{{\left[ \frac{t}{\exp [-0.4322+1037.2886\tfrac{1}{\text{Temp}}+0.3772\cdot \ln (\text{Fan-Speed})]} \right]}^{3.4582}}$$

Finally, substituting the use conditions (Temp $$=298$$   $$K$$, Fan-Speed  $$=3000$$   $$rpm$$ ) and the desired reliability value of 90%, the B10 life is obtained:


 * $$\begin{align}

& 0.90= & \exp -{{\left[ \frac{t}{\exp [-0.4322+1037.2886\tfrac{1}{298}+0.3772\cdot \ln (3000)]} \right]}^{3.4582}} \\ & t= & 225.4482 \end{align}$$

Therefore, at the use conditions, the B10 life of the product is 225 time units. This result and other reliability metrics can be directly obtained from ALTA.
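The B10 calculation above amounts to inverting the Weibull reliability equation, $$t=\eta \cdot {{(-\ln R)}^{1/Beta}}$$. A sketch using the fitted ALTA model from this example:

```python
import math

# B10 life sketch: invert the Weibull reliability equation,
# t = eta * (-ln R)^(1/Beta), with the fitted ALTA model from this example.
Beta = 3.4582
temp, fan_speed = 298.0, 3000.0   # use conditions
eta = math.exp(-0.4322 + 1037.2886 / temp + 0.3772 * math.log(fan_speed))
b10 = eta * (-math.log(0.90)) ** (1.0 / Beta)   # time at which R(t) = 90%
```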

Additional R-DOE Analyses
DOE++ also allows for the analysis of single factor R-DOE experiments. This analysis is similar to the analysis of single factor designed experiments mentioned in Chapter 6. In single factor R-DOE analysis, the focus is on discovering whether a change in the level of a factor affects reliability and how the factor levels differ from one another. The analysis models and calculations are similar to those of multi-factor R-DOE analysis.

Example 5

To illustrate single factor R-DOE analysis, consider the data in Table 11.1, where life data readings for a product are taken at three levels of a certain factor, $$A$$. Factor $$A$$  may be a stress that is thought to affect life, or it may represent three different designs of the same product, the same product manufactured by three different machines or operators, etc. The goal of the experiment is to see if there is a change in life due to a change in the levels of the factor. The design for this experiment is shown in Figure Ex5SingleFactDesign. The life of the product is assumed to follow the Weibull distribution. Therefore, the life characteristic to be used in the R-DOE analysis is the scale parameter, $$\eta $$. Since factor $$A$$  has three levels, the model for the life characteristic,  $$\eta $$, is:


 * $$\ln ({{\eta }_{i}})={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}}$$

where $${{\beta }_{0}}$$  is the intercept,  $${{\beta }_{1}}$$  is the effect coefficient for the first level of the factor ( $${{\beta }_{1}}$$  is represented as "A[1]" in DOE++) and  $${{\beta }_{2}}$$  is the effect coefficient for the second level of the factor ( $${{\beta }_{2}}$$  is represented as "A[2]" in DOE++). Two indicator variables, $${{x}_{1}}$$  and  $${{x}_{2}}$$, are used to represent the three levels of factor  $$A$$  such that:


 * $$\begin{align}

& {{x}_{1}}= & 1,\text{  }{{x}_{2}}=0\text{           Level 1 Effect} \\ & {{x}_{1}}= & 0,\text{  }{{x}_{2}}=1\text{           Level 2 Effect} \\ & {{x}_{1}}= & -1,\text{  }{{x}_{2}}=-1\text{     Level 3 Effect} \end{align}$$

Table 11.1: Data obtained from a single factor R-DOE experiment.

The following hypothesis test needs to be carried out in this example:


 * $$\begin{align}

& {{H}_{0}}: & {{\theta }_{i}}=0 \\ & {{H}_{1}}: & {{\theta }_{i}}\ne 0 \end{align}$$

where $${{\theta }_{i}}=[{{\beta }_{1}},{{\beta }_{2}}{]}'$$. The statistic for this test is:


 * $$LR=-2\ln \frac{L({{{\hat{\theta }}}_{(-i)}})}{L(\hat{\theta })}$$

where $$L(\hat{\theta })$$  is the value of the likelihood function corresponding to the full model, and  $$L({{\hat{\theta }}_{(-i)}})$$  is the likelihood value for the reduced model. To calculate the statistic for this test, the MLE estimates of the parameters must be obtained.

Figure 11.10: Experiment design for Example 11.5.

MLE Estimates
Following the procedure used in the analysis of multi-factor R-DOE experiments, MLE estimates of the parameters are obtained by differentiating the log-likelihood function $$\Lambda $$ :


 * $$\begin{align}

& \Lambda = & \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,\ln \left[ \frac{Beta}{{{\eta }_{i}}}{{\left( \frac{{{t}_{i}}}{{{\eta }_{i}}} \right)}^{Beta-1}}\exp \left[ -{{\left( \frac{{{t}_{i}}}{{{\eta }_{i}}} \right)}^{Beta}} \right] \right] \\ & & +\underset{i=1}{\overset{{{S}_{e}}}{\mathop{\sum }}}\,\left[ -{{\left( \frac{{{t}_{i}}}{{{\eta }_{i}}} \right)}^{Beta}} \right] \\ & & +\underset{i=1}{\overset{FI}{\mathop{\sum }}}\,\ln \left[ \exp \left[ -{{\left( \frac{t_{i}^{1}}{{{\eta }_{i}}} \right)}^{Beta}} \right]-\exp \left[ -{{\left( \frac{t_{i}^{2}}{{{\eta }_{i}}} \right)}^{Beta}} \right] \right] \end{align}$$

Substituting $${{\eta }_{i}}$$  from Eqn. (EtaSingleFactRDOE) and setting the partial derivatives $$\partial \Lambda /\partial {{\theta }_{i}}$$  to zero, the parameter estimates are obtained as  $$\hat{B}eta=1.8532$$,  $${{\hat{\beta }}_{0}}=6.4217$$ ,  $${{\hat{\beta }}_{1}}=-0.4983$$  and  $${{\hat{\beta }}_{2}}=0.1384$$. These parameters are shown in Figure Ex5SingleFactResults1 in the MLE Information table.

Figure 11.11: MLE results for the experiment in Example 11.5.

Likelihood Ratio Test
Knowing the MLE estimates, the likelihood ratio test for the significance of factor $$A$$  can be carried out. The likelihood value for the full model, $$L(\hat{\theta })$$, is the value of the likelihood function corresponding to the model  $$\ln ({{\eta }_{i}})={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}}$$ :


 * $$\begin{align}

& L(\hat{\theta })= & L(\hat{B}eta,{{{\hat{\beta }}}_{0}},{{{\hat{\beta }}}_{1}},{{{\hat{\beta }}}_{2}}) \\ & = & 2.9E-48 \end{align}$$

The likelihood value for the reduced model, $$L({{\hat{\theta }}_{(-i)}})$$, is the value of the likelihood function corresponding to the model  $$\ln ({{\eta }_{i}})={{\beta }_{0}}$$ :


 * $$\begin{align}

& L({{{\hat{\theta }}}_{(-i)}})= & L(\hat{B}eta,{{{\hat{\beta }}}_{0}}) \\ & = & 9.2E-50 \end{align}$$

Then the likelihood ratio is:


 * $$\begin{align}

& LR= & -2\ln \frac{L({{{\hat{\theta }}}_{(-i)}})}{L(\hat{\theta })} \\ & = & 6.8858 \end{align}$$

If the null hypothesis, $${{H}_{0}}$$, is true then the likelihood ratio statistic approximately follows the Chi-Squared distribution. The number of degrees of freedom for this distribution is equal to the difference in the number of parameters between the full and the reduced model. In this case, this difference is 2. The $$p$$  value corresponding to the likelihood ratio on the Chi-Squared distribution with two degrees of freedom is:


 * $$\begin{align}

& p\text{ }value= & 1-P(\chi _{2}^{2}<LR) \\ & = & 1-0.968 \\ & = & 0.032  \end{align}$$

Assuming that the desired significance is 0.1, since $$p$$   $$value<0.1$$,  $${{H}_{0}}\ \ :\ \ {{\theta }_{i}}=0$$  is rejected and it is concluded that, at a significance of 0.1, at least one of the parameters,  $${{\beta }_{1}}$$  or  $${{\beta }_{2}}$$ , is non-zero. Therefore, factor $$A$$  affects the life of the product. This result is shown in the Likelihood Ratio Test table in Figure Ex5SingleFactResults1. Additional results for single factor R-DOE analysis obtained from DOE++ include information on the life characteristic and comparison of life characteristics at different levels of the factor.
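The likelihood ratio test can be sketched directly from the two likelihood values. Note that for 2 degrees of freedom the chi-squared CDF has the closed form $$1-{{e}^{-x/2}}$$; small differences from the reported 6.8858 come from the rounding of the likelihood values.

```python
import math

# Likelihood ratio test sketch; the full model necessarily has the larger
# likelihood value. With 2 degrees of freedom the chi-squared CDF is
# P(chi2_2 < x) = 1 - exp(-x/2), so the p value is simply exp(-LR/2).
L_full = 2.9e-48      # L(Beta, b0, b1, b2)
L_reduced = 9.2e-50   # L(Beta, b0)
LR = -2.0 * math.log(L_reduced / L_full)
p_value = math.exp(-LR / 2.0)
```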

Life Characteristic Summary Results
Results in the Life Characteristic Summary table include information about the life characteristic corresponding to each treatment level of the factor. If $$\ln ({{\eta }_{i}})$$  is represented as  $$E({{y}_{i}})$$, then Eqn. (EtaSingleFactRDOE) can be written as:


 * $$E({{y}_{i}})={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}}$$

The respective equations for all three treatment levels for a single replicate of the experiment can be expressed in matrix notation as:


 * $$E(y)=X\beta $$

where:


 * $$E(y)=\left[ \begin{matrix}

E({{y}_{1}}) \\ E({{y}_{2}}) \\ E({{y}_{3}}) \\ \end{matrix} \right]\text{  }X=\left[ \begin{matrix} 1 & 1 & 0 \\   1 & 0 & 1  \\   1 & -1 & -1  \\ \end{matrix} \right]\text{   }\beta =\left[ \begin{matrix} {{\beta }_{0}} \\ {{\beta }_{1}} \\ {{\beta }_{2}} \\ \end{matrix} \right]$$

Knowing $${{\hat{\beta }}_{0}}$$,  $${{\hat{\beta }}_{1}}$$  and  $${{\hat{\beta }}_{2}}$$ , the predicted value of the life characteristic at any level can be obtained. For example, for the second level:


 * $$\begin{align}

& E({{{\hat{y}}}_{2}})= & {{{\hat{\beta }}}_{0}}+{{{\hat{\beta }}}_{2}} \\ & or\text{  }\ln ({{{\hat{\eta }}}_{2}})= & {{{\hat{\beta }}}_{0}}+{{{\hat{\beta }}}_{2}} \\ & = & 6.421743+0.138414 \\ & = & 6.560157  \end{align}$$

Thus:


 * $$\begin{align}

& {{{\hat{\eta }}}_{2}}= & \exp (6.560157) \\ & = & 706.3828 \end{align}$$

The variance for the predicted values of life characteristic can be calculated using the following equation:


 * $$Var(y)=XVar(\hat{\beta }){{X}^{\prime }}$$

where $$Var(\hat{\beta })$$  is the variance-covariance matrix for  $${{\hat{\beta }}_{0}}$$,  $${{\hat{\beta }}_{1}}$$  and  $${{\hat{\beta }}_{2}}$$. Substituting the required values:


 * $$\begin{align}

& Var(\hat{y})= & \left[ \begin{matrix} 1 & 1 & 0 \\   1 & 0 & 1  \\   1 & -1 & -1  \\ \end{matrix} \right]\left[ \begin{matrix} 0.0291 & -0.0174 & 0.0031 \\   -0.0174 & 0.0423 & -0.0154  \\   0.0031 & -0.0154 & 0.0478  \\ \end{matrix} \right]{{\left[ \begin{matrix} 1 & 1 & 0 \\   1 & 0 & 1  \\   1 & -1 & -1  \\ \end{matrix} \right]}^{\prime }} \\ & = & \left[ \begin{matrix} 0.0364 & -0.0006 & -0.0009 \\   -0.0006 & 0.0829 & 0.0141  \\   -0.0009 & 0.0141 & 0.1167  \\ \end{matrix} \right] \end{align}$$

From the previous matrix, $$Var({{\hat{y}}_{2}})=0.0829$$. Therefore, the 90% confidence interval ( $$\alpha =0.1$$ ) on $${{\hat{y}}_{2}}$$  is:


 * $$\begin{align}

& CI\text{ }on\text{ }{{{\hat{y}}}_{2}}= & E({{{\hat{y}}}_{2}})\pm {{z}_{\alpha /2}}\sqrt{Var({{{\hat{y}}}_{2}})} \\ & = & E({{{\hat{y}}}_{2}})\pm {{z}_{0.05}}\sqrt{Var({{{\hat{y}}}_{2}})} \\ & = & 6.560157\pm 1.645\sqrt{0.0829} \\ & = & 6.0867\text{ }and\text{ }7.0336 \end{align}$$

Since $${{\hat{y}}_{2}}=\ln ({{\hat{\eta }}_{2}}),$$  the 90% confidence interval on  $${{\hat{\eta }}_{2}}$$  is:


 * $$\begin{align}

& CI\text{ }on\text{ }{{{\hat{\eta }}}_{2}}= & \exp (6.0867)\text{ }and\text{ }\exp (7.0336) \\ & = & 439.9\text{ }and\text{ }1134.1 \end{align}$$

Results for other levels can be calculated in a similar manner and are shown in Figure Ex5SingleFactResults2.
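The $$Var(y)=XVar(\hat{\beta }){{X}^{\prime }}$$ calculation and the bounds on $${{\hat{\eta }}_{2}}$$ can be sketched as follows, using the rounded matrices from this example (so the last digits of the results differ slightly from those in the text).

```python
import math

# Sketch of Var(y) = X Var(beta_hat) X' and the 90% bounds on eta_2,
# using the rounded matrices from this example.
X = [[1, 1, 0], [1, 0, 1], [1, -1, -1]]
V = [[0.0291, -0.0174, 0.0031],
     [-0.0174, 0.0423, -0.0154],
     [0.0031, -0.0154, 0.0478]]

def matmul(A, B):
    # plain-Python matrix product, sufficient for these small matrices
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

Xt = [list(col) for col in zip(*X)]
var_y = matmul(matmul(X, V), Xt)

# 90% bounds on y_2 = ln(eta_2), then transformed back to eta_2
y2, z = 6.560157, 1.645
half_width = z * math.sqrt(var_y[1][1])
eta2_lo = math.exp(y2 - half_width)
eta2_hi = math.exp(y2 + half_width)
```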

Figure 11.12: Life characteristic results for the experiment in Example 11.5.

Life Comparisons Results
Results under Life Comparisons include information on how life at one level of the factor differs from life at any other level. For example, the difference between the predicted values of life at levels 1 and 2 is (in terms of the logarithmic transformation):


 * $$\begin{align}

& E({{{\hat{y}}}_{1}})-E({{{\hat{y}}}_{2}})= & 5.923453-6.560157 \\ & = & -0.6367 \end{align}$$

The pooled standard error for this difference can be obtained as:


 * $$\begin{align}

& Pooled\text{ }Std.\text{ }Error= & \sqrt{Var({{{\hat{y}}}_{1}}-{{{\hat{y}}}_{2}})} \\ & = & \sqrt{Var({{{\hat{y}}}_{1}})+Var({{{\hat{y}}}_{2}})} \\ & = & \sqrt{0.0364+0.0829} \\ & = & 0.3454 \end{align}$$

If the covariance between $${{\hat{y}}_{1}}$$  and  $${{\hat{y}}_{2}}$$  is taken into account, then the pooled standard error is:


 * $$\begin{align}

& Pooled\text{ }Std.\text{ }Error= & \sqrt{Var({{{\hat{y}}}_{1}}-{{{\hat{y}}}_{2}})} \\ & = & \sqrt{Var({{{\hat{y}}}_{1}})+Var({{{\hat{y}}}_{2}})-2\cdot Cov({{{\hat{y}}}_{1}},{{{\hat{y}}}_{2}})} \\ & = & \sqrt{0.0364+0.0829-2\cdot (-0.0006)} \\ & = & 0.3471 \end{align}$$

This is the value displayed by DOE++. Knowing the pooled standard error, the confidence interval on the difference can be calculated. The 90% confidence interval on the difference in (logarithmic) life between levels 1 and 2 of factor $$A$$  is:


 * $$\begin{align}

& = & \{E({{{\hat{y}}}_{1}})-E({{{\hat{y}}}_{2}})\}\pm {{z}_{\alpha /2}}\cdot Pooled\text{ }Std.\text{ }Error \\ & = & \{E({{{\hat{y}}}_{1}})-E({{{\hat{y}}}_{2}})\}\pm {{z}_{0.05}}\cdot Pooled\text{ }Std.\text{ }Error \\ & = & -0.6367\pm 1.645\cdot 0.3471 \\ & = & \text{ }-1.208\text{ }and\text{ }-0.066 \end{align}$$

Since the confidence interval does not include zero it can be concluded that the two levels are significantly different at $$\alpha =0.1$$. Another way to test for the significance of the difference in levels is to observe the $$p$$  value. The $$z$$  statistic corresponding to this difference is:


 * $$\begin{align}

& {{z}_{(1-2)}}= & \frac{E({{{\hat{y}}}_{1}})-E({{{\hat{y}}}_{2}})}{Pooled\text{ }Std.\text{ }Error} \\ & = & \frac{-0.6367}{0.3471} \\ & = & -1.834 \end{align}$$

The $$p$$  value corresponding to this statistic, based on the standard normal distribution, is:


 * $$\begin{align}

& p\text{ }value= & 2\cdot (1-P(Z<|-1.8335|)) \\ & = & 2\cdot (0.03336) \\  & = & 0.0667  \end{align}$$

Since $$p$$   $$value<\alpha ,$$  it can be concluded that the levels are significantly different at  $$\alpha =0.1$$. The results for other levels can be calculated in a similar manner and are shown in Figure Ex5SingleFactResults2.
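The comparison calculations above can be sketched end to end; the values are from this example, and the covariance term is included in the pooled standard error.

```python
import math

def norm_cdf(x):
    # standard normal CDF via the stdlib error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Life comparison of levels 1 and 2 (values from this example); the
# covariance term is included in the pooled standard error.
diff = 5.923453 - 6.560157
pooled_se = math.sqrt(0.0364 + 0.0829 - 2.0 * (-0.0006))
z_stat = diff / pooled_se
p_value = 2.0 * (1.0 - norm_cdf(abs(z_stat)))

# 90% confidence interval on the (logarithmic) difference
lo = diff - 1.645 * pooled_se
hi = diff + 1.645 * pooled_se   # interval excludes zero => significant at 0.1
```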