Additional Tools

ALTA and ALTA PRO contain some additional analysis tools that allow you to perform supplementary analyses. These include tests of comparison between two data sets, likelihood ratio tests, degradation analysis and accelerated test planning. The principles and theory behind each of these analysis tools are presented next.

Common Shape Parameter Likelihood Ratio Test
In order to assess the assumption of a common shape parameter among the data obtained at various stress levels, the likelihood ratio (LR) test can be utilized [28]. This test applies to any distribution with a shape parameter. In the case of ALTA, it applies to the Weibull and lognormal distributions. When the Weibull distribution is used as the underlying life distribution, the shape parameter, $$\beta ,$$ is assumed to be constant across the different stress levels (i.e. stress independent). Similarly, the scale parameter of the lognormal distribution, $$\sigma$$ (the standard deviation of the logarithmic times-to-failure), is assumed to be constant across the different stress levels. The likelihood ratio test is performed by first obtaining the LR test statistic, $$T$$. If the true shape parameters are equal, then the distribution of $$T$$ is approximately chi-square with $$n-1$$ degrees of freedom, where $$n$$ is the number of test stress levels with two or more exact failure points. The LR test statistic, $$T$$, is calculated as follows:


 * $$T=2({{\hat{\Lambda }}_{1}}+...+{{\hat{\Lambda }}_{n}}-{{\hat{\Lambda }}_{0}})$$

$$\hat{\Lambda}_{1}, ..., \hat{\Lambda}_{n}$$ are the log-likelihood values obtained by fitting a separate distribution to the data from each of the $$n$$ test stress levels (with two or more exact failure times). The log-likelihood value, $${{\hat{\Lambda }}_{0}},$$ is obtained by fitting a model with a common shape parameter and a separate scale parameter for each of the $$n$$ stress levels, using indicator variables.

Once the LR statistic has been calculated, then:

•	If $$T\le {{\chi }^{2}}(1-\alpha ;n-1)$$, the $$n$$ shape parameter estimates do not differ statistically significantly at the $$100\alpha \%$$ level.

•	If $$T>{{\chi }^{2}}(1-\alpha ;n-1)$$, the $$n$$ shape parameter estimates differ statistically significantly at the $$100\alpha \%$$ level.

$${{\chi }^{2}}(1-\alpha ;n-1)$$ is the $$100(1-\alpha )$$ percentile of the chi-square distribution with $$n-1$$ degrees of freedom.
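For illustration, the test can be sketched in a few lines of Python. The log-likelihood values below are hypothetical, and the chi-square percentile comes from SciPy:

```python
# Common shape parameter LR test: a minimal sketch. The chi-square
# percentile comes from SciPy; the log-likelihood values are hypothetical.
from scipy.stats import chi2

def lr_common_shape_test(loglik_separate, loglik_common, alpha=0.10):
    """loglik_separate: log-likelihoods from fitting each stress level
    separately; loglik_common: log-likelihood of the model with a shared
    shape parameter. Returns (T, critical value, whether betas differ)."""
    n = len(loglik_separate)                 # stress levels with failures
    T = 2.0 * (sum(loglik_separate) - loglik_common)
    crit = chi2.ppf(1.0 - alpha, df=n - 1)   # chi-square(1 - alpha; n - 1)
    return T, crit, T > crit

# Hypothetical log-likelihoods for three stress levels and the common model:
T, crit, differ = lr_common_shape_test([-20.1, -18.7, -22.4], -61.5)
```

Here the separate fits barely improve on the common-shape fit, so $$T$$ is small and the common shape parameter would not be rejected.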

Example
Consider the following times-to-failure data at three different stress levels.

The data set was analyzed using an Arrhenius-Weibull model. The analysis yields:


 * $$\widehat{\beta }=\ 2.965820$$


 * $$\widehat{B}=\ 10,679.567542$$


 * $$\widehat{C}=\ 2.396615\cdot {{10}^{-9}}$$

The assumption of a common $$\beta $$  across the different stress levels can be assessed visually using a probability plot.



In Fig. 1, it can be seen that the plotted data from the different stress levels seem to be fairly parallel.



A better assessment can be made with the LR test, which can be performed using the Likelihood Ratio Test tool in ALTA. For example, in the following figure, the $$\beta s$$  are compared for equality at the 10% level.

The individual likelihood values for each of the test stresses can be found in the Results tab of the Likelihood Ratio Test window.



The LR test statistic, $$T$$, is calculated to be 0.481. Since $$T=0.481\le 4.605={{\chi }^{2}}(0.9;2)$$, the $$\beta $$s do not differ significantly at the 10% level.

Tests of Comparison
It is often desirable to be able to compare two sets of accelerated life data in order to determine which of the data sets has a more favorable life distribution. The units from which the data are obtained could either be from two alternate designs, alternate manufacturers or alternate lots or assembly lines. Many methods are available in statistical literature for doing this when the units come from a complete sample, i.e. a sample with no censoring. This process becomes a little more difficult when dealing with data sets that have censoring, or when trying to compare two data sets that have different distributions. In general, the problem boils down to that of being able to determine any statistically significant difference between the two samples of potentially censored data from two possibly different populations. This section discusses some of the methods that are applicable to censored data, and are available in ALTA.

Simple Plotting
One popular graphical method for making this determination involves plotting the data at a given stress level with confidence bounds and seeing whether the bounds overlap or separate at the point of interest. This can be effective for comparisons at a given point in time or a given reliability level, but it is difficult to assess the overall behavior of the two distributions, as the confidence bounds may overlap at some points and be far apart at others. This can be easily done using the multiple plot feature in ALTA.

Estimating $$P\left[ {{t}_{2}}\ge {{t}_{1}} \right]$$ Using the Comparison Wizard
Another methodology, suggested by Gerald G. Brown and Herbert C. Rutemiller, is to estimate the probability of whether the times-to-failure of one population are better or worse than the times-to-failure of the second. The equation used to estimate this probability is given by:


 * $$P\left[ {{t}_{2}}\ge {{t}_{1}} \right]=\int_{0}^{\infty }{{f}_{1}}(t)\cdot {{R}_{2}}(t)\cdot dt$$

where $${{f}_{1}}(t)$$ is the $$pdf$$ of the first distribution and $${{R}_{2}}(t)$$ is the reliability function of the second distribution. The evaluation of the superior data set is based on whether this probability is smaller or greater than 0.50. A probability of exactly 0.50 means that neither population tends to outlast the other.
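For illustration, this integral can be evaluated numerically. The sketch below assumes both populations follow Weibull distributions with illustrative parameter values; when the two shape parameters are equal, the integral has the closed form $${\eta _{2}^{\beta }}/({\eta _{1}^{\beta }+\eta _{2}^{\beta }})$$, which provides a check on the computation.

```python
# P[t2 >= t1] for two Weibull populations via numerical integration.
# Parameter values are illustrative only.
import math
from scipy.integrate import quad

def prob_t2_geq_t1(beta1, eta1, beta2, eta2):
    # pdf of population 1 and reliability function of population 2
    f1 = lambda t: (beta1 / eta1) * (t / eta1) ** (beta1 - 1) \
        * math.exp(-((t / eta1) ** beta1))
    R2 = lambda t: math.exp(-((t / eta2) ** beta2))
    p, _ = quad(lambda t: f1(t) * R2(t), 0.0, math.inf)
    return p

p = prob_t2_geq_t1(2.0, 100.0, 2.0, 200.0)  # population 2 lasts longer
```

With $$\beta =2$$, $${{\eta }_{1}}=100$$ and $${{\eta }_{2}}=200$$, the closed form gives $$200^{2}/(100^{2}+200^{2})=0.8$$, i.e. the second population is superior.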

Consider two alternate designs, where X and Y represent the life test data from two different populations. If we simply wanted the more reliable design at a specific time $$t$$, we would select the design with the higher reliability at that time. However, if we wanted to design the product to be as long-lived as possible, we would calculate the probability that the entire distribution of one design outlasts the other, and choose X when this probability is above 0.50 or Y when it is below 0.50.

The statement "the probability that X is greater than or equal to Y" can be interpreted as follows:

•	If $$P=0.50$$, then the statement is equivalent to saying that X and Y are equivalent.

•	If $$P<0.50$$, for example $$P=0.10$$, then the statement is equivalent to saying that $$1-P=1-0.10=0.90$$, or Y is better than X with a 90% probability.

ALTA's Comparison Wizard allows you to perform such calculations. The comparison is performed at the given use stress levels of each data set, using the equation:


 * $$P\left[ {{t}_{2}}\ge {{t}_{1}} \right]=\int_{0}^{\infty }{{f}_{1}}(t,{{V}_{Use,1}})\cdot {{R}_{2}}(t,{{V}_{Use,2}})\cdot dt$$

The disadvantage of this method is that the sample sizes are not taken into account, thus one should avoid using this method of comparison when the sample sizes are different.

Degradation Analysis
Given that products are frequently being designed with higher reliabilities and developed in shorter amounts of time, even accelerated life testing is often not sufficient to yield reliability results in the desired timeframe. In some cases, it is possible to infer the reliability behavior of unfailed test samples with only the accumulated test time information and assumptions about the distribution. However, this generally leads to a great deal of uncertainty in the results. Another option in this situation is the use of degradation analysis.

Degradation analysis involves the measurement and extrapolation of degradation or performance data that can be directly related to the presumed failure of the product in question. Many failure mechanisms can be directly linked to the degradation of part of the product, and degradation analysis allows the user to extrapolate to an assumed failure time based on the measurements of degradation or performance over time. To reduce testing time even further, tests can be performed at elevated stresses and the degradation at these elevated stresses can be measured, resulting in a type of analysis known as accelerated degradation.

In some cases, it is possible to directly measure the degradation over time, as with the wear of brake pads or the propagation of crack size. In other cases, direct measurement of degradation might not be possible without invasive or destructive measurement techniques that would directly affect the subsequent performance of the product. In such cases, the degradation of the product can be estimated through the measurement of certain performance characteristics, such as using resistance to gauge the degradation of a dielectric material. In either case, however, it is necessary to be able to define a level of degradation or performance at which a failure is said to have occurred.
With this failure level of performance defined, it is a relatively simple matter to use basic mathematical models to extrapolate the performance measurements over time to the point where the failure is said to occur. This is done at different stress levels, and therefore each time-to-failure is also associated with a corresponding stress level. Once the times-to-failure at the corresponding stress levels have been determined, it is merely a matter of analyzing the extrapolated failure times in the same manner as you would conventional accelerated time-to-failure data.

Once the level of failure (or the degradation level that would constitute a failure) is defined, the degradation for multiple units over time needs to be measured (with different groups of units being at different stress levels). As with conventional accelerated data, the amount of certainty in the results is directly related to the number of units being tested, the number of units at each stress level, as well as in the amount of overstressing with respect to the normal operating conditions. The performance or degradation of these units needs to be measured over time, either continuously or at predetermined intervals. Once this information has been recorded, the next task is to extrapolate the performance measurements to the defined failure level in order to estimate the failure time. ALTA allows the user to perform such analysis using a linear, exponential, power, logarithmic, Gompertz or Lloyd-Lipow model to perform this extrapolation. These models have the following forms:

$$\begin{matrix} Linear\ \ : & y=a\cdot x+b \\ Exponential\ \ : & y=b\cdot {{e}^{a\cdot x}} \\ Power\ \ : & y=b\cdot {{x}^{a}} \\ Logarithmic\ \ : & y=a\cdot ln(x)+b \\ Gompertz\ \ : & y=a\cdot {{b}^{cx}} \\ Lloyd-Lipow\ \ : & y=a-b/x \\ \end{matrix}$$

where $$y$$ represents the performance, $$x$$ represents time, and $$a$$ and $$b$$ are model parameters to be solved for. Once the model parameters $${{a}_{i}}$$ and $${{b}_{i}}$$ (and $${{c}_{i}}$$ for the Gompertz model) are estimated for each sample $$i$$, a time, $${{x}_{i}},$$ can be extrapolated that corresponds to the defined failure level $$y$$. The computed $${{x}_{i}}$$ values can now be used as the times-to-failure for subsequent accelerated life data analysis. As with any sort of extrapolation, one must be careful not to extrapolate too far beyond the actual range of data in order to avoid large modeling errors. One may also define a censoring time past which no failure times are extrapolated. In practice, there is usually only a narrow band in which this censoring time has any practical meaning: with a relatively low censoring time, no failure times will be extrapolated, which defeats the purpose of degradation analysis, while a relatively high censoring time would fall after all of the theoretical failure times, rendering it meaningless. Nevertheless, certain situations may arise in which it is beneficial to be able to censor the accelerated degradation data.
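As a minimal sketch of this extrapolation step, the following Python fits the exponential model $$y=b\cdot {{e}^{a\cdot x}}$$ by linear least squares on $$\ln y$$ (one common fitting approach, not necessarily ALTA's internal algorithm) and solves for the time at which the degradation measure crosses a defined failure level. The readings are synthetic:

```python
# Exponential degradation extrapolation: fit y = b * exp(a * x) via
# least squares on ln(y), then solve b * exp(a * x) = y_fail for x.
# The degradation readings below are synthetic, for illustration only.
import math
import numpy as np

def exponential_failure_time(x, y, y_fail):
    a, ln_b = np.polyfit(x, np.log(y), 1)   # ln y = a * x + ln b
    return (math.log(y_fail) - ln_b) / a    # x where y crosses y_fail

months = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
qm = 100.0 * np.exp(-0.1 * months)          # synthetic monthly readings
t_fail = exponential_failure_time(months, qm, 50.0)
```

Repeating this for every sample (one fitted curve per unit, each at its own stress level) yields the extrapolated times-to-failure used in the subsequent accelerated life data analysis.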

Example
Consider a chemical solution (e.g. ink formulation, medicine, etc.) that degrades with time. A quantitative measure of the quality of the product can be obtained. This measure (QM) is said to be around 100 when the product is first manufactured and decreases with product age. The minimum acceptable value for QM is 50; products with QM equal to or lower than 50 are considered to be out of compliance, or failed. Engineering analysis has indicated that the QM decreases faster at higher temperatures. Assuming that the product's normal use temperature is 20°C (or 293K), the goal is to determine the shelf life of the product via an accelerated degradation test. For the purpose of this analysis, "shelf life" is defined as the time by which 10% of the products will have a QM that is out of compliance. For this experiment, 15 samples of the product were tested, with 5 samples in each of three accelerated stress environments: 323K, 373K and 383K. Once a month, for a period of seven months, the QM for each sample was measured and recorded. The data obtained are given in the next table.

Accelerated Degradation Data

Since all of the readings are above the critical QM threshold of 50, none of the samples tested in this experiment had gone out of compliance (or failed) by the end of the test. However, there was sufficient data for the degradation of each sample to extrapolate a time-to-failure (i.e. the month at which we expect each sample to be at QM=50).



Using ALTA's Degradation Analysis Folio (shown in Fig. 2), the data for all samples were entered and individually fitted to exponential curves. Fig. 3 shows sample graphs. From each fitted curve, a time-to-failure is extrapolated (i.e. the time by which the product is expected to go out of compliance).

To view the resulting times-to-failure in a life data folio, click the "Transfer Life Data to Folio" button.





Several plots can be obtained from the analysis. Specifically, Fig. 5 shows a Weibull probability plot at the use stress level. Fig. 6 shows a Life vs. Stress plot where the line represents the time by which 10% of the units are expected to be out of compliance (at a given temperature).

Based on this analysis, the projected shelf life of this product is 15.6 months. The desired result could also have been obtained from the QCP, as shown next.



Accelerated Life Test Plans
Poor accelerated test plans waste time, effort and money, and may not even yield the desired information. Before starting an accelerated test (which is sometimes an expensive and difficult endeavor), it is advisable to have a plan that helps in accurately estimating reliability at operating conditions while minimizing test time and costs. A test plan should be used to decide on the appropriate stress levels (for each stress type) and on the number of test units to allocate to each stress level (or to each combination of the different stress types' levels). This section presents some common test plans for one-stress and two-stress accelerated tests.

General Assumptions
Most accelerated life testing plans use the following model and testing assumptions that correspond to many practical quantitative accelerated life testing problems.

1. The log-time-to-failure for each unit follows a location-scale distribution such that:


 * $$\Pr (Y\le y)=\Phi \left( \frac{y-\mu }{\sigma } \right)$$

where $$\mu $$ and $$\sigma $$ are the location and scale parameters, respectively, and $$\Phi (\cdot )$$ is the standard form of the location-scale distribution.

2. Failure times for all test units, at all stress levels, are statistically independent.

3. The location parameter $$\mu $$ is a linear function of stress. Specifically, it is assumed that:


 * $$\mu =\mu (z)={{\gamma }_{0}}+{{\gamma }_{1}}z$$

4. The scale parameter, $$\sigma ,$$ does not depend on the stress levels. All units are tested until a pre-specified test time.

5. Two of the most common models used in quantitative accelerated life testing are the linear Weibull and lognormal models. The Weibull model is given by:


 * $$Y\sim SEV\left[ \mu (z)={{\gamma }_{0}}+{{\gamma }_{1}}z,\sigma \right]$$

where $$SEV$$  denotes the smallest extreme value distribution. The lognormal model is given by:


 * $$Y\sim Normal\left[ \mu (z)={{\gamma }_{0}}+{{\gamma }_{1}}z,\sigma \right]$$

That is, log life $$Y$$  is assumed to have either an  $$SEV$$  or a normal distribution with location parameter  $$\mu (z)$$, expressed as a linear function of  $$z$$  and constant scale parameter  $$\sigma $$.

Planning Criteria and Problem Formulation
Without loss of generality, a stress can be standardized as follows:


 * $$\xi =\frac{x-{{x}_{D}}}{{{x}_{H}}-{{x}_{D}}}$$


 * where:

•	 $${{x}_{D}}$$ is the use stress or design stress at which product life is of primary interest.

•	 $${{x}_{H}}$$ is the highest test stress level.

The values of $$x$$, $${{x}_{D}}$$ and $${{x}_{H}}$$ refer to the actual values of stress or to the transformed values if a transformation (e.g. the reciprocal transformation to obtain the Arrhenius relationship or the log transformation to obtain the power relationship) is used.
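As a small illustration, the standardization can be computed directly. The sketch below assumes the reciprocal (Arrhenius) transformation and illustrative temperatures:

```python
# Standardized stress xi = (x - x_D) / (x_H - x_D), assuming the
# reciprocal (Arrhenius) transformation x = 1/T; temperatures are
# illustrative values in kelvin.
def standardize(stress, design, highest, transform=lambda v: 1.0 / v):
    x, x_d, x_h = transform(stress), transform(design), transform(highest)
    return (x - x_d) / (x_h - x_d)

xi_design = standardize(293.0, 293.0, 383.0)  # design stress -> 0
xi_high = standardize(383.0, 293.0, 383.0)    # highest test stress -> 1
xi_mid = standardize(330.0, 293.0, 383.0)     # intermediate test level
```

Any intermediate test temperature maps to a value strictly between 0 and 1 on the standardized scale.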

Typically, there will be a limit on the highest stress level used in testing, because the distribution and life-stress relationship assumptions hold only over a limited range of the stress. The highest test stress, $${{x}_{H}},$$ can be determined from engineering knowledge, preliminary tests or experience with similar products. Higher stresses end the test faster, but might violate the distribution and life-stress relationship assumptions. By the definition above, $$\xi =0$$ at the design stress and $$\xi =1$$ at the highest test stress.

A common purpose of an accelerated life test experiment is to estimate a particular percentile, $${{T}_{p}}$$ (the time at which the unreliability equals $$p$$ ), in the lower tail of the failure distribution at use stress. A natural criterion is therefore to minimize $$Var({{\hat{T}}_{p}})$$, or equivalently $$Var({{\hat{Y}}_{p}})$$ where $${{Y}_{p}}=\ln ({{T}_{p}})$$. $$Var({{\hat{Y}}_{p}})$$ measures the precision of the $$p$$ quantile estimator; smaller values mean less variation in $${{\hat{Y}}_{p}}$$ across repeated samplings. Hence a good test plan should yield a relatively small, if not the minimum, $$Var({{\hat{Y}}_{p}})$$ value.

For the minimization problem, the decision variables are $${{\xi }_{i}}$$ (the standardized stress levels used in the test) and $${{\pi }_{i}}$$ (the percentages of the total test units allocated at those levels). The optimization problem can be formulated as follows.
 * Minimize:


 * $$Var({{\hat{Y}}_{p}})=f({{\xi }_{i}},{{\pi }_{i}})$$


 * Subject to:


 * $$0\le {{\pi }_{i}}\le 1,\text{ }i=1,2,...,n$$


 * and:


 * $$\underset{i=1}{\overset{n}{\mathop{\sum }}}\,{{\pi }_{i}}=1$$

An optimum accelerated test plan requires algorithms to minimize $$Var({{\hat{Y}}_{p}})$$. Planning tests often involves a compromise between efficiency and extrapolation: more failures yield better estimation efficiency, which requires higher stress levels but entails more extrapolation to the use condition. Choosing the best plan must take this trade-off into account. Test plans with more stress levels are more robust than plans with fewer stress levels because they rely less on the validity of the life-stress relationship assumption; however, test plans with fewer stress levels can be more convenient to run.

Test Plans for a Single Stress Type
This section presents a discussion of some of the most popular test plans used when only one stress factor is applied in the test. In order to design a test, the following information needs to be determined beforehand:

1. The design stress, $${{x}_{D}},$$ and the highest test stress, $${{x}_{H}}$$.

2. The test duration (or censoring time), $$\Upsilon $$.

3. The probability of failure at $${{x}_{D}}$$ $$(\xi =0)$$ by $$\Upsilon $$, denoted as $${{P}_{D}},$$ and at $${{x}_{H}}$$ $$(\xi =1)$$ by $$\Upsilon $$, denoted as $${{P}_{H}}$$.

Two Level Statistically Optimum Plan
The Two Level Statistically Optimum Plan is the most important plan, as almost all other plans are derived from it. For this plan, the highest stress, $${{x}_{H}}$$, and the design stress,  $${{x}_{D}}$$ , are pre-determined. The test is conducted at two levels. The high test level is fixed at $${{x}_{H}}$$. The low test stress, $${{x}_{L}}$$, together with the proportion of the test units allocated to the low level,  $${{\pi }_{L}}$$ , are calculated such that  $$Var({{\hat{Y}}_{p}})$$  is minimized. Meeker [36] presents more details about this test plan.
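The structure of this minimization can be sketched as follows. The exact criterion uses the large-sample (Fisher information) variance for censored data, which is beyond a short example; the code below substitutes a simplified proxy in which each expected failure contributes one observation of log life to a weighted regression, so it illustrates the search over $$({{\xi }_{L}},{{\pi }_{L}})$$ rather than reproducing ALTA's algorithm. Failure probabilities at intermediate stresses are interpolated on the SEV quantile scale from assumed values of $${{P}_{D}}$$ and $${{P}_{H}}$$.

```python
# Grid-search sketch of a Two Level Statistically Optimum Plan. The
# variance used here is a simplified proxy (weighted regression of log
# life on standardized stress, weights = expected failures), NOT the
# exact censored-data Fisher information criterion that ALTA minimizes.
import math

def p_fail(xi, p_d, p_h):
    """Failure probability at standardized stress xi, interpolated on
    the SEV quantile scale (exact for the Weibull linear model)."""
    z = lambda p: math.log(-math.log(1.0 - p))
    zx = z(p_d) + (z(p_h) - z(p_d)) * xi
    return 1.0 - math.exp(-math.exp(zx))

def var_yp_proxy(xi_l, pi_l, n, p_d, p_h):
    """Proxy for Var(Yp-hat) at the design stress (xi = 0): intercept
    variance of a weighted two-point regression, sigma^2 factored out."""
    pts = [(xi_l, n * pi_l * p_fail(xi_l, p_d, p_h)),
           (1.0, n * (1.0 - pi_l) * p_fail(1.0, p_d, p_h))]
    w = sum(wi for _, wi in pts)
    xbar = sum(x * wi for x, wi in pts) / w
    sxx = sum(wi * (x - xbar) ** 2 for x, wi in pts)
    return 1.0 / w + xbar ** 2 / sxx

def best_two_level_plan(n=40, p_d=0.001, p_h=0.999):
    grid = [i / 100.0 for i in range(1, 100)]
    return min((var_yp_proxy(xl, pl, n, p_d, p_h), xl, pl)
               for xl in grid for pl in grid)

v_opt, xi_l, pi_l = best_two_level_plan()  # low stress level, allocation
```

The qualitative result matches the text: the optimum fixes the high level at $$\xi =1$$ and trades off a lower $${{\xi }_{L}}$$ (less extrapolation) against enough expected failures at the low level.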

Three Level Best Standard Plan
In this plan, three stress levels are used. Let us use $${{\xi }_{L}},$$   $${{\xi }_{M}}$$  and  $${{\xi }_{H}}$$  to denote the three standardized stress levels from lowest to highest with:


 * $${{\xi }_{M}}=\frac{{{\xi }_{L}}+{{\xi }_{H}}}{2}=\frac{{{\xi }_{L}}+1}{2}$$

An equal number of units is tested at each level, $${{\pi }_{L}}={{\pi }_{M}}={{\pi }_{H}}=1/3$$. Therefore, the test plan is $$({{\xi }_{L}},{{\xi }_{M}},{{\xi }_{H}},{{\pi }_{L}},{{\pi }_{M}},{{\pi }_{H}})=({{\xi }_{L}},\tfrac{{{\xi }_{L}}+1}{2},1,1/3,1/3,1/3)$$ with $${{\xi }_{L}}$$ being the only decision variable. $${{\xi }_{L}}$$ is determined such that $$Var({{\hat{Y}}_{p}})$$ is minimized. Escobar [37] presents more details about this test plan.

Three Level Best Compromise Plan
In this plan, three stress levels are used, $$({{\xi }_{L}},\tfrac{{{\xi }_{L}}+1}{2},1)$$. $${{\pi }_{M}}$$, a value between 0 and 1, is pre-determined; $${{\pi }_{M}}=0.1$$ and $${{\pi }_{M}}=0.2$$ are commonly used, and values less than or equal to 0.2 can give good results. The test plan is $$({{\xi }_{L}},{{\xi }_{M}},{{\xi }_{H}},{{\pi }_{L}},{{\pi }_{M}},{{\pi }_{H}})=({{\xi }_{L}},\tfrac{{{\xi }_{L}}+1}{2},1,{{\pi }_{L}},{{\pi }_{M}},1-{{\pi }_{L}}-{{\pi }_{M}})$$ with $${{\xi }_{L}}$$ and $${{\pi }_{L}}$$ being the decision variables, determined such that $$Var({{\hat{Y}}_{p}})$$ is minimized. Meeker [38] presents more details about this test plan.

Three Level Best Equal Expected Number Failing Plan
In this plan, three stress levels are used, $$({{\xi }_{L}},\tfrac{{{\xi }_{L}}+1}{2},1)$$, with the constraint that an equal number of failures is expected at each stress level. The constraint can be written as:
 * $${{\pi }_{L}}{{P}_{L}}={{\pi }_{M}}{{P}_{M}}={{\pi }_{H}}{{P}_{H}}$$

where $${{P}_{L}},$$ $${{P}_{M}}$$ and $${{P}_{H}}$$ are the failure probabilities at the low, middle and high test levels, respectively. $${{P}_{L}}$$ and $${{P}_{M}}$$ can be expressed in terms of $${{\xi }_{L}}$$ and $${{\xi }_{M}}$$. Therefore, all variables can be expressed in terms of $${{\xi }_{L}},$$ which is chosen such that $$Var({{\hat{Y}}_{p}})$$ is minimized. Meeker [38] presents more details about this test plan.
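Given the three failure probabilities, the allocations follow directly from the constraint: $${{\pi }_{i}}{{P}_{i}}$$ constant together with $$\sum{{{\pi }_{i}}}=1$$ gives $${{\pi }_{i}}\propto 1/{{P}_{i}}$$. A short sketch with hypothetical probabilities:

```python
# Equal expected number failing: pi_i * P_i constant with sum(pi) = 1
# implies pi_i proportional to 1 / P_i. The failure probabilities at
# the low, middle and high levels below are hypothetical.
def equal_expected_failing(p_levels):
    inv = [1.0 / p for p in p_levels]
    total = sum(inv)
    return [v / total for v in inv]

pi_l, pi_m, pi_h = equal_expected_failing([0.10, 0.35, 0.90])
```

As expected, the level with the smallest failure probability (the low stress) receives the largest share of the test units.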

Three Level 4:2:1 Allocation Plan
In this plan, three stress levels are used, $$({{\xi }_{L}},\tfrac{{{\xi }_{L}}+1}{2},1)$$. The allocation of units at each level is pre-specified as $${{\pi }_{L}}:{{\pi }_{M}}:{{\pi }_{H}}=4:2:1$$. Therefore $${{\pi }_{L}}=4/7,$$ $${{\pi }_{M}}=2/7$$ and $${{\pi }_{H}}=1/7$$. $${{\xi }_{L}}$$ is the only decision variable, chosen such that $$Var({{\hat{Y}}_{p}})$$ is minimized. The optimum $${{\xi }_{L}}$$ can also be multiplied by a user-defined constant $$k$$ to bring the low stress level closer to the use stress than in the optimized plan, in order to allow better extrapolation to the use stress. Meeker [39] presents more details about this test plan.

Example
A reliability engineer is planning an accelerated test for a mechanical component. Torque is the only factor in the test. The purpose of the experiment is to estimate the $$B10$$ life (time equivalent to unreliability = 0.1) of the component. The reliability engineer wants to use a Two Level Statistically Optimum Plan because it would require fewer test chambers than a three-level test plan. 40 units are available for the test. The component's life is assumed to follow a Weibull distribution with $$\beta =3.5$$, and a power model is assumed for the life-stress relationship. The test is planned to last for 10,000 cycles. The engineer has estimated that there is a 0.0006 probability that a unit will fail by 10,000 cycles at the use stress level of 60Nm. The highest level allowed in the test is 120Nm, at which a unit is estimated to fail with a probability of 0.99999.

The following is the setup to generate the test plan in ALTA. The Two Level Statistically Optimum Plan is shown next: test 28.24 units at 95.39Nm and 11.76 units at 120Nm. The variance of the $$B10$$ estimate is $$Var({{\hat{T}}_{p}})=StdDev{{({{\hat{T}}_{p}})}^{2}}={{14380}^{2}}.$$

Test Plans for Two Stress Types
This section presents a discussion of some of the most popular test plans used when two stress factors are applied in the test and interactions are assumed not to exist between the factors. The location parameter $$\mu $$ can be expressed as a function of stresses $${{x}_{1}}$$ and $${{x}_{2}}$$ or as a function of their normalized stress levels as follows:


 * $$\mu ={{\gamma }_{0}}+{{\gamma }_{1}}{{\xi }_{1}}+{{\gamma }_{2}}{{\xi }_{2}}$$

In order to design a test, the following information needs to be determined beforehand:

1. The stress limits (the design stress, $${{x}_{D}},$$ and the highest test stress, $${{x}_{H}}$$ ) of each stress type.

2. The test time (or censoring time), $$\Upsilon $$.

3. The probability of failure by $$\Upsilon $$ at three stress combinations. Usually $${{P}_{DD}}$$, $${{P}_{HD}}$$ and $${{P}_{DH}}$$ are used ( $$P$$ indicates probability, and the subscript $$D$$ represents the design stress, while $$H$$ represents the highest stress level in the test).

For two-stress test planning, two methods are available: the Three Level Optimum Plan and the Five Level Best Compromise Plan.

Three Level Optimum Plan
The Three Level Optimum Plan is obtained by first finding a one-stress degenerate Two Level Statistically Optimum Plan and then splitting this degenerate plan into an appropriate two-stress plan. In a degenerate test plan, the test is conducted at any two (or more) stress level combinations on a line with slope $$s$$ that passes through the design point $${{\xi }_{D}}=\left( {{\xi }_{1D}},{{\xi }_{2D}} \right)$$. Therefore, in the case of a degenerate design, the location parameter relationship given above becomes:


 * $$\mu ={{\gamma }_{0}}+\left( {{\gamma }_{1}}+{{\gamma }_{2}}s \right){{\xi }_{1}}$$

Degenerate plans help reduce the two-stress problem to a one-stress problem. Although these degenerate plans do not allow the estimation of all the model parameters and would be an unlikely choice in practice, they are used as a starting point for developing more reasonable optimum and compromise test plans. After finding the one-stress degenerate Two Level Statistically Optimum Plan using the methodology explained in 13.4.3.1, the plan is split into an appropriate Three Level Optimum Plan.

Fig. 10 illustrates the concept of the Three Level Optimum Plan for a two-stress test. $${{\xi }_{D}}$$ is the (0,0) point. $${{C}_{O}}$$ and $${{C}_{1}}$$ are the levels of the one-stress degenerate Two Level Statistically Optimum Plan. $${{C}_{1}}$$, which corresponds to ( $${{\xi }_{1}}=1,{{\xi }_{2}}=1$$ ), is always used for this type of test and is the high stress level of the degenerate plan. $${{C}_{O}}$$ corresponds to the low stress level of the degenerate plan. A line, $$L$$, is drawn through $${{C}_{O}}$$ such that all the points along the line have the same probability of failure by the end of the test as the $${{C}_{O}}$$ stress level. $${{C}_{2}}$$ and $${{C}_{3}}$$ are then determined by obtaining the intersections of $$L$$ with the boundaries of the square.


 * $${{C}_{1}}$$, $${{C}_{2}}$$ and $${{C}_{3}}$$ represent the Three Level Optimum Plan. Readers are encouraged to review Escobar [37] for more details about this test plan.
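The geometric splitting step can be sketched directly: since every point with $${{\gamma }_{1}}{{\xi }_{1}}+{{\gamma }_{2}}{{\xi }_{2}}=c$$ has the same location parameter $$\mu $$, and hence the same probability of failure by the end of the test, $${{C}_{2}}$$ and $${{C}_{3}}$$ are the intersections of that line with the unit square. The $$\gamma $$ values and the location of $${{C}_{O}}$$ below are illustrative.

```python
# Splitting the degenerate plan: points with gamma1*xi1 + gamma2*xi2 = c
# share the same mu, hence the same end-of-test failure probability.
# C2 and C3 are that line's intersections with the unit square.
# The gamma values and C_O below are illustrative.
def square_intersections(gamma1, gamma2, c, eps=1e-9):
    pts = []
    for xi1 in (0.0, 1.0):                       # left and right edges
        xi2 = (c - gamma1 * xi1) / gamma2
        if -eps <= xi2 <= 1.0 + eps:
            pts.append((xi1, min(max(xi2, 0.0), 1.0)))
    for xi2 in (0.0, 1.0):                       # bottom and top edges
        xi1 = (c - gamma2 * xi2) / gamma1
        if eps < xi1 < 1.0 - eps:                # corners already covered
            pts.append((xi1, xi2))
    return pts

g1, g2 = -2.0, -3.0                  # log life decreases with each stress
c_o = g1 * 0.4 + g2 * 0.4            # line through C_O = (0.4, 0.4)
c2, c3 = square_intersections(g1, g2, c_o)
```

Both returned points lie on the boundary of the square and satisfy the equal-probability line equation.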

Five Level Best Compromise Plan
The Five Level Best Compromise Plan is obtained by first finding a degenerate one-stress Three Level Best Compromise Plan, using the methodology explained in 13.4.3.3 (with $${{\pi }_{M}}=0.2$$), and splitting this degenerate plan into an appropriate two-stress plan. In Fig. 11, $${{\xi }_{D}}$$  is the (0,0) point. $${{C}_{O1}},{{C}_{O2}}$$ and  $${{C}_{1}}$$  are the degenerate one-stress Three Level Best Compromise Plan. Points along the $${{L}_{1}}$$  line have the same probability of failure at the end of the  $${{C}_{O1}}$$  test plan, while points on  $${{L}_{2}}$$  have the same probability of failure at the end of the  $${{C}_{O2}}$$  test plan. $${{C}_{2}},{{C}_{3}}$$, $${{C}_{4}}$$  and  $${{C}_{5}}$$  are then determined by obtaining the intersections of  $${{L}_{1}}$$  and  $${{L}_{2}}$$  with the boundaries of the square.


 * $${{C}_{1}}$$, $${{C}_{2}},{{C}_{3}}$$, $${{C}_{4}}$$ and $${{C}_{5}}$$ represent the Five Level Best Compromise Plan. Readers are encouraged to review Escobar [37] for more details about this test plan.

Example
A reliability group in a semiconductor company is planning an accelerated test for an electronic device. 100 test units will be employed for the test. Temperature and voltage have been determined to be the main factors affecting the reliability of the device. The purpose of the experiment is to estimate the $$B10$$ life (time equivalent to unreliability = 0.1) of the devices. The reliability engineer wants to use a Three Level Optimum Plan because it would be easier to manage than a five-level test plan. The devices are assumed to follow a Weibull distribution with $$\beta =3$$. An Arrhenius model is assumed for the life-stress relationship associated with temperature, and a power model is assumed for the life-stress relationship associated with voltage.

The test is planned to last for 600 hours. The normal use conditions of the devices are 300K for temperature and 4V for voltage. The reliability group has estimated that there is a $${{P}_{DD}}=0.02$$ probability that a unit will fail by 600 hours while operating under typical use conditions. The highest levels allowed in the test are 360K for temperature and 10V for voltage. The probability of failure at 360K and 4V is estimated to be $${{P}_{HD}}=0.4$$, and the probability of failure at 300K and 10V is estimated to be $${{P}_{DH}}=0.9$$. The following is the setup to generate the test plan in ALTA.

The Three Level Optimum Plan is shown next. It requires that 19.4 units be tested at 360K and 10V, 32.68 units be tested at 357.09K and 4V, and 47.91 units be tested at 300K and 7.2V.

Test Plan Evaluation
In addition to assessing $$Var({{\hat{T}}_{p}})$$ (explained in 13.4.2), an accelerated test plan can also be evaluated based on three different criteria. These criteria can be assessed before conducting a test to decide whether a test plan is satisfactory or whether some modifications would be beneficial. In the Control Panel shown on the right hand side of Figs. 8 and 12, the analyst can solve for any one of three criteria (confidence level, bounds ratio or sample size) given the other two. The bounds ratio is defined as follows:


 * $$\text{Bounds Ratio}=\frac{\text{Two Sided Upper Bound on }{{T}_{p}}}{\text{Two Sided Lower Bound on }{{T}_{p}}}$$

This ratio is analogous to the ratio of the upper and lower confidence bounds that would be calculated on $${{T}_{p}}$$ if the test were conducted and the resulting life data analyzed. Let us use the example in 13.4.3.6 for illustration. If a 90% confidence level is desired and 40 units are to be used in the test, then the bounds ratio is calculated as 2.9463, as shown in Fig. 13.

If this calculated bounds ratio is unsatisfactory, the analyst can calculate the required number of units that would meet a certain bounds ratio criterion. For example, if a bounds ratio of 2 is desired, the required sample size is calculated as 97.21, as shown in Fig. 14.

If the sample size is kept at 40 units and a bounds ratio of 2 is desired, the equivalent confidence level we have in the test drops to 70.86%, as shown in Fig. 15.
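The relationships among these three criteria can be sketched under a normal-approximation assumption: with two-sided bounds on $$\ln {{T}_{p}}$$ of the form $${{\hat{Y}}_{p}}\pm z\sqrt{Var({{\hat{Y}}_{p}})}$$ and $$Var({{\hat{Y}}_{p}})$$ inversely proportional to the sample size, the bounds ratio and the required sample size follow directly. The per-unit variance factor below is hypothetical, so the numbers are illustrative rather than ALTA's exact output.

```python
# Bounds ratio and sample size under a normal-approximation sketch:
# two-sided bounds on ln(Tp) are Yp-hat +/- z * sqrt(Var(Yp-hat)),
# with Var(Yp-hat) = v_unit / n. The value of v_unit is hypothetical,
# so the resulting numbers are illustrative only.
import math
from scipy.stats import norm

def bounds_ratio(var_yp, conf_level):
    z = norm.ppf(0.5 + conf_level / 2.0)     # two-sided critical value
    return math.exp(2.0 * z * math.sqrt(var_yp))

def required_n(v_unit, target_ratio, conf_level):
    # Invert the ratio formula: n = v_unit * (2z / ln(ratio))^2
    z = norm.ppf(0.5 + conf_level / 2.0)
    return v_unit * (2.0 * z / math.log(target_ratio)) ** 2

v_unit = 4.3                                  # hypothetical per-unit variance
ratio_40 = bounds_ratio(v_unit / 40.0, 0.90)  # ratio with 40 units on test
n_for_2 = required_n(v_unit, 2.0, 0.90)       # units needed for a ratio of 2
```

This reproduces the qualitative behavior described above: tightening the target bounds ratio at a fixed confidence level increases the required sample size, and keeping the sample size fixed instead forces the achievable confidence level down.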