Degradation Data Analysis

Introduction
Given that products are more frequently being designed with higher reliability and developed in a shorter amount of time, it is often not possible to test new designs to failure under normal operating conditions. In some cases, it is possible to infer the reliability behavior of unfailed test samples with only the accumulated test time information and assumptions about the distribution. However, this generally leads to a great deal of uncertainty in the results. Another option in this situation is the use of degradation analysis. Degradation analysis involves the measurement of performance data that can be directly related to the presumed failure of the product in question. Many failure mechanisms can be directly linked to the degradation of part of the product, and degradation analysis allows the analyst to extrapolate to an assumed failure time based on the measurements of degradation over time.

In some cases, it is possible to directly measure the degradation over time, as with the wear of brake pads or with the propagation of crack size. In other cases, direct measurement of degradation might not be possible without invasive or destructive measurement techniques that would directly affect the subsequent performance of the product. In such cases, the degradation of the product can be estimated through the measurement of certain performance characteristics, such as using resistance to gauge the degradation of a dielectric material. In either case, however, it is necessary to be able to define a level of degradation or performance at which a failure is said to have occurred. With this failure level defined, it is a relatively simple matter to use basic mathematical models to extrapolate the measurements over time to the point where the failure is said to occur. Once these have been determined, it is merely a matter of analyzing the extrapolated failure times in the same manner as conventional time-to-failure data.

Once the level of failure (or the degradation level that would constitute a failure) is defined, the degradation for multiple units over time needs to be measured. As with conventional reliability data, the amount of certainty in the results is directly related to the number of units being tested. The degradation of these units needs to be measured over time, either continuously or at predetermined intervals.

Degradation Models
Once the degradation information has been recorded, the next task is to extrapolate the measurements to the defined failure level in order to estimate the failure time. Weibull++ allows the user to perform such extrapolation using a linear, exponential, power, logarithmic, Gompertz or Lloyd-Lipow model. These models have the following forms:

$$\begin{matrix} Linear\ \ & y=a\cdot x+b  \\ Exponential & y=b\cdot {{e}^{a\cdot x}} \\ Power &  y=b\cdot {{x}^{a}} \\ Logarithmic & y=a\cdot \ln (x)+b \\ Gompertz & y=a\cdot {{b}^{{{c}^{x}}}} \\ Lloyd\text{-}Lipow & y=a-\frac{b}{x} \\ \end{matrix}$$

where $$y$$  represents the performance,  $$x$$  represents time, and  $$a,$$   $$b$$  and  $$c$$  are model parameters to be solved for.

Once the model parameters $${{a}_{i}}$$,  $${{b}_{i}}$$  (and  $${{c}_{i}}$$ ) are estimated for each sample  $$i$$ , a time,  $${{x}_{i}}$$ , can be extrapolated, which corresponds to the defined level of failure  $$y$$. The computed $${{x}_{i}}$$  values can now be used as our times-to-failure  for subsequent life data analysis. As with any sort of extrapolation, one must be careful not to extrapolate too far beyond the actual range of data in order to avoid large inaccuracies (modeling errors).
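Weibull++ performs this fitting and extrapolation internally; as a rough sketch of the idea (not the tool's implementation), the following fits the linear model to each sample by least squares and solves for the time at which the fitted line reaches the failure level. The inspection times and wear values are hypothetical.

```python
import numpy as np

def extrapolate_failure_times(samples, y_fail):
    """Fit a linear degradation model y = a*x + b to each sample's
    measurements and extrapolate the time at which y reaches y_fail."""
    times_to_failure = []
    for x, y in samples:
        a, b = np.polyfit(x, y, 1)          # least-squares fit of y = a*x + b
        times_to_failure.append((y_fail - b) / a)
    return times_to_failure

# Three hypothetical units inspected at the same times; degradation
# (e.g., wear in mm) is extrapolated to an assumed failure level of 10 mm.
samples = [
    (np.array([100, 200, 300, 400]), np.array([2.1, 4.0, 6.2, 7.9])),
    (np.array([100, 200, 300, 400]), np.array([1.8, 3.5, 5.4, 7.1])),
    (np.array([100, 200, 300, 400]), np.array([2.4, 4.6, 6.9, 9.0])),
]
print(extrapolate_failure_times(samples, y_fail=10.0))
```

The three extrapolated times can then be treated as times-to-failure for a conventional life data analysis.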

Example 1:

Using Extrapolated Intervals
The parameters in a degradation model are estimated from the available degradation data. If the data set is large, the uncertainty in the estimated parameters will be small; otherwise, it will be large. Since the failure time for a test unit is predicted based on the estimated model, we sometimes would like to see how the parameter uncertainty affects the failure time prediction. Let’s use the exponential model as an example. Assume the critical degradation value is $${{y}_{crit}}$$. The predicted failure time will be:

$$ \hat{x}=\frac{\ln ({{y}_{crit}})-\ln (\hat{b})}{\hat{a}}$$
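Since the exponential model is linear in logs, $$\ln (y)=\hat{a}\cdot x+\ln (\hat{b})$$, the parameters can be estimated by ordinary least squares on the log-transformed measurements and the prediction computed directly. A small sketch with illustrative data:

```python
import numpy as np

# Illustrative degradation measurements for one unit (exponential growth).
x = np.array([50.0, 100.0, 150.0, 200.0])
y = np.array([1.2, 1.5, 1.8, 2.3])
y_crit = 5.0  # assumed critical degradation level

# Fit ln(y) = a*x + ln(b): the exponential model y = b*exp(a*x) in log form.
a_hat, ln_b_hat = np.polyfit(x, np.log(y), 1)

# Predicted failure time: x_hat = (ln(y_crit) - ln(b_hat)) / a_hat
x_hat = (np.log(y_crit) - ln_b_hat) / a_hat
print(x_hat)
```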

The variance of the predicted failure time will be:

$$Var(\hat{x})={{\left( \frac{\partial x}{\partial a} \right)}^{2}}Var(\hat{a})+{{\left( \frac{\partial x}{\partial b} \right)}^{2}}Var(\hat{b})+2\left( \frac{\partial x}{\partial a} \right)\left( \frac{\partial x}{\partial b} \right)Cov(\hat{a},\hat{b})$$

The variance and covariance of the model parameters are calculated using least squares estimation. The details of the calculation are not given here.

The 2-sided upper and lower bounds for the predicted failure time, at a confidence level of $$1-\alpha $$, are:

$${{x}_{U}}=\hat{x}+{{K}_{1-\alpha /2}}\sqrt{Var(\hat{x})}$$

$${{x}_{L}}=\hat{x}-{{K}_{1-\alpha /2}}\sqrt{Var(\hat{x})}$$

where $${{K}_{1-\alpha /2}}$$ is the $$1-\alpha /2$$ percentile of the standard normal distribution.

In Weibull++, the default confidence level is 90%.
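The delta-method calculation above can be sketched as follows. This is not Weibull++'s implementation: for convenience the fit is done on $$(\hat{a},\ln \hat{b})$$ rather than $$(\hat{a},\hat{b})$$, which gives an equivalent variance propagation, and the measurement values are illustrative.

```python
import numpy as np
from statistics import NormalDist

# Illustrative degradation measurements for one unit (exponential model).
x = np.array([50.0, 100.0, 150.0, 200.0, 250.0])
y = np.array([1.2, 1.5, 1.8, 2.3, 2.9])
y_crit = 5.0   # assumed critical degradation level
alpha = 0.10   # 90% confidence, the Weibull++ default

# Least-squares fit of ln(y) = a*x + ln(b); cov=True returns the
# covariance matrix of the estimated coefficients [a, ln(b)].
(a_hat, ln_b_hat), cov = np.polyfit(x, np.log(y), 1, cov=True)

# Predicted failure time.
x_hat = (np.log(y_crit) - ln_b_hat) / a_hat

# Delta method: gradient of x_hat with respect to (a, ln(b)).
grad = np.array([-x_hat / a_hat, -1.0 / a_hat])
var_x = grad @ cov @ grad

# Two-sided bounds using the standard normal percentile K_{1-alpha/2}.
K = NormalDist().inv_cdf(1 - alpha / 2)
x_upper = x_hat + K * np.sqrt(var_x)
x_lower = x_hat - K * np.sqrt(var_x)
print(x_lower, x_hat, x_upper)
```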

Example 2: