Degradation Analysis
Given that products are more frequently being designed with higher reliability and developed in a shorter amount of time, it is often not possible to test new designs to failure under normal operating conditions. In some cases, it is possible to infer the reliability behavior of unfailed test samples with only the accumulated test time information and assumptions about the distribution. However, this generally leads to a great deal of uncertainty in the results. Another option in this situation is the use of degradation analysis. Degradation analysis involves the measurement and extrapolation of degradation or performance data that can be directly related to the presumed failure of the product in question. Many failure mechanisms can be directly linked to the degradation of part of the product, and degradation analysis allows the user to extrapolate to an assumed failure time based on the measurements of degradation or performance over time.

In some cases, it is possible to directly measure the degradation over time, as with the wear of brake pads or the propagation of crack size. In other cases, direct measurement of degradation might not be possible without invasive or destructive measurement techniques that would directly affect the subsequent performance of the product. In such cases, the degradation of the product can be estimated through the measurement of certain performance characteristics, such as using resistance to gauge the degradation of a dielectric material. In either case, however, it is necessary to define a level of degradation or performance at which a failure is said to have occurred. With this failure level defined, it is a relatively simple matter to use basic mathematical models to extrapolate the performance measurements over time to the point where the failure is said to occur. Once these extrapolated failure times have been determined, it is merely a matter of analyzing them like conventional time-to-failure data.

Once the level of failure (or the degradation level that would constitute a failure) is defined, the degradation for multiple units over time needs to be measured. As with conventional reliability data, the amount of certainty in the results is directly related to the number of units being tested. The performance or degradation of these units needs to be measured over time, either continuously or at predetermined intervals. Once this information has been recorded, the next task is to extrapolate the performance measurements to the defined failure level in order to estimate the failure time. Weibull++ allows the user to perform such analysis using a linear, exponential, power, logarithmic, Gompertz or Lloyd-Lipow model to perform this extrapolation. These models have the following forms:

$$\begin{matrix} \text{Linear:} & y=a\cdot x+b \\ \text{Exponential:} & y=b\cdot {{e}^{a\cdot x}} \\ \text{Power:} & y=b\cdot {{x}^{a}} \\ \text{Logarithmic:} & y=a\cdot \ln (x)+b \\ \text{Gompertz:} & y=a\cdot {{b}^{{{c}^{x}}}} \\ \text{Lloyd-Lipow:} & y=a-\frac{b}{x} \\ \end{matrix}$$

where $$y$$ represents the performance, $$x$$ represents time, and $$a$$, $$b$$ and $$c$$ are model parameters to be solved for.
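As a minimal sketch of how such model parameters might be estimated, the example below fits the linear and exponential forms to a small set of hypothetical degradation measurements (the times and crack-size values are invented for illustration). The linear model is fitted by ordinary least squares; the exponential model $$y=b\cdot {{e}^{a\cdot x}}$$ is reduced to a linear fit by taking logarithms, since $$\ln (y)=a\cdot x+\ln (b)$$.

```python
import numpy as np

# Hypothetical degradation measurements for a single test unit:
# crack size (mm) recorded at fixed inspection times (hours).
x = np.array([100.0, 200.0, 300.0, 400.0, 500.0])  # time
y = np.array([1.2, 1.9, 2.7, 3.3, 4.1])            # measured degradation

# Linear model y = a*x + b, fitted by ordinary least squares.
a_lin, b_lin = np.polyfit(x, y, 1)

# Exponential model y = b*exp(a*x): taking logs gives
# ln(y) = a*x + ln(b), which is again a linear least-squares fit.
a_exp, ln_b = np.polyfit(x, np.log(y), 1)
b_exp = np.exp(ln_b)

print(f"linear:      y = {a_lin:.4f}*x + {b_lin:.4f}")
print(f"exponential: y = {b_exp:.4f}*exp({a_exp:.6f}*x)")
```

The same log-transform trick works for the power model (regress $$\ln (y)$$ on $$\ln (x)$$); the Gompertz and Lloyd-Lipow models generally require an iterative nonlinear least-squares routine.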

Once the model parameters $${{a}_{i}}$$, $${{b}_{i}}$$ (and $${{c}_{i}}$$) are estimated for each sample $$i$$, a time, $${{x}_{i}}$$, can be extrapolated that corresponds to the defined level of failure $$y$$. The computed $${{x}_{i}}$$ values can now be used as our times-to-failure for subsequent life data analysis. As with any sort of extrapolation, one must be careful not to extrapolate too far beyond the actual range of data in order to avoid large inaccuracies (modeling errors).
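The extrapolation step can be sketched as follows. Assuming each unit has been fitted with the linear model, solving $$y=a\cdot x+b$$ for $$x$$ at the defined failure level gives $${{x}_{i}}=(y-{{b}_{i}})/{{a}_{i}}$$. The per-unit parameters and the failure threshold below are hypothetical values chosen for illustration.

```python
# Hypothetical per-unit linear fits (a_i, b_i) obtained from the
# degradation measurements of three test units.
fits = [(0.0072, 0.48), (0.0065, 0.52), (0.0081, 0.40)]
y_failure = 5.0  # degradation level defined as failure

# Solve y_failure = a_i*x + b_i for each unit's extrapolated failure time.
times_to_failure = [(y_failure - b) / a for a, b in fits]

for i, t in enumerate(times_to_failure, start=1):
    print(f"unit {i}: extrapolated failure at x = {t:.1f}")

# These x_i values would then be treated like conventional
# time-to-failure data, e.g., by fitting a life distribution.
```

Note that all three extrapolated times lie beyond the last measurement, which is exactly the situation where the caution about extrapolating too far applies.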